Effect of different simulated altitudes on repeat-sprint performance in team-sport athletes.
Goods, Paul S R; Dawson, Brian T; Landers, Grant J; Gore, Christopher J; Peeling, Peter
2014-09-01
This study aimed to assess the impact of 3 heights of simulated altitude exposure on repeat-sprint performance in team-sport athletes. Ten trained male team-sport athletes completed 3 sets of repeated sprints (9 × 4 s) on a nonmotorized treadmill at sea level and at simulated altitudes of 2000, 3000, and 4000 m. Participants completed 4 trials in a random order over 4 wk, with mean power output (MPO), peak power output (PPO), blood lactate concentration (BLa), and oxygen saturation (SaO2) recorded after each set. Each increase in simulated altitude corresponded with a significant decrease in SaO2. Total work across all sets was highest at sea level and correspondingly lower at each successive altitude (P < .05; sea level > 2000 m > 3000 m > 4000 m). In the first set, MPO was reduced only at 4000 m, but for subsequent sets, decreases in MPO were observed at all altitudes (P < .05; 2000 m < 3000 m < 4000 m). PPO was maintained in all sets except for set 3 at 4000 m (P < .05; vs sea level and 2000 m). BLa levels were highest at 4000 m and significantly greater (P < .05) than at sea level after all sets. These results suggest that "higher may not be better," as a simulated altitude of 4000 m may potentially blunt absolute training quality. Therefore, it is recommended that a moderate simulated altitude (2000-3000 m) be employed when implementing intermittent hypoxic repeat-sprint training for team-sport athletes.
A simple mass-conserved level set method for simulation of multiphase flows
NASA Astrophysics Data System (ADS)
Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.
2018-04-01
In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle with the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee the overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface and keeping the mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.
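As a point of reference for the approach described above, the following states the general form of a mass-correcting source term in the level set transport equation; this is a generic formulation written here for illustration, not the exact expression derived in the paper.

\[
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = S(t), \qquad
S(t) \ \text{chosen so that} \ \frac{d}{dt}\int_{\Omega} H(\phi)\, dV = 0,
\]

where H is the Heaviside function marking the reference phase, so S acts as a source when the captured mass falls below its initial value and as a sink when it exceeds it.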
A Simulation of Readiness-Based Sparing Policies
2017-06-01
variant of a greedy heuristic algorithm to set stock levels and estimate overall WS availability. Our discrete event simulation is then used to test the...available in the optimization tools. Subject terms: readiness-based sparing, discrete event simulation, optimization, multi-indenture...
A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture
NASA Astrophysics Data System (ADS)
Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.
2017-12-01
A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach considers the application of local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective to study the evolution of statistical representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Li, Dongsheng; Xu, Wei
2015-04-01
In atom probe tomography (APT), accurate reconstruction of the spatial positions of field evaporated ions from measured detector patterns depends upon a correct understanding of the dynamic tip shape evolution and evaporation laws of component atoms. Artifacts in APT reconstructions of heterogeneous materials can be attributed to the assumption of homogeneous evaporation of all the elements in the material in addition to the assumption of a steady state hemispherical dynamic tip shape evolution. A level set method based specimen shape evolution model is developed in this study to simulate the evaporation of synthetic layered-structured APT tips. The simulation results of the shape evolution by the level set model qualitatively agree with the finite element method and the literature data using the finite difference method. The asymmetric evolving shape predicted by the level set model demonstrates the complex evaporation behavior of a heterogeneous tip, and the interface curvature can potentially lead to artifacts in the APT reconstruction of such materials. Compared with other APT simulation methods, the new method provides smoother interface representation with the aid of the intrinsic sub-grid accuracy. Two evaporation models (linear and exponential evaporation laws) are implemented in the level set simulations and the effect of evaporation laws on the tip shape evolution is also presented.
Two-way coupled SPH and particle level set fluid simulation.
Losasso, Frank; Talton, Jerry; Kwatra, Nipun; Fedkiw, Ronald
2008-01-01
Grid-based methods have difficulty resolving features on or below the scale of the underlying grid. Although adaptive methods (e.g. RLE, octrees) can alleviate this to some degree, separate techniques are still required for simulating small-scale phenomena such as spray and foam, especially since these more diffuse materials typically behave quite differently than their denser counterparts. In this paper, we propose a two-way coupled simulation framework that uses the particle level set method to efficiently model dense liquid volumes and a smoothed particle hydrodynamics (SPH) method to simulate diffuse regions such as sprays. Our novel SPH method allows us to simulate both dense and diffuse water volumes, fully incorporates the particles that are automatically generated by the particle level set method in under-resolved regions, and allows for two way mixing between dense SPH volumes and grid-based liquid representations.
Radio frequency tank eigenmode sensor for propellant quantity gauging
NASA Technical Reports Server (NTRS)
Zimmerli, Gregory A. (Inventor)
2013-01-01
A method for measuring the quantity of fluid in a tank may include the steps of selecting a match between a measured set of electromagnetic eigenfrequencies and a simulated plurality of sets of electromagnetic eigenfrequencies using a matching algorithm, wherein the match is one simulated set of electromagnetic eigenfrequencies from the simulated plurality of sets of electromagnetic eigenfrequencies, and determining the fill level of the tank based upon the match.
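A minimal sketch of the eigenfrequency-matching step described in the patent abstract, assuming a precomputed table of simulated eigenfrequency sets indexed by fill level; all numbers, array shapes, and the nearest-set criterion below are invented for illustration.

```python
import numpy as np

measured = np.array([412.3, 566.1, 701.8])           # measured eigenfrequencies (MHz), illustrative
fill_levels = np.linspace(0.0, 1.0, 101)              # candidate fill levels
# one row of simulated eigenfrequencies per candidate fill level (placeholder values)
simulated = np.random.uniform(400.0, 750.0, size=(101, 3))

# matching algorithm: choose the simulated set closest to the measured set
errors = np.linalg.norm(simulated - measured, axis=1)
best = int(np.argmin(errors))
print(f"estimated fill level: {fill_levels[best]:.2f}")
```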
The Predictive Value of Ultrasound Learning Curves Across Simulated and Clinical Settings.
Madsen, Mette E; Nørgaard, Lone N; Tabor, Ann; Konge, Lars; Ringsted, Charlotte; Tolsgaard, Martin G
2017-01-01
The aim of the study was to explore whether learning curves on a virtual-reality (VR) sonographic simulator can be used to predict subsequent learning curves on a physical mannequin and learning curves during clinical training. Twenty midwives completed a simulation-based training program in transvaginal sonography. The training was conducted on a VR simulator as well as on a physical mannequin. A subgroup of 6 participants underwent subsequent clinical training. During each of the 3 steps, the participants' performance was assessed using instruments with established validity evidence, and they advanced to the next level only after attaining predefined levels of performance. The number of repetitions and time needed to achieve predefined performance levels were recorded along with the performance scores in each setting. Finally, the outcomes were correlated across settings. A good correlation was found between time needed to achieve predefined performance levels on the VR simulator and the physical mannequin (Pearson correlation coefficient .78; P < .001). Performance scores on the VR simulator correlated well to the clinical performance scores (Pearson correlation coefficient .81; P = .049). No significant correlations were found between numbers of attempts needed to reach proficiency across the 3 different settings. A post hoc analysis found that the 50% fastest trainees at reaching proficiency during simulation-based training received higher clinical performance scores compared to trainees with scores placing them among the 50% slowest (P = .025). Performances during simulation-based sonography training may predict performance in related tasks and subsequent clinical learning curves. © 2016 by the American Institute of Ultrasound in Medicine.
Couto, Thomaz Bittencourt; Kerrey, Benjamin T; Taylor, Regina G; FitzGerald, Michael; Geis, Gary L
2015-04-01
Pediatric emergencies require effective teamwork. These skills are developed and demonstrated in actual emergencies and in simulated environments, including simulation centers (in center) and the real care environment (in situ). Our aims were to compare teamwork performance across these settings and to identify perceived educational strengths and weaknesses between simulated settings. We hypothesized that teamwork performance in actual emergencies and in situ simulations would be higher than for in-center simulations. A retrospective, video-based assessment of teamwork was performed in an academic, pediatric level 1 trauma center, using the Team Emergency Assessment Measure (TEAM) tool (range, 0-44) among emergency department providers (physicians, nurses, respiratory therapists, paramedics, patient care assistants, and pharmacists). A survey-based, cross-sectional assessment was conducted to determine provider perceptions regarding simulation training. One hundred thirty-two videos, 44 from each setting, were reviewed. Mean total TEAM scores were similar and high in all settings (31.2 actual, 31.1 in situ, and 32.3 in-center, P = 0.39). Of 236 providers, 154 (65%) responded to the survey. For teamwork training, in situ simulation was considered more realistic (59% vs. 10%) and more effective (45% vs. 15%) than in-center simulation. In a video-based study in an academic pediatric institution, ratings of teamwork were relatively high among actual resuscitations and 2 simulation settings, substantiating the influence of simulation-based training on instilling a culture of communication and teamwork. On the basis of survey results, providers favored the in situ setting for teamwork training and suggested an expansion of our existing in situ program.
Modification of Obstetric Emergency Simulation Scenarios for Realism in a Home-Birth Setting.
Komorowski, Janelle; Andrighetti, Tia; Benton, Melissa
2017-01-01
Clinical competency and clear communication are essential for intrapartum care providers who encounter high-stakes, low-frequency emergencies. The challenge for these providers is to maintain infrequently used skills. The challenge is even more significant for midwives who manage births at home and who, due to low practice volume and low-risk clientele, may rarely encounter an emergency. In addition, access to team simulation may be limited for home-birth midwives. This project modified existing validated obstetric simulation scenarios for a home-birth setting. Twelve certified professional midwives (CPMs) in active home-birth practice participated in shoulder dystocia and postpartum hemorrhage simulations. The simulations were staged to resemble home-birth settings, supplies, and personnel. Fidelity (realism) of the simulations was assessed with the Simulation Design Scale, and satisfaction and self-confidence were assessed with the Student Satisfaction and Self-Confidence in Learning Scale. Both utilized a 5-point Likert scale, with higher scores suggesting greater levels of fidelity, participant satisfaction, and self-confidence. Simulation Design Scale scores indicated participants agreed fidelity was achieved for the home-birth setting, while scores on the Student Satisfaction and Self-Confidence in Learning indicated high levels of participant satisfaction and self-confidence. If offered without modification, simulation scenarios designed for use in hospitals may lose fidelity for home-birth midwives, particularly in the environmental and psychological components. Simulation is standard of care in most settings, an excellent vehicle for maintaining skills, and some evidence suggests it results in improved perinatal outcomes. Additional study is needed in this area to support home-birth providers in maintaining skills. This pilot study suggests that simulation scenarios intended for hospital use can be successfully adapted to the home-birth setting. © 2016 by the American College of Nurse-Midwives.
High Fidelity Simulation Experience in Emergency settings: doctors and nurses satisfaction levels.
Calamassi, Diletta; Nannelli, Tiziana; Guazzini, Andrea; Rasero, Laura; Bambi, Stefano
2016-11-22
Many studies describe High Fidelity Simulation (HFS) as an experience that is well accepted by learners. This study explored doctors' and nurses' satisfaction levels during HFS sessions and examined associations with the setting of the simulation events (simulation center or in-the-field simulation). Moreover, we studied the correlation between HFS experience satisfaction levels and the socio-demographic features of the participants. This was a mixed-methods study using the Satisfaction of High-Fidelity Simulation Experience (SESAF) questionnaire administered through an online survey. The SESAF was administered to doctors and nurses who had previously taken part in HFS sessions in a simulation center or in the field. Quantitative data were analyzed with descriptive and inferential statistics; qualitative data were analyzed using the Giorgi method. A total of 143 doctors and 94 nurses completed the questionnaire. The satisfaction level was high: on a 10-point scale, the mean score was 8.17 (SD ±1.924). There was no significant difference between doctors' and nurses' satisfaction levels for almost all SESAF factors. No correlation was found between gender and HFS experience satisfaction levels. Knowledge of the theoretical aspects of the simulated case before the HFS experience was related to higher general satisfaction (r=0.166, p=0.05), higher perceived effectiveness of debriefing (r=0.143, p=0.05), and higher professional impact (r=0.143, p=0.05). Respondents who had performed HFS in the field were more satisfied than the others and reported higher "professional impact", "clinical reasoning and self-efficacy", and "team dynamics" scores (p < 0.01). Narrative data suggest that HFS facilitators should improve their behaviors during the debriefing. Healthcare managers should extend HFS to all kinds of healthcare workers in real clinical settings. There is a need to improve and implement the communication competences of HFS facilitators.
A discontinuous Galerkin conservative level set scheme for interface capturing in multiphase flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owkes, Mark, E-mail: mfc86@cornell.edu; Desjardins, Olivier
2013-09-15
The accurate conservative level set (ACLS) method of Desjardins et al. [O. Desjardins, V. Moureau, H. Pitsch, An accurate conservative level set/ghost fluid method for simulating turbulent atomization, J. Comput. Phys. 227 (18) (2008) 8395–8416] is extended by using a discontinuous Galerkin (DG) discretization. DG allows for the scheme to have an arbitrarily high order of accuracy with the smallest possible computational stencil resulting in an accurate method with good parallel scaling. This work includes a DG implementation of the level set transport equation, which moves the level set with the flow field velocity, and a DG implementation of the reinitialization equation, which is used to maintain the shape of the level set profile to promote good mass conservation. A near second order converging interface curvature is obtained by following a height function methodology (common amongst volume of fluid schemes) in the context of the conservative level set. Various numerical experiments are conducted to test the properties of the method and show excellent results, even on coarse meshes. The tests include Zalesak’s disk, two-dimensional deformation of a circle, time evolution of a standing wave, and a study of the Kelvin–Helmholtz instability. Finally, this novel methodology is employed to simulate the break-up of a turbulent liquid jet.
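For context, the conservative level set framework being discretized transports a hyperbolic-tangent profile ψ and periodically restores its shape. The standard transport and reinitialization equations (the generic forms following the cited Desjardins et al. formulation, not the DG-specific details of this paper) are:

\[
\frac{\partial \psi}{\partial t} + \nabla\cdot(\mathbf{u}\,\psi) = 0, \qquad
\frac{\partial \psi}{\partial \tau} + \nabla\cdot\big(\psi(1-\psi)\,\mathbf{n}\big) = \nabla\cdot\big(\varepsilon\,(\nabla\psi\cdot\mathbf{n})\,\mathbf{n}\big),
\]

where n is the interface normal, τ is a pseudo-time, ε sets the profile thickness, and the interface is taken as the ψ = 0.5 isosurface.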
Vectorized algorithms for spiking neural network simulation.
Brette, Romain; Goodman, Dan F M
2011-06-01
High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
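To illustrate the vector-based style the abstract refers to, here is a minimal NumPy sketch of one simulated second of a leaky integrate-and-fire network in which integration, thresholding, reset, and spike propagation are all expressed as array operations; the parameter values are arbitrary and only loosely follow Brian's conventions.

```python
import numpy as np

N, dt = 1000, 1e-4                          # number of neurons, time step (s)
tau, v_rest, v_th, v_reset = 10e-3, 0.0, 1.0, 0.0
v = np.random.rand(N)                        # membrane potentials
w = np.random.rand(N, N) * 0.005             # synaptic weights (dense for brevity)

spike_count = 0
for step in range(int(1.0 / dt)):            # simulate one second
    drive = 0.02 * np.random.rand(N)         # external input, arbitrary units
    v += dt / tau * (v_rest - v) + drive     # integrate all neurons at once
    spiked = v >= v_th                       # boolean vector of spiking neurons
    v[spiked] = v_reset                      # vectorized reset
    v += w[:, spiked].sum(axis=1)            # propagate spikes with one matrix operation
    spike_count += spiked.sum()

print(f"total spikes in 1 s: {spike_count}")
```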
Pisano, E D; Zong, S; Hemminger, B M; DeLuca, M; Johnston, R E; Muller, K; Braeuning, M P; Pizer, S M
1998-11-01
The purpose of this project was to determine whether Contrast Limited Adaptive Histogram Equalization (CLAHE) improves detection of simulated spiculations in dense mammograms. Lines simulating the appearance of spiculations, a common marker of malignancy when visualized with masses, were embedded in dense mammograms digitized at 50 micron pixels, 12 bits deep. Film images with no CLAHE applied were compared to film images with nine different combinations of clip levels and region sizes applied. A simulated spiculation was embedded in a background of dense breast tissue, with the orientation of the spiculation varied. The key variables involved in each trial included the orientation of the spiculation, contrast level of the spiculation and the CLAHE settings applied to the image. Combining the 10 CLAHE conditions, 4 contrast levels and 4 orientations gave 160 combinations. The trials were constructed by pairing 160 combinations of key variables with 40 backgrounds. Twenty student observers were asked to detect the orientation of the spiculation in the image. There was a statistically significant improvement in detection performance for spiculations with CLAHE over unenhanced images when the region size was set at 32 with a clip level of 2, and when the region size was set at 32 with a clip level of 4. The selected CLAHE settings should be tested in the clinic with digital mammograms to determine whether detection of spiculations associated with masses detected at mammography can be improved.
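For readers who want to experiment with the clip-level and region-size parameters mentioned, the sketch below shows how they map onto a widely used open-source CLAHE implementation; OpenCV specifies the contextual regions as a tile grid rather than a pixel size, so the correspondence to the study's 32-pixel regions and clip levels of 2 and 4 is only approximate, and the file names are hypothetical.

```python
import cv2

# hypothetical digitized mammogram loaded as a single-channel 16-bit image
img = cv2.imread("mammogram_region.png", cv2.IMREAD_UNCHANGED)

# clipLimit corresponds to the clip level; tileGridSize sets the layout of
# contextual regions (8x8 tiles here, purely for illustration)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

cv2.imwrite("mammogram_region_clahe.png", enhanced)
```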
NASA Astrophysics Data System (ADS)
Pendota, Premchand
Many physical phenomena and industrial applications involve multiphase fluid flows and hence it is of high importance to be able to simulate various aspects of these flows accurately. The Dynamic Contact Angles (DCA) and the contact lines at the wall boundaries are a couple of such important aspects. In the past few decades, many mathematical models were developed for predicting the contact angles of the interface with the wall boundary under various flow conditions. These models are used to incorporate the physics of DCA and contact line motion in numerical simulations using various interface capturing/tracking techniques. In the current thesis, a simple approach to incorporate the static and dynamic contact angle boundary conditions using the level set method is developed and implemented in the multiphase CFD codes LIT (Level set Interface Tracking) (Herrmann (2008)) and NGA (flow solver) (Desjardins et al. (2008)). Various DCA models and associated boundary conditions are reviewed. In addition, numerical aspects such as the occurrence of a stress singularity at the contact lines and grid convergence of the macroscopic interface shape are dealt with in the context of the level set approach.
Water-balance model of a wetland on the Fort Berthold Reservation, North Dakota
Vining, Kevin C.
2007-01-01
A numerical water-balance model was developed to simulate the responses of a wetland on the Fort Berthold Reservation, North Dakota, to historical and possible extreme hydrological inputs and to changes in hydrological inputs that might occur if a proposed refinery is built on the reservation. Results from model simulations indicated that the study wetland would likely contain water during most historical and extreme-precipitation events with the addition of maximum potential discharges of 0.6 acre-foot per day from proposed refinery holding ponds. Extended periods with little precipitation and above-normal temperatures may result in the wetland becoming nearly dry, especially if potential holding-pond discharges are near zero. Daily simulations based on the historical-enhanced climate data set for May and June 2005, which included holding-pond discharges of 0.6 acre-foot per day, indicated that the study-wetland maximum simulated water volume was about 16.2 acre-feet and the maximum simulated water level was about 1.2 feet at the outlet culvert. Daily simulations based on the extreme summer data set, created to represent an extreme event with excessive June precipitation and holding-pond discharges of 0.6 acre-foot per day, indicated that the study-wetland maximum simulated water volume was about 38.6 acre-feet and the maximum simulated water level was about 2.6 feet at the outlet culvert. A simulation performed using the extreme winter climate data set and an outlet culvert blocked with snow and ice resulted in the greatest simulated wetland water volume of about 132 acre-feet and the greatest simulated water level, which would have been about 6.2 feet at the outlet culvert, but water was not likely to overflow an adjacent highway.
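The simulated volumes and levels above come from a daily accounting of inflows and outflows; a minimal, hypothetical sketch of such a water-balance update (the real model's terms and coefficients are not reproduced here) could look like the following.

```python
def daily_water_balance(volume_af, precip_af, pond_discharge_af,
                        evapotranspiration_af, outflow_af):
    """Advance wetland storage (acre-feet) by one day."""
    volume_af += precip_af + pond_discharge_af        # inputs
    volume_af -= evapotranspiration_af + outflow_af   # losses through ET and the outlet culvert
    return max(volume_af, 0.0)                        # storage cannot go negative

volume = 10.0                                         # illustrative initial storage, acre-feet
for day in range(30):
    volume = daily_water_balance(volume, precip_af=0.2,
                                 pond_discharge_af=0.6,   # maximum potential holding-pond discharge
                                 evapotranspiration_af=0.3,
                                 outflow_af=0.1)
```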
Ultrasound-Guided Regional Anesthesia Simulation Training: A Systematic Review.
Chen, Xiao Xu; Trivedi, Vatsal; AlSaflan, AbdulHadi A; Todd, Suzanne Clare; Tricco, Andrea C; McCartney, Colin J L; Boet, Sylvain
Ultrasound-guided regional anesthesia (UGRA) has become the criterion standard of regional anesthesia practice. Ultrasound-guided regional anesthesia teaching programs often use simulation, and guidelines have been published to help guide UGRA education. This systematic review aimed to examine the effectiveness of simulation-based education for the acquisition and maintenance of competence in UGRA. Studies identified in MEDLINE, EMBASE, CINAHL, Cochrane Central Register of Controlled Trials, and ERIC were included if they assessed simulation-based UGRA teaching with outcomes measured at Kirkpatrick level 2 (knowledge and skills), 3 (transfer of learning to the workplace), or 4 (patient outcomes). Two authors independently reviewed all identified references for eligibility, abstracted data, and appraised quality. After screening 176 citations and 45 full-text articles, 12 studies were included. Simulation-enhanced training improved knowledge acquisition (Kirkpatrick level 2) when compared with nonsimulation training. Seven studies measuring skill acquisition (Kirkpatrick level 2) found that simulation-enhanced UGRA training was significantly more effective than alternative teaching methods or no intervention. One study measuring transfer of learning into the clinical setting (Kirkpatrick level 3) found no difference between simulation-enhanced UGRA training and non-simulation-based training. However, this study was discontinued early because of technical challenges. Two studies examined patient outcomes (Kirkpatrick level 4), and one of these found that simulation-based UGRA training improved patient outcomes compared with didactic teaching. Ultrasound-guided regional anesthesia knowledge and skills significantly improved with simulation training. The acquired UGRA skills may be transferred to the clinical setting; however, further studies are required to confirm these changes translate to improved patient outcomes.
Uncertainty Analysis of Sonic Boom Levels Measured in a Simulator at NASA Langley
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Ely, Jeffry W.
2012-01-01
A sonic boom simulator has been constructed at NASA Langley Research Center for testing the human response to sonic booms heard indoors. Like all measured quantities, sonic boom levels in the simulator are subject to systematic and random errors. To quantify these errors, and their net influence on the measurement result, a formal uncertainty analysis is conducted. Knowledge of the measurement uncertainty, or range of values attributable to the quantity being measured, enables reliable comparisons among measurements at different locations in the simulator as well as comparisons with field data or laboratory data from other simulators. The analysis reported here accounts for acoustic excitation from two sets of loudspeakers: one loudspeaker set at the facility exterior that reproduces the exterior sonic boom waveform and a second set of interior loudspeakers for reproducing indoor rattle sounds. The analysis also addresses the effect of pressure fluctuations generated when exterior doors of the building housing the simulator are opened. An uncertainty budget is assembled to document each uncertainty component, its sensitivity coefficient, and the combined standard uncertainty. The latter quantity will be reported alongside measurement results in future research reports to indicate data reliability.
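The combined standard uncertainty referred to above is conventionally formed by root-sum-squaring each component weighted by its sensitivity coefficient; the sketch below illustrates that bookkeeping with invented component names and values, not the actual budget from the study.

```python
import math

# (component, sensitivity coefficient, standard uncertainty) -- illustrative values only
components = [
    ("exterior loudspeaker reproduction",  1.0, 0.3),
    ("interior rattle loudspeakers",       0.8, 0.2),
    ("door-induced pressure fluctuations", 1.0, 0.4),
]

combined = math.sqrt(sum((c * u) ** 2 for _, c, u in components))
print(f"combined standard uncertainty: {combined:.2f} dB")
```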
ERIC Educational Resources Information Center
Yu, Bing; Hong, Guanglei
2012-01-01
This study uses simulation examples representing three types of treatment assignment mechanisms in data generation (the random intercept and slopes setting, the random intercept setting, and a third setting with a cluster-level treatment and an individual-level outcome) in order to determine optimal procedures for reducing bias and improving…
Judd, Belinda Karyn; Alison, Jennifer Ailsey; Waters, Donna; Gordon, Christopher James
2016-08-01
Simulation-based clinical education often aims to replicate varying aspects of real clinical practice. It is unknown whether learners' stress levels in simulation are comparable with those in clinical practice. The current study compared acute stress markers during simulation-based clinical education with those experienced in situ in a hospital-based environment. Undergraduate physiotherapy students' (n = 33) acute stress responses [visual analog scales of stress and anxiety, continuous heart rate (HR), and saliva cortisol] were assessed during matched patient encounters in simulation-based laboratories using standardized patients and during hospital clinical placements with real patients. Group differences in stress variables were compared using repeated measures analysis of variance for 3 time points (before, during the patient encounter, and after) at 2 settings (simulation and hospital). Visual analog scale stress and anxiety as well as HR increased significantly from baseline levels before the encounter in both settings (all P < 0.05). Stress and anxiety were significantly higher in simulation [mean (SD), 45 (22) and 44 (25) mm; P = 0.003] compared with hospital [mean (SD), 31 (21) and 26 (20) mm; P = 0.002]. The mean (SD) HR during the simulation patient encounter was 90 (16) beats per minute and was not different compared with hospital [mean (SD), 87 (15) beats per minute; P = 0.89]. Changes in salivary cortisol before and after patient encounters were not statistically different between settings [mean (SD) simulation, 1.5 (2.4) nmol/L; hospital, 2.5 (2.9) nmol/L; P = 0.70]. Participants experienced stress on clinical placements, irrespective of the clinical education setting (simulation vs. hospital). This study revealed that psychological stress and anxiety were greater during simulation compared with hospital settings; however, physiological stress responses (HR and cortisol) were comparable. These results indicate that psychological stress may be heightened in simulation, and health professional educators need to consider the impact of this on learners in simulation-based clinical education. New learners in their clinical education program may benefit from a less stressful simulation environment, before a gradual increase in stress demands as they approach clinical practice.
Development of the TeamOBS-PPH - targeting clinical performance in postpartum hemorrhage.
Brogaard, Lise; Hvidman, Lone; Hinshaw, Kim; Kierkegaard, Ole; Manser, Tanja; Musaeus, Peter; Arafeh, Julie; Daniels, Kay I; Judy, Amy E; Uldbjerg, Niels
2018-06-01
This study aimed to develop a valid and reliable TeamOBS-PPH tool for assessing clinical performance in the management of postpartum hemorrhage (PPH). The tool was evaluated using video-recordings of teams managing PPH in both real-life and simulated settings. A Delphi panel consisting of 12 obstetricians from the UK, Norway, Sweden, Iceland, and Denmark achieved consensus on (i) the elements to include in the assessment tool, (ii) the weighting of each element, and (iii) the final tool. The validity and reliability were evaluated according to Cook and Beckman. (Level 1) Four raters scored four video-recordings of in situ simulations of PPH. (Level 2) Two raters scored 85 video-recordings of real-life teams managing patients with PPH ≥1000 mL in two Danish hospitals. (Level 3) Two raters scored 15 video-recordings of in situ simulations of PPH from a US hospital. The tool was designed with scores from 0 to 100. (Level 1) Teams of novices had a median score of 54 (95% CI 48-60), whereas experienced teams had a median score of 75 (95% CI 71-79; p < 0.001). (Level 2) The intra-rater [intra-class correlation (ICC) = 0.96] and inter-rater (ICC = 0.83) agreements for real-life PPH were strong. The tool was applicable in all cases: atony, retained placenta, and lacerations. (Level 3) The tool was easily adapted to in situ simulation settings in the USA (ICC = 0.86). The TeamOBS-PPH tool appears to be valid and reliable for assessing clinical performance in real-life and simulated settings. The tool will be shared as the free TeamOBS App. © 2018 Nordic Federation of Societies of Obstetrics and Gynecology.
ERIC Educational Resources Information Center
Bakermans-Kranenburg, Marian J.; Alink, Lenneke R. A.; Biro, Szilvia; Voorthuis, Alexandra; van IJzendoorn, Marinus H.
2015-01-01
Observation of parental sensitivity in a standard procedure, in which caregivers are faced with the same level of infant demand, enables the comparison of sensitivity "between" caregivers. We developed an ecologically valid standardized setting using an infant simulator with interactive features, the Leiden Infant Simulator Sensitivity…
ERIC Educational Resources Information Center
Woodfield, Brian F.; Andrus, Merritt B.; Waddoups, Gregory L.; Moore, Melissa S.; Swan, Richard; Allen, Rob; Bodily, Greg; Andersen, Tricia; Miller, Jordan; Simmons, Bryon; Stanger, Richard
2005-01-01
A set of sophisticated and realistic laboratory simulations is created for use in freshman- and sophomore-level chemistry classes and laboratories called 'Virtual ChemLab'. A detailed assessment of student responses is provided and the simulation's pedagogical utility is described using the organic simulation.
ERIC Educational Resources Information Center
Dieckmann, Peter; Friis, Susanne Molin; Lippert, Anne; Ostergaard, Doris
2012-01-01
Introduction: This study describes (a) process goals, (b) success factors, and (c) barriers for optimizing simulation-based learning environments within the simulation setting model developed by Dieckmann. Methods: Seven simulation educators of different experience levels were interviewed using the Critical Incident Technique. Results: (a) The…
A comparative study of two hazard handling training methods for novice drivers.
Wang, Y B; Zhang, W; Salvendy, G
2010-10-01
The effectiveness of two hazard perception training methods, simulation-based error training (SET) and video-based guided error training (VGET), for novice drivers' hazard handling performance was tested, compared, and analyzed. Thirty-two novice drivers participated in the hazard perception training. Half of the participants were trained using SET by making errors and/or experiencing accidents while driving with a desktop simulator. The other half were trained using VGET by watching prerecorded video clips of errors and accidents that were made by other people. The two groups had exposure to equal numbers of errors for each training scenario. All the participants were tested and evaluated for hazard handling on a full cockpit driving simulator one week after training. Hazard handling performance and hazard response were measured in this transfer test. Both hazard handling performance scores and hazard response distances were significantly better for the SET group than the VGET group. Furthermore, the SET group had more metacognitive activities and intrinsic motivation. SET also seemed more effective in changing participants' confidence, but the result did not reach the significance level. SET exhibited a higher training effectiveness of hazard response and handling than VGET in the simulated transfer test. The superiority of SET might benefit from the higher levels of metacognition and intrinsic motivation during training, which was observed in the experiment. Future research should be conducted to assess whether the advantages of error training are still effective under real road conditions.
Systematic review of skills transfer after surgical simulation-based training.
Dawe, S R; Pena, G N; Windsor, J A; Broeders, J A J L; Cregan, P C; Hewett, P J; Maddern, G J
2014-08-01
Simulation-based training assumes that skills are directly transferable to the patient-based setting, but few studies have correlated simulated performance with surgical performance. A systematic search strategy was undertaken to find studies published since the last systematic review, published in 2007. Inclusion of articles was determined using a predetermined protocol, independent assessment by two reviewers and a final consensus decision. Studies that reported on the use of surgical simulation-based training and assessed the transferability of the acquired skills to a patient-based setting were included. Twenty-seven randomized clinical trials and seven non-randomized comparative studies were included. Fourteen studies investigated laparoscopic procedures, 13 endoscopic procedures and seven other procedures. These studies provided strong evidence that participants who reached proficiency in simulation-based training performed better in the patient-based setting than their counterparts who did not have simulation-based training. Simulation-based training was equally as effective as patient-based training for colonoscopy, laparoscopic camera navigation and endoscopic sinus surgery in the patient-based setting. These studies strengthen the evidence that simulation-based training, as part of a structured programme and incorporating predetermined proficiency levels, results in skills transfer to the operative setting. © 2014 BJS Society Ltd. Published by John Wiley & Sons Ltd.
Simulation-based planning for theater air warfare
NASA Astrophysics Data System (ADS)
Popken, Douglas A.; Cox, Louis A., Jr.
2004-08-01
Planning for Theatre Air Warfare can be represented as a hierarchy of decisions. At the top level, surviving airframes must be assigned to roles (e.g., Air Defense, Counter Air, Close Air Support, and AAF Suppression) in each time period in response to changing enemy air defense capabilities, remaining targets, and roles of opposing aircraft. At the middle level, aircraft are allocated to specific targets to support their assigned roles. At the lowest level, routing and engagement decisions are made for individual missions. The decisions at each level form a set of time-sequenced Courses of Action taken by opposing forces. This paper introduces a set of simulation-based optimization heuristics operating within this planning hierarchy to optimize allocations of aircraft. The algorithms estimate distributions for stochastic outcomes of the pairs of Red/Blue decisions. Rather than using traditional stochastic dynamic programming to determine optimal strategies, we use an innovative combination of heuristics, simulation-optimization, and mathematical programming. Blue decisions are guided by a stochastic hill-climbing search algorithm while Red decisions are found by optimizing over a continuous representation of the decision space. Stochastic outcomes are then provided by fast, Lanchester-type attrition simulations. This paper summarizes preliminary results from top and middle level models.
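A toy sketch of the outer loop just described: a stochastic hill-climb over Blue's role allocation, scored by a crude Lanchester-type attrition exchange. Every coefficient, role name, and the scoring rule are invented for illustration and are not taken from the paper.

```python
import random

ROLES = ["air_defense", "counter_air", "close_air_support", "suppression"]

def attrition_score(alloc, blue=100.0, red=100.0, steps=10):
    """Crude Lanchester-style exchange; returns surviving Blue minus surviving Red."""
    b, r = blue, red
    for _ in range(steps):
        offensive = alloc["counter_air"] + alloc["suppression"]
        b_next = b - 0.010 * r * (1.0 - alloc["air_defense"])   # Red attrits Blue
        r_next = r - 0.012 * b * offensive                      # Blue attrits Red
        b, r = max(b_next, 0.0), max(r_next, 0.0)
    return b - r

def perturb(alloc, sigma=0.1):
    """Randomly nudge the allocation and renormalize the role fractions."""
    nudged = {k: max(v + random.gauss(0.0, sigma), 1e-6) for k, v in alloc.items()}
    total = sum(nudged.values())
    return {k: v / total for k, v in nudged.items()}

best = {role: 1.0 / len(ROLES) for role in ROLES}
best_score = attrition_score(best)
for _ in range(500):                       # stochastic hill-climbing search
    candidate = perturb(best)
    score = attrition_score(candidate)
    if score > best_score:                 # keep only improving moves
        best, best_score = candidate, score
```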
Etch Profile Simulation Using Level Set Methods
NASA Technical Reports Server (NTRS)
Hwang, Helen H.; Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
Etching and deposition of materials are critical steps in semiconductor processing for device manufacturing. Both etching and deposition may have isotropic and anisotropic components, due to directional sputtering and redeposition of materials, for example. Previous attempts at modeling profile evolution have used so-called "string theory" to simulate the moving solid-gas interface between the semiconductor and the plasma. One complication of this method is that extensive de-looping schemes are required at the profile corners. We will present a 2D profile evolution simulation using level set theory to model the surface. (1) By embedding the location of the interface in a field variable, the need for de-looping schemes is eliminated and profile corners are more accurately modeled. This level set profile evolution model will calculate both isotropic and anisotropic etch and deposition rates of a substrate in low pressure (10s mTorr) plasmas, considering the incident ion energy angular distribution functions and neutral fluxes. We will present etching profiles of Si substrates in Ar/Cl2 discharges for various incident ion energies and trench geometries.
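The surface-embedding idea can be summarized by the generic level set evolution equation (written here in standard form; the specific flux and rate models of the simulation are not reproduced):

\[
\frac{\partial \phi}{\partial t} + V(\mathbf{x})\,\lvert \nabla \phi \rvert = 0,
\]

where the zero level set of φ is the solid-gas interface and V combines the isotropic (neutral-driven) and anisotropic (ion-driven) etch or deposition rates; because φ stays single-valued even where an explicit front would fold over itself, no de-looping at profile corners is needed.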
Level-Set Methodology on Adaptive Octree Grids
NASA Astrophysics Data System (ADS)
Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime
2017-11-01
Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and capable of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.
iBIOMES Lite: Summarizing Biomolecular Simulation Data in Limited Settings
2015-01-01
As the amount of data generated by biomolecular simulations dramatically increases, new tools need to be developed to help manage this data at the individual investigator or small research group level. In this paper, we introduce iBIOMES Lite, a lightweight tool for biomolecular simulation data indexing and summarization. The main goal of iBIOMES Lite is to provide a simple interface to summarize computational experiments in a setting where the user might have limited privileges and limited access to IT resources. A command-line interface allows the user to summarize, publish, and search local simulation data sets. Published data sets are accessible via static hypertext markup language (HTML) pages that summarize the simulation protocols and also display data analysis graphically. The publication process is customized via extensible markup language (XML) descriptors while the HTML summary template is customized through extensible stylesheet language (XSL). iBIOMES Lite was tested on different platforms and at several national computing centers using various data sets generated through classical and quantum molecular dynamics, quantum chemistry, and QM/MM. The associated parsers currently support AMBER, GROMACS, Gaussian, and NWChem data set publication. The code is available at https://github.com/jcvthibault/ibiomes. PMID:24830957
A Low Cost Simulation System to Demonstrate Pilot Induced Oscillation Phenomenon
NASA Technical Reports Server (NTRS)
Ali, Syed Firasat
1997-01-01
A flight simulation system with graphics and software on Silicon Graphics computer workstations has been installed in the Flight Vehicle Design Laboratory at Tuskegee University. The system has F-15E flight simulation software from NASA Dryden which uses the graphics of SGI flight simulation demos. On the system, thus installed, a study of pilot induced oscillations is planned for future work. Preliminary research is conducted by obtaining two sets of straight level flights with a pilot in the loop. In one set of flights no additional delay is used between the stick input and the appearance of the airplane response on the computer monitor. In another set of flights, a 500 ms additional delay is used. The flight data is analyzed to find cross correlations between deflections of control surfaces and response of the airplane. The pilot dynamics features identified from the cross correlations of straight level flights are discussed in this report. The correlations presented here will serve as reference material for the corresponding correlations in a future study of pitch attitude tracking tasks involving pilot induced oscillations.
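A minimal sketch of the cross-correlation analysis mentioned above, with hypothetical file names and a single control channel; the actual data layout used in the study is not specified here.

```python
import numpy as np

# stick input and pitch response sampled at the same rate (illustrative files)
stick = np.loadtxt("stick_input.txt")
pitch = np.loadtxt("pitch_response.txt")

# normalize so the correlation is comparable across flights
stick = (stick - stick.mean()) / stick.std()
pitch = (pitch - pitch.mean()) / pitch.std()

xcorr = np.correlate(pitch, stick, mode="full") / len(stick)
lags = np.arange(-len(stick) + 1, len(stick))
lag_at_peak = lags[np.argmax(xcorr)]   # effective delay between stick input and response
print(f"peak correlation at lag {lag_at_peak} samples")
```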
Vessel Segmentation and Blood Flow Simulation Using Level-Sets and Embedded Boundary Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deschamps, T; Schwartz, P; Trebotich, D
In this article we address the problem of blood flow simulation in realistic vascular objects. The anatomical surfaces are extracted by means of Level-Set methods that accurately model the complex and varying surfaces of pathological objects such as aneurysms and stenoses. The surfaces obtained are defined at the sub-pixel level where they intersect the Cartesian grid of the image domain. It is therefore straightforward to construct embedded boundary representations of these objects on the same grid, for which recent work has enabled discretization of the Navier-Stokes equations for incompressible fluids. While most classical techniques require construction of a structured mesh that approximates the surface in order to extrapolate a 3D finite-element gridding of the whole volume, our method directly simulates the blood-flow inside the extracted surface without losing any complicated details and without building additional grids.
Circuit design tool. User's manual, revision 2
NASA Technical Reports Server (NTRS)
Miyake, Keith M.; Smith, Donald E.
1992-01-01
The CAM chip design was produced in a UNIX software environment using a design tool that supports definition of digital electronic modules, composition of these modules into higher level circuits, and event-driven simulation of these circuits. Our design tool provides an interface whose goals include straightforward but flexible primitive module definition and circuit composition, efficient simulation, and a debugging environment that facilitates design verification and alteration. The tool provides a set of primitive modules which can be composed into higher level circuits. Each module is a C-language subroutine that uses a set of interface protocols understood by the design tool. Primitives can be altered simply by recoding their C-code image; in addition new primitives can be added allowing higher level circuits to be described in C-code rather than as a composition of primitive modules--this feature can greatly enhance the speed of simulation.
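The module/composition/event-driven structure described above is language-agnostic; the sketch below restates the same idea in Python with primitive modules as callables scheduled on an event queue. The names are illustrative only, since the actual tool defines its primitives as C subroutines.

```python
import heapq
import itertools

class Simulator:
    """Minimal event-driven simulator: (time, order, module, args) entries on a heap."""
    def __init__(self):
        self.time = 0
        self._queue = []
        self._order = itertools.count()      # tie-breaker for events at equal times

    def schedule(self, delay, module, *args):
        heapq.heappush(self._queue, (self.time + delay, next(self._order), module, args))

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.time, _, module, args = heapq.heappop(self._queue)
            module(self, *args)              # a module may schedule further events

def inverter(sim, net, value):
    """Primitive module: report NOT(value) driven onto `net`."""
    print(f"t={sim.time}: {net} <= {int(not value)}")

sim = Simulator()
sim.schedule(0, inverter, "out", 1)
sim.schedule(5, inverter, "out", 0)
sim.run(until=10)
```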
Boet, Sylvain; Bould, M Dylan; Fung, Lillia; Qosa, Haytham; Perrier, Laure; Tavares, Walter; Reeves, Scott; Tricco, Andrea C
2014-06-01
Simulation-based learning is increasingly used by healthcare professionals as a safe method to learn and practice non-technical skills, such as communication and leadership, required for effective crisis resource management (CRM). This systematic review was conducted to gain a better understanding of the impact of simulation-based CRM teaching on transfer of learning to the workplace and subsequent changes in patient outcomes. Studies on CRM, crisis management, crew resource management, teamwork, and simulation published up to September 2012 were searched in MEDLINE(®), EMBASE™, CINAHL, Cochrane Central Register of Controlled Trials, and ERIC. All studies that used simulation-based CRM teaching with outcomes measured at Kirkpatrick Level 3 (transfer of learning to the workplace) or 4 (patient outcome) were included. Studies measuring only learners' reactions or simple learning (Kirkpatrick Level 1 or 2, respectively) were excluded. Two authors independently reviewed all identified titles and abstracts for eligibility. Nine articles were identified as meeting the inclusion criteria. Four studies measured transfer of simulation-based CRM learning into the clinical setting (Kirkpatrick Level 3). In three of these studies, simulation-enhanced CRM training was found significantly more effective than no intervention or didactic teaching. Five studies measured patient outcomes (Kirkpatrick Level 4). Only one of these studies found that simulation-based CRM training made a clearly significant impact on patient mortality. Based on a small number of studies, this systematic review found that CRM skills learned at the simulation centre are transferred to clinical settings, and the acquired CRM skills may translate to improved patient outcomes, including a decrease in mortality.
A variational approach to multi-phase motion of gas, liquid and solid based on the level set method
NASA Astrophysics Data System (ADS)
Yokoi, Kensuke
2009-07-01
We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In the Eulerian framework, to simulate interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, in general two functions overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolved the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.
Simulation of void formation in interconnect lines
NASA Astrophysics Data System (ADS)
Sheikholeslami, Alireza; Heitzinger, Clemens; Puchner, Helmut; Badrieh, Fuad; Selberherr, Siegfried
2003-04-01
The predictive simulation of the formation of voids in interconnect lines is important for improving capacitance and timing in current memory cells. The cells considered are used in wireless applications such as cell phones, pagers, radios, handheld games, and GPS systems. In backend processes for memory cells, ILD (interlayer dielectric) materials and processes result in void formation during gap fill. This approach lowers the overall k-value of a given metal layer and is economically advantageous. The effect of the voids on the overall capacitive load is tremendous. In order to simulate the shape and positions of the voids and thus the overall capacitance, the topography simulator ELSA (Enhanced Level Set Applications) has been developed which consists of three modules, a level set module, a radiosity module, and a surface reaction module. The deposition process considered is deposition of silicon nitride. Test structures of interconnect lines of memory cells were fabricated and several SEM images thereof were used to validate the corresponding simulations.
Biosafety Level 3 Recon Training
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickens, Brian Scott; Chavez, Melanie Ann; Heimer, Donovan J.
The Biosafety Level 3 Recon training is a 3D virtual tool developed for the Counter WMD Analysis Cell (CWAC) and the Asymmetric Warfare Group (AWG) by the Application Modeling and Development Team within the NEN-3 International Threat Reduction Group. The training simulates a situation where friendly forces have secured from hostile forces a suspected bioweapons development laboratory. The trainee is a squad member tasked to investigate the facility, locate laboratories within the facility, and identify hazards to entrants and the surrounding area. Before beginning the 3D simulation, the trainee must select the appropriate MOPP level for entering the facility. The items in the simulation, including inside and outside the bioweapon facility, are items that are commonly used by scientists in Biosafety Level (BSL) laboratories. Each item has clickable red tags that, when activated, give the trainee a brief description of the item and a controllable turn-around view. The descriptions also contain information about potential hazards the item can present. Trainees must find all tagged items in order to complete the simulation, but can also reference descriptions and turn-around views of the items in a glossary menu. Training is intended to familiarize individuals who have little or no biology or chemistry background with technical equipment used in BSL laboratories. The revised edition of this simulation (Biosafety Level 3 Virtual Lab) changes the trainee into an investigator instead of a military combatant. Many doors now require a virtual badge swipe to open. Airlock doors may come in sets such that the open door must be closed before the next door in the set can be opened. A user interface was added so that the instructor can edit the information about the items (the brief descriptions mentioned above) using the simulation software instead of the previous method of manually entering the material in XML settings files. Facility labels, such as "No Parking" and "Men's room", were changed from Korean into English. No other changes were made.
Factors influencing delivered mean airway pressure during nasal CPAP with the RAM cannula.
Gerdes, Jeffrey S; Sivieri, Emidio M; Abbasi, Soraya
2016-01-01
To measure mean airway pressure (MAP) delivered through the RAM Cannula® when used with a ventilator in CPAP mode as a function of percent nares occlusion in a simulated nasal interface/test lung model and to compare the results to MAPs using a nasal continuous positive airway pressure (NCPAP) interface with nares fully occluded. An artificial airway model was connected to a spontaneous breathing lung model in which MAP was measured at set NCPAP levels between 4 and 8 cmH2O provided by a Dräger Evita XL® ventilator and delivered through three sizes of RAM cannulae. Measurements were performed with varying leakage at the nasal interface by decreasing occlusion from 100% to 29%, half-way prong insertion, and simulated mouth leakage. Comparison measurements were made using the Dräger BabyFlow® NCPAP interface with a full nasal seal. With the simulated mouth closed, the Dräger interface delivered MAPs within 0.5 cmH2O of set CPAP levels. For the RAM cannula, with 60-80% nares occlusion, overall delivered MAPs were 60 ± 17% less than set CPAP levels (P < 0.001). Further, MAP decreased progressively with decreasing percent nares occlusion. The simulated open mouth condition resulted in significantly lower MAPs (<1.7 cmH2O). The one-half prong insertion depth condition, with closed mouth, yielded MAPs approximately 35 ± 9% less than full insertion pressures (P < 0.001). In our bench tests, the RAM interface connected to a ventilator in NCPAP mode failed to deliver set CPAP levels when applied using the manufacturer recommended 60-80% nares occlusion, even with closed mouth and full nasal prong insertion conditions. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.
2018-04-01
The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
A cis-regulatory logic simulator.
Zeigler, Robert D; Gertz, Jason; Cohen, Barak A
2007-07-27
A major goal of computational studies of gene regulation is to accurately predict the expression of genes based on the cis-regulatory content of their promoters. The development of computational methods to decode the interactions among cis-regulatory elements has been slow, in part, because it is difficult to know, without extensive experimental validation, whether a particular method identifies the correct cis-regulatory interactions that underlie a given set of expression data. There is an urgent need for test expression data in which the interactions among cis-regulatory sites that produce the data are known. The ability to rapidly generate such data sets would facilitate the development and comparison of computational methods that predict gene expression patterns from promoter sequence. We developed a gene expression simulator which generates expression data using user-defined interactions between cis-regulatory sites. The simulator can incorporate additive, cooperative, competitive, and synergistic interactions between regulatory elements. Constraints on the spacing, distance, and orientation of regulatory elements and their interactions may also be defined and Gaussian noise can be added to the expression values. The simulator allows for a data transformation that simulates the sigmoid shape of expression levels from real promoters. We found good agreement between sets of simulated promoters and predicted regulatory modules from real expression data. We present several data sets that may be useful for testing new methodologies for predicting gene expression from promoter sequence. We developed a flexible gene expression simulator that rapidly generates large numbers of simulated promoters and their corresponding transcriptional output based on specified interactions between cis-regulatory sites. When appropriate rule sets are used, the data generated by our simulator faithfully reproduces experimentally derived data sets. We anticipate that using simulated gene expression data sets will facilitate the direct comparison of computational strategies to predict gene expression from promoter sequence. The source code is available online and as additional material. The test sets are available as additional material.
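A toy sketch of the kind of rule such a simulator evaluates: expression computed from cis-regulatory site occupancies with additive, cooperative, and repressive terms, Gaussian noise, and a sigmoid transformation. All site names and coefficients are invented for illustration; the actual rule grammar is defined by the tool.

```python
import math
import random

def expression(occupancy, noise_sd=0.05):
    """occupancy: dict mapping cis-regulatory site -> binding probability in [0, 1]."""
    level = 0.4 * occupancy["activator_A"] + 0.3 * occupancy["activator_B"]   # additive
    level += 0.5 * occupancy["activator_A"] * occupancy["activator_B"]        # cooperative
    level -= 0.6 * occupancy["repressor_R"]                                   # repression
    level += random.gauss(0.0, noise_sd)                                      # Gaussian noise
    return 1.0 / (1.0 + math.exp(-8.0 * (level - 0.5)))                       # sigmoid output

print(expression({"activator_A": 0.9, "activator_B": 0.7, "repressor_R": 0.2}))
```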
Briët, Olivier Jt; Penny, Melissa A
2013-11-07
Stagnating funds for malaria control have spurred interest in the question of how to sustain the gains of recent successes with long-lasting insecticidal nets (LLINs) and improved case management (CM). This simulation study examined the malaria transmission and disease dynamics in scenarios with sustained LLINs and CM interventions and tried to determine optimal LLIN distribution rates. The effects of abruptly halting LLIN distribution were also examined. Dynamic simulations of malaria in humans and mosquitoes were run on the OpenMalaria platform, using stochastic individual-based simulation models. LLINs were distributed in a range of transmission settings, with varying CM coverage levels. In the short term, LLINs were beneficial over the entire transmission spectrum, reducing both transmission and disease burden. In the long term, repeated distributions sustainably reduced transmission in all settings. However, because of the resulting reduction in acquired immunity in the population, the malaria disease burden, after initially being reduced, gradually increased and eventually stabilized at a new level. This new level was higher than the pre-intervention level in previously high transmission settings, if the relationship between transmission and disease burden has a maximum at intermediate transmission levels. This result could lead one to conclude that sustained LLIN distribution might not be cost-effective in high transmission settings in the long term. However, improved CM rendered LLINs more cost-effective in higher transmission settings than in those without improved CM, and the majority of the African population lives in areas where CM and LLINs are sustainably combined. The effects of changes in LLIN distribution rate on cost-effectiveness were relatively small compared to the effects of changes in transmission setting and CM. Abruptly halting LLIN distribution led to temporary morbidity peaks, which were particularly large in low to intermediate transmission settings. This study reaffirms the importance of context-specific intervention planning. Intervention planning must include combinations of malaria vector control and CM, and must consider both the pre-intervention transmission level and the intervention history to account for the loss of immunity and the potential for rebounds in disease burden.
The Application of Voltage Transformer Simulator in Electrical Test Training
NASA Astrophysics Data System (ADS)
Li, Nan; Zhang, Jun; Chai, Ziqi; Wang, Jingpeng; Yang, Baowei
2018-02-01
The voltage transformer test is an important means of monitoring its operating state. The accuracy and reliability of the test data are directly related to the test skill level of the operator. However, improper operation during transformer test training creates risks of damage to the test instruments, damage to the equipment under test, and electric shock to the operator. In this paper, a simulation device for a voltage transformer is set up, and a simulation model is built for the most common 500kV capacitor voltage transformer (CVT). The simulation model can realize several CVT test items by combining a teaching guidance platform, a simulation instrument, a complete set of system software, and auxiliary equipment in Changchun. Many successful applications show that the simulation device has good practical value and wide application prospects.
Judd, Belinda K; Scanlan, Justin N; Alison, Jennifer A; Waters, Donna; Gordon, Christopher J
2016-08-05
Despite the recent widespread adoption of simulation in clinical education in physiotherapy, there is a lack of validated tools for assessment in this setting. The Assessment of Physiotherapy Practice (APP) is a comprehensive tool used in clinical placement settings in Australia to measure the professional competence of physiotherapy students. The aim of the study was to evaluate the validity of the APP for student assessment in simulation settings. A total of 1260 APPs were collected, 971 from students in simulation and 289 from students in clinical placements. Rasch analysis was used to examine the construct validity of the APP tool in three different simulation assessment formats: longitudinal assessment over 1 week of simulation; longitudinal assessment over 2 weeks; and a short-form (25 min) assessment of a single simulation scenario. Comparisons with APPs from 5-week clinical placements in hospital and clinic-based settings were also conducted. The APP demonstrated acceptable fit to the expectations of the Rasch model for the 1- and 2-week clinical simulations, exhibiting unidimensional properties that were able to distinguish different levels of student performance. For the short-form simulation, nine of the 20 items recorded greater than 25% of scores as 'not-assessed' by clinical educators, which impacted on the suitability of the APP tool in this simulation format. The APP was a valid assessment tool when used in longitudinal simulation formats. A revised APP may be required for assessment in short-form simulation scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Yang, Y; Young, L
Purpose: Radiomic texture features derived from oncologic PET have recently been brought under intense investigation within the context of patient stratification and treatment outcome prediction in a variety of cancer types; however, their validity has not yet been examined. This work aims to validate radiomic PET texture metrics through the use of realistic simulations in a ground-truth setting. Methods: Simulation of FDG-PET was conducted by applying the Zubal phantom as an attenuation map to the SimSET software package, which employs Monte Carlo techniques to model the physical process of emission imaging. A total of 15 irregularly-shaped lesions featuring heterogeneous activity distributions were simulated. For each simulated lesion, 28 texture features in relation to the intensity histograms (GLIH), grey-level co-occurrence matrices (GLCOM), neighborhood difference matrices (GLNDM), and zone size matrices (GLZSM) were evaluated and compared with their respective values extracted from the ground truth activity map. Results: In reference to the values from the ground truth images, texture parameters appearing on the simulated data varied over a range of 0.73–3026.2% for GLIH-based, 0.02–100.1% for GLCOM-based, 1.11–173.8% for GLNDM-based, and 0.35–66.3% for GLZSM-based features. For the majority of the examined texture metrics (16/28), their values on the simulated data differed significantly from those of the ground truth images (P-values range from <0.0001 to 0.04). Features not exhibiting significant differences comprised GLIH-based standard deviation, GLCOM-based energy and entropy, GLNDM-based coarseness and contrast, and GLZSM-based low gray-level zone emphasis, high gray-level zone emphasis, short zone low gray-level emphasis, long zone low gray-level emphasis, long zone high gray-level emphasis, and zone size nonuniformity. Conclusion: The extent to which PET imaging disturbs texture appearance is feature-dependent and could be substantial. It is thus advised that the use of PET texture parameters for predictive and prognostic measurements in the oncologic setting awaits further systematic and critical evaluation.
Ingrassia, Pier Luigi; Barozza, Ludovico Giovanni; Franc, Jeffrey Michael
2018-01-01
In Italy, there is no framework of procedural skills that all medical students should be able to perform autonomously at graduation. The study aims at identifying (1) a set of essential procedural skills and (2) which abilities could be potentially taught with simulation. A desirability score was calculated for each procedure to determine the most effective manner in which to proceed with simulation curriculum development. A web poll was conducted at the School of Medicine in Novara, looking at the level of expected and self-perceived competency for common medical procedures. Three groups were enrolled: (1) faculty, (2) junior doctors in their first years of practice, and (3) recently graduated medical students. The level of importance of procedural skills for independent practice expressed by teachers, the level of mastery self-perceived by learners (students and junior doctors), and the suitability of simulation training for the given technical skills were measured. A desirability function was used to set priorities for future learning. The overall mean expected level of competency for the procedural skills was 7.9/9. The mean level of self-reported competency was 4.7/9 for junior doctors and 4.4/9 for recently graduated students. The highest-priority skills according to the desirability function were urinary catheter placement, nasogastric tube insertion, and incision and drainage of superficial abscesses. This study identifies those technical competencies thought by faculty to be important and assesses junior doctors' and recent graduates' levels of self-perceived confidence in performing these skills. The study also identifies the perceived utility of teaching these skills by simulation. The study prioritizes those skills that have a gap between expected and observed competency and are also thought to be amenable to teaching by simulation. This allows immediate priorities to be set for simulation curriculum development in the most effective manner. This methodology may be useful to researchers in other centers to prioritize simulation training.
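The abstract does not give the exact desirability formula; one common choice (assumed here, in the spirit of Derringer-style desirability functions) is a geometric mean of component desirabilities, for example the gap between expected and self-reported competency and the rated suitability for simulation teaching, each rescaled to [0, 1]:

```python
def desirability(expected, self_reported, sim_suitability, scale_max=9.0):
    """Toy priority score: geometric mean of the competency gap and the
    suitability of simulation teaching, each normalised to [0, 1]."""
    gap = max(expected - self_reported, 0.0) / scale_max  # larger gap -> greater need
    suit = sim_suitability / scale_max                    # higher -> more teachable by simulation
    return (gap * suit) ** 0.5

# Example with the 9-point scales used in the study (the numbers are illustrative)
print(desirability(expected=8.5, self_reported=3.0, sim_suitability=8.0))
```

Skills scoring high on both dimensions (large competency gap, high simulation suitability) rise to the top of the curriculum-development queue.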
Simulated impact of RTS,S/AS01 vaccination programs in the context of changing malaria transmission.
Brooks, Alan; Briët, Olivier J T; Hardy, Diggory; Steketee, Richard; Smith, Thomas A
2012-01-01
The RTS,S/AS01 pre-erythrocytic malaria vaccine is in phase III clinical trials. It is critical to anticipate where and how it should be implemented if trials are successful. Such planning may be complicated by changing levels of malaria transmission. Computer simulations were used to examine RTS,S/AS01 impact, using a vaccine profile based on phase II trial results, and assuming that protection decays only slowly. Settings were simulated in which baseline transmission (in the absence of vaccine) was fixed or varied between 2 and 20 infectious mosquito bites per person per annum (ibpa) over ten years. Four delivery strategies were studied: routine infant immunization (EPI), EPI plus infant catch-up, EPI plus school-based campaigns, and EPI plus mass campaigns. Impacts in changing transmission settings were similar to those in fixed settings. Assuming a persistent effect of vaccination, at 2 ibpa, the vaccine averted approximately 5-7 deaths per 1000 doses of vaccine when delivered via mass campaigns, but the benefit was less at higher transmission levels. EPI, catch-up and school-based strategies averted 2-3 deaths per 1000 doses in settings with 2 ibpa. In settings where transmission was decreasing or increasing, EPI, catch-up and school-based strategies averted approximately 3-4 deaths per 1000 doses. Where transmission is changing, it appears to be sufficient to consider simulations of pre-erythrocytic vaccine impact at a range of initial transmission levels. At 2 ibpa, mass campaigns averted the most deaths and reduced transmission, but this requires further study. If delivered via EPI, RTS,S/AS01 could avert approximately 6-11 deaths per 1000 vaccinees in all examined settings, similar to estimates for pneumococcal conjugate vaccine in African infants. These results support RTS,S/AS01 implementation via EPI, for example alongside vector control interventions, providing that the phase III trials provide support for our assumptions about efficacy.
Larsen, C R; Grantcharov, T; Aggarwal, R; Tully, A; Sørensen, J L; Dalsgaard, T; Ottesen, B
2006-09-01
Safe realistic training and unbiased quantitative assessment of technical skills are required for laparoscopy. Virtual reality (VR) simulators may be useful tools for training and assessing basic and advanced surgical skills and procedures. This study aimed to investigate the construct validity of the LapSimGyn VR simulator, and to determine the learning curves of gynecologists with different levels of experience. For this study, 32 gynecologic trainees and consultants (juniors or seniors) were allocated into three groups: novices (0 advanced laparoscopic procedures), intermediate level (>20 and <60 procedures), and experts (>100 procedures). All performed 10 sets of simulations consisting of three basic skill tasks and an ectopic pregnancy program. The simulations were carried out on 3 days within a maximum period of 2 weeks. Assessment of skills was based on time, economy of movement, and error parameters measured by the simulator. The data showed that expert gynecologists performed significantly and consistently better than intermediate and novice gynecologists. The learning curves differed significantly between the groups, showing that experts start at a higher level and more rapidly reach the plateau of their learning curve than do intermediate and novice groups of surgeons. The LapSimGyn VR simulator package demonstrates construct validity on both the basic skills module and the procedural gynecologic module for ectopic pregnancy. Learning curves can be obtained, but to reach the maximum performance for the more complex tasks, 10 repetitions do not seem sufficient at the given task level and settings. LapSimGyn also seems to be flexible and widely accepted by the users.
NASA Astrophysics Data System (ADS)
Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.
2017-07-01
A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with the multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Similar to countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps in the ICA. To increase the diversity of populations, a new approach, named the 'knock the base' method, is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems, considering the effects of different parameters such as the grid size, rock heterogeneity, and designated monitoring networks. The obtained numerical results indicate that using this simulation-optimization model provides accurate results in fewer iterations when compared with the model employing the classical one-level ICA. A model is proposed to identify characteristics of immiscible NAPL contaminant sources. The contaminant is immiscible in water and multi-phase flow is simulated. The model is a multi-level saturation-based optimization algorithm based on ICA. Each answer string in the second level is divided into a set of provinces. Each ICA is modified by incorporating a new 'knock the base' model.
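For readers unfamiliar with the optimizer, the following is a heavily simplified, generic single-level ICA sketch (it omits the two-level structure, the saturation-based encoding, the province subdivision, and the 'knock the base' step described above; the population sizes, coefficients, and the elitist re-sorting are assumptions):

```python
import numpy as np

def ica_minimize(cost, dim, bounds, n_countries=40, n_imp=5, iters=200,
                 beta=2.0, rev_rate=0.1, seed=0):
    """Generic, simplified Imperialist Competitive Algorithm for minimisation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(n_countries, dim))       # initial countries
    for _ in range(iters):
        costs = np.array([cost(c) for c in pop])
        pop = pop[np.argsort(costs)]                          # best countries first
        imps, cols = pop[:n_imp], pop[n_imp:]                 # imperialists / colonies
        # assimilation: each colony moves toward a randomly assigned imperialist
        owners = rng.integers(0, n_imp, size=len(cols))
        cols += beta * rng.random((len(cols), 1)) * (imps[owners] - cols)
        # revolution: random restart of a fraction of colonies for diversity
        mask = rng.random(len(cols)) < rev_rate
        cols[mask] = rng.uniform(lo, hi, size=(mask.sum(), dim))
        pop = np.vstack([imps, np.clip(cols, lo, hi)])
    costs = np.array([cost(c) for c in pop])
    return pop[np.argmin(costs)], costs.min()

# Example: recover a hypothetical 3-parameter "source" by minimising squared error
target = np.array([1.0, -2.0, 0.5])
best, err = ica_minimize(lambda p: float(np.sum((p - target)**2)),
                         dim=3, bounds=(-5.0, 5.0))
```

In the source-identification setting, the cost function would wrap a forward simulation (here UTCHEM) and compare simulated observations at the monitoring network against measured ones.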
Drag and drop simulation: from pictures to full three-dimensional simulations
NASA Astrophysics Data System (ADS)
Bergmann, Michel; Iollo, Angelo
2014-11-01
We present a suite of methods to achieve ``drag and drop'' simulation, i.e., to fully automate the process of performing three-dimensional flow simulations around bodies defined by actual images of moving objects. The overall approach requires skeleton graph generation to obtain a level set function from the pictures, optimal transportation to obtain the body velocity on the surface, and flow simulation using a Cartesian method based on penalization. We illustrate this paradigm by simulating the swimming of a mackerel fish.
SimSup's Loop: A Control Theory Approach to Spacecraft Operator Training
NASA Technical Reports Server (NTRS)
Owens, Brandon Dewain; Crocker, Alan R.
2015-01-01
Immersive simulation is a staple of training for many complex system operators, including astronauts and ground operators of spacecraft. However, while much has been written about simulators, simulation facilities, and operator certification programs, the topic of how one develops simulation scenarios to train a spacecraft operator is relatively understated in the literature. In this paper, an approach is presented for using control theory as the basis for developing the immersive simulation scenarios for a spacecraft operator training program. The operator is effectively modeled as a high level controller of lower level hardware and software control loops that affect a select set of system state variables. Simulation scenarios are derived from a STAMP-based hazard analysis of the operator's high and low level control loops. The immersive simulation aspect of the overall training program is characterized by selecting a set of scenarios that expose the operator to the various inadequate control actions that stem from control flaws and inadequate control executions in the different sections of the typical control loop. Results from the application of this approach to the Lunar Atmosphere and Dust Environment Explorer (LADEE) mission are provided through an analysis of the simulation scenarios used for operator training and the actual anomalies that occurred during the mission. The simulation scenarios and inflight anomalies are mapped to specific control flaws and inadequate control executions in the different sections of the typical control loop to illustrate the characteristics of anomalies arising from the different sections of the typical control loop (and why it is important for operators to have exposure to these characteristics). Additionally, similarities between the simulation scenarios and inflight anomalies are highlighted to make the case that the simulation scenarios prepared the operators for the mission.
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Chao, Winston C.; Walker, G. K.
1992-01-01
The influence of a cumulus convection scheme on the simulated atmospheric circulation and hydrologic cycle is investigated by means of a coarse version of the GCM. Two sets of integrations, each containing an ensemble of three summer simulations, were produced. The ensemble sets of control and experiment simulations are compared and differentially analyzed to determine the influence of a cumulus convection scheme on the simulated circulation and hydrologic cycle. The results show that cumulus parameterization has a very significant influence on the simulated circulation and precipitation. The upper-level condensation heating over the ITCZ is much smaller for the experiment simulations as compared to the control simulations; correspondingly, the Hadley and Walker cells for the experiment simulations are also weaker and are accompanied by a weaker Ferrel cell in the Southern Hemisphere. Overall, the difference fields show that the experiment simulations (without cumulus convection) produce a cooler and less energetic atmosphere.
1978-06-01
stimulate at least three levels of crew function. At the most complex level, visual cues are used to discriminate the presence or activities of... limited to motion onset cues washed out at subliminal levels. Because of the cues they provide the driver, gunner, and commander, and the dis... motion, i.e., which physiological receptors are affected, how they function, and how they may be stimulated by a simulator motion system. Motion is
Comparing volume of fluid and level set methods for evaporating liquid-gas flows
NASA Astrophysics Data System (ADS)
Palmore, John; Desjardins, Olivier
2016-11-01
This presentation demonstrates three numerical strategies for simulating liquid-gas flows undergoing evaporation. The practical aim of this work is to choose a framework capable of simulating the combustion of liquid fuels in an internal combustion engine. Each framework is analyzed with respect to its accuracy and computational cost. All simulations are performed using a conservative, finite volume code for simulating reacting, multiphase flows under the low-Mach assumption. The strategies used in this study correspond to different methods for tracking the liquid-gas interface and handling the transport of the discontinuous momentum and vapor mass fractions fields. The first two strategies are based on conservative, geometric volume of fluid schemes using directionally split and un-split advection, respectively. The third strategy is the accurate conservative level set method. For all strategies, special attention is given to ensuring the consistency between the fluxes of mass, momentum, and vapor fractions. The study performs three-dimensional simulations of an isolated droplet of a single component fuel evaporating into air. Evaporation rates and vapor mass fractions are compared to analytical results.
SeaWiFS technical report series. Volume 9: The simulated SeaWiFS data set, version 1
NASA Technical Reports Server (NTRS)
Gregg, Watson W.; Chen, Frank C.; Mezaache, Ahmed L.; Chen, Judy D.; Whiting, Jeffrey A.; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Indest, A. W. (Editor)
1993-01-01
Data system development activities for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) must begin well before the scheduled 1994 launch. To assist in these activities, it is essential to develop a simulated SeaWiFS data set as soon as possible. Realism is of paramount importance in this data set, including SeaWiFS spectral bands, orbital and scanning characteristics, and known data structures. Development of the simulated data set can assist in identification of problem areas that can be addressed and solved before the actual data are received. This paper describes the creation of the first version of the simulated SeaWiFS data set. The data set includes the spectral band, orbital, and scanning characteristics of the SeaWiFS sensor and SeaStar spacecraft. The information is output in the data structure as it is stored onboard. Thus, it is a level-0 data set which can be taken from start to finish through a prototype data system. The data set is complete and correct at the time of printing, although the values in the telemetry fields are left blank. The structure of the telemetry fields, however, is incorporated. Also, no account for clouds has been included. However, this version facilitates early prototyping activities by the SeaWiFS data system, providing a realistic data set to assess performance.
Kumar, Arunaz; Gilmour, Carole; Nestel, Debra; Aldridge, Robyn; McLelland, Gayle; Wallace, Euan
2014-12-01
Core clinical skills acquisition is an essential component of undergraduate medical and midwifery education. Although interprofessional education is an increasingly common format for learning efficient teamwork in clinical medicine, its value in undergraduate education is less clear. We present a collaborative effort from the medical and midwifery schools of Monash University, Melbourne, towards the development of an educational package centred around a core skills-based workshop using low-fidelity simulation models in an interprofessional setting. Detailed feedback on the package was positive with respect to the relevance of the teaching content, how well the topic was taught using the task trainers and simulation models, the pitch of the teaching level, and the confidence participants perceived they had gained in performing the skill on a real patient after attending the workshop. Overall, interprofessional core skills training using low-fidelity simulation models introduced at an undergraduate level in medicine and midwifery was well accepted. © 2014 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
Numerical Simulation of Hydrodynamics of a Heavy Liquid Drop Covered by Vapor Film in a Water Pool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, W.M.; Yang, Z.L.; Giri, A.
2002-07-01
A numerical study on the hydrodynamics of a droplet covered by a vapor film in a water pool is carried out. Two level set functions are used to implicitly capture the interfaces among three immiscible fluids (melt drop, vapor, and coolant). This approach leaves only one set of conservation equations for the three phases. A high-order Navier-Stokes solver, called the Cubic-Interpolated Pseudo-Particle (CIP) algorithm, is employed in combination with the level set approach, which allows large density ratios (up to 1000), surface tension, and jumps in viscosity. By this calculation, the hydrodynamic behavior of a melt droplet falling into a volatile coolant is simulated, which is of great significance for revealing the mechanism of steam explosion during a hypothetical severe reactor accident. (authors)
Cloud-Based Orchestration of a Model-Based Power and Data Analysis Toolchain
NASA Technical Reports Server (NTRS)
Post, Ethan; Cole, Bjorn; Dinkel, Kevin; Kim, Hongman; Lee, Erich; Nairouz, Bassem
2016-01-01
The proposed Europa Mission concept contains many engineering and scientific instruments that consume varying amounts of power and produce varying amounts of data throughout the mission. System-level power and data usage must be well understood and analyzed to verify design requirements. Numerous cross-disciplinary tools and analysis models are used to simulate the system-level spacecraft power and data behavior. This paper addresses the problem of orchestrating a consistent set of models, tools, and data in a unified analysis toolchain when ownership is distributed among numerous domain experts. An analysis and simulation environment was developed as a way to manage the complexity of the power and data analysis toolchain and to reduce the simulation turnaround time. A system model data repository is used as the trusted store of high-level inputs and results while other remote servers are used for archival of larger data sets and for analysis tool execution. Simulation data passes through numerous domain-specific analysis tools and end-to-end simulation execution is enabled through a web-based tool. The use of a cloud-based service facilitates coordination among distributed developers and enables scalable computation and storage needs, and ensures a consistent execution environment. Configuration management is emphasized to maintain traceability between current and historical simulation runs and their corresponding versions of models, tools and data.
Evaluation of a cardiopulmonary resuscitation curriculum in a low resource environment.
Chang, Mary P; Lyon, Camila B; Janiszewski, David; Aksamit, Deborah; Kateh, Francis; Sampson, John
2015-11-07
To evaluate whether a 2-day International Liaison Committee on Resuscitation (ILCOR) Universal Algorithm-based curriculum taught in a tertiary care hospital in Liberia increases local health care provider knowledge and skill comfort level. A combined basic and advanced cardiopulmonary resuscitation (CPR) curriculum was developed for low-resource settings that included lectures and low-fidelity manikin-based simulations. In March 2014, the curriculum was taught to healthcare providers in a tertiary care hospital in Liberia. In a quality assurance review, participants were evaluated for knowledge and comfort levels with resuscitation before and after the workshop. They were also videotaped during simulation sessions and evaluated on standardized performance metrics. Fifty-two hospital staff completed both pre-and post-curriculum surveys. The median score was 45% pre-curriculum and 82% post-curriculum (p<0.00001). The median provider comfort level score was 4 of 5 pre-curriculum and 5 of 5 post-curriculum (p<0.00001). During simulations, 93.2% of participants performed the pulse check within 10 seconds, and 97.7% performed defibrillation within 180 seconds. Clinician knowledge of and comfort level with CPR increased significantly after participating in our curriculum. A CPR curriculum based on lectures and low-fidelity manikin simulations may be an effective way to teach resuscitation in this low-resource setting.
Surgical simulation training in orthopedics: current insights.
Kalun, Portia; Wagner, Natalie; Yan, James; Nousiainen, Markku T; Sonnadara, Ranil R
2018-01-01
While the knowledge required of residents training in orthopedic surgery continues to increase, various factors, including reductions in work hours, have resulted in decreased clinical learning opportunities. Recent work suggests residents graduate from their training programs without sufficient exposure to key procedures. In response, simulation is increasingly being incorporated into training programs to supplement clinical learning. This paper reviews the literature to explore whether skills learned in simulation-based settings result in improved clinical performance in orthopedic surgery trainees. A scoping review of the literature was conducted to identify papers discussing simulation training in orthopedic surgery. We focused on exploring whether skills learned in simulation transferred effectively to a clinical setting. Experimental studies, systematic reviews, and narrative reviews were included. A total of 15 studies were included, with 11 review papers and four experimental studies. The review articles reported little evidence regarding the transfer of skills from simulation to the clinical setting, strong evidence that simulator models discriminate among different levels of experience, varied outcome measures among studies, and a need to define competent performance in both simulated and clinical settings. Furthermore, while three out of the four experimental studies demonstrated transfer between the simulated and clinical environments, methodological study design issues were identified. Our review identifies weak evidence as to whether skills learned in simulation transfer effectively to clinical practice for orthopedic surgery trainees. Given the increased reliance on simulation, there is an immediate need for comprehensive studies that focus on skill transfer, which will allow simulation to be incorporated effectively into orthopedic surgery training programs.
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Hopkins, D. A.
1985-01-01
A set of thermoviscoplastic nonlinear constitutive relationships (TVP-NCR) is presented. The set was developed for application to high-temperature metal matrix composites (HT-MMC) and is applicable to thermal and mechanical properties. Formulation of the TVP-NCR is based at the micromechanics level. The TVP-NCR are of simple form and readily integrated into nonlinear composite structural analysis. It is shown that the set of TVP-NCR is computationally effective. The set directly predicts complex materials behavior at all levels of the composite simulation, from the constituent materials, through the several levels of composite mechanics, and up to the global response of complex HT-MMC structural components.
A Simulated Peer-Assessment Approach to Improving Student Performance in Chemical Calculations
ERIC Educational Resources Information Center
Scott, Fraser J.
2014-01-01
This paper describes the utility of using simulated, rather than real, student solutions to problems within a peer-assessment setting and whether this approach can be used as a means of improving performance in chemical calculations. The study involved a small cohort of students, of two levels, who carried out a simulated peer-assessment as a…
Data-driven train set crash dynamics simulation
NASA Astrophysics Data System (ADS)
Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun
2017-02-01
Traditional finite element (FE) methods are computationally expensive for simulating train crashes. The high computational cost limits their direct application in investigating the dynamic behaviour of an entire train set for crashworthiness design and structural optimisation. On the contrary, multi-body modelling is widely used because of its low computational cost, with a trade-off in accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of a multi-body dynamics simulation of a train set crash without increasing the computational burden. This is achieved by a parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts the force-displacement relation in a given collision condition from a collection of offline FE simulation data on various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods, and the results show that our data-driven method improves the accuracy over traditional multi-body models in train crash simulation while running at the same level of efficiency.
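A minimal sketch of the data-driven idea (not the authors' parallel implementation; the feature layout, the scikit-learn estimator, and the synthetic stand-in data are assumptions): train a random forest on offline FE force-displacement samples indexed by crash velocity, then query it inside a cheaper multi-body model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Offline FE data: rows of (crash_velocity [m/s], displacement [m]) -> force [kN].
# In practice these samples would be extracted from FE simulation output files.
X_train = np.array([[v, d] for v in (5.0, 10.0, 15.0, 20.0)
                            for d in np.linspace(0.0, 0.5, 50)])
y_train = 800.0 * X_train[:, 1] * (1.0 + 0.02 * X_train[:, 0])  # synthetic stand-in

forest = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
forest.fit(X_train, y_train)

def interface_force(velocity, displacement):
    """Force-displacement relation queried by the multi-body crash model."""
    return forest.predict([[velocity, displacement]])[0]

print(interface_force(12.0, 0.25))  # predicted interface force (kN) at a new condition
```

The learned relation replaces the hand-tuned nonlinear spring characteristics that a conventional multi-body crash model would otherwise use at each coupling interface.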
NASA Astrophysics Data System (ADS)
Yuan, H. Z.; Chen, Z.; Shu, C.; Wang, Y.; Niu, X. D.; Shu, S.
2017-09-01
In this paper, a free energy-based surface tension force (FESF) model is presented for accurately resolving the surface tension force in numerical simulation of multiphase flows by the level set method. By using the analytical form of the order parameter along the normal direction to the interface in the phase-field method and the free energy principle, the FESF model offers an explicit and analytical formulation for the surface tension force. The only variable in this formulation is the normal distance to the interface, which can be substituted by the distance function solved by the level set method. On one hand, as compared to the conventional continuum surface force (CSF) model in the level set method, the FESF model introduces no regularized delta function, due to which it suffers less from numerical diffusion and performs better in mass conservation. On the other hand, as compared to the phase-field surface tension force (PFSF) model, the evaluation of the surface tension force in the FESF model is based on an analytical approach rather than numerical approximations of spatial derivatives. Therefore, better numerical stability and higher accuracy can be expected. Various numerical examples are tested to validate the robustness of the proposed FESF model. It turns out that the FESF model performs better than the CSF model and the PFSF model in terms of accuracy, stability, convergence speed and mass conservation. It is also shown in numerical tests that the FESF model can effectively simulate problems with high density/viscosity ratios, high Reynolds numbers and severe topological interfacial changes.
LUXSim: A component-centric approach to low-background simulations
Akerib, D. S.; Bai, X.; Bedikian, S.; ...
2012-02-13
Geant4 has been used throughout the nuclear and high-energy physics community to simulate energy depositions in various detectors and materials. These simulations have mostly been run with a source beam outside the detector. In the case of low-background physics, however, a primary concern is the effect on the detector from radioactivity inherent in the detector parts themselves. From this standpoint, there is no single source or beam, but rather a collection of sources with potentially complicated spatial extent. LUXSim is a simulation framework used by the LUX collaboration that takes a component-centric approach to event generation and recording. A new set of classes allows for multiple radioactive sources to be set within any number of components at run time, with the entire collection of sources handled within a single simulation run. Various levels of information can also be recorded from the individual components, with these record levels also being set at runtime. This flexibility in both source generation and information recording is possible without the need to recompile, reducing the complexity of code management and the proliferation of versions. Within the code itself, casting geometry objects within this new set of classes rather than as the default Geant4 classes automatically extends this flexibility to every individual component. No additional work is required on the part of the developer, reducing development time and increasing confidence in the results. Here, we describe the guiding principles behind LUXSim, detail some of its unique classes and methods, and give examples of usage.
LBM-EP: Lattice-Boltzmann method for fast cardiac electrophysiology simulation from 3D images.
Rapaka, S; Mansi, T; Georgescu, B; Pop, M; Wright, G A; Kamen, A; Comaniciu, Dorin
2012-01-01
Current treatments of heart rhythm troubles require careful planning and guidance for optimal outcomes. Computational models of cardiac electrophysiology are being proposed for therapy planning but current approaches are either too simplified or too computationally intensive for patient-specific simulations in clinical practice. This paper presents a novel approach, LBM-EP, to solve any type of mono-domain cardiac electrophysiology models at near real-time that is especially tailored for patient-specific simulations. The domain is discretized on a Cartesian grid with a level-set representation of patient's heart geometry, previously estimated from images automatically. The cell model is calculated node-wise, while the transmembrane potential is diffused using Lattice-Boltzmann method within the domain defined by the level-set. Experiments on synthetic cases, on a data set from CESC'10 and on one patient with myocardium scar showed that LBM-EP provides results comparable to an FEM implementation, while being 10 - 45 times faster. Fast, accurate, scalable and requiring no specific meshing, LBM-EP paves the way to efficient and detailed models of cardiac electrophysiology for therapy planning.
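To make the coupling idea concrete, here is a stripped-down sketch (this is not the LBM-EP code; a D2Q5 lattice, a circular level set, periodic streaming, and a pure diffusion step stand in for the full mono-domain model and cell dynamics) of diffusing a transmembrane-potential-like field only where a level set marks tissue:

```python
import numpy as np

# D2Q5 lattice: rest particle plus 4 axis-aligned directions
ex = np.array([0, 1, -1, 0, 0])
ey = np.array([0, 0, 0, 1, -1])
w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])
tau = 0.8                                     # BGK relaxation time (sets diffusivity)

n = 100
Y, X = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
phi = 30.0 - np.hypot(X - n / 2, Y - n / 2)   # level set: phi > 0 marks "tissue"
tissue = phi > 0.0

u0 = np.zeros((n, n))
u0[tissue & (X < n / 2)] = 1.0                # initial excited region
f = w[:, None, None] * u0                     # distributions initialised at equilibrium

for _ in range(500):
    rho = f.sum(axis=0)                       # potential-like scalar field
    f += -(f - w[:, None, None] * rho) / tau  # BGK collision (pure diffusion)
    for k in range(5):                        # streaming step (periodic wrap)
        f[k] = np.roll(np.roll(f[k], ex[k], axis=1), ey[k], axis=0)
    f[:, ~tissue] = 0.0                       # crude mask: no field outside the level set

u = f.sum(axis=0)                             # diffused field, confined to the tissue region
```

In a full electrophysiology solver, a reaction term from the cell model would be added node-wise at each step, and the boundary treatment at the level set would enforce a no-flux condition rather than the crude zeroing used here.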
Construction of multi-functional open modulized Matlab simulation toolbox for imaging ladar system
NASA Astrophysics Data System (ADS)
Wu, Long; Zhao, Yuan; Tang, Meng; He, Jiang; Zhang, Yong
2011-06-01
Ladar system simulation uses computer simulation of ladar models to predict the performance of a ladar system. This paper reviews developments in laser imaging radar simulation in domestic and overseas studies, and surveys computer simulation of ladar systems for different application requirements. The LadarSim and FOI-LadarSIM simulation facilities of Utah State University and the Swedish Defence Research Agency are introduced in detail. Domestic research in imaging ladar system simulation has so far been limited in scale and lacks a unified design, mostly achieving simple functional simulation based on ranging equations. A laser imaging radar simulation with an open, modularized structure is therefore proposed, with unified modules for the ladar system, laser emitter, atmosphere models, target models, signal receiver, parameter settings, and system controller. A unified Matlab toolbox and standard control modules have been built with regulated function inputs and outputs and defined communication protocols between hardware modules. A simulation of an ICCD gain-modulated imaging ladar system observing a space shuttle was performed with the toolbox. The simulation results show that the models and parameter settings of the Matlab toolbox can simulate the actual detection process precisely. The unified control module and predefined parameter settings simplify the simulation of imaging ladar detection. Its open structure enables the toolbox to be modified for specialized requests, and the modularization gives simulations flexibility.
Experimental Characterization of Gas Turbine Emissions at Simulated Flight Altitude Conditions
NASA Technical Reports Server (NTRS)
Howard, R. P.; Wormhoudt, J. C.; Whitefield, P. D.
1996-01-01
NASA's Atmospheric Effects of Aviation Project (AEAP) is developing a scientific basis for assessment of the atmospheric impact of subsonic and supersonic aviation. A primary goal is to assist assessments of United Nations scientific organizations and hence, consideration of emissions standards by the International Civil Aviation Organization (ICAO). Engine tests have been conducted at AEDC to fulfill the need of AEAP. The purpose of these tests is to obtain a comprehensive database to be used for supplying critical information to the atmospheric research community. It includes: (1) simulated sea-level-static test data as well as simulated altitude data; and (2) intrusive (extractive probe) data as well as non-intrusive (optical techniques) data. A commercial-type bypass engine with aviation fuel was used in this test series. The test matrix was set by parametrically selecting the temperature, pressure, and flow rate at sea-level-static and different altitudes to obtain a parametric set of data.
An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reid, Michael R.; Powers, Edward I. (Technical Monitor)
2000-01-01
The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arion is a library and tool set that enables researchers to holistically define test system models. Defining a complex system for testing an algorithm or control requires expertise across multiple domains. Simulating a complex system requires the integration of multiple simulators and test hardware, each with its own specification languages and concepts. This requires an extensive set of knowledge and capabilities. Arion was developed to alleviate this challenge. Arion is a set of Java libraries that abstracts the concepts from supported simulators into a cohesive model language, allowing users to build models to their needed level of fidelity and expertise. Arion is also a software tool that translates the user's model back into the specification languages of the simulators and test hardware needed for execution.
Santaniello, Francesca; Djupström, Line B; Ranius, Thomas; Weslien, Jan; Rudolphi, Jörgen; Sonesson, Johan
2017-10-01
Boreal forests are an important source of timber and pulp wood, but provide also other products and services. Utilizing a simulation program and field data from a tree retention experiment in a Scots pine forest in central Sweden, we simulated the consequences during the following 100 years of various levels of retention on production of merchantable wood, dead wood input (as a proxy for biodiversity), and carbon stock changes. At the stand level, wood production decreased with increased retention levels, while dead wood input and carbon stock increased. We also compared 12 scenarios representing a land sharing/land sparing gradient. In each scenario, a constant volume of wood was harvested with a specific level of retention in a 100-ha landscape. The area not needed to reach the defined volume was set-aside during a 100-year rotation period, leading to decreasing area of set-asides with increasing level of retention across the 12 scenarios. Dead wood input was positively affected by the level of tree retention whereas the average carbon stock decreased slightly with increasing level of tree retention. The scenarios will probably vary in how they favor species preferring different substrates. Therefore, we conclude that a larger variation of landscape-level conservation strategies, also including active creation of dead wood, may be an attractive complement to the existing management. Copyright © 2017 Elsevier Ltd. All rights reserved.
Evaluation and comparison of predictive individual-level general surrogates.
Gabriel, Erin E; Sachs, Michael C; Halloran, M Elizabeth
2018-07-01
An intermediate response measure that accurately predicts efficacy in a new setting at the individual level could be used both for prediction and personalized medical decisions. In this article, we define a predictive individual-level general surrogate (PIGS), which is an individual-level intermediate response that can be used to accurately predict individual efficacy in a new setting. While methods for evaluating trial-level general surrogates, which are predictors of trial-level efficacy, have been developed previously, few, if any, methods have been developed to evaluate individual-level general surrogates, and no methods have formalized the use of cross-validation to quantify the expected prediction error. Our proposed method uses existing methods of individual-level surrogate evaluation within a given clinical trial setting in combination with cross-validation over a set of clinical trials to evaluate surrogate quality and to estimate the absolute prediction error that is expected in a new trial setting when using a PIGS. Simulations show that our method performs well across a variety of scenarios. We use our method to evaluate and to compare candidate individual-level general surrogates over a set of multi-national trials of a pentavalent rotavirus vaccine.
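A minimal sketch of the cross-validation idea over a set of trials (the surrogate-outcome model, the linear fit, and the absolute-error metric below are placeholders, not the authors' estimators): hold out one trial at a time, fit the surrogate-to-outcome relationship on the remaining trials, and accumulate prediction error in the held-out trial.

```python
import numpy as np

def leave_one_trial_out_error(trials, fit, predict):
    """Estimate the expected absolute prediction error of a candidate surrogate
    by leave-one-trial-out cross-validation over a set of clinical trials.

    trials  : list of (surrogate_values, outcomes) array pairs, one per trial
    fit     : function(list_of_trials) -> model
    predict : function(model, surrogate_values) -> predicted outcomes
    """
    errors = []
    for k in range(len(trials)):
        train = [t for i, t in enumerate(trials) if i != k]
        s_test, y_test = trials[k]
        model = fit(train)
        errors.append(np.mean(np.abs(predict(model, s_test) - y_test)))
    return float(np.mean(errors))

def fit(train):
    s = np.concatenate([si for si, _ in train])
    y = np.concatenate([yi for _, yi in train])
    return np.polyfit(s, y, deg=1)             # simple linear surrogate-outcome model

def predict(model, s):
    return np.polyval(model, s)

# Toy usage with five synthetic "trials"
rng = np.random.default_rng(1)
trials = []
for _ in range(5):
    s = rng.uniform(0, 1, 200)
    trials.append((s, 0.3 + 0.5 * s + rng.normal(0, 0.05, 200)))
print(leave_one_trial_out_error(trials, fit, predict))
```

The reported cross-validated error approximates the absolute prediction error expected when the candidate surrogate is carried into a new trial setting.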
Hysteresis in simulations of malaria transmission
NASA Astrophysics Data System (ADS)
Yamana, Teresa K.; Qiu, Xin; Eltahir, Elfatih A. B.
2017-10-01
Malaria transmission is a complex system and in many parts of the world is closely related to climate conditions. However, studies on environmental determinants of malaria generally consider only concurrent climate conditions and ignore the historical or initial conditions of the system. Here, we demonstrate the concept of hysteresis in malaria transmission, defined as non-uniqueness of the relationship between malaria prevalence and concurrent climate conditions. We show the dependence of simulated malaria transmission on initial prevalence and the initial level of human immunity in the population. Using realistic time series of environmental variables, we quantify the effect of hysteresis in a modeled population. In a set of numerical experiments using HYDREMATS, a field-tested mechanistic model of malaria transmission, the simulated maximum malaria prevalence depends on both the initial prevalence and the initial level of human immunity in the population. We found the effects of initial conditions to be of comparable magnitude to the effects of interannual variability in environmental conditions in determining malaria prevalence. The memory associated with this hysteresis effect is longer in high transmission settings than in low transmission settings. Our results show that efforts to simulate and forecast malaria transmission must consider the exposure history of a location as well as the concurrent environmental drivers.
Bailes, Stephanie A; Firestone, Kimberly S; Dunn, Diane K; McNinch, Neil L; Brown, Miraides F; Volsko, Teresa A
2016-03-01
Bubble CPAP, used for spontaneously breathing infants to avoid intubation or postextubation support, can be delivered with different interface types. This study compared the effect that interfaces had on CPAP delivery. We hypothesized that there would be no difference between set and measured levels across interface types. A validated preterm infant nasal airway model was attached to the ASL 5000 breathing simulator. The simulator was programmed to deliver active breathing of a surfactant-deficient premature infant with a breathing frequency of 70 breaths/min, an inspiratory time of 0.30 s, a resistance of 150 cm H2O/L/s, a compliance of 0.5 mL/cm H2O, a tidal volume of 5 mL, and an esophageal pressure of -10 cm H2O. Nasal CPAP prongs (size 4030) and newborn and infant RAM cannulas were connected to the nasal airway model and a bubble CPAP system. CPAP levels were set at 4, 5, 6, 7, 8, and 9 cm H2O with flows of 6, 8, and 10 L/min each. Measurements were recorded after 1 min of stabilization. The analysis was performed using SAS 9.4. The Kolmogorov-Smirnov test assessed normality of the data. The Friedman test was used to compare non-normally distributed repeated measures. The Wilcoxon signed-rank test was used to conduct post hoc analysis. All tests were 2-sided, and P values of <.05 were considered as indicating significant differences unless otherwise indicated. At lower set CPAP levels (4-6 cm H2O), measured CPAP dropped precipitously with the nasal prongs at the highest flow setting. At higher CPAP levels (7-9 cm H2O), measured CPAP increased concomitantly as the flow setting increased. Statistically significant differences in set and measured CPAP occurred for all devices across all CPAP levels, with the measured CPAP less than the set level for all conditions, P < .001. Set flow had a profound effect on measured CPAP. The concomitant drop in measured pressure with high and low flows could be attributed to increased resistance to spontaneous breathing or insufficient flow to meet inspiratory demand. Clinicians should be aware of the effect that the interface and flow have on CPAP delivery. Copyright © 2016 by Daedalus Enterprises.
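As an illustration of the statistical workflow described (the study used SAS 9.4; the SciPy calls and the synthetic pressure readings below are assumptions for demonstration only): a Friedman test across interfaces followed by pairwise Wilcoxon signed-rank post hoc tests.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
set_cpap = 4.0  # cm H2O
# Repeated measured-CPAP readings for one set level, one column per interface
prongs = set_cpap - 0.6 + rng.normal(0, 0.05, 8)
ram_newborn = set_cpap - 0.9 + rng.normal(0, 0.05, 8)
ram_infant = set_cpap - 1.1 + rng.normal(0, 0.05, 8)

stat, p = friedmanchisquare(prongs, ram_newborn, ram_infant)
print(f"Friedman test: chi2 = {stat:.2f}, p = {p:.4f}")

if p < 0.05:  # post hoc pairwise Wilcoxon signed-rank tests
    pairs = [(prongs, ram_newborn, "prongs vs newborn RAM"),
             (prongs, ram_infant, "prongs vs infant RAM"),
             (ram_newborn, ram_infant, "newborn vs infant RAM")]
    for a, b, name in pairs:
        print(name, wilcoxon(a, b))
```

In practice the post hoc p-values would also be adjusted for multiple comparisons, which is omitted here for brevity.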
Mok, Daniel W K; Lee, Edmond P F; Chau, Foo-Tim; Dyke, John M
2009-03-10
RCCSD(T) and/or CASSCF/MRCI calculations have been carried out on the X̃(1)A' and Ã(1)A'' states of HSiCl employing basis sets of up to the aug-cc-pV5Z quality. Contributions from core correlation and extrapolation to the complete basis set limit were included in determining the computed equilibrium geometrical parameters and relative electronic energy of these two states of HSiCl. Franck-Condon factors which include allowance for anharmonicity and Duschinsky rotation between these two states of HSiCl and DSiCl were calculated employing RCCSD(T) and CASSCF/MRCI potential energy functions, and were used to simulate the Ã(1)A'' ← X̃(1)A' absorption and Ã(1)A'' → X̃(1)A' single vibronic level (SVL) emission spectra of HSiCl and DSiCl. Simulated absorption and experimental LIF spectra, and simulated and observed Ã(1)A''(0,0,0) → X̃(1)A' SVL emission spectra, of HSiCl and DSiCl are in very good agreement. However, agreement between simulated and observed Ã(1)A''(0,1,0) → X̃(1)A' and Ã(1)A''(0,2,1) → X̃(1)A' SVL emission spectra of DSiCl is not as good. Preliminary calculations on low-lying excited states of HSiCl suggest that vibronic interaction between low-lying vibrational levels of the Ã(1)A'' state and highly excited vibrational levels of the ã(3)A'' is possible. Such vibronic interaction may change the character of the low-lying vibrational levels of the Ã(1)A'' state, which would lead to perturbation in the SVL emission spectra from these vibrational levels.
The Thick Level-Set model for dynamic fragmentation
Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas
2017-01-04
The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. In conclusion, additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.
Translations from Kommunist, Number 13, September 1978
1978-10-30
programmed machine tool here is merely a component of a more complex reprogrammable technological system. This includes the robot machine tools with... sufficient possibilities for changing technological operations and processes and automated technological lines. The reprogrammable automated sets will... simulate the possibilities of such sets. A new technological level will be developed in industry related to reprogrammable automated sets, their design
A validation procedure for a LADAR system radiometric simulation model
NASA Astrophysics Data System (ADS)
Leishman, Brad; Budge, Scott; Pack, Robert
2007-04-01
The USU LadarSIM software package is a ladar system engineering tool that has recently been enhanced to include modeling of the radiometry of ladar beam footprints. This paper discusses our validation of the radiometric model and presents a practical approach to future validation work. In order to validate complicated and interrelated factors affecting radiometry, a systematic approach had to be developed. Data for known parameters were first gathered, and then unknown parameters of the system were determined from simulation test scenarios. This was done in a way that isolated as many unknown variables as possible and then built on the previously obtained results. First, the appropriate voltage threshold levels of the discrimination electronics were set by analyzing the number of false alarms seen in actual data sets. With this threshold set, the system noise was then adjusted to achieve the appropriate number of dropouts. Once a suitable noise level was found, the range errors of the simulated and actual data sets were compared and studied. Predicted errors in range measurements were analyzed using two methods: first by examining the range error of a surface with known reflectivity, and second by examining the range errors for specific detectors with known responsivities. This provided insight into the discrimination method and receiver electronics used in the actual system.
Level-set simulations of soluble surfactant driven flows
NASA Astrophysics Data System (ADS)
Cleret de Langavant, Charles; Guittet, Arthur; Theillard, Maxime; Temprano-Coleto, Fernando; Gibou, Frédéric
2017-11-01
We present an approach to simulate the diffusion, advection and adsorption-desorption of a material quantity defined on an interface in two and three spatial dimensions. We use a level-set approach to capture the interface motion and a Quad/Octree data structure to efficiently solve the equations describing the underlying physics. Coupling with a Navier-Stokes solver enables the study of the effect of soluble surfactants that locally modify the parameters of surface tension on different types of flows. The method is tested on several benchmarks and applied to three typical examples of flows in the presence of surfactant: a bubble in a shear flow, the well-known phenomenon of tears of wine, and the Landau-Levich coating problem.
Detection of CMOS bridging faults using minimal stuck-at fault test sets
NASA Technical Reports Server (NTRS)
Ijaz, Nabeel; Frenzel, James F.
1993-01-01
The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.
Helsel, Dennis R.; Gilliom, Robert J.
1986-01-01
Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.
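A compact sketch of the log probability regression (LR) idea, often called regression on order statistics (the Blom plotting-position formula, the pooled ranking of censored and detected values, and the single detection limit assumed here are simplifications of the full procedure): fit log(detected values) against normal quantiles of their plotting positions, impute the censored observations from the fitted line, then summarise.

```python
import numpy as np
from scipy.stats import norm

def ros_log_regression(values, censored, summary=np.mean):
    """Estimate a statistic of a censored data set by log probability regression:
    regress log(detected values) on normal quantiles of their plotting positions,
    impute the censored observations from the fitted line, then summarise."""
    values = np.asarray(values, dtype=float)
    censored = np.asarray(censored, dtype=bool)
    n = len(values)
    order = np.argsort(values)
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)
    pp = (ranks - 0.375) / (n + 0.25)           # Blom plotting positions
    z = norm.ppf(pp)                            # corresponding normal quantiles
    # regression fitted on detected (uncensored) observations only
    slope, intercept = np.polyfit(z[~censored], np.log(values[~censored]), 1)
    imputed = values.copy()
    imputed[censored] = np.exp(intercept + slope * z[censored])
    return summary(imputed)

# Example: concentrations with '<1.0' values reported at the detection limit
conc = [1.0, 1.0, 1.0, 1.3, 2.1, 3.5, 5.2, 8.9]
cens = [True, True, True, False, False, False, False, False]
print(ros_log_regression(conc, cens))           # estimated mean of the full distribution
```

Because the censored observations are replaced by values consistent with the fitted lognormal tail rather than by arbitrary substitutes, the resulting mean and standard deviation are much less biased than simple substitution at zero, half, or the full detection limit.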
NASA Technical Reports Server (NTRS)
Ungar, Stephen G.; Merry, Carolyn J.; Mckim, Harlan L.; Irish, Richard; Miller, Michael S.
1988-01-01
A simulated data set was used to evaluate techniques for extracting topography from side-looking satellite systems for an area of northwest Washington state. A negative transparency orthophotoquad was digitized at a spacing of 85 microns, resulting in an equivalent ground distance of 9.86 m between pixels and a radiometric resolution of 256 levels. A bilinear interpolation was performed on digital elevation model data to generate elevation data at a 9.86-m resolution. The nominal orbital characteristics and geometry of the SPOT satellite were convolved with the data to produce simulated panchromatic HRV digital stereo imagery for three different orbital paths, and techniques for reconstructing topographic data were developed. Analyses with the simulated HRV data and other data sets show that the method is effective.
NASA Technical Reports Server (NTRS)
Lorenzini, E. C.; Cosmo, M.; Vetrella, S.; Moccia, A.
1988-01-01
This paper investigates the dynamics and acceleration levels of a new tethered system for micro and variable-gravity applications. The system consists of two platforms tethered on opposite sides to the Space Station. A fourth platform, the elevator, is placed in between the Space Station and the upper platform. Variable-g levels on board the elevator are obtained by moving this facility along the upper tether, while micro-g experiments are carried out on board the Space Station. By controlling the length of the lower tether the position of the system CM can be maintained on board the Space Station despite variations of the station's distribution of mass. The paper illustrates the mathematical model, the environmental perturbations and the control techniques which have been adopted for the simulation and control of the system dynamics. Two sets of results from two different simulation runs are shown. The first set shows the system dynamics and the acceleration spectra on board the Space Station and the elevator during station-keeping. The second set of results demonstrates the capability of the elevator to attain a preselected g-level.
Does the accuracy of blood pressure measurement correlate with hearing loss of the observer?
Song, Soohwa; Lee, Jongshill; Chee, Youngjoon; Jang, Dong Pyo; Kim, In Young
2014-02-01
The auscultatory method is influenced by the hearing level of the observers. If the observer has hearing loss, it is possible to measure blood pressure inaccurately by misreading the Korotkoff sounds at systolic blood pressure (SBP) and diastolic blood pressure (DBP). Because of the potential clinical problems this discrepancy may cause, we used a hearing loss simulator to determine how hearing level affects the accuracy of blood pressure measurements. Two data sets (data set A, 32 Korotkoff sound video clips recorded by the British Hypertension Society; data set B, 28 Korotkoff sound recordings acquired from the Korotkoff sound recording system developed by Hanyang University) were used, and all the data were attenuated to simulate a hearing loss of 5, 10, 15, 20, and 25 dB using the hearing loss simulator. Five observers with normal hearing assessed the blood pressures from these data sets, and the differences between the values measured from the original recordings (no attenuation) and the attenuated versions were analyzed. Greater attenuation of the Korotkoff sounds, or greater hearing loss, resulted in larger blood pressure measurement differences when compared with the original data. When measuring blood pressure with hearing loss, the SBP tended to be underestimated and the DBP was overestimated. The mean differences between the original data and the 25 dB hearing loss data for the two data sets combined were 1.55±2.71 and -4.32±4.21 mmHg for SBP and DBP, respectively. This experiment showed that the accuracy of blood pressure measurements using the auscultatory method is affected by observer hearing level. Therefore, to reduce possible error using the auscultatory method, observers' hearing should be tested.
NASA Technical Reports Server (NTRS)
Perez, Reinaldo J.
2011-01-01
Single-event transients (SETs) in analog and digital electronics, produced by highly energetic nuclear particles in space, can disrupt, temporarily and sometimes permanently, the functionality and performance of electronics in space vehicles. This work first provides some insights into the modeling of SETs in electronic circuits that can be used in SPICE-like simulators. The work then presents methodologies, one of which was developed by this author, for the assessment of SETs at different levels of integration in electronics, from the circuit level to the subsystem level.
Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment
NASA Astrophysics Data System (ADS)
Zeigler, Bernard P.; Lee, J. S.
1998-08-01
In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user-friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further test in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
Gradient augmented level set method for phase change simulations
NASA Astrophysics Data System (ADS)
Anumolu, Lakshman; Trujillo, Mario F.
2018-01-01
A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
Ikuma, Laura H; Babski-Reeves, Kari; Nussbaum, Maury A
2009-05-01
The objectives of this study were to determine the efficacy of experimental manipulations of psychosocial exposures and to evaluate the sensitivity of a psychosocial questionnaire by determining the factors perceived. A 50-item questionnaire was developed from the job content questionnaire (JCQ) and the quality of worklife survey (QWL). The experiment involved simulated work at different physical and psychosocial levels. Forty-eight participants were exposed to two levels of one psychosocial manipulation (job control, job demands, social support, or time pressure). Significantly different questionnaire responses supported the effectiveness of psychosocial manipulations. Exploratory factor analysis revealed five factors: skill discretion and decision authority, stress level and supervisor support, physical demands, quality of coworker support, and decision-making support. These results suggest that psychosocial factors can be manipulated experimentally, and that questionnaires can distinguish perceptions of these factors. These questionnaires may be used to assess perceptions of psychosocial factors in experimental settings.
NASA Astrophysics Data System (ADS)
Montoliu, C.; Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Colom, R. J.
2013-10-01
The use of atomistic methods, such as the Continuous Cellular Automaton (CCA), is currently regarded as a computationally efficient and experimentally accurate approach for the simulation of anisotropic etching of various substrates in the manufacture of Micro-electro-mechanical Systems (MEMS). However, when the features of the chemical process are modified, a time-consuming calibration process needs to be used to transform the new macroscopic etch rates into a corresponding set of atomistic rates. Furthermore, changing the substrate requires a labor-intensive effort to reclassify most atomistic neighborhoods. In this context, the Level Set (LS) method provides an alternative approach where the macroscopic forces affecting the front evolution are directly applied at the discrete level, thus avoiding the need for reclassification and/or calibration. Correspondingly, we present a fully-operational Sparse Field Method (SFM) implementation of the LS approach, discussing in detail the algorithm and providing a thorough characterization of the computational cost and simulation accuracy, including a comparison to the performance of the most recent CCA model. We conclude that the SFM implementation achieves accuracy similar to the CCA method, with fewer fluctuations in the etch front, while requiring roughly a quarter of the memory. Although SFM can be up to 2 times slower than CCA for the simulation of anisotropic etchants, it can also be up to 10 times faster than CCA for isotropic etchants. In addition, we present a parallel, GPU-based implementation (gSFM) and compare it to an optimized, multicore CPU version (cSFM), demonstrating that the SFM algorithm can be successfully parallelized and the simulation times consequently reduced, while keeping the accuracy of the simulations. Although modern multicore CPUs provide an acceptable option, the massively parallel architecture of modern GPUs is more suitable, as reflected by computational times for gSFM up to 7.4 times faster than for cSFM.
Crowell, Valerie; Briët, Olivier J T; Hardy, Diggory; Chitnis, Nakul; Maire, Nicolas; Di Pasquale, Aurelio; Smith, Thomas A
2013-01-03
Past experience and modelling suggest that, in most cases, mass treatment strategies are not likely to succeed in interrupting Plasmodium falciparum malaria transmission. However, this does not preclude their use to reduce disease burden. Mass screening and treatment (MSAT) is preferred to mass drug administration (MDA), as the latter involves massive over-use of drugs. This paper reports simulations of the incremental cost-effectiveness of well-conducted MSAT campaigns as a strategy for P. falciparum malaria disease-burden reduction in settings with varying receptivity (ability of the combined vector population in a setting to transmit disease) and access to case management. MSAT incremental cost-effectiveness ratios (ICERs) were estimated in different sub-Saharan African settings using simulation models of the dynamics of malaria and a literature-based MSAT cost estimate. Imported infections were simulated at a rate of two per 1,000 population per annum. These estimates were compared to the ICERs of scaling up case management or insecticide-treated net (ITN) coverage in each baseline health system, in the absence of MSAT. MSAT averted most episodes, and resulted in the lowest ICERs, in settings with a moderate level of disease burden. At a low pre-intervention entomological inoculation rate (EIR) of two infectious bites per adult per annum (IBPAPA) MSAT was never more cost-effective than scaling up ITNs or case management coverage. However, at pre-intervention entomological inoculation rates (EIRs) of 20 and 50 IBPAPA and ITN coverage levels of 40 or 60%, respectively, the ICER of MSAT was similar to that of scaling up ITN coverage further. In all the transmission settings considered, achieving a minimal level of ITN coverage is a "best buy". At low transmission, MSAT probably is not worth considering. Instead, MSAT may be suitable at medium to high levels of transmission and at moderate ITN coverage. If undertaken as a burden-reducing intervention, MSAT should be continued indefinitely and should complement, not replace, case management and vector control interventions.
Cheng, Li-Tien; Wang, Zhongming; Setny, Piotr; Dzubiella, Joachim; Li, Bo; McCammon, J Andrew
2009-10-14
A model nanometer-sized hydrophobic receptor-ligand system in aqueous solution is studied by the recently developed level-set variational implicit solvent model (VISM). This approach is compared to all-atom computer simulations. The simulations reveal complex hydration effects within the (concave) receptor pocket, sensitive to the distance of the (convex) approaching ligand. The ligand induces and controls an intermittent switching between dry and wet states of the hosting pocket, which determines the range and magnitude of the pocket-ligand attraction. In the level-set VISM, a geometric free-energy functional of all possible solute-solvent interfaces coupled to the local dispersion potential is minimized numerically. This approach captures the distinct metastable states that correspond to topologically different solute-solvent interfaces, and thereby reproduces the bimodal hydration behavior observed in the all-atom simulation. Geometrical singularities formed during the interface relaxation are found to contribute significantly to the energy barrier between different metastable states. While the hydration phenomena can thus be explained by capillary effects, the explicit inclusion of dispersion and curvature corrections seems to be essential for a quantitative description of hydrophobically confined systems on nanoscales. This study may shed more light onto the tight connection between geometric and energetic aspects of biomolecular hydration and may represent a valuable step toward the proper interpretation of experimental receptor-ligand binding rates.
NASA Astrophysics Data System (ADS)
Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme
2013-04-01
Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with the wave set-up contribution are estimated here for one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is computation of the joint probability of simultaneous wave height and still sea level; the second is interpretation of those joint probabilities to assess the sea level for a given return period. Two approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: multivariate extreme-value distributions of the logistic type, in which all components of the variables become large simultaneously, and the conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, in which the estimation is more accurate but requires more computation time, and classical ocean-engineering design contours of the inverse-FORM type, which are simpler and allow more complex estimation of the wave set-up part (for example, wave propagation to the coast). We compare results from the two approaches with the two methods. To be able to use both the Monte Carlo simulation and the design-contour methods, wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach over the multivariate extreme-value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean-engineering design-contour method, which is an alternative when the computation of sea levels is too complex for the Monte Carlo simulation method.
NASA Technical Reports Server (NTRS)
Moller, Bjorn; Garro, Alfredo; Falcone, Alberto; Crues, Edwin Z.; Dexter, Daniel E.
2016-01-01
Distributed and real-time simulation plays a key role in the Space domain, being exploited for mission and system analysis and engineering as well as for crew training and operational support. One of the most popular standards is the 1516-2010 IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA). HLA supports the implementation of distributed simulations (called Federations) in which a set of simulation entities (called Federates) can interact using a Run-Time Infrastructure (RTI). In a given Federation, a Federate can publish and/or subscribe to objects and interactions on the RTI only in accordance with their structures as defined in a FOM (Federation Object Model). Currently, the Space domain is characterized by a set of incompatible FOMs that, although they meet the specific needs of different organizations and projects, increase the long-term cost of interoperability. In this context, the availability of a reference FOM for the Space domain will enable the development of interoperable HLA-based simulators for related joint projects and collaborations among worldwide organizations involved in the Space domain (e.g. NASA, ESA, Roscosmos, and JAXA). The paper presents a first set of results achieved by a SISO standardization effort that aims at providing a Space Reference FOM for international collaboration on Space systems simulations.
Level-Set Simulation of Viscous Free Surface Flow Around a Commercial Hull Form
2005-04-15
Abstract: The viscous free surface flow around a 3600 TEU KRISO container ship is computed using the finite-volume-based multi-block RANS code WAVIS ... developed at KRISO. The free surface is captured with the level-set method and the realizable k-ε model is employed for turbulence closure. The ... computations are done for a 3600 TEU container ship of the Korea Research Institute of Ships & Ocean Engineering, KORDI (hereafter, KRISO) selected as
Conrads, Paul; Roehl, Edwin A.
2010-01-01
Two scenarios were simulated with the LOXANN DSS. One scenario increased the historical flows at four control structures by 40 percent. The second scenario used a user-defined hydrograph to set the outflow from the Refuge to the weekly average inflow to the Refuge delayed by 2 days. Both scenarios decreased the potential of canal water intruding into the marsh by decreasing the slope of the water level between the canals and the marsh.
Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model
NASA Astrophysics Data System (ADS)
Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin
2016-08-01
This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is more noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance is highly convenient under a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our simulation capability to sizes of L = 32, 64 for a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
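The mid-point insertion strategy can be sketched as follows: wherever the measured exchange acceptance rate between neighbouring temperatures falls below a chosen threshold, a mid-point temperature is added. The threshold value and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def refine_temperature_set(temps, exchange_rates, min_rate=0.2):
    """Insert mid-point temperatures at every gap whose replica-exchange
    acceptance rate falls below min_rate."""
    refined = [temps[0]]
    for t_lo, t_hi, rate in zip(temps[:-1], temps[1:], exchange_rates):
        if rate < min_rate:
            refined.append(0.5 * (t_lo + t_hi))   # mid-point insertion
        refined.append(t_hi)
    return np.array(refined)

temps = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
rates = np.array([0.45, 0.12, 0.38, 0.08])        # measured rate per gap
print(refine_temperature_set(temps, rates))
# -> [1.   1.5  1.75 2.   2.5  2.75 3.  ]
```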
Simulation for learning and teaching procedural skills: the state of the science.
Nestel, Debra; Groom, Jeffrey; Eikeland-Husebø, Sissel; O'Donnell, John M
2011-08-01
Simulation is increasingly used to support learning of procedural skills. Our panel was tasked with summarizing the "best evidence." We addressed the following question: To what extent does simulation support learning and teaching in procedural skills? We conducted a literature search from 2000 to 2010 using Medline, CINAHL, ERIC, and PSYCHINFO databases. Inclusion criteria were established and then data extracted from abstracts according to several categories. Although secondary sources of literature were sourced from key informants and participants at the "Research Consensus Summit: State of the Science," they were not included in the data extraction process but were used to inform discussion. Eighty-one of 1,575 abstracts met inclusion criteria. The uses of simulation for learning and teaching procedural skills were diverse. The most commonly reported simulator type was manikins (n = 17), followed by simulated patients (n = 14), anatomic simulators (eg, part-task) (n = 12), and others. For research design, most abstracts (n = 52) were at Level IV of the National Health and Medical Research Council classification (ie, case series, posttest, or pretest/posttest, with no control group, narrative reviews, and editorials). The most frequent Best Evidence Medical Education ranking was for conclusions probable (n = 37). Using the modified Kirkpatrick scale for impact of educational intervention, the most frequent classification was for modification of knowledge and/or skills (Level 2b) (n = 52). Abstracts assessed skills (n = 47), knowledge (n = 32), and attitude (n = 15) with the majority demonstrating improvements after simulation-based interventions. Studies focused on immediate gains and skills assessments were usually conducted in simulation. The current state of the science finds that simulation usually leads to improved knowledge and skills. Learners and instructors express high levels of satisfaction with the method. While most studies focus on short-term gains attained in the simulation setting, a small number support the transfer of simulation learning to clinical practice. Further study is needed to optimize the alignment of learner, instructor, simulator, setting, and simulation for learning and teaching procedural skills. Instructional design and educational theory, contextualization, transferability, accessibility, and scalability must all be considered in simulation-based education programs. More consistently, robust research designs are required to strengthen the evidence.
ERIC Educational Resources Information Center
Blikstein, Paulo; Wilensky, Uri
2009-01-01
This article reports on "MaterialSim", a set of undergraduate-level constructionist computational materials science activities that we have developed and tested in classrooms. We investigate: (a) the cognition of students engaging in scientific inquiry through interacting with simulations; (b) the effects of students programming simulations as…
ERIC Educational Resources Information Center
Koka, Andre
2017-01-01
This study examined the effectiveness of a brief theory-based intervention on muscular strength among adolescents in a physical education setting. The intervention adopted a process-based mental simulation technique. The self-reported frequency of practising for and actual levels of abdominal muscular strength/endurance as one component of…
"MSTGen": Simulated Data Generator for Multistage Testing
ERIC Educational Resources Information Center
Han, Kyung T.
2013-01-01
Multistage testing, or MST, was developed as an alternative to computerized adaptive testing (CAT) for applications in which it is preferable to administer a test at the level of item sets (i.e., modules). As with CAT, the simulation technique in MST plays a critical role in the development and maintenance of tests. "MSTGen," a new MST…
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Hristopulos, Dionissios T.
2009-02-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
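The cost term described above can be illustrated for ±1 spins on a regular 2D grid with periodic boundaries; the normalization and neighbourhood definition used by the authors may differ, so this is only a sketch of the idea.

```python
import numpy as np

def correlation_energy(spins):
    """Normalized nearest-neighbour correlation energy of a +/-1 spin grid
    (periodic boundaries via np.roll)."""
    horiz = (spins * np.roll(spins, 1, axis=1)).mean()
    vert = (spins * np.roll(spins, 1, axis=0)).mean()
    return -0.5 * (horiz + vert)

def cost(simulated, sample):
    """Squared deviation between correlation energies of a simulated
    realization and the sample data -- the quantity the conditional
    simulation is driven to minimize."""
    return (correlation_energy(simulated) - correlation_energy(sample)) ** 2

rng = np.random.default_rng(0)
sample = rng.choice([-1, 1], size=(64, 64))       # stand-in for discretized data
simulated = rng.choice([-1, 1], size=(64, 64))
print(cost(simulated, sample))
```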
He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min
2013-01-01
Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information for some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be potentially representative of the performance of species-level simulations while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
Gray, Whitney Austin; Kesten, Karen S; Hurst, Stephen; Day, Tama Duffy; Anderko, Laura
2012-01-01
The aim of this pilot study was to test design interventions such as lighting, color, and spatial color patterning on nurses' stress, alertness, and satisfaction, and to provide an example of how clinical simulation centers can be used to conduct research. The application of evidence-based design research in healthcare settings requires a transdisciplinary approach. Integrating approaches from multiple fields in real-life settings often proves time-consuming and experimentally difficult. However, forums for collaboration such as clinical simulation centers may offer a solution. In these settings, identical operating and patient rooms are used to deliver simulated patient care scenarios using automated mannequins. Two identical rooms were modified in the clinical simulation center. Nurses spent 30 minutes in each room performing simulated cardiac resuscitation. Subjective measures of nurses' stress, alertness, and satisfaction were collected and compared between settings and across time using matched-pair t-test analysis. Nurses reported feeling less stressed after exposure to the experimental room than nurses who were exposed to the control room (2.22, p = .03). Scores post-session indicated a significant reduction in stress and an increase in alertness after exposure to the experimental room as compared to the control room, with significance levels below .10 (change in stress scores: 3.44, p = .069; change in alertness scores: 3.6, p = .071). This study reinforces the use of validated survey tools to measure stress, alertness, and satisfaction. Results support human-centered design approaches by evaluating the effect on nurses in an experimental setting.
Single-Event Effects in High-Frequency Linear Amplifiers: Experiment and Analysis
NASA Astrophysics Data System (ADS)
Zeinolabedinzadeh, Saeed; Ying, Hanbin; Fleetwood, Zachary E.; Roche, Nicolas J.-H.; Khachatrian, Ani; McMorrow, Dale; Buchner, Stephen P.; Warner, Jeffrey H.; Paki-Amouzou, Pauline; Cressler, John D.
2017-01-01
The single-event transient (SET) response of two different silicon-germanium (SiGe) X-band (8-12 GHz) low noise amplifier (LNA) topologies is fully investigated in this paper. The two LNAs were designed and implemented in 130nm SiGe HBT BiCMOS process technology. Two-photon absorption (TPA) laser pulses were utilized to induce transients within various devices in these LNAs. Impulse response theory is identified as a useful tool for predicting the settling behavior of the LNAs subjected to heavy ion strikes. Comprehensive device and circuit level modeling and simulations were performed to accurately simulate the behavior of the circuits under ion strikes. The simulations agree well with TPA measurements. The simulation, modeling and analysis presented in this paper can be applied for any other circuit topologies for SET modeling and prediction.
Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery
NASA Astrophysics Data System (ADS)
Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.
2017-05-01
In this paper, we compare the performance of multiple turbulence mitigation algorithms to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte-Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: Lucky Look imaging, bispectral speckle imaging, and a block matching method with restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
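A hedged sketch of the Poisson-Gaussian degradation step is shown below: signal-dependent shot noise followed by additive Gaussian read noise. The parameter names and values are placeholders, not those used in the study.

```python
import numpy as np

def add_poisson_gaussian_noise(image, photons_per_unit=0.5, read_noise_sigma=5.0, rng=None):
    """Degrade a clean image with a Poisson-Gaussian mixture:
    Poisson shot noise scaled by the assumed photon count per grey level,
    plus additive zero-mean Gaussian read noise."""
    rng = rng or np.random.default_rng()
    shot = rng.poisson(image * photons_per_unit) / photons_per_unit
    return shot + rng.normal(0.0, read_noise_sigma, image.shape)

clean = np.random.default_rng(1).random((64, 64)) * 255.0   # placeholder frame
noisy = add_poisson_gaussian_noise(clean)
print(noisy.shape, float(noisy.std()))
```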
On predicting contamination levels of HALOE optics aboard UARS using direct simulation Monte Carlo
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Rault, Didier F. G.
1993-01-01
A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flowfield and surface conditions and geometric orientations in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. Problems resolving species outgassing and vent flux rates that varied over many orders of magnitude were handled using species weighting factors. Results relating to contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface are presented, along with data related to code performance. Using procedures developed in standard contamination analyses, the cumulative level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated to be about 2700 Å.
Cosimulation of embedded system using RTOS software simulator
NASA Astrophysics Data System (ADS)
Wang, Shihao; Duan, Zhigang; Liu, Mingye
2003-09-01
Embedded system design often employs co-simulation to verify a system's function; one efficient software verification tool is the instruction set simulator (ISS). As a full functional model of the target CPU, an ISS interprets the instructions of the embedded software step by step, which is usually time-consuming since it simulates at a low level. Hence the ISS often becomes the bottleneck of co-simulation in a complicated system. In this paper, a new software verification tool, the RTOS software simulator (RSS), is presented, and the mechanism of its operation is described in full detail. In the RSS method, the RTOS API is extended and a hardware-simulator driver is adopted to handle data exchange and synchronization between the two simulators.
Improving membrane protein expression by optimizing integration efficiency
2017-01-01
The heterologous overexpression of integral membrane proteins in Escherichia coli often yields insufficient quantities of purifiable protein for applications of interest. The current study leverages a recently demonstrated link between co-translational membrane integration efficiency and protein expression levels to predict protein sequence modifications that improve expression. Membrane integration efficiencies, obtained using a coarse-grained simulation approach, robustly predicted effects on expression of the integral membrane protein TatC for a set of 140 sequence modifications, including loop-swap chimeras and single-residue mutations distributed throughout the protein sequence. Mutations that improve simulated integration efficiency were 4-fold enriched with respect to improved experimentally observed expression levels. Furthermore, the effects of double mutations on both simulated integration efficiency and experimentally observed expression levels were cumulative and largely independent, suggesting that multiple mutations can be introduced to yield higher levels of purifiable protein. This work provides a foundation for a general method for the rational overexpression of integral membrane proteins based on computationally simulated membrane integration efficiencies. PMID:28918393
A Flexible Approach for the Statistical Visualization of Ensemble Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, K.; Wilson, A.; Bremer, P.
2009-09-29
Scientists are increasingly moving towards ensemble data sets to explore relationships present in dynamic systems. Ensemble data sets combine spatio-temporal simulation results generated using multiple numerical models, sampled input conditions and perturbed parameters. While ensemble data sets are a powerful tool for mitigating uncertainty, they pose significant visualization and analysis challenges due to their complexity. We present a collection of overview and statistical displays linked through a high level of interactivity to provide a framework for gaining key scientific insight into the distribution of the simulation results as well as the uncertainty associated with the data. In contrast to methods that present large amounts of diverse information in a single display, we argue that combining multiple linked statistical displays yields a clearer presentation of the data and facilitates a greater level of visual data analysis. We demonstrate this approach using driving problems from climate modeling and meteorology and discuss generalizations to other fields.
Simulating Operation of a Large Turbofan Engine
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Frederick, Dean K.; DeCastro, Jonathan
2008-01-01
The Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) is a computer program for simulating transient operation of a commercial turbofan engine that can generate as much as 90,000 lb (approximately 0.4 MN) of thrust. It includes a power-management system that enables simulation of open- or closed-loop engine operation over a wide range of thrust levels throughout the full range of flight conditions. C-MAPSS provides the user with a set of tools for performing open- and closed-loop transient simulations and comparison of linear and non-linear models throughout its operating envelope, in an easy-to-use graphical environment.
Large Terrain Modeling and Visualization for Planets
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher
2011-01-01
Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in real-time simulations is the ability to handle large multi-resolution terrain data sets within models as well as for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage for dealing with these large data sets. The visualization technique uses a real-time GPU-based continuous level-of-detail approach that delivers multiple frames per second even for planetary-scale terrain models.
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Wood, Frank M., Jr.; Ahmad, Suraiya P.
1988-01-01
The NS001 Thematic Mapper Simulator scanner (TMS) and several modular multispectral radiometers (MMRs) are among the primary instruments used in the First ISLSCP (International Satellite Land Surface Climatology Project) Field Experiment (FIFE). The NS001 has a continuously variable gain setting. Calibration of the NS001 data is influenced by drift in the dark current level of up to six counts during a mirror scan at typical gain settings. The MMR instruments are being used in their 1 deg FOV configuration on the helicopter and 15 deg FOV on the ground.
EqualWrites: Reducing Intra-set Write Variations for Enhancing Lifetime of Non-volatile Caches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Sparsh; Vetter, Jeffrey S.
2015-01-29
Driven by the trends of increasing core-count and the bandwidth-wall problem, the size of last-level caches (LLCs) has greatly increased, and hence researchers have explored non-volatile memories (NVMs), which provide high density and consume low leakage power. Since NVMs have low write-endurance and the existing cache management policies are write-variation-unaware, effective wear-leveling techniques are required for achieving reasonable cache lifetimes using NVMs. We present EqualWrites, a technique for mitigating intra-set write variation. Our technique works by recording the number of writes on a block and changing the cache-block location of a hot data item to redirect future writes to a cold block and achieve wear-leveling. Simulation experiments have been performed using an x86-64 simulator and benchmarks from SPEC06 and the HPC (high-performance computing) field. The results show that for single-, dual- and quad-core system configurations, EqualWrites improves cache lifetime by 6.31X, 8.74X and 10.54X, respectively. In addition, its implementation overhead is very small, and it provides a larger improvement in lifetime than three other intra-set wear-leveling techniques and a cache replacement policy.
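The core idea, recording per-block write counts and migrating hot data to a cold block within the same set, can be illustrated with a toy model. The thresholds, data structures, and migration policy below are simplified assumptions and not the paper's actual algorithm.

```python
import numpy as np

class WearLevelledSet:
    """Toy intra-set wear-levelling: track per-way write counts and move a
    hot logical block to the least-written way once its count exceeds the
    set average by a fixed margin."""
    def __init__(self, n_ways=8, margin=16):
        self.writes = np.zeros(n_ways, dtype=int)
        self.margin = margin
        self.mapping = list(range(n_ways))        # logical block -> physical way

    def write(self, block):
        way = self.mapping[block]
        self.writes[way] += 1
        if self.writes[way] > self.writes.mean() + self.margin:
            cold = int(np.argmin(self.writes))
            other = self.mapping.index(cold)      # block currently on the cold way
            self.mapping[block], self.mapping[other] = cold, way

cache_set = WearLevelledSet()
for _ in range(200):
    cache_set.write(3)            # one hot block hammers a single way...
print(cache_set.writes)           # ...yet the writes end up spread across ways
```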
NASA Astrophysics Data System (ADS)
Sampath, D. M. R.; Boski, T.
2018-05-01
Large-scale geomorphological evolution of an estuarine system was simulated by means of a hybrid estuarine sedimentation model (HESM) applied to the Guadiana Estuary, in Southwest Iberia. The model simulates the decadal-scale morphodynamics of the system under environmental forcing, using a set of analytical solutions to simplified equations of tidal wave propagation in shallow waters, constrained by empirical knowledge of estuarine sedimentary dynamics and topography. The key controlling parameters of the model are bed friction (f), the current velocity power of the erosion rate function (N), and the sea-level rise rate. An assessment of the sensitivity of the simulated sediment surface elevation (SSE) change to these controlling parameters was performed. The model predicted the spatial differentiation of accretion and erosion, the latter especially marked in the mudflats between mean sea level and the low-tide level, while accretion occurred mainly in a subtidal channel. The average SSE change depended on both the friction coefficient and the power of the current velocity. Analysis of the average annual SSE change suggests that the states of the intertidal and subtidal compartments of the estuarine system evolve differently according to the dominant processes (erosion and accretion). As the Guadiana estuarine system shows dominant erosional behaviour in the context of sea-level rise and sediment supply reduction after the closure of the Alqueva Dam, the most plausible sets of parameter values for the Guadiana Estuary are N = 1.8 and f = 0.8f0, or N = 2 and f = f0, where f0 is the empirically estimated value. For these sets of parameter values, the relative errors in SSE change did not exceed ±20% in 73% of the simulation cells in the studied area. Such a limit of accuracy can be acceptable for an idealized modelling of coastal evolution in response to uncertain sea-level rise scenarios in the context of reduced sediment supply due to flow regulation. Therefore, the idealized but cost-effective HESM will be suitable for estimating the morphological impacts of sea-level rise on estuarine systems on a decadal timescale.
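To make the role of the two calibration parameters concrete, the toy calculation below lets annual erosion grow as f times the current speed raised to the power N, while submerged cells accrete at the sea-level-rise rate. The coefficients and the accretion term are placeholders chosen for illustration and do not reproduce HESM's actual formulation.

```python
import numpy as np

def sse_change(depth, velocity, f=0.8, N=1.8, k=1e-4, slr=0.004, dt=1.0):
    """Illustrative annual sediment-surface-elevation change (m):
    erosion scales as k * f * |u|**N, accretion (placeholder) fills the
    accommodation created by sea-level rise slr (m/yr) in submerged cells."""
    erosion = k * f * np.abs(velocity) ** N
    accretion = np.where(depth > 0.0, slr, 0.0)
    return dt * (accretion - erosion)

depth = np.array([0.2, 1.5, 3.0])     # m below mean sea level
u = np.array([0.9, 0.4, 0.1])         # peak tidal current speed, m/s
print(sse_change(depth, u))
```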
NASA Technical Reports Server (NTRS)
Sree, Dave
2015-01-01
Near-field acoustic power level analysis of the F31A31 open rotor model has been performed to determine its noise characteristics at simulated cruise flight conditions. The non-proprietary parts of the test data obtained from experiments in the 8x6 supersonic wind tunnel were provided by NASA-Glenn Research Center. The tone and broadband components of total noise have been separated from raw test data by using a new data analysis tool. Results in terms of sound pressure levels, acoustic power levels, and their variations with rotor speed, freestream Mach number, and input shaft power, with different blade-pitch setting angles at simulated cruise flight conditions, are presented and discussed. Empirical equations relating the model's acoustic power level and input shaft power have been developed. The near-field acoustic efficiency of the model at simulated cruise conditions is also determined. It is hoped that the results presented in this work will serve as a database for comparison and improvement of other open rotor blade designs and also for validating open rotor noise prediction codes.
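An empirical relation between acoustic power level and shaft power would typically be obtained with a fit of the kind sketched below; the data values are made-up placeholders, and only the form of the fit (power level against the logarithm of shaft power) is illustrated.

```python
import numpy as np

# Placeholder data: input shaft power (kW) and overall acoustic power level (dB)
shaft_power = np.array([200.0, 300.0, 400.0, 500.0, 600.0])
acoustic_pwl = np.array([128.5, 131.9, 134.4, 136.3, 137.8])

# One plausible form: PWL = a * log10(P_shaft) + b
a, b = np.polyfit(np.log10(shaft_power), acoustic_pwl, 1)
print(f"PWL ~ {a:.1f} * log10(P_shaft) + {b:.1f} dB")
```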
Huntington II Simulation Program - RATS. Student Workbook, Teacher's Guide, and Resource Handbook.
ERIC Educational Resources Information Center
Frishman, Austin
Presented are instructions for the use of "RATS," a model simulating the dynamics of a rat population in either a city or an apartment house. The student controls the conditions of growth and sets the points at which the computer program prints reports. The rat population is controlled by variables including garbage levels selected for the site,…
ERIC Educational Resources Information Center
Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.
2016-01-01
Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…
Teaching Cockpit Automation in the Classroom
NASA Technical Reports Server (NTRS)
Casner, Stephen M.
2003-01-01
This study explores the idea of teaching fundamental cockpit automation concepts and skills to aspiring professional pilots in a classroom setting, without the use of sophisticated aircraft or equipment simulators. Pilot participants from a local professional pilot academy completed eighteen hours of classroom instruction that placed a strong emphasis on understanding the underlying principles of cockpit automation systems and their use in a multi-crew cockpit. The instructional materials consisted solely of a single textbook. Pilots received no hands-on instruction or practice during their training. At the conclusion of the classroom instruction, pilots completed a written examination testing their mastery of what had been taught during the classroom meetings. Following the written exam, each pilot was given a check flight in a full-mission Level D simulator of a Boeing 747-400 aircraft. Pilots were given the opportunity to fly one practice leg, and were then tested on all concepts and skills covered in the class during a second leg. The results of the written exam and simulator checks strongly suggest that instruction delivered in a traditional classroom setting can lead to high levels of preparation without the need for expensive airplane or equipment simulators.
Sørensen, Jette Led; Van der Vleuten, Cees; Lindschou, Jane; Gluud, Christian; Østergaard, Doris; LeBlanc, Vicki; Johansen, Marianne; Ekelund, Kim; Albrechtsen, Charlotte Krebs; Pedersen, Berit Woetman; Kjærgaard, Hanne; Weikop, Pia; Ottesen, Bent
2013-07-17
Unexpected obstetric emergencies threaten the safety of pregnant women. As emergencies are rare, they are difficult to learn. Therefore, simulation-based medical education (SBME) seems relevant. In non-systematic reviews on SBME, medical simulation has been suggested to be associated with improved learner outcomes. However, many questions on how SBME can be optimized remain unanswered. One unresolved issue is how 'in situ simulation' (ISS) versus 'off site simulation' (OSS) impact learning. ISS means simulation-based training in the actual patient care unit (in other words, the labor room and operating room). OSS means training in facilities away from the actual patient care unit, either at a simulation centre or in hospital rooms that have been set up for this purpose. The objective of this randomized trial is to study the effect of ISS versus OSS on individual learning outcome, safety attitude, motivation, stress, and team performance amongst multi-professional obstetric-anesthesia teams. The trial is a single-centre randomized superiority trial including 100 participants. The inclusion criteria were health-care professionals employed at the department of obstetrics or anesthesia at Rigshospitalet, Copenhagen, who were working on shifts and gave written informed consent. Exclusion criteria were managers with staff responsibilities, and staff who were actively taking part in preparation of the trial. The same obstetric multi-professional training was conducted in the two simulation settings. The experimental group was exposed to training in the ISS setting, and the control group in the OSS setting. The primary outcome is the individual score on a knowledge test. Exploratory outcomes are individual scores on a safety attitudes questionnaire, a stress inventory, salivary cortisol levels, an intrinsic motivation inventory, results from a questionnaire evaluating perceptions of the simulation and suggested changes needed in the organization, a team-based score on video-assessed team performance and on selected clinical performance. The perspective is to provide new knowledge on contextual effects of different simulation settings. ClinicalTrials.gov NCT01792674.
NASA Astrophysics Data System (ADS)
Janardhanan, S.; Datta, B.
2011-12-01
Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial-intelligence-based models are most often used for this purpose, trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate-model-based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple-realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and aquifer recharge are treated as uncertain. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters, to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, namely maximizing total pumping from beneficial wells and minimizing total pumping from barrier wells for hydraulic control of saltwater intrusion, are considered. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. The reliability concept is incorporated as the percentage of the surrogate models that satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and that constraint violation increases as the reliability level is reduced. Thus ensemble-surrogate-based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
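The reliability measure described, i.e. the fraction of ensemble surrogates whose predictions satisfy the salinity constraints, can be computed along the lines of the sketch below; the array shapes, units, and limit value are illustrative assumptions.

```python
import numpy as np

def reliability(surrogate_predictions, salinity_limit):
    """Fraction of ensemble surrogates whose predicted salinity stays
    within the limit at every monitoring location."""
    preds = np.asarray(surrogate_predictions)   # (n_surrogates, n_locations)
    ok = np.all(preds <= salinity_limit, axis=1)
    return float(ok.mean())

# Placeholder predictions from 5 surrogates at 3 monitoring wells (kg/m^3)
preds = [[0.8, 1.1, 0.9],
         [0.7, 1.0, 0.8],
         [0.9, 1.3, 1.0],
         [0.6, 0.9, 0.7],
         [0.8, 1.2, 0.9]]
print(reliability(preds, salinity_limit=1.2))   # -> 0.8
```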
2015-01-01
To simulate effects of pesticides on different honeybee (Apis mellifera L.) life stages, we used the BEEHAVE model to explore how increased mortalities of larvae, in-hive workers, and foragers, as well as reduced egg-laying rate, could impact colony dynamics over multiple years. Stresses were applied for 30 days, both as multiples of the modeled control mortality and as set percentage daily mortalities to assess the sensitivity of the modeled colony both to small fluctuations in mortality and periods of low to very high daily mortality. These stresses simulate stylized exposure of the different life stages to nectar and pollen contaminated with pesticide for 30 days. Increasing adult bee mortality had a much greater impact on colony survival than mortality of bee larvae or reduction in egg laying rate. Importantly, the seasonal timing of the imposed mortality affected the magnitude of the impact at colony level. In line with the LD50, we propose a new index of “lethal imposed stress”: the LIS50 which indicates the level of stress on individuals that results in 50% colony mortality. This (or any LISx) is a comparative index for exploring the effects of different stressors at colony level in model simulations. While colony failure is not an acceptable protection goal, this index could be used to inform the setting of future regulatory protection goals. PMID:26444386
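A minimal sketch of how an LIS50-style index could be read off a set of simulated colony-mortality responses by linear interpolation; the stress levels and mortalities below are invented for illustration, not BEEHAVE output.

```python
import numpy as np

def lis50(stress_levels, colony_mortality):
    """Interpolate the imposed-stress level at which simulated colony
    mortality reaches 50% (an LIS50-style index).
    stress_levels   : increasing multiples of control mortality
    colony_mortality: fraction of simulated colonies that failed at each level."""
    return float(np.interp(0.5, colony_mortality, stress_levels))

# Illustrative dose-response from hypothetical model runs (not study data).
stress = [1, 2, 4, 8, 16]
mortality = [0.0, 0.05, 0.20, 0.65, 1.0]
print("LIS50 ~", lis50(stress, mortality))   # between 4x and 8x control mortality
```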
Garcia, Ediberto; Newfang, Daniel; Coyle, Jayme P; Blake, Charles L; Spencer, John W; Burrelli, Leonard G; Johnson, Giffe T; Harbison, Raymond D
2018-07-01
Three independent asbestos exposure evaluations were conducted using wire gauze pads, similar to standard practice in the laboratory setting. All testing occurred in a controlled atmosphere inside an enclosed chamber simulating a laboratory setting. Separate teams consisting of a laboratory technician, or technician and assistant, simulated common tasks involving wire gauze pads, including heating and direct wire gauze manipulation. Area and personal air samples were collected and evaluated for asbestos consistent with National Institute for Occupational Safety and Health (NIOSH) methods 7400 and 7402, and the Asbestos Hazard Emergency Response Act (AHERA) method. Bulk gauze pad samples were analyzed by Polarized Light Microscopy and Transmission Electron Microscopy to determine asbestos content. Among air samples, chrysotile asbestos was the only fiber found in the first and third experiments, and tremolite asbestos the only fiber found in the second experiment. None of the air samples contained asbestos in concentrations above the current permissible regulatory levels promulgated by OSHA. These findings indicate that the level of asbestos exposure when working with wire gauze pads in the laboratory setting is much lower than levels associated with asbestosis or asbestos-related lung cancer and mesothelioma. Copyright © 2018. Published by Elsevier Inc.
Social cognitive theory, metacognition, and simulation learning in nursing education.
Burke, Helen; Mancuso, Lorraine
2012-10-01
Simulation learning encompasses simple, introductory scenarios requiring response to patients' needs during basic hygienic care and during situations demanding complex decision making. Simulation integrates principles of social cognitive theory (SCT) into an interactive approach to learning that encompasses the core principles of intentionality, forethought, self-reactiveness, and self-reflectiveness. Effective simulation requires an environment conducive to learning and introduces activities that foster symbolic coding operations and mastery of new skills; debriefing builds self-efficacy and supports self-regulation of behavior. Tailoring the level of difficulty to students' mastery level supports successful outcomes and motivation to set higher standards. Mindful selection of simulation complexity and structure matches course learning objectives and supports progressive development of metacognition. Theory-based facilitation of simulated learning optimizes efficacy of this learning method to foster maturation of cognitive processes of SCT, metacognition, and self-directedness. Examples of metacognition that are supported through mindful, theory-based implementation of simulation learning are provided. Copyright 2012, SLACK Incorporated.
Rogue taxa phenomenon: a biological companion to simulation analysis
Westover, Kristi M.; Rusinko, Joseph P.; Hoin, Jon; Neal, Matthew
2013-01-01
To provide a baseline biological comparison to simulation study predictions about the frequency of rogue taxa effects, we evaluated the frequency of a rogue taxa effect using viral data sets which differed in diversity. Using a quartet-tree framework, we measured the frequency of a rogue taxa effect in three data sets of increasing genetic variability (within viral serotype, between viral serotype, and between viral family) to test whether the rogue taxa was correlated with the mean sequence diversity of the respective data sets. We found a slight increase in the percentage of rogues as nucleotide diversity increased. Even though the number of rogues increased with diversity, the distribution of the types of rogues (friendly, crazy, or evil) did not depend on the diversity and in the case of the order-level data set the net rogue effect was slightly positive. This study, assessing frequency of the rogue taxa effect using biological data, indicated that simulation studies may over-predict the prevalence of the rogue taxa effect. Further investigations are necessary to understand which types of data sets are susceptible to a negative rogue effect and thus merit the removal of taxa from large phylogenetic reconstructions. PMID:23707704
A methodology for modeling barrier island storm-impact scenarios
Mickey, Rangley C.; Long, Joseph W.; Plant, Nathaniel G.; Thompson, David M.; Dalyander, P. Soupy
2017-02-16
A methodology for developing a representative set of storm scenarios based on historical wave buoy and tide gauge data for a region at the Chandeleur Islands, Louisiana, was developed by the U.S. Geological Survey. The total water level was calculated for a 10-year period and analyzed against existing topographic data to identify when storm-induced wave action would affect island morphology. These events were categorized on the basis of the threshold of total water level and duration to create a set of storm scenarios that were simulated, using a high-fidelity, process-based, morphologic evolution model, on an idealized digital elevation model of the Chandeleur Islands. The simulated morphological changes resulting from these scenarios provide a range of impacts that can help coastal managers determine resiliency of proposed or existing coastal structures and identify vulnerable areas within those structures.
Learning in Stochastic Bit Stream Neural Networks.
van Daalen, Max; Shawe-Taylor, John; Zhao, Jieyu
1996-08-01
This paper presents learning techniques for a novel feedforward stochastic neural network. The model uses stochastic weights and the "bit stream" data representation. It has a clean, analysable functionality and is very attractive with its great potential to be implemented in hardware using standard digital VLSI technology. The design allows simulation at three different levels, and learning techniques are described for each level. The lowest level corresponds to on-chip learning. Simulation results on three benchmark MONK's problems and handwritten digit recognition with a clean set of 500 16 x 16 pixel digits demonstrate that the new model is powerful enough for real-world applications. Copyright 1996 Elsevier Science Ltd
Guo, Zuojun; Li, Bo; Dzubiella, Joachim; Cheng, Li-Tien; McCammon, J Andrew; Che, Jianwei
2013-03-12
In this article, we systematically apply a novel implicit-solvent model, the variational implicit-solvent model (VISM) together with the Coulomb-Field Approximation (CFA), to calculate the hydration free energy of a large set of small organic molecules. Because these molecules have been studied in detail by molecular dynamics simulations and other implicit-solvent models, they provide a good benchmark for evaluating the performance of VISM-CFA. With all-atom Amber force field parameters, VISM-CFA is able to reproduce well not only the experimental and MD-simulated total hydration free energy but also the polar and nonpolar contributions individually. The correlation between VISM-CFA and experiments is R² = 0.763 for the total hydration free energy, with a root-mean-square deviation (RMSD) of 1.83 kcal/mol, and the correlation to results from TIP3P explicit-water MD simulations is R² = 0.839, with an RMSD of 1.36 kcal/mol. In addition, we demonstrate that VISM captures dewetting phenomena in the p53/MDM2 complex and hydrophobic characteristics in the system. This work demonstrates that the level-set VISM-CFA can be used to study the energetic behavior of realistic molecular systems with complicated geometries in solvation, protein-ligand binding, protein-protein association, and protein folding processes.
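For reference, the agreement statistics quoted above (R² against a reference set and RMSD in kcal/mol) can be computed as in this small sketch; the numbers in the example are made up and are not the molecules from the study.

```python
import numpy as np

def r_squared_and_rmsd(predicted, reference):
    """Pearson-correlation R^2 and root-mean-square deviation between two
    sets of hydration free energies (kcal/mol)."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    r = np.corrcoef(predicted, reference)[0, 1]
    rmsd = np.sqrt(np.mean((predicted - reference) ** 2))
    return r ** 2, rmsd

# Illustrative numbers only, not data from the study.
pred = [-3.1, -1.8, -6.4, 0.5, -2.2]
expt = [-2.7, -2.3, -5.1, 1.2, -1.6]
print(r_squared_and_rmsd(pred, expt))
```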
NASA Astrophysics Data System (ADS)
Mirfenderesgi, G.; Bohrer, G.; Matheny, A. M.; Fatichi, S.; Frasson, R. P. M.; Schafer, K. V.
2015-12-01
The Finite-Elements Tree-Crown Hydrodynamics model version 2 (FETCH2) simulates water flow through the tree using the porous media analogy. Empirical equations relate water potential within the stem to stomatal conductance at the leaf level. Leaves are connected to the stem at each height. While still simplified, this approach brings realism to the simulation of transpiration compared with models where stomatal conductance is directly linked to soil moisture. The FETCH2 model accounts for plant hydraulic traits such as xylem conductivity, area of hydro-active xylem, vertical distribution of leaf area, and maximal and minimal xylem water content, and their effect on the dynamics of water flow in the tree system. Such a modeling tool enhances our understanding of the role of hydraulic limitations and allows us to incorporate the effects of short-term water stresses on transpiration. Here, we use FETCH2 parameterized and evaluated with a large sap-flow observation data set, collected from 21 trees of two genera (oak/pine) at Silas Little Experimental Forest, NJ. The well-drained deep sandy soil leads to water stress during many days throughout the growing season. We conduct a set of tree-level transpiration simulations, and use the results to evaluate the effects of different hydraulic strategies on daily transpiration and water use efficiency. We define these "hydraulic strategies" through combinations of multiple sets of parameters in the model that describe the root, stem and leaf hydraulics. After evaluating the performance of the model, we use the results to shed light on the future trajectory of the forest in terms of species-specific transpiration responses. Application of the model to the two co-occurring oak species (Quercus prinus L. and Quercus velutina Lam.) shows that the applied modeling approach successfully captures the differences in water-use strategy through optimizing multiple physiological and hydraulic parameters.
Optimising the performance of an outpatient setting.
Sendi, Pedram; Al, Maiwenn J; Battegay, Manuel
2004-01-24
An outpatient setting typically includes experienced and novice resident physicians who are supervised by senior staff physicians. The performance of this kind of outpatient setting, for a given mix of experienced and novice resident physicians, is determined by the number of senior staff physicians available for supervision. The optimum mix of human resources may be determined using discrete-event simulation. An outpatient setting represents a system where concurrency and resource sharing are important. These concepts can be modelled by means of timed Coloured Petri Nets (CPN), which is a discrete-event simulation formalism. We determined the optimum mix of resources (i.e. the number of senior staff physicians needed for a given number of experienced and novice resident physicians) to guarantee efficient overall system performance. In an outpatient setting with 10 resident physicians, two staff physicians are required to guarantee a minimum level of system performance (42-52 patients are seen per 5-hour period). However, with 3 senior staff physicians system performance can be improved substantially (49-56 patients per 5-hour period). An additional fourth staff physician does not substantially enhance system performance (50-57 patients per 5-hour period). Coloured Petri Nets provide a flexible environment in which to simulate an outpatient setting and assess the impact of any staffing changes on overall system performance, to promote informed resource allocation decisions.
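The study models the clinic with timed Coloured Petri Nets; as a loose analogue only, the sketch below uses a generic discrete-event approach (the third-party SimPy library) in which patient visits compete for a shared pool of supervising staff physicians. All rates, times, and names are illustrative assumptions, not the parameters of the original model.

```python
import random
import simpy  # third-party discrete-event simulation library, assumed available

SHIFT_MIN = 5 * 60          # 5-hour session, in minutes
N_STAFF = 3                 # senior staff physicians available for supervision
patients_seen = 0

def visit(env, staff):
    """One patient consultation: resident work plus supervised case review."""
    global patients_seen
    yield env.timeout(random.expovariate(1 / 20))     # resident consultation time
    with staff.request() as req:                      # wait for a free staff physician
        yield req
        yield env.timeout(random.expovariate(1 / 5))  # supervision / case review
    patients_seen += 1

def arrivals(env, staff):
    while True:
        yield env.timeout(random.expovariate(1 / 6))  # inter-arrival time
        env.process(visit(env, staff))

random.seed(1)
env = simpy.Environment()
staff = simpy.Resource(env, capacity=N_STAFF)
env.process(arrivals(env, staff))
env.run(until=SHIFT_MIN)
print("patients seen in the session:", patients_seen)
```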
Towards a Credibility Assessment of Models and Simulations
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Green, Lawrence L.; Luckring, James M.; Morrison, Joseph H.; Tripathi, Ram K.; Zang, Thomas A.
2008-01-01
A scale is presented to evaluate the rigor of modeling and simulation (M&S) practices for the purpose of supporting a credibility assessment of the M&S results. The scale distinguishes required and achieved levels of rigor for a set of M&S elements that contribute to credibility including both technical and process measures. The work has its origins in an interest within NASA to include a Credibility Assessment Scale in development of a NASA standard for models and simulations.
Mid-Holocene hydrologic model of the Shingobee watershed, Minnesota
Filby, S.K.; Locke, Sharon M.; Person, M.A.; Winter, T.C.; Rosenberry, D.O.; Nieber, J.L.; Gutowski, W.J.; Ito, E.
2002-01-01
A hydrologic model of the Shingobee Watershed in north-central Minnesota was developed to reconstruct mid-Holocene paleo-lake levels for Williams Lake, a surface-water body located in the southern portion of the watershed. Hydrologic parameters for the model were first estimated in a calibration exercise using a 9-yr historical record (1990-1998) of climatic and hydrologic stresses. The model reproduced observed temporal and spatial trends in surface/groundwater levels across the watershed. Mid-Holocene aquifer and lake levels were then reconstructed using two paleoclimatic data sets: CCM1 atmospheric general circulation model output and pollen-transfer functions using sediment core data from Williams Lake. Calculated paleo-lake levels based on pollen-derived paleoclimatic reconstructions indicated a 3.5-m drop in simulated lake levels and were in good agreement with the position of mid-Holocene beach sands observed in a Williams Lake sediment core transect. However, calculated paleo-lake levels based on CCM1 climate forcing produced only a 0.05-m drop in lake levels. We found that decreases in winter precipitation rather than temperature increases had the largest effect on simulated mid-Holocene lake levels. The study illustrates how watershed models can be used to critically evaluate paleoclimatic reconstructions by integrating geologic, climatic, limnologic, and hydrogeologic data sets. © 2002 University of Washington.
Frisch, Stefan A.; Pisoni, David B.
2012-01-01
Objective: Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design: A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results: Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Conclusions: Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing. PMID:11132784
Hart, Diana Elizabeth; Forman, Mark; Veale, Andrew G
2011-09-01
Water condensate in the humidifier tubing can affect bi-level ventilation by narrowing tube diameter and increasing airflow resistance. We investigated room temperature and tubing type as ways to reduce condensate and its effect on bi-level triggering and pressure delivery. In this bench study, the aim was to test the hypothesis that a relationship exists between room temperature and tubing condensate. Using a patient simulator, a ResMed bi-level device was set to 18/8 cm H2O and run for 6 h at room temperatures of 16°C, 18°C and 20°C. The built-in humidifier was set to a low, medium or high setting while using unheated or insulated tubing, or was replaced with a humidifier using heated tubing. Humidifier output, condensate, mask pressure and triggering delay of the bi-level were measured at 1 and 6 h using an infrared hygrometer, metric weights, a Honeywell pressure transducer and a TSI pneumotach. When humidity output exceeded 17.5 mg H2O/L, inspiratory pressure fell by 2-15 cm H2O and triggering was delayed by 0.2-0.9 s. Heating the tubing avoided any such ventilatory effect, whereas warmer room temperatures or insulating the tubing were of marginal benefit. Users of bi-level ventilators need to be aware of this problem and its solution. Bi-level humidifier tubing may need to be heated to ensure correct humidification, pressure delivery and triggering.
NASA Technical Reports Server (NTRS)
Yeh, J. W.
1971-01-01
The general features of the GENET system for simulating networks are described. A set of features is presented which are desirable for network simulations and which are expected to be achieved by this system. Among these features are: (1) two-level network modeling; and (2) problem-oriented operations. Several typical network systems are modeled in the GENET framework to illustrate various of these features and to show its applicability.
Simulating Vibrations in a Complex Loaded Structure
NASA Technical Reports Server (NTRS)
Cao, Tim T.
2005-01-01
The Dynamic Response Computation (DIRECT) computer program simulates vibrations induced in a complex structure by applied dynamic loads. Developed to enable rapid analysis of launch- and landing- induced vibrations and stresses in a space shuttle, DIRECT also can be used to analyze dynamic responses of other structures - for example, the response of a building to an earthquake, or the response of an oil-drilling platform and attached tanks to large ocean waves. For a space-shuttle simulation, the required input to DIRECT includes mathematical models of the space shuttle and its payloads, and a set of forcing functions that simulates launch and landing loads. DIRECT can accommodate multiple levels of payload attachment and substructure as well as nonlinear dynamic responses of structural interfaces. DIRECT combines the shuttle and payload models into a single structural model, to which the forcing functions are then applied. The resulting equations of motion are reduced to an optimum set and decoupled into a unique format for simulating dynamics. During the simulation, maximum vibrations, loads, and stresses are monitored and recorded for subsequent analysis to identify structural deficiencies in the shuttle and/or payloads.
Chai, C; Wong, Y D
2014-02-01
At intersections, vehicles coming from different directions conflict with each other. Improper geometric design and signal settings at a signalized intersection increase the occurrence of conflicts between road users and result in a reduction of the safety level. This study established a cellular automata (CA) model to simulate vehicular interactions involving right-turn vehicles (similar to left-turn vehicles in the US). Through various simulation scenarios for four case cross-intersections, the relationships between conflict occurrences involving right-turn vehicles and traffic volume and right-turn movement control strategies are analyzed. Impacts of traffic volume, permissive right-turn compared to red-amber-green (RAG) arrow, shared straight-through and right-turn lane, as well as signal setting, are estimated from simulation results. The simulation model is found to be able to provide reasonable assessment of conflicts through comparison with an existing simulation approach and observed accidents. Through the proposed approach, prediction models for occurrences and severity of vehicle conflicts can be developed for various geometric layouts and traffic control strategies. Copyright © 2013 Elsevier Ltd. All rights reserved.
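The abstract does not give the CA update rules; as a generic illustration of the kind of rule set a vehicular cellular automaton uses, the sketch below implements a minimal single-lane Nagel-Schreckenberg-style update (acceleration, headway limit, random braking, movement). It is not the intersection-conflict model of the study.

```python
import random

def ca_step(road, length=60, v_max=3, p_slow=0.2):
    """One update of a single-lane circular road.
    road maps cell index -> current speed of the vehicle occupying it."""
    new_road = {}
    occupied = sorted(road)
    for i, cell in enumerate(occupied):
        v = road[cell]
        nxt = occupied[(i + 1) % len(occupied)]
        gap = (nxt - cell - 1) % length
        v = min(v + 1, v_max, gap)                 # accelerate, but keep headway
        if v > 0 and random.random() < p_slow:     # random braking
            v -= 1
        new_road[(cell + v) % length] = v          # move forward
    return new_road

random.seed(0)
state = {c: 0 for c in range(0, 60, 6)}            # 10 vehicles, initially stopped
for _ in range(50):
    state = ca_step(state)
print(sorted(state.items()))
```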
NASA Technical Reports Server (NTRS)
Switzer, George F.
2008-01-01
This document contains a general description for data sets of a wake vortex system in a turbulent environment. The turbulence and thermal stratification of the environment are representative of the conditions on November 12, 2001 near John F. Kennedy International Airport. The simulation assumes no ambient winds. The full three-dimensional simulation of the wake vortex system from a Boeing 747 predicts vortex circulation levels at 80% of their initial value at the time of the proposed vortex encounter. The linked vortex oval orientation showed no twisting, and the oval elevations at the widest point were about 20 meters higher than where the vortex pair joined. Fred Proctor of NASA's Langley Research Center presented the results from this work at the NTSB public hearing that started 29 October 2002. This document contains a description of each data set including: variables, coordinate system, data format, and sample plots. Also included are instructions on how to read the data.
NASA Astrophysics Data System (ADS)
Söderman, Christina; Johnsson, Åse; Vikgren, Jenny; Rossi Norrlund, Rauni; Molnar, David; Mirzai, Maral; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus
2016-03-01
Chest tomosynthesis may be a suitable alternative to computed tomography for the clinical task of follow-up of pulmonary nodules. The aim of the present study was to investigate the detection of pulmonary nodule growth suggestive of malignancy using chest tomosynthesis. Previous studies have indicated maintained levels of detection of pulmonary nodules at dose levels corresponding to that of a conventional lateral radiograph, approximately 0.04 mSv, which motivated performing the present study at this dose level. Pairs of chest tomosynthesis image sets, where the image sets in each pair were acquired of the same patient at two separate occasions, were included in the study. Simulated nodules with original diameters of approximately 8 mm were inserted in the pairs of image sets, simulating situations where the nodule had remained stable in size or increased isotropically in size between the two different imaging occasions. Four different categories of nodule growth were included, corresponding to a volume increase of approximately 21 %, 68 %, 108 % and 250 %. All nodules were centered in the depth direction in the tomosynthesis images. All images were subjected to a simulated dose reduction, resulting in images corresponding to an effective dose of 0.04 mSv. Four observers were given the task of rating their confidence that the nodule was stable in size or not on a five-level rating scale. This was done both before any size measurements were made of the nodule as well as after measurements were performed. Using receiver operating characteristic (ROC) analysis, the rating data for the nodules that were stable in size were compared to the rating data for the nodules simulated to have increased in size. Statistically significant differences between the rating distributions for the stable nodules and all of the four nodule growth categories were found. For the three largest nodule growths, nearly perfect detection of nodule growth was seen. In conclusion, the present study indicates that during optimal imaging conditions and for nodules with diameters of approximately 8 mm that grow fairly symmetrically, chest tomosynthesis performed at a dose level corresponding to that of a lateral chest radiograph can, with high sensitivity, differentiate nodules stable in size from nodules growing at rates associated with fast-growing malignant nodules.
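A small sketch of the ROC analysis applied to five-level confidence ratings, comparing stable nodules against one growth category; the ratings are invented and the use of scikit-learn is an assumption, not the software used in the study.

```python
from sklearn.metrics import roc_auc_score  # scikit-learn assumed available

# Hypothetical five-level confidence ratings (1 = certainly stable,
# 5 = certainly grown); the real study data are not reproduced here.
ratings_stable = [1, 2, 1, 3, 2, 1, 2, 2]     # nodules that did not grow
ratings_grown  = [4, 5, 3, 5, 4, 5, 4, 3]     # nodules simulated to grow

labels = [0] * len(ratings_stable) + [1] * len(ratings_grown)
scores = ratings_stable + ratings_grown

# Area under the ROC curve: probability that a grown nodule receives a
# higher rating than a stable one (ties counted as one half).
print("AUC =", roc_auc_score(labels, scores))
```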
Improvements to Level Set, Immersed Boundary methods for Interface Tracking
NASA Astrophysics Data System (ADS)
Vogl, Chris; Leveque, Randy
2014-11-01
It is not uncommon to find oneself solving a moving boundary problem under flow in the context of some application. Of particular interest is when the moving boundary exerts a curvature-dependent force on the liquid. Such a force arises when observing a boundary that is resistant to bending or has surface tension. Numerically, stable computation of the curvature can be difficult, as it is often described in terms of high-order derivatives of either marker particle positions or of a level set function. To address this issue, the level set method is modified to track not only the position of the boundary, but the curvature as well. The definition of the signed-distance function that is used to modify the level set method is also used to develop an interpolation-free, closest-point method. These improvements are used to simulate a bending-resistant, inextensible boundary under shear flow to highlight area and volume conservation, as well as stable curvature calculation. Funded by an NSF MSPRF grant.
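For context, the conventional way curvature is recovered from a level set function, the quantity the improved method is designed to compute more stably, is as the divergence of the normalized gradient, evaluated here with finite differences. This sketch shows the standard approach, not the modified curvature-tracking scheme of the abstract.

```python
import numpy as np

def levelset_curvature(phi, h):
    """Curvature kappa = div( grad(phi) / |grad(phi)| ) of a 2-D level set
    field phi sampled on a uniform grid with spacing h (central differences)."""
    phi_x, phi_y = np.gradient(phi, h)
    norm = np.sqrt(phi_x**2 + phi_y**2) + 1e-12   # avoid division by zero
    nx, ny = phi_x / norm, phi_y / norm
    nxx = np.gradient(nx, h, axis=0)
    nyy = np.gradient(ny, h, axis=1)
    return nxx + nyy

# Signed-distance function of a circle of radius 0.5: curvature should be ~ 1/0.5 = 2.
h = 0.01
x = np.arange(-1, 1, h)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5
kappa = levelset_curvature(phi, h)
print(kappa[100, 150])   # point (0.0, 0.5) on the interface, expect about 2.0
```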
Persistent homology and non-Gaussianity
NASA Astrophysics Data System (ADS)
Cole, Alex; Shiu, Gary
2018-03-01
In this paper, we introduce the topological persistence diagram as a statistic for Cosmic Microwave Background (CMB) temperature anisotropy maps. A central concept in 'Topological Data Analysis' (TDA), the idea of persistence is to represent a data set by a family of topological spaces. One then examines how long topological features 'persist' as the family of spaces is traversed. We compute persistence diagrams for simulated CMB temperature anisotropy maps featuring various levels of primordial non-Gaussianity of local type. Postponing the analysis of observational effects, we show that persistence diagrams are more sensitive to local non-Gaussianity than previous topological statistics including the genus and Betti number curves, and can constrain Δf_NL^loc = 35.8 at the 68% confidence level on the simulation set, compared to Δf_NL^loc = 60.6 for the Betti number curves. Given the resolution of our simulations, we expect applying persistence diagrams to observational data will give constraints competitive with those of the Minkowski Functionals. This is the first in a series of papers where we plan to apply TDA to different shapes of non-Gaussianity in the CMB and Large Scale Structure.
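A minimal example of producing a persistence diagram for a 2-D scalar field via a sublevel-set (cubical) filtration; the third-party gudhi library and the smoothed Gaussian random field standing in for a CMB map are assumptions for illustration, not the tools or data of the paper.

```python
import numpy as np
import gudhi  # third-party TDA library, assumed available

# A smoothed Gaussian random field as a stand-in for a CMB temperature map.
rng = np.random.default_rng(0)
field = rng.standard_normal((128, 128))
kernel = np.outer(np.hanning(9), np.hanning(9))
field = np.real(np.fft.ifft2(np.fft.fft2(field) *
                             np.fft.fft2(kernel, s=field.shape)))

# Sublevel-set filtration of the map via a cubical complex.
cc = gudhi.CubicalComplex(top_dimensional_cells=field)
diagram = cc.persistence()          # list of (dimension, (birth, death)) pairs

# Persistence (death - birth) of the 1-dimensional features (loops).
loops = [(d - b) for dim, (b, d) in diagram if dim == 1 and np.isfinite(d)]
print("number of loops:", len(loops), "max persistence:", max(loops))
```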
NASA Astrophysics Data System (ADS)
Zhang, Jun; Li, Ri Yi
2018-06-01
Building energy simulation is an important supporting tool for green building design and building energy consumption assessment. At present, building energy simulation software cannot meet the needs of energy consumption analysis and cabinet-level micro-environment control design for prefabricated buildings. A thermal physical model of prefabricated buildings is proposed in this paper; based on this physical model, the energy consumption calculation software for prefabricated cabin buildings (PCES) is developed. With PCES we can perform building parameter setting, energy consumption simulation, and analysis of the building thermal process and energy consumption.
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-12-01
We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
Simpson, Robin; Devenyi, Gabriel A; Jezzard, Peter; Hennessy, T Jay; Near, Jamie
2017-01-01
To introduce a new toolkit for simulation and processing of magnetic resonance spectroscopy (MRS) data, and to demonstrate some of its novel features. The FID appliance (FID-A) is an open-source, MATLAB-based software toolkit for simulation and processing of MRS data. The software is designed specifically for processing data with multiple dimensions (eg, multiple radiofrequency channels, averages, spectral editing dimensions). It is equipped with functions for importing data in the formats of most major MRI vendors (eg, Siemens, Philips, GE, Agilent) and for exporting data into the formats of several common processing software packages (eg, LCModel, jMRUI, Tarquin). This paper introduces the FID-A software toolkit and uses examples to demonstrate its novel features, namely 1) the use of a spectral registration algorithm to carry out useful processing routines automatically, 2) automatic detection and removal of motion-corrupted scans, and 3) the ability to perform several major aspects of the MRS computational workflow from a single piece of software. This latter feature is illustrated through both high-level processing of in vivo GABA-edited MEGA-PRESS MRS data, as well as detailed quantum mechanical simulations to generate an accurate LCModel basis set for analysis of the same data. All of the described processing steps resulted in a marked improvement in spectral quality compared with unprocessed data. Fitting of MEGA-PRESS data using a customized basis set resulted in improved fitting accuracy compared with a generic MEGA-PRESS basis set. The FID-A software toolkit enables high-level processing of MRS data and accurate simulation of in vivo MRS experiments. Magn Reson Med 77:23-33, 2017. © 2015 Wiley Periodicals, Inc.
Calvert, Katrina L; McGurgan, Paul M; Debenham, Edward M; Gratwick, Frances J; Maouris, Panos
2013-12-01
Obstetric emergencies contribute significantly to maternal morbidity and mortality. Current training in the management of obstetric emergencies in Australia and internationally focusses on utilising a multidisciplinary simulation-based model. Arguments for and against this type of training exist, using both economic and clinical reasoning. To identify the evidence base for the clinical impact of simulation training in obstetric emergencies and to address some of the concerns regarding appropriate delivery of obstetric emergency training in the Australian setting. A literature search was performed to identify research undertaken in the area of obstetric emergency training. The initial literature search using broad search terms identified 887 articles which were then reviewed and considered for inclusion if they provided original research with a specific emphasis on the impact of training on clinical outcomes. Ninety-two articles were identified, comprising evidence in the following clinical situations: eclampsia, shoulder dystocia, postpartum haemorrhage, maternal collapse, cord prolapse and teamwork training. Evidence exists for a benefit in knowledge or skills gained from simulation training and for the benefit of training in small units without access to high-fidelity equipment or facilities. Evidence exists for a positive impact of training in obstetric emergencies, although the majority of the available evidence applies to evaluation at the level of participants' confidence, knowledge or skills rather than at the level of impact on clinical outcomes. The model of simulation-based training is an appropriate one for the Australian setting and should be further utilised in rural and remote settings. © 2013 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
NASA Astrophysics Data System (ADS)
Howard, A. D.; Matsubara, Y.; Lloyd, H.
2006-12-01
The DELIM landform evolution model has been adapted to investigate erosional and depositional landforms in two settings with fluctuating base levels. The first is erosion and wave planation of terraced landscapes in Coastal Plain sediments along the estuarine Potomac River. The last 3.5 million years of erosion is simulated with base level fluctuations based upon the long-term oceanic δ18O record, eustatic sea level changes during the last 120 ka, estimates of the history of tectonic uplift in the region, and maximum depths of incision of the Potomac River during sea-level lowstands. Inhibition of runoff erosion by vegetation has been a crucial factor allowing persistence of uplands in the soft coastal plain bedrock. The role of vegetation is simulated as a contributing area-dependent critical shear stress. Development of wave-cut terraces is simulated by episodic planation of the landscape during base-level highstands. Although low base level excursions are infrequent and of short duration, the total amount of erosion is largely controlled by the depth and frequency of lowstands. The model has also been adapted to account for flow routing and accompanying erosion and sedimentation in landscapes with multiple enclosed depressions. The hydrological portion of the model has been calibrated and tested in the Great Basin and Mojave regions of the southwestern U.S. In such a setting, runoff, largely from mountains, may flow through several lacustrine basins, each with evaporative losses. An iterative approach determines the size and depth of lakes, including overflow (or not), that balances runoff and evaporation. The model utilizes information on temperatures, rainfall, runoff, and evaporation within the region to parameterize evaporation and runoff as functions of latitude, mean annual temperature, precipitation, and elevation. The model is successful in predicting the location of modern perennial lakes in the region as well as that of lakes during the last glacial maximum based upon published estimates of changes in mean annual temperature and precipitation within the region. The hydrological model has been coupled with the DELIM landform evolution model to investigate expected patterns of basin sedimentation in cratered landscapes on Mars and the role that fluctuating lake levels have on the form and preservation of deltaic and shoreline sedimentary platforms. As would be expected, base levels that fluctuate widely complicate the pattern of depositional landforms, but recognizable coastal benches develop even with high-amplitude variations.
NASA Astrophysics Data System (ADS)
Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis
2018-02-01
We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver with a one-way coupling approach by feeding into the latter waves via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.
Direct simulation Monte Carlo prediction of on-orbit contaminant deposit levels for HALOE
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Rault, Didier F. G.
1994-01-01
A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flow field and surface conditions and geometric orientations for the satellite in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. A detailed description of the adaptation of this solution method to the study of the satellite's environment is also presented. Results pertaining to the satellite's environment are presented regarding contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface, along with data related to code performance. Using procedures developed in standard contamination analyses, along with many worst-case assumptions, the cumulative upper-limit level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated at about 13,350 Å.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rylander, Matthew; Reno, Matthew J.; Quiroz, Jimmy E.
This paper describes methods that a distribution engineer could use to determine advanced inverter settings to improve distribution system performance. These settings are for fixed power factor, volt-var, and volt-watt functionality. Depending on the level of detail that is desired, different methods are proposed to determine single settings applicable for all advanced inverters on a feeder or unique settings for each individual inverter. Seven distinctly different utility distribution feeders are analyzed to simulate the potential benefit in terms of hosting capacity, system losses, and reactive power attained with each method to determine the advanced inverter settings.
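As an illustration of one of the functions mentioned, a volt-var setting is essentially a piecewise-linear curve mapping terminal voltage to reactive power output. The sketch below uses invented breakpoints, not the settings derived in the paper.

```python
import numpy as np

def volt_var(voltage_pu, v_points=(0.92, 0.98, 1.02, 1.08),
             q_points=(0.44, 0.0, 0.0, -0.44)):
    """Piecewise-linear volt-var curve: reactive power output (per unit of
    rated kVA, positive = injection) as a function of voltage (per unit).
    Breakpoints here are illustrative, not the study's recommended settings."""
    return np.interp(voltage_pu, v_points, q_points)

for v in (0.90, 0.95, 1.00, 1.05, 1.10):
    print(f"V = {v:.2f} pu -> Q = {volt_var(v):+.2f} pu")
```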
van Dongen, Koen W; Ahlberg, Gunnar; Bonavina, Luigi; Carter, Fiona J; Grantcharov, Teodor P; Hyltander, Anders; Schijven, Marlies P; Stefani, Alessandro; van der Zee, David C; Broeders, Ivo A M J
2011-01-01
Virtual reality (VR) simulators have been demonstrated to improve basic psychomotor skills in endoscopic surgery. The exercise configuration settings used for validation in studies published so far are default settings or are based on the personal choice of the tutors. The purpose of this study was to establish consensus on exercise configurations and on a validated training program for a virtual reality simulator, based on the experience of international experts to set criterion levels to construct a proficiency-based training program. A consensus meeting was held with eight European teams, all extensively experienced in using the VR simulator. Construct validity of the training program was tested by 20 experts and 60 novices. The data were analyzed by using the t test for equality of means. Consensus was achieved on training designs, exercise configuration, and examination. Almost all exercises (7/8) showed construct validity. In total, 50 of 94 parameters (53%) showed significant difference. A European, multicenter, validated, training program was constructed according to the general consensus of a large international team with extended experience in virtual reality simulation. Therefore, a proficiency-based training program can be offered to training centers that use this simulator for training in basic psychomotor skills in endoscopic surgery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forsline, P.L.; Musselman, R.C.; Kender, W.J.
Mature McIntosh, Empire, and Golden Delicious apple trees (Malus domestica) were sprayed with simulated acid rain solutions in the pH range of 2.5 to 5.5 at full bloom in 1980 and 1981. In 1981, weekly sprays were applied at pH 2.75 and pH 3.25. Necrotic lesions developed on apple petals at pH 2.5, with slight injury appearing at pH 3.0 and 3.5. Apple foliage had no acid rain lesions at any of the pH levels tested. Pollen germination was reduced at pH 2.5 in Empire. Slight fruit set reduction at pH 2.5 was observed in McIntosh. Even at the lowest pH levels, no detrimental effects of simulated acid rain were found on apple tree productivity and fruit quality when measured as fruit set, seed number per fruit, and fruit size and appearance.
Ventilation of Animal Shelters in Wildland Fire Scenarios
NASA Astrophysics Data System (ADS)
Bova, A. S.; Bohrer, G.; Dickinson, M. B.
2009-12-01
The effects of wildland fires on cavity-nesting birds and bats, as well as fossorial mammals and burrow-using reptiles, are of considerable interest to the fire management community. However, relatively little is known about the degree of protection afforded by various animal shelters in wildland fire events. We present results from our ongoing investigation, utilizing NIST’s Fire Dynamics Simulator (FDS) and experimental data, of the effectiveness of common shelter configurations in protecting animals from combustion products. We compare two sets of simulations with observed experimental results. In the first set, wind tunnel experiments on single-entry room ventilation by Larsen and Heiselberg (2008) were simulated in a large domain resolved into 10 cm cubic cells. The set of 24 simulations comprised all combinations of incident wind speeds of 1, 3 and 5 m/s; angles of attack of 0, 45, 90 and 180 degrees from the horizontal normal to the entrance; and temperature differences of 0 and 10 degrees C between the building interior and exterior. Simulation results were in good agreement with experimental data, thus providing a validation of FDS code for further ventilation experiments. In the second set, a cubic simulation domain of ~1 m on edge, resolved into 1 cm cubic cells, was set up to represent the experiments by Ar et al. (2004) of wind-induced ventilation of woodpecker cavities. As in the experiments, we simulated wind parallel and perpendicular to the cavity entrance with different mean forcing velocities, and monitored the rates of evacuation of a neutral-buoyancy tracer from the cavity. Simulated ventilation rates in many, though not all, cases fell within the range of experimental data. Reasons for these differences, which include vagueness in the experimental setup, will be discussed. Our simulations provide a tool to estimate the viability of an animal in a shelter as a function of the shelter geometry and the fire intensity. In addition to the above, we explore the role of turbulence and its effect on ventilation rates, especially in single-entrance shelters. The goal of this work is to provide engineering formulas to estimate the probable levels of harmful or irritating combustion products in animal shelters during wildland fires.
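The woodpecker-cavity comparisons rest on tracer evacuation rates; assuming a well-mixed cavity with exponential tracer decay, an air-exchange rate can be fitted as in the sketch below. The data are synthetic and the function is a hypothetical illustration, not the analysis used in the study.

```python
import numpy as np

def air_exchange_rate(times, concentrations):
    """Fit C(t) = C0 * exp(-lambda * t) to tracer concentrations and return
    the exchange rate lambda (per unit time), assuming a well-mixed cavity."""
    times = np.asarray(times, dtype=float)
    conc = np.asarray(concentrations, dtype=float)
    slope, _ = np.polyfit(times, np.log(conc), 1)   # ln C = ln C0 - lambda * t
    return -slope

# Synthetic decay curve: true rate 0.05 s^-1 with a little multiplicative noise.
t = np.arange(0, 120, 10.0)
c = np.exp(-0.05 * t) * (1 + 0.02 * np.random.default_rng(1).standard_normal(t.size))
print("estimated exchange rate:", air_exchange_rate(t, c), "per second")
```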
NASA Astrophysics Data System (ADS)
McGovern, S.; Kollet, S. J.; Buerger, C. M.; Schwede, R. L.; Podlaha, O. G.
2017-12-01
In the context of sedimentary basins, we present a model for the simulation of the movement of a geological formation (layers) during the evolution of the basin through sedimentation and compaction processes. Assuming a single-phase saturated porous medium for the sedimentary layers, the model focuses on the tracking of the layer interfaces, through the use of the level set method, as sedimentation drives fluid flow and reduction of pore space by compaction. On the assumption of Terzaghi's effective stress concept, the coupling of the pore fluid pressure to the motion of interfaces in 1-D is presented in McGovern et al. (2017) [1]. The current work extends the spatial domain to 3-D, though we maintain the assumption of vertical effective stress to drive the compaction. The idealized geological evolution is conceptualized as the motion of interfaces between rock layers, whose paths are determined by the magnitude of a speed function in the direction normal to the evolving layer interface. The speeds normal to the interface are dependent on the change in porosity, determined through an effective stress-based compaction law, such as the exponential Athy's law. Provided with the speeds normal to the interface, the level set method uses an advection equation to evolve a potential function, whose zero level set defines the interface. Thus, the moving layer geometry influences the pore pressure distribution, which couples back to the interface speeds. The flexible construction of the speed function allows extension, in the future, to other terms to represent different physical processes, analogous to how the compaction rule represents material deformation. The 3-D model is implemented using the generic finite element method framework deal.II, which provides tools, building on p4est and interfacing to PETSc, for the massively parallel distributed solution to the model equations [2]. Experiments are being run on the Juelich Supercomputing Center's JURECA cluster. [1] McGovern, et al. (2017). Novel basin modelling concept for simulating deformation from mechanical compaction using level sets. Computational Geosciences, SI:ECMOR XV, 1-14. [2] Bangerth, et al. (2011). Algorithms and data structures for massively parallel generic adaptive finite element codes. ACM Transactions on Mathematical Software (TOMS), 38(2):14.
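The exponential Athy-type compaction law mentioned above can be written as phi(sigma') = phi0 * exp(-k * sigma'); the sketch below evaluates it with illustrative parameter values (phi0 and k are assumptions, not the study's calibration).

```python
import numpy as np

def athy_porosity(effective_stress, phi0=0.6, k=5e-8):
    """Athy-type exponential compaction law: porosity as a function of
    vertical effective stress (Pa). phi0 and k are illustrative values."""
    return phi0 * np.exp(-k * effective_stress)

# Porosity loss over a range of effective stresses (0 to 50 MPa).
for sigma in np.linspace(0, 50e6, 6):
    print(f"sigma' = {sigma/1e6:5.1f} MPa  ->  porosity = {athy_porosity(sigma):.3f}")
```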
NASA Astrophysics Data System (ADS)
Gorlov, A. P.; Averchenkov, V. I.; Rytov, M. Yu; Eryomenko, V. T.
2017-01-01
The article is concerned with the mathematical simulation of protection level assessment for complex organizational and technical systems of industrial enterprises by creating an automated system whose main functions are: information security (IS) auditing, forming the enterprise threat model, and producing recommendations concerning creation of the information protection system and a set of organizational-administrative documentation.
Impact of the Montreal Protocol on Antarctic Surface Mass Balance
NASA Astrophysics Data System (ADS)
Previdi, M. J.; Polvani, L. M.
2016-12-01
We investigate the impact of the Montreal Protocol, and associated phase-out of ozone-depleting substances (ODSs), on the surface mass balance (SMB) of Antarctica, using simulations from the Community Earth System Model-Whole Atmosphere Community Climate Model (CESM-WACCM). The effect of ODSs on Antarctic SMB is first established by contrasting two sets of WACCM integrations (each with 6 ensemble members) for the period 1956-2005: one set that includes the full suite of natural and anthropogenic forcings, and a second set identical to the first but with atmospheric concentrations of ODSs held fixed at year 1955 levels. We find that holding ODSs fixed reduces the simulated increase in Antarctic SMB by nearly 60% in the ensemble mean, due to a suppression of Antarctic-mean warming. Having established this SMB impact of ODSs, we next examine three sets of future WACCM integrations (each with 3 ensemble members) for the period 2006-2065. The first two of these are the CMIP5 RCP4.5 and RCP8.5 integrations that include decreases in ODSs due to the implementation of the Montreal Protocol, and increases in other well-mixed greenhouse gases such as CO2. The third set of future integrations represents a hypothetical "world avoided" scenario in which the Montreal Protocol is assumed to have never been implemented, resulting in drastic increases in ODSs during the next several decades. In the world avoided, the simulated increase in Antarctic SMB is substantially larger than the other two scenarios, exceeding the SMB increases occurring under RCP4.5 and RCP8.5 by a factor of 3.7 and 1.9, respectively. The implications of this for future global sea-level rise will be discussed.
A novel simulation theory and model system for multi-field coupling pipe-flow system
NASA Astrophysics Data System (ADS)
Chen, Yang; Jiang, Fan; Cai, Guobiao; Xu, Xu
2017-09-01
Due to the lack of a theoretical basis for multi-field coupling in many system-level models, a novel set of system-level basic equations for flow/heat transfer/combustion coupling is put forward. Then a finite volume model of the quasi-1D transient flow field for multi-species compressible variable-cross-section pipe flow is established by discretising the basic equations on spatially staggered grids. Combining this with a 2D axisymmetric model for the pipe-wall temperature field and specific chemical reaction mechanisms, a finite volume model system is established; a set of specific calculation methods suitable for multi-field coupling system-level research is constructed for the various parameters in this model; specific modularised simulation models can be further derived in accordance with the specific structures of various typical components in a liquid propulsion system. This novel system can also be used to derive two sub-systems: a flow/heat transfer two-field coupling pipe-flow model system without chemical reaction and species diffusion; and a chemical equilibrium thermodynamic calculation-based multi-field coupling system. The applicability and accuracy of the two sub-systems have been verified through a series of dynamic modelling and simulations in earlier studies. The validity of this system is verified in an air-hydrogen combustion sample system. The basic equations and the model system provide a unified universal theory and numerical system for modelling and simulation and even virtual testing of various pipeline systems.
GOCE gravity field simulation based on actual mission scenario
NASA Astrophysics Data System (ADS)
Pail, R.; Goiginger, H.; Mayrhofer, R.; Höck, E.; Schuh, W.-D.; Brockmann, J. M.; Krasbutter, I.; Fecher, T.; Gruber, T.
2009-04-01
In the framework of the ESA-funded project "GOCE High-level Processing Facility", an operational hardware and software system for the scientific processing (Level 1B to Level 2) of GOCE data has been set up by the European GOCE Gravity Consortium EGG-C. One key component of this software system is the processing of a spherical harmonic Earth's gravity field model and the corresponding full variance-covariance matrix from the precise GOCE orbit and calibrated and corrected satellite gravity gradiometry (SGG) data. In the framework of the time-wise approach a combination of several processing strategies for the optimum exploitation of the information content of the GOCE data has been set up: The Quick-Look Gravity Field Analysis is applied to derive a fast diagnosis of the GOCE system performance and to monitor the quality of the input data. In the Core Solver processing a rigorous high-precision solution of the very large normal equation systems is derived by applying parallel processing techniques on a PC cluster. Before the availability of real GOCE data, by means of a realistic numerical case study, which is based on the actual GOCE orbit and mission scenario and simulation data stemming from the most recent ESA end-to-end simulation, the expected GOCE gravity field performance is evaluated. Results from this simulation as well as recently developed features of the software system are presented. Additionally some aspects on data combination with complementary data sources are addressed.
Abraham, Mark James; Murtola, Teemu; Schulz, Roland; ...
2015-07-15
GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights, through several new and enhanced parallelization algorithms. These work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. Finally, the latest best-in-class compressed trajectory storage format is supported.
ERIC Educational Resources Information Center
Rizavi, Saba; Way, Walter D.; Lu, Ying; Pitoniak, Mary; Steffen, Manfred
2004-01-01
The purpose of this study was to use realistically simulated data to evaluate various CAT designs for use with the verbal reasoning measure of the Medical College Admissions Test (MCAT). Factors such as item pool depth, content constraints, and item formats often cause repeated adaptive administrations of an item at ability levels that are not…
Level-Set Variational Implicit-Solvent Modeling of Biomolecules with the Coulomb-Field Approximation
2011-01-01
Central in the variational implicit-solvent model (VISM) [Dzubiella, Swanson, and McCammon, Phys. Rev. Lett. 2006, 96, 087802 and J. Chem. Phys. 2006, 124, 084905] of molecular solvation is a mean-field free-energy functional of all possible solute–solvent interfaces or dielectric boundaries. Such a functional can be minimized numerically by a level-set method to determine stable equilibrium conformations and solvation free energies. Applications to nonpolar systems have shown that the level-set VISM is efficient and leads to qualitatively and often quantitatively correct results. In particular, it is capable of capturing capillary evaporation in hydrophobic confinement and corresponding multiple equilibrium states as found in molecular dynamics (MD) simulations. In this work, we introduce into the VISM the Coulomb-field approximation of the electrostatic free energy. Such an approximation is a volume integral over an arbitrarily shaped solvent region, requiring no solutions to any partial differential equations. With this approximation, we obtain the effective boundary force and use it as the “normal velocity” in the level-set relaxation. We test the new approach by calculating solvation free energies and potentials of mean force for small and large molecules, including the two-domain protein BphC. Our results reveal the importance of coupling polar and nonpolar interactions in the underlying molecular systems. In particular, dehydration near the domain interface of BphC subunits is found to be highly sensitive to local electrostatic potentials as seen in previous MD simulations. This is a first step toward capturing the complex protein dehydration process by an implicit-solvent approach. PMID:22346739
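For readers unfamiliar with the mechanics of level-set relaxation, the short Python sketch below evolves a generic interface under a prescribed normal velocity, phi_t + v_n |grad phi| = 0, which is the form in which an effective boundary force can be fed back as the "normal velocity". The grid, time stepping and constant velocity are illustrative assumptions and do not reproduce the VISM functional.

```python
import numpy as np

# Minimal level-set relaxation sketch: forward Euler with central differences
# on a smooth signed-distance function.  Not the authors' VISM implementation.

def relax(phi, v_n, dx, dt, steps):
    for _ in range(steps):
        gx, gy = np.gradient(phi, dx)
        phi = phi - dt * v_n * np.sqrt(gx**2 + gy**2)
    return phi

# initial interface: circle of radius 0.3 (zero level set of a signed distance)
n, L = 128, 1.0
xs = np.linspace(-L / 2, L / 2, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
phi0 = np.sqrt(X**2 + Y**2) - 0.3

# uniform inward normal velocity shrinks the circle at 0.1 per unit time
phi = relax(phi0, v_n=-0.1, dx=xs[1] - xs[0], dt=1e-3, steps=500)
print("area inside interface:", (phi < 0).mean() * L**2)
```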
Detecting aircraft with a low-resolution infrared sensor.
Jakubowicz, Jérémie; Lefebvre, Sidonie; Maire, Florian; Moulines, Eric
2012-06-01
Existing computer simulations of aircraft infrared signature (IRS) do not account for dispersion induced by uncertainty on input data, such as aircraft aspect angles and meteorological conditions. As a result, they are of little use for estimating the detection performance of IR optronic systems; in this case, the scenario encompasses many possible situations that must indeed be addressed but cannot each be simulated individually. In this paper, we focus on low-resolution infrared sensors and we propose a methodological approach for predicting simulated IRS dispersion of poorly known aircraft and performing aircraft detection on the resulting set of low-resolution infrared images. It is based on a sensitivity analysis, which identifies inputs that have negligible influence on the computed IRS and can be set at a constant value, on a quasi-Monte Carlo survey of the code output dispersion, and on a new detection test taking advantage of level-set estimation. This method is illustrated in a typical scenario, i.e., a daylight air-to-ground full-frontal attack by a generic combat aircraft flying at low altitude, over a database of 90,000 simulated aircraft images. Assuming a white noise or a fractional Brownian background model, detection performances are very promising.
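A minimal Python sketch of the quasi-Monte Carlo ingredient is given below: a Sobol' design over a handful of retained inputs, propagated through a stand-in response function. The input names, ranges and the toy "IRS" formula are assumptions for illustration only, not the paper's signature model.

```python
import numpy as np
from scipy.stats import qmc

# Quasi-Monte Carlo survey of output dispersion with a low-discrepancy design.

sampler = qmc.Sobol(d=3, scramble=True, seed=0)
unit_samples = sampler.random_base2(m=10)          # 2**10 = 1024 points in [0,1)^3

# hypothetical retained inputs: aspect angle (deg), air temperature (K), humidity
l_bounds = [0.0, 260.0, 0.2]
u_bounds = [30.0, 300.0, 0.9]
inputs = qmc.scale(unit_samples, l_bounds, u_bounds)

def toy_irs(aspect_deg, temp_k, humidity):
    """Stand-in for the expensive IR-signature code (illustrative only)."""
    return 1e3 * np.cos(np.radians(aspect_deg)) * (temp_k / 288.0) ** 4 * (1 - 0.4 * humidity)

irs = toy_irs(*inputs.T)
print("IRS dispersion: mean %.1f, 5-95%% range [%.1f, %.1f]"
      % (irs.mean(), np.percentile(irs, 5), np.percentile(irs, 95)))
```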
A Facility and Architecture for Autonomy Research
NASA Technical Reports Server (NTRS)
Pisanich, Greg; Clancy, Daniel (Technical Monitor)
2002-01-01
Autonomy is a key enabling factor in the advancement of remote robotic exploration. There is currently a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The Mission Simulation Facility (MSF) will bridge this gap by providing a simulation framework and a suite of simulation tools to support research in autonomy for remote exploration. This system will allow developers of autonomy software to test their models in a high-fidelity simulation and evaluate their system's performance against a set of integrated, standardized simulations. The Mission Simulation ToolKit (MST) uses a distributed architecture with a communication layer that is built on top of the standardized High Level Architecture (HLA). This architecture enables the use of existing high-fidelity models, allows mixing of simulation components from various computing platforms, and enforces the use of a standardized high-level interface among components. The components needed to achieve a realistic simulation can be grouped into four categories: environment generation (terrain, environmental features), robotic platform behavior (robot dynamics), instrument models (camera/spectrometer/etc.), and data analysis. The MST will provide basic components in these areas but allows users to easily plug in any refined model by means of a communication protocol. Finally, a description file defines the robot and environment parameters for easy configuration and ensures that all the simulation models share the same information.
Manual for a workstation-based generic flight simulation program (LaRCsim), version 1.4
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce
1995-01-01
LaRCsim is a set of ANSI C routines that implement a full set of equations of motion for a rigid-body aircraft in atmospheric and low-earth orbital flight, suitable for pilot-in-the-loop simulations on a workstation-class computer. All six rigid-body degrees of freedom are modeled. The modules provided include calculations of the typical aircraft rigid-body simulation variables, earth geodesy, gravity and atmospheric models, and support for several data recording options. Features/limitations of the current version include English units of measure, a 1962 atmosphere model in cubic spline function lookup form ranging from sea level to 75,000 feet, and a rotating oblate spheroidal earth model with aircraft C.G. coordinates in both geocentric and geodetic axes. Angular integrations are done using quaternion state variables. Vehicle X-Z symmetry is assumed.
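The quaternion attitude integration mentioned above can be sketched in a few lines of Python; the product convention, Euler stepping and renormalisation below are a generic illustration rather than LaRCsim's actual routines.

```python
import numpy as np

# Rigid-body attitude propagation with a quaternion state:
# q_dot = 0.5 * quat_mul(q, (0, omega_body)); forward Euler plus renormalisation.

def quat_mul(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def step_attitude(q, omega_body, dt):
    """Advance the attitude quaternion one step for body rates omega (rad/s)."""
    q_dot = 0.5 * quat_mul(q, np.concatenate(([0.0], omega_body)))
    q = q + dt * q_dot
    return q / np.linalg.norm(q)          # keep unit length

q = np.array([1.0, 0.0, 0.0, 0.0])        # level attitude
omega = np.array([0.0, 0.0, np.radians(3.0)])   # 3 deg/s yaw rate
for _ in range(1000):                      # 10 s at dt = 0.01
    q = step_attitude(q, omega, dt=0.01)
yaw = np.degrees(np.arctan2(2*(q[0]*q[3] + q[1]*q[2]), 1 - 2*(q[2]**2 + q[3]**2)))
print("yaw after 10 s: %.1f deg" % yaw)   # ~30 deg
```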
Coupled Neutronics Thermal-Hydraulic Solution of a Full-Core PWR Using VERA-CS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarno, Kevin T; Palmtag, Scott; Davidson, Gregory G
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a core simulator called VERA-CS to model operating PWR reactors with high resolution. This paper describes how the development of VERA-CS is being driven by a set of progression benchmark problems that specify the delivery of useful capability in discrete steps. As part of this development, this paper will describe the current capability of VERA-CS to perform a multiphysics simulation of an operating PWR at Hot Full Power (HFP) conditions using a set of existing computer codes coupled together in a novel method. Results for several single-assembly cases are shown that demonstrate coupling for different boron concentrations and power levels. Finally, high-resolution results are shown for a full-core PWR reactor modeled in quarter-symmetry.
Integrated Medical Model (IMM) Optimization Version 4.0 Functional Improvements
NASA Technical Reports Server (NTRS)
Arellano, John; Young, M.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Goodenow, D. A.; Myers, J. G.
2016-01-01
The IMM's ability to assess mission outcome risk levels relative to available resources provides a unique capability to guide the selection of optimal operational medical kit and vehicle resources. Post-processing optimization allows IMM to optimize essential resources to improve a specific model outcome such as maximization of the Crew Health Index (CHI), or minimization of the probability of evacuation (EVAC) or the loss of crew life (LOCL). Mass and/or volume constrain the optimized resource set. The IMM's probabilistic simulation uses input data on one hundred medical conditions to simulate medical events that may occur in spaceflight, the resources required to treat those events, and the resulting impact to the mission based on specific crew and mission characteristics. Because IMM version 4.0 provides for partial treatment for medical events, IMM Optimization 4.0 scores resources at the individual resource unit increment level as opposed to the full condition-specific treatment set level, as done in version 3.0. This allows the inclusion of as many resources as possible in the event that an entire set of resources called out for treatment cannot satisfy the constraints. IMM Optimization version 4.0 adds capabilities that increase efficiency by creating multiple resource sets based on differing constraints and priorities, CHI, EVAC, or LOCL. It also provides sets of resources that improve mission-related IMM v4.0 outputs with improved performance compared to the prior optimization. The new optimization represents much improved fidelity that will improve the utility of the IMM 4.0 for decision support.
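A minimal sketch of unit-increment resource scoring under mass and volume constraints is given below in Python; the resource list, benefit scores and the greedy fill rule are invented placeholders, not IMM data or its optimizer, which also weighs outcomes such as CHI, EVAC and LOCL.

```python
# Score resources at the individual-unit level and fill a kit greedily under
# mass/volume budgets.  All numbers are made-up illustrative values.

resources = [
    # (name, benefit per unit, mass per unit [kg], volume per unit [L], max units)
    ("analgesic dose",   5.0, 0.02, 0.03, 40),
    ("IV fluid bag",     8.0, 1.10, 1.00, 10),
    ("suture kit",       6.0, 0.15, 0.20, 15),
    ("splint",           4.0, 0.60, 1.50,  6),
]
MASS_BUDGET, VOLUME_BUDGET = 12.0, 14.0

# expand to unit increments and rank by benefit per unit of "cost"
units = []
for name, benefit, mass, vol, count in resources:
    units += [(benefit / (mass + vol), name, benefit, mass, vol)] * count
units.sort(reverse=True)

kit, mass_used, vol_used, total_benefit = {}, 0.0, 0.0, 0.0
for score, name, benefit, mass, vol in units:
    if mass_used + mass <= MASS_BUDGET and vol_used + vol <= VOLUME_BUDGET:
        kit[name] = kit.get(name, 0) + 1
        mass_used += mass
        vol_used += vol
        total_benefit += benefit

print(kit, "mass %.1f kg, volume %.1f L, benefit %.0f" % (mass_used, vol_used, total_benefit))
```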
Zevin, Boris; Dedy, Nicolas J; Bonrath, Esther M; Grantcharov, Teodor P
2017-05-01
There is no comprehensive simulation-enhanced training curriculum to address cognitive, psychomotor, and nontechnical skills for an advanced minimally invasive procedure. The aims were (1) to develop and provide evidence of validity for a comprehensive simulation-enhanced training (SET) curriculum for an advanced minimally invasive procedure; (2) to demonstrate transfer of acquired psychomotor skills from a simulation laboratory to a live porcine model; and (3) to compare training outcomes of the SET curriculum group and a chief resident group. University. This prospective single-blinded, randomized, controlled trial allocated 20 intermediate-level surgery residents to receive either conventional training (control) or SET curriculum training (intervention). The SET curriculum consisted of cognitive, psychomotor, and nontechnical training modules. Psychomotor skill in a live anesthetized porcine model in the OR was the primary outcome. Knowledge of advanced minimally invasive and bariatric surgery and nontechnical skills in a simulated OR crisis scenario were the secondary outcomes. Residents in the SET curriculum group went on to perform a laparoscopic jejunojejunostomy in the OR. Cognitive, psychomotor, and nontechnical skills of the SET curriculum group were also compared to a group of 12 chief surgery residents. The SET curriculum group demonstrated superior psychomotor skills in a live porcine model (56 [47-62] versus 44 [38-53], P<.05) and superior nontechnical skills (41 [38-45] versus 31 [24-40], P<.01) compared with the conventional training group. The SET curriculum group and conventional training group demonstrated equivalent knowledge (14 [12-15] versus 13 [11-15], P = .47). The SET curriculum group demonstrated equivalent psychomotor skills in the live porcine model and in the OR in a human patient (56 [47-62] versus 63 [61-68]; P = .21). The SET curriculum group demonstrated inferior knowledge (13 [11-15] versus 16 [14-16]; P<.05), equivalent psychomotor skill (63 [61-68] versus 68 [62-74]; P = .50), and superior nontechnical skills (41 [38-45] versus 34 [27-35], P<.01) compared with the chief resident group. Completion of the SET curriculum resulted in superior training outcomes, compared with conventional surgery training. Implementation of the SET curriculum can standardize training for an advanced minimally invasive procedure and can ensure that comprehensive proficiency milestones are met before exposure to patient care. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.
Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; McCammon, J. Andrew
2016-01-01
Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of fluctuations into the VISM and understanding the impact of interfacial fluctuations on biomolecular solvation with an implicit-solvent approach. PMID:27497546
The effects of greenhouse gases on the Antarctic ozone hole in the past, present, and future
NASA Astrophysics Data System (ADS)
Newman, P. A.; Li, F.; Lait, L. R.; Oman, L.
2017-12-01
The Antarctic ozone hole is primarily caused by human-produced ozone-depleting substances (ODSs) such as chlorine-containing chlorofluorocarbons (CFCs) and bromine-containing halons. The large spring-time ozone depletion relies on the very cold conditions of the Antarctic lower stratosphere and the general containment of air by the polar night jet over Antarctica. Here we use the Goddard Earth Observing System Chemistry Climate Model (GEOSCCM), a coupled ocean-atmosphere-chemistry model, to explore the impact of increasing greenhouse gases (GHGs). Model simulations covering the 1960-2010 period are shown for: 1) a control ensemble with observed levels of ODSs and GHGs, 2) an ensemble with fixed 1960 GHG concentrations, and 3) an ensemble with fixed 1960 ODS levels. We examine a similar set of simulations (control, 2005 fixed GHG levels, and 2005 fixed ODS levels) with a new version of GEOSCCM over the period 2005-2100. These future simulations show that the decrease of ODSs leads to similar ozone recovery for both the control run and the fixed-GHG scenario, in spite of GHG-forced changes to stratospheric ozone levels. These simulations demonstrate that GHG levels will have major impacts on the stratosphere by 2100, but only small impacts on the Antarctic ozone hole.
Allvin, Renée; Berndtzon, Magnus; Carlzon, Liisa; Edelbring, Samuel; Hult, Håkan; Hultin, Magnus; Karlgren, Klas; Masiello, Italo; Södersved Källestedt, Marie-Louise; Tamás, Éva
2017-01-01
Medical simulation enables the design of learning activities for competency areas (eg, communication and leadership) identified as crucial for future health care professionals. Simulation educators and medical teachers follow different career paths, and their education backgrounds and teaching contexts may be very different in a simulation setting. Although they have a key role in facilitating learning, information on the continuing professional development (pedagogical development) of simulation educators is not available in the literature. To explore changes in experienced simulation educators' perceptions of their own teaching skills, practices, and understanding of teaching over time. A qualitative exploratory study. Fourteen experienced simulation educators participated in individual open-ended interviews focusing on their development as simulation educators. Data were analyzed using an inductive thematic analysis. Marked educator development was discerned over time, expressed mainly in an altered way of thinking and acting. Five themes were identified: shifting focus, from following to utilizing a structure, setting goals, application of technology, and alignment with profession. Being confident in the role as an instructor seemed to constitute a foundation for the instructor's pedagogical development. Experienced simulation educators' pedagogical development was based on self-confidence in the educator role, and not on a deeper theoretical understanding of teaching and learning. This is the first clue to gain increased understanding regarding educational level and possible education needs among simulation educators, and it might generate several lines of research for further studies.
Mena, Carlos F; Walsh, Stephen J; Frizzelle, Brian G; Xiaozheng, Yao; Malanson, George P
2011-01-01
This paper describes the design and implementation of an Agent-Based Model (ABM) used to simulate land use change on household farms in the Northern Ecuadorian Amazon (NEA). The ABM simulates decision-making processes at the household level that are examined through a longitudinal socio-economic and demographic survey conducted in 1990 and 1999. Geographic Information Systems (GIS) are used to establish spatial relationships between farms and their environment, while classified Landsat Thematic Mapper (TM) imagery is used to set initial land use/land cover conditions for the spatial simulation, assess from-to land use/land cover change patterns, and describe trajectories of land use change at the farm and landscape levels. Results from prior studies in the NEA provide insights into the key social and ecological variables, describe human behavioral functions, and examine population-environment interactions that are linked to deforestation and agricultural extensification, population migration, and demographic change. Within the architecture of the model, agents are classified as active or passive. The model comprises four modules, i.e., initialization, demography, agriculture, and migration, that operate individually but are linked through key household processes. The main outputs of the model include a spatially explicit representation of the land use/land cover on survey and non-survey farms and at the landscape level for each annual time-step, as well as simulated socio-economic and demographic characteristics of households and communities. The work describes the design and implementation of the model and how population-environment interactions can be addressed in a frontier setting. The paper contributes to land change science by examining important pattern-process relations, advocating a spatial modeling approach that is capable of synthesizing fundamental relationships at the farm level, and linking people and environment in complex ways.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas
The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. In conclusion, additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.
Cosmic Rays with Portable Geiger Counters: From Sea Level to Airplane Cruise Altitudes
ERIC Educational Resources Information Center
Blanco, Francesco; La Rocca, Paola; Riggi, Francesco
2009-01-01
Cosmic ray count rates with a set of portable Geiger counters were measured at different altitudes on the way to a mountain top and aboard an aircraft, between sea level and cruise altitude. Basic measurements may constitute an educational activity even with high school teams. For the understanding of the results obtained, simulations of extensive…
Constrained Local UniversE Simulations: a Local Group factory
NASA Astrophysics Data System (ADS)
Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias
2016-05-01
Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and on the third one the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.
Blood pressure long term regulation: A neural network model of the set point development
2011-01-01
Background The notion of the nucleus tractus solitarius (NTS) as a comparator evaluating the error signal between its rostral neural structures (RNS) and the cardiovascular receptor afferents into it has been recently presented. From this perspective, stress can cause hypertension via set point changes, offering an answer to an old question. Even though the local blood flow to tissues is influenced by circulating vasoactive hormones and also by local factors, there is still significant sympathetic control. It is well established that the state of maturation of sympathetic innervation of blood vessels at birth varies across animal species and takes place mostly during the postnatal period. During ontogeny, chemoreceptors are functional; they discharge when the partial pressures of oxygen and carbon dioxide in the arterial blood are not normal. Methods The model is a simple, biologically plausible adaptive neural network used to simulate the development of sympathetic nervous control. It is hypothesized that during ontogeny, from the RNS afferents to the NTS, the optimal level of each sympathetic efferent discharge is learned through the chemoreceptors' feedback. Its mean discharge leads to normal oxygen and carbon dioxide levels in each tissue. Thus, the sympathetic efferent discharge settles at the optimal level if, despite maximal drift, the local blood flow is compensated for by autoregulation. Such an optimal level produces minimum chemoreceptor output, which must be maintained by the nervous system. Since blood flow is controlled by arterial blood pressure, the long-term mean level is stabilized to regulate oxygen and carbon dioxide levels. After development, the cardiopulmonary reflexes play an important role in controlling efferent sympathetic nerve activity to the kidneys and modulating sodium and water excretion. Results Starting from fixed RNS afferents to the NTS and random synaptic weight values, the sympathetic efferents converged to the optimal values. When learning was completed, the output from the chemoreceptors became zero because the sympathetic efferents led to normal partial pressures of oxygen and carbon dioxide. Conclusions We introduce a simple computational theory to study, from a neurophysiologic point of view, the development of sympathetic cardiovascular regulation driven by feedback signals sent by cardiovascular receptors. The model also simulates how the NTS, as an emergent property, acts as a comparator and how its rostral afferents behave as a set point. PMID:21693057
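The learning idea, driving each sympathetic efferent level toward the value that nulls the chemoreceptor error, can be sketched with a simple delta rule in Python; the toy error model and learning rate below are illustrative assumptions, not the published network.

```python
import numpy as np

# Each "efferent weight" is adapted to null a chemoreceptor error signal.
# The plant mapping discharge to blood-gas error is a toy stand-in.

rng = np.random.default_rng(0)
n_tissues = 5
optimal = rng.uniform(0.3, 0.8, n_tissues)     # unknown discharge that normalises gases
w = rng.uniform(0.0, 1.0, n_tissues)           # initial synaptic weights (efferent levels)
eta = 0.2                                      # learning rate

def chemoreceptor_error(efferent):
    # discharge grows with deviation of blood gases from normal (toy model)
    return efferent - optimal

for epoch in range(200):
    err = chemoreceptor_error(w)
    w -= eta * err                              # delta rule: reduce chemoreceptor output
print("residual chemoreceptor output:", np.abs(chemoreceptor_error(w)).max())
```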
High-fidelity nursing simulation: impact on student self-confidence and clinical competence.
Blum, Cynthia A; Borglund, Susan; Parcells, Dax
2010-01-01
Development of safe nursing practice in entry-level nursing students requires special consideration from nurse educators. The paucity of data supporting high-fidelity patient simulation effectiveness in this population informed the development of a quasi-experimental, quantitative study of the relationship between simulation and student self-confidence and clinical competence. Moreover, the study reports a novel approach to measuring self-confidence and competence of entry-level nursing students. Fifty-three baccalaureate students, enrolled in either a traditional or simulation-enhanced laboratory, participated during their first clinical rotation. Student self-confidence and faculty perception of student clinical competence were measured using selected scale items of the Lasater Clinical Judgment Rubric. The results indicated an overall improvement in self-confidence and competence across the semester; however, simulation did not significantly enhance these caring attributes. The study highlights the need for further examination of teaching strategies developed to promote the transfer of self-confidence and competence from the laboratory to the clinical setting.
Numerical Propulsion System Simulation Architecture
NASA Technical Reports Server (NTRS)
Naiman, Cynthia G.
2004-01-01
The Numerical Propulsion System Simulation (NPSS) is a framework for performing analysis of complex systems. Because the NPSS was developed using the object-oriented paradigm, the resulting architecture is an extensible and flexible framework that is currently being used by a diverse set of participants in government, academia, and the aerospace industry. NPSS is being used by over 15 different institutions to support rockets, hypersonics, power and propulsion, fuel cells, ground based power, and aerospace. Full system-level simulations as well as subsystems may be modeled using NPSS. The NPSS architecture enables the coupling of analyses at various levels of detail, which is called numerical zooming. The middleware used to enable zooming and distributed simulations is the Common Object Request Broker Architecture (CORBA). The NPSS Developer's Kit offers tools for the developer to generate CORBA-based components and wrap codes. The Developer's Kit enables distributed multi-fidelity and multi-discipline simulations, preserves proprietary and legacy codes, and facilitates addition of customized codes. The platforms supported are PC, Linux, HP, Sun, and SGI.
Lagrangian large eddy simulations of boundary layer clouds on ERA-Interim and ERA5 trajectories
NASA Astrophysics Data System (ADS)
Kazil, J.; Feingold, G.; Yamaguchi, T.
2017-12-01
This exploratory study examines Lagrangian large eddy simulations of boundary layer clouds along wind trajectories from the ERA-Interim and ERA5 reanalyses. The study is motivated by the need for statistically representative sets of high resolution simulations of cloud field evolution in realistic meteorological conditions. The study will serve as a foundation for the investigation of biomass burning effects on the transition from stratocumulus to shallow cumulus clouds in the South-East Atlantic. Trajectories that pass through a location with radiosonde data (St. Helena) and which exhibit a well-defined cloud structure and evolution were identified in satellite imagery, and sea surface temperature and atmospheric vertical profiles along the trajectories were extracted from the reanalysis data sets. The System for Atmospheric Modeling (SAM) simulated boundary layer turbulence and cloud properties along the trajectories. Mean temperature and moisture (in the free troposphere) and mean wind speed (at all levels) were nudged towards the reanalysis data. Atmospheric and cloud properties in the large eddy simulations were compared with those from the reanalysis products, and evaluated with satellite imagery and radiosonde data. Simulations using ERA-Interim data and the higher resolution ERA5 data are contrasted.
A Validation Study of Merging and Spacing Techniques in a NAS-Wide Simulation
NASA Technical Reports Server (NTRS)
Glaab, Patricia C.
2011-01-01
In November 2010, Intelligent Automation, Inc. (IAI) delivered an M&S software tool that allows system-level studies of the complex terminal airspace with the ACES simulation. The software was evaluated against current-day arrivals in the Atlanta TRACON using Atlanta's Hartsfield-Jackson International Airport (KATL) arrival schedules. Results of this validation effort are presented describing data sets, traffic flow assumptions and techniques, and arrival rate comparisons between reported landings at Atlanta versus simulated arrivals using the same traffic sets in ACES equipped with M&S. Initial results showed the simulated system capacity to be significantly below the arrival capacity seen at KATL. Data were gathered for Atlanta using commercial airport and flight tracking websites (like FlightAware.com), and analyzed to ensure compatible techniques were used for result reporting and comparison. TFM operators for Atlanta were consulted for tuning final simulation parameters and for guidance in flow management techniques during high volume operations. Using these modified parameters and incorporating TFM guidance for efficiencies in flowing aircraft, arrival capacity for KATL was matched by the simulation. Following this validation effort, a sensitivity study was conducted to measure the impact of variations in system parameters on the Atlanta airport arrival capacity.
A multilingual audiometer simulator software for training purposes.
Kompis, Martin; Steffen, Pascal; Caversaccio, Marco; Brugger, Urs; Oesch, Ivo
2012-04-01
A set of algorithms, which allows a computer to determine the answers of simulated patients during pure tone and speech audiometry, is presented. Based on these algorithms, a computer program for training in audiometry was written and found to be useful for teaching purposes. To develop a flexible audiometer simulator software as a teaching and training tool for pure tone and speech audiometry, both with and without masking. First a set of algorithms, which allows a computer to determine the answers of a simulated, hearing-impaired patient, was developed. Then, the software was implemented. Extensive use was made of simple, editable text files to define all texts in the user interface and all patient definitions. The software 'audiometer simulator' is available for free download. It can be used to train pure tone audiometry (both with and without masking), speech audiometry, measurement of the uncomfortable level, and simple simulation tests. Due to the use of text files, the user can alter or add patient definitions and all texts and labels shown on the screen. So far, English, French, German, and Portuguese user interfaces are available and the user can choose between German or French speech audiometry.
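One plausible form of the patient-response algorithm is sketched below in Python: the simulated patient answers when the presented level exceeds a frequency-specific threshold, with some variability near threshold and a crude masking shift. The threshold table, slope and masking rule are assumptions for illustration, not the published algorithms.

```python
import random

# Simulated-patient response for pure-tone audiometry (illustrative sketch).

THRESHOLDS_DB_HL = {250: 20, 500: 25, 1000: 30, 2000: 45, 4000: 60, 8000: 70}

def patient_responds(freq_hz, level_db_hl, masking_db=0.0, slope_db=2.0):
    """Return True if the simulated patient signals hearing the tone."""
    effective_threshold = THRESHOLDS_DB_HL[freq_hz] + 0.5 * masking_db  # crude masking shift
    # response probability rises from 0 to 1 over ~2*slope_db around threshold
    p = min(1.0, max(0.0, 0.5 + (level_db_hl - effective_threshold) / (2 * slope_db)))
    return random.random() < p

random.seed(1)
for level in (20, 30, 40, 50):
    answers = [patient_responds(1000, level) for _ in range(20)]
    print("1 kHz at %d dB HL -> %2d/20 responses" % (level, sum(answers)))
```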
Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner
NASA Astrophysics Data System (ADS)
Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.
2015-02-01
Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.
Faudeux, Camille; Tran, Antoine; Dupont, Audrey; Desmontils, Jonathan; Montaudié, Isabelle; Bréaud, Jean; Braun, Marc; Fournier, Jean-Paul; Bérard, Etienne; Berlengi, Noémie; Schweitzer, Cyril; Haas, Hervé; Caci, Hervé; Gatin, Amélie; Giovannini-Chami, Lisa
2017-09-01
To develop a reliable and validated tool to evaluate technical resuscitation skills in a pediatric simulation setting. Four Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics (RESCAPE) evaluation tools were created, following international guidelines: intraosseous needle insertion, bag mask ventilation, endotracheal intubation, and cardiac massage. We applied a modified Delphi methodology evaluation to binary rating items. Reliability was assessed comparing the ratings of 2 observers (1 in real time and 1 after a video-recorded review). The tools were assessed for content, construct, and criterion validity, and for sensitivity to change. Inter-rater reliability, evaluated with Cohen kappa coefficients, was perfect or near-perfect (>0.8) for 92.5% of items and each Cronbach alpha coefficient was ≥0.91. Principal component analyses showed that all 4 tools were unidimensional. Significant increases in median scores with increasing levels of medical expertise were demonstrated for RESCAPE-intraosseous needle insertion (P = .0002), RESCAPE-bag mask ventilation (P = .0002), RESCAPE-endotracheal intubation (P = .0001), and RESCAPE-cardiac massage (P = .0037). Significantly increased median scores over time were also demonstrated during a simulation-based educational program. RESCAPE tools are reliable and validated tools for the evaluation of technical resuscitation skills in pediatric settings during simulation-based educational programs. They might also be used for medical practice performance evaluations. Copyright © 2017 Elsevier Inc. All rights reserved.
Periodical capacity setting methods for make-to-order multi-machine production systems
Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert
2014-01-01
The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer required lead times and stochastic processing times to improve service level and tardiness. These methods are developed as decision support when capacity flexibility exists, such as a certain range of possible working hours per week. The methods differ in the amount of information used, whereby all are based on the cumulated capacity demand at each machine. In a simulation study the methods’ impact on service level and tardiness is compared to a constant provided capacity for a single and a multi-machine setting. It is shown that the tested capacity setting methods can lead to an increase in service level and a decrease in average tardiness in comparison to a constant provided capacity. The methods using information on processing time and customer required lead time distribution perform best. The results found in this paper can help practitioners to make efficient use of their flexible capacity. PMID:27226649
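A minimal Python sketch of a periodical capacity setting rule of this general kind is given below: the provided capacity follows the cumulated capacity demand scaled by a variability buffer and clipped to the feasible working-hour band. The demand data and buffer rule are illustrative assumptions rather than the paper's methods.

```python
import numpy as np

# Periodical capacity setting from cumulated capacity demand (illustrative).

rng = np.random.default_rng(2)
periods = 12
demand_hours = rng.gamma(shape=4.0, scale=9.0, size=periods)   # stochastic weekly load
cv_processing = 0.3                                            # processing-time variability
min_hours, max_hours = 30.0, 50.0                              # flexible working-hour band

buffer = 1.0 + cv_processing                                   # simple variability buffer
provided = np.clip(demand_hours * buffer, min_hours, max_hours)

for t, (d, c) in enumerate(zip(demand_hours, provided), start=1):
    print("week %2d: demand %5.1f h -> provided capacity %5.1f h" % (t, d, c))
```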
Numerical Modelling of Three-Fluid Flow Using The Level-set Method
NASA Astrophysics Data System (ADS)
Li, Hongying; Lou, Jing; Shang, Zhi
2014-11-01
This work presents a numerical model for simulation of three-fluid flow involving two different moving interfaces. These interfaces are captured using the level-set method via two different level-set functions. A combined formulation with only one set of conservation equations for the whole physical domain, consisting of the three different immiscible fluids, is employed. Numerical solution is performed on a fixed mesh using the finite volume method. Surface tension effect is incorporated using the Continuum Surface Force model. Validation of the present model is made against available results for stratified flow and rising bubble in a container with a free surface. Applications of the present model are demonstrated by a variety of three-fluid flow systems including (1) three-fluid stratified flow, (2) two-fluid stratified flow carrying the third fluid in the form of drops and (3) simultaneous rising and settling of two drops in a stationary third fluid. The work is supported by a Thematic and Strategic Research from A*STAR, Singapore (Ref. #: 1021640075).
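The use of two level-set functions to distinguish three fluids can be illustrated with the short Python sketch below, which blends material properties through a smoothed Heaviside function. The geometry and property values are illustrative, and the flow solver itself is not included.

```python
import numpy as np

# Represent three immiscible fluids with two level-set functions phi1, phi2 and
# blend densities with a smoothed Heaviside (one-fluid formulation sketch).

def heaviside(phi, eps):
    h = 0.5 * (1 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.clip(np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, h)), 0.0, 1.0)

n = 200
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
eps = 1.5 / n

phi1 = np.sqrt((X - 0.5) ** 2 + (Y - 0.7) ** 2) - 0.1   # <0 inside a drop (fluid 1)
phi2 = Y - 0.4                                           # <0 below a flat interface (fluid 3)

rho1, rho2, rho3 = 1.0, 800.0, 1000.0
H1, H2 = heaviside(phi1, eps), heaviside(phi2, eps)
rho = (1 - H1) * rho1 + H1 * (H2 * rho2 + (1 - H2) * rho3)
print("min/max density on the grid:", rho.min(), rho.max())
```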
A digital retina-like low-level vision processor.
Mertoguno, S; Bourbakis, N G
2003-01-01
This correspondence presents the basic design and the simulation of a low level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used on visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, where each of them is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition and region-graph generation. At each layer, the array processor is a 2D array of k×m hexagonal identical autonomous cells that simultaneously execute certain low level vision tasks. Thus, the hardware design and the simulation at the transistor level of the processing elements (PEs) of the retina-like processor and its simulated functionality with illustrative examples are provided in this paper.
Andrade Neto, A S; Secchi, A R; Souza, M B; Barreto, A G
2016-10-28
An adaptive nonlinear model predictive control of a simulated moving bed unit for the enantioseparation of praziquantel is presented. A first-principles model was used in the proposed purity control scheme. The main concern with this kind of model in a control framework is the computational effort required to solve it; however, a sufficiently fast solution was achieved. In order to evaluate the controller's performance, several cases were simulated, including external pump and switching valve malfunctions. The problem of plant-model mismatch was also investigated, and for that reason a parameter estimation step was introduced in the control strategy. In every studied scenario, the controller was able to maintain the purity levels at their set points, which were set to 99% and 98.6% for extract and raffinate, respectively. Additionally, fast responses and smooth actuation were achieved. Copyright © 2016 Elsevier B.V. All rights reserved.
Simulation of the neutron flux in the irradiation facility at RA-3 reactor.
Bortolussi, S; Pinto, J M; Thorp, S I; Farias, R O; Soto, M S; Sztejnberg, M; Pozzi, E C C; Gonzalez, S J; Gadan, M A; Bellino, A N; Quintana, J; Altieri, S; Miller, M
2011-12-01
A facility for the irradiation of a section of patients' explanted liver and lung was constructed at RA-3 reactor, Comisión Nacional de Energía Atómica, Argentina. The facility, located in the thermal column, is characterized by the possibility to insert and extract samples without the need to shut down the reactor. In order to reach the best levels of security and efficacy of the treatment, it is necessary to perform accurate dosimetry. The ability to simulate neutron flux and absorbed dose in the explanted organs, together with the experimental dosimetry, allows setting more precise and effective treatment plans. To this end, a computational model of the entire reactor was set up, and the simulations were validated against the experimental measurements performed in the facility. Copyright © 2011 Elsevier Ltd. All rights reserved.
A level-set method for two-phase flows with moving contact line and insoluble surfactant
NASA Astrophysics Data System (ADS)
Xu, Jian-Jun; Ren, Weiqing
2014-04-01
A level-set method for two-phase flows with moving contact line and insoluble surfactant is presented. The mathematical model consists of the Navier-Stokes equation for the flow field, a convection-diffusion equation for the surfactant concentration, together with the Navier boundary condition and a condition for the dynamic contact angle derived by Ren et al. (2010) [37]. The numerical method is based on the level-set continuum surface force method for two-phase flows with surfactant developed by Xu et al. (2012) [54] with some cautious treatment for the boundary conditions. The numerical method consists of three components: a flow solver for the velocity field, a solver for the surfactant concentration, and a solver for the level-set function. In the flow solver, the surface force is dealt with using the continuum surface force model. The unbalanced Young stress at the moving contact line is incorporated into the Navier boundary condition. A convergence study of the numerical method and a parametric study are presented. The influence of surfactant on the dynamics of the moving contact line is illustrated using examples. The capability of the level-set method to handle complex geometries is demonstrated by simulating a pendant drop detaching from a wall under gravity.
3D registration of surfaces for change detection in medical images
NASA Astrophysics Data System (ADS)
Fisher, Elizabeth; van der Stelt, Paul F.; Dunn, Stanley M.
1997-04-01
Spatial registration of data sets is essential for quantifying changes that take place over time in cases where the position of a patient with respect to the sensor has been altered. Changes within the region of interest can be problematic for automatic methods of registration. This research addresses the problem of automatic 3D registration of surfaces derived from serial, single-modality images for the purpose of quantifying changes over time. The registration algorithm utilizes motion-invariant, curvature-based geometric properties to derive an approximation to an initial rigid transformation to align two image sets. Following the initial registration, changed portions of the surface are detected and excluded before refining the transformation parameters. The performance of the algorithm was tested using simulation experiments. To quantitatively assess the registration, random noise at various levels, known rigid motion transformations, and analytically-defined volume changes were applied to the initial surface data acquired from models of teeth. These simulation experiments demonstrated that the calculated transformation parameters were accurate to within 1.2 percent of the total applied rotation and 2.9 percent of the total applied translation, even at the highest applied noise levels and simulated wear values.
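The rigid alignment step can be illustrated with a standard SVD-based (Kabsch) fit between matched surface points, sketched below in Python. In the paper the matches come from curvature-based features and changed regions are excluded before refinement, whereas here the correspondences are simply assumed to be known.

```python
import numpy as np

# Least-squares rigid transform between matched 3D point sets (Kabsch/Procrustes).

def rigid_transform(P, Q):
    """Return R, t minimising ||R @ P.T + t - Q.T|| for matched Nx3 points."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, q0 - R @ p0

# synthetic test: rotate/translate a point cloud, add a little noise, recover it
rng = np.random.default_rng(3)
P = rng.normal(size=(200, 3))
angle = np.radians(20.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true + 0.001 * rng.normal(size=P.shape)

R, t = rigid_transform(P, Q)
print("rotation error:", np.abs(R - R_true).max(), "translation error:", np.abs(t - t_true).max())
```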
A simulation study on Bayesian Ridge regression models for several collinearity levels
NASA Astrophysics Data System (ADS)
Efendi, Achmad; Effrihan
2017-12-01
When analyzing data with a multiple regression model, one or several predictor variables are usually omitted from the model if collinearity is present. Sometimes, however, there are reasons, for instance medical or economic ones, why all predictors are important and should be included in the model. Ridge regression is commonly used to cope with collinearity. In this modeling approach, weights for the predictor variables are used in estimating the parameters. Estimation can follow the likelihood approach; alternatively, a Bayesian version can be used. The Bayesian method has been less popular than likelihood estimation owing to difficulties such as computation, but with the recent improvement of computational methodology this caveat is no longer a serious problem. This paper discusses a simulation study for evaluating the characteristics of Bayesian Ridge regression parameter estimates. Several simulation settings are considered, based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method gives better performance for relatively small sample sizes, and for the other settings it performs similarly to the likelihood method.
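A minimal Python sketch of such a simulation setting, collinear predictors, a small sample, and Bayesian ridge versus ordinary least squares, is shown below. The correlation level, sample size and coefficients are illustrative choices, and scikit-learn's BayesianRidge stands in for the paper's estimator.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression

# Simulate strongly collinear predictors and compare OLS with Bayesian ridge.

rng = np.random.default_rng(0)
n, rho = 30, 0.95                                # small sample, strong collinearity
cov = np.full((3, 3), rho) + (1 - rho) * np.eye(3)
X = rng.multivariate_normal(np.zeros(3), cov, size=n)
beta = np.array([1.0, 0.5, -0.5])
y = X @ beta + rng.normal(scale=1.0, size=n)

ols = LinearRegression().fit(X, y)
bayes = BayesianRidge().fit(X, y)
print("true  :", beta)
print("OLS   :", np.round(ols.coef_, 3))
print("Bayes :", np.round(bayes.coef_, 3))
```

Repeating such a draw many times for each collinearity level and sample size gives the sampling distributions that a study like this compares.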
The island dynamics model on parallel quadtree grids
NASA Astrophysics Data System (ADS)
Mistani, Pouria; Guittet, Arthur; Bochkov, Daniil; Schneider, Joshua; Margetis, Dionisios; Ratsch, Christian; Gibou, Frederic
2018-05-01
We introduce an approach for simulating epitaxial growth by use of an island dynamics model on a forest of quadtree grids, and in a parallel environment. To this end, we use a parallel framework introduced in the context of the level-set method. This framework utilizes: discretizations that achieve a second-order accurate level-set method on non-graded adaptive Cartesian grids for solving the associated free boundary value problem for surface diffusion; and an established library for the partitioning of the grid. We consider the cases with: irreversible aggregation, which amounts to applying Dirichlet boundary conditions at the island boundary; and an asymmetric (Ehrlich-Schwoebel) energy barrier for attachment/detachment of atoms at the island boundary, which entails the use of a Robin boundary condition. We provide the scaling analyses performed on the Stampede supercomputer and numerical examples that illustrate the capability of our methodology to efficiently simulate different aspects of epitaxial growth. The combination of adaptivity and parallelism in our approach enables simulations that are several orders of magnitude faster than those reported in the recent literature and, thus, provides a viable framework for the systematic study of mound formation on crystal surfaces.
Dănilă, R; Gerdes, B; Ulrike, H; Domínguez Fernández, E; Hassan, I
2009-01-01
The learning curve in laparoscopic surgery may be associated with higher patient risk, which is unacceptable in the setting of kidney donation. Virtual reality simulators may increase the safety and efficiency of training in laparoscopic surgery. The aim of this study was to investigate whether the results of a training session reflect the actual skill level of transplantation surgeons and whether the simulator could differentiate laparoscopically experienced transplantation surgeons from advanced trainees. Sixteen subjects were assigned to one of two groups: 5 experienced transplantation surgeons and 11 advanced residents who had only an assistant role during transplantation. The level of performance was measured by a relative scoring system that combines single parameters assessed by the computer. The higher the level of transplantation experience of a participant, the higher the laparoscopic performance. Experienced transplantation surgeons showed statistically significantly better scores than the advanced group for time and precision parameters. Our results show that performance of the various tasks on the simulator corresponds to the respective level of experience in transplantation surgery in our research groups. This study confirms construct validity for the LapSim. It thus measures relevant skills and can be integrated in an endoscopic training and assessment curriculum for transplantation surgeons.
Kuo, Alexander S; Vijjeswarapu, Mary A; Philip, James H
2016-03-01
Inhaled induction with spontaneous respiration is a technique used for difficult airways. One of the proposed advantages is that if airway patency is lost, the anesthetic agent will spontaneously redistribute until anesthetic depth is reduced and airway patency can be recovered. There are few and conflicting clinical or experimental data regarding the kinetics of this anesthetic technique. We used computer simulation to investigate this situation. We used GasMan, a computer simulation of inhaled anesthetic kinetics. For each simulation, alveolar ventilation was initiated with a set anesthetic induction concentration. When the vessel-rich group level reached the simulation-specified airway obstruction threshold, alveolar ventilation was set to 0 to simulate complete airway obstruction. The time until the vessel-rich group anesthetic level decreased below the airway obstruction threshold was designated the time to spontaneous recovery. We varied the parameters for each simulation, exploring the use of sevoflurane and halothane, airway obstruction thresholds from 0.5 to 2 minimum alveolar concentration (MAC), anesthetic induction concentrations of 2 to 4 MAC sevoflurane and 4 to 6 MAC halothane, cardiac output 2.5 to 10 L/min, functional residual capacity 1.5 to 3.5 L, and relative vessel-rich group perfusion 67% to 85%. In each simulation, there were 3 general phases: anesthetic wash-in, obstruction and overshoot, and then slow redistribution. During the first 2 phases, there was a large gradient between the alveolar and vessel-rich group levels. Alveolar levels do not reflect vessel-rich group anesthetic levels until the late third phase. Time to spontaneous recovery varied between 35 and 749 seconds for sevoflurane and 13 and 222 seconds for halothane, depending on the simulation parameters. Halothane had a faster time to spontaneous recovery because of the lower alveolar gradient and less overshoot of the vessel-rich group, not faster redistribution. Higher airway obstruction thresholds, lower anesthetic induction concentrations, and higher cardiac output reduced the time to spontaneous recovery. To a lesser extent, decreased functional residual capacity and decreased relative vessel-rich group perfusion also reduced the time to spontaneous recovery. Spontaneous recovery after complete airway obstruction during inhaled induction is plausible, but the recovery time is highly variable and depends on the clinical and physiologic situation. These results emphasize that induction is a non-steady-state situation, thus effect-site anesthetic levels should be modeled in future research, not alveolar concentration. Finally, this study provides an example of using computer simulation to explore situations that are difficult to investigate clinically.
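The qualitative behaviour described (wash-in, obstruction with overshoot of the vessel-rich group, then slow redistribution) can be reproduced with a crude compartment sketch. The Python example below uses rough, illustrative volumes, flows and solubilities and is not GasMan's model.

```python
# Three-compartment sketch (lung gas, vessel-rich group, rest of body) of
# inhaled induction followed by complete airway obstruction: ventilation
# washes agent into the lung, perfusion carries it to the tissues, and after
# obstruction the agent redistributes from lung/VRG into the larger "rest"
# compartment until the VRG falls back below the obstruction threshold.
# All parameter values are rough illustrative assumptions.

dt = 0.1                                   # s
frc = 2.5                                  # L, lung gas volume
v_vrg, v_rest = 6.0, 40.0                  # effective tissue capacities (gas-equivalent L)
q_vrg, q_rest = 3.75 / 60, 1.25 / 60       # L/s blood flow to each compartment
lam = 0.65                                 # blood/gas partition coefficient (sevoflurane-like)
induction, threshold = 3.0, 1.0            # inspired level and obstruction threshold (MAC)

alv = vrg = rest = 0.0                     # partial pressures in MAC units
obstructed, t, t_obs = False, 0.0, None
while t < 3600.0:
    if not obstructed and vrg >= threshold:
        obstructed, t_obs = True, t        # airway lost: no further ventilation
    vent = 0.0 if obstructed else 4.0 / 60
    up_vrg = q_vrg * lam * (alv - vrg)     # lung -> VRG transfer
    up_rest = q_rest * lam * (alv - rest)  # lung -> rest-of-body transfer
    alv += dt * (vent * (induction - alv) - up_vrg - up_rest) / frc
    vrg += dt * up_vrg / v_vrg
    rest += dt * up_rest / v_rest
    if obstructed and vrg < threshold:
        print("spontaneous recovery %.0f s after obstruction" % (t - t_obs))
        break
    t += dt
```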
76 FR 5691 - Cyprodinil; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-02
....'' This includes exposure through drinking water and in residential settings, but does not include... exposure from drinking water. The Agency used screening level water exposure models in the dietary exposure analysis and risk assessment for cyprodinil in drinking water. These simulation models take into account...
75 FR 17579 - Aminopyralid; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-07
... exposure through drinking water and in residential settings, but does not include occupational exposure... from drinking water. The Agency used screening level water exposure models in the dietary exposure analysis and risk assessment for aminopyralid in drinking water. These simulation models take into account...
Simulating ice thickness and velocity evolution of Upernavik Isstrøm 1849-2017 with ISSM
NASA Astrophysics Data System (ADS)
Haubner, K.; Box, J.; Schlegel, N.; Larour, E. Y.; Morlighem, M.; Solgaard, A.; Kjeldsen, K. K.; Larsen, S. H.; Rignot, E. J.; Dupont, T. K.; Kjaer, K. H.
2017-12-01
Tidewater terminus changes have a significant influence on glacier velocity and mass balance and therefore affect Greenland's ice mass balance. Improving the representation of glacier front changes in ice sheet models helps in understanding the processes that drive glacier mass changes and improves predictions of Greenland's mass loss. We use the level-set-based moving boundary capability (Bondzio et al., 2016) included in the Ice Sheet System Model ISSM to reconstruct velocity and thickness changes on Upernavik Isstrøm, Greenland from 1849 to 2017. During the simulation, we use various data sets. For the model initialization, trim line data and an observed calving front position determine the shape of the ice surface elevation. The terminus changes are prescribed by observations. Data sets like the GIMP DEM, ArcticDEM, IceBridge surface elevation and ice surface velocities from the ESA project CCI and NASA project MEaSUREs help evaluate the simulation performance. The simulation is sensitive to the prescribed terminus changes, showing an average acceleration along the three flow lines between 50% and 190% from 1849 to 2017. Simulated ice surface velocity and elevation between 1990 and 2012 are within +/-20% of observations (GIMP, ArcticDEM, IceBridge, CCI and MEaSUREs). Simulated mass changes indicate increased dynamical ice loss from 1932 onward, amplified by increased negative SMB anomalies after 1998. More detailed information about methods and findings can be found in Haubner et al., 2017 (in TC discussion, describing simulation results between 1849-2012). Future goals are the comparison of ice surface velocity changes simulated with prescribed terminus retreat against other retreat schemes (Morlighem et al., 2016; Levermann et al., 2012; Bondzio et al., 2017) and applying the method to other tidewater glaciers.
Comparison of metal ion release from different bracket archwire combinations: an in vitro study.
Karnam, Srinivas Kumar; Reddy, A Naveen; Manjith, C M
2012-05-01
Metal ions released from orthodontic appliances may cause allergic reactions, particularly nickel and chromium ions. Hence, this study was undertaken to determine the amount of nickel, chromium, copper, cobalt and iron ions released from a simulated orthodontic appliance made of new archwires and brackets. Sixty sets of new archwire, band material, brackets and ligature wires were prepared to simulate a fixed orthodontic appliance. These sets were divided into four groups of fifteen samples each. Group 1: stainless steel rectangular archwires. Group 2: rectangular NiTi archwires. Group 3: rectangular copper NiTi archwires. Group 4: rectangular elgiloy archwires. These appliances were immersed in 50 ml of artificial saliva solution and stored in polypropylene bottles in an incubator to simulate oral conditions. After 90 days the solutions were tested for nickel, chromium, copper, cobalt and iron ions using an atomic absorption spectrophotometer. Results showed that nickel ions were released at the highest levels in all four groups, compared to all other ions, followed by iron ions. There was no significant difference in the levels of the metal ions released among the different groups. The study indicates that the use of newer brackets and newer archwires results in negligible release of metal ions from the orthodontic appliance. The measurable amount of metals released from orthodontic appliances in artificial saliva was significantly below the average dietary intake and did not reach toxic concentrations.
NASA Astrophysics Data System (ADS)
Pang, Guofei; Perdikaris, Paris; Cai, Wei; Karniadakis, George Em
2017-11-01
The fractional advection-dispersion equation (FADE) can accurately describe solute transport in groundwater, but its fractional order has to be determined a priori. Here, we employ multi-fidelity Bayesian optimization to obtain the fractional order under various conditions, and we obtain more accurate results compared to previously published data. Moreover, the present method is very efficient, as we use different levels of resolution to construct a stochastic surrogate model and quantify its uncertainty. We consider two different problem setups. In the first setup, we obtain variable fractional orders of the one-dimensional FADE, considering both synthetic and field data. In the second setup, we identify constant fractional orders of the two-dimensional FADE using synthetic data. We employ multi-resolution simulations using two-level and three-level Gaussian process regression models to construct the surrogates.
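A toy sketch of the two-level surrogate idea, assuming a simple recursive scheme in which one Gaussian process is fit to many cheap low-fidelity evaluations and a second Gaussian process models the high-fidelity residual. The test functions and sample sizes are invented; this is not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def f_low(x):   # cheap, biased approximation (hypothetical)
    return np.sin(8 * x) + 0.5 * x

def f_high(x):  # expensive "truth" (hypothetical)
    return np.sin(8 * x) + 0.5 * x + 0.3 * np.cos(20 * x)

x_lo = np.linspace(0, 1, 40).reshape(-1, 1)   # many cheap samples
x_hi = rng.uniform(0, 1, 8).reshape(-1, 1)    # few expensive samples

gp_lo = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6).fit(x_lo, f_low(x_lo).ravel())
# Second level: model the discrepancy between fidelities at the expensive points.
resid = f_high(x_hi).ravel() - gp_lo.predict(x_hi)
gp_delta = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6).fit(x_hi, resid)

x_test = np.linspace(0, 1, 200).reshape(-1, 1)
pred = gp_lo.predict(x_test) + gp_delta.predict(x_test)
print("max abs error of the two-level surrogate:", np.abs(pred - f_high(x_test).ravel()).max())
```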
NASA Astrophysics Data System (ADS)
Edvardsson, A.; Ceberg, S.
2013-06-01
The aim of this study was 1) to investigate inter-fraction set-up uncertainties for patients treated with respiratory gating for left-sided breast cancer, 2) to investigate the effect of the inter-fraction set-up on the absorbed dose distribution for the target and organs at risk (OARs) and 3) to optimize the set-up correction strategy. By acquiring multiple set-up images, the systematic set-up deviation was evaluated. The effect of the systematic set-up deviation on the absorbed dose distribution was evaluated by 1) simulation in the treatment planning system and 2) measurements with a biplanar diode array. The set-up deviations could be decreased using a no-action-level correction strategy. Not using the clinically implemented adaptive maximum likelihood factor for the gating patients resulted in a better set-up. When the uncorrected set-up deviations were simulated, the average mean absorbed dose increased from 1.38 to 2.21 Gy for the heart, from 4.17 to 8.86 Gy for the left anterior descending coronary artery and from 5.80 to 7.64 Gy for the left lung. Respiratory gating can induce systematic set-up deviations which, if not corrected for, would increase the mean absorbed dose to the OARs; they should therefore be corrected for with an appropriate correction strategy.
Controls of multi-modal wave conditions in a complex coastal setting
Hegermiller, Christie; Rueda, Ana C.; Erikson, Li H.; Barnard, Patrick L.; Antolinez, J.A.A.; Mendez, Fernando J.
2017-01-01
Coastal hazards emerge from the combined effect of wave conditions and sea level anomalies associated with storms or low-frequency atmosphere-ocean oscillations. Rigorous characterization of wave climate is limited by the availability of spectral wave observations, the computational cost of dynamical simulations, and the ability to link wave-generating atmospheric patterns with coastal conditions. We present a hybrid statistical-dynamical approach to simulating nearshore wave climate in complex coastal settings, demonstrated in the Southern California Bight, where waves arriving from distant, disparate locations are refracted over complex bathymetry and shadowed by offshore islands. Contributions of wave families and large-scale atmospheric drivers to nearshore wave energy flux are analyzed. Results highlight the variability of influences controlling wave conditions along neighboring coastlines. The universal method demonstrated here can be applied to complex coastal settings worldwide, facilitating analysis of the effects of climate change on nearshore wave climate.
Rainfall runoff modelling of the Upper Ganga and Brahmaputra basins using PERSiST.
Futter, M N; Whitehead, P G; Sarkar, S; Rodda, H; Crossman, J
2015-06-01
There are ongoing discussions about the appropriate level of complexity and the sources of uncertainty in rainfall-runoff models. Simulations for operational hydrology, flood forecasting or nutrient transport all warrant different levels of complexity in the modelling approach. More complex model structures are appropriate for simulations of land-cover-dependent nutrient transport, while more parsimonious model structures may be adequate for runoff simulation. The appropriate level of complexity also depends on data availability. Here, we use PERSiST, a simple, semi-distributed dynamic rainfall-runoff modelling toolkit, to simulate flows in the Upper Ganges and Brahmaputra rivers. We present two sets of simulations driven by single time series of daily precipitation and temperature, using simple (A) and complex (B) model structures based on uniform and hydrochemically relevant land covers, respectively. Models were compared based on ensembles of Bayesian Information Criterion (BIC) statistics. Equifinality was observed for parameters but not for model structures. Model performance was better for the more complex (B) structural representation than for the parsimonious model structure, and the results show that structural uncertainty is more important than parameter uncertainty, although the ensembles of BIC statistics suggested that neither structural representation was preferable in a statistical sense. The simulations presented here confirm that relatively simple models with limited data requirements can be used to credibly simulate flows and the water balance components needed for nutrient flux modelling in large, data-poor basins.
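A minimal sketch of the kind of BIC-based structure comparison mentioned above, assuming i.i.d. Gaussian residuals; the flow series, simulated outputs and parameter counts are placeholders.

```python
import numpy as np

def bic_gaussian(observed, simulated, n_params):
    """BIC for a least-squares fit assuming i.i.d. Gaussian residuals."""
    n = len(observed)
    rss = np.sum((observed - simulated) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

rng = np.random.default_rng(1)
obs = 100 + 10 * np.sin(np.linspace(0, 12, 365)) + rng.normal(0, 3, 365)

sim_simple  = 100 + 10 * np.sin(np.linspace(0, 12, 365))   # placeholder structure A output
sim_complex = sim_simple + rng.normal(0, 0.5, 365)          # placeholder structure B output

print("BIC, simple structure :", round(bic_gaussian(obs, sim_simple, n_params=6), 1))
print("BIC, complex structure:", round(bic_gaussian(obs, sim_complex, n_params=18), 1))  # lower is better
```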
SimGen: A General Simulation Method for Large Systems.
Taylor, William R
2017-02-03
SimGen is a stand-alone computer program that reads a script of commands to represent complex macromolecules, including proteins and nucleic acids, in a structural hierarchy that can then be viewed using an integral graphical viewer or animated through a high-level application programming interface in C++. Structural levels in the hierarchy range from α-carbon or phosphate backbones through secondary structure to domains, molecules, and multimers, with each level represented in an identical data structure that can be manipulated using the application programming interface. Unlike most coarse-grained simulation approaches, the higher-level objects represented in SimGen can be soft, allowing the lower-level objects that they contain to interact directly. The default motion simulated by SimGen is a Brownian-like diffusion that can be set to occur across all levels of representation in the hierarchy. Links can also be defined between objects, which, when combined with large high-level random movements, result in an effective search strategy for constraint satisfaction, including structure prediction from predicted pairwise distances. The implementation of SimGen makes use of the hierarchic data structure to avoid unnecessary calculation, especially for collision detection, allowing it to be simultaneously run and viewed on a laptop computer while simulating large systems of over 20,000 objects. It has been used previously to model complex molecular interactions including the motion of a myosin-V dimer "walking" on an actin fibre, RNA stem-loop packing, and the simulation of cell motion and aggregation. Several extensions to this original functionality are described. Copyright © 2016 The Francis Crick Institute. Published by Elsevier Ltd. All rights reserved.
Neurosurgery simulation in residency training: feasibility, cost, and educational benefit.
Gasco, Jaime; Holbrook, Thomas J; Patel, Achal; Smith, Adrian; Paulson, David; Muns, Alan; Desai, Sohum; Moisi, Marc; Kuo, Yong-Fan; Macdonald, Bart; Ortega-Barnett, Juan; Patterson, Joel T
2013-10-01
The effort required to introduce simulation in neurosurgery academic programs and the benefits perceived by residents have not been systematically assessed. The objective was to create a neurosurgery simulation curriculum encompassing basic and advanced skills, cadaveric dissection, cranial and spine surgery simulation, and endovascular and computerized haptic training. A curriculum with 68 core exercises per academic year was distributed in individualized sets of 30 simulations to 6 neurosurgery residents. The total number of procedures completed during the academic year was set to 180. The curriculum includes 79 simulations with physical models, 57 cadaver dissections, and 44 haptic/computerized sessions. Likert-type evaluations regarding self-perceived performance were completed after each exercise. Subject identification was blinded, apart from junior (postgraduate years 1-3) or senior resident (postgraduate years 4-6) status. Wilcoxon rank testing was used to detect differences within and between groups. One hundred eighty procedures and surveys were analyzed. Junior residents reported proficiency improvements in 82% of simulations performed (P < .001). Senior residents reported improvement in 42.5% of simulations (P < .001). Cadaver simulations accrued the highest reported benefit (71.5%; P < .001), followed by physical simulators (63.8%; P < .001) and haptic/computerized sessions (59.1%; P < .001). The initial cost is $341,978.00, with $27,876.36 in annual operational expenses. The systematic implementation of a simulation curriculum in a neurosurgery training program is feasible, is favorably regarded, and has a positive impact on trainees of all levels, particularly in junior years. All simulation forms (cadaver, physical, and haptic/computerized) have a role in different stages of learning and should be considered in the development of an educational simulation program.
78 FR 3328 - Fluroxypyr; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-16
... drinking water and in residential settings, but does not include occupational exposure. Section 408(b)(2)(C... from drinking water. The Agency used screening level water exposure models in the dietary exposure analysis and risk assessment for fluroxypyr in drinking water. These simulation models take into account...
Optimal strategy analysis based on robust predictive control for inventory system with random demand
NASA Astrophysics Data System (ADS)
Saputra, Aditya; Widowati, Sutrisno
2017-12-01
In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state-space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which yields the optimal strategy, i.e. the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed in MATLAB with generated random inventory data, where the inventory level must be controlled as closely as possible to a chosen set point. The results show that the robust predictive control model provides the optimal strategy, i.e. the optimal product volume to purchase, and that the inventory level followed the given set point.
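The following is a much simpler stand-in for the robust predictive controller described above: a certainty-equivalence, one-step receding-horizon rule that orders just enough product to steer the expected inventory to the set point under Poisson demand. All numbers are illustrative and the robustness machinery of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)
set_point = 100.0     # desired inventory level (illustrative)
mean_demand = 20.0    # expected demand per period (illustrative)

inventory = 60.0
for t in range(30):
    # Certainty-equivalence rule: assume future demand equals its mean and
    # choose the order that steers the expected next inventory to the set point.
    order = max(0.0, set_point - inventory + mean_demand)
    demand = int(rng.poisson(mean_demand))
    inventory = max(0.0, inventory + order - demand)
    print(f"t={t:2d}  order={order:6.1f}  demand={demand:3d}  inventory={inventory:6.1f}")
```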
Let's get honest about sampling.
Mobley, David L
2012-01-01
Molecular simulations see widespread and increasing use in computation and molecular design, especially within the area of molecular simulations applied to biomolecular binding and interactions, our focus here. However, force field accuracy remains a concern for many practitioners, and it is often not clear what level of accuracy is really needed for payoffs in a discovery setting. Here, I argue that despite limitations of today's force fields, current simulation tools and force fields now provide the potential for real benefits in a variety of applications. However, these same tools also provide irreproducible results which are often poorly interpreted. Continued progress in the field requires more honesty in assessment and care in evaluation of simulation results, especially with respect to convergence.
Development of Partial Discharging Simulation Test Equipment
NASA Astrophysics Data System (ADS)
Kai, Xue; Genghua, Liu; Yan, Jia; Ziqi, Chai; Jian, Lu
2017-12-01
When partial discharge training is given to recruits who lack on-site work experience, their limited skill level and improper operation create a risk of electric shock and of damage to the test equipment. The partial discharge simulation tester uses simulation technology to reproduce the partial discharge test process and its results relatively faithfully, so that operators can become familiar with the test procedure and equipment in the classroom. The teacher configures the instrument to display different partial discharge waveforms so that trainees can analyze the test results of different partial discharge types.
Design of 3D simulation engine for oilfield safety training
NASA Astrophysics Data System (ADS)
Li, Hua-Ming; Kang, Bao-Sheng
2015-03-01
To meet the demand for rapid custom development of 3D simulation systems for oilfield safety training, this paper designs and implements a 3D simulation engine based on a script-driven method, a multi-layer structure, pre-defined entity objects and high-level tools such as a scene editor, script editor and program loader. A scripting language has been defined to control the system's progress, events and operating results. A training teacher can use this engine to edit 3D virtual scenes, set the properties of entity objects and define the task logic in scripts, producing a 3D simulation training system without any programming skills. By extending the entity classes, the engine can be quickly applied to other virtual training areas.
NASA Astrophysics Data System (ADS)
Chan, Steven C.; Kahana, Ron; Kendon, Elizabeth J.; Fowler, Hayley J.
2018-03-01
The UK Met Office has previously conducted convection-permitting climate simulations over the southern UK (Kendon et al. in Nat Clim Change 4:570-576, 2014). The southern UK simulations have been followed up by a new set of northern UK simulations using the same model configuration. Here we present the mean and extreme precipitation projections from these new simulations. Relative to the southern UK, the northern UK projections show a greater summertime increase in return levels and extreme precipitation intensity in both the 1.5 km convection-permitting and 12 km convection-parameterised simulations, but this increase is against a backdrop of large decreases in summertime mean precipitation and precipitation frequency. As for the southern UK, the projected change is resolution dependent and the convection-permitting simulation projects a larger intensification. For winter, return level increases are somewhat lower than for the southern UK. Analysis of model biases highlights challenges in simulating the diurnal cycle over high terrain, sensitivity to domain size and driving-GCM biases, and quality issues with radar precipitation observations, which are relevant to the wider regional climate modelling community.
Allvin, Renée; Berndtzon, Magnus; Carlzon, Liisa; Edelbring, Samuel; Hult, Håkan; Hultin, Magnus; Karlgren, Klas; Masiello, Italo; Södersved Källestedt, Marie-Louise; Tamás, Éva
2017-01-01
Background: Medical simulation enables the design of learning activities for competency areas (e.g., communication and leadership) identified as crucial for future health care professionals. Simulation educators and medical teachers follow different career paths, and their education backgrounds and teaching contexts may be very different in a simulation setting. Although they have a key role in facilitating learning, information on the continuing professional development (pedagogical development) of simulation educators is not available in the literature. Objectives: To explore changes in experienced simulation educators' perceptions of their own teaching skills, practices, and understanding of teaching over time. Methods: A qualitative exploratory study. Fourteen experienced simulation educators participated in individual open-ended interviews focusing on their development as simulation educators. Data were analyzed using an inductive thematic analysis. Results: Marked educator development was discerned over time, expressed mainly in an altered way of thinking and acting. Five themes were identified: shifting focus, from following to utilizing a structure, setting goals, application of technology, and alignment with profession. Being confident in the role as an instructor seemed to constitute a foundation for the instructor's pedagogical development. Conclusion: Experienced simulation educators' pedagogical development was based on self-confidence in the educator role, and not on a deeper theoretical understanding of teaching and learning. This is the first clue to gain increased understanding regarding educational level and possible education needs among simulation educators, and it might generate several lines of research for further studies. PMID:28176931
Ghafouri, H R; Mosharaf-Dehkordi, M; Afzalan, B
2017-07-01
A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. The model employs the UTCHEM 9.0 software as its simulator for solving the governing equations of multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and serves as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Like the countries in the classical ICA, these provinces are optimized through the assimilation, competition, and revolution steps of the ICA. To increase population diversity, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as grid size, rock heterogeneity and the designated monitoring networks. The numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than a model employing the classical one-level ICA. Copyright © 2017 Elsevier B.V. All rights reserved.
When ecosystem services interact: crop pollination benefits depend on the level of pest control
Lundin, Ola; Smith, Henrik G.; Rundlöf, Maj; Bommarco, Riccardo
2013-01-01
Pollination is a key ecosystem service which most often has been studied in isolation although effects of pollination on seed set might depend on, and interact with, other services important for crop production. We tested three competing hypotheses on how insect pollination and pest control might jointly affect seed set: independent, compensatory or synergistic effects. For this, we performed a cage experiment with two levels of insect pollination and simulated pest control in red clover (Trifolium pratense L.) grown for seed. There was a synergistic interaction between the two services: the gain in seed set obtained when simultaneously increasing pollination and pest control outweighed the sum of seed set gains obtained when increasing each service separately. This study shows that interactions can alter the benefits obtained from service-providing organisms, and this needs to be considered to properly manage multiple ecosystem services. PMID:23269852
Stochastic evolutionary dynamics in minimum-effort coordination games
NASA Astrophysics Data System (ADS)
Li, Kun; Cong, Rui; Wang, Long
2016-08-01
The minimum-effort coordination game has recently drawn more attention because human behavior in this social dilemma is often inconsistent with the predictions of classical game theory. Here, we combine evolutionary game theory and coalescence theory to investigate this game in finite populations. Both analytic results and individual-based simulations show that effort costs play a key role in the evolution of contribution levels, in good agreement with experimental observations. Besides well-mixed populations, set-structured populations have also been taken into consideration. There we find that a large number of sets and a moderate migration rate greatly promote effort levels, especially for high effort costs.
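A bare-bones individual-based simulation in the spirit of the study above: agents in a well-mixed population play the minimum-effort game and imitate via a Fermi rule. The payoff and update parameters are arbitrary choices, not those of the paper, and set structure and migration are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N, effort_levels = 200, np.arange(1, 8)
a, cost, beta, steps = 1.0, 0.5, 5.0, 20000   # benefit, effort cost, selection strength

efforts = rng.choice(effort_levels, size=N)

def payoff(e_focal, e_partner):
    # Minimum-effort game: benefit from the minimum effort, cost from own effort.
    return a * min(e_focal, e_partner) - cost * e_focal

for _ in range(steps):
    i, j = rng.choice(N, size=2, replace=False)
    # Each agent plays one round against a random co-player to sample a payoff.
    pi_i = payoff(efforts[i], efforts[rng.integers(N)])
    pi_j = payoff(efforts[j], efforts[rng.integers(N)])
    # Fermi rule: i imitates j with probability increasing in the payoff gap.
    if rng.random() < 1.0 / (1.0 + np.exp(-beta * (pi_j - pi_i))):
        efforts[i] = efforts[j]

print("mean effort after evolution:", efforts.mean())
```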
Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation
NASA Technical Reports Server (NTRS)
Drewry, Darren T; Reynolds, Jr , Paul F; Emanuel, William R
2006-01-01
The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.
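A small sketch of the external-consistency idea described above: fine-resolution outputs are adjusted by least squares so that their aggregate matches a coarse-resolution prediction while staying close to the original fine-scale values. The two "models" here are placeholder numbers, not the forest canopy models of the case study.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical predictions of the same flux at two temporal resolutions.
fine = np.array([2.1, 2.4, 1.9, 2.8, 3.0, 2.2, 2.5])   # seven daily values
coarse_weekly_total = 18.5                               # one weekly value

def objective(x):
    # Stay close to the fine-scale prediction...
    deviation = np.sum((x - fine) ** 2)
    # ...while enforcing external consistency with the coarse aggregate.
    consistency = (np.sum(x) - coarse_weekly_total) ** 2
    return deviation + 100.0 * consistency

result = minimize(objective, x0=fine)
print("adjusted daily values:", np.round(result.x, 2))
print("weekly total:", round(result.x.sum(), 2))
```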
Robust guaranteed cost tracking control of quadrotor UAV with uncertainties.
Xu, Zhiwei; Nian, Xiaohong; Wang, Haibo; Chen, Yinsheng
2017-07-01
In this paper, a robust guaranteed cost controller (RGCC) is proposed for a quadrotor UAV system with uncertainties to address the set-point tracking problem. A sufficient condition for the existence of the RGCC is derived via the Lyapunov stability theorem. The designed RGCC not only guarantees that the whole closed-loop system is asymptotically stable but also ensures that the quadratic performance level built for the closed-loop system has an upper bound irrespective of all admissible parameter uncertainties. Then, an optimal robust guaranteed cost controller is developed to minimize the upper bound on the performance level. Simulation results verify that the presented control algorithms possess small overshoot and short settling time, with which the quadrotor is able to perform the set-point tracking task well. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Architectural-level power estimation and experimentation
NASA Astrophysics Data System (ADS)
Ye, Wu
With the emergence of a plethora of embedded and portable applications and ever-increasing integration levels, power dissipation of integrated circuits has moved to the forefront as a design constraint. Recent years have also seen a significant trend towards designs starting at the architectural (or RT) level. These trends demand accurate yet fast RT-level power estimation methodologies and tools. This thesis addresses issues and experiments associated with architectural-level power estimation. An execution-driven, cycle-accurate RT-level power simulator, SimplePower, was developed using transition-sensitive energy models. It is based on the architecture of a five-stage pipelined RISC datapath for both 0.35 μm and 0.8 μm technologies and can execute the integer subset of the SimpleScalar instruction set. SimplePower measures the energy consumed in the datapath, memory and on-chip buses. During the development of SimplePower, a partitioning power modeling technique was proposed to model the energy consumed in complex functional units. The accuracy of this technique was validated against HSPICE simulation results for a register file and a shifter. A novel, selectively gated pipeline register optimization technique was proposed to reduce the datapath energy consumption. It uses the decoded control signals to selectively gate the data fields of the pipeline registers. Simulation results show that this technique can reduce the datapath energy consumption by 18-36% for a set of benchmarks. A low-level back-end compiler optimization, register relabeling, was applied to reduce switching activity on the on-chip instruction cache data bus; its impact was evaluated with SimplePower. Results show that it can reduce the energy consumed in the instruction data buses by 3.55-16.90%. A quantitative evaluation was conducted of the impact of six state-of-the-art high-level compilation techniques on both datapath and memory energy consumption. The experimental results provide valuable insight for designers developing future power-aware compilation frameworks for embedded systems.
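A tiny illustration of the transition-sensitive principle underlying such energy models: switching energy on a bus scales with the number of bits that toggle between consecutive values. The capacitance and supply voltage below are placeholder constants, not SimplePower model parameters.

```python
def hamming(a, b):
    """Number of bit positions that differ between two bus values."""
    return bin(a ^ b).count("1")

def bus_switching_energy(values, c_load=1e-12, vdd=3.3):
    """Energy ~ 0.5 * C * Vdd^2 per toggling line (placeholder constants)."""
    toggles = sum(hamming(prev, cur) for prev, cur in zip(values, values[1:]))
    return toggles, 0.5 * c_load * vdd ** 2 * toggles

trace = [0x0000, 0xFFFF, 0xFF00, 0xFF01, 0x0F01]   # hypothetical 16-bit bus trace
toggles, energy = bus_switching_energy(trace)
print(f"{toggles} transitions, ~{energy:.2e} J")
```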
Milliren, Carly E; Evans, Clare R; Richmond, Tracy K; Dunn, Erin C
2018-06-06
Recent advances in multilevel modeling allow for modeling non-hierarchical levels (e.g., youth in non-nested schools and neighborhoods) using cross-classified multilevel models (CCMM). Current practice is to cluster samples from one context (e.g., schools) and utilize the observations however they are distributed from the second context (e.g., neighborhoods). However, it is unknown whether an uneven distribution of sample size across these contexts leads to incorrect estimates of random effects in CCMMs. Using the school and neighborhood data structure in Add Health, we examined the effect of neighborhood sample size imbalance on the estimation of variance parameters in models predicting BMI. We differentially assigned students from a given school to neighborhoods within that school's catchment area using three scenarios of (im)balance. 1000 random datasets were simulated for each of five combinations of school- and neighborhood-level variance and imbalance scenarios, for a total of 15,000 simulated data sets. For each simulation, we calculated 95% CIs for the variance parameters to determine whether the true simulated variance fell within the interval. Across all simulations, the "true" school and neighborhood variance parameters were estimated 93-96% of the time. Only 5% of models failed to capture neighborhood variance; 6% failed to capture school variance. These results suggest that there is no systematic bias in the ability of CCMM to capture the true variance parameters regardless of the distribution of students across neighborhoods. Ongoing efforts to use CCMM are warranted and can proceed without concern for the sample imbalance across contexts. Copyright © 2018 Elsevier Ltd. All rights reserved.
Evaluation of a Local Anesthesia Simulation Model with Dental Students as Novice Clinicians.
Lee, Jessica S; Graham, Roseanna; Bassiur, Jennifer P; Lichtenthal, Richard M
2015-12-01
The aim of this study was to evaluate the use of a local anesthesia (LA) simulation model in a facilitated small group setting before dental students administered an inferior alveolar nerve block (IANB) for the first time. For this pilot study, 60 dental students transitioning from preclinical to clinical education were randomly assigned to either an experimental group (N=30) that participated in a small group session using the simulation model or a control group (N=30). After administering local anesthesia for the first time, students in both groups were given questionnaires regarding levels of preparedness and confidence when administering an IANB and level of anesthesia effectiveness and pain when receiving an IANB. Students in the experimental group exhibited a positive difference on all six questions regarding preparedness and confidence when administering LA to another student. One of these six questions ("I was prepared in administering local anesthesia for the first time") showed a statistically significant difference (p<0.05). Students who received LA from students who practiced on the simulation model also experienced fewer post-injection complications one day after receiving the IANB, including a statistically significant reduction in trismus. No statistically significant difference was found in level of effectiveness of the IANB or perceived levels of pain between the two groups. The results of this pilot study suggest that using a local anesthesia simulation model may be beneficial in increasing a dental student's level of comfort prior to administering local anesthesia for the first time.
Dynamics and energetics of the solar chromosphere
NASA Astrophysics Data System (ADS)
Carlsson, Mats; Stein, Robert F.
2002-06-01
We present a summary of results from a number of observational programs carried out with the SUMER instrument on board SOHO. Most datasets show clear quasi-periodic dynamic behavior ("grains") in continuum intensities at frequencies of 3-10 mHz. Corresponding grains are seen in the intensities and velocities of neutral lines, normally with phase differences consistent with upward-propagating sound waves. We compare the observations with 1D radiation hydrodynamic simulations using MDI Doppler shifts to set the lower boundary. For continua formed in the mid-chromosphere we find that the simulations give a good match to the intensity fluctuations but that the minimum intensity is too low. We find that high-frequency acoustic waves (missing from the current simulations) are unlikely to provide the extra heating necessary because of the strong radiative damping (90-99%) of such waves in the photosphere. In continua formed in the low chromosphere the mean intensity is similar in the simulations and the observations, but the simulated fluctuations are too large. The reported findings are consistent with a picture where a basic intensity level is set by a magnetic heating process, even in the darkest internetwork areas, with superimposed intensity variations caused by acoustic waves.
NASA Astrophysics Data System (ADS)
Ferdous, Nazneen; Bhat, Chandra R.
2013-01-01
This paper proposes and estimates a spatial panel ordered-response probit model with temporal autoregressive error terms to analyze changes in urban land development intensity levels over time. Such a model structure maintains a close linkage between the land owner's decision (unobserved to the analyst) and the land development intensity level (observed by the analyst) and accommodates spatial interactions between land owners that lead to spatial spillover effects. In addition, the model structure incorporates spatial heterogeneity as well as spatial heteroscedasticity. The resulting model is estimated using a composite marginal likelihood (CML) approach that does not require any simulation machinery and that can be applied to data sets of any size. A simulation exercise indicates that the CML approach recovers the model parameters very well, even in the presence of high spatial and temporal dependence. In addition, the simulation results demonstrate that ignoring spatial dependency and spatial heterogeneity when both are actually present will lead to bias in parameter estimation. A demonstration exercise applies the proposed model to examine urban land development intensity levels using parcel-level data from Austin, Texas.
NASA Technical Reports Server (NTRS)
Perigaud, C.; Florenchie, P.
2000-01-01
In situ and satellite sea level data sets over 1980-1998 are used to estimate the interannual variations of the geostrophic zonal transport across the opening of the Northwestern Pacific boundary into the Celebes sea.
Dodging Marshmallows: Simulations to Teach Ethics
ERIC Educational Resources Information Center
Weidman, Justin; Coombs, Dawan
2016-01-01
Students had just participated in an experiential learning exercise where they played dodgeball using marshmallows, but the unique context of the game introduced ethical dilemmas that integrated ethics education into a technical education classroom setting. Research shows that "Engineering curriculum and activities at the K-12 level should be…
2006-06-01
levels of automation applied as per Figure 13. ... Models generated for this thesis were set to run for 60 minutes. To run the simulation for the set time, the analyst provides a random number seed ... The IMPRINT workload value of 60 has been used by a consensus of workload modeling SMEs to represent the 'high' threshold, while the ...
Sørensen, Jette Led; Østergaard, Doris; LeBlanc, Vicki; Ottesen, Bent; Konge, Lars; Dieckmann, Peter; Van der Vleuten, Cees
2017-01-21
Simulation-based medical education (SBME) has traditionally been conducted as off-site simulation in simulation centres. Some hospital departments also provide off-site simulation using in-house training room(s) set up for simulation away from the clinical setting, and these activities are called in-house training. In-house training facilities can be part of hospital departments and resemble simulation centres to some extent but often have less technical equipment. In situ simulation, introduced over the past decade, mainly comprises team-based activities and occurs in patient care units with healthcare professionals in their own working environment. This intentional blend of simulation and real working environments means that in situ simulation brings simulation to the real working environment and provides training where people work. In situ simulation can be either announced or unannounced, the latter also known as a drill. This article presents and discusses the design of SBME and the advantages and disadvantages of the different simulation settings, such as training in simulation centres, in-house simulation in hospital departments, and announced or unannounced in situ simulation. Non-randomised studies argue that in situ simulation is more effective for educational purposes than other types of simulation settings. Conversely, the few comparison studies that exist, either randomised or retrospective, show that the choice of setting does not seem to influence individual or team learning. However, hospital department-based simulations, such as in-house simulation and in situ simulation, lead to a gain in organisational learning. To our knowledge no studies have compared announced and unannounced in situ simulation. The literature suggests some improved organisational learning from unannounced in situ simulation; however, unannounced in situ simulation was also found to be challenging to plan and conduct, and more stressful for participants. The importance of setting, context and fidelity is discussed. Based on the current limited research, we suggest that the choice of setting for simulations does not seem to influence individual and team learning. Department-based local simulation, such as in-house simulation and especially in situ simulation, leads to gains in organisational learning. The overall objectives of simulation-based education and factors such as feasibility can help determine the choice of simulation setting.
Simulator technology as a tool for education in cardiac care.
Hravnak, Marilyn; Beach, Michael; Tuite, Patricia
2007-01-01
Assisting nurses in gaining the cognitive and psychomotor skills necessary to safely and effectively care for patients with cardiovascular disease can be challenging for educators. Ideally, nurses would have the opportunity to synthesize and practice these skills in a protected training environment before application in the dynamic clinical setting. Recently, a technology known as high fidelity human simulation was introduced, which permits learners to interact with a simulated patient. The dynamic physiologic parameters and physical assessment capabilities of the simulated patient provide for a realistic learning environment. This article describes the High Fidelity Human Simulation Laboratory at the University of Pittsburgh School of Nursing and presents strategies for using this technology as a tool in teaching complex cardiac nursing care at the basic and advanced practice nursing levels. The advantages and disadvantages of high fidelity human simulation in learning are discussed.
Computational modeling of cardiovascular response to orthostatic stress
NASA Technical Reports Server (NTRS)
Heldt, Thomas; Shim, Eun B.; Kamm, Roger D.; Mark, Roger G.
2002-01-01
The objective of this study is to develop a model of the cardiovascular system capable of simulating the short-term (< or = 5 min) transient and steady-state hemodynamic responses to head-up tilt and lower body negative pressure. The model consists of a closed-loop lumped-parameter representation of the circulation connected to set-point models of the arterial and cardiopulmonary baroreflexes. Model parameters are largely based on literature values. Model verification was performed by comparing the simulation output under baseline conditions and at different levels of orthostatic stress to sets of population-averaged hemodynamic data reported in the literature. On the basis of experimental evidence, we adjusted some model parameters to simulate experimental data. Orthostatic stress simulations are not statistically different from experimental data (two-sided test of significance with Bonferroni adjustment for multiple comparisons). Transient response characteristics of heart rate to tilt also compare well with reported data. A case study is presented on how the model is intended to be used in the future to investigate the effects of post-spaceflight orthostatic intolerance.
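A drastically simplified single-compartment sketch of the set-point reflex structure described above: a windkessel pressure equation coupled to a heart-rate adjustment proportional to the deviation of arterial pressure from its set point. Parameters are rough textbook-style placeholders, not values from the published model.

```python
# Placeholder parameters (not from the paper's model).
R, C = 1.0, 2.0            # peripheral resistance (mmHg*s/mL), compliance (mL/mmHg)
sv = 70.0                  # stroke volume (mL)
p_set, gain = 90.0, 0.02   # pressure set point (mmHg), reflex gain
hr, p = 70.0, 80.0         # initial heart rate (bpm) and arterial pressure (mmHg)
dt = 0.01                  # time step (s)

for step in range(int(120 / dt)):          # simulate 2 minutes
    inflow = sv * hr / 60.0                # mL/s delivered by the heart
    outflow = p / R                        # mL/s draining through the periphery
    p += dt * (inflow - outflow) / C       # windkessel pressure dynamics
    hr += dt * gain * (p_set - p) * 60.0   # set-point reflex acting on heart rate
    if step % int(30 / dt) == 0:
        print(f"t={step * dt:5.0f} s  P={p:6.1f} mmHg  HR={hr:5.1f} bpm")
```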
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valkenburg, Wessel; Hu, Bin, E-mail: valkenburg@lorentz.leidenuniv.nl, E-mail: hu@lorentz.leidenuniv.nl
2015-09-01
We present a description for setting initial particle displacements and field values for simulations of arbitrary metric theories of gravity, for perfect and imperfect fluids with arbitrary characteristics. We extend the Zel'dovich Approximation to nontrivial theories of gravity, and show how scale dependence implies curved particle paths, even in the entirely linear regime of perturbations. For a viable choice of Effective Field Theory of Modified Gravity, initial conditions set at high redshifts are affected at the level of up to 5% at Mpc scales, which exemplifies the importance of going beyond Λ-Cold Dark Matter initial conditions for modifications of gravity outside of the quasi-static approximation. In addition, we show initial conditions for a simulation where a scalar modification of gravity is modelled in a Lagrangian particle-like description. Our description paves the way for simulations and mock galaxy catalogs under theories of gravity beyond the standard model, crucial for progress towards precision tests of gravity and cosmology.
Benchmarking MARS (accident management software) with the Browns Ferry fire
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, S.M.; Liu, L.Y.; Raines, J.C.
1992-01-01
The MAAP Accident Response System (MARS) is user-friendly computer software developed to provide management and engineering staff with the most needed insights, during actual or simulated accidents, into the current and future conditions of the plant based on current plant data and its trends. To demonstrate the reliability of the MARS code in simulating a plant transient, MARS is being benchmarked against the available reactor pressure vessel (RPV) pressure and level data from the Browns Ferry fire. The MARS software uses the Modular Accident Analysis Program (MAAP) code as its basis to calculate plant response under accident conditions. MARS uses a limited set of plant data to initialize and track the accident progression. To perform this benchmark, a simulated set of plant data was constructed based on actual report data containing the information necessary to initialize MARS and keep track of plant system status throughout the accident progression. The initial Browns Ferry fire data were produced by performing a MAAP run to simulate the accident. The remaining accident simulation used actual plant data.
: A Scalable and Transparent System for Simulating MPI Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2010-01-01
is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of supported source-code interfaces is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, the system has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, owing to its purely discrete-event style of execution and to the scalability and efficiency of the underlying parallel discrete event simulation engine, sik. In the largest runs, it has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
Market-Based and System-Wide Fuel Cycle Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Paul Philip Hood; Scopatz, Anthony; Gidden, Matthew
This work introduces automated optimization into fuel cycle simulations in the Cyclus platform. This includes system-level optimizations, seeking a deployment plan that optimizes the performance over the entire transition, and market-level optimization, seeking an optimal set of material trades at each time step. These concepts were introduced in a way that preserves the flexibility of the Cyclus fuel cycle framework, one of its most important design principles.
Sahoo, S.; Russo, T. A.; Elliott, J.; ...
2017-05-13
Climate, groundwater extraction, and surface water flows have complex nonlinear relationships with groundwater level in agricultural regions. To better understand the relative importance of each driver and predict groundwater level change, we develop a new ensemble modeling framework based on spectral analysis, machine learning, and uncertainty analysis, as an alternative to complex and computationally expensive physical models. We apply and evaluate this new approach in the context of two aquifer systems supporting agricultural production in the United States: the High Plains aquifer (HPA) and the Mississippi River Valley alluvial aquifer (MRVA). We select input data sets by using a combination of mutual information, genetic algorithms, and lag analysis, and then use the selected data sets in a Multilayer Perceptron network architecture to simulate seasonal groundwater level change. As expected, model results suggest that irrigation demand has the highest influence on groundwater level change for a majority of the wells. The subset of groundwater observations not used in model training or cross-validation correlates strongly (R > 0.8) with model results for 88 and 83% of the wells in the HPA and MRVA, respectively. In both aquifer systems, the error in the modeled cumulative groundwater level change during testing (2003-2012) was less than 2 m over a majority of the area. Here, we conclude that our modeling framework can serve as an alternative approach to simulating groundwater level change and water availability, especially in regions where subsurface properties are unknown.
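A sketch of the regression step only, assuming synthetic lagged climate and pumping features feeding a scikit-learn multilayer perceptron; the mutual-information and genetic-algorithm input selection described above is not reproduced here, and all data are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
precip  = rng.gamma(2.0, 50.0, n)       # seasonal precipitation (mm), synthetic
pumping = rng.normal(120.0, 30.0, n)    # irrigation withdrawal, synthetic
lag_gwl = rng.normal(0.0, 1.0, n)       # previous season's level change (m), synthetic

# Synthetic target: extraction lowers the water table, recharge raises it.
dgwl = 0.004 * precip - 0.01 * pumping + 0.3 * lag_gwl + rng.normal(0, 0.1, n)

X = np.column_stack([precip, pumping, lag_gwl])
X_tr, X_te, y_tr, y_te = train_test_split(X, dgwl, random_state=0)

mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
mlp.fit(X_tr, y_tr)
print("held-out R^2:", round(mlp.score(X_te, y_te), 3))
```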
Can We Study Autonomous Driving Comfort in Moving-Base Driving Simulators? A Validation Study.
Bellem, Hanna; Klüver, Malte; Schrauf, Michael; Schöner, Hans-Peter; Hecht, Heiko; Krems, Josef F
2017-05-01
To lay the basis for studying autonomous driving comfort using driving simulators, we assessed the behavioral validity of two moving-base simulator configurations by contrasting them with a test-track setting. With increasing levels of automation, driving comfort becomes increasingly important. Simulators provide a safe environment to study perceived comfort in autonomous driving. To date, however, no studies had been conducted in relation to comfort in autonomous driving to determine the extent to which results from simulator studies can be transferred to on-road driving conditions. Participants (N = 72) experienced six differently parameterized lane-change and deceleration maneuvers and subsequently rated the comfort of each scenario. One group of participants experienced the maneuvers in a test-track setting, whereas two other groups experienced them in one of two moving-base simulator configurations. We could demonstrate relative and absolute validity for one of the two simulator configurations. Subsequent analyses revealed that the validity of the simulator depends strongly on the parameterization of the motion system. Moving-base simulation can be a useful research tool to study driving comfort in autonomous vehicles. However, our results point to a preference for subunity scaling factors for both lateral and longitudinal motion cues, which might be explained by an underestimation of speed in virtual environments. In line with previous studies, we recommend lateral- and longitudinal-motion scaling factors of approximately 50% to 60% in order to obtain valid results for both active and passive driving tasks.
Pilot-Induced Oscillation Prediction With Three Levels of Simulation Motion Displacement
NASA Technical Reports Server (NTRS)
Schroeder, Jeffery A.; Chung, William W. Y.; Tran, Duc T.; Laforce, Soren; Bengford, Norman J.
2001-01-01
Simulator motion platform characteristics were examined to determine whether the amount of motion affects pilot-induced oscillation (PIO) prediction. Five test pilots evaluated how susceptible 18 different sets of pitch dynamics were to PIOs with three different levels of simulator motion platform displacement: large, small, and none. The pitch dynamics were those of a previous in-flight experiment, some of which elicited PIOs. These in-flight results served as truth data for the simulation; as such, the in-flight experiment was replicated as closely as possible. Objective and subjective data were collected and analyzed. With large motion, PIO and handling qualities ratings matched the flight data more closely than with small motion or no motion. Also, regardless of the aircraft dynamics, large motion increased pilot confidence in assigning handling qualities ratings, reduced safety pilot trips, and lowered touchdown velocities. While both large and small motion provided a pitch rate cue of high fidelity, only large motion presented the pilot with a high-fidelity vertical acceleration cue.
The 2016 Al-Mishraq sulphur plant fire: Source and health risk area estimation
NASA Astrophysics Data System (ADS)
Björnham, Oscar; Grahn, Håkan; von Schoenberg, Pontus; Liljedahl, Birgitta; Waleij, Annica; Brännström, Niklas
2017-11-01
On October 20, 2016, Daesh (Islamic State) set fire to the Al-Mishraq sulphur production site as the battle of Mosul in northern Iraq intensified. An extensive plume of toxic sulphur dioxide and hydrogen sulphide caused widespread casualties. The intensity of the SO2 release reached levels comparable to minor volcanic eruptions, and the plume was observed by several satellites. By analyzing measurement data from instruments on the MetOp-A, MetOp-B, Aura and Suomi satellites, we estimated the time-dependent source term at 161 kilotonnes of sulphur dioxide released into the atmosphere over seven days. A long-range dispersion model was used to simulate the atmospheric transport over the Middle East. The ground-level concentrations predicted by the simulation were compared with observations from the Turkish National Air Quality Monitoring Network. Finally, a probit analysis of the simulated data provided an estimate of the health risk area, which was compared to reported urgent medical treatments.
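A generic probit dose-response calculation of the sort used to translate simulated ground-level concentrations into a health-risk probability. The coefficients and exposure values below are purely illustrative placeholders, not the ones used in the study.

```python
import numpy as np
from scipy.stats import norm

def probit_probability(conc_mg_m3, minutes, a=-6.0, b=1.0, n=2.0):
    """Classic probit form Y = a + b*ln(C^n * t); P = Phi(Y - 5).
    The coefficients a, b, n are illustrative placeholders, not values from the study."""
    y = a + b * np.log(conc_mg_m3 ** n * minutes)
    return norm.cdf(y - 5.0)

for c in (5.0, 20.0, 80.0):   # hypothetical ground-level concentrations (mg/m3)
    print(f"C={c:5.1f} mg/m3, 30 min exposure -> P(severe effect) = "
          f"{probit_probability(c, 30.0):.3f}")
```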
A Pipeline for Constructing a Catalog of Multi-method Models of Interacting Galaxies
NASA Astrophysics Data System (ADS)
Holincheck, Anthony
Galaxies represent a fundamental unit of matter for describing the large-scale structure of the universe. One of the major processes affecting the formation and evolution of galaxies is mutual interaction. These interactions can include gravitational tidal distortion, mass transfer, and even mergers. In any hierarchical model, mergers are the key mechanism in galaxy formation and evolution. Computer simulations of interacting galaxies have evolved in the last four decades from simple restricted three-body algorithms to full n-body gravity models. These codes often include sophisticated physical mechanisms such as gas dynamics, supernova feedback, and central black holes. As the level of complexity, and perhaps realism, increases, so does the amount of computational resources needed. These advanced simulations are often used in parameter studies of interactions, but they are usually only employed in an ad hoc fashion to recreate the dynamical history of specific sets of interacting galaxies, with only a few dozen or at most a few hundred sets of simulation parameters being attempted for each system. This dissertation presents a prototype pipeline for modeling specific pairs of interacting galaxies in bulk. The process begins with a simple image of the current disturbed morphology and an estimate of the distance to the system and the masses of the galaxies. With the use of an updated restricted three-body simulation code and the help of Citizen Scientists, the pipeline is able to sample hundreds of thousands of points in parameter space for each system. Through a convenient interface and an innovative scoring algorithm, the pipeline aids researchers in identifying the best set of simulation parameters. This dissertation demonstrates a successful recreation of the disturbed morphologies of 62 pairs of interacting galaxies. The pipeline also provides for examining the level of convergence and uniqueness of the dynamical properties of each system. By creating a population of models for actual systems, the current research is able to compare simulation-based and observational values on a larger scale than previous efforts. Several potential relationships between star formation rate and dynamical time since closest approach are presented.
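A stripped-down restricted three-body integration in the spirit of such fast simulation codes: massless disc particles orbit a primary point mass and respond to a perturbing galaxy on a prescribed straight-line flyby. Units, masses and the flyby orbit are arbitrary, and this is not the pipeline's code.

```python
import numpy as np

G, M1, M2 = 1.0, 1.0, 0.3          # arbitrary units; M2 is the perturbing galaxy
dt, steps, soft = 0.01, 4000, 0.01 # time step, number of steps, softening

# Massless test particles on circular rings around the primary (the "disc").
rng = np.random.default_rng(0)
r = rng.uniform(1.0, 3.0, 500)
ang = rng.uniform(0.0, 2.0 * np.pi, 500)
pos = np.column_stack([r * np.cos(ang), r * np.sin(ang)])
v_circ = np.sqrt(G * M1 / r)
vel = np.column_stack([-v_circ * np.sin(ang), v_circ * np.cos(ang)])

def accel(p, secondary):
    d1 = -p                                  # toward the primary at the origin
    d2 = secondary - p                       # toward the perturber
    a1 = G * M1 * d1 / (np.linalg.norm(d1, axis=1, keepdims=True) ** 3 + soft)
    a2 = G * M2 * d2 / (np.linalg.norm(d2, axis=1, keepdims=True) ** 3 + soft)
    return a1 + a2

for i in range(steps):
    secondary = np.array([-10.0 + 0.5 * i * dt, 4.0])   # prescribed straight-line flyby
    vel += 0.5 * dt * accel(pos, secondary)              # leapfrog kick-drift-kick
    pos += dt * vel
    vel += 0.5 * dt * accel(pos, secondary)

print("sample particle radii after the encounter:",
      np.round(np.linalg.norm(pos, axis=1)[:5], 2))
```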
Numerical aerodynamic simulation facility. Preliminary study extension
NASA Technical Reports Server (NTRS)
1978-01-01
The production of an optimized design of key elements of the candidate facility was the primary objective of this report. This was accomplished through the following tasks: (1) further develop, optimize and describe the functional description of the custom hardware; (2) delineate trade-off areas between performance, reliability, availability, serviceability, and programmability; (3) develop metrics and models for validation of the candidate system's performance; (4) conduct a functional simulation of the system design; (5) perform a reliability analysis of the system design; and (6) develop the software specifications, including a user-level high-level programming language, a correspondence between the programming language and the instruction set, and an outline of the operating system requirements.
Schwämmle, Veit; León, Ileana Rodríguez; Jensen, Ole Nørregaard
2013-09-06
Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry driven proteomics experiments are frequently performed with few biological or technical replicates due to sample-scarcity or due to duty-cycle or sensitivity constraints, or limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes on the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, including standard t test, moderated t test, also known as limma, and rank products for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. This approach is suitable for large quantitative data sets from stable isotope labeling and mass spectrometry experiments and should be applicable to large data sets of any type. An R script that implements the improved rank products algorithm and the combined analysis is available.
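A minimal illustration of a rank-product computation that simply skips missing values, as a simplification of the adapted algorithm discussed above; the simulated fold changes and the missing-value rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_reps = 1000, 3

# Simulated log fold changes; the first 20 features are truly up-regulated.
fc = rng.normal(0.0, 1.0, (n_features, n_reps))
fc[:20] += 2.0
# Introduce ~50% missing values, as in sparse proteomics data.
fc[rng.random(fc.shape) < 0.5] = np.nan

# Rank within each replicate (rank 1 = strongest up-regulation), NaNs excluded.
ranks = np.full(fc.shape, np.nan)
for j in range(n_reps):
    valid = ~np.isnan(fc[:, j])
    order = np.argsort(-fc[valid, j])
    col_ranks = np.empty(valid.sum())
    col_ranks[order] = np.arange(1, valid.sum() + 1)
    ranks[valid, j] = col_ranks / valid.sum()     # normalise to (0, 1]

# Rank product = geometric mean of the available (normalised) ranks per feature.
log_ranks = np.log(ranks)
counts = (~np.isnan(log_ranks)).sum(axis=1)
sums = np.nansum(np.where(np.isnan(log_ranks), 0.0, log_ranks), axis=1)
rp = np.where(counts > 0, np.exp(sums / np.maximum(counts, 1)), np.inf)

print("features with the smallest rank products:", np.argsort(rp)[:10])
```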
NASA Astrophysics Data System (ADS)
Ge, Zhouyang; Loiseau, Jean-Christophe; Tammisola, Outi; Brandt, Luca
2018-01-01
Aiming for the simulation of colloidal droplets in microfluidic devices, we present here a numerical method for two-fluid systems subject to surface tension and depletion forces among the suspended droplets. The algorithm is based on an efficient solver for the incompressible two-phase Navier-Stokes equations, and uses a mass-conserving level set method to capture the fluid interface. The four novel ingredients proposed here are, firstly, an interface-correction level set (ICLS) method, in which global mass conservation is achieved by performing an additional advection near the interface, with a correction velocity obtained by locally solving an algebraic equation; the approach is easy to implement in both 2D and 3D. Secondly, we report a second-order accurate geometric estimation of the curvature at the interface and, thirdly, the combination of the ghost fluid method with the fast pressure-correction approach, enabling an accurate and fast computation even for large density contrasts. Finally, we derive a hydrodynamic model for the interaction forces induced by depletion of surfactant micelles and combine it with a multiple level set approach to study short-range interactions among droplets in the presence of attracting forces.
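A drastically reduced illustration of mass-conservation bookkeeping in a level-set solver (not the ICLS correction itself, which uses a local correction velocity near the interface): after each advection step, the level-set function is shifted by a constant chosen so that the area enclosed by the zero level set returns to its initial value. Grid size, velocity field, and drop geometry below are arbitrary.

```python
import numpy as np

def area(phi, dx):
    """Area of the region phi < 0 (the drop), approximated on the grid."""
    return np.count_nonzero(phi < 0) * dx * dx

def advect(phi, u, v, dx, dt):
    """First-order upwind advection of the level-set function (periodic boundaries)."""
    dphix = np.where(u > 0, (phi - np.roll(phi, 1, axis=0)) / dx,
                            (np.roll(phi, -1, axis=0) - phi) / dx)
    dphiy = np.where(v > 0, (phi - np.roll(phi, 1, axis=1)) / dx,
                            (np.roll(phi, -1, axis=1) - phi) / dx)
    return phi - dt * (u * dphix + v * dphiy)

n = 128
dx, dt = 1.0 / n, 0.2 / n
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
phi = np.sqrt((x - 0.5) ** 2 + (y - 0.6) ** 2) - 0.15   # signed distance to a drop
u = 0.3 * np.ones_like(phi)                              # made-up constant flow
v = -0.1 * np.ones_like(phi)
target = area(phi, dx)

for _ in range(200):
    phi = advect(phi, u, v, dx, dt)
    # Global correction: bisect for a constant shift c with area(phi + c) == target.
    # (The paper instead derives an analytical source term; this is illustration only.)
    lo, hi = -dx, dx
    for _ in range(30):
        c = 0.5 * (lo + hi)
        lo, hi = (c, hi) if area(phi + c, dx) > target else (lo, c)
    phi += 0.5 * (lo + hi)

print("relative mass error:", abs(area(phi, dx) - target) / target)
```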
Energy-optimal path planning by stochastic dynamically orthogonal level-set optimization
NASA Astrophysics Data System (ADS)
Subramani, Deepak N.; Lermusiaux, Pierre F. J.
2016-04-01
A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. Based on partial differential equations, the methodology rigorously leverages the level-set equation that governs time-optimal reachability fronts for a given relative vehicle-speed function. To set up the energy optimization, the relative vehicle-speed and headings are considered to be stochastic and new stochastic Dynamically Orthogonal (DO) level-set equations are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. Numerical schemes to solve the reduced stochastic DO level-set equations are obtained, and accuracy and efficiency considerations are discussed. These reduced equations are first shown to be efficient at solving the governing stochastic level-sets, in part by comparisons with direct Monte Carlo simulations. To validate the methodology and illustrate its accuracy, comparisons with semi-analytical energy-optimal path solutions are then completed. In particular, we consider the energy-optimal crossing of a canonical steady front and set up its semi-analytical solution using an energy-time nested nonlinear double-optimization scheme. We then showcase the inner workings and nuances of the energy-optimal path planning, considering different mission scenarios. Finally, we study and discuss results of energy-optimal missions in a wind-driven barotropic quasi-geostrophic double-gyre ocean circulation.
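The deterministic core of such planning, computing first-arrival (time-optimal reachability) times from a start point through a speed field, can be sketched with a fast-marching Eikonal solver. This is only a simplified, single-realization analogue of the stochastic DO level-set machinery described above: scikit-fmm is an assumed third-party package, the speed field is invented, and folding the current into an isotropic effective speed is an oversimplification of flow advection.

```python
import numpy as np
import skfmm  # scikit-fmm: fast-marching solver (assumed dependency)

n = 200
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")

# phi < 0 marks the start location; the zero contour is the initial reachability front.
phi = np.ones((n, n))
phi[5, 5] = -1.0

# Effective speed = nominal vehicle speed plus a made-up background current pattern.
speed = 1.0 + 0.5 * np.sin(2 * np.pi * y)

# Travel-time field T satisfying |grad T| * speed = 1 (the Eikonal equation).
arrival_time = skfmm.travel_time(phi, speed, dx=1.0 / n)

# A time-optimal path to any target can then be traced by gradient descent on T.
print("arrival time at far corner:", arrival_time[-1, -1])
```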
Chau, Foo-Tim; Mok, Daniel K W; Lee, Edmond P F; Dyke, John M
2004-07-22
Restricted-spin coupled-cluster single-double plus perturbative triple excitation [RCCSD(T)] potential energy functions (PEFs) were calculated for the X (2)A" and A (2)A' states of HPCl employing the augmented correlation-consistent polarized-valence-quadruple-zeta (aug-cc-pVQZ) basis set. Further geometry optimization calculations were carried out on both electronic states of HPCl at the RCCSD(T) level with all electron and quasirelativistic effective core potential basis sets of better than the aug-cc-pVQZ quality, and also including some core electrons, in order to obtain more reliable geometrical parameters and relative electronic energy of the two states. Anharmonic vibrational wave functions of the two states of HPCl and DPCl, and Franck-Condon (FC) factors of the A (2)A'-X (2)A" transition were computed employing the RCCSD(T)/aug-cc-pVQZ PEFs. Calculated FC factors with allowance for Duschinsky rotation and anharmonicity were used to simulate the single-vibronic-level (SVL) emission spectra of HPCl and DPCl reported by Brandon et al. [J. Chem. Phys. 119, 2037 (2003)] and the chemiluminescence spectrum reported by Bramwell et al. [Chem. Phys. Lett. 331, 483 (2000)]. Comparison between simulated and observed SVL emission spectra gives the experimentally derived equilibrium geometry of the A (2)A' state of HPCl of r(e)(PCl) = 2.0035 +/- 0.0015 A, theta(e) = 116.08 +/- 0.60 degrees, and r(e)(HP) = 1.4063+/-0.0015 A via the iterative Franck-Condon analysis procedure. Comparison between simulated and observed chemiluminescence spectra confirms that the vibrational population distribution of the A (2)A' state of HPCl is non-Boltzmann, as proposed by Baraille et al. [Chem. Phys. 289, 263 (2003)].
Online monitoring of oil film using electrical capacitance tomography and level set method.
Xue, Q; Sun, B Y; Cui, Z Q; Ma, M; Wang, H X
2015-08-01
In oil-air lubrication systems, electrical capacitance tomography (ECT) provides a promising way to monitor the oil film in pipelines by reconstructing cross-sectional oil distributions in real time. In the case of a small-diameter pipe and a thin oil film, however, the thickness of the oil film is hard to observe visually, since the oil-air interface is not obvious in the reconstructed images. In addition, artifacts in the reconstructions seriously reduce the effectiveness of image segmentation techniques such as the level set method, and the standard level set method is also unsuitable for online monitoring due to its low computation speed. To address these problems, a modified level set method is developed: a distance-regularized level set evolution formulation is extended to image two-phase flow online using an ECT system; a narrowband image filter is defined to eliminate the influence of artifacts; and, exploiting the continuity of the oil distribution over time, the oil-air interface detected in one image is used as the initial contour for the detection of the subsequent frame, so that the propagation from the initial contour to the boundary is greatly accelerated, making real-time tracking possible. To verify the feasibility of the proposed method, an oil-air lubrication facility with a 4 mm inner-diameter pipe is measured in normal operation using an 8-electrode ECT system. Both simulation and experimental results indicate that the modified level set method is capable of visualizing the oil-air interface accurately online.
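A generic level-set-style segmentation of a reconstructed cross-section can be sketched with scikit-image's morphological Chan-Vese active contour. This stands in for, and is not, the paper's distance-regularized level set evolution, and the synthetic "ECT image" below is made up; as the abstract notes, in a tracking setting the previous frame's contour would be passed as the initial level set.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Synthetic stand-in for a reconstructed ECT cross-section: a thin bright oil film
# near the lower pipe wall plus smooth noise/artifacts (purely illustrative data).
n = 96
yy, xx = np.mgrid[0:n, 0:n]
r = np.sqrt((xx - n / 2) ** 2 + (yy - n / 2) ** 2)
image = np.exp(-((r - 0.45 * n) ** 2) / 30.0) * (yy > n / 2)
image += 0.1 * np.random.default_rng(1).normal(size=image.shape)

# Level-set-style segmentation of the oil region (80 iterations, default checkerboard init).
oil_mask = morphological_chan_vese(image, 80, init_level_set="checkerboard", smoothing=2)
print("segmented oil pixels:", int(oil_mask.sum()))
```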
Learning Outcomes for Cyber Defense Competitions
ERIC Educational Resources Information Center
Woszczynski, Amy B.; Green, Andrew
2017-01-01
Cyber defense competitions (CDCs) simulate a real-world environment where the competitors must protect the information assets of a fictional organization. These competitions are becoming popular at the high school and college levels, as well as in industry and governmental settings. However, there is little research to date on the learning…
CFD Validation Studies for Hypersonic Flow Prediction
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2001-01-01
A series of experiments to measure pressure and heating for code validation involving hypersonic, laminar, separated flows was conducted at the Calspan-University at Buffalo Research Center (CUBRC) in the Large Energy National Shock (LENS) tunnel. The experimental data serves as a focus for a code validation session but are not available to the authors until the conclusion of this session. The first set of experiments considered here involve Mach 9.5 and Mach 11.3 N2 flow over a hollow cylinder-flare with 30 degree flare angle at several Reynolds numbers sustaining laminar, separated flow. Truncated and extended flare configurations are considered. The second set of experiments, at similar conditions, involves flow over a sharp, double cone with fore-cone angle of 25 degrees and aft-cone angle of 55 degrees. Both sets of experiments involve 30 degree compressions. Location of the separation point in the numerical simulation is extremely sensitive to the level of grid refinement in the numerical predictions. The numerical simulations also show a significant influence of Reynolds number on extent of separation. Flow unsteadiness was easily introduced into the double cone simulations using aggressive relaxation parameters that normally promote convergence.
NASA Astrophysics Data System (ADS)
Wahl, Thomas; Jensen, Jürgen; Mudersbach, Christoph
2010-05-01
Storm surges along the German North Sea coastline led to major damage in the past and the risk of inundation is expected to increase in the course of an ongoing climate change. The knowledge of the characteristics of possible storm surges is essential for the performance of integrated risk analyses, e.g. based on the source-pathway-receptor concept. The latter includes the storm surge simulation/analyses (source), modelling of dike/dune breach scenarios (pathway) and the quantification of potential losses (receptor). In subproject 1b of the German joint research project XtremRisK (www.xtremrisk.de), a stochastic storm surge generator for the south-eastern North Sea area is developed. The input data for the multivariate model are high-resolution sea level observations from tide gauges during extreme events. Based on 25 parameters (19 sea level parameters and 6 time parameters), observed storm surge hydrographs consisting of three tides are parameterised. Following the adoption of common parametric probability distributions and a large number of Monte Carlo simulations, the final reconstruction leads to a set of 100,000 (default) synthetic storm surge events with a one-minute resolution. Such a data set can potentially serve as the basis for a large number of applications. For risk analyses, storm surges with peak water levels exceeding the design water levels are of special interest. The occurrence probabilities of the simulated extreme events are estimated based on multivariate statistics, considering the parameters "peak water level" and "fullness/intensity". In the past, most studies considered only the peak water levels during extreme events, which might not be the most important parameter in all cases. Here, a 2D Archimedean copula model is used for the estimation of the joint probabilities of the selected parameters, accounting for the structure of dependence independently of the marginal distributions. In coordination with subproject 1a, the results will be used as the input for the XtremRisK subprojects 2 to 4. The project is funded by the German Federal Ministry of Education and Research (BMBF) (Project No. 03 F 0483 B).
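For the joint-probability step, a bivariate Archimedean copula ties the marginal (non-)exceedance probabilities of peak water level and fullness/intensity together. The snippet below sketches this with a Gumbel copula, one member of the Archimedean family chosen here purely for illustration; the parameter value and marginal probabilities are placeholders, not values from the project.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel (Archimedean) copula C(u, v) for dependence parameter theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def joint_exceedance(u, v, theta):
    """P(U > u, V > v) obtained from the copula via inclusion-exclusion."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)

# Example: non-exceedance probabilities of the two storm-surge parameters
# (peak water level, fullness/intensity) at some design event -- illustrative numbers.
u, v, theta = 0.995, 0.98, 2.0
p_joint = joint_exceedance(u, v, theta)       # probability that both are exceeded
return_period = 1.0 / p_joint                 # in events (years, if annual maxima are used)
print(f"joint exceedance = {p_joint:.2e}, return period ~ {return_period:.0f}")
```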
Adaptive Set-Based Methods for Association Testing
Su, Yu-Chen; Gauderman, W. James; Kiros, Berhane; Lewinger, Juan Pablo
2017-01-01
With a typical sample size of a few thousand subjects, a single genomewide association study (GWAS) using traditional one-SNP-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. While self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly ‘adapt’ to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a LASSO based test. PMID:26707371
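A bare-bones version of the rank truncated product statistic with permutation-based significance is sketched below. It conveys the general idea only: the ARTP method additionally adapts the truncation point k (here fixed) and uses a second layer of permutations to account for that adaptation, and the p-values below are made up.

```python
import numpy as np

def rtp_statistic(pvals, k):
    """Rank truncated product: log-product of the k smallest p-values."""
    return np.sum(np.log(np.sort(pvals)[:k]))

def rtp_permutation_test(pvals_obs, null_pvals, k=5):
    """null_pvals: permutations x SNPs array of p-values from permuted phenotypes."""
    obs = rtp_statistic(pvals_obs, k)
    null = np.array([rtp_statistic(row, k) for row in null_pvals])
    # Smaller products (more extreme small p-values) are more significant.
    return (np.sum(null <= obs) + 1) / (len(null) + 1)

# Toy example with made-up per-SNP p-values and 999 permutations.
rng = np.random.default_rng(2)
pvals_obs = rng.uniform(size=20)
pvals_obs[:3] = [1e-4, 5e-4, 2e-3]           # a few truly associated SNPs
null_pvals = rng.uniform(size=(999, 20))
print("set-level p-value:", rtp_permutation_test(pvals_obs, null_pvals, k=5))
```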
Physical Uncertainty Bounds (PUB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaughan, Diane Elizabeth; Preston, Dean L.
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Design and test of a simulation system for autonomous optic-navigated planetary landing
NASA Astrophysics Data System (ADS)
Cai, Sheng; Yin, Yanhe; Liu, Yanjun; He, Fengyun
2018-02-01
In this paper, a simulation system based on a commercial projector is proposed to test optical navigation algorithms for autonomous planetary landing in laboratory scenarios. The design of the optics, mechanics, and synchronization control is carried out, and the whole simulation system is set up and tested. Through calibration of the system, two main problems are settled: synchronization between the projector and the CCD, and pixel-level shifting caused by the low repeatability of the DMD used in the projector. The experimental results show that the RMS errors of the pitch, yaw, and roll angles are 0.78', 0.48', and 2.95' compared with the theoretical calculation, which fulfills the requirement of experimental simulation for planetary landing in the laboratory.
Efficacy of Surgical Simulation Training in a Low-Income Country.
Tansley, Gavin; Bailey, Jonathan G; Gu, Yuqi; Murray, Michelle; Livingston, Patricia; Georges, Ntakiyiruta; Hoogerboord, Marius
2016-11-01
Simulation training has evolved as an important component of postgraduate surgical education and has been shown to be effective in teaching procedural skills. Despite potential benefits to low- and middle-income countries (LMICs), simulation training is predominantly used in high-income settings. This study evaluates the effectiveness of simulation training in one LMIC (Rwanda). Twenty-six postgraduate surgical trainees at the University of Rwanda (Kigali, Rwanda) and Dalhousie University (Halifax, Canada) participated in the study. Participants attended one 3-hour simulation session using a high-fidelity, tissue-based model simulating the creation of an end ileostomy. Each participant was anonymously recorded completing the assigned task at three time points: prior to, immediately following, and 90 days following the simulation training. A single blinded expert reviewer assessed performance using the Objective Structured Assessment of Technical Skill (OSATS) instrument. The mean OSATS score improvement for participants who completed all the assessments was 6.1 points [95% Confidence Interval (CI) 2.2-9.9, p = 0.005]. Improvement was sustained over a 90-day period, with a mean improvement of 4.1 points between the first and third attempts (95% CI 0.3-7.9, p = 0.038). Simulation training was effective at both study sites, though most gains occurred with junior-level learners, with a mean improvement of 8.3 points (95% CI 5.1-11.6, p < 0.001). Significant improvements were not identified for senior-level learners. This study supports the benefit of simulation in surgical training in LMICs. Skill improvements were limited to junior-level trainees. This work provides justification for investment in simulation-based curricula in Rwanda and potentially other LMICs.
A novel plant protection strategy for transient reactors
NASA Astrophysics Data System (ADS)
Bhattacharyya, Samit K.; Lipinski, Walter C.; Hanan, Nelson A.
The present plant protection system (PPS) has been defined for use in the TREAT-upgrade (TU) reactor for controlled transient operation of reactor-fuel behavior testing under simulated reactor-accident conditions. A PPS with energy-dependent trip set points lowered worst-case clad temperatures by as much as 180 K, relative to the use of conventional fixed-level trip set points. The multilayered multilevel protection strategy represents the state-of-the-art in terrestrial transient reactor protection systems, and should be applicable to multi-MW space reactors.
DHM simulation in virtual environments: a case-study on control room design.
Zamberlan, M; Santos, V; Streit, P; Oliveira, J; Cury, R; Negri, T; Pastura, F; Guimarães, C; Cid, G
2012-01-01
This paper presents the workflow developed for the application of serious games in the design of complex cooperative work settings. The project was based on ergonomic studies and the development of a control room within a participative design process. Our main concerns were the 3D virtual human representation acquired from 3D scanning, human interaction, workspace layout, and equipment designed according to ergonomics standards. Using the Unity3D platform to design the virtual environment, the virtual human model can be controlled by users in a dynamic scenario in order to evaluate the new work settings and simulate work activities. The results obtained showed that this virtual technology can drastically change the design process by improving the level of interaction between end users, managers, and the human factors team.
Zhao, Hui-Jie; Jiang, Cheng; Jia, Guo-Rui
2014-01-01
Adjacency effects may introduce errors in quantitative applications of hyperspectral remote sensing, the most significant contribution being the earth-atmosphere coupling radiance. Moreover, the surrounding relief and shadow induce strong changes in hyperspectral images acquired over rugged terrain, so the spectral characteristics are not accurately described. The radiative coupling process between the earth and the atmosphere is also more complex over rugged scenes. In order to meet the requirements of real-time processing in data simulation, an equivalent background reflectance was developed, taking into account the topography and the geometry between surroundings and targets based on the radiative transfer process. The contributions of the coupling to the signal at the sensor level were then evaluated. This approach was integrated into the sensor-level radiance simulation model and then validated by simulating a set of actual radiance data. The results show that the visual effect of the simulated images is consistent with that of the observed images, and that the spectral similarity is improved over rugged scenes. In addition, the model precision is maintained at the same level over flat scenes.
NASA Astrophysics Data System (ADS)
Taleghani, Mohammad; Sailor, David; Ban-Weiss, George A.
2016-02-01
The urban heat island impacts the thermal comfort of pedestrians in cities. In this paper, the effects of four heat mitigation strategies on micrometeorology and the thermal comfort of pedestrians were simulated for a neighborhood in eastern Los Angeles County. The strategies investigated include solar reflective ‘cool roofs’, vegetative ‘green roofs’, solar reflective ‘cool pavements’, and increased street-level trees. A series of micrometeorological simulations for an extreme heat day were carried out assuming widespread adoption of each mitigation strategy. Comparing each simulation to the control simulation assuming current land cover for the neighborhood showed that additional street-trees and cool pavements reduced 1.5 m air temperature, while cool and green roofs mostly provided cooling at heights above pedestrian level. However, cool pavements increased reflected sunlight from the ground to pedestrians at a set of unshaded receptor locations. This reflected radiation intensified the mean radiant temperature and consequently increased physiological equivalent temperature (PET) by 2.2 °C during the day, reducing the thermal comfort of pedestrians. At another set of receptor locations that were on average 5 m from roadways and underneath preexisting tree cover, cool pavements caused significant reductions in surface air temperatures and small changes in mean radiant temperature during the day, leading to decreases in PET of 1.1 °C, and consequent improvements in thermal comfort. For improving thermal comfort of pedestrians during the afternoon in unshaded locations, adding street trees was found to be the most effective strategy. However, afternoon thermal comfort improvements in already shaded locations adjacent to streets were most significant for cool pavements. Green and cool roofs showed the lowest impact on the thermal comfort of pedestrians since they modify the energy balance at roof level, above the height of pedestrians.
Spacecraft Data Simulator for the test of level zero processing systems
NASA Technical Reports Server (NTRS)
Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem
1994-01-01
The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets up to 5 Gigabytes and outputting serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data System (CCSDS) Version 1 & 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial to test LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.
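As a flavor of the telemetry formatting such a simulator must produce, the snippet below packs the six-byte CCSDS space packet primary header. The field widths follow the standard layout, but the APID, sequence count, and payload are arbitrary test values, and this is only an illustrative fragment, not the SDS software.

```python
import struct

def ccsds_primary_header(apid, seq_count, data_length, version=0, pkt_type=0,
                         sec_hdr=0, seq_flags=0b11):
    """Pack the 6-byte CCSDS space packet primary header.
    data_length is the packet data field length in bytes (stored as length - 1)."""
    word1 = (version << 13) | (pkt_type << 12) | (sec_hdr << 11) | (apid & 0x7FF)
    word2 = (seq_flags << 14) | (seq_count & 0x3FFF)
    word3 = (data_length - 1) & 0xFFFF
    return struct.pack(">HHH", word1, word2, word3)

# Build a small test packet: header followed by a dummy payload.
payload = bytes(range(32))
packet = ccsds_primary_header(apid=0x1AB, seq_count=7, data_length=len(payload)) + payload
print(len(packet), "bytes:", packet[:6].hex())
```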
Binary black hole merger dynamics and waveforms
NASA Technical Reports Server (NTRS)
Baker, John G.; Centrella, Joan; Choi, Dae-II; Koppitz, Michael; vanMeter, James
2006-01-01
We apply recently developed techniques for simulations of moving black holes to study dynamics and radiation generation in the last few orbits and merger of a binary black hole system. Our analysis produces a consistent picture from the gravitational wave forms and dynamical black hole trajectories for a set of simulations with black holes beginning on circular-orbit trajectories at a variety of initial separations. We find profound agreement at the level of 1% among the simulations for the last orbit, merger and ringdown, resulting in a final black hole with spin parameter a/m = 0.69. Consequently, we are confident that this part of our waveform result accurately represents the predictions from Einstein's General Relativity for the final burst of gravitational radiation resulting from the merger of an astrophysical system of equal-mass non-spinning black holes. We also find good agreement at a level of roughly 10% for the radiation generated in the preceding few orbits.
Volatility, house edge and prize structure of gambling games.
Turner, Nigel E
2011-12-01
This study used simulations to examine the effect of prize structure on the outcome volatility and the number of winners of various game configurations. The two most common prize structures found in gambling games are even-money payoff games (bet $1; win $2), found in most table games, and multilevel prize structures, found in gambling machine games. Simulations were set up to examine the effect of prize structure on the long-term outcomes of these games. Eight different prize structures were compared in terms of the number of winners and volatility. It was found that the standard table game and commercial gambling machines produced fairly high numbers of short-term winners (1 h), but few long-term winners (50 h). The typical even-money game setup produced the lowest level of volatility. Of the multilevel prize structures examined, the three simulations based on commercial gambling machines were the least volatile. The results are examined in terms of the pragmatics of game design.
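The simulation logic is easy to reproduce in outline: play a game repeatedly under a given prize structure and track how many simulated players are ahead after a session, along with the per-bet volatility. The structures and session lengths below are invented for illustration and do not match the paper's eight configurations.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_players(prizes, probs, bets_per_session, n_players=10_000, stake=1.0):
    """Return (fraction of players ahead after a session, std. dev. of per-bet outcomes)."""
    outcomes = rng.choice(prizes, size=(n_players, bets_per_session), p=probs) - stake
    session_net = outcomes.sum(axis=1)
    return (session_net > 0).mean(), outcomes.std()

# Even-money style game with roughly a 5% house edge (win 2.0 with p = 0.475).
even_money = simulate_players(prizes=[2.0, 0.0], probs=[0.475, 0.525],
                              bets_per_session=600)

# Multilevel (machine-like) structure with a similar edge but a rare large prize.
multilevel = simulate_players(prizes=[500.0, 10.0, 2.0, 0.0],
                              probs=[0.0002, 0.02, 0.3225, 0.6573],
                              bets_per_session=600)
print("even money:", even_money, "multilevel:", multilevel)
```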
Cookbook Recipe to Simulate Seawater Intrusion with Standard MODFLOW
NASA Astrophysics Data System (ADS)
Schaars, F.; Bakker, M.
2012-12-01
We developed a cookbook recipe to simulate steady interface flow in multi-layer coastal aquifers with regular groundwater codes such as standard MODFLOW. The main step in the recipe is a simple transformation of the hydraulic conductivities and thicknesses of the aquifers. Standard groundwater codes may be applied to compute the head distribution in the aquifer using the transformed parameters. For example, for flow in a single unconfined aquifer, the hydraulic conductivity needs to be multiplied by 41 and the base of the aquifer needs to be set to mean sea level (for a relative seawater density of 1.025). Once the head distribution is obtained, the Ghijben-Herzberg relationship is applied to compute the depth of the interface. The recipe may be applied to quite general settings, including spatially variable aquifer properties. Any standard groundwater code may be used, as long as it can simulate unconfined flow where the transmissivity is a linear function of the head. The proposed recipe is benchmarked successfully against a number of analytic and numerical solutions.
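The two steps of the recipe for the single unconfined aquifer example above reduce to a few lines: transform the aquifer parameters before the MODFLOW run, then convert the computed heads to interface depths with the Ghijben-Herzberg relationship. The conductivity and head values are illustrative only; the density ratio of 1.025 gives the factor 40, and hence the 41 multiplier quoted in the abstract.

```python
# Densities of fresh and sea water (relative density 1.025, as in the abstract).
rho_f, rho_s = 1000.0, 1025.0
alpha = rho_f / (rho_s - rho_f)           # = 40

# Step 1: transform the unconfined aquifer for the standard MODFLOW run.
k = 10.0                                   # hydraulic conductivity, m/d (illustrative)
k_transformed = k * (1 + alpha)            # multiply by 41
aquifer_base = 0.0                         # set the base of the aquifer to mean sea level

# Step 2: after the model run, convert the computed head to interface depth.
head = 0.75                                # computed head above sea level, m (illustrative)
interface_depth_below_sea_level = alpha * head   # Ghijben-Herzberg: 40 x head
print(k_transformed, interface_depth_below_sea_level)
```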
Roberts, Nicole K; Williams, Reed G; Schwind, Cathy J; Sutyak, John A; McDowell, Christopher; Griffen, David; Wall, Jarrod; Sanfey, Hilary; Chestnut, Audra; Meier, Andreas H; Wohltmann, Christopher; Clark, Ted R; Wetter, Nathan
2014-02-01
Communication breakdowns and care coordination problems often cause preventable adverse patient care events, which can be especially acute in the trauma setting, in which ad hoc teams have little time for advanced planning. Existing teamwork curricula do not address the particular issues associated with ad hoc emergency teams providing trauma care. Ad hoc trauma teams completed a preinstruction simulated trauma encounter and were provided with instruction on appropriate team behaviors and team communication. Teams completed a postinstruction simulated trauma encounter immediately afterward and 3 weeks later, then completed a questionnaire. Blinded raters rated videotapes of the simulations. Participants expressed high levels of satisfaction and intent to change practice after the intervention. Participants changed teamwork and communication behavior on the posttest, and changes were sustained after a 3-week interval, though there was some loss of retention. Brief training exercises can change teamwork and communication behaviors on ad hoc trauma teams. Copyright © 2014 Elsevier Inc. All rights reserved.
Mena, Carlos F.; Walsh, Stephen J.; Frizzelle, Brian G.; Xiaozheng, Yao; Malanson, George P.
2010-01-01
This paper describes the design and implementation of an Agent-Based Model (ABM) used to simulate land use change on household farms in the Northern Ecuadorian Amazon (NEA). The ABM simulates decision-making processes at the household level, informed by a longitudinal socio-economic and demographic survey conducted in 1990 and 1999. Geographic Information Systems (GIS) are used to establish spatial relationships between farms and their environment, while classified Landsat Thematic Mapper (TM) imagery is used to set initial land use/land cover conditions for the spatial simulation, assess from-to land use/land cover change patterns, and describe trajectories of land use change at the farm and landscape levels. Results from prior studies in the NEA provide insights into the key social and ecological variables, describe human behavioral functions, and examine population-environment interactions that are linked to deforestation and agricultural extensification, population migration, and demographic change. Within the architecture of the model, agents are classified as active or passive. The model comprises four modules, i.e., initialization, demography, agriculture, and migration, that operate individually but are linked through key household processes. The main outputs of the model include a spatially-explicit representation of the land use/land cover on survey and non-survey farms and at the landscape level for each annual time-step, as well as simulated socio-economic and demographic characteristics of households and communities. The work describes the design and implementation of the model and how population-environment interactions can be addressed in a frontier setting. The paper contributes to land change science by examining important pattern-process relations, advocating a spatial modeling approach that is capable of synthesizing fundamental relationships at the farm level, and linking people and environment in complex ways. PMID:24436501
NASA Astrophysics Data System (ADS)
Shin, Henry; Suresh, Nina L.; Zev Rymer, William; Hu, Xiaogang
2018-02-01
Objective. Chronic muscle weakness impacts the majority of individuals after a stroke. The origins of this hemiparesis are multifaceted, and an altered spinal control of the motor unit (MU) pool can lead to muscle weakness. However, the relative contribution of different MU recruitment and discharge organization is not well understood. In this study, we sought to examine these different effects by utilizing an MU simulation with variations set to mimic the changes of MU control in stroke. Approach. Using a well-established model of the MU pool, this study quantified the changes in force output caused by changes in MU recruitment range and recruitment order, as well as MU firing rate organization at the population level. We additionally expanded the original model to include a fatigue component, which variably decreased the output force with increasing length of contraction. Differences in the force output at both the peak and fatigued time points across different excitation levels were quantified and compared across different sets of MU parameters. Main results. Across the different simulation parameters, we found that the main driving factor of the reduced force output was the compressed range of MU recruitment. Recruitment compression caused a decrease in total force across all excitation levels. Additionally, a compression of the range of MU firing rates also demonstrated a decrease in the force output, mainly at the higher excitation levels. Lastly, changes to the recruitment order of MUs appeared to minimally impact the force output. Significance. We found that altered control of MUs alone, as simulated in this study, can lead to a substantial reduction in muscle force generation in stroke survivors. These findings may provide valuable insight for both clinicians and researchers in prescribing and developing different types of therapies for the rehabilitation and restoration of lost strength after stroke.
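The kind of population-level manipulation described here can be sketched with a highly simplified motor-unit pool, loosely in the spirit of classical recruitment/rate-coding models and not the authors' implementation: exponentially spaced recruitment thresholds and twitch amplitudes, linear rate coding above threshold, and a force score taken as a rate-weighted twitch sum. Parameters such as the top of the recruitment range or the firing-rate ceiling can then be altered to probe their effect on the force score; all numbers below are arbitrary.

```python
import numpy as np

def pool_force(excitation, n_units=120, recruit_max=0.6,
               rate_min=8.0, rate_max=35.0, gain=40.0):
    """Summed output of a toy motor-unit pool at a given excitation in [0, 1].
    recruit_max is the excitation at which the last unit is recruited."""
    i = np.arange(n_units)
    # Exponentially spaced recruitment thresholds and twitch amplitudes (small to large units).
    thresholds = recruit_max * np.exp(np.log(100.0) * i / (n_units - 1)) / 100.0
    twitch = np.exp(np.log(50.0) * i / (n_units - 1))
    rates = np.where(excitation >= thresholds,
                     np.minimum(rate_min + gain * (excitation - thresholds), rate_max),
                     0.0)
    return float(np.sum(rates * twitch))   # crude force proxy (no fusion/force-frequency curve)

# Force proxy at a few excitation levels for a nominal pool; recruit_max, rate_max, or the
# threshold ordering are the knobs one would vary to mimic altered MU control.
for e in (0.25, 0.5, 0.75, 1.0):
    print(f"excitation {e:.2f}: force = {pool_force(e):8.1f} (arbitrary units)")
```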
Radio-Frequency Tank Eigenmode Sensor for Propellant Quantity Gauging
NASA Technical Reports Server (NTRS)
Zimmerli, Gregory A.; Buchanan, David A.; Follo, Jeffrey C.; Vaden, Karl R.; Wagner, James D.; Asipauskas, Marius; Herlacher, Michael D.
2010-01-01
Although there are several methods for determining liquid level in a tank, there are no proven methods to quickly gauge the amount of propellant in a tank while it is in low gravity or under low-settling thrust conditions where propellant sloshing is an issue. Having the ability to quickly and accurately gauge propellant tanks in low gravity is an enabling technology that would allow a spacecraft crew or mission control to always know the amount of propellant onboard, thus increasing the chances for a successful mission. The Radio Frequency Mass Gauge (RFMG) technique measures the electromagnetic eigenmodes, or natural resonant frequencies, of a tank containing a dielectric fluid. The essential hardware components consist of an RF network analyzer that measures the reflected power from an antenna probe mounted internal to the tank. At a resonant frequency, there is a drop in the reflected power, and these inverted peaks in the reflected power spectrum are identified as the tank eigenmode frequencies using a peak-detection software algorithm. This information is passed to a pattern-matching algorithm, which compares the measured eigenmode frequencies with a database of simulated eigenmode frequencies at various fill levels. A best match between the simulated and measured frequency values occurs at some fill level, which is then reported as the gauged fill level. The database of simulated eigenmode frequencies is created by using RF simulation software to calculate the tank eigenmodes at various fill levels. The input to the simulations consists of a fairly high-fidelity tank model with proper dimensions and including internal tank hardware, the dielectric properties of the fluid, and a defined liquid/vapor interface. Because of small discrepancies between the model and actual hardware, the measured empty-tank spectra and simulations are used to create a set of correction factors for each mode (typically in the range of 0.999-1.001), which effectively accounts for the small discrepancies. These correction factors are multiplied to the modes at all fill levels. By comparing several measured modes with the simulations, it is possible to accurately gauge the amount of propellant in the tank. An advantage of the RFMG approach of applying computer simulations and a pattern-matching algorithm is that the
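The pattern-matching step described above reduces to finding the simulated fill level whose corrected mode frequencies best match the measured ones; a least-squares version is sketched below with entirely made-up numbers (mode count, frequency trends, and fill-level grid are illustrative, not RFMG data).

```python
import numpy as np

def gauge_fill_level(measured, simulated, fill_levels, empty_measured, empty_simulated):
    """Return the fill level whose simulated eigenmode frequencies best match the
    measured ones.  simulated: levels x modes array; per-mode correction factors are
    derived from the empty-tank comparison, as described in the abstract."""
    correction = empty_measured / empty_simulated       # one factor per mode, ~0.999-1.001
    corrected = simulated * correction                  # apply to every fill level
    residuals = np.sum((corrected - measured) ** 2, axis=1)
    return fill_levels[np.argmin(residuals)]

# Illustrative database: 4 tracked modes at 0-100% fill in 5% steps (frequencies in GHz).
fill_levels = np.arange(0, 105, 5)
rng = np.random.default_rng(4)
simulated = 1.0 + 0.02 * np.arange(4) - 0.001 * fill_levels[:, None]  # modes drop with fill
empty_simulated = simulated[0]
empty_measured = empty_simulated * (1 + rng.normal(scale=5e-4, size=4))
measured = simulated[7] * (empty_measured / empty_simulated)          # "truth" at 35% fill
print("gauged fill level:",
      gauge_fill_level(measured, simulated, fill_levels, empty_measured, empty_simulated), "%")
```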
NASA Technical Reports Server (NTRS)
Maiorano, Andrea; Martre, Pierre; Asseng, Senthold; Ewert, Frank; Mueller, Christoph; Roetter, Reimund P.; Ruane, Alex C.; Semenov, Mikhail A.; Wallach, Daniel; Wang, Enli
2016-01-01
To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of models needed in a MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size using detailed field experimental data from the USDA Hot Serial Cereal experiment (calibration data set). Simulation results from before and after model improvement were then evaluated with independent field experiments from a CIMMYT worldwide field trial network (evaluation data set). Model improvements decreased the variation (10th to 90th model ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures greater than 24 C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME uncertainty range by 27% increased MME prediction skills by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model-based impact assessments and allow more practical, i.e. smaller MMEs to be used effectively.
One doll fits all: validation of the Leiden Infant Simulator Sensitivity Assessment (LISSA).
Voorthuis, Alexandra; Out, Dorothée; van der Veen, Rixt; Bhandari, Ritu; van IJzendoorn, Marinus H; Bakermans-Kranenburg, Marian J
2013-01-01
Children vary hugely in how demanding of their caregivers they are. This creates differences in demands on parents during observation, making the comparison of sensitivity between parents difficult. It would therefore be of interest to create standard situations in which all caregivers are faced with the same level of demand. This study developed an ecologically valid but standardized setting using an infant simulator with interactive features, the Leiden Infant Simulator Sensitivity Assessment (LISSA). The infant simulator resembles a real infant in appearance and it produces crying sounds that are life-like. The simulator begins with fussing and progresses to more intense crying in case of no care or inappropriate care. It responds by being calm again if appropriate care is given. One hundred and eighty-one female participants took care of the infant simulator for two evenings and in a 30 min lab session with increasing competing demands. Sensitive parenting behavior during the lab session was coded with the Ainsworth Sensitivity Scale. Sensitivity ratings covered the whole range of the scale (1-9), and were stable across settings (free play, competing demands). Sensitivity was related to an increase of positive affect during caretaking, and insensitivity was related to intended harsh caregiving response during a computerized cry paradigm. Sensitivity was unrelated to social desirability and self-reported quality of care given to the infant simulator. We discuss the potentials of the infant simulator for research on sensitive parenting, for preventive interventions, and for clinical practices.
Establishing a convention for acting in healthcare simulation: merging art and science.
Sanko, Jill S; Shekhter, Ilya; Kyle, Richard R; Di Benedetto, Stephen; Birnbach, David J
2013-08-01
Among the most powerful tools available to simulation instructors is a confederate. Although technical and logical realism is dictated by the simulation platform and setting, the quality of role playing by confederates strongly determines psychological or emotional fidelity of simulation. The highest level of realism, however, is achieved when the confederates are properly trained. Theater and acting methodology can provide simulation educators a framework from which to establish an acting convention specific to the discipline of healthcare simulation. This report attempts to examine simulation through the lens of theater arts and represents an opinion on acting in healthcare simulation for both simulation educators and confederates. It aims to refine the practice of simulation by embracing the lessons of the theater community. Although the application of these approaches in healthcare education has been described in the literature, a systematic way of organizing, publicizing, or documenting the acting within healthcare simulation has never been completed. Therefore, we attempt, for the first time, to take on this challenge and create a resource, which infuses theater arts into the practice of healthcare simulation.
NASA Astrophysics Data System (ADS)
Ekberg, Joakim; Timpka, Toomas; Morin, Magnus; Jenvald, Johan; Nyce, James M.; Gursky, Elin A.; Eriksson, Henrik
Computer simulations have emerged as important tools in preparing for outbreaks of infectious disease. To support collaborative planning for, and response to, outbreaks, reports from simulations need to be transparent (accessible) with regard to the underlying parametric settings. This paper presents a design for the generation of simulation reports in which the background settings used in the simulation models are automatically visualized. We extended the ontology-management system Protégé to tag different settings into categories, and included these in report generation in parallel with the simulation outcomes. The report generator takes advantage of an XSLT specification and collects the documentation of the particular simulation settings into abridged XML documents that also include summarized results. We conclude that even though the inclusion of critical background settings in reports may not increase the accuracy of infectious disease simulations, it can prevent misunderstandings and less-than-optimal public health decisions.
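The report-generation step, applying an XSLT specification to an XML document of categorized simulation settings, might look like the sketch below using the lxml package. The package choice, tag names, and setting values are all hypothetical and are not taken from the system described above.

```python
from lxml import etree

# Hypothetical XML of categorized simulation settings exported alongside the results.
settings_xml = etree.XML(
    b"""<simulation>
          <setting category="population"><name>household size</name><value>2.4</value></setting>
          <setting category="disease"><name>R0</name><value>1.6</value></setting>
        </simulation>"""
)

# Minimal XSLT that lists every setting with its category for inclusion in the report.
transform = etree.XSLT(etree.XML(
    b"""<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/simulation">
            <report>
              <xsl:for-each select="setting">
                <line><xsl:value-of select="@category"/>: <xsl:value-of select="name"/> =
                      <xsl:value-of select="value"/></line>
              </xsl:for-each>
            </report>
          </xsl:template>
        </xsl:stylesheet>"""
))

print(str(transform(settings_xml)))
```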
Determination of carboxyhaemoglobin in humans following low-level exposures to carbon monoxide.
Gosselin, Nathalie H; Brunet, Robert C; Carrier, Gaétan
2009-11-01
This study proposes to estimate carboxyhaemoglobin (COHb) levels in the blood of men and women of various ages exposed to common concentrations of carbon monoxide (CO) using a model with only one free parameter while integrating alveoli-blood and blood-tissue CO exchanges. The model retained is essentially that of Coburn et al. (1965) with two important additions: an alveoli compartment for the dynamics of CO exchanges between alveoli and blood, and a compartment for the significant amounts of CO bound to heme proteins in extravascular spaces. The model was validated by comparing its simulations with various published data sets for the COHb time profiles of volunteers exposed to known CO concentrations. Once the model was validated, it was used to simulate various situations of interest for their impact on public health. This approach yields reliable estimations of the time profiles of COHb levels resulting from different levels of CO exposure over various periods of time and under various conditions (resting, exercise, working, and smoking). The non-linear kinetics of CO, observed experimentally, were correctly reproduced by simulations with the model. Simulations were also carried out iteratively to determine the exposure times and CO concentrations in ambient air needed to reach the maximum levels of COHb recommended by Health Canada, the U.S. Environmental Protection Agency (EPA), and the World Health Organisation (WHO) for each age group of the general population. The lowest CO concentrations leading to maximum COHb levels of 1.5, 2, and 2.5% were determined.
Microbially Mediated Kinetic Sulfur Isotope Fractionation: Reactive Transport Modeling Benchmark
NASA Astrophysics Data System (ADS)
Wanner, C.; Druhan, J. L.; Cheng, Y.; Amos, R. T.; Steefel, C. I.; Ajo Franklin, J. B.
2014-12-01
Microbially mediated sulfate reduction is a ubiquitous process in many subsurface systems. Isotopic fractionation is characteristic of this anaerobic process, since sulfate-reducing bacteria (SRB) favor the reduction of the lighter sulfate isotopologue (³²SO₄²⁻) over the heavier isotopologue (³⁴SO₄²⁻). Detection of isotopic shifts has been utilized as a proxy for the onset of sulfate reduction in subsurface systems such as oil reservoirs and aquifers undergoing uranium bioremediation. Reactive transport modeling (RTM) of kinetic sulfur isotope fractionation has been applied to field and laboratory studies. These RTM approaches employ different mathematical formulations in the representation of kinetic sulfur isotope fractionation. In order to test the various formulations, we propose a benchmark problem set for the simulation of kinetic sulfur isotope fractionation during microbially mediated sulfate reduction. The benchmark problem set comprises four problem levels and is based on a recent laboratory column experimental study of sulfur isotope fractionation. Pertinent processes impacting sulfur isotopic composition, such as microbial sulfate reduction and dispersion, are included in the problem set. To date, participating RTM codes are: CRUNCHTOPE, TOUGHREACT, MIN3P and THE GEOCHEMIST'S WORKBENCH. Preliminary results from various codes show reasonable agreement for the problem levels simulating sulfur isotope fractionation in 1D.
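The isotopic signal that such benchmark codes must reproduce follows, in the simplest closed-system limit, a Rayleigh distillation law. The snippet below evaluates it for an assumed kinetic fractionation factor; the initial composition and fractionation factor are illustrative and are not the benchmark problem-set values.

```python
import numpy as np

def rayleigh_delta(delta0, f, alpha):
    """delta34S (per mil) of the residual sulfate when a fraction f remains,
    for a kinetic fractionation factor alpha = k34/k32 (< 1 for SRB)."""
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

f = np.linspace(1.0, 0.05, 20)                                   # fraction of sulfate remaining
delta_residual = rayleigh_delta(delta0=5.0, f=f, alpha=0.985)    # ~15 per mil enrichment factor
for fi, d in zip(f, delta_residual):
    print(f"f = {fi:.2f}  delta34S = {d:6.1f} per mil")
```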
Contact Analog/Compressed Symbology Heading Tape Assessment
NASA Technical Reports Server (NTRS)
Shively, R. Jay; Atencio, Adolph; Turpin, Terry; Dowell, Susan
2002-01-01
A simulation assessed the performance, handling qualities and workload associated with a contact-analog, world-referenced heading tape as implemented on the Comanche Helmet Integrated Display Sight System (HIDSS) when compared with a screen-fixed, compressed heading tape. Six pilots, four active duty Army Aviators and two civilians flew three ADS-33 maneuvers and a traffic pattern in the Ames Vertical Motion Simulation facility. Small, but statistically significant advantages were found for the compressed symbology for handling qualities, workload, and some of the performance measures. It should be noted however that the level of performance and handling qualities for both symbology sets fell within the acceptable tolerance levels. Both symbology sets yield satisfactory handling qualities and performance in velocity stabilization mode and adequate handling qualities in the automatic flight control mode. Pilot comments about the contact analog symbology highlighted the lack of useful rate of change information in the heading tape and "blurring" due to the rapid movement of the heading tape. These issues warrant further study. Care must be taken in interpreting the operational significance of these results. The symbology sets yielded categorically similar data, i.e., acceptable handling qualities and adequate performance, so while the results point to the need for further study, their operational significance has yet to be determined.
Quantum chemical calculations of glycine glutaric acid
NASA Astrophysics Data System (ADS)
Arioğlu, Çağla; Tamer, Ömer; Avci, Davut; Atalay, Yusuf
2017-02-01
Density functional theory (DFT) calculations of glycine glutaric acid were performed at the B3LYP level with the 6-311++G(d,p) basis set. The theoretical structural parameters such as bond lengths and bond angles are in good agreement with the experimental values of the title compound. HOMO and LUMO energies were calculated, and the obtained energy gap shows that charge transfer occurs in the title compound. Vibrational frequencies were calculated and compared with experimental ones. 3D molecular surfaces of the title compound were simulated using the same level and basis set. Finally, the 13C and 1H NMR chemical shift values were calculated by applying the gauge-independent atomic orbital (GIAO) method.
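A calculation of this general type (B3LYP/6-311++G(d,p) optimization followed by reading HOMO/LUMO energies) can be set up in a few lines with an open-source package such as Psi4. The tool chain, the PubChem-sourced glycine geometry, and the memory setting are assumptions for illustration; they are not the software or inputs used in the paper, and the glutaric acid co-former is omitted for brevity.

```python
import psi4

psi4.set_memory("2 GB")
# Pull a glycine geometry from PubChem for convenience (a real study would build the
# full glycine/glutaric acid fragment and compare against the crystal structure).
mol = psi4.geometry("pubchem:glycine")

# B3LYP/6-311++G(d,p) geometry optimization, returning the wavefunction as well.
energy, wfn = psi4.optimize("b3lyp/6-311++g(d,p)", molecule=mol, return_wfn=True)

# HOMO/LUMO from the alpha orbital energies (hartree), then the gap in eV.
homo = wfn.epsilon_a().get(wfn.nalpha() - 1)
lumo = wfn.epsilon_a().get(wfn.nalpha())
print("HOMO-LUMO gap (eV):", (lumo - homo) * 27.2114)
```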
Simulations for Teaching Chemical Equilibrium
NASA Astrophysics Data System (ADS)
Huddle, Penelope A.; White, Margaret Dawn; Rogers, Fiona
2000-07-01
This paper outlines a systematic approach to teaching chemical equilibrium using simulation experiments that address most known alternate conceptions in the topic. Graphs drawn using the data from the simulations are identical to those obtained using real experimental data for reactions that go to equilibrium. This allows easy mapping of the analogy to the target. The requirements for the simulations are simple and inexpensive, making them accessible to even the poorest schools. The simulations can be adapted for all levels, from pupils who are first encountering equilibrium through students in tertiary education to qualified teachers who have experienced difficulty in teaching the topic. The simulations were piloted on four very different audiences. Minor modifications were then made before the Equilibrium Games as reported in this paper were tested on three groups of subjects: a Grade 12 class, college students, and university Chemistry I students. Marked improvements in understanding of the concept were shown in two of the three sets of subjects.
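The core of such a simulation game, moving a fixed fraction of "reactant" one way and a fixed fraction of "product" the other way each round until the amounts level off, is trivial to reproduce and generates the same equilibrium-approach curves described above. The transfer fractions and starting amounts below are arbitrary and are not the paper's specific game rules.

```python
# Simple "equilibrium game": each round a fraction kf of A converts to B and a
# fraction kr of B converts back to A; amounts level off when kf*A == kr*B.
kf, kr = 0.30, 0.10          # forward and reverse transfer fractions (arbitrary)
A, B = 100.0, 0.0            # starting amounts, e.g. counters or beans

for round_number in range(1, 21):
    forward, reverse = kf * A, kr * B
    A, B = A - forward + reverse, B + forward - reverse
    print(f"round {round_number:2d}: A = {A:6.2f}  B = {B:6.2f}")

# At equilibrium B/A approaches kf/kr = 3, mimicking an equilibrium constant.
```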
Toward a Principled Sampling Theory for Quasi-Orders
Ünlü, Ali; Schrepp, Martin
2016-01-01
Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
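For contrast with the inductive procedure described above, the naive baseline it improves on can be written in a few lines: draw a random relation, add reflexivity, and force transitivity by taking the transitive closure. As the paper argues, this biases the sample toward larger quasi-orders; the snippet is meant only to make that baseline concrete, with arbitrary item count and density.

```python
import numpy as np

def naive_random_quasi_order(n_items, p=0.2, rng=None):
    """Biased baseline sampler: random relation + reflexivity + transitive closure."""
    rng = rng or np.random.default_rng()
    rel = rng.random((n_items, n_items)) < p
    np.fill_diagonal(rel, True)                      # make it reflexive
    # Warshall's algorithm for the transitive closure.
    for k in range(n_items):
        rel = rel | (rel[:, [k]] & rel[[k], :])
    return rel

q = naive_random_quasi_order(10, p=0.15, rng=np.random.default_rng(5))
print(int(q.sum()), "pairs in the sampled quasi-order")
```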
NASA Astrophysics Data System (ADS)
Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.
2017-12-01
Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.
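A hedged structural sketch of the outer loop described above (not the authors' code): a genetic algorithm searches over parameters that define the initial-guess zonation field, while a hypothetical inner routine `levelset_inversion` stands in for the level-set inversion and returns the minimized head residual, which serves as the GA fitness.

```python
# Hedged sketch of the outer genetic-algorithm loop; `levelset_inversion` is a
# placeholder for the inner level-set inversion, returning the minimal total
# residual between simulated and observed hydraulic heads for a given start.
import random

def levelset_inversion(guess_params):
    """Placeholder inner step; here a toy residual surface is used instead."""
    return sum((p - 0.3) ** 2 for p in guess_params)

def genetic_search(n_params=6, pop_size=20, n_gen=30, mut_sigma=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=levelset_inversion)       # lower residual is fitter
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_params)               # one-point crossover
            child = a[:cut] + b[cut:]
            child = [x + rng.gauss(0.0, mut_sigma) for x in child]  # mutation
            children.append(child)
        pop = parents + children
    best = min(pop, key=levelset_inversion)
    return best, levelset_inversion(best)

print(genetic_search())
```

In the actual study each fitness evaluation is an expensive inverse run, which is why the authors evaluate the population in parallel and combine the GA with simulated annealing.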
Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc
2014-05-01
Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance for low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed in the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction provides only small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviations from the reference obtained at 500 mGy were less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.
NASA Astrophysics Data System (ADS)
Demir, I.
2013-12-01
Recent developments in web technologies make it easy to manage and visualize large data sets and to share them with the general public. Novel visualization techniques and dynamic user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. The floodplain simulation system is a web-based 3D interactive flood simulation environment for creating real-world flooding scenarios. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create and modify predefined scenarios, control environmental parameters, and evaluate flood mitigation techniques. The web-based simulation system provides an environment in which children and adults can learn about flooding, flood damage, and the effects of development and human activity in the floodplain. The system provides various scenarios customized to fit the age and education level of the users. This presentation provides an overview of the web-based flood simulation system and demonstrates the capabilities of the system for various flooding and land use scenarios.
Electro-optical co-simulation for integrated CMOS photonic circuits with VerilogA.
Sorace-Agaskar, Cheryl; Leu, Jonathan; Watts, Michael R; Stojanovic, Vladimir
2015-10-19
We present a Cadence toolkit library written in VerilogA for simulation of electro-optical systems. We have identified and described a set of fundamental photonic components at the physical level such that characteristics of composite devices (e.g., ring modulators) are created organically, by simple instantiation of fundamental primitives. Both the amplitude and phase of optical signals, as well as optical-electrical interactions, are simulated. We show that the results match other simulations and analytic solutions that have previously been compared to theory, both for simple devices, such as ring resonators, and for more complicated devices and systems such as single-sideband modulators, WDM links, and Pound-Drever-Hall locking loops. We also illustrate the capability of such a toolkit for co-simulation with electronic circuits, which is a key enabler of electro-optic system development and verification.
Lukasczyk, Jonas; Weber, Gunther; Maciejewski, Ross; ...
2017-06-01
Tracking graphs are a well-established tool in topological analysis to visualize the evolution of components and their properties over time, i.e., when components appear, disappear, merge, and split. However, tracking graphs are limited to a single level threshold and the graphs may vary substantially even under small changes to the threshold. To examine the evolution of features for varying levels, users have to compare multiple tracking graphs without a direct visual link between them. We propose a novel, interactive, nested graph visualization based on the fact that the tracked superlevel set components for different levels are related to each other through their nesting hierarchy. This approach allows us to set multiple tracking graphs in context to each other and enables users to effectively follow the evolution of components for different levels simultaneously. We show the effectiveness of our approach on datasets from finite pointset methods, computational fluid dynamics, and cosmology simulations.
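A minimal sketch of the nesting idea (not the paper's implementation): extract connected components of superlevel sets at several thresholds and record which component at a lower threshold contains each component at a higher threshold, which is the relation the nested visualization exploits.

```python
# Minimal sketch: connected components of superlevel sets {x : f(x) >= t} of a
# 2D scalar field at several thresholds, linked by containment (nesting).
import numpy as np
from scipy import ndimage

def superlevel_components(field, threshold):
    """Label connected components of the superlevel set at `threshold`."""
    labels, count = ndimage.label(field >= threshold)
    return labels, count

def nesting(field, thresholds):
    """Map each component at threshold t_i to its enclosing component at t_{i-1}."""
    thresholds = sorted(thresholds)            # lower thresholds give larger sets
    parent, prev_labels = {}, None
    for i, t in enumerate(thresholds):
        labels, count = superlevel_components(field, t)
        if prev_labels is not None:
            for c in range(1, count + 1):
                # any pixel of the child component identifies its enclosing parent
                ys, xs = np.nonzero(labels == c)
                parent[(i, c)] = (i - 1, int(prev_labels[ys[0], xs[0]]))
        prev_labels = labels
    return parent

field = np.random.default_rng(0).random((64, 64))
print(nesting(field, thresholds=[0.5, 0.7, 0.9]))
```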
Trick Simulation Environment 07
NASA Technical Reports Server (NTRS)
Lin, Alexander S.; Penn, John M.
2012-01-01
The Trick Simulation Environment is a generic simulation toolkit used for constructing and running simulations. This release includes a Monte Carlo analysis simulation framework and a data analysis package. It produces all auto documentation in XML. The software is also capable of inserting a malfunction at any point during the simulation. Trick 07 adds variable server output options and error messaging and is capable of using and manipulating wide characters for international support. Wide character strings are available as a fundamental type for variables processed by Trick. A Trick Monte Carlo simulation uses a statistically generated, or predetermined, set of inputs to iteratively drive the simulation. There is also a framework in place for optimization and solution finding, in which developers may iteratively modify the inputs per run based on some analysis of the outputs. The data analysis package is capable of reading data from external simulation packages such as MATLAB and Octave, as well as the common comma-separated values (CSV) format used by Excel, without the use of external converters. The file formats for MATLAB and Octave were obtained from their documentation sets, and Trick maintains generic file readers for each format. XML tags store the fields in the Trick header comments. For header files, XML tags for structures and enumerations, and for the members within, are stored in the auto documentation. For source code files, XML tags for each function and its calling arguments are stored in the auto documentation. When a simulation is built, a top-level XML file, which includes all of the header and source code XML auto documentation files, is created in the simulation directory. Trick 07 provides an XML-to-TeX converter. The converter reads in header and source code XML documentation files and converts the data to TeX labels and tables suitable for inclusion in TeX documents. A malfunction insertion capability allows users to override the value of any simulation variable, or call a malfunction job, at any time during the simulation. Users may specify conditions, use the return value of a malfunction trigger job, or manually activate a malfunction. The malfunction action may consist of executing a block of input file statements in an action block, setting simulation variable values, calling a malfunction job, or turning simulation jobs on or off.
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.
1998-01-01
This paper describes a redesigned longitudinal controller that flew on the High-Alpha Research Vehicle (HARV) during calendar years (CY) 1995 and 1996. Linear models are developed for both the modified controller and a baseline controller that was flown in CY 1994. The modified controller was developed with three gain sets for flight evaluation, and several linear analysis results are shown comparing the gain sets. A Neal-Smith flying qualities analysis shows that performance for the low- and medium-gain sets is near the level 1 boundary, depending upon the bandwidth assumed, whereas the high-gain set indicates a sensitivity problem. A newly developed high-alpha Bode envelope criterion indicates that the control system gains may be slightly high, even for the low-gain set. A large motion-base simulator in the United Kingdom was used to evaluate the various controllers. Desired performance, which appeared to be satisfactory for flight, was generally met with both the low- and medium-gain sets. Both the high-gain set and the baseline controller were very sensitive, and it was easy to generate pilot-induced oscillation (PIO) in some of the target-tracking maneuvers. Flight target-tracking results varied from level 1 to level 3 and from no sensitivity to PIO. These results were related to pilot technique and whether actuator rate saturation was encountered.
Physiological responses and external validity of a new setting for taekwondo combat simulation.
Hausen, Matheus; Soares, Pedro Paulo; Araújo, Marcus Paulo; Porto, Flávia; Franchini, Emerson; Bridge, Craig Alan; Gurgel, Jonas
2017-01-01
Combat simulations have served as an alternative framework to study the cardiorespiratory demands of the activity in combat sports, but this setting imposes rule-restrictions that may compromise the competitiveness of the bouts. The aim of this study was to assess the cardiorespiratory responses to a full-contact taekwondo combat simulation using a safe and externally valid competitive setting. Twelve male national-level taekwondo athletes visited the laboratory on two separate occasions. On the first visit, anthropometric and running cardiopulmonary exercise assessments were performed. In the following two to seven days, participants performed a full-contact combat simulation, using a specifically designed gas analyser protector. Oxygen uptake (V˙O2), heart rate (HR) and capillary blood lactate measurements ([La-]) were obtained. Time-motion analysis was performed to compare activity profile. The simulation yielded broadly comparable activity profiles to those performed in competition, a mean V˙O2 of 36.6 ± 3.9 ml.kg-1.min-1 (73 ± 6% V˙O2PEAK) and a mean HR of 177 ± 10 beats.min-1 (93 ± 5% HRPEAK). A peak V˙O2 of 44.8 ± 5.0 ml.kg-1.min-1 (89 ± 5% V˙O2PEAK), a peak heart rate of 190 ± 13 beats.min-1 (98 ± 3% HRmax) and a peak [La-] of 12.3 ± 2.9 mmol.L-1 were elicited by the bouts. Regarding time-motion analysis, the combat simulation presented a similar exchange time, a shorter preparation time and a longer exchange-preparation ratio. Taekwondo combats capturing the full-contact competitive elements of a bout elicit moderate to high cardiorespiratory demands on the competitors. These data are valuable to assist preparatory strategies within the sport. PMID:28158252
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of a classical first-order conditional estimation with interaction (FOCE-I) method and of expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distribution from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data on theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally at the lower IIV levels, while the FOCE-I method performed better than the EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
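The abstract does not spell out the error metrics; rRMSE and REE are commonly defined as follows (a hedged restatement, where the hatted theta is the estimate from simulated data set i, theta the true value, and N the number of simulated data sets; the paper's exact definitions may differ):

```latex
\mathrm{rRMSE}\,(\%) \;=\; 100 \times \frac{1}{\theta}
  \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(\hat{\theta}_i-\theta\bigr)^{2}},
\qquad
\mathrm{REE}_i\,(\%) \;=\; 100 \times \frac{\hat{\theta}_i-\theta}{\theta}.
```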
Sagoo, Navjit; Valdes, Paul; Flecker, Rachel; Gregoire, Lauren J
2013-10-28
Geological data for the Early Eocene (56-47.8 Ma) indicate extensive global warming, with very warm temperatures at both poles. However, despite numerous attempts to simulate this warmth, there are remarkable data-model differences in the prediction of these polar surface temperatures, resulting in the so-called 'equable climate problem'. In this paper, for the first time, an ensemble approach with perturbed climate-sensitive model parameters has been applied to modelling the Early Eocene climate. We performed more than 100 simulations with perturbed physics parameters and identified two simulations that have an optimal fit with the proxy data. We have simulated the warmth of the Early Eocene at 560 ppmv CO2, which is a much lower CO2 level than in many other models. We investigate the changes in atmospheric circulation, cloud properties and ocean circulation that are common to these simulations and how they differ from the remaining simulations, in order to understand what mechanisms contribute to the polar warming. The parameter set from one of the optimal Early Eocene simulations also produces a favourable fit for the last glacial maximum boundary climate and outperforms the control parameter set for the present day. Although this does not 'prove' that this model is correct, it is very encouraging that there is a parameter set that creates a climate model able to simulate very different palaeoclimates and the present-day climate well. Interestingly, to achieve the great warmth of the Early Eocene this version of the model does not require a high Charney climate sensitivity for future climate change. It produces a Charney climate sensitivity of 2.7 °C, whereas the mean value of the 18 models in the IPCC Fourth Assessment Report (AR4) is 3.26 ± 0.69 °C. Thus, this value is within the range and below the mean of the models included in the AR4.
Edwards, Michael B; Kanters, Michael A; Bocarro, Jason N
2014-01-16
Extracurricular school sports programs can provide adolescents, including those who are economically disadvantaged, with opportunities to engage in physical activity. Although current models favor more exclusionary interscholastic sports, a better understanding is needed of the potential effects of providing alternative school sports options, such as more inclusive intramural sports. The purpose of this study was to simulate the potential effect of implementing intramural sports programs in North Carolina middle schools on both the rates of sports participation and on energy expenditure related to physical activity levels. Simulations were conducted by using a school-level data set developed by integrating data from multiple sources. Baseline rates of sports participation were extrapolated from individual-level data that were based on school-level characteristics. A regression model was estimated by using the simulated baseline school-level sample. Participation rates and related energy expenditure for schools were calculated on the basis of 2 policy change scenarios. Currently, 37.2% of school sports participants are economically disadvantaged. Simulations suggested that policy changes to implement intramural sports along with interscholastic sports could result in more than 43,000 new sports participants statewide, of which 64.5% would be economically disadvantaged students. This estimate represents a 36.75% increase in economically disadvantaged participants. Adding intramural sports to existing interscholastic sports programs at all middle schools in North Carolina could have an annual effect of an additional 819,892.65 kilogram calories expended statewide. Implementing intramural sports may provide economically disadvantaged students more access to sports, thus reducing disparities in access to school sports while increasing overall physical activity levels among all children.
Performance Logic in Simulation Research at the University of British Columbia.
ERIC Educational Resources Information Center
Boyd, Marcia A.
Advantages of the performance simulation setting are considered, along with what can be studied or developed within this setting. Experiences at the University of British Columbia (UBC) and views on future development and research opportunities in the performance simulation setting are also discussed. The benefits of simulating the clinical…
NASA Astrophysics Data System (ADS)
Rundle, P. B.; Rundle, J. B.; Morein, G.; Donnellan, A.; Turcotte, D.; Klein, W.
2004-12-01
The research community is rapidly moving towards the development of an earthquake forecast technology based on the use of complex, system-level earthquake fault system simulations. Using these topologically and dynamically realistic simulations, it is possible to develop ensemble forecasting methods similar to those used in weather and climate research. To effectively carry out such a program, one needs 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike-slip faults in California, from the Mexico-California border to the Mendocino Triple Junction. Virtual California is a "backslip model", meaning that the long-term rate of slip on each fault segment in the model is matched to the observed rate. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of 650 fault segments (degrees of freedom) in the model. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a Beowulf cluster consisting of >10 CPUs. We will also report results from implementing the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as mpeg movies, so that the dynamical aspects of the computation can be assessed by the viewer. We compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems. We report recent results on the use of Virtual California for probabilistic earthquake forecasting for several sub-groups of major faults in California. These methods have the advantage that system-level fault interactions are explicitly included, as well as laboratory-based friction laws.
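As a hedged illustration of one of the statistics mentioned above (not code from Virtual California): a cumulative magnitude-frequency relation and a Gutenberg-Richter-style b-value can be computed from a simulated catalog of event magnitudes as follows, here using a synthetic catalog as a placeholder.

```python
# Sketch: cumulative magnitude-frequency relation N(>=M) from a catalog of
# simulated event magnitudes, with a least-squares fit of the b-value on
# log10 N(>=M). The catalog below is synthetic, not simulation output.
import numpy as np

def magnitude_frequency(magnitudes, bin_width=0.1):
    mags = np.asarray(magnitudes)
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts = np.array([(mags >= m).sum() for m in bins])   # cumulative counts
    keep = counts > 0
    slope, intercept = np.polyfit(bins[keep], np.log10(counts[keep]), 1)
    return bins, counts, -slope                             # -slope is the b-value

rng = np.random.default_rng(1)
catalog = 6.0 + rng.exponential(scale=1.0 / np.log(10), size=5000)  # b ~ 1 catalog
bins, counts, b_value = magnitude_frequency(catalog)
print(f"estimated b-value: {b_value:.2f}")
```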
Mistraletti, Giovanni; Giacomini, Matteo; Sabbatini, Giovanni; Pinciroli, Riccardo; Mantovani, Elena S; Umbrello, Michele; Palmisano, Debora; Formenti, Paolo; Destrebecq, Anne L L; Iapichino, Gaetano
2013-02-01
The performances of 2 noninvasive CPAP systems (high-flow and low-flow air-entrainment masks) were compared to the Boussignac valve in 3 different scenarios. Scenario 1: a pneumatic lung simulator with a tachypnea pattern (tidal volume 800 mL at 40 breaths/min). Scenario 2: ten healthy subjects studied during tidal breaths and tachypnea. Scenario 3: twenty ICU subjects enrolled for a noninvasive CPAP session. Differences between set and effective CPAP level and FIO2, as well as the lowest airway pressure and the pressure swing around the imposed CPAP level, were analyzed. The lowest airway pressure and swing were correlated to the pressure-time product (the area of the airway pressure curve below the CPAP level) measured with the simulator. PaO2 was a further performance index in the subjects. Lung simulator: Boussignac FIO2 was 0.54, even if supplied with pure oxygen. The air-entrainment masks had higher swing than the Boussignac (P = .007). Pressure-time product correlated better with pressure swing (Spearman correlation coefficient [ρ] = 0.97) than with lowest airway pressure (ρ = 0.92). In healthy subjects, the high-flow air-entrainment mask showed a lower difference between set and effective FIO2 (P < .001), and a lower lowest airway pressure (P < .001), compared to the Boussignac valve. In all measurements the Boussignac valve showed a higher than imposed CPAP level (P < .001). In ICU subjects the high-flow mask had lower swing than the Boussignac valve (P = .03) with a similar PaO2 increase. The high-flow air-entrainment mask showed the best performance in human subjects. During high flow demand, the Boussignac valve delivered lower than expected FIO2 and showed higher dynamic hyper-pressurization than the air-entrainment masks. © 2013 Daedalus Enterprises.
Food web structure and the evolution of ecological communities
NASA Astrophysics Data System (ADS)
Quince, Christopher; Higgs, Paul G.; McKane, Alan J.
Simulations of the coevolution of many interacting species are performed using the Webworld model. The model has a realistic set of predator-prey equations that describe the population dynamics of the species for any structure of the food web. The equations account for competition between species for the same resources, and for the diet choice of predators between alternative prey according to an evolutionarily stable strategy. The set of species present undergoes long-term evolution due to speciation and extinction events. We summarize results obtained on the macro-evolutionary dynamics of speciations and extinctions, and on the statistical properties of the food webs that are generated by the model. Simulations begin from small numbers of species and build up to larger webs with a relatively constant species number on average. The rates of origination and extinction of species are relatively high, but remain roughly balanced throughout the simulations. When a 'parent' species undergoes speciation, the 'child' species usually adds to the same trophic level as the parent. The chance of the child species surviving is significantly higher if the parent is on the second or third trophic level than if it is on the first level, most likely due to a wider choice of possible prey for species on higher levels. Addition of a new species sometimes causes extinction of existing species. The parent species has a high probability of extinction because it has strong competition with the new species. Non-parental competitors of the new species also have a significantly higher extinction probability than average, as do prey of the new species. Predators of the new species are less likely than average to become extinct.
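The abstract does not reproduce the Webworld equations themselves; as a purely illustrative, hedged sketch, population dynamics of the general type described are often written as coupled rate equations in which each species gains from the prey it consumes, loses to its predators, and decays at a death rate, for example:

```latex
% Generic predator-prey rate equation of this type (not the exact Webworld
% form): g_{ij} is the per-capita rate at which species i consumes species j,
% lambda an ecological efficiency, and d_i a death rate. The actual Webworld
% equations additionally encode resource competition and adaptive diet choice.
\frac{\mathrm{d}N_i}{\mathrm{d}t}
  \;=\; \lambda \sum_{j} g_{ij}\, N_i
  \;-\; \sum_{k} g_{ki}\, N_k
  \;-\; d_i\, N_i
```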
Some like it hot: medical student views on choosing the emotional level of a simulation.
Lefroy, Janet; Brosnan, Caragh; Creavin, Sam
2011-04-01
This study aimed to determine the impact of giving junior medical students control over the level of emotion expressed by a simulated patient (SP) in a teaching session designed to prepare students to handle emotions when interviewing real patients on placements. Year 1 medical students at Keele University School of Medicine were allowed to set the degree of emotion to be displayed by the SP in their first 'emotional interview'. This innovation was evaluated by mixed methods in two consecutive academic years as part of an action research project, along with other developments in a new communications skills curriculum. Questionnaires were completed after the first and second iterations by students, tutors and SPs. Sixteen students also participated in evaluative focus group discussions at the end of Year 1. Most students found the 'emotion-setting switch' helpful, both when interviewing the SP and when observing. Student-interviewers were helped by the perception that they had control over the difficulty of the task. Student-observers found it helpful to see the different levels of emotion and to think about how they might empathise with patients. By contrast, some students found the 'control switch' unnecessary or even unhelpful. These students felt that challenge was good for them and preferred not to be given the option of reducing it. The emotional level control was a useful innovation for most students and may potentially be used in any first encounter with challenging simulation. We suggest that it addresses innate needs for competence and autonomy. The insights gained enable us to suggest ways of building the element of choice into such sessions. The disadvantages of choice highlighted by some students should be surmountable by tutor 'scaffolding' of the learning for both student-interviewers and student-observers. © Blackwell Publishing Ltd 2011.
METRO-APEX Volume 8.1: Water Quality Manager's Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The water Quality Manager's Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air…
Saint Anne: A Multicultural Education Dilemma.
ERIC Educational Resources Information Center
Bruce, Bill; And Others
This 5-hour simulation is designed to give secondary- and college-level students and community persons the opportunity to deal with multicultural issues in a typical organizational and community setting. St. Anne is a fictitious town of 75,000 residents with two major ethnic neighborhoods--one German and the other Swedish. The local paper industry…
27ps DFTMD Simulations of Maltose using a Reduced Basis Set
USDA-ARS?s Scientific Manuscript database
The disaccharide, a-maltose, has been studied using constant energy density functional molecular dynamics (DFTMD) at the B3LYP/6-31+G*/4-31G+COSMO (solvent) level of theory. Maltose is of particular interest as the variation in glycosidic dihedral angles has been found to be dependent upon the star...
METRO-APEX Volume 21.1: Pressure Groups' Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Pressure Groups' Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air Pollution…
METRO-APEX Volume 20.1: News Media Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The News Media Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air Pollution Exercise…
METRO-APEX Volume 18.1: Legal Reference Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Legal Reference Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air Pollution…
METRO-APEX Volume 10.1: Developer's Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Developer's Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air Pollution Exercise…
Integrating fire management analysis into land management planning
Thomas J. Mills
1983-01-01
The analysis of alternative fire management programs should be integrated into the land and resource management planning process, but a single fire management analysis model cannot meet all planning needs. Therefore, a set of simulation models that are analytically separate from integrated land management planning models are required. The design of four levels of fire...
APEX (Air Pollution Exercise) Volume 20: Reference Materials.
ERIC Educational Resources Information Center
Environmental Protection Agency, Research Triangle Park, NC. Office of Manpower Development.
The Reference Materials Manual is part of a set of 21 manuals (AA 001 009-001 029) used in APEX (Air Pollution Exercise), a computerized college and professional level "real world" game simulation of a community with urban and rural problems, industrial activities, and air pollution difficulties. For the purposes of the gaming exercise, APEX…
Learning Qualitative and Quantitative Reasoning in a Microworld for Elastic Impacts.
ERIC Educational Resources Information Center
Ploetzner, Rolf; And Others
1990-01-01
Discusses the artificial-intelligence-based microworld DiBi and MULEDS, a multilevel diagnosis system, developed to adapt tutoring style to the individual learner. Explains that DiBi sets up a learning environment, simulates elastic impacts as a subtopic of classical mechanics, and supports reasoning on different levels of mental domain…
METRO-APEX Volume 4.1: County Politician's Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The County Politician's Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air Pollution…
METRO-APEX Volume 9.1: Solid Waste Manager's Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Solid Waste Manager's Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air Pollution…
METRO-APEX Volume 3.1: City Politician's Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The City Politician's Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air Pollution…
METRO-APEX Volume 5.1: Planner's Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Planner's Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air Pollution Exercise…
METRO-APEX Volume 2.1: Computer Operator's Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Computer Operator's Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air Pollution…
METRO-APEX Volume 6.1: Environmental Quality Agency's Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Environmental Quality Agency's Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air…
METRO-APEX Volume 1.1: Game Overall Director's Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Game Overall Director's Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air…
The effect of monocular target blur on simulated telerobotic manipulation
NASA Technical Reports Server (NTRS)
Liu, Andrew; Stark, Lawrence
1991-01-01
A simulation involving three types of telerobotic tasks that require information about the spatial position of objects is reported. It is demonstrated that refractive errors in the helmet-mounted stereo display system can affect performance in the three types of telerobotic tasks. The results of two sets of experiments indicate that monocular target blur of two diopters or more degrades stereo display performance to the level of monocular displays, which indicates that moderate levels of visual degradation affecting the operator's stereoacuity may eliminate the performance advantage of stereo displays. This is similar to the results of psychophysical experiments examining the effect of blur on stereoacuity, and it is suggested that other psychophysical experimental results could be used to predict operator performance for other telerobotic tasks.
DHM and serious games: a case-study oil and gas laboratories.
Santos, V; Zamberlan, M; Streit, P; Oliveira, J; Guimarães, C; Pastura, F; Cid, G
2012-01-01
The aim of this paper is to present research on the application of serious games to the design of laboratories in the oil and gas industries. The focus is on human virtual representation acquired from 3D scanning, human interaction, workspace layout, and equipment designed considering ergonomics standards. The laboratory studies were simulated in the Unity3D platform, which allows users to control the DHM (digital human model) in the dynamic virtual scenario in order to simulate work activities. This methodology can change the design process by improving the level of interaction between final users, managers and human factors teams. It helps stakeholders better visualize future work settings and improves the level of participation among all of them.
Simulation of Ge Dopant Emission in Indirect-Drive ICF Implosion Experiments
NASA Astrophysics Data System (ADS)
Macfarlane, J. J.; Golovkin, I.; Kulkarni, S.; Regan, S.; Epstein, R.; Mancini, R.; Peterson, K.; Suter, L. J.
2013-10-01
We present results from simulations performed to study the radiative properties of dopants used in inertial confinement fusion indirect-drive capsule implosion experiments on NIF. In Rev5 NIF ignition capsules, a Ge dopant is added to an inner region of the CH ablator to absorb hohlraum x-ray preheat. Spectrally resolved emission from ablator dopants can be used to study the degree of mixing of ablator material into the ignition hot spot. Here, we study the atomic processes that affect the radiative characteristics of these elements using a set of simulation tools to first estimate the evolution of plasma conditions in the compressed target, and then to compute the atomic kinetics of the dopant and the resultant radiative emission. Using estimates of temperature and density profiles predicted by radiation-hydrodynamics simulations, we set up simple 2-D plasma grids where we allow dopant material to be embedded in the fuel, and perform multi-dimensional collisional-radiative simulations using SPECT3D to compute non-LTE atomic level populations and spectral signatures from the dopant. Recently improved Stark-broadened line shape modeling for Ge K-shell lines has been included. The goal is to study the radiative and atomic processes that affect the emergent spectra, including the effects of inner-shell photoabsorption and K α reemission from the dopant.
Wei, Qun; Kim, Mi-Jung; Lee, Jong-Ha
2018-01-01
Drinking water has several advantages that have already been established, such as improving blood circulation, reducing acid in the stomach, etc. However, due to people not noticing the amount of water they consume every time they drink, most people drink less water than the recommended daily allowance. In this paper, a capacitive sensor for developing an automatic tumbler to measure water level is proposed. Different than in previous studies, the proposed capacitive sensor was separated into two sets: the main sensor for measuring the water level in the tumbler, and the reference sensor for measuring the incremental level unit. In order to confirm the feasibility of the proposed idea, and to optimize the shape of the sensor, a 3D model of the capacitive sensor with the tumbler was designed and subjected to Finite Element Analysis (FEA) simulation. According to the simulation results, the electrodes were made of copper and assembled in a tumbler manufactured by a 3D printer. The tumbler was filled with water and was subjected to experiments in order to assess the sensor's performance. The comparison of experimental results to the simulation results shows that the measured capacitance value of the capacitive sensor changed linearly as the water level varied. This proves that the proposed sensor can accurately measure the water level in the tumbler. Additionally, by use of the curve fitting method, a compensation algorithm was found to match the actual level with the measured level. The experimental results proved that the proposed capacitive sensor is able to measure the actual water level in the tumbler accurately. A digital control part with micro-processor will be designed and fixed on the bottom of the tumbler for developing a smart tumbler.
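The abstract reports a linear capacitance-level relationship and a curve-fitting compensation step; a minimal sketch of such a calibration is shown below (the capacitance and level values are made-up placeholders, not the paper's measurements).

```python
# Toy calibration sketch: fit a linear map from measured sensor capacitance to
# water level, then use it to compensate a new reading. Data are illustrative.
import numpy as np

cap_pF   = np.array([12.1, 14.0, 15.8, 17.9, 19.7, 21.8])   # measured capacitance
level_mm = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])   # reference water level

coeffs = np.polyfit(cap_pF, level_mm, deg=1)   # linear fit, per the reported behaviour
estimate = np.polyval(coeffs, 16.9)            # compensate a new reading
print(f"estimated level for 16.9 pF: {estimate:.1f} mm")
```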
Simulation-based comprehensive benchmarking of RNA-seq aligners
Baruzzo, Giacomo; Hayer, Katharina E; Kim, Eun Ji; Di Camillo, Barbara; FitzGerald, Garret A; Grant, Gregory R
2018-01-01
Alignment is the first step in most RNA-seq analysis pipelines, and the accuracy of downstream analyses depends heavily on it. Unlike most steps in the pipeline, alignment is particularly amenable to benchmarking with simulated data. We performed a comprehensive benchmarking of 14 common splice-aware aligners for base, read, and exon junction-level accuracy and compared default with optimized parameters. We found that performance varied by genome complexity, and accuracy and popularity were poorly correlated. The most widely cited tool underperforms for most metrics, particularly when using default settings. PMID:27941783
Smith, Jason F.; Chen, Kewei; Pillai, Ajay S.; Horwitz, Barry
2013-01-01
The number and variety of connectivity estimation methods is likely to continue to grow over the coming decade. Comparisons between methods are necessary to prune this growth to only the most accurate and robust methods. However, the nature of connectivity is elusive, with different methods potentially attempting to identify different aspects of connectivity. Commonalities of connectivity definitions across methods, on which to base direct comparisons, can be difficult to derive. Here, we explicitly define "effective connectivity" using a common set of observation and state equations that are appropriate for three connectivity methods: dynamic causal modeling (DCM), multivariate autoregressive modeling (MAR), and switching linear dynamic systems for fMRI (sLDSf). In addition, while deriving this set, we show that many other popular functional and effective connectivity methods are actually simplifications of these equations. We discuss implications of these connections for the practice of using one method to simulate data for another method. After mathematically connecting the three effective connectivity methods, simulated fMRI data with varying numbers of regions and task conditions are generated from the common equation. This simulated data explicitly contains the type of connectivity that the three models were intended to identify. Each method is applied to the simulated data sets and the accuracy of parameter identification is analyzed. All methods perform above chance levels at identifying correct connectivity parameters. The sLDSf method was superior in parameter estimation accuracy to both DCM and MAR for all types of comparisons. PMID:23717258
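The common observation/state form is not spelled out in the abstract; a generic linear state-space sketch of the kind such methods share (symbols illustrative, not the paper's notation) is:

```latex
% Hidden neural state x_t evolves linearly under a connectivity matrix A and
% exogenous input u_t; the measured fMRI signal y_t is a noisy linear
% observation of the state. DCM, MAR and sLDSf can be viewed as variants or
% simplifications of this kind of form.
x_{t+1} = A\,x_t + B\,u_t + w_t, \qquad w_t \sim \mathcal{N}(0,\,Q)
\\
y_t = C\,x_t + v_t, \qquad v_t \sim \mathcal{N}(0,\,R)
```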
Thompson, William L.; Miller, Amy E.; Mortenson, Dorothy C.; Woodward, Andrea
2011-01-01
Monitoring natural resources in Alaskan national parks is challenging because of their remoteness, limited accessibility, and high sampling costs. We describe an iterative, three-phased process for developing sampling designs based on our efforts to establish a vegetation monitoring program in southwest Alaska. In the first phase, we defined a sampling frame based on land ownership and specific vegetated habitats within the park boundaries and used Path Distance analysis tools to create a GIS layer that delineated portions of each park that could be feasibly accessed for ground sampling. In the second phase, we used simulations based on landcover maps to identify size and configuration of the ground sampling units (single plots or grids of plots) and to refine areas to be potentially sampled. In the third phase, we used a second set of simulations to estimate sample size and sampling frequency required to have a reasonable chance of detecting a minimum trend in vegetation cover for a specified time period and level of statistical confidence. Results of the first set of simulations indicated that a spatially balanced random sample of single plots from the most common landcover types yielded the most efficient sampling scheme. Results of the second set of simulations were compared with field data and indicated that we should be able to detect at least a 25% change in vegetation attributes over 31 years by sampling 8 or more plots per year every five years in focal landcover types. This approach would be especially useful in situations where ground sampling is restricted by access.
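A minimal power-simulation sketch in the spirit of the second set of simulations (assumptions: a linear trend in mean cover, normally distributed plot-level noise, and a simple linear-regression trend test; the program's actual simulations were based on its own field data and design):

```python
# Estimate the chance of detecting a given relative change in mean vegetation
# cover when n plots are measured every `interval` years over `horizon` years.
import numpy as np
from scipy import stats

def detection_power(n_plots=8, horizon=31, interval=5, rel_change=0.25,
                    mean_cover=50.0, sd=15.0, alpha=0.05, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    years = np.arange(0, horizon + 1, interval)
    slope = rel_change * mean_cover / horizon   # total change spread over the horizon
    detected = 0
    for _ in range(n_sim):
        x, y = [], []
        for t in years:
            y.extend(rng.normal(mean_cover + slope * t, sd, n_plots))
            x.extend([t] * n_plots)
        if stats.linregress(x, y).pvalue < alpha:
            detected += 1
    return detected / n_sim

print(f"approximate power: {detection_power():.2f}")
```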
Crewther, Blair T; Heke, Taati; Keogh, Justin W L
2011-01-01
This study examined the effects of training volume and competition on the salivary cortisol (Sal-C) concentrations of Olympic weightlifters. Male (n = 5) and female (n = 4) Olympic weightlifters provided saliva samples across a 5-week experimental period. The first aim was to assess the weekly effects of high (≥ 200 sets) and low (≤ 100 sets) training volume on Sal-C. The second aim was to compare Sal-C concentrations and 1 repetition maximum (1RM) performance during 2 simulated and 2 actual competitions. Performance was assessed using the snatch, clean and jerk, and the Olympic total lift. Data from each competition setting were pooled before analysis. There were no significant weekly changes in Sal-C levels (p > 0.05). The actual competitions produced higher (128-130%) Sal-C concentrations (p < 0.001) and superior 1RM lifts (1.9-2.6%) for the clean and jerk, and the Olympic total, than the simulated competitions (p < 0.05). Individual Sal-C concentrations before the simulated competitions were positively correlated to all of the 1RM lifts (r = 0.48-0.49, p < 0.05). In conclusion, actual competitions produced greater Sal-C responses than simulated competitions, and this appeared to benefit the 1RM performance of Olympic weightlifters. Individuals with higher Sal-C concentrations also tended to exhibit superior 1RM lifts during the simulated competitions. Given these findings, greater emphasis should be placed upon the monitoring of C to establish normative values, training standards and to assist with performance prediction.
Modelling wildland fire propagation by tracking random fronts
NASA Astrophysics Data System (ADS)
Pagnini, G.; Mentrelli, A.
2013-11-01
Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function that has an exponential decay and an infinite support, while the level-set method, which is a front-tracking technique, generates a sharp function with a finite support. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random character that are extremely important in wildland fire propagation. As a consequence, the fire front acquires a random character, too. Hence a tracking method for random fronts is needed. In particular, the level-set contour is here randomized according to the probability density function of the interface particle displacement. Actually, when the level-set method is developed for tracking a front interface with a random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire retains the same key, characterizing role it has in the level-set approach. The resulting model turns out to be suitable for simulating effects due to turbulent convection, such as flank and backing fire, the faster fire spread caused by hot-air pre-heating and by ember landing, and also the fire overcoming a firebreak zone, which is a case not resolved by models based on the level-set method. Moreover, a correction to the rate-of-spread formula follows from the proposed formulation, due to the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour.
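One way to write the randomization step described above (notation illustrative, not necessarily the paper's): the sharp level-set front marker is averaged over the probability density function of the random displacement of the interface particles, giving a smooth effective front marker that obeys a reaction-diffusion-type evolution.

```latex
% phi(x,t): sharp indicator obtained from the level-set front;
% p(x,t | xbar): PDF of the random displacement of an interface particle
% originating at xbar; phi_e: effective (averaged) front marker.
\varphi_{e}(\mathbf{x},t) \;=\;
  \int_{\mathcal{D}} \varphi(\bar{\mathbf{x}},t)\,
  p(\mathbf{x},t \mid \bar{\mathbf{x}})\, \mathrm{d}\bar{\mathbf{x}}
```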
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu; Sun, Hui; Cheng, Li-Tien
Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the "normal velocity" that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of fluctuations into the VISM and understanding the impact of interfacial fluctuations on biomolecular solvation with an implicit-solvent approach.
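As an illustrative sketch of the kind of evolution equation involved (symbols generic, not the paper's exact formulation): a level-set function advected in the normal direction by a velocity with an added stochastic perturbation evolves as

```latex
% Level-set advection with a fluctuating normal velocity; in the paper the
% stochastic equation is interpreted in the Stratonovich sense so that the
% interface evolution does not depend on the particular implicit function
% chosen to represent it.
\frac{\partial \phi}{\partial t} \;=\; -\,\bigl(v_{n} + \xi\bigr)\,\lvert \nabla \phi \rvert
```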
Software development infrastructure for the HYBRID modeling and simulation project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epiney, Aaron S.; Kinoshita, Robert A.; Kim, Jong Suk
One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure tries to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety "research and development" software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as the application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python package called BuildingsPy from Lawrence Berkeley National Laboratory. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner and generates and runs unit tests from Modelica scripts written by developers. In order to assure effective communication between the different national laboratories, a biweekly videoconference has been set up, where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized, intended to discuss high-level strategy decisions with management. A second means of communication is the developer email list. This is a list to which everybody can send emails that will be received by the collective of the developers and managers involved in the project. Thirdly, to exchange documents quickly, a SharePoint directory has been set up. SharePoint allows teams and organizations to intelligently share and collaborate on content from anywhere.
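As a hedged illustration only (the project's actual test scripts are not shown, and the exact BuildingsPy API may differ between versions), an automated Modelica regression-test driver of this kind looks roughly like:

```python
# Hypothetical BuildingsPy-style regression-test driver; class and method names
# follow the documented buildingspy.development.regressiontest module, but the
# exact signatures may differ between BuildingsPy versions.
from buildingspy.development.regressiontest import Tester

ut = Tester(tool="dymola")   # run the Modelica unit tests with Dymola
ut.setLibraryRoot(".")       # library to test, assumed to be the current directory
ut.run()                     # simulate the test models and compare reference results
```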
Evaluation of a Low-Cost Bubble CPAP System Designed for Resource-Limited Settings.
Bennett, Desmond J; Carroll, Ryan W; Kacmarek, Robert M
2018-04-01
Respiratory compromise is a leading contributor to global neonatal death. CPAP is a method of treatment that helps maintain lung volume during expiration, promotes comfortable breathing, and improves oxygenation. Bubble CPAP is an effective alternative to standard CPAP. We sought to determine the reliability and functionality of a low-cost bubble CPAP device designed for low-resource settings. The low-cost bubble CPAP device was compared to a commercially available bubble CPAP system. The devices were connected to a lung simulator that simulated neonates of 4 different weights with compromised respiratory mechanics (∼1, ∼3, ∼5, and ∼10 kg). The devices' abilities to establish and maintain pressure and flow under normal conditions as well as under conditions of leak were compared. Multiple combinations of pressure levels (5, 8, and 10 cm H 2 O) and flow levels (3, 6, and 10 L/min) were tested. The endurance of both devices was also tested by running the systems continuously for 8 h and measuring the changes in pressure and flow. Both devices performed equivalently during the no-leak and leak trials. While our testing revealed individual differences that were statistically significant and clinically important (>10% difference) within specific CPAP and flow-level settings, no overall comparisons of CPAP or flow were both statistically significant and clinically important. Each device delivered pressures similar to the desired pressures, although the flows delivered by both machines were lower than the set flows in most trials. During the endurance trials, the low-cost device was marginally better at maintaining pressure, while the commercially available device was better at maintaining flow. The low-cost bubble CPAP device evaluated in this study is comparable to a bubble CPAP system used in developed settings. Extensive clinical trials, however, are necessary to confirm its effectiveness. Copyright © 2018 by Daedalus Enterprises.
Catalog of Simulation Models and Wargames Used for Unit and Leader Training. Second Edition.
1987-01-01
College; two or three brigade-level games per year at brigade groups. In past have played corps-level games at Canadian Forces Command and Staff College... dedicated PERQ microcomputer. Data Output Analysis: 5 minutes (generation of killer/victim scoreboard). Frequency of use: 4 times per year. Users: HEL... Classification: UNCLASSIFIED. Frequency of use: 3 plays per year. Users: Training Gps, TF HQ and Infantry Centre. (The p. settings are changed to suit each...
Automated Boundary Conditions for Wind Tunnel Simulations
NASA Technical Reports Server (NTRS)
Carlson, Jan-Renee
2018-01-01
Computational fluid dynamic (CFD) simulations of models tested in wind tunnels require a high level of fidelity and accuracy particularly for the purposes of CFD validation efforts. Considerable effort is required to ensure the proper characterization of both the physical geometry of the wind tunnel and recreating the correct flow conditions inside the wind tunnel. The typical trial-and-error effort used for determining the boundary condition values for a particular tunnel configuration are time and computer resource intensive. This paper describes a method for calculating and updating the back pressure boundary condition in wind tunnel simulations by using a proportional-integral-derivative controller. The controller methodology and equations are discussed, and simulations using the controller to set a tunnel Mach number in the NASA Langley 14- by 22-Foot Subsonic Tunnel are demonstrated.
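The abstract does not give the controller's gains or update law; the sketch below is a generic discrete proportional-integral-derivative loop of the kind described, with hypothetical gains and a placeholder callable standing in for the CFD evaluation of the test-section Mach number.

# Generic PID update of a back-pressure boundary condition toward a target Mach number.
# Gains, time step, and the run_cfd_step callable are illustrative placeholders,
# not values or interfaces from the NASA simulations.
def pid_back_pressure(target_mach, run_cfd_step, p0, kp=5.0e3, ki=1.0e3, kd=0.0,
                      dt=1.0, n_iter=50):
    """run_cfd_step(p) is assumed to advance the CFD solution with back pressure p (Pa)
    and return the current test-section Mach number."""
    p, integral, prev_err = p0, 0.0, 0.0
    for _ in range(n_iter):
        mach = run_cfd_step(p)
        err = target_mach - mach
        integral += err * dt
        deriv = (err - prev_err) / dt
        # Raising the back pressure lowers the Mach number, hence the minus sign.
        p -= kp * err + ki * integral + kd * deriv
        prev_err = err
    return p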
Extending quantum mechanics entails extending special relativity
NASA Astrophysics Data System (ADS)
Aravinda, S.; Srikanth, R.
2016-05-01
The complementarity between signaling and randomness in any communicated resource that can simulate singlet statistics is generalized by relaxing the assumption of free will in the choice of measurement settings. We show how to construct an ontological extension for quantum mechanics (QM) through the oblivious embedding of a sound simulation protocol in a Newtonian spacetime. Minkowski or other intermediate spacetimes are ruled out as the locus of the embedding by virtue of hidden influence inequalities. The complementarity transferred from a simulation to the extension unifies a number of results about quantum non-locality, and implies that special relativity has a different significance for the ontological model and for the operational theory it reproduces. Only the latter, being experimentally accessible, is required to be Lorentz covariant. There may be certain Lorentz non-covariant elements at the ontological level, but they will be inaccessible at the operational level in a valid extension. Certain arguments against the extendability of QM, due to Conway and Kochen (2009) and Colbeck and Renner (2012), are attributed to their assumption that the spacetime at the ontological level has Minkowski causal structure.
Walker, John F.; Hunt, Randall J.; Markstrom, Steven L.; Hay, Lauren E.; Doherty, John
2009-01-01
A major focus of the U.S. Geological Survey’s Trout Lake Water, Energy, and Biogeochemical Budgets (WEBB) project is the development of a watershed model to allow predictions of hydrologic response to future conditions including land-use and climate change. The coupled groundwater/surface-water model GSFLOW was chosen for this purpose because it could easily incorporate an existing groundwater flow model and it provides for simulation of surface-water processes. The Trout Lake watershed in northern Wisconsin is underlain by a highly conductive outwash sand aquifer. In this area, streamflow is dominated by groundwater contributions; however, surface runoff occurs during intense rainfall periods and spring snowmelt. Surface runoff also occurs locally near stream/lake areas where the unsaturated zone is thin. A diverse data set, collected from 1992 to 2007 for the Trout Lake WEBB project and the co-located and NSF-funded North Temperate Lakes LTER project, includes snowpack, solar radiation, potential evapotranspiration, lake levels, groundwater levels, and streamflow. The timeseries processing software TSPROC (Doherty 2003) was used to distill the large time series data set to a smaller set of observations and summary statistics that captured the salient hydrologic information. The timeseries processing reduced hundreds of thousands of observations to less than 5,000. Model calibration included specific predictions for several lakes in the study area using the PEST parameter estimation suite of software (Doherty 2007). The calibrated model was used to simulate the hydrologic response in the study lakes to a variety of climate change scenarios culled from the IPCC Fourth Assessment Report of the Intergovernmental Panel on Climate Change (Solomon et al. 2007). Results from the simulations indicate climate change could result in substantial changes to the lake levels and components of the hydrologic budget of a seepage lake in the flow system. For a drainage lake lower in the flow system, the impacts of climate change are diminished.
Madaan, Nitesh; Bao, Jie; Nandasiri, Manjula I.; ...
2015-08-31
The experimental atom probe tomography results from two different specimen orientations (top-down and sideways) of a high oxygen-ion-conducting Samaria-doped-ceria/Scandia-stabilized-zirconia multilayer thin film solid oxide fuel cell electrolyte were correlated with level-set method based field evaporation simulations for the same specimen orientations. This experiment-theory correlation explains the dynamic specimen shape evolution and ion trajectory aberrations that can induce density artifacts in the final reconstruction, leading to inaccurate estimation of interfacial intermixing. This study highlights the need for and importance of correlating experimental results with field evaporation simulations when using atom probe tomography to study oxide heterostructure interfaces.
Akeroyd, Michael A; Chambers, John; Bullock, David; Palmer, Alan R; Summerfield, A Quentin; Nelson, Philip A; Gatehouse, Stuart
2007-02-01
Cross-talk cancellation is a method for synthesizing virtual auditory space using loudspeakers. One implementation is the "Optimal Source Distribution" technique [T. Takeuchi and P. Nelson, J. Acoust. Soc. Am. 112, 2786-2797 (2002)], in which the audio bandwidth is split across three pairs of loudspeakers, placed at azimuths of +/-90 degrees, +/-15 degrees, and +/-3 degrees, conveying low, mid, and high frequencies, respectively. A computational simulation of this system was developed and verified against measurements made on an acoustic system using a manikin. Both the acoustic system and the simulation gave a wideband average cancellation of almost 25 dB. The simulation showed that when there was a mismatch between the head-related transfer functions used to set up the system and those of the final listener, the cancellation was reduced to an average of 13 dB. Moreover, in this case the binaural interaural time differences and interaural level differences delivered by the simulation of the optimal source distribution (OSD) system often differed from the target values. It is concluded that only when the OSD system is set up with "matched" head-related transfer functions can it deliver accurate binaural cues.
Adaptive Set-Based Methods for Association Testing.
Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo
2016-02-01
With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional methods that test one single nucleotide polymorphism (SNP) at a time can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations, followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test. © 2015 WILEY PERIODICALS, INC.
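As a rough illustration of the idea behind ARTP, the sketch below combines the k smallest per-SNP p-values over several truncation points, adapts over k by taking the strongest evidence, and calibrates the result against phenotype-permutation p-values. It is a simplified stand-in under assumed array layouts, not the exact procedure benchmarked in the study.

# Simplified adaptive rank-truncated-product style set test with permutation calibration.
import numpy as np

def rtp_stat(pvals, k):
    # Rank truncated product: sum of log p over the k smallest p-values
    # (the log of their product); smaller values mean stronger evidence.
    return np.sum(np.log(np.sort(pvals)[:k]))

def artp_pvalue(obs_pvals, perm_pvals, ks=(1, 2, 5, 10)):
    """obs_pvals: (n_snps,) observed per-SNP p-values.
    perm_pvals: (n_perm, n_snps) per-SNP p-values from phenotype permutations.
    Returns an approximate ARTP p-value for the SNP set (illustrative only)."""
    n_perm = perm_pvals.shape[0]
    # Layer 1: permutation p-value of the RTP statistic at each truncation point.
    obs_stats = np.array([rtp_stat(obs_pvals, k) for k in ks])
    perm_stats = np.array([[rtp_stat(p, k) for k in ks] for p in perm_pvals])
    layer1_obs = (np.sum(perm_stats <= obs_stats, axis=0) + 1) / (n_perm + 1)
    # Rank-based layer-1 p-values within the permutations themselves.
    layer1_perm = np.empty((n_perm, len(ks)))
    for j in range(len(ks)):
        ranks = np.argsort(np.argsort(perm_stats[:, j]))
        layer1_perm[:, j] = (ranks + 1) / (n_perm + 1)
    # Layer 2: adapt over k by taking the minimum, calibrated against the permutations.
    obs_min = layer1_obs.min()
    perm_min = layer1_perm.min(axis=1)
    return (np.sum(perm_min <= obs_min) + 1) / (n_perm + 1)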
Rotolo, Federico; Paoletti, Xavier; Burzykowski, Tomasz; Buyse, Marc; Michiels, Stefan
2017-01-01
Surrogate endpoints are often used in clinical trials instead of well-established hard endpoints for practical convenience. The meta-analytic approach relies on two measures of surrogacy: one at the individual level and one at the trial level. In the survival data setting, a two-step model based on copulas is commonly used. We present a new approach which employs a bivariate survival model with an individual random effect shared between the two endpoints and correlated treatment-by-trial interactions. We fit this model using auxiliary mixed Poisson models. We study via simulations the operating characteristics of this mixed Poisson approach as compared to the two-step copula approach. We illustrate the application of the methods on two individual patient data meta-analyses in gastric cancer, in the advanced setting (4069 patients from 20 randomized trials) and in the adjuvant setting (3288 patients from 14 randomized trials).
Simulation of a master-slave event set processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comfort, J.C.
1984-03-01
Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer, in which all non-event set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed and was run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant parameters of the system (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulation run times for a one-processor system was used to assist in the validation of the simulation.
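To make concrete what "event set processing" consists of, the sketch below shows the two operations whose cost dominates the future event list in a discrete-event simulation, scheduling an event and extracting the most imminent one; it illustrates the workload being offloaded, not the master-slave hardware organization itself.

# Minimal future-event-list (event set) built on a binary heap.
import heapq
import itertools

class EventSet:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for events with equal timestamps

    def schedule(self, time, event):
        heapq.heappush(self._heap, (time, next(self._counter), event))

    def next_event(self):
        time, _, event = heapq.heappop(self._heap)  # most imminent event
        return time, event

# usage: es = EventSet(); es.schedule(3.5, "arrival"); t, ev = es.next_event()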
Future Wave Height Situation estimated by the Latest Climate Scenario around Funafuti Atoll, Tuvalu
NASA Astrophysics Data System (ADS)
Sato, D.; Yokoki, H.; Kuwahara, Y.; Yamano, H.; Kayanne, H.; Okajima, H.; Kawamiya, M.
2012-12-01
Sea-level rise due to global warming is a significant phenomenon for coastal regions worldwide. Atoll islands in particular, being low-lying and narrow, are highly vulnerable to sea-level rise. Recently, JAMSTEC released the improved future climate projection MIROC-ESM, which adopts the latest climate scenarios based on the Representative Concentration Pathways (RCPs) of greenhouse gases. A wave field simulation incorporating the latest sea-level rise pathway from MIROC-ESM was conducted to understand changes in significant wave heights around Funafuti Atoll, Tuvalu, an important factor for coastal protection management. MIROC-ESM provides monthly sea surface height on a fine global grid (1.5 degrees near the equator). The wave field simulation used the RCP4.5 scenario, in which the radiative forcing at the end of the 21st century is stabilized at 4.5 W/m2. The sea-level rise rate for each decade was calculated from the historical data set (1850-2005) and the projected data set (2006-2100); in this scenario, sea level rises by 10 cm over 100 years. The numerical simulation of the wave field under this rate of sea-level rise was carried out using the SWAN model. Wave and wind conditions around Funafuti Atoll are characterized by two seasons, the trade-wind season (Apr.-Nov.) and the non-trade-wind season (Dec.-Mar.), so two seasonal boundary conditions, derived from ECMWF reanalysis data, were set up for a one-year simulation. The simulated significant wave heights were analyzed by the increase rate (%) of the 2100 results relative to the base results (average for 2000-2005). The calculated increase rate for both seasons was extremely high on the reef flat, with maximum increase rates of 1817% in the trade-wind season and 686% in the non-trade-wind season. The southern part of the atoll shows a high increase rate throughout both seasons. In the non-trade-wind season, the lagoon-side coasts at the northern tip and the southern part of the island show higher increase rates, about 7%, against an average of 3.4%; in the trade-wind season the average rate is 5.0%. The ocean-side coast shows a high increase rate in both seasons, with a locally very large rate in the northern part of Fongafale Island. DEM data for the middle of Fongafale Island, the most populated area of the island, show that the northern oceanic coast has a wide, high storm ridge, and the increase rate is extremely large there. On such coasts, sea-level rise due to global warming raises the sea level in the same way as storm surge from tropical cyclones, although on a different time scale. We therefore consider that the areas with large calculated increase rates have already experienced high waves from tropical cyclones, which built the wide, high storm ridges. This result indicates that effective coastal management under sea-level rise requires understanding not only quantitative estimates of the future situation but also the protective potential built up by the present wave and wind conditions.
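The increase-rate metric described above is simply the percentage change of the simulated significant wave height in 2100 relative to the 2000-2005 base run, evaluated point-wise over the model grid; a minimal sketch with illustrative array names follows.

# Percentage increase of significant wave height relative to the base run,
# computed point-wise on the model grid. Inputs are illustrative arrays in metres.
import numpy as np

def increase_rate(hs_base, hs_2100):
    return (hs_2100 - hs_base) / hs_base * 100.0  # percent

# e.g. increase_rate(np.array([0.50]), np.array([0.53])) -> array([6.])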
Migration of formaldehyde from melamine-ware: UK 2008 survey results.
Potter, E L J; Bradley, E L; Davies, C R; Barnes, K A; Castle, L
2010-06-01
Fifty melamine-ware articles were tested for the migration of formaldehyde - with hexamethylenetetramine (HMTA) expressed as formaldehyde - to see whether the total specific migration limit (SML(T)) was being observed. The SML(T), given in European Commission Directive 2002/72/EC as amended, is 15 mg kg(-1). Fourier transform-infrared (FT-IR) spectroscopy was carried out on the articles to confirm the plastic type. Articles were exposed to the food simulant 3% (w/v) aqueous acetic acid under conditions representing their worst foreseeable use. Formaldehyde and HMTA in food simulants were determined by a spectrophotometric derivatization procedure. Positive samples were confirmed by a second spectrophotometric procedure using an alternative derivatization agent. As all products purchased were intended for repeat use, three sequential exposures to the simulant were carried out. Formaldehyde was detected in the simulant exposed to 43 samples. Most of the levels found were well below the limits set in law such that 84% of the samples tested were compliant. However, eight samples had formaldehyde levels that were clearly above the legal maximum at six to 65 times the SML(T).
Cosmic Reionization On Computers: Numerical and Physical Convergence
Gnedin, Nickolay Y.
2016-04-01
In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.
EOS MLS Level 2 Data Processing Software Version 3
NASA Technical Reports Server (NTRS)
Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; Wang, Shuhui; Manney, Gloria L.;
2011-01-01
This software accepts the EOS MLS calibrated microwave radiance products and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that controls all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori, spectroscopic, and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in a parallel form, with one instance of the program acting as a master, coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to doing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.
Acculturation levels and personalizing orthognathic surgery for the Asian American patient.
Sy, A A; Kim, W S; Chen, J; Shen, Y; Tao, C; Lee, J S
2016-10-01
This study was performed to investigate whether the level of acculturation among Asians living in the USA plays a significant role in their opinion of facial profiles. One hundred and ninety-eight Asian American subjects were asked to complete a pre-validated survey to measure their level of acculturation and to evaluate four sets of pictures that displayed a class II male, class II female, class III male, and class III female. Each set consisted of three lateral profile pictures: an initial unaltered photo, a picture simulating a flatter profile (orthodontic camouflage in class II; mandibular setback in class III), and a picture simulating a fuller profile (mandibular advancement in class II; maxillary advancement in class III). For the class II male, subjects who were more acculturated indicated that a flatter profile (orthodontic camouflage) was less attractive. For the class II female, higher acculturated subjects chose expansive treatment (mandibular advancement) as more aesthetic compared to the less acculturated subjects. Each of these scenarios had statistically significant odds ratios. In general, highly acculturated subjects preferred a fuller facial profile, while low acculturated subjects preferred a flatter facial profile appearance, except for the class III female profile, which did not follow this trend. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
A hybrid interface tracking - level set technique for multiphase flow with soluble surfactant
NASA Astrophysics Data System (ADS)
Shin, Seungwon; Chergui, Jalel; Juric, Damir; Kahouadji, Lyes; Matar, Omar K.; Craster, Richard V.
2018-04-01
A formulation for soluble surfactant transport in multiphase flows recently presented by Muradoglu and Tryggvason (JCP 274 (2014) 737-757) [17] is adapted to the context of the Level Contour Reconstruction Method, LCRM, (Shin et al. IJNMF 60 (2009) 753-778, [8]) which is a hybrid method that combines the advantages of the Front-tracking and Level Set methods. Particularly close attention is paid to the formulation and numerical implementation of the surface gradients of surfactant concentration and surface tension. Various benchmark tests are performed to demonstrate the accuracy of different elements of the algorithm. To verify surfactant mass conservation, values for surfactant diffusion along the interface are compared with the exact solution for the problem of uniform expansion of a sphere. The numerical implementation of the discontinuous boundary condition for the source term in the bulk concentration is compared with the approximate solution. Surface tension forces are tested for Marangoni drop translation. Our numerical results for drop deformation in simple shear are compared with experiments and results from previous simulations. All benchmarking tests compare well with existing data thus providing confidence that the adapted LCRM formulation for surfactant advection and diffusion is accurate and effective in three-dimensional multiphase flows with a structured mesh. We also demonstrate that this approach applies easily to massively parallel simulations.
A Modular Set of Mixed Reality Simulators for Blind and Guided Procedures
2016-08-01
AWARD NUMBER: W81XWH-14-1-0113. TITLE: A Modular Set of Mixed Reality Simulators for “Blind” and Guided Procedures. PRINCIPAL INVESTIGATOR: ... 2015 – 07/31/2016 ... an editor developed to facilitate creation by non-technical educators of ITs for the set of modular simulators, (c) a curriculum for self-study and self...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maier, Joscha, E-mail: joscha.maier@dkfz.de; Sawall, Stefan; Kachelrieß, Marc
2014-05-15
Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviations from the reference obtained at 500 mGy were less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. Conclusions: LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes exemplarily illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows the dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.
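One of the segmentation routes named above, Otsu thresholding, can be sketched as follows for estimating the blood-pool volume from a reconstructed, contrast-enhanced region of interest; the restriction to a ventricular ROI and the voxel size are assumptions made for illustration.

# Estimate left ventricular blood-pool volume from a reconstructed micro-CT
# sub-volume by global Otsu thresholding. The ROI restriction and voxel size
# are illustrative assumptions, not parameters from the study.
import numpy as np
from skimage.filters import threshold_otsu

def lv_volume_otsu(ct_roi, voxel_volume_mm3):
    """ct_roi: 3-D intensity array restricted to a left ventricular region of interest."""
    thresh = threshold_otsu(ct_roi)   # separates contrast-filled lumen from background
    mask = ct_roi > thresh            # contrast agent makes the lumen brighter
    return float(mask.sum()) * voxel_volume_mm3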
Accuracy of Handheld Blood Glucose Meters at High Altitude
de Vries, Suzanna T.; Fokkert, Marion J.; Dikkeschei, Bert D.; Rienks, Rienk; Bilo, Karin M.; Bilo, Henk J. G.
2010-01-01
Background Due to increasing numbers of people with diabetes taking part in extreme sports (e.g., high-altitude trekking), reliable handheld blood glucose meters (BGMs) are necessary. Accurate blood glucose measurement under extreme conditions is paramount for safe recreation at altitude. Prior studies reported bias in blood glucose measurements using different BGMs at high altitude. We hypothesized that glucose-oxidase based BGMs are more influenced by the lower atmospheric oxygen pressure at altitude than glucose dehydrogenase based BGMs. Methodology/Principal Findings Glucose measurements at simulated altitude of nine BGMs (six glucose dehydrogenase and three glucose oxidase BGMs) were compared to glucose measurements on a similar BGM at sea level and to a laboratory glucose reference method. Venous blood samples of four different glucose levels were used. Moreover, two glucose oxidase and two glucose dehydrogenase based BGMs were evaluated at different altitudes on Mount Kilimanjaro. Accuracy criteria were set at a bias <15% from reference glucose (when >6.5 mmol/L) and <1 mmol/L from reference glucose (when <6.5 mmol/L). No significant difference was observed between measurements at simulated altitude and sea level for either glucose oxidase based BGMs or glucose dehydrogenase based BGMs as a group phenomenon. Two GDH based BGMs did not meet the set performance criteria. Most BGMs generally overestimated the true glucose concentration at high altitude. Conclusion At simulated high altitude all tested BGMs, including glucose oxidase based BGMs, did not show influence of low atmospheric oxygen pressure. All BGMs, except for two GDH based BGMs, performed within the predefined criteria. At true high altitude one GDH based BGM had the best precision and accuracy. PMID:21103399
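The accuracy criteria quoted above translate directly into a simple check of a meter reading against the laboratory reference (both in mmol/L); the sketch below applies the percentage rule at the 6.5 mmol/L boundary, which the abstract leaves unspecified.

# Accuracy criterion from the study: within 1 mmol/L of the reference below
# 6.5 mmol/L, within 15 % of the reference above it. The behaviour exactly at
# 6.5 mmol/L is an assumption (percentage rule applied).
def meets_criteria(meter_mmol_l, reference_mmol_l):
    if reference_mmol_l < 6.5:
        return abs(meter_mmol_l - reference_mmol_l) < 1.0
    return abs(meter_mmol_l - reference_mmol_l) / reference_mmol_l < 0.15

# e.g. meets_criteria(7.8, 7.0) -> True (11.4 % bias); meets_criteria(5.0, 6.2) -> False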
Isupov, Inga; McInnes, Matthew D F; Hamstra, Stan J; Doherty, Geoffrey; Gupta, Ashish; Peddle, Susan; Jibri, Zaid; Rakhra, Kawan; Hibbert, Rebecca M
2017-04-01
The purpose of this study is to develop a tool to assess the procedural competence of radiology trainees, with sources of evidence gathered from five categories to support the construct validity of the tool: content, response process, internal structure, relations to other variables, and consequences. A pilot form for assessing procedural competence among radiology residents, known as the RAD-Score tool, was developed by evaluating published literature and using a modified Delphi procedure involving a group of local content experts. The pilot version of the tool was tested by seven radiology department faculty members who evaluated procedures performed by 25 residents at one institution between October 2014 and June 2015. Residents were evaluated while performing multiple procedures in both clinical and simulation settings. The main outcome measure was the percentage of residents who were considered ready to perform procedures independently, with testing conducted to determine differences between levels of training. A total of 105 forms (for 52 procedures performed in a clinical setting and 53 procedures performed in a simulation setting) were collected for a variety of procedures (eight vascular or interventional, 42 body, 12 musculoskeletal, 23 chest, and 20 breast procedures). A statistically significant difference was noted in the percentage of trainees who were rated as being ready to perform a procedure independently (in postgraduate year [PGY] 2, 12% of residents; in PGY3, 61%; in PGY4, 85%; and in PGY5, 88%; p < 0.05); this difference persisted in the clinical and simulation settings. User feedback and psychometric analysis were used to create a final version of the form. This prospective study describes the successful development of a tool for assessing the procedural competence of radiology trainees with high levels of construct validity in multiple domains. Implementation of the tool in the radiology residency curriculum is planned and can play an instrumental role in the transition to competency-based radiology training.
Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul
2014-09-01
This report summarizes the result of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers and advanced processor architectures. Finally, we briefly describe the MSM method for efficient calculation of electrostatic interactions on massively parallel computers.
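The fitting step described above, a weighted least-squares regression of reference quantities onto bispectrum descriptors, can be sketched schematically as follows; the array shapes and weights are illustrative, and a production fit would also stack force and stress rows into the design matrix.

# Schematic weighted least-squares fit of SNAP coefficients. B collects the
# per-observation bispectrum descriptors (summed over atoms for an energy row),
# y the reference QM values, and w the regression weights; all are illustrative.
import numpy as np

def fit_snap_coefficients(B, y, w):
    """Return beta minimizing sum_i w_i * (y_i - B[i] @ beta)**2."""
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(B * sw[:, None], y * sw, rcond=None)
    return beta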
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corona, Edmundo; Song, Bo
This memo concerns the transmission of mechanical signals through silicone foam pads in a compression Kolsky bar set-up. The results of numerical simulations for four levels of pad pre-compression and two striker velocities were compared directly to test measurements to assess the fidelity of the simulations. The finite element model simulated the Kolsky tests in their entirety and used the hyperelastic `hyperfoam' model for the silicone foam pads. Calibration of the hyperfoam model was deduced from quasi-static compression data. It was necessary, however, to augment the material model by adding stiffness-proportional damping in order to generate results that resembled the experimental measurements. Based on the results presented here, it is important to account for the dynamic behavior of polymeric foams in numerical simulations that involve high loading rates.
NASA Astrophysics Data System (ADS)
Duives, Dorine C.; Daamen, Winnie; Hoogendoorn, Serge P.
2016-04-01
In recent years numerous pedestrian simulation tools have been developed that can support crowd managers and government officials in their tasks. New technologies to monitor pedestrian flows are in dire need of models that allow for rapid state-estimation. Many contemporary pedestrian simulation tools model the movements of pedestrians at a microscopic level, which does not provide an exact solution. Macroscopic models capture the fundamental characteristics of the traffic state at a more aggregate level, and generally have a closed form solution which is necessary for rapid state estimation for traffic management purposes. This contribution presents a next step in the calibration and validation of the macroscopic continuum model detailed in Hoogendoorn et al. (2014). The influence of global and local route choice on the development of crowd movement phenomena, such as dissipation, lane-formation and stripe-formation, is studied. This study shows that most self-organization phenomena and behavioural trends only develop under very specific conditions, and as such can only be simulated using specific parameter sets. Moreover, all crowd movement phenomena can be reproduced by means of the continuum model using one parameter set. This study concludes that the incorporation of local route choice behaviour and the balancing of the aptitude of pedestrians with respect to their own class and other classes are both essential in the correct prediction of crowd movement dynamics.
The Role of Simulation in Microsurgical Training.
Evgeniou, Evgenios; Walker, Harriet; Gujral, Sameer
Simulation has been established as an integral part of microsurgical training. The aim of this study was to assess and categorize the various simulation models in relation to the complexity of the microsurgical skill being taught and analyze the assessment methods commonly employed in microsurgical simulation training. Numerous courses have been established using simulation models. These models can be categorized, according to the level of complexity of the skill being taught, into basic, intermediate, and advanced. Microsurgical simulation training should be assessed using validated assessment methods. Assessment methods vary significantly from subjective expert opinions to self-assessment questionnaires and validated global rating scales. The appropriate assessment method should carefully be chosen based on the simulation modality. Simulation models should be validated, and a model with appropriate fidelity should be chosen according to the microsurgical skill being taught. Assessment should move from traditional simple subjective evaluations of trainee performance to validated tools. Future studies should assess the transferability of skills gained during simulation training to the real-life setting. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models
Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A.
2014-01-01
Multiple software programs are available for designing and running large scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the systems, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients. PMID:25374542
NASA Astrophysics Data System (ADS)
Zhang, Chunxi; Wang, Yuqing
2018-01-01
The sensitivity of simulated tropical cyclones (TCs) to the choice of cumulus parameterization (CP) scheme in the advanced Weather Research and Forecasting Model (WRF-ARW) version 3.5 is analyzed based on ten seasonal simulations with 20-km horizontal grid spacing over the western North Pacific. Results show that the simulated frequency and intensity of TCs are very sensitive to the choice of the CP scheme. The sensitivity can be explained well by the difference in the low-level circulation in a height and sorted moisture space. By transporting moist static energy from dry to moist region, the low-level circulation is important to convective self-aggregation which is believed to be related to genesis of TC-like vortices (TCLVs) and TCs in idealized settings. The radiative and evaporative cooling associated with low-level clouds and shallow convection in dry regions is found to play a crucial role in driving the moisture-sorted low-level circulation. With shallow convection turned off in a CP scheme, relatively strong precipitation occurs frequently in dry regions. In this case, the diabatic cooling can still drive the low-level circulation but its strength is reduced and thus TCLV/TC genesis is suppressed. The inclusion of the cumulus momentum transport (CMT) in a CP scheme can considerably suppress genesis of TCLVs/TCs, while changes in the moisture-sorted low-level circulation and horizontal distribution of precipitation are trivial, indicating that the CMT modulates the TCLVs/TCs activities in the model by mechanisms other than the horizontal transport of moist static energy.
Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Tumeo, Antonino; Secchi, Simone
Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy.
Expanded Processing Techniques for EMI Systems
2012-07-01
possible to perform better target detection using physics-based algorithms and the entire data set, rather than simulating a simpler data set and mapping... Figure 4.25: Plots of simulated MetalMapper data for two oblate spheroidal targets
Multiple Point Statistics algorithm based on direct sampling and multi-resolution images
NASA Astrophysics Data System (ADS)
Julien, S.; Renard, P.; Chugunova, T.
2017-12-01
Multiple Point Statistics (MPS) has become popular over more than a decade in the Earth Sciences, because these methods allow random fields to be generated that reproduce the highly complex spatial features given in a conceptual model, the training image, whereas classical geostatistical techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists of borrowing patterns from the training image to populate a simulation grid. The grid is filled sequentially by visiting its nodes in a random order; the patterns, whose number of nodes is fixed, become narrower during the simulation as the grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics that are distinguishable at different scales in the training image, and thereby lose the spatial arrangement of the different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) yields a lower-resolution image, and iterating this process builds a pyramid of images depicting fewer details at each level, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest-resolution level, and subsequently each level, up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures of the training image at every scale and thus generate more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures; these kinds of images often display typical structures at different scales and are well suited to MPS simulation techniques.
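A minimal sketch of the multi-resolution decomposition described above follows: repeated Gaussian smoothing and downsampling of the training image into a pyramid that the direct-sampling simulation then traverses from coarsest to finest. The kernel width, downsampling factor, and number of levels are illustrative choices, not the values used in the study.

# Gaussian pyramid of a 2-D training image for multi-resolution MPS simulation.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(training_image, n_levels=3, sigma=1.0):
    """Return [finest, ..., coarsest] versions of a 2-D training image."""
    levels = [np.asarray(training_image, dtype=float)]
    for _ in range(n_levels - 1):
        smoothed = gaussian_filter(levels[-1], sigma=sigma)  # convolution with a Gaussian kernel
        levels.append(smoothed[::2, ::2])                    # keep every other node in each direction
    return levels

# Direct sampling would then simulate levels[-1] first and condition each finer
# level on the result of the level one rank coarser.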
LC-MSsim – a simulation software for liquid chromatography mass spectrometry data
Schulz-Trieglaff, Ole; Pfeifer, Nico; Gröpl, Clemens; Kohlbacher, Oliver; Reinert, Knut
2008-01-01
Background Mass Spectrometry coupled to Liquid Chromatography (LC-MS) is commonly used to analyze the protein content of biological samples in large scale studies. The data resulting from an LC-MS experiment is huge, highly complex and noisy. Accordingly, it has sparked new developments in Bioinformatics, especially in the fields of algorithm development, statistics and software engineering. In a quantitative label-free mass spectrometry experiment, crucial steps are the detection of peptide features in the mass spectra and the alignment of samples by correcting for shifts in retention time. At the moment, it is difficult to compare the plethora of algorithms for these tasks. So far, curated benchmark data exists only for peptide identification algorithms but no data that represents a ground truth for the evaluation of feature detection, alignment and filtering algorithms. Results We present LC-MSsim, a simulation software for LC-ESI-MS experiments. It simulates ESI spectra on the MS level. It reads a list of proteins from a FASTA file and digests the protein mixture using a user-defined enzyme. The software creates an LC-MS data set using a predictor for the retention time of the peptides and a model for peak shapes and elution profiles of the mass spectral peaks. Our software also offers the possibility to add contaminants, to change the background noise level and includes a model for the detectability of peptides in mass spectra. After the simulation, LC-MSsim writes the simulated data to mzData, a public XML format. The software also stores the positions (monoisotopic m/z and retention time) and ion counts of the simulated ions in separate files. Conclusion LC-MSsim generates simulated LC-MS data sets and incorporates models for peak shapes and contaminations. Algorithm developers can match the results of feature detection and alignment algorithms against the simulated ion lists and meaningful error rates can be computed. We anticipate that LC-MSsim will be useful to the wider community to perform benchmark studies and comparisons between computational tools. PMID:18842122
Framework of passive millimeter-wave scene simulation based on material classification
NASA Astrophysics Data System (ADS)
Park, Hyuk; Kim, Sung-Hyun; Lee, Ho-Jin; Kim, Yong-Hoon; Ki, Jae-Sug; Yoon, In-Bok; Lee, Jung-Min; Park, Soon-Jun
2006-05-01
Over the past few decades, passive millimeter-wave (PMMW) sensors have emerged as useful instruments in transportation and military applications such as autonomous flight-landing systems, smart weapons, and night- and all-weather vision systems. As an efficient way to predict the performance of a PMMW sensor and apply it to a system, testing in a SoftWare-In-the-Loop (SWIL) environment is required. PMMW scene simulation is a key component for implementing this simulator. However, no commercial off-the-shelf tool is available for constructing the PMMW scene simulation, and there have been only a few studies on this technology. We have studied the PMMW scene simulation method to develop the PMMW sensor SWIL simulator. This paper describes the framework of the PMMW scene simulation and tentative results. The purpose of the PMMW scene simulation is to generate sensor outputs (or images) from a visible image and environmental conditions. We organize it into four parts: material classification mapping, PMMW environmental setting, PMMW scene forming, and millimeter-wave (MMW) sensorworks. The background and the objects in the scene are classified based on properties related to MMW radiation and reflectivity. The environmental setting part calculates the following PMMW phenomenology: atmospheric propagation and emission including sky temperature, weather conditions, and physical temperature. Then, PMMW raw images are formed with the surface geometry. Finally, PMMW sensor outputs are generated from the PMMW raw images by applying sensor characteristics such as aperture size and noise level. Through the simulation process, PMMW phenomenology and sensor characteristics are thus reflected in the output scene. We have finished the design of the simulator framework and are working on the detailed implementation. As a tentative result, a flight observation was simulated under specific conditions. After the detailed implementation, we plan to increase the reliability of the simulation by collecting data with actual PMMW sensors. With a reliable PMMW scene simulator, it will be more efficient to apply PMMW sensors to various applications.
Pawar, Swapnil; Jacques, Theresa; Deshpande, Kush; Pusapati, Raju; Meguerdichian, Michael J
2018-04-01
Simulation in the critical care setting involves a heterogeneous group of participants with varied backgrounds and experience. Measuring the impact of simulation on emotional state and cognitive load in this setting is not often performed, and the feasibility of such measurement in the critical care setting needs further exploration. Medical and nursing staff with varying levels of experience from a tertiary intensive care unit participated in a standardised clinical simulation scenario. The emotional state of each participant was assessed before and after completion of the scenario using a validated eight-item scale containing bipolar oppositional descriptors of emotion. The cognitive load of each participant was assessed after the completion of the scenario using a validated subjective rating tool. A total of 103 medical and nursing staff participated in the study. The participants felt more relaxed (-0.28±1.15 vs 0.14±1, P<0.005; d=0.39), excited (0.25±0.89 vs 0.55±0.92, P<0.005, d=0.35) and alert (0.85±0.87 vs 1.28±0.73, P<0.00001, d=0.54) following simulation. There was no difference in the mean scores for the remaining five items. The mean cognitive load for all participants was 6.67±1.41. There was no significant difference in the cognitive loads of medical staff versus nursing staff (6.61±2.3 vs 6.62±1.7; P>0.05). A well-designed, complex, high-fidelity critical care simulation scenario can be evaluated to identify the relative cognitive load of the participants' experience and their emotional state. The movement of learners emotionally from a more negative state to a positive state suggests that simulation can be an effective tool for improved knowledge transfer and offers more opportunity for dynamic thinking.
The effects of workload on respiratory variables in simulated flight: a preliminary study.
Karavidas, Maria Katsamanis; Lehrer, Paul M; Lu, Shou-En; Vaschillo, Evgeny; Vaschillo, Bronya; Cheng, Andrew
2010-04-01
In this pilot study, we investigated respiratory activity and end-tidal carbon dioxide (PetCO2) during exposure to varying levels of workload in a simulated flight environment. Seven pilots (age 34-60) participated in a one-session test on a Boeing 737-800 simulator. Physiological data were collected while pilots wore an ambulatory multi-channel recording device; respiratory variables, including inductance plethysmography (respiratory pattern) and PetCO2, showed changes in CO2 levels proportional to changes in flight-task workload. Pilots performed a set of simulated flight tasks. Pilot performance was rated for each task by a test pilot, and self-reported workload was collected using the NASA-TLX scale. Mixed-model analysis revealed that respiration rate and minute ventilation are significantly associated with workload levels and evaluator scores, controlling for the "vanilla baseline" condition. Hypocapnia occurred exclusively in tasks where pilots performed more poorly. This study was designed as a preliminary investigation to develop a psychophysiological assessment methodology rather than to offer conclusive findings. The results show that the respiratory system is highly reactive to high-workload conditions in aviation and suggest that hypocapnia may pose a flight safety risk under some circumstances. Copyright © 2010 Elsevier B.V. All rights reserved.
Dopamine, Affordance and Active Inference
Friston, Karl J.; Shiner, Tamara; FitzGerald, Thomas; Galea, Joseph M.; Adams, Rick; Brown, Harriet; Dolan, Raymond J.; Moran, Rosalyn; Stephan, Klaas Enno; Bestmann, Sven
2012-01-01
The role of dopamine in behaviour and decision-making is often cast in terms of reinforcement learning and optimal decision theory. Here, we present an alternative view that frames the physiology of dopamine in terms of Bayes-optimal behaviour. In this account, dopamine controls the precision or salience of (external or internal) cues that engender action. In other words, dopamine balances bottom-up sensory information and top-down prior beliefs when making hierarchical inferences (predictions) about cues that have affordance. In this paper, we focus on the consequences of changing tonic levels of dopamine firing using simulations of cued sequential movements. Crucially, the predictions driving movements are based upon a hierarchical generative model that infers the context in which movements are made. This means that we can confuse agents by changing the context (order) in which cues are presented. These simulations provide a (Bayes-optimal) model of contextual uncertainty and set switching that can be quantified in terms of behavioural and electrophysiological responses. Furthermore, one can simulate dopaminergic lesions (by changing the precision of prediction errors) to produce pathological behaviours that are reminiscent of those seen in neurological disorders such as Parkinson's disease. We use these simulations to demonstrate how a single functional role for dopamine at the synaptic level can manifest in different ways at the behavioural level. PMID:22241972
Curriculum-Based Measurement of Oral Reading: Quality of Progress Monitoring Outcomes
ERIC Educational Resources Information Center
Christ, Theodore J.; Zopluoglu, Cengiz; Long, Jeffery D.; Monaghen, Barbara D.
2012-01-01
Curriculum-based measurement of oral reading (CBM-R) is frequently used to set student goals and monitor student progress. This study examined the quality of growth estimates derived from CBM-R progress monitoring data. The authors used a linear mixed effects regression (LMER) model to simulate progress monitoring data for multiple levels of…
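A minimal sketch of the kind of data-generating process implied by a linear mixed-effects (LMER) simulation of progress monitoring: random student intercepts and weekly-growth slopes plus residual error, with each student's growth then recovered by ordinary least squares. The parameter values are hypothetical and are not taken from the study.

```python
# Hedged sketch: simulate CBM-R-style progress-monitoring scores from a mixed-effects
# structure and recover per-student growth estimates. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_students, n_weeks = 50, 14
beta0, beta1 = 100.0, 1.5            # mean baseline score and mean weekly growth
sd_u0, sd_u1, sd_e = 15.0, 0.6, 8.0  # between-student intercept/slope SDs, residual SD

weeks = np.arange(n_weeks)
u0 = rng.normal(0.0, sd_u0, n_students)          # student-specific intercept deviations
u1 = rng.normal(0.0, sd_u1, n_students)          # student-specific slope deviations
scores = (beta0 + u0[:, None]
          + (beta1 + u1[:, None]) * weeks
          + rng.normal(0.0, sd_e, (n_students, n_weeks)))

# Per-student OLS slope (weekly growth estimate)
slopes = np.array([np.polyfit(weeks, scores[i], 1)[0] for i in range(n_students)])
print(f"true mean growth {beta1:.2f}, estimated {slopes.mean():.2f} "
      f"(SD of estimates {slopes.std(ddof=1):.2f})")
```

Varying the residual SD and the number of weeks in such a simulation is one way to examine how noisy short progress-monitoring series make individual growth estimates.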
METRO-APEX Volume 17.1: Industrialist's Manual No. 7, Shick Cannery. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Industrialist's Manual No. 7 (Shick Cannery) is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of…
METRO-APEX Volume 19.1: City Manager and County Administrative Officer's Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The City Manager and County Administrative Officer's Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an…
APEX (Air Pollution Exercise) Volume 21: Legal References: Air Pollution Control Regulations.
ERIC Educational Resources Information Center
Environmental Protection Agency, Research Triangle Park, NC. Office of Manpower Development.
The Legal References: Air Pollution Control Regulations Manual is the last in a set of 21 manuals (AA 001 009-001 029) used in APEX (Air Pollution Exercise), a computerized college and professional level "real world" game simulation of a community with urban and rural problems, industrial activities, and air pollution difficulties. The manual…
METRO-APEX Volume 13.1: Industrialist's Manual No. 3, Rusty's Iron Foundry. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Industrialist's Manual No. 3 (Rusty's Iron Foundry) is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion…
METRO-APEX Volume 7.1: Air Pollution Control Officer's Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Air Pollution Control Officer's Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air…
METRO-APEX Volume 11.1: Industrialists' Manual No. 1, Shear Power Company. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Industrialist's Manual No. 1 (Shear Power Company) is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion…
METRO-APEX Volume 12.1: Industrialist's Manual No. 2, People's Pulp Plant. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Industrialist's Manual No. 2 (People's Pulp Plant) is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion…
METRO-APEX Volume 15.1: Industrialist's Manual No. 5, Caesar's Rendering Plant. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Industrialist's Manual No. 5 (Caesar's Rendering Plant) is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an…
METRO-APEX Volume 16.1: Industrialist's Manual No. 6, Dusty Rhodes Cement Company. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Industrialist's Manual No. 6 (Dusty Rhodes Cement Company) is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an…
METRO-APEX Volume 14.1: Industrialist's Manual No. 4, Gestalt Malt Brewery. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Industrialist's Manual No. 4 (Gestalt Malt Brewery) is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion…
Warburton, Bruce; Gormley, Andrew M
2015-01-01
Internationally, invasive vertebrate species pose a significant threat to biodiversity, agricultural production and human health. To manage these species a wide range of tools, including traps, are used. In New Zealand, brushtail possums (Trichosurus vulpecula), stoats (Mustela erminea), and ship rats (Rattus rattus) are invasive and there is an ongoing demand for cost-effective non-toxic methods for controlling these pests. Recently, traps with multiple-capture capability have been developed which, because they do not require regular operator-checking, are purported to be more cost-effective than traditional single-capture traps. However, when pest populations are being maintained at low densities (as is typical of orchestrated pest management programmes) it remains uncertain whether it is more cost-effective to use fewer multiple-capture traps or more single-capture traps. To address this uncertainty, we used an individual-based, spatially explicit modelling approach to determine the likely maximum number of animal captures per trap, given stated pest densities and defined times traps are left between checks. In the simulation, single- or multiple-capture traps were spaced according to best-practice pest-control guidelines. For possums with maintenance densities set at the lowest level (i.e. 0.5/ha), 98% of all simulated possums were captured with only a single-capture trap set at each site. When possum density was increased to a moderate level of 3/ha, a capacity of three captures per trap caught 97% of all simulated possums. Results were similar for stoats, although only two potential captures per site were sufficient to capture 99% of simulated stoats. For rats, which were simulated at their typically higher densities, even a six-capture capacity per trap site resulted in only an 80% kill. Depending on the target species, prevailing density and extent of immigration, the most cost-effective strategy for pest control in New Zealand might be to deploy several single-capture traps rather than investing in fewer, but more expensive, multiple-capture traps.
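The sketch below is a deliberately simplified, hypothetical version of the comparison described: an individual-based, spatially explicit simulation in which animals near a trap may be caught each night until the trap reaches its capture capacity. Trap spacing, encounter radius, nightly capture probability, and densities are illustrative, not the study's calibrated values.

```python
# Toy individual-based trap-capacity simulation (all parameters hypothetical).
import numpy as np

def simulate(density_per_ha=0.5, capacity=1, nights=14, area_ha=100,
             trap_spacing_m=100.0, encounter_radius_m=60.0, p_capture=0.3, seed=0):
    rng = np.random.default_rng(seed)
    side = np.sqrt(area_ha * 10_000)                      # square block, metres
    n_animals = rng.poisson(density_per_ha * area_ha)
    animals = rng.uniform(0, side, size=(n_animals, 2))   # home-range centres
    xs = np.arange(trap_spacing_m / 2, side, trap_spacing_m)
    traps = np.array([(x, y) for x in xs for y in xs])    # grid of trap sites
    remaining = np.full(len(traps), capacity)             # captures left per trap
    alive = np.ones(n_animals, dtype=bool)
    for _ in range(nights):
        for i in np.flatnonzero(alive):
            d = np.linalg.norm(traps - animals[i], axis=1)
            j = np.argmin(np.where(remaining > 0, d, np.inf))  # nearest available trap
            if remaining[j] > 0 and d[j] < encounter_radius_m and rng.random() < p_capture:
                remaining[j] -= 1
                alive[i] = False
    return 1.0 - alive.mean() if n_animals else 1.0

for cap in (1, 3, 6):
    print(f"capacity {cap}: {100 * simulate(density_per_ha=3.0, capacity=cap):.0f}% caught")
```

Even in this stripped-down form, the qualitative trade-off appears: at low density extra capacity is rarely used, whereas at higher densities a trap's capacity, rather than its catchability, becomes the binding constraint.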
Gilliom, Robert J.; Helsel, Dennis R.
1986-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
Estimation of distributional parameters for censored trace-level water-quality data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1984-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.
Key role of coupling, delay, and noise in resting brain fluctuations
Deco, Gustavo; Jirsa, Viktor; McIntosh, A. R.; Sporns, Olaf; Kötter, Rolf
2009-01-01
A growing body of neuroimaging research has documented that, in the absence of an explicit task, the brain shows temporally coherent activity. This so-called “resting state” activity or, more explicitly, the default-mode network, has been associated with daydreaming, free association, stream of consciousness, or inner rehearsal in humans, but similar patterns have also been found under anesthesia and in monkeys. Spatiotemporal activity patterns in the default-mode network are both complex and consistent, which raises the question whether they are the expression of an interesting cognitive architecture or the consequence of intrinsic network constraints. In numerical simulation, we studied the dynamics of a simplified cortical network using 38 noise-driven (Wilson–Cowan) oscillators, which in isolation remain just below their oscillatory threshold. Time-delay coupling based on the lengths and strengths of primate corticocortical pathways leads to the emergence of two sets of 40-Hz oscillators. The sets showed synchronization that was anticorrelated at <0.1 Hz across the sets, in line with a wide range of recent experimental observations. Systematic variation of conduction velocity, coupling strength, and noise level indicates a high sensitivity of the emerging synchrony, as well as of the simulated blood flow (blood oxygen level-dependent, BOLD) signal, to the underlying parameter values. Optimal sensitivity was observed around conduction velocities of 1–2 m/s, with very weak coupling between oscillators. An additional finding was that the optimal noise level had a characteristic scale, indicating the presence of stochastic resonance, which allows the network dynamics to respond with high sensitivity to changes in diffuse feedback activity. PMID:19497858
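A much-reduced sketch of the ingredients named above: two noise-driven Wilson–Cowan excitatory/inhibitory units, coupled excitatory-to-excitatory with a conduction delay, integrated with Euler–Maruyama. This is not the paper's 38-node primate connectome; the parameter values are generic placeholders chosen only to keep the sketch runnable.

```python
# Two delay-coupled, noise-driven Wilson-Cowan units (illustrative parameters only).
import numpy as np

def sigmoid(x, a=1.0, theta=4.0):
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def simulate(T=2.0, dt=1e-4, delay=0.01, coupling=0.1, noise=0.01, seed=0):
    rng = np.random.default_rng(seed)
    steps, d = int(T / dt), int(delay / dt)
    E = np.zeros((steps, 2)); I = np.zeros((steps, 2))
    tau, c_ee, c_ei, c_ie, c_ii, P = 0.01, 16.0, 12.0, 15.0, 3.0, 1.2
    for t in range(1, steps):
        E_del = E[t - 1 - d] if t - 1 - d >= 0 else E[0]   # delayed partner activity
        for i in (0, 1):
            j = 1 - i
            inp_e = c_ee * E[t-1, i] - c_ei * I[t-1, i] + P + coupling * E_del[j]
            inp_i = c_ie * E[t-1, i] - c_ii * I[t-1, i]
            E[t, i] = (E[t-1, i] + dt / tau * (-E[t-1, i] + sigmoid(inp_e))
                       + noise * np.sqrt(dt) * rng.standard_normal())
            I[t, i] = I[t-1, i] + dt / tau * (-I[t-1, i] + sigmoid(inp_i))
    return E

E = simulate()
print("correlation of the two excitatory traces:",
      np.corrcoef(E[5000:, 0], E[5000:, 1])[0, 1])
```

Sweeping the `delay`, `coupling`, and `noise` arguments is the toy analogue of the parameter variation reported in the abstract.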
Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments
Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed
2013-01-01
In recent years, RNA-Seq technologies have become a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data have yet to be standardized. In particular, it is known that the choice of a normalization procedure leads to great variability in the results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aimed at removing an inherent bias of the studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Analyses of real RNA-Seq data sets, performed with all the different normalization methods, show that only 50% of significantly differentially expressed genes are shared. This result highlights the influence of the normalization step on the differential expression analysis. Analyses of real and simulated data sets give similar results, showing three groups of procedures with the same behavior. The group including the novel method, named “Median Ratio Normalization” (MRN), gives the lowest number of false discoveries. Within this group, the MRN method is less sensitive to modification of parameters related to the relative size of transcriptomes, such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with the intrinsic bias resulting from the relative size of the studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135
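For orientation, here is a hedged sketch of a median-of-ratios normalization in the same family as the MRN method described above (and as DESeq-style size factors); the published MRN procedure differs in its details. Counts are divided by per-sample size factors taken as the median ratio of each sample to a geometric-mean pseudo-reference.

```python
# Median-of-ratios style size factors (illustrative; not the exact MRN definition).
import numpy as np

def median_ratio_size_factors(counts):
    """counts: genes x samples matrix of raw read counts."""
    counts = np.asarray(counts, dtype=float)
    keep = (counts > 0).all(axis=1)                 # genes detected in every sample
    log_ref = np.log(counts[keep]).mean(axis=1)     # log geometric-mean reference
    log_ratios = np.log(counts[keep]) - log_ref[:, None]
    return np.exp(np.median(log_ratios, axis=0))    # one size factor per sample

counts = np.array([[100, 200, 150],
                   [ 30,  60,  45],
                   [500, 980, 770],
                   [  0,  10,   5]])
sf = median_ratio_size_factors(counts)
normalized = counts / sf                            # library-size-corrected counts
print("size factors:", np.round(sf, 3))
```

The intuition matches the abstract's concern about relative transcriptome size: using the median of gene-wise ratios keeps a handful of very highly expressed or strongly regulated genes from dominating the scaling.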
Physiological Based Simulator Fidelity Design Guidance
NASA Technical Reports Server (NTRS)
Schnell, Thomas; Hamel, Nancy; Postnikov, Alex; Hoke, Jaclyn; McLean, Angus L. M. Thom, III
2012-01-01
The evolution of the role of flight simulation has reinforced assumptions in aviation that the degree of realism in a simulation system directly correlates to the training benefit, i.e., more fidelity is always better. The construct of fidelity has several dimensions, including physical fidelity, functional fidelity, and cognitive fidelity. Interaction of different fidelity dimensions has an impact on trainee immersion, presence, and transfer of training. This paper discusses research results of a recent study that investigated if physiological-based methods could be used to determine the required level of simulator fidelity. Pilots performed a relatively complex flight task consisting of mission task elements of various levels of difficulty in a fixed base flight simulator and a real fighter jet trainer aircraft. Flight runs were performed using one forward visual channel of 40 deg. field of view for the lowest level of fidelity, 120 deg. field of view for the middle level of fidelity, and unrestricted field of view and full dynamic acceleration in the real airplane. Neuro-cognitive and physiological measures were collected under these conditions using the Cognitive Avionics Tool Set (CATS) and nonlinear closed form models for workload prediction were generated based on these data for the various mission task elements. One finding of the work described herein is that simple heart rate is a relatively good predictor of cognitive workload, even for short tasks with dynamic changes in cognitive loading. Additionally, we found that models that used a wide range of physiological and neuro-cognitive measures can further boost the accuracy of the workload prediction.
Fossett, Mark
2011-01-01
This paper considers the potential for using agent models to explore theories of residential segregation in urban areas. Results of generative experiments conducted using an agent-based simulation of segregation dynamics document that varying a small number of model parameters representing constructs from urban-ecological theories of segregation can generate a wide range of qualitatively distinct and substantively interesting segregation patterns. The results suggest how complex, macro-level patterns of residential segregation can arise from a small set of simple micro-level social dynamics operating within particular urban-demographic contexts. The promise and current limitations of agent simulation studies are noted and optimism is expressed regarding the potential for such studies to engage and contribute to the broader research literature on residential segregation. PMID:21379372
Tharmmaphornphilas, Wipawee; Green, Benjamin; Carnahan, Brian J; Norman, Bryan A
2003-01-01
This research developed worker schedules by using administrative controls and a computer programming model to reduce the likelihood of worker hearing loss. By rotating the workers through different jobs during the day it was possible to reduce their exposure to hazardous noise levels. Computer simulations were made based on data collected in a real setting. Worker schedules currently used at the site are compared with proposed worker schedules from the computer simulations. For the worker assignment plans found by the computer model, the authors calculate a significant decrease in time-weighted average (TWA) sound level exposure. The maximum daily dose that any worker is exposed to is reduced by 58.8%, and the maximum TWA value for the workers is reduced by 3.8 dB from the current schedule.
Helicopter roll control effectiveness criteria program summary
NASA Technical Reports Server (NTRS)
Heffley, Robert K.; Bourne, Simon M.; Mnich, Marc A.
1988-01-01
A study of helicopter roll control effectiveness is summarized for the purpose of defining military helicopter handling qualities requirements. The study is based on an analysis of pilot-in-the-loop task performance of several basic maneuvers. This is extended by a series of piloted simulations using the NASA Ames Vertical Motion Simulator and selected flight data. The main results cover roll control power and short-term response characteristics. In general the handling qualities requirements recommended are set in conjunction with desired levels of flight task and maneuver response which can be directly observed in actual flight. An important aspect of this, however, is that vehicle handling qualities need to be set with regard to some quantitative aspect of mission performance. Specific examples of how this can be accomplished include a lateral unmask/remask maneuver in the presence of a threat and an air tracking maneuver which recognizes the kill probability enhancement connected with decreasing the range to the target. Conclusions and recommendations address not only the handling qualities recommendations, but also the general use of flight simulators and the dependence of mission performance on handling qualities.
Kasaie, Parastu; Mathema, Barun; Kelton, W David; Azman, Andrew S; Pennington, Jeff; Dowdy, David W
2015-01-01
In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission ("recent transmission proportion"), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional 'n-1' approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the 'n-1' technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the 'n-1' model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models' performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data.
Kasaie, Parastu; Mathema, Barun; Kelton, W. David; Azman, Andrew S.; Pennington, Jeff; Dowdy, David W.
2015-01-01
In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission (“recent transmission proportion”), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional ‘n-1’ approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the ‘n-1’ technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the ‘n-1’ model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models’ performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data. PMID:26679499
Eichhorn, Stefan; Spindler, Johannes; Polski, Marcin; Mendoza, Alejandro; Schreiber, Ulrich; Heller, Michael; Deutsch, Marcus Andre; Braun, Christian; Lange, Rüdiger; Krane, Markus
2017-05-01
Investigations of compressive frequency, duty cycle, or waveform during CPR are typically rooted in animal research or computer simulations. Our goal was to generate a mechanical model incorporating alternate stiffness settings and an integrated blood flow system, enabling defined, reproducible comparisons of CPR efficacy. Based on thoracic stiffness data measured in human cadavers, such a model was constructed using valve-controlled pneumatic pistons and an artificial heart. This model offers two realistic levels of chest elasticity, with a blood flow apparatus that reflects compressive depth and waveform changes. We conducted CPR at opposing levels of physiologic stiffness, using a LUCAS device, a motor-driven plunger, and a group of volunteers. In high-stiffness mode, blood flow generated by volunteers was significantly lower after just 2 min of CPR, whereas flow generated by the LUCAS device was superior by comparison. Optimal blood flow was obtained via a motor-driven plunger with a trapezoidal waveform. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
An Interactive Simulation Program for Exploring Computational Models of Auto-Associative Memory.
Fink, Christian G
2017-01-01
While neuroscience students typically learn about activity-dependent plasticity early in their education, they often struggle to conceptually connect modification at the synaptic scale with network-level neuronal dynamics, not to mention with their own everyday experience of recalling a memory. We have developed an interactive simulation program (based on the Hopfield model of auto-associative memory) that enables the user to visualize the connections generated by any pattern of neural activity, as well as to simulate the network dynamics resulting from such connectivity. An accompanying set of student exercises introduces the concepts of pattern completion, pattern separation, and sparse versus distributed neural representations. Results from a conceptual assessment administered before and after students worked through these exercises indicate that the simulation program is a useful pedagogical tool for illustrating fundamental concepts of computational models of memory.
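A hedged sketch of the Hopfield auto-associative memory that such a teaching tool is built around: Hebbian storage of binary (+1/-1) patterns and asynchronous recall that completes a corrupted cue. The simulation program itself provides much richer visualization; this shows only the core update rule, with hypothetical sizes.

```python
# Hopfield network: Hebbian storage and asynchronous pattern completion.
import numpy as np

def store(patterns):
    """Hebbian outer-product weights with zero diagonal; patterns: (P, N) array of +/-1."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, sweeps=5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    s = np.array(cue, dtype=float)
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):          # asynchronous unit updates
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 100))   # three random 100-unit memories
W = store(patterns)
cue = patterns[0].copy()
cue[:30] *= -1                                       # corrupt 30% of the cue
print("overlap with stored memory:",
      float(recall(W, cue, rng=rng) @ patterns[0]) / 100)
```

Pattern completion corresponds to the overlap returning close to 1.0 despite the corrupted cue; pattern separation and sparse versus distributed coding, mentioned in the abstract, can be explored by changing how many and what kind of patterns are stored.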
Flow Simulation of N3-X Hybrid Wing-Body Configuration
NASA Technical Reports Server (NTRS)
Kim, Hyoungjin; Liou, Meng-Sing
2013-01-01
System studies show that an N3-X hybrid wing-body aircraft with a turboelectric distributed propulsion system using a mail-slot inlet/nozzle nacelle can meet the environmental and performance goals for N+3 generation transports (three generations beyond the current air transport technology level) set by NASA's Subsonic Fixed Wing Project. In this study, a Navier-Stokes flow simulation of the N3-X on hybrid unstructured meshes was conducted, including the mail-slot propulsor. The geometry of the mail-slot propulsor was generated by a CAD (Computer-Aided Design)-free shape parameterization. A body-force approach was used for a more realistic and efficient simulation of the turning and loss effects of the fan blades and the inlet-fan interactions. Flow simulation results for the N3-X demonstrate the validity of the present approach.
The Impact of TRMM on Mesoscale Model Simulation of Super Typhoon Paka
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Jia, Y.; Halverson, J.; Hou, A.; Olson, W.; Rodgers, E.; Simpson, J.
1999-01-01
Tropical cyclone Paka formed during the first week of December 1997 and underwent three periods of rapid intensification over the following two weeks. During one of these periods, which began early on December 10, Paka's Dvorak-measured windspeed increased from 23 to 60 m/s over a 48-hr period. On December 18, during the last rapid deepening episode, Paka became a supertyphoon with a maximum wind speed of about 80 m/s. In this study, the Penn State/NCAR Mesoscale Model (MM5) with improved physics (i.e., cloud microphysics, radiation, land-soil-vegetation-surface processes, and the TOGA COARE flux scheme) and a multiple-level nesting technique (135, 45 and 15 km horizontal resolution) is used to simulate supertyphoon Paka. We performed two runs initialized with Goddard Earth Observing System (GEOS) data sets. The first GEOS data set does not incorporate either TRMM (Tropical Rainfall Measuring Mission) or SSM/I (Special Sensor Microwave/Imager) observed rainfall fields into the GEOS assimilation system, while the second one does. Preliminary results show that the MM5-simulated surface pressure deepened by more than 25 mb (45 km resolution domain) in the run initialized with the GEOS data set incorporating TRMM- and SSM/I-derived rainfall, compared to the one initialized without. However, the track and precipitation patterns are quite similar between the runs. In our presentation, we will show the impact of TRMM rainfall upon the MM5 simulation of Paka at various horizontal resolutions. We will also examine the physical processes associated with the initial explosive development by comparing MM5-simulated rainfall and latent heat release. In addition, budget (vorticity, PV, momentum and heat) calculations and sensitivity tests will be performed to examine the upper-tropospheric and SST mechanisms responsible for the explosive development of Paka.
Combining multiple tools outperforms individual methods in gene set enrichment analyses.
Alhamdoosh, Monther; Ng, Milica; Wilson, Nicholas J; Sheridan, Julie M; Huynh, Huy; Wilson, Michael J; Ritchie, Matthew E
2017-02-01
Gene set enrichment (GSE) analysis allows researchers to efficiently extract biological insight from long lists of differentially expressed genes by interrogating them at a systems level. In recent years, there has been a proliferation of GSE analysis methods and hence it has become increasingly difficult for researchers to select an optimal GSE tool based on their particular dataset. Moreover, the majority of GSE analysis methods do not allow researchers to simultaneously compare gene set level results between multiple experimental conditions. The ensemble of gene set enrichment analyses (EGSEA) is a method developed for RNA-sequencing data that combines results from twelve algorithms and calculates collective gene set scores to improve the biological relevance of the highest ranked gene sets. EGSEA's gene set database contains around 25 000 gene sets from sixteen collections. It has multiple visualization capabilities that allow researchers to view gene sets at various levels of granularity. EGSEA has been tested on simulated data and on a number of human and mouse datasets and, based on biologists' feedback, consistently outperforms the individual tools that have been combined. Our evaluation demonstrates the superiority of the ensemble approach for GSE analysis, and its utility to effectively and efficiently extrapolate biological functions and potential involvement in disease processes from lists of differentially regulated genes. EGSEA is available as an R package at http://www.bioconductor.org/packages/EGSEA/ . The gene sets collections are available in the R package EGSEAdata from http://www.bioconductor.org/packages/EGSEAdata/ . monther.alhamdoosh@csl.com.au mritchie@wehi.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Posterior error probability in the Mu-11 Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and made false indication of errors on 0.2% of the acquisitions.
Segmentation of mouse dynamic PET images using a multiphase level set method
NASA Astrophysics Data System (ADS)
Cheng-Liao, Jinxiu; Qi, Jinyi
2010-11-01
Image segmentation plays an important role in medical diagnosis. Here we propose an image segmentation method for four-dimensional mouse dynamic PET images. We consider that voxels inside each organ have similar time activity curves. The use of tracer dynamic information allows us to separate regions that have similar integrated activities in a static image but different temporal responses. We develop a multiphase level set method that utilizes both the spatial and temporal information in a dynamic PET data set. Different weighting factors are assigned to each image frame based on the noise level and the activity difference among organs of interest. We use a weighted absolute difference function in the data matching term to increase the robustness of the estimate and to avoid over-partitioning of regions with high contrast. We validated the proposed method using computer-simulated dynamic PET data, as well as real mouse data from a microPET scanner, and compared the results with those of a dynamic clustering method. The results show that the proposed method produces smoother segments with fewer misclassified voxels.
Li, Junli; Li, Chunyan; Qiu, Rui; Yan, Congchong; Xie, Wenzhang; Wu, Zhen; Zeng, Zhi; Tung, Chuanjong
2015-09-01
The method of Monte Carlo simulation is a powerful tool for investigating the details of radiation biological damage at the molecular level. In this paper, a Monte Carlo code called NASIC (Nanodosimetry Monte Carlo Simulation Code) was developed. It includes a physical module, a pre-chemical module, a chemical module, a geometric module and a DNA damage module. The physical module can simulate physical tracks of low-energy electrons in liquid water event by event. More than one set of inelastic cross sections was calculated by applying the dielectric function method of Emfietzoglou's optical-data treatments, with different optical data sets and dispersion models. In the pre-chemical module, the ionised and excited water molecules undergo dissociation processes. In the chemical module, the produced radiolytic chemical species diffuse and react. In the geometric module, an atomic model of 46 chromatin fibres in a spherical nucleus of a human lymphocyte was established. In the DNA damage module, the direct damage induced by the energy depositions of the electrons and the indirect damage induced by the radiolytic chemical species were calculated. The parameters were adjusted so that the simulation results agree with experimental results. In this paper, the influence of the inelastic cross sections and of the vibrational excitation reaction on these parameters and on the DNA strand break yields was studied. Further work on NASIC is under way. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation
Sherfey, Jason S.; Soplata, Austin E.; Ardid, Salva; Roberts, Erik A.; Stanley, David A.; Pittman-Polletta, Benjamin R.; Kopell, Nancy J.
2018-01-01
DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community. PMID:29599715
Simulation-Optimization Model for Seawater Intrusion Management at Pingtung Coastal Area, Taiwan
NASA Astrophysics Data System (ADS)
Huang, P. S.; Chiu, Y.
2015-12-01
In the 1970s, agriculture and aquaculture developed rapidly in the Pingtung coastal area of southern Taiwan. The groundwater aquifers were over-pumped, causing seawater intrusion. In order to remediate the contaminated groundwater and find the best strategies for groundwater usage, a management model that searches for optimal groundwater operational strategies is developed in this study. The objective function is to minimize the total amount of injection water, and a set of constraints is applied to ensure that the groundwater levels and concentrations are satisfied. A three-dimensional density-dependent flow and transport simulation model, SEAWAT, developed by the U.S. Geological Survey, is selected to simulate seawater intrusion. The simulation model is well calibrated against field measurements and is replaced by a surrogate model of trained artificial neural networks (ANNs) to reduce the computational time. The ANNs are embedded in the management model to link the simulation and optimization models, and the global optimizer of differential evolution (DE) is applied to solve the management model. The results show that the fully trained ANNs can substitute for the original simulation model and greatly reduce computational time. Under an appropriate setting of the objective function and constraints, DE can find the optimal injection rates at predefined barriers. The concentrations at the target locations could decrease by more than 50 percent within the planning horizon of 20 years. Keywords: seawater intrusion, groundwater management, numerical model, artificial neural networks, differential evolution
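A hedged sketch of the differential evolution (DE/rand/1/bin) optimizer used as the global search engine in such a management model. The objective below is a toy stand-in for "minimize total injection subject to a concentration constraint", handled with a simple quadratic penalty; it is not the calibrated SEAWAT/ANN surrogate, and all coefficients are hypothetical.

```python
# Differential evolution (rand/1/bin) with a toy penalized objective.
import numpy as np

def differential_evolution(obj, bounds, pop_size=30, F=0.7, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    fit = np.array([obj(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(len(bounds)) < CR
            cross[rng.integers(len(bounds))] = True      # ensure at least one crossed gene
            trial = np.where(cross, mutant, pop[i])
            if (f := obj(trial)) < fit[i]:
                pop[i], fit[i] = trial, f
    return pop[fit.argmin()], fit.min()

# Toy objective: minimize total injection while a surrogate "concentration" stays below
# a target; the constraint enters through a quadratic penalty (all numbers hypothetical).
def objective(q):
    concentration = 50.0 - 1.2 * q.sum() + 0.02 * (q * q).sum()
    return q.sum() + 1e3 * max(0.0, concentration - 25.0) ** 2

best, value = differential_evolution(objective, bounds=[(0, 40)] * 3)
print("injection rates:", np.round(best, 2), "objective:", round(value, 2))
```

In the workflow the abstract describes, the trained ANN surrogate would play the role of `objective`, which is what makes the thousands of evaluations DE requires affordable compared with calling the full density-dependent flow model.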
DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation.
Sherfey, Jason S; Soplata, Austin E; Ardid, Salva; Roberts, Erik A; Stanley, David A; Pittman-Polletta, Benjamin R; Kopell, Nancy J
2018-01-01
DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacPhee, A. G., E-mail: macphee2@llnl.gov; Hatch, B. W.; Bell, P. M.
2016-11-15
We report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by at least a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle in cell simulations were performed to identify the regions in the streak camera that contribute the most to space charge blurring. These simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.
MacPhee, A G; Dymoke-Bradshaw, A K L; Hares, J D; Hassett, J; Hatch, B W; Meadowcroft, A L; Bell, P M; Bradley, D K; Datte, P S; Landen, O L; Palmer, N E; Piston, K W; Rekow, V V; Hilsabeck, T J; Kilkenny, J D
2016-11-01
We report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by at least a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle in cell simulations were performed to identify the regions in the streak camera that contribute the most to space charge blurring. These simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.
Irvine, M A; Reimer, L J; Njenga, S M; Gunawardena, S; Kelly-Hope, L; Bockarie, M; Hollingsworth, T D
2015-10-22
With ambitious targets to eliminate lymphatic filariasis over the coming years, there is a need to identify optimal strategies to achieve them in areas with different baseline prevalence and stages of control. Modelling can assist in identifying what data should be collected and what strategies are best for which scenarios. We develop a new individual-based, stochastic mathematical model of the transmission of lymphatic filariasis. We validate the model by fitting to a first time point and predicting future time points from surveillance data in Kenya and Sri Lanka, which have different vectors and different stages of the control programme. We then simulate different treatment scenarios in low, medium and high transmission settings, comparing once-yearly mass drug administration (MDA) with more frequent MDA and higher coverage. We investigate the potential impact that vector control, systematic non-compliance and different levels of aggregation have on the dynamics of transmission and control. In all settings, increasing coverage from 65 to 80% has a similar impact on control to treating twice a year at 65% coverage, while distributing fewer drug treatments. Vector control has a large impact, even at moderate levels. The extent of aggregation of parasite loads within a small portion of the population, which has been estimated to be highly variable in different settings, can undermine the success of a programme, particularly if high-risk sub-communities are not accessing interventions. Even moderate levels of vector control have a large impact both on the reduction in prevalence and on the maintenance of gains made during MDA, even when parasite loads are highly aggregated. For the same prevalence, differences in aggregation and adherence can result in very different dynamics. The novel analysis of a small amount of surveillance data and the resulting simulations highlight the need for more individual-level data to be analysed to effectively tailor programmes in the drive for elimination.
NASA Astrophysics Data System (ADS)
Mejia-Rodriguez, Daniel; Trickey, S. B.
2017-11-01
We explore the simplification of widely used meta-generalized-gradient approximation (mGGA) exchange-correlation functionals to the Laplacian level of refinement by use of approximate kinetic-energy density functionals (KEDFs). Such deorbitalization is motivated by the prospect of reducing computational cost while recovering a strictly Kohn-Sham local potential framework (rather than the usual generalized Kohn-Sham treatment of mGGAs). A KEDF that has been rather successful in solid simulations proves to be inadequate for deorbitalization, but we produce other forms which, with parametrization to Kohn-Sham results (not experimental data) on a small training set, yield rather good results on standard molecular test sets when used to deorbitalize the made-very-simple (MVS), Tao-Perdew-Staroverov-Scuseria (TPSS), and strongly constrained and appropriately normed (SCAN) functionals. We also study the difference between high-fidelity and best-performing deorbitalizations and discuss possible implications for use in ab initio molecular dynamics simulations of complicated condensed-phase systems.
Anatomically-Aided PET Reconstruction Using the Kernel Method
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-01-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
Anatomically-aided PET reconstruction using the kernel method.
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2016-09-21
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
Anatomically-aided PET reconstruction using the kernel method
NASA Astrophysics Data System (ADS)
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-09-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
Folding and stability of helical bundle proteins from coarse-grained models.
Kapoor, Abhijeet; Travesset, Alex
2013-07-01
We develop a coarse-grained model where solvent is considered implicitly, electrostatics are included as short-range interactions, and side-chains are coarse-grained to a single bead. The model depends on three main parameters: hydrophobic, electrostatic, and side-chain hydrogen bond strength. The parameters are determined by considering three levels of approximation and characterizing the folding of three selected proteins (training set). Nine additional proteins (containing up to 126 residues) as well as mutated versions (test set) are folded with the given parameters. In all folding simulations, the initial state is a random coil configuration. Besides the native state, some proteins fold into an additional state differing in topology (the structure of the helical bundle). We discuss the stability of the native states, compare the dynamics of our model to all-atom molecular dynamics simulations, and discuss some general properties of the interactions governing folding dynamics. Copyright © 2013 Wiley Periodicals, Inc.
Ganguly, Arnab; Alexeenko, Alina A; Schultz, Steven G; Kim, Sherry G
2013-10-01
A physics-based model for the sublimation-transport-condensation processes occurring in pharmaceutical freeze-drying by coupling product attributes and equipment capabilities into a unified simulation framework is presented. The system-level model is used to determine the effect of operating conditions such as shelf temperature, chamber pressure, and the load size on occurrence of choking for a production-scale dryer. Several data sets corresponding to production-scale runs with a load from 120 to 485 L have been compared with simulations. A subset of data is used for calibration, whereas another data set corresponding to a load of 150 L is used for model validation. The model predictions for both the onset and extent of choking as well as for the measured product temperature agree well with the production-scale measurements. Additionally, we study the effect of resistance to vapor transport presented by the duct with a valve and a baffle in the production-scale freeze-dryer. Computation Fluid Dynamics (CFD) techniques augmented with a system-level unsteady heat and mass transfer model allow to predict dynamic process conditions taking into consideration specific dryer design. CFD modeling of flow structure in the duct presented here for a production-scale freeze-dryer quantifies the benefit of reducing the obstruction to the flow through several design modifications. It is found that the use of a combined valve-baffle system can increase vapor flow rate by a factor of 2.2. Moreover, minor design changes such as moving the baffle downstream by about 10 cm can increase the flow rate by 54%. The proposed design changes can increase drying rates, improve efficiency, and reduce cycle times due to fewer obstructions in the vapor flow path. The comprehensive simulation framework combining the system-level model and the detailed CFD computations can provide a process analytical tool for more efficient and robust freeze-drying of bio-pharmaceuticals. Copyright © 2013 Elsevier B.V. All rights reserved.
Simulation of Ge Dopant Emission in Indirect-Drive ICF Implosion Experiments
NASA Astrophysics Data System (ADS)
Macfarlane, Joseph; Golovkin, I.; Regan, S.; Epstein, R.; Mancini, R.; Peterson, K.; Suter, L.
2012-10-01
We present results from simulations performed to study the radiative properties of dopants used in inertial confinement fusion indirect-drive capsule implosion experiments on NIF. In Rev5 NIF ignition capsules, a Ge dopant is added to an inner region of the CH ablator to absorb hohlraum x-ray preheat. Spectrally resolved emission from ablator dopants can be used to study the degree of mixing of ablator material into the ignition hot spot. Here, we study the atomic processes that affect the radiative characteristics of these elements using a set of simulation tools to first estimate the evolution of plasma conditions in the compressed target, and then to compute the atomic kinetics of the dopant and the resultant radiative emission. Using estimates of temperature and density profiles predicted by radiation-hydrodynamics simulations, we set up simple plasma grids where we allow dopant material to be embedded in the fuel, and perform multi-dimensional collisional-radiative simulations using SPECT3D to compute non-LTE atomic level populations and spectral signatures from the dopant. Recently improved Stark-broadened line shape modeling for Ge K-shell lines has been included. The goal is to study the radiative and atomic processes that affect the emergent spectra, including the effects of inner-shell photoabsorption and Kα reemission from the dopant, and to study the sensitivity of the emergent spectra to the dopant and the hot spot and ablator conditions.
Baudracco, J; Lopez-Villalobos, N; Holmes, C W; Comeron, E A; Macdonald, K A; Barry, T N
2013-05-01
A whole-farm, stochastic and dynamic simulation model was developed to predict biophysical and economic performance of grazing dairy systems. Several whole-farm models simulate grazing dairy systems, but most of them work at a herd level. This model, named e-Dairy, differs from the few models that work at an animal level, because it allows stochastic behaviour of the genetic merit of individual cows for several traits, namely, yields of milk, fat and protein, live weight (LW) and body condition score (BCS) within a whole-farm model. This model accounts for genetic differences between cows, is sensitive to genotype × environment interactions at an animal level and allows pasture growth and milk and supplement prices to behave stochastically. The model includes an energy-based animal module that predicts intake at grazing, mammary gland functioning and body lipid change. This whole-farm model simulates a 365-day period for individual cows within a herd, with cow parameters randomly generated on the basis of mean parameter values defined as inputs, and variances and co-variances derived from experimental data sets. The main inputs of e-Dairy are farm area, use of land, type of pasture, type of crops, monthly pasture growth rate, supplements offered, nutritional quality of feeds, herd description including herd size, age structure, calving pattern, BCS and LW at calving, probabilities of pregnancy, average genetic merit and economic values for items of income and costs. The model allows management policies to be set for drying off cows (ceasing lactation), target pre- and post-grazing herbage mass, and feed supplementation. The main outputs are herbage dry matter intake, annual pasture utilisation, milk yield, changes in BCS and LW, economic farm profit and return on assets. The model showed satisfactory accuracy of prediction when validated against two data sets from farmlet system experiments. Relative prediction errors were <10% for all variables, and concordance correlation coefficients over 0.80 for annual pasture utilisation, yields of milk and milk solids (MS; fat plus protein), and of 0.69 and 0.48 for LW and BCS, respectively. A simulation of two contrasting dairy systems is presented to show the practical use of the model. The model can be used to explore the effects of feeding level and genetic merit and their interactions for grazing dairy systems, evaluating the trade-offs between profit and the associated risk.
Validation of computer simulation training for esophagogastroduodenoscopy: Pilot study.
Sedlack, Robert E
2007-08-01
Little is known regarding the value of esophagogastroduodenoscopy (EGD) simulators in education. The purpose of the present paper was to validate the use of computer simulation in novice EGD training. In phase 1, expert endoscopists evaluated various aspects of simulation fidelity as compared to live endoscopy. Additionally, computer-recorded performance metrics were assessed by comparing the recorded scores from users of three different experience levels. In phase 2, the transfer of simulation-acquired skills to the clinical setting was assessed in a two-group, randomized pilot study. The setting was a large gastroenterology (GI) Fellowship training program; in phase 1, 21 subjects (seven each of expert, intermediate, and novice endoscopists) made up the three experience groups. In phase 2, eight novice GI fellows were involved in the two-group, randomized portion of the study examining the transfer of simulation skills to the clinical setting. During the initial validation phase, each of the 21 subjects completed two standardized EGD scenarios on a computer simulator and their performance scores were recorded for seven parameters. Following this, staff participants completed a questionnaire evaluating various aspects of the simulator's fidelity. Finally, four novice GI fellows were randomly assigned to receive 6 h of simulator-augmented training (SAT group) in EGD prior to beginning 1 month of patient-based EGD training. The remaining fellows experienced 1 month of patient-based training alone (PBT group). Results of the seven measured performance parameters were compared between three groups of varying experience using a Wilcoxon ranked sum test. The staff's simulator fidelity survey used a 7-point Likert scale (1, very unrealistic; 4, neutral; 7, very realistic) for each of the parameters examined. During the second phase of this study, supervising staff rated both SAT and PBT fellows' patient-based performance daily. Scoring in each skill was completed using a 7-point Likert scale (1, strongly disagree; 4, neutral; 7, strongly agree). Median scores were compared between groups using the Wilcoxon ranked sum test. Staff evaluations of fidelity found that only two of the parameters examined (anatomy and scope maneuverability) had a significant degree of realism. The remaining areas were felt to be limited in their fidelity. Of the computer-recorded performance scores, only the novice group could be reliably identified from the other two experience groups. In the clinical application phase, the median Patient Discomfort ratings were superior in the PBT group (6; interquartile range [IQR], 5-6) as compared to the SAT group (5; IQR, 4-6; P = 0.015). PBT fellows' ratings were also superior in Sedation, Patient Discomfort, Independence and Competence during various phases of the evaluation. At no point were SAT fellows rated higher than the PBT group in any of the parameters examined. This EGD simulator has limitations in its degree of fidelity and can differentiate only novice endoscopists from other levels of experience. Finally, skills learned during EGD simulation training do not appear to translate well into patient-based endoscopy skills. These findings argue against a key element of validity for the use of this computer simulator in novice EGD training.
NASA Astrophysics Data System (ADS)
Guo, Donglin; Wang, Huijun; Wang, Aihui
2017-11-01
Numerical simulation is of great importance to the investigation of changes in frozen ground on large spatial and long temporal scales. Previous studies have focused on the impacts of improvements in the model for the simulation of frozen ground. Here the sensitivities of permafrost simulation to different atmospheric forcing data sets are examined using the Community Land Model, version 4.5 (CLM4.5), in combination with three sets of newly developed and reanalysis-based atmospheric forcing data sets (NOAA Climate Forecast System Reanalysis (CFSR), European Centre for Medium-Range Weather Forecasts Re-Analysis Interim (ERA-I), and NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA)). All three simulations were run from 1979 to 2009 at a resolution of 0.5° × 0.5° and validated against what are considered to be the best available permafrost observations (soil temperature, active layer thickness, and permafrost extent). Results show that the reanalysis-based atmospheric forcing data sets reproduce the variations in soil temperature and active layer thickness but produce evident biases in their climatologies. Overall, the simulations based on the CFSR and ERA-I data sets give more reasonable results than the simulation based on the MERRA data set, particularly for the present-day permafrost extent and the change in active layer thickness. The three simulations produce ranges for the present-day climatology (permafrost area: 11.31-13.57 × 10^6 km^2; active layer thickness: 1.10-1.26 m) and for recent changes (permafrost area: -5.8% to -9.0%; active layer thickness: 9.9%-20.2%). The differences in air temperature increase, snow depth, and permafrost thermal conditions in these simulations contribute to the differences in simulated results.
Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H
2017-02-01
We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.
Impact of Sea Level Rise on Storm Surge and Inundation in the Northern Gulf of Mexico
NASA Astrophysics Data System (ADS)
Veeramony, J.
2016-12-01
Assessing the impact of climate change on surge and inundation due to tropical cyclones is important for coastal adaptation as well as mitigation efforts. Changes in global climate increase vulnerability of coastal environments to the threat posed by severe storms in a number of ways. Both the intensity of future storms and the return periods of more severe storms are expected to increase significantly. Increasing mean sea levels lead to more areas being inundated due to storm surge and bring the threat of inundation further inland. Rainfall associated with severe storms is also expected to increase substantially, which will add to the intensity of inland flooding and coastal inundation. In this study, we will examine the effects of sea level rise and increasing rainfall intensity using Hurricane Ike as the baseline. The Delft3D modeling system will be set up in nested mode, with the outermost nest covering the Gulf of Mexico. The system will be run in a coupled mode, modeling both waves and the hydrodynamics. The baseline simulation will use atmospheric forcing consisting of the NOAA H*Wind (Powell et al. 1998) for the core hurricane characteristics blended with reanalyzed background winds to create a smooth wind field. The rainfall estimates are obtained from TRMM. From this baseline, a set of simulations will be performed to show the impact of sea level rise and increased rainfall activity on flooding and inundation along the Texas-Louisiana coast.
Toward Robust Estimation of the Components of Forest Population Change
Francis A. Roesch
2014-01-01
Multiple levels of simulation are used to test the robustness of estimators of the components of change. I first created a variety of spatial-temporal populations based on, but more variable than, an actual forest monitoring data set and then sampled those populations under a variety of sampling error structures. The performance of each of four estimation approaches is...
APEX (Air Pollution Exercise) Volume 6: Industrialist's Manual No. 1, Shear Power Company.
ERIC Educational Resources Information Center
Environmental Protection Agency, Research Triangle Park, NC. Office of Manpower Development.
The Industrialist's Manual No. 1, Shear Power Company is part of a set of 21 manuals (AA 001 009-001 029) used in APEX (Air Pollution Exercise), a computerized college and professional level "real world" game simulation of a community with urban and rural problems, industrial activities, and air pollution difficulties. The first two sections,…
APEX (Air Pollution Exercise) Volume 10: Industrialist's Manual No. 6, Dusty Rhodes' Cement Company.
ERIC Educational Resources Information Center
Environmental Protection Agency, Research Triangle Park, NC. Office of Manpower Development.
The Industrialist's Manual No. 6, Dusty Rhodes' Cement Company is part of a set of 21 manuals (AA 001 009-001 029) used in APEX (Air Pollution Exercise), a computerized college and professional level "real world" game simulation of a community with urban and rural problems, industrial activities, and air pollution difficulties. The first two…
ERIC Educational Resources Information Center
Hii, King Kuok; Rzepa, Henry S.; Smith, Edward H.
2015-01-01
The coupling of a student experiment involving the preparation and use of a catalyst for the asymmetric epoxidation of an alkene with computational simulations of various properties of the resulting epoxide is set out in the form of a software toolbox from which students select appropriate components. At the core of these are the computational…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forsline, P.L.; Musselman, R.C.; Kender, W.J.
Mature 'McIntosh', 'Empire', and 'Golden Delicious' apple trees (Malus domestica Borkh.) were sprayed with simulated acid rain solutions in the pH range of 2.5 to 5.5 at full bloom in 1980 and in 1981. In 1981, weekly sprays were applied at pH 2.75 and pH 3.25. Necrotic lesions developed on apple petals at pH 2.5 with slight injury appearing at pH 3.0 and pH 3.5. Apple foliage had no acid rain lesions at any of the pH levels tested. Pollen germination was reduced at pH 2.5 in 'Empire'. Slight fruit set reduction at pH 2.5 was observed in 'McIntosh'. The incidence of russetting on 'Golden Delicious' fruits was ameliorated by the presence of rain-exclusion chambers but was not affected by acid rain. With season-long sprays at pH 2.75, there was a slight delay in maturity and lower weight of 'McIntosh' apples. Even at the lowest pH levels no detrimental effects of simulated acid rain were found on apple tree productivity and fruit quality when measured as fruit set, seed number per fruit, and fruit size and appearance.
Analytical modelling of temperature effects on an AMPA-type synapse.
Kufel, Dominik S; Wojcik, Grzegorz M
2018-05-11
It was previously reported that temperature may significantly influence neural dynamics on different levels of brain function. Thus, in computational neuroscience, it would be useful to make models scalable across a wide range of brain temperatures. However, a lack of experimental data and the absence of temperature-dependent analytical models of synaptic conductance have prevented temperature effects from being included at the multi-neuron modeling level. In this paper, we propose a first step to deal with this problem: a new analytical model of AMPA-type synaptic conductance, which is able to incorporate temperature effects in low-frequency stimulations. It was constructed from a Markov-model description of AMPA receptor kinetics using a set of coupled ODEs. The closed-form solution for the set of differential equations was found using an uncoupling assumption (introduced in the paper) with a few simplifications motivated both by experimental data and by Monte Carlo simulation of synaptic transmission. The model may be used for computationally efficient and biologically accurate implementation of temperature effects on AMPA receptor conductance in large-scale neural network simulations. As a result, it may open a wide range of new possibilities for researching the influence of temperature on certain aspects of brain functioning.
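As a rough illustration of the kind of kinetic description involved (this is not the authors' model; the two-state scheme, rate constants, and Q10 scaling below are assumptions made only for the sketch), a minimal Python example:

```python
# Minimal sketch (not the published model): a two-state AMPA receptor kinetic
# scheme C <-> O driven by a brief transmitter pulse, with hypothetical Q10
# scaling of the rate constants to mimic a temperature dependence.
import numpy as np
from scipy.integrate import solve_ivp

def rates(temp_c, q10=2.0, ref_temp_c=25.0):
    """Scale hypothetical opening/closing rates by a Q10 factor."""
    scale = q10 ** ((temp_c - ref_temp_c) / 10.0)
    k_open, k_close = 5.0e3 * scale, 1.0e3 * scale   # 1/s, illustrative only
    return k_open, k_close

def glutamate_pulse(t, t_on=1e-3, t_off=2e-3):
    return 1.0 if t_on <= t <= t_off else 0.0        # normalized concentration

def dstate_dt(t, y, k_open, k_close):
    c, o = y                                         # closed and open fractions
    d_open = k_open * glutamate_pulse(t) * c - k_close * o
    return [-d_open, d_open]

k_open, k_close = rates(temp_c=35.0)
sol = solve_ivp(dstate_dt, [0.0, 0.02], [1.0, 0.0],
                args=(k_open, k_close), max_step=1e-5)
g_max = 1.0e-9                                       # peak conductance (S), assumed
print("peak open fraction:", sol.y[1].max(),
      "-> peak conductance (S):", g_max * sol.y[1].max())
```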
Level set immersed boundary method for gas-liquid-solid interactions with phase-change
NASA Astrophysics Data System (ADS)
Dhruv, Akash; Balaras, Elias; Riaz, Amir; Kim, Jungho
2017-11-01
We will discuss an approach to simulate the interaction between two-phase flows with phase changes and stationary/moving structures. In our formulation, the Navier-Stokes and heat advection-diffusion equations are solved on a block-structured grid using adaptive mesh refinement (AMR) along with sharp jumps in pressure, velocity and temperature across the interface separating the different phases. The jumps are implemented using a modified Ghost Fluid Method (Lee et al., J. Comput. Physics, 344:381-418, 2017), and the interface is tracked with a level set approach. Phase transition is achieved by calculating mass flux near the interface and extrapolating it to the rest of the domain using a Hamilton-Jacobi equation. Stationary/moving structures are simulated with an immersed boundary formulation based on moving least squares (Vanella & Balaras, J. Comput. Physics, 228:6617-6628, 2009). A variety of canonical problems involving vaporization, film boiling and nucleate boiling is presented to validate the method and demonstrate its formal accuracy. The robustness of the solver in complex problems, which are crucial in efficient design of heat transfer mechanisms for various applications, will also be demonstrated. Work supported by NASA, Grant NNX16AQ77G.
Applying operations research to optimize a novel population management system for cancer screening.
Zai, Adrian H; Kim, Seokjin; Kamis, Arnold; Hung, Ken; Ronquillo, Jeremiah G; Chueh, Henry C; Atlas, Steven J
2014-02-01
To optimize a new visit-independent, population-based cancer screening system (TopCare) by using operations research techniques to simulate changes in patient outreach staffing levels (delegates, navigators), modifications to user workflow within the information technology (IT) system, and changes in cancer screening recommendations. TopCare was modeled as a multiserver, multiphase queueing system. Simulation experiments implemented the queueing network model following a next-event time-advance mechanism, in which systematic adjustments were made to staffing levels, IT workflow settings, and cancer screening frequency in order to assess their impact on overdue screenings per patient. TopCare reduced the average number of overdue screenings per patient from 1.17 at inception to 0.86 during simulation to 0.23 at steady state. Increases in the workforce improved the effectiveness of TopCare. In particular, increasing the delegate or navigator staff level by one person improved screening completion rates by 1.3% or 12.2%, respectively. In contrast, changes in the amount of time a patient entry stays on delegate and navigator lists had little impact on overdue screenings. Finally, lengthening the screening interval increased efficiency within TopCare by decreasing overdue screenings at the patient level, resulting in a smaller number of overdue patients needing delegates for screening and a higher fraction of screenings completed by delegates. Simulating the impact of changes in staffing, system parameters, and clinical inputs on the effectiveness and efficiency of care can inform the allocation of limited resources in population management.
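The abstract does not give TopCare's queueing parameters, so the following is only a minimal sketch of the next-event time-advance mechanism it describes, using a single M/M/c stage with hypothetical arrival and service rates standing in for one phase of the outreach workflow:

```python
# Minimal next-event time-advance sketch (not the TopCare model): one M/M/c
# stage with assumed rates; reports the time-averaged number of items waiting.
import heapq, random

def simulate(arrival_rate=1.0, service_rate=0.4, servers=3, horizon=10_000.0, seed=1):
    random.seed(seed)
    events = [(random.expovariate(arrival_rate), "arrival")]
    busy, queue_len, t_prev, area = 0, 0, 0.0, 0.0
    while events:
        t, kind = heapq.heappop(events)              # advance clock to next event
        if t > horizon:
            break
        area += queue_len * (t - t_prev)             # accumulate time-average
        t_prev = t
        if kind == "arrival":
            heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
            if busy < servers:
                busy += 1
                heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
            else:
                queue_len += 1
        else:                                        # departure
            if queue_len > 0:
                queue_len -= 1
                heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
            else:
                busy -= 1
    return area / t_prev

print("mean queue length:", round(simulate(), 2))
```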
March, Christopher A; Steiger, David; Scholl, Gretchen; Mohan, Vishnu; Hersh, William R; Gold, Jeffrey A
2013-01-01
Objective To establish the role of high-fidelity simulation training to test the efficacy and safety of the electronic health record (EHR)–user interface within the intensive care unit (ICU) environment. Design Prospective pilot study. Setting Medical ICU in an academic medical centre. Participants Postgraduate medical trainees. Interventions A 5-day simulated ICU patient was developed in the EHR, including labs, hourly vitals, medication administration, ventilator settings, nursing and notes. Fourteen medical issues requiring recognition and subsequent changes in management were included. Issues were chosen based on their frequency of occurrence within the ICU and their ability to test different aspects of the EHR–user interface. ICU residents, blinded to the presence of medical errors within the case, were provided a sign-out and given 10 min to review the case in the EHR. They then presented the case with their management suggestions to an attending physician. Participants were graded on the number of issues identified. All participants were provided with immediate feedback upon completion of the simulation. Primary and secondary outcomes To determine the frequency of error recognition in an EHR simulation. To determine factors associated with improved performance in the simulation. Results 38 participants including 9 interns, 10 residents and 19 fellows were tested. The average error recognition rate was 41% (range 6–73%), which increased slightly with the level of training (35%, 41% and 50% for interns, residents, and fellows, respectively). Over-sedation was the least-recognised error (16%); poor glycemic control was most often recognised (68%). Only 32% of the participants recognised inappropriate antibiotic dosing. Performance correlated with the total number of screens used (p=0.03). Conclusions Despite development of comprehensive EHRs, there remain significant gaps in identifying dangerous medical management issues. This gap remains despite high levels of medical training, suggesting that EHR-specific training may be beneficial. Simulation provides a novel tool both to identify these gaps and to foster EHR-specific training. PMID:23578685
Lianou, Alexandra; Geornaras, Ifigenia; Kendall, Patricia A; Scanga, John A; Sofos, John N
2007-08-01
Uncured turkey breast, commercially available with or without a mixture of potassium lactate and sodium diacetate, was sliced, inoculated with a 10-strain composite of Listeria monocytogenes, vacuum-packaged, and stored at 4 degrees C, to simulate contamination after a lethal processing step at the plant. At 5, 15, 25 and 50 days of storage, packages were opened, slices were tested, and bags with remaining slices were reclosed with rubber bands; this simulated home use of plant-sliced and -packaged product. At the same above time intervals, portions of original product (stored at 4 degrees C in original processing bags) were sliced and inoculated as above, and packaged in delicatessen bags, simulating contamination during slicing/handling at retail or home. Both sets of bags were stored aerobically at 7 degrees C for 12 days to simulate home storage. L. monocytogenes populations were lower (P<0.05) during storage in turkey breast containing a combination of lactate and diacetate compared to product without antimicrobials under both contamination scenarios. Due to prolific growth of the pathogen under the plant-contamination scenario in product without lactate-diacetate during vacuum-packaged storage (4 degrees C), populations at 3 days of aerobic storage (7 degrees C) of such product ranged from 4.6 to 7.4 log cfu/cm(2). Under the retail/home-contamination scenario, mean growth rates (log cfu/cm(2)/day) of the organism during aerobic storage ranged from 0.14 to 0.16, and from 0.25 to 0.51, in product with and without lactate-diacetate, respectively; growth rates in turkey breast without antimicrobials decreased (P<0.05) with age of the product. Overall, product without antimicrobials inoculated to simulate plant-contamination and product with lactate-diacetate inoculated to simulate retail/home-contamination were associated with the highest and lowest pathogen levels during aerobic storage at 7 degrees C, respectively. However, 5- and 15-day-old turkey breast without lactate-diacetate stored aerobically for 12 days resulted in similar pathogen levels (7.3-7.7 log cfu/cm(2)), irrespective of contamination scenario.
NASA Astrophysics Data System (ADS)
Liu, Yan; Fan, Xi; Chen, Houpeng; Wang, Yueqing; Liu, Bo; Song, Zhitang; Feng, Songlin
2017-08-01
Multilevel data storage for phase-change memory (PCM) has attracted growing attention in the memory market as a way to implement high-capacity memory systems and reduce cost per bit. In this work, we present a universal programming method based on a SET staircase current pulse in PCM cells, which exploits an optimized programming scheme to achieve 2-bit/4-state resistance levels with equal logarithmic spacing. The SET staircase waveform can be optimized by real-time TCAD simulation to realize multilevel data storage efficiently in an arbitrary phase-change material. Experimental results from a 1 kbit PCM test chip have validated the proposed multilevel programming scheme, which improves information storage density, resistance-level robustness, and energy efficiency while avoiding additional process complexity.
Shuttle program. MCC Level C formulation requirements: Entry guidance and entry autopilot
NASA Technical Reports Server (NTRS)
Harpold, J. C.; Hill, O.
1980-01-01
A set of preliminary entry guidance and autopilot software formulations is presented for use in the Mission Control Center (MCC) entry processor. These software formulations meet all level B requirements. Revision 2 incorporates the modifications required to functionally simulate optimal TAEM targeting capability (OTT). Implementation of this logic in the MCC must be coordinated with flight software OTT implementation and MCC TAEM guidance OTT. The entry guidance logic is based on the Orbiter avionics entry guidance software. This MCC requirements document contains a definition of coordinate systems, a list of parameter definitions for the software formulations, a description of the entry guidance detailed formulation requirements, a description of the detailed autopilot formulation requirements, a description of the targeting routine, and a set of formulation flow charts.
Wear simulation of total knee prostheses using load and kinematics waveforms from stair climbing.
Abdel-Jaber, Sami; Belvedere, Claudio; Leardini, Alberto; Affatato, Saverio
2015-11-05
Knee wear simulators are meant to perform load cycles on knee implants under physiological conditions, matching exactly, if possible, those experienced at the replaced joint during daily living activities. Unfortunately, only low-demand level-walking conditions, specified in ISO-14243, are conventionally used during such tests. A recent study has provided a consistent knee kinematic and load data-set measured during stair climbing in patients implanted with a specific modern total knee prosthesis design. In the present study, wear simulation tests were performed for the first time using this data-set on the same prosthesis design. It was hypothesised that more demanding tasks would result in wear rates that differ from those observed in retrievals. Four prostheses for total knee arthroplasty were tested using a displacement-controlled knee wear simulator for two million cycles at 1.1 Hz, under kinematics and load conditions typical of stair climbing. After simulation, the corresponding damage scars on the bearings were qualified and compared with equivalent explanted prostheses. An average mass loss of 20.2±1.5 mg was found. Scanning digital microscopy revealed similar features, though the explant had a greater variety of damage modes, including a high prevalence of adhesive wear damage and burnishing in the overall articulating surface. This study confirmed that the results from wear simulation machines are strongly affected by kinematics and loads applied during simulations. Based on the present results, a more comprehensive series of conditions is necessary for equivalent in vitro simulations and for a full understanding of the current clinical failure of knee implants. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hazardous Convective Weather in the Central United States: Present and Future
NASA Astrophysics Data System (ADS)
Liu, C.; Ikeda, K.; Rasmussen, R.
2017-12-01
Two sets of 13-year continental-scale convection-permitting simulations were performed using the 4-km-resolution WRF model. They consist of a retrospective simulation, which downscales the ERA-Interim reanalysis during the period October 2000 - September 2013, and a future climate sensitivity simulation for the same period based on the perturbed reanalysis-derived boundary conditions with the CMIP5 ensemble-mean high-end emission scenario climate change. The evaluation of the retrospective simulation indicates that the model is able to realistically reproduce the main characteristics of deep precipitating convection observed in the current climate such as the spectra of convective population and propagating mesoscale convective systems (MCSs). It is also shown that severe convection and associated MCSs will increase in frequency and intensity, implying a potential increase in high impact convective weather in a future warmer climate. In this study, the warm-season hazardous convective weather (i.e., tornadoes, hail, and damaging gusty winds) in the central United States is examined using these 4-km downscaling simulations. First, a model-based proxy for hazardous convective weather is derived on the basis of a set of characteristic meteorological variables such as the model composite radar reflectivity, updraft helicity, vertical wind shear, and low-level wind. Second, the developed proxy is applied to the retrospective simulation to estimate model hazardous weather events during the historical period. Third, the simulated hazardous weather statistics are evaluated against the NOAA severe weather reports. Lastly, the proxy is applied to the future climate simulation for the projected change of hazardous convective weather in response to global warming. Preliminary results will be reported at the 2017 AGU session "High Resolution Climate Modeling".
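The thresholds used in the model-based proxy are not given in the abstract; the sketch below only illustrates the general idea of flagging grid cells where several simulated fields jointly exceed assumed cutoffs, with all values hypothetical:

```python
# Illustrative proxy only: the cutoff values below are assumptions, not the
# study's criteria, and the input fields are random stand-ins for WRF output.
import numpy as np

def hazard_proxy(refl_dbz, updraft_helicity, shear_0_6km, low_level_wind,
                 refl_min=50.0, uh_min=75.0, shear_min=20.0, wind_min=25.0):
    """Flag grid cells whose simulated fields jointly exceed assumed thresholds."""
    return ((refl_dbz >= refl_min) &
            (updraft_helicity >= uh_min) &
            (shear_0_6km >= shear_min) &
            (low_level_wind >= wind_min))

rng = np.random.default_rng(0)
shape = (200, 200)                                   # toy 4-km grid
mask = hazard_proxy(rng.uniform(0, 70, shape), rng.uniform(0, 150, shape),
                    rng.uniform(0, 40, shape), rng.uniform(0, 40, shape))
print("flagged cells:", int(mask.sum()))
```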
Monte Carlo Simulations for VLBI2010
NASA Astrophysics Data System (ADS)
Wresnik, J.; Böhm, J.; Schuh, H.
2007-07-01
Monte Carlo simulations are carried out at the Institute of Geodesy and Geophysics (IGG), Vienna, and at Goddard Space Flight Center (GSFC), Greenbelt (USA), with the goal to design a new geodetic Very Long Baseline Interferometry (VLBI) system. Influences of the schedule, the network geometry and the main stochastic processes on the geodetic results are investigated. Therefore schedules are prepared with the software package SKED (Vandenberg 1999), and different strategies are applied to produce temporally very dense schedules which are compared in terms of baseline length repeatabilities. For the simulation of VLBI observations a Monte Carlo Simulator was set up which creates artificial observations by randomly simulating wet zenith delay and clock values as well as additive white noise representing the antenna errors. For the simulation at IGG the VLBI analysis software OCCAM (Titov et al. 2004) was adapted. Random walk processes with power spectrum densities of 0.7 and 0.1 psec2/sec are used for the simulation of wet zenith delays. The clocks are simulated with Allan Standard Deviations of 1*10^-14 @ 50 min and 2*10^-15 @ 15 min and three levels of white noise, 4 psec, 8 psec and, 16 psec, are added to the artificial observations. The variations of the power spectrum densities of the clocks and wet zenith delays, and the application of different white noise levels show clearly that the wet delay is the critical factor for the improvement of the geodetic VLBI system. At GSFC the software CalcSolve is used for the VLBI analysis, therefore a comparison between the software packages OCCAM and CalcSolve was done with simulated data. For further simulations the wet zenith delay was modeled by a turbulence model. This data was provided by Nilsson T. and was added to the simulation work. Different schedules have been run.
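A much-simplified sketch of how such artificial observations can be generated is given below: a random-walk wet zenith delay with a given power spectral density plus white noise representing antenna errors. The clock simulation and the analysis steps are omitted, and the default values are illustrative only:

```python
# Sketch of an artificial-observation generator: wet zenith delay as a random
# walk with a prescribed power spectral density, plus additive white noise.
import numpy as np

def simulate_delays(n_obs=1000, dt=30.0, psd_wet=0.7, white_noise_ps=8.0, seed=0):
    """psd_wet in psec^2/sec, dt in seconds, white noise sigma in psec."""
    rng = np.random.default_rng(seed)
    step_sigma = np.sqrt(psd_wet * dt)               # random-walk step size, psec
    wet_delay = np.cumsum(rng.normal(0.0, step_sigma, n_obs))
    noise = rng.normal(0.0, white_noise_ps, n_obs)
    return wet_delay + noise                         # total simulated delay, psec

obs = simulate_delays()
print("simulated delay range (psec):", obs.min().round(1), "to", obs.max().round(1))
```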
Upscaling of spectroradiometer data for stress detection in orchards with remote sensing
NASA Astrophysics Data System (ADS)
Kempeneers, Pieter; De Backer, Steve; Delalieux, Stephanie; Sterckx, Sindy; Debruyn, Walter; Coppin, Pol; Scheunders, Paul
2004-10-01
This paper studies the detection of vegetation stress in orchards via remote sensing. During previous research, it was shown that stress can be detected reliably from hyperspectral reflectances of fresh leaves, using a generic wavelet-based hyperspectral classification. In this work, we demonstrate the capability to detect stress from airborne/spaceborne hyperspectral sensors by upscaling the leaf reflectances to top of atmosphere (TOA) radiances. Several data sets are generated, measuring the foliar reflectance with a portable field spectroradiometer, covering different time periods, fruit variants and stress types. We concentrated on the Jonagold and Golden Delicious apple trees, induced with mildew and nitrogen deficiency. First, a directional homogeneous canopy reflectance model (ACRM) is applied to these data sets for simulating top of canopy (TOC) spectra. Then, the TOC level is further upscaled to TOA, using the atmospheric radiative transfer model MODTRAN4. To simulate hyperspectral imagery acquired with real airborne/spaceborne sensors, the spectrum is further filtered and subsampled to the available resolution. Using these simulated upscaled TOC and TOA spectra in classification, we will demonstrate that differentiation is still possible between stressed and non-stressed trees. Furthermore, results show it is possible to train a classifier with simulated TOA data, to make a classification of real hyperspectral imagery over the orchard.
LArSoft: toolkit for simulation, reconstruction and analysis of liquid argon TPC neutrino detectors
NASA Astrophysics Data System (ADS)
Snider, E. L.; Petrillo, G.
2017-10-01
LArSoft is a set of detector-independent software tools for the simulation, reconstruction and analysis of data from liquid argon (LAr) neutrino experiments. The common features of LAr time projection chambers (TPCs) enable sharing of algorithm code across detectors of very different size and configuration. LArSoft is currently used in production simulation and reconstruction by the ArgoNeuT, DUNE, LArlAT, MicroBooNE, and SBND experiments. The software suite offers a wide selection of algorithms and utilities, including those for associated photo-detectors and the handling of auxiliary detectors outside the TPCs. Available algorithms cover the full range of simulation and reconstruction, from raw waveforms to high-level reconstructed objects, event topologies and classification. The common code within LArSoft is contributed by adopting experiments, which also provide detector-specific geometry descriptions, and code for the treatment of electronic signals. LArSoft is also a collaboration of experiments, Fermilab and associated software projects which cooperate in setting requirements, priorities, and schedules. In this talk, we outline the general architecture of the software and the interaction with external libraries and detector-specific code. We also describe the dynamics of LArSoft software development between the contributing experiments, the projects supporting the software infrastructure LArSoft relies on, and the core LArSoft support project.
Folding free-energy landscape of villin headpiece subdomain from molecular dynamics simulations.
Lei, Hongxing; Wu, Chun; Liu, Haiguang; Duan, Yong
2007-03-20
High-accuracy ab initio folding has remained an elusive objective despite decades of effort. To explore the folding landscape of villin headpiece subdomain HP35, we conducted two sets of replica exchange molecular dynamics for 200 ns each and three sets of conventional microsecond-long molecular dynamics simulations, using the AMBER FF03 force field and a generalized-Born solvation model. The protein folded consistently to the native state; the lowest C(alpha)-rmsd from the x-ray structure was 0.46 A, and the C(alpha)-rmsd of the center of the most populated cluster was 1.78 A at 300 K. Ab initio simulations have not previously reached this level. The folding landscape of HP35 can be partitioned into the native, denatured, and two intermediate-state regions. The native state is separated from the major folding intermediate state by a small barrier, whereas a large barrier exists between the major folding intermediate and the denatured states. The melting temperature T(m) = 339 K extracted from the heat-capacity profile was in close agreement with the experimentally derived T(m) = 342 K. A comprehensive picture of the kinetics and thermodynamics of HP35 folding emerges when the results from replica exchange and conventional molecular dynamics simulations are combined.
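For readers unfamiliar with replica exchange, the generic Metropolis swap criterion it relies on can be sketched as follows (the temperature ladder and energies are toy values, and this is not the AMBER implementation):

```python
# Minimal sketch of a replica-exchange swap step: neighbouring temperature
# replicas exchange configurations with probability min(1, exp(dBeta * dE)).
import math, random

K_B = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def attempt_swap(energy_i, energy_j, temp_i, temp_j, rng=random):
    beta_i, beta_j = 1.0 / (K_B * temp_i), 1.0 / (K_B * temp_j)
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0 or rng.random() < math.exp(delta)

random.seed(2)
temps = [300, 311, 323, 335, 348]                          # K, illustrative ladder
energies = [-1250.0, -1238.0, -1221.0, -1210.0, -1195.0]   # kcal/mol, toy values
for i in range(len(temps) - 1):
    print(f"swap {temps[i]}K <-> {temps[i+1]}K accepted:",
          attempt_swap(energies[i], energies[i + 1], temps[i], temps[i + 1]))
```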
Liaw, Sok Ying; Wong, Lai Fun; Lim, Eunice Ya Ping; Ang, Sophia Bee Leng; Mujumdar, Sandhya; Ho, Jasmine Tze Yin; Mordiffi, Siti Zubaidah; Ang, Emily Neo Kim
2016-02-19
Nurses play an important role in detecting patients with clinical deterioration. However, the problem of nurses failing to trigger deteriorating ward patients still persists despite the implementation of a patient safety initiative, the Rapid Response System. A Web-based simulation was developed to enhance nurses' role in recognizing and responding to deteriorating patients. While studies have evaluated the effectiveness of the Web-based simulation on nurses' clinical performance in a simulated environment, no study has examined its impact on nurses' actual practice in the clinical setting. The objective of this study was to evaluate the impact of Web-based simulation on nurses' recognition of and response to deteriorating patients in clinical settings. The outcomes were measured across all levels of Kirkpatrick's 4-level evaluation model with clinical outcome on triggering rates of deteriorating patients as the primary outcome measure. A before-and-after study was conducted on two general wards at an acute care tertiary hospital over a 14-month period. All nurses from the two study wards who undertook the Web-based simulation as part of their continuing nursing education were invited to complete questionnaires at various time points to measure their motivational reaction, knowledge, and perceived transfer of learning. Clinical records on cases triggered by ward nurses from the two study wards were evaluated for frequency and types of triggers over a period of 6 months pre- and 6 months postintervention. The number of deteriorating patients triggered by ward nurses in a medical general ward increased significantly (P<.001) from pre- (84/937, 8.96%) to postintervention (91/624, 14.58%). The nurses reported positively on the transfer of learning (mean 3.89, SD 0.49) from the Web-based simulation to clinical practice. A significant increase (P<.001) on knowledge posttest score from pretest score was also reported. The nurses also perceived positively their motivation (mean 3.78, SD 0.56) to engage in the Web-based simulation. This study provides evidence on the effectiveness of Web-based simulation in improving nursing practice when recognizing and responding to deteriorating patients. This educational tool could be implemented by nurse educators worldwide to address the educational needs of a large group of hospital nurses responsible for patients in clinical deterioration.
Processes governing the temperature structure of the tropical tropopause layer (Invited)
NASA Astrophysics Data System (ADS)
Birner, T.
2013-12-01
The tropical tropopause layer (TTL) is among the most important but least understood regions of the global climate system. The TTL sets the boundary condition for atmospheric tracers entering the stratosphere. Specifically, TTL temperatures control stratospheric water vapor concentrations, which play a key role in the radiative budget of the entire stratosphere with implications for tropospheric and surface climate. The TTL shows a curious stratification structure: temperature continues to decrease beyond the level of main convective outflow (~200 hPa) up to the cold point tropopause (~100 hPa), but TTL lapse rates are smaller than in the upper troposphere. A cold point tropopause well separated from the level of main convective outflow requires TTL cooling, which may be the result of: 1) the detailed radiative balance in the TTL, 2) large-scale upwelling (forced by extratropical or tropical waves), 3) the large-scale hydrostatic response aloft of deep convective heating, 4) overshooting convection, 5) breaking gravity waves. All of these processes may act in isolation or combine to produce the observed TTL temperature structure. Here, a critical discussion of these processes / mechanisms and their role in lifting the cold point tropopause above the level of main convective outflow is presented. Results are based on idealized radiative-convective equilibrium model simulations, contrasting single-column with cloud-resolving simulations, as well as on simulations with chemistry-climate models and on reanalysis data. While all of the above processes are capable of producing a TTL-like region in isolation, their combination is found to produce important feedbacks. In particular, both water vapor and ozone are found to have strong radiative effects on TTL temperatures, highlighting important feedbacks in which transport circulations set temperatures and tracer structures, and the resulting tracer structures in turn affect temperatures.
Application of the aeroacoustic analogy to a shrouded, subsonic, radial fan
NASA Astrophysics Data System (ADS)
Buccieri, Bryan M.; Richards, Christopher M.
2016-12-01
A study was conducted to investigate the predictive capability of computational aeroacoustics with respect to a shrouded, subsonic, radial fan. A three dimensional unsteady fluid dynamics simulation was conducted to produce aerodynamic data used as the acoustic source for an aeroacoustics simulation. Two acoustic models were developed: one modeling the forces on the rotating fan blades as a set of rotating dipoles located at the center of mass of each fan blade and one modeling the forces on the stationary fan shroud as a field of distributed stationary dipoles. Predicted acoustic response was compared to experimental data measured at two operating speeds using three different outlet restrictions. The blade source model predicted overall far field sound power levels within 5 dB averaged over the six different operating conditions while the shroud model predicted overall far field sound power levels within 7 dB averaged over the same conditions. Doubling the density of the computational fluids mesh and using a scale adaptive simulation turbulence model increased broadband noise accuracy. However, computation time doubled and the accuracy of the overall sound power level prediction improved by only 1 dB.
Simulation of Columbia River Floods in the Hanford Reach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waichler, Scott R.; Serkowski, John A.; Perkins, William A.
Columbia River water elevations and flows in the Hanford Reach affect the environment and facilities along the shoreline, including movement of contaminants in groundwater, fish habitat, and infrastructure subject to flooding. This report describes the hydraulic simulation of hypothetical flood flows using the best available topographic and bathymetric data for the Hanford Reach and the Modular Aquatic Simulation System in 1 Dimension (MASS1) hydrodynamic model. The MASS1 model of the Hanford Reach was previously calibrated to field measurements of water surface elevations. The current model setup can be used for other studies of flow, water levels, and temperature in the Reach. The existing MASS1 channel geometry and roughness and other model configuration inputs for the Hanford Reach were used for this study, and previous calibration and validation results for the model are reprinted here for reference. The flood flows for this study were simulated by setting constant flow rates obtained from the U.S. Army Corps of Engineers (USACE) for the Columbia, Snake, and Yakima Rivers, and a constant water level at McNary Dam, and then running the model to steady state. The discharge levels simulated were all low-probability events; for example, a 100-year flood is one that would occur on average every 100 years, or put another way, in any given year there is a 1% chance that a discharge of that level or higher will occur. The simulated floods and their corresponding Columbia River discharges were 100-year (445,000 cfs), 500-year (520,000 cfs), and the USACE-defined Standard Project Flood (960,000 cfs). The resulting water levels from the steady-state floods can be viewed as “worst case” outcomes for the respective discharge levels. The MASS1 output for water surface elevations was converted to the North American Vertical Datum of 1988 and projected across the channel and land surface to enable mapping of the floodplain for each scenario. Floodplain maps show that for the 100-year and 500-year discharge levels, flooding is mainly confined to the topographic trench that is the river channel. The flooded area for the Standard Project Flood extends out of the channel area in some places, particularly in the 100-F Area. All of the output from the simulations has been archived and is available for future investigations in the Hanford Reach.
Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S
2015-01-01
Parameter dependency within data sets in simulation studies is common, especially in models such as Continuous-Time Markov Chains (CTMC). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: 1) to develop a multivariate approach for assessing accuracy and precision for simulation studies 2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance including bias, component-wise coverage probabilities, and joint coverage probabilities are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance and choice of inference should properly reflect the purpose of the simulation.
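A minimal sketch of how longitudinal trinomial data can be generated from a general 3-state CTMC is given below, using the standard exponential holding-time construction; the rate matrix is illustrative, not estimated from the caregiver-stress data:

```python
# Sketch of one subject's trajectory from a general 3-state CTMC: exponential
# holding times, jumps drawn from the embedded chain. Q is illustrative only.
import numpy as np

Q = np.array([[-0.30,  0.20,  0.10],
              [ 0.15, -0.40,  0.25],
              [ 0.05,  0.20, -0.25]])   # rows sum to zero

def simulate_ctmc(Q, state0=0, t_max=20.0, seed=0):
    rng = np.random.default_rng(seed)
    t, state, path = 0.0, state0, [(0.0, state0)]
    while True:
        rate = -Q[state, state]                       # total exit rate
        t += rng.exponential(1.0 / rate)              # exponential holding time
        if t > t_max:
            break
        probs = np.clip(Q[state], 0.0, None)          # off-diagonal rates
        probs[state] = 0.0
        state = rng.choice(len(Q), p=probs / probs.sum())
        path.append((t, state))
    return path

for time, s in simulate_ctmc(Q):
    print(f"t = {time:6.2f}  state = {s}")
```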
Sensitivity studies of the new Coastal Surge and Inundation Prediction System
NASA Astrophysics Data System (ADS)
Condon, A. J.; Veeramony, J.
2012-12-01
This paper details the sensitivity studies involved in the validation of a coastal storm surge and inundation prediction system for operational use by the United States Navy. The system consists of the Delft3D-FLOW model coupled with the Delft3D-WAVE model. This dynamically coupled system will replace the current operational system, PC-Tides, which does not include waves or other global ocean circulation. The Delft3D modeling system uses multiple nests to capture large, basin-scale circulation as well as coastal circulation and tightly couples waves and circulation at all scales. An additional benefit in using the presented system is that the Delft Dashboard, a graphical user interface product, can be used to simplify the set-up of Delft3D features such as the grid, elevation data, boundary forcing, and nesting. In this way fewer man-hours and less training will be needed to perform inundation forecasts. The new coupled system is used to model storm surge and inundation produced by Hurricane Ike (2008) along the Gulf of Mexico coast. Due to the time constraints in an operational forecasting environment, storm simulations must be as streamlined as possible. Many factors such as model resolution, elevation data sets, parametrization of bottom friction, frequency of coupling between hydrodynamic and wave components, and atmospheric forcing among others can influence the run times and results of the simulations. To assess the sensitivity of the modeling system to these various components a "best" simulation was first developed. The best simulation consists of reanalysis atmospheric forcing in the form of Oceanweather wind and pressure fields. Further, the wind field is modified by applying a directional land-masking to account for changes in land-roughness in the coastal zone. A number of air-sea drag coefficient formulations were tested to find the best match with observed results. An analysis of sea-level trends for the region reveals a seasonal elevation of sea level, which is applied throughout the Gulf of Mexico. The hydrodynamic model is run in 2D depth averaged mode with a spatially varying Manning's N coefficient based on land cover data. Multiple nests are used with resolutions varying between 0.1° and 0.004°. A blended bathymetry and topography dataset from multiple sources is used. Tidal constituents are obtained from the Oregon State University global model of ocean tides based on TOPEX7.2 satellite altimeter data. Simulated water level is compared to data from NOAA National Ocean Service observing stations throughout the region. Simulated inundation is compared to observations by means of Federal Emergency Management Agency High Water Mark (HWM) data. Results from the "best" simulation show very favorable comparison to observations. Simulated peak water levels are generally within 0.25 m and HWMs are well correlated with observations. Once the "best" simulation was established, sensitivity of the system to the wind model, drag coefficient, elevation dataset, initial water level, wave coupling, bottom roughness, and domain resolution was investigated. Each component has an influence on the simulation results, some much more than others. As expected, the atmospheric forcing is the key component; however, all other factors must be carefully chosen to obtain the best results.
Simulation and management games for training command and control in emergencies.
Levi, Leon; Bregman, David
2003-01-01
The aim of our project was to introduce and implement simulation techniques in a problematic field: increasing health care system preparedness for disasters. This field was chosen because the relevant knowledge is held by a few experienced staff members, who need to disseminate it to others during the busy routine work of the system personnel. Knowledge management techniques, ranging from classifying the current data and storing organizational knowledge centrally to using it for decision making and dispersing it through the organization, were used in this project. In the first stage we analyzed the current system of building a preparedness protocol (set of orders). We identified the pitfalls of changing personnel and losing knowledge gained through lessons from local and national experience. For this stage we developed a database of resources and objects (casualties) to be used in the simulation under different scenarios. One distinction made was between drills led by a trainer and those run at computers able to present the required solution. The model rules for different scenarios of multi-casualty incidents, from conventional warfare trauma to combined chemical/toxicological events, as well as levels of care before and inside hospitals, were incorporated into the database management system (we used Microsoft Access's DBMS). The hardware for the management game comprised networked serial computers with the possibility of projecting scenes. For the prehospital phase, portable PCs connected to a central server were used to assess bidirectional flow of information. Simulation software (ARENA) and a graphical interface (Visual Basic GUI) were used, as shown in the attached figure. We conclude that our system provides solutions that are in use at different levels of the healthcare system to assess and improve command and control for different scenarios of multi-casualty incidents.
NASA Astrophysics Data System (ADS)
Pitts, K.; Nasiri, S. L.; Smith, N.
2013-12-01
Global climate models have improved considerably over the years, yet clouds still represent a large factor of uncertainty for these models. Comparisons of model-simulated cloud variables with equivalent satellite cloud products are the best way to start diagnosing the differences between model output and observations. Gridded (level 3) cloud products from many different satellites and instruments are required for a full analysis, but these products are created by different science teams using different algorithms and filtering criteria to create similar, but not directly comparable, cloud products. This study makes use of a recently developed uniform space-time gridding algorithm to create a new set of gridded cloud products from each satellite instrument's level 2 data of interest which are each filtered using the same criteria, allowing for a more direct comparison between satellite products. The filtering is done via several variables such as cloud top pressure/height, thermodynamic phase, optical properties, satellite viewing angle, and sun zenith angle. The filtering criteria are determined based on the variable being analyzed and the science question at hand. Each comparison of different variables may require different filtering strategies as no single approach is appropriate for all problems. Beyond inter-satellite data comparison, these new sets of uniformly gridded satellite products can also be used for comparison with model-simulated cloud variables. Of particular interest to this study are the differences in the vertical distributions of ice and liquid water content between the satellite retrievals and model simulations, especially in the mid-troposphere where there are mixed-phase clouds to consider. This presentation will demonstrate the proof of concept through comparisons of cloud water path from Aqua MODIS retrievals and NASA GISS-E2-[R/H] model simulations archived in the CMIP5 data portal.
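The uniform space-time gridding algorithm itself is not specified in the abstract; the following sketch only illustrates the general approach of applying one shared filter to level 2 pixels and averaging them onto a uniform grid, with field names and thresholds chosen as assumptions for the example:

```python
# Illustrative uniform gridding with a shared filter (not the study's algorithm).
import numpy as np

def grid_level2(lat, lon, value, view_angle, sza, res_deg=1.0,
                max_view_angle=32.0, max_sza=81.5):
    """Bin filtered level-2 pixels onto a uniform lat/lon grid (mean per cell)."""
    keep = (view_angle <= max_view_angle) & (sza <= max_sza) & np.isfinite(value)
    lat_idx = ((lat[keep] + 90.0) // res_deg).astype(int)
    lon_idx = ((lon[keep] + 180.0) // res_deg).astype(int)
    n_lat, n_lon = int(180 / res_deg), int(360 / res_deg)
    total = np.zeros((n_lat, n_lon))
    count = np.zeros((n_lat, n_lon))
    np.add.at(total, (lat_idx, lon_idx), value[keep])
    np.add.at(count, (lat_idx, lon_idx), 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(count > 0, total / count, np.nan)

# Toy swath of cloud water path retrievals as a stand-in for level-2 data
rng = np.random.default_rng(1)
n = 50_000
grid = grid_level2(rng.uniform(-90, 89.9, n), rng.uniform(-180, 179.9, n),
                   rng.gamma(2.0, 60.0, n), rng.uniform(0, 65, n),
                   rng.uniform(0, 90, n))
print("filled grid cells:", int(np.isfinite(grid).sum()))
```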
A high-resolution global-scale groundwater model
NASA Astrophysics Data System (ADS)
de Graaf, I. E. M.; Sutanudjaja, E. H.; van Beek, L. P. H.; Bierkens, M. F. P.
2015-02-01
Groundwater is the world's largest accessible source of fresh water. It plays a vital role in satisfying basic needs for drinking water, agriculture and industrial activities. During times of drought groundwater sustains baseflow to rivers and wetlands, thereby supporting ecosystems. Most global-scale hydrological models (GHMs) do not include a groundwater flow component, mainly due to lack of geohydrological data at the global scale. For the simulation of lateral flow and groundwater head dynamics, a realistic physical representation of the groundwater system is needed, especially for GHMs that run at finer resolutions. In this study we present a global-scale groundwater model (run at 6' resolution) using MODFLOW to construct an equilibrium water table at its natural state as the result of long-term climatic forcing. The used aquifer schematization and properties are based on available global data sets of lithology and transmissivities combined with the estimated thickness of an upper, unconfined aquifer. This model is forced with outputs from the land-surface PCRaster Global Water Balance (PCR-GLOBWB) model, specifically net recharge and surface water levels. A sensitivity analysis, in which the model was run with various parameter settings, showed that variation in saturated conductivity has the largest impact on the groundwater levels simulated. Validation with observed groundwater heads showed that groundwater heads are reasonably well simulated for many regions of the world, especially for sediment basins (R2 = 0.95). The simulated regional-scale groundwater patterns and flow paths demonstrate the relevance of lateral groundwater flow in GHMs. Inter-basin groundwater flows can be a significant part of a basin's water budget and help to sustain river baseflows, especially during droughts. Also, water availability of larger aquifer systems can be positively affected by additional recharge from inter-basin groundwater flows.
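As a much-simplified illustration of why saturated conductivity (here lumped into a transmissivity) dominates the simulated heads, the sketch below solves a 1-D steady-state groundwater balance with fixed boundary heads; this is neither MODFLOW nor the 6' global model, and all parameter values are assumed:

```python
# 1-D steady-state sketch: T * d2h/dx2 + R = 0 with fixed heads at both ends,
# solved by finite differences. Higher transmissivity flattens the water table.
import numpy as np

def steady_heads(n=101, dx=100.0, transmissivity=500.0, recharge=1e-4,
                 h_left=10.0, h_right=8.0):
    """Heads (m) on a 10 km line; recharge in m/day, transmissivity in m^2/day."""
    A = np.zeros((n, n))
    b = np.full(n, -recharge * dx**2 / transmissivity)
    A[0, 0] = A[-1, -1] = 1.0                 # fixed-head (Dirichlet) boundaries
    b[0], b[-1] = h_left, h_right
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    return np.linalg.solve(A, b)

for T in (100.0, 500.0, 2500.0):              # m^2/day, illustrative range
    h = steady_heads(transmissivity=T)
    print(f"T = {T:7.1f}  max head = {h.max():.2f} m")
```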
Effect of music on surgical skill during simulated intraocular surgery.
Kyrillos, Ralph; Caissie, Mathieu
2017-12-01
To evaluate the effect of Mozart music compared to silence on anterior segment surgical skill in the context of simulated intraocular surgery. Prospective stratified and randomized noninferiority trial. Fourteen ophthalmologists and 12 residents in ophthalmology. All participants were asked to perform 4 sets of predetermined tasks on the EyeSI surgical simulator (VRmagic, Mannheim, Germany). The participants completed 1 Capsulorhexis task and 1 Anti-Tremor task during 3 separate visits. The first 2 sets determined the basic level on day 1. Then, the participants were stratified by surgical experience and randomized to be exposed to music (Mozart sonata for 2 pianos in D-K448) during either the third or the fourth set of tasks (day 2 or 3). Surgical skill was evaluated using the parameters recorded by the simulator such as "Total score" and "Time" for both tasks and task-specific parameters such as "Out of tolerance percentage" for the Anti-Tremor task and "Deviation of rhexis radius from 2.5 mm," "Roundness," and "Centering" for the Capsulorhexis task. The data were analyzed using the Wilcoxon signed-rank test. No statistically significant differences were noted between exposure and nonexposure for all the Anti-Tremor task parameters as well as most parameters for the Capsulorhexis task. Two parameters for the Capsulorhexis task showed a strong trend for improvement with exposure to music ("Total score" +23.3%, p = 0.025; "Roundness" +33.0%, p = 0.037). Exposure to music did not negatively impact surgical skills. Moreover, a trend for improvement was shown while listening to Mozart music. Copyright © 2017 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
Hauk, O; Keil, A; Elbert, T; Müller, M M
2002-01-30
We describe a methodology to apply current source density (CSD) and minimum norm (MN) estimation as pre-processing tools for time-series analysis of single trial EEG data. The performance of these methods is compared for the case of wavelet time-frequency analysis of simulated gamma-band activity. A reasonable comparison of CSD and MN on the single trial level requires regularization such that the corresponding transformed data sets have similar signal-to-noise ratios (SNRs). For region-of-interest approaches, it should be possible to optimize the SNR for single estimates rather than for the whole distributed solution. An effective implementation of the MN method is described. Simulated data sets were created by modulating the strengths of a radial and a tangential test dipole with wavelets in the frequency range of the gamma band, superimposed with simulated spatially uncorrelated noise. The MN and CSD transformed data sets as well as the average reference (AR) representation were subjected to wavelet frequency-domain analysis, and power spectra were mapped for relevant frequency bands. For both CSD and MN, the influence of noise can be sufficiently suppressed by regularization to yield meaningful information, but only MN represents both radial and tangential dipole sources appropriately as single peaks. Therefore, when relating wavelet power spectrum topographies to their neuronal generators, MN should be preferred.
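As a concrete illustration of the minimum-norm step, and not the authors' implementation, the sketch below forms a Tikhonov-regularized MN inverse operator with a random matrix standing in for a real lead field; the regularization strength lam is the knob that trades spatial resolution against single-trial SNR.

```python
# Illustrative sketch only: Tikhonov-regularized minimum-norm estimate of source
# time courses from single-trial data. L is a hypothetical lead field.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources, n_times = 32, 200, 100

L = rng.standard_normal((n_sensors, n_sources))     # assumed lead field (sensors x sources)
data = rng.standard_normal((n_sensors, n_times))    # single-trial EEG segment (toy data)

lam = 0.1 * np.trace(L @ L.T) / n_sensors           # simple regularization choice
G = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n_sensors))   # MN inverse operator

sources = G @ data                                   # estimated source time courses
print(sources.shape)                                 # (n_sources, n_times)
```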
van Strien, Maarten J; Slager, Cornelis T J; de Vries, Bauke; Grêt-Regamey, Adrienne
2016-06-01
Many studies have assessed the effect of landscape patterns on spatial ecological processes by simulating these processes in computer-generated landscapes with varying composition and configuration. To generate such landscapes, various neutral landscape models have been developed. However, the limited set of landscape-level pattern variables included in these models is often inadequate to generate landscapes that reflect real landscapes. In order to achieve more flexibility and variability in the generated landscape patterns, a more complete set of class- and patch-level pattern variables should be implemented in these models. These enhancements have been implemented in the Landscape Generator (LG), software that uses optimization algorithms to generate landscapes that match user-defined target values. Although LG was originally developed for participatory spatial planning at small scales, we enhanced its usability and demonstrate how it can be used for larger-scale ecological studies. First, we used LG to recreate landscape patterns from a real landscape (i.e., a mountainous region in Switzerland). Second, we generated landscape series with incrementally changing pattern variables, which could be used in ecological simulation studies. We found that LG was able to recreate landscape patterns that approximate those of real landscapes. Furthermore, we successfully generated landscape series that would not have been possible with traditional neutral landscape models. LG is a promising novel approach for generating neutral landscapes and enables testing of new hypotheses regarding the influence of landscape patterns on ecological processes. LG is freely available online.
A mathematical framework for modelling cambial surface evolution using a level set method
Sellier, Damien; Plank, Michael J.; Harrington, Jonathan J.
2011-01-01
Background and Aims During their lifetime, tree stems take a series of successive nested shapes. Individual tree growth models traditionally focus on apical growth and architecture. However, cambial growth, which is distributed over a surface layer wrapping the whole organism, equally contributes to plant form and function. This study aims at providing a framework to simulate how organism shape evolves as a result of a secondary growth process that occurs at the cellular scale. Methods The development of the vascular cambium is modelled as an expanding surface using the level set method. The surface consists of multiple compartments following distinct expansion rules. Growth behaviour can be formulated as a mathematical function of surface state variables and independent variables to describe biological processes. Key Results The model was coupled to an architectural model and to a forest stand model to simulate cambium dynamics and wood formation at the scale of the organism. The model is able to simulate competition between cambia, surface irregularities and local features. Predicting the shapes associated with arbitrarily complex growth functions does not add complexity to the numerical method itself. Conclusions Despite their slenderness, it is sometimes useful to conceive of trees as expanding surfaces. The proposed mathematical framework provides a way to integrate through time and space the biological and physical mechanisms underlying cambium activity. It can be used either to test growth hypotheses or to generate detailed maps of wood internal structure. PMID:21470972
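For readers unfamiliar with the numerics, the sketch below shows the generic level-set update such a framework builds on; it is not the authors' cambium model. An interface stored as the zero level set of phi expands with a normal speed F, here made artificially faster on one flank to mimic two growth "compartments"; all rates and grid settings are illustrative.

```python
# Generic level-set expansion sketch: phi_t + F |grad phi| = 0, first-order
# Godunov upwind update for an outward-moving front (F >= 0).
import numpy as np

n, dx, dt = 200, 0.05, 0.01
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 1.0            # signed distance to a unit circle

theta = np.arctan2(Y, X)
F = np.where(np.cos(theta) > 0, 0.5, 0.2)   # hypothetical faster growth on one flank

for _ in range(100):
    dmx = (phi - np.roll(phi, 1, axis=1)) / dx   # backward differences
    dpx = (np.roll(phi, -1, axis=1) - phi) / dx  # forward differences
    dmy = (phi - np.roll(phi, 1, axis=0)) / dx
    dpy = (np.roll(phi, -1, axis=0) - phi) / dx
    grad = np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2 +
                   np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
    phi -= dt * F * grad                         # move the front outward

print("area enclosed by the front:", (phi < 0).sum() * dx**2)
```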
COSMIC REIONIZATION ON COMPUTERS: NUMERICAL AND PHYSICAL CONVERGENCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov; Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637; Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637
In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite-resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ∼20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, such as stellar masses and metallicities. Yet other properties of model galaxies, for example, their H i masses, are recovered in the weakly converged runs only within a factor of 2.
Modeling and Composing Scenario-Based Requirements with Aspects
NASA Technical Reports Server (NTRS)
Araujo, Joao; Whittle, Jon; Ki, Dae-Kyoo
2004-01-01
There has been significant recent interest, within the Aspect-Oriented Software Development (AOSD) community, in representing crosscutting concerns at various stages of the software lifecycle. However, most of these efforts have concentrated on the design and implementation phases. We focus in this paper on representing aspects during use case modeling. In particular, we focus on scenario-based requirements and show how to compose aspectual and non-aspectual scenarios so that they can be simulated as a whole. Non-aspectual scenarios are modeled as UML sequence diagrams. Aspectual scenarios are modeled as Interaction Pattern Specifications (IPS). In order to simulate them, the scenarios are transformed into a set of executable state machines using an existing state machine synthesis algorithm. Previous work composed aspectual and non-aspectual scenarios at the sequence diagram level. In this paper, the composition is done at the state machine level.
NASA Astrophysics Data System (ADS)
Li, Huajiao; Fang, Wei; An, Haizhong; Gao, Xiangyun; Yan, Lili
2016-05-01
Economic networks in the real world are not homogeneous; therefore, it is important to study economic networks with heterogeneous nodes and edges to simulate a real network more precisely. In this paper, we present an empirical study of the one-mode derivative holding-based network constructed by the two-mode affiliation network of two sets of actors using the data of worldwide listed energy companies and their shareholders. First, we identify the primitive relationship in the two-mode affiliation network of the two sets of actors. Then, we present the method used to construct the derivative network based on the shareholding relationship between two sets of actors and the affiliation relationship between actors and events. After constructing the derivative network, we analyze different topological features on the node level, edge level and entire network level and explain the meanings of the different values of the topological features combining the empirical data. This study is helpful for expanding the usage of complex networks to heterogeneous economic networks. For empirical research on the worldwide listed energy stock market, this study is useful for discovering the inner relationships between the nations and regions from a new perspective.
Modelling wildland fire propagation by tracking random fronts
NASA Astrophysics Data System (ADS)
Pagnini, G.; Mentrelli, A.
2014-08-01
Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function that has an exponential decay and is not zero in an infinite domain, while the level-set method, which is a front-tracking technique, generates a sharp function that is not zero inside a compact domain. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random nature, and they are extremely important in wildland fire propagation. Consequently, the fire front acquires a random character as well; hence, a tracking method for random fronts is needed. In particular, the level-set contour is randomised here according to the probability density function of the interface particle displacement. When the level-set method is extended to track a front interface undergoing random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterising role that is typical of the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as fire flank and backing fire, the faster fire spread caused by hot-air pre-heating and ember landing, and the fire overcoming a fire-break zone, a case not resolved by models based solely on the level-set method. Moreover, a correction to the rate-of-spread formula follows from the proposed formulation, accounting for the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour. The presented study constitutes a proof of concept, and it needs to be subjected to future validation.
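A minimal sketch of the reconciliation idea, under strong simplifying assumptions and not the authors' formulation: the sharp burned-region indicator from a level-set front is convolved with an assumed isotropic Gaussian displacement density (standing in for turbulent hot-air transport), giving an effective burned fraction that varies smoothly across the front. Grid size, front radius and displacement spread are illustrative.

```python
# Conceptual sketch: smooth the sharp level-set indicator with the displacement PDF.
import numpy as np
from scipy.ndimage import gaussian_filter

n, dx = 256, 10.0                      # grid cells and spacing [m]
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)

phi = np.sqrt(X**2 + Y**2) - 300.0     # level set of a circular fire front (radius 300 m)
burned = (phi <= 0).astype(float)      # sharp burned-region indicator

sigma_m = 50.0                         # assumed rms turbulent displacement [m]
effective = gaussian_filter(burned, sigma=sigma_m / dx)   # convolution with a Gaussian PDF

i_front = int(n / 2 + 300.0 / dx)
print("effective burned fraction at the nominal front:", float(effective[n // 2, i_front]))
```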
Blana, Dimitra; Hincapie, Juan G; Chadwick, Edward K; Kirsch, Robert F
2013-01-01
Neuroprosthetic systems based on functional electrical stimulation aim to restore motor function to individuals with paralysis following spinal cord injury. Identifying the optimal electrode set for the neuroprosthesis is complicated because it depends on the characteristics of the individual (such as injury level), the force capacities of the muscles, the movements the system aims to restore, and the hardware limitations (number and type of electrodes available). An electrode-selection method has been developed that uses a customized musculoskeletal model. Candidate electrode sets are created based on desired functional outcomes and the hardware limitations of the proposed system. Inverse-dynamic simulations are performed to determine the proportion of target movements that can be accomplished with each set; the set that allows the most movements to be performed is chosen as the optimal set. The technique is demonstrated here for a system recently developed by our research group to restore whole-arm movement to individuals with high-level tetraplegia. The optimal set included selective nerve-cuff electrodes for the radial and musculocutaneous nerves; single-channel cuffs for the axillary, suprascapular, upper subscapular, and long-thoracic nerves; and muscle-based electrodes for the remaining channels. The importance of functional goals, hardware limitations, muscle and nerve anatomy, and surgical feasibility is highlighted.
Using Simulation to Prepare Nursing Students for Professional Roles.
Wittmann-Price, Ruth A; Price, Sam W; Graham, Crystal; Wilson, Linda
2016-01-01
The current job market for nurses is variable and although there remains a projected shortage of nurses for the future, availability of entry-level positions has changed. This mixed-methods pilot study describes the successful use of simulated role-play to prepare senior nursing students (N = 66) for competitive job markets. The simulation laboratory was set up as a human resource department. The students and interviewers were evaluated by surveys. The majority of students rated the experience high for understanding interviews and assisting them with readiness for interviews. Qualitative results revealed themes of nervousness, confidence, and readiness. Interviewers also discussed student nervousness and the benefits of simulated interviews. These results affirmed that the overall learning outcome of the experience was positive and can assist in promoting professional role transition. The project will continue to be implemented, and it will be extended to graduate students in the future.
Method and system for fault accommodation of machines
NASA Technical Reports Server (NTRS)
Goebel, Kai Frank (Inventor); Subbu, Rajesh Venkat (Inventor); Rausch, Randal Thomas (Inventor); Frederick, Dean Kimball (Inventor)
2011-01-01
A method for multi-objective fault accommodation using predictive modeling is disclosed. The method includes using a simulated machine that simulates a faulted actual machine, and using a simulated controller that simulates an actual controller. A multi-objective optimization process is performed, based on specified control settings for the simulated controller and specified operational scenarios for the simulated machine controlled by the simulated controller, to generate a Pareto frontier-based solution space relating performance of the simulated machine to settings of the simulated controller, including adjustment to the operational scenarios to represent a fault condition of the simulated machine. Control settings of the actual controller are adjusted, represented by the simulated controller, for controlling the actual machine, represented by the simulated machine, in response to a fault condition of the actual machine, based on the Pareto frontier-based solution space, to maximize desirable operational conditions and minimize undesirable operational conditions while operating the actual machine in a region of the solution space defined by the Pareto frontier.
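As a toy illustration of the Pareto-frontier construction at the heart of such a method, with hypothetical objective values and not the patented procedure itself, the non-dominated candidate controller settings can be extracted from simulated (performance, cost) pairs as follows.

```python
# Illustrative sketch: keep only Pareto-optimal candidates when maximizing
# performance and minimizing an undesirable-condition cost.
import numpy as np

rng = np.random.default_rng(1)
performance = rng.random(200)          # hypothetical simulated machine performance
cost = rng.random(200)                 # hypothetical undesirable-condition measure

def pareto_mask(perf, cost):
    """True for points not dominated by any other point."""
    mask = np.ones(len(perf), dtype=bool)
    for i in range(len(perf)):
        dominates_i = (perf >= perf[i]) & (cost <= cost[i]) & \
                      ((perf > perf[i]) | (cost < cost[i]))
        if dominates_i.any():
            mask[i] = False
    return mask

front = pareto_mask(performance, cost)
print("Pareto-optimal settings:", front.sum(), "of", len(front))
```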
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortoleva, Peter J.
Illustrative embodiments of systems and methods for the deductive multiscale simulation of macromolecules are disclosed. In one illustrative embodiment, a deductive multiscale simulation method may include (i) constructing a set of order parameters that model one or more structural characteristics of a macromolecule, (ii) simulating an ensemble of atomistic configurations for the macromolecule using instantaneous values of the set of order parameters, (iii) simulating thermal-average forces and diffusivities for the ensemble of atomistic configurations, and (iv) evolving the set of order parameters via Langevin dynamics using the thermal-average forces and diffusivities.
Airborne Turbulence Detection System Certification Tool Set
NASA Technical Reports Server (NTRS)
Hamilton, David W.; Proctor, Fred H.
2006-01-01
A methodology and a corresponding set of simulation tools for testing and evaluating turbulence detection sensors have been presented. The tool set is available to industry and the FAA for certification of radar-based airborne turbulence detection systems. The tool set consists of simulated data sets representing convectively induced turbulence, an airborne radar simulation system, hazard tables to convert the radar observable to an aircraft load, documentation, a hazard metric "truth" algorithm, and criteria for scoring the predictions. Analysis indicates that flight test data supports spatial buffers for scoring detections. Also, flight data and demonstrations with the tool set suggest the need for a magnitude buffer.
Ab initio molecular simulations with numeric atom-centered orbitals
NASA Astrophysics Data System (ADS)
Blum, Volker; Gehrke, Ralf; Hanke, Felix; Havu, Paula; Havu, Ville; Ren, Xinguo; Reuter, Karsten; Scheffler, Matthias
2009-11-01
We describe a complete set of algorithms for ab initio molecular simulations based on numerically tabulated atom-centered orbitals (NAOs) to capture a wide range of molecular and materials properties from quantum-mechanical first principles. The full algorithmic framework described here is embodied in the Fritz Haber Institute "ab initio molecular simulations" (FHI-aims) computer program package. Its comprehensive description should be relevant to any other first-principles implementation based on NAOs. The focus here is on density-functional theory (DFT) in the local and semilocal (generalized gradient) approximations, but an extension to hybrid functionals, Hartree-Fock theory, and MP2/GW electron self-energies for total energies and excited states is possible within the same underlying algorithms. An all-electron/full-potential treatment that is both computationally efficient and accurate is achieved for periodic and cluster geometries on equal footing, including relaxation and ab initio molecular dynamics. We demonstrate the construction of transferable, hierarchical basis sets, allowing the calculation to range from qualitative tight-binding like accuracy to meV-level total energy convergence with the basis set. Since all basis functions are strictly localized, the otherwise computationally dominant grid-based operations scale as O(N) with system size N. Together with a scalar-relativistic treatment, the basis sets provide access to all elements from light to heavy. Both low-communication parallelization of all real-space grid based algorithms and a ScaLapack-based, customized handling of the linear algebra for all matrix operations are possible, guaranteeing efficient scaling (CPU time and memory) up to massively parallel computer systems with thousands of CPUs.
Re-Evaluation of Development of the TMDL Using Long-Term Monitoring Data and Modeling
NASA Astrophysics Data System (ADS)
Squires, A.; Rittenburg, R.; Boll, J.; Brooks, E. S.
2012-12-01
Since 1996, 47,979 Total Maximum Daily Loads (TMDLs) have been approved throughout the United States for impaired water bodies. TMDLs are set by determining natural background loads for a given water body and then estimating contributions from point and nonpoint sources to create load allocations and set acceptable pollutant levels that meet water quality standards. Monitoring data and hydrologic models may be used in this process. However, the data sets used are often limited in duration and frequency, and model simulations are not always accurate. The objective of this study is to retrospectively look at the development and accuracy of the TMDL for a stream in an agricultural area using long-term monitoring data and a robust modeling process. The study area is the Paradise Creek Watershed in northern Idaho. A sediment TMDL was determined for the Idaho section of Paradise Creek in 1997. Sediment TMDL levels were determined using a short-term data set and the Water Erosion Prediction Project (WEPP) model. Background loads used for the TMDL in 1997 were from pre-agricultural levels, based on WEPP model results. We modified the WEPP model for simulation of saturation excess overland flow, the dominant runoff generation mechanism, and analyzed more than 10 years of high-resolution monitoring data from 2001-2012, including discharge and total suspended solids. Results will compare background loading and current loading based on present-day land use documented during the monitoring period and compare previous WEPP model results with the modified WEPP model results. This research presents a reevaluation of the TMDL process with recommendations for a more scientifically sound methodology to attain realistic water quality goals.
Clusternomics: Integrative context-dependent clustering for heterogeneous datasets.
Gabasova, Evelina; Reid, John; Wernisch, Lorenz
2017-10-01
Integrative clustering is used to identify groups of samples by jointly analysing multiple datasets describing the same set of biological samples, such as gene expression, copy number, methylation etc. Most existing algorithms for integrative clustering assume that there is a shared consistent set of clusters across all datasets, and most of the data samples follow this structure. However in practice, the structure across heterogeneous datasets can be more varied, with clusters being joined in some datasets and separated in others. In this paper, we present a probabilistic clustering method to identify groups across datasets that do not share the same cluster structure. The proposed algorithm, Clusternomics, identifies groups of samples that share their global behaviour across heterogeneous datasets. The algorithm models clusters on the level of individual datasets, while also extracting global structure that arises from the local cluster assignments. Clusters on both the local and the global level are modelled using a hierarchical Dirichlet mixture model to identify structure on both levels. We evaluated the model both on simulated and on real-world datasets. The simulated data exemplifies datasets with varying degrees of common structure. In such a setting Clusternomics outperforms existing algorithms for integrative and consensus clustering. In a real-world application, we used the algorithm for cancer subtyping, identifying subtypes of cancer from heterogeneous datasets. We applied the algorithm to TCGA breast cancer dataset, integrating gene expression, miRNA expression, DNA methylation and proteomics. The algorithm extracted clinically meaningful clusters with significantly different survival probabilities. We also evaluated the algorithm on lung and kidney cancer TCGA datasets with high dimensionality, again showing clinically significant results and scalability of the algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Chih-Chieh; Lin, Hsin-Hon; Lin, Chang-Shiun
Multiple-photon emitters, such as In-111 or Se-75, have enormous potential in the field of nuclear medicine imaging. For example, Se-75 can be used to investigate bile acid malabsorption and measure bile acid pool loss. The simulation system for emission tomography (SimSET) is a well-known Monte Carlo simulation (MCS) code in nuclear medicine, valued for its high computational efficiency. However, the current SimSET cannot simulate these isotopes because it lacks models of complex decay schemes and the time-dependent decay process. To extend the versatility of SimSET for simulating these multi-photon emission isotopes, a time-resolved multiple-photon history generator based on the SimSET codes is developed in the present study. To build the time-resolved SimSET (trSimSET) with a radionuclide decay process, the new MCS model introduces new features, including decay time information and photon time-of-flight information, into the code. The half-lives of energy states were tabulated from the Evaluated Nuclear Structure Data File (ENSDF) database. The MCS results indicate that the overall percent difference is less than 8.5% for all simulation trials compared to GATE. In summary, we demonstrated that the time-resolved SimSET multiple-photon history generator achieves accuracy comparable to GATE while keeping better computational efficiency. The new MCS code is very useful for studying multi-photon imaging of novel isotopes that requires simulation of lifetimes and time-of-flight measurements. (authors)
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-09-01
Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times, on the fly, by a statistical learning technique - multi-level Gaussian process regression; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, which detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
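A stripped-down sketch of the fill-in idea, using plain single-level GP regression with an RBF kernel and made-up data rather than the multi-level, multi-fidelity machinery of the paper: surviving samples of a fine solution are combined with a coarse auxiliary solution, used as the prior mean, to reconstruct the missing values.

```python
# Minimal GP-regression fill-in sketch; kernel form and hyperparameters are illustrative.
import numpy as np

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

x_all = np.linspace(0.0, 1.0, 101)
truth = np.sin(2 * np.pi * x_all)            # stand-in for the fine simulation
coarse = 0.9 * np.sin(2 * np.pi * x_all)     # cheap auxiliary simulator (prior mean)

survived = np.arange(0, 101, 10)             # samples that survived the "failure"
y = truth[survived] - coarse[survived]       # model the discrepancy from the prior

K = rbf(x_all[survived], x_all[survived]) + 1e-6 * np.eye(len(survived))
k_star = rbf(x_all, x_all[survived])
fill = coarse + k_star @ np.linalg.solve(K, y)   # prior mean + GP posterior mean

print("max reconstruction error:", np.abs(fill - truth).max())
```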
Jeong, Jeong-Won; Shin, Dae C; Do, Synho; Marmarelis, Vasilis Z
2006-08-01
This paper presents a novel segmentation methodology for automated classification and differentiation of soft tissues using multiband data obtained with the newly developed system of high-resolution ultrasonic transmission tomography (HUTT) for imaging biological organs. This methodology extends and combines two existing approaches: the L-level set active contour (AC) segmentation approach and the agglomerative hierarchical kappa-means approach for unsupervised clustering (UC). To prevent the trapping of the current iterative minimization AC algorithm in a local minimum, we introduce a multiresolution approach that applies the level set functions at successively increasing resolutions of the image data. The resulting AC clusters are subsequently rearranged by the UC algorithm that seeks the optimal set of clusters yielding the minimum within-cluster distances in the feature space. The presented results from Monte Carlo simulations and experimental animal-tissue data demonstrate that the proposed methodology outperforms other existing methods without depending on heuristic parameters and provides a reliable means for soft tissue differentiation in HUTT images.
Simulated families: A test for different methods of family identification
NASA Technical Reports Server (NTRS)
Bendjoya, Philippe; Cellino, Alberto; Froeschle, Claude; Zappala, Vincenzo
1992-01-01
A set of families generated in fictitious impact events (leading to a wide range of 'structure' in the orbital element space) has been superimposed on various backgrounds of different densities in order to investigate the efficiency and the limitations of the methods used by Zappala et al. (1990) and by Bendjoya et al. (1990) for identifying asteroid families. In addition, an evaluation of the expected number of interlopers at different significance levels and the possibility of improving the definition of the level of maximum significance of a given family were analyzed.
Further Investigations of Gravity Modeling on Surface-Interacting Vehicle Simulations
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2009-01-01
A vehicle simulation is "surface-interacting" if the state of the vehicle (position, velocity, and acceleration) relative to the surface is important. Surface-interacting simulations perform ascent, entry, descent, landing, surface travel, or atmospheric flight. The dynamics of surface-interacting simulations are influenced by the modeling of gravity. Gravity is the sum of gravitation and the centrifugal acceleration due to the world's rotation. Both components are functions of position relative to the world's center, and that position for a given set of geodetic coordinates (latitude, longitude, and altitude) depends on the world model (world shape and dynamics). Thus, gravity fidelity depends on the fidelities of the gravitation model and the world model and on the interaction of the gravitation and world model. A surface-interacting simulation cannot treat the gravitation separately from the world model. This paper examines the actual performance of different pairs of world and gravitation models (or direct gravity models) on the travel of a subsonic civil transport in level flight under various starting conditions.
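The decomposition described above can be made concrete with a deliberately simple world model, a rotating sphere with point-mass gravitation; this is far cruder than the model pairs compared in the paper, and the constants are approximate.

```python
# Sketch: effective gravity = gravitation + centrifugal acceleration, for a
# spherical, rotating world with point-mass gravitation (illustrative only).
import numpy as np

MU = 3.986004418e14        # Earth's gravitational parameter [m^3/s^2]
OMEGA = 7.292115e-5        # rotation rate [rad/s]
R_WORLD = 6371000.0        # spherical world radius [m]

def gravity(lat_deg, alt_m):
    """Magnitudes of gravitation and of effective gravity at geocentric latitude."""
    lat = np.radians(lat_deg)
    r = R_WORLD + alt_m
    pos = r * np.array([np.cos(lat), 0.0, np.sin(lat)])      # ECEF-like position
    g_grav = -MU / r**3 * pos                                 # point-mass gravitation
    a_cent = OMEGA**2 * np.array([pos[0], pos[1], 0.0])       # centrifugal term
    g_eff = g_grav + a_cent
    return np.linalg.norm(g_grav), np.linalg.norm(g_eff)

print("equator, sea level:", gravity(0.0, 0.0))
print("pole,    sea level:", gravity(90.0, 0.0))
```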
Mager, Diana R; Lange, Jean W; Greiner, Philip A; Saracino, Katherine H
2012-08-01
The Expanded Learning and Dedication to Elders in the Region (ELDER) project addressed the needs of under-served older adults by educating health care providers in home health and long-term care facilities. Four agencies in a health professional shortage/medically underserved area participated. Focus groups were held to determine agency-specific educational needs. Curricula from the John A. Hartford Foundation were adapted to design unique curricula for each agency and level of personnel during the first 2 years. The focus of this report is the case-based simulation learning approach used in year 3 to validate application of knowledge and facilitate teamwork and interprofessional communication. Three simulation sessions on varying topics were conducted at each site. Postsimulation surveys and qualitative interviews with hired evaluators showed that participants found simulations helpful to their practice. Tailored on-site education incorporating mid-fidelity simulation was an effective model for translating gerontological knowledge into practice and encouraging communication and teamwork in these settings. Copyright 2012, SLACK Incorporated.
Numerical study of wind over breaking waves and generation of spume droplets
NASA Astrophysics Data System (ADS)
Yang, Zixuan; Tang, Shuai; Dong, Yu-Hong; Shen, Lian
2017-11-01
We present direct numerical simulation (DNS) results on wind over breaking waves. The air and water are simulated as a coherent system. The air-water interface is captured using a coupled level-set and volume-of-fluid method. The initial condition for the simulation is fully-developed wind turbulence over strongly-forced steep waves. Because wave breaking is an unsteady process, we use ensemble averaging of a large number of runs to obtain turbulence statistics. The generation and transport of spume droplets during wave breaking is also simulated. The trajectories of sea spray droplets are tracked using a Lagrangian particle tracking method. The generation of droplets is captured using a kinematic criterion based on the relative velocity of fluid particles of water with respect to the wave phase speed. From the simulation, we observe that the wave plunging generates a large vortex in air, which makes an important contribution to the suspension of sea spray droplets.
Polyester: simulating RNA-seq datasets with differential transcript expression.
Frazee, Alyssa C; Jaffe, Andrew E; Langmead, Ben; Leek, Jeffrey T
2015-09-01
Statistical methods development for differential expression analysis of RNA sequencing (RNA-seq) requires software tools to assess accuracy and error rate control. Since true differential expression status is often unknown in experimental datasets, artificially constructed datasets must be utilized, either by generating costly spike-in experiments or by simulating RNA-seq data. Polyester is an R package designed to simulate RNA-seq data, beginning with an experimental design and ending with collections of RNA-seq reads. Its main advantage is the ability to simulate reads indicating isoform-level differential expression across biological replicates for a variety of experimental designs. Data generated by Polyester is a reasonable approximation to real RNA-seq data and standard differential expression workflows can recover differential expression set in the simulation by the user. Polyester is freely available from Bioconductor (http://bioconductor.org/). jtleek@gmail.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
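Polyester itself is an R/Bioconductor package; purely as a language-agnostic illustration of the underlying idea, the toy sketch below draws negative-binomial counts per transcript and replicate, scaling the mean of a user-chosen subset by a fold change to create differential expression. Transcript counts, dispersion and fold change are arbitrary assumptions.

```python
# Toy RNA-seq count simulation (not Polyester): negative binomial counts with
# a fold change applied to a chosen set of "differentially expressed" transcripts.
import numpy as np

rng = np.random.default_rng(2)
n_tx, n_rep = 1000, 5
base_mean = rng.gamma(shape=2.0, scale=100.0, size=n_tx)   # baseline expression levels
fold = np.ones(n_tx)
de = rng.choice(n_tx, size=50, replace=False)              # transcripts set as DE
fold[de] = 3.0

def nb_counts(mean, dispersion=0.1):
    # negative binomial parameterized by mean and dispersion (var = mu + disp * mu^2)
    r = 1.0 / dispersion
    p = r / (r + mean)
    return rng.negative_binomial(r, p)

control = np.array([[nb_counts(m) for m in base_mean] for _ in range(n_rep)])
case = np.array([[nb_counts(m * f) for m, f in zip(base_mean, fold)] for _ in range(n_rep)])
print(control.shape, case.shape)   # (replicates, transcripts)
```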
NASA Astrophysics Data System (ADS)
Blázquez, M.; Egizabal, A.; Unzueta, I.
2014-08-01
The LIFE+ Project SIRENA, Simulation of the release of nanomaterials from consumer products for environmental exposure assessment (LIFE11 ENV/ES/596), has set up a Technological Surveillance System (TSS) to trace technical references worldwide related to nanocomposites and the release of nanomaterials from nanocomposites. So far, a total of seventy-three items of different nature (from peer-reviewed articles to presentations and contributions to congresses) have been selected and classified as "nanomaterials release simulation technologies". In the present document, different approaches for simulating different life cycle stages through the physical degradation of polymer nanocomposites at laboratory scale are assessed. In the absence of a reference methodology, the comparison of the different protocols used remains a challenge.
First experiences of high-fidelity simulation training in junior nursing students in Korea.
Lee, Suk Jeong; Kim, Sang Suk; Park, Young-Mi
2015-07-01
This study was conducted to explore first experiences of high-fidelity simulation training in Korean nursing students, in order to develop and establish more effective guidelines for future simulation training in Korea. Thirty-three junior nursing students participated in high-fidelity simulation training for the first time. Using both qualitative and quantitative methods, data were collected from reflective journals and questionnaires of simulation effectiveness after simulation training. Descriptive statistics were used to analyze simulation effectiveness and content analysis was performed with the reflective journal data. Five dimensions and 31 domains, both positive and negative experiences, emerged from qualitative analysis: (i) machine-human interaction in a safe environment; (ii) perceived learning capability; (iii) observational learning; (iv) reconciling practice with theory; and (v) follow-up debriefing effect. More than 70% of students scored high on increased ability to identify changes in the patient's condition, critical thinking, decision-making, effectiveness of peer observation, and debriefing in effectiveness of simulation. This study reported both positive and negative experiences of simulation. The results of this study could be used to set the level of task difficulty in simulation. Future simulation programs can be designed by reinforcing the positive experiences and modifying the negative results. © 2014 The Authors. Japan Journal of Nursing Science © 2014 Japan Academy of Nursing Science.
NASA Astrophysics Data System (ADS)
Shafei, Shoresh; Kuzyk, Mark C.; Kuzyk, Mark G.
2010-03-01
The hyperpolarizability governs all light-matter interactions. In recent years, quantum mechanical calculations have shown that there is a fundamental limit on the hyperpolarizability of all materials. The fundamental limits are calculated only under the assumption that the Thomas-Kuhn sum rules and the three-level ansatz hold. (The three-level ansatz states that for optimized hyperpolarizability, only two excited states contribute to the hyperpolarizability.) All molecules ever characterized have hyperpolarizabilities that fall well below the limits. However, Monte Carlo simulations of the nonlinear polarizability have shown that attaining values close to the fundamental limit is theoretically possible, but the calculations do not provide guidance with regard to which potentials are optimal. The focus of our work is to use Monte Carlo techniques to determine sets of energies and transition moments that are consistent with the sum rules, and to study the constraints on their signs. This analysis will be used to implement a numerical proof of the three-level ansatz.
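A toy version of the sampling step, in dimensionless units with a three-level model and a trivial rescaling to enforce the ground-state sum rule; this is not the authors' full constrained Monte Carlo, only an illustration of generating energies and transition moments consistent with one sum rule.

```python
# Sketch: sample excitation energies and transition moments, then rescale the
# moments so the ground-state Thomas-Kuhn sum rule (constant set to 1) holds.
import numpy as np

rng = np.random.default_rng(4)
samples = []
for _ in range(10000):
    e = np.sort(rng.uniform(0.1, 1.0, size=2))     # E_10, E_20 (dimensionless)
    x2 = rng.uniform(0.0, 1.0, size=2)             # |x_01|^2, |x_02|^2 (unscaled)
    scale = 1.0 / np.dot(e, x2)                    # enforce sum_n E_n0 |x_0n|^2 = 1
    samples.append((e, x2 * scale))

e, x2 = samples[0]
print("energies:", e, "moments^2:", x2, "sum rule:", np.dot(e, x2))
```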
NASA Astrophysics Data System (ADS)
Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.
2017-12-01
The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimations of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimations. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we discuss that as the minimum number of required samples depends on rock type it should correspond to some acceptable level of uncertainty in the estimations. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimations using small samples may be very low. We further discuss the impact of this on ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
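The kind of Monte Carlo experiment described can be sketched as follows; the assumed "true" parameters, the 5% strength scatter and the standard linearization of the criterion are illustrative choices, not the authors' exact procedure. Small synthetic triaxial data sets are generated from the intact-rock criterion sigma1 = sigma3 + sigma_c*sqrt(m*sigma3/sigma_c + 1), and m and sigma_c are re-estimated from each set to see how widely small-sample estimates scatter.

```python
# Monte Carlo sketch: re-estimate Hoek-Brown parameters from many small synthetic
# triaxial data sets using the linearization (s1 - s3)^2 = m*sigma_c*s3 + sigma_c^2.
import numpy as np

rng = np.random.default_rng(3)
m_true, sigc_true = 10.0, 100.0      # assumed "true" H-B parameters [-, MPa]
n_samples, n_trials = 5, 2000        # small sample size, many Monte Carlo trials

m_est, sigc_est = [], []
for _ in range(n_trials):
    s3 = rng.uniform(0.0, 0.5 * sigc_true, n_samples)
    s1 = s3 + sigc_true * np.sqrt(m_true * s3 / sigc_true + 1.0)
    s1 *= 1.0 + 0.05 * rng.standard_normal(n_samples)        # 5% strength scatter
    a, b = np.polyfit(s3, (s1 - s3)**2, 1)                    # slope a, intercept b
    if b > 0:
        sigc = np.sqrt(b)
        sigc_est.append(sigc)
        m_est.append(a / sigc)

print("sigma_c estimate: %.1f +/- %.1f MPa" % (np.mean(sigc_est), np.std(sigc_est)))
print("m estimate:       %.1f +/- %.1f" % (np.mean(m_est), np.std(m_est)))
```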
High-Level Connectionist Models
1993-10-01
…subject of intense research since Axelrod (1984) showed that two agents engaged in a prisoner's dilemma (Poundstone, 1992) can evolve into mutually… The various parameter values for the program are set as described above unless otherwise noted. 4.1 Williams' Trigger Problem: as an initial test… M. P. Vecchi. Optimization by simulated annealing. Science, 220:671-680, 1983. [39] R. J. Williams. Adaptive State Representation and Estimation…
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J S; Tsui, Benjamin M W
2008-07-01
The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: simulation system for emission tomography (SimSET) and GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator/detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to approximately 10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data were also observed. In conclusion, the authors have successfully integrated SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques.
NASA Astrophysics Data System (ADS)
Stewart, Iris T.; Loague, Keith
2003-12-01
Groundwater vulnerability assessments of nonpoint source agrochemical contamination at regional scales are either qualitative in nature or require prohibitively costly computational efforts. By contrast, the type transfer function (TTF) modeling approach for vadose zone pesticide leaching presented here estimates solute concentrations at a depth of interest, only uses available soil survey, climatic, and irrigation information, and requires minimal computational cost for application. TTFs are soil texture based travel time probability density functions that describe a characteristic leaching behavior for soil profiles with similar soil hydraulic properties. Seven sets of TTFs, representing different levels of upscaling, were developed for six loam soil textural classes with the aid of simulated breakthrough curves from synthetic data sets. For each TTF set, TTFs were determined from a group or subgroup of breakthrough curves for each soil texture by identifying the effective parameters of the function that described the average leaching behavior of the group. The grouping of the breakthrough curves was based on the TTF index, a measure of the magnitude of the peak concentration, the peak arrival time, and the concentration spread. Comparison to process-based simulations show that the TTFs perform well with respect to mass balance, concentration magnitude, and the timing of concentration peaks. Sets of TTFs based on individual soil textures perform better for all the evaluation criteria than sets that span all textures. As prediction accuracy and computational cost increase with the number of TTFs in a set, the selection of a TTF set is determined by a given application.
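The transfer-function idea can be illustrated with a deliberately simple stand-in, a lognormal travel-time density with made-up parameters rather than one of the seven TTF sets developed in the study: the pesticide flux applied at the surface is convolved with the travel-time PDF to give the flux arriving at the depth of interest.

```python
# Transfer-function sketch: propagate a surface pulse to depth by convolution
# with an assumed lognormal travel-time probability density function.
import numpy as np

dt = 1.0                                   # time step [days]
t = np.arange(0.0, 3650.0, dt)             # 10-year horizon

mu, sigma = np.log(180.0), 0.6             # assumed lognormal travel-time parameters
tt = np.maximum(t, dt)                     # avoid log(0) at t = 0
ttf = np.exp(-(np.log(tt) - mu)**2 / (2 * sigma**2)) / (tt * sigma * np.sqrt(2 * np.pi))

surface_flux = np.zeros_like(t)
surface_flux[30] = 1.0                     # unit pulse applied on day 30

leachate = np.convolve(surface_flux, ttf * dt)[:len(t)]   # flux at the depth of interest
print("peak arrival day:", t[np.argmax(leachate)], "relative peak:", leachate.max())
```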
Visual noise disrupts conceptual integration in reading.
Gao, Xuefei; Stine-Morrow, Elizabeth A L; Noh, Soo Rim; Eskew, Rhea T
2011-02-01
The Effortfulness Hypothesis suggests that sensory impairment (either simulated or age-related) may decrease capacity for semantic integration in language comprehension. We directly tested this hypothesis by measuring resource allocation to different levels of processing during reading (i.e., word vs. semantic analysis). College students read three sets of passages word-by-word, one at each of three levels of dynamic visual noise. There was a reliable interaction between processing level and noise, such that visual noise increased resources allocated to word-level processing, at the cost of attention paid to semantic analysis. Recall of the most important ideas also decreased with increasing visual noise. Results suggest that sensory challenge can impair higher-level cognitive functions in learning from text, supporting the Effortfulness Hypothesis.
Lázár, Attila N; Clarke, Derek; Adams, Helen; Akanda, Abdur Razzaque; Szabo, Sylvia; Nicholls, Robert J; Matthews, Zoe; Begum, Dilruba; Saleh, Abul Fazal M; Abedin, Md Anwarul; Payo, Andres; Streatfield, Peter Kim; Hutton, Craig; Mondal, M Shahjahan; Moslehuddin, Abu Zofar Md
2015-06-01
Coastal Bangladesh experiences significant poverty and hazards today and is highly vulnerable to climate and environmental change over the coming decades. Coastal stakeholders are demanding information to assist in the decision making processes, including simulation models to explore how different interventions, under different plausible future socio-economic and environmental scenarios, could alleviate environmental risks and promote development. Many existing simulation models neglect the complex interdependencies between the socio-economic and environmental system of coastal Bangladesh. Here an integrated approach has been proposed to develop a simulation model to support agriculture and poverty-based analysis and decision-making in coastal Bangladesh. In particular, we show how a simulation model of farmer's livelihoods at the household level can be achieved. An extended version of the FAO's CROPWAT agriculture model has been integrated with a downscaled regional demography model to simulate net agriculture profit. This is used together with a household income-expenses balance and a loans logical tree to simulate the evolution of food security indicators and poverty levels. Modelling identifies salinity and temperature stress as limiting factors to crop productivity and fertilisation due to atmospheric carbon dioxide concentrations as a reinforcing factor. The crop simulation results compare well with expected outcomes but also reveal some unexpected behaviours. For example, under current model assumptions, temperature is more important than salinity for crop production. The agriculture-based livelihood and poverty simulations highlight the critical significance of debt through informal and formal loans set at such levels as to persistently undermine the well-being of agriculture-dependent households. Simulations also indicate that progressive approaches to agriculture (i.e. diversification) might not provide the clear economic benefit from the perspective of pricing due to greater susceptibility to climate vagaries. The livelihood and poverty results highlight the importance of the holistic consideration of the human-nature system and the careful selection of poverty indicators. Although the simulation model at this stage contains the minimum elements required to simulate the complexity of farmer livelihood interactions in coastal Bangladesh, the crop and socio-economic findings compare well with expected behaviours. The presented integrated model is the first step to develop a holistic, transferable analytic method and tool for coastal Bangladesh.
Initial conditions for accurate N-body simulations of massive neutrino cosmologies
NASA Astrophysics Data System (ADS)
Zennaro, M.; Bel, J.; Villaescusa-Navarro, F.; Carbone, C.; Sefusatti, E.; Guzzo, L.
2017-04-01
The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterizes these models. In this work, we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrino perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS with a 0.1 per cent accuracy or below for all redshifts relevant to non-linear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code (REPS, rescaled power spectra for initial conditions with massive neutrinos; https://github.com/matteozennaro/reps), providing the initial displacements and velocities for cold dark matter and neutrino particles that will allow accurate, i.e. 1 per cent level, numerical simulations for this cosmological scenario.
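Schematically, the rescaling amounts to multiplying the target spectrum by the squared ratio of a scale-dependent growth factor at the starting and target redshifts; the sketch below uses a made-up growth function and a toy spectrum purely to show the bookkeeping, whereas the real calculation is what the public REPS code does with the two-fluid solution.

```python
# Bookkeeping sketch: rescale the target linear P(k) back to the starting redshift
# with a placeholder scale-dependent growth factor (illustrative only).
import numpy as np

k = np.logspace(-3, 1, 200)                 # wavenumbers [h/Mpc]
P_target = 1e4 * k / (1.0 + (k / 0.02)**3)  # toy "target" P(k) at z_target

def D(k, z):
    # placeholder growth factor: mildly suppressed on small scales, ~1/(1+z)
    return (1.0 - 0.05 * k / (k + 0.5)) / (1.0 + z)

z_target, z_start = 0.0, 99.0
P_initial = P_target * (D(k, z_start) / D(k, z_target))**2

print("rescaling factor at k = 1 h/Mpc:", (D(1.0, z_start) / D(1.0, z_target))**2)
```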
MacPhee, A. G.; Dymoke-Bradshaw, A. K. L.; Hares, J. D.; ...
2016-08-08
Here, we report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle-in-cell simulations were performed to identify the regions in the streak camera that contribute most to space charge blurring. Our simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radojcic, Riko; Nowak, Matt; Nakamoto, Mark
The status of the development of a Design-for-Stress simulation flow that captures the stress effects in packaged 3D-stacked Si products like integrated circuits (ICs) using advanced via-middle Through Si Via technology is outlined. The next set of challenges required to proliferate the methodology and to deploy it for making and dispositioning real Si product decisions is described here. These include the adoption and support of a Process Design Kit (PDK) that includes the relevant material properties, the development of stress simulation methodologies that operate at higher levels of abstraction in a design flow, and the development and adoption of suitable models required to make real product reliability decisions.
Sabol, Thomas A.; Springer, Abraham E.
2013-01-01
Seepage erosion and mass failure of emergent sandy deposits along the Colorado River in Grand Canyon National Park, Arizona, are a function of the elevation of groundwater in the sandbar, fluctuations in river stage, the exfiltration of water from the bar face, and the slope of the bar face. In this study, a generalized three-dimensional numerical model was developed to predict the time-varying groundwater level, within the bar face region of a freshly deposited eddy sandbar, as a function of river stage. Model verification from two transient simulations demonstrates the ability of the model to predict groundwater levels within the onshore portion of the sandbar face across a range of conditions. Use of this generalized model is applicable across a range of typical eddy sandbar deposits in diverse settings. The ability to predict the groundwater level at the onshore end of the sandbar face is essential for both physical and numerical modeling efforts focusing on the erosion and mass failure of eddy sandbars downstream of Glen Canyon Dam along the Colorado River.
NASA Technical Reports Server (NTRS)
Olson, William S.; Bauer, Peter; Kummerow, Christian D.; Tao, Wei-Kuo
2000-01-01
The one-dimensional, steady-state melting layer model developed in Part I of this study is used to calculate both the microphysical and radiative properties of melting precipitation, based upon the computed concentrations of snow and graupel just above the freezing level at applicable horizontal gridpoints of 3-dimensional cloud resolving model simulations. The modified 3-dimensional distributions of precipitation properties serve as input to radiative transfer calculations of upwelling radiances and radar extinction/reflectivities at the TRMM Microwave Imager (TMI) and Precipitation Radar (PR) frequencies, respectively. At the resolution of the cloud resolving model grids (approx. 1 km), upwelling radiances generally increase if mixed-phase precipitation is included in the model atmosphere. The magnitude of the increase depends upon the optical thickness of the cloud and precipitation, as well as the scattering characteristics of ice-phase precipitation aloft. Over the set of cloud resolving model simulations utilized in this study, maximum radiance increases of 43, 28, 18, and 10 K are simulated at 10.65, 19.35, 37.0, and 85.5 GHz, respectively. The impact of melting on TMI-measured radiances is determined not only by the physics of the melting particles but also by the horizontal extent of the melting precipitation, since the lower-frequency channels have footprints that extend over tens of kilometers. At TMI resolution, the maximum radiance increases are 16, 15, 12, and 9 K at the same frequencies. Simulated PR extinction and reflectivities in the melting layer can increase dramatically if mixed-phase precipitation is included, a result consistent with previous studies. Maximum increases of 0.46 (-2 dB) in extinction optical depth and 5 dBZ in reflectivity are simulated based upon the set of cloud resolving model simulations.
Computer Simulations of Polytetrafluoroethylene in the Solid State
NASA Astrophysics Data System (ADS)
Holt, D. B.; Farmer, B. L.; Eby, R. K.; Macturk, K. S.
1996-03-01
Force field parameters (Set I) for fluoropolymers were previously derived from MOPAC AM1 semiempirical data on model molecules. A second set (Set II) was derived from the AM1 results augmented by ab initio calculations. Both sets yield reasonable helical and phase II packing structures for polytetrafluoroethylene (PTFE) chains. However, Set I and Set II differ in the strength of van der Waals interactions, with Set II having potential wells that are deeper by an order of magnitude. To determine which parameter set provides a better description of PTFE behavior, molecular dynamics simulations have been performed with Biosym Discover on clusters of PTFE chains that begin in a phase II packing environment. Added to the model are artificial constraints which allow the simulation of thermal expansion without having to define periodic boundary conditions for each specific temperature of interest. The preliminary dynamics simulations indicate that the intra- and intermolecular interactions provided by Set I are too weak. The degree of helical disorder and chain motion are high even at temperatures well below the phase II-phase IV transition temperature (19 °C). Set II appears to yield a better description of PTFE in the solid state.
NASA Astrophysics Data System (ADS)
Reilhac, Anthonin; Boisson, Frédéric; Wimberley, Catriona; Parmar, Arvind; Zahra, David; Hamze, Hasar; Davis, Emma; Arthur, Andrew; Bouillot, Caroline; Charil, Arnaud; Grégoire, Marie-Claude
2016-02-01
In PET imaging, research groups have recently proposed different experimental setups allowing multiple animals to be imaged simultaneously in a scanner in order to reduce costs and increase throughput. In those studies, the technical feasibility was demonstrated and the signal degradation caused by additional mice in the FOV was characterized; however, the impact of the signal degradation on the outcome of a PET study has not yet been examined. Here we thoroughly investigated, using Monte Carlo simulated [18F]FDG and [11C]Raclopride PET studies, different experimental designs for whole-body and brain acquisitions of two mice and assessed the actual impact on the detection of biological variations as compared to a single-mouse setting. First, we extended the validation of the PET-SORTEO Monte Carlo simulation platform for the simultaneous simulation of two animals. Then, we designed [18F]FDG and [11C]Raclopride input mouse models for the simulation of realistic whole-body and brain PET studies. Simulated studies allowed us to accurately estimate the differences in detection between single- and dual-mouse acquisition settings that are purely the result of having two animals in the FOV. Validation results showed that PET-SORTEO accurately reproduced the spatial resolution and noise degradations that were observed with actual dual phantom experiments. The simulated [18F]FDG whole-body study showed that the resolution loss due to the off-center positioning of the mice was the biggest contributing factor in signal degradation at the pixel level, so a minimal inter-animal distance and reconstruction methods with resolution modeling should be preferred. Dual-mouse acquisition did not have a major impact on ROI-based analysis except in situations where uptake values in organs from the same subject were compared. The simulated [11C]Raclopride study, however, showed that dual-mouse imaging strongly reduced the sensitivity to variations when mice were positioned side by side, while no sensitivity reduction was observed when they were facing each other. This is the first study showing the impact of different experimental designs for whole-body and brain acquisitions of two mice on the quality of the results, using Monte Carlo simulated [18F]FDG and [11C]Raclopride PET studies.
Klippenstein, Stephen J; Harding, Lawrence B; Ruscic, Branko
2017-09-07
The fidelity of combustion simulations is strongly dependent on the accuracy of the underlying thermochemical properties for the core combustion species that arise as intermediates and products in the chemical conversion of most fuels. High level theoretical evaluations are coupled with a wide-ranging implementation of the Active Thermochemical Tables (ATcT) approach to obtain well-validated high fidelity predictions for the 0 K heat of formation for a large set of core combustion species. In particular, high level ab initio electronic structure based predictions are obtained for a set of 348 C, N, O, and H containing species, which corresponds to essentially all core combustion species with 34 or fewer electrons. The theoretical analyses incorporate various high level corrections to base CCSD(T)/cc-pVnZ analyses (n = T or Q) using H2, CH4, H2O, and NH3 as references. Corrections for the complete-basis-set limit, higher-order excitations, anharmonic zero-point energy, core-valence, relativistic, and diagonal Born-Oppenheimer effects are ordered in decreasing importance. Independent ATcT values are presented for a subset of 150 species. The accuracy of the theoretical predictions is explored through (i) examination of the magnitude of the various corrections, (ii) comparisons with other high level calculations, and (iii) comparison with the ATcT values. The estimated 2σ uncertainties of the three methods devised here, ANL0, ANL0-F12, and ANL1, are in the range of ±1.0-1.5 kJ/mol for single-reference and moderately multireference species, for which the calculated higher order excitations are 5 kJ/mol or less. In addition to providing valuable references for combustion simulations, the subsequent inclusion of the current theoretical results into the ATcT thermochemical network is expected to significantly improve the thermochemical knowledge base for less-well studied species.
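The abstract above describes a composite scheme built from a CBS-extrapolated base energy plus additive corrections ranked by importance. The sketch below is only a generic bookkeeping illustration of that idea; the term names and magnitudes are placeholders, not the actual ANL0/ANL0-F12/ANL1 recipes or values.

```python
# Illustrative sketch only: assembling a composite energy from a CBS base value
# and a dict of additive corrections, then ranking the corrections by magnitude.
# All numbers are hypothetical placeholders (kJ/mol).

def composite_energy(base_cbs, corrections):
    """Return the composite total and the corrections sorted by absolute size."""
    total = base_cbs + sum(corrections.values())
    ranked = sorted(corrections.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return total, ranked

total, ranked = composite_energy(
    base_cbs=-1500.0,
    corrections={
        "higher-order excitations": -3.2,
        "anharmonic ZPE": 1.1,
        "core-valence": -0.9,
        "relativistic": 0.4,
        "diagonal Born-Oppenheimer": 0.1,
    },
)
print(total)
for name, value in ranked:
    print(f"{name:>28s}: {value:+.2f}")
```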
Preferential attachment in multiple trade networks
NASA Astrophysics Data System (ADS)
Foschi, Rachele; Riccaboni, Massimo; Schiavo, Stefano
2014-08-01
In this paper we develop a model for the evolution of multiple networks which is able to replicate the concentrated and sparse nature of world trade data. Our model is an extension of the preferential attachment growth model to the case of multiple networks. Countries trade a variety of goods of different complexity, and every country progressively evolves from trading less sophisticated goods to high-tech goods. The probabilities of capturing more trade opportunities at a given level of complexity and of starting to trade more complex goods are both proportional to the number of existing trade links. We provide a set of theoretical predictions and simulation results. A calibration exercise shows that our model replicates the same concentration level of world trade as well as the sparsity pattern of the trade matrix. We also discuss a set of numerical solutions to deal with large multiple networks.
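A minimal sketch of the growth mechanism described above, under stated assumptions: countries accumulate links by preferential attachment within their current complexity level and graduate to more complex goods with a probability that also grows with their number of links. The parameters and the specific graduation rule are illustrative, not the authors' calibrated model.

```python
# Toy multi-network preferential attachment: countries gain links in proportion
# to their existing links, and occasionally "graduate" to the next complexity level.
import random
from collections import defaultdict

def simulate_trade(n_countries=50, n_levels=5, n_steps=5000, seed=0):
    rng = random.Random(seed)
    links = [defaultdict(int) for _ in range(n_levels)]   # links[level][country]
    top_level = {c: 0 for c in range(n_countries)}        # highest level reached
    for c in range(n_countries):
        links[0][c] = 1                                   # every country starts with one link
    for _ in range(n_steps):
        # pick a country with probability proportional to its total number of links
        weights = [sum(links[l][c] for l in range(n_levels)) for c in range(n_countries)]
        c = rng.choices(range(n_countries), weights=weights)[0]
        lvl = top_level[c]
        # graduation probability grows with the country's degree at its current level
        if lvl + 1 < n_levels and rng.random() < links[lvl][c] / (links[lvl][c] + 10):
            top_level[c] = lvl + 1
            links[lvl + 1][c] += 1
        else:
            links[lvl][c] += 1
    return links, top_level

links, top_level = simulate_trade()
print(max(top_level.values()), sum(links[0].values()))
```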
Effects of Night Work, Sleep Loss and Time on Task on Simulated Threat Detection Performance
Basner, Mathias; Rubinstein, Joshua; Fomberstein, Kenneth M.; Coble, Matthew C.; Ecker, Adrian; Avinash, Deepa; Dinges, David F.
2008-01-01
Study Objectives: To investigate the effects of night work and sleep loss on a simulated luggage screening task (SLST) that mimicked the x-ray system used by airport luggage screeners. Design: We developed more than 5,800 unique simulated x-ray images of luggage organized into 31 stimulus sets of 200 bags each. In each set, 25% of the bags contained either a gun or a knife with low or high target difficulty. The 200-bag stimulus sets were then run on software that simulates an x-ray screening system (SLST). Signal detection analysis was used to obtain measures of hit rate (HR), false alarm rate (FAR), threat detection accuracy (A′), and response bias (B″D). Setting: Experimental laboratory study. Participants: 24 healthy nonprofessional volunteers (13 women, mean age ± SD = 29.9 ± 6.5 years). Interventions: Subjects performed the SLST every 2 h during a 5-day period that included a 35-h period of wakefulness extending through night work, followed by another day-work period after the night without sleep. Results: Threat detection accuracy A′ decreased significantly (P < 0.001) while FAR increased significantly (P < 0.001) during night work, while both A′ (P = 0.001) and HR (P = 0.008) decreased during day work following sleep loss. There were prominent time-on-task effects on response bias B″D (P = 0.002) and response latency (P = 0.004), but accuracy A′ was unaffected. Both HR and FAR increased significantly with increasing study duration (both P < 0.001), while response latency decreased significantly (P < 0.001). Conclusions: This study provides the first systematic evidence that night work and sleep loss adversely affect the accuracy of detecting complex real-world objects among high levels of background clutter. If the results can be replicated in professional screeners and real work environments, fatigue in luggage screening personnel may pose a threat to air traffic safety unless countermeasures for fatigue are deployed. Citation: Basner M; Rubinstein J; Fomberstein KM; Coble MC; Avinash D; Dinges DF. Effects of Night Work, Sleep Loss and Time on Task on Simulated Threat Detection Performance. SLEEP 2008;31(9):1251-1259. PMID:18788650
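For readers unfamiliar with the nonparametric indices named above, the snippet below computes Grier's A′ and Donaldson's B″D from a hit rate and false-alarm rate. These are the standard textbook formulas and are assumed here, not taken from the paper.

```python
# Nonparametric signal-detection indices commonly used with HR/FAR data.

def a_prime(hr, far):
    """Grier's A': nonparametric sensitivity; 0.5 = chance, 1.0 = perfect."""
    if hr >= far:
        return 0.5 + ((hr - far) * (1 + hr - far)) / (4 * hr * (1 - far))
    return 0.5 - ((far - hr) * (1 + far - hr)) / (4 * far * (1 - hr))

def b_double_prime_d(hr, far):
    """Donaldson's B''D: nonparametric response bias; >0 conservative, <0 liberal."""
    num = (1 - hr) * (1 - far) - hr * far
    den = (1 - hr) * (1 - far) + hr * far
    return num / den

# Example: HR = 0.80, FAR = 0.10 (edge cases hr=0 or far=1 are not handled here).
print(a_prime(0.80, 0.10), b_double_prime_d(0.80, 0.10))
```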
Brydges, Ryan; Mallette, Claire; Pollex, Heather; Carnahan, Heather; Dubrowski, Adam
2012-08-01
Educators often simplify complex tasks by setting learning objectives that focus trainees on isolated skills rather than the holistic task. We designed 2 sets of learning objectives for intravenous catheterization using goal setting theory. We hypothesized that setting holistic goals related to technical, cognitive, and communication skills would result in superior holistic performance, whereas setting isolated goals related to technical skills would result in superior technical performance. We randomly assigned practicing health care professionals to set holistic (n = 14) or isolated (n = 15) goals. All watched an instructional video and studied a list of 9 goals specific to their group. Participants practiced independently in a hybrid simulation (standardized patient combined with an arm simulator). The first and the last practice trials were videotaped for analysis. One week later, participants completed a transfer test in another hybrid simulation scenario. Blinded experts evaluated performance on all 3 trials using the Direct Observation of Procedural Skills tool. The holistic group scored higher than the isolated group on the holistic Direct Observation of Procedural Skills score for all 3 trials [mean (SD), 45.0 (9.16) vs. 38.4 (9.17); P = 0.01]. The isolated group did not perform better than the holistic group on the technical skills score [10.3 (2.73) vs. 11.6 (3.01); P = 0.11]. Our results suggest that asking learners to set holistic goals did not interfere with their attaining competent holistic and technical skills during hybrid simulation training. This exploratory trial provides preliminary evidence for how to consider integrating hybrid simulation into medical curricula and for the design of learning goals in simulation-based education.
Spinello, Elio F; Fischbach, Ronald
2008-01-01
This study investigated the use of a Web-based community health simulation as a problem-based learning (PBL) experience for undergraduate students majoring in public health. The study sought to determine whether students who participated in the online simulation achieved differences in academic and attitudinal outcomes compared with students who participated in a traditional PBL exercise. Using a nonexperimental comparative design, 21 undergraduate students enrolled in a health-behavior course were each randomly assigned to one of four workgroups. Each workgroup was randomly assigned the semester-long simulation project or the traditional PBL exercise. Survey instruments were used to measure students' attitudes toward the course, their perceptions of the learning community, and perceptions of their own cognitive learning. Content analysis of final essay exams and group reports was used to identify differences in academic outcomes and students' level of conceptual understanding of health-behavior theory. Findings indicated that students participating in the simulation produced higher mean final exam scores compared with students participating in the traditional PBL (p=0.03). Students in the simulation group also outperformed students in the traditional group with respect to their understanding of health-behavior theory (p=0.04). Students in the simulation group, however, rated their own level of cognitive learning lower than did students in the traditional group (p=0.03). By bridging time and distance constraints of the traditional classroom setting, an online simulation may be an effective PBL approach for public health students. Recommendations include further research using a larger sample to explore students' perceptions of learning when participating in simulated real-world activities. Additional research focusing on possible differences between actual and perceived learning relative to PBL methods and student workgroup dynamics is also recommended.
Near-peer medical student simulation training.
Cash, Thomas; Brand, Eleanor; Wong, Emma; Richardson, Jay; Athorn, Sam; Chowdhury, Faiza
2017-06-01
There is growing concern that medical students are inadequately prepared for life as a junior doctor. A lack of confidence managing acutely unwell patients is often cited as a barrier to good clinical care. With medical schools investing heavily in simulation equipment, we set out to explore whether near-peer simulation training is an effective teaching format. Medical students in their third year of study and above were invited to attend a 90-minute simulation teaching session. The sessions were designed and delivered by final-year medical students using clinical scenarios mapped to the Sheffield MBChB curriculum. Candidates were required to assess, investigate and manage an acutely unwell simulated patient. Pre- and post-simulation training Likert scale questionnaires were completed relating to self-reported confidence levels. Questionnaires were completed by 25 students (100% response rate); 52 per cent of students had no prior simulation experience. There were statistically significant improvements in self-reported confidence levels in each of the six areas assessed (p < 0.005). Thematic analysis of free-text comments indicated that candidates enjoyed the practical format of the sessions and found the experience useful. Our results suggest that near-peer medical student simulation training benefits both teacher and learner and that this simple model could easily be replicated at other medical schools. As the most junior members of the team, medical students are often confined to observer status. Simulation empowers students to practise independently in a safe and protected environment. Furthermore, it may help to alleviate anxiety about starting work as a junior doctor and improve future patient care. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Applying operations research to optimize a novel population management system for cancer screening
Zai, Adrian H; Kim, Seokjin; Kamis, Arnold; Hung, Ken; Ronquillo, Jeremiah G; Chueh, Henry C; Atlas, Steven J
2014-01-01
Objective: To optimize a new visit-independent, population-based cancer screening system (TopCare) by using operations research techniques to simulate changes in patient outreach staffing levels (delegates, navigators), modifications to user workflow within the information technology (IT) system, and changes in cancer screening recommendations. Materials and methods: TopCare was modeled as a multiserver, multiphase queueing system. Simulation experiments implemented the queueing network model following a next-event time-advance mechanism, in which systematic adjustments were made to staffing levels, IT workflow settings, and cancer screening frequency in order to assess their impact on overdue screenings per patient. Results: TopCare reduced the average number of overdue screenings per patient from 1.17 at inception to 0.86 during simulation to 0.23 at steady state. Increases in the workforce improved the effectiveness of TopCare. In particular, increasing the delegate or navigator staff level by one person improved screening completion rates by 1.3% or 12.2%, respectively. In contrast, changes in the amount of time a patient entry stays on delegate and navigator lists had little impact on overdue screenings. Finally, lengthening the screening interval increased efficiency within TopCare by decreasing overdue screenings at the patient level, resulting in a smaller number of overdue patients needing delegates for screening and a higher fraction of screenings completed by delegates. Conclusions: Simulating the impact of changes in staffing, system parameters, and clinical inputs on the effectiveness and efficiency of care can inform the allocation of limited resources in population management. PMID:24043318
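As an illustration of the next-event time-advance mechanism mentioned above, here is a minimal single-stage queue with a fixed number of servers. It is a generic stand-in for one phase of a multiserver, multiphase system such as the one described, with purely hypothetical rates.

```python
# Minimal next-event time-advance simulation of an M/M/s-style queue.
import heapq, random

def simulate_queue(servers=2, arrival_rate=1.0, service_rate=0.7, horizon=10_000, seed=1):
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]   # (event time, kind)
    busy, waiting, completed, wait_total = 0, [], 0, 0.0
    while events:
        t, kind = heapq.heappop(events)                      # advance to the next event
        if t > horizon:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
            if busy < servers:
                busy += 1
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
            else:
                waiting.append(t)                            # remember arrival time of queued customer
        else:  # departure
            completed += 1
            if waiting:
                arrived = waiting.pop(0)
                wait_total += t - arrived
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
            else:
                busy -= 1
    # mean wait over all completed customers (non-queued customers contribute zero)
    return completed, wait_total / max(completed, 1)

print(simulate_queue())
```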
Zarriello, Phillip J.; Olson, Scott A.; Flynn, Robert H.; Strauch, Kellan R.; Murphy, Elizabeth A.
2014-01-01
Heavy, persistent rains from late February through March 2010 caused severe flooding that set, or nearly set, peaks of record for streamflows and water levels at many long-term streamgages in Rhode Island. In response to this event, hydraulic models were updated for selected reaches covering about 56 river miles in the Pawtuxet River Basin to simulate water-surface elevations (WSEs) at specified flows and boundary conditions. Reaches modeled included the main stem of the Pawtuxet River, the North and South Branches of the Pawtuxet River, Pocasset River, Simmons Brook, Dry Brook, Meshanticut Brook, Furnace Hill Brook, Flat River, Quidneck Brook, and two unnamed tributaries referred to as South Branch Pawtuxet River Tributary A1 and Tributary A2. All the hydraulic models were updated to Hydrologic Engineering Center-River Analysis System (HEC-RAS) version 4.1.0 using steady-state simulations. Updates to the models included incorporation of new field-survey data at structures, high resolution land-surface elevation data, and updated flood flows from a related study. The models were assessed using high-water marks (HWMs) obtained in a related study following the March–April 2010 flood and the simulated water levels at the 0.2-percent annual exceedance probability (AEP), which is the estimated AEP of the 2010 flood in the basin. HWMs were obtained at 110 sites along the main stem of the Pawtuxet River, the North and South Branches of the Pawtuxet River, Pocasset River, Simmons Brook, Furnace Hill Brook, Flat River, and Quidneck Brook. Differences between the 2010 HWM elevations and the simulated 0.2-percent AEP WSEs from flood insurance studies (FISs) and the updated models developed in this study varied, with most differences attributed to the magnitude of the 0.2-percent AEP flows. WSEs from the updated models generally are in closer agreement with the observed 2010 HWMs than with the FIS WSEs. The improved agreement of the updated simulated water elevations to observed 2010 HWMs provides a measure of the hydraulic model performance, which indicates the updated models better represent flooding at other AEPs than the existing FIS models.
Burgette, Lane F; Reiter, Jerome P
2013-06-01
Multinomial outcomes with many levels can be challenging to model. Information typically accrues slowly with increasing sample size, yet the parameter space expands rapidly with additional covariates. Shrinking all regression parameters towards zero, as often done in models of continuous or binary response variables, is unsatisfactory, since setting parameters equal to zero in multinomial models does not necessarily imply "no effect." We propose an approach to modeling multinomial outcomes with many levels based on a Bayesian multinomial probit (MNP) model and a multiple shrinkage prior distribution for the regression parameters. The prior distribution encourages the MNP regression parameters to shrink toward a number of learned locations, thereby substantially reducing the dimension of the parameter space. Using simulated data, we compare the predictive performance of this model against two other recently-proposed methods for big multinomial models. The results suggest that the fully Bayesian, multiple shrinkage approach can outperform these other methods. We apply the multiple shrinkage MNP to simulating replacement values for areal identifiers, e.g., census tract indicators, in order to protect data confidentiality in public use datasets.
Many-body calculations of molecular electric polarizabilities in asymptotically complete basis sets
NASA Astrophysics Data System (ADS)
Monten, Ruben; Hajgató, Balázs; Deleuze, Michael S.
2011-10-01
The static dipole polarizabilities of Ne, CO, N2, F2, HF, H2O, HCN, and C2H2 (acetylene) have been determined close to the Full-CI limit along with an asymptotically complete basis set (CBS), according to the principles of a Focal Point Analysis. For this purpose the results of Finite Field calculations up to the level of Coupled Cluster theory including Single, Double, Triple, Quadruple and perturbative Pentuple excitations [CCSDTQ(P)] were used, in conjunction with suited extrapolations of energies obtained using augmented and doubly-augmented Dunning's correlation consistent polarized valence basis sets of improving quality. The polarizability characteristics of C2H4 (ethylene) and C2H6 (ethane) have been determined on the same grounds at the CCSDTQ level in the CBS limit. Comparison is made with results obtained using lower levels in electronic correlation, or taking into account the relaxation of the molecular structure due to an adiabatic polarization process. Vibrational corrections to electronic polarizabilities have been empirically estimated according to Born-Oppenheimer Molecular Dynamical simulations employing Density Functional Theory. Confrontation with experiment ultimately indicates relative accuracies of the order of 1 to 2%.
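Two of the ingredients named above lend themselves to compact illustration: a central-difference finite-field estimate of a static polarizability component and a two-point inverse-cube extrapolation of correlation energies toward the CBS limit. The energies, field strength, and extrapolation form below are generic placeholders, not the paper's data or protocol.

```python
# Illustrative finite-field and CBS-extrapolation helpers (placeholder numbers).

def finite_field_alpha(e_plus, e_zero, e_minus, field):
    """alpha ~ -(E(+F) - 2E(0) + E(-F)) / F^2 (atomic units, central difference)."""
    return -(e_plus - 2.0 * e_zero + e_minus) / field**2

def cbs_two_point(e_corr_x, x, e_corr_y, y):
    """Helgaker-style 1/X^3 extrapolation of correlation energies for
    cardinal numbers x < y (e.g. T=3, Q=4)."""
    return (y**3 * e_corr_y - x**3 * e_corr_x) / (y**3 - x**3)

# Hypothetical field strength and total energies (hartree); the field lowers the energy.
alpha = finite_field_alpha(e_plus=-76.4302050, e_zero=-76.4302000,
                           e_minus=-76.4302050, field=0.001)
# Hypothetical correlation energies at cardinal numbers 3 (T) and 4 (Q).
e_cbs = cbs_two_point(-0.300000, 3, -0.320000, 4)
print(alpha, e_cbs)
```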
Superensemble Climate Simulations for attribution of 2013-15 Western US drought
NASA Astrophysics Data System (ADS)
Mote, P.; Rupp, D. E.; Otto, F. E. L.; Uhe, P.; Allen, M. R.
2015-12-01
As of summer 2015, California is in its fourth year of drought, Oregon in its second year, and Washington in its first, with serious economic and ecological consequences. Scientists are using various approaches to evaluate the rarity, severity, and causes of this drought. This talk describes one approach to evaluating the influences of various causal factors. Using a framework for generating a superensemble of regional climate simulations called climateprediction.net, created by scientists at University of Oxford, we have generated tens of thousands of simulations of the period November 2013 to May 2015, to focus on the most acute period of drought in these three states. One set used observed sea surface temperatures (SSTs), one used average SSTs from 2000-2010, and one "natural" set used the observed SSTs but with the anthropogenic warming component subtracted and greenhouse gas concentrations set to pre-industrial values. Within the three sets, the simulations differ only in the atmospheric initial conditions used. Distinctly different likelihoods of drought emerge in the Pacific states in the three sets of simulations. This talk will describe differences in temperature, precipitation, snow water equivalent, atmospheric flow, and energy and water balance terms in the various sets of simulations, and how they differ from California to Washington.
Improved score statistics for meta-analysis in single-variant and gene-level association studies.
Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo
2018-06-01
Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case-control ratios. Here, we investigate the power loss problem of the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods performing equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In the simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
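For contrast with the improved statistics proposed in the paper, the following sketch shows the standard score-statistic combination that it builds on: per-study score statistics and their variances are summed, and the combined statistic is referred to a 1-df chi-square. The study inputs are hypothetical.

```python
# Standard score-statistic meta-analysis for a single variant across K studies.
from scipy.stats import chi2

def meta_score_test(scores, variances):
    """Combine per-study score statistics U_k and variances V_k into one test."""
    u = sum(scores)
    v = sum(variances)
    stat = u * u / v
    return stat, chi2.sf(stat, df=1)

# Hypothetical per-study scores and variances for three studies.
stat, pval = meta_score_test(scores=[2.1, -0.4, 3.3], variances=[1.5, 0.9, 2.2])
print(stat, pval)
```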
Report on the formal specification and partial verification of the VIPER microprocessor
NASA Technical Reports Server (NTRS)
Brock, Bishop; Hunt, Warren A., Jr.
1991-01-01
The VIPER microprocessor chip is partitioned into four levels of abstraction. At the highest level, VIPER is described with decreasingly abstract sets of functions in LCF-LSM. At the lowest level are the gate-level models in proprietary CAD languages. The block-level and gate-level specifications are also given in the ELLA simulation language. Among VIPER's deficiencies are the lack of any notion of external events in the top-level specification and the impossibility of using the top-level specifications to prove abstract properties of programs running on VIPER computers. There is no complete proof that the gate-level specifications implement the top-level specifications. Cohn's proof that the major-state machine correctly implements the top-level specifications has no formal connection with any of the other proof attempts. None of the latter address resetting the machine, memory timeout, forced error, or single-step modes.
Simulation of Groundwater-Level and Salinity Changes in the Eastern Shore, Virginia
Sanford, Ward E.; Pope, Jason P.; Nelms, David L.
2009-01-01
Groundwater-level and salinity changes have been simulated with a groundwater model developed and calibrated for the Eastern Shore of Virginia. The Eastern Shore is the southern part of the Delmarva Peninsula that is occupied by Accomack and Northampton Counties in Virginia. Groundwater is the sole source of freshwater to the Eastern Shore, and demands for water have been increasing from domestic, industrial, agricultural, and public-supply sectors of the economy. Thus, it is important that the groundwater supply be protected from overextraction and seawater intrusion. The best way for water managers to use all of the information available is usually to compile this information into a numerical model that can simulate the response of the system to current and future stresses. A detailed description of the geology, hydrogeology, and historical groundwater extractions was compiled and entered into the numerical model. The hydrogeologic framework is composed of a surficial aquifer under unconfined conditions, a set of three aquifers and associated overlying confining units under confined conditions (the upper, middle, and lower Yorktown-Eastover Formation), and an underlying confining unit (the St. Marys Formation). An estimate of the location and depths of two major paleochannels was also included in the framework of the model. Total withdrawals from industrial, commercial, public-supply, and some agricultural wells were compiled from the period 1900 through 2003. Reported pumpage from these sources increased dramatically during the 1960s and 70s, up to currently about 4 million gallons per day. Domestic withdrawals were estimated on the basis of population census districts and were assigned spatially to the model on the assumption that domestic users are located close to roads. A numerical model was created using the U.S. Geological Survey (USGS) code SEAWAT to simulate both water levels and concentrations of chloride (representing salinity). The model was calibrated using 605 predevelopment and transient water-level observations that are associated predominantly with 20 observation nests of wells sited across the study area. Sampling for groundwater chemistry at these sites revealed that chloride has not increased significantly in the last 20 years. Environmental tracers in the samples also indicated that the water in the surficial aquifer is typically years to decades old, whereas water in the confined aquifers is typically centuries to millennia old. The calibration procedure yielded distributions of hydraulic conductivity and storage coefficients of the aquifers and confining units that are based on 21 pilot points, but vary smoothly across the study area. The estimated values are consistent with other measurements of these properties measured previously on cores and during hydraulic tests at various well fields. Simulations performed with the model demonstrated that the calibrated model can reproduce the observed historical water levels fairly well (R2 = 0.93). The chloride concentrations were also simulated, but a match with chloride concentrations was more difficult to achieve (R2 = 0.16) because of the lack of sufficient data and the unknown exact behavior of the entire transition zone in the millennia leading up to the present day. Future pumping scenarios were simulated through 2050, with pumping set to either 2003 rates or total permitted withdrawal rates. 
Water levels in 2050 are predicted to be lower than current levels by a few feet where stresses are currently heaviest but potentially by tens of feet if total permitted withdrawals are extracted at current low-stressed sites. Simulations of chloride concentrations through 2050 revealed some potential for seawater intrusion in the areas of Cape Charles, Chincoteague, east of the town of Exmore, and east of the town of Accomac, but precise estimates of concentration increases are highly uncertain. Simulation results were also used to estimate that the down
The acoustic performance of double-skin facades: A design support tool for architects
NASA Astrophysics Data System (ADS)
Batungbakal, Aireen
This study assesses and validates the influence of measuring sound in the urban environment and the influence of glass facade components in reducing sound transmission to the indoor environment. Noise is among the most frequently reported issues affecting workspaces, and increased awareness of the need to minimize it has led building designers to reconsider the design of building envelopes and their site environment. Outdoor sound conditions, such as traffic noise, challenge designers to accurately estimate the capability of glass facades to achieve an appropriate indoor sound quality. Field tests captured existing sound levels in areas of high commercial development, employment, and traffic activity, reflecting the density of the urban environment and establishing a baseline for sound levels common in urban work areas. Direct sound transmission losses of glass facades, simulated with the sound insulation software INSUL, are used as an informative tool correlating the response of glass facade components to the existing outdoor sound levels of a project site in order to achieve desired indoor sound levels. The study then addresses the gap in validating the acoustic performance of glass facades early in a project's design, from controlled settings such as field testing and simulation through to project completion. Results obtained from the study's facade simulations and facade comparison support the conclusion that acoustic comfort is not limited to a single solution but can be achieved through multiple design options responsive to the environment.
Identifying arbitrary parameter zonation using multiple level set functions
NASA Astrophysics Data System (ADS)
Lu, Zhiming; Vesselinov, Velimir V.; Lei, Hongzhuan
2018-07-01
In this paper, we extended the analytical level set method [1,2] for identifying a piecewise heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on the two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying the interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.
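A highly simplified illustration of the level-set update underlying this kind of approach: the zero contour of phi is advanced with a normal velocity via phi <- phi - dt * V * |grad phi|. In the method above, the velocity is built from permeability contrasts, head sensitivities, and head residuals at each iteration; the sketch below uses a constant velocity purely to show the interface motion.

```python
# Toy 2D level-set evolution: advance the zero contour with a constant normal velocity.
import numpy as np

def evolve_interface(phi, velocity, dt, n_steps):
    for _ in range(n_steps):
        gy, gx = np.gradient(phi)                  # central-difference gradient
        grad_norm = np.sqrt(gx**2 + gy**2) + 1e-12
        phi = phi - dt * velocity * grad_norm      # motion in the normal direction
    return phi

# Start from a signed-distance-like function whose zero contour is a circle of radius 10.
n = 64
y, x = np.mgrid[0:n, 0:n]
phi0 = np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2) - 10.0
phi = evolve_interface(phi0, velocity=0.5, dt=0.5, n_steps=20)
# With a positive velocity the enclosed region (phi < 0) grows.
print((phi0 < 0).sum(), (phi < 0).sum())
```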
Hvitfeldt-Forsberg, Helena; Mazzocato, Pamela; Glaser, Daniel; Keller, Christina; Unbeck, Maria
2017-01-01
Objective: To explore healthcare staff's and managers' perceptions of how and when discrete event simulation modelling can be used as a decision support in improvement efforts. Design: Two focus group discussions were performed. Setting: Two settings were included: a rheumatology department and an orthopaedic section, both situated in Sweden. Participants: Healthcare staff and managers (n=13) from the two settings. Interventions: Two workshops were performed, one at each setting. Workshops were initiated by a short introduction to simulation modelling. Results from the respective simulation model were then presented and discussed in the following focus group discussion. Results: Categories from the content analysis are presented according to the research questions of how and when simulation modelling can assist healthcare improvement. Regarding how, the participants mentioned that simulation modelling could act as a tool for support and a way to visualise problems, potential solutions and their effects. Regarding when, simulation modelling could be used both locally and by management, as well as a pedagogical tool to develop and test innovative ideas and to involve everyone in the improvement work. Conclusions: Its potential as an information and communication tool and as an instrument for pedagogic work within healthcare improvement renders a broader application and value of simulation modelling than previously reported. PMID:28588107
A review of training research and virtual reality simulators for the da Vinci surgical system.
Liu, May; Curet, Myriam
2015-01-01
PHENOMENON: Virtual reality simulators are the subject of several recent studies of skills training for robot-assisted surgery. Yet no consensus exists regarding what a core skill set comprises or how to measure skill performance. Defining a core skill set and relevant metrics would help surgical educators evaluate different simulators. This review draws from published research to propose a core technical skill set for using the da Vinci surgeon console. Publications on three commercial simulators were used to evaluate the simulators' content addressing these skills and associated metrics. An analysis of published research suggests that a core technical skill set for operating the surgeon console includes bimanual wristed manipulation, camera control, master clutching to manage hand position, use of third instrument arm, activating energy sources, appropriate depth perception, and awareness of forces applied by instruments. Validity studies of three commercial virtual reality simulators for robot-assisted surgery suggest that all three have comparable content and metrics. However, none have comprehensive content and metrics for all core skills. INSIGHTS: Virtual reality simulation remains a promising tool to support skill training for robot-assisted surgery, yet existing commercial simulator content is inadequate for performing and assessing a comprehensive basic skill set. The results of this evaluation help identify opportunities and challenges that exist for future developments in virtual reality simulation for robot-assisted surgery. Specifically, the inclusion of educational experts in the development cycle alongside clinical and technological experts is recommended.
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.; Fulakeza, Matthew B.
2013-01-01
Five annual climate cycles (1998-2002) are simulated for continental Africa and adjacent oceans by a regional atmospheric model (RM3). RM3 horizontal grid spacing is 0.44° at 28 vertical levels. Each of 2 simulation ensembles is driven by lateral boundary conditions from one of 2 alternative reanalysis data sets. One simulation downscales National Centers for Environmental Prediction reanalysis 2 (NCPR2) and the other the European Centre for Medium-Range Weather Forecasts Interim reanalysis (ERA-I). NCPR2 data are archived at 2.5° grid spacing, while a recent version of ERA-I provides data at 0.75° spacing. ERA-I-forced simulations are recommended by the Coordinated Regional Downscaling Experiment (CORDEX). Comparisons of the 2 sets of simulations with each other and with observational evidence assess the relative performance of each downscaling system. A third simulation also uses ERA-I forcing, but degraded to the same horizontal resolution as NCPR2. RM3-simulated pentad and monthly mean precipitation data are compared to Tropical Rainfall Measuring Mission (TRMM) data, gridded at 0.5°, and RM3-simulated circulation is compared to both reanalyses. Results suggest that each downscaling system provides advantages and disadvantages relative to the other. The RM3/NCPR2 achieves a more realistic northward advance of summer monsoon rains over West Africa, but RM3/ERA-I creates the more realistic monsoon circulation. Both systems recreate some features of July–September 1999 minus 2002 precipitation differences. Degrading the resolution of ERA-I driving data unrealistically slows the monsoon circulation and considerably diminishes summer rainfall rates over West Africa. The high resolution of ERA-I data, therefore, contributes to the quality of the downscaling, but NCPR2 lateral boundary conditions nevertheless produce better simulations of some features.
Towards an orientation-distribution-based multi-scale approach for remodelling biological tissues.
Menzel, A; Harrysson, M; Ristinmaa, M
2008-10-01
The mechanical behaviour of soft biological tissues is governed by phenomena occurring on different scales of observation. From the computational modelling point of view, a vital aspect consists of the appropriate incorporation of micromechanical effects into macroscopic constitutive equations. In this work, particular emphasis is placed on the simulation of soft fibrous tissues with the orientation of the underlying fibres being determined by distribution functions. A straightforward but convenient Taylor-type homogenisation approach links the micro- or rather meso-level of fibres to the overall macro-level and makes it possible to reflect macroscopically orthotropic response. As a key aspect of this work, evolution equations for the fibre orientations are accounted for so that physiological effects like turnover or rather remodelling are captured. Concerning numerical applications, the derived set of equations can be embedded into a nonlinear finite element context, and first elementary simulations are finally addressed.
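The Taylor-type link between fibre-level and macroscopic response can be illustrated by averaging n x n (the dyadic product of the fibre direction with itself) against a planar orientation density. The sketch below uses a von Mises-shaped density and is illustrative only, not the constitutive model of the paper.

```python
# Orientation averaging: build a macroscopic structure tensor from a planar
# fibre orientation density; its eigenvalues reflect the degree of alignment.
import numpy as np

def structure_tensor(mean_angle=0.0, concentration=4.0, n_quad=720):
    theta = np.linspace(0.0, np.pi, n_quad, endpoint=False)
    dtheta = np.pi / n_quad
    rho = np.exp(concentration * np.cos(2.0 * (theta - mean_angle)))   # pi-periodic density
    rho /= rho.sum() * dtheta                                          # normalise: integral = 1
    n_vec = np.stack([np.cos(theta), np.sin(theta)], axis=1)           # fibre directions
    outer = np.einsum("ti,tj->tij", n_vec, n_vec)                      # n x n per angle
    return (rho[:, None, None] * outer).sum(axis=0) * dtheta

H = structure_tensor(concentration=4.0)
print(H)
print(np.trace(H))   # equals 1 by construction; isotropy gives H = I/2
```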
Nonclassicality of Temporal Correlations.
Brierley, Stephen; Kosowski, Adrian; Markiewicz, Marcin; Paterek, Tomasz; Przysiężna, Anna
2015-09-18
The results of spacelike separated measurements are independent of distant measurement settings, a property one might call two-way no-signaling. In contrast, timelike separated measurements are only one-way no-signaling since the past is independent of the future but not vice versa. For this reason some temporal correlations that are formally identical to nonclassical spatial correlations can still be modeled classically. We propose a new formulation of Bell's theorem for temporal correlations; namely, we define nonclassical temporal correlations as the ones which cannot be simulated by propagating in time the classical information content of a quantum system given by the Holevo bound. We first show that temporal correlations between results of any projective quantum measurements on a qubit can be simulated classically. Then we present a sequence of general measurements on a single m-level quantum system that cannot be explained by propagating in time an m-level classical system and using classical computers with unlimited memory.
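The classical information content invoked above is quantified by the Holevo quantity chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i). The snippet below evaluates it for an arbitrary two-state qubit ensemble, illustrating that chi stays at or below one bit for an m = 2 level system; the ensemble is illustrative only.

```python
# Holevo quantity for a small ensemble of density matrices.
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-(evals * np.log2(evals)).sum())

def holevo_chi(probs, states):
    avg = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(avg) - sum(p * von_neumann_entropy(rho)
                                          for p, rho in zip(probs, states))

# Two pure qubit states |0> and |+> with equal probability.
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])
rho_plus = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])
print(holevo_chi([0.5, 0.5], [rho0, rho_plus]))   # ~0.6 bits, below the 1-bit qubit limit
```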
Efficient high-quality volume rendering of SPH data.
Fraedrich, Roland; Auer, Stefan; Westermann, Rüdiger
2010-01-01
High quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view-volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid allows effectively reducing the amount of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.
NASA Astrophysics Data System (ADS)
Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil
2016-09-01
The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs with n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persistent difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
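The evaluation metric described above, the per-voxel magnitude of the difference between known and predicted displacement fields, and a multi-level Otsu split of CT numbers into n classes can both be sketched briefly. scikit-image's threshold_multiotsu is assumed to be available, and the arrays below are random stand-ins for real image data.

```python
# Deformation-error map and multi-level Otsu segmentation (synthetic stand-in data).
import numpy as np
from skimage.filters import threshold_multiotsu

def deformation_error(known, predicted):
    """known, predicted: arrays of shape (..., 3) holding displacement vectors (mm)."""
    return np.linalg.norm(known - predicted, axis=-1)

rng = np.random.default_rng(0)
known = rng.normal(size=(32, 32, 32, 3))
predicted = known + rng.normal(scale=0.5, size=known.shape)
err = deformation_error(known, predicted)
print(err.mean(), err.max())

# Split a synthetic soft-tissue CT-number image into 3 classes via multi-level Otsu.
ct = rng.normal(loc=40.0, scale=60.0, size=(128, 128))
thresholds = threshold_multiotsu(ct, classes=3)
labels = np.digitize(ct, bins=thresholds)
print(thresholds, np.bincount(labels.ravel()))
```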