A verification library for multibody simulation software
NASA Technical Reports Server (NTRS)
Kim, Sung-Soo; Haug, Edward J.; Frisch, Harold P.
1989-01-01
A multibody dynamics verification library that maintains and manages test and validation data is proposed, based on RRC Robot arm and CASE backhoe validation and on a comparative study of DADS, DISCOS, and CONTOPS, which are existing public-domain and commercial multibody dynamics simulation programs. Using simple representative problems, simulation results from each program are cross-checked, and the validation results are presented. Functionalities of the verification library are defined in order to automate the validation procedure.
Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui
2017-12-01
Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of real cardiovascular flow. Because of the high stakes in the clinical setting, it is critical to quantify the effect of these assumptions on CFD simulation results. However, existing CFD validation approaches do not quantify the error in simulation results that arises from the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out the other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software Star-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the Star-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
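The core idea of the abstract above — parsing independent error sources out of the raw simulation-versus-experiment discrepancy to isolate model error — can be sketched as follows. This is a simplified illustration under assumed inputs (the function name, the point-wise decomposition, and the example velocity values are all hypothetical, not the paper's exact formulation):

```python
import numpy as np

def model_error_percent(u_cfd, u_piv, err_num=0.0, err_exp=0.0):
    """Estimate CFD model error along a validation line.

    u_cfd, u_piv : velocity magnitudes sampled at matching points.
    err_num, err_exp : estimated numerical and experimental error
    magnitudes (same units), parsed out of the raw discrepancy.
    Simplified point-wise decomposition for illustration only.
    """
    u_cfd = np.asarray(u_cfd, dtype=float)
    u_piv = np.asarray(u_piv, dtype=float)
    raw = np.abs(u_cfd - u_piv)                        # total discrepancy
    model = np.maximum(raw - err_num - err_exp, 0.0)   # residual model error
    return 100.0 * model / np.abs(u_piv)               # percent of measurement

errors = model_error_percent([1.05, 0.48], [1.00, 0.50],
                             err_num=0.01, err_exp=0.01)
print(errors.round(2))
```

Averaging such point-wise errors along the intersecting validation line would yield a single accuracy figure of the kind reported (5.63 ± 5.49%).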
NASA Technical Reports Server (NTRS)
Carr, Peter C.; Mckissick, Burnell T.
1988-01-01
A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.
Validation of the mean radiant temperature simulated by the RayMan software in urban environments.
Lee, Hyunjung; Mayer, Helmut
2016-11-01
The RayMan software is applied worldwide in investigations of different issues in human biometeorology. However, only the simulated mean radiant temperature (Tmrt) has been validated so far, in a few case studies. These are based on Tmrt values that were experimentally determined in urban environments by use of a globe thermometer or by applying the six-directional method. This study analyses previous Tmrt validations in a comparative manner. Their results are extended by a recent validation of Tmrt in an urban micro-environment in Freiburg (southwest Germany), which can be regarded as relatively heterogeneous due to different shading intensities by tree crowns. In addition, a validation of the physiologically equivalent temperature (PET) simulated by RayMan is conducted for the first time. The validations are based on experimentally determined Tmrt and PET values, which were calculated from meteorological variables measured in the daytime of a clear-sky summer day. In total, the validation results show that RayMan is capable of simulating Tmrt satisfactorily under relatively homogeneous site conditions. However, the inaccuracy of simulated Tmrt increases with lower sun elevation and growing heterogeneity of the simulation site. As Tmrt is the meteorological variable that mostly governs PET in the daytime of clear-sky summer days, the accuracy of simulated Tmrt is mainly responsible for the accuracy of simulated PET. The Tmrt validations result in some recommendations, which concern an update of the physical principles applied in the RayMan software to simulate the short- and long-wave radiant flux densities, especially those from vertical building walls and tree crowns.
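For context, the six-directional method mentioned above derives Tmrt from shortwave (K_i) and longwave (L_i) radiant flux densities measured in six perpendicular directions, weighted by direction factors W_i. A commonly used formulation (shown here as a sketch in the style of the VDI guideline, not necessarily RayMan's exact implementation) is:

```latex
S_{\mathrm{str}} = \sum_{i=1}^{6} W_i \left( a_k K_i + a_l L_i \right),
\qquad
T_{\mathrm{mrt}} = \sqrt[4]{\frac{S_{\mathrm{str}}}{a_l\,\sigma}} \;-\; 273.15
```

where a_k ≈ 0.7 is the shortwave absorption coefficient of the human body, a_l ≈ 0.97 its longwave absorption coefficient (emissivity), σ the Stefan-Boltzmann constant, and Tmrt results in °C.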
The internal validity of arthroscopic simulators and their effectiveness in arthroscopic education.
Slade Shantz, Jesse Alan; Leiter, Jeff R S; Gottschalk, Tania; MacDonald, Peter Benjamin
2014-01-01
The purpose of this systematic review was to identify standard procedures for the validation of arthroscopic simulators and to determine whether simulators improve the surgical skills of users. Arthroscopic simulator validation studies and randomized trials assessing the effectiveness of arthroscopic simulators in education were identified from online databases, as well as grey literature and reference lists. Only validation studies and randomized trials were included for review. Study heterogeneity was calculated and, where appropriate, study results were combined employing a random-effects model. Four hundred and thirteen studies were reviewed. Thirteen studies met the inclusion criteria assessing the construct validity of simulators. A pooled analysis of internal validation studies determined that simulators could discriminate between novices and experts, but not between novice and intermediate trainees, on time to completion of a simulated task. Only one study assessed the utility of a knee simulator in training arthroscopic skills directly; it demonstrated that the skill level of simulator-trained residents was greater than that of non-simulator-trained residents. Excessive heterogeneity exists in the literature to determine the internal and transfer validity of currently available arthroscopic simulators. Evidence suggests that simulators can discriminate between novice and expert users, but discrimination between novice and intermediate trainees should be paramount in surgical education. International standards for the assessment of arthroscopic simulator validity should be developed to increase the use and effectiveness of simulators in orthopedic surgery.
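The random-effects pooling mentioned in the review above is commonly done with the DerSimonian-Laird estimator, which adds a between-study variance term to each study's weight. A minimal sketch (assuming this estimator; the review does not specify which random-effects model was used):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes under a random-effects model.

    effects, variances : study effect estimates and their
    within-study variances. Returns the pooled effect and the
    between-study variance tau^2 (DerSimonian-Laird estimator).
    """
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)         # Cochran's Q heterogeneity
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_star = 1.0 / (v + tau2)                  # random-effects weights
    return np.sum(w_star * y) / np.sum(w_star), tau2

pooled, tau2 = dersimonian_laird([0.5, 0.5], [0.1, 0.1])
print(pooled, tau2)
```

With homogeneous studies, as in the toy call above, tau^2 collapses to zero and the pooled estimate equals the fixed-effect mean.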
Simulation validation of the XV-15 tilt-rotor research aircraft
NASA Technical Reports Server (NTRS)
Ferguson, S. W.; Hanson, G. D.; Churchill, G. B.
1984-01-01
The results of a simulation validation program of the XV-15 tilt-rotor research aircraft are detailed, covering such simulation aspects as the mathematical model, visual system, motion system, cab aural system, cab control loader system, pilot perceptual fidelity, and generic tilt rotor applications. Simulation validation was performed for the hover, low-speed, and sideward flight modes, with consideration of the in-ground rotor effect. Several deficiencies of the mathematical model and the simulation systems were identified in the course of the simulation validation project, and some were corrected. It is noted that NASA's Vertical Motion Simulator used in the program is an excellent tool for tilt-rotor and rotorcraft design, development, and pilot training.
Validation of a Monte Carlo simulation of the Inveon PET scanner using GATE
NASA Astrophysics Data System (ADS)
Lu, Lijun; Zhang, Houjin; Bian, Zhaoying; Ma, Jianhua; Feng, Qiangjin; Chen, Wufan
2016-08-01
The purpose of this study is to validate the application of the GATE (Geant4 Application for Tomographic Emission) Monte Carlo simulation toolkit for modeling the performance characteristics of the Siemens Inveon small-animal PET system. The simulation results were validated against experimental/published data in accordance with the NEMA NU-4 2008 protocol for standardized evaluation of spatial resolution, sensitivity, scatter fraction (SF) and noise equivalent counting rate (NECR) of a preclinical PET system. An agreement of less than 18% was obtained between the radial, tangential and axial spatial resolutions of the simulated and experimental results. The simulated peak NECR of the mouse-size phantom agreed with the experimental result, while for the rat-size phantom the simulated value was higher than the experimental result. The simulated and experimental SFs of the mouse- and rat-size phantoms both reached an agreement of less than 2%. These results demonstrate the feasibility of our GATE model to accurately simulate, within certain limits, all major performance characteristics of the Inveon PET system.
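The two count-rate figures of merit compared above have standard textbook definitions, sketched below. (NEMA NU-4 prescribes how the true, scattered, and random rates are estimated from phantom measurements; the functions here only encode the defining formulas.)

```python
def scatter_fraction(trues, scatters):
    """NEMA scatter fraction: scattered / total coincidences."""
    return scatters / (trues + scatters)

def necr(trues, scatters, randoms):
    """Noise equivalent counting rate, NECR = T^2 / (T + S + R),
    with trues T, scatters S, and randoms R as count rates."""
    return trues ** 2 / (trues + scatters + randoms)

print(scatter_fraction(90.0, 10.0))  # 0.1
print(necr(90.0, 10.0, 0.0))         # 81.0
```

Comparing simulated and measured curves of NECR versus activity, and their peaks, is the usual way the "peak NECR" agreement quoted above is assessed.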
Simulation verification techniques study. Subsystem simulation validation techniques
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.
1974-01-01
Techniques for validation of software modules which simulate spacecraft onboard systems are discussed. An overview of the simulation software hierarchy for a shuttle mission simulator is provided. A set of guidelines for the identification of subsystem/module performance parameters and critical performance parameters is presented. Various sources of reference data to serve as standards of performance for simulation validation are identified. Environment, crew station, vehicle configuration, and vehicle dynamics simulation software are briefly discussed from the point of view of their interfaces with subsystem simulation modules. A detailed presentation of results in the area of vehicle subsystem simulation modules is included. References, conclusions, and recommendations are also given.
NASA Astrophysics Data System (ADS)
Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Sutherland, D. A.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.
2014-10-01
The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with 3D extended MHD simulations using the NIMROD, HiFi, and PSI-TET codes. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), HBT-EP (Columbia), HIT-SI (U Wash-UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). The PSI-Center is exploring the application of validation metrics between experimental data and simulation results. Biorthogonal decomposition (BOD) is used to compare experiments with simulations. BOD separates data sets into spatial and temporal structures, giving greater weight to dominant structures. Several BOD metrics are being formulated with the goal of quantitative validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status, will be presented.
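Mathematically, the biorthogonal decomposition described above is the singular value decomposition of a space-time data matrix: the left singular vectors are the spatial structures ("topos"), the right singular vectors the temporal structures ("chronos"), and the singular values weight the dominant structures. A minimal sketch (the synthetic rank-1 field below is illustrative, not experimental data):

```python
import numpy as np

def biorthogonal_decomposition(data):
    """BOD of a space-time data matrix of shape (n_space, n_time).

    Returns spatial modes (topos), temporal modes (chronos), and
    the singular values weighting each structure.
    """
    topos, weights, chronos = np.linalg.svd(data, full_matrices=False)
    return topos, chronos, weights

# Synthetic rank-1 field: one spatial structure times one waveform.
t = np.linspace(0.0, 1.0, 50)
space = np.array([1.0, 2.0, 3.0])
data = np.outer(space, np.sin(2 * np.pi * t))
topos, chronos, w = biorthogonal_decomposition(data)
print(int(np.sum(w > 1e-10)))  # one dominant structure recovered
```

Comparing the leading topos/chronos pairs (and their weights) between experiment and simulation is one way such BOD-based validation metrics can be formed.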
Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner
NASA Astrophysics Data System (ADS)
Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.
2015-02-01
Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET lacked the ability to simulate block detectors until its most recent release. Our goal is to validate the new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rate, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulations was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.
Helicopter simulation validation using flight data
NASA Technical Reports Server (NTRS)
Key, D. L.; Hansen, R. S.; Cleveland, W. B.; Abbott, W. Y.
1982-01-01
A joint NASA/Army effort to perform a systematic ground-based piloted simulation validation assessment is described. The best available mathematical model for the subject helicopter (UH-60A Black Hawk) was programmed for real-time operation. Flight data were obtained to validate the math model, and to develop models for the pilot control strategy while performing mission-type tasks. The validated math model is to be combined with motion and visual systems to perform ground based simulation. Comparisons of the control strategy obtained in flight with that obtained on the simulator are to be used as the basis for assessing the fidelity of the results obtained in the simulator.
NASA Astrophysics Data System (ADS)
Zhou, Abel; White, Graeme L.; Davidson, Rob
2018-02-01
Anti-scatter grids are commonly used in x-ray imaging systems to reduce the scatter radiation reaching the image receptor. Anti-scatter grid performance can be simulated and validated through use of Monte Carlo (MC) methods. Our recently reported work modified existing MC codes, resulting in improved performance when simulating x-ray imaging. The aim of this work is to validate the transmission of x-ray photons through grids computed by the recently reported new MC codes against experimental results and results previously reported in other literature. The results of this work show that the scatter-to-primary ratio (SPR) and the transmissions of primary (Tp), scatter (Ts), and total (Tt) radiation determined using this new MC code system agree strongly with the experimental results and the results reported in the literature. Tp, Ts, Tt, and SPR determined in this new MC simulation code system are valid. These results also show that the interference effect on Rayleigh scattering should not be neglected in the evaluation of both mammographic and general grids. Our new MC simulation code system has been shown to be valid and can be used for analysing and evaluating the designs of grids.
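The grid metrics named above follow standard definitions: each transmission is the ratio of the radiation reaching the receptor with the grid to that without it, and SPR is the ratio of scattered to primary radiation behind the grid. A sketch with illustrative photon counts (the variable names and numbers are assumptions, not data from the paper):

```python
def grid_metrics(p_grid, s_grid, p_nogrid, s_nogrid):
    """Anti-scatter grid transmission metrics.

    p_*, s_* : primary and scatter photon counts (or fluences)
    at the receptor with and without the grid in place.
    """
    tp = p_grid / p_nogrid                           # primary transmission Tp
    ts = s_grid / s_nogrid                           # scatter transmission Ts
    tt = (p_grid + s_grid) / (p_nogrid + s_nogrid)   # total transmission Tt
    spr = s_grid / p_grid                            # scatter-to-primary ratio
    return tp, ts, tt, spr

tp, ts, tt, spr = grid_metrics(70.0, 10.0, 100.0, 100.0)
print(tp, ts, tt, spr)
```

In an MC validation, these ratios computed from simulated photon tallies are compared against the same ratios measured experimentally.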
Validation of virtual-reality-based simulations for endoscopic sinus surgery.
Dharmawardana, N; Ruthenbeck, G; Woods, C; Elmiyeh, B; Diment, L; Ooi, E H; Reynolds, K; Carney, A S
2015-12-01
Virtual reality (VR) simulators provide an alternative to real patients for practicing surgical skills but require validation to ensure accuracy. Here, we validate the use of a virtual reality sinus surgery simulator with haptic feedback for training in Otorhinolaryngology - Head & Neck Surgery (OHNS). Participants were recruited from final-year medical students, interns, resident medical officers (RMOs), OHNS registrars and consultants. All participants completed an online questionnaire after performing four separate simulation tasks. These were then used to assess face, content and construct validity. ANOVA with post hoc correlation was used for statistical analysis. The following groups were compared: (i) medical students/interns, (ii) RMOs, (iii) registrars and (iv) consultants. Face validity results showed a statistically significant (P < 0.05) difference between the consultant group and the others, while there was no significant difference between the medical student/intern and RMO groups. Variability within groups was not significant. Content validity results based on consultant scoring and comments indicated that the simulations need further development in several areas to be effective for registrar-level teaching. However, students, interns and RMOs indicated that the simulations provide a useful tool for learning OHNS-related anatomy and as an introduction to ENT-specific procedures. The VR simulations have been validated for teaching sinus anatomy and nasendoscopy to medical students, interns and RMOs. However, they require further development before they can be regarded as a valid tool for more advanced surgical training. © 2015 John Wiley & Sons Ltd.
DOT National Transportation Integrated Search
2006-01-01
A previous study developed a procedure for microscopic simulation model calibration and validation and evaluated the procedure via two relatively simple case studies using three microscopic simulation models. Results showed that default parameters we...
Can We Study Autonomous Driving Comfort in Moving-Base Driving Simulators? A Validation Study.
Bellem, Hanna; Klüver, Malte; Schrauf, Michael; Schöner, Hans-Peter; Hecht, Heiko; Krems, Josef F
2017-05-01
To lay the basis for studying autonomous driving comfort using driving simulators, we assessed the behavioral validity of two moving-base simulator configurations by contrasting them with a test-track setting. With increasing level of automation, driving comfort becomes increasingly important. Simulators provide a safe environment to study perceived comfort in autonomous driving. To date, however, no studies have been conducted on comfort in autonomous driving to determine the extent to which results from simulator studies can be transferred to on-road driving conditions. Participants (N = 72) experienced six differently parameterized lane-change and deceleration maneuvers and subsequently rated the comfort of each scenario. One group of participants experienced the maneuvers in a test-track setting, whereas two other groups experienced them in one of two moving-base simulator configurations. We could demonstrate relative and absolute validity for one of the two simulator configurations. Subsequent analyses revealed that the validity of the simulator depends strongly on the parameterization of the motion system. Moving-base simulation can be a useful research tool to study driving comfort in autonomous vehicles. However, our results point to a preference for subunity scaling factors for both lateral and longitudinal motion cues, which might be explained by an underestimation of speed in virtual environments. In line with previous studies, we recommend lateral- and longitudinal-motion scaling factors of approximately 50% to 60% in order to obtain valid results for both active and passive driving tasks.
PSI-Center Simulations of Validation Platform Experiments
NASA Astrophysics Data System (ADS)
Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.
2013-10-01
The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with extended MHD simulations. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), FRX-L (Los Alamos National Laboratory), HIT-SI (U Wash - UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), PHD/ELF (UW/MSNW), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). Modifications have been made to the NIMROD, HiFi, and PSI-Tet codes to specifically model these experiments, including mesh generation/refinement, non-local closures, appropriate boundary conditions (external fields, insulating BCs, etc.), and kinetic and neutral particle interactions. The PSI-Center is exploring application of validation metrics between experimental data and simulations results. Biorthogonal decomposition is proving to be a powerful method to compare global temporal and spatial structures for validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status will be presented.
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Sowers, T Shane; Liu, Yuan; Owen, A. Karl; Guo, Ten-Huei
2015-01-01
The National Aeronautics and Space Administration (NASA) has developed independent airframe and engine models that have been integrated into a single real-time aircraft simulation for piloted evaluation of propulsion control algorithms. In order to have confidence in the results of these evaluations, the integrated simulation must be validated to demonstrate that its behavior is realistic and that it meets the appropriate Federal Aviation Administration (FAA) certification requirements for aircraft. The paper describes the test procedures and results, demonstrating that the integrated simulation generally meets the FAA requirements and is thus a valid testbed for evaluation of propulsion control modes.
NASA Astrophysics Data System (ADS)
Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank
2016-10-01
Thermoforming of continuously fiber-reinforced thermoplastics (CFRTP) is ideally suited to thin-walled and complex-shaped products. By means of forming simulation, an initial validation of the producibility of a specific geometry, an optimization of the forming process, and the prediction of fiber reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore, a method is presented which enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. As a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus/Explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by the proposed validation method.
Robustness and Uncertainty: Applications for Policy in Climate and Hydrological Modeling
NASA Astrophysics Data System (ADS)
Fields, A. L., III
2015-12-01
Policymakers must often decide how to proceed when presented with conflicting simulation data from hydrological, climatological, and geological models. While laboratory sciences often appeal to the reproducibility of results to argue for the validity of their conclusions, simulations cannot use this strategy for a number of pragmatic and methodological reasons. However, robustness of predictions and causal structures can serve the same function for simulations as reproducibility does for laboratory experiments and field observations in either adjudicating between conflicting results or showing that there is insufficient justification to externally validate the results. Additionally, an interpretation of the argument from robustness is presented that involves appealing to the convergence of many well-built and diverse models rather than the more common version which involves appealing to the probability that one of a set of models is likely to be true. This interpretation strengthens the case for taking robustness as an additional requirement for the validation of simulation results and ultimately supports the idea that computer simulations can provide information about the world that is just as trustworthy as data from more traditional laboratory studies and field observations. Understanding the importance of robust results for the validation of simulation data is especially important for policymakers making decisions on the basis of potentially conflicting models. Applications will span climate, hydrological, and hydroclimatological models.
Combat Simulation Using Breach Computer Language
1979-09-01
A combat simulation and weapon system analysis computer language. Two types of models were constructed: a stochastic duel and a dynamic engagement model. The ... duel model validates the BREACH approach by comparing results with mathematical solutions. The dynamic model shows the capability of the BREACH ... Contents: BREACH (Background; The Language); Static Duel (Background and Methodology; Validation; Results); Tank Duel Simulation; Dynamic Assault Model.
Software aspects of the Geant4 validation repository
NASA Astrophysics Data System (ADS)
Dotti, Andrea; Wenzel, Hans; Elvira, Daniel; Genser, Krzysztof; Yarba, Julia; Carminati, Federico; Folger, Gunter; Konstantinov, Dmitri; Pokorski, Witold; Ribon, Alberto
2017-10-01
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER is easily accessible via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
Software Aspects of the Geant4 Validation Repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dotti, Andrea; Wenzel, Hans; Elvira, Daniel
2016-01-01
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER is easily accessible via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
DOT National Transportation Integrated Search
2008-01-01
Computer simulations are often used in aviation studies. These simulation tools may require complex, high-fidelity aircraft models. Since many of the flight models used are third-party developed products, independent validation is desired prior to im...
Validation Of The Airspace Concept Evaluation System Using Real World Data
NASA Technical Reports Server (NTRS)
Zelinski, Shannon
2005-01-01
This paper discusses the process of validating the Airspace Concept Evaluation System (ACES) using real-world historical flight operational data. ACES inputs are generated from select real-world data and processed to create a realistic reproduction of a single day of operations within the National Airspace System (NAS). ACES outputs are then compared to real-world operational metrics and delay statistics for the reproduced day. Preliminary results indicate that ACES produces delays and airport operational metrics similar to the real world, with minor variations of delay by phase of flight. ACES is a nation-wide fast-time simulation tool developed at NASA Ames Research Center. ACES models and simulates the NAS using interacting agents representing center control, terminal flow management, airports, individual flights, and other NAS elements. These agents pass messages to one another similar to real-world communications. This distributed agent-based system is designed to emulate the highly unpredictable nature of the NAS, making it a suitable tool to evaluate current and envisioned airspace concepts. To ensure that ACES produces the most realistic results, the system must be validated. There is no way to validate future-concept scenarios using real-world historical data, but current-day scenario validations increase confidence in the validity of future scenario results. Each operational day has unique weather and traffic demand schedules; the more a simulation utilizes the unique characteristics of a specific day, the more realistic the results should be. ACES is able to simulate the full-scale demand traffic necessary to perform a validation using real-world data. Through direct comparison with the real world, models may continue to be improved, and unusual trends and biases may be filtered out of the system or used to normalize the results of future concept simulations.
Simulation verification techniques study
NASA Technical Reports Server (NTRS)
Schoonmaker, P. B.; Wenglinski, T. H.
1975-01-01
Results are summarized of the simulation verification techniques study, which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), a survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data for validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.
Simulation-based assessment in anesthesiology: requirements for practical implementation.
Boulet, John R; Murray, David J
2010-04-01
Simulations have taken a central role in the education and assessment of medical students, residents, and practicing physicians. The introduction of simulation-based assessments in anesthesiology, especially those used to establish various competencies, has demanded fairly rigorous studies concerning the psychometric properties of the scores. Most important, major efforts have been directed at identifying, and addressing, potential threats to the validity of simulation-based assessment scores. As a result, organizations that wish to incorporate simulation-based assessments into their evaluation practices can access information regarding effective test development practices, the selection of appropriate metrics, the minimization of measurement errors, and test score validation processes. The purpose of this article is to provide a broad overview of the use of simulation for measuring physician skills and competencies. For simulations used in anesthesiology, studies that describe advances in scenario development, the development of scoring rubrics, and the validation of assessment results are synthesized. Based on the summary of relevant research, psychometric requirements for practical implementation of simulation-based assessments in anesthesiology are forwarded. As technology expands, and simulation-based education and evaluation takes on a larger role in patient safety initiatives, the groundbreaking work conducted to date can serve as a model for those individuals and organizations that are responsible for developing, scoring, or validating simulation-based education and assessment programs in anesthesiology.
Quantitative validation of carbon-fiber laminate low velocity impact simulations
English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.
2015-09-26
Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
I-15 San Diego, California, model validation and calibration report.
DOT National Transportation Integrated Search
2010-02-01
The Integrated Corridor Management (ICM) initiative requires the calibration and validation of simulation models used in the Analysis, Modeling, and Simulation of Pioneer Site proposed integrated corridors. This report summarizes the results and proc...
Numerical simulation of cavitating flows in shipbuilding
NASA Astrophysics Data System (ADS)
Bagaev, D.; Yegorov, S.; Lobachev, M.; Rudnichenko, A.; Taranov, A.
2018-05-01
The paper presents validation of numerical simulations of cavitating flows around different marine objects carried out at the Krylov State Research Centre (KSRC). Preliminary validation was done with reference to international test objects. The main part of the paper contains results of solving practical problems of ship propulsion design. The validation of numerical simulations by comparison with experimental data shows a good accuracy of the supercomputer technologies existing at Krylov State Research Centre for both hydrodynamic and cavitation characteristics prediction.
McDonald, Richard R.; Nelson, Jonathan M.; Fosness, Ryan L.; Nelson, Peter O.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan
2016-01-01
Two- and three-dimensional morphodynamic simulations are becoming common in studies of channel form and process. The performance of these simulations is often validated against measurements from laboratory studies. Collecting channel change information in natural settings for model validation is difficult because it can be expensive, and under most channel-forming flows the resulting channel change is generally small. Several channel restoration projects on the Kootenai River, ID, designed in part to armor large meanders with several large spurs constructed of wooden piles, have resulted in rapid bed elevation change following construction. Monitoring of these restoration projects includes post-restoration (as-built) Digital Elevation Models (DEMs) as well as additional channel surveys following high channel-forming flows post-construction. The resulting sequence of measured bathymetry provides excellent validation data for morphodynamic simulations at the reach scale of a real river. In this paper we test the performance of a quasi-three-dimensional morphodynamic simulation against the measured elevation change. The resulting simulations predict the pattern of channel change reasonably well, but many of the details, such as the maximum scour, are underpredicted.
DoSSiER: Database of scientific simulation and experimental results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Hans; Yarba, Julia; Genser, Krzystof
2016-08-01
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M
2013-02-01
The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. 
In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here, all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing the accuracy and bias of new methods to results from genomic best linear unbiased prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals.
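The cross-validation and accuracy reporting described above can be sketched in a few lines. In the sketch below, ridge regression stands in for GBLUP-style prediction, and the genotype matrix, QTL count, penalty, and fold count are all illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy genotype matrix (200 individuals x 500 SNPs) and a trait controlled
# by 20 QTL -- purely illustrative numbers.
n, m, n_qtl = 200, 500, 20
X = rng.binomial(2, 0.5, size=(n, m)).astype(float)
beta = np.zeros(m)
beta[rng.choice(m, n_qtl, replace=False)] = rng.normal(size=n_qtl)
y = X @ beta + rng.normal(scale=2.0, size=n)

def ridge_predict(X_ref, y_ref, X_val, lam=10.0):
    """Ridge regression as a simple stand-in for GBLUP-style prediction."""
    p = X_ref.shape[1]
    b = np.linalg.solve(X_ref.T @ X_ref + lam * np.eye(p), X_ref.T @ y_ref)
    return X_val @ b

# 5-fold cross-validation: accuracy is reported as the correlation between
# predicted and observed phenotypes in each validation fold.
folds = np.array_split(rng.permutation(n), 5)
accs = []
for fold in folds:
    mask = np.ones(n, bool)
    mask[fold] = False
    y_hat = ridge_predict(X[mask], y[mask], X[~mask])
    accs.append(np.corrcoef(y_hat, y[~mask])[0, 1])
print(f"mean cross-validation accuracy: {np.mean(accs):.2f}")
```

In practice the reference/validation split would also be chosen with relatedness between the two sets in mind, as the abstract notes.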
Development and Validation of a Mobile Device-based External Ventricular Drain Simulator.
Morone, Peter J; Bekelis, Kimon; Root, Brandon K; Singer, Robert J
2017-10-01
Multiple external ventricular drain (EVD) simulators have been created, yet their cost, bulky size, and nonreusable components limit their accessibility to residency programs. To create and validate an animated EVD simulator that is accessible on a mobile device. We developed a mobile-based EVD simulator that is compatible with iOS (Apple Inc., Cupertino, California) and Android-based devices (Google, Mountain View, California) and can be downloaded from the Apple App and Google Play Store. Our simulator consists of a learn mode, which teaches users the procedure, and a test mode, which assesses users' procedural knowledge. Twenty-eight participants, who were divided into expert and novice categories, completed the simulator in test mode and answered a post-module survey, which was graded using a 5-point Likert scale, with 5 representing the highest score. Using the survey results, we assessed the module's face and content validity, whereas construct validity was evaluated by comparing the expert and novice test scores. Participants rated individual survey questions pertaining to face and content validity a median score of 4 out of 5. When comparing test scores, generated by the participants completing the test mode, the experts scored higher than the novices (mean, 71.5; 95% confidence interval, 69.2 to 73.8 vs mean, 48; 95% confidence interval, 44.2 to 51.6; P < .001). We created a mobile-based EVD simulator that is inexpensive, reusable, and accessible. Our results demonstrate that this simulator is face, content, and construct valid. Copyright © 2017 by the Congress of Neurological Surgeons
Ribeiro de Oliveira, Marcelo Magaldi; Nicolato, Arthur; Santos, Marcilea; Godinho, Joao Victor; Brito, Rafael; Alvarenga, Alexandre; Martins, Ana Luiza Valle; Prosdocimi, André; Trivelato, Felipe Padovani; Sabbagh, Abdulrahman J; Reis, Augusto Barbosa; Maestro, Rolando Del
2016-05-01
OBJECT The development of neurointerventional treatments of central nervous system disorders has resulted in the need for adequate training environments for novice interventionalists. Virtual simulators offer anatomical definition but lack adequate tactile feedback. Animal models, which provide more lifelike training, require an appropriate infrastructure base. The authors describe a training model for neurointerventional procedures using the human placenta (HP), which affords haptic training with significantly fewer resource requirements, and discuss its validation. METHODS Twelve HPs were prepared for simulated endovascular procedures. Training exercises performed by interventional neuroradiologists and novice fellows were placental angiography, stent placement, aneurysm coiling, and intravascular liquid embolic agent injection. RESULTS The endovascular training exercises proposed can be easily reproduced in the HP. Face, content, and construct validity were assessed by 6 neurointerventional radiologists and 6 novice fellows in interventional radiology. CONCLUSIONS The use of HP provides an inexpensive training model for the training of neurointerventionalists. Preliminary validation results show that this simulation model has face and content validity and has demonstrated construct validity for the interventions assessed in this study.
Assessing Procedural Competence: Validity Considerations.
Pugh, Debra M; Wood, Timothy J; Boulet, John R
2015-10-01
Simulation-based medical education (SBME) offers opportunities for trainees to learn how to perform procedures and to be assessed in a safe environment. However, SBME research studies often lack robust evidence to support the validity of the interpretation of the results obtained from tools used to assess trainees' skills. The purpose of this paper is to describe how a validity framework can be applied when reporting and interpreting the results of a simulation-based assessment of skills related to performing procedures. The authors discuss various sources of validity evidence because they relate to SBME. A case study is presented.
NASA Technical Reports Server (NTRS)
Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew
2014-01-01
The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional-atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of simulations in September. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. The 64 combinations combine WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in a second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40-0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year while a parallel-benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations, but less realistic spatiotemporal variability. The largest favorable impact on WRF-vorticity validation is achieved by selecting the Grell-Devenyi cumulus convection scheme, resulting in higher correlations against reanalysis than simulations using the Kain-Fritsch convection. Other parameterizations have less-obvious impact, although WRF configurations incorporating one surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.
Sugand, Kapil; Wescott, Robert A; Carrington, Richard; Hart, Alister; Van Duren, Bernard H
2018-05-10
Background and purpose - Simulation is an adjunct to surgical education. However, nothing can accurately simulate fluoroscopic procedures in orthopedic trauma. Current options for training with fluoroscopy are either intraoperative, which risks radiation, or use of expensive and unrealistic virtual reality simulators. We introduce FluoroSim, an inexpensive digital fluoroscopy simulator without the need for radiation. Patients and methods - This was a multicenter study with 26 surgeons in which each surgeon completed 1 attempt at inserting a guide-wire into a femoral dry bone using surgical equipment and FluoroSim. Five objective performance metrics were recorded in real-time to assess construct validity. The surgeons were categorized based on the number of dynamic hip screws (DHS) performed: novices (< 10), intermediates (10-39) and experts (≥ 40). A 7-point Likert scale questionnaire assessed the face and content validity of FluoroSim. Results - Construct validity was present for 2 clinically validated metrics in DHS surgery. Experts and intermediates statistically significantly outperformed novices for tip-apex distance and for cut-out rate. Novices took the least number of radiographs. Face and content validity were also observed. Interpretation - FluoroSim discriminated between novice and intermediate or expert surgeons based on tip-apex distance and cut-out rate while demonstrating face and content validity. FluoroSim provides a useful adjunct to orthopedic training. Our findings concur with results from studies using other simulation modalities. FluoroSim can be implemented for education easily and cheaply away from theater in a safe and controlled environment.
Comparison of Nonlinear Random Response Using Equivalent Linearization and Numerical Simulation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2000-01-01
A recently developed finite-element-based equivalent linearization approach for the analysis of random vibrations of geometrically nonlinear multiple degree-of-freedom structures is validated. The validation is based on comparisons with results from a finite element based numerical simulation analysis using a numerical integration technique in physical coordinates. In particular, results for the case of a clamped-clamped beam are considered for an extensive load range to establish the limits of validity of the equivalent linearization approach.
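For a single-degree-of-freedom analogue of the problem above, equivalent linearization reduces to a scalar fixed-point iteration. The sketch below applies the technique generically to a Duffing oscillator under white noise with made-up parameters; it illustrates the idea, not the paper's finite-element formulation:

```python
import numpy as np

# Duffing oscillator under white noise of two-sided spectral density S0:
#   x'' + 2*zeta*w0*x' + w0^2*(x + eps*x^3) = w(t)
# Equivalent linearization (Gaussian closure) replaces the cubic term with
# an equivalent linear stiffness we2 = w0^2*(1 + 3*eps*sigma2); the linear
# SDOF response variance sigma2 = pi*S0 / (2*zeta*w0*we2) then defines a
# fixed-point problem in sigma2. All parameter values are illustrative.
w0, zeta, eps, S0 = 2 * np.pi, 0.02, 0.5, 1e-3

sigma2 = np.pi * S0 / (2 * zeta * w0 * w0**2)   # linear (eps = 0) start
for _ in range(100):
    we2 = w0**2 * (1 + 3 * eps * sigma2)        # equivalent linear stiffness
    sigma2_new = np.pi * S0 / (2 * zeta * w0 * we2)
    if abs(sigma2_new - sigma2) < 1e-14:
        break
    sigma2 = sigma2_new

print(f"linearized RMS response: {np.sqrt(sigma2):.4e}")
```

The hardening cubic term raises the equivalent stiffness, so the converged variance falls below the purely linear value; a numerical-integration simulation of the nonlinear equation would be the reference for judging how far this approximation can be trusted, which is the comparison the paper carries out for the clamped-clamped beam.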
Benchmark tests for a Formula SAE Student car prototyping
NASA Astrophysics Data System (ADS)
Mariasiu, Florin
2011-12-01
Aerodynamic characteristics of a vehicle are important elements in its design and construction. A low drag coefficient brings significant fuel savings and increased engine power efficiency. When designing and developing vehicles, dedicated CFD (Computational Fluid Dynamics) software packages are used to determine their aerodynamic characteristics through computer simulation. However, the results obtained by this faster and cheaper method are validated by experiments in wind tunnels, which are expensive and require complex testing equipment. Therefore, the emergence and development of new low-cost testing methods to validate CFD simulation results would bring great economic benefits to the vehicle prototyping process. This paper presents the initial development of a Formula SAE Student race-car prototype using CFD simulation and also presents a measurement system based on low-cost sensors through which the CFD simulation results were experimentally validated. The CFD software package used for simulation was SolidWorks with the FloXpress add-on, and the experimental measurement system was built using four FlexiForce piezoresistive force sensors.
Xia, Zeyang; Chen, Jie
2014-01-01
Objectives To develop an artificial tooth–periodontal ligament (PDL)–bone complex (ATPBC) that simulates clinical crown displacement. Material and Methods An ATPBC was created. It had a socket hosting a tooth with a thin layer of silicone mixture in between for simulating the PDL. The complex was attached to a device that allows applying a controlled force to the crown and measuring the resulting crown displacement. Crown displacements were compared to previously published data for validation. Results The ATPBC that had a PDL made of two types of silicones, 50% gasket sealant No. 2 and 50% RTV 587 silicone, with a thickness of 0.3 mm, simulated the PDL well. The mechanical behaviors (1) force-displacement relationship, (2) stress relaxation, (3) creep, and (4) hysteresis were validated by the published results. Conclusion The ATPBC simulated the crown displacement behavior reported from biological studies well. PMID:22970752
Intercepting real and simulated falling objects: what is the difference?
Baurès, Robin; Benguigui, Nicolas; Amorim, Michel-Ange; Hecht, Heiko
2009-10-30
The use of virtual reality is nowadays common in many studies in the field of human perception and movement control, particularly in interceptive actions. However, the ecological validity of the simulation is often taken for granted without having been formally established. If participants were to perceive the real situation and its virtual equivalent in a different fashion, the generalization of results obtained in virtual reality to real life would be highly questionable. We tested the ecological validity of virtual reality in this context by comparing the timing of interceptive actions based upon actually falling objects and their simulated counterparts. The results show very limited differences as a function of whether participants were confronted with a real ball or a simulation thereof; when present, such differences were limited to the first trial only. This result validates the use of virtual reality when studying interceptive actions of accelerated stimuli.
Wygant, Dustin B; Ben-Porath, Yossef S; Arbisi, Paul A; Berry, David T R; Freeman, David B; Heilbronner, Robert L
2009-11-01
The current study examined the effectiveness of the MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath and Tellegen, 2008) over-reporting indicators in civil forensic settings. The MMPI-2-RF includes three revised MMPI-2 over-reporting validity scales and a new scale to detect over-reported somatic complaints. Participants dissimulated medical and neuropsychological complaints in two simulation samples, and a known-groups sample used symptom validity tests as a response bias criterion. Results indicated large effect sizes for the MMPI-2-RF validity scales, including a Cohen's d of .90 for Fs in a head injury simulation sample, 2.31 for FBS-r, 2.01 for F-r, and 1.97 for Fs in a medical simulation sample, and 1.45 for FBS-r and 1.30 for F-r in identifying poor effort on SVTs. Classification results indicated good sensitivity and specificity for the scales across the samples. This study indicates that the MMPI-2-RF over-reporting validity scales are effective at detecting symptom over-reporting in civil forensic settings.
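The effect sizes quoted above are Cohen's d values, computed from two groups' scale scores with a pooled standard deviation. A minimal sketch; the score distributions below are hypothetical, not data from the study:

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference with a pooled SD."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    pooled_var = ((n1 - 1) * g1.var(ddof=1)
                  + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    return (g1.mean() - g2.mean()) / np.sqrt(pooled_var)

# Hypothetical validity-scale scores for simulators (over-reporters) vs
# honest responders -- made-up distributions for illustration only.
rng = np.random.default_rng(1)
simulators = rng.normal(75, 12, 40)
controls = rng.normal(55, 12, 40)
print(f"Cohen's d = {cohens_d(simulators, controls):.2f}")
```

By the usual convention, d around 0.8 or above counts as a large effect, which is why values like 1.97 and 2.31 indicate strong separation between groups.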
Simulation of laser beam reflection at the sea surface modeling and validation
NASA Astrophysics Data System (ADS)
Schwenger, Frédéric; Repasi, Endre
2013-06-01
A 3D simulation of the reflection of a Gaussian shaped laser beam on the dynamic sea surface is presented. The simulation is suitable for the pre-calculation of images for cameras operating in different spectral wavebands (visible, short wave infrared) for a bistatic configuration of laser source and receiver for different atmospheric conditions. In the visible waveband the calculated detected total power of reflected laser light from a 660 nm laser source is compared with data collected in a field trial. Our computer simulation comprises the 3D simulation of a maritime scene (open sea/clear sky) and the simulation of laser beam reflection at the sea surface. The basic sea surface geometry is modeled by a composition of smooth wind-driven gravity waves. To predict the view of a camera the sea surface radiance must be calculated for the specific waveband. Additionally, the radiances of laser light specularly reflected at the wind-roughened sea surface are modeled considering an analytical statistical sea surface BRDF (bidirectional reflectance distribution function). Validation of simulation results is a prerequisite before applying the computer simulation to maritime laser applications. For validation purposes data (images and meteorological data) were selected from field measurements, using a 660 nm cw-laser diode to produce laser beam reflection at the water surface and recording images by a TV camera. The validation is done by numerical comparison of measured total laser power extracted from recorded images with the corresponding simulation results. The results of the comparison are presented for different incident (zenith/azimuth) angles of the laser beam.
Validation of Robotic Surgery Simulator (RoSS).
Kesavadas, Thenkurussi; Stegemann, Andrew; Sathyaseelan, Gughan; Chowriappa, Ashirwad; Srimathveeravalli, Govindarajan; Seixas-Mikelus, Stéfanie; Chandrasekhar, Rameella; Wilding, Gregory; Guru, Khurshid
2011-01-01
Recent growth of the daVinci Robotic Surgical System as a minimally invasive surgery tool has led to a call for better training of future surgeons. In this paper, a new virtual reality simulator, called RoSS, is presented. Initial results from two studies, on face and content validity, are very encouraging. 90% of the cohort of expert robotic surgeons felt that the simulator was excellent or somewhat close to the touch and feel of the daVinci console. Content validity of the simulator received 90% approval in some cases. These studies demonstrate that RoSS has the potential of becoming an important training tool for the daVinci surgical robot.
Browning, J. R.; Jonkman, J.; Robertson, A.; ...
2014-12-16
In this study, high-quality computer simulations are required when designing floating wind turbines because of the complex dynamic responses that are inherent with a high number of degrees of freedom and variable metocean conditions. In 2007, the FAST wind turbine simulation tool, developed and maintained by the U.S. Department of Energy's (DOE's) National Renewable Energy Laboratory (NREL), was expanded to include capabilities that are suitable for modeling floating offshore wind turbines. In an effort to validate FAST and other offshore wind energy modeling tools, DOE funded the DeepCwind project that tested three prototype floating wind turbines at 1/50th scale in a wave basin, including a semisubmersible, a tension-leg platform, and a spar buoy. This paper describes the use of the results of the spar wave basin tests to calibrate and validate the FAST offshore floating simulation tool, and presents some initial results of simulated dynamic responses of the spar to several combinations of wind and sea states. Wave basin tests with the spar attached to a scale model of the NREL 5-megawatt reference wind turbine were performed at the Maritime Research Institute Netherlands under the DeepCwind project. This project included free-decay tests, tests with steady or turbulent wind and still water, wave-only tests (both periodic and irregular waves with no wind), and combined wind/wave tests. The resulting data from the 1/50th model were scaled to full size using Froude scaling and used to calibrate and validate a full-size simulated model in FAST. Results of the model calibration and validation include successes, subtleties, and limitations of both wave basin testing and FAST modeling capabilities.
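The Froude scaling mentioned above keeps the Froude number U/sqrt(g*L) constant between model and full scale, so each quantity scales by a fixed power of the geometric ratio. A minimal sketch using the standard scaling exponents for same-fluid testing; the example wave period is made up:

```python
# Froude scaling factors for converting 1/50th-scale wave-basin
# measurements to full scale (same fluid, same gravity).
lam = 50.0  # full-scale length / model length

froude_factors = {
    "length": lam,
    "time": lam ** 0.5,        # wave periods scale with sqrt(lam)
    "velocity": lam ** 0.5,
    "acceleration": 1.0,       # g is the same at both scales
    "mass": lam ** 3,
    "force": lam ** 3,
    "moment": lam ** 4,
    "power": lam ** 3.5,
}

# Example: a 2.0 s model-scale wave period maps to a full-scale period of
model_period = 2.0
full_period = model_period * froude_factors["time"]
print(f"full-scale period: {full_period:.1f} s")
```

The steep mass and force exponents are why small measurement errors at model scale can become significant at full scale, one of the "subtleties" of wave basin testing the paper alludes to.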
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.; Shivarama, Ravishankar
2004-01-01
The hybrid particle-finite element method of Fahrenthold and Horban, developed for the simulation of hypervelocity impact problems, has been extended to include new formulations of the particle-element kinematics, additional constitutive models, and an improved numerical implementation. The extended formulation has been validated in three dimensional simulations of published impact experiments. The test cases demonstrate good agreement with experiment, good parallel speedup, and numerical convergence of the simulation results.
Does virtual reality simulation have a role in training trauma and orthopaedic surgeons?
Bartlett, J D; Lawrence, J E; Stewart, M E; Nakano, N; Khanduja, V
2018-05-01
Aims The aim of this study was to assess the current evidence relating to the benefits of virtual reality (VR) simulation in orthopaedic surgical training, and to identify areas of future research. Materials and Methods A literature search using the MEDLINE, Embase, and Google Scholar databases was performed. The results' titles, abstracts, and references were examined for relevance. Results A total of 31 articles published between 2004 and 2016 and relating to the objective validity and efficacy of specific virtual reality orthopaedic surgical simulators were identified. We found 18 studies demonstrating the construct validity of 16 different orthopaedic virtual reality simulators by comparing expert and novice performance. Eight studies have demonstrated skill acquisition on a simulator by showing improvements in performance with repeated use. A further five studies have demonstrated measurable improvements in operating theatre performance following a period of virtual reality simulator training. Conclusion The demonstration of 'real-world' benefits from the use of VR simulation in knee and shoulder arthroscopy is promising. However, evidence supporting its utility in other forms of orthopaedic surgery is lacking. Further studies of validity and utility should be combined with robust analyses of the cost efficiency of validated simulators to justify the financial investment required for their use in orthopaedic training. Cite this article: Bone Joint J 2018;100-B:559-65.
Design and validation of the eyesafe ladar testbed (ELT) using the LadarSIM system simulator
NASA Astrophysics Data System (ADS)
Neilsen, Kevin D.; Budge, Scott E.; Pack, Robert T.; Fullmer, R. Rees; Cook, T. Dean
2009-05-01
The development of an experimental full-waveform LADAR system has been enhanced with the assistance of the LadarSIM system simulation software. The Eyesafe LADAR Test-bed (ELT) was designed as a raster scanning, single-beam, energy-detection LADAR with the capability of digitizing and recording the return pulse waveform at up to 2 GHz for 3D off-line image formation research in the laboratory. To assist in the design phase, the full-waveform LADAR simulation in LadarSIM was used to simulate the expected return waveforms for various system design parameters, target characteristics, and target ranges. Once the design was finalized and the ELT constructed, the measured specifications of the system and experimental data captured from the operational sensor were used to validate the behavior of the system as predicted during the design phase. This paper presents the methodology used, and lessons learned from this "design, build, validate" process. Simulated results from the design phase are presented, and these are compared to simulated results using measured system parameters and operational sensor data. The advantages of this simulation-based process are also presented.
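A full-waveform return of the kind the ELT digitizes can be sketched as a sum of delayed, range-attenuated Gaussian pulses sampled at 2 GHz. All target ranges, reflectances, pulse widths, and noise levels below are illustrative assumptions, not ELT or LadarSIM parameters:

```python
import numpy as np

c = 3.0e8                     # speed of light, m/s
fs = 2.0e9                    # digitizer sample rate, Hz
pulse_fwhm = 5e-9             # transmitted pulse width, s (assumed)
sigma = pulse_fwhm / 2.355    # Gaussian sigma from FWHM

# Two closely spaced targets: (range in m, relative reflectance)
targets = [(120.0, 1.0), (121.5, 0.6)]

t = np.arange(0.0, 2e-6, 1.0 / fs)
waveform = np.zeros_like(t)
for R, rho in targets:
    delay = 2.0 * R / c                   # round-trip time
    waveform += rho / R**2 * np.exp(-0.5 * ((t - delay) / sigma) ** 2)
waveform += 1e-6 * np.random.default_rng(0).normal(size=t.size)  # noise

# Crude range estimate from the strongest sample of the digitized return
R_est = c * t[np.argmax(waveform)] / 2.0
print(f"estimated range: {R_est:.2f} m")
```

Off-line image formation would apply more sophisticated pulse detection to each recorded waveform, but this toy model shows why digitizing the full return (rather than a single threshold crossing) preserves multiple-return and pulse-shape information.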
Simulated Driving Assessment (SDA) for Teen Drivers: Results from a Validation Study
McDonald, Catherine C.; Kandadai, Venk; Loeb, Helen; Seacrist, Thomas S.; Lee, Yi-Ching; Winston, Zachary; Winston, Flaura K.
2015-01-01
Background Driver error and inadequate skill are common critical reasons for novice teen driver crashes, yet few validated, standardized assessments of teen driving skills exist. The purpose of this study was to evaluate the construct and criterion validity of a newly developed Simulated Driving Assessment (SDA) for novice teen drivers. Methods The SDA's 35-minute simulated drive incorporates 22 variations of the most common teen driver crash configurations. Driving performance was compared for 21 inexperienced teens (age 16–17 years, provisional license ≤90 days) and 17 experienced adults (age 25–50 years, license ≥5 years, drove ≥100 miles per week, no collisions or moving violations ≤3 years). SDA driving performance (Error Score) was based on driving safety measures derived from simulator and eye-tracking data. Negative driving outcomes included simulated collisions or run-off-the-road incidents. A professional driving evaluator/instructor reviewed videos of SDA performance (DEI Score). Results The SDA demonstrated construct validity: 1.) Teens had a higher Error Score than adults (30 vs. 13, p=0.02); 2.) For each additional error committed, the relative risk of a participant's propensity for a simulated negative driving outcome increased by 8% (95% CI: 1.05–1.10, p<0.01). The SDA demonstrated criterion validity: Error Score was correlated with DEI Score (r=−0.66, p<0.001). Conclusions This study supports the concept of validated simulated driving tests like the SDA to assess novice driver skill in complex and hazardous driving scenarios. The SDA, as a standard protocol to evaluate teen driver performance, has the potential to facilitate screening and assessment of teen driving readiness and could be used to guide targeted skill training. PMID:25740939
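The abstract above reports a relative risk of a negative driving outcome increasing by 8% per additional error (95% CI 1.05–1.10). A minimal sketch of how such a per-unit relative risk and Wald interval are derived from a log-link regression coefficient is shown below; the coefficient and standard error are hypothetical values chosen only for illustration, not taken from the study.

```python
import math

def rr_per_unit(beta, se, z=1.96):
    """Relative risk per one-unit increase of a predictor, derived from a
    log-link regression coefficient, with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient and standard error giving RR = 1.08 per error:
rr, lo, hi = rr_per_unit(math.log(1.08), 0.012)
```

The point estimate is the exponentiated coefficient; the interval bounds come from exponentiating the coefficient shifted by z standard errors.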
Noble, Erik; Druyan, Leonard M; Fulakeza, Matthew
2016-01-01
This paper evaluates the performance of the Weather Research and Forecasting (WRF) model as a regional-atmospheric model over West Africa. It tests WRF sensitivity to 64 configurations of alternative parameterizations in a series of 104 twelve-day September simulations during eleven consecutive years, 2000-2010. The 64 configurations combine WRF parameterizations of cumulus convection, radiation, surface-hydrology, and PBL. Simulated daily and total precipitation results are validated against Global Precipitation Climatology Project (GPCP) and Tropical Rainfall Measuring Mission (TRMM) data. Particular attention is given to westward-propagating precipitation maxima associated with African Easterly Waves (AEWs). A wide range of daily precipitation validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve time-longitude correlations (against GPCP) of 0.35-0.42 and spatiotemporal variability amplitudes only slightly higher than observed estimates. A parallel simulation by the benchmark Regional Model-v.3 achieves a higher correlation (0.52) and realistic spatiotemporal variability amplitudes. The largest favorable impact on WRF precipitation validation is achieved by selecting the Grell-Devenyi convection scheme, which yields higher correlations against observations than the Kain-Fritsch convection scheme. Other parameterizations have less obvious impact. Validation statistics for optimized WRF configurations simulating the parallel period during 2000-2010 are more favorable for 2005, 2006, and 2008 than for other years. The selection of some of the same WRF configurations as high scorers in both circulation and precipitation validations supports the notion that simulations of West African daily precipitation benefit from skillful simulations of associated AEW vorticity centers and that simulations of AEWs would benefit from skillful simulations of convective precipitation.
Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew
2018-01-01
This paper evaluates the performance of the Weather Research and Forecasting (WRF) model as a regional-atmospheric model over West Africa. It tests WRF sensitivity to 64 configurations of alternative parameterizations in a series of 104 twelve-day September simulations during eleven consecutive years, 2000–2010. The 64 configurations combine WRF parameterizations of cumulus convection, radiation, surface-hydrology, and PBL. Simulated daily and total precipitation results are validated against Global Precipitation Climatology Project (GPCP) and Tropical Rainfall Measuring Mission (TRMM) data. Particular attention is given to westward-propagating precipitation maxima associated with African Easterly Waves (AEWs). A wide range of daily precipitation validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve time-longitude correlations (against GPCP) of 0.35–0.42 and spatiotemporal variability amplitudes only slightly higher than observed estimates. A parallel simulation by the benchmark Regional Model-v.3 achieves a higher correlation (0.52) and realistic spatiotemporal variability amplitudes. The largest favorable impact on WRF precipitation validation is achieved by selecting the Grell-Devenyi convection scheme, which yields higher correlations against observations than the Kain-Fritsch convection scheme. Other parameterizations have less obvious impact. Validation statistics for optimized WRF configurations simulating the parallel period during 2000–2010 are more favorable for 2005, 2006, and 2008 than for other years. The selection of some of the same WRF configurations as high scorers in both circulation and precipitation validations supports the notion that simulations of West African daily precipitation benefit from skillful simulations of associated AEW vorticity centers and that simulations of AEWs would benefit from skillful simulations of convective precipitation. PMID:29563651
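The time-longitude correlations reported in the abstract above compare simulated and observed precipitation on a Hovmöller (time × longitude) grid. A minimal sketch of such a score, flattening the two grids and computing a Pearson correlation cell-by-cell, is shown below; the tiny fields are hypothetical and stand in for GPCP-gridded data.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def hovmoller_correlation(sim, obs):
    """Correlate two time x longitude fields (lists of rows) cell-by-cell."""
    flat_sim = [v for row in sim for v in row]
    flat_obs = [v for row in obs for v in row]
    return pearson(flat_sim, flat_obs)

# Hypothetical 3-day x 4-longitude precipitation fields (mm/day):
sim = [[2.0, 5.0, 1.0, 0.0], [0.5, 3.0, 4.0, 1.0], [0.0, 1.0, 6.0, 2.0]]
obs = [[1.5, 4.0, 2.0, 0.0], [1.0, 2.5, 3.5, 0.5], [0.5, 1.5, 5.0, 2.5]]
r = hovmoller_correlation(sim, obs)
```

A westward-propagating rain maximum that both fields capture in phase drives the correlation up; a simulated maximum displaced in time or longitude drives it down.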
Assessment of simulation fidelity using measurements of piloting technique in flight
NASA Technical Reports Server (NTRS)
Clement, W. F.; Cleveland, W. B.; Key, D. L.
1984-01-01
The U.S. Army and NASA joined together on a project to conduct a systematic investigation and validation of a ground-based piloted simulation of the Army/Sikorsky UH-60A helicopter. Flight testing was an integral part of the validation effort. The Nap-of-the-Earth (NOE) piloting tasks investigated included the bob-up, the hover turn, the dash/quickstop, the sidestep, the dolphin, and the slalom. Results from the simulation indicate that the pilot's NOE task performance in the simulator is noticeably and quantifiably degraded when compared with the task performance results generated in flight test. The results of the flight test and ground-based simulation experiments support a unique rationale for the assessment of simulation fidelity: flight simulation fidelity should be judged quantitatively by measuring the pilot's control strategy and technique as induced by the simulator. A quantitative comparison is offered between the piloting technique observed in a flight simulator and that observed in flight test for the same tasks performed by the same pilots.
Shift Verification and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G
2016-09-07
This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and to results from other Monte Carlo radiation transport codes, and found very good agreement across a variety of comparison measures. These include prediction of the critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident that Shift can provide reference results for CASL benchmarking.
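Critical-eigenvalue comparisons like those described above are commonly reported as a difference in pcm (per cent mille, 1 pcm = 1e-5 in k). A minimal sketch using a simple delta-k convention is shown below; the eigenvalues are hypothetical, and codes may instead report a reactivity difference (1/k_ref − 1/k_sim).

```python
def keff_diff_pcm(k_sim, k_ref):
    """Simple delta-k between a simulated and a reference critical
    eigenvalue, expressed in pcm (1 pcm = 1e-5 in k)."""
    return (k_sim - k_ref) * 1e5

# Hypothetical benchmark comparison: simulation overpredicts k by 42 pcm.
diff = keff_diff_pcm(1.00042, 1.00000)
```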
Model-Based Verification and Validation of Spacecraft Avionics
NASA Technical Reports Server (NTRS)
Khan, Mohammed Omair
2012-01-01
Our simulation was able to mimic the results of 30 tests on the actual hardware. This shows that simulations have the potential to enable early design validation - well before actual hardware exists. Although the simulations focused on data processing procedures at the subsystem and device level, they can also be applied to system-level analysis to simulate mission scenarios and consumable tracking (e.g. power, propellant, etc.). Simulation engine plug-in developments are continually improving the product, but handling time for time-sensitive operations (like those of the remote engineering unit and bus controller) can be cumbersome.
NASA Astrophysics Data System (ADS)
McIntosh, Bryan
The LSO scintillator crystal commonly used in PET scanners contains a low level of intrinsic radioactivity due to a small amount of Lu-176. This is not usually a concern in routine scanning but can become an issue in small animal imaging, especially when imaging low tracer activity levels. Previously there had been no systematic validation of simulations of this activity; this thesis discusses the validation of a GATE model of intrinsic Lu-176 against results from a bench-top pair of detectors and a Siemens Inveon preclinical PET system. The simulation results matched those from the bench-top system very well, but did not agree as well with results from the complete Inveon system due to a drop-off in system sensitivity at low energies that was not modelled. With this validation the model can now be used with confidence to predict the effects of Lu-176 activity in future PET systems.
A Validation Study of Merging and Spacing Techniques in a NAS-Wide Simulation
NASA Technical Reports Server (NTRS)
Glaab, Patricia C.
2011-01-01
In November 2010, Intelligent Automation, Inc. (IAI) delivered an M&S software tool that allows system-level studies of the complex terminal airspace with the ACES simulation. The software was evaluated against current-day arrivals in the Atlanta TRACON using Atlanta's Hartsfield-Jackson International Airport (KATL) arrival schedules. Results of this validation effort are presented describing data sets, traffic flow assumptions and techniques, and arrival rate comparisons between reported landings at Atlanta versus simulated arrivals using the same traffic sets in ACES equipped with M&S. Initial results showed the simulated system capacity to be significantly below the arrival capacity seen at KATL. Data was gathered for Atlanta using commercial airport and flight tracking websites (like FlightAware.com), and analyzed to ensure compatible techniques were used for result reporting and comparison. TFM operators for Atlanta were consulted for tuning final simulation parameters and for guidance in flow management techniques during high-volume operations. Using these modified parameters and incorporating TFM guidance for efficiencies in flowing aircraft, the arrival capacity for KATL was matched by the simulation. Following this validation effort, a sensitivity study was conducted to measure the impact of variations in system parameters on the Atlanta airport arrival capacity.
Hydrological Modeling of the Jiaoyi Watershed (China) Using HSPF Model
Yan, Chang-An; Zhang, Wanchang; Zhang, Zhijie
2014-01-01
A watershed hydrological model, Hydrological Simulation Program-Fortran (HSPF), was applied to simulate the spatial and temporal variation of hydrological processes in the Jiaoyi watershed of the Huaihe River Basin, one of the most water-scarce and polluted areas in China. The model was calibrated using the years 2001–2004 and validated with data from 2005 to 2006. Calibration and validation results showed that the model generally simulated mean monthly and daily runoff accurately, as indicated by the close match between simulated and observed hydrographs and by excellent evaluation indicators such as the Nash-Sutcliffe efficiency (NSE), coefficient of correlation (R²), and relative error (RE). The similarity of results between the calibration and validation periods showed that the calibrated parameters were representative of the Jiaoyi watershed. Additionally, the simulation in rainy months was more accurate than in dry months. HSPF was also capable of estimating the water balance components reasonably and realistically in space across the whole watershed. The calibrated model can be used to explore the effects of climate change scenarios and various watershed management practices on the water resources and water environment in the basin. PMID:25013863
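Two of the evaluation indicators named in the abstract above, the Nash-Sutcliffe efficiency and the relative error, can be sketched directly from their definitions. The runoff series below are hypothetical; NSE = 1 indicates a perfect match, while NSE ≤ 0 means the model is no better than the observed mean.

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - (sum of squared errors) /
    (variance of observations about their mean)."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - sse / var

def relative_error(obs, sim):
    """Relative error of the total simulated volume, in percent."""
    return (sum(sim) - sum(obs)) / sum(obs) * 100

# Hypothetical monthly runoff series (mm):
obs = [10.0, 40.0, 80.0, 60.0, 30.0, 15.0]
sim = [12.0, 38.0, 75.0, 64.0, 28.0, 14.0]
```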
Validation of a Novel Laparoscopic Adjustable Gastric Band Simulator
Sankaranarayanan, Ganesh; Adair, James D.; Halic, Tansel; Gromski, Mark A.; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B.; De, Suvranu
2011-01-01
Background Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. Study Aim The aim of our study was to determine face, construct and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Methods Twenty-eight subjects were categorized into two groups (Expert and Novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least four years of laparoscopic training and operative experience. Novices consisted of subjects with medical training, but with less than four years of laparoscopic training. The subjects performed tasks on the virtual reality laparoscopic adjustable gastric band simulator and were automatically scored on those tasks. The subjects then completed a questionnaire to evaluate face and content validity. Results On a 5-point Likert scale (1 – lowest score, 5 – highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 [Face Validity]. There were significant differences in the performance of the two subject groups (Expert and Novice), based on total scores (p<0.001) [Construct Validity]. The mean score for utility of the simulator, as assessed by the Expert group, was 4.50 ± 0.71 [Content Validity]. Conclusion We created a virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct and content validity findings.
To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure. PMID:20734069
Validated simulator for space debris removal with nets and other flexible tethers applications
NASA Astrophysics Data System (ADS)
Gołębiowski, Wojciech; Michalczyk, Rafał; Dyrek, Michał; Battista, Umberto; Wormnes, Kjetil
2016-12-01
In the context of active debris removal technologies and preparation activities for the e.Deorbit mission, a simulator for the dynamics of net-shaped elastic bodies and their interactions with rigid bodies has been developed. Its main application is to aid net design and to test scenarios for space debris deorbitation. The simulator can model all the phases of the debris capturing process: net launch, flight and wrapping around the target. It handles coupled simulation of rigid and flexible body dynamics. Flexible bodies were implemented using the Cosserat rod model, which allows simulation of flexible threads or wires with elasticity and damping for stretching, bending and torsion. Threads may be combined into structures of any topology, so the software is able to simulate nets, pure tethers, tether bundles, cages, trusses, etc. Full contact dynamics was implemented. Programmatic interaction with the simulation is possible, e.g. for control implementation. The underlying model has been experimentally validated; due to the significant influence of gravity, the experiment had to be performed in microgravity conditions. The validation experiment, performed in parabolic flight, was a downscaled Envisat capture process. The prepacked net was launched towards the satellite model, expanded, hit the model and wrapped around it. The whole process was recorded with two high-speed stereographic camera sets for full 3D trajectory reconstruction. The trajectories were used to compare the net dynamics to the respective simulations and thus to validate the simulation tool. The experiments were performed on board a Falcon-20 aircraft operated by the National Research Council in Ottawa, Canada. Validation results show that the model reflects the physics of the phenomenon accurately enough to be used for scenario evaluation and mission design purposes. The functionalities of the simulator are described in detail in the paper, as well as its underlying model, sample cases and the methodology behind the validation.
Results are presented and typical use cases are discussed, showing that the software may be used to design throw-nets for space debris capturing, and also to simulate the deorbitation process, the chaser control system, or general interactions between rigid and elastic bodies, all in a convenient and efficient way. The presented work was led by SKA Polska under an ESA contract, within the CleanSpace initiative.
Design and landing dynamic analysis of reusable landing leg for a near-space manned capsule
NASA Astrophysics Data System (ADS)
Yue, Shuai; Nie, Hong; Zhang, Ming; Wei, Xiaohui; Gan, Shengyong
2018-06-01
To improve the landing performance of a near-space manned capsule under various landing conditions, a novel landing system is designed that employs double-chamber and single-chamber dampers in the primary and auxiliary struts, respectively. A dynamic model of the landing system is established, and the damper parameters are determined by employing the design method. A single-leg drop test with different initial pitch angles is then conducted to compare with and validate the simulation model. Based on the validated simulation model, seven critical landing conditions regarding nine crucial landing responses are found by combining the radial basis function (RBF) surrogate model and the adaptive simulated annealing (ASA) optimization method. Subsequently, the adaptability of the landing system under critical landing conditions is analyzed. The results show that the simulation results closely match the test results, which validates the accuracy of the dynamic model. In addition, all of the crucial responses under their corresponding critical landing conditions satisfy the design specifications, demonstrating the feasibility of the landing system.
A systematic review of validated sinus surgery simulators.
Stew, B; Kao, S S-T; Dharmawardana, N; Ooi, E H
2018-06-01
Simulation provides a safe and effective opportunity to develop surgical skills. A variety of endoscopic sinus surgery (ESS) simulators has been described in the literature. Validation of these simulators allows for effective utilisation in training. To conduct a systematic review of the published literature to analyse the evidence for validated ESS simulation. Pubmed, Embase, Cochrane and Cinahl were searched from inception of the databases to 11 January 2017. Twelve thousand five hundred and sixteen articles were retrieved, of which 10,112 were screened following the removal of duplicates. Thirty-eight full-text articles were reviewed after meeting search criteria. Evidence of face, content, construct, discriminant and predictive validity was extracted. Twenty articles were included in the analysis, describing 12 ESS simulators. Eleven of these simulators had undergone validation: 3 virtual reality, 7 physical bench models and 1 cadaveric simulator. Seven of the simulators were shown to have face validity, 7 had construct validity and 1 had predictive validity. None of the simulators demonstrated discriminant validity. This systematic review demonstrates that a number of ESS simulators have been comprehensively validated. Many of the validation processes, however, lack standardisation in outcome reporting, thus limiting a meta-analysis comparison between simulators. © 2017 John Wiley & Sons Ltd.
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
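The contrast drawn above between coarse total-time validation and fine-grained statistical comparison can be sketched with a toy example: two traces whose totals agree exactly (0% error) but whose per-event statistics differ sharply. The trace values and metric names are hypothetical, not the paper's actual toolset.

```python
def percent_error(t_sim, t_real):
    """Coarse-grained validation: percent error of total execution time."""
    return abs(t_sim - t_real) / t_real * 100

def stat_signature(samples):
    """Fine-grained validation: (mean, std dev) of a per-event trace
    quantity, e.g. per-message communication times."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var ** 0.5

# Hypothetical per-message times (us): same total, very different shape.
real_trace = [1.0, 1.2, 0.9, 5.0, 1.1]   # one long outlier message
sim_trace = [1.84] * 5                    # perfectly uniform simulation

err = percent_error(sum(sim_trace), sum(real_trace))
```

Total-time percent error declares the simulation perfect, while the standard deviations expose that the simulated trace misses the outlier behavior entirely.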
3D Simulation of External Flooding Events for the RISMC Pathway
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prescott, Steven; Mandelli, Diego; Sampath, Ramprasad
2015-09-01
Incorporating 3D simulations as part of the Risk-Informed Safety Margin Characterization (RISMC) Toolkit allows analysts to obtain a more complete picture of complex system behavior for events including external plant hazards. External events such as flooding have become more important recently; however, these can be analyzed with existing and validated simulated-physics toolkits. In this report, we describe these approaches specific to flooding-based analysis using an approach called Smoothed Particle Hydrodynamics. The theory, validation, and example applications of the 3D flooding simulation are described. Integrating these 3D simulation methods into computational risk analysis adds a spatial/visual aspect to the design, improves the realism of results, and provides visual understanding that supports validation of the flooding analysis.
NASA Astrophysics Data System (ADS)
Ammouri, Aymen; Ben Salah, Walid; Khachroumi, Sofiane; Ben Salah, Tarek; Kourda, Ferid; Morel, Hervé
2014-05-01
Design of integrated power converters needs prototype-less approaches. Specific simulations are required for the investigation and validation process. Simulation relies on active and passive device models. Models of planar devices, for instance, are still not available in power simulator tools, which creates a specific limitation when simulating integrated power systems. The paper focuses on the development of a physically-based planar inductor model and its validation inside a power converter during transient switching. The planar inductor remains a complex device to model, particularly when skin, proximity, and parasitic capacitance effects are taken into account. A heterogeneous simulation scheme, including circuit and device models, is successfully implemented in the VHDL-AMS language and simulated in the Simplorer platform. The mixed simulation results have been favorably tested and compared with practical measurements. The multi-domain simulation results and measurement data are found to be in close agreement.
Validity of using photographs to simulate visible qualities of forest recreation environments
Robin E. Hoffman; James F. Palmer
1995-01-01
Forest recreation managers and researchers interested in conserving and improving the visual quality and recreation opportunities available in forest environments must often resort to simulations as a means of illustrating alternatives for potential users to evaluate. This paper reviews the results of prior research evaluating the validity of using photographic...
NASA Astrophysics Data System (ADS)
Zainol, M. R. R. M. A.; Kamaruddin, M. A.; Zawawi, M. H.; Wahab, K. A.
2017-11-01
Smoothed Particle Hydrodynamics (SPH) is a three-dimensional (3D) modelling method. In this research work, three cases and one validation case were simulated using DualSPHysics. The study area of this research work was the Sarawak Barrage. The cases have different water levels at the downstream side. This study simulates riverbed erosion and scouring behaviour using multi-phase cases with sand as sediment and water. The velocity and the scouring profile were recorded and are presented in the results. The validation result is acceptable: the scouring profile and the velocity differed only slightly between the laboratory experiment and the simulation. Hence, it can be concluded that SPH simulation can be used as an alternative means of simulating the real cases.
Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology
NASA Astrophysics Data System (ADS)
García-Barberena, Javier; Ubani, Nora
2016-05-01
The present work presents the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar power (CSP) plants based on parabolic trough (PT) technology. The validation has been carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected from one year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key parameters of the model were properly fixed and the simulation of a whole year performed. The results obtained for the complete-year simulation showed very good agreement for the gross and net total electric production, with biases of 1.47% and 2.02%, respectively. The results show that the simulation software describes the real operation of the power plant with great accuracy and correctly reproduces its transient behavior.
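The bias figures quoted in the abstract above are relative differences between simulated and measured total production. A minimal sketch of that metric is shown below; the daily output values are hypothetical, not plant data.

```python
def bias_percent(simulated, measured):
    """Relative bias of total simulated production vs. measurements, in %.
    Positive means the model overpredicts."""
    return (sum(simulated) - sum(measured)) / sum(measured) * 100

# Hypothetical daily gross electric output (MWh), e.g. one cloudy day:
measured = [310.0, 295.0, 120.0, 330.0]
simulated = [315.0, 300.0, 118.0, 338.0]
bias = bias_percent(simulated, measured)
```

Computing the bias over totals rather than day-by-day lets daily over- and underpredictions cancel, which is why it is reported alongside per-variable comparisons.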
Performance validation of the ANSER control laws for the F-18 HARV
NASA Technical Reports Server (NTRS)
Messina, Michael D.
1995-01-01
The ANSER control laws were implemented in Ada by NASA Dryden for flight test on the High Alpha Research Vehicle (HARV). The Ada implementation was tested in the hardware-in-the-loop (HIL) simulation, and results were compared to those obtained with the NASA Langley batch Fortran implementation of the control laws which are considered the 'truth model.' This report documents the performance validation test results between these implementations. This report contains the ANSER performance validation test plan, HIL versus batch time-history comparisons, simulation scripts used to generate checkcases, and detailed analysis of discrepancies discovered during testing.
Performance validation of the ANSER Control Laws for the F-18 HARV
NASA Technical Reports Server (NTRS)
Messina, Michael D.
1995-01-01
The ANSER control laws were implemented in Ada by NASA Dryden for flight test on the High Alpha Research Vehicle (HARV). The Ada implementation was tested in the hardware-in-the-loop (HIL) simulation, and results were compared to those obtained with the NASA Langley batch Fortran implementation of the control laws which are considered the 'truth model'. This report documents the performance validation test results between these implementations. This report contains the ANSER performance validation test plan, HIL versus batch time-history comparisons, simulation scripts used to generate checkcases, and detailed analysis of discrepancies discovered during testing.
Towards Virtual FLS: Development of a Peg Transfer Simulator
Arikatla, Venkata S; Ahn, Woojin; Sankaranarayanan, Ganesh; De, Suvranu
2014-01-01
Background Peg transfer is one of five tasks in the Fundamentals of Laparoscopic Surgery (FLS) program. We report the development and validation of a Virtual Basic Laparoscopic Skill Trainer-Peg Transfer (VBLaST-PT©) simulator for automatic real-time scoring and objective quantification of performance. Methods We introduced new techniques to allow bi-manual manipulation of pegs and automatic scoring/evaluation while maintaining high simulation quality. We performed a preliminary face and construct validation study with 22 subjects divided into two groups: experts (PGY 4–5, fellows and practicing surgeons) and novices (PGY 1–3). Results Face validation showed high scores for all aspects of the simulation. A two-tailed Mann-Whitney U test showed significant differences between the two groups on completion time (p=0.003), FLS score (p=0.002) and the VBLaST-PT© score (p=0.006). Conclusions VBLaST-PT© is a high-quality virtual simulator that showed both face and construct validity. PMID:24030904
Time Domain Tool Validation Using ARES I-X Flight Data
NASA Technical Reports Server (NTRS)
Hough, Steven; Compton, James; Hannan, Mike; Brandon, Jay
2011-01-01
The ARES I-X vehicle was launched from NASA's Kennedy Space Center (KSC) on October 28, 2009 at approximately 11:30 EDT. ARES I-X was the first test flight for NASA's ARES I launch vehicle, and it was the first non-Shuttle launch vehicle designed and flown by NASA since Saturn. The ARES I-X had a 4-segment solid rocket booster (SRB) first stage and a dummy upper stage (US) to emulate the properties of the ARES I US. During ARES I-X pre-flight modeling and analysis, six (6) independent time domain simulation tools were developed and cross-validated. Each tool represents an independent implementation of a common set of models and parameters in a different simulation framework and architecture. Post-flight data and reconstructed models provide the means to validate a subset of the simulations against actual flight data and to assess the accuracy of pre-flight dispersion analysis. Post-flight data consist of telemetered Operational Flight Instrumentation (OFI) data, primarily flight computer outputs and sensor measurements, as well as Best Estimated Trajectory (BET) data that estimates vehicle state information from all available measurement sources. While pre-flight models were found to provide a reasonable prediction of the vehicle flight, reconstructed models were generated to better represent and simulate the ARES I-X flight. Post-flight reconstructed models include: SRB propulsion model, thrust vector bias models, mass properties, base aerodynamics, and Meteorological Estimated Trajectory (wind and atmospheric data). The result of the effort is a set of independently developed, high fidelity, time-domain simulation tools that have been cross-validated and validated against flight data. This paper presents the process and results of high fidelity aerospace modeling, simulation, analysis and tool validation in the time domain.
An Integrated Study on a Novel High Temperature High Entropy Alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Shizhong
2016-12-31
This report summarizes our recent works of theoretical modeling, simulation, and experimental validation of the simulation results on the new refractory high entropy alloy (HEA) design and oxide doped refractory HEA research. The simulation of the stability and thermal dynamics simulation on potential thermal stable candidates were performed and related HEA with oxide doped samples were synthesized and characterized. The HEA ab initio density functional theory and molecular dynamics physical property simulation methods and experimental texture validation techniques development, achievements already reached, course work development, students and postdoc training, and future improvement research directions are briefly introduced.
NASA Technical Reports Server (NTRS)
Raiszadeh, Ben; Queen, Eric M.
2002-01-01
A capability to simulate trajectories of multiple interacting rigid bodies has been developed. This capability uses the Program to Optimize Simulated Trajectories II (POST II). Previously, POST II had the ability to simulate multiple bodies without interacting forces. The current implementation is used for the simulation of parachute trajectories, in which the parachute and suspended bodies can be treated as rigid bodies. An arbitrary set of connecting lines can be included in the model and are treated as massless spring-dampers. This paper discusses details of the connection line modeling and results of several test cases used to validate the capability.
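The massless spring-damper treatment of a connecting line lends itself to a compact sketch. The function below is illustrative only; the names, parameters, and the tension-only convention are assumptions for this example, not POST II's actual implementation. A line applies force only when taut, with stiffness and damping acting along the line direction.

```python
import numpy as np

def line_force(x_a, x_b, v_a, v_b, rest_len, k, c):
    """Force on attachment point A from a massless spring-damper line.

    The line carries tension only: when slack (shorter than its rest
    length) it transmits no force, as with a parachute suspension line.
    """
    d = np.asarray(x_b, float) - np.asarray(x_a, float)
    length = np.linalg.norm(d)
    if length <= rest_len or length == 0.0:
        return np.zeros(3)                 # slack line: no force
    unit = d / length
    stretch = length - rest_len
    # rate of separation along the line direction (for damping)
    rate = np.dot(np.asarray(v_b, float) - np.asarray(v_a, float), unit)
    tension = k * stretch + c * rate
    return max(tension, 0.0) * unit        # pulls A toward B, never pushes
```

Clamping the tension to be non-negative ensures that a rebounding line never pushes the bodies apart, which is the physical behavior a riser or bridle line should have.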
DOE Office of Scientific and Technical Information (OSTI.GOV)
English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.
Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed and described in conjunction. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
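The ensemble-based comparison described above can be sketched in a few lines. Everything here is a placeholder, not the paper's model: the toy response function stands in for a finite element run, and the parameter distributions, sample sizes, and Kolmogorov-Smirnov test are illustrative choices. Parameter uncertainty is propagated through repeated runs, and the resulting response distribution is compared to experimental output with a two-sample statistical test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_absorbed_energy(stiffness, strength):
    # stand-in for one impact simulation; a real study would run the FE solver
    return 0.8 * strength + 0.1 * stiffness + rng.normal(0.0, 0.05)

# propagate assumed parameter uncertainties through an ensemble of runs
stiffness_samples = rng.normal(1.0, 0.05, size=200)
strength_samples = rng.normal(2.0, 0.10, size=200)
model_ensemble = np.array([simulate_absorbed_energy(k, s)
                           for k, s in zip(stiffness_samples, strength_samples)])

experiment = rng.normal(1.9, 0.12, size=15)   # placeholder test data

# two-sample Kolmogorov-Smirnov test: are the two distributions consistent?
stat, p_value = stats.ks_2samp(model_ensemble, experiment)
```

A small p-value would flag a model-form discrepancy between the predicted response distribution and the experimental one, which is the kind of signal the blind validation step is meant to expose.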
Simulation of Propagation of Compartment Fire on Building Facades
NASA Astrophysics Data System (ADS)
Simion, A.; Dragne, H.; Stoica, D.; Anghel, I.
2018-06-01
The façade fire simulation of buildings is carried out with the Pyrosim numerical fire modeling program, following the implementation of a fire scenario in this program. The scenario, implemented in Pyrosim by researchers from the INCERC Fire Safety Research and Testing Laboratory, complied with the requirements of BS 8414. The computational results allowed visual validation of the effluent propagation at different time points from the start of the thermal load burning, as well as validation in terms of the recorded temperatures. The results are considered reasonable, and the test is fully validated with respect to the implementation of the fire scenario, the correct development of the effluents, and the temperature values [1].
Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.
2017-01-01
A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides a well-defined acceptance criteria, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criteria developed following the threshold approach is not only a function of Comparison Error, E (which is the difference between experiments and simulations) but also takes in to account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test. 
A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides a well-defined acceptance criterion, a function of how close the simulation and experimental results are to the safety threshold, for establishing model validity. The validation criterion developed following the threshold approach is not only a function of the Comparison Error, E (the difference between experiments and simulations), but also takes into account the risk to patient safety posed by E. The method is applicable to scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood-contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate whether the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test. However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in the experiments, simulations, and the threshold, or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and evaluating medical devices. PMID:28594889
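A hedged sketch of the threshold-based comparison is given below. This is a reconstruction for illustration only, not the paper's exact procedure or its ASME V&V 20 uncertainty treatment: the discrepancy |S-D| between simulation and experiment is tested against the margin |Threshold-S| between the simulation and the safety limit.

```python
import numpy as np
from scipy import stats

def threshold_validation(sim, exp, threshold, alpha=0.05):
    """Illustrative threshold-based acceptance check.

    Tests whether the simulation-experiment discrepancy |S - D| is
    statistically smaller than the margin |Threshold - S| between the
    simulation and the safety threshold.
    """
    sim = np.asarray(sim, float)
    exp = np.asarray(exp, float)
    discrepancy = np.abs(sim - exp)          # |S - D|
    margin = np.abs(threshold - sim)         # |Threshold - S|
    # one-sided Welch t-test, H1: mean(discrepancy) < mean(margin)
    t_stat, p_two = stats.ttest_ind(discrepancy, margin, equal_var=False)
    p_one = p_two / 2.0 if t_stat < 0 else 1.0 - p_two / 2.0
    return p_one < alpha    # True -> acceptable for this context of use
```

The point of the construction is that a model operating far from the safety threshold can tolerate a larger comparison error, while one operating near the threshold cannot.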
Development and validation of the simulation-based learning evaluation scale.
Hung, Chang-Chiao; Liu, Hsiu-Chen; Lin, Chun-Chih; Lee, Bih-O
2016-05-01
The existing instruments that evaluate students' perceptions of simulation-based training are English-language versions and have not been tested for reliability or validity. The aim of this study was to develop and validate a Chinese-version Simulation-Based Learning Evaluation Scale (SBLES). Four stages were conducted to develop and validate the SBLES. First, specific desired competencies were identified according to the National League for Nursing and Taiwan Nursing Accreditation Council core competencies. Next, the initial item pool was composed of 50 items related to simulation that were drawn from the literature on core competencies. Content validity was established by use of an expert panel. Finally, exploratory factor analysis and confirmatory factor analysis were conducted for construct validity, and Cronbach's coefficient alpha determined the scale's internal consistency reliability. Two hundred and fifty students who had experienced simulation-based learning were invited to participate in this study. Two hundred and twenty-five students completed and returned questionnaires (response rate=90%). Six items were deleted from the initial item pool and one was added after an expert panel review. Exploratory factor analysis with varimax rotation revealed 37 items remaining in five factors which accounted for 67% of the variance. The construct validity of the SBLES was substantiated in a confirmatory factor analysis that revealed a good fit of the hypothesized factor structure. The findings met the criteria for convergent and discriminant validity. The range of internal consistency for the five subscales was .90 to .93. Items were rated on a 5-point scale from 1 (strongly disagree) to 5 (strongly agree). The results of this study indicate that the SBLES is valid and reliable. The authors recommend that the scale be applied in nursing schools to evaluate the effectiveness of simulation-based learning curricula. Copyright © 2016 Elsevier Ltd. All rights reserved.
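Cronbach's coefficient alpha, used above for internal consistency, has a compact closed form: the number of items k scaled by one minus the ratio of summed item variances to the variance of the total score. A minimal, generic sketch (not tied to the SBLES data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    scores = np.asarray(scores, float)      # rows: respondents, cols: items
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)
```

Subscale values in the .90 to .93 range, as reported above, indicate high internal consistency; perfectly correlated items drive alpha to 1.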
Implementation of WirelessHART in the NS-2 Simulator and Validation of Its Correctness
Zand, Pouria; Mathews, Emi; Havinga, Paul; Stojanovski, Spase; Sisinni, Emiliano; Ferrari, Paolo
2014-01-01
One of the first standards in the wireless sensor networks domain, WirelessHART (HART (Highway Addressable Remote Transducer)), was introduced to address industrial process automation and control requirements. This standard can be used as a reference point to evaluate other wireless protocols in the domain of industrial monitoring and control. This makes it worthwhile to set up a reliable WirelessHART simulator in order to achieve that reference point in a relatively easy manner. Moreover, it offers an alternative to expensive testbeds for testing and evaluating the performance of WirelessHART. This paper explains our implementation of WirelessHART in the NS-2 network simulator. To our knowledge, this is the first implementation that supports the WirelessHART network manager, as well as the whole stack (all OSI (Open Systems Interconnection model) layers) of the WirelessHART standard. It also explains our effort to validate the correctness of our implementation, namely through the validation of the implementation of the WirelessHART stack protocol and of the network manager. We use sniffed traffic from a real WirelessHART testbed installed in the Idrolab plant for these validations. This confirms the validity of our simulator. Empirical analysis shows that the simulated results are closely comparable to the results obtained from real networks. We also demonstrate the versatility and usability of our implementation by providing further evaluation results in diverse scenarios. For example, we evaluate the performance of the WirelessHART network by applying incremental interference in a multi-hop network. PMID:24841245
NASA Astrophysics Data System (ADS)
Spies, M.; Rieder, H.; Orth, Th.; Maack, S.
2012-05-01
In this contribution we address the beam field simulation of 2D ultrasonic arrays using the Generalized Point Source Synthesis technique. Aiming at the inspection of cylindrical components (e.g. pipes), the influence of concave and convex surface curvatures, respectively, has been evaluated for a commercial probe. We have compared these results with those obtained using a commercial simulation tool. In civil engineering, the ultrasonic inspection of highly attenuating concrete structures has been advanced by the development of dry contact point transducers, mainly applied in array arrangements. Our respective simulations for a widely used commercial probe are validated using experimental results acquired on concrete half-spheres with diameters from 200 mm up to 650 mm.
NASA Astrophysics Data System (ADS)
Elbakary, Mohamed I.; Iftekharuddin, Khan M.; Papelis, Yiannis; Newman, Brett
2017-05-01
Air Traffic Management (ATM) concepts are commonly tested in simulation to obtain preliminary results and validate the concepts before adoption. Recently, researchers have found that simulation alone is not sufficient because of the complexity associated with ATM concepts; full-scale tests must eventually take place to provide compelling performance evidence before full implementation is adopted. Testing with full-scale aircraft is a high-cost approach that yields high-confidence results, whereas simulation is a low-risk, low-cost approach with reduced confidence in the results. One possible approach to increase the confidence of the results while simultaneously reducing the risk and the cost is to use unmanned sub-scale aircraft to test new ATM concepts. This paper presents simulation results for unmanned sub-scale aircraft implementing ATM concepts, compared to full-scale aircraft. The simulation results show that the performance of the sub-scale aircraft is comparable to that of the full-scale aircraft, which supports the use of sub-scale aircraft in testing new ATM concepts.
NASA Technical Reports Server (NTRS)
Stankovic, Ana V.
2003-01-01
Professor Stankovic will be developing and refining Simulink-based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM alternator in a nuclear electric propulsion vehicle. The control techniques will first be simulated using the validated models and then tried experimentally with hardware available at NASA. Testing and simulation of a 2 kW PM synchronous generator with diode bridge output is described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.
NASA Astrophysics Data System (ADS)
Gabi, Yasmine; Martins, Olivier; Wolter, Bernd; Strass, Benjamin
2018-04-01
The paper considers Rockwell hardness investigation by finite element simulation of the inspection of press-hardened parts using the 3MA non-destructive testing system. The FEM model is based on a robust calculation strategy that handles the geometric and temporal multiscale issues, as well as the local nonlinear hysteresis behavior of ferromagnetic materials. 3MA simulations are performed at a high-level operating point in order to saturate the microscopically soft surface layer of press-hardened steel and access mainly the bulk properties. 3MA measurements are validated by comparison with numerical simulations. Based on the simulation outputs, a virtual calibration is run. This result constitutes the first validation; the simulated calibration is in agreement with the conventional experimental data. As an outstanding highlight, a correlation between magnetic quantities and hardness can be described via the FEM-simulated signals and shows high accuracy relative to the measured results.
Validation of the ROMI-RIP rough mill simulator
Edward R. Thomas; Urs Buehlmann
2002-01-01
The USDA Forest Service's ROMI-RIP rough mill rip-first simulation program is a popular tool for analyzing rough mill conditions, determining more efficient rough mill practices, and finding optimal lumber board cut-up patterns. However, until now, the results generated by ROMI-RIP have not been rigorously compared to those of an actual rough mill. Validating the...
Challenges of forest landscape modeling - simulating large landscapes and validating results
Hong S. He; Jian Yang; Stephen R. Shifley; Frank R. Thompson
2011-01-01
Over the last 20 years, we have seen a rapid development in the field of forest landscape modeling, fueled by both technological and theoretical advances. Two fundamental challenges have persisted since the inception of FLMs: (1) balancing realistic simulation of ecological processes at broad spatial and temporal scales with computing capacity, and (2) validating...
Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT
Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.
2011-01-01
Purpose In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We have also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE for simulating projections of a hot sphere phantom. Results We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1: 2.73: 3.54: 7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole body phantom. Conclusions The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied for modeling the collimator-detector response with penetration and scatter effects for MPH in the quantitative reconstruction method. PMID:19779896
NASA Technical Reports Server (NTRS)
Geng, Tao; Paxson, Daniel E.; Zheng, Fei; Kuznetsov, Andrey V.; Roberts, William L.
2008-01-01
Pulsed combustion is receiving renewed interest as a potential route to higher performance in air breathing propulsion systems. Pulsejets offer a simple experimental device with which to study unsteady combustion phenomena and validate simulations. Previous computational fluid dynamic (CFD) simulation work focused primarily on the pulsejet combustion and exhaust processes. This paper describes a new inlet sub-model which simulates the fluidic and mechanical operation of a valved pulsejet head. The governing equations for this sub-model are described. Sub-model validation is provided through comparisons of simulated and experimentally measured reed valve motion, and time averaged inlet mass flow rate. The updated pulsejet simulation, with the inlet sub-model implemented, is validated through comparison with experimentally measured combustion chamber pressure, inlet mass flow rate, operational frequency, and thrust. Additionally, the simulated pulsejet exhaust flowfield, which is dominated by a starting vortex ring, is compared with particle imaging velocimetry (PIV) measurements on the bases of velocity, vorticity, and vortex location. The results show good agreement between simulated and experimental data. The inlet sub-model is shown to be critical for the successful modeling of pulsejet operation. This sub-model correctly predicts both the inlet mass flow rate and its phase relationship with the combustion chamber pressure. As a result, the predicted pulsejet thrust agrees very well with experimental data.
NASA Technical Reports Server (NTRS)
Wey, Thomas
2017-01-01
This paper summarizes the reacting results of simulating a bluff body stabilized flame experiment of Volvo Validation Rig using a releasable edition of the National Combustion Code (NCC). The turbulence models selected to investigate the configuration are the sub-grid scaled kinetic energy coupled large eddy simulation (K-LES) and the time-filtered Navier-Stokes (TFNS) simulation. The turbulence chemistry interaction used is linear eddy mixing (LEM).
Simulation-based training for prostate surgery.
Khan, Raheej; Aydin, Abdullatif; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran
2015-10-01
To identify and review the currently available simulators for prostate surgery and to explore the evidence supporting their validity for training purposes. A review of the literature between 1999 and 2014 was performed. The search terms included a combination of urology, prostate surgery, robotic prostatectomy, laparoscopic prostatectomy, transurethral resection of the prostate (TURP), simulation, virtual reality, animal model, human cadavers, training, assessment, technical skills, validation and learning curves. Furthermore, relevant abstracts from the American Urological Association, European Association of Urology, British Association of Urological Surgeons and World Congress of Endourology meetings, between 1999 and 2013, were included. Only studies related to prostate surgery simulators were included; studies regarding other urological simulators were excluded. A total of 22 validation studies were identified. Five validated models and/or simulators were identified for TURP, one for photoselective vaporisation of the prostate, two for holmium enucleation of the prostate, three for laparoscopic radical prostatectomy (LRP) and four for robot-assisted surgery. Of the TURP simulators, all five have demonstrated content validity, three face validity and four construct validity. The GreenLight laser simulator has demonstrated face, content and construct validity. The Kansai HoLEP Simulator has demonstrated face and content validity, whilst the UroSim HoLEP Simulator has demonstrated face, content and construct validity. All three animal models for LRP have been shown to have construct validity, whilst the chicken skin model was also content valid. Only two robotic simulators were identified with relevance to robot-assisted laparoscopic prostatectomy, both of which demonstrated construct validity.
A wide range of different simulators are available for prostate surgery, including synthetic bench models, virtual-reality platforms, animal models, human cadavers, distributed simulation and advanced training programmes and modules. The currently validated simulators can be used by healthcare organisations to provide supplementary training sessions for trainee surgeons. Further research should be conducted to validate simulated environments, to determine which simulators have greater efficacy than others and to assess the cost-effectiveness of the simulators and the transferability of skills learnt. With surgeons investigating new possibilities for easily reproducible and valid methods of training, simulation offers great scope for implementation alongside traditional methods of training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.
Ares I-X Range Safety Simulation Verification and Analysis Independent Validation and Verification
NASA Technical Reports Server (NTRS)
Merry, Carl M.; Tarpley, Ashley F.; Craig, A. Scott; Tartabini, Paul V.; Brewer, Joan D.; Davis, Jerel G.; Dulski, Matthew B.; Gimenez, Adrian; Barron, M. Kyle
2011-01-01
NASA's Ares I-X vehicle launched on a suborbital test flight from the Eastern Range in Florida on October 28, 2009. To obtain approval for launch, a range safety final flight data package was generated to meet the data requirements defined in the Air Force Space Command Manual 91-710 Volume 2. The delivery included products such as a nominal trajectory, trajectory envelopes, stage disposal data and footprints, and a malfunction turn analysis. The Air Force's 45th Space Wing uses these products to ensure public and launch area safety. Due to the criticality of these data, an independent validation and verification effort was undertaken to ensure data quality and adherence to requirements. As a result, the product package was delivered with the confidence that independent organizations using separate simulation software generated data to meet the range requirements and yielded consistent results. This document captures the Ares I-X final flight data package verification and validation analysis, including the methodology used to validate and verify simulation inputs, execution, and results, and presents lessons learned during the process.
Validation of a virtual reality-based simulator for shoulder arthroscopy.
Rahm, Stefan; Germann, Marco; Hingsammer, Andreas; Wieser, Karl; Gerber, Christian
2016-05-01
The aim of this study was to determine the face and construct validity of a new virtual reality-based shoulder arthroscopy simulator which uses passive haptic feedback. Fifty-one participants, including 25 novices (<20 shoulder arthroscopies) and 26 experts (>100 shoulder arthroscopies), completed two tests: for the assessment of face validity, a questionnaire was filled out concerning the quality of the simulated reality and the training potential, using a 7-point Likert scale (range 1-7). Construct validity was tested by comparing simulator metrics (operation time in seconds, camera and grasper pathway in centimetres, and grasper openings) between the novices' and experts' test results. Overall simulated reality was rated high, with a median value of 5.5 (range 2.8-7) points. Training capacity scored a median value of 5.8 (range 3-7) points. Experts were significantly faster in the diagnostic test, with a median of 91 (range 37-208) s, than novices, with 177 (range 81-383) s (p < 0.0001), and in the therapeutic test, with 102 (range 58-283) s versus 229 (range 114-399) s (p < 0.0001). Similar results were seen in the other metric values except for the camera pathway in the therapeutic test. The tested simulator achieved high scores in terms of realism and training capability. It reliably discriminated between novices and experts. Further improvements of the simulator, especially in the field of therapeutic arthroscopy, might improve its value as a training and assessment tool for shoulder arthroscopy skills. Level of evidence: II.
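Construct-validity testing of this kind reduces to comparing performance metrics between experience groups. The sketch below uses invented completion times, not the study's raw data, and a one-sided Mann-Whitney U test, a common nonparametric choice when medians and ranges are reported:

```python
import numpy as np
from scipy import stats

# illustrative completion times in seconds (placeholder values)
experts = np.array([91, 74, 105, 88, 120, 95, 83, 110])
novices = np.array([177, 210, 150, 195, 230, 160, 185, 205])

# one-sided Mann-Whitney U: are experts faster than novices?
u_stat, p_value = stats.mannwhitneyu(experts, novices, alternative="less")
construct_valid = p_value < 0.05   # simulator discriminates by experience
```

A significant result indicates that the metric separates novices from experts, which is the operational definition of construct validity used in these simulator studies.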
Validation of a Novel Virtual Reality Simulator for Robotic Surgery
Schreuder, Henk W. R.; Persson, Jan E. U.; Wolswijk, Richard G. H.; Ihse, Ingmar; Schijven, Marlies P.; Verheijen, René H. M.
2014-01-01
Objective. With the increase in robotic-assisted laparoscopic surgery there is a concomitant rising demand for training methods. The objective was to establish face and construct validity of a novel virtual reality simulator (dV-Trainer, Mimic Technologies, Seattle, WA) for the use in training of robot-assisted surgery. Methods. A comparative cohort study was performed. Participants (n = 42) were divided into three groups according to their robotic experience. To determine construct validity, participants performed three different exercises twice. Performance parameters were measured. To determine face validity, participants filled in a questionnaire after completion of the exercises. Results. Experts outperformed novices in most of the measured parameters. The most discriminative parameters were “time to complete” and “economy of motion” (P < 0.001). The training capacity of the simulator was rated 4.6 ± 0.5 SD on a 5-point Likert scale. The realism of the simulator in general, visual graphics, movements of instruments, interaction with objects, and the depth perception were all rated as being realistic. The simulator is considered to be a very useful training tool for residents and medical specialist starting with robotic surgery. Conclusions. Face and construct validity for the dV-Trainer could be established. The virtual reality simulator is a useful tool for training robotic surgery. PMID:24600328
Experimental Validation of a Closed Brayton Cycle System Transient Simulation
NASA Technical Reports Server (NTRS)
Johnson, Paul K.; Hervol, David S.
2006-01-01
The Brayton Power Conversion Unit (BPCU) located at NASA Glenn Research Center (GRC) in Cleveland, Ohio was used to validate the results of a computational code known as Closed Cycle System Simulation (CCSS). Conversion system thermal transient behavior was the focus of this validation. The BPCU was operated at various steady state points and then subjected to transient changes involving shaft rotational speed and thermal energy input. These conditions were then duplicated in CCSS. Validation of the CCSS BPCU model provides confidence in developing future Brayton power system performance predictions, and helps to guide high power Brayton technology development.
NASA Astrophysics Data System (ADS)
Ubieta, Eduardo; Hoyo, Itzal del; Valenzuela, Loreto; Lopez-Martín, Rafael; Peña, Víctor de la; López, Susana
2017-06-01
A simulation model of a parabolic-trough solar collector developed in Modelica® language is calibrated and validated. The calibration is performed in order to approximate the behavior of the solar collector model to a real one due to the uncertainty in some of the system parameters, i.e. measured data is used during the calibration process. Afterwards, the validation of this calibrated model is done. During the validation, the results obtained from the model are compared to the ones obtained during real operation in a collector from the Plataforma Solar de Almeria (PSA).
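Calibration against measured data of the kind described above is essentially a parameter-fitting problem. The sketch below is illustrative only: the toy collector model, parameter names, and data are invented, not the Modelica model of the PSA loop. Uncertain parameters are adjusted until the model output matches measurements in a least-squares sense.

```python
import numpy as np
from scipy.optimize import curve_fit

# toy steady-state collector model: outlet temperature rises with direct
# normal irradiance (dni) and falls with mass flow; a and b are the
# uncertain parameters to be calibrated against measured data
def outlet_temp(x, a, b):
    dni, mass_flow, t_in = x
    return t_in + a * dni / (mass_flow + b)

dni = np.array([700.0, 800.0, 900.0, 850.0, 750.0])     # W/m^2
mdot = np.array([5.0, 5.5, 6.0, 5.8, 5.2])              # kg/s
t_in = np.array([290.0, 292.0, 295.0, 294.0, 291.0])    # K

# synthetic "measured" outlet temperatures (true a = 0.04, b = 1.5)
t_meas = t_in + 0.04 * dni / (mdot + 1.5)

# least-squares calibration of the uncertain parameters
popt, _ = curve_fit(outlet_temp, (dni, mdot, t_in), t_meas, p0=(0.01, 1.0))
a_cal, b_cal = popt
```

After calibration, validation proceeds on a separate operating dataset: the calibrated model's predictions are compared against measurements that were not used in the fit.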
Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model
NASA Technical Reports Server (NTRS)
Lusardi, Jeff A.; Blanken, Chris L.; Tischler, Mark B.
2002-01-01
A simulation study of a recently developed hover/low speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement, providing validation of the more complex blade-element method of simulating turbulence.
Validation of Broadband Ground Motion Simulations for Japanese Crustal Earthquakes by the Recipe
NASA Astrophysics Data System (ADS)
Iwaki, A.; Maeda, T.; Morikawa, N.; Miyake, H.; Fujiwara, H.
2015-12-01
The Headquarters for Earthquake Research Promotion (HERP) of Japan has organized the broadband ground motion simulation method into a standard procedure called the "recipe" (HERP, 2009). In the recipe, the source rupture is represented by the characterized source model (Irikura and Miyake, 2011). The broadband ground motion time histories are computed by a hybrid approach: the 3-D finite-difference method (Aoi et al. 2004) and the stochastic Green's function method (Dan and Sato, 1998; Dan et al. 2000) for the long- (> 1 s) and short-period (< 1 s) components, respectively, using the 3-D velocity structure model. As the engineering significance of scenario earthquake ground motion prediction is increasing, thorough verification and validation are required for the simulation methods. This study presents the self-validation of the recipe for two MW6.6 crustal events in Japan, the 2000 Tottori and 2004 Chuetsu (Niigata) earthquakes. We first compare the simulated velocity time series with the observations. Main features of the velocity waveforms, such as the near-fault pulses and the large later phases on deep sediment sites, are well reproduced by the simulations. Then we evaluate 5% damped pseudo-acceleration spectra (PSA) in the framework of the SCEC Broadband Platform (BBP) validation (Dreger et al. 2015). The validation results are generally acceptable in the period range 0.1-10 s, whereas those in the shortest period range (0.01-0.1 s) are less satisfactory. We also evaluate the simulations with the 1-D velocity structure models used in the SCEC BBP validation exercise. Although the goodness-of-fit parameters for PSA do not significantly differ from those for the 3-D velocity structure model, noticeable differences in velocity waveforms are observed. Our results suggest the importance of 1) a well-constrained 3-D velocity structure model for broadband ground motion simulations and 2) evaluation of time series of ground motion as well as response spectra.
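Goodness-of-fit in SCEC BBP-style validations is commonly expressed as the natural-log residual of observed to simulated response spectra, averaged over periods. A minimal sketch of that metric follows; all PSA values are invented for illustration and are not from the study:

```python
import numpy as np

# Hypothetical 5%-damped PSA values (in g) at a few periods for one station.
periods = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
psa_obs = np.array([0.45, 0.60, 0.30, 0.10, 0.020])
psa_sim = np.array([0.50, 0.55, 0.33, 0.09, 0.025])

# Log residual per period: positive values mean the simulation underpredicts.
residual = np.log(psa_obs / psa_sim)
bias = residual.mean()  # overall model bias across periods
```

Averaging in log space treats over- and underprediction by the same factor symmetrically, which is why it is preferred over raw percentage differences for response spectra.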
Validation studies of the DOE-2 Building Energy Simulation Program. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, R.; Winkelmann, F.
1998-06-01
This report documents many of the validation studies (Table 1) of the DOE-2 building energy analysis simulation program that have taken place since 1981. Results for several versions of the program are presented, with the most recent study conducted in 1996 on version DOE-2.1E and the earliest study conducted in 1981 on version DOE-1.3. This work is part of an effort related to continued development of DOE-2, particularly in its use as a simulation engine for new specialized versions of the program such as the recently released RESFEN 3.1, a program specifically dealing with analyzing the energy performance of windows in residential buildings. The intent in providing the results of these validation studies is to give potential users of the program a high degree of confidence in the calculated results. Validation studies in which calculated simulation data are compared to measured data have been conducted throughout the development of the DOE-2 program. Discrepancies discovered during the course of such work have resulted in improvements in the simulation algorithms. Table 2 provides a listing of additions and modifications that have been made to various versions of the program since version DOE-2.1A. One of the most significant recent changes in the program occurred with version DOE-2.1E, in which an improved algorithm for calculating the outside surface film coefficient was implemented. In addition, integration of the WINDOW 4 program was accomplished, resulting in improved ability to analyze window energy performance. Validation and verification of a program as sophisticated as DOE-2 must necessarily be limited because of the approximations inherent in the program. For example, the most accurate model of the heat transfer processes in a building would include a three-dimensional analysis.
To justify such detailed algorithmic procedures would correspondingly require detailed information describing the building and/or HVAC system and energy plant parameters. Until building simulation programs can get these data directly from CAD programs, such detail would negate the usefulness of the program for the practicing engineers and architects who currently use it. In addition, the validation studies discussed herein indicate that such detail is really unnecessary. The comparison of calculated and measured quantities has resulted in a satisfactory level of confidence, sufficient for continued use of the DOE-2 program. However, additional validation is warranted, particularly at the component level, to further improve the program.
NASA Technical Reports Server (NTRS)
Chen, R. T. N.; Daughaday, H.; Andrisani, D., II; Till, R. D.; Weingarten, N. C.
1975-01-01
The results of a feasibility study and preliminary design for active control research and validation using the Total In-Flight Simulator (TIFS) aircraft are documented. Active control functions which can be demonstrated on the TIFS aircraft and the cost of preparing, equipping, and operating the TIFS aircraft for active control technology development are determined. It is shown that the TIFS aircraft is a suitable test bed for in-flight research and validation of many ACT concepts.
Dynamic Time Warping compared to established methods for validation of musculoskeletal models.
Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael
2017-04-11
By means of multi-body musculoskeletal simulation, important variables such as internal joint forces and moments can be estimated which cannot be measured directly. Validation can be performed by qualitative or quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well, and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, which can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE, and normalized RMSE. This study is based on two data sets, where one data set represents direct validation and the other represents indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a 6-component force-moment sensor. Measured forces and moments from the amputees' socket prostheses are compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken of seven muscles of the leg, which were compared to simulated muscle activations. Regarding direct validation, a positive linear relation between the results of RMSE and nRMSE and those of DTW can be seen. For indirect validation, a negative linear relation exists between Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation, as it correlates well with methods that are most suitable for one of the tasks. However, in direct validation it should be used together with methods resulting in a dimensional error value, in order to make the results easier to interpret. Copyright © 2017 Elsevier Ltd. All rights reserved.
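As a rough illustration of why DTW complements amplitude-only metrics, the sketch below compares a signal with a pure phase lag to its reference using RMSE, Pearson correlation, and a textbook DTW recursion. This is a generic implementation on synthetic signals, not the authors' code or data:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 100)
measured = np.sin(t)         # stand-in for a measured joint moment
simulated = np.sin(t - 0.3)  # simulated signal with a pure phase lag

rmse = np.sqrt(np.mean((measured - simulated) ** 2))
r = np.corrcoef(measured, simulated)[0, 1]
dtw = dtw_distance(measured, simulated)
```

Because warping can absorb the phase lag, the DTW cost stays at or below the cost of the unwarped point-by-point comparison, while RMSE penalizes the lag as if it were an amplitude error.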
Hybrid Particle-Element Simulation of Impact on Composite Orbital Debris Shields
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
2004-01-01
This report describes the development of new numerical methods and new constitutive models for the simulation of hypervelocity impact effects on spacecraft. The research has included parallel implementation of the numerical methods and material models developed under the project. Validation work has included both one dimensional simulations, for comparison with exact solutions, and three dimensional simulations of published hypervelocity impact experiments. The validated formulations have been applied to simulate impact effects in a velocity and kinetic energy regime outside the capabilities of current experimental methods. The research results presented here allow for the expanded use of numerical simulation, as a complement to experimental work, in future design of spacecraft for hypervelocity impact effects.
Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. 
Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.
Validation of an Active Gear, Flexible Aircraft Take-off and Landing analysis (AGFATL)
NASA Technical Reports Server (NTRS)
Mcgehee, J. R.
1984-01-01
The results of an analytical investigation using a computer program for active gear, flexible aircraft take off and landing analysis (AGFATL) are compared with experimental data from shaker tests, drop tests, and simulated landing tests to validate the AGFATL computer program. Comparison of experimental and analytical responses for both passive and active gears indicates good agreement for shaker tests and drop tests. For the simulated landing tests, the passive and active gears were influenced by large strut binding friction forces. The inclusion of these undefined forces in the analytical simulations was difficult, and consequently only fair to good agreement was obtained. An assessment of the results from the investigation indicates that the AGFATL computer program is a valid tool for the study and initial design of series hydraulic active control landing gear systems.
Computational fluid dynamics modeling of laboratory flames and an industrial flare.
Singh, Kanwar Devesh; Gangadharan, Preeti; Chen, Daniel H; Lou, Helen H; Li, Xianchang; Richmond, Peyton
2014-11-01
A computational fluid dynamics (CFD) methodology for simulating the combustion process has been validated with experimental results. Three different types of experimental setups were used to validate the CFD model: an industrial-scale flare setup and two lab-scale flames. The CFD study also involved three different fuels: C3H6/CH/Air/N2, C2H4/O2/Ar and CH4/Air. In the first setup, flare efficiency data from the Texas Commission on Environmental Quality (TCEQ) 2010 field tests were used to validate the CFD model. In the second setup, a McKenna burner with flat flames was simulated. Temperature and mass fractions of important species were compared with the experimental data. Finally, results of an experimental study done at Sandia National Laboratories to generate a lifted jet flame were used for the purpose of validation. The reduced 50-species mechanism LU 1.1, the realizable k-epsilon turbulence model, and the EDC turbulence-chemistry interaction model were used for this work. Flare efficiency, axial profiles of temperature, and mass fractions of various intermediate species obtained in the simulation were compared with experimental data, and a good agreement between the profiles was clearly observed. In particular, the simulation match with the TCEQ 2010 flare tests has been significantly improved (within 5% of the data) compared to the results reported by Singh et al. in 2012. Validation of the speciated flat flame data supports the view that flares can be a primary source of formaldehyde emission.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation are therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation, by estimating the numerical approximation error, computational model induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
Validation results of satellite mock-up capturing experiment using nets
NASA Astrophysics Data System (ADS)
Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil
2017-05-01
The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of different experiments under microgravity conditions where a net was launched, capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment has been performed over thirty parabolas offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launching angles using a pneumatic-based dedicated mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow the images to be post-processed using colour segmentation, stereo matching, and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to accurately determine the initial conditions and generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. 
The simulator has been properly configured according to the parabolic flight scenario, and executed in order to generate the validation data. Both datasets have been compared according to different metrics in order to perform the validation of the PATENDER simulator.
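The knot-tracking step relies on iterative closest point alignment. Below is a generic numpy/scipy ICP sketch (nearest-neighbour correspondences plus a Kabsch rigid fit), with a synthetic point cloud standing in for the net knots; it is unrelated to the PATENDER implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Kabsch: least-squares rotation R and translation t mapping src -> dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Iterative closest point: rigidly align the src cloud onto dst."""
    cur = src.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(cur)                     # closest-point matches
        R, t = best_rigid_transform(cur, dst[idx])   # fit to current matches
        cur = cur @ R.T + t
    return cur

# Synthetic "knot" cloud and a slightly rotated/translated copy of it.
rng = np.random.default_rng(0)
knots = rng.random((60, 3))
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = knots @ Rz.T + np.array([0.02, -0.01, 0.01])

aligned = icp(knots, moved)
```

With correspondences unknown, ICP alternates matching and fitting; it converges reliably here only because the initial misalignment is small relative to the point spacing, which mirrors why good initial conditions mattered for the tracking in the experiment.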
NASA Technical Reports Server (NTRS)
Schulte, Peter Z.; Moore, James W.
2011-01-01
The Crew Exploration Vehicle Parachute Assembly System (CPAS) project conducts computer simulations to verify that flight performance requirements on parachute loads and terminal rate of descent are met. Design of Experiments (DoE) provides a systematic method for variation of simulation input parameters. When implemented and interpreted correctly, a DoE study of parachute simulation tools indicates values and combinations of parameters that may cause requirement limits to be violated. This paper describes one implementation of DoE that is currently being developed by CPAS, explains how DoE results can be interpreted, and presents the results of several preliminary studies. The potential uses of DoE to validate parachute simulation models and verify requirements are also explored.
Experimental validation for thermal transmittances of window shading systems with perimeter gaps
Hart, Robert; Goudey, Howdy; Curcija, D. Charlie
2018-02-22
Virtually all residential and commercial windows in the U.S. have some form of window attachment, but few have been designed for energy savings. ISO 15099 presents a simulation framework to determine thermal performance of window attachments, but the model has not been validated for these products. This paper outlines a review and validation of the ISO 15099 centre-of-glass heat transfer correlations for perimeter gaps (top, bottom, and side) in naturally ventilated cavities through measurement and simulation. The thermal transmittance impact due to dimensional variations of these gaps is measured experimentally, simulated using computational fluid dynamics, and simulated utilizing simplified correlations from ISO 15099. Results show that the ISO 15099 correlations produce a mean error between measured and simulated heat flux of 2.5 ± 7%. These tolerances are similar to those obtained from sealed cavity comparisons and are deemed acceptable within the ISO 15099 framework.
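A "mean error of 2.5 ± 7%" of this kind is the mean and standard deviation of the signed percentage errors across test configurations. A minimal sketch of the computation; the flux values are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical measured vs correlation-based heat fluxes (W/m^2)
# for several perimeter-gap configurations.
q_meas = np.array([48.2, 51.0, 44.7, 55.3, 49.9])
q_sim  = np.array([49.5, 50.1, 46.0, 57.0, 48.8])

# Signed percentage error per configuration, then mean +/- sample std.
pct_err = 100.0 * (q_sim - q_meas) / q_meas
mean_err, spread = pct_err.mean(), pct_err.std(ddof=1)
```

Keeping the sign matters: a small mean with a larger spread (as in the paper's 2.5 ± 7%) indicates little systematic bias but appreciable scatter between configurations.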
Validity evidence and reliability of a simulated patient feedback instrument
2012-01-01
Background In the training of healthcare professionals, one of the advantages of communication training with simulated patients (SPs) is the SP's ability to provide direct feedback to students after a simulated clinical encounter. The quality of SP feedback must be monitored, especially because it is well known that feedback can have a profound effect on student performance. Due to the current lack of valid and reliable instruments to assess the quality of SP feedback, our study examined the validity and reliability of one potential instrument, the 'modified Quality of Simulated Patient Feedback Form' (mQSF). Methods Content validity of the mQSF was assessed by inviting experts in the area of simulated clinical encounters to rate the importance of the mQSF items. Moreover, generalizability theory was used to examine the reliability of the mQSF. Our data came from videotapes of clinical encounters between six simulated patients and six students and the ensuing feedback from the SPs to the students. Ten faculty members judged the SP feedback according to the items on the mQSF. Three weeks later, this procedure was repeated with the same faculty members and recordings. Results All but two items of the mQSF received importance ratings of > 2.5 on a four-point rating scale. A generalizability coefficient of 0.77 was established with two judges observing one encounter. Conclusions The findings for content validity and reliability with two judges suggest that the mQSF is a valid and reliable instrument to assess the quality of feedback provided by simulated patients. PMID:22284898
Validating a driving simulator using surrogate safety measures.
Yan, Xuedong; Abdel-Aty, Mohamed; Radwan, Essam; Wang, Xuesong; Chilakapati, Praveen
2008-01-01
Traffic crash statistics and previous research have shown an increased risk of traffic crashes at signalized intersections. How to diagnose safety problems and develop effective countermeasures to reduce the crash rate at intersections is a key task for traffic engineers and researchers. This study aims at investigating whether the driving simulator can be used as a valid tool to assess traffic safety at signalized intersections. In support of the research objective, this simulator validity study was conducted from two perspectives, a traffic parameter (speed) and a safety parameter (crash history). A signalized intersection with as many important features as possible (including roadway geometries, traffic control devices, intersection surroundings, and buildings) was replicated in a high-fidelity driving simulator. A driving simulator experiment with eight scenarios at the intersection was conducted to determine if the subjects' speed behavior and traffic risk patterns in the driving simulator were similar to those found at the real intersection. The experiment results showed that speed data observed in the field and in the simulator experiment both follow normal distributions and have equal means for each intersection approach, which validated the driving simulator in absolute terms. Furthermore, this study used an innovative approach of contrasting surrogate safety measures from the simulator with the crash analysis for the field data. 
The simulator experiment results indicated that compared to the right-turn lane with the low rear-end crash history record (2 crashes), subjects showed a series of more risky behaviors at the right-turn lane with the high rear-end crash history record (16 crashes), including a higher deceleration rate (1.80+/-1.20 m/s(2) versus 0.80+/-0.65 m/s(2)), a higher non-stop right-turn rate on red (81.67% versus 57.63%), higher right-turn speed at the stop line (18.38+/-8.90 km/h versus 14.68+/-6.04 km/h), shorter following distance (30.19+/-13.43 m versus 35.58+/-13.41 m), and higher rear-end probability (9/59=0.153 versus 2/60=0.033). Therefore, the relative validity of the driving simulator was well established for traffic safety studies at signalized intersections.
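Absolute and relative validity checks of this kind reduce to standard two-sample tests: comparing mean speeds between field and simulator, and comparing event proportions between lanes. The sketch below uses invented speed samples; only the 9/59 and 2/60 rear-end counts come from the abstract:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical approach speeds (km/h): field observations vs simulator runs.
field = rng.normal(52.0, 6.0, 80)
sim = rng.normal(51.4, 6.5, 60)

# Absolute validity: are the mean speeds statistically indistinguishable?
t_stat, p_speed = stats.ttest_ind(field, sim, equal_var=False)

# Relative validity: compare rear-end event proportions (9/59 vs 2/60)
# via a chi-square test on the 2x2 contingency table.
table = np.array([[9, 59 - 9],
                  [2, 60 - 2]])
chi2, p_prop, dof, expected = stats.chi2_contingency(table)
```

A non-significant `p_speed` supports absolute validity (equal means), while a small `p_prop` supports the claim that risk patterns differ between the two lanes in the same direction as the crash history.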
Enhancement of CFD validation exercise along the roof profile of a low-rise building
NASA Astrophysics Data System (ADS)
Deraman, S. N. C.; Majid, T. A.; Zaini, S. S.; Yahya, W. N. W.; Abdullah, J.; Ismail, M. A.
2018-04-01
The aim of this study is to enhance the validation of a CFD exercise along the roof profile of a low-rise building. An isolated gabled-roof house having a 26.6° roof pitch was simulated to obtain the pressure coefficients around the house. Validation of a CFD analysis against experimental data requires many input parameters. This study performed the CFD simulation based on the data from a previous study; where the input parameters were not clearly stated, new input parameters were established from the open literature. The numerical simulations were performed in FLUENT 14.0 by applying the Computational Fluid Dynamics (CFD) approach based on the steady RANS equations together with the RNG k-ɛ model. The CFD results were then analysed using quantitative tests (statistical analysis) and compared with the CFD results from the previous study. The statistical analysis results from the ANOVA test and error measures showed that the CFD results from the current study produced good agreement and exhibited the smallest error compared to the previous study. All the input data used in this study can be extended to other types of CFD simulation involving wind flow over an isolated single-storey house.
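An error-measure comparison against a previous CFD study can be illustrated with a simple RMSE over pressure coefficients along the roof profile. All values below are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical pressure coefficients at stations along the roof profile:
# wind-tunnel reference vs two CFD runs (current study and previous study).
cp_exp  = np.array([0.55, -0.30, -0.85, -0.60, -0.35])
cp_new  = np.array([0.50, -0.34, -0.80, -0.63, -0.33])
cp_prev = np.array([0.42, -0.45, -0.70, -0.70, -0.25])

def rmse(a, b):
    """Root-mean-square error between two profiles."""
    return np.sqrt(np.mean((a - b) ** 2))

err_new, err_prev = rmse(cp_new, cp_exp), rmse(cp_prev, cp_exp)
```

The run with the smaller RMSE agrees more closely with the experimental profile, which is the sense in which the current study "exhibited the smallest error".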
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Marte
2013-12-31
This research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation; perform laboratory-scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator; perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport; test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production; and develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: a true-3D hydro-thermal fracturing computer code that is particularly suited to EGS; documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock; documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications; and a database of monitoring data, with a focus on acoustic emissions (AE) from lab-scale modeling and field case histories of EGS reservoir creation.
Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions.
Atallah, Nabil M; El-Fadel, Mutasem; Ghanimeh, Sophia; Saikaly, Pascal; Abou-Najm, Majdi
2014-12-01
In this study, two experimental sets of data, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators (methane generation, pH, acetate, total COD, and ammonia) as well as an equally weighted combination of the five. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating the methane experimental results, it predicted the other intermediary outputs less accurately. The multi-objective optimization, on the other hand, provided better overall results than methane-only optimization, even though it did not fully capture the intermediary outputs. The results of the parameter optimization were validated by independent application to the data sets of the second digester. Copyright © 2014 Elsevier Ltd. All rights reserved.
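An equally weighted multi-objective calibration of this kind can be sketched as minimizing a weighted sum of normalized indicator errors. The two-parameter "model" below is a toy stand-in, not ADM1, and every number is invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def model(params):
    """Toy digester stand-in: two parameters map to five steady-state
    indicators (methane, pH, acetate, COD, ammonia)."""
    k, K = params
    return np.array([10 * k / (K + k),     # "methane"
                     7.0 + 0.2 * (k - K),  # "pH"
                     2.0 / (1 + k),        # "acetate"
                     5.0 * K,              # "COD"
                     1.0 + 0.5 * K])       # "ammonia"

measured = np.array([6.0, 7.1, 0.9, 2.5, 1.3])  # hypothetical observations
weights = np.ones(5) / 5                        # equally weighted indicators

def objective(params):
    # Normalize each error by its measurement so units don't dominate.
    rel_err = (model(params) - measured) / measured
    return np.sum(weights * rel_err ** 2)

res = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
```

Normalizing per indicator is what makes an "equally weighted" combination meaningful when the indicators have very different units and magnitudes (m³ of methane vs pH units vs g/L of COD).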
NASA Technical Reports Server (NTRS)
Arneson, Heather; Evans, Antony D.; Li, Jinhua; Wei, Mei Yueh
2017-01-01
Integrated Demand Management (IDM) is a near- to mid-term NASA concept that proposes to address mismatches in air traffic system demand and capacity by using strategic flow management capabilities to pre-condition demand into the more tactical Time-Based Flow Management System (TBFM). This paper describes an automated simulation capability to support IDM concept development. The capability closely mimics existing human-in-the-loop (HITL) capabilities, automating both the human components and collaboration between operational systems, and speeding up the real-time aircraft simulations. Such a capability allows for parametric studies that will inform the HITL simulations, identifying breaking points and parameter values at which significant changes in system behavior occur. This paper also describes the initial validation of individual components of the automated simulation capability, and an example application comparing the performance of the IDM concept under two TBFM scheduling paradigms. The results and conclusions from this simulation compare closely to those from previous HITL simulations using similar scenarios, providing an initial validation of the automated simulation capability.
Validation of thermal effects of LED package by using Elmer finite element simulation method
NASA Astrophysics Data System (ADS)
Leng, Lai Siang; Retnasamy, Vithyacharan; Mohamad Shahimin, Mukhzeer; Sauli, Zaliman; Taniselass, Steven; Bin Ab Aziz, Muhamad Hafiz; Vairavan, Rajendaran; Kirtsaeng, Supap
2017-02-01
The overall performance of a light-emitting diode (LED) package is critically affected by heat dissipation. In this study, the open-source software Elmer FEM was utilized for thermal analysis of the LED package. To form a complete simulation workflow, the Salome and ParaView software were introduced as pre- and post-processor, respectively. The thermal behaviour of the LED package was evaluated with this toolchain, and the result was validated against commercially licensed software based on previous work. The percentage difference between the two sets of simulation results is less than 5%, which is tolerable and shows that the open-source approach is comparable.
Li, Zhaofu; Liu, Hongyu; Luo, Chuan; Li, Yan; Li, Hengpeng; Pan, Jianjun; Jiang, Xiaosan; Zhou, Quansuo; Xiong, Zhengqin
2015-05-01
The Hydrological Simulation Program-Fortran (HSPF), a hydrological and water-quality computer model developed by the United States Environmental Protection Agency, was employed to simulate runoff and nutrient export from a typical small watershed in a hilly eastern monsoon region of China. First, a parameter sensitivity analysis was performed to assess how changes in the model parameters affect runoff and nutrient export. Next, the model was calibrated and validated using measured runoff and nutrient concentration data. The Nash-Sutcliffe efficiency (ENS) values of the yearly runoff were 0.87 and 0.69 for the calibration and validation periods, respectively. For storm runoff events, the ENS values were 0.93 for the calibration period and 0.47 for the validation period. Antecedent precipitation and soil moisture conditions can affect the simulation accuracy of storm event flow. The ENS values for total nitrogen (TN) export were 0.58 for the calibration period and 0.51 for the validation period, and the correlation coefficients between the observed and simulated TN concentrations were 0.84 and 0.74, respectively. For phosphorus export, the ENS values were 0.89 for the calibration period and 0.88 for the validation period, and the correlation coefficients between the observed and simulated orthophosphate concentrations were 0.96 and 0.94, respectively. The nutrient simulation results are generally satisfactory even though the parameter-lumped HSPF model cannot represent the effects of the spatial pattern of land cover on nutrient export. The model parameters obtained in this study could serve as reference values for applying the model to similar regions. In addition, HSPF can properly describe the characteristics of water quantity and quality processes in this area.
After adjustment, calibration, and validation of the parameters, the HSPF model is suitable for hydrological and water-quality simulations in watershed planning and management and for designing best management practices.
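The Nash-Sutcliffe efficiency used throughout the abstract above can be computed directly from paired observed and simulated series. A minimal sketch in Python (the runoff numbers are invented for illustration, not taken from the study):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    ENS = 1 is a perfect fit; ENS <= 0 means the model predicts no
    better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    sse = np.sum((observed - simulated) ** 2)
    var = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / var

# Hypothetical monthly runoff series (mm), not from the study:
obs = [12.0, 30.5, 55.1, 20.3, 8.7, 4.2]
sim = [10.8, 33.0, 50.2, 22.9, 9.5, 5.0]
print(round(nash_sutcliffe(obs, sim), 3))
```

Because ENS values at or below zero mean the model is no better than simply predicting the observed mean, a storm-event validation value of 0.47 is considerably weaker than a yearly value of 0.69.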
Martin, Kevin D; Amendola, Annunziato; Phisitkul, Phinit
2016-01-01
Abstract Purpose Orthopedic education continues to move towards an evidence-based curriculum in order to comply with new residency accreditation mandates. There are currently three high-fidelity arthroscopic virtual reality (VR) simulators available, each with multiple instructional modules and simulated arthroscopic procedures. The aim of the current study is to assess the face validity, defined as the degree to which a procedure appears effective in terms of its stated aims, of the three available VR simulators. Methods Thirty subjects were recruited from a single orthopedic residency training program. Each subject completed one training session on each of the three leading VR arthroscopic simulators (ARTHRO Mentor, Simbionix; ArthroS, VirtaMed; and ArthroSim, ToLTech). Each arthroscopic session involved simulator-specific modules. After the training sessions, subjects completed a previously validated simulator questionnaire for face validity. Results The median external appearance scores for the ARTHRO Mentor (9.3, range 6.7-10.0; p=0.0036) and ArthroS (9.3, range 7.3-10.0; p=0.0003) were statistically higher than for ArthroSim (6.7, range 3.3-9.7). There was no statistical difference in intraarticular appearance, instrument appearance, or user friendliness between the three groups. Most simulators reached an appropriate proportion of sufficient scores for each category (≥70%), except for ARTHRO Mentor (intraarticular appearance 50%; instrument appearance 61.1%) and ArthroSim (external appearance 50%; user friendliness 68.8%). Conclusion These results demonstrate that ArthroS has the highest overall face validity of the three current arthroscopic VR simulators. However, only the external appearance of ArthroS reached statistical significance when compared to the other simulators. Additionally, each simulator had satisfactory intraarticular quality. This study helps further the understanding of VR simulation and the features necessary for accurate arthroscopic representation.
This study also provides objective data for educators when selecting the equipment that will best facilitate residency training. PMID:27528830
Effect of monthly areal rainfall uncertainty on streamflow simulation
NASA Astrophysics Data System (ADS)
Ndiritu, J. G.; Mkhize, N.
2017-08-01
Areal rainfall is mostly obtained from point rainfall measurements that are sparsely located, and several studies have shown that this results in large areal rainfall uncertainties at the daily time step. However, water resources assessment is often carried out at a monthly time step, and streamflow simulation is usually an essential component of this assessment. This study set out to quantify monthly areal rainfall uncertainties and assess their effect on streamflow simulation. This was achieved by: (i) quantifying areal rainfall uncertainties and using these to generate stochastic monthly areal rainfalls, and (ii) finding out how the quality of monthly streamflow simulation and streamflow variability change if stochastic areal rainfalls are used instead of historic areal rainfalls. Tests on monthly rainfall uncertainty were carried out using data from two South African catchments, while streamflow simulation was confined to one of them. A non-parametric model that had been applied at a daily time step was used for stochastic areal rainfall generation, and the Pitman catchment model calibrated using the SCE-UA optimizer was used for streamflow simulation. 100 randomly-initialised calibration-validation runs using 100 stochastic areal rainfalls were compared with 100 runs obtained using the single historic areal rainfall series. By using 4 rain gauges alternately to obtain areal rainfall, the resulting differences in areal rainfall averaged 20% of the mean monthly areal rainfall, and rainfall uncertainty was therefore highly significant. Pitman model simulations obtained coefficients of efficiency averaging 0.66 and 0.64 in calibration and validation using historic rainfalls, while the respective values using stochastic areal rainfalls were 0.59 and 0.57. Average bias was less than 5% in all cases.
The streamflow ranges using historic rainfalls averaged 29% of the mean naturalised flow in calibration and validation, while the respective average ranges using stochastic monthly rainfalls were 86 and 90% of the mean naturalised streamflow. In calibration, 33% of the naturalised flows fell within the streamflow ranges of the historic rainfall simulations, and using stochastic rainfalls increased this to 66%. In validation, the respective percentages of naturalised flows falling within the simulated streamflow ranges were 32 and 72%. The analysis reveals that monthly areal rainfall uncertainty is significant, and incorporating it into streamflow simulation would add validity to the results.
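The idea of generating stochastic areal rainfalls from sparse gauges can be illustrated with a simple resampling scheme. This is only a sketch of the concept, not the non-parametric generator the study actually used, and the gauge totals are hypothetical:

```python
import random
import statistics

def stochastic_areal_rainfall(gauge_values, n_samples=1000, seed=1):
    """Generate stochastic monthly areal rainfalls by averaging random
    resamples of the available gauges. Illustrative only: it mimics the
    uncertainty introduced by sparse gauge coverage, but is not the
    non-parametric model used in the study."""
    rng = random.Random(seed)
    k = len(gauge_values)
    return [sum(rng.choice(gauge_values) for _ in range(k)) / k
            for _ in range(n_samples)]

# Four hypothetical gauge totals for one month (mm)
gauges = [82.0, 101.0, 64.0, 95.0]
areal = stochastic_areal_rainfall(gauges)
print(round(statistics.mean(areal), 1))
```

Feeding such an ensemble of plausible areal rainfalls through a calibrated catchment model, instead of the single historic series, is what widens the simulated streamflow ranges reported above.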
NASA Astrophysics Data System (ADS)
Qin, Sanbo; Mittal, Jeetain; Zhou, Huan-Xiang
2013-08-01
We have developed a ‘postprocessing’ method for modeling biochemical processes such as protein folding under crowded conditions (Qin and Zhou 2009 Biophys. J. 97 12-19). In contrast to the direct simulation approach, in which the protein undergoing folding is simulated along with crowders, the postprocessing method requires only the folding simulation without crowders. The influence of the crowders is then obtained by taking conformations from the crowder-free simulation and calculating the free energies of transferring them into the crowders. This postprocessing yields the folding free energy surface of the protein under crowding. Here the postprocessing results for the folding of three small proteins under ‘repulsive’ crowding are validated against those obtained previously by the direct simulation approach (Mittal and Best 2010 Biophys. J. 98 315-20). This validation confirms the accuracy of the postprocessing approach and highlights its distinct advantages in modeling biochemical processes under cell-like crowded conditions, such as enabling an atomistic representation of the test proteins.
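The postprocessing idea can be sketched as a reweighting of a crowder-free ensemble: each conformation's dilute-solution weight is multiplied by a Boltzmann factor of its transfer free energy into the crowded medium. The toy ensemble and energies below are invented for illustration and do not come from the cited studies:

```python
import math

def reweight_folding_dG(conformations, kT=0.593):  # kT in kcal/mol near 298 K
    """Postprocessing-style reweighting (illustrative, not the authors'
    code). Each conformation carries a dilute-solution weight and a
    transfer free energy dmu into the crowded medium; crowded weights
    are w_crowd = w_dilute * exp(-dmu / kT)."""
    folded_w = unfolded_w = 0.0
    for folded, w_dilute, dmu in conformations:
        w = w_dilute * math.exp(-dmu / kT)
        if folded:
            folded_w += w
        else:
            unfolded_w += w
    # Folding free energy under crowding from the reweighted populations
    return -kT * math.log(folded_w / unfolded_w)

# Toy ensemble: (is_folded, dilute weight, transfer free energy, kcal/mol).
# Repulsive crowders penalize expanded (unfolded) states more strongly.
ensemble = [(True, 0.4, 0.5), (True, 0.2, 0.6),
            (False, 0.3, 1.5), (False, 0.1, 1.8)]
print(round(reweight_folding_dG(ensemble), 3))
```

With repulsive crowders penalizing the expanded unfolded states more than the compact folded ones, the reweighted folding free energy shifts toward the folded state, consistent with the stabilizing effect of excluded volume.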
SimulaTE: simulating complex landscapes of transposable elements of populations.
Kofler, Robert
2018-04-15
Estimating the abundance of transposable elements (TEs) in populations (or tissues) promises to answer many open research questions. However, progress is hampered by the lack of concordance between different approaches for TE identification and thus potentially unreliable results. To address this problem, we developed SimulaTE, a tool that generates TE landscapes for populations using a newly developed domain specific language (DSL). The simple syntax of our DSL allows for easily building even complex TE landscapes that have, for example, nested, truncated and highly diverged TE insertions. Reads may be simulated for the populations using different sequencing technologies (PacBio, Illumina paired-ends) and strategies (sequencing individuals and pooled populations). The comparison between the expected (i.e. simulated) and the observed results will guide researchers in finding the most suitable approach for a particular research question. SimulaTE is implemented in Python and available at https://sourceforge.net/projects/simulates/. Manual https://sourceforge.net/p/simulates/wiki/Home/#manual; Test data and tutorials https://sourceforge.net/p/simulates/wiki/Home/#walkthrough; Validation https://sourceforge.net/p/simulates/wiki/Home/#validation. robert.kofler@vetmeduni.ac.at.
Towards Application of NASA Standard for Models and Simulations in Aeronautical Design Process
NASA Astrophysics Data System (ADS)
Vincent, Luc; Dunyach, Jean-Claude; Huet, Sandrine; Pelissier, Guillaume; Merlet, Joseph
2012-08-01
Even powerful computational techniques like simulation are subject to limitations in their validity domain. Consequently, using simulation models requires caution to avoid making biased design decisions for new aeronautical products on the basis of inadequate simulation results. Thus the fidelity, accuracy and validity of simulation models shall be monitored in context all along the design phases to build confidence in achievement of the goals of modelling and simulation. In the CRESCENDO project, we adapt the Credibility Assessment Scale method from the NASA standard for models and simulations, developed for the space programme, to aircraft design in order to assess the quality of simulations. The proposed eight quality assurance metrics aggregate information to indicate the levels of confidence in results. They are displayed in a management dashboard and can secure design trade-off decisions at programme milestones. The application of this technique is illustrated in an aircraft design context with a specific thermal Finite Elements Analysis. This use case shows how to judge the fitness-for-purpose of simulation as a virtual testing means and then green-light the continuation of the Simulation Lifecycle Management (SLM) process.
Validation of the Monte Carlo simulator GATE for indium-111 imaging.
Assié, K; Gardin, I; Véra, P; Buvat, I
2005-07-07
Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions.
Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C
2015-10-01
The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to its flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated due to nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve. It is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. 
This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here before performing his/her own novel research. In addition, an investigator entering the field of Monte Carlo simulations can use these descriptions and results as a self-teaching tool to ensure that he/she is able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.
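A flavor of what benchmarking a Monte Carlo code against a known reference means can be given with a toy example (not one of the report's six cases): sampling photon free paths through a slab and comparing the uncollided fraction with the analytic Beer-Lambert value.

```python
import math
import random

def mc_uncollided_fraction(mu, thickness, n=200_000, seed=42):
    """Toy Monte Carlo benchmark (not from the Task Group report):
    sample exponential free paths and count photons that cross a slab
    of given thickness without interacting. The analytic answer is
    exp(-mu * thickness), so agreement checks the sampling."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n):
        # Free path drawn from p(s) = mu * exp(-mu * s)
        path = -math.log(1.0 - rng.random()) / mu
        if path > thickness:
            survived += 1
    return survived / n

mu = 0.2  # attenuation coefficient, 1/cm (illustrative value)
t = 5.0   # slab thickness, cm
print(mc_uncollided_fraction(mu, t), math.exp(-mu * t))
```

Agreement with exp(-mu*t) to within the statistical uncertainty of the sample is exactly the kind of sanity check a newcomer can run before attempting full benchmark cases with a production code.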
Du, Yongxing; Zhang, Lingze; Sang, Lulu; Wu, Daocheng
2016-04-29
In this paper, an Archimedean planar spiral antenna for the application of thermotherapy was designed. This type of antenna was chosen for its compact structure, flexible application and wide heating area. The temperature field generated by this two-armed spiral antenna in a muscle-equivalent phantom was simulated and subsequently validated by experimentation. First, the specific absorption rate (SAR) of the field was calculated using the Finite Element Method (FEM) in Ansoft's High Frequency Structure Simulator (HFSS). Then, the temperature elevation in the phantom was simulated by an explicit finite difference approximation of the bioheat equation (BHE). The temperature distribution was then validated by a phantom heating experiment. The results showed that this antenna had a good heating ability and a wide heating area. A comparison between the calculation and the measurement showed fair agreement in the temperature elevation. The validated model could be applied to the analysis of electromagnetic-temperature distribution in phantoms during antenna design or thermotherapy experimentation.
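An explicit finite-difference treatment of the (Pennes-type) bioheat equation, as mentioned above, can be sketched in one dimension. All material parameters and the absorbed-power profile below are generic muscle-like assumptions, not the values used in the paper:

```python
import numpy as np

def pennes_1d_step(T, dx, dt, k, rho_c, w_b_rho_c_b, T_art, Q):
    """One explicit finite-difference step of the 1D Pennes bioheat
    equation (illustrative sketch):
      rho*c * dT/dt = k * d2T/dx2 + w_b*rho_b*c_b * (T_art - T) + Q
    Boundary nodes are held fixed (Dirichlet)."""
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    dT = dt / rho_c * (k * lap + w_b_rho_c_b * (T_art - T) + Q)
    dT[0] = dT[-1] = 0.0  # fixed boundaries
    return T + dT

# Muscle-like phantom; power deposition concentrated near the antenna side
x = np.linspace(0.0, 0.05, 51)        # 5 cm domain
T = np.full_like(x, 37.0)             # start at body temperature (deg C)
Q = 5e4 * np.exp(-x / 0.01)           # assumed absorbed power density, W/m^3
for _ in range(2000):                 # 100 s of simulated heating
    T = pennes_1d_step(T, dx=1e-3, dt=0.05, k=0.5, rho_c=3.6e6,
                       w_b_rho_c_b=2700.0, T_art=37.0, Q=Q)
print(float(T.max()))
```

Note the explicit scheme is conditionally stable: it requires roughly dt <= rho*c*dx^2 / (2k), which the values above satisfy with a large margin.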
ERIC Educational Resources Information Center
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M.
2010-01-01
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
NASA Astrophysics Data System (ADS)
Maślak, Mariusz; Pazdanowski, Michał; Woźniczka, Piotr
2018-01-01
Validation of fire resistance for the same steel frame bearing structure is performed here using three different numerical models: a bar model prepared in the SAFIR environment, and two 3D models, one developed in Autodesk Simulation Mechanical (ASM) and an alternative one developed in the Abaqus code. The results of the computer simulations performed are compared with the experimental results obtained previously, in a laboratory fire test, on a structure having the same characteristics and subjected to the same heating regimen. Comparison of the experimental and numerically determined displacement evolution paths for selected nodes of the considered frame during the simulated fire exposure constitutes the basic criterion applied to evaluate the validity of the numerical results obtained. The experimental and numerically determined estimates of the critical temperature specific to the considered frame, related to the limit state of bearing capacity in fire, have been verified as well.
Derivation and Applicability of Asymptotic Results for Multiple Subtests Person-Fit Statistics
Albers, Casper J.; Meijer, Rob R.; Tendeiro, Jorge N.
2016-01-01
In high-stakes testing, it is important to check the validity of individual test scores. Although a test may, in general, result in valid test scores for most test takers, for some test takers the test scores may not provide a good description of the test taker’s proficiency level. Person-fit statistics have been proposed to check the validity of individual test scores. In this study, the theoretical asymptotic sampling distribution of two person-fit statistics that can be used for tests consisting of multiple subtests is first discussed. Second, a simulation study was conducted to investigate the applicability of this asymptotic theory for tests of finite length, in which the correlation between subtests and the number of items in the subtests were varied. The authors showed that these distributions provide reasonable approximations, even for tests consisting of subtests of only 10 items each. These results have practical value because researchers do not have to rely on extensive simulation studies to simulate sampling distributions. PMID:29881053
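A widely used single-test person-fit index, the standardized log-likelihood statistic l_z, illustrates what such statistics measure. This generic sketch is not the multiple-subtest statistics derived in the article, and the item probabilities are invented:

```python
import math

def lz_person_fit(responses, probs):
    """Standardized log-likelihood person-fit statistic l_z for a
    dichotomous IRT model (a common person-fit index; a generic sketch,
    not the multiple-subtest statistics of the article). Large negative
    values flag aberrant response patterns."""
    l0 = exp = var = 0.0
    for u, p in zip(responses, probs):
        logit = math.log(p / (1.0 - p))
        l0 += u * math.log(p) + (1 - u) * math.log(1.0 - p)
        exp += p * math.log(p) + (1.0 - p) * math.log(1.0 - p)
        var += p * (1.0 - p) * logit ** 2
    return (l0 - exp) / math.sqrt(var)

# Model-consistent pattern: correct on easy items (high p), wrong on hard ones.
probs = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
consistent = [1, 1, 1, 1, 0, 0, 0, 0]
aberrant = [0, 0, 0, 0, 1, 1, 1, 1]  # e.g. careless or copied responding
print(lz_person_fit(consistent, probs), lz_person_fit(aberrant, probs))
```

Asymptotically l_z is treated as approximately standard normal, which is why the sampling-distribution results above matter: they tell a practitioner when that normal approximation can be trusted without simulation.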
Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reboud, C.; Premel, D.; Lesselier, D.
2007-03-21
Eddy current testing (ECT) is widely used in iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. The achievement of experimental validations led us to the integration of these models into the CIVA platform. Modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.
CLVTOPS Liftoff and Separation Analysis Validation Using Ares I-X Flight Data
NASA Technical Reports Server (NTRS)
Burger, Ben; Schwarz, Kristina; Kim, Young
2011-01-01
CLVTOPS is a multi-body time domain flight dynamics simulation tool developed by NASA's Marshall Space Flight Center (MSFC) for a space launch vehicle and is based on the TREETOPS simulation tool. CLVTOPS is currently used to simulate the flight dynamics and separation/jettison events of the Ares I launch vehicle including liftoff and staging separation. In order for CLVTOPS to become an accredited tool, validation against other independent simulations and real world data is needed. The launch of the Ares I-X vehicle (first Ares I test flight) on October 28, 2009 presented a great opportunity to provide validation evidence for CLVTOPS. In order to simulate the Ares I-X flight, specific models were implemented into CLVTOPS. These models include the flight day environment, reconstructed thrust, reconstructed mass properties, aerodynamics, and the Ares I-X guidance, navigation and control models. The resulting simulation output was compared to Ares I-X flight data. During the liftoff region of flight, trajectory states from the simulation and flight data were compared. The CLVTOPS results were used to make a semi-transparent animation of the vehicle that was overlaid directly on top of the flight video to provide a qualitative measure of the agreement between the simulation and the actual flight. During ascent, the trajectory states of the vehicle were compared with flight data. For the stage separation event, the trajectory states of the two stages were compared to available flight data. Since no quantitative rotational state data for the upper stage were available, the CLVTOPS results were used to make an animation of the two stages to show a side-by-side comparison with flight video. All of the comparisons between CLVTOPS and the flight data show good agreement. This paper documents comparisons between CLVTOPS and Ares I-X flight data which serve as validation evidence for the eventual accreditation of CLVTOPS.
Demonstration of innovative techniques for work zone safety data analysis
DOT National Transportation Integrated Search
2009-07-15
Based upon the results of the simulator data analysis, additional future research can be identified to validate the driving simulator in terms of similarities with Ohio work zones. For instance, the speeds observed in the simulator were greater f...
Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas
2017-03-18
Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability which influence the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) An expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations. (2) An automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces segmentation performances of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data is suited to evaluate image segmentation pipelines more efficiently and reproducibly than it is possible on manually annotated real micrographs.
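With a simulated ground truth available, a segmentation pipeline can be scored exactly, for example with the Dice overlap coefficient. The tiny masks below are toy data standing in for micrographs, not the paper's images:

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice overlap between a binary segmentation and the simulated
    ground-truth mask: 2|A intersect B| / (|A| + |B|). A standard
    validation score for segmentation pipelines."""
    seg = np.asarray(seg).astype(bool)
    gt = np.asarray(gt).astype(bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

# Toy 8x8 "micrograph": a simulated ground-truth cell versus a slightly
# offset automatic segmentation.
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True            # 16-pixel square cell
seg = np.zeros((8, 8), dtype=bool)
seg[3:7, 2:6] = True           # segmentation shifted down by one pixel
print(dice_coefficient(seg, gt))
```

Against a simulated ground truth this score is exact and reproducible, whereas against manual annotations it inherits the inter- and intra-observer variability discussed above.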
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Marte
The research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: 1) Develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation. 2) Perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator. 3) Perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport. 4) Test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production. 5) Develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: 1) A true-3D hydro-thermal fracturing computer code that is particularly suited to EGS, 2) Documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock, 3) Documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications, and 4) Database of monitoring data, with focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.
Cuesta-Gragera, Ana; Navarro-Fontestad, Carmen; Mangas-Sanjuan, Victor; González-Álvarez, Isabel; García-Arieta, Alfredo; Trocóniz, Iñaki F; Casabó, Vicente G; Bermejo, Marival
2015-07-10
The objective of this paper is to apply a previously developed semi-physiologic pharmacokinetic model implemented in NONMEM to simulate bioequivalence (BE) trials of acetyl salicylic acid (ASA), in order to validate the model performance against ASA human experimental data. ASA is a drug with first-pass hepatic and intestinal metabolism following Michaelis-Menten kinetics that leads to the formation of two main metabolites in two generations (first and second generation metabolites). The first aim was to adapt the semi-physiological model for ASA in NONMEM using ASA pharmacokinetic parameters from the literature, reflecting its sequential metabolism. The second aim was to validate this model by comparing the results obtained in NONMEM simulations with published experimental data at a dose of 1000 mg. The validated model was used to simulate bioequivalence trials at 3 dose schemes (100, 1000 and 3000 mg) and with 6 test formulations with decreasing in vivo dissolution rate constants versus the reference formulation (kD 8-0.25 h(-1)). Finally, the third aim was to determine which analyte (parent drug, first generation or second generation metabolite) was more sensitive to changes in formulation performance. The validation results showed that the concentration-time curves obtained with the simulations closely reproduced the published experimental data, confirming model performance. The parent drug (ASA) proved to be the analyte most sensitive to the decrease in pharmaceutical quality, with the largest decrease in the Cmax and AUC ratios between test and reference formulations. Copyright © 2015 Elsevier B.V. All rights reserved.
Validation of the thermal code of RadTherm-IR, IR-Workbench, and F-TOM
NASA Astrophysics Data System (ADS)
Schwenger, Frédéric; Grossmann, Peter; Malaplate, Alain
2009-05-01
System assessment by image simulation requires synthetic scenarios that can be viewed by the device to be simulated. In addition to physical modeling of the camera, reliable modeling of scene elements is necessary. Software products for modeling of target data in the IR should be capable of (i) predicting surface temperatures of scene elements over a long period of time and (ii) computing sensor views of the scenario. For such applications, FGAN-FOM acquired the software products RadTherm-IR (ThermoAnalytics Inc., Calumet, USA) and IR-Workbench (OKTAL-SE, Toulouse, France). Inspection of the accuracy of simulation results by validation is necessary before using these products for applications. In the first step of validation, the performance of both "thermal solvers" was determined through comparison of the computed diurnal surface temperatures of a simple object with the corresponding values from measurements. CUBI is a rather simple geometric object with well-known material parameters, which makes it suitable for testing and validating object models in the IR; it was used in this study as a test body. Comparison of calculated and measured surface temperature values will be presented, together with the results from the FGAN-FOM thermal object code F-TOM. In the second validation step, radiances of the simulated sensor views computed by RadTherm-IR and IR-Workbench will be compared with radiances retrieved from the recorded sensor images taken by the sensor that was simulated. Strengths and weaknesses of the models RadTherm-IR, IR-Workbench and F-TOM will be discussed.
A Monte Carlo analysis of breast screening randomized trials.
Zamora, Luis I; Forastero, Cristina; Guirado, Damián; Lallena, Antonio M
2016-12-01
To analyze breast screening randomized trials with a Monte Carlo simulation tool. A simulation tool previously developed to simulate breast screening programmes was adapted for that purpose. The history of women participating in the trials was simulated, including a model for survival after local treatment of invasive cancers. Distributions of the time gained due to screening detection versus symptomatic detection and the overall screening sensitivity were used as inputs. Several randomized controlled trials were simulated. Except for the age range of the women involved, all simulations used the same population characteristics, which made it possible to analyze their external validity. The relative risks obtained were compared with those quoted for the trials, whose internal validity was addressed by further investigating the reasons for the disagreements observed. The Monte Carlo simulations produce results that are in good agreement with most of the randomized trials analyzed, thus indicating their methodological quality and external validity. A reduction in breast cancer mortality of around 20% appears to be a reasonable value according to the results of the trials that are methodologically correct. Discrepancies observed with the Canada I and II trials may be attributed to low mammography quality and some methodological problems. The Kopparberg trial appears to show low methodological quality. Monte Carlo simulations are a powerful tool for investigating breast screening randomized controlled trials, helping to establish those whose results are reliable enough to be extrapolated to other populations, to design trial strategies and, eventually, to adapt them during their development. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
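The mortality comparison at the core of such trial simulations can be sketched in a few lines. The following is a minimal illustration, not the authors' tool: the parameters (arm size, baseline mortality, a 20% screening-related mortality reduction as discussed above) are made up, and each woman is reduced to a single Bernoulli draw.

```python
import random

def simulate_trial(n_per_arm, baseline_mortality, screening_reduction, seed=0):
    """Simulate one two-arm randomized screening trial and return the
    relative risk (screened vs. control breast-cancer mortality)."""
    rng = random.Random(seed)
    screened_mortality = baseline_mortality * (1 - screening_reduction)
    deaths_screened = sum(rng.random() < screened_mortality
                          for _ in range(n_per_arm))
    deaths_control = sum(rng.random() < baseline_mortality
                         for _ in range(n_per_arm))
    return (deaths_screened / n_per_arm) / (deaths_control / n_per_arm)

# Repeating this over many seeds yields the spread of relative risks that
# finite trial size alone produces.
rr = simulate_trial(n_per_arm=30000, baseline_mortality=0.004,
                    screening_reduction=0.20, seed=1)
```

Even with 30,000 women per arm, single-trial relative risks scatter noticeably around the true value of 0.80, which is one reason individual trials disagree.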
Development and validation of an artificial wetlab training system for the lumbar discectomy.
Adermann, Jens; Geissler, Norman; Bernal, Luis E; Kotzsch, Susanne; Korb, Werner
2014-09-01
Initial research indicated that realistic haptic simulators with an adapted training concept are needed to enhance training for spinal surgery. A cognitive task analysis (CTA) was performed to define a realistic and helpful scenario-based simulation. Based on the results, a simulator for lumbar discectomy was developed. Additionally, a realistic training operating room was built for a pilot study, and the results were validated. The CTA showed a need for realistic scenario-based training in spine surgery. The developed simulator consists of synthetic bone structures, synthetic soft tissue and an advanced bleeding system. Owing to the close interdisciplinary cooperation between surgeons, engineers and psychologists, the iterative multicentre validation showed that the simulator is visually and haptically realistic. The simulator offers integrated sensors for the evaluation of the traction used and the compression applied during surgery. The surgeons participating in the pilot workshop rated the simulator and the training concept as very useful for the improvement of their surgical skills. In the context of the present work, a precise definition for the simulator and training concept was developed. The additional implementation of sensors allows objective evaluation of the surgical training by the trainer. Compared with other training simulators and concepts, the high degree of objectivity strengthens the acceptance of the feedback. The measured data of the nerve root tension and the compression of the dura can be used for intraoperative control and a detailed postoperative evaluation.
WEST-3 wind turbine simulator development. Volume 2: Verification
NASA Technical Reports Server (NTRS)
Sridhar, S.
1985-01-01
The details of a study to validate WEST-3, a new real-time wind turbine simulator developed by Paragon Pacific Inc., are presented in this report. For the validation, the MOD-0 wind turbine was simulated on WEST-3. The simulation results were compared with those obtained from previous MOD-0 simulations and with test data measured during MOD-0 operations. The study was successful in achieving the major objective of proving that WEST-3 yields results which can be used to support a wind turbine development process. The blade bending moments, peak and cyclic, from the WEST-3 simulation correlated reasonably well with the available MOD-0 data. The simulation was also able to predict the resonance phenomena observed during MOD-0 operations. Also presented in the report is a description and solution of a serious numerical instability problem encountered during the study; the problem was caused by the coupling of the rotor and power train models. The results of the study indicate that some parts of the existing WEST-3 simulation model may have to be refined for future work, specifically the aerodynamics and the procedure used to couple the rotor model with the tower and power train models.
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument, and there is no knowledge of the true bias detection properties of applied MA procedures. We describe the use of bias detection curves for MA optimization and of MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms and control limits) and for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts, the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA procedures and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
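The bias detection simulation described can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the window size, truncation limits, and control limit (expressed as an allowed deviation from the pre-bias mean) are hypothetical settings.

```python
import random
import statistics

def results_to_detection(results, start, bias, window=10,
                         control_limit=2.0, trunc=(130.0, 150.0)):
    """Add a constant bias to all results from index `start` onward and
    return how many post-bias results the moving average needs before it
    drifts outside the control limit (None if never detected)."""
    target = statistics.mean(results[:start])  # pre-bias assay mean
    buffer = []
    for i, value in enumerate(results):
        if i >= start:
            value += bias                      # simulated systematic error
        if trunc[0] <= value <= trunc[1]:      # truncation: discard outliers
            buffer.append(value)
        if len(buffer) > window:
            buffer.pop(0)
        if i >= start and len(buffer) == window:
            if abs(statistics.mean(buffer) - target) > control_limit:
                return i - start + 1
    return None

# One point on a bias detection curve: the median detection delay for a
# single introduced bias, over repeated simulated runs.
random.seed(2)
delays = []
for _ in range(50):
    data = [random.gauss(140.0, 1.0) for _ in range(100)]
    d = results_to_detection(data, start=50, bias=3.0)
    if d is not None:
        delays.append(d)
median_delay = statistics.median(delays)
```

Repeating this over a grid of bias magnitudes and MA settings, and plotting median delay against bias, reproduces the bias detection curves described above.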
NASA Astrophysics Data System (ADS)
Weingart, Robert
This thesis concerns the validation of a computational fluid dynamics (CFD) simulation of a ground vehicle by means of a low-budget coast-down test. The vehicle is built to the standards of the 2014 Formula SAE rules and is equipped with large wings at the front and rear of the car; the vertical loads on the tires are measured by specifically calibrated shock potentiometers. The coast-down test was performed on a runway of a local airport and is used to determine vehicle-specific coefficients such as drag, downforce, aerodynamic balance, and rolling resistance for different aerodynamic setups. The test results are then compared with the respective simulated results. The simulated drag deviates by about 5% from the measured results, while the downforce numbers show deviations of up to 18%. Moreover, a sensitivity analysis of inlet velocities, ride heights, and pitch angles was performed with the help of the computational simulation.
Monte Carlo simulation of Ray-Scan 64 PET system and performance evaluation using GATE toolkit
NASA Astrophysics Data System (ADS)
Li, Suying; Zhang, Qiushi; Vuletic, Ivan; Xie, Zhaoheng; Yang, Kun; Ren, Qiushi
2017-02-01
In this study, we aimed to develop a GATE model for the simulation of the Ray-Scan 64 PET scanner and to model its performance characteristics. A detailed implementation of the system geometry and physical processes was included in the simulation model. We then modeled the performance characteristics of the Ray-Scan 64 PET system for the first time, based on National Electrical Manufacturers Association (NEMA) NU-2 2007 protocols, and validated the model against experimental measurements, including spatial resolution, sensitivity, counting rates and noise equivalent count rate (NECR). Moreover, an accurate dead-time module was investigated to simulate the counting rate performance. Overall, the results showed reasonable agreement between simulation and experimental data. The validation results showed the reliability and feasibility of the GATE model for evaluating the major performance characteristics of the Ray-Scan 64 PET system, providing a useful tool for a wide range of research applications.
Fixed gain and adaptive techniques for rotorcraft vibration control
NASA Technical Reports Server (NTRS)
Roy, R. H.; Saberi, H. A.; Walker, R. A.
1985-01-01
The results of an analysis effort performed to demonstrate the feasibility of employing approximate dynamical models and frequency-shaped cost functional control law design techniques for helicopter vibration suppression are presented. Both fixed gain and adaptive control designs based on linear second-order dynamical models were implemented in a detailed Rotor Systems Research Aircraft (RSRA) simulation to validate these active vibration suppression control laws. Approximate models of fuselage flexibility were included in the RSRA simulation in order to more accurately characterize the structural dynamics. The results for both the fixed gain and adaptive approaches are promising and provide a foundation for pursuing further validation in more extensive simulation studies and in wind tunnel and/or flight tests.
Validation of NASA Thermal Ice Protection Computer Codes. Part 3; The Validation of Antice
NASA Technical Reports Server (NTRS)
Al-Khalil, Kamel M.; Horvath, Charles; Miller, Dean R.; Wright, William B.
2001-01-01
An experimental program was carried out by the Icing Technology Branch at NASA Glenn Research Center to validate two ice protection simulation codes: (1) LEWICE/Thermal for transient electrothermal de-icing and anti-icing simulations, and (2) ANTICE for steady-state hot gas and electrothermal anti-icing simulations. An electrothermal ice protection system was designed and constructed integral to a 36 inch chord NACA0012 airfoil. The model was fully instrumented with thermocouples, RTDs, and heat flux gages. Tests were conducted at several icing environmental conditions during a two week period at the NASA Glenn Icing Research Tunnel. Experimental results of running-wet and evaporative cases were compared to the ANTICE computer code predictions and are presented in this paper.
Face and construct validity of a computer-based virtual reality simulator for ERCP.
Bittner, James G; Mellinger, John D; Imam, Toufic; Schade, Robert R; Macfadyen, Bruce V
2010-02-01
Currently, little evidence supports computer-based simulation for ERCP training. To determine face and construct validity of a computer-based simulator for ERCP and assess its perceived utility as a training tool. Novice and expert endoscopists completed 2 simulated ERCP cases by using the GI Mentor II. Virtual Education and Surgical Simulation Laboratory, Medical College of Georgia. Outcomes included times to complete the procedure, reach the papilla, and use fluoroscopy; attempts to cannulate the papilla, pancreatic duct, and common bile duct; and number of contrast injections and complications. Subjects assessed simulator graphics, procedural accuracy, difficulty, haptics, overall realism, and training potential. Only when performance data from cases A and B were combined did the GI Mentor II differentiate novices and experts based on times to complete the procedure, reach the papilla, and use fluoroscopy. Across skill levels, overall opinions were similar regarding graphics (moderately realistic), accuracy (similar to clinical ERCP), difficulty (similar to clinical ERCP), overall realism (moderately realistic), and haptics. Most participants (92%) claimed that the simulator has definite training potential or should be required for training. Small sample size, single institution. The GI Mentor II demonstrated construct validity for ERCP based on select metrics. Most subjects thought that the simulated graphics, procedural accuracy, and overall realism exhibit face validity. Subjects deemed it a useful training tool. Study repetition involving more participants and cases may help confirm results and establish the simulator's ability to differentiate skill levels based on ERCP-specific metrics.
Competency-Based Training and Simulation: Making a "Valid" Argument.
Noureldin, Yasser A; Lee, Jason Y; McDougall, Elspeth M; Sweet, Robert M
2018-02-01
The use of simulation as an assessment tool is much more controversial than is its utility as an educational tool. However, without valid simulation-based assessment tools, the ability to objectively assess technical skill competencies in a competency-based medical education framework will remain challenging. The current literature in urologic simulation-based training and assessment uses a definition and framework of validity that is now outdated. This is probably due to the absence of awareness rather than an absence of comprehension. The following review article provides the urologic community an updated taxonomy on validity theory as it relates to simulation-based training and assessments and translates our simulation literature to date into this framework. While the old taxonomy considered validity as distinct subcategories and focused on the simulator itself, the modern taxonomy, for which we translate the literature evidence, considers validity as a unitary construct with a focus on interpretation of simulator data/scores.
IMPACT: a generic tool for modelling and simulating public health policy.
Ainsworth, J D; Carruthers, E; Couch, P; Green, N; O'Flaherty, M; Sperrin, M; Williams, R; Asghar, Z; Capewell, S; Buchan, I E
2011-01-01
Populations are under-served by local health policies and management of resources. This partly reflects a lack of realistically complex models to enable appraisal of a wide range of potential options. Rising computing power coupled with advances in machine learning and healthcare information now enables such models to be constructed and executed. However, such models are not generally accessible to public health practitioners who often lack the requisite technical knowledge or skills. To design and develop a system for creating, executing and analysing the results of simulated public health and healthcare policy interventions, in ways that are accessible and usable by modellers and policy-makers. The system requirements were captured and analysed in parallel with the statistical method development for the simulation engine. From the resulting software requirement specification the system architecture was designed, implemented and tested. A model for Coronary Heart Disease (CHD) was created and validated against empirical data. The system was successfully used to create and validate the CHD model. The initial validation results show concordance between the simulation results and the empirical data. We have demonstrated the ability to connect health policy-modellers and policy-makers in a unified system, thereby making population health models easier to share, maintain, reuse and deploy.
NASA Astrophysics Data System (ADS)
Hidayat, Iki; Sutopo; Pratama, Heru Berian
2017-12-01
The Kerinci geothermal field is a single-phase liquid reservoir system in the Kerinci District, in the western part of Jambi Province. In this field, there are geothermal prospects identified by heat-source upflow inside a National Park area. The Kerinci field is planned by Pertamina Geothermal Energy to be developed with a 1×55 MWe unit. To define the reservoir characterization, a numerical simulation of the Kerinci field was developed using the TOUGH2 software with information from the conceptual model. The pressure and temperature profile data of well KRC-B1 were validated against simulation data to reach the natural-state condition, and the validation showed a good match. Based on the natural-state simulation, the resource assessment of the Kerinci geothermal field was estimated using Monte Carlo simulation, with P10-P50-P90 results of 49.4 MW, 64.3 MW and 82.4 MW, respectively. This paper is the first study in which the resource assessment of the Kerinci geothermal field has been successfully estimated using numerical simulation coupled with Monte Carlo simulation.
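The probabilistic estimate quoted above comes from repeated random sampling of volumetric (stored-heat) inputs. A minimal sketch of that workflow, with purely illustrative parameter ranges (not the Kerinci field values) and a lumped heat-to-power conversion factor, might look like:

```python
import random

def volumetric_monte_carlo(n_trials=20000, seed=1):
    """Sample uncertain reservoir parameters, convert each sample to an
    electric-power estimate, and report the 10th/50th/90th percentiles
    (P10-P50-P90 as ordered in the abstract, conservative to optimistic)."""
    rng = random.Random(seed)
    power_mwe = []
    for _ in range(n_trials):
        area_km2 = rng.uniform(5.0, 15.0)            # reservoir area
        thickness_km = rng.uniform(1.0, 2.0)         # reservoir thickness
        yield_mwe_per_km3 = rng.uniform(10.0, 20.0)  # lumped recoverable heat
        recovery_factor = 0.25                       # fraction produced
        power_mwe.append(area_km2 * thickness_km
                         * yield_mwe_per_km3 * recovery_factor)
    power_mwe.sort()
    p10 = power_mwe[int(0.10 * n_trials)]
    p50 = power_mwe[int(0.50 * n_trials)]
    p90 = power_mwe[int(0.90 * n_trials)]
    return p10, p50, p90

p10, p50, p90 = volumetric_monte_carlo()
```

The spread between P10 and P90 directly reflects the input uncertainty; narrowing the sampled parameter ranges (e.g. after natural-state calibration) tightens the resource estimate.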
How to test validity in orthodontic research: a mixed dentition analysis example.
Donatelli, Richard E; Lee, Shin-Jae
2015-02-01
The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
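The leave-one-out idea the study favours can be written down compactly for a simple regression predictor (a sketch, not the mixed dentition model itself): each observation is predicted by a model fitted to all the other observations, so no data point validates itself.

```python
def loocv_error(xs, ys):
    """Mean absolute leave-one-out error of simple linear regression."""
    n = len(xs)
    errors = []
    for i in range(n):
        # Fit on every observation except i...
        train_x = [x for j, x in enumerate(xs) if j != i]
        train_y = [y for j, y in enumerate(ys) if j != i]
        mean_x = sum(train_x) / (n - 1)
        mean_y = sum(train_y) / (n - 1)
        slope = (sum((x - mean_x) * (y - mean_y)
                     for x, y in zip(train_x, train_y))
                 / sum((x - mean_x) ** 2 for x in train_x))
        intercept = mean_y - slope * mean_x
        # ...then score the prediction on the held-out observation i.
        errors.append(abs(ys[i] - (intercept + slope * xs[i])))
    return sum(errors) / n
```

Unlike the traditional simple (hold-out) validation, every observation contributes to both fitting and validation, which is why the method performs best at the limited sample sizes discussed above.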
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.
1975-01-01
Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real time acquisition and formatting of data from an all up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystems modules, module integration, special test requirements, and reference data formats are also described.
FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0
Durbin, Timothy J.; Bond, Linda D.
1998-01-01
This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a model simulation with FEMFLOW3D were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI x3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.
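Validating a numerical solver against an analytic solution, as done for the four FEMFLOW3D test problems, can be illustrated with a deliberately tiny example (not one of the report's test problems): steady one-dimensional confined flow, whose analytic head profile is a straight line between the boundary heads.

```python
def solve_steady_1d(n=11, h_left=10.0, h_right=5.0, iterations=5000):
    """Jacobi iteration for h'' = 0 on a uniform grid with fixed heads
    at both ends (a minimal stand-in for a groundwater-flow solver)."""
    h = [h_left] + [0.0] * (n - 2) + [h_right]
    for _ in range(iterations):
        h = ([h[0]]
             + [0.5 * (h[i - 1] + h[i + 1]) for i in range(1, n - 1)]
             + [h[-1]])
    return h

def analytic_heads(n=11, h_left=10.0, h_right=5.0):
    """Exact solution: a linear head profile between the boundary values."""
    return [h_left + (h_right - h_left) * i / (n - 1) for i in range(n)]

numerical = solve_steady_1d()
exact = analytic_heads()
max_error = max(abs(a - b) for a, b in zip(numerical, exact))
```

The validation criterion is simply that `max_error` falls below a stated tolerance; the same comparison pattern scales up to the report's three-dimensional test problems.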
Validation of a novel laparoscopic adjustable gastric band simulator.
Sankaranarayanan, Ganesh; Adair, James D; Halic, Tansel; Gromski, Mark A; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B; De, Suvranu
2011-04-01
Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. The aim of our study was to determine face, construct, and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Twenty-eight subjects were categorized into two groups (expert and novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least 4 years of laparoscopic training and operative experience. Novices consisted of subjects with medical training but with less than 4 years of laparoscopic training. The subjects used the virtual reality laparoscopic adjustable band surgery simulator. They were automatically scored according to various tasks. The subjects then completed a questionnaire to evaluate face and content validity. On a 5-point Likert scale (1 = lowest score, 5 = highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 (face validity). There were significant differences in the performances of the two subject groups (expert and novice) based on total scores (p < 0.001) (construct validity). Mean score for utility of the simulator, as addressed by the expert group, was 4.50 ± 0.71 (content validity). We created a virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct, and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure.
Simulation validation and management
NASA Astrophysics Data System (ADS)
Illgen, John D.
1995-06-01
Illgen Simulation Technologies, Inc., has been working on interactive verification and validation programs for the past six years. As a result, the company has evolved a methodology that has been adopted and successfully implemented by a number of different verification and validation programs. This methodology employs a unique use of computer-assisted software engineering (CASE) tools to reverse engineer source code and produce analytical outputs (flow charts and tables) that aid the engineer/analyst in the verification and validation process. We have found that the use of CASE tools saves time, which equates to improvements in both schedule and cost. This paper will describe the ISTI-developed methodology and how CASE tools are used in its support. Case studies will be discussed.
Development and validation of a GEANT4 radiation transport code for CT dosimetry
Carver, DE; Kost, SD; Fernald, MJ; Lewis, KG; Fraser, ND; Pickens, DR; Price, RR; Stabin, MG
2014-01-01
We have created a radiation transport code using the GEANT4 Monte Carlo toolkit to simulate pediatric patients undergoing CT examinations. The focus of this paper is to validate our simulation with real-world physical dosimetry measurements using two independent techniques. Exposure measurements were made with a standard 100-mm CT pencil ionization chamber, and absorbed doses were also measured using optically stimulated luminescent (OSL) dosimeters. Measurements were made in air, a standard 16-cm acrylic head phantom, and a standard 32-cm acrylic body phantom. Physical dose measurements determined from the ionization chamber in air for 100 and 120 kVp beam energies were used to derive photon-fluence calibration factors. Both ion chamber and OSL measurement results provide useful comparisons in the validation of our Monte Carlo simulations. We found that simulated and measured CTDI values were within an overall average of 6% of each other. PMID:25706135
Development and validation of a GEANT4 radiation transport code for CT dosimetry.
Carver, D E; Kost, S D; Fernald, M J; Lewis, K G; Fraser, N D; Pickens, D R; Price, R R; Stabin, M G
2015-04-01
The authors have created a radiation transport code using the GEANT4 Monte Carlo toolkit to simulate pediatric patients undergoing CT examinations. The focus of this paper is to validate their simulation with real-world physical dosimetry measurements using two independent techniques. Exposure measurements were made with a standard 100-mm CT pencil ionization chamber, and absorbed doses were also measured using optically stimulated luminescent (OSL) dosimeters. Measurements were made in air with a standard 16-cm acrylic head phantom and with a standard 32-cm acrylic body phantom. Physical dose measurements determined from the ionization chamber in air for 100 and 120 kVp beam energies were used to derive photon-fluence calibration factors. Both ion chamber and OSL measurement results provide useful comparisons in the validation of the Monte Carlo simulations. It was found that simulated and measured CTDI values were within an overall average of 6% of each other.
Numerical simulations in the development of propellant management devices
NASA Astrophysics Data System (ADS)
Gaulke, Diana; Winkelmann, Yvonne; Dreyer, Michael
Propellant management devices (PMDs) are used for positioning the propellant at the propellant port. It is important to provide propellant without gas bubbles: gas bubbles can induce cavitation and, in the worst case, may lead to system failures. Therefore, the reliable operation of such devices must be guaranteed. Testing these complex systems is a very intricate process, and in most cases only tests with downscaled geometries are possible. Numerical simulations are used here as an aid to optimize the tests and to predict certain results. Based on these simulations, parameters can be determined in advance and parts of the equipment can be adjusted in order to minimize the number of experiments. In return, the simulations are validated against the test results. Furthermore, if the accuracy of the numerical prediction is verified, numerical simulations can be used to validate the scaling of the experiments. This presentation demonstrates selected numerical simulations for the development of PMDs at ZARM.
Simulation studies for the evaluation of health information technologies: experiences and results.
Ammenwerth, Elske; Hackl, Werner O; Binzer, Kristine; Christoffersen, Tue E H; Jensen, Sanne; Lawton, Kitta; Skjoet, Peter; Nohr, Christian
It is essential for new health information technologies (IT) to undergo rigorous evaluations to ensure they are effective and safe for use in real-world situations. However, evaluation of new health IT is challenging, as field studies are often not feasible when the technology being evaluated is not sufficiently mature. Laboratory-based evaluations have also been shown to have insufficient external validity. Simulation studies seem to be a way to bridge this gap. The aim of this study was to evaluate, using a simulation methodology, the impact of a new prototype of an electronic medication management system on the appropriateness of prescriptions and drug-related activities, including laboratory test ordering or medication changes. This article presents the results of a controlled simulation study with 50 simulation runs, including ten doctors and five simulation patients, and discusses experiences and lessons learnt while conducting the study. Although the new electronic medication management system showed tendencies to improve medication safety when compared with the standard system, this tendency was not significant. Altogether, five distinct situations were identified where the new medication management system did help to improve medication safety. This simulation study provided a good compromise between internal validity and external validity. However, several challenges need to be addressed when undertaking simulation evaluations including: preparation of adequate test cases; training of participants before using unfamiliar applications; consideration of time, effort and costs of conducting the simulation; technical maturity of the evaluated system; and allowing adequate preparation of simulation scenarios and simulation setting. Simulation studies are an interesting but time-consuming approach, which can be used to evaluate newly developed health IT systems, particularly those systems that are not yet sufficiently mature to undergo field evaluation studies.
Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L
2017-02-01
To assess the construct and face validity of ArthroS, a passive haptic VR simulator; a secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. The face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience, and the passive haptic feedback of the simulator needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.
Development and validation of the Simulation Learning Effectiveness Inventory.
Chen, Shiah-Lian; Huang, Tsai-Wei; Liao, I-Chen; Liu, Chienchi
2015-10-01
To develop and psychometrically test the Simulation Learning Effectiveness Inventory. High-fidelity simulation helps students develop clinical skills and competencies, yet reliable instruments measuring learning outcomes are scant. A descriptive cross-sectional survey was used to validate the psychometric properties of the instrument measuring students' perception of simulation learning effectiveness. A purposive sample of 505 nursing students who had taken simulation courses was recruited from the department of nursing of a university in central Taiwan between January 2010 and June 2010. The study was conducted in two phases. In Phase I, question items were developed based on the literature review and the preliminary psychometric properties of the inventory were evaluated using exploratory factor analysis. Phase II was conducted to evaluate the reliability and validity of the finalized inventory using confirmatory factor analysis. The results of exploratory and confirmatory factor analyses revealed that the instrument was composed of seven factors, named course arrangement, equipment resource, debriefing, clinical ability, problem-solving, confidence and collaboration. A further second-order analysis showed comparable fits between a three second-order factor model (preparation, process and outcome) and the seven first-order factor model. Internal consistency was supported by adequate Cronbach's alphas and composite reliability. Convergent and discriminant validity were also supported by confirmatory factor analysis. The study provides evidence that the Simulation Learning Effectiveness Inventory is reliable and valid for measuring students' perception of learning effectiveness. The instrument is helpful in building evidence-based knowledge of the effect of simulation teaching on students' learning outcomes. © 2015 John Wiley & Sons Ltd.
SINERGIA laparoscopic virtual reality simulator: didactic design and technical development.
Lamata, Pablo; Gómez, Enrique J; Sánchez-Margallo, Francisco M; López, Oscar; Monserrat, Carlos; García, Verónica; Alberola, Carlos; Florido, Miguel Angel Rodríguez; Ruiz, Juan; Usón, Jesús
2007-03-01
VR laparoscopic simulators have demonstrated their validity in recent studies, and research should now be directed towards high training effectiveness and efficacy. In this direction, an insight into simulators' didactic design and technical development is provided by describing the methodology followed in building the SINERGIA simulator. It departs from a clear analysis of training needs driven by a surgical training curriculum. Existing solutions and validation studies are an important reference for the definition of specifications, which are addressed with a suitable use of simulation technologies. Five new didactic exercises are proposed to train some of the basic laparoscopic skills. Simulator construction has required both existing algorithms and the development of a particle-based biomechanical model, called PARSYS, and a collision handling solution based on a multi-point strategy. The resulting VR laparoscopic simulator includes new exercises and enhanced simulation technologies, and is finding very good acceptance among surgeons.
DES Y1 Results: Validating Cosmological Parameter Estimation Using Simulated Dark Energy Surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacCrann, N.; et al.
We use mock galaxy survey simulations designed to resemble the Dark Energy Survey Year 1 (DES Y1) data to validate and inform cosmological parameter estimation. When similar analysis tools are applied to both simulations and real survey data, they provide powerful validation tests of the DES Y1 cosmological analyses presented in companion papers. We use two suites of galaxy simulations produced using different methods, which therefore provide independent tests of our cosmological parameter inference. The cosmological analysis we aim to validate is presented in DES Collaboration et al. (2017) and uses angular two-point correlation functions of galaxy number counts and weak lensing shear, as well as their cross-correlation, in multiple redshift bins. While our constraints depend on the specific set of simulated realizations available, for both suites of simulations we find that the input cosmology is consistent with the combined constraints from multiple simulated DES Y1 realizations in the $\Omega_m-\sigma_8$ plane. For one of the suites, we are able to show with high confidence that any biases in the inferred $S_8=\sigma_8(\Omega_m/0.3)^{0.5}$ and $\Omega_m$ are smaller than the DES Y1 $1\sigma$ uncertainties. For the other suite, for which we have fewer realizations, we are unable to be this conclusive; we infer a roughly 70% probability that systematic biases in the recovered $\Omega_m$ and $S_8$ are sub-dominant to the DES Y1 uncertainty. As cosmological analyses of this kind become increasingly precise, validation of parameter inference using survey simulations will be essential to demonstrate robustness.
Farhan, Bilal; Soltani, Tandis; Do, Rebecca; Perez, Claudia; Choi, Hanul; Ghoniem, Gamal
2018-05-02
Endoscopic injection of urethral bulking agents is an office procedure that is used to treat stress urinary incontinence secondary to internal sphincteric deficiency. Validation studies are an important part of simulator evaluation and are considered an essential step in establishing the effectiveness of simulation-based training. The endoscopic needle injection (ENI) simulator has not been formally validated, although it has been used widely at the University of California, Irvine. We aimed to assess the face, content, and construct validity of the UC Irvine ENI simulator. Dissected female porcine bladders were mounted in a modified Hysteroscopy Diagnostic Trainer. Using routine endoscopic equipment for this procedure with video monitoring, 6 urologists (expert group) and 6 urology trainees (novice group) completed urethral bulking agent injections on a total of 12 bladders using the ENI simulator. Face and content validities were assessed using a structured quantitative survey rating realism. Construct validity was assessed by comparing the performance, time of the procedure, and the occlusive (anatomical and functional) evaluations between the experts and novices. Trainees also completed a postprocedure feedback survey. Effective injections were evaluated by measuring the retrograde urethral opening pressure, visual cystoscopic coaptation, and postprocedure gross anatomic examination. All 12 participants felt the simulator was a good training tool and should be used as an essential part of urology training (face validity). The ENI simulator showed good face and content validity, with average scores of 3.9/5 for the experts and 3.8/5 for the novices. Content validity evaluation showed that most aspects of the simulator were adequately realistic (mean Likert scores 3.8-3.9/5). However, the bladder does not bleed and is sometimes thin.
Experts significantly outperformed novices (p < 0.001) across all measures of performance, thereby establishing construct validity. The ENI simulator shows face, content, and construct validity, although a few aspects of the simulator were not fully realistic (e.g., absence of bleeding). This study provides a basis for future formal validation of this simulator and for its continuing use in endourology training. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Engineering equations for characterizing non-linear laser intensity propagation in air with loss.
Karr, Thomas; Stotts, Larry B; Tellez, Jason A; Schmidt, Jason D; Mansell, Justin D
2018-02-19
The propagation of high peak-power laser beams in real atmospheres will be affected at long range by both linear and nonlinear effects contained therein. The mathematical characterization of this phenomenon is most closely associated with J. H. Marburger. This paper provides a validated set of engineering equations for characterizing the self-focusing distance of a laser beam propagating through non-turbulent air, with and without loss, for three source configurations: (1) no lens, (2) converging lens, and (3) diverging lens. The validation was done against wave-optics simulation results. Some validated equations follow Marburger completely, but others do not, requiring modification of the original theory. Our results can provide a guide for numerical simulations and field experiments.
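As a hedged illustration of the kind of engineering equation involved, the classic Marburger semi-empirical estimate of the self-focusing collapse distance for a collimated Gaussian beam can be sketched as below. This is the textbook form, not the modified loss-including equations this paper derives, and prefactor conventions for the critical power vary in the literature.

```python
import math

def critical_power(wavelength, n0, n2):
    """Approximate critical power for self-focusing of a Gaussian beam
    (one common convention: Pcr = 3.77 * lambda^2 / (8 * pi * n0 * n2);
    prefactors differ between references)."""
    return 3.77 * wavelength**2 / (8.0 * math.pi * n0 * n2)

def marburger_distance(power, p_crit, radius, wavelength):
    """Marburger's semi-empirical self-focusing distance for a collimated
    Gaussian beam of 1/e intensity radius `radius` (lossless, no lens).
    Only meaningful for power sufficiently above p_crit."""
    bracket = (math.sqrt(power / p_crit) - 0.852) ** 2 - 0.0219
    if bracket <= 0.0:
        raise ValueError("power too close to or below the critical power")
    k = 2.0 * math.pi / wavelength  # vacuum wavenumber
    return 0.367 * k * radius**2 / math.sqrt(bracket)
```

For the lensed source configurations, the collapse distance is commonly obtained from this value via a thin-lens transformation; the paper's validated equations modify these textbook forms to account for loss.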
NASA Astrophysics Data System (ADS)
Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin
2018-04-01
This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three-phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results from published studies and laboratory experiments were used for the validation. The results showed that the correlation coefficients for the simulated and experimental water contents at different soil depths were between 0.83 and 0.92. The correlation coefficients for the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. With these high accuracies, the developed model can be well used to predict the water contents at different soil depths and temperatures.
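The correlation-coefficient comparison used above can be reproduced with a minimal sketch (the water-content values here are hypothetical, not the study's measurements):

```python
import numpy as np

# Hypothetical simulated vs. measured volumetric water contents
# at five soil depths (values invented for illustration only)
simulated = np.array([0.31, 0.28, 0.25, 0.22, 0.20])
measured = np.array([0.33, 0.27, 0.26, 0.21, 0.18])

# Pearson correlation coefficient between model output and experiment
r = np.corrcoef(simulated, measured)[0, 1]
```

A coefficient near 1 indicates that the model tracks the measured depth profile closely, which is the sense in which the abstract reports values of 0.83-0.99.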
Simulation of crossflow instability on a supersonic highly swept wing
NASA Technical Reports Server (NTRS)
Pruett, C. David
1995-01-01
A direct numerical simulation (DNS) algorithm has been developed and validated for use in the investigation of crossflow instability on supersonic swept wings, an application of potential relevance to the design of the High-Speed Civil Transport (HSCT). The algorithm is applied to the investigation of stationary crossflow instability on an infinitely long 77-degree swept wing in Mach 3.5 flow. The results of the DNS are compared with the predictions of linear parabolized stability equation (PSE) methodology. In general, the DNS and PSE results agree closely in terms of modal growth rate, structure, and orientation angle. Although further validation is needed for large-amplitude (nonlinear) disturbances, the close agreement between independently derived methods offers preliminary validation of both DNS and PSE approaches.
Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems
NASA Astrophysics Data System (ADS)
Nieciąg, Halina
2015-10-01
Software is used to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of a fragment of software used to calculate the values of measurands. Due to the number and nature of the variables affecting the coordinate measurement results and the complex, multi-dimensional character of measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempt to improve the results obtained by classic Monte Carlo tools. The LHS (Latin Hypercube Sampling) algorithm was implemented as an alternative to the simple sampling scheme of the classic algorithm.
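A minimal sketch of the idea, not the paper's implementation: Latin Hypercube Sampling splits each input dimension into equiprobable strata and places exactly one sample in each stratum, which typically reduces estimator variance relative to simple random sampling.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Draw an (n_samples x n_dims) LHS design on the unit hypercube:
    each dimension is split into n_samples equal strata, and each
    stratum receives exactly one point at a random position within it."""
    rng = np.random.default_rng(rng)
    # one uniform draw inside each stratum ...
    u = rng.random((n_samples, n_dims))
    # ... then independently shuffle the stratum order per dimension
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + u) / n_samples
```

Samples from the design can then be mapped through each input's inverse CDF to drive the uncertainty propagation, exactly as plain Monte Carlo draws would be.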
DNS of Flows over Periodic Hills using a Discontinuous-Galerkin Spectral-Element Method
NASA Technical Reports Server (NTRS)
Diosady, Laslo T.; Murman, Scott M.
2014-01-01
Direct numerical simulation (DNS) of turbulent compressible flows is performed using a higher-order space-time discontinuous-Galerkin finite-element method. The numerical scheme is validated by performing DNS of the evolution of the Taylor-Green vortex and of turbulent flow in a channel. The higher-order method is shown to provide increased accuracy relative to low-order methods at a given number of degrees of freedom. The turbulent flow over a periodic array of hills in a channel is simulated at Reynolds number 10,595 using an 8th-order scheme in space and a 4th-order scheme in time. These results are validated against previous large eddy simulation (LES) results. A preliminary analysis provides insight into how these detailed simulations can be used to improve Reynolds-averaged Navier-Stokes (RANS) modeling.
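For reference, the Taylor-Green vortex used in such validation exercises has a simple closed-form initial condition. The sketch below is the standard incompressible textbook form on a periodic box (the paper simulates the compressible case, which adds thermodynamic initial fields):

```python
import numpy as np

def taylor_green_velocity(n=32):
    """Initial velocity u = (sin x cos y cos z, -cos x sin y cos z, 0)
    on an n^3 periodic grid over [0, 2*pi)^3; analytically divergence-free."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    u = np.sin(X) * np.cos(Y) * np.cos(Z)
    v = -np.cos(X) * np.sin(Y) * np.cos(Z)
    w = np.zeros_like(u)
    return u, v, w
```

Because the field is a single Fourier mode per direction, a spectral divergence check recovers zero to machine precision, which makes the case a convenient smoke test for any DNS discretization.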
Electron backscattering simulation in Geant4
NASA Astrophysics Data System (ADS)
Dondero, Paolo; Mantero, Alfonso; Ivanchencko, Vladimir; Lotti, Simone; Mineo, Teresa; Fioretti, Valentina
2018-06-01
The backscattering of electrons is a key phenomenon in several physics applications, ranging from medical therapy to space, including AREMBES, the new ESA simulation framework for radiation background effects. The importance of properly reproducing this complex interaction has grown considerably in recent years, and the Geant4 Monte Carlo simulation toolkit, recently upgraded to version 10.3, is able to comply with the AREMBES requirements in a wide energy range. In this study, a validation of the Geant4 electron backscattering models is performed against several experimental datasets. In addition, a selection of the most recent validation results on electron scattering processes is also presented. Our analysis shows good agreement between simulations and data from several experiments, confirming the Geant4 electron backscattering models to be robust and reliable down to a few tens of electronvolts.
ERIC Educational Resources Information Center
Lievens, Filip; Patterson, Fiona
2011-01-01
In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of…
Validation of Solar Sail Simulations for the NASA Solar Sail Demonstration Project
NASA Technical Reports Server (NTRS)
Braafladt, Alexander C.; Artusio-Glimpse, Alexandra B.; Heaton, Andrew F.
2014-01-01
NASA's Solar Sail Demonstration project partner L'Garde is currently assembling a flight-like sail assembly for a series of ground demonstration tests beginning in 2015. For future missions of this sail that might validate solar sail technology, it is necessary to have an accurate sail thrust model. One of the primary requirements of a proposed potential technology validation mission will be to demonstrate solar sail thrust over a set time period, which for this project is nominally 30 days. This requirement would be met by comparing a L'Garde-developed trajectory simulation to the as-flown trajectory. The current sail simulation baseline for L'Garde is a Systems Tool Kit (STK) plug-in that includes a custom-designed model of the L'Garde sail. The STK simulation has been verified for a flat plate model by comparing it to the NASA-developed Solar Sail Spaceflight Simulation Software (S5). S5 matched STK with a high degree of accuracy, and the results of the validation indicate that the L'Garde STK model is accurate enough to meet the potential future mission requirements. Additionally, since the L'Garde sail deviates considerably from a flat plate, a force model for a non-flat sail provided by L'Garde was also tested and compared to a flat plate model in S5. This result will be used in the future as a basis of comparison to the non-flat sail model being developed for STK.
Model-Based Verification and Validation of Spacecraft Avionics
NASA Technical Reports Server (NTRS)
Khan, M. Omair; Sievers, Michael; Standley, Shaun
2012-01-01
Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system-level V&V using modeling and simulation, and to use scarce hardware testing time to validate models, as has long been the norm for thermal and structural V&V. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated, enabling a more complete set of test cases than possible on flight hardware. SysML simulations provide access and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated such an approach is possible.
Development, Validation, and Application of OSSEs at NASA/GMAO
NASA Technical Reports Server (NTRS)
Errico, Ronald; Prive, Nikki
2015-01-01
During the past several years, NASA Goddard's Global Modeling and Assimilation Office (GMAO) has been developing a framework for conducting Observing System Simulation Experiments (OSSEs). The motivation and design of that framework will be described and a sample of validation results presented. Fundamental issues will be highlighted, particularly the critical importance of appropriately simulating system errors. Some problems that have recently arisen in the newest experimental system will also be mentioned.
MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes
NASA Astrophysics Data System (ADS)
Fonseca, T. C. F.; Mendes, B. M.; Lacerda, M. A. S.; Silva, L. A. C.; Paixão, L.; Bastos, F. M.; Ramirez, J. V.; Junior, J. P. R.
2017-11-01
The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport and in modelling and simulation applied to radiation protection and dosimetry research. For its first inter-comparison task, the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and to validate their simulated results by comparing them with experimental measurements carried out in the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10×10 cm². This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC calculated results to the experimentally measured PDD20,10 and TPR20,10. Simulations were performed reproducing the experimental TPR20,10 quality index, which provides a satisfactory description of both the PDD curve and the transverse profiles at the two depths measured. This paper reports in detail the modelling process using the MCNPX, MCNP6, EGSnrc and PENELOPE Monte Carlo codes, the source and tally descriptions, the validation processes and the results.
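The two beam-quality parameters compared in the validation have simple operational definitions; a hedged sketch follows. The linear conversion below is the empirical TPR20,10 ≈ 1.2661·PDD20,10 − 0.0595 relation given in the IAEA TRS-398 code of practice, quoted here as background, not something derived in this abstract.

```python
def pdd_20_10(dose_20cm, dose_10cm):
    """Ratio of depth doses measured at 20 cm and 10 cm depth in water
    (fixed SSD, 10x10 cm^2 field) -- the PDD20,10 quality index."""
    return dose_20cm / dose_10cm

def tpr_20_10_from_pdd(pdd2010):
    """Empirical conversion to the TPR20,10 beam-quality index,
    per IAEA TRS-398: TPR20,10 ~= 1.2661 * PDD20,10 - 0.0595."""
    return 1.2661 * pdd2010 - 0.0595
```

For a typical 6 MV beam, ionization-chamber readings at the two depths yield PDD20,10 around 0.58, giving TPR20,10 near 0.67, the kind of value the MC codes were asked to reproduce.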
Cloud computing and validation of expandable in silico livers.
Ropella, Glen E P; Hunt, C Anthony
2010-12-03
In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validation experiments that required scaling experiments to use more simulated lobules than previously, more than could be achieved using the local cluster technology. Rather than dramatically increasing the size of our local cluster, we undertook the re-validation experiments using the Amazon EC2 cloud platform. Doing so required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed using the cloud platform. The local cluster technology was duplicated in the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem using both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstration of results equivalency from two different wet-labs. The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters.
The availability of cloud technology coupled with the evidence of scientific equivalency has lowered the barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware.
V-SUIT Model Validation Using PLSS 1.0 Test Results
NASA Technical Reports Server (NTRS)
Olthoff, Claas
2015-01-01
The dynamic portable life support system (PLSS) simulation software Virtual Space Suit (V-SUIT) has been under development at the Technische Universität München since 2011 as a spin-off from the Virtual Habitat (V-HAB) project. The MATLAB(TM)-based V-SUIT simulates space suit portable life support systems and their interaction with a detailed and dynamic human model, as well as the dynamic external environment of a space suit moving on a planetary surface. To demonstrate the feasibility of a large, system-level simulation like V-SUIT, a model of NASA's PLSS 1.0 prototype was created. This prototype was run through an extensive series of tests in 2011. Since the test setup was heavily instrumented, it produced a wealth of data, making it ideal for model validation. The implemented model includes all components of the PLSS in both the ventilation and thermal loops. The major components are modeled in greater detail, while smaller and ancillary components are low-fidelity black box models. The major components include the Rapid Cycle Amine (RCA) CO2 removal system, the Primary and Secondary Oxygen Assembly (POS/SOA), the Pressure Garment System Volume Simulator (PGSVS), the Human Metabolic Simulator (HMS), the heat exchanger between the ventilation and thermal loops, the Space Suit Water Membrane Evaporator (SWME) and finally the Liquid Cooling Garment Simulator (LCGS). Using the created model, dynamic simulations were performed using the same test points used during PLSS 1.0 testing. The results of the simulation were then compared to the test data with special focus on absolute values during the steady state phases and dynamic behavior during the transition between test points. Quantified simulation results are presented that demonstrate which areas of the V-SUIT model are in need of further refinement and those that are sufficiently close to the test results.
Finally, lessons learned from the modelling and validation process are given in combination with implications for the future development of other PLSS models in V-SUIT.
Physics-based agent to simulant correlations for vapor phase mass transport.
Willis, Matthew P; Varady, Mark J; Pearl, Thomas P; Fouse, Janet C; Riley, Patrick C; Mantooth, Brent A; Lalain, Teri A
2013-12-15
Chemical warfare agent simulants are often used as agent surrogates to perform environmental testing while mitigating exposure hazards. This work specifically addresses the assessment of downwind agent vapor concentration resulting from an evaporating simulant droplet. A previously developed methodology was used to estimate the mass diffusivities of the chemical warfare agent simulants methyl salicylate, 2-chloroethyl ethyl sulfide, diethyl malonate, and chloroethyl phenyl sulfide. Along with the diffusivity of the chemical warfare agent bis(2-chloroethyl) sulfide, the simulant diffusivities were used in an advection-diffusion model to predict the vapor concentrations downwind from an evaporating droplet of each chemical at various wind velocities and temperatures. The results demonstrate that the simulant-to-agent concentration ratio and the corresponding vapor pressure ratio are equivalent under certain conditions. Specifically, the relationship is valid within certain ranges of measurement locations relative to the evaporating droplet and observation times. The valid ranges depend on the relative transport properties of the agent and simulant, and on whether vapor transport is diffusion or advection dominant. Published by Elsevier B.V.
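A minimal illustration of the reported equivalence, under the textbook assumption that the vapor at the droplet surface is an ideal gas in equilibrium with the liquid (the vapor pressures below are hypothetical, not the paper's data): at equal temperature, the ratio of saturation molar concentrations reduces to the vapor pressure ratio.

```python
R = 8.314  # universal gas constant, J/(mol*K)

def molar_saturation_concentration(p_vap, temperature):
    """Saturation molar concentration (mol/m^3) of vapor in equilibrium
    with the liquid surface, from the ideal gas law: c = p / (R * T)."""
    return p_vap / (R * temperature)
```

Because c is proportional to p at fixed T, downwind concentrations that scale with the source's saturation concentration inherit the vapor pressure ratio, which is the regime in which simulant measurements can stand in for agent predictions.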
An optimization model to agroindustrial sector in antioquia (Colombia, South America)
NASA Astrophysics Data System (ADS)
Fernandez, J.
2015-06-01
This paper proposes a general optimization model for the flower industry, defined by using discrete simulation and nonlinear optimization; the mathematical models have been solved using ProModel simulation tools and GAMS optimization. It defines the operations that constitute the production and marketing of the sector, presents statistically validated data taken directly from each operation through field work, and formulates the discrete simulation model of the operations and the linear optimization model of the entire industry chain. The model is solved with the tools described above, and the results are validated in a case study.
Integration and Validation of Hysteroscopy Simulation in the Surgical Training Curriculum.
Elessawy, Mohamed; Skrzipczyk, Moritz; Eckmann-Scholz, Christel; Maass, Nicolai; Mettler, Liselotte; Guenther, Veronika; van Mackelenbergh, Marion; Bauerschlag, Dirk O; Alkatout, Ibrahim
The primary objective of our study was to test the construct validity of the HystSim hysteroscopic simulator to determine whether simulation training can improve the acquisition of hysteroscopic skills regardless of the previous levels of experience of the participants. The secondary objective was to analyze the performance of a selected task, using specially designed scoring charts to help reduce the learning curve for both novices and experienced surgeons. The teaching of hysteroscopic intervention has received only scant attention, focusing mainly on the development of physical models and box simulators. This encouraged our working group to search for a suitable hysteroscopic simulator module and to test its validation. We decided to use the HystSim hysteroscopic simulator, which is one of the few such simulators that has already completed a validation process, with high ratings for both realism and training capacity. As a testing tool for our study, we selected the myoma resection task. We analyzed the results using the multimetric score system suggested by HystSim, allowing a more precise interpretation of the results. Between June 2014 and May 2015, our group collected data on 57 participants of minimally invasive surgical training courses at the Kiel School of Gynecological Endoscopy, Department of Gynecology and Obstetrics, University Hospitals Schleswig-Holstein, Campus Kiel. The novice group consisted of 42 medical students and residents with no prior experience in hysteroscopy, whereas the expert group consisted of 15 participants with more than 2 years of experience of advanced hysteroscopy operations. The overall results demonstrated that all participants attained significant improvements between their pretest and posttests, independent of their previous levels of experience (p < 0.002). Those in the expert group demonstrated statistically significant, superior scores in the pretest and posttests (p = 0.001, p = 0.006). 
Regarding visualization and ergonomics, the novices showed a better pretest value than the experts; however, the experts were able to improve significantly during the posttest. These findings demonstrate that the multimetric scoring system achieved several important objectives, including clinical relevance, critical relevance, and training motivation. All participants demonstrated improvements in their hysteroscopic skills, supporting an adequate construct validation of the HystSim. Using the multimetric scoring system enabled a more accurate analysis of the performance of the participants independent of their levels of experience, which could be an important key for streamlining the learning curve. Future studies testing the predictive validity of the simulator and the frequency of training intervals are necessary before the introduction of the simulator into the standard surgical training curriculum. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Nadkarni, Lindsay D; Roskind, Cindy G; Auerbach, Marc A; Calhoun, Aaron W; Adler, Mark D; Kessler, David O
2018-04-01
The aim of this study was to assess the validity of a formative feedback instrument for leaders of simulated resuscitations. This is a prospective validation study with a fully crossed (person × scenario × rater) study design. The Concise Assessment of Leader Management (CALM) instrument was designed by pediatric emergency medicine and graduate medical education experts to be used off the shelf to evaluate and provide formative feedback to resuscitation leaders. Four experts reviewed 16 videos of in situ simulated pediatric resuscitations and scored resuscitation leader performance using the CALM instrument. The videos consisted of 4 pediatric emergency department resuscitation teams each performing in 4 pediatric resuscitation scenarios (cardiac arrest, respiratory arrest, seizure, and sepsis). We report on content and internal structure (reliability) validity of the CALM instrument. Content validity was supported by the instrument development process that involved professional experience, expert consensus, focused literature review, and pilot testing. Internal structure validity (reliability) was supported by the generalizability analysis. The main component that contributed to score variability was the person (33%), meaning that individual leaders performed differently. The rater component had almost zero (0%) contribution to variance, which implies that raters were in agreement and argues for high interrater reliability. These results provide initial evidence to support the validity of the CALM instrument as a reliable assessment instrument that can facilitate formative feedback to leaders of pediatric simulated resuscitations.
Assessing Discriminative Performance at External Validation of Clinical Prediction Models
Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.
2016-01-01
Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogenous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population.
To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
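The permutation logic described above can be sketched in a few lines: compute the c-statistic in each set, then repeatedly reshuffle the development/validation labels to build a null distribution for the difference. The data shapes, sample sizes, and permutation count below are illustrative, not taken from the study.

```python
import numpy as np

def c_statistic(y, p):
    """Concordance (c-statistic): probability that a randomly chosen event
    receives a higher predicted risk than a randomly chosen non-event
    (ties count one half)."""
    pos, neg = p[y == 1], p[y == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

def permutation_test(y_dev, p_dev, y_val, p_val, n_perm=1000, seed=0):
    """Sketch of a permutation test on the development-to-validation change in
    the c-statistic: pool both samples, repeatedly reshuffle the set labels,
    and locate the observed difference within the null distribution."""
    rng = np.random.default_rng(seed)
    obs = c_statistic(y_dev, p_dev) - c_statistic(y_val, p_val)
    y = np.concatenate([y_dev, y_val])
    p = np.concatenate([p_dev, p_val])
    n = len(y_dev)
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(y))
        null[i] = (c_statistic(y[idx[:n]], p[idx[:n]])
                   - c_statistic(y[idx[n:]], p[idx[n:]]))
    # Two-sided p-value: fraction of null differences at least as extreme
    return obs, float((np.abs(null) >= abs(obs)).mean())
```

As the abstract notes, such a test compares sets as wholes, so case-mix differences and miscalibrated coefficients are confounded in the resulting p-value.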
Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images
2009-12-01
S. Wong, DRDC Ottawa ... for simulating radar images of a target is obtained through direct simulation-to-measurement comparisons. A 3-dimensional computer-aided design
NASA Astrophysics Data System (ADS)
Song, S. G.
2016-12-01
Simulation-based ground motion prediction approaches have several benefits over empirical ground motion prediction equations (GMPEs). For instance, full 3-component waveforms can be produced and site-specific hazard analysis is also possible. However, it is important to validate them against observed ground motion data to confirm their efficiency and validity before practical use. There have been community efforts for these purposes, supported by the Broadband Platform (BBP) project at the Southern California Earthquake Center (SCEC). In simulation-based ground motion prediction, preparing a plausible range of scenario rupture models is a critical element. I developed a pseudo-dynamic source model for Mw 6.5-7.0 by analyzing a number of dynamic rupture models, based on 1-point and 2-point statistics of earthquake source parameters (Song et al. 2014; Song 2016). In this study, the developed pseudo-dynamic source models were tested against observed ground motion data on the SCEC BBP, Ver 16.5. The validation was performed in two stages. In the first stage, simulated ground motions were validated against observed ground motion data for past events such as the 1992 Landers and 1994 Northridge, California, earthquakes. In the second stage, they were validated against the latest generation of empirical GMPEs, i.e., NGA-West2. The validation results show that the simulated ground motions produce ground motion intensities compatible with observed ground motion data at both stages. The compatibility of the pseudo-dynamic source models with the omega-square spectral decay and the standard deviation of the simulated ground motion intensities are also discussed in the study.
Bolinger, Elizabeth; Reese, Caitlin; Suhr, Julie; Larrabee, Glenn J
2014-02-01
We examined the effect of simulated head injury on scores on the Neurological Complaints (NUC) and Cognitive Complaints (COG) scales of the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF). Young adults with a history of mild head injury were randomly assigned to simulate head injury or give their best effort on a battery of neuropsychological tests, including the MMPI-2-RF. Simulators who also showed poor effort on performance validity tests (PVTs) were compared with controls who showed valid performance on PVTs. Results showed that both scales, but especially NUC, are elevated in individuals simulating head injury, with medium to large effect sizes. Although both scales were highly correlated with all MMPI-2-RF over-reporting validity scales, the relationship of Response Bias Scale to both NUC and COG was much stronger in the simulators than controls. Even accounting for over-reporting on the MMPI-2-RF, NUC was related to general somatic complaints regardless of group membership, whereas COG was related to both psychological distress and somatic complaints in the control group only. Neither scale was related to actual neuropsychological performance, regardless of group membership. Overall, results provide further evidence that self-reported cognitive symptoms can be due to many causes, not necessarily cognitive impairment, and can be exaggerated in a non-credible manner.
NASA Astrophysics Data System (ADS)
Hawes, Frederick T.; Berk, Alexander; Richtsmeier, Steven C.
2016-05-01
A validated, polarimetric 3-dimensional simulation capability, P-MCScene, is being developed by generalizing Spectral Sciences' Monte Carlo-based synthetic scene simulation model, MCScene, to include calculation of all 4 Stokes components. P-MCScene polarimetric optical databases will be generated by a new version (MODTRAN7) of the government-standard MODTRAN radiative transfer algorithm. The conversion of MODTRAN6 to a polarimetric model is being accomplished by (1) introducing polarimetric data, (2) vectorizing the MODTRAN radiation calculations, and (3) integrating the newly revised and validated vector discrete ordinate model VDISORT3. Early results, presented here, demonstrate a clear pathway to the long-term goal of fully validated polarimetric models.
Empirical validation of an agent-based model of wood markets in Switzerland
Hilty, Lorenz M.; Lemm, Renato; Thees, Oliver
2018-01-01
We present an agent-based model of wood markets and show our efforts to validate this model using empirical data from different sources, including interviews, workshops, experiments, and official statistics. Our own surveys closed gaps where data were not available. Our approach to model validation used a variety of techniques, including the replication of historical production amounts, prices, and survey results, as well as a historical case study of a large sawmill entering the market and becoming insolvent only a few years later. Validating the model using this case provided additional insights, showing how the model can be used to simulate scenarios of resource availability and resource allocation. We conclude that the outcome of the rigorous validation qualifies the model to simulate scenarios concerning resource availability and allocation in our study region. PMID:29351300
The impact of simulation education on self-efficacy towards teaching for nurse educators.
Garner, S L; Killingsworth, E; Bradshaw, M; Raj, L; Johnson, S R; Abijah, S P; Parimala, S; Victor, S
2018-03-23
The objective of this study was to assess the impact of a simulation workshop on self-efficacy towards teaching for nurse educators in India. Additionally, we sought to revise and validate a tool to measure self-efficacy in teaching for use with a global audience. Simulation is an evidence-based teaching and learning method and is increasingly used in nursing education globally. As new technology and teaching methods such as simulation continue to evolve, it is important for new as well as experienced nurse educators globally to have confidence in their teaching skills and abilities. The study included (1) instrument revision and measures of reliability and validity, (2) an 8-h faculty development workshop intervention on simulation, (3) pre- and post-surveys of self-efficacy among nurse educators, and (4) investigation of the relationship between faculty socio-demographics and degree of self-efficacy. The modified tool showed internal consistency (r = 0.98) and was validated by international faculty experts. There were significant improvements in total self-efficacy (P < 0.001) and subscale scores among nurse educators after the simulation workshop intervention when compared to pre-survey results. No significant relationships were found between socio-demographic variables and degree of self-efficacy. Strong self-efficacy in teaching among nurse educators is crucial for effective learning to occur. Results indicated the simulation workshop was effective in significantly improving self-efficacy towards teaching for nurse educators, measured using an internationally validated tool. The Minister of Health in India recently called for improvements in nursing education. Introducing nursing education on simulation as a teaching method in India and globally to improve self-efficacy among teachers is an example of a strategy towards meeting this call. © 2018 The Authors International Nursing Review published by John Wiley & Sons Ltd on behalf of International Council of Nurses.
Johnson, Sheena Joanne; Guediri, Sara M; Kilkenny, Caroline; Clough, Peter J
2011-12-01
This study developed and validated a virtual reality (VR) simulator for use by interventional radiologists. Research in the area of skill acquisition reports practice as essential to become a task expert. Studies on simulation show skills learned in VR can be successfully transferred to a real-world task. Recently, with improvements in technology, VR simulators have been developed to allow complex medical procedures to be practiced without risking the patient. Three studies are reported. In Study 1, 35 consultant interventional radiologists took part in a cognitive task analysis to empirically establish the key competencies of the Seldinger procedure. In Study 2, 62 participants performed one simulated procedure, and their performance was compared by expertise. In Study 3, the transferability of simulator training to a real-world procedure was assessed with 14 trainees. Study 1 produced 23 key competencies that were implemented as performance measures in the simulator. Study 2 showed the simulator had both face and construct validity, although some issues were identified. Study 3 showed the group that had undergone simulator training received significantly higher mean performance ratings on a subsequent patient procedure. The findings of this study support the centrality of validation in the successful design of simulators and show the utility of simulators as a training device. The studies show the key elements of a validation program for a simulator. In addition to task analysis and face and construct validity, the authors highlight the importance of transfer of training in validation studies.
Current Status of Simulation-based Training Tools in Orthopedic Surgery: A Systematic Review.
Morgan, Michael; Aydin, Abdullatif; Salih, Alan; Robati, Shibby; Ahmed, Kamran
To conduct a systematic review of orthopedic training and assessment simulators with reference to their level of evidence (LoE) and level of recommendation. Medline and EMBASE library databases were searched for English language articles published between 1980 and 2016, describing orthopedic simulators or validation studies of these models. All studies were assessed for LoE, and each model was subsequently awarded a level of recommendation using a modified Oxford Centre for Evidence-Based Medicine classification, adapted for education. A total of 76 articles describing orthopedic simulators met the inclusion criteria, 47 of which described at least 1 validation study. The most commonly identified models (n = 34) and validation studies (n = 26) were for knee arthroscopy. Construct validation was the most frequent validation study attempted by authors. In all, 62% (47 of 76) of the simulator studies described arthroscopy simulators, which also contained validation studies with the highest LoE. Orthopedic simulators are increasingly being subjected to validation studies, although the LoE of such studies generally remain low. There remains a lack of focus on nontechnical skills and on cost analyses of orthopedic simulators. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Hyper-X Stage Separation Trajectory Validation Studies
NASA Technical Reports Server (NTRS)
Tartabini, Paul V.; Bose, David M.; McMinn, John D.; Martin, John G.; Strovers, Brian K.
2003-01-01
An independent twelve-degree-of-freedom simulation of the X-43A separation trajectory was created with the Program to Optimize Simulated Trajectories (POST II). This simulation modeled the multi-body dynamics of the X-43A and its booster and included the effect of two pyrotechnically actuated pistons used to push the vehicles apart, as well as aerodynamic interaction forces and moments between the two vehicles. The simulation was developed to validate trajectory studies conducted with a 14-degree-of-freedom simulation created early in the program using the Automatic Dynamic Analysis of Mechanical Systems (ADAMS) simulation software. The POST simulation was less detailed than the official ADAMS-based simulation used by the Project, but was simpler, more concise, and ran faster while providing similar results. This increase in speed gave the Project an alternate analysis tool, ideal for performing separation control logic trade studies that required running numerous Monte Carlo trajectories.
[Numerical simulation and operation optimization of biological filter].
Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing
2014-12-01
BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process at the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, operation data from September 2013 were used for sensitivity analysis and model calibration, and operation data from October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate the practical DNBF + BAF process, and that the most sensitive parameters were those related to biofilm, OHOs and aeration. After calibration and validation, the model was used for process optimization by simulating operation under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg·L(-1) after methanol addition, influent C/N = 5.10.
In-Trail Procedure Air Traffic Control Procedures Validation Simulation Study
NASA Technical Reports Server (NTRS)
Chartrand, Ryan C.; Hewitt, Katrin P.; Sweeney, Peter B.; Graff, Thomas J.; Jones, Kenneth M.
2012-01-01
In August 2007, Airservices Australia (Airservices) and the United States National Aeronautics and Space Administration (NASA) conducted a validation experiment of the air traffic control (ATC) procedures associated with the Automatic Dependent Surveillance-Broadcast (ADS-B) In-Trail Procedure (ITP). ITP is an Airborne Traffic Situation Awareness (ATSA) application designed for near-term use in procedural airspace, in which ADS-B data are used to facilitate climb and descent maneuvers. NASA and Airservices conducted the experiment in Airservices' simulator in Melbourne, Australia. Twelve current operational air traffic controllers participated in the experiment, which identified aspects of the ITP that could be improved (mainly in the communication and controller approval process). Results showed that controllers viewed the ITP as valid and acceptable. This paper describes the experiment design and results.
Higuera-Trujillo, Juan Luis; López-Tarruella Maldonado, Juan; Llinares Millán, Carmen
2017-11-01
Psychological research into human factors frequently uses simulations to study the relationship between human behaviour and the environment. Their validity depends on their similarity to the physical environments they represent. This paper aims to validate three environmental-simulation display formats: photographs, 360° panoramas, and virtual reality. To do this, we compared the psychological and physiological responses evoked by the simulated-environment set-ups with those from a physical environment set-up; we also assessed the users' sense of presence. Analyses show that 360° panoramas offer the closest-to-reality results according to the participants' psychological responses, and virtual reality according to the physiological responses. Correlations between the feeling of presence and physiological and other psychological responses were also observed. These results may be of interest to researchers using currently available environmental-simulation technologies to replicate the experience of physical environments. Copyright © 2017 Elsevier Ltd. All rights reserved.
Face Validation of the Virtual Electrosurgery Skill Trainer (VEST©)
Sankaranarayanan, Ganesh; Li, Baichun; Miller, Amie; Wakily, Hussna; Jones, Stephanie B.; Schwaitzberg, Steven; Jones, Daniel B.; De, Suvranu; Olasky, Jaisa
2015-01-01
Background Electrosurgery is a widely used surgical modality whose use has resulted in injuries, OR fires and even death. SAGES has established the FUSE program to address the knowledge gap in the proper and safe usage of electrosurgical devices. Complementing it, we have developed the Virtual Electrosurgery Skill Trainer (VEST©), which is designed to train subjects in both the cognitive and motor skills necessary to safely operate electrosurgical devices. The objective of this study is to assess the face validity of the VEST© simulator. Methods Sixty-three subjects were recruited at the 2014 SAGES Learning Center. They all completed the monopolar electrosurgery module on the VEST© simulator. At the end of the study, subjects assessed face validity with questions scored on a 5-point Likert scale. Results The subjects were divided into two groups: FUSE experience (n = 15) and no FUSE experience (n = 48). The median score for both groups was 4 or higher on all questions, and 5 on questions about the effectiveness of VEST© in aiding learning of electrosurgery fundamentals. Questions on using the simulator in their own skills lab and recommending it to their peers also scored 5. A Mann-Whitney U test showed no significant difference between the groups (p > 0.05), indicating general agreement. 46% of the respondents preferred VEST©, compared to 52% who preferred an animal model and 2% who preferred both for training in electrosurgery. Conclusion This study demonstrated the face validity of the VEST© simulator. High scores showed that the simulator was visually realistic, reproduced lifelike tissue effects, and had features adequate to provide high realism. The self-learning instructional material was also found to be very useful in learning the fundamentals of electrosurgery. Adding more modules would increase the applicability of the VEST© simulator. PMID:26092003
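The group comparison reported above (ordinal Likert ratings compared with a Mann-Whitney U test) can be illustrated with a minimal rank-sum sketch. This is illustrative only: the helper names are hypothetical, the scores are made up, and a real analysis would use a full statistics package with tie-corrected p-values.

```python
import numpy as np

def avg_ranks(a):
    """1-based ranks of a 1-D array, with ties given their average rank
    (mergesort keeps the ranking stable)."""
    order = np.argsort(a, kind="mergesort")
    ranks = np.empty(len(a))
    i = 0
    while i < len(a):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(a) and a[order[j + 1]] == a[order[i]]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # average of ranks i+1..j+1
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """U statistic for two independent samples of ordinal (e.g. Likert) scores:
    U = R_x - n_x(n_x + 1)/2, where R_x is the rank sum of the first sample.
    A sketch only -- no p-value is computed here."""
    a = np.concatenate([x, y])
    rx = avg_ranks(a)[:len(x)].sum()
    return rx - len(x) * (len(x) + 1) / 2
```

Under the null of identical rating distributions, U is expected to be near n1·n2/2; in the study, the observed non-significant difference (p > 0.05) was read as agreement between the FUSE-experienced and inexperienced groups.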
A dynamic model of the human postural control system
NASA Technical Reports Server (NTRS)
Hill, J. C.
1972-01-01
A digital simulation of the pitch axis dynamics of a stick-man figure is described. Difficulties encountered in linearizing the equations of motion are discussed; the conclusion reached is that a completely linear simulation is of such restricted validity that only a nonlinear simulation is of any practical use. Typical simulation results obtained from the full nonlinear model are presented.
A dynamic model of the human postural control system.
NASA Technical Reports Server (NTRS)
Hill, J. C.
1971-01-01
Description of a digital simulation of the pitch axis dynamics of a stick man. The difficulties encountered in linearizing the equations of motion are discussed; the conclusion reached is that a completely linear simulation is of such restricted validity that only a nonlinear simulation is of any practical use. Typical simulation results obtained from the full nonlinear model are illustrated.
U.S. 75 Dallas, Texas, Model Validation and Calibration Report
DOT National Transportation Integrated Search
2010-02-01
This report presents the model validation and calibration results of the Integrated Corridor Management (ICM) analysis, modeling, and simulation (AMS) for the U.S. 75 Corridor in Dallas, Texas. The purpose of the project was to estimate the benefits ...
Simulated Driving Assessment (SDA) for teen drivers: results from a validation study.
McDonald, Catherine C; Kandadai, Venk; Loeb, Helen; Seacrist, Thomas S; Lee, Yi-Ching; Winston, Zachary; Winston, Flaura K
2015-06-01
Driver error and inadequate skill are common critical reasons for novice teen driver crashes, yet few validated, standardised assessments of teen driving skills exist. The purpose of this study is to evaluate the construct and criterion validity of a newly developed Simulated Driving Assessment (SDA) for novice teen drivers. The SDA's 35 min simulated drive incorporates 22 variations of the most common teen driver crash configurations. Driving performance was compared for 21 inexperienced teens (age 16-17 years, provisional license ≤90 days) and 17 experienced adults (age 25-50 years, license ≥5 years, drove ≥100 miles per week, no collisions or moving violations ≤3 years). SDA driving performance (Error Score) was based on driving safety measures derived from simulator and eye-tracking data. Negative driving outcomes included simulated collisions or run-off-the-road incidents. A professional driving evaluator/instructor (DEI Score) reviewed videos of SDA performance. The SDA demonstrated construct validity: (1) teens had a higher Error Score than adults (30 vs. 13, p=0.02); (2) for each additional error committed, the RR of a participant's propensity for a simulated negative driving outcome increased by 8% (95% CI 1.05 to 1.10, p<0.01). The SDA also demonstrated criterion validity: Error Score was correlated with DEI Score (r=-0.66, p<0.001). This study supports the concept of validated simulated driving tests like the SDA to assess novice driver skill in complex and hazardous driving scenarios. The SDA, as a standard protocol to evaluate teen driver performance, has the potential to facilitate screening and assessment of teen driving readiness and could be used to guide targeted skill training. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Simulation of a tethered microgravity robot pair and validation on a planar air bearing
NASA Astrophysics Data System (ADS)
Mantellato, R.; Lorenzini, E. C.; Sternberg, D.; Roascio, D.; Saenz-Otero, A.; Zachrau, H. J.
2017-09-01
A software model has been developed to simulate the on-orbit dynamics of a dual-mass tethered system in which one or both of the tethered spacecraft are able to produce propulsive thrust. The software simulates translations and rotations of both spacecraft, with the visco-elastic tether simulated as a lumped-mass model. Owing to this feature, tether longitudinal and lateral modes of vibration and tether tension can be accurately assessed, and the way the spacecraft motion responds to sudden tether tension spikes can be studied in detail. The code enables the simulation of different scenarios, including space tug missions for deorbit maneuvers in a debris mitigation context and general-purpose tethered formation flight missions. This study aims to validate the software through a representative test campaign performed with the MIT Synchronized Position Hold Engage and Reorient Experimental Satellites (SPHERES) planar air bearing system. Results obtained with the numerical simulator are compared with data from direct measurements in different testing setups. The studied cases take into account different initial conditions of the spacecraft velocities and relative attitudes, and thrust forces. Data analysis is presented comparing the results of the simulations with direct measurements of acceleration and azimuth rate of the two bodies in the planar air bearing test facility using a Nylon tether. A microgravity test campaign using the SPHERES satellites aboard the International Space Station is also planned, to further validate the simulation using data from the relevant operational environment of extended microgravity with full six-degree-of-freedom (per body) motion.
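The lumped-mass tether idea above (nodes joined by visco-elastic segments that carry tension but cannot push) can be sketched as a planar force routine suitable for a symplectic-Euler time stepper. All names and parameters are illustrative, not the paper's model.

```python
import numpy as np

def tether_forces(pos, vel, k, c, L0):
    """Forces on each lumped-mass node of a visco-elastic tether chain.
    pos, vel: (n, 2) planar node positions and velocities; k, c, L0 are the
    segment stiffness, damping, and unstretched length (assumed values).
    Each segment is a spring-damper that goes slack in compression."""
    f = np.zeros_like(pos)
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        L = np.linalg.norm(d)
        u = d / L                              # unit vector along the segment
        rate = np.dot(vel[i + 1] - vel[i], u)  # stretch rate along the segment
        T = max(k * (L - L0) + c * rate, 0.0)  # a tether cannot push
        f[i] += T * u
        f[i + 1] -= T * u
    return f
```

With more internal nodes, this chain captures the longitudinal and lateral vibration modes and the tension spikes the abstract mentions; the end nodes would additionally receive thrust and gravity terms.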
NASA Astrophysics Data System (ADS)
Nir, A.; Doughty, C.; Tsang, C. F.
Validation methods developed in the context of the deterministic concepts of past generations often cannot be applied directly to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure.
There is no attempt to validate a specific model; rather, several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26], that different constituencies have different objectives for the validation process and that their acceptance criteria therefore also differ.
A Digital Sensor Simulator of the Pushbroom Offner Hyperspectral Imaging Spectrometer
Tao, Dongxing; Jia, Guorui; Yuan, Yan; Zhao, Huijie
2014-01-01
Sensor simulators can be used in forecasting the imaging quality of a new hyperspectral imaging spectrometer, and in generating simulated data for the development and validation of data processing algorithms. This paper presents a novel digital sensor simulator for the pushbroom Offner hyperspectral imaging spectrometer, which is widely used in hyperspectral remote sensing. Following the imaging process, the sensor simulator consists of a spatial response module, a spectral response module, and a radiometric response module. To enhance simulation accuracy, spatial interpolation-resampling, implemented before the spatial degradation, is developed to balance the direction error against the extra aliasing effect. Instead of using the spectral response function (SRF), the dispersive imaging characteristics of the Offner convex-grating optical system are accurately modeled by its configuration parameters. Non-uniformity characteristics, such as keystone and smile effects, are simulated in the corresponding modules. In this work, the spatial, spectral and radiometric calibration processes are simulated to provide the modulation transfer function (MTF), SRF and radiometric calibration parameters of the sensor simulator. Some uncertainty factors (the stability and band width of the monochromator for the spectral calibration, and the integrating-sphere uncertainty for the radiometric calibration) are considered in the simulation of the calibration process. With the calibration parameters, several experiments were designed to validate the spatial, spectral and radiometric response of the sensor simulator, respectively. The experimental results indicate that the sensor simulator is valid. PMID:25615727
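The spatial-response stage described above (degrade a fine-grid input scene, then resample to the detector grid) can be sketched as follows. The separable Gaussian PSF and block-average resampling here are generic stand-ins for the simulator's measured MTF and its interpolation-resampling scheme; all parameters are illustrative.

```python
import numpy as np

def degrade(scene, sigma, factor):
    """Blur a high-resolution scene with a separable Gaussian PSF (a stand-in
    for the instrument MTF), then block-average down to the detector grid.
    sigma is the PSF width in fine-grid pixels; factor is the integer ratio
    of fine-grid to detector-grid sampling."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()                      # normalize so flux is preserved
    # Separable convolution: columns, then rows
    blur = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, scene)
    blur = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, blur)
    # Block-average resampling to the coarser detector grid
    h, w = blur.shape
    h2, w2 = h // factor, w // factor
    return blur[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))
```

In the paper's scheme, an interpolation-resampling step precedes this degradation specifically to trade direction error against aliasing, which this bare sketch omits.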
Gathering Validity Evidence for Surgical Simulation: A Systematic Review.
Borgersen, Nanna Jo; Naur, Therese M H; Sørensen, Stine M D; Bjerrum, Flemming; Konge, Lars; Subhi, Yousif; Thomsen, Ann Sofia S
2018-06-01
To identify current trends in the use of validity frameworks in surgical simulation, to provide an overview of the evidence behind the assessment of technical skills in all surgical specialties, and to present recommendations and guidelines for future validity studies. Validity evidence for assessment tools used in the evaluation of surgical performance is of paramount importance to ensure valid and reliable assessment of skills. We systematically reviewed the literature by searching 5 databases (PubMed, EMBASE, Web of Science, PsycINFO, and the Cochrane Library) for studies published from January 1, 2008, to July 10, 2017. We included original studies evaluating simulation-based assessments of health professionals in surgical specialties and extracted data on surgical specialty, simulator modality, participant characteristics, and the validity framework used. Data were synthesized qualitatively. We identified 498 studies with a total of 18,312 participants. Publications involving validity assessments in surgical simulation more than doubled from 2008 to 2010 (∼30 studies/year) to 2014 to 2016 (∼70 to 90 studies/year). Only 6.6% of the studies used the recommended contemporary validity framework (Messick). The majority of studies used outdated frameworks such as face validity. Significant differences were identified across surgical specialties. The evaluated assessment tools were mostly inanimate or virtual reality simulation models. An increasing number of studies have gathered validity evidence for simulation-based assessments in surgical specialties, but the use of outdated frameworks remains common. To address the current practice, this paper presents guidelines on how to use the contemporary validity framework when designing validity studies.
Thermalized Drude Oscillators with the LAMMPS Molecular Dynamics Simulator.
Dequidt, Alain; Devémy, Julien; Pádua, Agílio A H
2016-01-25
LAMMPS is a highly customizable molecular dynamics simulation package that can be used to simulate a large diversity of systems. We introduce a new package for the simulation of polarizable systems with LAMMPS using thermalized Drude oscillators. The implemented functionalities are described and illustrated by examples. The implementation was validated by comparing simulation results with published data and with reference software. Computational performance is also analyzed.
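The thermalized Drude oscillator idea (a light charged particle bound to a heavy core by a stiff spring, with the Drude degree of freedom kept cold so the induced dipole tracks its adiabatic minimum) can be illustrated with a toy 1-D Langevin step. This is a crude Euler-Maruyama sketch in reduced units (kB = 1), not the LAMMPS package's integrator, and all parameter values are assumptions.

```python
import numpy as np

def drude_step(x, v, m, k, gamma, T, dt, rng):
    """One Euler-Maruyama Langevin step for a 1-D core-Drude pair.
    x, v, m are length-2 arrays [core, drude]; k is the core-Drude spring
    constant; gamma and T set the thermostat (T would be kept low for the
    Drude coordinate in a proper dual-thermostat scheme)."""
    spring = k * (x[1] - x[0])
    f = np.array([spring, -spring])  # equal and opposite spring forces
    noise = rng.standard_normal(2)
    v = v + dt * (f / m - gamma * v) + np.sqrt(2.0 * gamma * T * dt / m) * noise
    x = x + dt * v
    return x, v
```

A production scheme (as in LAMMPS) thermostats the center-of-mass and relative coordinates separately at the bath temperature and a near-zero Drude temperature; this sketch applies a single thermostat per particle for brevity.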
2011-01-01
Background Valve dysfunction is a common cardiovascular pathology. Despite significant clinical research, there is little formal study of how valve dysfunction affects overall circulatory dynamics. Validated models would offer the ability to better understand these dynamics and thus optimize diagnosis, as well as surgical and other interventions. Methods A cardiovascular and circulatory system (CVS) model has already been validated in silico and in several animal model studies. It accounts for valve dynamics using Heaviside functions to simulate a physiologically accurate "open on pressure, close on flow" law. However, it does not consider real-time valve opening dynamics and therefore does not fully capture valve dysfunction, particularly where the dysfunction involves partial closure. This research describes an updated version of this closed-loop CVS model that includes the progressive opening of the mitral valve and is defined over the full cardiac cycle. Results Simulations of the cardiovascular system with a healthy mitral valve are performed, and the global hemodynamic behaviour is studied and compared with previously validated results. The error between the resulting pressure-volume (PV) loops of the already-validated CVS model and the new model that includes progressive mitral valve opening is assessed and remains within typical measurement error and variability. Simulations of ischemic mitral insufficiency are also performed. Pressure-volume loops, transmitral flow evolution, and mitral valve aperture area evolution follow reported measurements in shape, amplitude and trends. Conclusions The resulting cardiovascular system model including mitral valve dynamics provides a foundation for clinical validation and the study of valvular dysfunction in vivo. The overall models and results could readily be generalised to other cardiac valves. PMID:21942971
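The Heaviside "open on pressure, close on flow" valve law, and the progressive-opening extension the abstract describes, can be sketched as a toy resistive flow rule. The names and the single-resistance form are assumptions for illustration, not the paper's full model.

```python
def mitral_flow(P_la, P_lv, R, frac_open=1.0):
    """Toy transmitral flow from left atrium (P_la) to left ventricle (P_lv)
    through a linear resistance R. frac_open in [0, 1] stands in for the
    progressive opening of the leaflets; frac_open = 1 recovers the ideal
    Heaviside 'diode' law: flow = H(P_la - P_lv) * (P_la - P_lv) / R."""
    dP = P_la - P_lv
    return frac_open * dP / R if dP > 0.0 else 0.0
```

Letting frac_open evolve over the cardiac cycle (rather than jumping between 0 and 1) is what allows a model of this kind to represent partial closure, as in ischemic mitral insufficiency.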
Hovering Dual-Spin Vehicle Groundwork for Bias Momentum Sizing Validation Experiment
NASA Technical Reports Server (NTRS)
Rothhaar, Paul M.; Moerder, Daniel D.; Lim, Kyong B.
2008-01-01
Angular bias momentum offers significant stability augmentation for hovering flight vehicles. With significant levels of stored angular momentum in the system, the vehicle's reliance on thrust vectoring for agility and disturbance rejection is greatly reduced. A methodical procedure for bias momentum sizing was developed in previous studies. The current study provides groundwork for experimental validation of that method using an experimental vehicle called the Dual-Spin Test Device, a thrust-levitated platform. Using measured data, the vehicle's thrust vectoring units are modeled, and a gust environment is designed and characterized. Control design is discussed. Preliminary experimental results of the vehicle constrained to three rotational degrees of freedom are compared to simulation for a case containing no bias momentum, in order to validate the simulation. A simulation of a bias-momentum-dominant case is presented.
An Experimental and Numerical Study of a Supersonic Burner for CFD Model Development
NASA Technical Reports Server (NTRS)
Magnotti, G.; Cutler, A. D.
2008-01-01
A laboratory-scale supersonic burner has been developed for validation of computational fluid dynamics models. Detailed numerical simulations were performed for the flow inside the combustor and coupled with finite element thermal analysis to obtain more accurate outflow conditions. A database of nozzle exit profiles for a wide range of conditions of interest was generated, to be used as boundary conditions for simulation of the external jet or for validation of non-intrusive measurement techniques. A set of experiments was performed to validate the numerical results. In particular, temperature measurements obtained using an infrared camera show that the computed heat transfer was larger than the measured value. Relaminarization in the convergent part of the nozzle was found to be responsible for this discrepancy, and further numerical simulations supported this conclusion.
Estimating and validating harvesting system production through computer simulation
John E. Baumgras; Curt C. Hassler; Chris B. LeDoux
1993-01-01
A Ground Based Harvesting System Simulation model (GB-SIM) has been developed to estimate stump-to-truck production rates and multiproduct yields for conventional ground-based timber harvesting systems in Appalachian hardwood stands. Simulation results reflect inputs that define harvest site and timber stand attributes, wood utilization options, and key attributes of...
GADEN: A 3D Gas Dispersion Simulator for Mobile Robot Olfaction in Realistic Environments.
Monroy, Javier; Hernandez-Bennets, Victor; Fan, Han; Lilienthal, Achim; Gonzalez-Jimenez, Javier
2017-06-23
This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotics systems and gas sensing algorithms under realistic environments. The framework is rooted in the principles of computational fluid dynamics and filament dispersion theory, modeling wind flow and gas dispersion in 3D real-world scenarios (i.e., accounting for walls, furniture, etc.). Moreover, it integrates the simulation of different environmental sensors, such as metal oxide gas sensors, photo ionization detectors, or anemometers. We illustrate the potential and applicability of the proposed tool by presenting a simulation case in a complex and realistic office-like environment where gas leaks of different chemicals occur simultaneously. Furthermore, we accomplish quantitative and qualitative validation by comparing our simulated results against real-world data recorded inside a wind tunnel where methane was released under different wind flow profiles. Based on these results, we conclude that our simulation framework can provide a good approximation to real world measurements when advective airflows are present in the environment. PMID:28644375
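The filament dispersion principle behind GADEN can be sketched in a few lines: a gas release is represented as a set of filaments that are advected by the wind field while their Gaussian footprint widens over time, and the concentration at a point is the sum of the filament contributions. The constants and function names below are illustrative assumptions, not taken from the GADEN source code.

```python
import numpy as np

def concentration(point, filaments, t):
    """Gas concentration at `point` from a list of filaments.
    Each filament: (release_time, center, q) with q = amount of gas."""
    c = 0.0
    gamma = 0.01  # growth rate of the filament spread (assumed, m^2/s)
    r0 = 0.1      # initial filament radius (assumed, m)
    for t0, center, q in filaments:
        age = t - t0
        if age <= 0:
            continue
        sigma = np.sqrt(r0**2 + gamma * age)          # filament spread grows with age
        d2 = np.sum((np.asarray(point) - center) ** 2)
        c += q / ((2 * np.pi * sigma**2) ** 1.5) * np.exp(-d2 / (2 * sigma**2))
    return c

def advect(filaments, wind, dt):
    """Move each filament center along the local wind vector."""
    return [(t0, center + np.asarray(wind(center)) * dt, q)
            for t0, center, q in filaments]

# Constant wind blowing along +x; one filament released at the origin
wind = lambda p: (1.0, 0.0, 0.0)
fils = [(0.0, np.array([0.0, 0.0, 0.0]), 1e-3)]
for _ in range(10):
    fils = advect(fils, wind, 0.1)
print(round(fils[0][1][0], 2))  # filament drifted 1 m downwind
```

A realistic simulator replaces the constant wind with a CFD-computed flow field and adds many filaments per second of release.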
Broadband Ground Motion Simulation Recipe for Scenario Hazard Assessment in Japan
NASA Astrophysics Data System (ADS)
Koketsu, K.; Fujiwara, H.; Irikura, K.
2014-12-01
The National Seismic Hazard Maps for Japan, which consist of probabilistic seismic hazard maps (PSHMs) and scenario earthquake shaking maps (SESMs), have been published every year since 2005 by the Earthquake Research Committee (ERC) in the Headquarters for Earthquake Research Promotion, which was established in the Japanese government after the 1995 Kobe earthquake. The publication was interrupted due to problems in the PSHMs revealed by the 2011 Tohoku earthquake, and the Subcommittee for Evaluations of Strong Ground Motions ('Subcommittee') has been examining the problems for two and a half years (ERC, 2013; Fujiwara, 2014). However, the SESMs and the broadband ground motion simulation recipe used in them are still valid, at least for crustal earthquakes. Here, we outline this recipe and show the results of validation tests for it. Irikura and Miyake (2001) and Irikura (2004) developed a recipe for simulating strong ground motions from future crustal earthquakes based on a characterization of their source models (the Irikura recipe). The result of the characterization is called a characterized source model, in which a rectangular fault includes a few rectangular asperities. Each asperity and the background area surrounding the asperities have their own uniform stress drops. The Irikura recipe defines the parameters of the fault and asperities, and how to simulate broadband ground motions from the characterized source model. The recipe for the SESMs was constructed following the Irikura recipe (ERC, 2005). The National Research Institute for Earth Science and Disaster Prevention (NIED) then developed simulation codes following this recipe to generate SESMs (Fujiwara et al., 2006; Morikawa et al., 2011). The Subcommittee in 2002 validated a preliminary version of the SESM recipe by comparing simulated and observed ground motions for the 2000 Tottori earthquake.
In 2007 and 2008, the Subcommittee carried out detailed validations of the current version of the SESM recipe and the NIED codes using ground motions from the 2005 Fukuoka earthquake. Irikura and Miyake (2011) summarized the latter validations, concluding that the ground motions were successfully simulated as shown in the figure. This indicates that the recipe has enough potential to generate broadband ground motions for scenario hazard assessment in Japan.
Monte Carlo simulation of secondary neutron dose for scanning proton therapy using FLUKA
Lee, Chaeyeong; Lee, Sangmin; Lee, Seung-Jae; Song, Hankyeol; Kim, Dae-Hyun; Cho, Sungkoo; Jo, Kwanghyun; Han, Youngyih; Chung, Yong Hyun
2017-01-01
Proton therapy is a rapidly progressing field for cancer treatment. Globally, many proton therapy facilities are being commissioned or are under construction. Secondary neutrons are an important issue during the commissioning process of a proton therapy facility. The purpose of this study is to model and validate the scanning nozzles of the proton therapy system at Samsung Medical Center (SMC) by Monte Carlo simulation for beam commissioning. After the commissioning, the secondary neutron ambient dose from a proton scanning nozzle (Gantry 1) was simulated and measured. The simulation was performed to evaluate beam properties such as the percent depth dose curve, Bragg peak, and distal fall-off, so that they could be verified against measured data. Using the validated beam nozzle model, the secondary neutron ambient dose was simulated and then compared with the ambient dose measured at Gantry 1. We calculated the secondary neutron dose at several different points and demonstrated the validity of modeling a proton scanning nozzle system to evaluate various parameters using FLUKA. The measured secondary neutron ambient dose showed a tendency similar to the simulation result. This work will add to the knowledge necessary for the development of radiation safety technology in medical particle accelerators. PMID:29045491
Mechanical impact of dynamic phenomena in Francis turbines at off design conditions
NASA Astrophysics Data System (ADS)
Duparchy, F.; Brammer, J.; Thibaud, M.; Favrel, A.; Lowys, P. Y.; Avellan, F.
2017-04-01
At partial load and overload conditions, Francis turbines are subjected to hydraulic instabilities that can result in severe dynamic loading of the turbine components and significantly reduce their lifetime. This study presents both experimental data and numerical simulations, used as complementary approaches to study this dynamic loading. Measurements performed on a reduced-scale physical model, including a special runner instrumented with on-board strain gauges and pressure sensors, were used to investigate the dynamic phenomena experienced by the runner. They were also taken as the reference to validate the numerical simulation results. After validation, the numerical simulations were used to highlight the mechanical response of the structure to the unsteady hydraulic phenomena, as well as their impact on the fatigue damage of the runner.
Wang, Wei; Lu, Hui; Yang, Dawen; Sothea, Khem; Jiao, Yang; Gao, Bin; Peng, Xueting; Pang, Zhiguo
2016-01-01
The Mekong River is the most important river in Southeast Asia. It has increasingly suffered from water-related problems due to economic development, population growth and climate change in the surrounding areas. In this study, we built a distributed Geomorphology-Based Hydrological Model (GBHM) of the Mekong River using remote sensing data and other publicly available data. Two numerical experiments were conducted using different rainfall data sets as model inputs. The data sets included rain gauge data from the Mekong River Commission (MRC) and remote sensing rainfall data from the Tropic Rainfall Measurement Mission (TRMM 3B42V7). Model calibration and validation were conducted for the two rainfall data sets. Compared to the observed discharge, both the gauge simulation and TRMM simulation performed well during the calibration period (1998–2001). However, the performance of the gauge simulation was worse than that of the TRMM simulation during the validation period (2002–2012). The TRMM simulation is more stable and reliable at different scales. Moreover, the calibration period was changed to 2, 4, and 8 years to test the impact of the calibration period length on the two simulations. The results suggest that longer calibration periods improved the GBHM performance during validation periods. In addition, the TRMM simulation is more stable and less sensitive to the calibration period length than is the gauge simulation. Further analysis reveals that the uneven distribution of rain gauges makes the input rainfall data less representative and more heterogeneous, worsening the simulation performance. Our results indicate that remotely sensed rainfall data may be more suitable for driving distributed hydrologic models, especially in basins with poor data quality or limited gauge availability. PMID:27010692
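Hydrologic calibration and validation studies like the one above compare simulated against observed discharge; a common skill score for such comparisons is the Nash-Sutcliffe efficiency (NSE). The abstract does not name the metric used, so the following is a generic, hedged sketch with synthetic discharge data.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect fit, 0 = no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic discharge series (m^3/s), for illustration only
obs = np.array([100., 180., 260., 210., 150., 120.])
good = obs + np.array([5., -8., 10., -6., 4., -3.])   # close simulation
poor = np.full_like(obs, obs.mean())                  # mean-only "model"

print(round(nse(obs, good), 3))  # 0.986
print(round(nse(obs, poor), 3))  # 0.0
```

An NSE computed over a validation period that was not used for calibration, as done in the study above, guards against overfitting to the calibration data.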
Validation of the ArthroS virtual reality simulator for arthroscopic skills.
Stunt, J J; Kerkhoffs, G M M J; van Dijk, C N; Tuijthof, G J M
2015-11-01
Virtual reality simulator training has become important for acquiring arthroscopic skills. A new simulator for knee arthroscopy, ArthroS™, has been developed. The purpose of this study was to demonstrate face and construct validity, executed according to a protocol used previously to validate arthroscopic simulators. Twenty-seven participants were divided into three groups with different levels of arthroscopic experience. Participants answered questions regarding general information and the outer appearance of the simulator for face validity. Construct validity was assessed with one standardized navigation task. Face validity, educational value and user friendliness were further determined by giving participants three exercises and asking them to fill out a questionnaire. Construct validity was demonstrated between experts and beginners. Median task times were not significantly different across repetitions between novices and intermediates, or between intermediates and experts. Median face validity was 8.3 for the outer appearance, 6.5 for the intra-articular joint and 4.7 for the surgical instruments. Educational value and user friendliness were perceived as unsatisfactory, especially because of the lack of tactile feedback. The ArthroS™ demonstrated construct validity between novices and experts, but did not demonstrate full face validity. Future improvements should focus mainly on the development of tactile feedback. A newly presented simulator must be validated to prove that it actually contributes to proficiency of skills.
Longitudinal train dynamics model for a rail transit simulation system
Wang, Jinghui; Rakha, Hesham A.
2018-01-01
The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and is also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model adequately captures instantaneous train dynamics and performs well in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.
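The constrained calibration step described above can be sketched as follows: fit a simple acceleration-versus-speed model to observed pairs by minimizing squared error under non-negativity constraints, here via projected gradient descent. The model form, constraints, and synthetic data are illustrative assumptions, not the paper's actual formulation or the Portland fleet data.

```python
import numpy as np

# Fit a(vn) = q0 - q1*vn - q2*vn**2 (vn = speed normalized to [0, 1]) to
# noisy observations, constraining all coefficients to be non-negative.
rng = np.random.default_rng(0)
v = np.linspace(0.0, 20.0, 50)            # speed (m/s)
vn = v / 20.0                             # normalize for better conditioning
a_obs = 1.2 - 0.6*vn - 0.4*vn**2 + rng.normal(0, 0.01, v.size)

X = np.column_stack([np.ones_like(vn), -vn, -vn**2])  # design matrix

def sse(q):
    """Sum-of-squared-errors objective of the calibration."""
    return float(np.sum((a_obs - X @ q) ** 2))

q = np.zeros(3)
lr = 0.005
for _ in range(30_000):
    grad = -2.0 * X.T @ (a_obs - X @ q)    # gradient of the SSE objective
    q = np.clip(q - lr * grad, 0.0, None)  # project onto the constraint q >= 0

print(np.round(q, 2))  # close to the generating coefficients [1.2, 0.6, 0.4]
```

In practice a solver such as `scipy.optimize.minimize` with bounds would replace the hand-rolled loop; the projection step is shown only to make the constrained formulation explicit.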
Assessing the driving performance of older adult drivers: on-road versus simulated driving.
Lee, Hoe C; Cameron, Don; Lee, Andy H
2003-09-01
To validate a laboratory-based driving simulator in measuring on-road driving performance, 129 older adult drivers were assessed with both the simulator and an on-road test. The driving performance of the participants was gauged by appropriate and reliable age-specific assessment criteria, which were found to be negatively correlated with age. Using principal component analysis, two performance indices were developed from the criteria to represent the overall performance in simulated driving and the on-road assessment. There was significant positive association between the two indices, with the simulated driving performance index explaining over two-thirds of the variability of the on-road driving performance index, after adjustment for age and gender of the drivers. The results supported the validity of the driving simulator and it is a safer and more economical method than the on-road testing to assess the driving performance of older adult drivers.
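The composite-index construction described above, a single performance score extracted from several assessment criteria via principal component analysis, can be sketched as follows. The synthetic scores and the latent-skill setup are illustrative assumptions; the study's actual criteria and loadings are not reproduced.

```python
import numpy as np

# Synthetic data: four criteria, each a noisy reading of one latent skill
rng = np.random.default_rng(1)
skill = rng.normal(0, 1, 40)                       # latent driving skill
criteria = np.column_stack(
    [skill + rng.normal(0, 0.3, 40) for _ in range(4)])

z = (criteria - criteria.mean(0)) / criteria.std(0)  # standardize columns
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)               # ascending eigenvalues
pc1 = eigvecs[:, -1]                                 # loadings of first PC
index = z @ pc1                                      # composite index per driver

var_explained = eigvals[-1] / eigvals.sum()
print(round(var_explained, 2))  # first component dominates
```

When the criteria all reflect a common underlying ability, the first principal component captures most of the variance, which is what makes a single index a defensible summary.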
Upgrades for the CMS simulation
Lange, D. J.; Hildreth, M.; Ivantchenko, V. N.; ...
2015-05-22
Over the past several years, the CMS experiment has made significant changes to its detector simulation application. The geometry has been generalized to include modifications made to the CMS detector for 2015 operations, as well as improvements to the simulation geometry of the current CMS detector and the implementation of a number of approved and possible future detector configurations, including completely new tracker and calorimetry systems. We have completed the transition to Geant4 version 10, and we have made significant progress in reducing the CPU resources required to run our Geant4 simulation. These gains have been achieved through both technical improvements and numerical techniques. Substantial speed improvements were achieved without changing the physics validation benchmarks that the experiment uses to validate the simulation application for use in production. We discuss the methods we implemented and the corresponding performance improvements deployed for our 2015 simulation application.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally, while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation rates on the order of 10^7 x-rays per second. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g., at a small number of detector pixels in a scatter simulation) followed by interpolation.
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. 
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and "sparse sampling" will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high-performance computing environments.
Explain variance reduction, denoising and sparse sampling techniques available for reduction of the computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
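The variance-reduction idea described in the session summary, altering the sampling distribution without biasing the estimate, can be illustrated with a toy importance-sampling example; the problem and constants below are illustrative and unrelated to x-ray transport.

```python
import numpy as np

# Toy problem: estimate the rare-event probability P(X > 4) for X ~ N(0, 1)
# (true value is about 3.17e-5).
rng = np.random.default_rng(0)
n = 100_000

# Naive Monte Carlo: almost no samples land beyond 4, so the estimate is noisy.
x = rng.normal(0, 1, n)
naive = np.mean(x > 4)

# Importance sampling: draw from N(4, 1), which concentrates samples in the
# region of interest, then compensate with the density-ratio weight so the
# estimator remains unbiased.
y = rng.normal(4, 1, n)
w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - 4) ** 2)  # N(0,1)/N(4,1) ratio
is_est = np.mean((y > 4) * w)

print(f"naive={naive:.2e}  importance={is_est:.2e}")
```

The importance-sampling estimate achieves sub-percent relative error with the same sample budget on which the naive estimator sees only a handful of hits, which is exactly the trade the variance-reduced x-ray tracking schemes exploit.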
Domain of validity of the perturbative approach to femtosecond optical spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelin, Maxim F.; Rao, B. Jayachander; Nest, Mathias
2013-12-14
We have performed numerical nonperturbative simulations of transient absorption pump-probe responses for a series of molecular model systems. The resulting signals as a function of the laser field strength and the pump-probe delay time are compared with those obtained in the perturbative response function formalism. The simulations and their theoretical analysis indicate that the perturbative description remains valid up to moderately strong laser pulses, corresponding to a rather substantial depopulation (population) of the initial (final) electronic states.
Quadruplex digital flight control system assessment
NASA Technical Reports Server (NTRS)
Mulcare, D. B.; Downing, L. E.; Smith, M. K.
1988-01-01
Described are the development and validation of a double fail-operational digital flight control system architecture for critical pitch axis functions. Architectural tradeoffs are assessed, system simulator modifications are described, and demonstration testing results are critiqued. Assessment tools and their application are also illustrated. Ultimately, the vital role of system simulation, tailored to digital mechanization attributes, is shown to be essential to validating the airworthiness of full-time critical functions such as augmented fly-by-wire systems for relaxed static stability airplanes.
The Validity of Quasi-Steady-State Approximations in Discrete Stochastic Simulations
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R.
2014-01-01
In biochemical networks, reactions often occur on disparate timescales and can be characterized as either fast or slow. The quasi-steady-state approximation (QSSA) utilizes timescale separation to project models of biochemical networks onto lower-dimensional slow manifolds. As a result, fast elementary reactions are not modeled explicitly, and their effect is captured by nonelementary reaction-rate functions (e.g., Hill functions). The accuracy of the QSSA applied to deterministic systems depends on how well timescales are separated. Recently, it has been proposed to use the nonelementary rate functions obtained via the deterministic QSSA to define propensity functions in stochastic simulations of biochemical networks. In this approach, termed the stochastic QSSA, fast reactions that are part of nonelementary reactions are not simulated, greatly reducing computation time. However, it is unclear when the stochastic QSSA provides an accurate approximation of the original stochastic simulation. We show that, unlike the deterministic QSSA, the validity of the stochastic QSSA does not follow from timescale separation alone, but also depends on the sensitivity of the nonelementary reaction rate functions to changes in the slow species. The stochastic QSSA becomes more accurate when this sensitivity is small. Different types of QSSAs result in nonelementary functions with different sensitivities, and the total QSSA results in less sensitive functions than the standard or the prefactor QSSA. We prove that, as a result, the stochastic QSSA becomes more accurate when nonelementary reaction functions are obtained using the total QSSA. Our work provides a novel condition for the validity of the QSSA in stochastic simulations of biochemical reaction networks with disparate timescales. PMID:25099817
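The stochastic QSSA described above amounts to running a standard Gillespie simulation in which a fast elementary step has been folded into a nonelementary (here, Hill-function) propensity. A minimal sketch, with illustrative rates and Hill form not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def ssa_hill(tmax, x0=0, vmax=10.0, K=5.0, n=2, gamma=1.0):
    """Gillespie SSA for birth-death of species x, where production uses the
    QSSA-reduced Hill propensity vmax*K^n/(K^n + x^n) and death is gamma*x."""
    t, x = 0.0, x0
    while t < tmax:
        a1 = vmax * K**n / (K**n + x**n)   # nonelementary production propensity
        a2 = gamma * x                     # elementary degradation propensity
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)     # time to next reaction
        if rng.random() < a1 / a0:         # choose which reaction fires
            x += 1
        else:
            x -= 1
    return x

samples = [ssa_hill(50.0) for _ in range(200)]
print(round(float(np.mean(samples)), 1))  # fluctuates near the deterministic
                                          # fixed point of vmax*K^2/(K^2+x^2) = gamma*x
```

With these illustrative rates the deterministic fixed point is x = 5, and the stationary mean of the reduced stochastic model stays close to it; the paper's point is that this agreement with the full (unreduced) stochastic model holds only when the Hill propensity is insensitive to the slow species.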
Link performance model for filter bank based multicarrier systems
NASA Astrophysics Data System (ADS)
Petrov, Dmitry; Oborina, Alexandra; Giupponi, Lorenza; Stitz, Tobias Hidalgo
2014-12-01
This paper presents a complete link-level abstraction model for link quality estimation at the system level of filter bank multicarrier (FBMC)-based networks. The application of the mean mutual information per coded bit (MMIB) approach is validated for FBMC systems. The considered quality measure of a resource element for FBMC transmission is the received signal-to-noise-plus-distortion ratio (SNDR). Simulation results for the proposed link abstraction model show that the approach is capable of estimating the block error rate (BLER) accurately, even when the signal propagates through channels with deep and frequent fades, as is the case for the 3GPP Hilly Terrain (3GPP-HT) and Enhanced Typical Urban (ETU) models. The FBMC-related results of link-level simulations are compared with their cyclic prefix orthogonal frequency division multiplexing (CP-OFDM) analogs. Simulation results are also validated through comparison with publicly available reference results. Finally, the steps of the link-level abstraction algorithm for FBMC are formulated, and its application to system-level simulation of a professional mobile radio (PMR) network is discussed.
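The link-abstraction flow described above maps each resource element's SNDR to a mutual information value, averages over the allocation, and inverts the average back to a single effective SNR used for one BLER lookup. In the sketch below, Shannon capacity is used as a stand-in for the modulation-specific MMIB curves fitted in the paper, so the numbers are illustrative only.

```python
import numpy as np

def mi(snr_lin):
    """Stand-in mutual-information curve (bits/symbol); MMIB would use
    modulation-specific fitted curves here."""
    return np.log2(1.0 + snr_lin)

def mi_inverse(i):
    return 2.0**i - 1.0

def effective_snr(sndr_db):
    """Compress per-resource-element SNDRs into one effective SNR (dB)."""
    sndr = 10 ** (np.asarray(sndr_db) / 10)
    mean_i = np.mean(mi(sndr))             # mean mutual information
    return 10 * np.log10(mi_inverse(mean_i))

# A frequency-selective channel: deep fades on some subcarriers
sndr_db = np.array([12.0, 10.0, -5.0, 8.0, 11.0, -2.0])
print(round(effective_snr(sndr_db), 2))  # single effective SNR for BLER lookup
```

Because the MI curve is concave, the effective SNR sits below the linear-average SNR, which is what lets the abstraction capture the BLER penalty of deep fades that a plain SNR average would hide.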
Design and control of compliant tensegrity robots through simulation and hardware validation
Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas
2014-01-01
To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity (‘tensile–integrity’) structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. PMID:24990292
Challenges of NDE simulation tool validation, optimization, and utilization for composites
NASA Astrophysics Data System (ADS)
Leckey, Cara A. C.; Seebo, Jeffrey P.; Juarez, Peter
2016-02-01
Rapid, realistic nondestructive evaluation (NDE) simulation tools can aid in inspection optimization and prediction of inspectability for advanced aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of aerospace components; potentially shortening the time from material development to implementation by industry and government. Furthermore, ultrasound modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided wave based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation is still far from the goal of rapidly simulating damage detection techniques for large scale, complex geometry composite components/vehicles containing realistic damage types. Ongoing work at NASA Langley Research Center is focused on advanced ultrasonic simulation tool development. This paper discusses challenges of simulation tool validation, optimization, and utilization for composites. Ongoing simulation tool development work is described along with examples of simulation validation and optimization challenges that are more broadly applicable to all NDE simulation tools. The paper will also discuss examples of simulation tool utilization at NASA to develop new damage characterization methods for composites, and associated challenges in experimentally validating those methods.
Fuermaier, Anselm B M; Tucha, Oliver; Koerts, Janneke; Lange, Klaus W; Weisbrod, Matthias; Aschenbrenner, Steffen; Tucha, Lara
2017-12-01
The assessment of performance validity is an essential part of the neuropsychological evaluation of adults with attention-deficit/hyperactivity disorder (ADHD). Most available tools, however, are inaccurate regarding the identification of noncredible performance. This study describes the development of a visuospatial working memory test, including a validity indicator for noncredible cognitive performance of adults with ADHD. Visuospatial working memory of adults with ADHD (n = 48) was first compared to the test performance of healthy individuals (n = 48). Furthermore, a simulation design was performed including 252 individuals who were randomly assigned to either a control group (n = 48) or to 1 of 3 simulation groups who were requested to feign ADHD (n = 204). Additional samples of 27 adults with ADHD and 69 instructed simulators were included to cross-validate findings from the first samples. Adults with ADHD showed impaired visuospatial working memory performance of medium size as compared to healthy individuals. Simulation groups committed significantly more errors and had shorter response times as compared to patients with ADHD. Moreover, binary logistic regression analysis was carried out to derive a validity index that optimally differentiates between true and feigned ADHD. ROC analysis demonstrated high classification rates of the validity index, as shown in excellent specificity (95.8%) and adequate sensitivity (60.3%). The visuospatial working memory test as presented in this study therefore appears sensitive in indicating cognitive impairment of adults with ADHD. Furthermore, the embedded validity index revealed promising results concerning the detection of noncredible cognitive performance of adults with ADHD. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Bham, Ghulam H; Leu, Ming C; Vallati, Manoj; Mathur, Durga R
2014-06-01
This study is aimed at validating a driving simulator (DS) for the study of driver behavior in work zones. A validation study requires field data collection. For studies conducted in highway work zones, the availability of safe vantage points for data collection at critical locations can be a significant challenge. A validation framework is therefore proposed in this paper, demonstrated using a fixed-based DS that addresses the issue by using a global positioning system (GPS). The validation of the DS was conducted using objective and subjective evaluations. The objective validation was divided into qualitative and quantitative evaluations. The DS was validated by comparing the results of simulation with the field data, which were collected using a GPS along the highway and video recordings at specific locations in a work zone. The constructed work zone scenario in the DS was subjectively evaluated with 46 participants. The objective evaluation established the absolute and relative validity of the DS. The mean speeds from the DS data showed excellent agreement with the field data. The subjective evaluation indicated realistic driving experience by the participants. The use of GPS showed that continuous data collected along the highway can overcome the challenges of unavailability of safe vantage points especially at critical locations. Further, a validated DS can be used for examining driver behavior in complex situations by replicating realistic scenarios. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ivancic, B.; Riedmann, H.; Frey, M.; Knab, O.; Karl, S.; Hannemann, K.
2016-07-01
The paper summarizes technical results and first highlights of the cooperation between DLR and Airbus Defence and Space (DS) within the work package "CFD Modeling of Combustion Chamber Processes" conducted in the frame of the Propulsion 2020 Project. Within the addressed work package, DLR Göttingen and Airbus DS Ottobrunn have identified several test cases where adequate test data are available and which can be used for proper validation of the computational fluid dynamics (CFD) tools. In this paper, the first test case, the Penn State chamber (RCM1), is discussed. Presenting the simulation results from three different tools, it is shown that the test case can be computed properly with steady-state Reynolds-averaged Navier-Stokes (RANS) approaches. The achieved simulation results reproduce the measured wall heat flux as an important validation parameter very well but also reveal some inconsistencies in the test data which are addressed in this paper.
Sessa, Luca; Perrenot, Cyril; Xu, Song; Hubert, Jacques; Bresler, Laurent; Brunaud, Laurent; Perez, Manuela
2018-03-01
In robotic surgery, coordination between the console-side surgeon and the bed-side assistant is crucial, more so than in standard surgery or laparoscopy, where the surgical team works in close contact. Xperience™ Team Trainer (XTT) is a new optional component for the dv-Trainer® platform that simulates the patient-side working environment. We present preliminary results on face validity, content validity, and the workload imposed by the XTT virtual reality platform when used for psychomotor and communication skills training of the bed-side assistant in robot-assisted surgery. Participants were categorized as "Beginners" or "Experts". They tested a series of exercises (Pick & Place Laparoscopic Demo, Pick & Place 2 and Team Match Board 1) and completed face validity questionnaires. "Experts" assessed content validity on another questionnaire. All participants completed a NASA Task Load Index questionnaire to assess the workload imposed by XTT. Twenty-one consenting participants were included (12 "Beginners" and 9 "Experts"). XTT was shown to possess face and content validity, as evidenced by the rankings given for the simulator's ease of use and realism and for its usefulness for training. Eight of nine "Experts" judged the visualization of metrics after the exercises useful. However, face validity showed some weaknesses regarding interactions and instruments. Reasonable workload parameters were registered. XTT demonstrated excellent face and content validity with acceptable workload parameters. XTT could become a useful tool for robotic surgery team training.
TH-CD-207A-08: Simulated Real-Time Image Guidance for Lung SBRT Patients Using Scatter Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redler, G; Cifter, G; Templeton, A
2016-06-15
Purpose: To develop a comprehensive Monte Carlo-based model for the acquisition of scatter images of patient anatomy in real-time, during lung SBRT treatment. Methods: During SBRT treatment, images of patient anatomy can be acquired from scattered radiation. To rigorously examine the utility of scatter images for image guidance, a model is developed using the MCNP code to simulate scatter images of phantoms and lung cancer patients. The model is validated by comparing experimental and simulated images of phantoms of different complexity. The differentiation between tissue types is investigated by imaging objects of known compositions (water, lung, and bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is used to investigate image noise properties for various quantities of delivered radiation (monitor units (MU)). Patient scatter images are simulated using the validated simulation model. 4DCT patient data are converted to an MCNP input geometry accounting for different tissue compositions and densities. Lung tumor phantom images acquired with decreasing imaging time (decreasing MU) are used to model the expected noise amplitude in patient scatter images, producing realistic simulated patient scatter images with varying temporal resolution. Results: Image intensities in simulated and experimental scatter images of tissue equivalent objects (water, lung, bone) match within the uncertainty (∼3%). Lung tumor phantom images agree as well; specifically, tumor-to-lung contrast matches within the uncertainty. The addition of random noise approximating the quantum noise of experimental images to simulated patient images shows that scatter imaging of lung tumors can provide images in as little as 0.5 seconds with CNR∼2.7. Conclusions: A scatter imaging simulation model is developed and validated using experimental phantom scatter images. Following validation, lung cancer patient scatter images are simulated.
These simulated patient images demonstrate the clinical utility of scatter imaging for real-time tumor tracking during lung SBRT.
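The contrast-to-noise ratio (CNR) figure quoted above is typically computed from region-of-interest statistics. A minimal sketch, assuming the common definition (signal difference between tumor and background divided by background noise) and hypothetical pixel values:

```python
import statistics

def cnr(tumor_pixels, background_pixels):
    """Contrast-to-noise ratio: |mean(tumor) - mean(background)| divided
    by the background standard deviation (one common definition)."""
    contrast = statistics.mean(tumor_pixels) - statistics.mean(background_pixels)
    return abs(contrast) / statistics.stdev(background_pixels)

# Hypothetical region-of-interest intensities (arbitrary units)
tumor = [105, 110, 108, 112, 107]
lung = [60, 65, 58, 62, 64, 61]
value = cnr(tumor, lung)
```

Shorter imaging times admit fewer scattered photons, raising the background noise term and driving the CNR down, which is the trade-off the abstract quantifies.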
Current status of validation for robotic surgery simulators - a systematic review.
Abboudi, Hamid; Khan, Mohammed S; Aboumarzouk, Omar; Guru, Khurshid A; Challacombe, Ben; Dasgupta, Prokar; Ahmed, Kamran
2013-02-01
To analyse studies validating the effectiveness of robotic surgery simulators. The MEDLINE®, EMBASE® and PsycINFO® databases were systematically searched until September 2011. References from retrieved articles were reviewed to broaden the search. The simulator name, training tasks, participant level, training duration and evaluation scoring were extracted from each study. We also extracted data on feasibility, validity, cost-effectiveness, reliability and educational impact. We identified 19 studies investigating simulation options in robotic surgery. There are five different robotic surgery simulation platforms available on the market. In all, 11 studies sought opinion and compared performance between two different groups, 'expert' and 'novice'. Experts ranged in experience from 21 to 2200 robotic cases. The novice groups consisted of participants with no prior experience on a robotic platform and were often medical students or junior doctors. The Mimic dV-Trainer®, ProMIS®, SimSurgery Educational Platform® (SEP) and Intuitive systems have shown face, content and construct validity. The Robotic Surgical Simulator™ system has only been face and content validated. All of the simulators except SEP have shown educational impact. Feasibility and cost-effectiveness of simulation systems were not evaluated in any trial. Virtual reality simulators were shown to be effective training tools for junior trainees. Simulation training holds the greatest potential to be used as an adjunct to traditional training methods to equip the next generation of robotic surgeons with the skills required to operate safely. However, current simulation models have only been validated in small studies. There is no evidence to suggest that one type of simulator provides more effective training than any other. More research is needed to validate simulated environments further and to investigate the effectiveness of animal and cadaveric training in robotic surgery. © 2012 BJU International.
A simulation study of Large Area Crop Inventory Experiment (LACIE) technology
NASA Technical Reports Server (NTRS)
Ziegler, L. (Principal Investigator); Potter, J.
1979-01-01
The author has identified the following significant results. The LACIE performance predictor (LPP) was used to replicate LACIE phase 2 for a 15 year period, using accuracy assessment results for the phase 2 error components. Results indicated that the LPP simulated the LACIE phase 2 procedures reasonably well. For the 15 year simulation, only 7 of the 15 production estimates were within 10 percent of the true production. The simulations indicated that the acreage estimator, based on CAMS phase 2 procedures, has a negative bias. This bias was too large to support the 90/90 criterion with the CV observed and simulated for the phase 2 production estimator. Results of this simulation study validate the theory that the acreage variance estimator in LACIE was conservative.
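The 90/90 criterion referenced above (a production estimate within 10% of truth at least 90% of the time) can be checked empirically from replicated simulations. A minimal sketch, using hypothetical estimate values chosen so that, as in the reported run, only 7 of 15 estimates fall within 10%:

```python
def meets_90_90(estimates, truths, tol=0.10, prob=0.90):
    """Empirical check of a LACIE-style 90/90 criterion: at least
    `prob` of the estimates lie within `tol` relative error of truth."""
    within = sum(abs(e - t) / t <= tol for e, t in zip(estimates, truths))
    return within / len(estimates) >= prob

# Hypothetical normalized estimates: 7 exact hits, 8 misses by 20%
estimates = [1.0] * 7 + [1.2] * 8
truths = [1.0] * 15
print(meets_90_90(estimates, truths))  # → False
```

With only 7/15 ≈ 47% of estimates inside the tolerance, the criterion fails, matching the abstract's conclusion that the simulated CV could not support 90/90.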
MCNPX simulation of proton dose distribution in homogeneous and CT phantoms
NASA Astrophysics Data System (ADS)
Lee, C. C.; Lee, Y. J.; Tung, C. J.; Cheng, H. W.; Chao, T. C.
2014-02-01
A dose simulation system was constructed based on the MCNPX Monte Carlo package to simulate proton dose distributions in homogeneous and CT phantoms. Conversion from the Hounsfield units of a patient CT image set to the material information necessary for Monte Carlo simulation is based on Schneider's approach. In order to validate this simulation system, an inter-comparison of depth dose distributions obtained from the MCNPX, GEANT4 and FLUKA codes was performed for a 160 MeV monoenergetic proton beam incident normally on the surface of a homogeneous water phantom. For dose validation within the CT phantom, direct comparison with measurement is infeasible. Instead, this study indirectly compared the 50% ranges (R50%) along the central axis computed by our system with the NIST CSDA ranges for beams with 160 and 115 MeV energies. Comparison results within the homogeneous phantom show good agreement: differences in simulated R50% among the three codes are less than 1 mm. For results within the CT phantom, the MCNPX simulated water equivalent Req,50% values are compatible with the CSDA water equivalent ranges from the NIST database, with differences of 0.7 and 4.1 mm for the 160 and 115 MeV beams, respectively.
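The R50% metric used above (the depth on the distal side of the Bragg peak where dose falls to 50% of its maximum) is typically extracted from a sampled depth-dose curve by linear interpolation. A minimal sketch with a synthetic, hypothetical curve:

```python
def r50(depths, doses):
    """Depth at which dose drops to 50% of maximum on the distal side
    of the Bragg peak, by linear interpolation between samples."""
    d_max = max(doses)
    i_peak = doses.index(d_max)
    half = 0.5 * d_max
    for i in range(i_peak, len(doses) - 1):
        if doses[i] >= half > doses[i + 1]:
            # interpolate between the bracketing samples
            frac = (doses[i] - half) / (doses[i] - doses[i + 1])
            return depths[i] + frac * (depths[i + 1] - depths[i])
    raise ValueError("dose never falls below 50% of maximum")

# Synthetic depth-dose samples (cm, arbitrary dose units) around a Bragg peak
depths = [0, 5, 10, 14, 15, 16, 17]
doses = [30, 35, 50, 100, 80, 40, 5]
range_50 = r50(depths, doses)
```

Comparing this scalar between codes (or against CSDA tables) gives a single-number range check even when full dose distributions cannot be measured directly.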
Bayona, Sofía; Fernández-Arroyo, José Manuel; Martín, Isaac; Bayona, Pilar
2008-09-01
The aims of this study were to test the face, content, and construct validities of a virtual-reality haptic arthroscopy simulator and to validate four assessment hypotheses. The participants in our study were 94 arthroscopists attending an international conference on arthroscopy. The interviewed surgeons had been performing arthroscopies for a mean of 8.71 years (σ = 6.94 years). We explained the operation, functionality, instructions for use, and the exercises provided by the simulator. The participants performed a trial exercise and then an exercise in which performance was recorded. After using the simulator, the arthroscopists answered a questionnaire. The simulator was classified as one of the best training methods (over phantoms) and obtained a mark of 7.10 out of 10 as an evaluation tool. The simulator was considered more useful for inexperienced surgeons than for surgeons with experience (mean difference 1.88 out of 10, P value < 0.001). The participants valued the simulator at 8.24 as a tool for learning skills, its fidelity at 7.41, the quality of the platform at 7.54, and the content of the exercises at 7.09. It obtained a global score of 7.82. Of the subjects, 30.8% said they would practise with the simulator more than 6 h per week. Of the surgeons, 89.4% affirmed that they would recommend the simulator to their colleagues. The data gathered support the first three hypotheses, as well as face and content validities. Results show statistically significant differences between experts and novices, thus supporting the construct validity, but studies with a larger sample must be carried out to verify this. We propose concrete solutions and an equation to calculate economy of movement. Analogously, we analyze competence measurements and propose an equation to provide a single measurement that contains them all and that, according to the surgeons' criteria, is as reliable as the judgment of experts observing the performance of an apprentice.
NASA Technical Reports Server (NTRS)
Ringermacher, H. I.; Moerner, W. E.; Miller, J. G.
1974-01-01
A two-transducer correction formula valid for both solid and liquid specimens is presented. Using computer simulations of velocity measurements, the accuracy and range of validity of the results are discussed and compared with previous approximations.
Infantino, Angelo; Cicoria, Gianfranco; Lucconi, Giulia; Pancaldi, Davide; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano; Marengo, Mario
2016-12-01
In the planning of a new cyclotron facility, an accurate knowledge of the radiation field around the accelerator is fundamental for the design of shielding and the protection of workers, the general public and the environment. Monte Carlo simulations can be very useful in this process, and their use is constantly increasing. However, few data have been published so far regarding the proper validation of Monte Carlo simulations against experimental measurements, particularly in the energy range of biomedical cyclotrons. In this work a detailed model of an existing installation of a GE PETtrace 16.5 MeV cyclotron was developed using FLUKA. An extensive measurement campaign of the neutron ambient dose equivalent H*(10) in marked positions around the cyclotron was conducted using a neutron rem-counter probe and CR39 neutron detectors. Data from a previous measurement campaign performed by our group using TLDs were also re-evaluated. The FLUKA model was then validated by comparing the results of high-statistics simulations with the experimental data. In 10 out of 12 measurement locations, FLUKA simulations agreed within uncertainties with all three sets of experimental data; in the remaining 2 positions, the agreement was with two of the three measurement sets. Our work quantitatively validates our FLUKA simulation setup and confirms that the Monte Carlo technique can produce accurate results in the energy range of biomedical cyclotrons. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
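The "agreement within uncertainties" test used above can be stated compactly: simulation and measurement agree when their difference does not exceed the combined (quadrature) standard uncertainty. A minimal sketch with hypothetical dose-rate values:

```python
import math

def agrees(sim, sim_unc, meas, meas_unc, k=1.0):
    """True if |sim - meas| <= k times the combined standard uncertainty,
    with uncertainties combined in quadrature (a common convention)."""
    return abs(sim - meas) <= k * math.hypot(sim_unc, meas_unc)

# Hypothetical H*(10) values (uSv/h) at one measurement position
ok = agrees(sim=120.0, sim_unc=8.0, meas=111.0, meas_unc=10.0)
```

Applying this test per position and per detector type is what yields statements like "10 of 12 positions agree with all three data sets".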
Linton, Steven J; Flink, Ida K; Nilsson, Emma; Edlund, Sara
2017-05-01
Patient-centered, empathetic communication has been recommended as a means of improving the health care of patients suffering pain. However, training health care providers has been a problem, since programs may be time-consuming and difficult to learn. Validation, a form of empathetic response that communicates that what a patient experiences is accepted as true, has been suggested as an appropriate method for improving communication with patients suffering pain. We studied the immediate effects on communication of providing medical students with a two-session (45 minutes each) program in validation skills. A one-group, pretest versus posttest design was employed with 22 volunteer medical students. To control patient variables, actors simulated 1 of 2 patient scenarios (randomly provided at pretest and posttest). Video recordings were blindly evaluated. Self-ratings of validation and satisfaction were also employed. Observed validation responses increased significantly after training and corresponded to significant reductions in invalidating responses. Both the patient simulators and the medical students were significantly more satisfied after the training. We demonstrated that training in empathetic validation results in improved communication, thus extending previous findings to a medical setting with patients suffering pain. Our results suggest that it would be feasible to provide validation training for health care providers, and this warrants further investigation in controlled studies.
Jones, R T; Kazdin, A E; Haney, J I
1981-01-01
A multifaceted behavioral program designed to teach emergency fire escape procedures to children was evaluated in a multiple-baseline design. Five children were trained to respond correctly to nine home emergency fire situations under simulated conditions. The situations and responses focused upon in training were identified by a social validation procedure involving consultation with several safety agencies, including the direct input of firefighters. Training, carried out in simulated bedrooms at school, resulted in significant improvements in both overt behavior and self-report of fire safety skills. The gains were maintained at a post-check assessment 2 weeks after training had been terminated. The results are discussed in relation both to the importance of social validation of targets and outcomes and the implications for further research in assessing and developing emergency response skills. PMID:7298537
NASA Astrophysics Data System (ADS)
Silva, F.; Maechling, P. J.; Goulet, C. A.; Somerville, P.; Jordan, T. H.
2014-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving geoscientists, earthquake engineers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform (BBP) is open-source scientific software that can generate broadband (0-100Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms for a well-observed historical earthquake. Then, the BBP calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. 
Our latest release includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, and several new data products, such as map and distance-based goodness of fit plots. As the number and complexity of scenarios simulated using the Broadband Platform increases, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
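The goodness-of-fit measures mentioned above quantify how closely simulated seismograms track observations. A simplified sketch of one common style of measure, the mean natural-log residual (model bias) of response-spectral values across stations; the actual Broadband Platform metrics are more elaborate, and the station values below are hypothetical:

```python
import math

def log_residual_bias(observed, simulated):
    """Mean natural-log residual ln(obs/sim) over stations at one period.
    Zero means unbiased; positive means the model under-predicts.
    A simplified response-spectrum goodness-of-fit measure."""
    residuals = [math.log(o / s) for o, s in zip(observed, simulated)]
    return sum(residuals) / len(residuals)

# Hypothetical 1.0 s spectral accelerations (g) at four stations
obs = [0.21, 0.35, 0.18, 0.27]
sim = [0.19, 0.40, 0.20, 0.25]
bias = log_residual_bias(obs, sim)
```

Computing this bias period-by-period and plotting it against period (with its scatter) is the kind of product used to tune and validate the simulation methods in validation mode.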
Rossi, Michael R.; Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed
2009-01-01
The current study focuses on experimentally validating a planning scheme based on the so-called bubble-packing method. This study is part of an ongoing effort to develop computerized planning tools for cryosurgery, where bubble packing has previously been developed as a means of finding an initial, uniform distribution of cryoprobes within a given domain; the so-called force-field analogy was then used to move cryoprobes to their optimum layout. However, owing to the high quality of the cryoprobe distribution suggested by bubble packing and its low computational cost, it has been argued that a planning scheme based solely on bubble packing may be more clinically relevant. To test this argument, an experimental validation is performed on a simulated cross-section of the prostate, using gelatin solution as a phantom material, proprietary liquid-nitrogen based cryoprobes, and a cryoheater to simulate urethral warming. Experimental results are compared with numerically simulated temperature histories resulting from planning. Results indicate an average disagreement of 0.8 mm in identifying the freezing front location, which is an acceptable level of uncertainty in the context of prostate cryosurgery imaging. PMID:19885373
NASA Astrophysics Data System (ADS)
Zhang, Jie; Nixon, Andrew; Barber, Tom; Budyn, Nicolas; Bevan, Rhodri; Croxford, Anthony; Wilcox, Paul
2018-04-01
In this paper, a methodology for using a finite element (FE) model to validate a ray-based model in the simulation of full matrix capture (FMC) ultrasonic array data sets is proposed. The overall aim is to separate the signal contributions from different interactions in the FE results so that each individual component can be compared more easily with its counterpart in the ray-based model results. This is achieved by combining the results from multiple FE models of the system of interest that include progressively more geometrical features while preserving the same mesh structure. It is shown that the proposed techniques allow the interactions from a large number of different ray paths to be isolated in the FE results and compared directly to the results from a ray-based forward model.
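Because the mesh structure is preserved between models, the contribution of each added geometrical feature can be isolated by element-wise subtraction of the FMC datasets. A minimal sketch using plain lists as toy time traces (real FMC data would be a 3-D transmit × receive × time array):

```python
def isolate_contribution(fmc_with_feature, fmc_without_feature):
    """Subtract two FMC datasets (same mesh, same tx-rx pairs, same time
    base) so only the signal scattered by the added feature remains."""
    return [
        [a - b for a, b in zip(trace_w, trace_wo)]
        for trace_w, trace_wo in zip(fmc_with_feature, fmc_without_feature)
    ]

# Toy 2-trace FMC data: the second model adds a backwall reflection
base = [[0.0, 0.1, 0.0], [0.0, 0.0, 0.2]]
with_backwall = [[0.0, 0.1, 0.5], [0.0, 0.3, 0.2]]
backwall_only = isolate_contribution(with_backwall, base)
print(backwall_only)  # → [[0.0, 0.0, 0.5], [0.0, 0.3, 0.0]]
```

Repeating the subtraction for each pair of progressively more complete models yields one isolated signal per ray path, ready for direct comparison against the ray-based forward model.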
NASA Astrophysics Data System (ADS)
Cros, Maria; Joemai, Raoul M. S.; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal
2017-08-01
This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. 
The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.
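The look-up-table approach described above (pre-computed Monte Carlo dose coefficients indexed by scan position, scaled to the acquisition, with a correction factor for helical scans) can be sketched as follows. The table values, organ, and helical factor are hypothetical, not SimDoseCT's actual data or interface:

```python
from bisect import bisect_left

def organ_dose(lut, scan_pos_cm, mAs, helical_factor=1.0):
    """Interpolate a Monte Carlo look-up table of organ dose per mAs
    (sorted (position_cm, mGy_per_mAs) pairs) at the scan position,
    then scale by tube load and an optional helical correction."""
    positions = [p for p, _ in lut]
    i = bisect_left(positions, scan_pos_cm)
    if i == 0:
        d = lut[0][1]                      # clamp below table range
    elif i == len(lut):
        d = lut[-1][1]                     # clamp above table range
    else:
        (p0, d0), (p1, d1) = lut[i - 1], lut[i]
        d = d0 + (d1 - d0) * (scan_pos_cm - p0) / (p1 - p0)
    return d * mAs * helical_factor

# Hypothetical liver-dose table for one beam width and tube voltage
liver_lut = [(0.0, 0.002), (10.0, 0.010), (20.0, 0.004)]
dose_mGy = organ_dose(liver_lut, 15.0, mAs=200)
```

Summing such per-organ doses with ICRP tissue weighting factors would then give the effective dose the software reports.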
Cros, Maria; Joemai, Raoul M S; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal
2017-07-17
This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. 
The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.
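The look-up-table dose-reporting mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not SimDoseCT code: the organ names, dose values, linear interpolation between tabulated beam widths, and the helical correction factor are all invented to show how doses for non-tabulated collimations could be estimated.

```python
# Hypothetical sketch of a look-up-table organ-dose estimator: doses
# pre-computed by Monte Carlo for discrete beam widths are interpolated for
# other collimations, and a correction factor approximates helical scans.
# All names and numbers below are illustrative, not SimDoseCT values.
import bisect

# Pre-computed MC organ doses (mGy per 100 mAs), indexed by beam width (mm).
LUT = {
    "lungs":   {20.0: 9.1, 40.0: 9.8, 80.0: 10.6},
    "thyroid": {20.0: 1.2, 40.0: 1.9, 80.0: 3.4},
}
HELICAL_CORRECTION = 0.95  # illustrative factor for helical vs. volumetric

def organ_dose(organ, beam_width_mm, mAs, helical=False):
    """Linearly interpolate the LUT between tabulated beam widths."""
    table = LUT[organ]
    widths = sorted(table)
    if beam_width_mm <= widths[0]:
        d = table[widths[0]]
    elif beam_width_mm >= widths[-1]:
        d = table[widths[-1]]
    else:
        i = bisect.bisect_left(widths, beam_width_mm)
        w0, w1 = widths[i - 1], widths[i]
        f = (beam_width_mm - w0) / (w1 - w0)
        d = table[w0] + f * (table[w1] - table[w0])
    d *= mAs / 100.0
    if helical:
        d *= HELICAL_CORRECTION
    return d

print(round(organ_dose("lungs", 60.0, 200), 2))
```

A real implementation would also index the tables by tube voltage, bow tie filter, focal spot and scan position, as the abstract describes.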
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Toniolo, Matthew D.; Tartabini, Paul V.; Roithmayr, Carlos M.; Albertson, Cindy W.; Karlgaard, Christopher D.
2016-01-01
The objective of this report is to develop and implement a physics based method for analysis and simulation of multi-body dynamics including launch vehicle stage separation. The constraint force equation (CFE) methodology discussed in this report provides such a framework for modeling constraint forces and moments acting at joints when the vehicles are still connected. Several stand-alone test cases involving various types of joints were developed to validate the CFE methodology. The results were compared with ADAMS® and Autolev, two different industry standard benchmark codes for multi-body dynamic analysis and simulations. However, these two codes are not designed for aerospace flight trajectory simulations. After this validation exercise, the CFE algorithm was implemented in Program to Optimize Simulated Trajectories II (POST2) to provide a capability to simulate end-to-end trajectories of launch vehicles including stage separation. The POST2/CFE methodology was applied to the STS-1 Space Shuttle solid rocket booster (SRB) separation and Hyper-X Research Vehicle (HXRV) separation from the Pegasus booster as a further test and validation for its application to launch vehicle stage separation problems. Finally, to demonstrate end-to-end simulation capability, POST2/CFE was applied to the ascent, orbit insertion, and booster return of a reusable two-stage-to-orbit (TSTO) vehicle concept. With these validation exercises, POST2/CFE software can be used for performing conceptual level end-to-end simulations, including launch vehicle stage separation, for problems similar to those discussed in this report.
High-order continuum kinetic method for modeling plasma dynamics in phase space
Vogman, G. V.; Colella, P.; Shumlak, U.
2014-12-15
Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x, v_x, v_y) phase space is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r, z, v_r, v_z) phase space are presented.
Rogers, R; Sewell, K W; Morey, L C; Ustad, K L
1996-12-01
Psychological assessment with multiscale inventories is largely dependent on the honesty and forthrightness of those persons evaluated. We investigated the effectiveness of the Personality Assessment Inventory (PAI) in detecting participants feigning three specific disorders: schizophrenia, major depression, and generalized anxiety disorder. With a simulation design, we tested the PAI validity scales on 166 naive (undergraduates with minimal preparation) and 80 sophisticated (doctoral psychology students with 1 week preparation) participants. We compared their results to persons with the designated disorders: schizophrenia (n = 45), major depression (n = 136), and generalized anxiety disorder (n = 40). Although moderately effective with naive simulators, the validity scales evidenced only modest positive predictive power with their sophisticated counterparts. Therefore, we performed a two-stage discriminant analysis that yielded a moderately high hit rate (> 80%) that was maintained in the cross-validation sample, irrespective of the feigned disorder or the sophistication of the simulators.
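The two-stage logic of deriving a classification rule and then checking that its hit rate holds up in a separate cross-validation sample can be illustrated with a toy cutting-score classifier. This is a sketch on synthetic scores, not the PAI data or its actual discriminant functions:

```python
# Toy illustration of derive-then-cross-validate classification: pick the
# cutting score that best separates feigning from genuine protocols in a
# derivation sample, then measure the hit rate in a hold-out sample.
# All scores below are synthetic.
def best_cutoff(scores_feign, scores_genuine):
    """Pick the cutoff maximizing the overall hit rate on the derivation sample."""
    candidates = sorted(set(scores_feign) | set(scores_genuine))
    best = (None, -1.0)
    for c in candidates:
        hits = sum(s >= c for s in scores_feign) + sum(s < c for s in scores_genuine)
        rate = hits / (len(scores_feign) + len(scores_genuine))
        if rate > best[1]:
            best = (c, rate)
    return best

def hit_rate(cutoff, scores_feign, scores_genuine):
    hits = sum(s >= cutoff for s in scores_feign) + sum(s < cutoff for s in scores_genuine)
    return hits / (len(scores_feign) + len(scores_genuine))

# Synthetic validity-scale scores (higher = more exaggeration).
derive_feign, derive_genuine = [78, 85, 90, 95, 70], [55, 60, 62, 68, 72]
cross_feign, cross_genuine = [80, 88, 74, 92], [58, 65, 61, 70]

cut, rate = best_cutoff(derive_feign, derive_genuine)
print(cut, round(rate, 2), round(hit_rate(cut, cross_feign, cross_genuine), 2))
```

The key point mirrored here is that a rule tuned on one sample must retain its hit rate on an independent sample before it is trusted.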
WEC3: Wave Energy Converter Code Comparison Project: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien
This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases. Phase I consists of a code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency domain modelling tools were not included in the WEC3 project.
Salipur, Zdravko; Bertocci, Gina
2010-01-01
It has been shown that ANSI WC19 transit wheelchairs that are crashworthy in frontal impact exhibit catastrophic failures in rear impact and may not be able to provide stable seating support and thus occupant protection for the wheelchair occupant. Thus far only limited sled test and computer simulation data have been available to study rear impact wheelchair safety. Computer modeling can be used as an economic and comprehensive tool to gain critical knowledge regarding wheelchair integrity and occupant safety. This study describes the development and validation of a computer model simulating an adult wheelchair-seated occupant subjected to a rear impact event. The model was developed in MADYMO and validated rigorously using the results of three similar sled tests conducted to specifications provided in the draft ISO/TC 173 standard. Outcomes from the model can provide critical wheelchair loading information to wheelchair and tiedown manufacturers, resulting in safer wheelchair designs for rear impact conditions. (c) 2009 IPEM. Published by Elsevier Ltd. All rights reserved.
Influence of Contact Angle Boundary Condition on CFD Simulation of T-Junction
NASA Astrophysics Data System (ADS)
Arias, S.; Montlaur, A.
2018-03-01
In this work, we study the influence of the contact angle boundary condition on 3D CFD simulations of the bubble generation process occurring in a capillary T-junction. Numerical simulations have been performed with the commercial Computational Fluid Dynamics solver ANSYS Fluent v15.0.7. Experimental results serve as a reference to validate numerical results for four independent parameters: the bubble generation frequency, volume, velocity and length. CFD simulations accurately reproduce experimental results both from qualitative and quantitative points of view. Numerical results are very sensitive to the gas-liquid-wall contact angle boundary condition, confirming that this is a fundamental parameter for obtaining accurate CFD results in simulations of this kind of problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.
2014-11-01
Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions to result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built using the platform of MCNPX, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing range from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms including a rectangular homogeneous water equivalent phantom, an elliptical shaped phantom with three sections (where each section was a homogeneous, but different material), and a heterogeneous, complex geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom.
Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain dose estimates. This allowed direct comparisons between measured and simulated dose values under each condition of phantom, location, and scan to be made. Results: For FTC scans, the percent root mean square (RMS) difference between measurements and simulations was within 5% across all phantoms. For TCM scans, the percent RMS of the difference between measured and simulated values when using detailed TCM and z-axis-only TCM simulations was 4.5% and 13.2%, respectively. For the anthropomorphic phantom, the difference between TCM measurements and detailed TCM and z-axis-only TCM simulations was 1.2% and 8.9%, respectively. For FTC measurements and simulations, the percent RMS of the difference was 5.0%. Conclusions: This work demonstrated that the Monte Carlo model developed provided good agreement between measured and simulated values under both simple and complex geometries including an anthropomorphic phantom. This work also showed the increased dose differences for z-axis-only TCM simulations, where considerable modulation in the x–y plane was present due to the shape of the rectangular water phantom. Results from this investigation highlight details that need to be included in Monte Carlo simulations of TCM CT scans in order to yield accurate, clinically viable assessments of patient dosimetry.
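The percent root-mean-square (RMS) difference used above to compare measured and simulated doses can be computed in a few lines; the dose values here are invented for illustration:

```python
# Percent RMS difference between measured and simulated dose values at
# matched locations: RMS of the per-point percent differences, relative
# to the measurement. The numbers are illustrative, not study data.
import math

def percent_rms_difference(measured, simulated):
    """RMS of the per-point percent differences, relative to measurement."""
    diffs = [100.0 * (s - m) / m for m, s in zip(measured, simulated)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

measured  = [10.0, 12.5, 8.0, 15.0]   # e.g. mGy at four probe positions
simulated = [10.4, 12.0, 8.2, 14.6]

print(round(percent_rms_difference(measured, simulated), 2))
```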
Validation of Potential Models for Li2O in Classical Molecular Dynamics Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oda, Takuji; Oya, Yasuhisa; Tanaka, Satoru
2007-08-01
Four Buckingham-type pairwise potential models for Li2O were assessed by molecular statics and dynamics simulations. In the static simulation, all models afforded acceptable agreement with experimental values and ab initio calculation results for the crystalline properties. Moreover, the superionic phase transition was realized in the dynamics simulation. However, the Li diffusivity and the lattice expansion were not adequately reproduced at the same time by any model. When using these models in future radiation simulations, these features should be taken into account, in order to reduce the model dependency of the results.
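The Buckingham pairwise form referred to above is V(r) = A exp(-r/rho) - C/r^6. The sketch below evaluates it with placeholder parameters, not those of any of the four Li2O models assessed in the study:

```python
# Buckingham pair potential: short-range exponential repulsion plus an
# attractive dispersion term. Parameter values are generic placeholders.
import math

def buckingham(r, A, rho, C):
    """Buckingham pair potential (eV), with r in angstroms."""
    return A * math.exp(-r / rho) - C / r**6

# Illustrative parameters for a cation-anion pair (not a fitted Li2O model).
A, rho, C = 1000.0, 0.30, 20.0
energies = [buckingham(r, A, rho, C) for r in (1.8, 2.0, 2.5)]
print([round(e, 3) for e in energies])
```

In a molecular dynamics code, this pair term would be summed over all ion pairs alongside the long-range Coulomb interaction.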
Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath
2017-01-01
The purpose of this work is to provide validation methods for evaluating the hemodynamic assessment of Cerebral Arteriovenous Malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models for complex vessel structures. The validation of the hemodynamic assessment is based on invasive clinical measurements and cross-validation techniques with the validated Philips proprietary software packages Qflow and 2D Perfusion. The modeling results are validated for 30 CAVM patients across 150 vessel locations. Mean flow, diameter, and pressure were compared between the modeling results and the clinical/cross-validation measurements using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analyses of the relationships between vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow were performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations in cerebral regions, and the model was cross-validated with the Philips proprietary validated software packages Qflow and 2D Perfusion. Our results show that the modeling results closely match the clinical results, with only small deviations. In this article, we have validated our modeling results against clinical measurements, and a new approach for cross-validation is proposed by demonstrating the accuracy of our results against a validated product in a clinical environment.
NASA Astrophysics Data System (ADS)
Etxeberria, A.; Vechiu, I.; Baudoin, S.; Camblong, H.; Kreckelbergh, S.
2014-02-01
The increasing use of distributed generators, which are mainly based on renewable sources, can create several issues in the operation of the electric grid. The microgrid is being analysed as a solution for integrating renewable sources into the grid at a high penetration level in a controlled way. Storage systems play a vital role in keeping the energy and power balance of the microgrid. Due to the technical limitations of currently available storage systems, it is necessary to use more than one storage technology to satisfy the requirements of the microgrid application. This work validates, in simulations and experimentally, the use of a Three-Level Neutral Point Clamped converter to control the power flow of a hybrid storage system formed by a SuperCapacitor and a Vanadium Redox Battery. The operation of the system is validated in two case studies on the experimental platform installed at ESTIA. The experimental results prove the validity of the proposed system as well as the designed control algorithm. The good agreement between experimental and simulation results also validates the simulation model, which can therefore be used to analyse the operation of the system in different case studies.
Rezaeian, Sanaz; Zhong, Peng; Hartzell, Stephen; Zareian, Farzin
2015-01-01
Simulated earthquake ground motions can be used in many recent engineering applications that require time series as input excitations. However, applicability and validation of simulations are subjects of debate in the seismological and engineering communities. We propose a validation methodology at the waveform level and directly based on characteristics that are expected to influence most structural and geotechnical response parameters. In particular, three time-dependent validation metrics are used to evaluate the evolving intensity, frequency, and bandwidth of a waveform. These validation metrics capture nonstationarities in intensity and frequency content of waveforms, making them ideal to address nonlinear response of structural systems. A two-component error vector is proposed to quantify the average and shape differences between these validation metrics for a simulated and recorded ground-motion pair. Because these metrics are directly related to the waveform characteristics, they provide easily interpretable feedback to seismologists for modifying their ground-motion simulation models. To further simplify the use and interpretation of these metrics for engineers, it is shown how six scalar key parameters, including duration, intensity, and predominant frequency, can be extracted from the validation metrics. The proposed validation methodology is a step forward in paving the road for utilization of simulated ground motions in engineering practice and is demonstrated using examples of recorded and simulated ground motions from the 1994 Northridge, California, earthquake.
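One of the time-dependent validation metrics above, the evolving intensity of a waveform, is essentially a cumulative squared-acceleration (Husid/Arias-type) curve, and the two-component error vector separates average from shape differences. The sketch below illustrates that idea on synthetic waveforms; it is a simplified stand-in, not the authors' exact formulation:

```python
# Illustrative sketch: evolving intensity of a waveform (cumulative integral
# of a(t)^2) and a simple two-component error vector (average difference,
# shape difference) between a recorded and a simulated metric curve.
# Waveforms are synthetic; this is not the paper's exact metric definition.
import math

def evolving_intensity(accel, dt):
    """Cumulative integral of a(t)^2: a monotone curve tracking intensity build-up."""
    curve, total = [], 0.0
    for a in accel:
        total += a * a * dt
        curve.append(total)
    return curve

def error_vector(rec, sim):
    """(average error, shape error) between two equal-length metric curves."""
    n = len(rec)
    avg = sum(s - r for r, s in zip(rec, sim)) / n
    # Shape error: RMS of the residual after removing the average offset.
    shape = math.sqrt(sum((s - r - avg) ** 2 for r, s in zip(rec, sim)) / n)
    return avg, shape

dt = 0.01
recorded  = [0.0, 1.0, 2.0, 1.5, 0.5, 0.0]
simulated = [0.0, 1.2, 1.8, 1.6, 0.4, 0.1]

e_avg, e_shape = error_vector(evolving_intensity(recorded, dt),
                              evolving_intensity(simulated, dt))
print(round(e_avg, 5), round(e_shape, 5))
```

Separating the average from the shape component tells a modeler whether a simulation is biased overall or merely mistimed in how intensity accumulates.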
Alwaal, Amjad; Al-Qaoud, Talal M; Haddad, Richard L; Alzahrani, Tarek M; Delisle, Josee; Anidjar, Maurice
2015-01-01
The aim of this study was to assess the predictive validity of the LapSim simulator within a urology residency program. Twelve urology residents at McGill University were enrolled in the study between June 2008 and December 2011. The residents had weekly training on the LapSim that consisted of 3 tasks (cutting, clip-applying, and lifting and grasping). They underwent monthly assessment of their LapSim performance using total time, tissue damage and path length, among other parameters, as surrogates for their economy of movement and respect for tissue. The residents' final LapSim performance was compared with their first performance of radical nephrectomy on anesthetized porcine models in their 4th year of training. Two independent urologic surgeons rated the resident performance on the porcine models, and a kappa test with a standardized weight function was used to assess inter-observer bias. A nonparametric Spearman correlation test was used to compare each rater's cumulative score with the cumulative score obtained on the porcine models in order to test the predictive validity of the LapSim simulator. The kappa results demonstrated acceptable agreement between the two observers across all domains of the rating scale of performance except for confidence of movement and efficiency. In addition, poor predictive validity of the LapSim simulator was demonstrated. Predictive validity was not demonstrated for the LapSim simulator in the context of a urology residency training program.
Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans
2015-02-01
The Department of Energy (DOE) has made significant progress developing simulation tools to predict the behavior of nuclear systems with greater accuracy and of increasing our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user).
To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Nuclear Energy Knowledge and Validation Center (NEKVaC or the ‘Center’) will be a resource for industry, DOE Programs, and academia validation efforts.
Perspectives on the simulation of protein–surface interactions using empirical force field methods
Latour, Robert A.
2014-01-01
Protein–surface interactions are of fundamental importance for a broad range of applications in the fields of biomaterials and biotechnology. Present experimental methods are limited in their ability to provide a comprehensive depiction of these interactions at the atomistic level. In contrast, empirical force field based simulation methods inherently provide the ability to predict and visualize protein–surface interactions with full atomistic detail. These methods, however, must be carefully developed, validated, and properly applied before confidence can be placed in results from the simulations. In this perspectives paper, I provide an overview of the critical aspects that I consider to be of greatest importance for the development of these methods, with a focus on the research that my combined experimental and molecular simulation groups have conducted over the past decade to address these issues. These critical issues include the tuning of interfacial force field parameters to accurately represent the thermodynamics of interfacial behavior, adequate sampling of these types of complex molecular systems to generate results that can be comparable with experimental data, and the generation of experimental data that can be used for simulation results evaluation and validation. PMID:25028242
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui; Sumner, Tyler S.
2016-04-17
An advanced system analysis tool, SAM, is being developed for fast-running, improved-fidelity, whole-plant transient analyses at Argonne National Laboratory under DOE-NE’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents the benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE’s Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, the SAS4A/SASSYS-1 code simulation results are also included for a code-to-code comparison.
NASA Technical Reports Server (NTRS)
Tabiei, Al; Lawrence, Charles; Fasanella, Edwin L.
2009-01-01
A series of crash tests were conducted with dummies during simulated Orion crew module landings at the Wright-Patterson Air Force Base. These tests consisted of several crew configurations with and without astronaut suits. Some test results were collected and are presented. In addition, finite element models of the tests were developed and are presented. The finite element models were validated using the experimental data, and the test responses were compared with the computed results. Occupant crash data, such as forces, moments, and accelerations, were collected from the simulations and compared with injury criteria to assess occupant survivability and injury. Some of the injury criteria published in the literature are summarized for completeness. These criteria were used to determine potential injury during crew impact events.
Virtual evaluation of stent graft deployment: a validated modeling and simulation study.
De Bock, S; Iannaccone, F; De Santis, G; De Beule, M; Van Loo, D; Devos, D; Vermassen, F; Segers, P; Verhegghe, B
2012-09-01
The presented study details the virtual deployment of a bifurcated stent graft (Medtronic Talent) in an Abdominal Aortic Aneurysm model, using the finite element method. The entire deployment procedure is modeled, with the stent graft being crimped and bent according to the vessel geometry, and subsequently released. The finite element results are validated in vitro with placement of the device in a silicone mock aneurysm, using high resolution CT scans to evaluate the result. The presented work confirms the capability of finite element computer simulations to predict the deformed configuration after endovascular aneurysm repair (EVAR). These simulations can be used to quantify mechanical parameters, such as neck dilations, radial forces and stresses in the device, that are difficult or impossible to obtain from medical imaging. Copyright © 2012 Elsevier Ltd. All rights reserved.
Korzeniowski, Przemyslaw; Brown, Daniel C; Sodergren, Mikael H; Barrow, Alastair; Bello, Fernando
2017-02-01
The goal of this study was to establish face, content, and construct validity of NOViSE, the first force-feedback-enabled virtual reality (VR) simulator for natural orifice transluminal endoscopic surgery (NOTES). Fourteen surgeons and surgical trainees performed 3 simulated hybrid transgastric cholecystectomies using a flexible endoscope on NOViSE. Four of them were classified as "NOTES experts" who had independently performed 10 or more simulated or human NOTES procedures. Seven participants were classified as "Novices" and 3 as "Gastroenterologists" with no or minimal NOTES experience. A standardized 5-point Likert-type scale questionnaire was administered to assess the face and content validity. NOViSE showed good overall face and content validity. In 14 out of 15 statements pertaining to face validity (graphical appearance, endoscope and tissue behavior, overall realism), ≥50% of responses were "agree" or "strongly agree." In terms of content validity, 85.7% of participants agreed or strongly agreed that NOViSE is a useful training tool for NOTES and 71.4% that they would recommend it to others. Construct validity was established by comparing a number of performance metrics such as task completion times, path lengths, applied forces, and so on. NOViSE demonstrated early signs of construct validity. Experts were faster and used a shorter endoscopic path length than novices in all but one task. The results indicate that NOViSE authentically recreates a transgastric hybrid cholecystectomy and sets promising foundations for the further development of a VR training curriculum for NOTES without compromising patient safety or requiring expensive animal facilities.
Modelling and validation of Proton exchange membrane fuel cell (PEMFC)
NASA Astrophysics Data System (ADS)
Mohiuddin, A. K. M.; Basran, N.; Khan, A. A.
2018-01-01
This paper is the outcome of a small-scale fuel cell project. A fuel cell is an electrochemical device that converts energy from a chemical reaction into electrical work. The Proton Exchange Membrane Fuel Cell (PEMFC) is one type of fuel cell; it is more efficient, has a low operating temperature, and its fast start-up capability results in high energy density. In this study, a mathematical model of a 1.2 W PEMFC is developed and simulated using MATLAB software. This model describes the PEMFC behaviour under steady-state conditions. The mathematical model of the PEMFC determines the polarization curve, the power generated, and the efficiency of the fuel cell. Simulation results were validated by comparison with experimental results obtained from the test of a single PEMFC with a 3 V motor. The performance of the experimental PEMFC is slightly lower than that of the simulated PEMFC, but both results were found to be in good agreement. Experiments on the hydrogen flow rate were also conducted to obtain the amount of hydrogen consumed to produce electrical work in the PEMFC.
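A steady-state polarization curve of the kind such a model produces is commonly written as the open-circuit voltage minus activation, ohmic, and concentration losses. The sketch below uses that generic textbook form with invented parameters, not the coefficients of the 1.2 W stack in the study:

```python
# Generic steady-state PEMFC polarization sketch: cell voltage equals the
# open-circuit voltage minus activation (Tafel), ohmic, and concentration
# losses. Parameter values are illustrative placeholders.
import math

def cell_voltage(i, E0=1.0, A=0.06, i0=1e-4, R=0.2, m=3e-5, n=8.0):
    """Voltage (V) at current density i (A/cm^2) for one cell, i > 0."""
    activation    = A * math.log(i / i0)   # Tafel activation loss
    ohmic         = i * R                  # resistive loss
    concentration = m * math.exp(n * i)    # mass-transport loss
    return E0 - activation - ohmic - concentration

# Polarization curve: voltage falls as current density rises.
for i in (0.01, 0.1, 0.5):
    print(round(cell_voltage(i), 3))
```

Sweeping i and multiplying voltage by current density also yields the power curve the abstract mentions.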
NASA Astrophysics Data System (ADS)
Akai, Takashi; Bijeljic, Branko; Blunt, Martin J.
2018-06-01
In the color gradient lattice Boltzmann model (CG-LBM), a fictitious-density wetting boundary condition has been widely used because of its ease of implementation. However, as we show, this may lead to inaccurate results in some cases. In this paper, a new scheme for the wetting boundary condition is proposed which can handle complicated 3D geometries. The validity of our method for static problems is demonstrated by comparing the simulated results to analytical solutions in 2D and 3D geometries with curved boundaries. Then, capillary rise simulations are performed to study dynamic problems where the three-phase contact line moves. The results are compared to experimental results in the literature (Heshmati and Piri, 2014). If a constant contact angle is assumed, the simulations agree with the analytical solution based on the Lucas-Washburn equation. However, to match the experiments, we need to implement a dynamic contact angle that varies with the flow rate.
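The Lucas-Washburn relation used above as the constant-contact-angle reference predicts a penetration depth growing with the square root of time. A minimal sketch, with water-like placeholder properties rather than the fluids of Heshmati and Piri (2014):

```python
# Lucas-Washburn capillary rise (gravity and inertia neglected):
#   h(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu))
# Fluid properties below are water-like placeholders.
import math

def washburn_height(t, gamma, r, theta_deg, mu):
    """Penetration depth (m) at time t (s)."""
    coeff = gamma * r * math.cos(math.radians(theta_deg)) / (2.0 * mu)
    return math.sqrt(coeff * t)

gamma = 0.072    # surface tension, N/m
r     = 100e-6   # capillary radius, m
mu    = 1.0e-3   # dynamic viscosity, Pa.s
for t in (0.01, 0.04, 0.09):
    print(round(washburn_height(t, gamma, r, 30.0, mu), 4))
```

Note the square-root scaling: quadrupling the time doubles the rise height, which is the signature behavior a simulation with a fixed contact angle should reproduce.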
Modelling and simulation of wood chip combustion in a hot air generator system.
Rajika, J K A T; Narayana, Mahinsasa
2016-01-01
This study focuses on modelling and simulation of a horizontal moving bed/grate wood chip combustor. A standalone finite-volume-based 2-D steady state Euler-Euler Computational Fluid Dynamics (CFD) model was developed for packed bed combustion. Packed bed combustion in a medium-scale biomass combustor, which was retrofitted from wood log to wood chip feeding for tea drying in Sri Lanka, was evaluated in a CFD simulation study. The model was validated against the experimental results of an industrial biomass combustor for a hot air generation system in the tea industry. The open-source CFD tool OpenFOAM was used to generate the CFD model source code for the packed bed combustion, which was simulated along with an available solver for free-board region modelling in the CFD tool. The height of the packed bed is about 20 cm, and the biomass particles are assumed to be spherical with a constant surface-area-to-volume ratio. Temperature measurements in the combustor agree well with the simulation results, while the gas-phase compositions show discrepancies. The combustion efficiency of the validated hot air generator is around 52.2%.
Using Monte Carlo Simulation to Prioritize Key Maritime Environmental Impacts of Port Infrastructure
NASA Astrophysics Data System (ADS)
Perez Lespier, L. M.; Long, S.; Shoberg, T.
2016-12-01
This study creates a Monte Carlo simulation model to prioritize key indicators of environmental impacts resulting from maritime port infrastructure. Data inputs are derived from Landsat imagery, government databases, and industry reports to create the simulation. Results are validated using subject matter experts and compared with those returned from time-series regression to determine goodness of fit. The Port of Prince Rupert, Canada is used as the location for the study.
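Monte Carlo prioritization of the kind described can be sketched as drawing each indicator's uncertain impact score many times and ranking by the sampled mean. The indicator names and distribution parameters below are invented for illustration, not the study's data:

```python
# Hedged sketch of Monte Carlo prioritization: each environmental indicator
# gets an uncertain impact score (triangular distribution here), many draws
# are taken, and indicators are ranked by mean sampled impact.
# Names and (low, mode, high) parameters are invented.
import random

random.seed(42)  # reproducible draws

indicators = {
    "air_emissions":   (2.0, 6.0, 9.0),
    "water_quality":   (1.0, 3.0, 7.0),
    "dredging_impact": (3.0, 7.0, 10.0),
    "noise":           (0.5, 2.0, 4.0),
}

N = 10_000
means = {
    name: sum(random.triangular(lo, hi, mode) for _ in range(N)) / N
    for name, (lo, mode, hi) in indicators.items()
}
ranking = sorted(means, key=means.get, reverse=True)
print(ranking)
```

The sampled ranking converges to the ordering of the theoretical triangular means, (low + mode + high) / 3, as the number of draws grows.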
Validation of a DICE Simulation Against a Discrete Event Simulation Implemented Entirely in Code.
Möller, Jörgen; Davis, Sarah; Stevenson, Matt; Caro, J Jaime
2017-10-01
Modeling is an essential tool for health technology assessment, and various techniques for conceptualizing and implementing such models have been described. Recently, a new method has been proposed, the discretely integrated condition event (DICE) simulation, that enables frequently employed approaches to be specified using a common, simple structure that can be entirely contained and executed within widely available spreadsheet software. To assess if a DICE simulation provides equivalent results to an existing discrete event simulation, a comparison was undertaken. A model of osteoporosis and its management, programmed entirely in Visual Basic for Applications and made public by the National Institute for Health and Care Excellence (NICE) Decision Support Unit, was downloaded and used to guide construction of its DICE version in Microsoft Excel®. The DICE model was then run using the same inputs and settings, and the results were compared. The DICE version produced results that are nearly identical to the original ones, with differences that would not affect the decision direction of the incremental cost-effectiveness ratios (<1% discrepancy), despite the stochastic nature of the models. The main limitation of the simple DICE version is its slow execution speed. DICE simulation did not alter the results and, thus, should provide a valid way to design and implement decision-analytic models without requiring specialized software or custom programming. Additional efforts need to be made to speed up execution.
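The comparison reported above hinges on the incremental cost-effectiveness ratio (ICER) and the percent discrepancy between the two implementations. A minimal sketch with invented cost/QALY numbers, not outputs of the NICE osteoporosis model:

```python
# Compute the incremental cost-effectiveness ratio (ICER) from each model's
# outputs and the percent discrepancy between implementations.
# All cost and QALY figures below are invented for illustration.
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost per incremental QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

original = icer(12_500.0, 9_000.0, 6.400, 6.150)   # e.g. DES implemented in VBA
dice     = icer(12_520.0, 9_005.0, 6.401, 6.150)   # e.g. DICE version in Excel

discrepancy_pct = 100.0 * abs(dice - original) / original
print(round(original), round(dice), round(discrepancy_pct, 2))
```

A discrepancy well under 1%, as here, would leave the decision direction of the ICER unchanged, which is the equivalence criterion the abstract applies.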
Numerical modeling and experimental validation of thermoplastic composites induction welding
NASA Astrophysics Data System (ADS)
Palmieri, Barbara; Nele, Luigi; Galise, Francesco
2018-05-01
In this work, a numerical simulation and experimental test of the induction welding of continuous fibre-reinforced thermoplastic composites (CFRTPCs) are presented. The thermoplastic polyamide 66 (PA66) with carbon fibre fabric was used. Using dedicated software (JMag Designer), the influence of the fundamental process parameters such as temperature, current and holding time was investigated. To validate the simulation results, and therefore the numerical model used, experimental tests were carried out: the temperatures measured during the tests with an optical pyrometer were compared with those predicted by the numerical simulation. The mechanical properties of the welded joints were evaluated by single-lap shear tests.
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Cunningham, Kevin; Hill, Melissa A.
2013-01-01
Flight test and modeling techniques were developed for efficiently identifying global aerodynamic models that can be used to accurately simulate stall, upset, and recovery on large transport airplanes. The techniques were developed and validated in a high-fidelity fixed-base flight simulator using a wind-tunnel aerodynamic database, realistic sensor characteristics, and a realistic flight deck representative of a large transport aircraft. Results demonstrated that aerodynamic models for stall, upset, and recovery can be identified rapidly and accurately using relatively simple piloted flight test maneuvers. Stall maneuver predictions and comparisons of identified aerodynamic models with data from the underlying simulation aerodynamic database were used to validate the techniques.
Validated numerical simulation model of a dielectric elastomer generator
NASA Astrophysics Data System (ADS)
Foerster, Florentine; Moessinger, Holger; Schlaak, Helmut F.
2013-04-01
Dielectric elastomer generators (DEG) produce electrical energy by converting mechanical into electrical energy. Efficient operation requires homogeneous deformation of each single layer. However, internal and external influences such as supports or the shape of the DEG make the deformation inhomogeneous and hence reduce the amount of generated electrical energy. Optimization of the deformation behavior leads to improved efficiency of the DEG and consequently to higher energy gain. In this work a numerical simulation model of a multilayer dielectric elastomer generator is developed using the FEM software ANSYS. The analyzed multilayer DEG consists of 49 active dielectric layers with layer thicknesses of 50 μm. The elastomer is silicone (PDMS) while the compliant electrodes are made of graphite powder. The simulation needs to include the real material parameters of the PDMS and the graphite electrodes. Therefore, the mechanical and electrical material parameters of the PDMS are determined by experimental investigations of test samples, while the electrode parameters are determined by numerical simulations of test samples. The numerical simulation of the DEG is carried out as a coupled electro-mechanical simulation for the constant-voltage energy harvesting cycle. Finally, the derived numerical simulation model is validated by comparison with analytical calculations and further simulated DEG configurations. The comparison of the results shows good agreement with regard to the deformation of the DEG. Based on the validated model it is now possible to optimize the DEG layout for improved deformation behavior with further simulations.
A discrete event simulation tool to support and predict hospital and clinic staffing.
DeRienzo, Christopher M; Shaw, Ryan J; Meanor, Phillip; Lada, Emily; Ferranti, Jeffrey; Tanaka, David
2017-06-01
We demonstrate how to develop a simulation tool to help healthcare managers and administrators predict and plan for staffing needs in a hospital neonatal intensive care unit using administrative data. We developed a discrete event simulation model of nursing staff needed in a neonatal intensive care unit and then validated the model against historical data. The process flow was translated into a discrete event simulation model. Results demonstrated that the model can be used to give a respectable estimate of annual admissions, transfers, and deaths based upon two different staffing levels. The discrete event simulation tool model can provide healthcare managers and administrators with (1) a valid method of modeling patient mix, patient acuity, staffing needs, and costs in the present state and (2) a forecast of how changes in a unit's staffing, referral patterns, or patient mix would affect a unit in a future state.
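A minimal discrete event simulation of unit census, in the spirit of the staffing tool described above, can be written with a priority queue of admission and discharge events. All rates and staffing numbers below are invented for illustration, not the hospital's administrative data.

```python
import heapq
import random

def simulate_nicu(days, adm_rate, mean_los, nurses, ratio, seed=1):
    """Toy discrete event simulation of NICU census.
    adm_rate: admissions per day; mean_los: mean length of stay in days;
    ratio: infants one nurse can cover. Returns total admissions and the
    fraction of time the unit is understaffed. Illustrative parameters."""
    rng = random.Random(seed)
    events = [(rng.expovariate(adm_rate), "adm")]   # (time, kind) min-heap
    census, t_prev, over_time, admissions = 0, 0.0, 0.0, 0
    while events:
        t, kind = heapq.heappop(events)
        if t > days:
            break
        if census > nurses * ratio:        # census during [t_prev, t]
            over_time += t - t_prev        # time spent understaffed
        t_prev = t
        if kind == "adm":
            admissions += 1
            census += 1
            heapq.heappush(events, (t + rng.expovariate(1 / mean_los), "dis"))
            heapq.heappush(events, (t + rng.expovariate(adm_rate), "adm"))
        else:
            census -= 1
    return admissions, over_time / days

adm, frac_over = simulate_nicu(days=365, adm_rate=2.0, mean_los=7.0,
                               nurses=10, ratio=2, seed=1)
```

With 2 admissions/day and a 7-day mean stay, the long-run census averages around 14; comparing `frac_over` for two nurse counts mirrors the paper's two-staffing-level comparison.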
Integrated Disposal Facility FY 2016: ILAW Verification and Validation of the eSTOMP Simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freedman, Vicky L.; Bacon, Diana H.; Fang, Yilin
2016-05-13
This document describes two sets of simulations carried out to further verify and validate the eSTOMP simulator. In this report, a distinction is made between verification and validation, and the focus is on verifying eSTOMP through a series of published benchmarks on cementitious wastes, and validating eSTOMP based on a lysimeter experiment for the glassified waste. These activities are carried out within the context of a scientific view of validation that asserts that models can only be invalidated, and that model validation (and verification) is a subjective assessment.
Validation of educational assessments: a primer for simulation and beyond.
Cook, David A; Hatala, Rose
2016-01-01
Simulation plays a vital role in health professions assessment. This review provides a primer on assessment validation for educators and education researchers. We focus on simulation-based assessment of health professionals, but the principles apply broadly to other assessment approaches and topics. Validation refers to the process of collecting validity evidence to evaluate the appropriateness of the interpretations, uses, and decisions based on assessment results. Contemporary frameworks view validity as a hypothesis, and validity evidence is collected to support or refute the validity hypothesis (i.e., that the proposed interpretations and decisions are defensible). In validation, the educator or researcher defines the proposed interpretations and decisions, identifies and prioritizes the most questionable assumptions in making these interpretations and decisions (the "interpretation-use argument"), empirically tests those assumptions using existing or newly-collected evidence, and then summarizes the evidence as a coherent "validity argument." A framework proposed by Messick identifies potential evidence sources: content, response process, internal structure, relationships with other variables, and consequences. Another framework proposed by Kane identifies key inferences in generating useful interpretations: scoring, generalization, extrapolation, and implications/decision. We propose an eight-step approach to validation that applies to either framework: Define the construct and proposed interpretation, make explicit the intended decision(s), define the interpretation-use argument and prioritize needed validity evidence, identify candidate instruments and/or create/adapt a new instrument, appraise existing evidence and collect new evidence as needed, keep track of practical issues, formulate the validity argument, and make a judgment: does the evidence support the intended use? 
Rigorous validation first prioritizes and then empirically evaluates key assumptions in the interpretation and use of assessment scores. Validation science would be improved by more explicit articulation and prioritization of the interpretation-use argument, greater use of formal validation frameworks, and more evidence informing the consequences and implications of assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, N. V.; Kakurin, A. M.
2014-10-15
Simulation of the magnetic island evolution under Resonant Magnetic Perturbation (RMP) in rotating T-10 tokamak plasma is presented with intent of TEAR code experimental validation. In the T-10 experiment chosen for simulation, the RMP consists of a stationary error field, a magnetic field of the eddy current in the resistive vacuum vessel and magnetic field of the externally applied controlled halo current in the plasma scrape-off layer (SOL). The halo-current loop consists of a rail limiter, plasma SOL, vacuum vessel, and external part of the circuit. Effects of plasma resistivity, viscosity, and RMP are taken into account in the TEAR code, which is based on the two-fluid MHD approximation. Radial distribution of the magnetic flux perturbation is calculated with account of the externally applied RMP. A good agreement is obtained between the simulation results and experimental data for the cases of preprogrammed and feedback-controlled halo current in the plasma SOL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neylon, J; Min, Y; Qi, S
2014-06-15
Purpose: Deformable image registration (DIR) plays a pivotal role in head and neck adaptive radiotherapy, but a systematic validation of DIR algorithms has been limited by a lack of quantitative high-resolution ground truth. We address this limitation by developing a GPU-based framework that provides a systematic DIR validation by generating (a) model-guided synthetic CTs representing posture and physiological changes, and (b) model-guided landmark-based validation. Method: The GPU-based framework was developed to generate massive mass-spring biomechanical models from patient simulation CTs and contoured structures. The biomechanical model represented soft tissue deformations for known rigid skeletal motion. Posture changes were simulated by articulating skeletal anatomy, which subsequently applied elastic corrective forces upon the soft tissue. Physiological changes such as tumor regression and weight loss were simulated in a biomechanically precise manner. Synthetic CT data was then generated from the deformed anatomy. The initial and final positions for one hundred randomly chosen mass elements inside each of the internal contoured structures were recorded as ground truth data. The process was automated to create 45 synthetic CT datasets for a given patient CT. For instance, the head rotation was varied between +/− 4 degrees along each axis, and tumor volumes were systematically reduced up to 30%. Finally, the original CT and deformed synthetic CT were registered using an optical-flow-based DIR. Results: Each synthetic data creation took approximately 28 seconds of computation time. The number of landmarks per data set varied between two and three thousand. The validation method is able to perform sub-voxel analysis of the DIR and report the results by structure, giving a much more in-depth investigation of the error.
Conclusions: We presented a GPU-based high-resolution biomechanical head and neck model to validate DIR algorithms by generating CT-equivalent 3D volumes with simulated posture changes and physiological regression.
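The core mass-spring update the framework relies on, a soft-tissue node pulled toward its rigid skeletal attachment by a damped elastic force, can be sketched for a single node. All constants below are illustrative, not the framework's tuned values.

```python
import math

def settle(anchor, start, k=50.0, m=1.0, damping=5.0, dt=0.01, steps=2000):
    """One soft-tissue mass attached by a damped spring to a rigid
    (skeletal) anchor point, integrated with semi-implicit Euler until it
    settles. A one-node stand-in for the massive mass-spring models in the
    abstract; constants are illustrative."""
    x, y = start
    vx, vy = 0.0, 0.0
    for _ in range(steps):
        fx = k * (anchor[0] - x) - damping * vx   # corrective elastic force
        fy = k * (anchor[1] - y) - damping * vy
        vx += fx / m * dt                         # update velocity first
        vy += fy / m * dt
        x += vx * dt                              # then position (stable)
        y += vy * dt
    return x, y

# Articulate the "skeletal" anchor by 4 degrees about the origin, as in
# the abstract's posture variation; the soft tissue follows to the new
# equilibrium under the corrective force.
theta = math.radians(4.0)
old_anchor = (1.0, 0.0)
new_anchor = (math.cos(theta), math.sin(theta))
x, y = settle(new_anchor, start=old_anchor)
```

The real framework integrates many thousands of such nodes in parallel on the GPU; the per-node force law is no more complicated than this.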
Validation of a Video-based Game-Understanding Test Procedure in Badminton.
ERIC Educational Resources Information Center
Blomqvist, Minna T.; Luhtanen, Pekka; Laakso, Lauri; Keskinen, Esko
2000-01-01
Reports the development and validation of video-based game-understanding tests in badminton for elementary and secondary students. The tests included different sequences that simulated actual game situations. Players had to solve tactical problems by selecting appropriate solutions and arguments for their decisions. Results suggest that the test…
NASA Astrophysics Data System (ADS)
Wang, Huihui; Sukhomlinov, Vladimir S.; Kaganovich, Igor D.; Mustafaev, Alexander S.
2017-02-01
Using the Monte Carlo collision method, we have performed simulations of ion velocity distribution functions (IVDF) taking into account both elastic collisions and charge exchange collisions of ions with atoms in uniform electric fields for argon and helium background gases. The simulation results are verified by comparison with experimental data for the ion mobilities and the ion transverse diffusion coefficients in argon and helium. The recently published experimental data for the first seven coefficients of the Legendre polynomial expansion of the ion energy and angular distribution functions are used to validate simulation results for IVDF. Good agreement between measured and simulated IVDFs shows that the developed simulation model can be used for accurate calculations of IVDFs.
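The charge-exchange part of such a Monte Carlo collision model is simple to sketch: the ion free-flies in the uniform field for an exponentially distributed time, then restarts cold (its momentum is handed to a slow atom). For this stripped-down model, with a constant collision frequency and elastic collisions omitted, the drift velocity has the closed form qE/(mν), which the sample reproduces in arbitrary units.

```python
import random

def ion_drift_velocity(E, q_over_m, nu_cx, t_total, seed=0):
    """Toy Monte Carlo: an ion accelerates in a uniform field E and, at
    exponentially distributed charge-exchange collisions, is replaced by
    an ion born from a cold atom (velocity reset to zero). Arbitrary
    units; constant collision frequency nu_cx."""
    rng = random.Random(seed)
    t, x, v = 0.0, 0.0, 0.0
    while t < t_total:
        dt = rng.expovariate(nu_cx)               # free-flight time
        dt = min(dt, t_total - t)
        x += v * dt + 0.5 * q_over_m * E * dt * dt
        v += q_over_m * E * dt
        t += dt
        if t < t_total:
            v = 0.0                               # charge exchange: cold restart
    return x / t_total                            # time-averaged drift velocity

# Theory for this model: <v> = (q/m) * E / nu_cx = 0.1 in these units.
vd = ion_drift_velocity(E=1.0, q_over_m=1.0, nu_cx=10.0, t_total=2000.0)
```

Accumulating the velocity at sample times instead of the displacement would give the IVDF itself, which is what the paper compares against the Legendre-expansion measurements.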
Dynamic simulation of a reverse Brayton refrigerator
NASA Astrophysics Data System (ADS)
Peng, N.; Lei, L. L.; Xiong, L. Y.; Tang, J. C.; Dong, B.; Liu, L. Q.
2014-01-01
A test refrigerator based on the modified reverse Brayton cycle has been developed in the Chinese Academy of Sciences recently. To study the behavior of this test refrigerator, a dynamic simulation has been carried out. The numerical model comprises the typical components of the test refrigerator: compressor, valves, heat exchangers, expander and heater. The simulator is based on an object-oriented approach, and each component is represented by a set of differential and algebraic equations. The control system of the test refrigerator is also simulated, which can be used to optimize the control strategies. This paper describes all the models and shows the simulation results. Comparisons between simulation results and experimental data are also presented. Experimental validation on the test refrigerator gives satisfactory results.
von Dadelszen, Peter; Allaire, Catherine
2011-01-01
Background: Concern regarding the quality of surgical training in obstetrics and gynecology residency programs is focusing attention on competency-based education. Because open surgical skills cannot necessarily be translated into laparoscopic skills, and because minimally invasive surgery is becoming standard in operative gynecology, the discrepancy in training between obstetrics and gynecology will widen. Training on surgical simulators with virtual reality may improve surgical skills. However, before incorporation into training programs for gynecology residents, the validity of such instruments must first be established. We sought to prove the construct validity of a virtual reality laparoscopic simulator, the SurgicalSim™, by showing its ability to distinguish between surgeons with different laparoscopic experience. Methods: Eleven gynecologic surgeons (experts) and 11 perinatologists (controls) completed 3 tasks on the simulator, and 10 performance parameters were compared. Results: The experts performed faster, more efficiently, and with fewer errors, proving the construct validity of the SurgicalSim. Conclusions: Laparoscopic virtual reality simulators can measure relevant surgical skills and so distinguish between subjects having different skill levels. Hence, these simulators could be integrated into gynecology resident endoscopic training and utilized for objective assessment. Second, the skills required for competency in obstetrics cannot necessarily be utilized for better performance in laparoscopic gynecology. PMID:21985726
Jalink, M B; Goris, J; Heineman, E; Pierie, J P E N; ten Cate Hoedemaker, H O
2014-02-01
Virtual reality (VR) laparoscopic simulators have been around for more than 10 years and have proven to be cost- and time-effective in laparoscopic skills training. However, most simulators are, in our experience, considered less interesting by residents and are often poorly accessible. Consequently, these devices are rarely used in actual training. In an effort to make a low-cost and more attractive simulator, a custom-made Nintendo Wii game was developed. This game could ultimately be used to train the same basic skills as VR laparoscopic simulators do. Before such a video game can be implemented into a surgical training program, it has to be validated according to international standards. The main goal of this study was to test construct and concurrent validity of the controls of a prototype of the game. In this study, the basic laparoscopic skills of experts (surgeons, urologists, and gynecologists, n = 15) were compared to those of complete novices (internists, n = 15) using the Wii Laparoscopy (construct validity). Scores were also compared to the Fundamentals of Laparoscopy (FLS) Peg Transfer test, an already established assessment method for measuring basic laparoscopic skills (concurrent validity). Results showed that experts were 111% faster (P = 0.001) on the Wii Laparoscopy task than novices. Also, scores of the FLS Peg Transfer test and the Wii Laparoscopy showed a significant, high correlation (r = 0.812, P < 0.001). The prototype setup of the Wii Laparoscopy possesses solid construct and concurrent validity.
NASA Astrophysics Data System (ADS)
Dube, B.; Lefebvre, S.; Perocheau, A.; Nakra, H. L.
1988-01-01
This paper describes the comparative results obtained from digital and hybrid simulation studies on a variable speed wind generator interconnected to the utility grid. The wind generator is a vertical-axis Darrieus type coupled to a synchronous machine by a gear-box; the synchronous machine is connected to the AC utility grid through a static frequency converter. Digital simulation results have been obtained using CSMP software; these results are compared with those obtained from a real-time hybrid simulator that in turn uses a part of the IREQ HVDC simulator. The agreement between hybrid and digital simulation results is generally good. The results demonstrate that the digital simulation reproduces the dynamic behavior of the system in a satisfactory manner and thus constitutes a valid tool for the design of the control systems of the wind generator.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-04-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.
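A classification-error simulation of this kind is straightforward to set up. The sketch below is a stand-in for the paper's design, not a reproduction of it: the decision rule, prevalence levels, and Beta mixing parameter are invented, with intracluster correlation induced by giving each cluster a shared risk drawn from a Beta distribution.

```python
import random

def lqas_classify_rate(true_prev, threshold_count, n_sims=3000, seed=2,
                       clusters=67, per_cluster=3, icc_conc=20.0):
    """Toy estimate of LQAS classification rates for a 67x3 cluster
    design. Each cluster shares a risk drawn from Beta(a, b), which
    induces intracluster correlation rho = 1/(icc_conc + 1). Returns the
    fraction of simulated surveys classified as high prevalence."""
    rng = random.Random(seed)
    a = true_prev * icc_conc
    b = (1.0 - true_prev) * icc_conc
    high = 0
    for _ in range(n_sims):
        cases = 0
        for _ in range(clusters):
            p = rng.betavariate(a, b)          # shared within-cluster risk
            cases += sum(rng.random() < p for _ in range(per_cluster))
        if cases > threshold_count:
            high += 1
    return high / n_sims

# Hypothetical decision rule: classify prevalence as high when more than
# 30 of the 201 sampled children have GAM.
err_low = lqas_classify_rate(true_prev=0.10, threshold_count=30)   # false alarms
hit_high = lqas_classify_rate(true_prev=0.25, threshold_count=30)  # correct alarms
```

Re-running with a larger or smaller `icc_conc` shows the effect the abstract describes: stronger within-cluster correlation inflates the variance of the case count and hence the classification error.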
Brewin, James; Tang, Jessica; Dasgupta, Prokar; Khan, Muhammad S; Ahmed, Kamran; Bello, Fernando; Kneebone, Roger; Jaye, Peter
2015-07-01
To evaluate the face, content and construct validity of the distributed simulation (DS) environment for technical and non-technical skills training in endourology. To evaluate the educational impact of DS for urology training. DS offers a portable, low-cost simulated operating room environment that can be set up in any open space. A prospective mixed methods design using established validation methodology was conducted in this simulated environment with 10 experienced and 10 trainee urologists. All participants performed a simulated prostate resection in the DS environment. Outcome measures included surveys to evaluate the DS, as well as comparative analyses of experienced and trainee urologists' performance using real-time and 'blinded' video analysis and validated performance metrics. Non-parametric statistical methods were used to compare differences between groups. The DS environment demonstrated face, content and construct validity for both non-technical and technical skills. Kirkpatrick level 1 evidence for the educational impact of the DS environment was shown. Further studies are needed to evaluate the effect of simulated operating room training on real operating room performance. This study has shown the validity of the DS environment for non-technical, as well as technical skills training. DS-based simulation appears to be a valuable addition to traditional classroom-based simulation training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.
Design and control of compliant tensegrity robots through simulation and hardware validation.
Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas
2014-09-06
To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity ('tensile-integrity') structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Physics of neutral gas jet interaction with magnetized plasmas
NASA Astrophysics Data System (ADS)
Wang, Zhanhui; Xu, Xueqiao; Diamond, Patrick; Xu, Min; Duan, Xuru; Yu, Deliang; Zhou, Yulin; Shi, Yongfu; Nie, Lin; Ke, Rui; Zhong, Wulv; Shi, Zhongbing; Sun, Aiping; Li, Jiquan; Yao, Lianghua
2017-10-01
It is critical to understand the physics and transport dynamics during the plasma fueling process. Plasma and neutral interactions involve the transfer of charge, momentum, and energy in ion-neutral and electron-neutral collisions. Thus, a seven-field fluid model of neutral gas jet injection (NGJI) is obtained, which couples plasma density, heat, and momentum transport equations together with neutral density and momentum transport equations of both molecules and atoms. Transport dynamics of plasma and neutrals are simulated for a complete range of discharge times, including steady state before NGJI, transport during NGJI, and relaxation after NGJI. With the trans-neut module of the BOUT++ code, the simulations of mean profile variations and fueling depths during fueling have been benchmarked against other codes and validated against HL-2A experimental results. Both the fast component (FC) and slow component (SC) of NGJI are simulated and validated with the HL-2A experimental measurements. The plasma blocking effect on the FC penetration is also simulated and agrees well with the experiment. This work is supported by the National Natural Science Foundation of China under Grant No. 11575055.
Xie, Yi; Mun, Sungyong; Kim, Jinhyun; Wang, Nien-Hwa Linda
2002-01-01
A tandem simulated moving bed (SMB) process for insulin purification has been proposed and validated experimentally. The mixture to be separated consists of insulin, high molecular weight proteins, and zinc chloride. A systematic approach based on the standing wave design, rate model simulations, and experiments was used to develop this multicomponent separation process. The standing wave design was applied to specify the SMB operating conditions of a lab-scale unit with 10 columns. The design was validated with rate model simulations prior to experiments. The experimental results show 99.9% purity and 99% yield, which closely agree with the model predictions and the standing wave design targets. The agreement proves that the standing wave design can ensure high purity and high yield for the tandem SMB process. Compared to a conventional batch SEC process, the tandem SMB has 10% higher yield, 400% higher throughput, and 72% lower eluant consumption. In contrast, a design that ignores the effects of mass transfer and nonideal flow cannot meet the purity requirement and gives less than 96% yield.
Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators
NASA Astrophysics Data System (ADS)
Nesarajah, Marco; Frey, Georg
This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEG), and the validation of the model by means of a test bench. TEGs are capable of improving the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model any TEG can be described and simulated given the material properties and the physical dimensions. Now, this model has been extended with the surrounding components into a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived, and possible improvements based on design variations tested in the simulation model are proposed.
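The electrical side of such a TEG model reduces, at its simplest, to the Seebeck voltage driving a resistive load. The back-of-envelope sketch below uses illustrative parameters (a generic 127-couple module, not the paper's measured values) and shows the matched-load condition that the power electronics in such a system typically try to track.

```python
def teg_power(n_couples, seebeck, dT, r_internal, r_load):
    """Back-of-envelope TEG output: open-circuit voltage from the Seebeck
    effect, then power delivered to a resistive load. Parameters are
    illustrative, not the paper's measured values."""
    v_oc = n_couples * seebeck * dT       # open-circuit voltage [V]
    i = v_oc / (r_internal + r_load)      # load current [A]
    return i * i * r_load                 # power dissipated in the load [W]

# A generic 127-couple module, 200 uV/K per couple, 80 K temperature
# difference. Matched load (r_load == r_internal) maximizes power transfer.
p_matched = teg_power(n_couples=127, seebeck=200e-6, dT=80.0,
                      r_internal=2.0, r_load=2.0)
```

Sweeping `r_load` around `r_internal` in this function reproduces the familiar maximum-power-transfer curve, which is the kind of design variation the validated Modelica® model is used to explore.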
Model for Atmospheric Propagation of Spatially Combined Laser Beams
2016-09-01
thesis modeling tools is discussed. In Chapter 6, the thesis validates the model with analytical computations and simulation results from... using the propagation model. Based on both the analytical computation and WaveTrain results, the diffraction effects simulated in the propagation model are... NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. THESIS: MODEL FOR ATMOSPHERIC PROPAGATION OF SPATIALLY COMBINED LASER BEAMS, by Kum Leong Lee.
Campbell, J Q; Coombs, D J; Rao, M; Rullkoetter, P J; Petrella, A J
2016-09-06
The purpose of this study was to seek broad verification and validation of human lumbar spine finite element models created using a previously published automated algorithm. The automated algorithm takes segmented CT scans of lumbar vertebrae, automatically identifies important landmarks and contact surfaces, and creates a finite element model. Mesh convergence was evaluated by examining changes in key output variables in response to mesh density. Semi-direct validation was performed by comparing experimental results for a single specimen to the automated finite element model results for that specimen with calibrated material properties from a prior study. Indirect validation was based on a comparison of results from automated finite element models of 18 individual specimens, all using one set of generalized material properties, to a range of data from the literature. A total of 216 simulations were run and compared to 186 experimental data ranges in all six primary bending modes up to 7.8 Nm with follower loads up to 1000 N. Mesh convergence results showed less than a 5% difference in key variables when the original mesh density was doubled. The semi-direct validation results showed that the automated method produced results comparable to manual finite element modeling methods. The indirect validation results showed a wide range of outcomes due to variations in the geometry alone. The studies showed that the automated models can be used to reliably evaluate lumbar spine biomechanics, specifically within our intended context of use: in pure bending modes, under relatively low non-injurious simulated in vivo loads, to predict torque rotation response, disc pressures, and facet forces. Copyright © 2016 Elsevier Ltd. All rights reserved.
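The mesh convergence criterion used above, under 5% change in key output variables when mesh density is doubled, reduces to a simple check. The output names and values below are hypothetical placeholders, not the study's numbers.

```python
def mesh_converged(coarse, fine, tol=0.05):
    """Convergence check in the spirit of the abstract: each key output
    from the doubled-density mesh should differ from the original mesh by
    less than tol (relative). Inputs are dicts of named outputs; returns
    (converged, first_failing_output)."""
    for name, c in coarse.items():
        f = fine[name]
        if abs(f - c) / max(abs(c), 1e-12) > tol:
            return False, name
    return True, None

# Hypothetical outputs at one load step (illustrative values only):
coarse = {"stiffness_Nm_per_deg": 1.10, "disc_pressure_MPa": 0.52, "facet_force_N": 38.0}
fine   = {"stiffness_Nm_per_deg": 1.13, "disc_pressure_MPa": 0.53, "facet_force_N": 39.1}
ok, failing = mesh_converged(coarse, fine)
```

Running the check per output variable, rather than on a single global norm, is what lets a study report which quantity (e.g. facet force) drives any remaining mesh sensitivity.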
Challenges of NDE Simulation Tool
NASA Technical Reports Server (NTRS)
Leckey, Cara A. C.; Juarez, Peter D.; Seebo, Jeffrey P.; Frank, Ashley L.
2015-01-01
Realistic nondestructive evaluation (NDE) simulation tools enable inspection optimization and predictions of inspectability for new aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of advanced aerospace components, potentially shortening the time from material development to implementation by industry and government. Furthermore, modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided wave based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation cannot rapidly simulate damage detection techniques for large scale, complex geometry composite components/vehicles with realistic damage types. This paper discusses some of the challenges of model development and validation for composites, such as the level of realism and scale of simulation needed for NASA's applications. Ongoing model development work is described along with examples of model validation studies. The paper also discusses examples of the use of simulation tools at NASA to develop new damage characterization methods, and associated challenges of validating those methods.
NASA Technical Reports Server (NTRS)
Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.
2009-01-01
A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.
A multiscale approach to accelerate pore-scale simulation of porous electrodes
NASA Astrophysics Data System (ADS)
Zheng, Weibo; Kim, Seung Hyun
2017-04-01
A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.
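The decomposition into macroscopic and local variations can be illustrated on a 1-D field: the macroscopic part is what a coarse solver would carry (here, simple cell averages, one plausible choice among several), and the pore-scale solver only has to resolve the remainder. This is a generic sketch of the splitting idea, not the paper's potential-equation scheme.

```python
import math

def decompose(values, cell):
    """Split a fine-grid field into a macroscopic part (cell averages)
    and local variations, phi = phi_macro + phi_local. The local part
    averages to zero in every cell by construction."""
    macro, local = [], []
    for start in range(0, len(values), cell):
        block = values[start:start + cell]
        avg = sum(block) / len(block)
        macro.extend([avg] * len(block))
        local.extend(v - avg for v in block)
    return macro, local

# A fine-grid stand-in for the potential: smooth macroscopic trend plus
# pore-scale wiggles (illustrative values).
phi = [0.01 * i + 0.05 * math.sin(2.5 * i) for i in range(100)]
macro, local = decompose(phi, cell=10)
```

The computational saving in such schemes comes from iterating the cheap coarse field and the small-amplitude local field separately instead of the full fine-grid problem; the sketch only shows the bookkeeping that makes the two parts sum back exactly to the original field.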
The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing. comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...
NASA Technical Reports Server (NTRS)
Arneson, Heather; Evans, Antony D.; Li, Jinhua; Wei, Mei Yueh
2017-01-01
Integrated Demand Management (IDM) is a near- to mid-term NASA concept that proposes to address mismatches in air traffic system demand and capacity by using strategic flow management capabilities to pre-condition demand into the more tactical Time-Based Flow Management System (TBFM). This paper describes an automated simulation capability to support IDM concept development. The capability closely mimics existing human-in-the-loop (HITL) capabilities, while automating both the human components and collaboration between operational systems, and speeding up the real-time aircraft simulations. Such a capability allows for parametric studies to be carried out that can inform the HITL simulations, identifying breaking points and parameter values at which significant changes in system behavior occur. The paper describes the initial validation of the automated simulation capability against results from previous IDM HITL experiments, quantifying the differences. The simulator is then used to explore the performance of the IDM concept under the simple scenario of a capacity constrained airport under a wide range of wind conditions.
NASA Astrophysics Data System (ADS)
Ward, T.; Fleming, J. S.; Hoffmann, S. M. A.; Kemp, P. M.
2005-11-01
Simulation is useful in the validation of functional image analysis methods, particularly given the number of analysis techniques currently available that lack thorough validation. Problems exist with current simulation methods due to long run times or unrealistic results, making it difficult to generate complete datasets. A method is presented for simulating known abnormalities within normal brain SPECT images using a measured point spread function (PSF) and incorporating a stereotactic atlas of the brain for anatomical positioning. This allows the simulation of realistic images through the use of prior information regarding disease progression. SPECT images of cerebral perfusion have been generated consisting of a control database and a group of simulated abnormal subjects that are to be used in a UK audit of analysis methods. The abnormality is defined in the stereotactic space, transformed to the individual subject space, convolved with the measured PSF, and removed from the normal subject image. The dataset was analysed using SPM99 (Wellcome Department of Imaging Neuroscience, University College London) and the MarsBaR volume of interest (VOI) analysis toolbox. The results were evaluated by comparison with the known ground truth. The analysis showed improvement when using a smoothing kernel equal to the system resolution over the slightly larger kernel used routinely. Significant correlation was found between the effective volume of a simulated abnormality and the size detected using SPM99. Improvements in VOI analysis sensitivity were found when using the region median rather than the region mean. The method and dataset provide an efficient methodology for the comparison and cross-validation of semi-quantitative analysis methods in brain SPECT, and allow the optimization of analysis parameters.
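The simulation pipeline described above (define a lesion, blur it with the system PSF, remove it from a normal image) can be sketched as follows. This is a hedged illustration only: a Gaussian kernel stands in for the measured PSF, and the lesion geometry and 30% uptake reduction are invented for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_abnormality(normal_img, lesion_mask, psf_sigma=2.0, reduction=0.3):
    """Blur a lesion mask with a PSF (Gaussian stand-in here) and
    subtract the resulting perfusion deficit from a normal image."""
    deficit = gaussian_filter(lesion_mask.astype(float), psf_sigma)
    deficit /= deficit.max()            # peak deficit normalised to 1
    return normal_img * (1.0 - reduction * deficit)

normal = np.full((64, 64), 100.0)       # uniform "normal" uptake
mask = np.zeros((64, 64))
mask[28:36, 28:36] = 1.0                # hypothetical lesion region
abnormal = simulate_abnormality(normal, mask)
```

The blurred, removed region mimics how a real hypoperfusion would appear at the scanner's resolution, which is what makes the simulated images realistic test data for analysis software.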
NASA Astrophysics Data System (ADS)
Gholami, V.; Khaleghi, M. R.; Sebghati, M.
2017-11-01
Water quality testing is a costly, time-consuming, and difficult but important stage of routine monitoring. Therefore, the use of models has become commonplace in simulating water quality. In this study, the coactive neuro-fuzzy inference system (CANFIS) was used to simulate groundwater quality. Further, a geographic information system (GIS) was used as a pre-processing and post-processing tool to demonstrate the spatial variation of groundwater quality. All important factors were quantified and a groundwater quality index (GWQI) was developed. The proposed model was trained and validated using a case study of the Mazandaran Plain in northern Iran. The factors affecting groundwater quality were the input variables for the simulation, and the GWQI was the output. Network validation was performed by comparing the estimated and actual GWQI values. In GIS, the study area was converted to raster format with a pixel size of 1 km, and by incorporating the input data layers of the Fuzzy Network-CANFIS model, geo-referenced layers of the factors affecting groundwater quality were obtained. The numeric values of each pixel, with their geographical coordinates, were then entered into the Fuzzy Network-CANFIS model, yielding a simulation of groundwater quality across the study area. Finally, the simulated GWQI values from the Fuzzy Network-CANFIS model were entered into GIS to produce a groundwater quality map (raster layer) based on the network simulation. The results confirm the high efficiency of combining neuro-fuzzy techniques with GIS. It is also worth noting that the general quality of the groundwater over most of the studied plain is fairly low.
NASA Astrophysics Data System (ADS)
Allred, C. Jeff; Churchill, David; Buckner, Gregory D.
2017-07-01
This paper presents a novel approach to monitoring rotor blade flap, lead-lag and pitch using an embedded gyroscope and symmetrically mounted MEMS accelerometers. The central hypothesis is that differential accelerometer measurements are proportional only to blade motion; fuselage acceleration and blade bending are inherently compensated for. The inverse kinematic relationships (from blade position to acceleration and angular rate) are derived and simulated to validate this hypothesis. An algorithm to solve the forward kinematic relationships (from sensor measurement to blade position) is developed using these simulation results. This algorithm is experimentally validated using a prototype device. The experimental results justify continued development of this kinematic estimation approach.
Hydrogen Reduction of Lunar Regolith Simulants for Oxygen Production
NASA Technical Reports Server (NTRS)
Hegde, U.; Balasubramaniam, R.; Gokoglu, S. A.; Rogers, K.; Reddington, M.; Oryshchyn, L.
2011-01-01
Hydrogen reduction of the lunar regolith simulants JSC-1A and LHT-2M is investigated in this paper. Experiments conducted at NASA Johnson Space Center are described and analyzed using a previously validated model developed by the authors at NASA Glenn Research Center. The effects of regolith sintering and clumping, likely in actual production operations, on the oxygen production rate are studied. Interpretations of the results, on the basis of the validated model, are provided and linked to an increase in the effective particle size and a reduction in the intra-particle species diffusion rates. Initial results on the pressure dependence of the oxygen production rate are also presented and discussed.
Modelling Black Carbon concentrations in two busy street canyons in Brussels using CANSBC
NASA Astrophysics Data System (ADS)
Brasseur, O.; Declerck, P.; Heene, B.; Vanderstraeten, P.
2015-01-01
This paper focuses on modelling Black Carbon (BC) concentrations in two busy street canyons, the Crown and Belliard Streets in Brussels. The original Operational Street Pollution Model was adapted to BC by eliminating the chemical module; the adapted model is denoted here as CANSBC. Model validations were performed using temporal BC data from the fixed measurement network in Brussels. Subsequently, BC emissions were adjusted so that simulated BC concentrations equalled the observed ones, averaged over the whole period of simulation. Direct validations were performed for the Crown Street, while BC model calculations for the Belliard Street were validated indirectly using the linear relationship between BC and NOx. For the Crown Street, simulated and observed half-hourly BC concentrations correlated well (r = 0.74) for the period from July 1st, 2011 to June 30th, 2013. In particular, CANSBC performed very well in simulating the monthly and diurnal evolutions of averaged BC concentrations, as well as the difference between weekdays and weekends. This means that the model correctly handled the meteorological conditions as well as the variation in traffic emissions. Considering dispersion, it should however be noted that BC concentrations are better simulated under stable than under unstable conditions. Even though the correlation for half-hourly NOx concentrations was slightly lower (r = 0.60) than that for BC, indirect validation of CANSBC for the Belliard Street yielded results and conclusions comparable to those described above for the Crown Street.
Based on our results, it can be stated that CANSBC is suitable to accurately simulate BC concentrations in the street canyons of Brussels, under the following conditions: (i) accurate vehicle counting data is available to correctly estimate traffic emissions, and (ii) vehicle speeds are measured in order to improve emission estimates and to take into account the impact of the turbulence generated by moving vehicles on the local dispersion of BC.
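The emission-adjustment step described above (scaling BC emissions so that the period-averaged simulated concentration matches the observed one) reduces, under an assumed linear response of simulated concentration to the emission input, to a single scale factor:

```python
import numpy as np

# Minimal sketch of the calibration step (assumed form, not the CANSBC
# code): scale emissions so the period-mean simulated BC concentration
# equals the period-mean observation, assuming linearity.
def emission_scale_factor(observed, simulated):
    """Ratio by which BC emissions are adjusted."""
    return np.mean(observed) / np.mean(simulated)

obs = np.array([4.2, 5.1, 3.8, 6.0])   # observed BC, ug/m3 (illustrative)
sim = np.array([3.0, 4.0, 2.9, 4.5])   # simulated BC before adjustment
f = emission_scale_factor(obs, sim)
assert np.isclose(np.mean(sim * f), np.mean(obs))
```

The linearity assumption is reasonable for a dispersion model in which concentrations scale directly with the emission source term.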
NASA Technical Reports Server (NTRS)
Paulk, C. H., Jr.; Astill, D. L.; Donley, S. T.
1983-01-01
The operation of the SH-2F helicopter from the decks of small ships in adverse weather was simulated using a large amplitude vertical motion simulator, a wide angle computer generated imagery visual system, and an interchangeable cab (ICAB). The simulation facility, the mathematical programs, and the validation method used to ensure simulation fidelity are described. The results show the simulator to be a useful tool in simulating the ship-landing problem. Characteristics of the ICAB system and ways in which the simulation can be improved are presented.
Li, Zhengdong; Zou, Donghua; Liu, Ningguo; Zhong, Liangwei; Shao, Yu; Wan, Lei; Huang, Ping; Chen, Yijiu
2013-06-10
The elucidation and prediction of the biomechanics of lower limb fractures could serve as a useful tool in forensic practice. Finite element (FE) analysis could potentially help in understanding the fracture mechanisms of lower limb fractures frequently caused by car-pedestrian accidents. Our aim was (1) to develop and validate a FE model of the human lower limb, (2) to assess the biomechanics of specific injuries under run-over and impact loading conditions, and (3) to reconstruct one real car-pedestrian collision case using the model created in this study. We developed a novel lower limb FE model and simulated three different loading scenarios. The geometry of the model was reconstructed using Mimics 13.0 based on computed tomography (CT) scans from an actual traffic accident. The material properties were based upon a synthesis of data found in published literature. The FE model validation and injury reconstruction were conducted using the LS-DYNA code. The FE model was validated by comparing the simulation results of three-point bending and overall lateral impact tests with published postmortem human surrogate (PMHS) results. Loading scenarios of running over the thigh with a wheel, impact on the upper leg, and impact on the lower thigh were simulated at velocities of 10 m/s, 20 m/s, and 40 m/s, respectively. We compared the injuries from one actual case with the simulated results in order to explore the possible fracture biomechanism. The peak fracture forces, maximum bending moments, and energy loss ratio exhibited no significant differences between the FE simulations and the literature data. Under simulated run-over conditions, a segmental fracture pattern was formed, and the femur fracture patterns and mechanisms were consistent with the actual injury features of the case.
Our study demonstrated that this simulation method could potentially be effective in identifying forensic cases and exploring the injury mechanisms of lower limb fractures due to inflicted lesions. This model can also help to distinguish between possible and impossible scenarios. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
A novel augmented reality simulator for skills assessment in minimal invasive surgery.
Lahanas, Vasileios; Loukas, Constantinos; Smailis, Nikolaos; Georgiou, Evangelos
2015-08-01
Over the past decade, simulation-based training has come to the foreground as an efficient method for the training and assessment of surgical skills in minimal invasive surgery. Box-trainers and virtual reality (VR) simulators have been introduced into teaching curricula and have substituted to some extent for the traditional model of training based on animals or cadavers. Augmented reality (AR) is a new technology that allows blending of VR elements and real objects within a real-world scene. In this paper, we present a novel AR simulator for assessment of basic laparoscopic skills. The components of the proposed system include a box-trainer, a camera, and a set of laparoscopic tools equipped with custom-made sensors that allow interaction with VR training elements. Three AR tasks were developed, focusing on basic skills such as perception of depth of field, hand-eye coordination, and bimanual operation. The construct validity of the system was evaluated via a comparison between two experience groups: novices with no experience in laparoscopic surgery and experienced surgeons. The observed metrics included task execution time, tool path length, and two task-specific errors. The study also included a feedback questionnaire requiring participants to evaluate the face validity of the system. Between-group comparison demonstrated highly significant differences (p < 0.01) in all performance metrics and tasks, denoting the simulator's construct validity. Qualitative analysis of the instruments' trajectories highlighted differences between novices and experts regarding smoothness and economy of motion. Subjects' ratings on the feedback questionnaire supported the face validity of the training system. The results highlight the potential of the proposed simulator to discriminate between groups with different expertise, providing a proof of concept for the potential use of AR as a core technology for laparoscopic simulation training.
NASA Technical Reports Server (NTRS)
Ling, Lisa
2014-01-01
For the purpose of performing safety analysis and risk assessment for a potential off-nominal atmospheric reentry resulting in vehicle breakup, a synthesis of trajectory propagation coupled with thermal analysis and the evaluation of node failure is required to predict the sequence of events, the timeline, and the progressive demise of spacecraft components. To provide this capability, the Simulation for Prediction of Entry Article Demise (SPEAD) analysis tool was developed. The software and methodology have been validated against actual flights, telemetry data, and validated software, and safety/risk analyses were performed for various programs using SPEAD. This report discusses the capabilities, modeling, validation, and application of the SPEAD analysis tool.
NASA Technical Reports Server (NTRS)
Gravitz, Robert M.; Hale, Joseph
2006-01-01
NASA's Exploration Systems Mission Directorate (ESMD) is implementing a management approach for modeling and simulation (M&S) that will provide decision-makers with information on a model's fidelity, credibility, and quality. This information will allow the decision-maker to understand the risks involved in using a model's results in the decision-making process. This presentation discusses NASA's approach for verification and validation (V&V) of its models and simulations supporting space exploration, describing NASA's V&V process and the associated M&S V&V activities required to support decision making. The M&S V&V Plan and V&V Report templates for ESMD are also illustrated.
NASA Astrophysics Data System (ADS)
Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas
2017-04-01
The development and optimization of image processing algorithms requires the availability of datasets depicting every step from the earth's surface to the sensor's detector. The lack of ground truth data makes it necessary to develop algorithms on simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator, and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES), or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces have been simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracy in temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
Properties of Syntactic Foam for Simulation of Mechanical Insults.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hubbard, Neal Benson; Haulenbeek, Kimberly K.; Spletzer, Matthew A.
Syntactic foam encapsulation protects sensitive components. The energy mitigated by the foam is calculated with numerical simulations. The properties of a syntactic foam consisting of a mixture of an epoxy-rubber adduct and glass microballoons are obtained from published literature and test results. The conditions and outcomes of the tests are discussed. The method for converting published properties and test results to input for finite element models is described. Simulations of the test conditions are performed to validate the inputs.
Turbine-99 unsteady simulations - Validation
NASA Astrophysics Data System (ADS)
Cervantes, M. J.; Andersson, U.; Lövgren, H. M.
2010-08-01
The Turbine-99 test case, a Kaplan draft tube model, aimed to determine the state of the art within draft tube simulation. Three workshops were organized on the matter in 1999, 2001, and 2005, where the geometry and experimental data were provided as boundary conditions to the participants. Since the last workshop, computational power and flow modelling have developed, and the available data have been completed with unsteady pressure measurements and phase-resolved velocity measurements in the cone. This new set of data, together with the corresponding phase-resolved velocity boundary conditions, offers new possibilities to validate unsteady numerical simulations of the Kaplan draft tube. The present work presents simulations of the Turbine-99 test case with time-dependent, angularly resolved inlet velocity boundary conditions. Different grids and time steps are investigated. The results are compared to experimental time-dependent pressure and velocity measurements.
NASA Astrophysics Data System (ADS)
Paik, Kwang-Jun; Park, Hyung-Gil; Seo, Jongsoo
2013-12-01
Simulations of cavitation flow and hull pressure fluctuation for a marine propeller operating behind a hull, using the unsteady Reynolds-Averaged Navier-Stokes equations (RANS), are presented. A full hull body submerged under the free surface is modeled in the computational domain to simulate directly the wake field of the ship at the propeller plane. Simulations are performed in design and ballast draught conditions to study the effect of cavitation number, and two propellers with slightly different geometry are simulated to validate the detectability of the numerical simulation. All simulations are performed using the commercial CFD software FLUENT. Cavitation patterns from the simulations show good agreement with the experimental results obtained in the Samsung CAvitation Tunnel (SCAT). The simulation results for the hull pressure fluctuation induced by the propeller are also compared with the experimental results, showing good agreement in tendency and amplitude, especially for the first blade frequency.
Parasitic Parameters Extraction for InP DHBT Based on EM Method and Validation up to H-Band
NASA Astrophysics Data System (ADS)
Li, Oupeng; Zhang, Yong; Wang, Lei; Xu, Ruimin; Cheng, Wei; Wang, Yuan; Lu, Haiyan
2017-05-01
This paper presents a small-signal model for an InGaAs/InP double heterojunction bipolar transistor (DHBT). Parasitic parameters of the access vias and electrode fingers are extracted by 3-D electromagnetic (EM) simulation. By analyzing the equivalent circuits of seven special structures and using the EM simulation results, the parasitic parameters are extracted systematically. Compared with a multi-port s-parameter EM model, the equivalent circuit model has clear physical meaning and avoids complex internal port settings. The model is validated on a 0.5 × 7 μm2 InP DHBT up to 325 GHz and provides a good fit between measured and simulated multi-bias s-parameters across the full band. Finally, an H-band amplifier is designed and fabricated for further verification. The measured amplifier performance agrees closely with the model prediction, indicating that the model has good accuracy in the submillimeter-wave band.
Assessment of MARMOT Grain Growth Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fromm, B.; Zhang, Y.; Schwen, D.
2015-12-01
This report assesses the MARMOT grain growth model by comparing modeling predictions with experimental results from thermal annealing. The purpose here is threefold: (1) to demonstrate the validation approach of using thermal annealing experiments with non-destructive characterization, (2) to test the reconstruction capability and computational efficiency in MOOSE, and (3) to validate the grain growth model and the associated parameters that are implemented in MARMOT for UO2. To ensure a rigorous comparison, the 2D and 3D initial experimental microstructures of UO2 samples were characterized using non-destructive synchrotron X-ray measurements. The same samples were then annealed at 2273 K for grain growth, and their initial microstructures were used as initial conditions for simulated annealing at the same temperature using MARMOT. After annealing, the final experimental microstructures were characterized again for comparison with the results from simulations. So far, comparison between modeling and experiments has been done for 2D microstructures, and 3D comparison is underway. The preliminary results demonstrate the usefulness of the non-destructive characterization method for MARMOT grain growth model validation. A detailed analysis of the 3D microstructures is in progress to fully validate the current model in MARMOT.
Unver, Vesile; Basak, Tulay; Watts, Penni; Gaioso, Vanessa; Moss, Jacqueline; Tastan, Sevinc; Iyigun, Emine; Tosun, Nuran
2017-02-01
The purpose of this study was to adapt the "Student Satisfaction and Self-Confidence in Learning Scale" (SCLS), "Simulation Design Scale" (SDS), and "Educational Practices Questionnaire" (EPQ) developed by Jeffries and Rizzolo into Turkish and establish the reliability and the validity of these translated scales. A sample of 87 nursing students participated in this study. These scales were cross-culturally adapted through a process including translation, comparison with original version, back translation, and pretesting. Construct validity was evaluated by factor analysis, and criterion validity was evaluated using the Perceived Learning Scale, Patient Intervention Self-confidence/Competency Scale, and Educational Belief Scale. Cronbach's alpha values were found as 0.77-0.85 for SCLS, 0.73-0.86 for SDS, and 0.61-0.86 for EPQ. The results of this study show that the Turkish versions of all scales are validated and reliable measurement tools.
Simulation models in population breast cancer screening: A systematic review.
Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H
2015-08-01
The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and to provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed that incorporated model type; input parameters; modeling approach, transparency of input data sources/assumptions, sensitivity analyses, and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared to estimates from meta-analyses of randomized controlled trials (RCTs) and acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except in one model), with internal and cross-validation of the resulting models, but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR due to screening (11-24%) compared with the 10% MR (95% CI: -2 to 21%) from optimal RCTs. Only recently have potential harms due to regular breast cancer screening been reported. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making, and the critical analysis revealed a high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models that use systematic evidence for input data to allow a more critical evaluation of breast cancer screening. Copyright © 2015 Elsevier Ltd. All rights reserved.
Measurement of Cyclic Flows in Trachea Using PIV and Numerical simulation
NASA Astrophysics Data System (ADS)
Bělka, Miloslav; Elcner, Jakub; Jedelský, Jan; Boiron, Olivier; Knapp, Yannick; Bailly, Lucie
2015-05-01
Inhalation of pharmaceutical aerosols is a convenient way to treat lung and even systemic diseases. For effective treatment it is very important to understand the air flow characteristics within the respiratory airways and to determine deposition hot spots. In this paper the air flow in the trachea was investigated by numerical simulations. To validate these results, we carried out particle image velocimetry experiments and compared the resulting velocity fields. A simplified geometry of the respiratory airways from the oral cavity to the 4th generation of branching was employed. Air flow characteristics were analysed for a sinusoidal breathing pattern under light-activity conditions (period 4 s and tidal volume 1 l). The observed flow fields indicated that the flow in the trachea is turbulent during sinusoidal breathing except during the flow-turnaround phases. The flow was skewed toward the front side of the trachea during inspiration and had a twin-peak profile during expiration because of mixing from the daughter branches. The methods were compared and good agreement was found. This validation of the CFD simulation supports its further use in respiratory airflow studies.
NASA Astrophysics Data System (ADS)
Haberlandt, U.; Gerten, D.; Schaphoff, S.; Lucht, W.
Dynamic global vegetation models are developed with the main purpose of describing the spatio-temporal dynamics of vegetation at the global scale. Increasing concern about climate change impacts has put the focus of recent applications on the simulation of the global carbon cycle. Water is a prime driver of biogeochemical and biophysical processes, thus an appropriate representation of the water cycle is crucial for their proper simulation. However, these models usually lack a thorough validation of the water balance they produce. Here we present a hydrological validation of the current version of the LPJ (Lund-Potsdam-Jena) model, a dynamic global vegetation model operating at daily time steps. Long-term simulated runoff and evapotranspiration are compared to literature values, results from three global hydrological models, and discharge observations from various macroscale river basins. It was found that the seasonal and spatial patterns of the LPJ-simulated average values correspond well both with the measurements and with the results from the stand-alone hydrological models. However, a general underestimation of runoff occurs, which may be attributable to the low input dynamics of precipitation (equal distribution within a month), to the simulated vegetation pattern (potential vegetation without anthropogenic influence), and to some generalizations of the hydrological components in LPJ. Future research will focus on a better representation of the temporal variability of climate forcing, improved description of hydrological processes, and on the consideration of anthropogenic land use.
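The abstract does not state which goodness-of-fit measures were used; as an illustration only, two common hydrological validation metrics, relative bias (negative values indicate the kind of runoff underestimation reported above) and Nash-Sutcliffe efficiency, could be computed as:

```python
import numpy as np

# Illustrative validation metrics for simulated vs observed discharge
# (my choice of metrics; the study may have used others).
def relative_bias(sim, obs):
    """Fractional bias of the simulated mean; negative = underestimation."""
    return (np.mean(sim) - np.mean(obs)) / np.mean(obs)

def nash_sutcliffe(sim, obs):
    """NSE: 1 is a perfect fit; 0 means no better than the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

obs = np.array([10.0, 14.0, 9.0, 12.0])   # observed discharge (illustrative)
sim = np.array([8.0, 12.0, 8.5, 10.0])    # simulated, underestimating
bias = relative_bias(sim, obs)             # negative here
```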
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ansari, A.; Mohaghegh, S.; Shahnam, M.
To ensure the usefulness of simulation technologies in practice, their credibility needs to be established with Uncertainty Quantification (UQ) methods. In this project, a smart proxy is introduced to significantly reduce the computational cost of conducting the large number of multiphase CFD simulations typically required for non-intrusive UQ analysis. Smart proxies for CFD models are developed using the pattern recognition capabilities of Artificial Intelligence (AI) and Data Mining (DM) technologies. Several CFD simulation runs with different inlet air velocities for a rectangular fluidized bed are used to create a smart CFD proxy that is capable of replicating the CFD results for the entire geometry and inlet velocity range. The smart CFD proxy is validated with blind CFD runs (CFD runs that played no role during the development of the smart CFD proxy). The developed and validated smart CFD proxy generates its results in seconds with reasonable error (less than 10%). Upon completion of this project, UQ studies that rely on hundreds or thousands of smart CFD proxy runs can be accomplished in minutes. The following figure demonstrates a validation example (blind CFD run) showing the results from the MFiX simulation and the smart CFD proxy for pressure distribution across a fluidized bed at a given time step (the layer number corresponds to the vertical location in the bed).
Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy
Testa, M.; Schümann, J.; Lu, H.-M.; Shin, J.; Faddegon, B.; Perl, J.; Paganetti, H.
2013-01-01
Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program in our institution, as well as experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness, symmetry, and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse-square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS's capability to simulate time-dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field.
Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the nuclear interaction models used in the simulations. Results: SOBPs’ range and modulation width were reproduced, on average, with an accuracy of +1, −2 and ±3 mm, respectively. OF simulations reproduced measured data within ±3%. Simulated 2D dose-profiles show field flatness and average field radius within ±3% of measured profiles. The field symmetry resulted, on average in ±3% agreement with commissioned profiles. TOPAS accuracy in reproducing measured dose profiles downstream the half beam shifter is better than 2%. Dose rate function simulation reproduced the measurements within ∼2% showing that the four-dimensional modeling of the passively modulation system was implement correctly and millimeter accuracy can be achieved in reproducing measured data. For MLFCs simulations, 2% agreement was found between TOPAS and both sets of experimental measurements. The overall results show that TOPAS simulations are within the clinical accepted tolerances for all QA measurements performed at our institution. Conclusions: Our Monte Carlo simulations reproduced accurately the experimental data acquired through all the measurements performed in this study. Thus, TOPAS can reliably be applied to quality assurance for proton therapy and also as an input for commissioning of commercial treatment planning systems. This work also provides the basis for routine clinical dose calculations in patients for all passive scattering proton therapy centers using TOPAS. PMID:24320505
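The range comparisons above depend on extracting metrics such as the distal 90% range (R90) from measured and simulated depth-dose curves. A minimal sketch of that extraction, assuming linear interpolation and hypothetical sample data (this is not the TOPAS or QA tooling itself):

```python
def distal_r90(depths, doses):
    """Depth at which dose falls to 90% of the SOBP maximum on the distal
    edge, located by linear interpolation (a common proton range metric)."""
    d90 = 0.9 * max(doses)
    # Walk from the deep end toward the surface to find the distal crossing.
    for i in range(len(depths) - 1, 0, -1):
        lo, hi = doses[i], doses[i - 1]
        if lo < d90 <= hi:
            frac = (hi - d90) / (hi - lo)
            return depths[i - 1] + frac * (depths[i] - depths[i - 1])
    return None

# Hypothetical sampled curve: flat SOBP plateau, then the distal falloff.
depths = [0, 1, 2, 3, 4, 5]             # cm
doses = [0.9, 1.0, 1.0, 1.0, 0.5, 0.1]  # relative dose
print("R90 =", distal_r90(depths, doses), "cm")
```

Applying the same extraction to the measured and the simulated curve and differencing the two R90 values yields the kind of millimeter-level range accuracy quoted in the abstract.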
Summary: Experimental validation of real-time fault-tolerant systems
NASA Technical Reports Server (NTRS)
Iyer, R. K.; Choi, G. S.
1992-01-01
Testing and validation of real-time systems is difficult because neither the error generation process nor the fault propagation problem is easy to comprehend. There is no better substitute for results based on actual measurement and experimentation; such results are essential for developing a rational basis for the evaluation and validation of real-time systems. With physical experimentation, however, controllability and observability are limited to the external instrumentation that can be hooked up to the system under test, which is a difficult, if not impossible, task for a complex system. Moreover, physical hardware must exist before such measurement experiments can be set up. A simulation approach, on the other hand, allows flexibility that is unequaled by any other existing method for system evaluation. A simulation methodology for system evaluation was successfully developed and implemented, and the environment was demonstrated using existing real-time avionic systems. The research was oriented toward evaluating the impact of permanent and transient faults in aircraft control computers. Results were obtained for the Bendix BDX 930 system and the Hamilton Standard EEC131 jet engine controller. The studies showed that simulated fault injection is valuable at the design stage for evaluating the susceptibility of computing systems to different types of failures.
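Simulated fault injection of the kind described can be illustrated with a toy model: flip one bit of a state register mid-run and compare the result against a fault-free "golden" run. The control law, word width, and step counts below are invented for illustration and bear no relation to the BDX 930 or EEC131:

```python
import random

def inject_bit_flip(word, width=32):
    """Flip one randomly chosen bit of a register word, emulating a
    transient (single-event) fault."""
    return word ^ (1 << random.randrange(width))

def control_law(state):
    """Toy stand-in for one iteration of a control computation."""
    return (state * 3 + 7) & 0xFFFFFFFF

def run(steps, fault_at=None, seed=0):
    """Run the toy controller, optionally injecting one fault at step
    `fault_at`, and return the final state."""
    random.seed(seed)
    state = 0x1234
    for t in range(steps):
        if t == fault_at:
            state = inject_bit_flip(state)  # transient fault in the state register
        state = control_law(state)
    return state

golden = run(100)               # fault-free "golden" reference run
faulty = run(100, fault_at=50)  # identical run with one injected bit flip
print("fault propagated to the final state:", golden != faulty)
```

The appeal of the approach is exactly what the abstract notes: the fault location, time, and type are fully controllable, and every intermediate state is observable, which no external instrumentation can match.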
Verification and Validation of EnergyPlus Phase Change Material Model for Opaque Wall Assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.
2012-08-01
Phase change materials (PCMs) represent a technology that may reduce peak loads and HVAC energy consumption in buildings. A few building energy simulation programs have the capability to simulate PCMs, but their accuracy has not been completely tested. This study shows the procedure used to verify and validate the PCM model in EnergyPlus using an approach similar to that dictated by ASHRAE Standard 140, which consists of analytical verification, comparative testing, and empirical validation. This process was valuable, as two bugs were identified and fixed in the PCM model, and version 7.1 of EnergyPlus will have a validated PCM model. Preliminary results using whole-building energy analysis show that careful analysis should be done when designing PCMs in homes, as their thermal performance depends on several variables such as PCM properties and location in the building envelope.
NASA Astrophysics Data System (ADS)
De Lucia, Marco; Kempka, Thomas; Afanasyev, Andrey; Melnik, Oleg; Kühn, Michael
2016-04-01
Coupled reactive transport simulations, especially in heterogeneous settings considering multiphase flow, are extremely time consuming and suffer from significant numerical issues compared to purely hydrodynamic simulations. This represents a major hurdle in the assessment of geological subsurface utilization, since it constrains the practical application of reactive transport modelling to coarse spatial discretizations or oversimplified geological settings. In order to overcome such limitations, De Lucia et al. [1] developed and validated a one-way coupling approach between geochemistry and hydrodynamics, which is particularly well suited for CO2 storage simulations while being of general validity. In the present study, the models used for the validation of the one-way coupling approach introduced by De Lucia et al. [1], originally performed with the TOUGHREACT simulator, are transferred to and benchmarked against the multiphase reservoir simulator MUFITS [2]. The geological model is loosely inspired by an existing CO2 storage site. Its grid comprises 2,950 elements enclosed in a single layer, but reflecting a realistic three-dimensional anticline geometry. For the purpose of this comparison, homogeneous and heterogeneous scenarios in terms of porosity and permeability were investigated. In both cases, the results of the MUFITS simulator are in excellent agreement with those produced with the fully coupled TOUGHREACT simulator, while benefiting from significantly higher computational performance. This study demonstrates how a computationally efficient simulator such as MUFITS can be successfully included in a coupled process simulation framework, and it also suggests improvements and specific strategies for the coupling of chemical processes with hydrodynamics and heat transport, aimed at tackling geoscientific problems beyond the storage of CO2. References [1] De Lucia, M., Kempka, T., and Kühn, M.
A coupling alternative to reactive transport simulations for long-term prediction of chemical reactions in heterogeneous CO2 storage systems, Geosci. Model Dev., 8, 279-294, 2015, doi:10.5194/gmd-8-279-2015 [2] Afanasyev, A.A. Application of the reservoir simulator MUFITS for 3D modeling of CO2 storage in geological formations, Energy Procedia, 40, 365-374, 2013, doi:10.1016/j.egypro.2013.08.042
Modelling and simulation of a heat exchanger
NASA Technical Reports Server (NTRS)
Xia, Lei; Deabreu-Garcia, J. Alex; Hartley, Tom T.
1991-01-01
Two models for two different control systems are developed for a parallel heat exchanger. First, spatially lumping a heat exchanger model produces a good approximate model of high system order. Model reduction techniques are then applied to obtain low-order models that are suitable for dynamic analysis and control design. The simulation method is discussed to ensure valid simulation results.
Stochastic simulation of nucleation in binary alloys
NASA Astrophysics Data System (ADS)
L’vov, P. E.; Svetukhin, V. V.
2018-06-01
In this study, we simulate nucleation in binary alloys with respect to thermal fluctuations of the alloy composition. The simulation is based on the Cahn–Hilliard–Cook equation. We have considered the influence of some fluctuation parameters (wave vector cutoff and noise amplitude) on the kinetics of nucleation and growth of minority phase precipitates. The obtained results are validated by the example of iron–chromium alloys.
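As a rough illustration of the approach (not the authors' code, and with generic dimensionless parameters rather than calibrated iron-chromium values), a 1-D explicit Cahn–Hilliard–Cook step with a conserved thermal-noise term can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (dimensionless; NOT calibrated to Fe-Cr alloys)
N, dx, dt = 128, 1.0, 0.01
kappa, M, noise_amp = 1.0, 1.0, 0.05
c0 = 0.3                                # mean composition of the minority species
c = c0 + 0.01 * rng.standard_normal(N)  # small initial composition fluctuations

def lap(f):
    """Periodic 1-D Laplacian, second-order central differences."""
    return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2

for _ in range(2000):
    mu = c**3 - c - kappa * lap(c)  # chemical potential of a double-well free energy
    xi = rng.standard_normal(N)     # Cook thermal-fluctuation field
    # Conserved (Langevin) dynamics: both update terms are Laplacians, so
    # the mean composition is preserved exactly on the periodic grid.
    c += dt * M * lap(mu) + np.sqrt(dt) * noise_amp * lap(xi)

print("mean composition drift:", float(abs(c.mean() - c0)))
```

The noise amplitude and an effective wave-vector cutoff (here set implicitly by the grid spacing) are exactly the fluctuation parameters whose influence on nucleation kinetics the study examines.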
Simulation of a polarized laser beam reflected at the sea surface: modeling and validation
NASA Astrophysics Data System (ADS)
Schwenger, Frédéric
2015-05-01
A 3-D simulation of the polarization-dependent reflection of a Gaussian-shaped laser beam on the dynamic sea surface is presented. The simulation considers polarized or unpolarized laser sources and calculates the polarization states upon reflection at the sea surface. It is suitable for the radiance calculation of the scene in different spectral wavebands (e.g., near-infrared, SWIR), not including camera degradations. The simulation also considers a bistatic configuration of laser source and receiver as well as different atmospheric conditions. In the SWIR, the detected total power of reflected laser light is compared with data collected in a field trial. Our computer simulation combines the 3-D simulation of a maritime scene (open sea/clear sky) with the simulation of polarized or unpolarized laser light reflected at the sea surface. The basic sea surface geometry is modeled by a composition of smooth wind-driven gravity waves. To predict the input of a camera equipped with a linear polarizer, the polarized sea surface radiance must be calculated for the specific waveband. The s- and p-polarization states are calculated for the emitted sea surface radiance and the specularly reflected sky radiance to determine the total polarized sea surface radiance of each component. The states of polarization and the radiance of laser light specularly reflected at the wind-roughened sea surface are calculated by considering the s- and p-components of the electric field of laser light with respect to the specular plane of incidence. This is done by using the formalism of their coherence matrices according to E. Wolf [1]. Additionally, an analytical statistical sea surface BRDF (bidirectional reflectance distribution function) is considered for the reflection of laser light radiances. Validation of the simulation results is required to ensure model credibility and applicability to maritime laser applications.
For validation purposes, field measurement data (images and meteorological data) were analyzed. An infrared laser, with or without a mounted polarizer, produced laser beam reflections at the water surface, and images were recorded by a camera equipped with a polarizer in horizontal or vertical alignment. The validation is done by numerical comparison of the measured total laser power extracted from the recorded images with the corresponding simulation results. The results of the comparison are presented for different incident (zenith/azimuth) angles of the laser beam and different alignments of the laser polarizer (vertical/horizontal/without) and the camera polarizer (vertical/horizontal).
NASA Astrophysics Data System (ADS)
Shao, Hongbing
Software testing of scientific software systems often suffers from the test oracle problem, i.e., the lack of test oracles. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that can be used to simulate light scattering by scatterers of various types, and its testing suffers from the test oracle problem. In this thesis work, I established a testing framework for scientific software systems and evaluated this framework using ADDA as a case study. To test ADDA, I first used the CMMIE code as a pseudo-oracle to test ADDA in simulating light scattering by a homogeneous sphere scatterer. Comparable results were obtained between ADDA and the CMMIE code, validating ADDA for use with homogeneous sphere scatterers. I then used an experimental result for light scattering by a homogeneous sphere; ADDA produced a light scattering simulation comparable to the experimentally measured result, further validating the use of ADDA for simulating light scattering by sphere scatterers. Next, I used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and homogeneity or non-homogeneity; ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework, comprising pseudo-oracles, experimental results, and metamorphic testing techniques, to test scientific software systems that suffer from the test oracle problem. Each of these techniques is necessary and contributes to the testing of the software under test.
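The metamorphic idea, checking a relation between the outputs of related runs instead of comparing any single output to a known answer, can be sketched generically. The scattering function below is a hypothetical stand-in with Rayleigh-like scaling (running ADDA itself is out of scope here):

```python
def scattered_power(radius, wavelength):
    """Hypothetical stand-in for a scattering code: Rayleigh-like scaling,
    P proportional to r^6 / lambda^4 (only the scaling matters here)."""
    return radius**6 / wavelength**4

def metamorphic_scale_test(f, r, lam, s=2.0, tol=1e-9):
    """Metamorphic relation: scaling all lengths by s must scale this
    output by s^2; no absolute oracle value is needed."""
    base = f(r, lam)
    follow_up = f(s * r, s * lam)
    return abs(follow_up - s**2 * base) <= tol * max(1.0, abs(base))

print(metamorphic_scale_test(scattered_power, 0.1, 0.5))
```

A real metamorphic suite for a scattering code would use physically grounded relations (e.g., rotation invariance of total cross-sections), but the structure is the same: a source test case, a transformed follow-up case, and a check of the expected relation between the two outputs.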
Impact of length of calibration period on the apex model output simulation performance
USDA-ARS?s Scientific Manuscript database
Datasets from long-term monitoring sites that can be used for calibration and validation of hydrologic and water quality models are rare due to resource constraints. As a result, hydrologic and water quality models are calibrated and, when possible, validated using short-term measured data. A previo...
Evaluation of impact of length of calibration time period on the APEX model streamflow simulation
USDA-ARS?s Scientific Manuscript database
Due to resource constraints, continuous long-term measured data for model calibration and validation (C/V) are rare. As a result, most hydrologic and water quality models are calibrated and, if possible, validated using limited available measured data. However, little research has been carried out t...
USDA-ARS?s Scientific Manuscript database
Availability of continuous long-term measured data for model calibration and validation is limited due to time and resources constraints. As a result, hydrologic and water quality models are calibrated and, if possible, validated when measured data is available. Past work reported on the impact of t...
Mahony, Mary C; Patterson, Patricia; Hayward, Brooke; North, Robert; Green, Dawne
2015-05-01
To demonstrate, using human factors engineering (HFE), that a redesigned, pre-filled, ready-to-use, pre-assembled follitropin alfa pen can be used to administer prescribed follitropin alfa doses safely and accurately. A failure modes and effects analysis identified hazards and harms potentially caused by use errors; risk-control measures were implemented to ensure acceptable device-use risk management. Participants were women with infertility, their significant others, and fertility nurse (FN) professionals. Preliminary testing included 'Instructions for Use' (IFU) and pre-validation studies. Validation studies used simulated injections in a representative use environment; participants received prior training on pen use. User performance in preliminary testing led to IFU revisions and a change to the outer needle cap design to mitigate needle-stick potential. In the first validation study (49 users, 343 simulated injections), one observed critical use error in the FN group resulted in a device design modification and another in an IFU change. A second validation study tested the mitigation strategies; the previously reported use errors were not repeated. Through an iterative process involving a series of studies, modifications were made to the pen design and IFU. Simulated-use testing demonstrated that the redesigned pen can be used to administer follitropin alfa effectively and safely.
Simulation of SEU Cross-sections using MRED under Conditions of Limited Device Information
NASA Technical Reports Server (NTRS)
Lauenstein, J. M.; Reed, R. A.; Weller, R. A.; Mendenhall, M. H.; Warren, K. M.; Pellish, J. A.; Schrimpf, R. D.; Sierawski, B. D.; Massengill, L. W.; Dodd, P. E.;
2007-01-01
This viewgraph presentation reviews the simulation of single event upset (SEU) cross sections using the Monte Carlo Radiative Energy Deposition (MRED) tool, using "best guess" assumptions about the process and geometry together with direct-ionization, low-energy beam test results. This work also simulates SEU cross sections including angular and high-energy responses, and compares the simulated results with beam test data to validate the model. Using MRED, we produced a reasonably accurate upset response model of a low-critical-charge SRAM without detailed information about the circuit, device geometry, or fabrication process.
Virtual reality simulation training in Otolaryngology.
Arora, Asit; Lau, Loretta Y M; Awad, Zaid; Darzi, Ara; Singh, Arvind; Tolley, Neil
2014-01-01
To conduct a systematic review of the validity data for the virtual reality surgical simulator platforms available in Otolaryngology. Ovid and Embase databases searched July 13, 2013. Four hundred and nine abstracts were independently reviewed by 2 authors. Thirty-six articles which fulfilled the search criteria were retrieved and viewed in full text. These articles were assessed for quantitative data on at least one aspect of face, content, construct or predictive validity. Papers were stratified by simulator, sub-specialty and further classified by the validation method used. There were 21 articles reporting applications for temporal bone surgery (n = 12), endoscopic sinus surgery (n = 6) and myringotomy (n = 3). Four different simulator platforms were validated for temporal bone surgery and two for each of the other surgical applications. Face/content validation represented the most frequent study type (9/21). Construct validation studies performed on temporal bone and endoscopic sinus surgery simulators showed that performance measures reliably discriminated between different experience levels. Simulation training improved cadaver temporal bone dissection skills and operating room performance in sinus surgery. Several simulator platforms particularly in temporal bone surgery and endoscopic sinus surgery are worthy of incorporation into training programmes. Standardised metrics are necessary to guide curriculum development in Otolaryngology. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Steady and Unsteady Nozzle Simulations Using the Conservation Element and Solution Element Method
NASA Technical Reports Server (NTRS)
Friedlander, David Joshua; Wang, Xiao-Yen J.
2014-01-01
This paper presents results from computational fluid dynamic (CFD) simulations of a three-stream plug nozzle. Time-accurate, Euler, quasi-1D and 2D-axisymmetric simulations were performed as part of an effort to provide a CFD-based approach to modeling nozzle dynamics. The CFD code used for the simulations is based on the space-time Conservation Element and Solution Element (CESE) method. Steady-state results were validated using the Wind-US code and a code utilizing the MacCormack method, while the unsteady results were partially validated via an aeroacoustic benchmark problem. The CESE steady-state flow field solutions showed excellent agreement with solutions derived from the other methods and codes, and preliminary unsteady results for the three-stream plug nozzle are also shown. Additionally, a study was performed to explore the sensitivity of gross thrust computations to the control surface definition. The results showed that most of the sensitivity in computing the gross thrust is attributed to the control surface stencil resolution and choice of stencil end points, not to the control surface definition itself. Finally, comparisons between the quasi-1D and 2D-axisymmetric solutions were performed in order to gain insight into whether a quasi-1D solution can capture the steady and unsteady nozzle phenomena without the cost of a 2D-axisymmetric simulation. Initial results show that while the quasi-1D solutions are similar to the 2D-axisymmetric solutions, the inability of the quasi-1D simulations to predict two-dimensional phenomena limits their accuracy.
Estimating Flow-Through Balance Momentum Tares with CFD
NASA Technical Reports Server (NTRS)
Melton, John E.; James, Kevin D.; Long, Kurtis R.; Flamm, Jeffrey D.
2016-01-01
This paper describes the process used for estimating flow-through balance momentum tares. The interaction of jet engine exhausts on the Boeing ERA Hybrid Wing Body (HWB) was simulated in the NFAC 40x80 wind tunnel at NASA Ames using a pair of turbine powered simulators (TPS). High-pressure air was passed through a flow-through balance and manifold before being delivered to the TPS units. The force and moment tares that result from the internal shear and pressure distributions were estimated using CFD. Validation of the CFD simulations for these complex internal flows is a challenge, given the limited experimental data available due to the complications of the internal geometry. Two CFD validation efforts are documented, and comparisons with experimental data from the final model installation are provided.
Validation Testing of a Peridynamic Impact Damage Model Using NASA's Micro-Particle Gun
NASA Technical Reports Server (NTRS)
Baber, Forrest E.; Zelinski, Brian J.; Guven, Ibrahim; Gray, Perry
2017-01-01
Through a collaborative effort between Virginia Commonwealth University and Raytheon, a peridynamic model for sand impact damage has been developed [1-3]. Model development has focused on simulating impacts of sand particles on ZnS traveling at velocities consistent with aircraft take-off and landing speeds. The model reproduces common features of impact damage including pit and radial cracks and, under some conditions, lateral cracks. This study focuses on a preliminary validation exercise in which simulation results from the peridynamic model are compared to a limited experimental data set generated by NASA's recently developed micro-particle gun (MPG). The MPG facility measures the dimensions and incoming and rebound velocities of the impact particles. It also links each particle to a specific impact site and its associated damage. In this validation exercise, parameters of the peridynamic model are adjusted to fit the experimentally observed pit diameter, average length of radial cracks, and rebound velocities for four impacts of 300 µm glass beads on ZnS. Results indicate that a reasonable fit of these impact characteristics can be obtained by suitable adjustment of the peridynamic input parameters, demonstrating that the MPG can be used effectively as a validation tool for impact modeling and that the peridynamic sand impact model described herein possesses not only a qualitative but also a quantitative ability to simulate sand impact events.
Validation of the BASALT model for simulating off-axis hydrothermal circulation in oceanic crust
NASA Astrophysics Data System (ADS)
Farahat, Navah X.; Archer, David; Abbot, Dorian S.
2017-08-01
Fluid recharge and discharge between the deep ocean and the porous upper layer of off-axis oceanic crust tends to concentrate in small volumes of rock, such as seamounts and fractures, that are unimpeded by low-permeability sediments. Basement structure, sediment burial, heat flow, and other regional characteristics of off-axis hydrothermal systems appear to produce considerable diversity of circulation behaviors. Circulation of seawater and seawater-derived fluids controls the extent of fluid-rock interaction, resulting in significant geochemical impacts. However, the primary regional characteristics that control how seawater is distributed within upper oceanic crust are still poorly understood. In this paper we present the details of the two-dimensional (2-D) BASALT (Basement Activity Simulated At Low Temperatures) numerical model of heat and fluid transport in an off-axis hydrothermal system. This model is designed to simulate a wide range of conditions in order to explore the dominant controls on circulation. We validate the BASALT model's ability to reproduce observations by configuring it to represent a thoroughly studied transect of the Juan de Fuca Ridge eastern flank. The results demonstrate that including series of narrow, ridge-parallel fractures as subgrid features produces a realistic circulation scenario at the validation site. In future projects, a full reactive transport version of the validated BASALT model will be used to explore geochemical fluxes in a variety of off-axis hydrothermal environments.
Blast Load Simulator Experiments for Computational Model Validation Report 3
2017-07-01
establish confidence in the results produced by the simulations. This report describes a set of replicate experiments in which a small, non-responding steel...designed to simulate blast waveforms for explosive yields up to 20,000 lb of TNT equivalent at a peak reflected pressure up to 80 psi and a peak...the pressure loading on a non-responding box-type structure at varying obliquities located in the flow of the BLS simulated blast environment for
Design and validation of inert homemade explosive simulants for ground penetrating radar
NASA Astrophysics Data System (ADS)
VanderGaast, Brian W.; McFee, John E.; Russell, Kevin L.; Faust, Anthony A.
2015-05-01
The Canadian Armed Forces (CAF) identified a requirement for inert simulants to act as improvised, or homemade, explosives (IEs) when training on, or evaluating, ground penetrating radar (GPR) systems commonly used in the detection of buried landmines and improvised explosive devices (IEDs). In response, Defence Research and Development Canada (DRDC) initiated a project to develop IE simulant formulations using commonly available inert materials. These simulants are intended to approximate the expected GPR response of common ammonium nitrate-based IEs, in particular ammonium nitrate/fuel oil (ANFO) and ammonium nitrate/aluminum (ANAl). The complex permittivity over the range of electromagnetic frequencies relevant to standard GPR systems was measured for bulk quantities of these three IEs, which had been fabricated at the DRDC Suffield Research Centre. Following these measurements, the published literature was examined to find benign materials with a similar complex permittivity as well as other desirable physical properties, such as low toxicity, thermal stability, and commercial availability, in order to select candidates for subsequent simulant formulation. Suitable simulant formulations were identified for ANFO, with resulting complex permittivities measured to be within acceptable limits of the target values. These simulant formulations will now undergo end-user trials with CAF operators in order to confirm their utility. Investigations into ANAl simulants continue. This progress report outlines the development program, simulant design, and current validation results.
Impact of Neutrino Opacities on Core-collapse Supernova Simulations
NASA Astrophysics Data System (ADS)
Kotake, Kei; Takiwaki, Tomoya; Fischer, Tobias; Nakamura, Ko; Martínez-Pinedo, Gabriel
2018-02-01
The accurate description of neutrino opacities is central to both the core-collapse supernova (CCSN) phenomenon and the validity of the explosion mechanism itself. In this work, we study in a systematic fashion the role of a variety of well-selected neutrino opacities in CCSN simulations where the multi-energy, three-flavor neutrino transport is solved using the isotropic diffusion source approximation (IDSA) scheme. To verify our code, we first present results from one-dimensional (1D) simulations following the core collapse, bounce, and ∼250 ms postbounce of a 15 M⊙ star using a standard set of neutrino opacities by Bruenn. A detailed comparison with published results supports the reliability of our three-flavor IDSA scheme using the standard opacity set. We then investigate in 1D simulations how individual opacity updates lead to differences with the baseline run with the standard opacity set. Through detailed comparisons with previous work, we check the validity of our implementation of each update in a step-by-step manner. Individual neutrino opacities with the largest impact on the overall evolution in 1D simulations are selected for systematic comparisons in our two-dimensional (2D) simulations. Special attention is given to the criterion of explodability in the 2D models. We discuss the implications of these results as well as its limitations and the requirements for future, more elaborate CCSN modeling.
Judd, Belinda K; Scanlan, Justin N; Alison, Jennifer A; Waters, Donna; Gordon, Christopher J
2016-08-05
Despite the recent widespread adoption of simulation in clinical education in physiotherapy, there is a lack of validated tools for assessment in this setting. The Assessment of Physiotherapy Practice (APP) is a comprehensive tool used in clinical placement settings in Australia to measure the professional competence of physiotherapy students. The aim of the study was to evaluate the validity of the APP for student assessment in simulation settings. A total of 1260 APPs were collected, 971 from students in simulation and 289 from students in clinical placements. Rasch analysis was used to examine the construct validity of the APP tool in three different simulation assessment formats: longitudinal assessment over 1 week of simulation; longitudinal assessment over 2 weeks; and a short-form (25 min) assessment of a single simulation scenario. Comparisons with APPs from 5-week clinical placements in hospital and clinic-based settings were also conducted. The APP demonstrated acceptable fit to the expectations of the Rasch model for the 1- and 2-week clinical simulations, exhibiting unidimensional properties that were able to distinguish different levels of student performance. For the short-form simulation, nine of the 20 items recorded greater than 25% of scores as 'not assessed' by clinical educators, which limited the suitability of the APP tool in this simulation format. The APP was a valid assessment tool when used in longitudinal simulation formats. A revised APP may be required for assessment in short-form simulation scenarios.
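For reference, the dichotomous form of the Rasch model underlying such an analysis is a one-parameter logistic in the difference between person ability and item difficulty (the APP analysis itself uses a polytomous extension; this sketch is illustrative only):

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability that a student of ability
    theta succeeds on an item of difficulty b (both on a logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A more able student has the higher success probability on any item,
# and an item matched to the student's ability gives p = 0.5.
assert rasch_p(1.0, 0.0) > rasch_p(-1.0, 0.0)
assert abs(rasch_p(0.7, 0.7) - 0.5) < 1e-12
```

Fit to this model is what licenses treating the summed item scores as a unidimensional measure of performance, which is why misfit in the short-form format (many 'not assessed' items) undermines the tool there.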
Numerical Simulation of the Fluid-Structure Interaction of a Surface Effect Ship Bow Seal
NASA Astrophysics Data System (ADS)
Bloxom, Andrew L.
Numerical simulations of fluid-structure interaction (FSI) problems were performed in an effort to verify and validate a commercially available FSI tool. This tool uses an iterative partitioned coupling scheme between CD-adapco's STAR-CCM+ finite volume fluid solver and Simulia's Abaqus finite element structural solver to simulate the FSI response of a system. Preliminary verification and validation (V&V) work was carried out to understand the numerical behavior of the codes individually and together as an FSI tool. Completed V&V work included code order verification of the respective fluid and structural solvers with Couette-Poiseuille flow and Euler-Bernoulli beam theory; these results confirmed the 2nd-order accuracy of the spatial discretizations used. Following that, a mixture of solution verifications and model calibrations was performed with the inclusion of the physics models implemented in the solution of the FSI problems. Solution verifications were completed for stand-alone fluid and structural models as well as for the coupled FSI solutions. These results re-confirmed the spatial order of accuracy for more complex flows and physics models, as well as the order of accuracy of the temporal discretizations. In lieu of a good material definition, model calibration was performed to reproduce the experimental results; this work used calibration for both instances of hyperelastic materials presented in the literature as validation cases, because those materials were defined only as linear elastic. Calibrated, three-dimensional models of the bow seal on the University of Michigan bow seal test platform showed the ability to reproduce the experimental results qualitatively through averaging of the forces and seal displacements. These simulations represent the only current 3D results for this case.
One significant result of this study is the ability to visualize the flow around the seal and to directly measure the seal resistances at varying cushion pressures, seal immersions, forward speeds, and different seal materials. SES design analysis could greatly benefit from the inclusion of flexible seals in simulations, and this work is a positive step in that direction. In future work, the inclusion of more complex seal geometries and contact will further enhance the capability of this tool.
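The code order verification mentioned above typically computes an observed order of accuracy from solutions on systematically refined grids and compares it to the scheme's formal order. A minimal, generic sketch of that Richardson-style estimate (not the study's own scripts):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of accuracy from three solutions on grids refined by
    a constant ratio r (standard Richardson-style estimate)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# Synthetic check: a quantity with a pure second-order error term,
# f(h) = f_exact + h^2, sampled at h = 0.2, 0.1, 0.05.
p = observed_order(1.0 + 0.04, 1.0 + 0.01, 1.0 + 0.0025)
print("observed order:", round(p, 6))
```

An observed order close to the formal order (here 2, matching the 2nd-order spatial discretizations the study confirmed) is the acceptance criterion in code order verification.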
Assessing Discriminative Performance at External Validation of Clinical Prediction Models.
Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W
2016-01-01
External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in a validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
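For readers unfamiliar with the discrimination measure at issue: the c-statistic is the probability that a randomly chosen event receives a higher predicted risk than a randomly chosen non-event. A plain-Python sketch with invented data (not from the study):

```python
def c_statistic(y, p):
    """Concordance (c) statistic: fraction of event/non-event pairs in
    which the event has the higher predicted risk (ties count half)."""
    events = [pi for yi, pi in zip(y, p) if yi == 1]
    nonevents = [pi for yi, pi in zip(y, p) if yi == 0]
    concordant = ties = 0
    for pe in events:
        for pn in nonevents:
            if pe > pn:
                concordant += 1
            elif pe == pn:
                ties += 1
    return (concordant + 0.5 * ties) / (len(events) * len(nonevents))

# at external validation, the c-statistic in the new setting is compared
# with the development c; a drop may reflect a less heterogeneous case-mix
# rather than invalid coefficients (illustrative data)
y_val = [1, 0, 1, 0, 1, 0]
p_val = [0.9, 0.2, 0.7, 0.4, 0.6, 0.3]
print(c_statistic(y_val, p_val))  # -> 1.0 (all events ranked above non-events)
```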
Haptic simulation framework for determining virtual dental occlusion.
Wu, Wen; Chen, Hui; Cen, Yuhai; Hong, Yang; Khambay, Balvinder; Heng, Pheng Ann
2017-04-01
The surgical treatment of many dentofacial deformities is often complex due to its three-dimensional nature. Determining the dental occlusion in its most stable position is essential for the success of the treatment. Computer-aided virtual planning on an individualized, patient-specific 3D model can help formulate the surgical plan and predict the surgical change. However, in current computer-aided planning systems it is not possible to determine the dental occlusion of the digital models in an intuitive way during virtual surgical planning because of the absence of haptic feedback. In this paper, a physically based haptic simulation framework is proposed, which can provide surgeons with intuitive haptic feedback to determine the dental occlusion of the digital models in their most stable position. To provide physically realistic force feedback when the dental models contact each other during the searching process, a contact model is proposed to describe the dynamic and collision properties of the dental models during the alignment. The simulated impulse/contact-based forces are integrated into the unified simulation framework. A validation study was conducted on fifteen sets of virtual dental models chosen at random and covering a wide range of the dental relationships found clinically. The dental occlusions obtained by an expert were employed as a benchmark against which to compare the virtual occlusion results. The mean translational and angular deviations of the virtual occlusion results from the benchmark were small. The experimental results show the validity of our method. The simulated forces can provide valuable insights to determine the virtual dental occlusion. The findings of this work and the validation of the proposed concept lead the way toward full virtual surgical planning on patient-specific virtual models, allowing fully customized treatment plans for the surgical correction of dentofacial deformities.
Aydin, Abdullatif; Muir, Gordon H; Graziano, Manuela E; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran
2015-06-01
To assess the face, content and construct validity, feasibility and acceptability of the GreenLight™ Simulator as a training tool for photoselective vaporisation of the prostate (PVP), and to establish learning curves and develop an evidence-based training curriculum. This prospective, observational and comparative study recruited novice (25 participants), intermediate (14) and expert-level urologists (seven) from the UK and Europe at the 28th European Association of Urological Surgeons Annual Meeting 2013. A group of novices (12 participants) performed 10 sessions of subtask training modules followed by a long operative case, whereas a second group (13) performed five sessions of a given case module. Intermediate and expert groups performed all training modules once, followed by one operative case. The outcome measures for learning curves and construct validity were time to task, coagulation time, vaporisation time, average sweep speed, average laser distance, blood loss, operative errors, and instrument cost. Face and content validity, feasibility and acceptability were addressed through a quantitative survey. Construct validity was demonstrated in two of five training modules (P = 0.038; P = 0.018) and in a considerable number of case metrics (P = 0.034). Learning curves were seen in all five training modules (P < 0.001), and significant reductions in case operative time (P < 0.001) and error (P = 0.017) were seen. An evidence-based training curriculum, to help trainees acquire transferable skills, was produced using the results. This study has shown the GreenLight Simulator to be a valid and useful training tool for PVP. It is hoped that by using the training curriculum for the GreenLight Simulator, novice trainees can acquire skills and knowledge to a predetermined level of proficiency. © 2014 The Authors. BJU International © 2014 BJU International.
The Role of Simulation in Microsurgical Training.
Evgeniou, Evgenios; Walker, Harriet; Gujral, Sameer
Simulation has been established as an integral part of microsurgical training. The aim of this study was to assess and categorize the various simulation models in relation to the complexity of the microsurgical skill being taught and analyze the assessment methods commonly employed in microsurgical simulation training. Numerous courses have been established using simulation models. These models can be categorized, according to the level of complexity of the skill being taught, into basic, intermediate, and advanced. Microsurgical simulation training should be assessed using validated assessment methods. Assessment methods vary significantly from subjective expert opinions to self-assessment questionnaires and validated global rating scales. The appropriate assessment method should carefully be chosen based on the simulation modality. Simulation models should be validated, and a model with appropriate fidelity should be chosen according to the microsurgical skill being taught. Assessment should move from traditional simple subjective evaluations of trainee performance to validated tools. Future studies should assess the transferability of skills gained during simulation training to the real-life setting. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Kawaguchi, Koji; Egi, Hiroyuki; Hattori, Minoru; Sawada, Hiroyuki; Suzuki, Takahisa; Ohdan, Hideki
2014-10-01
Virtual reality (VR) surgical simulators are becoming popular as a means of providing trainees with an opportunity to practice laparoscopic skills. The Lap-X (Epona Medical, Rotterdam, the Netherlands) is a novel VR simulator for training basic skills in laparoscopic surgery. The objective of this study was to validate the Lap-X laparoscopic virtual reality simulator by assessing its face, content and construct validity in order to determine whether the simulator is adequate for basic skills training. The face and content validity were evaluated using a structured questionnaire. To assess the construct validity, the participants, nine expert surgeons (median age: 40 (32-45) years; >100 laparoscopic procedures) and 11 novices, performed three basic laparoscopic tasks using the Lap-X. The participants reported a high level of content validity. No significant differences in questionnaire ratings were found between the expert surgeons and the novices (Ps > 0.246). The performance of the expert surgeons on the three tasks was significantly better than that of the novices in all parameters (Ps < 0.05). This study demonstrated the face, content and construct validity of the Lap-X. The Lap-X holds real potential as a home and hospital training device.
Reducing EnergyPlus Run Time For Code Compliance Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.
2014-09-12
Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
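The shortened-run-period idea is simple to sketch: simulate one representative week per quarter, scale the result to a year, and compare against the full annual simulation. A minimal illustration with made-up numbers (not results from the paper):

```python
def annualize(quarter_week_kwh, weeks_per_year=52):
    """Scale energy from one representative week per quarter
    (4 weeks total) up to an annual estimate."""
    return sum(quarter_week_kwh) * (weeks_per_year / len(quarter_week_kwh))

weekly = [180.0, 150.0, 210.0, 160.0]  # kWh, one week per quarter (illustrative)
annual_estimate = annualize(weekly)    # 4 weeks scaled to 52
annual_full = 9200.0                   # kWh from a full 52-week run (illustrative)
pct_diff = 100.0 * abs(annual_estimate - annual_full) / annual_full
print(round(annual_estimate, 1), round(pct_diff, 2))
```

With these invented inputs the shortened run lands within about 1% of the annual total, the kind of agreement the paper reports for compliance index calculations.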
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Daniel J.; Lee, Choonsik; Tien, Christopher
2013-01-15
Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code, MCNPX version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of the University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of the University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at the University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans, and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms.
Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT examinations on the Siemens SOMATOM Sensation 16 scanner.
Prototyping and validating requirements of radiation and nuclear emergency plan simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamid, AHA., E-mail: amyhamijah@nm.gov.my; Faculty of Computing, Universiti Teknologi Malaysia; Rozan, MZA.
2015-04-29
Organizational incapability results in unrealistic, impractical, inadequate and ambiguous mechanisms in radiological and nuclear emergency preparedness and response (EPR) plans, causing emergency plan disorder and severe disasters. These situations stem, in 65.6% of cases, from poor definition and unidentified roles and duties of the disaster coordinator. Such unexpected conditions bring a huge aftermath to the first responders, operators, workers, patients and the community at large. Hence, in this report, we discuss the prototyping and validation of a Malaysian radiation and nuclear emergency preparedness and response plan simulation model (EPRM). A prototyping technique was required to formalize the simulation model requirements. Prototyping as systems requirements validation was carried out to endorse the correctness of the model itself against the stakeholders' intentions in resolving that organizational incapability. We made assumptions for the proposed emergency preparedness and response model (EPRM) through the simulation software. Those assumptions provided a twofold set of expected mechanisms: planning and handling of the respective emergency plan, as well as bringing off the hazard involved. This model, called the RANEPF (Radiation and Nuclear Emergency Planning Framework) simulator, demonstrated the training emergency response prerequisites rather than the intervention principles alone. The demonstrations involved the determination of the casualties' absorbed dose range screening and the coordination of the capacity planning of the expected trauma triage. Through user-centred design and a sociotechnical approach, the RANEPF simulator was strategized and simplified, though certainly it is equally complex.
Prototyping and validating requirements of radiation and nuclear emergency plan simulator
NASA Astrophysics Data System (ADS)
Hamid, AHA.; Rozan, MZA.; Ibrahim, R.; Deris, S.; Selamat, A.
2015-04-01
Organizational incapability results in unrealistic, impractical, inadequate and ambiguous mechanisms in radiological and nuclear emergency preparedness and response (EPR) plans, causing emergency plan disorder and severe disasters. These situations stem, in 65.6% of cases, from poor definition and unidentified roles and duties of the disaster coordinator. Such unexpected conditions bring a huge aftermath to the first responders, operators, workers, patients and the community at large. Hence, in this report, we discuss the prototyping and validation of a Malaysian radiation and nuclear emergency preparedness and response plan simulation model (EPRM). A prototyping technique was required to formalize the simulation model requirements. Prototyping as systems requirements validation was carried out to endorse the correctness of the model itself against the stakeholders' intentions in resolving that organizational incapability. We made assumptions for the proposed emergency preparedness and response model (EPRM) through the simulation software. Those assumptions provided a twofold set of expected mechanisms: planning and handling of the respective emergency plan, as well as bringing off the hazard involved. This model, called the RANEPF (Radiation and Nuclear Emergency Planning Framework) simulator, demonstrated the training emergency response prerequisites rather than the intervention principles alone. The demonstrations involved the determination of the casualties' absorbed dose range screening and the coordination of the capacity planning of the expected trauma triage. Through user-centred design and a sociotechnical approach, the RANEPF simulator was strategized and simplified, though certainly it is equally complex.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Byoung Yoon; Leavy, Richard Brian; Niederhaus, John Henry J.
2013-03-01
The finite-element shock hydrodynamics code ALEGRA has recently been upgraded to include an X-FEM implementation in 2D for simulating impact, sliding, and release between materials in the Eulerian frame. For validation testing purposes, the problem of long-rod penetration in semi-infinite targets is considered in this report, at velocities of 500 to 3000 m/s. We describe testing simulations done using ALEGRA with and without the X-FEM capability, in order to verify its adequacy by showing X-FEM recovers the good results found with the standard ALEGRA formulation. The X-FEM results for depth of penetration differ from previously measured experimental data by less than 2%, and from the standard formulation results by less than 1%. They converge monotonically under mesh refinement at first order. Sensitivities to domain size and rear boundary condition are investigated and shown to be small. Aside from some simulation stability issues, X-FEM is found to produce good results for this classical impact and penetration problem.
Educational Validity of Business Gaming Simulation: A Research Methodology Framework
ERIC Educational Resources Information Center
Stainton, Andrew J.; Johnson, Johnnie E.; Borodzicz, Edward P.
2010-01-01
Many past educational validity studies of business gaming simulation, and more specifically total enterprise simulation, have been inconclusive. Studies have focused on the weaknesses of business gaming simulation; which is often regarded as an educational medium that has limitations regarding learning effectiveness. However, no attempts have been…
DOE Office of Scientific and Technical Information (OSTI.GOV)
English, Shawn Allen; Nelson, Stacy Michelle; Briggs, Timothy
Presented is a model verification and validation effort using low-velocity impact (LVI) of carbon fiber reinforced polymer laminate experiments. A flat cylindrical indenter impacts the laminate with enough energy to produce delamination, matrix cracks and fiber breaks. Included in the experimental efforts are ultrasonic scans of the damage for qualitative validation of the models. However, the primary quantitative metrics of validation are the force time history measured through the instrumented indenter and the initial and final velocities. The simulations, which are run on Sandia's Sierra finite element codes, consist of all physics and material parameters of importance as determined by a sensitivity analysis conducted on the LVI simulation. A novel orthotropic damage and failure constitutive model that is capable of predicting progressive composite damage and failure is described in detail, and material properties are measured, estimated from micromechanics, or optimized through calibration. A thorough verification and calibration to the accompanying experiments are presented. Special emphasis is given to the four-point bend experiment. For all simulations of interest, the mesh and material behavior is verified through extensive convergence studies. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution which is then compared to experimental output. The result is a quantifiable confidence in material characterization and model physics when simulating this phenomenon in structures of interest.
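The ensemble approach described in the last two sentences can be sketched generically: sample the uncertain model parameters, evaluate the model for each sample, and check whether the measurement falls within the predicted response distribution. The surrogate function and all numbers below are hypothetical stand-ins for the actual finite element runs:

```python
import random
import statistics

def peak_force(modulus, strength):
    """Surrogate for an LVI simulation's peak contact force: a hypothetical
    closed form standing in for a full finite element run."""
    return 0.8 * modulus ** 0.5 * strength

random.seed(1)
# sample material parameters from calibrated distributions (illustrative)
samples = [peak_force(random.gauss(130.0, 5.0), random.gauss(2.1, 0.1))
           for _ in range(2000)]
mean, sd = statistics.mean(samples), statistics.stdev(samples)

measured = 19.5  # experimental peak force (illustrative)
# crude agreement check: does the measurement fall within +/- 2 sd
# of the predicted response distribution?
print(abs(measured - mean) <= 2.0 * sd)
```

In practice each sample would be one forward simulation, and the comparison would use the full force time history rather than a single scalar.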
Ahmad, Zaki Uddin; Chao, Bing; Konggidinata, Mas Iwan; Lian, Qiyu; Zappi, Mark E; Gang, Daniel Dianchen
2018-04-27
Numerous research efforts in the adsorption area have relied on experimental approaches. All these approaches are based on a trial-and-error process and are extremely time consuming. Molecular simulation is a new tool that can be used to design and predict the performance of an adsorbent. This research proposed a simulation technique that can greatly reduce the time needed to design the adsorbent. In this study, a new rhombic ordered mesoporous carbon (OMC) model is proposed and constructed with various pore sizes and oxygen contents using the Materials Visualizer module to optimize the structure of OMC for resorcinol adsorption. The specific surface area, pore volume, small-angle X-ray diffraction pattern, and resorcinol adsorption capacity were calculated by the Forcite and Sorption modules in the Materials Studio package. The simulation results were validated experimentally by synthesizing OMC with different pore sizes and oxygen contents prepared via a hard template method employing an SBA-15 silica scaffold. Boric acid was used as the pore-expanding reagent to synthesize OMC with different pore sizes (from 4.6 to 11.3 nm) and varying oxygen contents (from 11.9% to 17.8%). Based on the simulation and experimental validation, the optimal pore size was found to be 6 nm for maximum adsorption of resorcinol. Copyright © 2018 Elsevier B.V. All rights reserved.
Validity of Cognitive Load Measures in Simulation-Based Training: A Systematic Review.
Naismith, Laura M; Cavalcanti, Rodrigo B
2015-11-01
Cognitive load theory (CLT) provides a rich framework to inform instructional design. Despite the applicability of CLT to simulation-based medical training, findings from multimedia learning have not been consistently replicated in this context. This lack of transferability may be related to issues in measuring cognitive load (CL) during simulation. The authors conducted a review of CLT studies across simulation training contexts to assess the validity evidence for different CL measures. PRISMA standards were followed. For 48 studies selected from a search of MEDLINE, EMBASE, PsycInfo, CINAHL, and ERIC databases, information was extracted about study aims, methods, validity evidence of measures, and findings. Studies were categorized on the basis of findings and prevalence of validity evidence collected, and statistical comparisons between measurement types and research domains were pursued. CL during simulation training has been measured in diverse populations including medical trainees, pilots, and university students. Most studies (71%; 34) used self-report measures; others included secondary task performance, physiological indices, and observer ratings. Correlations between CL and learning varied from positive to negative. Overall validity evidence for CL measures was low (mean score 1.55/5). Studies reporting greater validity evidence were more likely to report that high CL impaired learning. The authors found evidence that inconsistent correlations between CL and learning may be related to issues of validity in CL measures. Further research would benefit from rigorous documentation of validity and from triangulating measures of CL. This can better inform CLT instructional design for simulation-based medical training.
O'Clock, George D
2016-08-01
Cellular engineering involves modification and control of cell properties, and requires an understanding of the fundamentals and mechanisms of action for cellular-derived product development. One of the keys to success in cellular engineering is the quality and validity of results obtained from cell chemical signaling pathway assays. The accuracy of the assay data cannot be verified or assured if the effects of positive feedback, nonlinearities, and interrelationships between cell chemical signaling pathway elements are not understood, modeled, and simulated. Nonlinearities and positive feedback in the cell chemical signaling pathway can produce significant aberrations in assay data collection. Simulating the pathway can reveal potential instability problems that will affect assay results. A simulation using an electrical analog for the coupled differential equations representing each segment of the pathway provides an excellent tool for assay validation purposes. With this approach, voltages represent pathway enzyme concentrations, and operational amplifier feedback resistance and input resistance values determine pathway gain and rate constants. The understanding provided by pathway modeling and simulation is strategically important in order to establish experimental controls for assay protocol structure, time frames specified between assays, and assay concentration variation limits; to ensure accuracy and reproducibility of results.
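A minimal numerical sketch of the idea: a two-element cascade with positive feedback, integrated with forward Euler. The equations and constants below are hypothetical illustrations of the electrical-analog approach, not the author's model; the point they demonstrate is that once the loop gain g·f reaches 1 the pathway becomes unstable, the kind of problem such a simulation can reveal before an assay is run:

```python
def simulate_pathway(gain, feedback, steps=3000, dt=0.01):
    """Two-element signaling cascade with positive feedback, analogous to
    the electrical model: 'voltages' x, y stand for enzyme concentrations;
    gain and rate constants map to op-amp resistance ratios.
    Hypothetical form:  dx/dt = stimulus + feedback*y - x
                        dy/dt = gain*x - y
    Integrated with forward Euler; returns the final value of y."""
    x = y = 0.0
    stimulus = 1.0
    for _ in range(steps):
        dx = stimulus + feedback * y - x
        dy = gain * x - y
        x += dt * dx
        y += dt * dy
    return y

# weak feedback settles near the steady state g*s/(1 - g*f) = 4;
# strong positive feedback (g*f >= 1) makes the loop diverge
print(round(simulate_pathway(gain=2.0, feedback=0.25), 2))  # stable, near 4
print(simulate_pathway(gain=2.0, feedback=0.75) > 1e3)      # unstable, True
```

The stability boundary follows from the eigenvalues of the linearized loop, λ = −1 ± √(g·f), which cross zero exactly when g·f = 1.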
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, Paolo; Theiler, C.; Fasoli, A.
A methodology for plasma turbulence code validation is discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The present work extends the analysis carried out in a previous paper [P. Ricci et al., Phys. Plasmas 16, 055703 (2009)] where the validation observables were introduced. Here, it is discussed how to quantify the agreement between experiments and simulations with respect to each observable, how to define a metric to evaluate this agreement globally, and, finally, how to assess the quality of a validation procedure. The methodology is then applied to the simulation of the basic plasma physics experiment TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulation models.
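The spirit of such a global metric can be sketched as a weighted average of normalized simulation-experiment distances, one per observable. This is a loose illustration of the idea of combining observables into a single agreement score, not the metric defined in the paper; all numbers and weights are invented:

```python
def observable_agreement(sim, exp, sigma):
    """Normalized simulation-experiment distance for one observable,
    scaled by the combined uncertainty sigma."""
    return abs(sim - exp) / sigma

def global_metric(d, weights):
    """Composite agreement: weighted average of per-observable distances.
    Lower means better global agreement; weights express how strongly
    each observable constrains the model."""
    return sum(w * dj for w, dj in zip(weights, d)) / sum(weights)

# per-observable distances (illustrative numbers)
d = [observable_agreement(1.05, 1.0, 0.1),   # e.g. profile amplitude
     observable_agreement(0.8, 1.1, 0.2),    # e.g. fluctuation level
     observable_agreement(3.0, 2.5, 0.5)]    # e.g. radial flux
weights = [1.0, 0.7, 0.5]
print(round(global_metric(d, weights), 3))
```

A real implementation would also carry a quality factor for the validation procedure itself (how many and how directly measured the observables are), which the paper addresses separately.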
Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Yidong, E-mail: yidong.xia@inl.gov; Wang, Chuanjin; Luo, Hong
Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent of this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, some form of solution verification has been attempted to identify sensitivities in the solution methods, and suggest best practices when using the Hydra-TH code. -- Highlights: •We performed a comprehensive study to verify and validate the turbulence models in Hydra-TH. •Hydra-TH delivers 2nd-order grid convergence for the incompressible Navier–Stokes equations. •Hydra-TH can accurately simulate the laminar boundary layers. •Hydra-TH can accurately simulate the turbulent boundary layers with RANS turbulence models. •Hydra-TH delivers high-fidelity LES capability for simulating turbulent flows in confined space.
Validating a Monotonically-Integrated Large Eddy Simulation Code for Subsonic Jet Acoustics
NASA Technical Reports Server (NTRS)
Ingraham, Daniel; Bridges, James
2017-01-01
The results of subsonic jet validation cases for the Naval Research Lab's Jet Engine Noise REduction (JENRE) code are reported. Two set points from the Tanna matrix, set point 3 (Ma = 0.5, unheated) and set point 7 (Ma = 0.9, unheated), were attempted on three different meshes. After a brief discussion of the JENRE code and the meshes constructed for this work, the turbulent statistics for the axial velocity are presented and compared to experimental data, with favorable results. Preliminary simulations for set point 23 (Ma = 0.5, Tj/T∞ = 1.764) on one of the meshes are also described. Finally, the proposed configuration for the far-field noise prediction with JENRE's Ffowcs Williams-Hawkings solver is detailed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.
2012-08-01
Phase change materials (PCM) represent a potential technology to reduce peak loads and HVAC energy consumption in residential buildings. This paper summarizes NREL efforts to obtain accurate energy simulations when PCMs are modeled in residential buildings: the overall methodology to verify and validate the Conduction Finite Difference (CondFD) and PCM algorithms in EnergyPlus is presented in this study. It also shows preliminary results for three residential building enclosure technologies containing PCM: PCM-enhanced insulation, PCM-impregnated drywall and thin PCM layers. The results are compared based on predicted peak reduction and energy savings using two algorithms in EnergyPlus: the PCM and CondFD algorithms.
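To make the PCM modeling idea concrete: latent heat can be represented as extra effective heat capacity over the melting range inside a one-dimensional conduction scheme. The explicit scheme below is an illustrative stand-in, not the CondFD/PCM algorithm implemented in EnergyPlus (which uses an enthalpy-temperature curve with an implicit discretization); all material values are made up:

```python
def step_wall(T, dt, dx, k, rho, cp_eff):
    """One explicit finite-difference conduction step through a wall layer,
    with a temperature-dependent effective heat capacity cp_eff(T)
    standing in for the PCM's latent heat (hypothetical 1D sketch).
    Boundary nodes T[0] and T[-1] are held fixed."""
    Tn = T[:]
    for i in range(1, len(T) - 1):
        alpha = k / (rho * cp_eff(T[i]))  # local thermal diffusivity
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
    return Tn

def cp_pcm(T, cp=2000.0, latent=180000.0, t_melt=25.0, width=2.0):
    # latent heat smeared over the melting range as extra heat capacity,
    # which slows temperature rise near t_melt (peak-load damping)
    return cp + (latent / width if abs(T - t_melt) < width / 2 else 0.0)

T = [30.0] + [20.0] * 9  # hot boundary on the left, degC (illustrative)
for _ in range(200):
    T = step_wall(T, dt=1.0, dx=0.01, k=0.2, rho=900.0, cp_eff=cp_pcm)
print(round(T[1], 2))  # interior node warms toward the hot boundary
```

The explicit scheme requires alpha·dt/dx² ≤ 0.5 for stability; the extra PCM capacity only lowers that ratio, so the melting band never destabilizes the step.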
Check-Cases for Verification of 6-Degree-of-Freedom Flight Vehicle Simulations. Volume 2; Appendices
NASA Technical Reports Server (NTRS)
Murri, Daniel G.; Jackson, E. Bruce; Shelton, Robert O.
2015-01-01
This NASA Engineering and Safety Center (NESC) assessment was established to develop a set of time histories for the flight behavior of increasingly complex example aerospacecraft that could be used to partially validate various simulation frameworks. The assessment was conducted by representatives from several NASA Centers and an open-source simulation project. This document contains details on models, implementation, and results.
NASA Astrophysics Data System (ADS)
Yu, Hesheng; Thé, Jesse
2016-11-01
The prediction of the dispersion of air pollutants in urban areas is of great importance to public health, homeland security, and environmental protection. Computational Fluid Dynamics (CFD) has emerged as an effective tool for pollutant dispersion modelling. This paper reports and quantitatively validates the shear stress transport (SST) k-ω turbulence closure model and its transitional variant for pollutant dispersion in a complex urban environment for the first time. Sensitivity analysis is performed to establish recommendations for the proper use of turbulence models in urban settings. The current SST k-ω simulation is validated rigorously against extensive experimental data using the hit rate for velocity components, and the "factor of two" of observations (FAC2) and fractional bias (FB) for the concentration field. The simulation results show that the current SST k-ω model can predict the flow field well, with an overall hit rate of 0.870, and concentration dispersion with FAC2 = 0.721 and FB = 0.045. The flow simulation of the current SST k-ω model is slightly inferior to that of a detached eddy simulation (DES), but better than that of the standard k-ε model. However, the current study is the best among these three model approaches when validated against measurements of pollutant dispersion in the atmosphere. This work aims to provide recommendations for the proper use of CFD to predict pollutant dispersion in urban environments.
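The two concentration metrics used here, FAC2 and FB, are standard air-quality model evaluation statistics and are straightforward to compute. A sketch with invented observation/prediction pairs (not the study's data):

```python
def fac2(observed, predicted):
    """Fraction of predictions within a factor of two of the observations
    (pairs with a zero observation are excluded to avoid division by zero)."""
    pairs = [(o, p) for o, p in zip(observed, predicted) if o > 0]
    hits = sum(1 for o, p in pairs if 0.5 <= p / o <= 2.0)
    return hits / len(pairs)

def fractional_bias(observed, predicted):
    """FB = (mean(obs) - mean(pred)) / (0.5 * (mean(obs) + mean(pred))).
    Zero means no bias; positive values indicate underprediction."""
    mo = sum(observed) / len(observed)
    mp = sum(predicted) / len(predicted)
    return (mo - mp) / (0.5 * (mo + mp))

obs = [1.0, 2.0, 4.0, 0.5, 3.0]   # measured concentrations (illustrative)
pred = [1.2, 1.8, 1.5, 0.6, 2.5]  # CFD predictions (illustrative)
print(fac2(obs, pred), round(fractional_bias(obs, pred), 3))
```

A perfect model gives FAC2 = 1 and FB = 0; the paper's FAC2 = 0.721 and FB = 0.045 indicate most points within a factor of two and nearly no mean bias.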
Simulated training in colonoscopic stenting of colonic strictures: validation of a cadaver model.
Iordache, F; Bucobo, J C; Devlin, D; You, K; Bergamaschi, R
2015-07-01
There are currently no available simulation models for training in colonoscopic stent deployment. The aim of this study was to validate a cadaver model for simulation training in colonoscopy with stent deployment for colonic strictures. This was a prospective study enrolling surgeons at a single institution. Participants performed colonoscopic stenting on a cadaver model. Their performance was assessed by two independent observers. Measurements were performed for quantitative analysis (time to identify stenosis, time for deployment, accuracy) and a weighted score was devised for assessment. The Mann-Whitney U-test and Student's t-test were used for nonparametric and parametric data, respectively. Cohen's kappa coefficient was used for reliability. Twenty participants performed a colonoscopy with deployment of a self-expandable metallic stent in two cadavers (groups A and B) with 20 strictures overall. The median time was 206 s. The model was able to differentiate between experts and novices (P = 0.013). The results showed a good consensus estimate of reliability, with kappa = 0.571 (P < 0.0001). The cadaver model described in this study has content, construct and concurrent validity for simulation training in colonoscopic deployment of self-expandable stents for colonic strictures. Further studies are needed to evaluate the predictive validity of this model in terms of skill transfer to clinical practice. Colorectal Disease © 2014 The Association of Coloproctology of Great Britain and Ireland.
2011-01-01
Background Simulation models of influenza spread play an important role in pandemic preparedness. However, as the world has not faced a severe pandemic for decades, except the rather mild H1N1 pandemic in 2009, pandemic influenza models are inherently hypothetical and validation is, thus, difficult. We aim at reconstructing a recent seasonal influenza epidemic that occurred in Switzerland and deem this a promising validation strategy for models of influenza spread. Methods We present a spatially explicit, individual-based simulation model of influenza spread. The simulation model is based upon (i) simulated human travel data, (ii) data on human contact patterns and (iii) empirical knowledge on the epidemiology of influenza. For model validation we compare the simulation outcomes with empirical knowledge regarding (i) the shape of the epidemic curve, overall infection rate and reproduction number, (ii) age-dependent infection rates and time of infection, and (iii) spatial patterns. Results The simulation model is capable of reproducing the shape of the 2003/2004 H3N2 epidemic curve of Switzerland and generates an overall infection rate (14.9 percent) and reproduction numbers (between 1.2 and 1.3) that are realistic for seasonal influenza epidemics. Age and spatial patterns observed in empirical data are also reflected by the model: the highest infection rates are in children between 5 and 14 years of age, and the disease spreads along the main transport axes from west to east. Conclusions We show that finding evidence for the validity of simulation models of influenza spread by challenging them with seasonal influenza outbreak data is possible and promising. Simulation models for pandemic spread gain more credibility if they are able to reproduce seasonal influenza outbreaks. For more robust modelling of seasonal influenza, serological data complementing sentinel information would be beneficial. PMID:21554680
Face and Construct Validation of a Next Generation Virtual Reality (Gen2-VR©) Surgical Simulator
Sankaranarayanan, Ganesh; Li, Baichun; Manser, Kelly; Jones, Stephanie B.; Jones, Daniel B.; Schwaitzberg, Steven; Cao, Caroline G. L.; De, Suvranu
2015-01-01
Introduction Surgical performance is affected by distractors and interruptions to surgical workflow that exist in the operating room. However, traditional surgical simulators are used to train surgeons in a skills lab that does not recreate these conditions. To overcome this limitation, we have developed a novel, immersive virtual reality (Gen2-VR©) system to train surgeons in these environments. The aim of this study was to establish the face and construct validity of our system. Methods and Procedures The study was a within-subjects design, with subjects repeating a virtual peg transfer task under three different conditions: Case I: traditional VR; Case II: Gen2-VR© with no distractions; and Case III: Gen2-VR© with distractions and interruptions. In Case III, to simulate the effects of distractions and interruptions, music was played intermittently, the camera lens was fogged for 10 seconds, and tools malfunctioned for 15 seconds at random points in time during the simulation. At the completion of the study, subjects filled in a 5-point Likert scale feedback questionnaire. A total of sixteen subjects participated in this study. Results The Friedman test showed a significant difference in scores between the three conditions (p < 0.0001). Post hoc analysis using Wilcoxon signed-rank tests with Bonferroni correction further showed that all three conditions were significantly different from each other (Case I vs. Case II, p < 0.001; Case I vs. Case III, p < 0.001; Case II vs. Case III, p = 0.009). Subjects rated that fog (mean = 4.18) and tool malfunction (median = 4.56) significantly hindered their performance. Conclusion The results showed that the Gen2-VR© simulator has both face and construct validity and can accurately and realistically present distractions and interruptions in a simulated OR, in spite of limitations of the current HMD hardware technology. PMID:26092010
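The Friedman test used above ranks each subject's scores across the three conditions and tests whether the rank sums differ. A minimal pure-Python sketch of the statistic (the data are illustrative, not the study's; no tie correction is applied):

```python
def friedman_statistic(scores):
    """scores: list of per-subject lists, one score per condition.
    Returns the Friedman chi-square statistic (assumes no ties)."""
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        # Rank the k conditions within this subject (1 = lowest score).
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    s = sum(r * r for r in rank_sums)
    return 12.0 * s / (n * k * (k + 1)) - 3.0 * n * (k + 1)

# Four subjects scored under three conditions (e.g. Cases I, II, III).
data = [[90, 85, 70], [88, 80, 65], [92, 86, 75], [85, 83, 60]]
print(friedman_statistic(data))  # → 8.0, compared against chi-square with k-1 = 2 df
```

In practice a library routine (e.g. `scipy.stats.friedmanchisquare`) would handle ties and report the p-value directly.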
Franklin, Ashley E; Burns, Paulette; Lee, Christopher S
2014-10-01
In 2006, the National League for Nursing published three measures related to novice nurses' beliefs about self-confidence, scenario design, and educational practices associated with simulation. Despite the extensive use of these measures, little is known about their reliability and validity. The psychometric properties of the Student Satisfaction and Self-Confidence in Learning Scale, the Simulation Design Scale, and the Educational Practices Questionnaire were studied among a sample of 2200 surveys completed by novice nurses from a liberal arts university in the southern United States. Psychometric tests included item analysis, confirmatory and exploratory factor analyses in randomly split subsamples, concordant and discordant validity, and internal consistency. All three measures have sufficient reliability and validity to be used in education research. There is room for improvement in the content validity of the Student Satisfaction and Self-Confidence in Learning Scale and the Simulation Design Scale. This work provides robust evidence to ensure that judgments made about self-confidence after simulation, simulation design, and educational practices are valid and reliable. Copyright © 2014 Elsevier Ltd. All rights reserved.
Additional confirmation of the validity of laboratory simulation of cloud radiances
NASA Technical Reports Server (NTRS)
Davis, J. M.; Cox, S. K.
1986-01-01
The results of a laboratory experiment are presented that provide additional verification of the methodology adopted for simulation of the radiances reflected from fields of optically thick clouds using the Cloud Field Optical Simulator (CFOS) at Colorado State University. The comparison of these data with their theoretically derived counterparts indicates that the crucial mechanism of cloud-to-cloud radiance field interaction is accurately simulated in the CFOS experiments and adds confidence to the manner in which the optical depth is scaled.
Accuracy of MHD simulations: Effects of simulation initialization in GUMICS-4
NASA Astrophysics Data System (ADS)
Lakka, Antti; Pulkkinen, Tuija; Dimmock, Andrew; Osmane, Adnane; Palmroth, Minna; Honkonen, Ilja
2016-04-01
We conducted a study aimed at revealing how different global magnetohydrodynamic (MHD) simulation initialization methods affect the dynamics in different parts of the Earth's magnetosphere-ionosphere system. While such magnetosphere-ionosphere coupling codes have been used for more than two decades, their testing still requires significant work to identify the optimal numerical representation of the physical processes. We used the Grand Unified Magnetosphere-Ionosphere Coupling Simulation (GUMICS-4), the only European global MHD simulation, developed by the Finnish Meteorological Institute. GUMICS-4 was put to a test that included two stages: 1) a 10 day OMNI data interval was simulated and the results were validated by comparing both the bow shock and the magnetopause spatial positions predicted by the simulation to actual measurements, and 2) the validated 10 day simulation run was used as a reference in a comparison of five 3 + 12 hour (3 hour synthetic initialisation + 12 hour actual simulation) simulation runs. The 12 hour input was not only identical in each simulation case but also represented a subset of the 10 day input, thus enabling quantification of the effects of different synthetic initialisations on the magnetosphere-ionosphere system. The synthetic initialisation data sets were created using stepwise, linear and sinusoidal functions. The switch of the input from synthetic to real OMNI data was immediate. The results show that the magnetosphere forms in each case within an hour after the switch to real data. However, local dissimilarities are found in the magnetospheric dynamics after formation, depending on the initialisation method used. This is evident especially in the inner parts of the lobe.
MATLAB/Simulink Pulse-Echo Ultrasound System Simulator Based on Experimentally Validated Models.
Kim, Taehoon; Shin, Sangmin; Lee, Hyongmin; Lee, Hyunsook; Kim, Heewon; Shin, Eunhee; Kim, Suhwan
2016-02-01
A flexible clinical ultrasound system must operate with different transducers, which have characteristic impulse responses and widely varying impedances. The impulse response determines the shape of the high-voltage pulse that is transmitted and the specifications of the front-end electronics that receive the echo; the impedance determines the specification of the matching network through which the transducer is connected. System-level optimization of these subsystems requires accurate modeling of pulse-echo (two-way) response, which in turn demands a unified simulation of the ultrasonics and electronics. In this paper, this is realized by combining MATLAB/Simulink models of the high-voltage transmitter, the transmission interface, the acoustic subsystem which includes wave propagation and reflection, the receiving interface, and the front-end receiver. To demonstrate the effectiveness of our simulator, the models are experimentally validated by comparing the simulation results with the measured data from a commercial ultrasound system. This simulator could be used to quickly provide system-level feedback for an optimized tuning of electronic design parameters.
NASA Astrophysics Data System (ADS)
Langlois, Serge; Fouquet, Olivier; Gouy, Yann; Riant, David
2014-08-01
On-Board Computers (OBC) increasingly use integrated systems-on-chip (SoC) that embed processors running from 50 MHz up to several hundred MHz, around which are plugged dedicated communication controllers together with other input/output channels. For ground testing and On-Board SoftWare (OBSW) validation purposes, a representative simulation of these systems, faster than real time and with cycle-true timing of execution, is not achieved with current purely software simulators. In recent years, hybrid solutions have been put in place ([1], [2]), including hardware in the loop so as to add accuracy and performance to the computer software simulation. This paper presents the results of the work undertaken by Thales Alenia Space (TAS-F) at the end of 2010, which led to a validated hardware simulator of the UT699 by mid-2012 that is now qualified and fully used in operational contexts.
Hu, L H; Fong, N K; Yang, L Z; Chow, W K; Li, Y Z; Huo, R
2007-02-09
Smoke and toxic gases, such as carbon monoxide, are the most fatal factors in fires. This paper models fire-induced smoke spread and carbon monoxide transportation in an 88 m long channel using the Fire Dynamics Simulator (FDS) with large eddy simulation (LES). FDS is a well-founded fire dynamics computational fluid dynamics (CFD) program developed by the National Institute of Standards and Technology (NIST). Two full-scale experiments with fire sizes of 0.75 and 1.6 MW were conducted in this channel to validate the program. The spread of the fire-induced smoke flow, the smoke temperature distribution along the channel, and the carbon monoxide concentration at an assigned position were measured. The FDS simulation results were compared with the experimental data, with fairly good agreement demonstrated. The validation work is then extended to numerically study the carbon monoxide concentration distribution, both vertically and longitudinally, in this long channel. Results showed that the carbon monoxide concentration increases linearly with height above the floor and decreases exponentially with distance away from the fire source.
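The reported trends (linear in height, exponential decay along the channel) can be summarised by a simple empirical form. A hedged sketch of such a model; the coefficients below are placeholders for illustration, not fitted values from the study:

```python
import math

def co_concentration(height_m, distance_m, c0=80.0, slope=15.0, decay=0.02):
    """Illustrative CO model: concentration (ppm) rises linearly with
    height above the floor and decays exponentially with longitudinal
    distance from the fire source. Coefficients are assumptions."""
    return (c0 + slope * height_m) * math.exp(-decay * distance_m)

# Concentration is higher near the ceiling and lower far from the fire.
near = co_concentration(height_m=2.0, distance_m=10.0)
far = co_concentration(height_m=2.0, distance_m=50.0)
print(near > far)  # → True
```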
ERIC Educational Resources Information Center
Schubert, T. F., Jr.; Kim, E. M.
2009-01-01
The use of Miller's Theorem in the determination of the high-frequency cutoff frequency of transistor amplifiers was recently challenged by a paper published in this TRANSACTIONS. Unfortunately, that paper provided no simulation or experimental results to bring credence to the challenge or to validate the alternate method of determination…
Validation of scramjet exhaust simulation technique at Mach 6
NASA Technical Reports Server (NTRS)
Hopkins, H. B.; Konopka, W.; Leng, J.
1979-01-01
Current design philosophy for hydrogen-fueled, scramjet-powered hypersonic aircraft results in configurations with strong couplings between the engine plume and vehicle aerodynamics. The experimental verification of the scramjet exhaust simulation is described. The scramjet exhaust was reproduced for the Mach 6 flight condition by the detonation tube simulator. The exhaust flow pressure profiles, and to a large extent the heat transfer rate profiles, were then duplicated by cool gas mixtures of Argon and Freon 13B1 or Freon 12. The results of these experiments indicate that a cool gas simulation of the hot scramjet exhaust is a viable simulation technique except for phenomena which are dependent on the wall temperature relative to flow temperature.
Assessment of simulation fidelity using measurements of piloting technique in flight
NASA Technical Reports Server (NTRS)
Ferguson, S. W.; Clement, W. F.; Cleveland, W. B.; Key, D. L.
1984-01-01
The U.S. Army and NASA have undertaken the systematic validation of a ground-based piloted simulator for the UH-60A helicopter. The results of previous handling quality and task performance flight tests for this helicopter have been used as a data base for evaluating the fidelity of the present simulation, which is being conducted at the NASA Ames Research Center's Vertical Motion Simulator. Such nap-of-the-earth piloting tasks as pop-up, hover turn, dash/quick stop, sidestep, dolphin, and slalom have been investigated. It is noted that pilot simulator performance is significantly and quantifiably degraded by comparison with flight test results for the same tasks.
NASA Astrophysics Data System (ADS)
Hussein, Rafid M.; Chandrashekhara, K.
2017-11-01
A multi-scale modeling approach is presented to simulate and validate thermo-oxidation shrinkage and cracking damage of a high temperature polymer composite. The multi-scale approach investigates coupled transient diffusion-reaction and static structural analyses from the macro- to the micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object-oriented finite element (OOF) tool. The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model against shrinkage profiles acquired from scanning electron images at the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed. The local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10-5, upon which the discrepancy is minimized. The microcrack damage of the composite is modeled using a static elastic analysis with extended finite elements and cohesive surfaces, considering the spatial evolution of the modulus. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions of the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.
Face validation of the Virtual Electrosurgery Skill Trainer (VEST©).
Sankaranarayanan, Ganesh; Li, Baichun; Miller, Amie; Wakily, Hussna; Jones, Stephanie B; Schwaitzberg, Steven; Jones, Daniel B; De, Suvranu; Olasky, Jaisa
2016-02-01
Electrosurgery is a modality that is widely used in surgery, whose use has resulted in injuries, OR fires and even death. SAGES has established the FUSE program to address the knowledge gap in the proper and safe usage of electrosurgical devices. Complementing it, we have developed the Virtual Electrosurgery Skill Trainer (VEST(©)), which is designed to train subjects in both the cognitive and motor skills necessary to safely operate electrosurgical devices. The objective of this study is to assess the face validity of the VEST(©) simulator. Sixty-three subjects were recruited at the 2014 SAGES Learning Center. They all completed the monopolar electrosurgery module on the VEST(©) simulator. At the end of the study, subjects assessed the face validity with questions that were scored on a 5-point Likert scale. The subjects were divided into two groups: FUSE experience (n = 15) and no FUSE experience (n = 48). The median score for both groups was 4 or higher on all questions and 5 on questions on the effectiveness of VEST(©) in aiding learning of electrosurgery fundamentals. Questions on using the simulator in their own skills lab and recommending it to their peers also scored 5. A Mann-Whitney U test showed no significant difference (p > 0.05), indicating general agreement between the groups. 46% of the respondents preferred VEST, compared with 52% who preferred an animal model and 2% who preferred both for training in electrosurgery. This study demonstrated the face validity of the VEST(©) simulator. High scores showed that the simulator was visually realistic, reproduced lifelike tissue effects, and had features adequate to provide high realism. The self-learning instructional material was also found to be very useful in learning the fundamentals of electrosurgery. Adding more modules would increase the applicability of the VEST(©) simulator.
Validation of a small-animal PET simulation using GAMOS: a GEANT4-based framework
NASA Astrophysics Data System (ADS)
Cañadas, M.; Arce, P.; Rato Mendes, P.
2011-01-01
Monte Carlo-based modelling is a powerful tool to help in the design and optimization of positron emission tomography (PET) systems. The performance of these systems depends on several parameters, such as detector physical characteristics, shielding or electronics, whose effects can be studied on the basis of realistic simulated data. The aim of this paper is to validate a comprehensive study of the Raytest ClearPET small-animal PET scanner using a new Monte Carlo simulation platform which has been developed at CIEMAT (Madrid, Spain), called GAMOS (GEANT4-based Architecture for Medicine-Oriented Simulations). This toolkit, based on the GEANT4 code, was originally designed to cover multiple applications in the field of medical physics from radiotherapy to nuclear medicine, but has since been applied by some of its users in other fields of physics, such as neutron shielding, space physics, high energy physics, etc. Our simulation model includes the relevant characteristics of the ClearPET system, namely, the double layer of scintillator crystals in phoswich configuration, the rotating gantry, the presence of intrinsic radioactivity in the crystals or the storage of single events for an off-line coincidence sorting. Simulated results are contrasted with experimental acquisitions including studies of spatial resolution, sensitivity, scatter fraction and count rates in accordance with the National Electrical Manufacturers Association (NEMA) NU 4-2008 protocol. Spatial resolution results showed a discrepancy between simulated and measured values equal to 8.4% (with a maximum FWHM difference over all measurement directions of 0.5 mm). Sensitivity results differ less than 1% for a 250-750 keV energy window. 
Simulated and measured count rates agree well within a wide range of activities, including under electronic saturation of the system (the measured peak of total coincidences, for the mouse-sized phantom, was 250.8 kcps reached at 0.95 MBq mL-1 and the simulated peak was 247.1 kcps at 0.87 MBq mL-1). Agreement better than 3% was obtained in the scatter fraction comparison study. We also measured and simulated a mini-Derenzo phantom obtaining images with similar quality using iterative reconstruction methods. We concluded that the overall performance of the simulation showed good agreement with the measured results and validates the GAMOS package for PET applications. Furthermore, its ease of use and flexibility recommends it as an excellent tool to optimize design features or image reconstruction techniques.
Highlights of Transient Plume Impingement Model Validation and Applications
NASA Technical Reports Server (NTRS)
Woronowicz, Michael
2011-01-01
This paper describes highlights of an ongoing validation effort conducted to assess the viability of applying a set of analytic point source transient free molecule equations to model behavior ranging from molecular effusion to rocket plumes. The validation effort includes encouraging comparisons to both steady and transient studies involving experimental data and direct simulation Monte Carlo results. Finally, this model is applied to describe features of two exotic transient scenarios involving NASA Goddard Space Flight Center satellite programs.
Simulating direct shear tests with the Bullet physics library: A validation study.
Izadi, Ehsan; Bezuijen, Adam
2018-01-01
This study focuses on the possible uses of physics engines, and more specifically the Bullet physics library, to simulate granular systems. Physics engines are employed extensively in the video gaming, animation and movie industries to create physically plausible scenes. They are designed to deliver a fast, stable, and optimal simulation of certain systems such as rigid bodies, soft bodies and fluids. This study focuses exclusively on simulating granular media in the context of rigid body dynamics with the Bullet physics library. The first step was to validate the results of simulations of direct shear testing on uniform-sized metal beads against laboratory experiments. The difference in the average mobilized friction angle was found to be only 1.0°. In addition, a very close match was found between dilatancy in the laboratory samples and in the simulations. A comprehensive study was then conducted to determine the failure and post-failure mechanism. We conclude with the presentation of a simulation of a direct shear test on real soil, which demonstrates that Bullet has all the capabilities needed to be used as software for simulating granular systems.
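The mobilized friction angle compared above follows directly from the shear and normal stresses on the shear plane. A minimal sketch with illustrative values (not the paper's measurements):

```python
import math

def mobilized_friction_angle(shear_stress, normal_stress):
    """Mobilized friction angle in degrees: atan(tau / sigma_n)."""
    return math.degrees(math.atan(shear_stress / normal_stress))

# Example: a peak shear stress of 58 kPa under 100 kPa normal stress.
print(round(mobilized_friction_angle(58.0, 100.0), 1))  # → 30.1
```

Comparing this angle between the laboratory test and the Bullet simulation is what yields the 1.0° difference reported above.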
Ride qualities criteria validation/pilot performance study: Flight test results
NASA Technical Reports Server (NTRS)
Nardi, L. U.; Kawana, H. Y.; Greek, D. C.
1979-01-01
Pilot performance during a terrain following flight was studied for ride quality criteria validation. Data from manual and automatic terrain following operations conducted during low level penetrations were analyzed to determine the effect of ride qualities on crew performance. The conditions analyzed included varying levels of turbulence, terrain roughness, and mission duration with a ride smoothing system on and off. Limited validation of the B-1 ride quality criteria and some of the first order interactions between ride qualities and pilot/vehicle performance are highlighted. An earlier B-1 flight simulation program correlated well with the flight test results.
Experimental validation of structural optimization methods
NASA Technical Reports Server (NTRS)
Adelman, Howard M.
1992-01-01
The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods as a way of effecting a greater and an accelerated acceptance of formal optimization methods by practicing engineering designers is described. The range of validation strategies is defined which includes comparison of optimization results with more traditional design approaches, establishing the accuracy of analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low vibration helicopter rotor.
1992-02-01
the following sections will include descriptions of the flight results that have validated the simulation study approach and methodologies associated with the...Consultant and Exchange Programme and the Aerospace Applications Studies Programme. The results of AGARD work are reported to the member nations and...tasks with ever-increasing levels of fidelity is leading to a steady growth in their use for all areas of aviation, from new concept studies through
RCWA and FDTD modeling of light emission from internally structured OLEDs.
Callens, Michiel Koen; Marsman, Herman; Penninck, Lieven; Peeters, Patrick; de Groot, Harry; ter Meulen, Jan Matthijs; Neyts, Kristiaan
2014-05-05
We report on the fabrication and simulation of a green OLED with an Internal Light Extraction (ILE) layer. The optical behavior of these devices is simulated using both Rigorous Coupled Wave Analysis (RCWA) and Finite Difference Time-Domain (FDTD) methods. Results obtained using these two different techniques show excellent agreement and predict the experimental results with good precision. By verifying the validity of both simulation methods on the internal light extraction structure we pave the way to optimization of ILE layers using either of these methods.
Soldier Dimensions in Combat Models
1990-05-07
and performance. Questionnaires, SQTs, and ARTEPs were often used. Many scales had estimates of reliability but few had validity data. Most studies...pending its validation. Research plans were provided for applications in simulated combat and with simulation devices, for data previously gathered...regarding reliability and validity. Lack of information following an instrument indicates neither reliability nor validity information was provided by the
Numerical simulations of LNG vapor dispersion in Brayton Fire Training Field tests with ANSYS CFX.
Qi, Ruifeng; Ng, Dedy; Cormier, Benjamin R; Mannan, M Sam
2010-11-15
Federal safety regulations require the use of validated consequence models to determine the vapor cloud dispersion exclusion zones for accidental liquefied natural gas (LNG) releases. One tool that is being developed in industry for exclusion zone determination and LNG vapor dispersion modeling is computational fluid dynamics (CFD). This paper uses the ANSYS CFX CFD code to model LNG vapor dispersion in the atmosphere. Discussed are important parameters that are essential inputs to the ANSYS CFX simulations, including the atmospheric conditions, LNG evaporation rate and pool area, turbulence in the source term, ground surface temperature and roughness height, and effects of obstacles. A sensitivity analysis was conducted to illustrate uncertainties in the simulation results arising from the mesh size and source term turbulence intensity. In addition, a set of medium-scale LNG spill tests were performed at the Brayton Fire Training Field to collect data for validating the ANSYS CFX prediction results. A comparison of test data with simulation results demonstrated that CFX was able to describe the dense gas behavior of LNG vapor cloud, and its prediction results of downwind gas concentrations close to ground level were in approximate agreement with the test data. Copyright © 2010 Elsevier B.V. All rights reserved.
Dynamic Simulation of Human Gait Model With Predictive Capability.
Sun, Jinming; Wu, Shaoli; Voglewede, Philip A
2018-03-01
In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively classical feedback control, which acts on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model comprises two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model with nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compares the predicted output to the reference, and optimizes the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate kinematic output close to experimental data.
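The MPC scheme described above can be illustrated with a scalar, discrete-time toy problem: an internal model predicts the output over a horizon, the input sequence is chosen by least squares to track a reference, and only the first input is applied (receding horizon). This is a generic MPC sketch, not the authors' nine-DOF gait controller:

```python
import numpy as np

def mpc_step(x0, a, b, ref, horizon):
    """One MPC step for the internal model x[k+1] = a*x[k] + b*u[k].
    Choose u[0..H-1] minimizing sum_k (x[k] - ref)^2 and return u[0]."""
    H = horizon
    # Prediction: x[k] = a^k * x0 + sum_{j<k} a^(k-1-j) * b * u[j]
    G = np.zeros((H, H))
    for k in range(1, H + 1):
        for j in range(k):
            G[k - 1, j] = a ** (k - 1 - j) * b
    free = np.array([a ** k * x0 for k in range(1, H + 1)])
    u, *_ = np.linalg.lstsq(G, np.full(H, ref) - free, rcond=None)
    return u[0]  # receding horizon: apply only the first input

# Closed loop: drive the state toward ref = 1.0 and check convergence.
a, b, x = 0.9, 0.5, 0.0
for _ in range(20):
    x = a * x + b * mpc_step(x, a, b, ref=1.0, horizon=5)
print(round(x, 3))  # → 1.0
```

The same predict-optimize-apply cycle, with a multibody internal model and joint-torque inputs, is the structure the paper attributes to the CNS.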
The role of numerical simulation for the development of an advanced HIFU system
NASA Astrophysics Data System (ADS)
Okita, Kohei; Narumi, Ryuta; Azuma, Takashi; Takagi, Shu; Matumoto, Yoichiro
2014-10-01
High-intensity focused ultrasound (HIFU) has been used clinically and is under clinical trials to treat various diseases. An advanced HIFU system employs ultrasound techniques for guidance during HIFU treatment instead of the magnetic resonance imaging used in current HIFU systems. HIFU beam imaging for monitoring the HIFU beam and localized motion imaging for treatment validation of tissue are introduced briefly as real-time ultrasound monitoring techniques. Numerical simulations have a great impact on the development of real-time ultrasound monitoring as well as on the improvement of the safety and efficacy of treatment in advanced HIFU systems. A HIFU simulator was developed to reproduce ultrasound propagation through the body, taking into account the elasticity of tissue, and was validated by comparison with in vitro experiments in which the ultrasound emitted from a phased-array transducer propagates through an acrylic plate acting as a bone phantom. As a result, the defocus and distortion of the ultrasound propagating through the acrylic plate in the simulation agree quantitatively with the experimental results. Therefore, the HIFU simulator accurately reproduces ultrasound propagation through a medium whose shape and physical properties are well known. In addition, it is experimentally confirmed that simulation-assisted focus control of the phased-array transducer enables efficient assignment of the focus to the target. Simulation-assisted focus control can contribute to the design of transducers and to treatment planning.
A Level-set based framework for viscous simulation of particle-laden supersonic flows
NASA Astrophysics Data System (ADS)
Das, Pratik; Sen, Oishik; Jacobs, Gustaaf; Udaykumar, H. S.
2017-06-01
Particle-laden supersonic flows are important in natural and industrial processes, such as volcanic eruptions, explosions, and pneumatic conveyance of particles in material processing. Numerical study of such high-speed particle-laden flows at the mesoscale calls for a numerical framework that allows simulation of supersonic flow around multiple moving solid objects. Only a few efforts have been made toward the development of numerical frameworks for viscous simulation of particle-fluid interaction in the supersonic flow regime. The current work presents a Cartesian grid based sharp-interface method for viscous simulations of the interaction between supersonic flow and moving rigid particles. The no-slip boundary condition is imposed at the solid-fluid interfaces using a modified ghost fluid method (GFM). The current method is validated against the similarity solution of the compressible boundary layer over a flat plate and a benchmark numerical solution for steady supersonic flow over a cylinder. Further validation is carried out against benchmark numerical results for shock-induced lift-off of a cylinder in a shock tube. A 3D simulation of steady supersonic flow over a sphere is performed to compare the numerically obtained drag coefficient with experimental results. A particle-resolved viscous simulation of shock interaction with a cloud of particles is performed to demonstrate that the current method is suitable for large-scale particle-resolved simulations of particle-laden supersonic flows.
Jensen Ang, Wei Jie; Hopkins, Michael Edward; Partridge, Roland; Hennessey, Iain; Brennan, Paul Martin; Fouyas, Ioannis; Hughes, Mark Antony
2014-03-01
Reductions in working hours affect training opportunities for surgeons. Surgical simulation is increasingly proposed to help bridge the resultant training gap. For simulation training to translate effectively into the operating theater, acquisition of technical proficiency must be objectively assessed. Evaluating "economy of movement" is one way to achieve this. We sought to validate a practical and economical method of assessing economy of movement during a simulated task. We hypothesized that accelerometers, found in smartphones, provide quantitative, objective feedback when attached to a neurosurgeon's wrists. Subjects (n = 25) included consultants, senior registrars, junior registrars, junior doctors, and medical students. Total resultant acceleration (TRA), average resultant acceleration, and movements with acceleration >0.6g (suprathreshold acceleration events) were recorded while subjects performed a simulated dural closure task. Students recorded an average TRA 97.0 ± 31.2 ms higher than senior registrars (P = .03) and 103 ± 31.2 ms higher than consultants (P = .02). Similarly, junior doctors accrued an average TRA 181 ± 31.2 ms higher than senior registrars (P < .001) and 187 ± 31.2 ms higher than consultants (P < .001). Significant correlations were observed between surgical outcome (as measured by quality of dural closure) and both TRA (r = .44, P < .001) and number of suprathreshold acceleration events (r = .33, P < .001). TRA (219 ± 66.6 ms; P = .01) and number of suprathreshold acceleration events (127 ± 42.5; P = .02) dropped between the first and fourth trials for junior doctors, suggesting procedural learning. TRA was 45.4 ± 17.1 ms higher in the dominant hand for students (P = .04) and 57.2 ± 17.1 ms for junior doctors (P = .005), contrasting with even TRA distribution between hands (acquired ambidexterity) in senior groups. 
Data from smartphone-based accelerometers show construct validity as an adjunct for assessing technical performance during simulation training.
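The study's motion metrics can be computed directly from raw accelerometer samples. A sketch, assuming gravity-compensated samples and treating each sample above the threshold as one "suprathreshold acceleration event" (the paper's exact event definition may aggregate samples differently):

```python
import numpy as np

G = 9.81  # 1 g in m/s^2

def movement_metrics(accel, threshold_g=0.6):
    """Summarize wrist-worn accelerometer data for a simulated task.

    accel: (N, 3) array of acceleration samples (m/s^2), gravity removed.
    Returns total resultant acceleration (sum of per-sample magnitudes),
    average resultant acceleration, and the count of samples whose
    magnitude exceeds threshold_g (suprathreshold acceleration events).
    """
    resultant = np.linalg.norm(accel, axis=1)        # per-sample magnitude
    events = int(np.sum(resultant > threshold_g * G))
    return resultant.sum(), resultant.mean(), events

total, avg, events = movement_metrics(np.array([[0.3 * G, 0.0, 0.0],
                                                [0.7 * G, 0.0, 0.0]]))
```

Lower totals indicate greater economy of movement, which is the construct the study validates against surgical experience.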
Abramyan, Tigran M.; Hyde-Volpe, David L.; Stuart, Steven J.; Latour, Robert A.
2017-01-01
The use of standard molecular dynamics simulation methods to predict the interactions of a protein with a material surface has inherent limitations: it cannot determine the most likely conformations and orientations of the adsorbed protein on the surface, nor the level of convergence attained by the simulation. In addition, standard mixing rules are typically applied to combine the nonbonded force field parameters of the solution and solid phases of the system to represent interfacial behavior, without validation. As a means to circumvent these problems, the authors demonstrate the application of an efficient advanced sampling method (TIGER2A) for the simulation of the adsorption of hen egg-white lysozyme on a crystalline (110) high-density polyethylene surface plane. Simulations are conducted to generate a Boltzmann-weighted ensemble of sampled states using force field parameters that were validated to represent interfacial behavior for this system. The resulting ensembles of sampled states were then analyzed using an in-house-developed cluster analysis method to predict the most probable orientations and conformations of the protein on the surface based on the amount of sampling performed, from which free energy differences between the adsorbed states could be calculated. In addition, by conducting two independent sets of TIGER2A simulations combined with cluster analyses, the authors demonstrate a method to estimate the degree of convergence achieved for a given amount of sampling. The results from these simulations demonstrate that these methods enable the most probable orientations and conformations of an adsorbed protein to be predicted, and that the use of our validated interfacial force field parameter set provides closer agreement to available experimental results than standard CHARMM force field parameterization of molecular behavior at the interface. PMID:28514864
NASA Astrophysics Data System (ADS)
Prasad, K.; Thorpe, A. K.; Duren, R. M.; Thompson, D. R.; Whetstone, J. R.
2016-12-01
The National Institute of Standards and Technology (NIST) has supported the development and demonstration of a measurement capability to accurately locate greenhouse gas sources and measure their flux to the atmosphere over urban domains. However, uncertainties in the transport models that form the basis of all top-down approaches can significantly affect our capability to attribute sources and predict their flux to the atmosphere. Reducing uncertainties between bottom-up and top-down models will require high-resolution transport models as well as validation and verification of dispersion models over an urban domain. Tracer experiments involving the release of perfluorocarbon tracers (PFTs) at known flow rates offer the best approach for validating dispersion and transport models. However, tracer experiments are limited by cost, the ability to make continuous measurements, and environmental concerns. Natural tracer experiments, such as the leak from the Aliso Canyon underground storage facility, offer a unique opportunity to improve and validate high-resolution transport models, test leak hypotheses, and estimate the amount of methane released. High-spatial-resolution (10 m) Large Eddy Simulations (LES) coupled with WRF atmospheric transport models were performed to simulate the dynamics of the Aliso Canyon methane plume and to quantify the source. High-resolution forward simulation results were combined with aircraft- and tower-based in-situ measurements as well as data from NASA airborne imaging spectrometers. Comparison of simulation results with measurement data demonstrates the capability of the LES models to accurately model transport and dispersion of methane plumes over urban domains.
Performance evaluation of an agent-based occupancy simulation model
Luo, Xuan; Lam, Khee Poh; Chen, Yixing; ...
2017-01-17
Occupancy is an important factor driving building performance. Static and homogeneous occupant schedules, commonly used in building performance simulation, contribute to issues such as performance gaps between simulated and measured energy use in buildings. Stochastic occupancy models have been recently developed and applied to better represent spatial and temporal diversity of occupants in buildings. However, there is very limited evaluation of the usability and accuracy of these models. This study used measured occupancy data from a real office building to evaluate the performance of an agent-based occupancy simulation model: the Occupancy Simulator. The occupancy patterns of various occupant types were first derived from the measured occupant schedule data using statistical analysis. Then the performance of the simulation model was evaluated and verified based on (1) whether the distribution of observed occupancy behavior patterns follows the theoretical ones included in the Occupancy Simulator, and (2) whether the simulator can reproduce a variety of occupancy patterns accurately. Results demonstrated the feasibility of applying the Occupancy Simulator to simulate a range of occupancy presence and movement behaviors for regular types of occupants in office buildings, and to generate stochastic occupant schedules at the room and individual occupant levels for building performance simulation. For future work, model validation is recommended, which includes collecting and using detailed interval occupancy data of all spaces in an office building to validate the simulated occupant schedules from the Occupancy Simulator.
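Stochastic presence models of this kind are typically driven by a time-inhomogeneous Markov chain. A minimal single-occupant sketch with illustrative per-timestep transition probabilities (not the Occupancy Simulator's actual parameterization):

```python
import random

def simulate_occupancy(p_arrive, p_leave, steps_per_day=96, seed=1):
    """Stochastic presence schedule for one occupant (minimal sketch).

    p_arrive[t] / p_leave[t] are illustrative transition probabilities
    for each 15-minute timestep of a day (96 steps). Returns a 0/1 list.
    """
    rng = random.Random(seed)
    present, schedule = False, []
    for t in range(steps_per_day):
        if present:
            present = rng.random() >= p_leave[t]   # stay unless leaving
        else:
            present = rng.random() < p_arrive[t]   # arrive with p_arrive
        schedule.append(int(present))
    return schedule

# simple profile: likely to arrive mid-morning, leave in the evening
p_arrive = [0.6 if 32 <= t < 40 else 0.02 for t in range(96)]
p_leave = [0.5 if t >= 68 else 0.05 for t in range(96)]
day = simulate_occupancy(p_arrive, p_leave)
```

Repeating the simulation over many days and occupants yields the distribution of presence patterns that the study compares against measured schedules.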
Analysis of Square Cup Deep-Drawing Test of Pure Titanium
NASA Astrophysics Data System (ADS)
Ogawa, Takaki; Ma, Ninshu; Ueyama, Minoru; Harada, Yasunori
2016-08-01
The prediction of the formability of titanium is more difficult than that of steels because of titanium's strong anisotropy. If computer simulation can estimate the formability of titanium, the optimal forming conditions can be selected. The purpose of this study was to acquire knowledge for formability prediction through computer simulation of the square cup deep-drawing of pure titanium. In this paper, the results of an FEM analysis of pure titanium were compared with experimental results to examine the validity of the analysis. We analyzed the formability of deep-drawing a square cup of titanium by FEM using solid elements. Comparing the analysis results with the experimental results, such as the formed shape, the punch load, and the thickness, confirmed the validity of the analysis. Further, by analyzing the change of thickness around the forming corner, it was confirmed that the thickness reached its maximum during the forming process at a stroke of 35 mm, before the maximum stroke was reached.
In Vitro Simulation and Validation of the Circulation with Congenital Heart Defects
Figliola, Richard S.; Giardini, Alessandro; Conover, Tim; Camp, Tiffany A.; Biglino, Giovanni; Chiulli, John; Hsia, Tain-Yen
2010-01-01
Despite recent advances in computational modeling, experimental simulation of the circulation with congenital heart defects using mock flow circuits remains an important tool for device testing and for detailing the probable flow consequences resulting from surgical and interventional corrections. Validated mock circuits can be applied to qualify the results from novel computational models. New mathematical tools, coupled with advanced clinical imaging methods, allow for improved assessment of experimental circuit performance relative to human function, as well as the potential for patient-specific adaptation. In this review, we address the development of three in vitro mock circuits specific for studies of congenital heart defects. Performance of an in vitro right heart circulation circuit through a series of verification and validation exercises is described, including correlations with animal studies and quantification of the effects of circuit inertance on test results. We present our experience in the design of mock circuits suitable for investigations of the characteristics of the Fontan circulation. We use one such mock circuit to evaluate the accuracy of Doppler predictions in the presence of aortic coarctation. PMID:21218147
Imputation of missing data in time series for air pollutants
NASA Astrophysics Data System (ADS)
Junger, W. L.; Ponce de Leon, A.
2015-02-01
Missing data are a major concern in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method that is suitable for multivariate time series data, which uses the EM algorithm under the assumption of a normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, the complete-data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas the validity began to degrade when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations yielded valid results, even when data were missing not at random. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.
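The core of the approach, EM imputation under a multivariate normal model, can be sketched as follows; this is a minimal illustration with the temporal filtering omitted (the mtsdi package implements the full method in R):

```python
import numpy as np

def em_impute(X, n_iter=200):
    """EM imputation under a multivariate normal model (minimal sketch).

    X: (n, p) array with np.nan marking missing entries.
    E-step: replace each row's missing values with their conditional
    mean given the observed values; M-step: refit mean and covariance.
    """
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = col_means[np.where(miss)[1]]      # crude starting values
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        for i in range(X.shape[0]):
            m = miss[i]
            if m.any():
                o = ~m
                # conditional mean: mu_m + S_mo S_oo^{-1} (x_o - mu_o)
                coef = np.linalg.solve(cov[np.ix_(o, o)], X[i, o] - mu[o])
                X[i, m] = mu[m] + cov[np.ix_(m, o)] @ coef
    return X

# perfectly linear toy data: the missing entry converges toward 20
X = np.array([[1, 2], [2, 4], [3, 6], [4, 8], [10, np.nan]], dtype=float)
filled = em_impute(X)
```

Because the two columns are exactly proportional in the toy data, the EM iterations pull the column-mean starting value toward the value implied by the fitted correlation.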
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-01-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, for assessing the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis. PMID:20011037
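Classification error for such designs is estimated by Monte Carlo simulation. A rough sketch for the 67×3 design, with Gaussian jitter of cluster-level prevalence standing in for intracluster correlation (the study's actual correlation model and decision rules are not given here; the threshold below is illustrative):

```python
import random

def simulate_lqas_error(true_prev, threshold, n_clusters=67, m=3,
                        icc_spread=0.1, n_sims=2000, seed=7):
    """Monte Carlo sketch of LQAS classification under cluster sampling.

    Each survey samples m children in each of n_clusters clusters whose
    local prevalence is jittered around true_prev; the lot is classified
    'high' when total cases meet or exceed threshold. Returns the
    fraction of simulated surveys classified 'high'.
    """
    rng = random.Random(seed)
    high = 0
    for _ in range(n_sims):
        cases = 0
        for _ in range(n_clusters):
            p = min(max(rng.gauss(true_prev, icc_spread), 0.0), 1.0)
            cases += sum(rng.random() < p for _ in range(m))
        if cases >= threshold:
            high += 1
    return high / n_sims

# chance that a 20%-prevalence lot is (correctly) classified 'high'
p_detect = simulate_lqas_error(0.20, threshold=30)
```

Running the same simulation at prevalences on either side of the decision threshold gives the two classification-error rates (false 'high' and false 'low') that the study reports.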
The use of MR B1+ imaging for validation of FDTD electromagnetic simulations of human anatomies.
Van den Berg, Cornelis A T; Bartels, Lambertus W; van den Bergen, Bob; Kroeze, Hugo; de Leeuw, Astrid A C; Van de Kamer, Jeroen B; Lagendijk, Jan J W
2006-10-07
In this study, MR B1+ imaging is employed to experimentally verify the validity of FDTD simulations of electromagnetic field patterns in human anatomies. Measurements and FDTD simulations of the B1+ field induced by a 3 T MR body coil in a human corpse were performed. It was found that MR B1+ imaging is a sensitive method to measure the radiofrequency (RF) magnetic field inside a human anatomy, with a precision of approximately 3.5%. A good correlation was found between the B1+ measurements and the FDTD simulations. The measured B1+ pattern for a human pelvis consisted of a global, diagonal modulation pattern plus local B1+ heterogeneities. It is believed that these local B1+ field variations are the result of peaks in the induced electric currents, which could not be resolved by the FDTD simulations on a 5 mm³ simulation grid. The findings from this study demonstrate that B1+ imaging is a valuable experimental technique to gain more knowledge about the dielectric interaction of RF fields with the human anatomy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Im, Piljae; Bhandari, Mahabir S.; New, Joshua Ryan
This document describes the Oak Ridge National Laboratory (ORNL) multiyear experimental plan for validation and uncertainty characterization of whole-building energy simulation for a multi-zone research facility using a traditional rooftop unit (RTU) as a baseline heating, ventilating, and air conditioning (HVAC) system. The project’s overarching objective is to increase the accuracy of energy simulation tools by enabling empirical validation of key inputs and algorithms. Doing so is required to inform the design of increasingly integrated building systems and to enable accountability for performance gaps between design and operation of a building. The project will produce documented data sets that can be used to validate key functionality in different energy simulation tools and to identify errors and inadequate assumptions in simulation engines so that developers can correct them. ASHRAE Standard 140, Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ASHRAE 2004), currently consists primarily of tests to compare different simulation programs with one another. This project will generate sets of measured data to enable empirical validation, incorporate these test data sets in an extended version of Standard 140, and apply these tests to the Department of Energy’s (DOE) EnergyPlus software (EnergyPlus 2016) to initiate the correction of any significant deficiencies. The fitness-for-purpose of the key algorithms in EnergyPlus will be established and demonstrated, and vendors of other simulation programs will be able to demonstrate the validity of their products. The data set will be equally applicable to validation of other simulation engines as well.
Finite element simulation of crack depth measurements in concrete using diffuse ultrasound
NASA Astrophysics Data System (ADS)
Seher, Matthias; Kim, Jin-Yeon; Jacobs, Laurence J.
2012-05-01
This research simulates the measurements of crack depth in concrete using diffuse ultrasound. The finite element method is employed to simulate the ultrasonic diffusion process around cracks with different geometrical shapes, with the goal of gaining physical insight into the data obtained from experimental measurements. The commercial finite element software Ansys is used to implement the two-dimensional concrete model. The model is validated with an analytical solution and experimental results. It is found from the simulation results that preliminary knowledge of the crack geometry is required to interpret the energy evolution curves from measurements and to correctly determine the crack depth.
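The diffuse-ultrasound measurements being simulated are commonly modeled by a diffusion equation with dissipation for the ultrasonic energy density. A 1-D explicit finite-difference sketch of the energy-evolution curve at a receiver, with illustrative parameter values (not the paper's two-dimensional Ansys model):

```python
import numpy as np

def diffuse_energy(D=25.0, sigma=5000.0, L=0.5, nx=101, dt=4e-7,
                   nt=4000, src=25, rec=75):
    """Sketch of the 1-D ultrasonic diffusion model for concrete,
    dE/dt = D d2E/dx2 - sigma*E, with an impulsive source; returns the
    energy-evolution curve at the receiver node. D (m^2/s) and sigma
    (1/s) are illustrative, not fitted to any experiment.
    """
    dx = L / (nx - 1)
    r = D * dt / dx**2          # explicit-scheme stability needs r < 0.5
    assert r < 0.5
    E = np.zeros(nx)
    E[src] = 1.0 / dx           # impulsive source at the source node
    out = []
    for _ in range(nt):
        lap = np.zeros_like(E)
        lap[1:-1] = E[2:] - 2.0 * E[1:-1] + E[:-2]
        E = E + r * lap - sigma * dt * E
        E[0], E[-1] = E[1], E[-2]   # zero-flux (reflecting) boundaries
        out.append(E[rec])
    return np.array(out)

curve = diffuse_energy()
```

The curve rises as diffuse energy arrives at the receiver and then decays through dissipation; fitting its arrival time and shape is what lets measured curves be inverted for crack depth.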
NASA Astrophysics Data System (ADS)
Tessonnier, T.; Mairani, A.; Brons, S.; Sala, P.; Cerutti, F.; Ferrari, A.; Haberer, T.; Debus, J.; Parodi, K.
2017-08-01
In the field of particle therapy helium ion beams could offer an alternative for radiotherapy treatments, owing to their interesting physical and biological properties intermediate between protons and carbon ions. We present in this work the comparisons and validations of the Monte Carlo FLUKA code against in-depth dosimetric measurements acquired at the Heidelberg Ion Beam Therapy Center (HIT). Depth dose distributions in water with and without ripple filter, lateral profiles at different depths in water and a spread-out Bragg peak were investigated. After experimentally-driven tuning of the less known initial beam characteristics in vacuum (beam lateral size and momentum spread) and simulation parameters (water ionization potential), comparisons of depth dose distributions were performed between simulations and measurements, which showed overall good agreement with range differences below 0.1 mm and dose-weighted average dose-differences below 2.3% throughout the entire energy range. Comparisons of lateral dose profiles showed differences in full-width-half-maximum lower than 0.7 mm. Measurements of the spread-out Bragg peak indicated differences with simulations below 1% in the high dose regions and 3% in all other regions, with a range difference less than 0.5 mm. Despite the promising results, some discrepancies between simulations and measurements were observed, particularly at high energies. These differences were attributed to an underestimation of dose contributions from secondary particles at large angles, as seen in a triple Gaussian parametrization of the lateral profiles along the depth. However, the results allowed us to validate FLUKA simulations against measurements, confirming its suitability for 4He ion beam modeling in preparation of clinical establishment at HIT. 
Future activities building on this work will include treatment plan comparisons using validated biological models between proton and helium ions, either within a Monte Carlo treatment planning engine based on the same FLUKA code, or an independent analytical planning system fed with a validated database of inputs calculated with FLUKA.
Roze, S; Liens, D; Palmer, A; Berger, W; Tucker, D; Renaudin, C
2006-12-01
The aim of this study was to describe a health economic model developed to project lifetime clinical and cost outcomes of lipid-modifying interventions in patients not reaching target lipid levels, and to assess the validity of the model. The internet-based computer simulation model is made up of two decision-analytic sub-models, the first utilizing Monte Carlo simulation and the second applying Markov modeling techniques. The Monte Carlo simulation generates a baseline cohort for long-term simulation by assigning an individual lipid profile to each patient and applying the treatment effects of the interventions under investigation. The Markov model then estimates the long-term clinical (coronary heart disease events, life expectancy, and quality-adjusted life expectancy) and cost outcomes up to a lifetime horizon, based on risk equations from the Framingham study. Internal and external validation analyses were performed. The results of the model validation analyses, plotted against corresponding real-life values from Framingham, 4S, AFCAPS/TexCAPS, and a meta-analysis by Gordon et al., showed that the majority of values were close to the y = x line, which indicates a perfect fit. The R2 value was 0.9575 and the gradient of the regression line was 0.9329, both very close to the perfect-fit value of 1. Validation analyses of the computer simulation model suggest that the model is able to recreate the outcomes of published clinical studies and would be a valuable tool for the evaluation of new and existing therapy options for patients with persistent dyslipidemia.
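The second stage of such a model, the Markov cohort simulation, can be illustrated in a few lines. The states, transition probabilities, costs, and horizon below are hypothetical, not the published model's Framingham-based risk equations:

```python
import numpy as np

def markov_cohort(P, costs, horizon=60, discount=0.03):
    """Markov cohort model sketch (states: well, post-CHD-event, dead).

    P: annual transition matrix (rows sum to 1); costs: annual cost per
    state. Returns within-horizon life-years and discounted lifetime
    cost per patient.
    """
    dist = np.array([1.0, 0.0, 0.0])        # cohort starts in 'well'
    life_years, total_cost = 0.0, 0.0
    for year in range(horizon):
        life_years += dist[:2].sum()        # fraction alive this year
        total_cost += dist @ costs / (1 + discount) ** year
        dist = dist @ P                     # advance one annual cycle
    return life_years, total_cost

P = np.array([[0.97, 0.02, 0.01],           # well -> well/CHD/dead
              [0.00, 0.92, 0.08],           # history of CHD event
              [0.00, 0.00, 1.00]])          # dead is absorbing
ly, cost = markov_cohort(P, costs=np.array([500.0, 4000.0, 0.0]))
```

In the full model, the Monte Carlo stage would first draw an individual lipid profile per patient and modulate these transition probabilities accordingly before running the Markov stage.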
Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1993-01-01
Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
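The construct-then-validate idea can be sketched end to end: fit a cheap input-output model to a handful of "expensive" evaluations, then bound its error on independent validation samples. Here a toy function stands in for the expensive Navier-Stokes calculation, the quadratic surrogate form and sample sizes are arbitrary, and the rigorous Bayesian validation bound is replaced by a simple max-error check:

```python
import math
import numpy as np

def build_and_validate_surrogate(expensive, lo, hi, n_fit=30, n_val=40,
                                 seed=3):
    """Fit a least-squares quadratic surrogate a + b*x + c*x^2 to
    n_fit expensive evaluations, then estimate its worst-case error
    on n_val independent validation inputs."""
    rng = np.random.default_rng(seed)
    # construction phase: expensive calls at random design points
    xs = rng.uniform(lo, hi, n_fit)
    ys = np.array([expensive(x) for x in xs])
    A = np.column_stack([np.ones(n_fit), xs, xs**2])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    surrogate = lambda x: coef[0] + coef[1] * x + coef[2] * x * x
    # validation phase: fresh random samples bound the surrogate error
    xv = rng.uniform(lo, hi, n_val)
    err_bound = max(abs(expensive(x) - surrogate(x)) for x in xv)
    return surrogate, err_bound

# toy stand-in for an expensive simulation output (e.g. Nusselt number)
f = lambda x: math.sin(x) + 0.1 * x
s, err_bound = build_and_validate_surrogate(f, 0.0, 1.0)
```

Once validated, the cheap surrogate `s` replaces `f` inside the optimization loop, with `err_bound` quantifying the fidelity given up in exchange.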
Large eddy simulation for atmospheric boundary layer flow over flat and complex terrains
NASA Astrophysics Data System (ADS)
Han, Yi; Stoellinger, Michael; Naughton, Jonathan
2016-09-01
In this work, we present Large Eddy Simulation (LES) results of atmospheric boundary layer (ABL) flow over complex terrain with neutral stratification using the OpenFOAM-based simulator for on/offshore wind farm applications (SOWFA). The complete work flow to investigate the LES for the ABL over real complex terrain is described including meteorological-tower data analysis, mesh generation and case set-up. New boundary conditions for the lateral and top boundaries are developed and validated to allow inflow and outflow as required in complex terrain simulations. The turbulent inflow data for the terrain simulation is generated using a precursor simulation of a flat and neutral ABL. Conditionally averaged met-tower data is used to specify the conditions for the flat precursor simulation and is also used for comparison with the simulation results of the terrain LES. A qualitative analysis of the simulation results reveals boundary layer separation and recirculation downstream of a prominent ridge that runs across the simulation domain. Comparisons of mean wind speed, standard deviation and direction between the computed results and the conditionally averaged tower data show a reasonable agreement.
CFD simulation and experimental validation of a GM type double inlet pulse tube refrigerator
NASA Astrophysics Data System (ADS)
Banjare, Y. P.; Sahoo, R. K.; Sarangi, S. K.
2010-04-01
The pulse tube refrigerator has the advantages of long life and low vibration over conventional cryocoolers, such as GM and Stirling coolers, because of the absence of moving parts at low temperature. This paper performs a three-dimensional computational fluid dynamic (CFD) simulation of a GM type double inlet pulse tube refrigerator (DIPTR), vertically aligned, operating under a variety of thermal boundary conditions. A commercial CFD software package, Fluent 6.1, is used to model the oscillating flow inside the pulse tube refrigerator. The simulation represents fully coupled systems operating in steady-periodic mode. The externally imposed boundary conditions are a sinusoidal pressure inlet, specified by a user-defined function at one end of the tube, and constant-temperature or heat-flux boundaries at the external walls of the cold-end heat exchangers. The experimental method to evaluate the optimum parameters of a DIPTR is difficult. On the other hand, developing a computer code for CFD analysis is equally complex. The objectives of the present investigation are to ascertain the suitability of the CFD-based commercial package Fluent for the study of energy and fluid flow in a DIPTR and to validate the CFD simulation results against available experimental data. The general results, such as the cool-down behaviour of the system, the phase relation between mass flow rate and pressure at the cold end, the temperature profile along the wall of the cooler, and the refrigeration load, are presented for different boundary conditions of the system. The results confirm that CFD-based Fluent simulations are capable of elucidating complex periodic processes in a DIPTR. The results also show excellent agreement between CFD simulation and experimental results.
A Hybrid Reality Radiation-free Simulator for Teaching Wire Navigation Skills
Kho, Jenniefer Y.; Johns, Brian D.; Thomas, Geb. W.; Karam, Matthew D.; Marsh, J. Lawrence; Anderson, Donald D.
2016-01-01
Objectives Surgical simulation is an increasingly important method to facilitate the acquisition of surgical skills. Simulation can be helpful in developing hip fracture fixation skills because it is a common procedure for which performance can be objectively assessed (i.e., by the tip-apex distance). The procedure requires fluoroscopic guidance to drill a wire along an osseous trajectory to a precise position within bone. The objective of this study was to assess the construct validity of a novel radiation-free simulator designed to teach wire navigation skills in hip fracture fixation. Methods Novices (N=30) with limited to no surgical experience in hip fracture fixation and experienced surgeons (N=10) participated. Participants drilled a guide wire into the center-center position of a synthetic femoral head in a hip fracture simulator, using electromagnetic sensors to track the guide wire position. Sensor data were gathered to generate fluoroscopic-like images of the hip and guide wire. Simulator performance of novice and experienced participants was compared to measure construct validity. Results The simulator was able to discriminate the accuracy in guide wire position between novices and experienced surgeons. Experienced surgeons achieved a more accurate tip-apex distance than novices (13 vs 23 mm, respectively, p=0.009). The magnitude of improvement on successive simulator attempts depended on the level of expertise; the tip-apex distance improved significantly in the novice group, while it was unchanged in the experienced group. Conclusions This hybrid reality, radiation-free hip fracture simulator, which combines real-world objects with computer-generated imagery, demonstrates construct validity by distinguishing the performance of novices from that of experienced surgeons. There is a differential effect depending on level of experience, and it could be used as an effective training tool for novice surgeons. PMID:26165262
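The tip-apex distance used as the outcome metric is conventionally computed from the two fluoroscopic views, with each measured offset corrected for radiographic magnification using the known implant diameter. A sketch with illustrative numbers (not data from the study):

```python
def tip_apex_distance(x_ap, x_lat, d_true, d_ap, d_lat):
    """Tip-apex distance: the sum of the tip-to-apex offsets measured on
    the AP and lateral views, each scaled by (true / apparent) implant
    diameter to correct for magnification. All lengths in mm.
    """
    return x_ap * (d_true / d_ap) + x_lat * (d_true / d_lat)

# e.g. 12 mm and 11 mm measured offsets with ~10% image magnification
tad = tip_apex_distance(12.0, 11.0, d_true=8.0, d_ap=8.8, d_lat=8.8)
```

A simulator that tracks the wire tip electromagnetically can evaluate this metric directly from the reconstructed geometry, without any radiation exposure.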
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.
In the present work, a Verification and Validation procedure is presented and applied, showing through a practical example how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology assesses that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition, due to the competition between the resistive and ideal interchange instabilities.
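The method of manufactured solutions can be demonstrated on a toy solver: choose an exact solution, derive the source term it implies, and confirm that the observed order of accuracy matches the scheme's design order. A sketch for a 1-D Poisson problem (not the GBS plasma model):

```python
import numpy as np

def solve_poisson_error(n):
    """Solve u'' = f on (0,1) with u(0)=u(1)=0 using second-order
    central differences. f is *manufactured* from the chosen exact
    solution u_exact = sin(pi x), so the error is exactly computable."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    f = -np.pi**2 * np.sin(np.pi * x[1:-1])      # manufactured source
    A = (np.diag(-2.0 * np.ones(n - 1)) +
         np.diag(np.ones(n - 2), 1) + np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return np.abs(u - np.sin(np.pi * x)).max()   # max nodal error

# halving h should cut the error ~4x for a second-order scheme
e1, e2 = solve_poisson_error(32), solve_poisson_error(64)
order = np.log2(e1 / e2)                         # observed order, ~2
```

An observed order matching the scheme's design order is the verification evidence: it shows the discretized equations are being solved correctly, independent of any comparison with experiment.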
Simulation Assessment Validation Environment (SAVE). Software User’s Manual
2000-09-01
requirements and decisions are made. The integration is leveraging work from other DoD organizations so that high-end results are attainable much faster than ... planning through the modeling and simulation data capture and visualization process. The planners can complete the manufacturing process plan with a high ... technologies. This tool is also used to perform “high level” factory process simulation prior to full CAD model development and help define feasible
Blast Load Simulator Experiments for Computational Model Validation: Report 1
2016-08-01
involving the inclusion of non-responding box-type structures in a BLS simulated blast environment. The BLS is a highly tunable compressed-gas-driven ... Blast Load Simulator (BLS) to evaluate its suitability for a future effort involving the inclusion of non-responding box-type structures located in ... Recommendations Preliminary testing indicated that inclusion of the grill and diaphragm striker resulted in a decrease in peak pressure of about 12
Flight Test Evaluation of the Airborne Information for Lateral Spacing (AILS) Concept
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2002-01-01
The Airborne Information for Lateral Spacing (AILS) concept is designed to support independent parallel approach operations to runways spaced as close as 2,500 feet. This report briefly describes the AILS operational concept and the results of a flight test of one implementation of this concept. The focus of this flight test experiment was to validate a prior simulator study, evaluating pilot performance, pilot acceptability, and minimum miss-distances for the rare situation in which an aircraft on one approach intrudes into the path of an aircraft on the other approach. Although the flight data set was not meant to be a statistically valid sample, the trends acquired in flight followed those of the simulator and therefore met the intent of validating the findings from the simulator. Results from this study showed that the design-goal mean miss-distance of 1,200 feet to potential collision situations was surpassed with an actual mean miss-distance of 1,859 feet. Pilot reaction times to the alerting system, which was an operational concern, averaged 0.65 seconds, well below the design-goal reaction time of 2.0 seconds. From the results of both of these tests, it can be concluded that this operational concept, with supporting technology and procedures, may provide an operationally viable means for conducting simultaneous, independent instrument approaches to runways spaced as close as 2,500 feet.
NASA Technical Reports Server (NTRS)
Pace, Dale K.
2000-01-01
A simulation conceptual model is a simulation developer's way of translating modeling requirements (i.e., what is to be represented by the simulation or its modification) into a detailed design framework (i.e., how it is to be done), from which the software, hardware, networks (in the case of distributed simulation), and systems/equipment that will make up the simulation can be built or modified. A conceptual model is the collection of information which describes a simulation developer's concept about the simulation and its pieces. That information consists of assumptions, algorithms, characteristics, relationships, and data. Taken together, these describe how the simulation developer understands what is to be represented by the simulation (entities, actions, tasks, processes, interactions, etc.) and how that representation will satisfy the requirements to which the simulation responds. Thus the conceptual model is the basis for judgment about simulation fidelity and validity for any condition that is not specifically tested. The more perspicuous and precise the conceptual model, the more likely it is that the simulation development will both fully satisfy requirements and allow demonstration that the requirements are satisfied (i.e., validation). Methods used in simulation conceptual model development have significant implications for simulation management and for assessment of simulation uncertainty. This paper suggests how to develop and document a simulation conceptual model so that the simulation fidelity and validity can be most effectively determined. These ideas for conceptual model development apply to all simulation varieties. The paper relates these ideas to uncertainty assessments as they relate to simulation fidelity and validity. The paper also explores implications for simulation management from conceptual model development methods, especially relative to reuse of simulation components.
Effect of soccer shoe upper on ball behaviour in curve kicks
Ishii, Hideyuki; Sakurai, Yoshihisa; Maruyama, Takeo
2014-01-01
New soccer shoes have been developed by considering various concepts related to kicking, such as curving a soccer ball. However, the effects of shoes on ball behaviour remain unclear. In this study, by using a finite element simulation, we investigated the factors that affect ball behaviour immediately after impact in a curve kick. Five experienced male university soccer players performed one curve kick. We developed a finite element model of the foot and ball and evaluated the validity of the model by comparing the finite element results for the ball behaviour immediately after impact with the experimental results. The launch angle, ball velocity, and ball rotation in the finite element analysis were all in general agreement with the experimental results. Using the validated finite element model, we simulated the ball behaviour. The simulation results indicated that the larger the foot velocity immediately before impact, the larger the ball velocity and ball rotation. Furthermore, the Young's modulus of the shoe upper and the coefficient of friction between the shoe upper and the ball had little effect on the launch angle, ball velocity, and ball rotation. The results of this study suggest that the shoe upper does not significantly influence ball behaviour. PMID:25266788
Reduction of artifacts in computer simulation of breast Cooper's ligaments
NASA Astrophysics Data System (ADS)
Pokrajac, David D.; Kuperavage, Adam; Maidment, Andrew D. A.; Bakic, Predrag R.
2016-03-01
Anthropomorphic software breast phantoms have been introduced as a tool for quantitative validation of breast imaging systems. The efficacy of the validation results depends on the realism of phantom images. The recursive partitioning algorithm based upon octree simulation has been demonstrated to be versatile and capable of efficiently generating a large number of phantoms to support virtual clinical trials of breast imaging. Previously, we have observed specific artifacts (here labeled "dents") on the boundaries of simulated Cooper's ligaments. In this work, we have demonstrated that these "dents" result from the approximate determination of the closest simulated ligament to an examined subvolume (i.e., octree node) of the phantom. We propose a modification of the algorithm that determines the closest ligament by considering a pre-specified number of neighboring ligaments, selected based upon the functions that govern the shape of the ligaments simulated in the subvolume. We have qualitatively and quantitatively demonstrated that the modified algorithm can eliminate or reduce dent artifacts in software phantoms. In a proof-of-concept example, we simulated a 450 ml phantom with 333 compartments at 100 micrometer resolution. After the proposed modification, we corrected 148,105 dents, with an average size of 5.27 voxels (5.27 nl). We have also qualitatively analyzed the corresponding improvement in the appearance of simulated mammographic images. The proposed algorithm leads to a reduction of linear and star-like artifacts in simulated phantom projections, which can be attributed to dents. Analysis of a larger number of phantoms is ongoing.
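The proposed fix can be sketched as replacing a single approximate nearest-ligament lookup with an exact minimum over a short list of candidate neighboring ligaments; the seed points and Euclidean distance below are illustrative stand-ins for the phantom's actual ligament shape functions:

```python
import numpy as np

def closest_seed(point, seeds, candidate_idx):
    """Return the index of the seed closest to `point`, examining only the
    pre-selected candidates. The modified algorithm checks several
    neighbouring ligament seeds per subvolume instead of a single approximate
    one; labeling a subvolume with the wrong ligament is what produces a
    "dent" on the simulated boundary."""
    d = np.linalg.norm(seeds[candidate_idx] - point, axis=1)
    return candidate_idx[int(np.argmin(d))]

rng = np.random.default_rng(0)
seeds = rng.random((50, 3))           # hypothetical ligament seed points
p = np.array([0.5, 0.5, 0.5])         # centre of an examined subvolume
# With the candidate list covering all seeds the lookup is exact:
exact = closest_seed(p, seeds, np.arange(len(seeds)))
```

In the phantom, the candidate list would be the pre-specified number of neighboring ligaments mentioned in the abstract, so the per-subvolume cost stays small while the labeling becomes exact within that neighborhood.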
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J S; Tsui, Benjamin M W
2008-07-01
The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: simulation system for emission tomography (SimSET) and GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator/detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to approximately 10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data was also observed. In conclusion, the authors have successfully integrated the SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques.
Advanced simulation model for IPM motor drive with considering phase voltage and stator inductance
NASA Astrophysics Data System (ADS)
Lee, Dong-Myung; Park, Hyun-Jong; Lee, Ju
2016-10-01
This paper proposes an advanced simulation model of the driving system for Interior Permanent Magnet (IPM) BrushLess Direct Current (BLDC) motors driven by the 120-degree conduction method (two-phase conduction method, TPCM), which is widely used for sensorless control of BLDC motors. BLDC motors can be classified as SPM (Surface-mounted Permanent Magnet) and IPM motors. The simulation model of a driving system with SPM motors is simple because the stator inductance is constant regardless of the rotor position, and such models have been proposed in many studies. On the other hand, simulation models for IPM driving systems in graphic-based simulation tools such as Matlab/Simulink have not been proposed. Simulating the driving system of an IPM with TPCM is complex because the stator inductances of an IPM vary with the rotor position, as the permanent magnets are embedded in the rotor. To develop a sensorless scheme or improve control performance, development of the control algorithm through simulation study is essential, and a simulation model that accurately reflects the characteristics of the IPM is required. Therefore, this paper presents an advanced simulation model of the IPM driving system, which takes into account the unique characteristic of the IPM due to the position-dependent inductances. The proposed simulation model is validated by comparison with experimental and simulation results using an IPM with the TPCM control scheme.
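The position dependence that complicates the IPM model is commonly written as a second-harmonic variation of the stator self-inductance with electrical rotor angle; for an SPM machine the second-harmonic term is near zero, which is why its model is simpler. A sketch of this standard salient-machine relation, with illustrative inductance values not taken from the paper:

```python
import numpy as np

def phase_inductance(theta_e, L_s0=1.2e-3, L_s2=0.3e-3):
    """Phase self-inductance of a salient (IPM) machine versus electrical
    rotor angle: L_a = L_s0 + L_s2*cos(2*theta_e). For an SPM machine
    L_s2 ~ 0 and the inductance is constant. L_s0 and L_s2 here are
    illustrative values in henries."""
    return L_s0 + L_s2 * np.cos(2.0 * theta_e)

theta = np.linspace(0.0, 2.0 * np.pi, 360)
L_a = phase_inductance(theta)   # varies twice per electrical revolution
```

A simulation model that tabulates or evaluates such angle-dependent inductances at every step is what the paper's "advanced" model adds over the constant-inductance SPM case.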
A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including the use of multidimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and validate the multidimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multidimensional numerical model which resulted in a net heat input of 240.3 W. The computational methodology resulted in a value of net heat input that was 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.
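The 1.7 percent figure follows directly from the two net heat input values:

```python
measured = 244.4   # W, net heat input from laboratory validation test
computed = 240.3   # W, net heat input from the multidimensional model
pct_low = 100.0 * (measured - computed) / measured   # model under-prediction, %
```

Rounded to one decimal place this gives the 1.7 percent discrepancy quoted in the abstract.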
Dreger, Douglas S.; Beroza, Gregory C.; Day, Steven M.; Goulet, Christine A.; Jordan, Thomas H; Spudich, Paul A.; Stewart, Jonathan P.
2015-01-01
This paper summarizes the evaluation of ground motion simulation methods implemented on the SCEC Broadband Platform (BBP), version 14.3 (as of March 2014). A seven-member panel, the authorship of this article, was formed to evaluate those methods for the prediction of pseudo-spectral accelerations (PSAs) of ground motion. The panel’s mandate was to evaluate the methods using tools developed through the validation exercise (Goulet et al., 2014), and to define validation metrics for the assessment of the methods’ performance. This paper summarizes the evaluation process and conclusions from the panel. The five broadband, finite-source simulation methods on the BBP include two deterministic approaches herein referred to as CSM (Anderson, 2014) and UCSB (Crempien and Archuleta, 2014); a band-limited stochastic white noise method called EXSIM (Atkinson and Assatourians, 2014); and two hybrid approaches, referred to as G&P (Graves and Pitarka, 2014) and SDSU (Olsen and Takedatsu, 2014), which utilize a deterministic Green’s function approach for periods longer than 1 second and stochastic methods for periods shorter than 1 second. Two acceptance tests were defined to validate the broadband finite-source ground motion methods (Goulet et al., 2014). Part A compared observed and simulated PSAs for periods from 0.01 to 10 seconds for 12 moderate to large earthquakes located in California, Japan, and the eastern US. Part B compared the median simulated PSAs to published NGA-West1 (Abrahamson and Silva, 2008; Boore and Atkinson, 2008; Campbell and Bozorgnia, 2008; and Chiou and Youngs, 2008) ground motion prediction equations (GMPEs) for specific magnitude and distance cases using a pass-fail criterion based on a defined acceptable range around the spectral shape of the GMPEs. For the initial Part A and Part B validation exercises during the summer of 2013, the software for the five methods was locked in at version 13.6 (see Maechling et al., 2014).
In the spring of 2014, additional moderate events were considered for the Part A validation, and additional magnitude and distance cases were considered for the Part B validation, for the software locked in at version 14.3. Several of the simulation procedures, specifically UCSB and SDSU, changed significantly between versions 13.6 and 14.3. The CSM code was not submitted in time for the v14.3 evaluation and its detailed performance is not addressed in this paper. As described in Goulet et al. (2014) and Maechling et al. (2014), the BBP generates a variety of products, including three-component acceleration time series. A series of post-processing codes were developed to provide individual component PSAs and the average median horizontal-component PSA (referred to as RotD50; Boore, 2010) for oscillator periods ranging from 0.01 to 10 seconds, as well as median PSA values computed using the NGA-West1 GMPEs. The BBP was also configured to provide statistical analysis of simulation results relative to recordings (Part A) and GMPEs (Part B) as described further in sections below. As part of our evaluation, we reviewed documentation provided by each of the developers, which included the technical basis behind the methods and the developers’ self-assessments regarding the extrapolation capabilities (in terms of magnitude and distance ranges) of their methods. Two workshops were held in which methods and results were presented, and the panel was given the opportunity to question the developers and to have detailed technical discussions. A SCEC report (Dreger et al., 2013) describes the results of this review for BBP version 13.6. This paper summarizes that work and presents results for the more recent BBP 14.3 validation.
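The RotD50 measure referenced above (Boore, 2010) is the median, over rotation angles from 0 to 179 degrees, of the peak response of the two horizontal components combined at each angle. A minimal sketch applied to peak acceleration of the rotated traces (a full implementation would compute the oscillator response at each period before taking the peak and the median):

```python
import numpy as np

def rotd50_peak(a1, a2, n_angles=180):
    """RotD50-style measure of two orthogonal horizontal records: rotate
    through 0..179 degrees, take the absolute peak of each rotated trace,
    and return the median over angles."""
    angles = np.deg2rad(np.arange(n_angles))
    peaks = [np.max(np.abs(a1 * np.cos(t) + a2 * np.sin(t))) for t in angles]
    return float(np.median(peaks))

# Synthetic horizontal components (illustrative, not BBP output)
t = np.linspace(0.0, 10.0, 1000)
a1 = np.sin(2.0 * np.pi * 1.0 * t)
a2 = 0.5 * np.cos(2.0 * np.pi * 1.0 * t)
gm_rotd50 = rotd50_peak(a1, a2)
```

Because the median is taken over orientations, RotD50 is independent of how the recording instrument happened to be oriented, which is why the BBP post-processing uses it as the component-combination rule.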
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epiney, A.; Canepa, S.; Zerkak, O.
The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role to achieve a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6.
The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: first, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension, in which imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borges, C.; Zarza-Moreno, M.; Heath, E.
2012-01-15
Purpose: The most recent Varian micro multileaf collimator (MLC), the High Definition (HD120) MLC, was modeled using the BEAMNRC Monte Carlo code. This model was incorporated into a Varian medical linear accelerator, for a 6 MV beam, in static and dynamic mode. The model was validated by comparing simulated profiles with measurements. Methods: The Varian Trilogy (2300C/D) accelerator model was accurately implemented using the state-of-the-art Monte Carlo simulation program BEAMNRC and validated against off-axis and depth dose profiles measured using ionization chambers, by adjusting the energy and the full width at half maximum (FWHM) of the initial electron beam. The HD120 MLC was modeled by developing a new BEAMNRC component module (CM), designated HDMLC, adapting the available DYNVMLC CM and incorporating the specific characteristics of this new micro MLC. The leaf dimensions were provided by the manufacturer. The geometry was visualized by tracing particles through the CM and recording their position when a leaf boundary is crossed. The leaf material density and abutting air gap between leaves were adjusted in order to obtain a good agreement between the simulated leakage profiles and EBT2 film measurements performed in a solid water phantom. To validate the HDMLC implementation, additional MLC static patterns were also simulated and compared to additional measurements. Furthermore, the ability to simulate dynamic MLC fields was implemented in the HDMLC CM. The simulation results of these fields were compared with EBT2 film measurements performed in a solid water phantom. Results: Overall, the discrepancies, with and without MLC, between the opened field simulations and the measurements using ionization chambers in a water phantom, for the off-axis profiles are below 2% and in depth-dose profiles are below 2% after the maximum dose depth and below 4% in the build-up region.
Under the conditions of these simulations, this tungsten-based MLC has a density of 18.7 g cm⁻³ and an overall leakage of about 1.1 ± 0.03%. The discrepancies between the film-measured and simulated closed and blocked fields are below 2% and 8%, respectively. Other measurements were performed for alternated leaf patterns and the agreement is satisfactory (to within 4%). The dynamic mode for this MLC was implemented and the discrepancies between film measurements and simulations are within 4%. Conclusions: The Varian Trilogy (2300 C/D) linear accelerator including the HD120 MLC was successfully modeled and simulated using the Monte Carlo BEAMNRC code by developing an independent CM, the HDMLC CM, in both static and dynamic modes.
Update on simulation-based surgical training and assessment in ophthalmology: a systematic review.
Thomsen, Ann Sofia S; Subhi, Yousif; Kiilgaard, Jens Folke; la Cour, Morten; Konge, Lars
2015-06-01
This study reviews the evidence behind simulation-based surgical training of ophthalmologists to determine (1) the validity of the reported models and (2) the ability to transfer skills to the operating room. Simulation-based training is established widely within ophthalmology, although it often lacks a scientific basis for implementation. We conducted a systematic review of trials involving simulation-based training or assessment of ophthalmic surgical skills among health professionals. The search included 5 databases (PubMed, EMBASE, PsycINFO, Cochrane Library, and Web of Science) and was completed on March 1, 2014. Overall, the included trials were divided into animal, cadaver, inanimate, and virtual-reality models. Risk of bias was assessed using the Cochrane Collaboration's tool. Validity evidence was evaluated using a modern validity framework (Messick's). We screened 1368 reports for eligibility and included 118 trials. The most common surgery simulated was cataract surgery. Most validity trials investigated only 1 or 2 of 5 sources of validity (87%). Only 2 trials (48 participants) investigated transfer of skills to the operating room; 4 trials (65 participants) evaluated the effect of simulation-based training on patient-related outcomes. Because of heterogeneity of the studies, it was not possible to conduct a quantitative analysis. The methodologic rigor of trials investigating simulation-based surgical training in ophthalmology is inadequate. To ensure effective implementation of training models, evidence-based knowledge of validity and efficacy is needed. We provide a useful tool for implementation and evaluation of research in simulation-based training. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Face and content validity of the virtual reality simulator 'ScanTrainer®'.
Alsalamah, Amal; Campo, Rudi; Tanos, Vasilios; Grimbizis, Gregoris; Van Belle, Yves; Hood, Kerenza; Pugh, Neil; Amso, Nazar
2017-01-01
Ultrasonography is a first-line imaging modality in the investigation of women's irregular bleeding and other gynaecological pathologies, e.g. ovarian cysts and early pregnancy problems. However, teaching ultrasound, especially transvaginal scanning, remains a challenge for health professionals. New technology such as simulation may potentially facilitate and expedite the process of learning ultrasound. Simulation may prove to be realistic, very close to the real patient scanning experience for the sonographer, and objectively able to assist the development of basic skills such as image manipulation, hand-eye coordination and examination technique. The aim of this study was to determine the face and content validity of a virtual reality simulator (ScanTrainer®, MedaPhor plc, Cardiff, Wales, UK) as reflective of real transvaginal ultrasound (TVUS) scanning. A questionnaire with 14 simulator-related statements was distributed to a number of participants with differing levels of sonography experience in order to determine the level of agreement between the use of the simulator in training and real practice. There were 36 participants: novices (n = 25) and experts (n = 11) who rated the simulator. Median scores of face validity statements between experts and non-experts on a 10-point visual analogue scale (VAS) ranged between 7.5 and 9.0 (p > 0.05), indicating a high level of agreement. Experts' median scores of content validity statements ranged from 8.4 to 9.0. The findings confirm that the simulator has the feel and look of real-time scanning, with high face validity. Similarly, its tutorial structures and learning steps confirm the content validity.
NASA Astrophysics Data System (ADS)
da Silva, Felipe das Neves Roque; Alves, José Luis Drummond; Cataldi, Marcio
2018-03-01
This paper aims to validate inflow simulations concerning the present-day climate at Água Vermelha Hydroelectric Plant (AVHP, located on the Grande River Basin) based on the Soil Moisture Accounting Procedure (SMAP) hydrological model. In order to provide rainfall data to the SMAP model, the RegCM regional climate model was also used, with boundary conditions from the MIROC model. Initially, the present-day climate simulation performed by the RegCM model was analyzed. It was found that, in terms of rainfall, the model was able to simulate the main patterns observed over South America. A bias correction technique was also applied and proved essential to reduce errors in the rainfall simulation. Comparison between rainfall simulations from RegCM and MIROC showed improvements when the dynamical downscaling was performed. Then SMAP, a rainfall-runoff hydrological model, was used to simulate inflows at Água Vermelha Hydroelectric Plant. After calibration with observed rainfall, SMAP simulations were evaluated in two periods different from the one used in calibration. During calibration, SMAP captured the inflow variability observed at AVHP. During the validation periods, the hydrological model obtained better results and statistics with observed rainfall. In spite of some discrepancies, the use of simulated rainfall without bias correction still captured the interannual flow variability. However, applying bias removal to the rainfall simulated by RegCM brought significant improvements to the simulation of natural inflows performed by SMAP: not only did the simulated inflow curve become more similar to the observed one, but the statistics also improved. Improvements were also noticed in the inflow simulation when the rainfall was provided by the regional climate model rather than the global model. In general, the results obtained so far show that the regional climate model added value to the rainfall relative to the global climate model, and that data from regional models must be bias-corrected in order to improve their results.
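The abstract does not state which bias-correction technique was applied to the RegCM rainfall; empirical quantile mapping is one common choice and illustrates the idea: each simulated value is replaced by the observed value at the same quantile of the reference-period distributions. A hedged sketch with synthetic data:

```python
import numpy as np

def quantile_map(sim, obs_ref, sim_ref):
    """Empirical quantile-mapping bias correction: look up each simulated
    value's quantile in the simulated reference distribution, then return the
    observed reference value at that quantile. One common technique; not
    necessarily the one used in the paper."""
    q = np.interp(sim, np.sort(sim_ref), np.linspace(0.0, 1.0, len(sim_ref)))
    return np.interp(q, np.linspace(0.0, 1.0, len(obs_ref)), np.sort(obs_ref))

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, 1000)         # synthetic observed daily rainfall
sim = rng.gamma(2.0, 5.0, 1000) * 1.3   # model output with a 30% wet bias
corrected = quantile_map(sim, obs, sim)  # distribution pulled back onto obs
```

Correcting the full distribution rather than just the mean is what lets a hydrological model driven by the corrected rainfall reproduce both average inflows and their variability.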
Mahalingam, S; Awad, Z; Tolley, N S; Khemani, S
2016-08-01
The objective of this study was to identify and investigate the face and content validity of ventilation tube insertion (VTI) training models described in the literature. A review of the literature was carried out to identify articles describing VTI simulators. Feasible models were replicated and assessed by a group of experts at a postgraduate simulation centre. Experts were defined as surgeons who had performed at least 100 VTIs on patients. Seventeen experts participated, ensuring sufficient statistical power for analysis. A standardised 18-item Likert-scale questionnaire was used. This addressed face validity (realism), global and task-specific content (suitability of the model for teaching) and curriculum recommendation. The search revealed eleven models, of which only five had associated validity data. Five models were found to be feasible to replicate. None of the tested models achieved face or global content validity. Only one model achieved task-specific validity, and hence there was no agreement on curriculum recommendation. The quality of simulation models is moderate and there is room for improvement. There is a need for new models to be developed, or existing ones to be refined, in order to construct a more realistic training platform for VTI simulation. © 2015 John Wiley & Sons Ltd.
Pre-compression volume on flow ripple reduction of a piston pump
NASA Astrophysics Data System (ADS)
Xu, Bing; Song, Yuechao; Yang, Huayong
2013-11-01
An axial piston pump with a pre-compression volume (PCV) has a lower flow ripple over a large range of operating conditions than the traditional design. However, there has been no precise simulation model of the axial piston pump with PCV, so the PCV parameters are difficult to determine. A finite element simulation model for a piston pump with PCV is built by considering the piston movement, the fluid characteristics (including fluid compressibility and viscosity) and the leakage flow rate. Then a test of the pump flow ripple, called the secondary source method, is implemented to validate the simulation model. Thirdly, by comparing the simulation results, test results and results from other publications at the same operating condition, the simulation model is validated and used in optimizing the axial piston pump with PCV. According to the pump flow ripples obtained by the simulation model with different PCV parameters, the flow ripple is smallest when the PCV angle is 13° and the PCV volume is 1.3×10⁻⁴ m³, under the operating condition in which the pump suction pressure is 2 MPa, the pump delivery pressure is 15 MPa, the pump speed is 1 000 r/min and the swash plate angle is 13°. The flow ripple can also be reduced when the pump suction pressure is 2 MPa, the pump delivery pressure is 5 MPa, 15 MPa or 22 MPa, the pump speed is 400 r/min, 1 000 r/min or 1 500 r/min, and the swash plate angle is 11°, 13°, 15° or 17°, respectively. The proposed finite element simulation model provides a method for optimizing the PCV structure and guidance for designing a quieter axial piston pump.
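A simple figure of merit behind this optimization is the flow non-uniformity (ripple) index: the peak-to-peak variation of the delivery flow normalized by its mean, which the PCV parameters are tuned to minimize. A sketch with an illustrative 9-piston flow waveform (the waveform and values are not from the paper):

```python
import numpy as np

def flow_ripple(q):
    """Flow non-uniformity index: (Q_max - Q_min) / Q_mean.
    Smaller values mean a smoother delivery flow."""
    q = np.asarray(q, dtype=float)
    return (q.max() - q.min()) / q.mean()

# One shaft revolution at 1 000 r/min (0.06 s), with a +/-5% ripple at the
# piston passing frequency of a 9-piston pump (illustrative values).
t = np.linspace(0.0, 0.06, 2000)
q = 1.0 + 0.05 * np.sin(2.0 * np.pi * 9.0 * t / 0.06)
ripple = flow_ripple(q)
```

Comparing this index across candidate PCV angles and volumes is the kind of sweep the validated simulation model makes practical.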
Validating clustering of molecular dynamics simulations using polymer models
2011-01-01
Background Molecular dynamics (MD) simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine if the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. Results We develop a novel series of models using basic polymer theory that have intuitive, clearly-defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well-suited for clustering polymer structures, to our models and MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. Conclusions We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models which exhibit well-behaved dynamics including: meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. 
To our knowledge, our framework is the first to utilize model polymers to rigorously test the utility of clustering algorithms for studying biopolymers. PMID:22082218
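The core algorithm in the study above, spectral clustering on pairwise structural distances, can be sketched in a few lines of NumPy. This is a generic Ng-Jordan-Weiss-style implementation under assumed defaults (Gaussian affinity, farthest-point k-means seeding), not the authors' exact pipeline; in practice the distance matrix would hold RMSDs between MD snapshots.

```python
import numpy as np

def spectral_cluster(dist, k, sigma=1.0, iters=50):
    """Normalized spectral clustering on a pairwise distance matrix,
    e.g. RMSDs between conformations sampled from an MD trajectory."""
    W = np.exp(-dist**2 / (2.0 * sigma**2))      # Gaussian affinity
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    L_sym = np.eye(len(W)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L_sym)
    U = vecs[:, :k]                              # k smallest eigenvectors
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    # farthest-point seeding, then plain k-means on the spectral embedding
    centers = [U[0]]
    for _ in range(1, k):
        d2 = ((U[:, None, :] - np.array(centers)[None])**2).sum(-1).min(axis=1)
        centers.append(U[np.argmax(d2)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((U[:, None, :] - centers[None])**2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = U[labels == j].mean(axis=0)
    return labels
```

Applied to two well-separated groups of structures, the embedding collapses each meta-stable state onto a point, which is why the algorithm recovers the polymer models' states so cleanly.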
Observing System Simulation Experiments: An Overview
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2016-01-01
An overview of Observing System Simulation Experiments (OSSEs) will be given, with a focus on calibration and validation of OSSE frameworks. Pitfalls and practices will be discussed, including observation error characteristics, incestuousness, and experimental design. The potential use of OSSEs for investigating the behaviour of data assimilation systems will be explored, including some results from experiments using the NASA GMAO OSSE.
Constructing and Evaluating a Validity Argument for the Final-Year Ward Simulation Exercise
ERIC Educational Resources Information Center
Till, Hettie; Ker, Jean; Myford, Carol; Stirling, Kevin; Mires, Gary
2015-01-01
The authors report final-year ward simulation data from the University of Dundee Medical School. Faculty who designed this assessment intend for the final score to represent an individual senior medical student's level of clinical performance. The results are included in each student's portfolio as one source of evidence of the student's…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, C., E-mail: hansec@uw.edu; Columbia University, New York, New York 10027; Victor, B.
We present application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
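Numerically, a Biorthogonal Decomposition of probe-array data is an SVD of the probes-by-time signal matrix. The sketch below shows the decomposition plus one plausible scalar agreement metric, the mean cosine similarity between corresponding spatial mode structures; the function names and this particular metric are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def bd_modes(data, n_modes):
    """Biorthogonal decomposition of a (n_probes x n_times) signal matrix:
    SVD yields spatial modes ('topos', columns of U), temporal modes
    ('chronos', rows of Vt) and mode weights (singular values s)."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    return U[:, :n_modes], s[:n_modes], Vt[:n_modes]

def mode_correlation(U_a, U_b):
    """Scalar agreement metric: mean |cosine similarity| between
    corresponding spatial modes (SVD mode signs are arbitrary)."""
    cos = np.abs(np.sum(U_a * U_b, axis=0))  # columns are unit vectors
    return float(cos.mean())
```

Comparing the spatial modes extracted from experimental probe data with those from synthetic probe signals sampled from a simulation at the same probe locations reduces the comparison to a single number per mode set.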
Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation
NASA Astrophysics Data System (ADS)
Maiti, Raman
2016-06-01
The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results obtained in the form of internal external rotations and anterior posterior displacements for a new and experimentally simulated specimen for patella femoral joint under standard gait condition were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and the past studies was observed when the ligament load was removed and the medial lateral displacement was constrained. The model is sensitive to ±5 % change in kinematics, frictional, force and stiffness coefficients and insensitive to time step.
Numerical modeling of local scour around hydraulic structure in sandy beds by dynamic mesh method
NASA Astrophysics Data System (ADS)
Fan, Fei; Liang, Bingchen; Bai, Yuchuan; Zhu, Zhixia; Zhu, Yanjun
2017-10-01
Local scour, a non-negligible factor in hydraulic engineering, endangers the safety of hydraulic structures. In this work, a numerical model for simulating local scour was constructed based on the open-source computational fluid dynamics code OpenFOAM. We consider both bedload and suspended-load sediment transport in the scour model and adopt the dynamic mesh method to simulate the evolution of the bed elevation. We use the finite area method to project data between the three-dimensional flow model and the two-dimensional (2D) scour model. We also improved the 2D sand slide method and added it to the scour model to correct the bed bathymetry when the bed slope angle exceeds the angle of repose. Moreover, to validate the scour model, we conducted three experiments and compared their results with those of the developed model. The validation results show that the developed model can reliably simulate local scour.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis.
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-12-10
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
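Once two segment-mounted IMUs have been aligned to their body segments, a joint angle can be read off the relative rotation between the two orientation estimates. The sketch below shows only that final step, under the assumption that the rotation matrices are already expressed in the segment frames; the paper's full calibration procedure and sensor fusion are not reproduced here.

```python
import numpy as np

def joint_angle_deg(R_proximal, R_distal):
    """3D joint angle (degrees) between two segment orientations, taken
    as the magnitude of the relative rotation R_rel = R_p^T R_d."""
    R_rel = R_proximal.T @ R_distal
    c = (np.trace(R_rel) - 1.0) / 2.0   # cos(theta) from the trace
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
```

For a hinge-like joint such as the knee, this magnitude coincides with the flexion angle whenever the relative rotation stays about a single axis; misalignment of the sensor frames shows up as the secondary-plane errors the authors describe.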
Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches
NASA Astrophysics Data System (ADS)
Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia
2017-10-01
With an increasing level of complexity and automation in the area of automotive engineering, the simulation of safety-relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to account for tyre temperatures and the resulting changes in tyre characteristics has risen significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To evaluate the range of application of the presented approaches with respect to further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, attention is paid to computational time and the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements of a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 from an industrial flat test bench are used. Finally, the simulation results are compared with the measurement data.
The timing of bud burst and its effect on tree growth.
Rötzer, T; Grote, R; Pretzsch, H
2004-02-01
A phenology model for estimating the timings of bud burst--one of the most influential phenological phases for the simulation of tree growth--is presented in this study. The model calculates the timings of the leafing of beech (Fagus sylvatica L.) and oak (Quercus robur L.) and the May shoot of Norway spruce (Picea abies L.) and Scots pine (Pinus sylvestris L.) on the basis of the daily maximum temperature. The data for parameterisation and validation of the model have been taken from 40 climate and 120 phenological stations in southern Germany with time series for temperature and bud burst of up to 30 years. The validation of the phenology module by means of an independent data set showed correlation coefficients for comparisons between observed and simulated values of 54% (beech), 55% (oak), 59% (spruce) and 56% (pine) with mean absolute errors varying from 4.4 days (spruce) to 5.0 days (pine). These results correspond well with the results of other--often more complex--phenology models. After the phenology module had been implemented in the tree-growth model BALANCE, the growth of a mixed forest stand with the former static and the new dynamic timings for the bud burst was simulated. The results of the two simulation runs showed that phenology has to be taken into account when simulating forest growth, particularly in mixed stands.
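A common minimal form of such a phenology model is a thermal-sum (degree-day) rule driven by daily maximum temperature: forcing units accumulate once a start date is passed, and bud burst is predicted when a required sum is reached. The sketch below is illustrative only; the base temperature, forcing requirement and start day are placeholder values, not the parameters fitted in the study.

```python
def bud_burst_day(tmax, t_base=5.0, forcing_req=150.0, start_day=32):
    """Predict the day of year of bud burst by accumulating daily
    forcing units max(Tmax - Tbase, 0) from start_day onward until
    forcing_req is reached.  All three thresholds are placeholders,
    not the values parameterised in the paper."""
    forcing = 0.0
    for day in range(start_day, len(tmax) + 1):
        forcing += max(tmax[day - 1] - t_base, 0.0)
        if forcing >= forcing_req:
            return day
    return None  # forcing requirement never met in this series
```

Feeding such a rule the 30-year station temperature series and comparing predicted with observed leafing dates is exactly the kind of validation the abstract reports (mean absolute errors of a few days).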
NASA Technical Reports Server (NTRS)
LaCasse, Katherine M.; Splitt, Michael E.; Lazarus, Steven M.; Lapenta, William M.
2008-01-01
High- and low-resolution sea surface temperature (SST) analysis products are used to initialize the Weather Research and Forecasting (WRF) Model for May 2004 for short-term forecasts over Florida and surrounding waters. Initial and boundary conditions for the simulations were provided by a combination of observations, large-scale model output, and analysis products. The impact of using a 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) SST composite on subsequent evolution of the marine atmospheric boundary layer (MABL) is assessed through simulation comparisons and limited validation. Model results are presented for individual simulations, as well as for aggregates of easterly- and westerly-dominated low-level flows. The simulation comparisons show that the use of MODIS SST composites results in enhanced convergence zones, earlier and more intense horizontal convective rolls, and an increase in precipitation as well as a change in precipitation location. Validation of 10-m winds with buoys shows a slight improvement in wind speed. The most significant results of this study are that 1) vertical wind stress divergence and pressure gradient accelerations across the Florida Current region vary in importance as a function of flow direction and stability and 2) the warmer Florida Current in the MODIS product transports heat vertically and downwind of this heat source, modifying the thermal structure and the MABL wind field primarily through pressure gradient adjustments.
A relationship between peak temperature drop and velocity differential in a microburst
NASA Technical Reports Server (NTRS)
Proctor, Fred H.
1989-01-01
Results from numerical microburst simulations using the Terminal Area Simulation System (Proctor, 1987) are used to develop a relationship between wind velocity differential and peak temperature drop. The numerical model and the relationships derived from the model are described. The relationship between peak temperature drop and differential wind velocity is shown to be valid during microburst development, for all precipitation shaft intensities and diameters. It is found that the relationship is not valid for low-reflectivity microburst events or in the presence of ground-based stable layers. The use of the relationship in IR wind shear detection systems is considered.
Juarez, Juan C; Brown, David M; Young, David W
2014-05-19
Current Strehl ratio models for actively compensated free-space optical communications terminals do not accurately predict system performance under strong turbulence conditions as they are based on weak turbulence theory. For evaluation of compensated systems, we present an approach for simulating the Strehl ratio with both low-order (tip/tilt) and higher-order (adaptive optics) correction. Our simulation results are then compared to the published models and their range of turbulence validity is assessed. Finally, we propose a new Strehl ratio model and antenna gain equation that are valid for general turbulence conditions independent of the degree of compensation.
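For reference, the weak-turbulence regime that the published models are restricted to is typically handled with the Maréchal approximation, which maps residual wavefront phase variance to Strehl ratio. A one-line sketch (valid only for small residuals, which is exactly the limitation the authors point out):

```python
import math

def strehl_marechal(wavefront_variance_rad2):
    """Marechal approximation: Strehl ratio ~ exp(-sigma_phi^2), with
    sigma_phi^2 the residual wavefront phase variance in rad^2.
    Accurate only for small residuals, i.e. the weak-turbulence or
    well-compensated regime."""
    return math.exp(-wavefront_variance_rad2)
```

Under strong turbulence the residual variance after partial compensation is no longer small, which is why a model of this form breaks down and a generalized Strehl model is needed.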
A design procedure for the handling qualities optimization of the X-29A aircraft
NASA Technical Reports Server (NTRS)
Bosworth, John T.; Cox, Timothy H.
1989-01-01
The techniques used to improve the pitch-axis handling qualities of the X-29A wing-canard-planform fighter aircraft are reviewed. The aircraft and its FCS are briefly described, and the design method, which works within the existing FCS architecture, is characterized in detail. Consideration is given to the selection of design goals and design variables, the definition and calculation of the cost function, the validation of the mathematical model on the basis of flight-test data, and the validation of the improved design by means of nonlinear simulations. Flight tests of the improved design are shown to verify the simulation results.
NASA Astrophysics Data System (ADS)
Deng, B.; Xiao, L.; Zhao, X.; Baker, E.; Gong, D.; Guo, D.; He, H.; Hou, S.; Liu, C.; Liu, T.; Sun, Q.; Thomas, J.; Wang, J.; Xiang, A. C.; Yang, D.; Ye, J.; Zhou, W.
2018-05-01
Two optical data link data transmission Application Specific Integrated Circuits (ASICs), the baseline and its backup, have been designed for the ATLAS Liquid Argon (LAr) Calorimeter Phase-I trigger upgrade. The latency of each ASIC and that of its corresponding receiver implemented in a back-end Field-Programmable Gate Array (FPGA) are critical specifications. In this paper, we present the latency measurements and simulation of two ASICs. The measurement results indicate that both ASICs achieve their design goals and meet the latency specifications. The consistency between the simulation and measurements validates the ASIC latency characterization.
Hierarchical CAD Tools for Radiation Hardened Mixed Signal Electronic Circuits
2005-01-28
[List of figures, excerpt] Figure 3: Schematic of Analog and Digital Components; Figure 4: Dose Rate Syntax; Figure 5: Single Event Effects (SEE) Syntax; Figure 6: ...; Harmony-AMS simulation of a Digital Phase Locked Loop; Figure 10: SEE results from DPLL Simulation; Figure 11: Published results used for validation. ...analog and digital circuitry. Combining the analog and digital elements onto a single chip has several advantages, but also creates unique challenges.
Validity evidence and reliability of a simulated patient feedback instrument.
Schlegel, Claudia; Woermann, Ulrich; Rethans, Jan-Joost; van der Vleuten, Cees
2012-01-27
In the training of healthcare professionals, one of the advantages of communication training with simulated patients (SPs) is the SP's ability to provide direct feedback to students after a simulated clinical encounter. The quality of SP feedback must be monitored, especially because it is well known that feedback can have a profound effect on student performance. Due to the current lack of valid and reliable instruments to assess the quality of SP feedback, our study examined the validity and reliability of one potential instrument, the 'modified Quality of Simulated Patient Feedback Form' (mQSF). Content validity of the mQSF was assessed by inviting experts in the area of simulated clinical encounters to rate the importance of the mQSF items. Moreover, generalizability theory was used to examine the reliability of the mQSF. Our data came from videotapes of clinical encounters between six simulated patients and six students and the ensuing feedback from the SPs to the students. Ten faculty members judged the SP feedback according to the items on the mQSF. Three weeks later, this procedure was repeated with the same faculty members and recordings. All but two items of the mQSF received importance ratings of > 2.5 on a four-point rating scale. A generalizability coefficient of 0.77 was established with two judges observing one encounter. The findings for content validity and reliability with two judges suggest that the mQSF is a valid and reliable instrument to assess the quality of feedback provided by simulated patients.
SIMSAT: An object oriented architecture for real-time satellite simulation
NASA Technical Reports Server (NTRS)
Williams, Adam P.
1993-01-01
Real-time satellite simulators are vital tools in the support of satellite missions. They are used in the testing of ground control systems, the training of operators, the validation of operational procedures, and the development of contingency plans. The simulators must provide high-fidelity modeling of the satellite, which requires detailed system information, much of which is not available until relatively near launch. The short time-scales and resulting high productivity required of such simulator developments culminates in the need for a reusable infrastructure which can be used as a basis for each simulator. This paper describes a major new simulation infrastructure package, the Software Infrastructure for Modelling Satellites (SIMSAT). It outlines the object oriented design methodology used, describes the resulting design, and discusses the advantages and disadvantages experienced in applying the methodology.
Simulation and experimental research of 1MWe solar tower power plant in China
NASA Astrophysics Data System (ADS)
Yu, Qiang; Wang, Zhifeng; Xu, Ershu
2016-05-01
The establishment of a reliable simulation system for a solar tower power plant can greatly increase the economic and safety performance of the whole system. In this paper, a dynamic model of the 1 MWe solar tower power plant at Badaling in Beijing is developed based on the "STAR-90" simulation platform, including the heliostat field, the central receiver system (water/steam), etc. The dynamic behavior of the whole CSP plant can be simulated. To verify the validity of the simulation system, a complete experimental process was simulated synchronously by repeating the same operating steps on the simulation platform, including the locations and number of heliostats, the mass flow of the feed water, etc. From the simulation and experimental results, several key parameters are selected for a detailed comparison. The results show good agreement between the simulations and the experiments, and the error range is acceptable considering the uncertainty of the models. Finally, a comprehensive analysis of the error sources is carried out based on the comparative results.
Anomaa Senaviratne, G M M M; Udawatta, Ranjith P; Baffaut, Claire; Anderson, Stephen H
2013-01-01
The Agricultural Policy Environmental Extender (APEX) model is used to evaluate the effect of best management practices on pollutant loading in whole farms or small watersheds. The objectives of this study were to conduct a sensitivity analysis to determine the effect of model parameters on APEX output and to use the parameterized, calibrated, and validated model to evaluate the long-term benefits of grass waterways. The APEX model was used to model three (East, Center, and West) adjacent field-size watersheds with claypan soils under a no-till corn/soybean rotation. Twenty-seven parameters were sensitive for crop yield, runoff, sediment, nitrogen (dissolved and total), and phosphorus (dissolved and total) simulations. The model was calibrated using measured event-based data from the Center watershed from 1993 to 1997 and validated with data from the West and East watersheds. Simulated crop yields were within ±13% of the measured yield. The model performance for event-based runoff was excellent, with calibration and validation values > 0.9 and Nash-Sutcliffe coefficients (NSC) > 0.8, respectively. Sediment and total nitrogen calibration results were satisfactory for larger rainfall events (>50 mm), with values > 0.5 and NSC > 0.4, but validation results remained poor, with NSC between 0.18 and 0.3. Total phosphorus was well calibrated and validated, with values > 0.8 and NSC > 0.7, respectively. The presence of grass waterways reduced annual total phosphorus loadings by 13 to 25%. The replicated study indicates that APEX provides a convenient and efficient tool to evaluate long-term benefits of conservation practices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
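The Nash-Sutcliffe coefficient used throughout the calibration above has a simple closed form; a minimal sketch:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; values <= 0 mean the model predicts no better
    than simply using the observed mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Because the denominator is the spread of the observations themselves, an NSC of 0.8 for runoff is a much stronger result than the 0.18 to 0.3 reported for the sediment validation.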
The aberration characteristics in a misaligned three-mirror anastigmatic (TMA) system
NASA Astrophysics Data System (ADS)
Wang, Bin; Wu, Fan; Ye, Yutang
2016-09-01
To enable efficient alignment of the TMA system, the aberrations in a misaligned TMA system are analyzed theoretically in this paper. First, based on nodal aberration theory (NAT), the aberration types and characteristics in the misaligned TMA system are derived. Second, a simulation is carried out to verify the analysis; the simulation results confirm the predicted aberration characteristics. Finally, the alignment procedure is determined according to these characteristics: first adjust the axial spacing of the mirrors in terms of Z9 in the center field of the TMA system; then adjust the decenters and tilts of the mirrors in terms of Z5 - Z8 in the edge field. This method is helpful for the alignment of the TMA telescope.
Gatidis, Sergios; Würslin, Christian; Seith, Ferdinand; Schäfer, Jürgen F; la Fougère, Christian; Nikolaou, Konstantin; Schwenzer, Nina F; Schmidt, Holger
2016-01-01
Optimization of tracer dose regimes in positron emission tomography (PET) imaging is a trade-off between diagnostic image quality and radiation exposure. The challenge lies in defining minimal tracer doses that still result in sufficient diagnostic image quality. In order to find such minimal doses, it would be useful to simulate tracer dose reduction as this would enable to study the effects of tracer dose reduction on image quality in single patients without repeated injections of different amounts of tracer. The aim of our study was to introduce and validate a method for simulation of low-dose PET images enabling direct comparison of different tracer doses in single patients and under constant influencing factors. (18)F-fluoride PET data were acquired on a combined PET/magnetic resonance imaging (MRI) scanner. PET data were stored together with the temporal information of the occurrence of single events (list-mode format). A predefined proportion of PET events were then randomly deleted resulting in undersampled PET data. These data sets were subsequently reconstructed resulting in simulated low-dose PET images (retrospective undersampling of list-mode data). This approach was validated in phantom experiments by visual inspection and by comparison of PET quality metrics contrast recovery coefficient (CRC), background-variability (BV) and signal-to-noise ratio (SNR) of measured and simulated PET images for different activity concentrations. In addition, reduced-dose PET images of a clinical (18)F-FDG PET dataset were simulated using the proposed approach. (18)F-PET image quality degraded with decreasing activity concentrations with comparable visual image characteristics in measured and in corresponding simulated PET images. This result was confirmed by quantification of image quality metrics. CRC, SNR and BV showed concordant behavior with decreasing activity concentrations for measured and for corresponding simulated PET images. 
Simulation of dose-reduced datasets based on clinical (18)F-FDG PET data demonstrated the clinical applicability of the proposed method. Simulation of PET tracer dose reduction is thus possible with retrospective undersampling of list-mode data. The resulting simulated low-dose images have characteristics equivalent to those of PET images actually measured at lower doses and can be used to derive optimal tracer dose regimes.
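The retrospective undersampling described above amounts to randomly retaining a fixed fraction of list-mode events before reconstruction. A minimal sketch with a generic event array (one row per coincidence event); the data layout is an illustrative assumption, not the scanner's actual list-mode format.

```python
import numpy as np

def undersample_listmode(events, keep_fraction, seed=0):
    """Simulate a lower injected dose by randomly keeping a fixed
    fraction of recorded events (rows of `events`), preserving their
    temporal order for subsequent reconstruction."""
    rng = np.random.default_rng(seed)
    n_keep = int(round(keep_fraction * len(events)))
    idx = np.sort(rng.choice(len(events), size=n_keep, replace=False))
    return events[idx]
```

Reconstructing the thinned event stream then yields a simulated low-dose image from the same acquisition, which is what allows image quality at different doses to be compared within a single patient.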
NASA Technical Reports Server (NTRS)
Claus, Russell W.; Beach, Tim; Turner, Mark; Hendricks, Eric S.
2015-01-01
This paper describes the geometry and simulation results of a gas-turbine engine based on the original EEE engine developed in the 1980s. While the EEE engine was never in production, the technology developed during the program underpins many of the current generation of gas turbine engines. This geometry is being explored as a potential multi-stage turbomachinery test case that may be used to develop technology for virtual full-engine simulation. Simulation results were used to test the validity of each component geometry representation. Results are compared to a zero-dimensional engine model developed from experimental data. The geometry is captured in a series of Initial Graphical Exchange Specification (IGES) files and is available on a supplemental DVD to this report.
Design and simulation of novel laparoscopic renal denervation system: a feasibility study.
Ye, Eunbi; Baik, Jinhwan; Lee, Seunghyun; Ryu, Seon Young; Yang, Sunchoel; Choi, Eue-Keun; Song, Won Hoon; Yuk, Hyeong Dong; Jeong, Chang Wook; Park, Sung-Min
2018-05-18
In this study, we propose a novel laparoscopy-based renal denervation (RDN) system for treating patients with resistant hypertension. In this feasibility study, we investigated whether our proposed surgical instrument can ablate renal nerves from outside of the renal artery safely and effectively and can overcome the depth-related limitations of the previous catheter-based system with less damage to the arterial walls. We designed a looped bipolar electrosurgical instrument to be used with laparoscopy-based RDN system. The tip of instrument wraps around the renal artery and delivers the radio-frequency (RF) energy. We evaluated the thermal distribution via simulation study on a numerical model designed using histological data and validated the results by the in vitro study. Finally, to show the effectiveness of this system, we compared the performance of our system with that of catheter-based RDN system through simulations. Simulation results were within the 95% confidence intervals of the in vitro experimental results. The validated results demonstrated that the proposed laparoscopy-based RDN system produces an effective thermal distribution for the removal of renal sympathetic nerves without damaging the arterial wall and addresses the depth limitation of catheter-based RDN system. We developed a novel laparoscope-based electrosurgical RDN method for hypertension treatment. The feasibility of our system was confirmed through a simulation study as well as in vitro experiments. Our proposed method could be an effective treatment for resistant hypertension as well as central nervous system diseases.
Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures
NASA Technical Reports Server (NTRS)
Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.
1990-01-01
A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.
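At the software level, the essence of such fault injection, perturb state at run time and check whether the redundancy detects it, can be sketched as a toy model. This is purely illustrative; the study itself injects transients via mixed-mode electrical/logic simulation, not bit flips on integers.

```python
import random

def inject_transient(value, bit_width=32, rng=None):
    """Model a single transient upset by flipping one randomly chosen
    bit of an integer register value."""
    rng = rng or random.Random(0)
    return value ^ (1 << rng.randrange(bit_width))

def detection_coverage(compute, detects, trials=200, bit_width=32):
    """Fraction of injected single faults flagged by the checker,
    e.g. a dual-channel comparison as in the controller above."""
    rng = random.Random(1)
    hits = 0
    for _ in range(trials):
        good = compute()
        faulty = inject_transient(good, bit_width, rng)
        hits += detects(good, faulty)
    return hits / trials
```

A lock-step comparator, detects=lambda g, f: g != f, catches every single-bit transient in this toy model, consistent with the 100 percent single-transient coverage reported above; multiple simultaneous faults in both channels are what such a comparator can miss.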
Experimental Study and CFD Simulation of a 2D Circulating Fluidized Bed
NASA Astrophysics Data System (ADS)
Kallio, S.; Guldén, M.; Hermanson, A.
Computational fluid dynamics (CFD) is gaining popularity in fluidized bed modeling. For model validation, there is a need for detailed measurements under well-defined conditions. In the present study, experiments were carried out in a 40 cm wide and 3 m high 2D circulating fluidized bed. Two experiments were simulated by means of the Eulerian multiphase models of the Fluent CFD software. The vertical pressure and solids volume fraction profiles and the solids circulation rate obtained from the simulation were compared to the experimental results. In addition, lateral volume fraction profiles could be compared. The simulated CFB flow patterns and the profiles obtained from the simulations were in general in good agreement with the experimental results.
NASA Technical Reports Server (NTRS)
Fletcher, Lauren E.; Aldridge, Ann M.; Wheelwright, Charles; Maida, James
1997-01-01
Task illumination has a major impact on human performance: What a person can perceive in his environment significantly affects his ability to perform tasks, especially in space's harsh environment. Training for lighting conditions in space has long depended on physical models and simulations to emulate the effect of lighting, but such tests are expensive and time-consuming. To evaluate lighting conditions not easily simulated on Earth, personnel at NASA Johnson Space Center's (JSC) Graphics Research and Analysis Facility (GRAF) have been developing computerized simulations of various illumination conditions using the ray-tracing program, Radiance, developed by Greg Ward at Lawrence Berkeley Laboratory. Because these computer simulations are only as accurate as the data used, accurate information about the reflectance properties of materials and light distributions is needed. JSC's Lighting Environment Test Facility (LETF) personnel gathered material reflectance properties for a large number of paints, metals, and cloths used in the Space Shuttle and Space Station programs, and processed these data into reflectance parameters needed for the computer simulations. They also gathered lamp distribution data for most of the light sources used, and validated the ability to accurately simulate lighting levels by comparing predictions with measurements for several ground-based tests. The result of this study is a database of material reflectance properties for a wide variety of materials, and lighting information for most of the standard light sources used in the Shuttle/Station programs. The combination of the Radiance program and GRAF's graphics capability forms a validated computerized lighting simulation capability for NASA.
Validation of a Monte Carlo simulation of the Philips Allegro/GEMINI PET systems using GATE
NASA Astrophysics Data System (ADS)
Lamare, F.; Turzo, A.; Bizais, Y.; Cheze LeRest, C.; Visvikis, D.
2006-02-01
A newly developed simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop a Monte Carlo simulation of a fully three-dimensional (3D) clinical PET scanner. The Philips Allegro/GEMINI PET systems were simulated in order to (a) allow a detailed study of the parameters affecting the system's performance under various imaging conditions, (b) study the optimization and quantitative accuracy of emission acquisition protocols for dynamic and static imaging, and (c) further validate the potential of GATE for the simulation of clinical PET systems. A model of the detection system and its geometry was developed. The accuracy of the developed detection model was tested through the comparison of simulated and measured results obtained with the Allegro/GEMINI systems for a number of NEMA NU2-2001 performance protocols including spatial resolution, sensitivity and scatter fraction. In addition, an approximate model of the system's dead time at the level of detected single events and coincidences was developed in an attempt to simulate the count rate related performance characteristics of the scanner. The developed dead-time model was assessed under different imaging conditions using the count rate loss and noise equivalent count rates performance protocols of standard and modified NEMA NU2-2001 (whole body imaging conditions) and NEMA NU2-1994 (brain imaging conditions) comparing simulated with experimental measurements obtained with the Allegro/GEMINI PET systems. Finally, a reconstructed image quality protocol was used to assess the overall performance of the developed model. An agreement of <3% was obtained in scatter fraction, with a difference between 4% and 10% in the true and random coincidence count rates respectively, throughout a range of activity concentrations and under various imaging conditions, resulting in <8% differences between simulated and measured noise equivalent count rates performance. 
Finally, the image quality validation study revealed a good agreement in signal-to-noise ratio and contrast recovery coefficients for a number of different volume spheres and two different (clinical level based) tumour-to-background ratios. In conclusion, these results support the accurate modelling of the Philips Allegro/GEMINI PET systems using GATE in combination with a dead-time model for the signal flow description, which leads to an agreement of <10% in coincidence count rates under different imaging conditions and clinically relevant activity concentration levels.
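The count-rate quantities reported in NEMA NU2-style assessments such as the one above are related by simple formulas; a minimal sketch (the k = 2 variant applies when randoms are estimated from a delayed coincidence window):

```python
def nec_rate(trues, scatters, randoms, k=1.0):
    """Noise equivalent count rate: NEC = T^2 / (T + S + k*R).

    k = 1 for a noiseless randoms estimate, k = 2 when randoms are
    estimated from a delayed coincidence window."""
    total = trues + scatters + k * randoms
    return trues**2 / total if total > 0 else 0.0

def scatter_fraction(scatters, trues):
    """NEMA scatter fraction SF = S / (S + T)."""
    return scatters / (scatters + trues)

# Example: 100 kcps trues, 40 kcps scatter, 60 kcps randoms
print(round(nec_rate(100e3, 40e3, 60e3) / 1e3, 1))  # -> 50.0 (kcps)
```

This illustrates why the <10% agreement in trues, randoms, and scatter quoted above propagates into a comparable agreement in NEC: the NEC is an algebraic combination of those three rates.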
NASA Astrophysics Data System (ADS)
Mei, Zhixiong; Wu, Hao; Li, Shiyun
2018-06-01
The Conversion of Land Use and its Effects at Small regional extent (CLUE-S) model, which is widely used for land-use simulation, utilizes logistic regression to estimate the relationships between land use and its drivers, and thus predict land-use change probabilities. However, logistic regression disregards possible spatial autocorrelation and self-organization in land-use data. Autologistic regression can depict spatial autocorrelation but cannot address self-organization, while logistic regression that considers only self-organization (NE-logistic regression) fails to capture spatial autocorrelation. Therefore, this study developed a regression method (NE-autologistic regression) that incorporates both spatial autocorrelation and self-organization to improve CLUE-S. The Zengcheng District of Guangzhou, China, was selected as the study area. The land-use data of 2001, 2005, and 2009, as well as 10 typical driving factors, were used to validate the proposed regression method and the improved CLUE-S model. Then, three future land-use scenarios for 2020, namely a natural growth scenario, an ecological protection scenario, and an economic development scenario, were simulated using the improved model. Validation results showed that NE-autologistic regression performed better than logistic regression, autologistic regression, and NE-logistic regression in predicting land-use change probabilities. The spatial allocation accuracy and kappa values of NE-autologistic-CLUE-S were higher than those of logistic-CLUE-S, autologistic-CLUE-S, and NE-logistic-CLUE-S for the simulations of two periods, 2001-2009 and 2005-2009, which proved that the improved CLUE-S model achieved the best simulation and was effective to a certain extent. The scenario simulation results indicated that under all three scenarios, traffic land and residential/industrial land would increase, whereas arable land and unused land would decrease during 2009-2020.
Apparent differences also existed in the simulated change sizes and locations of each land-use type under different scenarios. The results not only demonstrate the validity of the improved model but also provide a valuable reference for relevant policy-makers.
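The autocovariate idea behind autologistic regression can be sketched in a few lines: augment an ordinary logistic regression of land-use state on driving factors with a neighbourhood term (the mean land-use state of adjacent cells). The toy sketch below uses synthetic data and a hand-rolled gradient-descent fit; it is not the paper's NE-autologistic formulation, which additionally models self-organization.

```python
import numpy as np

def autocovariate(grid):
    """Average land-use state of the 4-neighbourhood of each cell
    (edges handled by replicating the border)."""
    p = np.pad(grid.astype(float), 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain logistic regression fitted by gradient descent."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Toy example: one hypothetical driving factor plus the autocovariate
rng = np.random.default_rng(0)
grid = (rng.random((20, 20)) < 0.4).astype(int)   # synthetic land-use map
driver = rng.random((20, 20))                     # synthetic driving factor
X = np.column_stack([driver.ravel(), autocovariate(grid).ravel()])
w = fit_logistic(X, grid.ravel().astype(float))
probs = predict_proba(X, w)
```

In a real CLUE-S workflow the fitted probabilities would feed the model's spatial allocation step; here they merely illustrate how the neighbourhood term enters as one more predictor column.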
Yu, Hesheng; Thé, Jesse
2017-05-01
The dispersion of gaseous pollutants around buildings is complex due to turbulence features such as flow detachment and zones of high shear. Computational fluid dynamics (CFD) models are among the most promising tools to describe the pollutant distribution in the near field of buildings. Reynolds-averaged Navier-Stokes (RANS) models are the most commonly used CFD techniques for addressing turbulent transport of the pollutant. This work studies the use of the [Formula: see text] closure model for gas dispersion around a building by fully resolving the viscous sublayer for the first time. The performance of the standard [Formula: see text] model is also included for comparison, along with results of an extensively validated Gaussian dispersion model, the U.S. Environmental Protection Agency (EPA) AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model). This study's CFD models apply the standard [Formula: see text] and the [Formula: see text] turbulence models to obtain the wind flow field. A passive concentration transport equation is then solved on the resolved flow field to simulate the distribution of pollutant concentrations. The resulting simulations of both the wind flow and concentration fields are validated rigorously against extensive data using multiple validation metrics. The wind flow field can be acceptably modeled by the [Formula: see text] model. However, the [Formula: see text] model fails to simulate the gas dispersion. The [Formula: see text] model outperforms [Formula: see text] in both flow and dispersion simulations, with higher hit rates for dimensionless velocity components and a higher "factor of 2" of observations (FAC2) for normalized concentration. All these validation metrics of the [Formula: see text] model pass the quality assurance criteria recommended by the Association of German Engineers (Verein Deutscher Ingenieure, VDI) guideline.
Furthermore, these metrics are better than or comparable to those in the literature. Comparison between the performance of [Formula: see text] and AERMOD shows that the CFD simulation is superior to the Gaussian-type model for pollutant dispersion in the near wake of obstacles. AERMOD can serve as a screening tool for near-field gas dispersion due to its expeditious calculation and its ability to handle complicated cases. The use of [Formula: see text] to simulate gaseous pollutant dispersion around an isolated building is appropriate and is expected to be suitable for complex urban environments. Multiple validation metrics of the [Formula: see text] turbulence model quantitatively indicated that this turbulence model is appropriate for the simulation of gas dispersion around buildings. CFD is therefore an attractive alternative to wind tunnel experiments for modeling gas dispersion in urban environments due to its strong performance and lower cost.
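The two validation metrics named above have simple definitions; a minimal sketch follows. The tolerance values in `hit_rate` are illustrative defaults, not the VDI thresholds for any particular variable.

```python
import numpy as np

def fac2(sim, obs):
    """Fraction of pairs with 0.5 <= sim/obs <= 2 ("factor of 2")."""
    ratio = np.asarray(sim, float) / np.asarray(obs, float)
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

def hit_rate(sim, obs, rel=0.25, abs_tol=0.05):
    """VDI-style hit rate: a pair counts as a hit if it lies within a
    relative tolerance `rel` OR an absolute tolerance `abs_tol`.
    Tolerances here are placeholders; VDI prescribes variable-specific
    values."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    diff = np.abs(sim - obs)
    return np.mean((diff <= rel * np.abs(obs)) | (diff <= abs_tol))

obs = np.array([1.0, 2.0, 0.5, 4.0])
sim = np.array([1.1, 0.9, 0.53, 9.0])
print(fac2(sim, obs), hit_rate(sim, obs))  # prints 0.5 0.5 for this toy data
```

Both metrics reward pointwise agreement rather than agreement of averages, which is why they are popular for judging CFD dispersion fields against wind-tunnel data.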
NASA Technical Reports Server (NTRS)
Strutzenberg, Louise L.; Putman, Gabriel C.
2011-01-01
The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well-documented set of high-fidelity measurements useful for validation, including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. Expanding from initial simulations of the ASMAT setup in a held-down configuration, simulations have been performed using the Loci/CHEM computational fluid dynamics software for ASMAT tests of the vehicle at 5 ft. elevation (100 ft. real vehicle elevation) with worst-case drift in the direction of the launch tower. These tests have been performed without water suppression and have compared the acoustic emissions for launch structures with and without launch mounts. In addition, simulation results have also been compared to acoustic and imagery data collected from similar live-fire tests to assess the accuracy of the simulations. Simulations have shown a marked change in the pattern of emissions after removal of the launch mount, with a reduction in the overall acoustic environment experienced by the vehicle and the formation of highly directed acoustic waves moving across the platform deck. Comparisons of simulation results to live-fire test data showed good amplitude and temporal correlation, and imagery comparisons over the visible and infrared wavelengths showed qualitative capture of all plume and pressure wave evolution features.
Jeong, Eun Ju; Chung, Hyun Soo; Choi, Jeong Yun; Kim, In Sook; Hong, Seong Hee; Yoo, Kyung Sook; Kim, Mi Kyoung; Won, Mi Yeol; Eum, So Yeon; Cho, Young Soon
2017-06-01
The aim of this study was to develop a simulation-based time-out learning programme targeted at nurses participating in high-risk invasive procedures and to determine the effects of applying the new programme on nurses' acceptance. The study used a simulation-based learning pre- and post-design to determine the effects of implementing this programme. It targeted 48 registered nurses working in the general ward and the emergency department of a tertiary teaching hospital. Differences between acceptance and performance rates were assessed using means, standard deviations, and the Wilcoxon signed-rank test. The perception survey and score sheet were validated through a content validity index, and evaluator reliability was verified using the intraclass correlation coefficient. Results showed a high level of acceptance of high-risk invasive procedures (P<.01). Further, improvement was consistent regardless of clinical experience, workplace, or experience with simulation-based learning. The face validity of the programme scored over 4.0 out of 5.0. This simulation-based learning programme was effective in improving recognition of the time-out protocol and gave the participants the opportunity to become proactive in cases of high-risk invasive procedures performed outside of the operating room. © 2017 John Wiley & Sons Australia, Ltd.
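The Wilcoxon signed-rank test used for the pre/post comparison above can be sketched with a normal approximation. This is illustrative only (zeros dropped, tied ranks averaged, no tie correction); in practice a library routine such as `scipy.stats.wilcoxon` would be used.

```python
import math

def wilcoxon_signed_rank(pre, post):
    """Wilcoxon signed-rank test via the normal approximation.
    Returns (W+, two-sided p)."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                                   # average ranks over ties
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sigma
    return w_plus, math.erfc(abs(z) / math.sqrt(2))  # two-sided p

# Hypothetical pre/post scores for 8 participants
w, p = wilcoxon_signed_rank([1, 2, 3, 4, 5, 6, 7, 8],
                            [2, 4, 6, 8, 10, 12, 14, 16])
print(w, round(p, 3))
```

For small samples like the example, an exact-distribution p-value would be preferred over the normal approximation shown here.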
Jaffer, U; Singh, P; Pandey, V A; Aslam, M; Standfield, N J
2014-01-01
Duplex ultrasound facilitates bedside diagnosis and hence timely patient care. Its uptake has been hampered by training and accreditation issues. We have developed an assessment tool for duplex arterial stenosis measurement for both simulator- and patient-based training. A novel assessment tool, duplex ultrasound assessment of technical skills, was developed. A modified duplex ultrasound assessment of technical skills was used for simulator training. Novice, intermediate-experience and expert users of duplex ultrasound were invited to participate. Participants viewed an instructional video and were allowed ample time to familiarize themselves with the equipment. Participants' attempts were recorded and independently assessed by four experts using the modified duplex ultrasound assessment of technical skills. 'Global' assessment was also performed on a four-point Likert scale. Content, construct and concurrent validity, as well as reliability, were evaluated. Content and construct validity as well as reliability were demonstrated. The simulator had a good satisfaction rating from participants: median 4; range 3-5. Receiver operating characteristic analysis established that cut points of 22/34 and 25/40 were most appropriate for simulator- and patient-based assessment, respectively. We have validated a novel assessment tool for duplex arterial stenosis detection. Further work is underway to establish transference validity of simulator training to improved skill in scanning patients. We have developed and validated duplex ultrasound assessment of technical skills for simulator training.
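One common way to derive a pass/fail cut point from an ROC analysis like the one above is Youden's J statistic. The paper does not state which criterion it used, so the sketch below is a generic illustration with hypothetical scores.

```python
import numpy as np

def youden_cutpoint(scores, labels):
    """Threshold maximising sensitivity + specificity - 1 (Youden's J).
    Scores at or above the threshold are classified as label 1."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        sens = np.mean(pred[labels == 1])   # true-positive rate
        spec = np.mean(~pred[labels == 0])  # true-negative rate
        if sens + spec - 1.0 > best_j:
            best_t, best_j = float(t), sens + spec - 1.0
    return best_t, best_j

# Hypothetical assessment scores (out of 40) for novices (0) vs. experts (1)
scores = [12, 18, 20, 24, 26, 31, 35, 38]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
cut, j = youden_cutpoint(scores, labels)
print(cut, j)  # a cut point of 26 separates the two groups perfectly here
```

Real score distributions overlap, so the maximising J is typically well below 1 and the chosen cut point trades sensitivity against specificity.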
Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A
2016-11-01
To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinary students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined from hospital records of MIS procedures performed in the teaching hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessment in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion were recorded for each simulator. None of the motion metrics in the virtual reality simulator correlated with experience or with the basic laparoscopic skills score. All metrics in augmented reality (time, instrument path, and economy of movement) were significantly correlated with experience, except for the hand dominance metric. The basic laparoscopic skills score was correlated with all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas the basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct validity and concurrent validity for motion analysis metrics for an augmented reality system, whereas the virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.
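Motion metrics such as instrument path length and economy of motion have straightforward definitions given tracked tip positions; a minimal sketch follows (simulators may report differently normalised or inverted variants of these quantities).

```python
import numpy as np

def path_length(positions):
    """Total 3-D distance travelled by an instrument tip,
    given an (N, 3) sequence of tracked positions."""
    p = np.asarray(positions, float)
    return float(np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1)))

def economy_of_motion(positions):
    """Ratio of straight-line displacement to actual path length;
    1.0 is perfectly economical. One common definition among several."""
    p = np.asarray(positions, float)
    straight = float(np.linalg.norm(p[-1] - p[0]))
    total = path_length(p)
    return straight / total if total > 0 else 1.0

track = [[0, 0, 0], [3, 4, 0], [3, 4, 12]]
print(path_length(track))        # 5 + 12 = 17.0
print(economy_of_motion(track))  # 13/17, about 0.765
```

Correlating such metrics against experience level, as the study does, is what establishes their construct validity.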
Fast Whole-Engine Stirling Analysis
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako
2005-01-01
An experimentally validated approach is described for fast axisymmetric Stirling engine simulations. These simulations include the entire displacer interior and demonstrate that it is possible to model a complete engine cycle in less than an hour. The focus of this effort was to demonstrate that useful Stirling engine performance results can be produced in a time frame short enough to impact design decisions. The combination of the latest 64-bit Opteron computer processors, fiber-optic Myrinet communications, dynamic meshing, and across-zone partitioning has enabled solution times at least 240 times faster than previous attempts at simulating the axisymmetric Stirling engine. A comparison of the multidimensional results, calibrated one-dimensional results, and known experimental results is shown. This preliminary comparison demonstrates that axisymmetric simulations can be very accurate, but more work remains to improve the simulations through such means as modifying the thermal equilibrium regenerator models, adding fluid-structure interactions, including radiation effects, and incorporating mechanodynamics.
Validation of mathematical model for CZ process using small-scale laboratory crystal growth furnace
NASA Astrophysics Data System (ADS)
Bergfelds, Kristaps; Sabanskis, Andrejs; Virbulis, Janis
2018-05-01
The present work focuses on the modelling of a small-scale laboratory NaCl-RbCl crystal growth furnace. First steps towards fully transient simulations are taken in the form of stationary simulations that optimize material properties to match the model to experimental conditions. For this purpose, simulation software primarily used for modelling the industrial-scale silicon crystal growth process was successfully applied. Finally, transient simulations of the crystal growth are presented, showing sufficient agreement with experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freed, Melanie; Miller, Stuart; Tang, Katherine
Purpose: MANTIS is a Monte Carlo code developed for the detailed simulation of columnar CsI scintillator screens in x-ray imaging systems. Validation of this code is needed to provide a reliable and valuable tool for system optimization and accurate reconstructions for a variety of x-ray applications. Whereas previous validation efforts have focused on matching of summary statistics, in this work the authors examine the complete point response function (PRF) of the detector system in addition to relative light output values. Methods: Relative light output values and high-resolution PRFs have been experimentally measured with a custom setup. A corresponding set of simulated light output values and PRFs have also been produced, where detailed knowledge of the experimental setup and CsI:Tl screen structures is accounted for in the simulations. Four different screens were investigated with different thicknesses, column tilt angles, and substrate types. A quantitative comparison between the experimental and simulated PRFs was performed for four different incidence angles (0 deg., 15 deg., 30 deg., and 45 deg.) and two different x-ray spectra (40 and 70 kVp). The figure of merit (FOM) used measures the normalized differences between the simulated and experimental data averaged over a region of interest. Results: Experimental relative light output values ranged from 1.456 to 1.650 and were in approximate agreement for aluminum substrates, but poor agreement for graphite substrates. The FOMs for all screen types, incidence angles, and energies ranged from 0.1929 to 0.4775. To put these FOMs in context, the same FOM was computed for 2D symmetric Gaussians fit to the same experimental data. These FOMs ranged from 0.2068 to 0.8029. Our analysis demonstrates that MANTIS reproduces experimental PRFs with higher accuracy than a symmetric 2D Gaussian fit to the experimental data in the majority of cases.
Examination of the spatial distribution of differences between the PRFs shows that the main reason for errors between MANTIS and the experimental data is that MANTIS-generated PRFs are sharper than the experimental PRFs. Conclusions: The experimental validation of MANTIS performed in this study demonstrates that MANTIS is able to reliably predict experimental PRFs, especially for thinner screens, and can reproduce the highly asymmetric shape seen in the experimental data. As a result, optimizations and reconstructions carried out using MANTIS should yield results indicative of actual detector performance. Better characterization of screen properties is necessary to reconcile the simulated light output values with experimental data.
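A figure of merit of the kind described (normalized differences between simulated and experimental PRFs, averaged over a region of interest) can be sketched as follows. The exact normalisation used by the authors may differ; here both PRFs are normalised to unit sum and the mean absolute difference is divided by the mean experimental value inside the ROI.

```python
import numpy as np

def prf_fom(sim, exp, roi=None):
    """Normalized-difference figure of merit between a simulated and an
    experimental point response function over a region of interest.
    0 means perfect agreement; larger values mean worse agreement."""
    sim = np.asarray(sim, float) / np.sum(sim)
    exp = np.asarray(exp, float) / np.sum(exp)
    if roi is None:
        roi = np.ones_like(exp, dtype=bool)
    return float(np.mean(np.abs(sim[roi] - exp[roi])) / np.mean(exp[roi]))

# Identical PRFs give a FOM of exactly 0
a = np.array([[0.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 0.0]])
print(prf_fom(a, a))  # -> 0.0
```

Comparing such a FOM for the Monte Carlo PRF against the same FOM for a Gaussian fit, as the study does, gives a scale-free way to judge whether the simulation beats a naive parametric model.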
PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation
NASA Astrophysics Data System (ADS)
España, S; Herraiz, J L; Vicente, E; Vaquero, J J; Desco, M; Udias, J M
2009-03-01
Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.
Cooperative Collision Avoidance Technology Demonstration Data Analysis Report
NASA Technical Reports Server (NTRS)
2007-01-01
This report details the National Aeronautics and Space Administration (NASA) Access 5 Project Office Cooperative Collision Avoidance (CCA) Technology Demonstration for unmanned aircraft systems (UAS) conducted from 21 to 28 September 2005. The test platform chosen for the demonstration was the Proteus Optionally Piloted Vehicle operated by Scaled Composites, LLC, flown out of the Mojave Airport, Mojave, CA. A single intruder aircraft, a NASA Gulfstream III, was used during the demonstration to execute a series of near-collision encounter scenarios. Both aircraft were equipped with Traffic Alert and Collision Avoidance System-II (TCAS-II) and Automatic Dependent Surveillance-Broadcast (ADS-B) systems. The objective of this demonstration was to collect flight data to support validation efforts for the Access 5 CCA Work Package Performance Simulation and Systems Integration Laboratory (SIL). Correlation of the flight data with results obtained from the performance simulation serves as the basis for the simulation validation. A similar effort uses the flight data to validate the SIL architecture, which contains the same sensor hardware that was used during the flight demonstration.
Validation and Continued Development of Methods for Spheromak Simulation
NASA Astrophysics Data System (ADS)
Benedett, Thomas
2016-10-01
The HIT-SI experiment has demonstrated stable sustainment of spheromaks. Determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively-driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and study the effect of possible design choices on plasma behavior. A zero-beta Hall-MHD model has shown good agreement with experimental data at 14.5 kHz injector operation. Experimental observations at higher frequency, where the best performance is achieved, indicate pressure effects are important and likely required to attain quantitative agreement with simulations. Efforts to extend the existing validation to high frequency (36-68 kHz) using an extended MHD model implemented in the PSI-TET arbitrary-geometry 3D MHD code will be presented. An implementation of anisotropic viscosity, a feature observed to improve agreement between NIMROD simulations and experiment, will also be presented, along with investigations of flux conserver features and their impact on density control for future SIHI experiments. Work supported by DoE.
Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd
2016-03-01
The Process Overview Measure is a query-based measure developed to assess operator situation awareness (SA) from monitoring process plants. A companion paper describes how the measure has been developed according to process plant properties and operator cognitive work. The Process Overview Measure demonstrated practicality, sensitivity, validity and reliability in two full-scope simulator experiments investigating dramatically different operational concepts. Practicality was assessed based on qualitative feedback of participants and researchers. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated with other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA in full-scope simulator settings based on data collected on process experts. Thus, full-scope simulator studies can employ the Process Overview Measure to reveal the impact of new control room technology and operational concepts on monitoring process plants. Practitioner Summary: The Process Overview Measure is a query-based measure that demonstrated practicality, sensitivity, validity and reliability for assessing operator situation awareness (SA) from monitoring process plants in representative settings.
Full-Scale Crash Test and Finite Element Simulation of a Composite Prototype Helicopter
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Fasanella, Edwin L.; Boitnott, Richard L.; Lyle, Karen H.
2003-01-01
A full-scale crash test of a prototype composite helicopter was performed at the Impact Dynamics Research Facility at NASA Langley Research Center in 1999 to obtain data for validation of a finite element crash simulation. The helicopter was the flight test article built by Sikorsky Aircraft during the Advanced Composite Airframe Program (ACAP). The composite helicopter was designed to meet the stringent Military Standard (MIL-STD-1290A) crashworthiness criteria and was outfitted with two crew and two troop seats and four anthropomorphic dummies. The test was performed at 38-ft/s vertical and 32.5-ft/s horizontal velocity onto a rigid surface. An existing modal-vibration model of the Sikorsky ACAP helicopter was converted into a model suitable for crash simulation. A two-stage modeling approach was implemented and an external user-defined subroutine was developed to represent the complex landing gear response. The crash simulation was executed with a nonlinear, explicit transient dynamic finite element code. Predictions of structural deformation and failure, the sequence of events, and the dynamic response of the airframe structure were generated and the numerical results were correlated with the experimental data to validate the simulation. The test results, the model development, and the test-analysis correlation are described.
Simulation-based validation and arrival-time correction for Patlak analyses of Perfusion-CT scans
NASA Astrophysics Data System (ADS)
Bredno, Jörg; Hom, Jason; Schneider, Thomas; Wintermark, Max
2009-02-01
Blood-brain-barrier (BBB) breakdown is a hypothesized mechanism for hemorrhagic transformation in acute stroke. The Patlak analysis of a Perfusion Computed Tomography (PCT) scan measures BBB permeability, but the method yields higher estimates when applied to the first pass of the contrast bolus compared to a delayed phase. We present a numerical phantom that simulates vascular and parenchymal time-attenuation curves to determine the validity of permeability measurements obtained with different acquisition protocols. A network of tubes represents the major cerebral arteries ipsi- and contralateral to an ischemic event. These tubes branch off into smaller segments that represent capillary beds. Blood flow in the phantom is freely definable and simulated as non-Newtonian tubular flow. Diffusion of contrast in the vessels and permeation through vessel walls are part of the simulation. The phantom allows us to compare the results of a permeability measurement to the simulated vessel wall status. A Patlak analysis reliably detects areas with BBB breakdown for acquisitions of 240 s duration, whereas results obtained from the first pass are biased in areas of reduced blood flow. Compensating for differences in contrast arrival times reduces this bias and gives good estimates of BBB permeability for PCT acquisitions of 90-150 s duration.
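The Patlak analysis itself is a linear regression on transformed time-attenuation curves: tissue-to-plasma ratio versus normalised cumulative plasma exposure, with the slope estimating the permeability (transfer constant) and the intercept the fractional blood volume. A minimal sketch with a synthetic constant-input check (the variable names and test data are illustrative):

```python
import numpy as np

def patlak(t, c_tissue, c_plasma):
    """Patlak linearisation: regress C_t(t)/C_p(t) against
    (integral_0^t C_p dtau) / C_p(t). Returns (slope, intercept) =
    (permeability estimate, fractional blood volume estimate)."""
    t = np.asarray(t, float)
    cp = np.asarray(c_plasma, float)
    ct = np.asarray(c_tissue, float)
    # trapezoidal cumulative integral of the plasma curve
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x = integral / cp
    y = ct / cp
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Synthetic check: constant plasma input, tissue curve built from known values
k_true, v0_true = 0.01, 0.05            # per-second transfer constant, blood volume
t = np.linspace(0.0, 240.0, 241)
cp = np.full_like(t, 100.0)             # constant arterial input
ct = v0_true * cp + k_true * 100.0 * t  # analytic integral of the constant input
k_est, v0_est = patlak(t, ct, cp)
print(round(k_est, 4), round(v0_est, 4))  # recovers 0.01 and 0.05
```

The phantom study's point about arrival-time correction maps directly onto this sketch: shifting the tissue curve relative to the plasma curve distorts `y` early on and biases the fitted slope, which is why first-pass-only fits over-read permeability.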
Fracture simulation of restored teeth using a continuum damage mechanics failure model.
Li, Haiyan; Li, Jianying; Zou, Zhenmin; Fok, Alex Siu-Lun
2011-07-01
The aim of this paper is to validate the use of a finite-element (FE) based continuum damage mechanics (CDM) failure model to simulate the debonding and fracture of restored teeth. Fracture testing of plastic model teeth, with or without a standard Class-II MOD (mesial-occlusal-distal) restoration, was carried out to investigate their fracture behavior. In parallel, 2D FE models of the teeth are constructed and analyzed using the commercial FE software ABAQUS. A CDM failure model, implemented into ABAQUS via the user element subroutine (UEL), is used to simulate the debonding and/or final fracture of the model teeth under a compressive load. The material parameters needed for the CDM model to simulate fracture are obtained through separate mechanical tests. The predicted results are then compared with the experimental data of the fracture tests to validate the failure model. The failure processes of the intact and restored model teeth are successfully reproduced by the simulation. However, the fracture parameters obtained from testing small specimens need to be adjusted to account for the size effect. The results indicate that the CDM model is a viable model for the prediction of debonding and fracture in dental restorations. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Dongdong; She, Dongli
2018-06-01
Current physically based erosion models do not carefully consider the dynamic variation of soil properties during rainfall and are unable to simulate the erosion processes of saline-sodic soil slopes. The aim of this work was to build a complete model framework, SSEM, to simulate runoff and erosion processes for saline-sodic soils by coupling dynamic saturated hydraulic conductivity Ks and soil erodibility Kτ. Sixty rainfall simulation experiments (2 soil textures × 5 sodicity levels × 2 slope gradients × 3 duplicates) provided data for model calibration and validation. SSEM worked very well for simulating the runoff and erosion processes of saline-sodic silty clay. The runoff and erosion processes of saline-sodic silt loam were more complex than those of non-saline soils or soils with higher clay contents; thus, SSEM did not perform very well for some validation events. We further examined the model performance of four concepts: dynamic Ks and Kτ (Case 1, SSEM), dynamic Ks and constant Kτ (Case 2), constant Ks and dynamic Kτ (Case 3), and constant Ks and constant Kτ (Case 4). The results demonstrated that the model that considers dynamic variations in soil saturated hydraulic conductivity and soil erodibility provides more reasonable runoff and erosion predictions for saline-sodic soils.
A stochastic modeling of isotope exchange reactions in glutamine synthetase
NASA Astrophysics Data System (ADS)
Kazmiruk, N. V.; Boronovskiy, S. E.; Nartsissov, Ya R.
2017-11-01
The model presented in this work allows simulation of isotopic exchange reactions at chemical equilibrium catalyzed by glutamine synthetase. To simulate the functioning of the enzyme, an algorithm based on a stochastic approach was applied. The dependence of the exchange rates for 14C and 32P on metabolite concentration was estimated. The simulation results supported the hypothesis of a preferred-order random binding mechanism. Corresponding values of K0.5 were also obtained.
Validation of nonlinear gyrokinetic simulations of L- and I-mode plasmas on Alcator C-Mod
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creely, A. J.; Howard, N. T.; Rodriguez-Fernandez, P.
New validation of global, nonlinear, ion-scale gyrokinetic simulations (GYRO) is carried out for L- and I-mode plasmas on Alcator C-Mod, utilizing heat fluxes, profile stiffness, and temperature fluctuations. Previous work at C-Mod found that ITG/TEM-scale GYRO simulations can match both electron and ion heat fluxes within error bars in I-mode [White PoP 2015], suggesting that multi-scale (cross-scale coupling) effects [Howard PoP 2016] may be less important in I-mode than in L-mode. New results presented here, however, show that global, nonlinear, ion-scale GYRO simulations are able to match the experimental ion heat flux, but underpredict the electron heat flux (at most radii), electron temperature fluctuations, and perturbative thermal diffusivity in both L- and I-mode. Linear addition of electron heat flux from electron-scale runs does not resolve this discrepancy. These results indicate that single-scale simulations do not sufficiently describe the I-mode core transport, and that multi-scale (coupled electron- and ion-scale) transport models are needed. A preliminary investigation with multi-scale TGLF, however, was unable to resolve the discrepancy between ion-scale GYRO and the experimental electron heat fluxes and perturbative diffusivity, motivating further work with multi-scale GYRO simulations and a more comprehensive study with multi-scale TGLF.
Giurgiutiu, Victor
2017-01-01
Piezoelectric wafer active sensors (PWAS) are commonly used for detecting Lamb waves in structural health monitoring applications. However, in most active-sensing applications, the signals are high-amplitude and easy to detect. In this article, we show a new avenue for using PWAS transducers to detect low-amplitude, fatigue-crack related acoustic emission (AE) signals. Multiphysics finite element (FE) simulations were performed with two PWAS transducers bonded to the structure. Various sensor configurations were studied using the simulations: one PWAS was placed near the fatigue crack and the other at a certain distance from the crack. The simulated AE event was generated at the crack tip. The simulation results showed that both PWAS transducers were capable of sensing the AE signals. To validate the multiphysics simulation results, an in-situ AE-fatigue experiment was performed. Two PWAS transducers were bonded to a thin aerospace test coupon. A fatigue crack generated in the test coupon produced low-amplitude acoustic waves. The low-amplitude fatigue-crack related AE signals were successfully captured by the PWAS transducers. The effect of distance on the captured AE signals was also studied: some high-frequency content of the AE signals developed as the waves traveled away from the crack. PMID:28817081
NASA Astrophysics Data System (ADS)
Boughari, Yamina
New methodologies have been developed to optimize the integration, testing and certification of flight control systems, an expensive process in the aerospace industry. This thesis investigates the stability of the Cessna Citation X aircraft without control, and then optimizes two different flight controllers from design to validation. The aircraft's model was obtained from the data provided by the Research Aircraft Flight Simulator (RAFS) of the Cessna Citation business aircraft. To increase the stability and control of the aircraft's systems, two different flight control designs were optimized: 1) the Linear Quadratic Regulation and Proportional Integral controllers were optimized using the Differential Evolution algorithm with the level 1 handling qualities as the objective function; the results were validated for the linear and nonlinear aircraft models, and some of the clearance criteria were investigated; and 2) the H-infinity control method was applied to the stability and control augmentation systems. To minimize the time required for flight control design and validation, the controller designs were optimized using the Differential Evolution (DE) and Genetic algorithms (GA); the DE algorithm proved more efficient than the GA. New tools for visualization of the linear validation process were also developed to reduce the time required for flight controller assessment. MATLAB software was used to validate the different optimization algorithms' results. Research platforms of the aircraft's linear and nonlinear models were developed and compared with the results of flight tests performed on the Research Aircraft Flight Simulator. Some of the clearance criteria of the optimized H-infinity flight controller were evaluated, including its linear stability, eigenvalues, and handling qualities criteria.
Nonlinear simulations of the maneuvers criteria were also investigated during this research to assess the Cessna Citation X's flight controller clearance, and therefore, for its anticipated certification.
Methods to validate the accuracy of an indirect calorimeter in the in-vitro setting.
Oshima, Taku; Ragusa, Marco; Graf, Séverine; Dupertuis, Yves Marc; Heidegger, Claudia-Paula; Pichard, Claude
2017-12-01
The international ICALIC initiative aims at developing a new indirect calorimeter according to the needs of clinicians and researchers in the field of clinical nutrition and metabolism. The project initially focuses on validating the calorimeter for use in mechanically ventilated, acutely ill adult patients. However, standard methods to validate the accuracy of calorimeters have not yet been established. This paper describes the procedures for the in-vitro tests to validate the accuracy of the new indirect calorimeter, and defines the ranges for the parameters to be evaluated in each test to optimize the validation for clinical and research calorimetry measurements. Two in-vitro tests have been defined to validate the accuracy of the gas analyzers and the overall function of the new calorimeter. 1) Gas composition analysis validates the accuracy of the O2 and CO2 analyzers. Reference gas of known O2 (or CO2) concentration is diluted by pure nitrogen gas to achieve a predefined O2 (or CO2) concentration, to be measured by the indirect calorimeter. The O2 and CO2 concentrations to be tested were determined according to their expected ranges during calorimetry measurements. 2) Gas exchange simulator analysis validates the O2 consumption (VO2) and CO2 production (VCO2) measurements. CO2 gas injection into artificial breath gas provided by the mechanical ventilator simulates VCO2. The resulting dilution of the O2 concentration in the expiratory air is analyzed by the calorimeter as VO2. CO2 gas of concentration identical to the fraction of inspired O2 (FiO2) is used to simulate identical VO2 and VCO2. Indirect calorimetry results from publications were analyzed to determine the VO2 and VCO2 values to be tested for the validation. The O2 concentration in respiratory air is highest at inspiration, and can decrease to 15% during expiration. The CO2 concentration can be as high as 5% in expired air.
To validate the analyzers for measurements at FiO2 up to 70%, the ranges of O2 and CO2 concentrations to be tested were defined as 15-70% and 0.5-5.0%, respectively. The mean VO2 in 426 adult mechanically ventilated patients was 270 ml/min, with a 2 standard deviation (SD) range of 150-391 ml/min. Thus, the VO2 and VCO2 values to be simulated for the validation were defined as 150, 250, and 400 ml/min. The procedures for the in-vitro tests of the new indirect calorimeter and the ranges for the parameters to be evaluated in each test have been defined to optimize the validation of accuracy for clinical and research indirect calorimetry measurements. The combined methods will be used to validate the accuracy of the new indirect calorimeter developed by the ICALIC initiative, and should become the standard method to validate the accuracy of any future indirect calorimeters. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.
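The VO2/VCO2 computation that such a gas exchange simulator must reproduce can be sketched with the standard Haldane transformation from textbook indirect calorimetry. This is not the ICALIC device's internal algorithm, and the gas fractions in the example are hypothetical values chosen only to land in a physiologic range:

```python
def gas_exchange(ve_lpm, fio2, fico2, feo2, feco2):
    """Compute VO2 and VCO2 (ml/min) from minute ventilation (L/min) and
    inspired/expired gas fractions, using the Haldane transformation
    (inspired and expired nitrogen volumes are assumed equal)."""
    # Inspired volume inferred from nitrogen balance
    vi = ve_lpm * (1.0 - feo2 - feco2) / (1.0 - fio2 - fico2)
    vo2 = (vi * fio2 - ve_lpm * feo2) * 1000.0    # L/min -> ml/min
    vco2 = (ve_lpm * feco2 - vi * fico2) * 1000.0
    return vo2, vco2
```

For example, 6 L/min ventilation at FiO2 = 0.21 with expired fractions FeO2 = 0.16 and FeCO2 = 0.04 yields a VO2 of roughly 316 ml/min and a VCO2 of 240 ml/min, inside the 150-400 ml/min band targeted by the validation.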
Calder, Stefan; O'Grady, Greg; Cheng, Leo K; Du, Peng
2018-04-27
Electrogastrography (EGG) is a non-invasive method for measuring gastric electrical activity. Recent simulation studies have attempted to extend the current clinical utility of the EGG, in particular by providing a theoretical framework for distinguishing specific gastric slow wave dysrhythmias. In this paper we implement an experimental setup called a 'torso-tank' with the aim of expanding and experimentally validating these previous simulations. The torso-tank was developed using an adult male torso phantom with 190 electrodes embedded throughout the torso. The gastric slow waves were reproduced using an artificial current source capable of producing 3D electrical fields. Multiple gastric dysrhythmias were reproduced based on high-resolution mapping data from cases of human gastric dysfunction (gastric re-entry, conduction blocks and ectopic pacemakers) in addition to normal test data. Each case was recorded and compared to the previously-presented simulated results. Qualitative and quantitative analyses were performed to define the accuracy, showing ≤1.8% difference, ≥0.99 correlation, and ≤0.04 normalised RMS error between experimental and simulated findings. These results reaffirm previous findings, and these methods in unison therefore present a promising morphology-based methodology for advancing the understanding and clinical applications of EGG.
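The agreement figures reported (percent difference, correlation, and normalised RMS error between experimental and simulated traces) are standard comparison metrics. A plain sketch of how such metrics are commonly computed follows; the paper does not specify its exact normalisation conventions, so the choices below (elementwise percent difference, Pearson correlation, RMS error normalised by the experimental signal range) are assumptions:

```python
import numpy as np

def agreement_metrics(sim, exp):
    """Return (mean percent difference, Pearson correlation, RMS error
    normalised by the range of the experimental signal)."""
    sim, exp = np.asarray(sim, float), np.asarray(exp, float)
    pct_diff = 100.0 * np.mean(np.abs(sim - exp) / (np.abs(exp) + 1e-12))
    r = np.corrcoef(sim, exp)[0, 1]
    nrmse = np.sqrt(np.mean((sim - exp) ** 2)) / (exp.max() - exp.min())
    return pct_diff, r, nrmse
```

A simulated trace that is a uniform 1% rescaling of the measured one, for instance, gives a 1% mean difference, correlation of 1, and a small normalised RMS error.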
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gehin, Jess C; Godfrey, Andrew T; Evans, Thomas M
The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications, including a core simulation capability called VERA-CS. A key milestone for this endeavor is to validate VERA against measurements from operating nuclear power reactors. The first step in validation against plant data is to determine the ability of VERA to accurately simulate the initial startup physics tests for Watts Bar Nuclear Power Station, Unit 1 (WBN1) cycle 1. VERA-CS calculations were performed with the Insilico code developed at ORNL, using cross section processing from the SCALE system and the transport capabilities within the Denovo transport code using the SPN method. The calculations were performed with ENDF/B-VII.0 cross sections in 252 groups (collapsed to 23 groups for the 3D transport solution). The key results of the comparison of calculations with measurements include initial criticality, critical configurations, control rod worth, differential boron worth, and the isothermal temperature reactivity coefficient (ITC). The VERA results for these parameters show good agreement with measurements, with the exception of the ITC, which requires additional investigation. Results are also compared to those obtained with Monte Carlo methods and a current industry core simulator.
Theoretical modeling of a portable x-ray tube based KXRF system to measure lead in bone
Specht, Aaron J; Weisskopf, Marc G; Nie, Linda Huiling
2017-01-01
Objective: K-shell x-ray fluorescence (KXRF) techniques have been used for decades to identify health effects resulting from exposure to metals, but the equipment is bulky and requires significant maintenance and licensing procedures. A portable x-ray fluorescence (XRF) device was developed to overcome these disadvantages, but introduced a measurement dependency on soft tissue thickness. With recent advances in detector technology, an XRF device combining the advantages of both systems should be feasible. Approach: In this study, we used Monte Carlo simulations to test the feasibility of an XRF device with a high-energy x-ray tube and a detector operable at room temperature. Main results: We first validated the use of the Monte Carlo N-Particle transport code (MCNP) for x-ray tube simulations, and found good agreement between experimental and simulated results. We then optimized the x-ray tube settings and found the detection limit of the high-energy x-ray tube based XRF device for bone lead measurements to be 6.91 μg g⁻¹ bone mineral using a cadmium zinc telluride detector. Significance: In conclusion, this study validated the use of MCNP in simulations of x-ray tube physics and XRF applications, and demonstrated the feasibility of a high-energy x-ray tube based XRF device for metal exposure assessment. PMID:28169835
Towards a supported common NEAMS software stack
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cormac Garvey
2012-04-01
The NEAMS IPSCs are developing multidimensional, multiphysics, multiscale simulation codes based on first principles that will be capable of predicting all aspects of current and future nuclear reactor systems. This new breed of simulation codes will include rigorous verification, validation and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of nuclear reactors and will also contribute to a speedier acquisition of licenses from the NRC for new reactor designs. Due to the high resolution of the models, the complexity of the physics, and the added computational resources needed to quantify the accuracy and quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out production simulation runs.
A validation procedure for a LADAR system radiometric simulation model
NASA Astrophysics Data System (ADS)
Leishman, Brad; Budge, Scott; Pack, Robert
2007-04-01
The USU LadarSIM software package is a ladar system engineering tool that has recently been enhanced to include modeling of the radiometry of ladar beam footprints. This paper discusses our validation of the radiometric model and presents a practical approach to future validation work. To validate complicated and interrelated factors affecting radiometry, a systematic approach had to be developed. Data for known parameters were first gathered, then unknown parameters of the system were determined from simulation test scenarios. This was done so as to isolate as many unknown variables as possible and then build on the previously obtained results. First, the appropriate voltage threshold levels of the discrimination electronics were set by analyzing the number of false alarms seen in actual data sets. With this threshold set, the system noise was then adjusted to achieve the appropriate number of dropouts. Once a suitable noise level was found, the range errors of the simulated and actual data sets were compared and studied. Predicted errors in range measurements were analyzed using two methods: first by examining the range error of a surface with known reflectivity, and second by examining the range errors for specific detectors with known responsivities. This provided insight into the discrimination method and receiver electronics used in the actual system.
Brunckhorst, Oliver; Shahid, Shahab; Aydin, Abdullatif; McIlhenny, Craig; Khan, Shahid; Raza, Syed Johar; Sahai, Arun; Brewin, James; Bello, Fernando; Kneebone, Roger; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran
2015-09-01
Current training modalities within ureteroscopy have been extensively validated and must now be integrated within a comprehensive curriculum. Additionally, deficiencies in non-technical skills are a frequent cause of surgical error, and little research has been conducted on combining non-technical with technical skills teaching. This study therefore aimed to develop and validate a curriculum for semi-rigid ureteroscopy, integrating both technical and non-technical skills teaching within the programme. Delphi methodology was utilised for curriculum development and content validation, with a randomised trial then conducted (n = 32) for curriculum evaluation. The developed curriculum consisted of four modules, initially developing basic technical skills and subsequently integrating non-technical skills teaching. Sixteen participants underwent the simulation-based curriculum and were subsequently assessed, together with the control cohort (n = 16), within a full immersion environment. Both technical (time to completion, OSATS and a task-specific checklist) and non-technical (NOTSS) outcome measures were recorded, with parametric and non-parametric analyses used depending on the distribution of our data as evaluated by a Shapiro-Wilk test. Improvements within the intervention cohort demonstrated educational value across all technical and non-technical parameters recorded, including time to completion (p < 0.01), OSATS scores (p < 0.001), task-specific checklist scores (p = 0.011) and NOTSS scores (p < 0.001). Content validity, feasibility and acceptability were all demonstrated through curriculum development and post-study questionnaire results. The developed curriculum demonstrates that integrating technical and non-technical skills teaching is both educationally valuable and feasible. Additionally, the curriculum offers a validated simulation-based training modality within ureteroscopy and a framework for the development of other simulation-based programmes.
Research on Flow Field Perception Based on Artificial Lateral Line Sensor System.
Liu, Guijie; Wang, Mengmeng; Wang, Anyi; Wang, Shirui; Yang, Tingting; Malekian, Reza; Li, Zhixiong
2018-03-11
In nature, the lateral line of fish is a peculiar and important organ for sensing the surrounding hydrodynamic environment, preying, escaping from predators and schooling. In this paper, by imitating the mechanism of fish lateral canal neuromasts, we developed an artificial lateral line system composed of micro-pressure sensors. Through hydrodynamic simulations, an optimized sensor structure was obtained and pressure distribution models of the lateral surface were established in uniform and turbulent flow. In a corresponding underwater experiment, the validity of the numerical simulation method was verified by comparing the experimental data with the simulation results. In addition, effective methods are proposed and validated for flow velocity estimation and attitude perception in turbulent flow, and shape recognition of obstacles is realized with a neural network algorithm.
Hydrological Modelling using HEC-HMS for Flood Risk Assessment of Segamat Town, Malaysia
NASA Astrophysics Data System (ADS)
Romali, N. S.; Yusop, Z.; Ismail, A. Z.
2018-03-01
This paper presents an assessment of the applicability of the Hydrologic Modelling System developed by the Hydrologic Engineering Center (HEC-HMS) for hydrological modelling of the Segamat River. The objective of the model application is to assist in the assessment of flood risk by providing the peak flows of the 2011 Segamat flood for the generation of flood maps of Segamat town. The capability of the model was evaluated by comparing historical observed data with the simulation results for the selected flood events. The calibration and validation efficiency of the model was verified using the Nash-Sutcliffe model efficiency coefficient. The results demonstrate the value of implementing the hydrological model for assessing flood risk, as the simulated peak flows are in agreement with the historical observed data. The model efficiencies for the calibration and validation exercises were 0.90 and 0.76, respectively, which is acceptable.
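The Nash-Sutcliffe efficiency used above for calibration and validation has a simple closed form: one minus the ratio of the sum of squared simulation errors to the variance of the observations about their mean. A minimal sketch (the acceptability threshold varies by study; the example series is invented):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency coefficient.

    1.0 is a perfect fit; 0.0 means the model is no better than the mean
    of the observations; negative values mean it is worse than the mean.
    """
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

A simulated hydrograph close to the observed one scores near 1, while predicting the observed mean everywhere scores exactly 0.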
Validation and upgrading of physically based mathematical models
NASA Technical Reports Server (NTRS)
Duval, Ronald
1992-01-01
The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.
The development of a simulation model of the treatment of coronary heart disease.
Cooper, Keith; Davies, Ruth; Roderick, Paul; Chase, Debbie; Raftery, James
2002-11-01
A discrete event simulation models the progress of patients who have had a coronary event through their treatment pathways and subsequent coronary events. The main risk factors in the model are age, sex, history of previous events and the extent of the coronary vessel disease. The model parameters are based on data collected from epidemiological studies of incidence and prognosis, efficacy studies, national surveys and treatment audits. The simulation results were validated against different sources of data. The initial results show that increasing revascularisation has considerable implications for resource use but little impact on patient mortality.
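The abstract does not publish an implementation, but the core of any discrete event simulation of recurring coronary events is a time-ordered event queue. A minimal, hypothetical sketch with a constant annual event rate follows; the real model conditions event rates on age, sex, event history and vessel disease, none of which is modeled here:

```python
import heapq
import random

def run_des(n_patients, horizon_years, annual_event_rate, seed=1):
    """Count coronary events over a time horizon for a cohort of patients.

    Each patient's next event time is drawn from an exponential
    inter-event distribution (a Poisson process) and kept in a priority
    queue ordered by event time; after each event, the patient's next
    event is scheduled.
    """
    rng = random.Random(seed)
    queue = [(rng.expovariate(annual_event_rate), pid)
             for pid in range(n_patients)]
    heapq.heapify(queue)
    events = 0
    while queue:
        t, pid = heapq.heappop(queue)
        if t > horizon_years:
            continue  # past the horizon; this patient is done
        events += 1
        # schedule this patient's next event after an exponential delay
        heapq.heappush(queue, (t + rng.expovariate(annual_event_rate), pid))
    return events
```

With 1000 patients, a 10-year horizon and a rate of 0.1 events per patient-year, the expected event count is about 1000, which makes a quick sanity check of the event loop.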
Simulation of vortex-induced vibrations of a cylinder using ANSYS CFX
NASA Astrophysics Data System (ADS)
Izhar, Abu Bakar; Qureshi, Arshad Hussain; Khushnood, Shahab
2014-08-01
In this paper, vortex-induced vibrations of a cylinder are simulated using the ANSYS CFX simulation code. The cylinder is treated as a rigid body and transverse displacements are obtained using a one degree of freedom spring-damper system. Both 2-D and 3-D analyses are performed using air as the fluid. The Reynolds number is varied from approximately 40 to 16,000, covering the laminar and turbulent flow regimes. The experimental results of Khalak and Williamson (1997) and other researchers are used for validation purposes. The results obtained are comparable.
Chaudhari, Mangesh I.; You, Xinli; Pratt, Lawrence R.; ...
2015-11-24
Ethylene carbonate (EC) and propylene carbonate (PC) are widely used solvents in lithium (Li)-ion batteries and supercapacitors. Ion dissolution and diffusion in those media are correlated with the solvents' dielectric responses. Here, we use all-atom molecular dynamics simulations of the pure solvents to calculate dielectric constants, relaxation times, and molecular mobilities. The computed results are compared with the limited available experiments to assist more exhaustive studies of these important characteristics. The observed agreement is encouraging and provides guidance for further validation of force-field simulation models for EC and PC solvents.
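Extracting a static dielectric constant from an MD trajectory conventionally uses the fluctuation of the total box dipole moment M. A sketch of that standard fluctuation formula, assuming conducting ("tin-foil") boundary conditions; this is the textbook estimator, not necessarily the authors' exact one:

```python
import numpy as np

def dielectric_constant(M, volume_m3, temperature_K):
    """Static dielectric constant from total-dipole fluctuations:

        eps_r = 1 + (<M.M> - <M>.<M>) / (3 * eps0 * V * kB * T)

    M is an (n_frames, 3) array of the simulation box dipole moment in
    C*m; V is the box volume in m^3, T the temperature in K.
    """
    eps0, kB = 8.854187817e-12, 1.380649e-23
    mean_M = M.mean(axis=0)
    fluct = (M * M).sum(axis=1).mean() - mean_M @ mean_M
    return 1.0 + fluct / (3.0 * eps0 * volume_m3 * kB * temperature_K)
```

High-dielectric solvents such as EC show large dipole fluctuations relative to the thermal scale eps0*V*kB*T, which is why long, well-sampled trajectories are needed for converged dielectric constants.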
A Novel Cost Based Model for Energy Consumption in Cloud Computing
Horri, A.; Dastghaibyfard, Gh.
2015-01-01
Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers need to minimize cloud infrastructure energy consumption while maintaining quality of service (QoS). In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated in different scenarios. The proposed model considers cache interference costs, which are based upon the size of the data. The model was implemented in the CloudSim simulator, and the related simulation results indicate that the energy consumption may be considerable and that it can vary with parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment. PMID:25705716
Airfoil Ice-Accretion Aerodynamics Simulation
NASA Technical Reports Server (NTRS)
Bragg, Michael B.; Broeren, Andy P.; Addy, Harold E.; Potapczuk, Mark G.; Guffond, Didier; Montreuil, E.
2007-01-01
NASA Glenn Research Center, ONERA, and the University of Illinois are conducting a major research program whose goal is to improve our understanding of the aerodynamic scaling of ice accretions on airfoils. When completed, the program will result in validated scaled simulation methods that reproduce the essential aerodynamic features of the full-scale iced airfoil. This research will provide some of the first high-fidelity, full-scale, iced-airfoil aerodynamic data. An initial study classified ice accretions, based on their aerodynamics, into four types: roughness, streamwise ice, horn ice, and spanwise-ridge ice. Subscale testing using a NACA 23012 airfoil was performed in the NASA IRT and the University of Illinois wind tunnel to better understand the aerodynamics of these ice types and to test various levels of ice simulation fidelity. These studies are briefly reviewed here, having been presented in more detail in other papers. Based on these results, full-scale testing at the ONERA F1 tunnel, using cast ice shapes obtained from molds taken in the IRT, will provide full-scale iced-airfoil data from full-scale ice accretions. Using these data as a baseline, the final step is to validate the subscale simulation methods in the Illinois wind tunnel. Computational ice accretion methods, including LEWICE and ONICE, have been used to guide the experiments; they are briefly described and results shown. When full-scale and simulation aerodynamic results are available, these data will be used to further develop the computational tools. The purpose of this paper is to present an overview of the program and key results to date.
Validation of Shielding Analysis Capability of SuperMC with SINBAD
NASA Astrophysics Data System (ADS)
Chen, Chaobin; Yang, Qi; Wu, Bin; Han, Yuncheng; Song, Jing
2017-09-01
The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD, compiled by RSICC and the NEA, includes numerous benchmark experiments performed with the D-T fusion neutron source facilities of OKTAVIAN, FNS, IPPE, etc. The results of the SuperMC simulations were compared with experimental data and MCNP results. Very good agreement, with deviations lower than 1%, was achieved, suggesting that SuperMC is reliable for shielding calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Y.A.; Chapman, D.M.; Hill, D.J.
2000-12-15
The dynamic rod worth measurement (DRWM) technique is a method of quickly validating the predicted bank worth of control rods and shutdown rods. The DRWM analytic method is based on three-dimensional, space-time kinetic simulations of the rapid rod movements. Its measurement data is processed with an advanced digital reactivity computer. DRWM has been used as the method of bank worth validation at numerous plant startups with excellent results. The process and methodology of DRWM are described, and the measurement results of using DRWM are presented.
Simulation of particle motion in a closed conduit validated against experimental data
NASA Astrophysics Data System (ADS)
Dolanský, Jindřich
2015-05-01
The motion of a number of spherical particles in a closed conduit is examined by means of both simulation and experiment. The bed of the conduit is covered with stationary spherical particles of the same size as the moving particles. The flow is driven by experimentally measured velocity profiles, which serve as inputs to the simulation; altering the input velocity profiles generates various trajectory patterns. A simulation based on the lattice Boltzmann method (LBM) is developed to study the mutual interactions of the flow and the particles, modeling both the particle motion and the fluid flow. The entropic LBM is employed to handle the flow, which is characterized by a high Reynolds number. The entropic modification of the LBM, along with the enhanced refinement of the lattice grid, increases the demands on computational resources. Because the LBM is inherently parallel, these demands can be met by employing the MATLAB Parallel Computing Toolbox and transformations that enable CUDA GPU computing. The particle trajectories determined in the LBM simulation are validated against data gained from the experiments, and the compatibility of the simulation results with the experimental measurements is evaluated. The accuracy of the applied approach is assessed, and the stability and efficiency of the simulation are also considered.
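The collision-and-streaming structure that any LBM solver shares can be sketched in a few lines. The toy D2Q9 BGK step below is only meant to show that structure; it is not the entropic LBM, the grid refinement, or the particle coupling used in the study, and all parameters are made up.

```python
import numpy as np

# Minimal D2Q9 lattice-Boltzmann BGK step on a periodic grid: a toy
# illustration of the collide/stream cycle, not the paper's entropic scheme.

# D2Q9 lattice velocities and weights
C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
TAU = 0.8  # relaxation time (sets the viscosity)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium distributions."""
    cu = np.einsum('qd,xyd->qxy', C, u)
    usq = np.sum(u**2, axis=-1)
    return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f):
    """One BGK collision followed by periodic streaming."""
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->xyd', C, f) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / TAU          # collide
    for q, (cx, cy) in enumerate(C):                 # stream
        f[q] = np.roll(np.roll(f[q], cx, axis=0), cy, axis=1)
    return f

# Uniform density with a small sinusoidal velocity perturbation.
nx = ny = 16
rho0 = np.ones((nx, ny))
u0 = np.zeros((nx, ny, 2))
u0[..., 0] = 0.01 * np.sin(2*np.pi*np.arange(nx)/nx)[:, None]
f = equilibrium(rho0, u0)
for _ in range(10):
    f = step(f)
print("mass conserved:", np.isclose(f.sum(), nx*ny))  # prints: mass conserved: True
```

Mass conservation after collision and streaming is a standard first sanity check for any LBM implementation.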
Simulation of a G-tolerance curve using the pulsatile cardiovascular model
NASA Technical Reports Server (NTRS)
Solomon, M.; Srinivasan, R.
1985-01-01
A computer simulation study, performed to assess the ability of the cardiovascular model to reproduce the G-tolerance curve (G level versus tolerance time), is reported. A composite strength-duration curve derived from experimental data obtained in human centrifugation studies was used for comparison. The effects of abolishing autonomic control and of blood volume loss on G tolerance were also simulated. The results provide additional validation of the model. The need for autonomic reflexes even at low G levels is pointed out. The low margin of safety under blood volume loss indicated by the simulation results underscores the necessity for protective measures during Shuttle reentry.
Simulating the evolution of glyphosate resistance in grains farming in northern Australia
Thornby, David F.; Walker, Steve R.
2009-01-01
Background and Aims The evolution of resistance to herbicides is a substantial problem in contemporary agriculture. Solutions to this problem generally consist of the use of practices to control the resistant population once it evolves, and/or to institute preventative measures before populations become resistant. Herbicide resistance evolves in populations over years or decades, so predicting the effectiveness of preventative strategies in particular relies on computational modelling approaches. While models of herbicide resistance already exist, none deals with the complex regional variability in the northern Australian sub-tropical grains farming region. For this reason, a new computer model was developed. Methods The model consists of an age- and stage-structured population model of weeds, with an existing crop model used to simulate plant growth and competition, and extensions to the crop model added to simulate seed bank ecology and population genetics factors. Using awnless barnyard grass (Echinochloa colona) as a test case, the model was used to investigate the likely rate of evolution under conditions expected to produce high selection pressure. Key Results Simulating continuous summer fallows with glyphosate used as the only means of weed control resulted in predicted resistant weed populations after approx. 15 years. Validation of the model against the paddock history for the first real-world glyphosate-resistant awnless barnyard grass population shows that the model predicted resistance evolution to within a few years of the real situation. Conclusions This validation work shows that empirical validation of herbicide resistance models is problematic. However, the model simulates the complexities of sub-tropical grains farming in Australia well, and can be used to investigate, generate and improve glyphosate resistance prevention strategies. PMID:19567415
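The multi-year timescale of resistance evolution described above can be illustrated with a deliberately simple single-locus selection model. This sketch is far coarser than the age- and stage-structured model in the paper; every number below (initial allele frequency, selection coefficient, dominance assumption) is made up purely to show why resistance can take many seasons to surface.

```python
# Single-locus sketch of yearly herbicide selection on an annual weed
# population. Illustrative only; not the paper's model or parameters.

def simulate(p0, s, years):
    """Yearly frequency change of a dominant resistance allele R.

    Resistant phenotypes (RR and RS) have fitness 1 under the herbicide;
    susceptible SS plants have fitness 1 - s.
    """
    p = p0
    history = [p]
    for _ in range(years):
        q = 1.0 - p
        mean_fitness = 1.0 - s * q * q   # w = p^2 + 2pq + (1-s)q^2
        p = p / mean_fitness             # standard selection recursion
        history.append(p)
    return history

# A rare allele (1 in 10,000) under strong yearly selection (s = 0.5)
# takes on the order of 15 seasons to become common in this toy model.
hist = simulate(p0=1e-4, s=0.5, years=15)
print(f"allele frequency after 15 years: {hist[-1]:.2f}")
```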
Terrill Vosbein, Heidi A; Boatz, Jerry A; Kenney, John W
2005-12-22
The moment analysis method (MA) has been tested for the case of 2S --> 2P ([core]ns1 --> [core]np1) transitions of alkali metal atoms (M) doped into cryogenic rare gas (Rg) matrices using theoretically validated simulations. Theoretical/computational M/Rg system models are constructed with precisely defined parameters that closely mimic known M/Rg systems. Monte Carlo (MC) techniques are then employed to generate simulated absorption and magnetic circular dichroism (MCD) spectra of the 2S --> 2P M/Rg transition to which the MA method can be applied with the goal of seeing how effective the MA method is in re-extracting the M/Rg system parameters from these known simulated systems. The MA method is summarized in general, and an assessment is made of the use of the MA method in the rigid shift approximation typically used to evaluate M/Rg systems. The MC-MCD simulation technique is summarized, and validating evidence is presented. The simulation results and the assumptions used in applying MA to M/Rg systems are evaluated. The simulation results on Na/Ar demonstrate that the MA method does successfully re-extract the 2P spin-orbit coupling constant and Landé g-factor values initially used to build the simulations. However, assigning physical significance to the cubic and noncubic Jahn-Teller (JT) vibrational mode parameters in cryogenic M/Rg systems is not supported.
On validating remote sensing simulations using coincident real data
NASA Astrophysics Data System (ADS)
Wang, Mingming; Yao, Wei; Brown, Scott; Goodenough, Adam; van Aardt, Jan
2016-05-01
The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery across a range of modalities. As scene-rendering technology and software have advanced, complex simulation of vegetation environments has become possible. This in turn has raised questions about the validity of such complex models, since phenomena such as multiple scattering and the bidirectional reflectance distribution function (BRDF) could affect results for complex vegetation scenes. We selected three sites located in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON), representing oak savanna, hardwood forest, and mixed conifer-manzanita forest. We constructed corresponding virtual scenes, using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites were then generated in the DIRSIG simulation environment. The simulated imagery was compared to real AVIRIS imagery (15 m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1 m spatial resolution; 180 pixels/scene). These tests used a distribution-comparison approach for selected spectral statistics, e.g., statistics that establish the spectra's shape, for each simulated-versus-real distribution pair. Initial comparison of the spectral distributions indicated that the shapes of the spectra from the virtual and real sites were closely matched.
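A distribution-comparison of a per-pixel spectral statistic can be done with a two-sample Kolmogorov-Smirnov statistic, as sketched below. The statistic chosen, the sample values, and the pairing are all illustrative assumptions; the abstract does not state which comparison metric the study used.

```python
# Sketch of a distribution-comparison approach: pool a per-pixel spectral
# statistic from simulated and real imagery and compare the two samples with
# a two-sample KS statistic. Data are made up, not from the study.

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = sorted(a), sorted(b)
    pts = sorted(set(a) | set(b))
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in pts)

simulated_stat = [0.21, 0.24, 0.22, 0.25, 0.23, 0.26]  # e.g. per-pixel mean reflectance
real_stat      = [0.22, 0.25, 0.23, 0.24, 0.21, 0.27]

d = ks_statistic(simulated_stat, real_stat)
print(f"KS distance: {d:.3f}")  # prints: KS distance: 0.167
```

A small KS distance indicates the two distributions (and hence the simulated and real spectral statistics) are closely matched.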
NASA Astrophysics Data System (ADS)
Danilishin, A. M.; Kozhukhov, Y. V.; Neverov, V. V.; Malev, K. G.; Mironov, Y. R.
2017-08-01
The aim of this work is a validation study of the numerical modeling of the characteristics of a multistage centrifugal compressor for natural gas. As part of the study, the grid interfaces and software systems used were analyzed. The results revealed discrepancies between the simulated and experimental characteristics, and a plan for future work is outlined.