Sample records for validated simulation tools

  1. Challenges of NDE simulation tool validation, optimization, and utilization for composites

    NASA Astrophysics Data System (ADS)

    Leckey, Cara A. C.; Seebo, Jeffrey P.; Juarez, Peter

    2016-02-01

    Rapid, realistic nondestructive evaluation (NDE) simulation tools can aid in inspection optimization and prediction of inspectability for advanced aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of aerospace components; potentially shortening the time from material development to implementation by industry and government. Furthermore, ultrasound modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided wave based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation is still far from the goal of rapidly simulating damage detection techniques for large scale, complex geometry composite components/vehicles containing realistic damage types. Ongoing work at NASA Langley Research Center is focused on advanced ultrasonic simulation tool development. This paper discusses challenges of simulation tool validation, optimization, and utilization for composites. Ongoing simulation tool development work is described along with examples of simulation validation and optimization challenges that are more broadly applicable to all NDE simulation tools. The paper will also discuss examples of simulation tool utilization at NASA to develop new damage characterization methods for composites, and associated challenges in experimentally validating those methods.

  2. Challenges of NDE Simulation Tool

    NASA Technical Reports Server (NTRS)

    Leckey, Cara A. C.; Juarez, Peter D.; Seebo, Jeffrey P.; Frank, Ashley L.

    2015-01-01

    Realistic nondestructive evaluation (NDE) simulation tools enable inspection optimization and predictions of inspectability for new aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of advanced aerospace components; potentially shortening the time from material development to implementation by industry and government. Furthermore, modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided wave based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation cannot rapidly simulate damage detection techniques for large scale, complex geometry composite components/vehicles with realistic damage types. This paper discusses some of the challenges of model development and validation for composites, such as the level of realism and scale of simulation needed for NASA's applications. Ongoing model development work is described along with examples of model validation studies. The paper will also discuss examples of the use of simulation tools at NASA to develop new damage characterization methods, and associated challenges of validating those methods.

  3. Competency-Based Training and Simulation: Making a "Valid" Argument.

    PubMed

    Noureldin, Yasser A; Lee, Jason Y; McDougall, Elspeth M; Sweet, Robert M

    2018-02-01

    The use of simulation as an assessment tool is much more controversial than is its utility as an educational tool. However, without valid simulation-based assessment tools, the ability to objectively assess technical skill competencies in a competency-based medical education framework will remain challenging. The current literature in urologic simulation-based training and assessment uses a definition and framework of validity that is now outdated. This is probably due to an absence of awareness rather than an absence of comprehension. The following review article provides the urologic community with an updated taxonomy on validity theory as it relates to simulation-based training and assessments and translates our simulation literature to date into this framework. While the old taxonomy considered validity as distinct subcategories and focused on the simulator itself, the modern taxonomy, for which we translate the literature evidence, considers validity as a unitary construct with a focus on interpretation of simulator data/scores.

  4. The validity of a professional competence tool for physiotherapy students in simulation-based clinical education: a Rasch analysis.

    PubMed

    Judd, Belinda K; Scanlan, Justin N; Alison, Jennifer A; Waters, Donna; Gordon, Christopher J

    2016-08-05

    Despite the recent widespread adoption of simulation in clinical education in physiotherapy, there is a lack of validated tools for assessment in this setting. The Assessment of Physiotherapy Practice (APP) is a comprehensive tool used in clinical placement settings in Australia to measure professional competence of physiotherapy students. The aim of the study was to evaluate the validity of the APP for student assessment in simulation settings. A total of 1260 APPs were collected, 971 from students in simulation and 289 from students in clinical placements. Rasch analysis was used to examine the construct validity of the APP tool in three different simulation assessment formats: longitudinal assessment over 1 week of simulation; longitudinal assessment over 2 weeks; and a short-form (25 min) assessment of a single simulation scenario. Comparisons with APPs from 5-week clinical placements in hospital and clinic-based settings were also conducted. The APP demonstrated acceptable fit to the expectations of the Rasch model for the 1- and 2-week clinical simulations, exhibiting unidimensional properties that were able to distinguish different levels of student performance. For the short-form simulation, nine of the 20 items were scored as 'not assessed' by clinical educators in more than 25% of cases, which limited the suitability of the APP tool in this simulation format. The APP was a valid assessment tool when used in longitudinal simulation formats. A revised APP may be required for assessment in short-form simulation scenarios.
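    A minimal sketch of the idea behind Rasch analysis may help readers unfamiliar with the method. The APP uses a 0-4 rating scale, so the study would have fitted a polytomous (rating-scale or partial-credit) Rasch model; the dichotomous form below, with hypothetical values, only illustrates the core relationship between person ability and item difficulty on a common logit scale.

    ```python
    import numpy as np

    def rasch_probability(theta, b):
        """Dichotomous Rasch model: probability that a person of ability theta
        (logits) succeeds on an item of difficulty b (logits)."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    # Hypothetical: a student 1 logit above an item's difficulty
    print(rasch_probability(theta=0.5, b=-0.5))  # ~0.73
    ```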

  5. Improving Escalation of Care: Development and Validation of the Quality of Information Transfer Tool.

    PubMed

    Johnston, Maximilian J; Arora, Sonal; Pucher, Philip H; Reissis, Yannis; Hull, Louise; Huddy, Jeremy R; King, Dominic; Darzi, Ara

    2016-03-01

    To develop and provide validity and feasibility evidence for the QUality of Information Transfer (QUIT) tool. Prompt escalation of care in the setting of patient deterioration can prevent further harm. Escalation and information transfer skills are not currently measured in surgery. This study comprised 3 phases: the development (phase 1), validation (phase 2), and feasibility analysis (phase 3) of the QUIT tool. Phase 1 involved identification of core skills needed for successful escalation of care through literature review and 33 semistructured interviews with stakeholders. Phase 2 involved the generation of validity evidence for the tool using a simulated setting. Thirty surgeons assessed a deteriorating postoperative patient in a simulated ward and escalated their care to a senior colleague. The face and content validity were assessed using a survey. Construct and concurrent validity of the tool were determined by comparing performance scores using the QUIT tool with those measured using the Situation-Background-Assessment-Recommendation (SBAR) tool. Phase 3 was conducted using direct observation of escalation scenarios on surgical wards in 2 hospitals. A 7-category assessment tool consisting of 24 items was developed from phase 1. Twenty-one of 24 items had excellent content validity (content validity index >0.8). All 7 categories and 18 of 24 (P < 0.05) items demonstrated construct validity. The correlation between the QUIT and SBAR tools was strong, indicating concurrent validity (r = 0.694, P < 0.001). Real-time scoring of escalation referrals was feasible and indicated that doctors currently have better information transfer skills than nurses when faced with a deteriorating patient. A validated tool to assess information transfer for deteriorating surgical patients was developed and tested using simulation and real-time clinical scenarios. It may improve the quality and safety of patient care on the surgical ward.
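    To make the reported psychometrics concrete, here is a small sketch, with entirely hypothetical numbers, of how an item-level content validity index and a concurrent-validity correlation of the kind quoted above (CVI > 0.8, r = 0.694) might be computed.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    def item_cvi(expert_ratings, relevant_threshold=3):
        """Item-level content validity index: proportion of experts rating the
        item as relevant (e.g. 3 or 4 on a 4-point relevance scale)."""
        ratings = np.asarray(expert_ratings)
        return float(np.mean(ratings >= relevant_threshold))

    print(item_cvi([4, 3, 4, 4, 2, 3, 4, 4, 3, 4]))   # 0.9, above the 0.8 cut-off

    # Concurrent validity: correlate totals from the two tools (hypothetical data)
    quit_totals = np.array([62, 71, 55, 80, 67, 74])
    sbar_totals = np.array([58, 69, 50, 78, 70, 72])
    r, p = pearsonr(quit_totals, sbar_totals)
    print(r, p)
    ```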

  6. Development and psychometric testing of a Clinical Reasoning Evaluation Simulation Tool (CREST) for assessing nursing students' abilities to recognize and respond to clinical deterioration.

    PubMed

    Liaw, Sok Ying; Rashasegaran, Ahtherai; Wong, Lai Fun; Deneen, Christopher Charles; Cooper, Simon; Levett-Jones, Tracy; Goh, Hongli Sam; Ignacio, Jeanette

    2018-03-01

    The development of clinical reasoning skills in recognising and responding to clinical deterioration is essential in pre-registration nursing education. Simulation has been increasingly used by educators to develop this skill. To develop and evaluate the psychometric properties of a Clinical Reasoning Evaluation Simulation Tool (CREST) for measuring clinical reasoning skills in recognising and responding to clinical deterioration in a simulated environment. A scale development with psychometric testing and mixed methods study. Nursing students and academic staff were recruited at a university. A three-phase prospective study was conducted. Phase 1 involved the development and content validation of the CREST; Phase 2 included the psychometric testing of the tool with 15 second-year and 15 third-year nursing students who undertook the simulation-based assessment; Phase 3 involved the usability testing of the tool with nine academic staff through a survey questionnaire and focus group discussion. A 10-item CREST was developed based on a model of clinical reasoning. A content validity index of 0.93 was obtained from validation by 15 international experts. Construct validity was supported as the third-year students demonstrated significantly higher (p<0.001) clinical reasoning scores than the second-year students. Concurrent validity was also supported, with significant positive correlations between global rating scores and almost all subscale scores, as well as the total scores. Predictive validity was supported by comparison with an existing tool. The internal consistency was high, with a Cronbach's alpha of 0.92. A high inter-rater reliability was demonstrated, with an intraclass correlation coefficient of 0.88. The usability of the tool was rated positively by the nurse educators, but the need to ease the scoring process was highlighted. A valid and reliable tool was developed to measure the effectiveness of simulation in developing clinical reasoning skills for recognising and responding to clinical deterioration. Copyright © 2017. Published by Elsevier Ltd.
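    For readers who want to see how the internal-consistency figure quoted above (Cronbach's alpha = 0.92) is typically obtained, the following sketch computes alpha from a person-by-item score matrix; the data are random and purely illustrative.

    ```python
    import numpy as np

    def cronbach_alpha(item_scores):
        """Cronbach's alpha for an (n_persons x n_items) matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        X = np.asarray(item_scores, dtype=float)
        k = X.shape[1]
        item_variances = X.var(axis=0, ddof=1).sum()
        total_variance = X.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_variances / total_variance)

    # Hypothetical data: 30 students rated on the 10 CREST items (0-4 scale)
    rng = np.random.default_rng(0)
    scores = rng.integers(0, 5, size=(30, 10))
    print(cronbach_alpha(scores))
    ```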

  7. Development of Reliable and Validated Tools to Evaluate Technical Resuscitation Skills in a Pediatric Simulation Setting: Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics.

    PubMed

    Faudeux, Camille; Tran, Antoine; Dupont, Audrey; Desmontils, Jonathan; Montaudié, Isabelle; Bréaud, Jean; Braun, Marc; Fournier, Jean-Paul; Bérard, Etienne; Berlengi, Noémie; Schweitzer, Cyril; Haas, Hervé; Caci, Hervé; Gatin, Amélie; Giovannini-Chami, Lisa

    2017-09-01

    To develop a reliable and validated tool to evaluate technical resuscitation skills in a pediatric simulation setting. Four Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics (RESCAPE) evaluation tools were created, following international guidelines: intraosseous needle insertion, bag mask ventilation, endotracheal intubation, and cardiac massage. We applied a modified Delphi methodology evaluation to binary rating items. Reliability was assessed comparing the ratings of 2 observers (1 in real time and 1 after a video-recorded review). The tools were assessed for content, construct, and criterion validity, and for sensitivity to change. Inter-rater reliability, evaluated with Cohen kappa coefficients, was perfect or near-perfect (>0.8) for 92.5% of items and each Cronbach alpha coefficient was ≥0.91. Principal component analyses showed that all 4 tools were unidimensional. Significant increases in median scores with increasing levels of medical expertise were demonstrated for RESCAPE-intraosseous needle insertion (P = .0002), RESCAPE-bag mask ventilation (P = .0002), RESCAPE-endotracheal intubation (P = .0001), and RESCAPE-cardiac massage (P = .0037). Significantly increased median scores over time were also demonstrated during a simulation-based educational program. RESCAPE tools are reliable and validated tools for the evaluation of technical resuscitation skills in pediatric settings during simulation-based educational programs. They might also be used for medical practice performance evaluations. Copyright © 2017 Elsevier Inc. All rights reserved.
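    As a concrete illustration of the inter-rater statistic used above, the sketch below computes Cohen's kappa between two raters scoring the same binary checklist items; the ratings are invented, and a value above 0.8 would be read as near-perfect agreement, as in the study.

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical binary scores (1 = item performed correctly) from the
    # real-time observer and the video-review observer on 20 checklist items
    rater_live  = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1]
    rater_video = [1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1]

    print(cohen_kappa_score(rater_live, rater_video))
    ```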

  8. Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT

    PubMed Central

    Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.

    2011-01-01

    Purpose. In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures. This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We have also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE, for simulating projections of a hot sphere phantom. Results. We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1 : 2.73 : 3.54 : 7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole body phantom. Conclusions. The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied for modeling the collimator-detector response with penetration and scatter effects for MPH in the quantitative reconstruction method. PMID:19779896
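    The detection-efficiency validation described above is often sanity-checked against the standard analytical geometric sensitivity of a single knife-edge pinhole; the approximation below is not taken from the record and is included only as a hedged illustration of the kind of quantity being compared.

    ```python
    import numpy as np

    def pinhole_geometric_sensitivity(d_eff_mm, h_mm, theta_deg=0.0):
        """Approximate geometric sensitivity (detected fraction of emitted photons)
        of a knife-edge pinhole: g ~ d_eff^2 * cos(theta)^3 / (16 * h^2)."""
        theta = np.radians(theta_deg)
        return (d_eff_mm ** 2) * np.cos(theta) ** 3 / (16.0 * h_mm ** 2)

    # Hypothetical 1 mm effective aperture with a point source 30 mm away, on axis
    print(pinhole_geometric_sensitivity(d_eff_mm=1.0, h_mm=30.0))  # ~6.9e-5
    ```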

  9. Validation of the self-assessment teamwork tool (SATT) in a cohort of nursing and medical students.

    PubMed

    Roper, Lucinda; Shulruf, Boaz; Jorm, Christine; Currie, Jane; Gordon, Christopher J

    2018-02-09

    Poor teamwork has been implicated in medical error and teamwork training has been shown to improve patient care. Simulation is an effective educational method for teamwork training. Post-simulation reflection aims to promote learning and we have previously developed a self-assessment teamwork tool (SATT) for health students to measure teamwork performance. This study aimed to evaluate the psychometric properties of a revised self-assessment teamwork tool. The tool was tested in 257 medical and nursing students after their participation in one of several mass casualty simulations. Using exploratory and confirmatory factor analysis, the revised self-assessment teamwork tool was shown to have strong construct validity, high reliability, and the construct demonstrated invariance across groups (Medicine & Nursing). The modified SATT was shown to be a reliable and valid student self-assessment tool. The SATT is a quick and practical method of guiding students' reflection on important teamwork skills.

  10. Fast scattering simulation tool for multi-energy x-ray imaging

    NASA Astrophysics Data System (ADS)

    Sossin, A.; Tabary, J.; Rebuffel, V.; Létang, J. M.; Freud, N.; Verger, L.

    2015-12-01

    A combination of Monte Carlo (MC) and deterministic approaches was employed as a means of creating a simulation tool capable of providing energy resolved x-ray primary and scatter images within a reasonable time interval. Libraries of Sindbad, a previously developed x-ray simulation software, were used in the development. The scatter simulation capabilities of the tool were validated through simulation with the aid of GATE and through experimentation by using a spectrometric CdTe detector. A simple cylindrical phantom with cavities and an aluminum insert was used. Cross-validation with GATE showed good agreement with a global spatial error of 1.5% and a maximum scatter spectrum error of around 6%. Experimental validation also supported the accuracy of the simulations obtained from the developed software with a global spatial error of 1.8% and a maximum error of around 8.5% in the scatter spectra.
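    The abstract quotes a "global spatial error" between the fast tool and GATE without defining it; one plausible definition, shown here purely as an assumption, is the total absolute discrepancy expressed as a percentage of the total reference scatter signal.

    ```python
    import numpy as np

    def global_relative_error(reference, test):
        """Total absolute discrepancy between two scatter images as a percentage
        of the total reference signal (one possible 'global spatial error')."""
        ref = np.asarray(reference, dtype=float)
        tst = np.asarray(test, dtype=float)
        return 100.0 * np.abs(tst - ref).sum() / ref.sum()

    # Hypothetical 64x64 scatter maps from GATE and from the fast simulator
    rng = np.random.default_rng(1)
    gate_map = rng.random((64, 64)) + 1.0
    fast_map = gate_map * (1.0 + 0.015 * rng.standard_normal((64, 64)))
    print(global_relative_error(gate_map, fast_map))   # on the order of 1%
    ```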

  11. Time Domain Tool Validation Using ARES I-X Flight Data

    NASA Technical Reports Server (NTRS)

    Hough, Steven; Compton, James; Hannan, Mike; Brandon, Jay

    2011-01-01

    The ARES I-X vehicle was launched from NASA's Kennedy Space Center (KSC) on October 28, 2009 at approximately 11:30 EDT. ARES I-X was the first test flight for NASA's ARES I launch vehicle, and it was the first non-Shuttle launch vehicle designed and flown by NASA since Saturn. The ARES I-X had a 4-segment solid rocket booster (SRB) first stage and a dummy upper stage (US) to emulate the properties of the ARES I US. During ARES I-X pre-flight modeling and analysis, six (6) independent time domain simulation tools were developed and cross validated. Each tool represents an independent implementation of a common set of models and parameters in a different simulation framework and architecture. Post flight data and reconstructed models provide the means to validate a subset of the simulations against actual flight data and to assess the accuracy of pre-flight dispersion analysis. Post flight data consists of telemetered Operational Flight Instrumentation (OFI) data primarily focused on flight computer outputs and sensor measurements as well as Best Estimated Trajectory (BET) data that estimates vehicle state information from all available measurement sources. While pre-flight models were found to provide a reasonable prediction of the vehicle flight, reconstructed models were generated to better represent and simulate the ARES I-X flight. Post flight reconstructed models include: SRB propulsion model, thrust vector bias models, mass properties, base aerodynamics, and Meteorological Estimated Trajectory (wind and atmospheric data). The result of the effort is a set of independently developed, high fidelity, time-domain simulation tools that have been cross validated and validated against flight data. This paper presents the process and results of high fidelity aerospace modeling, simulation, analysis and tool validation in the time domain.

  12. Unsteady Three-Dimensional Simulation of a Shear Coaxial GO2/GH2 Rocket Injector with RANS and Hybrid-RANS-LES/DES Using Flamelet Models

    NASA Technical Reports Server (NTRS)

    Westra, Doug G.; West, Jeffrey S.; Richardson, Brian R.

    2015-01-01

    Historically, the analysis and design of liquid rocket engines (LREs) has relied on full-scale testing and one-dimensional empirical tools. The testing is extremely expensive and the one-dimensional tools are not designed to capture the highly complex, multi-dimensional features that are inherent to LREs. Recent advances in computational fluid dynamics (CFD) tools have made it possible to predict liquid rocket engine performance and stability, to assess the effect of complex flow features, and to evaluate injector-driven thermal environments, thereby mitigating the cost of testing. Extensive efforts to verify and validate these CFD tools have been conducted, to provide confidence for using them during the design cycle. Previous validation efforts have documented comparisons of predicted heat flux thermal environments with test data for a single element gaseous oxygen (GO2) and gaseous hydrogen (GH2) injector. The most notable was a comprehensive validation effort conducted by Tucker et al. [1], in which a number of different groups modeled a GO2/GH2 single element configuration by Pal et al. [2]. The tools used for this validation comparison employed a range of algorithms, including steady and unsteady Reynolds-Averaged Navier-Stokes (U/RANS) calculations, large-eddy simulations (LES), detached eddy simulations (DES), and various combinations. A more recent effort by Thakur et al. [3] focused on using a state-of-the-art CFD simulation tool, Loci/STREAM, on a two-dimensional grid. Loci/STREAM was chosen because it has a unique, very efficient flamelet parameterization of combustion reactions that are too computationally expensive to simulate with conventional finite-rate chemistry calculations. The current effort focuses on further advancement of validation efforts, again using the Loci/STREAM tool with the flamelet parameterization, but this time with a three-dimensional grid. Comparisons to the Pal et al. heat flux data will be made for both RANS and hybrid RANS-LES/detached eddy simulations (DES). Computational costs will be reported, along with a comparison of accuracy and cost against the much less expensive two-dimensional RANS simulations of the same geometry.

  13. Validation of Robotic Surgery Simulator (RoSS).

    PubMed

    Kesavadas, Thenkurussi; Stegemann, Andrew; Sathyaseelan, Gughan; Chowriappa, Ashirwad; Srimathveeravalli, Govindarajan; Seixas-Mikelus, Stéfanie; Chandrasekhar, Rameella; Wilding, Gregory; Guru, Khurshid

    2011-01-01

    Recent growth of the daVinci Robotic Surgical System as a minimally invasive surgery tool has led to a call for better training of future surgeons. In this paper, a new virtual reality simulator, called RoSS, is presented. Initial results from two studies, face and content validity, are very encouraging. 90% of the cohort of expert robotic surgeons felt that the simulator was excellent or somewhat close to the touch and feel of the daVinci console. Content validity of the simulator received 90% approval in some cases. These studies demonstrate that RoSS has the potential of becoming an important training tool for the daVinci surgical robot.

  14. Validation of a novel duplex ultrasound objective structured assessment of technical skills (DUOSATS) for arterial stenosis detection.

    PubMed

    Jaffer, U; Singh, P; Pandey, V A; Aslam, M; Standfield, N J

    2014-01-01

    Duplex ultrasound facilitates bedside diagnosis and hence timely patient care. Its uptake has been hampered by training and accreditation issues. We have developed an assessment tool for Duplex arterial stenosis measurement for both simulator- and patient-based training. A novel assessment tool, the duplex ultrasound assessment of technical skills, was developed. A modified duplex ultrasound assessment of technical skills was used for simulator training. Novice, intermediate experience and expert users of duplex ultrasound were invited to participate. Participants viewed an instructional video and were allowed ample time to familiarize themselves with the equipment. Participants' attempts were recorded and independently assessed by four experts using the modified duplex ultrasound assessment of technical skills. 'Global' assessment was also done on a four-point Likert scale. Content, construct and concurrent validity as well as reliability were evaluated. Content and construct validity as well as reliability were demonstrated. The simulator had a good satisfaction rating from participants: median 4; range 3-5. Receiver operator characteristic analysis established that cut points of 22/34 and 25/40 were most appropriate for simulator- and patient-based assessment, respectively. We have validated a novel assessment tool for duplex arterial stenosis detection. Further work is underway to establish transference validity of simulator training to improved skill in scanning patients. We have developed and validated the duplex ultrasound assessment of technical skills for simulator training.
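    The abstract does not say how the pass/fail cut points were chosen from the receiver operator characteristic (ROC) analysis; a common approach, sketched here with invented scores, is to take the threshold that maximizes Youden's J (sensitivity + specificity - 1).

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve

    # Hypothetical assessment totals (out of 34) and competence labels
    # (1 = independently judged competent, 0 = novice)
    totals = np.array([12, 15, 18, 20, 21, 23, 24, 26, 28, 30, 31, 33])
    labels = np.array([ 0,  0,  0,  0,  0,  1,  0,  1,  1,  1,  1,  1])

    fpr, tpr, thresholds = roc_curve(labels, totals)
    best = np.argmax(tpr - fpr)          # Youden's J statistic
    print(thresholds[best])              # candidate pass/fail cut point
    ```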

  15. Simulation validation and management

    NASA Astrophysics Data System (ADS)

    Illgen, John D.

    1995-06-01

    Illgen Simulation Technologies, Inc., has been working on interactive verification and validation programs for the past six years. As a result, they have evolved a methodology that has been adopted and successfully implemented by a number of different verification and validation programs. This methodology employs a unique application of computer-assisted software engineering (CASE) tools to reverse engineer source code and produce analytical outputs (flow charts and tables) that aid the engineer/analyst in the verification and validation process. We have found that the use of CASE tools saves time, which equates to improvements in both schedule and cost. This paper will describe the ISTI-developed methodology and how CASE tools are used in its support. Case studies will be discussed.

  16. Validation of a novel venous duplex ultrasound objective structured assessment of technical skills for the assessment of venous reflux.

    PubMed

    Jaffer, Usman; Normahani, Pasha; Lackenby, Kimberly; Aslam, Mohammed; Standfield, Nigel J

    2015-01-01

    Duplex ultrasound measurement of reflux time is central to the diagnosis of venous incompetence. We have developed an assessment tool for Duplex measurement of venous reflux for both simulator and patient-based training. A novel assessment tool, Venous Duplex Ultrasound Assessment of Technical Skills (V-DUOSATS), was developed. A modified DUOSATS was used for simulator training. Participants of varying skill levels were invited to view an instructional video and were allowed ample time to familiarize themselves with the Duplex equipment. Attempts made by the participants were recorded and independently assessed by 3 expert assessors and 5 novice assessors using the modified V-DUOSATS. "Global" assessment was also done by expert assessors on a 4-point Likert scale. Content, construct, and concurrent validities as well as reliability were evaluated. Content and construct validity as well as reliability were demonstrated. Receiver operator characteristic analysis established that cut points of 19/22 and 21/30 were most appropriate for simulator and patient-based assessment, respectively. We have validated a novel assessment tool for Duplex venous reflux measurement. Further work is required to establish transference validity of simulator training to improved skill in scanning patients. We have developed and validated V-DUOSATS for simulator training. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  17. Gathering Validity Evidence for Surgical Simulation: A Systematic Review.

    PubMed

    Borgersen, Nanna Jo; Naur, Therese M H; Sørensen, Stine M D; Bjerrum, Flemming; Konge, Lars; Subhi, Yousif; Thomsen, Ann Sofia S

    2018-06-01

    To identify current trends in the use of validity frameworks in surgical simulation, to provide an overview of the evidence behind the assessment of technical skills in all surgical specialties, and to present recommendations and guidelines for future validity studies. Validity evidence for assessment tools used in the evaluation of surgical performance is of paramount importance to ensure valid and reliable assessment of skills. We systematically reviewed the literature by searching 5 databases (PubMed, EMBASE, Web of Science, PsycINFO, and the Cochrane Library) for studies published from January 1, 2008, to July 10, 2017. We included original studies evaluating simulation-based assessments of health professionals in surgical specialties and extracted data on surgical specialty, simulator modality, participant characteristics, and the validity framework used. Data were synthesized qualitatively. We identified 498 studies with a total of 18,312 participants. Publications involving validity assessments in surgical simulation more than doubled, from approximately 30 studies/year in 2008 to 2010 to approximately 70 to 90 studies/year in 2014 to 2016. Only 6.6% of the studies used the recommended contemporary validity framework (Messick). The majority of studies used outdated frameworks such as face validity. Significant differences were identified across surgical specialties. The evaluated assessment tools were mostly inanimate or virtual reality simulation models. An increasing number of studies have gathered validity evidence for simulation-based assessments in surgical specialties, but the use of outdated frameworks remains common. To address the current practice, this paper presents guidelines on how to use the contemporary validity framework when designing validity studies.

  18. Simulator validation results and proposed reporting format from flight testing a software model of a complex, high-performance airplane.

    DOT National Transportation Integrated Search

    2008-01-01

    Computer simulations are often used in aviation studies. These simulation tools may require complex, high-fidelity aircraft models. Since many of the flight models used are third-party developed products, independent validation is desired prior to im...

  19. Calibration and validation of a spar-type floating offshore wind turbine model using the FAST dynamic simulation tool

    DOE PAGES

    Browning, J. R.; Jonkman, J.; Robertson, A.; ...

    2014-12-16

    In this study, high-quality computer simulations are required when designing floating wind turbines because of the complex dynamic responses that are inherent with a high number of degrees of freedom and variable metocean conditions. In 2007, the FAST wind turbine simulation tool, developed and maintained by the U.S. Department of Energy's (DOE's) National Renewable Energy Laboratory (NREL), was expanded to include capabilities that are suitable for modeling floating offshore wind turbines. In an effort to validate FAST and other offshore wind energy modeling tools, DOE funded the DeepCwind project, which tested three prototype floating wind turbines at 1/50th scale in a wave basin, including a semisubmersible, a tension-leg platform, and a spar buoy. This paper describes the use of the results of the spar wave basin tests to calibrate and validate the FAST offshore floating simulation tool, and presents some initial results of simulated dynamic responses of the spar to several combinations of wind and sea states. Wave basin tests with the spar attached to a scale model of the NREL 5-megawatt reference wind turbine were performed at the Maritime Research Institute Netherlands under the DeepCwind project. This project included free-decay tests, tests with steady or turbulent wind and still water, wave-only tests (both periodic and irregular waves with no wind), and combined wind/wave tests. The resulting data from the 1/50th-scale model were scaled to full size using Froude scaling and used to calibrate and validate a full-size simulated model in FAST. Results of the model calibration and validation include successes, subtleties, and limitations of both wave basin testing and FAST modeling capabilities.
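    The Froude scaling mentioned above converts model-basin measurements to full scale using fixed powers of the geometric scale ratio. The sketch below lists the standard multipliers (time and velocity scale with the square root of the length ratio, forces with its cube); the optional density correction and the example period are assumptions for illustration, not values from the paper.

    ```python
    import math

    def froude_scale_factors(length_ratio, rho_ratio=1.0):
        """Multipliers for converting model-scale quantities to full scale under
        Froude similitude. length_ratio = full-scale length / model length
        (50 for a 1/50th model); rho_ratio optionally corrects basin fresh
        water to sea water density."""
        lam = length_ratio
        return {
            "length":   lam,
            "time":     math.sqrt(lam),
            "velocity": math.sqrt(lam),
            "force":    rho_ratio * lam ** 3,
            "moment":   rho_ratio * lam ** 4,
            "power":    rho_ratio * lam ** 3.5,
        }

    factors = froude_scale_factors(50.0, rho_ratio=1025.0 / 1000.0)
    model_period_s = 4.5                       # hypothetical model-scale period
    print(model_period_s * factors["time"])    # ~31.8 s at full scale
    ```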

  20. Teamwork Assessment Tools in Obstetric Emergencies: A Systematic Review.

    PubMed

    Onwochei, Desire N; Halpern, Stephen; Balki, Mrinalini

    2017-06-01

    Team-based training and simulation can improve patient safety, by improving communication, decision making, and performance of team members. Currently, there is no general consensus on whether or not a specific assessment tool is better adapted to evaluate teamwork in obstetric emergencies. The purpose of this qualitative systematic review was to find the tools available to assess team effectiveness in obstetric emergencies. We searched Embase, Medline, PubMed, Web of Science, PsycINFO, CINAHL, and Google Scholar for prospective studies that evaluated nontechnical skills in multidisciplinary teams involving obstetric emergencies. The search included studies from 1944 until January 11, 2016. Data on reliability and validity measures were collected and used for interpretation. A descriptive analysis was performed on the data. Thirteen studies were included in the final qualitative synthesis. All the studies assessed teams in the context of obstetric simulation scenarios, but only six included anesthetists in the simulations. One study evaluated its teamwork tool using only validity measures, five used only reliability measures, and one used both. The most reliable tools identified were the Clinical Teamwork Scale, the Global Assessment of Obstetric Team Performance, and the Global Rating Scale of performance. However, they were still lacking in terms of quality and validity. More work needs to be conducted to establish the validity of teamwork tools for nontechnical skills, and the development of an ideal tool is warranted. Further studies are required to assess how outcomes, such as performance and patient safety, are influenced when using these tools.

  1. Validation of virtual-reality-based simulations for endoscopic sinus surgery.

    PubMed

    Dharmawardana, N; Ruthenbeck, G; Woods, C; Elmiyeh, B; Diment, L; Ooi, E H; Reynolds, K; Carney, A S

    2015-12-01

    Virtual reality (VR) simulators provide an alternative to real patients for practicing surgical skills but require validation to ensure accuracy. Here, we validate the use of a virtual reality sinus surgery simulator with haptic feedback for training in Otorhinolaryngology - Head & Neck Surgery (OHNS). Participants were recruited from final-year medical students, interns, resident medical officers (RMOs), OHNS registrars and consultants. All participants completed an online questionnaire after performing four separate simulation tasks. These were then used to assess face, content and construct validity. ANOVA with post hoc correlation was used for statistical analysis. The following groups were compared: (i) medical students/interns, (ii) RMOs, (iii) registrars and (iv) consultants. Face validity results had a statistically significant (P < 0.05) difference between the consultant group and others, while there was no significant difference between medical students/interns and RMOs. Variability within groups was not significant. Content validity results based on consultant scoring and comments indicated that the simulations need further development in several areas to be effective for registrar-level teaching. However, students, interns and RMOs indicated that the simulations provide a useful tool for learning OHNS-related anatomy and as an introduction to ENT-specific procedures. The VR simulations have been validated for teaching sinus anatomy and nasendoscopy to medical students, interns and RMOs. However, they require further development before they can be regarded as a valid tool for more advanced surgical training. © 2015 John Wiley & Sons Ltd.

  2. Development of the TeamOBS-PPH - targeting clinical performance in postpartum hemorrhage.

    PubMed

    Brogaard, Lise; Hvidman, Lone; Hinshaw, Kim; Kierkegaard, Ole; Manser, Tanja; Musaeus, Peter; Arafeh, Julie; Daniels, Kay I; Judy, Amy E; Uldbjerg, Niels

    2018-06-01

    This study aimed to develop a valid and reliable TeamOBS-PPH tool for assessing clinical performance in the management of postpartum hemorrhage (PPH). The tool was evaluated using video-recordings of teams managing PPH in both real-life and simulated settings. A Delphi panel consisting of 12 obstetricians from the UK, Norway, Sweden, Iceland, and Denmark achieved consensus on (i) the elements to include in the assessment tool, (ii) the weighting of each element, and (iii) the final tool. The validity and reliability were evaluated according to Cook and Beckman. (Level 1) Four raters scored four video-recordings of in situ simulations of PPH. (Level 2) Two raters scored 85 video-recordings of real-life teams managing patients with PPH ≥1000 mL in two Danish hospitals. (Level 3) Two raters scored 15 video-recordings of in situ simulations of PPH from a US hospital. The tool was designed with scores from 0 to 100. (Level 1) Teams of novices had a median score of 54 (95% CI 48-60), whereas experienced teams had a median score of 75 (95% CI 71-79; p < 0.001). (Level 2) The intra-rater [intra-class correlation (ICC) = 0.96] and inter-rater (ICC = 0.83) agreements for real-life PPH were strong. The tool was applicable in all cases: atony, retained placenta, and lacerations. (Level 3) The tool was easily adapted to in situ simulation settings in the USA (ICC = 0.86). The TeamOBS-PPH tool appears to be valid and reliable for assessing clinical performance in real-life and simulated settings. The tool will be shared as the free TeamOBS App. © 2018 Nordic Federation of Societies of Obstetrics and Gynecology.

  3. A Methodology for the Design of Application-Specific Cyber-Physical Social Sensing Co-Simulators.

    PubMed

    Sánchez, Borja Bordel; Alcarria, Ramón; Sánchez-Picot, Álvaro; Sánchez-de-Rivera, Diego

    2017-09-22

    Cyber-Physical Social Sensing (CPSS) is a new trend in the context of pervasive sensing. In these new systems, various domains coexist in time, evolve together and influence each other. Thus, application-specific tools are necessary for specifying and validating designs and simulating systems. However, nowadays, different tools are employed to simulate each domain independently. The main cause of the lack of co-simulation instruments able to simulate all domains together is the extreme difficulty of combining and synchronizing various tools. In order to reduce that difficulty, an adequate architecture for the final co-simulator must be selected. Therefore, in this paper the authors investigate and propose a methodology for the design of CPSS co-simulation tools. The paper describes the four steps that software architects should follow in order to design the most adequate co-simulator for a certain application, considering the final users' needs and requirements and various additional factors such as the development team's experience. Moreover, the first practical use case of the proposed methodology is provided. An experimental validation is also included in order to evaluate the performance of the proposed co-simulator and to determine the correctness of the proposal.

  4. A Methodology for the Design of Application-Specific Cyber-Physical Social Sensing Co-Simulators

    PubMed Central

    Sánchez-Picot, Álvaro

    2017-01-01

    Cyber-Physical Social Sensing (CPSS) is a new trend in the context of pervasive sensing. In these new systems, various domains coexist in time, evolve together and influence each other. Thus, application-specific tools are necessary for specifying and validating designs and simulating systems. However, nowadays, different tools are employed to simulate each domain independently. The main cause of the lack of co-simulation instruments able to simulate all domains together is the extreme difficulty of combining and synchronizing various tools. In order to reduce that difficulty, an adequate architecture for the final co-simulator must be selected. Therefore, in this paper the authors investigate and propose a methodology for the design of CPSS co-simulation tools. The paper describes the four steps that software architects should follow in order to design the most adequate co-simulator for a certain application, considering the final users' needs and requirements and various additional factors such as the development team's experience. Moreover, the first practical use case of the proposed methodology is provided. An experimental validation is also included in order to evaluate the performance of the proposed co-simulator and to determine the correctness of the proposal. PMID:28937610

  5. Simulation validation of the XV-15 tilt-rotor research aircraft

    NASA Technical Reports Server (NTRS)

    Ferguson, S. W.; Hanson, G. D.; Churchill, G. B.

    1984-01-01

    The results of a simulation validation program of the XV-15 tilt-rotor research aircraft are detailed, covering such simulation aspects as the mathematical model, visual system, motion system, cab aural system, cab control loader system, pilot perceptual fidelity, and generic tilt rotor applications. Simulation validation was performed for the hover, low-speed, and sideward flight modes, with consideration of the in-ground rotor effect. Several deficiencies of the mathematical model and the simulation systems were identified in the course of the simulation validation project, and some were corrected. It is noted that NASA's Vertical Motion Simulator used in the program is an excellent tool for tilt-rotor and rotorcraft design, development, and pilot training.

  6. WEC3: Wave Energy Converter Code Comparison Project: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien

    This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases. Phase I consists of code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency domain modelling tools were not included in the WEC3 project.

  7. Development and validation of an observation tool for the assessment of nursing pain management practices in intensive care unit in a standardized clinical simulation setting.

    PubMed

    Gosselin, Emilie; Bourgault, Patricia; Lavoie, Stephan; Coleman, Robin-Marie; Méziat-Burdin, Anne

    2014-12-01

    Pain management in the intensive care unit is often inadequate. There is no tool available to assess nursing pain management practices. The aim of this study was to develop and validate a measuring tool to assess nursing pain management in the intensive care unit during standardized clinical simulation. A literature review was performed to identify relevant components demonstrating optimal pain management in adult intensive care units and to integrate them into an observation tool. This tool was submitted to an expert panel and pretested. It was then used to assess pain management practice during 26 discrete standardized clinical simulation sessions with intensive care nurses. The Nursing Observation Tool for Pain Management (NOTPaM) contains 28 statements grouped into 8 categories, which are grouped into 4 dimensions: subjective assessment, objective assessment, interventions, and reassessment. The tool's internal consistency was calculated at a Cronbach's alpha of 0.436 for the whole tool; the alpha varied from 0.328 to 0.518 across dimensions. To evaluate the inter-rater reliability, the intra-class correlation coefficient was used, which was calculated at 0.751 (p < .001) for the whole tool, with variations from 0.619 to 0.920 (p < .01) between dimensions. The expert panel was satisfied with the content and face validity of the tool. The psychometric qualities of the NOTPaM developed in this study are satisfactory. However, the tool could be improved with slight modifications. Nevertheless, it was useful in assessing intensive care nurses' pain management in a standardized clinical simulation. The NOTPaM is the first tool created for this purpose. Copyright © 2014 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.

  8. Discussing Virtual Tools that Simulate Probabilities: What Are the Middle School Teachers' Concerns?

    ERIC Educational Resources Information Center

    Savard, Annie; Freiman, Viktor; Theis, Laurent; Larose, Fançois

    2013-01-01

    Mathematics teachers, researchers and specialists in educational technology from Quebec, Canada, developed virtual tools that provide interactive simulations of games of chance. These tools were presented to a group of teachers from New Brunswick through workshops, and the teachers then tested and validated them with their students. Semi-structured…

  9. A discrete event simulation tool to support and predict hospital and clinic staffing.

    PubMed

    DeRienzo, Christopher M; Shaw, Ryan J; Meanor, Phillip; Lada, Emily; Ferranti, Jeffrey; Tanaka, David

    2017-06-01

    We demonstrate how to develop a simulation tool to help healthcare managers and administrators predict and plan for staffing needs in a hospital neonatal intensive care unit using administrative data. We developed a discrete event simulation model of nursing staff needed in a neonatal intensive care unit and then validated the model against historical data. The process flow was translated into a discrete event simulation model. Results demonstrated that the model can be used to give a respectable estimate of annual admissions, transfers, and deaths based upon two different staffing levels. The discrete event simulation tool model can provide healthcare managers and administrators with (1) a valid method of modeling patient mix, patient acuity, staffing needs, and costs in the present state and (2) a forecast of how changes in a unit's staffing, referral patterns, or patient mix would affect a unit in a future state.
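    A heavily simplified sketch of a discrete event simulation of this kind is shown below, using the SimPy library. The admission rate, length of stay, and nurse-to-patient ratio are hypothetical placeholders, and the model ignores the acuity levels, transfers, and deaths that the actual tool accounts for; it only illustrates the queueing structure of such a staffing model.

    ```python
    import random
    import simpy

    ADMISSIONS_PER_DAY = 4      # hypothetical mean admission rate
    MEAN_LOS_DAYS = 12.0        # hypothetical mean length of stay
    NURSES_ON_SHIFT = 20        # staffing level being evaluated
    BABIES_PER_NURSE = 3        # hypothetical assignment ratio

    def admissions(env, unit, log):
        """Generate admissions with exponential inter-arrival times."""
        while True:
            yield env.timeout(random.expovariate(ADMISSIONS_PER_DAY))
            env.process(stay(env, unit, log))

    def stay(env, unit, log):
        """Each infant occupies nurse capacity for an exponential length of stay."""
        with unit.request() as bed:
            yield bed
            yield env.timeout(random.expovariate(1.0 / MEAN_LOS_DAYS))
            log["discharges"] += 1

    random.seed(42)
    env = simpy.Environment()
    unit = simpy.Resource(env, capacity=NURSES_ON_SHIFT * BABIES_PER_NURSE)
    log = {"discharges": 0}
    env.process(admissions(env, unit, log))
    env.run(until=365.0)        # simulate one year (time unit: days)
    print(log["discharges"])
    ```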

  10. Simulation for Prediction of Entry Article Demise (SPEAD): An Analysis Tool for Spacecraft Safety Analysis and Ascent/Reentry Risk Assessment

    NASA Technical Reports Server (NTRS)

    Ling, Lisa

    2014-01-01

    For the purpose of performing safety analysis and risk assessment for a potential off-nominal atmospheric reentry resulting in vehicle breakup, a synthesis of trajectory propagation coupled with thermal analysis and the evaluation of node failure is required to predict the sequence of events, the timeline, and the progressive demise of spacecraft components. To provide this capability, the Simulation for Prediction of Entry Article Demise (SPEAD) analysis tool was developed. The software and methodology have been validated against actual flights, telemetry data, and validated software, and safety/risk analyses were performed for various programs using SPEAD. This report discusses the capabilities, modeling, validation, and application of the SPEAD analysis tool.

  11. Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner

    NASA Astrophysics Data System (ADS)

    Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.

    2015-02-01

    Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.
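    The count-rate comparison above follows the NEMA NU-2 protocol, whose key summary quantities are the scatter fraction and the noise-equivalent count rate. The functions below state the standard definitions; the numbers fed to them are invented, not results from the study.

    ```python
    def scatter_fraction(trues, scattered):
        """NEMA scatter fraction: SF = S / (T + S)."""
        return scattered / (trues + scattered)

    def necr(trues, scattered, randoms, k=1.0):
        """Noise-equivalent count rate: NECR = T^2 / (T + S + k * R),
        with k = 1 or 2 depending on the randoms-correction method."""
        return trues ** 2 / (trues + scattered + k * randoms)

    # Hypothetical count rates (kcps) at a single activity concentration
    print(scatter_fraction(trues=250.0, scattered=120.0))    # ~0.32
    print(necr(trues=250.0, scattered=120.0, randoms=90.0))  # ~136 kcps
    ```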

  12. An optimization model to agroindustrial sector in antioquia (Colombia, South America)

    NASA Astrophysics Data System (ADS)

    Fernandez, J.

    2015-06-01

    This paper proposes a general optimization model for the flower industry, defined using discrete simulation and nonlinear optimization; the mathematical models are solved using ProModel simulation tools and GAMS optimization. The paper defines the operations that constitute production and marketing in the sector, presents data taken directly from each operation through field work and statistically validated, and formulates the discrete simulation model of the operations and the linear optimization model of the entire industry chain. The model is solved with the tools described above, and the results are validated in a case study.

  13. Validation of a novel virtual reality simulator for robotic surgery.

    PubMed

    Schreuder, Henk W R; Persson, Jan E U; Wolswijk, Richard G H; Ihse, Ingmar; Schijven, Marlies P; Verheijen, René H M

    2014-01-01

    With the increase in robotic-assisted laparoscopic surgery there is a concomitant rising demand for training methods. The objective was to establish face and construct validity of a novel virtual reality simulator (dV-Trainer, Mimic Technologies, Seattle, WA) for use in training for robot-assisted surgery. A comparative cohort study was performed. Participants (n = 42) were divided into three groups according to their robotic experience. To determine construct validity, participants performed three different exercises twice. Performance parameters were measured. To determine face validity, participants filled in a questionnaire after completion of the exercises. Experts outperformed novices in most of the measured parameters. The most discriminative parameters were "time to complete" and "economy of motion" (P < 0.001). The training capacity of the simulator was rated 4.6 ± 0.5 SD on a 5-point Likert scale. The realism of the simulator in general, visual graphics, movements of instruments, interaction with objects, and the depth perception were all rated as being realistic. The simulator is considered to be a very useful training tool for residents and medical specialists starting with robotic surgery. Face and construct validity for the dV-Trainer could be established. The virtual reality simulator is a useful tool for training robotic surgery.

  14. Validation of a Novel Virtual Reality Simulator for Robotic Surgery

    PubMed Central

    Schreuder, Henk W. R.; Persson, Jan E. U.; Wolswijk, Richard G. H.; Ihse, Ingmar; Schijven, Marlies P.; Verheijen, René H. M.

    2014-01-01

    Objective. With the increase in robotic-assisted laparoscopic surgery there is a concomitant rising demand for training methods. The objective was to establish face and construct validity of a novel virtual reality simulator (dV-Trainer, Mimic Technologies, Seattle, WA) for use in training for robot-assisted surgery. Methods. A comparative cohort study was performed. Participants (n = 42) were divided into three groups according to their robotic experience. To determine construct validity, participants performed three different exercises twice. Performance parameters were measured. To determine face validity, participants filled in a questionnaire after completion of the exercises. Results. Experts outperformed novices in most of the measured parameters. The most discriminative parameters were “time to complete” and “economy of motion” (P < 0.001). The training capacity of the simulator was rated 4.6 ± 0.5 SD on a 5-point Likert scale. The realism of the simulator in general, visual graphics, movements of instruments, interaction with objects, and the depth perception were all rated as being realistic. The simulator is considered to be a very useful training tool for residents and medical specialists starting with robotic surgery. Conclusions. Face and construct validity for the dV-Trainer could be established. The virtual reality simulator is a useful tool for training robotic surgery. PMID:24600328

  15. Echo simulator with novel training and competency testing tools.

    PubMed

    Sheehan, Florence H; Otto, Catherine M; Freeman, Rosario V

    2013-01-01

    We developed and validated an echo simulator with three novel tools that facilitate training and enable quantitative and objective measurement of psychomotor as well as cognitive skill. First, the trainee can see original patient images - not synthetic or simulated images - that morph in real time as the mock transducer is manipulated on the mannequin. Second, augmented reality is used for Visual Guidance, a tool that assists the trainee in scanning by displaying the target organ in 3-dimensions (3D) together with the location of the current view plane and the plane of the anatomically correct view. Third, we introduce Image Matching, a tool that leverages the aptitude of the human brain for recognizing similarities and differences to help trainees learn to perform visual assessment of ultrasound images. Psychomotor competence is measured in terms of the view plane angle error. The construct validity of the simulator for competency testing was established by demonstrating its ability to discriminate novices vs. experts.
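    The "view plane angle error" used above as the psychomotor metric can be pictured as the angle between the acquired image plane and the reference standard-view plane. One way to compute it, assuming the simulator exposes the plane normals, is sketched below with made-up vectors.

    ```python
    import numpy as np

    def view_plane_angle_error(normal_acquired, normal_target):
        """Angle in degrees between the acquired image plane and the target
        standard-view plane, computed from their normal vectors."""
        a = np.asarray(normal_acquired, dtype=float)
        b = np.asarray(normal_target, dtype=float)
        cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    # Hypothetical normals: a trainee's view tilted slightly off the reference
    print(view_plane_angle_error([0.0, 0.10, 0.99], [0.0, 0.0, 1.0]))  # ~5.8 deg
    ```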

  16. A review of the available urology skills training curricula and their validation.

    PubMed

    Shepherd, William; Arora, Karan Singh; Abboudi, Hamid; Shamim Khan, Mohammed; Dasgupta, Prokar; Ahmed, Kamran

    2014-01-01

    The transforming field of urological surgery continues to demand development of novel training devices and curricula for its trainees. Contemporary trainees have to balance workplace demands while overcoming the cognitive barriers of acquiring skills in rapidly multiplying and advancing surgical techniques. This article provides a brief review of the process involved in developing a surgical curriculum and the current status of real and simulation-based curricula in the 4 subgroups of urological surgical practice: open, laparoscopic, endoscopic, and robotic. An informal literature review was conducted to provide a snapshot into the variety of simulation training tools available for technical and nontechnical urological surgical skills within all subgroups of urological surgery using the following keywords: "urology, surgery, training, curriculum, validation, non-technical skills, technical skills, LESS, robotic, laparoscopy, animal models." Validated training tools explored in research were tabulated and summarized. A total of 20 studies exploring validated training tools were identified. Huge variation was noticed in the types of validity sought by researchers and suboptimal incorporation of these tools into curricula was noted across the subgroups of urological surgery. The following key recommendations emerge from the review: adoption of simulation-based curricula in training; better integration of dedicated training time in simulated environments within a trainee's working hours; better incentivization for educators and assessors to improvise, research, and deliver teaching using the technologies available; and continued emphasis on developing nontechnical skills in tandem with technical operative skills. © 2013 Published by Association of Program Directors in Surgery on behalf of Association of Program Directors in Surgery.

  17. Going DEEP: guidelines for building simulation-based team assessments.

    PubMed

    Grand, James A; Pearce, Marina; Rench, Tara A; Chao, Georgia T; Fernandez, Rosemarie; Kozlowski, Steve W J

    2013-05-01

    Whether for team training, research or evaluation, making effective use of simulation-based technologies requires robust, reliable and accurate assessment tools. Extant literature on simulation-based assessment practices has primarily focused on scenario and instructional design; however, relatively little direct guidance has been provided regarding the challenging decisions and fundamental principles related to assessment development and implementation. The objective of this manuscript is to introduce a generalisable assessment framework supplemented by specific guidance on how to construct and ensure valid and reliable simulation-based team assessment tools. The recommendations reflect best practices in assessment and are designed to empower healthcare educators, professionals and researchers with the knowledge to design and employ valid and reliable simulation-based team assessments. Information and actionable recommendations associated with creating assessments of team processes (non-technical 'teamwork' activities) and performance (demonstration of technical proficiency) are presented which provide direct guidance on how to Distinguish the underlying competencies one aims to assess, Elaborate the measures used to capture team member behaviours during simulation activities, Establish the content validity of these measures and Proceduralise the measurement tools in a way that is systematically aligned with the goals of the simulation activity while maintaining methodological rigour (DEEP). The DEEP framework targets fundamental principles and critical activities that are important for effective assessment, and should benefit healthcare educators, professionals and researchers seeking to design or enhance any simulation-based assessment effort.

  18. Can 3D Gamified Simulations Be Valid Vocational Training Tools for Persons with Intellectual Disability? An Experiment Based on a Real-life Situation.

    PubMed

    von Barnekow, Ariel; Bonet-Codina, Núria; Tost, Dani

    2017-03-23

    To investigate whether 3D gamified simulations can be valid vocational training tools for persons with intellectual disability. A 3D gamified simulation composed of a set of training tasks for cleaning work in the hospitality sector was developed in collaboration with professionals of a real hostel and pedagogues of a special needs school. The learning objectives focus on the acquisition of vocabulary, work procedures, social abilities and risk prevention. Several accessibility features were developed to make the tasks easy to perform from a technological point of view. A pilot experiment was conducted to test the pedagogical efficacy of this tool with intellectually disabled workers and students. User scores in the gamified simulation showed a steadily increasing progression. When confronted with the real setting, participants recognized the scenario and tried to reproduce what they had learned in the simulation. Finally, they were interested in the tool, showed a strong feeling of immersion and engagement, and reported having fun. On the basis of this experiment, we believe that 3D gamified simulations can be efficient tools for training the social and professional skills of persons with intellectual disabilities, thus contributing to their social inclusion through work.

  19. Simulation of anisoplanatic imaging through optical turbulence using numerical wave propagation with new validation analysis

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; Power, Jonathan D.; LeMaster, Daniel A.; Droege, Douglas R.; Gladysz, Szymon; Bose-Pillai, Santasri

    2017-07-01

    We present a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. While isoplanatic simulation is relatively common, few tools are specifically designed for simulating the imaging of extended scenes under anisoplanatic conditions. We provide a complete description of the proposed simulation tool, including the wave propagation method used. Our approach computes an array of point spread functions (PSFs) for a two-dimensional grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and blurring. To produce the PSF array, we generate a series of extended phase screens. Simulated point sources are numerically propagated from an array of positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF will be different, and thus passes through a different portion of the extended phase screens. These different paths give rise to a spatially varying PSF to produce anisoplanatic effects. We use a method for defining the individual phase screen statistics that we have not seen used in previous anisoplanatic simulations. We also present a validation analysis. In particular, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and a derived differential tilt variance statistic. This is in addition to comparing the long- and short-exposure PSFs and isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying Cn2(z) profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects. Temporal correlation is introduced by generating even larger extended phase screens and translating this block of screens in front of the propagation area. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. Thus, we think this tool can be used effectively to study anisoplanatic optical turbulence and to aid in the development of image restoration methods.
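
    The spatially varying weighted-sum step described above is the core of the image-degradation stage. The sketch below (Python/NumPy, not the authors' code) illustrates one common way to implement it: the ideal image is blurred once per PSF on a coarse grid, and the blurred copies are blended with bilinear weights so that the effective PSF varies smoothly across the field of view; the warping component is omitted. The function name, grid layout, and example PSFs are assumptions for illustration only.

        # Minimal sketch of a spatially varying (anisoplanatic) blur: blur the ideal
        # image once per gridded PSF, then blend the results with bilinear weights.
        import numpy as np
        from scipy.signal import fftconvolve

        def anisoplanatic_blur(ideal, psf_grid, grid_rows, grid_cols):
            """ideal: 2-D image; psf_grid: dict {(i, j): 2-D PSF} on a grid_rows x grid_cols grid."""
            H, W = ideal.shape
            # Blur the whole image once per PSF (a coarse grid keeps this affordable).
            blurred = {key: fftconvolve(ideal, psf, mode="same") for key, psf in psf_grid.items()}
            out = np.zeros_like(ideal, dtype=float)
            ys = np.linspace(0, grid_rows - 1, H)   # fractional grid coordinate of each row
            xs = np.linspace(0, grid_cols - 1, W)   # fractional grid coordinate of each column
            for r in range(H):
                i0 = int(np.floor(ys[r])); i1 = min(i0 + 1, grid_rows - 1); fy = ys[r] - i0
                for c in range(W):
                    j0 = int(np.floor(xs[c])); j1 = min(j0 + 1, grid_cols - 1); fx = xs[c] - j0
                    # Bilinear blend of the four nearest pre-blurred images at this pixel.
                    out[r, c] = ((1 - fy) * (1 - fx) * blurred[(i0, j0)][r, c]
                                 + (1 - fy) * fx * blurred[(i0, j1)][r, c]
                                 + fy * (1 - fx) * blurred[(i1, j0)][r, c]
                                 + fy * fx * blurred[(i1, j1)][r, c])
            return out

        # Example: a 4x4 grid of Gaussian-like PSFs over a random "ideal" image.
        rng = np.random.default_rng(0)
        psfs = {}
        for i in range(4):
            for j in range(4):
                v = np.exp(-0.5 * (np.arange(-7, 8) / (1.0 + 0.3 * (i + j))) ** 2)
                psfs[(i, j)] = np.outer(v, v) / np.outer(v, v).sum()
        degraded = anisoplanatic_blur(rng.random((64, 64)), psfs, 4, 4)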

  20. Multiyear Plan for Validation of EnergyPlus Multi-Zone HVAC System Modeling using ORNL's Flexible Research Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Im, Piljae; Bhandari, Mahabir S.; New, Joshua Ryan

    This document describes the Oak Ridge National Laboratory (ORNL) multiyear experimental plan for validation and uncertainty characterization of whole-building energy simulation for a multi-zone research facility using a traditional rooftop unit (RTU) as a baseline heating, ventilating, and air conditioning (HVAC) system. The project’s overarching objective is to increase the accuracy of energy simulation tools by enabling empirical validation of key inputs and algorithms. Doing so is required to inform the design of increasingly integrated building systems and to enable accountability for performance gaps between design and operation of a building. The project will produce documented data sets that can be used to validate key functionality in different energy simulation tools and to identify errors and inadequate assumptions in simulation engines so that developers can correct them. ASHRAE Standard 140, Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ASHRAE 2004), currently consists primarily of tests to compare different simulation programs with one another. This project will generate sets of measured data to enable empirical validation, incorporate these test data sets in an extended version of Standard 140, and apply these tests to the Department of Energy’s (DOE) EnergyPlus software (EnergyPlus 2016) to initiate the correction of any significant deficiencies. The fitness-for-purpose of the key algorithms in EnergyPlus will be established and demonstrated, and vendors of other simulation programs will be able to demonstrate the validity of their products. The data set will be equally applicable to the validation of other simulation engines.

  1. Development and psychometric evaluation of the "Neurosurgical Evaluation of Attitudes towards simulation Training" (NEAT) tool for use in neurosurgical education and training.

    PubMed

    Kirkman, Matthew A; Muirhead, William; Nandi, Dipankar; Sevdalis, Nick

    2014-01-01

    Neurosurgical simulation training is becoming increasingly popular. Attitudes toward simulation among residents can contribute to the effectiveness of simulation training, but such attitudes remain poorly explored in neurosurgery with no psychometrically proven measure in the literature. The aim of the present study was to evaluate prospectively a newly developed tool for this purpose: the Neurosurgical Evaluation of Attitudes towards simulation Training (NEAT). The NEAT tool was prospectively developed in 2 stages and psychometrically evaluated (validity and reliability) in 2 administrations with the same participants. The tool comprises a questionnaire with 9 Likert scale items and 2 free-text sections assessing attitudes toward simulation in neurosurgery. The evaluation was completed with 31 neurosurgery residents in London, United Kingdom, who were generally favorable toward neurosurgical simulation. The internal consistency of the questionnaire was high, as demonstrated by the overall Cronbach α values (α=0.899 and α=0.955). All but 2 questionnaire items had "substantial" or "almost perfect" test-retest reliability following repeated survey administrations (median Pearson r correlation=0.688; range, 0.248-0.841). NEAT items were well correlated with each other on both occasions, showing good validity of content within the NEAT tool. There was no significant relationship between either gender or length of neurosurgical experience and item ratings. NEAT is the first psychometrically evaluated tool for evaluating attitudes toward simulation in neurosurgery. Further implementation of NEAT is required in wider neurosurgical populations to establish whether specific population groups differ. Use of NEAT in studies of neurosurgical simulation could offer an additional outcome measure to performance metrics, permitting evaluation of the impact of neurosurgical simulation on attitudes toward simulation both between participants and within the same participants over time. Copyright © 2014 Elsevier Inc. All rights reserved.
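
    For readers unfamiliar with the statistics quoted above, the short Python sketch below shows how an internal-consistency coefficient such as Cronbach's alpha is computed from a respondents-by-items matrix of Likert ratings. The ratings in the example are invented for illustration and are not NEAT data.

        # Illustrative only: computing a Cronbach's alpha like the values reported above
        # from a matrix of Likert ratings (rows = respondents, columns = items).
        import numpy as np

        def cronbach_alpha(ratings):
            ratings = np.asarray(ratings, dtype=float)
            k = ratings.shape[1]                          # number of items
            item_vars = ratings.var(axis=0, ddof=1)       # variance of each item
            total_var = ratings.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
            return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

        # Hypothetical ratings from 5 respondents on 4 items:
        demo = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3], [4, 4, 5, 4]]
        print(round(cronbach_alpha(demo), 3))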

  2. Update on simulation-based surgical training and assessment in ophthalmology: a systematic review.

    PubMed

    Thomsen, Ann Sofia S; Subhi, Yousif; Kiilgaard, Jens Folke; la Cour, Morten; Konge, Lars

    2015-06-01

    This study reviews the evidence behind simulation-based surgical training of ophthalmologists to determine (1) the validity of the reported models and (2) the ability to transfer skills to the operating room. Simulation-based training is established widely within ophthalmology, although it often lacks a scientific basis for implementation. We conducted a systematic review of trials involving simulation-based training or assessment of ophthalmic surgical skills among health professionals. The search included 5 databases (PubMed, EMBASE, PsycINFO, Cochrane Library, and Web of Science) and was completed on March 1, 2014. Overall, the included trials were divided into animal, cadaver, inanimate, and virtual-reality models. Risk of bias was assessed using the Cochrane Collaboration's tool. Validity evidence was evaluated using a modern validity framework (Messick's). We screened 1368 reports for eligibility and included 118 trials. The most common surgery simulated was cataract surgery. Most validity trials investigated only 1 or 2 of 5 sources of validity (87%). Only 2 trials (48 participants) investigated transfer of skills to the operating room; 4 trials (65 participants) evaluated the effect of simulation-based training on patient-related outcomes. Because of heterogeneity of the studies, it was not possible to conduct a quantitative analysis. The methodologic rigor of trials investigating simulation-based surgical training in ophthalmology is inadequate. To ensure effective implementation of training models, evidence-based knowledge of validity and efficacy is needed. We provide a useful tool for implementation and evaluation of research in simulation-based training. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  3. IgSimulator: a versatile immunosequencing simulator.

    PubMed

    Safonova, Yana; Lapidus, Alla; Lill, Jennie

    2015-10-01

    The recent introduction of next-generation sequencing technologies to antibody studies has resulted in a growing number of immunoinformatics tools for antibody repertoire analysis. However, benchmarking these newly emerging tools remains problematic since the gold-standard datasets needed to validate them are typically not available. Since simulating antibody repertoires is often the only feasible way to benchmark new immunoinformatics tools, we developed the IgSimulator tool, which addresses various complications in generating realistic antibody repertoires. IgSimulator's code has a modular structure and can easily be adapted to new simulation requirements. IgSimulator is open source and freely available as a C++ and Python program running on all Unix-compatible platforms. The source code is available from yana-safonova.github.io/ig_simulator. Contact: safonova.yana@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Development, initial reliability and validity testing of an observational tool for assessing technical skills of operating room nurses.

    PubMed

    Sevdalis, Nick; Undre, Shabnam; Henry, Janet; Sydney, Elaine; Koutantji, Mary; Darzi, Ara; Vincent, Charles A

    2009-09-01

    The recent emergence of the Systems Approach to the safety and quality of surgical care has triggered individual and team skills training modules for surgeons and anaesthetists and relevant observational assessment tools have been developed. To develop an observational tool that captures operating room (OR) nurses' technical skill and can be used for assessment and training. The Imperial College Assessment of Technical Skills for Nurses (ICATS-N) assesses (i) gowning and gloving, (ii) setting up instrumentation, (iii) draping, and (iv) maintaining sterility. Three to five observable behaviours have been identified for each skill and are rated on 1-6 scales. Feasibility and aspects of reliability and validity were assessed in 20 simulation-based crisis management training modules for trainee nurses and doctors, carried out in a Simulated Operating Room. The tool was feasible to use in the context of simulation-based training. Satisfactory reliability (Cronbach alpha) was obtained across trainers' and trainees' scores (analysed jointly and separately). Moreover, trainer nurse's ratings of the four skills correlated positively, thus indicating adequate content validity. Trainer's and trainees' ratings did not correlate. Assessment of OR nurses' technical skill is becoming a training priority. The present evidence suggests that the ICATS-N could be considered for use as an assessment/training tool for junior OR nurses.

  5. Construct validation of a novel hybrid surgical simulator.

    PubMed

    Broe, D; Ridgway, P F; Johnson, S; Tierney, S; Conlon, K C

    2006-06-01

    Simulated minimal access surgery has improved recently as both a learning and assessment tool. The construct validation of a novel simulator, ProMis, is described for use by residents in training. ProMis is a surgical simulator that can design tasks in both virtual and actual reality. A pilot group of surgical residents ranging from novice to expert completed three standardized tasks: orientation, dissection, and basic suturing. The tasks were tested for construct validity. Two experienced surgeons examined the recorded tasks in a blinded fashion using an objective structured assessment of technical skills format (OSATS: task-specific checklist and global rating score) as well as metrics delivered by the simulator. The findings showed excellent interrater reliability (Cronbach's alpha of 0.88 for the checklist and 0.93 for the global rating). The median scores in the experience groups were statistically different in both the global rating and the task-specific checklists (p < 0.05). The scores for the orientation task alone did not reach significance (p = 0.1), suggesting that modification is required before ProMis could be used in isolation as an assessment tool. The three simulated tasks in combination are construct valid for differentiating experience levels among surgeons in training. This hybrid simulator has potential added benefits of marrying the virtual with actual, and of combining simple box traits and advanced virtual reality simulation.

  6. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  7. Hyper-X Stage Separation Trajectory Validation Studies

    NASA Technical Reports Server (NTRS)

    Tartabini, Paul V.; Bose, David M.; McMinn, John D.; Martin, John G.; Strovers, Brian K.

    2003-01-01

    An independent 12 degree-of-freedom simulation of the X-43A separation trajectory was created with the Program to Optimize Simulated Trajectories (POST II). This simulation modeled the multi-body dynamics of the X-43A and its booster and included the effect of two pyrotechnically actuated pistons used to push the vehicles apart as well as aerodynamic interaction forces and moments between the two vehicles. The simulation was developed to validate trajectory studies conducted with a 14 degree-of-freedom simulation created early in the program using the Automatic Dynamic Analysis of Mechanical Systems (ADAMS) simulation software. The POST simulation was less detailed than the official ADAMS-based simulation used by the Project, but was simpler, more concise and ran faster, while providing similar results. The increase in speed gave the Project an alternate analysis tool. This tool was ideal for performing separation control logic trade studies that required the running of numerous Monte Carlo trajectories.

  8. The virtual reality simulator dV-Trainer(®) is a valid assessment tool for robotic surgical skills.

    PubMed

    Perrenot, Cyril; Perez, Manuela; Tran, Nguyen; Jehl, Jean-Philippe; Felblinger, Jacques; Bresler, Laurent; Hubert, Jacques

    2012-09-01

    Exponential development of minimally invasive techniques, such as robotic-assisted devices, raises the question of how to assess robotic surgery skills. Early development of virtual simulators has provided efficient tools for laparoscopic skills certification based on objective scoring, high availability, and lower cost. However, similar evaluation is lacking for robotic training. The purpose of this study was to assess several criteria, such as reliability, face, content, construct, and concurrent validity of a new virtual robotic surgery simulator. This prospective study was conducted from December 2009 to April 2010 using three simulators dV-Trainers(®) (MIMIC Technologies(®)) and one Da Vinci S(®) (Intuitive Surgical(®)). Seventy-five subjects, divided into five groups according to their initial surgical training, were evaluated based on five representative exercises of robotic specific skills: 3D perception, clutching, visual force feedback, EndoWrist(®) manipulation, and camera control. Analysis was extracted from (1) questionnaires (realism and interest), (2) automatically generated data from simulators, and (3) subjective scoring by two experts of depersonalized videos of similar exercises with robot. Face and content validity were generally considered high (77 %). Five levels of ability were clearly identified by the simulator (ANOVA; p = 0.0024). There was a strong correlation between automatic data from dV-Trainer and subjective evaluation with robot (r = 0.822). Reliability of scoring was high (r = 0.851). The most relevant criteria were time and economy of motion. The most relevant exercises were Pick and Place and Ring and Rail. The dV-Trainer(®) simulator proves to be a valid tool to assess basic skills of robotic surgery.
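
    The concurrent-validity result above (automatic simulator metrics versus blinded expert ratings of performance with the robot) amounts to correlating two score vectors over the same participants. A minimal illustration in Python follows; the score values are hypothetical, and Pearson's r is used here as the conventional reading of the "r" reported above.

        # Minimal illustration (hypothetical scores) of correlating automatic simulator
        # metrics with blinded expert ratings for the same participants.
        from scipy.stats import pearsonr

        simulator_scores = [52, 61, 58, 75, 80, 88, 90, 67, 72, 95]  # composite simulator scores
        expert_ratings   = [11, 14, 13, 19, 21, 24, 25, 16, 18, 27]  # expert video-review ratings
        r, p_value = pearsonr(simulator_scores, expert_ratings)
        print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")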

  9. Ambient Assisted Living spaces validation by services and devices simulation.

    PubMed

    Fernández-Llatas, Carlos; Mocholí, Juan Bautista; Sala, Pilar; Naranjo, Juan Carlos; Pileggi, Salvatore F; Guillén, Sergio; Traver, Vicente

    2011-01-01

    The design of Ambient Assisted Living (AAL) products is a very demanding challenge. The creation of AAL products is a complex, iterative process that must satisfy exhaustive accessibility and usability requirements. In this process, the early detection of errors is crucial to creating cost-effective systems. Computer-assisted tools can provide vital help to usability designers in avoiding design errors. Specifically, computer simulation of products in AAL environments can be used in all design phases to support validation. In this paper, a computer simulation tool for supporting usability designers in the creation of innovative AAL products is presented. This application will benefit their work by saving time and improving the functionality of the final system.

  10. Validation of the ROMI-RIP rough mill simulator

    Treesearch

    Edward R. Thomas; Urs Buehlmann

    2002-01-01

    The USDA Forest Service's ROMI-RIP rough mill rip-first simulation program is a popular tool for analyzing rough mill conditions, determining more efficient rough mill practices, and finding optimal lumber board cut-up patterns. However, until now, the results generated by ROMI-RIP have not been rigorously compared to those of an actual rough mill. Validating the...

  11. Face and construct validity of a computer-based virtual reality simulator for ERCP.

    PubMed

    Bittner, James G; Mellinger, John D; Imam, Toufic; Schade, Robert R; Macfadyen, Bruce V

    2010-02-01

    Currently, little evidence supports computer-based simulation for ERCP training. To determine face and construct validity of a computer-based simulator for ERCP and assess its perceived utility as a training tool. Novice and expert endoscopists completed 2 simulated ERCP cases by using the GI Mentor II. Virtual Education and Surgical Simulation Laboratory, Medical College of Georgia. Outcomes included times to complete the procedure, reach the papilla, and use fluoroscopy; attempts to cannulate the papilla, pancreatic duct, and common bile duct; and number of contrast injections and complications. Subjects assessed simulator graphics, procedural accuracy, difficulty, haptics, overall realism, and training potential. Only when performance data from cases A and B were combined did the GI Mentor II differentiate novices and experts based on times to complete the procedure, reach the papilla, and use fluoroscopy. Across skill levels, overall opinions were similar regarding graphics (moderately realistic), accuracy (similar to clinical ERCP), difficulty (similar to clinical ERCP), overall realism (moderately realistic), and haptics. Most participants (92%) claimed that the simulator has definite training potential or should be required for training. Small sample size, single institution. The GI Mentor II demonstrated construct validity for ERCP based on select metrics. Most subjects thought that the simulated graphics, procedural accuracy, and overall realism exhibit face validity. Subjects deemed it a useful training tool. Study repetition involving more participants and cases may help confirm results and establish the simulator's ability to differentiate skill levels based on ERCP-specific metrics.

  12. Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology

    NASA Astrophysics Data System (ADS)

    García-Barberena, Javier; Ubani, Nora

    2016-05-01

    This work presents the validation process carried out for a simulation tool specifically designed for the energy yield assessment of concentrating solar power (CSP) plants based on parabolic trough (PT) technology. The validation has been carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected from one year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key parameters of the model were properly fixed and the simulation of a whole year was performed. The results obtained for a complete year of simulation showed very good agreement for the gross and net total electric production, with biases of 1.47% and 2.02%, respectively. The results proved that the simulation software describes the real operation of the power plant with great accuracy and correctly reproduces its transient behavior.
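
    As a point of reference for the bias figures quoted above, the sketch below (not the validated tool itself) shows how a percent bias between simulated and measured production totals is typically computed; the daily values are hypothetical.

        # Percent bias of simulated vs. measured energy production (hypothetical data).
        import numpy as np

        def percent_bias(simulated, measured):
            simulated, measured = np.asarray(simulated, float), np.asarray(measured, float)
            return 100.0 * (simulated.sum() - measured.sum()) / measured.sum()

        # Hypothetical daily gross production (MWh) for a handful of days:
        measured_gross  = [310.0, 295.5, 402.1, 388.7, 120.3]
        simulated_gross = [315.2, 301.0, 398.4, 392.5, 124.1]
        print(f"gross-production bias = {percent_bias(simulated_gross, measured_gross):+.2f} %")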

  13. Ensuring relational competency in critical care: Importance of nursing students' communication skills.

    PubMed

    Sánchez Expósito, Judit; Leal Costa, César; Díaz Agea, José Luis; Carrillo Izquierdo, María Dolores; Jiménez Rodríguez, Diana

    2018-02-01

    The aim of this study was to analyse the communication skills of students interacting with simulated critically ill patients using a new assessment tool, to study the relationships between communication skills, teamwork and clinical skills, and to analyse the psychometric properties of the tool. A cross-sectional study was conducted to assess the communication skills of 52 students with critically ill patients through the use of a new measurement tool to score video recordings of simulated clinical scenarios. The 52 students obtained low scores on their skills in communicating with patients. The reliability of the measuring instrument showed good inter-observer agreement (ICC between 0.71 and 0.90), and the validity analysis yielded a positive correlation (p < 0.01). The results provide evidence that nursing students lack skills when communicating with critically ill patients in simulated scenarios. The measuring instrument is therefore deemed valid and reliable for assessing nursing students through clinical simulation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Current Status of Simulation-based Training Tools in Orthopedic Surgery: A Systematic Review.

    PubMed

    Morgan, Michael; Aydin, Abdullatif; Salih, Alan; Robati, Shibby; Ahmed, Kamran

    To conduct a systematic review of orthopedic training and assessment simulators with reference to their level of evidence (LoE) and level of recommendation. Medline and EMBASE library databases were searched for English language articles published between 1980 and 2016, describing orthopedic simulators or validation studies of these models. All studies were assessed for LoE, and each model was subsequently awarded a level of recommendation using a modified Oxford Centre for Evidence-Based Medicine classification, adapted for education. A total of 76 articles describing orthopedic simulators met the inclusion criteria, 47 of which described at least 1 validation study. The most commonly identified models (n = 34) and validation studies (n = 26) were for knee arthroscopy. Construct validation was the most frequent validation study attempted by authors. In all, 62% (47 of 76) of the simulator studies described arthroscopy simulators, which also contained validation studies with the highest LoE. Orthopedic simulators are increasingly being subjected to validation studies, although the LoE of such studies generally remain low. There remains a lack of focus on nontechnical skills and on cost analyses of orthopedic simulators. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  15. Creation and Delphi-method refinement of pediatric disaster triage simulations.

    PubMed

    Cicero, Mark X; Brown, Linda; Overly, Frank; Yarzebski, Jorge; Meckler, Garth; Fuchs, Susan; Tomassoni, Anthony; Aghababian, Richard; Chung, Sarita; Garrett, Andrew; Fagbuyi, Daniel; Adelgais, Kathleen; Goldman, Ran; Parker, James; Auerbach, Marc; Riera, Antonio; Cone, David; Baum, Carl R

    2014-01-01

    There is a need for rigorously designed pediatric disaster triage (PDT) training simulations for paramedics. First, we sought to design three multiple patient incidents for EMS provider training simulations. Our second objective was to determine the appropriate interventions and triage level for each victim in each of the simulations and develop evaluation instruments for each simulation. The final objective was to ensure that each simulation and evaluation tool was free of bias toward any specific PDT strategy. We created mixed-methods disaster simulation scenarios with pediatric victims: a school shooting, a school bus crash, and a multiple-victim house fire. Standardized patients, high-fidelity manikins, and low-fidelity manikins were used to portray the victims. Each simulation had similar acuity of injuries and 10 victims. Examples include children with special health-care needs, gunshot wounds, and smoke inhalation. Checklist-based evaluation tools and behaviorally anchored global assessments of function were created for each simulation. Eight physicians and paramedics from areas with differing PDT strategies were recruited as Subject Matter Experts (SMEs) for a modified Delphi iterative critique of the simulations and evaluation tools. The modified Delphi was managed with an online survey tool. The SMEs provided an expected triage category for each patient. The target for modified Delphi consensus was ≥85%. Using Likert scales and free text, the SMEs assessed the validity of the simulations, including instances of bias toward a specific PDT strategy, clarity of learning objectives, and the correlation of the evaluation tools to the learning objectives and scenarios. After two rounds of the modified Delphi, consensus for expected triage level was >85% for 28 of 30 victims, with the remaining two achieving >85% consensus after three Delphi iterations. To achieve consensus, we amended 11 instances of bias toward a specific PDT strategy and corrected 10 instances of noncorrelation between evaluations and simulation. The modified Delphi process, used to derive novel PDT simulation and evaluation tools, yielded a high degree of consensus among the SMEs, and eliminated biases toward specific PDT strategies in the evaluations. The simulations and evaluation tools may now be tested for reliability and validity as part of a prehospital PDT curriculum.
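
    The ≥85% consensus target described above reduces, computationally, to tallying the share of subject-matter experts who agree on the modal triage category for each simulated victim. A small illustration with hypothetical votes follows.

        # Illustration (hypothetical votes) of tallying modified-Delphi consensus: the share
        # of SMEs agreeing on the modal triage category is compared against the 85% target.
        from collections import Counter

        sme_triage = {  # victim id -> triage category chosen by each of 8 SMEs
            "victim_01": ["red", "red", "red", "red", "red", "red", "red", "yellow"],
            "victim_02": ["yellow", "yellow", "green", "yellow", "yellow", "yellow", "green", "yellow"],
        }

        for victim, votes in sme_triage.items():
            category, count = Counter(votes).most_common(1)[0]
            agreement = count / len(votes)
            status = "consensus" if agreement >= 0.85 else "revise and re-survey"
            print(f"{victim}: {category} at {agreement:.0%} -> {status}")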

  16. A unified approach to validation, reliability, and education study design for surgical technical skills training.

    PubMed

    Sweet, Robert M; Hananel, David; Lawrenz, Frances

    2010-02-01

    To present modern educational psychology theory and apply these concepts to validity and reliability of surgical skills training and assessment. In a series of cross-disciplinary meetings, we applied a unified approach of behavioral science principles and theory to medical technical skills education given the recent advances in the theories in the field of behavioral psychology and statistics. While validation of the individual simulation tools is important, it is only one piece of a multimodal curriculum that in and of itself deserves examination and study. We propose concurrent validation throughout the design of simulation-based curriculum rather than once it is complete. We embrace the concept that validity and curriculum development are interdependent, ongoing processes that are never truly complete. Individual predictive, construct, content, and face validity aspects should not be considered separately but as interdependent and complementary toward an end application. Such an approach could help guide our acceptance and appropriate application of these exciting new training and assessment tools for technical skills training in medicine.

  17. A Monte Carlo analysis of breast screening randomized trials.

    PubMed

    Zamora, Luis I; Forastero, Cristina; Guirado, Damián; Lallena, Antonio M

    2016-12-01

    To analyze breast screening randomized trials with a Monte Carlo simulation tool. A simulation tool previously developed to simulate breast screening programmes was adapted for that purpose. The history of women participating in the trials was simulated, including a model for survival after local treatment of invasive cancers. Distributions of the time gained by screening detection relative to symptomatic detection, together with the overall screening sensitivity, were used as inputs. Several randomized controlled trials were simulated. Except for the age range of the women involved, all simulations used the same population characteristics, which made it possible to analyze the trials' external validity. The relative risks obtained were compared to those quoted for the trials, whose internal validity was addressed by further investigating the reasons for the disagreements observed. The Monte Carlo simulations produce results that are in good agreement with most of the randomized trials analyzed, indicating their methodological quality and external validity. A reduction in breast cancer mortality of around 20% appears to be a reasonable value according to the results of the methodologically correct trials. Discrepancies observed with the Canada I and II trials may be attributed to low mammography quality and some methodological problems, and the Kopparberg trial appears to show low methodological quality. Monte Carlo simulations are a powerful tool for investigating breast screening randomized controlled trials, helping to establish which trials' results are reliable enough to be extrapolated to other populations, and to design trial strategies and, eventually, adapt them during their development. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
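
    To make the general approach concrete, the toy sketch below simulates a screened and an unscreened cohort and reports a relative risk of breast-cancer death. It is not the authors' model: the cohort size, cancer incidence, screening sensitivity and survival-benefit parameters are arbitrary placeholders chosen only to show how such a simulation is wired together.

        # Toy cohort simulation (placeholder parameters, not the authors' model): screen-detected
        # cancers receive a survival benefit, and the mortality ratio between arms is reported.
        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_arm(n_women, screened, p_cancer=0.04, base_death=0.30,
                         sensitivity=0.85, benefit=0.25):
            """All parameters are illustrative placeholders, not values from the paper."""
            cancers = rng.random(n_women) < p_cancer
            deaths = 0
            for has_cancer in cancers:
                if not has_cancer:
                    continue
                detected_early = screened and (rng.random() < sensitivity)
                p_death = base_death * (1.0 - benefit) if detected_early else base_death
                deaths += rng.random() < p_death
            return deaths

        n = 50_000
        deaths_screen, deaths_control = simulate_arm(n, True), simulate_arm(n, False)
        print(f"relative risk = {deaths_screen / deaths_control:.2f}")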

  18. A Student Assessment Tool for Standardized Patient Simulations (SAT-SPS): Psychometric analysis.

    PubMed

    Castro-Yuste, Cristina; García-Cabanillas, María José; Rodríguez-Cornejo, María Jesús; Carnicer-Fuentes, Concepción; Paloma-Castro, Olga; Moreno-Corral, Luis Javier

    2018-05-01

    The evaluation of the level of clinical competence acquired by the student is a complex process that must meet various requirements to ensure its quality. The psychometric analysis of the data collected by the assessment tools used is a fundamental aspect to guarantee the student's competence level. To conduct a psychometric analysis of an instrument which assesses clinical competence in nursing students at simulation stations with standardized patients in OSCE-format tests. The construct of clinical competence was operationalized as a set of observable and measurable behaviors, measured by the newly-created Student Assessment Tool for Standardized Patient Simulations (SAT-SPS), which was comprised of 27 items. The categories assigned to the items were 'incorrect or not performed' (0), 'acceptable' (1), and 'correct' (2). 499 nursing students. Data were collected by two independent observers during the assessment of the students' performance at a four-station OSCE with standardized patients. Descriptive statistics were used to summarize the variables. The difficulty levels and floor and ceiling effects were determined for each item. Reliability was analyzed using internal consistency and inter-observer reliability. The validity analysis was performed considering face validity, content and construct validity (through exploratory factor analysis), and criterion validity. Internal reliability and inter-observer reliability were higher than 0.80. The construct validity analysis suggested a three-factor model accounting for 37.1% of the variance. These three factors were named 'Nursing process', 'Communication skills', and 'Safe practice'. A significant correlation was found between the scores obtained and the students' grades in general, as well as with the grades obtained in subjects with clinical content. The assessment tool has proven to be sufficiently reliable and valid for the assessment of the clinical competence of nursing students using standardized patients. This tool has three main components: the nursing process, communication skills, and safety management. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. A numerical tool for reproducing driver behaviour: experiments and predictive simulations.

    PubMed

    Casucci, M; Marchitto, M; Cacciabue, P C

    2010-03-01

    This paper presents the simulation tool called SDDRIVE (Simple Simulation of Driver performance), which is the numerical computerised implementation of the theoretical architecture describing Driver-Vehicle-Environment (DVE) interactions, contained in Cacciabue and Carsten [Cacciabue, P.C., Carsten, O. A simple model of driver behaviour to sustain design and safety assessment of automated systems in automotive environments, 2010]. Following a brief description of the basic algorithms that simulate the performance of drivers, the paper presents and discusses a set of experiments carried out in a Virtual Reality full scale simulator for validating the simulation. Then the predictive potentiality of the tool is shown by discussing two case studies of DVE interactions, performed in the presence of different driver attitudes in similar traffic conditions.

  20. Examining validity evidence for a simulation-based assessment tool for basic robotic surgical skills.

    PubMed

    Havemann, Maria Cecilie; Dalsgaard, Torur; Sørensen, Jette Led; Røssaak, Kristin; Brisling, Steffen; Mosgaard, Berit Jul; Høgdall, Claus; Bjerrum, Flemming

    2018-05-14

    Increasing focus on patient safety makes it important to ensure surgical competency among surgeons before operating on patients. The objective was to gather validity evidence for a virtual-reality simulator test for robotic surgical skills and evaluate its potential as a training tool. Surgeons with varying experience in robotic surgery were recruited: novices (zero procedures), intermediates (1-50), experienced (> 50). Five experienced surgeons rated five exercises on the da Vinci Skills Simulator. Participants were tested using the five exercises. Participants were invited back 3 times and completed a total of 10 attempts per exercise. The outcome was the average simulator performance score for the 5 exercises. 32 participants from 5 surgical specialties were included. 38 participants completed all 4 sessions. A moderate correlation between the average total score and robotic experience was identified for the first attempt (Spearman r = 0.58; p = 0.0004). A difference in average total score was observed between novices and intermediates [median score 61% (IQR 52-66) vs. 83% (IQR 75-91), adjusted p < 0.0001], as well as novices and experienced [median score 61% (IQR 52-66) vs. 80 (IQR 69-85), adjusted p = 0.002]. All three groups improved their performance between the 1st and 10th attempts (p < 0.00). This study describes validity evidence for a virtual-reality simulator for basic robotic surgical skills, which can be used for assessment of basic competency and as a training tool. However, more validity evidence is needed before it can be used for certification or high-stakes assessment.

  1. Development of a simulation evaluation tool for assessing nursing students' clinical judgment in caring for children with dehydration.

    PubMed

    Kim, Shin-Jeong; Kim, Sunghee; Kang, Kyung-Ah; Oh, Jina; Lee, Myung-Nam

    2016-02-01

    The lack of reliable and valid tools to evaluate learning outcomes during simulations has limited the adoption and progress of simulation-based nursing education. This study had two aims: (a) to develop a simulation evaluation tool (SET(c-dehydration)) to assess students' clinical judgment in caring for children with dehydration based on the Lasater Clinical Judgment Rubric (LCJR) and (b) to examine its reliability and validity. Undergraduate nursing students from two nursing schools in South Korea participated in this study from March 3 through June 10, 2014. The SET(c-dehydration) was developed, and 120 nursing students' clinical judgment was evaluated. Descriptive statistics, Cronbach's alpha, Cohen's kappa coefficient, and confirmatory factor analysis (CFA) were used to analyze the data. A 41-item version of the SET(c-dehydration) with three subscales was developed. Cohen's kappa (measuring inter-observer reliability) of the sessions ranged from .73 to .95, and Cronbach's alpha was .87. The mean total rating of the SET(c-dehydration) by the instructors was 1.92 (±.25), and the mean scores for the four LCJR dimensions of clinical judgment were as follows: noticing (1.74±.27), interpreting (1.85±.43), responding (2.17±.32), and reflecting (1.79±.35). CFA, which was performed to test construct validity, showed that the four dimensions of the SET(c-dehydration) formed an appropriate framework. The SET(c-dehydration) provides a means to evaluate clinical judgment in simulation education. Its reliability and validity should be examined further. Copyright © 2015 Elsevier Ltd. All rights reserved.
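
    The inter-observer figures quoted above are Cohen's kappa values. The sketch below shows the standard computation for two raters scoring the same students; the ratings and the rating scale are hypothetical, included purely for illustration.

        # Cohen's kappa for two raters scoring the same students (hypothetical ratings).
        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            n = len(rater_a)
            p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n               # observed agreement
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            categories = set(rater_a) | set(rater_b)
            p_e = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2         # chance agreement
            return (p_o - p_e) / (1 - p_e)

        rater_a = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]
        rater_b = [2, 3, 2, 2, 3, 2, 2, 3, 1, 2]
        print(round(cohens_kappa(rater_a, rater_b), 2))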

  2. Design and control of compliant tensegrity robots through simulation and hardware validation

    PubMed Central

    Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas

    2014-01-01

    To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity (‘tensile–integrity’) structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. PMID:24990292

  3. OC5 Project Phase Ib: Validation of hydrodynamic loading on a fixed, flexible cylinder for offshore wind applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.

    This paper summarizes the findings from Phase Ib of the Offshore Code Comparison, Collaboration, Continued with Correlation (OC5) project. OC5 is a project run under the International Energy Agency (IEA) Wind Research Task 30, and is focused on validating the tools used for modelling offshore wind systems through the comparison of simulated responses of select offshore wind systems (and components) to physical test data. For Phase Ib of the project, simulated hydrodynamic loads on a flexible cylinder fixed to a sloped bed were validated against test measurements made in the shallow water basin at the Danish Hydraulic Institute (DHI) with support from the Technical University of Denmark (DTU). The first phase of OC5 examined two simple cylinder structures (Phase Ia and Ib) to focus on validation of hydrodynamic models used in the various tools before moving on to more complex offshore wind systems and the associated coupled physics. As a result, verification and validation activities such as these lead to improvement of offshore wind modelling tools, which will enable the development of more innovative and cost-effective offshore wind designs.

  4. OC5 Project Phase Ib: Validation of hydrodynamic loading on a fixed, flexible cylinder for offshore wind applications

    DOE PAGES

    Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.; ...

    2016-10-13

    This paper summarizes the findings from Phase Ib of the Offshore Code Comparison, Collaboration, Continued with Correlation (OC5) project. OC5 is a project run under the International Energy Agency (IEA) Wind Research Task 30, and is focused on validating the tools used for modelling offshore wind systems through the comparison of simulated responses of select offshore wind systems (and components) to physical test data. For Phase Ib of the project, simulated hydrodynamic loads on a flexible cylinder fixed to a sloped bed were validated against test measurements made in the shallow water basin at the Danish Hydraulic Institute (DHI) with support from the Technical University of Denmark (DTU). The first phase of OC5 examined two simple cylinder structures (Phase Ia and Ib) to focus on validation of hydrodynamic models used in the various tools before moving on to more complex offshore wind systems and the associated coupled physics. As a result, verification and validation activities such as these lead to improvement of offshore wind modelling tools, which will enable the development of more innovative and cost-effective offshore wind designs.

  5. The impact of simulation education on self-efficacy towards teaching for nurse educators.

    PubMed

    Garner, S L; Killingsworth, E; Bradshaw, M; Raj, L; Johnson, S R; Abijah, S P; Parimala, S; Victor, S

    2018-03-23

    The objective of this study was to assess the impact of a simulation workshop on self-efficacy towards teaching for nurse educators in India. Additionally, we sought to revise and validate a tool to measure self-efficacy in teaching for use with a global audience. Simulation is an evidence-based teaching and learning method and is increasingly used in nursing education globally. As new technology and teaching methods such as simulation continue to evolve, it is important for new as well as experienced nurse educators globally to have confidence in their teaching skills and abilities. The study included (1) instrument revision, and measures of reliability and validation, (2) an 8-h faculty development workshop intervention on simulation, (3) pre- and post-survey of self-efficacy among nurse educators, and (4) investigation of relationship between faculty socio-demographics and degree of self-efficacy. The modified tool showed internal consistency (r = 0.98) and was validated by international faculty experts. There were significant improvements in total self-efficacy (P < 0.001) and subscale scores among nurse educators after the simulation workshop intervention when compared to pre-survey results. No significant relationships were found between socio-demographic variables and degree of self-efficacy. Strong self-efficacy in teaching among nurse educators is crucial for effective learning to occur. Results indicated the simulation workshop was effective in significantly improving self-efficacy towards teaching for nurse educators using an internationally validated tool. The Minister of Health in India recently called for improvements in nursing education. Introducing nursing education on simulation as a teaching method in India and globally to improve self-efficacy among teachers is an example of a strategy towards meeting this call. © 2018 The Authors International Nursing Review published by John Wiley & Sons Ltd on behalf of International Council of Nurses.

  6. CLVTOPS Liftoff and Separation Analysis Validation Using Ares I-X Flight Data

    NASA Technical Reports Server (NTRS)

    Burger, Ben; Schwarz, Kristina; Kim, Young

    2011-01-01

    CLVTOPS is a multi-body time domain flight dynamics simulation tool developed by NASA's Marshall Space Flight Center (MSFC) for space launch vehicles and is based on the TREETOPS simulation tool. CLVTOPS is currently used to simulate the flight dynamics and separation/jettison events of the Ares I launch vehicle including liftoff and staging separation. In order for CLVTOPS to become an accredited tool, validation against other independent simulations and real world data is needed. The launch of the Ares I-X vehicle (first Ares I test flight) on October 28, 2009 presented a great opportunity to provide validation evidence for CLVTOPS. In order to simulate the Ares I-X flight, specific models were implemented into CLVTOPS. These models include the flight day environment, reconstructed thrust, reconstructed mass properties, aerodynamics, and the Ares I-X guidance, navigation and control models. The resulting simulation output was compared to Ares I-X flight data. During the liftoff region of flight, trajectory states from the simulation and flight data were compared. The CLVTOPS results were used to make a semi-transparent animation of the vehicle that was overlaid directly on top of the flight video to provide a qualitative measure of the agreement between the simulation and the actual flight. During ascent, the trajectory states of the vehicle were compared with flight data. For the stage separation event, the trajectory states of the two stages were compared to available flight data. Since no quantitative rotational state data for the upper stage was available, the CLVTOPS results were used to make an animation of the two stages to show a side-by-side comparison with flight video. All of the comparisons between CLVTOPS and the flight data show good agreement. This paper documents comparisons between CLVTOPS and Ares I-X flight data which serve as validation evidence for the eventual accreditation of CLVTOPS.
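
    Quantitative comparisons of the kind described above usually reduce to interpolating a simulated state history onto the flight-data time stamps and reporting a per-channel error statistic. The sketch below is a generic illustration of that step, not CLVTOPS code; the altitude traces are synthetic.

        # Generic simulation-vs-flight comparison: interpolate the simulated channel onto
        # the flight-data time stamps, then report the RMS difference for that channel.
        import numpy as np

        def rms_difference(t_sim, y_sim, t_flight, y_flight):
            y_sim_on_flight = np.interp(t_flight, t_sim, y_sim)
            return float(np.sqrt(np.mean((y_sim_on_flight - y_flight) ** 2)))

        # Synthetic altitude traces (s, m) for a liftoff-like region:
        t_sim = np.linspace(0, 10, 101)
        alt_sim = 0.5 * 9.0 * t_sim ** 2
        t_flight = np.linspace(0, 10, 51)
        alt_flight = 0.5 * 9.1 * t_flight ** 2 + np.random.default_rng(1).normal(0, 5, t_flight.size)
        print(f"altitude RMS difference = {rms_difference(t_sim, alt_sim, t_flight, alt_flight):.1f} m")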

  7. Data Association Algorithms for Tracking Satellites

    DTIC Science & Technology

    2013-03-27

    validation of the new tools. The description provided here includes the mathematical background and a description of the models implemented, as well as a ... simulation development. This work includes the addition of higher-fidelity models in CU-TurboProp and validation of the new tools. The description ... ode45(), used in Ananke, and (3) provide the necessary inputs to the bidirectional reflectance distribution function (BRDF) model provided by Pacific

  8. New Automotive Air Conditioning System Simulation Tool Developed in MATLAB/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiss, T.; Chaney, L.; Meyer, J.

    Further improvements in vehicle fuel efficiency require accurate evaluation of the vehicle's transient total power requirement. When operated, the air conditioning (A/C) system is the largest auxiliary load on a vehicle; therefore, accurate evaluation of the load it places on the vehicle's engine and/or energy storage system is especially important. Vehicle simulation software, such as 'Autonomie,' has been used by OEMs to evaluate vehicles' energy performance. A transient A/C simulation tool incorporated into vehicle simulation models would also provide a tool for developing more efficient A/C systems through a thorough consideration of the transient A/C system performance. The dynamic system simulation software MATLAB/Simulink was used to develop new and more efficient vehicle energy system controls. The various modeling methods used for the new simulation tool are described in detail. Comparison with measured data is provided to demonstrate the validity of the model.
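
    As a rough illustration of the kind of transient energy balance such a tool integrates, the toy model below steps a single cabin-air temperature state forward in time and accumulates the compressor energy drawn from the engine. It is not the NREL model; every parameter is an assumed placeholder.

        # Toy transient cabin cooling model: ambient heat gain vs. thermostat-controlled A/C
        # cooling, with the compressor energy logged as the auxiliary load on the engine.
        def simulate_cabin(minutes=20, dt=1.0, t_cabin=45.0, t_ambient=38.0, t_set=22.0):
            ua = 0.10          # kW/K, cabin-to-ambient conductance (assumed)
            cap = 50.0         # kJ/K, thermal capacitance of cabin air and trim (assumed)
            q_cool_max = 4.0   # kW, evaporator cooling capacity (assumed)
            cop = 2.5          # coefficient of performance (assumed)
            compressor_energy = 0.0
            for _ in range(int(minutes * 60 / dt)):
                q_gain = ua * (t_ambient - t_cabin)                 # kW into the cabin
                q_cool = q_cool_max if t_cabin > t_set else 0.0     # simple thermostat
                t_cabin += (q_gain - q_cool) * dt / cap             # K change this time step
                compressor_energy += (q_cool / cop) * dt            # kJ drawn from the engine
            return t_cabin, compressor_energy / 3600.0              # final temp (C), energy (kWh)

        temp, kwh = simulate_cabin()
        print(f"cabin temperature after pull-down: {temp:.1f} C, compressor energy: {kwh:.2f} kWh")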

  9. OECD-NEA Expert Group on Multi-Physics Experimental Data, Benchmarks and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentine, Timothy; Rohatgi, Upendra S.

    High-fidelity, multi-physics modeling and simulation (M&S) tools are being developed and utilized for a variety of applications in nuclear science and technology and show great promise in their abilities to reproduce observed phenomena for many applications. Even with the increasing fidelity and sophistication of coupled multi-physics M&S tools, the underpinning models and data still need to be validated against experiments that may require a more complex array of validation data because of the great breadth of the time, energy and spatial domains of the physical phenomena that are being simulated. The Expert Group on Multi-Physics Experimental Data, Benchmarks and Validation (MPEBV) of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) was formed to address the challenges with the validation of such tools. The work of the MPEBV expert group is shared among three task forces to fulfill its mandate and specific exercises are being developed to demonstrate validation principles for common industrial challenges. This paper describes the overall mission of the group, the specific objectives of the task forces, the linkages among the task forces, and the development of a validation exercise that focuses on a specific reactor challenge problem.

  10. The ATLAS Simulation Infrastructure

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2010-09-25

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.

  11. Validation Of The Airspace Concept Evaluation System Using Real World Data

    NASA Technical Reports Server (NTRS)

    Zelinski, Shannon

    2005-01-01

    This paper discusses the process of performing a validation of the Airspace Concept Evaluation System (ACES) using real world historical flight operational data. ACES inputs are generated from select real world data and processed to create a realistic reproduction of a single day of operations within the National Airspace System (NAS). ACES outputs are then compared to real world operational metrics and delay statistics for the reproduced day. Preliminary results indicate that ACES produces delays and airport operational metrics similar to the real world with minor variations of delay by phase of flight. ACES is a nation-wide fast-time simulation tool developed at NASA Ames Research Center. ACES models and simulates the NAS using interacting agents representing center control, terminal flow management, airports, individual flights, and other NAS elements. These agents pass messages between one another similar to real world communications. This distributed agent-based system is designed to emulate the highly unpredictable nature of the NAS, making it a suitable tool to evaluate current and envisioned airspace concepts. To ensure that ACES produces the most realistic results, the system must be validated. There is no way to validate future concept scenarios using real world historical data, but current-day scenario validations increase confidence in the validity of future scenario results. Each operational day has unique weather and traffic demand schedules. The more a simulation utilizes the unique characteristics of a specific day, the more realistic the results should be. ACES is able to simulate the full-scale demand traffic necessary to perform a validation using real world data. Through direct comparison with the real world, models may continue to be improved and unusual trends and biases may be filtered out of the system or used to normalize the results of future concept simulations.

  12. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  13. PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation

    NASA Astrophysics Data System (ADS)

    España, S; Herraiz, J L; Vicente, E; Vaquero, J J; Desco, M; Udias, J M

    2009-03-01

    Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.
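
    The list-mode and sinogram outputs mentioned above lend themselves to lightweight post-processing; the sketch below only illustrates the generic idea of histogramming coincidences into a sinogram and assumes a hypothetical two-column text file (LOR angle in degrees, radial offset in mm), not the actual PeneloPET output format.

      # Illustrative sinogram histogramming of hypothetical list-mode data
      # (columns: angle_deg, radial_mm). Not the real PeneloPET file layout.
      import numpy as np

      angle_deg, radial_mm = np.loadtxt("listmode.txt", unpack=True)

      n_angles, n_bins, fov_mm = 180, 128, 300.0
      sinogram, _, _ = np.histogram2d(
          angle_deg, radial_mm,
          bins=[n_angles, n_bins],
          range=[[0.0, 180.0], [-fov_mm / 2, fov_mm / 2]],
      )
      np.save("sinogram.npy", sinogram)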

  14. Validation of Tendril TrueHome Using Software-to-Software Comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maguire, Jeffrey B; Horowitz, Scott G; Moore, Nathan

    This study performed a comparative evaluation of EnergyPlus version 8.6 and Tendril TrueHome, two physics-based home energy simulation models, to identify differences in energy consumption predictions between the two programs and to resolve discrepancies between them. EnergyPlus is considered a benchmark, best-in-class software tool for building energy simulation. This exercise sought to improve both software tools through additional evaluation and scrutiny.

  15. Assessing Procedural Competence: Validity Considerations.

    PubMed

    Pugh, Debra M; Wood, Timothy J; Boulet, John R

    2015-10-01

    Simulation-based medical education (SBME) offers opportunities for trainees to learn how to perform procedures and to be assessed in a safe environment. However, SBME research studies often lack robust evidence to support the validity of the interpretation of the results obtained from tools used to assess trainees' skills. The purpose of this paper is to describe how a validity framework can be applied when reporting and interpreting the results of a simulation-based assessment of skills related to performing procedures. The authors discuss various sources of validity evidence as they relate to SBME. A case study is presented.

  16. Validation of Mission Plans Through Simulation

    NASA Astrophysics Data System (ADS)

    St-Pierre, J.; Melanson, P.; Brunet, C.; Crabtree, D.

    2002-01-01

    The purpose of a spacecraft mission planning system is to automatically generate safe and optimized mission plans for a single spacecraft, or for several functioning in unison. The system verifies user input syntax, conformance to commanding constraints, absence of duty cycle violations, timing conflicts, state conflicts, etc. Present-day constraint-based systems with state-based predictive models use verification rules derived from expert knowledge. A familiar solution found in Mission Operations Centers is to complement the planning system with a high-fidelity spacecraft simulator. Often a dedicated workstation, the simulator is frequently used for operator training and procedure validation, and may be interfaced to actual control stations with command and telemetry links. While there are distinct advantages to having a planning system offer realistic operator training using the actual flight control console, physical verification of data transfer across layers, and procedure validation, experience has revealed some drawbacks and inefficiencies in ground segment operations. With these considerations, two simulation-based mission plan validation projects are under way at the Canadian Space Agency (CSA): RVMP and ViSION. The tools proposed in these projects will automatically run scenarios and provide execution reports to operations planning personnel, prior to actual command upload. This can provide an important safeguard against system or human errors that can only be detected with high-fidelity, interdependent spacecraft models running concurrently. The core element common to these projects is a spacecraft simulator, built with off-the-shelf components such as CAE's Real-Time Object-Based Simulation Environment (ROSE) technology, MathWorks' MATLAB/Simulink, and Analytical Graphics' Satellite Tool Kit (STK). To complement these tools, additional components were developed, such as an emulated Spacecraft Test and Operations Language (STOL) interpreter and CCSDS TM/TC encoders and decoders. This paper discusses the use of simulation in the context of space mission planning, describes the projects under way, and proposes additional avenues of investigation and development.

  17. Validation of the updated ArthroS simulator: face and construct validity of a passive haptic virtual reality simulator with novel performance metrics.

    PubMed

    Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L

    2017-02-01

    To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.
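
    The group comparisons reported above can be illustrated with a short sketch; the data are made up, and the Kruskal-Wallis test is simply a common choice for comparing a metric across novice, intermediate and expert groups, not necessarily the analysis used in the study.

      # Hypothetical construct-validity check: does a simulator metric separate experience groups?
      from scipy.stats import kruskal

      time_taken_s = {                      # illustrative data, not study results
          "novice":       [310, 295, 340, 360, 305],
          "intermediate": [240, 250, 232, 260, 245],
          "expert":       [180, 175, 190, 205, 185],
      }

      stat, p = kruskal(*time_taken_s.values())
      print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")
      if p <= 0.05:
          print("Metric distinguishes experience groups (construct validity supported).")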

  18. Applied Virtual Reality Research and Applications at NASA/Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P.

    1995-01-01

    A Virtual Reality (VR) applications program has been under development at NASA/Marshall Space Flight Center (MSFC) since 1989. The objectives of the MSFC VR Applications Program are to develop, assess, validate, and utilize VR in hardware development, operations development and support, mission operations training and science training. Before this technology can be utilized with confidence in these applications, it must be validated for each particular class of application. That is, the precision and reliability with which it maps onto real settings and scenarios, representative of a class, must be calculated and assessed. The approach of the MSFC VR Applications Program is to develop and validate appropriate virtual environments and associated object kinematic and behavior attributes for specific classes of applications. These application-specific environments and associated simulations will be validated, where possible, through empirical comparisons with existing, accepted tools and methodologies. These validated VR analytical tools will then be available for use in the design and development of space systems and operations and in training and mission support systems. Specific validation studies for selected classes of applications have been completed or are currently underway. These include macro-ergonomic "control-room class" design analysis, Spacelab stowage reconfiguration training, a full-body micro-gravity functional reach simulator, and a gross anatomy teaching simulator. This paper describes the MSFC VR Applications Program and the validation studies.

  19. The Role of Simulation in Microsurgical Training.

    PubMed

    Evgeniou, Evgenios; Walker, Harriet; Gujral, Sameer

    Simulation has been established as an integral part of microsurgical training. The aim of this study was to assess and categorize the various simulation models in relation to the complexity of the microsurgical skill being taught and analyze the assessment methods commonly employed in microsurgical simulation training. Numerous courses have been established using simulation models. These models can be categorized, according to the level of complexity of the skill being taught, into basic, intermediate, and advanced. Microsurgical simulation training should be assessed using validated assessment methods. Assessment methods vary significantly from subjective expert opinions to self-assessment questionnaires and validated global rating scales. The appropriate assessment method should carefully be chosen based on the simulation modality. Simulation models should be validated, and a model with appropriate fidelity should be chosen according to the microsurgical skill being taught. Assessment should move from traditional simple subjective evaluations of trainee performance to validated tools. Future studies should assess the transferability of skills gained during simulation training to the real-life setting. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  20. Landsat-7 Simulation and Testing Environments

    NASA Technical Reports Server (NTRS)

    Holmes, E.; Ha, K.; Hawkins, K.; Lombardo, J.; Ram, M.; Sabelhaus, P.; Scott, S.; Phillips, R.

    1999-01-01

    A spacecraft Attitude Control and Determination Subsystem (ACDS) is heavily dependent upon simulation throughout its entire development, implementation and ground test cycle. Engineering simulation tools are typically developed to design and analyze control systems and validate the design, while software simulation tools are required to qualify the flight software. However, the need for simulation does not end here. Operating the ACDS of a spacecraft on the ground requires the simulation of spacecraft dynamics, disturbance modeling and celestial body motion. Sensor data must also be simulated and substituted for actual sensor data on the ground so that the spacecraft will respond by sending commands to the actuators as it will on orbit. And finally, the simulator is the primary training tool and test-bed for the Flight Operations Team. In this paper, the various ACDS simulations developed for or used by the Landsat 7 project are described. The paper includes a description of each tool, its unique attributes, and its role in the overall development and testing of the ACDS. Finally, a section is included which discusses how the coordinated use of these simulation tools can maximize the probability of uncovering software, hardware and operations errors during the ground test process.

  1. Simulation of Climate Change Impacts on Wheat-Fallow Cropping Systems

    USDA-ARS?s Scientific Manuscript database

    Agricultural system simulation models are predictive tools for assessing climate change impacts on crop production. In this study, RZWQM2 that contains the DSSAT 4.0-CERES model was evaluated for simulating climate change impacts on wheat growth. The model was calibrated and validated using data fro...

  2. Dynamic CFD Simulations of the Supersonic Inflatable Aerodynamic Decelerator (SIAD) Ballistic Range Tests

    NASA Technical Reports Server (NTRS)

    Brock, Joseph M; Stern, Eric

    2016-01-01

    Dynamic CFD simulations of the SIAD ballistic test model were performed using the US3D flow solver. The motivation for performing these simulations is to validate and verify the US3D flow solver as a viable computational tool for predicting dynamic coefficients.

  3. Design and control of compliant tensegrity robots through simulation and hardware validation.

    PubMed

    Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas

    2014-09-06

    To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity ('tensile-integrity') structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  4. The role of simulation in continuing medical education for acute care physicians: a systematic review.

    PubMed

    Khanduja, P Kristina; Bould, M Dylan; Naik, Viren N; Hladkowicz, Emily; Boet, Sylvain

    2015-01-01

    We systematically reviewed the effectiveness of simulation-based education targeting independently practicing qualified physicians in acute care specialties. We also describe how simulation is used for performance assessment in this population. Data sources included MEDLINE, Embase, the Cochrane Database of Systematic Reviews, the Cochrane CENTRAL Database of Controlled Trials, and the National Health Service Economic Evaluation Database. The last date of search was January 31, 2013. All original research describing simulation-based education for independently practicing physicians in anesthesiology, critical care, and emergency medicine was reviewed. Data analysis was performed in duplicate, with further review by a third author in cases of disagreement until consensus was reached. Data extraction was focused on effectiveness according to Kirkpatrick's model. For simulation-based performance assessment, tool characteristics and sources of validity evidence were also collated. Of 39 studies identified, 30 studies focused on the effectiveness of simulation-based education and nine studies evaluated the validity of simulation-based assessment. Thirteen studies (30%) targeted the lower levels of Kirkpatrick's hierarchy, with reliance on self-reporting. Simulation was unanimously described as a positive learning experience with perceived impact on clinical practice. Of the 17 remaining studies, 10 used a single group or "no intervention comparison group" design. The majority (n = 17; 44%) were able to demonstrate both immediate and sustained improvements in educational outcomes. Nine studies reported the psychometric properties of simulation-based performance assessment as their sole objective. These predominantly recruited independent practitioners as a convenience sample to establish whether the tool could discriminate between experienced and inexperienced operators, and concentrated on a single aspect of validity evidence. Simulation is perceived as a positive learning experience, with limited evidence to support improved learning. Future research should focus on the optimal modality and frequency of exposure, the quality of assessment tools, and the impact of simulation-based education beyond individuals toward improved patient care.

  5. Design of a CO2 Twin Rotary Compressor for a Heat Pump Water Heater

    NASA Astrophysics Data System (ADS)

    Ahn, Jong Min; Kim, Woo Young; Kim, Hyun Jin; Cho, Sung Oug; Seo, Jong Cheun

    2010-06-01

    For a CO2 heat pump water heater, a one-stage twin rotary compressor has been designed. As a design tool, a computer simulation program for the compressor performance has been developed. Validation of the simulation program has been carried out for a bench-model compressor in a compressor calorimeter. Cooling capacity and compressor input power compared reasonably well between the simulation and the calorimeter test. Good agreement on the P-V diagram between the simulation and the test was also obtained. With this validated compressor simulation program, a parametric study has been performed to arrive at optimum dimensions for the compression chamber.
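
    The P-V comparison mentioned above amounts to integrating the indicated work around the compression loop; the sketch below builds a purely illustrative suction/compression/discharge loop (assumed pressures and polytropic index) and integrates it, which is the kind of quantity one would compare between simulation and test.

      # Indicated work from an illustrative P-V loop: suction, polytropic compression, discharge.
      # All pressures, volumes and the polytropic index are assumed values, not design data.
      import numpy as np

      p_suc, p_dis, n_poly = 3.5e6, 9.0e6, 1.3           # Pa, Pa, polytropic index
      v_max, n_pts = 5.0e-6, 100                         # m^3, samples per leg
      v_dis = v_max * (p_suc / p_dis) ** (1.0 / n_poly)  # volume at which discharge pressure is reached

      v_suction     = np.linspace(1e-7, v_max, n_pts)    # fill chamber at suction pressure
      v_compression = np.linspace(v_max, v_dis, n_pts)   # polytropic compression
      v_discharge   = np.linspace(v_dis, 1e-7, n_pts)    # push gas out at discharge pressure

      p_loop = np.concatenate([np.full(n_pts, p_suc),
                               p_suc * (v_max / v_compression) ** n_poly,
                               np.full(n_pts, p_dis)])
      v_loop = np.concatenate([v_suction, v_compression, v_discharge])

      # Indicated work per chamber per revolution (area enclosed by the loop): W = -oint P dV
      work_j = -np.trapz(p_loop, v_loop)
      print(f"Indicated work per chamber per revolution: {work_j:.1f} J")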

  6. Excavator Design Validation

    NASA Technical Reports Server (NTRS)

    Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je

    2010-01-01

    The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operations that can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanisms using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.
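
    A minimal sketch of the parametric/Monte Carlo study pattern described above, using a stand-in analytical cycle-time model; the function, parameters and distributions are hypothetical and are not the Energid/Actin API.

      # Hypothetical Monte Carlo study over excavator/environment parameters
      # using a stand-in fast analytical model (not the Actin toolkit).
      import random
      import statistics

      def dig_cycle_time(soil_density, fill_factor, comm_latency_s):
          """Placeholder analytical model: seconds per dig-dump cycle."""
          base = 25.0
          return base * (soil_density / 1500.0) / fill_factor + 2.0 * comm_latency_s

      random.seed(0)
      samples = []
      for _ in range(10_000):
          samples.append(dig_cycle_time(
              soil_density=random.gauss(1500.0, 150.0),   # kg/m^3
              fill_factor=random.uniform(0.7, 1.0),
              comm_latency_s=random.uniform(0.5, 5.0),    # remote-supervision delay
          ))

      print(f"mean cycle time : {statistics.mean(samples):6.1f} s")
      print(f"95th percentile : {statistics.quantiles(samples, n=20)[-1]:6.1f} s")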

  7. Simulated ventriculostomy training with conventional neuronavigational equipment used clinically in the operating room: prospective validation study.

    PubMed

    Kirkman, Matthew A; Muirhead, William; Sevdalis, Nick; Nandi, Dipankar

    2015-01-01

    Simulation is gaining increasing interest as a method of delivering high-quality, time-effective, and safe training to neurosurgical residents. However, most current simulators are purpose-built for simulation, being relatively expensive and inaccessible to many residents. The purpose of this study was to provide the first comprehensive validity assessment of ventriculostomy performance metrics from the Medtronic StealthStation S7 Surgical Navigation System, a neuronavigational tool widely used in the clinical setting, as a training tool for simulated ventriculostomy, while concomitantly reporting on stress measures. This was a prospective study in which participants performed 6 simulated ventriculostomy attempts on a model head with StealthStation-coregistered imaging. The performance measures included the distance of the ventricular catheter tip to the foramen of Monro and the presence of the catheter tip in the ventricle. Data on objective and self-reported stress and workload measures were also collected. The study was conducted in the operating rooms of the National Hospital for Neurology and Neurosurgery, Queen Square, London. A total of 31 individuals with varying levels of prior ventriculostomy experience took part, varying in seniority from medical student to senior resident. Performance at simulated ventriculostomy improved significantly over subsequent attempts, irrespective of previous ventriculostomy experience. Performance improved whether or not the StealthStation display monitor was used for real-time visual feedback, but performance was optimal when it was. Further, performance was inversely correlated with both objective and self-reported measures of stress (traditionally referred to as concurrent validity). Stress and workload measures were well correlated with each other, and they also correlated with technical performance. These initial data support the use of the StealthStation as a training tool for simulated ventriculostomy, providing a safe environment for repeated practice with immediate feedback. Although the potential implications are profound for neurosurgical education and training, further research following this proof-of-concept study is required on a larger scale for full validation and proof that training translates into improved long-term simulated and patient outcomes. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  8. Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gougar, Hans

    2015-02-01

    The Department of Energy (DOE) has made significant progress developing simulation tools to predict the behavior of nuclear systems with greater accuracy and of increasing our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user). To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Nuclear Energy Knowledge and Validation Center (NEKVaC or the ‘Center’) will be a resource for industry, DOE Programs, and academia validation efforts.

  9. Linking simulation-based educational assessments and patient-related outcomes: a systematic review and meta-analysis.

    PubMed

    Brydges, Ryan; Hatala, Rose; Zendejas, Benjamin; Erwin, Patricia J; Cook, David A

    2015-02-01

    To examine the evidence supporting the use of simulation-based assessments as surrogates for patient-related outcomes assessed in the workplace. The authors systematically searched MEDLINE, EMBASE, Scopus, and key journals through February 26, 2013. They included original studies that assessed health professionals and trainees using simulation and then linked those scores with patient-related outcomes assessed in the workplace. Two reviewers independently extracted information on participants, tasks, validity evidence, study quality, patient-related and simulation-based outcomes, and magnitude of correlation. All correlations were pooled using random-effects meta-analysis. Of 11,628 potentially relevant articles, the 33 included studies enrolled 1,203 participants, including postgraduate physicians (n = 24 studies), practicing physicians (n = 8), medical students (n = 6), dentists (n = 2), and nurses (n = 1). The pooled correlation for provider behaviors was 0.51 (95% confidence interval [CI], 0.38 to 0.62; n = 27 studies); for time behaviors, 0.44 (95% CI, 0.15 to 0.66; n = 7); and for patient outcomes, 0.24 (95% CI, -0.02 to 0.47; n = 5). Most reported validity evidence was favorable, though studies often included only correlational evidence. Validity evidence of internal structure (n = 13 studies), content (n = 12), response process (n = 2), and consequences (n = 1) was reported less often. Three tools showed large pooled correlations and favorable (albeit incomplete) validity evidence. Simulation-based assessments often correlate positively with patient-related outcomes. Although these surrogates are imperfect, tools with established validity evidence may replace workplace-based assessments for evaluating select procedural skills.
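
    The pooling step described above can be sketched as follows, using hypothetical per-study correlations and sample sizes with a Fisher-z transformation and a DerSimonian-Laird random-effects estimate; this is one standard approach, not necessarily the exact model the authors used.

      # Illustrative random-effects pooling of correlation coefficients (Fisher z, DerSimonian-Laird).
      import math

      r = [0.55, 0.40, 0.62, 0.35, 0.50]   # hypothetical per-study correlations
      n = [40, 25, 60, 30, 45]             # hypothetical sample sizes

      z = [0.5 * math.log((1 + ri) / (1 - ri)) for ri in r]   # Fisher z transform
      v = [1.0 / (ni - 3) for ni in n]                        # within-study variance of z
      w = [1.0 / vi for vi in v]

      # DerSimonian-Laird estimate of between-study variance tau^2
      z_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
      q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, z))
      c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
      tau2 = max(0.0, (q - (len(r) - 1)) / c)

      w_re = [1.0 / (vi + tau2) for vi in v]
      z_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
      se = math.sqrt(1.0 / sum(w_re))

      pooled_r = math.tanh(z_re)
      ci = (math.tanh(z_re - 1.96 * se), math.tanh(z_re + 1.96 * se))
      print(f"pooled r = {pooled_r:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")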

  10. An approach to value-based simulator selection: The creation and evaluation of the simulator value index tool.

    PubMed

    Rooney, Deborah M; Hananel, David M; Covington, Benjamin J; Dionise, Patrick L; Nykamp, Michael T; Pederson, Melvin; Sahloul, Jamal M; Vasquez, Rachael; Seagull, F Jacob; Pinsky, Harold M; Sweier, Domenica G; Cooke, James M

    2018-04-01

    Currently there is no reliable, standardized mechanism to support health care professionals during the evaluation of and procurement processes for simulators. A tool founded on best practices could facilitate simulator purchase processes. In a 3-phase process, we identified the top factors considered during the simulator purchase process through expert consensus (n = 127), created the Simulator Value Index (SVI) tool, evaluated targeted validity evidence, and evaluated the practical value of the SVI. A web-based survey was sent to simulation professionals. Participants (n = 79) used the SVI and provided feedback. We evaluated the practical value of 4 tool variations by calculating their sensitivity to predict a preferred simulator. Seventeen top factors were identified and ranked. The top 2 were technical stability/reliability of the simulator and customer service, with no practical differences in rank across institution or stakeholder role. Full SVI variations successfully predicted the preferred simulator with good (87%) sensitivity, whereas the sensitivity of variations restricted to cost and customer service or to cost and technical stability decreased (≤54%). The majority (73%) of participants agreed that the SVI was helpful at guiding simulator purchase decisions, and 88% agreed the SVI tool would help facilitate discussion with peers and leadership. Our findings indicate the SVI supports the process of simulator purchase using a standardized framework. Sensitivity of the tool improved when factors extended beyond traditionally targeted factors. We propose that the tool will facilitate discussion amongst simulation professionals, provide essential information for finance and procurement professionals, and improve the long-term value of simulation solutions. Limitations and applications of the tool are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.
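
    One way to read the sensitivity figures above: for each respondent, compute a weighted index score per candidate simulator and check whether the top-scoring simulator matches the respondent's stated preference. The factor names, weights and ratings below are illustrative only and are not the published SVI content.

      # Hypothetical check of how well a weighted-factor index predicts a stated preference.
      respondents = [
          {"ratings": {"A": {"stability": 5, "service": 4, "cost": 2},
                       "B": {"stability": 3, "service": 5, "cost": 5}},
           "preferred": "A"},
          {"ratings": {"A": {"stability": 4, "service": 3, "cost": 3},
                       "B": {"stability": 4, "service": 4, "cost": 5}},
           "preferred": "B"},
      ]
      weights = {"stability": 0.5, "service": 0.3, "cost": 0.2}   # assumed weights

      def index_score(rating):
          return sum(weights[f] * rating[f] for f in weights)

      hits = sum(
          max(r["ratings"], key=lambda sim: index_score(r["ratings"][sim])) == r["preferred"]
          for r in respondents
      )
      print(f"sensitivity = {hits / len(respondents):.0%}")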

  11. Patient simulation: a literary synthesis of assessment tools in anesthesiology.

    PubMed

    Edler, Alice A; Fanning, Ruth G; Chen, Michael I; Claure, Rebecca; Almazan, Dondee; Struyk, Brain; Seiden, Samuel C

    2009-12-20

    High-fidelity patient simulation (HFPS) has been hypothesized as a modality for assessing competency of knowledge and skill in patient simulation, but uniform methods for HFPS performance assessment (PA) have not yet been completely achieved. Anesthesiology as a field founded the HFPS discipline and also leads in its PA. This project reviews the types, quality, and designated purpose of HFPS PA tools in anesthesiology. We systematically reviewed the anesthesiology literature referenced in PubMed to assess the quality and reliability of available PA tools in HFPS. Of 412 articles identified, 50 met our inclusion criteria. Seventy-seven percent of the studies have been published since 2000; more recent studies demonstrated higher quality. Investigators reported a variety of test construction and validation methods. The most commonly reported test construction methods included "modified Delphi techniques" for item selection, reliability measurement using inter-rater agreement, and intra-class correlations between test items or subtests. Modern test theory, in particular generalizability theory, was used in nine (18%) of the studies. Test score validity has been addressed in multiple investigations and has shown a significant improvement in reporting accuracy. However, the assessment of predictive validity has been low across the majority of studies. Usability and practicality of testing occasions and tools was only anecdotally reported. To more completely comply with the gold standards for PA design, both the shared experience of experts and the recognition of test construction standards, including reliability and validity measurements, instrument piloting, rater training, and explicit identification of the purpose and proposed use of the assessment tool, are required.
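
    Inter-rater agreement of the kind mentioned above is often summarized with an intra-class correlation; the sketch below computes a simple one-way ICC(1,1) on made-up checklist scores, as a generic illustration rather than a reproduction of any reviewed study's analysis.

      # Illustrative one-way ICC(1,1) for inter-rater agreement on hypothetical checklist scores.
      import numpy as np

      # rows = examinees, columns = raters (made-up scores out of 20)
      scores = np.array([
          [14, 15],
          [18, 17],
          [ 9, 11],
          [16, 16],
          [12, 10],
      ], dtype=float)

      n, k = scores.shape
      grand = scores.mean()
      subj_means = scores.mean(axis=1)

      msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)          # between-subject mean square
      msw = ((scores - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject mean square
      icc_1_1 = (msb - msw) / (msb + (k - 1) * msw)
      print(f"ICC(1,1) = {icc_1_1:.2f}")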

  12. Campus Energy Model for Control and Performance Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-09-19

    The core of the modeling platform is an extensible block library for the MATLAB/Simulink software suite. The platform enables true co-simulation (interaction at each simulation time step) with NREL's state-of-the-art modeling tools and other energy modeling software.

  13. Assessment study of insight ARTHRO VR (®) arthroscopy virtual training simulator: face, content, and construct validities.

    PubMed

    Bayona, Sofía; Fernández-Arroyo, José Manuel; Martín, Isaac; Bayona, Pilar

    2008-09-01

    The aims of this study were to test the face, content, and construct validities of a virtual-reality haptic arthroscopy simulator and to validate four assessment hypotheses. The participants in our study were 94 arthroscopists attending an international conference on arthroscopy. The interviewed surgeons had been performing arthroscopies for a mean of 8.71 years (σ = 6.94 years). We explained the operation, functionality, instructions for use, and the exercises provided by the simulator. They performed a trial exercise and then an exercise in which performance was recorded. After using the simulator, the arthroscopists answered a questionnaire. The simulator was classified as one of the best training methods (over phantoms), and obtained a mark of 7.10 out of 10 as an evaluation tool. The simulator was considered more useful for inexperienced surgeons than for surgeons with experience (mean difference 1.88 out of 10, P value < 0.001). The participants valued the simulator at 8.24 as a tool for learning skills, its fidelity at 7.41, the quality of the platform at 7.54, and the content of the exercises at 7.09. It obtained a global score of 7.82. Of the subjects, 30.8% said they would practise with the simulator more than 6 h per week. Of the surgeons, 89.4% affirmed that they would recommend the simulator to their colleagues. The data gathered support the first three hypotheses, as well as face and content validities. Results show statistically significant differences between experts and novices, thus supporting construct validity, but studies with a larger sample must be carried out to verify this. We propose concrete solutions and an equation to calculate economy of movement. Analogously, we analyze competence measurements and propose an equation to provide a single measurement that contains them all and that, according to the surgeons' criteria, is as reliable as the judgment of experts observing the performance of an apprentice.
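
    The paper's own equation for economy of movement is not reproduced here; as a generic illustration, one common definition divides the straight-line distance between the start and end of a task by the actual tool-tip path length, as sketched below with made-up tracking data.

      # One common (not the paper's) definition of economy of movement:
      # ideal straight-line distance divided by the actual tool-tip path length.
      import numpy as np

      # Hypothetical tracked tool-tip positions (x, y, z) in millimetres.
      tip_path = np.array([
          [ 0.0, 0.0, 0.0],
          [ 4.0, 1.0, 0.5],
          [ 9.0, 3.5, 1.0],
          [14.0, 4.0, 1.5],
          [20.0, 5.0, 2.0],
      ])

      path_length = np.linalg.norm(np.diff(tip_path, axis=0), axis=1).sum()
      straight_line = np.linalg.norm(tip_path[-1] - tip_path[0])
      economy = straight_line / path_length          # 1.0 = perfectly economical motion
      print(f"path length = {path_length:.1f} mm, economy of movement = {economy:.2f}")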

  14. Synthetic Infrared Scene: Improving the KARMA IRSG Module and Signature Modelling Tool SMAT

    DTIC Science & Technology

    2011-03-01

    ... of engagements involving infrared seekers in the KARMA simulation environment. The work was carried out from November 2008 until March 2011. This contract report focuses on ...

  15. High-resolution computational algorithms for simulating offshore wind turbines and farms: Model development and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderer, Antoni; Yang, Xiaolei; Angelidis, Dionysios

    2015-10-30

    The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.

  16. An Alternative Frictional Boundary Condition for Computational Fluid Dynamics Simulation of Friction Stir Welding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Gaoqiang; Feng, Zhili; Zhu, Yucan

    For better application of numerical simulation in optimization and design of friction stir welding (FSW), this paper presents a new frictional boundary condition at the tool/workpiece interface for computational fluid dynamics (CFD) modeling of FSW. The proposed boundary condition is based on an implementation of the Coulomb friction model. Using the new boundary condition, the CFD simulation yields a non-uniform distribution of contact state over the tool/workpiece interface, as validated by the experimental weld macrostructure. It is found that an interfacial sticking state is present over a large area of the tool/workpiece interface, while significant interfacial sliding occurs at the shoulder periphery, the lower part of the pin side, and the periphery of the pin bottom. Due to the interfacial sticking, a rotating flow zone is found under the shoulder, in which fast circular motion occurs. The diameter of the rotating flow zone is smaller than the shoulder diameter, which is attributed to the presence of the interfacial sliding at the shoulder periphery. For the simulated welding condition, the heat generation due to friction and plastic deformation makes up 54.4% and 45.6% of the total heat generation rate, respectively. In conclusion, the simulated temperature field is validated by its good agreement with the experimental measurements.
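
    A toy sketch of the contact-state logic implied by a Coulomb friction boundary condition: at each interface point, the transmissible frictional shear stress (friction coefficient times contact pressure) is compared with the local shear strength of the workpiece material, giving sticking where friction can sustain the required shear and sliding otherwise. All numbers below are illustrative, not values from the study.

      # Toy Coulomb-friction contact-state check along the tool/workpiece interface.
      mu = 0.4                                  # assumed friction coefficient

      # (location, contact pressure [MPa], local shear strength [MPa]) -- illustrative values
      interface_points = [
          ("shoulder centre",    150.0, 40.0),
          ("shoulder periphery",  80.0, 45.0),
          ("pin side, upper",    120.0, 42.0),
          ("pin side, lower",     90.0, 48.0),
          ("pin bottom edge",     70.0, 50.0),
      ]

      for name, pressure, tau_yield in interface_points:
          tau_friction = mu * pressure          # shear stress that Coulomb friction can transmit
          state = "sticking" if tau_friction >= tau_yield else "sliding"
          print(f"{name:20s} tau_fric = {tau_friction:5.1f} MPa -> {state}")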

  18. Validation results of satellite mock-up capturing experiment using nets

    NASA Astrophysics Data System (ADS)

    Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil

    2017-05-01

    The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of different experiments under microgravity conditions where a net was launched, capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment has been performed over thirty parabolas offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launching angles using a pneumatic-based dedicated mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to determine the initial conditions accurately and to generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly configured according to the parabolic flight scenario and executed in order to generate the validation data. Both datasets have been compared according to different metrics in order to perform the validation of the PATENDER simulator.
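
    The dataset comparison described above boils down to distance metrics between simulated and reconstructed knot positions over time; the sketch below computes a mean per-knot position error per time step on synthetic stand-in arrays.

      # Illustrative comparison metric between simulated and reconstructed net-knot positions.
      # The arrays are synthetic stand-ins for the real deployment data.
      import numpy as np

      rng = np.random.default_rng(1)
      n_steps, n_knots = 50, 64
      reference = rng.normal(size=(n_steps, n_knots, 3))                    # reconstructed positions [m]
      simulated = reference + rng.normal(scale=0.01, size=reference.shape)  # simulator output [m]

      per_step_error = np.linalg.norm(simulated - reference, axis=2).mean(axis=1)
      print(f"mean knot error = {per_step_error.mean()*1000:.1f} mm, "
            f"max over deployment = {per_step_error.max()*1000:.1f} mm")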

  19. Current status of validation for robotic surgery simulators - a systematic review.

    PubMed

    Abboudi, Hamid; Khan, Mohammed S; Aboumarzouk, Omar; Guru, Khurshid A; Challacombe, Ben; Dasgupta, Prokar; Ahmed, Kamran

    2013-02-01

    To analyse studies validating the effectiveness of robotic surgery simulators. The MEDLINE(®), EMBASE(®) and PsycINFO(®) databases were systematically searched until September 2011. References from retrieved articles were reviewed to broaden the search. The simulator name, training tasks, participant level, training duration and evaluation scoring were extracted from each study. We also extracted data on feasibility, validity, cost-effectiveness, reliability and educational impact. We identified 19 studies investigating simulation options in robotic surgery. There are five different robotic surgery simulation platforms available on the market. In all, 11 studies sought opinion and compared performance between two different groups: 'expert' and 'novice'. Experts ranged in experience from 21 to 2,200 robotic cases. The novice groups consisted of participants with no prior experience on a robotic platform and were often medical students or junior doctors. The Mimic dV-Trainer(®), ProMIS(®), SimSurgery Educational Platform(®) (SEP) and Intuitive systems have shown face, content and construct validity. The Robotic Surgical Simulator(TM) system has only been face and content validated. All of the simulators except SEP have shown educational impact. Feasibility and cost-effectiveness of simulation systems were not evaluated in any trial. Virtual reality simulators were shown to be effective training tools for junior trainees. Simulation training holds the greatest potential to be used as an adjunct to traditional training methods to equip the next generation of robotic surgeons with the skills required to operate safely. However, current simulation models have only been validated in small studies. There is no evidence to suggest that one type of simulator provides more effective training than any other. More research is needed to validate simulated environments further and to investigate the effectiveness of animal and cadaveric training in robotic surgery. © 2012 BJU International.

  20. Development and validation of the TOCO-TURBT tool: a summative assessment tool that measures surgical competency in transurethral resection of bladder tumour.

    PubMed

    de Vries, Anna H; Muijtjens, Arno M M; van Genugten, Hilde G J; Hendrikx, Ad J M; Koldewijn, Evert L; Schout, Barbara M A; van der Vleuten, Cees P M; Wagner, Cordula; Tjiam, Irene M; van Merriënboer, Jeroen J G

    2018-06-05

    The current shift towards competency-based residency training has increased the need for objective assessment of skills. In this study, we developed and validated an assessment tool that measures technical and non-technical competency in transurethral resection of bladder tumour (TURBT). The 'Test Objective Competency' (TOCO)-TURBT tool was designed by means of cognitive task analysis (CTA), which included expert consensus. The tool consists of 51 items, divided into 3 phases: preparatory (n = 15), procedural (n = 21), and completion (n = 15). For validation of the TOCO-TURBT tool, 2 TURBT procedures were performed and videotaped by 25 urologists and 51 residents in a simulated setting. The participants' degree of competence was assessed by a panel of eight independent expert urologists using the TOCO-TURBT tool. Each procedure was assessed by two raters. Feasibility, acceptability and content validity were evaluated by means of a quantitative cross-sectional survey. Regression analyses were performed to assess the strength of the relation between experience and test scores (construct validity). Reliability was analysed by generalizability theory. The majority of assessors and urologists indicated the TOCO-TURBT tool to be a valid assessment of competency and would support the implementation of the TOCO-TURBT assessment as a certification method for residents. Construct validity was clearly established for all outcome measures of the procedural phase (all r > 0.5, p < 0.01). Generalizability-theory analysis showed high reliability (coefficient Phi ≥ 0.8) when using the format of two assessors and two cases. This study provides first evidence that the TOCO-TURBT tool is a feasible, valid and reliable assessment tool for measuring competency in TURBT. The tool has the potential to be used for future certification of competencies for residents and urologists. The methodology of CTA might be valuable in the development of assessment tools in other areas of clinical practice.

  1. A survey of ground operations tools developed to simulate the pointing of space telescopes and the design for WISE

    NASA Technical Reports Server (NTRS)

    Fabinsky, Beth

    2006-01-01

    WISE, the Wide Field Infrared Survey Explorer, is scheduled for launch in June 2010. The mission operations system for WISE requires a software modeling tool to help plan, integrate and simulate all spacecraft pointing and verify that no attitude constraints are violated. In the course of developing the requirements for this tool, an investigation was conducted into the design of similar tools for other space-based telescopes. This paper summarizes the ground software and processes used to plan and validate pointing for a selection of space telescopes; with this information as background, the design for WISE is presented.

  2. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to the high stakes in the clinical setting, it is critical to calculate the effect of these assumptions on the CFD simulation results. However, existing CFD validation approaches do not quantify the error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software star-ccm+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the star-ccm+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset, and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
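
    The headline figure quoted above is the kind of quantity computed along the intersecting validation line; the sketch below evaluates a point-wise relative difference between solver and PIV velocity magnitudes on synthetic stand-in data, without the paper's parsing-out of experimental and numerical error sources.

      # Point-wise relative difference between CFD and PIV velocity magnitudes along a validation line.
      # Synthetic arrays stand in for the interpolated solver and PIV data.
      import numpy as np

      s = np.linspace(0.0, 1.0, 50)                         # normalized position along the line
      v_piv = 0.3 + 0.2 * np.sin(2 * np.pi * s)             # "measured" velocity magnitude [m/s]
      v_cfd = v_piv * (1.0 + 0.05 * np.cos(3 * np.pi * s))  # "simulated" velocity magnitude [m/s]

      rel_diff = np.abs(v_cfd - v_piv) / np.abs(v_piv) * 100.0
      print(f"average relative difference along the line: {rel_diff.mean():.2f} % "
            f"(+/- {rel_diff.std():.2f} %)")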

  3. Face, Content, and Construct Validations of Endoscopic Needle Injection Simulator for Transurethral Bulking Agent in Treatment of Stress Urinary Incontinence.

    PubMed

    Farhan, Bilal; Soltani, Tandis; Do, Rebecca; Perez, Claudia; Choi, Hanul; Ghoniem, Gamal

    2018-05-02

    Endoscopic injection of urethral bulking agents is an office procedure that is used to treat stress urinary incontinence secondary to internal sphincteric deficiency. Validation studies are an important part of simulator evaluation and are considered an important step in establishing the effectiveness of simulation-based training. The endoscopic needle injection (ENI) simulator has not been formally validated, although it has been used widely at the University of California, Irvine. We aimed to assess the face, content, and construct validity of the UC Irvine ENI simulator. Dissected female porcine bladders were mounted in a modified Hysteroscopy Diagnostic Trainer. Using routine endoscopic equipment for this procedure with video monitoring, 6 urologists (expert group) and 6 urology trainees (novice group) completed urethral bulking agent injections on a total of 12 bladders using the ENI simulator. Face and content validity were assessed using a structured quantitative survey rating the realism. Construct validity was assessed by comparing the performance, time of the procedure, and the occlusive (anatomical and functional) evaluations between the experts and novices. Trainees also completed a postprocedure feedback survey. Effective injections were evaluated by measuring the retrograde urethral opening pressure, visual cystoscopic coaptation, and postprocedure gross anatomic examination. All 12 participants felt the simulator was a good training tool and should be used as an essential part of urology training (face validity). The ENI simulator showed good face and content validity, with average scores of 3.9/5 for the experts and 3.8/5 for the novices. Content validity evaluation showed that most aspects of the simulator were adequately realistic (mean Likert scores 3.8-3.9/5). However, the bladder does not bleed and is sometimes thin. Experts significantly outperformed novices (p < 0.001) across all measures of performance, thereby establishing construct validity. The ENI simulator shows face, content and construct validity, although a few aspects of the simulator were not very realistic (e.g., bleeding). This study provides a basis for future formal validation of this simulator and for continuing use of this simulator in endourology training. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  4. Safety of railroad passenger vehicle dynamics : OMNISIM simulation and test correlations for passenger rail cars

    DOT National Transportation Integrated Search

    2002-07-01

    The purpose of the work is to validate the safety assessment methodology previously developed for passenger rail vehicle dynamics, which requires the application of simulation tools as well as testing of vehicles under different track scenarios. This...

  5. MACHETE: Environment for Space Networking Evaluation

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John S.; Woo, Simon

    2010-01-01

    Space exploration missions require the design and implementation of space networking that differs from terrestrial networks. In a space networking architecture, interplanetary communication protocols need to be designed, validated and evaluated carefully to support different mission requirements. As actual systems are expensive to build, it is essential to have a low-cost method to validate and verify mission/system designs and operations. This can be accomplished through simulation. Simulation can aid design decisions where alternative solutions are being considered, support trade studies and enable fast study of what-if scenarios. It can be used to identify risks, verify system performance against requirements, and serve as an initial test environment as one moves towards emulation and actual hardware implementation of the systems. We describe the development of the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE), its use cases in supporting architecture trade studies and protocol performance evaluation, and its role in hybrid simulation/emulation. The MACHETE environment contains various tools and interfaces such that users may select the set of tools tailored for the specific simulation end goal. The use cases illustrate tool combinations for simulating space networking in different mission scenarios. This simulation environment is useful in supporting space networking design for planned and future missions as well as evaluating the performance of existing networks where non-determinism exists in data traffic and/or link conditions.

  6. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.
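
    The conversion pattern described above can be sketched in a few lines: parse source records, map fields, and emit them in a flat intermediate form. The snippet below reads simplified OpenDSS-style "New Line.*" records and writes a CSV; real OpenDSS models and the actual ePHASORSIM input format are far richer, so this is only an illustration of the workflow.

      # Toy converter: read simplified OpenDSS-style "New Line.*" records and emit a flat CSV.
      # Field meanings and units are illustrative; not a full OpenDSS parser or ePHASORSIM writer.
      import csv
      import re

      dss_text = """
      New Line.L1 Bus1=650 Bus2=632 R1=0.186 X1=0.425 Length=0.61
      New Line.L2 Bus1=632 Bus2=671 R1=0.186 X1=0.425 Length=0.30
      """

      rows = []
      for line in dss_text.strip().splitlines():
          m = re.match(r"New\s+Line\.(\S+)\s+(.*)", line.strip(), flags=re.IGNORECASE)
          if not m:
              continue
          name, rest = m.groups()
          params = dict(p.split("=") for p in rest.split())
          rows.append({"name": name, "from_bus": params["Bus1"], "to_bus": params["Bus2"],
                       "r1": params["R1"], "x1": params["X1"], "length": params["Length"]})

      with open("branches.csv", "w", newline="") as f:
          writer = csv.DictWriter(f, fieldnames=rows[0].keys())
          writer.writeheader()
          writer.writerows(rows)
      print(f"wrote {len(rows)} branch records")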

  8. A novel augmented reality simulator for skills assessment in minimal invasive surgery.

    PubMed

    Lahanas, Vasileios; Loukas, Constantinos; Smailis, Nikolaos; Georgiou, Evangelos

    2015-08-01

    Over the past decade, simulation-based training has come to the foreground as an efficient method for training and assessment of surgical skills in minimal invasive surgery. Box-trainers and virtual reality (VR) simulators have been introduced in the teaching curricula and have substituted to some extent the traditional model of training based on animals or cadavers. Augmented reality (AR) is a new technology that allows blending of VR elements and real objects within a real-world scene. In this paper, we present a novel AR simulator for assessment of basic laparoscopic skills. The components of the proposed system include: a box-trainer, a camera and a set of laparoscopic tools equipped with custom-made sensors that allow interaction with VR training elements. Three AR tasks were developed, focusing on basic skills such as perception of depth of field, hand-eye coordination and bimanual operation. The construct validity of the system was evaluated via a comparison between two experience groups: novices with no experience in laparoscopic surgery and experienced surgeons. The observed metrics included task execution time, tool pathlength and two task-specific errors. The study also included a feedback questionnaire requiring participants to evaluate the face-validity of the system. Between-group comparison demonstrated highly significant differences (p < 0.01) in all performance metrics and tasks, denoting the simulator's construct validity. Qualitative analysis on the instruments' trajectories highlighted differences between novices and experts regarding smoothness and economy of motion. Subjects' ratings on the feedback questionnaire highlighted the face-validity of the training system. The results highlight the potential of the proposed simulator to discriminate groups with different expertise providing a proof of concept for the potential use of AR as a core technology for laparoscopic simulation training.
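
    The construct-validity comparison described above reduces to computing motion-derived metrics per participant and testing the two experience groups with a nonparametric test. A minimal Python sketch of that analysis is shown below; the path-length computation assumes tracked tool-tip positions, and all numbers are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def path_length(positions):
    """Total distance travelled by a tracked tool tip (positions: (N, 3) in mm)."""
    steps = np.diff(np.asarray(positions, dtype=float), axis=0)
    return float(np.linalg.norm(steps, axis=1).sum())

# Example: path length of a short synthetic tool-tip trajectory.
trajectory = [[0, 0, 0], [1, 0, 0], [1, 2, 0], [1, 2, 3]]
print(f"path length = {path_length(trajectory):.1f} mm")

# Hypothetical per-subject path lengths (mm) for the two experience groups;
# a nonparametric test compares them, as in the construct-validity analysis.
novices = [5400.0, 6100.0, 5800.0, 6400.0, 5900.0]
experts = [3200.0, 2900.0, 3500.0, 3100.0, 3300.0]
stat, p = mannwhitneyu(novices, experts, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```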

  9. Proposal of a micromagnetic standard problem for ferromagnetic resonance simulations

    NASA Astrophysics Data System (ADS)

    Baker, Alexander; Beg, Marijan; Ashton, Gregory; Albert, Maximilian; Chernyshenko, Dmitri; Wang, Weiwei; Zhang, Shilei; Bisotti, Marc-Antonio; Franchin, Matteo; Hu, Chun Lian; Stamps, Robert; Hesjedal, Thorsten; Fangohr, Hans

    2017-01-01

    Nowadays, micromagnetic simulations are a common tool for studying a wide range of different magnetic phenomena, including the ferromagnetic resonance. A technique for evaluating reliability and validity of different micromagnetic simulation tools is the simulation of proposed standard problems. We propose a new standard problem by providing a detailed specification and analysis of a sufficiently simple problem. By analyzing the magnetization dynamics in a thin permalloy square sample, triggered by a well defined excitation, we obtain the ferromagnetic resonance spectrum and identify the resonance modes via Fourier transform. Simulations are performed using both finite difference and finite element numerical methods, with OOMMF and Nmag simulators, respectively. We report the effects of initial conditions and simulation parameters on the character of the observed resonance modes for this standard problem. We provide detailed instructions and code to assist in using the results for evaluation of new simulator tools, and to help with numerical calculation of ferromagnetic resonance spectra and modes in general.
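
    The post-processing step the abstract refers to, obtaining the ferromagnetic resonance spectrum by Fourier transforming the magnetization dynamics, can be sketched as follows. The time series here is synthetic (two damped modes) and merely stands in for OOMMF or Nmag output; it is not the standard-problem data.

```python
import numpy as np

# Sketch of the FMR post-processing step: Fourier-transform the spatially
# averaged magnetization component recorded after a small excitation and
# read resonance peaks off the power spectrum.
dt = 5e-12                      # sampling interval (s)
t = np.arange(0, 20e-9, dt)     # 20 ns of ringdown
my = (0.01 * np.exp(-t / 5e-9) * np.sin(2 * np.pi * 8.2e9 * t)
      + 0.004 * np.exp(-t / 5e-9) * np.sin(2 * np.pi * 11.5e9 * t))

spectrum = np.abs(np.fft.rfft(my)) ** 2
freqs = np.fft.rfftfreq(len(my), d=dt)

peak = freqs[np.argmax(spectrum)]
print(f"dominant resonance ~ {peak / 1e9:.2f} GHz")
```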

  10. Validation of a Full-Immersion Simulation Platform for Percutaneous Nephrolithotomy Using Three-Dimensional Printing Technology.

    PubMed

    Ghazi, Ahmed; Campbell, Timothy; Melnyk, Rachel; Feng, Changyong; Andrusco, Alex; Stone, Jonathan; Erturk, Erdal

    2017-12-01

    The restriction of resident hours with an increasing focus on patient safety and a reduced caseload has impacted surgical training. A complex and complication prone procedure such as percutaneous nephrolithotomy (PCNL) with a steep learning curve may create an unsafe environment for hands-on resident training. In this study, we validate a high fidelity, inanimate PCNL model within a full-immersion simulation environment. Anatomically correct models of the human pelvicaliceal system, kidney, and relevant adjacent structures were created using polyvinyl alcohol hydrogels and three-dimensional-printed injection molds. All steps of a PCNL were simulated including percutaneous renal access, nephroscopy, and lithotripsy. Five experts (>100 caseload) and 10 novices (<20 caseload) from both urology (full procedure) and interventional radiology (access only) departments completed the simulation. Face and content validity were calculated using model ratings for similarity to the real procedure and usefulness as a training tool. Differences in performance among groups with various levels of experience using clinically relevant procedural metrics were used to calculate construct validity. The model was determined to have an excellent face and content validity with an average score of 4.5/5.0 and 4.6/5.0, respectively. There were significant differences between novice and expert operative metrics including mean fluoroscopy time, the number of percutaneous access attempts, and number of times the needle was repositioned. Experts achieved better stone clearance with fewer procedural complications. We demonstrated the face, content, and construct validity of an inanimate, full task trainer for PCNL. Construct validity between experts and novices was demonstrated using incorporated procedural metrics, which permitted the accurate assessment of performance. While hands-on training under supervision remains an integral part of any residency, this full-immersion simulation provides a comprehensive tool for surgical skills development and evaluation before hands-on exposure.

  11. Investigation of different modeling approaches for computational fluid dynamics simulation of high-pressure rocket combustors

    NASA Astrophysics Data System (ADS)

    Ivancic, B.; Riedmann, H.; Frey, M.; Knab, O.; Karl, S.; Hannemann, K.

    2016-07-01

    The paper summarizes technical results and first highlights of the cooperation between DLR and Airbus Defence and Space (DS) within the work package "CFD Modeling of Combustion Chamber Processes" conducted in the frame of the Propulsion 2020 Project. Within the addressed work package, DLR Göttingen and Airbus DS Ottobrunn have identified several test cases where adequate test data are available and which can be used for proper validation of the computational fluid dynamics (CFD) tools. In this paper, the first test case, the Penn State chamber (RCM1), is discussed. Presenting the simulation results from three different tools, it is shown that the test case can be computed properly with steady-state Reynolds-averaged Navier-Stokes (RANS) approaches. The achieved simulation results reproduce the measured wall heat flux as an important validation parameter very well but also reveal some inconsistencies in the test data which are addressed in this paper.

  12. Development of a patient-specific surgical simulator for pediatric laparoscopic procedures.

    PubMed

    Saber, Nikoo R; Menon, Vinay; St-Pierre, Jean C; Looi, Thomas; Drake, James M; Cyril, Xavier

    2014-01-01

    The purpose of this study is to develop and evaluate a pediatric patient-specific surgical simulator for the planning, practice, and validation of laparoscopic surgical procedures prior to intervention, initially focusing on the choledochal cyst resection and reconstruction scenario. The simulator is comprised of software elements including a deformable body physics engine, virtual surgical tools, and abdominal organs. Hardware components such as haptics-enabled hand controllers and a representative endoscopic tool have also been integrated. The prototype is able to perform a number of surgical tasks and further development work is under way to simulate the complete procedure with acceptable fidelity and accuracy.

  13. Ecological Validity in Eye-Tracking: An Empirical Study

    ERIC Educational Resources Information Center

    Spinner, Patti; Gass, Susan M.; Behney, Jennifer

    2013-01-01

    Eye-trackers are becoming increasingly widespread as a tool to investigate second language (L2) acquisition. Unfortunately, clear standards for methodology--including font size, font type, and placement of interest areas--are not yet available. Although many researchers stress the need for ecological validity--that is, the simulation of natural…

  14. Continued Validation of the O-SCORE (Ottawa Surgical Competency Operating Room Evaluation): Use in the Simulated Environment.

    PubMed

    MacEwan, Matthew J; Dudek, Nancy L; Wood, Timothy J; Gofton, Wade T

    2016-01-01

    CONSTRUCT: The Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) is a 9-item surgical evaluation tool designed to assess technical competence in surgical trainees using behavioral anchors. The initial development of the O-SCORE produced evidence for valid results. Further work is required to determine if the use of a single surgeon or an unblinded rater introduces bias. In addition, the relationship of the O-SCORE to other currently used technical assessment tools should be explored to provide validity evidence related to the relationship to other measures. We have designed this project to provide continued validity evidence for the O-SCORE related to these two issues. Nineteen residents and 2 staff Orthopedic Surgeons from the University of Ottawa volunteered to participate in a 2-part OSCE style station. Participants completed a written questionnaire followed by a videotaped 10-minute simulated open reduction and internal fixation of a midshaft radius fracture. Videos were rated individually by 2 blinded staff orthopedic surgeons using an Objective Structured Assessment of Technical Skills (OSATS) global rating scale, an OSATS checklist, and the O-SCORE in random order. O-SCORE results appeared sensitive to surgical training level even when raters were blinded. In addition, strong agreement between two independent observers using the O-SCORE suggests that the measure captures a performance easily recognized by surgical observers. Ratings on the O-SCORE also were strongly associated with global ratings on the currently most validated technical evaluation tool (OSATS). Collectively, these results suggest that the O-SCORE generates accurate, reproducible, and meaningful results when used in a randomized and blinded fashion, providing continued validity evidence for using this tool to evaluate surgical trainee competence. The O-SCORE was able to differentiate surgical trainee level using blinded raters providing further evidence of validity for the O-SCORE. There was strong agreement between two independent observers using the O-SCORE. Ratings on the O-SCORE also demonstrated equivalence to scores on the most validated technical evaluation tool (OSATS). These results suggest that the O-SCORE demonstrates accurate and reproducible results when used in a randomized and blinded fashion providing continued validity evidence for this tool in the evaluation of surgical competence in the trainees.

  15. Modelling and simulation of wood chip combustion in a hot air generator system.

    PubMed

    Rajika, J K A T; Narayana, Mahinsasa

    2016-01-01

    This study focuses on modelling and simulation of a horizontal moving bed/grate wood chip combustor. A standalone finite-volume-based 2-D steady-state Euler-Euler Computational Fluid Dynamics (CFD) model was developed for packed bed combustion. Packed bed combustion of a medium-scale biomass combustor, which was retrofitted from wood log to wood chip feeding for tea drying in Sri Lanka, was evaluated in a CFD simulation study. The model was validated against experimental results from an industrial biomass combustor for a hot air generation system in the tea industry. The open-source CFD tool OpenFOAM was used to generate the model source code for the packed bed combustion, which was simulated along with an available solver for freeboard region modelling. The height of the packed bed is about 20 cm, and the biomass particles are assumed to be spherical with a constant surface-area-to-volume ratio. Temperature measurements of the combustor agree well with simulation results, while gas phase compositions show discrepancies. The combustion efficiency of the validated hot air generator is around 52.2%.

  16. Creation and validation of a simulator for corneal rust ring removal.

    PubMed

    Mednick, Zale; Tabanfar, Reza; Alexander, Ashley; Simpson, Sarah; Baxter, Stephanie

    2017-10-01

    To create and validate a simulation model for corneal rust ring removal. Rust rings were created on cadaveric eyes with the use of small particles of metal. The eyes were mounted on suction plates at slit lamps and the trainees practiced rust ring removal. An inexperienced cohort of medical students and first year ophthalmology residents (n=11), and an experienced cohort of senior residents and faculty (n=11) removed the rust rings from the eyes with the use of a burr. Rust ring removal was evaluated based on removal time, percentage of rust removed and incidence of corneal perforation. A survey was administered to participants to determine face validity. Time for rust ring removal was longer in the inexperienced group at 187±93 seconds (range of 66-408 seconds), compared to the experienced group at 117±54 seconds (range of 55-240 seconds) (p=0.046). Removal speed was similar between groups, at 4847±4355 pixels/minute and 7206±5181 pixels/minute in the inexperienced and experienced groups, respectively (p=0.26). Removal percentage values were similar between groups, at 61±15% and 69±18% (p=0.38). There were no corneal perforations. 100% (22/22) of survey respondents believed the simulator would be a valuable practice tool, and 89% (17/19) felt the simulation was a valid representation of the clinical correlate. The corneal rust ring simulator presented here is a valid training tool that could be used by early trainees to gain greater comfort level before attempting rust ring removal on a live patient. Copyright © 2017 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.

  17. Derivation of a Performance Checklist for Ultrasound-Guided Arthrocentesis Using the Modified Delphi Method.

    PubMed

    Kunz, Derek; Pariyadath, Manoj; Wittler, Mary; Askew, Kim; Manthey, David; Hartman, Nicholas

    2017-06-01

    Arthrocentesis is an important skill for physicians in multiple specialties. Recent studies indicate a superior safety and performance profile for this procedure using ultrasound guidance for needle placement, and improving quality of care requires a valid measurement of competency using this modality. We endeavored to create a validated tool to assess the performance of this procedure using the modified Delphi technique and experts in multiple disciplines across the United States. We derived a 22-item checklist designed to assess competency for the completion of ultrasound-guided arthrocentesis, which demonstrated a Cronbach's alpha of 0.89, indicating an excellent degree of internal consistency. Although we were able to demonstrate content validity for this tool, further validity evidence should be acquired after the tool is used and studied in clinical and simulated contexts. © 2017 by the American Institute of Ultrasound in Medicine.
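
    The internal-consistency figure quoted above (Cronbach's alpha of 0.89) comes from the standard item-variance formula, which a minimal sketch makes concrete; the rating matrix below is hypothetical, not the Delphi panel's data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_vars / total_var)

# Hypothetical checklist ratings: 6 subjects x 5 items, each item scored 0-2.
ratings = np.array([
    [2, 2, 1, 2, 2],
    [1, 2, 1, 1, 2],
    [2, 1, 2, 2, 1],
    [0, 1, 0, 1, 1],
    [2, 2, 2, 2, 2],
    [1, 1, 1, 2, 1],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```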

  18. Mechanism-Based FE Simulation of Tool Wear in Diamond Drilling of SiCp/Al Composites.

    PubMed

    Xiang, Junfeng; Pang, Siqin; Xie, Lijing; Gao, Feinong; Hu, Xin; Yi, Jie; Hu, Fang

    2018-02-07

    The aim of this work is to analyze the micro mechanisms underlying the wear of macroscale tools during diamond machining of SiCp/Al6063 composites and to develop a mechanism-based diamond wear model in relation to the dominant wear behaviors. During drilling of high volume fraction SiCp/Al6063 composites containing Cu, the dominant wear mechanisms of the diamond tool involve thermodynamically activated physicochemical wear, due to diamond-graphite transformation catalyzed by Cu in an air atmosphere, and mechanically driven abrasive wear, due to high-frequency scraping of the hard SiC reinforcement on the tool surface. An analytical diamond wear model, coupling the Usui abrasive wear model and an Arrhenius-extended graphitization wear model, was proposed and implemented through a user-defined subroutine for tool wear estimates. Tool wear estimation in diamond drilling of SiCp/Al6063 composites was achieved by incorporating the combined abrasive-chemical tool wear subroutine into a coupled thermomechanical FE model of 3D drilling. The developed drilling FE model for reproducing diamond tool wear was validated for feasibility and reliability by comparing numerically simulated tool wear morphology with experimentally observed results after drilling a hole using brazed polycrystalline diamond (PCD) and chemical vapor deposition (CVD) diamond-coated tools. A fairly good agreement of experimental and simulated results in cutting forces, chip and tool wear morphologies demonstrates that the developed 3D drilling FE model, combined with a subroutine for diamond tool wear estimation, can provide a more accurate analysis not only of cutting forces and chip shape but also of tool wear behavior during drilling of SiCp/Al6063 composites. Once validated and calibrated, the developed diamond tool wear model, in conjunction with other machining FE models, can easily be extended to the investigation of tool wear evolution with various diamond tool geometries and other machining processes when cutting different workpiece materials.
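
    The coupled wear law described above can be summarized, in generic symbols rather than the authors' calibrated constants, as an abrasive Usui-type term plus an Arrhenius-type graphitization term:

```latex
% Sketch of the combined wear-rate law (generic symbols, not the paper's
% calibrated constants): sigma_n is the normal stress on the tool face,
% v_s the sliding velocity, T the contact temperature, E_a an activation energy.
\frac{dW}{dt} \;=\;
\underbrace{A\,\sigma_n\,v_s\,\exp\!\left(-\frac{B}{T}\right)}_{\text{abrasive (Usui)}}
\;+\;
\underbrace{C\,\exp\!\left(-\frac{E_a}{R\,T}\right)}_{\text{graphitization (Arrhenius)}}
```

    In an FE user subroutine, a rate of this form is typically integrated over the tool-chip contact nodes at each time increment to update the worn tool geometry.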

  19. Mechanism-Based FE Simulation of Tool Wear in Diamond Drilling of SiCp/Al Composites

    PubMed Central

    Xiang, Junfeng; Pang, Siqin; Xie, Lijing; Gao, Feinong; Hu, Xin; Yi, Jie; Hu, Fang

    2018-01-01

    The aim of this work is to analyze the micro mechanisms underlying the wear of macroscale tools during diamond machining of SiCp/Al6063 composites and to develop a mechanism-based diamond wear model in relation to the dominant wear behaviors. During drilling of high volume fraction SiCp/Al6063 composites containing Cu, the dominant wear mechanisms of the diamond tool involve thermodynamically activated physicochemical wear, due to diamond-graphite transformation catalyzed by Cu in an air atmosphere, and mechanically driven abrasive wear, due to high-frequency scraping of the hard SiC reinforcement on the tool surface. An analytical diamond wear model, coupling the Usui abrasive wear model and an Arrhenius-extended graphitization wear model, was proposed and implemented through a user-defined subroutine for tool wear estimates. Tool wear estimation in diamond drilling of SiCp/Al6063 composites was achieved by incorporating the combined abrasive-chemical tool wear subroutine into a coupled thermomechanical FE model of 3D drilling. The developed drilling FE model for reproducing diamond tool wear was validated for feasibility and reliability by comparing numerically simulated tool wear morphology with experimentally observed results after drilling a hole using brazed polycrystalline diamond (PCD) and chemical vapor deposition (CVD) diamond-coated tools. A fairly good agreement of experimental and simulated results in cutting forces, chip and tool wear morphologies demonstrates that the developed 3D drilling FE model, combined with a subroutine for diamond tool wear estimation, can provide a more accurate analysis not only of cutting forces and chip shape but also of tool wear behavior during drilling of SiCp/Al6063 composites. Once validated and calibrated, the developed diamond tool wear model, in conjunction with other machining FE models, can easily be extended to the investigation of tool wear evolution with various diamond tool geometries and other machining processes when cutting different workpiece materials. PMID:29414839

  20. Quadruplex digital flight control system assessment

    NASA Technical Reports Server (NTRS)

    Mulcare, D. B.; Downing, L. E.; Smith, M. K.

    1988-01-01

    Described are the development and validation of a double fail-operational digital flight control system architecture for critical pitch axis functions. Architectural tradeoffs are assessed, system simulator modifications are described, and demonstration testing results are critiqued. Assessment tools and their application are also illustrated. Ultimately, the vital role of system simulation, tailored to digital mechanization attributes, is shown to be essential to validating the airworthiness of full-time critical functions such as augmented fly-by-wire systems for relaxed static stability airplanes.

  1. Identification of fuel cycle simulator functionalities for analysis of transition to a new fuel cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Nicholas R.; Carlsen, Brett W.; Dixon, Brent W.

    Dynamic fuel cycle simulation tools are intended to model holistic transient nuclear fuel cycle scenarios. As with all simulation tools, fuel cycle simulators require verification through unit tests, benchmark cases, and integral tests. Model validation is a vital aspect as well. Although comparative studies have been performed, there is no comprehensive unit test and benchmark library for fuel cycle simulator tools. The objective of this paper is to identify the must-test functionalities of a fuel cycle simulator tool within the context of specific problems of interest to the Fuel Cycle Options Campaign within the U.S. Department of Energy's Office of Nuclear Energy. The approach in this paper identifies the features needed to cover the range of promising fuel cycle options identified in the DOE-NE Fuel Cycle Evaluation and Screening (E&S) and categorizes these features to facilitate prioritization. Features were categorized as essential functions, integrating features, and exemplary capabilities. One objective of this paper is to propose a library of unit tests applicable to each of the essential functions. Another underlying motivation for this paper is to encourage an international dialog on the functionalities and standard test methods for fuel cycle simulator tools.

  2. Validation of the GreenLight™ Simulator and development of a training curriculum for photoselective vaporisation of the prostate.

    PubMed

    Aydin, Abdullatif; Muir, Gordon H; Graziano, Manuela E; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2015-06-01

    To assess face, content and construct validity, and feasibility and acceptability of the GreenLight™ Simulator as a training tool for photoselective vaporisation of the prostate (PVP), and to establish learning curves and develop an evidence-based training curriculum. This prospective, observational and comparative study recruited novice (25 participants), intermediate (14) and expert-level (7) urologists from the UK and Europe at the 28th European Association of Urological Surgeons Annual Meeting 2013. A group of novices (12 participants) performed 10 sessions of subtask training modules followed by a long operative case, whereas a second group (13) performed five sessions of a given case module. Intermediate and expert groups performed all training modules once, followed by one operative case. The outcome measures for learning curves and construct validity were time to task, coagulation time, vaporisation time, average sweep speed, average laser distance, blood loss, operative errors, and instrument cost. Face and content validity, feasibility and acceptability were addressed through a quantitative survey. Construct validity was demonstrated in two of five training modules (P = 0.038; P = 0.018) and in a considerable number of case metrics (P = 0.034). Learning curves were seen in all five training modules (P < 0.001), and significant reductions in case operative time (P < 0.001) and error (P = 0.017) were seen. An evidence-based training curriculum, to help trainees acquire transferable skills, was produced using the results. This study has shown the GreenLight Simulator to be a valid and useful training tool for PVP. It is hoped that by using the training curriculum for the GreenLight Simulator, novice trainees can acquire skills and knowledge to a predetermined level of proficiency. © 2014 The Authors. BJU International © 2014 BJU International.

  3. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation. Volume 1: Study results

    NASA Technical Reports Server (NTRS)

    Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.

    1982-01-01

    A variety of artificial intelligence techniques that could be used for NASA space applications and robotics were evaluated. The techniques studied were decision tree manipulators, problem solvers, rule-based systems, logic programming languages, representation language languages, and expert systems. The overall structure of a robotic simulation tool was defined and a framework for that tool developed. Nonlinear and linearized dynamics equations were formulated for n-link manipulator configurations. A framework for the robotic simulation was established which uses validated manipulator component models connected according to a user-defined configuration.
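
    The nonlinear dynamics equations mentioned above for an n-link manipulator take the standard joint-space rigid-body form, from which the linearized models are derived:

```latex
% Standard joint-space rigid-body dynamics of an n-link manipulator:
% M(q) inertia matrix, C(q, \dot{q}) Coriolis/centrifugal terms,
% g(q) gravity torques, \tau applied joint torques.
M(q)\,\ddot{q} \;+\; C(q,\dot{q})\,\dot{q} \;+\; g(q) \;=\; \tau
```

    Linearizing this equation about a nominal trajectory yields the state-space models used for controller design and for the validated manipulator component models in the simulation framework.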

  4. Direct Geolocation of Satellite Images with the EO-CFI Libraries

    NASA Astrophysics Data System (ADS)

    de Miguel, Eduardo; Prado, Elena; Estebanez, Monica; Martin, Ana I.; Gonzalez, Malena

    2016-08-01

    The INTA Remote Sensing Laboratory has implemented a tool for the direct geolocation of satellite images. The core of the tool is a C code based on the "Earth Observation Mission CFI SW" from ESA. The tool accepts different types of inputs for satellite attitude (Euler angles, quaternions, default attitude models). Satellite position can be provided either in ECEF or ECI coordinates. The line of sight of each individual detector is imported from an external file or is generated by the tool from camera parameters. The global DEM ACE2 is used to define the ground intersection of the LOS. The tool has already been tailored for georeferencing images from the forthcoming Spanish Earth Observation mission SEOSat/Ingenio, and for the APIS camera onboard the INTA cubesat OPTOS. The next step is to configure it for the geolocation of Sentinel 2 L1b images. The tool has been internally validated by different means. This validation shows that the tool is suitable for georeferencing images from high spatial resolution missions. As part of the validation efforts, a code for simulating orbital information for LEO missions using EO-CFI has been produced.
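
    The geometric core of direct geolocation is the intersection of each detector's line of sight with the Earth surface model. The sketch below performs only the WGS-84 ellipsoid intersection in plain Python; the EO-CFI library calls and the ACE2 DEM refinement used by the actual tool are omitted, and the inputs are illustrative.

```python
import numpy as np

# WGS-84 semi-axes (m)
A = 6378137.0
B = 6356752.314245

def los_ellipsoid_intersection(sat_pos, los_dir):
    """First intersection of a line of sight with the WGS-84 ellipsoid.

    sat_pos: satellite position in ECEF (m); los_dir: LOS direction in ECEF.
    Returns the ECEF ground point, or None if the LOS misses the Earth.
    (DEM refinement, as done with ACE2 in the real tool, is omitted.)
    """
    p = np.asarray(sat_pos, dtype=float)
    d = np.asarray(los_dir, dtype=float)
    scale = np.array([1.0 / A, 1.0 / A, 1.0 / B])
    ps, ds = p * scale, d * scale          # work in the unit-sphere frame
    a = ds @ ds
    b = 2.0 * ps @ ds
    c = ps @ ps - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    s = (-b - np.sqrt(disc)) / (2.0 * a)   # nearer root = first intersection
    return p + s * d

ground = los_ellipsoid_intersection([7000e3, 0.0, 0.0], [-1.0, 0.0, 0.0])
print(ground)   # ~ [6378137, 0, 0]
```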

  5. SolarTherm: A flexible Modelica-based simulator for CSP systems

    NASA Astrophysics Data System (ADS)

    Scott, Paul; Alonso, Alberto de la Calle; Hinkley, James T.; Pye, John

    2017-06-01

    Annual performance simulations provide a valuable tool for analysing the viability and overall impact of different concentrating solar power (CSP) component and system designs. However, existing tools work best with conventional systems and are difficult or impossible to adapt when novel components, configurations and operating strategies are of interest. SolarTherm is a new open source simulation tool that fulfils this need for the solar community. It includes a simulation framework and a library of flexible CSP components and control strategies that can be adapted or replaced with new designs to meet the special needs of end users. This paper provides an introduction to SolarTherm and a comparison of models for an energy-based trough system and a physical tower system to those in the well-established and widely-used simulator SAM. Differences were found in some components where the inner workings of SAM are undocumented or not well understood, while the other parts show strong agreement. These results help to validate the fundamentals of SolarTherm and demonstrate that, while at an early stage of development, it is already a useful tool for performing annual simulations.

  6. Simulation of ultrasonic arrays for industrial and civil engineering applications including validation

    NASA Astrophysics Data System (ADS)

    Spies, M.; Rieder, H.; Orth, Th.; Maack, S.

    2012-05-01

    In this contribution we address the beam field simulation of 2D ultrasonic arrays using the Generalized Point Source Synthesis technique. Aiming at the inspection of cylindrical components (e.g. pipes) the influence of concave and convex surface curvatures, respectively, has been evaluated for a commercial probe. We have compared these results with those obtained using a commercial simulation tool. In civil engineering, the ultrasonic inspection of highly attenuating concrete structures has been advanced by the development of dry contact point transducers, mainly applied in array arrangements. Our respective simulations for a widely used commercial probe are validated using experimental results acquired on concrete half-spheres with diameters from 200 mm up to 650 mm.
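
    At its simplest, a point-source synthesis of an array beam field superposes spherical-wave contributions from each element with its delay law. The sketch below shows only that free-field superposition; interface transmission through curved surfaces and element directivity, which the Generalized Point Source Synthesis technique handles, are not modelled, and all parameters are illustrative.

```python
import numpy as np

def array_pressure(field_points, element_positions, delays, k):
    """Monochromatic beam field of an array as a sum of point sources.

    field_points: (M, 3) observation points; element_positions: (N, 3);
    delays: (N,) element delays expressed as phase (rad); k: wavenumber (rad/m).
    A bare free-field sketch -- refraction at curved interfaces and element
    directivity are omitted.
    """
    fp = np.asarray(field_points, dtype=float)[:, None, :]
    ep = np.asarray(element_positions, dtype=float)[None, :, :]
    r = np.linalg.norm(fp - ep, axis=2)            # (M, N) distances
    waves = np.exp(1j * (k * r + delays)) / r      # spherical spreading
    return waves.sum(axis=1)                       # superpose elements

# 16-element linear array, half-wavelength pitch, no steering delays.
wavelength = 0.6e-3
k = 2 * np.pi / wavelength
elements = np.column_stack([np.arange(16) * wavelength / 2,
                            np.zeros(16), np.zeros(16)])
points = np.array([[elements[:, 0].mean(), 0.0, z] for z in (5e-3, 10e-3, 20e-3)])
print(np.abs(array_pressure(points, elements, np.zeros(16), k)))
```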

  7. A fast ultrasonic simulation tool based on massively parallel implementations

    NASA Astrophysics Data System (ADS)

    Lambert, Jason; Rougeron, Gilles; Lacassagne, Lionel; Chatillon, Sylvain

    2014-02-01

    This paper presents a CIVA-optimized ultrasonic inspection simulation tool, which takes advantage of the power of massively parallel architectures: graphical processing units (GPU) and multi-core general purpose processors (GPP). This tool is based on the classical approach used in CIVA: the interaction model is based on Kirchhoff, and the ultrasonic field around the defect is computed by the pencil method. The model has been adapted and parallelized for both architectures. At this stage, the configurations addressed by the tool are: multi- and mono-element probes, planar specimens made of simple isotropic materials, and planar rectangular defects or side-drilled holes of small diameter. Validations of the model accuracy and performance measurements are presented.

  8. A new possibility in thoracoscopic virtual reality simulation training: development and testing of a novel virtual reality simulator for video-assisted thoracoscopic surgery lobectomy.

    PubMed

    Jensen, Katrine; Bjerrum, Flemming; Hansen, Henrik Jessen; Petersen, René Horsleben; Pedersen, Jesper Holst; Konge, Lars

    2015-10-01

    The aims of this study were to develop virtual reality simulation software for video-assisted thoracic surgery (VATS) lobectomy, to explore the opinions of thoracic surgeons concerning the VATS lobectomy simulator and to test the validity of the simulator metrics. Experienced VATS surgeons worked with computer specialists to develop a VATS lobectomy software for a virtual reality simulator. Thoracic surgeons with different degrees of experience in VATS were enrolled at the 22nd meeting of the European Society of Thoracic Surgeons (ESTS) held in Copenhagen in June 2014. The surgeons were divided according to the number of performed VATS lobectomies: novices (0 VATS lobectomies), intermediates (1-49 VATS lobectomies) and experienced (>50 VATS lobectomies). The participants all performed a lobectomy of a right upper lobe on the simulator and answered a questionnaire regarding content validity. Metrics were compared between the three groups. We succeeded in developing the first version of a virtual reality VATS lobectomy simulator. A total of 103 thoracic surgeons completed the simulated lobectomy and were distributed as follows: novices n = 32, intermediates n = 45 and experienced n = 26. All groups rated the overall user realism of the VATS lobectomy scenario to a median of 5 on a scale 1-7, with 7 being the best score. The experienced surgeons found the graphics and movements realistic and rated the scenario high in terms of usefulness as a training tool for novice and intermediate experienced thoracic surgeons, but not very useful as a training tool for experienced surgeons. The metric scores were not statistically significant between groups. This is the first study to describe a commercially available virtual reality simulator for a VATS lobectomy. More than 100 thoracic surgeons found the simulator realistic, and hence it showed good content validity. However, none of the built-in simulator metrics could significantly distinguish between novice, intermediate experienced and experienced surgeons, and further development of the simulator software is necessary to develop valid metrics. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  9. Exoplanet Yield Estimation for Decadal Study Concepts using EXOSIMS

    NASA Astrophysics Data System (ADS)

    Morgan, Rhonda; Lowrance, Patrick; Savransky, Dmitry; Garrett, Daniel

    2016-01-01

    The anticipated upcoming large mission study concepts for the direct imaging of exo-earths present an exciting opportunity for exoplanet discovery and characterization. While these telescope concepts would also be capable of conducting a broad range of astrophysical investigations, the most difficult technology challenges are driven by the requirements for imaging exo-earths. The exoplanet science yield for these mission concepts will drive design trades and mission concept comparisons. To assist in these trade studies, the Exoplanet Exploration Program Office (ExEP) is developing a yield estimation tool that emphasizes transparency and consistent comparison of various design concepts. The tool will provide a parametric estimate of science yield of various mission concepts using contrast curves from physics-based model codes and Monte Carlo simulations of design reference missions using realistic constraints, such as solar avoidance angles, the observatory orbit, propulsion limitations of star shades, the accessibility of candidate targets, local and background zodiacal light levels, and background confusion by stars and galaxies. The Python tool utilizes Dmitry Savransky's EXOSIMS (Exoplanet Open-Source Imaging Mission Simulator) design reference mission simulator that is being developed for the WFIRST Preliminary Science program. ExEP is extending and validating the tool for future mission concepts under consideration for the upcoming 2020 decadal review. We present a validation plan and preliminary yield results for a point design.

  10. Design and Control of Compliant Tensegrity Robots Through Simulation and Hardware Validation

    NASA Technical Reports Server (NTRS)

    Caluwaerts, Ken; Despraz, Jeremie; Iscen, Atil; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; Sunspiral, Vytas

    2014-01-01

    To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center has developed and validated two different software environments for the analysis, simulation, and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity ("tensile-integrity") structures have unique physical properties which make them ideal for interaction with uncertain environments. Yet these characteristics, such as variable structural compliance, and global multi-path load distribution through the tension network, make design and control of bio-inspired tensegrity robots extremely challenging. This work presents the progress in using these two tools in tackling the design and control challenges. The results of this analysis includes multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures. The current hardware prototype of a six-bar tensegrity, code-named ReCTeR, is presented in the context of this validation.

  11. Optimization and validation of moving average quality control procedures using bias detection curves and moving average validation charts.

    PubMed

    van Rossum, Huub H; Kemperman, Hans

    2017-02-01

    To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combination of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of bias detection properties of multiple MA. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
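
    The bias detection curve construction described above can be sketched as a small simulation loop: inject a fixed bias into a stream of historical results, run the moving average with its truncation and control limits, and record how many results pass before an alarm; the median over repeated runs gives one point on the curve. All settings and data below are hypothetical, not the paper's optimized MA procedures.

```python
import numpy as np

rng = np.random.default_rng(7)

def results_to_detection(history, bias, window=20, trunc=(130.0, 150.0),
                         limits=(136.0, 142.0)):
    """Number of biased results processed before the moving average alarms.

    history: unbiased consecutive assay results; bias: additive bias injected.
    Results outside `trunc` are excluded from the MA; `limits` are the MA
    control limits. Returns np.inf if the bias is never detected.
    """
    biased = history + bias
    buffer = []
    for n, value in enumerate(biased, start=1):
        if trunc[0] <= value <= trunc[1]:
            buffer.append(value)
        if len(buffer) >= window:
            ma = np.mean(buffer[-window:])
            if ma < limits[0] or ma > limits[1]:
                return n
    return np.inf

# Hypothetical sodium results (mmol/L) and one point of the detection curve.
baseline = rng.normal(139.0, 2.0, size=5000)
runs = [results_to_detection(rng.permutation(baseline), bias=3.0)
        for _ in range(200)]
print("median results to detect a +3 mmol/L bias:", np.median(runs))
```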

  12. Identification of fuel cycle simulator functionalities for analysis of transition to a new fuel cycle

    DOE PAGES

    Brown, Nicholas R.; Carlsen, Brett W.; Dixon, Brent W.; ...

    2016-06-09

    Dynamic fuel cycle simulation tools are intended to model holistic transient nuclear fuel cycle scenarios. As with all simulation tools, fuel cycle simulators require verification through unit tests, benchmark cases, and integral tests. Model validation is a vital aspect as well. Although comparative studies have been performed, there is no comprehensive unit test and benchmark library for fuel cycle simulator tools. The objective of this paper is to identify the must-test functionalities of a fuel cycle simulator tool within the context of specific problems of interest to the Fuel Cycle Options Campaign within the U.S. Department of Energy's Office of Nuclear Energy. The approach in this paper identifies the features needed to cover the range of promising fuel cycle options identified in the DOE-NE Fuel Cycle Evaluation and Screening (E&S) and categorizes these features to facilitate prioritization. Features were categorized as essential functions, integrating features, and exemplary capabilities. One objective of this paper is to propose a library of unit tests applicable to each of the essential functions. Another underlying motivation for this paper is to encourage an international dialog on the functionalities and standard test methods for fuel cycle simulator tools.
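
    As a concrete, if hypothetical, illustration of a unit test for one "essential function", the sketch below checks mass conservation across a single separation step; the `separate` function is a stand-in, not the API of any particular fuel cycle simulator.

```python
# Hypothetical unit test of an "essential function" -- mass balance across a
# single separation step. `separate` stands in for whatever a given fuel cycle
# simulator exposes; only the conservation check itself is the point.
def separate(feed, efficiencies):
    """Split a feed stream (kg by nuclide) into product and waste streams."""
    product = {nuc: m * efficiencies.get(nuc, 0.0) for nuc, m in feed.items()}
    waste = {nuc: m - product[nuc] for nuc, m in feed.items()}
    return product, waste

def test_separation_conserves_mass():
    feed = {"U235": 10.0, "U238": 940.0, "Pu239": 5.0}
    product, waste = separate(feed, {"U235": 0.995, "U238": 0.995, "Pu239": 0.99})
    for nuc in feed:
        assert abs(product[nuc] + waste[nuc] - feed[nuc]) < 1e-9

test_separation_conserves_mass()
print("mass balance unit test passed")
```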

  13. Development of a Scale-up Tool for Pervaporation Processes

    PubMed Central

    Thiess, Holger; Strube, Jochen

    2018-01-01

    In this study, an engineering tool for the design and optimization of pervaporation processes is developed based on physico-chemical modelling coupled with laboratory/mini-plant experiments. The model incorporates the solution-diffusion mechanism, polarization effects (concentration and temperature), axial dispersion, pressure drop and the temperature drop in the feed channel due to vaporization of the permeating components. The permeance, being the key model parameter, was determined via dehydration experiments on a mini-plant scale for the binary mixtures ethanol/water and ethyl acetate/water. A second set of experimental data was utilized for the validation of the model for two chemical systems. The industrially relevant ternary mixture, ethanol/ethyl acetate/water, was investigated close to its azeotropic point and compared to a simulation conducted with the determined binary permeance data. Experimental and simulation data proved to agree very well for the investigated process conditions. In order to test the scalability of the developed engineering tool, large-scale data from an industrial pervaporation plant used for the dehydration of ethanol was compared to a process simulation conducted with the validated physico-chemical model. Since the membranes employed in both mini-plant and industrial scale were of the same type, the permeance data could be transferred. The comparison of the measured and simulated data proved the scalability of the derived model. PMID:29342956
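
    With the permeance as the key parameter, the solution-diffusion flux used in such models is commonly written in the partial-pressure driving-force form below (generic symbols; the paper's fitted permeance values are not reproduced):

```latex
% Solution-diffusion flux of component i across a pervaporation membrane,
% with permeance Q_i as the key parameter: x_i, gamma_i feed-side mole
% fraction and activity coefficient, p_i^sat(T) saturation pressure,
% y_i permeate mole fraction, p_perm permeate pressure.
J_i \;=\; Q_i \left( x_i\,\gamma_i\,p_i^{\mathrm{sat}}(T) \;-\; y_i\,p_{\mathrm{perm}} \right)
```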

  14. Mastoidectomy performance assessment of virtual simulation training using final-product analysis.

    PubMed

    Andersen, Steven A W; Cayé-Thomasen, Per; Sørensen, Mads S

    2015-02-01

    The future development of integrated automatic assessment in temporal bone virtual surgical simulators calls for validation against currently established assessment tools. This study aimed to explore the relationship between mastoidectomy final-product performance assessment in virtual simulation and traditional dissection training. Prospective trial with blinding. A total of 34 novice residents performed a mastoidectomy on the Visible Ear Simulator and on a cadaveric temporal bone. Two blinded senior otologists assessed the final-product performance using a modified Welling scale. The simulator gathered basic metrics on time, steps, and volumes in relation to the on-screen tutorial and collisions with vital structures. Substantial inter-rater reliability (kappa = 0.77) for virtual simulation and moderate inter-rater reliability (kappa = 0.59) for dissection final-product assessment was found. The simulation and dissection performance scores had significant correlation (P = .014). None of the basic simulator metrics correlated significantly with the final-product score except for number of steps completed in the simulator. A modified version of a validated final-product performance assessment tool can be used to assess mastoidectomy on virtual temporal bones. Performance assessment of virtual mastoidectomy could potentially save the use of cadaveric temporal bones for more advanced training when a basic level of competency in simulation has been achieved. NA. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  15. TU-A-17A-02: In Memoriam of Ben Galkin: Virtual Tools for Validation of X-Ray Breast Imaging Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, K; Bakic, P; Abbey, C

    2014-06-15

    This symposium will explore simulation methods for the preclinical evaluation of novel 3D and 4D x-ray breast imaging systems – the subject of AAPM task group TG234. Given the complex design of modern imaging systems, simulations offer significant advantages over long and costly clinical studies in terms of reproducibility, reduced radiation exposures, a known reference standard, and the capability for studying patient and disease subpopulations through appropriate choice of simulation parameters. Our focus will be on testing the realism of software anthropomorphic phantoms and virtual clinical trials tools developed for the optimization and validation of breast imaging systems. The symposium will review the state-of-the-science, as well as the advantages and limitations of various approaches to testing realism of phantoms and simulated breast images. Approaches based upon the visual assessment of synthetic breast images by expert observers will be contrasted with approaches based upon comparing statistical properties between synthetic and clinical images. The role of observer models in the assessment of realism will be considered. Finally, an industry perspective will be presented, summarizing the role and importance of virtual tools and simulation methods in product development. The challenges and conditions that must be satisfied in order for computational modeling and simulation to play a significantly increased role in the design and evaluation of novel breast imaging systems will be addressed. Learning Objectives: Review the state-of-the-science in testing realism of software anthropomorphic phantoms and virtual clinical trials tools; Compare approaches based upon the visual assessment by expert observers vs. the analysis of statistical properties of synthetic images; Discuss the role of observer models in the assessment of realism; Summarize the industry perspective on virtual methods for breast imaging.

  16. Simulation Facilities and Test Beds for Galileo

    NASA Astrophysics Data System (ADS)

    Schlarmann, Bernhard Kl.; Leonard, Arian

    2002-01-01

    Galileo is the European satellite navigation system, financed by the European Space Agency (ESA) and the European Commission (EC). The Galileo System, currently in its definition phase, will offer seamless global coverage, providing state-of-the-art positioning and timing services. Galileo services will include a standard service targeted at mass-market users, an augmented integrity service providing integrity warnings when faults occur, and Public Regulated Services (ensuring continuity of service for public users). Other services are under consideration (SAR and integrated communications). Galileo will be interoperable with GPS, and will be complemented by local elements that will enhance the services for specific local users. In the frame of the Galileo definition phase, several system design and simulation facilities and test beds have been defined and developed for the coming phases of the project, or are currently under development. These are mainly the following tools: the Galileo Mission Analysis Simulator to design the Space Segment, especially to support constellation design, deployment and replacement; the Galileo Service Volume Simulator to analyse the global performance requirements based on a coverage analysis for different service levels and degraded modes; the Galileo System Simulation Facility, a sophisticated end-to-end simulation tool to assess the navigation performance for a complete variety of users under different operating conditions and different modes; the Galileo Signal Validation Facility to evaluate signal and message structures for Galileo; and the Galileo System Test Bed (Version 1) to assess and refine the Orbit Determination & Time Synchronisation and Integrity algorithms, through experiments relying on GPS space infrastructure. This paper presents an overview of the so-called "G-Facilities" and describes the use of the different system design tools during the project life cycle in order to design the system with respect to availability, continuity and integrity requirements. It gives more details on two of these system design tools: the Galileo Signal Validation Facility (GSVF) and the Galileo System Simulation Facility (GSSF). The operational use of these facilities within the complete set of design tools, and especially the combined use of GSVF and GSSF, is described. Finally, this paper also presents examples and results obtained with these tools.

  17. Geochemical Reaction Mechanism Discovery from Molecular Simulation

    DOE PAGES

    Stack, Andrew G.; Kent, Paul R. C.

    2014-11-10

    Methods to explore reactions using computer simulation are becoming increasingly quantitative, versatile, and robust. In this review, a rationale for how molecular simulation can help build better geochemical kinetics models is first given. We summarize some common methods that geochemists use to simulate reaction mechanisms, specifically classical molecular dynamics and quantum chemical methods, and discuss their strengths and weaknesses. Useful tools such as umbrella sampling and metadynamics that enable one to explore reactions are discussed. Several case studies wherein geochemists have used these tools to understand reaction mechanisms are presented, including water exchange and sorption on aqueous species and mineral surfaces, surface charging, crystal growth and dissolution, and electron transfer. The impact that molecular simulation has had on our understanding of geochemical reactivity is highlighted in each case. In the future, it is anticipated that molecular simulation of geochemical reaction mechanisms will become more commonplace as a tool to validate and interpret experimental data, and provide a check on the plausibility of geochemical kinetic models.
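
    Of the enhanced-sampling tools mentioned, umbrella sampling is the simplest to write down: a harmonic bias restrains the system near successive values of the reaction coordinate, and the biased histograms are later recombined (e.g. with WHAM). In generic symbols:

```latex
% Harmonic bias applied in window i of an umbrella-sampling run along the
% reaction coordinate xi; k_i is the restraint stiffness and xi_i^0 the
% window centre (generic symbols, not tied to any particular case study).
V_i^{\mathrm{bias}}(\xi) \;=\; \frac{k_i}{2}\,\bigl(\xi - \xi_i^{0}\bigr)^{2}
```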

  18. Early Validation of Failure Detection, Isolation, and Recovery Design Using Heterogeneous Modelling and Simulation

    NASA Astrophysics Data System (ADS)

    van der Plas, Peter; Guerriero, Suzanne; Cristiano, Leorato; Rugina, Ana

    2012-08-01

    Modelling and simulation can support a number of use cases across the spacecraft development life-cycle. Given the increasing complexity of space missions, the observed general trend is for a more extensive usage of simulation already in the early phases. A major perceived advantage is that modelling and simulation can enable the validation of critical aspects of the spacecraft design before the actual development is started, as such reducing the risk in later phases. Failure Detection, Isolation, and Recovery (FDIR) is one of the areas with a high potential to benefit from early modelling and simulation. With the increasing level of required spacecraft autonomy, FDIR specifications can grow in such a way that the traditional document-based review process soon becomes inadequate. This paper shows that FDIR modelling and simulation in a system context can provide a powerful tool to support the FDIR verification process. It is highlighted that FDIR modelling at this early stage requires heterogeneous modelling tools and languages, in order to provide an adequate functional description of the different components (i.e. FDIR functions, environment, equipment, etc.) to be modelled. For this reason, an FDIR simulation framework is proposed in this paper. This framework is based on a number of tools already available in the Avionics Systems Laboratory at ESTEC, which are the Avionics Test Bench Functional Engineering Simulator (ATB FES), Matlab/Simulink, TASTE, and Real Time Developer Studio (RTDS). The paper then discusses the application of the proposed simulation framework to a real case study, i.e. the FDIR modelling of a satellite in support of an actual ESA mission. Challenges and benefits of the approach are described. Finally, lessons learned and the generality of the proposed approach are discussed.

  19. Development of a physically-based planar inductors VHDL-AMS model for integrated power converter design

    NASA Astrophysics Data System (ADS)

    Ammouri, Aymen; Ben Salah, Walid; Khachroumi, Sofiane; Ben Salah, Tarek; Kourda, Ferid; Morel, Hervé

    2014-05-01

    Design of integrated power converters needs prototype-less approaches. Specific simulations are required for investigation and validation process. Simulation relies on active and passive device models. Models of planar devices, for instance, are still not available in power simulator tools. There is, thus, a specific limitation during the simulation process of integrated power systems. The paper focuses on the development of a physically-based planar inductor model and its validation inside a power converter during transient switching. The planar inductor remains a complex device to model, particularly when skin, proximity and parasitic capacitance effects are taken into account. A heterogeneous simulation scheme, including circuit and device models, is successfully implemented in the VHDL-AMS language and simulated in the Simplorer platform. The mixed simulation results have been tested and compared favorably with practical measurements. It is found that the multi-domain simulation results and measurement data are in close agreement.
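
    The skin and proximity effects mentioned above enter such physically-based winding models through the frequency-dependent skin depth, which governs how the winding resistance grows with frequency (generic symbols, not the model's fitted parameters):

```latex
% Skin depth governing the frequency dependence of the winding resistance:
% rho conductor resistivity, omega angular frequency, mu permeability.
\delta(\omega) \;=\; \sqrt{\frac{2\rho}{\omega\mu}}
```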

  20. Can We Study Autonomous Driving Comfort in Moving-Base Driving Simulators? A Validation Study.

    PubMed

    Bellem, Hanna; Klüver, Malte; Schrauf, Michael; Schöner, Hans-Peter; Hecht, Heiko; Krems, Josef F

    2017-05-01

    To lay the basis of studying autonomous driving comfort using driving simulators, we assessed the behavioral validity of two moving-base simulator configurations by contrasting them with a test-track setting. With increasing level of automation, driving comfort becomes increasingly important. Simulators provide a safe environment to study perceived comfort in autonomous driving. To date, however, no studies were conducted in relation to comfort in autonomous driving to determine the extent to which results from simulator studies can be transferred to on-road driving conditions. Participants ( N = 72) experienced six differently parameterized lane-change and deceleration maneuvers and subsequently rated the comfort of each scenario. One group of participants experienced the maneuvers on a test-track setting, whereas two other groups experienced them in one of two moving-base simulator configurations. We could demonstrate relative and absolute validity for one of the two simulator configurations. Subsequent analyses revealed that the validity of the simulator highly depends on the parameterization of the motion system. Moving-base simulation can be a useful research tool to study driving comfort in autonomous vehicles. However, our results point at a preference for subunity scaling factors for both lateral and longitudinal motion cues, which might be explained by an underestimation of speed in virtual environments. In line with previous studies, we recommend lateral- and longitudinal-motion scaling factors of approximately 50% to 60% in order to obtain valid results for both active and passive driving tasks.

  1. Validation of Magnetic Resonance Thermometry by Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Rydquist, Grant; Owkes, Mark; Verhulst, Claire M.; Benson, Michael J.; Vanpoppel, Bret P.; Burton, Sascha; Eaton, John K.; Elkins, Christopher P.

    2016-11-01

    Magnetic Resonance Thermometry (MRT) is a new experimental technique that can create fully three-dimensional temperature fields in a noninvasive manner. However, validation is still required to determine the accuracy of measured results. One method of examination is to compare data gathered experimentally to data computed with computational fluid dynamics (CFD). In this study, large-eddy simulations have been performed with the NGA computational platform to generate data for a comparison with previously run MRT experiments. The experimental setup consisted of a heated jet inclined at 30° injected into a larger channel. In the simulations, viscosity and density were scaled according to the local temperature to account for differences in buoyant and viscous forces. A mesh-independence study was performed with 5 million-, 15 million-, and 45 million-cell meshes. The program Star-CCM+ was used to simulate the complete experimental geometry. This was compared to data generated from NGA. Overall, both programs show good agreement with the experimental data gathered with MRT. With these data, the validity of MRT as a diagnostic tool has been shown, and the tool can be used to further our understanding of a range of flows with non-trivial temperature distributions.

  2. Applications of the Petri net to simulate, test, and validate the performance and safety of complex, heterogeneous, multi-modality patient monitoring alarm systems.

    PubMed

    Sloane, E B; Gelhot, V

    2004-01-01

    This research is motivated by the rapid pace of medical device and information system integration. Although the ability to interconnect many medical devices and information systems may help improve patient care, there is no way to detect if incompatibilities between one or more devices might cause critical events such as patient alarms to go unnoticed or cause one or more of the devices to become stuck in a disabled state. Petri net tools allow automated testing of all possible states and transitions between devices and/or systems to detect potential failure modes in advance. This paper describes an early research project to use Petri nets to simulate and validate a multi-modality central patient monitoring system. A free Petri net tool, HPSim, is used to simulate two wireless patient monitoring networks: one with 44 heart monitors and a central monitoring system and a second version that includes an additional 44 wireless pulse oximeters. In the latter Petri net simulation, a potentially dangerous heart arrhythmia and pulse oximetry alarms were detected.
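
    The study used HPSim, but the underlying mechanics of a Petri net, places holding tokens and transitions firing when all their input places are marked, are easy to sketch. The toy alarm net below is illustrative only; it is not the 44-monitor model analysed in the paper.

```python
# Minimal Petri net engine: places hold tokens, a transition fires when every
# input place has a token, moving tokens to its output places. The tiny alarm
# net below is illustrative only, not the HPSim model from the cited study.
class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)              # place -> token count
        self.transitions = transitions            # name -> (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet(
    marking={"arrhythmia_detected": 1, "central_station_idle": 1},
    transitions={
        "raise_alarm": (["arrhythmia_detected", "central_station_idle"],
                        ["alarm_displayed"]),
        "acknowledge": (["alarm_displayed"], ["central_station_idle"]),
    },
)
net.fire("raise_alarm")
net.fire("acknowledge")
print(net.marking)   # alarm handled, central station idle again
```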

  3. ExEP yield modeling tool and validation test results

    NASA Astrophysics Data System (ADS)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration-time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts, and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, it revealed problems in the L2 halo orbit and in the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.
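    A deterministic physics check of the kind described, a known RV planet placed at quadrature, can be expressed as a unit test. The sketch below is not the EXOSIMS API: the function name and the simple a/d working-angle relation (circular orbit, small-angle approximation) are illustrative assumptions (Python):

      import unittest

      # Hypothetical helper, not part of EXOSIMS.
      def working_angle_arcsec(semi_major_axis_au, distance_pc):
          """Apparent separation at quadrature, in arcseconds.
          For a circular orbit seen at quadrature the projected separation
          equals the semi-major axis, so the angle is simply a[AU] / d[pc]."""
          return semi_major_axis_au / distance_pc

      class TestWorkingAngle(unittest.TestCase):
          def test_known_value(self):
              # A 5.2 AU orbit viewed from 10 pc subtends 0.52 arcsec.
              self.assertAlmostEqual(working_angle_arcsec(5.2, 10.0), 0.52, places=6)

          def test_scales_inversely_with_distance(self):
              self.assertAlmostEqual(
                  working_angle_arcsec(1.0, 20.0),
                  working_angle_arcsec(1.0, 10.0) / 2.0,
              )

      if __name__ == "__main__":
          unittest.main()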

  4. Software Tools to Support Research on Airport Departure Planning

    NASA Technical Reports Server (NTRS)

    Carr, Francis; Evans, Antony; Feron, Eric; Clarke, John-Paul

    2003-01-01

    A simple, portable and useful collection of software tools has been developed for the analysis of airport surface traffic. The tools are based on a flexible and robust traffic-flow model, and include calibration, validation and simulation functionality for this model. Several different interfaces have been developed to help promote usage of these tools, including a portable Matlab(TM) implementation of the basic algorithms; a web-based interface which provides online access to automated analyses of airport traffic based on a database of real-world operations data which covers over 250 U.S. airports over a 5-year period; and an interactive simulation-based tool currently in use as part of a college-level educational module. More advanced applications for airport departure traffic include taxi-time prediction and evaluation of "windowing" congestion control.

  5. Using the arthroscopic surgery skill evaluation tool as a pass-fail examination.

    PubMed

    Koehler, Ryan J; Nicandri, Gregg T

    2013-12-04

    Examination of arthroscopic skill requires evaluation tools that are valid and reliable, with clear criteria for passing. The Arthroscopic Surgery Skill Evaluation Tool was developed as a video-based assessment of technical skill with criteria for passing established by a panel of experts. The purpose of this study was to test the validity and reliability of the Arthroscopic Surgery Skill Evaluation Tool as a pass-fail examination of arthroscopic skill. Twenty-eight residents and two sports medicine faculty members were recorded performing diagnostic knee arthroscopy on a left and a right cadaveric specimen in our arthroscopic skills laboratory. Procedure videos were evaluated with the Arthroscopic Surgery Skill Evaluation Tool by two raters blind to subject identity. Subjects were considered to pass when they attained scores of ≥3 on all eight assessment domains. The raters agreed on a pass-fail rating for fifty-five of sixty videos, with an intraclass correlation coefficient of 0.83. Ten of thirty participants were assigned passing scores by both raters for both diagnostic arthroscopies performed in the laboratory. Receiver operating characteristic analysis demonstrated that logging more than eighty arthroscopic cases, or more than thirty-five arthroscopic knee cases, was predictive of attaining a passing score on both procedures performed in the laboratory. This study demonstrates that the Arthroscopic Surgery Skill Evaluation Tool is valid and reliable as a pass-fail examination of diagnostic knee arthroscopy in the simulation laboratory. Further study is necessary to determine whether the tool can be used to assess multiple arthroscopic procedures and to evaluate arthroscopic procedures performed in the operating room.
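    The pass-fail rule itself (a score of at least 3 on all eight domains) and rater agreement on the resulting calls are easy to make concrete. The domain scores below are hypothetical, and simple percent agreement is used only for illustration; the study itself reports an intraclass correlation coefficient (Python):

      import numpy as np

      PASS_THRESHOLD = 3  # pass requires >= 3 on all eight assessment domains

      def passes(domain_scores):
          """Apply the all-domains-at-or-above-threshold pass rule."""
          return all(s >= PASS_THRESHOLD for s in domain_scores)

      def percent_agreement(rater_a, rater_b):
          """Fraction of videos on which two raters give the same pass/fail call."""
          calls_a = [passes(v) for v in rater_a]
          calls_b = [passes(v) for v in rater_b]
          return np.mean([a == b for a, b in zip(calls_a, calls_b)])

      # Hypothetical domain scores (eight domains per video) from two raters.
      rater_a = [[4, 3, 3, 4, 3, 3, 4, 3], [2, 3, 3, 3, 4, 3, 3, 3]]
      rater_b = [[3, 3, 4, 4, 3, 3, 3, 3], [3, 3, 3, 3, 4, 3, 3, 3]]
      print(percent_agreement(rater_a, rater_b))  # 0.5: raters disagree on video 2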

  6. Improving the performance of a filling line based on simulation

    NASA Astrophysics Data System (ADS)

    Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.

    2016-08-01

    The paper describes a method of improving the performance of a filling line based on simulation. The study concerns a production line located in a manufacturing centre of an FMCG company. A discrete-event simulation model was built using data provided by a maintenance data acquisition system. Two types of failures were identified in the system and were approximated using continuous statistical distributions. The model was validated against line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations formed the basis of a financial analysis: NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate, and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.
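    The NPV and ROI figures mentioned reduce to short formulas. The cash flows and discount rate below are hypothetical, and the tax, depreciation, and inflation adjustments described in the paper are omitted for brevity (Python):

      def npv(rate, cash_flows):
          """Net present value, where cash_flows[0] is the (usually negative)
          initial investment at t = 0."""
          return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

      def roi(cash_flows):
          """Simple (undiscounted) return on investment."""
          investment = -cash_flows[0]
          return (sum(cash_flows[1:]) - investment) / investment

      # Hypothetical improvement scenario: 50 k investment, five years of extra
      # profit estimated from the simulated throughput gain, discounted at 8%.
      flows = [-50_000, 18_000, 18_000, 18_000, 18_000, 18_000]
      print(round(npv(0.08, flows), 2))
      print(round(roi(flows), 2))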

  7. The AAO fiber instrument data simulator

    NASA Astrophysics Data System (ADS)

    Goodwin, Michael; Farrell, Tony; Smedley, Scott; Heald, Ron; Heijmans, Jeroen; De Silva, Gayandhi; Carollo, Daniela

    2012-09-01

    The fiber instrument data simulator is an in-house software tool that simulates detector images of fiber-fed spectrographs developed by the Australian Astronomical Observatory (AAO). In addition to helping validate the instrument designs, the resulting simulated images are used to develop the required data reduction software. Example applications that have benefited from the tool are the HERMES and SAMI instrument projects for the Anglo-Australian Telescope (AAT). Given the sophistication of these projects, an end-to-end data simulator that accurately models the predicted detector images is required. The data simulator encompasses all aspects of the transmission and optical aberrations of the light path: from the science object, through the atmosphere, telescope, fibers, and spectrograph, and finally to the camera detectors. The simulator runs in a Linux environment and uses pre-calculated information derived from ZEMAX models and processed data from MATLAB. In this paper, we discuss aspects of the model, the software, example simulations, and verification.

  8. Validation of the one pass measure for motivational interviewing competence.

    PubMed

    McMaster, Fiona; Resnicow, Ken

    2015-04-01

    This paper examines the psychometric properties of the OnePass coding system, a new, user-friendly tool for evaluating practitioner competence in motivational interviewing (MI). We provide data on reliability and on validity against the current gold standard, the Motivational Interviewing Treatment Integrity (MITI) tool. We compared scores from 27 videotaped MI sessions, performed by student counselors trained in MI with simulated patients, using both OnePass and the MITI, with three different raters for each tool. Reliability was estimated using intra-class correlation coefficients (ICCs), and validity was assessed using Pearson's r. OnePass had high inter-rater reliability, with 19 of 23 items showing substantial to almost perfect agreement. Taking the pair of scores with the highest inter-rater reliability on the MITI, the concurrent validity between the two measures ranged from moderate to high. Validity was highest for evocation, autonomy, direction, and empathy. OnePass appears to have good inter-rater reliability while capturing similar dimensions of MI as the MITI. Despite only moderate concurrent validity with the MITI, OnePass shows promise in evaluating both traditional and novel interpretations of MI, and may be a useful tool for developing and improving practitioner competence in MI where access to MITI coders is limited. Copyright © 2015. Published by Elsevier Ireland Ltd.

  9. Face, content, and construct validity of human placenta as a haptic training tool in neurointerventional surgery.

    PubMed

    Ribeiro de Oliveira, Marcelo Magaldi; Nicolato, Arthur; Santos, Marcilea; Godinho, Joao Victor; Brito, Rafael; Alvarenga, Alexandre; Martins, Ana Luiza Valle; Prosdocimi, André; Trivelato, Felipe Padovani; Sabbagh, Abdulrahman J; Reis, Augusto Barbosa; Maestro, Rolando Del

    2016-05-01

    OBJECT: The development of neurointerventional treatments of central nervous system disorders has resulted in the need for adequate training environments for novice interventionalists. Virtual simulators offer anatomical definition but lack adequate tactile feedback. Animal models, which provide more lifelike training, require an appropriate infrastructure base. The authors describe a training model for neurointerventional procedures using the human placenta (HP), which affords haptic training with significantly fewer resource requirements, and discuss its validation. METHODS: Twelve HPs were prepared for simulated endovascular procedures. Training exercises performed by interventional neuroradiologists and novice fellows were placental angiography, stent placement, aneurysm coiling, and intravascular liquid embolic agent injection. RESULTS: The endovascular training exercises proposed can be easily reproduced in the HP. Face, content, and construct validity were assessed by 6 neurointerventional radiologists and 6 novice fellows in interventional radiology. CONCLUSIONS: The use of HP provides an inexpensive training model for the training of neurointerventionalists. Preliminary validation results show that this simulation model has face and content validity and has demonstrated construct validity for the interventions assessed in this study.

  10. OR fire virtual training simulator: design and face validity.

    PubMed

    Dorozhkin, Denis; Olasky, Jaisa; Jones, Daniel B; Schwaitzberg, Steven D; Jones, Stephanie B; Cao, Caroline G L; Molina, Marcos; Henriques, Steven; Wang, Jinling; Flinn, Jeff; De, Suvranu

    2017-09-01

    The Virtual Electrosurgical Skill Trainer is a tool for training surgeons in the safe operation of electrosurgery tools in both open and minimally invasive surgery. This training includes a dedicated team-training module that focuses on operating room (OR) fire prevention and response. The module was developed to allow trainees, practicing surgeons, anesthesiologists, and nurses to interact with a virtual OR environment, which includes anesthesia apparatus, electrosurgical equipment, a virtual patient, and a fire extinguisher. Wearing a head-mounted display, participants must correctly identify the "fire triangle" elements and then successfully contain an OR fire. Within these virtual reality scenarios, trainees learn to react appropriately to the simulated emergency. A study targeted at establishing the face validity of the virtual OR fire simulator was undertaken at the 2015 Society of American Gastrointestinal and Endoscopic Surgeons conference. Forty-nine subjects with varying experience participated in this Institutional Review Board-approved study. The subjects were asked to complete the OR fire training/prevention sequence in the VEST simulator and then to answer a subjective preference questionnaire consisting of sixteen questions focused on the usefulness and fidelity of the simulator. On a 5-point scale, 12 of 13 questions were rated at a mean of 3 or greater (92%). Five questions were rated above 4 (38%), particularly those focusing on the simulator's effectiveness and its usefulness in OR fire safety training. A total of 33 of the 49 participants (67%) chose the virtual OR fire trainer over traditional training methods such as a textbook or an animal model. Training for OR fire emergencies in fully immersive VR environments, such as the VEST trainer, may be the ideal training modality. The face validity of the OR fire training module of the VEST simulator was successfully established on many aspects of the simulation.

  11. Experimental validation of thermo-chemical algorithm for a simulation of pultrusion processes

    NASA Astrophysics Data System (ADS)

    Barkanov, E.; Akishin, P.; Miazza, N. L.; Galvez, S.; Pantelelis, N.

    2018-04-01

    To provide a better understanding of pultrusion processes, with and without temperature control, and to support pultrusion tooling design, an algorithm based on a mixed time-integration scheme and the nodal control volumes method has been developed. In the present study, its experimental validation is carried out using newly developed cure sensors that measure electrical resistivity and temperature on the profile surface. Through this verification process, the set of initial data used for simulating the pultrusion of a rod profile was corrected and finally defined.

  12. Cross-platform validation and analysis environment for particle physics

    NASA Astrophysics Data System (ADS)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    2017-11-01

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  13. Verification and Validation of Requirements on the CEV Parachute Assembly System Using Design of Experiments

    NASA Technical Reports Server (NTRS)

    Schulte, Peter Z.; Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project conducts computer simulations to verify that flight performance requirements on parachute loads and terminal rate of descent are met. Design of Experiments (DoE) provides a systematic method for variation of simulation input parameters. When implemented and interpreted correctly, a DoE study of parachute simulation tools indicates values and combinations of parameters that may cause requirement limits to be violated. This paper describes one implementation of DoE that is currently being developed by CPAS, explains how DoE results can be interpreted, and presents the results of several preliminary studies. The potential uses of DoE to validate parachute simulation models and verify requirements are also explored.
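    A minimal form of the approach, sweeping a full-factorial set of input combinations through a simulation and flagging cases that violate a requirement limit, can be sketched as follows. The factor names, levels, load limit, and surrogate load model are all invented stand-ins for the CPAS parachute simulation (Python):

      import itertools

      # Hypothetical factor levels for a parachute-load simulation.
      factors = {
          "deploy_velocity_fps": (250.0, 300.0, 350.0),
          "dynamic_pressure_psf": (40.0, 60.0),
          "canopy_porosity": (0.05, 0.10),
      }
      LOAD_LIMIT_LBF = 10_000.0  # stand-in requirement limit

      def simulate_peak_load(deploy_velocity_fps, dynamic_pressure_psf, canopy_porosity):
          """Placeholder for the real simulation: a simple surrogate in which load
          grows with velocity and pressure and drops with porosity."""
          return 0.55 * deploy_velocity_fps * dynamic_pressure_psf * (1.0 - canopy_porosity)

      names = list(factors)
      violations = []
      for levels in itertools.product(*(factors[n] for n in names)):
          case = dict(zip(names, levels))
          load = simulate_peak_load(**case)
          if load > LOAD_LIMIT_LBF:
              violations.append((case, load))

      for case, load in violations:
          print(f"limit exceeded ({load:.0f} lbf): {case}")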

  14. Numerical Simulation of the Fluid-Structure Interaction of a Surface Effect Ship Bow Seal

    NASA Astrophysics Data System (ADS)

    Bloxom, Andrew L.

    Numerical simulations of fluid-structure interaction (FSI) problems were performed in an effort to verify and validate a commercially available FSI tool. This tool uses an iterative partitioned coupling scheme between CD-adapco's STAR-CCM+ finite-volume fluid solver and Simulia's Abaqus finite-element structural solver to simulate the FSI response of a system. Preliminary verification and validation (V&V) work was carried out to understand the numerical behavior of the codes individually and together as an FSI tool. The completed V&V work included code order verification of the respective fluid and structural solvers against Couette-Poiseuille flow and Euler-Bernoulli beam theory; these results confirmed the 2nd-order accuracy of the spatial discretizations used. Following that, a mixture of solution verifications and model calibrations was performed with the physics models used in the FSI problems included. Solution verifications were completed for stand-alone fluid and structural models as well as for the coupled FSI solutions; these results re-confirmed the spatial order of accuracy for more complex flows and physics models, and also established the order of accuracy of the temporal discretizations. In the absence of a well-defined material model, calibration was performed to reproduce the experimental results; this was done for both instances of hyperelastic materials presented in the literature as validation cases, because those materials were defined only as linear elastic. Calibrated, three-dimensional models of the bow seal on the University of Michigan bow seal test platform were able to reproduce the experimental results qualitatively through averaging of the forces and seal displacements. These simulations represent the only current 3D results for this case. One significant result of this study is the ability to visualize the flow around the seal and to directly measure seal resistance at varying cushion pressures, seal immersions, forward speeds, and seal materials. SES design analysis could greatly benefit from the inclusion of flexible seals in simulations, and this work is a positive step in that direction. In future work, the inclusion of more complex seal geometries and contact will further enhance the capability of this tool.
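    Code order verification of the kind described typically reduces to computing the observed order of accuracy from solutions on systematically refined meshes. The sketch below uses hypothetical drag values and a constant refinement ratio, and is not tied to the STAR-CCM+/Abaqus setup (Python):

      import math

      def observed_order(f_coarse, f_medium, f_fine, refinement_ratio):
          """Observed order of accuracy from solutions on three meshes refined
          by a constant ratio r (h_coarse = r * h_medium = r^2 * h_fine)."""
          return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(refinement_ratio)

      # Hypothetical drag values from coarse, medium, and fine meshes (r = 2).
      # A second-order scheme should give differences that shrink by ~4x per refinement.
      f_coarse, f_medium, f_fine = 1.240, 1.210, 1.2025
      print(round(observed_order(f_coarse, f_medium, f_fine, 2.0), 2))  # ~2.0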

  15. Simulating Next-Generation Sequencing Datasets from Empirical Mutation and Sequencing Models

    PubMed Central

    Stephens, Zachary D.; Hudson, Matthew E.; Mainzer, Liudmila S.; Taschuk, Morgan; Weber, Matthew R.; Iyer, Ravishankar K.

    2016-01-01

    An obstacle to validating and benchmarking methods for genome analysis is that there are few reference datasets available for which the “ground truth” about the mutational landscape of the sample genome is known and fully validated. Additionally, the free and public availability of real human genome datasets is incompatible with the preservation of donor privacy. In order to better analyze and understand genomic data, we need test datasets that model all variants, reflecting known biology as well as sequencing artifacts. Read simulators can fulfill this requirement, but are often criticized for limited resemblance to true data and overall inflexibility. We present NEAT (NExt-generation sequencing Analysis Toolkit), a set of tools that not only includes an easy-to-use read simulator, but also scripts to facilitate variant comparison and tool evaluation. NEAT has a wide variety of tunable parameters which can be set manually on the default model or parameterized using real datasets. The software is freely available at github.com/zstephens/neat-genreads. PMID:27893777

  16. Analysis of Aurora's Performance Simulation Engine for Three Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine; Simon, Joseph

    2015-07-07

    Aurora Solar Inc. is building a cloud-based optimization platform to automate the design, engineering, and permit generation process of solar photovoltaic (PV) installations. They requested that the National Renewable Energy Laboratory (NREL) validate the performance of the PV system performance simulation engine of Aurora Solar's solar design platform, Aurora. In previous work, NREL performed a validation of multiple other PV modeling tools, so this study builds upon that work by examining all of the same fixed-tilt systems with available module datasheets that NREL selected and used in the aforementioned study. Aurora Solar set up these three operating PV systems in their modeling platform using NREL-provided system specifications and concurrent weather data. NREL then verified the setup of these systems, ran the simulations, and compared the Aurora-predicted performance data to measured performance data for those three systems, as well as to performance data predicted by other PV modeling tools.

  17. Biomaterial science meets computational biology.

    PubMed

    Hutmacher, Dietmar W; Little, J Paige; Pettet, Graeme J; Loessner, Daniela

    2015-05-01

    There is a pressing need for a predictive tool capable of revealing a holistic understanding of fundamental elements in the normal and pathological cell physiology of organoids in order to decipher the mechanoresponse of cells. Therefore, the integration of a systems bioengineering approach into a validated mathematical model is necessary to develop a new simulation tool. This tool can only be innovative by combining biomaterials science with computational biology. Systems-level and multi-scale experimental data are incorporated into a single framework, thus representing both single cells and collective cell behaviour. Such a computational platform needs to be validated in order to discover key mechano-biological factors associated with cell-cell and cell-niche interactions.

  18. Validation of the CALSPAN gross-motion-simulation code with actually occurring injury patterns in aircraft accidents.

    PubMed

    Ballo, J M; Dunne, M J; McMeekin, R R

    1978-01-01

    Digital simulation of aircraft-accident kinematics has heretofore been used almost exclusively as a design tool to explore structural load limits, precalculate decelerative forces at various cabin stations, and describe the effect of protective devices in the crash environment. In an effort to determine the value of digital computer simulation of fatal aircraft accidents, a fatality involving an ejection-system failure (out-of-envelope ejection) was modeled, and the injuries actually incurred were compared to those predicted; good agreement was found. The simulation of fatal aircraft accidents is advantageous because of a well-defined endpoint (death), lack of therapeutic intervention, and a static anatomic situation that can be minutely investigated. Such simulation techniques are a useful tool in the study of experimental trauma.

  19. TU-EF-204-07: Add Tube Current Modulation to a Low Dose Simulation Tool for CT Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Y.; Department of Physics, University of Arizona, Tucson, AZ; Wen, G.

    2015-06-15

    Purpose: We extended the capabilities of a low dose simulation tool to model tube-current modulation (TCM). TCM is widely used in clinical practice to reduce radiation dose in CT scans. We expect the tool to be valuable for various clinical applications (e.g., optimizing protocols, comparing reconstruction techniques, and evaluating TCM methods). Methods: The tube current is input as a function of z location, instead of a fixed value. Starting from the line integrals of a scan, a new Poisson noise realization at a lower dose is generated for each view. To validate the new functionality, we compared simulated scans with real scans in image space. Results: First we assessed noise in the difference between the low-dose simulations and the original high-dose scan. When the simulated tube current is a step function of z location, the noise at each segment matches the noise of 3 separate constant-tube-current simulations. Secondly, with a phantom that forces TCM, we compared a low-dose simulation with an equivalent real low-dose scan. The mean CT number of the simulated scan and the real low-dose scan were 137.7±0.6 and 137.8±0.5, respectively. Furthermore, with 240 ROIs, the noise of the simulated scan and the real low-dose scan were 24.03±0.45 and 23.99±0.43, respectively, and they were not statistically different (2-sample t-test, p-value=0.28). The facts that the noise reflected the trend of the TCM curve, and that the absolute noise measurements were not statistically different, validated the TCM function. Conclusion: We successfully added tube-current modulation functionality to an existing low dose simulation tool. We demonstrated that the noise reflected an input tube-current modulation curve. In addition, we verified that the noise and mean CT number of our simulation agreed with a real low dose scan. The authors are all employees of Philips. Yijun Ding is also supported by NIBIB P41EB002035 and NIBIB R01EB000803.
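    The core idea, generating a new Poisson noise realization for each view at a tube current that varies with z, can be sketched as follows. The incident photon count, sinogram values, and modulation profile are hypothetical, and this is a generic illustration rather than the authors' tool (Python):

      import numpy as np

      def simulate_low_dose(line_integrals, mA_original, mA_simulated, I0=1.0e5, rng=None):
          """Generate a lower-dose Poisson noise realization of a sinogram.

          line_integrals : array of shape (n_views, n_channels)
          mA_simulated   : tube current per view (a function of z / view index),
                           so a modulation profile can be supplied instead of a constant.
          """
          rng = np.random.default_rng() if rng is None else rng
          ratio = np.asarray(mA_simulated, float)[:, None] / mA_original
          counts = I0 * ratio * np.exp(-line_integrals)  # expected photons per ray
          noisy = np.maximum(rng.poisson(counts), 1)     # avoid log(0)
          return -np.log(noisy / (I0 * ratio))

      # Hypothetical 360-view x 512-channel sinogram with a step tube-current profile.
      sino = np.full((360, 512), 2.0)
      mA_profile = np.where(np.arange(360) < 180, 100.0, 200.0)
      low_dose = simulate_low_dose(sino, mA_original=400.0, mA_simulated=mA_profile)
      # Noise is larger in the low-current half of the scan.
      print(low_dose[:180].std(), low_dose[180:].std())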

  20. Integrating Reliability Analysis with a Performance Tool

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael

    1995-01-01

    A large number of commercial simulation tools support performance oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production quality simulation based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.

  1. Electromagnetic Simulations for Aerospace Application Final Report CRADA No. TC-0376-92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madsen, N.; Meredith, S.

    Electromagnetic (EM) simulation tools play an important role in the design cycle, allowing optimization of a design before it is fabricated for testing. The purpose of this cooperative project was to provide Lockheed with state-of-the-art electromagnetic (EM) simulation software that will enable the optimal design of the next generation of low-observable (LO) military aircraft through the VHF regime. More particularly, the project was principally code development and validation, its goal to produce a 3-D, conforming-grid, time-domain (TD) EM simulation tool, consisting of a mesh generator, a DS13D-based simulation kernel, and an RCS postprocessor, which was useful in the optimization of LO aircraft, both for full-aircraft simulations run on a massively parallel computer and for small-scale problems run on a UNIX workstation.

  2. Flight Testing an Iced Business Jet for Flight Simulation Model Validation

    NASA Technical Reports Server (NTRS)

    Ratvasky, Thomas P.; Barnhart, Billy P.; Lee, Sam; Cooper, Jon

    2007-01-01

    A flight test of a business jet aircraft with various ice accretions was performed to obtain data to validate flight simulation models developed through wind tunnel tests. Three types of ice accretions were tested: pre-activation roughness, runback shapes that form downstream of the thermal wing ice protection system, and a wing ice protection system failure shape. The high fidelity flight simulation models of this business jet aircraft were validated using a software tool called "Overdrive." Through comparisons of flight-extracted aerodynamic forces and moments to simulation-predicted forces and moments, the simulation models were successfully validated. Only minor adjustments in the simulation database were required to obtain adequate match, signifying the process used to develop the simulation models was successful. The simulation models were implemented in the NASA Ice Contamination Effects Flight Training Device (ICEFTD) to enable company pilots to evaluate flight characteristics of the simulation models. By and large, the pilots confirmed good similarities in the flight characteristics when compared to the real airplane. However, pilots noted pitch up tendencies at stall with the flaps extended that were not representative of the airplane and identified some differences in pilot forces. The elevator hinge moment model and implementation of the control forces on the ICEFTD were identified as a driver in the pitch ups and control force issues, and will be an area for future work.

  3. Assessing teamwork performance in obstetrics: A systematic search and review of validated tools.

    PubMed

    Fransen, Annemarie F; de Boer, Liza; Kienhorst, Dieneke; Truijens, Sophie E; van Runnard Heimel, Pieter J; Oei, S Guid

    2017-09-01

    Teamwork performance is an essential component for the clinical efficiency of multi-professional teams in obstetric care. As patient safety is related to teamwork performance, it has become an important learning goal in simulation-based education. In order to improve teamwork performance, reliable assessment tools are required. These can be used to provide feedback during training courses, or to compare learning effects between different types of training courses. The aim of the current study is to (1) identify the available assessment tools to evaluate obstetric teamwork performance in a simulated environment, and (2) evaluate their psychometric properties in order to identify the most valuable tool(s) to use. We performed a systematic search in PubMed, MEDLINE, and EMBASE to identify articles describing assessment tools for the evaluation of obstetric teamwork performance in a simulated environment. In order to evaluate the quality of the identified assessment tools the standards and grading rules have been applied as recommended by the Accreditation Council for Graduate Medical Education (ACGME) Committee on Educational Outcomes. The included studies were also assessed according to the Oxford Centre for Evidence Based Medicine (OCEBM) levels of evidence. This search resulted in the inclusion of five articles describing the following six tools: Clinical Teamwork Scale, Human Factors Rating Scale, Global Rating Scale, Assessment of Obstetric Team Performance, Global Assessment of Obstetric Team Performance, and the Teamwork Measurement Tool. Based on the ACGME guidelines we assigned a Class 3, level C of evidence, to all tools. Regarding the OCEBM levels of evidence, a level 3b was assigned to two studies and a level 4 to four studies. The Clinical Teamwork Scale demonstrated the most comprehensive validation, and the Teamwork Measurement Tool demonstrated promising results, however it is recommended to further investigate its reliability. Copyright © 2017. Published by Elsevier B.V.

  4. Booster Interface Loads

    NASA Technical Reports Server (NTRS)

    Gentz, Steve; Wood, Bill; Nettles, Mindy

    2015-01-01

    The interaction between shock waves and the wake shed from the forward booster/core attach hardware results in unsteady pressure fluctuations, which can lead to large buffeting loads on the vehicle. This task investigates whether computational tools can adequately predict these flows, and whether alternative booster nose shapes can reduce these loads. Results from wind tunnel tests will be used to validate the computations and provide design information for future Space Launch System (SLS) configurations. The current work combines numerical simulations with wind tunnel testing to predict buffeting loads caused by the boosters. Variations in nosecone shape, similar to the Ariane 5 design (fig. 1), are being evaluated with regard to lowering the buffet loads. The task will provide design information for the mitigation of buffet loads for SLS, along with validated simulation tools to be used to assess future SLS designs.

  5. Validity Evidence for the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT).

    PubMed

    Breimer, Gerben E; Haji, Faizal A; Cinalli, Giuseppe; Hoving, Eelco W; Drake, James M

    2017-02-01

    Growing demand for transparent and standardized methods for evaluating surgical competence prompted the construction of the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). To provide validity evidence of the NEVAT by reporting on the tool's internal structure and its relationship with surgical expertise during simulation-based training. The NEVAT was used to assess performance of trainees and faculty at an international neuroendoscopy workshop. All participants performed an endoscopic third ventriculostomy (ETV) on a synthetic simulator. Participants were simultaneously scored by 2 raters using the NEVAT procedural checklist and global rating scale (GRS). Evidence of internal structure was collected by calculating interrater reliability and internal consistency of raters' scores. Evidence of relationships with other variables was collected by comparing the ETV performance of experts, experienced trainees, and novices using Jonckheere's test (evidence of construct validity). Thirteen experts, 11 experienced trainees, and 10 novices participated. The interrater reliability by the intraclass correlation coefficient for the checklist and GRS was 0.82 and 0.94, respectively. Internal consistency (Cronbach's α) for the checklist and the GRS was 0.74 and 0.97, respectively. Median scores with interquartile range on the checklist and GRS for novices, experienced trainees, and experts were 0.69 (0.58-0.86), 0.85 (0.63-0.89), and 0.85 (0.81-0.91) and 3.1 (2.5-3.8), 3.7 (2.2-4.3) and 4.6 (4.4-4.9), respectively. Jonckheere's test showed that the median checklist and GRS score increased with performer expertise ( P = .04 and .002, respectively). This study provides validity evidence for the NEVAT to support its use as a standardized method of evaluating neuroendoscopic competence during simulation-based training. Copyright © 2016 by the Congress of Neurological Surgeons

  6. Development of a self-assessment teamwork tool for use by medical and nursing students.

    PubMed

    Gordon, Christopher J; Jorm, Christine; Shulruf, Boaz; Weller, Jennifer; Currie, Jane; Lim, Renee; Osomanski, Adam

    2016-08-24

    Teamwork training is an essential component of health professional student education. A valid and reliable teamwork self-assessment tool could assist students to identify desirable teamwork behaviours, with the potential to promote learning about effective teamwork. The aim of this study was to develop and evaluate a self-assessment teamwork tool for health professional students for use in the context of emergency response to a mass casualty. The authors modified a previously published teamwork instrument, designed for experienced critical care teams, for use with medical and nursing students involved in mass casualty simulations. The 17-item questionnaire was administered to students immediately following the simulations. These scores were used to explore the psychometric properties of the tool, using exploratory and confirmatory factor analysis. 202 (128 medical and 74 nursing) students completed the self-assessment teamwork tool for students. Exploratory factor analysis revealed 2 factors (5 items - Teamwork coordination and communication; 4 items - Information sharing and support), and these were justified with confirmatory factor analysis. Internal consistency was 0.823 for Teamwork coordination and communication, and 0.812 for Information sharing and support. These data provide evidence to support the validity and reliability of the self-assessment teamwork tool for students. This self-assessment tool could be of value to health professional students following team training activities to help them identify the attributes of effective teamwork.
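    Cronbach's alpha, the internal-consistency statistic reported for each factor, is straightforward to compute from an item-score matrix. The five-item responses below are hypothetical (Python):

      import numpy as np

      def cronbach_alpha(item_scores):
          """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
          item_scores = np.asarray(item_scores, float)
          k = item_scores.shape[1]
          item_var = item_scores.var(axis=0, ddof=1).sum()
          total_var = item_scores.sum(axis=1).var(ddof=1)
          return (k / (k - 1.0)) * (1.0 - item_var / total_var)

      # Hypothetical responses to a 5-item "coordination and communication" subscale.
      scores = [[4, 5, 4, 4, 5],
                [3, 3, 4, 3, 3],
                [5, 5, 5, 4, 5],
                [2, 3, 2, 3, 2],
                [4, 4, 5, 4, 4]]
      print(round(cronbach_alpha(scores), 3))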

  7. Validation of an Active Gear, Flexible Aircraft Take-off and Landing analysis (AGFATL)

    NASA Technical Reports Server (NTRS)

    Mcgehee, J. R.

    1984-01-01

    The results of an analytical investigation using a computer program for active gear, flexible aircraft take off and landing analysis (AGFATL) are compared with experimental data from shaker tests, drop tests, and simulated landing tests to validate the AGFATL computer program. Comparison of experimental and analytical responses for both passive and active gears indicates good agreement for shaker tests and drop tests. For the simulated landing tests, the passive and active gears were influenced by large strut binding friction forces. The inclusion of these undefined forces in the analytical simulations was difficult, and consequently only fair to good agreement was obtained. An assessment of the results from the investigation indicates that the AGFATL computer program is a valid tool for the study and initial design of series hydraulic active control landing gear systems.

  8. Modeling the direct sun component in buildings using matrix algebraic approaches: Methods and validation

    DOE PAGES

    Lee, Eleanor S.; Geisler-Moroder, David; Ward, Gregory

    2017-12-23

    Simulation tools that enable annual energy performance analysis of optically-complex fenestration systems have been widely adopted by the building industry for use in building design, code development, and the development of rating and certification programs for commercially-available shading and daylighting products. The tools rely on a three-phase matrix operation to compute solar heat gains, using as input low-resolution bidirectional scattering distribution function (BSDF) data (10–15° angular resolution; BSDF data define the angle-dependent behavior of light-scattering materials and systems). Measurement standards and product libraries for BSDF data are undergoing development to support solar heat gain calculations. Simulation of other metrics such as discomfort glare, annual solar exposure, and potentially thermal discomfort, however, requires algorithms and BSDF input data that more accurately model the spatial distribution of transmitted and reflected irradiance or illuminance from the sun (0.5° resolution). This study describes such algorithms and input data, then validates the tools (i.e., an interpolation tool for measured BSDF data and the five-phase method) through comparisons with ray-tracing simulations and field-monitored data from a full-scale testbed. Simulations of daylight-redirecting films, a micro-louvered screen, and venetian blinds using variable-resolution, tensor-tree BSDF input data derived from interpolated scanning goniophotometer measurements were shown to agree with field-monitored data to within 20% for greater than 75% of the measurement period for illuminance-based performance parameters. The three-phase method delivered significantly less accurate results. We discuss the ramifications of these findings on industry and provide recommendations to increase end-user awareness of the current limitations of existing software tools and BSDF product libraries.
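    The three-phase matrix operation referred to here chains a view matrix, a BSDF transmission matrix, and a daylight matrix with a sky vector. The sketch below only illustrates the shape of that computation with random matrices on a Klems-like 145-patch basis; it is not Radiance or the authors' tool chain, and the values are arbitrary (Python):

      import numpy as np

      # Three-phase method: illuminance = V @ T @ D @ s, where
      #   s : sky vector (luminance of each sky patch, e.g. 145 Tregenza patches)
      #   D : daylight matrix (sky patches -> incident directions on the facade)
      #   T : BSDF transmission matrix of the fenestration system (Klems basis)
      #   V : view matrix (outgoing directions -> illuminance at interior sensors)
      rng = np.random.default_rng(0)
      n_sky, n_klems, n_sensors = 145, 145, 4

      s = rng.uniform(0.0, 1.0e4, n_sky)             # cd/m^2 per sky patch (arbitrary)
      D = rng.uniform(0.0, 1.0e-3, (n_klems, n_sky))
      T = rng.uniform(0.0, 1.0e-2, (n_klems, n_klems))
      V = rng.uniform(0.0, 1.0e-3, (n_sensors, n_klems))

      illuminance = V @ T @ D @ s                    # lux at each interior sensor
      print(illuminance)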

  10. Open-source framework for power system transmission and distribution dynamics co-simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Fan, Rui; Daily, Jeff

    The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open-source co-simulation framework, the “Framework for Network Co-Simulation” (FNCS), together with a decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.

  11. iSAFT Protocol Validation Platform for On-Board Data Networks

    NASA Astrophysics Data System (ADS)

    Tavoularis, Antonis; Kollias, Vangelis; Marinis, Kostas

    2014-08-01

    iSAFT is an integrated, powerful HW/SW environment for the simulation, validation & monitoring of satellite/spacecraft on-board data networks, supporting simultaneously a wide range of protocols (RMAP, PTP, CCSDS Space Packet, TM/TC, CANopen, etc.) and network interfaces (SpaceWire, ECSS MIL-STD-1553, ECSS CAN). It is based on over 20 years of TELETEL's experience in the area of protocol validation in the telecommunications and aeronautical sectors, and it has been fully re-engineered by TELETEL in cooperation with ESA & space Primes to comply with space on-board industrial validation requirements (ECSS, EGSE, AIT, AIV, etc.). iSAFT is highly modular and expandable to support new network interfaces & protocols, and it is based on the powerful iSAFT graphical tool chain (Protocol Analyser / Recorder, TestRunner, Device Simulator, Traffic Generator, etc.).

  12. Validation of the OpCost logging cost model using contractor surveys

    Treesearch

    Conor K. Bell; Robert F. Keefe; Jeremy S. Fried

    2017-01-01

    OpCost is a harvest and fuel treatment operations cost model developed to function as both a standalone tool and an integrated component of the Bioregional Inventory Originated Simulation Under Management (BioSum) analytical framework for landscape-level analysis of forest management alternatives. OpCost is an updated implementation of the Fuel Reduction Cost Simulator...

  13. A cross-validation package driving Netica with python

    USGS Publications Warehouse

    Fienen, Michael N.; Plant, Nathaniel G.

    2014-01-01

    Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross-validation is a technique to avoid overfitting resulting from overly complex BNs; overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented, due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and to read, rebuild, and learn BNs from data. Insights gained from cross-validation, and implications for prediction versus description, are illustrated with a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than allowed by the supporting data, and that overfitting incurs computational costs as well as reducing prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).
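    The cross-validation idea that CVNetica automates for Bayesian networks can be shown generically with a k-fold loop around any emulator. The polynomial emulator, synthetic data, and skill metric below are stand-ins, not Netica or CVNetica calls (Python):

      import numpy as np

      def k_fold_skill(x, y, k=5, degree=3, rng=None):
          """Mean out-of-fold skill (1 - MSE/variance) of a polynomial emulator.
          Raising `degree` beyond what the data support illustrates overfitting:
          calibration improves while cross-validated skill stops improving or drops."""
          rng = np.random.default_rng() if rng is None else rng
          idx = rng.permutation(len(x))
          folds = np.array_split(idx, k)
          skills = []
          for i in range(k):
              test = folds[i]
              train = np.concatenate([folds[j] for j in range(k) if j != i])
              coeffs = np.polyfit(x[train], y[train], degree)
              pred = np.polyval(coeffs, x[test])
              skills.append(1.0 - np.mean((pred - y[test]) ** 2) / np.var(y[test]))
          return float(np.mean(skills))

      # Synthetic data: a noisy sine, emulated by polynomials of growing complexity.
      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 1.0, 60)
      y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.3, x.size)
      for degree in (1, 3, 9):
          print(degree, round(k_fold_skill(x, y, degree=degree, rng=np.random.default_rng(2)), 3))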

  14. Simulation and evaluation of the SH-2F helicopter in a shipboard environment using the interchangeable cab system

    NASA Technical Reports Server (NTRS)

    Paulk, C. H., Jr.; Astill, D. L.; Donley, S. T.

    1983-01-01

    The operation of the SH-2F helicopter from the decks of small ships in adverse weather was simulated using a large amplitude vertical motion simulator, a wide angle computer generated imagery visual system, and an interchangeable cab (ICAB). The simulation facility, the mathematical programs, and the validation method used to ensure simulation fidelity are described. The results show the simulator to be a useful tool in simulating the ship-landing problem. Characteristics of the ICAB system and ways in which the simulation can be improved are presented.

  15. Development of a Twin-spool Turbofan Engine Simulation Using the Toolbox for Modeling and Analysis of Thermodynamic Systems (T-MATS)

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2014-01-01

    The Toolbox for Modeling and Analysis of Thermodynamic Systems (T-MATS) is a tool that has been developed to allow a user to build custom models of systems governed by thermodynamic principles, using a template to model each basic process. Validation of this tool in an engine model application was performed through reconstruction of the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) (v2) using the building blocks from the T-MATS (v1) library. In order to match the two engine models, it was necessary to address differences in several assumptions made in the two modeling approaches. After these modifications were made, validation of the engine model continued by integrating both a steady-state and a dynamic iterative solver with the engine plant and comparing results from steady-state and transient simulation of the T-MATS and C-MAPSS models. The results show that the T-MATS engine model agreed with the C-MAPSS model to within 3 percent, with the inaccuracy attributed to the increased dimension of the iterative solver solution space required by the engine model constructed using the T-MATS library. This demonstrates that, given an understanding of the modeling assumptions made in T-MATS and a baseline model, the T-MATS tool provides a viable option for constructing a computational model of a twin-spool turbofan engine that may be used in simulation studies.

  16. The LEAP™ Gesture Interface Device and Take-Home Laparoscopic Simulators: A Study of Construct and Concurrent Validity.

    PubMed

    Partridge, Roland W; Brown, Fraser S; Brennan, Paul M; Hennessey, Iain A M; Hughes, Mark A

    2016-02-01

    To assess the potential of the LEAP™ infrared motion tracking device to map laparoscopic instrument movement in a simulated environment. Simulator training is optimized when augmented by objective performance feedback. We explore the potential LEAP has to provide this in a way compatible with affordable take-home simulators. LEAP and the previously validated InsTrac visual tracking tool mapped expert and novice performances of a standardized simulated laparoscopic task. Ability to distinguish between the 2 groups (construct validity) and correlation between techniques (concurrent validity) were the primary outcome measures. Forty-three expert and 38 novice performances demonstrated significant differences in LEAP-derived metrics for instrument path distance (P < .001), speed (P = .002), acceleration (P < .001), motion smoothness (P < .001), and distance between the instruments (P = .019). Only instrument path distance demonstrated a correlation between LEAP and InsTrac tracking methods (novices: r = .663, P < .001; experts: r = .536, P < .001). Consistency of LEAP tracking was poor (average % time hands not tracked: 31.9%). The LEAP motion device is able to track the movement of hands using instruments in a laparoscopic box simulator. Construct validity is demonstrated by its ability to distinguish novice from expert performances. Only time and instrument path distance demonstrated concurrent validity with an existing tracking method however. A number of limitations to the tracking method used by LEAP have been identified. These need to be addressed before it can be considered an alternative to visual tracking for the delivery of objective performance metrics in take-home laparoscopic simulators. © The Author(s) 2015.
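    Metrics of the kind derived from LEAP tracking, path distance, mean speed, and a jerk-based smoothness measure, can be computed from sampled 3-D positions as sketched below. The sampling rate, the synthetic trace, and the particular smoothness definition (mean squared jerk) are assumptions for illustration, not the study's exact definitions (Python):

      import numpy as np

      def motion_metrics(positions, dt):
          """Path distance, mean speed, and a jerk-based smoothness metric
          from uniformly sampled 3-D positions (shape: n_samples x 3)."""
          positions = np.asarray(positions, float)
          steps = np.diff(positions, axis=0)
          path_distance = np.linalg.norm(steps, axis=1).sum()
          velocity = steps / dt
          speed = np.linalg.norm(velocity, axis=1)
          accel = np.diff(velocity, axis=0) / dt
          jerk = np.diff(accel, axis=0) / dt
          smoothness = np.mean(np.linalg.norm(jerk, axis=1) ** 2)  # lower = smoother
          return path_distance, speed.mean(), smoothness

      # Hypothetical 2-second trace sampled at 50 Hz: a smooth arc plus hand tremor.
      t = np.linspace(0.0, 2.0, 101)
      rng = np.random.default_rng(3)
      trace = np.column_stack([np.cos(t), np.sin(t), 0.1 * t]) + rng.normal(0, 1e-3, (t.size, 3))
      print(motion_metrics(trace, dt=t[1] - t[0]))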

  17. Teaching and assessing procedural skills using simulation: metrics and methodology.

    PubMed

    Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C

    2008-11-01

    Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.

  18. Simulation of router action on a lathe to test the cutting tool performance in edge-trimming of graphite/epoxy composite

    NASA Astrophysics Data System (ADS)

    Ramulu, M.; Rogers, E.

    1994-04-01

    The predominant machining application for graphite/epoxy composite materials in the aerospace industry is peripheral trimming. The computer numerically controlled (CNC) high-speed routers required to do edge-trimming work are generally scheduled for production work in industry and are not available for extensive cutter testing. Therefore, an experimental method of simulating the conditions of peripheral trimming using a lathe is developed in this paper. The validity of the test technique is demonstrated by conducting carbide tool wear tests under dry cutting conditions, and the experimental results are analyzed to characterize the wear behavior of carbide cutting tools in machining these composite materials.

  19. Comparisons of Kinematics and Dynamics Simulation Software Tools

    NASA Technical Reports Server (NTRS)

    Shiue, Yeu-Sheng Paul

    2002-01-01

    Kinematic and dynamic analyses for moving bodies are essential to system engineers and designers in the process of design and validations. 3D visualization and motion simulation plus finite element analysis (FEA) give engineers a better way to present ideas and results. Marshall Space Flight Center (MSFC) system engineering researchers are currently using IGRIP from DELMIA Inc. as a kinematic simulation tool for discrete bodies motion simulations. Although IGRIP is an excellent tool for kinematic simulation with some dynamic analysis capabilities in robotic control, explorations of other alternatives with more powerful dynamic analysis and FEA capabilities are necessary. Kinematics analysis will only examine the displacement, velocity, and acceleration of the mechanism without considering effects from masses of components. With dynamic analysis and FEA, effects such as the forces or torques at the joint due to mass and inertia of components can be identified. With keen market competition, ALGOR Mechanical Event Simulation (MES), MSC visualNastran 4D, Unigraphics Motion+, and Pro/MECHANICA were chosen for explorations. In this study, comparisons between software tools were presented in terms of following categories: graphical user interface (GUI), import capability, tutorial availability, ease of use, kinematic simulation capability, dynamic simulation capability, FEA capability, graphical output, technical support, and cost. Propulsion Test Article (PTA) with Fastrac engine model exported from IGRIP and an office chair mechanism were used as examples for simulations.

  20. A new approach to road accident rescue.

    PubMed

    Morales, Alejandro; González-Aguilera, Diego; López, Alfonso I; Gutiérrez, Miguel A

    2016-01-01

    This article develops and validates a new methodology and tool for rescue assistance in traffic accidents, with the aim of improving efficiency and safety in the evacuation of people and reducing the number of victims in road accidents. Different tests, supported by professionals and experts, were designed under different circumstances and with different categories of damaged vehicles coming from real accidents, together with simulated trapped victims, in order to calibrate and refine the proposed methodology and tool. To validate this new approach, a tool called App_Rescue has been developed. This tool is based on the use of a computer system that allows efficient access to the technical information of the vehicle and the health information of its usual passengers. The time spent during rescue using the standard protocol and the proposed method was compared. This rescue assistance system makes vital information accessible in posttrauma care services, improving the effectiveness of interventions by the emergency services, reducing the rescue time, and therefore minimizing the consequences involved and the number of victims. This could often mean saving lives. In the different simulated rescue operations, the rescue time was reduced by an average of 14%.

  1. Utility of an emulation and simulation computer model for air revitalization system hardware design, development, and test

    NASA Technical Reports Server (NTRS)

    Yanosy, J. L.; Rowell, L. F.

    1985-01-01

    Efforts to make increasing use of suitable computer programs in hardware design have the potential to reduce expenditures. In this context, NASA has evaluated the benefits provided by software tools through an application to the Environmental Control and Life Support (ECLS) system. The present paper is concerned with the benefits obtained by employing simulation tools in the case of the Air Revitalization System (ARS) of a Space Station life support system. Attention is given to the ARS functions and components, a computer program overview, a SAND (solid amine water desorbed) bed model description, a model validation, and details regarding the simulation benefits.

  2. The feasibility of sharing simulation-based evaluation scenarios in anesthesiology.

    PubMed

    Berkenstadt, Haim; Kantor, Gareth S; Yusim, Yakov; Gafni, Naomi; Perel, Azriel; Ezri, Tiberiu; Ziv, Amitai

    2005-10-01

    We prospectively assessed the feasibility of international sharing of simulation-based evaluation tools despite differences in language, education, and anesthesia practice, in an Israeli study, using validated scenarios from a multi-institutional United States (US) study. Thirty-one Israeli junior anesthesia residents performed four simulation scenarios. Training sessions were videotaped and performance was assessed using two validated scoring systems (Long and Short Forms) by two independent raters. Subjects scored from 37 to 95 (70 +/- 12) of 108 possible points with the "Long Form" and "Short Form" scores ranging from 18 to 35 (28.2 +/- 4.5) of 40 possible points. Scores >70% of the maximal score were achieved by 61% of participants in comparison to only 5% in the original US study. The scenarios were rated as very realistic by 80% of the participants (grade 4 on a 1-4 scale). Reliability of the original assessment tools was demonstrated by internal consistencies of 0.66 for the Long and 0.75 for the Short Form (Cronbach alpha statistic). Values in the original study were 0.72-0.76 for the Long and 0.71-0.75 for the Short Form. The reliability did not change when a revised Israeli version of the scoring was used. Interrater reliability measured by Pearson correlation was 0.91 for the Long and 0.96 for the Short Form (P < 0.01). The high scores for plausibility given to the scenarios and the similar reliability of the original assessment tool support the feasibility of using simulation-based evaluation tools, developed in the US, in Israel. The higher scores achieved by Israeli residents may be related to the fact that most Israeli residents are immigrants with previous training in anesthesia. Simulation-based assessment tools developed in a multi-institutional study in the United States can be used in Israel despite the differences in language, education, and medical system.
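
    The reliability statistics quoted in this record (Cronbach's alpha for internal consistency, Pearson correlation for interrater reliability) can be reproduced with a few lines of code. The sketch below uses hypothetical checklist scores, not the study's data.

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Internal consistency for an (n_subjects x n_items) score matrix."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

    # Hypothetical checklist scores: 6 residents x 4 scenario items (not study data).
    scores = np.array([
        [7, 6, 8, 7],
        [5, 5, 6, 4],
        [9, 8, 9, 8],
        [6, 7, 6, 6],
        [4, 5, 5, 4],
        [8, 7, 8, 9],
    ])
    print(f"Cronbach alpha: {cronbach_alpha(scores):.2f}")

    # Interrater reliability: Pearson correlation between two raters' total scores.
    rater_a = scores.sum(axis=1)
    rater_b = rater_a + np.array([1, -1, 0, 1, 0, -1])   # second rater, small disagreements
    r = np.corrcoef(rater_a, rater_b)[0, 1]
    print(f"Pearson interrater correlation: {r:.2f}")
    ```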

  3. Cross-platform validation and analysis environment for particle physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  4. Simulation Assessment Validation Environment (SAVE). Software User’s Manual

    DTIC Science & Technology

    2000-09-01

    ... requirements and decisions are made. The integration is leveraging work from other DoD organizations so that high-end results are attainable much faster than... planning through the modeling and simulation data capture and visualization process. The planners can complete the manufacturing process plan with a high... technologies. This tool is also used to perform "high level" factory process simulation prior to full CAD model development and help define feasible...

  5. Experimental validation and model development for thermal transmittances of porous window screens and horizontal louvred blind systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Robert; Goudey, Howdy; Curcija, D. Charlie

    Virtually every home in the US has some form of shades, blinds, drapes, or other window attachment, but few have been designed for energy savings. In order to provide a common basis of comparison for thermal performance it is important to have validated simulation tools. This study outlines a review and validation of the ISO 15099 centre-of-glass thermal transmittance correlations for naturally ventilated cavities through measurement and detailed simulations. The focus is on the impacts of room-side ventilated cavities, such as those found with solar screens and horizontal louvred blinds. The thermal transmittance of these systems is measured experimentally, simulated using computational fluid dynamics analysis, and simulated utilizing simplified correlations from ISO 15099. Finally, correlation coefficients are proposed for the ISO 15099 algorithm that reduce the mean error between measured and simulated heat flux from 16% to 3.5% for typical solar screens and from 13% to 1% for horizontal blinds.

  6. Experimental validation and model development for thermal transmittances of porous window screens and horizontal louvred blind systems

    DOE PAGES

    Hart, Robert; Goudey, Howdy; Curcija, D. Charlie

    2017-05-16

    Virtually every home in the US has some form of shades, blinds, drapes, or other window attachment, but few have been designed for energy savings. In order to provide a common basis of comparison for thermal performance it is important to have validated simulation tools. This study outlines a review and validation of the ISO 15099 centre-of-glass thermal transmittance correlations for naturally ventilated cavities through measurement and detailed simulations. The focus is on the impacts of room-side ventilated cavities, such as those found with solar screens and horizontal louvred blinds. The thermal transmittance of these systems is measured experimentally, simulated using computational fluid dynamics analysis, and simulated utilizing simplified correlations from ISO 15099. Finally, correlation coefficients are proposed for the ISO 15099 algorithm that reduce the mean error between measured and simulated heat flux from 16% to 3.5% for typical solar screens and from 13% to 1% for horizontal blinds.

  7. STOCK: Structure mapper and online coarse-graining kit for molecular simulations

    DOE PAGES

    Bevc, Staš; Junghans, Christoph; Praprotnik, Matej

    2015-03-15

    We present STOCK (STructure mapper and Online Coarse-graining Kit), a web toolkit for setting up coarse-grained molecular simulations. The kit consists of two tools: a structure mapping tool and a Boltzmann inversion tool. The aim of the first tool is to define a molecular mapping from high (e.g. all-atom) to low (i.e. coarse-grained) resolution. Using a graphical user interface it generates input files which are compatible with standard coarse-graining packages, e.g. VOTCA and DL_CGMAP. Our second tool generates effective potentials for coarse-grained simulations that preserve the structural properties, e.g. radial distribution functions, of the underlying higher-resolution model. The required distribution functions can be provided by any simulation package. Simulations are performed on a local machine and only the distributions are uploaded to the server. The applicability of the toolkit is validated by mapping atomistic pentane and polyalanine molecules to a coarse-grained representation. Effective potentials are derived for systems of TIP3P (transferable intermolecular potential 3 point) water molecules and a salt solution. The presented coarse-graining web toolkit is available at http://stock.cmm.ki.si.
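
    The Boltzmann inversion step mentioned here has a simple closed form, U(r) = -kB T ln g(r). The sketch below is a generic Python illustration of deriving an effective pair potential from a radial distribution function; it is not the STOCK implementation, and the g(r) used is a toy curve.

    ```python
    import numpy as np

    kB = 0.0083145  # Boltzmann constant in kJ/(mol*K)
    T = 300.0       # temperature in K

    # Hypothetical radial distribution function g(r) from a higher-resolution run.
    r = np.linspace(0.25, 1.5, 126)                     # nm
    g = 1.0 + 0.8 * np.exp(-((r - 0.47) / 0.08) ** 2)   # toy g(r) with one coordination peak

    # Direct Boltzmann inversion: U(r) = -kB*T*ln g(r); clip to avoid log(0).
    g_safe = np.clip(g, 1e-12, None)
    U = -kB * T * np.log(g_safe)

    print(f"potential at the first peak (r = 0.47 nm): {U[np.argmin(np.abs(r - 0.47))]:.3f} kJ/mol")
    ```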

  8. Validation of structural analysis methods using the in-house liner cyclic rigs

    NASA Technical Reports Server (NTRS)

    Thompson, R. L.

    1982-01-01

    Test conditions and variables to be considered in each of the test rigs and test configurations, and also used in the validation of the structural predictive theories and tools, include: thermal and mechanical load histories (simulating an engine mission cycle); different boundary conditions; specimens and components of different dimensions and geometries; different materials; various cooling schemes and cooling hole configurations; several advanced burner liner structural design concepts; and the simulation of hot streaks. Based on these test conditions and test variables, the test matrices for each rig and configuration can be established to verify the predictive tools over as wide a range of test conditions as possible using the simplest possible tests. A flow chart for the thermal/structural analysis of a burner liner, and how the analysis relates to the tests, is shown schematically. The chart shows that several nonlinear constitutive theories are to be evaluated.

  9. Model for Atmospheric Propagation of Spatially Combined Laser Beams

    DTIC Science & Technology

    2016-09-01

    ... thesis modeling tools is discussed. In Chapter 6, the thesis validated the model with analytical computations and simulation results from... using the propagation model. Based on both the analytical computation and WaveTrain results, the diffraction effects simulated in the propagation model are... (Naval Postgraduate School, Monterey, California; thesis: Model for Atmospheric Propagation of Spatially Combined Laser Beams, by Kum Leong Lee)

  10. High-order continuum kinetic method for modeling plasma dynamics in phase space

    DOE PAGES

    Vogman, G. V.; Colella, P.; Shumlak, U.

    2014-12-15

    Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x, vx, vy) is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r, z, vr, vz) phase space are presented.
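
    The conservation-law form highlighted above means that cell averages change only through fluxes across cell faces. As a stand-in illustration (far simpler than the Vlasov-Poisson solver described), the sketch below applies a first-order upwind finite-volume update to 1D linear advection and checks that the total conserved quantity is preserved.

    ```python
    import numpy as np

    # First-order upwind finite-volume update for u_t + a*u_x = 0 on a periodic grid.
    nx, a, cfl = 200, 1.0, 0.5
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    dt = cfl * dx / abs(a)
    u = np.exp(-200.0 * (x - 0.3) ** 2)          # initial cell averages

    for _ in range(200):
        flux = a * u                              # upwind flux at right faces (a > 0)
        u = u - dt / dx * (flux - np.roll(flux, 1))

    # Conservation check: the total "mass" sum(u)*dx is preserved by construction.
    print(f"total mass after 200 steps: {u.sum() * dx:.6f}")
    ```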

  11. The role of virtual reality in surgical training in otorhinolaryngology.

    PubMed

    Fried, Marvin P; Uribe, José I; Sadoughi, Babak

    2007-06-01

    This article reviews the rationale, current status and future directions for the development and implementation of virtual reality surgical simulators as training tools. The complexity of modern surgical techniques, which utilize advanced technology, presents a dilemma for surgical training. Hands-on patient experience - the traditional apprenticeship method for teaching operations - may not apply because of the learning curve for skill acquisition and patient safety expectation. The paranasal sinuses and temporal bone have intricate anatomy with a significant amount of vital structures either within the surgical field or in close proximity. The current standard of surgical care in these areas involves the use of endoscopes, cameras and microscopes, requiring additional hand-eye coordination, an accurate command of fine motor skills, and a thorough knowledge of the anatomy under magnified vision. A surgeon's disorientation or loss of perspective can lead to complications, often catastrophic and occasionally lethal. These considerations define the ideal environment for surgical simulation; not surprisingly, significant research and validation of simulators in these areas have occurred. Virtual reality simulators are demonstrating validity as training and skills assessment tools. Future prototypes will find application for routine use in teaching, surgical planning and the development of new instruments and computer-assisted devices.

  12. Training and Assessment of Hysteroscopic Skills: A Systematic Review.

    PubMed

    Savran, Mona Meral; Sørensen, Stine Maya Dreier; Konge, Lars; Tolsgaard, Martin G; Bjerrum, Flemming

    2016-01-01

    The aim of this systematic review was to identify studies on hysteroscopic training and assessment. PubMed, Excerpta Medica, the Cochrane Library, and Web of Science were searched in January 2015. Manual screening of references and citation tracking were also performed. Studies on hysteroscopic educational interventions were selected without restrictions on study design, populations, language, or publication year. A qualitative data synthesis including the setting, study participants, training model, training characteristics, hysteroscopic skills, assessment parameters, and study outcomes was performed by 2 authors working independently. Effect sizes were calculated when possible. Overall, 2 raters independently evaluated sources of validity evidence supporting the outcomes of the hysteroscopy assessment tools. A total of 25 studies on hysteroscopy training were identified, of which 23 were performed in simulated settings. Overall, 10 studies used virtual-reality simulators and reported effect sizes for technical skills ranging from 0.31 to 2.65; 12 used inanimate models and reported effect sizes for technical skills ranging from 0.35 to 3.19. One study involved live animal models; 2 studies were performed in clinical settings. The validity evidence supporting the assessment tools used was low. Consensus between the 2 raters on the reported validity evidence was high (94%). This systematic review demonstrated large variations in the effect of different tools for hysteroscopy training. The validity evidence supporting the assessment of hysteroscopic skills was limited. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
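
    The effect sizes summarized in this review are standardized mean differences; a minimal sketch of the usual Cohen's d calculation, using hypothetical pre- and post-training scores rather than the reviewed data, is:

    ```python
    import numpy as np

    def cohens_d(group1, group2):
        """Standardized mean difference with a pooled standard deviation."""
        g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
        n1, n2 = len(g1), len(g2)
        pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))
        return (g1.mean() - g2.mean()) / pooled_sd

    # Hypothetical post-training vs pre-training technical-skill scores.
    post = [78, 82, 75, 88, 91, 79, 84]
    pre = [61, 70, 58, 73, 69, 65, 72]
    print(f"Cohen's d: {cohens_d(post, pre):.2f}")
    ```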

  13. Computer Aided Battery Engineering Consortium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pesaran, Ahmad

    A multi-national laboratory collaborative team was assembled, including experts from academia and industry, to enhance the recently developed Computer-Aided Battery Engineering for Electric Drive Vehicles (CAEBAT)-II battery crush modeling tools and to develop microstructure models for electrode design, both computationally efficient. Task 1: The new Multi-Scale Multi-Domain model framework (GH-MSMD) provides 100x to 1,000x computation speed-up in battery electrochemical/thermal simulation while retaining modularity of particles and electrode-, cell-, and pack-level domains. The increased speed enables direct use of the full model in parameter identification. Task 2: Mechanical-electrochemical-thermal (MECT) models for mechanical abuse simulation were coupled, enabling simultaneous modeling of electrochemical reactions during the short circuit, when necessary. The interactions between mechanical failure and battery cell performance were studied, and the flexibility of the model for various battery structures and loading conditions was improved. Model validation is ongoing to compare with test data from Sandia National Laboratories. The ABDT tool was established in ANSYS. Task 3: Microstructural modeling was conducted to enhance next-generation electrode designs. This 3-year project will validate models for a variety of electrodes, complementing Advanced Battery Research programs. Prototype tools have been developed for electrochemical simulation and geometric reconstruction.

  14. In Vitro Simulation and Validation of the Circulation with Congenital Heart Defects

    PubMed Central

    Figliola, Richard S.; Giardini, Alessandro; Conover, Tim; Camp, Tiffany A.; Biglino, Giovanni; Chiulli, John; Hsia, Tain-Yen

    2010-01-01

    Despite the recent advances in computational modeling, experimental simulation of the circulation with congenital heart defect using mock flow circuits remains an important tool for device testing, and for detailing the probable flow consequences resulting from surgical and interventional corrections. Validated mock circuits can be applied to qualify the results from novel computational models. New mathematical tools, coupled with advanced clinical imaging methods, allow for improved assessment of experimental circuit performance relative to human function, as well as the potential for patient-specific adaptation. In this review, we address the development of three in vitro mock circuits specific for studies of congenital heart defects. Performance of an in vitro right heart circulation circuit through a series of verification and validation exercises is described, including correlations with animal studies, and quantifying the effects of circuit inertiance on test results. We present our experience in the design of mock circuits suitable for investigations of the characteristics of the Fontan circulation. We use one such mock circuit to evaluate the accuracy of Doppler predictions in the presence of aortic coarctation. PMID:21218147

  15. Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems

    NASA Astrophysics Data System (ADS)

    Nieciąg, Halina

    2015-10-01

    Software is used to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of a fragment of software that calculates the values of measurands. Because of the number and nature of the variables affecting coordinate measurement results, and the complex, multi-dimensional character of the measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempt to improve the results obtained with classic Monte Carlo tools. The LHS (Latin Hypercube Sampling) algorithm was implemented as an alternative to the simple random sampling scheme of the classic algorithm.
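
    Latin Hypercube Sampling stratifies each input dimension into N equiprobable intervals and draws exactly one sample per interval, which typically reduces the variance of Monte Carlo estimates relative to simple random sampling. The sketch below is a generic illustration with a toy measurand model, not the paper's coordinate-measurement software.

    ```python
    import numpy as np

    def latin_hypercube(n_samples, n_dims, rng=None):
        """Return an (n_samples x n_dims) LHS design on the unit hypercube."""
        rng = np.random.default_rng(rng)
        samples = np.empty((n_samples, n_dims))
        for d in range(n_dims):
            # One point in each of n_samples equiprobable strata, then shuffle the strata.
            edges = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
            samples[:, d] = rng.permutation(edges)
        return samples

    rng = np.random.default_rng(0)
    n = 200
    f = lambda u: u[:, 0] ** 2 + np.sin(2 * np.pi * u[:, 1])   # toy measurand model

    simple = f(rng.random((n, 2))).mean()
    lhs = f(latin_hypercube(n, 2, rng=1)).mean()
    print(f"simple MC estimate: {simple:.4f}   LHS estimate: {lhs:.4f}   exact: {1/3:.4f}")
    ```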

  16. Simulation techniques in hyperthermia treatment planning

    PubMed Central

    Paulides, MM; Stauffer, PR; Neufeld, E; Maccarini, P; Kyriakou, A; Canters, RAM; Diederich, C; Bakker, JF; Van Rhoon, GC

    2013-01-01

    Clinical trials have shown that hyperthermia (HT), i.e. an increase of tissue temperature to 39-44°C, significantly enhances the effectiveness of radiotherapy and chemotherapy (1). Driven by developments in computational techniques and computing power, personalized hyperthermia treatment planning (HTP) has matured and become a powerful tool for optimizing treatment quality. Electromagnetic, ultrasound, and thermal simulations using realistic clinical setups are now being performed to achieve patient-specific treatment optimization. In addition, extensive studies aimed at properly implementing novel HT tools and techniques, and at assessing the quality of HT, are becoming more common. In this paper, we review the simulation tools and techniques developed for clinical hyperthermia and evaluate their current status on the path from “model” to “clinic”. In addition, we illustrate the major techniques employed for validation and optimization. HTP has become an essential tool for the improvement, control, and assessment of HT treatment quality. As such, it plays a pivotal role in the quest to establish HT as an efficacious addition to multi-modality treatment of cancer. PMID:23672453

  17. A coarse-grained model for DNA origami.

    PubMed

    Reshetnikov, Roman V; Stolyarova, Anastasia V; Zalevsky, Arthur O; Panteleev, Dmitry Y; Pavlova, Galina V; Klinov, Dmitry V; Golovin, Andrey V; Protopopova, Anna D

    2018-02-16

    Modeling tools provide a valuable support for DNA origami design. However, current solutions have limited application for conformational analysis of the designs. In this work we present a tool for a thorough study of DNA origami structure and dynamics. The tool is based on a novel coarse-grained model dedicated to geometry optimization and conformational analysis of DNA origami. We explored the ability of the model to predict dynamic behavior, global shapes, and fine details of two single-layer systems designed in hexagonal and square lattices using atomic force microscopy, Förster resonance energy transfer spectroscopy, and all-atom molecular dynamic simulations for validation of the results. We also examined the performance of the model for multilayer systems by simulation of DNA origami with published cryo-electron microscopy and atomic force microscopy structures. A good agreement between the simulated and experimental data makes the model suitable for conformational analysis of DNA origami objects. The tool is available at http://vsb.fbb.msu.ru/cosm as a web-service and as a standalone version.

  18. A coarse-grained model for DNA origami

    PubMed Central

    Stolyarova, Anastasia V; Zalevsky, Arthur O; Panteleev, Dmitry Y; Pavlova, Galina V; Klinov, Dmitry V; Golovin, Andrey V; Protopopova, Anna D

    2018-01-01

    Modeling tools provide a valuable support for DNA origami design. However, current solutions have limited application for conformational analysis of the designs. In this work we present a tool for a thorough study of DNA origami structure and dynamics. The tool is based on a novel coarse-grained model dedicated to geometry optimization and conformational analysis of DNA origami. We explored the ability of the model to predict dynamic behavior, global shapes, and fine details of two single-layer systems designed in hexagonal and square lattices using atomic force microscopy, Förster resonance energy transfer spectroscopy, and all-atom molecular dynamic simulations for validation of the results. We also examined the performance of the model for multilayer systems by simulation of DNA origami with published cryo-electron microscopy and atomic force microscopy structures. A good agreement between the simulated and experimental data makes the model suitable for conformational analysis of DNA origami objects. The tool is available at http://vsb.fbb.msu.ru/cosm as a web-service and as a standalone version. PMID:29267876

  19. Real-Time Performance of Mechatronic PZT Module Using Active Vibration Feedback Control.

    PubMed

    Aggogeri, Francesco; Borboni, Alberto; Merlo, Angelo; Pellegrini, Nicola; Ricatto, Raffaele

    2016-09-25

    This paper proposes an innovative mechatronic piezo-actuated module to control vibrations in modern machine tools. Vibrations represent one of the main issues that seriously compromise the quality of the workpiece. The active vibration control (AVC) device is composed of a host part integrated with sensors and actuators synchronized by a regulator; it is able to make a self-assessment and adjust to alterations in the environment. In particular, an innovative smart actuator has been designed and developed to satisfy machining requirements during active vibration control. This study presents the mechatronic model based on the kinematic and dynamic analysis of the AVC device. To ensure real-time performance, an H2-LQG controller has been developed and validated by simulations involving machine tool, PZT actuator, and controller models. The Hardware in the Loop (HIL) architecture is adopted to control and attenuate the vibrations. A set of experimental tests has been performed to validate the AVC module on a commercial machine tool. The feasibility of real-time vibration damping is demonstrated and the simulation accuracy is evaluated.
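
    As a rough, generic illustration of the state-feedback half of such a regulator (not the paper's H2-LQG design), the sketch below computes an LQR gain for a single-vibration-mode mass-spring-damper stand-in for the machine-tool structure; all parameter values are hypothetical.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Hypothetical single vibration mode: m*x'' + c*x' + k*x = u (actuator control force).
    m, c, k = 1.0, 2.0, 4.0e4
    A = np.array([[0.0, 1.0],
                  [-k / m, -c / m]])
    B = np.array([[0.0],
                  [1.0 / m]])

    # LQR weights: penalize displacement/velocity versus control effort.
    Q = np.diag([1.0e6, 1.0])
    R = np.array([[1.0e-3]])

    P = solve_continuous_are(A, B, Q, R)        # solve the algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)             # optimal state-feedback gain, u = -K x

    print("LQR gain:", K)
    print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
    ```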

  20. Real-Time Performance of Mechatronic PZT Module Using Active Vibration Feedback Control

    PubMed Central

    Aggogeri, Francesco; Borboni, Alberto; Merlo, Angelo; Pellegrini, Nicola; Ricatto, Raffaele

    2016-01-01

    This paper proposes an innovative mechatronic piezo-actuated module to control vibrations in modern machine tools. Vibrations represent one of the main issues that seriously compromise the quality of the workpiece. The active vibration control (AVC) device is composed of a host part integrated with sensors and actuators synchronized by a regulator; it is able to make a self-assessment and adjust to alterations in the environment. In particular, an innovative smart actuator has been designed and developed to satisfy machining requirements during active vibration control. This study presents the mechatronic model based on the kinematic and dynamic analysis of the AVC device. To ensure real-time performance, an H2-LQG controller has been developed and validated by simulations involving machine tool, PZT actuator, and controller models. The Hardware in the Loop (HIL) architecture is adopted to control and attenuate the vibrations. A set of experimental tests has been performed to validate the AVC module on a commercial machine tool. The feasibility of real-time vibration damping is demonstrated and the simulation accuracy is evaluated. PMID:27681732

  1. Integration of SimSET photon history generator in GATE for efficient Monte Carlo simulations of pinhole SPECT.

    PubMed

    Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J S; Tsui, Benjamin M W

    2008-07-01

    The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: simulation system for emission tomography (SimSET) and GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator/detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to approximately 10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data were also observed. In conclusion, the authors have successfully integrated SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques.

  2. An experimental method for the assessment of color simulation tools.

    PubMed

    Lillo, Julio; Alvaro, Leticia; Moreira, Humberto

    2014-07-22

    The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (huv values) that generate a minimum response in the yellow–blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (LR values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and their accuracy levels. Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in huv and LR values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for huv and LR values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided expected huv and LR values when performing the two psychophysical tasks included in this method. © 2014 ARVO.

  3. A real-time, dual processor simulation of the rotor system research aircraft

    NASA Technical Reports Server (NTRS)

    Mackie, D. B.; Alderete, T. S.

    1977-01-01

    A real-time, man-in-the-loop simulation of the rotor system research aircraft (RSRA) was conducted. The unique feature of this simulation was that two digital computers were used in parallel to solve the equations of the RSRA mathematical model. The design, development, and implementation of the simulation are documented. Program validation is discussed, and examples of data recordings are given. This simulation provided an important research tool for the RSRA project in terms of safe and cost-effective design analysis. In addition, valuable knowledge concerning parallel processing and a powerful simulation hardware and software system was gained.

  4. Face and construct validation of a next generation virtual reality (Gen2-VR) surgical simulator.

    PubMed

    Sankaranarayanan, Ganesh; Li, Baichun; Manser, Kelly; Jones, Stephanie B; Jones, Daniel B; Schwaitzberg, Steven; Cao, Caroline G L; De, Suvranu

    2016-03-01

    Surgical performance is affected by distractors and interruptions to surgical workflow that exist in the operating room. However, traditional surgical simulators are used to train surgeons in a skills laboratory that does not recreate these conditions. To overcome this limitation, we have developed a novel, immersive virtual reality (Gen2-VR) system to train surgeons in these environments. This study was to establish face and construct validity of our system. The study was a within-subjects design, with subjects repeating a virtual peg transfer task under three different conditions: Case I: traditional VR; Case II: Gen2-VR with no distractions and Case III: Gen2-VR with distractions and interruptions. In Case III, to simulate the effects of distractions and interruptions, music was played intermittently, the camera lens was fogged for 10 s and tools malfunctioned for 15 s at random points in time during the simulation. At the completion of the study subjects filled in a 5-point Likert scale feedback questionnaire. A total of sixteen subjects participated in this study. Friedman test showed significant difference in scores between the three conditions (p < 0.0001). Post hoc analysis using Wilcoxon signed-rank tests with Bonferroni correction further showed that all the three conditions were significantly different from each other (Case I, Case II, p < 0.0001), (Case I, Case III, p < 0.0001) and (Case II, Case III, p = 0.009). Subjects rated that fog (mean 4.18) and tool malfunction (median 4.56) significantly hindered their performance. The results showed that Gen2-VR simulator has both face and construct validity and that it can accurately and realistically present distractions and interruptions in a simulated OR, in spite of limitations of the current HMD hardware technology.
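
    The nonparametric workflow reported here, a Friedman test across the three conditions followed by Bonferroni-corrected Wilcoxon signed-rank tests, can be reproduced with standard SciPy calls. The sketch below uses fabricated scores purely to show the procedure, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import friedmanchisquare, wilcoxon

    rng = np.random.default_rng(42)
    n = 16  # within-subjects design: each subject performs all three conditions

    # Hypothetical performance scores (higher = better), not the study's data.
    case1 = rng.normal(90, 5, n)          # traditional VR
    case2 = case1 - rng.normal(5, 2, n)   # Gen2-VR, no distractions
    case3 = case2 - rng.normal(6, 2, n)   # Gen2-VR with distractions

    stat, p = friedmanchisquare(case1, case2, case3)
    print(f"Friedman chi-square = {stat:.2f}, p = {p:.4g}")

    # Post hoc pairwise Wilcoxon signed-rank tests with Bonferroni correction (3 comparisons).
    pairs = [("I vs II", case1, case2), ("I vs III", case1, case3), ("II vs III", case2, case3)]
    for name, a, b in pairs:
        w, p_raw = wilcoxon(a, b)
        print(f"{name}: W = {w:.1f}, Bonferroni-corrected p = {min(1.0, 3 * p_raw):.4g}")
    ```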

  5. Face and Construct Validation of a Next Generation Virtual Reality (Gen2-VR©) Surgical Simulator

    PubMed Central

    Sankaranarayanan, Ganesh; Li, Baichun; Manser, Kelly; Jones, Stephanie B.; Jones, Daniel B.; Schwaitzberg, Steven; Cao, Caroline G. L.; De, Suvranu

    2015-01-01

    Introduction Surgical performance is affected by distractors and interruptions to surgical workflow that exist in the operating room. However, traditional surgical simulators are used to train surgeons in a skills lab that does not recreate these conditions. To overcome this limitation, we have developed a novel, immersive virtual reality (Gen2-VR©) system to train surgeons in these environments. This study was to establish face and construct validity of our system. Methods and Procedures The study was a within-subjects design, with subjects repeating a virtual peg transfer task under three different conditions: CASE I: traditional VR; CASE II: Gen2-VR© with no distractions and CASE III: Gen2-VR© with distractions and interruptions. In Case III, to simulate the effects of distractions and interruptions, music was played intermittently, the camera lens was fogged for 10 seconds and tools malfunctioned for 15 seconds at random points in time during the simulation. At the completion of the study subjects filled in a 5-point Likert scale feedback questionnaire. A total of sixteen subjects participated in this study. Results Friedman test showed significant difference in scores between the three conditions (p < 0.0001). Post hoc analysis using Wilcoxon Signed Rank tests with Bonferroni correction further showed that all the three conditions were significantly different from each other (Case I, Case II, p < 0.001), (Case I, Case III, p < 0.001) and (Case II, Case III, p = 0.009). Subjects rated that fog (mean = 4.18) and tool malfunction (median = 4.56) significantly hindered their performance. Conclusion The results showed that Gen2-VR© simulator has both face and construct validity and it can accurately and realistically present distractions and interruptions in a simulated OR, in spite of limitations of the current HMD hardware technology. PMID:26092010

  6. On Designing Multicore-Aware Simulators for Systems Biology Endowed with OnLine Statistics

    PubMed Central

    Calcagno, Cristina; Coppo, Mario

    2014-01-01

    This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool aimed at supporting bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool that performs the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, which turn into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and for the effectiveness of the online analysis in capturing the behavior of biological systems, on a multicore platform and on representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems, exhibiting multistable and oscillatory behavior, are used as a testbed. PMID:25050327
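
    The pipelined simulation-analysis idea described here, in which trajectories are streamed out as they are produced and reduced immediately by an online statistics stage, can be sketched with a generator feeding a Welford-style running mean and variance. This is a conceptual Python illustration, not the FastFlow implementation.

    ```python
    import random

    def simulate_trajectories(n_traj, n_steps, seed=0):
        """Stage 1: stream out the endpoint of each stochastic trajectory as it finishes."""
        rng = random.Random(seed)
        for _ in range(n_traj):
            x = 0.0
            for _ in range(n_steps):
                x += rng.gauss(0.0, 1.0)       # toy stochastic dynamics
            yield x                            # handed to the analysis stage immediately

    def online_stats(stream):
        """Stage 2: Welford's online mean/variance over the incoming stream."""
        count, mean, m2 = 0, 0.0, 0.0
        for value in stream:
            count += 1
            delta = value - mean
            mean += delta / count
            m2 += delta * (value - mean)
        variance = m2 / (count - 1) if count > 1 else float("nan")
        return count, mean, variance

    n, mean, var = online_stats(simulate_trajectories(n_traj=10000, n_steps=100))
    print(f"{n} trajectories: mean = {mean:.3f}, variance = {var:.1f}")
    ```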

  7. On designing multicore-aware simulators for systems biology endowed with OnLine statistics.

    PubMed

    Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo

    2014-01-01

    This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool aimed at supporting bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool that performs the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, which turn into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and for the effectiveness of the online analysis in capturing the behavior of biological systems, on a multicore platform and on representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems, exhibiting multistable and oscillatory behavior, are used as a testbed.

  8. Monte Carlo simulation of Ray-Scan 64 PET system and performance evaluation using GATE toolkit

    NASA Astrophysics Data System (ADS)

    Li, Suying; Zhang, Qiushi; Vuletic, Ivan; Xie, Zhaoheng; Yang, Kun; Ren, Qiushi

    2017-02-01

    In this study, we aimed to develop a GATE model for the simulation of the Ray-Scan 64 PET scanner and to model its performance characteristics. A detailed implementation of the system geometry and physical processes was included in the simulation model. We then modeled the performance characteristics of the Ray-Scan 64 PET system for the first time, based on the National Electrical Manufacturers Association (NEMA) NU-2 2007 protocols, and validated the model against experimental measurements, including spatial resolution, sensitivity, counting rates, and noise equivalent count rate (NECR). Moreover, an accurate dead time module was investigated to simulate the counting rate performance. Overall, the results showed reasonable agreement between simulation and experimental data. The validation results showed the reliability and feasibility of the GATE model for evaluating the major performance characteristics of the Ray-Scan 64 PET system. It provides a useful tool for a wide range of research applications.
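
    The noise equivalent count rate mentioned above is commonly computed as NECR = T^2 / (T + S + kR), with trues T, scatters S, randoms R, and k depending on the randoms-correction method. A minimal sketch with hypothetical count rates (not measured Ray-Scan 64 values) is:

    ```python
    def necr(trues, scatters, randoms, k=2):
        """Noise equivalent count rate, NECR = T^2 / (T + S + k*R).

        k is commonly taken as 2 when randoms are estimated with a delayed
        coincidence window and 1 for a low-noise randoms estimate; check the
        convention used by the scanner and protocol at hand.
        """
        return trues ** 2 / (trues + scatters + k * randoms)

    # Hypothetical count rates in kcps.
    print(f"NECR = {necr(trues=120.0, scatters=45.0, randoms=30.0):.1f} kcps")
    ```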

  9. A typology of educationally focused medical simulation tools.

    PubMed

    Alinier, Guillaume

    2007-10-01

    The concept of simulation as an educational tool in healthcare is not a new idea but its use has really blossomed over the last few years. This enthusiasm is partly driven by an attempt to increase patient safety and also because the technology is becoming more affordable and advanced. Simulation is becoming more commonly used for initial training purposes as well as for continuing professional development, but people often have very different perceptions of the definition of the term simulation, especially in an educational context. This highlights the need for a clear classification of the technology available but also about the method and teaching approach employed. The aims of this paper are to discuss the current range of simulation approaches and propose a clear typology of simulation teaching aids. Commonly used simulation techniques have been identified and discussed in order to create a classification that reports simulation techniques, their usual mode of delivery, the skills they can address, the facilities required, their typical use, and their pros and cons. This paper presents a clear classification scheme of educational simulation tools and techniques with six different technological levels. They are respectively: written simulations, three-dimensional models, screen-based simulators, standardized patients, intermediate fidelity patient simulators, and interactive patient simulators. This typology allows the accurate description of the simulation technology and the teaching methods applied. Thus valid comparison of educational tools can be made as to their potential effectiveness and verisimilitude at different training stages. The proposed typology of simulation methodologies available for educational purposes provides a helpful guide for educators and participants which should help them to realise the potential learning outcomes at different technological simulation levels in relation to the training approach employed. It should also be a useful resource for simulation users who are trying to improve their educational practice.

  10. Evaluation of a virtual-reality-based simulator using passive haptic feedback for knee arthroscopy.

    PubMed

    Fucentese, Sandro F; Rahm, Stefan; Wieser, Karl; Spillmann, Jonas; Harders, Matthias; Koch, Peter P

    2015-04-01

    The aim of this work is to determine face validity and construct validity of a new virtual-reality-based simulator for diagnostic and therapeutic knee arthroscopy. The study tests a novel arthroscopic simulator based on passive haptics. Sixty-eight participants were grouped into novices, intermediates, and experts. All participants completed two exercises. In order to establish face validity, all participants filled out a questionnaire concerning different aspects of simulator realism, training capacity, and different statements using a seven-point Likert scale (range 1-7). Construct validity was tested by comparing various simulator metric values between novices and experts. Face validity could be established: overall realism was rated with a mean value of 5.5 points. Global training capacity scored a mean value of 5.9. Participants considered the simulator as useful for procedural training of diagnostic and therapeutic arthroscopy. In the foreign body removal exercise, experts were overall significantly faster in the whole procedure (6 min 24 s vs. 8 min 24 s, p < 0.001), took less time to complete the diagnostic tour (2 min 49 s vs. 3 min 32 s, p = 0.027), and had a shorter camera path length (186 vs. 246 cm, p = 0.006). The simulator achieved high scores in terms of realism. It was regarded as a useful training tool, which is also capable of differentiating between varying levels of arthroscopic experience. Nevertheless, further improvements of the simulator especially in the field of therapeutic arthroscopy are desirable. In general, the findings support that virtual-reality-based simulation using passive haptics has the potential to complement conventional training of knee arthroscopy skills. II.

  11. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena within the framework of a model, in order to simulate the real phenomena, is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge with ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Therefore, finding appropriate validation techniques for ABM is necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  12. Measuring the Process and Quality of Informed Consent for Clinical Research: Development and Testing

    PubMed Central

    Cohn, Elizabeth Gross; Jia, Haomiao; Smith, Winifred Chapman; Erwin, Katherine; Larson, Elaine L.

    2013-01-01

    Purpose/Objectives To develop and assess the reliability and validity of an observational instrument, the Process and Quality of Informed Consent (P-QIC). Design A pilot study of the psychometrics of a tool designed to measure the quality and process of the informed consent encounter in clinical research. The study used professionally filmed, simulated consent encounters designed to vary in process and quality. Setting A major urban teaching hospital in the northeastern region of the United States. Sample 63 students enrolled in health-related programs participated in psychometric testing, 16 students participated in test-retest reliability, and 5 investigator-participant dyads were observed for the actual consent encounters. Methods For reliability and validity testing, students watched and rated videotaped simulations of four consent encounters intentionally varied in process and content and rated them with the proposed instrument. Test-retest reliability was established by raters watching the videotaped simulations twice. Inter-rater reliability was demonstrated by two simultaneous but independent raters observing an actual consent encounter. Main Research Variables The essential elements of information and communication for informed consent. Findings The initial testing of the P-QIC demonstrated reliable and valid psychometric properties in both the simulated standardized consent encounters and actual consent encounters in the hospital setting. Conclusions The P-QIC is an easy-to-use observational tool that provides a quick assessment of the areas of strength and areas that need improvement in a consent encounter. It can be used in the initial trainings of new investigators or consent administrators and in ongoing programs of improvement for informed consent. Implications for Nursing The development of a validated observational instrument will allow investigators to assess the consent process more accurately and evaluate strategies designed to improve it. PMID:21708532

  13. Recent advancements in medical simulation: patient-specific virtual reality simulation.

    PubMed

    Willaert, Willem I M; Aggarwal, Rajesh; Van Herzeele, Isabelle; Cheshire, Nicholas J; Vermassen, Frank E

    2012-07-01

    Patient-specific virtual reality simulation (PSVR) is a new technological advancement that allows practice of upcoming real operations and complements the established role of VR simulation as a generic training tool. This review describes current developments in PSVR and draws parallels with other high-stake industries, such as aviation, military, and sports. A review of the literature was performed using PubMed and Internet search engines to retrieve data relevant to PSVR in medicine. All reports pertaining to PSVR were included. Reports on simulators that did not incorporate a haptic interface device were excluded from the review. Fifteen reports described 12 simulators that enabled PSVR. Medical procedures in the field of laparoscopy, vascular surgery, orthopedics, neurosurgery, and plastic surgery were included. In all cases, source data was two-dimensional CT or MRI data. Face validity was most commonly reported. Only one (vascular) simulator had undergone face, content, and construct validity. Of the 12 simulators, 1 is commercialized and 11 are prototypes. Five simulators have been used in conjunction with real patient procedures. PSVR is a promising technological advance within medicine. The majority of simulators are still in the prototype phase. As further developments unfold, the validity of PSVR will have to be examined much like generic VR simulation for training purposes. Nonetheless, similar to the aviation, military, and sport industries, operative performance and patient safety may be enhanced by the application of this novel technology.

  14. A Validation Study of Merging and Spacing Techniques in a NAS-Wide Simulation

    NASA Technical Reports Server (NTRS)

    Glaab, Patricia C.

    2011-01-01

    In November 2010, Intelligent Automation, Inc. (IAI) delivered an M&S software tool that allows system-level studies of the complex terminal airspace with the ACES simulation. The software was evaluated against current-day arrivals in the Atlanta TRACON using arrival schedules for Atlanta's Hartsfield-Jackson International Airport (KATL). Results of this validation effort are presented, describing the data sets, traffic flow assumptions and techniques, and arrival rate comparisons between reported landings at Atlanta and simulated arrivals using the same traffic sets in ACES equipped with M&S. Initial results showed the simulated system capacity to be significantly below the arrival capacity seen at KATL. Data were gathered for Atlanta using commercial airport and flight tracking websites (such as FlightAware.com) and analyzed to ensure compatible techniques were used for result reporting and comparison. TFM operators for Atlanta were consulted for tuning final simulation parameters and for guidance in flow management techniques during high-volume operations. Using these modified parameters and incorporating TFM guidance for efficiencies in flowing aircraft, the arrival capacity for KATL was matched by the simulation. Following this validation effort, a sensitivity study was conducted to measure the impact of variations in system parameters on the Atlanta airport arrival capacity.

  15. SimulaTE: simulating complex landscapes of transposable elements of populations.

    PubMed

    Kofler, Robert

    2018-04-15

    Estimating the abundance of transposable elements (TEs) in populations (or tissues) promises to answer many open research questions. However, progress is hampered by the lack of concordance between different approaches for TE identification and thus potentially unreliable results. To address this problem, we developed SimulaTE, a tool that generates TE landscapes for populations using a newly developed domain-specific language (DSL). The simple syntax of our DSL allows for easily building even complex TE landscapes that have, for example, nested, truncated, and highly diverged TE insertions. Reads may be simulated for the populations using different sequencing technologies (PacBio, Illumina paired-ends) and strategies (sequencing individuals and pooled populations). The comparison between the expected (i.e. simulated) and the observed results will guide researchers in finding the most suitable approach for a particular research question. SimulaTE is implemented in Python and available at https://sourceforge.net/projects/simulates/. Manual: https://sourceforge.net/p/simulates/wiki/Home/#manual; Test data and tutorials: https://sourceforge.net/p/simulates/wiki/Home/#walkthrough; Validation: https://sourceforge.net/p/simulates/wiki/Home/#validation. robert.kofler@vetmeduni.ac.at.

  16. Spacecraft Multiple Array Communication System Performance Analysis

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Desilva, Kanishka; Sham, Catherine C.

    2010-01-01

    The Communication Systems Simulation Laboratory (CSSL) at the NASA Johnson Space Center is tasked to perform spacecraft and ground network communication system simulations, design validation, and performance verification. The CSSL has developed simulation tools that model spacecraft communication systems and the space and ground environments in which they operate. In this paper, a spacecraft communication system with multiple arrays is simulated. A multiple-array combining technique is used to increase the radio frequency coverage and data rate performance. The technique achieves phase coherence among the phased arrays so that the signals combine constructively at the target receiver. There are many technical challenges in integrating a high-transmit-power communication system on a spacecraft. The array combining technique can improve the communication system data rate and coverage performance without increasing the system transmit power requirements. Example simulation results indicate that significant performance improvement can be achieved with phase coherence implementation.
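
    The benefit of phase-coherent combining can be checked numerically: if N array signals of equal amplitude are co-phased, their amplitudes add (signal power grows as N squared) while independent receiver noise powers add only as N, for an SNR gain of roughly 10 log10(N) dB. The sketch below is a generic illustration, not the CSSL simulation tool.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_arrays, n_samples = 4, 200_000

    # Independent unit-variance receiver noise for each array; the co-phased
    # signal is taken as unit amplitude at every array.
    noise = rng.normal(0.0, 1.0, (n_arrays, n_samples))

    snr_single = 1.0 / np.var(noise[0])                       # signal power 1, noise power ~1
    snr_combined = n_arrays ** 2 / np.var(noise.sum(axis=0))  # amplitudes add: power N^2; noise power ~N

    print(f"single-array SNR: {10*np.log10(snr_single):.1f} dB")
    print(f"combined SNR:     {10*np.log10(snr_combined):.1f} dB "
          f"(theory: ~{10*np.log10(n_arrays):.1f} dB gain)")
    ```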

  17. Development of a Twin-Spool Turbofan Engine Simulation Using the Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS)

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2014-01-01

    The Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS) is a tool that has been developed to allow a user to build custom models of systems governed by thermodynamic principles using a template to model each basic process. Validation of this tool in an engine model application was performed through reconstruction of the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) (v2) using the building blocks from the T-MATS (v1) library. In order to match the two engine models, it was necessary to address differences in several assumptions made in the two modeling approaches. After these modifications were made, validation of the engine model continued by integrating both a steady-state and dynamic iterative solver with the engine plant and comparing results from steady-state and transient simulation of the T-MATS and C-MAPSS models. The results show that the T-MATS engine model was accurate within 3% of the C-MAPSS model, with inaccuracy attributed to the increased dimension of the iterative solver solution space required by the engine model constructed using the T-MATS library. This demonstrates that, given an understanding of the modeling assumptions made in T-MATS and a baseline model, the T-MATS tool provides a viable option for constructing a computational model of a twin-spool turbofan engine that may be used in simulation studies.

  18. Implementing Nonlinear Buoyancy and Excitation Forces in the WEC-Sim Wave Energy Converter Modeling Tool: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawson, M.; Yu, Y. H.; Nelessen, A.

    2014-05-01

    Wave energy converters (WECs) are commonly designed and analyzed using numerical models that combine multi-body dynamics with hydrodynamic models based on the Cummins Equation and linearized hydrodynamic coefficients. These modeling methods are attractive design tools because they are computationally inexpensive and do not require the use of high performance computing resources necessitated by high-fidelity methods, such as Navier Stokes computational fluid dynamics. Modeling hydrodynamics using linear coefficients assumes that the device undergoes small motions and that the wetted surface area of the devices is approximately constant. WEC devices, however, are typically designed to undergo large motions in order to maximize power extraction, calling into question the validity of assuming that linear hydrodynamic models accurately capture the relevant fluid-structure interactions. In this paper, we study how calculating buoyancy and Froude-Krylov forces from the instantaneous position of a WEC device (referred to as instantaneous buoyancy and Froude-Krylov forces from herein) changes WEC simulation results compared to simulations that use linear hydrodynamic coefficients. First, we describe the WEC-Sim tool used to perform simulations and how the ability to model instantaneous forces was incorporated into WEC-Sim. We then use a simplified one-body WEC device to validate the model and to demonstrate how accounting for these instantaneously calculated forces affects the accuracy of simulation results, such as device motions, hydrodynamic forces, and power generation.
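
    The difference between linearized hydrostatics and instantaneous buoyancy is easy to see for a sphere floating half submerged: the linear model keeps the restoring force proportional to heave, while the exact submerged-volume calculation adds a cubic correction that grows with motion amplitude. The sketch below is an illustrative comparison under that assumption, not WEC-Sim code.

    ```python
    import numpy as np

    rho, g, R = 1025.0, 9.81, 2.0               # seawater density, gravity, sphere radius [m]
    Awp = np.pi * R ** 2                        # waterplane area at the half-submerged equilibrium
    mass = rho * (2.0 / 3.0) * np.pi * R ** 3   # mass giving half-submerged equilibrium

    def submerged_volume(h):
        """Spherical-cap volume for submergence depth h, clipped to [0, 2R]."""
        h = np.clip(h, 0.0, 2.0 * R)
        return np.pi * h ** 2 * (3.0 * R - h) / 3.0

    z = np.linspace(-1.5, 1.5, 7)               # heave displacement (positive up) [m]
    h = R - z                                   # instantaneous submergence depth

    f_nonlinear = rho * g * submerged_volume(h) - mass * g   # instantaneous buoyancy minus weight
    f_linear = -rho * g * Awp * z                            # linearized hydrostatic restoring force

    for zi, fn, fl in zip(z, f_nonlinear, f_linear):
        print(f"z = {zi:+.2f} m  nonlinear = {fn/1e3:+9.1f} kN  linear = {fl/1e3:+9.1f} kN")
    ```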

  19. Six degree of freedom simulation system for evaluating automated rendezvous and docking spacecraft

    NASA Technical Reports Server (NTRS)

    Rourke, Kenneth H.; Tsugawa, Roy K.

    1991-01-01

    Future logistics supply and servicing vehicles such as cargo transfer vehicles (CTV) must have full 6 degree of freedom (6DOF) capability in order to perform requisite rendezvous, proximity operations, and capture operations. The design and performance issues encountered when developing a 6DOF maneuvering spacecraft are very complex with subtle interactions which are not immediately obvious or easily anticipated. In order to deal with these complexities and develop robust maneuvering spacecraft designs, a simulation system and associated family of tools are used at TRW for generating and validating spacecraft performance requirements and guidance algorithms. An overview of the simulator and tools is provided. These are used by TRW for autonomous rendezvous and docking research projects including CTV studies.

  20. Performance modeling & simulation of complex systems (A systems engineering design & analysis approach)

    NASA Technical Reports Server (NTRS)

    Hall, Laverne

    1995-01-01

    Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.

  1. [Animal experimentation, computer simulation and surgical research].

    PubMed

    Carpentier, Alain

    2009-11-01

    We live in a digital world. In medicine, computers are providing new tools for data collection, imaging, and treatment. During research and development of complex technologies and devices such as artificial hearts, computer simulation can provide more reliable information than experimentation on large animals. In these specific settings, animal experimentation should serve more to validate computer models of complex devices than to demonstrate their reliability.

  2. Simulation of an Asynchronous Machine by using a Pseudo Bond Graph

    NASA Astrophysics Data System (ADS)

    Romero, Gregorio; Felez, Jesus; Maroto, Joaquin; Martinez, M. Luisa

    2008-11-01

    For engineers, computer simulation is a basic tool, since it enables them to understand how systems work without actually needing to see them. They can learn how systems behave in different circumstances and optimize their design with considerably less cost in terms of time and money than if they had to carry out tests on a physical system. However, if computer simulation is to be reliable, it is essential for the simulation model to be validated. There is a wide range of commercial brands on the market offering products for electrical domain simulation (SPICE, LabVIEW, PSCAD, Dymola, Simulink, Simplorer, ...). These are powerful tools, but they require the engineer to have thorough knowledge of the electrical domain. This paper presents an alternative methodology for simulating an asynchronous machine using the multidomain Bond Graph technique, which can be applied in any program that permits the simulation of models based on this technique; no extraordinary knowledge of the technique or of the electrical domain is required to understand the process.

  3. Cost analysis of objective resident cataract surgery assessments.

    PubMed

    Nandigam, Kiran; Soh, Jonathan; Gensheimer, William G; Ghazi, Ahmed; Khalifa, Yousuf M

    2015-05-01

    To compare 8 ophthalmology resident surgical training tools to determine which is most cost effective. University of Rochester Medical Center, Rochester, New York, USA. Retrospective evaluation of technology. A cost-analysis model was created to compile all relevant costs of running each tool in a medium-sized ophthalmology program. Quantitative cost estimates were obtained based on the cost of the tools, the cost of time spent in evaluations, and supply and maintenance costs. For wet laboratory simulation, Eyesi was the least expensive cataract surgery simulation method; however, it is only capable of evaluating simulated cataract surgery rehearsal and requires supplementation with other evaluative methods for operating room performance and for noncataract wet lab training and evaluation. The most expensive training tool was the Eye Surgical Skills Assessment Test (ESSAT). The 2 most affordable methods for resident evaluation in operating room performance were the Objective Assessment of Skills in Intraocular Surgery (OASIS) and the Global Rating Assessment of Skills in Intraocular Surgery (GRASIS). Cost-based analyses of ophthalmology resident surgical training tools are needed so residency programs can implement tools that are valid, reliable, objective, and cost effective. There is no perfect training system at this time. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  4. Validation of an explanatory tool for data-fused displays for high-technology future aircraft

    NASA Astrophysics Data System (ADS)

    Fletcher, Georgina C. L.; Shanks, Craig R.; Selcon, Stephen J.

    1996-05-01

    As the number of sensor and data sources in the military cockpit increases, pilots will suffer high levels of workload which could result in reduced performance and the loss of situational awareness. A DRA research program has been investigating the use of data-fused displays in decision support and has developed and laboratory-tested an explanatory tool for displaying information in air combat scenarios. The tool has been designed to provide pictorial explanations of data that maintain situational awareness by involving the pilot in the hostile aircraft threat assessment task. This paper reports a study carried out to validate the success of the explanatory tool in a realistic flight simulation facility. Aircrew were asked to perform a threat assessment task, either with or without the explanatory tool providing information in the form of missile launch success zone envelopes, while concurrently flying a waypoint course within set flight parameters. The results showed that there was a significant improvement (p less than 0.01) in threat assessment accuracy of 30% when using the explanatory tool. This threat assessment performance advantage was achieved without a trade-off with flying task performance. Situational awareness measures showed no general differences between the explanatory and control conditions, but significant learning effects suggested that the explanatory tool makes the task initially more intuitive and hence less demanding on the pilots' attentional resources. The paper concludes that DRA's data-fused explanatory tool is successful at improving threat assessment accuracy in a realistic simulated flying environment, and briefly discusses the requirements for further research in the area.

  5. Comparative assessment of three standardized robotic surgery training methods.

    PubMed

    Hung, Andrew J; Jayaratna, Isuru S; Teruya, Kara; Desai, Mihir M; Gill, Inderbir S; Goh, Alvin C

    2013-10-01

    To evaluate three standardized robotic surgery training methods, inanimate, virtual reality and in vivo, for their construct validity. To explore the concept of cross-method validity, where the relative performance of each method is compared. Robotic surgical skills were prospectively assessed in 49 participating surgeons who were classified as follows: 'novice/trainee': urology residents, previous experience <30 cases (n = 38) and 'experts': faculty surgeons, previous experience ≥30 cases (n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity). Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = -0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001). We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool. © 2013 BJU International.
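    A minimal sketch of the two statistical checks named above, using synthetic scores rather than the study data: a Kruskal-Wallis test for construct validity (novices versus experts) and Spearman's rho for cross-method agreement between two training methods. Group sizes mirror the abstract; the score distributions are invented.

```python
import numpy as np
from scipy.stats import kruskal, spearmanr

rng = np.random.default_rng(1)

# Hypothetical GEARS-style scores (higher = better) for illustration only.
novice_scores = rng.normal(15, 3, 38)
expert_scores = rng.normal(22, 2, 11)

# Construct validity: do experts score differently from novices?
h_stat, p_construct = kruskal(novice_scores, expert_scores)

# Cross-method validity: do rankings agree across two training methods?
inanimate = np.concatenate([novice_scores, expert_scores])
in_vivo = inanimate + rng.normal(0, 2, inanimate.size)   # correlated second method
rho, p_cross = spearmanr(inanimate, in_vivo)

print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_construct:.4f}")
print(f"Spearman rho = {rho:.2f}, p = {p_cross:.4f}")
```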

  6. Team performance in resuscitation teams: Comparison and critique of two recently developed scoring tools

    PubMed Central

    McKay, Anthony; Walker, Susanna T.; Brett, Stephen J.; Vincent, Charles; Sevdalis, Nick

    2012-01-01

    Background and aim Following high profile errors resulting in patient harm and attracting negative publicity, the healthcare sector has begun to focus on training non-technical teamworking skills as one way of reducing the rate of adverse events. Within the area of resuscitation, two tools have been developed recently aiming to assess these skills – TEAM and OSCAR. The aims of the study reported here were: (1) to determine the inter-rater reliability of the tools in assessing performance within the context of resuscitation; (2) to correlate scores for the same resuscitation team episodes using both tools, thereby determining their concurrent validity within the context of resuscitation; and (3) to carry out a critique of both tools and establish how best each one may be utilised. Methods The study consisted of two phases: reliability assessment, and content comparison and correlation. Assessments were made by two resuscitation experts, who watched 24 pre-recorded resuscitation simulations and independently rated team behaviours using both tools. The tools were critically appraised, and the correlation between overall score surrogates was assessed. Results Both OSCAR and TEAM achieved high levels of inter-rater reliability (in the form of adequate intra-class coefficients) with only minor significant differences on Wilcoxon tests. Comparison of the scores from both tools demonstrated a high degree of correlation (and hence concurrent validity). Finally, critique of each tool highlighted differences in length and complexity. Conclusion Both OSCAR and TEAM can be used to assess resuscitation teams in a simulated environment, with the tools correlating well with one another. We envisage a role for both tools – with TEAM giving a quick, global assessment of the team, and OSCAR enabling a more detailed breakdown of the assessment, facilitating feedback, and identifying areas of weakness for future training. PMID:22561464

  7. High fidelity, low cost moulage as a valid simulation tool to improve burns education.

    PubMed

    Pywell, M J; Evgeniou, E; Highway, K; Pitt, E; Estela, C M

    2016-06-01

    Simulation allows the opportunity for repeated practice in controlled, safe conditions. Moulage uses materials such as makeup to simulate clinical presentations. Moulage fidelity can be assessed by face validity (realism) and content validity (appropriateness). The aim of this project is to compare the fidelity of professional moulage to non-professional moulage in the context of a burns management course. Four actors were randomly assigned to a professional make-up artist or a course faculty member for moulage preparation, such that two actors were in each group. Participants completed the actor-based burn management scenarios and answered a ten-question Likert-scale questionnaire on face and content validity. Mean scores and a linear mixed effects model were used to compare professional and non-professional moulage. Cronbach's alpha assessed internal consistency. Twenty participants experienced three out of four scenarios and at the end of the course had completed a total of 60 questionnaires. Professional moulage had higher average ratings for face (4.30 v 3.80; p=0.11) and content (4.30 v 4.00; p=0.06) validity. Internal consistency of the face (α=0.91) and content (α=0.85) validity questions was very good. The fidelity of professionally prepared moulage, as assessed by content validity, was higher than that of non-professionally prepared moulage. We have shown that, using professional techniques and low-cost materials, high-quality, high fidelity moulage simulations can be prepared. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
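    Cronbach's alpha, used above to report internal consistency, can be computed directly from the item-response matrix; the sketch below uses synthetic Likert responses rather than the study data, with the questionnaire size chosen only to mirror the abstract.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) Likert matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(2)
# Hypothetical 60 questionnaires x 5 face-validity items on a 1-5 scale.
base = rng.integers(3, 6, size=(60, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(60, 5)), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```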

  8. A new variable parallel holes collimator for scintigraphic device with validation method based on Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Trinci, G.; Massari, R.; Scandellari, M.; Boccalini, S.; Costantini, S.; Di Sero, R.; Basso, A.; Sala, R.; Scopinaro, F.; Soluri, A.

    2010-09-01

    The aim of this work is to present a new scintigraphic device able to change the length of its collimator automatically in order to adapt the spatial resolution to the gamma source distance. This patented technique replaces the collimator changes that standard gamma cameras still require. Monte Carlo simulations represent the best tool in the search for new technological solutions for such an innovative collimation structure. They also provide a valid analysis of gamma camera performance as well as of the advantages and limits of this new solution. Specifically, the Monte Carlo simulations are realized with the GEANT4 (GEometry ANd Tracking) framework, and the specific simulation object is a collimation method based on separate blocks that can be moved closer together or farther apart, in order to reach and maintain specific spatial resolution values for all source-detector distances. To verify the accuracy and faithfulness of these simulations, we carried out experimental measurements with an identical setup and conditions. This confirms the power of the simulation as an extremely useful tool, especially where new technological solutions need to be studied, tested and analyzed before their practical realization. The final aim of this new collimation system is the improvement of SPECT techniques, with real control of the spatial resolution value during tomographic acquisitions. This principle allowed us to simulate a tomographic acquisition of two capillaries of radioactive solution, in order to verify that they could be clearly distinguished.
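    The trade between collimator length, hole diameter, and source distance that motivates the variable-length design can be seen from a first-order geometric-resolution estimate (an analytic sketch, not a GEANT4 run); the hole diameter and target resolution below are hypothetical values chosen only to show how the required length grows with distance.

```python
# First-order parallel-hole collimator relation: R_g ~ d * (L + z) / L,
# where d is the hole diameter, L the collimator length, z the source distance.
HOLE_DIAMETER = 1.5        # mm, hypothetical
TARGET_RESOLUTION = 8.0    # mm, desired geometric resolution R_g

def required_length(z_mm):
    """Collimator length L that keeps R_g at the target for distance z."""
    return HOLE_DIAMETER * z_mm / (TARGET_RESOLUTION - HOLE_DIAMETER)

for z in (50, 100, 150, 200):          # source-to-collimator distance [mm]
    L = required_length(z)
    r_check = HOLE_DIAMETER * (L + z) / L
    print(f"z = {z:3d} mm  ->  L = {L:6.1f} mm  (R_g = {r_check:.1f} mm)")
```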

  9. Development and Demonstration of a Computational Tool for the Analysis of Particle Vitiation Effects in Hypersonic Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Perkins, Hugh Douglas

    2010-01-01

    In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.

  10. Genetic Simulation Resources: a website for the registration and discovery of genetic data simulators

    PubMed Central

    Peng, Bo; Chen, Huann-Sheng; Mechanic, Leah E.; Racine, Ben; Clarke, John; Clarke, Lauren; Gillanders, Elizabeth; Feuer, Eric J.

    2013-01-01

    Summary: Many simulation methods and programs have been developed to simulate genetic data of the human genome. These data have been widely used, for example, to predict properties of populations retrospectively or prospectively according to mathematically intractable genetic models, and to assist the validation, statistical inference and power analysis of a variety of statistical models. However, owing to the differences in type of genetic data of interest, simulation methods, evolutionary features, input and output formats, terminologies and assumptions for different applications, choosing the right tool for a particular study can be a resource-intensive process that usually involves searching, downloading and testing many different simulation programs. Genetic Simulation Resources (GSR) is a website provided by the National Cancer Institute (NCI) that aims to help researchers compare and choose the appropriate simulation tools for their studies. This website allows authors of simulation software to register their applications and describe them with well-defined attributes, thus allowing site users to search and compare simulators according to specified features. Availability: http://popmodels.cancercontrol.cancer.gov/gsr. Contact: gsr@mail.nih.gov PMID:23435068

  11. Development and operation of a real-time simulation at the NASA Ames Vertical Motion Simulator

    NASA Technical Reports Server (NTRS)

    Sweeney, Christopher; Sheppard, Shirin; Chetelat, Monique

    1993-01-01

    The Vertical Motion Simulator (VMS) facility at the NASA Ames Research Center combines the largest vertical motion capability in the world with a flexible real-time operating system allowing research to be conducted quickly and effectively. Due to the diverse nature of the aircraft simulated and the large number of simulations conducted annually, the challenge for the simulation engineer is to develop an accurate real-time simulation in a timely, efficient manner. The SimLab facility and the software tools necessary for an operating simulation will be discussed. Subsequent sections will describe the development process through operation of the simulation; this includes acceptance of the model, validation, integration and production phases.

  12. DES Y1 Results: Validating Cosmological Parameter Estimation Using Simulated Dark Energy Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacCrann, N.; et al.

    We use mock galaxy survey simulations designed to resemble the Dark Energy Survey Year 1 (DES Y1) data to validate and inform cosmological parameter estimation. When similar analysis tools are applied to both simulations and real survey data, they provide powerful validation tests of the DES Y1 cosmological analyses presented in companion papers. We use two suites of galaxy simulations produced using different methods, which therefore provide independent tests of our cosmological parameter inference. The cosmological analysis we aim to validate is presented in DES Collaboration et al. (2017) and uses angular two-point correlation functions of galaxy number counts and weak lensing shear, as well as their cross-correlation, in multiple redshift bins. While our constraints depend on the specific set of simulated realizations available, for both suites of simulations we find that the input cosmology is consistent with the combined constraints from multiple simulated DES Y1 realizations in the $\Omega_m$-$\sigma_8$ plane. For one of the suites, we are able to show with high confidence that any biases in the inferred $S_8=\sigma_8(\Omega_m/0.3)^{0.5}$ and $\Omega_m$ are smaller than the DES Y1 $1\sigma$ uncertainties. For the other suite, for which we have fewer realizations, we are unable to be this conclusive; we infer a roughly 70% probability that systematic biases in the recovered $\Omega_m$ and $S_8$ are sub-dominant to the DES Y1 uncertainty. As cosmological analyses of this kind become increasingly more precise, validation of parameter inference using survey simulations will be essential to demonstrate robustness.
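    A minimal sketch of the bias check described above, with hypothetical numbers standing in for the simulated realizations: compute $S_8$ for the input cosmology and for each recovered fit, then express the mean offset in units of a stand-in survey 1-sigma uncertainty. None of the values below come from the DES analysis.

```python
import numpy as np

def s8(omega_m, sigma_8):
    """S_8 = sigma_8 * (Omega_m / 0.3)**0.5"""
    return sigma_8 * np.sqrt(omega_m / 0.3)

# Hypothetical input truth and (Omega_m, sigma_8) pairs recovered from
# several simulated realizations (illustration only).
s8_true = s8(0.295, 0.830)
recovered = [(0.300, 0.825), (0.290, 0.838), (0.297, 0.828), (0.302, 0.833)]
s8_recovered = np.array([s8(om, sig) for om, sig in recovered])

sigma_y1 = 0.025                     # stand-in for the survey's 1-sigma on S_8
bias = s8_recovered.mean() - s8_true
print(f"mean recovered S_8 = {s8_recovered.mean():.4f}")
print(f"bias = {bias:+.4f} ({abs(bias) / sigma_y1:.2f} of the 1-sigma uncertainty)")
```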

  13. Small Launch Vehicle Trade Space Definition: Development of a Zero Level Mass Estimation Tool with Trajectory Validation

    NASA Technical Reports Server (NTRS)

    Waters, Eric D.

    2013-01-01

    Recent high level interest in the capability of small launch vehicles has placed significant demand on determining the trade space these vehicles occupy. This has led to the development of a zero level analysis tool that can quickly determine the minimum expected vehicle gross liftoff weight (GLOW) in terms of vehicle stage specific impulse (Isp) and propellant mass fraction (pmf) for any given payload value. Drawing on extensive Earth-to-orbit trajectory experience, the total delta-v the vehicle must achieve, including relevant loss terms, can be estimated. This foresight into expected losses allows for more specific assumptions relating to the initial estimates of thrust-to-weight values for each stage. This tool was further validated against a trajectory model, in this case the Program to Optimize Simulated Trajectories (POST), to determine whether the initial sizing delta-v was adequate to meet payload expectations. Presented here is a description of how the tool is set up and the approach the analyst must take when using it. Also, expected outputs, which depend on the type of small launch vehicle being sized, are presented. The method of validation is discussed, as well as where the sizing tool fits into the vehicle design process.
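    A zero-level sizing pass of this kind can be sketched with the rocket equation: given a payload, a per-stage delta-v split (including losses), Isp, and pmf, each stage's propellant and dry mass follow directly, and summing from the upper stage down gives a GLOW estimate. The two-stage numbers below are hypothetical illustration values, not the tool's assumptions.

```python
import math

G0 = 9.80665                      # m/s^2

def stage_mass(payload_kg, dv_ms, isp_s, pmf):
    """Propellant and dry mass of one stage sized by the rocket equation
    for a given payload, delta-v, Isp, and propellant mass fraction
    (pmf = propellant / (propellant + dry))."""
    mass_ratio = math.exp(dv_ms / (isp_s * G0))
    k = (1.0 - pmf) / pmf                     # dry mass per unit propellant
    prop = payload_kg * (mass_ratio - 1.0) / (1.0 - k * (mass_ratio - 1.0))
    dry = k * prop
    return prop, dry

# Hypothetical two-stage small launcher: 8500 m/s total delta-v incl. losses.
payload = 150.0                                            # kg
stages = [(5000.0, 290.0, 0.90), (3500.0, 330.0, 0.88)]    # (dv, Isp, pmf), stage 1 first

mass = payload
for dv, isp, pmf in reversed(stages):                      # size the upper stage first
    prop, dry = stage_mass(mass, dv, isp, pmf)
    mass += prop + dry
print(f"estimated GLOW ~ {mass:,.0f} kg for a {payload:.0f} kg payload")
```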

  14. ANTARES: Spacecraft Simulation for Multiple User Communities and Facilities

    NASA Technical Reports Server (NTRS)

    Acevedo, Amanda; Berndt, Jon; Othon, William; Arnold, Jason; Gay, Robet

    2007-01-01

    The Advanced NASA Technology Architecture for Exploration Studies (ANTARES) simulation is the primary tool being used for requirements assessment of the NASA Orion spacecraft by the Guidance Navigation and Control (GN&C) teams at Johnson Space Center (JSC). ANTARES is a collection of packages and model libraries that are assembled and executed by the Trick simulation environment. Currently, ANTARES is being used for spacecraft design assessment, performance analysis, requirements validation, Hardware In the Loop (HWIL) and Human In the Loop (HIL) testing.

  15. Numerical Simulation of Rocket Exhaust Interaction with Lunar Soil

    NASA Technical Reports Server (NTRS)

    Liever, Peter; Tosh, Abhijit; Curtis, Jennifer

    2012-01-01

    This technology development originated from the need to assess the debris threat resulting from soil material erosion induced by landing spacecraft rocket plume impingement on extraterrestrial planetary surfaces. The impact of soil debris was observed to be highly detrimental during NASA's Apollo lunar missions and will pose a threat for any future landings on the Moon, Mars, and other exploration targets. The innovation developed under this program provides a simulation tool that combines modeling of the diverse disciplines of rocket plume impingement gas dynamics, granular soil material liberation, and soil debris particle kinetics into one unified simulation system. The Unified Flow Solver (UFS) developed by CFDRC enabled the efficient, seamless simulation of mixed continuum and rarefied rocket plume flow utilizing a novel direct numerical simulation technique of the Boltzmann gas dynamics equation. The characteristics of the soil granular material response and modeling of the erosion and liberation processes were enabled through novel first principle-based granular mechanics models developed by the University of Florida specifically for the highly irregularly shaped and cohesive lunar regolith material. These tools were integrated into a unique simulation system that accounts for all relevant physics aspects: (1) Modeling of spacecraft rocket plume impingement flow under lunar vacuum environment resulting in a mixed continuum and rarefied flow; (2) Modeling of lunar soil characteristics to capture soil-specific effects of particle size and shape composition, soil layer cohesion and granular flow physics; and (3) Accurate tracking of soil-borne debris particles beginning with aerodynamically driven motion inside the plume to purely ballistic motion in lunar far field conditions. In the earlier project phase of this innovation, the capabilities of the UFS for mixed continuum and rarefied flow situations were validated and demonstrated for lunar lander rocket plume flow impingement under lunar vacuum conditions. Applications and improvements to the granular flow simulation tools contributed by the University of Florida were tested against Earth environment experimental results. Requirements for developing, validating, and demonstrating this solution environment were clearly identified, and an effective second phase execution plan was devised. In this phase, the physics models were refined and fully integrated into a production-oriented simulation tool set. Three-dimensional simulations of Apollo Lunar Excursion Module (LEM) and Altair landers (including full-scale lander geometry) established the practical applicability of the UFS simulation approach and its advanced performance level for large-scale realistic problems.

  16. Modeling and Simulation Roadmap to Enhance Electrical Energy Security of U.S. Naval Bases

    DTIC Science & Technology

    2012-03-01

    ...evaluating power system architectures and technologies and, therefore, can become a valuable tool for the implementation of the described plan for Navy... A well validated and consistent process for evaluating power system architectures and component technologies is needed to support the development and implementation of these new...

  17. Tool Support for Parametric Analysis of Large Software Simulation Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
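    The combination of combinatorial coverage with Monte Carlo fill-in can be illustrated with a naive sketch: for every pair of parameters, enumerate all combinations of their levels and fill the remaining parameters randomly. A production tool would merge cases greedily so each run covers many pairs at once, and the gap versus exhaustive enumeration widens quickly as parameters are added; the parameter names and levels below are invented for illustration.

```python
import itertools
import random

random.seed(0)

# Hypothetical simulation parameters and discrete levels for each.
levels = {
    "mass_kg":        [90, 100, 110],
    "thruster_bias":  [-0.02, 0.0, 0.02],
    "sensor_noise":   [0.001, 0.01],
    "init_rate_dps":  [0.0, 0.5, 1.0],
}

def two_factor_cases(levels):
    """For every pair of parameters, cover every combination of their
    levels; remaining parameters are filled in by Monte Carlo draws."""
    names = list(levels)
    cases = []
    for a, b in itertools.combinations(names, 2):
        for va, vb in itertools.product(levels[a], levels[b]):
            case = {n: random.choice(levels[n]) for n in names}
            case[a], case[b] = va, vb
            cases.append(case)
    return cases

cases = two_factor_cases(levels)
print(len(cases), "pairwise-coverage cases vs",
      len(list(itertools.product(*levels.values()))), "exhaustive combinations")
print("example case:", cases[0])
```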

  18. Challenges to Computational Aerothermodynamic Simulation and Validation for Planetary Entry Vehicle Analysis

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil

    2010-01-01

    Challenges to computational aerothermodynamic (CA) simulation and validation of hypersonic flow over planetary entry vehicles are discussed. Entry, descent, and landing (EDL) of high mass to Mars is a significant driver of new simulation requirements. These requirements include simulation of large deployable, flexible structures and interactions with reaction control system (RCS) and retro-thruster jets. Simulation of radiation and ablation coupled to the flow solver continues to be a high priority for planetary entry analyses, especially for return to Earth and outer planet missions. Three research areas addressing these challenges are emphasized. The first addresses the need to obtain accurate heating on unstructured tetrahedral grid systems to take advantage of flexibility in grid generation and grid adaptation. A multi-dimensional inviscid flux reconstruction algorithm is defined that is oriented with local flow topology as opposed to grid. The second addresses coupling of radiation and ablation to the hypersonic flow solver - flight- and ground-based data are used to provide limited validation of these multi-physics simulations. The third addresses the challenges of retro-propulsion simulation and the criticality of grid adaptation in this application. The evolution of CA to become a tool for innovation of EDL systems requires a successful resolution of these challenges.

  19. Massively Parallel Processing for Fast and Accurate Stamping Simulations

    NASA Astrophysics Data System (ADS)

    Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu

    2005-08-01

    The competitive automotive market drives automotive manufacturers to speed up the vehicle development cycles and reduce the lead-time. Fast tooling development is one of the key areas to support fast and short vehicle development programs (VDP). In the past ten years, the stamping simulation has become the most effective validation tool in predicting and resolving all potential formability and quality problems before the dies are physically made. The stamping simulation and formability analysis has become an critical business segment in GM math-based die engineering process. As the simulation becomes as one of the major production tools in engineering factory, the simulation speed and accuracy are the two of the most important measures for stamping simulation technology. The speed and time-in-system of forming analysis becomes an even more critical to support the fast VDP and tooling readiness. Since 1997, General Motors Die Center has been working jointly with our software vendor to develop and implement a parallel version of simulation software for mass production analysis applications. By 2001, this technology was matured in the form of distributed memory processing (DMP) of draw die simulations in a networked distributed memory computing environment. In 2004, this technology was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology and the insight gained through the implementation of DM0P/MPP technology as well as performance benchmarks are discussed in this publication.

  20. Optimized radiation-hardened erbium doped fiber amplifiers for long space missions

    NASA Astrophysics Data System (ADS)

    Ladaci, A.; Girard, S.; Mescia, L.; Robin, T.; Laurent, A.; Cadier, B.; Boutillier, M.; Ouerdane, Y.; Boukenter, A.

    2017-04-01

    In this work, we developed and exploited simulation tools to optimize the performance of rare earth doped fiber amplifiers (REDFAs) for space missions. To describe these systems, a state-of-the-art model based on the rate equations and the particle swarm optimization technique is developed, in which we also consider the main radiation effect on REDFAs: the radiation-induced attenuation (RIA). After validating this tool set by comparing theoretical and experimental results, we investigate how the deleterious radiation effects on amplifier performance can be mitigated through appropriate strategies in conceiving the REDFA architecture. The tool set was validated by comparing the calculated Erbium-doped fiber amplifier (EDFA) gain degradation under X-rays at ~300 krad(SiO2) with the corresponding experimental results. Two versions of the same fiber were used in this work: a standard optical fiber and a radiation hardened fiber obtained by loading the former with hydrogen gas. Based on these fibers, standard and radiation hardened EDFAs were manufactured and tested in different operating configurations, and the obtained data were compared with simulation data generated for the same EDFA structure and fiber properties. This comparison reveals good agreement between the simulated gain and the experimental data (<10% maximum error at the highest doses). Compared to our previous results obtained on Er/Yb amplifiers, these results reveal the importance of the photo-bleaching mechanism competing with the RIA, which cannot be neglected when modeling the radiation-induced gain degradation of EDFAs. This implies measuring, under representative conditions, the RIA at the pump and signal wavelengths, which is used as an input parameter for the simulation. The validated numerical codes have then been used to evaluate the potential of some EDFA architecture evolutions on the amplifier performance during the space mission. Optimization of both the fiber length and the EDFA pumping scheme allows us to strongly reduce the amplifier's radiation vulnerability in terms of gain. The presented approach is a complementary and effective tool for hardening-by-device techniques and opens new perspectives for the applications of REDFAs and lasers in harsh environments.
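    The existence of a dose-dependent optimum fiber length can be seen from a deliberately simplified gain budget (not the paper's rate-equation model): pumped gain saturates as the pump is absorbed along the fiber, while background loss and RIA grow linearly with length, so the optimum length shrinks and the achievable gain drops as dose increases. All coefficients below are hypothetical.

```python
import numpy as np

G0 = 3.0               # dB/m, small-signal gain coefficient at the input (hypothetical)
L_PUMP = 5.0           # m, pump absorption length scale (hypothetical)
ALPHA_BG = 0.05        # dB/m, background loss
RIA_PER_KRAD = 0.004   # dB/m per krad(SiO2), hypothetical RIA coefficient

def net_gain_db(length_m, dose_krad):
    """Pumped gain saturating with length, minus distributed losses."""
    pumped_gain = G0 * L_PUMP * (1.0 - np.exp(-length_m / L_PUMP))
    loss = (ALPHA_BG + RIA_PER_KRAD * dose_krad) * length_m
    return pumped_gain - loss

lengths = np.linspace(0.5, 40.0, 400)
for dose in (0, 100, 300):
    gains = net_gain_db(lengths, dose)
    i = int(np.argmax(gains))
    print(f"dose {dose:3d} krad(SiO2): optimum length ~{lengths[i]:4.1f} m, "
          f"gain {gains[i]:5.1f} dB")
```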

  1. Personalizing oncology treatments by predicting drug efficacy, side-effects, and improved therapy: mathematics, statistics, and their integration.

    PubMed

    Agur, Zvia; Elishmereni, Moran; Kheifetz, Yuri

    2014-01-01

    Despite its great promise, personalized oncology still faces many hurdles, and it is increasingly clear that targeted drugs and molecular biomarkers alone yield only modest clinical benefit. One reason is the complex relationships between biomarkers and the patient's response to drugs, obscuring the true weight of the biomarkers in the patient's overall response. This complexity can be disentangled by computational models that integrate the effects of personal biomarkers into a simulator of drug-patient dynamic interactions, for predicting the clinical outcomes. Several computational tools have been developed for personalized oncology, notably evidence-based tools for simulating pharmacokinetics, Bayesian-estimated tools for predicting survival, etc. We describe representative statistical and mathematical tools, and discuss their merits, shortcomings and preliminary clinical validation attesting to their potential. Yet, the individualization power of mathematical models alone, or statistical models alone, is limited. More accurate and versatile personalization tools can be constructed by a new application of the statistical/mathematical nonlinear mixed effects modeling (NLMEM) approach, which until recently has been used only in drug development. Using these advanced tools, clinical data from patient populations can be integrated with mechanistic models of disease and physiology, for generating personal mathematical models. Upon a more substantial validation in the clinic, this approach will hopefully be applied in personalized clinical trials, P-trials, hence aiding the establishment of personalized medicine within the mainstream of clinical oncology. © 2014 Wiley Periodicals, Inc.

  2. Validation of a virtual reality-based simulator for shoulder arthroscopy.

    PubMed

    Rahm, Stefan; Germann, Marco; Hingsammer, Andreas; Wieser, Karl; Gerber, Christian

    2016-05-01

    This study aimed to determine the face and construct validity of a new virtual reality-based shoulder arthroscopy simulator which uses passive haptic feedback. Fifty-one participants, including 25 novices (<20 shoulder arthroscopies) and 26 experts (>100 shoulder arthroscopies), completed two tests: for the assessment of face validity, a questionnaire was filled out concerning the quality of the simulated reality and the training potential, using a 7-point Likert scale (range 1-7). Construct validity was tested by comparing simulator metrics (operation time in seconds, camera and grasper pathway in centimetres, and grasper openings) between novice and expert test results. Overall simulated reality was rated high, with a median value of 5.5 (range 2.8-7) points. Training capacity scored a median value of 5.8 (range 3-7) points. Experts were significantly faster in the diagnostic test, with a median of 91 (range 37-208) s versus 1177 (range 81-383) s for novices (p < 0.0001), and in the therapeutic test, 102 (range 58-283) s versus 229 (range 114-399) s (p < 0.0001). Similar results were seen in the other metric values except for the camera pathway in the therapeutic test. The tested simulator achieved high scores in terms of realism and training capability. It reliably discriminated between novices and experts. Further improvements of the simulator, especially in the field of therapeutic arthroscopy, might improve its value as a training and assessment tool for shoulder arthroscopy skills. II.

  3. Validation of a Novel Laparoscopic Adjustable Gastric Band Simulator

    PubMed Central

    Sankaranarayanan, Ganesh; Adair, James D.; Halic, Tansel; Gromski, Mark A.; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B.; De, Suvranu

    2011-01-01

    Background Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. Study Aim The aim of our study was to determine face, construct and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Methods Twenty-eight subjects were categorized into two groups (Expert and Novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least four years of laparoscopic training and operative experience. Novices consisted of subjects with medical training, but with less than four years of laparoscopic training. The subjects used the virtual reality laparoscopic adjustable band surgery simulator. They were automatically scored according to various tasks. The subjects then completed a questionnaire to evaluate face and content validity. Results On a 5-point Likert scale (1 – lowest score, 5 – highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 [Face Validity]. There were significant differences in the performance of the two subject groups (Expert and Novice), based on total scores (p<0.001) [Construct Validity]. Mean score for utility of the simulator, as addressed by the Expert group, was 4.50 ± 0.71 [Content Validity]. Conclusion We created a virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure. PMID:20734069

  4. Validation of a novel laparoscopic adjustable gastric band simulator.

    PubMed

    Sankaranarayanan, Ganesh; Adair, James D; Halic, Tansel; Gromski, Mark A; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B; De, Suvranu

    2011-04-01

    Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. The aim of our study was to determine face, construct, and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Twenty-eight subjects were categorized into two groups (expert and novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least 4 years of laparoscopic training and operative experience. Novices consisted of subjects with medical training but with less than 4 years of laparoscopic training. The subjects used the virtual reality laparoscopic adjustable band surgery simulator. They were automatically scored according to various tasks. The subjects then completed a questionnaire to evaluate face and content validity. On a 5-point Likert scale (1 = lowest score, 5 = highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 (face validity). There were significant differences in the performances of the two subject groups (expert and novice) based on total scores (p < 0.001) (construct validity). Mean score for utility of the simulator, as addressed by the expert group, was 4.50 ± 0.71 (content validity). We created a virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct, and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure.

  5. A review of simulation platforms in surgery of the temporal bone.

    PubMed

    Bhutta, M F

    2016-10-01

    Surgery of the temporal bone is a high-risk activity in an anatomically complex area. Simulation enables rehearsal of such surgery. The traditional simulation platform is the cadaveric temporal bone, but in recent years other simulation platforms have been created, including plastic and virtual reality platforms. To undertake a review of simulation platforms for temporal bone surgery, specifically assessing their educational value in terms of validity and in enabling transition to surgery. Systematic qualitative review. Search of the Pubmed, CINAHL, BEI and ERIC databases. Assessment of reported outcomes in terms of educational value. A total of 49 articles were included, covering cadaveric, animal, plastic and virtual simulation platforms. Cadaveric simulation is highly rated as an educational tool, but there may be a ceiling effect on educational outcomes after drilling 8-10 temporal bones. Animal models show significant anatomical variation from man. Plastic temporal bone models offer much potential, but at present lack sufficient anatomical or haptic validity. Similarly, virtual reality platforms lack sufficient anatomical or haptic validity, but with technological improvements they are advancing rapidly. At present, cadaveric simulation remains the best platform for training in temporal bone surgery. Technological advances enabling improved materials or modelling mean that in the future plastic or virtual platforms may become comparable to cadaveric platforms, and also offer additional functionality including patient-specific simulation from CT data. © 2015 John Wiley & Sons Ltd.

  6. Streamflow Simulations and Percolation Estimates Using the Soil and Water Assessment Tool for Selected Basins in North-Central Nebraska, 1940-2005

    USGS Publications Warehouse

    Strauch, Kellan R.; Linard, Joshua I.

    2009-01-01

    The U.S. Geological Survey, in cooperation with the Upper Elkhorn, Lower Elkhorn, Upper Loup, Lower Loup, Middle Niobrara, Lower Niobrara, Lewis and Clark, and Lower Platte North Natural Resources Districts, used the Soil and Water Assessment Tool to simulate streamflow and estimate percolation in north-central Nebraska to aid development of long-term strategies for management of hydrologically connected ground and surface water. Although groundwater models adequately simulate subsurface hydrologic processes, they often are not designed to simulate the hydrologically complex processes occurring at or near the land surface. The use of watershed models such as the Soil and Water Assessment Tool, which are designed specifically to simulate surface and near-subsurface processes, can provide helpful insight into the effects of surface-water hydrology on the groundwater system. The Soil and Water Assessment Tool was calibrated for five stream basins in the Elkhorn-Loup Groundwater Model study area in north-central Nebraska to obtain spatially variable estimates of percolation. Six watershed models were calibrated to recorded streamflow in each subbasin by modifying the adjustment parameters. The calibrated parameter sets were then used to simulate a validation period; the validation period was half of the total streamflow period of record with a minimum requirement of 10 years. If the statistical and water-balance results for the validation period were similar to those for the calibration period, a model was considered satisfactory. Statistical measures of each watershed model's performance were variable. These objective measures included the Nash-Sutcliffe measure of efficiency, the ratio of the root-mean-square error to the standard deviation of the measured data, and an estimate of bias. The model met performance criteria for the bias statistic, but failed to meet statistical adequacy criteria for the other two performance measures when evaluated at a monthly time step. A primary cause of the poor model validation results was the inability of the model to reproduce the sustained base flow and streamflow response to precipitation that was observed in the Sand Hills region. The watershed models also were evaluated based on how well they conformed to the annual mass balance (precipitation equals the sum of evapotranspiration, streamflow/runoff, and deep percolation). The model was able to adequately simulate annual values of evapotranspiration, runoff, and precipitation in comparison to reported values, which indicates the model may provide reasonable estimates of annual percolation. Mean annual percolation estimated by the model as basin averages varied within the study area from a maximum of 12.9 inches in the Loup River Basin to a minimum of 1.5 inches in the Shell Creek Basin. Percolation also varied within the studied basins; basin headwaters tended to have greater percolation rates than downstream areas. This variance in percolation rates was mainly because of the predominance of sandy, highly permeable soils in the upstream areas of the modeled basins.
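    The three performance measures cited above are straightforward to compute; the sketch below evaluates Nash-Sutcliffe efficiency, the RMSE-to-observation-standard-deviation ratio (RSR), and percent bias on hypothetical monthly streamflow values rather than the study data.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rsr(obs, sim):
    """Ratio of the RMSE to the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

def percent_bias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (sim - obs).sum() / obs.sum()

# Hypothetical monthly streamflow values (cfs) for illustration only.
observed  = [120, 95, 310, 540, 400, 220, 150, 110, 90, 130, 160, 140]
simulated = [110, 90, 280, 600, 370, 240, 140, 120, 85, 120, 170, 150]

print(f"NSE   = {nash_sutcliffe(observed, simulated):.2f}")
print(f"RSR   = {rsr(observed, simulated):.2f}")
print(f"PBIAS = {percent_bias(observed, simulated):+.1f}%")
```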

  7. Construct and face validity of the educational computer-based environment (ECE) assessment scenarios for basic endoneurosurgery skills.

    PubMed

    Cagiltay, Nergiz Ercil; Ozcelik, Erol; Sengul, Gokhan; Berker, Mustafa

    2017-11-01

    In neurosurgery education, there is a paradigm shift from time-based training to a criterion-based model, for which competency and assessment become very critical. Even though virtual reality simulators provide alternatives to improve education and assessment in neurosurgery programs and allow for several objective assessment measures, there are not many tools for assessing the overall performance of trainees. This study aims to develop and validate a tool for assessing the overall performance of participants in a simulation-based endoneurosurgery training environment. A training program was developed at two levels: endoscopy practice and beginning surgical practice, based on four scenarios. Then, three experiments were conducted with three corresponding groups of participants (Experiment 1: 45 participants (32 beginners, 13 experienced); Experiment 2: 53 (40 beginners, 13 experienced); and Experiment 3: 26 (14 novices, 12 intermediate)). The results were analyzed to identify the common factors among the performance measurements of these experiments. Then, a factor capable of assessing the overall skill levels of surgical residents was extracted. Afterwards, the proposed measure was tested to estimate the experience levels of the participants. Finally, the level of realism of these educational scenarios was assessed. The factor formed by time, distance, and accuracy on simulated tasks provided an overall performance indicator. Prediction correctness was higher for beginners than for experienced surgeons in Experiments 1 and 2. When the non-dominant hand is used in a surgical procedure-based scenario, the skill levels of surgeons can be better predicted. The results indicate that the scenarios in Experiments 1 and 2 can be used as an assessment tool for beginners, and scenario 2 in Experiment 3 can be used as an assessment tool for intermediate and novice levels. It can be concluded that forming the balance between perceived action capacities and skills is critical for better designing and developing skill assessment surgical simulation tools.

  8. The reliability and validity of three questionnaires: The Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire.

    PubMed

    Unver, Vesile; Basak, Tulay; Watts, Penni; Gaioso, Vanessa; Moss, Jacqueline; Tastan, Sevinc; Iyigun, Emine; Tosun, Nuran

    2017-02-01

    The purpose of this study was to adapt the "Student Satisfaction and Self-Confidence in Learning Scale" (SCLS), "Simulation Design Scale" (SDS), and "Educational Practices Questionnaire" (EPQ) developed by Jeffries and Rizzolo into Turkish and establish the reliability and the validity of these translated scales. A sample of 87 nursing students participated in this study. These scales were cross-culturally adapted through a process including translation, comparison with original version, back translation, and pretesting. Construct validity was evaluated by factor analysis, and criterion validity was evaluated using the Perceived Learning Scale, Patient Intervention Self-confidence/Competency Scale, and Educational Belief Scale. Cronbach's alpha values were found as 0.77-0.85 for SCLS, 0.73-0.86 for SDS, and 0.61-0.86 for EPQ. The results of this study show that the Turkish versions of all scales are validated and reliable measurement tools.

  9. Furthering the Validity of a Tool to Assess Simulated Pregnancy Options Counseling Skills.

    PubMed

    Lupi, Carla; Ward-Peterson, Melissa; Coxe, Stefany; Minor, Suzanne; Eliacin, Irmanie; Obeso, Vivian

    2016-10-01

    To further the validity of a tool to assess nondirective pregnancy options counseling skills. Using a cross-sectional design, we explored four sources of construct validity evidence for an objective structured clinical examination for training and assessment of nondirective pregnancy options counseling: content, response process, internal structure, and relations to other variables. Content of the previously developed tool was enhanced through input from five family medicine educators. The objective structured clinical examination was implemented in a family medicine clerkship with third-year medical students from 2014 to 2015 using trained raters. Response process was addressed after a pilot round. Three new raters evaluated videotapes of 46 performances. Cronbach's alpha, intraclass correlation coefficients, and Spearman's rho were estimated with 95% confidence intervals. The content validity was affirmed. Cronbach's alpha was 0.71. According to Landis and Koch's criteria, all but two items unique to the clinical situation of pregnancy options counseling generated substantial to perfect agreement (0.62-1.00). Relations to other variables within the checklist were strong, ranging from 0.66 to 0.87. This tool for assessing pregnancy options counseling skills has excellent content and strong internal structure. Further work to improve the Global Rating Scale may be necessary for summative use.

  10. Development of RAD-Score: A Tool to Assess the Procedural Competence of Diagnostic Radiology Residents.

    PubMed

    Isupov, Inga; McInnes, Matthew D F; Hamstra, Stan J; Doherty, Geoffrey; Gupta, Ashish; Peddle, Susan; Jibri, Zaid; Rakhra, Kawan; Hibbert, Rebecca M

    2017-04-01

    The purpose of this study is to develop a tool to assess the procedural competence of radiology trainees, with sources of evidence gathered from five categories to support the construct validity of the tool: content, response process, internal structure, relations to other variables, and consequences. A pilot form for assessing procedural competence among radiology residents, known as the RAD-Score tool, was developed by evaluating published literature and using a modified Delphi procedure involving a group of local content experts. The pilot version of the tool was tested by seven radiology department faculty members who evaluated procedures performed by 25 residents at one institution between October 2014 and June 2015. Residents were evaluated while performing multiple procedures in both clinical and simulation settings. The main outcome measure was the percentage of residents who were considered ready to perform procedures independently, with testing conducted to determine differences between levels of training. A total of 105 forms (for 52 procedures performed in a clinical setting and 53 procedures performed in a simulation setting) were collected for a variety of procedures (eight vascular or interventional, 42 body, 12 musculoskeletal, 23 chest, and 20 breast procedures). A statistically significant difference was noted in the percentage of trainees who were rated as being ready to perform a procedure independently (in postgraduate year [PGY] 2, 12% of residents; in PGY3, 61%; in PGY4, 85%; and in PGY5, 88%; p < 0.05); this difference persisted in the clinical and simulation settings. User feedback and psychometric analysis were used to create a final version of the form. This prospective study describes the successful development of a tool for assessing the procedural competence of radiology trainees with high levels of construct validity in multiple domains. Implementation of the tool in the radiology residency curriculum is planned and can play an instrumental role in the transition to competency-based radiology training.

  11. A Review of Hypersonics Aerodynamics, Aerothermodynamics and Plasmadynamics Activities within NASA's Fundamental Aeronautics Program

    NASA Technical Reports Server (NTRS)

    Salas, Manuel D.

    2007-01-01

    The research program of the aerodynamics, aerothermodynamics and plasmadynamics discipline of NASA's Hypersonic Project is reviewed. Details are provided for each of its three components: 1) development of physics-based models of non-equilibrium chemistry, surface catalytic effects, turbulence, transition and radiation; 2) development of advanced simulation tools to enable increased spatial and time accuracy, increased geometrical complexity, grid adaptation, increased physical-processes complexity, uncertainty quantification and error control; and 3) establishment of experimental databases from ground and flight experiments to develop better understanding of high-speed flows and to provide data to validate and guide the development of simulation tools.

  12. Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model: A Web-based program designed to evaluate the cost-effectiveness of disease management programs in heart failure.

    PubMed

    Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C

    2015-11-01

    Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.
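
    The model's core output, an incremental cost-effectiveness ratio computed from paired virtual cohorts, can be illustrated with a minimal sketch. The cost and QALY distributions below are placeholders and do not reproduce the TEAM-HF model's equations or Seattle Heart Failure Model projections.

        # Minimal sketch of an ICER computed from paired simulated cohorts
        # (placeholder distributions; not the TEAM-HF model itself).
        import numpy as np

        rng = np.random.default_rng(0)
        n_pairs = 10_000

        # Hypothetical lifetime discounted costs (USD) and QALYs per cohort pair
        cost_control = rng.normal(60_000, 8_000, n_pairs)
        cost_program = rng.normal(64_000, 8_000, n_pairs)   # program adds cost
        qaly_control = rng.normal(4.2, 0.5, n_pairs)
        qaly_program = rng.normal(4.5, 0.5, n_pairs)        # program adds QALYs

        delta_cost = cost_program - cost_control
        delta_qaly = qaly_program - qaly_control
        icer = delta_cost.mean() / delta_qaly.mean()
        print(f"ICER ~ ${icer:,.0f} per QALY gained")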

  13. Experimental validation of ultrasonic NDE simulation software

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Larche, Michael; Diaz, Aaron A.; Crawford, Susan L.; Prowant, Matthew S.; Anderson, Michael T.

    2016-02-01

    Computer modeling and simulation is becoming an essential tool for transducer design and insight into ultrasonic nondestructive evaluation (UT-NDE). As the popularity of simulation tools for UT-NDE increases, it becomes important to assess their reliability to model acoustic responses from defects in operating components and provide information that is consistent with in-field inspection data. This includes information about the detectability of different defect types for a given UT probe. Recently, a cooperative program between the Electric Power Research Institute and the U.S. Nuclear Regulatory Commission was established to validate numerical modeling software commonly used for simulating UT-NDE of nuclear power plant components. In the first phase of this cooperative program, extensive experimental UT measurements were conducted on machined notches with varying depth, length, and orientation in stainless steel plates. Then, the notches were modeled in CIVA, a semi-analytical NDE simulation platform developed by the French Commissariat a l'Energie Atomique, and their responses compared with the experimental measurements. Discrepancies between experimental and simulation results are due to either improper inputs to the simulation model, or to incorrect approximations and assumptions in the numerical models. To address the former, a variation study was conducted on the different parameters that are required as inputs for the model, specifically the specimen and transducer properties. Then, the ability of the simulations to give accurate predictions regarding the detectability of the different defects was demonstrated, including the variations in defect amplitude indications and the ratios between tip-diffracted and specular signal amplitudes.
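
    One of the reported comparisons, the ratio between tip-diffracted and specular signal amplitudes, reduces to a decibel ratio. The sketch below shows that bookkeeping with placeholder amplitudes; the values are not CIVA outputs or experimental data.

        # Tip-diffracted vs. specular amplitude ratio in dB, for simulated and
        # measured signals (placeholder amplitudes, not CIVA or experimental data).
        import math

        def ratio_db(a_tip, a_specular):
            return 20.0 * math.log10(a_tip / a_specular)

        sim = ratio_db(a_tip=0.012, a_specular=0.15)
        exp = ratio_db(a_tip=0.010, a_specular=0.14)
        print(f"simulated: {sim:.1f} dB, measured: {exp:.1f} dB, "
              f"discrepancy: {sim - exp:.1f} dB")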

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rutqvist, Jonny; Blanco Martin, Laura; Mukhopadhyay, Sumit

    In this report, we present FY2014 progress by Lawrence Berkeley National Laboratory (LBNL) related to modeling of coupled thermal-hydrological-mechanical-chemical (THMC) processes in salt and their effect on brine migration at high temperatures. LBNL’s work on the modeling of coupled THMC processes in salt was initiated in FY2012, focusing on exploring and demonstrating the capabilities of an existing LBNL modeling tool (TOUGH-FLAC) for simulating temperature-driven coupled flow and geomechanical processes in salt. This work includes development related to, and implementation of, essential capabilities, as well as testing the model against relevant information and published experimental data related to the fate and transport of water. We provide more details on the FY2014 work, first presenting updated tools and improvements made to the TOUGH-FLAC simulator, and the use of this updated tool in a new model simulation of long-term THM behavior within a generic repository in a salt formation. This is followed by a description of current benchmarking and validation efforts, including the TSDE experiment. We then present the current status in the development of constitutive relationships and the dual-continuum model for brine migration. We conclude with an outlook for FY2015, which will focus largely on model validation against field experiments and on the use of the model for design studies related to a proposed heater experiment.

  15. ProMIS augmented reality training of laparoscopic procedures face validity.

    PubMed

    Botden, Sanne M B I; Buzink, Sonja N; Schijven, Marlies P; Jakimowicz, Jack J

    2008-01-01

    Conventional video trainers lack the ability to assess the trainee objectively, but offer modalities that are often missing in virtual reality simulation, such as realistic haptic feedback. The ProMIS augmented reality laparoscopic simulator retains the benefit of a traditional box trainer, by using original laparoscopic instruments and tactile tasks, but additionally generates objective measures of performance. Fifty-five participants performed a "basic skills" and "suturing and knot-tying" task on ProMIS, after which they filled out a questionnaire regarding realism, haptics, and didactic value of the simulator, on a 5-point Likert scale. The participants were allocated to 2 experience groups: "experienced" (>50 procedures and >5 sutures; N = 27), and "moderately experienced" (<50 procedures and <5 sutures; N = 28). General consensus among all participants, particularly the experienced, was that ProMIS is a useful tool for training (mean: 4.67, SD: 0.48). It was considered very realistic (mean: 4.44, SD: 0.66), with good haptics (mean: 4.10, SD: 0.97) and didactic value (mean: 4.10, SD: 0.65). This study established the face validity of the ProMIS augmented reality simulator for "basic skills" and "suturing and knot-tying" tasks. ProMIS was considered a good tool for training in laparoscopic skills for surgical residents and surgeons.

  16. Can virtual reality simulation be used for advanced bariatric surgical training?

    PubMed

    Lewis, Trystan M; Aggarwal, Rajesh; Kwasnicki, Richard M; Rajaretnam, Niro; Moorthy, Krishna; Ahmed, Ahmed; Darzi, Ara

    2012-06-01

    Laparoscopic bariatric surgery is a safe and effective way of treating morbid obesity. However, the operations are technically challenging and training opportunities for junior surgeons are limited. This study aims to assess whether virtual reality (VR) simulation is an effective adjunct for training and assessment of laparoscopic bariatric technical skills. Twenty bariatric surgeons of varying experience (five experienced, five intermediate, and ten novice) were recruited to perform a jejuno-jejunostomy on both cadaveric tissue and on the bariatric module of the Lapmentor VR simulator (Simbionix Corporation, Cleveland, OH). Surgical performance was assessed using validated global rating scales (GRS) and procedure specific video rating scales (PSRS). Subjects were also questioned about the appropriateness of VR as a training tool for surgeons. Construct validity of the VR bariatric module was demonstrated with a significant difference in performance between novice and experienced surgeons on the VR jejuno-jejunostomy module GRS (median 11-15.5; P = .017) and PSRS (median 11-13; P = .003). Content validity was demonstrated with surgeons describing the VR bariatric module as useful and appropriate for training (mean Likert score 4.45/7) and they would highly recommend VR simulation to others for bariatric training (mean Likert score 5/7). Face and concurrent validity were not established. This study shows that the bariatric module on a VR simulator demonstrates construct and content validity. VR simulation appears to be an effective method for training of advanced bariatric technical skills for surgeons at the start of their bariatric training. However, assessment of technical skills should still take place on cadaveric tissue. Copyright © 2012. Published by Mosby, Inc.
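
    The abstract reports median GRS/PSRS differences between novices and experienced surgeons; a nonparametric two-sample test such as Mann-Whitney U is commonly used for this kind of construct-validity comparison (the study does not specify its test here). The scores below are invented for illustration only.

        # Hedged illustration: comparing novice vs. experienced GRS scores with a
        # Mann-Whitney U test (invented scores, not the study data).
        from scipy.stats import mannwhitneyu

        novice_grs = [10, 11, 11, 12, 10, 12, 11, 13, 10, 11]
        expert_grs = [15, 16, 14, 17, 15]

        stat, p = mannwhitneyu(novice_grs, expert_grs, alternative="two-sided")
        print(f"U = {stat}, p = {p:.3f}")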

  17. Proposing "the burns suite" as a novel simulation tool for advancing the delivery of burns education.

    PubMed

    Sadideen, Hazim; Wilson, David; Moiemen, Naiem; Kneebone, Roger

    2014-01-01

    Educational theory highlights the importance of contextualized simulation for effective learning. We explored this concept in a burns scenario in a novel, low-cost, high-fidelity, portable, immersive simulation environment (referred to as distributed simulation). This contextualized simulation/distributed simulation combination was named "The Burns Suite" (TBS). A pediatric burn resuscitation scenario was selected after high trainee demand. It was designed on Advanced Trauma and Life Support and Emergency Management of Severe Burns principles and refined using expert opinion through cognitive task analysis. TBS contained "realism" props, briefed nurses, and a simulated patient. Novices and experts were recruited. Five-point Likert-type questionnaires were developed for face and content validity. Cronbach's α was calculated for scale reliability. Semistructured interviews captured responses for qualitative thematic analysis allowing for data triangulation. Twelve participants completed the TBS scenario. Mean face and content validity ratings were high (4.6 and 4.5, respectively; range, 4-5). The internal consistency of questions was high. Qualitative data analysis revealed that participants felt 1) the experience was "real" and they were "able to behave as if in a real resuscitation environment," and 2) TBS "addressed what Advanced Trauma and Life Support and Emergency Management of Severe Burns didn't" (including the efficacy of incorporating nontechnical skills). TBS provides a novel, effective simulation tool to significantly advance the delivery of burns education. Recreating clinical challenge is crucial to optimize simulation training. This low-cost approach also has major implications for surgical education, particularly during increasing financial austerity. Alternative scenarios and/or procedures can be recreated within TBS, providing a diverse educational immersive simulation experience.
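
    Cronbach's alpha, used here for scale reliability, is straightforward to compute from the item responses. The sketch below uses invented Likert responses, not the study's questionnaire data.

        # Cronbach's alpha for a k-item Likert questionnaire (invented responses).
        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """items: array of shape (n_respondents, k_items)."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1.0 - item_vars / total_var)

        responses = np.array([
            [5, 4, 5, 4], [4, 4, 5, 5], [5, 5, 4, 4],
            [4, 5, 5, 4], [5, 4, 4, 5], [4, 4, 4, 4],
        ])
        print(f"alpha = {cronbach_alpha(responses):.2f}")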

  18. Electromagnetic Particle-In-Cell simulation on the impedance of a dipole antenna surrounded by an ion sheath

    NASA Astrophysics Data System (ADS)

    Miyake, Y.; Usui, H.; Kojima, H.; Omura, Y.; Matsumoto, H.

    2008-06-01

    We have developed a new numerical tool for the analysis of antenna impedance in a plasma environment by making use of electromagnetic Particle-In-Cell (PIC) plasma simulations. To validate the developed tool, we first examined the antenna impedance in a homogeneous kinetic plasma and confirmed that the obtained results basically agree with the conventional theories. We next applied the tool to examine an ion-sheathed dipole antenna. The results confirmed that the inclusion of the ion-sheath effects reduces the capacitance below the electron plasma frequency. The results also revealed that the signature of impedance resonance observed at the plasma frequency is modified by the presence of the sheath. Since the sheath dynamics can be solved by the PIC scheme throughout the antenna analysis in a self-consistent manner, the developed tool is capable of performing the more practical and complicated antenna analyses that will be necessary in real space missions.

  19. SimDoseCT: dose reporting software based on Monte Carlo simulation for a 320 detector-row cone-beam CT scanner and ICRP computational adult phantoms

    NASA Astrophysics Data System (ADS)

    Cros, Maria; Joemai, Raoul M. S.; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal

    2017-08-01

    This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.
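
    Once organ doses are retrieved from the look-up tables, the effective dose follows the ICRP tissue weighting, E = sum over tissues of w_T times H_T. The sketch below uses made-up organ doses; the weighting factors follow the ICRP 103 tabulation as commonly listed, but are reproduced here from memory and should be verified before use.

        # Effective dose as the ICRP tissue-weighted sum of organ equivalent doses,
        # E = sum_T w_T * H_T.  Organ doses (mSv) below are made up; weights follow
        # the ICRP 103 tabulation (listed from memory -- verify before use).
        ICRP103_WT = {
            "red_bone_marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
            "breast": 0.12, "remainder": 0.12, "gonads": 0.08,
            "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
            "bone_surface": 0.01, "brain": 0.01, "salivary_glands": 0.01, "skin": 0.01,
        }

        organ_dose_mSv = {  # hypothetical look-up-table output for one acquisition
            "lung": 12.0, "breast": 10.5, "stomach": 3.2, "liver": 4.1,
            "red_bone_marrow": 5.0, "thyroid": 8.3,
        }

        effective_dose = sum(ICRP103_WT[t] * h for t, h in organ_dose_mSv.items())
        print(f"E ~ {effective_dose:.2f} mSv (organs not listed treated as 0)")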

  20. SimDoseCT: dose reporting software based on Monte Carlo simulation for a 320 detector-row cone-beam CT scanner and ICRP computational adult phantoms.

    PubMed

    Cros, Maria; Joemai, Raoul M S; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal

    2017-07-17

    This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.

  1. Validation of the AVM Blast Computational Modeling and Simulation Tool Set

    DTIC Science & Technology

    2015-08-04

    by-construction" methodology is powerful and would not be possible without high -level design languages to support validation and verification. [1,4...to enable the making of informed design decisions.  Enable rapid exploration of the design trade-space for high -fidelity requirements tradeoffs...live-fire tests, the jump height of the target structure is recorded by using either high speed cameras or a string pot. A simple projectile motion

  2. Evaluation of the hooghoudt and kirkham tile drain equations in the soil and water assessment tool to simulate tile flow and nitrate-nitrogen.

    PubMed

    Moriasi, Daniel N; Gowda, Prasanna H; Arnold, Jeffrey G; Mulla, David J; Ale, Srinivasulu; Steiner, Jean L; Tomer, Mark D

    2013-11-01

    Subsurface tile drains in agricultural systems of the midwestern United States are a major contributor of nitrate-N (NO3-N) loadings to hypoxic conditions in the Gulf of Mexico. Hydrologic and water quality models, such as the Soil and Water Assessment Tool, are widely used to simulate tile drainage systems. The Hooghoudt and Kirkham tile drain equations in the Soil and Water Assessment Tool have not been rigorously tested for predicting tile flow and the corresponding NO3-N losses. In this study, long-term (1983-1996) monitoring plot data from southern Minnesota were used to evaluate the SWAT version 2009 revision 531 (hereafter referred to as SWAT) model for accurately estimating subsurface tile drain flows and associated NO3-N losses. A retention parameter adjustment factor was incorporated to account for the effects of tile drainage and slope changes on the computation of surface runoff using the curve number method (hereafter referred to as Revised SWAT). The SWAT and Revised SWAT models were calibrated and validated for tile flow and associated NO3-N losses. Results indicated that, on average, Revised SWAT predicted monthly tile flow and associated NO3-N losses better than SWAT by 48 and 28%, respectively. For the calibration period, the Revised SWAT model simulated tile flow and NO3-N losses within 4 and 1% of the observed data, respectively. For the validation period, it simulated tile flow and NO3-N losses within 8 and 2%, respectively, of the observed values. Therefore, the Revised SWAT model is expected to provide more accurate simulation of the effectiveness of tile drainage and NO3-N management practices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
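
    For reference, the steady-state Hooghoudt drainage equation is commonly written as q = (8 Kb de m + 4 Ka m^2) / L^2; the helper below evaluates this textbook form with illustrative values. It is not necessarily the exact implementation used in the SWAT source code.

        # Steady-state Hooghoudt drain flux, q = (8*Kb*de*m + 4*Ka*m**2) / L**2
        # (common textbook form; not necessarily the exact SWAT implementation).
        def hooghoudt_flux(m, L, de, k_above, k_below):
            """m: midpoint water table height above drains (m)
            L: drain spacing (m); de: Hooghoudt equivalent depth (m)
            k_above/k_below: hydraulic conductivity above/below the drains (m/day)
            returns drainage flux (m/day)"""
            return (8.0 * k_below * de * m + 4.0 * k_above * m**2) / L**2

        q = hooghoudt_flux(m=0.6, L=30.0, de=1.2, k_above=0.5, k_below=0.8)
        print(f"q ~ {q * 1000:.2f} mm/day")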

  3. A Systematic Review of Tools Used to Assess Team Leadership in Health Care Action Teams.

    PubMed

    Rosenman, Elizabeth D; Ilgen, Jonathan S; Shandro, Jamie R; Harper, Amy L; Fernandez, Rosemarie

    2015-10-01

    To summarize the characteristics of tools used to assess leadership in health care action (HCA) teams. HCA teams are interdisciplinary teams performing complex, critical tasks under high-pressure conditions. The authors conducted a systematic review of the PubMed/MEDLINE, CINAHL, ERIC, EMBASE, PsycINFO, and Web of Science databases, key journals, and review articles published through March 2012 for English-language articles that applied leadership assessment tools to HCA teams in all specialties. Pairs of reviewers assessed identified articles for inclusion and exclusion criteria and abstracted data on study characteristics, tool characteristics, and validity evidence. Of the 9,913 abstracts screened, 83 studies were included. They described 61 team leadership assessment tools. Forty-nine tools (80%) provided behaviors, skills, or characteristics to define leadership. Forty-four tools (72%) assessed leadership as one component of a larger assessment, 13 tools (21%) identified leadership as the primary focus of the assessment, and 4 (7%) assessed leadership style. Fifty-three studies (64%) assessed leadership at the team level; 29 (35%) did so at the individual level. Assessments of simulated (n = 55) and live (n = 30) patient care events were performed. Validity evidence included content validity (n = 75), internal structure (n = 61), relationship to other variables (n = 44), and response process (n = 15). Leadership assessment tools applied to HCA teams are heterogeneous in content and application. Comparisons between tools are limited by study variability. A systematic approach to team leadership tool development, evaluation, and implementation will strengthen understanding of this important competency.

  4. A computational continuum model of poroelastic beds

    PubMed Central

    Zampogna, G. A.

    2017-01-01

    Despite the ubiquity of fluid flows interacting with porous and elastic materials, we lack a validated non-empirical macroscale method for characterizing the flow over and through a poroelastic medium. We propose a computational tool to describe such configurations by deriving and validating a continuum model for the poroelastic bed and its interface with the free fluid above. We show that, using a stress continuity condition and a slip velocity condition at the interface, the effective model correctly captures the effects of small changes in the microstructure anisotropy and predicts the overall behaviour in a physically consistent and controllable manner. Moreover, we show that the effective model is accurate by validating it against fully resolved microscopic simulations. The proposed computational tool can be used in investigations in a wide range of fields, including mechanical engineering, bio-engineering and geophysics. PMID:28413355

  5. Validated simulator for space debris removal with nets and other flexible tethers applications

    NASA Astrophysics Data System (ADS)

    Gołębiowski, Wojciech; Michalczyk, Rafał; Dyrek, Michał; Battista, Umberto; Wormnes, Kjetil

    2016-12-01

    In the context of active debris removal technologies and preparation activities for the e.Deorbit mission, a simulator for the dynamics of net-shaped elastic bodies and their interactions with rigid bodies has been developed. Its main application is to aid net design and to test scenarios for space debris deorbitation. The simulator can model all the phases of the debris capturing process: net launch, flight and wrapping around the target. It handles coupled simulation of rigid and flexible body dynamics. Flexible bodies were implemented using the Cosserat rod model, which allows simulation of flexible threads or wires with elasticity and damping for stretching, bending and torsion. Threads may be combined into structures of any topology, so the software is able to simulate nets, pure tethers, tether bundles, cages, trusses, etc. Full contact dynamics was implemented. Programmatic interaction with the simulation is possible, e.g., for control implementation. The underlying model has been experimentally validated; because of the significant influence of gravity, the experiment had to be performed in microgravity conditions. The validation experiment, flown on a parabolic flight, was a downscaled version of the Envisat capture process. The prepacked net was launched towards the satellite model; it expanded, hit the model and wrapped around it. The whole process was recorded with two fast stereographic camera sets for full 3D trajectory reconstruction. The trajectories were used to compare the net dynamics to the respective simulations and thus to validate the simulation tool. The experiments were performed on board a Falcon-20 aircraft operated by the National Research Council in Ottawa, Canada. Validation results show that the model reflects the physics of the phenomenon accurately enough that it may be used for scenario evaluation and mission design purposes. The functionalities of the simulator are described in detail in the paper, as well as its underlying model, sample cases and the methodology behind the validation. Results are presented and typical use cases are discussed, showing that the software may be used to design throw-nets for space debris capturing, but also to simulate the deorbitation process, the chaser control system or general interactions between rigid and elastic bodies - all in a convenient and efficient way. The presented work was led by SKA Polska under an ESA contract, within the CleanSpace initiative.
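
    The full Cosserat-rod formulation used in the simulator also handles bending and torsion. As a much simpler illustration of the stretching-plus-damping part only, the sketch below integrates a chain of point masses connected by spring-damper segments with semi-implicit Euler; all parameters are invented and this is not the paper's model.

        # Greatly simplified tether: point masses joined by spring-damper segments,
        # integrated with semi-implicit Euler.  This ignores bending and torsion,
        # which the Cosserat rod model in the paper does handle.
        import numpy as np

        n, seg_len = 20, 0.5            # nodes and rest length per segment (m)
        k, c, mass = 200.0, 0.5, 0.05   # stiffness (N/m), damping (N*s/m), node mass (kg)
        dt, steps = 1e-3, 2000

        pos = np.zeros((n, 3)); pos[:, 0] = np.arange(n) * seg_len
        vel = np.zeros((n, 3)); vel[-1] = [0.0, 2.0, 0.0]   # flick the free end sideways

        for _ in range(steps):
            force = np.zeros_like(pos)
            for i in range(n - 1):
                d = pos[i + 1] - pos[i]
                length = np.linalg.norm(d)
                direction = d / length
                rel_v = np.dot(vel[i + 1] - vel[i], direction)
                f = (k * (length - seg_len) + c * rel_v) * direction
                force[i] += f
                force[i + 1] -= f
            vel[1:] += (force[1:] / mass) * dt   # node 0 held fixed (anchored)
            pos[1:] += vel[1:] * dt

        print("free-end position:", np.round(pos[-1], 3))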

  6. Modeling, validation and analysis of a Whegs robot in the USARSim environment

    NASA Astrophysics Data System (ADS)

    Taylor, Brian K.; Balakirsky, Stephen; Messina, Elena; Quinn, Roger D.

    2008-04-01

    Simulation of robots in a virtual domain has multiple benefits. End users can use the simulation as a training tool to increase their skill with the vehicle without risking damage to the robot or the surrounding environment. Simulation allows researchers and developers to benchmark robot performance in a range of scenarios without having the physical robot or environment present. The simulation can also help guide and generate new design concepts. USARSim (Unified System for Automation and Robot Simulation) is a tool that is being used to accomplish these goals, particularly within the realm of search and rescue. It is based on the Unreal Tournament 2004 gaming engine, which approximates the physics of how a robot interacts with its environment. Whegs™ robots are one family of vehicles that can benefit from simulation in USARSim. Developed in the Biorobotics Laboratory at Case Western Reserve University, Whegs™ robots are highly mobile ground vehicles that use abstracted biological principles to achieve a robust level of locomotion, including passive gait adaptation and enhanced climbing abilities. This paper describes a Whegs™ robot model that was constructed in USARSim. The model was configured with the same kinds of behavioral characteristics found in real Whegs™ vehicles. Once these traits were implemented, a validation study was performed using identical performance metrics measured on both the virtual and real vehicles to quantify vehicle performance and to ensure that the virtual robot's performance matched that of the real robot.

  7. Using "The Burns Suite" as a Novel High Fidelity Simulation Tool for Interprofessional and Teamwork Training.

    PubMed

    Sadideen, Hazim; Wilson, David; Moiemen, Naiem; Kneebone, Roger

    2016-01-01

    Educational theory highlights the importance of contextualized simulation for effective learning. The authors recently published the concept of "The Burns Suite" (TBS) as a novel tool to advance the delivery of burns education for residents/clinicians. Effectively, TBS represents a low-cost, high-fidelity, portable, immersive simulation environment. Recently, simulation-based team training (SBTT) has been advocated as a means to improve interprofessional practice. The authors aimed to explore the role of TBS in SBTT. A realistic pediatric burn resuscitation scenario was designed based on "advanced trauma and life support" and "emergency management of severe burns" principles, refined utilizing expert opinion through cognitive task analysis. The focus of this analysis was on nontechnical and interpersonal skills of clinicians and nurses within the scenario, mirroring what happens in real life. Five-point Likert-type questionnaires were developed for face and content validity. Cronbach's alpha was calculated for scale reliability. Semistructured interviews captured responses for qualitative thematic analysis allowing for data triangulation. Twenty-two participants completed the TBS resuscitation scenario. Mean face and content validity ratings were high (4.4 and 4.7, respectively; range, 4-5). The internal consistency of questions was high. Qualitative data analysis revealed two new themes. Participants reported that the experience felt particularly authentic because the simulation had high psychological and social fidelity, and there was a demand for such a facility to be made available to improve nontechnical skills and interprofessional relations. TBS provides a realistic, novel tool for SBTT, addressing both nontechnical and interprofessional team skills. Recreating clinical challenge is crucial to optimize SBTT. With a better understanding of the theories underpinning simulation and interprofessional education, future simulation scenarios can be designed to provide unique educational experiences whereby team members will learn with and from other specialties and professions in a safe, controlled environment.

  8. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.

    2011-02-01

    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analyses to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  9. Simulation Tool for Dielectric Barrier Discharge Plasma Actuators at Atmospheric and Sub-Atmospheric Pressures: SBIR Phase I Final Report

    NASA Technical Reports Server (NTRS)

    Likhanskii, Alexandre

    2012-01-01

    This report is the final report of an SBIR Phase I project. It is identical to the final report submitted, after some proprietary information of an administrative nature has been removed. The development of a numerical simulation tool for dielectric barrier discharge (DBD) plasma actuators is reported. The objectives of the project were to analyze and predict DBD operation over a wide range of ambient gas pressures. The tool overcomes the limitations of traditional DBD codes, which are restricted to low-speed applications and have weak predictive capabilities. The software tool allows DBD actuator analysis and prediction from the subsonic to the hypersonic flow regime. The simulation tool is based on the VORPAL code developed by Tech-X Corporation. VORPAL's capability of modeling a DBD plasma actuator at low pressures (0.1 to 10 torr) using a kinetic plasma modeling approach, and at moderate to atmospheric pressures (1 to 10 atm) using a hydrodynamic plasma modeling approach, was demonstrated. In addition, results of experiments with a pulsed+bias DBD configuration that were performed for validation purposes are reported.

  10. Numerical Simulation of Molten Flow in Directed Energy Deposition Using an Iterative Geometry Technique

    NASA Astrophysics Data System (ADS)

    Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil

    2018-03-01

    The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.

  11. Numerical Simulation of Molten Flow in Directed Energy Deposition Using an Iterative Geometry Technique

    NASA Astrophysics Data System (ADS)

    Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil

    2018-06-01

    The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.

  12. Face and content validity of Xperience™ Team Trainer: bed-side assistant training simulator for robotic surgery.

    PubMed

    Sessa, Luca; Perrenot, Cyril; Xu, Song; Hubert, Jacques; Bresler, Laurent; Brunaud, Laurent; Perez, Manuela

    2018-03-01

    In robotic surgery, the coordination between the console-side surgeon and the bed-side assistant is crucial, more so than in standard surgery or laparoscopy, where the surgical team works in close contact. Xperience™ Team Trainer (XTT) is a new optional component for the dv-Trainer® platform and simulates the patient-side working environment. We present preliminary results on face validity, content validity, and the workload imposed by the XTT virtual reality platform when used for psychomotor and communication skills training of the bed-side assistant in robot-assisted surgery. Participants were categorized into "Beginners" and "Experts". They tested a series of exercises (Pick & Place Laparoscopic Demo, Pick & Place 2 and Team Match Board 1) and completed face validity questionnaires. "Experts" assessed content validity on another questionnaire. All the participants completed a NASA Task Load Index questionnaire to assess the workload imposed by XTT. Twenty-one consenting participants were included (12 "Beginners" and 9 "Experts"). XTT was shown to possess face and content validity, as evidenced by the rankings given on the simulator's ease of use and realism parameters and on the simulator's usefulness for training. Eight out of nine "Experts" judged the visualization of metrics after the exercises useful. However, face validity showed some weaknesses regarding interactions and instruments. Reasonable workload parameters were registered. XTT demonstrated excellent face and content validity with acceptable workload parameters. XTT could become a useful tool for robotic surgery team training.

  13. Face validity, construct validity and training benefits of a virtual reality TURP simulator.

    PubMed

    Bright, Elizabeth; Vine, Samuel; Wilson, Mark R; Masters, Rich S W; McGrath, John S

    2012-01-01

    To assess the face validity, construct validity and training benefits of a virtual reality TURP simulator. Eleven novices (no TURP experience) and 7 experts (>200 TURPs) completed a virtual reality median lobe prostate resection task on the TURPsim™ (Simbionix USA Corp., Cleveland, OH). Performance indicators (percentage of prostate resected (PR), percentage of capsular resection (CR) and time diathermy loop active without tissue contact (TAWC)) were recorded via the TURPsim™ and compared between novices and experts to assess construct validity. Verbal comments provided by experts following task completion were used to assess face validity. Repeated attempts of the task by the novices were analysed to assess the training benefits of the TURPsim™. Experts resected a significantly greater percentage of prostate per minute (p < 0.01) and had significantly less active diathermy time without tissue contact (p < 0.01) than novices. After practice, novices were able to perform the simulation more effectively, with significant improvement in all measured parameters. Improvement in performance was noted in novices following repetitive training, as evidenced by improved TAWC scores that were not significantly different from those of the expert group (p = 0.18). This study has established face and construct validity for the TURPsim™. The potential benefit of using this tool to train novices has also been demonstrated. Copyright © 2012 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  14. ARC integration into the NEAMS Workbench

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stauff, N.; Gaughan, N.; Kim, T.

    2017-01-01

    One of the objectives of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Integration Product Line (IPL) is to facilitate the deployment of the high-fidelity codes developed within the program. The Workbench initiative was launched in FY-2017 by the IPL to facilitate the transition from conventional tools to high fidelity tools. The Workbench provides a common user interface for model creation, real-time validation, execution, output processing, and visualization for integrated codes.

  15. Tools for Evaluating Fault Detection and Diagnostic Methods for HVAC Secondary Systems

    NASA Astrophysics Data System (ADS)

    Pourarian, Shokouh

    Although modern buildings are using increasingly sophisticated energy management and control systems that have tremendous control and monitoring capabilities, building systems routinely fail to perform as designed. More advanced building control, operation, and automated fault detection and diagnosis (AFDD) technologies are needed to achieve the goal of net-zero energy commercial buildings. Much effort has been devoted to developing such technologies for primary heating, ventilating and air conditioning (HVAC) systems and for some secondary systems. However, secondary systems such as fan coil units and dual duct systems, although widely used in commercial, industrial, and multifamily residential buildings, have received very little attention. This research study aims at developing tools that provide simulation capabilities to develop and evaluate advanced control, operation, and AFDD technologies for these less studied secondary systems. In this study, HVACSIM+ is selected as the simulation environment. Besides developing dynamic models for the above-mentioned secondary systems, two other issues related to the HVACSIM+ environment are also investigated. One issue is the nonlinear equation solver used in HVACSIM+ (Powell's Hybrid method in subroutine SNSQ). Several previous research projects (ASHRAE RP 825 and 1312) found that SNSQ is especially unstable at the beginning of a simulation and is sometimes unable to converge to a solution. Another issue is related to the zone model in the HVACSIM+ library of components. Dynamic simulation of secondary HVAC systems unavoidably requires a zone model that interacts dynamically with the building's surroundings; the accuracy and reliability of the building zone model therefore affect the operational data generated by the developed dynamic tool for predicting HVAC secondary system behavior. The available model does not simulate the impact of direct solar radiation that enters a zone through glazing, and the existing zone model is modified to address this. In this research project, the following tasks are completed and summarized in this report: 1. Develop dynamic simulation models in the HVACSIM+ environment for common fan coil unit and dual duct system configurations; the developed simulation models are able to produce both fault-free and faulty operational data under a wide variety of faults and severity levels for advanced control, operation, and AFDD technology development and evaluation purposes. 2. Develop a model structure, which includes the grouping of blocks and superblocks, treatment of state variables, initial and boundary conditions, and selection of the equation solver, that can simulate a dual duct system efficiently with satisfactory stability. 3. Design and conduct a comprehensive and systematic validation procedure using collected experimental data to validate the developed simulation models under both fault-free and faulty operational conditions. 4. Conduct a numerical study to compare two solution techniques, Powell's Hybrid (PH) and Levenberg-Marquardt (LM), in terms of their robustness and accuracy (see the sketch below). 5. Modify the thermal state calculation of the existing building zone model in the HVACSIM+ library of components; this component is revised to consider the heat transmitted through glazing as a heat source for transient building zone load prediction. In this report, the literature, including existing HVAC dynamic modeling environments and models, HVAC model validation methodologies, and fault modeling and validation methodologies, is reviewed. The overall methodologies used for fault-free and fault model development and validation are introduced. Detailed model development and validation results for the two secondary systems, i.e., the fan coil unit and the dual duct system, are summarized. Experimental data, mostly from the Iowa Energy Center Energy Resource Station, are used to validate the models developed in this project. Satisfactory model performance in both fault-free and fault simulation studies is observed for all studied systems.
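
    Task 4 above (comparing Powell's Hybrid with Levenberg-Marquardt) can be prototyped outside HVACSIM+ with SciPy, which exposes both algorithms through scipy.optimize.root. The small test system below is purely illustrative and is not an HVAC component model.

        # Comparing Powell's hybrid ('hybr') and Levenberg-Marquardt ('lm') root
        # finders on a small nonlinear system (toy equations, not an HVAC model).
        import numpy as np
        from scipy.optimize import root

        def residuals(x):
            return [x[0]**2 + x[1]**2 - 4.0,          # circle of radius 2
                    np.exp(x[0]) + x[1] - 1.0]        # exponential curve

        x0 = [1.0, 1.0]
        for method in ("hybr", "lm"):
            sol = root(residuals, x0, method=method)
            print(f"{method:>4}: success={sol.success}, x={np.round(sol.x, 4)}, "
                  f"nfev={sol.nfev}")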

  16. Polycrystalline CVD diamond device level modeling for particle detection applications

    NASA Astrophysics Data System (ADS)

    Morozzi, A.; Passeri, D.; Kanxheri, K.; Servoli, L.; Lagomarsino, S.; Sciortino, S.

    2016-12-01

    Diamond is a promising material whose excellent physical properties foster its use for radiation detection applications, in particular in hostile operating environments where the behavior of silicon-based detectors is limited by the high radiation fluence. Within this framework, the application of Technology Computer Aided Design (TCAD) simulation tools is highly attractive for the study, optimization and predictive analysis of sensing devices. Because diamond is a novel material in electronics, it is not included in the libraries of commercial, state-of-the-art TCAD software tools. In this work, we propose the development, application and validation of numerical models to simulate the electrical behavior of polycrystalline (pc)CVD diamond conceived for particle detection sensors. The model focuses on the characterization of a physically based pcCVD diamond bandgap, taking into account deep-level defects acting as recombination centers and/or trap states. While a definitive picture of the polycrystalline diamond bandgap is still debated, the effect of the main parameters (e.g. trap densities, capture cross-sections, etc.) can be investigated in depth thanks to the simulation approach. The charge collection efficiency under β-particle irradiation of diamond materials provided by different vendors and with different electrode configurations has been selected as the figure of merit for the model validation. The good agreement between measurements and simulation findings, keeping the trap density as the only fitting parameter, demonstrates the suitability of the TCAD modeling approach as a predictive tool for the design and optimization of diamond-based radiation detectors.
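
    A common back-of-the-envelope check when relating trapping parameters to measured charge collection efficiency is the single-carrier Hecht relation, CCE = (lambda/d)(1 - exp(-d/lambda)) with drift length lambda = mu*tau*E. The values below are illustrative only; this is not the TCAD deep-level model described in the paper.

        # Single-carrier Hecht relation linking trapping (via drift length
        # lambda = mu*tau*E) to charge collection efficiency in a planar detector
        # of thickness d.  Illustrative values only; not the TCAD trap model.
        import math

        def hecht_cce(mu_cm2_Vs, tau_s, E_V_cm, d_cm):
            lam = mu_cm2_Vs * tau_s * E_V_cm          # drift length (cm)
            return (lam / d_cm) * (1.0 - math.exp(-d_cm / lam))

        cce = hecht_cce(mu_cm2_Vs=1800.0, tau_s=5e-9, E_V_cm=1e4, d_cm=0.05)
        print(f"CCE ~ {cce:.2f}")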

  17. Modelling of peak temperature during friction stir processing of magnesium alloy AZ91

    NASA Astrophysics Data System (ADS)

    Vaira Vignesh, R.; Padmanaban, R.

    2018-02-01

    Friction stir processing (FSP) is a solid-state processing technique with the potential to modify the properties of a material through microstructural modification. The study of heat transfer in FSP aids in the identification of defects such as flash, inadequate heat input, and poor material flow and mixing. In this paper, the transient temperature distribution during FSP of magnesium alloy AZ91 was simulated using finite element modelling. The numerical model results were validated using experimental results from the published literature. The model was used to predict the peak temperature obtained during FSP for various process parameter combinations. The simulated peak temperature results were used to develop a statistical model. The effect of the process parameters, namely tool rotation speed, tool traverse speed and tool shoulder diameter, on the peak temperature was investigated using the developed statistical model. It was found that peak temperature was directly proportional to tool rotation speed and shoulder diameter and inversely proportional to tool traverse speed.
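
    The reported trend (peak temperature increasing with rotation speed and shoulder diameter, decreasing with traverse speed) can be captured with, for example, a power-law regression fitted in log space to the simulated peak temperatures. The data below are invented, and the paper's actual statistical model form is not specified here.

        # Fitting a power-law model T_peak = a * N**b1 * D**b2 * v**b3 by linear
        # least squares in log space (invented data; the paper's statistical model
        # form may differ).
        import numpy as np

        N = np.array([600, 800, 1000, 1200, 1400], float)   # rotation speed (rpm)
        v = np.array([40, 30, 25, 20, 15], float)           # traverse speed (mm/min)
        D = np.array([12, 15, 15, 18, 18], float)           # shoulder diameter (mm)
        T = np.array([380, 420, 445, 480, 505], float)      # simulated peak temp (C)

        X = np.column_stack([np.ones_like(N), np.log(N), np.log(D), np.log(v)])
        coef, *_ = np.linalg.lstsq(X, np.log(T), rcond=None)
        a, b1, b2, b3 = np.exp(coef[0]), coef[1], coef[2], coef[3]
        print(f"T ~ {a:.1f} * N^{b1:.2f} * D^{b2:.2f} * v^{b3:.2f}")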

  18. Noncredible cognitive performance at clinical evaluation of adult ADHD: An embedded validity indicator in a visuospatial working memory test.

    PubMed

    Fuermaier, Anselm B M; Tucha, Oliver; Koerts, Janneke; Lange, Klaus W; Weisbrod, Matthias; Aschenbrenner, Steffen; Tucha, Lara

    2017-12-01

    The assessment of performance validity is an essential part of the neuropsychological evaluation of adults with attention-deficit/hyperactivity disorder (ADHD). Most available tools, however, are inaccurate in identifying noncredible performance. This study describes the development of a visuospatial working memory test that includes a validity indicator for noncredible cognitive performance of adults with ADHD. Visuospatial working memory of adults with ADHD (n = 48) was first compared to the test performance of healthy individuals (n = 48). Furthermore, a simulation design was employed, including 252 individuals who were randomly assigned either to a control group (n = 48) or to 1 of 3 simulation groups who were requested to feign ADHD (n = 204). Additional samples of 27 adults with ADHD and 69 instructed simulators were included to cross-validate findings from the first samples. Adults with ADHD showed a visuospatial working memory impairment of medium effect size compared to healthy individuals. The simulation groups committed significantly more errors and had shorter response times than patients with ADHD. Moreover, binary logistic regression analysis was carried out to derive a validity index that optimally differentiates between true and feigned ADHD. ROC analysis demonstrated high classification rates for the validity index, as shown in excellent specificity (95.8%) and adequate sensitivity (60.3%). The visuospatial working memory test presented in this study therefore appears sensitive in indicating cognitive impairment in adults with ADHD. Furthermore, the embedded validity index revealed promising results concerning the detection of noncredible cognitive performance of adults with ADHD. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
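
    The derivation of the embedded validity index, a logistic regression separating genuine from feigned ADHD followed by ROC analysis, can be sketched with scikit-learn on synthetic error-count and response-time features. The data, feature choices and threshold rule below are assumptions for illustration, not the study's sample or exact model.

        # Sketch of deriving a validity index with logistic regression and ROC
        # analysis (synthetic features; not the study's data or exact model).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(1)
        n = 300
        # features: [error count, mean response time (s)] -- simulators tend to make
        # more errors and respond faster, mirroring the abstract's description
        genuine = np.column_stack([rng.poisson(6, n), rng.normal(1.4, 0.2, n)])
        feigned = np.column_stack([rng.poisson(14, n), rng.normal(1.1, 0.2, n)])
        X = np.vstack([genuine, feigned])
        y = np.r_[np.zeros(n), np.ones(n)]          # 1 = feigned ADHD

        clf = LogisticRegression().fit(X, y)
        score = clf.predict_proba(X)[:, 1]          # the "validity index"
        fpr, tpr, thresholds = roc_curve(y, score)
        # pick the threshold whose specificity is closest to 95%
        idx = np.argmin(np.abs((1 - fpr) - 0.95))
        print(f"AUC = {roc_auc_score(y, score):.2f}, "
              f"sensitivity at ~95% specificity = {tpr[idx]:.2f}")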

  19. An open source platform for multi-scale spatially distributed simulations of microbial ecosystems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segre, Daniel

    2014-08-14

    The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.
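
    The core loop of a spatially distributed dynamic-FBA simulation alternates metabolic growth with diffusion of biomass and nutrients on a grid. The sketch below replaces the actual flux balance solve with a Monod-style stand-in and uses periodic boundaries for brevity, so it illustrates the loop structure of COMETS-like tools rather than COMETS itself.

        # Toy spatial dynamic-FBA loop: a Monod-style growth function stands in for
        # the real flux balance solve, to show the structure COMETS-like tools use
        # (not COMETS code).
        import numpy as np

        nx, dt, steps = 30, 0.1, 200
        D_nutrient, D_biomass = 0.05, 0.005      # diffusion coefficients (arbitrary)
        vmax, km, yield_ = 0.8, 0.5, 0.4         # stand-in "FBA" kinetics

        nutrient = np.full(nx, 5.0)
        biomass = np.zeros(nx); biomass[nx // 2] = 0.01   # inoculate the center

        def diffuse(field, d):
            # simple explicit diffusion step with periodic boundaries
            lap = np.roll(field, 1) + np.roll(field, -1) - 2 * field
            return field + d * lap

        for _ in range(steps):
            uptake = vmax * nutrient / (km + nutrient)       # stand-in for FBA solve
            growth = yield_ * uptake * biomass
            biomass += growth * dt
            nutrient = np.maximum(nutrient - uptake * biomass * dt, 0.0)
            biomass = diffuse(biomass, D_biomass)
            nutrient = diffuse(nutrient, D_nutrient)

        print(f"total biomass = {biomass.sum():.3f}, "
              f"nutrient remaining = {nutrient.sum():.1f}")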

  20. Simulation of SEU Cross-sections using MRED under Conditions of Limited Device Information

    NASA Technical Reports Server (NTRS)

    Lauenstein, J. M.; Reed, R. A.; Weller, R. A.; Mendenhall, M. H.; Warren, K. M.; Pellish, J. A.; Schrimpf, R. D.; Sierawski, B. D.; Massengill, L. W.; Dodd, P. E.

    2007-01-01

    This viewgraph presentation reviews the simulation of Single Event Upset (SEU) cross sections using the Monte Carlo Radiative Energy Deposition (MRED) tool, using "best guess" assumptions about the process and geometry together with direct-ionization, low-energy beam test results. This work also simulates SEU cross-sections including angular and high-energy responses and compares the simulated results with beam test data for validation of the model. Using MRED, we produced a reasonably accurate upset response model of a low-critical-charge SRAM without detailed information about the circuit, device geometry, or fabrication process.

  1. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
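
    The benchmarking goal described, finding VM instances with the best balance of simulation performance and cost, ultimately reduces to ranking instances by a metric such as benchmark throughput per dollar. The instance names, prices and throughputs below are entirely hypothetical and are not Maestro's procedure.

        # Ranking cloud VM instances by benchmark throughput per dollar
        # (hypothetical instance names, prices and throughputs).
        instances = [
            # (name, USD per hour, simulated events per hour from a benchmark run)
            ("small",  0.10,  40_000),
            ("medium", 0.20,  95_000),
            ("large",  0.40, 170_000),
            ("xlarge", 0.80, 250_000),
        ]

        ranked = sorted(instances, key=lambda r: r[2] / r[1], reverse=True)
        for name, price, events in ranked:
            print(f"{name:>6}: {events / price:,.0f} events per dollar")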

  2. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  3. An RL10A-3-3A rocket engine model using the rocket engine transient simulator (ROCETS) software

    NASA Technical Reports Server (NTRS)

    Binder, Michael

    1993-01-01

    Steady-state and transient computer models of the RL10A-3-3A rocket engine have been created using the Rocket Engine Transient Simulation (ROCETS) code. These models were created for several purposes. The RL10 engine is a critical component of past, present, and future space missions; the model will give NASA an in-house capability to simulate the performance of the engine under various operating conditions and mission profiles. The RL10 simulation activity is also an opportunity to further validate the ROCETS program. The ROCETS code is an important tool for modeling rocket engine systems at NASA Lewis. ROCETS provides a modular and general framework for simulating the steady-state and transient behavior of any desired propulsion system. Although the ROCETS code is being used in a number of different analysis and design projects within NASA, it has not been extensively validated for any system using actual test data. The RL10A-3-3A has a ten year history of test and flight applications; it should provide sufficient data to validate the ROCETS program capability. The ROCETS models of the RL10 system were created using design information provided by Pratt & Whitney, the engine manufacturer. These models are in the process of being validated using test-stand and flight data. This paper includes a brief description of the models and comparison of preliminary simulation output against flight and test-stand data.

  4. Catchment-scale Validation of a Physically-based, Post-fire Runoff and Erosion Model

    NASA Astrophysics Data System (ADS)

    Quinn, D.; Brooks, E. S.; Robichaud, P. R.; Dobre, M.; Brown, R. E.; Wagenbrenner, J.

    2017-12-01

    The cascading consequences of fire-induced ecological changes have profound impacts on both natural and managed forest ecosystems. Forest managers tasked with implementing post-fire mitigation strategies need robust tools to evaluate the effectiveness of their decisions, particularly those affecting hydrological recovery. Various hillslope-scale interfaces of the physically-based Water Erosion Prediction Project (WEPP) model have been successfully validated for this purpose using fire-affected plot experiments; however, these interfaces are explicitly designed to simulate single hillslopes. Spatially-distributed, catchment-scale WEPP interfaces have been developed over the past decade; however, none have been validated for post-fire simulations, posing a barrier to adoption for forest managers. In this validation study, we compare WEPP simulations with pre- and post-fire hydrological records for three forested catchments (W. Willow, N. Thomas, and S. Thomas) that burned in the 2011 Wallow Fire in Northeastern Arizona, USA. Simulations were conducted using two approaches: the first used automatically created inputs from an online, spatial, post-fire WEPP interface, and the second used manually created inputs that incorporate the spatial variability of fire effects observed in the field. Both approaches were compared to five years of observed post-fire sediment and flow data to assess goodness of fit.
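
    The abstract does not name the goodness-of-fit statistic used; the Nash-Sutcliffe efficiency is a common choice for comparing simulated and observed flow or sediment series, computed as below with placeholder data rather than the Wallow Fire records.

        # Nash-Sutcliffe efficiency, NSE = 1 - sum((obs-sim)^2) / sum((obs-mean)^2),
        # a common hydrologic goodness-of-fit metric (placeholder series shown).
        import numpy as np

        def nse(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        observed_flow = [0.0, 0.3, 1.8, 4.2, 2.1, 0.9, 0.2]   # e.g. mm/day
        simulated_flow = [0.1, 0.4, 1.5, 3.8, 2.5, 0.7, 0.3]
        print(f"NSE = {nse(observed_flow, simulated_flow):.2f}")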

  5. Use of measurement theory for operationalization and quantification of psychological constructs in systems dynamics modelling

    NASA Astrophysics Data System (ADS)

    Fitkov-Norris, Elena; Yeghiazarian, Ara

    2016-11-01

    The analytical tools available to social scientists have traditionally been adapted from tools originally designed for the analysis of natural science phenomena. This article discusses the applicability of systems dynamics, a qualitatively based modelling approach, as a possible analysis and simulation tool that bridges the gap between the social and natural sciences. After a brief overview of the systems dynamics modelling methodology, the advantages and limiting factors of applying systems dynamics to the social sciences and the study of human interactions are discussed. Issues arise with regard to the operationalization and quantification of latent constructs at the simulation-building stage of the systems dynamics methodology; measurement theory is proposed as a ready solution to the problem of dynamic model calibration, with a view to improving simulation model reliability and validity and encouraging the development of standardised, modular system dynamics models that can be used in social science research.

  6. Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts

    NASA Astrophysics Data System (ADS)

    Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo

    This paper demonstrates the successful printing and optimization of processing parameters of high-strength H13 tool steel by Selective Laser Melting (SLM). A D-Optimal Design of Experiments (DOE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1×1×1 cm) we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model of the SLM process is constructed and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before printing any samples. Establishing a parameter window provides the user with freedom in parameter selection, such as choosing parameters that result in the fastest print speed.
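
    As a hedged sketch of how a DOE-style parameter window can be summarized, the code below fits a simple quadratic response surface relating density to laser power, scan speed and hatch width; the observations are invented placeholders, not the paper's DOE data, and the paper's actual model form is not reproduced.

```python
import numpy as np

# Illustrative (power [W], scan speed [mm/s], hatch width [mm]) -> relative density [%]
# observations; these numbers are placeholders, not the paper's DOE data.
X_raw = np.array([
    [175, 600, 0.10], [175, 800, 0.12], [200, 600, 0.12],
    [200, 800, 0.10], [225, 700, 0.11], [250, 900, 0.12],
    [250, 600, 0.10], [225, 900, 0.13], [200, 700, 0.11],
])
density = np.array([99.2, 98.7, 99.4, 99.1, 99.5, 98.5, 99.3, 98.2, 99.4])

def design_matrix(X):
    """Quadratic response surface in the three process parameters (main effects + squares)."""
    p, v, h = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), p, v, h, p**2, v**2, h**2])

coef, *_ = np.linalg.lstsq(design_matrix(X_raw), density, rcond=None)

# Predict density for a candidate parameter set inside the explored window.
candidate = np.array([[210, 650, 0.11]])
print("predicted density [%]:", design_matrix(candidate) @ coef)
```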

  7. Hierarchical CAD Tools for Radiation Hardened Mixed Signal Electronic Circuits

    DTIC Science & Technology

    2005-01-28

    [Excerpt from the report's list of figures: Schematic of Analog and Digital Components; Dose Rate Syntax; Single Event Effects (SEE) Syntax; Harmony-AMS simulation of a Digital Phase Locked Loop; SEE results from DPLL Simulation; Published results used for validation.] ...analog and digital circuitry. Combining the analog and digital elements onto a single chip has several advantages, but also creates unique challenges.

  8. Development and application of incrementally complex tools for wind turbine aerodynamics

    NASA Astrophysics Data System (ADS)

    Gundling, Christopher H.

    Advances and availability of computational resources have made wind farm design using simulation tools a reality. Wind farms are battling two issues, affecting the cost of energy, that will make or break many future investments in wind energy. The most significant issue is the power reduction of downstream turbines operating in the wake of upstream turbines. The loss of energy from wind turbine wakes is difficult to predict and the underestimation of energy losses due to wakes has been a common problem throughout the industry. The second issue is a shorter lifetime of blades and past failures of gearboxes due to increased fluctuations in the unsteady loading of waked turbines. The overall goal of this research is to address these problems by developing a platform for a multi-fidelity wind turbine aerodynamic performance and wake prediction tool. Full-scale experiments in the field have dramatically helped researchers understand the unique issues inside a large wind farm, but experimental methods can only be used to a limited extent due to the cost of such field studies and the size of wind farms. The uncertainty of the inflow is another inherent drawback of field experiments. Therefore, computational fluid dynamics (CFD) predictions, strategically validated using carefully performed wind farm field campaigns, are becoming a more standard design practice. The developed CFD models include a blade element model (BEM) code with a free-vortex wake, an actuator disk or line based method with large eddy simulations (LES) and a fully resolved rotor based method with detached eddy simulations (DES) and adaptive mesh refinement (AMR). To create more realistic simulations, performance of a one-way coupling between different mesoscale atmospheric boundary layer (ABL) models and the three microscale CFD solvers is tested. These methods are validated using data from incrementally complex test cases that include the NREL Phase VI wind tunnel test, the Sexbierum wind farm and the Lillgrund offshore wind farm. By cross-comparing the lowest complexity free-vortex method with the higher complexity methods, a fast and accurate simulation tool has been generated that can perform wind farm simulations in a few hours.
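
    As a minimal, hedged illustration of the lowest-fidelity model family mentioned above, the sketch below iterates the classical blade-element-momentum induction factors for a single annulus of an idealized rotor; the geometry, airfoil polar, and operating point are invented for illustration and do not correspond to the NREL Phase VI or other cases in the thesis.

```python
import numpy as np

# Minimal blade-element-momentum (BEM) sketch for one annulus of an idealized rotor.
B = 3                  # number of blades
R = 40.0               # rotor radius [m]
r = 30.0               # radius of this annulus [m]
chord = 2.0            # local chord [m]
twist = np.radians(4)  # local twist [rad]
tsr = 7.0              # tip-speed ratio
lambda_r = tsr * r / R
sigma = B * chord / (2 * np.pi * r)   # local solidity

def polar(alpha):
    """Very simple airfoil polar: thin-airfoil lift slope, constant drag."""
    return 2 * np.pi * alpha, 0.01

a, a_prime = 0.3, 0.0
for _ in range(200):                  # fixed-point iteration on the induction factors
    phi = np.arctan2(1 - a, (1 + a_prime) * lambda_r)   # inflow angle
    cl, cd = polar(phi - twist)
    cn = cl * np.cos(phi) + cd * np.sin(phi)
    ct = cl * np.sin(phi) - cd * np.cos(phi)
    a_new = 1.0 / (4 * np.sin(phi) ** 2 / (sigma * cn) + 1)
    ap_new = 1.0 / (4 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1)
    if abs(a_new - a) < 1e-8 and abs(ap_new - a_prime) < 1e-8:
        break
    a, a_prime = a_new, ap_new

print(f"axial induction a = {a:.3f}, tangential induction a' = {a_prime:.4f}")
```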

  9. Validation of NOViSE.

    PubMed

    Korzeniowski, Przemyslaw; Brown, Daniel C; Sodergren, Mikael H; Barrow, Alastair; Bello, Fernando

    2017-02-01

    The goal of this study was to establish face, content, and construct validity of NOViSE-the first force-feedback enabled virtual reality (VR) simulator for natural orifice transluminal endoscopic surgery (NOTES). Fourteen surgeons and surgical trainees performed 3 simulated hybrid transgastric cholecystectomies using a flexible endoscope on NOViSE. Four of them were classified as "NOTES experts" who had independently performed 10 or more simulated or human NOTES procedures. Seven participants were classified as "Novices" and 3 as "Gastroenterologists" with no or minimal NOTES experience. A standardized 5-point Likert-type scale questionnaire was administered to assess the face and content validity. NOViSE showed good overall face and content validity. In 14 out of 15 statements pertaining to face validity (graphical appearance, endoscope and tissue behavior, overall realism), ≥50% of responses were "agree" or "strongly agree." In terms of content validity, 85.7% of participants agreed or strongly agreed that NOViSE is a useful training tool for NOTES and 71.4% that they would recommend it to others. Construct validity was established by comparing a number of performance metrics such as task completion times, path lengths, applied forces, and so on. NOViSE demonstrated early signs of construct validity. Experts were faster and used a shorter endoscopic path length than novices in all but one task. The results indicate that NOViSE authentically recreates a transgastric hybrid cholecystectomy and sets promising foundations for the further development of a VR training curriculum for NOTES without compromising patient safety or requiring expensive animal facilities.

  10. A New Approach to Computing Information in Measurements of Non-Resolved Space Objects by the Falcon Telescope Network

    DTIC Science & Technology

    2014-09-01

    Analysis Simulation for Advanced Tracking (TASAT) satellite modeling tool [8,9]. The method uses the bidirectional reflectance distribution functions (BRDF)... [Remaining excerpt consists of reference-list fragments (e.g., Directional Reflectance Model Validation and Utilization, Air Force Avionics Laboratory Technical Report AFAL-TR-73-303, October 1973) and standard report-documentation-page boilerplate.]

  11. Mining for Data

    NASA Technical Reports Server (NTRS)

    1998-01-01

    AbTech Corporation used an F-18 HARV (High Alpha Research Vehicle) simulation developed by NASA to create an interactive computer-based prototype of the MQ (Model Quest) SV (System Validator) tool. Dryden Flight Research Center provided support to develop, test, and rapidly reprogram the validation function. AbTech's ModelQuest Enterprises is highly automated and outperforms other modeling techniques in quickly discovering meaningful relationships, patterns, and trends in databases. Its users include technical and business professionals in finance, marketing, business, banking, retail, healthcare, and aerospace.

  12. Marshall Space Flight Center's Virtual Reality Applications Program 1993

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P., II

    1993-01-01

    A Virtual Reality (VR) applications program has been under development at the Marshall Space Flight Center (MSFC) since 1989. Other NASA Centers, most notably Ames Research Center (ARC), have contributed to the development of the VR enabling technologies and VR systems. This VR technology development has now reached a level of maturity where specific applications of VR as a tool can be considered. The objectives of the MSFC VR Applications Program are to develop, validate, and utilize VR as a Human Factors design and operations analysis tool and to assess and evaluate VR as a tool in other applications (e.g., training, operations development, mission support, teleoperations planning, etc.). The long-term goals of this technology program are to enable specialized Human Factors analyses earlier in the hardware and operations development process and to develop more effective training and mission support systems. The capability to perform specialized Human Factors analyses earlier in the hardware and operations development process is required to better refine and validate requirements during the requirements definition phase. This leads to a more efficient design process where perturbations caused by late-occurring requirements changes are minimized. A validated set of VR analytical tools must be developed to enable a more efficient process for the design and development of space systems and operations. Similarly, training and mission support systems must exploit state-of-the-art computer-based technologies to maximize training effectiveness and enhance mission support. The approach of the VR Applications Program is to develop and validate appropriate virtual environments and associated object kinematic and behavior attributes for specific classes of applications. These application-specific environments and associated simulations will be validated, where possible, through empirical comparisons with existing, accepted tools and methodologies. These validated VR analytical tools will then be available for use in the design and development of space systems and operations and in training and mission support systems.

  13. Simulated single molecule microscopy with SMeagol.

    PubMed

    Lindén, Martin; Ćurić, Vladimir; Boucharin, Alexis; Fange, David; Elf, Johan

    2016-08-01

    SMeagol is a software tool to simulate highly realistic microscopy data based on spatial systems biology models, in order to facilitate development, validation and optimization of advanced analysis methods for live cell single molecule microscopy data. SMeagol runs on Matlab R2014 and later, and uses compiled binaries in C for reaction-diffusion simulations. Documentation, source code and binaries for Mac OS, Windows and Ubuntu Linux can be downloaded from http://smeagol.sourceforge.net. Contact: johan.elf@icm.uu.se. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  14. Numerical simulation of electromagnetic waves in Schwarzschild space-time by finite difference time domain method and Green function method

    NASA Astrophysics Data System (ADS)

    Jia, Shouqing; La, Dongsheng; Ma, Xuelian

    2018-04-01

    The finite difference time domain (FDTD) algorithm and the Green function algorithm are applied to the numerical simulation of electromagnetic waves in Schwarzschild space-time. The FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. The Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and the Green function code. The methods developed in this paper offer a tool for solving electromagnetic scattering problems.
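
    For readers unfamiliar with the underlying scheme, the sketch below is a standard one-dimensional free-space FDTD (Yee) leapfrog update; it is a generic illustration only and contains none of the curved-space-time equivalent-medium terms developed in the paper.

```python
import numpy as np

# Standard 1D free-space FDTD (Yee) update in normalized units; grid size,
# source, and units are illustrative.
nx, nt = 400, 600
c = 1.0                      # normalized speed of light
dx = 1.0
dt = 0.5 * dx / c            # Courant-stable time step

ez = np.zeros(nx)            # electric field
hy = np.zeros(nx - 1)        # magnetic field, staggered half a cell

for n in range(nt):
    hy += (dt / dx) * (ez[1:] - ez[:-1])            # update H from the curl of E
    ez[1:-1] += (dt / dx) * (hy[1:] - hy[:-1])      # update E from the curl of H
    ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(ez).max())
```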

  15. Advanced Simulation and Computing Business Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rummel, E.

    To maintain a credible nuclear weapons program, the National Nuclear Security Administration’s (NNSA’s) Office of Defense Programs (DP) needs to make certain that the capabilities, tools, and expert staff are in place and are able to deliver validated assessments. This requires a complete and robust simulation environment backed by an experimental program to test ASC Program models. This ASC Business Plan document encapsulates a complex set of elements, each of which is essential to the success of the simulation component of the Nuclear Security Enterprise. The ASC Business Plan addresses the hiring, mentoring, and retaining of programmatic technical staff responsible for building the simulation tools of the nuclear security complex. The ASC Business Plan describes how the ASC Program engages with industry partners, upon whom the ASC Program relies for today’s and tomorrow’s high-performance architectures. Each piece in this chain is essential to assure policymakers, who must make decisions based on the results of simulations, that they are receiving all the actionable information they need.

  16. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1993-01-01

    Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools, and the activity in the NASA Ames Research Center's Aerothermodynamics Branch is described. Inherent in the process is a strong synergism between ground test and real gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flowfield simulation codes are discussed. These models were partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse, and reliance must be placed on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.

  17. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1992-01-01

    Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools. A description of the activity in the Ames Research Center's Aerothermodynamics Branch is also given. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flow-field simulation codes are discussed. These models have been partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse; reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.

  18. Benchmark Simulations of the Thermal-Hydraulic Responses during EBR-II Inherent Safety Tests using SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui; Sumner, Tyler S.

    2016-04-17

    An advanced system analysis tool, SAM, is being developed for fast-running, improved-fidelity, and whole-plant transient analyses at Argonne National Laboratory under DOE-NE’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents the benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE’s Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, the SAS4A/SASSYS-1 code simulation results are also included for a code-to-code comparison.

  19. D-VASim: an interactive virtual laboratory environment for the simulation and analysis of genetic circuits.

    PubMed

    Baig, Hasan; Madsen, Jan

    2017-01-15

    Simulation and behavioral analysis of genetic circuits is a standard approach to functional verification prior to their physical implementation. Many software tools have been developed to perform in silico analysis for this purpose, but none of them allow users to interact with the model during runtime. Runtime interaction gives the user a feeling of being in the lab performing a real-world experiment. In this work, we present a user-friendly software tool named D-VASim (Dynamic Virtual Analyzer and Simulator), which provides a virtual laboratory environment to simulate and analyze the behavior of genetic logic circuit models represented in SBML (Systems Biology Markup Language). Hence, SBML models developed in other software environments can be analyzed and simulated in D-VASim. D-VASim offers deterministic as well as stochastic simulation, and differs from other software tools by being able to extract and validate the Boolean logic from the SBML model. D-VASim is also capable of analyzing the threshold value and propagation delay of a genetic circuit model. D-VASim is available for Windows and Mac OS and can be downloaded from bda.compute.dtu.dk/downloads/. Contact: haba@dtu.dk, jama@dtu.dk. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
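
    D-VASim's stochastic mode is not detailed in the abstract; as a generic, hedged illustration of stochastic simulation of a genetic circuit element, the sketch below runs a plain Gillespie simulation of constitutive protein production and degradation, with invented rate constants unrelated to any particular SBML model.

```python
import random

# Minimal Gillespie stochastic simulation of a single-gene expression model
# (constitutive production, first-order degradation). Rate constants are
# illustrative placeholders.
k_prod, k_deg = 2.0, 0.1   # production and degradation rate constants
t, t_end = 0.0, 100.0
protein = 0

while t < t_end:
    a_prod = k_prod
    a_deg = k_deg * protein
    a_total = a_prod + a_deg
    t += random.expovariate(a_total)            # time to the next reaction
    if random.random() * a_total < a_prod:      # pick which reaction fired
        protein += 1
    else:
        protein -= 1

print("protein copies at t =", round(t, 1), ":", protein)
```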

  20. A Novel Tool Improves Existing Estimates of Recent Tuberculosis Transmission in Settings of Sparse Data Collection.

    PubMed

    Kasaie, Parastu; Mathema, Barun; Kelton, W David; Azman, Andrew S; Pennington, Jeff; Dowdy, David W

    2015-01-01

    In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission ("recent transmission proportion"), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional 'n-1' approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the 'n-1' technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the 'n-1' model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models' performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data.
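
    For context, the traditional 'n-1' estimate attributes n - 1 cases in each genotype cluster of size n to recent transmission; the sketch below computes it for invented cluster counts, not data from the study.

```python
# Traditional 'n-1' estimate of the recent transmission proportion: in each
# genotype cluster of size n, n - 1 cases are attributed to recent transmission.

def n_minus_one_estimate(cluster_sizes, n_unclustered):
    """cluster_sizes: sizes of genotype clusters (each >= 2);
    n_unclustered: number of cases with a unique genotype."""
    total_cases = sum(cluster_sizes) + n_unclustered
    recently_transmitted = sum(n - 1 for n in cluster_sizes)
    return recently_transmitted / total_cases

# Illustrative cluster sizes: 0.16, i.e. ~16% of cases attributed to recent transmission.
print(n_minus_one_estimate(cluster_sizes=[5, 3, 2, 2], n_unclustered=38))
```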

  1. A Novel Tool Improves Existing Estimates of Recent Tuberculosis Transmission in Settings of Sparse Data Collection

    PubMed Central

    Kasaie, Parastu; Mathema, Barun; Kelton, W. David; Azman, Andrew S.; Pennington, Jeff; Dowdy, David W.

    2015-01-01

    In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission (“recent transmission proportion”), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional ‘n-1’ approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the ‘n-1’ technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the ‘n-1’ model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models’ performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data. PMID:26679499

  2. Effective Energy Simulation and Optimal Design of Side-lit Buildings with Venetian Blinds

    NASA Astrophysics Data System (ADS)

    Cheng, Tian

    Venetian blinds are widely used in buildings to control the amount of incoming daylight, improving visual comfort and reducing heat gains in air-conditioning systems. Studies have shown that the proper design and operation of window systems can result in significant energy savings in both lighting and cooling. However, no convenient computer tool currently allows effective and efficient optimization of the envelope of side-lit buildings with blinds. Three computer tools widely used for this purpose, Adeline, DOE2 and EnergyPlus, have been experimentally examined in this study. Results indicate that the first two tools give unacceptable accuracy due to unrealistic assumptions, while the last may generate large errors in certain conditions. Moreover, current computer tools must conduct hourly energy simulations, which are not necessary for life-cycle energy analysis and optimal design, to provide annual cooling loads. This is not computationally efficient and is particularly unsuitable for the optimal design of a building at the initial stage, when the impacts of many design variations and optional features have to be evaluated. A methodology is therefore developed for efficient and effective thermal and daylighting simulation and optimal design of buildings with blinds. Based on geometric optics and the radiosity method, a mathematical model is developed to simulate the daylighting behavior of venetian blinds, so that indoor illuminance at any reference point can be computed directly and efficiently. The model has been validated with both experiments and simulations with Radiance. Validation results show that indoor illuminances computed by the new model agree well with the measured data, with accuracy equivalent to that of Radiance, while its computational efficiency is much higher than that of Radiance or EnergyPlus. Two new methods are developed for the thermal simulation of buildings. A fast Fourier transform (FFT) method is presented to avoid the root-searching process in the inverse Laplace transform of multilayered walls. Generalized explicit FFT formulae for calculating the discrete Fourier transform (DFT) are developed for the first time; they greatly facilitate the implementation of the FFT and provide a basis for generating symbolic response factors. Validation simulations show that the method generates response factors as accurate as the analytical solutions. The second method directly estimates annual or seasonal cooling loads without tedious hourly energy simulations and is validated against hourly simulation results from DOE2. A symbolic long-term cooling load can then be created by combining the two methods with thermal network analysis; it keeps the design parameters of interest as symbols, which is particularly useful for optimal design and sensitivity analysis. The methodology is applied to an office building in Hong Kong for the optimal design of the building envelope. Design variables such as window-to-wall ratio, building orientation, and glazing optical and thermal properties are included in the study. Results show that the selected design values can significantly affect the energy performance of windows, and that the optimal design of side-lit buildings can greatly enhance energy savings. The application example also demonstrates that the developed methodology significantly facilitates optimal building design and sensitivity analysis, and leads to high computational efficiency.
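
    For reference, the discrete Fourier transform evaluated by any FFT implementation is the standard sum below; the thesis's generalized explicit FFT formulae themselves are not reproduced here.

```latex
% N-point discrete Fourier transform (standard definition)
X_k \;=\; \sum_{n=0}^{N-1} x_n \, e^{-i 2\pi k n / N},
\qquad k = 0, 1, \dots, N-1 .
```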

  3. A Validation Argument for a Simulation-Based Training Course Centered on Assessment, Recognition, and Early Management of Pediatric Sepsis.

    PubMed

    Geis, Gary L; Wheeler, Derek S; Bunger, Amy; Militello, Laura G; Taylor, Regina G; Bauer, Jerome P; Byczkowski, Terri L; Kerrey, Benjamin T; Patterson, Mary D

    2018-02-01

    Early recognition of sepsis remains one of the greatest challenges in medicine. Novice clinicians are often responsible for the recognition of sepsis and the initiation of urgent management. The aim of this study was to create a validity argument for the use of a simulation-based training course centered on assessment, recognition, and early management of sepsis in a laboratory-based setting. Five unique simulation scenarios were developed integrating critical sepsis cues identified through qualitative interviewing. Scenarios were piloted with groups of novice, intermediate, and expert pediatric physicians. The primary outcome was physician recognition of sepsis, measured with an adapted situation awareness global assessment tool. Secondary outcomes were physician compliance with pediatric advanced life support (PALS) guidelines and early sepsis management (ESM) recommendations, measured by two internally derived tools. Analysis compared recognition of sepsis by levels of expertise and measured association of sepsis recognition with the secondary outcomes. Eighteen physicians were recruited, six per study group. Each physician completed three sepsis simulations. Sepsis was recognized in 19 (35%) of 54 simulations. The odds that experts recognized sepsis were 2.6 [95% confidence interval (CI) = 0.5-13.8] times greater than for novices. Adjusted for severity, for every point increase in the PALS global performance score, the odds that sepsis was recognized increased by a factor of 11.3 (95% CI = 3.1-41.4). Similarly, the odds ratio for the PALS checklist score was 1.5 (95% CI = 0.8-2.6). Adjusted for severity and level of expertise, the odds of recognizing sepsis were associated with an increase in the ESM checklist score of 1.8 (95% CI = 0.9-3.6) and an increase in ESM global performance score of 4.1 (95% CI = 1.7-10.0). Although incomplete, evidence from initial testing suggests that the simulations of pediatric sepsis were sufficiently valid to justify their use in training novice pediatric physicians in the assessment, recognition, and management of pediatric sepsis.

  4. SESAME: a software tool for the numerical dosimetric reconstruction of radiological accidents involving external sources and its application to the accident in Chile in December 2005.

    PubMed

    Huet, C; Lemosquet, A; Clairand, I; Rioual, J B; Franck, D; de Carlan, L; Aubineau-Lanièce, I; Bottollier-Depois, J F

    2009-01-01

    Estimating the dose distribution in a victim's body is a relevant means of assessing biological damage from exposure in the event of a radiological accident caused by an external source. This dose distribution can be assessed by physical dosimetric reconstruction methods, which can be achieved using experimental or numerical techniques. This article presents SESAME (Simulation of External Source Accident with MEdical images), a laboratory-developed tool specific to the dosimetric reconstruction of radiological accidents through numerical simulations that combine voxel geometry with the MCNP(X) Monte Carlo radiation-matter interaction code. The experimental validation of the tool using a photon field and its application to a radiological accident in Chile in December 2005 are also described.

  5. The plant leaf movement analyzer (PALMA): a simple tool for the analysis of periodic cotyledon and leaf movement in Arabidopsis thaliana.

    PubMed

    Wagner, Lucas; Schmal, Christoph; Staiger, Dorothee; Danisman, Selahattin

    2017-01-01

    The analysis of circadian leaf movement rhythms is a simple yet effective method to study effects of treatments or gene mutations on the circadian clock of plants. Currently, leaf movements are analysed using time-lapse photography and subsequent bioinformatics analysis of the leaf movements. Programs used for this purpose either perform only one function (i.e. leaf tip detection or rhythm analysis) or are limited to specific computational environments. We developed a leaf movement analysis tool, PALMA, that works on the command line and combines image extraction with rhythm analysis using Fast Fourier transformation and non-linear least squares fitting. We validated PALMA on both simulated time series and experiments using the known short-period mutant sensitivity to red light reduced 1 (srr1-1). We compared PALMA with two established leaf movement analysis tools and found it to perform equally well. Finally, we tested the effect of reduced iron conditions on the leaf movement rhythms of wild-type plants. Here, we found that PALMA successfully detected period lengthening under reduced iron conditions. PALMA correctly estimated the period of both simulated and real-life leaf movement experiments. As a platform-independent console program that unites both functions needed for the analysis of circadian leaf movements, it is a valid alternative to existing leaf movement analysis tools.
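
    As a hedged sketch of the two-step rhythm analysis described above (an FFT for an initial period estimate, refined by non-linear least-squares fitting), the code below fits a cosine to a synthetic leaf-position trace; it is not PALMA's implementation, and all numbers are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "leaf position" trace with a ~24.7 h rhythm plus noise (illustrative only).
dt = 0.5                                   # sampling interval [h]
t = np.arange(0, 120, dt)                  # 5 days of measurements
y = 3 * np.cos(2 * np.pi * t / 24.7 + 0.4) + np.random.normal(0, 0.3, t.size)

# Step 1: FFT-based initial guess of the dominant period.
freqs = np.fft.rfftfreq(t.size, d=dt)
spectrum = np.abs(np.fft.rfft(y - y.mean()))
period_guess = 1.0 / freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

# Step 2: refine the estimate with a non-linear least-squares cosine fit.
def cosine(t, amp, period, phase, offset):
    return amp * np.cos(2 * np.pi * t / period + phase) + offset

popt, _ = curve_fit(cosine, t, y, p0=[y.std(), period_guess, 0.0, y.mean()])
print(f"estimated period: {popt[1]:.1f} h")
```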

  6. Modified subaperture tool influence functions of a flat-pitch polisher with reverse-calculated material removal rate.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-04-10

    Numerical simulation of subaperture tool influence functions (TIF) is widely known as a critical procedure in computer-controlled optical surfacing. However, it may lack practicability in engineering because the emulation TIF (e-TIF) deviates from the practical TIF (p-TIF) and the removal rate cannot be predicted by simulation alone. Prior to the polishing of a formal workpiece, opticians have to conduct TIF spot experiments on another sample to confirm the p-TIF with a quantitative removal rate, which is difficult and time-consuming for sequential polishing runs with different tools. This work is dedicated to bringing these e-TIFs into practical engineering through improvements in two respects: (1) the pressure distribution model of a flat-pitch polisher is modified by finite element analysis and least-squares fitting to bring the removal shape of e-TIFs closer to p-TIFs (relative deviation below 5%, validated by experiments); and (2) the removal rate of e-TIFs is predicted by reverse-calculating the material removal volume of a pre-polishing run on the formal workpiece (relative deviations of peak and volumetric removal rate validated to be below 5%). This removes the need for TIF spot experiments for the particular flat-pitch tool employed and promotes the direct use of e-TIFs in the optimization of the dwell time map, which can largely save cost and increase fabrication efficiency.
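
    The abstract does not give the reverse-calculation formula; as a hedged sketch, the code below assumes a Preston-type removal law (removal proportional to pressure, relative speed and dwell time) and backs out a removal-rate coefficient from a measured pre-polishing volume. All quantities are invented placeholders, and the paper's exact procedure is not reproduced.

```python
# Reverse-calculate a removal-rate coefficient from a pre-polishing run, assuming a
# Preston-type law: removal depth ~ k * pressure * relative speed * dwell time.
# All numbers are illustrative placeholders.

measured_volume = 0.045   # material removed in the pre-polishing run [mm^3]
pressure = 0.02           # nominal tool pressure [MPa]
speed = 250.0             # mean relative speed [mm/s]
dwell_time = 600.0        # polishing time [s]
contact_area = 80.0       # tool contact area [mm^2]

# Back out the Preston-type coefficient from the measured volume removal.
k_preston = measured_volume / (pressure * speed * dwell_time * contact_area)

# Use it to scale a simulated (unit-rate) TIF to a quantitative prediction.
predicted_peak_rate = k_preston * pressure * speed   # peak depth removal rate [mm/s]
print(f"k = {k_preston:.3e}, predicted peak removal rate = {predicted_peak_rate:.3e} mm/s")
```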

  7. Experimental validation of docking and capture using space robotics testbeds

    NASA Technical Reports Server (NTRS)

    Spofford, John; Schmitz, Eric; Hoff, William

    1991-01-01

    This presentation describes the application of robotic and computer vision systems to validate docking and capture operations for space cargo transfer vehicles. Three applications are discussed: (1) air bearing systems in two dimensions that yield high quality free-flying, flexible, and contact dynamics; (2) validation of docking mechanisms with misalignment and target dynamics; and (3) computer vision technology for target location and real-time tracking. All the testbeds are supported by a network of engineering workstations for dynamic and controls analyses. Dynamic simulations of multibody rigid and elastic systems are performed with the TREETOPS code. MATRIXx/System-Build and PRO-MATLAB/Simulab are the tools for control design and analysis using classical and modern techniques such as H-infinity and LQG/LTR. SANDY is a general design tool for numerically optimizing a multivariable robust compensator with a user-defined structure. Mathematica and Macsyma are used to symbolically derive dynamic and kinematic equations.

  8. Simulation tools for scattering corrections in spectrally resolved x-ray computed tomography using McXtrace

    NASA Astrophysics Data System (ADS)

    Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer

    2018-03-01

    Spectral computed tomography is an emerging imaging method that uses recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements in isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem, due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially in the high-energy range, where incoherent scattering interactions become prevalent (>50 keV).
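
    As a hedged illustration of how such a simulated scatter estimate can be applied, the sketch below subtracts it from a measured energy-binned signal before forming the attenuation line integrals; all arrays are invented placeholders, not McXtrace output or Multix ME-100 data.

```python
import numpy as np

# Simple scatter correction: subtract a Monte Carlo estimate of the scattered
# contribution from the measured energy-resolved signal before reconstruction.
energy_bins = np.linspace(20, 160, 8)          # keV bin centres (for reference only)
measured = np.array([900., 850., 800., 700., 600., 500., 420., 350.])
simulated_scatter = np.array([40., 45., 52., 55., 60., 58., 50., 40.])

primary = np.clip(measured - simulated_scatter, a_min=1.0, a_max=None)

# Corrected attenuation line integral for one detector pixel (flat_field = open-beam signal).
flat_field = np.array([1000., 980., 960., 900., 820., 720., 600., 500.])
mu_t_corrected = -np.log(primary / flat_field)
print(np.round(mu_t_corrected, 3))
```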

  9. Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars James C.; Acikmese, Behcet; Mandic, Milan

    2012-01-01

    A software tool is used to demonstrate the feasibility of Touch and Go (TAG) sampling for Asteroid Sample Return missions. TAG is a concept whereby a spacecraft is in contact with the surface of a small body, such as a comet or asteroid, for a few seconds or less before ascending to a safe location away from the small body. Previous work at JPL developed the G-TAG simulation tool, which provides a software environment for fast, multi-body simulations of the TAG event. G-TAG is described in Multibody Simulation Software Testbed for Small-Body Exploration and Sampling, (NPO-47196) NASA Tech Briefs, Vol. 35, No. 11 (November 2011), p.54. This current innovation adapts this tool to a mission that intends to return a sample from the surface of an asteroid. In order to demonstrate the feasibility of the TAG concept, the new software tool was used to generate extensive simulations that demonstrate the designed spacecraft meets key requirements. These requirements state that contact force and duration must be sufficient to ensure that enough material from the surface is collected in the brushwheel sampler (BWS), and that the spacecraft must survive the contact and must be able to recover and ascend to a safe position, and maintain velocity and orientation after the contact.

  10. Model-based sensorimotor integration for multi-joint control: development of a virtual arm model.

    PubMed

    Song, D; Lan, N; Loeb, G E; Gordon, J

    2008-06-01

    An integrated, sensorimotor virtual arm (VA) model has been developed and validated for simulation studies of control of human arm movements. Realistic anatomical features of shoulder, elbow and forearm joints were captured with a graphic modeling environment, SIMM. The model included 15 musculotendon elements acting at the shoulder, elbow and forearm. Muscle actions on joints were evaluated by SIMM generated moment arms that were matched to experimentally measured profiles. The Virtual Muscle (VM) model contained appropriate admixture of slow and fast twitch fibers with realistic physiological properties for force production. A realistic spindle model was embedded in each VM with inputs of fascicle length, gamma static (gamma(stat)) and dynamic (gamma(dyn)) controls and outputs of primary (I(a)) and secondary (II) afferents. A piecewise linear model of Golgi Tendon Organ (GTO) represented the ensemble sampling (I(b)) of the total muscle force at the tendon. All model components were integrated into a Simulink block using a special software tool. The complete VA model was validated with open-loop simulation at discrete hand positions within the full range of alpha and gamma drives to extrafusal and intrafusal muscle fibers. The model behaviors were consistent with a wide variety of physiological phenomena. Spindle afferents were effectively modulated by fusimotor drives and hand positions of the arm. These simulations validated the VA model as a computational tool for studying arm movement control. The VA model is available to researchers at website http://pt.usc.edu/cel .

  11. GADEN: A 3D Gas Dispersion Simulator for Mobile Robot Olfaction in Realistic Environments.

    PubMed

    Monroy, Javier; Hernandez-Bennets, Victor; Fan, Han; Lilienthal, Achim; Gonzalez-Jimenez, Javier

    2017-06-23

    This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotics systems and gas sensing algorithms under realistic environments. The framework is rooted in the principles of computational fluid dynamics and filament dispersion theory, modeling wind flow and gas dispersion in 3D real-world scenarios (i.e., accounting for walls, furniture, etc.). Moreover, it integrates the simulation of different environmental sensors, such as metal oxide gas sensors, photo ionization detectors, or anemometers. We illustrate the potential and applicability of the proposed tool by presenting a simulation case in a complex and realistic office-like environment where gas leaks of different chemicals occur simultaneously. Furthermore, we accomplish quantitative and qualitative validation by comparing our simulated results against real-world data recorded inside a wind tunnel where methane was released under different wind flow profiles. Based on these results, we conclude that our simulation framework can provide a good approximation to real world measurements when advective airflows are present in the environment.

  12. GADEN: A 3D Gas Dispersion Simulator for Mobile Robot Olfaction in Realistic Environments

    PubMed Central

    Hernandez-Bennetts, Victor; Fan, Han; Lilienthal, Achim; Gonzalez-Jimenez, Javier

    2017-01-01

    This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotics systems and gas sensing algorithms under realistic environments. The framework is rooted in the principles of computational fluid dynamics and filament dispersion theory, modeling wind flow and gas dispersion in 3D real-world scenarios (i.e., accounting for walls, furniture, etc.). Moreover, it integrates the simulation of different environmental sensors, such as metal oxide gas sensors, photo ionization detectors, or anemometers. We illustrate the potential and applicability of the proposed tool by presenting a simulation case in a complex and realistic office-like environment where gas leaks of different chemicals occur simultaneously. Furthermore, we accomplish quantitative and qualitative validation by comparing our simulated results against real-world data recorded inside a wind tunnel where methane was released under different wind flow profiles. Based on these results, we conclude that our simulation framework can provide a good approximation to real world measurements when advective airflows are present in the environment. PMID:28644375

  13. Modeling and Validation of Lithium-ion Automotive Battery Packs (SAE 2013-01-1539)

    EPA Science Inventory

    The Advanced Light-Duty Powertrain and Hybrid Analysis (ALPHA) tool was created by EPA to evaluate the Greenhouse Gas (GHG) emissions of Light-Duty (LD) vehicles. It is a physics-based, forward-looking, full vehicle computer simulator capable of analyzing various vehicle types c...

  14. Development of a fourth generation predictive capability maturity model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel

    2013-09-01

    The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are 1) the communication of computational simulation capability, accurately and transparently, and 2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal to provide more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.

  15. Validation of the second-generation Olympus colonoscopy simulator for skills assessment.

    PubMed

    Haycock, A V; Bassett, P; Bladen, J; Thomas-Gibson, S

    2009-11-01

    Simulators have potential value in providing objective evidence of technical skill for procedures within medicine. The aim of this study was to determine face and construct validity for the Olympus colonoscopy simulator and to establish which assessment measures map to clinical benchmarks of expertise. Thirty-four participants were recruited: 10 novices with no prior colonoscopy experience, 13 intermediate (trainee) endoscopists with fewer than 1000 previous colonoscopies, and 11 experienced endoscopists with more than 1000 previous colonoscopies. All participants completed three standardized cases on the simulator and experts gave feedback regarding the realism of the simulator. Forty metrics recorded automatically by the simulator were analyzed for their ability to distinguish between the groups. The simulator discriminated participants by experience level for 22 different parameters. Completion rates were lower for novices than for trainees and experts (37 % vs. 79 % and 88 % respectively, P < 0.001) and both novices and trainees took significantly longer to reach all major landmarks than the experts. Several technical aspects of competency were discriminatory; pushing with an embedded tip ( P = 0.03), correct use of the variable stiffness function ( P = 0.004), number of sigmoid N-loops ( P = 0.02); size of sigmoid N-loops ( P = 0.01), and time to remove alpha loops ( P = 0.004). Out of 10, experts rated the realism of movement at 6.4, force feedback at 6.6, looping at 6.6, and loop resolution at 6.8. The Olympus colonoscopy simulator has good face validity and excellent construct validity. It provides an objective assessment of colonoscopic skill on multiple measures and benchmarks have been set to allow its use as both a formative and a summative assessment tool. Georg Thieme Verlag KG Stuttgart. New York.

  16. Development of a Brazilian Portuguese adapted version of the Gap-Kalamazoo communication skills assessment form.

    PubMed

    Amaral, Anna Beatriz C N; Rider, Elizabeth A; Lajolo, Paula P; Tone, Luiz G; Pinto, Rogerio M C; Lajolo, Marisa P; Calhoun, Aaron W

    2016-12-11

    The goal of this study was to translate, adapt and validate the items of the Gap-Kalamazoo Communication Skills Assessment Form for use in the Brazilian cultural setting. The Gap-Kalamazoo Communication Skills Assessment Form was translated into Portuguese by two independent bilingual Brazilian translators and was reconciled by a third bilingual healthcare professional. The translated text was then assessed for content using a modified Delphi technique and adjusted as needed to assure content validity. A total of nine phrases in the completed tool were adjusted. The final tool was then used to assess videotaped simulations as a means of validation.  Response process was assessed using exploratory factor analysis and internal structure was assessed via Cronbach's Alpha (internal consistency) and Intraclass Correlation (test-retest reliability and inter-rater reliability). One hundred and four (104) videotaped communication skills simulations were assessed by 38 subjects (6 staff physicians, 4 faculty physicians, 8 resident physicians, 4 professional actors with experience in simulation, and 16 other allied healthcare professionals). Measures of Internal consistency (Cronbach's alpha = 0.818) and test-retest reliability (intra-class correlation coefficient = 0.942) were high.  Exploratory factor analysis confirmed the uni-dimensionality of the instrument. Our results support the validity and reliability of the Brazilian Gap-Kalamazoo Communication Skills Assessment Form when used among Brazilian medical residents.  The Brazilian version of Gap-Kalamazoo Communication Skills Assessment Form was found to be adequate both in the linguistic and technical aspects.  The use of this instrument in Brazilian medical education can enhance the assessment of physician-patient-team relationships on an ongoing basis.

  17. EBUS-STAT Subscore Analysis to Predict the Efficacy and Assess the Validity of Virtual Reality Simulation for EBUS-TBNA Training Among Experienced Bronchoscopists.

    PubMed

    Scarlata, Simone; Palermo, Patrizio; Candoli, Piero; Tofani, Ariela; Petitti, Tommasangelo; Corbetta, Lorenzo

    2017-04-01

    Linear endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA) represents a pivotal innovation in interventional pulmonology; determining the best approach to guarantee systematic and efficient training is expected to become a main issue in the forthcoming years. Virtual reality simulators have been proposed as potential EBUS-TBNA training instruments, to avoid unskilled beginners practicing directly in real-life settings. A validated and perfected simulation program could be used before allowing beginners to practice on patients. Our goal was to test the reliability of the EBUS-Skills and Task Assessment Tool (STAT) and its subscores for measuring the competence of experienced bronchoscopists approaching EBUS-guided TBNA, using only the virtual reality simulator as both a training and an assessment tool. Fifteen experienced bronchoscopists, with poor or no experience in EBUS-TBNA, participated in this study. They were all administered the Italian version of the EBUS-STAT evaluation tool, during a high-fidelity virtual reality simulation. This was followed by a single 7-hour theoretical and practical (on simulators) session on EBUS-TBNA, at the end of which their skills were reassessed by EBUS-STAT. An overall, significant improvement in EBUS-TBNA skills was observed, thereby confirming that (a) virtual reality simulation can facilitate practical learning among practitioners, and (b) EBUS-STAT is capable of detecting these improvements. The test's overall ability to detect differences was negatively influenced by the minimal variation of the scores relating to items 1 and 2, was not influenced by the training, and improved significantly when the 2 items were not considered. Apart from these 2 items, all the remaining subscores were equally capable of revealing improvements in the learner. Lastly, we found that trainees with presimulation EBUS-STAT scores above 79 did not show any significant improvement after virtual reality training, suggesting that this score represents a cutoff value capable of predicting the likelihood that simulation can be beneficial. Virtual reality simulation is capable of providing a practical learning tool for practitioners with previous experience in flexible bronchoscopy, and the EBUS-STAT questionnaire is capable of detecting these changes. A pretraining EBUS-STAT score below 79 is a good indicator of those candidates who will benefit from the simulation training. Further studies are needed to verify whether a modified version of the questionnaire would be capable of improving its performance among experienced bronchoscopists.

  18. Molecular simulation and experimental validation of resorcinol adsorption on Ordered Mesoporous Carbon (OMC).

    PubMed

    Ahmad, Zaki Uddin; Chao, Bing; Konggidinata, Mas Iwan; Lian, Qiyu; Zappi, Mark E; Gang, Daniel Dianchen

    2018-04-27

    Numerous research works in the adsorption area have relied on experimental approaches. All of these approaches are based on a trial-and-error process and are extremely time consuming. Molecular simulation is a new tool that can be used to design an adsorbent and predict its performance. This research proposes a simulation technique that can greatly reduce the time needed to design an adsorbent. In this study, a new rhombic ordered mesoporous carbon (OMC) model is proposed and constructed with various pore sizes and oxygen contents using the Materials Visualizer module to optimize the structure of OMC for resorcinol adsorption. The specific surface area, pore volume, small-angle X-ray diffraction pattern, and resorcinol adsorption capacity were calculated with the Forcite and Sorption modules in the Materials Studio package. The simulation results were validated experimentally by synthesizing OMC with different pore sizes and oxygen contents prepared via a hard-template method employing an SBA-15 silica scaffold. Boric acid was used as the pore-expanding reagent to synthesize OMC with different pore sizes (from 4.6 to 11.3 nm) and varying oxygen contents (from 11.9% to 17.8%). Based on the simulation and experimental validation, the optimal pore size was found to be 6 nm for maximum adsorption of resorcinol. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    PubMed

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined.

  20. Use of human patient simulation and the situation awareness global assessment technique in practical trauma skills assessment.

    PubMed

    Hogan, Michael P; Pace, David E; Hapgood, Joanne; Boone, Darrell C

    2006-11-01

    Situation awareness (SA) is defined as the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future. This construct is vital to decision making in intense, dynamic environments. It has been used in aviation as it relates to pilot performance, but has not been applied to medical education. The most widely used objective tool for measuring trainee SA is the Situation Awareness Global Assessment Technique (SAGAT). The purpose of this study was to design and validate SAGAT for assessment of practical trauma skills, and to compare SAGAT results to traditional checklist-style scoring. Using the Human Patient Simulator, we designed SAGAT for practical trauma skills assessment based on Advanced Trauma Life Support objectives. Sixteen subjects (four staff surgeons, four senior residents, four junior residents, and four medical students) participated in three scenarios each. They were assessed using SAGAT and traditional checklist assessment. A questionnaire was used to assess possible confounding factors in attaining SA and overall trainee satisfaction. SAGAT was found to show a significant difference (analysis of variance; p < 0.001) in scores based on level of training, lending statistical support to construct validity. SAGAT was likewise found to display reliability (Cronbach's alpha 0.767) and significant scoring correlation with traditional checklist performance measures (Pearson's coefficient 0.806). The questionnaire revealed no confounding factors and universal satisfaction with the human patient simulator and SAGAT. SAGAT is a valid, reliable assessment tool for trauma trainees in the dynamic clinical environment created by human patient simulation. Information provided by SAGAT could provide specific feedback, direct individualized teaching, and support curriculum change. Introduction of SAGAT could improve the current assessment model for practical trauma education.

  1. Integrated tokamak modeling: when physics informs engineering and research planning

    NASA Astrophysics Data System (ADS)

    Poli, Francesca

    2017-10-01

    Simulations that integrate virtually all the relevant engineering and physics aspects of a real tokamak experiment are a powerful tool for experimental interpretation, model validation and planning for both present and future devices. This tutorial guides the audience through the building blocks of an ``integrated'' tokamak simulation, such as magnetic flux diffusion, thermal, momentum and particle transport, external heating and current drive sources, and wall particle sources and sinks. Emphasis is given to the connection and interplay between external actuators and plasma response, between the slow time scales of current diffusion and the fast time scales of transport, and to how reduced and high-fidelity models can contribute to simulating a whole device. To illustrate the potential and limitations of integrated tokamak modeling for discharge prediction, a helium plasma scenario for the ITER pre-nuclear phase is taken as an example. This scenario presents challenges because it requires core-edge integration and advanced models for the interaction between waves and fast ions, which are subject to a limited experimental database for validation and guidance. Starting from a scenario obtained by re-scaling parameters from the demonstration inductive ``ITER baseline'', it is shown how self-consistent simulations that encompass both core and edge plasma regions, as well as high-fidelity heating and current drive source models, are needed to set constraints on the density, magnetic field and heating scheme. This tutorial aims at demonstrating how integrated modeling, when used with an adequate level of criticism, can not only support the design of operational scenarios, but also help to assess the limitations and gaps in the available models, thus indicating where improved modeling tools are required and how present experiments can help their validation and inform research planning. Work supported by DOE under DE-AC02-09CH1146.

  2. Finite element simulation and Experimental verification of Incremental Sheet metal Forming

    NASA Astrophysics Data System (ADS)

    Kaushik Yanamundra, Krishna; Karthikeyan, R., Dr.; Naranje, Vishal, Dr

    2018-04-01

    Incremental sheet metal forming is now a proven manufacturing technique that can be employed to obtain application-specific, customized, symmetric or asymmetric shapes required by the automobile or biomedical industries for purposes such as car body parts, dental implants or knee implants. Finite element simulation of the metal forming process is performed successfully using the explicit dynamics analysis of commercial FE software. The simulation is useful mainly in optimization of the process as well as in design of the final product. This paper focuses on simulating the incremental sheet metal forming process in ABAQUS and validating the results using experimental methods. The test geometries are trapezoidal, dome and elliptical shapes, whose G-codes are written and fed into a CNC milling machine fitted with a forming tool that has a hemispherical bottom. The same pre-generated coordinates are used to simulate similar forming conditions in ABAQUS, and the tool forces, stresses and strains in the workpiece during forming are obtained as the output data. The forces were recorded experimentally using a dynamometer. The experimental and simulated results were then compared and conclusions were drawn.
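    As a purely geometric illustration of how such pre-generated coordinates can be produced (this is not the authors' code; the dome dimensions, step-down and feed rate are hypothetical), the sketch below writes a descending spiral of tool-contact points on a spherical cap as simple G01 moves; the same point list could also drive the tool trajectory in the FE model.

    ```python
    import math

    def dome_spiral(sphere_r=60.0, depth=30.0, step_down=0.5, pts_per_rev=90):
        """Yield (x, y, z) tool-contact points spiralling down a spherical cap.

        sphere_r: radius of the sphere defining the dome; depth: final cap depth.
        The path starts at the rim and ends near the pole (the deepest point).
        """
        n_pts = int(depth / step_down) * pts_per_rev
        for i in range(n_pts):
            d = depth * i / n_pts                                    # current forming depth
            r = math.sqrt(max(sphere_r**2 - (sphere_r - depth + d)**2, 0.0))
            theta = 2.0 * math.pi * i / pts_per_rev
            yield (r * math.cos(theta), r * math.sin(theta), -d)

    with open("dome_path.nc", "w") as f:
        for x, y, z in dome_spiral():
            f.write(f"G01 X{x:.3f} Y{y:.3f} Z{z:.3f} F1000\n")        # hypothetical feed rate
    ```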

  3. Media fill for validation of a good manufacturing practice-compliant cell production process.

    PubMed

    Serra, Marta; Roseti, Livia; Bassi, Alessandra

    2015-01-01

    According to the European Regulation EC 1394/2007, the clinical use of Advanced Therapy Medicinal Products, such as Human Bone Marrow Mesenchymal Stem Cells expanded for the regeneration of bone tissue or Chondrocytes for Autologous Implantation, requires the development of a process in compliance with the Good Manufacturing Practices. The Media Fill test, consisting of a simulation of the expansion process using a microbial growth medium instead of the cells, is considered one of the most effective ways to validate a cell production process. Such a simulation allows identification of any weakness in production that could lead to microbiological contamination of the final cell product, as well as qualification of the operators. Here, we report the critical aspects concerning the design of a Media Fill test to be used as a tool for the further validation of the sterility of a cell-based Good Manufacturing Practice-compliant production process.

  4. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation are essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g. critical experiments, flow loops, etc.), and there is a lack of relevant multi-physics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  5. CFD Code Validation of Wall Heat Fluxes for a GO2/GH2 Single Element Combustor

    NASA Technical Reports Server (NTRS)

    Lin, Jeff; West, Jeff S.; Williams, Robert W.; Tucker, P. Kevin

    2005-01-01

    This paper puts forth the case for the need for improved injector design tools to meet NASA's Vision for Space Exploration goals. Requirements for this improved tool are outlined and discussed. The potential for Computational Fluid Dynamics (CFD) to meet these requirements is noted along with its current shortcomings, especially relative to demonstrated solution accuracy. The concept of verification and validation is introduced as the primary process for building and quantifying the confidence necessary for CFD to be useful as an injector design tool. The verification and validation process is considered in the context of the Marshall Space Flight Center (MSFC) Combustion Devices CFD Simulation Capability Roadmap via the Simulation Readiness Level (SRL) concept. The portion of the validation process which demonstrates the ability of a CFD code to simulate heat fluxes to a rocket engine combustor wall is the focus of the current effort. The FDNS and Loci-CHEM codes are used to simulate a shear coaxial single element GO2/GH2 injector experiment. The experiment was conducted at a chamber pressure of 750 psia using hot propellants from preburners. A measured wall temperature profile is used as a boundary condition to facilitate the calculations. Converged solutions, obtained from both codes by using wall functions with the k-ε turbulence model and by integrating to the wall using Menter's baseline turbulence model, are compared to the experimental data. The initial solutions from both codes revealed significant issues with the wall function implementation associated with the recirculation zone between the shear coaxial jet and the chamber wall. The FDNS solution with a corrected implementation shows marked improvement in overall character and level of comparison to the data. With the FDNS code, integrating to the wall with Menter's baseline turbulence model actually produced a degraded solution when compared to the wall function solution with the k-ε model. The Loci-CHEM solution, produced by integrating to the wall with Menter's baseline turbulence model, matches both the heat flux rise rate in the near-injector region and the peak heat flux level very well. However, it moderately over-predicts the heat fluxes downstream of the reattachment point. The Loci-CHEM solution achieved by integrating to the wall with Menter's baseline turbulence model was clearly superior to the other solutions produced in this effort.

  6. Adaptation of G-TAG Software for Validating Touch-and-Go Comet Surface Sampling Design Methodology

    NASA Technical Reports Server (NTRS)

    Mandic, Milan; Acikmese, Behcet; Blackmore, Lars

    2011-01-01

    The G-TAG software tool was developed under the R&TD on Integrated Autonomous Guidance, Navigation, and Control for Comet Sample Return, and represents a novel multi-body dynamics simulation software tool for studying TAG sampling. The G-TAG multi-body simulation tool provides a simulation environment in which a Touch-and-Go (TAG) sampling event can be extensively tested. TAG sampling requires the spacecraft to descend to the surface, contact the surface with a sampling collection device, and then ascend to a safe altitude. The TAG event lasts only a few seconds but is mission-critical with potentially high risk. Consequently, there is a need for the TAG event to be well characterized and studied by simulation and analysis in order for the proposal teams to converge on a reliable spacecraft design. This adaptation of the G-TAG tool was developed to support the Comet Odyssey proposal effort, and is specifically focused on comet sample return missions. In this application, the spacecraft descends to and samples from the surface of a comet. Performance of the spacecraft during TAG is assessed based on survivability and sample collection performance. For the adaptation of the G-TAG simulation tool to comet scenarios, models are developed that accurately describe the properties of the spacecraft, approach trajectories, and descent velocities, as well as the external forces and torques acting on the spacecraft. The adapted models of the spacecraft, descent profiles, and external sampling forces/torques were more sophisticated and customized for comets than those available in the basic G-TAG simulation tool. Scenarios implemented include the study of variations in requirements, spacecraft design (size, locations, etc. of the spacecraft components), and the environment (surface properties, slope, disturbances, etc.). The simulations, along with their visual representations using G-View, contributed to the Comet Odyssey New Frontiers proposal effort by indicating problems and/or benefits of different approaches and designs.

  7. Validating a driving simulator using surrogate safety measures.

    PubMed

    Yan, Xuedong; Abdel-Aty, Mohamed; Radwan, Essam; Wang, Xuesong; Chilakapati, Praveen

    2008-01-01

    Traffic crash statistics and previous research have shown an increased risk of traffic crashes at signalized intersections. How to diagnose safety problems and develop effective countermeasures to reduce the crash rate at intersections is a key task for traffic engineers and researchers. This study aims at investigating whether the driving simulator can be used as a valid tool to assess traffic safety at signalized intersections. In support of the research objective, this simulator validity study was conducted from two perspectives, a traffic parameter (speed) and a safety parameter (crash history). A signalized intersection with as many important features as possible (including roadway geometries, traffic control devices, intersection surroundings, and buildings) was replicated in a high-fidelity driving simulator. A driving simulator experiment with eight scenarios at the intersection was conducted to determine whether the subjects' speed behavior and traffic risk patterns in the driving simulator were similar to those found at the real intersection. The experiment results showed that speed data observed in the field and in the simulator experiment both follow normal distributions and have equal means for each intersection approach, which validated the driving simulator in absolute terms. Furthermore, this study used an innovative approach of using surrogate safety measures from the simulator to contrast with the crash analysis for the field data. The simulator experiment results indicated that, compared to the right-turn lane with the low rear-end crash history record (2 crashes), subjects showed a series of more risky behaviors at the right-turn lane with the high rear-end crash history record (16 crashes), including a higher deceleration rate (1.80+/-1.20 m/s(2) versus 0.80+/-0.65 m/s(2)), a higher non-stop right-turn rate on red (81.67% versus 57.63%), higher right-turn speed at the stop line (18.38+/-8.90 km/h versus 14.68+/-6.04 km/h), shorter following distance (30.19+/-13.43 m versus 35.58+/-13.41 m), and higher rear-end probability (9/59=0.153 versus 2/60=0.033). Therefore, the relative validity of the driving simulator was well established for traffic safety studies at signalized intersections.
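    The absolute-validity check described above amounts to testing normality and equality of means between field and simulator speeds. A minimal sketch of such a test is shown below; the speed arrays are synthetic placeholders, not the study's measurements.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    field_speed = rng.normal(48.0, 6.0, 200)   # km/h, observed at the intersection (synthetic)
    sim_speed = rng.normal(48.5, 6.5, 120)     # km/h, recorded in the simulator (synthetic)

    # Shapiro-Wilk tests for normality of each sample.
    print("Shapiro-Wilk p (field):", stats.shapiro(field_speed)[1])
    print("Shapiro-Wilk p (simulator):", stats.shapiro(sim_speed)[1])

    # Welch's t-test for equality of means (does not assume equal variances).
    print("t-test p-value:", stats.ttest_ind(field_speed, sim_speed, equal_var=False).pvalue)
    ```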

  8. Emulation of rocket trajectory based on a six degree of freedom model

    NASA Astrophysics Data System (ADS)

    Zhang, Wenpeng; Li, Fan; Wu, Zhong; Li, Rong

    2008-10-01

    In this paper, a 6-DOF mathematical model of rocket motion is discussed. It consists of a body dynamics and kinematics block, an aerodynamics block and an atmosphere block. The whole rocket trajectory model is developed in Simulink, which makes dynamic system simulation straightforward and visual, and the modular design makes the model easy to reuse. Finally, the model is exercised with relevant data and validated by Monte Carlo analysis. Simulation results show that the flight trajectory of the rocket can be reproduced well by this model, and that the model also supplies a necessary simulation tool for the development of the control system.
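    As a much-simplified illustration of the block structure described above (the paper's model is a full 6-DOF Simulink implementation), the sketch below integrates a 3-DOF point-mass trajectory with an exponential-atmosphere block and a drag-only aerodynamics block; all numerical parameters are hypothetical.

    ```python
    import numpy as np

    G0, RHO0, H_SCALE = 9.81, 1.225, 8500.0   # gravity, sea-level density, scale height
    CD, AREA, MASS = 0.3, 0.05, 20.0          # drag coefficient, reference area [m^2], mass [kg]

    def density(alt):                          # atmosphere block
        return RHO0 * np.exp(-max(alt, 0.0) / H_SCALE)

    def derivatives(state):                    # dynamics block: state = [x, z, vx, vz]
        x, z, vx, vz = state
        v = np.hypot(vx, vz)
        drag = 0.5 * density(z) * v**2 * CD * AREA
        ax = -drag * vx / (MASS * v) if v > 0 else 0.0
        az = -G0 - (drag * vz / (MASS * v) if v > 0 else 0.0)
        return np.array([vx, vz, ax, az])

    state, t, dt = np.array([0.0, 0.0, 150.0, 260.0]), 0.0, 0.01   # launch at ~60 deg elevation
    while state[1] >= 0.0:                     # forward-Euler integration until impact
        state = state + dt * derivatives(state)
        t += dt
    print(f"Range ~ {state[0]:.0f} m after ~{t:.1f} s of flight")
    ```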

  9. Atomic-scale to Meso-scale Simulation Studies of Thermal Ageing and Irradiation Effects in Fe-Cr Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanley, Eugene; Liu, Li

    In this project, we target three primary objectives: (1) Molecular Dynamics (MD) code development for Fe-Cr alloys, which can be utilized to provide thermodynamic and kinetic properties as inputs to mesoscale Phase Field (PF) simulations; (2) validation and implementation of the MD code to explain thermal ageing and radiation damage; and (3) an integrated modeling platform for MD and PF simulations. These two simulation tools, MD and PF, will ultimately be merged to understand and quantify the kinetics and mechanisms of microstructure and property evolution of Fe-Cr alloys under various thermal and irradiation environments.

  10. Validation of Solar Sail Simulations for the NASA Solar Sail Demonstration Project

    NASA Technical Reports Server (NTRS)

    Braafladt, Alexander C.; Artusio-Glimpse, Alexandra B.; Heaton, Andrew F.

    2014-01-01

    NASA's Solar Sail Demonstration project partner L'Garde is currently assembling a flight-like sail assembly for a series of ground demonstration tests beginning in 2015. For future missions of this sail that might validate solar sail technology, it is necessary to have an accurate sail thrust model. One of the primary requirements of a proposed potential technology validation mission will be to demonstrate solar sail thrust over a set time period, which for this project is nominally 30 days. This requirement would be met by comparing a L'Garde-developed trajectory simulation to the as-flown trajectory. The current sail simulation baseline for L'Garde is a Systems Tool Kit (STK) plug-in that includes a custom-designed model of the L'Garde sail. The STK simulation has been verified for a flat plate model by comparing it to the NASA-developed Solar Sail Spaceflight Simulation Software (S5). S5 matched STK with a high degree of accuracy, and the results of the validation indicate that the L'Garde STK model is accurate enough to meet the potential future mission requirements. Additionally, since the L'Garde sail deviates considerably from a flat plate, a force model for a non-flat sail provided by L'Garde was also tested and compared to a flat plate model in S5. This result will be used in the future as a basis of comparison to the non-flat sail model being developed for STK.

  11. Validation and learning in the Procedicus KSA virtual reality surgical simulator.

    PubMed

    Ström, P; Kjellin, A; Hedman, L; Johnson, E; Wredmark, T; Felländer-Tsai, L

    2003-02-01

    Advanced simulator training within medicine is a rapidly growing field. Virtual reality simulators are being introduced as cost-saving educational tools, which also lead to increased patient safety. Fifteen medical students were included in the study. For 10 medical students performance was monitored, before and after 1 h of training, in two endoscopic simulators (the Procedicus KSA with haptic feedback and anatomical graphics and the established MIST simulator without this haptic feedback and graphics). Five medical students performed 50 tests in the Procedicus KSA in order to analyze learning curves. One of these five medical students performed multiple training sessions during 2 weeks and performed more than 300 tests. There was a significant improvement after 1 h of training regarding time, movement economy, and total score. The results in the two simulators were highly correlated. Our results show that the use of surgical simulators as a pedagogical tool in medical student training is encouraging. It shows rapid learning curves and our suggestion is to introduce endoscopic simulator training in undergraduate medical education during the course in surgery when motivation is high and before the development of "negative stereotypes" and incorrect practices.

  12. Naval electronic warfare simulation for effectiveness assessment and softkill programmability facility

    NASA Astrophysics Data System (ADS)

    Lançon, F.

    2011-06-01

    The Anti-Ship Missile (ASM) threat to be faced by ships will become more diverse and difficult. Intelligence, rules-of-engagement constraints, and the fast reaction time required for an effective softkill solution call for specific tools to design Electronic Warfare (EW) systems and to integrate them onboard ship. SAGEM provides a decoy launcher system [1] and its associated Naval Electronic Warfare Simulation tool (NEWS) to permit softkill effectiveness analysis for anti-ship missile defence. The NEWS tool generates a virtual environment for missile-ship engagement and a counter-measure simulator over a wide spectrum: RF, IR, EO. It integrates the EW Command & Control (EWC2) process which is implemented in the decoy launcher system and performs Monte-Carlo batch processing to evaluate softkill effectiveness in different engagement situations. NEWS is designed to allow immediate EWC2 process integration from simulation to the real decoy launcher system. By design, it allows the final operator to program, test and integrate its own EWC2 module and EW library onboard, so the intelligence data of each user are protected and the evolution of the threat can be taken into account through EW library updates. The objectives of the NEWS tool also include defining a methodology for trial definition and trial data reduction. Growth potential would permit the design of new concepts for EWC2 programmability and real-time effectiveness estimation in EW systems. This tool can also be used for operator training purposes. This paper presents the architecture design, the softkill programmability facility concept and the flexibility for onboard integration on ship. The concept of this operationally focused simulation, which is to use only one tool for design, development, trial validation and operational use, will be demonstrated.
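    The Monte-Carlo batch idea is easy to sketch in isolation: draw many random engagement conditions, apply a deployment rule, and report the fraction of successful outcomes. The toy example below does exactly that; the success rule, thresholds and distributions are invented for illustration and bear no relation to the NEWS tool or any real EWC2 logic.

    ```python
    import random

    def one_engagement(rng: random.Random) -> bool:
        reaction_time = rng.gauss(4.0, 1.0)     # s, detection-to-launch delay (notional)
        wind_cross = rng.gauss(0.0, 5.0)        # m/s, cross-wind on the decoy bloom (notional)
        # Notional success rule: decoy is effective if deployed quickly enough and the
        # bloom is not displaced too far by the wind.
        seduction_margin = 8.0 - reaction_time - 0.2 * abs(wind_cross)
        return seduction_margin > 0.0

    def effectiveness(n_runs: int = 100_000, seed: int = 42) -> float:
        rng = random.Random(seed)
        return sum(one_engagement(rng) for _ in range(n_runs)) / n_runs

    print(f"Estimated softkill effectiveness: {effectiveness():.1%}")
    ```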

  13. Simulation environment and graphical visualization environment: a COPD use-case

    PubMed Central

    2014-01-01

    Background: Today, many different tools are being developed to execute and visualize physiological models that represent human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies the communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages. In addition, interoperability between such models remains an unresolved issue. Results: In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims at helping and supporting bio-researchers and medical students to understand the internal mechanisms of the human body through the use of physiological models. This tool is composed of a graphical visualization environment, which is a web interface through which the user can interact with the models, and a simulation workflow management system composed of a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. This simulation environment has been validated with the integration of three models: two deterministic, i.e. based on linear and differential equations, and one probabilistic, i.e. based on probability theory. These models were selected based on the disease under study in this project, i.e. chronic obstructive pulmonary disease. Conclusion: It has been shown that the simulation environment presented here allows the user to research and study the internal mechanisms of human physiology through models accessed via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios. PMID:25471327

  14. Assessment of the National Combustion Code

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey; Iannetti, Anthony; Shih, Tsan-Hsing

    2007-01-01

    The advancements made during the last decade in the areas of combustion modeling, numerical simulation, and computing platform have greatly facilitated the use of CFD based tools in the development of combustion technology. Further development of verification, validation and uncertainty quantification will have profound impact on the reliability and utility of these CFD based tools. The objectives of the present effort are to establish baseline for the National Combustion Code (NCC) and experimental data, as well as to document current capabilities and identify gaps for further improvements.

  15. Optical dosimetry probes to validate Monte Carlo and empirical-method-based NIR dose planning in the brain.

    PubMed

    Verleker, Akshay Prabhu; Shaffer, Michael; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M

    2016-12-01

    Three-dimensional photon dosimetry in tissue is critical in designing optical therapeutic protocols to trigger light-activated drug release. The objective of this study is to investigate the feasibility of a Monte Carlo-based optical therapy planning software by developing dosimetry tools to characterize and cross-validate the local photon fluence in brain tissue, as part of a long-term strategy to quantify the effects of photoactivated drug release in brain tumors. An existing GPU-based 3D Monte Carlo (MC) code was modified to simulate near-infrared photon transport with differing laser beam profiles within phantoms of skull bone (B), white matter (WM), and gray matter (GM). A novel titanium-based optical dosimetry probe with isotropic acceptance was used to validate the local photon fluence, and an empirical model of photon transport was developed to significantly decrease execution time for clinical application. Differences between the MC and the dosimetry probe measurements were on average 11.27%, 13.25%, and 11.81% along the illumination beam axis, and 9.4%, 12.06%, and 8.91% perpendicular to the beam axis for the WM, GM, and B phantoms, respectively. For a heterogeneous head phantom, the measured % errors were 17.71% and 18.04% along and perpendicular to the beam axis. The empirical algorithm was validated by probe measurements and matched the MC results (R² > 0.99), with average % errors of 10.1%, 45.2%, and 22.1% relative to probe measurements, and 22.6%, 35.8%, and 21.9% relative to the MC, for the WM, GM, and B phantoms, respectively. The simulation time for the empirical model was 6 s versus 8 h for the GPU-based Monte Carlo for a head phantom simulation. These tools provide the capability to develop and optimize treatment plans for optimal release of pharmaceuticals in the treatment of cancer. Future work will test and validate these novel delivery and release mechanisms in vivo.
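    As an illustration of the kind of fast analytic estimate that can stand in for full Monte Carlo transport (this is not the authors' empirical model), the sketch below evaluates the standard diffusion-approximation fluence of an isotropic point source in an infinite homogeneous medium; the optical properties are assumed, loosely NIR-soft-tissue-like values.

    ```python
    import numpy as np

    def fluence_diffusion(r_cm, power_w, mu_a, mu_s_prime):
        """Fluence rate [W/cm^2] at distance r from an isotropic point source."""
        d = 1.0 / (3.0 * (mu_a + mu_s_prime))               # diffusion coefficient [cm]
        mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))   # effective attenuation [1/cm]
        return power_w * np.exp(-mu_eff * r_cm) / (4.0 * np.pi * d * r_cm)

    # Illustrative NIR optical properties (assumed, not measured): absorption and
    # reduced scattering coefficients in 1/cm.
    mu_a, mu_s_prime = 0.05, 10.0
    for r in (0.5, 1.0, 2.0):
        print(f"r = {r} cm: fluence ~ {fluence_diffusion(r, 1.0, mu_a, mu_s_prime):.3e} W/cm^2")
    ```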

  16. Cataract surgeons outperform medical students in Eyesi virtual reality cataract surgery: evidence for construct validity.

    PubMed

    Selvander, Madeleine; Asman, Peter

    2013-08-01

    To investigate construct validity for the hydromaneuvers and phaco modules on the Eyesi surgical simulator. Seven cataract surgeons and 17 medical students performed the capsulorhexis, hydromaneuvers, phaco, navigation, forceps, and cracking and chopping modules in a standardized manner. Three trials were performed on each module (two on phaco) in the above order. Performance parameters as calculated by the simulator for each trial were saved. Video recordings of the second trial of the capsulorhexis, hydromaneuvers and phaco modules were evaluated with the modified Objective Structured Assessment of Surgical Skill (OSATS) and Objective Structured Assessment of Cataract Surgical Skill (OSACSS) tools. Cataract surgeons outperformed medical students with regard to overall score on capsulorhexis (p < 0.001, p = 0.035, p = 0.010 for the three iterations, respectively), navigation (p = 0.024, p = 0.307, p = 0.007), and forceps (p = 0.017, p = 0.03, p = 0.028). Less obvious differences in overall score were found for the cracking and chopping (p = 0.266, p = 0.022, p = 0.324) and phaco (p = 0.011, p = 0.081 for the two iterations, respectively) modules. No differences in overall score were found on hydromaneuvers (p = 0.588, p = 0.503, p = 0.773), but surgeons received better scores from the evaluations of the modified OSATS (p = 0.001) and OSACSS (capsulorhexis, p = 0.003; hydromaneuvers, p = 0.017; phaco, p = 0.001). Construct validity was found for several modules not previously investigated (phaco, hydromaneuvers, cracking and chopping, navigation), and our results confirm previously demonstrated construct validity for the capsulorhexis and forceps modules. Interestingly, validation of the hydromaneuvers module required the OSACSS video evaluation tool. Further development of the scoring system in the simulator for the hydromaneuvers module would be advantageous and would make training and evaluation of progress more accessible and immediate.

  17. Development of an objective assessment tool for total laparoscopic hysterectomy: A Delphi method among experts and evaluation on a virtual reality simulator.

    PubMed

    Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane; Crochet, Patrice

    2018-01-01

    Total laparoscopic hysterectomy (LH) requires an advanced level of operative skills and training. The aim of this study was to develop an objective scale specific for the assessment of technical skills for LH (H-OSATS) and to demonstrate feasibility of use and validity in a virtual reality setting. The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on relevant steps that should be included in the H-OSATS scale for assessment of operative performance. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators of different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. A total of 76 discrete steps were identified by the hierarchical task analysis. 14 experts completed the two rounds of the Delphi questionnaire. 64 steps reached consensus and were integrated into the scale. During the validation process, the median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate and experienced groups for total H-OSATS scores (133, 155.9 and 178.25 respectively; p = 0.002). The H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p<0.001) and test-retest reliability (ICC = 0.877; p<0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p<0.001). The H-OSATS scale displayed evidence of validity for assessment of technical performance for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice. Next steps should focus on evaluating the validity of the scale in the operating room.

  18. ISSARS Aerosol Database: an Incorporation of Atmospheric Particles into a Universal Tool to Simulate Remote Sensing Instruments

    NASA Technical Reports Server (NTRS)

    Goetz, Michael B.

    2011-01-01

    The Instrument Simulator Suite for Atmospheric Remote Sensing (ISSARS) entered its third and final year of development with the overall goal of providing a unified tool to simulate active and passive spaceborne atmospheric remote sensing instruments, covering wavelengths ranging from the UV to microwaves. ISSARS handles all assumptions and uses various scattering and microphysics models to fill the gaps left unspecified by the atmospheric models when creating each instrument's measurements. This will benefit mission design and reduce mission cost, enable efficient implementation of multi-instrument/platform Observing System Simulation Experiments (OSSE), and improve existing models as well as new advanced models under development. In this effort, various aerosol particles are incorporated into the system; given the input wavelength and the spectral refractive indices of each spherical test particle, the system generates its scattering properties and phase functions. The atmospheric particles being integrated into the system comprise those observed by the Multi-angle Imaging SpectroRadiometer (MISR) and by the Multiangle SpectroPolarimetric Imager (MSPI). In addition, a complex scattering database generated by Prof. Ping Yang (Texas A&M) is also incorporated into this aerosol database. Future development with a radiative transfer code will generate a series of results that can be validated against results obtained by the MISR and MSPI instruments; in the meantime, test cases are simulated to determine the validity of the various plugin libraries used to compute or gather the scattering properties of particles studied by MISR and MSPI, or contained within the single-scattering properties database of tri-axial ellipsoidal mineral dust particles created by Prof. Ping Yang.
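    For orientation, the sketch below computes the simplest of these scattering quantities in the small-particle (Rayleigh) limit: a scattering efficiency from the refractive index and size parameter, plus the Rayleigh phase function. The actual plugin libraries use full Mie and ellipsoidal-particle solutions, and the refractive index used here is an assumed, dust-like value.

    ```python
    import numpy as np

    def rayleigh_qsca(m: complex, x: float) -> float:
        """Scattering efficiency for size parameter x = 2*pi*r/lambda, valid for x << 1."""
        lorentz = (m**2 - 1.0) / (m**2 + 2.0)
        return (8.0 / 3.0) * x**4 * abs(lorentz) ** 2

    def rayleigh_phase(theta_rad: np.ndarray) -> np.ndarray:
        """Normalized Rayleigh phase function for unpolarized light."""
        return 0.75 * (1.0 + np.cos(theta_rad) ** 2)

    m = 1.53 + 0.008j          # assumed dust-like refractive index
    x = 0.3                    # small size parameter
    print("Q_sca ~", rayleigh_qsca(m, x))
    print("P(90 deg) =", rayleigh_phase(np.array([np.pi / 2]))[0])
    ```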

  19. A web-based rapid assessment tool for production publishing solutions

    NASA Astrophysics Data System (ADS)

    Sun, Tong

    2010-02-01

    Solution assessment is a critical first step in understanding and measuring the business process efficiency enabled by an integrated solution package. However, assessing the effectiveness of any solution is usually a very expensive and time-consuming task which involves substantial domain knowledge, collecting and understanding the specific customer operational context, defining validation scenarios and estimating the expected performance and operational cost. This paper presents an intelligent web-based tool that can rapidly assess any given solution package for production publishing workflows via a simulation engine and create a report of various estimated performance metrics (e.g. throughput, turnaround time, resource utilization) and operational cost. By integrating the digital publishing workflow ontology and an activity-based costing model with a Petri-net based workflow simulation engine, this web-based tool allows users to quickly evaluate potential digital publishing solutions side-by-side within their desired operational contexts, and provides a low-cost and rapid assessment for organizations before committing to any purchase. This tool also benefits solution providers by shortening sales cycles, establishing trustworthy customer relationships, and supplementing professional assessment services with a proven quantitative simulation and estimation technology.
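    A drastically reduced sketch of the underlying idea is shown below: jobs flow through two sequential publishing stages, and the simulation reports throughput, mean turnaround time and per-stage utilization. The stage times and arrival rate are hypothetical, and a real engine (as described above) would use a Petri net plus an activity-based costing model rather than this two-stage queue.

    ```python
    import random

    def simulate(n_jobs=500, arrival_mean=4.0, stage_means=(3.0, 2.5), seed=7):
        """Two single-server stages in series, FIFO; all times in arbitrary units."""
        rng = random.Random(seed)
        t_arrive = 0.0
        stage_free = [0.0, 0.0]        # time at which each stage next becomes available
        busy = [0.0, 0.0]              # accumulated busy time per stage
        turnaround = []
        for _ in range(n_jobs):
            t_arrive += rng.expovariate(1.0 / arrival_mean)
            t = t_arrive
            for s, mean in enumerate(stage_means):
                start = max(t, stage_free[s])
                dur = rng.expovariate(1.0 / mean)
                stage_free[s] = start + dur
                busy[s] += dur
                t = start + dur
            turnaround.append(t - t_arrive)
        makespan = stage_free[1]
        return {
            "throughput": n_jobs / makespan,                  # jobs per time unit
            "mean_turnaround": sum(turnaround) / n_jobs,
            "utilization": [b / makespan for b in busy],
        }

    print(simulate())
    ```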

  20. On the granular fingering instability: controlled triggering in laboratory experiments and numerical simulations

    NASA Astrophysics Data System (ADS)

    Vriend, Nathalie; Tsang, Jonny; Arran, Matthew; Jin, Binbin; Johnsen, Alexander

    2017-11-01

    When a mixture of small, smooth particles and larger, coarse particles is released on a rough inclined plane, the initially uniform front may break up into distinct fingers which elongate over time. This fingering instability is sensitive to the unique arrangement of individual particles and is driven by granular segregation (Pouliquen et al., 1997). Variability in initial conditions creates significant limitations for consistent experimental and numerical validation of newly developed theoretical models (Baker et al., 2016) for finger formation. We present an experimental study using a novel tool that sets the initial finger width of the instability. By changing this trigger width between experiments, we explore the response of the avalanche breakup to perturbations of different widths. Discrete particle simulations (using MercuryDPM, Thornton et al., 2012) are conducted under a similar setting, reproducing the variable finger width and allowing validation between experiments and numerical simulations. Good agreement between simulations and experiments is obtained, and ongoing theoretical work is briefly introduced. NMV acknowledges the Royal Society Dorothy Hodgkin Research Fellowship.

  1. Mixed reality ventriculostomy simulation: experience in neurosurgical residency.

    PubMed

    Hooten, Kristopher G; Lister, J Richard; Lombard, Gwen; Lizdas, David E; Lampotang, Samsun; Rajon, Didier A; Bova, Frank; Murad, Gregory J A

    2014-12-01

    Medicine and surgery are turning toward simulation to improve on limited patient interaction during residency training. Many simulators today use virtual reality with augmented haptic feedback and little to no physical elements. In a collaborative effort, the University of Florida Department of Neurosurgery and the Center for Safety, Simulation & Advanced Learning Technologies created a novel "mixed" physical and virtual simulator to mimic the ventriculostomy procedure. The simulator contains all the physical components encountered during the procedure, with superimposed 3-D virtual elements for the neuroanatomical structures. The objective was to introduce the ventriculostomy simulator and its validation as a necessary training tool in neurosurgical residency. We tested the simulator in more than 260 residents. An algorithm combining time and accuracy was used to grade performance. Voluntary post-performance surveys were used to evaluate the experience. Results demonstrate that more experienced residents had statistically significantly better scores and completed the procedure in less time than inexperienced residents. Survey results revealed that most residents agreed that practice on the simulator would help with future ventriculostomies. This mixed reality simulator provides a real-life experience and will be an instrumental tool in training the next generation of neurosurgeons. We have now implemented a standard whereby incoming residents must prove efficiency and skill on the simulator before their first interaction with a patient.

  2. Microstructure Modeling of 3rd Generation Disk Alloys

    NASA Technical Reports Server (NTRS)

    Jou, Herng-Jeng

    2010-01-01

    The objective of this program is to model, validate, and predict the precipitation microstructure evolution, using the PrecipiCalc (QuesTek Innovations LLC) software, for 3rd generation Ni-based gas turbine disc superalloys during processing and service, with a set of logical and consistent experiments and characterizations. Furthermore, within this program, the originally research-oriented microstructure simulation tool will be further improved and implemented to be a useful and user-friendly engineering tool. In this report, the key accomplishments achieved during the second year (2008) of the program are summarized. The activities of this year include the final selection of multicomponent thermodynamics and mobility databases, precipitate surface energy determination from nucleation experiments, a multiscale comparison of predicted versus measured intragrain precipitation microstructure in quench samples showing good agreement, isothermal coarsening experiments and the interaction of grain boundary and intergrain precipitates, the primary microstructure of subsolvus treatment, and finally the software implementation plan for the third year of the project. In the following year, the calibrated models and simulation tools will be validated against an independently developed experimental data set, with actual disc heat treatment process conditions. Furthermore, software integration and implementation will be developed to provide materials engineers valuable information in order to optimize the processing of 3rd generation gas turbine disc alloys.

  3. Simulation of gaseous pollutant dispersion around an isolated building using the k-ω SST (shear stress transport) turbulence model.

    PubMed

    Yu, Hesheng; Thé, Jesse

    2017-05-01

    The dispersion of gaseous pollutants around buildings is difficult to predict due to complex turbulence features such as flow detachment and zones of high shear. Computational fluid dynamics (CFD) models are one of the most promising tools to describe the pollutant distribution in the near field of buildings. Reynolds-averaged Navier-Stokes (RANS) models are the most commonly used CFD techniques to address turbulence transport of the pollutant. This research work studies the use of the k-ω SST closure model for gas dispersion around a building by fully resolving the viscous sublayer for the first time. The performance of the standard k-ε model is also included for comparison, along with results of an extensively validated Gaussian dispersion model, the U.S. Environmental Protection Agency (EPA) AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model). This study's CFD models apply the standard k-ε and the k-ω SST turbulence models to obtain the wind flow field. A passive concentration transport equation is then solved on the resolved flow field to simulate the distribution of pollutant concentrations. The resulting wind flow and concentration fields are validated rigorously against extensive data using multiple validation metrics. The wind flow field can be acceptably modeled by the standard k-ε model; however, the k-ε model fails to simulate the gas dispersion. The k-ω SST model outperforms k-ε in both flow and dispersion simulations, with higher hit rates for dimensionless velocity components and higher "factor of 2" of observations (FAC2) for normalized concentration. All these validation metrics of the k-ω SST model pass the quality assurance criteria recommended by the Association of German Engineers (Verein Deutscher Ingenieure, VDI) guideline. Furthermore, these metrics are better than or the same as those in the literature. Comparison between the performances of the k-ω SST model and AERMOD shows that the CFD simulation is superior to the Gaussian-type model for pollutant dispersion in the near wake of obstacles. AERMOD can perform as a screening tool for near-field gas dispersion due to its expeditious calculation and its ability to handle complicated cases. The utilization of the k-ω SST model to simulate gaseous pollutant dispersion around an isolated building is appropriate and is expected to be suitable for complex urban environments. Multiple validation metrics of the k-ω SST turbulence model in CFD quantitatively indicated that this turbulence model was appropriate for the simulation of gas dispersion around buildings. CFD is, therefore, an attractive alternative to wind tunnel testing for modeling gas dispersion in urban environments due to its excellent performance and lower cost.

  4. Simulation/Emulation Techniques: Compressing Schedules With Parallel (HW/SW) Development

    NASA Technical Reports Server (NTRS)

    Mangieri, Mark L.; Hoang, June

    2014-01-01

    NASA has always been in the business of balancing new technologies and techniques to achieve human space travel objectives. NASA's Kedalion engineering analysis lab has been validating and using many contemporary avionics HW/SW development and integration techniques, which represent new paradigms to NASA's heritage culture. Kedalion has validated many of the Orion HW/SW engineering techniques borrowed from the adjacent commercial aircraft avionics solution space, inserting new techniques and skills into the Multi - Purpose Crew Vehicle (MPCV) Orion program. Using contemporary agile techniques, Commercial-off-the-shelf (COTS) products, early rapid prototyping, in-house expertise and tools, and extensive use of simulators and emulators, NASA has achieved cost effective paradigms that are currently serving the Orion program effectively. Elements of long lead custom hardware on the Orion program have necessitated early use of simulators and emulators in advance of deliverable hardware to achieve parallel design and development on a compressed schedule.

  5. DualSPHysics: A numerical tool to simulate real breakwaters

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho

    2018-02-01

    The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics allows the simulation of real engineering problems that involve complex geometries with high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data from a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, the spurious reflections from the wavemaker are removed by using an active wave absorption technique.

  6. Development and validation of rear impact computer simulation model of an adult manual transit wheelchair with a seated occupant.

    PubMed

    Salipur, Zdravko; Bertocci, Gina

    2010-01-01

    It has been shown that ANSI WC19 transit wheelchairs that are crashworthy in frontal impact exhibit catastrophic failures in rear impact and may not be able to provide stable seating support and thus occupant protection for the wheelchair occupant. Thus far only limited sled test and computer simulation data have been available to study rear impact wheelchair safety. Computer modeling can be used as an economic and comprehensive tool to gain critical knowledge regarding wheelchair integrity and occupant safety. This study describes the development and validation of a computer model simulating an adult wheelchair-seated occupant subjected to a rear impact event. The model was developed in MADYMO and validated rigorously using the results of three similar sled tests conducted to specifications provided in the draft ISO/TC 173 standard. Outcomes from the model can provide critical wheelchair loading information to wheelchair and tiedown manufacturers, resulting in safer wheelchair designs for rear impact conditions.

  7. Validation and optimization of SST k-ω turbulence model for pollutant dispersion within a building array

    NASA Astrophysics Data System (ADS)

    Yu, Hesheng; Thé, Jesse

    2016-11-01

    The prediction of the dispersion of air pollutants in urban areas is of great importance to public health, homeland security, and environmental protection. Computational Fluid Dynamics (CFD) emerges as an effective tool for pollutant dispersion modelling. This paper reports and quantitatively validates the shear stress transport (SST) k-ω turbulence closure model and its transitional variant for pollutant dispersion in a complex urban environment for the first time. Sensitivity analysis is performed to establish recommendations for the proper use of turbulence models in urban settings. The current SST k-ω simulation is validated rigorously by extensive experimental data using the hit rate for velocity components, and the "factor of two" of observations (FAC2) and fractional bias (FB) for the concentration field. The simulation results show that the current SST k-ω model can predict the flow field well, with an overall hit rate of 0.870, and concentration dispersion with FAC2 = 0.721 and FB = 0.045. The flow simulation of the current SST k-ω model is slightly inferior to that of a detached eddy simulation (DES), but better than that of the standard k-ε model. However, the current study is the best among these three model approaches when validated against measurements of pollutant dispersion in the atmosphere. This work aims to provide recommendations for the proper use of CFD to predict pollutant dispersion in urban environments.
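    The quoted metrics have standard definitions, sketched below on synthetic observed/predicted pairs (the tolerances in the hit-rate helper are typical VDI-style values, assumed here rather than taken from the paper).

    ```python
    import numpy as np

    def fac2(obs, pred):
        """Fraction of pairs with 0.5 <= pred/obs <= 2."""
        ratio = pred / obs
        return np.mean((ratio >= 0.5) & (ratio <= 2.0))

    def fractional_bias(obs, pred):
        return (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))

    def hit_rate(obs, pred, rel=0.25, abs_tol=0.05):
        """Fraction of points within a relative OR absolute tolerance (VDI-style)."""
        return np.mean((np.abs(pred - obs) <= rel * np.abs(obs)) |
                       (np.abs(pred - obs) <= abs_tol))

    rng = np.random.default_rng(3)
    obs = rng.lognormal(0.0, 0.5, 200)              # synthetic normalized concentrations
    pred = obs * rng.lognormal(0.0, 0.3, 200)       # synthetic model predictions
    print("FAC2:", fac2(obs, pred), "FB:", fractional_bias(obs, pred))
    print("Hit rate:", hit_rate(obs, pred))
    ```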

  8. Simulation in bronchoscopy: current and future perspectives.

    PubMed

    Nilsson, Philip Mørkeberg; Naur, Therese Maria Henriette; Clementsen, Paul Frost; Konge, Lars

    2017-01-01

    To provide an overview of the current literature that informs how to approach simulation practice of bronchoscopy, and to discuss how findings from other simulation research can help inform the use of simulation in bronchoscopy training. We conducted a literature search on simulation training of bronchoscopy and divided relevant studies into three categories: 1) structuring simulation training in bronchoscopy, 2) assessment of competence in bronchoscopy training, and 3) development of cheap alternatives for bronchoscopy simulation. Bronchoscopy simulation is effective, and the training should be structured as distributed practice with mastery learning criteria (ie, training until a certain level of competence is achieved). Dyad practice (training in pairs) is possible and may increase the utility of available simulators. Trainee performance should be assessed with assessment tools with established validity. Three-dimensional printing is a promising new technology opening possibilities for developing cheap simulators with innovative features.

  9. An End-to-End simulator for the development of atmospheric corrections and temperature-emissivity separation algorithms in the TIR spectral domain

    NASA Astrophysics Data System (ADS)

    Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas

    2017-04-01

    The development and optimization of image processing algorithms requires the availability of datasets depicting every step from the earth's surface to the sensor's detector. The lack of ground truth data obliges developers to rely on simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES), or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces were simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracy in temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
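    For a single TIR channel, the forward branch reduces to the familiar relation L_sensor = τ·[ε·B(T) + (1 − ε)·L_down] + L_up. The sketch below evaluates it with Planck's law; the transmittance and path-radiance terms are placeholders that a radiative transfer code would normally supply.

    ```python
    import numpy as np

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann constants

    def planck(wl_m, t_k):
        """Spectral radiance [W m^-2 sr^-1 m^-1] at wavelength wl_m [m] and temperature t_k [K]."""
        return (2.0 * H * C**2 / wl_m**5) / (np.exp(H * C / (wl_m * KB * t_k)) - 1.0)

    def at_sensor_radiance(wl_m, t_surf, emissivity, tau, l_up, l_down):
        surface_leaving = emissivity * planck(wl_m, t_surf) + (1.0 - emissivity) * l_down
        return tau * surface_leaving + l_up

    wl = 10.0e-6                                # 10 micrometre channel
    # Placeholder atmospheric terms (would come from a radiative transfer calculation):
    tau, l_up, l_down = 0.85, 6.0e5, 8.0e5
    print("L_sensor =", at_sensor_radiance(wl, 300.0, 0.96, tau, l_up, l_down),
          "W m^-2 sr^-1 m^-1")
    ```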

  10. Validation of CT dose-reduction simulation

    PubMed Central

    Massoumzadeh, Parinaz; Don, Steven; Hildebolt, Charles F.; Bae, Kyongtae T.; Whiting, Bruce R.

    2009-01-01

    The objective of this research was to develop and validate a custom computed tomography dose-reduction simulation technique for producing images that have an appearance consistent with the same scan performed at a lower mAs (with fixed kVp, rotation time, and collimation). Synthetic noise is added to projection (sinogram) data, incorporating a stochastic noise model that includes energy-integrating detectors, tube-current modulation, bowtie beam filtering, and electronic system noise. Experimental methods were developed to determine the parameters required for each component of the noise model. As a validation, the outputs of the simulations were compared to measurements with cadavers in the image domain and with phantoms in both the sinogram and image domain, using an unbiased root-mean-square relative error metric to quantify agreement in noise processes. Four-alternative forced-choice (4AFC) observer studies were conducted to confirm the realistic appearance of simulated noise, and the effects of various system model components on visual noise were studied. The “just noticeable difference (JND)” in noise levels was analyzed to determine the sensitivity of observers to changes in noise level. Individual detector measurements were shown to be normally distributed (p>0.54), justifying the use of a Gaussian random noise generator for simulations. Phantom tests showed the ability to match original and simulated noise variance in the sinogram domain to within 5.6%±1.6% (standard deviation), which was then propagated into the image domain with errors less than 4.1%±1.6%. Cadaver measurements indicated that image noise was matched to within 2.6%±2.0%. More importantly, the 4AFC observer studies indicated that the simulated images were realistic, i.e., no detectable difference between simulated and original images (p=0.86) was observed. JND studies indicated that observers’ sensitivity to change in noise levels corresponded to a 25% difference in dose, which is far larger than the noise accuracy achieved by simulation. In summary, the dose-reduction simulation tool demonstrated excellent accuracy in providing realistic images. The methodology promises to be a useful tool for researchers and radiologists to explore dose reduction protocols in an effort to produce diagnostic images with radiation dose “as low as reasonably achievable.” PMID:19235386
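    A highly simplified version of the core idea reads as follows: convert the measured line integrals to detector counts, add zero-mean noise whose variance raises the total to that of the target (lower) dose, and convert back. The sketch below does only that quantum-noise step; the published model additionally treats the bowtie filter, tube-current modulation and electronic noise, and the incident-count calibration value here is assumed.

    ```python
    import numpy as np

    def simulate_dose_reduction(sinogram_atten, dose_frac, incident_counts=1.0e5, seed=0):
        """sinogram_atten: line integrals (-ln(I/I0)); returns a noisier sinogram."""
        rng = np.random.default_rng(seed)
        counts = incident_counts * np.exp(-sinogram_atten)     # detected counts at full dose
        # Extra variance needed so the total quantum noise matches the reduced dose.
        extra_var = counts * (1.0 / dose_frac - 1.0)
        noisy_counts = counts + rng.normal(0.0, np.sqrt(extra_var))
        noisy_counts = np.clip(noisy_counts, 1.0, None)         # guard against non-positive counts
        return -np.log(noisy_counts / incident_counts)

    sino = np.full((720, 512), 2.0)             # toy sinogram of constant attenuation
    half_dose = simulate_dose_reduction(sino, 0.5)
    print("added noise std (attenuation units):", (half_dose - sino).std())
    ```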

  11. Validation of virtual reality as a tool to understand and prevent child pedestrian injury.

    PubMed

    Schwebel, David C; Gaines, Joanna; Severson, Joan

    2008-07-01

    In recent years, virtual reality has emerged as an innovative tool for health-related education and training. Among the many benefits of virtual reality is the opportunity for novice users to engage unsupervised in a safe environment when the real environment might be dangerous. Virtual environments are only useful for health-related research, however, if behavior in the virtual world validly matches behavior in the real world. This study was designed to test the validity of an immersive, interactive virtual pedestrian environment. A sample of 102 children and 74 adults was recruited to complete simulated road-crossings in both the virtual environment and the identical real environment. In both the child and adult samples, construct validity was demonstrated via significant correlations between behavior in the virtual and real worlds. Results also indicate construct validity through developmental differences in behavior; convergent validity by showing correlations between parent-reported child temperament and behavior in the virtual world; internal reliability of various measures of pedestrian safety in the virtual world; and face validity, as measured by users' self-reported perception of realism in the virtual world. We discuss issues of generalizability to other virtual environments, and the implications for application of virtual reality to understanding and preventing pediatric pedestrian injuries.

  12. Modeling Production Plant Forming Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhee, M; Becker, R; Couch, R

    2004-09-22

    Engineering has simulation tools and experience in modeling forming processes. Y-12 personnel have expressed interest in validating our tools and experience against their manufacturing process activities such as rolling, casting, and forging. We have demonstrated numerical capabilities in a collaborative DOE/OIT project with ALCOA that is nearing successful completion. The goal was to use ALE3D to model Alcoa's slab rolling process in order to demonstrate a computational tool that would allow Alcoa to define a rolling schedule that would minimize the probability of ingot fracture, thus reducing waste and energy consumption. It is intended to lead to long-term collaboration with Y-12 and perhaps involvement with other components of the weapons production complex. Using simulations to aid in design of forming processes can: decrease time to production; reduce forming trials and associated expenses; and guide development of products with greater uniformity and less scrap.

  13. Stability analysis using SDSA tool

    NASA Astrophysics Data System (ADS)

    Goetzendorf-Grabowski, Tomasz; Mieszalski, Dawid; Marcinkiewicz, Ewa

    2011-11-01

    The SDSA (Simulation and Dynamic Stability Analysis) application is presented as a tool for analysing the dynamic characteristics of an aircraft at the conceptual design stage. SDSA is part of the CEASIOM (Computerized Environment for Aircraft Synthesis and Integrated Optimization Methods) software environment which was developed within the SimSAC (Simulating Aircraft Stability And Control Characteristics for Use in Conceptual Design) project, funded by the European Commission 6th Framework Program. SDSA can also be used as stand-alone software and integrated with other design and optimisation systems using software wrappers. This paper focuses on the main functionalities of SDSA and presents both computational and free-flight experimental results to compare and validate the presented software. Two aircraft are considered, the EADS Ranger 2000 and the Warsaw University-designed PW-6 glider. For the two cases considered here, the SDSA software is shown to be an excellent tool for predicting the dynamic characteristics of an aircraft.

  14. A global model for steady state and transient S.I. engine heat transfer studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohac, S.V.; Assanis, D.N.; Baker, D.M.

    1996-09-01

    A global, systems-level model which characterizes the thermal behavior of internal combustion engines is described in this paper. Based on resistor-capacitor thermal networks, either steady-state or transient thermal simulations can be performed. A two-zone, quasi-dimensional spark-ignition engine simulation is used to determine in-cylinder gas temperature and convection coefficients. Engine heat fluxes and component temperatures can subsequently be predicted from specification of general engine dimensions, materials, and operating conditions. Emphasis has been placed on minimizing the number of model inputs and keeping them as simple as possible to make the model practical and useful as an early design tool. The success of the global model depends on properly scaling the general engine inputs to accurately model engine heat flow paths across families of engine designs. The development and validation of suitable, scalable submodels is described in detail in this paper. Simulation sub-models and overall system predictions are validated with data from two spark ignition engines. Several sensitivity studies are performed to determine the most significant heat transfer paths within the engine and exhaust system. Overall, it has been shown that the model is a powerful tool in predicting steady-state heat rejection and component temperatures, as well as transient component temperatures.
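
    As a minimal illustration of the resistor-capacitor thermal network idea underlying such a global model, the sketch below integrates a two-capacitance lumped network (cylinder wall and block) between fixed gas and coolant temperatures. All resistances, capacitances and temperatures are invented for illustration and are not taken from the paper.

```python
# Minimal lumped resistor-capacitor thermal network: two capacitive nodes
# (cylinder wall, engine block) between fixed gas and coolant temperatures.
# All values are invented for illustration, not taken from the paper.
from scipy.integrate import solve_ivp

C_wall, C_block = 2.0e3, 8.0e3        # J/K thermal capacitances (assumed)
R_gas_wall = 0.05                     # K/W gas-to-wall resistance (assumed)
R_wall_block = 0.02                   # K/W wall-to-block resistance (assumed)
R_block_cool = 0.03                   # K/W block-to-coolant resistance (assumed)
T_gas, T_cool = 900.0, 360.0          # K fixed boundary temperatures (assumed)

def rhs(t, T):
    T_wall, T_block = T
    q_in = (T_gas - T_wall) / R_gas_wall        # W from gas into the wall
    q_wb = (T_wall - T_block) / R_wall_block    # W from wall into the block
    q_out = (T_block - T_cool) / R_block_cool   # W rejected to the coolant
    return [(q_in - q_wb) / C_wall, (q_wb - q_out) / C_block]

sol = solve_ivp(rhs, (0.0, 5000.0), [400.0, 380.0], max_step=5.0)
print("wall / block temperatures approaching steady state [K]:", sol.y[:, -1])
```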

  15. Finite-Difference Modeling of Acoustic and Gravity Wave Propagation in Mars Atmosphere: Application to Infrasounds Emitted by Meteor Impacts

    NASA Astrophysics Data System (ADS)

    Garcia, Raphael F.; Brissaud, Quentin; Rolland, Lucie; Martin, Roland; Komatitsch, Dimitri; Spiga, Aymeric; Lognonné, Philippe; Banerdt, Bruce

    2017-10-01

    The propagation of acoustic and gravity waves in planetary atmospheres is strongly dependent on both wind conditions and attenuation properties. This study presents a finite-difference modeling tool tailored for acoustic-gravity wave applications that takes into account the effect of background winds, attenuation phenomena (including relaxation effects specific to carbon dioxide atmospheres) and wave amplification by exponential density decrease with height. The simulation tool is implemented in 2D Cartesian coordinates and first validated by comparison with analytical solutions for benchmark problems. It is then applied to surface explosions simulating meteor impacts on Mars in various Martian atmospheric conditions inferred from global climate models. The acoustic wave travel times are validated by comparison with 2D ray tracing in a windy atmosphere. Our simulations predict that acoustic waves generated by impacts can refract back to the surface on wind ducts at high altitude. In addition, due to the strong nighttime near-surface temperature gradient on Mars, the acoustic waves are trapped in a waveguide close to the surface, which allows a night-side detection of impacts at large distances in Mars plains. Such theoretical predictions are directly applicable to future measurements by the INSIGHT NASA Discovery mission.
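
    A stripped-down illustration of the density-driven wave amplification mentioned above: the staggered-grid finite-difference sketch below propagates a one-dimensional linear acoustic pulse upward through a background density that decays exponentially with height. Winds, gravity terms and CO2 relaxation are omitted, and all numbers are rough, assumed Mars-like values rather than parameters of the authors' tool.

```python
# 1-D linear acoustics in an isothermal atmosphere with rho0(z) = rho_s * exp(-z/H).
# Staggered-grid leapfrog sketch; winds, gravity and CO2 relaxation are omitted.
import numpy as np

nz, dz = 400, 50.0                       # 20 km column, 50 m spacing (assumed)
c, H, rho_s = 240.0, 10e3, 0.020         # sound speed, scale height, surface density (assumed)
z = np.arange(nz) * dz
rho0 = rho_s * np.exp(-z / H)

dt = 0.4 * dz / c                        # CFL-limited time step
p = np.exp(-((z - 2e3) / 500.0) ** 2)    # pressure pulse released near the surface
w = np.zeros(nz + 1)                     # vertical velocity on staggered points

for _ in range(700):
    # velocity update from the pressure gradient (interior staggered points)
    w[1:-1] -= dt / (0.5 * (rho0[1:] + rho0[:-1])) * (p[1:] - p[:-1]) / dz
    # pressure update from the velocity divergence
    p -= dt * rho0 * c ** 2 * (w[1:] - w[:-1]) / dz

# velocity amplitude grows roughly as 1/sqrt(rho0) as the pulse climbs
print("max |w| in top vs bottom quarter of the column:",
      np.abs(w[3 * nz // 4:]).max(), np.abs(w[:nz // 4]).max())
```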

  16. SimPhospho: a software tool enabling confident phosphosite assignment.

    PubMed

    Suni, Veronika; Suomi, Tomi; Tsubosaka, Tomoya; Imanishi, Susumu Y; Elo, Laura L; Corthals, Garry L

    2018-03-27

    Mass spectrometry combined with enrichment strategies for phosphorylated peptides has been successfully employed for two decades to identify sites of phosphorylation. However, unambiguous phosphosite assignment is considered challenging. Given that site-specific phosphorylation events function as different molecular switches, validation of phosphorylation sites is of utmost importance. In our earlier study we developed a method based on simulated phosphopeptide spectral libraries, which enables highly sensitive and accurate phosphosite assignments. To promote more widespread use of this method, we here introduce a software implementation with improved usability and performance. We present SimPhospho, a fast and user-friendly tool for accurate simulation of phosphopeptide tandem mass spectra. Simulated phosphopeptide spectral libraries are used to validate and supplement database search results, with a goal to improve reliable phosphoproteome identification and reporting. The presented program can be easily used together with the Trans-Proteomic Pipeline and integrated in a phosphoproteomics data analysis workflow. SimPhospho is available for Windows, Linux and Mac operating systems at https://sourceforge.net/projects/simphospho/. It is open source and implemented in C++. A user's manual with detailed description of data analysis using SimPhospho as well as test data can be found as supplementary material of this article. Supplementary data are available at https://www.btk.fi/research/computational-biomedicine/software/.
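
    The underlying idea of checking a phosphosite assignment against simulated spectra can be shown with a toy scorer: bin the observed fragment spectrum, bin a simulated spectrum for each candidate site isoform, and rank candidates by cosine similarity. This is only a schematic sketch with made-up peaks and site names; it is not the SimPhospho algorithm, scoring scheme or file format.

```python
# Toy illustration of ranking candidate phosphosite isoforms against an observed
# spectrum by cosine similarity of binned fragment intensities (not SimPhospho itself).
import numpy as np

def binned(peaks, mz_max=2000.0, bin_width=1.0):
    """peaks: list of (m/z, intensity) -> normalized fixed-length intensity vector."""
    vec = np.zeros(int(mz_max / bin_width))
    for mz, inten in peaks:
        vec[int(mz / bin_width)] += inten
    n = np.linalg.norm(vec)
    return vec / n if n else vec

# Hypothetical simulated spectra for two candidate phosphosites and one observed spectrum.
simulated = {
    "pS3": [(175.1, 50), (262.1, 80), (429.2, 100), (516.2, 60)],
    "pT5": [(175.1, 50), (342.1, 80), (429.2, 100), (596.2, 60)],
}
observed = [(175.1, 45), (262.1, 70), (429.2, 100), (516.2, 55), (610.0, 5)]

obs_vec = binned(observed)
scores = {site: float(obs_vec @ binned(peaks)) for site, peaks in simulated.items()}
best = max(scores, key=scores.get)
print(scores, "-> best supported assignment:", best)
```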

  17. Modeling and Validation of Power-split and P2 Parallel Hybrid Electric Vehicles (SAE 2013-01-1470)

    EPA Science Inventory

    The Advanced Light-Duty Powertrain and Hybrid Analysis tool was created by EPA to evaluate the Greenhouse Gas (GHG) emissions of Light-Duty (LD) vehicles. It is a physics-based, forward-looking, full vehicle computer simulator capable of analyzing various vehicle types combined ...

  18. Potential pitfalls of strain rate imaging: angle dependency

    NASA Technical Reports Server (NTRS)

    Castro, P. L.; Greenberg, N. L.; Drinko, J.; Garcia, M. J.; Thomas, J. D.

    2000-01-01

    Strain Rate Imaging (SRI) is a new echocardiographic technique that allows for the real-time determination of myocardial SR, which may be used for the early and accurate detection of coronary artery disease. We sought to study whether SR is affected by scan line alignment in a computer simulation and an in vivo experiment. Through the computer simulation and the in vivo experiment we generated and validated safe scanning sectors within the ultrasound scan sector and showed that, while SRI will be an extremely valuable tool in detecting coronary artery disease, there are potential pitfalls for the unwary clinician. Only after accounting for these effects of angle dependency can clinicians utilize SRI's potential as a valuable tool in detecting coronary artery disease.

  19. Development and validation of quasi-steady-state heat pump water heater model having stratified water tank and wrapped-tank condenser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Bo; Nawaz, Kashif; Baxter, Van D.

    Heat pump water heater systems (HPWH) introduce new challenges for design and modeling tools, because they require a vapor compression system balanced with a water storage tank. In addition, a wrapped-tank condenser coil has strong coupling with a stratified water tank, which makes HPWH simulation a transient process. To tackle these challenges and deliver an effective, hardware-based HPWH equipment design tool, a quasi-steady-state HPWH model was developed based on the DOE/ORNL Heat Pump Design Model (HPDM). Two new component models were added via this study. One is a one-dimensional stratified water tank model, an improvement to the open-source EnergyPlus water tank model, by introducing a calibration factor to account for bulk mixing effect due to water draws, circulations, etc. The other is a wrapped-tank condenser coil model, using a segment-to-segment modeling approach. The HPWH system model was validated against available experimental data, and the model was then used for parametric simulations to determine the effects of various design factors.
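
    To illustrate the kind of bulk-mixing calibration factor described above, the sketch below advances a one-dimensional stratified tank through a hot-water draw: water plugs move toward the top outlet, mains water enters the bottom node, and an assumed mixing factor blends adjacent nodes each step. Node count, draw rate and the mixing factor are illustrative assumptions, not values from HPDM or EnergyPlus.

```python
# One-dimensional stratified tank during a hot-water draw, with a bulk-mixing
# calibration factor blending adjacent nodes (illustration only, not the
# HPDM/EnergyPlus water-tank model).
import numpy as np

n_nodes = 10
T = np.linspace(50.0, 20.0, n_nodes)       # degC; node 0 = top (outlet), node -1 = bottom
node_vol = 0.30 / n_nodes                  # m^3 per node for a 300 L tank (assumed)
draw_rate = 8.0e-5                         # m^3/s drawn from the top (assumed)
T_inlet = 15.0                             # degC mains water entering the bottom (assumed)
f_mix = 0.05                               # bulk-mixing calibration factor per step (assumed)
dt = 10.0                                  # s

for _ in range(360):                       # one hour of continuous draw
    frac = draw_rate * dt / node_vol       # fraction of each node displaced upward
    T[:-1] = (1.0 - frac) * T[:-1] + frac * T[1:]   # plug flow toward the top outlet
    T[-1] = (1.0 - frac) * T[-1] + frac * T_inlet   # mains water fills the bottom node
    dT = 0.5 * f_mix * (T[1:] - T[:-1])    # calibrated bulk mixing between neighbours
    T[:-1] += dT
    T[1:] -= dT

print("outlet (top node) temperature after a 1 h draw: %.1f degC" % T[0])
```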

  20. Development and validation of quasi-steady-state heat pump water heater model having stratified water tank and wrapped-tank condenser

    DOE PAGES

    Shen, Bo; Nawaz, Kashif; Baxter, Van D.; ...

    2017-10-31

    Heat pump water heater systems (HPWH) introduce new challenges for design and modeling tools, because they require a vapor compression system balanced with a water storage tank. In addition, a wrapped-tank condenser coil has strong coupling with a stratified water tank, which makes HPWH simulation a transient process. To tackle these challenges and deliver an effective, hardware-based HPWH equipment design tool, a quasi-steady-state HPWH model was developed based on the DOE/ORNL Heat Pump Design Model (HPDM). Two new component models were added via this study. One is a one-dimensional stratified water tank model, an improvement to the open-source EnergyPlus water tank model, by introducing a calibration factor to account for bulk mixing effect due to water draws, circulations, etc. The other is a wrapped-tank condenser coil model, using a segment-to-segment modeling approach. The HPWH system model was validated against available experimental data, and the model was then used for parametric simulations to determine the effects of various design factors.

  1. A fast and complete GEANT4 and ROOT Object-Oriented Toolkit: GROOT

    NASA Astrophysics Data System (ADS)

    Lattuada, D.; Balabanski, D. L.; Chesnevskaya, S.; Costa, M.; Crucillà, V.; Guardo, G. L.; La Cognata, M.; Matei, C.; Pizzone, R. G.; Romano, S.; Spitaleri, C.; Tumino, A.; Xu, Y.

    2018-01-01

    Present and future gamma-beam facilities represent a great opportunity to validate and evaluate the cross-sections of many photonuclear reactions at near-threshold energies. Monte Carlo (MC) simulations are very important to evaluate the reaction rates and to maximize the detection efficiency but, unfortunately, they can be very CPU-time-consuming and in some cases very hard to reproduce, especially when exploring near-threshold cross-sections. We developed software that makes use of the validated GEANT4 tracking libraries and the n-body event generator of ROOT in order to provide a fast, reliable and complete MC tool to be used for nuclear physics experiments. This tool is indeed intended to be used for photonuclear reactions at γ-beam facilities with ELISSA (ELI Silicon Strip Array), a new detector array under development at the Extreme Light Infrastructure - Nuclear Physics (ELI-NP). We discuss the results of MC simulations performed to evaluate the effects of the electromagnetic induced background, of the straggling due to the target thickness and of the resolution of the silicon detectors.

  2. Air freight demand models: An overview

    NASA Technical Reports Server (NTRS)

    Dajani, J. S.; Bernstein, G. W.

    1978-01-01

    A survey is presented of some of the approaches which have been considered in freight demand estimation. The few existing continuous time computer simulations of aviation systems are reviewed, with a view toward the assessment of this approach as a tool for structuring air freight studies and for relating the different components of the air freight system. The variety of available data types and sources, without which the calibration, validation and testing of both modal split and simulation models would be impossible, are also reviewed.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, John Russell

    This grant funded the development and dissemination of the Photon Simulator (PhoSim) for the purpose of studying dark energy at high precision with the upcoming Large Synoptic Survey Telescope (LSST) astronomical survey. The work was in collaboration with the LSST Dark Energy Science Collaboration (DESC). Several detailed physics improvements were made in the optics, atmosphere, and sensor, a number of validation studies were performed, and a significant number of usability features were implemented. Future work in DESC will use PhoSim as the image simulation tool for data challenges used by the analysis groups.

  4. Hyperbolic heat conduction problems involving non-Fourier effects - Numerical simulations via explicit Lax-Wendroff/Taylor-Galerkin finite element formulations

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Namburu, Raju R.

    1989-01-01

    Numerical simulations are presented for hyperbolic heat-conduction problems that involve non-Fourier effects, using explicit, Lax-Wendroff/Taylor-Galerkin FEM formulations as the principal computational tool. Also employed are smoothing techniques which stabilize the numerical noise and accurately predict the propagating thermal disturbances. The accurate capture of propagating thermal disturbances at characteristic time-step values is achieved; numerical test cases are presented which validate the proposed hyperbolic heat-conduction problem concepts.
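
    As a compact illustration of the non-Fourier behaviour these formulations target, the sketch below applies a plain Lax-Wendroff step to the Maxwell-Cattaneo system written in first-order form for temperature and heat flux. The relaxation source term is handled by simple operator splitting rather than the paper's Taylor-Galerkin finite elements, and the material values and boundary conditions are assumed.

```python
# Lax-Wendroff sketch for the Maxwell-Cattaneo (non-Fourier) heat conduction system
#   T_t + (1/(rho*c)) q_x = 0,   q_t + (k/tau) T_x = -q/tau
# with the relaxation source handled by operator splitting (illustration only;
# the paper uses explicit Lax-Wendroff/Taylor-Galerkin finite elements).
import numpy as np

rho_c, k, tau = 1.0, 1.0, 0.5            # assumed material properties
c_w = np.sqrt(k / (rho_c * tau))         # finite thermal wave speed
nx = 400
dx = 1.0 / nx
dt = 0.8 * dx / c_w                      # CFL-limited step
lam = dt / dx
nsteps = 100

A = np.array([[0.0, 1.0 / rho_c],
              [k / tau, 0.0]])
A2 = A @ A

x = (np.arange(nx) + 0.5) * dx
U = np.zeros((2, nx))
U[0] = np.exp(-((x - 0.5) / 0.02) ** 2)  # temperature pulse, zero initial flux

for _ in range(nsteps):
    Up = np.roll(U, -1, axis=1)          # periodic neighbours (illustrative BCs)
    Um = np.roll(U, 1, axis=1)
    U = (U - 0.5 * lam * (A @ (Up - Um))
           + 0.5 * lam ** 2 * (A2 @ (Up - 2.0 * U + Um)))
    U[1] *= np.exp(-dt / tau)            # exact update of the relaxation source

print("initial peak at x = 0.5; a split peak now near x = %.3f" % x[np.argmax(U[0])])
print("expected wave-front travel distance = %.3f" % (c_w * nsteps * dt))
```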

  5. The control of flexible structure vibrations using a cantilevered adaptive truss

    NASA Technical Reports Server (NTRS)

    Wynn, Robert H., Jr.; Robertshaw, Harry H.

    1991-01-01

    Analytical and experimental procedures and design tools are presented for the control of flexible structure vibrations using a cantilevered adaptive truss. Simulated and experimental data are examined for three types of structures: a slender beam, a single curved beam, and two curved beams. The adaptive truss is shown to produce a 6,000-percent increase in damping, demonstrating its potential in vibration control. Good agreement is obtained between the simulated and experimental data, thus validating the modeling methods.

  6. Development of the McGill simulator for endoscopic sinus surgery: a new high-fidelity virtual reality simulator for endoscopic sinus surgery.

    PubMed

    Varshney, Rickul; Frenkiel, Saul; Nguyen, Lily H P; Young, Meredith; Del Maestro, Rolando; Zeitouni, Anthony; Tewfik, Marc A

    2014-01-01

    The technical challenges of endoscopic sinus surgery (ESS) and the high risk of complications support the development of alternative modalities to train residents in these procedures. Virtual reality simulation is becoming a useful tool for training the skills necessary for minimally invasive surgery; however, there are currently no ESS virtual reality simulators available with valid evidence supporting their use in resident education. Our aim was to develop a new rhinology simulator, as well as to define potential performance metrics for trainee assessment. The McGill simulator for endoscopic sinus surgery (MSESS), a new sinus surgery virtual reality simulator with haptic feedback, was developed (a collaboration between the McGill University Department of Otolaryngology-Head and Neck Surgery, the Montreal Neurologic Institute Simulation Lab, and the National Research Council of Canada). A panel of experts in education, performance assessment, rhinology, and skull base surgery convened to identify core technical abilities that would need to be taught by the simulator, as well as performance metrics to be developed and captured. The MSESS allows the user to perform basic sinus surgery skills, such as an ethmoidectomy and sphenoidotomy, through the use of endoscopic tools in a virtual nasal model. The performance metrics were developed by an expert panel and include measurements of safety, quality, and efficiency of the procedure. The MSESS incorporates novel technological advancements to create a realistic platform for trainees. To our knowledge, this is the first simulator to combine novel tools such as the endonasal wash and elaborate anatomic deformity with advanced performance metrics for ESS.

  7. Inverse simulation system for manual-controlled rendezvous and docking based on artificial neural network

    NASA Astrophysics Data System (ADS)

    Zhou, Wanmeng; Wang, Hua; Tang, Guojin; Guo, Shuai

    2016-09-01

    The time-consuming experimental method for handling qualities assessment cannot meet the increasingly fast design requirements of manned space flight. As a tool for aircraft handling qualities research, the model-predictive-control structured inverse simulation (MPC-IS) has potential applications in the aerospace field to guide the astronauts' operations and evaluate the handling qualities more effectively. Therefore, this paper establishes MPC-IS for the manual-controlled rendezvous and docking (RVD) and proposes a novel artificial neural network inverse simulation system (ANN-IS) to further decrease the computational cost. The novel system was obtained by replacing the inverse model of MPC-IS with the artificial neural network. The optimal neural network was trained by the genetic Levenberg-Marquardt algorithm, and finally determined by the Levenberg-Marquardt algorithm. In order to validate MPC-IS and ANN-IS, the manual-controlled RVD experiments on the simulator were carried out. The comparisons between simulation results and experimental data demonstrated the validity of the two systems and the high computational efficiency of ANN-IS.

  8. Fully-Coupled Thermo-Electrical Modeling and Simulation of Transition Metal Oxide Memristors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamaluy, Denis; Gao, Xujiao; Tierney, Brian David

    2016-11-01

    Transition metal oxide (TMO) memristors have recently attracted special attention from the semiconductor industry and academia. Memristors are one of the strongest candidates to replace flash memory, and possibly DRAM and SRAM in the near future. Moreover, memristors have a high potential to enable beyond-CMOS technology advances in novel architectures for high performance computing (HPC). The utility of memristors has been demonstrated in reprogrammable logic (cross-bar switches), brain-inspired computing and in non-CMOS complementary logic. Indeed, the potential use of memristors as logic devices is especially important considering the inevitable end of CMOS technology scaling that is anticipated by 2025. In order to aid the on-going Sandia memristor fabrication effort with a memristor design tool and establish a clear physical picture of resistance switching in TMO memristors, we have created and validated with experimental data a simulation tool we name the Memristor Charge Transport (MCT) Simulator.

  9. Simulation training: a systematic review of simulation in arthroscopy and proposal of a new competency-based training framework.

    PubMed

    Tay, Charison; Khajuria, Ankur; Gupte, Chinmay

    2014-01-01

    Traditional orthopaedic training has followed an apprenticeship model whereby trainees enhance their skills by operating under guidance. However, the introduction of limitations on training hours and shorter training programmes means that alternative training strategies are required. To perform a literature review on simulation training in arthroscopy and devise a framework that structures different simulation techniques that could be used in arthroscopic training. A systematic search of Medline, Embase, Google Scholar and the Cochrane Databases was performed. Search terms included "virtual reality OR simulator OR simulation" and "arthroscopy OR arthroscopic". 14 studies evaluating simulators in knee, shoulder and hip arthroscopy were included. The majority of the studies demonstrated construct and transference validity but only one showed concurrent validity. More studies are required to assess its potential as a training and assessment tool, skills transference between simulators and to determine the extent of skills decay from prolonged delays in training. We also devised a "ladder of arthroscopic simulation" that provides a competency-based framework to implement different simulation strategies. The incorporation of simulation into an orthopaedic curriculum will depend on a coordinated approach between many bodies. But the successful integration of simulators in other areas of surgery supports a possible role for simulation in advancing orthopaedic education. Copyright © 2014 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  10. Cloud computing and validation of expandable in silico livers.

    PubMed

    Ropella, Glen E P; Hunt, C Anthony

    2010-12-03

    In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validated experiments that required scaling experiments to use more simulated lobules than previously, more than could be achieved using the local cluster technology. Rather than dramatically increasing the size of our local cluster we undertook the re-validation experiments using the Amazon EC2 cloud platform. So doing required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed using the cloud platform. The local cluster technology was duplicated in the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem using both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstration of results equivalency from two different wet-labs. The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters. The availability of cloud technology coupled with the evidence of scientific equivalency has lowered the barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware.

  11. A Design Tool for Liquid Rocket Engine Injectors

    NASA Technical Reports Server (NTRS)

    Farmer, R.; Cheng, G.; Trinh, H.; Tucker, K.

    2000-01-01

    A practical design tool which emphasizes the analysis of flowfields near the injector face of liquid rocket engines has been developed and used to simulate preliminary configurations of NASA's Fastrac and vortex engines. This computational design tool is sufficiently detailed to predict the interactive effects of injector element impingement angles and points and the momenta of the individual orifice flows and the combusting flow which results. In order to simulate a significant number of individual orifices, a homogeneous computational fluid dynamics model was developed. To describe sub- and supercritical liquid and vapor flows, the model utilized thermal and caloric equations of state which were valid over a wide range of pressures and temperatures. The model was constructed such that the local quality of the flow was determined directly. Since both the Fastrac and vortex engines utilize RP-1/LOX propellants, a simplified hydrocarbon combustion model was devised in order to accomplish three-dimensional, multiphase flow simulations. Such a model does not identify drops or their distribution, but it does allow the recirculating flow along the injector face and into the acoustic cavity and the film coolant flow to be accurately predicted.

  12. Virtual vitreoretinal surgery: validation of a training programme.

    PubMed

    Vergmann, Anna Stage; Vestergaard, Anders Højslet; Grauslund, Jakob

    2017-02-01

    To test the validity of the eyesi surgical simulator as an assessment tool in a virtual reality vitreoretinal training programme. In collaboration with an experienced vitreoretinal surgeon, a virtual vitreoretinal training programme was composed on the eyesi surgical simulator, software version 2.9.2 (VRmagic GmbH, Mannheim, Germany). It was completed twice by three groups: 20 medical students, ten residents of ophthalmology and five trained vitreoretinal surgeons. The programme contained six training modules: navigation level 2 (Nav2), forceps training level 5 (ForT5), bimanual training level 3 (BimT3), laser coagulation level 3 (LasC3), posterior hyaloid level 3 (PostH3) and internal limiting membrane peeling level 3 (ILMP3). The scores in each module were assessed from two to five different factors (tissue treatment, efficiency, target achievement, instrument handling and microscope handling), and it was possible to achieve 100 points in each module. At the final training session, the highest overall median score was found for the vitreoretinal surgeons (vitreoretinal surgeons: 434 points, residents: 394.5 points, medical students: 272.5 points, p < 0.01). This was also found in four of the six modules. These were Nav2 (p = 0.03), BimT3 (p < 0.01), PostH3 (p < 0.01) and ILMP3 (p < 0.01). On the other hand, the three groups did not differ regarding ForT5 (p = 0.16) or LasC3 (p = 0.75). We developed a training programme with validity for the eyesi surgical simulator as an assessment tool for overall score and for four of six vitreoretinal modules. These findings could potentially make the programme a useful tool in the training of future vitreoretinal surgeons. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  13. Using the split Hopkinson pressure bar to validate material models.

    PubMed

    Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian

    2014-08-28

    This paper gives a discussion of the use of the split-Hopkinson bar with particular reference to the requirements of materials modelling at QinetiQ. This is to deploy validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The split Hopkinson pressure bar (SHPB) is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock wave loading. This means understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals, comparison with an output stress versus strain curve is not sufficient, as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from deployed instrumentation, including the raw strain gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer-Chree oscillations and their effect on the specimen and recognize that this is itself also a valuable validation test of the material model. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
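
    For context, the classical one-wave SHPB reduction whose assumptions the authors argue are often violated for non-metals looks like the sketch below: specimen strain rate from the reflected pulse, strain by time integration, and stress from the transmitted pulse. The bar properties and the synthetic gauge signals are assumptions standing in for real data.

```python
# Classical one-wave SHPB data reduction (the analysis whose assumptions the paper
# says are often violated for non-metals). Synthetic gauge pulses stand in for data.
import numpy as np

E_bar, rho_bar = 200e9, 7800.0                 # steel pressure bars (assumed)
c0 = np.sqrt(E_bar / rho_bar)                  # bar wave speed
d_bar, d_spec, L_spec = 0.020, 0.010, 0.005    # bar/specimen diameters and length, m (assumed)
A_bar, A_spec = np.pi * d_bar**2 / 4, np.pi * d_spec**2 / 4

t = np.linspace(0.0, 200e-6, 2000)             # 200 microseconds of record
pulse = np.where((t > 20e-6) & (t < 120e-6), 1.0, 0.0)
eps_R = -400e-6 * pulse                        # reflected strain pulse (assumed)
eps_T = 600e-6 * pulse                         # transmitted strain pulse (assumed)

strain_rate = -2.0 * c0 * eps_R / L_spec               # specimen strain rate, 1/s
strain = np.cumsum(strain_rate) * (t[1] - t[0])        # strain by time integration
stress = E_bar * A_bar * eps_T / A_spec                # specimen stress, Pa

print("peak strain rate %.0f 1/s, final strain %.3f, peak stress %.0f MPa"
      % (strain_rate.max(), strain[-1], stress.max() / 1e6))
```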

  14. Advanced simulation model for IPM motor drive with considering phase voltage and stator inductance

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Myung; Park, Hyun-Jong; Lee, Ju

    2016-10-01

    This paper proposes an advanced simulation model of the driving system for Interior Permanent Magnet (IPM) BrushLess Direct Current (BLDC) motors driven by the 120-degree conduction method (two-phase conduction method, TPCM), which is widely used for sensorless control of BLDC motors. BLDC motors can be classified as SPM (Surface-mounted Permanent Magnet) and IPM motors. The simulation model of a driving system with SPM motors is simple because the stator inductance is constant regardless of rotor position, and such models have been proposed in many studies. On the other hand, simulation models for IPM driving systems in graphic-based simulation tools such as Matlab/Simulink have not been proposed. Simulating IPM driving systems with TPCM is complex because the stator inductances of an IPM vary with rotor position, as the permanent magnets are embedded in the rotor. To develop sensorless schemes or improve control performance, development of control algorithms through simulation is essential, and a simulation model that accurately reflects the characteristics of an IPM is required. Therefore, this paper presents an advanced simulation model of the IPM driving system that takes into account the position-dependent inductances unique to IPMs. The proposed simulation model is validated by comparison with experimental and simulation results for an IPM under the TPCM control scheme.

  15. Does a surgical simulator improve resident operative performance of laparoscopic tubal ligation?

    PubMed

    Banks, Erika H; Chudnoff, Scott; Karmin, Ira; Wang, Cuiling; Pardanani, Setul

    2007-11-01

    The purpose of this study was to assess whether a surgical skills simulator laboratory improves resident knowledge and operative performance of laparoscopic tubal ligation. Twenty postgraduate year 1 residents were assigned randomly to either a surgical simulator laboratory on laparoscopic tubal ligation together with apprenticeship teaching in the operating room or to apprenticeship teaching alone. Tests that were given before and after the training assessed basic knowledge. Attending physicians who were blinded to resident randomization status evaluated postgraduate year 1 performance on a laparoscopic tubal ligation in the operating room with 3 validated tools: a task-specific checklist, global rating scale, and pass/fail grade. Postgraduate year 1 residents who were assigned randomly to the surgical simulator laboratory performed significantly better than control subjects on all 3 surgical assessment tools (the checklist, the global score, and the pass/fail analysis) and scored significantly better on the knowledge posttest (all P < .0005). Compared with apprenticeship teaching alone, a surgical simulator laboratory on laparoscopic tubal ligation improved resident knowledge and performance in the operating room.

  16. Design, Realization, and First Validation of an Immersive Web-Based Virtual Patient Simulator for Training Clinical Decisions in Surgery.

    PubMed

    Kleinert, Robert; Heiermann, Nadine; Wahba, Roger; Chang, De-Huan; Hölscher, Arnulf H; Stippel, Dirk L

    2015-01-01

    Immersive patient simulators (IPS) allow an illusionary immersion into a synthetic world where the user can freely navigate through a 3-dimensional environment similar to computer games. Playful learning with IPS allows internalization of medical workflows without harming real patients. Ideally, IPS show high student acceptance and can have a positive effect on knowledge gain. Development of IPS with high technical quality is resource intensive. Therefore most of the "high-fidelity" IPS are commercially driven. Usage of IPS in the daily curriculum is still rare. There is no academic-driven simulator that is freely accessible to every student and combines high immersion grade with a profound amount of medical content. Therefore it was our aim to develop an academic-driven IPS prototype that is free to use and combines a high immersion grade with profound medical content. In addition, a first validation of the prototype was conducted. The conceptual design included definition of the following parameters: amount of curricular content, grade of technical quality, availability, and level of validation. A preliminary validation was done with 25 students. Students' opinion about acceptance was evaluated by a Likert-scale questionnaire. Effect on knowledge gain was determined by testing concordance and predictive validity. A custom-made simulator prototype (Artificial learning interface for clinical education [ALICE]) displays a virtual clinic environment that can be explored from a first-person view similar to a video game. By controlling an avatar, the user navigates through the environment, is able to treat virtual patients, and faces the consequences of different decisions. ALICE showed high student acceptance. There was positive correlation for concordance validity and predictive validity. Simulator usage had a positive effect on reproduction of trained content and declarative knowledge. We successfully developed a university-based IPS prototype (ALICE) with profound medical content. ALICE is a nonprofit simulator, easy to use, and showed high student acceptance; thus it potentially provides an additional tool for supporting student teaching in the daily clinical curriculum. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  17. Numerical simulation and characterization of trapping noise in InGaP-GaAs heterojunctions devices at high injection

    NASA Astrophysics Data System (ADS)

    Nallatamby, Jean-Christophe; Abdelhadi, Khaled; Jacquet, Jean-Claude; Prigent, Michel; Floriot, Didier; Delage, Sylvain; Obregon, Juan

    2013-03-01

    Commercially available simulators present considerable advantages in performing accurate DC, AC and transient simulations of semiconductor devices, including many fundamental and parasitic effects which are not generally taken into account in house-made simulators. Nevertheless, while the TCAD simulators of the public domain we have tested give accurate results for the simulation of diffusion noise, none of the tested simulators simulates trap-assisted generation-recombination (GR) noise accurately. In order to overcome the aforementioned problem we propose a robust solution to accurately simulate GR noise due to traps. It is based on numerical processing of the output data of one of the simulators available in the public-domain, namely SENTAURUS (from Synopsys). We have linked together, through a dedicated Data Access Component (DAC), the deterministic output data available from SENTAURUS and a powerful, customizable post-processing tool developed on the mathematical SCILAB software package. Thus, robust simulations of GR noise in semiconductor devices can be performed by using GR Langevin sources associated to the scalar Green functions responses of the device. Our method takes advantage of the accuracy of the deterministic simulations of electronic devices obtained with SENTAURUS. A comparison between 2-D simulations and measurements of low frequency noise on InGaP-GaAs heterojunctions, at low as well as high injection levels, demonstrates the validity of the proposed simulation tool.

  18. Simulation services and analysis tools at the CCMC to study multi-scale structure and dynamics of Earth's magnetopause

    NASA Astrophysics Data System (ADS)

    Kuznetsova, M. M.; Liu, Y. H.; Rastaetter, L.; Pembroke, A. D.; Chen, L. J.; Hesse, M.; Glocer, A.; Komar, C. M.; Dorelli, J.; Roytershteyn, V.

    2016-12-01

    The presentation will provide an overview of new tools, services and models implemented at the Community Coordinated Modeling Center (CCMC) to facilitate MMS dayside results analysis. We will provide updates on implementation of Particle-in-Cell (PIC) simulations at the CCMC and opportunities for on-line visualization and analysis of results of PIC simulations of asymmetric magnetic reconnection for different guide fields and boundary conditions. Fields, plasma parameters, particle distribution moments as well as particle distribution functions calculated in selected regions of the vicinity of reconnection sites can be analyzed through the web-based interactive visualization system. In addition there are options to request distribution functions in user-selected regions of interest and to fly through simulated magnetic reconnection configurations and a map of distributions to facilitate comparisons with observations. A broad collection of global magnetosphere models hosted at the CCMC provides an opportunity to put MMS observations and local PIC simulations into global context. We recently implemented the RECON-X post-processing tool (Glocer et al, 2016) which allows users to determine the location of the separator surface around closed field lines and between open field lines and solar wind field lines. The tool also finds the separatrix line where the two surfaces touch and the positions of magnetic nulls. The surfaces and the separatrix line can be visualized relative to satellite positions in the dayside magnetosphere using an interactive HTML-5 visualization for each time step processed. To validate global magnetosphere models' capability to simulate locations of dayside magnetosphere boundaries we will analyze the proximity of MMS to simulated separatrix locations for a set of MMS diffusion region crossing events.

  19. Neurolinguistically constrained simulation of sentence comprehension: integrating artificial intelligence and brain theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gigley, H.M.

    1982-01-01

    An artificial intelligence approach to the simulation of neurolinguistically constrained processes in sentence comprehension is developed using control strategies for simulation of cooperative computation in associative networks. The desirability of this control strategy in contrast to ATN and production system strategies is explained. A first pass implementation of HOPE, an artificial intelligence simulation model of sentence comprehension, constrained by studies of aphasic performance, psycholinguistics, neurolinguistics, and linguistic theory is described. Claims that the model could serve as a basis for sentence production simulation and for a model of language acquisition as associative learning are discussed. HOPE is a model that performs in a normal state and includes a lesion simulation facility. HOPE is also a research tool. Its modifiability and use as a tool to investigate hypothesized causes of degradation in comprehension performance by aphasic patients are described. Issues of using behavioral constraints in modelling and obtaining appropriate data for simulated process modelling are discussed. Finally, problems of validation of the simulation results are raised; and issues of how to interpret clinical results to define the evolution of the model are discussed. Conclusions with respect to the feasibility of artificial intelligence simulation process modelling are discussed based on the current state of research.

  20. A scoping review of assessment tools for laparoscopic suturing.

    PubMed

    Bilgic, Elif; Endo, Satoshi; Lebedeva, Ekaterina; Takao, Madoka; McKendy, Katherine M; Watanabe, Yusuke; Feldman, Liane S; Vassiliou, Melina C

    2018-05-03

    A needs assessment identified a gap in teaching and assessment of laparoscopic suturing (LS) skills. The purpose of this review is to identify assessment tools that were used to assess LS skills, to evaluate validity evidence available, and to provide guidance for selecting the right assessment tool for specific assessment conditions. Bibliographic databases were searched till April 2017. Full-text articles were included if they reported on assessment tools used in the operating room/simulation to (1) assess procedures that require LS or (2) specifically assess LS skills. Forty-two tools were identified, of which 26 were used for assessing LS skills specifically and 26 for procedures that require LS. Tools had the most evidence in internal structure and relationship to other variables, and least in consequences. Through identification and evaluation of assessment tools, the results of this review could be used as a guideline when implementing assessment tools into training programs.

  1. A validation procedure for a LADAR system radiometric simulation model

    NASA Astrophysics Data System (ADS)

    Leishman, Brad; Budge, Scott; Pack, Robert

    2007-04-01

    The USU LadarSIM software package is a ladar system engineering tool that has recently been enhanced to include modeling of the radiometry of ladar beam footprints. This paper will discuss our validation of the radiometric model and present a practical approach to future validation work. In order to validate complicated and interrelated factors affecting radiometry, a systematic approach had to be developed. Data for known parameters were first gathered, then unknown parameters of the system were determined from simulation test scenarios. This was done in a way that isolated as many unknown variables as possible, then built on the previously obtained results. First, the appropriate voltage threshold levels of the discrimination electronics were set by analyzing the number of false alarms seen in actual data sets. With this threshold set, the system noise was then adjusted to achieve the appropriate number of dropouts. Once a suitable noise level was found, the range errors of the simulated and actual data sets were compared and studied. Predicted errors in range measurements were analyzed using two methods: first by examining the range error of a surface with known reflectivity, and second by examining the range errors for specific detectors with known responsivities. This provided insight into the discrimination method and receiver electronics used in the actual system.
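
    The threshold-setting step described above can be illustrated with a toy calculation for Gaussian receiver noise: choose the discrimination threshold that yields a target per-sample false-alarm probability, then evaluate the dropout (missed-detection) probability for an assumed return amplitude. The numbers are invented and are not LadarSIM parameters.

```python
# Toy threshold calibration for a Gaussian-noise receiver: set the threshold from a
# target false-alarm rate, then evaluate dropouts for an assumed return amplitude.
from scipy.stats import norm

sigma_noise = 0.8e-3        # V, receiver noise standard deviation (assumed)
p_fa_target = 1e-4          # desired per-sample false-alarm probability (assumed)
signal_peak = 4.0e-3        # V, mean return amplitude (assumed)

threshold = sigma_noise * norm.isf(p_fa_target)        # noise-only exceedance = p_fa_target
p_dropout = norm.cdf(threshold, loc=signal_peak, scale=sigma_noise)  # return misses threshold

print("threshold = %.2f mV" % (threshold * 1e3))
print("false-alarm prob = %.1e, dropout prob = %.3f" % (p_fa_target, p_dropout))
```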

  2. Surgical simulators in cataract surgery training.

    PubMed

    Sikder, Shameema; Tuwairqi, Khaled; Al-Kahtani, Eman; Myers, William G; Banerjee, Pat

    2014-02-01

    Virtual simulators have been widely implemented in medical and surgical training, including ophthalmology. The increasing number of published articles in this field mandates a review of the available results to assess current technology and explore future opportunities. A PubMed search was conducted and a total of 10 articles were reviewed. Virtual simulators have shown construct validity in many modules, successfully differentiating user experience levels during simulated phacoemulsification surgery. Simulators have also shown improvements in wet-lab performance. The implementation of simulators in the residency training has been associated with a decrease in cataract surgery complication rates. Virtual reality simulators are an effective tool in measuring performance and differentiating trainee skill level. Additionally, they may be useful in improving surgical skill and patient outcomes in cataract surgery. Future opportunities rely on taking advantage of technical improvements in simulators for education and research.

  3. A Deep Space Orbit Determination Software: Overview and Event Prediction Capability

    NASA Astrophysics Data System (ADS)

    Kim, Youngkwang; Park, Sang-Young; Lee, Eunji; Kim, Minsik

    2017-06-01

    This paper presents an overview of deep space orbit determination software (DSODS), as well as validation and verification results on its event prediction capabilities. DSODS was developed in the MATLAB object-oriented programming environment to support the Korea Pathfinder Lunar Orbiter (KPLO) mission. DSODS has three major capabilities: celestial event prediction for spacecraft, orbit determination with deep space network (DSN) tracking data, and DSN tracking data simulation. To achieve its functionality requirements, DSODS consists of four modules: orbit propagation (OP), event prediction (EP), data simulation (DS), and orbit determination (OD) modules. This paper explains the highest-level data flows between modules in event prediction, orbit determination, and tracking data simulation processes. Furthermore, to address the event prediction capability of DSODS, this paper introduces OP and EP modules. The role of the OP module is to handle time and coordinate system conversions, to propagate spacecraft trajectories, and to handle the ephemerides of spacecraft and celestial bodies. Currently, the OP module utilizes the General Mission Analysis Tool (GMAT) as a third-party software component for high-fidelity deep space propagation, as well as time and coordinate system conversions. The role of the EP module is to predict celestial events, including eclipses, and ground station visibilities, and this paper presents the functionality requirements of the EP module. The validation and verification results show that, for most cases, event prediction errors were less than 10 milliseconds when compared with flight-proven mission analysis tools such as GMAT and Systems Tool Kit (STK). Thus, we conclude that DSODS is capable of predicting events for the KPLO in real mission applications.

  4. TEAM-HF Cost-Effectiveness Model: A Web-Based Program Designed to Evaluate the Cost-Effectiveness of Disease Management Programs in Heart Failure

    PubMed Central

    Reed, Shelby D.; Neilson, Matthew P.; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H.; Polsky, Daniel E.; Graham, Felicia L.; Bowers, Margaret T.; Paul, Sara C.; Granger, Bradi B.; Schulman, Kevin A.; Whellan, David J.; Riegel, Barbara; Levy, Wayne C.

    2015-01-01

    Background: Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. Methods: We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics, use of evidence-based medications, and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model (SHFM). Projections of resource use and quality of life are modeled using relationships with time-varying SHFM scores. The model can be used to evaluate parallel-group and single-cohort designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. Results: The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. Conclusion: The TEAM-HF Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. PMID:26542504
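
    The paired-cohort mechanics can be sketched generically: draw matched virtual patients, assign assumed cost and quality-adjusted-life-year distributions to the usual-care and disease-management arms, and report the incremental cost-effectiveness ratio. Every distribution and effect size below is made up to show the calculation; this is not the TEAM-HF model or the SHFM.

```python
# Generic paired-cohort cost-effectiveness sketch (illustrative assumptions only,
# not the TEAM-HF model): 10,000 virtual patient pairs -> mean ICER.
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 10_000

# Usual-care arm: lifetime discounted costs and QALYs (assumed distributions).
cost_uc = rng.lognormal(mean=np.log(40_000), sigma=0.5, size=n_pairs)
qaly_uc = rng.normal(4.0, 1.0, size=n_pairs).clip(min=0.1)

# Disease-management arm: program fee plus assumed relative effects on cost and QALYs.
cost_dm = cost_uc * rng.normal(0.95, 0.05, size=n_pairs) + 3_000   # program cost added
qaly_dm = qaly_uc * rng.normal(1.06, 0.03, size=n_pairs)

d_cost, d_qaly = cost_dm - cost_uc, qaly_dm - qaly_uc
icer = d_cost.mean() / d_qaly.mean()
print("mean incremental cost  = $%.0f" % d_cost.mean())
print("mean incremental QALYs = %.3f" % d_qaly.mean())
print("ICER = $%.0f per QALY" % icer)
```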

  5. Comparison of simulated and measured spectra from an X-ray tube for the energies between 20 and 35 keV

    NASA Astrophysics Data System (ADS)

    Yücel, M.; Emirhan, E.; Bayrak, A.; Ozben, C. S.; Yücel, E. Barlas

    2015-11-01

    Design and production of a simple and low-cost X-ray imaging system that can be used for light industrial applications was targeted in the Nuclear Physics Laboratory of Istanbul Technical University. In this study, production, transmission and detection of X-rays were simulated for the proposed imaging device. An OX/70-P dental tube was used, and X-ray spectra simulated by Geant4 were validated by comparison with X-ray spectra measured between 20 and 35 keV. Relative detection efficiency of the detector was also determined to confirm the physics processes used in the simulations. Various time-optimization techniques were applied to reduce the simulation time.

  6. Computational simulations of supersonic magnetohydrodynamic flow control, power and propulsion systems

    NASA Astrophysics Data System (ADS)

    Wan, Tian

    This work is motivated by the lack of a fully coupled computational tool that successfully solves the turbulent, chemically reacting Navier-Stokes equations, the electron energy conservation equation and the electric current Poisson equation. In the present work, the abovementioned equations are solved in a fully coupled manner using fully implicit parallel GMRES methods. The system of Navier-Stokes equations is solved using a GMRES method with combined Schwarz and ILU(0) preconditioners. The electron energy equation and the electric current Poisson equation are solved using a GMRES method with combined SOR and Jacobi preconditioners. The fully coupled method has also been implemented successfully in an unstructured solver, US3D, and convergence test results were presented. This new method is shown to be two to five times faster than the original DPLR method. The Poisson solver is validated with analytic test problems. Then, four problems are selected; two of them are computed to explore the possibility of onboard MHD control and power generation, and the other two are simulations of experiments. First, the possibility of onboard reentry shock control by a magnetic field is explored. As part of a previous project, MHD power generation onboard a re-entry vehicle is also simulated. Then, the MHD acceleration experiments conducted at NASA Ames research center are simulated. Lastly, the MHD power generation experiments known as the HVEPS project are simulated. For code validation, the scramjet experiments at the University of Queensland are simulated first. The generator section of the HVEPS test facility is then computed. The main conclusion is that the computational tool is accurate for different types of problems and flow conditions, and its accuracy and efficiency are necessary when the flow complexity increases.
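
    A small SciPy sketch of the preconditioned-GMRES pattern described above: an incomplete-LU factorization wrapped as a LinearOperator and passed to GMRES. It uses a generic 2-D Poisson test matrix rather than the coupled MHD system, SciPy's spilu is a threshold-based incomplete LU rather than a strict ILU(0), and no Schwarz layer is included.

```python
# Preconditioned GMRES sketch with an incomplete-LU preconditioner (SciPy).
# Generic 2-D Poisson test matrix, not the coupled MHD equations of the thesis.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                                    # 50 x 50 grid -> 2500 unknowns
lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(lap1d, lap1d).tocsc()      # 2-D Laplacian (CSC format for spilu)
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)      # approximate LU factorization
M = spla.LinearOperator(A.shape, matvec=ilu.solve)      # preconditioner M ~ A^-1

iters = {"count": 0}
def cb(_):
    iters["count"] += 1                   # count GMRES iterations via the callback

x, info = spla.gmres(A, b, M=M, restart=50, callback=cb)
print("converged" if info == 0 else "info=%d" % info,
      "after", iters["count"], "iterations, residual =",
      np.linalg.norm(b - A @ x))
```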

  7. Extracting atomic numbers and electron densities from a dual source dual energy CT scanner: experiments and a simulation model.

    PubMed

    Landry, Guillaume; Reniers, Brigitte; Granton, Patrick Vincent; van Rooijen, Bart; Beaulieu, Luc; Wildberger, Joachim E; Verhaegen, Frank

    2011-09-01

    Dual energy CT (DECT) imaging can provide both the electron density ρ_e and effective atomic number Z_eff, thus facilitating tissue type identification. This paper investigates the accuracy of a dual source DECT scanner by means of measurements and simulations. Previous simulation work suggested improved Monte Carlo dose calculation accuracy when compared to single energy CT for low energy photon brachytherapy, but lacked validation. As such, we aim to validate our DECT simulation model in this work. A cylindrical phantom containing tissue mimicking inserts was scanned with a second generation dual source scanner (SOMATOM Definition FLASH) to obtain Z_eff and ρ_e. A model of the scanner was designed in ImaSim, a CT simulation program, and was used to simulate the experiment. Accuracy of measured Z_eff (labelled Z) was found to vary from -10% to 10% from low to high Z tissue substitutes while the accuracy on ρ_e from DECT was about 2.5%. Our simulation reproduced the experiments within ±5% for both Z and ρ_e. A clinical DECT scanner was able to extract Z and ρ_e of tissue substitutes. Our simulation tool replicates the experiments within a reasonable accuracy. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
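
    One common calibration-style route to these quantities, sketched below, fits electron density to a linear combination of the low- and high-kVp CT numbers of known inserts and recovers the effective atomic number by inverting a fitted power law in the ratio of low- to high-energy attenuation (relative to water). The calibration values and exponent are invented for illustration and are not those of the scanner, the phantom or ImaSim.

```python
# Calibration-style DECT extraction sketch (invented calibration data, not the
# SOMATOM/ImaSim values): rho_e from a linear blend of CT numbers, Z_eff from a
# fitted power law in the low/high attenuation ratio.
import numpy as np

# Hypothetical calibration inserts: (HU_low, HU_high, known rho_e, known Z_eff)
cal = np.array([
    [-90.0,  -75.0, 0.95,  6.2],   # adipose-like
    [ 35.0,   38.0, 1.04,  7.5],   # muscle-like
    [230.0,  150.0, 1.10, 11.0],   # inner-bone-like
    [900.0,  560.0, 1.45, 13.5],   # cortical-bone-like
])
HU_L, HU_H, rho_e_ref, Z_ref = cal.T

# rho_e: least-squares fit of rho_e ~ a*HU_L + b*HU_H + c
A = np.column_stack([HU_L, HU_H, np.ones_like(HU_L)])
a, b, c = np.linalg.lstsq(A, rho_e_ref, rcond=None)[0]

# Z_eff: fit the water-relative attenuation ratio against Z^q (assumed fixed exponent)
mu_ratio = (HU_L / 1000.0 + 1.0) / (HU_H / 1000.0 + 1.0)
q_exp = 3.3
p, r = np.polyfit(Z_ref ** q_exp, mu_ratio, 1)

def dect_decompose(hu_low, hu_high):
    rho_e = a * hu_low + b * hu_high + c
    ratio = (hu_low / 1000.0 + 1.0) / (hu_high / 1000.0 + 1.0)
    z_eff = ((ratio - r) / p) ** (1.0 / q_exp)
    return rho_e, z_eff

print("unknown insert (HU 120/95): rho_e=%.3f, Z_eff=%.2f" % dect_decompose(120.0, 95.0))
```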

  8. Numerical Investigation of Plasma Detachment in Magnetic Nozzle Experiments

    NASA Technical Reports Server (NTRS)

    Sankaran, Kamesh; Polzin, Kurt A.

    2008-01-01

    At present there exists no generally accepted theoretical model that provides a consistent physical explanation of plasma detachment from an externally-imposed magnetic nozzle. To make progress towards that end, simulation of plasma flow in the magnetic nozzle of an arcjet experiment is performed using a multidimensional numerical simulation tool that includes theoretical models of the various dispersive and dissipative processes present in the plasma. This is an extension of the simulation tool employed in previous work by Sankaran et al. The aim is to compare the computational results with various proposed magnetic nozzle detachment theories to develop an understanding of the physical mechanisms that cause detachment. An applied magnetic field topology is obtained using a magnetostatic field solver (see Fig. I), and this field is superimposed on the time-dependent magnetic field induced in the plasma to provide a self-consistent field description. The applied magnetic field and model geometry match those found in experiments by Kuriki and Okada. This geometry is modeled because there is a substantial amount of experimental data that can be compared to the computational results, allowing for validation of the model. In addition, comparison of the simulation results with the experimentally obtained plasma parameters will provide insight into the mechanisms that lead to plasma detachment, revealing how they scale with different input parameters. Further studies will focus on modeling literature experiments both for the purpose of additional code validation and to extract physical insight regarding the mechanisms driving detachment.

  9. Analysis of SSME HPOTP rotordynamics subsynchronous whirl

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The causes and remedies of vibration and subsynchronous whirl problems encountered in the Shuttle Main Engine (SSME) turbomachinery are analyzed. Because the nonlinear and linearized models of the turbopumps play such an important role in the analysis process, the main emphasis is concentrated on the verification and improvement of these tools. It has been the goal of our work to validate the equations of motion used in the models, including the assumptions upon which they are based. Verification of the SSME rotordynamics simulation and the developed enhancements is emphasized.

  10. Grid Modernization Laboratory Consortium - Testing and Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, Benjamin; Skare, Paul; Pratt, Rob

    This paper highlights some of the unique testing capabilities and projects being performed at several national laboratories as part of the U. S. Department of Energy Grid Modernization Laboratory Consortium. As part of this effort, the Grid Modernization Laboratory Consortium Testing Network is being developed to accelerate grid modernization by enabling access to a comprehensive testing infrastructure and creating a repository of validated models and simulation tools that will be publicly available. This work is key to accelerating the development, validation, standardization, adoption, and deployment of new grid technologies to help meet U. S. energy goals.

  11. SIMSAT: An object oriented architecture for real-time satellite simulation

    NASA Technical Reports Server (NTRS)

    Williams, Adam P.

    1993-01-01

    Real-time satellite simulators are vital tools in the support of satellite missions. They are used in the testing of ground control systems, the training of operators, the validation of operational procedures, and the development of contingency plans. The simulators must provide high-fidelity modeling of the satellite, which requires detailed system information, much of which is not available until relatively near launch. The short time-scales and resulting high productivity required of such simulator developments culminates in the need for a reusable infrastructure which can be used as a basis for each simulator. This paper describes a major new simulation infrastructure package, the Software Infrastructure for Modelling Satellites (SIMSAT). It outlines the object oriented design methodology used, describes the resulting design, and discusses the advantages and disadvantages experienced in applying the methodology.

  12. A health economic model to determine the long-term costs and clinical outcomes of raising low HDL-cholesterol in the prevention of coronary heart disease.

    PubMed

    Roze, S; Liens, D; Palmer, A; Berger, W; Tucker, D; Renaudin, C

    2006-12-01

    The aim of this study was to describe a health economic model developed to project lifetime clinical and cost outcomes of lipid-modifying interventions in patients not reaching target lipid levels and to assess the validity of the model. The internet-based, computer simulation model is made up of two decision analytic sub-models, the first utilizing Monte Carlo simulation, and the second applying Markov modeling techniques. Monte Carlo simulation generates a baseline cohort for long-term simulation by assigning an individual lipid profile to each patient, and applying the treatment effects of interventions under investigation. The Markov model then estimates the long-term clinical (coronary heart disease events, life expectancy, and quality-adjusted life expectancy) and cost outcomes up to a lifetime horizon, based on risk equations from the Framingham study. Internal and external validation analyses were performed. The results of the model validation analyses, plotted against corresponding real-life values from Framingham, 4S, AFCAPS/TexCAPS, and a meta-analysis by Gordon et al., showed that the majority of values were close to the y = x line, which indicates a perfect fit. The R2 value was 0.9575 and the gradient of the regression line was 0.9329, both very close to the perfect fit (= 1). Validation analyses of the computer simulation model suggest the model is able to recreate the outcomes from published clinical studies and would be a valuable tool for the evaluation of new and existing therapy options for patients with persistent dyslipidemia.
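
    The external-validation statistics quoted above (regression gradient and R² against the perfect-fit line) are easy to reproduce for any model: regress the simulated outcomes on the published ones and check how close the slope and R² come to 1. The sketch below uses made-up predicted/observed pairs purely to show the calculation.

```python
# Validation-plot statistics: slope and R^2 of simulated vs. observed outcomes
# against the perfect-fit line y = x (made-up example values, not the study data).
import numpy as np

observed  = np.array([0.08, 0.12, 0.15, 0.22, 0.30, 0.41])   # trial event rates (assumed)
simulated = np.array([0.09, 0.11, 0.16, 0.21, 0.32, 0.39])   # model predictions (assumed)

slope, intercept = np.polyfit(observed, simulated, 1)
ss_res = np.sum((simulated - (slope * observed + intercept)) ** 2)
ss_tot = np.sum((simulated - simulated.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print("regression gradient = %.3f (perfect fit = 1)" % slope)
print("R^2                 = %.4f (perfect fit = 1)" % r2)
```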

  13. Development of Advanced Verification and Validation Procedures and Tools for the Certification of Learning Systems in Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola

    2005-01-01

    Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.

  14. Portable acuity screening for any school: validation of patched HOTV with amblyopic patients and Bangerter normals.

    PubMed

    Tsao Wu, Maya; Armitage, M Diane; Trujillo, Claire; Trujillo, Anna; Arnold, Laura E; Tsao Wu, Lauren; Arnold, Robert W

    2017-12-04

    We needed to validate and calibrate our portable acuity screening tools so amblyopia could be detected quickly and effectively at school entry. Spiral-bound flip cards and a downloadable PDF surround-HOTV acuity test box with critical lines were combined with a matching card. Amblyopic patients performed critical-line and then threshold acuity, which was then compared to patched E-ETDRS acuity. Five normal subjects wore Bangerter foil goggles to simulate blur for comparative validation. The 31 treated amblyopic eyes showed logMAR HOTV = 0.97 (logMAR E-ETDRS) - 0.04, r2 = 0.88. All but two (6%) differed by less than 2 lines. The five blurred normal subjects showed logMAR HOTV = 1.09 (logMAR E-ETDRS) + 0.15, r2 = 0.63. The critical-line test box was 98% efficient at screening within one line of 20/40. These tools reliably detected acuity in treated amblyopic patients and Bangerter-blurred normal subjects. These free and affordable tools provide sensitive screening for amblyopia in children from public, private and home schools. Changing the "pass" criterion to 4 out of 5 would improve sensitivity with somewhat slower testing for all students.

  15. Determining skeletal muscle architecture with Laplacian simulations: a comparison with diffusion tensor imaging.

    PubMed

    Handsfield, Geoffrey G; Bolsterlee, Bart; Inouye, Joshua M; Herbert, Robert D; Besier, Thor F; Fernandez, Justin W

    2017-12-01

    Determination of skeletal muscle architecture is important for accurately modeling muscle behavior. Current methods for 3D muscle architecture determination can be costly and time-consuming, making them prohibitive for clinical or modeling applications. Computational approaches such as Laplacian flow simulations can estimate muscle fascicle orientation based on muscle shape and aponeurosis location. The accuracy of this approach is unknown, however, since it has not been validated against other standards for muscle architecture determination. In this study, muscle architectures from the Laplacian approach were compared to those determined from diffusion tensor imaging in eight adult medial gastrocnemius muscles. The datasets were subdivided into training and validation sets, and computational fluid dynamics software was used to conduct Laplacian simulations. In training sets, inputs of muscle geometry, aponeurosis location, and geometric flow guides resulted in good agreement between methods. Application of the method to validation sets showed no significant differences in pennation angle (mean difference [Formula: see text]) or fascicle length (mean difference 0.9 mm). Laplacian simulation was thus effective at predicting gastrocnemius muscle architectures in healthy volunteers using imaging-derived muscle shape and aponeurosis locations. This method may serve as a tool for determining muscle architecture in silico and as a complement to other approaches.
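
    As a rough illustration of the Laplacian-flow idea (not the authors' CFD implementation), the sketch below solves Laplace's equation between two aponeurosis boundaries on a toy 2D grid and takes the normalized gradient as the local fascicle direction; the geometry and boundary values are invented for demonstration.

```python
import numpy as np

# Solve Laplace's equation on a toy 2D muscle cross-section by Jacobi iteration.
# phi = 1 on the superficial aponeurosis (top row), phi = 0 on the deep
# aponeurosis (bottom row); the normalized gradient of phi approximates the
# local fascicle direction.
ny, nx = 50, 100
phi = np.zeros((ny, nx))
phi[0, :] = 1.0        # superficial aponeurosis
phi[-1, :] = 0.0       # deep aponeurosis

for _ in range(5000):
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])
    # Re-impose the Dirichlet boundaries after each sweep.
    phi[0, :], phi[-1, :] = 1.0, 0.0

gy, gx = np.gradient(phi)                 # gradients along rows (y) and columns (x)
norm = np.hypot(gx, gy) + 1e-12
fiber_dir = np.stack([gx / norm, gy / norm], axis=-1)  # unit direction per cell

# Pennation angle relative to the aponeurosis plane (horizontal), in degrees.
pennation = np.degrees(np.arctan2(np.abs(gy), np.abs(gx) + 1e-12))
print("Mean pennation angle (deg):", float(pennation[1:-1, 1:-1].mean()))
```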

  16. IMPACT: a generic tool for modelling and simulating public health policy.

    PubMed

    Ainsworth, J D; Carruthers, E; Couch, P; Green, N; O'Flaherty, M; Sperrin, M; Williams, R; Asghar, Z; Capewell, S; Buchan, I E

    2011-01-01

    Populations are under-served by local health policies and management of resources. This partly reflects a lack of realistically complex models to enable appraisal of a wide range of potential options. Rising computing power coupled with advances in machine learning and healthcare information now enables such models to be constructed and executed. However, such models are not generally accessible to public health practitioners who often lack the requisite technical knowledge or skills. To design and develop a system for creating, executing and analysing the results of simulated public health and healthcare policy interventions, in ways that are accessible and usable by modellers and policy-makers. The system requirements were captured and analysed in parallel with the statistical method development for the simulation engine. From the resulting software requirement specification the system architecture was designed, implemented and tested. A model for Coronary Heart Disease (CHD) was created and validated against empirical data. The system was successfully used to create and validate the CHD model. The initial validation results show concordance between the simulation results and the empirical data. We have demonstrated the ability to connect health policy-modellers and policy-makers in a unified system, thereby making population health models easier to share, maintain, reuse and deploy.

  17. Computational assessment of hemodynamics-based diagnostic tools using a database of virtual subjects: Application to three case studies.

    PubMed

    Willemet, Marie; Vennin, Samuel; Alastruey, Jordi

    2016-12-08

    Many physiological indexes and algorithms based on pulse wave analysis have been suggested in order to better assess cardiovascular function. Because these tools are often computed from in-vivo hemodynamic measurements, their validation is time-consuming, challenging, and biased by measurement errors. Recently, a new methodology has been suggested to assess theoretically these computed tools: a database of virtual subjects generated using numerical 1D-0D modeling of arterial hemodynamics. The generated set of simulations encloses a wide selection of healthy cases that could be encountered in a clinical study. We applied this new methodology to three different case studies that demonstrate the potential of our new tool, and illustrated each of them with a clinically relevant example: (i) we assessed the accuracy of indexes estimating pulse wave velocity; (ii) we validated and refined an algorithm that computes central blood pressure; and (iii) we investigated theoretical mechanisms behind the augmentation index. Our database of virtual subjects is a new tool to assist the clinician: it provides insight into the physical mechanisms underlying the correlations observed in clinical practice. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
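
    The specific indexes and algorithms assessed are not detailed in the abstract; as one example of the kind of pulse-wave-derived quantity such a virtual database can test, the sketch below estimates pulse wave velocity with a simple foot-to-foot transit-time calculation on synthetic waveforms (the signal shapes, path length, and max-upstroke foot detection are illustrative assumptions).

```python
import numpy as np

def foot_to_foot_pwv(p_prox, p_dist, fs, path_length_m):
    """Estimate pulse wave velocity from proximal and distal pressure waveforms.

    The 'foot' of each wave is approximated here by the time of maximum
    upstroke (max dP/dt); PWV is the path length divided by the transit time.
    """
    foot_prox = np.argmax(np.diff(p_prox)) / fs
    foot_dist = np.argmax(np.diff(p_dist)) / fs
    return path_length_m / (foot_dist - foot_prox)

# Synthetic example: a distal waveform delayed by 50 ms over a 0.35 m path.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
prox = 80 + 40 * np.clip(np.sin(2 * np.pi * 1.0 * t), 0, None) ** 2
dist = np.roll(prox, int(0.05 * fs))
print("PWV (m/s):", round(foot_to_foot_pwv(prox, dist, fs, 0.35), 2))  # ~7.0
```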

  18. The Imperial Paediatric Emergency Training Toolkit (IPETT) for use in paediatric emergency training: development and evaluation of feasibility and validity.

    PubMed

    Lambden, Simon; DeMunter, Claudine; Dowson, Anne; Cooper, Mehrengise; Gautama, Sanjay; Sevdalis, Nick

    2013-06-01

    To develop and test the feasibility, reliability, and validity of a practical toolkit for the assessment and feedback of skills required to manage paediatric emergencies in critical care settings. The Imperial Paediatric Emergency Training Toolkit (IPETT) was developed based on the current evidence base and expert input. IPETT assesses both technical and non-technical skills. The technical component covers skills in the areas of clinical assessment, airway and breathing, cardiovascular, and drugs. The non-technical component is based on the validated NOTECHS tool and covers communication and interaction, cooperation and team skills, leadership and managerial skills, and decision-making. The reliability (internal consistency), content validity (inter-correlations between different skills) and concurrent validity (correlations between global technical and non-technical scores) of IPETT were prospectively evaluated in 45 simulated paediatric crises carried out in a PICU with anaesthetic and paediatric trainees (N=52). Non-parametric analyses were carried out. Significance was set at P<0.05. Cronbach alpha reliability coefficients were overall acceptable for the technical (alpha range=0.638-0.810) and good for the non-technical (alpha range=0.701-0.899) component of IPETT. The median inter-skill correlation was rho=0.564 and rho=0.549 for the technical and non-technical components, respectively. These indicate good content validity, as the skills were inter-related but not redundant. We also demonstrate a correlation between the global technical and non-technical scores (rho=0.471); all Ps<0.05 during the assessments. IPETT offers a psychometrically viable and feasible-to-use tool in the context of paediatric emergencies training. This study shows that assessment of technical and non-technical skills in combination may offer a more clinically relevant model for training in paediatric emergencies. Further validation should aim to demonstrate skill retention over time and skill transfer from simulation-based training to real emergencies. Copyright © 2013. Published by Elsevier Ireland Ltd.
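
    The item-level ratings are not published in the abstract, so the sketch below only illustrates, on synthetic Likert-style data, how the reported statistics (Cronbach alpha for internal consistency and Spearman rho for concurrent validity) are typically computed; it does not reproduce the IPETT results.

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_subjects, n_items) matrix of Likert-style ratings."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
# Synthetic ratings for 45 simulated crises, 4 items per component (1-5 Likert).
latent = rng.normal(size=(45, 1))
technical = np.clip(np.round(3 + latent + 0.7 * rng.normal(size=(45, 4))), 1, 5)
nontechnical = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(45, 4))), 1, 5)

print("alpha (technical items):", round(cronbach_alpha(technical), 3))
rho, p = spearmanr(technical.sum(axis=1), nontechnical.sum(axis=1))
print("Spearman rho (global technical vs non-technical):", round(rho, 3), "p =", round(p, 4))
```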

  19. Validating the Performance of the FHWA Work Zone Model Version 1.0: A Case Study Along I-91 in Springfield, Massachusetts

    DOT National Transportation Integrated Search

    2017-08-01

    Central to the effective design of work zones is being able to understand how drivers behave as they approach and enter a work zone area. States use simulation tools in modeling freeway work zones to predict work zone impacts and to select optimal de...

  20. Simulating Soil Organic Matter with CQESTR (v.2.0): Model Description and Validation against Long-term Experiments across North America

    USDA-ARS?s Scientific Manuscript database

    Soil carbon (C) models are important tools for examining complex interactions between climate, crop and soil management practices, and to evaluate the long-term effects of management practices on C-storage potential in soils. CQESTR is a process-based carbon balance model that relates crop residue a...

  1. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis.

    PubMed

    Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity.
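
    A full multistate SEM needs dedicated software, so the sketch below illustrates only one of the simulated threats on synthetic data: how range restriction attenuates an observed pre-test/post-test correlation. The data-generating values are arbitrary and the code is not the authors' simulation design.

```python
import numpy as np

rng = np.random.default_rng(123)
n = 100_000

# True latent trait and two measurement occasions (pre-test, post-test),
# loosely in the spirit of a single-trait-multistate setup.
trait = rng.normal(size=n)
pre = trait + 0.6 * rng.normal(size=n)
post = trait + 0.6 * rng.normal(size=n)

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

print("Unrestricted pre-post correlation:", round(corr(pre, post), 3))

# Threat to validity: range restriction (only units above the pre-test median
# are retained), which attenuates the observed correlation.
keep = pre > np.median(pre)
print("Range-restricted correlation:     ", round(corr(pre[keep], post[keep]), 3))
```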

  2. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis

    PubMed Central

    Holgado-Tello, Fco. P.; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A.

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity. PMID:27378991

  3. Development of a proficiency-based virtual reality simulation training curriculum for laparoscopic appendicectomy.

    PubMed

    Sirimanna, Pramudith; Gladman, Marc A

    2017-10-01

    Proficiency-based virtual reality (VR) training curricula improve intraoperative performance, but have not been developed for laparoscopic appendicectomy (LA). This study aimed to develop an evidence-based training curriculum for LA. A total of 10 experienced (>50 LAs), eight intermediate (10-30 LAs) and 20 inexperienced (<10 LAs) operators performed guided and unguided LA tasks on a high-fidelity VR simulator using internationally relevant techniques. The ability to differentiate levels of experience (construct validity) was measured using simulator-derived metrics. Learning curves were analysed. Proficiency benchmarks were defined by the performance of the experienced group. Intermediate and experienced participants completed a questionnaire to evaluate the realism (face validity) and relevance (content validity). Of 18 surgeons, 16 (89%) considered the VR model to be visually realistic and 17 (95%) believed that it was representative of actual practice. All 'guided' modules demonstrated construct validity (P < 0.05), with learning curves that plateaued between sessions 6 and 9 (P < 0.01). When comparing inexperienced to intermediates to experienced, the 'unguided' LA module demonstrated construct validity for economy of motion (5.00 versus 7.17 versus 7.84, respectively; P < 0.01) and task time (864.5 s versus 477.2 s versus 352.1 s, respectively, P < 0.01). Construct validity was also confirmed for number of movements, path length and idle time. Validated modules were used for curriculum construction, with proficiency benchmarks used as performance goals. A VR LA model was realistic and representative of actual practice and was validated as a training and assessment tool. Consequently, the first evidence-based internationally applicable training curriculum for LA was constructed, which facilitates skill acquisition to proficiency. © 2017 Royal Australasian College of Surgeons.

  4. Review of hardware-in-the-loop simulation and its prospects in the automotive area

    NASA Astrophysics Data System (ADS)

    Fathy, Hosam K.; Filipi, Zoran S.; Hagena, Jonathan; Stein, Jeffrey L.

    2006-05-01

    Hardware-in-the-loop (HIL) simulation is rapidly evolving from a control prototyping tool to a system modeling, simulation, and synthesis paradigm synergistically combining many advantages of both physical and virtual prototyping. This paper provides a brief overview of the key enablers and numerous applications of HIL simulation, focusing on its metamorphosis from a control validation tool into a system development paradigm. It then describes a state-of-the-art engine-in-the-loop (EIL) simulation facility that highlights the use of HIL simulation for the system-level experimental evaluation of powertrain interactions and development of strategies for clean and efficient propulsion. The facility comprises a real diesel engine coupled to accurate real-time driver, driveline, and vehicle models through a highly responsive dynamometer. This enables the verification of both performance and fuel economy predictions of different conventional and hybrid powertrains. Furthermore, the facility can both replicate the highly dynamic interactions occurring within a real powertrain and measure their influence on transient emissions and visual signature through state-of-the-art instruments. The viability of this facility for integrated powertrain system development is demonstrated through a case study exploring the development of advanced High Mobility Multipurpose Wheeled Vehicle (HMMWV) powertrains.

  5. Integrated Turbine-Based Combined Cycle Dynamic Simulation Model

    NASA Technical Reports Server (NTRS)

    Haid, Daniel A.; Gamble, Eric J.

    2011-01-01

    A Turbine-Based Combined Cycle (TBCC) dynamic simulation model has been developed to demonstrate all modes of operation, including mode transition, for a turbine-based combined cycle propulsion system. The High Mach Transient Engine Cycle Code (HiTECC) is a highly integrated tool comprised of modules for modeling each of the TBCC systems whose interactions and controllability affect the TBCC propulsion system thrust and operability during its modes of operation. By structuring the simulation modeling tools around the major TBCC functional modes of operation (Dry Turbojet, Afterburning Turbojet, Transition, and Dual Mode Scramjet) the TBCC mode transition and all necessary intermediate events over its entire mission may be developed, modeled, and validated. The reported work details the use of the completed model to simulate a TBCC propulsion system as it accelerates from Mach 2.5, through mode transition, to Mach 7. The completion of this model and its subsequent use to simulate TBCC mode transition significantly extends the state-of-the-art for all TBCC modes of operation by providing a numerical simulation of the systems, interactions, and transient responses affecting the ability of the propulsion system to transition from turbine-based to ramjet/scramjet-based propulsion while maintaining constant thrust.

  6. A Comparison of Robotic Simulation Performance on Basic Virtual Reality Skills: Simulator Subjective Versus Objective Assessment Tools.

    PubMed

    Dubin, Ariel K; Smith, Roger; Julian, Danielle; Tanaka, Alyssa; Mattingly, Patricia

    To answer the question of whether there is a difference between robotic virtual reality simulator performance assessment and validated human reviewers. Current surgical education relies heavily on simulation. Several assessment tools are available to the trainee, including the actual robotic simulator assessment metrics and the Global Evaluative Assessment of Robotic Skills (GEARS) metrics, both of which have been independently validated. GEARS is a rating scale through which human evaluators can score trainees' performances on 6 domains: depth perception, bimanual dexterity, efficiency, force sensitivity, autonomy, and robotic control. Each domain is scored on a 5-point Likert scale with anchors. We used 2 common robotic simulators, the dV-Trainer (dVT; Mimic Technologies Inc., Seattle, WA) and the da Vinci Skills Simulator (dVSS; Intuitive Surgical, Sunnyvale, CA), to compare the performance metrics of robotic surgical simulators with the GEARS for a basic robotic task on each simulator. A prospective single-blinded randomized study. A surgical education and training center. Surgeons and surgeons in training. Demographic information was collected including sex, age, level of training, specialty, and previous surgical and simulator experience. Subjects performed 2 trials of ring and rail 1 (RR1) on each of the 2 simulators (dVSS and dVT) after undergoing randomization and warm-up exercises. The second RR1 trial simulator performance was recorded, and the deidentified videos were sent to human reviewers using GEARS. Eight different simulator assessment metrics were identified and paired with a similar performance metric in the GEARS tool. The GEARS evaluation scores and simulator assessment scores were paired and a Spearman rho calculated for their level of correlation. Seventy-four subjects were enrolled in this randomized study with 9 subjects excluded for missing or incomplete data. There was a strong correlation between the GEARS score and the simulator metric score for time to complete versus efficiency, time to complete versus total score, economy of motion versus depth perception, and overall score versus total score with rho coefficients greater than or equal to 0.70; these were significant (p < .0001). Those with weak correlation (rho ≥0.30) were bimanual dexterity versus economy of motion, efficiency versus master workspace range, bimanual dexterity versus master workspace range, and robotic control versus instrument collisions. On basic VR tasks, several simulator metrics are well matched with GEARS scores assigned by human reviewers, but others are not. Identifying these matches/mismatches can improve the training and assessment process when using robotic surgical simulators. Copyright © 2017 American Association of Gynecologic Laparoscopists. Published by Elsevier Inc. All rights reserved.

  7. A Python tool to set up relative free energy calculations in GROMACS

    PubMed Central

    Klimovich, Pavel V.; Mobley, David L.

    2015-01-01

    Free energy calculations based on molecular dynamics (MD) simulations have seen a tremendous growth in the last decade. However, it is still difficult and tedious to set them up in an automated manner, as the majority of the present-day MD simulation packages lack that functionality. Relative free energy calculations are a particular challenge for several reasons, including the problem of finding a common substructure and mapping the transformation to be applied. Here we present a tool, alchemical-setup.py, that automatically generates all the input files needed to perform relative solvation and binding free energy calculations with the MD package GROMACS. When combined with Lead Optimization Mapper [14], recently developed in our group, alchemical-setup.py allows fully automated setup of relative free energy calculations in GROMACS. Taking a graph of the planned calculations and a mapping, both computed by LOMAP, our tool generates the topology and coordinate files needed to perform relative free energy calculations for a given set of molecules, and provides a set of simulation input parameters. The tool was validated by performing relative hydration free energy calculations for a handful of molecules from the SAMPL4 challenge [16]. Good agreement with previously published results and the straightforward way in which free energy calculations can be conducted make alchemical-setup.py a promising tool for automated setup of relative solvation and binding free energy calculations. PMID:26487189
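
    alchemical-setup.py and LOMAP implement their own atom-mapping machinery; purely as an illustration of the common-substructure step the abstract mentions, the sketch below runs a maximum common substructure search with RDKit (assumed to be installed) on two hypothetical ligands.

```python
# Illustrative only: this is not alchemical-setup.py or LOMAP, just a minimal
# maximum common substructure (MCS) search with RDKit, the kind of step needed
# to map atoms between two ligands for a relative free energy transformation.
from rdkit import Chem
from rdkit.Chem import rdFMCS

lig_a = Chem.MolFromSmiles("c1ccccc1O")      # phenol (hypothetical ligand A)
lig_b = Chem.MolFromSmiles("c1ccccc1N")      # aniline (hypothetical ligand B)

mcs = rdFMCS.FindMCS([lig_a, lig_b])
core = Chem.MolFromSmarts(mcs.smartsString)

# Atom indices of the shared core in each ligand define the atom mapping;
# the remaining atoms are the ones that must be alchemically transformed.
map_a = lig_a.GetSubstructMatch(core)
map_b = lig_b.GetSubstructMatch(core)
print("Common core:", mcs.smartsString)
print("Atom mapping (A -> B):", dict(zip(map_a, map_b)))
```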

  8. A Tool for Verification and Validation of Neural Network Based Adaptive Controllers for High Assurance Systems

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Schumann, Johann

    2004-01-01

    High reliability of mission- and safety-critical software systems has been identified by NASA as a high-priority technology challenge. We present an approach for the performance analysis of a neural network (NN) in an advanced adaptive control system. This problem is important in the context of safety-critical applications that require certification, such as flight software in aircraft. We have developed a tool to measure the performance of the NN during operation by calculating a confidence interval (error bar) around the NN's output. Our tool can be used during pre-deployment verification as well as monitoring the network performance during operation. The tool has been implemented in Simulink and simulation results on a F-15 aircraft are presented.
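
    The abstract does not state how the confidence interval is computed, so the sketch below shows one generic way to obtain an error bar around a network's output: an ensemble of small regressors whose prediction spread is monitored at run time. It uses scikit-learn on toy data and is not the authors' Simulink tool.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x).ravel() + 0.1 * rng.normal(size=len(x))

# Ensemble of small networks trained from different random initialisations;
# the spread of their predictions provides an error bar around the mean output.
nets = [MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                     random_state=s).fit(x, y) for s in range(10)]
preds = np.stack([net.predict(x) for net in nets])
mean, std = preds.mean(axis=0), preds.std(axis=0)

# A simple run-time monitor: flag outputs whose error bar exceeds a tolerance.
tolerance = 0.2
print("Mean band half-width:", round(float((2 * std).mean()), 3))
print("Fraction of inputs flagged as low-confidence:",
      float((2 * std > tolerance).mean()))
```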

  9. Using Machine Learning to Predict MCNP Bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grechanuk, Pavel Aleksandrovi

    For many real-world applications in radiation transport where simulations are compared to experimental measurements, like in nuclear criticality safety, the bias (simulated - experimental k-eff) in the calculation is an extremely important quantity used for code validation. The objective of this project is to accurately predict the bias of MCNP6 [1] criticality calculations using machine learning (ML) algorithms, with the intention of creating a tool that can complement the current nuclear criticality safety methods. In the latest release of MCNP6, the Whisper tool is available for criticality safety analysts and includes a large catalogue of experimental benchmarks, sensitivity profiles, and nuclear data covariance matrices. This data, coming from 1100+ benchmark cases, is used in this study of ML algorithms for criticality safety bias predictions.

  10. Materials Genome Initiative

    NASA Technical Reports Server (NTRS)

    Vickers, John

    2015-01-01

    The Materials Genome Initiative (MGI) project element is a cross-Center effort that is focused on the integration of computational tools to simulate manufacturing processes and materials behavior. These computational simulations will be utilized to gain understanding of processes and materials behavior, to accelerate process development and certification, to more efficiently integrate new materials into existing NASA projects, and to lead to the design of new materials for improved performance. This NASA effort looks to collaborate with efforts at other government agencies and universities working under the national MGI. MGI plans to develop integrated computational/experimental/processing methodologies for accelerating discovery and insertion of materials to satisfy NASA's unique mission demands. The challenges include validated design tools that incorporate materials properties, processes, and design requirements; and materials process control to rapidly mature emerging manufacturing methods and develop certified manufacturing processes.

  11. A Hybrid Reality Radiation-free Simulator for Teaching Wire Navigation Skills

    PubMed Central

    Kho, Jenniefer Y.; Johns, Brian D.; Thomas, Geb. W.; Karam, Matthew D.; Marsh, J. Lawrence; Anderson, Donald D.

    2016-01-01

    Objectives Surgical simulation is an increasingly important method to facilitate the acquiring of surgical skills. Simulation can be helpful in developing hip fracture fixation skills because it is a common procedure for which performance can be objectively assessed (i.e., the tip-apex distance). The procedure requires fluoroscopic guidance to drill a wire along an osseous trajectory to a precise position within bone. The objective of this study was to assess the construct validity for a novel radiation-free simulator designed to teach wire navigation skills in hip fracture fixation. Methods Novices (N=30) with limited to no surgical experience in hip fracture fixation and experienced surgeons (N=10) participated. Participants drilled a guide wire in the center-center position of a synthetic femoral head in a hip fracture simulator, using electromagnetic sensors to track the guide wire position. Sensor data were gathered to generate fluoroscopic-like images of the hip and guide wire. Simulator performance of novice and experienced participants was compared to measure construct validity. Results The simulator was able to discriminate the accuracy in guide wire position between novices and experienced surgeons. Experienced surgeons achieved a more accurate tip-apex distance than novices (13 vs 23 mm, respectively, p=0.009). The magnitude of improvement on successive simulator attempts was dependent on level of expertise; tip-apex distance improved significantly in the novice group, while it was unchanged in the experienced group. Conclusions This hybrid reality, radiation-free hip fracture simulator, which combines real-world objects with computer-generated imagery demonstrates construct validity by distinguishing the performance of novices and experienced surgeons. There is a differential effect depending on level of experience, and it could be used as an effective training tool in novice surgeons. PMID:26165262
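
    The tip-apex distance is a standard radiographic metric; assuming its usual definition (the sum of the tip-to-apex offsets on the AP and lateral views, each corrected for magnification using the known implant diameter), a minimal calculation looks like the following, with invented example values.

```python
def tip_apex_distance(x_ap_mm, x_lat_mm, d_true_mm, d_ap_mm, d_lat_mm):
    """Tip-apex distance (TAD) as commonly defined for hip fracture fixation.

    x_ap_mm, x_lat_mm : measured tip-to-apex distances on the AP and lateral images
    d_true_mm         : known true diameter of the lag screw / guide wire sleeve
    d_ap_mm, d_lat_mm : apparent diameters on each image (magnification correction)
    """
    return x_ap_mm * (d_true_mm / d_ap_mm) + x_lat_mm * (d_true_mm / d_lat_mm)

# Illustrative values only: 12 mm and 10 mm apparent offsets with ~10% magnification.
print("TAD =", round(tip_apex_distance(12.0, 10.0, 9.0, 10.0, 9.9), 1), "mm")
```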

  12. A NEO population generation and observation simulation software tool

    NASA Astrophysics Data System (ADS)

    Müller, Sven; Gelhaus, Johannes; Hahn, Gerhard; Franco, Raffaella

    One of the main targets of ESA's Space Situational Awareness (SSA) program is to build a wide knowledge base about objects that can potentially harm Earth (Near-Earth Objects, NEOs). An important part of this effort is to create the Small Bodies Data Centre (SBDC) which is going to aggregate measurement data from a fully-integrated NEO observation sensor network. Until this network is developed, artificial NEO measurement data is needed in order to validate SBDC algorithms. Moreover, to establish a functioning NEO observation sensor network, it has to be determined where to place sensors, what technical requirements have to be met in order to be able to detect NEOs and which observation strategies work the best. Because of this, sensor simulation software was needed. This paper presents a software tool which allows users to create and analyse NEO populations and to simulate and analyse population observations. It is a console program written in Fortran and comes with a Graphical User Interface (GUI) written in Java and C. The tool is divided into the components "Population Generator" and "Observation Simulator". The Population Generator component is responsible for generating and analysing a NEO population. Users can choose between creating fictitious (random) and synthetic populations. The latter are based on one of two models describing the orbital and size distribution of observed NEOs: the existing so-called "Bottke Model" (Bottke et al. 2000, 2002) and the new "Granvik Model" (Granvik et al. 2014, in preparation) which has been developed in parallel to the tool. Generated populations can be analysed by defining 2D, 3D and scatter plots using various NEO attributes. As a result, the tool creates the appropriate files for the plotting tool "gnuplot". The tool's Observation Simulator component yields the Observation Simulation and Observation Analysis functions. Users can define sensor systems using ground- or space-based locations as well as optical or radar sensors and simulate observation campaigns. The tool outputs field-of-view crossings and actual detections of the selected NEO population objects. Using the Observation Analysis users are able to process and plot the results of the Observation Simulation. In order to enable end-users to handle the tool in a user-intuitive and comfortable way, a GUI has been created based on the modular Eclipse Rich Client Platform (RCP) technology. Through the GUI users can easily enter input data for the tool, execute it and view its output data in a clear way. Additionally, the GUI runs gnuplot to create plot pictures and presents them to the user. Furthermore, users can create projects to organise executions of the tool.

  13. Validation of a small-animal PET simulation using GAMOS: a GEANT4-based framework

    NASA Astrophysics Data System (ADS)

    Cañadas, M.; Arce, P.; Rato Mendes, P.

    2011-01-01

    Monte Carlo-based modelling is a powerful tool to help in the design and optimization of positron emission tomography (PET) systems. The performance of these systems depends on several parameters, such as detector physical characteristics, shielding or electronics, whose effects can be studied on the basis of realistic simulated data. The aim of this paper is to validate a comprehensive study of the Raytest ClearPET small-animal PET scanner using a new Monte Carlo simulation platform which has been developed at CIEMAT (Madrid, Spain), called GAMOS (GEANT4-based Architecture for Medicine-Oriented Simulations). This toolkit, based on the GEANT4 code, was originally designed to cover multiple applications in the field of medical physics from radiotherapy to nuclear medicine, but has since been applied by some of its users in other fields of physics, such as neutron shielding, space physics, high energy physics, etc. Our simulation model includes the relevant characteristics of the ClearPET system, namely, the double layer of scintillator crystals in phoswich configuration, the rotating gantry, the presence of intrinsic radioactivity in the crystals or the storage of single events for an off-line coincidence sorting. Simulated results are contrasted with experimental acquisitions including studies of spatial resolution, sensitivity, scatter fraction and count rates in accordance with the National Electrical Manufacturers Association (NEMA) NU 4-2008 protocol. Spatial resolution results showed a discrepancy between simulated and measured values equal to 8.4% (with a maximum FWHM difference over all measurement directions of 0.5 mm). Sensitivity results differ less than 1% for a 250-750 keV energy window. Simulated and measured count rates agree well within a wide range of activities, including under electronic saturation of the system (the measured peak of total coincidences, for the mouse-sized phantom, was 250.8 kcps reached at 0.95 MBq mL-1 and the simulated peak was 247.1 kcps at 0.87 MBq mL-1). Agreement better than 3% was obtained in the scatter fraction comparison study. We also measured and simulated a mini-Derenzo phantom obtaining images with similar quality using iterative reconstruction methods. We concluded that the overall performance of the simulation showed good agreement with the measured results and validates the GAMOS package for PET applications. Furthermore, its ease of use and flexibility recommends it as an excellent tool to optimize design features or image reconstruction techniques.

  14. Reducing the Schizophrenia Stigma: A New Approach Based on Augmented Reality

    PubMed Central

    Silva, Rafael D. de C.; Albuquerque, Saulo G. C.; Muniz, Artur de V.; Filho, Pedro P. Rebouças; Ribeiro, Sidarta

    2017-01-01

    Schizophrenia is a chronic mental disease that usually manifests psychotic symptoms and affects an individual's functionality. The stigma related to this disease is a serious obstacle for an adequate approach to its treatment. Stigma can, for example, delay the start of treatment, and it creates difficulties in interpersonal and professional relationships. This work proposes a new tool based on augmented reality to reduce the stigma related to schizophrenia. The tool is capable of simulating the psychotic symptoms typical of schizophrenia and simulates sense perception changes in order to create an immersive experience capable of generating pathological experiences of a patient with schizophrenia. The integration into the proposed environment occurs through immersion glasses and an embedded camera. Audio and visual effects can also be applied in real time. To validate the proposed environment, medical students experienced the virtual environment and then answered three questionnaires to assess (i) stigmas related to schizophrenia, (ii) the efficiency and effectiveness of the tool, and, finally, (iii) stigma after simulation. The analysis of the questionnaires showed that the proposed model is a robust tool and quite realistic and, thus, very promising in reducing stigma associated with schizophrenia by instilling in the observer a greater comprehension of any person during a schizophrenic episode, whether a patient or a family member. PMID:29317860

  15. PT-SAFE: a software tool for development and annunciation of medical audible alarms.

    PubMed

    Bennett, Christopher L; McNeer, Richard R

    2012-03-01

    Recent reports by The Joint Commission as well as the Anesthesia Patient Safety Foundation have indicated that medical audible alarm effectiveness needs to be improved. Several recent studies have explored various approaches to improving the audible alarms, motivating the authors to develop real-time software capable of comparing such alarms. We sought to devise software that would allow for the development of a variety of audible alarm designs that could also integrate into existing operating room equipment configurations. The software is meant to be used as a tool for alarm researchers to quickly evaluate novel alarm designs. A software tool was developed for the purpose of creating and annunciating audible alarms. The alarms consisted of annunciators that were mapped to vital sign data received from a patient monitor. An object-oriented approach to software design was used to create a tool that is flexible and modular at run-time, can annunciate wave-files from disk, and can be programmed with MATLAB by the user to create custom alarm algorithms. The software was tested in a simulated operating room to measure technical performance and to validate the time-to-annunciation against existing equipment alarms. The software tool showed efficacy in a simulated operating room environment by providing alarm annunciation in response to physiologic and ventilator signals generated by a human patient simulator, on average 6.2 seconds faster than existing equipment alarms. Performance analysis showed that the software was capable of supporting up to 15 audible alarms on a mid-grade laptop computer before audio dropouts occurred. These results suggest that this software tool provides a foundation for rapidly staging multiple audible alarm sets from the laboratory to a simulation environment for the purpose of evaluating novel alarm designs, thus producing valuable findings for medical audible alarm standardization.
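
    PT-SAFE itself is MATLAB-programmable and plays wave files; the sketch below is only a schematic Python illustration of the core idea of mapping vital-sign samples to alarm annunciations via user-defined rules. Channel names, thresholds, and messages are hypothetical.

```python
import time
from dataclasses import dataclass

@dataclass
class AlarmRule:
    """Maps a vital-sign channel to an annunciation when a threshold is crossed."""
    channel: str
    low: float
    high: float
    message: str

RULES = [
    AlarmRule("SpO2", low=90.0, high=100.0, message="Low oxygen saturation"),
    AlarmRule("HR",   low=40.0, high=130.0, message="Heart rate out of range"),
]

def annunciate(message: str) -> None:
    # Placeholder for playing a wave file or a synthesised tone.
    print(f"[{time.strftime('%H:%M:%S')}] ALARM: {message}")

def process_sample(sample: dict) -> None:
    """Check one monitor sample against all alarm rules."""
    for rule in RULES:
        value = sample.get(rule.channel)
        if value is not None and not (rule.low <= value <= rule.high):
            annunciate(f"{rule.message} ({rule.channel}={value})")

# Simulated monitor samples, e.g. as received from a human patient simulator.
for sample in [{"SpO2": 97, "HR": 72}, {"SpO2": 88, "HR": 72}, {"SpO2": 91, "HR": 140}]:
    process_sample(sample)
```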

  16. Use of the Soil and Water Assessment Tool (SWAT) for simulating hydrology and water quality in the Cedar River Basin, Iowa, 2000--10

    USGS Publications Warehouse

    Hutchinson, Kasey J.; Christiansen, Daniel E.

    2013-01-01

    The U.S. Geological Survey, in cooperation with the Iowa Department of Natural Resources, used the Soil and Water Assessment Tool to simulate streamflow and nitrate loads within the Cedar River Basin, Iowa. The goal was to assess the ability of the Soil and Water Assessment Tool to estimate streamflow and nitrate loads in gaged and ungaged basins in Iowa. The Cedar River Basin model uses measured streamflow data from 12 U.S. Geological Survey streamflow-gaging stations for hydrology calibration. The U.S. Geological Survey software program, Load Estimator, was used to estimate annual and monthly nitrate loads based on measured nitrate concentrations and streamflow data from three Iowa Department of Natural Resources Storage and Retrieval/Water Quality Exchange stations, located throughout the basin, for nitrate load calibration. The hydrology of the model was calibrated for the period of January 1, 2000, to December 31, 2004, and validated for the period of January 1, 2005, to December 31, 2010. Simulated daily, monthly, and annual streamflow resulted in Nash-Sutcliffe coefficient of model efficiency (ENS) values ranging from 0.44 to 0.83, 0.72 to 0.93, and 0.56 to 0.97, respectively, and coefficient of determination (R2) values ranging from 0.55 to 0.87, 0.74 to 0.94, and 0.65 to 0.99, respectively, for the calibration period. The percent bias ranged from -19 to 10, -16 to 10, and -19 to 10 for daily, monthly, and annual simulation, respectively. The validation period resulted in daily, monthly, and annual ENS values ranging from 0.49 to 0.77, 0.69 to 0.91, and -0.22 to 0.95, respectively; R2 values ranging from 0.59 to 0.84, 0.74 to 0.92, and 0.36 to 0.92, respectively; and percent bias ranging from -16 for all time steps to percent bias of 14, 15, and 15, respectively. The nitrate calibration was based on a small subset of the locations used in the hydrology calibration with limited measured data. Model performance ranges from unsatisfactory to very good for the calibration period (January 1, 2000, to December 31, 2004). Results for the validation period (January 1, 2005, to December 31, 2010) indicate a need for an increase of measured data as well as more refined documented management practices at a higher resolution. Simulated nitrate loads resulted in monthly and annual ENS values ranging from 0.28 to 0.82 and 0.61 to 0.86, respectively, and monthly and annual R2 values ranging from 0.65 to 0.81 and 0.65 to 0.88, respectively, for the calibration period. The monthly and annual calibration percent bias ranged from 4 to 7 and 5 to 7, respectively. The validation period resulted in all but two ENS values less than zero. Monthly and annual validation R2 values ranged from 0.5 to 0.67 and 0.25 to 0.48, respectively. Monthly and annual validation percent bias ranged from 46 to 68 for both time steps. A daily calibration and validation for nitrate loads was not performed because of the poor monthly and annual results; measured daily nitrate data are available for intervals of time in 2009 and 2010 during which a successful monthly and annual calibration could not be achieved. The Cedar River Basin is densely gaged relative to other basins in Iowa; therefore, an alternative hydrology scenario was created to assess the predictive capabilities of the Soil and Water Assessment Tool using fewer locations of measured data for model hydrology calibration. 
Although the ability of the model to reproduce measured values improves with the number of calibration locations, results indicate that the Soil and Water Assessment Tool can be used to adequately estimate streamflow in less densely gaged basins throughout the State, especially at the monthly time step. However, results also indicate that caution should be used when calibrating a subbasin that consists of physically distinct regions based on only one streamflow-gaging station.
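
    The calibration statistics reported above have standard definitions; assuming the usual formulations, the sketch below computes the Nash-Sutcliffe efficiency (ENS) and percent bias for a simulated-versus-observed streamflow series (the numbers are illustrative, not Cedar River data, and percent-bias sign conventions vary between studies).

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect fit, <= 0 = no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """Percent bias; with this sign convention, positive means the model overestimates."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

# Illustrative monthly streamflow values (arbitrary units).
obs = [120, 95, 210, 340, 400, 280, 150, 90, 70, 110, 180, 220]
sim = [110, 100, 190, 360, 380, 300, 140, 85, 75, 120, 170, 230]
print("ENS   =", round(nash_sutcliffe(obs, sim), 3))
print("PBIAS =", round(percent_bias(obs, sim), 1), "%")
```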

  17. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    NASA Astrophysics Data System (ADS)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

    Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy regarding the resulting textures.
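
    The phantom's tissue-specific speckle models are not given in the abstract; as a minimal illustration of the general approach, the sketch below applies multiplicative Rayleigh-distributed speckle to a synthetic echogenicity map and log-compresses the result. The geometry and noise parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic echogenicity map: a bright circular region in a darker background.
ny, nx = 256, 256
yy, xx = np.mgrid[0:ny, 0:nx]
echogenicity = np.where((yy - 128) ** 2 + (xx - 128) ** 2 < 60 ** 2, 0.8, 0.3)

# Fully developed speckle is often approximated as multiplicative noise whose
# envelope follows a Rayleigh distribution; different tissue types could use
# different scale parameters (a single scale is used here purely for illustration).
speckle = rng.rayleigh(scale=1.0, size=(ny, nx))
image = echogenicity * speckle

# Log-compress for display, as ultrasound scanners typically do.
log_image = 20 * np.log10(image + 1e-6)
print("Simulated dynamic range (dB):",
      round(float(log_image.max() - np.percentile(log_image, 1)), 1))
```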

  18. INL Experimental Program Roadmap for Thermal Hydraulic Code Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glenn McCreery; Hugh McIlroy

    2007-09-01

    Advanced computer modeling and simulation tools and protocols will be heavily relied on for a wide variety of system studies, engineering design activities, and other aspects of the Next Generation Nuclear Plant (NGNP) Very High Temperature Reactor (VHTR), the DOE Global Nuclear Energy Partnership (GNEP), and light-water reactors. The goal is for all modeling and simulation tools to be demonstrated accurate and reliable through a formal Verification and Validation (V&V) process, especially where such tools are to be used to establish safety margins and support regulatory compliance, or to design a system in a manner that reduces the role of expensive mockups and prototypes. Recent literature identifies specific experimental principles that must be followed in order to ensure that experimental data meet the standards required for a “benchmark” database. Even for well-conducted experiments, missing experimental details, such as geometrical definition, data reduction procedures, and manufacturing tolerances, have led to poor benchmark calculations. The INL has a long and deep history of research in thermal hydraulics, especially in the 1960s through 1980s when many programs such as LOFT and Semiscale were devoted to light-water reactor safety research, the EBRII fast reactor was in operation, and a strong geothermal energy program was established. The past can serve as a partial guide for reinvigorating thermal hydraulic research at the laboratory. However, new research programs need to fully incorporate modern experimental methods such as measurement techniques using the latest instrumentation, computerized data reduction, and scaling methodology. The path forward for establishing experimental research for code model validation will require benchmark experiments conducted in suitable facilities located at the INL. This document describes thermal hydraulic facility requirements and candidate buildings and presents examples of suitable validation experiments related to VHTRs, sodium-cooled fast reactors, and light-water reactors. These experiments range from relatively low-cost benchtop experiments for investigating individual phenomena to large electrically-heated integral facilities for investigating reactor accidents and transients.

  19. Multi-Agent Modeling and Simulation Approach for Design and Analysis of MER Mission Operations

    NASA Technical Reports Server (NTRS)

    Seah, Chin; Sierhuis, Maarten; Clancey, William J.

    2005-01-01

    A space mission operations system is a complex network of human organizations, information and deep-space network systems, and spacecraft hardware. As in other organizations, one of the problems in mission operations is managing the relationship of the mission information systems related to how people actually work (practices). Brahms, a multi-agent modeling and simulation tool, was used to model and simulate NASA's Mars Exploration Rover (MER) mission work practice. The objective was to investigate the value of work practice modeling for mission operations design. From spring 2002 until winter 2003, a Brahms modeler participated in mission systems design sessions and operations testing for the MER mission held at Jet Propulsion Laboratory (JPL). He observed how designers interacted with the Brahms tool. This paper discusses mission system designers' reactions to the simulation output during model validation and the presentation of generated work procedures. This project spurred JPL's interest in the Brahms model, but it was never included as part of the formal mission design process. We discuss why this occurred. Subsequently, we used the MER model to develop a future mission operations concept. Team members were reluctant to use the MER model, even though it appeared to be highly relevant to their effort. We describe some of the tool issues we encountered.

  20. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    PubMed Central

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined. PMID:22172142

  1. NASA Operational Simulator for Small Satellites: Tools for Software Based Validation and Verification of Small Satellites

    NASA Technical Reports Server (NTRS)

    Grubb, Matt

    2016-01-01

    The NASA Operational Simulator for Small Satellites (NOS3) is a suite of tools to aid in areas such as software development, integration test (IT), mission operations training, verification and validation (V&V), and software systems check-out. NOS3 provides a software development environment, a multi-target build system, an operator interface-ground station, dynamics and environment simulations, and software-based hardware models. NOS3 enables the development of flight software (FSW) early in the project life cycle, when access to hardware is typically not available. For small satellites there are extensive lead times on many of the commercial-off-the-shelf (COTS) components as well as limited funding for engineering test units (ETU). Considering the difficulty of providing a hardware test-bed to each developer and tester, hardware components are modeled based upon characteristic data or manufacturers' data sheets for each individual component. The fidelity of each hardware model is such that FSW executes unaware that physical hardware is not present. This allows binaries to be compiled for both the simulation environment and the flight computer without changing the FSW source code. For hardware models that provide data dependent on the environment, such as a GPS receiver or magnetometer, an open-source tool from NASA GSFC (42 Spacecraft Simulation) is used to provide the necessary data. The underlying infrastructure used to transfer messages between FSW and the hardware models can also be used to monitor, intercept, and inject messages, which has proven to be beneficial for V&V of larger missions such as the James Webb Space Telescope (JWST). As hardware is procured, drivers can be added to the environment to enable hardware-in-the-loop (HWIL) testing. When strict time synchronization is not vital, any number of combinations of hardware components and software-based models can be tested. The open-source operator interface used in NOS3 is COSMOS from Ball Aerospace. For testing, plug-ins are implemented in COSMOS to control the NOS3 simulations, while the command and telemetry tools available in COSMOS are used to communicate with FSW. NOS3 is actively being used for FSW development and component testing of the Simulation-to-Flight 1 (STF-1) CubeSat. As NOS3 matures, hardware models have been added for common CubeSat components such as Novatel GPS receivers, ClydeSpace electrical power systems and batteries, ISISpace antenna systems, etc. In the future, NASA IV&V plans to distribute NOS3 to other CubeSat developers and release the suite to the open-source community.

  2. Rationality Validation of a Layered Decision Model for Network Defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Huaqiang; Alves-Foss, James; Zhang, Du

    2007-08-31

    We propose a cost-effective network defense strategy built on three key decision layers: security policies, defense strategies, and real-time defense tactics for countering immediate threats. A layered decision model (LDM) can be used to capture this decision process. The LDM helps decision-makers gain insight into the hierarchical relationships among inter-connected entities and decision types, and supports the selection of cost-effective defense mechanisms to safeguard computer networks. To be effective as a business tool, it is first necessary to validate the rationality of the model before applying it to real-world business cases. This paper describes our efforts in validating the LDM rationality through simulation.

  3. Delphi Method Validation of a Procedural Performance Checklist for Insertion of an Ultrasound-Guided Internal Jugular Central Line.

    PubMed

    Hartman, Nicholas; Wittler, Mary; Askew, Kim; Manthey, David

    2016-01-01

    Placement of ultrasound-guided central lines is a critical skill for physicians in several specialties. Improving the quality of care delivered surrounding this procedure demands rigorous measurement of competency, and validated tools to assess performance are essential. Using the iterative, modified Delphi technique and experts in multiple disciplines across the United States, the study team created a 30-item checklist designed to assess competency in the placement of ultrasound-guided internal jugular central lines. Cronbach α was .94, indicating an excellent degree of internal consistency. Further validation of this checklist will require its implementation in simulated and clinical environments. © The Author(s) 2014.

  4. Computerized Planning of Cryosurgery Using Bubble Packing: An Experimental Validation on a Phantom Material

    PubMed Central

    Rossi, Michael R.; Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed

    2009-01-01

    The current study focuses on experimentally validating a planning scheme based on the so-called bubble-packing method. This study is a part of an ongoing effort to develop computerized planning tools for cryosurgery, where bubble packing has been previously developed as a means to find an initial, uniform distribution of cryoprobes within a given domain; the so-called force-field analogy was then used to move cryoprobes to their optimum layout. However, due to the high quality of the cryoprobe distribution suggested by bubble packing, and its low computational cost, it has been argued that a planning scheme based solely on bubble packing may be more clinically relevant. To test this argument, an experimental validation is performed on a simulated cross-section of the prostate, using gelatin solution as a phantom material, proprietary liquid-nitrogen-based cryoprobes, and a cryoheater to simulate urethral warming. Experimental results are compared with numerically simulated temperature histories resulting from planning. Results indicate an average disagreement of 0.8 mm in identifying the freezing front location, which is an acceptable level of uncertainty in the context of prostate cryosurgery imaging.
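
    The actual bubble-packing formulation is not reproduced in the abstract; the sketch below only conveys the underlying idea with a toy relaxation in which candidate cryoprobe centers repel one another inside a 2D disc until the layout is roughly uniform. The region, probe count, and force law are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Target region: a 2D disc standing in for a prostate cross-section (illustrative).
center, radius = np.array([0.0, 0.0]), 20.0   # mm
n_probes = 8
pts = rng.uniform(-radius / 2, radius / 2, size=(n_probes, 2))

# Simple repulsive relaxation: each "bubble" pushes its neighbours away until
# the layout is roughly uniform; points are clamped back inside the region.
for _ in range(500):
    disp = pts[:, None, :] - pts[None, :, :]                 # pairwise displacements
    dist = np.linalg.norm(disp, axis=-1) + np.eye(n_probes)  # avoid divide-by-zero
    force = (disp / dist[..., None] ** 3).sum(axis=1)        # inverse-square repulsion
    pts += 5.0 * force
    r = np.linalg.norm(pts - center, axis=1)
    outside = r > radius * 0.9
    pts[outside] = center + (pts[outside] - center) / r[outside, None] * radius * 0.9

print("Relaxed cryoprobe layout (mm):")
print(np.round(pts, 1))
```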

  5. Development and validation of real-time simulation of X-ray imaging with respiratory motion.

    PubMed

    Vidal, Franck P; Villard, Pierre-Frédéric

    2016-04-01

    We present a framework that combines evolutionary optimisation, soft tissue modelling and ray tracing on GPU to simultaneously compute the respiratory motion and X-ray imaging in real time. Our aim is to provide validated building blocks with high fidelity to closely match both the human physiology and the physics of X-rays. A CPU-based set of algorithms is presented to model organ behaviours during respiration. Soft tissue deformation is computed with an extension of the Chain Mail method. Rigid elements move according to kinematic laws. A GPU-based surface rendering method is proposed to compute the X-ray image using the Beer-Lambert law. It is provided as an open-source library. A quantitative validation study is provided to objectively assess the accuracy of both components: (i) the respiration against anatomical data, and (ii) the X-ray against the Beer-Lambert law and the results of Monte Carlo simulations. Our implementation can be used in various applications, such as an interactive medical virtual environment to train percutaneous transhepatic cholangiography in interventional radiology, 2D/3D registration, computation of digitally reconstructed radiographs, and simulation of 4D sinograms to test tomography reconstruction tools. Copyright © 2015 Elsevier Ltd. All rights reserved.
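
    For orientation, the one-ray fragment below evaluates the Beer-Lambert law that the GPU renderer described above relies on; the attenuation coefficients and path lengths are illustrative values only, not data from the library.

        # Beer-Lambert attenuation of an X-ray beam along a single ray (illustrative values).
        import math

        # (linear attenuation coefficient mu in 1/cm, path length in cm) for each material crossed
        segments = [(0.2, 3.0),   # soft tissue
                    (0.5, 1.0),   # bone
                    (0.2, 2.0)]   # soft tissue

        incident_intensity = 1.0
        transmitted = incident_intensity * math.exp(-sum(mu * length for mu, length in segments))
        print(f"transmitted fraction = {transmitted:.3f}")   # exp(-1.5) ~ 0.223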

  6. “Elegant Tool” Delivers Genome-Level Science for Electrolytes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keith Arterburn

    Now, a ‘disruptive, virtual scientific simulation tool’ delivers a new, genome-level investigation for electrolytes to develop better, more efficient batteries. Dr. Kevin Gering, an Idaho National Laboratory researcher, has developed the Advanced Electrolyte Model (AEM), a copyrighted molecular-based simulation tool that has been scientifically proven and validated using at least a dozen ‘real-world’ physical metrics. Nominated for the 2014 international R&D 100 Award, AEM revolutionizes electrolyte materials selection, optimizing combinations and key design elements to make battery design and experimentation quick, accurate and responsive to specific needs.

  7. Observing System Simulation Experiments for Fun and Profit

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.

    2015-01-01

    Observing System Simulation Experiments (OSSEs) can be powerful tools for evaluating and exploring both the behavior of data assimilation systems and the potential impacts of future observing systems. With great power comes great responsibility - given a pure modeling framework, how can we be sure our results are meaningful? The challenges and pitfalls of OSSE calibration and validation will be addressed, as well as issues of incestuousness, selection of appropriate metrics, and experiment design. The use of idealized observational networks to investigate theoretical ideas in a fully complex modeling framework will also be discussed.

  8. Coke formation in the thermal cracking of hydrocarbons. 4: Modeling of coke formation in naphtha cracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyniers, G.C.; Froment, G.F.; Kopinke, F.D.

    1994-11-01

    An extensive experimental program has been carried out in a pilot unit for the thermal cracking of hydrocarbons. On the basis of the experimental information and the insight into the mechanisms for coke formation in pyrolysis reactors, a mathematical model describing the coke formation has been derived. This model has been incorporated into the existing simulation tools at the Laboratorium voor Petrochemische Techniek, and the run length of an industrial naphtha cracking furnace has been accurately simulated. In this way the coking model has been validated.

  9. Development of an advanced system identification technique for comparing ADAMS analytical results with modal test data for a MICON 65/13 wind turbine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bialasiewicz, J.T.

    1995-07-01

    This work uses the theory developed in NREL/TP-442-7110 to analyze simulated data from an ADAMS (Automated Dynamic Analysis of Mechanical Systems) model of the MICON 65/13 wind turbine. The Observer/Kalman Filter identification approach is expanded to use input-output time histories from ADAMS simulations or structural test data. A step-by-step outline is offered on how the tools developed in this research can be used for validation of the ADAMS model.

  10. Analytical expressions for the nonlinear interference in dispersion managed transmission coherent optical systems

    NASA Astrophysics Data System (ADS)

    Qiao, Yaojun; Li, Ming; Yang, Qiuhong; Xu, Yanfei; Ji, Yuefeng

    2015-01-01

    Closed-form expressions for the nonlinear interference of dense wavelength-division-multiplexed (WDM) systems with dispersion managed transmission (DMT) are derived. We carry out a simulative validation by addressing an ample and significant set of Nyquist-WDM systems based on polarization multiplexed quadrature phase-shift keying (PM-QPSK) subcarriers at a baud rate of 32 Gbaud per channel. Simulation results show that the simple closed-form analytical expressions can provide an effective tool for the quick and accurate prediction of system performance in DMT coherent optical systems.

  11. GeNeDA: An Open-Source Workflow for Design Automation of Gene Regulatory Networks Inspired from Microelectronics.

    PubMed

    Madec, Morgan; Pecheux, François; Gendrault, Yves; Rosati, Elise; Lallement, Christophe; Haiech, Jacques

    2016-10-01

    The topic of this article is the development of an open-source automated design framework for synthetic biology, specifically for the design of artificial gene regulatory networks based on a digital approach. In contrast to other tools, GeNeDA is open-source online software based on existing tools used in microelectronics that have proven their efficiency over the last 30 years. The complete framework is composed of a computation core directly adapted from an Electronic Design Automation tool, input and output interfaces, a library of elementary parts that can be achieved with gene regulatory networks, and an interface with an electrical circuit simulator. Each of these modules is an extension of microelectronics tools and concepts: ODIN II, ABC, the Verilog language, the SPICE simulator, and SystemC-AMS. GeNeDA is first validated on a benchmark of several combinatorial circuits. The results highlight the importance of the part library. Then, this framework is used for the design of a sequential circuit including a biological state machine.

  12. The acoustic performance of double-skin facades: A design support tool for architects

    NASA Astrophysics Data System (ADS)

    Batungbakal, Aireen

    This study assesses and validates the influence of measuring sound in the urban environment and the influence of glass facade components in reducing sound transmission to the indoor environment. Because noise is among the most frequently reported issues affecting workspaces, increased awareness of the need to minimize it has led building designers to reconsider the design of building envelopes and their site environment. Outdoor sound conditions, such as traffic noise, challenge designers to accurately estimate the capability of glass facades to achieve an appropriate indoor sound quality. To characterize the density of the urban environment, field tests acquired existing sound levels in areas of high commercial development, employment, and traffic activity, establishing a baseline for sound levels common in urban work areas. Data composed from the direct sound transmission loss of glass facades, simulated with the sound insulation software INSUL, are used as an informative tool correlating the response of glass facade components to the existing outdoor sound levels of a project site in order to achieve desired indoor sound levels. The study also aims to bridge the gap in validating the acoustic performance of glass facades early in a project's design, from conditioned settings such as field testing and simulation through to project completion. Results obtained from the study's facade simulations and facade comparison support the conclusion that acoustic comfort is not limited to a single solution, but rather to multiple design options responsive to the environment.

  13. Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures.

    PubMed

    Manninen, Tiina; Aćimović, Jugoslava; Havela, Riikka; Teppola, Heidi; Linne, Marja-Leena

    2018-01-01

    The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement the models based on the information in the original publication, let alone rerun the models just because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a versatile choice of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including the models for intracellular signal transduction of neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another to comprehend if the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement on experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, enabling progressive building of advanced computational models and tools, can be achieved only through adopting publishing standards which underline replicability and reproducibility of research results.

  14. Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures

    PubMed Central

    Manninen, Tiina; Aćimović, Jugoslava; Havela, Riikka; Teppola, Heidi; Linne, Marja-Leena

    2018-01-01

    The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement the models based on the information in the original publication, let alone rerun the models just because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a versatile choice of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including the models for intracellular signal transduction of neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another to comprehend if the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement on experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, enabling progressive building of advanced computational models and tools, can be achieved only through adopting publishing standards which underline replicability and reproducibility of research results. PMID:29765315

  15. Modelling and attenuation feasibility of the aeroelastic response of active helicopter rotor systems during the engagement/disengagement phase of maritime operation

    NASA Astrophysics Data System (ADS)

    Khouli, F.

    An aeroelastic phenomenon, known as blade sailing, encountered during maritime operation of helicopters is identified as being a factor that limits the tactical flexibility of helicopter operation in some sea conditions. The hazards associated with this phenomenon and its complexity, owing to the number of factors contributing to its occurrence, led previous investigators to conclude that advanced and validated simulation tools are best suited to investigate it. A research gap is identified in terms of scaled experimental investigation of this phenomenon and practical engineering solutions to alleviate its negative impact on maritime helicopter operation. The feasibility of a proposed strategy to alleviate it required addressing a gap in modelling thin-walled composite active beams/rotor blades. The modelling is performed by extending a mathematically-consistent and asymptotic reduction strategy of the 3-D elastic problem to account for embedded active materials. The derived active cross-sectional theory is validated using 2-D finite element results for closed and open cross-sections. The geometrically-exact intrinsic formulation of active maritime rotor systems is demonstrated to yield compact and symbolic governing equations. The intrinsic feature is shown to allow a classical and proven solution scheme to be successfully applied to obtain time history solutions. A Froude-scaled experimental rotor was designed, built, and tested in a scaled ship airwake environment and representative ship motion. Based on experimental and simulations data, conclusions are drawn regarding the influence of the maritime operation environment and the rotor operation parameters on the blade sailing phenomenon. The experimental data is also used to successfully validate the developed simulation tools. The feasibility of an open-loop control strategy based on the integral active twist concept to counter blade sailing is established in a Mach-scaled maritime operation environment. Recommendations are proposed to improve the strategy and further establish its validity in a full-scale maritime operation environment.

  16. Telemetry-Enhancing Scripts

    NASA Technical Reports Server (NTRS)

    Maimone, Mark W.

    2009-01-01

    Scripts Providing a Cool Kit of Telemetry Enhancing Tools (SPACKLE) is a set of software tools that fill gaps in the capabilities of other software used in processing downlinked data in Mars Exploration Rovers (MER) flight and test-bed operations. SPACKLE tools have helped to accelerate the automatic processing and interpretation of MER mission data, enabling non-experts to understand and/or use MER query and data product command simulation software tools more effectively. SPACKLE has greatly accelerated some operations and provides new capabilities. The tools of SPACKLE are written, variously, in Perl or the C or C++ language. They perform a variety of search and shortcut functions that include the following: generating text-only, Event Report-annotated, and Web-enhanced views of command sequences; labeling integer enumerations with their symbolic meanings in text messages and engineering channels; systematically detecting corruption within data products; generating text-only displays of data-product catalogs including downlink status; validating and labeling commands related to data products; performing convenient searches of detailed engineering data spanning multiple Martian solar days; generating tables of initial conditions pertaining to engineering, health, and accountability data; simplifying the construction and simulation of command sequences; and speeding up time format conversions and sorting.

  17. Implementation of a state-to-state analytical framework for the calculation of expansion tube flow properties

    NASA Astrophysics Data System (ADS)

    James, C. M.; Gildfind, D. E.; Lewis, S. W.; Morgan, R. G.; Zander, F.

    2018-03-01

    Expansion tubes are an important type of test facility for the study of planetary entry flow-fields, being the only type of impulse facility capable of simulating the aerothermodynamics of superorbital planetary entry conditions from 10 to 20 km/s. However, the complex flow processes involved in expansion tube operation make it difficult to fully characterise flow conditions, with two-dimensional full facility computational fluid dynamics simulations often requiring tens or hundreds of thousands of computational hours to complete. In an attempt to simplify this problem and provide a rapid flow condition prediction tool, this paper presents a validated and comprehensive analytical framework for the simulation of an expansion tube facility. It identifies central flow processes and models them from state to state through the facility using established compressible and isentropic flow relations, and equilibrium and frozen chemistry. How the model simulates each section of an expansion tube is discussed, as well as how the model can be used to simulate situations where flow conditions diverge from ideal theory. The model is then validated against experimental data from the X2 expansion tube at the University of Queensland.
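
    As one concrete example of the established compressible-flow relations the framework draws on (not code from the paper), the fragment below evaluates the standard isentropic stagnation-to-static temperature and pressure ratios for a perfect gas.

        # Standard isentropic stagnation-to-static ratios for a perfect gas.
        def isentropic_ratios(mach, gamma=1.4):
            t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2     # T0 / T
            p_ratio = t_ratio ** (gamma / (gamma - 1.0))        # p0 / p
            return t_ratio, p_ratio

        for mach in (0.5, 1.0, 2.0, 5.0):
            t0_t, p0_p = isentropic_ratios(mach)
            print(f"M = {mach:>3}: T0/T = {t0_t:6.2f}   p0/p = {p0_p:9.2f}")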

  18. Dark matter search in a Beam-Dump eXperiment (BDX) at Jefferson Lab: an update on PR12-16-001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaglieri, M.

    This document is an update to the proposal PR12-16-001 Dark matter search in a Beam-Dump eXperiment (BDX) at Jefferson Lab submitted to JLab-PAC44 in 2016 reporting progress in addressing questions raised regarding the beam-on backgrounds. The concerns are addressed by adopting a new simulation tool, FLUKA, and planning measurements of muon fluxes from the dump with its existing shielding. First, we have implemented the detailed BDX experimental geometry into a FLUKA simulation, in consultation with experts from the JLab Radiation Control Group. The FLUKA simulation has been compared directly to our GEANT4 simulations and shown to agree in regions of validity. The FLUKA interaction package, with a tuned set of biasing weights, is naturally able to generate reliable particle distributions with very small probabilities and therefore predict rates at the detector location beyond the planned shielding around the beam dump. Second, we have developed a plan to conduct measurements of the muon flux from the Hall-A dump in its current configuration to validate our simulations.

  19. An IMU-to-Body Alignment Method Applied to Human Gait Analysis.

    PubMed

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-12-10

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
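
    As background for readers, the fragment below shows the usual way a joint rotation is obtained from two segment orientations once the sensors have been aligned to their body segments; it uses SciPy's rotation utilities with invented quaternions and is not the authors' calibration procedure.

        # Joint rotation from two body-segment orientations (illustrative, invented quaternions).
        from scipy.spatial.transform import Rotation as R

        # Thigh and shank orientations in a common frame, as quaternions (x, y, z, w).
        q_thigh = R.from_quat([0.0, 0.0, 0.0, 1.0])
        q_shank = R.from_quat([0.259, 0.0, 0.0, 0.966])     # roughly 30 degrees about x

        # Relative rotation of the shank with respect to the thigh = knee joint rotation.
        q_joint = q_thigh.inv() * q_shank
        flexion, abduction, rotation = q_joint.as_euler("xyz", degrees=True)
        print(f"flexion={flexion:.1f} deg, abduction={abduction:.1f} deg, rotation={rotation:.1f} deg")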

  20. Benchmarking and validation of a Geant4-SHADOW Monte Carlo simulation for dose calculations in microbeam radiation therapy.

    PubMed

    Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael

    2014-05-01

    Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.

  1. Hydrologic calibration and validation of SWAT in a snow-dominated Rocky Mountain watershed, Montana, USA

    Treesearch

    Robert S. Ahl; Scott W. Woods; Hans R. Zuuring

    2008-01-01

    The Soil and Water Assessment Tool (SWAT) has been applied successfully in temperate environments but little is known about its performance in the snow-dominated, forested, mountainous watersheds that provide much of the water supply in western North America. To address this knowledge gap, we configured SWAT to simulate the streamflow of Tenderfoot Creek (TCSWAT)....

  2. Verification and Validation of the Malicious Activity Simulation Tool (MAST) for Network Administrator Training and Evaluation

    DTIC Science & Technology

    2012-03-01

    Only indexed excerpts of this report are available. They mention malware examples such as programs used to sell fake antivirus software, Gammima (used to steal gaming login information), and Zeus (used to steal banking information), along with table-of-contents fragments including "Viruses," "Proof of Concept of Software Training Using Malware Mimics," "Software," "COMPOSE CG-71 Virtual Machines," and "Integrated Shipboard Network System."

  3. The Applications of Finite Element Analysis in Proximal Humeral Fractures.

    PubMed

    Ye, Yongyu; You, Wei; Zhu, Weimin; Cui, Jiaming; Chen, Kang; Wang, Daping

    2017-01-01

    Proximal humeral fractures are common and among the most challenging to treat, owing to the complexity of the glenohumeral joint, especially in the geriatric population with impacted fractures; implant development continues because the problems with fixation have not yet been solved. Pre-, intra-, and postoperative assessments are crucial in the management of these patients. Finite element analysis, as one of the valuable tools, has been implemented as an effective and noninvasive method to analyze proximal humeral fractures, providing solid evidence for the management of troublesome patients. However, no review article about the applications and effects of finite element analysis in assessing proximal humeral fractures has been reported yet. This review article summarizes the applications, contributions, and clinical significance of finite element analysis in assessing proximal humeral fractures. Furthermore, the limitations of finite element analysis, the difficulties of more realistic simulation, and the creation and validation of FE models are discussed. We conclude that although some advances in proximal humeral fracture research have been made using finite element analysis, the utility of this powerful tool for routine clinical management and adequate simulation requires further state-of-the-art studies to provide evidence and a sound basis.

  4. Continuum approach for aerothermal flow through ablative porous material using discontinuous Galerkin discretization.

    NASA Astrophysics Data System (ADS)

    Schrooyen, Pierre; Chatelain, Philippe; Hillewaert, Koen; Magin, Thierry E.

    2014-11-01

    The atmospheric entry of spacecraft presents several challenges in simulating the aerothermal flow around the heat shield. Predicting an accurate heat flux is a complex task, especially regarding the interaction between the flow in the free stream and the erosion of the thermal protection material. To capture this interaction, a continuum approach is developed to go progressively from the region fully occupied by fluid to a receding porous medium. The volume-averaged Navier-Stokes equations are used to model both phases in the same computational domain considering a single set of conservation laws. The porosity is itself a variable of the computation, allowing volumetric ablation to be taken into account through appropriate source terms. This approach is implemented within a computational tool based on a high-order discontinuous Galerkin discretization. The multi-dimensional tool has already been validated and has demonstrated an efficient parallel implementation. Within this platform, a fully implicit method was developed to simulate multi-phase reacting flows. Numerical results to verify and validate the methodology are considered within this work. Interactions between the flow and the ablated geometry are also presented. Supported by Fund for Research Training in Industry and Agriculture.

  5. Investigation of blast-induced traumatic brain injury.

    PubMed

    Taylor, Paul A; Ludwigsen, John S; Ford, Corey C

    2014-01-01

    Many troops deployed in Iraq and Afghanistan have sustained blast-related, closed-head injuries from being within non-lethal distance of detonated explosive devices. Little is known, however, about the mechanisms associated with blast exposure that give rise to traumatic brain injury (TBI). This study attempts to identify the precise conditions of focused stress wave energy within the brain, resulting from blast exposure, which will correlate with a threshold for persistent brain injury. This study developed and validated a set of modelling tools to simulate blast loading to the human head. Using these tools, the blast-induced, early-time intracranial wave motions that lead to focal brain damage were simulated. The simulations predict the deposition of three distinct wave energy components, two of which can be related to injury-inducing mechanisms, namely cavitation and shear. Furthermore, the results suggest that the spatial distributions of these damaging energy components are independent of blast direction. The predictions reported herein will simplify efforts to correlate simulation predictions with clinical measures of TBI and aid in the development of protective headwear.

  6. Investigation of blast-induced traumatic brain injury

    PubMed Central

    Ludwigsen, John S.; Ford, Corey C.

    2014-01-01

    Objective Many troops deployed in Iraq and Afghanistan have sustained blast-related, closed-head injuries from being within non-lethal distance of detonated explosive devices. Little is known, however, about the mechanisms associated with blast exposure that give rise to traumatic brain injury (TBI). This study attempts to identify the precise conditions of focused stress wave energy within the brain, resulting from blast exposure, which will correlate with a threshold for persistent brain injury. Methods This study developed and validated a set of modelling tools to simulate blast loading to the human head. Using these tools, the blast-induced, early-time intracranial wave motions that lead to focal brain damage were simulated. Results The simulations predict the deposition of three distinct wave energy components, two of which can be related to injury-inducing mechanisms, namely cavitation and shear. Furthermore, the results suggest that the spatial distributions of these damaging energy components are independent of blast direction. Conclusions The predictions reported herein will simplify efforts to correlate simulation predictions with clinical measures of TBI and aid in the development of protective headwear. PMID:24766453

  7. A Simulation Tool for Dynamic Contrast Enhanced MRI

    PubMed Central

    Mauconduit, Franck; Christen, Thomas; Barbier, Emmanuel Luc

    2013-01-01

    The quantification of bolus-tracking MRI techniques remains challenging. The acquisition usually relies on one contrast and the analysis on a simplified model of the various phenomena that arise within a voxel, leading to inaccurate perfusion estimates. To evaluate how simplifications in the interstitial model impact perfusion estimates, we propose a numerical tool to simulate the MR signal provided by a dynamic contrast enhanced (DCE) MRI experiment. Our model encompasses the intrinsic relaxations, the magnetic field perturbations induced by susceptibility interfaces (vessels and cells), the diffusion of the water protons, the blood flow, the permeability of the vessel wall to the contrast agent (CA), and the constrained diffusion of the CA within the voxel. The blood compartment is modeled as a uniform compartment. The different blocks of the simulation are validated and compared to classical models. The impact of the CA diffusivity on the permeability and blood volume estimates is evaluated. Simulations demonstrate that the CA diffusivity only slightly impacts the permeability estimates for classical blood flow and CA diffusion values. The effect of long echo times is investigated. Simulations show that DCE-MRI performed with a long echo time may already lead to significant underestimation of the blood volume (up to 30% lower for brain tumor permeability values). The potential and the versatility of the proposed implementation are evaluated by running the simulation with realistic vascular geometry obtained from two-photon microscopy and with impermeable cells in the extravascular environment. In conclusion, the proposed simulation tool describes DCE-MRI experiments and may be used to evaluate and optimize acquisition and processing strategies. PMID:23516414
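
    For orientation, the sketch below implements the classical (simplified) Tofts compartment model that voxel-scale simulators such as the one above are typically compared against; the arterial input function and parameter values are invented, and this is far simpler than the model described in the abstract.

        # Classical Tofts model for tissue contrast-agent concentration (reference model only;
        # the simulation tool described above is far more detailed). Parameters are invented.
        import numpy as np

        def tofts(t, cp, ktrans, ve):
            """Ct(t) = Ktrans * integral_0^t Cp(tau) * exp(-kep * (t - tau)) dtau, kep = Ktrans / ve."""
            kep = ktrans / ve
            dt = t[1] - t[0]
            ct = np.zeros_like(t)
            for i in range(1, len(t)):
                kernel = np.exp(-kep * (t[i] - t[: i + 1]))
                ct[i] = ktrans * np.sum(cp[: i + 1] * kernel) * dt   # rectangle-rule convolution
            return ct

        t = np.linspace(0.0, 5.0, 301)                # minutes
        cp = 5.0 * t * np.exp(-2.0 * t)               # toy arterial input function (mM)
        ct = tofts(t, cp, ktrans=0.25, ve=0.3)        # Ktrans in 1/min, ve dimensionless
        print(f"peak tissue concentration ~ {ct.max():.3f} mM at t = {t[np.argmax(ct)]:.2f} min")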

  8. Advanced data management system architectures testbed

    NASA Technical Reports Server (NTRS)

    Grant, Terry

    1990-01-01

    The objective of the Architecture and Tools Testbed is to provide a working, experimental focus to the evolving automation applications for the Space Station Freedom data management system. Emphasis is on defining and refining real-world applications including the following: the validation of user needs; understanding system requirements and capabilities; and extending capabilities. The approach is to provide an open, distributed system of high performance workstations representing both the standard data processors and networks and advanced RISC-based processors and multiprocessor systems. The system provides a base from which to develop and evaluate new performance and risk management concepts and for sharing the results. Participants are given a common view of requirements and capability via: remote login to the testbed; standard, natural user interfaces to simulations and emulations; special attention to user manuals for all software tools; and E-mail communication. The testbed elements which instantiate the approach are briefly described including the workstations, the software simulation and monitoring tools, and performance and fault tolerance experiments.

  9. Objective assessment of technique in laparoscopic colorectal surgery: what are the existing tools?

    PubMed

    Foster, J D; Francis, N K

    2015-01-01

    Assessment can improve the effectiveness of surgical training and enable valid judgments of competence. Laparoscopic colon resection surgery is now taught within surgical residency programs, and assessment tools are increasingly used to stimulate formative feedback and enhance learning. Formal assessment of technical performance in laparoscopic colon resection has been successfully applied at the specialist level in the English "LAPCO" National Training Program. Objective assessment tools need to be developed for training and assessment in laparoscopic rectal cancer resection surgery. Simulation may have a future role in assessment and accreditation in laparoscopic colorectal surgery; however, existing virtual reality models are not ready to be used for assessment of this advanced surgery.

  10. Development of an objective assessment tool for total laparoscopic hysterectomy: A Delphi method among experts and evaluation on a virtual reality simulator

    PubMed Central

    Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane

    2018-01-01

    Introduction Total Laparoscopic hysterectomy (LH) requires an advanced level of operative skills and training. The aim of this study was to develop an objective scale specific for the assessment of technical skills for LH (H-OSATS) and to demonstrate feasibility of use and validity in a virtual reality setting. Material and methods The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on relevant steps that should be included into the H-OSATS scale for assessment of operative performances. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators of different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. Results A total of 76 discrete steps were identified by the hierarchical task analysis. 14 experts completed the two rounds of the Delphi questionnaire. 64 steps reached consensus and were integrated in the scale. During the validation process, median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate and experienced group for total H-OSATS scores (133, 155.9 and 178.25 respectively; p = 0.002). H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p<0.001) and test retest reliability (ICC = 0.877; p<0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p<0.001). Conclusion The H-OSATS scale displayed evidence of validity for assessment of technical performances for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice. Next steps should focus on evaluating the validity of the scale in the operating room. PMID:29293635

  11. Construct and face validity of a virtual reality-based camera navigation curriculum.

    PubMed

    Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J

    2012-10-01

    Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P < 0.05). In the individual modules, coordination required 13.3 attempts for novices, 4.2 for intermediates, and 1.7 for the advanced group (P < 0.05). Target visualization required 19.3 attempts for novices, 13.2 for intermediates, and 8.2 for the advanced group (P < 0.05). Participants believe that training improves camera handling skills (95%), is relevant to surgery (95%), and is a valid training tool (93%). Graphics (98%) and realism (93%) were highly regarded. The VR-based camera navigation curriculum demonstrates construct and face validity for our training population. Camera navigation simulation may be a valuable tool that can be integrated into training protocols for residents and medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Injection-Molded Long-Fiber Thermoplastic Composites: From Process Modeling to Prediction of Mechanical Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Kunc, Vlastimil; Jin, Xiaoshi

    2013-12-18

    This article illustrates a predictive capability for long-fiber thermoplastic (LFT) composites that first simulates the injection molding of LFT structures with Autodesk® Simulation Moldflow® Insight (ASMI) to accurately predict fiber orientation and length distributions in these structures. After validating the fiber orientation and length predictions against experimental data, the predicted results are used by ASMI to compute distributions of elastic properties in the molded structures. In addition, local stress-strain responses and damage accumulation under tensile loading are predicted by an elastic-plastic damage model of EMTA-NLA, a nonlinear analysis tool implemented in ABAQUS® via user subroutines using an incremental Eshelby-Mori-Tanaka approach. Predicted stress-strain responses up to failure and damage accumulations are compared to the experimental results to validate the model.

  13. Using Model Replication to Improve the Reliability of Agent-Based Models

    NASA Astrophysics Data System (ADS)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. By illustrating the replication, in NetLogo and by a different author, of an ABM representing fraudulent behavior in a public service delivery system that was originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  14. Towards Bridging the Gaps in Holistic Transition Prediction via Numerical Simulations

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Li, Fei; Duan, Lian; Chang, Chau-Lyan; Carpenter, Mark H.; Streett, Craig L.; Malik, Mujeeb R.

    2013-01-01

    The economic and environmental benefits of laminar flow technology via reduced fuel burn of subsonic and supersonic aircraft cannot be realized without minimizing the uncertainty in drag prediction in general and transition prediction in particular. Transition research under NASA's Aeronautical Sciences Project seeks to develop a validated set of variable fidelity prediction tools with known strengths and limitations, so as to enable "sufficiently" accurate transition prediction and practical transition control for future vehicle concepts. This paper provides a summary of selected research activities targeting the current gaps in high-fidelity transition prediction, specifically those related to the receptivity and laminar breakdown phases of crossflow induced transition in a subsonic swept-wing boundary layer. The results of direct numerical simulations are used to obtain an enhanced understanding of the laminar breakdown region as well as to validate reduced order prediction methods.

  15. Face, content, and construct validity of a novel portable ergonomic simulator for basic laparoscopic skills.

    PubMed

    Xiao, Dongjuan; Jakimowicz, Jack J; Albayrak, Armagan; Buzink, Sonja N; Botden, Sanne M B I; Goossens, Richard H M

    2014-01-01

    Laparoscopic skills can be improved effectively through laparoscopic simulation. The purpose of this study was to verify the face and content validity of a new portable Ergonomic Laparoscopic Skills simulator (Ergo-Lap simulator) and assess the construct validity of the Ergo-Lap simulator in 4 basic skills tasks. Four tasks were evaluated: 2 different translocation exercises (a basic bimanual exercise and a challenging single-handed exercise), an exercise involving tissue manipulation under tension, and a needle-handling exercise. Task performance was analyzed according to speed and accuracy. The participants rated the usability and didactic value of each task and the Ergo-Lap simulator along a 5-point Likert scale. Institutional academic medical center with its affiliated general surgery residency. Forty-six participants were allotted into 2 groups: a Novice group (n = 26, <10 clinical laparoscopic procedures) and an Experienced group (n = 20, >50 clinical laparoscopic procedures). The Experienced group completed all tasks in less time than the Novice group did (p < 0.001, Mann-Whitney U test). The Experienced group also completed tasks 1, 2, and 4 with fewer errors than the Novice group did (p < 0.05). Of the Novice participants, 96% considered that the present Ergo-Lap simulator could encourage more frequent practice of laparoscopic skills. In addition, 92% would like to purchase this simulator. All of the experienced participants confirmed that the Ergo-Lap simulator was easy to use and useful for practicing basic laparoscopic skills in an ergonomic manner. Most (95%) of these respondents would recommend this simulator to other surgical trainees. This Ergo-Lap simulator with multiple tasks was rated as a useful training tool that can distinguish between various levels of laparoscopic expertise. The Ergo-Lap simulator is also an inexpensive alternative, which surgical trainees could use to update their skills in the skills laboratory, at home, or in the office. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  16. OC5 Project Phase II: Validation of Global Loads of the DeepCwind Floating Semisubmersible Wind Turbine

    DOE PAGES

    Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.; ...

    2017-10-01

    This paper summarizes the findings from Phase II of the Offshore Code Comparison, Collaboration, Continued, with Correlation project. The project is run under the International Energy Agency Wind Research Task 30, and is focused on validating the tools used for modeling offshore wind systems through the comparison of simulated responses of select system designs to physical test data. Validation activities such as these lead to improvement of offshore wind modeling tools, which will enable the development of more innovative and cost-effective offshore wind designs. For Phase II of the project, numerical models of the DeepCwind floating semisubmersible wind system were validated using measurement data from a 1/50th-scale validation campaign performed at the Maritime Research Institute Netherlands offshore wave basin. Validation of the models was performed by comparing the calculated ultimate and fatigue loads for eight different wave-only and combined wind/wave test cases against the measured data, after calibration was performed using free-decay, wind-only, and wave-only tests. The results show a decent estimation of both the ultimate and fatigue loads for the simulated results, but with a fairly consistent underestimation in the tower and upwind mooring line loads that can be attributed to an underestimation of wave-excitation forces outside the linear wave-excitation region, and the presence of broadband frequency excitation in the experimental measurements from wind. Participant results showed varied agreement with the experimental measurements based on the modeling approach used. Modeling attributes that enabled better agreement included: the use of a dynamic mooring model; wave stretching, or some other hydrodynamic modeling approach that excites frequencies outside the linear wave region; nonlinear wave kinematics models; and unsteady aerodynamics models. Also, it was observed that a Morison-only hydrodynamic modeling approach could create excessive pitch excitation and resulting tower loads in some frequency bands.

  17. OC5 Project Phase II: Validation of Global Loads of the DeepCwind Floating Semisubmersible Wind Turbine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.

    This paper summarizes the findings from Phase II of the Offshore Code Comparison, Collaboration, Continued, with Correlation project. The project is run under the International Energy Agency Wind Research Task 30, and is focused on validating the tools used for modeling offshore wind systems through the comparison of simulated responses of select system designs to physical test data. Validation activities such as these lead to improvement of offshore wind modeling tools, which will enable the development of more innovative and cost-effective offshore wind designs. For Phase II of the project, numerical models of the DeepCwind floating semisubmersible wind system were validated using measurement data from a 1/50th-scale validation campaign performed at the Maritime Research Institute Netherlands offshore wave basin. Validation of the models was performed by comparing the calculated ultimate and fatigue loads for eight different wave-only and combined wind/wave test cases against the measured data, after calibration was performed using free-decay, wind-only, and wave-only tests. The results show a decent estimation of both the ultimate and fatigue loads for the simulated results, but with a fairly consistent underestimation in the tower and upwind mooring line loads that can be attributed to an underestimation of wave-excitation forces outside the linear wave-excitation region, and the presence of broadband frequency excitation in the experimental measurements from wind. Participant results showed varied agreement with the experimental measurements based on the modeling approach used. Modeling attributes that enabled better agreement included: the use of a dynamic mooring model; wave stretching, or some other hydrodynamic modeling approach that excites frequencies outside the linear wave region; nonlinear wave kinematics models; and unsteady aerodynamics models. Also, it was observed that a Morison-only hydrodynamic modeling approach could create excessive pitch excitation and resulting tower loads in some frequency bands.

  18. Urban Flow and Pollutant Dispersion Simulation with Multi-scale coupling of Meteorological Model with Computational Fluid Dynamic Analysis

    NASA Astrophysics Data System (ADS)

    Liu, Yushi; Poh, Hee Joo

    2014-11-01

    Computational Fluid Dynamics analysis has become increasingly important in modern urban planning in order to create highly livable cities. This paper presents a multi-scale modeling methodology which couples the Weather Research and Forecasting (WRF) Model with the open-source CFD simulation tool OpenFOAM. This coupling enables the simulation of wind flow and pollutant dispersion in urban built-up areas with a high-resolution mesh. In this methodology the meso-scale model WRF provides the boundary conditions for the micro-scale CFD model OpenFOAM. The advantage is that realistic weather conditions are taken into account in the CFD simulation and the complexity of the building layout can be handled with ease by the meshing utility of OpenFOAM. The result is validated against the Joint Urban 2003 Tracer Field Tests in Oklahoma City and there is reasonably good agreement between the CFD simulation and field observations. The coupling of WRF and OpenFOAM provides urban planners with a reliable environmental modeling tool for actual urban built-up areas, and it can be further extended with consideration of future weather conditions for scenario studies on climate change impact.

  19. Building energy simulation in real time through an open standard interface

    DOE PAGES

    Pang, Xiufeng; Nouidui, Thierry S.; Wetter, Michael; ...

    2015-10-20

    Building energy models (BEMs) are typically used for design and code compliance for new buildings and in the renovation of existing buildings to predict energy use. The increasing adoption of BEM as standard practice in the building industry presents an opportunity to extend the use of BEMs into construction, commissioning and operation. In 2009, the authors developed a real-time simulation framework to execute an EnergyPlus model in real time to improve building operation. This paper reports an enhancement of that real-time energy simulation framework. The previous version only works with software tools that implement the custom co-simulation interface of the Building Controls Virtual Test Bed (BCVTB), such as EnergyPlus, Dymola and TRNSYS. The new version uses an open standard interface, the Functional Mockup Interface (FMI), to provide a generic interface to any application that supports the FMI protocol. In addition, the new version utilizes the Simple Measurement and Actuation Profile (sMAP) tool as the data acquisition system to acquire, store and present data. Lastly, this paper introduces the updated architecture of the real-time simulation framework using FMI and presents proof-of-concept demonstration results which validate the new framework.
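
    As a small illustration of what the FMI standard buys in practice (this is not the authors' framework), the fragment below loads and runs a co-simulation FMU with the open-source FMPy package; the FMU file name and time settings are placeholders.

        # Running a co-simulation FMU through the FMI standard with the FMPy package.
        # "building.fmu" is a placeholder file name, not an artifact of the framework above.
        from fmpy import read_model_description, simulate_fmu

        fmu_path = "building.fmu"
        description = read_model_description(fmu_path)
        print("FMI version:", description.fmiVersion, "| model:", description.modelName)

        result = simulate_fmu(
            fmu_path,
            start_time=0.0,
            stop_time=3600.0,
            output_interval=60.0,      # one sample per simulated minute
        )
        print(result.dtype.names)      # names of the recorded output variables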

  20. Validation Testing of a Peridynamic Impact Damage Model Using NASA's Micro-Particle Gun

    NASA Technical Reports Server (NTRS)

    Baber, Forrest E.; Zelinski, Brian J.; Guven, Ibrahim; Gray, Perry

    2017-01-01

    Through a collaborative effort between the Virginia Commonwealth University and Raytheon, a peridynamic model for sand impact damage has been developed1-3. Model development has focused on simulating impacts of sand particles on ZnS traveling at velocities consistent with aircraft take-off and landing speeds. The model reproduces common features of impact damage including pit and radial cracks, and, under some conditions, lateral cracks. This study focuses on a preliminary validation exercise in which simulation results from the peridynamic model are compared to a limited experimental data set generated by NASA's recently developed micro-particle gun (MPG). The MPG facility measures the dimensions and incoming and rebound velocities of the impact particles. It also links each particle to a specific impact site and its associated damage. In this validation exercise parameters of the peridynamic model are adjusted to fit the experimentally observed pit diameter, average length of radial cracks and rebound velocities for 4 impacts of 300 µm glass beads on ZnS. Results indicate that a reasonable fit of these impact characteristics can be obtained by suitable adjustment of the peridynamic input parameters, demonstrating that the MPG can be used effectively as a validation tool for impact modeling and that the peridynamic sand impact model described herein possesses not only a qualitative but also a quantitative ability to simulate sand impact events.

  1. Critical evaluation of mechanistic two-phase flow pipeline and well simulation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhulesia, H.; Lopez, D.

    1996-12-31

    Mechanistic steady-state simulation models, rather than empirical correlations, are used for the design of multiphase production systems including wells, pipelines and downstream installations. Among the available models, PEPITE, WELLSIM, OLGA, TACITE and TUFFP are widely used for this purpose and, consequently, a critical evaluation of these models is needed. An extensive validation methodology is proposed which consists of two distinct steps: first, to validate the hydrodynamic point model using test loop data and, then, to validate the overall simulation model using real pipeline and well data. The test loop databank used in this analysis contains about 5952 data sets originating from four different test loops, and a majority of these data were obtained at high pressures (up to 90 bars) with real hydrocarbon fluids. Before performing the model evaluation, physical analysis of the test loop data is required to eliminate non-coherent data. The evaluation of these point models demonstrates that the TACITE and OLGA models can be applied to any configuration of pipes. The TACITE model performs better than the OLGA model because it uses the most appropriate closure laws from the literature, validated on a large number of data. The comparison of predicted and measured pressure drops for various real pipelines and wells demonstrates that the TACITE model is a reliable tool.

  2. Evaluation of cognitive load and emotional states during multidisciplinary critical care simulation sessions.

    PubMed

    Pawar, Swapnil; Jacques, Theresa; Deshpande, Kush; Pusapati, Raju; Meguerdichian, Michael J

    2018-04-01

    Simulation in the critical care setting involves a heterogeneous group of participants with varied backgrounds and experience. Measuring the impact of simulation on emotional state and cognitive load in this setting is not often performed. The feasibility of such measurement in the critical care setting needs further exploration. Medical and nursing staff with varying levels of experience from a tertiary intensive care unit participated in a standardised clinical simulation scenario. The emotional state of each participant was assessed before and after completion of the scenario using a validated eight-item scale containing bipolar oppositional descriptors of emotion. The cognitive load of each participant was assessed after the completion of the scenario using a validated subjective rating tool. A total of 103 medical and nursing staff participated in the study. The participants felt more relaxed (-0.28±1.15 vs 0.14±1, P<0.005; d=0.39), excited (0.25±0.89 vs 0.55±0.92, P<0.005, d=0.35) and alert (0.85±0.87 vs 1.28±0.73, P<0.00001, d=0.54) following simulation. There was no difference in the mean scores for the remaining five items. The mean cognitive load for all participants was 6.67±1.41. There was no significant difference in the cognitive loads of medical staff versus nursing staff (6.61±2.3 vs 6.62±1.7; P>0.05). A well-designed complex high fidelity critical care simulation scenario can be evaluated to identify the relative cognitive load of the participants' experience and their emotional state. The movement of learners emotionally from a more negative state to a positive state suggests that simulation can be an effective tool for improved knowledge transfer and offers more opportunity for dynamic thinking.
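    As a hedged sketch of the pre/post analysis style reported above, the snippet below runs a paired t-test on one bipolar emotion item and computes a Cohen's d from the pooled standard deviation; the scores are invented and the study's exact effect-size formula may differ.

    ```python
    # Paired pre/post comparison with an effect size (toy data).
    import numpy as np
    from scipy import stats

    pre  = np.array([-1, 0, -1, 1, 0, -1, 0, 1, -1, 0], dtype=float)
    post = np.array([ 0, 1,  0, 1, 1,  0, 1, 1,  0, 1], dtype=float)

    t, p = stats.ttest_rel(post, pre)  # paired t-test on one emotion item
    pooled_sd = np.sqrt((post.std(ddof=1) ** 2 + pre.std(ddof=1) ** 2) / 2)
    d = (post.mean() - pre.mean()) / pooled_sd
    print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
    ```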

  3. Box- or Virtual-Reality Trainer: Which Tool Results in Better Transfer of Laparoscopic Basic Skills?-A Prospective Randomized Trial.

    PubMed

    Brinkmann, Christian; Fritz, Mathias; Pankratius, Ulrich; Bahde, Ralf; Neumann, Philipp; Schlueter, Steffen; Senninger, Norbert; Rijcken, Emile

    Simulation training improves laparoscopic performance. Laparoscopic basic skills can be learned in simulators such as box or virtual-reality (VR) trainers. However, there is no clear recommendation for either box or VR trainers as the most appropriate tool for the transfer of acquired laparoscopic basic skills into a surgical procedure. Both training tools were compared, using validated and well-established curricula for the acquisition of basic skills, in a prospective randomized trial within a 5-day structured laparoscopic training course. Participants completed either a box- or VR-trainer curriculum and then applied the learned skills performing an ex situ laparoscopic cholecystectomy on a pig liver. The performance was recorded on video and evaluated offline by 4 blinded observers using the Global Operative Assessment of Laparoscopic Skills (GOALS) score. Learning curves of the various exercises included in the training course were compared and the improvement in each exercise was analyzed. Surgical Skills Lab of the Department of General and Visceral Surgery, University Hospital Muenster. Surgical novices without prior surgical experience (medical students, n = 36). Posttraining evaluation showed significant improvement compared with baseline in both groups, indicating acquisition of laparoscopic basic skills. Learning curves showed almost the same progression with no significant differences. In simulated laparoscopic cholecystectomy, total GOALS score was significantly higher for the box-trained group than the VR-trained group (box: 15.31 ± 3.61 vs. VR: 12.92 ± 3.06; p = 0.039; Hedges' g* = 0.699), indicating higher technical skill levels. Despite both systems having advantages and disadvantages, they can both be used for simulation training of laparoscopic skills. In this setting with 2 structured, validated and almost identical curricula, the box-trained group appears to be superior in transferring basic skills into an experimental but structured surgical procedure. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  4. Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment

    NASA Astrophysics Data System (ADS)

    Zeigler, Bernard P.; Lee, J. S.

    1998-08-01

    In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user-friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further testing in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
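    To make the quantization idea concrete, here is a minimal, hypothetical sketch (not the DEVS/HLA implementation): a finely sampled continuous state is transmitted only when it crosses a quantum level, so most samples generate no message.

    ```python
    # Emit state updates only at quantum-level crossings (illustrative only).
    import math

    def quantized_updates(times, states, quantum):
        """Yield (time, quantized_state) only when the state crosses a level."""
        last_level = round(states[0] / quantum)
        yield times[0], last_level * quantum
        for t, x in zip(times[1:], states[1:]):
            level = round(x / quantum)
            if level != last_level:          # boundary crossing -> send an update
                last_level = level
                yield t, level * quantum

    # A decaying trajectory sampled finely but transmitted sparsely.
    ts = [i * 0.01 for i in range(1000)]
    xs = [math.exp(-t) for t in ts]
    updates = list(quantized_updates(ts, xs, quantum=0.1))
    print(f"{len(ts)} samples -> {len(updates)} transmitted updates")
    ```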

  5. Comparative hybrid and digital simulation studies of the behaviour of a wind generator equipped with a static frequency converter

    NASA Astrophysics Data System (ADS)

    Dube, B.; Lefebvre, S.; Perocheau, A.; Nakra, H. L.

    1988-01-01

    This paper describes the comparative results obtained from digital and hybrid simulation studies on a variable speed wind generator interconnected to the utility grid. The wind generator is a vertical-axis Darrieus type coupled to a synchronous machine by a gear-box; the synchronous machine is connected to the AC utility grid through a static frequency converter. Digital simulation results have been obtained using CSMP software; these results are compared with those obtained from a real-time hybrid simulator that in turn uses a part of the IREQ HVDC simulator. The agreement between hybrid and digital simulation results is generally good. The results demonstrate that the digital simulation reproduces the dynamic behavior of the system in a satisfactory manner and thus constitutes a valid tool for the design of the control systems of the wind generator.

  6. Aeroacoustic Validation of Installed Low Noise Propulsion for NASA's N+2 Supersonic Airliner

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2018-01-01

    An aeroacoustic test was conducted at NASA Glenn Research Center on an integrated propulsion system designed to meet noise regulations of ICAO Chapter 4 with 10EPNdB cumulative margin. The test had two objectives: to demonstrate that the aircraft design did meet the noise goal, and to validate the acoustic design tools used in the design. Variations in the propulsion system design and its installation were tested and the results compared against predictions. Far-field arrays of microphones measured the acoustic spectral directivity, which was transformed to full scale as noise certification levels. Phased array measurements confirmed that the shielding of the installation model adequately simulated the full aircraft and provided data for validating RANS-based noise prediction tools. Particle image velocimetry confirmed that the flow field around the nozzle on the jet rig mimicked that of the full aircraft and produced flow data to validate the RANS solutions used in the noise predictions. The far-field acoustic measurements confirmed the empirical predictions for the noise. Results provided here detail the steps taken to ensure accuracy of the measurements and give insights into the physics of exhaust noise from installed propulsion systems in future supersonic vehicles.

  7. Investigating the effect of a magnetic field on dose distributions at phantom-air interfaces using PRESAGE® 3D dosimeter and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Costa, Filipa; Doran, Simon J.; Hanson, Ian M.; Nill, Simeon; Billas, Ilias; Shipley, David; Duane, Simon; Adamovics, John; Oelfke, Uwe

    2018-03-01

    Dosimetric quality assurance (QA) of the new Elekta Unity (MR-linac) will differ from the QA performed on a conventional linac because of the constant magnetic field, which creates an electron return effect (ERE). In this work we aim to validate PRESAGE® dosimetry in a transverse magnetic field and assess its use to validate the research version of the Monaco TPS of the MR-linac. Cylindrical samples of PRESAGE® 3D dosimeter separated by an air gap were irradiated with a cobalt-60 unit while placed between the poles of an electromagnet at 0.5 T and 1.5 T. This set-up was simulated in the EGSnrc/Cavity Monte Carlo (MC) code and relative dose distributions were compared with measurements using 1D and 2D gamma criteria of 3% and 1.5 mm. The irradiation conditions were adapted for the MR-linac and compared with Monaco TPS simulations. Measured and EGSnrc/Cavity simulated profiles showed good agreement with a gamma passing rate of 99.9% for 0.5 T and 99.8% for 1.5 T. Measurements on the MR-linac also compared well with Monaco TPS simulations, with a gamma passing rate of 98.4% at 1.5 T. Results demonstrated that PRESAGE® can accurately measure dose and detect the ERE, encouraging its use as a QA tool to validate the Monaco TPS of the MR-linac for clinically relevant dose distributions at tissue-air boundaries.
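    For readers unfamiliar with the gamma passing rates quoted above, the sketch below shows a minimal 1D gamma comparison with the same 3%/1.5 mm criteria on synthetic profiles; production gamma analyses typically also interpolate the reference distribution and apply a low-dose threshold, which this deliberately omits.

    ```python
    # Minimal 1D gamma analysis (3% dose difference, 1.5 mm DTA), toy profiles.
    import numpy as np

    def gamma_pass_rate(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=1.5):
        """Fraction of evaluated points with gamma <= 1 (global normalisation)."""
        d_norm = d_ref.max()
        passed = 0
        for xe, de in zip(x_eval, d_eval):
            dist = (x_ref - xe) / dta
            dose = (d_ref - de) / (dd * d_norm)
            passed += np.sqrt(dist ** 2 + dose ** 2).min() <= 1.0
        return passed / len(x_eval)

    x = np.linspace(-20.0, 20.0, 201)                 # position in mm
    simulated = np.exp(-(x / 10.0) ** 2)              # reference profile
    measured = np.exp(-((x - 0.3) / 10.1) ** 2)       # slightly shifted profile
    print(f"gamma passing rate: {100 * gamma_pass_rate(x, simulated, x, measured):.1f}%")
    ```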

  8. Evidence flow graph methods for validation and verification of expert systems

    NASA Technical Reports Server (NTRS)

    Becker, Lee A.; Green, Peter G.; Bhatnagar, Jayant

    1989-01-01

    The results of an investigation into the use of evidence flow graph techniques for performing validation and verification of expert systems are given. A translator to convert horn-clause rule bases into evidence flow graphs, a simulation program, and methods of analysis were developed. These tools were then applied to a simple rule base which contained errors. It was found that the method was capable of identifying a variety of problems, for example that the order of presentation of input data or small changes in critical parameters could affect the output from a set of rules.

  9. Current progress in patient-specific modeling

    PubMed Central

    2010-01-01

    We present a survey of recent advancements in the emerging field of patient-specific modeling (PSM). Researchers in this field are currently simulating a wide variety of tissue and organ dynamics to address challenges in various clinical domains. The majority of this research employs three-dimensional, image-based modeling techniques. Recent PSM publications mostly represent feasibility or preliminary validation studies on modeling technologies, and these systems will require further clinical validation and usability testing before they can become a standard of care. We anticipate that with further testing and research, PSM-derived technologies will eventually become valuable, versatile clinical tools. PMID:19955236

  10. A GPU Simulation Tool for Training and Optimisation in 2D Digital X-Ray Imaging.

    PubMed

    Gallio, Elena; Rampado, Osvaldo; Gianaria, Elena; Bianchi, Silvio Diego; Ropolo, Roberto

    2015-01-01

    Conventional radiology is performed by means of digital detectors, with various types of technology and different performance in terms of efficiency and image quality. Following the arrival of a new digital detector in a radiology department, all the staff involved should adapt the procedure parameters to the properties of the detector, in order to achieve an optimal result in terms of correct diagnostic information and minimum radiation risks for the patient. The aim of this study was to develop and validate a software application capable of simulating a digital X-ray imaging system using graphics processing unit computing. All radiological image components were implemented in this application: an X-ray tube with primary beam, a virtual patient, noise, scatter radiation, a grid and a digital detector. Three different digital detectors (two digital radiography systems and one computed radiography system) were implemented. In order to validate the software, we carried out a quantitative comparison of geometrical and anthropomorphic phantom simulated images with those acquired. In terms of average pixel values, the maximum differences were below 15%, while the noise values were in agreement with a maximum difference of 20%. The relative trends of contrast-to-noise ratio versus beam energy and intensity were well simulated. Total calculation times were below 3 seconds for clinical images with pixel sizes smaller than 0.2 mm. The application proved to be efficient and realistic. Short calculation times and the accuracy of the results obtained make this software a useful tool for training operators and for dose optimisation studies.
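    One of the comparison metrics mentioned above, the contrast-to-noise ratio (CNR), can be illustrated with a short sketch; the image, contrast insert and regions of interest below are synthetic stand-ins, not the study's data.

    ```python
    # Contrast-to-noise ratio between a signal ROI and a background ROI (toy image).
    import numpy as np

    def cnr(image, roi_signal, roi_background):
        sig = image[roi_signal]
        bkg = image[roi_background]
        return abs(sig.mean() - bkg.mean()) / bkg.std(ddof=1)

    rng = np.random.default_rng(0)
    img = rng.normal(100.0, 5.0, size=(256, 256))      # noisy background
    img[100:140, 100:140] += 20.0                      # simulated contrast insert
    print(f"CNR = {cnr(img, np.s_[100:140, 100:140], np.s_[0:40, 0:40]):.1f}")
    ```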

  11. Non-Gimbaled Antenna Pointing

    NASA Technical Reports Server (NTRS)

    Vigil, Jeannine S.

    1997-01-01

    The small satellite community has been interested in accessing fixed ground stations for space-to-ground transmissions, although a problem arises from the limited global coverage. There is growing interest in using the Space Network (SN) or Tracking and Data Relay Satellites (TDRS) as the primary support for communications because of the coverage they provide. This thesis addresses the potential for satellite access of the Space Network with a non-gimbaled antenna configuration and low-power, coded transmission. The non-gimbaled antenna and the TDRS satellites, TDRS-East, TDRS-West, and TDRS-Zone of Exclusion, were configured in an orbital analysis software package called Satellite Tool Kit to emulate the three-dimensional position of the satellites. The access potential, defined as the average number of contacts per day and the average time per contact, was obtained through simulations run over a 30-day period to capture all possible orientations. The orbital altitude was varied from 600 km through 1200 km, with the results being a function of orbital inclination angles varying from 20 deg through 100 deg and pointing half-angles of 10 deg through 40 deg. To check the validity of the simulations, the Jet Propulsion Laboratory granted the use of the TOPEX satellite. The TOPEX satellite was configured to emulate a spin-stabilized satellite with its communications antenna stowed in the zenith-pointing direction, mimicking the non-gimbaled antenna configuration used in the simulations. To make valid comparisons, the TOPEX orbital parameters were entered into Satellite Tool Kit and simulated over five test times provided by the Jet Propulsion Laboratory.

  12. Studying distributed cognition of simulation-based team training with DiCoT.

    PubMed

    Rybing, Jonas; Nilsson, Heléne; Jonson, Carl-Oscar; Bang, Magnus

    2016-03-01

    Health care organizations employ simulation-based team training (SBTT) to improve skill, communication and coordination in a broad range of critical care contexts. Quantitative approaches, such as team performance measurements, are predominantly used to measure the effectiveness of SBTTs. However, a practical evaluation method that examines how this approach supports cognition and teamwork is missing. We have applied Distributed Cognition for Teamwork (DiCoT), a method for analysing the cognition and collaboration aspects of work settings, with the purpose of assessing the methodology's usefulness for evaluating SBTTs. In a case study, we observed and analysed four Emergo Train System® simulation exercises where medical professionals trained emergency response routines. The study suggests that DiCoT is an applicable and learnable tool for determining key distributed cognition attributes of SBTTs that are of importance for the simulation validity of training environments. Moreover, we discuss and exemplify how DiCoT supports design of SBTTs with a focus on transfer and validity characteristics. Practitioner Summary: In this study, we have evaluated a method to assess simulation-based team training environments from a cognitive ergonomics perspective. Using a case study, we analysed Distributed Cognition for Teamwork (DiCoT) by applying it to the Emergo Train System®. We conclude that DiCoT is useful for SBTT evaluation and simulator (re)design.

  13. An Integrated Software Package to Enable Predictive Simulation Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang

    The power grid is increasing in complexity due to the deployment of smart grid technologies. Such technologies vastly increase the size and complexity of power grid systems for simulation and modeling. This increasing complexity necessitates not only the use of high-performance-computing (HPC) techniques, but a smooth, well-integrated interplay between HPC applications. This paper presents a new integrated software package that integrates HPC applications and a web-based visualization tool based on a middleware framework. This framework can support the data communication between different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package, as well as the better situational awareness provided by the web-based visualization tool in a live mode. Test results validate the effectiveness and usability of the integrated software package.

  14. Designing and encoding models for synthetic biology.

    PubMed

    Endler, Lukas; Rodriguez, Nicolas; Juty, Nick; Chelliah, Vijayalakshmi; Laibe, Camille; Li, Chen; Le Novère, Nicolas

    2009-08-06

    A key component of any synthetic biology effort is the use of quantitative models. These models and their corresponding simulations allow optimization of a system design, as well as guiding their subsequent analysis. Once a domain mostly reserved for experts, dynamical modelling of gene regulatory and reaction networks has been an area of growth over the last decade. There has been a concomitant increase in the number of software tools and standards, thereby facilitating model exchange and reuse. We give here an overview of the model creation and analysis processes as well as some software tools in common use. Using markup language to encode the model and associated annotation, we describe the mining of components, their integration in relational models, formularization and parametrization. Evaluation of simulation results and validation of the model close the systems biology 'loop'.

  15. Front panel engineering with CAD simulation tool

    NASA Astrophysics Data System (ADS)

    Delacour, Jacques; Ungar, Serge; Mathieu, Gilles; Hasna, Guenther; Martinez, Pascal; Roche, Jean-Christophe

    1999-04-01

    The progress made recently in display technology covers many fields of application. The specification of radiance, colorimetry and lighting efficiency creates some new challenges for designers. Photometric design is limited by the ability to correctly predict the result of a lighting system, which is needed to save the cost and time of building multiple prototypes or breadboard benches. The second step of the research carried out by the company OPTIS is to propose an optimization method for the lighting system, developed in the SPEOS software. The main features required of the tool include a CAD interface, to enable fast and efficient transfer between mechanical and lighting design software, source modeling, a light transfer model and an optimization tool. The CAD interface is mainly a prototype of transfer, which is not the subject here. Photometric simulation is efficiently achieved by using measured source encoding and a simulation based on the Monte Carlo method. Today, the advantages and the limitations of the Monte Carlo method are well known. Noise reduction requires a long calculation time, which increases with the complexity of the display panel. A successful optimization is difficult to achieve, due to the long calculation time required for each optimization pass, which includes a Monte Carlo simulation. The problem was initially defined as an engineering method of study. Experience shows that good understanding and mastery of the phenomenon of light transfer is limited by the complexity of non-sequential propagation. The engineer must call on the help of a simulation and optimization tool. The main requirement for an efficient optimization is a quick method for simulating light transfer. Much work has been done in this area and some interesting results can be observed. The Monte Carlo method wastes time calculating results and information that are not required for the needs of the simulation, and low-efficiency transfer systems cost a great deal of calculation time. More generally, light transfer simulation can be treated efficiently when the integrated result is composed of elementary sub-results involving quick, analytically calculated intersections. Two axes of research thus appear: fast integration, and fast calculation of geometric intersections. The first brings some general solutions that are also valid for multi-reflection systems; the second requires deeper work on the intersection calculation. An interesting approach is the subdivision of space into voxels, a 3D division of space adapted to the objects and their locations. Experimental software has been developed to validate the method. The gain is particularly high in complex systems. An important reduction in the calculation time has been achieved.
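    To make the voxel-subdivision idea concrete, here is a minimal, hypothetical sketch of a uniform grid that registers which objects overlap each voxel, so a ray only tests the candidates found in the voxels it marches through. The class, method names and the simple stepping traversal are illustrative; the experimental software described above would use a proper 3D grid traversal and tighter object bounds.

    ```python
    # Uniform voxel grid used to prune ray-object intersection tests (sketch).
    import numpy as np
    from collections import defaultdict

    class VoxelGrid:
        def __init__(self, bounds_min, bounds_max, resolution):
            self.mn = np.asarray(bounds_min, float)
            self.mx = np.asarray(bounds_max, float)
            self.res = int(resolution)
            self.cells = defaultdict(set)          # voxel index -> object ids

        def _index(self, point):
            q = (np.asarray(point, float) - self.mn) / (self.mx - self.mn) * self.res
            return tuple(int(v) for v in np.clip(q, 0, self.res - 1))

        def insert_sphere(self, obj_id, center, radius):
            lo, hi = self._index(center - radius), self._index(center + radius)
            for i in range(lo[0], hi[0] + 1):
                for j in range(lo[1], hi[1] + 1):
                    for k in range(lo[2], hi[2] + 1):
                        self.cells[(i, j, k)].add(obj_id)

        def candidates(self, origin, direction, step=0.05, t_max=2.0):
            """March along the ray and collect objects from the visited voxels."""
            found, t = set(), 0.0
            while t < t_max:
                found |= self.cells.get(self._index(origin + t * direction), set())
                t += step
            return found

    grid = VoxelGrid([0, 0, 0], [1, 1, 1], resolution=16)
    grid.insert_sphere("reflector", center=np.array([0.5, 0.5, 0.5]), radius=0.1)
    print(grid.candidates(np.array([0.0, 0.5, 0.5]), np.array([1.0, 0.0, 0.0])))
    ```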

  16. Creation and Global Deployment of a Mobile, Application-Based Cognitive Simulator for Cardiac Surgical Procedures.

    PubMed

    Brewer, Zachary E; Ogden, William David; Fann, James I; Burdon, Thomas A; Sheikh, Ahmad Y

    Several modern learning frameworks (eg, cognitive apprenticeship, anchored instruction, and situated cognition) posit the utility of nontraditional methods for effective experiential learning. Thus, development of novel educational tools emphasizing the cognitive framework of operative sequences may be of benefit to surgical trainees. We propose the development and global deployment of an effective, mobile cognitive cardiac surgical simulator. In methods, 16 preclinical medical students were assessed. Overall, 4 separate surgical modules (sternotomy, cannulation, decannulation, and sternal closure) were created utilizing the Touch Surgery (London, UK) platform. Modules were made available to download free of charge for use on mobile devices. Usage data were collected over a 6-month period. Educational efficacy of the modules was evaluated by randomizing a cohort of medical students to either module usage or traditional, reading-based self-study, followed by a multiple-choice learning assessment tool. In results, downloads of the simulator achieved global penetrance, with highest usage in the USA, Brazil, Italy, UK, and India. Overall, 5368 unique users conducted a total of 1971 hours of simulation. Evaluation of the medical student cohort revealed significantly higher assessment scores in those randomized to module use versus traditional reading (75% ± 9% vs 61% ± 7%, respectively; P < 0.05). In conclusion, this study represents the first effort to create a mobile, interactive cognitive simulator for cardiac surgery. Simulators of this type may be effective for the training and assessment of surgical students. We investigated whether an interactive, mobile-computing-based cognitive task simulator for cardiac surgery could be developed, deployed, and validated. Our findings suggest that such simulators may be a useful learning tool. Copyright © 2016. Published by Elsevier Inc.

  17. Cloud computing and validation of expandable in silico livers

    PubMed Central

    2010-01-01

    Background In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validation experiments that required scaling the simulations to use more simulated lobules than previously, more than could be achieved using the local cluster technology. Rather than dramatically increasing the size of our local cluster we undertook the re-validation experiments using the Amazon EC2 cloud platform. So doing required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed using the cloud platform. Results The local cluster technology was duplicated in the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem using both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstration of results equivalency from two different wet-labs. Conclusions The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters. The availability of cloud technology coupled with the evidence of scientific equivalency has lowered the barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware. PMID:21129207

  18. Simulation-based assessment to identify critical gaps in safe anesthesia resident performance.

    PubMed

    Blum, Richard H; Boulet, John R; Cooper, Jeffrey B; Muret-Wagstaff, Sharon L

    2014-01-01

    Valid methods are needed to identify anesthesia resident performance gaps early in training. However, many assessment tools in medicine have not been properly validated. The authors designed and tested use of a behaviorally anchored scale, as part of a multiscenario simulation-based assessment system, to identify high- and low-performing residents with regard to domains of greatest concern to expert anesthesiology faculty. An expert faculty panel derived five key behavioral domains of interest by using a Delphi process: (1) synthesizes information to formulate a clear anesthetic plan; (2) implements a plan based on changing conditions; (3) demonstrates effective interpersonal and communication skills with patients and staff; (4) identifies ways to improve performance; and (5) recognizes own limits. Seven simulation scenarios spanning pre- to postoperative encounters were used to assess performances of 22 first-year residents and 8 fellows from two institutions. Two of 10 trained faculty raters blinded to trainee program and training level scored each performance independently by using a behaviorally anchored rating scale. Residents, fellows, facilitators, and raters completed surveys. Evidence supporting the reliability and validity of the assessment scores was procured, including a high generalizability coefficient (ρ = 0.81) and expected performance differences between first-year resident and fellow participants. A majority of trainees, facilitators, and raters judged the assessment to be useful, realistic, and representative of critical skills required for safe practice. The study provides initial evidence to support the validity of a simulation-based performance assessment system for identifying critical gaps in safe anesthesia resident performance early in training.

  19. Adaptation and development of software simulation methodologies for cardiovascular engineering: present and future challenges from an end-user perspective

    PubMed Central

    Díaz-Zuccarini, V.; Narracott, A.J.; Burriesci, G.; Zervides, C.; Rafiroiu, D.; Jones, D.; Hose, D.R.; Lawford, P.V.

    2009-01-01

    This paper describes the use of diverse software tools in cardiovascular applications. These tools were primarily developed in the field of engineering and the applications presented push the boundaries of the software to address events related to venous and arterial valve closure, exploration of dynamic boundary conditions or the inclusion of multi-scale boundary conditions from protein to organ levels. The future of cardiovascular research and the challenges that modellers and clinicians face from validation to clinical uptake are discussed from an end-user perspective. PMID:19487202

  20. Adaptation and development of software simulation methodologies for cardiovascular engineering: present and future challenges from an end-user perspective.

    PubMed

    Díaz-Zuccarini, V; Narracott, A J; Burriesci, G; Zervides, C; Rafiroiu, D; Jones, D; Hose, D R; Lawford, P V

    2009-07-13

    This paper describes the use of diverse software tools in cardiovascular applications. These tools were primarily developed in the field of engineering and the applications presented push the boundaries of the software to address events related to venous and arterial valve closure, exploration of dynamic boundary conditions or the inclusion of multi-scale boundary conditions from protein to organ levels. The future of cardiovascular research and the challenges that modellers and clinicians face from validation to clinical uptake are discussed from an end-user perspective.

  1. The Cryosphere Model Comparison Tool (CmCt): Ice Sheet Model Validation and Comparison Tool for Greenland and Antarctica

    NASA Astrophysics Data System (ADS)

    Simon, E.; Nowicki, S.; Neumann, T.; Tyahla, L.; Saba, J. L.; Guerber, J. R.; Bonin, J. A.; DiMarzio, J. P.

    2017-12-01

    The Cryosphere model Comparison tool (CmCt) is a web-based ice sheet model validation tool that is being developed by NASA to facilitate direct comparison between observational data and various ice sheet models. The CmCt allows the user to take advantage of several decades' worth of observations from Greenland and Antarctica. Currently, the CmCt can be used to compare ice sheet models provided by the user with remotely sensed satellite data from ICESat (Ice, Cloud, and land Elevation Satellite) laser altimetry, the GRACE (Gravity Recovery and Climate Experiment) satellite, and radar altimetry (ERS-1, ERS-2, and Envisat). One or more models can be uploaded through the CmCt website and compared with observational data, or compared to each other or to other models. The CmCt calculates statistics on the differences between the model and observations, and other quantitative and qualitative metrics, which can be used to evaluate the different model simulations against the observations. The qualitative metrics consist of a range of visual outputs and the quantitative metrics consist of several whole-ice-sheet scalar values that can be used to assign an overall score to a particular simulation. The comparison results from CmCt are useful in quantifying improvements within a specific model (or within a class of models) as a result of differences in model dynamics (e.g., shallow vs. higher-order dynamics approximations), model physics (e.g., representations of ice sheet rheological or basal processes), or model resolution (mesh resolution and/or changes in the spatial resolution of input datasets). The framework and metrics could also be used as a model-to-model intercomparison tool, simply by substituting outputs from another model for the observational datasets. Future versions of the tool will include comparisons with other datasets that are of interest to the modeling community, such as ice velocity, ice thickness, and surface mass balance.
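    As a rough illustration of the whole-ice-sheet scalar metrics such a comparison produces, the sketch below computes mean bias and RMSE between a modeled elevation grid and synthetic observations with incomplete coverage; it is not CmCt code and the numbers are invented.

    ```python
    # Model-minus-observation difference statistics on a gridded field (synthetic).
    import numpy as np

    rng = np.random.default_rng(1)
    model_elev = rng.normal(2000.0, 500.0, size=(100, 100))          # metres
    obs_elev = model_elev + rng.normal(0.0, 5.0, size=(100, 100))    # noisy "altimetry"
    obs_elev[rng.random((100, 100)) < 0.3] = np.nan                  # sparse coverage

    diff = model_elev - obs_elev
    stats = {
        "n points": int(np.isfinite(diff).sum()),
        "mean bias (m)": float(np.nanmean(diff)),
        "RMSE (m)": float(np.sqrt(np.nanmean(diff ** 2))),
    }
    print(stats)
    ```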

  2. The development and assessment of an evaluation tool for pediatric resident competence in leading simulated pediatric resuscitations.

    PubMed

    Grant, Estée C; Grant, Vincent J; Bhanji, Farhan; Duff, Jonathan P; Cheng, Adam; Lockyer, Jocelyn M

    2012-07-01

    It is critical that competency in pediatric resuscitation is achieved and assessed during residency or postgraduate medical training. The purpose of this study was to create and evaluate a tool to measure all elements of pediatric resuscitation team leadership competence. An initial set of items, derived from a literature review and a brainstorming session, was refined to a 26-item assessment tool through the use of Delphi methodology. The tool was tested using videos of standardized resuscitations. A psychometric assessment of the evidence for instrument validity and reliability was undertaken. The performance of 30 residents on two videotaped scenarios was assessed by 4 pediatricians using the tool, with 12 items assessing 'leadership and communication skills' (LCS) and 14 items assessing 'knowledge and clinical skills' (KCS). The instrument showed evidence of reliability; the Cronbach's alpha and generalizability coefficients for the overall instrument were α=0.818 and Ep(2)=0.76, for LCS were α=0.827 and Ep(2)=0.844, and for KCS were α=0.673 and Ep(2)=0.482. While validity was initially established through literature review and brainstorming by the panel of experts, it was further supported by the high strength of correlation between global scores and scores for overall performance (r=0.733), LCS (r=0.718) and KCS (r=0.662), as well as by the factor analysis, which accounted for 40.2% of the variance. The results of the study demonstrate that the instrument is a valid and reliable tool to evaluate pediatric resuscitation team leader competence. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
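    For reference, the Cronbach's alpha values quoted above can be computed from a performances-by-items score matrix as in the short sketch below; the ratings are toy numbers, not the study's data.

    ```python
    # Cronbach's alpha for internal consistency (toy item scores).
    import numpy as np

    def cronbach_alpha(scores):
        """scores: rows = rated performances, columns = items."""
        scores = np.asarray(scores, float)
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var / total_var)

    ratings = np.array([[3, 4, 3, 4],
                        [2, 2, 3, 2],
                        [4, 4, 4, 5],
                        [1, 2, 1, 2],
                        [3, 3, 4, 3]])
    print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
    ```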

  3. Ares I-X Flight Test Validation of Control Design Tools in the Frequency-Domain

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew; Hannan, Mike; Brandon, Jay; Derry, Stephen

    2011-01-01

    A major motivation of the Ares I-X flight test program was to Design for Data, in order to maximize the usefulness of the data recorded in support of Ares I modeling and validation of design and analysis tools. The Design for Data effort was intended to enable good post-flight characterizations of the flight control system, the vehicle structural dynamics, and also the aerodynamic characteristics of the vehicle. To extract the necessary data from the system during flight, a set of small predetermined Programmed Test Inputs (PTIs) was injected directly into the TVC signal. These PTIs were designed to excite the necessary vehicle dynamics while exhibiting a minimal impact on loads. The method is similar to common approaches in aircraft flight test programs, but with unique launch vehicle challenges due to rapidly changing states, short duration of flight, a tight flight envelope, and an inability to repeat any test. This paper documents the validation effort of the stability analysis tools against the flight data, which was performed by comparing the post-flight calculated frequency response of the vehicle to the frequency response calculated by the stability analysis tools used to design and analyze the preflight models during the control design effort. The comparison between the flight day frequency response and the stability tool analysis of the simulated vehicle shows good agreement and provides a high level of confidence in the stability analysis tools for use in any future program. This is true for both a nominal model as well as for dispersed analysis, which shows that the flight day frequency response is enveloped by the vehicle's preflight uncertainty models.
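    The heart of that comparison is estimating a frequency response from the injected PTI excitation and the measured response. The hedged sketch below shows one standard way to do that with Welch cross-spectral estimates, applied to a synthetic second-order system rather than to Ares I-X data.

    ```python
    # Frequency-response (H1) estimate from an injected test signal (synthetic system).
    import numpy as np
    from scipy import signal

    fs = 200.0                                       # sample rate, Hz
    t = np.arange(0.0, 60.0, 1.0 / fs)
    rng = np.random.default_rng(2)
    u = signal.chirp(t, f0=0.2, f1=20.0, t1=t[-1])   # injected excitation

    # Stand-in "vehicle": a lightly damped 2 Hz mode plus sensor noise.
    wn, zeta = 2 * np.pi * 2.0, 0.05
    sys = signal.TransferFunction([wn ** 2], [1.0, 2 * zeta * wn, wn ** 2])
    _, y, _ = signal.lsim(sys, U=u, T=t)
    y = y + 0.01 * rng.standard_normal(len(t))

    f, Puu = signal.welch(u, fs, nperseg=2048)
    _, Puy = signal.csd(u, y, fs, nperseg=2048)
    H = Puy / Puu                                    # H1 frequency-response estimate
    print(f"peak response near {f[np.argmax(np.abs(H))]:.2f} Hz")
    ```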

  4. Development and Validation of a Family Meeting Assessment Tool (FMAT).

    PubMed

    Hagiwara, Yuya; Healy, Jennifer; Lee, Shuko; Ross, Jeanette; Fischer, Dixie; Sanchez-Reilly, Sandra

    2018-01-01

    A cornerstone procedure in Palliative Medicine is to perform family meetings. Learning how to lead a family meeting is an important skill for physicians and others who care for patients with serious illnesses and their families. There is limited evidence on how to assess best practice behaviors during end-of-life family meetings. Our aim was to develop and validate an observational tool to assess trainees' ability to lead a simulated end-of-life family meeting. Building on evidence from published studies and accrediting agency guidelines, an expert panel at our institution developed the Family Meeting Assessment Tool. All fourth-year medical students (MS4) and eight geriatric and palliative medicine fellows (GPFs) were invited to participate in a Family Meeting Objective Structured Clinical Examination, where each trainee assumed the physician role leading a complex family meeting. Two evaluators observed and rated randomly chosen students' performances using the Family Meeting Assessment Tool during the examination. Inter-rater reliability was measured using percent agreement. Internal consistency was measured using Cronbach α. A total of 141 trainees (MS4 = 133 and GPF = 8) and 26 interdisciplinary evaluators participated in the study. Internal reliability (Cronbach α) of the tool was 0.85. Number of trainees rated by two evaluators was 210 (MS4 = 202 and GPF = 8). Rater agreement was 84%. Composite scores, on average, were significantly higher for fellows than for medical students (P < 0.001). Expert-based content, high inter-rater reliability, good internal consistency, and ability to predict educational level provided initial evidence for construct validity for this novel assessment tool. Copyright © 2017 American Academy of Hospice and Palliative Medicine. All rights reserved.

  5. WE-H-BRA-04: Biological Geometries for the Monte Carlo Simulation Toolkit TOPASNBio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McNamara, A; Held, K; Paganetti, H

    2016-06-15

    Purpose: New advances in radiation therapy are most likely to come from the complex interface of physics, chemistry and biology. Computational simulations offer a powerful tool for quantitatively investigating radiation interactions with biological tissue and can thus help bridge the gap between physics and biology. The aim of TOPAS-nBio is to provide a comprehensive tool to generate advanced radiobiology simulations. Methods: TOPAS wraps and extends the Geant4 Monte Carlo (MC) simulation toolkit. TOPAS-nBio is an extension to TOPAS which utilizes the physics processes in Geant4-DNA to model biological damage from very low energy secondary electrons. Specialized cell, organelle and molecular geometries were designed for the toolkit. Results: TOPAS-nBio gives the user the capability of simulating biological geometries, ranging from the micron scale (e.g. cells and organelles) to complex nano-scale geometries (e.g. DNA and proteins). The user interacts with TOPAS-nBio through easy-to-use input parameter files. For example, in a simple cell simulation the user can specify the cell type and size as well as the type, number and size of included organelles. For more detailed nuclear simulations, the user can specify chromosome territories containing chromatin fiber loops, the latter comprised of nucleosomes on a double helix. The chromatin fibers can be arranged in simple rigid geometries or within fractal globules, mimicking realistic chromosome territories. TOPAS-nBio also provides users with the capability of reading Protein Data Bank 3D structural files to simulate radiation damage to proteins or nucleic acids, e.g. histones or RNA. TOPAS-nBio has been validated by comparing results to other track structure simulation software and published experimental measurements. Conclusion: TOPAS-nBio provides users with a comprehensive MC simulation tool for radiobiological simulations, giving users without advanced programming skills the ability to design and run complex simulations.

  6. A method of groundwater quality assessment based on fuzzy network-CANFIS and geographic information system (GIS)

    NASA Astrophysics Data System (ADS)

    Gholami, V.; Khaleghi, M. R.; Sebghati, M.

    2017-11-01

    Water quality testing is an important but costly, time-consuming and difficult stage of routine monitoring. Therefore, the use of models has become commonplace in simulating water quality. In this study, the coactive neuro-fuzzy inference system (CANFIS) was used to simulate groundwater quality. Further, a geographic information system (GIS) was used as the pre-processor and post-processor tool to demonstrate spatial variation of groundwater quality. All important factors were quantified and a groundwater quality index (GWQI) was developed. The proposed model was trained and validated using a case study of the Mazandaran Plain located in the northern part of Iran. The factors affecting groundwater quality were the input variables for the simulation, whereas the GWQI index was the output. The developed model was validated for simulating groundwater quality. Network validation was performed via comparison between the estimated and actual GWQI values. In GIS, the study area was converted to raster format with a pixel size of 1 km, and the geo-referenced layers of the factors affecting groundwater quality were obtained as input data layers for the Fuzzy Network-CANFIS model. The numeric values of each pixel, with their geographical coordinates, were then entered into the Fuzzy Network-CANFIS model, and thus the simulation of groundwater quality was carried out across the study area. Finally, the GWQI indices simulated by the Fuzzy Network-CANFIS model were entered into GIS, and a groundwater quality map (raster layer) based on the results of the network simulation was produced. The study's results confirm the high efficiency of coupling neuro-fuzzy techniques with GIS. It is also worth noting that the general quality of the groundwater in most of the studied plain is fairly low.
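    To give a feel for what a groundwater quality index computation can look like on a per-cell basis, here is a deliberately simplified, hypothetical sketch using a weighted sum of sub-indices; the parameters, weights and permissible limits are assumptions for illustration and are not the study's actual GWQI formulation.

    ```python
    # Weighted groundwater quality index for one raster cell (assumed formulation).
    weights = {"EC": 0.3, "TDS": 0.3, "Cl": 0.2, "NO3": 0.2}            # assumed weights
    limits = {"EC": 1500.0, "TDS": 1000.0, "Cl": 250.0, "NO3": 50.0}    # assumed limits

    def gwqi(cell_values):
        """Weighted sum of sub-indices (measured / permissible limit * 100)."""
        return sum(w * (cell_values[p] / limits[p]) * 100.0 for p, w in weights.items())

    cell = {"EC": 900.0, "TDS": 620.0, "Cl": 140.0, "NO3": 22.0}        # synthetic cell
    print(f"GWQI = {gwqi(cell):.1f}")
    ```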

  7. Computer and laboratory simulation in the teaching of neonatal nursing: innovation and impact on learning

    PubMed Central

    Fonseca, Luciana Mara Monti; Aredes, Natália Del' Angelo; Fernandes, Ananda Maria; Batalha, Luís Manuel da Cunha; Apóstolo, Jorge Manuel Amado; Martins, José Carlos Amado; Rodrigues, Manuel Alves

    2016-01-01

    ABSTRACT Objectives: to evaluate the cognitive learning of nursing students in neonatal clinical evaluation from a blended course with the use of computer and laboratory simulation; to compare the cognitive learning of students in a control and experimental group testing the laboratory simulation; and to assess the extracurricular blended course offered on the clinical assessment of preterm infants, according to the students. Method: a quasi-experimental study with 14 Portuguese students, containing pretest, midterm test and post-test. The technologies offered in the course were serious game e-Baby, instructional software of semiology and semiotechnique, and laboratory simulation. Data collection tools developed for this study were used for the course evaluation and characterization of the students. Nonparametric statistics were used: Mann-Whitney and Wilcoxon. Results: the use of validated digital technologies and laboratory simulation demonstrated a statistically significant difference (p = 0.001) in the learning of the participants. The course was evaluated as very satisfactory for them. The laboratory simulation alone did not represent a significant difference in the learning. Conclusions: the cognitive learning of participants increased significantly. The use of technology can be partly responsible for the course success, showing it to be an important teaching tool for innovation and motivation of learning in healthcare. PMID:27737376

  8. Prospective Comparison of Live Evaluation and Video Review in the Evaluation of Operator Performance in a Pediatric Emergency Airway Simulation

    PubMed Central

    House, Joseph B.; Dooley-Hash, Suzanne; Kowalenko, Terry; Sikavitsas, Athina; Seeyave, Desiree M.; Younger, John G.; Hamstra, Stanley J.; Nypaver, Michele M.

    2012-01-01

    Introduction Real-time assessment of operator performance during procedural simulation is a common practice that requires undivided attention by 1 or more reviewers, potentially over many repetitions of the same case. Objective To determine whether reviewers display better interrater agreement of procedural competency when observing recorded, rather than live, performance; and to develop an assessment tool for pediatric rapid sequence intubation (pRSI). Methods A framework of a previously established Objective Structured Assessment of Technical Skills (OSATS) tool was modified for pRSI. Emergency medicine residents (postgraduate year 1–4) were prospectively enrolled in a pRSI simulation scenario and evaluated by 2 live raters using the modified tool. Sessions were videotaped and reviewed by the same raters at least 4 months later. Raters were blinded to their initial rating. Interrater agreement was determined by using the Krippendorff generalized concordance method. Results Overall interrater agreement for live review was 0.75 (95% confidence interval [CI], 0.72–0.78) and for video was 0.79 (95% CI, 0.73–0.82). Live review was significantly superior to video review in only 1 of the OSATS domains (Preparation) and was equivalent in the other domains. Intrarater agreement between the live and video evaluation was very good, greater than 0.75 for all raters, with a mean of 0.81 (95% CI, 0.76–0.85). Conclusion The modified OSATS assessment tool demonstrated some evidence of validity in discriminating among levels of resident experience and high interreviewer reliability. With this tool, intrareviewer reliability was high between live and 4-months' delayed video review of the simulated procedure, which supports feasibility of delayed video review in resident assessment. PMID:23997874

  9. A Preliminary Assessment of the SURF Reactive Burn Model Implementation in FLAG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Carl Edward; McCombe, Ryan Patrick; Carver, Kyle

    Properly validated and calibrated reactive burn models (RBM) can be useful engineering tools for assessing high explosive performance and safety. Experiments with high explosives are expensive. Inexpensive RBM calculations are increasingly relied on for predictive analysis of performance and safety. This report discusses the validation of Menikoff and Shaw's SURF reactive burn model, which has recently been implemented in the FLAG code. The LANL Gapstick experiment is discussed, as is its utility in reactive burn model validation. Data obtained from pRad for the LT-63 series is also presented along with FLAG simulations using SURF for both PBX 9501 and PBX 9502. Calibration parameters for both explosives are presented.

  10. Development of a Three-Dimensional, Unstructured Material Response Design Tool

    NASA Technical Reports Server (NTRS)

    Schulz, Joseph C.; Stern, Eric C.; Muppidi, Suman; Palmer, Grant E.; Schroeder, Olivia

    2017-01-01

    A preliminary verification and validation of a new material response model is presented. This model, Icarus, is intended to serve as a design tool for the thermal protection systems of re-entry vehicles. Currently, the capability of the model is limited to simulating the pyrolysis of a material as a result of the radiative and convective surface heating imposed on the material by the surrounding high-enthalpy gas. Since the major focus behind the development of Icarus has been model extensibility, the hope is that additional physics can be quickly added. This extensibility is critical since thermal protection systems are becoming increasingly complex, e.g. woven carbon polymers. Additionally, as a three-dimensional, unstructured, finite-volume model, Icarus is capable of modeling complex geometries. In this paper, the mathematical and numerical formulation is presented, followed by a discussion of the software architecture and some preliminary verification and validation studies.

  11. Evaluation of a Simpler Tool to Assess Nontechnical Skills During Simulated Critical Events.

    PubMed

    Watkins, Scott C; Roberts, David A; Boulet, John R; McEvoy, Matthew D; Weinger, Matthew B

    2017-04-01

    Management of critical events requires teams to employ nontechnical skills (NTS), such as teamwork, communication, decision making, and vigilance. We sought to estimate the reliability and provide evidence for the validity of the ratings gathered using a new tool for assessing the NTS of anesthesia providers, the behaviorally anchored rating scale (BARS), and compare its scores with those of an established NTS tool, the Anaesthetists' Nontechnical Skills (ANTS) scale. Six previously trained raters (4 novices and 2 experts) reviewed and scored 18 recorded simulated pediatric crisis management scenarios using a modified ANTS and a BARS tool. Pearson correlation coefficients were calculated separately for the novice and expert raters, by scenario, and overall. The intrarater reliability of the ANTS total score was 0.73 (expert, 0.57; novice, 0.84); for the BARS tool, it was 0.80 (expert, 0.79; novice, 0.81). The average interrater reliability of BARS scores (0.58) was better than ANTS scores (0.37), and the interrater reliabilities of scores from novices (0.69 BARS and 0.52 ANTS) were better than those obtained from experts (0.47 BARS and 0.21 ANTS) for both scoring instruments. The Pearson correlation between the ANTS and BARS total scores was 0.74. Overall, reliability estimates were better for the BARS scores than the ANTS scores. For both measures, the intrarater and interrater reliability was better for novices compared with domain experts, suggesting that properly trained novices can reliably assess the NTS of anesthesia providers managing a simulated critical event. There was substantial correlation between the 2 scoring instruments, suggesting that the tools measured similar constructs. The BARS tool can be an alternative to the ANTS scale for the formative assessment of NTS of anesthesia providers.

  12. Development of synthetic nuclear melt glass for forensic analysis.

    PubMed

    Molgaard, Joshua J; Auxier, John D; Giminaro, Andrew V; Oldham, C J; Cook, Matthew T; Young, Stephen A; Hall, Howard L

    A method for producing synthetic debris similar to the melt glass produced by nuclear surface testing is demonstrated. Melt glass from the first nuclear weapon test (commonly referred to as trinitite) is used as the benchmark for this study. These surrogates can be used to simulate a variety of scenarios and will serve as a tool for developing and validating forensic analysis methods.

  13. Mitigating Task Saturation in Critical Care Air Transport Team Red Flag Checklist

    DTIC Science & Technology

    2015-04-14

    Cincinnati. Team and individual performances were scored using a validated assessment tool for NOTECHS. Salivary cortisol levels were measured at baseline... deployment experience (p<0.04) continued to be significant. Salivary cortisol levels increased by 0.124 μg/dL over baseline as the result of the... [Figure 1: Salivary cortisol levels for all 48 participants in the simulations.]

  14. Analytical Modeling and Performance Prediction of Remanufactured Gearbox Components

    NASA Astrophysics Data System (ADS)

    Pulikollu, Raja V.; Bolander, Nathan; Vijayakar, Sandeep; Spies, Matthew D.

    Gearbox components operate in extreme environments, often leading to premature removal or overhaul. Though worn or damaged, these components still have the ability to function, provided the appropriate remanufacturing processes are deployed. Doing so saves a significant amount of resources (time, materials, energy, manpower) otherwise required to produce a replacement part. Unfortunately, current design and analysis approaches require extensive testing and evaluation to validate the effectiveness and safety of a component that has been used in the field and then processed outside of original OEM specification. Testing every possible combination of component, level of potential damage and repair processing option would be an expensive and time-consuming feat, thus prohibiting a broad deployment of remanufacturing processes across industry. However, such evaluation and validation can occur through Integrated Computational Materials Engineering (ICME) modeling and simulation. Sentient developed a microstructure-based component life prediction (CLP) tool to quantify and assist the remanufacturing of gearbox components. This was achieved by modeling the design-manufacturing-microstructure-property relationship. The CLP tool assists in remanufacturing of high-value, high-demand rotorcraft, automotive and wind turbine gears and bearings. This paper summarizes the CLP model development and validation efforts by comparing the simulation results with rotorcraft spiral bevel gear physical test data. CLP analyzes gear components and systems for safety, longevity, reliability and cost by predicting (1) new gearbox component performance and optimal time to remanufacture, (2) qualification of used gearbox components for the remanufacturing process, and (3) remanufactured component performance.

  15. Two Validated Ways of Improving the Ability of Decision-Making in Emergencies; Results from a Literature Review

    PubMed Central

    Khorram-Manesh, Amir; Berlin, Johan; Carlström, Eric

    2016-01-01

    The aim of the current review was to study the existing knowledge about decision-making and to identify and describe validated training tools. A comprehensive literature review was conducted by using the following keywords: decision-making, emergencies, disasters, crisis management, training, exercises, simulation, validated, real-time, command and control, communication, collaboration, and multi-disciplinary, in combination or as isolated words. Two validated training systems developed in Sweden, 3 level collaboration (3LC) and MacSim, were identified and studied in light of the literature review in order to identify how decision-making can be trained. The training models fulfilled six of the eight identified characteristics of training for decision-making. Based on the results, these training models contain methods suitable for training decision-making. PMID:27878123

  16. Aerospace Toolbox--a flight vehicle design, analysis, simulation, and software development environment II: an in-depth overview

    NASA Astrophysics Data System (ADS)

    Christian, Paul M.

    2002-07-01

    This paper presents a demonstrated approach to significantly reduce the cost and schedule of non-real-time modeling and simulation, real-time HWIL simulation, and embedded code development. The tool and the methodology presented capitalize on a paradigm that has become a standard operating procedure in the automotive industry. The tool described is known as the Aerospace Toolbox, and it is based on the MathWorks Matlab/Simulink framework, which is a COTS application. Extrapolation of automotive industry data and initial applications in the aerospace industry show that the use of the Aerospace Toolbox can make significant contributions in the quest by NASA and other government agencies to meet aggressive cost reduction goals in development programs. Part I of this paper provided a detailed description of the GUI-based Aerospace Toolbox and how it is used in every step of a development program; from quick prototyping of concept developments that leverage built-in point-of-departure simulations through to detailed design, analysis, and testing. Some of the attributes addressed included its versatility in modeling 3 to 6 degrees of freedom, its library of flight-test-validated models (including physics, environments, hardware, and error sources), and its built-in Monte Carlo capability. Other topics that were covered in part I included flight vehicle models and algorithms, and the covariance analysis package, Navigation System Covariance Analysis Tools (NavSCAT). Part II of this series will cover a more in-depth look at the analysis and simulation capability and provide an update on the toolbox enhancements. It will also address how the Toolbox can be used as a design hub for Internet-based collaborative engineering tools such as NASA's Intelligent Synthesis Environment (ISE) and Lockheed Martin's Interactive Missile Design Environment (IMD).

  17. New Tools Being Developed for Engine-Airframe Blade-Out Structural Simulations

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles

    2003-01-01

    One of the primary concerns of aircraft structure designers is the accurate simulation of the blade-out event. This is required for the aircraft to pass Federal Aviation Administration (FAA) certification and to ensure that the aircraft is safe for operation. Typically, the most severe blade-out occurs when a first-stage fan blade in a high-bypass gas turbine engine is released. Structural loading results from both the impact of the blade onto the containment ring and the subsequent instantaneous unbalance of the rotating components. Reliable simulations of blade-out are required to ensure structural integrity during flight as well as to guarantee successful blade-out certification testing. The loads generated by these analyses are critical to the design teams for several components of the airplane structures including the engine, nacelle, strut, and wing, as well as the aircraft fuselage. Currently, a collection of simulation tools is used for aircraft structural design. Detailed high-fidelity simulation tools are used to capture the structural loads resulting from blade loss, and then these loads are used as input into an overall system model that includes complete structural models of both the engines and the airframe. The detailed simulation (shown in the figure) includes the time-dependent trajectory of the lost blade and its interactions with the containment structure, and the system simulation includes the lost blade loadings and the interactions between the rotating turbomachinery and the remaining aircraft structural components. General-purpose finite element structural analysis codes are typically used, and special provisions are made to include transient effects from the blade loss and rotational effects resulting from the engine's turbomachinery. To develop and validate these new tools with test data, the NASA Glenn Research Center has teamed with GE Aircraft Engines, Pratt & Whitney, Boeing Commercial Aircraft, Rolls-Royce, and MSC.Software.
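
    As a first-order illustration of the two loading sources named above (containment impact and instantaneous unbalance), the sketch below estimates the rotating unbalance load m·r·ω² and the released blade's kinetic energy. It is a back-of-the-envelope calculation, not the high-fidelity finite element simulation the abstract describes, and all numerical values are assumed.

```python
# First-order blade-out load estimate; illustrative values only.

import math

def blade_out_loads(blade_mass_kg, cg_radius_m, speed_rpm):
    """Return (rotating unbalance force in N, released-blade kinetic energy in J)."""
    omega = speed_rpm * 2.0 * math.pi / 60.0                     # shaft speed, rad/s
    unbalance_force = blade_mass_kg * cg_radius_m * omega ** 2   # F = m * r * omega^2
    cg_speed = cg_radius_m * omega                               # blade CG speed at release, m/s
    impact_energy = 0.5 * blade_mass_kg * cg_speed ** 2          # energy delivered to containment
    return unbalance_force, impact_energy

# Assumed fan blade mass, CG radius, and fan speed for a large high-bypass engine.
force, energy = blade_out_loads(blade_mass_kg=15.0, cg_radius_m=0.7, speed_rpm=3500)
print(f"Rotating unbalance load: {force / 1e3:.0f} kN")
print(f"Blade kinetic energy at release: {energy / 1e3:.0f} kJ")
```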

  18. Towards a supported common NEAMS software stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormac Garvey

    2012-04-01

    The NEAMS IPSCs are developing multidimensional, multiphysics, multiscale simulation codes based on first principles that will be capable of predicting all aspects of current and future nuclear reactor systems. This new breed of simulation codes will include rigorous verification, validation, and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of nuclear reactors and will also speed the acquisition of licenses from the NRC for new reactor designs. Due to the high resolution of the models, the complexity of the physics, and the additional computation needed to quantify the accuracy and quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out production simulation runs.

  19. On the simulation and mitigation of anisoplanatic optical turbulence for long range imaging

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; LeMaster, Daniel A.

    2017-05-01

    We describe a numerical wave propagation method for simulating long-range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance, in addition to comparing the long- and short-exposure PSFs and the isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm provides geometric correction for each of the individual input frames. The registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. In this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is computationally simple and yet performs excellently in comparison to state-of-the-art benchmark methods.
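
    A minimal sketch of the spatially varying blur step described above: the ideal image is convolved with each PSF on a coarse grid and the results are blended with bilinear interpolation weights. The Gaussian PSFs here are stand-ins for the wave-optics PSFs the actual simulation computes; the grid size and PSF widths are arbitrary assumptions.

```python
# Spatially varying blur via a weighted sum of per-grid-node convolutions.
# Gaussian PSFs are placeholders for propagated wave-optics PSFs.

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def gaussian_psf(size, sigma):
    """Normalized Gaussian PSF on a size x size support."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def spatially_varying_blur(image, grid_shape=(4, 4), psf_size=15):
    h, w = image.shape
    gy, gx = grid_shape
    # One placeholder PSF per grid node, with a random width per node.
    psfs = [[gaussian_psf(psf_size, rng.uniform(1.0, 3.0)) for _ in range(gx)]
            for _ in range(gy)]
    blurred = np.zeros_like(image, dtype=float)
    ys = np.linspace(0, 1, h)[:, None] * (gy - 1)   # fractional grid row coordinate per pixel
    xs = np.linspace(0, 1, w)[None, :] * (gx - 1)   # fractional grid column coordinate per pixel
    for j in range(gy):
        for i in range(gx):
            conv = fftconvolve(image, psfs[j][i], mode='same')
            # Bilinear (tent) weight of grid node (j, i) at every pixel; weights sum to 1.
            wy = np.clip(1.0 - np.abs(ys - j), 0.0, 1.0)
            wx = np.clip(1.0 - np.abs(xs - i), 0.0, 1.0)
            blurred += wy * wx * conv
    return blurred

ideal = rng.random((256, 256))          # stand-in for the ideal (pristine) image
degraded = spatially_varying_blur(ideal)
```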

  20. Investigating a self-scoring interview simulation for learning and assessment in the medical consultation.

    PubMed

    Bruen, Catherine; Kreiter, Clarence; Wade, Vincent; Pawlikowska, Teresa

    2017-01-01

    Experience with simulated patients supports undergraduate learning of medical consultation skills, and adaptive simulations are now being introduced into this environment. The authors investigate whether adaptive simulation can underpin valid and reliable assessment by conducting a generalizability analysis on IT data analytics from the interactions of medical students (in psychiatry) with adaptive simulations, exploring the feasibility of adaptive simulations for supporting automated learning and assessment. The generalizability (G) study focused on two clinically relevant variables: clinical decision points and communication skills. While the G study of the communication skills score yielded low levels of true-score variance, the decision-point scores, which reflect clinical decision-making and confirm user knowledge of the Calgary-Cambridge model of consultation, produced reliability levels similar to what might be expected with rater-based scoring. The findings indicate that adaptive simulations have potential as a teaching and assessment tool for medical consultations.
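
    For readers unfamiliar with generalizability analysis, the sketch below computes a relative G coefficient for a crossed persons-by-items design (students by scored decision points) from a score matrix. It is a generic G-theory calculation under standard assumptions, not the authors' exact analysis, and the data are fabricated purely for illustration.

```python
# Relative G coefficient for a crossed persons x items (students x decision points) design.
# Generic G-theory sketch with fabricated data; not the study's actual analysis.

import numpy as np

def g_coefficient(scores):
    """scores: (n_persons, n_items) matrix of item-level scores."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)

    ss_p = n_i * np.sum((person_means - grand) ** 2)
    ss_res = np.sum((scores - person_means[:, None] - item_means[None, :] + grand) ** 2)

    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))

    var_p = max((ms_p - ms_res) / n_i, 0.0)    # true-score (person) variance component
    var_res = ms_res                           # person-by-item interaction + error
    return var_p / (var_p + var_res / n_i)     # relative G coefficient

rng = np.random.default_rng(1)
ability = rng.normal(0.6, 0.15, size=(30, 1))            # hypothetical student ability
scores = (rng.random((30, 12)) < ability).astype(float)  # 30 students, 12 binary decision points
print(f"G coefficient: {g_coefficient(scores):.2f}")
```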
