Johnson, Sheena Joanne; Guediri, Sara M; Kilkenny, Caroline; Clough, Peter J
2011-12-01
This study developed and validated a virtual reality (VR) simulator for use by interventional radiologists. Research on skill acquisition identifies practice as essential for becoming a task expert. Studies on simulation show that skills learned in VR can be successfully transferred to a real-world task. Recently, with improvements in technology, VR simulators have been developed that allow complex medical procedures to be practiced without risking the patient. Three studies are reported. In Study 1, 35 consultant interventional radiologists took part in a cognitive task analysis to empirically establish the key competencies of the Seldinger procedure. In Study 2, 62 participants performed one simulated procedure, and their performance was compared by expertise. In Study 3, the transferability of simulator training to a real-world procedure was assessed with 14 trainees. Study 1 produced 23 key competencies that were implemented as performance measures in the simulator. Study 2 showed the simulator had both face and construct validity, although some issues were identified. Study 3 showed that the group that had undergone simulator training received significantly higher mean performance ratings on a subsequent patient procedure. The findings of this study support the centrality of validation in the successful design of simulators and show the utility of simulators as a training device. The studies illustrate the key elements of a validation program for a simulator. In addition to task analysis and face and construct validity, the authors highlight the importance of transfer of training in validation studies.
DOT National Transportation Integrated Search
2006-01-01
A previous study developed a procedure for microscopic simulation model calibration and validation and evaluated the procedure via two relatively simple case studies using three microscopic simulation models. Results showed that default parameters we...
Akhtar, Kashif; Sugand, Kapil; Sperrin, Matthew; Cobb, Justin; Standfield, Nigel; Gupte, Chinmay
2015-01-01
Virtual-reality (VR) simulation in orthopedic training is still in its infancy, and much of the work has been focused on arthroscopy. We evaluated the construct validity of a new VR trauma simulator for performing dynamic hip screw (DHS) fixation of a trochanteric femoral fracture. 30 volunteers were divided into 3 groups according to the number of postgraduate (PG) years and the amount of clinical experience: novice (1-4 PG years; less than 10 DHS procedures); intermediate (5-12 PG years; 10-100 procedures); expert (> 12 PG years; > 100 procedures). Each participant performed a DHS procedure and objective performance metrics were recorded. These data were analyzed with each performance metric taken as the dependent variable in 3 regression models. There were statistically significant differences in performance between groups for (1) number of attempts at guide-wire insertion, (2) total fluoroscopy time, (3) tip-apex distance, (4) probability of screw cutout, and (5) overall simulator score. The intermediate group performed the procedure most quickly, with the lowest fluoroscopy time, the lowest tip-apex distance, the lowest probability of cutout, and the highest simulator score, which correlated with their frequency of exposure to running the trauma lists for hip fracture surgery. This study demonstrates the construct validity of a haptic VR trauma simulator with surgeons undertaking the procedure most frequently performing best on the simulator. VR simulation may be a means of addressing restrictions on working hours and allows trainees to practice technical tasks without putting patients at risk. The VR DHS simulator evaluated in this study may provide valid assessment of technical skill.
Hovgaard, Lisette Hvid; Andersen, Steven Arild Wuyts; Konge, Lars; Dalsgaard, Torur; Larsen, Christian Rifbjerg
2018-03-30
The use of robotic surgery for minimally invasive procedures has increased considerably over the last decade. Robotic surgery has potential advantages compared to laparoscopic surgery but also requires new skills. Using virtual reality (VR) simulation to facilitate the acquisition of these new skills could potentially benefit training of robotic surgical skills and also be a crucial step in developing a robotic surgical training curriculum. The study's objective was to establish validity evidence for a simulation-based test of procedural competency for the vaginal cuff closure procedure that can be used in a future simulation-based, mastery learning training curriculum. Eleven novice gynaecological surgeons without prior robotic experience and 11 experienced gynaecological robotic surgeons (> 30 robotic procedures) were recruited. After familiarization with the VR simulator, participants completed the module 'Guided Vaginal Cuff Closure' six times. Validity evidence was investigated for 18 preselected simulator metrics. Internal consistency was assessed using Cronbach's alpha, and a composite score was calculated based on the metrics with significant discriminative ability between the two groups. Finally, a pass/fail standard was established using the contrasting groups method. The experienced surgeons significantly outperformed the novice surgeons on 6 of the 18 metrics. The internal consistency was 0.58 (Cronbach's alpha). The experienced surgeons' mean composite score across all six repetitions was significantly better than the novice surgeons' (76.1 vs. 63.0, respectively, p < 0.001). A pass/fail standard of 75/100 was established. Four novice surgeons passed this standard (false positives) and three experienced surgeons failed (false negatives). Our study has gathered validity evidence for a simulation-based test of procedural robotic surgical competency in the vaginal cuff closure procedure and established a credible pass/fail standard for future proficiency-based training.
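As an illustration of two of the analyses named above, the sketch below computes Cronbach's alpha for a participants-by-metrics score matrix and derives a contrasting-groups cut score as the crossing point of two fitted normal curves. This is a minimal, hedged sketch rather than the study's code; the function names, the 0-100 composite scale, and the example scores (chosen so the group means land near 63.0 and 76.1) are illustrative.

```python
# Illustrative sketch only: internal consistency and a contrasting-groups cutoff.
import numpy as np
from scipy.stats import norm

def cronbach_alpha(X):
    """X: (n_participants, k_metrics). Classic internal-consistency estimate."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()      # sum of per-metric variances
    total_var = X.sum(axis=1).var(ddof=1)       # variance of the summed score
    return k / (k - 1) * (1 - item_var / total_var)

def contrasting_groups_cutoff(novice_scores, expert_scores):
    """Cut score at the intersection of two fitted normal curves, between the means."""
    mu_n, sd_n = np.mean(novice_scores), np.std(novice_scores, ddof=1)
    mu_e, sd_e = np.mean(expert_scores), np.std(expert_scores, ddof=1)
    grid = np.linspace(min(mu_n, mu_e), max(mu_n, mu_e), 10_001)
    diff = norm.pdf(grid, mu_n, sd_n) - norm.pdf(grid, mu_e, sd_e)
    return grid[np.argmin(np.abs(diff))]

# Hypothetical data: 4 participants x 3 metrics, and composite scores on a 0-100 scale.
metrics = np.array([[62, 70, 55], [75, 80, 68], [58, 66, 52], [82, 85, 77]])
print(round(cronbach_alpha(metrics), 2))

novices = np.array([55, 60, 58, 71, 63, 66, 59, 68, 62, 65, 66])   # mean 63.0
experts = np.array([70, 78, 74, 81, 72, 79, 76, 73, 80, 77, 77])   # mean 76.1
print(round(contrasting_groups_cutoff(novices, experts), 1))        # ~70 for these data
```

In practice the cut score is read off the observed composite-score distributions, and the resulting false-positive and false-negative counts are reported, as the authors do.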
Assessing Procedural Competence: Validity Considerations.
Pugh, Debra M; Wood, Timothy J; Boulet, John R
2015-10-01
Simulation-based medical education (SBME) offers opportunities for trainees to learn how to perform procedures and to be assessed in a safe environment. However, SBME research studies often lack robust evidence to support the validity of the interpretation of the results obtained from tools used to assess trainees' skills. The purpose of this paper is to describe how a validity framework can be applied when reporting and interpreting the results of a simulation-based assessment of skills related to performing procedures. The authors discuss various sources of validity evidence as they relate to SBME. A case study is presented.
Engineering applications of strong ground motion simulation
NASA Astrophysics Data System (ADS)
Somerville, Paul
1993-02-01
The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of slip on the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. We then show examples of the application of the simulation procedure to the estimation of design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.
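For illustration, one common way to quantify the fit between simulated and recorded motions, and hence estimate modeling and random uncertainty, is to summarize the bias and scatter of log residuals of response spectral ordinates across stations. The sketch below is not necessarily the exact measure used in the paper; the array shapes and names are assumptions.

```python
# Hedged sketch: period-by-period model bias and uncertainty from log residuals.
import numpy as np

def log_residual_stats(recorded, simulated):
    """recorded, simulated: (n_stations, n_periods) response spectral accelerations."""
    r = np.log(np.asarray(recorded, float) / np.asarray(simulated, float))
    bias = r.mean(axis=0)            # model bias versus period (0 = unbiased)
    sigma = r.std(axis=0, ddof=1)    # modeling + random uncertainty versus period
    return bias, sigma
```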
Farhan, Bilal; Soltani, Tandis; Do, Rebecca; Perez, Claudia; Choi, Hanul; Ghoniem, Gamal
2018-05-02
Endoscopic injection of urethral bulking agents is an office procedure used to treat stress urinary incontinence secondary to internal sphincteric deficiency. Validation studies are an important part of simulator evaluation and are considered an important step in establishing the effectiveness of simulation-based training. The endoscopic needle injection (ENI) simulator has not been formally validated, although it has been used widely at the University of California, Irvine. We aimed to assess the face, content, and construct validity of the UC Irvine ENI simulator. Dissected female porcine bladders were mounted in a modified Hysteroscopy Diagnostic Trainer. Using routine endoscopic equipment for this procedure with video monitoring, 6 urologists (expert group) and 6 urology trainees (novice group) completed urethral bulking agent injections on a total of 12 bladders using the ENI simulator. Face and content validity were assessed using a structured quantitative survey rating realism. Construct validity was assessed by comparing performance, procedure time, and the occlusive (anatomical and functional) evaluations between experts and novices. Trainees also completed a postprocedure feedback survey. Effective injections were evaluated by measuring the retrograde urethral opening pressure, visual cystoscopic coaptation, and postprocedure gross anatomic examination. All 12 participants felt the simulator was a good training tool and should be used as an essential part of urology training (face validity). The ENI simulator showed good face and content validity, with average scores of 3.9/5 for the experts and 3.8/5 for the novices. Content validity evaluation showed that most aspects of the simulator were adequately realistic (mean Likert scores 3.8-3.9/5). However, the bladder does not bleed and is sometimes thin. Experts significantly outperformed novices (p < 0.001) across all measures of performance, thereby establishing construct validity. The ENI simulator shows face, content, and construct validity, although a few aspects of the simulator were not very realistic (e.g., bleeding). This study provides a basis for future formal validation of this simulator and for its continuing use in endourology training. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Face and construct validity of a computer-based virtual reality simulator for ERCP.
Bittner, James G; Mellinger, John D; Imam, Toufic; Schade, Robert R; Macfadyen, Bruce V
2010-02-01
Currently, little evidence supports computer-based simulation for ERCP training. To determine face and construct validity of a computer-based simulator for ERCP and assess its perceived utility as a training tool. Novice and expert endoscopists completed 2 simulated ERCP cases by using the GI Mentor II. Virtual Education and Surgical Simulation Laboratory, Medical College of Georgia. Outcomes included times to complete the procedure, reach the papilla, and use fluoroscopy; attempts to cannulate the papilla, pancreatic duct, and common bile duct; and number of contrast injections and complications. Subjects assessed simulator graphics, procedural accuracy, difficulty, haptics, overall realism, and training potential. Only when performance data from cases A and B were combined did the GI Mentor II differentiate novices and experts based on times to complete the procedure, reach the papilla, and use fluoroscopy. Across skill levels, overall opinions were similar regarding graphics (moderately realistic), accuracy (similar to clinical ERCP), difficulty (similar to clinical ERCP), overall realism (moderately realistic), and haptics. Most participants (92%) claimed that the simulator has definite training potential or should be required for training. Small sample size, single institution. The GI Mentor II demonstrated construct validity for ERCP based on select metrics. Most subjects thought that the simulated graphics, procedural accuracy, and overall realism exhibit face validity. Subjects deemed it a useful training tool. Study repetition involving more participants and cases may help confirm results and establish the simulator's ability to differentiate skill levels based on ERCP-specific metrics.
Ghazi, Ahmed; Campbell, Timothy; Melnyk, Rachel; Feng, Changyong; Andrusco, Alex; Stone, Jonathan; Erturk, Erdal
2017-12-01
The restriction of resident hours, with an increasing focus on patient safety and a reduced caseload, has impacted surgical training. A complex and complication-prone procedure such as percutaneous nephrolithotomy (PCNL), with a steep learning curve, may create an unsafe environment for hands-on resident training. In this study, we validate a high-fidelity, inanimate PCNL model within a full-immersion simulation environment. Anatomically correct models of the human pelvicaliceal system, kidney, and relevant adjacent structures were created using polyvinyl alcohol hydrogels and three-dimensionally printed injection molds. All steps of a PCNL were simulated, including percutaneous renal access, nephroscopy, and lithotripsy. Five experts (>100 caseload) and 10 novices (<20 caseload) from both urology (full procedure) and interventional radiology (access only) departments completed the simulation. Face and content validity were calculated using model ratings for similarity to the real procedure and usefulness as a training tool. Differences in performance among groups with various levels of experience, using clinically relevant procedural metrics, were used to calculate construct validity. The model was determined to have excellent face and content validity, with average scores of 4.5/5.0 and 4.6/5.0, respectively. There were significant differences between novice and expert operative metrics, including mean fluoroscopy time, the number of percutaneous access attempts, and the number of times the needle was repositioned. Experts achieved better stone clearance with fewer procedural complications. We demonstrated the face, content, and construct validity of an inanimate, full task trainer for PCNL. Construct validity between experts and novices was demonstrated using incorporated procedural metrics, which permitted the accurate assessment of performance. While hands-on training under supervision remains an integral part of any residency, this full-immersion simulation provides a comprehensive tool for surgical skills development and evaluation before hands-on exposure.
Teaching and assessing procedural skills using simulation: metrics and methodology.
Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C
2008-11-01
Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.
Virtual reality simulator training for laparoscopic colectomy: what metrics have construct validity?
Shanmugan, Skandan; Leblanc, Fabien; Senagore, Anthony J; Ellis, C Neal; Stein, Sharon L; Khan, Sadaf; Delaney, Conor P; Champagne, Bradley J
2014-02-01
Virtual reality simulation for laparoscopic colectomy has been used for training of surgical residents and has been considered as a model for technical skills assessment of board-eligible colorectal surgeons. However, construct validity (the ability to distinguish between skill levels) must be confirmed before widespread implementation. This study was designed to determine specifically which metrics for laparoscopic sigmoid colectomy have evidence of construct validity. General surgeons who had performed fewer than 30 laparoscopic colon resections and laparoscopic colorectal experts (>200 laparoscopic colon resections) performed laparoscopic sigmoid colectomy on the LAP Mentor model. All participants received a 15-minute instructional warm-up and had never used the simulator before the study. Performance was then compared between the groups for 21 metrics (procedural, 14; intraoperative errors, 7) to determine specifically which measurements demonstrate construct validity. Performance was compared with the Mann-Whitney U-test (p < 0.05 was considered significant). Fifty-three surgeons enrolled in the study: 29 general surgeons and 24 colorectal surgeons. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 of 14 procedural metrics by distinguishing levels of surgical experience (p < 0.05). The most discriminatory procedural metrics (p < 0.01) favoring experts were reduced instrument path length, accuracy of the peritoneal/medial mobilization, and dissection of the inferior mesenteric artery. Intraoperative errors were not discriminatory for most metrics and favored general surgeons for colonic wall injury (general surgeons, 0.7; colorectal surgeons, 3.5; p = 0.045). Individual variability within the general surgeon and colorectal surgeon groups was not accounted for. The virtual reality simulator for laparoscopic sigmoid colectomy demonstrated construct validity for 8 procedure-specific metrics. However, using virtual reality simulator metrics to detect intraoperative errors did not discriminate between groups. If the virtual reality simulator continues to be used for the technical assessment of trainees and board-eligible surgeons, the evaluation of performance should be limited to procedural metrics.
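A minimal sketch of the per-metric comparison described above is given below, assuming each group's results are held as dictionaries mapping metric names to arrays of per-surgeon values; the Mann-Whitney U-test and the 0.05 threshold come from the abstract, everything else is illustrative.

```python
# Sketch: per-metric construct-validity screen with the Mann-Whitney U-test.
from scipy.stats import mannwhitneyu

def compare_metrics(general, experts, alpha=0.05):
    """general, experts: dicts of metric name -> list/array of per-surgeon values."""
    results = {}
    for metric in general:
        stat, p = mannwhitneyu(general[metric], experts[metric],
                               alternative="two-sided")
        results[metric] = (stat, p, p < alpha)   # True = evidence of construct validity
    return results

# Hypothetical usage with two of the metrics named in the abstract.
general = {"instrument_path_length_cm": [410, 385, 450, 402, 398],
           "colonic_wall_injuries": [1, 0, 2, 0, 1]}
experts = {"instrument_path_length_cm": [300, 315, 290, 322, 305],
           "colonic_wall_injuries": [3, 4, 2, 5, 3]}
print(compare_metrics(general, experts))
```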
Procedural virtual reality simulation in minimally invasive surgery.
Våpenstad, Cecilie; Buzink, Sonja N
2013-02-01
Simulation of procedural tasks has the potential to bridge the gap between basic skills training outside the operating room (OR) and performance of complex surgical tasks in the OR. This paper provides an overview of procedural virtual reality (VR) simulation currently available on the market and presented in the scientific literature for laparoscopy (LS), flexible gastrointestinal endoscopy (FGE), and endovascular surgery (EVS). An online survey was sent to companies and research groups selling or developing procedural VR simulators, and a systematic search was done for scientific publications presenting or applying VR simulators to train or assess procedural skills in the PUBMED and SCOPUS databases. Responses from five simulator companies were included in the survey. In the literature review, 116 articles were analyzed (45 on LS, 43 on FGE, 28 on EVS), presenting a total of 23 simulator systems. The companies reported offering a total of 78 procedural tasks (33 for LS, 12 for FGE, 33 for EVS), of which 17 were also found in the literature review. Although study types and outcomes used vary between the three fields, approximately 90% of the studies presented in the retrieved publications for LS found convincing evidence to confirm the validity or added value of procedural VR simulation. This was the case in approximately 75% for FGE and EVS. Procedural training using VR simulators has been found to improve clinical performance. There is nevertheless a large number of simulated procedural tasks that have not been validated. Future research should focus on the optimal use of procedural simulators in the most effective training setups and further investigate the benefits of procedural VR simulation to improve clinical outcome.
In-Trail Procedure Air Traffic Control Procedures Validation Simulation Study
NASA Technical Reports Server (NTRS)
Chartrand, Ryan C.; Hewitt, Katrin P.; Sweeney, Peter B.; Graff, Thomas J.; Jones, Kenneth M.
2012-01-01
In August 2007, Airservices Australia (Airservices) and the United States National Aeronautics and Space Administration (NASA) conducted a validation experiment of the air traffic control (ATC) procedures associated with the Automatic Dependent Surveillance-Broadcast (ADS-B) In-Trail Procedure (ITP). ITP is an Airborne Traffic Situation Awareness (ATSA) application designed for near-term use in procedural airspace in which ADS-B data are used to facilitate climb and descent maneuvers. NASA and Airservices conducted the experiment in Airservices' simulator in Melbourne, Australia. Twelve current operational air traffic controllers participated in the experiment, which identified aspects of the ITP that could be improved (mainly in the communication and controller approval process). Results showed that controllers viewed the ITP as valid and acceptable. This paper describes the experiment design and results.
A verification library for multibody simulation software
NASA Technical Reports Server (NTRS)
Kim, Sung-Soo; Haug, Edward J.; Frisch, Harold P.
1989-01-01
A multibody dynamics verification library that maintains and manages test and validation data is proposed, based on RRC robot arm and CASE backhoe validation and a comparative study of DADS, DISCOS, and CONTOPS, which are existing public-domain and commercial multibody dynamic simulation programs. Using simple representative problems, simulation results from each program are cross-checked, and the validation results are presented. Functionalities of the verification library are defined in order to automate the validation procedure.
Hung, Andrew J; Shah, Swar H; Dalag, Leonard; Shin, Daniel; Gill, Inderbir S
2015-08-01
We developed a novel procedure-specific simulation platform for robotic partial nephrectomy. In this study, we prospectively evaluate its face, content, construct and concurrent validity. This hybrid platform features augmented reality and virtual reality. Augmented reality involves 3-dimensional robotic partial nephrectomy surgical videos overlaid with virtual instruments to teach surgical anatomy, technical skills and operative steps. Advanced technical skills are assessed with an embedded full virtual reality renorrhaphy task. Participants were classified as novice (no surgical training, 15), intermediate (less than 100 robotic cases, 13) or expert (100 or more robotic cases, 14) and prospectively assessed. Cohort performance was compared with the Kruskal-Wallis test (construct validity). A post-study questionnaire was used to assess the realism of simulation (face validity) and usefulness for training (content validity). Concurrent validity was evaluated as the correlation between the virtual reality renorrhaphy task and live porcine robotic partial nephrectomy performance (Spearman's analysis). Experts rated the augmented reality content as realistic (median 8/10) and helpful for resident/fellow training (8.0-8.2/10). Experts rated the platform highly for teaching anatomy (9/10) and operative steps (8.5/10) but moderately for technical skills (7.5/10). Experts and intermediates outperformed novices (construct validity) in efficiency (p=0.0002) and accuracy (p=0.002). For virtual reality renorrhaphy, experts outperformed intermediates on GEARS metrics (p=0.002). Virtual reality renorrhaphy and in vivo porcine robotic partial nephrectomy performance correlated significantly (r=0.8, p<0.0001) (concurrent validity). This augmented reality simulation platform displayed face, content and construct validity. Performance in the procedure-specific virtual reality task correlated highly with a porcine model (concurrent validity). Future efforts will integrate procedure-specific virtual reality tasks and their global assessment. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
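The two statistical steps named above (Kruskal-Wallis across the three cohorts for construct validity, Spearman's rho between the VR renorrhaphy task and live porcine performance for concurrent validity) can be sketched as follows; all of the numbers are hypothetical stand-ins, not study data.

```python
# Sketch: construct validity (Kruskal-Wallis) and concurrent validity (Spearman).
from scipy.stats import kruskal, spearmanr

novice_scores = [42, 55, 48, 51, 60]          # simulator efficiency scores (made up)
intermediate_scores = [68, 72, 65, 70, 74]
expert_scores = [80, 85, 78, 83, 88]
H, p_construct = kruskal(novice_scores, intermediate_scores, expert_scores)

vr_renorrhaphy = [61, 70, 74, 80, 85, 90]     # VR task scores, same subjects (made up)
porcine_gears = [14, 16, 17, 19, 21, 23]      # GEARS ratings in the live porcine model
rho, p_concurrent = spearmanr(vr_renorrhaphy, porcine_gears)
print(f"Kruskal-Wallis p={p_construct:.3g}, Spearman rho={rho:.2f} (p={p_concurrent:.3g})")
```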
The development of a virtual reality training curriculum for colonoscopy.
Sugden, Colin; Aggarwal, Rajesh; Banerjee, Amrita; Haycock, Adam; Thomas-Gibson, Siwan; Williams, Christopher B; Darzi, Ara
2012-07-01
The development of a structured virtual reality (VR) training curriculum for colonoscopy using high-fidelity simulation. Colonoscopy requires detailed knowledge and technical skill. Changes to working practices in recent times have reduced the availability of traditional training opportunities. Much might, therefore, be achieved by applying novel technologies such as VR simulation to colonoscopy. Scientifically developed device-specific curricula aim to maximize the yield of laboratory-based training by focusing on validated modules and linking progression to the attainment of benchmarked proficiency criteria. Fifty participants, comprising 30 novice (<10 colonoscopies), 10 intermediate (100 to 500 colonoscopies), and 10 experienced (>500 colonoscopies) colonoscopists, were recruited to participate. Surrogates of proficiency, such as the number of procedures undertaken, determined prospective allocation to 1 of 3 groups (novice, intermediate, and experienced). Construct validity and learning value (comparison between groups and within groups, respectively) for each task and metric on the chosen simulator model determined suitability for inclusion in the curriculum. Eight tasks possessing construct validity and significant learning curves were included in the curriculum: 3 abstract tasks, 4 part-procedural tasks, and 1 procedural task. The whole-procedure task was valid for 11 metrics including the following: "time taken to complete the task" (1238, 343, and 293 s; P < 0.001) and "insertion length with embedded tip" (23.8, 3.6, and 4.9 cm; P = 0.005). Learning curves consistently plateaued at or beyond the ninth attempt. Valid metrics were used to define benchmarks, derived from the performance of the experienced cohort, for each included task. A comprehensive, stratified, benchmarked, whole-procedure curriculum has been developed for a modern high-fidelity VR colonoscopy simulator.
Validation of a novel laparoscopic adjustable gastric band simulator.
Sankaranarayanan, Ganesh; Adair, James D; Halic, Tansel; Gromski, Mark A; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B; De, Suvranu
2011-04-01
Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. The aim of our study was to determine face, construct, and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Twenty-eight subjects were categorized into two groups (expert and novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least 4 years of laparoscopic training and operative experience. Novices consisted of subjects with medical training but with less than 4 years of laparoscopic training. The subjects used the virtual reality laparoscopic adjustable band surgery simulator. They were automatically scored according to various tasks. The subjects then completed a questionnaire to evaluate face and content validity. On a 5-point Likert scale (1 = lowest score, 5 = highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 (face validity). There were significant differences in the performances of the two subject groups (expert and novice) based on total scores (p < 0.001) (construct validity). Mean score for utility of the simulator, as addressed by the expert group, was 4.50 ± 0.71 (content validity). We created a virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct, and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure.
Jeong, Eun Ju; Chung, Hyun Soo; Choi, Jeong Yun; Kim, In Sook; Hong, Seong Hee; Yoo, Kyung Sook; Kim, Mi Kyoung; Won, Mi Yeol; Eum, So Yeon; Cho, Young Soon
2017-06-01
The aim of this study was to develop a simulation-based time-out learning programme targeted at nurses participating in high-risk invasive procedures and to evaluate the effect of the new programme on nurses' acceptance. This study used a simulation-based learning predesign and postdesign to determine the effects of implementing the programme. It targeted 48 registered nurses working in the general ward and the emergency department of a tertiary teaching hospital. Differences between acceptance and performance rates were evaluated using means, standard deviations, and the Wilcoxon signed-rank test. The perception survey and score sheet were validated using a content validity index, and evaluator reliability was verified using the intraclass correlation coefficient. Results showed a high level of acceptance of the high-risk invasive procedure time-out (P<.01). Further, improvement was consistent regardless of clinical experience, workplace, or experience in simulation-based learning. The face validity of the programme was rated above 4.0 out of 5.0. This simulation-based learning programme was effective in improving recognition of the time-out protocol and gave the participants the opportunity to become proactive in cases of high-risk invasive procedures performed outside of the operating room. © 2017 John Wiley & Sons Australia, Ltd.
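As a hedged illustration of two of the measures mentioned above, the sketch below computes an item-level content validity index (assumed here to be the proportion of expert raters scoring an item 3 or 4 on a 4-point relevance scale) and a Wilcoxon signed-rank test on paired pre/post scores; the ratings and scores are invented examples, not study data.

```python
# Sketch: item-level content validity index and a paired pre/post comparison.
import numpy as np
from scipy.stats import wilcoxon

def item_cvi(ratings):
    """ratings: 1-4 relevance ratings from the expert panel for one item."""
    ratings = np.asarray(ratings)
    return np.mean(ratings >= 3)

print(item_cvi([4, 4, 3, 4, 2, 4]))            # ~0.83 for this hypothetical panel

pre = np.array([60, 55, 70, 65, 58, 62, 68])   # performance before the programme
post = np.array([75, 72, 80, 78, 70, 74, 82])  # performance after the programme
stat, p = wilcoxon(pre, post)
print(f"Wilcoxon signed-rank p={p:.3f}")
```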
ERIC Educational Resources Information Center
Pustejovsky, James E.; Runyon, Christopher
2014-01-01
Direct observation recording procedures produce reductive summary measurements of an underlying stream of behavior. Previous methodological studies of these recording procedures have employed simulation methods for generating random behavior streams, many of which amount to special cases of a statistical model known as the alternating renewal…
Ribeiro de Oliveira, Marcelo Magaldi; Nicolato, Arthur; Santos, Marcilea; Godinho, Joao Victor; Brito, Rafael; Alvarenga, Alexandre; Martins, Ana Luiza Valle; Prosdocimi, André; Trivelato, Felipe Padovani; Sabbagh, Abdulrahman J; Reis, Augusto Barbosa; Maestro, Rolando Del
2016-05-01
OBJECT The development of neurointerventional treatments of central nervous system disorders has resulted in the need for adequate training environments for novice interventionalists. Virtual simulators offer anatomical definition but lack adequate tactile feedback. Animal models, which provide more lifelike training, require an appropriate infrastructure base. The authors describe a training model for neurointerventional procedures using the human placenta (HP), which affords haptic training with significantly fewer resource requirements, and discuss its validation. METHODS Twelve HPs were prepared for simulated endovascular procedures. Training exercises performed by interventional neuroradiologists and novice fellows were placental angiography, stent placement, aneurysm coiling, and intravascular liquid embolic agent injection. RESULTS The endovascular training exercises proposed can be easily reproduced in the HP. Face, content, and construct validity were assessed by 6 neurointerventional radiologists and 6 novice fellows in interventional radiology. CONCLUSIONS The use of HP provides an inexpensive training model for the training of neurointerventionalists. Preliminary validation results show that this simulation model has face and content validity and has demonstrated construct validity for the interventions assessed in this study.
Airport Landside - Volume III : ALSIM Calibration and Validation.
DOT National Transportation Integrated Search
1982-06-01
This volume discusses calibration and validation procedures applied to the Airport Landside Simulation Model (ALSIM), using data obtained at Miami, Denver and LaGuardia Airports. Criteria for the selection of a validation methodology are described. T...
Development and Validation of a Mobile Device-based External Ventricular Drain Simulator.
Morone, Peter J; Bekelis, Kimon; Root, Brandon K; Singer, Robert J
2017-10-01
Multiple external ventricular drain (EVD) simulators have been created, yet their cost, bulky size, and nonreusable components limit their accessibility to residency programs. To create and validate an animated EVD simulator that is accessible on a mobile device. We developed a mobile-based EVD simulator that is compatible with iOS (Apple Inc., Cupertino, California) and Android-based devices (Google, Mountain View, California) and can be downloaded from the Apple App Store and Google Play Store. Our simulator consists of a learn mode, which teaches users the procedure, and a test mode, which assesses users' procedural knowledge. Twenty-eight participants, who were divided into expert and novice categories, completed the simulator in test mode and answered a postmodule survey. The survey was graded using a 5-point Likert scale, with 5 representing the highest score. Using the survey results, we assessed the module's face and content validity, whereas construct validity was evaluated by comparing the expert and novice test scores. Participants rated individual survey questions pertaining to face and content validity a median score of 4 out of 5. When comparing test scores generated by the participants completing the test mode, the experts scored higher than the novices (mean, 71.5; 95% confidence interval, 69.2 to 73.8 vs mean, 48; 95% confidence interval, 44.2 to 51.6; P < .001). We created a mobile-based EVD simulator that is inexpensive, reusable, and accessible. Our results demonstrate that this simulator is face, content, and construct valid. Copyright © 2017 by the Congress of Neurological Surgeons
Translating the simulation of procedural drilling techniques for interactive neurosurgical training.
Stredney, Don; Rezai, Ali R; Prevedello, Daniel M; Elder, J Bradley; Kerwin, Thomas; Hittle, Bradley; Wiet, Gregory J
2013-10-01
Through previous efforts, we have developed a fully virtual environment to provide procedural training of otologic surgical technique. The virtual environment is based on high-resolution volumetric data of the regional anatomy. These volumetric data help drive an interactive multisensory simulation environment with visual (stereo), aural (stereo), and tactile feedback. Subsequently, we have extended our efforts to support the training of neurosurgical procedural technique as part of the Congress of Neurological Surgeons simulation initiative. Our objective was to study the integration of simulation technologies into the neurosurgical curriculum and to determine their efficacy in teaching minimally invasive cranial and skull base approaches. We discuss issues of biofidelity and our methods to provide objective, quantitative and automated assessment for the residents. We conclude with a discussion of our experiences by reporting preliminary formative pilot studies and proposed approaches to take the simulation to the next level through additional validation studies. We have presented our efforts to translate an otologic simulation environment for use in the neurosurgical curriculum. We have demonstrated the initial proof of principle and define the steps to integrate and validate the system as an adjuvant to the neurosurgical curriculum.
Procedural training and assessment of competency utilizing simulation.
Sawyer, Taylor; Gray, Megan M
2016-11-01
This review examines the current environment of neonatal procedural learning, describes an updated model of skills training, defines the role of simulation in assessing competency, and discusses potential future directions for simulation-based competency assessment. In order to maximize impact, simulation-based procedural training programs should follow a standardized and evidence-based approach to designing and evaluating educational activities. Simulation can be used to facilitate the evaluation of competency, but must incorporate validated assessment tools to ensure quality and consistency. True competency evaluation cannot be accomplished with simulation alone: competency assessment must also include evaluations of procedural skill during actual clinical care. Future work in this area is needed to measure and track clinically meaningful patient outcomes resulting from simulation-based training, examine the use of simulation to assist physicians undergoing re-entry to practice, and to examine the use of procedural skills simulation as part of a maintenance of competency and life-long learning. Copyright © 2016 Elsevier Inc. All rights reserved.
The internal validity of arthroscopic simulators and their effectiveness in arthroscopic education.
Slade Shantz, Jesse Alan; Leiter, Jeff R S; Gottschalk, Tania; MacDonald, Peter Benjamin
2014-01-01
The purpose of this systematic review was to identify standard procedures for the validation of arthroscopic simulators and determine whether simulators improve the surgical skills of users. Arthroscopic simulator validation studies and randomized trials assessing the effectiveness of arthroscopic simulators in education were identified from online databases, as well as grey literature and reference lists. Only validation studies and randomized trials were included for review. Study heterogeneity was calculated and, where appropriate, study results were combined employing a random-effects model. Four hundred and thirteen studies were reviewed. Thirteen studies met the inclusion criteria assessing the construct validity of simulators. A pooled analysis of internal validation studies determined that simulators could discriminate between novices and experts, but not between novice and intermediate trainees, on time to completion of a simulated task. Only one study assessed the utility of a knee simulator in training arthroscopic skills directly and demonstrated that the skill level of simulator-trained residents was greater than that of non-simulator-trained residents. Excessive heterogeneity in the literature precludes determination of the internal and transfer validity of the arthroscopic simulators currently available. Evidence suggests that simulators can discriminate between novice and expert users, but discrimination between novice and intermediate trainees in surgical education should be paramount. International standards for the assessment of arthroscopic simulator validity should be developed to increase the use and effectiveness of simulators in orthopedic surgery.
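Where the review pooled study results under a random-effects model, a standard choice is DerSimonian-Laird pooling. The sketch below is a generic illustration of that estimator, not the review's code; the effect sizes and variances are hypothetical expert-versus-novice standardized mean differences.

```python
# Sketch: DerSimonian-Laird random-effects pooling of per-study effect sizes.
import numpy as np

def dersimonian_laird(effects, variances):
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)            # heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

print(dersimonian_laird([0.9, 1.4, 0.6, 1.1], [0.10, 0.15, 0.08, 0.12]))
```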
NASA Technical Reports Server (NTRS)
Carr, Peter C.; Mckissick, Burnell T.
1988-01-01
A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, Paolo; Theiler, C.; Fasoli, A.
A methodology for plasma turbulence code validation is discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The present work extends the analysis carried out in a previous paper [P. Ricci et al., Phys. Plasmas 16, 055703 (2009)] where the validation observables were introduced. Here, it is discussed how to quantify the agreement between experiments and simulations with respect to each observable, how to define a metric to evaluate this agreement globally, and, finally, how to assess the quality of a validation procedure. The methodology is then applied to the simulation of the basic plasma physics experiment TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulation models.
Model-Based Verification and Validation of Spacecraft Avionics
NASA Technical Reports Server (NTRS)
Khan, Mohammed Omair
2012-01-01
Our simulation was able to mimic the results of 30 tests on the actual hardware. This shows that simulations have the potential to enable early design validation, well before actual hardware exists. Although the simulations focused on data processing procedures at the subsystem and device level, they can also be applied to system-level analysis to simulate mission scenarios and consumable tracking (e.g., power, propellant). Simulation engine plug-in developments are continually improving the product, but handling time for time-sensitive operations (like those of the remote engineering unit and bus controller) can be cumbersome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.
In the present work, a Verification and Validation procedure is presented and applied showing, through a practical example, how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular, the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology assesses that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition, due to the competition between the resistive and ideal interchange instabilities.
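As a loose, simplified illustration of aggregating per-observable agreement into a single figure of merit, in the spirit of the methodology described above (the paper's exact definitions and weighting differ), one might compute a weighted average of normalized experiment-simulation distances; the observables, distances, and weights below are purely illustrative.

```python
# Hedged sketch: weighted global level of agreement from per-observable distances.
import numpy as np

def global_agreement(d, h):
    """d: normalized experiment-simulation distances per observable (0 = perfect match).
    h: weights, e.g. larger for more direct, less heavily processed observables."""
    d, h = np.asarray(d, float), np.asarray(h, float)
    return np.sum(h * d) / np.sum(h)   # weighted average disagreement

distances = [0.4, 1.2, 0.8]   # e.g. density profile, fluctuation level, particle flux
weights = [1.0, 0.7, 0.5]
print(round(global_agreement(distances, weights), 2))
```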
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Sowers, T Shane; Liu, Yuan; Owen, A. Karl; Guo, Ten-Huei
2015-01-01
The National Aeronautics and Space Administration (NASA) has developed independent airframe and engine models that have been integrated into a single real-time aircraft simulation for piloted evaluation of propulsion control algorithms. In order to have confidence in the results of these evaluations, the integrated simulation must be validated to demonstrate that its behavior is realistic and that it meets the appropriate Federal Aviation Administration (FAA) certification requirements for aircraft. The paper describes the test procedures and results, demonstrating that the integrated simulation generally meets the FAA requirements and is thus a valid testbed for evaluation of propulsion control modes.
DOT National Transportation Integrated Search
2004-01-01
Microscopic traffic simulation models have been widely accepted and applied in transportation engineering and planning practice for the past decades because simulation is cost-effective, safe, and fast. To achieve high fidelity and credibility for a ...
Development of a patient-specific surgical simulator for pediatric laparoscopic procedures.
Saber, Nikoo R; Menon, Vinay; St-Pierre, Jean C; Looi, Thomas; Drake, James M; Cyril, Xavier
2014-01-01
The purpose of this study is to develop and evaluate a pediatric patient-specific surgical simulator for the planning, practice, and validation of laparoscopic surgical procedures prior to intervention, initially focusing on the choledochal cyst resection and reconstruction scenario. The simulator comprises software elements including a deformable body physics engine, virtual surgical tools, and abdominal organs. Hardware components such as haptics-enabled hand controllers and a representative endoscopic tool have also been integrated. The prototype is able to perform a number of surgical tasks, and further development work is under way to simulate the complete procedure with acceptable fidelity and accuracy.
Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure Validation Simulation Study
NASA Technical Reports Server (NTRS)
Murdoch, Jennifer L.; Bussink, Frank J. L.; Chamberlain, James P.; Chartrand, Ryan C.; Palmer, Michael T.; Palmer, Susan O.
2008-01-01
The Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure (ITP) Validation Simulation Study investigated the viability of an ITP designed to enable oceanic flight level changes that would not otherwise be possible. Twelve commercial airline pilots with current oceanic experience flew a series of simulated scenarios involving either standard or ITP flight level change maneuvers and provided subjective workload ratings, assessments of ITP validity and acceptability, and objective performance measures associated with the appropriate selection, request, and execution of ITP flight level change maneuvers. In the majority of scenarios, subject pilots correctly assessed the traffic situation, selected an appropriate response (i.e., either a standard flight level change request, an ITP request, or no request), and executed their selected flight level change procedure, if any, without error. Workload ratings for ITP maneuvers were acceptable and not substantially higher than for standard flight level change maneuvers, and, for the majority of scenarios and subject pilots, subjective acceptability ratings and comments for ITP were generally high and positive. Qualitatively, the ITP was found to be valid and acceptable. However, the error rates for ITP maneuvers were higher than for standard flight level changes, and these errors may have design implications for both the ITP and the study's prototype traffic display. These errors and their implications are discussed.
Evaluation of a virtual-reality-based simulator using passive haptic feedback for knee arthroscopy.
Fucentese, Sandro F; Rahm, Stefan; Wieser, Karl; Spillmann, Jonas; Harders, Matthias; Koch, Peter P
2015-04-01
The aim of this work is to determine face validity and construct validity of a new virtual-reality-based simulator for diagnostic and therapeutic knee arthroscopy. The study tests a novel arthroscopic simulator based on passive haptics. Sixty-eight participants were grouped into novices, intermediates, and experts. All participants completed two exercises. In order to establish face validity, all participants filled out a questionnaire concerning different aspects of simulator realism, training capacity, and different statements using a seven-point Likert scale (range 1-7). Construct validity was tested by comparing various simulator metric values between novices and experts. Face validity could be established: overall realism was rated with a mean value of 5.5 points. Global training capacity scored a mean value of 5.9. Participants considered the simulator useful for procedural training of diagnostic and therapeutic arthroscopy. In the foreign body removal exercise, experts were overall significantly faster in the whole procedure (6 min 24 s vs. 8 min 24 s, p < 0.001), took less time to complete the diagnostic tour (2 min 49 s vs. 3 min 32 s, p = 0.027), and had a shorter camera path length (186 vs. 246 cm, p = 0.006). The simulator achieved high scores in terms of realism. It was regarded as a useful training tool, which is also capable of differentiating between varying levels of arthroscopic experience. Nevertheless, further improvements of the simulator, especially in the field of therapeutic arthroscopy, are desirable. In general, the findings support that virtual-reality-based simulation using passive haptics has the potential to complement conventional training of knee arthroscopy skills. Level of evidence: II.
Recent advancements in medical simulation: patient-specific virtual reality simulation.
Willaert, Willem I M; Aggarwal, Rajesh; Van Herzeele, Isabelle; Cheshire, Nicholas J; Vermassen, Frank E
2012-07-01
Patient-specific virtual reality simulation (PSVR) is a new technological advancement that allows practice of upcoming real operations and complements the established role of VR simulation as a generic training tool. This review describes current developments in PSVR and draws parallels with other high-stakes industries, such as aviation, the military, and sports. A review of the literature was performed using PubMed and Internet search engines to retrieve data relevant to PSVR in medicine. All reports pertaining to PSVR were included. Reports on simulators that did not incorporate a haptic interface device were excluded from the review. Fifteen reports described 12 simulators that enabled PSVR. Medical procedures in the fields of laparoscopy, vascular surgery, orthopedics, neurosurgery, and plastic surgery were included. In all cases, the source data were two-dimensional CT or MRI data. Face validity was most commonly reported. Only one (vascular) simulator had undergone face, content, and construct validity testing. Of the 12 simulators, 1 is commercialized and 11 are prototypes. Five simulators have been used in conjunction with real patient procedures. PSVR is a promising technological advance within medicine. The majority of simulators are still in the prototype phase. As further developments unfold, the validity of PSVR will have to be examined much like generic VR simulation for training purposes. Nonetheless, similar to the aviation, military, and sports industries, operative performance and patient safety may be enhanced by the application of this novel technology.
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms, and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts, the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
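A minimal sketch of how one bias detection curve could be simulated is shown below; it is an illustration in the spirit of the approach described above, not the authors' implementation, and the assay distribution, truncation limits, window size, and control limits are hypothetical (roughly sodium-like values).

```python
# Sketch: simulate a bias detection curve for one moving-average (MA) procedure.
import numpy as np

rng = np.random.default_rng(0)

def results_to_detection(stream, bias, lo_trunc, hi_trunc, window, lo_ctrl, hi_ctrl):
    """Number of results processed before the MA breaches a control limit."""
    biased = stream + bias
    kept = []
    for n, x in enumerate(biased, start=1):
        if lo_trunc <= x <= hi_trunc:               # truncation: exclude outliers from MA
            kept.append(x)
        if len(kept) >= window:
            ma = np.mean(kept[-window:])            # simple MA of the last `window` results
            if ma < lo_ctrl or ma > hi_ctrl:
                return n
    return np.nan                                    # bias not detected in this stream

def bias_detection_curve(biases, n_sim=200):
    medians = []
    for b in biases:
        runs = [results_to_detection(rng.normal(140, 3, 500), b,
                                     120, 160, 20, 138, 142) for _ in range(n_sim)]
        medians.append(np.nanmedian(runs))
    return medians   # plot against `biases` to obtain the bias detection curve

print(bias_detection_curve([1, 2, 4]))
```

Repeating this for several candidate MA procedures and overlaying the resulting curves allows the graphical comparison of bias detection properties described above.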
Larsen, C R; Grantcharov, T; Aggarwal, R; Tully, A; Sørensen, J L; Dalsgaard, T; Ottesen, B
2006-09-01
Safe realistic training and unbiased quantitative assessment of technical skills are required for laparoscopy. Virtual reality (VR) simulators may be useful tools for training and assessing basic and advanced surgical skills and procedures. This study aimed to investigate the construct validity of the LapSimGyn VR simulator, and to determine the learning curves of gynecologists with different levels of experience. For this study, 32 gynecologic trainees and consultants (juniors or seniors) were allocated into three groups: novices (0 advanced laparoscopic procedures), intermediate level (>20 and <60 procedures), and experts (>100 procedures). All performed 10 sets of simulations consisting of three basic skill tasks and an ectopic pregnancy program. The simulations were carried out on 3 days within a maximum period of 2 weeks. Assessment of skills was based on time, economy of movement, and error parameters measured by the simulator. The data showed that expert gynecologists performed significantly and consistently better than intermediate and novice gynecologists. The learning curves differed significantly between the groups, showing that experts start at a higher level and more rapidly reach the plateau of their learning curve than do intermediate and novice groups of surgeons. The LapSimGyn VR simulator package demonstrates construct validity on both the basic skills module and the procedural gynecologic module for ectopic pregnancy. Learning curves can be obtained, but to reach the maximum performance for the more complex tasks, 10 repetitions do not seem sufficient at the given task level and settings. LapSimGyn also seems to be flexible and widely accepted by the users.
Wen, Tingxi; Medveczky, David; Wu, Jackie; Wu, Jianhuang
2018-01-25
Colonoscopy plays an important role in the clinical screening and management of colorectal cancer. The traditional 'see one, do one, teach one' training style for such an invasive procedure is resource intensive and ineffective. Given that colonoscopy is difficult and time-consuming to master, the use of virtual reality simulators to train gastroenterologists in colonoscopy operations offers a promising alternative. In this paper, a realistic, real-time interactive simulator for training the colonoscopy procedure, including polypectomy simulation, is presented. Our approach models the colonoscope as a thick flexible elastic rod with different resolutions that adapt dynamically to the curvature of the colon. Additional material characteristics of this deformable material are integrated into our discrete model to realistically simulate the behavior of the colonoscope. We also propose a set of key aspects of our simulator that give fast, high-fidelity feedback to trainees, and we conducted an initial validation of this colonoscopic simulator to determine its clinical utility and efficacy.
ERIC Educational Resources Information Center
Bakermans-Kranenburg, Marian J.; Alink, Lenneke R. A.; Biro, Szilvia; Voorthuis, Alexandra; van IJzendoorn, Marinus H.
2015-01-01
Observation of parental sensitivity in a standard procedure, in which caregivers are faced with the same level of infant demand, enables the comparison of sensitivity "between" caregivers. We developed an ecologically valid standardized setting using an infant simulator with interactive features, the Leiden Infant Simulator Sensitivity…
Resampling procedures to identify important SNPs using a consensus approach.
Pardy, Christopher; Motyer, Allan; Wilson, Susan
2011-11-29
Our goal is to identify common single-nucleotide polymorphisms (SNPs) (minor allele frequency > 1%) that add predictive accuracy above that gained by knowledge of easily measured clinical variables. We take an algorithmic approach to predict each phenotypic variable using a combination of phenotypic and genotypic predictors. We perform our procedure on the first simulated replicate and then validate against the others. Our procedure performs well when predicting Q1 but is less successful for the other outcomes. We use resampling procedures where possible to guard against false positives and to improve generalizability. The approach is based on finding a consensus regarding important SNPs by applying random forests and the least absolute shrinkage and selection operator (LASSO) on multiple subsamples. Random forests are used first to discard unimportant predictors, narrowing our focus to roughly 100 important SNPs. A cross-validation LASSO is then used to further select variables. We combine these procedures to guarantee that cross-validation can be used to choose a shrinkage parameter for the LASSO. If the clinical variables were unavailable, this prefiltering step would be essential. We perform the SNP-based analyses simultaneously rather than one at a time to estimate SNP effects in the presence of other causal variants. We analyzed the first simulated replicate of Genetic Analysis Workshop 17 without knowledge of the true model. Post-conference knowledge of the simulation parameters allowed us to investigate the limitations of our approach. We found that many of the false positives we identified were substantially correlated with genuine causal SNPs.
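A stripped-down version of the two-stage filtering the authors describe (random-forest importance ranking to discard unimportant SNPs, followed by a cross-validated LASSO on the survivors) can be sketched with scikit-learn. The data here are simulated, and the cut-off of 100 SNPs simply mirrors the figure quoted in the abstract; this is an illustration of the workflow, not the authors' exact pipeline, which also builds a consensus over multiple subsamples.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 500, 2000                                      # individuals, SNPs
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # additive genotype coding 0/1/2
causal = rng.choice(p, 10, replace=False)
y = X[:, causal] @ rng.normal(0, 0.5, size=10) + rng.normal(0, 1.0, size=n)

# Stage 1: random forest importance ranking, keep ~100 candidate SNPs
rf = RandomForestRegressor(n_estimators=300, n_jobs=-1, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:100]

# Stage 2: cross-validated LASSO on the reduced panel
lasso = LassoCV(cv=5, random_state=0).fit(X[:, top], y)
selected = top[lasso.coef_ != 0]
print("SNPs selected:", np.sort(selected))
print("true causal SNPs recovered:", np.intersect1d(selected, causal))
```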
An IMU-to-Body Alignment Method Applied to Human Gait Analysis.
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-12-10
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
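The calibration is reported here only at a high level, so the following is a generic sensor-to-segment alignment sketch under common assumptions: a static pose provides the segment's vertical axis from the accelerometer's gravity reading, and a pure flexion movement provides the joint axis as the dominant gyroscope direction. The TRIAD-style frame construction is illustrative and not necessarily the authors' exact procedure.

```python
import numpy as np

def sensor_to_segment(acc_static, gyr_flexion):
    """Build a rotation matrix whose rows are the segment axes expressed in the
    sensor frame, so that v_segment = R @ v_sensor.
    acc_static  : (N, 3) accelerometer samples recorded in a static pose
    gyr_flexion : (M, 3) gyroscope samples recorded during a flexion movement"""
    # Vertical segment axis: direction of measured gravity while standing still
    z = acc_static.mean(axis=0)
    z /= np.linalg.norm(z)
    # Medio-lateral axis: dominant rotation direction during flexion (1st PC of gyro)
    _, _, vt = np.linalg.svd(gyr_flexion - gyr_flexion.mean(axis=0))
    j = vt[0] / np.linalg.norm(vt[0])
    # Orthonormalise: remove the component of j along z, complete the right-handed triad
    y = j - (j @ z) * z
    y /= np.linalg.norm(y)
    x = np.cross(y, z)
    return np.vstack([x, y, z])

# Synthetic check: a sensor mounted with an arbitrary (known) rotation on the segment
rng = np.random.default_rng(2)
true_R = np.linalg.qr(rng.normal(size=(3, 3)))[0]          # random sensor mounting
acc = (true_R @ np.array([0.0, 0.0, 9.81])) + rng.normal(0, 0.05, size=(200, 3))
gyr = np.outer(np.sin(np.linspace(0, 6, 400)), true_R @ np.array([0.0, 1.0, 0.0]))
R = sensor_to_segment(acc, gyr)
print(np.round(R @ true_R, 3))   # close to +/- identity, up to axis sign conventions
```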
NASA Technical Reports Server (NTRS)
Arnold, J.; Cheatwood, N.; Powell, D.; Wolf, A.; Guensey, C.; Rivellini, T.; Venkatapathy, E.; Beard, T.; Beutter, B.; Laub, B.
2005-01-01
Contents include the following: a listing of critical capabilities (knowledge, procedures, training, facilities) and metrics for validating that they are mission ready; examples of critical capabilities and validation metrics, such as ground tests and simulations; flight testing to prove capabilities are mission ready; and issues and recommendations.
Collection of Calibration and Validation Data for An Airport Landside Dynamic Simulation Model
DOT National Transportation Integrated Search
1980-04-01
The report summarizes the airport data collection procedures employed to obtain the necessary calibration and validation information. The preparation for the data collection effort is explained. A description is presented of the initial work tasks, w...
Development of a Turbofan Engine Simulation in a Graphical Simulation Environment
NASA Technical Reports Server (NTRS)
Parker, Khary I.; Guo, Ten-Heui
2003-01-01
This paper presents the development of a generic component level model of a turbofan engine simulation with a digital controller, in an advanced graphical simulation environment. The goal of this effort is to develop and demonstrate a flexible simulation platform for future research in propulsion system control and diagnostic technology. A previously validated FORTRAN-based model of a modern, high-performance, military-type turbofan engine is being used to validate the platform development. The implementation process required the development of various innovative procedures, which are discussed in the paper. Open-loop and closed-loop comparisons are made between the two simulations. Future enhancements that are to be made to the modular engine simulation are summarized.
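As a toy illustration of the plant-plus-digital-controller structure described above (not the NASA component level model itself), the sketch below couples a first-order fan-speed lag to a discrete PI fuel-flow controller running at its own sample rate; all constants are made-up placeholder values.

```python
import numpy as np

# Placeholder plant: fan speed N responds to fuel flow wf through a first-order lag
tau, gain = 1.5, 800.0             # s, rpm per unit fuel flow (illustrative values)
dt_plant, dt_ctrl = 0.01, 0.02     # plant integration step and controller sample time
kp, ki = 0.004, 0.008              # PI gains tuned by hand for this toy model

N, wf, integ = 0.0, 0.0, 0.0
N_cmd = 3000.0                     # commanded fan speed, rpm
t_next_ctrl = 0.0
log = []
for k in range(int(10.0 / dt_plant)):
    t = k * dt_plant
    if t >= t_next_ctrl:           # digital controller runs at its own, slower rate
        err = N_cmd - N
        integ += err * dt_ctrl
        wf = max(0.0, kp * err + ki * integ)
        t_next_ctrl += dt_ctrl
    N += dt_plant * (gain * wf - N) / tau    # explicit Euler step of the spool lag
    log.append((t, N, wf))

print(f"fan speed after 10 s: {log[-1][1]:.1f} rpm (command {N_cmd} rpm)")
```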
Translating the Simulation of Procedural Drilling Techniques for Interactive Neurosurgical Training
Stredney, Don; Rezai, Ali R.; Prevedello, Daniel M.; Elder, J. Bradley; Kerwin, Thomas; Hittle, Bradley; Wiet, Gregory J.
2014-01-01
Background: Through previous and concurrent efforts, we have developed a fully virtual environment to provide procedural training of otologic surgical technique. The virtual environment is based on high-resolution volumetric data of the regional anatomy. This volumetric data helps drive an interactive multi-sensory, i.e., visual (stereo), aural (stereo), and tactile simulation environment. Subsequently, we have extended our efforts to support the training of neurosurgical procedural technique as part of the CNS simulation initiative. Objective: The goal of this multi-level development is to deliberately study the integration of simulation technologies into the neurosurgical curriculum and to determine their efficacy in teaching minimally invasive cranial and skull base approaches. Methods: We discuss issues of biofidelity as well as our methods to provide objective, quantitative automated assessment for the residents. Results: We conclude with a discussion of our experiences by reporting on preliminary formative pilot studies and proposed approaches to take the simulation to the next level through additional validation studies. Conclusion: We have presented our efforts to translate an otologic simulation environment for use in the neurosurgical curriculum. We have demonstrated the initial proof of principles and define the steps to integrate and validate the system as an adjuvant to the neurosurgical curriculum. PMID:24051887
Current status of robotic simulators in acquisition of robotic surgical skills.
Kumar, Anup; Smith, Roger; Patel, Vipul R
2015-03-01
This article provides an overview of the current status of simulator systems in the robotic surgery training curriculum, covering the available simulators for training, their comparison, and new technologies introduced in simulation, along with existing challenges and future perspectives of simulator training in robotic surgery. The different virtual reality simulators available on the market, such as dVSS, dVT, RoSS, ProMIS and SEP, have shown face, content and construct validity in robotic skills training for novices outside the operating room. Recently, augmented reality simulators such as HoST, Maestro AR and RobotiX Mentor have been introduced in robotic training, providing a more realistic operating environment with greater emphasis on procedure-specific robotic training. Further, the Xperience Team Trainer, which provides training to the console surgeon and bedside assistant simultaneously, has recently been introduced to emphasize the importance of teamwork and proper coordination. Simulator training holds an important place in the current training curriculum of future robotic surgeons. There is a need for more procedure-specific augmented reality simulator training, utilizing advancements in computing and graphical capabilities for new innovations in simulator technology. Further studies are required to establish its cost-benefit ratio along with concurrent and predictive validity.
Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L
2017-02-01
To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.
Pugh, Carla M; Arafat, Fahd O; Kwan, Calvin; Cohen, Elaine R; Kurashima, Yo; Vassiliou, Melina C; Fried, Gerald M
2015-10-01
The aim of our study was to modify our previously developed laparoscopic ventral hernia (LVH) simulator to increase difficulty and then reassess validity and feasibility for using the simulator in a newly developed simulation-based continuing medical education course. Participants (N = 30) were practicing surgeons who signed up for a hands-on postgraduate laparoscopic hernia course. An LVH simulator, with prior validity evidence, was modified for the course to increase difficulty. Participants completed 1 of the 3 variations in hernia anatomy: incarcerated omentum, incarcerated bowel, and diffuse adhesions. During the procedure, course faculty and peer observers rated surgeon performance using the Global Operative Assessment of Laparoscopic Skills-Incisional Hernia and Global Operative Assessment of Laparoscopic Skills rating scales, both with prior validity evidence. Rating scale reliability was reassessed for internal consistency. Peer and faculty raters' scores were compared. In addition, quality and completeness of the hernia repairs were rated. Internal consistency was high for both the general skills performance scores (peer α = .96, faculty α = .94) and the procedure-specific performance scores (peer α = .91, faculty α = .88). Peers were more lenient than faculty raters on all LVH items in both the procedure-specific skills and general skills ratings. Overall, participants scored poorly on the quality and completeness of their hernia repairs (mean = 3.90/16, standard deviation = 2.72), suggesting a mismatch between course attendees and hernia difficulty and identifying a learning need. Simulation-based continuing medical education courses provide hands-on experiences that can positively affect clinical practice. Although our data appear to show a significant mismatch between clinical skill and simulator difficulty, these findings also underscore significant learning needs in the surgical community. Copyright © 2015 Elsevier Inc. All rights reserved.
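The internal-consistency figures quoted above are Cronbach's α values. For readers who want to reproduce such a statistic from a ratings matrix, a minimal computation (on made-up example data, not the study's ratings) looks like this:

```python
import numpy as np

def cronbach_alpha(ratings):
    """ratings: 2-D array, rows = rated performances, columns = scale items."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-item rating scale applied to 8 rated performances
example = np.array([[4, 5, 4, 4, 5],
                    [3, 3, 4, 3, 3],
                    [5, 5, 5, 4, 5],
                    [2, 3, 2, 2, 3],
                    [4, 4, 5, 4, 4],
                    [3, 2, 3, 3, 2],
                    [5, 4, 5, 5, 5],
                    [2, 2, 2, 3, 2]])
print(f"Cronbach's alpha = {cronbach_alpha(example):.2f}")
```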
Jones, R T; Kazdin, A E; Haney, J I
1981-01-01
A multifaceted behavioral program designed to teach emergency fire escape procedures to children was evaluated in a multiple-baseline design. Five children were trained to respond correctly to nine home emergency fire situations under simulated conditions. The situations and responses focused upon in training were identified by a social validation procedure involving consultation with several safety agencies, including the direct input of firefighters. Training, carried out in simulated bedrooms at school, resulted in significant improvements in both overt behavior and self-report of fire safety skills. The gains were maintained at a post-check assessment 2 weeks after training had been terminated. The results are discussed in relation both to the importance of social validation of targets and outcomes and the implications for further research in assessing and developing emergency response skills. PMID:7298537
Cannon, W Dilworth; Garrett, William E; Hunter, Robert E; Sweeney, Howard J; Eckhoff, Donald G; Nicandri, Gregg T; Hutchinson, Mark R; Johnson, Donald D; Bisson, Leslie J; Bedi, Asheesh; Hill, James A; Koh, Jason L; Reinig, Karl D
2014-11-05
There is a paucity of articles in the surgical literature demonstrating transfer validity (transfer of training). The purpose of this study was to assess whether skills learned on the ArthroSim virtual-reality arthroscopic knee simulator transferred to greater skill levels in the operating room. Postgraduate year-3 orthopaedic residents were randomized into simulator-trained and control groups at seven academic institutions. The experimental group trained on the simulator, performing a knee diagnostic arthroscopy procedure to a predetermined proficiency level based on the average proficiency of five community-based orthopaedic surgeons performing the same procedure on the simulator. The residents in the control group continued their institution-specific orthopaedic education and training. Both groups then performed a diagnostic knee arthroscopy procedure on a live patient. Video recordings of the arthroscopic surgery were analyzed by five pairs of expert arthroscopic surgeons blinded to the identity of the residents. A proprietary global rating scale and a procedural checklist, which included visualization and probing scales, were used for rating. Forty-eight (89%) of the fifty-four postgraduate year-3 residents from seven academic institutions completed the study. The simulator-trained group averaged eleven hours of training on the simulator to reach proficiency. The simulator-trained group performed significantly better when rated according to our procedural checklist (p = 0.031), including probing skills (p = 0.016) but not visualization skills (p = 0.34), compared with the control group. The procedural checklist weighted probing skills at twice the weight of visualization skills. The global rating scale failed to reach significance (p = 0.061) because of one extreme outlier. The duration of the procedure was not significant. This lack of a significant difference seemed to be related to the fact that residents in the control group were less thorough, which shortened their time to completion of the arthroscopic procedure. We have demonstrated transfer validity (transfer of training): residents trained to proficiency on a high-fidelity realistic virtual-reality arthroscopic knee simulator showed a greater skill level in the operating room compared with the control group. We believe that the results of our study will stimulate residency program directors to incorporate surgical simulation into the core curriculum of their residency programs. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
Computational Methods Development at Ames
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Smith, Charles A. (Technical Monitor)
1998-01-01
This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and flow physics studies. The presentation gives historical precedents for the above research and speculates on its future course.
Cannon, W Dilworth; Nicandri, Gregg T; Reinig, Karl; Mevis, Howard; Wittstein, Jocelyn
2014-04-02
Several virtual reality simulators have been developed to assist orthopaedic surgeons in acquiring the skills necessary to perform arthroscopic surgery. The purpose of this study was to assess the construct validity of the ArthroSim virtual reality arthroscopy simulator by evaluating whether skills acquired through increased experience in the operating room lead to improved performance on the simulator. Using the simulator, six postgraduate year-1 orthopaedic residents were compared with six postgraduate year-5 residents and with six community-based orthopaedic surgeons when performing diagnostic arthroscopy. The time to perform the procedure was recorded. To ensure that subjects did not sacrifice the quality of the procedure to complete the task in a shorter time, the simulator was programmed to provide a completeness score that indicated whether the surgeon accurately performed all of the steps of diagnostic arthroscopy in the correct sequence. The mean time to perform the procedure by each group was 610 seconds for community-based orthopaedic surgeons, 745 seconds for postgraduate year-5 residents, and 1028 seconds for postgraduate year-1 residents. Both the postgraduate year-5 residents and the community-based orthopaedic surgeons performed the procedure in significantly less time (p = 0.006) than the postgraduate year-1 residents. There was a trend toward significance (p = 0.055) in time to complete the procedure when the postgraduate year-5 residents were compared with the community-based orthopaedic surgeons. The mean level of completeness as assigned by the simulator for each group was 85% for the community-based orthopaedic surgeons, 79% for the postgraduate year-5 residents, and 71% for the postgraduate year-1 residents. As expected, these differences were not significant, indicating that the three groups had achieved an acceptable level of consistency in their performance of the procedure. Higher levels of surgeon experience resulted in improved efficiency when performing diagnostic knee arthroscopy on the simulator. Further validation studies utilizing the simulator are currently under way and the additional simulated tasks of arthroscopic meniscectomy, meniscal repair, microfracture, and loose body removal are being developed.
Kawaguchi, Koji; Egi, Hiroyuki; Hattori, Minoru; Sawada, Hiroyuki; Suzuki, Takahisa; Ohdan, Hideki
2014-10-01
Virtual reality surgical simulators are becoming popular as a means of providing trainees with an opportunity to practice laparoscopic skills. The Lap-X (Epona Medical, Rotterdam, the Netherlands) is a novel VR simulator for training basic skills in laparoscopic surgery. The objective of this study was to validate the Lap-X laparoscopic virtual reality simulator by assessing its face and construct validity in order to determine whether the simulator is adequate for basic skills training. The face and content validity were evaluated using a structured questionnaire. To assess the construct validity, the participants, nine expert surgeons (median age 40 years, range 32-45; >100 laparoscopic procedures) and 11 novices, performed three basic laparoscopic tasks using the Lap-X. The participants reported a high level of content validity. No significant differences were found between the expert surgeons and the novices in their questionnaire ratings (Ps > 0.246), whereas the performance of the expert surgeons on the three tasks was significantly better than that of the novices in all parameters (Ps < 0.05). This study demonstrated the face, content and construct validity of the Lap-X. The Lap-X holds real potential as a home and hospital training device.
Martin, Kevin D; Amendola, Annunziato; Phisitkul, Phinit
2016-01-01
Purpose: Orthopedic education continues to move towards an evidence-based curriculum in order to comply with new residency accreditation mandates. There are currently three high-fidelity arthroscopic virtual reality (VR) simulators available, each with multiple instructional modules and simulated arthroscopic procedures. The aim of the current study is to assess face validity, defined as the degree to which a procedure appears effective in terms of its stated aims, of the three available VR simulators. Methods: Thirty subjects were recruited from a single orthopedic residency training program. Each subject completed one training session on each of the three leading VR arthroscopic simulators (ARTHRO Mentor, Symbionix; ArthroS, Virtamed; and ArthroSim, Toltech). Each arthroscopic session involved simulator-specific modules. After the training sessions, subjects completed a previously validated simulator questionnaire for face validity. Results: The median external appearance scores for the ARTHRO Mentor (9.3, range 6.7-10.0; p=0.0036) and ArthroS (9.3, range 7.3-10.0; p=0.0003) were statistically higher than for ArthroSim (6.7, range 3.3-9.7). There was no statistical difference in intraarticular appearance, instrument appearance, or user friendliness between the three groups. Most simulators reached an appropriate proportion of sufficient scores for each category (≥70%), except for ARTHRO Mentor (intraarticular appearance 50%; instrument appearance 61.1%) and ArthroSim (external appearance 50%; user friendliness 68.8%). Conclusion: These results demonstrate that ArthroS has the highest overall face validity of the three current arthroscopic VR simulators. However, only external appearance for ArthroS reached statistical significance when compared to the other simulators. Additionally, each simulator had satisfactory intraarticular quality. This study helps further the understanding of VR simulation and the features necessary for accurate arthroscopic representation. These data also provide objective information for educators when selecting equipment that will best facilitate residency training. PMID:27528830
ERIC Educational Resources Information Center
Wijnen-Meijer, M.; Van der Schaaf, M.; Booij, E.; Harendza, S.; Boscardin, C.; Wijngaarden, J. Van; Ten Cate, Th. J.
2013-01-01
There is a need for valid methods to assess the readiness for clinical practice of medical graduates. This study evaluates the validity of Utrecht Hamburg Trainee Responsibility for Unfamiliar Situations Test (UHTRUST), an authentic simulation procedure to assess whether medical trainees are ready to be entrusted with unfamiliar clinical tasks…
Launch Vehicle Operations Simulator
NASA Technical Reports Server (NTRS)
Blackledge, J. W.
1974-01-01
The Saturn Launch Vehicle Operations Simulator (LVOS) was developed for NASA at Kennedy Space Center. LVOS simulates the Saturn launch vehicle and its ground support equipment. The simulator was intended primarily to be used as a launch crew trainer but it is also being used for test procedure and software validation. A NASA/contractor team of engineers and programmers implemented the simulator after the Apollo XI lunar landing during the low activity periods between launches.
Ahn, Woojin; Dargar, Saurabh; Halic, Tansel; Lee, Jason; Li, Baichun; Pan, Junjun; Sankaranarayanan, Ganesh; Roberts, Kurt; De, Suvranu
2014-01-01
The first virtual-reality-based simulator for Natural Orifice Translumenal Endoscopic Surgery (NOTES), called the Virtual Translumenal Endoscopic Surgery Trainer (VTEST™), has been developed. VTEST™ aims to simulate the hybrid NOTES cholecystectomy procedure using a rigid scope inserted through the vaginal port. The hardware interface is designed for accurate motion tracking of the scope and laparoscopic instruments to reproduce the unique hand-eye coordination. The haptic-enabled multimodal interactive simulation includes exposing Calot's triangle and detaching the gall bladder while performing electrosurgery. The developed VTEST™ was demonstrated and validated at NOSCAR 2013.
A simulation study of Large Area Crop Inventory Experiment (LACIE) technology
NASA Technical Reports Server (NTRS)
Ziegler, L. (Principal Investigator); Potter, J.
1979-01-01
The author has identified the following significant results. The LACIE performance predictor (LPP) was used to replicate LACIE phase 2 for a 15 year period, using accuracy assessment results for phase 2 error components. Results indicated that the LPP simulated the LACIE phase 2 procedures reasonably well. For the 15 year simulation, only 7 of the 15 production estimates were within 10 percent of the true production. The simulations indicated that the acreage estimator, based on CAMS phase 2 procedures, has a negative bias. This bias was too large to support the 90/90 criterion with the CV observed and simulated for the phase 2 production estimator. Results of this simulation study validate the theory that the acreage variance estimator in LACIE was conservative.
Validation of virtual-reality-based simulations for endoscopic sinus surgery.
Dharmawardana, N; Ruthenbeck, G; Woods, C; Elmiyeh, B; Diment, L; Ooi, E H; Reynolds, K; Carney, A S
2015-12-01
Virtual reality (VR) simulators provide an alternative to real patients for practicing surgical skills but require validation to ensure accuracy. Here, we validate the use of a virtual reality sinus surgery simulator with haptic feedback for training in Otorhinolaryngology - Head & Neck Surgery (OHNS). Participants were recruited from final-year medical students, interns, resident medical officers (RMOs), OHNS registrars and consultants. All participants completed an online questionnaire after performing four separate simulation tasks. These were then used to assess face, content and construct validity. ANOVA with post hoc correlation was used for statistical analysis. The following groups were compared: (i) medical students/interns, (ii) RMOs, (iii) registrars and (iv) consultants. Face validity results showed a statistically significant (P < 0.05) difference between the consultant group and the others, while there was no significant difference between the medical student/intern and RMO groups. Variability within groups was not significant. Content validity results based on consultant scoring and comments indicated that the simulations need further development in several areas to be effective for registrar-level teaching. However, students, interns and RMOs indicated that the simulations provide a useful tool for learning OHNS-related anatomy and as an introduction to ENT-specific procedures. The VR simulations have been validated for teaching sinus anatomy and nasendoscopy to medical students, interns and RMOs. However, they require further development before they can be regarded as a valid tool for more advanced surgical training. © 2015 John Wiley & Sons Ltd.
Validation of a Video-based Game-Understanding Test Procedure in Badminton.
ERIC Educational Resources Information Center
Blomqvist, Minna T.; Luhtanen, Pekka; Laakso, Lauri; Keskinen, Esko
2000-01-01
Reports the development and validation of video-based game-understanding tests in badminton for elementary and secondary students. The tests included different sequences that simulated actual game situations. Players had to solve tactical problems by selecting appropriate solutions and arguments for their decisions. Results suggest that the test…
A posteriori model validation for the temporal order of directed functional connectivity maps.
Beltz, Adriene M; Molenaar, Peter C M
2015-01-01
A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
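The core of the validation step described above is a white-noise test applied to one-step-ahead prediction errors. As a generic illustration (not the unified-SEM pipeline itself), the sketch below fits an intentionally under-specified time-series model, shows that a Ljung-Box test flags the leftover sequential dependence in its residuals, and shows that a correctly ordered model passes.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(3)
n = 600
y = np.zeros(n)
for t in range(2, n):                 # simulate an AR(2) "brain signal"
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.normal()

for lags in (1, 2):                   # under-specified (lag 1) vs adequate (lag 2)
    resid = AutoReg(y, lags=lags).fit().resid
    lb = acorr_ljungbox(resid, lags=[10], return_df=True)
    print(f"AR({lags}) residuals: Ljung-Box p = {lb['lb_pvalue'].iloc[0]:.4f}")

# A small p-value for the AR(1) fit indicates unmodeled higher-order dependence,
# i.e. the temporal order of the model should be revised upward.
```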
APPLICATION OF THE HSPF MODEL TO THE SOUTH FORK OF THE BROAD RIVER WATERSHED IN NORTHEASTERN GEORGIA
The Hydrological Simulation Program-Fortran (HSPF) is a comprehensive watershed model which simulates hydrology and water quality at user-specified temporal and spatial scales. Well-established model calibration and validation procedures are followed when adjusting model paramete...
Modeling and Validation of Microwave Ablations with Internal Vaporization
Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L.
2014-01-01
Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this work, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10 and 20 mm away from the heating zone of the microwave antenna in a homogenized ex vivo bovine liver setup. The cross-sectional area of water vapor transport was validated through intra-procedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard Index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with Jaccard Indices of 0.27, 0.49, 0.61, 0.67 and 0.69 at 1, 2, 3, 4, and 5 minutes, respectively. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481
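The Jaccard Index used above to compare CT iso-density contours with simulated vapor-concentration contours is simply the intersection over union of the two segmented regions. A minimal version for binary masks (on synthetic circles, not the study's images) is:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Intersection-over-union of two boolean masks of equal shape."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Synthetic example: two slightly offset circular "ablation zones" on a 200x200 grid
yy, xx = np.mgrid[0:200, 0:200]
ct_zone = (xx - 100) ** 2 + (yy - 100) ** 2 < 40 ** 2
model_zone = (xx - 108) ** 2 + (yy - 100) ** 2 < 42 ** 2
print(f"Jaccard index = {jaccard(ct_zone, model_zone):.2f}")
```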
Validation of WIND for a Series of Inlet Flows
NASA Technical Reports Server (NTRS)
Slater, John W.; Abbott, John M.; Cavicchi, Richard H.
2002-01-01
Validation assessments compare WIND CFD simulations to experimental data for a series of inlet flows ranging in Mach number from low subsonic to hypersonic. The validation procedures follow the guidelines of the AIAA. The WIND code performs well in matching the available experimental data. The assessments demonstrate the use of WIND and provide confidence in its use for the analysis of aircraft inlets.
Training, Simulation, the Learning Curve, and How to Reduce Complications in Urology.
Brunckhorst, Oliver; Volpe, Alessandro; van der Poel, Henk; Mottrie, Alexander; Ahmed, Kamran
2016-04-01
Urology is at the forefront of minimally invasive surgery. These procedures present additional learning challenges and have a steep initial learning curve. Training and assessment methods in surgical specialties such as urology are known to lack clear structure and often rely on the differing operative flow experienced by individuals and institutions. This article aims to assess current urology training modalities, to identify the role of simulation within urology, to define and identify the learning curves for various urologic procedures, and to discuss ways to decrease complications in the context of training. A narrative review of the literature was conducted through December 2015 using the PubMed/Medline, Embase, and Cochrane Library databases. Evidence of the validity of training methods in urology includes observation of a procedure, mentorship and fellowship, e-learning, and simulation-based training. Learning curves for various urologic procedures have been recommended based on the available literature. The importance of structured training pathways is highlighted, with integration of modular training to ensure patient safety. Valid training pathways are available in urology. The aim in urology training should be to combine all of the available evidence to produce procedure-specific curricula that utilise the vast array of training methods available to ensure that we continue to improve patient outcomes and reduce complications. The current evidence for different training methods available in urology, including simulation-based training, was reviewed, and the learning curves for various urologic procedures were critically analysed. Based on the evidence, future pathways for urology curricula have been suggested to ensure that patient safety is improved. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.
A review of virtual reality based training simulators for orthopaedic surgery.
Vaughan, Neil; Dubey, Venketesh N; Wainwright, Thomas W; Middleton, Robert G
2016-02-01
This review presents current virtual reality based training simulators for hip, knee and other orthopaedic surgery, including elective and trauma surgical procedures. No previous reviews have focussed on hip and knee orthopaedic simulators. A comparison of existing simulator features is provided to identify what is missing and what is required to improve upon current simulators. In total, 11 hip replacement pre-operative planning tools were analysed, plus 9 hip trauma fracture training simulators. Additionally, 9 knee arthroscopy simulators and 8 other orthopaedic simulators were included for comparison. The findings are that, for orthopaedic surgery simulators in general, there is increasing use of patient-specific virtual models, which reduce the learning curve. Modelling is also being used for patient-specific implant design and manufacture. Simulators are being increasingly validated for assessment as well as training. There are very few training simulators available for hip replacement, yet more advanced virtual reality is being used for other procedures such as hip trauma and drilling. Training simulators for hip replacement and orthopaedic surgery in general lag behind other surgical procedures for which virtual reality has become more common. Further developments are required to bring hip replacement training simulation up to date with other procedures. This suggests there is a gap in the market for a new high fidelity hip replacement and resurfacing training simulator. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Isupov, Inga; McInnes, Matthew D F; Hamstra, Stan J; Doherty, Geoffrey; Gupta, Ashish; Peddle, Susan; Jibri, Zaid; Rakhra, Kawan; Hibbert, Rebecca M
2017-04-01
The purpose of this study is to develop a tool to assess the procedural competence of radiology trainees, with sources of evidence gathered from five categories to support the construct validity of the tool: content, response process, internal structure, relations to other variables, and consequences. A pilot form for assessing procedural competence among radiology residents, known as the RAD-Score tool, was developed by evaluating published literature and using a modified Delphi procedure involving a group of local content experts. The pilot version of the tool was tested by seven radiology department faculty members who evaluated procedures performed by 25 residents at one institution between October 2014 and June 2015. Residents were evaluated while performing multiple procedures in both clinical and simulation settings. The main outcome measure was the percentage of residents who were considered ready to perform procedures independently, with testing conducted to determine differences between levels of training. A total of 105 forms (for 52 procedures performed in a clinical setting and 53 procedures performed in a simulation setting) were collected for a variety of procedures (eight vascular or interventional, 42 body, 12 musculoskeletal, 23 chest, and 20 breast procedures). A statistically significant difference was noted in the percentage of trainees who were rated as being ready to perform a procedure independently (in postgraduate year [PGY] 2, 12% of residents; in PGY3, 61%; in PGY4, 85%; and in PGY5, 88%; p < 0.05); this difference persisted in the clinical and simulation settings. User feedback and psychometric analysis were used to create a final version of the form. This prospective study describes the successful development of a tool for assessing the procedural competence of radiology trainees with high levels of construct validity in multiple domains. Implementation of the tool in the radiology residency curriculum is planned and can play an instrumental role in the transition to competency-based radiology training.
ProMIS augmented reality training of laparoscopic procedures face validity.
Botden, Sanne M B I; Buzink, Sonja N; Schijven, Marlies P; Jakimowicz, Jack J
2008-01-01
Conventional video trainers lack the ability to assess the trainee objectively, but offer modalities that are often missing in virtual reality simulation, such as realistic haptic feedback. The ProMIS augmented reality laparoscopic simulator retains the benefit of a traditional box trainer, by using original laparoscopic instruments and tactile tasks, but additionally generates objective measures of performance. Fifty-five participants performed a "basic skills" and "suturing and knot-tying" task on ProMIS, after which they filled out a questionnaire regarding the realism, haptics, and didactic value of the simulator, on a 5-point Likert scale. The participants were allocated to two experience groups: "experienced" (>50 procedures and >5 sutures; N = 27) and "moderately experienced" (<50 procedures and <5 sutures; N = 28). The general consensus among all participants, particularly the experienced group, was that ProMIS is a useful tool for training (mean: 4.67, SD: 0.48). It was considered very realistic (mean: 4.44, SD: 0.66), with good haptics (mean: 4.10, SD: 0.97) and didactic value (mean: 4.10, SD: 0.65). This study established the face validity of the ProMIS augmented reality simulator for "basic skills" and "suturing and knot-tying" tasks. ProMIS was considered a good tool for training laparoscopic skills for surgical residents and surgeons.
Wavelet-based identification of rotor blades in passage-through-resonance tests
NASA Astrophysics Data System (ADS)
Carassale, Luigi; Marrè-Brunenghi, Michela; Patrone, Stefano
2018-01-01
Turbine blades are critical components in turbo engines and their design process usually includes experimental tests in order to validate and/or update numerical models. These tests are generally carried out on full-scale rotors having some blades instrumented with strain gauges and usually involve a run-up or a run-down phase. The quantification of damping in these conditions is rather challenging for several reasons. In this work, we show through numerical simulations that the usual identification procedures lead to a systematic overestimation of damping, due both to the finite sweep velocity and to the variation of the blade natural frequencies with the rotation speed. To overcome these problems, an identification procedure based on the continuous wavelet transform is proposed and validated through numerical simulation.
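To make the wavelet-based identification concrete, the sketch below applies a Morlet continuous wavelet transform to a synthetic free-decay response and estimates the damping ratio from the logarithmic decay of the wavelet ridge amplitude. It is a minimal constant-frequency illustration; it does not reproduce the paper's treatment of sweep rate or speed-dependent natural frequencies.

```python
import numpy as np

fs, T = 1000.0, 10.0
t = np.arange(0.0, T, 1.0 / fs)
fn, zeta = 20.0, 0.01                        # natural frequency (Hz) and damping ratio
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)
y = np.exp(-zeta * wn * t) * np.cos(wd * t)  # free-decay response (noise-free for clarity)

def morlet_cwt_at_freq(x, fs, freq, w0=6.0):
    """Continuous wavelet coefficient time series at a single analysis frequency."""
    a = w0 * fs / (2 * np.pi * freq)                     # scale matched to `freq`
    k = np.arange(-int(4 * a), int(4 * a) + 1)
    psi = np.pi**-0.25 * np.exp(1j * w0 * k / a) * np.exp(-0.5 * (k / a) ** 2)
    return np.convolve(x, np.conj(psi[::-1]), mode="same") / np.sqrt(a)

W = morlet_cwt_at_freq(y, fs, fn)
ridge = np.abs(W)
# Fit the decay rate on an interior window to avoid wavelet edge effects
i0, i1 = int(1.0 * fs), int(8.0 * fs)
slope = np.polyfit(t[i0:i1], np.log(ridge[i0:i1]), 1)[0]
print(f"estimated damping ratio: {-slope / wn:.4f} (true value {zeta})")
```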
A Hybrid Reality Radiation-free Simulator for Teaching Wire Navigation Skills
Kho, Jenniefer Y.; Johns, Brian D.; Thomas, Geb. W.; Karam, Matthew D.; Marsh, J. Lawrence; Anderson, Donald D.
2016-01-01
Objectives: Surgical simulation is an increasingly important method to facilitate the acquisition of surgical skills. Simulation can be helpful in developing hip fracture fixation skills because it is a common procedure for which performance can be objectively assessed (i.e., the tip-apex distance). The procedure requires fluoroscopic guidance to drill a wire along an osseous trajectory to a precise position within bone. The objective of this study was to assess the construct validity of a novel radiation-free simulator designed to teach wire navigation skills in hip fracture fixation. Methods: Novices (N=30) with limited to no surgical experience in hip fracture fixation and experienced surgeons (N=10) participated. Participants drilled a guide wire into the center-center position of a synthetic femoral head in a hip fracture simulator, using electromagnetic sensors to track the guide wire position. Sensor data were gathered to generate fluoroscopic-like images of the hip and guide wire. Simulator performance of novice and experienced participants was compared to measure construct validity. Results: The simulator was able to discriminate the accuracy in guide wire position between novices and experienced surgeons. Experienced surgeons achieved a more accurate tip-apex distance than novices (13 vs 23 mm, respectively, p=0.009). The magnitude of improvement on successive simulator attempts was dependent on level of expertise; tip-apex distance improved significantly in the novice group, while it was unchanged in the experienced group. Conclusions: This hybrid reality, radiation-free hip fracture simulator, which combines real-world objects with computer-generated imagery, demonstrates construct validity by distinguishing the performance of novices and experienced surgeons. There is a differential effect depending on level of experience, and it could be used as an effective training tool for novice surgeons. PMID:26165262
Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M
2013-02-01
The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals.
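The cross-validation and accuracy/bias reporting recommended above can be illustrated on simulated data with a ridge-regression predictor (similar in spirit to genomic BLUP, though not the authors' full benchmarking suite). The simulation settings and the shrinkage parameter below are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
n, p, n_qtl = 1000, 5000, 100
maf = rng.uniform(0.05, 0.5, size=p)
X = rng.binomial(2, maf, size=(n, p)).astype(float)   # additive genotypes 0/1/2
beta = np.zeros(p)
qtl = rng.choice(p, n_qtl, replace=False)
beta[qtl] = rng.normal(0, 1, size=n_qtl)
g = X @ beta                                          # true breeding values
y = g + rng.normal(0, g.std(), size=n)                # phenotypes, heritability ~ 0.5

acc, bias = [], []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=p / 10.0).fit(X[train], y[train])   # GBLUP-like shrinkage
    pred = model.predict(X[test])
    acc.append(np.corrcoef(pred, g[test])[0, 1])            # prediction accuracy
    bias.append(np.polyfit(pred, y[test], 1)[0])            # regression slope ~ 1 if unbiased
print(f"accuracy: {np.mean(acc):.2f}   bias (slope): {np.mean(bias):.2f}")
```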
NASA Astrophysics Data System (ADS)
Baptista, M.; Teles, P.; Cardoso, G.; Vaz, P.
2014-11-01
Over the last decade, there was a substantial increase in the number of interventional cardiology procedures worldwide, and the corresponding ionizing radiation doses for both the medical staff and patients became a subject of concern. Interventional procedures in cardiology are normally very complex, resulting in long exposure times. Also, these interventions require the operator to work near the patient and, consequently, close to the primary X-ray beam. Moreover, due to the scattered radiation from the patient and the equipment, the medical staff is also exposed to a non-uniform radiation field that can lead to a significant exposure of sensitive body organs and tissues, such as the eye lens, the thyroid and the extremities. In order to better understand the spatial variation of the dose and dose rate distributions during an interventional cardiology procedure, the dose distribution around a C-arm fluoroscopic system, in operation in a cardiac cath lab at a Portuguese hospital, was estimated using both Monte Carlo (MC) simulations and dosimetric measurements. To model and simulate the cardiac cath lab, including the fluoroscopic equipment used to execute interventional procedures, the state-of-the-art MC radiation transport code MCNPX 2.7.0 was used. Subsequently, thermoluminescent detector (TLD) measurements were performed in order to validate and support the simulation results obtained for the cath lab model. The preliminary results presented in this study reveal that the cardiac cath lab model was successfully validated, taking into account the good agreement between MC calculations and TLD measurements. The simulated results for the isodose curves related to the C-arm fluoroscopic system are also consistent with the dosimetric information provided by the equipment manufacturer (Siemens). The adequacy of the implemented computational model used to simulate complex procedures and map dose distributions around the operator and the medical staff is discussed, in view of the optimization principle (and the associated ALARA objective), one of the pillars of the international system of radiological protection.
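A full MCNPX model of a cath lab is far beyond a short example, but the basic Monte Carlo idea the study relies on, tracking individual photons whose interaction sites are sampled from an exponential free-path distribution, can be shown in a few lines. The sketch below estimates the uncollided transmission of a monoenergetic beam through a uniform slab and checks it against the analytic Beer-Lambert value; the attenuation coefficient is a placeholder, not a value from the study.

```python
import numpy as np

rng = np.random.default_rng(5)
mu = 0.2          # linear attenuation coefficient (1/cm), placeholder value
thickness = 10.0  # slab thickness (cm)
n_photons = 1_000_000

# Sample each photon's distance to first interaction from an exponential distribution
free_path = rng.exponential(scale=1.0 / mu, size=n_photons)
transmitted = np.count_nonzero(free_path > thickness)

mc_estimate = transmitted / n_photons
analytic = np.exp(-mu * thickness)
print(f"Monte Carlo uncollided transmission: {mc_estimate:.4f}")
print(f"Analytic exp(-mu*x):                 {analytic:.4f}")
```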
Hartman, Nicholas; Wittler, Mary; Askew, Kim; Manthey, David
2016-01-01
Placement of ultrasound-guided central lines is a critical skill for physicians in several specialties. Improving the quality of care delivered surrounding this procedure demands rigorous measurement of competency, and validated tools to assess performance are essential. Using the iterative, modified Delphi technique and experts in multiple disciplines across the United States, the study team created a 30-item checklist designed to assess competency in the placement of ultrasound-guided internal jugular central lines. Cronbach α was .94, indicating an excellent degree of internal consistency. Further validation of this checklist will require its implementation in simulated and clinical environments. © The Author(s) 2014.
Validity evidence and reliability of a simulated patient feedback instrument.
Schlegel, Claudia; Woermann, Ulrich; Rethans, Jan-Joost; van der Vleuten, Cees
2012-01-27
In the training of healthcare professionals, one of the advantages of communication training with simulated patients (SPs) is the SP's ability to provide direct feedback to students after a simulated clinical encounter. The quality of SP feedback must be monitored, especially because it is well known that feedback can have a profound effect on student performance. Due to the current lack of valid and reliable instruments to assess the quality of SP feedback, our study examined the validity and reliability of one potential instrument, the 'modified Quality of Simulated Patient Feedback Form' (mQSF). Content validity of the mQSF was assessed by inviting experts in the area of simulated clinical encounters to rate the importance of the mQSF items. Moreover, generalizability theory was used to examine the reliability of the mQSF. Our data came from videotapes of clinical encounters between six simulated patients and six students and the ensuing feedback from the SPs to the students. Ten faculty members judged the SP feedback according to the items on the mQSF. Three weeks later, this procedure was repeated with the same faculty members and recordings. All but two items of the mQSF received importance ratings of > 2.5 on a four-point rating scale. A generalizability coefficient of 0.77 was established with two judges observing one encounter. The findings for content validity and reliability with two judges suggest that the mQSF is a valid and reliable instrument to assess the quality of feedback provided by simulated patients.
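The reported generalizability coefficient of 0.77 with two judges can be illustrated with a simple decision-study projection for a persons-by-judges design; the variance components in the sketch below are hypothetical (chosen only so that two judges land near 0.77), and the study's own G-study design was richer than this.

```python
# Sketch: decision-study projection of the relative generalizability coefficient,
# G(n_j) = var_p / (var_p + var_rel / n_j), for different numbers of judges.
var_person = 1.0     # universe-score variance (hypothetical)
var_relative = 0.6   # person-by-judge interaction + residual (hypothetical)

for n_judges in (1, 2, 3, 5):
    g = var_person / (var_person + var_relative / n_judges)
    print(f"{n_judges} judge(s): G = {g:.2f}")
```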
Pilot In-Trail Procedure Validation Simulation Study
NASA Technical Reports Server (NTRS)
Bussink, Frank J. L.; Murdoch, Jennifer L.; Chamberlain, James P.; Chartrand, Ryan; Jones, Kenneth M.
2008-01-01
A Human-In-The-Loop experiment was conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) to investigate the viability of the In-Trail Procedure (ITP) concept from a flight crew perspective, by placing participating airline pilots in a simulated oceanic flight environment. The test subject pilots used new onboard avionics equipment that provided improved information about nearby traffic and enabled them, when specific criteria were met, to request an ITP flight level change referencing one or two nearby aircraft that might otherwise block the flight level change. The subject pilots' subjective assessments of ITP validity and acceptability were measured via questionnaires and discussions, and their objective performance in appropriately selecting, requesting, and performing ITP flight level changes was evaluated for each simulated flight scenario. Objective performance and subjective workload assessment data from the experiment's test conditions were analyzed for statistical and operational significance and are reported in the paper. Based on these results, suggestions are made to further improve the ITP.
Crash Certification by Analysis - Are We There Yet?
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Fasanella, Edwin L.; Lyle, Karen H.
2006-01-01
This paper addresses the issue of crash certification by analysis. This broad topic encompasses many ancillary issues including model validation procedures, uncertainty in test data and analysis models, probabilistic techniques for test-analysis correlation, verification of the mathematical formulation, and establishment of appropriate qualification requirements. This paper will focus on certification requirements for crashworthiness of military helicopters; capabilities of the current analysis codes used for crash modeling and simulation, including some examples of simulations from the literature to illustrate the current approach to model validation; and future directions needed to achieve "crash certification by analysis."
Simulations of motor unit number estimation techniques
NASA Astrophysics Data System (ADS)
Major, Lora A.; Jones, Kelvin E.
2005-06-01
Motor unit number estimation (MUNE) is an electrodiagnostic procedure used to evaluate the number of motor axons connected to a muscle. All MUNE techniques rely on assumptions that must be fulfilled to produce a valid estimate. As there is no gold standard to compare the MUNE techniques against, we have developed a model of the relevant neuromuscular physiology and have used this model to simulate various MUNE techniques. The model allows for a quantitative analysis of candidate MUNE techniques that will hopefully contribute to consensus regarding a standard procedure for performing MUNE.
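For context, the sketch below shows the arithmetic behind the classical incremental-stimulation MUNE estimate (maximal CMAP amplitude divided by the mean single-motor-unit potential inferred from response increments); the amplitudes are fabricated, and the paper's model simulates the underlying physiology rather than this bookkeeping.

```python
# Sketch: incremental-stimulation MUNE with fabricated amplitudes (mV).
import numpy as np

cmap_max = 12.0                                    # maximal compound muscle action potential, mV
incremental_responses = np.array([0.0, 0.31, 0.58, 0.92, 1.18, 1.47])  # submaximal steps, mV

increments = np.diff(incremental_responses)        # each step ~ one additional motor unit
mean_smup = increments.mean()                      # mean single-motor-unit potential size
mune = cmap_max / mean_smup
print(f"mean SMUP = {mean_smup:.2f} mV, MUNE ~ {mune:.0f} motor units")
```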
NASA Technical Reports Server (NTRS)
Stankovic, Ana V.
2003-01-01
Professor Stankovic will be developing and refining Simulink-based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM Alternator in a nuclear electric propulsion vehicle. The control techniques will first be simulated using the validated models and then tried experimentally with hardware available at NASA. Testing and simulation of a 2 kW PM synchronous generator with diode bridge output is described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.
Xiao, Dongjuan; Jakimowicz, Jack J; Albayrak, Armagan; Buzink, Sonja N; Botden, Sanne M B I; Goossens, Richard H M
2014-01-01
Laparoscopic skills can be improved effectively through laparoscopic simulation. The purpose of this study was to verify the face and content validity of a new portable Ergonomic Laparoscopic Skills simulator (Ergo-Lap simulator) and assess the construct validity of the Ergo-Lap simulator in 4 basic skills tasks. Four tasks were evaluated: 2 different translocation exercises (a basic bimanual exercise and a challenging single-handed exercise), an exercise involving tissue manipulation under tension, and a needle-handling exercise. Task performance was analyzed according to speed and accuracy. The participants rated the usability and didactic value of each task and the Ergo-Lap simulator along a 5-point Likert scale. Institutional academic medical center with its affiliated general surgery residency. Forty-six participants were allotted into 2 groups: a Novice group (n = 26, <10 clinical laparoscopic procedures) and an Experienced group (n = 20, >50 clinical laparoscopic procedures). The Experienced group completed all tasks in less time than the Novice group did (p < 0.001, Mann-Whitney U test). The Experienced group also completed tasks 1, 2, and 4 with fewer errors than the Novice group did (p < 0.05). Of the Novice participants, 96% considered that the present Ergo-Lap simulator could encourage more frequent practice of laparoscopic skills. In addition, 92% would like to purchase this simulator. All of the experienced participants confirmed that the Ergo-Lap simulator was easy to use and useful for practicing basic laparoscopic skills in an ergonomic manner. Most (95%) of these respondents would recommend this simulator to other surgical trainees. This Ergo-Lap simulator with multiple tasks was rated as a useful training tool that can distinguish between various levels of laparoscopic expertise. The Ergo-Lap simulator is also an inexpensive alternative, which surgical trainees could use to update their skills in the skills laboratory, at home, or in the office. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
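The group comparisons above rely on the Mann-Whitney U test; a minimal sketch of that analysis on fabricated task-completion times is shown below (the values and group sizes are illustrative only).

```python
# Sketch: nonparametric comparison of Experienced vs. Novice task times (fabricated, seconds).
import numpy as np
from scipy.stats import mannwhitneyu

novice_times = np.array([182, 240, 205, 199, 310, 275, 221, 260])
experienced_times = np.array([120, 98, 143, 110, 131, 125, 102, 117])

u_stat, p_value = mannwhitneyu(experienced_times, novice_times, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.4f}")
```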
Modeling scintillator and WLS fiber signals for fast Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Sánchez, F. A.; Medina-Tanco, G.
2010-08-01
In this work we present a fast, robust and flexible procedure to simulate electronic signals of scintillator units: plastic scintillator material embedded with a wavelength shifter optical fiber coupled to a photo-multiplier tube which, in turn, is plugged into a front-end electronics board. The simple rationale behind the simulation chain allows the procedure to be adapted to a broad range of detectors based on such units. We show that, in order to produce realistic results, the simulation parameters can be properly calibrated against laboratory measurements and used thereafter as input to the simulations. Simulated signals of atmospheric background cosmic ray muons are presented and their main features analyzed and validated using actual measured data. Conversely, for any given practical application, the present simulation scheme can be used to find an adequate combination of photo-multiplier tube and optical fiber at the prototyping stage.
Validation of Mission Plans Through Simulation
NASA Astrophysics Data System (ADS)
St-Pierre, J.; Melanson, P.; Brunet, C.; Crabtree, D.
2002-01-01
The purpose of a spacecraft mission planning system is to automatically generate safe and optimized mission plans for a single spacecraft, or more functioning in unison. The system verifies user input syntax, conformance to commanding constraints, absence of duty cycle violations, timing conflicts, state conflicts, etc. Present-day constraint-based systems with state-based predictive models use verification rules derived from expert knowledge. A familiar solution found in Mission Operations Centers is to complement the planning system with a high fidelity spacecraft simulator. Often a dedicated workstation, the simulator is frequently used for operator training and procedure validation, and may be interfaced to actual control stations with command and telemetry links. While there are distinct advantages to having a planning system offer realistic operator training using the actual flight control console, physical verification of data transfer across layers and procedure validation, experience has revealed some drawbacks and inefficiencies in ground segment operations. With these considerations, two simulation-based mission plan validation projects are under way at the Canadian Space Agency (CSA): RVMP and ViSION. The tools proposed in these projects will automatically run scenarios and provide execution reports to operations planning personnel, prior to actual command upload. This can provide an important safeguard for system or human errors that can only be detected with high fidelity, interdependent spacecraft models running concurrently. The core element common to these projects is a spacecraft simulator, built with off-the-shelf components such as CAE's Real-Time Object-Based Simulation Environment (ROSE) technology, MathWorks' MATLAB/Simulink, and Analytical Graphics' Satellite Tool Kit (STK). To complement these tools, additional components were developed, such as an emulated Spacecraft Test and Operations Language (STOL) interpreter and CCSDS TM/TC encoders and decoders. This paper discusses the use of simulation in the context of space mission planning, describes the projects under way and proposes additional avenues of investigation and development.
2016-06-16
procedure. The predictive capabilities of the high-resolution computational fluid dynamics (CFD) simulations of urban flow are validated against ... turbulence over a 2D building array using high-resolution CFD and a distributed drag force approach ...
Hovering Dual-Spin Vehicle Groundwork for Bias Momentum Sizing Validation Experiment
NASA Technical Reports Server (NTRS)
Rothhaar, Paul M.; Moerder, Daniel D.; Lim, Kyong B.
2008-01-01
Angular bias momentum offers significant stability augmentation for hovering flight vehicles. The reliance of the vehicle on thrust vectoring for agility and disturbance rejection is greatly reduced with significant levels of stored angular momentum in the system. A methodical procedure for bias momentum sizing has been developed in previous studies. This current study provides groundwork for experimental validation of that method using an experimental vehicle called the Dual-Spin Test Device, a thrust-levitated platform. Using measured data the vehicle's thrust vectoring units are modeled and a gust environment is designed and characterized. Control design is discussed. Preliminary experimental results of the vehicle constrained to three rotational degrees of freedom are compared to simulation for a case containing no bias momentum to validate the simulation. A simulation of a bias momentum dominant case is presented.
Data Processing for Atmospheric Phase Interferometers
NASA Technical Reports Server (NTRS)
Acosta, Roberto J.; Nessel, James A.; Morabito, David D.
2009-01-01
This paper presents a detailed discussion of calibration procedures used to analyze data recorded from a two-element atmospheric phase interferometer (API) deployed at Goldstone, California. In addition, we describe the data products derived from those measurements that can be used for site intercomparison and atmospheric modeling. Simulated data is used to demonstrate the effectiveness of the proposed algorithm and as a means for validating our procedure. A study of the effect of block size filtering is presented to justify our process for isolating atmospheric fluctuation phenomena from other system-induced effects (e.g., satellite motion, thermal drift). A simulated 24 hr interferometer phase data time series is analyzed to illustrate the step-by-step calibration procedure and desired data products.
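A minimal sketch of the block-size filtering idea described above: subtracting a long running mean from the interferometer phase removes slow system-induced effects (satellite motion, thermal drift) and retains the faster atmospheric fluctuations. The sample rate, block length, and simulated series below are assumptions, not the paper's processing parameters.

```python
# Sketch: block-size (running-mean) filtering of a simulated interferometer phase record.
import numpy as np

fs = 1.0                                   # samples per second (assumed)
t = np.arange(0, 24 * 3600, 1 / fs)        # simulated 24 h record
rng = np.random.default_rng(1)
slow_drift = 5.0 * np.sin(2 * np.pi * t / 86400)           # slow diurnal/system term, deg
atmosphere = np.cumsum(rng.normal(0, 0.02, t.size))         # random-walk-like fluctuation, deg
phase = slow_drift + atmosphere

block = int(600 * fs)                      # 10-minute block (assumed filter scale)
kernel = np.ones(block) / block
running_mean = np.convolve(phase, kernel, mode="same")
fluctuation = phase - running_mean         # retained atmospheric part

print(f"RMS phase fluctuation after filtering: {fluctuation.std():.2f} deg")
```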
Crochet, Patrice; Aggarwal, Rajesh; Knight, Sophie; Berdah, Stéphane; Boubli, Léon; Agostini, Aubert
2017-06-01
Substantial evidence in the scientific literature supports the use of simulation for surgical education. However, curricula are lacking for complex laparoscopic procedures in gynecology. The objective was to evaluate the validity of a program that reproduces key specific components of a laparoscopic hysterectomy (LH) procedure until colpotomy on a virtual reality (VR) simulator and to develop an evidence-based and stepwise training curriculum. This prospective cohort study was conducted in a Marseille teaching hospital. Forty participants were enrolled and were divided into experienced (senior surgeons who had performed more than 100 LH; n = 8), intermediate (surgical trainees who had performed 2-10 LH; n = 8) and inexperienced (n = 24) groups. Baselines were assessed on a validated basic task. Participants were tested for the LH procedure on a high-fidelity VR simulator. Validity evidence was proposed as the ability to differentiate between the three levels of experience. Inexperienced subjects performed ten repetitions for learning curve analysis. Proficiency measures were based on experienced surgeons' performances. Outcome measures were simulator-derived metrics and Objective Structured Assessment of Technical Skills (OSATS) scores. Quantitative analysis found significant inter-group differences between the experienced, intermediate and inexperienced groups for time (1369, 2385 and 3370 s; p < 0.001), number of movements (2033, 3195 and 4056; p = 0.001), path length (3390, 4526 and 5749 cm; p = 0.002), idle time (357, 654 and 747 s; p = 0.001), respect for tissue (24, 40 and 84; p = 0.01) and number of bladder injuries (0.13, 0 and 4.27; p < 0.001). Learning curves plateaued at the 2nd to 6th repetition. Further qualitative analysis found significant inter-group OSATS score differences at the first repetition (22, 15 and 8, respectively; p < 0.001) and the second repetition (25.5, 19.5 and 14; p < 0.001). The VR program for LH accrued validity evidence and allowed the development of a training curriculum using a structured scientific methodology.
Sugand, Kapil; Wescott, Robert A; Carrington, Richard; Hart, Alister; Van Duren, Bernard H
2018-05-10
Background and purpose - Simulation is an adjunct to surgical education. However, nothing can accurately simulate fluoroscopic procedures in orthopedic trauma. Current options for training with fluoroscopy are either intraoperative, which risks radiation, or use of expensive and unrealistic virtual reality simulators. We introduce FluoroSim, an inexpensive digital fluoroscopy simulator without the need for radiation. Patients and methods - This was a multicenter study with 26 surgeons in which everyone completed 1 attempt at inserting a guide-wire into a femoral dry bone using surgical equipment and FluoroSim. 5 objective performance metrics were recorded in real-time to assess construct validity. The surgeons were categorized based on the number of dynamic hip screws (DHS) performed: novices (< 10), intermediates (10-39) and experts (≥ 40). A 7-point Likert scale questionnaire assessed the face and content validity of FluoroSim. Results - Construct validity was present for 2 clinically validated metrics in DHS surgery. Experts and intermediates statistically significantly outperformed novices for tip-apex distance and for cut-out rate. Novices took the least number of radiographs. Face and content validity were also observed. Interpretation - FluoroSim discriminated between novice and intermediate or expert surgeons based on tip-apex distance and cut-out rate while demonstrating face and content validity. FluoroSim provides a useful adjunct to orthopedic training. Our findings concur with results from studies using other simulation modalities. FluoroSim can be implemented for education easily and cheaply away from theater in a safe and controlled environment.
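One of the clinically validated metrics mentioned above, the tip-apex distance, is conventionally computed by summing the tip-to-apex gaps measured on the AP and lateral radiographs, each scaled by the known screw diameter to correct for magnification; the sketch below applies that formula to fabricated measurements.

```python
# Sketch: tip-apex distance (TAD) from AP and lateral radiograph measurements (fabricated, mm).
true_screw_diameter = 8.0          # known implant dimension, mm

x_ap, d_ap = 11.0, 9.5             # tip-apex gap and apparent screw diameter on the AP view, mm
x_lat, d_lat = 9.0, 9.2            # same measurements on the lateral view, mm

tad = x_ap * (true_screw_diameter / d_ap) + x_lat * (true_screw_diameter / d_lat)
print(f"tip-apex distance = {tad:.1f} mm")   # larger values are associated with screw cut-out
```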
Longitudinal train dynamics model for a rail transit simulation system
Wang, Jinghui; Rakha, Hesham A.
2018-01-01
The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics, and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.
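As a hedged illustration of posing calibration as a constrained nonlinear optimization, the sketch below fits a simple power-limited tractive effort / Davis-type resistance model to synthetic speed-acceleration observations with bounded least squares; the model form, bounds, and data are assumptions, not the paper's formulation.

```python
# Sketch: bounded nonlinear least-squares calibration of a toy longitudinal train model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
v_obs = np.linspace(2.0, 25.0, 60)                 # observed speeds, m/s

def model_accel(v, p):
    """Acceleration = min(adhesion limit, power-to-mass ratio / v) - Davis-type resistance."""
    r0, r1, r2, spec_power, a_max = p
    return np.minimum(a_max, spec_power / v) - (r0 + r1 * v + r2 * v**2)

true_p = (0.05, 0.002, 0.0004, 8.0, 1.2)           # illustrative "ground truth"
a_obs = model_accel(v_obs, true_p) + rng.normal(0, 0.02, v_obs.size)   # synthetic field data

def cost(p):
    return np.sum((model_accel(v_obs, p) - a_obs) ** 2)

bounds = [(0, 1), (0, 0.1), (0, 0.01), (1, 20), (0.5, 2.0)]
fit = minimize(cost, x0=[0.1, 0.01, 0.001, 5.0, 1.0], bounds=bounds, method="L-BFGS-B")
print("calibrated parameters:", np.round(fit.x, 4))
```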
Mixed reality ventriculostomy simulation: experience in neurosurgical residency.
Hooten, Kristopher G; Lister, J Richard; Lombard, Gwen; Lizdas, David E; Lampotang, Samsun; Rajon, Didier A; Bova, Frank; Murad, Gregory J A
2014-12-01
Medicine and surgery are turning toward simulation to improve on limited patient interaction during residency training. Many simulators today use virtual reality with augmented haptic feedback with little to no physical elements. In a collaborative effort, the University of Florida Department of Neurosurgery and the Center for Safety, Simulation & Advanced Learning Technologies created a novel "mixed" physical and virtual simulator to mimic the ventriculostomy procedure. The simulator contains all the physical components encountered for the procedure with superimposed 3-D virtual elements for the neuroanatomical structures. To introduce the ventriculostomy simulator and its validation as a necessary training tool in neurosurgical residency. We tested the simulator in more than 260 residents. An algorithm combining time and accuracy was used to grade performance. Voluntary postperformance surveys were used to evaluate the experience. Results demonstrate that more experienced residents have statistically significant better scores and completed the procedure in less time than inexperienced residents. Survey results revealed that most residents agreed that practice on the simulator would help with future ventriculostomies. This mixed reality simulator provides a real-life experience, and will be an instrumental tool in training the next generation of neurosurgeons. We have now implemented a standard where incoming residents must prove efficiency and skill on the simulator before their first interaction with a patient.
Simulators for training in ultrasound guided procedures.
Farjad Sultan, Syed; Shorten, George; Iohom, Gabrielle
2013-06-01
The four major categories of skill sets associated with proficiency in ultrasound guided regional anaesthesia are 1) understanding device operations, 2) image optimization, 3) image interpretation and 4) visualization of needle insertion and injection of the local anesthetic solution. Of these, visualization of needle insertion and injection of local anaesthetic solution can be practiced using simulators and phantoms. This survey of existing simulators summarizes advantages and disadvantages of each. Current deficits pertain to the validation process.
Virtual reality-assisted robotic surgery simulation.
Albani, Justin M; Lee, David I
2007-03-01
For more than a decade, advancing computer technologies have allowed incorporation of virtual reality (VR) into surgical training. This has become especially important in training for laparoscopic procedures, which often are complex and leave little room for error. With the advent of robotic surgery and the development and prevalence of a commercial surgical system (da Vinci robot; Intuitive Surgical, Sunnyvale, CA), a valid VR-assisted robotic surgery simulator could minimize the steep learning curve associated with many of these complex procedures and thus enable better outcomes. To date, such simulation does not exist; however, several agencies and corporations are involved in making this dream a reality. We review the history and progress of VR simulation in surgical training, its promising applications in robotic-assisted surgery, and the remaining challenges to implementation.
The Simplified Aircraft-Based Paired Approach With the ALAS Alerting Algorithm
NASA Technical Reports Server (NTRS)
Perry, Raleigh B.; Madden, Michael M.; Torres-Pomales, Wilfredo; Butler, Ricky W.
2013-01-01
This paper presents the results of an investigation of a proposed concept for closely spaced parallel runways called the Simplified Aircraft-based Paired Approach (SAPA). This procedure depends upon a new alerting algorithm called the Adjacent Landing Alerting System (ALAS). This study used both low fidelity and high fidelity simulations to validate the SAPA procedure and test the performance of the new alerting algorithm. The low fidelity simulation enabled a determination of minimum approach distance for the worst case over millions of scenarios. The high fidelity simulation enabled an accurate determination of timings and minimum approach distance in the presence of realistic trajectories, communication latencies, and total system error for 108 test cases. The SAPA procedure and the ALAS alerting algorithm were applied to the 750-ft parallel spacing (e.g., SFO 28L/28R) approach problem. With the SAPA procedure as defined in this paper, this study concludes that a 750-ft application does not appear to be feasible, but preliminary results for 1000-ft parallel runways look promising.
ERIC Educational Resources Information Center
Giardina, Max
This paper examines the implementation of 3D simulation through the development of the Avenor Virtual Trainer and how situated learning and fidelity of model representation become the basis for more effective Interactive Multimedia Training Situations. The discussion will focus on some principles concerned with situated training, simulation,…
NASA Astrophysics Data System (ADS)
Yi, Dake; Wang, TzuChiang
2018-06-01
In the paper, a new procedure is proposed to investigate three-dimensional fracture problems of a thin elastic plate with a long through-the-thickness crack under remote uniform tensile loading. The new procedure combines a new analytical method with highly accurate finite element simulations. In the theoretical analysis, three-dimensional Maxwell stress functions are employed in order to derive the three-dimensional crack tip fields. Based on this analysis, an equation describing the relationship among the three-dimensional J-integral J(z), the stress intensity factor K(z) and the tri-axial stress constraint level Tz(z) is derived first. In the finite element simulations, a fine mesh of 153,360 elements is constructed to compute the stress field near the crack front, J(z) and Tz(z). Numerical results show that in the plane very close to the free surface, the K field solution is still valid for the in-plane stresses. Comparison with the numerical results shows that the analytical results are valid.
Computer Simulations of Coronary Blood Flow Through a Constriction
2014-03-01
interventional procedures (e.g., stent deployment). Building off previous models that have been partially validated with experimental data, this thesis continues to develop the ... the artery and increase blood flow. Generally a stent, or mesh wire tube, is permanently inserted in order to scaffold open the artery wall
William H. Cooke; Andrew J. Hartsell
2000-01-01
Wall-to-wall Landsat TM classification efforts in Georgia require field validation. Validation using FIA data was tested by developing a new crown modeling procedure. A methodology is under development at the Southern Research Station to model crown diameter using Forest Health Monitoring data. These models are used to simulate the proportion of tree crowns that...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geisler-Moroder, David; Lee, Eleanor S.; Ward, Gregory J.
2016-08-29
The Five-Phase Method (5-pm) for simulating complex fenestration systems with Radiance is validated against field measurements. The capability of the method to predict workplane illuminances, vertical sensor illuminances, and glare indices derived from captured and rendered high dynamic range (HDR) images is investigated. To be able to accurately represent the direct sun part of the daylight not only in sensor point simulations, but also in renderings of interior scenes, the 5-pm calculation procedure was extended. The validation shows that the 5-pm is superior to the Three-Phase Method for predicting horizontal and vertical illuminance sensor values as well as glare indices derived from rendered images. Even with input data from global and diffuse horizontal irradiance measurements only, daylight glare probability (DGP) values can be predicted within 10% error of measured values for most situations.
Navier-Stokes and viscous-inviscid interaction
NASA Technical Reports Server (NTRS)
Steger, Joseph L.; Vandalsem, William R.
1989-01-01
Some considerations toward developing numerical procedures for simulating viscous compressible flows are discussed. Both Navier-Stokes and boundary layer field methods are considered. Because efficient viscous-inviscid interaction methods have been difficult to extend to complex 3-D flow simulations, Navier-Stokes procedures are more frequently being utilized even though they require considerably more work per grid point. It would seem a mistake, however, not to make use of the more efficient approximate methods in those regions in which they are clearly valid. Ideally, a general purpose compressible flow solver that can optionally take advantage of approximate solution methods would suffice, both to improve accuracy and efficiency. Some potentially useful steps toward this goal are described: a generalized 3-D boundary layer formulation and the fortified Navier-Stokes procedure.
Planck 2015 results. VI. LFI mapmaking
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chary, R.-R.; Christensen, P. R.; Colombi, S.; Colombo, L. P. L.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Mazzotta, P.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Novikov, D.; Novikov, I.; Paci, F.; Pagano, L.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Pierpaoli, E.; Pietrobon, D.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vassallo, T.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
This paper describes the mapmaking procedure applied to Planck Low Frequency Instrument (LFI) data. The mapmaking step takes as input the calibrated timelines and pointing information. The main products are sky maps of I, Q, and U Stokes components. For the first time, we present polarization maps at LFI frequencies. The mapmaking algorithm is based on a destriping technique, which is enhanced with a noise prior. The Galactic region is masked to reduce errors arising from bandpass mismatch and high signal gradients. We apply horn-uniform radiometer weights to reduce the effects of beam-shape mismatch. The algorithm is the same as used for the 2013 release, apart from small changes in parameter settings. We validate the procedure through simulations. Special emphasis is put on the control of systematics, which is particularly important for accurate polarization analysis. We also produce low-resolution versions of the maps and corresponding noise covariance matrices. These serve as input in later analysis steps and parameter estimation. The noise covariance matrices are validated through noise Monte Carlo simulations. The residual noise in the map products is characterized through analysis of half-ring maps, noise covariance matrices, and simulations.
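A toy illustration of the destriping idea described above: correlated noise is modelled as a constant offset per time chunk, and the offsets and the binned map are estimated by alternating updates. This is a drastic simplification of the LFI pipeline (no noise prior, no polarization, a single detector, a 1D sky), with all sizes chosen arbitrarily.

```python
# Sketch: toy destriper -- estimate per-chunk baseline offsets and a binned 1D map.
import numpy as np

rng = np.random.default_rng(3)
npix, nsamp, chunk = 128, 40000, 500
sky = rng.normal(0, 1, npix)                          # input 1D "sky"
pointing = rng.integers(0, npix, nsamp)               # pixel seen by each time sample
offsets_true = np.repeat(rng.normal(0, 2, nsamp // chunk), chunk)
tod = sky[pointing] + offsets_true + rng.normal(0, 0.5, nsamp)

def bin_map(data):
    signal = np.bincount(pointing, weights=data, minlength=npix)
    hits = np.bincount(pointing, minlength=npix)
    return signal / hits

offsets = np.zeros(nsamp)
for _ in range(20):                                    # alternate map and baseline estimates
    sky_est = bin_map(tod - offsets)
    resid = (tod - sky_est[pointing]).reshape(-1, chunk)
    offsets = np.repeat(resid.mean(axis=1), chunk)
    offsets -= offsets.mean()                          # remove the undetermined global offset
sky_est = bin_map(tod - offsets)

naive = bin_map(tod)
naive_rms = np.std(naive - naive.mean() - (sky - sky.mean()))
destr_rms = np.std(sky_est - sky_est.mean() - (sky - sky.mean()))
print(f"residual map RMS: naive = {naive_rms:.2f}, destriped = {destr_rms:.2f}")
```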
Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.
2005-01-01
In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures; the development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation, using marching procedures and Green's function techniques, are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's Initiative to develop infrastructure for human exploration, with the first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.
Engineering uses of physics-based ground motion simulations
Baker, Jack W.; Luco, Nicolas; Abrahamson, Norman A.; Graves, Robert W.; Maechling, Phillip J.; Olsen, Kim B.
2014-01-01
This paper summarizes validation methodologies focused on enabling ground motion simulations to be used with confidence in engineering applications such as seismic hazard analysis and dynamic analysis of structural and geotechnical systems. Numerical simulation of ground motion from large earthquakes, utilizing physics-based models of earthquake rupture and wave propagation, is an area of active research in the earth science community. Refinement and validation of these models require collaboration between earthquake scientists and engineering users, and testing/rating methodologies for simulated ground motions to be used with confidence in engineering applications. This paper provides an introduction to this field and an overview of current research activities being coordinated by the Southern California Earthquake Center (SCEC). These activities are related both to advancing the science and computational infrastructure needed to produce ground motion simulations, as well as to engineering validation procedures. Current research areas and anticipated future achievements are also discussed.
Kunz, Derek; Pariyadath, Manoj; Wittler, Mary; Askew, Kim; Manthey, David; Hartman, Nicholas
2017-06-01
Arthrocentesis is an important skill for physicians in multiple specialties. Recent studies indicate a superior safety and performance profile for this procedure using ultrasound guidance for needle placement, and improving quality of care requires a valid measurement of competency using this modality. We endeavored to create a validated tool to assess the performance of this procedure using the modified Delphi technique and experts in multiple disciplines across the United States. We derived a 22-item checklist designed to assess competency for the completion of ultrasound-guided arthrocentesis, which demonstrated a Cronbach's alpha of 0.89, indicating an excellent degree of internal consistency. Although we were able to demonstrate content validity for this tool, further validity evidence should be acquired after the tool is used and studied in clinical and simulated contexts. © 2017 by the American Institute of Ultrasound in Medicine.
NASA Technical Reports Server (NTRS)
Oman, R. A.; Foreman, K. M.; Leng, J.; Hopkins, H. B.
1975-01-01
A plan and some preliminary analysis for the accurate simulation of pressure distributions on the afterbody/nozzle portions of a hypersonic scramjet vehicle are described. The objectives fulfilled were to establish the standards of similitude for a hydrogen/air scramjet exhaust interacting with a vehicle afterbody, determine an experimental technique for validation of the procedures that will be used in conventional wind tunnel facilities, suggest a program of experiments for proof of the concept, and explore any unresolved problems in the proposed simulation procedures. It is shown that true enthalpy, Reynolds number, and nearly exact chemistry can be provided in the exhaust flow for the flight regime from Mach 4 to 10 by a detonation tube simulation. A detailed discussion of the required similarity parameters leads to the conclusion that substitute gases can be used as the simulated exhaust gas in a wind tunnel to achieve the correct interaction forces and moments.
Quality control and assurance for validation of DOS/I measurements
NASA Astrophysics Data System (ADS)
Cerussi, Albert; Durkin, Amanda; Kwong, Richard; Quang, Timothy; Hill, Brian; Tromberg, Bruce J.; MacKinnon, Nick; Mantulin, William W.
2010-02-01
Ongoing multi-center clinical trials are crucial for Biophotonics to gain acceptance in medical imaging. In these trials, quality control (QC) and assurance (QA) are key to success and provide "data insurance". Quality control and assurance deal with standardization, validation, and compliance of procedures, materials and instrumentation. Specifically, QC/QA involves systematic assessment of testing materials, instrumentation performance, standard operating procedures, data logging, analysis, and reporting. QC and QA are important for FDA accreditation and acceptance by the clinical community. Our Biophotonics research in the Network for Translational Research in Optical Imaging (NTROI) program for breast cancer characterization focuses on QA/QC issues primarily related to the broadband Diffuse Optical Spectroscopy and Imaging (DOS/I) instrumentation, because this is an emerging technology with limited standardized QC/QA in place. In the multi-center trial environment, we implement QA/QC procedures: 1. Standardize and validate calibration standards and procedures. (DOS/I technology requires both frequency domain and spectral calibration procedures using tissue simulating phantoms and reflectance standards, respectively.) 2. Standardize and validate data acquisition, processing and visualization (optimize instrument software-EZDOS; centralize data processing) 3. Monitor, catalog and maintain instrument performance (document performance; modularize maintenance; integrate new technology) 4. Standardize and coordinate trial data entry (from individual sites) into centralized database 5. Monitor, audit and communicate all research procedures (database, teleconferences, training sessions) between participants ensuring "calibration". This manuscript describes our ongoing efforts, successes and challenges implementing these strategies.
Chae, Sanghoon; Jung, Sung-Weon
2018-01-01
A survey of 67 experienced orthopedic surgeons indicated that precise portal placement was the most important skill in arthroscopic surgery. However, none of the currently available virtual reality simulators include simulation/training in portal placement, including haptic feedback of the necessary puncture force. This study aimed to: (1) measure the in vivo force and stiffness during a portal placement procedure in an actual operating room and (2) implement active haptic simulation of a portal placement procedure using the measured in vivo data. We measured the force required for port placement and the stiffness of the joint capsule during portal placement procedures performed by an experienced arthroscopic surgeon. Based on the acquired mechanical property values, we developed a cable-driven active haptic simulator designed to train the portal placement skill and evaluated the validity of the simulated haptics. Ten patients diagnosed with rotator cuff tears were enrolled in this experiment. The maximum peak force and joint capsule stiffness during posterior portal placement procedures were 66.46 (± 10.76) N and 2560.82 (± 252.92) N/m, respectively. We then designed an active haptic simulator using the acquired data. Our cable-driven mechanism structure had a friction force of 3.763 ± 0.341 N, less than 6% of the mean puncture force. Simulator performance was evaluated by comparing the target stiffness and force with the stiffness and force reproduced by the device. R-squared values were 0.998 for puncture force replication and 0.902 for stiffness replication, indicating that the in vivo data can be used to implement a realistic haptic simulator. PMID:29494691
Application and Validation of Workload Assessment Techniques
1993-03-01
technical report documents the process and outcome of meeting this objective. Procedure: A series of eight separate studies was conducted using three ... development process. The task analysis and simulation technique was shown to have the capability to track empirical workload ratings. More research is ... operator workload during the systems acquisition process, and (b) a pamphlet for the managers of Army systems that describes the need and some procedures
Salas, Rosa Ana; Pleite, Jorge
2013-01-01
We propose a specific procedure to compute the inductance of a toroidal ferrite core as a function of the excitation current. The study includes the linear, intermediate and saturation regions. The procedure combines the use of Finite Element Analysis in 2D and experimental measurements. Through the two dimensional (2D) procedure we are able to achieve convergence, a reduction of computational cost and equivalent results to those computed by three dimensional (3D) simulations. The validation is carried out by comparing 2D, 3D and experimental results. PMID:28809283
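The quantity of interest above, inductance as a function of excitation current, can be obtained from the flux-linkage curve that either a 2D finite element solution or measurements provide; the sketch below differentiates a fabricated arctangent saturation curve to get the incremental inductance in the linear and saturated regions (all parameter values are made up).

```python
# Sketch: current-dependent incremental inductance L(i) = d(lambda)/di from a flux-linkage curve.
import numpy as np

i = np.linspace(0.0, 2.0, 201)                    # excitation current, A
lambda_sat, i_knee = 5e-3, 0.3                    # hypothetical saturation level (Wb-turns) and knee current
flux_linkage = lambda_sat * np.arctan(i / i_knee) # fabricated lambda(i) saturation curve

L_incremental = np.gradient(flux_linkage, i)      # numerical derivative, in henries
print(f"L at 0.05 A ~ {L_incremental[5] * 1e3:.2f} mH (linear region)")
print(f"L at 1.5 A  ~ {L_incremental[150] * 1e3:.2f} mH (saturation)")
```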
VEEP - Vehicle Economy, Emissions, and Performance program
NASA Technical Reports Server (NTRS)
Heimburger, D. A.; Metcalfe, M. A.
1977-01-01
VEEP is a general-purpose discrete event simulation program being developed to study the performance, fuel economy, and exhaust emissions of a vehicle modeled as a collection of its separate components. It is written in SIMSCRIPT II.5. The purpose of this paper is to present the design methodology, describe the simulation model and its components, and summarize the preliminary results. Topics include chief programmer team concepts, the SDDL design language, program portability, user-oriented design, the program's user command syntax, the simulation procedure, and model validation.
SATS HVO Concept Validation Experiment
NASA Technical Reports Server (NTRS)
Consiglio, Maria; Williams, Daniel; Murdoch, Jennifer; Adams, Catherine
2005-01-01
A human-in-the-loop simulation experiment was conducted at the NASA Langley Research Center's (LaRC) Air Traffic Operations Lab (ATOL) in an effort to comprehensively validate tools and procedures intended to enable the Small Aircraft Transportation System, Higher Volume Operations (SATS HVO) concept of operations. The SATS HVO procedures were developed to increase the rate of operations at non-towered, non-radar airports in near all-weather conditions. A key element of the design is the establishment of a volume of airspace around designated airports where pilots accept responsibility for self-separation. Flights operating at these airports are given approach sequencing information computed by a ground-based automated system. The SATS HVO validation experiment was conducted in the ATOL during the spring of 2004 in order to determine if a pilot can safely and proficiently fly an airplane while performing SATS HVO procedures. Comparative measures of flight path error, perceived workload and situation awareness were obtained for two types of scenarios. Baseline scenarios were representative of today's system utilizing procedural separation, where air traffic control grants one approach or departure clearance at a time. SATS HVO scenarios represented approaches and departure procedures as described in the SATS HVO concept of operations. Results from the experiment indicate that low-time pilots were able to fly SATS HVO procedures and maintain self-separation as safely and proficiently as flying today's procedures.
Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects
NASA Astrophysics Data System (ADS)
Jarndal, Anwar; Ghannouchi, Fadhel M.
2016-09-01
In this paper, an improved modeling approach has been developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations showed very good agreement with RF large-signal measurements.
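To illustrate the automated parameter-extraction step, the sketch below fits a simplified Angelov-style tanh drain-current expression to synthesized I-V points, with SciPy's differential evolution standing in for the paper's genetic algorithm; the model form, parameter bounds, and "measured" data are assumptions and omit the trapping and thermal terms of the actual model.

```python
# Sketch: global-optimizer extraction of drain-current model parameters from synthetic I-V data.
import numpy as np
from scipy.optimize import differential_evolution

def ids(params, vgs, vds):
    """Simplified Angelov-style drain current (A)."""
    ipk, p1, vpk, alpha, lam = params
    return ipk * (1 + np.tanh(p1 * (vgs - vpk))) * np.tanh(alpha * vds) * (1 + lam * vds)

vgs, vds = np.meshgrid(np.linspace(-3.0, 0.0, 7), np.linspace(0.0, 20.0, 21))
true_params = (0.4, 1.5, -1.5, 0.8, 0.01)                 # illustrative "device"
rng = np.random.default_rng(4)
i_meas = ids(true_params, vgs, vds) + rng.normal(0, 2e-3, vgs.shape)

def cost(p):
    return np.mean((ids(p, vgs, vds) - i_meas) ** 2)      # mean squared current error

bounds = [(0.1, 1.0), (0.1, 5.0), (-3.0, 0.0), (0.1, 3.0), (0.0, 0.05)]
result = differential_evolution(cost, bounds, seed=0)
print("extracted parameters:", np.round(result.x, 3))
```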
A design procedure for the handling qualities optimization of the X-29A aircraft
NASA Technical Reports Server (NTRS)
Bosworth, John T.; Cox, Timothy H.
1989-01-01
The techniques used to improve the pitch-axis handling qualities of the X-29A wing-canard-planform fighter aircraft are reviewed. The aircraft and its FCS are briefly described, and the design method, which works within the existing FCS architecture, is characterized in detail. Consideration is given to the selection of design goals and design variables, the definition and calculation of the cost function, the validation of the mathematical model on the basis of flight-test data, and the validation of the improved design by means of nonlinear simulations. Flight tests of the improved design are shown to verify the simulation results.
Verification and Validation of EnergyPlus Phase Change Material Model for Opaque Wall Assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.
2012-08-01
Phase change materials (PCMs) represent a technology that may reduce peak loads and HVAC energy consumption in buildings. A few building energy simulation programs have the capability to simulate PCMs, but their accuracy has not been completely tested. This study shows the procedure used to verify and validate the PCM model in EnergyPlus using a similar approach as dictated by ASHRAE Standard 140, which consists of analytical verification, comparative testing, and empirical validation. This process was valuable, as two bugs were identified and fixed in the PCM model, and version 7.1 of EnergyPlus will have a validated PCM model. Preliminary results using whole-building energy analysis show that careful analysis should be done when designing PCMs in homes, as their thermal performance depends on several variables such as PCM properties and location in the building envelope.
CAG12 - A CSCM based procedure for flow of an equilibrium chemically reacting gas
NASA Technical Reports Server (NTRS)
Green, M. J.; Davy, W. C.; Lombard, C. K.
1985-01-01
The Conservative Supra Characteristic Method (CSCM), an implicit upwind Navier-Stokes algorithm, is extended to the numerical simulation of flows in chemical equilibrium. The resulting computer code, known as Chemistry and Gasdynamics Implicit - Version 2 (CAG12), is described. First-order accurate results are presented for inviscid and viscous Mach 20 flows of air past a hemisphere-cylinder. The solution procedure captures the bow shock in a chemically reacting gas, a technique that is needed for simulating high altitude, rarefied flows. In an initial effort to validate the code, the inviscid results are compared with published gasdynamic and chemistry solutions and satisfactory agreement is obtained.
Numerical simulation of aerothermal loads in hypersonic engine inlets due to shock impingement
NASA Technical Reports Server (NTRS)
Ramakrishnan, R.
1992-01-01
The effect of shock impingement on an axial corner simulating the inlet of a hypersonic vehicle engine is modeled using a finite-difference procedure. A three-dimensional dynamic grid adaptation procedure is utilized to move the grids to regions with strong flow gradients. The adaptation procedure uses a grid relocation stencil that is valid at both the interior and boundary points of the finite-difference grid. A linear combination of spatial derivatives of specific flow variables, calculated with finite-element interpolation functions, are used as adaptation measures. This computational procedure is used to study laminar and turbulent Mach 6 flows in the axial corner. The description of flow physics and qualitative measures of heat transfer distributions on cowl and strut surfaces obtained from the analysis are compared with experimental observations. Conclusions are drawn regarding the capability of the numerical scheme for enhanced modeling of high-speed compressible flows.
Yamaguchi, Shohei; Konishi, Kozo; Yasunaga, Takefumi; Yoshida, Daisuke; Kinjo, Nao; Kobayashi, Kiichiro; Ieiri, Satoshi; Okazaki, Ken; Nakashima, Hideaki; Tanoue, Kazuo; Maehara, Yoshihiko; Hashizume, Makoto
2007-12-01
This study was carried out to investigate whether eye-hand coordination skill on a virtual reality laparoscopic surgical simulator (the LAP Mentor) was able to differentiate among subjects with different laparoscopic experience and thus confirm its construct validity. A total of 31 surgeons, who were all right-handed, were divided into the following two groups according to their experience as an operator in laparoscopic surgery: experienced surgeons (more than 50 laparoscopic procedures) and novice surgeons (fewer than 10 laparoscopic procedures). The subjects were tested using the eye-hand coordination task of the LAP Mentor, and performance was compared between the two groups. Assessment of the laparoscopic skills was based on parameters measured by the simulator. The experienced surgeons completed the task significantly faster than the novice surgeons. The experienced surgeons also achieved a lower number of movements (NOM), better economy of movement (EOM) and faster average speed of the left instrument than the novice surgeons, whereas there were no significant differences between the two groups for the NOM, EOM and average speed of the right instrument. Eye-hand coordination skill of the nondominant hand, but not the dominant hand, measured using the LAP Mentor was able to differentiate between subjects with different laparoscopic experience. This study also provides evidence of construct validity for eye-hand coordination skill on the LAP Mentor.
Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason
2014-06-01
Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
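One plausible way to fold several validated metrics into a single task score, in the spirit of the summary metrics described above, is to z-standardize each metric against the pooled data (flipping the sign of lower-is-better metrics) and average; the sketch below does this with fabricated values and is not the authors' published equations.

```python
# Sketch: z-score composite of several performance metrics (fabricated values).
import numpy as np

# columns: completion time (s), path length (cm), errors -- all lower is better
metrics = np.array([
    [310.0, 820.0, 4],    # novice attempt
    [190.0, 560.0, 2],    # intermediate attempt
    [120.0, 390.0, 0],    # expert attempt
])
lower_is_better = np.array([True, True, True])

z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0, ddof=1)
z[:, lower_is_better] *= -1            # flip so that higher composite = better performance
summary_score = z.mean(axis=1)
print("summary scores (novice, intermediate, expert):", np.round(summary_score, 2))
```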
Minimum-complexity helicopter simulation math model
NASA Technical Reports Server (NTRS)
Heffley, Robert K.; Mnich, Marc A.
1988-01-01
An example of a minimum-complexity helicopter simulation math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.
NASA Technical Reports Server (NTRS)
Meng, J. C. S.; Thomson, J. A. L.
1975-01-01
A data analysis program constructed to assess LDV system performance, to validate the simulation model, and to test various vortex location algorithms is presented. Real or simulated Doppler spectra versus range and elevation are used, and the spatial distributions of various spectral moments or other spectral characteristics are calculated and displayed. Each of the real or simulated scans can be processed by one of three different procedures: simple frequency or wavenumber filtering, matched filtering, and deconvolution filtering. The final output is displayed as contour plots in an x-y coordinate system, as well as in the form of vortex tracks deduced from the maxima of the processed data. A detailed analysis of run number 1023 and run number 2023 is presented to demonstrate the data analysis procedure. Vortex tracks and system range resolutions are compared with theoretical predictions.
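The spectral-moment and matched-filter steps mentioned above can be illustrated with a brief sketch; the spectrum shape, the vortex signature, and the filter template below are synthetic stand-ins rather than details taken from the report.

```python
# Minimal sketch of one processing path: compute spectral moments of a Doppler
# spectrum at each range gate, then apply a simple matched filter along range
# to locate the vortex signature. All inputs are synthetic.
import numpy as np

freq = np.linspace(-500.0, 500.0, 256)                  # Doppler frequency axis (Hz)
ranges = np.arange(50)                                   # range gates

rng = np.random.default_rng(1)
centres = 80.0 * np.exp(-((ranges - 25) / 8.0) ** 2)     # hypothetical "vortex" signature
spectra = np.array([np.exp(-((freq - c) / 40.0) ** 2) for c in centres])
spectra += 0.05 * rng.random(spectra.shape)              # add measurement noise

def spectral_moments(s, f):
    m0 = s.sum()                          # total power (uniform frequency spacing)
    m1 = (f * s).sum() / m0               # mean Doppler shift
    m2 = ((f - m1) ** 2 * s).sum() / m0   # spectral width squared
    return m0, m1, m2

mean_shift = np.array([spectral_moments(s, freq)[1] for s in spectra])

# Matched filter along range: correlate the mean-shift profile with the
# expected signature shape and take the peak as the vortex location.
template = np.exp(-((np.arange(-10, 11)) / 8.0) ** 2)
response = np.convolve(mean_shift, template[::-1], mode="same")
print("estimated vortex range gate:", int(np.argmax(response)))
```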
Can virtual reality simulation be used for advanced bariatric surgical training?
Lewis, Trystan M; Aggarwal, Rajesh; Kwasnicki, Richard M; Rajaretnam, Niro; Moorthy, Krishna; Ahmed, Ahmed; Darzi, Ara
2012-06-01
Laparoscopic bariatric surgery is a safe and effective way of treating morbid obesity. However, the operations are technically challenging and training opportunities for junior surgeons are limited. This study aims to assess whether virtual reality (VR) simulation is an effective adjunct for training and assessment of laparoscopic bariatric technical skills. Twenty bariatric surgeons of varying experience (Five experienced, five intermediate, and ten novice) were recruited to perform a jejuno-jejunostomy on both cadaveric tissue and on the bariatric module of the Lapmentor VR simulator (Simbionix Corporation, Cleveland, OH). Surgical performance was assessed using validated global rating scales (GRS) and procedure specific video rating scales (PSRS). Subjects were also questioned about the appropriateness of VR as a training tool for surgeons. Construct validity of the VR bariatric module was demonstrated with a significant difference in performance between novice and experienced surgeons on the VR jejuno-jejunostomy module GRS (median 11-15.5; P = .017) and PSRS (median 11-13; P = .003). Content validity was demonstrated with surgeons describing the VR bariatric module as useful and appropriate for training (mean Likert score 4.45/7) and they would highly recommend VR simulation to others for bariatric training (mean Likert score 5/7). Face and concurrent validity were not established. This study shows that the bariatric module on a VR simulator demonstrates construct and content validity. VR simulation appears to be an effective method for training of advanced bariatric technical skills for surgeons at the start of their bariatric training. However, assessment of technical skills should still take place on cadaveric tissue. Copyright © 2012. Published by Mosby, Inc.
Target controlled infusion for kids: trials and simulations.
Mehta, Disha; McCormack, Jon; Fung, Parry; Dumont, Guy; Ansermino, J
2008-01-01
Target controlled infusion (TCI) for Kids is a computer controlled system designed to administer propofol for general anesthesia. A controller establishes infusion rates required to achieve a specified concentration at the drug's effect site (C(e)) by implementing a continuously updated pharmacokinetic-pharmacodynamic model. This manuscript provides an overview of the system's design, preclinical tests, and a clinical pilot study. In pre-clinical tests, predicted infusion rates for 20 simulated procedures displayed complete convergent validity between two software implementations, Labview and Matlab, at computational intervals of 5, 10, and 15 s, but diverged with 20 s intervals due to system rounding errors. The volume of drug delivered by the TCI system also displayed convergent validity with Tivatrainer, a widely used TCI simulation software. Further tests were conducted for 50 random procedures to evaluate discrepancies between volumes reported and those actually delivered by the system. Accuracies were within clinically acceptable ranges and normally distributed with a mean of 0.08 +/- 0.01 ml. In the clinical study, propofol pharmacokinetics were simulated for 30 surgical procedures involving children aged 3 months to 9 years. Predicted C(e) values during standard clinical practice, the accuracy of wake-up times predicted by the system, and potential correlations between patient wake-up times, C(e), and state entropy (SE) were assessed. Neither C(e) nor SE was a reliable predictor of wake-up time in children, but the small sample size of this study does not fully accommodate the noted variation in children's response to propofol. A C(e) value of 1.9 µg/ml was found to best predict emergence from anesthesia in children.
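For readers unfamiliar with TCI engines, the following is a hedged sketch of the kind of fixed-interval computation such a system performs; the compartmental rate constants, central volume, controller rule, and target are placeholders and are not the paediatric model used in TCI for Kids.

```python
# Hedged sketch: step a three-compartment pharmacokinetic model plus an
# effect-site compartment at a fixed computational interval and adjust the
# infusion toward a target effect-site concentration. Parameters are placeholders.
import numpy as np

dt = 10.0                        # computational interval (s); cf. 5-20 s in the study
k10, k12, k21, k13, k31 = 0.12/60, 0.11/60, 0.055/60, 0.04/60, 0.003/60  # 1/s, assumed
ke0 = 0.26 / 60                  # effect-site equilibration constant (1/s), assumed
V1 = 9.0                         # central compartment volume (L), assumed
target_ce = 3.0                  # target effect-site concentration (ug/mL)

c1 = c2 = c3 = ce = 0.0
total_volume_ml = 0.0
for _ in range(int(30 * 60 / dt)):               # 30 min of simulated time
    # crude on/off controller: infuse only while Ce is below target
    rate_mg_per_s = 2.0 if ce < target_ce else 0.0
    dc1 = (rate_mg_per_s / V1 - (k10 + k12 + k13) * c1 + k21 * c2 + k31 * c3) * dt
    dc2 = (k12 * c1 - k21 * c2) * dt
    dc3 = (k13 * c1 - k31 * c3) * dt
    dce = ke0 * (c1 - ce) * dt
    c1, c2, c3, ce = c1 + dc1, c2 + dc2, c3 + dc3, ce + dce
    total_volume_ml += rate_mg_per_s * dt / 10.0  # propofol 1% = 10 mg/mL

print(f"Ce after 30 min: {ce:.2f} ug/mL, volume delivered: {total_volume_ml:.1f} mL")
```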
In-process, non-destructive, dynamic testing of high-speed polymer composite rotors
NASA Astrophysics Data System (ADS)
Kuschmierz, Robert; Filippatos, Angelos; Günther, Philipp; Langkamp, Albert; Hufenbach, Werner; Czarske, Jürgen; Fischer, Andreas
2015-03-01
Polymer composite rotors are lightweight and offer great potential in high-speed applications such as turbo machinery. Currently, novel rotor structures and materials are investigated for the purpose of increasing machine efficiency and lifetime, as well as allowing for higher dynamic loads. However, due to the complexity of the composite materials, an in-process measurement system is required. This allows for monitoring the evolution of damage under dynamic loads, and for testing and predicting the structural integrity of composite rotors in process. In rotor design, it can be used for calibrating and improving models simulating the dynamic behaviour of polymer composite rotors. The measurement system must operate non-invasively and offer micron uncertainty as well as a high measurement rate of several tens of kHz. Furthermore, it must be applicable at high surface speeds and under technical vacuum. In order to fulfil these demands, a novel laser distance measurement system was developed. It provides the angle-resolved measurement of the biaxial deformation of a fibre-reinforced polymer composite rotor with micron uncertainty at surface speeds of more than 300 m/s. Furthermore, a simulation procedure combining a finite element model and a damage mechanics model is applied. A comparison of the measured data and the numerically calculated data is performed to validate the simulation towards rotor expansion. This validation procedure can be used for model calibration in the future. The simulation procedure could be used to investigate different damage-test cases of the rotor, in order to define its structural behaviour without further experiments.
Planck 2015 results: VI. LFI mapmaking
Ade, P. A. R.; Aghanim, N.; Ashdown, M.; ...
2016-09-20
This article describes the mapmaking procedure applied to Planck Low Frequency Instrument (LFI) data. The mapmaking step takes as input the calibrated timelines and pointing information. The main products are sky maps of I, Q, and U Stokes components. For the first time, we present polarization maps at LFI frequencies. The mapmaking algorithm is based on a destriping technique, which is enhanced with a noise prior. The Galactic region is masked to reduce errors arising from bandpass mismatch and high signal gradients. We apply horn-uniform radiometer weights to reduce the effects of beam-shape mismatch. The algorithm is the same as used for the 2013 release, apart from small changes in parameter settings. We validate the procedure through simulations. Special emphasis is put on the control of systematics, which is particularly important for accurate polarization analysis. We also produce low-resolution versions of the maps and corresponding noise covariance matrices. These serve as input in later analysis steps and parameter estimation. The noise covariance matrices are validated through noise Monte Carlo simulations. The residual noise in the map products is characterized through analysis of half-ring maps, noise covariance matrices, and simulations.
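As a toy illustration of the destriping idea (correlated baseline offsets estimated jointly with the sky and removed before binning), the sketch below alternates between a binned map estimate and per-period offset fits; the real LFI pipeline additionally handles polarization, a noise prior, flagging, and radiometer weights, none of which are modeled here.

```python
# Toy destriper: time-ordered data = sky signal + per-period offsets + noise.
# Alternate between binning a map from offset-cleaned data and re-fitting the
# offsets from the map residuals. Not the LFI implementation.
import numpy as np

rng = np.random.default_rng(2)
npix, nper, per_len = 64, 200, 50
sky = rng.normal(size=npix)                          # "true" sky signal
pointing = rng.integers(0, npix, size=nper * per_len)
offsets_true = np.repeat(rng.normal(scale=2.0, size=nper), per_len)
tod = sky[pointing] + offsets_true + 0.1 * rng.normal(size=pointing.size)

baselines = np.zeros(nper)
for _ in range(20):                                  # alternate map / baseline fits
    cleaned = tod - np.repeat(baselines, per_len)
    sky_est = np.bincount(pointing, weights=cleaned, minlength=npix)
    sky_est /= np.maximum(np.bincount(pointing, minlength=npix), 1)
    resid = (tod - sky_est[pointing]).reshape(nper, per_len)
    baselines = resid.mean(axis=1)

err = sky_est - sky
print("rms map error (up to a constant offset):", round(err.std(), 3))
```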
Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT
Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.
2011-01-01
Purpose In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We have also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE for simulating projections of a hot sphere phantom. Results We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1: 2.73: 3.54: 7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole body phantom. Conclusions The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied for modeling the collimator-detector response with penetration and scatter effects for MPH in the quantitative reconstruction method. PMID:19779896
Numerical simulations of a nonequilibrium argon plasma in a shock-tube experiment
NASA Technical Reports Server (NTRS)
Cambier, Jean-Luc
1991-01-01
A code developed for the numerical modeling of nonequilibrium radiative plasmas is applied to the simulation of the propagation of strong ionizing shock waves in argon gas. The simulations attempt to reproduce a series of shock-tube experiments which will be used to validate the numerical models and procedures. The ability to perform unsteady simulations makes it possible to observe some fluctuations in the shock propagation, coupled to the kinetic processes. A coupling mechanism by pressure waves, reminiscent of oscillation mechanisms observed in detonation waves, is described. The effect of upper atomic levels is also briefly discussed.
Dohrenbusch, R
2009-06-01
Chronic pain accompanied by disability and handicap is a frequent symptom necessitating medical assessment. Current guidelines for the assessment of malingering suggest discrimination between explanatory demonstration, aggravation and simulation. However, this distinction has not been clearly operationalized or validated. The necessity of assessment strategies based on general principles of psychological assessment and testing is emphasized. Standardized and normalized psychological assessment methods and symptom validation techniques should be used in the assessment of subjects with chronic pain problems. An adaptive procedure for assessing the validity of complaints is suggested to minimize effort and costs.
Training for percutaneous renal access on a virtual reality simulator.
Zhang, Yi; Yu, Cheng-fan; Liu, Jin-shun; Wang, Gang; Zhu, He; Na, Yan-qun
2013-01-01
The need to develop new methods of surgical training, combined with advances in computing, has led to the development of virtual reality surgical simulators. The PERC Mentor(TM) is designed to train the user in percutaneous renal collecting system access puncture. This study aimed to validate the use of this kind of simulator in percutaneous renal access training. Twenty-one urologists were enrolled as trainees to learn a fluoroscopy-guided percutaneous renal access technique. An assigned percutaneous renal access procedure was performed on the PERC Mentor(TM) immediately after watching an instruction video and an analog operation. Objective parameters were recorded by the simulator and subjective global rating scale (GRS) scores were determined. Simulation training followed and consisted of 2-hour daily training sessions on 2 consecutive days. Twenty-four hours after the training session, trainees were evaluated while performing the same procedure. The post-training evaluation was compared to the evaluation of the initial attempt. During the initial attempt, none of the trainees could complete the assigned procedure, owing to their lack of experience in fluoroscopy-guided percutaneous renal access. After the short-term training, all trainees were able to complete the procedure independently. Of the 21 trainees, 10 had prior experience in ultrasound-guided percutaneous nephrolithotomy; trainees were thus categorized into prior-experience and inexperience groups. The total operating time and amount of contrast material used were significantly lower in the prior-experience group than in the inexperience group (P = 0.03 and 0.02, respectively). Training on the virtual reality simulator PERC Mentor(TM) can help trainees with no previous experience of fluoroscopy-guided percutaneous renal access to complete the virtual manipulation of the procedure independently. This virtual reality simulator may become an important training and evaluation tool in teaching fluoroscopy-guided percutaneous renal access.
Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M
2014-12-01
Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered in both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor, while the ICCs of PBDI and SIM showed acceptable levels of reliability. Findings on the content validity and reliability of these new instruments are promising for realizing a competency-based procedure. Further development of the instruments and research on predictive validity should be pursued.
NASA Astrophysics Data System (ADS)
Nir, A.; Doughty, C.; Tsang, C. F.
Validation methods that were developed in the context of deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no attempt to validate a specific model, but several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26], namely that different constituencies have different objectives for the validation process and that their acceptance criteria therefore also differ.
NASA Technical Reports Server (NTRS)
Bruce, Kevin R.
1986-01-01
A Mach/CAS control system using an elevator was designed and developed for use on the NASA TCV B737 aircraft to support research in profile descent procedures and approach energy management. The system was designed using linear analysis techniques primarily. The results were confirmed and the system validated at additional flight conditions using a nonlinear 737 aircraft simulation. All design requirements were satisfied.
NASA Technical Reports Server (NTRS)
Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.
2005-01-01
This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.
Training and certification in endobronchial ultrasound-guided transbronchial needle aspiration
Konge, Lars; Nayahangan, Leizl Joy; Clementsen, Paul Frost
2017-01-01
Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) plays a key role in the staging of lung cancer, which is crucial for allocation to surgical treatment. EBUS-TBNA is a complicated procedure, and simulation-based training is helpful in the first part of the long learning curve prior to performing the procedure on actual patients. New trainees should follow a structured training programme consisting of training on simulators to proficiency, as assessed with a validated test, followed by supervised practice on patients. Simulation-based training is superior to the traditional apprenticeship model and is recommended in the newest guidelines. EBUS-TBNA and oesophageal ultrasound-guided fine needle aspiration (EUS-FNA or EUS-B-FNA) are complementary to each other, and the combined techniques are superior to either technique alone. It is logical to learn and perform the two techniques in combination; however, for lung cancer staging only EBUS-TBNA simulators currently exist, and it is hoped that simulation-based training in EUS will become possible in the future. PMID:28840013
[Objective surgery -- advanced robotic devices and simulators used for surgical skill assessment].
Suhánszki, Norbert; Haidegger, Tamás
2014-12-01
Robotic assistance has become a leading trend in minimally invasive surgery, building on the global success of laparoscopic surgery. Manual laparoscopy requires advanced skills and capabilities, which are acquired through a tedious learning process, while da Vinci-type surgical systems offer intuitive control and advanced ergonomics. Nevertheless, in either case, the key issue is the ability to objectively assess the surgeons' skills and capabilities. Robotic devices offer a radically new way to collect data during surgical procedures, opening up new ways of skill parameterization. This may be revolutionary for MIS training, enabling new and objective surgical curricula and examination methods. The article reviews currently developed skill assessment techniques for robotic surgery and simulators, thoroughly inspecting their validation procedures and utility. In the coming years, these methods will become the mainstream of Western surgical education.
Pretest information for a test to validate plume simulation procedures (FA-17)
NASA Technical Reports Server (NTRS)
Hair, L. M.
1978-01-01
The results of an effort to plan a final verification wind tunnel test to validate the recommended correlation parameters and application techniques were presented. The test planning effort was complete except for test site finalization and the associated coordination. Two suitable test sites were identified. Desired test conditions were shown. Subsequent sections of this report present the selected model and test site, instrumentation of this model, planned test operations, and some concluding remarks.
A quantitative dynamic systems model of health-related quality of life among older adults
Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela
2015-01-01
Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722
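A hedged sketch of the general modeling strategy, not the authors' equations: a small two-variable dynamic systems model that, given the first empirical data point, generates a trajectory whose endpoint can be compared with the second measurement. The functional form and all parameter values are invented for illustration.

```python
# Toy dynamic systems model: HRQOL grows logistically toward a ceiling set
# partly by health status, while health slowly declines with age.
# All parameters are illustrative, not estimated from the study data.
import numpy as np

def simulate_hrqol(hrqol0, health0, steps=100, dt=0.1,
                   growth=0.05, carrying=1.0, coupling=0.4, decline=0.02):
    """Generate a trajectory from an individual's observed starting point."""
    hrqol, health = hrqol0, health0
    traj = [(hrqol, health)]
    for _ in range(steps):
        ceiling = coupling * health + (1 - coupling) * carrying
        d_hrqol = growth * hrqol * (ceiling - hrqol) * dt   # logistic-style growth
        d_health = -decline * health * dt                   # slow health decline
        hrqol += d_hrqol
        health += d_health
        traj.append((hrqol, health))
    return np.array(traj)

# Usage: start from a (hypothetical) first measurement and inspect the endpoint.
traj = simulate_hrqol(hrqol0=0.6, health0=0.8)
print("simulated final HRQOL:", round(traj[-1, 0], 3))
```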
STS-26 crew during emergency egress exercise at LC 39 launch pad B
1988-05-04
S88-40898 (4 May 1988) --- Astronauts, members of the orbiter close-out crew and fire and rescue personnel participate in a simulated emergency egress exercise near the slide wire termination point bunker at Launch Pad 39B. The simulated exercise was performed to familiarize personnel with evacuation routes as well as emergency equipment and procedures. Reasons for conducting the emergency exercises include the need to validate recent post-Challenger upgrades to the launch pad's emergency escape system and the new procedures developed in preparation for STS-26. (NOTE: The astronaut pictured and many of the others who participated in the exercises are not members of STS-26 prime crew).
Emerging Role of Three-Dimensional Printing in Simulation in Otolaryngology.
VanKoevering, Kyle K; Malloy, Kelly Michele
2017-10-01
Simulation is rapidly expanding across medicine as a valuable component of trainee education. For procedural simulation, development of low-cost simulators that allow a realistic, haptic experience for learners to practice maneuvers while appreciating anatomy has become highly valuable. Otolaryngology has seen significant advancements in development of improved, specialty-specific simulators with the expansion of three-dimensional (3D) printing. This article highlights the fundamental components of 3D printing and the multitude of subspecialty simulators that have been developed with the assistance of 3D printing. It briefly discusses important considerations such as cost, fidelity, and validation where available in the literature. Copyright © 2017 Elsevier Inc. All rights reserved.
Virtual reality-based simulation training for ventriculostomy: an evidence-based approach.
Schirmer, Clemens M; Elder, J Bradley; Roitberg, Ben; Lobel, Darlene A
2013-10-01
Virtual reality (VR) simulation-based technologies play an important role in neurosurgical resident training. The Congress of Neurological Surgeons (CNS) Simulation Committee developed a simulation-based curriculum incorporating VR simulators to train residents in the management of common neurosurgical disorders. To enhance neurosurgical resident training for ventriculostomy placement using simulation-based training. A course-based neurosurgical simulation curriculum was introduced at the Neurosurgical Simulation Symposium at the 2011 and 2012 CNS annual meetings. A trauma module was developed to teach ventriculostomy placement as one of the neurosurgical procedures commonly performed in the management of traumatic brain injury. The course offered both didactic and simulator-based instruction, incorporating written and practical pretests and posttests and questionnaires to assess improvement in skill level and to validate the simulators as teaching tools. Fourteen trainees participated in the didactic component of the trauma module. Written scores improved significantly from pretest (75%) to posttest (87.5%; P < .05). Seven participants completed the ventriculostomy simulation. Significant improvements were observed in anatomy (P < .04), burr hole placement (P < .03), final location of the catheter (P = .05), and procedure completion time (P < .004). Senior residents planned a significantly better trajectory (P < .01); junior participants improved most in terms of identifying the relevant anatomy (P < .03) and the time required to complete the procedure (P < .04). VR ventriculostomy placement as part of the CNS simulation trauma module complements standard training techniques for residents in the management of neurosurgical trauma. Improvement in didactic and hands-on knowledge by course participants demonstrates the usefulness of the VR simulator as a training tool.
Virtual evaluation of stent graft deployment: a validated modeling and simulation study.
De Bock, S; Iannaccone, F; De Santis, G; De Beule, M; Van Loo, D; Devos, D; Vermassen, F; Segers, P; Verhegghe, B
2012-09-01
The presented study details the virtual deployment of a bifurcated stent graft (Medtronic Talent) in an Abdominal Aortic Aneurysm model, using the finite element method. The entire deployment procedure is modeled, with the stent graft being crimped and bent according to the vessel geometry, and subsequently released. The finite element results are validated in vitro with placement of the device in a silicone mock aneurysm, using high resolution CT scans to evaluate the result. The presented work confirms the capability of finite element computer simulations to predict the deformed configuration after endovascular aneurysm repair (EVAR). These simulations can be used to quantify mechanical parameters, such as neck dilations, radial forces and stresses in the device, that are difficult or impossible to obtain from medical imaging. Copyright © 2012 Elsevier Ltd. All rights reserved.
Korzeniowski, Przemyslaw; Brown, Daniel C; Sodergren, Mikael H; Barrow, Alastair; Bello, Fernando
2017-02-01
The goal of this study was to establish face, content, and construct validity of NOViSE-the first force-feedback enabled virtual reality (VR) simulator for natural orifice transluminal endoscopic surgery (NOTES). Fourteen surgeons and surgical trainees performed 3 simulated hybrid transgastric cholecystectomies using a flexible endoscope on NOViSE. Four of them were classified as "NOTES experts" who had independently performed 10 or more simulated or human NOTES procedures. Seven participants were classified as "Novices" and 3 as "Gastroenterologists" with no or minimal NOTES experience. A standardized 5-point Likert-type scale questionnaire was administered to assess the face and content validity. NOViSE showed good overall face and content validity. In 14 out of 15 statements pertaining to face validity (graphical appearance, endoscope and tissue behavior, overall realism), ≥50% of responses were "agree" or "strongly agree." In terms of content validity, 85.7% of participants agreed or strongly agreed that NOViSE is a useful training tool for NOTES and 71.4% that they would recommend it to others. Construct validity was established by comparing a number of performance metrics such as task completion times, path lengths, applied forces, and so on. NOViSE demonstrated early signs of construct validity. Experts were faster and used a shorter endoscopic path length than novices in all but one task. The results indicate that NOViSE authentically recreates a transgastric hybrid cholecystectomy and sets promising foundations for the further development of a VR training curriculum for NOTES without compromising patient safety or requiring expensive animal facilities.
Collection of Calibration and Validation Data for an Airport Landside Dynamic Simulation Model.
1980-04-01
movements. The volume of skiers passing through Denver is sufficiently large to warrant the installation of special check-in counters for passengers with...Terminal, only seven sectors were used. Training Procedures MIA was the first of the three airports surveyed. A substantial amount of knowledge and
DOT National Transportation Integrated Search
2009-11-01
The development of the Mechanistic-Empirical Pavement Design Guide (MEPDG) under National Cooperative Highway Research Program (NCHRP) projects 1-37A and 1-40D has significantly improved the ability of pavement designers to model and simulate the eff...
Validation of Broadband Ground Motion Simulations for Japanese Crustal Earthquakes by the Recipe
NASA Astrophysics Data System (ADS)
Iwaki, A.; Maeda, T.; Morikawa, N.; Miyake, H.; Fujiwara, H.
2015-12-01
The Headquarters for Earthquake Research Promotion (HERP) of Japan has organized the broadband ground motion simulation method into a standard procedure called the "recipe" (HERP, 2009). In the recipe, the source rupture is represented by the characterized source model (Irikura and Miyake, 2011). The broadband ground motion time histories are computed by a hybrid approach: the 3-D finite-difference method (Aoi et al. 2004) and the stochastic Green's function method (Dan and Sato, 1998; Dan et al. 2000) for the long- (> 1 s) and short-period (< 1 s) components, respectively, using the 3-D velocity structure model. As the engineering significance of scenario earthquake ground motion prediction is increasing, thorough verification and validation are required for the simulation methods. This study presents the self-validation of the recipe for two MW6.6 crustal events in Japan, the 2000 Tottori and 2004 Chuetsu (Niigata) earthquakes. We first compare the simulated velocity time series with the observation. Main features of the velocity waveforms, such as the near-fault pulses and the large later phases on deep sediment sites are well reproduced by the simulations. Then we evaluate 5% damped pseudo acceleration spectra (PSA) in the framework of the SCEC Broadband Platform (BBP) validation (Dreger et al. 2015). The validation results are generally acceptable in the period range 0.1 - 10 s, whereas those in the shortest period range (0.01-0.1 s) are less satisfactory. We also evaluate the simulations with the 1-D velocity structure models used in the SCEC BBP validation exercise. Although the goodness-of-fit parameters for PSA do not significantly differ from those for the 3-D velocity structure model, noticeable differences in velocity waveforms are observed. Our results suggest the importance of 1) well-constrained 3-D velocity structure model for broadband ground motion simulations and 2) evaluation of time series of ground motion as well as response spectra.
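The response-spectrum evaluation underlying the PSA comparison can be reproduced with a short, self-contained routine: integrate a 5%-damped single-degree-of-freedom oscillator (Newmark average-acceleration scheme) over the input motion and take the peak displacement at each period. The input record below is synthetic noise, used only to show the mechanics.

```python
# Compute a 5%-damped pseudo spectral acceleration (PSA) spectrum from an
# acceleration time history via Newmark average-acceleration integration of a
# unit-mass SDOF oscillator. Input motion here is synthetic.
import numpy as np

def psa_spectrum(acc, dt, periods, damping=0.05):
    psa = []
    beta, gamma = 0.25, 0.5                               # average acceleration
    for T in periods:
        wn = 2.0 * np.pi / T
        k, c = wn**2, 2.0 * damping * wn                  # unit mass
        u, v = 0.0, 0.0
        a = -acc[0] - c * v - k * u                       # initial acceleration
        keff = k + gamma * c / (beta * dt) + 1.0 / (beta * dt**2)
        umax = 0.0
        for ag in acc[1:]:
            p = -ag + (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1) * a) \
                + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                       + dt * (gamma / (2 * beta) - 1) * a)
            u_new = p / keff
            v_new = (gamma / (beta * dt)) * (u_new - u) + (1 - gamma / beta) * v \
                    + dt * (1 - gamma / (2 * beta)) * a
            a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1) * a
            u, v, a = u_new, v_new, a_new
            umax = max(umax, abs(u))
        psa.append(wn**2 * umax)                          # PSA = wn^2 * Sd
    return np.array(psa)

dt = 0.01
acc = 0.1 * np.random.default_rng(3).normal(size=4000)   # synthetic ground motion (g)
periods = np.logspace(-1, 1, 20)                          # 0.1 - 10 s
print(psa_spectrum(acc, dt, periods)[:5])
```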
The Arthroscopic Surgical Skill Evaluation Tool (ASSET).
Koehler, Ryan J; Amsdell, Simon; Arendt, Elizabeth A; Bisson, Leslie J; Braman, Jonathan P; Bramen, Jonathan P; Butler, Aaron; Cosgarea, Andrew J; Harner, Christopher D; Garrett, William E; Olson, Tyson; Warme, Winston J; Nicandri, Gregg T
2013-06-01
Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability when used to assess the technical ability of surgeons performing diagnostic knee arthroscopic surgery on cadaveric specimens. Cross-sectional study; Level of evidence, 3. Content validity was determined by a group of 7 experts using the Delphi method. Intra-articular performance of a right and left diagnostic knee arthroscopic procedure was recorded for 28 residents and 2 sports medicine fellowship-trained attending surgeons. Surgeon performance was assessed by 2 blinded raters using the ASSET. Concurrent criterion-oriented validity, interrater reliability, and test-retest reliability were evaluated. Content validity: The content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: Significant differences in the total ASSET score (P < .05) between novice, intermediate, and advanced experience groups were identified. Interrater reliability: The ASSET scores assigned by each rater were strongly correlated (r = 0.91, P < .01), and the intraclass correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: There was a significant correlation between ASSET scores for both procedures attempted by each surgeon (r = 0.79, P < .01). The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopic surgery in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live operating room and other simulated environments.
Payload training methodology study
NASA Technical Reports Server (NTRS)
1990-01-01
The results of the Payload Training Methodology Study (PTMS) are documented. Methods and procedures are defined for the development of payload training programs to be conducted at the Marshall Space Flight Center Payload Training Complex (PCT) for the Space Station Freedom program. The study outlines the overall training program concept as well as the six methodologies associated with the program implementation. The program concept outlines the entire payload training program from initial identification of training requirements to the development of detailed design specifications for simulators and instructional material. The following six methodologies are defined: (1) The Training and Simulation Needs Assessment Methodology; (2) The Simulation Approach Methodology; (3) The Simulation Definition Analysis Methodology; (4) The Simulator Requirements Standardization Methodology; (5) The Simulator Development Verification Methodology; and (6) The Simulator Validation Methodology.
Development and implementation of a virtual reality laparoscopic colorectal training curriculum.
Wynn, Greg; Lykoudis, Panagis; Berlingieri, Pasquale
2017-12-12
Contemporary surgical training can be compromised by fewer practical opportunities. Simulation can fill this gap to optimize skills' development and progress monitoring. A structured virtual reality (VR) laparoscopic sigmoid colectomy curriculum is constructed and its validity and outcomes assessed. Parameters and thresholds were defined by analysing the performance of six expert surgeons completing the relevant module on the LAP Mentor simulator. Fourteen surgical trainees followed the curriculum, performance being recorded and analysed. Evidence of validity was assessed. Time to complete procedure, number of movements of right and left instrument, and total path length of right and left instrument movements demonstrated evidence of validity and clear learning curves, with a median of 14 attempts needed to complete the curriculum. A structured curriculum is proposed for training in laparoscopic sigmoid colectomy in a VR environment based on objective metrics in addition to expert consensus. Validity has been demonstrated for some key metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
Brydges, Ryan; Hatala, Rose; Zendejas, Benjamin; Erwin, Patricia J; Cook, David A
2015-02-01
To examine the evidence supporting the use of simulation-based assessments as surrogates for patient-related outcomes assessed in the workplace. The authors systematically searched MEDLINE, EMBASE, Scopus, and key journals through February 26, 2013. They included original studies that assessed health professionals and trainees using simulation and then linked those scores with patient-related outcomes assessed in the workplace. Two reviewers independently extracted information on participants, tasks, validity evidence, study quality, patient-related and simulation-based outcomes, and magnitude of correlation. All correlations were pooled using random-effects meta-analysis. Of 11,628 potentially relevant articles, the 33 included studies enrolled 1,203 participants, including postgraduate physicians (n = 24 studies), practicing physicians (n = 8), medical students (n = 6), dentists (n = 2), and nurses (n = 1). The pooled correlation for provider behaviors was 0.51 (95% confidence interval [CI], 0.38 to 0.62; n = 27 studies); for time behaviors, 0.44 (95% CI, 0.15 to 0.66; n = 7); and for patient outcomes, 0.24 (95% CI, -0.02 to 0.47; n = 5). Most reported validity evidence was favorable, though studies often included only correlational evidence. Validity evidence of internal structure (n = 13 studies), content (n = 12), response process (n = 2), and consequences (n = 1) were reported less often. Three tools showed large pooled correlations and favorable (albeit incomplete) validity evidence. Simulation-based assessments often correlate positively with patient-related outcomes. Although these surrogates are imperfect, tools with established validity evidence may replace workplace-based assessments for evaluating select procedural skills.
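The pooling reported above follows standard random-effects meta-analysis of correlations; a compact sketch with invented study values (Fisher z-transform, DerSimonian-Laird between-study variance, back-transformed pooled estimate with 95% CI) is shown below.

```python
# Random-effects pooling of correlation coefficients (DerSimonian-Laird).
# The per-study correlations and sample sizes are hypothetical.
import numpy as np

r = np.array([0.55, 0.40, 0.62, 0.35, 0.51])   # per-study correlations
n = np.array([40, 25, 60, 30, 45])             # per-study sample sizes

z = np.arctanh(r)                              # Fisher z-transform
var_z = 1.0 / (n - 3)                          # within-study variance of z
w_fixed = 1.0 / var_z

# DerSimonian-Laird estimate of the between-study variance tau^2
z_fixed = np.sum(w_fixed * z) / np.sum(w_fixed)
q = np.sum(w_fixed * (z - z_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(r) - 1)) / c)

w_rand = 1.0 / (var_z + tau2)
z_pooled = np.sum(w_rand * z) / np.sum(w_rand)
se = np.sqrt(1.0 / np.sum(w_rand))
lo, hi = np.tanh(z_pooled - 1.96 * se), np.tanh(z_pooled + 1.96 * se)
print(f"pooled r = {np.tanh(z_pooled):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```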
Energy deposition dynamics of femtosecond pulses in water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minardi, Stefano, E-mail: stefano@stefanominardi.eu; Pertsch, Thomas; Milián, Carles
2014-12-01
We exploit inverse Raman scattering and solvated electron absorption to perform a quantitative characterization of the energy loss and ionization dynamics in water with tightly focused near-infrared femtosecond pulses. A comparison between experimental data and numerical simulations suggests that the ionization energy of water is 8 eV, rather than the commonly used value of 6.5 eV. We also introduce an equation for the Raman gain valid for ultra-short pulses that validates our experimental procedure.
Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas
NASA Astrophysics Data System (ADS)
Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.
In the framework of the IRMA-Sponge program, the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas beside rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters and calculates the flood extent as well as the spatial distribution of flood depths. Of course, the simulated model results are affected by errors and uncertainties. Possible sources of uncertainty are the model structure, model parameters and input data. Thus, after the model validation (comparison of the simulated flood extent to the observed extent, taken from airborne pictures), the uncertainty of the essential input data set (digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties concerning the statistics of DEM quality and to derive flooding probabilities from the set of simulations. The questions concerning the minimum resolution of a DEM required for flood simulation and the best aggregation procedure for a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high-resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis, the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainty. Especially socio-economic information and monetary transfer functions required for a damage risk analysis show a high uncertainty. Therefore this study helps to analyse the weak points of the flood risk and damage risk assessment procedure.
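The Monte Carlo treatment of DEM uncertainty lends itself to a compact sketch: perturb the elevation model with an assumed error standard deviation, re-apply a simple flooding rule, and accumulate per-cell flooding probabilities. The DEM, water level, and error statistics below are synthetic, not the FLOODMAP inputs.

```python
# Monte Carlo flooding probability from DEM uncertainty (illustrative only):
# repeatedly perturb the DEM, flag cells below the water level, and average.
import numpy as np

rng = np.random.default_rng(4)
dem = np.cumsum(rng.normal(0.2, 0.5, size=(100, 100)), axis=1)  # synthetic DEM (m)
water_level = dem.min() + 5.0                                   # flat water plain (m)
sigma_dem = 0.5                                                 # assumed DEM error (m)

n_runs = 500
flood_count = np.zeros_like(dem)
for _ in range(n_runs):
    dem_perturbed = dem + rng.normal(0.0, sigma_dem, size=dem.shape)
    flood_count += (dem_perturbed < water_level)

flood_probability = flood_count / n_runs          # per-cell probability of flooding
flood_depth = np.clip(water_level - dem, 0.0, None)
print("cells flooded with p > 0.5:", int((flood_probability > 0.5).sum()),
      "| mean depth of flooded cells (m):",
      round(flood_depth[flood_depth > 0].mean(), 2))
```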
Validation of the second-generation Olympus colonoscopy simulator for skills assessment.
Haycock, A V; Bassett, P; Bladen, J; Thomas-Gibson, S
2009-11-01
Simulators have potential value in providing objective evidence of technical skill for procedures within medicine. The aim of this study was to determine face and construct validity for the Olympus colonoscopy simulator and to establish which assessment measures map to clinical benchmarks of expertise. Thirty-four participants were recruited: 10 novices with no prior colonoscopy experience, 13 intermediate (trainee) endoscopists with fewer than 1000 previous colonoscopies, and 11 experienced endoscopists with more than 1000 previous colonoscopies. All participants completed three standardized cases on the simulator and experts gave feedback regarding the realism of the simulator. Forty metrics recorded automatically by the simulator were analyzed for their ability to distinguish between the groups. The simulator discriminated participants by experience level for 22 different parameters. Completion rates were lower for novices than for trainees and experts (37 % vs. 79 % and 88 % respectively, P < 0.001) and both novices and trainees took significantly longer to reach all major landmarks than the experts. Several technical aspects of competency were discriminatory; pushing with an embedded tip ( P = 0.03), correct use of the variable stiffness function ( P = 0.004), number of sigmoid N-loops ( P = 0.02); size of sigmoid N-loops ( P = 0.01), and time to remove alpha loops ( P = 0.004). Out of 10, experts rated the realism of movement at 6.4, force feedback at 6.6, looping at 6.6, and loop resolution at 6.8. The Olympus colonoscopy simulator has good face validity and excellent construct validity. It provides an objective assessment of colonoscopic skill on multiple measures and benchmarks have been set to allow its use as both a formative and a summative assessment tool. Georg Thieme Verlag KG Stuttgart. New York.
WEST-3 wind turbine simulator development. Volume 2: Verification
NASA Technical Reports Server (NTRS)
Sridhar, S.
1985-01-01
The details of a study to validate WEST-3, a new real-time wind turbine simulator developed by Paragon Pacific Inc., are presented in this report. For the validation, the MOD-0 wind turbine was simulated on WEST-3. The simulation results were compared with those obtained from previous MOD-0 simulations, and with test data measured during MOD-0 operations. The study was successful in achieving the major objective of proving that WEST-3 yields results which can be used to support a wind turbine development process. The blade bending moments, peak and cyclic, from the WEST-3 simulation correlated reasonably well with the available MOD-0 data. The simulation was also able to predict the resonance phenomena observed during MOD-0 operations. Also presented in the report is a description and solution of a serious numerical instability problem encountered during the study. The problem was caused by the coupling of the rotor and the power train models. The results of the study indicate that some parts of the existing WEST-3 simulation model may have to be refined for future work; specifically, the aerodynamics and the procedure used to couple the rotor model with the tower and the power train models.
Quantitative validation of an air-coupled ultrasonic probe model by Interferometric laser tomography
NASA Astrophysics Data System (ADS)
Revel, G. M.; Pandarese, G.; Cavuto, A.
2012-06-01
The present paper describes the quantitative validation of a finite element (FE) model of the ultrasound beam generated by an air-coupled non-contact ultrasound transducer. The model boundary conditions are given by vibration velocities measured by laser vibrometry on the probe membrane. The proposed validation method is based on the comparison between the simulated 3D pressure field and the pressure data measured with the interferometric laser tomography technique. The model details and the experimental techniques are described in the paper. The analysis of results shows the effectiveness of the proposed approach and the possibility of quantitatively assessing and predicting the generated acoustic pressure field, with maximum discrepancies in the order of 20% due to uncertainty effects. This step is important for determining the real applicability of air-coupled probes in complex problems and for simulating the whole inspection procedure, even at the component design stage, so that its inspectability can be verified virtually.
Kundhal, Pavi S; Grantcharov, Teodor P
2009-03-01
This study was conducted to validate the role of virtual reality computer simulation as an objective method for assessing laparoscopic technical skills. The authors aimed to investigate whether performance in the operating room, assessed using a modified Objective Structured Assessment of Technical Skill (OSATS), correlated with the performance parameters registered by a virtual reality laparoscopic trainer (LapSim). The study enrolled 10 surgical residents (3 females) with a median of 5.5 years (range, 2-6 years) since graduation who had similar limited experience in laparoscopic surgery (median, 5; range, 1-16 laparoscopic cholecystectomies). All the participants performed three repetitions of seven basic skills tasks on the LapSim laparoscopic trainer and one laparoscopic cholecystectomy in the operating room. The operating room procedure was video recorded and blindly assessed by two independent observers using a modified OSATS rating scale. Assessment in the operating room was based on three parameters: time used, error score, and economy of motion score. During the tasks on the LapSim, time, error (tissue damage and millimeters of tissue damage [tasks 2-6], error score [incomplete target areas, badly placed clips, and dropped clips; task 7]), and economy of movement parameters (path length and angular path) were registered. The correlation between time, economy, and error parameters during the simulated tasks and the operating room procedure was statistically assessed using Spearman's test. Significant correlations were demonstrated between the time used to complete the operating room procedure and time used for task 7 (r(s) = 0.74; p = 0.015). The error score demonstrated during the laparoscopic cholecystectomy correlated well with the tissue damage in three of the seven tasks (p < 0.05), the millimeters of tissue damage during two of the tasks, and the error score in task 7 (r(s) = 0.67; p = 0.034). Furthermore, statistically significant correlations were observed between the economy of motion score from the operative procedure and LapSim's economy parameters (path length and angular path in six of the tasks) (p < 0.05). The current study demonstrated significant correlations between operative performance in the operating room (assessed using a well-validated rating scale) and psychomotor performance in a virtual environment assessed by a computer simulator. This provides strong evidence for the validity of the simulator system as an objective tool for assessing laparoscopic skills. Virtual reality simulation can be used in practice to assess technical skills relevant for minimally invasive surgery.
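The core statistical step, relating a simulator parameter to an operating-room score with Spearman's rank correlation, is straightforward to reproduce; the paired values below are invented for illustration.

```python
# Spearman rank correlation between an operating-room measure and a simulator
# metric (hypothetical paired data for 10 residents).
from scipy import stats

or_time_s   = [2400, 1800, 3100, 2750, 2100, 2900, 1950, 2600, 2300, 3300]
sim_task7_s = [310, 240, 420, 380, 260, 400, 250, 350, 300, 450]

rho, p = stats.spearmanr(or_time_s, sim_task7_s)
print(f"Spearman r(s) = {rho:.2f}, p = {p:.3f}")
```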
de Vries, Anna H; Muijtjens, Arno M M; van Genugten, Hilde G J; Hendrikx, Ad J M; Koldewijn, Evert L; Schout, Barbara M A; van der Vleuten, Cees P M; Wagner, Cordula; Tjiam, Irene M; van Merriënboer, Jeroen J G
2018-06-05
The current shift towards competency-based residency training has increased the need for objective assessment of skills. In this study, we developed and validated an assessment tool that measures technical and non-technical competency in transurethral resection of bladder tumour (TURBT). The 'Test Objective Competency' (TOCO)-TURBT tool was designed by means of cognitive task analysis (CTA), which included expert consensus. The tool consists of 51 items, divided into 3 phases: preparatory (n = 15), procedural (n = 21), and completion (n = 15). For validation of the TOCO-TURBT tool, 2 TURBT procedures were performed and videotaped by 25 urologists and 51 residents in a simulated setting. The participants' degree of competence was assessed by a panel of eight independent expert urologists using the TOCO-TURBT tool. Each procedure was assessed by two raters. Feasibility, acceptability and content validity were evaluated by means of a quantitative cross-sectional survey. Regression analyses were performed to assess the strength of the relation between experience and test scores (construct validity). Reliability was analysed by generalizability theory. The majority of assessors and urologists indicated the TOCO-TURBT tool to be a valid assessment of competency and would support the implementation of the TOCO-TURBT assessment as a certification method for residents. Construct validity was clearly established for all outcome measures of the procedural phase (all r > 0.5, p < 0.01). Generalizability-theory analysis showed high reliability (coefficient Phi ≥ 0.8) when using the format of two assessors and two cases. This study provides first evidence that the TOCO-TURBT tool is a feasible, valid and reliable assessment tool for measuring competency in TURBT. The tool has the potential to be used for future certification of competencies for residents and urologists. The methodology of CTA might be valuable in the development of assessment tools in other areas of clinical practice.
Prediction of resource volumes at untested locations using simple local prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2006-01-01
This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
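A hedged sketch of the overall workflow, under the assumption of a simple k-nearest-neighbour local predictor (the paper's own local models are not reproduced here): leave-one-out cross-validation selects the model, predictions are made at undrilled target sites, and a bootstrap over the training data yields a confidence interval for the regional total. All data are synthetic.

```python
# Cross-validation + bootstrap workflow for local spatial prediction, using a
# k-nearest-neighbour stand-in for the local prediction model.
import numpy as np

rng = np.random.default_rng(5)
train_xy = rng.uniform(0, 10, size=(80, 2))                  # drilled locations
train_vol = 5 + np.sin(train_xy[:, 0]) + rng.normal(0, 0.5, 80)
target_xy = rng.uniform(0, 10, size=(40, 2))                 # undrilled locations

def knn_predict(x, xs, ys, k):
    d = np.linalg.norm(xs - x, axis=1)
    return ys[np.argsort(d)[:k]].mean()

# 1. Leave-one-out cross-validation (jackknife-style errors) to choose k.
def loo_error(k):
    errs = []
    for i in range(len(train_xy)):
        mask = np.arange(len(train_xy)) != i
        pred = knn_predict(train_xy[i], train_xy[mask], train_vol[mask], k)
        errs.append((pred - train_vol[i]) ** 2)
    return np.mean(errs)

best_k = min(range(2, 11), key=loo_error)

# 2. Predict at target sites and bootstrap a CI for the regional total.
preds = np.array([knn_predict(x, train_xy, train_vol, best_k) for x in target_xy])
boot_totals = []
for _ in range(1000):
    idx = rng.integers(0, len(train_xy), len(train_xy))
    boot_totals.append(sum(knn_predict(x, train_xy[idx], train_vol[idx], best_k)
                           for x in target_xy))
lo, hi = np.percentile(boot_totals, [2.5, 97.5])
print(f"k={best_k}, total={preds.sum():.1f}, 95% bootstrap CI [{lo:.1f}, {hi:.1f}]")
```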
P-8A Poseidon strategy for modeling & simulation verification validation & accreditation (VV&A)
NASA Astrophysics Data System (ADS)
Kropp, Derek L.
2009-05-01
One of the first challenges in addressing the need for Modeling & Simulation (M&S) Verification, Validation, & Accreditation (VV&A) is to develop an approach for applying structured and formalized VV&A processes. The P-8A Poseidon Multi-Mission Maritime Aircraft (MMA) Program Modeling and Simulation Accreditation Strategy documents the P-8A program's approach to VV&A. The P-8A strategy tailors a risk-based approach and leverages existing bodies of knowledge, such as the Defense Modeling and Simulation Office Recommended Practice Guide (DMSO RPG), to make the process practical and efficient. As the program progresses, the M&S team must continue to look for ways to streamline the process, add supplemental steps to enhance the process, and identify and overcome procedural, organizational, and cultural challenges. This paper includes some of the basics of the overall strategy, examples of specific approaches that have worked well, and examples of challenges that the M&S team has faced.
Modeling aspects of human memory for scientific study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caudell, Thomas P.; Watson, Patrick; McDaniel, Mark A.
Working with leading experts in the field of cognitive neuroscience and computational intelligence, SNL has developed a computational architecture that represents neurocognitive mechanisms associated with how humans remember experiences in their past. The architecture represents how knowledge is organized and updated through information from individual experiences (episodes) via the cortical-hippocampal declarative memory system. We compared the simulated behavioral characteristics with those of humans measured under well-established experimental standards, controlling for unmodeled aspects of human processing, such as perception. We used this knowledge to create robust simulations of human memory behaviors that should help move the scientific community closer to understanding how humans remember information. These behaviors were experimentally validated against actual human subjects, and this work has been published. An important outcome of the validation process will be the joining of specific experimental testing procedures from the field of neuroscience with computational representations from the field of cognitive modeling and simulation.
Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning.
Boas, F Edward; Srimathveeravalli, Govindarajan; Durack, Jeremy C; Kaye, Elena A; Erinjeri, Joseph P; Ziv, Etay; Maybody, Majid; Yarmohammadi, Hooman; Solomon, Stephen B
2017-05-01
To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Ice ball size and shape was simulated using the Pennes bioheat equation. Five thousand six hundred and seventy different cryoablation procedures were simulated, using 1-6 cryoablation probes and 1-2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produces the desired ice ball shape and dimensions. Average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Cryoablation simulations accurately predict the ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, probe configuration and spacing, and ablation time required.
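A planning lookup of this kind can be approximated with a small table query. The sketch below is purely illustrative: the database rows, column names, and coverage margin are invented and are not values from the published simulation database.

    import pandas as pd

    # Hypothetical excerpt of a cryoablation simulation database: each row is one simulated
    # configuration with the resulting ice ball dimensions along three perpendicular axes (cm).
    db = pd.DataFrame([
        {"probes": 2, "spacing_cm": 1.5, "time_min": 10, "dx": 3.1, "dy": 2.6, "dz": 3.8},
        {"probes": 3, "spacing_cm": 2.0, "time_min": 10, "dx": 4.2, "dy": 3.5, "dz": 4.6},
        {"probes": 4, "spacing_cm": 2.0, "time_min": 15, "dx": 5.0, "dy": 4.4, "dz": 5.5},
    ])

    def plan_ablation(target_dx, target_dy, target_dz, margin=0.5):
        """Return configurations whose simulated ice ball covers the target size
        (plus a margin) in every axis, sorted by probe count and ablation time."""
        ok = (
            (db["dx"] >= target_dx + margin)
            & (db["dy"] >= target_dy + margin)
            & (db["dz"] >= target_dz + margin)
        )
        return db[ok].sort_values(["probes", "time_min"])

    # Example query: a 3.5 x 2.8 x 4.0 cm target volume.
    print(plan_ablation(3.5, 2.8, 4.0))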
USDA-ARS's Scientific Manuscript database
DayCent (Daily Century) is a biogeochemical model of intermediate complexity used to simulate flows of carbon and nutrients for crop, grassland, forest, and savanna ecosystems. Required model inputs are: soil texture, current and historical land use, vegetation cover, and daily maximum/minimum tempe...
ERIC Educational Resources Information Center
Schneider, W. Joel; Roman, Zachary
2018-01-01
We used data simulations to test whether composites consisting of cohesive subtest scores are more accurate than composites consisting of divergent subtest scores. We demonstrate that when multivariate normality holds, divergent and cohesive scores are equally accurate. Furthermore, excluding divergent scores results in biased estimates of…
Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang
In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.
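For readers unfamiliar with auto-associative recall, the sketch below shows the basic idea with a classical Hopfield-style bipolar memory: patterns are stored in a weight matrix and recovered from corrupted cues. The Hebbian outer-product rule here is a simpler stand-in, not the inequality-based weight design described in the abstract.

    import numpy as np

    rng = np.random.default_rng(1)

    # Store a few bipolar (+1/-1) patterns with a Hebbian outer-product rule.
    # This is a classical Hopfield-style stand-in, not the paper's weight design.
    patterns = rng.choice([-1, 1], size=(3, 64))
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)

    def recall(cue, steps=20):
        """Synchronously update the bipolar state until it settles (or steps run out)."""
        x = cue.copy()
        for _ in range(steps):
            nxt = np.where(W @ x >= 0, 1, -1)
            if np.array_equal(nxt, x):
                break
            x = nxt
        return x

    # Flip roughly 10% of the bits of a stored pattern and check whether it is restored.
    noisy = patterns[0].copy()
    flip = rng.choice(64, size=6, replace=False)
    noisy[flip] *= -1
    print("recovered:", np.array_equal(recall(noisy), patterns[0]))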
2009-11-24
assisted by the Brigade Combat Team (BCT) Modernization effort, the use of Models and Simulations (M&S) becomes more crucial in supporting major...in 2008 via a slice of the Current Force (CF) BCT structure. To ensure realistic operational context, an M&S System-of-Systems (SoS) level...messages, and constructive representation of platforms, vehicles, and terrain. The M&S federation also provided test control, data collection, and live
The control of flexible structure vibrations using a cantilevered adaptive truss
NASA Technical Reports Server (NTRS)
Wynn, Robert H., Jr.; Robertshaw, Harry H.
1991-01-01
Analytical and experimental procedures and design tools are presented for the control of flexible structure vibrations using a cantilevered adaptive truss. Simulated and experimental data are examined for three types of structures: a slender beam, a single curved beam, and two curved beams. The adaptive truss is shown to produce a 6,000-percent increase in damping, demonstrating its potential in vibration control. Good agreement is obtained between the simulated and experimental data, thus validating the modeling methods.
Frasson, L; Neubert, J; Reina, S; Oldfield, M; Davies, B L; Rodriguez Y Baena, F
2010-01-01
The popularity of minimally invasive surgical procedures is driving the development of novel, safer and more accurate surgical tools. In this context a multi-part probe for soft tissue surgery is being developed in the Mechatronics in Medicine Laboratory at Imperial College, London. This study reports an optimization procedure using finite element methods, for the identification of an interlock geometry able to limit the separation of the segments composing the multi-part probe. An optimal geometry was obtained and the corresponding three-dimensional finite element model validated experimentally. Simulation results are shown to be consistent with the physical experiments. The outcome of this study is an important step in the provision of a novel miniature steerable probe for surgery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambon, Paul H.; Deter, Dean D.
2016-07-01
The goal of this project is to develop and evaluate powertrain test procedures that can accurately simulate real-world operating conditions, and to determine greenhouse gas (GHG) emissions of advanced medium- and heavy-duty engine and vehicle technologies. ORNL used its Vehicle System Integration Laboratory to evaluate test procedures on a stand-alone engine as well as two powertrains. Those components were subjected to various drive cycles and vehicle conditions to evaluate the validity of the results over a broad range of test conditions. Overall, more than 1000 tests were performed. The data are compiled and analyzed in this report.
The augmentation algorithm and molecular phylogenetic trees
NASA Technical Reports Server (NTRS)
Holmquist, R.
1978-01-01
Moore's (1977) augmentation procedure is discussed. It is concluded that the procedure is valid for obtaining estimates of the total number of fixed nucleotide substitutions, both theoretically and in practice and for both simulated and real data, and that for experimentally dense data sets it agrees with stochastic estimates of the divergence, provided the restrictions on codon mutability resulting from natural selection are explicitly allowed for. Tateno and Nei's (1978) critique that the augmentation procedure has a systematic bias toward overestimation of the total number of nucleotide replacements is disputed, and a data analysis suggests that ancestral sequences inferred by the method of parsimony contain a large number of incorrectly assigned nucleotides.
Moorthy, Krishna; Munz, Yaron; Adams, Sally; Pandey, Vikas; Darzi, Ara
2005-01-01
Background: High-risk organizations such as aviation rely on simulations for the training and assessment of technical and team performance. The aim of this study was to develop a simulated environment for surgical trainees using similar principles. Methods: A total of 27 surgical trainees carried out a simulated procedure in a Simulated Operating Theatre with a standardized OR team. Observation of OR events was carried out by an unobtrusive data collection system: clinical data recorder. Assessment of performance consisted of blinded rating of technical skills, a checklist of technical events, an assessment of communication, and a global rating of team skills by a human factors expert and trained surgical research fellows. The participants underwent a debriefing session, and the face validity of the simulated environment was evaluated. Results: While technical skills rating discriminated between surgeons according to experience (P = 0.002), there were no differences in terms of the checklist and team skills (P = 0.70). While all trainees were observed to gown/glove and handle sharps correctly, low scores were observed for some key features of communication with other team members. Low scores were obtained by the entire cohort for vigilance. Interobserver reliability was 0.90 and 0.89 for technical and team skills ratings. Conclusions: The simulated operating theatre could serve as an environment for the development of surgical competence among surgical trainees. Objective, structured, and multimodal assessment of performance during simulated procedures could serve as a basis for focused feedback during training of technical and team skills. PMID:16244534
Evaluation of a training manual for the acquisition of behavioral assessment interviewing skills.
Miltenberger, R G; Fuqua, R W
1985-01-01
Two procedures were used to teach behavioral assessment interviewing skills: a training manual and one-to-one instruction that included modeling, rehearsal, and feedback. Two graduate students and two advanced undergraduates were trained with each procedure. Interviewing skills were recorded in simulated assessment interviews conducted by each student across baseline and treatment conditions. Each training procedure was evaluated in a multiple baseline across students design. The results showed that both procedures were effective for training behavioral interviewing skills, with all students reaching a level of 90%-100% correct responding. Finally, a group of experts in behavior analysis rated each interviewing skill as relevant to the conduct of an assessment interview and a group of behavioral clinicians socially validated the outcomes of the two procedures. PMID:4086413
Weinstock, Peter; Rehder, Roberta; Prabhu, Sanjay P; Forbes, Peter W; Roussin, Christopher J; Cohen, Alan R
2017-07-01
OBJECTIVE Recent advances in optics and miniaturization have enabled the development of a growing number of minimally invasive procedures, yet innovative training methods for the use of these techniques remain lacking. Conventional teaching models, including cadavers and physical trainers as well as virtual reality platforms, are often expensive and ineffective. Newly developed 3D printing technologies can recreate patient-specific anatomy, but the stiffness of the materials limits fidelity to real-life surgical situations. Hollywood special effects techniques can create ultrarealistic features, including lifelike tactile properties, to enhance accuracy and effectiveness of the surgical models. The authors created a highly realistic model of a pediatric patient with hydrocephalus via a unique combination of 3D printing and special effects techniques and validated the use of this model in training neurosurgery fellows and residents to perform endoscopic third ventriculostomy (ETV), an effective minimally invasive method increasingly used in treating hydrocephalus. METHODS A full-scale reproduction of the head of a 14-year-old adolescent patient with hydrocephalus, including external physical details and internal neuroanatomy, was developed via a unique collaboration of neurosurgeons, simulation engineers, and a group of special effects experts. The model contains "plug-and-play" replaceable components for repetitive practice. The appearance of the training model (face validity) and the reproducibility of the ETV training procedure (content validity) were assessed by neurosurgery fellows and residents of different experience levels based on a 14-item Likert-like questionnaire. The usefulness of the training model for evaluating the performance of the trainees at different levels of experience (construct validity) was measured by blinded observers using the Objective Structured Assessment of Technical Skills (OSATS) scale for the performance of ETV. RESULTS A combination of 3D printing technology and casting processes led to the creation of realistic surgical models that include high-fidelity reproductions of the anatomical features of hydrocephalus and allow for the performance of ETV for training purposes. The models reproduced the pulsations of the basilar artery, ventricles, and cerebrospinal fluid (CSF), thus simulating the experience of performing ETV on an actual patient. The results of the 14-item questionnaire showed limited variability among participants' scores, and the neurosurgery fellows and residents gave the models consistently high ratings for face and content validity. The mean score for the content validity questions (4.88) was higher than the mean score for face validity (4.69) (p = 0.03). On construct validity scores, the blinded observers rated performance of fellows significantly higher than that of residents, indicating that the model provided a means to distinguish between novice and expert surgical skills. CONCLUSIONS A plug-and-play lifelike ETV training model was developed through a combination of 3D printing and special effects techniques, providing both anatomical and haptic accuracy. Such simulators offer opportunities to accelerate the development of expertise with respect to new and novel procedures as well as iterate new surgical approaches and innovations, thus allowing novice neurosurgeons to gain valuable experience in surgical techniques without exposing patients to risk of harm.
Day, Theodore Eugene; Sarawgi, Sandeep; Perri, Alexis; Nicolson, Susan C
2015-04-01
This study describes the use of discrete event simulation (DES) to model and analyze a large academic pediatric cardiac center. The objective was to identify a strategy, and to predict and test the effectiveness of that strategy, to minimize the number of elective cardiac procedures that are postponed because of a lack of available cardiac intensive care unit (CICU) capacity. A DES model of the cardiac center at The Children's Hospital of Philadelphia was developed and validated using 1 year of deidentified administrative patient data. The model was then used to analyze strategies for reducing postponements of cases requiring CICU care through improved scheduling of multipurpose space. Each of five alternative scenarios was simulated for ten independent 1-year runs. Reductions in simulated elective procedure postponements were found when a multipurpose procedure room (the hybrid room) was used for operations on Wednesday and Thursday, compared with Friday (as was the real-world use). The reduction on Wednesday was statistically significant, with postponements dropping from 27.8 to 23.3 annually (95% confidence interval 18.8-27.8). Thus, we anticipate a relative reduction in postponements of 16.2%. Since the implementation, there have been two postponements from July 1 to November 21, 2014, compared with ten for the same time period in 2013. Simulation allows us to test planned changes in complex environments, including pediatric cardiac care. A reduction in postponements of cardiac procedures requiring CICU care is predicted through reshuffling schedules of existing multipurpose capacity, and these reductions appear to be achievable in the real world after implementation. Copyright © 2015 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
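The capacity logic behind such a DES study can be conveyed with a deliberately small toy model: elective cases are postponed whenever no CICU bed is free on their scheduled day. All arrival rates, bed counts, and lengths of stay below are invented for illustration and bear no relation to the hospital data used in the study.

    import random

    random.seed(42)

    def simulate_year(cicu_beds=12, electives_per_weekday=2, mean_los_days=3.0, runs=10):
        """Toy CICU bed-capacity model: an elective case is postponed when no bed is free
        on its scheduled day. All rates and lengths of stay are invented for illustration."""
        postponed_per_run = []
        for _ in range(runs):
            occupied = []          # remaining length of stay (days) for each occupied bed
            postponed = 0
            for day in range(365):
                occupied = [los - 1 for los in occupied if los > 1]   # discharges
                emergencies = random.randint(0, 2)                    # unscheduled admissions
                electives = electives_per_weekday if day % 7 < 5 else 0
                for k in range(emergencies + electives):
                    if len(occupied) < cicu_beds:
                        occupied.append(random.expovariate(1.0 / mean_los_days) + 1.0)
                    elif k >= emergencies:   # electives (listed after emergencies) are postponed
                        postponed += 1       # emergencies over capacity are ignored in this toy
            postponed_per_run.append(postponed)
        return sum(postponed_per_run) / runs

    print("mean annual elective postponements:", simulate_year())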
Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code
NASA Technical Reports Server (NTRS)
Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William
2006-01-01
The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations, improving control of propagated errors, and establishing code verification processes. Code validation will use new and improved low Earth orbit (LEO) environmental models together with a recently improved International Space Station (ISS) shield model to validate computational models and procedures against data measured aboard the ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.
Development and content validation of performance assessments for endoscopic third ventriculostomy.
Breimer, Gerben E; Haji, Faizal A; Hoving, Eelco W; Drake, James M
2015-08-01
This study aims to develop and establish the content validity of multiple expert rating instruments to assess performance in endoscopic third ventriculostomy (ETV), collectively called the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). The important aspects of ETV were identified through a review of current literature, ETV videos, and discussion with neurosurgeons, fellows, and residents. Three assessment measures were subsequently developed: a procedure-specific checklist (CL), a CL of surgical errors, and a global rating scale (GRS). Neurosurgeons from various countries, all identified as experts in ETV, were then invited to participate in a modified Delphi survey to establish the content validity of these instruments. In each Delphi round, experts rated their agreement including each procedural step, error, and GRS item in the respective instruments on a 5-point Likert scale. Seventeen experts agreed to participate in the study and completed all Delphi rounds. After item generation, a total of 27 procedural CL items, 26 error CL items, and 9 GRS items were posed to Delphi panelists for rating. An additional 17 procedural CL items, 12 error CL items, and 1 GRS item were added by panelists. After three rounds, strong consensus (>80% agreement) was achieved on 35 procedural CL items, 29 error CL items, and 10 GRS items. Moderate consensus (50-80% agreement) was achieved on an additional 7 procedural CL items and 1 error CL item. The final procedural and error checklist contained 42 and 30 items, respectively (divided into setup, exposure, navigation, ventriculostomy, and closure). The final GRS contained 10 items. We have established the content validity of three ETV assessment measures by iterative consensus of an international expert panel. Each measure provides unique assessment information and thus can be used individually or in combination, depending on the characteristics of the learner and the purpose of the assessment. These instruments must now be evaluated in both the simulated and operative settings, to determine their construct validity and reliability. Ultimately, the measures contained in the NEVAT may prove suitable for formative assessment during ETV training and potentially as summative assessment measures during certification.
Cloud cover determination in polar regions from satellite imagery
NASA Technical Reports Server (NTRS)
Barry, R. G.; Key, J. R.; Maslanik, J. A.
1988-01-01
The principal objectives of this project are: (1) to develop suitable validation data sets to evaluate the effectiveness of the International Satellite Cloud Climatology Project (ISCCP) operational algorithm for cloud retrieval in polar regions and to validate model simulations of polar cloud cover; (2) to identify limitations of current procedures for varying atmospheric surface conditions, and to explore potential means to remedy them using textural classifiers; and (3) to compare synoptic cloud data from a control run experiment of the GISS climate model II with typical observed synoptic cloud patterns.
URANS simulations of the tip-leakage cavitating flow with verification and validation procedures
NASA Astrophysics Data System (ADS)
Cheng, Huai-yu; Long, Xin-ping; Liang, Yun-zhi; Long, Yun; Ji, Bin
2018-04-01
In the present paper, the Vortex Identified Zwart-Gerber-Belamri (VIZGB) cavitation model coupled with the SST-CC turbulence model is used to investigate the unsteady tip-leakage cavitating flow induced by a NACA0009 hydrofoil. A qualitative comparison between the numerical and experimental results is made. In order to quantitatively evaluate the reliability of the numerical data, the verification and validation (V&V) procedures are used in the present paper. Errors of numerical results are estimated with seven error estimators based on the Richardson extrapolation method. It is shown that though a strict validation cannot be achieved, a reasonable prediction of the gross characteristics of the tip-leakage cavitating flow can be obtained. Based on the numerical results, the influence of the cavitation on the tip-leakage vortex (TLV) is discussed, which indicates that the cavitation accelerates the fusion of the TLV and the tip-separation vortex (TSV). Moreover, the trajectory of the TLV, when the cavitation occurs, is close to the side wall.
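For context, the sketch below shows the standard three-grid Richardson extrapolation step that underlies error estimators of this kind: an observed order of accuracy, an extrapolated value, and a grid-convergence-index (GCI) style uncertainty. The numerical values are invented, and the single estimator shown is a generic textbook form rather than any of the seven estimators used in the paper.

    import math

    def richardson_error(f_fine, f_medium, f_coarse, r=2.0, safety_factor=1.25):
        """Three-grid Richardson extrapolation: observed order of accuracy, extrapolated
        value, relative discretization-error estimate, and a GCI-style uncertainty."""
        p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
        f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
        e_rel = abs((f_fine - f_medium) / f_fine)
        gci = safety_factor * e_rel / (r**p - 1.0)
        return p, f_exact, e_rel, gci

    # Example: a lift-like quantity computed on three systematically refined grids (invented).
    p, f_exact, e_rel, gci = richardson_error(0.9600, 0.9620, 0.9700)
    print(f"observed order p = {p:.2f}, extrapolated value = {f_exact:.4f}, GCI = {100*gci:.2f}%")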
Adaptive control of large space structures using recursive lattice filters
NASA Technical Reports Server (NTRS)
Goglia, G. L.
1985-01-01
The use of recursive lattice filters for identification and adaptive control of large space structures was studied. Lattice filters are used widely in the areas of speech and signal processing. Herein, they are used to identify the structural dynamics model of the flexible structures. This identified model is then used for adaptive control. Before the identified model and control laws are integrated, the identified model is passed through a series of validation procedures, and control is engaged only when the model passes these procedures. This type of validation scheme prevents instability when the overall loop is closed. The results obtained from simulation were compared to those obtained from experiments. In this regard, the flexible beam and grid apparatus at the Aerospace Control Research Lab (ACRL) of NASA Langley Research Center were used as the principal candidates for carrying out the above tasks. Another important area of research, namely that of robust controller synthesis, was investigated using frequency domain multivariable controller synthesis methods.
A validation procedure for a LADAR system radiometric simulation model
NASA Astrophysics Data System (ADS)
Leishman, Brad; Budge, Scott; Pack, Robert
2007-04-01
The USU LadarSIM software package is a ladar system engineering tool that has recently been enhanced to include modeling of the radiometry of ladar beam footprints. This paper discusses our validation of the radiometric model and presents a practical approach to future validation work. In order to validate complicated and interrelated factors affecting radiometry, a systematic approach had to be developed. Data for known parameters were first gathered, and unknown parameters of the system were then determined from simulation test scenarios. This was done in a way that isolated as many unknown variables as possible and then built on previously obtained results. First, the appropriate voltage threshold levels of the discrimination electronics were set by analyzing the number of false alarms seen in actual data sets. With this threshold set, the system noise was then adjusted to achieve the appropriate number of dropouts. Once a suitable noise level was found, the range errors of the simulated and actual data sets were compared and studied. Predicted errors in range measurements were analyzed using two methods: first by examining the range error of a surface with known reflectivity, and second by examining the range errors for specific detectors with known responsivities. This provided insight into the discrimination method and receiver electronics used in the actual system.
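The threshold-setting step described above can be sketched as a simple quantile calculation: choose the discrimination threshold whose exceedance probability over noise-only samples reproduces the false-alarm count seen in the measured data. The noise distribution, counts, and units below are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(7)

    def threshold_for_false_alarms(noise_samples, target_false_alarms, n_range_gates):
        """Pick the detection threshold whose exceedance rate over noise-only samples
        would produce roughly the measured number of false alarms in one scan."""
        target_rate = target_false_alarms / n_range_gates
        return np.quantile(noise_samples, 1.0 - target_rate)

    # Noise-only receiver voltages (arbitrary units; the distribution is an assumption).
    noise = rng.rayleigh(scale=0.12, size=200_000)

    # Suppose the real data set showed about 25 false alarms over 1e5 range gates.
    thr = threshold_for_false_alarms(noise, target_false_alarms=25, n_range_gates=100_000)
    print(f"threshold = {thr:.3f} V, empirical false-alarm rate = "
          f"{(noise > thr).mean():.2e} per gate")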
Cognitive Task Analysis of En Route Air Traffic Control: Model Extension and Validation.
ERIC Educational Resources Information Center
Redding, Richard E.; And Others
Phase II of a project extended data collection and analytic procedures to develop a model of expertise and skill development for en route air traffic control (ATC). New data were collected by recording the Dynamic Simulator (DYSIM) performance of five experts with a work overload problem. Expert controllers were interviewed in depth for mental…
Strength validation and fire endurance of glued-laminated timber beams
E. L. Schaffer; C. M. Marx; D. A. Bender; F. E. Woeste
A previous paper presented a reliability-based model to predict the strength of glued-laminated timber beams at both room temperature and during fire exposure. This Monte Carlo simulation procedure generates strength and fire endurance (time-to-failure, TTF) data for glued-laminated beams that allow assessment of mean strength and TTF as well as their variability....
Landsat TM Classifications For SAFIS Using FIA Field Plots
William H. Cooke; Andrew J. Hartsell
2001-01-01
Wall-to-wall Landsat Thematic Mapper (TM) classification efforts in Georgia require field validation. We developed a new crown modeling procedure based on Forest Health Monitoring (FHM) data to test Forest Inventory and Analysis (FIA) data. These models simulate the proportion of tree crowns that reflect light on a FIA subplot basis. We averaged subplot crown...
Mean estimation in highly skewed samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pederson, S P
The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution. 4 refs., 2 figs., 1 tab.
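The failure of normal-theory intervals for skewed data is easy to reproduce. Because the absolute-value Cauchy distribution used in the paper has no finite mean, the sketch below substitutes a lognormal sample (an assumption, chosen so the true mean is known) and shows the empirical coverage of the standard t-interval falling well short of its nominal 95%.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    mu, sigma, n, reps = 0.0, 1.5, 30, 5000
    true_mean = np.exp(mu + sigma**2 / 2)          # known mean of the lognormal stand-in
    tcrit = stats.t.ppf(0.975, df=n - 1)

    covered = 0
    for _ in range(reps):
        x = rng.lognormal(mu, sigma, size=n)       # highly right-skewed sample
        half = tcrit * x.std(ddof=1) / np.sqrt(n)
        covered += (x.mean() - half <= true_mean <= x.mean() + half)

    print(f"nominal 95% t-interval, empirical coverage = {covered / reps:.3f}")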
Peters, Kristina; Michel, Maurice Stephan; Matis, Ulrike; Häcker, Axel
2006-01-01
Experiments to develop innovative surgical therapy procedures are conventionally conducted on animals, as crucial aspects like tissue removal and bleeding disposition cannot be investigated in vitro. Extracorporeal organ models however reflect these aspects and could thus reduce the use of animals for this purpose fundamentally in the future. The aim of this work was to validate the isolated perfused porcine kidney model with regard to its use for surgical purposes on the basis of histological and radiological procedures. The results show that neither storage nor artificial perfusion led to any structural or functional damage which would affect the quality of the organ. The kidney model is highly suitable for simulating the main aspects of renal physiology and allows a constant calibration of perfusion pressure and tissue temperature. Thus, with only a moderate amount of work involved, the kidney model provides a cheap and readily available alternative to conventional animal experiments; it allows standardised experimental settings and provides valid results.
A-SIDE: Video Simulation of Teen Alcohol and Marijuana Use Contexts
Anderson, Kristen G; Brackenbury, Lauren; Quackenbush, Mathias; Buras, Morgan; Brown, Sandra A; Price, Joseph
2014-01-01
Objective: This investigation examined the concurrent validity of a new video simulation assessing adolescent alcohol and marijuana decision making in peer contexts (A-SIDE). Method: One hundred eleven youth (60% female; age 14–19 years; 80% White, 12.6% Latino; 24% recruited from treatment centers) completed the A-SIDE simulation, self-report measures of alcohol and marijuana use and disorder symptoms, and measures of alcohol (i.e., drinking motives and expectancies) and marijuana (i.e., expectancies) cognitions in the laboratory. Results: Study findings support concurrent associations between behavioral willingness to use alcohol and marijuana on the simulation and current use variables as well as on drinking motives and marijuana expectancies. Relations with use variables were found even when sample characteristics were controlled. Interestingly, willingness to accept nonalcoholic beverages (e.g., soda) and food offers in the simulation were inversely related to recent alcohol and marijuana use behavior. Conclusions: These findings are consistent with prior work using laboratory simulations with college students and provide preliminary validity evidence for this procedure. Future work is needed to examine the predictive utility of the A-SIDE with larger and more diverse samples of youth. PMID:25343652
Human cadaver retina model for retinal heating during corneal surgery with a femtosecond laser
NASA Astrophysics Data System (ADS)
Sun, Hui; Fan, Zhongwei; Yun, Jin; Zhao, Tianzhuo; Yan, Ying; Kurtz, Ron M.; Juhasz, Tibor
2014-02-01
Femtosecond lasers are widely used in everyday clinical procedures to perform minimally invasive corneal refractive surgery. The IntraLase femtosecond laser (AMO Corp., Santa Ana, CA) is a common example of such a laser. In the present study a numerical simulation was developed to quantify the temperature rise in the retina during femtosecond intracorneal surgery. Also, ex-vivo retinal heating due to laser irradiation was measured with an infrared thermal camera (Fluke Corp., Everett, WA) as a validation of the simulation. A computer simulation was developed using Comsol Multiphysics to calculate the temperature rise in the cadaver retina during femtosecond laser corneal surgery. The simulation showed a temperature rise of less than 0.3 degrees for realistic pulse energies for the various repetition rates. Human cadaver retinas were irradiated with a 150 kHz IntraLase femtosecond laser and the temperature rise was measured with an infrared thermal camera. Thermal camera measurements are in agreement with the simulation. During routine femtosecond laser corneal surgery with normal clinical parameters, the temperature rise is well beneath the threshold for retinal damage. The simulation predictions are in agreement with thermal measurements, providing a level of experimental validation.
SIMSAT: An object oriented architecture for real-time satellite simulation
NASA Technical Reports Server (NTRS)
Williams, Adam P.
1993-01-01
Real-time satellite simulators are vital tools in the support of satellite missions. They are used in the testing of ground control systems, the training of operators, the validation of operational procedures, and the development of contingency plans. The simulators must provide high-fidelity modeling of the satellite, which requires detailed system information, much of which is not available until relatively near launch. The short time-scales and resulting high productivity required of such simulator developments culminates in the need for a reusable infrastructure which can be used as a basis for each simulator. This paper describes a major new simulation infrastructure package, the Software Infrastructure for Modelling Satellites (SIMSAT). It outlines the object oriented design methodology used, describes the resulting design, and discusses the advantages and disadvantages experienced in applying the methodology.
Methods to validate the accuracy of an indirect calorimeter in the in-vitro setting.
Oshima, Taku; Ragusa, Marco; Graf, Séverine; Dupertuis, Yves Marc; Heidegger, Claudia-Paula; Pichard, Claude
2017-12-01
The international ICALIC initiative aims at developing a new indirect calorimeter according to the needs of the clinicians and researchers in the field of clinical nutrition and metabolism. The project initially focuses on validating the calorimeter for use in mechanically ventilated, acutely ill adult patients. However, standard methods to validate the accuracy of calorimeters have not yet been established. This paper describes the procedures for the in-vitro tests to validate the accuracy of the new indirect calorimeter, and defines the ranges for the parameters to be evaluated in each test to optimize the validation for clinical and research calorimetry measurements. Two in-vitro tests have been defined to validate the accuracy of the gas analyzers and the overall function of the new calorimeter. 1) Gas composition analysis allows validating the accuracy of the O2 and CO2 analyzers. Reference gas of known O2 (or CO2) concentration is diluted by pure nitrogen gas to achieve a predefined O2 (or CO2) concentration, to be measured by the indirect calorimeter. The O2 and CO2 concentrations to be tested were determined according to their expected ranges of concentrations during calorimetry measurements. 2) Gas exchange simulator analysis validates O2 consumption (VO2) and CO2 production (VCO2) measurements. CO2 gas injection into artificial breath gas provided by the mechanical ventilator simulates VCO2. The resulting dilution of the O2 concentration in the expiratory air is analyzed by the calorimeter as VO2. CO2 gas of identical concentration to the fraction of inspired O2 (FiO2) is used to simulate identical VO2 and VCO2. Indirect calorimetry results from publications were analyzed to determine the VO2 and VCO2 values to be tested for the validation. O2 concentration in respiratory air is highest at inspiration, and can decrease to 15% during expiration. CO2 concentration can be as high as 5% in expired air. To validate the analyzers for measurements of FiO2 up to 70%, the ranges of O2 and CO2 concentrations to be tested were defined as 15-70% and 0.5-5.0%, respectively. The mean VO2 in 426 adult mechanically ventilated patients was 270 ml/min, with a 2 standard deviation (SD) range of 150-391 ml/min. Thus, the VO2 and VCO2 values to be simulated for the validation were defined as 150, 250, and 400 ml/min. The procedures for the in-vitro tests of the new indirect calorimeter and the ranges for the parameters to be evaluated in each test have been defined to optimize the validation of accuracy for clinical and research indirect calorimetry measurements. The combined methods will be used to validate the accuracy of the new indirect calorimeter developed by the ICALIC initiative, and should become the standard method to validate the accuracy of any future indirect calorimeters. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.
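The dilution step in the first test reduces to simple flow arithmetic: the target fraction equals the reference fraction scaled by the ratio of reference flow to total flow. The helper below is a sketch of that calculation with invented flow values; it is not part of the ICALIC protocol.

    def n2_dilution_flow(c_ref, c_target, q_ref):
        """Flow of pure N2 (same units as q_ref) needed to dilute a reference gas of
        O2 (or CO2) fraction c_ref down to a target fraction c_target:
            c_target = c_ref * q_ref / (q_ref + q_n2)  =>  q_n2 = q_ref * (c_ref/c_target - 1)."""
        if not 0 < c_target < c_ref <= 1:
            raise ValueError("target fraction must lie between 0 and the reference fraction")
        return q_ref * (c_ref / c_target - 1.0)

    # Example: dilute a 100% O2 reference stream of 1.0 L/min to the test points 70%, 40%, 15%.
    for target in (0.70, 0.40, 0.15):
        print(f"target O2 = {target:.0%}: add {n2_dilution_flow(1.0, target, 1.0):.2f} L/min N2")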
Why Does a Method That Fails Continue To Be Used: The Answer
Templeton, Alan R.
2009-01-01
It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single-locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340
The aberration characteristics in a misaligned three-mirror anastigmatic (TMA) system
NASA Astrophysics Data System (ADS)
Wang, Bin; Wu, Fan; Ye, Yutang
2016-09-01
To enable efficient alignment of the TMA system, the aberrations in a misaligned TMA system are analyzed theoretically in this paper. First, based on nodal aberration theory (NAT), the aberration types and characteristics in the misaligned TMA system are derived. Second, a simulation is carried out to verify the analysis, and the simulation results confirm the predicted aberration characteristics. Finally, the alignment procedure is determined according to these aberration characteristics: first adjust the axial spacing of the mirrors in terms of Z9 in the center field of the TMA system; then adjust the decenters and tilts of the mirrors in terms of Z5-Z8 in the edge field of the TMA system. This method is helpful for the alignment of the TMA telescope.
Simulation of unsteady flows by the DSMC macroscopic chemistry method
NASA Astrophysics Data System (ADS)
Goldsworthy, Mark; Macrossan, Michael; Abdel-jawad, Madhat
2009-03-01
In the Direct Simulation Monte-Carlo (DSMC) method, a combination of statistical and deterministic procedures applied to a finite number of 'simulator' particles is used to model rarefied gas-kinetic processes. In the macroscopic chemistry method (MCM) for DSMC, chemical reactions are decoupled from the specific particle pairs selected for collisions. Information from all of the particles within a cell, not just those selected for collisions, is used to determine a reaction rate coefficient for that cell. Unlike collision-based methods, MCM can be used with any viscosity or non-reacting collision models and any non-reacting energy exchange models. It can be used to implement any reaction rate formulations, whether these be from experimental or theoretical studies. MCM has been previously validated for steady flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation. Close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature, density and species mole fractions, as well as for the accumulated number of net reactions per cell.
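The cell-level bookkeeping of a macroscopic chemistry approach can be outlined as follows: evaluate a rate coefficient from cell-averaged properties, convert it to an expected number of simulator reaction events for the time step, and round probabilistically. The Arrhenius constants, densities, and particle weight below are illustrative placeholders, not the reaction data or implementation used in the paper.

    import math
    import random

    def reaction_events_in_cell(n_A, n_B, T_cell, cell_volume, dt, particle_weight,
                                A=1.0e-16, eta=0.0, Ea_over_k=3.0e4):
        """Expected simulator-particle reaction events for A + B -> products in one cell:
        an Arrhenius rate k(T) evaluated at the cell-averaged temperature, applied to
        cell-averaged number densities (illustrative constants, SI-like units)."""
        k = A * T_cell**eta * math.exp(-Ea_over_k / T_cell)     # rate coefficient [m^3/s]
        real_events = k * n_A * n_B * cell_volume * dt          # reacting molecule pairs
        sim_events = real_events / particle_weight              # convert to simulator pairs
        whole = int(sim_events)                                 # probabilistic rounding of the
        return whole + (random.random() < (sim_events - whole)) # fractional remainder

    random.seed(0)
    events = reaction_events_in_cell(n_A=1e23, n_B=1e23, T_cell=6000.0,
                                     cell_volume=1e-9, dt=1e-6, particle_weight=1e12)
    print("simulator reaction pairs this step:", events)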
NASA Technical Reports Server (NTRS)
Wieland, Paul; Miller, Lee; Ibarra, Tom
2003-01-01
As part of the Sustaining Engineering program for the International Space Station (ISS), a ground simulator of the Internal Thermal Control System (ITCS) in the Lab Module was designed and built at the Marshall Space Flight Center (MSFC). To support prediction and troubleshooting, this facility is operationally and functionally similar to the flight system, and flight-like components were used when available. Flight software algorithms, implemented using the LabVIEW® programming language, were used for monitoring performance and controlling operation. Validation testing of the low temperature loop was completed prior to activation of the Lab module in 2001. Assembly of the moderate temperature loop was completed in 2002 and validated in 2003. The facility has been used to address flight issues with the ITCS, successfully demonstrating the ability to add silver biocide and to adjust the pH of the coolant. Upon validation of the entire facility, it will be capable not only of checking procedures, but also of evaluating payload timelining, operational modifications, physical modifications, and other aspects affecting the thermal control system.
Lens of the eye dose calculation for neuro-interventional procedures and CBCT scans of the head
NASA Astrophysics Data System (ADS)
Xiong, Zhenyu; Vijayan, Sarath; Rana, Vijay; Jain, Amit; Rudin, Stephen; Bednarek, Daniel R.
2016-03-01
The aim of this work is to develop a method to calculate lens dose for fluoroscopically guided neuro-interventional procedures and for CBCT scans of the head. EGSnrc Monte Carlo software is used to determine the dose to the lens of the eye for the projection geometry and exposure parameters used in these procedures. This information is provided by a digital CAN bus on the Toshiba Infinix C-Arm system and is saved in a log file by the real-time skin-dose tracking system (DTS) we previously developed. The x-ray beam spectra on this machine were simulated using BEAMnrc. These spectra were compared to those determined by SpekCalc and validated through measured percent-depth-dose (PDD) curves and half-value-layer (HVL) measurements. We simulated CBCT procedures in DOSXYZnrc for a CTDI head phantom and compared the surface dose distribution with that measured with Gafchromic film, and also for an SK150 head phantom and compared the lens dose with that measured with an ionization chamber. Both methods demonstrated good agreement. Organ dose calculated for a simulated neuro-interventional procedure using DOSXYZnrc with the Zubal CT voxel phantom agreed within 10% with that calculated by the PCXMC code for most organs. To calculate the lens dose in a neuro-interventional procedure, we developed a library of normalized lens dose values for different projection angles and kVp's. The total lens dose is then calculated by summing the values over all beam projections and can be included on the DTS report at the end of the procedure.
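Summing a library of normalized dose values over logged beam projections is a straightforward table lookup. The sketch below illustrates the idea with an invented coefficient table and log format; the angles, kVp values, and coefficients are hypothetical and are not taken from the library described above.

    # Hypothetical library of normalized lens-dose coefficients (mGy per mGy of reference
    # air kerma) indexed by gantry angle (deg) and tube voltage (kVp); values are invented.
    DOSE_COEFF = {
        (0, 80): 0.012, (0, 100): 0.015,
        (30, 80): 0.020, (30, 100): 0.026,
        (90, 80): 0.004, (90, 100): 0.006,
    }

    def lens_dose(beam_log):
        """Total lens dose (mGy): sum of per-projection reference air kerma times the
        normalized coefficient looked up for that projection's angle and kVp."""
        return sum(kerma * DOSE_COEFF[(angle, kvp)] for angle, kvp, kerma in beam_log)

    # Example log: (gantry angle, kVp, reference air kerma in mGy) for each exposure.
    log = [(0, 80, 120.0), (30, 100, 200.0), (90, 80, 75.0)]
    print(f"estimated lens dose: {lens_dose(log):.2f} mGy")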
Havemann, Maria Cecilie; Dalsgaard, Torur; Sørensen, Jette Led; Røssaak, Kristin; Brisling, Steffen; Mosgaard, Berit Jul; Høgdall, Claus; Bjerrum, Flemming
2018-05-14
Increasing focus on patient safety makes it important to ensure surgical competency among surgeons before operating on patients. The objective was to gather validity evidence for a virtual-reality simulator test for robotic surgical skills and evaluate its potential as a training tool. Surgeons with varying experience in robotic surgery were recruited: novices (zero procedures), intermediates (1-50), experienced (> 50). Five experienced surgeons rated five exercises on the da Vinci Skills Simulator. Participants were tested using the five exercises. Participants were invited back 3 times and completed a total of 10 attempts per exercise. The outcome was the average simulator performance score for the 5 exercises. 32 participants from 5 surgical specialties were included. 38 participants completed all 4 sessions. A moderate correlation between the average total score and robotic experience was identified for the first attempt (Spearman r = 0.58; p = 0.0004). A difference in average total score was observed between novices and intermediates [median score 61% (IQR 52-66) vs. 83% (IQR 75-91), adjusted p < 0.0001], as well as novices and experienced [median score 61% (IQR 52-66) vs. 80 (IQR 69-85), adjusted p = 0.002]. All three groups improved their performance between the 1st and 10th attempts (p < 0.00). This study describes validity evidence for a virtual-reality simulator for basic robotic surgical skills, which can be used for assessment of basic competency and as a training tool. However, more validity evidence is needed before it can be used for certification or high-stakes assessment.
Accurate reconstruction of 3D cardiac geometry from coarsely-sliced MRI.
Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Berenfeld, Omer; Snyder, Brett; Boyers, Pamela; Gold, Jeffrey
2014-02-01
We present a comprehensive validation analysis to assess the geometric impact of using coarsely-sliced short-axis images to reconstruct patient-specific cardiac geometry. The methods utilize high-resolution diffusion tensor MRI (DTMRI) datasets as reference geometries from which synthesized coarsely-sliced datasets simulating in vivo MRI were produced. 3D models are reconstructed from the coarse data using variational implicit surfaces through a commonly used modeling tool, CardioViz3D. The resulting geometries were then compared to the reference DTMRI models from which they were derived to analyze how well the synthesized geometries approximate the reference anatomy. Averaged over seven hearts, 95% spatial overlap, less than 3% volume variability, and normal-to-surface distance of 0.32 mm was observed between the synthesized myocardial geometries reconstructed from 8 mm sliced images and the reference data. The results provide strong supportive evidence to validate the hypothesis that coarsely-sliced MRI may be used to accurately reconstruct geometric ventricular models. Furthermore, the use of DTMRI for validation of in vivo MRI presents a novel benchmark procedure for studies which aim to substantiate their modeling and simulation methods using coarsely-sliced cardiac data. In addition, the paper outlines a suggested original procedure for deriving image-based ventricular models using the CardioViz3D software. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
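Overlap and volume metrics of the kind reported above can be computed directly from binary segmentation masks. The sketch below evaluates the Dice coefficient and relative volume difference on two synthetic, slightly offset spherical masks; it is a generic illustration, not the validation pipeline of the paper (which also used normal-to-surface distances).

    import numpy as np

    def dice_and_volume_diff(mask_a, mask_b, voxel_volume_mm3=1.0):
        """Dice spatial-overlap coefficient and relative volume difference (%) between
        two binary segmentation masks of the same shape."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        dice = 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
        vol_a, vol_b = a.sum() * voxel_volume_mm3, b.sum() * voxel_volume_mm3
        vol_diff_pct = 100.0 * abs(vol_a - vol_b) / vol_a
        return dice, vol_diff_pct

    # Synthetic example: two slightly offset spherical "ventricular" masks on a 64^3 grid.
    z, y, x = np.mgrid[:64, :64, :64]
    ref = (x - 32)**2 + (y - 32)**2 + (z - 32)**2 <= 20**2
    recon = (x - 33)**2 + (y - 32)**2 + (z - 31)**2 <= 20**2

    dice, vdiff = dice_and_volume_diff(ref, recon)
    print(f"Dice = {dice:.3f}, volume difference = {vdiff:.1f}%")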
Automated Metrics in a Virtual-Reality Myringotomy Simulator: Development and Construct Validity.
Huang, Caiwen; Cheng, Horace; Bureau, Yves; Ladak, Hanif M; Agrawal, Sumit K
2018-06-15
The objectives of this study were: 1) to develop and implement a set of automated performance metrics into the Western myringotomy simulator, and 2) to establish construct validity. Prospective simulator-based assessment study. The Auditory Biophysics Laboratory at Western University, London, Ontario, Canada. Eleven participants were recruited from the Department of Otolaryngology-Head & Neck Surgery at Western University: four senior otolaryngology consultants and seven junior otolaryngology residents. Educational simulation. Discrimination between expert and novice participants on five primary automated performance metrics: 1) time to completion, 2) surgical errors, 3) incision angle, 4) incision length, and 5) the magnification of the microscope. Automated performance metrics were developed, programmed, and implemented into the simulator. Participants were given a standardized simulator orientation and instructions on myringotomy and tube placement. Each participant then performed 10 procedures and automated metrics were collected. The metrics were analyzed using the Mann-Whitney U test with Bonferroni correction. All metrics discriminated senior otolaryngologists from junior residents with a significance of p < 0.002. Junior residents had 2.8 times more errors compared with the senior otolaryngologists. Senior otolaryngologists took significantly less time to completion compared with junior residents. The senior group also had significantly longer incision lengths, more accurate incision angles, and lower magnification keeping both the umbo and annulus in view. Automated quantitative performance metrics were successfully developed and implemented, and construct validity was established by discriminating between expert and novice participants.
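The group comparison described above can be reproduced in outline with SciPy's Mann-Whitney U test and a Bonferroni-adjusted alpha. The scores below are simulated stand-ins for one metric; only the statistical recipe, not the data, reflects the study.

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(5)

    # Invented per-procedure scores for one metric (e.g., time to completion in seconds):
    # 4 experts x 10 procedures vs. 7 novices x 10 procedures.
    experts = rng.normal(loc=55, scale=8, size=40)
    novices = rng.normal(loc=90, scale=20, size=70)

    n_metrics = 5                       # Bonferroni correction across the five primary metrics
    alpha_corrected = 0.05 / n_metrics

    u_stat, p_value = mannwhitneyu(experts, novices, alternative="two-sided")
    print(f"U = {u_stat:.0f}, p = {p_value:.2e}, significant at corrected alpha "
          f"{alpha_corrected:.3f}: {p_value < alpha_corrected}")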
Assessing Discriminative Performance at External Validation of Clinical Prediction Models.
Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W
2016-01-01
External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
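A generic version of such a permutation test can be sketched as follows: compute the c-statistic separately in the development and validation sets, then compare the observed difference with the distribution obtained by repeatedly shuffling observations between the two sets. The data generation, model, and permutation scheme below are simplified assumptions, not the exact published test.

    import numpy as np

    rng = np.random.default_rng(11)

    def c_statistic(y, p):
        """Concordance (AUC) via the rank / Mann-Whitney formula."""
        ranks = np.argsort(np.argsort(p)) + 1.0
        n1, n0 = y.sum(), (1 - y).sum()
        return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

    def simulate_set(n, beta, rng):
        x = rng.normal(size=n)
        y = rng.binomial(1, 1 / (1 + np.exp(-beta * x)))
        return y, 1 / (1 + np.exp(-1.0 * x))     # predictions from a model with coefficient 1.0

    y_dev, p_dev = simulate_set(500, beta=1.0, rng=rng)   # development-like data
    y_val, p_val = simulate_set(500, beta=0.5, rng=rng)   # weaker predictor effects at validation

    obs_diff = c_statistic(y_dev, p_dev) - c_statistic(y_val, p_val)

    y_all, p_all = np.concatenate([y_dev, y_val]), np.concatenate([p_dev, p_val])
    n_dev, perm_diffs = len(y_dev), []
    for _ in range(1000):
        idx = rng.permutation(len(y_all))
        d, v = idx[:n_dev], idx[n_dev:]
        perm_diffs.append(c_statistic(y_all[d], p_all[d]) - c_statistic(y_all[v], p_all[v]))

    p_perm = np.mean(np.abs(perm_diffs) >= abs(obs_diff))
    print(f"observed c-statistic difference = {obs_diff:.3f}, permutation p = {p_perm:.3f}")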
Using the arthroscopic surgery skill evaluation tool as a pass-fail examination.
Koehler, Ryan J; Nicandri, Gregg T
2013-12-04
Examination of arthroscopic skill requires evaluation tools that are valid and reliable, with clear criteria for passing. The Arthroscopic Surgery Skill Evaluation Tool was developed as a video-based assessment of technical skill with criteria for passing established by a panel of experts. The purpose of this study was to test the validity and reliability of the Arthroscopic Surgery Skill Evaluation Tool as a pass-fail examination of arthroscopic skill. Twenty-eight residents and two sports medicine faculty members were recorded performing diagnostic knee arthroscopy on a left and right cadaveric specimen in our arthroscopic skills laboratory. Procedure videos were evaluated with use of the Arthroscopic Surgery Skill Evaluation Tool by two raters blind to subject identity. Subjects were considered to pass the Arthroscopic Surgery Skill Evaluation Tool when they attained scores of ≥ 3 on all eight assessment domains. The raters agreed on a pass-fail rating for fifty-five of sixty videos rated, with an intraclass correlation coefficient of 0.83. Ten of thirty participants were assigned passing scores by both raters for both diagnostic arthroscopies performed in the laboratory. Receiver operating characteristic analysis demonstrated that logging more than eighty arthroscopic cases or performing more than thirty-five arthroscopic knee cases was predictive of attaining a passing Arthroscopic Surgery Skill Evaluation Tool score on both procedures performed in the laboratory. The Arthroscopic Surgery Skill Evaluation Tool is valid and reliable as a pass-fail examination of diagnostic arthroscopy of the knee in the simulation laboratory. This study demonstrates that the Arthroscopic Surgery Skill Evaluation Tool may be a useful tool for pass-fail examination of diagnostic arthroscopy of the knee in the simulation laboratory. Further study is necessary to determine whether the Arthroscopic Surgery Skill Evaluation Tool can be used for the assessment of multiple arthroscopic procedures and whether it can be used to evaluate arthroscopic procedures performed in the operating room.
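The receiver operating characteristic step can be illustrated with a simple cutoff scan that maximizes Youden's J over candidate case-volume thresholds. The case counts and pass/fail labels below are invented; the real analysis used the study's resident data.

    import numpy as np

    # Invented data: logged arthroscopic case counts and whether the subject passed (1) or not (0).
    cases = np.array([5, 12, 20, 33, 40, 52, 60, 75, 81, 95, 110, 130])
    passed = np.array([0,  0,  0,  0,  0,  1,  0,  1,  1,  1,   1,   1])

    best = None
    for cutoff in np.unique(cases):
        pred = cases >= cutoff
        sens = (pred & (passed == 1)).sum() / (passed == 1).sum()
        spec = (~pred & (passed == 0)).sum() / (passed == 0).sum()
        j = sens + spec - 1.0                      # Youden's J statistic
        if best is None or j > best[0]:
            best = (j, cutoff, sens, spec)

    j, cutoff, sens, spec = best
    print(f"best cutoff: >= {cutoff} cases (J = {j:.2f}, "
          f"sensitivity = {sens:.2f}, specificity = {spec:.2f})")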
Ayodeji, I D; Schijven, M; Jakimowicz, J; Greve, J W
2007-09-01
The goal of our study was to determine expert and referent face validity of the LAP Mentor, the first procedural virtual reality (VR) laparoscopy trainer. In The Netherlands 49 surgeons and surgical trainees were given a hands-on introduction to the Simbionix LAP Mentor training module. Subsequently, a standardized five-point Likert-scale questionnaire was administered. Respondents who had performed over 50 laparoscopic procedures were classified as "experts." The others constituted the "referent" group, representing nonexperts such as surgical trainees. Of the experts, 90.5% (n = 21) judge themselves to be average or above-average laparoscopic surgeons, while 88.5% of referents (n = 28) feel themselves to be less-than-average laparoscopic surgeons (p = 0.000). There is agreement between both groups on all items concerning the simulator's performance and application. Respondents feel strongly about the necessity for training on basic skills before operating on patients and unanimously agree on the importance of procedural training. A large number (87.8%) of respondents expect the LAP Mentor to enhance a trainee's laparoscopic capability, 83.7% expect a shorter laparoscopic learning curve, and 67.3% even predict reduced complication rates in laparoscopic cholecystectomies among novice surgeons. The preferred stage for implementing the VR training module is during the surgeon's residency, and 59.2% of respondents feel the surgical curriculum is incomplete without VR training. Both potential surgical trainees and trainers stress the need for VR training in the surgical curriculum. Both groups believe the LAP Mentor to be a realistic VR module, with a powerful potential for training and monitoring basic laparoscopic skills as well as full laparoscopic procedures. Simulator training is perceived to be both informative and entertaining, and enthusiasm among future trainers and trainees is to be expected. Further validation of the system is required to determine whether the performance results agree with these favorable expectations.
Validation of the state version of the Self-Statement during Public Speaking Scale.
Osório, Flávia L; Crippa, José Alexandre S; Loureiro, Sonia Regina
2013-03-01
To adapt the trait version of the Self Statements during Public Speaking (SSPS) scale to a state version (SSPS-S) and to assess its discriminative validity for use in the Simulated Public Speaking Test (SPST). Subjects with and without social anxiety disorder (n = 45) were assessed while performing the SPST, a clinical-experimental model of anxiety with seven different phases. Alterations in negative self-assessment occurred with significant changes throughout the different phases of the procedure (p = .05). Non-cases presented significantly higher mean values of the SSPS-S in all phases of the procedure than cases (p < .01). Cases assessed themselves in a less positive and more negative manner during the SPST than did non-cases. SSPS-S is adequate for this assessment, especially its negative subscale, and shows good psychometric qualities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Marte
2013-12-31
This research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems (EGS). The specific objectives of the project are to: (1) develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation; (2) perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator; (3) perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport; (4) test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production; and (5) develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products include: (1) a true-3D hydro-thermal fracturing computer code that is particularly suited to EGS; (2) documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock; (3) documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications; and (4) a database of monitoring data, with a focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.
Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A
2016-11-01
To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinary students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined from hospital records of MIS procedures performed in the teaching hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessment in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion of each simulator were recorded. None of the motion metrics in the virtual reality simulator showed correlation with experience or with the basic laparoscopic skills score. All metrics in augmented reality (time, instrument path, and economy of movement) were significantly correlated with experience, except for the hand dominance metric. The basic laparoscopic skills score was correlated with all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas the basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct validity and concurrent validity for motion analysis metrics for an augmented reality system, whereas the virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.
Development of a virtual reality training curriculum for phacoemulsification surgery.
Spiteri, A V; Aggarwal, R; Kersey, T L; Sira, M; Benjamin, L; Darzi, A W; Bloom, P A
2014-01-01
Training within a proficiency-based virtual reality (VR) curriculum may reduce errors during real surgical procedures. This study used a scientific methodology to develop a VR training curriculum for phacoemulsification surgery (PS). Ten novice-(n) (performed <10 cataract operations), 10 intermediate-(i) (50-200), and 10 experienced-(e) (>500) surgeons were recruited. Construct validity was defined as the ability to differentiate between the three levels of experience, based on the simulator-derived metrics for two abstract modules (four tasks) and three procedural modules (five tasks) on a high-fidelity VR simulator. Proficiency measures were based on the performance of experienced surgeons. Abstract modules demonstrated a 'ceiling effect' with construct validity established between groups (n) and (i) but not between groups (i) and (e)-Forceps 1 (46, 87, and 95; P<0.001). Increasing difficulty of task showed significantly reduced performance in (n) but minimal difference for (i) and (e)-Anti-tremor 4 (0, 51, and 59; P<0.001), Forceps 4 (11, 73, and 94; P<0.001). Procedural modules were found to be construct valid between groups (n) and (i) and between groups (i) and (e)-Lens-cracking (0, 22, and 51; P<0.05) and Phaco-quadrants (16, 53, and 87; P<0.05). This was also the case with Capsulorhexis (0, 19, and 63; P<0.05) with the performance decreasing in the (n) and (i) group but improving in the (e) group (0, 55, and 73; P<0.05) and (0, 48, and 76; P<0.05) as task difficulty increased. Experienced/intermediate benchmark skill levels are defined allowing the development of a proficiency-based VR training curriculum for PS for novices using a structured scientific methodology.
Virtual reality lead extraction as a method for training new physicians: a pilot study.
Maytin, Melanie; Daily, Thomas P; Carillo, Roger G
2015-03-01
It is estimated that the demand for transvenous lead extraction (TLE) has reached an annual extraction rate of nearly 24,000 patients worldwide. Despite technologic advances, TLE still has the potential for significant morbidity and mortality. Complication rates with TLE directly parallel operator experience. However, obtaining adequate training during and postfellowship can be difficult. Given the potential for catastrophic complications and the steep learning curve (up to 300 cases) associated with this procedure, we sought to validate a virtual reality (VR) lead extraction simulator as an innovative training and evaluation tool for physicians new to TLE. We randomized eight electrophysiology fellows to VR simulator versus conventional training. We compared procedural skill competency between the groups using simulator competency, tactile measurements, markers of proficiency and attitudes, and cognitive abilities battery. Practical skills and simulator complications differed significantly between the VR simulator and conventional training groups. The VR simulator group executed patient preparation and procedure performance better than the conventional group (P < 0.01). All four fellows randomized to conventional training experienced a simulator complication (two superior vena cava [SVC] tears, three right ventricle [RV] avulsions) versus one fellow in the VR simulator group (one SVC tear) (P = 0.02). Tactile measurements revealed a trend toward excess pushing versus pulling forces among the conventionally trained group. The time for lead removal was also significantly higher in the conventional training group (12.46 minutes vs 5.54 minutes, P = 0.02). There was no significant difference in baseline or posttraining cognitive ability. We contend that the implementation of alternative training tools such as a VR simulation model will improve physician training and allow for an innovative pathway to assess the achievement of competency. ©2014 Wiley Periodicals, Inc.
Engineering flight and guest pilot evaluation report, phase 2. [DC 8 aircraft
NASA Technical Reports Server (NTRS)
Morrison, J. A.; Anderson, E. B.; Brown, G. W.; Schwind, G. K.
1974-01-01
Prior to the flight evaluation, the two-segment profile capabilities of the DC-8-61 were evaluated and flight procedures were developed in a flight simulator at the UA Flight Training Center in Denver, Colorado. The flight evaluation reported was conducted to determine the validity of the simulation results, further develop the procedures and use of the area navigation system in the terminal area, certify the system for line operation, and obtain evaluations of the system and procedures by a number of pilots from the industry. The full area navigation capabilities of the special equipment installed were developed to provide terminal area guidance for two-segment approaches. The objectives of this evaluation were: (1) perform an engineering flight evaluation sufficient to certify the two-segment system for the six-month in-service evaluation; (2) evaluate the suitability of a modified RNAV system for flying two-segment approaches; and (3) provide evaluation of the two-segment approach by management and line pilots.
NASA Astrophysics Data System (ADS)
Gu, J.; Bednarz, B.; Caracappa, P. F.; Xu, X. G.
2009-05-01
The latest multiple-detector technologies have further increased the popularity of x-ray CT as a diagnostic imaging modality. There is a continuing need to assess the potential radiation risk associated with such rapidly evolving multi-detector CT (MDCT) modalities and scanning protocols. This need can be met by the use of CT source models that are integrated with patient computational phantoms for organ dose calculations. To this end, this work developed and validated a model of an MDCT scanner using the Monte Carlo method and integrated pregnant patient phantoms into the scanner model to assess the dose to the fetus as well as doses to the organs and tissues of the pregnant patient. A Monte Carlo code, MCNPX, was used to simulate the x-ray source, including the energy spectrum, filter and scan trajectory. Detailed CT scanner components were specified using an iterative trial-and-error procedure for a GE LightSpeed CT scanner. The scanner model was validated by comparing simulated results against measured CTDI values and dose profiles reported in the literature. The source movement along the helical trajectory was simulated using pitches of 0.9375 and 1.375. The validated scanner model was then integrated with phantoms of a pregnant patient in three different gestational periods to calculate organ doses. It was found that the dose to the fetus of the 3 month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scan, respectively. For the chest scan of the 6 month and the 9 month patient phantoms, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. The paper also discusses how these fetal dose values can be used to evaluate imaging procedures and to assess risk using recommendations of the report from AAPM Task Group 36. This work demonstrates the ability to model and validate an MDCT scanner by the Monte Carlo method, as well as to assess fetal and organ doses by combining the MDCT scanner model and the pregnant patient phantom.
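As a simple numerical companion to the normalized fetal-dose coefficients reported above, the following Python sketch scales mGy/100 mAs values to a scan protocol. The 250 mAs tube current-time product is an assumed example value, not a setting from the study.

```python
# Illustrative only: scaling the abstract's normalized fetal-dose coefficients
# (mGy per 100 mAs) to a hypothetical scan protocol. The 250 mAs value below
# is an assumed example, not taken from the study.

FETAL_DOSE_PER_100_MAS = {          # mGy / 100 mAs, as reported in the abstract
    ("3 month", "chest"): 0.13,
    ("3 month", "kidney"): 0.57,
    ("6 month", "chest"): 0.21,
    ("9 month", "chest"): 0.26,
}

def fetal_dose(gestation: str, region: str, tube_current_time_mAs: float) -> float:
    """Return the estimated fetal dose in mGy for a given mAs setting."""
    coeff = FETAL_DOSE_PER_100_MAS[(gestation, region)]
    return coeff * tube_current_time_mAs / 100.0

if __name__ == "__main__":
    for (gestation, region) in FETAL_DOSE_PER_100_MAS:
        dose = fetal_dose(gestation, region, 250.0)
        print(f"{gestation} {region}: {dose:.3f} mGy at 250 mAs")
```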
NASA Astrophysics Data System (ADS)
Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.
2018-01-01
Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary Kriging (OK), were implemented. The four methods were first assessed through a cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and Bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods resulted in good rainfall interpolation, while the NN led to the worst result. In terms of the impact on hydrological prediction, the IDW led to streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations. The OK method performed second best according to streamflow predictions at the five gauges in the calibration period (01/01/2007–31/12/2011) and four gauges during the validation period (01/01/2012–30/06/2014). However, NN produced the worst prediction at the outlet of the catchment in the validation period, indicating low robustness. While the IDW exhibited the best performance in the study catchment in terms of accuracy, robustness and efficiency, more general recommendations on the selection of rainfall interpolation methods need to be further explored.
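For readers unfamiliar with the interpolation schemes compared above, the following minimal Python sketch implements generic inverse distance weighting for a single target location. The gauge coordinates and rainfall values are hypothetical, and this is not the authors' implementation.

```python
import numpy as np

def idw_interpolate(xy_gauges, values, xy_target, power=2.0, eps=1e-12):
    """Inverse distance weighting of gauge rainfall onto one target location.

    xy_gauges : (N, 2) array-like of gauge coordinates
    values    : (N,) array of gauge rainfall for one time step
    xy_target : (2,) target coordinate
    """
    d = np.linalg.norm(np.asarray(xy_gauges, dtype=float) - np.asarray(xy_target, dtype=float), axis=1)
    if np.any(d < eps):                     # target coincides with a gauge
        return float(values[np.argmin(d)])
    w = 1.0 / d**power                      # closer gauges get larger weights
    return float(np.sum(w * values) / np.sum(w))

# Hypothetical example: three gauges, one grid cell
gauges = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rain_mm = np.array([2.0, 5.0, 1.0])
print(idw_interpolate(gauges, rain_mm, (3.0, 4.0)))
```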
Virtual simulation as a learning method in interventional radiology.
Avramov, Predrag; Avramov, Milena; Juković, Mirela; Kadić, Vuk; Till, Viktor
2013-01-01
Radiology is the fastest growing discipline of medicine thanks to the implementation of new technologies and the very rapid development of imaging diagnostic procedures in the last few decades. On the other hand, the development of imaging diagnostic procedures has displaced the traditional way of gaining experience by working on real patients, and the need for other ways of learning interventional radiology procedures has emerged. A new method of virtual approach was added as an excellent alternative to the currently known methods of training on physical models and animals. Virtual reality represents a computer-generated reconstruction of an anatomical environment with tactile interactions, and it enables operators not only to learn from their own mistakes without compromising patient safety, but also to enhance their knowledge and experience. Studies published so far on the validity of endovascular simulators have shown certain improvements in operators' technical skills and a reduction in the time needed for the procedure, but it remains an open question whether these skills are transferable to real patients in the angio room. With further improvement of technology, the shortcomings of the virtual approach to learning interventional procedures will become less significant, and this approach is likely to become the only method of learning in the near future.
Markov Chains For Testing Redundant Software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1990-01-01
A preliminary design was developed for a validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. The approach takes into account the inertia of the controlled system, in the sense that more than one failure of the control program is needed to cause the controlled system to fail. The verification procedure consists of two steps, experimentation (numerical simulation) and computation, with a Markov model for each step.
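The following toy Python sketch illustrates, under assumed parameters, the kind of Markov-chain reasoning described above: the controlled system fails only after more than one consecutive failure of the control program. It is a schematic illustration, not the experiment's actual Markov model.

```python
import random

# Toy discrete-time Markov chain: the controlled system tolerates a single
# failed control step (inertia) and fails only after two consecutive control
# failures. The per-step failure probability and run length are assumed values.

def run_chain(p_fail=5e-3, steps=10_000, tolerance=2, rng=None):
    rng = rng or random.Random()
    consecutive = 0
    for _ in range(steps):
        if rng.random() < p_fail:
            consecutive += 1
            if consecutive >= tolerance:
                return True          # controlled system fails
        else:
            consecutive = 0          # a good control step resets the inertia
    return False

trials = 500
failures = sum(run_chain(rng=random.Random(i)) for i in range(trials))
print(f"estimated mission failure probability: {failures / trials:.3f}")
```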
Verevkin, Sergey P; Zaitsau, Dzmitry H; Emel'yanenko, Vladimir N; Schick, Christoph; Jayaraman, Saivenkataraman; Maginn, Edward J
2012-07-14
We used DSC for determination of the reaction enthalpy of the synthesis of the ionic liquid [C(4)mim][Cl]. A combination of DSC and quantum chemical calculations presents a new, indirect way to study thermodynamics of ionic liquids. The new procedure was validated with two direct experimental measurements and MD simulations.
A nonlinear dynamic finite element approach for simulating muscular hydrostats.
Vavourakis, V; Kazakidi, A; Tsakiris, D P; Ekaterinaris, J A
2014-01-01
An implicit nonlinear finite element model for simulating biological muscle mechanics is developed. The numerical method is suitable for dynamic simulations of three-dimensional, nonlinear, nearly incompressible, hyperelastic materials that undergo large deformations. These features characterise biological muscles, which consist of fibres and connective tissues. It can be assumed that the stress distribution inside the muscles is the superposition of stresses along the fibres and the connective tissues. The mechanical behaviour of the surrounding tissues is determined by adopting a Mooney-Rivlin constitutive model, while the mechanical description of fibres is considered to be the sum of active and passive stresses. Due to the nonlinear nature of the problem, evaluation of the Jacobian matrix is carried out in order to subsequently utilise the standard Newton-Raphson iterative procedure and to carry out time integration with an implicit scheme. The proposed methodology is implemented into our in-house, open source, finite element software, which is validated by comparing numerical results with experimental measurements and other numerical results. Finally, the numerical procedure is utilised to simulate primitive octopus arm manoeuvres, such as bending and reaching.
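As a generic illustration of the Newton-Raphson iteration mentioned above, the following Python sketch solves a toy one-degree-of-freedom nonlinear "stiffening spring" equilibrium problem. It is not the authors' finite element residual or Jacobian, merely the same iterative pattern applied to an assumed example.

```python
import numpy as np

def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson iteration: solve residual(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x
        dx = np.linalg.solve(jacobian(x), -r)   # linearized correction step
        x = x + dx
    raise RuntimeError("Newton-Raphson did not converge")

# Toy nonlinear equilibrium: stiffening spring k(u) = k0 + k1*u^2 under load f_ext
k0, k1, f_ext = 10.0, 4.0, 25.0
res = lambda u: np.array([(k0 + k1 * u[0] ** 2) * u[0] - f_ext])
jac = lambda u: np.array([[k0 + 3.0 * k1 * u[0] ** 2]])
print(newton_raphson(res, jac, [0.0]))   # converges to u ~ 1.41
```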
NASA Astrophysics Data System (ADS)
Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing
2015-10-01
Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is proposed based on the Newton-Raphson iteration method. The deformed configuration of the catenary system as well as the initial length of each wire can be calculated. The accuracy and validity of the computed initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation and other methods in the previous literature. Then, the proposed model is combined with a lumped pantograph model and a dynamic simulation procedure is proposed. The accuracy is guaranteed by multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, results from finite element method software and a SIEMENS simulation report. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed by using the proposed model.
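For orientation, the sketch below evaluates the textbook catenary sag relation y(x) = (H/w)(cosh(wx/H) − 1) for an assumed horizontal tension and weight per unit length. It is only a stand-in for the notion of an initial equilibrium configuration, not the authors' nonlinear cable element or actual contact-line parameters.

```python
import numpy as np

# Textbook catenary as a stand-in for an initial equilibrium configuration:
# a wire of weight w per unit length under horizontal tension H sags as
# y(x) = (H / w) * (cosh(w * x / H) - 1). The values below are assumed
# examples, not parameters of an actual high-speed catenary design.

def catenary_sag(x, H=20_000.0, w=10.7):
    return (H / w) * (np.cosh(w * x / H) - 1.0)

span = 50.0                               # m, assumed distance between supports
x = np.linspace(-span / 2, span / 2, 5)
print(dict(zip(x.round(1), catenary_sag(x).round(4))))   # height above the lowest point, in metres
```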
NASA Technical Reports Server (NTRS)
Bartlett, R. G.; Hendricks, C. M.; Morison, W. B.
1972-01-01
The development of a breathing metabolic simulator (BMS) is reported. This BMS simulates all of the breathing and metabolic parameters required for complete evaluation and testing of life support and resuscitation equipment. It is also useful for calibrating and validating mechanical and gaseous pulmonary function test procedures. Breathing rate, breathing depth, breath velocity contour, oxygen uptake, and carbon dioxide release are all variable over wide ranges, simulating conditions from sleep to hard work, with respiratory exchange ratios covering the range from hypoventilation to hyperventilation. In addition, all of these parameters are remotely controllable to facilitate use of the device in hostile or remote environments. The exhaled breath is also maintained at body temperature and high humidity. The simulation is accurate to the extent of having a variable functional residual capacity independent of other parameters.
Construct validity of the LapVR virtual-reality surgical simulator.
Iwata, Naoki; Fujiwara, Michitaka; Kodera, Yasuhiro; Tanaka, Chie; Ohashi, Norifumi; Nakayama, Goro; Koike, Masahiko; Nakao, Akimasa
2011-02-01
Laparoscopic surgery requires fundamental skills peculiar to endoscopic procedures such as eye-hand coordination. Acquisition of such skills prior to performing actual surgery is highly desirable for favorable outcome. Virtual-reality simulators have been developed for both surgical training and assessment of performance. The aim of the current study is to show construct validity of a novel simulator, LapVR (Immersion Medical, San Jose, CA, USA), for Japanese surgeons and surgical residents. Forty-four subjects were divided into the following three groups according to their experience in laparoscopic surgery: 14 residents (RE) with no experience in laparoscopic surgery, 14 junior surgeons (JR) with little experience, and 16 experienced surgeons (EX). All subjects executed "essential task 1" programmed in the LapVR, which consists of six tasks, resulting in automatic measurement of 100 parameters indicating various aspects of laparoscopic skills. Time required for each task tended to be inversely correlated with experience in laparoscopic surgery. For the peg transfer skill, statistically significant differences were observed between EX and RE in three parameters, including total time and average time taken to complete the procedure and path length for the nondominant hand. For the cutting skill, similar differences were observed between EX and RE in total time, number of unsuccessful cutting attempts, and path length for the nondominant hand. According to the programmed comprehensive evaluation, performance in terms of successful completion of the task and actual experience of the participants in laparoscopic surgery correlated significantly for the peg transfer (P=0.007) and cutting skills (P=0.026). The peg transfer and cutting skills could best distinguish between EX and RE. This study is the first to provide evidence that LapVR has construct validity to discriminate between novice and experienced laparoscopic surgeons.
Lichtenberger, John P; Tatum, Peter S; Gada, Satyen; Wyn, Mark; Ho, Vincent B; Liacouras, Peter
2018-03-01
This work describes customized, task-specific simulation models derived from 3D printing in clinical settings and medical professional training programs. Simulation models and task trainers have an array of purposes and desired achievements for the trainee, and defining these is the first step in the production process. After this purpose is defined, computer-aided design and 3D printing (additive manufacturing) are used to create a customized anatomical model. Simulation models then undergo initial in-house testing by medical specialists, followed by larger scale beta testing. Feedback is acquired via surveys to validate effectiveness and to guide or determine whether any future modifications and/or improvements are necessary. Numerous custom simulation models have been successfully completed, with the resulting task trainers designed for procedures including removal of ocular foreign bodies, ultrasound-guided joint injections, nerve block injections, and various suturing and reconstruction procedures. These task trainers have been frequently utilized in the delivery of simulation-based training, with increasing demand. 3D printing has been integral to the production of limited-quantity, low-cost simulation models across a variety of medical specialties. In general, production cost is a small fraction of that of a commercial, generic simulation model, if one is available. These simulation and training models are customized to the educational need and serve an integral role in the education of our military health professionals.
A baseline-free procedure for transformation models under interval censorship.
Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin
2005-12-01
An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures available so far involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean-zero martingale is provided.
Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment
NASA Astrophysics Data System (ADS)
Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.
2017-03-01
Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper deals with a heuristic approach to lot streaming based on critical machine considerations for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine. The second-stage machine is, for valid reasons, considered critical, and this kind of problem is known to be NP-hard. A mathematical model was developed for the selected problem. The simulation modelling and analysis were carried out in Extend V6 software. A heuristic was developed for obtaining an optimal lot streaming schedule. Eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments. All possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule in all eleven cases. An identification procedure for selecting the best lot streaming strategy was also suggested.
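A simplified Python sketch of the makespan bookkeeping for such a two-stage hybrid flowshop (two identical parallel machines at stage 1, one critical machine at stage 2) is given below. The lot size, per-unit processing times, and sublot splits are assumed example values, and the dispatching rule (earliest-available machine, sublots in fixed order) is a simplification, not the paper's heuristic.

```python
def makespan(sublots, p1, p2):
    """Makespan of a two-stage hybrid flowshop (2 parallel machines at stage 1,
    one critical machine at stage 2) for a lot split into the given sublots.

    sublots : list of sublot sizes (units), processed in the given order
    p1, p2  : per-unit processing times at stage 1 and stage 2
    """
    stage1 = [0.0, 0.0]                 # next-free times of the two stage-1 machines
    stage2_free = 0.0                   # next-free time of the critical stage-2 machine
    for q in sublots:
        m = min(range(2), key=lambda i: stage1[i])   # earliest-available stage-1 machine
        stage1[m] += q * p1                           # sublot finishes stage 1
        start2 = max(stage1[m], stage2_free)          # wait for the critical machine
        stage2_free = start2 + q * p2
    return stage2_free

# Hypothetical example: a lot of 100 units, p1 = 2 and p2 = 3 time units per piece
print(makespan([100], 2.0, 3.0))            # no streaming           -> 500.0
print(makespan([25, 25, 25, 25], 2.0, 3.0)) # four equal sublots     -> 350.0
```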
Willaert, Willem I M; Cheshire, Nicholas J; Aggarwal, Rajesh; Van Herzeele, Isabelle; Stansby, Gerard; Macdonald, Sumaira; Vermassen, Frank E
2012-12-01
Carotid artery stenting (CAS) is a technically demanding procedure with a risk of periprocedural stroke. A scoring system based on anatomic criteria has been developed to facilitate patient selection for CAS. Advancements in simulation science also enable case evaluation through patient-specific virtual reality (VR) rehearsal on an endovascular simulator. This study aimed to validate the anatomic scoring system for CAS using the patient-specific VR technology. Three patients were selected and graded according to the CAS scoring system (maximum score, 9): one easy (score, <4.9), one intermediate (score, 5.0-5.9), and one difficult (score, >7.0). The three cases were performed on the simulator in random order by 20 novice interventionalists pretrained in CAS. Technical performances were assessed using simulator-based metrics and expert-based ratings. The interventionalists took significantly longer to perform the difficult CAS case (median, 31.6 vs 19.7 vs 14.6 minutes; P<.0001) compared with the intermediate and easy cases; similarly, more fluoroscopy time (20.7 vs 12.1 vs 8.2 minutes; P<.0001), contrast volume (56.5 vs 51.5 vs 50.0 mL; P=.0060), and roadmaps (10 vs 9 vs 9; P=.0040) were used. The quality of performance declined significantly as the cases became more challenging (score, 24 vs 22 vs 19; P<.0001). The anatomic scoring system for CAS can predict the difficulty of a CAS procedure as measured by patient-specific VR. This scoring system, with or without the additional use of patient-specific VR, can guide novice interventionalists in selecting appropriate patients for CAS. This may reduce the perioperative stroke risk and enhance patient safety. Copyright © 2012 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.
Advanced Space Shuttle simulation model
NASA Technical Reports Server (NTRS)
Tatom, F. B.; Smith, S. R.
1982-01-01
A non-recursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed. It provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center of gravity, and also for simulation of instantaneous gust gradients. Based on this model, time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes, entitled Shuttle Simulation Turbulence Tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the turbulence generation procedure is provided, and the results of validating the simulated turbulence are described. Conclusions and recommendations are presented. One-dimensional von Karman spectra are tabulated, and the minimum frequency simulated is discussed. The results of spectral and statistical analyses of the SSTT are presented.
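The sketch below shows one common non-recursive way to generate a gust time series with a prescribed spectrum: inverse-FFT synthesis with random phases, using the standard longitudinal von Kármán form. The turbulence intensity, length scale, and airspeed are assumed values, and this is not the procedure used to create the SSTT.

```python
import numpy as np

rng = np.random.default_rng(0)

def von_karman_psd(f, sigma=1.0, L=533.0, V=100.0):
    """One-sided longitudinal von Karman-type gust PSD in (m/s)^2/Hz.
    sigma: rms gust intensity, L: turbulence length scale (m), V: airspeed (m/s).
    The integral over all positive frequencies equals sigma**2."""
    return sigma**2 * (4.0 * L / V) / (1.0 + (1.339 * 2.0 * np.pi * f * L / V) ** 2) ** (5.0 / 6.0)

def synthesize(n=2**15, dt=0.05):
    """Non-recursive spectral synthesis: inverse FFT of random-phase Fourier
    coefficients whose magnitudes follow the prescribed one-sided PSD."""
    f = np.fft.rfftfreq(n, dt)
    mag = np.zeros_like(f)
    mag[1:] = np.sqrt(von_karman_psd(f[1:]) * n / (2.0 * dt))   # zero-mean gust (no DC)
    phase = rng.uniform(0.0, 2.0 * np.pi, f.size)
    gust = np.fft.irfft(mag * np.exp(1j * phase), n)
    return np.arange(n) * dt, gust

t, gust = synthesize()
print(f"simulated rms gust intensity: {gust.std():.3f} m/s (target sigma = 1.0)")
```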
Simulation-To-Flight (STF-1): A Mission to Enable CubeSat Software-Based Validation and Verification
NASA Technical Reports Server (NTRS)
Morris, Justin; Zemerick, Scott; Grubb, Matt; Lucas, John; Jaridi, Majid; Gross, Jason N.; Ohi, Nicholas; Christian, John A.; Vassiliadis, Dimitris; Kadiyala, Anand;
2016-01-01
The Simulation-to-Flight 1 (STF-1) CubeSat mission aims to demonstrate how legacy simulation technologies may be adapted for flexible and effective use on missions using the CubeSat platform. These technologies, named the NASA Operational Simulator (NOS), have demonstrated significant value on several missions, such as the James Webb Space Telescope, Global Precipitation Measurement, Juno, and the Deep Space Climate Observatory, in the areas of software development, mission operations/training, verification and validation (V&V), test procedure development, and software systems check-out. STF-1 will demonstrate a highly portable simulation and test platform that allows seamless transition of mission development artifacts to flight products. This environment will decrease the development time of future CubeSat missions by lessening the dependency on hardware resources. In addition, through a partnership between NASA GSFC, the West Virginia Space Grant Consortium, and West Virginia University, the STF-1 CubeSat will host payloads for three secondary objectives: to advance research on navigation systems for small satellites, to provide useful data for understanding magnetosphere-ionosphere coupling and space weather, and to verify the performance and durability of III-V nitride-based materials.
TAS: A Transonic Aircraft/Store flow field prediction code
NASA Technical Reports Server (NTRS)
Thompson, D. S.
1983-01-01
A numerical procedure has been developed that has the capability to predict the transonic flow field around an aircraft with an arbitrarily located, separated store. The TAS code, the product of a joint General Dynamics/NASA ARC/AFWAL research and development program, will serve as the basis for a comprehensive predictive method for aircraft with arbitrary store loadings. This report describes the numerical procedures employed to simulate the flow field around a configuration of this type. The validity of TAS code predictions is established by comparison with existing experimental data. In addition, future areas of development of the code are outlined, and a brief description of code utilization is given in the Appendix. The aircraft/store configuration is simulated using a mesh embedding approach. The computational domain is discretized by three meshes: (1) a planform-oriented wing/body fine mesh, (2) a cylindrical store mesh, and (3) a global Cartesian crude mesh. This embedded mesh scheme enables simulation of stores with fins of arbitrary angular orientation.
NASA Astrophysics Data System (ADS)
Kreyca, J. F.; Falahati, A.; Kozeschnik, E.
2016-03-01
For industry, the mechanical properties of a material in form of flow curves are essential input data for finite element simulations. Current practice is to obtain flow curves experimentally and to apply fitting procedures to obtain constitutive equations that describe the material response to external loading as a function of temperature and strain rate. Unfortunately, the experimental procedure for characterizing flow curves is complex and expensive, which is why the prediction of flow-curves by computer modelling becomes increasingly important. In the present work, we introduce a state parameter based model that is capable of predicting the flow curves of an A6061 aluminium alloy in different heat-treatment conditions. The model is implemented in the thermo-kinetic software package MatCalc and takes into account precipitation kinetics, subgrain formation, dynamic recovery by spontaneous annihilation and dislocation climb. To validate the simulation results, a series of compression tests is performed on the thermo-mechanical simulator Gleeble 1500.
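As an illustration of the conventional flow-curve fitting step described above, the following Python sketch fits a simple Ludwik-type law sigma = sigma0 + K*eps**n to synthetic stress-strain points. The data and parameters are invented for the example; they are not Gleeble measurements or MatCalc output.

```python
import numpy as np
from scipy.optimize import curve_fit

# A minimal sketch of the conventional fitting step: a Ludwik-type constitutive
# law fitted to a flow curve. The "measurements" below are synthetic points
# with added noise, not experimental data from the study.

def ludwik(eps, sigma0, K, n):
    return sigma0 + K * eps**n

rng = np.random.default_rng(42)
eps = np.linspace(0.01, 0.3, 30)
sigma_true = ludwik(eps, 60.0, 300.0, 0.35)                   # assumed "true" parameters (MPa)
sigma_meas = sigma_true + rng.normal(0.0, 2.0, eps.size)      # synthetic measurement noise

popt, pcov = curve_fit(ludwik, eps, sigma_meas, p0=(50.0, 200.0, 0.3))
print("fitted sigma0, K, n:", np.round(popt, 3))
```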
A stochastic vortex structure method for interacting particles in turbulent shear flows
NASA Astrophysics Data System (ADS)
Dizaji, Farzad F.; Marshall, Jeffrey S.; Grant, John R.
2018-01-01
In a recent study, we have proposed a new synthetic turbulence method based on stochastic vortex structures (SVSs), and we have demonstrated that this method can accurately predict particle transport, collision, and agglomeration in homogeneous, isotropic turbulence in comparison to direct numerical simulation results. The current paper extends the SVS method to non-homogeneous, anisotropic turbulence. The key element of this extension is a new inversion procedure, by which the vortex initial orientation can be set so as to generate a prescribed Reynolds stress field. After validating this inversion procedure for simple problems, we apply the SVS method to the problem of interacting particle transport by a turbulent planar jet. Measures of the turbulent flow and of particle dispersion, clustering, and collision obtained by the new SVS simulations are shown to compare well with direct numerical simulation results. The influence of different numerical parameters, such as number of vortices and vortex lifetime, on the accuracy of the SVS predictions is also examined.
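Since the inversion procedure targets a prescribed Reynolds stress field, the following minimal Python sketch shows how the Reynolds stress tensor R_ij = <u_i' u_j'> would be estimated from velocity samples when verifying such a prescription. The anisotropic samples are synthetic, not SVS output.

```python
import numpy as np

def reynolds_stress(u):
    """Reynolds stress tensor R_ij = <u_i' u_j'> from velocity samples.
    u : (N, 3) array of velocity samples at one point."""
    fluct = u - u.mean(axis=0)            # subtract the mean to get fluctuations
    return fluct.T @ fluct / u.shape[0]

# Hypothetical anisotropic samples: streamwise fluctuations twice as intense
rng = np.random.default_rng(3)
samples = rng.normal(0.0, [2.0, 1.0, 1.0], size=(200_000, 3))
print(np.round(reynolds_stress(samples), 3))   # ~diag(4, 1, 1), off-diagonals ~0
```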
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates sensitivity using partial correlation of the ranks of the generated input values to each generated output value; the correlation is "partial" because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of losses of crew life (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
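A generic Python sketch of the PRCC computation described above (rank-transform, regress out the other inputs, correlate the residuals) is shown below on a hypothetical three-input model; it is not IMM code.

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y.
    X : (n_samples, n_inputs), y : (n_samples,). Generic sketch, not IMM code."""
    R = np.column_stack([rankdata(col) for col in X.T])   # rank-transform inputs
    ry = rankdata(y)                                      # rank-transform output
    out = []
    for j in range(R.shape[1]):
        others = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        # residuals after removing the linear effect of the other ranked inputs
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

# Hypothetical nonlinear model with three inputs; only the first two matter
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (5000, 3))
y = np.exp(3.0 * X[:, 0]) - 2.0 * X[:, 1] + 0.01 * rng.normal(size=5000)
print(np.round(prcc(X, y), 2))   # strongly positive for input 0, negative for 1, ~0 for 2
```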
Is there inter-procedural transfer of skills in intraocular surgery? A randomized controlled trial.
Thomsen, Ann Sofia Skou; Kiilgaard, Jens Folke; la Cour, Morten; Brydges, Ryan; Konge, Lars
2017-12-01
To investigate how experience in simulated cataract surgery impacts and transfers to the learning curves for novices in vitreoretinal surgery. Twelve ophthalmology residents without previous experience in intraocular surgery were randomized to (1) intensive training in cataract surgery on a virtual-reality simulator until passing a test with predefined validity evidence (cataract trainees) or to (2) no cataract surgery training (novices). Possible skill transfer was assessed using a test consisting of all 11 vitreoretinal modules on the EyeSi virtual-reality simulator. All participants repeated the test of vitreoretinal surgical skills until their performance curve plateaued. Three experienced vitreoretinal surgeons also performed the test to establish validity evidence. Analysis with independent samples t-tests was performed. The vitreoretinal test on the EyeSi simulator demonstrated evidence of validity, given statistically significant differences in mean test scores for the first repetition; experienced surgeons scored higher than novices (p = 0.023) and cataract trainees (p = 0.003). Internal consistency for the 11 modules of the test was acceptable (Cronbach's α = 0.73). Our findings did not indicate a transfer effect with no significant differences found between cataract trainees and novices in their starting scores (mean ± SD 381 ± 129 points versus 455 ± 82 points, p = 0.262), time to reach maximum performance level (10.7 ± 3.0 hr versus 8.7 ± 2.8 hr, p = 0.265), or maximum scores (785 ± 162 points versus 805 ± 73 points, p = 0.791). Pretraining in cataract surgery did not demonstrate any measurable effect on vitreoretinal procedural performance. The results of this study indicate that we should not anticipate extensive transfer of surgical skills when planning training programmes in intraocular surgery. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Zhang, Xiao C; Bermudez, Ana M; Reddy, Pranav M; Sarpatwari, Ravi R; Chheng, Darin B; Mezoian, Taylor J; Schwartz, Victoria R; Simmons, Quinneil J; Jay, Gregory D; Kobayashi, Leo
2017-03-01
A stable and readily accessible work surface for bedside medical procedures represents a valuable tool for acute care providers. In emergency department (ED) settings, the design and implementation of traditional Mayo stands and related surface devices often limit their availability, portability, and usability, which can lead to suboptimal clinical practice conditions that may affect the safe and effective performance of medical procedures and delivery of patient care. We designed and built a novel, open-source, portable, bedside procedural surface through an iterative development process with use testing in simulated and live clinical environments. The procedural surface development project was conducted between October 2014 and June 2016 at an academic referral hospital and its affiliated simulation facility. An interdisciplinary team of emergency physicians, mechanical engineers, medical students, and design students sought to construct a prototype bedside procedural surface out of off-the-shelf hardware during a collaborative university course on health care design. After determination of end-user needs and core design requirements, multiple prototypes were fabricated and iteratively modified, with early variants featuring undermattress stabilizing supports or ratcheting clamp mechanisms. Versions 1 through 4 underwent 2 hands-on usability-testing simulation sessions; version 5 was presented at a design critique held jointly by a panel of clinical and industrial design faculty for expert feedback. Responding to select feedback elements over several surface versions, investigators arrived at a near-final prototype design for fabrication and use testing in a live clinical setting. This experimental procedural surface (version 8) was constructed and then deployed for controlled usability testing against the standard Mayo stands in use at the study site ED. Clinical providers working in the ED who opted to participate in the study were provided with the prototype surface and just-in-time training on its use when performing bedside procedures. Subjects completed the validated 10-point System Usability Scale postshift for the surface that they had used. The study protocol was approved by the institutional review board. Multiple prototypes and recursive design revisions resulted in a fully functional, portable, and durable bedside procedural surface that featured a stainless steel tray and intuitive hook-and-lock mechanisms for attachment to ED stretcher bed rails. Forty-two control and 40 experimental group subjects participated and completed questionnaires. The median System Usability Scale score (out of 100; higher scores associated with better usability) was 72.5 (interquartile range [IQR] 51.3 to 86.3) for the Mayo stand; the experimental surface was scored at 93.8 (IQR 84.4 to 97.5), for a difference in medians of 17.5 (95% confidence interval 10 to 27.5). Subjects reported several usability challenges with the Mayo stand; the experimental surface was reviewed as easy to use, simple, and functional. Based on the live-environment deployment, questionnaire responses, and end-user suggestions, the project team finalized the design specification for the experimental procedural surface for open dissemination. An iterative, interdisciplinary approach was used to generate, evaluate, revise, and finalize the design specification for a new procedural surface that met all core end-user requirements.
The final surface design was evaluated favorably on a validated usability tool against Mayo stands when use tested in simulated and live clinical settings. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Investigation of radiative interaction in laminar flows using Monte Carlo simulation
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is employed to study the radiative interactions in fully developed laminar flow between two parallel plates. Taking advantage of the characteristics of easy mathematical treatment of the MCM, a general numerical procedure is developed for nongray radiative interaction. The nongray model is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. To validate the Monte Carlo simulation for nongray radiation problems, the results of radiative dissipation from the MCM are compared with two available solutions for a given temperature profile between two plates. After this validation, the MCM is employed to solve the present physical problem and results for the bulk temperature are compared with available solutions. In general, good agreement is noted and reasons for some discrepancies in certain ranges of parameters are explained.
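As a minimal illustration of Monte Carlo radiative transport between parallel surfaces, the sketch below estimates a gray view factor between two coaxial square plates by tracing diffusely emitted rays. It does not implement the nongray statistical narrow band model used in the study, and the geometry is an assumed example.

```python
import numpy as np

# Minimal Monte Carlo radiation illustration: estimate the view factor between
# two coaxial parallel square plates by tracing diffusely emitted rays from the
# lower plate. Plate size and gap are assumed example values.

rng = np.random.default_rng(1)

def view_factor(side=1.0, gap=1.0, n_rays=1_000_000):
    # Uniform emission points on the lower plate
    x0 = rng.uniform(-side / 2, side / 2, n_rays)
    y0 = rng.uniform(-side / 2, side / 2, n_rays)
    # Cosine-weighted (diffuse) emission directions: sin(theta) = sqrt(U)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_rays)
    sin_t = np.sqrt(rng.uniform(0.0, 1.0, n_rays))
    cos_t = np.sqrt(1.0 - sin_t**2)
    # Intersection with the plane of the upper plate (z = gap)
    t = gap / cos_t
    x1, y1 = x0 + t * sin_t * np.cos(phi), y0 + t * sin_t * np.sin(phi)
    hit = (np.abs(x1) <= side / 2) & (np.abs(y1) <= side / 2)
    return hit.mean()

print(f"estimated view factor: {view_factor():.4f}")   # ~0.20 for unit squares, unit gap
```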
Patient-specific cardiac phantom for clinical training and preprocedure surgical planning.
Laing, Justin; Moore, John; Vassallo, Reid; Bainbridge, Daniel; Drangova, Maria; Peters, Terry
2018-04-01
Minimally invasive mitral valve repair procedures, including MitraClip®, are becoming increasingly common. For cases of complex or diseased anatomy, clinicians may benefit from using a patient-specific cardiac phantom for training, surgical planning, and the validation of devices or techniques. An imaging-compatible cardiac phantom was developed to simulate a MitraClip® procedure. The phantom contained a patient-specific cardiac model manufactured using tissue-mimicking materials. To evaluate accuracy, the patient-specific model was imaged using computed tomography (CT), segmented, and the resulting point cloud dataset was compared to the original patient data using absolute distance. Comparing the molded model point cloud to the original dataset yielded a maximum Euclidean distance error of 7.7 mm, an average error of 0.98 mm, and a standard deviation of 0.91 mm. The phantom was validated using a MitraClip® device to ensure anatomical features and tools are identifiable under image guidance. Patient-specific cardiac phantoms may allow surgical complications to be accounted for in preoperative planning. The information gained by clinicians involved in planning and performing the procedure should lead to shorter procedural times and better outcomes for patients.
Cloud cover determination in polar regions from satellite imagery
NASA Technical Reports Server (NTRS)
Barry, R. G.; Key, J. R.; Maslanik, J. A.
1988-01-01
The principal objectives of this project are: to develop suitable validation data sets to evaluate the effectiveness of the ISCCP operational algorithm for cloud retrieval in polar regions and to validate model simulations of polar cloud cover; to identify limitations of current procedures for varying atmospheric surface conditions, and to explore potential means to remedy them using textural classifiers; and to compare synoptic cloud data from a control run experiment of the Goddard Institute for Space Studies (GISS) climate model 2 with typical observed synoptic cloud patterns. Current investigations underway are listed and the progress made to date is summarized.
NASA Technical Reports Server (NTRS)
Lee, Sangsan; Lele, Sanjiva K.; Moin, Parviz
1992-01-01
For the numerical simulation of inhomogeneous turbulent flows, a method is developed for generating stochastic inflow boundary conditions with a prescribed power spectrum. Turbulence statistics from spatial simulations using this method with a low fluctuation Mach number are in excellent agreement with the experimental data, which validates the procedure. Turbulence statistics from spatial simulations are also compared to those from temporal simulations using Taylor's hypothesis. Statistics such as turbulence intensity, vorticity, and velocity derivative skewness compare favorably with the temporal simulation. However, the statistics of dilatation show a significant departure from those obtained in the temporal simulation. To directly check the applicability of Taylor's hypothesis, space-time correlations of fluctuations in velocity, vorticity, and dilatation are investigated. Convection velocities based on vorticity and velocity fluctuations are computed as functions of the spatial and temporal separations. The profile of the space-time correlation of dilatation fluctuations is explained via a wave propagation model.
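A minimal Python sketch of estimating a convection velocity from the space-time correlation between two streamwise locations, in the spirit of the Taylor's-hypothesis check above, is given below. The advected signal is synthetic, not data from the spatial simulations.

```python
import numpy as np

# Synthetic two-probe signals: the downstream probe sees the same smoothed
# random pattern as the upstream probe, delayed by dx / u_true. All values
# are assumed for illustration.
rng = np.random.default_rng(2)
dt, dx, u_true = 0.01, 0.5, 5.0                # sample time (s), probe spacing (m), advection speed (m/s)
base = np.convolve(rng.normal(size=20_000), np.ones(50) / 50, mode="same")
lag_true = int(round(dx / u_true / dt))        # = 10 samples
upstream = base[lag_true:]                     # signal at the upstream probe
downstream = base[:-lag_true]                  # same pattern arrives lag_true samples later

def convection_velocity(up, down, dt, dx, max_lag=100):
    """Convection velocity from the time lag that maximizes the cross-correlation."""
    lags = np.arange(1, max_lag)
    corr = [np.corrcoef(up[:-k], down[k:])[0, 1] for k in lags]
    tau = lags[int(np.argmax(corr))] * dt
    return dx / tau

print(f"estimated convection velocity: {convection_velocity(upstream, downstream, dt, dx):.2f} m/s")
```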
Gettman, Matthew T; Pereira, Claudio W; Lipsky, Katja; Wilson, Torrence; Arnold, Jacqueline J; Leibovich, Bradley C; Karnes, R Jeffrey; Dong, Yue
2009-03-01
Structured opportunities for learning communication, teamwork and laparoscopic principles are limited for urology residents. We evaluated and taught teamwork, communication and laparoscopic skills to urology residents in a simulated operating room. Scenarios related to laparoscopy (insufflator failure, carbon dioxide embolism) were developed using mannequins, urology residents and nurses. These scenarios were developed based on Accreditation Council for Graduate Medical Education core competencies and performed in a simulation center. Between the pretest scenario (insufflation failure) and the posttest scenario (carbon dioxide embolism) instruction was given on teamwork, communication and laparoscopic skills. A total of 19 urology residents participated in the training that involved participation in at least 2 scenarios. Performance was evaluated using validated teamwork instruments, questionnaires and videotape analysis. Significant improvement was noted on validated teamwork instruments between scenarios based on resident (pretest 24, posttest 27, p = 0.01) and expert (pretest 16, posttest 25, p = 0.008) evaluation. Increased teamwork and team performance were also noted between scenarios on videotape analysis with significant improvement for adherence to best practice (p = 0.01) and maintenance of positive rapport among team members (p = 0.02). Significant improvement in the setup of the laparoscopic procedure was observed (p = 0.01). Favorable face and content validity was noted for both scenarios. Teamwork, intraoperative communication and laparoscopic skills of urology residents improved during the high fidelity simulation course. Face and content validity of the individual sessions was favorable. In this study high fidelity simulation was effective for assessing and teaching Accreditation Council for Graduate Medical Education core competencies related to intraoperative communication, teamwork and laparoscopic skills.
von Barnekow, Ariel; Bonet-Codina, Núria; Tost, Dani
2017-03-23
To investigate whether 3D gamified simulations can be valid vocational training tools for persons with intellectual disability. A 3D gamified simulation composed of a set of training tasks for cleaning in hostelry was developed in collaboration with professionals of a real hostel and pedagogues of a special needs school. The learning objectives focus on the acquisition of vocabulary skills, work procedures, social abilities and risk prevention. Several accessibility features were developed to make the tasks easy to do from a technological point of view. A pilot experiment was conducted to test the pedagogical efficacy of this tool on intellectually disabled workers and students. User scores in the gamified simulation followed a steadily increasing progression. When confronted with reality, participants recognized the scenario and tried to reproduce what they had learned in the simulation. Finally, they were interested in the tool, showed a strong feeling of immersion and engagement, and reported having fun. On the basis of this experiment, we believe that 3D gamified simulations can be efficient tools for training the social and professional skills of persons with intellectual disabilities, contributing to foster their social inclusion through work.
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
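The sketch below contrasts the first-order approximate moment method with a Monte Carlo reference on an assumed analytic function with independent normal inputs; the real study instead uses sensitivity derivatives from a quasi 3-D Euler CFD code.

```python
import numpy as np

# Minimal sketch: first-order moment propagation versus Monte Carlo for an
# assumed analytic "output" f(x) with independent normal inputs.

def f(x):
    return x[0] ** 2 * np.sin(x[1]) + 3.0 * x[2]

def grad_f(x):
    return np.array([2.0 * x[0] * np.sin(x[1]), x[0] ** 2 * np.cos(x[1]), 3.0])

mu = np.array([1.5, 0.8, 2.0])          # input means (assumed)
sigma = np.array([0.05, 0.02, 0.10])    # input standard deviations (assumed)

# First-order approximation: var(f) ~= sum_i (df/dx_i)^2 * sigma_i^2
g = grad_f(mu)
var_first_order = np.sum((g * sigma) ** 2)

# Monte Carlo reference
rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=(200_000, 3))
var_mc = np.var(f(samples.T))

print(f"first-order std: {np.sqrt(var_first_order):.4f}, Monte Carlo std: {np.sqrt(var_mc):.4f}")
```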
Stochastic Petri Net extension of a yeast cell cycle model.
Mura, Ivan; Csikász-Nagy, Attila
2008-10-21
This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic model and the experimental results available in the literature. The SPN model captures the behavior of wild-type budding yeast cells and a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be found with the deterministic model. The SPN model refines the simulation results, enriching the breadth and quality of the model's output.
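As a rough illustration of the ODE-to-stochastic conversion mentioned above, the sketch below recasts a toy synthesis/degradation ODE as two stochastic events and simulates it with Gillespie's algorithm; for a net this simple the SPN semantics and the stochastic simulation algorithm coincide. All parameters are invented and are not taken from the yeast cell cycle model.

```python
# Toy conversion: the deterministic ODE  dX/dt = k1 - k2*X  becomes two
# stochastic events (synthesis, degradation) simulated exactly.
import numpy as np

k1, k2 = 20.0, 0.1      # synthesis (molecules/min) and degradation (1/min)
omega = 1.0             # system size scaling concentrations to molecule counts

def gillespie(x0, t_end, rng):
    t, x = 0.0, x0
    ts, xs = [t], [x]
    while t < t_end:
        rates = np.array([k1 * omega, k2 * x])   # propensities of the two events
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)        # time to next event
        if rng.random() < rates[0] / total:
            x += 1                                # synthesis fires
        else:
            x -= 1                                # degradation fires
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

rng = np.random.default_rng(1)
ts, xs = gillespie(x0=0, t_end=200.0, rng=rng)
print("deterministic steady state:", k1 / k2)
print("stochastic mean over last half of run:", xs[len(xs) // 2:].mean())
```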
Retention of laparoscopic procedural skills acquired on a virtual-reality surgical trainer.
Maagaard, Mathilde; Sorensen, Jette Led; Oestergaard, Jeanett; Dalsgaard, Torur; Grantcharov, Teodor P; Ottesen, Bent S; Larsen, Christian Rifbjerg
2011-03-01
Virtual-reality (VR) simulator training has been shown to improve surgical performance in laparoscopic procedures in the operating room. We have, in a randomised controlled trial, demonstrated transferability to real operations. The validity of the LapSim virtual-reality simulator as an assessment tool has been demonstrated in several reports. However, an unanswered question regarding simulator training is the durability, or retention, of skills acquired during simulator training. The aim of the present study is to assess the retention of skills acquired using the LapSim VR simulator, 6 and 18 months after an initial training course. The investigation was designed as a 6- and 18-month follow-up on a cohort of participants who earlier participated in a skills training programme on the LapSim VR. The follow-up cohort consisted of trainees and senior consultants allocated to two groups: (1) novices (experience < 5 procedures, n = 9) and (2) experts (experience > 200 procedures during the past 3 years, n = 10). Each participant performed ten sessions. Assessment of skills was based on time, economy of movement and the error parameter "bleeding". The novice group was re-tested after 6 and 18 months, whereas the expert group was only re-tested once, after 6 months. None of the novices performed laparoscopic surgery in the follow-up period. The experts continued their daily work with laparoscopic surgery. Novices showed retention of skills after 6 months. After 18 months, novices' laparoscopic skills had returned to the pre-training level. This suggests that laparoscopic skills deteriorate in the period between 6 and 18 months without training. Experts showed consistent performance over time. This information can be included when planning training curricula in minimally invasive surgery.
VCSim3: a VR simulator for cardiovascular interventions.
Korzeniowski, Przemyslaw; White, Ruth J; Bello, Fernando
2018-01-01
Effective and safe performance of cardiovascular interventions requires excellent catheter/guidewire manipulation skills. These skills are currently mainly gained through an apprenticeship on real patients, which may not be safe or cost-effective. Computer simulation offers an alternative for core skills training. However, replicating the physical behaviour of real instruments navigated through blood vessels is a challenging task. We have developed VCSim3, a virtual reality simulator for cardiovascular interventions. The simulator leverages an inextensible Cosserat rod to model virtual catheters and guidewires. Their mechanical properties were optimized with respect to their real counterparts scanned in a silicone phantom using X-ray CT imaging. The instruments are manipulated via a VSP haptic device. Supporting solutions such as fluoroscopic visualization, contrast flow propagation, cardiac motion, balloon inflation, and stent deployment enable performing a complete angioplasty procedure. We present detailed results of the simulation accuracy of the virtual instruments, along with their computational performance. In addition, the results of a preliminary face and content validation study conducted with a group of 17 interventional radiologists are given. VR simulation of cardiovascular procedures can contribute to surgical training and improve the educational experience without putting patients at risk, raising ethical issues or requiring expensive animal or cadaver facilities. VCSim3 is still a prototype, yet the initial results indicate that it provides promising foundations for further development.
Description of the GMAO OSSE for Weather Analysis Software Package: Version 3
NASA Technical Reports Server (NTRS)
Koster, Randal D. (Editor); Errico, Ronald M.; Prive, Nikki C.; Carvalho, David; Sienkiewicz, Meta; El Akkraoui, Amal; Guo, Jing; Todling, Ricardo; McCarty, Will; Putman, William M.;
2017-01-01
The Global Modeling and Assimilation Office (GMAO) at the NASA Goddard Space Flight Center has developed software and products for conducting observing system simulation experiments (OSSEs) for weather analysis applications. Such applications include estimations of potential effects of new observing instruments or data assimilation techniques on improving weather analysis and forecasts. The GMAO software creates simulated observations from nature run (NR) data sets and adds simulated errors to those observations. The algorithms employed are much more sophisticated, adding a much greater degree of realism, compared with OSSE systems currently available elsewhere. The algorithms employed, software designs, and validation procedures are described in this document. Instructions for using the software are also provided.
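The core observation-simulation step described above (sampling a nature run at observation locations and adding simulated errors) can be sketched as follows. The field, error model and values are invented, and the real GMAO software uses far more sophisticated, correlated and situation-dependent error models; this shows only the bare-bones idea.

```python
# Schematic OSSE observation simulation: interpolate a toy "nature run" field
# to observation locations, then add a simple uncorrelated instrument error.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy nature-run temperature field on a lat/lon grid (K)
lats = np.linspace(-90, 90, 181)
lons = np.linspace(0, 359, 360)
field = 288.0 + 30.0 * np.cos(np.deg2rad(lats))[:, None] + np.zeros((181, 360))

interp = RegularGridInterpolator((lats, lons), field)

rng = np.random.default_rng(7)
obs_lat = rng.uniform(-80, 80, size=1000)
obs_lon = rng.uniform(0, 359, size=1000)

truth = interp(np.column_stack([obs_lat, obs_lon]))   # "perfect" observations
obs_error_std = 1.0                                   # assumed instrument error (K)
simulated_obs = truth + rng.normal(0.0, obs_error_std, size=truth.shape)
print(simulated_obs[:5])
```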
Orchestrating TRANSP Simulations for Interpretative and Predictive Tokamak Modeling with OMFIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grierson, B. A.; Yuan, X.; Gorelenkova, M.
2018-02-21
TRANSP simulations are being used in the OMFIT workflow manager to enable a machine independent means of experimental analysis, postdictive validation, and predictive time dependent simulations on the DIII-D, NSTX, JET and C-MOD tokamaks. The procedures for preparing the input data from plasma profile diagnostics and equilibrium reconstruction, as well as processing of the time-dependent heating and current drive sources and assumptions about the neutral recycling, vary across machines, but are streamlined by using a common workflow manager. Settings for TRANSP simulation fidelity are incorporated into the OMFIT framework, contrasting between-shot analysis, power balance, and fast-particle simulations. A previously established series of data consistency metrics are computed such as comparison of experimental vs. calculated neutron rate, equilibrium stored energy vs. total stored energy from profile and fast-ion pressure, and experimental vs. computed surface loop voltage. Discrepancies between data consistency metrics can indicate errors in input quantities such as electron density profile or Zeff, or indicate anomalous fast-particle transport. Measures to assess the sensitivity of the verification metrics to input quantities are provided by OMFIT, including scans of the input profiles and standardized post-processing visualizations. For predictive simulations, TRANSP uses GLF23 or TGLF to predict core plasma profiles, with user defined boundary conditions in the outer region of the plasma. ITPA validation metrics are provided in post-processing to assess the transport model validity. By using OMFIT to orchestrate the steps for experimental data preparation, selection of operating mode, submission, post-processing and visualization, we have streamlined and standardized the usage of TRANSP.
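A minimal sketch of the kind of data-consistency check described above is given below; the metric names, values and tolerances are purely illustrative, and this is not the OMFIT/TRANSP implementation.

```python
# Hedged sketch: relative discrepancies between measured and computed scalars
# (neutron rate, stored energy, surface loop voltage) with example tolerances.
from dataclasses import dataclass

@dataclass
class ConsistencyMetric:
    name: str
    measured: float
    computed: float
    tolerance: float  # acceptable relative discrepancy

    @property
    def rel_error(self) -> float:
        return abs(self.computed - self.measured) / abs(self.measured)

    @property
    def ok(self) -> bool:
        return self.rel_error <= self.tolerance

metrics = [
    ConsistencyMetric("neutron rate (1/s)", 2.1e15, 2.4e15, 0.20),
    ConsistencyMetric("stored energy (MJ)", 1.05, 1.12, 0.10),
    ConsistencyMetric("loop voltage (V)", 0.08, 0.11, 0.25),
]
for m in metrics:
    flag = "OK" if m.ok else "CHECK inputs (e.g. ne profile, Zeff) or fast-ion transport"
    print(f"{m.name:20s} rel. error = {m.rel_error:5.1%}  {flag}")
```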
Play to become a surgeon: impact of Nintendo Wii training on laparoscopic skills.
Giannotti, Domenico; Patrizi, Gregorio; Di Rocco, Giorgio; Vestri, Anna Rita; Semproni, Camilla Proietti; Fiengo, Leslie; Pontone, Stefano; Palazzini, Giorgio; Redler, Adriano
2013-01-01
Video-games have become an integral part of the new multimedia culture. Several studies have assessed video-gaming enhancement of spatial attention and eye-hand coordination. Considering the technical difficulty of laparoscopic procedures, legal issues and time limitations, the validation of appropriate training even outside of the operating room is ongoing. We investigated the influence of a four-week structured Nintendo® Wii™ training on laparoscopic skills by analyzing performance metrics with a validated simulator (Lap Mentor™, Simbionix™). We performed a prospective randomized study on 42 post-graduate I-II year residents in General, Vascular and Endoscopic Surgery. All participants were tested on a validated laparoscopic simulator and then randomized to group 1 (controls, no training with the Nintendo® Wii™) and group 2 (training with the Nintendo® Wii™), with 21 subjects in each group, according to a computer-generated list. After four weeks, all residents underwent a testing session on the laparoscopic simulator of the same tasks as in the first session. All 42 subjects in both groups improved significantly from session 1 to session 2. Compared to controls, the Wii group showed a significant improvement in performance (p<0.05) for 13 of the 16 considered performance metrics. The Nintendo® Wii™ might be a helpful, inexpensive and entertaining addition to the training of young laparoscopists, in addition to a standard surgical education based on simulators and the operating room.
GRC RBCC Concept Multidisciplinary Analysis
NASA Technical Reports Server (NTRS)
Suresh, Ambady
2001-01-01
This report outlines the GRC RBCC Concept for Multidisciplinary Analysis. The multidisciplinary coupling procedure is presented, along with technique validations and axisymmetric multidisciplinary inlet and structural results. The NPSS (Numerical Propulsion System Simulation) test bed developments and code parallelization are also presented. These include milestones and accomplishments, a discussion of running the R4 fan application on the PII cluster as compared to other platforms, and the National Combustor Code speedup.
An efficient code for the simulation of nonhydrostatic stratified flow over obstacles
NASA Technical Reports Server (NTRS)
Pihos, G. G.; Wurtele, M. G.
1981-01-01
The physical model and computational procedure of the code are described in detail. The code is validated in tests against a variety of known analytical solutions from the literature and is also compared against actual mountain wave observations. The code accepts as initial input either mathematically idealized or discrete observational data. The form of the obstacle or mountain is arbitrary.
Randomized clinical trial of virtual reality simulation for laparoscopic skills training.
Grantcharov, T P; Kristiansen, V B; Bendix, J; Bardram, L; Rosenberg, J; Funch-Jensen, P
2004-02-01
This study examined the impact of virtual reality (VR) surgical simulation on improvement of psychomotor skills relevant to the performance of laparoscopic cholecystectomy. Sixteen surgical trainees performed a laparoscopic cholecystectomy on patients in the operating room (OR). The participants were then randomized to receive VR training (ten repetitions of all six tasks on the Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR)) or no training. Subsequently, all subjects performed a further laparoscopic cholecystectomy in the OR. Both operative procedures were recorded on videotape, and assessed by two independent and blinded observers using predefined objective criteria. Time to complete the procedure, error score and economy of movement score were assessed during the laparoscopic procedure in the OR. No differences in baseline variables were found between the two groups. Surgeons who received VR training performed laparoscopic cholecystectomy significantly faster than the control group (P=0.021). Furthermore, those who had VR training showed significantly greater improvement in error (P=0.003) and economy of movement (P=0.003) scores. Surgeons who received VR simulator training showed significantly greater improvement in performance in the OR than those in the control group. VR surgical simulation is therefore a valid tool for training of laparoscopic psychomotor skills and could be incorporated into surgical training programmes. Copyright 2003 British Journal of Surgery Society Ltd. Published by John Wiley & Sons, Ltd.
Simulation of a complete X-ray digital radiographic system for industrial applications.
Nazemi, E; Rokrok, B; Movafeghi, A; Choopan Dastjerdi, M H
2018-05-19
Simulating X-ray images is of great importance in industry and medicine. Such simulation permits parameters that affect image quality to be optimized without the limitations of an experimental procedure. This study presents a novel methodology to simulate a complete industrial X-ray digital radiographic system, composed of an X-ray tube and a computed radiography (CR) image plate, using the Monte Carlo N-Particle eXtended (MCNPX) code. An industrial X-ray tube with a maximum voltage of 300 kV and current of 5 mA was simulated. A 3-layer uniform plate including a polymer overcoat layer, a phosphor layer and a polycarbonate backing layer was also defined and simulated as the CR imaging plate. To model the image formation in the image plate, the absorbed dose was first calculated in each pixel inside the phosphor layer of the CR imaging plate using the mesh tally in the MCNPX code and then converted to a gray value using a mathematical relationship determined in a separate procedure. To validate the simulation results, an experimental setup was designed and the images of two step wedges made of aluminum and steel were captured experimentally and compared with the simulations. The results show that the simulated images are in good agreement with the experimental ones, demonstrating the ability of the proposed methodology to simulate an industrial X-ray imaging system. Copyright © 2018 Elsevier Ltd. All rights reserved.
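The dose-to-gray-value step described above can be illustrated with a short sketch; the logarithmic calibration form and its coefficients are assumptions for illustration, not the relationship fitted in the study, and the dose map and "measured" profile are synthetic.

```python
# Sketch: map a per-pixel absorbed-dose array (as a mesh tally would provide)
# to gray values via an assumed calibration, then compare profiles.
import numpy as np

def dose_to_gray(dose, a=1000.0, b=250.0, d0=1e-6):
    """Hypothetical calibration: gray value grows with the log of absorbed dose."""
    return a + b * np.log10(np.maximum(dose, d0) / d0)

# Toy dose map behind a two-step wedge (arbitrary dose units)
dose_map = np.concatenate([np.full((64, 64), 3e-4),
                           np.full((64, 64), 1e-4)], axis=1)
image = dose_to_gray(dose_map)

# Simple profile comparison against a (made-up) "measured" profile
measured = image.mean(axis=0) * (1 + np.random.default_rng(3).normal(0, 0.01, 128))
rel_diff = np.abs(image.mean(axis=0) - measured) / measured
print(f"mean relative difference along the profile: {rel_diff.mean():.2%}")
```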
Toma, Milan; Jensen, Morten Ø; Einstein, Daniel R; Yoganathan, Ajit P; Cochran, Richard P; Kunzelman, Karyn S
2016-04-01
Numerical models of native heart valves are being used to study valve biomechanics to aid design and development of repair procedures and replacement devices. These models have evolved from simple two-dimensional approximations to complex three-dimensional, fully coupled fluid-structure interaction (FSI) systems. Such simulations are useful for predicting the mechanical and hemodynamic loading on implanted valve devices. A current challenge for improving the accuracy of these predictions is choosing and implementing modeling boundary conditions. In order to address this challenge, we are utilizing an advanced in vitro system to validate FSI conditions for the mitral valve system. Explanted ovine mitral valves were mounted in an in vitro setup, and structural data for the mitral valve was acquired with μCT. Experimental data from the in vitro ovine mitral valve system were used to validate the computational model. As the valve closes, the hemodynamic data, high speed leaflet dynamics, and force vectors from the in vitro system were compared to the results of the FSI simulation computational model. The total force of 2.6 N per papillary muscle is matched by the computational model. In vitro and in vivo force measurements enable validating and adjusting material parameters to improve the accuracy of computational models. The simulations can then be used to answer questions that are otherwise not possible to investigate experimentally. This work is important to maximize the validity of computational models of not just the mitral valve, but any biomechanical aspect using computational simulation in designing medical devices.
A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection
Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B
2015-01-01
We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
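One plausible reading of the permutation-selection idea is sketched below: permute the response so that no predictor is truly associated with it, record the smallest penalty that zeroes out the whole permuted fit, and take a quantile of those values as the selected penalty. The conventions used (standardized design, sklearn's 1/(2n) loss scaling, the median as the quantile) are assumptions for illustration; consult the paper for the exact rule.

```python
# Sketch of permutation-based selection of the LASSO penalty parameter.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
X = (X - X.mean(0)) / X.std(0)                  # standardize predictors
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(size=n)
y = y - y.mean()

def lambda_max(X, y):
    # Smallest penalty for which the LASSO solution is identically zero
    # (for sklearn's objective (1/(2n))||y - Xb||^2 + alpha*||b||_1).
    return np.max(np.abs(X.T @ y)) / len(y)

perm_lams = [lambda_max(X, rng.permutation(y)) for _ in range(100)]
lam = np.median(perm_lams)                      # the quantile is a tuning choice

fit = Lasso(alpha=lam).fit(X, y)
print("selected penalty:", round(lam, 4))
print("selected variables:", np.flatnonzero(fit.coef_ != 0))
```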
Clinical Outcome Metrics for Optimization of Robust Training
NASA Technical Reports Server (NTRS)
Ebert, D.; Byrne, V. E.; McGuire, K. M.; Hurst, V. W., IV; Kerstman, E. L.; Cole, R. W.; Sargsyan, A. E.; Garcia, K. M.; Reyes, D.; Young, M.
2016-01-01
Introduction: The emphasis of this research is on the Human Research Program (HRP) Exploration Medical Capability's (ExMC) "Risk of Unacceptable Health and Mission Outcomes Due to Limitations of In-Flight Medical Capabilities." Specifically, this project aims to contribute to the closure of gap ExMC 2.02: We do not know how the inclusion of a physician crew medical officer quantitatively impacts clinical outcomes during exploration missions. The experiments are specifically designed to address clinical outcome differences between physician and non-physician cohorts in both near-term and longer-term (mission impacting) outcomes. Methods: Medical simulations will systematically compare success of individual diagnostic and therapeutic procedure simulations performed by physician and non-physician crew medical officer (CMO) analogs using clearly defined short-term (individual procedure) outcome metrics. In the subsequent step of the project, the procedure simulation outcomes will be used as input to a modified version of the NASA Integrated Medical Model (IMM) to analyze the effect of the outcome (degree of success) of individual procedures (including successful, imperfectly performed, and failed procedures) on overall long-term clinical outcomes and the consequent mission impacts. The procedures to be simulated are endotracheal intubation, fundoscopic examination, kidney/urinary ultrasound, ultrasound-guided intravenous catheter insertion, and a differential diagnosis exercise. Multiple assessment techniques will be used, centered on medical procedure simulation studies occurring at 3, 6, and 12 months after initial training (as depicted in the flow diagram of the experiment design). Discussion: Analysis of procedure outcomes in the physician and non-physician groups and their subsets (tested at different elapsed times post training) will allow the team to 1) define differences between physician and non-physician CMOs in terms of both procedure performance (pre-IMM analysis) and overall mitigation of the mission medical impact (IMM analysis); 2) refine the procedure outcome and clinical outcome metrics themselves; 3) refine or develop innovative medical training products and solutions to maximize CMO performance; and 4) validate the methods and products of this experiment for operational use in the planning, execution, and quality assurance of the CMO training process. The team has finalized training protocols and developed a software training/testing tool in collaboration with Butler Graphics (Detroit, MI). In addition to the "hands on" medical procedure modules, the software includes a differential diagnosis exercise (limited clinical decision support tool) to evaluate the diagnostic skills of participants. Human subject testing will occur over the next year.
Can virtual reality simulators be a certification tool for bariatric surgeons?
Giannotti, Domenico; Patrizi, Gregorio; Casella, Giovanni; Di Rocco, Giorgio; Marchetti, Massimiliano; Frezzotti, Francesca; Bernieri, Maria Giulia; Vestri, Anna Rita; Redler, Adriano
2014-01-01
Construct validity of virtual laparoscopic simulators for basic laparoscopic skills has been proposed; however, it is not yet clear whether the simulators can identify the actual experience of surgeons in more complex procedures such as laparoscopic Roux-en-Y gastric bypass. This study tested the ability of the Lap Mentor simulator to recognize the experience in advanced laparoscopic procedures and to assess its role in the certification of bariatric surgeons. Twenty surgeons were divided into two groups according to their experience in laparoscopic and bariatric surgery. The general group included 10 general surgeons performing between 75 and 100 nonbariatric laparoscopic procedures. The bariatric group included 10 bariatric surgeons performing between 50 and 100 laparoscopic bariatric procedures. Participants were tested on the simulator in one basic task (task 1: eye-hand coordination) and in two tasks of the gastric bypass module (task 2: creation of the gastric pouch; task 3: gastrojejunal anastomosis). Comparing the groups, no significant differences were found in task 1. Analyzing the results from the gastric bypass module (bariatric vs. general), in task 2, significant differences (p < 0.05) were found in the median volume of the gastric pouch (21 vs. 48 cm(3)), in the percentage of fundus included in the pouch (8.4 vs. 29.4 %), in the complete dissection at the angle of His (10 vs. 3), and in safety parameters. In task 3, significant differences were found in the size and position of enterotomies. The Lap Mentor may be proposed as a certification tool for bariatric surgeons because it also recognizes their specific skills in the technical details of the procedure that affect long-term results. Furthermore, the possibility of analyzing the performance in detail can help define areas where the surgeon is lacking. These findings indicate a potential role of the Lap Mentor in tailoring the training to maximize improvement.
Validation and uncertainty analysis of a pre-treatment 2D dose prediction model
NASA Astrophysics Data System (ADS)
Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank
2018-02-01
Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
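The (2%, 2 mm) global gamma criterion quoted above is a standard acceptance test; the brute-force 1D sketch below spells out its definition on toy profiles. Real PDI comparisons are 2D and use optimized search strategies, so this is only the bare definition.

```python
# Illustrative 1D global gamma analysis with a (2%, 2 mm) criterion and a
# 10% low-dose threshold.
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, pixel_mm=1.0,
                    dose_crit=0.02, dta_mm=2.0, threshold=0.10):
    dmax = dose_ref.max()
    dd = dose_crit * dmax                       # global dose criterion
    x = np.arange(len(dose_ref)) * pixel_mm
    gammas = []
    for xi, di in zip(x, dose_ref):
        if di < threshold * dmax:               # skip low-dose pixels
            continue
        dist2 = ((x - xi) / dta_mm) ** 2
        dose2 = ((dose_eval - di) / dd) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    gammas = np.array(gammas)
    return 100.0 * np.mean(gammas <= 1.0)

x = np.linspace(-50, 50, 201)                   # 0.5 mm pixels
ref = np.exp(-(x / 20.0) ** 2)                  # toy reference profile
ev = np.exp(-((x - 0.7) / 20.0) ** 2) * 1.01    # shifted, slightly scaled copy
print(f"gamma pass rate: {gamma_pass_rate(ref, ev, pixel_mm=0.5):.1f}%")
```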
Requirements for Modeling and Simulation for Space Medicine Operations: Preliminary Considerations
NASA Technical Reports Server (NTRS)
Dawson, David L.; Billica, Roger D.; Logan, James; McDonald, P. Vernon
2001-01-01
The NASA Space Medicine program is now developing plans for more extensive use of high-fidelity medical simulation systems. The use of simulation is seen as a means to more effectively use the limited time available for astronaut medical training. Training systems should be adaptable for use in a variety of training environments, including classrooms or laboratories, space vehicle mockups, analog environments, and in microgravity. Modeling and simulation can also provide the space medicine development program a mechanism for evaluation of other medical technologies under operationally realistic conditions. Systems and procedures need preflight verification with ground-based testing. Traditionally, component testing has been accomplished, but practical means for "human in the loop" verification of patient care systems have been lacking. Medical modeling and simulation technology offers a potential means to accomplish such validation work. Initial considerations in the development of functional requirements and design standards for simulation systems for space medicine are discussed.
[Nursing students assessment in simulated conditions : in search of meaning and ethics].
Homerin, Marie-Pierre; Roumanet, Marie-Cécile
2014-10-01
Reflection on assessment in simulated conditions is at the origin of this action research conducted at the Institute of Nursing Education of Chambery, France. Indeed, the differences in assessment procedures between units that require this kind of validation, and the disappointing success rate at examinations in simulated situations, led the trainers to raise the following question: "How can these assessments be meaningful and consistent with the goal of training (helping students to become autonomous and reflexive practitioners)?" This issue was addressed with concepts such as socioconstructivism, simulation in health, assessment and ethical principles. The change in practice was the application of the principles of 'educative' assessment according to G. Nunziatti, which strongly involves the students in the assessment process. In order to estimate the impact of these changes of practice, an anonymous online survey was offered to all students who had benefited from this kind of assessment. The results of two classes of students who had different evaluation procedures were also compared. The objectives were, after the implementation of this new kind of evaluation, to assess student satisfaction, to compare the failure rate at the tests in simulated conditions, and to verify compliance with the ethical principles. This study showed student satisfaction with these new forms of assessment in simulated conditions, an increased success rate in the tests, and the applicability of the ethical principles to this way of proceeding. However, the principles of justice and non-maleficence are difficult to implement. Nevertheless, this critical reflection on the procedures of assessment in simulated conditions has helped to change the practices and the way the teachers design assessments.
Shamim Khan, Mohammad; Ahmed, Kamran; Gavazzi, Andrea; Gohil, Rishma; Thomas, Libby; Poulsen, Johan; Ahmed, Munir; Jaye, Peter; Dasgupta, Prokar
2013-03-01
WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: A competent urologist should not only have effective technical skills, but also other attributes that would make him/her a complete surgeon. These include team-working, communication and decision-making skills. Although evidence for effectiveness of simulation exists for individual simulators, there is a paucity of evidence for utility and effectiveness of these simulators in training programmes that aim to combine technical and non-technical skills training. This article explains the process of development and validation of a centrally coordinated simulation program (Participants - South-East Region Specialist Registrars) under the umbrella of the British Association for Urological Surgeons (BAUS) and the London Deanery. This program incorporated training of both technical (synthetic, animal and virtual reality models) and non-technical skills (simulated operating theatres). To establish the feasibility and acceptability of a centralized, simulation-based training programme. Simulation is increasingly establishing its role in urological training, with two areas that are relevant to urologists: (i) technical skills and (ii) non-technical skills. For this London Deanery supported pilot Simulation and Technology enhanced Learning Initiative (STeLI) project, we developed a structured multimodal simulation training programme. The programme incorporated: (i) technical skills training using virtual-reality simulators (Uro-mentor and Perc-mentor [Symbionix, Cleveland, OH, USA], Procedicus MIST-Nephrectomy [Mentice, Gothenburg, Sweden] and SEP Robotic simulator [Sim Surgery, Oslo, Norway]); bench-top models (synthetic models for cystoscopy, transurethral resection of the prostate, transurethral resection of bladder tumour, ureteroscopy); and a European (Aalborg, Denmark) wet-lab training facility; as well as (ii) non-technical skills/crisis resource management (CRM), using SimMan (Laerdal Medical Ltd, Orpington, UK) to teach team-working, decision-making and communication skills. The feasibility, acceptability and construct validity of these training modules were assessed using validated questionnaires, as well as global and procedure/task-specific rating scales. In total, 33 specialist registrars of different grades and five urological nurses participated in the present study. Construct validity, based on performance differences between junior and senior trainees, was significant. Of the participants, 90% rated the training models as being realistic and easy to use. In total, 95% of the participants recommended the use of simulation during surgical training, 95% approved the format of the teaching by the faculty and 90% rated the sessions as well organized. A significant number of trainees (60%) would like to have easy access to a simulation facility to allow more practice and enhancement of their skills. A centralized simulation programme that provides training in both technical and non-technical skills is feasible. It is expected to improve the performance of future surgeons in a simulated environment and thus improve patient safety. © 2012 BJU International.
An Enriched Shell Finite Element for Progressive Damage Simulation in Composite Laminates
NASA Technical Reports Server (NTRS)
McElroy, Mark W.
2016-01-01
A formulation is presented for an enriched shell finite element capable of progressive damage simulation in composite laminates. The element uses a discrete adaptive splitting approach for damage representation that allows for a straightforward model creation procedure based on an initially low-fidelity mesh. The enriched element is verified for Mode I, Mode II, and mixed Mode I/II delamination simulation using numerical benchmark data. Experimental validation is performed using test data from a delamination-migration experiment. Good correlation was found between the enriched shell element model results and the numerical and experimental data sets. The work presented in this paper is meant to serve as a first milestone in the enriched element's development, with an ultimate goal of simulating three-dimensional progressive damage processes in multidirectional laminates.
Development and initial validation of an endoscopic part-task training box.
Thompson, Christopher C; Jirapinyo, Pichamol; Kumar, Nitin; Ou, Amy; Camacho, Andrew; Lengyel, Balazs; Ryan, Michele B
2014-09-01
There is currently no objective and validated methodology available to assess the progress of endoscopy trainees or to determine when technical competence has been achieved. The aims of the current study were to develop an endoscopic part-task simulator and to assess scoring system validity. Fundamental endoscopic skills were determined via kinematic analysis, literature review, and expert interviews. Simulator prototypes and scoring systems were developed to reflect these skills. Validity evidence for content, internal structure, and response process was evaluated. The final training box consisted of five modules (knob control, torque, retroflexion, polypectomy, and navigation and loop reduction). A total of 5 minutes were permitted per module with extra points for early completion. Content validity index (CVI)-realism was 0.88, CVI-relevance was 1.00, and CVI-representativeness was 0.88, giving a composite CVI of 0.92. Overall, 82 % of participants considered the simulator to be capable of differentiating between ability levels, and 93 % thought the simulator should be used to assess ability prior to performing procedures in patients. Inter-item assessment revealed correlations from 0.67 to 0.93, suggesting that tasks were sufficiently correlated to assess the same underlying construct, with each task remaining independent. Each module represented 16.0 % - 26.1 % of the total score, suggesting that no module contributed disproportionately to the composite score. Average box scores were 272.6 and 284.4 (P = 0.94) when performed sequentially, and average score for all participants with proctor 1 was 297.6 and 308.1 with proctor 2 (P = 0.94), suggesting reproducibility and minimal error associated with test administration. A part-task training box and scoring system were developed to assess fundamental endoscopic skills, and validity evidence regarding content, internal structure, and response process was demonstrated. © Georg Thieme Verlag KG Stuttgart · New York.
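The content validity index (CVI) values quoted above follow a simple calculation: the proportion of expert raters scoring an item 3 or 4 on a 4-point relevance/realism scale. The toy example below, with invented ratings from eight hypothetical experts, shows the arithmetic only.

```python
# Worked toy CVI calculation (ratings are invented).
ratings_realism = [4, 4, 3, 4, 2, 4, 3, 4]   # eight hypothetical expert ratings
cvi = sum(r >= 3 for r in ratings_realism) / len(ratings_realism)
print(f"CVI-realism = {cvi:.2f}")            # 7/8 = 0.88, same order as the reported values
```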
A statistically robust EEG re-referencing procedure to mitigate reference effect
Lepage, Kyle Q.; Kramer, Mark A.; Chu, Catherine J.
2014-01-01
Background: The electroencephalogram (EEG) remains the primary tool for diagnosis of abnormal brain activity in clinical neurology and for in vivo recordings of human neurophysiology in neuroscience research. In EEG data acquisition, voltage is measured at positions on the scalp with respect to a reference electrode. When this reference electrode responds to electrical activity or artifact, all electrodes are affected. Successful analysis of EEG data often involves re-referencing procedures that modify the recorded traces and seek to minimize the impact of reference electrode activity upon functions of the original EEG recordings. New method: We provide a novel, statistically robust procedure that adapts a robust maximum-likelihood type estimator to the problem of reference estimation, reduces the influence of neural activity from the re-referencing operation, and maintains good performance in a wide variety of empirical scenarios. Results: The performance of the proposed and existing re-referencing procedures are validated in simulation and with examples of EEG recordings. To facilitate this comparison, channel-to-channel correlations are investigated theoretically and in simulation. Comparison with existing methods: The proposed procedure avoids using data contaminated by neural signal and remains unbiased in recording scenarios where physical references, the common average reference (CAR) and the reference estimation standardization technique (REST) are not optimal. Conclusion: The proposed procedure is simple, fast, and avoids the potential for substantial bias when analyzing low-density EEG data. PMID:24975291
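The contrast between the common average reference and a robust reference estimate can be illustrated as follows. The sketch uses the channel-wise median as a stand-in for the paper's maximum-likelihood-type robust estimator, and all signals are synthetic.

```python
# CAR vs a robust reference: a strong focal signal on one channel leaks into
# every CAR-referenced trace but barely shifts a median-based reference.
import numpy as np

rng = np.random.default_rng(2)
n_ch, n_t, fs = 19, 1000, 250
eeg = rng.normal(0, 1, size=(n_ch, n_t))                      # background activity
eeg[3] += 20 * np.sin(2 * np.pi * 10 * np.arange(n_t) / fs)   # strong focal 10 Hz signal

car = eeg - eeg.mean(axis=0, keepdims=True)            # common average reference
robust = eeg - np.median(eeg, axis=0, keepdims=True)   # robust (median) reference

def power_at_10hz(x):
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[np.argmin(np.abs(freqs - 10))]

# Leakage of the focal 10 Hz signal into a distant channel
print("10 Hz leakage into channel 0, CAR   :", round(power_at_10hz(car[0]), 1))
print("10 Hz leakage into channel 0, robust:", round(power_at_10hz(robust[0]), 1))
```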
Flores-Alsina, Xavier; Saagi, Ramesh; Lindblom, Erik; Thirsing, Carsten; Thornberg, Dines; Gernaey, Krist V; Jeppsson, Ulf
2014-03-15
The objective of this paper is to demonstrate the full-scale feasibility of the phenomenological dynamic influent pollutant disturbance scenario generator (DIPDSG) that was originally used to create the influent data of the International Water Association (IWA) Benchmark Simulation Model No. 2 (BSM2). In this study, the influent characteristics of two large Scandinavian treatment facilities are studied for a period of two years. A step-wise procedure based on adjusting the most sensitive parameters at different time scales is followed to calibrate/validate the DIPDSG model blocks for: 1) flow rate; 2) pollutants (carbon, nitrogen); 3) temperature; and, 4) transport. Simulation results show that the model successfully describes daily/weekly and seasonal variations and the effect of rainfall and snow melting on the influent flow rate, pollutant concentrations and temperature profiles. Furthermore, additional phenomena such as size and accumulation/flush of particulates of/in the upstream catchment and sewer system are incorporated in the simulated time series. Finally, this study is complemented with: 1) the generation of additional future scenarios showing the effects of different rainfall patterns (climate change) or influent biodegradability (process uncertainty) on the generated time series; 2) a demonstration of how to reduce the cost/workload of measuring campaigns by filling the gaps due to missing data in the influent profiles; and, 3) a critical discussion of the presented results balancing model structure/calibration procedure complexity and prediction capabilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
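A toy version of the phenomenological influent-generation idea (daily, weekly and seasonal harmonics plus stochastic rain events superimposed on a dry-weather flow) is sketched below; all amplitudes, periods and the rain model are invented and are not the calibrated DIPDSG parameters.

```python
# Synthetic influent flow: harmonic dry-weather pattern plus random rain events.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0, 2 * 365 * 24, 1.0)                     # hourly steps over two years

base = 20000.0                                          # m3/d average dry-weather flow
daily = 0.25 * base * np.sin(2 * np.pi * (t % 24) / 24 - np.pi / 2)
weekly = 0.05 * base * np.sin(2 * np.pi * (t % (7 * 24)) / (7 * 24))
seasonal = 0.10 * base * np.sin(2 * np.pi * t / (365 * 24))

rain = np.zeros_like(t)
storm_starts = rng.choice(len(t), size=60, replace=False)   # ~60 rain events
for s in storm_starts:
    dur = rng.integers(3, 24)                           # event duration (hours)
    rain[s:s + dur] += rng.uniform(0.2, 1.5) * base     # event magnitude

flow = np.clip(base + daily + weekly + seasonal + rain, 0, None)
print("mean flow:", round(flow.mean()), " max flow:", round(flow.max()))
```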
Construct Validity of Fresh Frozen Human Cadaver as a Training Model in Minimal Access Surgery
Macafee, David; Pranesh, Nagarajan; Horgan, Alan F.
2012-01-01
Background: The construct validity of fresh human cadaver as a training tool has not been established previously. The aims of this study were to investigate the construct validity of fresh frozen human cadaver as a method of training in minimal access surgery and determine if novices can be rapidly trained using this model to a safe level of performance. Methods: Junior surgical trainees, novices in laparoscopic surgery (<3 laparoscopic procedures performed), performed 10 repetitions of a set of structured laparoscopic tasks on fresh frozen cadavers. Expert laparoscopists (>100 laparoscopic procedures) performed 3 repetitions of identical tasks. Performances were scored using a validated, objective Global Operative Assessment of Laparoscopic Skills scale. Scores for 3 consecutive repetitions were compared between experts and novices to determine construct validity. Furthermore, to determine if the novices reached a safe level, a trimmed mean of the experts' scores was used to define a benchmark. The Mann-Whitney U test was used for construct validity analysis and a 1-sample t test to compare performances of the novice group with the benchmark safe score. Results: Ten novices and 2 experts were recruited. Four out of 5 tasks (nondominant to dominant hand transfer; simulated appendicectomy; intracorporeal and extracorporeal knot tying) showed construct validity. Novices' scores became comparable to benchmark scores between the eighth and tenth repetition. Conclusion: Minimal access surgical training using fresh frozen human cadavers appears to have construct validity. The laparoscopic skills of novices can be accelerated through to a safe level within 8 to 10 repetitions. PMID:23318058
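The two statistical comparisons described above can be reproduced mechanically as follows; the scores are fabricated and the 20% trimming proportion is an assumption, purely to show the mechanics.

```python
# Construct validity (Mann-Whitney U) and benchmark comparison (one-sample t
# test against a trimmed mean of expert scores) on fabricated scores.
import numpy as np
from scipy import stats

novice_rep1 = np.array([11, 13, 12, 10, 14, 12, 11, 13, 12, 10], float)
novice_rep10 = np.array([19, 21, 20, 22, 18, 21, 20, 19, 22, 20], float)
expert = np.array([23, 24, 22, 25, 23, 24], float)

# Construct validity: do experts outscore novices on early repetitions?
u, p = stats.mannwhitneyu(novice_rep1, expert, alternative="two-sided")
print(f"Mann-Whitney novices (rep 1) vs experts: U={u:.1f}, p={p:.4f}")

# Benchmark "safe" score: trimmed mean of expert scores (20% trimming assumed)
benchmark = stats.trim_mean(expert, 0.2)
t, p = stats.ttest_1samp(novice_rep10, benchmark)
print(f"Novices at rep 10 vs benchmark {benchmark:.1f}: t={t:.2f}, p={p:.4f}")
```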
Gas-injection-start and shutdown characteristics of a 2-kilowatt to 15-kilowatt Brayton power system
NASA Technical Reports Server (NTRS)
Cantoni, D. A.
1972-01-01
Two methods of starting the Brayton power system have been considered: (1) using the alternator as a motor to spin the Brayton rotating unit (BRU), and (2) spinning the BRU by forced gas injection. The first method requires the use of an auxiliary electrical power source. An alternating voltage is applied to the terminals of the alternator to drive it as an induction motor. Only gas-injection starts are discussed in this report. The gas-injection starting method requires high-pressure gas storage and valves to route the gas flow to provide correct BRU rotation. An analog computer simulation was used to size hardware and to determine safe start and shutdown procedures. The simulation was also used to define the range of conditions for successful startups. Experimental data were also obtained under various test conditions. These data verify the validity of the start and shutdown procedures.
NASA Technical Reports Server (NTRS)
2002-01-01
Ames Research Center granted Reality Capture Technologies (RCT), Inc., a license to further develop NASA's Mars Map software platform. The company incorporated NASA's innovation into software that uses the Virtual Plant Model (VPM)(TM) to structure, modify, and implement the construction sites of industrial facilities, as well as develop, validate, and train operators on procedures. The VPM orchestrates the exchange of information between engineering, production, and business transaction systems. This enables users to simulate, control, and optimize work processes while increasing the reliability of critical business decisions. Engineers can complete the construction process and test various aspects of it in virtual reality before building the actual structure. With virtual access to and simulation of the construction site, project personnel can manage, access control, and respond to changes on complex constructions more effectively. Engineers can also create operating procedures, training, and documentation. Virtual Plant Model(TM) is a trademark of Reality Capture Technologies, Inc.
Kang, Sung Gu; Cho, Seok; Kang, Seok Ho; Haidar, Abdul Muhsin; Samavedi, Srinivas; Palmer, Kenneth J; Patel, Vipul R; Cheon, Jun
2014-08-01
To better use virtual reality robotic simulators and offer surgeons more practical exercises, we developed the Tube 3 module for practicing vesicourethral anastomosis (VUA), one of the most complex steps in the robot-assisted radical prostatectomy procedure. Herein, we describe the principle of the Tube 3 module and evaluate its face, content, and construct validity. Residents and attending surgeons participated in a prospective study approved by the institutional review board. We divided subjects into 2 groups, those with experience and novices. Each subject performed a simulated VUA using the Tube 3 module. A built-in scoring algorithm recorded the data from each performance. After completing the Tube 3 module exercise, each subject answered a questionnaire to provide data to be used for face and content validation. The novice group consisted of 10 residents. The experienced subjects (n = 10) had each previously performed at least 10 robotic surgeries. The experienced group outperformed the novice group in most variables, including task time, total score, total economy of motion, and number of instrument collisions (P <.05). Additionally, 80% of the experienced surgeons agreed that the module reflects the technical skills required to perform VUA and would be a useful training tool. We describe the Tube 3 module for practicing VUA, which showed excellent face, content, and construct validity. The task needs to be refined in the future to reflect VUA under real operating conditions, and concurrent and predictive validity studies are currently underway. Copyright © 2014 Elsevier Inc. All rights reserved.
An over-view of robot assisted surgery curricula and the status of their validation.
Fisher, Rebecca A; Dasgupta, Prokar; Mottrie, Alex; Volpe, Alessandro; Khan, Mohammed S; Challacombe, Ben; Ahmed, Kamran
2015-01-01
Robotic surgery is a rapidly expanding field. Thus far, training for robotic techniques has been unstructured and the requirements are variable across various regions. Several projects are currently underway to develop a robotic surgery curriculum and are in various stages of validation. We aimed to outline the structures of available curricula, their process of development, validation status and current utilization. We undertook a literature review of papers including the MeSH terms "Robotics" and "Education". When we had an overview of curricula in development, we searched recent conference abstracts to gain up to date information. The main curricula are the FRS, the FSRS, the Canadian BSTC and the ERUS initiative. They are in various stages of validation and offer a mixture of theoretical and practical training, using both physical and simulated models. Whilst the FSRS is based on tasks on the RoSS virtual reality simulator, FRS and BSTC are designed for use on simulators and the robot itself. The ERUS curriculum benefits from a combination of dry lab, wet lab and virtual reality components, which may allow skills to be more transferable to the OR as tasks are completed in several formats. Finally, the ERUS curriculum includes the OR modular training programme as table assistant and console surgeon. Curricula are a crucial step in global standardisation of training and certification of surgeons for robotic surgical procedures. Many curricula are in early stages of development and more work is needed in development and validation of these programmes before training can be standardised. Copyright © 2014 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
A Systematic Review of Virtual Reality Simulators for Robot-assisted Surgery.
Moglia, Andrea; Ferrari, Vincenzo; Morelli, Luca; Ferrari, Mauro; Mosca, Franco; Cuschieri, Alfred
2016-06-01
No single large published randomized controlled trial (RCT) has confirmed the efficacy of virtual simulators in the acquisition of skills to the standard required for safe clinical robotic surgery. This remains the main obstacle for the adoption of these virtual simulators in surgical residency curricula. To evaluate the level of evidence in published studies on the efficacy of training on virtual simulators for robotic surgery. In April 2015, a literature search was conducted on PubMed, Web of Science, Scopus, Cochrane Library, the Clinical Trials Database (US) and the Meta Register of Controlled Trials. All publications were scrutinized for relevance to the review and for assessment of the levels of evidence provided using the classification developed by the Oxford Centre for Evidence-Based Medicine. The publications included in the review consisted of one RCT and 28 cohort studies on validity, and seven RCTs and two cohort studies on skills transfer from virtual simulators to robot-assisted surgery. Simulators were rated good for realism (face validity) and for usefulness as a training tool (content validity). However, the studies included used various simulation training methodologies, limiting the assessment of construct validity. The review confirms the absence of any consensus on which tasks and metrics are the most effective for the da Vinci Skills Simulator and dV-Trainer, the most widely investigated systems. Although there is consensus for the RoSS simulator, this is based on only two studies on construct validity involving four exercises. One study on initial evaluation of an augmented reality module for partial nephrectomy using the dV-Trainer reported high correlation (r=0.8) between in vivo porcine nephrectomy and a virtual renorrhaphy task according to the overall Global Evaluation Assessment of Robotic Surgery (GEARS) score. In one RCT on skills transfer, the experimental group outperformed the control group, with a significant difference in overall GEARS score (p=0.012) during performance of urethrovesical anastomosis on an inanimate model. Only one study included assessment of a surgical procedure on real patients: subjects trained on a virtual simulator outperformed the control group following traditional training. However, besides the small numbers, this study was not randomized. There is an urgent need for a large, well-designed, preferably multicenter RCT to study the efficacy of virtual simulation for the acquisition of competence in and safe execution of clinical robotic-assisted surgery. We reviewed the literature on virtual simulators for robot-assisted surgery. Validity studies used various simulation training methodologies. It is not clear which exercises and metrics are the most effective in distinguishing different levels of experience on the da Vinci robot. There is no reported evidence of skills transfer from simulation to clinical surgery on real patients. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Longo, Mariaconcetta; Marchioni, Chiara; Insero, Teresa; Donnarumma, Raffaella; D'Adamo, Alessandro; Lucatelli, Pierleone; Fanelli, Fabrizio; Salvatori, Filippo Maria; Cannavale, Alessandro; Di Castro, Elisabetta
2016-03-01
This study evaluates X-ray exposure in patients undergoing abdominal extra-vascular interventional procedures by means of Digital Imaging and COmmunications in Medicine (DICOM) image headers and Monte Carlo simulation. The main aim was to assess the effective and equivalent doses, under the hypothesis of their correlation with the dose area product (DAP) measured during each examination. This allows dosimetric information to be collected for each patient and the associated risks to be evaluated without resorting to in vivo dosimetry. The dose calculation was performed in 79 procedures through the Monte Carlo simulator PCXMC (A PC-based Monte Carlo program for calculating patient doses in medical X-ray examinations), by using the real geometrical and dosimetric irradiation conditions, automatically extracted from DICOM headers. The DAP measurements were also validated by using thermoluminescent dosemeters on an anthropomorphic phantom. The expected linear correlation between effective doses and DAP was confirmed with an R(2) of 0.974. Moreover, in order to easily calculate patient doses, conversion coefficients that relate equivalent doses to measurable quantities, such as DAP, were obtained. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
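The final step, turning the dose-DAP correlation into a conversion coefficient, amounts to a regression through the origin. The sketch below uses synthetic data and an assumed coefficient purely to show the calculation; the paper reports R(2) = 0.974 for its 79 procedures.

```python
# Fit effective dose vs DAP through the origin to get a conversion coefficient.
import numpy as np

rng = np.random.default_rng(5)
dap = rng.uniform(5, 300, size=79)                 # Gy*cm^2 (synthetic)
true_coeff = 0.18                                  # mSv per Gy*cm^2 (assumed)
effective_dose = true_coeff * dap * rng.normal(1.0, 0.08, size=79)

# Least-squares fit of E = k * DAP
k = np.sum(dap * effective_dose) / np.sum(dap ** 2)
pred = k * dap
r2 = 1 - np.sum((effective_dose - pred) ** 2) / np.sum((effective_dose - effective_dose.mean()) ** 2)
print(f"conversion coefficient k = {k:.3f} mSv/(Gy*cm^2), R^2 = {r2:.3f}")
```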
Interventional radiology virtual simulator for liver biopsy.
Villard, P F; Vidal, F P; ap Cenydd, L; Holbrey, R; Pisharody, S; Johnson, S; Bulpitt, A; John, N W; Bello, F; Gould, D
2014-03-01
Training in Interventional Radiology currently uses the apprenticeship model, where clinical and technical skills of invasive procedures are learnt during practice in patients. This apprenticeship training method is increasingly limited by regulatory restrictions on working hours, concerns over patient risk through trainees' inexperience and the variable exposure to case mix and emergencies during training. To address this, we have developed a computer-based simulation of visceral needle puncture procedures. A real-time framework has been built that includes: segmentation, physically based modelling, haptics rendering, pseudo-ultrasound generation and the concept of a physical mannequin. It is the result of a close collaboration between different universities, involving computer scientists, clinicians, clinical engineers and occupational psychologists. The technical implementation of the framework is a robust and real-time simulation environment combining a physical platform and an immersive computerized virtual environment. The face, content and construct validation have been previously assessed, showing the reliability and effectiveness of this framework, as well as its potential for teaching visceral needle puncture. A simulator for ultrasound-guided liver biopsy has been developed. It includes functionalities and metrics extracted from cognitive task analysis. This framework can be useful during training, particularly given the known difficulties in gaining significant practice of core skills in patients.
NASA Astrophysics Data System (ADS)
Jiang, Yewei; Luo, Jie; Wu, Yongquan
2017-06-01
Empirical potentials are vital to classical atomistic simulation, especially for the study of phase transitions and of solid-liquid interfaces. In this paper, we attempt to set up a uniform procedure for validating different potentials before a formal simulation study of phase transitions in metals. Two main steps are involved: (1) the prediction of the structures of both solid and liquid phases and their mutual transitions, i.e. melting and crystallization; and (2) the prediction of key thermodynamic properties (the equilibrium melting point at ambient pressure) and dynamic properties (the degrees of superheating and undercooling). We applied this procedure to test seven published embedded-atom potentials (MKBA (Mendelev et al 2008 Philos. Mag. 88 1723), MFMP (Mishin et al 1999 Phys. Rev. B 59 3393), MDSL (Sturgeon and Laird 2000 Phys. Rev. B 62 14720), ZM (Zope and Mishin 2003 Phys. Rev. B 68 024102), LEA (Liu et al 2004 Model. Simul. Mater. Sci. Eng. 12 665), WKG (Winey et al 2009 Model. Simul. Mater. Sci. Eng. 17 055004) and ZJW (Zhou et al 2004 Phys. Rev. B 69 144113)) for describing the solid-liquid transition of Al. All the predictions of structure, melting point and superheating/undercooling degrees were compared with experiments or theoretical calculations. Two of the potentials, MKBA and MDSL, proved suitable for the study of the solid-liquid transition of Al, while the remaining ones did not qualify. The MKBA potential is more accurate in predicting the structures of the solid and the liquid, whereas MDSL performs slightly better in the thermodynamic and dynamic predictions of the solid-liquid transition.
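As a minimal illustration of step (2) above, the snippet ranks candidate potentials by the deviation of their predicted melting points from the experimental melting point of Al (about 933.5 K). The per-potential numbers below are placeholders for illustration only, not results from the cited paper.

```python
# Illustrative screening of potentials by melting-point accuracy for Al.
T_MELT_EXP_AL = 933.5  # K, experimental melting point of aluminium

predicted_tm = {       # hypothetical predicted melting points (K), not paper results
    "MKBA": 930.0,
    "MDSL": 925.0,
    "ZJW":  870.0,
}

def relative_error(predicted, reference):
    return abs(predicted - reference) / reference

for name, tm in sorted(predicted_tm.items(),
                       key=lambda kv: relative_error(kv[1], T_MELT_EXP_AL)):
    err = 100.0 * relative_error(tm, T_MELT_EXP_AL)
    print(f"{name}: predicted Tm = {tm:.0f} K, deviation from experiment = {err:.1f}%")
```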
Real-time mandibular angle reduction surgical simulation with haptic rendering.
Wang, Qiong; Chen, Hui; Wu, Wen; Jin, Hai-Yang; Heng, Pheng-Ann
2012-11-01
Mandibular angle reduction is a popular and efficient procedure widely used to alter the facial contour. The primary surgical instruments employed in the surgery, the reciprocating saw and the round burr, have a common feature: they operate at high speed. Inexperienced surgeons generally need long practice to learn how to minimize the risks caused by uncontrolled contacts and cutting motions when manipulating instruments under high-speed reciprocation or rotation. A virtual reality-based surgical simulator for mandibular angle reduction was designed and implemented on a CUDA-based platform in this paper. High-fidelity visual and haptic feedback is provided to enhance perception in a realistic virtual surgical environment. Impulse-based haptic models were employed to simulate the contact forces and torques on the instruments, providing a convincing haptic sensation as surgeons control the instruments at different reciprocation or rotation velocities. Real-time methods for bone removal and reconstruction during the surgical procedure are proposed to support realistic visual feedback. The simulated contact forces were verified by comparison against actual force data measured with the constructed mechanical platform. An empirical study based on patient-specific data was conducted to evaluate the ability of the proposed system to train surgeons of various experience levels. The results confirm the validity of our simulator.
Parallel runway requirement analysis study. Volume 1: The analysis
NASA Technical Reports Server (NTRS)
Ebrahimi, Yaghoob S.
1993-01-01
The correlation of increased flight delays with the level of aviation activity is well recognized. A main contributor to these flight delays has been the capacity of airports. Though new airport and runway construction would significantly increase airport capacity, few programs of this type are currently underway, let alone planned, because of the high cost associated with such endeavors. Therefore, it is necessary to achieve the most efficient and cost-effective use of existing fixed airport resources through better planning and control of traffic flows. In fact, during the past few years the FAA has initiated such an airport capacity program designed to provide additional capacity at existing airports. Some of the improvements that the program has generated thus far have been based on new Air Traffic Control procedures, terminal automation, additional Instrument Landing Systems, improved controller display aids, and improved utilization of multiple runways/Instrument Meteorological Conditions (IMC) approach procedures. A useful element in understanding potential operational capacity enhancements at high demand airports has been the development and use of an analysis tool called The PLAND_BLUNDER (PLB) Simulation Model. The objective for building this simulation was to develop a parametric model that could be used for analysis in determining the minimum safety level of parallel runway operations for various parameters representing the airplane, navigation, surveillance, and ATC system performance. This simulation is useful for: quick and economical evaluation of existing environments that are experiencing IMC delays; efficient study and validation of proposed procedure modifications; evaluation of requirements for new airports or new runways at existing airports; simple, parametric investigation of a wide range of issues and approaches; trade-offs between the contributions of air and ground technology and procedures; and consideration of probable blunder mechanisms and the range of blunder scenarios. This study describes the steps of building the simulation and considers the input parameters, assumptions and limitations, and available outputs. Validation results and sensitivity analysis are addressed, and some IMC and Visual Meteorological Conditions (VMC) approaches to parallel runways are outlined. Also, present and future applicable technologies (e.g., Digital Autoland Systems, Traffic Collision and Avoidance System II, Enhanced Situational Awareness System, Global Positioning Systems for Landing, etc.) are assessed and recommendations made.
Morris, Marie C; Gallagher, Tom K; Ridgway, Paul F
2012-01-01
The objective was to systematically review the literature to identify and grade tools used for the end-point assessment of procedural skills competence (e.g., phlebotomy, IV cannulation, suturing) in medical students prior to certification. The authors electronically searched eight bibliographic databases - ERIC, Medline, CINAHL, EMBASE, PsycINFO, PsycLIT, EBM Reviews and the Cochrane databases. Two reviewers independently reviewed the literature within the PRISMA framework, the inclusion/exclusion criteria and the search period to identify procedural assessment tools used specifically for assessing medical students. Papers on OSATS and DOPS were excluded as they focused on post-registration assessment and clinical rather than simulated competence. Of 659 abstracted articles, 56 identified procedural assessment tools; only 11 specifically assessed medical students. The final 11 studies consisted of 1 randomised controlled trial, 4 comparative and 6 descriptive studies, yielding 12 heterogeneous procedural assessment tools for analysis. Seven tools addressed four discrete pre-certification skills: basic suture (3), airway management (2), nasogastric tube insertion (1) and intravenous cannulation (1). One tool used a generic assessment of procedural skills. Two tools focused on postgraduate laparoscopic skills and one on osteopathic students, and thus were not included in this review. The levels of evidence are low: reliability is modest (κ = 0.65-0.71) and only minimum validity (face and content) is achieved. In conclusion, there are no tools designed specifically to assess competence in procedural skills in a final certification examination. There is a need to develop standardised tools with proven reliability and validity for the assessment of procedural skills competence at the end of medical training. Medical graduates must have comparable levels of procedural skills acquisition on entering the clinical workforce, irrespective of the country of training.
Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.
2008-06-01
An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to reduce the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or output instantaneous particle data obtained by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run times compared with the single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that ensembles of 10 runs processed with DREAM could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
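The scatter reduction quoted above follows the usual 1/sqrt(N) behaviour of ensemble averaging over independently seeded repeat runs. The sketch below is a generic illustration of that idea on a synthetic noisy profile; it does not reproduce PDSC or DREAM, and the field shape, noise level and seeds are placeholders.

```python
# Generic ensemble-averaging illustration: averaging N independently seeded
# "sampling passes" reduces the rms scatter roughly as 1/sqrt(N).
import numpy as np

def sample_macroscopic_field(n_cells=50, noise=0.05, seed=None):
    """Stand-in for one unsteady sampling pass: a smooth profile plus shot noise."""
    r = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_cells)
    true_field = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)   # hypothetical 'exact' profile
    return true_field + r.normal(0.0, noise, size=n_cells), true_field

def ensemble_average(n_runs):
    runs = [sample_macroscopic_field(seed=i)[0] for i in range(n_runs)]
    return np.mean(runs, axis=0)

_, truth = sample_macroscopic_field(seed=0)
for n in (1, 10):
    scatter = np.std(ensemble_average(n) - truth)
    print(f"{n:2d} run(s): rms scatter = {scatter:.4f}")
```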
Face and Construct Validation of a Next Generation Virtual Reality (Gen2-VR©) Surgical Simulator
Sankaranarayanan, Ganesh; Li, Baichun; Manser, Kelly; Jones, Stephanie B.; Jones, Daniel B.; Schwaitzberg, Steven; Cao, Caroline G. L.; De, Suvranu
2015-01-01
Introduction Surgical performance is affected by distractors and interruptions to surgical workflow that exist in the operating room. However, traditional surgical simulators are used to train surgeons in a skills lab that does not recreate these conditions. To overcome this limitation, we have developed a novel, immersive virtual reality (Gen2-VR©) system to train surgeons in these environments. The aim of this study was to establish the face and construct validity of our system. Methods and Procedures The study was a within-subjects design, with subjects repeating a virtual peg transfer task under three different conditions: CASE I: traditional VR; CASE II: Gen2-VR© with no distractions; and CASE III: Gen2-VR© with distractions and interruptions. In Case III, to simulate the effects of distractions and interruptions, music was played intermittently, the camera lens was fogged for 10 seconds and tools malfunctioned for 15 seconds at random points in time during the simulation. At the completion of the study, subjects filled in a 5-point Likert scale feedback questionnaire. A total of sixteen subjects participated in this study. Results The Friedman test showed a significant difference in scores between the three conditions (p < 0.0001). Post hoc analysis using Wilcoxon Signed Rank tests with Bonferroni correction further showed that all three conditions were significantly different from each other (Case I, Case II, p < 0.001), (Case I, Case III, p < 0.001) and (Case II, Case III, p = 0.009). Subjects rated that fog (mean = 4.18) and tool malfunction (median = 4.56) significantly hindered their performance. Conclusion The results showed that the Gen2-VR© simulator has both face and construct validity and it can accurately and realistically present distractions and interruptions in a simulated OR, in spite of limitations of the current HMD hardware technology. PMID:26092010
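A minimal sketch of the statistical comparison described above (Friedman test across the three within-subject conditions, followed by Wilcoxon signed-rank post hoc tests with Bonferroni correction), using scipy.stats. The score arrays are simulated placeholders, not the study's data.

```python
# Friedman test + Bonferroni-corrected Wilcoxon signed-rank post hoc comparisons.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 16
case1 = rng.normal(90, 5, n_subjects)          # traditional VR (hypothetical scores)
case2 = case1 - rng.normal(8, 3, n_subjects)   # Gen2-VR, no distractions
case3 = case2 - rng.normal(10, 3, n_subjects)  # Gen2-VR with distractions

stat, p = stats.friedmanchisquare(case1, case2, case3)
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.4g}")

pairs = list(combinations([("I", case1), ("II", case2), ("III", case3)], 2))
for (name_a, a), (name_b, b) in pairs:
    w, p_raw = stats.wilcoxon(a, b)
    p_adj = min(1.0, p_raw * len(pairs))       # Bonferroni correction
    print(f"Case {name_a} vs Case {name_b}: adjusted p = {p_adj:.4g}")
```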
Effect of virtual reality training on laparoscopic surgery: randomised controlled trial
Soerensen, Jette L; Grantcharov, Teodor P; Dalsgaard, Torur; Schouenborg, Lars; Ottosen, Christian; Schroeder, Torben V; Ottesen, Bent S
2009-01-01
Objective To assess the effect of virtual reality training on an actual laparoscopic operation. Design Prospective randomised controlled and blinded trial. Setting Seven gynaecological departments in the Zealand region of Denmark. Participants 24 first- and second-year registrars specialising in gynaecology and obstetrics. Interventions Proficiency-based virtual reality simulator training in laparoscopic salpingectomy and standard clinical education (controls). Main outcome measure The main outcome measure was technical performance assessed by two independent observers blinded to trainee and training status using a previously validated general and task-specific rating scale. The secondary outcome measure was operation time in minutes. Results The simulator-trained group (n=11) reached a median total score of 33 points (interquartile range 32-36 points), equivalent to the experience gained after 20-50 laparoscopic procedures, whereas the control group (n=10) reached a median total score of 23 (22-27) points, equivalent to the experience gained from fewer than five procedures (P<0.001). The median total operation time in the simulator-trained group was 12 minutes (interquartile range 10-14 minutes) and in the control group was 24 (20-29) minutes (P<0.001). The observers’ inter-rater agreement was 0.79. Conclusion Skills in laparoscopic surgery can be increased in a clinically relevant manner using proficiency-based virtual reality simulator training. The performance level of novices was increased to that of intermediately experienced laparoscopists and operation time was halved. Simulator training should be considered before trainees carry out laparoscopic procedures. Trial registration ClinicalTrials.gov NCT00311792. PMID:19443914
Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments
Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed
2013-01-01
In recent years, RNA-Seq technologies have become a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data are yet to be standardized. In particular, it is known that the choice of a normalization procedure leads to great variability in the results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aiming at removing an inherent bias of the studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Analyses of real RNA-Seq data sets, performed with all the different normalization methods, show that only 50% of the significantly differentially expressed genes are common to all methods. This result highlights the influence of the normalization step on the differential expression analysis. Analyses of real and simulated data sets give similar results, revealing three groups of procedures with the same behavior. The group including the novel method named “Median Ratio Normalization” (MRN) gives the lowest number of false discoveries. Within this group, the MRN method is less sensitive to the modification of parameters related to the relative size of the transcriptomes, such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with the intrinsic bias resulting from the relative size of the studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135
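MRN belongs to the family of median-of-ratios scaling methods. The sketch below shows the general idea of that family (a geometric-mean pseudo-reference and per-sample median ratios); the published MRN procedure differs in its details, and the count matrix here is a toy placeholder.

```python
# Generic median-of-ratios size-factor normalization (illustrative, not MRN itself).
import numpy as np

def median_ratio_size_factors(counts):
    """counts: genes x samples matrix of raw read counts (positive integers)."""
    counts = np.asarray(counts, dtype=float)
    log_counts = np.log(counts)
    # geometric-mean pseudo-reference, restricted to genes counted in every sample
    finite = np.all(np.isfinite(log_counts), axis=1)
    log_ref = log_counts[finite].mean(axis=1)
    # per-sample size factor = median ratio of sample counts to the reference
    log_ratios = log_counts[finite] - log_ref[:, None]
    return np.exp(np.median(log_ratios, axis=0))

# Hypothetical toy matrix (4 genes x 3 samples):
counts = np.array([[100, 200, 150],
                   [ 50, 110,  70],
                   [ 10,  18,  12],
                   [500, 980, 740]])
sf = median_ratio_size_factors(counts)
print("size factors:", np.round(sf, 3))
print("normalized counts:\n", np.round(counts / sf, 1))
```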
NASA Astrophysics Data System (ADS)
Caciuffo, Roberto; Esposti, Alessandra Degli; Deleuze, Michael S.; Leigh, David A.; Murphy, Aden; Paci, Barbara; Parker, Stewart F.; Zerbetto, Francesco
1998-12-01
The inelastic neutron scattering (INS) spectrum of the original benzylic amide [2]catenane is recorded and simulated by a semiempirical quantum chemical procedure coupled with the most comprehensive approach available to date, the CLIMAX program. The successful simulation of the spectrum indicates that the modified neglect of differential overlap (MNDO) model can reproduce the intramolecular vibrations of a molecular system as large as a catenane (136 atoms). Because of the computational costs involved and some numerical instabilities, a less expensive approach is attempted which involves the molecular mechanics-based calculation of the INS response in terms of the most basic formulation for the scattering activity. The encouraging results obtained validate the less computationally intensive procedure and allow its extension to the calculation of the INS spectrum for a second, theoretical, co-conformer, which, although structurally and energetically reasonable, is not, in fact, found in the solid state. The second structure was produced by a Monte Carlo simulated annealing method run in the conformational space (a procedure that would have been prohibitively expensive at the semiempirical level) and is characterized by a higher degree of intramolecular hydrogen bonding than the X-ray structure. The two alternative structures yield different simulated spectra, only one of which, the authentic one, is compatible with the experimental data. Comparison of the two simulated and experimental spectra affords the identification of an inelastic neutron scattering spectral signature of the correct hydrogen bonding motif in the region slightly above 700 cm⁻¹. The study illustrates that combinations of simulated INS data and experimental results can be successfully used to discriminate between different proposed structures or possible hydrogen bonding motifs in large functional molecular systems.
Using simulators to teach pediatric airway procedures in an international setting.
Schwartz, Marissa A; Kavanagh, Katherine R; Frampton, Steven J; Bruce, Iain A; Valdez, Tulio A
2018-01-01
There has been a growing shift towards endoscopic management of laryngeal procedures in pediatric otolaryngology. There still appears to be a shortage of pediatric otolaryngology programs and children's hospitals worldwide where physicians can learn and practice these skills. Laryngeal simulation models have the potential to be part of the educational training of physicians who lack exposure to relatively uncommon pediatric otolaryngologic pathology. The objective of this study was to assess the utility of pediatric laryngeal models to teach laryngeal pathology to physicians at an international meeting. Pediatric laryngeal models were assessed by participants at an international pediatric otolaryngology meeting. Participants provided demographic information and previous experience with pediatric airways. Participants then performed simulated surgery on these models and evaluated them using both a previously validated Tissue Likeness Scale and a pre-simulation to post-simulation confidence scale. Participants reported significant subjective improvement in confidence level after use of the simulation models (p < 0.05). Participants reported realistic representations of human anatomy and pathology. The models' tissue mechanics were adequate to practice operative technique including the ability to incise, suture, and suspend models. The pediatric laryngeal models demonstrate high-quality anatomy, which is easily manipulated with surgical instruments. These models allow both trainees and surgeons to practice time-sensitive airway surgeries in a safe and controlled environment. Copyright © 2017 Elsevier B.V. All rights reserved.
Varshney, Rickul; Frenkiel, Saul; Nguyen, Lily H P; Young, Meredith; Del Maestro, Rolando; Zeitouni, Anthony; Tewfik, Marc A
2014-01-01
The technical challenges of endoscopic sinus surgery (ESS) and the high risk of complications support the development of alternative modalities to train residents in these procedures. Virtual reality simulation is becoming a useful tool for training the skills necessary for minimally invasive surgery; however, there are currently no ESS virtual reality simulators available with valid evidence supporting their use in resident education. Our aim was to develop a new rhinology simulator, as well as to define potential performance metrics for trainee assessment. The McGill simulator for endoscopic sinus surgery (MSESS), a new sinus surgery virtual reality simulator with haptic feedback, was developed (a collaboration between the McGill University Department of Otolaryngology-Head and Neck Surgery, the Montreal Neurologic Institute Simulation Lab, and the National Research Council of Canada). A panel of experts in education, performance assessment, rhinology, and skull base surgery convened to identify core technical abilities that would need to be taught by the simulator, as well as performance metrics to be developed and captured. The MSESS allows the user to perform basic sinus surgery skills, such as an ethmoidectomy and sphenoidotomy, through the use of endoscopic tools in a virtual nasal model. The performance metrics were developed by an expert panel and include measurements of safety, quality, and efficiency of the procedure. The MSESS incorporates novel technological advancements to create a realistic platform for trainees. To our knowledge, this is the first simulator to combine novel tools such as the endonasal wash and elaborate anatomic deformity with advanced performance metrics for ESS.
Play to Become a Surgeon: Impact of Nintendo WII Training on Laparoscopic Skills
Giannotti, Domenico; Patrizi, Gregorio; Di Rocco, Giorgio; Vestri, Anna Rita; Semproni, Camilla Proietti; Fiengo, Leslie; Pontone, Stefano; Palazzini, Giorgio; Redler, Adriano
2013-01-01
Background Video-games have become an integral part of the new multimedia culture. Several studies assessed video-gaming enhancement of spatial attention and eye-hand coordination. Considering the technical difficulty of laparoscopic procedures, legal issues and time limitations, the validation of appropriate training even outside of the operating rooms is ongoing. We investigated the influence of a four-week structured Nintendo® Wii™ training on laparoscopic skills by analyzing performance metrics with a validated simulator (Lap Mentor™, Simbionix™). Methodology/Principal Findings We performed a prospective randomized study on 42 post-graduate I–II year residents in General, Vascular and Endoscopic Surgery. All participants were tested on a validated laparoscopic simulator and then randomized to group 1 (Controls, no training with the Nintendo® Wii™), and group 2 (training with the Nintendo® Wii™) with 21 subjects in each group, according to a computer-generated list. After four weeks, all residents underwent a testing session on the laparoscopic simulator of the same tasks as in the first session. All 42 subjects in both groups improved significantly from session 1 to session 2. Compared to controls, the Wii group showed a significant improvement in performance (p<0.05) for 13 of the 16 considered performance metrics. Conclusions/Significance The Nintendo® Wii™ might be helpful, inexpensive and entertaining part of the training of young laparoscopists, in addition to a standard surgical education based on simulators and the operating room. PMID:23460845
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abboud, Alexander William; Guillen, Donna Post
2016-01-01
At the Hanford site, radioactive waste stored in underground tanks is slated for vitrification for final disposal. A comprehensive knowledge of the glass batch melting process will be useful in optimizing the process, which could potentially reduce the cost and duration of this multi-billion dollar cleanup effort. We are developing a high-fidelity heat transfer model of a Joule-heated ceramic lined melter to improve the understanding of the complex, inter-related processes occurring with the melter. The glass conversion rates in the cold cap layer are dependent on promoting efficient heat transfer. In practice, heat transfer is augmented by inserting air bubblers into the molten glass. However, the computational simulations must be validated to provide confidence in the solutions. As part of a larger validation procedure, it is beneficial to split the physics of the melter into smaller systems to validate individually. The substitution of molten glass for a simulant liquid with similar density and viscosity at room temperature provides a way to study mixing through bubbling as an isolated effect without considering the heat transfer dynamics. The simulation results are compared to experimental data obtained by the Vitreous State Laboratory at the Catholic University of America using bubblers placed within a large acrylic tank that is similar in scale to a pilot glass waste melter. Comparisons are made for surface area of the rising air bubbles between experiments and CFD simulations for a variety of air flow rates and bubble injection depths. Also, computed bubble rise velocity is compared to a well-accepted expression for bubble terminal velocity.
Seitz, Julien; Bars, Clément; Théodore, Guillaume; Beurtheret, Sylvain; Lellouche, Nicolas; Bremondy, Michel; Ferracci, Ange; Faure, Jacques; Penaranda, Guillaume; Yamazaki, Masatoshi; Avula, Uma Mahesh R.; Curel, Laurence; Siame, Sabrina; Berenfeld, Omer; Pisapia, André; Kalifa, Jérôme
2017-01-01
Background The use of intra-cardiac electrograms to guide atrial fibrillation (AF) ablation has yielded conflicting results. We evaluated an electrogram marker of AF drivers: the clustering of electrograms exhibiting spatio-temporal dispersion — regardless of whether such electrograms were fractionated or not. Objective To evaluate the usefulness of spatio-temporal dispersion, a visually recognizable electric footprint of AF drivers, for the ablation of all forms of AF. Methods We prospectively enrolled 105 patients admitted for AF ablation. AF was sequentially mapped in both atria with a 20-pole PentaRay catheter. We tagged and ablated only regions displaying electrogram dispersion during AF. Results were compared to a validation set in which a conventional ablation approach was used (pulmonary vein isolation/stepwise approach). To establish the mechanism underlying spatio-temporal dispersion of AF electrograms, we conducted realistic numerical simulations of AF drivers in a 2-dimensional model and optical mapping of ovine atrial scar-related AF. Results Ablation at dispersion areas terminated AF in 95%. After ablation of 17±10% of the left atrial surface and 18 months of follow-up, the atrial arrhythmia recurrence rate was 15% after 1.4±0.5 procedure/patient vs 41% in the validation set after 1.5±0.5 procedure/patient (arrhythmia-free survival rates: 85% vs 59%, log rank P<0.001). In comparison with the validation set, radiofrequency times (49 ± 21 minutes vs 85 ± 34.5 minutes, p=0.001) and procedure times (168 ± 42 minutes vs. 230 ± 67 minutes, p<.0001) were shorter. In simulations and optical mapping experiments, virtual PentaRay recordings demonstrated that electrogram dispersion is mostly recorded in the vicinity of a driver. Conclusions The clustering of intra-cardiac electrograms exhibiting spatio-temporal dispersion is indicative of AF drivers. Their ablation allows for a non-extensive and patient-tailored approach to AF ablation. ClinicalTrials.gov number: NCT02093949 PMID:28104073
Training in surgical oncology - the role of VR simulation.
Lewis, T M; Aggarwal, R; Rajaretnam, N; Grantcharov, T P; Darzi, A
2011-09-01
There have been dramatic changes in surgical training over the past two decades which have resulted in a number of concerns for the development of future surgeons. Changes in the structure of cancer services, working hour restrictions and a commitment to patient safety have led to a reduction in the training opportunities that are available to the surgeon in training. Simulation, and in particular virtual reality (VR) simulation, has been heralded as an effective adjunct to surgical training. Advances in VR simulation have allowed trainees to practice realistic full-length procedures in a safe and controlled environment, where mistakes are permitted and can be used as learning points. There is considerable evidence to demonstrate that VR simulation can be used to enhance technical skills and improve operating room performance. Future work should focus on the cost effectiveness and predictive validity of VR simulation, which in turn would increase the uptake of simulation and enhance surgical training. Copyright © 2011 Elsevier Ltd. All rights reserved.
Evaluation of tocopherol recovery through simulation of molecular distillation process.
Moraes, E B; Batistella, C B; Alvarez, M E Torres; Filho, Rubens Maciel; Maciel, M R Wolf
2004-01-01
The DISMOL simulator was used to determine the best possible operating conditions to guide future experimental work. This simulator requires several physical-chemical properties, which are often very difficult to determine because of the complexity of the components involved. They must be determined through correlations and/or predictions in order to characterize and calculate the system. The first step is to obtain simulation results for a system that can later be validated with experimental data. Implementing the necessary parameters of complex systems in the simulator is a difficult task. In this work, we aimed to determine these properties in order to evaluate tocopherol (vitamin E) recovery using the DISMOL simulator. The raw material used was the crude deodorizer distillate of soya oil. With this procedure, it is possible to determine the best operating conditions for experimental work and to evaluate the process for the separation of new systems by analyzing the profiles obtained from these simulations.
Virtual reality in surgical education.
Ota, D; Loftin, B; Saito, T; Lea, R; Keller, J
1995-03-01
Virtual reality (VR) is an emerging technology that can teach surgeons new procedures and can determine their level of competence before they operate on patients. VR also allows the trainee to return to the same procedure or task several times later as a refresher course. Laparoscopic surgery is a new operative technique which requires the surgeon to observe the operation on a video-monitor and requires the acquisition of new skills. VR simulation could duplicate the operative field and thereby enhance training and reduce the need for expensive animal training models. Our preliminary experience has shown that we have the technology to model tissues and laparoscopic instruments and to develop in real time a VR learning environment for surgeons. Another basic need is to measure competence. Surgical training is an apprenticeship requiring close supervision and 5-7 years of training. Technical competence is judged by the mentor and has always been subjective. If VR surgical simulators are to play an important role in the future, quantitative measurement of competence would have to be part of the system. Because surgical competence is "vague" and is characterized by such terms as "too long, too short" or "too close, too far," it is possible that the principles of fuzzy logic could be used to measure competence in a VR surgical simulator. Because a surgical procedure consists of a series of tasks and each task is a series of steps, we plan to create two important tasks in a VR simulator and validate their use. These tasks consist of laparoscopic knot tying and laparoscopic suturing. Our hypothesis is that VR in combination with fuzzy logic can educate surgeons and determine when they are competent to perform these procedures on patients.
Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation
Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.
2013-01-01
Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
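For context, the nonlinear relation that such approximations target is the implicit single-diode equation. The sketch below solves it with a short Newton iteration, i.e. the kind of iterative computation an explicit approximate model is designed to avoid; the module parameters are generic placeholders rather than values from the paper, and this is not the authors' approximate model.

```python
# Standard implicit single-diode equation
#   I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
# solved by Newton iteration (illustrative parameters for a ~60-cell module).
import numpy as np

def single_diode_current(v, i_ph=8.2, i_0=7e-8, n=1.3, n_s=60,
                         r_s=0.35, r_sh=300.0, t_cell=298.15):
    k, q = 1.380649e-23, 1.602176634e-19
    a = n * n_s * k * t_cell / q                  # modified ideality factor (V)
    i = i_ph                                      # initial guess: photocurrent
    for _ in range(40):                           # Newton iterations on f(i) = 0
        e = np.exp((v + i * r_s) / a)
        f = i_ph - i_0 * (e - 1.0) - (v + i * r_s) / r_sh - i
        df = -i_0 * e * r_s / a - r_s / r_sh - 1.0
        i = i - f / df
    return i

for v in (0.0, 15.0, 30.0, 37.0):
    print(f"V = {v:5.1f} V -> I = {single_diode_current(v):6.3f} A")
```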
Cognitive simulators for medical education and training.
Kahol, Kanav; Vankipuram, Mithra; Smith, Marshall L
2009-08-01
Simulators for honing procedural skills (such as surgical skills and central venous catheter placement) have proven to be valuable tools for medical educators and students. While such simulations represent an effective paradigm in surgical education, there is an opportunity to add a layer of cognitive exercises to these basic simulations that can facilitate robust skill learning in residents. This paper describes a controlled methodology, inspired by neuropsychological assessment tasks and embodied cognition, to develop cognitive simulators for laparoscopic surgery. These simulators provide psychomotor skill training and offer the additional challenge of accomplishing cognitive tasks in realistic environments. A generic framework for the design, development and evaluation of such simulators is described. The presented framework is generalizable and can be applied to different task domains. It is independent of the types of sensors, simulation environment and feedback mechanisms that the simulators use. A proof of concept of the framework is provided through developing a simulator that includes cognitive variations to a basic psychomotor task. The results of two pilot studies are presented that show the validity of the methodology in providing effective evaluation and learning environments for surgeons.
Thermal performance evaluation of the infrared telescope dewar subsystem
NASA Technical Reports Server (NTRS)
Urban, E. W.
1986-01-01
Thermal performance evaluations (TPE) were conducted with the superfluid helium dewar of the Infrared Telescope (IRT) experiment from November 1981 to August 1982. Tests included measuring key operating parameters, simulating operations with an attached instrument cryostat, and validating servicing, operating and safety procedures. Test activities and results are summarized. All objectives were satisfied except for those involving transfer of low-pressure liquid helium (LHe) from a supply dewar into the dewar subsystem.
Advanced Training in Laparoscopic Abdominal Surgery (Atlas): A Systematic Review
Beyer-Berjot, Laura; Palter, Vanessa; Grantcharov, Teodor; Aggarwal, Rajesh
2014-01-01
Background Simulation has spread widely over the last decade, especially in laparoscopic surgery, and training outside the operating room (OR) has a proven positive impact on basic skills during real laparoscopic procedures. However, few articles dealing with advanced training in laparoscopic abdominal surgery (ATLAS) have been published so far. Such training may reduce learning curves in the OR for junior surgeons with limited access to complex laparoscopic procedures as a primary operator. Methods Two reviewers, using MEDLINE, EMBASE, and The Cochrane Library, conducted a systematic search with combinations of the following keywords: (teaching OR education OR computer simulation) AND laparoscopy AND (gastric OR stomach OR colorectal OR colon OR rectum OR small bowel OR liver OR spleen OR pancreas OR advanced surgery OR advanced procedure OR complex procedure). Additional studies were searched in the reference lists of all included articles. Results Fifty-four original studies were retrieved. Their level of evidence was low: most of the studies were case series, one fifth purely descriptive, and there were 8 randomized trials. Porcine models and video trainers, as well as gastric and colorectal procedures, were mainly assessed. The retrieved studies showed some encouraging trends in terms of trainees' satisfaction and improvement after training (though mainly on the training tool itself). Some tools have been proven to be construct-valid. Conclusions Higher-quality studies are required to appraise the educational value of ATLAS. PMID:24947643
The Arthroscopic Surgical Skill Evaluation Tool (ASSET)
Koehler, Ryan J.; Amsdell, Simon; Arendt, Elizabeth A; Bisson, Leslie J; Braman, Jonathan P; Butler, Aaron; Cosgarea, Andrew J; Harner, Christopher D; Garrett, William E; Olson, Tyson; Warme, Winston J.; Nicandri, Gregg T.
2014-01-01
Background Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. Hypothesis The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability, when used to assess the technical ability of surgeons performing diagnostic knee arthroscopy on cadaveric specimens. Study Design Cross-sectional study; Level of evidence, 3 Methods Content validity was determined by a group of seven experts using a Delphi process. Intra-articular performance of a right and left diagnostic knee arthroscopy was recorded for twenty-eight residents and two sports medicine fellowship trained attending surgeons. Subject performance was assessed by two blinded raters using the ASSET. Concurrent criterion-oriented validity, inter-rater reliability, and test-retest reliability were evaluated. Results Content validity: The content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: Significant differences in total ASSET score (p<0.05) between novice, intermediate, and advanced experience groups were identified. Inter-rater reliability: The ASSET scores assigned by each rater were strongly correlated (r=0.91, p <0.01) and the intra-class correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: there was a significant correlation between ASSET scores for both procedures attempted by each individual (r = 0.79, p<0.01). Conclusion The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopy in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live OR and other simulated environments. PMID:23548808
Performance of technology-driven simulators for medical students--a systematic review.
Michael, Michael; Abboudi, Hamid; Ker, Jean; Shamim Khan, Mohammed; Dasgupta, Prokar; Ahmed, Kamran
2014-12-01
Simulation-based education has evolved as a key training tool in high-risk industries such as aviation and the military. In parallel with these industries, the benefits of incorporating specialty-oriented simulation training within medical schools are vast. Adoption of simulators into medical school education programs has shown great promise and has the potential to revolutionize modern undergraduate education. An English literature search was carried out using MEDLINE, EMBASE, and psychINFO databases to identify all randomized controlled studies pertaining to "technology-driven" simulators used in undergraduate medical education. A validity framework incorporating the "framework for technology enhanced learning" report by the Department of Health, United Kingdom, was used to evaluate the capabilities of each technology-driven simulator. Information was collected regarding the simulator type, characteristics, and brand name. Where possible, we extracted information from the studies on the simulators' performance with respect to validity status, reliability, feasibility, education impact, acceptability, and cost effectiveness. We identified 19 studies, analyzing simulators for medical students across a variety of procedure-based specialities including; cardiovascular (n = 2), endoscopy (n = 3), laparoscopic surgery (n = 8), vascular access (n = 2), ophthalmology (n = 1), obstetrics and gynecology (n = 1), anesthesia (n = 1), and pediatrics (n = 1). Incorporation of simulators has so far been on an institutional level; no national or international trends have yet emerged. Simulators are capable of providing a highly educational and realistic experience for the medical students within a variety of speciality-oriented teaching sessions. Further research is needed to establish how best to incorporate simulators into a more primary stage of medical education; preclinical and clinical undergraduate medicine. Copyright © 2014 Elsevier Inc. All rights reserved.
SU-F-T-50: Evaluation of Monte Carlo Simulations Performance for Pediatric Brachytherapy Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatzipapas, C; Kagadis, G; Papadimitroulas, P
Purpose: Pediatric tumors are generally treated with multi-modal procedures. Brachytherapy can be used with pediatric tumors, especially given that in this patient population low toxicity on normal tissues is critical, as is the suppression of the probability for late malignancies. Our goal is to validate the GATE toolkit on realistic brachytherapy applications, and evaluate brachytherapy plans on pediatrics for accurate dosimetry on sensitive and critical organs of interest. Methods: The GATE Monte Carlo (MC) toolkit was used. Two High Dose Rate (HDR) ¹⁹²Ir brachytherapy sources were simulated (Nucletron mHDR-v1 and Varian VS2000), and fully validated using the AAPM and ESTRO protocols. A realistic brachytherapy plan was also simulated using the XCAT anthropomorphic computational model. The simulated data were compared to the clinical dose points. Finally, a 14-year-old girl with vaginal rhabdomyosarcoma was modelled based on clinical procedures for the calculation of the absorbed dose per organ. Results: The MC simulations resulted in accurate dosimetry in terms of the dose rate constant (Λ), radial dose function gL(r) and anisotropy function F(r,θ) for both sources. The simulations were executed using ∼10¹⁰ primaries, resulting in statistical uncertainties lower than 2%. The differences between the theoretical values and the simulated ones ranged from 0.01% up to 3.3%, with the largest discrepancy (6%) being observed in the dose rate constant calculation. The simulated DVH using an adult female XCAT model was also compared to a clinical one, resulting in differences smaller than 5%. Finally, a realistic pediatric brachytherapy simulation was performed to evaluate the absorbed dose per organ and to calculate DVH with respect to heterogeneities of the human anatomy. Conclusion: GATE is a reliable tool for brachytherapy simulations both for source modeling and for dosimetry in anthropomorphic voxelized models. Our project aims to evaluate a variety of pediatric brachytherapy schemes using a population of pediatric phantoms for several pathological cases. This study is part of a project that has received funding from the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 691203. The results published in this study reflect only the authors' view, and the Research Executive Agency (REA) and the European Commission are not responsible for any use that may be made of the information it contains.
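For reference, the quantities validated here (Λ, gL(r), F(r,θ)) are the parameters of the standard AAPM TG-43 dose-calculation formalism, in which the dose rate around a line source of air-kerma strength S_K is reconstructed as shown below. This is a summary of the published formalism, not of this study's specific implementation.

```latex
\dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\qquad r_0 = 1~\text{cm},\;\; \theta_0 = \pi/2,
```

where G_L is the line-source geometry function.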
Validation studies of the DOE-2 Building Energy Simulation Program. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, R.; Winkelmann, F.
1998-06-01
This report documents many of the validation studies (Table 1) of the DOE-2 building energy analysis simulation program that have taken place since 1981. Results for several versions of the program are presented, with the most recent study conducted in 1996 on version DOE-2.1E and the earliest study conducted in 1981 on version DOE-1.3. This work is part of an effort related to continued development of DOE-2, particularly in its use as a simulation engine for new specialized versions of the program such as the recently released RESFEN 3.1. RESFEN 3.1 is a program specifically dealing with analyzing the energy performance of windows in residential buildings. The intent in providing the results of these validation studies is to give potential users of the program a high degree of confidence in the calculated results. Validation studies in which calculated simulation data is compared to measured data have been conducted throughout the development of the DOE-2 program. Discrepancies discovered during the course of such work have resulted in improvements in the simulation algorithms. Table 2 provides a listing of additions and modifications that have been made to various versions of the program since version DOE-2.1A. One of the most significant recent changes in the program occurred with version DOE-2.1E. An improved algorithm for calculating the outside surface film coefficient was implemented. In addition, integration of the WINDOW 4 program was accomplished, resulting in improved ability in analyzing window energy performance. Validation and verification of a program as sophisticated as DOE-2 must necessarily be limited because of the approximations inherent in the program. For example, the most accurate model of the heat transfer processes in a building would include a three-dimensional analysis. To justify such detailed algorithmic procedures would correspondingly require detailed information describing the building and/or HVAC system and energy plant parameters. Until building simulation programs can get this data directly from CAD programs, such detail would negate the usefulness of the program for the practicing engineers and architects who currently use the program. In addition, the validation studies discussed herein indicate that such detail is really unnecessary. The comparison of calculated and measured quantities has resulted in a satisfactory level of confidence that is sufficient for continued use of the DOE-2 program. However, additional validation is warranted, particularly at the component level, to further improve the program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Zhijie; Pan, Wenxiao
2016-01-01
To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems of increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among the different unit problems performed within this hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibration at each unit-problem level. A Bayesian calibration procedure is employed, and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
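The posterior-to-prior hand-off described above can be illustrated with a small grid-based Bayesian update; everything below (the parameter, sample values and noise levels) is a hypothetical stand-in rather than the MFIX calibration itself.

```python
# Illustrative hierarchical calibration step: summarize tier-1 posterior samples of a
# parameter as a normal prior, then update it on a grid with tier-2 "data".
import numpy as np

rng = np.random.default_rng(42)

# Tier-1 "posterior" samples of a kinetic rate parameter (placeholder values).
tier1_posterior_samples = rng.normal(loc=2.0, scale=0.3, size=5000)
mu1, sd1 = tier1_posterior_samples.mean(), tier1_posterior_samples.std()

# Tier-2 update: prior = N(mu1, sd1), Gaussian likelihood for one hypothetical
# observation with known measurement noise, evaluated on a parameter grid.
theta = np.linspace(0.5, 3.5, 1001)
dtheta = theta[1] - theta[0]
prior = np.exp(-0.5 * ((theta - mu1) / sd1) ** 2)
obs, obs_sigma = 2.3, 0.2
likelihood = np.exp(-0.5 * ((obs - theta) / obs_sigma) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum() * dtheta              # normalize on the grid
mean2 = (theta * posterior).sum() * dtheta
sd2 = np.sqrt(((theta - mean2) ** 2 * posterior).sum() * dtheta)

print(f"tier-1 posterior reused as tier-2 prior: N({mu1:.2f}, {sd1:.2f})")
print(f"tier-2 posterior: mean = {mean2:.2f}, sd = {sd2:.2f}")
```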
NASA Technical Reports Server (NTRS)
Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola
2005-01-01
Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.
Standardizing bimanual vaginal examination using cognitive task analysis.
Plumptre, Isabella; Mulki, Omar; Granados, Alejandro; Gayle, Claudine; Ahmed, Shahla; Low-Beer, Naomi; Higham, Jenny; Bello, Fernando
2017-10-01
To create a standardized universal list of procedural steps for bimanual vaginal examination (BVE) for teaching, assessment, and simulator development. This observational study, conducted from June-July 2012 and July-December 2014, collected video data of 10 expert clinicians performing BVE in a nonclinical environment. Video data were analyzed to produce a cognitive task analysis (CTA) of the examination steps performed. The CTA was further refined through structured interviews to make it suitable for teaching or assessment. It was validated through its use as a procedural examination checklist to rate expert clinician performance. BVE was deconstructed into 88 detailed steps outlining the complete examination process. These initial 88 steps were reduced to 35 by focusing on the unseen internal examination, then further refined through interviews with five experts into 30 essential procedural steps, five of which are additional steps if pathology is suspected. Using the CTA as a procedural checklist, the mean number of steps performed and/or verbalized was 21.6 ± 3.12 (72% ± 10.4%; range, 15.9-27.9, 53%-93%). This approach identified 30 essential steps for performing BVE, producing a new technique and standardized tool for teaching, assessment, and simulator development. © 2017 International Federation of Gynecology and Obstetrics.
Koopmeiners, Joseph S.; Feng, Ziding
2015-01-01
Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value (NPV) curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence cannot be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known as in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curves. The small-sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation, and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180
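A small sketch of the relationship at the heart of this problem: at a given threshold, PPV and NPV are assembled from sensitivity, specificity and prevalence by Bayes' rule, so whether the prevalence is assumed or estimated directly affects the resulting curves. The marker values and prevalence figures below are simulated placeholders, not data from the paper.

```python
# PPV/NPV at a marker threshold via Bayes' rule:
#   PPV = sens*p / (sens*p + (1-spec)*(1-p)),  NPV = spec*(1-p) / (spec*(1-p) + (1-sens)*p)
import numpy as np

rng = np.random.default_rng(7)
cases = rng.normal(1.0, 1.0, 200)      # marker in diseased subjects (hypothetical)
controls = rng.normal(0.0, 1.0, 400)   # marker in non-diseased subjects (hypothetical)

def ppv_npv(threshold, prevalence):
    sens = np.mean(cases >= threshold)
    spec = np.mean(controls < threshold)
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

for prev in (0.10, 0.15):              # e.g. an assumed vs an estimated prevalence
    ppv, npv = ppv_npv(threshold=1.0, prevalence=prev)
    print(f"prevalence = {prev:.2f}: PPV = {ppv:.3f}, NPV = {npv:.3f}")
```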
Objective assessment of laparoscopic skills using a virtual reality simulator.
Eriksen, J R; Grantcharov, T
2005-09-01
Virtual reality simulation has great potential as a training and assessment tool for laparoscopic skills. The study was carried out to investigate whether the LapSim system (Surgical Science Ltd., Gothenburg, Sweden) was able to differentiate between subjects with different laparoscopic experience and thus to demonstrate its construct validity. Twenty-four subjects were divided into two groups: experienced (performed > 100 laparoscopic procedures, n = 10) and beginners (performed < 10 laparoscopic procedures, n = 14). Assessment of laparoscopic skills was based on parameters measured by the computer system. Experienced surgeons performed consistently better than the beginners. Significant differences in the parameters time and economy of motion existed between the two groups in seven of seven tasks. Regarding error parameters, differences existed in most but not all tasks. LapSim was able to differentiate between subjects with different laparoscopic experience. This indicates that the system measures skills relevant for laparoscopic surgery and can be used in training programs as a valid assessment tool.
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
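The first-order moment-matching idea above amounts to propagating input variances through sensitivity derivatives: the output mean is approximated by f evaluated at the input means and the output variance by the sum of (∂f/∂x_i)² σ_i². The sketch below applies this to a stand-in scalar function and checks it against Monte Carlo sampling; the function and numbers are illustrative, not the quasi 1-D Euler code.

```python
# First-order moment propagation via sensitivity derivatives, checked by Monte Carlo.
import numpy as np

def f(x):
    """Hypothetical output of interest as a function of two uncertain inputs."""
    return x[0] ** 2 * np.sin(x[1]) + 3.0 * x[1]

mu = np.array([1.5, 0.8])        # input means
sigma = np.array([0.05, 0.02])   # input standard deviations (independent, normal)

# First-order sensitivity derivatives by central finite differences.
grad = np.zeros_like(mu)
h = 1e-6
for i in range(mu.size):
    e = np.zeros_like(mu)
    e[i] = h
    grad[i] = (f(mu + e) - f(mu - e)) / (2.0 * h)

mean_fo = f(mu)                                    # first-order mean estimate
std_fo = np.sqrt(np.sum((grad * sigma) ** 2))      # first-order std estimate

# Monte Carlo check.
rng = np.random.default_rng(3)
samples = rng.normal(mu, sigma, size=(100000, 2))
fs = f(samples.T)
print(f"first-order: mean = {mean_fo:.4f}, std = {std_fo:.4f}")
print(f"Monte Carlo: mean = {fs.mean():.4f}, std = {fs.std():.4f}")
```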
Chowriappa, Ashirwad J; Shi, Yi; Raza, Syed Johar; Ahmed, Kamran; Stegemann, Andrew; Wilding, Gregory; Kaouk, Jihad; Peabody, James O; Menon, Mani; Hassett, James M; Kesavadas, Thenkurussi; Guru, Khurshid A
2013-12-01
A standardized scoring system does not exist in virtual reality-based assessment metrics to describe safe and crucial surgical skills in robot-assisted surgery. This study aims to develop an assessment score along with its construct validation. All subjects performed key tasks on the previously validated Fundamental Skills of Robotic Surgery curriculum, which were recorded, and metrics were stored. After an expert consensus for the purpose of content validation (Delphi), critical, safety-determining procedural steps were identified from the Fundamental Skills of Robotic Surgery curriculum, and a hierarchical task decomposition of multiple parameters using a variety of metrics was used to develop the Robotic Skills Assessment Score (RSA-Score). The RSA-Score mainly focuses on safety in the operative field, critical errors, economy, bimanual dexterity, and time. Subsequently, the RSA-Score was further evaluated for construct validation and feasibility. Spearman correlation tests performed between tasks using the RSA-Scores indicate no cross-correlation. Wilcoxon rank sum tests were performed between the two groups. The proposed RSA-Score was evaluated on non-robotic surgeons (n = 15) and on expert robotic surgeons (n = 12). The expert group demonstrated significantly better performance on all four tasks in comparison to the novice group. Validation of the RSA-Score in this study was carried out on the Robotic Surgical Simulator. The RSA-Score is a valid scoring system that could be incorporated in any virtual reality-based surgical simulator to achieve standardized assessment of fundamental surgical tenets during robot-assisted surgery. Copyright © 2013 Elsevier Inc. All rights reserved.
Pilot Validation Study of the European Association of Urology Robotic Training Curriculum.
Volpe, Alessandro; Ahmed, Kamran; Dasgupta, Prokar; Ficarra, Vincenzo; Novara, Giacomo; van der Poel, Henk; Mottrie, Alexandre
2015-08-01
The development of structured and validated training curricula is one of the current priorities in robot-assisted urological surgery. To establish the feasibility, acceptability, face validity, and educational impact of a structured training curriculum for robot-assisted radical prostatectomy (RARP), and to assess improvements in performance and ability to perform RARP after completion of the curriculum. A 12-wk training curriculum was developed based on an expert panel discussion and used to train ten fellows from major European teaching institutions. The curriculum included: (1) e-learning, (2) 1 wk of structured simulation-based training (virtual reality synthetic, animal, and cadaveric platforms), and (3) supervised modular training for RARP. The feasibility, acceptability, face validity, and educational impact were assessed using quantitative surveys. Improvement in the technical skills of participants over the training period was evaluated using the inbuilt validated assessment metrics on the da Vinci surgical simulator (dVSS). A final RARP performed by fellows on completion of their training was assessed using the Global Evaluative Assessment of Robotic Skills (GEARS) score and generic and procedure-specific scoring criteria. The median baseline experience of participants as console surgeon was 4 mo (interquartile range [IQR] 0-6.5 mo). All participants completed the curriculum and were involved in a median of 18 RARPs (IQR 14-36) during modular training. The overall score for dVSS tasks significantly increased over the training period (p<0.001-0.005). At the end of the curriculum, eight fellows (80%) were deemed able by their mentors to perform a RARP independently, safely, and effectively. At assessment of the final RARP, the participants achieved an average score ≥4 (scale 1-5) for all domains using the GEARS scale and an average score >10 (scale 4-16) for all procedural steps using a generic dedicated scoring tool. In performance comparison using this scoring tool, the experts significantly outperformed the fellows (mean score for all steps 13.6 vs 11). The European robot-assisted urologic training curriculum is acceptable, valid, and effective for training in RARP. This study shows that a 12-wk structured training program including simulation-based training and mentored training in the operating room allows surgeons with limited robotic experience to increase their robotic skills and their ability to perform the surgical steps of robot-assisted radical prostatectomy. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Progress in virtual reality simulators for surgical training and certification.
de Visser, Hans; Watson, Marcus O; Salvado, Olivier; Passenger, Joshua D
2011-02-21
There is increasing evidence that educating trainee surgeons by simulation is preferable to traditional operating-room training methods with actual patients. Apart from reducing costs and risks to patients, training by simulation can provide some unique benefits, such as greater control over the training procedure and more easily defined metrics for assessing proficiency. Virtual reality (VR) simulators are now playing an increasing role in surgical training. However, currently available VR simulators lack the fidelity to teach trainees past the novice-to-intermediate skills level. Recent technological developments in other industries using simulation, such as the games and entertainment and aviation industries, suggest that the next generation of VR simulators should be suitable for training, maintenance and certification of advanced surgical skills. To be effective as an advanced surgical training and assessment tool, VR simulation needs to provide adequate and relevant levels of physical realism, case complexity and performance assessment. Proper validation of VR simulators and an increased appreciation of their value by the medical profession are crucial for them to be accepted into surgical training curricula.
3D Printed Surgical Instruments Evaluated by a Simulated Crew of a Mars Mission.
Wong, Julielynn Y; Pfahnl, Andreas C
2016-09-01
The first space-based fused deposition modeling (FDM) 3D printer became operational in 2014. This study evaluated whether Mars simulation crewmembers of the Hawai'i Space Exploration Analog and Simulation (HI-SEAS) II mission with no prior surgical experience could utilize acrylonitrile butadiene styrene (ABS) thermoplastic surgical instruments FDM 3D printed on Earth to complete simulated surgical tasks. This study sought to examine the feasibility of using 3D printed surgical tools when the primary crew medical officer is incapacitated and the back-up crew medical officer must conduct a surgical procedure during a simulated extended space mission. During a 4 mo duration ground-based analog mission, five simulation crewmembers with no prior surgical experience completed 16 timed sets of simulated prepping, draping, incising, and suturing tasks to evaluate the relative speed of using four ABS thermoplastic instruments printed on Earth compared to conventional instruments. All four simulated surgical tasks were successfully performed using 3D printed instruments by Mars simulation crewmembers with no prior surgical experience. There was no substantial difference in time to completion of simulated tasks with control vs. 3D printed sponge stick, towel clamp, scalpel handle, and toothed forceps. These limited findings support further investigation into the creation of an onboard digital catalog of validated 3D printable surgical instrument design files to support autonomous, crew-administered healthcare on Mars missions. Future work could include addressing sterility, biocompatibility, and having astronaut crew medical officers test a wider range of surgical instruments printed in microgravity during actual surgical procedures. Wong JY, Pfahnl AC. 3D printed surgical instruments evaluated by a simulated crew of a Mars mission. Aerosp Med Hum Perform. 2016; 87(9):806-810.
The free jet as a simulator of forward velocity effects on jet noise
NASA Technical Reports Server (NTRS)
Ahuja, K. K.; Tester, B. J.; Tanna, H. K.
1978-01-01
A thorough theoretical and experimental study of the effects of the free-jet shear layer on the transmission of sound from a model jet placed within the free jet to the far-field receiver located outside the free-jet flow was conducted. The validity and accuracy of the free-jet flight simulation technique for forward velocity effects on jet noise was evaluated. Transformation charts and a systematic computational procedure for converting measurements from a free-jet simulation to the corresponding results from a wind-tunnel simulation, and, finally, to the flight case were provided. The effects of simulated forward flight on jet mixing noise, internal noise and shock-associated noise from model-scale unheated and heated jets were established experimentally in a free-jet facility. It was illustrated that the existing anomalies between full-scale flight data and model-scale flight simulation data projected to the flight case, could well be due to the contamination of flight data by engine internal noise.
Van Herzeele, Isabelle; O'Donoghue, Kevin G L; Aggarwal, Rajesh; Vermassen, Frank; Darzi, Ara; Cheshire, Nicholas J W
2010-04-01
This study evaluated virtual reality (VR) simulation for endovascular training of medical students to determine whether innate perceptual, visuospatial, and psychomotor aptitude (VSA) can predict the initial and plateau phases of technical endovascular skills acquisition. Twenty medical students received didactic and endovascular training on a commercially available VR simulator. Each student treated a series of 10 identical noncomplex renal artery stenoses endovascularly. The simulator recorded performance data instantly and objectively. An experienced interventionalist rated the performance at the initial and final sessions using generic (out of 40) and procedure-specific (out of 30) rating scales. VSA were tested with fine motor dexterity (FMD, Purdue Pegboard), psychomotor ability (minimally invasive virtual reality surgical trainer [MIST-VR]), image recall (Rey-Osterrieth), and organizational aptitude (map planning). VSA performance scores were correlated with the assessment parameters of endovascular skills at commencement and completion of training. Medical students exhibited statistically significant learning curves from the initial to the plateau performance for contrast usage (medians, 28 vs 17 mL, P < .001), total procedure time (2120 vs 867 seconds, P < .001), and fluoroscopy time (993 vs 507 seconds, P < .001). Scores on generic and procedure-specific rating scales improved significantly (10 vs 25, P < .001; 8 vs 17, P < .001). Significant correlations were noted for FMD with fluoroscopy time at the initial and plateau sessions (r(s) = -0.564, P = .010; r(s) = -0.449, P = .047). FMD correlated with procedure-specific scores at the initial session (r(s) = 0.607, P = .006). Image recall correlated with generic skills at the end of training (r(s) = 0.587, P = .006). Simulator-based training in endovascular skills improved performance in medical students. There were significant correlations between initial endovascular skill and fine motor dexterity, as well as with image recall at the end of the training period. In addition to current recruitment strategies, VSA may be a useful tool for predictive validity studies.
A systematic review of validated sinus surgery simulators.
Stew, B; Kao, S S-T; Dharmawardana, N; Ooi, E H
2018-06-01
Simulation provides a safe and effective opportunity to develop surgical skills. A variety of endoscopic sinus surgery (ESS) simulators has been described in the literature. Validation of these simulators allows for their effective utilisation in training. To conduct a systematic review of the published literature analysing the evidence for validated ESS simulation, PubMed, Embase, Cochrane and CINAHL were searched from inception of the databases to 11 January 2017. Twelve thousand five hundred and sixteen articles were retrieved, of which 10 112 were screened following the removal of duplicates. Thirty-eight full-text articles were reviewed after meeting the search criteria. Evidence of face, content, construct, discriminant and predictive validity was extracted. Twenty articles were included in the analysis, describing 12 ESS simulators. Eleven of these simulators had undergone validation: 3 virtual reality, 7 physical bench models and 1 cadaveric simulator. Seven of the simulators were shown to have face validity, 7 had construct validity and 1 had predictive validity. None of the simulators demonstrated discriminant validity. This systematic review demonstrates that a number of ESS simulators have been comprehensively validated. Many of the validation processes, however, lack standardisation in outcome reporting, thus limiting meta-analysis comparison between simulators. © 2017 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Noll, Thomas E.; Perry, Boyd, III; Tiffany, Sherwood H.; Cole, Stanley R.; Buttrill, Carey S.; Adams, William M., Jr.; Houck, Jacob A.; Srinathkumar, S.; Mukhopadhyay, Vivek; Pototzky, Anthony S.
1989-01-01
The status of the joint NASA/Rockwell Active Flexible Wing Wind-Tunnel Test Program is described. The objectives are to develop and validate the analysis, design, and test methodologies required to apply multifunction active control technology for improving aircraft performance and stability. Major tasks include designing digital multi-input/multi-output flutter-suppression and rolling-maneuver-load alleviation concepts for a flexible full-span wind-tunnel model, obtaining an experimental data base for the basic model and each control concept and providing comparisons between experimental and analytical results to validate the methodologies. The opportunity is provided to improve real-time simulation techniques and to gain practical experience with digital control law implementation procedures.
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective to further validate the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
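As a rough, hedged illustration of the type of least squares procedure described above (the model function, parameters, and data below are hypothetical placeholders, not the mechanistic estrous-cycle model or the measured cow data), a per-animal fit could be set up with scipy as follows:

```python
# Hedged sketch: fit model parameters to one animal's measured time series by
# nonlinear least squares. The periodic "hormone" model is a toy stand-in.
import numpy as np
from scipy.optimize import least_squares

def model(t, params):
    baseline, amplitude, period, phase = params
    return baseline + amplitude * np.sin(2.0 * np.pi * t / period + phase)

def residuals(params, t, measured):
    return model(t, params) - measured

t_obs = np.linspace(0.0, 21.0, 43)                 # days within one cycle
rng = np.random.default_rng(1)
truth = np.array([2.0, 1.5, 21.0, 0.3])            # synthetic "true" values
measured = model(t_obs, truth) + rng.normal(0.0, 0.1, t_obs.size)

fit = least_squares(residuals, x0=[1.0, 1.0, 20.0, 0.0], args=(t_obs, measured))
print("estimated parameters:", fit.x)
```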
Flight Test Evaluation of the Airborne Information for Lateral Spacing (AILS) Concept
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2002-01-01
The Airborne Information for Lateral Spacing (AILS) concept is designed to support independent parallel approach operations to runways spaced as close as 2,500 feet. This report briefly describes the AILS operational concept and the results of a flight test of one implementation of this concept. The focus of this flight test experiment was to validate a prior simulator study, evaluating pilot performance, pilot acceptability, and minimum miss-distances for the rare situation in which an aircraft on one approach intrudes into the path of an aircraft on the other approach. Although the flight data set was not meant to be a statistically valid sample, the trends acquired in flight followed those of the simulator and therefore met the intent of validating the findings from the simulator. Results from this study showed that the design-goal mean miss-distance of 1,200 feet for potential collision situations was surpassed, with an actual mean miss-distance of 1,859 feet. Pilot reaction times to the alerting system, which were an operational concern, averaged 0.65 seconds, well below the design-goal reaction time of 2.0 seconds. From the results of both of these tests, it can be concluded that this operational concept, with supporting technology and procedures, may provide an operationally viable means for conducting simultaneous, independent instrument approaches to runways spaced as close as 2,500 feet.
Calibration of a rotating accelerometer gravity gradiometer using centrifugal gradients
NASA Astrophysics Data System (ADS)
Yu, Mingbiao; Cai, Tijing
2018-05-01
The purpose of this study is to calibrate scale factors and equivalent zero biases of a rotating accelerometer gravity gradiometer (RAGG). We calibrate scale factors by determining the relationship between the centrifugal gradient excitation and RAGG response. Compared with calibration by changing the gravitational gradient excitation, this method does not need test masses and is easier to implement. The equivalent zero biases are superpositions of self-gradients and the intrinsic zero biases of the RAGG. A self-gradient is the gravitational gradient produced by surrounding masses, and it correlates well with the RAGG attitude angle. We propose a self-gradient model that includes self-gradients and the intrinsic zero biases of the RAGG. The self-gradient model is a function of the RAGG attitude, and it includes parameters related to surrounding masses. The calibration of equivalent zero biases determines the parameters of the self-gradient model. We provide detailed procedures and mathematical formulations for calibrating scale factors and parameters in the self-gradient model. A RAGG physical simulation system substitutes for the actual RAGG in the calibration and validation experiments. Four point masses simulate four types of surrounding masses producing self-gradients. Validation experiments show that the self-gradients predicted by the self-gradient model are consistent with those from the outputs of the RAGG physical simulation system, suggesting that the presented calibration method is valid.
NASA Technical Reports Server (NTRS)
Plankey, B.
1981-01-01
A computer program called ECPVER (Energy Consumption Program - Verification) was developed to simulate all energy loads for any number of buildings. The program computes simulated daily, monthly, and yearly energy consumption which can be compared with actual meter readings for the same time period. Such comparison can lead to validation of the model under a variety of conditions, which allows it to be used to predict future energy saving due to energy conservation measures. Predicted energy saving can then be compared with actual saving to verify the effectiveness of those energy conservation changes. This verification procedure is planned to be an important advancement in the Deep Space Network Energy Project, which seeks to reduce energy cost and consumption at all DSN Deep Space Stations.
Hosten, Bernard; Moreau, Ludovic; Castaings, Michel
2007-06-01
The paper presents a Fourier transform-based signal processing procedure for quantifying the reflection and transmission coefficients and mode conversion of guided waves diffracted by defects in plates made of viscoelastic materials. The case of the S(0) Lamb wave mode incident on a notch in a Perspex plate is considered. The procedure is applied to numerical data produced by a finite element code that simulates the propagation of attenuated guided modes and their diffraction by the notch, including mode conversion. Its validity and precision are checked by way of the energy balance computation and by comparison with results obtained using an orthogonality relation-based processing method.
NASA Astrophysics Data System (ADS)
Golobokov, M.; Danilevich, S.
2018-04-01
To assess calibration reliability and automate that assessment, procedures for data collection and a simulation study of a thermal imager calibration procedure have been developed. Existing calibration techniques do not always provide high reliability. A new method for analyzing existing calibration techniques and developing new, efficient ones has been proposed and tested. A class of software has also been studied that generates instrument calibration reports automatically, monitors their proper configuration, processes measurement results, and assesses instrument validity. Such software reduces the man-hours spent finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Theoretical modeling of a portable x-ray tube based KXRF system to measure lead in bone
Specht, Aaron J; Weisskopf, Marc G; Nie, Linda Huiling
2017-01-01
Objective: K-shell x-ray fluorescence (KXRF) techniques have been used to identify health effects resulting from exposure to metals for decades, but the equipment is bulky and requires significant maintenance and licensing procedures. A portable x-ray fluorescence (XRF) device was developed to overcome these disadvantages, but introduced a measurement dependency on soft tissue thickness. With recent advances to detector technology, an XRF device utilizing the advantages of both systems should be feasible. Approach: In this study, we used Monte Carlo simulations to test the feasibility of an XRF device with a high-energy x-ray tube and detector operable at room temperature. Main Results: We first validated the use of Monte Carlo N-particle transport code (MCNP) for x-ray tube simulations, and found good agreement between experimental and simulated results. Then, we optimized x-ray tube settings and found the detection limit of the high-energy x-ray tube based XRF device for bone lead measurements to be 6.91 μg g-1 bone mineral using a cadmium zinc telluride detector. Significance: In conclusion, this study validated the use of MCNP in simulations of x-ray tube physics and XRF applications, and demonstrated the feasibility of a high-energy x-ray tube based XRF for metal exposure assessment. PMID:28169835
Theoretical modeling of a portable x-ray tube based KXRF system to measure lead in bone.
Specht, Aaron J; Weisskopf, Marc G; Nie, Linda Huiling
2017-03-01
K-shell x-ray fluorescence (KXRF) techniques have been used to identify health effects resulting from exposure to metals for decades, but the equipment is bulky and requires significant maintenance and licensing procedures. A portable x-ray fluorescence (XRF) device was developed to overcome these disadvantages, but introduced a measurement dependency on soft tissue thickness. With recent advances to detector technology, an XRF device utilizing the advantages of both systems should be feasible. In this study, we used Monte Carlo simulations to test the feasibility of an XRF device with a high-energy x-ray tube and detector operable at room temperature. We first validated the use of Monte Carlo N-particle transport code (MCNP) for x-ray tube simulations, and found good agreement between experimental and simulated results. Then, we optimized x-ray tube settings and found the detection limit of the high-energy x-ray tube based XRF device for bone lead measurements to be 6.91 µg g-1 bone mineral using a cadmium zinc telluride detector. In conclusion, this study validated the use of MCNP in simulations of x-ray tube physics and XRF applications, and demonstrated the feasibility of a high-energy x-ray tube based XRF for metal exposure assessment.
Chandramouli, Balasubramanian; Mancini, Giordano
2016-01-01
Classical Molecular Dynamics (MD) simulations can provide insights at the nanoscopic scale into protein dynamics. Currently, simulations of large proteins and complexes can be routinely carried out in the ns-μs time regime. Clustering of MD trajectories is often performed to identify selective conformations and to compare simulation and experimental data coming from different sources on closely related systems. However, clustering techniques are usually applied without a careful validation of the results, and benchmark studies involving the application of different algorithms to MD data often deal with relatively small peptides instead of average-sized or large proteins; finally, clustering is often applied both as a means to analyze refined data and as a way to simplify further analysis of trajectories. Herein, we propose a strategy to classify MD data while carefully benchmarking the performance of clustering algorithms and internal validation criteria for such methods. We demonstrate the method on two showcase systems with different features, and compare the classification of trajectories in real and PCA space. We posit that the prototype procedure adopted here could be highly fruitful in clustering large trajectories of multiple systems or those resulting from enhanced sampling techniques such as replica exchange simulations. Copyright: © 2016 by Fabrizio Serra editore, Pisa · Roma.
NASA Astrophysics Data System (ADS)
Cunha, J. S.; Cavalcante, F. R.; Souza, S. O.; Souza, D. N.; Santos, W. S.; Carvalho Júnior, A. B.
2017-11-01
One of the main criteria that must be met in Total Body Irradiation (TBI) is uniformity of dose in the body. In TBI procedures, verification that the prescribed doses are absorbed in the organs is performed with dosimeters positioned on the patient's skin. In this work, we modelled TBI scenarios in the MCNPX code to estimate the entrance dose rate in the skin for comparison and validation of the simulations against experimental measurements from the literature. Dose rates were estimated by simulating an ionization chamber laterally positioned on the thorax, abdomen, leg and thigh. Four exposure scenarios were simulated: ionization chamber (S1), TBI room (S2), and the patient represented by a hybrid phantom (S3) and a water stylized phantom (S4) in a sitting posture. The posture of the patient in the experimental work was better represented by S4 than by the hybrid phantom, which led to minimum and maximum percentage differences of 1.31% and 6.25% relative to experimental measurements for the thorax and thigh regions, respectively. Because the percentage differences in the estimated dose rates were less than 10% for all simulations reported here, we consider the obtained results consistent with experimental measurements and the modelled scenarios suitable for estimating the absorbed dose in organs during a TBI procedure.
Force and torque modelling of drilling simulation for orthopaedic surgery.
MacAvelia, Troy; Ghasempoor, Ahmad; Janabi-Sharifi, Farrokh
2014-01-01
The advent of haptic simulation systems for orthopaedic surgery procedures has provided surgeons with an excellent tool for training and preoperative planning purposes. This is especially true for procedures involving the drilling of bone, which require a great amount of adroitness and experience due to difficulties arising from vibration and drill bit breakage. One of the potential difficulties with the drilling of bone is the lack of consistent material evacuation from the drill's flutes as the material tends to clog. This clogging leads to significant increases in force and torque experienced by the surgeon. Clogging was observed for feed rates greater than 0.5 mm/s and spindle speeds less than 2500 rpm. The drilling simulation systems that have been created to date do not address the issue of drill flute clogging. This paper presents force and torque prediction models that account for this phenomenon. The two coefficients of friction required by these models were determined via a set of calibration experiments. The accuracy of both models was evaluated by an additional set of validation experiments resulting in average R² regression correlation values of 0.9546 and 0.9209 for the force and torque prediction models, respectively. The resulting models can be adopted by haptic simulation systems to provide a more realistic tactile output.
High correlation between performance on a virtual-reality simulator and real-life cataract surgery.
Thomsen, Ann Sofia Skou; Smith, Phillip; Subhi, Yousif; Cour, Morten la; Tang, Lilian; Saleh, George M; Konge, Lars
2017-05-01
To investigate the correlation in performance of cataract surgery between a virtual-reality simulator and real-life surgery using two objective assessment tools with evidence of validity. Cataract surgeons with varying levels of experience were included in the study. All participants performed and videorecorded three standard cataract surgeries before completing a proficiency-based test on the EyeSi virtual-reality simulator. Standard cataract surgeries were defined as: (1) surgery performed under local anaesthesia, (2) patient age >60 years, and (3) visual acuity >1/60 preoperatively. A motion-tracking score was calculated by multiplying average path length and average number of movements from the three real-life surgical videos of full procedures. The EyeSi test consisted of five abstract and two procedural modules: intracapsular navigation, antitremor training, intracapsular antitremor training, forceps training, bimanual training, capsulorhexis and phaco divide and conquer. Eleven surgeons were enrolled. After a designated warm-up period, the proficiency-based test on the EyeSi simulator was strongly correlated to real-life performance measured by motion-tracking software of cataract surgical videos with a Pearson correlation coefficient of -0.70 (p = 0.017). Performance on the EyeSi simulator is significantly and highly correlated to real-life surgical performance. However, it is recommended that performance assessments are made using multiple data sources. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
A dynamic regularized gradient model of the subgrid-scale stress tensor for large-eddy simulation
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2016-02-01
Large-eddy simulation (LES) solves only the large scales part of turbulent flows by using a scales separation based on a filtering operation. The solution of the filtered Navier-Stokes equations requires then to model the subgrid-scale (SGS) stress tensor to take into account the effect of scales smaller than the filter size. In this work, a new model is proposed for the SGS stress model. The model formulation is based on a regularization procedure of the gradient model to correct its unstable behavior. The model is developed based on a priori tests to improve the accuracy of the modeling for both structural and functional performances, i.e., the model ability to locally approximate the SGS unknown term and to reproduce enough global SGS dissipation, respectively. LES is then performed for a posteriori validation. This work is an extension to the SGS stress tensor of the regularization procedure proposed by Balarac et al. ["A dynamic regularized gradient model of the subgrid-scale scalar flux for large eddy simulations," Phys. Fluids 25(7), 075107 (2013)] to model the SGS scalar flux. A set of dynamic regularized gradient (DRG) models is thus made available for both the momentum and the scalar equations. The second objective of this work is to compare this new set of DRG models with direct numerical simulations (DNS), filtered DNS in the case of classic flows simulated with a pseudo-spectral solver and with the standard set of models based on the dynamic Smagorinsky model. Various flow configurations are considered: decaying homogeneous isotropic turbulence, turbulent plane jet, and turbulent channel flows. These tests demonstrate the stable behavior provided by the regularization procedure, along with substantial improvement for velocity and scalar statistics predictions.
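For context, the (textbook) gradient model that the regularization procedure corrects expresses the SGS stress tensor in terms of the resolved velocity gradients; this standard form, not a formula quoted from the paper, reads

```latex
\tau_{ij} \;\approx\; \frac{\bar{\Delta}^{2}}{12}\,
\frac{\partial \bar{u}_{i}}{\partial x_{k}}\,
\frac{\partial \bar{u}_{j}}{\partial x_{k}},
```

where the overbar denotes the filtering operation and \bar{\Delta} the filter width; the known instability of this closure is what the dynamic regularization is designed to remedy.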
Sadideen, Hazim; Wilson, David; Moiemen, Naiem; Kneebone, Roger
2014-01-01
Educational theory highlights the importance of contextualized simulation for effective learning. We explored this concept in a burns scenario in a novel, low-cost, high-fidelity, portable, immersive simulation environment (referred to as distributed simulation). This contextualized simulation/distributed simulation combination was named "The Burns Suite" (TBS). A pediatric burn resuscitation scenario was selected after high trainee demand. It was designed on Advanced Trauma and Life Support and Emergency Management of Severe Burns principles and refined using expert opinion through cognitive task analysis. TBS contained "realism" props, briefed nurses, and a simulated patient. Novices and experts were recruited. Five-point Likert-type questionnaires were developed for face and content validity. Cronbach's α was calculated for scale reliability. Semistructured interviews captured responses for qualitative thematic analysis, allowing for data triangulation. Twelve participants completed the TBS scenario. Mean face and content validity ratings were high (4.6 and 4.5, respectively; range, 4-5). The internal consistency of questions was high. Qualitative data analysis revealed that participants felt 1) the experience was "real" and they were "able to behave as if in a real resuscitation environment," and 2) TBS "addressed what Advanced Trauma and Life Support and Emergency Management of Severe Burns didn't" (including the efficacy of incorporating nontechnical skills). TBS provides a novel, effective simulation tool to significantly advance the delivery of burns education. Recreating clinical challenge is crucial to optimize simulation training. This low-cost approach also has major implications for surgical education, particularly during increasing financial austerity. Alternative scenarios and/or procedures can be recreated within TBS, providing a diverse educational immersive simulation experience.
Conceptual modeling for Prospective Health Technology Assessment.
Gantner-Bär, Marion; Djanatliev, Anatoli; Prokosch, Hans-Ulrich; Sedlmayr, Martin
2012-01-01
Prospective Health Technology Assessment (ProHTA) is a new and innovative approach to analyze and assess new technologies, methods and procedures in health care. Simulation processes are used to model innovations before the cost-intensive design and development phase. Thus effects on patient care, the health care system as well as health economics aspects can be estimated. To generate simulation models a valid information base is necessary and therefore conceptual modeling is most suitable. Project-specifically improved methods and characteristics of simulation modeling are combined in the ProHTA Conceptual Modeling Process and initially implemented for acute ischemic stroke treatment in Germany. Additionally the project aims at simulation of other diseases and health care systems as well. ProHTA is an interdisciplinary research project within the Cluster of Excellence for Medical Technology - Medical Valley European Metropolitan Region Nuremberg (EMN), which is funded by the German Federal Ministry of Education and Research (BMBF), project grant No. 01EX1013B.
Importance of inlet boundary conditions for numerical simulation of combustor flows
NASA Technical Reports Server (NTRS)
Sturgess, G. J.; Syed, S. A.; Mcmanus, K. R.
1983-01-01
Fluid dynamic computer codes for the mathematical simulation of problems in gas turbine engine combustion systems are required as design and diagnostic tools. To eventually achieve more than qualitative accuracy with these codes, it is desirable to use benchmark experiments for validation studies. Typical of the fluid dynamic computer codes being developed for combustor simulations is the TEACH (Teaching Elliptic Axisymmetric Characteristics Heuristically) solution procedure. It is difficult to find suitable experiments that satisfy the present definition of benchmark quality. For the majority of the available experiments there is a lack of information concerning the boundary conditions. A standard TEACH-type numerical technique is applied to a number of test-case experiments. It is found that numerical simulations of gas turbine combustor-relevant flows can be sensitive to the plane at which the calculations start and to the spatial distributions of inlet quantities for swirling flows.
NASA Technical Reports Server (NTRS)
Hoh, R. H.; Klein, R. H.; Johnson, W. A.
1977-01-01
A system analysis method for the development of an integrated configuration management/flight director system for IFR STOL approaches is presented. Curved descending decelerating approach trajectories are considered. Considerable emphasis is placed on satisfying the pilot centered requirements (acceptable workload) as well as the usual guidance and control requirements (acceptable performance). The Augmentor Wing Jet STOL Research Aircraft was utilized to allow illustration by example, and to validate the analysis procedure via manned simulation.
Piette, Elizabeth R; Moore, Jason H
2018-01-01
Machine learning methods and conventions are increasingly employed for the analysis of large, complex biomedical data sets, including genome-wide association studies (GWAS). Reproducibility of machine learning analyses of GWAS can be hampered by biological and statistical factors, particularly so for the investigation of non-additive genetic interactions. Application of traditional cross validation to a GWAS data set may result in poor consistency between the training and testing data set splits due to an imbalance of the interaction genotypes relative to the data as a whole. We propose a new cross validation method, proportional instance cross validation (PICV), that preserves the original distribution of an independent variable when splitting the data set into training and testing partitions. We apply PICV to simulated GWAS data with epistatic interactions of varying minor allele frequencies and prevalences and compare performance to that of a traditional cross validation procedure in which individuals are randomly allocated to training and testing partitions. Sensitivity and positive predictive value are significantly improved across all tested scenarios for PICV compared to traditional cross validation. We also apply PICV to GWAS data from a study of primary open-angle glaucoma to investigate a previously-reported interaction, which fails to significantly replicate; PICV however improves the consistency of testing and training results. Application of traditional machine learning procedures to biomedical data may require modifications to better suit intrinsic characteristics of the data, such as the potential for highly imbalanced genotype distributions in the case of epistasis detection. The reproducibility of genetic interaction findings can be improved by considering this variable imbalance in cross validation implementation, such as with PICV. This approach may be extended to problems in other domains in which imbalanced variable distributions are a concern.
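PICV itself is specified in the paper; as a hedged approximation of the underlying idea, the sketch below builds cross-validation folds that preserve the original frequency of an imbalanced genotype variable by stratifying the split on that variable (standard stratified folds, not the authors' implementation):

```python
# Hedged sketch: folds that keep the genotype distribution of each training
# and testing partition close to the full-data distribution.
import numpy as np
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
n = 1000
genotype = rng.choice([0, 1, 2], size=n, p=[0.81, 0.18, 0.01])  # rare class 2

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(np.zeros(n), genotype)):
    train_frac = np.bincount(genotype[train_idx], minlength=3) / train_idx.size
    test_frac = np.bincount(genotype[test_idx], minlength=3) / test_idx.size
    print(f"fold {fold}: train {np.round(train_frac, 3)}, test {np.round(test_frac, 3)}")
```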
NASA Astrophysics Data System (ADS)
Lock, S. S. M.; Lau, K. K.; Lock Sow Mei, Irene; Shariff, A. M.; Yeong, Y. F.; Bustam, A. M.
2017-08-01
A sequence of molecular modelling procedures has been proposed to simulate an experimentally validated membrane structure characterizing the effect of CO2 plasticization, which can subsequently be employed to elucidate the depression in glass transition temperature (Tg). Based on this motivation, unswollen and swollen Polysulfone membrane structures with different CO2 loadings have been constructed, and their accuracy has been validated through good agreement with experimentally measured physical properties. It is found that the presence of CO2 contributes to enhanced polymer chain relaxation, which consequently promotes the enlargement of molecular spacing and causes dilation of the membrane matrix. A series of glass transition temperature treatments has been conducted on the verified molecular structures to elucidate the effect of CO2 loading on the depression in Tg induced by plasticization. Subsequently, a modified Michaelis-Menten (M-M) function has been implemented to quantify the effect of CO2 loading, attributed to plasticization, on Tg.
Abramyan, Tigran M; Snyder, James A; Thyparambil, Aby A; Stuart, Steven J; Latour, Robert A
2016-08-05
Clustering methods have been widely used to group together similar conformational states from molecular simulations of biomolecules in solution. For applications such as the interaction of a protein with a surface, the orientation of the protein relative to the surface is also an important clustering parameter because of its potential effect on adsorbed-state bioactivity. This study presents cluster analysis methods that are specifically designed for systems where both molecular orientation and conformation are important, and the methods are demonstrated using test cases of adsorbed proteins for validation. Additionally, because cluster analysis can be a very subjective process, an objective procedure for identifying both the optimal number of clusters and the best clustering algorithm to be applied to analyze a given dataset is presented. The method is demonstrated for several agglomerative hierarchical clustering algorithms used in conjunction with three cluster validation techniques. © 2016 Wiley Periodicals, Inc.
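As a simplified, hedged illustration of choosing the number of clusters with an internal validation index (the descriptor matrix and the silhouette criterion below are illustrative choices, not the specific algorithms or criteria benchmarked in the study), one pass of such a selection might look like:

```python
# Hedged sketch: score agglomerative clusterings of a per-frame descriptor
# matrix (e.g., combined orientation and conformation features) over a range
# of cluster counts and keep the best-scoring partition.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(2)
# three synthetic "adsorbed states" in a toy 4-dimensional descriptor space
features = np.vstack([rng.normal(c, 0.3, size=(200, 4)) for c in (0.0, 2.0, 4.0)])

best_k, best_score = None, -1.0
for k in range(2, 9):
    labels = AgglomerativeClustering(n_clusters=k, linkage="average").fit_predict(features)
    score = silhouette_score(features, labels)
    if score > best_score:
        best_k, best_score = k, score
print(f"selected k = {best_k} (silhouette = {best_score:.2f})")
```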
On computations of the integrated space shuttle flowfield using overset grids
NASA Technical Reports Server (NTRS)
Chiu, I-T.; Pletcher, R. H.; Steger, J. L.
1990-01-01
Numerical simulations using the thin-layer Navier-Stokes equations and the chimera (overset) grid approach were carried out for flows around the integrated space shuttle vehicle over a range of Mach numbers. Body-conforming grids were used for all the component grids. Test cases include a three-component overset grid configuration - the external tank (ET), the solid rocket booster (SRB), and the orbiter (ORB) - and a five-component overset grid configuration - the ET, SRB, ORB, and the forward and aft attach hardware. The results were compared with wind tunnel and flight data. In addition, a Poisson solution procedure (a special case of the vorticity-velocity formulation) using primitive variables was developed to solve three-dimensional, irrotational, inviscid flows for single as well as overset grids. The solutions were validated by comparison with other analytical or numerical solutions and/or experimental results for various geometries. The Poisson solution was also used as an initial guess for the thin-layer Navier-Stokes solution procedure to improve the efficiency of the numerical flow simulations. It was found that this approach resulted in roughly a 30 percent CPU time savings compared with solving the thin-layer Navier-Stokes equations starting from a uniform free stream flowfield.
Robust estimation of the proportion of treatment effect explained by surrogate marker information.
Parast, Layla; McDermott, Mary M; Tian, Lu
2016-05-10
In randomized treatment studies where the primary outcome requires long follow-up of patients and/or expensive or invasive obtainment procedures, the availability of a surrogate marker that could be used to estimate the treatment effect and could potentially be observed earlier than the primary outcome would allow researchers to make conclusions regarding the treatment effect with less required follow-up time and resources. The Prentice criterion for a valid surrogate marker requires that a test for treatment effect on the surrogate marker also be a valid test for treatment effect on the primary outcome of interest. Based on this criterion, methods have been developed to define and estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on the surrogate marker. These methods aim to identify useful statistical surrogates that capture a large proportion of the treatment effect. However, current methods to estimate this proportion usually require restrictive model assumptions that may not hold in practice and thus may lead to biased estimates of this quantity. In this paper, we propose a nonparametric procedure to estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on a potential surrogate marker and extend this procedure to a setting with multiple surrogate markers. We compare our approach with previously proposed model-based approaches and propose a variance estimation procedure based on a perturbation-resampling method. Simulation studies demonstrate that the procedure performs well in finite samples and outperforms model-based procedures when the specified models are not correct. We illustrate our proposed procedure using a data set from a randomized study investigating a group-mediated cognitive behavioral intervention for peripheral artery disease participants. Copyright © 2015 John Wiley & Sons, Ltd.
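For orientation, the classical model-based quantity that this line of work builds on, not a formula quoted from the paper, defines the proportion of treatment effect explained by a surrogate as

```latex
\mathrm{PTE} \;=\; \frac{\Delta - \Delta_{S}}{\Delta} \;=\; 1 - \frac{\Delta_{S}}{\Delta},
```

where \Delta is the overall treatment effect on the primary outcome and \Delta_{S} is the residual treatment effect remaining after accounting for the treatment effect captured by the surrogate; the proposed procedure estimates this type of quantity nonparametrically rather than through a specified regression model.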
Global Properties of Fully Convective Accretion Disks from Local Simulations
NASA Astrophysics Data System (ADS)
Bodo, G.; Cattaneo, F.; Mignone, A.; Ponzo, F.; Rossi, P.
2015-08-01
We present an approach to deriving global properties of accretion disks from the knowledge of local solutions derived from numerical simulations based on the shearing box approximation. The approach consists of a two-step procedure. First, a local solution valid for all values of the disk height is constructed by piecing together an interior solution obtained numerically with an analytical exterior radiative solution. The matching is obtained by assuming hydrostatic balance and radiative equilibrium. Although in principle the procedure can be carried out in general, it simplifies considerably when the interior solution is fully convective. In these cases, the construction is analogous to the derivation of the Hayashi tracks for protostars. The second step consists of piecing together the local solutions at different radii to obtain a global solution. Here we use the symmetry of the solutions with respect to the defining dimensionless numbers—in a way similar to the use of homology relations in stellar structure theory—to obtain the scaling properties of the various disk quantities with radius.
Dynamic tests of composite panels of an aircraft wing
NASA Astrophysics Data System (ADS)
Splichal, Jan; Pistek, Antonin; Hlinka, Jiri
2015-10-01
The paper describes the analysis of aerospace composite structures under dynamic loading. Today, it is common to use design procedures based on assumption of static loading only, and dynamic loading is rarely assumed and applied in design and certification of aerospace structures. The paper describes the application of dynamic loading for the design of aircraft structures, and the validation of the procedure on a selected structure. The goal is to verify the possibility of reducing the weight through improved design/modelling processes using dynamic loading instead of static loading. The research activity focuses on the modelling and testing of a composite panel representing a local segment of an aircraft wing section, investigating in particular the buckling behavior under dynamic loading. Finite Elements simulation tools are discussed, as well as the advantages of using a digital optical measurement system for the evaluation of the tests. The comparison of the finite element simulations with the results of the tests is presented.
NASA Astrophysics Data System (ADS)
Vanhuyse, Johan; Deckers, Elke; Jonckheere, Stijn; Pluymers, Bert; Desmet, Wim
2016-02-01
The Biot theory is commonly used for the simulation of the vibro-acoustic behaviour of poroelastic materials. However, it relies on a number of material parameters. These can be hard to characterize and require dedicated measurement setups, yielding a time-consuming and costly characterisation. This paper presents a characterisation method which is able to identify all material parameters using only an impedance tube. The method relies on the assumption that the sample is clamped within the tube, that the shear wave is excited and that the acoustic field is no longer one-dimensional. This paper numerically shows the potential of the developed method. It therefore performs a sensitivity analysis of the quantification parameters, i.e. reflection coefficients and relative pressures, and a parameter estimation using global optimisation methods. A 3-step procedure is developed and validated. It is shown that even in the presence of numerically simulated noise this procedure leads to a robust parameter estimation.
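As a hedged sketch of the parameter estimation step only (the forward model below is a scalar placeholder, not a Biot poroelastic impedance-tube simulation, and the parameter names are hypothetical), a global optimisation of the misfit between measured and simulated quantities could be arranged as follows:

```python
# Hedged sketch: estimate material parameters by globally minimising the
# misfit between "measured" and simulated frequency-dependent quantities.
import numpy as np
from scipy.optimize import differential_evolution

freqs = np.linspace(200.0, 2000.0, 50)  # Hz

def forward_model(params, f):
    # placeholder for the Biot-based simulation of, e.g., reflection coefficients
    a, b = params
    return a * np.exp(-b * f / 1000.0)

true_params = np.array([0.8, 1.2])
rng = np.random.default_rng(4)
measured = forward_model(true_params, freqs) + rng.normal(0.0, 0.01, freqs.size)

def misfit(params):
    return np.sum((forward_model(params, freqs) - measured) ** 2)

result = differential_evolution(misfit, bounds=[(0.1, 2.0), (0.1, 3.0)], seed=0)
print("estimated parameters:", result.x)
```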
Comparison of mapping algorithms used in high-throughput sequencing: application to Ion Torrent data
2014-01-01
Background: The rapid evolution in high-throughput sequencing (HTS) technologies has opened up new perspectives in several research fields and led to the production of large volumes of sequence data. A fundamental step in HTS data analysis is the mapping of reads onto reference sequences. Choosing a suitable mapper for a given technology and a given application is a subtle task because of the difficulty of evaluating mapping algorithms. Results: In this paper, we present a benchmark procedure to compare mapping algorithms used in HTS using both real and simulated datasets and considering four evaluation criteria: computational resource and time requirements, robustness of mapping, ability to report positions for reads in repetitive regions, and ability to retrieve true genetic variation positions. To measure robustness, we introduced a new definition for a correctly mapped read taking into account not only the expected start position of the read but also the end position and the number of indels and substitutions. We developed CuReSim, a new read simulator, that is able to generate customized benchmark data for any kind of HTS technology by adjusting parameters to the error types. CuReSim and CuReSimEval, a tool to evaluate the mapping quality of the CuReSim simulated reads, are freely available. We applied our benchmark procedure to evaluate 14 mappers in the context of whole genome sequencing of small genomes with Ion Torrent data for which such a comparison has not yet been established. Conclusions: A benchmark procedure to compare HTS data mappers is introduced with a new definition for the mapping correctness as well as tools to generate simulated reads and evaluate mapping quality. The application of this procedure to Ion Torrent data from the whole genome sequencing of small genomes has allowed us to validate our benchmark procedure and demonstrate that it is helpful for selecting a mapper based on the intended application, questions to be addressed, and the technology used. This benchmark procedure can be used to evaluate existing or in-development mappers as well as to optimize parameters of a chosen mapper for any application and any sequencing platform. PMID:24708189
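As a hedged illustration of the extended correctness definition (start and end positions plus indel and substitution counts), the snippet below checks one mapped read against its simulated ground truth; the field names, the position tolerance, and the exact-match requirement on edit counts are illustrative assumptions, not CuReSimEval's actual interface:

```python
# Hedged sketch: a read counts as correctly mapped only if both endpoints are
# near the simulated truth and the edit composition matches.
from dataclasses import dataclass

@dataclass
class Alignment:
    start: int            # leftmost mapped position (0-based)
    end: int              # rightmost mapped position
    n_indels: int
    n_substitutions: int

def is_correctly_mapped(truth: Alignment, mapped: Alignment, tol: int = 5) -> bool:
    return (abs(mapped.start - truth.start) <= tol
            and abs(mapped.end - truth.end) <= tol
            and mapped.n_indels == truth.n_indels
            and mapped.n_substitutions == truth.n_substitutions)

truth = Alignment(start=10_000, end=10_199, n_indels=1, n_substitutions=2)
mapped = Alignment(start=10_002, end=10_201, n_indels=1, n_substitutions=2)
print(is_correctly_mapped(truth, mapped))  # True
```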
Cheng, Keding; Sloan, Angela; McCorrister, Stuart; Peterson, Lorea; Chui, Huixia; Drebot, Mike; Nadon, Celine; Knox, J David; Wang, Gehua
2014-12-01
The need for rapid and accurate H typing is evident during Escherichia coli outbreak situations. This study explores the transition of MS-H, a method originally developed for rapid H antigen typing of E. coli using LC-MS/MS of flagella digests of reference strains and some clinical strains, to E. coli isolates in a clinical scenario through quantitative analysis and method validation. Motile and nonmotile strains were examined in batches to simulate a clinical sample scenario. Various LC-MS/MS batch run procedures and MS-H typing rules were compared and summarized through quantitative analysis of the MS-H data output for standard method development. Label-free quantitative analysis of MS-H typing proved very useful for examining the quality of MS-H results and the effects of sample carryover from motile E. coli isolates. Based on this, a refined procedure and protein identification rule specific for clinical MS-H typing was established and validated. With an LC-MS/MS batch run procedure and database search parameters specific to E. coli MS-H typing, the standard procedure maintained high accuracy and specificity in clinical situations, and its potential for use in a clinical setting was clearly established. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Li, Peng; Jiang, Shengyuan; Tang, Dewei; Xu, Bo
2017-05-01
To strike a balance between the need for drilling efficiency and the constraints of the power budget on the Moon, the penetration per revolution of the drill bit is generally limited to around 0.1 mm, and the geometric angles of the cutting blade need to be well designed. This paper introduces a simulation approach based on PFC3D (particle flow code 3 dimensions) for analyzing the cutting load features on a lunar rock simulant produced by blades of different geometric angles at a small cutting depth. The mean values of the cutting force of five blades in the survey region (four on the boundary points and one on the center point) are selected as the macroscopic responses of the model. An experimental design method that includes Plackett-Burman (PB) design and central composite design (CCD) is adopted in the matching procedure for the microparameters in the PFC model. Using an enumeration-based optimization, the optimum set of microparameters is acquired. Experimental validation is then carried out using another twenty-five blades with different geometric angles, and the results from simulations and laboratory tests show fair agreement. Additionally, the rock-breaking processes produced by different blades are quantified from the simulation analysis. This research provides theoretical support for the refinement of rock cutting load prediction and the geometric design of the cutting blade on the drill bit.
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selection of the calibration model will generate lower quality control (QC) accuracy, with errors up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
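The published procedure is distributed as an R script for RStudio; the Python sketch below merely re-expresses the described decision points under stated assumptions (the spread metric for weighted normalized variances and the simulated data are illustrative, and the original script remains the authoritative implementation):

```python
# Hedged sketch: F-test for weighting, 1/x vs 1/x^2 choice, partial F-test for
# model order, and a normality check of standardized residuals.
import numpy as np
from scipy import stats

def needs_weighting(lloq_reps, uloq_reps, alpha=0.05):
    """Two-sided F-test comparing replicate variances at the ULOQ and LLOQ."""
    f = np.var(uloq_reps, ddof=1) / np.var(lloq_reps, ddof=1)
    df1, df2 = len(uloq_reps) - 1, len(lloq_reps) - 1
    return 2 * min(stats.f.cdf(f, df1, df2), stats.f.sf(f, df1, df2)) < alpha

def choose_weight(conc, reps_by_level):
    """Pick 1/x or 1/x^2 as the option with the smaller spread of weighted
    normalized variances (spread taken here as a max/min ratio)."""
    spreads = {}
    for power in (1, 2):
        w = 1.0 / conc ** power
        wnv = [wi * np.var(r, ddof=1) / np.mean(r) ** 2
               for wi, r in zip(w, reps_by_level)]
        spreads[power] = max(wnv) / min(wnv)
    return min(spreads, key=spreads.get)

def select_order(x, y, w, alpha=0.05):
    """Partial F-test: keep the quadratic term only if it significantly
    reduces the weighted residual sum of squares."""
    def wrss(deg):
        coef = np.polyfit(x, y, deg, w=np.sqrt(w))
        return np.sum(w * (y - np.polyval(coef, x)) ** 2)
    rss1, rss2 = wrss(1), wrss(2)
    df2 = len(x) - 3
    f = (rss1 - rss2) / (rss2 / df2)
    return "quadratic" if stats.f.sf(f, 1, df2) < alpha else "linear"

rng = np.random.default_rng(3)
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])
reps = [2.0 * c + rng.normal(0.0, 0.05 * c, 5) for c in conc]   # heteroscedastic
y = np.array([np.mean(r) for r in reps])

weights = np.ones_like(conc)
if needs_weighting(reps[0], reps[-1]):
    weights = 1.0 / conc ** choose_weight(conc, reps)
print("selected model order:", select_order(conc, y, weights))

# validation step: normality of the standardized weighted residuals
coef = np.polyfit(conc, y, 1, w=np.sqrt(weights))
resid = np.sqrt(weights) * (y - np.polyval(coef, conc))
resid_std = (resid - resid.mean()) / resid.std(ddof=1)
print("Cramer-von Mises p-value:", stats.cramervonmises(resid_std, "norm").pvalue)
```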
A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model
Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...
2016-09-16
Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.
Uterus models for use in virtual reality hysteroscopy simulators.
Niederer, Peter; Weiss, Stephan; Caduff, Rosmarie; Bajka, Michael; Szekély, Gabor; Harders, Matthias
2009-05-01
Virtual reality models of human organs are needed in surgery simulators which are developed for educational and training purposes. A simulation can only be useful, however, if the mechanical performance of the system in terms of force-feedback for the user as well as the visual representation is realistic. We therefore aim at developing a mechanical computer model of the organ in question which yields realistic force-deformation behavior under virtual instrument-tissue interactions and which, in particular, runs in real time. The modeling of the human uterus is described as it is to be implemented in a simulator for minimally invasive gynecological procedures. To this end, anatomical information which was obtained from specially designed computed tomography and magnetic resonance imaging procedures as well as constitutive tissue properties recorded from mechanical testing were used. In order to achieve real-time performance, the combination of mechanically realistic numerical uterus models of various levels of complexity with a statistical deformation approach is suggested. In view of mechanical accuracy of such models, anatomical characteristics including the fiber architecture along with the mechanical deformation properties are outlined. In addition, an approach to make this numerical representation potentially usable in an interactive simulation is discussed. The numerical simulation of hydrometra is shown in this communication. The results were validated experimentally. In order to meet the real-time requirements and to accommodate the large biological variability associated with the uterus, a statistical modeling approach is demonstrated to be useful.
The estimation of soil water fluxes using lysimeter data
NASA Astrophysics Data System (ADS)
Wegehenkel, M.
2009-04-01
The validation of soil water balance models regarding soil water fluxes in the field is still a problem. This requires time series of measured model outputs. In our study, a soil water balance model was validated using lysimeter time series of measured model outputs. The soil water balance model used in our study was the Hydrus-1D model. This model was tested by a comparison of simulated with measured daily rates of actual evapotranspiration, soil water storage, groundwater recharge and capillary rise. These rates were obtained from twelve weighable lysimeters with three different soils and two different lower boundary conditions for the time period from January 1, 1996 to December 31, 1998. In that period, grass vegetation was grown on all lysimeters. These lysimeters are located in Berlin, Germany. One potential source of error in lysimeter experiments is preferential flow caused by an artificial channeling of water due to the occurrence of air space between the soil monolith and the inside wall of the lysimeters. To analyse such sources of errors, Hydrus-1D was applied with different modelling procedures. The first procedure consists of a general uncalibrated application of Hydrus-1D. The second one includes a calibration of soil hydraulic parameters via inverse modelling of different percolation events with Hydrus-1D. In the third procedure, the model DUALP_1D was applied with the optimized hydraulic parameter set to test the hypothesis of the existence of preferential flow paths in the lysimeters. The results of the different modelling procedures indicated that, in addition to a precise determination of the soil water retention functions, vegetation parameters such as rooting depth should also be taken into account. Without such information, the rooting depth is a calibration parameter. However, in some cases, the uncalibrated application of both models also led to an acceptable fit between measured and simulated model outputs.
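As an illustration of how such a comparison of simulated and measured daily rates can be scored, a short sketch of two common goodness-of-fit measures is given below; the paper does not specify these exact metrics, so they are only a plausible choice.

```python
# Illustrative goodness-of-fit measures for comparing simulated and measured daily
# rates (evapotranspiration, storage, recharge, capillary rise); the paper does not
# prescribe these exact metrics.
import numpy as np

def rmse(measured, simulated):
    measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
    return np.sqrt(np.mean((simulated - measured) ** 2))

def nash_sutcliffe(measured, simulated):
    measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
    return 1.0 - np.sum((simulated - measured) ** 2) / np.sum((measured - measured.mean()) ** 2)
```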
NASA Astrophysics Data System (ADS)
Lima, José; Pereira, Ana I.; Costa, Paulo; Pinto, Andry; Costa, Pedro
2017-07-01
This paper describes an optimization procedure for a robot with 12 degrees of freedom that avoids the inverse kinematics problem, which is a hard task for this type of robot manipulator. This robot can be used for pick-and-place tasks in complex designs. Combining an accurate and fast direct kinematics model with optimization strategies, it is possible to obtain the joint angles for a desired end-effector position and orientation. The stretched simulated annealing algorithm and a genetic algorithm were used as optimization methods. The solutions found were validated using data from both a real and a simulated robot composed of 12 servomotors and a gripper.
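A toy sketch of the underlying idea, replacing inverse kinematics with optimization over a fast forward-kinematics model, is given below; a planar 3-joint arm and SciPy's Nelder-Mead minimizer stand in for the 12-DOF robot and the stretched simulated annealing and genetic algorithms used in the paper.

```python
# Toy version of the idea: optimize joint angles of a fast forward-kinematics model so
# the end effector reaches a target, instead of solving inverse kinematics analytically.
# A planar 3-joint arm and Nelder-Mead stand in for the 12-DOF robot and the stretched
# simulated annealing / genetic algorithms used in the paper.
import numpy as np
from scipy.optimize import minimize

LINKS = np.array([1.0, 0.8, 0.5])                  # arbitrary link lengths

def forward_kinematics(q):
    """End-effector position of a planar serial arm with joint angles q."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def reach(target):
    cost = lambda q: np.sum((forward_kinematics(q) - target) ** 2)
    return minimize(cost, x0=np.zeros(3), method="Nelder-Mead").x

q_star = reach(np.array([1.2, 1.0]))
print(q_star, forward_kinematics(q_star))
```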
The Elastic Behaviour of Sintered Metallic Fibre Networks: A Finite Element Study by Beam Theory
Bosbach, Wolfram A.
2015-01-01
Background The finite element method has complemented research in the field of network mechanics in the past years in numerous studies about various materials. Numerical predictions and the planning efficiency of experimental procedures are two of the motivational aspects for these numerical studies. The widespread availability of high performance computing facilities has been the enabler for the simulation of sufficiently large systems. Objectives and Motivation In the present study, finite element models were built for sintered, metallic fibre networks and validated by previously published experimental stiffness measurements. The validated models were the basis for predictions about so far unknown properties. Materials and Methods The finite element models were built by transferring previously published skeletons of fibre networks into finite element models. Beam theory was applied as a simplification method. Results and Conclusions The obtained material stiffness is not a constant but rather a function of variables such as sample size and boundary conditions. Beam theory offers an efficient finite element method for the simulated fibre networks. The experimental results can be approximated by the simulated systems. Two worthwhile aspects for future work will be the influence of size and shape and the mechanical interaction with matrix materials. PMID:26569603
40 CFR 761.392 - Preparing validation study samples.
Code of Federal Regulations, 2014 CFR
2014-07-01
... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...
40 CFR 761.392 - Preparing validation study samples.
Code of Federal Regulations, 2012 CFR
2012-07-01
... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...
40 CFR 761.392 - Preparing validation study samples.
Code of Federal Regulations, 2013 CFR
2013-07-01
... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...
In Defense of an Instrument-Based Approach to Validity
ERIC Educational Resources Information Center
Hood, S. Brian
2012-01-01
Paul E. Newton argues in favor of a conception of validity, viz, "the consensus definition of validity," according to which the extension of the predicate "is valid" is a subset of "assessment-based decision-making procedure[s], which [are] underwritten by an argument that the assessment procedure can be used to measure the attribute entailed by…
Palter, Vanessa N; Orzech, Neil; Reznick, Richard K; Grantcharov, Teodor P
2013-02-01
To develop and validate an ex vivo comprehensive curriculum for a basic laparoscopic procedure. Although simulators have been well validated as tools to teach technical skills, their integration into comprehensive curricula is lacking. Moreover, neither the effect of ex vivo training on learning curves in the operating room (OR), nor the effect on nontechnical proficiency has been investigated. This randomized single-blinded prospective trial allocated 20 surgical trainees to a structured training and assessment curriculum (STAC) group or conventional residency training. The STAC consisted of case-based learning, proficiency-based virtual reality training, laparoscopic box training, and OR participation. After completion of the intervention, all participants performed 5 sequential laparoscopic cholecystectomies in the OR. The primary outcome measure was the difference in technical performance between the 2 groups during the first laparoscopic cholecystectomy. Secondary outcome measures included differences with respect to learning curves in the OR, technical proficiency of each sequential laparoscopic cholecystectomy, and nontechnical skills. Residents in the STAC group outperformed residents in the conventional group in the first (P = 0.004), second (P = 0.036), third (P = 0.021), and fourth (P = 0.023) laparoscopic cholecystectomies. The conventional group demonstrated a significant learning curve in the OR (P = 0.015) in contrast to the STAC group (P = 0.032). Residents in the STAC group also had significantly higher nontechnical skills (P = 0.027). Participating in the STAC shifted the learning curve for a basic laparoscopic procedure from the operating room into the simulation laboratory. STAC-trained residents had superior technical proficiency in the OR and nontechnical skills compared with conventionally trained residents. (The study registration ID is NCT01560494.).
NASA Astrophysics Data System (ADS)
Alimohammadi, Shahrouz; Cavaglieri, Daniele; Beyhaghi, Pooriya; Bewley, Thomas R.
2016-11-01
This work applies a recently developed Derivative-free optimization algorithm to derive a new mixed implicit-explicit (IMEX) time integration scheme for Computational Fluid Dynamics (CFD) simulations. This algorithm allows imposing a specified order of accuracy for the time integration and other important stability properties in the form of nonlinear constraints within the optimization problem. In this procedure, the coefficients of the IMEX scheme should satisfy a set of constraints simultaneously. Therefore, the optimization process, at each iteration, estimates the location of the optimal coefficients using a set of global surrogates, for both the objective and constraint functions, as well as a model of the uncertainty function of these surrogates based on the concept of Delaunay triangulation. This procedure has been proven to converge to the global minimum of the constrained optimization problem provided the constraints and objective functions are twice differentiable. As a result, a new third-order, low-storage IMEX Runge-Kutta time integration scheme is obtained with remarkably fast convergence. Numerical tests are then performed leveraging the turbulent channel flow simulations to validate the theoretical order of accuracy and stability properties of the new scheme.
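As a rough illustration of how order conditions can be imposed as nonlinear constraints during such a search, the sketch below solves a much simpler stand-in problem with a generic SciPy solver; it is not the Delaunay-based, provably convergent optimizer of the paper, and the constraints shown are only the classical conditions for a 3-stage explicit Runge-Kutta scheme up to second order, not the full coupled IMEX set.

```python
# Stand-in illustration only: a generic SciPy SLSQP solve with order conditions as
# nonlinear equality constraints. The paper uses a Delaunay-based, provably convergent
# derivative-free optimizer and the full set of coupled IMEX order/stability conditions;
# here only the classical conditions for a 3-stage explicit RK scheme up to second order
# are imposed, with a made-up objective.
import numpy as np
from scipy.optimize import minimize

def unpack(p):
    a21, a31, a32, b1, b2, b3 = p
    b = np.array([b1, b2, b3])
    c = np.array([0.0, a21, a31 + a32])        # row sums of the Butcher tableau
    return b, c

def order_conditions(p):
    b, c = unpack(p)
    return [b.sum() - 1.0,                     # first order:  sum b_i       = 1
            b @ c - 0.5]                       # second order: sum b_i * c_i = 1/2

def objective(p):
    return np.sum(np.asarray(p) ** 2)          # placeholder: prefer small coefficients

res = minimize(objective, x0=np.full(6, 0.3), method="SLSQP",
               constraints={"type": "eq", "fun": order_conditions})
print(res.x, order_conditions(res.x))
```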
Ng, Danny Siu-Chun; Sun, Zihan; Young, Alvin Lerrmann; Ko, Simon Tak-Chuen; Lok, Jerry Ka-Hing; Lai, Timothy Yuk-Yau; Sikder, Shameema; Tham, Clement C
2018-01-01
Objective To identify residents’ perceived barriers to learning phacoemulsification surgical procedures and to evaluate whether virtual reality simulation training changed these perceptions. Design The ophthalmology residents undertook a simulation phacoemulsification course and proficiency assessment on the Eyesi system using the previously validated training modules of intracapsular navigation, anti-tremor, capsulorrhexis, and cracking/chopping. A cross-sectional, multicenter survey on the perceived difficulties in performing phacoemulsification tasks on patients, based on the validated International Council of Ophthalmology’s Ophthalmology Surgical Competency Assessment Rubric (ICO-OSCAR), using a 5-point Likert scale (1 = least and 5 = most difficulty), was conducted among residents with or without prior simulation training. Mann–Whitney U tests were carried out to compare the mean scores, and multivariate regression analyses were performed to evaluate the association of lower scores with the following potential predictors: 1) higher level trainee, 2) can complete phacoemulsification most of the time (>90%) without supervisor’s intervention, and 3) prior simulation training. Setting The study was conducted in ophthalmology residency training programs in five regional hospitals in Hong Kong. Results Of the 22 residents, 19 responded (86.3%), of which 13 (68.4%) had completed simulation training. Nucleus cracking/chopping was ranked highest in difficulty by all respondents followed by capsulorrhexis completion and nucleus rotation/manipulation. Respondents with prior simulation training had significantly lower difficulty scores on these three tasks (nucleus cracking/chopping 3.85 vs 4.75, P = 0.03; capsulorrhexis completion 3.31 vs 4.40, P = 0.02; and nucleus rotation/manipulation 3.00 vs 4.75, P = 0.01). In multivariate analyses, simulation training was significantly associated with lower difficulty scores on these three tasks. Conclusion Residents who had completed Eyesi simulation training had higher confidence in performing the most difficult tasks perceived during phacoemulsification. PMID:29785084
Atmospheric Correction of Satellite Imagery Using Modtran 3.5 Code
NASA Technical Reports Server (NTRS)
Gonzales, Fabian O.; Velez-Reyes, Miguel
1997-01-01
When performing satellite remote sensing of the earth in the solar spectrum, atmospheric scattering and absorption effects provide the sensors corrupted information about the target's radiance characteristics. We are faced with the problem of reconstructing the signal that was reflected from the target, from the data sensed by the remote sensing instrument. This article presents a method for simulating radiance characteristic curves of satellite images using a MODTRAN 3.5 band model (BM) code to solve the radiative transfer equation (RTE), and proposes a method for the implementation of an adaptive system for automated atmospheric corrections. The simulation procedure is carried out as follows: (1) for each satellite digital image a radiance characteristic curve is obtained by performing a digital number (DN) to radiance conversion, (2) using MODTRAN 3.5 a simulation of the images' characteristic curves is generated, (3) the output of the code is processed to generate radiance characteristic curves for the simulated cases. The simulation algorithm was used to simulate Landsat Thematic Mapper (TM) images for two types of locations: the ocean surface, and a forest surface. The simulation procedure was validated by computing the error between the empirical and simulated radiance curves. While results in the visible region of the spectrum were not very accurate, those for the infrared region of the spectrum were encouraging. This information can be used for correction of the atmospheric effects. For the simulation over ocean, the lowest error produced in this region was of the order of 10^-5 and up to 14 times smaller than errors in the visible region. For the same spectral region in the forest case, the lowest error produced was of the order of 10^-4, and up to 41 times smaller than errors in the visible region.
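Step (1) of the procedure above, the digital-number-to-radiance conversion, is a linear gain/offset rescaling; a minimal sketch follows, with placeholder gain and offset values rather than actual Landsat TM calibration constants.

```python
# Step (1): digital number (DN) to spectral radiance, L = gain * DN + offset.
# Gain/offset values below are placeholders, not Landsat TM calibration constants.
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Spectral radiance in W m^-2 sr^-1 um^-1 for a single band."""
    return gain * np.asarray(dn, dtype=float) + offset

band_dn = np.array([[52, 60], [75, 81]])                       # toy 2x2 image
radiance = dn_to_radiance(band_dn, gain=0.7757, offset=-1.52)  # illustrative values
```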
The use of simulation in neurosurgical education and training. A systematic review.
Kirkman, Matthew A; Ahmed, Maria; Albert, Angelique F; Wilson, Mark H; Nandi, Dipankar; Sevdalis, Nick
2014-08-01
There is increasing evidence that simulation provides high-quality, time-effective training in an era of resident duty-hour restrictions. Simulation may also permit trainees to acquire key skills in a safe environment, important in a specialty such as neurosurgery, where technical error can result in devastating consequences. The authors systematically reviewed the application of simulation within neurosurgical training and explored the state of the art in simulation within this specialty. To their knowledge this is the first systematic review published on this topic to date. The authors searched the Ovid MEDLINE, Embase, and PsycINFO databases and identified 4101 articles; 195 abstracts were screened by 2 authors for inclusion. The authors reviewed data on study population, study design and setting, outcome measures, key findings, and limitations. Twenty-eight articles formed the basis of this systematic review. Several different simulators are at the neurosurgeon's disposal, including those for ventriculostomy, neuroendoscopic procedures, and spinal surgery, with evidence for improved performance in a range of procedures. Feedback from participants has generally been favorable. However, study quality was found to be poor overall, with many studies hampered by nonrandomized design, presenting normal rather than abnormal anatomy, lack of control groups and long-term follow-up, poor study reporting, lack of evidence of improved simulator performance translating into clinical benefit, and poor reliability and validity evidence. The mean Medical Education Research Study Quality Instrument score of included studies was 9.21 ± 1.95 (± SD) out of a possible score of 18. The authors demonstrate qualitative and quantitative benefits of a range of neurosurgical simulators but find significant shortfalls in methodology and design. Future studies should seek to improve study design and reporting, and provide long-term follow-up data on simulated and ideally patient outcomes.
Dynamic fiber Bragg gratings based health monitoring system of composite aerospace structures
NASA Astrophysics Data System (ADS)
Panopoulou, A.; Loutas, T.; Roulias, D.; Fransen, S.; Kostopoulos, V.
2011-09-01
The main purpose of the current work is to develop a new system for structural health monitoring of composite aerospace structures based on real-time dynamic measurements, in order to identify the structural state condition. Long-gauge Fibre Bragg Grating (FBG) optical sensors were used for monitoring the dynamic response of the composite structure. The algorithm that was developed for structural damage detection utilizes the collected dynamic response data, analyzes them in various ways and through an artificial neural network identifies the damage state and its location. Damage was simulated by slightly varying locally the mass of the structure (by adding a known mass) at different zones of the structure. Lumped masses in different locations upon the structure alter the eigen-frequencies in a way similar to actual damage. The structural dynamic behaviour has been numerically simulated and experimentally verified by means of modal testing on two different composite aerospace structures. Advanced digital signal processing techniques, e.g. the wavelet transform (WT), were used for the analysis of the dynamic response for feature extraction. WT's capability of separating the different frequency components in the time domain without losing frequency information makes it a versatile tool for demanding signal processing applications. The use of WT is also suggested by the non-stationary nature of dynamic response signals and the opportunity of evaluating the temporal evolution of their frequency contents. Feature extraction is the first step of the procedure. The extracted features are effective indices of damage size and location. The classification step comprises a feed-forward back-propagation network, whose output determines the simulated damage location. Finally, dedicated training and validation activities were carried out by means of numerical simulations and experimental procedures. Experimental validation was performed initially on a flat stiffened panel, representing a section of a typical aeronautical structure, manufactured and tested in the lab and, as a second step, on a scaled-up space-oriented structure, which is a composite honeycomb plate, used as a deployment base for antenna arrays. An integrated FBG sensor network, based on the advantage of multiplexing, was mounted on both structures and different excitation positions and boundary conditions were used. The analysis of operational dynamic responses was employed to identify both the damage and its position. The system that was designed and tested initially on the thin composite panel was successfully validated on the larger honeycomb structure. Numerical simulation of both structures was used as a support tool at all the steps of the work, providing among others the location of the optical sensors used. The proposed work will be the base for the whole system qualification and validation on an antenna reflector in future work.
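A minimal sketch of the described pipeline (wavelet-based feature extraction from dynamic responses followed by a feed-forward neural network classifier) is given below; the wavelet family, the energy-based features and the network size are assumptions, and PyWavelets and scikit-learn stand in for the authors' implementation.

```python
# Minimal sketch of the pipeline: wavelet-based features from dynamic response signals,
# then a feed-forward network mapping features to a damage-location class. Wavelet
# family, features and network size are assumptions; PyWavelets and scikit-learn
# stand in for the authors' implementation, and the data below are placeholders.
import numpy as np
import pywt                                        # PyWavelets
from sklearn.neural_network import MLPClassifier

def wavelet_features(signal, wavelet="db4", level=4):
    """Energy in each wavelet decomposition band as a compact damage-sensitive feature."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(0)
signals = rng.standard_normal((40, 1024))          # placeholder dynamic responses
labels = rng.integers(0, 4, size=40)               # placeholder damage-zone indices

X = np.vstack([wavelet_features(s) for s in signals])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
```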
Simulation-based training for prostate surgery.
Khan, Raheej; Aydin, Abdullatif; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran
2015-10-01
To identify and review the currently available simulators for prostate surgery and to explore the evidence supporting their validity for training purposes. A review of the literature between 1999 and 2014 was performed. The search terms included a combination of urology, prostate surgery, robotic prostatectomy, laparoscopic prostatectomy, transurethral resection of the prostate (TURP), simulation, virtual reality, animal model, human cadavers, training, assessment, technical skills, validation and learning curves. Furthermore, relevant abstracts from the American Urological Association, European Association of Urology, British Association of Urological Surgeons and World Congress of Endourology meetings, between 1999 and 2013, were included. Only studies related to prostate surgery simulators were included; studies regarding other urological simulators were excluded. A total of 22 studies that carried out a validation study were identified. Five validated models and/or simulators were identified for TURP, one for photoselective vaporisation of the prostate, two for holmium enucleation of the prostate, three for laparoscopic radical prostatectomy (LRP) and four for robot-assisted surgery. Of the TURP simulators, all five have demonstrated content validity, three face validity and four construct validity. The GreenLight laser simulator has demonstrated face, content and construct validities. The Kansai HoLEP Simulator has demonstrated face and content validity whilst the UroSim HoLEP Simulator has demonstrated face, content and construct validity. All three animal models for LRP have been shown to have construct validity whilst the chicken skin model was also content valid. Only two robotic simulators were identified with relevance to robot-assisted laparoscopic prostatectomy, both of which demonstrated construct validity. A wide range of different simulators are available for prostate surgery, including synthetic bench models, virtual-reality platforms, animal models, human cadavers, distributed simulation and advanced training programmes and modules. The currently validated simulators can be used by healthcare organisations to provide supplementary training sessions for trainee surgeons. Further research should be conducted to validate simulated environments, to determine which simulators have greater efficacy than others and to assess the cost-effectiveness of the simulators and the transferability of skills learnt. With surgeons investigating new possibilities for easily reproducible and valid methods of training, simulation offers great scope for implementation alongside traditional methods of training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.
Optimizing Positioning for In-Office Otology Procedures.
Govil, Nandini; DeMayo, William M; Hirsch, Barry E; McCall, Andrew A
2017-01-01
Objective Surgeons often report musculoskeletal discomfort in relation to their practice, but few understand optimal ergonomic positioning. This study aims to determine which patient position (sitting versus supine) is ergonomically optimal for performing otologic procedures. Study Design Observational study. Setting Outpatient otolaryngology clinic setting in a tertiary care facility. Subjects and Methods We observed 3 neurotologists performing a standardized simulated cerumen debridement procedure on volunteers in 2 positions: sitting and supine. The Rapid Upper Limb Assessment (RULA), a validated tool that calculates stress placed on the upper limb during a task, was used to evaluate ergonomic positioning. Scores on this instrument range from 1 to 7, with a score of 1 to 2 indicating negligible risk of developing posture-related injury. The risk of musculoskeletal disorders increases as the RULA score increases. Results In nearly every trial, RULA scores were lower when the simulated patient was placed in the supine position. When examined as a group, the median RULA scores were 5 with the patient sitting and 3 with the patient in the supine position (P < .0001). When the RULA scores of the 3 neurotologists were examined individually, each had a statistically significant decrease in score with the patient in the supine position. Conclusion This study indicates that patient position may contribute to ergonomic stress placed on the otolaryngologist's upper limb during in-office otologic procedures. Otolaryngologists should consider performing otologic procedures with the patient in the supine position to decrease their own risk of developing upper-limb musculoskeletal disorders.
VS2DI: Model use, calibration, and validation
Healy, Richard W.; Essaid, Hedeff I.
2012-01-01
VS2DI is a software package for simulating water, solute, and heat transport through soils or other porous media under conditions of variable saturation. The package contains a graphical preprocessor for constructing simulations, a postprocessor for displaying simulation results, and numerical models that solve for flow and solute transport (VS2DT) and flow and heat transport (VS2DH). Flow is described by the Richards equation, and solute and heat transport are described by advection-dispersion equations; the finite-difference method is used to solve these equations. Problems can be simulated in one, two, or three (assuming radial symmetry) dimensions. This article provides an overview of calibration techniques that have been used with VS2DI; included is a detailed description of calibration procedures used in simulating the interaction between groundwater and a stream fed by drainage from agricultural fields in central Indiana. Brief descriptions of VS2DI and the various types of problems that have been addressed with the software package are also presented.
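For reference, one-dimensional forms of the governing equations named above are given below; the formulations actually solved in VS2DT/VS2DH include additional terms (e.g., sinks and heat-specific coefficients) omitted here for brevity.

```latex
% One-dimensional forms of the governing equations (sink terms and heat-specific
% coefficients used in VS2DT/VS2DH are omitted for brevity).
\begin{align}
  \frac{\partial \theta(h)}{\partial t}
    &= \frac{\partial}{\partial z}\!\left[K(h)\left(\frac{\partial h}{\partial z} + 1\right)\right]
    && \text{(Richards equation, vertical flow)} \\
  \frac{\partial (\theta c)}{\partial t}
    &= \frac{\partial}{\partial z}\!\left(\theta D \frac{\partial c}{\partial z}\right)
       - \frac{\partial (q c)}{\partial z}
    && \text{(advection--dispersion, solute transport)}
\end{align}
```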
Space shuttle simulation model
NASA Technical Reports Server (NTRS)
Tatom, F. B.; Smith, S. R.
1980-01-01
The effects of atmospheric turbulence in both horizontal and near horizontal flight, during the return of the space shuttle, are important for determining design, control, and 'pilot-in-the-loop' effects. A nonrecursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed which provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes which are entitled shuttle simulation turbulence tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 10,000 meters. The turbulence generation procedure is described as well as the results of validating the simulated turbulence. Conclusions and recommendations are presented and references cited. The tabulated one dimensional von Karman spectra and the results of spectral and statistical analyses of the SSTT are contained in the appendix.
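For context, a commonly used form of the von Karman longitudinal velocity spectrum on which such nonrecursive gust models are based is given below; the exact parameterization used for the shuttle simulation turbulence tapes may differ.

```latex
% A commonly used von Karman longitudinal velocity spectrum (spatial-frequency form),
% where \sigma_u is the turbulence intensity, L_u the length scale and \Omega the
% spatial frequency; the parameterization used for the SSTT may differ.
\begin{equation}
  \Phi_u(\Omega) = \sigma_u^{2}\,\frac{2 L_u}{\pi}\,
  \frac{1}{\left[1 + (1.339\, L_u \Omega)^{2}\right]^{5/6}}
\end{equation}
```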
Analytical procedure validation and the quality by design paradigm.
Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno
2015-01-01
Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure developments to follow a similar approach. While development and optimization of analytical procedures following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims at showing that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have their role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, and from it, a control strategy can be set.
A framework for the direct evaluation of large deviations in non-Markovian processes
NASA Astrophysics Data System (ADS)
Cavallaro, Massimo; Harris, Rosemary J.
2016-11-01
We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and efficiently evaluate large deviation functions associated with time-extensive observables. This extends the ‘cloning’ procedure of Giardiná et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means.
Equation of state for 1,2-dichloroethane based on a hybrid data set
NASA Astrophysics Data System (ADS)
Thol, Monika; Rutkai, Gábor; Köster, Andreas; Miroshnichenko, Svetlana; Wagner, Wolfgang; Vrabec, Jadran; Span, Roland
2017-06-01
A fundamental equation of state in terms of the Helmholtz energy is presented for 1,2-dichloroethane. Due to a narrow experimental database, not only laboratory measurements but also molecular simulation data are applied to the fitting procedure. The present equation of state is valid from the triple point up to 560 K for pressures of up to 100 MPa. The accuracy of the equation is assessed in detail. Furthermore, a reasonable extrapolation behaviour is verified.
Charge-exchange plasma generated by an ion thruster
NASA Technical Reports Server (NTRS)
Kaufman, H. R.
1977-01-01
The charge exchange plasma generated by an ion thruster was investigated experimentally using both 5 cm and 15 cm thrusters. Results are shown for wide ranges of radial distance from the thruster and angle from the beam direction. Considerations of test environment, as well as distance from the thruster, indicate that a valid simulation of a thruster on a spacecraft was obtained. A calculation procedure and a sample calculation of charge exchange plasma density and saturation electron current density are included.
Statistical detection of EEG synchrony using empirical bayesian inference.
Singh, Archana K; Asoh, Hideki; Takeda, Yuji; Phillips, Steven
2015-01-01
There is growing interest in understanding how the brain utilizes synchronized oscillatory activity to integrate information across functionally connected regions. Computing phase-locking values (PLV) between EEG signals is a popular method for quantifying such synchronizations and elucidating their role in cognitive tasks. However, high-dimensionality in PLV data incurs a serious multiple testing problem. Standard multiple testing methods in neuroimaging research (e.g., false discovery rate, FDR) suffer severe loss of power, because they fail to exploit complex dependence structure between hypotheses that vary in spectral, temporal and spatial dimensions. Previously, we showed that a hierarchical FDR and optimal discovery procedures could be effectively applied for PLV analysis to provide better power than FDR. In this article, we revisit the multiple comparison problem from a new Empirical Bayes perspective and propose the application of the local FDR method (locFDR; Efron, 2001) for PLV synchrony analysis to compute FDR as a posterior probability that an observed statistic belongs to a null hypothesis. We demonstrate the application of Efron's Empirical Bayes approach for PLV synchrony analysis for the first time. We use simulations to validate the specificity and sensitivity of locFDR and a real EEG dataset from a visual search study for experimental validation. We also compare locFDR with hierarchical FDR and optimal discovery procedures in both simulation and experimental analyses. Our simulation results showed that locFDR can effectively control false positives without compromising the power of PLV synchrony inference. Applying locFDR to the experimental data detected more significant discoveries than our previously proposed methods, whereas the standard FDR method failed to detect any significant discoveries.
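A minimal sketch of how a phase-locking value between two band-passed EEG channels is typically computed is given below; the locFDR step discussed above would then be applied across the resulting collection of PLV statistics (channel pairs, frequencies, time windows).

```python
# Phase-locking value (PLV) between two band-passed EEG channels using
# Hilbert-transform instantaneous phases; averaging here is over time samples
# (averaging over trials at each time point is an equally common convention).
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV = |mean(exp(i * phase difference))|, ranging from 0 to 1."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))
```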
Stereovision-based pose and inertia estimation of unknown and uncooperative space objects
NASA Astrophysics Data System (ADS)
Pesce, Vincenzo; Lavagna, Michèle; Bevilacqua, Riccardo
2017-01-01
Autonomous close proximity operations are an arduous and attractive problem in space mission design. In particular, the estimation of pose, motion and inertia properties of an uncooperative object is a challenging task because of the lack of available a priori information. This paper develops a novel method to estimate the relative position, velocity, angular velocity, attitude and the ratios of the components of the inertia matrix of an uncooperative space object using only stereo-vision measurements. The classical Extended Kalman Filter (EKF) and an Iterated Extended Kalman Filter (IEKF) are used and compared for the estimation procedure. In addition, in order to compute the inertia properties, the ratios of the inertia components are added to the state and a pseudo-measurement equation is considered in the observation model. The relative simplicity of the proposed algorithm could make it suitable for online implementation in real applications. The developed algorithm is validated by numerical simulations in MATLAB using different initial conditions and uncertainty levels. The goal of the simulations is to verify the accuracy and robustness of the proposed estimation algorithm. The obtained results show satisfactory convergence of estimation errors for all the considered quantities. In several simulations, the results also show improvements with respect to similar works in the literature that deal with the same problem. In addition, a video processing procedure is presented to reconstruct the geometrical properties of a body using cameras. This inertia reconstruction algorithm has been experimentally validated at the ADAMUS (ADvanced Autonomous MUltiple Spacecraft) Lab at the University of Florida. In the future, this method could be integrated with the inertia ratios estimator to obtain a complete tool for mass properties recognition.
HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCann, R.A.; Lowery, P.S.
1987-10-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.
Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.
2013-01-01
DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual differences between measured and modelled outputs by up to 67 %. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98 %. After parameter estimation, the model underestimated the mean daily fluxes by 35 %. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20 % relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure, offering guidance for model improvement.
Improved pump turbine transient behaviour prediction using a Thoma number-dependent hillchart model
NASA Astrophysics Data System (ADS)
Manderla, M.; Kiniger, K.; Koutnik, J.
2014-03-01
Water hammer phenomena are important issues for high head hydro power plants. Especially if several reversible pump-turbines are connected to the same waterways, there may be strong interactions between the hydraulic machines. The prediction and coverage of all relevant load cases are challenging and difficult using classical simulation models. On the basis of a recent pump-storage project, dynamic measurements motivate an improved modeling approach making use of the Thoma number dependency of the actual turbine behaviour. The proposed approach is validated for several transient scenarios and turns out to increase the correlation between measurement and simulation results significantly. By applying a fully automated simulation procedure, broad operating ranges can be covered, which provides consistent insight into critical load case scenarios. This finally allows the optimization of the closing strategy and hence the overall power plant performance.
Sharei, Hoda; Alderliesten, Tanja; van den Dobbelsteen, John J; Dankelman, Jenny
2018-01-01
Guidewires and catheters are used during minimally invasive interventional procedures to traverse the vascular system and access the desired position. Computer models are increasingly being used to predict the behavior of these instruments. This information can be used to choose the right instrument for each case and increase the success rate of the procedure. Moreover, a designer can test the performance of instruments before the manufacturing phase. A precise model of the instrument is also useful for a training simulator. Therefore, to identify the strengths and weaknesses of different approaches used to model guidewires and catheters, a literature review of the existing techniques has been performed. The literature search was carried out in Google Scholar and Web of Science and limited to English for the period 1960 to 2017. For a computer model to be used in practice, it should be sufficiently realistic and, for some applications, run in real time. Therefore, we compared different modeling techniques with regard to these requirements, and the purposes of these models are reviewed. Important factors that influence the interaction between the instruments and the vascular wall are discussed. Finally, different ways used to evaluate and validate the models are described. We classified the developed models based on their formulation into finite-element method (FEM), mass-spring model (MSM), and rigid multibody links. Despite its numerical stability, FEM requires a very high computational effort. On the other hand, MSM is faster, but there is a risk of numerical instability. The rigid multibody links method has a simple structure and is easy to implement. However, as the length of the instrument is increased, the model becomes slower. For the level of realism of the simulation, friction and collision were incorporated as the most influential forces applied to the instrument during propagation within a vascular system. To evaluate accuracy, most of the studies compared the simulation results with the outcome of physical experiments on a variety of phantom models, and only a limited number of studies have assessed face validity. Although a subset of the validated models is considered sufficiently accurate for the specific tasks for which they were developed and, therefore, are already being used in practice, these models are still under ongoing development and improvement. Realism and computation time are two important requirements in catheter and guidewire modeling; however, the reviewed studies made a trade-off depending on the purpose of their model. Moreover, due to the complexity of the interaction with the vascular system, some assumptions have been made regarding the properties of both the instruments and the vascular system. Some validation studies have been reported, but without a consistent experimental methodology.
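Of the three formulations compared in this review, the mass-spring model is the simplest to illustrate; the toy 1-D chain below, integrated with semi-implicit Euler, shows why MSM is fast but can become unstable for stiff springs or large time steps. All parameter values are arbitrary.

```python
# Toy 1-D mass-spring chain integrated with semi-implicit Euler: fast and simple, but
# large stiffness or time step makes it unstable, which is the MSM trade-off noted
# above. All parameter values are arbitrary.
import numpy as np

def step_chain(x, v, k=50.0, rest=1.0, mass=0.01, damping=0.05, dt=1e-3):
    """One semi-implicit Euler step for point masses joined by linear springs."""
    f = np.zeros_like(x)
    for i in range(len(x) - 1):
        stretch = (x[i + 1] - x[i]) - rest    # spring i connects nodes i and i+1
        f[i] += k * stretch
        f[i + 1] -= k * stretch
    f -= damping * v                          # simple viscous damping
    v = v + dt * f / mass
    x = x + dt * v
    return x, v

x = np.linspace(0.0, 8.5, 10)                 # 10 nodes, chain slightly compressed
v = np.zeros_like(x)
for _ in range(2000):
    x, v = step_chain(x, v)
    x[0], v[0] = 0.0, 0.0                     # proximal end clamped
```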
Frederick, R I
2000-01-01
Mixed group validation (MGV) is offered as an alternative to criterion group validation (CGV) to estimate the true positive and false positive rates of tests and other diagnostic signs. CGV requires perfect confidence about each research participant's status with respect to the presence or absence of pathology. MGV determines diagnostic efficiencies based on group data; knowing an individual's status with respect to pathology is not required. MGV can use relatively weak indicators to validate better diagnostic signs, whereas CGV requires perfect diagnostic signs to avoid error in computing true positive and false positive rates. The process of MGV is explained, and a computer simulation demonstrates the soundness of the procedure. MGV of the Rey 15-Item Memory Test (Rey, 1958) for 723 pre-trial criminal defendants resulted in higher estimates of true positive rates and lower estimates of false positive rates as compared with prior research conducted with CGV. The author demonstrates how MGV addresses all the criticisms Rogers (1997b) outlined for differential prevalence designs in malingering detection research. Copyright 2000 John Wiley & Sons, Ltd.
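The identity usually given as the core of MGV is that, in a group with base rate p of the condition, the observed positive-sign rate is r = TPR*p + FPR*(1-p); with two or more groups of different base rates this can be solved for TPR and FPR. The sketch below illustrates this with made-up numbers; it is not the author's simulation.

```python
# Core MGV identity: in a group with base rate p, the observed positive-sign rate is
#   r = TPR * p + FPR * (1 - p).
# With two or more groups of different base rates this is a small linear system in
# (TPR, FPR). All numbers below are made up for illustration.
import numpy as np

def mgv_rates(base_rates, observed_rates):
    """Least-squares solution of r_j = TPR*p_j + FPR*(1-p_j) over the groups."""
    p = np.asarray(base_rates, dtype=float)
    A = np.column_stack([p, 1.0 - p])
    tpr, fpr = np.linalg.lstsq(A, np.asarray(observed_rates, dtype=float), rcond=None)[0]
    return tpr, fpr

print(mgv_rates([0.6, 0.2], [0.55, 0.25]))   # hypothetical groups -> (0.85, 0.10)
```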
Validation and verification of a virtual environment for training naval submarine officers
NASA Astrophysics Data System (ADS)
Zeltzer, David L.; Pioch, Nicholas J.
1996-04-01
A prototype virtual environment (VE) has been developed for training a submarine officer of the deck (OOD) to perform in-harbor navigation on a surfaced submarine. The OOD, stationed on the conning tower of the vessel, is responsible for monitoring the progress of the boat as it negotiates a marked channel, as well as verifying the navigational suggestions of the below-deck piloting team. The VE system allows an OOD trainee to view a particular harbor and associated waterway through a head-mounted display, receive spoken reports from a simulated piloting team, give spoken commands to the helmsman, and receive verbal confirmation of command execution from the helm. The task analysis of in-harbor navigation and the derivation of application requirements are briefly described. This is followed by a discussion of the implementation of the prototype. This implementation underwent a series of validation and verification assessment activities, including operational validation, data validation, and software verification of individual software modules as well as the integrated system. Validation and verification procedures are discussed with respect to the OOD application in particular, and with respect to VE applications in general.
The SCALE Verified, Archived Library of Inputs and Data - VALID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional experiments from the IHECSBE, to include experiments from areas beyond criticality safety, such as reactor physics and shielding, and to include application models. In the future, external SCALE users may also obtain qualification under the VALID procedure and be involved in expanding the library. The VALID library provides a pathway for the criticality safety community to leverage modeling and analysis expertise at ORNL.
NASA Astrophysics Data System (ADS)
Sturrock, P. A.
2008-01-01
Using the chi-square statistic, one may conveniently test whether a series of measurements of a variable is consistent with a constant value. However, that test is predicated on the assumption that the appropriate probability distribution function (pdf) is normal in form. This requirement is usually not satisfied by experimental measurements of the solar neutrino flux. This article presents an extension of the chi-square procedure that is valid for any form of the pdf. This procedure is applied to the GALLEX-GNO dataset, and it is shown that the results are in good agreement with the results of Monte Carlo simulations. Whereas application of the standard chi-square test to symmetrized data yields evidence significant at the 1% level for variability of the solar neutrino flux, application of the extended chi-square test to the unsymmetrized data yields only weak evidence (significant at the 4% level) of variability.
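A generic sketch of the idea, not necessarily Sturrock's exact extension, is given below: compute the usual chi-square constancy statistic and calibrate its significance by Monte Carlo draws from the actual (possibly non-normal) error pdf instead of the chi-square distribution.

```python
# Generic sketch: the usual chi-square constancy statistic, with its significance
# calibrated by Monte Carlo draws from the experiment's actual (possibly non-normal)
# error pdf instead of the chi-square distribution.
import numpy as np

def chi2_constancy(values, sigmas):
    values, w = np.asarray(values, float), 1.0 / np.asarray(sigmas, float) ** 2
    mean_w = np.sum(w * values) / np.sum(w)            # weighted mean
    return np.sum(w * (values - mean_w) ** 2)

def mc_p_value(values, sigmas, sample_errors, n_sim=10_000, seed=0):
    """sample_errors(rng, n) must draw n errors from the measurement error pdf."""
    rng = np.random.default_rng(seed)
    obs = chi2_constancy(values, sigmas)
    w = 1.0 / np.asarray(sigmas, float) ** 2
    const = np.sum(w * np.asarray(values, float)) / np.sum(w)   # null: constant flux
    sims = np.array([chi2_constancy(const + sample_errors(rng, len(values)), sigmas)
                     for _ in range(n_sim)])
    return float(np.mean(sims >= obs))
```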
Jensen Ang, Wei Jie; Hopkins, Michael Edward; Partridge, Roland; Hennessey, Iain; Brennan, Paul Martin; Fouyas, Ioannis; Hughes, Mark Antony
2014-03-01
Reductions in working hours affect training opportunities for surgeons. Surgical simulation is increasingly proposed to help bridge the resultant training gap. For simulation training to translate effectively into the operating theater, acquisition of technical proficiency must be objectively assessed. Evaluating "economy of movement" is one way to achieve this. We sought to validate a practical and economical method of assessing economy of movement during a simulated task. We hypothesized that accelerometers, found in smartphones, provide quantitative, objective feedback when attached to a neurosurgeon's wrists. Subjects (n = 25) included consultants, senior registrars, junior registrars, junior doctors, and medical students. Total resultant acceleration (TRA), average resultant acceleration, and movements with acceleration >0.6g (suprathreshold acceleration events) were recorded while subjects performed a simulated dural closure task. Students recorded an average TRA 97.0 ± 31.2 ms higher than senior registrars (P = .03) and 103 ± 31.2 ms higher than consultants (P = .02). Similarly, junior doctors accrued an average TRA 181 ± 31.2 ms higher than senior registrars (P < .001) and 187 ± 31.2 ms higher than consultants (P < .001). Significant correlations were observed between surgical outcome (as measured by quality of dural closure) and both TRA (r = .44, P < .001) and number of suprathreshold acceleration events (r = .33, P < .001). TRA (219 ± 66.6 ms; P = .01) and number of suprathreshold acceleration events (127 ± 42.5; P = .02) dropped between the first and fourth trials for junior doctors, suggesting procedural learning. TRA was 45.4 ± 17.1 ms higher in the dominant hand for students (P = .04) and 57.2 ± 17.1 ms for junior doctors (P = .005), contrasting with even TRA distribution between hands (acquired ambidexterity) in senior groups. Data from smartphone-based accelerometers show construct validity as an adjunct for assessing technical performance during simulation training.
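The abstract does not define the metrics precisely; a plausible reading, sketched below, is the per-sample resultant acceleration magnitude summed over the task (TRA), its mean, and a count of samples exceeding 0.6 g. The sampling and unit conventions are assumptions.

```python
# Plausible definitions (not stated explicitly in the abstract): per-sample resultant
# acceleration magnitude, summed over the task (TRA), its mean, and the number of
# samples exceeding the 0.6 g threshold ("suprathreshold acceleration events").
import numpy as np

G = 9.81  # m/s^2

def movement_metrics(ax, ay, az, threshold_g=0.6):
    resultant = np.sqrt(np.asarray(ax)**2 + np.asarray(ay)**2 + np.asarray(az)**2)
    tra = resultant.sum()                         # total resultant acceleration
    avg = resultant.mean()                        # average resultant acceleration
    events = int(np.sum(resultant > threshold_g * G))
    return tra, avg, events
```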
NASA Astrophysics Data System (ADS)
Soligo, Riccardo
In this work, the insight provided by our sophisticated Full Band Monte Carlo simulator is used to analyze the behavior of state-of-the-art devices such as GaN High Electron Mobility Transistors and Hot Electron Transistors. Chapter 1 is dedicated to the description of the simulation tool used to obtain the results shown in this work. Moreover, a separate section is dedicated to the setup of a procedure to validate the tunneling algorithm recently implemented in the simulator. Chapter 2 introduces High Electron Mobility Transistors (HEMTs), state-of-the-art devices characterized by highly nonlinear transport phenomena that require the use of advanced simulation methods. The techniques for device modeling are described and applied to a recent GaN HEMT, and they are validated with experimental measurements. The main characterization techniques are also described, including the original contribution provided by this work. Chapter 3 focuses on a popular technique to enhance HEMT performance: the down-scaling of the device dimensions. In particular, this chapter is dedicated to lateral scaling and the calculation of a limiting cutoff frequency for a device of vanishing length. Finally, Chapter 4 and Chapter 5 describe the modeling of Hot Electron Transistors (HETs). The simulation approach is validated by matching the simulated current characteristics with the experimental ones before variations of the layouts are proposed to increase the current gain to values suitable for amplification. The frequency response of these layouts is calculated and modeled by a small-signal circuit. For this purpose, a method to directly calculate the capacitance is developed, which provides a graphical picture of the capacitive phenomena that limit the frequency response of these devices. In Chapter 5 the properties of the hot electrons are investigated for different injection energies, which are obtained by changing the layout of the emitter barrier. Moreover, the large-signal characterization of the HET is shown for different layouts, where the collector barrier was scaled.
NASA Astrophysics Data System (ADS)
Agafonova, N.; Aleksandrov, A.; Anokhina, A.; Aoki, S.; Ariga, A.; Ariga, T.; Bender, D.; Bertolin, A.; Bozza, C.; Brugnera, R.; Buonaura, A.; Buontempo, S.; Büttner, B.; Chernyavsky, M.; Chukanov, A.; Consiglio, L.; D'Ambrosio, N.; De Lellis, G.; De Serio, M.; Del Amo Sanchez, P.; Di Crescenzo, A.; Di Ferdinando, D.; Di Marco, N.; Dmitrievski, S.; Dracos, M.; Duchesneau, D.; Dusini, S.; Dzhatdoev, T.; Ebert, J.; Ereditato, A.; Fini, R. A.; Fukuda, T.; Galati, G.; Garfagnini, A.; Giacomelli, G.; Göllnitz, C.; Goldberg, J.; Gornushkin, Y.; Grella, G.; Guler, M.; Gustavino, C.; Hagner, C.; Hara, T.; Hollnagel, A.; Hosseini, B.; Ishida, H.; Ishiguro, K.; Jakovcic, K.; Jollet, C.; Kamiscioglu, C.; Kamiscioglu, M.; Kawada, J.; Kim, J. H.; Kim, S. H.; Kitagawa, N.; Klicek, B.; Kodama, K.; Komatsu, M.; Kose, U.; Kreslo, I.; Lauria, A.; Lenkeit, J.; Ljubicic, A.; Longhin, A.; Loverre, P.; Malgin, A.; Malenica, M.; Mandrioli, G.; Matsuo, T.; Matveev, V.; Mauri, N.; Medinaceli, E.; Meregaglia, A.; Mikado, S.; Monacelli, P.; Montesi, M. C.; Morishima, K.; Muciaccia, M. T.; Naganawa, N.; Naka, T.; Nakamura, M.; Nakano, T.; Nakatsuka, Y.; Niwa, K.; Ogawa, S.; Okateva, N.; Olshevsky, A.; Omura, T.; Ozaki, K.; Paoloni, A.; Park, B. D.; Park, I. G.; Pasqualini, L.; Pastore, A.; Patrizii, L.; Pessard, H.; Pistillo, C.; Podgrudkov, D.; Polukhina, N.; Pozzato, M.; Pupilli, F.; Roda, M.; Rokujo, H.; Roganova, T.; Rosa, G.; Ryazhskaya, O.; Sato, O.; Schembri, A.; Shakiryanova, I.; Shchedrina, T.; Sheshukov, A.; Shibuya, H.; Shiraishi, T.; Shoziyoev, G.; Simone, S.; Sioli, M.; Sirignano, C.; Sirri, G.; Spinetti, M.; Stanco, L.; Starkov, N.; Stellacci, S. M.; Stipcevic, M.; Strauss, T.; Strolin, P.; Takahashi, S.; Tenti, M.; Terranova, F.; Tioukov, V.; Tufanli, S.; Vilain, P.; Vladimirov, M.; Votano, L.; Vuilleumier, J. L.; Wilquet, G.; Wonsak, B.; Yoon, C. S.; Zemskova, S.; Zghiche, A.
2014-08-01
The OPERA experiment, designed to perform the first observation of ν_μ → ν_τ oscillations in appearance mode through the detection of the τ leptons produced in charged current interactions, has collected data from 2008 to 2012. In the present paper, the procedure developed to detect particle decays occurring within a short distance of the neutrino interaction point is described in detail and applied to the search for charmed hadrons, which show decay topologies similar to that of the τ lepton. In the analysed sample, 50 charm decay candidate events are observed, in agreement with the number expected, proving that the detector performance and the analysis chain applied to neutrino events are well reproduced by the OPERA simulation and thus validating the methods for ν_τ appearance detection.
Virtual surgical telesimulations in otolaryngology.
Navarro Newball, Andrés A; Hernández, Carlos J; Velez, Jorge A; Munera, Luis E; García, Gregorio B; Gamboa, Carlos A; Reyes, Antonio J
2005-01-01
Distance learning can be enhanced with the use of virtual reality; this paper describes the design and initial validation of a Web Environment for Surgery Skills Training on Otolaryngology (WESST-OT). WESST-OT was created to help trainees gain the skills required to perform the Functional Endoscopic Sinus Surgery (FESS) procedure, since training centers and specialists in this field are scarce in Colombia; it is also part of a web-based educational cycle that simulates the stages of a real procedure. WESST-OT is one of the WESST family of telesimulators, which started to be developed from an architecture proposed at the Medicine Meets Virtual Reality conference in 2002; it is also a step towards the use of virtual reality technologies in Latin America.
Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions
Liu, Weidong; Luo, Xi
2014-01-01
This paper proposes a new method for estimating sparse precision matrices in the high-dimensional setting. Fast computation and adaptive procedures for this problem have attracted considerable interest. We propose a novel approach, called the Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross-validation and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. The method also enjoys fast computation for large-scale problems via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463
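To make the column-wise idea concrete, here is a hedged numpy sketch of a SCIO-style estimator: each column of the precision matrix is obtained by coordinate descent on a lasso-penalized quadratic involving the sample covariance S. Details of the published algorithm (update ordering, cross-validated choice of lam) may differ; this is a sketch under those assumptions.

import numpy as np

def scio_column(S, j, lam, n_iter=200, tol=1e-8):
    """Coordinate descent on 0.5 * b'Sb - b[j] + lam * ||b||_1 for column j."""
    p = S.shape[0]
    b = np.zeros(p)
    for _ in range(n_iter):
        b_old = b.copy()
        for i in range(p):
            r = (1.0 if i == j else 0.0) - S[i] @ b + S[i, i] * b[i]
            b[i] = np.sign(r) * max(abs(r) - lam, 0.0) / S[i, i]   # soft threshold
        if np.max(np.abs(b - b_old)) < tol:
            break
    return b

def scio(S, lam):
    """Assemble column estimates into a symmetrized sparse precision matrix."""
    Omega = np.column_stack([scio_column(S, j, lam) for j in range(S.shape[0])])
    return (Omega + Omega.T) / 2.0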
Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates
Bartroff, Jay; Song, Jinlin
2014-01-01
This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
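For context, the fixed-sample procedure that inspired the method is short enough to state in code; the sketch below implements Holm's (1979) step-down test, not the sequential version proposed in the paper.

import numpy as np

def holm(p_values, alpha=0.05):
    """Holm's step-down procedure: controls the familywise error rate at level alpha."""
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break            # step-down: stop at the first non-rejection
    return reject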
Brinkmann, Christian; Fritz, Mathias; Pankratius, Ulrich; Bahde, Ralf; Neumann, Philipp; Schlueter, Steffen; Senninger, Norbert; Rijcken, Emile
Simulation training improves laparoscopic performance. Laparoscopic basic skills can be learned in simulators as box- or virtual-reality (VR) trainers. However, there is no clear recommendation for either box or VR trainers as the most appropriate tool for the transfer of acquired laparoscopic basic skills into a surgical procedure. Both training tools were compared, using validated and well-established curricula for the acquisition of basic skills, in a prospective randomized trial in a 5-day structured laparoscopic training course. Participants completed either a box- or VR-trainer curriculum and then applied the learned skills performing an ex situ laparoscopic cholecystectomy on a pig liver. The performance was recorded on video and evaluated offline by 4 blinded observers using the Global Operative Assessment of Laparoscopic Skills (GOALS) score. Learning curves of the various exercises included in the training course were compared and the improvement in each exercise was analyzed. Surgical Skills Lab of the Department of General and Visceral Surgery, University Hospital Muenster. Surgical novices without prior surgical experience (medical students, n = 36). Posttraining evaluation showed significant improvement compared with baseline in both groups, indicating acquisition of laparoscopic basic skills. Learning curves showed almost the same progression with no significant differences. In simulated laparoscopic cholecystectomy, total GOALS score was significantly higher for the box-trained group than the VR-trained group (box: 15.31 ± 3.61 vs. VR: 12.92 ± 3.06; p = 0.039; Hedges' g* = 0.699), indicating higher technical skill levels. Despite both systems having advantages and disadvantages, they can both be used for simulation training for laparoscopic skills. In the setting with 2 structured, validated and almost identical curricula, the box-trained group appears to be superior in transferring basic skills into an experimental but structured surgical procedure. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
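The effect size reported above is a standard bias-corrected standardized mean difference; a minimal sketch of its computation follows (the group score vectors are supplied by the caller, not taken from the study).

import numpy as np

def hedges_g(x, y):
    """Hedges' g (bias-corrected standardized mean difference) for two independent groups."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    s_pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) /
                       (nx + ny - 2))
    d = (x.mean() - y.mean()) / s_pooled
    correction = 1.0 - 3.0 / (4.0 * (nx + ny) - 9.0)   # small-sample bias correction
    return d * correction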
Prottengeier, Johannes; Petzoldt, Marlen; Jess, Nikola; Moritz, Andreas; Gall, Christine; Schmidt, Joachim; Breuer, Georg
2016-03-01
Dual-tasking, the need to divide attention between concurrent tasks, causes a severe increase in workload in emergency situations and yet there is no standardised training simulation scenario for this key difficulty. We introduced and validated a quantifiable source of divided attention and investigated its effects on performance and workload in airway management. A randomised, crossover, interventional simulation study. Center for Training and Simulation, Department of Anaesthesiology, Erlangen University Hospital, Germany. One hundred and fifty volunteer medical students, paramedics and anaesthesiologists of all levels of training. Participants secured the airway of a manikin using a supraglottic airway, conventional endotracheal intubation and video-assisted endotracheal intubation with and without the Paced Auditory Serial Addition Test (PASAT), which served as a quantifiable source of divided attention. Primary endpoint was the time for the completion of each airway task. Secondary endpoints were the number of procedural mistakes made and the perceived workload as measured by the National Aeronautics and Space Administration's task load index (NASA-TLX). This is a six-dimensional questionnaire, which assesses the perception of demands, performance and frustration with respect to a task on a scale of 0 to 100. All 150 participants completed the tests. Volunteers perceived our test to be challenging (99%) and the experience of stress and distraction true to an emergency situation (80%), but still fair (98%) and entertaining (95%). The negative effects of divided attention were reproducible in participants of all levels of expertise. Time consumption and perceived workload increased, and almost half the participants made procedural mistakes under divided attention. The supraglottic airway technique was least affected by divided attention. The scenario was effective for simulation training involving divided attention in acute care medicine. The significant effects on performance and perceived workload demonstrate the validity of the model, which was also characterised by high acceptability, technical simplicity and a novel degree of standardisation.
Desktop-based computer-assisted orthopedic training system for spinal surgery.
Rambani, Rohit; Ward, James; Viant, Warren
2014-01-01
Simulation in surgical training has moved on since its inception at the end of the last century. Trainees are getting more exposure to computers and laboratory training in different subspecialties. More needs to be done in orthopedic simulation in spinal surgery. To develop a training system for pedicle screw fixation and validate its effectiveness in a cohort of junior orthopedic trainees. A fully simulated computer-navigated training system was used to train junior orthopedic trainees to perform pedicle screw insertion in the lumbar spine. Real patient computed tomography scans are used to produce the real-time fluoroscopic images of the lumbar spine. The training system was developed to simulate pedicle screw insertion in the lumbar spine. A total of 12 orthopedic senior house officers performed pedicle screw insertion in the lumbar spine before and after training on the system. The results were assessed based on a scoring system, which included the amount of time taken, the accuracy of pedicle screw insertion, and the number of exposures requested to complete the procedure. The results show a significant improvement in the amount of time taken, the accuracy of fixation, and the number of exposures after training on the simulator system. This was statistically significant using a paired Student t test (p < 0.05). The fully simulated computer-navigated training system is an efficient training tool for young orthopedic trainees. This system can be used to augment training in the operating room, and trainees can acquire their skills in the comfort of their study room or in the training room in the hospital. The system has the potential to be used in various other orthopedic procedures for learning technical skills in a manner aimed at ensuring a smooth escalation in task complexity, leading to better performance of procedures in the operating theater. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
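The pre/post comparison described above is a standard paired design; a minimal SciPy sketch with hypothetical placeholder scores (not the study data) is shown below.

import numpy as np
from scipy import stats

# Hypothetical pre/post scores for 12 trainees (placeholders, not study data).
pre  = np.array([41, 38, 45, 50, 36, 44, 39, 47, 42, 40, 43, 37], float)
post = np.array([55, 52, 60, 63, 49, 58, 54, 61, 57, 53, 59, 50], float)

t, p = stats.ttest_rel(post, pre)   # paired Student t test, as in the study design
print(f"t = {t:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")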
Training Surgical Residents With a Haptic Robotic Central Venous Catheterization Simulator.
Pepley, David F; Gordon, Adam B; Yovanoff, Mary A; Mirkin, Katelin A; Miller, Scarlett R; Han, David C; Moore, Jason Z
Ultrasound guided central venous catheterization (CVC) is a common surgical procedure with complication rates ranging from 5 to 21 percent. Training is typically performed using manikins that do not simulate anatomical variations such as obesity and abnormal vessel positioning. The goal of this study was to develop and validate the effectiveness of a new virtual reality and force haptic based simulation platform for CVC of the right internal jugular vein. A CVC simulation platform was developed using a haptic robotic arm, 3D position tracker, and computer visualization. The haptic robotic arm simulated needle insertion force that was based on cadaver experiments. The 3D position tracker was used as a mock ultrasound device with realistic visualization on a computer screen. Upon completion of a practice simulation, performance feedback is given to the user through a graphical user interface including scoring factors based on good CVC practice. The effectiveness of the system was evaluated by training 13 first year surgical residents using the virtual reality haptic based training system over a 3 month period. The participants' performance increased from 52% to 96% on the baseline training scenario, approaching the average score of an expert surgeon: 98%. This also resulted in improvement in positive CVC practices including a 61% decrease in the distance between final needle tip position and vein center, a decrease in mean insertion attempts from 1.92 to 1.23, and a 12% increase in time spent aspirating the syringe throughout the procedure. A virtual reality haptic robotic simulator for CVC was successfully developed. Surgical residents training on the simulator improved to near-expert levels after three robotic training sessions. This suggests that this system could act as an effective training device for CVC. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Fast image-based mitral valve simulation from individualized geometry.
Villard, Pierre-Frederic; Hammer, Peter E; Perrin, Douglas P; Del Nido, Pedro J; Howe, Robert D
2018-04-01
Common surgical procedures on the mitral valve of the heart include modifications to the chordae tendineae. Such interventions are used when there is extensive leaflet prolapse caused by chordae rupture or elongation. Understanding the role of individual chordae tendineae before operating could be helpful to predict whether the mitral valve will be competent at peak systole. Biomechanical modelling and simulation can achieve this goal. We present a method to semi-automatically build a computational model of a mitral valve from micro CT (computed tomography) scans: after manually picking chordae fiducial points, the leaflets are segmented and the boundary conditions as well as the loading conditions are automatically defined. Fast finite element method (FEM) simulation is carried out using Simulation Open Framework Architecture (SOFA) to reproduce leaflet closure at peak systole. We develop three metrics to evaluate simulation results: (i) point-to-surface error with the ground truth reference extracted from the CT image, (ii) coaptation surface area of the leaflets and (iii) an indication of whether the simulated closed leaflets leak. We validate our method on three explanted porcine hearts and show that our model predicts the closed valve surface with point-to-surface error of approximately 1 mm, a reasonable coaptation surface area, and absence of any leak at peak systole (maximum closed pressure). We also evaluate the sensitivity of our model to changes in various parameters (tissue elasticity, mesh accuracy, and the transformation matrix used for CT scan registration). We also measure the influence of the positions of the chordae tendineae on simulation results and show that marginal chordae have a greater influence on the final shape than intermediate chordae. The mitral valve simulation can help the surgeon understand valve behaviour and anticipate the outcome of a procedure. Copyright © 2018 John Wiley & Sons, Ltd.
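As an illustration of the first evaluation metric, the sketch below approximates point-to-surface error by nearest-neighbour distances from simulated leaflet nodes to points sampled on the CT-derived reference surface. Using sampled surface points instead of exact point-to-triangle distances is a simplification made for this example.

import numpy as np
from scipy.spatial import cKDTree

def point_to_surface_error(sim_points, ref_surface_points):
    """Mean and maximum distance from simulated nodes to the sampled reference surface."""
    tree = cKDTree(ref_surface_points)      # reference points sampled from the CT surface
    d, _ = tree.query(sim_points)           # nearest-neighbour distance per simulated node
    return d.mean(), d.max()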
Chen, Szi-Wen; Chen, Yuan-Ho
2015-01-01
In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also propose a novel adaptive thresholding scheme and incorporate it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction could be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz. PMID:26501290
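A software reference for the DWT -> threshold -> IDWT pipeline is easy to express with PyWavelets. The universal threshold with a MAD noise estimate below is an assumed stand-in for the paper's adaptive thresholding scheme, shown only to clarify the data flow that the circuit implements in hardware.

import numpy as np
import pywt

def wavelet_denoise(ecg, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients of a 1-D signal and reconstruct it."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest scale
    thr = sigma * np.sqrt(2.0 * np.log(len(ecg)))           # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(ecg)]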
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
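A minimal sketch of the error-simulation design, assuming a random-forest learner and scikit-learn utilities (neither is specified by the paper): scramble the activities of a chosen fraction of compounds and measure 5-fold cross-validated performance, flagging compounds with large prediction errors.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def degrade_and_score(X, y, error_ratio, seed=0):
    """Simulate experimental errors by randomizing a fraction of activities,
    then evaluate by 5-fold cross-validation. X, y are numpy arrays."""
    rng = np.random.default_rng(seed)
    y_noisy = y.copy()
    idx = rng.choice(len(y), size=int(error_ratio * len(y)), replace=False)
    y_noisy[idx] = rng.permutation(y)[: len(idx)]            # scrambled activities
    pred = cross_val_predict(RandomForestRegressor(n_estimators=200, random_state=0),
                             X, y_noisy, cv=5)
    r2 = 1 - np.sum((y_noisy - pred) ** 2) / np.sum((y_noisy - y_noisy.mean()) ** 2)
    abs_err = np.abs(y_noisy - pred)                         # large values flag suspect compounds
    return r2, idx, abs_err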
Validity Evidence for the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT).
Breimer, Gerben E; Haji, Faizal A; Cinalli, Giuseppe; Hoving, Eelco W; Drake, James M
2017-02-01
Growing demand for transparent and standardized methods for evaluating surgical competence prompted the construction of the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). To provide validity evidence of the NEVAT by reporting on the tool's internal structure and its relationship with surgical expertise during simulation-based training. The NEVAT was used to assess performance of trainees and faculty at an international neuroendoscopy workshop. All participants performed an endoscopic third ventriculostomy (ETV) on a synthetic simulator. Participants were simultaneously scored by 2 raters using the NEVAT procedural checklist and global rating scale (GRS). Evidence of internal structure was collected by calculating interrater reliability and internal consistency of raters' scores. Evidence of relationships with other variables was collected by comparing the ETV performance of experts, experienced trainees, and novices using Jonckheere's test (evidence of construct validity). Thirteen experts, 11 experienced trainees, and 10 novices participated. The interrater reliability by the intraclass correlation coefficient for the checklist and GRS was 0.82 and 0.94, respectively. Internal consistency (Cronbach's α) for the checklist and the GRS was 0.74 and 0.97, respectively. Median scores with interquartile range on the checklist and GRS for novices, experienced trainees, and experts were 0.69 (0.58-0.86), 0.85 (0.63-0.89), and 0.85 (0.81-0.91) and 3.1 (2.5-3.8), 3.7 (2.2-4.3) and 4.6 (4.4-4.9), respectively. Jonckheere's test showed that the median checklist and GRS score increased with performer expertise ( P = .04 and .002, respectively). This study provides validity evidence for the NEVAT to support its use as a standardized method of evaluating neuroendoscopic competence during simulation-based training. Copyright © 2016 by the Congress of Neurological Surgeons
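Cronbach's alpha, one of the internal-structure statistics reported above, has a simple closed form; a minimal numpy sketch follows.

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha; rows = assessed performances, columns = items
    (e.g., checklist items or GRS domains)."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()        # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)          # variance of the total score
    return k / (k - 1) * (1.0 - item_var / total_var)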
Recent Survey and Application of the simSUNDT Software
NASA Astrophysics Data System (ADS)
Persson, G.; Wirdelius, H.
2010-02-01
The simSUNDT software is based on a previously developed program (SUNDT). The latest version has been customized in order to generate realistic synthetic data (including a grain noise model), compatible with a number of off-line analysis software packages. The software consists of a Windows®-based preprocessor and postprocessor together with a mathematical kernel (UTDefect) that deals with the actual mathematical modeling. The model employs various integral transforms and integral equation techniques and enables simulation of the entire ultrasonic testing situation. The model is completely three-dimensional, though the simulated component is two-dimensional, bounded by the scanning surface and, as an option, a planar back surface. It is of great importance that the inspection methods applied are properly validated and that their capability to detect cracks and defects is quantified. In order to achieve this, statistical methods such as Probability of Detection (POD) are often applied, with the ambition to estimate the detectability as a function of defect size. Besides being very expensive, the established procedure based on physical test pieces also tends to introduce a number of possible misalignments between the actual NDT situation to be performed and the experimental simulation of it. The presentation will describe the developed model, which enables simulation of a phased array NDT inspection, and the ambition to use this simulation software to generate POD information. The paper also includes the most recent developments of the model, including some initial experimental validation of the phased array probe model.
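One common way to turn simulated (or experimental) hit/miss inspection outcomes into a POD-versus-defect-size curve is a logistic regression on log defect size; the sketch below is offered as an assumed illustration of that step, not the authors' specific procedure.

import numpy as np
from sklearn.linear_model import LogisticRegression

def pod_curve(defect_size, detected, grid=None):
    """Fit P(detection) as a logistic function of log defect size from hit/miss data."""
    a = np.log(np.asarray(defect_size, float)).reshape(-1, 1)
    model = LogisticRegression().fit(a, np.asarray(detected, int))
    if grid is None:
        grid = np.linspace(min(defect_size), max(defect_size), 100)
    pod = model.predict_proba(np.log(grid).reshape(-1, 1))[:, 1]
    return grid, pod            # defect sizes and estimated probability of detection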
A methodology for the rigorous verification of plasma simulation codes
NASA Astrophysics Data System (ADS)
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: the verification, which is a mathematical issue aimed at assessing that the physical model is correctly solved, and the validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on the verification, which in turn is composed of the code verification, aimed at assessing that a physical model is correctly implemented in a simulation code, and the solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
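For solution verification, Richardson extrapolation estimates the observed order of accuracy and a grid-converged value from three systematically refined solutions; a minimal sketch with illustrative numbers follows.

import numpy as np

def observed_order_and_extrapolation(f_coarse, f_medium, f_fine, r):
    """Estimate the observed order p and the extrapolated value from three solutions
    on grids refined by a constant ratio r."""
    p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return p, f_exact

# Example: a quantity computed on grids with spacing h, h/2, h/4 (r = 2); values illustrative.
print(observed_order_and_extrapolation(1.125, 1.031, 1.008, 2.0))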
Toppi, J; Petti, M; Vecchiato, G; Cincotti, F; Salinari, S; Mattia, D; Babiloni, F; Astolfi, L
2013-01-01
Partial Directed Coherence (PDC) is a spectral multivariate estimator of effective connectivity, relying on the concept of Granger causality. Even if its original definition derived directly from information theory, two modifications were introduced in order to provide better physiological interpretations of the estimated networks: i) normalization of the estimator according to rows, and ii) a squared transformation. In the present paper we investigated the effect of PDC normalization on the performance achieved by applying the statistical validation process to the investigated connectivity patterns under different conditions of signal-to-noise ratio (SNR) and amount of data available for the analysis. Results of the statistical analysis revealed an effect of PDC normalization only on the percentages of type I and type II errors incurred when using the shuffling procedure for the assessment of connectivity patterns. The PDC formulation had no effect on the performance achieved during the validation process executed by means of the asymptotic statistic approach. Moreover, the percentages of both false positives and false negatives committed by the asymptotic statistic approach are always lower than those achieved by the shuffling procedure for each type of normalization.
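For reference, the column-normalized PDC can be computed directly from estimated MVAR coefficients; the sketch below shows that computation (the row-normalized and squared variants discussed in the paper differ only in the final normalization step).

import numpy as np

def pdc(A, freqs, fs=1.0):
    """Partial Directed Coherence from MVAR coefficients A of shape (p, N, N):
    |A_ij(f)| / sqrt(sum_k |A_kj(f)|^2), with A(f) = I - sum_r A_r exp(-i 2 pi f r / fs)."""
    p, N, _ = A.shape
    out = np.zeros((len(freqs), N, N))
    for fi, f in enumerate(freqs):
        Af = np.eye(N, dtype=complex)
        for r in range(p):
            Af -= A[r] * np.exp(-2j * np.pi * f * (r + 1) / fs)
        denom = np.sqrt(np.sum(np.abs(Af) ** 2, axis=0))   # column-wise normalization
        out[fi] = np.abs(Af) / denom
    return out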
Nontechnical skill training and the use of scenarios in modern surgical education.
Brunckhorst, Oliver; Khan, Muhammad S; Dasgupta, Prokar; Ahmed, Kamran
2017-07-01
Nontechnical skills are increasingly recognized as a core cause of surgical errors. Combined with the changing nature of surgical training, this has led to an increase in nontechnical skills research in the literature. This review therefore aims to: define nontechnical skillsets, assess current training methods, explore assessment modalities and suggest future research aims. The literature demonstrates an increasing understanding of the components of nontechnical skills within surgery. This has led to a greater availability of validated training methods, including the use of didactic teaching, e-learning and simulation-based scenarios. In addition, there are now various extensively validated assessment tools for nontechnical skills including NOTSS, the Oxford NOTECHS and OTAS. Finally, there is now more focus on the development of tools which target individual nontechnical skill components and an attempt to understand which of these play a greater role in specific procedures such as laparoscopic or robotic surgery. Current evidence demonstrates various training methods and tools for the training of nontechnical skills. Future research is likely to focus increasingly on individual nontechnical skill components and procedure-specific skills.
NASA Astrophysics Data System (ADS)
da Silva, Felipe das Neves Roque; Alves, José Luis Drummond; Cataldi, Marcio
2018-03-01
This paper aims to validate inflow simulations for the present-day climate at the Água Vermelha Hydroelectric Plant (AVHP, located on the Grande River Basin) based on the Soil Moisture Accounting Procedure (SMAP) hydrological model. In order to provide rainfall data to the SMAP model, the RegCM regional climate model was used with boundary conditions from the MIROC global model. Initially, the present-day climate simulation performed by RegCM was analyzed. It was found that, in terms of rainfall, the model was able to simulate the main patterns observed over South America. A bias correction technique was also used and was essential to reduce errors in the simulated rainfall. Comparison between rainfall simulations from RegCM and MIROC showed improvements when the dynamical downscaling was performed. Then SMAP, a rainfall-runoff hydrological model, was used to simulate inflows at the Água Vermelha Hydroelectric Plant. After calibration with observed rainfall, SMAP simulations were evaluated in two periods different from the one used in calibration. During calibration, SMAP captured the inflow variability observed at AVHP. During the validation periods, the hydrological model obtained better results and statistics with observed rainfall. In spite of some discrepancies, the use of simulated rainfall without bias correction still captured the interannual flow variability; however, applying bias correction to the rainfall simulated by RegCM brought significant improvements to the natural inflows simulated by SMAP: the simulated inflow curve became more similar to the observed inflow, and the statistics improved. Improvements were also noticed in the inflow simulation when the rainfall was provided by the regional climate model rather than the global model. In general, the results obtained so far show that there was added value in rainfall when the regional climate model was compared to the global climate model, and that data from regional models must be bias-corrected in order to improve the results.
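The paper does not state which bias-correction technique was applied; as a generic illustration of the idea, the sketch below implements empirical quantile mapping between simulated and observed rainfall distributions.

import numpy as np

def quantile_map(sim_hist, obs_hist, sim_new):
    """Empirical quantile mapping: replace each simulated value with the observed value
    at the same quantile of the historical distributions. Purely an illustrative stand-in."""
    q = np.linspace(0.01, 0.99, 99)
    sim_q = np.quantile(sim_hist, q)
    obs_q = np.quantile(obs_hist, q)
    return np.interp(sim_new, sim_q, obs_q)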
NASA Astrophysics Data System (ADS)
Barone, Alessandro; Fenton, Flavio; Veneziani, Alessandro
2017-09-01
An accurate estimation of cardiac conductivities is critical in computational electro-cardiology, yet experimental results in the literature significantly disagree on the values and ratios between longitudinal and tangential coefficients. These are known to have a strong impact on the propagation of potential particularly during defibrillation shocks. Data assimilation is a procedure for merging experimental data and numerical simulations in a rigorous way. In particular, variational data assimilation relies on the least-square minimization of the misfit between simulations and experiments, constrained by the underlying mathematical model, which in this study is represented by the classical Bidomain system, or its common simplification given by the Monodomain problem. Operating on the conductivity tensors as control variables of the minimization, we obtain a parameter estimation procedure. As the theory of this approach currently provides only an existence proof and it is not informative for practical experiments, we present here an extensive numerical simulation campaign to assess practical critical issues such as the size and the location of the measurement sites needed for in silico test cases of potential experimental and realistic settings. This will be finalized with a real validation of the variational data assimilation procedure. Results indicate the presence of lower and upper bounds for the number of sites which guarantee an accurate and minimally redundant parameter estimation, the location of sites being generally non critical for properly designed experiments. An effective combination of parameter estimation based on the Monodomain and Bidomain models is tested for the sake of computational efficiency. Parameter estimation based on the Monodomain equation potentially leads to the accurate computation of the transmembrane potential in real settings.
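A toy sketch of the variational estimation loop, assuming a generic forward_model callable that stands in for a Monodomain or Bidomain solver (this interface is hypothetical): the conductivities are adjusted to minimize the least-squares misfit at the chosen measurement sites.

import numpy as np
from scipy.optimize import minimize

def misfit(conductivities, forward_model, measurement_sites, observed):
    """Least-squares misfit between simulated and measured potentials at the sites."""
    simulated = forward_model(conductivities, measurement_sites)
    return np.sum((simulated - observed) ** 2)

def estimate_conductivities(forward_model, measurement_sites, observed, sigma0):
    """Variational-style parameter estimation over the conductivity tensor entries."""
    res = minimize(misfit, sigma0,
                   args=(forward_model, measurement_sites, observed),
                   method="L-BFGS-B",
                   bounds=[(1e-4, 10.0)] * len(sigma0))   # keep conductivities positive
    return res.x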
Wu, Xiaohui; Yang, Yang; Wu, Gaoming; Mao, Juan; Zhou, Tao
2016-01-01
Applications of activated sludge models (ASM) in simulating industrial biological wastewater treatment plants (WWTPs) are still difficult due to refractory and complex components in influents as well as diversity in activated sludges. In this study, an ASM3 modeling study was conducted to simulate and optimize a practical coking wastewater treatment plant (CWTP). First, respirometric characterizations of the coking wastewater and CWTP biomasses were conducted to determine the specific kinetic and stoichiometric model parameters for the consecutive aeration-anoxic-aeration (O-A/O) biological process. All ASM3 parameters were further estimated and calibrated through cross validation by the model dynamic simulation procedure. Consequently, an ASM3 model was successfully established that accurately simulates the CWTP performance in removing COD and NH4-N. An optimized CWTP operation condition could be proposed, reducing the operation cost from 6.2 to 5.5 €/m³ of wastewater. This study is expected to provide a useful reference for mathematical simulation of practical industrial WWTPs. Copyright © 2015 Elsevier Ltd. All rights reserved.
System Engineering Strategy for Distributed Multi-Purpose Simulation Architectures
NASA Technical Reports Server (NTRS)
Bhula, Dlilpkumar; Kurt, Cindy Marie; Luty, Roger
2007-01-01
This paper describes the system engineering approach used to develop distributed multi-purpose simulations. The multi-purpose simulation architecture focuses on user needs, operations, flexibility, cost and maintenance. This approach was used to develop an International Space Station (ISS) simulator, which is called the International Space Station Integrated Simulation (ISIS)1. The ISIS runs unmodified ISS flight software, system models, and the astronaut command and control interface in an open system design that allows for rapid integration of multiple ISS models. The initial intent of ISIS was to provide a distributed system that allows access to ISS flight software and models for the creation, test, and validation of crew and ground controller procedures. This capability reduces the cost and scheduling issues associated with utilizing standalone simulators in fixed locations, and facilitates discovering unknowns and errors earlier in the development lifecycle. Since its inception, the flexible architecture of the ISIS has allowed its purpose to evolve to include ground operator system and display training, flight software modification testing, and as a realistic test bed for Exploration automation technology research and development.
Development and Utility of a Piloted Flight Simulator for Icing Effects Training
NASA Technical Reports Server (NTRS)
Ratvasky, Thomas P.; Ranaudo, Richard J.; Barnhart, Billy P.; Dickes, Edward G.; Gingras, David R.
2003-01-01
A piloted flight simulator called the Ice Contamination Effects Flight Training Device (ICEFTD), which uses low-cost desktop components and a generic cockpit replication, is being developed. The purpose of this device is to demonstrate the effectiveness of its use for training pilots to recognize and recover from aircraft handling anomalies that result from airframe ice formations. High-fidelity flight simulation models for various baseline (non-iced) and iced configurations were developed from wind tunnel tests of a subscale DeHavilland DHC-6 Twin Otter aircraft model. These simulation models were validated with flight test data from the NASA Twin Otter Icing Research Aircraft, which included the effects of ice on wing and tail stall characteristics. These simulation models are being implemented into an ICEFTD that will provide representative aircraft characteristics due to airframe icing. Scenario-based exercises are being constructed to give an operational flavor to the simulation. Training pilots will learn to recognize iced aircraft characteristics from the baseline, and will practice and apply appropriate recovery procedures to a handling event.
The LapSim virtual reality simulator: promising but not yet proven.
Fairhurst, Katherine; Strickland, Andrew; Maddern, Guy
2011-02-01
The acquisition of technical skills using surgical simulators is an area of active research and rapidly evolving technology. The LapSim is a virtual reality simulator that currently allows practice of basic laparoscopic skills and some procedures. To date, no reviews have been published with reference to a single virtual reality simulator. A PubMed search was performed using the keyword "LapSim," with further papers identified from the citations of original search articles. Use of the LapSim to develop surgical skills has yielded generally positive results, although inconsistencies exist. Data regarding the transferability of learned skills to the operative environment are encouraging, as is the validation work, particularly the use of a combination of measured parameters to produce an overall comparative performance score. Although the LapSim currently does not have any proven significant advantages over video trainers in terms of basic skills instruction and although the results of validation studies are variable, the potential for such technology to have a huge impact on surgical training is apparent. Work to determine standardized learning curves and proficiency criteria for different levels of trainees is incomplete. Moreover, defining which performance parameters measured by the LapSim accurately determine laparoscopic skill is complex. Further technological advances will undoubtedly improve the efficacy of the LapSim, and the results of large multicenter trials are anticipated.
Zhang, Zeshu; Pei, Jing; Wang, Dong; Gan, Qi; Ye, Jian; Yue, Jian; Wang, Benzhong; Povoski, Stephen P; Martin, Edward W; Hitchcock, Charles L; Yilmaz, Alper; Tweedle, Michael F; Shao, Pengfei; Xu, Ronald X
2016-01-01
Surgical resection remains the primary curative treatment for many early-stage cancers, including breast cancer. The development of intraoperative guidance systems for identifying all sites of disease and improving the likelihood of complete surgical resection is an area of active ongoing research, as this can lead to a decrease in the need of subsequent additional surgical procedures. We develop a wearable goggle navigation system for dual-mode optical and ultrasound imaging of suspicious lesions. The system consists of a light source module, a monochromatic CCD camera, an ultrasound system, a Google Glass, and a host computer. It is tested in tissue-simulating phantoms and an ex vivo human breast tissue model. Our experiments demonstrate that the surgical navigation system provides useful guidance for localization and core needle biopsy of simulated tumor within the tissue-simulating phantom, as well as a core needle biopsy and subsequent excision of Indocyanine Green (ICG)-fluorescing sentinel lymph nodes. Our experiments support the contention that this wearable goggle navigation system can be potentially very useful and fully integrated by the surgeon for optimizing many aspects of oncologic surgery. Further engineering optimization and additional in vivo clinical validation work is necessary before such a surgical navigation system can be fully realized in the everyday clinical setting.
Toma, Milan; Bloodworth, Charles H; Einstein, Daniel R; Pierce, Eric L; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S
2016-12-01
The diversity of mitral valve (MV) geometries and multitude of surgical options for correction of MV diseases necessitates the use of computational modeling. Numerical simulations of the MV would allow surgeons and engineers to evaluate repairs, devices, procedures, and concepts before performing them and before moving on to more costly testing modalities. Constructing, tuning, and validating these models rely upon extensive in vitro characterization of valve structure, function, and response to change due to diseases. Micro-computed tomography (μCT) allows for unmatched spatial resolution for soft tissue imaging. However, it is still technically challenging to obtain an accurate geometry of the diastolic MV. We discuss here the development of a novel technique for treating MV specimens with glutaraldehyde fixative in order to minimize geometric distortions in preparation for μCT scanning. The technique provides a resulting MV geometry which is significantly more detailed in chordal structure, accurate in leaflet shape, and closer to its physiological diastolic geometry. In this paper, computational fluid-structure interaction (FSI) simulations are used to show the importance of more detailed subject-specific MV geometry with 3D chordal structure to simulate a proper closure validated against μCT images of the closed valve. Two computational models, before and after use of the aforementioned technique, are used to simulate closure of the MV.
NOViSE: a virtual natural orifice transluminal endoscopic surgery simulator.
Korzeniowski, Przemyslaw; Barrow, Alastair; Sodergren, Mikael H; Hald, Niels; Bello, Fernando
2016-12-01
Natural orifice transluminal endoscopic surgery (NOTES) is a novel technique in minimally invasive surgery whereby a flexible endoscope is inserted via a natural orifice to gain access to the abdominal cavity, leaving no external scars. This innovative use of flexible endoscopy creates many new challenges and is associated with a steep learning curve for clinicians. We developed NOViSE-the first force-feedback-enabled virtual reality simulator for NOTES training supporting a flexible endoscope. The haptic device is custom-built, and the behaviour of the virtual flexible endoscope is based on an established theoretical framework-the Cosserat theory of elastic rods. We present the application of NOViSE to the simulation of a hybrid trans-gastric cholecystectomy procedure. Preliminary results of face, content and construct validation have previously shown that NOViSE delivers the required level of realism for training of endoscopic manipulation skills specific to NOTES. VR simulation of NOTES procedures can contribute to surgical training and improve the educational experience without putting patients at risk, raising ethical issues or requiring expensive animal or cadaver facilities. In the context of an experimental technique, NOViSE could potentially facilitate NOTES development and contribute to its wider use by keeping practitioners up to date with this novel surgical technique. NOViSE is a first prototype, and the initial results indicate that it provides promising foundations for further development.
Bloemen-van Gurp, Esther J; Murrer, Lars H P; Haanstra, Björk K C; van Gils, Francis C J M; Dekker, Andre L A J; Mijnheer, Ben J; Lambin, Philippe
2009-01-01
In vivo dosimetry during brachytherapy of the prostate with (125)I seeds is challenging because of the high dose gradients and low photon energies involved. We present the results of a study using metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters to evaluate the dose in the urethra after a permanent prostate implantation procedure. Phantom measurements were made to validate the measurement technique, determine the measurement accuracy, and define action levels for clinical measurements. Patient measurements were performed with a MOSFET array in the urinary catheter immediately after the implantation procedure. A CT scan was performed, and dose values, calculated by the treatment planning system, were compared to in vivo dose values measured with MOSFET dosimeters. Corrections for temperature dependence of the MOSFET array response and photon attenuation in the catheter on the in vivo dose values are necessary. The overall uncertainty in the measurement procedure, determined in a simulation experiment, is 8.0% (1 SD). In vivo dose values were obtained for 17 patients. In the high-dose region (> 100 Gy), calculated and measured dose values agreed within 1.7% +/- 10.7% (1 SD). In the low-dose region outside the prostate (< 100 Gy), larger deviations occurred. MOSFET detectors are suitable for in vivo dosimetry during (125)I brachytherapy of prostate cancer. An action level of +/- 16% (2 SD) for detection of errors in the implantation procedure is achievable after validation of the detector system and measurement conditions.
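The quoted action level follows from combining independent uncertainty components in quadrature and taking two standard deviations; the component values in the sketch below are placeholders, not those of the study.

import numpy as np

# Hypothetical relative uncertainty components (1 SD, %) of a MOSFET-based procedure.
components = {"calibration": 5.0, "temperature_correction": 3.0,
              "catheter_attenuation": 4.0, "positioning": 4.0}

combined_1sd = np.sqrt(sum(u**2 for u in components.values()))   # quadrature sum
action_level = 2.0 * combined_1sd                                # 2 SD action level
print(f"combined 1 SD = {combined_1sd:.1f}%, action level = +/-{action_level:.1f}%")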
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.
Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational efforts required. The development of scale-bridging models is therefore necessary, to find a good compromise between computational efforts and the physical-chemical modeling precision. This paper presents a procedure for scale-bridging modeling of coal devolatilization, in the presence of experimental error, that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed, to improve the understanding of the uncertainty associated to the devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven to be consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.
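As a generic illustration of the GPR step (with hypothetical temperature/yield points, not the reactor data), scikit-learn can fit a Gaussian process and return a predictive mean with an uncertainty band:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical devolatilization data: gas temperature (K) vs measured volatile yield fraction.
T = np.array([1200., 1350., 1500., 1650., 1800.]).reshape(-1, 1)
yield_frac = np.array([0.42, 0.51, 0.58, 0.62, 0.64])

kernel = 1.0 * RBF(length_scale=200.0) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(T, yield_frac)

T_grid = np.linspace(1150, 1850, 50).reshape(-1, 1)
mean, std = gpr.predict(T_grid, return_std=True)   # predictive mean and uncertainty band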
Validation of a national hydrological model
NASA Astrophysics Data System (ADS)
McMillan, H. K.; Booker, D. J.; Cattoën, C.
2016-10-01
Nationwide predictions of flow time-series are valuable for development of policies relating to environmental flows, calculating reliability of supply to water users, or assessing risk of floods or droughts. This breadth of model utility is possible because various hydrological signatures can be derived from simulated flow time-series. However, producing national hydrological simulations can be challenging due to strong environmental diversity across catchments and a lack of data available to aid model parameterisation. A comprehensive and consistent suite of test procedures to quantify spatial and temporal patterns in performance across various parts of the hydrograph is described and applied to quantify the performance of an uncalibrated national rainfall-runoff model of New Zealand. Flow time-series observed at 485 gauging stations were used to calculate Nash-Sutcliffe efficiency and percent bias when simulating between-site differences in daily series, between-year differences in annual series, and between-site differences in hydrological signatures. The procedures were used to assess the benefit of applying a correction to the modelled flow duration curve based on an independent statistical analysis. They were used to aid understanding of climatological, hydrological and model-based causes of differences in predictive performance by assessing multiple hypotheses that describe where and when the model was expected to perform best. As the procedures produce quantitative measures of performance, they provide an objective basis for model assessment that could be applied when comparing observed daily flow series with competing simulated flow series from any region-wide or nationwide hydrological model. Model performance varied in space and time with better scores in larger and medium-wet catchments, and in catchments with smaller seasonal variations. Surprisingly, model performance was not sensitive to aquifer fraction or rain gauge density.
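The two performance scores used throughout the assessment have simple closed forms; a minimal numpy sketch follows (sign conventions for percent bias vary between studies).

import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than the mean observed flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias of total simulated flow volume relative to observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)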
Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.; ...
2017-06-03
Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational efforts required. The development of scale-bridging models is therefore necessary, to find a good compromise between computational efforts and the physical-chemical modeling precision. This paper presents a procedure for scale-bridging modeling of coal devolatilization, in the presence of experimental error, that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed, to improve the understanding of the uncertainty associated to the devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven to be consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.
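The abstract pairs consistency analysis with Gaussian-Process Regression over experimental yield measurements. A minimal sketch of that regression step using scikit-learn is given below; the temperature/yield points, kernel choice and noise level are invented for illustration and are not the authors' data or code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical devolatilization data: peak particle temperature (K) vs. volatile yield (-).
T = np.array([1200., 1350., 1450., 1550., 1650., 1750.]).reshape(-1, 1)
yield_obs = np.array([0.42, 0.48, 0.53, 0.57, 0.59, 0.60])

# RBF kernel for the smooth trend plus a white-noise term standing in for experimental error.
kernel = 1.0 * RBF(length_scale=200.0) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(T, yield_obs)

# Predictive mean and uncertainty band at query temperatures.
T_query = np.linspace(1150, 1800, 5).reshape(-1, 1)
mean, std = gpr.predict(T_query, return_std=True)
for t, m, s in zip(T_query.ravel(), mean, std):
    print(f"T = {t:6.0f} K  yield = {m:.3f} +/- {2 * s:.3f} (2 sigma)")
```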
House, Joseph B.; Dooley-Hash, Suzanne; Kowalenko, Terry; Sikavitsas, Athina; Seeyave, Desiree M.; Younger, John G.; Hamstra, Stanley J.; Nypaver, Michele M.
2012-01-01
Introduction Real-time assessment of operator performance during procedural simulation is a common practice that requires undivided attention by 1 or more reviewers, potentially over many repetitions of the same case. Objective To determine whether reviewers display better interrater agreement of procedural competency when observing recorded, rather than live, performance; and to develop an assessment tool for pediatric rapid sequence intubation (pRSI). Methods A framework of a previously established Objective Structured Assessment of Technical Skills (OSATS) tool was modified for pRSI. Emergency medicine residents (postgraduate year 1–4) were prospectively enrolled in a pRSI simulation scenario and evaluated by 2 live raters using the modified tool. Sessions were videotaped and reviewed by the same raters at least 4 months later. Raters were blinded to their initial rating. Interrater agreement was determined by using the Krippendorff generalized concordance method. Results Overall interrater agreement for live review was 0.75 (95% confidence interval [CI], 0.72–0.78) and for video was 0.79 (95% CI, 0.73–0.82). Live review was significantly superior to video review in only 1 of the OSATS domains (Preparation) and was equivalent in the other domains. Intrarater agreement between the live and video evaluation was very good, greater than 0.75 for all raters, with a mean of 0.81 (95% CI, 0.76–0.85). Conclusion The modified OSATS assessment tool demonstrated some evidence of validity in discriminating among levels of resident experience and high interreviewer reliability. With this tool, intrareviewer reliability was high between live and 4-months' delayed video review of the simulated procedure, which supports feasibility of delayed video review in resident assessment. PMID:23997874
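The study quantifies interrater agreement with the Krippendorff generalized concordance method. As a simpler stand-in (not the statistic the authors used), the sketch below computes linearly weighted Cohen's kappa between two raters' ordinal scores; the ratings are invented.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal scores (1-5) from two raters for the same ten performances.
rater_live  = [3, 4, 2, 5, 4, 3, 3, 4, 2, 5]
rater_video = [3, 4, 3, 5, 4, 3, 2, 4, 2, 4]

# Linearly weighted kappa penalizes near-misses less than distant disagreements.
kappa = cohen_kappa_score(rater_live, rater_video, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```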
Collapse of a Liquid Column: Numerical Simulation and Experimental Validation
NASA Astrophysics Data System (ADS)
Cruchaga, Marcela A.; Celentano, Diego J.; Tezduyar, Tayfun E.
2007-03-01
This paper is focused on the numerical and experimental analyses of the collapse of a liquid column. The measurements of the interface position in a set of experiments carried out with shampoo and water for two different initial column aspect ratios are presented together with the corresponding numerical predictions. The experimental procedure was found to provide acceptable recurrence in the observation of the interface evolution. Basic models describing some of the relevant physical aspects, e.g. wall friction and turbulence, are included in the simulations. Numerical experiments are conducted to evaluate the influence of the parameters involved in the modeling by comparing the results with the data from the measurements. The numerical predictions reasonably describe the physical trends.
Matalon, Shanna A; Chikarmane, Sona A; Yeh, Eren D; Smith, Stacy E; Mayo-Smith, William W; Giess, Catherine S
2018-03-19
Increased attention to quality and safety has led to a re-evaluation of the classic apprenticeship model for procedural training. Many have proposed simulation as a supplementary teaching tool. The purpose of this study was to assess radiology resident exposure to procedural training and procedural simulation. An IRB-exempt online survey was distributed to current radiology residents in the United States by e-mail. Survey results were summarized using frequency and percentages. Chi-square tests were used for statistical analysis where appropriate. A total of 353 current residents completed the survey. 37% (n = 129/353) of respondents had never used procedure simulation. Of the residents who had used simulation, most did not do so until after having already performed procedures on patients (59%, n = 132/223). The presence of a dedicated simulation center was reported by over half of residents (56%, n = 196/353) and was associated with prior simulation experience (P = 0.007). Residents who had not had procedural simulation were somewhat likely or highly likely (3 and 4 on a 4-point Likert-scale) to participate if it were available (81%, n = 104/129). Simulation training was associated with higher comfort levels in performing procedures (P < 0.001). Although procedural simulation training is associated with higher comfort levels when performing procedures, there is variable use in radiology resident training and its use is not currently optimized. Given the increased emphasis on patient safety, these results suggest the need to increase procedural simulation use during residency, including an earlier introduction to simulation before patient exposure. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using the MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance with about 40-85% reduction in 1-NSE, and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
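A minimal sketch of the screening stage, in the spirit of an LH-OAT procedure: Latin-hypercube base points plus one-at-a-time perturbations of each parameter. The toy response function, parameter count and ranges are assumptions, not the CREST setup used in the study.

```python
import numpy as np
from scipy.stats import qmc

def toy_model(x):
    """Stand-in for a model response (e.g. 1 - NSE); x holds 4 parameters scaled to [0, 1]."""
    return x[0] ** 2 + 0.5 * x[1] + 0.05 * np.sin(6 * x[2]) + 0.0 * x[3]

n_base, n_par, step = 20, 4, 0.05
base = qmc.LatinHypercube(d=n_par, seed=1).random(n_base)   # base points in [0, 1]^4

sens = np.zeros(n_par)
for x in base:
    f0 = toy_model(x)
    for j in range(n_par):
        x_pert = x.copy()
        x_pert[j] = min(x_pert[j] + step, 1.0)   # perturb one parameter at a time
        sens[j] += abs(toy_model(x_pert) - f0)
sens /= n_base

# Parameters with near-zero mean OAT effect would be screened out before calibration.
for j, s in enumerate(sens):
    print(f"parameter {j}: mean |OAT effect| = {s:.4f}")
```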
Accelerating simulation for the multiple-point statistics algorithm using vector quantization
NASA Astrophysics Data System (ADS)
Zuo, Chen; Pan, Zhibin; Liang, Hao
2018-03-01
Multiple-point statistics (MPS) is a prominent algorithm to simulate categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerating simulation for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproductions, and spatial uncertainty. Further demonstrations consist of a 2D four facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of solving multifacies, nonstationarity, and 3D simulations based on 2D TIs.
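A rough sketch of the compression idea: extract template patterns from a toy binary training image and quantize them into a small k-means codebook, so that pattern lookup queries the codebook instead of the full database. The image, template size and codebook size are illustrative; this is not the authors' VQ-MPS implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
ti = (rng.random((60, 60)) < 0.3).astype(float)   # toy binary training image
t = 5                                             # 5 x 5 template

# Extract all t x t patterns from the training image (the pattern database).
patterns = np.array([ti[i:i + t, j:j + t].ravel()
                     for i in range(ti.shape[0] - t + 1)
                     for j in range(ti.shape[1] - t + 1)])

# Vector quantization: compress the database into a 64-entry codebook.
codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(patterns)

# Simulation-time query: find the codebook entry closest to a data event.
query = patterns[rng.integers(len(patterns))]
idx = codebook.predict(query.reshape(1, -1))[0]
print("nearest codebook pattern:\n", codebook.cluster_centers_[idx].reshape(t, t).round(2))
```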
Grover, Samir C; Garg, Ankit; Scaffidi, Michael A; Yu, Jeffrey J; Plener, Ian S; Yong, Elaine; Cino, Maria; Grantcharov, Teodor P; Walsh, Catharine M
2015-12-01
GI endoscopy simulation-based training augments early clinical performance; however, the optimal manner by which to deliver training is unknown. We aimed to validate a simulation-based structured comprehensive curriculum (SCC) designed to teach technical, cognitive, and integrative competencies in colonoscopy. Single-blinded, randomized, controlled trial. Endoscopic simulation course at an academic hospital. Thirty-three novice endoscopists were allocated to an SCC group or self-regulated learning (SRL) group. The SCC group received a curriculum consisting of 6 hours of didactic lectures and 8 hours of virtual reality simulation-based training with expert feedback. The SRL group was provided a list of desired objectives and was instructed to practice on the simulator for an equivalent time (8 hours). Clinical transfer was assessed during 2 patient colonoscopies using the Joint Advisory Group Direct Observation of Procedural Skills (JAG DOPS) scale. Secondary outcome measures included differences in procedural knowledge, immediate post-training simulation performance, and delayed post-training (4-6 weeks) performance during an integrated scenario test on the JAG DOPS communication and integrated scenario global rating scales. There was no significant difference in baseline or post-training performance on the simulator task. The SCC group performed superiorly during their first and second clinical colonoscopies. Additionally, the SCC group demonstrated significantly better knowledge and colonoscopy-specific performance, communication, and global performance during the integrated scenario. We were unable to measure SRL participants' effort outside of mandatory training. In addition, feedback metrics and number of available simulation cases are limited. These results support integration of endoscopy simulation into a structured curriculum incorporating instructional feedback and complementary didactic knowledge as a means to augment technical, cognitive, and integrative skills acquisition, as compared with SRL on virtual reality simulators. ( NCT01991522.) Copyright © 2015 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gariano, Stefano Luigi; Brunetti, Maria Teresa; Iovine, Giulio; Melillo, Massimo; Peruccacci, Silvia; Terranova, Oreste Giuseppe; Vennari, Carmela; Guzzetti, Fausto
2015-04-01
Prediction of rainfall-induced landslides can rely on empirical rainfall thresholds. These are obtained from the analysis of past rainfall events that have (or have not) resulted in slope failures. Accurate prediction requires reliable thresholds, which need to be validated before their use in operational landslide warning systems. Despite the clear relevance of validation, only a few studies have addressed the problem, and have proposed and tested robust validation procedures. We propose a validation procedure that allows for the definition of optimal thresholds for early warning purposes. The validation is based on contingency table, skill scores, and receiver operating characteristic (ROC) analysis. To establish the optimal threshold, which maximizes the correct landslide predictions and minimizes the incorrect predictions, we propose an index that results from the linear combination of three weighted skill scores. Selection of the optimal threshold depends on the scope and the operational characteristics of the early warning system. The choice is made by selecting appropriately the weights, and by searching for the optimal (maximum) value of the index. We discuss weakness in the validation procedure caused by the inherent lack of information (epistemic uncertainty) on landslide occurrence typical of large study areas. When working at the regional scale, landslides may have occurred and may have not been reported. This results in biases and variations in the contingencies and the skill scores. We introduce two parameters to represent the unknown proportion of rainfall events (above and below the threshold) for which landslides occurred and went unreported. We show that even a very small underestimation in the number of landslides can result in a significant decrease in the performance of a threshold measured by the skill scores. We show that the variations in the skill scores are different for different uncertainty of events above or below the threshold. This has consequences in the ROC analysis. We applied the proposed procedure to a catalogue of rainfall conditions that have resulted in landslides, and to a set of rainfall events that - presumably - have not resulted in landslides, in Sicily, in the period 2002-2012. First, we determined regional event duration-cumulated event (ED) rainfall thresholds for shallow landslide occurrence using 200 rainfall conditions that have resulted in 223 shallow landslides in Sicily in the period 2002-2011. Next, we validated the thresholds using 29 rainfall conditions that have triggered 42 shallow landslides in Sicily in 2012, and 1250 rainfall events that presumably have not resulted in landslides in the same year. We performed a back analysis simulating the use of the thresholds in a hypothetical landslide warning system operating in 2012.
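A small sketch of the contingency-table bookkeeping behind such a validation: hits, misses, false alarms and correct negatives are turned into common skill scores and combined into a weighted index. The counts, the particular scores and the weights below are assumptions for illustration; the abstract does not specify which three skill scores the authors combine.

```python
def skill_scores(hits, misses, false_alarms, correct_negatives):
    pod = hits / (hits + misses)                               # probability of detection
    pofa = false_alarms / (false_alarms + correct_negatives)   # probability of false alarm
    hk = pod - pofa                                            # Hanssen-Kuipers skill score
    csi = hits / (hits + misses + false_alarms)                # critical success index
    return pod, pofa, hk, csi

# Hypothetical validation counts for one candidate ED rainfall threshold.
pod, pofa, hk, csi = skill_scores(hits=35, misses=7, false_alarms=110, correct_negatives=1140)

# Weighted index in the spirit of the abstract; weights reflect warning-system priorities.
w = (0.5, 0.3, 0.2)                                            # assumed weights
index = w[0] * pod + w[1] * (1 - pofa) + w[2] * csi
print(f"POD={pod:.2f} POFA={pofa:.2f} HK={hk:.2f} CSI={csi:.2f} index={index:.2f}")
```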
Experiment Analysis and Modelling of Compaction Behaviour of Ag60Cu30Sn10 Mixed Metal Powders
NASA Astrophysics Data System (ADS)
Zhou, Mengcheng; Huang, Shangyu; Liu, Wei; Lei, Yu; Yan, Shiwei
2018-03-01
A novel process method combining powder compaction and sintering was employed to fabricate thin sheets of cadmium-free silver-based filler metals, and the compaction densification behaviour of Ag60Cu30Sn10 mixed metal powders was investigated experimentally. Based on the equivalent density method, the density-dependent Drucker-Prager Cap (DPC) model was introduced to model the powder compaction behaviour. Various experimental procedures were completed to determine the model parameters. The friction coefficients in lubricated and unlubricated dies were experimentally determined. The determined material parameters were validated by experiments and numerical simulation of the powder compaction process using a user subroutine (USDFLD) in ABAQUS/Standard. The good agreement between the simulated and experimental results indicates that the determined model parameters are able to describe the compaction behaviour of the multicomponent mixed metal powders, which can be further used for process optimization simulations.
NASA Astrophysics Data System (ADS)
WANG, J.; Kim, J.
2014-12-01
In this study, the sensitivity of pollutant dispersion to the turbulent Schmidt number (Sct) was investigated in a street canyon using a computational fluid dynamics (CFD) model. For this, numerical simulations with systematically varied Sct were performed and the CFD model results were validated against wind-tunnel measurement data. The results showed that the root mean square error (RMSE) was quite dependent on Sct, and the dispersion patterns of the non-reactive scalar pollutant differed considerably among simulations with different Sct. The RMSE was lowest in the case of Sct = 0.35, and the apparent dispersion pattern was most similar to the wind-tunnel data in the case of Sct = 0.35. Also, numerical simulations using spatially weighted Sct were additionally performed in order to best reproduce the wind-tunnel data. The detailed method and procedure used to find the best reproduction will be presented.
A constitutive model and numerical simulation of sintering processes at macroscopic level
NASA Astrophysics Data System (ADS)
Wawrzyk, Krzysztof; Kowalczyk, Piotr; Nosewicz, Szymon; Rojek, Jerzy
2018-01-01
This paper presents modelling of both single- and double-phase powder sintering processes at the macroscopic level. In particular, its constitutive formulation, numerical implementation and numerical tests are described. The macroscopic constitutive model is based on the assumption that the sintered material is a continuous medium. The parameters of the constitutive model for material under sintering are determined by simulation of sintering at the microscopic level using a micro-scale model. Numerical tests were carried out for a cylindrical specimen under hydrostatic and uniaxial pressure. Results of the macroscopic analysis are compared against the microscopic model results. Moreover, the numerical simulations are validated by comparison with experimental results. The simulations and preparation of the model are carried out with Abaqus FEA, a software package for finite element analysis and computer-aided engineering. The mechanical model is defined by the user procedure "Vumat", which was developed by the first author in the Fortran programming language. The modelling presented in the paper can be used to optimize and to better understand the process.
2016-01-01
Although qualitative strategies based on direct injection mass spectrometry (DIMS) have recently emerged as an alternative for the rapid classification of food samples, the potential of these approaches in quantitative tasks has scarcely been addressed to date. In this paper, the applicability of different multivariate regression procedures to data collected by DIMS from simulated mixtures has been evaluated. The most relevant factors affecting quantitation, such as random noise, the number of calibration samples, type of validation, mixture complexity and similarity of mass spectra, were also considered and comprehensively discussed. Based on the conclusions drawn from simulated data, and as an example of application, experimental mass spectral fingerprints collected by direct thermal desorption coupled to mass spectrometry were used for the quantitation of major volatiles in Thymus zygis subsp. zygis chemotypes. The results obtained, validated with the direct thermal desorption coupled to gas chromatography–mass spectrometry method here used as a reference, show the potential of DIMS approaches for the fast and precise quantitative profiling of volatiles in foods. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644978
Determining procedures for simulation-based training in radiology: a nationwide needs assessment.
Nayahangan, Leizl Joy; Nielsen, Kristina Rue; Albrecht-Beste, Elisabeth; Bachmann Nielsen, Michael; Paltved, Charlotte; Lindorff-Larsen, Karen Gilboe; Nielsen, Bjørn Ulrik; Konge, Lars
2018-06-01
New training modalities such as simulation are widely accepted in radiology; however, development of effective simulation-based training programs is challenging. They are often unstructured and based on convenience or coincidence. The study objective was to perform a nationwide needs assessment to identify and prioritize technical procedures that should be included in a simulation-based curriculum. A needs assessment using the Delphi method was completed among 91 key leaders in radiology. Round 1 identified technical procedures that radiologists should learn. Round 2 explored frequency of procedure, number of radiologists performing the procedure, risk and/or discomfort for patients, and feasibility for simulation. Round 3 was elimination and prioritization of procedures. Response rates were 67 %, 70 % and 66 %, respectively. In Round 1, 22 technical procedures were included. Round 2 resulted in pre-prioritization of procedures. In round 3, 13 procedures were included in the final prioritized list. The three highly prioritized procedures were ultrasound-guided (US) histological biopsy and fine-needle aspiration, US-guided needle puncture and catheter drainage, and basic abdominal ultrasound. A needs assessment identified and prioritized 13 technical procedures to include in a simulation-based curriculum. The list may be used as guide for development of training programs. • Simulation-based training can supplement training on patients in radiology. • Development of simulation-based training should follow a structured approach. • The CAMES Needs Assessment Formula explores needs for simulation training. • A national Delphi study identified and prioritized procedures suitable for simulation training. • The prioritized list serves as guide for development of courses in radiology.
GLOBAL PROPERTIES OF FULLY CONVECTIVE ACCRETION DISKS FROM LOCAL SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bodo, G.; Ponzo, F.; Rossi, P.
2015-08-01
We present an approach to deriving global properties of accretion disks from the knowledge of local solutions derived from numerical simulations based on the shearing box approximation. The approach consists of a two-step procedure. First, a local solution valid for all values of the disk height is constructed by piecing together an interior solution obtained numerically with an analytical exterior radiative solution. The matching is obtained by assuming hydrostatic balance and radiative equilibrium. Although in principle the procedure can be carried out in general, it simplifies considerably when the interior solution is fully convective. In these cases, the construction is analogous to the derivation of the Hayashi tracks for protostars. The second step consists of piecing together the local solutions at different radii to obtain a global solution. Here we use the symmetry of the solutions with respect to the defining dimensionless numbers—in a way similar to the use of homology relations in stellar structure theory—to obtain the scaling properties of the various disk quantities with radius.
Attention Recognition in EEG-Based Affective Learning Research Using CFS+KNN Algorithm.
Hu, Bin; Li, Xiaowei; Sun, Shuting; Ratcliffe, Martyn
2018-01-01
The research detailed in this paper focuses on the processing of Electroencephalography (EEG) data to identify attention during the learning process. The identification of affect using our procedures is integrated into a simulated distance learning system that provides feedback to the user with respect to attention and concentration. The authors propose a classification procedure that combines correlation-based feature selection (CFS) and a k-nearest-neighbor (KNN) data mining algorithm. To evaluate the CFS+KNN algorithm, it was tested against the CFS+C4.5 algorithm and other classification algorithms. The classification performance was measured 10 times with different 3-fold cross validation data. The data was derived from 10 subjects while they were attempting to learn material in a simulated distance learning environment. A self-assessment model of self-report was used with a single valence to evaluate attention on 3 levels (high, neutral, low). It was found that CFS+KNN had a much better performance, giving the highest correct classification rate (CCR) of % for the valence dimension divided into three classes.
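A rough sketch of the pipeline on synthetic data: a plain correlation filter keeps the features most correlated with the attention label (a simplified stand-in for CFS, which also penalizes redundancy between features), followed by a KNN classifier scored with 3-fold cross-validation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(90, 40))                 # 90 epochs x 40 EEG features (synthetic)
y = rng.integers(0, 3, size=90)               # attention level: low / neutral / high
X[:, 0] += y                                  # make a couple of features informative
X[:, 1] -= 0.5 * y

# Correlation-based filter: rank features by |corr(feature, label)| and keep the top 8.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
selected = np.argsort(corr)[-8:]

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X[:, selected], y, cv=3)
print(f"3-fold CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```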
Online Calibration of Polytomous Items Under the Generalized Partial Credit Model
Zheng, Yi
2016-01-01
Online calibration is a technology-enhanced architecture for item calibration in computerized adaptive tests (CATs). Many CATs are administered continuously over a long term and rely on large item banks. To ensure test validity, these item banks need to be frequently replenished with new items, and these new items need to be pretested before being used operationally. Online calibration dynamically embeds pretest items in operational tests and calibrates their parameters as response data are gradually obtained through the continuous test administration. This study extends existing formulas, procedures, and algorithms for dichotomous item response theory models to the generalized partial credit model, a popular model for items scored in more than two categories. A simulation study was conducted to investigate the developed algorithms and procedures under a variety of conditions, including two estimation algorithms, three pretest item selection methods, three seeding locations, two numbers of score categories, and three calibration sample sizes. Results demonstrated acceptable estimation accuracy of the two estimation algorithms in some of the simulated conditions. A variety of findings were also revealed for the interacted effects of included factors, and recommendations were made respectively. PMID:29881063
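For reference, the generalized partial credit model category probabilities that such calibration procedures estimate can be written in a few lines; the item parameters and ability values below are illustrative only.

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category probabilities P(X = k | theta), k = 0..m, under the GPCM.

    theta : examinee ability
    a     : item discrimination
    b     : step (threshold) parameters b_1..b_m
    """
    # Cumulative sums of a*(theta - b_v); the k = 0 term is defined as 0.
    z = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b)))))
    ez = np.exp(z - z.max())               # subtract the max for numerical stability
    return ez / ez.sum()

# Illustrative 4-category item (3 steps) evaluated at two ability levels.
for theta in (-1.0, 1.0):
    p = gpcm_probs(theta, a=1.2, b=[-0.8, 0.1, 0.9])
    print(f"theta={theta:+.1f}  P(X=k) = {np.round(p, 3)}")
```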
Building Energy Simulation Test for Existing Homes (BESTEST-EX) (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, R.; Neymark, J.; Polly, B.
2011-12-01
This presentation discusses the goals of NREL Analysis Accuracy R&D; BESTEST-EX goals; what BESTEST-EX is; how it works; 'Building Physics' cases; 'Building Physics' reference results; 'utility bill calibration' cases; limitations and potential future work. Goals of NREL Analysis Accuracy R&D are: (1) Provide industry with the tools and technical information needed to improve the accuracy and consistency of analysis methods; (2) Reduce the risks associated with purchasing, financing, and selling energy efficiency upgrades; and (3) Enhance software and input collection methods considering impacts on accuracy, cost, and time of energy assessments. BESTEST-EX Goals are: (1) Test software predictions of retrofit energy savings in existing homes; (2) Ensure building physics calculations and utility bill calibration procedures perform up to a minimum standard; and (3) Quantify impact of uncertainties in input audit data and occupant behavior. BESTEST-EX is a repeatable procedure that tests how well audit software predictions compare to the current state of the art in building energy simulation. There is no direct truth standard. However, reference software have been subjected to validation testing, including comparisons with empirical data.
Famiglietti, Robin M; Norboge, Emily C; Boving, Valentine; Langabeer, James R; Buchholz, Thomas A; Mikhail, Osama
To meet demand for radiation oncology services and ensure patient-centered safe care, management in an academic radiation oncology department initiated quality improvement efforts using discrete-event simulation (DES). Although the long-term goal was testing and deploying solutions, the primary aim at the outset was characterizing and validating a computer simulation model of existing operations to identify targets for improvement. The adoption and validation of a DES model of processes and procedures affecting patient flow and satisfaction, employee experience, and efficiency were undertaken in 2012-2013. Multiple sources were tapped for data, including direct observation, equipment logs, timekeeping, and electronic health records. During their treatment visits, patients averaged 50.4 minutes in the treatment center, of which 38% was spent in the treatment room. Patients with appointments between 10 AM and 2 PM experienced the longest delays before entering the treatment room, and those in the clinic in the day's first and last hours, the shortest (<5 minutes). Despite being staffed for 14.5 hours daily, the clinic registered only 20% of patients after 2:30 PM. Utilization of equipment averaged 58%, and utilization of staff, 56%. The DES modeling quantified operations, identifying evidence-based targets for next-phase remediation and providing data to justify initiatives.
Liu, Dan; Cai, Wenwen; Xia, Jiangzhou; Dong, Wenjie; Zhou, Guangsheng; Chen, Yang; Zhang, Haicheng; Yuan, Wenping
2014-01-01
Gross Primary Production (GPP) is the largest flux in the global carbon cycle. However, large uncertainties in current global estimations persist. In this study, we examined the performance of a process-based model (Integrated BIosphere Simulator, IBIS) at 62 eddy covariance sites around the world. Our results indicated that the IBIS model explained 60% of the observed variation in daily GPP at all validation sites. Comparison with a satellite-based vegetation model (Eddy Covariance-Light Use Efficiency, EC-LUE) revealed that the IBIS simulations yielded comparable GPP results as the EC-LUE model. Global mean GPP estimated by the IBIS model was 107.50±1.37 Pg C year(-1) (mean value ± standard deviation) across the vegetated area for the period 2000-2006, consistent with the results of the EC-LUE model (109.39±1.48 Pg C year(-1)). To evaluate the uncertainty introduced by the parameter Vcmax, which represents the maximum photosynthetic capacity, we inversed Vcmax using Markov Chain-Monte Carlo (MCMC) procedures. Using the inversed Vcmax values, the simulated global GPP increased by 16.5 Pg C year(-1), indicating that IBIS model is sensitive to Vcmax, and large uncertainty exists in model parameterization.
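The Vcmax inversion relies on Markov Chain-Monte Carlo. Below is a toy random-walk Metropolis sketch for a single parameter under a Gaussian likelihood; the forward model, pseudo-observations and prior range are invented and unrelated to IBIS.

```python
import numpy as np

rng = np.random.default_rng(4)

def model_gpp(vcmax):
    """Toy forward model: GPP response to a Vcmax-like parameter (illustrative only)."""
    return 2.0 * np.sqrt(vcmax)

obs = model_gpp(60.0) + rng.normal(0, 0.5, size=30)   # pseudo-observations, true value 60
sigma = 0.5

def log_post(vcmax):
    if not 10.0 <= vcmax <= 150.0:                    # flat prior on an assumed range
        return -np.inf
    return -0.5 * np.sum((obs - model_gpp(vcmax)) ** 2) / sigma ** 2

chain, current = [], 40.0
lp = log_post(current)
for _ in range(5000):
    prop = current + rng.normal(0, 2.0)               # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:           # Metropolis acceptance rule
        current, lp = prop, lp_prop
    chain.append(current)

burned = np.array(chain[1000:])                       # discard burn-in
print(f"posterior mean ~ {burned.mean():.1f}, 95% interval ~ "
      f"[{np.percentile(burned, 2.5):.1f}, {np.percentile(burned, 97.5):.1f}]")
```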
Angulo, J C; Arance, I; García-Tello, A; Las Heras, M M; Andrés, G; Gimbernat, H; Lista, F; Ramón de Fata, F
2014-09-01
The utility of a virtual reality simulator for training in photoselective vaporization of the prostate with a diode laser was studied. Two experiments were performed with a simulator (VirtaMed AG, Zürich, Switzerland) with software for specific training in prostate vaporization in contact mode with a Twister fiber (Biolitec AG, Jena, Germany). Eighteen surgeons performed ablation of the prostate (55 cc) twice, and the scores obtained (190 points for efficacy and 80 for safety) in the second procedure were compared by experience group (medical students, residents, specialists). They also performed a spatial orientation test with scores of 0 to 6. Afterwards, six of these surgeons repeated 15 ablations of the prostate (55 and 70 ml). Improvement of the parameters obtained was evaluated to define the learning curve and how experience, spatial orientation skills and the type of sequence performed affect it. The global efficacy and safety score differed according to the grade of experience (P=.005). When compared by pairs, specialist-student differences were detected (P=.004), but not specialist-resident (P=.12) or resident-student (P=.2). Regarding efficacy of the procedure, specialist-student (P=.0026) and resident-student (P=.08) differences were detected. The partial indicators that differed in terms of efficacy were rate of ablation (P=.01), procedure time (P=.03) and amount of unexposed capsule (P=.03). Differences were not observed between groups in safety (P=.5). Regarding the learning curve, the median percentage of the total score exceeded 90% after performing 4 procedures for prostates of 55 ml and 10 procedures for prostate glands of 70 ml. This learning curve was not modified by previous experience (resident-specialist; P=.6). However, it was modified according to the repetition sequence (progressive-random; P=.007). Surgeons whose spatial orientation was less than the median of the group (value 2.5) did not surpass 90% of the score in spite of repetition of the procedure. Simulation for ablation of the prostate with a contact diode laser is a good learning model with discriminative validity, as it correlates the metric results with levels of experience and skills. The sequential repetition of the procedure at growing levels of difficulty favors learning. Copyright © 2014 AEU. Published by Elsevier Espana. All rights reserved.
Analysis of cryogenic propellant behavior in microgravity and low thrust environments
NASA Technical Reports Server (NTRS)
Fisher, Mark F.; Schmidt, George R.; Martin, James J.
1991-01-01
Predictions of a CFD program calculating fluid free-surface shape and motion as a function of imposed acceleration are validated against the drop-tower test data collected to support design and performance assessments of the Saturn S-IVB stage liquid-hydrogen tank. The drop-tower facility, experimental package, and experiment procedures are outlined, and the program is described. It is noted that the validation analysis confirms the program's suitability for predicting low-g fluid slosh behavior, and that a similar analysis could examine the effect of incorporating baffles and screens to impede initiation of any unwanted side loads due to slosh. It is concluded that in actual vehicle applications, the engine thrust tailoff profile should be included in computer simulations if a precise interface versus time definition is needed.
Optimal Combinations of Diagnostic Tests Based on AUC.
Huang, Xin; Qin, Gengsheng; Fang, Yixin
2011-06-01
When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimation by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, these proposed methods can be applied for variable selection to select important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples. © 2010, The International Biometric Society.
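A minimal sketch of the main ingredients: an AUC computed from the Mann-Whitney statistic, a crude grid search over the direction of a two-test linear combination, and a k-fold estimate of the combination's AUC to illustrate the optimism of re-substitution. The data are simulated, and the grid search is a simple stand-in rather than the paper's nonparametric estimation procedure.

```python
import numpy as np
from sklearn.model_selection import KFold

def auc(scores, labels):
    """Mann-Whitney estimate of the AUC (ties receive half credit)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

rng = np.random.default_rng(5)
n = 200
labels = rng.integers(0, 2, n)
X = rng.normal(size=(n, 2)) + labels[:, None] * np.array([0.8, 0.4])  # two diagnostic tests

def best_combo(Xtr, ytr, angles=np.linspace(0, np.pi, 181)):
    """Grid search over directions w = [cos t, sin t] maximizing training AUC."""
    aucs = [auc(Xtr @ np.array([np.cos(t), np.sin(t)]), ytr) for t in angles]
    t = angles[int(np.argmax(aucs))]
    return np.array([np.cos(t), np.sin(t)])

w_full = best_combo(X, labels)
print(f"apparent (re-substitution) AUC: {auc(X @ w_full, labels):.3f}")

cv_aucs = []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    w = best_combo(X[tr], labels[tr])            # coefficients fit on training folds only
    cv_aucs.append(auc(X[te] @ w, labels[te]))
print(f"5-fold cross-validated AUC:     {np.mean(cv_aucs):.3f}")
```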
Medical Scenarios Relevant to Spaceflight
NASA Technical Reports Server (NTRS)
Bacal, Kira; Hurs, Victor; Doerr, Harold
2004-01-01
The Medical Operational Support Team (MOST) was tasked by the JSC Space Medicine and Life Sciences Directorate (SLSD) to incorporate medical simulation into 1) medical training for astronaut-crew medical officers (CMO) and medical flight control teams and 2) evaluations of procedures and resources required for medical care aboard the International Space Station (ISS). Development of evidence-based medical scenarios that mimic the physiology observed during spaceflight will be needed for the MOST to complete these two tasks. The MOST used a human patient simulator, the ISS-like resources in the Medical Simulation Laboratory (MSL), and evidence from space operations, military operations and medical literature to develop space-relevant medical scenarios. These scenarios include conditions concerning airway management, Advanced Cardiac Life Support (ACLS) and mitigating anaphylactic symptoms. The MOST has used these space-relevant medical scenarios to develop a preliminary space medical training regimen for NASA flight surgeons, Biomedical Flight Controllers (Biomedical Engineers; BME) and CMO-analogs. This regimen is conducted by the MOST in the MSL. The MOST has the capability to develop evidence-based space-relevant medical scenarios that can help SLSD 1) demonstrate the proficiency of medical flight control teams to mitigate space-relevant medical events and 2) validate next-generation medical equipment and procedures for space medicine applications.
NASA Astrophysics Data System (ADS)
Johannes, Bernd; Salnitski, Vyacheslav; Soll, Henning; Rauch, Melina; Hoermann, Hans-Juergen
For the evaluation of an operator's skill reliability indicators of work quality as well as of psychophysiological states during the work have to be considered. The herein presented methodology and measurement equipment were developed and tested in numerous terrestrial and space experiments using a simulation of a spacecraft docking on a space station. However, in this study the method was applied to a comparable terrestrial task—the flight simulator test (FST) used in the DLR selection procedure for ab initio pilot applicants for passenger airlines. This provided a large amount of data for a statistical verification of the space methodology. For the evaluation of the strain level of applicants during the FST psychophysiological measurements were used to construct a "psychophysiological arousal vector" (PAV) which is sensitive to various individual reaction patterns of the autonomic nervous system to mental load. Its changes and increases will be interpreted as "strain". In the first evaluation study, 614 subjects were analyzed. The subjects first underwent a calibration procedure for the assessment of their autonomic outlet type (AOT) and on the following day they performed the FST, which included three tasks and was evaluated by instructors applying well-established and standardized rating scales. This new method will possibly promote a wide range of other future applications in aviation and space psychology.
Influence of pulsatile flow on LDL transport in the arterial wall.
Sun, Nanfeng; Wood, Nigel B; Hughes, Alun D; Thom, Simon A M; Xu, X Yun
2007-10-01
The accumulation of low-density lipoprotein (LDL) is one of the important factors in atherogenesis. Two different time scales may influence LDL transport in vivo: (1) LDL transport is coupled to blood flow with a pulse cycle of around 1 s in humans; (2) LDL transport within the arterial wall is mediated by transmural flow on the order of 10^-8 m/s. Most existing models have assumed steady flow conditions and overlooked the interactions between physical phenomena with different time scales. The objective of this study was to investigate the influence of pulsatile flow on LDL transport and examine the validity of the steady flow assumption. The effect of pulsatile flow on transmural transport was incorporated by using lumen-free cyclic (LFC) and lumen-free time-averaged (LFTA) procedures. It is found that the steady flow simulation predicted a focal distribution in the post-stenotic region, differing from the diffuse distribution pattern produced by the pulsatile flow simulation. The LFTA procedure, in which time-averaged shear-dependent transport properties calculated from instantaneous wall shear stress (WSS) were used, predicted a similar distribution pattern to the LFC simulations. We conclude that the steady flow assumption is inadequate and that instantaneous hemodynamic conditions have an important influence on LDL transmural transport in arterial geometries with disturbed and complicated flow patterns.
Response to actual and simulated recordings of conventional takeoff and landing jet aircraft
NASA Technical Reports Server (NTRS)
Mabry, J. E.; Sullivan, B. M.
1978-01-01
Comparability between the noise characteristics of synthesized recordings of aircraft in flight and actual recordings was investigated. Although the synthesized recordings were more smoothly time-varying than the actual recordings and the synthesizer could not produce a comb-filter effect that was present in the actual recordings, the results supported the conclusion that annoyance response is comparable for the synthesized and actual recordings. A correction for duration markedly improved the validity of engineering calculation procedures designed to measure noise annoyance. The results led to the conclusion that the magnitude estimation psychophysical method was a highly reliable approach for evaluating engineering calculation procedures designed to measure noise annoyance. For repeated presentations of pairs of actual recordings, differences between judgment results for identical signals ranged from 0.0 to 0.5 dB.
Competency-Based Training and Simulation: Making a "Valid" Argument.
Noureldin, Yasser A; Lee, Jason Y; McDougall, Elspeth M; Sweet, Robert M
2018-02-01
The use of simulation as an assessment tool is much more controversial than is its utility as an educational tool. However, without valid simulation-based assessment tools, the ability to objectively assess technical skill competencies in a competency-based medical education framework will remain challenging. The current literature in urologic simulation-based training and assessment uses a definition and framework of validity that is now outdated. This is probably due to the absence of awareness rather than an absence of comprehension. The following review article provides the urologic community an updated taxonomy on validity theory as it relates to simulation-based training and assessments and translates our simulation literature to date into this framework. While the old taxonomy considered validity as distinct subcategories and focused on the simulator itself, the modern taxonomy, for which we translate the literature evidence, considers validity as a unitary construct with a focus on interpretation of simulator data/scores.
Mulugeta, Lealem; Drach, Andrew; Erdemir, Ahmet; Hunt, C. A.; Horner, Marc; Ku, Joy P.; Myers Jr., Jerry G.; Vadigepalli, Rajanikanth; Lytton, William W.
2018-01-01
Modeling and simulation in computational neuroscience is currently a research enterprise to better understand neural systems. It is not yet directly applicable to the problems of patients with brain disease. To be used for clinical applications, there must not only be considerable progress in the field but also a concerted effort to use best practices in order to demonstrate model credibility to regulatory bodies, to clinics and hospitals, to doctors, and to patients. In doing this for neuroscience, we can learn lessons from long-standing practices in other areas of simulation (aircraft, computer chips), from software engineering, and from other biomedical disciplines. In this manuscript, we introduce some basic concepts that will be important in the development of credible clinical neuroscience models: reproducibility and replicability; verification and validation; model configuration; and procedures and processes for credible mechanistic multiscale modeling. We also discuss how garnering strong community involvement can promote model credibility. Finally, in addition to direct usage with patients, we note the potential for simulation usage in the area of Simulation-Based Medical Education, an area which to date has been primarily reliant on physical models (mannequins) and scenario-based simulations rather than on numerical simulations. PMID:29713272
Validation of the procedures. [integrated multidisciplinary optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Mantay, Wayne R.
1989-01-01
Validation strategies are described for procedures aimed at improving the rotor blade design process through a multidisciplinary optimization approach. Validation of the basic rotor environment prediction tools and the overall rotor design are discussed.
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.
1975-01-01
Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real-time acquisition and formatting of data from an all-up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystem modules, module integration, special test requirements, and reference data formats are also described.
41 CFR 60-3.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) General Principles § 60-3.6 Use of selection procedures which have not been validated. A. Use of alternate... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Use of selection procedures which have not been validated. 60-3.6 Section 60-3.6 Public Contracts and Property Management...
41 CFR 60-3.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2011 CFR
2011-07-01
... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... a formal and scored selection procedure is used which has an adverse impact, the validation... user cannot or need not follow the validation techniques anticipated by these guidelines, the user...
41 CFR 60-3.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2014 CFR
2014-07-01
... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... a formal and scored selection procedure is used which has an adverse impact, the validation... user cannot or need not follow the validation techniques anticipated by these guidelines, the user...
41 CFR 60-3.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2012 CFR
2012-07-01
... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... a formal and scored selection procedure is used which has an adverse impact, the validation... user cannot or need not follow the validation techniques anticipated by these guidelines, the user...
41 CFR 60-3.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2013 CFR
2013-07-01
... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... a formal and scored selection procedure is used which has an adverse impact, the validation... user cannot or need not follow the validation techniques anticipated by these guidelines, the user...
Adaptive identification and control of structural dynamics systems using recursive lattice filters
NASA Technical Reports Server (NTRS)
Sundararajan, N.; Montgomery, R. C.; Williams, J. P.
1985-01-01
A new approach for adaptive identification and control of structural dynamic systems by using least squares lattice filters that are widely used in the signal processing area is presented. Testing procedures for interfacing the lattice filter identification methods and the modal control method for stable closed-loop adaptive control are presented. The methods are illustrated for a free-free beam and for a complex flexible grid, with the basic control objective being vibration suppression. The approach is validated by using both simulations and experimental facilities available at the Langley Research Center.
Space - A unique environment for process modeling R&D
NASA Technical Reports Server (NTRS)
Overfelt, Tony
1991-01-01
Process modeling, the application of advanced computational techniques to simulate real processes as they occur in regular use, e.g., welding, casting and semiconductor crystal growth, is discussed. Using the low-gravity environment of space will accelerate the technical validation of the procedures and enable extremely accurate determinations of the many necessary thermophysical properties. Attention is given to NASA's centers for the commercial development of space; joint ventures of universities, industries, and government agencies to study the unique attributes of space that offer potential for applied R&D and eventual commercial exploitation.
Efficiency of endoscopy units can be improved with use of discrete event simulation modeling.
Sauer, Bryan G; Singh, Kanwar P; Wagner, Barry L; Vanden Hoek, Matthew S; Twilley, Katherine; Cohn, Steven M; Shami, Vanessa M; Wang, Andrew Y
2016-11-01
Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience.
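A compact discrete-event sketch of the patient flow described above, written with the SimPy library; the room counts, arrival rate and service-time distributions are invented placeholders rather than the unit's calibrated inputs.

```python
import simpy
import random

random.seed(0)
PREP_ROOMS, PROC_ROOMS, RECOVERY_ROOMS = 8, 5, 9
cycle_times = []

def patient(env, prep, proc, recov):
    arrive = env.now
    with prep.request() as req:                        # preparation room
        yield req
        yield env.timeout(random.triangular(20, 60, 35))
    with proc.request() as req:                        # procedure room
        yield req
        yield env.timeout(random.triangular(15, 75, 30))
    with recov.request() as req:                       # recovery room
        yield req
        yield env.timeout(random.triangular(30, 90, 45))
    cycle_times.append(env.now - arrive)

def arrivals(env, prep, proc, recov):
    while True:
        yield env.timeout(random.expovariate(1 / 12))  # roughly one arrival every 12 min
        env.process(patient(env, prep, proc, recov))

env = simpy.Environment()
prep = simpy.Resource(env, capacity=PREP_ROOMS)
proc = simpy.Resource(env, capacity=PROC_ROOMS)
recov = simpy.Resource(env, capacity=RECOVERY_ROOMS)
env.process(arrivals(env, prep, proc, recov))
env.run(until=600)                                     # simulate a 10-hour clinic day

print(f"patients completed: {len(cycle_times)}, "
      f"mean cycle time: {sum(cycle_times) / len(cycle_times):.1f} min")
```

Alternate configurations (different room counts, schedules or arrival patterns) can then be compared by re-running the model with changed constants, which is the kind of virtual testing the abstract describes.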
Bias correction for selecting the minimal-error classifier from many machine learning models.
Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C
2014-11-15
Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate size real datasets and two large breast cancer datasets. The result showed that IPL outperforms the other methods in bias correction with smaller variance, and it has an additional advantage to extrapolate error estimates for larger sample sizes, a practical feature to recommend whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available. tsenglab.biostat.pitt.edu/software.htm. ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
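A small sketch of the inverse-power-law idea: fit err(n) = a * n^(-b) + c to cross-validation error rates observed at a few training-set sizes and extrapolate to a larger n. The error values are made up, and this is not the authors' MLbias code.

```python
import numpy as np
from scipy.optimize import curve_fit

def ipl(n, a, b, c):
    """Inverse power law learning curve: error rate as a function of sample size."""
    return a * np.power(n, -b) + c

# Hypothetical cross-validation error rates at increasing training-set sizes.
n_train = np.array([20, 30, 40, 50, 60])
cv_err = np.array([0.34, 0.29, 0.26, 0.245, 0.235])

params, _ = curve_fit(ipl, n_train, cv_err, p0=[1.0, 0.5, 0.2], maxfev=10000)
a, b, c = params
print(f"fitted: a={a:.2f}, b={b:.2f}, asymptotic error c={c:.3f}")
print(f"extrapolated error at n=120: {ipl(120, a, b, c):.3f}")
```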
Okeke, Uche Godfrey; Akdemir, Deniz; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc
2018-03-01
The HarvestPlus program for cassava (Manihot esculenta Crantz) fortifies cassava with β-carotene by breeding for carotene-rich tubers (yellow cassava). However, a negative correlation between yellowness and dry matter (DM) content has been identified. We investigated the genetic control of DM in white and yellow cassava. We used regional heritability mapping (RHM) to associate DM with genomic segments in both subpopulations. Significant segments were subjected to candidate gene analysis and candidates were validated with prediction accuracies. The RHM procedure was validated via a simulation approach and revealed significant hits for white cassava on chromosomes 1, 4, 5, 10, 17, and 18, whereas hits for the yellow cassava were on chromosome 1. Candidate gene analysis revealed genes in the carbohydrate biosynthesis pathway, including plant serine-threonine protein kinases (SnRKs), UDP (uridine diphosphate)-glycosyltransferases, UDP-sugar transporters, invertases, pectinases, and regulons. Validation using 1252 unique identifiers from the SnRK gene family genome-wide recovered 50% of the predictive accuracy of whole-genome single nucleotide polymorphisms for DM, whereas validation using 53 likely genes (extracted from the literature) from significant segments recovered 32%. Genes including an acid invertase, a neutral or alkaline invertase, and a glucose-6-phosphate isomerase were validated on the basis of an a priori list for the cassava starch pathway, as was a fructose-bisphosphate aldolase from the Calvin cycle pathway. The power of the RHM procedure was estimated as 47% when the causal quantitative trait loci generated 10% of the phenotypic variance (sample size = 451). Cassava DM genetics are complex and RHM may be useful for complex traits. Copyright © 2018 Crop Science Society of America.
Validating LES for Jet Aeroacoustics
NASA Technical Reports Server (NTRS)
Bridges, James; Wernet, Mark P.
2011-01-01
Engineers charged with making jet aircraft quieter have long dreamed of being able to see exactly how turbulent eddies produce sound and this dream is now coming true with the advent of large eddy simulation (LES). Two obvious challenges remain: validating the LES codes at the resolution required to see the fluid-acoustic coupling, and the interpretation of the massive datasets that are produced. This paper addresses the former, the use of advanced experimental techniques such as particle image velocimetry (PIV) and Raman and Rayleigh scattering, to validate the computer codes and procedures used to create LES solutions. This paper argues that the issue of accuracy of the experimental measurements be addressed by cross-facility and cross-disciplinary examination of modern datasets along with increased reporting of internal quality checks in PIV analysis. Further, it argues that the appropriate validation metrics for aeroacoustic applications are increasingly complicated statistics that have been shown in aeroacoustic theory to be critical to flow-generated sound, such as two-point space-time velocity correlations. A brief review of data sources available is presented along with examples illustrating cross-facility and internal quality checks required of the data before it should be accepted for validation of LES.
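As an example of the kind of validation metric the paper advocates, the following sketch estimates a normalized two-point space-time velocity correlation from a space-by-time array of streamwise velocity samples; the random array here is only a placeholder standing in for time-resolved PIV or LES probe data.

```python
# Two-point space-time velocity correlation:
#   R(dx, dt) = <u'(x, t) u'(x+dx, t+dt)> / <u'^2>
# u is assumed to be a 2-D array of velocity samples, shape (space, time).
import numpy as np

def space_time_correlation(u, dx, dt):
    up = u - u.mean(axis=1, keepdims=True)           # fluctuations about the local mean
    a = up[: up.shape[0] - dx, : up.shape[1] - dt]   # u'(x, t)
    b = up[dx:, dt:]                                 # u'(x+dx, t+dt)
    return np.mean(a * b) / np.mean(up ** 2)

u = np.random.randn(64, 2000)                        # placeholder velocity record
print(space_time_correlation(u, dx=4, dt=10))
```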
NASA Technical Reports Server (NTRS)
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
Craig, John R; Zhao, Kai; Doan, Ngoc; Khalili, Sammy; Lee, John YK; Adappa, Nithin D; Palmer, James N
2016-01-01
Background Investigations into the distribution of sinus irrigations have been limited by labor-intensive methodologies that do not capture the full dynamics of irrigation flow. The purpose of this study was to validate the accuracy of a computational fluid dynamics (CFD) model for sinonasal irrigations through a cadaveric experiment. Methods Endoscopic sinus surgery was performed on two fresh cadavers to open all eight sinuses, including a Draf III procedure for cadaver 1, and Draf IIb frontal sinusotomies for cadaver 2. Computed tomography maxillofacial scans were obtained preoperatively and postoperatively, from which CFD models were created. Blue-dyed saline in a 240 mL squeeze bottle was used to irrigate cadaver sinuses at 60 mL/s (120 mL per side, over 2 seconds). These parameters were replicated in CFD simulations. Endoscopes were placed through trephinations drilled through the anterior walls of the maxillary and frontal sinuses, and sphenoid roofs. Irrigation flow into the maxillary, frontal, and sphenoid sinuses was graded both ipsilateral and contralateral to the side of nasal irrigation, and then compared with the CFD simulations. Results In both cadavers, preoperative and postoperative irrigation flow into maxillary, frontal, and sphenoid sinuses matched extremely well when comparing the CFD models and cadaver endoscopic videos. For cadaver 1, there was 100% concordance between the CFD model and cadaver videos, and 83% concordance for cadaver 2. Conclusions This cadaveric experiment provided potential validation of the CFD model for simulating saline irrigation flow into the maxillary, frontal, and sphenoid sinuses before and after sinus surgery. PMID:26880742
Media Fill Test for validation of autologous leukocytes separation and labelling by (99m)Tc-HmPAO.
Urbano, Nicoletta; Modoni, Sergio; Schillaci, Orazio
2013-01-01
Manufacturing of sterile products must be carried out in order to minimize risks of microbiological contamination. White blood cells (WBC) labelled with (99m)Tc-exametazime ((99m)Tc-hexamethylpropyleneamine oxime; (99m)Tc-HMPAO) have been successfully applied in the field of infection/inflammation scintigraphy for many years. In our radiopharmacy lab, separation and labelling of autologous leukocytes with (99m)Tc-HMPAO were performed in a non-classified laminar flow cabinet placed in a controlled area, whereas the (99m)Tc-HMPAO radiolabelling procedure was carried out in a hot cell with manipulator gloves. This study was conducted to validate this process using a Media Fill simulation test. The study was performed using sterile Tryptic Soy Broth (TSB) in place of the active product, reproducing as closely as possible the routine aseptic production process with all the critical steps, as described in our internal standard operating procedures (SOPs). The final vials containing the media of each processed step were then incubated for 14 days and examined for evidence of microbial growth. No evidence of turbidity was observed in any of the steps assayed by the Media Fill. In the separation and labelling of autologous leukocytes with (99m)Tc-HmPAO, the Media Fill test represents a reliable tool for validating the aseptic process. Copyright © 2013 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Lievens, Filip; Patterson, Fiona
2011-01-01
In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of…
NASA Technical Reports Server (NTRS)
Kiris, Cetin
1995-01-01
Development of an incompressible Navier-Stokes solution procedure was performed for the analysis of liquid rocket engine pump components and of mechanical heart assist devices. The solution procedure for the propulsion systems is applicable to incompressible Navier-Stokes flows in a steadily rotating frame of reference for any general complex configuration. The computer codes were tested on different complex configurations such as liquid rocket engine inducers and impellers. As a spin-off technology from the turbopump component simulations, the flow analysis for an axial heart pump was conducted. The baseline Left Ventricular Assist Device (LVAD) design was improved by adding an inducer geometry adapted from the liquid rocket engine pump. The time-accurate mode of the incompressible Navier-Stokes code was validated with a flapping foil experiment by using different domain decomposition methods. In the flapping foil experiment, two upstream NACA 0025 foils perform high-frequency synchronized motion and generate unsteady flow conditions for a downstream larger stationary foil. Fairly good agreement was obtained between unsteady experimental data and numerical results from two different moving boundary procedures. The incompressible Navier-Stokes code (INS3D) has been extended for heat transfer applications. The temperature equation was written for both forced and natural convection phenomena. The flow in a square duct case was used for the validation of the code in both natural and forced convection.
Dwivedi, Jaya; Namdev, Kuldeep K; Chilkoti, Deepak C; Verma, Surajpal; Sharma, Swapnil
2018-06-06
Therapeutic drug monitoring (TDM) of anti-epileptic drugs provides a valid clinical tool in optimization of overall therapy. However, TDM is challenging due to the high storage/shipment costs of biological samples (plasma/blood) and the limited availability of laboratories providing TDM services. Sampling in the form of dry plasma spot (DPS) or dry blood spot (DBS) is a suitable alternative to overcome these issues. An improved, simple, rapid, and stability-indicating method for quantification of pregabalin in human plasma and DPS has been developed and validated. Analyses were performed on a liquid chromatography tandem mass spectrometer under the positive ionization mode of an electrospray interface. Pregabalin-d4 was used as the internal standard, and the chromatographic separations were performed on a Poroshell 120 EC-C18 column using an isocratic mobile phase at a flow rate of 1 mL/min. Stability of pregabalin in DPS was evaluated under simulated real-time conditions. Extraction procedures from plasma and DPS samples were compared using statistical tests. The method was validated in accordance with the FDA method validation guideline. The method was linear over the concentration range of 20-16000 ng/mL and 100-10000 ng/mL in plasma and DPS, respectively. DPS samples were found stable for only one week upon storage at room temperature and for at least four weeks at freezing temperature (-20 ± 5 °C). The method was applied for quantification of pregabalin in over 600 samples of a clinical study. Statistical analyses revealed that the two extraction procedures in plasma and DPS samples showed a statistically insignificant difference and can be used interchangeably without any bias. The proposed method involves simple and rapid sample-processing steps that do not require a pre- or post-column derivatization procedure. The method is suitable for routine pharmacokinetic analysis and therapeutic monitoring of pregabalin.
Dynamic heart phantom with functional mitral and aortic valves
NASA Astrophysics Data System (ADS)
Vannelli, Claire; Moore, John; McLeod, Jonathan; Ceh, Dennis; Peters, Terry
2015-03-01
Cardiac valvular stenosis, prolapse and regurgitation are increasingly common conditions, particularly in an elderly population with limited potential for on-pump cardiac surgery. NeoChord©, MitraClip, and numerous stent-based transcatheter aortic valve implantation (TAVI) devices provide an alternative to intrusive cardiac operations; performed while the heart is beating, these procedures require surgeons and cardiologists to learn new image-guidance based techniques. Developing these visual aids and protocols is a challenging task that benefits from sophisticated simulators. Existing models lack features needed to simulate off-pump valvular procedures: functional, dynamic valves, apical and vascular access, and user flexibility for different activation patterns such as variable heart rates and rapid pacing. We present a left ventricle phantom with these characteristics. The phantom can be used to simulate valvular repair and replacement procedures with magnetic tracking, augmented reality, fluoroscopy and ultrasound guidance. This tool serves as a platform to develop image-guidance and image processing techniques required for a range of minimally invasive cardiac interventions. The phantom mimics in vivo mitral and aortic valve motion, permitting realistic ultrasound images of these components to be acquired. It also has a physiologically realistic left ventricular ejection fraction of 50%. Given its realistic imaging properties and non-biodegradable composition (silicone for tissue, water for blood), the system promises to reduce the number of animal trials required to develop image guidance applications for valvular repair and replacement. The phantom has been used in validation studies for both TAVI image-guidance techniques [1] and image-based mitral valve tracking algorithms [2].
Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P
2015-11-01
The determination of the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) at the energy of interest. The difficulties related to the experimental calibration make it advisable to have alternative methods for FEPE determination, such as the simulation of the transport of photons in the crystal by the Monte Carlo method, which requires accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters characterizing the detector, through a computational procedure that could be reproduced at a standard research lab. This method consists of determining the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. Copyright © 2015 Elsevier Ltd. All rights reserved.
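A schematic of the optimization loop described above might look like the following, where `simulate_fepe` is a hypothetical stand-in for the Monte Carlo photon-transport run, and the reference efficiencies, energies, and parameter ranges are invented; the real procedure would couple the evolutionary search to an actual transport code.

```python
# Rough (mu+lambda) evolutionary loop: perturb candidate detector parameters,
# score each candidate by the mismatch between simulated and reference FEPEs,
# and keep the best. simulate_fepe() is a placeholder for the real simulation.
import numpy as np

rng = np.random.default_rng(0)
energies = np.array([59.5, 661.7, 1332.5])      # keV, illustrative
ref_fepe = np.array([0.061, 0.021, 0.012])      # reference efficiencies (assumed)

def simulate_fepe(params, energies):
    # hypothetical stand-in for a Monte Carlo photon-transport run
    radius, length, dead_layer = params
    return 0.05 * radius * length / (1 + dead_layer) * energies ** -0.3

def fitness(params):
    return np.sum((simulate_fepe(params, energies) - ref_fepe) ** 2)

parent = np.array([3.0, 6.0, 0.1])              # crystal radius, length, dead layer (cm)
for generation in range(200):
    children = parent + rng.normal(scale=[0.1, 0.2, 0.01], size=(20, 3))
    best = children[int(np.argmin([fitness(c) for c in children]))]
    if fitness(best) < fitness(parent):         # keep the better of parent and best child
        parent = best
print("optimized parameters:", parent)
```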
POLYMAT-C: a comprehensive SPSS program for computing the polychoric correlation matrix.
Lorenzo-Seva, Urbano; Ferrando, Pere J
2015-09-01
We provide a free noncommercial SPSS program that implements procedures for (a) obtaining the polychoric correlation matrix between a set of ordered categorical measures, so that it can be used as input for the SPSS factor analysis (FA) program; (b) testing the null hypothesis of zero population correlation for each element of the matrix by using appropriate simulation procedures; (c) obtaining valid and accurate confidence intervals via bootstrap resampling for those correlations found to be significant; and (d) performing, if necessary, a smoothing procedure that makes the matrix amenable to any FA estimation procedure. For the main purpose (a), the program uses a robust unified procedure that allows four different types of estimates to be obtained at the user's choice. Overall, we hope the program will be a very useful tool for the applied researcher, not only because it provides an appropriate input matrix for FA, but also because it allows the researcher to carefully check the appropriateness of the matrix for this purpose. The SPSS syntax, a short manual, and data files related to this article are available as Supplemental materials that are available for download with this article.
Jensen, Katrine; Bjerrum, Flemming; Hansen, Henrik Jessen; Petersen, René Horsleben; Pedersen, Jesper Holst; Konge, Lars
2017-06-01
The societies of thoracic surgery are working to incorporate simulation and competency-based assessment into specialty training. One challenge is the development of a simulation-based test, which can be used as an assessment tool. The study objective was to establish validity evidence for a virtual reality simulator test of a video-assisted thoracoscopic surgery (VATS) lobectomy of a right upper lobe. Participants with varying experience in VATS lobectomy were included. They were familiarized with a virtual reality simulator (LapSim®) and introduced to the steps of the procedure for a VATS right upper lobe lobectomy. The participants performed two VATS lobectomies on the simulator with a 5-min break between attempts. Nineteen pre-defined simulator metrics were recorded. Fifty-three participants from nine different countries were included. High internal consistency was found for the metrics, with a Cronbach's alpha coefficient for standardized items of 0.91. Significant test-retest reliability was found for 15 of the metrics (p-values <0.05). Significant correlations between the metrics and the participants' VATS lobectomy experience were identified for seven metrics (p-values <0.001), and 10 metrics showed significant differences between novices (0 VATS lobectomies performed) and experienced surgeons (>50 VATS lobectomies performed). A pass/fail level defined as approximately one standard deviation from the mean metric scores for experienced surgeons passed none of the novices (0% false positives) and failed four of the experienced surgeons (29% false negatives). This study is the first to establish validity evidence for a VATS right upper lobe lobectomy virtual reality simulator test. Several simulator metrics demonstrated significant differences between novices and experienced surgeons, and pass/fail criteria for the test were set with acceptable consequences. This test can be used as a first step in assessing thoracic surgery trainees' VATS lobectomy competency.
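Two of the analyses reported, internal consistency via Cronbach's alpha and a pass/fail mark placed about one standard deviation from the experienced surgeons' mean, can be sketched as follows; the scores below are random placeholders, not the study data.

```python
# Cronbach's alpha over standardized simulator metrics, and a pass/fail cut-off
# one standard deviation below the experienced surgeons' mean composite score.
import numpy as np

def cronbach_alpha(items):                            # items: participants x metrics
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
metrics = rng.normal(size=(53, 19))                   # 53 participants, 19 metrics (placeholder)
print("alpha:", cronbach_alpha(metrics))

expert_scores = rng.normal(loc=75, scale=8, size=14)  # composite expert scores (placeholder)
pass_mark = expert_scores.mean() - expert_scores.std(ddof=1)
print("pass/fail threshold:", pass_mark)
```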
Comparison of a Virtual Older Driver Assessment with an On-Road Driving Test.
Eramudugolla, Ranmalee; Price, Jasmine; Chopra, Sidhant; Li, Xiaolan; Anstey, Kaarin J
2016-12-01
To design a low-cost simulator-based driving assessment for older adults and to compare its validity with that of an on-road driving assessment and other measures of older driver risk. Cross-sectional observational study. Canberra, Australia. Older adult drivers (N = 47; aged 65-88, mean age 75.2). Error rate on a simulated drive with environment and scoring procedure matched to those of an on-road test. Other measures included participant age, simulator sickness severity, neuropsychological measures, and driver screening measures. Outcome variables included occupational therapist (OT)-rated on-road errors, on-road safety rating, and safety category. Participants' error rate on the simulated drive was significantly correlated with their OT-rated driving safety (correlation coefficient (r) = -0.398, P = .006), even after adjustment for age and simulator sickness (P = .009). The simulator error rate was a significant predictor of categorization as unsafe on the road (P = .02, sensitivity 69.2%, specificity 100%), with 13 (27%) drivers assessed as unsafe. Simulator error was also associated with other older driver safety screening measures such as useful field of view (r = 0.341, P = .02), DriveSafe (r = -0.455, P < .01), and visual motion sensitivity (r = 0.368, P = .01) but was not associated with memory (delayed word recall) or global cognition (Mini-Mental State Examination). Drivers made twice as many errors on the simulated assessment as during the on-road assessment (P < .001), with significant differences in the rate and type of errors between the two mediums. A low-cost simulator-based assessment is valid as a screening instrument for identifying at-risk older drivers but not as an alternative to on-road evaluation when accurate data on competence or pattern of impairment is required for licensing decisions and training programs. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics Society.
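For reference, the quoted screening statistics can be reproduced from the confusion-matrix counts they imply (13 of 47 drivers rated unsafe on road, 9 of them flagged by the simulator cut-off); these counts are inferred for illustration, not taken from the raw data.

```python
# Back-calculated confusion matrix consistent with the reported sensitivity
# (69.2%) and specificity (100%) for 13 unsafe and 34 safe drivers.
true_pos, false_neg = 9, 4    # unsafe drivers flagged / missed by the simulator cut-off
true_neg, false_pos = 34, 0   # safe drivers correctly passed / wrongly flagged

sensitivity = true_pos / (true_pos + false_neg)
specificity = true_neg / (true_neg + false_pos)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```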
Advanced Maintenance Simulation by Means of Hand-Based Haptic Interfaces
NASA Astrophysics Data System (ADS)
Nappi, Michele; Paolino, Luca; Ricciardi, Stefano; Sebillo, Monica; Vitiello, Giuliana
The aerospace industry has been involved in virtual simulation for design and testing since the birth of virtual reality. Today this industry is showing a growing interest in the development of haptic-based maintenance training applications, which represent the most advanced way to simulate maintenance and repair tasks within a virtual environment by means of a visual-haptic approach. The goal is to allow the trainee to experience the service procedures not only as a workflow reproduced at a visual level but also in terms of the kinaesthetic feedback involved in the manipulation of tools and components. This study, conducted in collaboration with aerospace industry specialists, is aimed at the development of an immersive virtual environment in which mechanics and technicians can perform maintenance simulation or training tasks by directly manipulating 3D virtual models of aircraft parts while perceiving force feedback through the haptic interface. The proposed system is based on ViRstperson, a virtual reality engine under development at the Italian Center for Aerospace Research (CIRA) to support engineering and technical activities such as design-time maintenance procedure validation and maintenance training. This engine has been extended to support haptic-based interaction, enabling a more complete level of interaction, also in terms of impedance control, and thus fostering the development of haptic knowledge in the user. The user's "sense of touch" within the immersive virtual environment is simulated through an Immersion CyberForce® hand-based force-feedback device. Preliminary testing of the proposed system seems encouraging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toltz, Allison; Hoesl, Michaela; Schuemann, Jan
Purpose: A method to refine the implementation of an in vivo, adaptive proton therapy range verification methodology was investigated. Simulation experiments and in-phantom measurements were compared to validate the calibration procedure of a time-resolved diode dosimetry technique. Methods: A silicon diode array system has been developed and experimentally tested in phantom for passively scattered proton beam range verification by correlating properties of the detector signal to the water equivalent path length (WEPL). The implementation of this system requires a set of calibration measurements to establish a beam-specific diode response to WEPL fit for the selected 'scout' beam in a solid water phantom. This process is both tedious, as it necessitates a separate set of measurements for every 'scout' beam that may be appropriate to the clinical case, as well as inconvenient due to limited access to the clinical beamline. The diode response to WEPL relationship for a given 'scout' beam may be determined within a simulation environment, facilitating the applicability of this dosimetry technique. Measurements for three 'scout' beams were compared against simulated detector response with Monte Carlo methods using the Tool for Particle Simulation (TOPAS). Results: Detector response in water equivalent plastic was successfully validated against simulation for spread out Bragg peaks of range 10 cm, 15 cm, and 21 cm (168 MeV, 177 MeV, and 210 MeV) with adjusted R² of 0.998. Conclusion: Feasibility has been shown for performing calibration of detector response for a given 'scout' beam through simulation for the time-resolved diode dosimetry technique.
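The calibration step being replaced by simulation amounts to fitting a diode-signal feature against water-equivalent path length and checking the quality of the fit; a minimal sketch with invented signal values and an assumed linear response model is shown below.

```python
# Illustrative response-vs-WEPL calibration fit with adjusted R^2.
import numpy as np

wepl = np.linspace(2, 20, 10)                         # cm
signal = 1.8 * wepl + 0.5 + np.random.default_rng(2).normal(scale=0.3, size=wepl.size)

coeffs = np.polyfit(wepl, signal, deg=1)              # assumed linear response model
pred = np.polyval(coeffs, wepl)
ss_res = np.sum((signal - pred) ** 2)
ss_tot = np.sum((signal - signal.mean()) ** 2)
n, p = wepl.size, 1
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"adjusted R^2 = {adj_r2:.3f}")
```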
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rider, William J.; Witkowski, Walter R.; Mousseau, Vincent Andrew
2016-04-13
The importance of credible, trustworthy numerical simulations is obvious especially when using the results for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use. This is commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using various frameworks such as the Predictive Capability Maturity Model (PCMM). There were several critical activities that follow in the areas of code and solution verification, validation and uncertainty quantification, which will be described in detail in the following sections. Here, we introduce the subject matter for general applications but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification & validation procedure is to perform a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific application intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner to perform such an assessment. Ideally, all stakeholders should be represented and contribute to perform an accurate credibility assessment. PIRTs and PCMMs are both described in brief detail below and the resulting assessments for an example project are given.
Virtual Reality simulator for dental anesthesia training in the inferior alveolar nerve block.
Corrêa, Cléber Gimenez; Machado, Maria Aparecida de Andrade Moreira; Ranzini, Edith; Tori, Romero; Nunes, Fátima de Lourdes Santos
2017-01-01
This study describes the development and validation of a dental anesthesia-training simulator, specifically for the inferior alveolar nerve block (IANB). The system developed provides the tactile sensation of inserting a real needle in a human patient, using Virtual Reality (VR) techniques and a haptic device that can provide a perceived force feedback in the needle insertion task during the anesthesia procedure. To simulate a realistic anesthesia procedure, a Carpule syringe was coupled to a haptic device. The Volere method was used to elicit requirements from users in the Dentistry area; Repeated Measures Two-Way ANOVA (Analysis of Variance), the Tukey post-hoc test, and averages were used for the analysis of the results. A questionnaire-based subjective evaluation method was applied to collect information about the simulator, and 26 people participated in the experiments (12 beginners, 12 at intermediate level, and 2 experts). The questionnaire included profile, preferences (number of viewpoints, texture of the objects, and haptic device handler), as well as visual (appearance, scale, and position of objects) and haptic aspects (motion space, tactile sensation, and motion reproduction). The visual aspect was considered appropriate, whereas the haptic feedback needs improvement, which users can achieve by calibrating the virtual tissues' resistance. The evaluation of visual aspects was influenced by the participants' experience, according to the ANOVA test (F=15.6, p=0.0002, with p<0.01). The user preferences were the simulator with two viewpoints, objects with texture based on images, and the device with a syringe coupled to it. The simulation was considered thoroughly satisfactory for anesthesia training, considering the needle insertion task, which includes the correct insertion point and depth, as well as the perception of tissue resistances during insertion.
ERIC Educational Resources Information Center
Smith, Karen; And Others
Procedures for validating data reported by students and parents on an application for Basic Educational Opportunity Grants were developed in 1978 for the U.S. Office of Education (OE). Validation activities include: validation of flagged Student Eligibility Reports (SERs) for students whose schools are part of the Alternate Disbursement System;…
Note: Tesla based pulse generator for electrical breakdown study of liquid dielectrics
NASA Astrophysics Data System (ADS)
Veda Prakash, G.; Kumar, R.; Patel, J.; Saurabh, K.; Shyam, A.
2013-12-01
In the process of studying charge holding capability and delay time for breakdown in liquids under nanosecond (ns) time scales, a Tesla based pulse generator has been developed. Pulse generator is a combination of Tesla transformer, pulse forming line, a fast closing switch, and test chamber. Use of Tesla transformer over conventional Marx generators makes the pulse generator very compact, cost effective, and requires less maintenance. The system has been designed and developed to deliver maximum output voltage of 300 kV and rise time of the order of tens of nanoseconds. The paper deals with the system design parameters, breakdown test procedure, and various experimental results. To validate the pulse generator performance, experimental results have been compared with PSPICE simulation software and are in good agreement with simulation results.
Roalf, David R; Moore, Tyler M; Wolk, David A; Arnold, Steven E; Mechanic-Hamilton, Dawn; Rick, Jacqueline; Kabadi, Sushila; Ruparel, Kosha; Chen-Plotkin, Alice S; Chahine, Lama M; Dahodwala, Nabila A; Duda, John E; Weintraub, Daniel A; Moberg, Paul J
2016-01-01
Introduction Screening for cognitive deficits is essential in neurodegenerative disease. Screening tests, such as the Montreal Cognitive Assessment (MoCA), are easily administered, correlate with neuropsychological performance and demonstrate diagnostic utility. Yet, administration time is too long for many clinical settings. Methods Item response theory and computerised adaptive testing simulation were employed to establish an abbreviated MoCA in 1850 well-characterised community-dwelling individuals with and without neurodegenerative disease. Results 8 MoCA items with high item discrimination and appropriate difficulty were identified for use in a short form (s-MoCA). The s-MoCA was highly correlated with the original MoCA, showed robust diagnostic classification, and cross-validation procedures substantiated these items. Discussion Early detection of cognitive impairment is an important clinical and public health concern, but administration of screening measures is limited by time constraints in demanding clinical settings. Here, we provide an s-MoCA that is valid across neurological disorders and can be administered in approximately 5 min. PMID:27071646
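The item-selection logic can be illustrated with a toy version of the IRT step: given 2-parameter-logistic item parameters estimated elsewhere, retain items with high discrimination and moderate difficulty. The item names and parameter values below are hypothetical, not the fitted MoCA values.

```python
# Toy short-form selection: keep items with high discrimination (a) and
# difficulty (b) in a useful range. All values are hypothetical placeholders.
import numpy as np

item_names = np.array(["serial7", "clock", "delayed_recall", "cube", "fluency",
                       "abstraction", "orientation_date", "trails"])
a = np.array([1.9, 1.7, 2.3, 1.2, 1.5, 1.4, 2.0, 1.6])    # discrimination (assumed)
b = np.array([0.3, 0.1, 0.6, 1.4, 0.8, 0.9, -0.2, 0.4])   # difficulty (assumed)

keep = (a >= 1.5) & (np.abs(b) <= 1.0)                    # high a, moderate b
print("short-form items:", item_names[keep])
```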
Yudkowsky, Rachel; Otaki, Junji; Lowenstein, Tali; Riddle, Janet; Nishigori, Hiroshi; Bordage, Georges
2009-08-01
Diagnostic accuracy is maximised by having clinical signs and diagnostic hypotheses in mind during the physical examination (PE). This diagnostic reasoning approach contrasts with the rote, hypothesis-free screening PE learned by many medical students. A hypothesis-driven PE (HDPE) learning and assessment procedure was developed to provide targeted practice and assessment in anticipating, eliciting and interpreting critical aspects of the PE in the context of diagnostic challenges. This study was designed to obtain initial content validity evidence, performance and reliability estimates, and impact data for the HDPE procedure. Nineteen clinical scenarios were developed, covering 160 PE manoeuvres. A total of 66 Year 3 medical students prepared for and encountered three clinical scenarios during required formative assessments. For each case, students listed anticipated positive PE findings for two plausible diagnoses before examining the patient; examined a standardised patient (SP) simulating one of the diagnoses; received immediate feedback from the SP, and documented their findings and working diagnosis. The same students later encountered some of the scenarios during their Year 4 clinical skills examination. On average, Year 3 students anticipated 65% of the positive findings, correctly performed 88% of the PE manoeuvres and documented 61% of the findings. Year 4 students anticipated and elicited fewer findings overall, but achieved proportionally more discriminating findings, thereby more efficiently achieving a diagnostic accuracy equivalent to that of students in Year 3. Year 4 students performed better on cases on which they had received feedback as Year 3 students. Twelve cases would provide a reliability of 0.80, based on discriminating checklist items only. The HDPE provided medical students with a thoughtful, deliberate approach to learning and assessing PE skills in a valid and reliable manner.
Perez-Ponce, Hector; Daul, Christian; Wolf, Didier; Noel, Alain
2013-08-01
In mammography, image quality assessment has to be directly related to breast cancer indicator (e.g. microcalcifications) detectability. Recently, we proposed an X-ray source/digital detector (XRS/DD) model leading to such an assessment. This model simulates very realistic contrast-detail phantom (CDMAM) images leading to gold disc (representing microcalcifications) detectability thresholds that are very close to those of real images taken under the simulated acquisition conditions. The detection step was performed with a mathematical observer. The aim of this contribution is to include human observers into the disc detection process in real and virtual images to validate the simulation framework based on the XRS/DD model. Mathematical criteria (contrast-detail curves, image quality factor, etc.) are used to assess and to compare, from the statistical point of view, the cancer indicator detectability in real and virtual images. The quantitative results given in this paper show that the images simulated by the XRS/DD model are useful for image quality assessment in the case of all studied exposure conditions using either human or automated scoring. Also, this paper confirms that with the XRS/DD model the image quality assessment can be automated and the whole time of the procedure can be drastically reduced. Compared to standard quality assessment methods, the number of images to be acquired is divided by a factor of eight. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Validating LES for Jet Aeroacoustics
NASA Technical Reports Server (NTRS)
Bridges, James
2011-01-01
Engineers charged with making jet aircraft quieter have long dreamed of being able to see exactly how turbulent eddies produce sound, and this dream is now coming true with the advent of large eddy simulation (LES). Two obvious challenges remain: validating the LES codes at the resolution required to see the fluid-acoustic coupling, and the interpretation of the massive datasets that result. This paper primarily addresses the former, the use of advanced experimental techniques, such as particle image velocimetry (PIV) and Raman and Rayleigh scattering, to validate the computer codes and procedures used to create LES solutions. It also addresses the latter problem by discussing which measures, critical for aeroacoustics, should be used in validating LES codes. These new diagnostic techniques deliver measurements and flow statistics of increasing sophistication and capability, but what of their accuracy? And what are the measures to be used in validation? This paper argues that the issue of accuracy be addressed by cross-facility and cross-disciplinary examination of modern datasets along with increased reporting of internal quality checks in PIV analysis. Further, it is argued that the appropriate validation metrics for aeroacoustic applications are increasingly complicated statistics that have been shown in aeroacoustic theory to be critical to flow-generated sound.
Toward a phenomenology of trance logic in posttraumatic stress disorder.
Beshai, J A
2004-04-01
Some induction procedures result in trance logic as an essential feature of hypnosis. Trance logic is a voluntary state of acceptance of suggestions without the critical evaluation that would destroy the validity of the meaningfulness of the suggestion. Induction procedures in real and simulated conditions induce a conflict between two contradictory messages in experimental hypnosis. In military induction the conflict is much more subtle involving society's need for security and its need for ethics. Such conflicts are often construed by the subject as trance logic. Trance logic provides an opportunity for therapists using the phenomenology of "presence" to deal with the objectified concepts of "avoidance," "numbing" implicit in this kind of dysfunctional thinking in Posttraumatic Stress Disorder. An individual phenomenology of induction procedures and suggestions, which trigger trance logic, may lead to a resolution of logical fallacies and recurring painful memories. It invites a reconciliation of conflicting messages implicit in phobias and avoidance traumas. Such a phenomenological analysis of trance logic may well be a novel approach to restructure the meaning of trauma.
NASA Astrophysics Data System (ADS)
Silvestro, Paolo Cosmo; Casa, Raffaele; Pignatti, Stefano; Castaldi, Fabio; Yang, Hao; Guijun, Yang
2016-08-01
The aim of this work was to develop a tool to evaluate the effect of water stress on yield losses at the farmland and regional scale, by assimilating remotely sensed biophysical variables into crop growth models. Biophysical variables were retrieved from HJ1A, HJ1B and Landsat 8 images, using an algorithm based on the training of artificial neural networks on PROSAIL. For the assimilation, two crop models of differing degrees of complexity were used: Aquacrop and SAFY. For Aquacrop, an optimization procedure to reduce the difference between the remotely sensed and simulated canopy cover (CC) was developed. For the modified version of SAFY, the assimilation procedure was based on the Ensemble Kalman Filter. These procedures were tested in a spatialized application, by using data collected in the rural area of Yangling (Shaanxi Province) between 2013 and 2015. Results were validated by utilizing yield data from both ground measurements and statistical surveys.
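For the SAFY branch, the core of the assimilation is an Ensemble Kalman Filter update; a minimal scalar sketch is given below, assuming the observed variable (e.g. remotely sensed LAI or canopy cover) is directly comparable to the model state, with purely illustrative numbers.

```python
# Minimal scalar Ensemble Kalman Filter update (observation operator = identity).
import numpy as np

rng = np.random.default_rng(3)
ensemble = rng.normal(loc=2.0, scale=0.4, size=50)    # forecast LAI from 50 model runs
obs, obs_var = 2.6, 0.1 ** 2                          # remotely sensed LAI and its error variance

P = ensemble.var(ddof=1)                              # forecast error variance
K = P / (P + obs_var)                                 # Kalman gain
perturbed_obs = obs + rng.normal(scale=np.sqrt(obs_var), size=ensemble.size)
analysis = ensemble + K * (perturbed_obs - ensemble)  # updated ensemble
print("analysis mean LAI:", analysis.mean())
```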
Safety validation test equipment operation
NASA Astrophysics Data System (ADS)
Kurosaki, Tadaaki; Watanabe, Takashi
1992-08-01
An overview of the activities conducted on safety validation test equipment operation for materials used for NASA manned missions is presented. Safety validation tests, such as flammability, odor, offgassing, and so forth were conducted in accordance with NASA-NHB-8060.1C using test subjects common with those used by NASA, and the equipment used were qualified for their functions and performances in accordance with NASDA-CR-99124 'Safety Validation Test Qualification Procedures.' Test procedure systems were established by preparing 'Common Procedures for Safety Validation Test' as well as test procedures for flammability, offgassing, and odor tests. The test operation organization chaired by the General Manager of the Parts and Material Laboratory of NASDA (National Space Development Agency of Japan) was established, and the test leaders and operators in the organization were qualified in accordance with the specified procedures. One-hundred-one tests had been conducted so far by the Parts and Material Laboratory according to the request submitted by the manufacturers through the Space Station Group and the Safety and Product Assurance for Manned Systems Office.
Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images
2009-12-01
Wong, S.; DRDC Ottawa
Validation of the electromagnetic code FACETS for simulating radar images of a target is obtained through direct simulation-to-measurement comparisons. A 3-dimensional computer-aided design…
NASA Astrophysics Data System (ADS)
Byrd, Kenneth A.; Yauger, Sunny
2012-06-01
In the medical community, patient simulators are used to educate and train nurses, medics and doctors in rendering different levels of treatment and care to various patient populations. Students have the opportunity to perform real-world medical procedures without putting any patients at risk. A new thrust for the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is the use of remote sensing technologies to detect human vital signs at standoff distances. This capability will provide medics with the ability to diagnose while under fire, in addition to helping them to prioritize the care and evacuation of battlefield casualties. A potential alternative (or precursor) to human subject testing is the use of patient simulators. This substitution (or augmentation) provides a safe and cost-effective means to develop, test, and evaluate sensors without putting any human subjects at risk. In this paper, we present a generalized framework that can be used to accredit patient simulator technologies as human simulants for remote physiological monitoring (RPM). Results indicate that we were successful in using a commercial Laser Doppler Vibrometer (LDV) to exploit pulse and respiration signals from a SimMan 3G patient simulator at standoff (8 meters).
NASA Astrophysics Data System (ADS)
Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.
2016-05-01
The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.
Nonlinear dynamic simulation of single- and multi-spool core engines
NASA Technical Reports Server (NTRS)
Schobeiri, T.; Lippke, C.; Abouelkheir, M.
1993-01-01
In this paper a new computational method for accurate simulation of the nonlinear dynamic behavior of single- and multi-spool core engines, turbofan engines, and power generation gas turbine engines is presented. In order to perform the simulation, a modularly structured computer code has been developed which includes individual mathematical modules representing various engine components. The generic structure of the code enables the dynamic simulation of arbitrary engine configurations ranging from single-spool thrust generation to multi-spool thrust/power generation engines under adverse dynamic operating conditions. For precise simulation of turbine and compressor components, row-by-row calculation procedures were implemented that account for the specific turbine and compressor cascade and blade geometry and characteristics. The dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations, which describe the unsteady behavior of the individual components. In order to ensure the capability, accuracy, robustness, and reliability of the code, comprehensive critical performance assessment and validation tests were performed. As representatives, three different transient cases with single- and multi-spool thrust and power generation engines were simulated. The transient cases range from operating with a prescribed fuel schedule, to extreme load changes, to generator and turbine shut down.
Testing for qualitative heterogeneity: An application to composite endpoints in survival analysis.
Oulhaj, Abderrahim; El Ghouch, Anouar; Holman, Rury R
2017-01-01
Composite endpoints are frequently used in clinical outcome trials to provide more endpoints, thereby increasing statistical power. A key requirement for a composite endpoint to be meaningful is the absence of the so-called qualitative heterogeneity to ensure a valid overall interpretation of any treatment effect identified. Qualitative heterogeneity occurs when individual components of a composite endpoint exhibit differences in the direction of a treatment effect. In this paper, we develop a general statistical method to test for qualitative heterogeneity, that is to test whether a given set of parameters share the same sign. This method is based on the intersection-union principle and, provided that the sample size is large, is valid whatever the model used for parameters estimation. We propose two versions of our testing procedure, one based on a random sampling from a Gaussian distribution and another version based on bootstrapping. Our work covers both the case of completely observed data and the case where some observations are censored which is an important issue in many clinical trials. We evaluated the size and power of our proposed tests by carrying out some extensive Monte Carlo simulations in the case of multivariate time to event data. The simulations were designed under a variety of conditions on dimensionality, censoring rate, sample size and correlation structure. Our testing procedure showed very good performances in terms of statistical power and type I error. The proposed test was applied to a data set from a single-center, randomized, double-blind controlled trial in the area of Alzheimer's disease.
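A crude approximation of the intersection-union idea, ignoring the Gaussian-sampling and bootstrap refinements the authors propose, is sketched below for the one-directional case (all component effects negative): each component must reject its own one-sided null, so the overall p-value is the maximum of the component p-values. Estimates and standard errors are illustrative.

```python
# Intersection-union sketch for "all component effects are negative";
# the "all positive" direction is handled symmetrically with norm.sf(z).
import numpy as np
from scipy.stats import norm

est = np.array([-0.21, -0.15, -0.30])    # component log hazard ratios (illustrative)
se = np.array([0.08, 0.07, 0.12])
z = est / se

p_components = norm.cdf(z)               # one-sided p-values for H1: theta_j < 0
p_iut = p_components.max()               # reject only if every component rejects
print(f"component p-values: {p_components.round(4)}, overall p = {p_iut:.4f}")
```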
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-07-01
A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. This method directly relates digitally measured intensities to the water content of the porous medium. This method requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no calibration experiment is needed, because calibration curve relating water content and reflected light intensities is established during the main monitoring phase of each experiment and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) is carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis and numerical simulations with a state-of-the-art computational code that solves the Richards' equation. Comparison of the cumulative mass leaving and entering the flow tank and water content maps produced by the photographic measurement technique and the numerical simulations demonstrate the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly for its extension to heterogeneous media. Other processes may be investigated through different laboratory experiments which will serve as benchmark for numerical codes validation.
Aycock, Kenneth I; Campbell, Robert L; Manning, Keefe B; Craven, Brent A
2017-06-01
Inferior vena cava (IVC) filters are medical devices designed to provide a mechanical barrier to the passage of emboli from the deep veins of the legs to the heart and lungs. Despite decades of development and clinical use, IVC filters still fail to prevent the passage of all hazardous emboli. The objective of this study is to (1) develop a resolved two-way computational model of embolus transport, (2) provide verification and validation evidence for the model, and (3) demonstrate the ability of the model to predict the embolus-trapping efficiency of an IVC filter. Our model couples computational fluid dynamics simulations of blood flow to six-degree-of-freedom simulations of embolus transport and resolves the interactions between rigid, spherical emboli and the blood flow using an immersed boundary method. Following model development and numerical verification and validation of the computational approach against benchmark data from the literature, embolus transport simulations are performed in an idealized IVC geometry. Centered and tilted filter orientations are considered using a nonlinear finite element-based virtual filter placement procedure. A total of 2048 coupled CFD/6-DOF simulations are performed to predict the embolus-trapping statistics of the filter. The simulations predict that the embolus-trapping efficiency of the IVC filter increases with increasing embolus diameter and increasing embolus-to-blood density ratio. Tilted filter placement is found to decrease the embolus-trapping efficiency compared with centered filter placement. Multiple embolus-trapping locations are predicted for the IVC filter, and the trapping locations are predicted to shift upstream and toward the vessel wall with increasing embolus diameter. Simulations of the injection of successive emboli into the IVC are also performed and reveal that the embolus-trapping efficiency decreases with increasing thrombus load in the IVC filter. In future work, the computational tool could be used to investigate IVC filter design improvements, the effect of patient anatomy on embolus transport and IVC filter embolus-trapping efficiency, and, with further development and validation, optimal filter selection and placement on a patient-specific basis.
Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.
Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier
2017-02-01
Data to which the authors refer throughout this article are likelihood ratios (LRs) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. These data present a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing the fingermark evidence in the same minutiae configuration as presented in Meuwly, Ramos and Haraksim [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports of forensic methods, according to [1]. Alongside the data, a justification and motivation for the use of the methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validation of the LR methods, which are used to calculate the LR values from the data and the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared. However, these images do not constitute the core data for the validation, contrary to the LRs, which are shared.
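One common summary used when validating LR methods is the log-likelihood-ratio cost (Cllr); the sketch below computes it from sets of same-source and different-source LRs of the kind shared with the article. The LR values here are invented, and Cllr is only one of several validation metrics one might report.

```python
# Log-likelihood-ratio cost (Cllr) from same-source and different-source LRs.
import numpy as np

lr_same_source = np.array([40.0, 12.0, 300.0, 5.0, 80.0])   # LRs for mated pairs (invented)
lr_diff_source = np.array([0.02, 0.3, 0.001, 0.08, 0.5])    # LRs for non-mated pairs (invented)

cllr = 0.5 * (np.mean(np.log2(1 + 1 / lr_same_source)) +
              np.mean(np.log2(1 + lr_diff_source)))
print(f"Cllr = {cllr:.3f}")   # lower is better; a value near 1.0 is uninformative
```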
NASA Technical Reports Server (NTRS)
Kibler, Jennifer L.; Wilson, Sara R.; Hubbs, Clay E.; Smail, James W.
2015-01-01
The Interval Management for Near-term Operations Validation of Acceptability (IM-NOVA) experiment was conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) in support of the NASA Airspace Systems Program's Air Traffic Management Technology Demonstration-1 (ATD-1). ATD-1 is intended to showcase an integrated set of technologies that provide an efficient arrival solution for managing aircraft using Next Generation Air Transportation System (NextGen) surveillance, navigation, procedures, and automation for both airborne and ground-based systems. The goal of the IM-NOVA experiment was to assess if procedures outlined by the ATD-1 Concept of Operations were acceptable to and feasible for use by flight crews in a voice communications environment when used with a minimum set of Flight Deck-based Interval Management (FIM) equipment and a prototype crew interface. To investigate an integrated arrival solution using ground-based air traffic control tools and aircraft Automatic Dependent Surveillance-Broadcast (ADS-B) tools, the LaRC FIM system and the Traffic Management Advisor with Terminal Metering and Controller Managed Spacing tools developed at the NASA Ames Research Center (ARC) were integrated into LaRC's Air Traffic Operations Laboratory (ATOL). Data were collected from 10 crews of current 757/767 pilots asked to fly a high-fidelity, fixed-base simulator during scenarios conducted within an airspace environment modeled on the Dallas-Fort Worth (DFW) Terminal Radar Approach Control area. The aircraft simulator was equipped with the Airborne Spacing for Terminal Area Routes (ASTAR) algorithm and a FIM crew interface consisting of electronic flight bags and ADS-B guidance displays. Researchers used "pseudo-pilot" stations to control 24 simulated aircraft that provided multiple air traffic flows into the DFW International Airport, and recently retired DFW air traffic controllers served as confederate Center, Feeder, Final, and Tower controllers. Analyses of qualitative data revealed that the procedures used by flight crews to receive and execute interval management (IM) clearances in a voice communications environment were logical, easy to follow, did not contain any missing or extraneous steps, and required the use of an acceptable workload level. The majority of the pilot participants found the IM concept, in addition to the proposed FIM crew procedures, to be acceptable and indicated that the ATD-1 procedures could be successfully executed in a near-term NextGen environment. Analyses of quantitative data revealed that the proposed procedures were feasible for use by flight crews in a voice communications environment. The delivery accuracy at the achieve-by point was within +/-5 sec, and the delivery precision was less than 5 sec. Furthermore, FIM speed commands occurred at a rate of less than one per minute, and pilots found the frequency of the speed commands to be acceptable at all times throughout the experiment scenarios.
Validation of Harris Detector and Eigen Features Detector
NASA Astrophysics Data System (ADS)
Kok, K. Y.; Rajendran, P.
2018-05-01
The Harris detector is one of the most common feature detectors for applications such as object recognition, stereo matching, and target tracking. In this paper, a Harris detector algorithm is rewritten in MATLAB and its performance is compared with MATLAB's built-in Harris detector for validation. This is to ensure that the rewritten version of the Harris detector can be used for Unmanned Aerial Vehicle (UAV) application research while remaining open to further improvement. Another corner detector closely related to the Harris detector, the Eigen features detector, is rewritten and compared using the same procedure and for the same purpose. The simulation results show that the rewritten versions of both the Harris and Eigen features detectors match the performance of the MATLAB built-in detectors, with no more than 0.4% coordinate deviation, response deviations of less than 4% and 5% respectively, and at most 3% computational cost error.
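Since the paper's MATLAB code is not reproduced here, the following Python/NumPy sketch shows the standard Harris corner response that such a reimplementation would compute; the k value, Gaussian window, and thresholding step are conventional choices, not the authors'. The Eigen features (minimum-eigenvalue) variant scores each pixel by the smaller eigenvalue of the same structure matrix instead of det(M) - k*trace(M)^2.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(image, k=0.04, sigma=1.0):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel, where M
    is the Gaussian-weighted structure (second-moment) matrix."""
    img = image.astype(float)
    ix = sobel(img, axis=1)  # horizontal image gradient
    iy = sobel(img, axis=0)  # vertical image gradient

    # Smoothed elements of the structure matrix M = [[Ixx, Ixy], [Ixy, Iyy]].
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)

    det_m = ixx * iyy - ixy ** 2
    trace_m = ixx + iyy
    return det_m - k * trace_m ** 2

# Candidate corners: local maxima of the response above a relative threshold.
# response = harris_response(some_grayscale_image)
# corners = np.argwhere(response > 0.01 * response.max())
```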
Current Status of Simulation-based Training Tools in Orthopedic Surgery: A Systematic Review.
Morgan, Michael; Aydin, Abdullatif; Salih, Alan; Robati, Shibby; Ahmed, Kamran
To conduct a systematic review of orthopedic training and assessment simulators with reference to their level of evidence (LoE) and level of recommendation. The Medline and EMBASE databases were searched for English-language articles published between 1980 and 2016 describing orthopedic simulators or validation studies of these models. All studies were assessed for LoE, and each model was subsequently awarded a level of recommendation using a modified Oxford Centre for Evidence-Based Medicine classification adapted for education. A total of 76 articles describing orthopedic simulators met the inclusion criteria, 47 of which described at least 1 validation study. The most commonly identified models (n = 34) and validation studies (n = 26) were for knee arthroscopy. Construct validation was the most frequent type of validation study attempted by authors. In all, 62% (47 of 76) of the simulator studies described arthroscopy simulators, which also contained the validation studies with the highest LoE. Orthopedic simulators are increasingly being subjected to validation studies, although the LoE of such studies generally remains low. There remains a lack of focus on nontechnical skills and on cost analyses of orthopedic simulators. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
New robust statistical procedures for the polytomous logistic regression models.
Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro
2018-05-17
This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through classical influence function analysis. Appropriate real-life examples are presented to justify the need for suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
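As a rough illustration of the idea (not the authors' derivation), the sketch below minimises a density-power-divergence-type objective for a multinomial (polytomous) logit model with tuning parameter alpha; as alpha approaches 0 the objective approaches the negative log-likelihood, recovering the MLE. The toy data, alpha value, and optimiser are arbitrary choices for the example.

```python
import numpy as np
from scipy.optimize import minimize

def softmax_probs(X, beta, n_classes):
    """Category probabilities of a polytomous logit model; beta is a flattened
    (n_features x (n_classes - 1)) matrix with the last class as baseline."""
    B = beta.reshape(X.shape[1], n_classes - 1)
    logits = np.column_stack([X @ B, np.zeros(X.shape[0])])
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)

def dpd_objective(beta, X, y, n_classes, alpha=0.5):
    """Density-power-divergence-type objective; it down-weights observations
    with small fitted probability, giving robustness to outliers."""
    p = softmax_probs(X, beta, n_classes)
    p_obs = p[np.arange(len(y)), y]
    return np.mean((p ** (1.0 + alpha)).sum(axis=1)
                   - (1.0 + 1.0 / alpha) * p_obs ** alpha)

# Toy data and fit (illustrative only).
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = rng.integers(0, 3, size=200)
beta0 = np.zeros(X.shape[1] * 2)
fit = minimize(dpd_objective, beta0, args=(X, y, 3, 0.5), method="BFGS")
print("Estimated coefficients:\n", fit.x.reshape(X.shape[1], 2))
```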
Preoperative simulation for the planning of microsurgical clipping of intracranial aneurysms.
Marinho, Paulo; Vermandel, Maximilien; Bourgeois, Philippe; Lejeune, Jean-Paul; Mordon, Serge; Thines, Laurent
2014-12-01
The safety and success of intracranial aneurysm (IA) surgery could be improved through the dedicated application of simulation covering the procedure from the 3-dimensional (3D) description of the surgical scene to the visual representation of the clip application. We aimed in this study to validate the technical feasibility and clinical relevance of such a protocol. All patients preoperatively underwent 3D magnetic resonance imaging and 3D computed tomography angiography to build 3D reconstructions of the brain, cerebral arteries, and surrounding cranial bone. These 3D models were segmented and merged using Osirix, a DICOM image processing application. This provided the surgical scene that was subsequently imported into Blender, a modeling platform for 3D animation. Digitized clips and appliers could then be manipulated in the virtual operative environment, allowing the visual simulation of clipping. This simulation protocol was assessed in a series of 10 IAs by 2 neurosurgeons. The protocol was feasible in all patients. The visual similarity between the surgical scene and the operative view was excellent in 100% of the cases, and the identification of the vascular structures was accurate in 90% of the cases. The neurosurgeons found the simulation helpful for planning the surgical approach (ie, the bone flap, cisternal opening, and arterial tree exposure) in 100% of the cases. The correct number of final clip(s) needed was predicted from the simulation in 90% of the cases. The preoperatively expected characteristics of the optimal clip(s) (ie, their number, shape, size, and orientation) were validated during surgery in 80% of the cases. This study confirmed that visual simulation of IA clipping based on the processing of high-resolution 3D imaging can be effective. This is a new and important step toward the development of a more sophisticated integrated simulation platform dedicated to cerebrovascular surgery.
2013-01-01
Background: Simulation as a pedagogical approach has been used in health professional education to address the need to safely develop effective clinical skills prior to undertaking clinical practice. However, evidence for the use of simulation in midwifery is largely anecdotal, and research evaluating the effectiveness of different levels of simulation fidelity is lacking. Woman centred care is a core premise of the midwifery profession and describes the behaviours of an individual midwife who demonstrates safe and effective care of the individual woman. Woman centred care occurs when the midwife modifies the care to ensure the needs of each individual woman are respected and addressed. However, a review of the literature demonstrates an absence of a valid and reliable tool to measure the development of woman centred care behaviours. This study aims to determine which level of fidelity in simulated learning experiences provides the most effective learning outcomes in the development of woman centred clinical assessment behaviours and skills in student midwives. Methods/Design: A three-arm, randomised intervention trial. In this research we plan to: a) trial three levels of simulation fidelity - low, medium and progressive - on student midwives performing the procedure of vaginal examination; b) measure clinical assessment skills using the Global Rating Scale (GRS) and the Integrated Procedural Performance Instrument (IPPI); and c) pilot the newly developed Woman Centred Care Scale (WCCS) to measure clinical behaviours related to woman-centredness. Discussion: This project aims to enhance knowledge of the levels of simulation fidelity that yield the best educational outcomes for the development of woman centred clinical assessment in student midwives. The outcomes of this project may contribute to improved woman centred clinical assessment by student midwives and, more broadly, influence decision making regarding education resource allocation for maternity simulation. PMID:23706037
Sardari Nia, Peyman; Heuts, Samuel; Daemen, Jean; Luyten, Peter; Vainer, Jindrich; Hoorntje, Jan; Cheriex, Emile; Maessen, Jos
2017-02-01
Mitral valve repair performed by an experienced surgeon is superior to mitral valve replacement for degenerative mitral valve disease; however, many surgeons are still deterred from adopting this procedure because of a steep learning curve. Simulation-based training and planning could improve surgical performance and reduce the learning curve. The aim of this study was to develop a patient-specific simulation for mitral valve repair and to provide a proof of concept of personalized medicine in a patient prospectively planned for mitral valve surgery. A 65-year-old male with severe symptomatic mitral valve regurgitation was referred to our mitral valve heart team. On the basis of three-dimensional (3D) transoesophageal echocardiography and computed tomography, 3D reconstructions of the patient's anatomy were constructed. By navigating through these reconstructions, the repair options and surgical access (minimally invasive repair) were chosen. Using rapid prototyping and negative mould fabrication, we developed a process to cast a patient-specific mitral valve silicone replica for preoperative repair in a high-fidelity simulator. The mitral valve and negative mould were printed in systole to capture the pathology when the valve closes. A patient-specific mitral valve silicone replica was cast and mounted in the simulator. All repair techniques could be performed in the simulator to choose the best repair strategy. As the valve was printed in systole, no special testing other than adjusting the coaptation area was required. Subsequently, the patient was operated on; the mitral valve pathology was validated and the repair was performed successfully as in the simulation. Patient-specific simulation and planning could be applied to surgical training, to starting a (minimally invasive) mitral valve repair programme, to the planning of complex cases, and to the evaluation of new interventional techniques. This personalized approach could be a pathway towards enhancing the reproducibility, safety, and effectiveness of a complex surgical procedure. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Sørensen, Hans Eibe; Slater, Stanley F
2008-08-01
Atheoretical measure purification may lead to construct-deficient measures. The purpose of this paper is to provide a theoretically driven procedure for the development and empirical validation of symmetric component measures of multidimensional constructs. Particular emphasis is placed on establishing a formalized three-step procedure for achieving a posteriori content validity. The procedure is then applied to the development and empirical validation of two symmetric component measures of market orientation: customer orientation and competitor orientation. The analysis suggests that average variance extracted is particularly critical to reliability in the respecification of multi-indicator measures. In relation to this, the results also identify possible deficiencies in using Cronbach's alpha to establish reliable and valid measures.
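For readers unfamiliar with the two statistics contrasted above, the sketch below computes Cronbach's alpha from an item-score matrix and average variance extracted (AVE) from standardized loadings. The data and loadings are invented for illustration, and the formulas are the standard textbook ones, not anything specific to this paper.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def average_variance_extracted(std_loadings):
    """AVE from standardized loadings: mean of the squared loadings."""
    loadings = np.asarray(std_loadings, dtype=float)
    return np.mean(loadings ** 2)

# Illustrative data: four indicators loading on one construct.
rng = np.random.default_rng(2)
factor = rng.normal(size=300)
items = np.column_stack([0.8 * factor + rng.normal(scale=0.6, size=300)
                         for _ in range(4)])
print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
print("AVE:", round(average_variance_extracted([0.82, 0.78, 0.75, 0.80]), 3))
```

A scale can show an acceptable alpha while AVE remains low, which is one reason the two statistics can lead to different respecification decisions.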
Gathering Validity Evidence for Surgical Simulation: A Systematic Review.
Borgersen, Nanna Jo; Naur, Therese M H; Sørensen, Stine M D; Bjerrum, Flemming; Konge, Lars; Subhi, Yousif; Thomsen, Ann Sofia S
2018-06-01
To identify current trends in the use of validity frameworks in surgical simulation, to provide an overview of the evidence behind the assessment of technical skills in all surgical specialties, and to present recommendations and guidelines for future validity studies. Validity evidence for assessment tools used in the evaluation of surgical performance is of paramount importance to ensure valid and reliable assessment of skills. We systematically reviewed the literature by searching 5 databases (PubMed, EMBASE, Web of Science, PsycINFO, and the Cochrane Library) for studies published from January 1, 2008, to July 10, 2017. We included original studies evaluating simulation-based assessments of health professionals in surgical specialties and extracted data on surgical specialty, simulator modality, participant characteristics, and the validity framework used. Data were synthesized qualitatively. We identified 498 studies with a total of 18,312 participants. Publications involving validity assessments in surgical simulation more than doubled, from approximately 30 studies/year in 2008-2010 to approximately 70-90 studies/year in 2014-2016. Only 6.6% of the studies used the recommended contemporary validity framework (Messick). The majority of studies used outdated frameworks such as face validity. Significant differences were identified across surgical specialties. The evaluated assessment tools were mostly inanimate or virtual reality simulation models. An increasing number of studies have gathered validity evidence for simulation-based assessments in surgical specialties, but the use of outdated frameworks remains common. To address current practice, this paper presents guidelines on how to use the contemporary validity framework when designing validity studies.
Simulation validation of the XV-15 tilt-rotor research aircraft
NASA Technical Reports Server (NTRS)
Ferguson, S. W.; Hanson, G. D.; Churchill, G. B.
1984-01-01
The results of a simulation validation program of the XV-15 tilt-rotor research aircraft are detailed, covering such simulation aspects as the mathematical model, visual system, motion system, cab aural system, cab control loader system, pilot perceptual fidelity, and generic tilt-rotor applications. Simulation validation was performed for the hover, low-speed, and sideward flight modes, with consideration of rotor in-ground effect. Several deficiencies of the mathematical model and the simulation systems were identified in the course of the simulation validation project, and some were corrected. It is noted that NASA's Vertical Motion Simulator used in the program is an excellent tool for tilt-rotor and rotorcraft design, development, and pilot training.
Surgical task analysis of simulated laparoscopic cholecystectomy with a navigation system.
Sugino, T; Kawahira, H; Nakamura, R
2014-09-01
Advanced surgical procedures have become complex and difficult, increasing the burden on surgeons. Quantitative analysis of surgical procedures can improve training, reduce variability, and enable optimization. To this end, a surgical task analysis system was developed that uses only surgical navigation information and performs division of the surgical procedure, task progress analysis, and task efficiency analysis. First, the procedure was divided into five stages. Second, the operating time and progress rate were recorded to document task progress during specific stages, including the dissecting task. Third, the speed of surgical instrument motion (mean velocity and acceleration), as well as the size and overlap ratio of the approximate ellipse of the location log data distribution, was computed to estimate task efficiency during each stage. These analysis methods were evaluated through experimental validation with two groups of surgeons, skilled and "other" surgeons, with performance metrics and analytical parameters reflecting incidents during the operation, the surgical environment, and each surgeon's skills or habits. The comparison revealed that skilled surgeons tended to perform the procedure in less time and within smaller regions, and they manipulated the surgical instruments more gently. Surgical task analysis developed for quantitative assessment of surgical procedures and surgical performance may provide practical methods and metrics for the objective evaluation of surgical expertise.
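The motion and ellipse metrics described above are not specified in detail in the abstract; the sketch below shows one plausible way to compute them in Python from a logged instrument trajectory. The sampling rate, 2-D projection, and two-sigma ellipse are assumptions for illustration, not the authors' definitions.

```python
import numpy as np

def motion_metrics(positions, dt):
    """Mean speed and mean acceleration magnitude from an (n x 3) array of
    instrument-tip positions sampled every dt seconds."""
    velocity = np.diff(positions, axis=0) / dt
    acceleration = np.diff(velocity, axis=0) / dt
    mean_speed = np.linalg.norm(velocity, axis=1).mean()
    mean_accel = np.linalg.norm(acceleration, axis=1).mean()
    return mean_speed, mean_accel

def covariance_ellipse_area(xy, n_std=2.0):
    """Area of the n-sigma covariance ellipse of a 2-D projection of the
    position log; smaller areas indicate more compact instrument motion."""
    cov = np.cov(np.asarray(xy, dtype=float).T)
    eigvals = np.linalg.eigvalsh(cov)
    semi_axes = n_std * np.sqrt(eigvals)
    return np.pi * semi_axes[0] * semi_axes[1]

# Toy usage with a synthetic 20 Hz position log (millimetres).
rng = np.random.default_rng(3)
log = np.cumsum(rng.normal(scale=0.5, size=(200, 3)), axis=0)
speed, accel = motion_metrics(log, dt=0.05)
area = covariance_ellipse_area(log[:, :2])
print(f"Mean speed: {speed:.1f} mm/s, mean acceleration: {accel:.1f} mm/s^2")
print(f"2-sigma ellipse area: {area:.1f} mm^2")
```

An overlap ratio between the ellipses of two stages could then be estimated numerically, for example by sampling points and counting those inside both ellipses, although the paper's exact definition may differ.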
Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview
NASA Technical Reports Server (NTRS)
Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard
1996-01-01
The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal de-icing and anti-icing) and ANTICE (hot-gas and electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis Icing program and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions for comparison with analytical predictions. This paper presents an overview of the test, including a detailed description of (1) the validation process, (2) test article design, (3) test matrix development, and (4) test procedures. Selected experimental results are presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort is summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes. Part 2: The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes. Part 3: The Validation of ANTICE'.
La Barbera, Luigi; Ottardi, Claudia; Villa, Tomaso
2015-10-01
Preclinical evaluation of the mechanical reliability of fixation devices is a mandatory activity before their introduction onto the market. There are two standardized protocols for preclinical testing of spinal implants. The American Society for Testing and Materials (ASTM) recommends the F1717 standard, which describes a vertebrectomy condition that is relatively simple to implement, whereas the International Organization for Standardization (ISO) suggests the 12189 standard, which describes a more complex, physiological, anterior-support-based setup. ASTM F1717 is nowadays well established, whereas ISO 12189 has received little attention: a few studies have tried to describe the ISO experimental procedure accurately through numerical models, but these studies completely neglect the recommended precompression step. This study aimed to build a reliable, validated numerical model capable of describing the stress on the rods of a spinal fixator assembled according to the ISO 12189 standard procedure. Such a model would more adequately represent the in vitro testing condition. The study used finite element (FE) simulations and experimental validation testing. An FE model of the ISO setup was built to calculate the stress on the rods, and the simulation was validated by comparison with experimental strain gauge measurements. The same fixator had previously been virtually mounted in an L2-L4 FE model of the lumbar spine, and the stresses in the rods were calculated when the spine was subjected to physiological forces and moments. The FE predictions and the experimental measurements are in good agreement, confirming the suitability of the FE method for evaluating the stresses in the device. The initial precompression induces a significant extension of the assembled construct. As the applied load increases, the initial extension is gradually compensated, so that at peak load the rods are bent in flexion: the final predicted stress is thus reduced to about 50% of that obtained with a previous model in which the precompression was not considered. Neglecting the initial preload due to the assembly of the overall construct according to the ISO 12189 standard could therefore lead to an overestimation of the stress on the rods of up to 50%. To correctly describe the state of stress on the posterior spinal fixator tested according to the ISO procedure, it is important to take into account the initial preload due to the assembly of the overall construct. Copyright © 2015 Elsevier Inc. All rights reserved.
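The validation step compares strain-gauge measurements on the rods against FE-predicted stresses. As a hedged illustration of that comparison (with an assumed titanium-alloy modulus and invented readings, not the study's data), the sketch below converts measured strains to stress via Hooke's law and reports the relative deviation of the FE predictions.

```python
import numpy as np

E_TI = 110e3  # Young's modulus of a Ti alloy in MPa (assumed value)

# Hypothetical gauge readings (microstrain) and FE-predicted stresses (MPa)
# at the same rod locations; both are illustrative, not the study's data.
measured_microstrain = np.array([850.0, 910.0, 780.0])
fe_predicted_stress = np.array([95.0, 102.0, 88.0])

measured_stress = E_TI * measured_microstrain * 1e-6   # Hooke's law, MPa
relative_error = (fe_predicted_stress - measured_stress) / measured_stress

for gauge, (sigma, err) in enumerate(zip(measured_stress, relative_error), 1):
    print(f"Gauge {gauge}: measured {sigma:.1f} MPa, FE deviation {100*err:+.1f}%")
```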
29 CFR 1607.7 - Use of other validity studies.
Code of Federal Regulations, 2012 CFR
2012-07-01
... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...