Systematic review of skills transfer after surgical simulation-based training.
Dawe, S R; Pena, G N; Windsor, J A; Broeders, J A J L; Cregan, P C; Hewett, P J; Maddern, G J
2014-08-01
Simulation-based training assumes that skills are directly transferable to the patient-based setting, but few studies have correlated simulated performance with surgical performance. A systematic search strategy was undertaken to find studies published since the previous systematic review in 2007. Inclusion of articles was determined using a predetermined protocol, independent assessment by two reviewers and a final consensus decision. Studies that reported on the use of surgical simulation-based training and assessed the transferability of the acquired skills to a patient-based setting were included. Twenty-seven randomized clinical trials and seven non-randomized comparative studies were included. Fourteen studies investigated laparoscopic procedures, 13 endoscopic procedures and seven other procedures. These studies provided strong evidence that participants who reached proficiency in simulation-based training performed better in the patient-based setting than their counterparts who did not have simulation-based training. Simulation-based training was as effective as patient-based training for colonoscopy, laparoscopic camera navigation and endoscopic sinus surgery in the patient-based setting. These studies strengthen the evidence that simulation-based training, as part of a structured programme and incorporating predetermined proficiency levels, results in skills transfer to the operative setting. © 2014 BJS Society Ltd. Published by John Wiley & Sons Ltd.
Alcohol consumption for simulated driving performance: A systematic review.
Rezaee-Zavareh, Mohammad Saeid; Salamati, Payman; Ramezani-Binabaj, Mahdi; Saeidnejad, Mina; Rousta, Mansoureh; Shokraneh, Farhad; Rahimi-Movaghar, Vafa
2017-06-01
Alcohol consumption can lead to risky driving and increase the frequency of traffic accidents, injuries and mortalities. The main purpose of our study was to compare, in a systematic review, simulated driving performance between two groups of drivers, one that had consumed alcohol and one that had not. In this systematic review, electronic resources and databases, including Medline via Ovid SP, EMBASE via Ovid SP, PsycINFO via Ovid SP, PubMed, Scopus, and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) via EBSCOhost, were comprehensively and systematically searched. Randomized controlled clinical trials that compared simulated driving performance between these two groups were included. Lane position standard deviation (LPSD), mean of lane position deviation (MLPD), speed, mean of speed deviation (MSD), standard deviation of speed deviation (SDSD), number of accidents (NA) and line crossings (LC) were considered the main outcome parameters. After title and abstract screening, articles were taken forward for data extraction and evaluated for risk of bias. Thirteen papers were included in our qualitative synthesis. All included papers were classified as having a high risk of bias. Alcohol consumption mostly deteriorated the following performance outcomes, in descending order: SDSD, LPSD, speed, MLPD, LC and NA. Our systematic review was limited by considerable heterogeneity. Alcohol consumption may impair simulated driving performance, relative to no alcohol, through changes in SDSD, LPSD, speed, MLPD, LC and NA. More well-designed randomized controlled clinical trials are recommended. Copyright © 2017. Production and hosting by Elsevier B.V.
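The outcome measures named above can be computed directly from simulator time series. A minimal sketch, assuming evenly sampled lane-position and speed traces and a hypothetical lane width; exact operational definitions vary between the reviewed trials:

```python
import numpy as np

def driving_metrics(lane_pos, speed, lane_width=3.5):
    """Illustrative computation of common simulated-driving outcomes.
    lane_pos: lateral offset from lane centre (m); speed: km/h.
    Names follow the review; precise definitions differ across studies."""
    lane_pos = np.asarray(lane_pos, dtype=float)
    speed = np.asarray(speed, dtype=float)
    lpsd = float(np.std(lane_pos))               # lane position SD (LPSD)
    mlpd = float(np.mean(np.abs(lane_pos)))      # mean lane position deviation (MLPD)
    mean_speed = float(np.mean(speed))
    msd = float(np.mean(np.abs(speed - mean_speed)))  # mean speed deviation (MSD)
    sdsd = float(np.std(speed - mean_speed))          # SD of speed deviation (SDSD)
    # count transitions from inside to outside the lane boundary (LC)
    outside = (np.abs(lane_pos) > lane_width / 2).astype(int)
    lc = int(np.sum(np.diff(outside) == 1))
    return {"LPSD": lpsd, "MLPD": mlpd, "speed": mean_speed,
            "MSD": msd, "SDSD": sdsd, "LC": lc}
```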
GROUND-WATER MODEL TESTING: SYSTEMATIC EVALUATION AND TESTING OF CODE FUNCTIONALITY AND PERFORMANCE
Effective use of ground-water simulation codes as management decision tools requires the establishment of their functionality, performance characteristics, and applicability to the problem at hand. This is accomplished through application of a systematic code-testing protocol and...
Simulation Modelling in Healthcare: An Umbrella Review of Systematic Literature Reviews.
Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Booth, Andrew
2017-09-01
Numerous studies examine simulation modelling in healthcare. These studies present a bewildering array of simulation techniques and applications, making it challenging to characterise the literature. The aim of this paper is to provide an overview of the level of activity of simulation modelling in healthcare and the key themes. We performed an umbrella review of systematic literature reviews of simulation modelling in healthcare. Searches were conducted of academic databases (JSTOR, Scopus, PubMed, IEEE, SAGE, ACM, Wiley Online Library, ScienceDirect) and grey literature sources, enhanced by citation searches. Articles were included if they performed a systematic review of simulation modelling techniques in healthcare. After quality assessment of all included articles, data were extracted on numbers of studies included in each review, types of applications, techniques used for simulation modelling, data sources and simulation software. The search strategy yielded a total of 117 potential articles. Following sifting, 37 heterogeneous reviews were included. Most reviews achieved a moderate quality rating on a modified AMSTAR (A Measurement Tool used to Assess systematic Reviews) checklist. All the review articles described the types of applications used for simulation modelling; 15 reviews described techniques used for simulation modelling; three reviews described data sources used for simulation modelling; and six reviews described software used for simulation modelling. The remaining reviews either did not report or did not provide enough detail for the data to be extracted. Simulation modelling techniques have been used for a wide range of applications in healthcare, with a variety of software tools and data sources. The number of reviews published in recent years suggests an increased interest in simulation modelling in healthcare.
Dawe, Susan R; Windsor, John A; Broeders, Joris A J L; Cregan, Patrick C; Hewett, Peter J; Maddern, Guy J
2014-02-01
A systematic review to determine whether skills acquired through simulation-based training transfer to the operating room for the procedures of laparoscopic cholecystectomy and endoscopy. Simulation-based training assumes that skills are directly transferable to the operating room, but only a few studies have investigated the effect of simulation-based training on surgical performance. A systematic search strategy that was used in 2006 was updated to retrieve relevant studies. Inclusion of articles was determined using a predetermined protocol, independent assessment by 2 reviewers, and a final consensus decision. Seventeen randomized controlled trials and 3 nonrandomized comparative studies were included in this review. In most cases, simulation-based training was in addition to patient-based training programs. Only 2 studies directly compared simulation-based training in isolation with patient-based training. For laparoscopic cholecystectomy (n = 10 studies) and endoscopy (n = 10 studies), participants who reached simulation-based skills proficiency before undergoing patient-based assessment performed with higher global assessment scores and fewer errors in the operating room than their counterparts who did not receive simulation training. Not all parameters measured were improved. Two of the endoscopic studies compared simulation-based training in isolation with patient-based training, with different results: for sigmoidoscopy, patient-based training was more effective, whereas for colonoscopy, simulation-based training was equally effective. Skills acquired by simulation-based training seem to be transferable to the operative setting for laparoscopic cholecystectomy and endoscopy. Future research will strengthen these conclusions by evaluating predetermined competency levels on the same simulators and using objective validated global rating scales to measure operative performance.
A systematic analysis of model performance during simulations based on observed landcover/use change is used to quantify errors associated with simulations of known "future" conditions. Calibrated and uncalibrated assessments of relative change over different lengths of...
Van Dongen, Hans P A; Caldwell, John A; Caldwell, J Lynn
2006-05-01
Laboratory research has revealed considerable systematic variability in the degree to which individuals' alertness and performance are affected by sleep deprivation. However, little is known about whether or not different populations exhibit similar levels of individual variability. In the present study, we examined individual variability in performance impairment due to sleep loss in a highly select population of military jet pilots. Ten active-duty F-117 pilots were deprived of sleep for 38 h and studied repeatedly in a high-fidelity flight simulator. Data were analyzed with a mixed-model ANOVA to quantify individual variability. Statistically significant, systematic individual differences in the effects of sleep deprivation were observed, even when baseline differences were accounted for. The findings suggest that highly select populations may exhibit individual differences in vulnerability to performance impairment from sleep loss just as the general population does. Thus, the scientific and operational communities' reliance on group data as opposed to individual data may entail substantial misestimation of the impact of job-related stressors on safety and performance.
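The idea of quantifying systematic individual differences can be illustrated with a simpler one-way random-effects variance decomposition. This is not the study's mixed-model ANOVA; it only sketches how between-subject variance and an intraclass correlation separate stable individual vulnerability from session-to-session noise:

```python
import numpy as np

def individual_variability(scores):
    """scores: array of shape (n_subjects, n_sessions), e.g. repeated
    performance measures during sleep deprivation. Returns the estimated
    between-subject variance component, the within-subject mean square,
    and the intraclass correlation (share of total variance attributable
    to systematic individual differences). Illustrative sketch only."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    subj_means = scores.mean(axis=1)
    grand = scores.mean()
    # one-way random-effects ANOVA mean squares
    ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))
    var_between = max((ms_between - ms_within) / k, 0.0)
    icc = var_between / (var_between + ms_within)
    return var_between, ms_within, icc
```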
Khanduja, P Kristina; Bould, M Dylan; Naik, Viren N; Hladkowicz, Emily; Boet, Sylvain
2015-01-01
We systematically reviewed the effectiveness of simulation-based education, targeting independently practicing qualified physicians in acute care specialties. We also describe how simulation is used for performance assessment in this population. Data sources included MEDLINE, Embase, the Cochrane Database of Systematic Reviews, the Cochrane CENTRAL Database of Controlled Trials, and the National Health Service Economic Evaluation Database. The last date of search was January 31, 2013. All original research describing simulation-based education for independently practicing physicians in anesthesiology, critical care, and emergency medicine was reviewed. Data analysis was performed in duplicate, with further review by a third author in cases of disagreement until consensus was reached. Data extraction was focused on effectiveness according to Kirkpatrick's model. For simulation-based performance assessment, tool characteristics and sources of validity evidence were also collated. Of 39 studies identified, 30 studies focused on the effectiveness of simulation-based education and nine studies evaluated the validity of simulation-based assessment. Thirteen studies (30%) targeted the lower levels of Kirkpatrick's hierarchy with reliance on self-reporting. Simulation was unanimously described as a positive learning experience with perceived impact on clinical practice. Of the 17 remaining studies, 10 used a single group or "no intervention comparison group" design. The majority (n = 17; 44%) were able to demonstrate both immediate and sustained improvements in educational outcomes. Nine studies reported the psychometric properties of simulation-based performance assessment as their sole objective. These predominantly recruited independent practitioners as a convenience sample to establish whether the tool could discriminate between experienced and inexperienced operators, and concentrated on a single aspect of validity evidence.
Simulation is perceived as a positive learning experience, with limited evidence to support improved learning. Future research should focus on the optimal modality and frequency of exposure, the quality of assessment tools, and the impact of simulation-based education beyond the individual toward improved patient care.
The use of simulation in neurosurgical education and training. A systematic review.
Kirkman, Matthew A; Ahmed, Maria; Albert, Angelique F; Wilson, Mark H; Nandi, Dipankar; Sevdalis, Nick
2014-08-01
There is increasing evidence that simulation provides high-quality, time-effective training in an era of resident duty-hour restrictions. Simulation may also permit trainees to acquire key skills in a safe environment, important in a specialty such as neurosurgery, where technical error can result in devastating consequences. The authors systematically reviewed the application of simulation within neurosurgical training and explored the state of the art in simulation within this specialty. To their knowledge this is the first systematic review published on this topic to date. The authors searched the Ovid MEDLINE, Embase, and PsycINFO databases and identified 4101 articles; 195 abstracts were screened by 2 authors for inclusion. The authors reviewed data on study population, study design and setting, outcome measures, key findings, and limitations. Twenty-eight articles formed the basis of this systematic review. Several different simulators are at the neurosurgeon's disposal, including those for ventriculostomy, neuroendoscopic procedures, and spinal surgery, with evidence for improved performance in a range of procedures. Feedback from participants has generally been favorable. However, study quality was found to be poor overall, with many studies hampered by nonrandomized design, presenting normal rather than abnormal anatomy, lack of control groups and long-term follow-up, poor study reporting, lack of evidence of improved simulator performance translating into clinical benefit, and poor reliability and validity evidence. The mean Medical Education Research Study Quality Instrument score of included studies was 9.21 ± 1.95 (± SD) out of a possible score of 18. The authors demonstrate qualitative and quantitative benefits of a range of neurosurgical simulators but find significant shortfalls in methodology and design. Future studies should seek to improve study design and reporting, and provide long-term follow-up data on simulated and ideally patient outcomes.
NASA Technical Reports Server (NTRS)
Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K. F.
1985-01-01
The synoptic scale performance characteristics of MASS 2.0 are determined by comparing filtered 12-24 hr model forecasts to same-case forecasts made by the National Meteorological Center's synoptic-scale Limited-area Fine Mesh model. Characteristics of the two systems are contrasted, and the analysis methodology used to determine statistical skill scores and systematic errors is described. The overall relative performance of the two models in the sample is documented, and important systematic errors uncovered are presented.
Virtual reality simulator training of laparoscopic cholecystectomies - a systematic review.
Ikonen, T S; Antikainen, T; Silvennoinen, M; Isojärvi, J; Mäkinen, E; Scheinin, T M
2012-01-01
Simulators are widely used in occupations where practice in authentic environments would involve high human or economic risks. Surgical procedures can be simulated by increasingly complex and expensive techniques. This review gives an update on computer-based virtual reality (VR) simulators in training for laparoscopic cholecystectomies. From leading databases (Medline, Cochrane, Embase), randomised or controlled trials and the latest systematic reviews were systematically searched and reviewed. Twelve randomised trials involving simulators were identified and analysed, as well as four controlled studies. Furthermore, seven studies comparing black boxes and simulators were included. The results indicated any kind of simulator training (black box, VR) to be beneficial at novice level. After VR training, novice surgeons seemed to be able to perform their first live cholecystectomies with fewer errors, and in one trial the positive effect remained during the first ten cholecystectomies. No clinical follow-up data were found. Optimal learning requires skills training to be conducted as part of a systematic training program. No data on the cost-benefit of simulators were found; the price of a VR simulator starts at EUR 60 000. Theoretical background to learning and limited research data support the use of simulators in the early phases of surgical training. The cost of buying and using simulators is justified if the risk of injuries and complications to patients can be reduced. Developing surgical skills requires repeated training. In order to achieve optimal learning, a validated training program is needed.
Helicopter simulation validation using flight data
NASA Technical Reports Server (NTRS)
Key, D. L.; Hansen, R. S.; Cleveland, W. B.; Abbott, W. Y.
1982-01-01
A joint NASA/Army effort to perform a systematic ground-based piloted simulation validation assessment is described. The best available mathematical model for the subject helicopter (UH-60A Black Hawk) was programmed for real-time operation. Flight data were obtained to validate the math model and to develop models for the pilot control strategy while performing mission-type tasks. The validated math model is to be combined with motion and visual systems to perform ground-based simulation. Comparisons of the control strategy obtained in flight with that obtained on the simulator are to be used as the basis for assessing the fidelity of the results obtained in the simulator.
A systematic examination of preoperative surgery warm-up routines.
Pike, T W; Pathak, S; Mushtaq, F; Wilkie, R M; Mon-Williams, M; Lodge, J P A
2017-05-01
Recent evidence indicates that a preoperative warm-up is a potentially useful tool in facilitating performance. But what factors drive such improvements, and how should a warm-up be implemented? In order to address these issues, we adopted a two-pronged approach: (1) we conducted a systematic review of the literature to identify existing studies utilising preoperative simulation techniques; (2) we performed task analysis to identify the constituent parts of effective warm-ups. We identified five randomised controlled trials, four randomised cross-over trials and four case series. The majority of these studies reviewed surgical performance following preoperative simulation relative to performance without simulation. Four studies reported outcome measures in real patients and the remainder reported simulated outcome measures. All but one of the studies found that preoperative simulation improves operative outcomes, although this improvement was not found across all measured parameters. While the reviewed studies had a number of methodological issues, the global data indicate that preoperative simulation has substantial potential to improve surgical performance. Analysis of the task characteristics of successful interventions indicated that the majority of these studies employed warm-ups that focused on the visual motor elements of surgery. However, there was no theoretical or empirical basis to inform the design of the intervention in any of these studies. There is an urgent need for a more rigorous approach to the development of "warm-up" routines if the potential value of preoperative simulation is to be understood and realised. We propose that such interventions need to be grounded in theory and empirical evidence on human motor performance.
Virtual reality-based simulators for spine surgery: a systematic review.
Pfandler, Michael; Lazarovici, Marc; Stefan, Philipp; Wucherer, Patrick; Weigl, Matthias
2017-09-01
Virtual reality (VR)-based simulators offer numerous benefits and are very useful in assessing and training surgical skills. Virtual reality-based simulators are standard in some surgical subspecialties, but their actual use in spinal surgery remains unclear. Currently, only technical reviews of VR-based simulators are available for spinal surgery. Thus, we performed a systematic review that examined the existing research on VR-based simulators in spinal procedures. We also assessed the quality of current studies evaluating VR-based training in spinal surgery. Moreover, we wanted to provide a guide for future studies evaluating VR-based simulators in this field. This is a systematic review of the current scientific literature regarding VR-based simulation in spinal surgery. Five data sources were systematically searched to identify relevant peer-reviewed articles regarding virtual, mixed, or augmented reality-based simulators in spinal surgery. A qualitative data synthesis was performed with particular attention to evaluation approaches and outcomes. Additionally, all included studies were appraised for their quality using the Medical Education Research Study Quality Instrument (MERSQI) tool. The initial review identified 476 abstracts and 63 full texts were then assessed by two reviewers. Finally, 19 studies that examined simulators for the following procedures were selected: pedicle screw placement, vertebroplasty, posterior cervical laminectomy and foraminotomy, lumbar puncture, facet joint injection, and spinal needle insertion and placement. These studies had a low-to-medium methodological quality with a MERSQI mean score of 11.47 out of 18 (standard deviation=1.81). This review described the current state and applications of VR-based simulator training and assessment approaches in spinal procedures. Limitations, strengths, and future advancements of VR-based simulators for training and assessment in spinal surgery were explored. 
Higher-quality studies with patient-related outcome measures are needed. To establish further adaptation of VR-based simulators in spinal surgery, future evaluations need to improve the study quality, apply long-term study designs, and examine non-technical skills, as well as multidisciplinary team training. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Boh, Larry E.; And Others
1987-01-01
A project to (1) develop and apply a microcomputer simulation program to enhance clinical medication problem solving in preclerkship and clerkship students and (2) perform an initial formative evaluation of the simulation is described. A systematic instructional design approach was used in applying the simulation to the disease state of rheumatoid…
NASA Astrophysics Data System (ADS)
Goodman, A.; Lee, H.; Waliser, D. E.; Guttowski, W.
2017-12-01
Observation-based evaluations of global climate models (GCMs) have been a key element for identifying systematic model biases that can be targeted for model improvements and for establishing uncertainty associated with projections of global climate change. However, GCMs are limited in their ability to represent physical phenomena which occur on smaller, regional scales, including many types of extreme weather events. In order to help facilitate projections of changes in such phenomena, simulations from regional climate models (RCMs) for 14 different domains around the world are being provided by the Coordinated Regional Climate Downscaling Experiment (CORDEX; www.cordex.org). However, although CORDEX specifies standard simulation and archiving protocols, these simulations are conducted independently by individual research and modeling groups representing each of these domains, often with different output requirements and data archiving and exchange capabilities. Thus, with respect to similar efforts using GCMs (e.g., the Coupled Model Intercomparison Project, CMIP), it is more difficult to achieve a standardized, systematic evaluation of the RCMs for each domain and across all the CORDEX domains. Using the Regional Climate Model Evaluation System (RCMES; rcmes.jpl.nasa.gov) developed at JPL, we are developing easy-to-use templates for performing systematic evaluations of CORDEX simulations. Results from the application of a number of evaluation metrics (e.g., biases, centered RMS, and pattern correlations) will be shown for a variety of physical quantities and CORDEX domains. These evaluations are performed using products from obs4MIPs, an activity initiated by DOE and NASA, and now shepherded by the World Climate Research Program's Data Advisory Council.
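The evaluation metrics mentioned (bias, centered RMS difference, pattern correlation) are straightforward to compute on co-located model and observation fields. A minimal NumPy sketch, not the RCMES implementation:

```python
import numpy as np

def evaluation_metrics(model, obs):
    """Domain-mean bias, centered RMS difference, and spatial pattern
    correlation for two fields on the same grid. Illustrative only;
    production systems would also handle masks and area weighting."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = np.mean(model - obs)
    # remove domain means so the metrics measure pattern agreement
    m_anom = model - model.mean()
    o_anom = obs - obs.mean()
    crms = np.sqrt(np.mean((m_anom - o_anom) ** 2))
    pattern_corr = np.sum(m_anom * o_anom) / np.sqrt(
        np.sum(m_anom ** 2) * np.sum(o_anom ** 2))
    return bias, crms, pattern_corr
```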
Tay, Charison; Khajuria, Ankur; Gupte, Chinmay
2014-01-01
Traditional orthopaedic training has followed an apprenticeship model whereby trainees enhance their skills by operating under guidance. However, the introduction of limitations on training hours and shorter training programmes means that alternative training strategies are required. To perform a literature review on simulation training in arthroscopy and devise a framework that structures the different simulation techniques that could be used in arthroscopic training. A systematic search of Medline, Embase, Google Scholar and the Cochrane databases was performed. Search terms included "virtual reality OR simulator OR simulation" and "arthroscopy OR arthroscopic". Fourteen studies evaluating simulators in knee, shoulder and hip arthroscopy were included. The majority of the studies demonstrated construct and transference validity, but only one showed concurrent validity. More studies are required to assess the potential of simulation as a training and assessment tool, the transfer of skills between simulators, and the extent of skills decay from prolonged delays in training. We also devised a "ladder of arthroscopic simulation" that provides a competency-based framework for implementing different simulation strategies. The incorporation of simulation into an orthopaedic curriculum will depend on a coordinated approach between many bodies, but the successful integration of simulators in other areas of surgery supports a possible role for simulation in advancing orthopaedic education. Copyright © 2014 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Aeolus End-To-End Simulator and Wind Retrieval Algorithms up to Level 1B
NASA Astrophysics Data System (ADS)
Reitebuch, Oliver; Marksteiner, Uwe; Rompel, Marc; Meringer, Markus; Schmidt, Karsten; Huber, Dorit; Nikolaus, Ines; Dabas, Alain; Marshall, Jonathan; de Bruin, Frank; Kanitz, Thomas; Straume, Anne-Grete
2018-04-01
The first wind lidar in space, ALADIN, will be deployed on ESA's Aeolus mission. In order to assess the performance of ALADIN and to optimize the wind retrieval and calibration algorithms, an end-to-end simulator was developed. This allows realistic simulations of the data downlinked by Aeolus. Together with the operational processors, this setup is used to assess random and systematic error sources and to perform sensitivity studies on the influence of atmospheric and instrument parameters.
Leistedt, B.; Peiris, H. V.; Elsner, F.; ...
2016-10-17
Spatially-varying depth and characteristics of observing conditions, such as seeing, airmass, or sky background, are major sources of systematic uncertainties in modern galaxy survey analyses, in particular in deep multi-epoch surveys. We present a framework to extract and project these sources of systematics onto the sky, and apply it to the Dark Energy Survey (DES) to map the observing conditions of the Science Verification (SV) data. The resulting distributions and maps of sources of systematics are used in several analyses of DES SV to perform detailed null tests with the data, and also to incorporate systematics in survey simulations. We illustrate the complementarity of these two approaches by comparing the SV data with the BCC-UFig, a synthetic sky catalogue generated by forward-modelling of the DES SV images. We then analyse the BCC-UFig simulation to construct galaxy samples mimicking those used in SV galaxy clustering studies. We show that the spatially-varying survey depth imprinted in the observed galaxy densities and the redshift distributions of the SV data are successfully reproduced by the simulation and well-captured by the maps of observing conditions. The combined use of the maps, the SV data and the BCC-UFig simulation allows us to quantify the impact of spatial systematics on N(z), the redshift distributions inferred using photometric redshifts. We conclude that spatial systematics in the SV data are mainly due to seeing fluctuations and are under control in current clustering and weak lensing analyses. However, they will need to be carefully characterised in upcoming phases of DES in order to avoid biasing the inferred cosmological results.
The framework presented is relevant to all multi-epoch surveys, and will be essential for exploiting future surveys such as the Large Synoptic Survey Telescope, which will require detailed null-tests and realistic end-to-end image simulations to correctly interpret the deep, high-cadence observations of the sky.
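The core of such a framework, projecting per-exposure observing conditions onto sky pixels, can be sketched as weighted binning. This is an illustrative stand-in (in practice pixel indices would come from a scheme such as HEALPix via `healpy.ang2pix`), not the authors' code:

```python
import numpy as np

def systematics_map(pix, value, n_pix):
    """Build a sky map of an observing-condition quantity (e.g. seeing)
    as the mean over exposures touching each pixel.
    pix: pixel index of each exposure; value: the quantity per exposure;
    n_pix: total number of sky pixels. Unobserved pixels become NaN."""
    pix = np.asarray(pix)
    value = np.asarray(value, dtype=float)
    sums = np.bincount(pix, weights=value, minlength=n_pix)
    counts = np.bincount(pix, minlength=n_pix)
    # mean per pixel; guard the division and mark empty pixels as NaN
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```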
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leistedt, B.; Peiris, H. V.; Elsner, F.
Spatially varying depth and the characteristics of observing conditions, such as seeing, airmass, or sky background, are major sources of systematic uncertainties in modern galaxy survey analyses, particularly in deep multi-epoch surveys. We present a framework to extract and project these sources of systematics onto the sky, and apply it to the Dark Energy Survey (DES) to map the observing conditions of the Science Verification (SV) data. The resulting distributions and maps of sources of systematics are used in several analyses of DES-SV to perform detailed null tests with the data, and also to incorporate systematics in survey simulations. Wemore » illustrate the complementary nature of these two approaches by comparing the SV data with BCC-UFig, a synthetic sky catalog generated by forward-modeling of the DES-SV images. We analyze the BCC-UFig simulation to construct galaxy samples mimicking those used in SV galaxy clustering studies. We show that the spatially varying survey depth imprinted in the observed galaxy densities and the redshift distributions of the SV data are successfully reproduced by the simulation and are well-captured by the maps of observing conditions. The combined use of the maps, the SV data, and the BCC-UFig simulation allows us to quantify the impact of spatial systematics on N(z), the redshift distributions inferred using photometric redshifts. We conclude that spatial systematics in the SV data are mainly due to seeing fluctuations and are under control in current clustering and weak-lensing analyses. However, they will need to be carefully characterized in upcoming phases of DES in order to avoid biasing the inferred cosmological results. 
The framework presented here is relevant to all multi-epoch surveys and will be essential for exploiting future surveys such as the Large Synoptic Survey Telescope, which will require detailed null tests and realistic end-to-end image simulations to correctly interpret the deep, high-cadence observations of the sky« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leistedt, B.; Peiris, H. V.; Elsner, F.
Spatially-varying depth and characteristics of observing conditions, such as seeing, airmass, or sky background, are major sources of systematic uncertainties in modern galaxy survey analyses, in particular in deep multi-epoch surveys. We present a framework to extract and project these sources of systematics onto the sky, and apply it to the Dark Energy Survey (DES) to map the observing conditions of the Science Verification (SV) data. The resulting distributions and maps of sources of systematics are used in several analyses of DES SV to perform detailed null tests with the data, and also to incorporate systematics in survey simulations. Wemore » illustrate the complementarity of these two approaches by comparing the SV data with the BCC-UFig, a synthetic sky catalogue generated by forward-modelling of the DES SV images. We then analyse the BCC-UFig simulation to construct galaxy samples mimicking those used in SV galaxy clustering studies. We show that the spatially-varying survey depth imprinted in the observed galaxy densities and the redshift distributions of the SV data are successfully reproduced by the simulation and well-captured by the maps of observing conditions. The combined use of the maps, the SV data and the BCC-UFig simulation allows us to quantify the impact of spatial systematics on N(z), the redshift distributions inferred using photometric redshifts. We conclude that spatial systematics in the SV data are mainly due to seeing fluctuations and are under control in current clustering and weak lensing analyses. However, they will need to be carefully characterised in upcoming phases of DES in order to avoid biasing the inferred cosmological results. 
The framework presented is relevant to all multi-epoch surveys, and will be essential for exploiting future surveys such as the Large Synoptic Survey Telescope, which will require detailed null-tests and realistic end-to-end image simulations to correctly interpret the deep, high-cadence observations of the sky.
Baumketner, Andrij
2009-01-01
The performance of reaction-field methods to treat electrostatic interactions is tested in simulations of ions solvated in water. The potentials of mean force between a sodium chloride ion pair and between the side chains of lysine and aspartate are computed using umbrella sampling and molecular dynamics simulations. It is found that, in comparison with lattice sum calculations, the charge-group-based approaches to reaction-field treatments produce a large error in the association energy of the ions that exhibits strong systematic dependence on the size of the simulation box. The atom-based implementation of the reaction field is seen to (i) improve the overall quality of the potential of mean force and (ii) remove the dependence on the size of the simulation box. It is suggested that the atom-based truncation be used in reaction-field simulations of mixed media. PMID:19292522
Wei, Zhengxian; Song, Min; Yin, Guisheng; Wang, Hongbin; Ma, Xuefei; Song, Houbing
2017-07-12
Underwater wireless sensor networks (UWSNs) have become a hot new research area. However, owing to the dynamics of their operation and the harsh ocean environment, obtaining a UWSN with the best systematic performance while deploying as few sensor nodes as possible and setting up self-adaptive networking is an urgent problem. Consequently, sensor deployment, networking, and performance calculation of UWSNs are challenging issues; the study in this paper centers on this topic, and three relevant methods and models are put forward. First, the normal body-centered cubic lattice is improved to a cross body-centered cubic lattice (CBCL), and a deployment process and topology generation method are built. Second, and most importantly, a cross deployment networking method (CDNM) for UWSNs suitable for the underwater environment is proposed. Furthermore, a systematic quad-performance calculation model (SQPCM) is proposed from an integrated perspective, in which the systematic performance of a UWSN comprises coverage, connectivity, durability and rapid-reactivity. Measurement models are then established based on the relationship between systematic performance and influencing parameters. Finally, the influencing parameters are divided into three types, namely constraint parameters, device performance and networking parameters. Based on these, a networking parameters adjustment method (NPAM) for optimized systematic performance of UWSNs is presented. The simulation results demonstrate that the approach proposed in this paper is feasible and efficient for the networking and performance calculation of UWSNs.
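The abstract does not detail the cross body-centered cubic construction, but the baseline it improves on, a normal body-centered cubic deployment lattice, is simple to sketch: cell-corner nodes plus one node at each cell centre. The function name `bcc_nodes` and the unit spacing are illustrative assumptions, not from the paper:

```python
def bcc_nodes(nx, ny, nz, a=1.0):
    """Node coordinates for an nx-by-ny-by-nz body-centered cubic
    deployment lattice with cell spacing a: all cell-corner nodes
    plus one extra node at the centre of every cell."""
    corners = [(i * a, j * a, k * a)
               for i in range(nx + 1)
               for j in range(ny + 1)
               for k in range(nz + 1)]
    centres = [((i + 0.5) * a, (j + 0.5) * a, (k + 0.5) * a)
               for i in range(nx)
               for j in range(ny)
               for k in range(nz)]
    return corners + centres
```

A single cell yields 9 nodes (8 corners plus 1 centre); the centre nodes are what distinguish the BCC layout from a plain cubic grid and improve coverage per node deployed.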
Evaluation of DNA Force Fields in Implicit Solvation
Gaillard, Thomas; Case, David A.
2011-01-01
DNA structural deformations and dynamics are crucial to its interactions in the cell. Theoretical simulations are essential tools to explore the structure, dynamics, and thermodynamics of biomolecules in a systematic way. Molecular mechanics force fields for DNA have benefited from constant improvements during the last decades. Several studies have evaluated and compared available force fields when the solvent is modeled by explicit molecules. On the other hand, few systematic studies have assessed the quality of duplex DNA models when implicit solvation is employed. The appeal of implicit solvent modeling lies in the substantial gain in simulation performance and conformational sampling speed. In this study, the respective influences of the force field and the implicit solvation model choice on DNA simulation quality are evaluated. To this end, extensive implicit solvent duplex DNA simulations are performed, attempting to reach both conformational and sequence diversity convergence. Structural parameters are extracted from simulations and statistically compared to available experimental and explicit solvation simulation data. Our results quantitatively expose the respective strengths and weaknesses of the different DNA force fields and implicit solvation models studied. This work can lead to the suggestion of improvements to current DNA theoretical models. PMID:22043178
HYDROLOGIC MODEL CALIBRATION AND UNCERTAINTY IN SCENARIO ANALYSIS
A systematic analysis of model performance during simulations based on observed land-cover/use change is used to quantify error associated with water-yield simulations for a series of known landscape conditions over a 24-year period with the goal of evaluatin...
EVALUATION OF THE AGDISP AERIAL SPRAY ALGORITHMS IN THE AGDRIFT MODEL
A systematic evaluation of the AgDISP algorithms, which simulate off-site drift and deposition of aerially applied pesticides, contained in the AgDRIFT model was performed by comparing model simulations to field-trial data collected by the Spray Drift Task Force. Field-trial data...
INTEGRATING DATA ANALYTICS AND SIMULATION METHODS TO SUPPORT MANUFACTURING DECISION MAKING
Kibira, Deogratias; Hatim, Qais; Kumara, Soundar; Shao, Guodong
2017-01-01
Modern manufacturing systems are installed with smart devices such as sensors that monitor system performance and collect data to manage uncertainties in their operations. However, multiple parameters and variables affect system performance, making it impossible for a human to make informed decisions without systematic methodologies and tools. Further, the large volume and variety of streaming data collected are beyond simulation analysis alone. Simulation models are run with well-prepared data. Novel approaches, combining different methods, are needed to use this data for making guided decisions. This paper proposes a methodology whereby parameters that most affect system performance are extracted from the data using data analytics methods. These parameters are used to develop scenarios for simulation inputs; system optimizations are performed on simulation data outputs. A case study of a machine shop demonstrates the proposed methodology. This paper also reviews candidate standards for data collection, simulation, and systems interfaces. PMID:28690363
Compositional symbol grounding for motor patterns.
Greco, Alberto; Caneva, Claudio
2010-01-01
We developed a new experimental and simulative paradigm to study the establishment of compositional grounded representations for motor patterns. Participants learned to associate non-sense arm motor patterns, performed in three different hand postures, with non-sense words. There were two group conditions: in the first (compositional), each pattern was associated with a two-word (verb-adverb) sentence; in the second (holistic), each same pattern was associated with a unique word. Two experiments were performed. In the first, motor pattern recognition and naming were tested in the two conditions. Results showed that verbal compositionality had no role in recognition and that the main source of confusability in this task came from discriminating hand postures. As the naming task proved too difficult, some changes in the learning procedure were implemented in the second experiment. In this experiment, the compositional group achieved better results in naming motor patterns, especially for patterns where hand posture discrimination was relevant. In order to ascertain the differential effect, upon this result, of memory load and of systematic grounding, neural network simulations were also made. After a basic simulation that served as a good model of subjects' performance, in subsequent simulations the number of stimuli (motor patterns and words) was increased and the systematic association between words and patterns was disrupted, while keeping the same number of words and syntax. Results showed that in both conditions the advantage for the compositional condition significantly increased. These simulations showed that the advantage for this condition may be more related to systematicity than to the mere informational gain. All results are discussed in connection with the possible support of the hypothesis of a compositional motor representation and toward a more precise explanation of the factors that make compositional representations work.
CALIBRATED ULTRA FAST IMAGE SIMULATIONS FOR THE DARK ENERGY SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruderer, Claudio; Chang, Chihway; Refregier, Alexandre
2016-01-20
Image simulations are becoming increasingly important in understanding the measurement process of the shapes of galaxies for weak lensing and the associated systematic effects. For this purpose we present the first implementation of the Monte Carlo Control Loops (MCCL), a coherent framework for studying systematic effects in weak lensing. It allows us to model and calibrate the shear measurement process using image simulations from the Ultra Fast Image Generator (UFig) and the image analysis software SExtractor. We apply this framework to a subset of the data taken during the Science Verification period (SV) of the Dark Energy Survey (DES). We calibrate the UFig simulations to be statistically consistent with one of the SV images, which covers ∼0.5 square degrees. We then perform tolerance analyses by perturbing six simulation parameters and study their impact on the shear measurement at the one-point level. This allows us to determine the relative importance of different parameters. For spatially constant systematic errors and point-spread function, the calibration of the simulation reaches the weak lensing precision needed for the DES SV survey area. Furthermore, we find a sensitivity of the shear measurement to the intrinsic ellipticity distribution, and an interplay between the magnitude-size and the pixel value diagnostics in constraining the noise model. This work is the first application of the MCCL framework to data and shows how it can be used to methodically study the impact of systematics on the cosmic shear measurement.
Assessment of simulation fidelity using measurements of piloting technique in flight
NASA Technical Reports Server (NTRS)
Ferguson, S. W.; Clement, W. F.; Cleveland, W. B.; Key, D. L.
1984-01-01
The U.S. Army and NASA have undertaken the systematic validation of a ground-based piloted simulator for the UH-60A helicopter. The results of previous handling quality and task performance flight tests for this helicopter have been used as a data base for evaluating the fidelity of the present simulation, which is being conducted at the NASA Ames Research Center's Vertical Motion Simulator. Such nap-of-the-earth piloting tasks as pop-up, hover turn, dash/quick stop, sidestep, dolphin, and slalom have been investigated. It is noted that pilot performance in the simulator is significantly and quantifiably degraded in comparison with flight test results for the same tasks.
Neurosurgery simulation in residency training: feasibility, cost, and educational benefit.
Gasco, Jaime; Holbrook, Thomas J; Patel, Achal; Smith, Adrian; Paulson, David; Muns, Alan; Desai, Sohum; Moisi, Marc; Kuo, Yong-Fan; Macdonald, Bart; Ortega-Barnett, Juan; Patterson, Joel T
2013-10-01
The effort required to introduce simulation in neurosurgery academic programs and the benefits perceived by residents have not been systematically assessed. To create a neurosurgery simulation curriculum encompassing basic and advanced skills, cadaveric dissection, cranial and spine surgery simulation, and endovascular and computerized haptic training. A curriculum with 68 core exercises per academic year was distributed in individualized sets of 30 simulations to 6 neurosurgery residents. The total number of procedures completed during the academic year was set to 180. The curriculum includes 79 simulations with physical models, 57 cadaver dissections, and 44 haptic/computerized sessions. Likert-type evaluations regarding self-perceived performance were completed after each exercise. Subject identification was blinded to junior (postgraduate years 1-3) or senior resident (postgraduate years 4-6). Wilcoxon rank testing was used to detect differences within and between groups. One hundred eighty procedures and surveys were analyzed. Junior residents reported proficiency improvements in 82% of simulations performed (P < .001). Senior residents reported improvement in 42.5% of simulations (P < .001). Cadaver simulations accrued the highest reported benefit (71.5%; P < .001), followed by physical simulators (63.8%; P < .001) and haptic/computerized sessions (59.1%; P < .001). The initial cost was $341,978.00, with $27,876.36 in annual operational expenses. The systematic implementation of a simulation curriculum in a neurosurgery training program is feasible, is favorably regarded, and has a positive impact on trainees of all levels, particularly in junior years. All simulation forms, cadaver, physical, and haptic/computerized, have a role in different stages of learning and should be considered in the development of an educational simulation program.
Assessment of simulation fidelity using measurements of piloting technique in flight
NASA Technical Reports Server (NTRS)
Clement, W. F.; Cleveland, W. B.; Key, D. L.
1984-01-01
The U.S. Army and NASA joined together on a project to conduct a systematic investigation and validation of a ground based piloted simulation of the Army/Sikorsky UH-60A helicopter. Flight testing was an integral part of the validation effort. Nap-of-the-Earth (NOE) piloting tasks which were investigated included the bob-up, the hover turn, the dash/quickstop, the sidestep, the dolphin, and the slalom. Results from the simulation indicate that the pilot's NOE task performance in the simulator is noticeably and quantifiably degraded when compared with the task performance results generated in flight test. The results of the flight test and ground based simulation experiments support a unique rationale for the assessment of simulation fidelity: flight simulation fidelity should be judged quantitatively by measuring pilot's control strategy and technique as induced by the simulator. A quantitative comparison is offered between the piloting technique observed in a flight simulator and that observed in flight test for the same tasks performed by the same pilots.
Lui, Justin T; Hoy, Monica Y
2017-06-01
Background The increasing prevalence of virtual reality simulation in temporal bone surgery warrants an investigation to assess training effectiveness. Objectives To determine if temporal bone simulator use improves mastoidectomy performance. Data Sources Ovid Medline, Embase, and PubMed databases were systematically searched per the PRISMA guidelines. Review Methods Inclusion criteria were peer-reviewed publications that utilized quantitative data of mastoidectomy performance following the use of a temporal bone simulator. The search was restricted to human studies published in English. Studies were excluded if they were in non-peer-reviewed format, were descriptive in nature, or failed to provide surgical performance outcomes. Meta-analysis calculations were then performed. Results A meta-analysis based on the random-effects model revealed an improvement in overall mastoidectomy performance following training on the temporal bone simulator. A standardized mean difference of 0.87 (95% CI, 0.38-1.35) was generated in the setting of a heterogeneous study population (I² = 64.3%, P < .006). Conclusion In the context of a diverse population of virtual reality simulation temporal bone surgery studies, meta-analysis calculations demonstrate an improvement in trainee mastoidectomy performance with virtual simulation training.
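The pooled standardized mean difference and I² statistic reported above come from a random-effects meta-analysis. A minimal sketch of the DerSimonian and Laird procedure commonly used for such pooling follows; the function name and the effect sizes in the usage note are illustrative assumptions, not the review's data:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study standardized mean differences under a
    DerSimonian-Laird random-effects model; returns the pooled
    effect, its 95% CI, and the I-squared heterogeneity statistic."""
    k = len(effects)
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fixed = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, effects))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                            # between-study variance
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    w_star = [1.0 / (v + tau2) for v in variances]                # random-effects weights
    pooled = sum(wi * d for wi, d in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2
```

With two hypothetical studies (SMDs 0.5 and 1.2, each with variance 0.1), this yields a pooled SMD of 0.85 with I² near 59%, illustrating how between-study heterogeneity inflates τ² and widens the confidence interval.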
An automated procedure for developing hybrid computer simulations of turbofan engines
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Krosel, S. M.
1980-01-01
A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all of the calculations and data manipulations needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design point information. A test case is described and comparisons between hybrid simulation and specified engine performance data are presented.
Parallel-plate transmission line type of EMP simulators: Systematic review and recommendations
NASA Astrophysics Data System (ADS)
Giri, D. V.; Liu, T. K.; Tesche, F. M.; King, R. W. P.
1980-05-01
This report presents various aspects of the two-parallel-plate transmission line type of EMP simulator. Much of the work is the result of research efforts conducted during the last two decades at the Air Force Weapons Laboratory, as well as in industry and universities. The principal features of individual simulator components are discussed. The report also emphasizes that it is imperative to combine our understanding of the individual components so that meaningful conclusions can be drawn about simulator performance as a whole.
McGarvey, Richard; Burch, Paul; Matthews, Janet M
2016-01-01
Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. First, we compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators that corrected for inter-transect correlation, ν₈ and ν(W), were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys.
Similar variance estimator performance rankings were found with a second, differently generated set of spatial point populations, ν₈ and ν(W) again being the best performers in the longer-range autocorrelated populations. However, no systematic variance estimators tested were free from bias. On balance, systematic designs bring narrower confidence intervals in clustered populations, while random designs permit unbiased estimates of (often wider) confidence intervals. The search continues for better estimators of sampling variance for the systematic survey mean.
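The precision advantage of systematic over random transect allocation in patchy populations can be reproduced in miniature. The sketch below is a hedged one-dimensional analogue of the study's two-dimensional simulations, not its actual code; the population layout, patch values, and replicate count are illustrative assumptions:

```python
import random

def survey_mean(pop, n, systematic):
    """Mean count from n transects drawn either at random or on a
    one-start aligned systematic grid with a random start."""
    N = len(pop)
    if systematic:
        step = N // n
        start = random.randrange(step)
        idx = [start + i * step for i in range(n)]
    else:
        idx = random.sample(range(N), n)
    return sum(pop[i] for i in idx) / n

def replicate_variance(pop, n, systematic, reps=2000):
    """True variance of the survey mean, estimated over many replicate surveys."""
    means = [survey_mean(pop, n, systematic) for _ in range(reps)]
    mu = sum(means) / reps
    return sum((m - mu) ** 2 for m in means) / reps

random.seed(1)
# Patchy population: organisms confined to two habitat patches on a 100-cell line.
pop = [0] * 100
for centre in (20, 70):
    for i in range(centre - 5, centre + 5):
        pop[i] = random.randint(5, 15)

v_rand = replicate_variance(pop, 10, systematic=False)
v_sys = replicate_variance(pop, 10, systematic=True)
```

Because each systematic pass hits every patch the same number of times regardless of the random start, `v_sys` comes out well below `v_rand`, mirroring the paper's finding that habitat patchiness alone suffices for the systematic design's advantage.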
NASA Astrophysics Data System (ADS)
Gatti, M.; Vielzeuf, P.; Davis, C.; Cawthon, R.; Rau, M. M.; DeRose, J.; De Vicente, J.; Alarcon, A.; Rozo, E.; Gaztanaga, E.; Hoyle, B.; Miquel, R.; Bernstein, G. M.; Bonnett, C.; Carnero Rosell, A.; Castander, F. J.; Chang, C.; da Costa, L. N.; Gruen, D.; Gschwend, J.; Hartley, W. G.; Lin, H.; MacCrann, N.; Maia, M. A. G.; Ogando, R. L. C.; Roodman, A.; Sevilla-Noarbe, I.; Troxel, M. A.; Wechsler, R. H.; Asorey, J.; Davis, T. M.; Glazebrook, K.; Hinton, S. R.; Lewis, G.; Lidman, C.; Macaulay, E.; Möller, A.; O'Neill, C. R.; Sommer, N. E.; Uddin, S. A.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Allam, S.; Annis, J.; Bechtol, K.; Brooks, D.; Burke, D. L.; Carollo, D.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; DePoy, D. L.; Desai, S.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Hoormann, J. K.; Jain, B.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Li, T. S.; Lima, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Reil, K.; Rykoff, E. S.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sheldon, E.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, B. E.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.
2018-06-01
We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing source galaxies from the Dark Energy Survey Year 1 sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We apply the method to two photo-z codes run in our simulated data: Bayesian Photometric Redshift and Directional Neighbourhood Fitting. We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering versus photo-zs. The systematic uncertainty in the mean redshift bias of the source galaxy sample is Δz ≲ 0.02, though the precise value depends on the redshift bin under consideration. We discuss possible ways to mitigate the impact of our dominant systematics in future analyses.
Systematic analysis of signaling pathways using an integrative environment.
Visvanathan, Mahesh; Breit, Marc; Pfeifer, Bernhard; Baumgartner, Christian; Modre-Osprian, Robert; Tilg, Bernhard
2007-01-01
Understanding the biological processes of signaling pathways as a whole system requires an integrative software environment with comprehensive capabilities. The environment should include tools for pathway design, visualization, and simulation, together with a knowledge base concerning signaling pathways, in a single system. In this paper we introduce a new integrative environment for the systematic analysis of signaling pathways. This system includes environments for pathway design, visualization, and simulation, plus a knowledge base combining biological and modeling information concerning signaling pathways, providing a basic understanding of the biological system, its structure and its functioning. The system is designed with a client-server architecture. It contains a pathway designing environment and a simulation environment as upper layers, with a relational knowledge base as the underlying layer. The TNFα-mediated NF-κB signal transduction pathway model was designed and tested using our integrative framework. It was also useful for defining the structure of the knowledge base. Sensitivity analysis of this specific pathway was performed, providing simulation data. The model was then extended, showing promising initial results. The proposed system offers a holistic view of pathways containing biological and modeling data. It will help us to perform biological interpretation of the simulation results and thus contribute to a better understanding of the biological system for drug identification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Blanc, Katya Lee; Spielman, Zachary Alexander; Rice, Brandon Charles
2016-04-01
This report describes the installation of two advanced control room technologies, an advanced alarm system and a computerized procedure system, into the Human Systems Simulation Laboratory (HSSL). Installation of these technologies enables future phases of this research by providing a platform to systematically evaluate the effect of these technologies on operator and plant performance.
NASA Technical Reports Server (NTRS)
Ahn, Kyung H.
1994-01-01
The RNG-based algebraic turbulence model, with a new method of solving the cubic equation and applying new length scales, is introduced. An analysis is made of the RNG length scale which was previously reported and the resulting eddy viscosity is compared with those from other algebraic turbulence models. Subsequently, a new length scale is introduced which actually uses the two previous RNG length scales in a systematic way to improve the model performance. The performance of the present RNG model is demonstrated by simulating the boundary layer flow over a flat plate and the flow over an airfoil.
Sokolenko, Stanislav; Aucoin, Marc G
2015-09-04
The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. 
The presented algorithm was able to identify systematic errors as small as 2.5% under a wide range of conditions. Both the simulation framework and error correction method represent examples of time-course analysis that can be applied to further developments in ¹H-NMR methodology and the more general application of quantitative metabolomics.
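The detection step described above (smooth every metabolite trend, compute percent deviations, and flag timepoints where the median deviation across all trends exceeds a threshold) can be sketched as follows. A centred moving average stands in for the paper's nonparametric smoother, and the function names are hypothetical; the 2.5% default threshold echoes the sensitivity quoted in the abstract:

```python
from statistics import median

def moving_average(y, w=3):
    """Centred moving average as a minimal stand-in for a
    nonparametric smoother (window shrinks at the edges)."""
    half = w // 2
    return [sum(y[max(0, i - half):i + half + 1]) /
            len(y[max(0, i - half):i + half + 1]) for i in range(len(y))]

def flag_systematic_errors(trends, threshold=0.025, w=3):
    """Flag interior timepoints where the median percent deviation from
    the smooth, taken across all metabolite trends at once, exceeds the
    threshold: a deviation shared by every metabolite suggests a
    sample-wide (e.g. dilution) error rather than random noise."""
    half = w // 2
    deviations = []
    for y in trends:
        smooth = moving_average(y, w)
        deviations.append([(yi - si) / si for yi, si in zip(y, smooth)])
    n_time = len(trends[0])
    return [t for t in range(half, n_time - half)
            if abs(median(dev[t] for dev in deviations)) > threshold]
```

For two linear trends that both carry a +5% spike at the fourth timepoint, e.g. `[10, 12, 14, 16.8, 18, 20]` and `[5, 6, 7, 8.4, 9, 10]`, the shared deviation at index 3 exceeds the threshold while the unperturbed timepoints do not.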
Application of the Systematic Sensor Selection Strategy for Turbofan Engine Diagnostics
NASA Technical Reports Server (NTRS)
Sowers, T. Shane; Kopasakis, George; Simon, Donald L.
2008-01-01
The data acquired from available system sensors forms the foundation upon which any health management system is based, and the available sensor suite directly impacts the overall diagnostic performance that can be achieved. While additional sensors may provide improved fault diagnostic performance, there are other factors that also need to be considered such as instrumentation cost, weight, and reliability. A systematic sensor selection approach is desired to perform sensor selection from a holistic system-level perspective as opposed to performing decisions in an ad hoc or heuristic fashion. The Systematic Sensor Selection Strategy is a methodology that optimally selects a sensor suite from a pool of sensors based on the system fault diagnostic approach, with the ability of taking cost, weight, and reliability into consideration. This procedure was applied to a large commercial turbofan engine simulation. In this initial study, sensor suites tailored for improved diagnostic performance are constructed from a prescribed collection of candidate sensors. The diagnostic performance of the best performing sensor suites in terms of fault detection and identification are demonstrated, with a discussion of the results and implications for future research.
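The Systematic Sensor Selection Strategy itself performs an optimal, system-level search; as a simplified, hypothetical illustration of the trade-off it weighs, the sketch below greedily picks sensors by marginal fault-coverage gain per unit cost under a budget. The sensor names, fault sets, and greedy rule are invented for the example and are not the published method:

```python
def select_sensors(candidates, budget):
    """Greedy sketch: repeatedly pick the affordable sensor with the best
    marginal fault-coverage gain per unit cost until no pick remains.
    candidates maps sensor name -> (cost, set of detectable faults)."""
    chosen, covered, spent = [], set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, (cost, faults) in candidates.items():
            if name in chosen or spent + cost > budget:
                continue
            gain = len(faults - covered)          # newly covered faults
            if gain and gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:
            return chosen, covered
        chosen.append(best)
        cost, faults = candidates[best]
        spent += cost
        covered |= faults
```

For instance, with hypothetical candidates {"EGT": (1.0, {"turbine"}), "N1": (1.0, {"fan", "compressor"}), "P3": (2.0, {"compressor"})} and a budget of 2.0, the sketch selects N1 then EGT, covering all three fault classes without exceeding the budget.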
A Closed Mars Analog Simulation: The Approach of Crew 5 At the Mars Desert Research Station
NASA Technical Reports Server (NTRS)
Clancey, William J.; Koga, Dennis (Technical Monitor)
2002-01-01
For twelve days in April 2002 we performed a closed simulation in the Mars Desert Research Station, isolated from other people, as on Mars, while performing systematic surface exploration and life support chores. Email provided our only means of contact; no phone or radio conversations were possible. All mission-related messages were mediated by a remote mission support team. This protocol enabled a systematic and controlled study of crew activities, scheduling, and use of space. The analysis presented here focuses on two questions: Where did the time go, and why did people feel rushed and unable to complete their work? How can we measure and model productivity, to compare habitat designs, schedules, roles, and tools? Analysis suggests that a simple scheduling change (having lunch and dinner earlier, plus eliminating afternoon meetings) increased the available productive time by 41%.
Effectiveness of Virtual Reality Training in Orthopaedic Surgery.
Aïm, Florence; Lonjon, Guillaume; Hannouche, Didier; Nizard, Rémy
2016-01-01
The purpose of this study was to conduct a systematic review to determine the effectiveness of virtual reality (VR) training in orthopaedic surgery. A comprehensive systematic review was performed of articles of VR training in orthopaedic surgery published up to November 2014 from MEDLINE, EMBASE, and the Cochrane Central Register of Controlled Trials databases. We included 10 relevant trials of 91 identified articles, which all reported on training in arthroscopic surgery (shoulder, n = 5; knee, n = 4; undefined, n = 1). A total of 303 participants were involved. Assessment after training was made on a simulator in 9 of the 10 studies, and in one study it took place in the operating room (OR) on a real patient. A total of 32 different outcomes were extracted; 29 of them were about skills assessment. None involved a patient-related outcome. One study focused on anatomic learning, and the other evaluated technical task performance before and after training on a VR simulator. Five studies established construct validity. Three studies reported a statistically significant improvement in technical skills after training on a VR simulator. VR training leads to an improvement of technical skills in orthopaedic surgery. Before its widespread use, additional trials are needed to clarify the transfer of VR training to the OR. Systematic review of Level I through Level IV studies. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Application of radiosonde data to VERITAS simulations
NASA Astrophysics Data System (ADS)
Daniel, M. K.
The atmosphere is a vital component of the detector in an atmospheric Cherenkov telescope. In order to understand observations from these instruments and reduce systematic uncertainties and biases in their data it is important to correctly model the atmosphere in simulations of the extensive air showers they detect. The Very High Energy Telescope Array (VERITAS) is a system of 4 such telescopes located at the Whipple Observatory in Southern Arizona. Daily radiosonde measurements from the nearby Tucson airport allow an accurate model of the atmosphere for the VERITAS experiment to be constructed. Comparison of the radiosonde data to existing atmospheric models is performed and the expected effects on the systematic uncertainties are summarised here.
Monte Carlo simulation of x-ray spectra in diagnostic radiology and mammography using MCNP4C
NASA Astrophysics Data System (ADS)
Ay, M. R.; Shahriari, M.; Sarkar, S.; Adib, M.; Zaidi, H.
2004-11-01
The general purpose Monte Carlo N-particle radiation transport computer code (MCNP4C) was used for the simulation of x-ray spectra in diagnostic radiology and mammography. The electrons were transported until they slowed down and stopped in the target. Both bremsstrahlung and characteristic x-ray production were considered in this work. We focus on the simulation of various target/filter combinations to investigate the effect of tube voltage, target material and filter thickness on x-ray spectra in the diagnostic radiology and mammography energy ranges. The simulated x-ray spectra were compared with experimental measurements and spectra calculated by IPEM report number 78. In addition, the anode heel effect and off-axis x-ray spectra were assessed for different anode angles and target materials and the results were compared with EGS4-based Monte Carlo simulations and measured data. Quantitative evaluation of the differences between our Monte Carlo simulated and comparison spectra was performed using Student's t-test statistical analysis. Generally, there is good agreement between the simulated and comparison spectra, although there are systematic differences, especially in the K-characteristic x-ray intensity. Nevertheless, no statistically significant differences have been observed between IPEM spectra and the simulated spectra. It has been shown that the difference between MCNP-simulated spectra and IPEM spectra in the low energy range is the result of the overestimation of characteristic photons following the normalization procedure. The transmission curves produced by MCNP4C show good agreement with the IPEM report, especially for tube voltages of 50 kV and 80 kV. The systematic discrepancy for higher tube voltages is the result of systematic differences between the corresponding spectra.
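The spectral comparison above relies on Student's t-test. A minimal sketch of a paired t statistic over energy-binned intensities follows; the binned counts are invented for illustration, not the paper's measured spectra.

```python
import math

def paired_t(sim, ref):
    """Paired Student's t statistic over per-energy-bin intensities."""
    n = len(sim)
    diffs = [s - r for s, r in zip(sim, ref)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1              # (t, degrees of freedom)

# Hypothetical binned photon counts for a simulated and a reference spectrum.
simulated = [10.2, 35.1, 60.3, 48.0, 22.4, 8.1]
reference = [10.0, 34.8, 61.0, 47.5, 22.0, 8.3]

t, dof = paired_t(simulated, reference)
```

A small |t| relative to the critical value at the given degrees of freedom indicates no statistically significant difference between the spectra, as the study reports for its IPEM comparison.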
Transonic aerodynamic design experience
NASA Technical Reports Server (NTRS)
Bonner, E.
1989-01-01
Advancements have occurred in transonic numerical simulation that place aerodynamic performance design into a relatively well developed status. Efficient broad band operating characteristics can be reliably developed at the conceptual design level. Recent aeroelastic and separated flow simulation results indicate that systematic consideration of an increased range of design problems appears promising. This emerging capability addresses static and dynamic structural/aerodynamic coupling and nonlinearities associated with viscous dominated flows.
NASA Astrophysics Data System (ADS)
Møller, Søren H.; Vester-Petersen, Joakim; Nazir, Adnan; Eriksen, Emil H.; Julsgaard, Brian; Madsen, Søren P.; Balling, Peter
2018-02-01
Quantitative measurements of the electric near-field distribution of star-shaped gold nanoparticles have been performed by femtosecond laser ablation. Measurements were carried out on and off the plasmon resonance. A detailed comparison with numerical simulations of the electric fields is presented. Semi-quantitative agreement is found, with slight systematic differences between experimentally observed and simulated near-field patterns close to strong electric-field gradients. The deviations are attributed to carrier transport preceding ablation.
Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors
NASA Astrophysics Data System (ADS)
Kim, J.; Waliser, Duane E.; Mattmann, Chris A.; Goodale, Cameron E.; Hart, Andrew F.; Zimdars, Paul A.; Crichton, Daniel J.; Jones, Colin; Nikulin, Grigory; Hewitson, Bruce; Jack, Chris; Lennard, Christopher; Favre, Alice
2014-03-01
Monthly-mean precipitation, mean (TAVG), maximum (TMAX) and minimum (TMIN) surface air temperatures, and cloudiness from the CORDEX-Africa regional climate model (RCM) hindcast experiment are evaluated for model skill and systematic biases. All RCMs simulate basic climatological features of these variables reasonably, but systematic biases also occur across these models. All RCMs show higher fidelity in simulating precipitation for the west part of Africa than for the east part, and for the tropics than for northern Sahara. Interannual variation in the wet season rainfall is better simulated for the western Sahel than for the Ethiopian Highlands. RCM skill is higher for TAVG and TMAX than for TMIN, and regionally, for the subtropics than for the tropics. RCM skill in simulating cloudiness is generally lower than for precipitation or temperatures. For all variables, multi-model ensemble (ENS) generally outperforms individual models included in ENS. An overarching conclusion in this study is that some model biases vary systematically for regions, variables, and metrics, posing difficulties in defining a single representative index to measure model fidelity, especially for constructing ENS. This is an important concern in climate change impact assessment studies because most assessment models are run for specific regions/sectors with forcing data derived from model outputs. Thus, model evaluation and ENS construction must be performed separately for regions, variables, and metrics as required by specific analysis and/or assessments. Evaluations using multiple reference datasets reveal that cross-examination, quality control, and uncertainty estimates of reference data are crucial in model evaluations.
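The finding that the multi-model ensemble generally outperforms individual RCMs follows from error cancellation when model biases differ in sign. A toy illustration with invented precipitation values (not CORDEX output):

```python
import math

def rmse(sim, obs):
    """Root-mean-square error of a simulated series against observations."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

# Invented monthly precipitation (mm/day) at one grid cell.
obs = [2.0, 3.5, 5.0, 4.0, 2.5]
models = {
    "RCM-A": [2.6, 3.0, 5.8, 3.2, 3.1],  # one pattern of errors
    "RCM-B": [1.2, 4.1, 4.2, 4.9, 1.8],  # largely opposite errors
}

# Multi-model ensemble mean (ENS): average the models month by month.
ens = [sum(vals) / len(models) for vals in zip(*models.values())]

scores = {name: rmse(sim, obs) for name, sim in models.items()}
scores["ENS"] = rmse(ens, obs)
```

Because the two hypothetical models err in opposite directions, their mean tracks the observations more closely than either model alone; when biases share a sign, the ensemble advantage shrinks, which is one reason ensemble construction must be evaluated per region and variable.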
Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L
2017-01-01
The regulatory Community Multiscale Air Quality (CMAQ) model is a means to understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily Particulate Matter ≤ 2.5 micrometers (PM2.5) concentrations across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error, with only a minority being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing underlying causes of systematic error will have the added benefit of also addressing underlying causes of random error.
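The advantage of a regionalized correction over a constant domain-wide one can be sketched as follows. The concentrations and the two-region split are invented for illustration and are far simpler than RAMP's per-grid, non-linear, non-homoscedastic evaluation.

```python
def mse(pred, obs):
    """Mean square error of corrected predictions against observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

# Hypothetical daily PM2.5 (ug/m3): the model overpredicts in region A and
# underpredicts in region B, so no single domain-wide shift can fix both.
obs    = [8.0, 9.0, 10.0, 14.0, 15.0, 16.0]
cmaq   = [10.0, 11.5, 12.0, 11.0, 12.5, 13.0]
region = ["A", "A", "A", "B", "B", "B"]

# Constant correction: subtract the single domain-wide mean bias everywhere.
bias = sum(c - o for c, o in zip(cmaq, obs)) / len(obs)
constant = [c - bias for c in cmaq]

# Regionalized correction: one mean bias per region, a crude stand-in for
# RAMP's location-specific performance evaluation.
local = list(cmaq)
for r in set(region):
    idx = [i for i, g in enumerate(region) if g == r]
    rb = sum(cmaq[i] - obs[i] for i in idx) / len(idx)
    for i in idx:
        local[i] = cmaq[i] - rb
```

Because the regional biases have opposite signs, the domain-wide correction barely helps, while the regionalized one removes most of the systematic error.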
Cognitive performance deficits in a simulated climb of Mount Everest - Operation Everest II
NASA Technical Reports Server (NTRS)
Kennedy, R. S.; Dunlap, W. P.; Banderet, L. E.; Smith, M. G.; Houston, C. S.
1989-01-01
Cognitive function at simulated altitude was investigated in a repeated-measures within-subject study of performance by seven volunteers in a hypobaric chamber, in which atmospheric pressure was systematically lowered over a period of 40 d to finally reach a pressure equivalent to 8845 m, the approximate height of Mount Everest. The automated performance test system employed compact computer design; automated test administrations, data storage, and retrieval; psychometric properties of stability and reliability; and factorial richness. Significant impairments of cognitive function were seen for three of the five tests in the battery; on two tests, grammatical reasoning and pattern comparison, every subject showed a substantial decrement.
NASA Astrophysics Data System (ADS)
Pierson, Kyle D.; Hochhalter, Jacob D.; Spear, Ashley D.
2018-05-01
Systematic correlation analysis was performed between simulated micromechanical fields in an uncracked polycrystal and the known path of an eventual fatigue-crack surface based on experimental observation. Concurrent multiscale finite-element simulation of cyclic loading was performed using a high-fidelity representation of grain structure obtained from near-field high-energy x-ray diffraction microscopy measurements. An algorithm was developed to parameterize and systematically correlate the three-dimensional (3D) micromechanical fields from simulation with the 3D fatigue-failure surface from experiment. For comparison, correlation coefficients were also computed between the micromechanical fields and hypothetical, alternative surfaces. The correlation of the fields with hypothetical surfaces was found to be consistently weaker than that with the known crack surface, suggesting that the micromechanical fields of the cyclically loaded, uncracked microstructure might provide some degree of predictiveness for microstructurally small fatigue-crack paths, although the extent of such predictiveness remains to be tested. In general, gradients of the field variables exhibit stronger correlations with crack path than the field variables themselves. Results from the data-driven approach implemented here can be leveraged in future model development for prediction of fatigue-failure surfaces (for example, to facilitate univariate feature selection required by convolution-based models).
Teamwork Assessment Tools in Obstetric Emergencies: A Systematic Review.
Onwochei, Desire N; Halpern, Stephen; Balki, Mrinalini
2017-06-01
Team-based training and simulation can improve patient safety by improving communication, decision making, and performance of team members. Currently, there is no general consensus on whether or not a specific assessment tool is better adapted to evaluate teamwork in obstetric emergencies. The purpose of this qualitative systematic review was to find the tools available to assess team effectiveness in obstetric emergencies. We searched Embase, Medline, PubMed, Web of Science, PsycINFO, CINAHL, and Google Scholar for prospective studies that evaluated nontechnical skills in multidisciplinary teams involving obstetric emergencies. The search included studies from 1944 until January 11, 2016. Data on reliability and validity measures were collected and used for interpretation. A descriptive analysis was performed on the data. Thirteen studies were included in the final qualitative synthesis. All the studies assessed teams in the context of obstetric simulation scenarios, but only six included anesthetists in the simulations. One study evaluated its teamwork tool using just validity measures, five used just reliability measures, and one used both. The most reliable tools identified were the Clinical Teamwork Scale, the Global Assessment of Obstetric Team Performance, and the Global Rating Scale of performance. However, they were still lacking in terms of quality and validity. More work needs to be conducted to establish the validity of teamwork tools for nontechnical skills, and the development of an ideal tool is warranted. Further studies are required to assess how outcomes, such as performance and patient safety, are influenced when using these tools.
Training and Assessment of Hysteroscopic Skills: A Systematic Review.
Savran, Mona Meral; Sørensen, Stine Maya Dreier; Konge, Lars; Tolsgaard, Martin G; Bjerrum, Flemming
2016-01-01
The aim of this systematic review was to identify studies on hysteroscopic training and assessment. PubMed, Excerpta Medica, the Cochrane Library, and Web of Science were searched in January 2015. Manual screening of references and citation tracking were also performed. Studies on hysteroscopic educational interventions were selected without restrictions on study design, populations, language, or publication year. A qualitative data synthesis including the setting, study participants, training model, training characteristics, hysteroscopic skills, assessment parameters, and study outcomes was performed by 2 authors working independently. Effect sizes were calculated when possible. Overall, 2 raters independently evaluated sources of validity evidence supporting the outcomes of the hysteroscopy assessment tools. A total of 25 studies on hysteroscopy training were identified, of which 23 were performed in simulated settings. Overall, 10 studies used virtual-reality simulators and reported effect sizes for technical skills ranging from 0.31 to 2.65; 12 used inanimate models and reported effect sizes for technical skills ranging from 0.35 to 3.19. One study involved live animal models; 2 studies were performed in clinical settings. The validity evidence supporting the assessment tools used was low. Consensus between the 2 raters on the reported validity evidence was high (94%). This systematic review demonstrated large variations in the effect of different tools for hysteroscopy training. The validity evidence supporting the assessment of hysteroscopic skills was limited. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Variation and adaptation: learning from success in patient safety-oriented simulation training.
Dieckmann, Peter; Patterson, Mary; Lahlou, Saadi; Mesman, Jessica; Nyström, Patrik; Krage, Ralf
2017-01-01
Simulation is traditionally used to reduce errors and their negative consequences. But according to modern safety theories, this focus overlooks the learning potential of the positive performance, which is much more common than errors. Therefore, a supplementary approach to simulation is needed to unfold its full potential. In our commentary, we describe the learning from success (LFS) approach to simulation and debriefing. Drawing on several theoretical frameworks, we suggest supplementing the widespread deficit-oriented, corrective approach to simulation with an approach that focusses on systematically understanding how good performance is produced in frequent (mundane) simulation scenarios. We advocate to investigate and optimize human activity based on the connected layers of any setting: the embodied competences of the healthcare professionals, the social and organizational rules that guide their actions, and the material aspects of the setting. We discuss implications of these theoretical perspectives for the design and conduct of simulation scenarios, post-simulation debriefings, and faculty development programs.
Systematic methods for knowledge acquisition and expert system development
NASA Technical Reports Server (NTRS)
Belkin, Brenda L.; Stengel, Robert F.
1991-01-01
Nine cooperating rule-based systems, collectively called AUTOCREW, were designed to automate functions and decisions associated with a combat aircraft's subsystems. The organization of tasks within each system is described; performance metrics were developed to evaluate the workload of each rule base and to assess the cooperation between the rule bases. Each AUTOCREW subsystem is composed of several expert systems that perform specific tasks. AUTOCREW's NAVIGATOR was analyzed in detail to understand the difficulties involved in designing the system and to identify tools and methodologies that ease development. The NAVIGATOR determines optimal navigation strategies from a set of available sensors. A Navigation Sensor Management (NSM) expert system was systematically designed from Kalman filter covariance data; four ground-based, a satellite-based, and two on-board INS-aiding sensors were modeled and simulated to aid an INS. The NSM Expert was developed using the Analysis of Variance (ANOVA) and the ID3 algorithm. Navigation strategy selection is based on an RSS position error decision metric, which is computed from the covariance data. Results show that the NSM Expert predicts position error correctly between 45 and 100 percent of the time for a specified navaid configuration and aircraft trajectory. The NSM Expert adapts to new situations, and provides reasonable estimates of hybrid performance. The systematic nature of the ANOVA/ID3 method makes it broadly applicable to expert system design when experimental or simulation data are available.
Garden, A L; Le Fevre, D M; Waddington, H L; Weller, J M
2015-05-01
Non-technical skills training in healthcare frequently uses high-fidelity simulation followed by a facilitated discussion known as debriefing. This type of training is mandatory for anaesthesia training in Australia and New Zealand. Debriefing by a skilled facilitator is thought to be essential for new learning through feedback and reflective processes. Key elements of effective debriefing need to be clearly identified to ensure that the training is evidence-based. We undertook a systematic review of empirical studies where elements of debriefing have been systematically manipulated during non-technical skills training. Eight publications met the inclusion criteria, but seven of these were of limited generalisability. The only study that was generalisable found that debriefing by novice instructors using a script improved team leader performance in paediatric resuscitation. The remaining seven publications were limited by the small number of debriefers included in each study and these reports were thus analogous to case reports. Generally, performance improved after debriefing by a skilled facilitator. However, the debriefer provided no specific advantage over other post-experience educational interventions. Acknowledging their limitations, these studies found that performance improved after self-led debrief, no debrief (with experienced practitioners), standardised multimedia debrief or after reviewing a DVD of the participants' own eye-tracking. There was no added performance improvement when review of a video recording was added to facilitator-led debriefing. One study reported no performance improvement after debriefing. Without empirical evidence that is specific to the healthcare domain, theories of learning from education and psychology should continue to inform practices and teaching for effective debriefing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hepburn, I.; De Schutter, E.
Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, which has led to the development of parallel methods that can take advantage of the power of modern supercomputers in recent years. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations, from simple diffusion models to realistic biological models, and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification.
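The reaction-diffusion operator splitting being tested can be sketched in a deterministic, one-dimensional toy form. The paper's setting is stochastic, on irregular tetrahedral meshes, and parallelized with MPI; none of that is reproduced here, and all parameters below are invented.

```python
import math

def step_diffusion(u, d, dt, dx):
    """One explicit diffusion step on a periodic 1D grid (conserves total mass)."""
    n = len(u)
    r = d * dt / dx ** 2  # must satisfy r <= 0.5 for stability
    return [u[i] + r * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n]) for i in range(n)]

def step_reaction(u, k, dt):
    """Exact update for first-order decay u' = -k u over one step."""
    f = math.exp(-k * dt)
    return [ui * f for ui in u]

def split_solve(u, d, k, dt, dx, steps):
    """Lie (first-order) operator splitting: alternate diffusion and reaction."""
    for _ in range(steps):
        u = step_reaction(step_diffusion(u, d, dt, dx), k, dt)
    return u

# A point of mass decaying while it spreads: 20 steps of dt = 0.1, so T = 2.
u_final = split_solve([0.0, 0.0, 1.0, 0.0, 0.0], d=0.1, k=0.5, dt=0.1, dx=1.0, steps=20)
```

Because the periodic diffusion stencil conserves mass and the reaction step applies the exact decay factor, the total mass after the split solve matches the analytic value exp(-kT), a convenient correctness check when evaluating splitting components.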
Are revised models better models? A skill score assessment of regional interannual variability
NASA Astrophysics Data System (ADS)
Sperber, Kenneth R.; Participating AMIP Modelling Groups
1999-05-01
Various skill scores are used to assess the performance of revised models relative to their original configurations. The interannual variability of all-India, Sahel and Nordeste rainfall and summer monsoon windshear is examined in integrations performed under the experimental design of the Atmospheric Model Intercomparison Project. For the indices considered, the revised models exhibit greater fidelity at simulating the observed interannual variability. Interannual variability of all-India rainfall is better simulated by models that have a more realistic rainfall climatology in the vicinity of India, indicating the beneficial effect of reducing systematic model error.
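A common skill score for interannual variability is the correlation between simulated and observed anomaly time series. A minimal sketch with invented, standardized rainfall anomalies, in which a "revised" model tracks observations better than its "original" configuration:

```python
def pearson_r(x, y):
    """Pearson correlation, used here as an interannual-variability skill score."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Invented standardized rainfall anomalies for five seasons.
obs_anom = [1.0, -0.5, 0.3, -1.2, 0.8]
original = [0.2, 0.4, -0.1, -0.3, 0.1]   # weak, partly out-of-phase variability
revised  = [0.8, -0.3, 0.2, -1.0, 0.6]   # tracks the observed anomalies

skill = {"original": pearson_r(original, obs_anom),
         "revised": pearson_r(revised, obs_anom)}
```

Comparing such scores before and after model revision is the kind of assessment the abstract describes, though the actual study also uses other skill metrics.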
NASA Astrophysics Data System (ADS)
Major, Louis; Kyriacou, Theocharis; Brereton, Pearl
2014-07-01
This work investigates the effectiveness of simulated robots as tools to support the learning of programming. After the completion of a systematic review and exploratory research, a multi-case case study was undertaken. A simulator, named Kebot, was developed and used to run four 10-hour programming workshops. Twenty-three student participants (aged 16-18), in addition to 23 pre-service and 3 in-service teachers, took part. The effectiveness of this intervention was determined by considering opinions, attitudes, and motivation as well as by analysing students' programming performance. Pre- and post-questionnaires, in- and post-workshop exercises, and interviews were used. Participants enjoyed learning using the simulator and believed the approach to be valuable and engaging. The performance of students indicates that the simulator aids learning, as most completed tasks to a satisfactory standard. Evidence suggests robot simulators can offer an effective means of introducing programming. Recommendations to support the development of other simulators are provided.
Gatti, M.
2018-02-22
We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing (WL) source galaxies from the Dark Energy Survey Year 1 (DES Y1) sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We also apply the method to three photo-z codes run in our simulated data: Bayesian Photometric Redshift (BPZ), Directional Neighborhood Fitting (DNF), and Random Forest-based photo-z (RF). We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering vs photo-z's. The systematic uncertainty in the mean redshift bias of the source galaxy sample is z ≲ 0.02, though the precise value depends on the redshift bin under consideration. Here, we discuss possible ways to mitigate the impact of our dominant systematics in future analyses.
State of the evidence on simulation-based training for laparoscopic surgery: a systematic review.
Zendejas, Benjamin; Brydges, Ryan; Hamstra, Stanley J; Cook, David A
2013-04-01
Summarize the outcomes and best practices of simulation training for laparoscopic surgery. Simulation-based training for laparoscopic surgery has become a mainstay of surgical training. Much new evidence has accrued since previous reviews were published. We systematically searched the literature through May 2011 for studies evaluating simulation, in comparison with no intervention or an alternate training activity, for training health professionals in laparoscopic surgery. Outcomes were classified as satisfaction, knowledge, skills (in a test setting) of time (to perform the task), process (eg, performance rating), product (eg, knot strength), and behaviors when caring for patients. We used random effects to pool effect sizes. From 10,903 articles screened, we identified 219 eligible studies enrolling 7138 trainees, including 91 (42%) randomized trials. For comparisons with no intervention (n = 151 studies), pooled effect size (ES) favored simulation for outcomes of knowledge (1.18; N = 9 studies), skills time (1.13; N = 89), skills process (1.23; N = 114), skills product (1.09; N = 7), behavior time (1.15; N = 7), behavior process (1.22; N = 15), and patient effects (1.28; N = 1), all P < 0.05. When compared with nonsimulation instruction (n = 3 studies), results significantly favored simulation for outcomes of skills time (ES, 0.75) and skills process (ES, 0.54). Comparisons between different simulation interventions (n = 79 studies) clarified best practices. For example, in comparison with virtual reality, box trainers have similar effects for process skills outcomes and seem to be superior for outcomes of satisfaction and skills time. Simulation-based laparoscopic surgery training of health professionals has large benefits when compared with no intervention and is moderately more effective than nonsimulation instruction.
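The review pools effect sizes with a random-effects model. A standard estimator for this is DerSimonian-Laird; the sketch below uses three invented effect sizes with equal sampling variances, not the review's data, and is not necessarily the exact estimator the authors used.

```python
def dersimonian_laird(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model."""
    k = len(effects)
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)               # between-study variance
    wr = [1.0 / (v + tau2) for v in variances]       # random-effects weights
    pooled = sum(wi * e for wi, e in zip(wr, effects)) / sum(wr)
    se = (1.0 / sum(wr)) ** 0.5
    return pooled, se, tau2

# Three invented study effect sizes (standardized mean differences).
pooled, se, tau2 = dersimonian_laird([0.6, 1.2, 1.8], [0.04, 0.04, 0.04])
```

When the studies are heterogeneous (tau2 > 0), the random-effects weights shrink toward equality and the pooled standard error widens relative to a fixed-effect analysis, which is why the model suits a review spanning many simulator types and outcomes.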
Current concepts in simulation-based trauma education.
Cherry, Robert A; Ali, Jameel
2008-11-01
The use of simulation-based technology in trauma education has focused on providing a safe and effective alternative to the more traditional methods that are used to teach technical skills and critical concepts in trauma resuscitation. Trauma team training using simulation-based technology is also being used to develop skills in leadership, team-information sharing, communication, and decision-making. The integration of simulators into medical student curriculum, residency training, and continuing medical education has been strongly recommended by the American College of Surgeons as an innovative means of enhancing patient safety, reducing medical errors, and performing a systematic evaluation of various competencies. Advanced human patient simulators are increasingly being used in trauma as an evaluation tool to assess clinical performance and to teach and reinforce essential knowledge, skills, and abilities. A number of specialty simulators in trauma and critical care have also been designed to meet these educational objectives. Ongoing educational research is still needed to validate long-term retention of knowledge and skills, provide reliable methods to evaluate teaching effectiveness and performance, and to demonstrate improvement in patient safety and overall quality of care.
Real-time haptic cutting of high-resolution soft tissues.
Wu, Jun; Westermann, Rüdiger; Dick, Christian
2014-01-01
We present our systematic efforts in advancing the computational performance of physically accurate soft tissue cutting simulation, which is at the core of surgery simulators in general. We demonstrate real-time performance of 15 simulation frames per second for haptic soft tissue cutting of a deformable body at an effective resolution of 170,000 finite elements. This is achieved by the following innovative components: (1) a linked octree discretization of the deformable body, which allows for fast and robust topological modifications of the simulation domain, (2) a composite finite element formulation, which thoroughly reduces the number of simulation degrees of freedom and thus enables a careful balance between simulation performance and accuracy, (3) a highly efficient geometric multigrid solver for solving the linear systems of equations arising from implicit time integration, (4) an efficient collision detection algorithm that effectively exploits the composition structure, and (5) a stable haptic rendering algorithm for computing the feedback forces. Considering that our method increases the finite element resolution for physically accurate real-time soft tissue cutting simulation by an order of magnitude, our technique has a high potential to significantly advance the realism of surgery simulators.
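The geometric multigrid component can be illustrated in miniature. The sketch below is an assumption-laden toy, not the authors' 3D composite-element solver: a V-cycle on the 1D Poisson problem -u'' = f with zero Dirichlet boundaries, weighted-Jacobi smoothing, full-weighting restriction and linear interpolation.

```python
def residual(u, f, h):
    """r = f - A u for the 1D finite-difference Laplacian, zero boundaries."""
    n = len(u)
    out = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        out.append(f[i] - (2.0 * u[i] - left - right) / (h * h))
    return out

def vcycle(u, f, h, nu=3):
    """One multigrid V-cycle for -u'' = f on n = 2^k - 1 interior points."""
    n = len(u)
    if n == 1:                              # coarsest grid: solve exactly
        u[0] = f[0] * h * h / 2.0
        return u
    for _ in range(nu):                     # pre-smoothing (weighted Jacobi)
        r = residual(u, f, h)
        for i in range(n):
            u[i] += (2.0 / 3.0) * (h * h / 2.0) * r[i]
    r = residual(u, f, h)
    nc = (n - 1) // 2                       # full-weighting restriction
    rc = [0.25 * (r[2 * j] + 2.0 * r[2 * j + 1] + r[2 * j + 2]) for j in range(nc)]
    ec = vcycle([0.0] * nc, rc, 2.0 * h, nu)
    for j in range(nc):                     # linear prolongation + correction
        u[2 * j + 1] += ec[j]
        u[2 * j] += 0.5 * ((ec[j - 1] if j > 0 else 0.0) + ec[j])
    u[n - 1] += 0.5 * ec[nc - 1]
    for _ in range(nu):                     # post-smoothing
        r = residual(u, f, h)
        for i in range(n):
            u[i] += (2.0 / 3.0) * (h * h / 2.0) * r[i]
    return u
```

The appeal for real-time simulation is that each V-cycle costs O(n) work and typically cuts the residual by an order of magnitude, independent of grid size.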
ENHANCED FORMATION OF CHLORINATED PICS BY THE ADDITION OF BROMINE
A systematic series of experiments were performed on a pilot-scale rotary kiln incinerator simulator in which liquid surrogate wastes containing varied levels of chlorine and bromine were burned. The surrogate wastes used were a series of mixtures of methylene chloride and methyl...
NASA Astrophysics Data System (ADS)
Guidi, Giovanni; Scannapieco, Cecilia; Walcher, C. Jakob
2015-12-01
We study the sources of biases and systematics in the derivation of galaxy properties from observational studies, focusing on stellar masses, star formation rates, gas and stellar metallicities, stellar ages, magnitudes and colours. We use hydrodynamical cosmological simulations of galaxy formation, for which the real quantities are known, and apply observational techniques to derive the observables. We also analyse biases that are relevant for a proper comparison between simulations and observations. For our study, we post-process the simulation outputs to calculate the galaxies' spectral energy distributions (SEDs) using stellar population synthesis models and also generate the fully consistent far-UV-submillimetre wavelength SEDs with the radiative transfer code SUNRISE. We compared the direct results of simulations with the observationally derived quantities obtained in various ways, and found that systematic differences in all studied galaxy properties appear, which are caused by: (1) purely observational biases, (2) the use of mass-weighted and luminosity-weighted quantities, with preferential sampling of more massive and luminous regions, (3) the different ways of constructing the template of models when a fit to the spectra is performed, and (4) variations due to different calibrations, most notably for gas metallicities and star formation rates. Our results show that large differences can appear depending on the technique used to derive galaxy properties. Understanding these differences is of primary importance both for simulators, to allow a better judgement of similarities and differences with observations, and for observers, to allow a proper interpretation of the data.
NASA Technical Reports Server (NTRS)
Chappell, Steven P.; Norcross, Jason R.; Gernhardt, Michael L.
2010-01-01
The Apollo lunar EVA experience revealed challenges with suit stability and control, likely a combination of mass, mobility, and center of gravity (CG) factors. The EVA Physiology, Systems and Performance (EPSP) Project is systematically working with other NASA projects, labs, and facilities to lead a series of studies to understand the role of suit mass, weight, CG, and other parameters on astronaut performance in partial gravity environments.
Performance of quantum annealing on random Ising problems implemented using the D-Wave Two
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Job, Joshua; Rønnow, Troels F.; Troyer, Matthias; Lidar, Daniel A.; USC Collaboration; ETH Collaboration
2014-03-01
Detecting a possible speedup of quantum annealing compared to classical algorithms is a pressing task in experimental adiabatic quantum computing. In this talk, we discuss the performance of the D-Wave Two quantum annealing device on Ising spin glass problems. The expected time to solution for the device to solve random instances with up to 503 spins and with specified coupling ranges is evaluated while carefully addressing the issue of statistical errors. We perform a systematic comparison of the expected time to solution between the D-Wave Two and classical stochastic solvers, specifically simulated annealing, and simulated quantum annealing based on quantum Monte Carlo, and discuss the question of speedup.
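Classical simulated annealing, one of the reference solvers in this comparison, can be sketched on a toy problem. The snippet below anneals a 1D Ising chain with random ±1 couplings under a linear inverse-temperature schedule; the chain, schedule and sizes are illustrative choices, not the 503-spin Chimera instances of the study.

```python
import math
import random

def energy(spins, J):
    """Ising chain energy E = -sum_i J_i * s_i * s_{i+1}."""
    return -sum(J[i] * spins[i] * spins[i + 1] for i in range(len(J)))

def simulated_annealing(J, sweeps=300, beta_max=3.0, seed=0):
    """Metropolis single-spin-flip annealing with a linear beta schedule."""
    rng = random.Random(seed)
    n = len(J) + 1
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for s in range(sweeps):
        beta = beta_max * (s + 1) / sweeps
        for i in range(n):
            de = 0.0                         # energy change of flipping spin i
            if i > 0:
                de += 2.0 * J[i - 1] * spins[i - 1] * spins[i]
            if i < n - 1:
                de += 2.0 * J[i] * spins[i] * spins[i + 1]
            if de <= 0 or rng.random() < math.exp(-beta * de):
                spins[i] = -spins[i]
    return spins

rng = random.Random(42)
J = [rng.choice([-1.0, 1.0]) for _ in range(19)]  # random couplings, 20 spins
spins = simulated_annealing(J)
```

A chain is frustration-free, so its ground-state energy is exactly -len(J); real benchmarking, as in the talk, compares distributions of time-to-solution across many hard instances rather than single runs.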
Navigating the tip of the genomic iceberg: Next-generation sequencing for plant systematics.
Straub, Shannon C K; Parks, Matthew; Weitemier, Kevin; Fishbein, Mark; Cronn, Richard C; Liston, Aaron
2012-02-01
Just as Sanger sequencing did more than 20 years ago, next-generation sequencing (NGS) is poised to revolutionize plant systematics. By combining multiplexing approaches with NGS throughput, systematists may no longer need to choose between more taxa or more characters. Here we describe a genome skimming (shallow sequencing) approach for plant systematics. Through simulations, we evaluated optimal sequencing depth and performance of single-end and paired-end short read sequences for assembly of nuclear ribosomal DNA (rDNA) and plastomes and addressed the effect of divergence on reference-guided plastome assembly. We also used simulations to identify potential phylogenetic markers from low-copy nuclear loci at different sequencing depths. We demonstrated the utility of genome skimming through phylogenetic analysis of the Sonoran Desert clade (SDC) of Asclepias (Apocynaceae). Paired-end reads performed better than single-end reads. Minimum sequencing depths for high quality rDNA and plastome assemblies were 40× and 30×, respectively. Divergence from the reference significantly affected plastome assembly, but relatively similar references are available for most seed plants. Deeper rDNA sequencing is necessary to characterize intragenomic polymorphism. The low-copy fraction of the nuclear genome was readily surveyed, even at low sequencing depths. Nearly 160000 bp of sequence from three organelles provided evidence of phylogenetic incongruence in the SDC. Adoption of NGS will facilitate progress in plant systematics, as whole plastome and rDNA cistrons, partial mitochondrial genomes, and low-copy nuclear markers can now be efficiently obtained for molecular phylogenetics studies.
Adib-Hajbaghery, Mohsen; Sharifi, Najmeh
2017-03-01
To gain insight into the existing scientific evidence on the effect of simulation on critical thinking in nursing education. A systematic literature review of original research publications. In this systematic review, papers published in English and Farsi in the databases PubMed, Science Direct, ProQuest, ERIC, Google Scholar, Ovid, MagIran and SID, from 1975 to 2015, were reviewed by two independent researchers. Original research publications were eligible for review when they described simulation programs directed at nursing students and nurses; used a control group or a pretest post-test design; and gave information about the effects of simulation on critical thinking. Two reviewers independently assessed the studies for inclusion. Methodological quality of the included studies was also independently assessed by the reviewers, using a checklist developed by Greenhalgh et al. and the checklist of the Cochrane Center. Data related to the original publications were extracted by one reviewer and checked by a second reviewer. No statistical pooling of outcomes was performed, due to the large heterogeneity of outcomes. After screening the titles and abstracts of 787 papers, 16 were included in the review according to the inclusion criteria. These used experimental or quasi-experimental designs. The studies used a variety of instruments and a wide range of simulation methods, with differences in duration and number of exposures to simulation. Eight of the studies reported that simulation training positively affected critical thinking skills; however, the other eight reported that simulation had no effect on critical thinking. Studies therefore conflict about the effect of simulation on nurses' and nursing students' critical thinking, and a large heterogeneity exists between the studies in terms of the instruments and methods used.
Thus, more studies with careful designs are needed to produce more credible evidence on the effectiveness of simulation on critical thinking. Copyright © 2016 Elsevier Ltd. All rights reserved.
Jensen, Katrine; Ringsted, Charlotte; Hansen, Henrik Jessen; Petersen, René Horsleben; Konge, Lars
2014-06-01
Video-assisted thoracic surgery is gradually replacing conventional open thoracotomy as the method of choice for the treatment of early-stage non-small cell lung cancers, and thoracic surgical trainees must learn and master this technique. Simulation-based training could help trainees overcome the first part of the learning curve, but no virtual-reality simulators for thoracoscopy are commercially available. This study aimed to investigate whether training on a laparoscopic simulator enables trainees to perform a thoracoscopic lobectomy. Twenty-eight surgical residents were randomized to either virtual-reality training on a nephrectomy module or traditional black-box simulator training. After a retention period they performed a thoracoscopic lobectomy on a porcine model and their performance was scored using a previously validated assessment tool. The groups did not differ in age or gender. All participants were able to complete the lobectomy. The performance of the black-box group was significantly faster during the test scenario than the virtual-reality group: 26.6 min (SD 6.7 min) versus 32.7 min (SD 7.5 min). No difference existed between the two groups when comparing bleeding and anatomical and non-anatomical errors. Simulation-based training and targeted instructions enabled the trainees to perform a simulated thoracoscopic lobectomy. Traditional black-box training was more effective than virtual-reality laparoscopy training. Thus, a dedicated simulator for thoracoscopy should be available before establishing systematic virtual-reality training programs for trainees in thoracic surgery.
Impact of stress on dentists' clinical performance. A systematic review.
Plessas, A; Delgado, M B; Nasser, M; Hanoch, Y; Moles, D R
2018-03-01
Dentistry is recognised as a stressful profession and dentists perceive their profession to be more stressful than other healthcare professions. While earlier studies have shown a link between stress and well-being among dentists, whether stress negatively impacts their clinical performance is an important and open question. We do know, however, that stress is associated with reduced performance in other health (and non-health) related professions. This systematic review aimed to answer the question: how does stress impact on dentists' clinical performance? This systematic review was registered in PROSPERO (CRD42016045756). The CINAHL, Embase, Medline, PsycINFO, EThOS and OpenGrey electronic databases were searched according to PRISMA guidelines. Two reviewers independently screened the citations for relevance. The citation list of potentially eligible papers was also searched. Prospective empirical studies were considered for inclusion. The inclusion criteria were applied at the full-text stage by the same two reviewers independently. The search yielded 3535 titles and abstracts. Twelve publications were considered potentially eligible, eleven of which were excluded as they did not meet the predefined inclusion criteria. This systematic review identified a gap in the literature, as it found no empirical evidence quantifying the impact of stress on dentists' clinical performance. Prospective, well-designed experimental simulation studies comparing the effects of stress and non-stress situations on clinical performance and decision making, as well as studies prospectively evaluating dentists' real-life performance under stress, are warranted. Copyright© 2018 Dennis Barber Ltd.
A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions
Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.
2017-01-01
Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyse the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10⁻⁵ in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945
A Procedural Electroencephalogram Simulator for Evaluation of Anesthesia Monitors.
Petersen, Christian Leth; Görges, Matthias; Massey, Roslyn; Dumont, Guy Albert; Ansermino, J Mark
2016-11-01
Recent research and advances in the automation of anesthesia are driving the need to better understand electroencephalogram (EEG)-based anesthesia end points and to test the performance of anesthesia monitors. This effort is currently limited by the need to collect raw EEG data directly from patients. A procedural method to synthesize EEG signals was implemented in a mobile software application. The application is capable of sending the simulated signal to an anesthesia depth-of-hypnosis monitor. Systematic sweeps of the simulator generate functional monitor response profiles reminiscent of how network analyzers are used to test electronic components. Three commercial anesthesia monitors (Entropy, NeuroSENSE, and BIS) were compared with this new technology, and significant response and feature variations between the monitor models were observed; this includes reproducible, nonmonotonic apparent multistate behavior and significant hysteresis at light levels of anesthesia. Anesthesia monitor response to a procedural simulator can reveal significant differences in internal signal processing algorithms. The ability to synthesize EEG signals at different anesthetic depths potentially provides a new method for systematically testing EEG-based monitors and automated anesthesia systems with all sensor hardware fully operational before human trials.
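A procedural EEG synthesizer of this general kind can be approximated in a few lines. The sketch below is a generic illustration only: the band centres, weights and the single "depth" parameter are invented for the example and are not taken from the paper's method.

```python
import math
import random

def synth_eeg(depth, duration_s=2.0, fs=128, seed=1):
    """Generate a toy EEG-like trace whose spectrum shifts with depth in [0, 1].

    0 = awake (beta-dominated), 1 = deep anesthesia (delta-dominated).
    Band centres (Hz) and depth-dependent weights are illustrative assumptions.
    """
    rng = random.Random(seed)
    bands = [
        (2.0, depth),               # delta grows with depth
        (6.0, 0.5),                 # theta held constant
        (10.0, 1.0 - 0.5 * depth),  # alpha shrinks with depth
        (20.0, 1.0 - depth),        # beta vanishes at full depth
    ]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in bands]
    samples = []
    for n in range(int(duration_s * fs)):
        t = n / fs
        v = sum(w * math.sin(2.0 * math.pi * f * t + p)
                for (f, w), p in zip(bands, phases))
        v += rng.gauss(0.0, 0.2)    # broadband measurement noise
        samples.append(v)
    return samples
```

Sweeping `depth` from 0 to 1 and feeding each trace to a monitor is the kind of systematic input sweep the paper uses to map out a monitor's response profile.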
Simulating the role of visual selective attention during the development of perceptual completion
Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.
2014-01-01
We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds’ performance on a second measure, the perceptual unity task. Two parameters in the model – corresponding to areas in the occipital and parietal cortices – were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. PMID:23106728
Systematic Studies for the Development of High-Intensity ABS
NASA Astrophysics Data System (ADS)
Barion, L.; Ciullo, G.; Contalbrigo, M.; Dalpiaz, P. F.; Lenisa, P.; Statera, M.
2011-01-01
The effect of the dissociator cooling temperature has been tested in order to explain the unexpected RHIC atomic beam intensity. Studies on trumpet nozzle geometry, compared to the standard sonic nozzle, have been performed, both with simulation methods and with test bench measurements on molecular beams, obtaining promising results.
This paper presents a new system for automated 2D-3D migration of chemicals in large databases with conformer multiplication. The main advantages of this system are its straightforward performance, reasonable execution time, simplicity, and applicability to building large 3D che...
Impact of rescaling anomaly and seasonal components of soil moisture on hydrologic data assimilation
USDA-ARS?s Scientific Manuscript database
In hydrological sciences many observations and model simulations have moderate linear association due to the noise in the datasets and/or the systematic differences between their seasonality components. This degrades the performance of model-observation integration algorithms, such as the Kalman Fil...
NASA Astrophysics Data System (ADS)
Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki
2011-08-01
A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives us erroneous estimations of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. This method is validated both by numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.
Virtual reality simulation of fuzzy-logic control during underwater dynamic positioning
NASA Astrophysics Data System (ADS)
Thekkedan, Midhin Das; Chin, Cheng Siong; Woo, Wai Lok
2015-03-01
In this paper, graphical-user-interface (GUI) software for simulation and fuzzy-logic control of a remotely operated vehicle (ROV), built with the MATLAB™ GUI Designing Environment, is proposed. The proposed ROV GUI platform allows controller designs such as fuzzy-logic control systems to be compared with other controllers, such as the proportional-integral-derivative (PID) and sliding-mode controllers (SMC), systematically and interactively. External disturbances such as sea currents can be added to better model the actual underwater environment. The simulation results show that the position responses of the fuzzy-logic controller exhibit reasonable performance under sea current disturbance.
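A minimal fuzzy-logic controller of the sort compared in such a platform can be sketched as follows. The membership shapes, error universe and output levels here are illustrative assumptions, not the paper's design.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_thrust(depth_error):
    """Map a depth error (m) to a normalized thrust command in [-1, 1].

    Three rules with weighted-average (centroid-style) defuzzification:
    negative error -> reverse thrust, near zero -> hold, positive -> forward.
    """
    rules = [
        (tri(depth_error, -4.0, -2.0, 0.0), -1.0),  # error negative -> reverse
        (tri(depth_error, -2.0,  0.0, 2.0),  0.0),  # error near zero -> hold
        (tri(depth_error,  0.0,  2.0, 4.0),  1.0),  # error positive -> forward
    ]
    num = sum(mu * out for mu, out in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0
```

Unlike a PID law, the output here interpolates smoothly between discrete rule consequents, which is what makes side-by-side comparison against PID and SMC responses in a simulator instructive.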
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2014-01-01
Over the past two decades finite element (FE) analysis has become a popular tool for researchers seeking to simulate the biomechanics of the healthy and diabetic foot. The primary aims of these simulations have been to improve our understanding of the foot's complicated mechanical loading in health and disease and to inform interventions designed to prevent plantar ulceration, a major complication of diabetes. This article provides a systematic review and summary of the findings from FE analysis-based computational simulations of the diabetic foot. A systematic literature search was carried out and 31 relevant articles were identified covering three primary themes: methodological aspects relevant to modelling the diabetic foot; investigations of the pathomechanics of the diabetic foot; and simulation-based design of interventions to reduce ulceration risk. Methodological studies illustrated appropriate use of FE analysis for simulation of foot mechanics, incorporating nonlinear tissue mechanics, contact and rigid body movements. FE studies of pathomechanics have provided estimates of internal soft tissue stresses, and suggest that such stresses may often be considerably larger than those measured at the plantar surface and are proportionally greater in the diabetic foot compared to controls. FE analysis has allowed evaluation of insole performance and has informed the development of new insole designs, footwear and corrective surgery as intervention strategies. The technique also presents the opportunity to simulate the effect of changes associated with the diabetic foot on non-mechanical factors such as blood supply to local tissues.
While significant advancement in diabetic foot research has been made possible by the use of FE analysis, the translational utility of this powerful tool for routine clinical care at the patient level requires the adoption of cost-effective (in terms of both labour and computation) and reliable approaches with clear clinical validity for decision making.
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Krosel, S. M.; Bruton, W. M.
1982-01-01
A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations that are needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, A.; Andreas, A.; Ottoson, L.
2014-11-01
Indoor and outdoor testing of photovoltaic (PV) device performance requires the use of solar simulators and natural solar radiation, respectively. This performance characterization requires accurate knowledge of the spectral irradiance distribution incident on the devices. Spectroradiometers are used to measure the spectral distribution of solar simulators and solar radiation. On September 17, 2013, a global spectral irradiance intercomparison using spectroradiometers was organized by the Solar Radiation Research Laboratory (SRRL) at the National Renewable Energy Laboratory (NREL). This paper presents highlights of the results of this first intercomparison, which will help to decrease systematic inter-laboratory differences in the measurements of the outputs or efficiencies of PV devices and harmonize laboratory experimental procedures.
Continued Analysis of the NIST Neutron Lifetime Measurement Using Ultracold Neutrons
NASA Astrophysics Data System (ADS)
Huffer, Craig; Huffman, P. R.; Schelhammer, K. W.; Dewey, M. S.; Huber, M. G.; Hughes, P. P.; Mumm, H. P.; Thompson, A. K.; Coakley, K.; Yue, A. T.; O'Shaughnessy, C. M.; Yang, L.
2013-10-01
The neutron lifetime is an important parameter for constraining the Standard Model and providing input for Big Bang Nucleosynthesis. The current disagreement in the most recent generation of lifetime experiments suggests unknown or underestimated systematics and motivates the need for alternative measurement methods as well as additional investigations into potential systematics. Our measurement was performed using magnetically trapped Ultracold Neutrons in a 3.1 T Ioffe type trap configuration. The decay rate of the neutron population is recorded in real time by monitoring visible light resulting from beta decay. Data collected in late 2010 and early 2011 is being analyzed and systematic effects are being investigated. An overview of our current work on the analysis, Monte Carlo simulations, and systematic effects will be provided. This work was supported by the NSF and NIST.
NASA Technical Reports Server (NTRS)
Kwatra, S. C.
1998-01-01
A large number of papers have been published attempting to give an analytical basis for the performance of Turbo-codes. It has been shown that performance improves with increased interleaver length, and procedures have been given to pick the best constituent recursive systematic convolutional codes (RSCCs). However, testing by computer simulation is still required to verify these results. This thesis begins by describing the encoding and decoding schemes used. Next, simulation results on several memory-4 RSCCs are shown. It is found that the best BER performance at low Eb/N0 is not given by the RSCCs that were found using the analytic techniques given so far. Next, results are given from simulations using a smaller-memory RSCC for one of the constituent encoders. Significant reduction in decoding complexity is obtained with minimal loss in performance. Simulation results are then given for a rate 1/3 Turbo-code, with the result that this code performed as well as a rate 1/2 Turbo-code as measured by the distance from their respective Shannon limits. Finally, the results of simulations using an inaccurate noise variance measurement are given. From this it was observed that Turbo-decoding is fairly stable with regard to noise variance measurement.
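To give a flavor of what such BER simulations involve, here is a Monte Carlo estimate for uncoded BPSK over an AWGN channel, the baseline against which coded performance is judged. This is an illustrative sketch, not the thesis's Turbo encoder/decoder; bit counts and seeds are arbitrary choices.

```python
import math
import random

def bpsk_ber(ebn0_db, bits=20000, seed=2):
    """Monte Carlo bit-error rate of uncoded BPSK over AWGN.

    Unit-energy symbols: noise variance per dimension is N0/2 = 1/(2*Eb/N0).
    """
    rng = random.Random(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))   # noise standard deviation
    errors = 0
    for _ in range(bits):
        b = rng.choice([0, 1])
        x = 1.0 if b == 0 else -1.0         # BPSK mapping
        y = x + rng.gauss(0.0, sigma)       # AWGN channel
        if (y < 0) != (b == 1):             # hard-decision error check
            errors += 1
    return errors / bits
```

Plotting this estimate against Eb/N0 recovers the familiar waterfall curve; a Turbo-code simulation replaces the mapping and decision steps with the encoder and iterative decoder, and its coding gain is read off as the horizontal gap to this uncoded curve.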
Goldenberg, Mitchell G; Lee, Jason Y; Kwong, Jethro C C; Grantcharov, Teodor P; Costello, Anthony
2018-03-31
To systematically review and synthesise the validity evidence supporting intraoperative and simulation-based assessments of technical skill in urological robot-assisted surgery (RAS), and make evidence-based recommendations for the implementation of these assessments in urological training. A literature search of the Medline, PsycINFO and Embase databases was performed. Articles using technical skill and simulation-based assessments in RAS were abstracted. Only studies involving urology trainees or faculty were included in the final analysis. Multiple tools for the assessment of technical robotic skill have been published, with mixed sources of validity evidence to support their use. These evaluations have been used in both the ex vivo and in vivo settings. Performance evaluations range from global rating scales to psychometrics, and assessments are carried out through automation, expert analysts, and crowdsourcing. There have been rapid expansions in approaches to RAS technical skills assessment, both in simulated and clinical settings. Alternative approaches to assessment in RAS, such as crowdsourcing and psychometrics, remain under investigation. Evidence to support the use of these metrics in high-stakes decisions is likely insufficient at present. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.
The Effects of Coping Interventions on Ability to Perform Under Pressure
Kent, Sofie; Devonport, Tracey J.; Lane, Andrew M.; Nicholls, Wendy; Friesen, Andrew P.
2018-01-01
The ability to perform under pressure is necessary to achieve goals in various domains of life. We conducted a systematic review to synthesise findings from applied studies that focus on interventions developed to enhance an individual's ability to cope under performance pressure. Following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines, a comprehensive search of five electronic databases was conducted. This yielded 66,618 records, of which 23 peer-reviewed papers met the inclusion criterion of containing an intervention that targeted coping skills for performing under pressure. Using the Standard Quality Assessment for evaluation of primary research papers (Kmet et al., 2004) to assess quality, the included studies performed well on reporting research objectives, research design, and statistical procedures. Sixteen studies showed poor quality in controlling for potentially confounding factors and small sample sizes. A narrative aggregate synthesis identified intervention studies that provided an educational focus (n = 9), consultancy sessions (n = 6), simulation training (n = 5) and emotion regulation strategies (n = 3). Findings highlight a need to: (1) establish a contextualized pressure task that generates high levels of ecological validity for participants; (2) having established a suitable pressure task, assess the effects of pressure by evaluating conscious and nonconscious effects and associated coping mechanisms, which should inform the subsequent development of interventions; and (3) assess interventions to enhance understanding of the ways in which they improve coping with pressure, or may fail, and the mechanisms that may explain these outcomes. Key points: Simulation studies that exposed individuals to 'pressure' settings produced the most consistent improvements to performance, in comparison to a control group.
This systematic review highlights limitations with the design, execution, and evaluation of pressure interventions. Future research should attempt to better consider the approach used to generate meaningful performance pressures, and should assess the consequences of pressure by evaluating conscious and non-conscious effects and the coping mechanisms through which coping with pressure might be improved. PMID:29535577
Assessing teamwork performance in obstetrics: A systematic search and review of validated tools.
Fransen, Annemarie F; de Boer, Liza; Kienhorst, Dieneke; Truijens, Sophie E; van Runnard Heimel, Pieter J; Oei, S Guid
2017-09-01
Teamwork performance is an essential component for the clinical efficiency of multi-professional teams in obstetric care. As patient safety is related to teamwork performance, it has become an important learning goal in simulation-based education. In order to improve teamwork performance, reliable assessment tools are required. These can be used to provide feedback during training courses, or to compare learning effects between different types of training courses. The aim of the current study is to (1) identify the available assessment tools to evaluate obstetric teamwork performance in a simulated environment, and (2) evaluate their psychometric properties in order to identify the most valuable tool(s) to use. We performed a systematic search in PubMed, MEDLINE, and EMBASE to identify articles describing assessment tools for the evaluation of obstetric teamwork performance in a simulated environment. In order to evaluate the quality of the identified assessment tools, the standards and grading rules were applied as recommended by the Accreditation Council for Graduate Medical Education (ACGME) Committee on Educational Outcomes. The included studies were also assessed according to the Oxford Centre for Evidence Based Medicine (OCEBM) levels of evidence. This search resulted in the inclusion of five articles describing the following six tools: Clinical Teamwork Scale, Human Factors Rating Scale, Global Rating Scale, Assessment of Obstetric Team Performance, Global Assessment of Obstetric Team Performance, and the Teamwork Measurement Tool. Based on the ACGME guidelines we assigned a Class 3, level C of evidence, to all tools. Regarding the OCEBM levels of evidence, a level 3b was assigned to two studies and a level 4 to four studies. The Clinical Teamwork Scale demonstrated the most comprehensive validation, and the Teamwork Measurement Tool demonstrated promising results; however, it is recommended that its reliability be investigated further. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Shauly, Eitan; Rotstein, Israel; Peltinov, Ram; Latinski, Sergei; Adan, Ofer; Levi, Shimon; Menadeva, Ovadya
2009-03-01
The continuing transistor scaling efforts, toward smaller devices with similar (or larger) drive current per micron and faster operation, increase the challenge of predicting and controlling the transistor off-state current. Typically, electrical simulators such as SPICE use the design intent (as-drawn GDS data). In more sophisticated cases, the simulators are fed with the pattern after lithography and etch process simulations. As the importance of electrical simulation accuracy increases and leakage becomes more dominant, there is a need to feed these simulators with more accurate information extracted from physical on-silicon transistors. Our methodology to predict changes in device performance due to systematic lithography and etch effects was used in this paper. In general, the methodology consists of using OPCCmaxTM for systematic Edge-Contour-Extraction (ECE) from transistors, capturing the manufacturing process including image distortions such as line-end shortening, corner rounding and line-edge roughness. These measurements are used for SPICE modeling. A possible application of this new metrology is to provide, ahead of time, physical and electrical statistical data, improving time to market. In this work, we applied our methodology to analyze small and large arrays of 2.14 um2 6T-SRAM, manufactured using the Tower Standard Logic for General Purposes Platform. Four of the six transistors used a "U-shape AA", known to have higher variability. The predicted electrical performance of the transistors' drive current and leakage current, in terms of nominal values and variability, is presented. We also used the methodology to analyze an entire SRAM block array. A study of isolation leakage and variability is presented.
Belykh, Evgenii; Onaka, Naomi R; Abramov, Irakliy T; Yağmurlu, Kaan; Byvaltsev, Vadim A; Spetzler, Robert F; Nakaji, Peter; Preul, Mark C
2018-04-01
Microneurosurgical techniques involve complex manual skills and hand-eye coordination that require substantial training. Many factors affect microneurosurgical skills. The goal of this study was to use a systematic evidence-based approach to analyze the quality of evidence for intrinsic and extrinsic factors that influence microneurosurgical performance and to make weighted practical recommendations. A literature search of factors that may affect microsurgical performance was conducted using PubMed and Embase. The criteria for inclusion were established in accordance with the PRISMA-P (Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols) statement. Forty-eight studies were included in the analysis. Most of the studies used surgeons as participants. Most used endoscopic surgery simulators to assess skills, and only 12 studies focused on microsurgery. This review provides 18 practical recommendations based on a systematic literature analysis of the following 8 domains: 1) listening to music before and during microsurgery, 2) caffeine consumption, 3) β-blocker use, 4) physical exercise, 5) sleep deprivation, 6) alcohol consumption before performing surgery, 7) duration of the operation, and 8) the ergonomic position of the surgeon. Despite the clear value of determining the effects of various factors on surgical performance, the available body of literature is limited, and it is not possible to determine standards for each surgical field. These recommendations may be used by neurosurgical trainees and practicing neurosurgeons to improve microsurgical performance and acquisition of microsurgical skills. Randomized studies assessing the factors that influence microsurgical performance are required. Copyright © 2018 Elsevier Inc. All rights reserved.
SDRE controller for motion design of cable-suspended robot with uncertainties and moving obstacles
NASA Astrophysics Data System (ADS)
Behboodi, Ahad; Salehi, Seyedmohammad
2017-10-01
In this paper an optimal control approach for nonlinear dynamical systems is proposed based on the State-Dependent Riccati Equation (SDRE), and its robustness against uncertainties is shown by simulation results. The proposed method was applied to a spatial six-cable suspended robot, designed to carry loads or perform different tasks in large workspaces. Motion planning for cable-suspended robots in such a large workspace is subject to uncertainties and obstacles. First, we emphasize the ability of SDRE to provide a systematic basis for the efficient design of controllers for a wide variety of nonlinear dynamical systems. Then we show how this systematic design improves the robustness of the system and facilitates the integration of motion planning techniques with the controller. In particular, an obstacle avoidance technique based on artificial potential fields (APF) can easily be combined with the SDRE controller with efficient performance. Owing to the difficulty of solving the SDRE exactly, an approximation method based on power series expansion was used. The efficiency and robustness of the SDRE controller are illustrated on a six-cable suspended robot with appropriate simulations.
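The SDRE idea summarized above can be sketched for a scalar system, where the state-dependent algebraic Riccati equation has a closed-form positive root. The dynamics dx/dt = sin(x) + u, the weights q and r, and the state-dependent coefficient a(x) = sin(x)/x below are illustrative choices for a minimal sketch, not values from the paper:

```python
import math

def sdre_gain(a, b, q, r):
    """Solve the scalar algebraic Riccati equation
    2*a*p - (b*p)**2 / r + q = 0 for the stabilizing (positive) root p,
    and return the state-dependent feedback gain k = b*p/r (u = -k*x)."""
    p = (a + math.sqrt(a * a + q * b * b / r)) / (b * b / r)
    return b * p / r

def simulate(x0, q=10.0, r=1.0, dt=1e-3, steps=5000):
    """Closed-loop Euler simulation of dx/dt = sin(x) + u, using the
    state-dependent coefficient a(x) = sin(x)/x (limit 1 at x = 0)."""
    x = x0
    for _ in range(steps):
        a = math.sin(x) / x if abs(x) > 1e-9 else 1.0
        u = -sdre_gain(a, 1.0, q, r) * x  # gain recomputed at each state
        x += dt * (math.sin(x) + u)
    return x

final = simulate(2.0)  # state driven to the origin by the SDRE feedback
```

The point of the sketch is the structure, not the numbers: the Riccati equation is re-solved at every state along the trajectory, which is what distinguishes SDRE from a fixed LQR gain.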
Effects of Planetary Boundary Layer Parameterizations on CWRF Regional Climate Simulation
NASA Astrophysics Data System (ADS)
Liu, S.; Liang, X.
2011-12-01
Planetary Boundary Layer (PBL) parameterizations incorporated in CWRF (Climate extension of the Weather Research and Forecasting model) are first evaluated by comparing simulated PBL heights with observations. Among the 10 evaluated PBL schemes, 2 (CAM, UW) are new in CWRF while the other 8 are original WRF schemes. MYJ, QNSE and UW determine the PBL heights based on turbulent kinetic energy (TKE) profiles, while the others (YSU, ACM, GFS, CAM, TEMF) derive them from bulk Richardson criteria. All TKE-based schemes (MYJ, MYNN, QNSE, UW, Boulac) substantially underestimate convective or residual PBL heights from noon toward evening, while the others (ACM, CAM, YSU) capture the observed diurnal cycle well; the GFS shows systematic overestimation. These differences among the schemes are representative over most areas of the simulation domain, suggesting systematic behaviors of the parameterizations. Lower PBL heights simulated by the QNSE and MYJ are consistent with their smaller Bowen ratios and heavier rainfall, while higher PBL tops by the GFS correspond to warmer surface temperatures. Effects of PBL parameterizations on CWRF regional climate simulation are then compared. The QNSE PBL scheme yields systematically heavier rainfall almost everywhere and throughout the year; this is identified with a much greater surface Bowen ratio (smaller sensible versus larger latent heating) and wetter soil moisture than other PBL schemes. Its predecessor, the MYJ scheme, shares the same deficiency to a lesser degree. For temperature, the performance of the QNSE and MYJ schemes remains poor, having substantially larger rms errors in all seasons. The GFS PBL scheme also produces large warm biases. Pronounced sensitivities to the PBL schemes are also found in winter and spring over most areas except the southern U.S.
(Southeast, Gulf States, NAM); excluding the outliers (QNSE, MYJ, GFS) that cause extreme biases of -6 to +3°C, the differences among the schemes are still visible (±2°C), where the CAM is generally more realistic. The QNSE, MYJ, GFS and BouLac PBL parameterizations are identified as clear outliers in overall performance in representing precipitation, surface air temperature or PBL height variations. Their poor performance may result from deficiencies in physical formulations, dependences on applicable scales, or troublesome numerical implementations, requiring future detailed investigation to isolate the actual cause.
Virtual reality simulation training in Otolaryngology.
Arora, Asit; Lau, Loretta Y M; Awad, Zaid; Darzi, Ara; Singh, Arvind; Tolley, Neil
2014-01-01
To conduct a systematic review of the validity data for the virtual reality surgical simulator platforms available in Otolaryngology. The Ovid and Embase databases were searched on July 13, 2013. Four hundred and nine abstracts were independently reviewed by 2 authors. Thirty-six articles that fulfilled the search criteria were retrieved and reviewed in full text. These articles were assessed for quantitative data on at least one aspect of face, content, construct or predictive validity. Papers were stratified by simulator and sub-specialty, and further classified by the validation method used. There were 21 articles reporting applications for temporal bone surgery (n = 12), endoscopic sinus surgery (n = 6) and myringotomy (n = 3). Four different simulator platforms were validated for temporal bone surgery and two for each of the other surgical applications. Face/content validation was the most frequent study type (9/21). Construct validation studies performed on temporal bone and endoscopic sinus surgery simulators showed that performance measures reliably discriminated between different experience levels. Simulation training improved cadaveric temporal bone dissection skills and operating room performance in sinus surgery. Several simulator platforms, particularly in temporal bone surgery and endoscopic sinus surgery, are worthy of incorporation into training programmes. Standardised metrics are necessary to guide curriculum development in Otolaryngology. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Sundh, Joakim; Juslin, Peter
2018-02-01
In this study, we explore how people integrate risks of assets in a simulated financial market into a judgment of the conjunctive risk that all assets decrease in value, both when assets are independent and when there is a systematic risk present affecting all assets. Simulations indicate that while mental calculation according to naïve application of probability theory is best when the assets are independent, additive or exemplar-based algorithms perform better when systematic risk is high. Considering that people tend to intuitively approach compound probability tasks using additive heuristics, we expected the participants to find it easiest to master tasks with high systematic risk - the most complex tasks from the standpoint of probability theory - while they should shift to probability theory or exemplar memory with independence between the assets. The results from 3 experiments confirm that participants shift between strategies depending on the task, starting off with the default of additive integration. In contrast to results in similar multiple cue judgment tasks, there is little evidence for use of exemplar memory. The additive heuristics also appear to be surprisingly context-sensitive, with limited generalization across formally very similar tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
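The contrast the study draws between naive multiplication of marginal probabilities and additive heuristics can be made concrete with a one-factor model of systematic risk. The parameters below (probability of a common shock, idiosyncratic decline probability, number of assets) are hypothetical illustrations, not values from the experiments:

```python
def decline_probs(p_sys, p_idio, n):
    """One-factor model: all n assets decline if a common (systematic)
    shock occurs with probability p_sys; otherwise each declines
    independently with probability p_idio. Returns the marginal decline
    probability, the true conjunctive probability that all n decline,
    and the naive product of marginals."""
    marginal = p_sys + (1 - p_sys) * p_idio
    conjunctive = p_sys + (1 - p_sys) * p_idio ** n
    naive = marginal ** n
    return marginal, conjunctive, naive

m, joint, naive = decline_probs(0.2, 0.1, 3)
# With systematic risk present, multiplying marginals badly
# underestimates the conjunctive risk, while anchoring additively on
# the marginal happens to land closer to the true value.
```

When p_sys = 0 (independent assets) the product rule is exact, which mirrors the paper's finding that probability theory is the better strategy under independence while additive integration fares better under high systematic risk.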
Survey of outcomes in a faculty development program on simulation pedagogy.
Roh, Young Sook; Kim, Mi Kang; Tangkawanich, Thitiarpha
2016-06-01
Although many nursing programs use simulation as a teaching-learning modality, there are few systematic approaches to help nursing educators learn this pedagogy. This study evaluates the effects of a simulation pedagogy nursing faculty development program on participants' learning perceptions using a retrospective pre-course and post-course design. Sixteen Thai participants completed a two-day nursing faculty development program on simulation pedagogy. Thirteen questionnaires were used in the final analysis. The participants' self-perceived learning about simulation teaching showed significant post-course improvement. On a five-point Likert scale, the composite mean attitude, subjective norm, and perceived behavioral control scores, as well as intention to use a simulator, showed a significant post-course increase. A faculty development program on simulation pedagogy induced favorable learning and attitudes. Further studies must test how faculty performance affects the cognitive, emotional, and social dimensions of learning in a simulation-based learning domain. © 2015 Wiley Publishing Asia Pty Ltd.
Human-simulation-based learning to prevent medication error: A systematic review.
Sarfati, Laura; Ranchon, Florence; Vantard, Nicolas; Schwiertz, Vérane; Larbre, Virginie; Parat, Stéphanie; Faudel, Amélie; Rioufol, Catherine
2018-01-31
In the past 2 decades, there has been an increasing interest in simulation-based learning programs to prevent medication error (ME). To improve knowledge, skills, and attitudes in prescribers, nurses, and pharmaceutical staff, these methods enable training without directly involving patients. However, best practices for simulation for healthcare providers are as yet undefined. By analysing the current state of experience in the field, the present review aims to assess whether human simulation in healthcare helps to reduce ME. A systematic review was conducted on Medline from 2000 to June 2015, associating the terms "Patient Simulation," "Medication Errors," and "Simulation Healthcare." Reports of technology-based simulation were excluded, to focus exclusively on human simulation in nontechnical skills learning. Twenty-one studies assessing simulation-based learning programs were selected, focusing on pharmacy, medicine or nursing students, or concerning programs aimed at reducing administration or preparation errors, managing crises, or learning communication skills for healthcare professionals. The studies varied in design, methodology, and assessment criteria. Few demonstrated that simulation was more effective than didactic learning in reducing ME. This review highlights a lack of long-term assessment and real-life extrapolation, with limited scenarios and participant samples. These various experiences, however, help in identifying the key elements required for an effective human simulation-based learning program for ME prevention: ie, scenario design, debriefing, and perception assessment. The performance of these programs depends on their ability to reflect reality and on professional guidance. Properly regulated simulation is a good way to train staff in events that happen only exceptionally, as well as in standard daily activities. 
By integrating human factors, simulation seems to be effective in preventing iatrogenic risk related to ME, if the program is well designed. © 2018 John Wiley & Sons, Ltd.
Current status of validation for robotic surgery simulators - a systematic review.
Abboudi, Hamid; Khan, Mohammed S; Aboumarzouk, Omar; Guru, Khurshid A; Challacombe, Ben; Dasgupta, Prokar; Ahmed, Kamran
2013-02-01
To analyse studies validating the effectiveness of robotic surgery simulators. The MEDLINE(®), EMBASE(®) and PsycINFO(®) databases were systematically searched until September 2011. References from retrieved articles were reviewed to broaden the search. The simulator name, training tasks, participant level, training duration and evaluation scoring were extracted from each study. We also extracted data on feasibility, validity, cost-effectiveness, reliability and educational impact. We identified 19 studies investigating simulation options in robotic surgery. There are five different robotic surgery simulation platforms available on the market. In all, 11 studies sought opinion and compared performance between two different groups: 'expert' and 'novice'. Experts ranged in experience from 21 to 2200 robotic cases. The novice groups consisted of participants with no prior experience on a robotic platform and were often medical students or junior doctors. The Mimic dV-Trainer(®), ProMIS(®), SimSurgery Educational Platform(®) (SEP) and Intuitive systems have shown face, content and construct validity. The Robotic Surgical Simulator™ system has only been face and content validated. All of the simulators except SEP have shown educational impact. The feasibility and cost-effectiveness of simulation systems were not evaluated in any trial. Virtual reality simulators were shown to be effective training tools for junior trainees. Simulation training holds the greatest potential to be used as an adjunct to traditional training methods to equip the next generation of robotic surgeons with the skills required to operate safely. However, current simulation models have only been validated in small studies. There is no evidence to suggest one type of simulator provides more effective training than any other. More research is needed to validate simulated environments further and investigate the effectiveness of animal and cadaveric training in robotic surgery. © 2012 BJU International.
Systematic simulations of modified gravity: chameleon models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brax, Philippe; Davis, Anne-Christine; Li, Baojiu
2013-04-01
In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.
Computational study of 3-D hot-spot initiation in shocked insensitive high-explosive
NASA Astrophysics Data System (ADS)
Najjar, F. M.; Howard, W. M.; Fried, L. E.; Manaa, M. R.; Nichols, A., III; Levesque, G.
2012-03-01
High-explosive (HE) material consists of large grains with micron-sized embedded impurities and pores. Under various mechanical/thermal insults, these pores collapse, generating high-temperature regions that lead to ignition. A hydrodynamic study has been performed to investigate the mechanisms of pore collapse and hot-spot initiation in TATB crystals, employing a multiphysics code, ALE3D, coupled to the chemistry module Cheetah. This computational study includes reactive dynamics. Two-dimensional, high-resolution, large-scale mesoscale simulations have been performed. The parameter space is systematically studied by considering various shock strengths, pore diameters and multiple pore configurations. Preliminary 3-D simulations are undertaken to quantify the 3-D dynamics.
Motion planning for an adaptive wing structure with macro-fiber composite actuators
NASA Astrophysics Data System (ADS)
Schröck, J.; Meurer, T.; Kugi, A.
2009-05-01
A systematic approach for flatness-based motion planning and feedforward control is presented for the transient shaping of a piezo-actuated rectangular cantilevered plate modeling an adaptive wing. In the first step, the consideration of an idealized infinite-dimensional input allows the state and input parametrizations to be determined in terms of a flat or basic output, which is used for a systematic motion planning approach. Subsequently, the obtained idealized input function is projected onto a finite number of suitably placed Macro-fiber Composite (MFC) patch actuators. The tracking performance of the proposed approach is evaluated in a simulation scenario.
Frank, Martin
2015-01-01
Complex carbohydrates usually have a large number of rotatable bonds and consequently a large number of theoretically possible conformations can be generated (combinatorial explosion). The application of systematic search methods for conformational analysis of carbohydrates is therefore limited to disaccharides and trisaccharides in a routine analysis. An alternative approach is to use Monte-Carlo methods or (high-temperature) molecular dynamics (MD) simulations to explore the conformational space of complex carbohydrates. This chapter describes how to use MD simulation data to perform a conformational analysis (conformational maps, hydrogen bonds) of oligosaccharides and how to build realistic 3D structures of large polysaccharides using Conformational Analysis Tools (CAT).
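The conformational maps described here are built from torsion angles (e.g. the glycosidic phi/psi angles) measured over MD trajectory frames. A minimal sketch of the underlying dihedral computation, in pure Python with a single set of synthetic coordinates rather than CAT or a real trajectory, is:

```python
import math

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle in degrees defined by four 3-D points,
    e.g. the atoms defining a glycosidic phi or psi torsion."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    b1, b2, b3 = sub(p1, p0), sub(p2, p1), sub(p3, p2)
    n1, n2 = cross(b1, b2), cross(b2, b3)          # plane normals
    nb2 = math.sqrt(dot(b2, b2))
    m1 = cross(n1, [c / nb2 for c in b2])          # frame for the sign
    return math.degrees(math.atan2(dot(m1, n2), dot(n1, n2)))

# A phi/psi conformational map is then a 2-D histogram of two such
# torsions over all trajectory frames; here a single synthetic frame.
angle = dihedral((0, 1, 0), (0, 0, 0), (1, 0, 0), (1, 0, 1))
```

For real oligosaccharide trajectories one would read frames with an MD analysis library and bin the (phi, psi) pairs; the sketch only shows the per-frame geometry step.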
Debriefing decreases mental workload in surgical crisis: A randomized controlled trial.
Boet, Sylvain; Sharma, Bharat; Pigford, Ashlee-Ann; Hladkowicz, Emily; Rittenhouse, Neil; Grantcharov, Teodor
2017-05-01
Mental workload is the amount of mental effort involved in performing a particular task. Crisis situations may increase mental workload, which can subsequently negatively impact operative performance and patient safety. This study aims to measure the impact of learning through debriefing and a systematic approach to crisis on trainees' mental workload in a simulated surgical crisis. Twenty junior surgical residents participated in a high-fidelity, simulated, postoperative crisis in a surgical ward environment (pretest). Participants were randomized to either an instructor-led debriefing, including performance feedback (intervention; n = 10) or no debriefing (control; n = 10). Subjects then immediately managed a second simulated crisis (post-test). Mental workload was assessed in real time during the scenarios using a previously validated, wireless, vibrotactile device. Mental workload was represented by subject response times to the vibrations, which were recorded and analyzed using the Mann-Whitney U test. Participants in the debriefing arm had a significantly reduced median response time in milliseconds (post-test minus pretest -695, quartile range -2,136 to -297) compared to participants in the control arm (42, -1,191 to 763), (between-arm difference P = .049). Debriefing after simulated surgical crisis situations may improve performance by decreasing trainee's mental workload during a subsequent simulated surgical crisis. Copyright © 2016 Elsevier Inc. All rights reserved.
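The between-arm comparison above relies on the Mann-Whitney U statistic, which can be computed directly from the two samples. The response-time changes below are hypothetical values for illustration only, not the study's raw data:

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic: the number of (x, y) pairs with
    x < y, counting ties as 0.5. Large U relative to len(xs)*len(ys)
    indicates xs tends to take smaller values than ys."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical post-test-minus-pretest response-time changes (ms):
# the debriefing arm should show larger decreases than the control arm.
debrief = [-2136, -900, -695, -400, -297]
control = [-1191, -50, 42, 300, 763]
u = mann_whitney_u(debrief, control)  # 21 of a possible 25 pairs
```

In practice one would use a statistics library that also supplies the p-value; the sketch only shows what the statistic itself measures.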
NASA Astrophysics Data System (ADS)
Statham, P.; Llovet, X.; Duncumb, P.
2012-03-01
We have assessed the reliability of different Monte Carlo simulation programmes using the two available Bastin-Heijligers databases of thin-film measurements by EPMA. The MC simulation programmes tested include Curgenven-Duncumb MSMC, NISTMonte, Casino and PENELOPE. Plots of the ratio of calculated to measured k-ratios ("kcalc/kmeas") against various parameters reveal error trends that are not apparent in simple error histograms. The results indicate that the MC programmes perform quite differently on the same dataset. However, they appear to show a similar pronounced trend with a "hockey stick" shape in the "kcalc/kmeas versus kmeas" plots. The most sophisticated programme PENELOPE gives the closest correspondence with experiment but still shows a tendency to underestimate experimental k-ratios by 10 % for films that are thin compared to the electron range. We have investigated potential causes for this systematic behaviour and extended the study to data not collected by Bastin and Heijligers.
Systematic Sensor Selection Strategy (S4) User Guide
NASA Technical Reports Server (NTRS)
Sowers, T. Shane
2012-01-01
This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinatorial optimization with a user-defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open-source turbofan engine simulation to demonstrate its application.
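The kind of combinatorial optimization described above can be illustrated with a toy exhaustive search over sensor subsets. The fault-signature table, sensor costs, and merit function below are invented for illustration and are not S4's actual models or merit function:

```python
from itertools import combinations

# Hypothetical fault-signature table: which engine faults each candidate
# sensor helps isolate, and a notional cost per sensor.
DETECTS = {
    "T1": {"fan", "compressor"},
    "T2": {"compressor", "burner"},
    "P1": {"fan", "turbine"},
    "P2": {"burner", "turbine"},
    "N1": {"fan"},
}
COST = {"T1": 3.0, "T2": 3.0, "P1": 2.0, "P2": 2.0, "N1": 1.0}
FAULTS = {"fan", "compressor", "burner", "turbine"}

def merit(suite):
    """Coverage-minus-cost merit: reward each covered fault, then
    subtract the total sensor cost, so incomplete suites score low."""
    covered = set().union(*(DETECTS[s] for s in suite))
    return 10.0 * len(covered & FAULTS) - sum(COST[s] for s in suite)

# Exhaustive search over all non-empty sensor subsets (fine at toy
# scale; real problems need the heuristic search S4 provides).
best = max((c for r in range(1, len(DETECTS) + 1)
            for c in combinations(sorted(DETECTS), r)), key=merit)
```

The optimum here is the cheapest pair that still covers every fault, which is exactly the trade-off between diagnostic coverage and conflicting objectives that the abstract describes.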
Simulation stimulates learning in a childbearing clinical course.
Simonelli, Mary Colleen; Paskausky, Anna L
2012-03-01
Preparing nursing students to become integral members of today's health care team presents educators with unique challenges in both classroom and clinical settings. This study examined the effectiveness of adding high-fidelity simulation to a childbearing clinical course. Our systematic research addressed the importance of evaluating the outcomes of using simulation on both knowledge acquisition and clinical competency. We found simulation to have a positive effect on not only student clinical performance, but also knowledge development in the undergraduate child-bearing clinical course. These outcome data will inform the curriculum changes needed as we strive to facilitate student proficiency in clinical concepts and skills and prepare the next generation of nurses entering our increasingly complex health care system. Copyright 2012, SLACK Incorporated.
NASA Technical Reports Server (NTRS)
Schulte, Peter Z.; Moore, James W.
2011-01-01
The Crew Exploration Vehicle Parachute Assembly System (CPAS) project conducts computer simulations to verify that flight performance requirements on parachute loads and terminal rate of descent are met. Design of Experiments (DoE) provides a systematic method for variation of simulation input parameters. When implemented and interpreted correctly, a DoE study of parachute simulation tools indicates values and combinations of parameters that may cause requirement limits to be violated. This paper describes one implementation of DoE that is currently being developed by CPAS, explains how DoE results can be interpreted, and presents the results of several preliminary studies. The potential uses of DoE to validate parachute simulation models and verify requirements are also explored.
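A full-factorial DoE sweep over simulation inputs, used to flag parameter combinations that violate a requirement limit, can be sketched as follows. The factor names, levels, load model, and limit are hypothetical stand-ins, not CPAS values or models:

```python
from itertools import product

# Illustrative full-factorial design over three simulation inputs.
factors = {
    "drag_coeff": [0.8, 1.0, 1.2],
    "deploy_velocity": [50.0, 60.0],   # m/s
    "air_density": [1.0, 1.2],         # kg/m^3
}
LOAD_LIMIT = 2000.0  # hypothetical requirement limit (N)

def peak_load(drag_coeff, deploy_velocity, air_density, area=1.0):
    """Toy stand-in for the parachute simulation: a simple
    dynamic-pressure load, 0.5 * rho * v^2 * Cd * A."""
    return 0.5 * air_density * deploy_velocity ** 2 * drag_coeff * area

# One simulation run per combination of factor levels.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
violations = [r for r in runs if peak_load(**r) > LOAD_LIMIT]
```

Interpreting the result follows the paper's logic: the violating runs identify the corner of the input space (here, high velocity with high drag or density) where the requirement margin disappears.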
Systematic Review of Patient-Specific Surgical Simulation: Toward Advancing Medical Education.
Ryu, Won Hyung A; Dharampal, Navjit; Mostafa, Ahmed E; Sharlin, Ehud; Kopp, Gail; Jacobs, William Bradley; Hurlbert, Robin John; Chan, Sonny; Sutherland, Garnette R
Simulation-based education has been shown to be an effective tool to teach foundational technical skills in various surgical specialties. However, most of the current simulations are limited to generic scenarios and do not allow continuation of the learning curve beyond basic technical skills to prepare for more advanced expertise, such as patient-specific surgical planning. The objective of this study was to evaluate the current medical literature with respect to the utilization and educational value of patient-specific simulations for surgical training. We performed a systematic review of the literature using Pubmed, Embase, and Scopus focusing on themes of simulation, patient-specific, surgical procedure, and education. The study included randomized controlled trials, cohort studies, and case-control studies published between 2005 and 2016. Two independent reviewers (W.H.R. and N.D.) conducted the study appraisal, data abstraction, and quality assessment of the studies. The search identified 13 studies that met the inclusion criteria; 7 studies employed computer simulations and 6 studies used 3-dimensional (3D) synthetic models. A number of surgical specialties evaluated patient-specific simulation, including neurosurgery, vascular surgery, orthopedic surgery, and interventional radiology. However, most studies were small in size and primarily aimed at feasibility assessments and early validation. Early evidence has shown feasibility and utility of patient-specific simulation for surgical education. With further development of this technology, simulation-based education may be able to support training of higher-level competencies outside the clinical setting to aid learners in their development of surgical skills. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
An open-source job management framework for parameter-space exploration: OACIS
NASA Astrophysics Data System (ADS)
Murase, Y.; Uchitane, T.; Ito, N.
2017-11-01
We present an open-source software framework for parameter-space exploration, named OACIS, which is useful for managing vast amounts of simulation jobs and results in a systematic way. Recent developments in high-performance computing have enabled comprehensive exploration of parameter spaces; in such cases, however, manual management of the workflow is practically impossible. OACIS was developed to reduce the cost of these repetitive tasks when conducting simulations, by automating job submission and data management. In this article, an overview of OACIS as well as a getting-started guide are presented.
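The bookkeeping that OACIS automates, keying each result by its exact parameter set so a sweep can later be extended without re-running finished jobs, can be illustrated with a minimal sketch. This shows the pattern only; it is not the actual OACIS API (OACIS itself is a separate framework with its own simulator registration and web interface):

```python
import itertools
import json

class ParamSpaceRunner:
    """Minimal illustration of parameter-space job management:
    results are stored under a canonical key derived from the full
    parameter set, so repeated or extended sweeps only run new points."""

    def __init__(self, simulator):
        self.simulator = simulator  # callable taking keyword parameters
        self.results = {}           # canonical parameter JSON -> result

    def submit_sweep(self, **space):
        """Run the simulator on the Cartesian product of the given
        parameter lists, skipping parameter sets already completed."""
        keys = sorted(space)
        for values in itertools.product(*(space[k] for k in keys)):
            params = dict(zip(keys, values))
            tag = json.dumps(params, sort_keys=True)
            if tag not in self.results:   # skip finished jobs
                self.results[tag] = self.simulator(**params)
        return self.results

runner = ParamSpaceRunner(lambda a, b: a * b)  # stand-in "simulation"
runner.submit_sweep(a=[1, 2], b=[10, 20])
runner.submit_sweep(a=[1, 2, 3], b=[10, 20])   # only the new points run
```

In a real deployment the simulator call would submit a batch job and the registry would live in a database, which is exactly the layer OACIS supplies.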
Towards Systematic Benchmarking of Climate Model Performance
NASA Astrophysics Data System (ADS)
Gleckler, P. J.
2014-12-01
The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (the CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research.
Making the results from routine performance tests readily accessible will help advance a more transparent model evaluation process.
NASA Astrophysics Data System (ADS)
Tesemma, Z. K.; Wei, Y.; Peel, M. C.; Western, A. W.
2014-09-01
This study assessed the effect of using observed monthly leaf area index (LAI) on hydrologic model performance and the simulation of streamflow during drought using the variable infiltration capacity (VIC) hydrological model in the Goulburn-Broken catchment of Australia, which has heterogeneous vegetation, soil and climate zones. VIC was calibrated with both observed monthly LAI and long-term mean monthly LAI, which were derived from the Global Land Surface Satellite (GLASS) observed monthly LAI dataset covering the period from 1982 to 2012. The model performance under wet and dry climates for the two different LAI inputs was assessed using three criteria: the classical Nash-Sutcliffe efficiency, the logarithm-transformed flow Nash-Sutcliffe efficiency and the percentage bias. Finally, the percentage deviation of the simulated monthly streamflow using the observed monthly LAI from the simulated streamflow using long-term mean monthly LAI was computed. The VIC model predicted monthly streamflow in the selected sub-catchments with model efficiencies ranging from 61.5% to 95.9% during calibration (1982-1997) and from 59% to 92.4% during validation (1998-2012). Our results suggest systematic improvements of 4% to 25% in the Nash-Sutcliffe efficiency in pasture-dominated catchments when the VIC model was calibrated with the observed monthly LAI instead of the long-term mean monthly LAI. There was limited systematic improvement in tree-dominated catchments. The results also suggest that the model overestimation or underestimation of streamflow during wet and dry periods can be reduced to some extent by including the year-to-year variability of LAI in the model, thus reflecting the responses of vegetation to fluctuations in climate and other factors. Hence, the year-to-year variability in LAI should not be neglected; rather, it should be included in model calibration as well as in the simulation of the monthly water balance.
NASA Astrophysics Data System (ADS)
Tesemma, Z. K.; Wei, Y.; Peel, M. C.; Western, A. W.
2015-09-01
This study assessed the effect of using observed monthly leaf area index (LAI) on hydrological model performance and the simulation of runoff using the Variable Infiltration Capacity (VIC) hydrological model in the Goulburn-Broken catchment of Australia, which has heterogeneous vegetation, soil and climate zones. VIC was calibrated with both observed monthly LAI and long-term mean monthly LAI, which were derived from the Global Land Surface Satellite (GLASS) leaf area index dataset covering the period from 1982 to 2012. The model performance under wet and dry climates for the two different LAI inputs was assessed using three criteria: the classical Nash-Sutcliffe efficiency, the logarithm-transformed flow Nash-Sutcliffe efficiency and the percentage bias. Finally, the deviation of the simulated monthly runoff using the observed monthly LAI from the simulated runoff using long-term mean monthly LAI was computed. The VIC model predicted monthly runoff in the selected sub-catchments with model efficiencies ranging from 61.5% to 95.9% during calibration (1982-1997) and from 59% to 92.4% during validation (1998-2012). Our results suggest systematic improvements, from 4% to 25% in Nash-Sutcliffe efficiency, in sparsely forested sub-catchments when the VIC model was calibrated with observed monthly LAI instead of long-term mean monthly LAI. There was limited systematic improvement in tree-dominated sub-catchments. The results also suggest that the model overestimation or underestimation of runoff during wet and dry periods can be reduced to 25 mm and 35 mm, respectively, by including the year-to-year variability of LAI in the model, thus reflecting the responses of vegetation to fluctuations in climate and other factors. Hence, the year-to-year variability in LAI should not be neglected; rather, it should be included in model calibration as well as in the simulation of the monthly water balance.
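The three criteria named in these two hydrology abstracts have standard textbook definitions; a minimal sketch, with naming and sign conventions assumed rather than taken from the papers, is:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; below 0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def log_nse(obs, sim, eps=1e-6):
    """NSE of log-transformed flows; the log emphasizes low-flow (drought) periods."""
    return nse(np.log(np.asarray(obs, float) + eps),
               np.log(np.asarray(sim, float) + eps))

def pbias(obs, sim):
    """Percentage bias; positive values mean the model overestimates on average."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)
```

PBIAS sign conventions vary by author; some define overestimation as negative, so the convention here is an assumption.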
NASA Astrophysics Data System (ADS)
Martinot, Zachary; Kohn, Saul; Aguirre, James; Washington, Immanuel; HERA Collaboration, PAPER Collaboration
2018-01-01
The HERA and PAPER experiments, which aim to detect the power spectrum of the 21cm brightness temperature during the Epoch of Reionization (EoR), are planned with the expectation that foregrounds will be separated from the cosmological signal by a clearly demarcated boundary in Fourier space. Polarized foregrounds with complex frequency structure present a potential systematic effect: their mixing into the unpolarized signal by the polarized response of an instrument's beam may be confused with the unpolarized EoR signal. There are two factors we believe will mitigate this systematic to the point that it will not impede the detection of the cosmological power spectrum in the foreground avoidance scheme. First, the variation in ionospheric plasma density observed between different days attenuates the effective polarized power on the sky when visibilities are averaged coherently over many days. Second, the absolute level of polarization leakage can be suppressed through careful design of the instrument. We have performed detailed visibility simulations to investigate both effects, and present the results of these simulations for both the HERA and PAPER instruments.
Exploring the stability of ligand binding modes to proteins by molecular dynamics simulations.
Liu, Kai; Watanabe, Etsurou; Kokubo, Hironori
2017-02-01
Binding mode prediction is of great importance to structure-based drug design. Discriminating among the various binding poses of a ligand generated by docking is a great challenge not only for docking score functions but also for the relatively expensive free energy calculation methods. Here we systematically analyzed the stability of various ligand poses under molecular dynamics (MD) simulation. First, a data set of 120 complexes was built based on the typical physicochemical properties of drug-like ligands. Three potential binding poses (one correct pose and two decoys) were selected for each ligand from self-docking, in addition to the experimental pose. Then, five independent MD simulations for each pose were performed with different initial velocities for statistical analysis. Finally, the stabilities of the ligand poses under MD were evaluated and compared with the native pose from the crystal structure. We found that about 94% of the native poses remained stable during the simulations, which suggests that MD simulations are accurate enough to properly judge most experimental binding poses as stable. Interestingly, incorrect decoy poses were maintained much less often, and 38-44% of decoys could be excluded simply by performing equilibrium MD simulations, though 56-62% of decoys remained stable. The computationally heavy binding free energy calculation can then be performed only for these surviving poses.
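The paper's exact stability criterion is not given in the abstract. A hypothetical minimal version of such a check, judging a pose stable when its coordinate RMSD from the starting frame never exceeds a cutoff, might look like this; the 2.0 Å cutoff and the omission of structural superposition (alignment) are assumptions for illustration only.

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square deviation between two (n_atoms, 3) coordinate arrays.
    Note: no superposition is performed; real analyses align frames first."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

def pose_is_stable(trajectory, cutoff=2.0):
    """Call a pose stable if every frame stays within `cutoff` (Å) of frame 0."""
    ref = trajectory[0]
    return all(rmsd(frame, ref) <= cutoff for frame in trajectory)
```

In practice one would run this per-replica over the five independent simulations and require agreement across replicas, as the statistical design in the abstract suggests.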
Instrument performance and simulation verification of the POLAR detector
NASA Astrophysics Data System (ADS)
Kole, M.; Li, Z. H.; Produit, N.; Tymieniecka, T.; Zhang, J.; Zwolinska, A.; Bao, T. W.; Bernasconi, T.; Cadoux, F.; Feng, M. Z.; Gauvin, N.; Hajdas, W.; Kong, S. W.; Li, H. C.; Li, L.; Liu, X.; Marcinkowski, R.; Orsi, S.; Pohl, M.; Rybka, D.; Sun, J. C.; Song, L. M.; Szabelski, J.; Wang, R. J.; Wang, Y. H.; Wen, X.; Wu, B. B.; Wu, X.; Xiao, H. L.; Xiong, S. L.; Zhang, L.; Zhang, L. Y.; Zhang, S. N.; Zhang, X. F.; Zhang, Y. J.; Zhao, Y.
2017-11-01
POLAR is a new satellite-borne detector aiming to measure the polarization of an unprecedented number of Gamma-Ray Bursts in the 50-500 keV energy range. The instrument, launched on-board the Tiangong-2 Chinese space laboratory on 15 September 2016, is designed to measure the polarization of the hard X-ray flux by measuring the distribution of the azimuthal scattering angles of the incoming photons. A detailed understanding of the polarimeter, and specifically of the systematic effects induced by the instrument's non-uniformity, is required for this purpose. In order to study the instrument's response to polarization, POLAR underwent a beam test at the European Synchrotron Radiation Facility in France. In this paper, both the beam test and the instrument performance will be described. This is followed by an overview of the Monte Carlo simulation tools developed for the instrument. Finally, a comparison of the measured and simulated instrument performance will be provided and the instrument response to polarization will be presented.
NASA Astrophysics Data System (ADS)
Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.
2017-09-01
The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS data are assimilated into W3RA over the whole of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which reduce the model's groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
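Systematic Resampling, one of the two particle-filter resampling schemes tested above, has a well-known standard form: a single uniform draw places N evenly spaced pointers on the cumulative weight distribution, which gives lower resampling variance than multinomial resampling. A minimal sketch (not the authors' implementation):

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Return particle indices drawn by systematic resampling.
    One uniform draw u positions N evenly spaced pointers (u+i)/N on the
    cumulative weight distribution; each pointer selects one particle."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, float)
    w = w / w.sum()                      # normalize weights
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), positions)
```

With uniform weights each particle survives exactly once; a single dominant weight yields n copies of that particle, which is the degeneracy-fighting behavior a particle filter relies on.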
NASA Astrophysics Data System (ADS)
Heene, V.; Buchholz, S.; Kossmann, M.
2016-12-01
Numerical studies of thermal conditions in cities based on model simulations of idealized urban domains are carried out to investigate how changes in the characteristics of urban areas influence street-level air temperatures. The simulated modifications of the urban characteristics represent possible adaptation measures for heat reduction in cities, which are commonly used in urban planning. Model simulations are performed with the thermodynamic version of the 3-dimensional micro-scale urban climate model MUKLIMO_3. The simulated idealized urban areas are designed in a simplistic way, i.e. as homogeneous square cities of one settlement type, without orography and centered in the model domain. To assess the impact of different adaptation measures, the characteristics of the urban areas have been systematically modified regarding building height, albedo of building roofs and impervious surfaces, fraction of impervious surfaces between buildings, and percentage of green roofs. To assess the impact of green and blue infrastructure in cities, different configurations of parks and lakes have been investigated, e.g. varying their size and distribution within the city. The experiments are performed for different combinations of typical German settlement types and surrounding rural types under the conditions of a typical summer day in July. The adaptation measures implemented in the experiments show different impacts for different settlement types, mainly due to differences in building density, building height and impervious surface fraction. Parks and lakes implemented as adaptation measures show strong potential to reduce daytime air temperature, with cooling effects on their built-up surroundings. At night, lakes generate negative and positive effects on air temperature, depending on water temperature. In general, all adaptation measures implemented in the experiments reveal different impacts on day and night air temperatures.
Implementation of a WRF-CMAQ Air Quality Modeling System in Bogotá, Colombia
NASA Astrophysics Data System (ADS)
Nedbor-Gross, R.; Henderson, B. H.; Pachon, J. E.; Davis, J. R.; Baublitz, C. B.; Rincón, A.
2014-12-01
Due to continuous economic growth, Bogotá, Colombia, has experienced air pollution issues in recent years. The local environmental authority has implemented several strategies to curb air pollution, which have resulted in a decrease in PM10 concentrations since 2010. However, more activities are necessary in order to meet international air quality standards in the city. The University of Florida Air Quality and Climate group is collaborating with the Universidad de La Salle to prioritize regulatory strategies for Bogotá using air pollution simulations. To simulate pollution, we developed a modeling platform that combines the Weather Research and Forecasting model (WRF), local emissions, and the Community Multiscale Air Quality model (CMAQ). This platform is the first of its kind to be implemented in the megacity of Bogotá. The presentation will discuss the development and evaluation of the air quality modeling system, highlight initial results characterizing photochemical conditions in Bogotá, and characterize air pollution under proposed regulatory strategies. The WRF model has been configured and applied to Bogotá, which lies in a tropical climate with complex mountainous topography. Developing the configuration included incorporation of local topography and land-use data, a physics sensitivity analysis, review, and systematic evaluation. The performance threshold, however, was set based on a synthesis of model performance under less mountainous conditions. We will evaluate the impact that differences in autocorrelation contribute to the non-ideal performance. Air pollution predictions are currently under way. CMAQ has been configured with WRF meteorology, global boundary conditions from GEOS-Chem, and a locally produced emission inventory. Preliminary results from simulations show promising performance of CMAQ in Bogotá.
Anticipated results include a systematic performance evaluation of ozone and PM10, characterization of photochemical sensitivity, and air quality predictions under proposed regulatory scenarios.
NASA Astrophysics Data System (ADS)
Benavidez, P. G.; Durda, D. D.; Enke, B.; Campo Bagatin, A.; Richardson, D. C.; Asphaug, E.; Bottke, W. F.
2018-04-01
In this work we extend the systematic investigation of impact outcomes of 100-km-diameter targets started by Durda et al. (2007) and Benavidez et al. (2012) to targets of D = 400 km using the same range of impact conditions and two internal structures: monolithic and rubble-pile. We performed a new set of simulations in the gravity regime for targets of 400 km in diameter using these same internal structures. This provides a large set of 600 simulations performed in a systematic way that permits a thorough analysis of the impact outcomes and evaluation of the main features of the size frequency distribution due mostly to self-gravity. In addition, we use the impact outcomes to attempt to constrain the impact conditions of the asteroid belt where known asteroid families with a large expected parent body were formed. We have found fairly good matches for the Eunomia and Hygiea families. In addition, we identified a potential acceptable match to the Vesta family from a monolithic parent body of 468 km. The impact conditions of the best matches suggest that these families were formed in a dynamically excited belt. The results also suggest that the parent body of the Eunomia family could be a monolithic body of 382 km diameter, while the one for Hygiea could have a rubble-pile internal structure of 416 km diameter.
Systematic investigations of low energy Ar ion beam sputtering of Si and Ag
NASA Astrophysics Data System (ADS)
Feder, R.; Frost, F.; Neumann, H.; Bundesmann, C.; Rauschenbach, B.
2013-12-01
Ion beam sputter deposition (IBD) has some intrinsic features that influence the properties of the growing film, because the ion properties and geometrical process conditions generate different energy and spatial distributions of the sputtered and scattered particles. Even though IBD has been used for decades, its full capabilities have not yet been investigated systematically and exploited specifically. Therefore, a systematic and comprehensive analysis of the correlation between the properties of the ion beam, the generated secondary particles and backscattered ions, and the deposited films is needed. A vacuum deposition chamber has been set up which allows ion beam sputtering of different targets under variation of geometrical parameters (ion incidence angle, position of substrates and analytics with respect to the target) and of ion beam parameters (ion species, ion energy) to perform this analysis. A set of samples was prepared and characterized with respect to selected film properties, such as thickness and surface topography. The experiments indicate a systematic influence of the deposition parameters on the film properties, as hypothesized. Motivated by this influence, the energy distribution of secondary particles was measured using an energy-selective mass spectrometer. Among other findings, the experiments revealed a high-energy maximum for backscattered primary ions, which shifts to higher energies with increasing emission angle. Experimental data are compared with Monte Carlo simulations performed with the well-known Transport and Range of Ions in Matter, Sputtering version (TRIM.SP) code [J.P. Biersack, W. Eckstein, Appl. Phys. A: Mater. Sci. Process. 34 (1984) 73]. The thicknesses of the films are in good agreement with those calculated from simulated particle fluxes.
For the positions of the high-energy maxima in the energy distribution of the backscattered primary ions, a deviation between simulated and measured data was found, most likely originating from a higher energy loss under experimental conditions than is accounted for in the simulation.
Surgeons' and surgical trainees' acute stress in real operations or simulation: A systematic review.
Georgiou, Konstantinos; Larentzakis, Andreas; Papavassiliou, Athanasios G
2017-12-01
Acute stress in surgery is ubiquitous and has an immediate impact on surgical performance and patient safety. Surgeons react with several coping strategies; however, they recognise the necessity of formal stress management training. Thus, stress assessment is a direct need. Surgical simulation is a validated, standardised training milieu designed to replicate real-life situations. It replicates stress, prevents biases, and provides objective metrics. The complexity of stress mechanisms makes stress measurement difficult to quantify and interpret. This systematic review aims to identify studies that have used acute stress estimation measurements in surgeons or surgical trainees during real operations or surgical simulation, and to collectively present the rationale of these tools, with special emphasis on salivary markers. A search strategy was implemented to retrieve relevant articles from the MEDLINE and SCOPUS databases. The 738 articles retrieved were reviewed for further evaluation according to the predetermined inclusion/exclusion criteria. Thirty-three studies were included in this systematic review. The methods for acute stress assessment varied greatly among studies, with non-invasive techniques being the most commonly used. Subjective and objective tests for surgeons' acute stress assessment are presented. There is a broad spectrum of acute mental stress assessment tools in the surgical field and simulation, and salivary biomarkers have recently gained popularity. There is a need to maintain a consistent methodology in future research, towards a deeper understanding of acute stress in the surgical field. Copyright © 2017 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.
Simulation models in population breast cancer screening: A systematic review.
Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H
2015-08-01
The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed, incorporating model type; input parameters; modeling approach; transparency of input data sources/assumptions; sensitivity analyses and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared to estimates from meta-analyses of randomized controlled trials (RCTs) and to acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except in one model), with internal and cross validation of the resulting models, but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR due to screening (11-24%) as compared with the 10% MR (95% CI: -2 to 21%) from meta-analyses of RCTs. Only recently have potential harms due to regular breast cancer screening been reported. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making, and the critical analysis revealed a high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models which use systematically derived evidence for input data, to allow for more critical evaluation of breast cancer screening. Copyright © 2015 Elsevier Ltd. All rights reserved.
Piezoresistive Cantilever Performance—Part II: Optimization
Park, Sung-Jin; Doll, Joseph C.; Rastegar, Ali J.; Pruitt, Beth L.
2010-01-01
Piezoresistive silicon cantilevers fabricated by ion implantation are frequently used for force, displacement, and chemical sensors due to their low cost and electronic readout. However, the design of piezoresistive cantilevers is not a straightforward problem due to coupling between the design parameters, constraints, process conditions, and performance. We systematically analyzed the effect of design and process parameters on force resolution and then developed an optimization approach to improve force resolution while satisfying various design constraints, using simulation results. The combined simulation and optimization approach is, in principle, extensible to doping methods beyond ion implantation. The optimization results were validated by fabricating cantilevers with the optimized conditions and characterizing their performance. The measurement results demonstrate that the analytical model accurately predicts force and displacement resolution, as well as the sensitivity-noise tradeoff, in optimal cantilever performance. We also compared our optimization technique with existing models and demonstrated an eight-fold improvement in force resolution over simplified models. PMID:20333323
A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, Laura; Jakob, Christian; Cheung, K.
2013-06-27
Single column models (SCM) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale data prescribed. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCM and 2 cloud-resolving models (CRM). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCM and CRM. Systematic differences are also apparent in the ensemble mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation, identifying large differences between CRM and SCM. This study highlights that additional information can be gained by performing ensemble simulations, enhancing the information derived from models beyond the more traditional single best-estimate simulation.
NASA Astrophysics Data System (ADS)
WANG, J.; Kim, J.
2014-12-01
In this study, the sensitivity of pollutant dispersion to the turbulent Schmidt number (Sct) was investigated in a street canyon using a computational fluid dynamics (CFD) model. For this, numerical simulations with systematically varied Sct were performed and the CFD model results were validated against wind-tunnel measurement data. The results showed that the root mean square error (RMSE) was quite dependent on Sct, and the dispersion patterns of a non-reactive scalar pollutant differed considerably among the simulations with different Sct. The RMSE was lowest for Sct = 0.35, and the apparent dispersion pattern was most similar to the wind-tunnel data for Sct = 0.35. In addition, numerical simulations using a spatially weighted Sct were performed in order to best reproduce the wind-tunnel data. The detailed method and procedure for finding the best reproduction will be presented.
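The selection step described above, scoring each simulation against the wind-tunnel reference by RMSE and keeping the Sct value with the lowest error, can be sketched as follows; the `simulate` callable is a stand-in for the CFD model, not the model used in the study.

```python
import numpy as np

def rmse(ref, sim):
    """Root-mean-square error between reference and simulated concentrations."""
    ref, sim = np.asarray(ref, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((ref - sim) ** 2)))

def best_sct(reference, simulate, sct_values):
    """Run simulate(sct) for each candidate Sct and return the best one
    together with the full error table for inspection."""
    errors = {sct: rmse(reference, simulate(sct)) for sct in sct_values}
    return min(errors, key=errors.get), errors
```

The same pattern extends to the spatially weighted Sct mentioned in the abstract by letting `simulate` accept a field of Sct values rather than a scalar.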
Bates, Nathaniel A.; Myer, Gregory D.; Shearn, Jason T.; Hewett, Timothy E.
2014-01-01
Investigators use in vitro joint simulations to invasively study the biomechanical behaviors of the anterior cruciate ligament. The aims of these simulations are to replicate physiologic conditions, but multiple mechanisms can be used to drive in vitro motions, which may influence biomechanical outcomes. The objective of this review was to examine, summarize, and compare biomechanical evidence related to anterior cruciate ligament function from in vitro simulations of knee motion. A systematic review was conducted (2004 to 2013) in Scopus, PubMed/Medline, and SPORTDiscus to identify peer-reviewed studies that reported kinematic and kinetic outcomes from in vitro simulations of physiologic or clinical tasks at the knee. Inclusion criteria for relevant studies were articles published in English that reported on whole-ligament anterior cruciate ligament mechanics during the in vitro simulation of physiologic or clinical motions on cadaveric knees that were unaltered outside of the anterior-cruciate-ligament-intact, -deficient, and -reconstructed conditions. A meta-analysis was performed to synthesize biomechanical differences between the anterior-cruciate-ligament-intact and -reconstructed conditions. Seventy-seven studies met our inclusion/exclusion criteria and were reviewed. Combined joint rotations have the greatest impact on anterior cruciate ligament loads, but the magnitude by which individual kinematic degrees of freedom contribute to ligament loading during in vitro simulations is technique-dependent. Biomechanical data collected in prospective, longitudinal studies correspond better with robotic-manipulator simulations than with mechanical-impact simulations. Robotic simulation indicated that the ability to restore intact anterior cruciate ligament mechanics with anterior cruciate ligament reconstruction was dependent on the loading condition and the degree of freedom examined. PMID:25547070
Evaluation of the flame propagation within an SI engine using flame imaging and LES
NASA Astrophysics Data System (ADS)
He, Chao; Kuenne, Guido; Yildar, Esra; van Oijen, Jeroen; di Mare, Francesca; Sadiki, Amsini; Ding, Carl-Philipp; Baum, Elias; Peterson, Brian; Böhm, Benjamin; Janicka, Johannes
2017-11-01
This work presents experiments and simulations of the fired operation of a spark-ignition engine with port fuel injection. The test rig considered is an optically accessible single-cylinder engine specifically designed at TU Darmstadt for the detailed investigation of in-cylinder processes and model validation. The engine was operated under lean conditions using iso-octane as a substitute for gasoline. Experiments have been conducted to provide a sound database of the combustion process. A planar flame imaging technique has been applied within the swirl and tumble planes to provide statistical information on the combustion process, complementing a pressure-based comparison between simulation and experiments. These data are then analysed and used to assess the large eddy simulation (LES) performed within this work. For the simulation, the engine code KIVA has been extended with the dynamically thickened flame model combined with chemistry reduction by means of pressure-dependent tabulation. Sixty cycles have been simulated to perform a statistical evaluation. Based on a detailed comparison with the experimental data, a systematic study has been conducted to obtain insight into the most crucial modelling uncertainties.
Virtual reality simulation training of mastoidectomy - studies on novice performance.
Andersen, Steven Arild Wuyts
2016-08-01
Virtual reality (VR) simulation-based training is increasingly used in surgical technical skills training, including in temporal bone surgery. The potential of VR simulation in enabling high-quality surgical training is great: it allows high-stakes and complex procedures such as mastoidectomy to be trained repeatedly, independent of patients and surgical tutors, outside traditional learning environments such as the OR or the temporal bone lab, and with fewer of the constraints of traditional training. This thesis aims to increase the evidence base for VR simulation training of mastoidectomy. By studying the final-product performances of novices, it investigates the transfer of skills to the current gold-standard training modality of cadaveric dissection, the effect of different practice conditions and simulator-integrated tutoring on performance and retention of skills, and the role of directed, self-regulated learning. Technical skills in mastoidectomy were transferable from the VR simulation environment to cadaveric dissection, with significant improvement in performance after directed, self-regulated training in the VR temporal bone simulator. Distributed practice led to a better learning outcome and more consolidated skills than massed practice, and also resulted in more consistent performance after three months of non-practice. Simulator-integrated tutoring accelerated the initial learning curve but also caused over-reliance on tutoring, resulting in a drop in performance when the simulator-integrated tutor function was discontinued. The learning curves were highly individual but often plateaued early and at an inadequate level, which related to issues concerning both the procedure and the VR simulator, over-reliance on the tutor function, and poor self-assessment skills.
Future simulator-integrated automated assessment could potentially resolve some of these issues and provide trainees with both feedback during the procedure and immediate assessment following each procedure. Standard setting by establishing a proficiency level that can be used for mastery learning with deliberate practice could also further sophisticate directed, self-regulated learning in VR simulation-based training. VR simulation-based training should be embedded in a systematic and competency-based training curriculum for high-quality surgical skills training, ultimately leading to improved safety and patient care.
All-optical nanomechanical heat engine.
Dechant, Andreas; Kiesel, Nikolai; Lutz, Eric
2015-05-08
We propose and theoretically investigate a nanomechanical heat engine. We show how a levitated nanoparticle in an optical trap inside a cavity can be used to realize a Stirling cycle in the underdamped regime. The all-optical approach enables fast and flexible control of all thermodynamical parameters and the efficient optimization of the performance of the engine. We develop a systematic optimization procedure to determine optimal driving protocols. Further, we perform numerical simulations with realistic parameters and evaluate the maximum power and the corresponding efficiency.
All-Optical Nanomechanical Heat Engine
NASA Astrophysics Data System (ADS)
Dechant, Andreas; Kiesel, Nikolai; Lutz, Eric
2015-05-01
We propose and theoretically investigate a nanomechanical heat engine. We show how a levitated nanoparticle in an optical trap inside a cavity can be used to realize a Stirling cycle in the underdamped regime. The all-optical approach enables fast and flexible control of all thermodynamical parameters and the efficient optimization of the performance of the engine. We develop a systematic optimization procedure to determine optimal driving protocols. Further, we perform numerical simulations with realistic parameters and evaluate the maximum power and the corresponding efficiency.
System calibration method for Fourier ptychographic microscopy
NASA Astrophysics Data System (ADS)
Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli
2017-09-01
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique offering both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore difficult to distinguish the dominant error source from these degraded reconstructions without prior knowledge. In addition, in practice the systematic error is generally a mixture of several error sources that cannot be separated because they mutually constrain and convert into one another. To this end, we report a system calibration procedure, termed SC-FPM, that calibrates the mixed systematic errors simultaneously from an overall perspective. It is based on the simulated annealing algorithm, an LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, and involves the evaluation of an error metric at each iteration step followed by the re-estimation of accurate parameters. The performance achieved in both simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic.
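The iterative scheme described (evaluate an error metric at each step, then re-estimate parameters) follows the generic simulated-annealing pattern. A minimal sketch, with a toy one-parameter error metric standing in for the FPM reconstruction error; the metric, step size and cooling schedule here are illustrative, not the SC-FPM implementation:

```python
import math
import random

def simulated_annealing(error_metric, x0, step=0.5, t0=1.0, cooling=0.95, iters=500):
    """Minimize error_metric starting from x0, with geometric cooling."""
    random.seed(0)
    x, e = x0, error_metric(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)   # propose a perturbed parameter
        ec = error_metric(cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if ec < e or random.random() < math.exp((e - ec) / t):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling                              # anneal the temperature
    return best_x, best_e

# toy error metric: quadratic bowl with its minimum at 2.0
x_opt, e_opt = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=-3.0)
```

With geometric cooling the search is exploratory early and greedy late, which is what allows a mixed, non-convex error landscape to be searched without separating the error sources first.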
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R.
2014-01-01
Background Over the past two decades finite element (FE) analysis has become a popular tool for researchers seeking to simulate the biomechanics of the healthy and diabetic foot. The primary aims of these simulations have been to improve our understanding of the foot’s complicated mechanical loading in health and disease and to inform interventions designed to prevent plantar ulceration, a major complication of diabetes. This article provides a systematic review and summary of the findings from FE analysis-based computational simulations of the diabetic foot. Methods A systematic literature search was carried out and 31 relevant articles were identified, covering three primary themes: methodological aspects relevant to modelling the diabetic foot; investigations of the pathomechanics of the diabetic foot; and simulation-based design of interventions to reduce ulceration risk. Results Methodological studies illustrated appropriate use of FE analysis for simulation of foot mechanics, incorporating nonlinear tissue mechanics, contact and rigid body movements. FE studies of pathomechanics have provided estimates of internal soft tissue stresses, and suggest that such stresses may often be considerably larger than those measured at the plantar surface and are proportionally greater in the diabetic foot compared to controls. FE analysis has allowed evaluation of insole performance and the development of new insole designs, footwear and corrective surgery as effective intervention strategies. The technique also presents the opportunity to simulate the effect of changes associated with the diabetic foot on non-mechanical factors such as blood supply to local tissues.
Discussion While significant advancement in diabetic foot research has been made possible by the use of FE analysis, translational utility of this powerful tool for routine clinical care at the patient level requires adoption of cost-effective (both in terms of labour and computation) and reliable approaches with clear clinical validity for decision making. PMID:25290098
Creating A Data Base For Design Of An Impeller
NASA Technical Reports Server (NTRS)
Prueger, George H.; Chen, Wei-Chung
1993-01-01
Report describes use of Taguchi method of parametric design to create data base facilitating optimization of design of impeller in centrifugal pump. Data base enables systematic design analysis covering all significant design parameters. Reduces time and cost of parametric optimization of design: for particular impeller considered, one can cover 4,374 designs by computational simulations of performance for only 18 cases.
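The 18-case figure matches a standard Taguchi L18 orthogonal array, which accommodates one two-level factor and seven three-level factors, giving a full-factorial space of 2·3⁷ = 4,374 designs. A quick arithmetic check; the factor split is inferred from the report's totals, not stated there:

```python
# Full-factorial design space: one 2-level parameter and seven 3-level
# parameters (a split inferred from the report's totals, not stated there)
full_factorial = 2 ** 1 * 3 ** 7   # 4374 candidate impeller designs

# A Taguchi L18 orthogonal array samples this space in only 18 runs
l18_runs = 18
designs_per_run = full_factorial // l18_runs   # each simulated case stands in for 243 designs
```

This is the essence of the time and cost reduction the report claims: the orthogonal array lets main effects of all significant design parameters be estimated from 18 performance simulations instead of thousands.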
Horizon sensors attitude errors simulation for the Brazilian Remote Sensing Satellite
NASA Astrophysics Data System (ADS)
Vicente de Brum, Antonio Gil; Ricci, Mario Cesar
Remote sensing, meteorological and other types of satellites require increasingly accurate Earth-referenced positioning. From past experience it is well known that the thermal horizon in the 15 micrometer band provides the conditions for determining the local vertical at any time. This detection is performed by horizon sensors, accurate instruments for Earth-referenced attitude sensing and control whose performance is limited by systematic and random errors amounting to about 0.5 deg. Using the computer programs OBLATE, SEASON, ELECTRO and MISALIGN, developed at INPE to simulate four distinct facets of conical scanning horizon sensors, attitude errors are obtained for the Brazilian Remote Sensing Satellite (the first one, SSR-1, is scheduled to fly in 1996). These errors are due to the oblate shape of the Earth, seasonal and latitudinal variations of the 15 micrometer infrared radiation, electronic processing time delay, and misalignment of the sensor axis. The sensor-related attitude errors are thus properly quantified in this work and will, together with other systematic errors (for instance, ambient temperature variation), take part in the pre-launch analysis of the Brazilian Remote Sensing Satellite with respect to horizon sensor performance.
Wiese, Steffen; Teutenberg, Thorsten; Schmidt, Torsten C
2012-01-27
In the present work it is shown that the linear elution strength (LES) model, which was adapted from temperature-programmed gas chromatography (GC), can also be employed for systematic method development in high-temperature liquid chromatography (HT-HPLC). The ability to predict isothermal retention times based on temperature-gradient as well as isothermal input data was investigated. For a small temperature interval of ΔT=40°C, both approaches result in very similar predictions. Average relative errors of predicted retention times of 2.7% and 1.9% were observed for simulations based on isothermal and temperature-gradient measurements, respectively. It was also investigated whether the accuracy of retention time predictions for segmented temperature gradients can be further improved by temperature-dependent calculation of the parameter S(T) of the LES relationship. The accuracy of retention time predictions for multi-step temperature gradients improved to around 1.5% when S(T) was calculated in a temperature-dependent manner. The adjusted experimental design, making use of four temperature-gradient measurements, was applied to systematic method development for selected food additives by high-temperature liquid chromatography. Method development was performed within a temperature interval from 40°C to 180°C using water as the mobile phase. Two separation methods were established in which the selected food additives were baseline separated. In addition, good agreement between simulation and experiment was observed, with an average relative error of predicted retention times for complex segmented temperature gradients of less than 5%. Finally, a schedule of recommendations was established to assist the practitioner during systematic method development in high-temperature liquid chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.
Systematic analysis of CMOS-micromachined inductors with application to mixer matching circuits
NASA Astrophysics Data System (ADS)
Wu, Jerry Chun-Li
The growing demand for consumer voice and data communication systems and military communication applications has created a need for low-power, low-cost, high-performance radio-frequency (RF) front-ends. To achieve this goal, bringing passive components, especially inductors, to silicon is imperative. On-chip passive components such as inductors and capacitors generally enhance the reliability and efficiency of silicon-integrated RF cells. They can provide circuit solutions with superior performance and contribute to a higher level of integration. With passive components on chip, there is a great opportunity to integrate transformers, filters, and matching networks on chip. However, inductors on silicon have a low quality factor (Q) due to both substrate and metal loss. This dissertation demonstrates the systematic analysis of inductors fabricated using standard complementary metal-oxide-semiconductor (CMOS) and micro-electro-mechanical systems (MEMS) technologies. We report system-on-chip inductor modeling, simulation, and measurements of effective inductance and quality factors. In this analysis methodology, a number of systematic simulations are performed on regular and micromachined inductors with different parameters such as spiral topology, number of turns, outer diameter, thickness, and percentage of substrate removed using micromachining technologies. Three novel support structures for the micromachined spiral inductor are proposed, analyzed, and implemented for larger suspended inductors. The sensitivity to the support structure and to different degrees of substrate etching by post-processing is illustrated. The results provide guidelines for the selection of inductor parameters, post-processing methodologies, and spiral supports to meet the RF design specifications and the stability requirements for mobile communication.
The proposed CMOS-micromachined inductor is used in a cost-effective double-balanced Gilbert mixer with an on-chip matching network. The integrated mixer inductor was implemented and tested to prove the concept.
Ensemble simulations of inertial confinement fusion implosions
Nora, Ryan; Peterson, Jayson Luc; Spears, Brian Keith; ...
2017-05-24
The achievement of inertial confinement fusion ignition on the National Ignition Facility relies on the collection and interpretation of a limited (and expensive) set of experimental data. These data are therefore supplemented with state-of-the-art multi-dimensional radiation-hydrodynamic simulations to provide a better understanding of implosion dynamics and behavior. We present a relatively large number (~4000) of systematically perturbed 2D simulations to probe our understanding of low-mode fuel and ablator asymmetries seeded by asymmetric illumination. We find that Gaussian process surrogate models are able to predict both the total neutron yield and the degradation in performance due to asymmetries. Furthermore, the surrogates are then applied to simulations containing new sources of degradation to quantify the impact of the new source.
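A Gaussian process surrogate of the kind described can be sketched in a few lines: fit a kernel regression to (perturbation, yield) samples and query it with predictive uncertainty. The 1D toy response below stands in for the 2D radiation-hydrodynamic simulations; the kernel, length scale, and data are all illustrative:

```python
import numpy as np

def rbf_kernel(a, b, length=0.2, var=1.0):
    """Squared-exponential covariance between two 1D sample sets."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length ** 2)

# Toy training database: neutron yield degrading with a low-mode asymmetry
# amplitude (the functional form is illustrative, not the physics response)
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 40)
y_train = np.exp(-3.0 * x_train ** 2) + rng.normal(0, 0.01, 40)

noise = 1e-4
K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
K_inv_y = np.linalg.solve(K, y_train)

def predict(x_new):
    """GP posterior mean and standard deviation at new perturbation values."""
    k_star = rbf_kernel(x_new, x_train)
    mean = k_star @ K_inv_y
    v = np.linalg.solve(K, k_star.T)
    var = np.diag(rbf_kernel(x_new, x_new)) - np.sum(k_star * v.T, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))

mean, std = predict(np.array([0.5]))
```

The predictive standard deviation is what makes such surrogates useful for an expensive simulation campaign: it flags the perturbation settings where the ensemble is too sparse to trust the interpolation.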
Marwan, Wolfgang; Sujatha, Arumugam; Starostzik, Christine
2005-10-21
We reconstruct the regulatory network controlling commitment and sporulation of Physarum polycephalum from experimental results using a hierarchical Petri net-based modelling and simulation framework. The stochastic Petri net consistently describes the structure and simulates the dynamics of the molecular network as analysed by genetic, biochemical and physiological experiments within a single coherent model. The Petri net is then extended to simulate time-resolved somatic complementation experiments performed by mixing the cytoplasms of mutants altered in the sporulation response, to systematically explore the network structure and to probe its dynamics. This reverse engineering approach can presumably be employed to explore other molecular or genetic signalling systems in which the activity of genes or their products can be experimentally controlled in a time-resolved manner.
A systematic review of validated sinus surgery simulators.
Stew, B; Kao, S S-T; Dharmawardana, N; Ooi, E H
2018-06-01
Simulation provides a safe and effective opportunity to develop surgical skills. A variety of endoscopic sinus surgery (ESS) simulators has been described in the literature. Validation of these simulators allows for effective utilisation in training. To conduct a systematic review of the published literature to analyse the evidence for validated ESS simulation. PubMed, Embase, Cochrane and CINAHL were searched from inception of the databases to 11 January 2017. Twelve thousand five hundred and sixteen articles were retrieved, of which 10 112 were screened following the removal of duplicates. Thirty-eight full-text articles were reviewed after meeting the search criteria. Evidence of face, content, construct, discriminant and predictive validity was extracted. Twenty articles were included in the analysis, describing 12 ESS simulators. Eleven of these simulators had undergone validation: 3 virtual reality, 7 physical bench models and 1 cadaveric simulator. Seven of the simulators were shown to have face validity, 7 had construct validity and 1 had predictive validity. None of the simulators demonstrated discriminant validity. This systematic review demonstrates that a number of ESS simulators have been comprehensively validated. Many of the validation processes, however, lack standardisation in outcome reporting, thus limiting meta-analytic comparison between simulators. © 2017 John Wiley & Sons Ltd.
Cost: the missing outcome in simulation-based medical education research: a systematic review.
Zendejas, Benjamin; Wang, Amy T; Brydges, Ryan; Hamstra, Stanley J; Cook, David A
2013-02-01
The costs involved with technology-enhanced simulation remain unknown. Appraising the value of simulation-based medical education (SBME) requires complete accounting and reporting of cost. We sought to summarize the quantity and quality of studies that contain an economic analysis of SBME for the training of health professions learners. We performed a systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Articles reporting original research in any language evaluating the cost of simulation, in comparison with nonsimulation instruction or another simulation intervention, for training practicing and student physicians, nurses, and other health professionals were selected. Reviewers working in duplicate evaluated study quality and abstracted information on learners, instructional design, cost elements, and outcomes. From a pool of 10,903 articles we identified 967 comparative studies. Of these, 59 studies (6.1%) reported any cost elements and 15 (1.6%) provided information on cost compared with another instructional approach. We identified 11 reported cost components, most often the cost of the simulator (n = 42 studies; 71%) and training materials (n = 21; 36%). Ten potential cost components were never reported. The median number of cost components reported per study was 2 (range, 1-9). Only 12 studies (20%) reported cost in the Results section; most reported it in the Discussion (n = 34; 58%). Cost reporting in SBME research is infrequent and incomplete. We propose a comprehensive model for accounting and reporting costs in SBME. Copyright © 2013 Mosby, Inc. All rights reserved.
Optimization of lamp arrangement in a closed-conduit UV reactor based on a genetic algorithm.
Sultan, Tipu; Ahmad, Zeshan; Cho, Jinsoo
2016-01-01
The choice of lamp arrangement in a closed-conduit ultraviolet (CCUV) reactor significantly affects its performance. However, a systematic methodology for the optimal lamp arrangement within the chamber of the CCUV reactor is not well established in the literature. In this research work, we propose a viable systematic methodology for the lamp arrangement based on a genetic algorithm (GA). In addition, we analyze the impact of the diameter, angle, and symmetry of the lamp arrangement on the reduction equivalent dose (RED). The results are compared based on the simulated RED values and evaluated using the computational fluid dynamics simulation software ANSYS FLUENT. The fluence rate was calculated using the commercial software UVCalc3D, and the GA-based lamp arrangement optimization was performed in MATLAB. The simulation results provide detailed information about the GA-based methodology for the lamp arrangement, the pathogen transport, and the simulated RED values. A significant increase in RED values was achieved by using the GA-based lamp arrangement methodology; this increase was highest for the asymmetric lamp arrangement within the chamber of the CCUV reactor. These results demonstrate that the proposed GA-based methodology for symmetric and asymmetric lamp arrangement provides a viable technical solution to the design and optimization of the CCUV reactor.
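The GA loop behind such an optimization is conceptually simple: encode each candidate arrangement as a vector of lamp parameters, score it with the dose model, and evolve the population by selection, crossover and mutation. A sketch with a toy fitness function standing in for the CFD-computed RED (the spread-the-lamps-evenly objective, population sizes and rates are all illustrative, not the paper's setup):

```python
import random

random.seed(1)
N_LAMPS = 4

def fitness(angles):
    """Toy stand-in for the simulated RED: rewards lamps spread evenly
    around the conduit (the real objective is a CFD-computed dose)."""
    s = sorted(a % 360 for a in angles)
    gaps = [(s[(i + 1) % N_LAMPS] - s[i]) % 360 for i in range(N_LAMPS)]
    return -sum((g - 360 / N_LAMPS) ** 2 for g in gaps)

def evolve(pop_size=40, generations=60):
    pop = [[random.uniform(0, 360) for _ in range(N_LAMPS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_LAMPS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:             # Gaussian mutation
                child[random.randrange(N_LAMPS)] += random.gauss(0, 15)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the real workflow, the fitness call is the expensive part: each candidate arrangement would trigger a fluence-rate and particle-tracking evaluation rather than the closed-form expression used here.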
Andersen, Steven Arild Wuyts; Konge, Lars; Sørensen, Mads Sølvsten
2018-05-07
Complex tasks such as surgical procedures can induce excessive cognitive load (CL), which can have a negative effect on learning, especially for novices. To investigate whether repeated and distributed virtual reality (VR) simulation practice induces a lower CL and higher performance in subsequent cadaveric dissection training. In a prospective, controlled cohort study, 37 residents in otorhinolaryngology received VR simulation training either as additional distributed practice prior to course participation (intervention; 9 participants) or as standard practice during the course (control; 28 participants). Cognitive load was estimated as the relative change in secondary-task reaction time during VR simulation and cadaveric procedures. Structured distributed VR simulation practice resulted in a lower relative increase in reaction time (32% vs. 47% for the intervention and control group, respectively, p < 0.01) as well as superior final-product performance during subsequent cadaveric dissection training. Repeated and distributed VR simulation practice thus induces a lower CL when the complexity of the learning situation increases; a suggested mechanism is the formation of mental schemas and a reduction of the intrinsic CL. This has implications for surgical skills training and suggests that structured, distributed training be systematically implemented in surgical training curricula.
Luo, Chuan; Jiang, Kaixia; Wan, Rongrong; Li, Hengpeng
2017-01-01
The Hydrological Simulation Program-Fortran (HSPF) is a hydrological and water quality computer model that was developed by the United States Environmental Protection Agency. Comprehensive performance evaluations were carried out for hydrological and nutrient simulation using the HSPF model in the Xitiaoxi watershed in China. Streamflow simulation was calibrated from 1 January 2002 to 31 December 2007 and then validated from 1 January 2008 to 31 December 2010 using daily observed data, and nutrient simulation was calibrated and validated using monthly observed data during the period from July 2009 to July 2010. The results of the model performance evaluation showed that streamflow was well simulated over the study period. The determination coefficient (R²) was 0.87, 0.77 and 0.63, and the Nash-Sutcliffe coefficient of efficiency (Ens) was 0.82, 0.76 and 0.65 for the streamflow simulation in annual, monthly and daily time-steps, respectively. Although limited to monthly observed data, satisfactory performance was still achieved in the quantitative evaluation for nutrients: the R² was 0.73, 0.82 and 0.92, and the Ens was 0.67, 0.74 and 0.86 for nitrate, ammonium and orthophosphate simulation, respectively. Some issues that may affect the application of HSPF, such as input data quality and parameter values, were also discussed. Overall, the HSPF model can be successfully used to describe streamflow and nutrient transport in this mesoscale watershed located in the East Asian monsoon climate area. This study is expected to serve as comprehensive and systematic documentation for understanding the HSPF model, supporting its wide application and avoiding possible misuses. PMID:29257117
Li, Zhaofu; Luo, Chuan; Jiang, Kaixia; Wan, Rongrong; Li, Hengpeng
2017-12-19
The Hydrological Simulation Program-Fortran (HSPF) is a hydrological and water quality computer model that was developed by the United States Environmental Protection Agency. Comprehensive performance evaluations were carried out for hydrological and nutrient simulation using the HSPF model in the Xitiaoxi watershed in China. Streamflow simulation was calibrated from 1 January 2002 to 31 December 2007 and then validated from 1 January 2008 to 31 December 2010 using daily observed data, and nutrient simulation was calibrated and validated using monthly observed data during the period from July 2009 to July 2010. The results of the model performance evaluation showed that streamflow was well simulated over the study period. The determination coefficient (R²) was 0.87, 0.77 and 0.63, and the Nash-Sutcliffe coefficient of efficiency (Ens) was 0.82, 0.76 and 0.65 for the streamflow simulation in annual, monthly and daily time-steps, respectively. Although limited to monthly observed data, satisfactory performance was still achieved in the quantitative evaluation for nutrients: the R² was 0.73, 0.82 and 0.92, and the Ens was 0.67, 0.74 and 0.86 for nitrate, ammonium and orthophosphate simulation, respectively. Some issues that may affect the application of HSPF, such as input data quality and parameter values, were also discussed. Overall, the HSPF model can be successfully used to describe streamflow and nutrient transport in this mesoscale watershed located in the East Asian monsoon climate area. This study is expected to serve as comprehensive and systematic documentation for understanding the HSPF model, supporting its wide application and avoiding possible misuses.
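Both statistics quoted for the streamflow and nutrient series are standard goodness-of-fit measures; the Nash-Sutcliffe coefficient in particular is simple to compute. A minimal implementation on invented observed/simulated series, not the Xitiaoxi data:

```python
def nash_sutcliffe(observed, simulated):
    """Ens = 1 - sum((O - S)^2) / sum((O - mean(O))^2)."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

# perfect agreement gives Ens = 1; predicting the observed mean gives Ens = 0
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
assert nash_sutcliffe(obs, obs) == 1.0
assert nash_sutcliffe(obs, [3.0] * 5) == 0.0

sim = [1.1, 2.1, 2.8, 4.2, 4.9]
ens = nash_sutcliffe(obs, sim)
```

Ens = 1 means perfect agreement, Ens = 0 means the model is no better than predicting the observed mean, and negative values mean it is worse; the 0.65-0.86 values reported above therefore indicate a substantially better-than-mean fit at every time step.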
Systematic errors of EIT systems determined by easily-scalable resistive phantoms.
Hahn, G; Just, A; Dittmar, J; Hellige, G
2008-06-01
We present a simple method to determine the systematic errors that will occur in measurements by EIT systems. The approach is based on very simple, scalable resistive phantoms for EIT systems using a 16-electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurement, and the trans-impedance of each phantom is determined by only one component. It can be chosen independently of the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on the resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal-to-noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the level of stochastic noise, since the Goe-MF II system had been optimized for a sufficient signal-to-noise ratio but not for accuracy. In time-difference imaging and functional EIT (f-EIT), systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT), where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
Huang, Grace C; McSparron, Jakob I; Balk, Ethan M; Richards, Jeremy B; Smith, C Christopher; Whelan, Julia S; Newman, Lori R; Smetana, Gerald W
2016-04-01
Optimal approaches to teaching bedside procedures are unknown. To identify effective instructional approaches in procedural training. We searched PubMed, EMBASE, Web of Science and Cochrane Library through December 2014. We included research articles that addressed procedural training among physicians or physician trainees for 12 bedside procedures. Two independent reviewers screened 9312 citations and identified 344 articles for full-text review. Two independent reviewers extracted data from full-text articles. We included measurements as classified by translational science outcomes T1 (testing settings), T2 (patient care practices) and T3 (patient/public health outcomes). Due to incomplete reporting, we post hoc classified study outcomes as 'negative' or 'positive' based on statistical significance. We performed meta-analyses of outcomes on the subset of studies sharing similar outcomes. We found 161 eligible studies (44 randomised controlled trials (RCTs), 34 non-RCTs and 83 uncontrolled trials). Simulation was the most frequently published educational mode (78%). Our post hoc classification showed that studies involving simulation, competency-based approaches and RCTs had higher frequencies of T2/T3 outcomes. Meta-analyses showed that simulation (risk ratio (RR) 1.54 vs 0.55 for studies with vs without simulation, p=0.013) and competency-based approaches (RR 3.17 vs 0.89, p<0.001) were effective forms of training. This systematic review of bedside procedural skills demonstrates that the current literature is heterogeneous and of varying quality and rigour. Evidence is strongest for the use of simulation and competency-based paradigms in teaching procedures, and these approaches should be the mainstay of programmes that train physicians to perform procedures. Further research should clarify differences among instructional methods (eg, forms of hands-on training) rather than among educational modes (eg, lecture vs simulation). 
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
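The pooled effects above are reported as risk ratios; as a minimal illustration of the statistic itself (the 2×2 counts below are invented for illustration, not data from the review):

```python
def risk_ratio(events_exposed, n_exposed, events_control, n_control):
    """RR = risk in the exposed group divided by risk in the control group."""
    risk_exposed = events_exposed / n_exposed
    risk_control = events_control / n_control
    return risk_exposed / risk_control

# hypothetical counts: 30/100 trainees reach a positive outcome after
# simulation-based training versus 15/100 after lecture-only training
rr = risk_ratio(30, 100, 15, 100)
```

An RR above 1 indicates the outcome is more frequent in the first group, which is the sense in which the review's RR contrast (1.54 with simulation vs 0.55 without) favours simulation-based training.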
Gathering Validity Evidence for Surgical Simulation: A Systematic Review.
Borgersen, Nanna Jo; Naur, Therese M H; Sørensen, Stine M D; Bjerrum, Flemming; Konge, Lars; Subhi, Yousif; Thomsen, Ann Sofia S
2018-06-01
To identify current trends in the use of validity frameworks in surgical simulation, to provide an overview of the evidence behind the assessment of technical skills in all surgical specialties, and to present recommendations and guidelines for future validity studies. Validity evidence for assessment tools used in the evaluation of surgical performance is of paramount importance to ensure valid and reliable assessment of skills. We systematically reviewed the literature by searching 5 databases (PubMed, EMBASE, Web of Science, PsycINFO, and the Cochrane Library) for studies published from January 1, 2008, to July 10, 2017. We included original studies evaluating simulation-based assessments of health professionals in surgical specialties and extracted data on surgical specialty, simulator modality, participant characteristics, and the validity framework used. Data were synthesized qualitatively. We identified 498 studies with a total of 18,312 participants. Publications involving validity assessments in surgical simulation more than doubled from 2008 to 2010 (∼30 studies/year) to 2014 to 2016 (∼70 to 90 studies/year). Only 6.6% of the studies used the recommended contemporary validity framework (Messick). The majority of studies used outdated frameworks such as face validity. Significant differences were identified across surgical specialties. The evaluated assessment tools were mostly inanimate or virtual reality simulation models. An increasing number of studies have gathered validity evidence for simulation-based assessments in surgical specialties, but the use of outdated frameworks remains common. To address the current practice, this paper presents guidelines on how to use the contemporary validity framework when designing validity studies.
Space-charge effects in Penning ion traps
NASA Astrophysics Data System (ADS)
Porobić, T.; Beck, M.; Breitenfeldt, M.; Couratin, C.; Finlay, P.; Knecht, A.; Fabian, X.; Friedag, P.; Fléchard, X.; Liénard, E.; Ban, G.; Zákoucký, D.; Soti, G.; Van Gorp, S.; Weinheimer, Ch.; Wursten, E.; Severijns, N.
2015-06-01
The influence of space charge on ion cyclotron resonances and the magnetron eigenfrequency in a gas-filled Penning ion trap has been investigated. Off-line measurements with 39K+ ions using the cooling trap of the WITCH retardation-spectrometer-based setup at ISOLDE/CERN were performed. Experimental ion cyclotron resonances were compared with ab initio Coulomb simulations and found to be in agreement. As an important systematic effect of the WITCH experiment, the magnetron eigenfrequency of the ion cloud was studied under increasing space-charge conditions. Finally, the helium buffer gas pressure in the Penning trap was determined by comparing experimental cooling rates with simulations.
Guidelines for Computing Longitudinal Dynamic Stability Characteristics of a Subsonic Transport
NASA Technical Reports Server (NTRS)
Thompson, Joseph R.; Frank, Neal T.; Murphy, Patrick C.
2010-01-01
A systematic study is presented to guide the selection of a numerical solution strategy for URANS computation of a subsonic transport configuration undergoing simulated forced oscillation about its pitch axis. Forced oscillation is central to the prevalent wind tunnel methodology for quantifying aircraft dynamic stability derivatives from force and moment coefficients, which is the ultimate goal of the computational simulations. Extensive computations are performed that lead to key insights into the critical numerical parameters affecting solution convergence. A preliminary linear harmonic analysis is included to demonstrate the potential of extracting dynamic stability derivatives from computational solutions.
NASA Astrophysics Data System (ADS)
Jiang, Wei; Wu, Zhaomei; Zhu, Yingming; Tian, Wen; Liang, Bin
2018-01-01
Four silver chalcogen compounds, Ag2O, Ag2S, Ag2Se and Ag2Te, can be utilized as visible-light-driven photocatalysts. In this research, the electronic structures of these compounds were analyzed by simulation and experiments to systematically reveal the relationship between photocatalytic performance and electronic structure. All four chalcogenides exhibited interesting photocatalytic activities under ultraviolet, visible and near-infrared light. However, their photocatalytic performance and stability depended significantly on the band gap width and on the positions of the valence band (VB) and conduction band (CB), which are determined by composition. Increasing the X atomic number from O to Te moved the VB top and the CB bottom upward, which resulted in narrower band gaps, a wider absorption spectrum, a weaker photo-oxidization capacity, a higher recombination probability of electron-hole pairs, lower quantum efficiency, and worse stability. Among them, Ag2O has the highest photocatalytic performance and stability due to its widest band gap and lowest VB and CB positions. The combined action of photogenerated holes and different radicals, depending on the different electronic structures, including the ozone anion radical, hydroxide radical, and superoxide radical, was observed and understood. The results of experimental observations and simulations of the four silver chalcogen compounds suggested that a proper electronic structure is necessary to obtain a balance between photocatalytic performance and absorbable light region in the development of new photocatalysts.
Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen
2005-10-01
Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, and parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm.
Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% having a 5% dose error. Combined random and systematic setup errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of the plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the Σ = σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.
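The two error mechanisms described in the Methods lend themselves to a compact numerical sketch. The following 1-D illustration is not the authors' code; the field size, grid, and evaluation point are arbitrary assumptions. Random setup error blurs the fluence by convolution with the Gaussian setup-error PDF, while systematic error rigidly shifts the whole profile by a single draw per simulated patient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D fluence profile on a 1 mm grid: a 60 mm field with sharp edges.
x = np.arange(-80.0, 80.0, 1.0)                    # position [mm]
fluence = ((x > -30) & (x < 30)).astype(float)

# Random setup error: convolve the fluence with a Gaussian PDF.
# Grid spacing is 1 mm, so sigma in mm maps directly to grid units.
sigma_random = 3.0
kernel = np.exp(-0.5 * (np.arange(-15, 16) / sigma_random) ** 2)
kernel /= kernel.sum()
blurred = np.convolve(fluence, kernel, mode="same")

# Systematic setup error: one rigid shift per simulated patient, drawn once
# from a normal distribution and applied to the whole profile.
sigma_systematic = 3.0
edge = np.abs(x - 28.0) < 0.5                      # point 2 mm inside the field edge
doses = []
for shift in rng.normal(0.0, sigma_systematic, size=500):
    doses.append(np.interp(x, x - shift, fluence)[edge].mean())

print(f"random blur: dose 2 mm inside the edge drops to {blurred[edge].mean():.2f}")
print(f"systematic shifts: mean {np.mean(doses):.2f}, sd {np.std(doses):.2f} at same point")
```

The qualitative result matches the abstract: random error smears dose gradients the same way for every patient, while systematic error produces a wide patient-to-patient spread near field edges.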
A Systematic Determination of Skill and Simulator Requirements for Airplane Pilot Certification
DOT National Transportation Integrated Search
1985-03-01
This research report describes: (1) the FAA's ATP airman certification system; (2) needs of the system regarding simulator use; (3) a systematic methodology for meeting these needs; (4) application of the methodology; (5) results of the study; and (6...
Systematic use of closed-circuit television in a general practice teaching unit
Irwin, W. George; Perrott, Jon S.
1981-01-01
We describe the use of closed-circuit television in teaching general practice consulting skills in a new central teaching unit of a department of general practice. We explain how the system works, present a simple analysis of student performance in communicating with real and simulated patients, and discuss the value of teaching from the consultation with closed-circuit television and video. PMID:7328539
Phobos laser ranging: Numerical Geodesy experiments for Martian system science
NASA Astrophysics Data System (ADS)
Dirkx, D.; Vermeersen, L. L. A.; Noomen, R.; Visser, P. N. A. M.
2014-09-01
Laser ranging is emerging as a technology for use over (inter)planetary distances, offering high (mm-cm) precision and accuracy with low mass and power consumption. We have performed numerical simulations to assess the science return, in terms of geodetic observables, of a hypothetical Phobos lander performing active two-way laser ranging with Earth-based stations. We focus our analysis on the estimation of Phobos and Mars gravitational, tidal and rotational parameters. We explicitly include systematic error sources in addition to uncorrelated random observation errors. This is achieved through the use of consider covariance parameters, specifically the ground station position and observation biases. Uncertainties for the consider parameters are set at 5 mm, and at 1 mm for the Gaussian uncorrelated observation noise (for an observation integration time of 60 s). We perform the analysis for mission durations up to 5 years. It is shown that Phobos Laser Ranging (PLR) can contribute to a better understanding of the Martian system, opening the possibility of improved determination of a variety of physical parameters of Mars and Phobos. The simulations show that the mission concept is especially suited to estimating Mars tidal deformation parameters, determining degree-2 Love numbers with absolute uncertainties at the 10⁻² to 10⁻⁴ level after 1 and 4 years, respectively, and providing separate estimates of the Martian quality factors at Sun- and Phobos-forced frequencies. The estimation of Phobos libration amplitudes and gravity field coefficients provides an estimate of Phobos' relative equatorial and polar moments of inertia with absolute uncertainties of 10⁻⁴ and 10⁻⁷, respectively, after 1 year. The observation of Phobos tidal deformation will be able to differentiate between a rubble-pile and a monolithic interior within 2 years.
For all parameters, systematic errors have a much stronger influence (per unit uncertainty) than the uncorrelated Gaussian observation noise. This indicates the need for the inclusion of systematic errors in simulation studies and special attention to the mitigation of these errors in mission and system design.
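The consider-covariance machinery used above can be illustrated on a toy linear estimation problem. This is a generic sketch, not the authors' setup: a constant observation bias with 5 mm consider uncertainty is propagated into the covariance of two estimated parameters, alongside 1 mm Gaussian noise, via the standard formula P_c = P + K S C Sᵀ Kᵀ:

```python
import numpy as np

# Linear observation model: y = H x + S c + noise.
# x: 2 estimated parameters (offset and rate); c: 1 "consider" parameter
# (e.g. a station bias) that is NOT estimated, only propagated.
n_obs = 200
t = np.linspace(0.0, 1.0, n_obs)
H = np.column_stack([np.ones(n_obs), t])     # design matrix for offset + rate
S = np.ones((n_obs, 1))                      # sensitivity to a constant bias
sigma_noise = 1e-3                           # uncorrelated noise [m], 1 mm
sigma_consider = 5e-3                        # consider uncertainty [m], 5 mm

W = np.eye(n_obs) / sigma_noise**2           # observation weight matrix
P_formal = np.linalg.inv(H.T @ W @ H)        # noise-only (formal) covariance
K = P_formal @ H.T @ W                       # least-squares gain
C = np.array([[sigma_consider**2]])
P_consider = P_formal + K @ S @ C @ S.T @ K.T

print("formal   sd:", np.sqrt(np.diag(P_formal)))
print("consider sd:", np.sqrt(np.diag(P_consider)))
```

Because the bias is fully absorbed by the offset parameter in this toy case, only the offset uncertainty inflates; in general the consider term inflates every parameter the bias aliases into, which is why systematic errors dominate per unit uncertainty.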
Hydrogen bonds and twist in cellulose microfibrils.
Kannam, Sridhar Kumar; Oehme, Daniel P; Doblin, Monika S; Gidley, Michael J; Bacic, Antony; Downton, Matthew T
2017-11-01
There is increasing experimental and computational evidence that cellulose microfibrils can exist in a stable twisted form. In this study, atomistic molecular dynamics (MD) simulations are performed to investigate the importance of intrachain hydrogen bonds for the twist in cellulose microfibrils. We systematically enforce or block the formation of these intrachain hydrogen bonds by either constraining dihedral angles or manipulating charges. For the majority of simulations a consistent right-handed twist is observed. The exceptions are two sets of simulations that block the O2-O6' intrachain hydrogen bond, in which no consistent twist is observed across multiple independent simulations, suggesting that the O2-O6' hydrogen bond can drive twist. However, in a further simulation in which exocyclic group rotation is also blocked, right-handed twist still develops, suggesting that intrachain hydrogen bonds are not necessary to drive twist in cellulose microfibrils. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease
NASA Astrophysics Data System (ADS)
Marsden, Alison
2009-11-01
Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and we layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.
An approach to achieve progress in spacecraft shielding
NASA Astrophysics Data System (ADS)
Thoma, K.; Schäfer, F.; Hiermaier, S.; Schneider, E.
2004-01-01
Progress in shield design against space debris can be achieved only when a combined approach based on several tools is used. This approach depends on the combined application of advanced numerical methods, specific material models and experimental determination of input parameters for these models. Examples of experimental methods for material characterization are given, covering the range from quasi static to very high strain rates for materials like Nextel and carbon fiber-reinforced materials. Mesh free numerical methods have extraordinary capabilities in the simulation of extreme material behaviour including complete failure with phase changes, combined with shock wave phenomena and the interaction with structural components. In this paper the benefits from combining numerical methods, material modelling and detailed experimental studies for shield design are demonstrated. The following examples are given: (1) Development of a material model for Nextel and Kevlar-Epoxy to enable numerical simulation of hypervelocity impacts on complex heavy protection shields for the International Space Station. (2) The influence of projectile shape on protection performance of Whipple Shields and how experimental problems in accelerating such shapes can be overcome by systematic numerical simulation. (3) The benefits of using metallic foams in "sandwich bumper shields" for spacecraft and how to approach systematic characterization of such materials.
Naturalistic Decision Making For Power System Operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greitzer, Frank L.; Podmore, Robin; Robinson, Marck
2009-06-23
Abstract: Motivation -- As indicated by the Blackout of 2003, the North American interconnected electric system is vulnerable to cascading outages and widespread blackouts. Investigations of large-scale outages often attribute the causes to the three T's: Trees, Training and Tools. A systematic approach has been developed to document and understand the mental processes that an expert power system operator uses when making critical decisions. The approach has been developed and refined as part of a capability demonstration of a high-fidelity real-time power system simulator under normal and emergency conditions. To examine naturalistic decision making (NDM) processes, transcripts of operator-to-operator conversations are analyzed to reveal and assess NDM-based performance criteria. Findings/Design -- The results of the study indicate that we can map the Situation Awareness Level of the operators at each point in the scenario. We can also clearly identify what mental models and mental simulations are being performed at different points in the scenario. As a result of this research we expect to identify improved training methods and improved analytical and visualization tools for power system operators. Originality/Value -- The research applies, for the first time, the concepts of Recognition Primed Decision Making, Situation Awareness Levels and Cognitive Task Analysis to the training of electric power system operators. Take-away message -- The NDM approach provides an ideal framework for systematic training management and mitigation to accelerate learning in team-based training scenarios with high-fidelity power grid simulators.
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
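The assimilation step described above, a weighted average of the parameterized and numerical predictions, can be sketched with synthetic numbers. Purely for illustration we assume unbiased errors and inverse-variance weights, a standard choice that minimizes the variance of the combined estimate; the error magnitudes are invented, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

truth = 1.5                                      # "observed" runup [m]
n = 2000

# Two unbiased predictors with different error variances: synthetic stand-ins
# for the parameterized model and the numerical simulation.
param_pred = truth + rng.normal(0.0, 0.20, n)    # parameterized-model error
numer_pred = truth + rng.normal(0.0, 0.35, n)    # numerical-model error

# Inverse-variance weights minimize the variance of the weighted average.
w_param = (1 / 0.20**2) / (1 / 0.20**2 + 1 / 0.35**2)
combined = w_param * param_pred + (1 - w_param) * numer_pred

for name, pred in [("param", param_pred), ("numer", numer_pred), ("combined", combined)]:
    print(f"{name:9s} error variance: {np.var(pred - truth):.4f}")
```

The combined error variance falls below that of either predictor alone, which is the "reduction in prediction error variance" the abstract reports for the assimilated prediction.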
Prostate-cancer diagnosis by non-invasive prostatic Zinc mapping using X-Ray Fluorescence (XRF)
NASA Astrophysics Data System (ADS)
Cortesi, Marco
At present, the major screening tools (PSA, DRE, TRUS) for prostate cancer lack sensitivity and specificity, and none can distinguish between low-grade indolent cancer and high-grade lethal cancer. The situation calls for the promotion of alternative approaches, with better detection sensitivity and specificity, to provide more efficient selection of patients for biopsy and possible guidance of the biopsy needles. The prime objective of the present work was the development of a novel non-invasive method and tool for promoting detection, localization, diagnosis and follow-up of prostate cancer (PCa). The method is based on in-vivo imaging of Zn distribution in the peripheral zone of the prostate by a trans-rectal X-ray fluorescence (XRF) probe. Local Zn levels, measured in 1-4 mm³ fresh tissue biopsy segments from an extensive clinical study involving several hundred patients, showed an unambiguous correlation with the histological classification of the tissue (non-cancer or PCa), and a systematic positive correlation of the Zn-depletion level with the cancer-aggressiveness grade (Gleason classification). A detailed analysis of computer-simulated Zn-concentration images (with input parameters from clinical data) disclosed the potential of the method to provide sensitive and specific detection and localization of the lesion, its grade and its extension. Furthermore, it also yielded invaluable data on requirements, such as the image resolution and counting statistics, for a trans-rectal XRF probe for in-vivo recording of prostatic-Zn maps in patients. By means of systematic table-top experiments on prostate phantoms comprising tumor-like inclusions, followed by dedicated Monte Carlo simulations, the XRF probe and its components have been designed and optimized.
Multi-parameter analysis of the experimental data confirmed the simulation estimations of the XRF detection system in terms of: delivered dose, counting statistics, scanning resolution, target-volume size and the accuracy of locating at various depths of small-volume tumor-like inclusions in tissue-phantoms. The clinical study, the Monte Carlo simulations and the analysis of Zn-map images provided essential information and promising vision on the potential performance of the Zn-based PCa detection concept. Simulations focusing on medical-probe design and its performance at permissible radiation doses yielded positive results - confirmed by a series of systematic laboratory experiments with a table-top XRF system.
NASA Astrophysics Data System (ADS)
Osnes, A. N.; Vartdal, M.; Pettersson Reif, B. A.
2018-05-01
The formation of jets from a shock-accelerated cylindrical shell of particles, confined in a Hele-Shaw cell, is studied by means of numerical simulation. A number of simulations have been performed, systematically varying the coupling between the gas and solid phases in an effort to identify the primary mechanism(s) responsible for jet formation. We find that coupling through drag is sufficient for the formation of jets. Including the effect of particle volume fraction and particle collisions did not alter the general behaviour, but had some influence on the length, spacing and number of jets. Furthermore, we find that the jet selection process starts early in the dispersal process, during the initial expansion of the particle layer.
A diagram for evaluating multiple aspects of model performance in simulating vector fields
NASA Astrophysics Data System (ADS)
Xu, Zhongfeng; Hou, Zhaolu; Han, Ying; Guo, Weidong
2016-12-01
Vector quantities, e.g., vector winds, play an extremely important role in climate systems. The energy and water exchanges between different regions are strongly dominated by wind, which in turn shapes the regional climate. Thus, how well climate models can simulate vector fields directly affects model performance in reproducing the nature of a regional climate. This paper devises a new diagram, termed the vector field evaluation (VFE) diagram, which is a generalized Taylor diagram able to provide a concise evaluation of model performance in simulating vector fields. The diagram measures how well two vector fields match each other in terms of three statistical variables: the vector similarity coefficient (VSC), the root mean square length (RMSL), and the root mean square vector difference (RMSVD). Like the Taylor diagram, the VFE diagram is especially useful for evaluating climate models. The pattern similarity of two vector fields is measured by the VSC, defined as the arithmetic mean of the inner products of normalized vector pairs. Examples are provided showing that the VSC can identify how closely one vector field resembles another. Note that the VSC describes only pattern similarity; it does not reflect a systematic difference in mean vector length between two fields. To measure the vector length, the RMSL is included in the diagram. The third variable, RMSVD, is used to identify the magnitude of the overall difference between two vector fields. Examples show that the VFE diagram can clearly illustrate the extent to which the overall RMSVD is attributable to the systematic difference in RMSL and how much is due to poor pattern similarity.
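The three statistics can be computed directly from their definitions in the abstract. The sketch below, using a synthetic 2-D wind field with names of our own choosing, follows the per-pair normalization described for the VSC:

```python
import numpy as np

def vfe_stats(A, B):
    """VFE-diagram statistics for two vector fields A, B of shape (N, 2)."""
    # Vector similarity coefficient: mean inner product of normalized pairs.
    dots = np.sum(A * B, axis=1)
    vsc = np.mean(dots / (np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)))
    # Root mean square length of each field.
    rmsl_A = np.sqrt(np.mean(np.sum(A**2, axis=1)))
    rmsl_B = np.sqrt(np.mean(np.sum(B**2, axis=1)))
    # Root mean square vector difference between the fields.
    rmsvd = np.sqrt(np.mean(np.sum((A - B)**2, axis=1)))
    return vsc, rmsl_A, rmsl_B, rmsvd

rng = np.random.default_rng(2)
obs = rng.normal(size=(1000, 2))                      # "observed" wind field
model = 1.3 * obs + rng.normal(0.0, 0.3, (1000, 2))   # biased, noisy "model"

vsc, rl_obs, rl_mod, rmsvd = vfe_stats(obs, model)
print(f"VSC={vsc:.3f}  RMSL(obs)={rl_obs:.3f}  RMSL(model)={rl_mod:.3f}  RMSVD={rmsvd:.3f}")
```

For identical fields VSC is exactly 1 and RMSVD is 0; the 1.3× amplitude bias shows up only in the RMSL comparison, illustrating why the diagram needs all three variables.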
Comparison of two methods to determine fan performance curves using computational fluid dynamics
NASA Astrophysics Data System (ADS)
Onma, Patinya; Chantrasmi, Tonkid
2018-01-01
This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain the performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with data from an experimental setup built in accordance with the AMCA fan performance testing standard.
Systematic errors in Monsoon simulation: importance of the equatorial Indian Ocean processes
NASA Astrophysics Data System (ADS)
Annamalai, H.; Taguchi, B.; McCreary, J. P., Jr.; Nagura, M.; Miyama, T.
2015-12-01
In climate models, simulating the monsoon precipitation climatology remains a grand challenge. Compared to CMIP3, the multi-model-mean (MMM) errors for the Asian-Australian monsoon (AAM) precipitation climatology in CMIP5, relative to GPCP observations, have shown little improvement. One implication is that uncertainties in the future projections of time-mean changes to AAM rainfall may not have been reduced from CMIP3 to CMIP5. Despite dedicated efforts by the modeling community, progress in monsoon modeling has been slow, which leads us to wonder: has the scientific community reached a "plateau" in modeling mean monsoon precipitation? Our focus here is on better understanding the coupled air-sea interactions and moist processes that govern the precipitation characteristics over the tropical Indian Ocean, where large-scale errors persist. A series of idealized coupled model experiments is performed to test the hypothesis that errors in the coupled processes along the equatorial Indian Ocean during inter-monsoon seasons could potentially influence systematic errors during the monsoon season. Moist static energy budget diagnostics have been performed to identify the leading moist and radiative processes that account for the large-scale errors in the simulated precipitation. As a way forward, we propose three coordinated efforts: (i) idealized coupled model experiments; (ii) process-based diagnostics; and (iii) direct observations to constrain model physics. We argue that a systematic and coordinated approach to identifying the various interactive processes that shape the precipitation basic state needs to be carried out, and that high-quality observations over the data-sparse monsoon region are needed to validate models and further improve model physics.
NASA Astrophysics Data System (ADS)
Hu, S. X.; Collins, L. A.; Boehly, T. R.; Ding, Y. H.; Radha, P. B.; Goncharov, V. N.; Karasiev, V. V.; Collins, G. W.; Regan, S. P.; Campbell, E. M.
2018-05-01
Polystyrene (CH), commonly known as "plastic," has been one of the widely used ablator materials for capsule designs in inertial confinement fusion (ICF). Knowing its precise properties under high-energy-density conditions is crucial to understanding and designing ICF implosions through radiation-hydrodynamic simulations. For this purpose, systematic ab initio studies on the static, transport, and optical properties of CH, in a wide range of density and temperature conditions (ρ = 0.1 to 100 g/cm³ and T = 10³ to 4 × 10⁶ K), have been conducted using quantum molecular dynamics (QMD) simulations based on the density functional theory. We have built several wide-ranging, self-consistent material-properties tables for CH, such as the first-principles equation of state, the QMD-based thermal conductivity (κQMD) and ionization, and the first-principles opacity table. This paper is devoted to providing a review on (1) what results were obtained from these systematic ab initio studies; (2) how these self-consistent results were compared with both traditional plasma-physics models and available experiments; and (3) how these first-principles-based properties of polystyrene affect the predictions of ICF target performance, through both 1-D and 2-D radiation-hydrodynamic simulations. In the warm dense regime, our ab initio results, which can significantly differ from predictions of traditional plasma-physics models, compared favorably with experiments. When incorporated into hydrocodes for ICF simulations, these first-principles material properties of CH have produced significant differences over traditional models in predicting 1-D/2-D target performance of ICF implosions on OMEGA and direct-drive-ignition designs for the National Ignition Facility. Finally, we will discuss the implications of these studies on the current small-margin ICF target designs using a CH ablator.
Brunner, S.; Berger, R. L.; Cohen, B. I.; ...
2014-10-01
Kinetic Vlasov simulations of one-dimensional finite-amplitude Electron Plasma Waves are performed in a multi-wavelength-long system. A systematic study of the most unstable linear sideband mode, in particular its growth rate γ and quasi-wavenumber δk, is carried out by scanning the amplitude and wavenumber of the initial wave. Simulation results are successfully compared against numerical and analytical solutions to the reduced model by Kruer et al. [Phys. Rev. Lett. 23, 838 (1969)] for the Trapped Particle Instability (TPI). A model recently suggested by Dodin et al. [Phys. Rev. Lett. 110, 215006 (2013)], which in addition to the TPI accounts for the so-called Negative Mass Instability owing to a more detailed representation of the trapped-particle dynamics, is also studied and compared with simulations.
NASA Astrophysics Data System (ADS)
Morozov, A.; Heindl, T.; Skrobol, C.; Wieser, J.; Krücken, R.; Ulrich, A.
2008-07-01
Electron beams with particle energies of ~10 keV were sent through 300 nm thick ceramic (Si3N4 + SiO2) foils and the resulting electron energy distribution functions were recorded using a retarding grid technique. The results are compared with Monte Carlo simulations performed with two publicly available packages, Geant4 and Casino v2.42. It is demonstrated that Geant4, unlike Casino, provides electron energy distribution functions very similar to the experimental distributions. Both simulation packages provide quite precise values for the average energy of transmitted electrons: we demonstrate that the maximum uncertainty of the calculated average energy is 6% for Geant4 and 8% for Casino, taking into account all systematic uncertainties and the discrepancies between the experimental and simulated data.
Effects of deterministic surface distortions on reflector antenna performance
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Y.
1985-01-01
Systematic distortions of reflector antenna surfaces can cause antenna radiation patterns to differ undesirably from those of perfectly smooth reflector surfaces. In this paper, a simulation model for systematic distortions is described which permits efficient computation of the effects of distortions on the reflector pattern. The model uses a vector diffraction physical optics analysis for the determination of both the co-polar and cross-polar fields. An interpolation scheme is also presented for the description of reflector surfaces that are prescribed by discrete points. Representative numerical results are presented for reflectors with sinusoidally and thermally distorted surfaces. Finally, comparisons are made between the measured and calculated patterns of a slowly varying distorted offset parabolic reflector.
Atomic defects in monolayer WSe2 tunneling FETs studied by systematic ab initio calculations
NASA Astrophysics Data System (ADS)
Wu, Jixuan; Fan, Zhiqiang; Chen, Jiezhi; Jiang, Xiangwei
2018-05-01
Atomic defects in monolayer WSe2 tunneling FETs (TFETs) are studied through systematic ab initio calculations aiming at performance predictions and enhancements. The effects of various defect positions and different passivation atoms are characterized in WSe2 TFETs by rigorous ab initio quantum transport simulations. It is suggested that the Se vacancy (VSe) defect located in the gate-controlled channel region tends to increase the OFF current (Ioff), whereas it can be well suppressed by oxygen passivation. It is demonstrated that chlorine (Cl) passivation at the source-side tunneling region can largely suppress Ioff, leading to an impressively improved on–off ratio (Ion/Ioff) compared with that without any defect. However, it is also observed that randomly positioned atomic defects tend to induce significant fluctuation of the TFET output. Further discussions are made with focus on the performance-variability trade-off for robust circuit design.
Efficient computation paths for the systematic analysis of sensitivities
NASA Astrophysics Data System (ADS)
Greppi, Paolo; Arato, Elisabetta
2013-01-01
A systematic sensitivity analysis requires computing the model on all points of a multi-dimensional grid covering the domain of interest, defined by the ranges of variability of the inputs. The key issues in performing such analyses efficiently on algebraic models are handling solution failures within and close to the feasible region and minimizing the total iteration count. Scanning the domain in the obvious order is sub-optimal in terms of total iterations and is likely to cause many solution failures. The problem of choosing a better order can be translated geometrically into finding Hamiltonian paths on certain grid graphs. This work proposes two paths, one based on a mixed-radix Gray code and the other, a quasi-spiral path, produced by a novel heuristic algorithm. Some simple, easy-to-visualize examples are presented, followed by performance results for the quasi-spiral algorithm and the practical application of the different paths in a process simulation tool.
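A mixed-radix Gray code ordering of the kind mentioned in the abstract can be sketched as follows. This is a generic reflected Gray code over grid indices, not the authors' implementation; consecutive grid points differ in exactly one input by one step, so each model solve starts from a converged neighbouring point:

```python
def mixed_radix_gray(radices):
    """Yield grid index tuples in reflected Gray order: consecutive tuples
    differ in exactly one coordinate, by exactly 1, forming a Hamiltonian
    path on the grid graph."""
    if not radices:
        yield ()
        return
    first, rest = radices[0], tuple(radices[1:])
    sub = list(mixed_radix_gray(rest))
    for i in range(first):
        # reverse the sub-path on odd rows so the walk stays contiguous
        block = sub if i % 2 == 0 else list(reversed(sub))
        for tail in block:
            yield (i,) + tail

# a 2 x 3 grid of input combinations: 6 points, visited boustrophedon-style
path = list(mixed_radix_gray((2, 3)))
```

Visiting the grid in this order avoids the large input jumps of a naive nested-loop scan, which is the property the paper exploits to reduce iteration counts and solution failures.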
Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod
Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.
2008-01-01
To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining the precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as the sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. © 2008 The Society of Population Ecology and Springer.
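The flavour of this design comparison can be illustrated with a toy Monte Carlo: a hypothetical clustered population and two of the five designs (simple random and grid-based systematic). All names and numbers here are invented for illustration and do not reproduce the SAMPLE program or the study's data:

```python
import random
import statistics

def clustered_population(n_units=400, n_patches=8, seed=1):
    """Hypothetical plant counts per quadrat: mostly zeros with a few dense
    patches, mimicking an aggregated spatial distribution."""
    rng = random.Random(seed)
    counts = [0] * n_units
    for _ in range(n_patches):
        centre = rng.randrange(n_units)
        for offset in range(-5, 6):
            counts[(centre + offset) % n_units] += rng.randrange(0, 20)
    return counts

def srs_estimate(counts, n_sample, rng):
    """Mean density from a simple random sample of quadrats."""
    return statistics.mean(rng.sample(counts, n_sample))

def systematic_estimate(counts, n_sample, rng):
    """Mean density from a 1-in-k systematic sample with a random start."""
    step = len(counts) // n_sample
    start = rng.randrange(step)
    return statistics.mean(counts[start::step][:n_sample])

counts = clustered_population()
rng = random.Random(42)
srs = [srs_estimate(counts, 40, rng) for _ in range(500)]
sysd = [systematic_estimate(counts, 40, rng) for _ in range(500)]
# the spread of each list (e.g. statistics.stdev) ranks the designs' precision
```

Repeating each design many times against a known population and comparing the spread of the estimates is the same logic the study applies, at scale, to all five designs.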
WFIRST: Simulating the Wide-Field Sky
NASA Astrophysics Data System (ADS)
Peeples, Molly; WFIRST Wide Field Imager Simulations Working Group
2018-01-01
Simulated data will play a vital role in the planning for and analysis of data from WFIRST's WFI (Wide Field Imager), astronomy's first high-resolution, wide-field, multi-mode instrument. Part of the key to WFIRST's scientific success lies in our ability to push the systematics limit, but to do so the WFI pipeline will need to be able to measure and remove those systematics. The efficacy of this pipeline can only be verified with large suites of synthetic data; these data must include both the range of astrophysical sky scenes (from crowded starfields to high-latitude grism observations) and the systematics from the detector and telescope optics that the WFI pipeline aims to mitigate. We summarize here (1) the status of current and planned astrophysical simulations in support of the WFI, (2) the status of current WFI instrument simulators and requirements on future generations thereof, and (3) plans, methods, and requirements for interfacing astrophysical simulations with WFI instrument simulators.
NASA Astrophysics Data System (ADS)
Güttler, I.
2012-04-01
Systematic errors in near-surface temperature (T2m), total cloud cover (CLD), shortwave albedo (ALB) and surface net longwave (SNL) and shortwave energy flux (SNS) are detected in simulations of RegCM at 50 km resolution over the European CORDEX domain when forced with ERA-Interim reanalysis. Simulated T2m is compared to CRU 3.0 and the other variables to the GEWEX-SRB 3.0 dataset. Most of the systematic errors found in SNL and SNS are consistent with errors in T2m, CLD and ALB: they include prevailing negative errors in T2m and positive errors in CLD present during most of the year. Errors in T2m and CLD can be associated with the overestimation of SNL and SNS in most simulations. The impact of albedo errors is primarily confined to north Africa, where e.g. underestimation of albedo in JJA is consistent with associated surface heating and positive SNS and T2m errors. Sensitivity to the choice of the PBL scheme and various parameters in PBL schemes is examined from an ensemble of 20 simulations. The recently implemented prognostic PBL scheme performs with mixed success over Europe when compared to the standard diagnostic scheme, with a general increase of errors in T2m and CLD over all of the domain. Nevertheless, improvements in T2m can be found in e.g. north-eastern Europe during DJF and western Europe during JJA, where substantial warm biases existed in simulations with the diagnostic scheme. The most detectable impact, in terms of the JJA T2m errors over western Europe, comes from the variation in the formulation of mixing length. In order to reduce the above errors, an update of the RegCM albedo values and further work on customizing the PBL scheme are suggested.
Theoretical proposal for determining angular momentum compensation in ferrimagnets
NASA Astrophysics Data System (ADS)
Zhu, Zhifeng; Fong, Xuanyao; Liang, Gengchiau
2018-05-01
This work demonstrates that the magnetization and angular momentum compensation temperatures (TMC and TAMC) in ferrimagnets can be unambiguously determined by performing two sets of temperature-dependent current switching, with the symmetry reversals at TMC and TAMC, respectively. A theoretical model based on the modified Landau-Lifshitz-Bloch equation is developed to systematically study the spin torque effect under different temperatures, and numerical simulations are performed to corroborate our proposal. Furthermore, we demonstrate that the recently reported linear relation between TAMC and TMC can be explained using the Curie-Weiss theory.
NASA Astrophysics Data System (ADS)
Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.
2018-03-01
The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.
Bravata, Dena M; McDonald, Kathryn M; Shojania, Kaveh G; Sundaram, Vandana; Owens, Douglas K
2005-06-21
Some important health policy topics, such as those related to the delivery, organization, and financing of health care, present substantial challenges to established methods for evidence synthesis. For example, such reviews may ask: What is the effect of for-profit versus not-for-profit delivery of care on patient outcomes? Or, which strategies are the most effective for promoting preventive care? This paper describes innovative methods for synthesizing evidence related to the delivery, organization, and financing of health care. We found 13 systematic reviews on these topics that described novel methodologic approaches. Several of these syntheses used 3 approaches: conceptual frameworks to inform problem formulation, systematic searches that included nontraditional literature sources, and hybrid synthesis methods that included simulations to address key gaps in the literature. As the primary literature on these topics expands, so will opportunities to develop additional novel methods for performing high-quality comprehensive syntheses.
Boiocchi, Riccardo; Gernaey, Krist V; Sin, Gürkan
2016-10-01
A methodology is developed to systematically design the membership functions of fuzzy-logic controllers for multivariable systems. The methodology consists of a systematic derivation of the critical points of the membership functions as a function of predefined control objectives. Several constrained optimization problems corresponding to different qualitative operation states of the system are defined and solved to identify, in a consistent manner, the critical points of the membership functions for the input variables. The consistently identified critical points, together with the linguistic rules, determine the long term reachability of the control objectives by the fuzzy logic controller. The methodology is highlighted using a single-stage side-stream partial nitritation/Anammox reactor as a case study. As a result, a new fuzzy-logic controller for high and stable total nitrogen removal efficiency is designed. Rigorous simulations are carried out to evaluate and benchmark the performance of the controller. The results demonstrate that the novel control strategy is capable of rejecting the long-term influent disturbances, and can achieve a stable and high TN removal efficiency. Additionally, the controller was tested, and showed robustness, against measurement noise levels typical for wastewater sensors. A feedforward-feedback configuration using the present controller would give even better performance. In comparison, a previously developed fuzzy-logic controller using merely expert and intuitive knowledge performed worse. This proved the importance of using a systematic methodology for the derivation of the membership functions for multivariable systems. These results are promising for future applications of the controller in real full-scale plants. Furthermore, the methodology can be used as a tool to help systematically design fuzzy logic control applications for other biological processes. Copyright © 2016 Elsevier Ltd. All rights reserved.
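For readers unfamiliar with the objects being optimized here, a fuzzy membership function is fully determined by a handful of critical points. A minimal sketch of the generic trapezoidal form (not the authors' reactor-specific functions; the points a <= b <= c <= d and the "high ammonium" example are purely illustrative):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function defined by four critical points:
    0 outside [a, d], 1 on [b, c], linear ramps in between.
    Assumes a < b <= c < d (strict on the ramps to avoid division by zero)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# e.g. a hypothetical "high ammonium" set: rises from 1, saturates on [2, 3], falls off by 4
degree = trapezoid(1.5, 1.0, 2.0, 3.0, 4.0)  # 0.5
```

The methodology described above amounts to choosing the points (a, b, c, d) for each input variable by solving constrained optimization problems tied to the control objectives, rather than setting them by intuition.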
Mema, Briseida; Harris, Ilene
2016-01-01
PHENOMENON: Ultrasound-guided central venous line insertion is currently the standard of care. Randomized controlled trials and systematic reviews show that simulation is superior to apprenticeship training. The purpose of this study is to explore, from the perspectives of participants in a simulation-training program, the factors that help or hinder the transfer of skills from simulation to practice. Purposeful sampling was used to select and study the experience and perspective of novice fellows after they had completed simulation training and then performed ultrasound-guided central venous line insertion in practice. Seven novice pediatric intensive care unit fellows and six supervising faculty in a university-affiliated academic center in a large urban city were recruited between September 2012 and January 2013. We conducted a qualitative study using semistructured interviews as our data source, employing a constructivist, grounded theory methodology. Both curricular and real-life factors influence the transfer of skills from simulation to practice and the overall performance of trainees. Clear instructions, the opportunity to practice to mastery, one-on-one observation with feedback, supervision, and further real-life experiences were perceived as factors that facilitated the transfer of skills. Concern for patient welfare, live troubleshooting, complexity of the intensive care unit environment, and the procedure itself were perceived as real-life factors that hindered the transfer of skills. Insights: As more studies confirm the superiority of simulation training versus apprenticeship training for initial student learning, the faculty should gain insight into factors that facilitate and hinder the transfer of skills from simulation to bedside settings and impact learners' performances.
As simulation further augments clinical learning, efforts should be made to modify the curricular and bedside factors that facilitate transfer of skills from simulation to practice settings.
Ma, Irene W Y; Brindle, Mary E; Ronksley, Paul E; Lorenzetti, Diane L; Sauve, Reg S; Ghali, William A
2011-09-01
Central venous catheterization (CVC) is increasingly taught by simulation. The authors reviewed the literature on the effects of simulation training in CVC on learner and clinical outcomes. The authors searched computerized databases (1950 to May 2010), reference lists, and considered studies with a control group (without simulation education intervention). Two independent assessors reviewed the retrieved citations. Independent data abstraction was performed on study design, study quality score, learner characteristics, sample size, components of interventional curriculum, outcomes assessed, and method of assessment. Learner outcomes included performance measures on simulators, knowledge, and confidence. Patient outcomes included number of needle passes, arterial puncture, pneumothorax, and catheter-related infections. Twenty studies were identified. Simulation-based education was associated with significant improvements in learner outcomes: performance on simulators (standardized mean difference [SMD] 0.60 [95% CI 0.45 to 0.76]), knowledge (SMD 0.60 [95% CI 0.35 to 0.84]), and confidence (SMD 0.41 [95% CI 0.30 to 0.53] for studies with single-group pretest and posttest design; SMD 0.52 (95% CI 0.23 to 0.81) for studies with nonrandomized, two-group design). Furthermore, simulation-based education was associated with improved patient outcomes, including fewer needle passes (SMD -0.58 [95% CI -0.95 to -0.20]), and pneumothorax (relative risk 0.62 [95% CI 0.40 to 0.97]), for studies with nonrandomized, two-group design. However, simulation-based training was not associated with a significant reduction in risk of either arterial puncture or catheter-related infections. Despite some limitations in the literature reviewed, evidence suggests that simulation-based education for CVC provides benefits in learner and select clinical outcomes.
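The pooled effect sizes quoted above are standardized mean differences. The basic computation for a single two-group study can be sketched as follows (Cohen's d with a pooled SD; the meta-analysis itself additionally weights and combines studies, and the checklist-score numbers below are hypothetical):

```python
import math

def standardized_mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d: the difference in group means divided by the pooled standard
    deviation, so outcomes measured on different scales become comparable
    across studies."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# hypothetical performance scores: simulation-trained group vs control group
d = standardized_mean_difference(10.0, 2.0, 20, 8.0, 2.0, 20)  # 1.0
```

An SMD of 0.60, as reported for performance on simulators, means the trained group scored about 0.6 pooled standard deviations above the controls.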
Simulating the interaction of jets with the intracluster medium
NASA Astrophysics Data System (ADS)
Weinberger, Rainer; Ehlert, Kristian; Pfrommer, Christoph; Pakmor, Rüdiger; Springel, Volker
2017-10-01
Jets from supermassive black holes in the centres of galaxy clusters are a potential candidate for moderating gas cooling and subsequent star formation through depositing energy in the intracluster gas. In this work, we simulate the jet-intracluster medium interaction using the moving-mesh magnetohydrodynamics code arepo. Our model injects supersonic, low-density, collimated and magnetized outflows in cluster centres, which are then stopped by the surrounding gas, thermalize and inflate low-density cavities filled with cosmic rays. We perform high-resolution, non-radiative simulations of the lobe creation, expansion and disruption, and find that its dynamical evolution is in qualitative agreement with simulations of idealized low-density cavities that are dominated by a large-scale Rayleigh-Taylor instability. The buoyant rising of the lobe does not create energetically significant small-scale chaotic motion in a volume-filling fashion, but rather a systematic upward motion in the wake of the lobe and a corresponding back-flow antiparallel to it. We find that, overall, 50 per cent of the injected energy ends up in material that is not part of the lobe, and about 25 per cent remains in the inner 100 kpc. We conclude that jet-inflated, buoyantly rising cavities drive systematic gas motions that play an important role in heating the central regions, while mixing of lobe material is subdominant. Encouragingly, the main mechanisms responsible for this energy deposition can be modelled already at resolutions within reach in future, high-resolution cosmological simulations of galaxy clusters.
System calibration method for Fourier ptychographic microscopy.
Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli
2017-09-01
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore difficult to distinguish the dominant error source from these degraded reconstructions without prior knowledge. In addition, in real situations the systematic error is generally a mixture of various error sources that cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and an adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved in both simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic. © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).
A simulation based method to assess inversion algorithms for transverse relaxation data
NASA Astrophysics Data System (ADS)
Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong
2008-04-01
NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) in low field. Most samples have a distribution of T2 values, and extracting this distribution from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to solve this problem to date. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of an inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285] was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged on the true distribution by matching the inversion results from a series of true decay data and noisy simulated data. In addition to the simulation studies, the same approach was applied to real experimental data, which supported the simulation results.
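The simulated CPMG data generation at the heart of this approach can be sketched generically: a weighted sum of exponential decays from a known T2 distribution, plus Gaussian noise. The two-component distribution and parameter values below are illustrative, not those of the study:

```python
import math
import random

def simulate_cpmg(t2_weights, echo_times, noise_sd=0.01, seed=0):
    """Synthetic CPMG echo amplitudes: a multiexponential decay
    M(t) = sum_i w_i * exp(-t / T2_i), plus additive Gaussian noise."""
    rng = random.Random(seed)
    return [sum(w * math.exp(-t / t2) for t2, w in t2_weights.items())
            + rng.gauss(0.0, noise_sd)
            for t in echo_times]

echo_times = [0.002 * (k + 1) for k in range(200)]  # 2 ms echo spacing
t2_dist = {0.05: 0.6, 0.20: 0.4}                    # T2 (s) -> weight
noisy = simulate_cpmg(t2_dist, echo_times)
clean = simulate_cpmg(t2_dist, echo_times, noise_sd=0.0)
```

Feeding such synthetic decays, whose true distribution is known, to an inversion routine is exactly the kind of controlled check the study performs on UPEN.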
Review of 3-Dimensional Printing on Cranial Neurosurgery Simulation Training.
Vakharia, Vejay N; Vakharia, Nilesh N; Hill, Ciaran S
2016-04-01
Shorter working times, reduced operative exposure to complex procedures, and increased subspecialization have resulted in training constraints within most surgical fields. Simulation has been suggested as a possible means of acquiring new surgical skills without exposing patients to the surgeon's operative "learning curve." Here we review the potential impact of 3-dimensional printing on simulation and training within cranial neurosurgery and its implications for the future. In accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines, a comprehensive search of PubMed, OVID MEDLINE, Embase, and the Cochrane Database of Systematic Reviews was performed. In total, 31 studies relating to the use of 3-dimensional (3D) printing within neurosurgery, of which 16 were specifically related to simulation and training, were identified. The main impact of 3D printing on neurosurgical simulation training was within vascular surgery, where patient-specific replication of vascular anatomy and pathologies can aid surgeons in operative planning and clip placement for reconstruction of vascular anatomy. Models containing replicas of brain tumors have also been reconstructed and used for training purposes, with some providing realistic representations of skin, subcutaneous tissue, bone, dura, normal brain, and tumor tissue. 3D printing provides a unique means of directly replicating patient-specific pathologies. It can identify anatomic variation and provide a medium in which training models can be generated rapidly, allowing the trainee and experienced neurosurgeon to practice parts of operations preoperatively. Future studies are required to validate this technology in comparison with current simulators and show improved patient outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Richter, J.; Mayer, J.; Weigand, B.
2018-02-01
Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied with regard to misalignment of the interrogation beam and the frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of the frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
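The beat-frequency extraction step can be sketched generically: a direct DFT locating the strongest bin of a sampled, damped oscillation. The signal parameters below are invented; the study's actual post-processing and error analysis are more involved:

```python
import cmath
import math

def dominant_frequency(signal, dt):
    """Return the frequency (Hz) of the strongest positive-frequency DFT bin.
    Direct O(n^2) DFT; adequate for short records like LITA signals."""
    n = len(signal)
    best_mag, best_k = -1.0, 0
    for k in range(1, n // 2):  # skip the DC bin
        coeff = sum(signal[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))
        if abs(coeff) > best_mag:
            best_mag, best_k = abs(coeff), k
    return best_k / (n * dt)

# a damped 50 Hz beat sampled at 1 kHz for 256 samples
dt = 0.001
sig = [math.exp(-20 * j * dt) * math.cos(2 * math.pi * 50 * j * dt)
       for j in range(256)]
```

The bin spacing 1/(n*dt), about 3.9 Hz here, illustrates the abstract's point that the DFT's systematic error depends on the number of recorded oscillations and the damping rate.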
Maritime Continent seasonal climate biases in AMIP experiments of the CMIP5 multimodel ensemble
NASA Astrophysics Data System (ADS)
Toh, Ying Ying; Turner, Andrew G.; Johnson, Stephanie J.; Holloway, Christopher E.
2018-02-01
The fidelity of 28 Coupled Model Intercomparison Project phase 5 (CMIP5) models in simulating mean climate over the Maritime Continent in the Atmospheric Model Intercomparison Project (AMIP) experiment is evaluated in this study. The performance of the AMIP models varies greatly in reproducing the seasonal mean climate and the seasonal cycle. The multi-model mean has better skill at reproducing the observed mean climate than the individual models. The spatial pattern of 850 hPa wind is better simulated than that of precipitation in all four seasons. We found that model horizontal resolution is not a good indicator of model performance. Instead, a model's local Maritime Continent biases are somewhat related to its biases in the local Hadley circulation and the global monsoon. The comparison with coupled models in CMIP5 shows that the AMIP models generally performed better than the coupled models in simulating the global monsoon and local Hadley circulation, but less well at simulating the Maritime Continent annual cycle of precipitation. To characterize model systematic biases in the AMIP runs, we performed cluster analysis on the Maritime Continent annual cycle of precipitation. Our analysis resulted in two distinct clusters. Cluster I models capture both the winter monsoon and the summer monsoon shift, but they overestimate the precipitation, especially during the JJA and SON seasons. Cluster II models simulate a weaker seasonal migration than observed, and the maximum rainfall position stays closer to the equator throughout the year. The tropics-wide properties of these clusters suggest a connection between the skill of simulating global properties of the monsoon circulation and the skill of simulating the regional-scale Maritime Continent precipitation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce different signed circulation. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
Evaluation of East Asian climatology as simulated by seven coupled models
NASA Astrophysics Data System (ADS)
Jiang, Dabang; Wang, Huijun; Lang, Xianmei
2005-07-01
Using observation and reanalysis data for 1961–1990, the East Asian surface air temperature, precipitation and sea level pressure climatology as simulated by seven fully coupled atmosphere-ocean models, namely CCSR/NIES, CGCM2, CSIRO-Mk2, ECHAM4/OPYC3, GFDL-R30, HadCM3, and NCAR-PCM, is systematically evaluated in this study. It is indicated that the above models can successfully reproduce the annual and seasonal surface air temperature and precipitation climatology in East Asia, with relatively good performance for boreal autumn and the annual mean. The models simulate surface air temperature more reliably than precipitation. In addition, the models can dependably capture the geographical distribution pattern of annual, boreal winter, spring and autumn sea level pressure in East Asia. In contrast, relatively large simulation errors are displayed when simulated boreal summer sea level pressure is compared with reanalysis data in East Asia. It is revealed that the simulation errors for surface air temperature, precipitation and sea level pressure are generally large over and around the Tibetan Plateau. No individual model is best in every aspect. As a whole, the ECHAM4/OPYC3 and HadCM3 performances are much better, whereas CGCM2 is relatively poorer in East Asia. Additionally, the seven-model ensemble mean usually shows relatively high reliability.
Murphy, Margaret; Curtis, Kate; McCloughen, Andrea
2016-02-01
In-hospital emergencies require a structured team approach to facilitate simultaneous input into immediate resuscitation, stabilisation and prioritisation of care. Efforts to improve teamwork in the health care context include multidisciplinary simulation-based resuscitation team training, yet there is limited evidence demonstrating the value of these programmes.(1) We aimed to determine the current state of knowledge about the key components and impacts of multidisciplinary simulation-based resuscitation team training by conducting an integrative review of the literature. A systematic search using electronic (three databases) and hand-searching methods for primary research published between 1980 and 2014 was undertaken, followed by a rigorous screening and quality appraisal process. The included articles were assessed for similarities and differences; the content was grouped and synthesised to form three main categories of findings. Eleven primary research articles representing a variety of simulation-based resuscitation team training were included. Five studies involved trauma teams; two described resuscitation teams in the context of intensive care and operating theatres and one focused on the anaesthetic team. Simulation is an effective method to train resuscitation teams in the management of crisis scenarios and has the potential to improve team performance in the areas of communication, teamwork and leadership. Team training improves the performance of the resuscitation team in simulated emergency scenarios. However, the transferability of educational outcomes to the clinical setting needs to be more clearly demonstrated. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Sci—Fri PM: Dosimetry—05: Megavoltage electron backscatter: EGSnrc results versus 21 experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, E. S. M.; The Ottawa Hospital Cancer Centre, Ottawa; Buchenberg, W.
2014-08-15
The accuracy of electron backscatter calculations at megavoltage energies is important for many medical physics applications. In this study, EGSnrc calculations of megavoltage electron backscatter (1–22 MeV) are performed and compared to the data from 21 experiments published between 1954 and 1993 for 25 single elements with atomic numbers from 3 to 92. Typical experimental uncertainties are 15%. For EGSnrc simulations, an ideal detector is assumed, and the most accurate electron physics options are employed, for a combined statistical and systematic uncertainty of 3%. The quantities compared are the backscatter coefficient and the energy spectra (in the backward hemisphere and at specific detector locations). For the backscatter coefficient, the overall agreement is within ±2% in the absolute value of the backscatter coefficient (in per cent), and within 11% of the individual backscatter values. EGSnrc results are systematically on the higher end of the spread of the experimental data, which could be partially from systematic experimental errors discussed in the literature. For the energy spectra, reasonable agreement between simulations and experiments is observed, although there are significant variations in the experimental data. At the lower end of the spectra, simulations are higher than some experimental data, which could be due to reduced experimental sensitivity to lower energy electrons and/or over-estimation by EGSnrc for backscattered secondary electrons. In conclusion, overall good agreement is observed between EGSnrc backscatter calculations and experimental measurements for megavoltage electrons. There is a need for high quality experimental data for the energy spectra of backscattered electrons.
A systematic review of evidence for education and training interventions in microsurgery.
Ghanem, Ali M; Hachach-Haram, Nadine; Leung, Clement Chi Ming; Myers, Simon Richard
2013-07-01
Over the past decade, driven by advances in educational theory and pressures for efficiency in the clinical environment, there has been a shift in surgical education and training towards enhanced simulation training. Microsurgery is a technical skill with a steep competency learning curve on which the clinical outcome greatly depends. This paper investigates the evidence for educational and training interventions of traditional microsurgical skills courses in order to establish the best evidence practice in education and training and curriculum design. A systematic review of the MEDLINE, EMBASE, and PubMed databases was performed to identify randomized controlled trials looking at educational and training interventions that objectively improved microsurgical skill acquisition, and these were critically appraised using the BestBETs group methodology. The database searches yielded 1,148, 1,460, and 2,277 citations, respectively. These were further limited to randomized controlled trials, from which abstract reviews reduced the number to five relevant randomized controlled clinical trials. The best evidence supported a laboratory-based, low-fidelity model microsurgical skills curriculum. There was strong evidence that technical skills acquired on low-fidelity models transfer to improved performance on higher-fidelity human cadaver models and that self-directed practice leads to improved technical performance. Although there is a significant paucity in the literature to support current microsurgical education and training practices, simulated training on low-fidelity models in microsurgery is an effective intervention that leads to acquisition of transferable skills and improved technical performance. Further research to identify educational interventions associated with accelerated skill acquisition is required.
Medium-range Performance of the Global NWP Model
NASA Astrophysics Data System (ADS)
Kim, J.; Jang, T.; Kim, J.; Kim, Y.
2017-12-01
The medium-range performance of the global numerical weather prediction (NWP) model at the Korea Meteorological Administration (KMA) is investigated, based on prediction of the extratropical circulation. The mean square error can be expressed as the sum of the spatial variance of the discrepancy between forecasts and observations and the square of the mean error (ME), so it is important to investigate the ME contribution in order to understand model performance. The ME is obtained by subtracting an anomaly from the forecast difference against the real climatology. It is found that the global model suffers from a severe systematic ME in medium-range forecasts. The systematic ME is dominant throughout the troposphere in all months and can explain up to 25% of the root mean square error. We also compare the extratropical ME distribution with those from other NWP centers; the models exhibit similar spatial ME structures. The spatial ME pattern is highly correlated with that of the anomaly, implying that the ME varies with the seasons: for example, the correlation coefficient between ME and anomaly ranges from -0.51 to -0.85 by month. The pattern of the extratropical circulation is also highly correlated with the anomaly. The global model has trouble faithfully simulating extratropical cyclones and blockings in the medium-range forecast; in particular, it has a hard time simulating anomalous events. If an anomalous period is chosen for a test-bed experiment, a large error due to the anomaly must be expected.
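The error decomposition invoked above (mean square error equals the spatial variance of the forecast-observation discrepancy plus the squared mean error) is a standard identity and can be verified numerically; the sketch below uses synthetic fields, not KMA model output:

```python
import numpy as np

rng = np.random.default_rng(0)
forecast = rng.normal(1.0, 2.0, size=10_000)   # synthetic forecast field
observed = rng.normal(0.0, 1.0, size=10_000)   # synthetic observations

d = forecast - observed                        # forecast-observation discrepancy
mse = np.mean(d ** 2)                          # mean square error
me = np.mean(d)                                # mean (systematic) error
decomposed = np.var(d) + me ** 2               # spatial variance + ME^2

assert np.isclose(mse, decomposed)             # MSE = Var(d) + ME^2 holds exactly
```

The identity is exact because `np.var` uses the population normalization (ddof=0), matching the expectation-based decomposition E[d²] = Var(d) + (E[d])².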
Surgical simulation training in orthopedics: current insights.
Kalun, Portia; Wagner, Natalie; Yan, James; Nousiainen, Markku T; Sonnadara, Ranil R
2018-01-01
While the knowledge required of residents training in orthopedic surgery continues to increase, various factors, including reductions in work hours, have resulted in decreased clinical learning opportunities. Recent work suggests residents graduate from their training programs without sufficient exposure to key procedures. In response, simulation is increasingly being incorporated into training programs to supplement clinical learning. This paper reviews the literature to explore whether skills learned in simulation-based settings result in improved clinical performance in orthopedic surgery trainees. A scoping review of the literature was conducted to identify papers discussing simulation training in orthopedic surgery. We focused on exploring whether skills learned in simulation transferred effectively to a clinical setting. Experimental studies, systematic reviews, and narrative reviews were included. A total of 15 studies were included: 11 review papers and four experimental studies. The review articles reported little evidence regarding the transfer of skills from simulation to the clinical setting, strong evidence that simulator models discriminate among different levels of experience, varied outcome measures among studies, and a need to define competent performance in both simulated and clinical settings. Furthermore, while three of the four experimental studies demonstrated transfer between the simulated and clinical environments, methodological study design issues were identified. Our review identifies weak evidence as to whether skills learned in simulation transfer effectively to clinical practice for orthopedic surgery trainees. Given the increased reliance on simulation, there is an immediate need for comprehensive studies that focus on skill transfer, which will allow simulation to be incorporated effectively into orthopedic surgery training programs.
NASA Astrophysics Data System (ADS)
Nengker, T.; Choudhary, A.; Dimri, A. P.
2018-04-01
The ability of an ensemble of five regional climate models (RCMs) from the Coordinated Regional Climate Downscaling Experiment-South Asia (CORDEX-SA) to simulate the key features of present-day near-surface mean air temperature (Tmean) climatology (1970-2005) over the Himalayan region is studied. The purpose of this paper is to understand the consistency in the performance of the models across the ensemble, space and seasons. For this, a number of statistical measures (trend, correlation, variance, probability distribution function, etc.) are applied to evaluate the performance of the models against observation, and simultaneously the underlying uncertainties between them, for four different seasons. The most evident finding is a large cold bias (-6 to -8 °C) seen systematically across all the models and across space and time over the Himalayan region. However, these fine-resolution RCMs perform extremely well in capturing the spatial distribution of the temperature features, as indicated by a consistently high spatial correlation (greater than 0.9) with the observation in all seasons. In spite of the underestimation of simulated temperature and a general intensification of the cold bias with increasing elevation, the models show a greater rate of warming than the observation throughout the entire altitudinal stretch of the study region. During winter, the simulated rate of warming becomes even higher at high altitudes. Moreover, a seasonal response of model performance and its spatial variability to elevation is found.
Comparing FDTD and Ray-Tracing Models in Numerical Simulation of HgCdTe LWIR Photodetectors
NASA Astrophysics Data System (ADS)
Vallone, Marco; Goano, Michele; Bertazzi, Francesco; Ghione, Giovanni; Schirmacher, Wilhelm; Hanna, Stefan; Figgemeier, Heinrich
2016-09-01
We present a simulation study of HgCdTe-based long-wavelength infrared detectors, focusing on methodological comparisons between the finite-difference time-domain (FDTD) and ray-tracing optical models. We performed three-dimensional simulations to determine the absorbed photon density distributions and the corresponding photocurrent and quantum efficiency spectra of isolated n-on- p uniform-composition pixels, systematically comparing the results obtained with FDTD and ray tracing. Since ray tracing is a classical optics approach, unable to describe interference effects, its applicability has been found to be strongly wavelength dependent, especially when reflections from metallic layers are relevant. Interesting cavity effects around the material cutoff wavelength are described, and the cases where ray tracing can be considered a viable approximation are discussed.
Numerical simulation of mechanical mixing in high solid anaerobic digester.
Yu, Liang; Ma, Jingwei; Chen, Shulin
2011-01-01
Computational fluid dynamics (CFD) was employed to study mixing performance in a high-solids anaerobic digester (HSAD) with an A-310 impeller and a helical ribbon. A mathematical model was constructed to assess the flow fields. Good agreement of the model results with experimental data was obtained for the A-310 impeller. A systematic comparison of the interrelationship of power number, flow number and Reynolds number was simulated in a digester with less than 5% TS and with 10% TS (total solids). The simulation results suggested a great potential for using the helical ribbon mixer in high-solids digesters. The results also provided quantitative confirmation of the minimum power consumption in HSAD and of the effect of shear rate on bio-structure. Copyright © 2010 Elsevier Ltd. All rights reserved.
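The dimensionless groups compared in the study follow the standard stirred-tank definitions; a minimal sketch with purely illustrative values (the digester's actual geometry and rheology are not given here, and for a 10% TS non-Newtonian sludge the viscosity would be an apparent viscosity):

```python
def reynolds_number(rho, n, d, mu):
    """Impeller Reynolds number: Re = rho * N * D^2 / mu."""
    return rho * n * d ** 2 / mu

def power_number(p, rho, n, d):
    """Power number: Np = P / (rho * N^3 * D^5)."""
    return p / (rho * n ** 3 * d ** 5)

def flow_number(q, n, d):
    """Flow number: Nq = Q / (N * D^3)."""
    return q / (n * d ** 3)

# Illustrative values only: water-like density, 2 rev/s, 0.3 m impeller,
# apparent viscosity 0.5 Pa.s standing in for a high-solids sludge.
re = reynolds_number(rho=1000.0, n=2.0, d=0.3, mu=0.5)  # -> 360.0 (laminar-transitional)
```

At low Re the power number rises steeply, which is why power draw is the key constraint when mixing high-solids digestate.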
A Numerical Simulation and Statistical Modeling of High Intensity Radiated Fields Experiment Data
NASA Technical Reports Server (NTRS)
Smith, Laura J.
2004-01-01
Tests are conducted on a quad-redundant fault-tolerant flight control computer to establish upset characteristics of an avionics system in an electromagnetic field. A numerical simulation and a statistical model are described in this work to analyze the open-loop experiment data collected in the reverberation chamber at NASA LaRC as part of an effort to examine the effects of electromagnetic interference on fly-by-wire aircraft control systems. By comparing thousands of simulation and model outputs, the models that best describe the data are first identified, and then a systematic statistical analysis is performed on the data. These efforts culminate in an extrapolation of values that are in turn used to support previous efforts in evaluating the data.
NASA Astrophysics Data System (ADS)
Li, Kai; Deng, Haixiao
2018-07-01
The Shanghai Coherent Light Facility (SCLF) is a quasi-continuous wave hard X-ray free electron laser facility, which is currently under construction. Due to the high repetition rate and high-quality electron beams, it is straightforward to consider X-ray free electron laser oscillator (XFELO) operation for the SCLF. In this paper, the main processes for XFELO design, and parameter optimization of the undulator, X-ray cavity, and electron beam are described. A three-dimensional X-ray crystal Bragg diffraction code, named BRIGHT, was introduced for the first time, which can be combined with the GENESIS and OPC codes for the numerical simulations of the XFELO. The performance of the XFELO of the SCLF is investigated and optimized by theoretical analysis and numerical simulation.
Uncertainty Quantification in Alchemical Free Energy Methods.
Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V
2018-06-12
Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods implementing different accelerated sampling techniques and free energy estimators are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation (an ensemble of independent MD simulations), which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
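The ensemble approach the authors advocate (error bars from independent replica simulations rather than one long trajectory) can be sketched generically; the "replica" values below are synthetic stand-ins, not output from any MD engine or free energy estimator:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in: each "replica" is one independent simulation's
# free energy estimate (kcal/mol); real values would come from e.g. TI or FEP.
replicas = rng.normal(loc=-7.2, scale=0.6, size=25)

estimate = replicas.mean()

# Bootstrap over ensemble members to quantify the uncertainty of the mean.
boot = np.array([
    rng.choice(replicas, size=replicas.size, replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"dG = {estimate:.2f} kcal/mol, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The point of the protocol is that the replica-to-replica spread, not the statistical noise within a single trajectory, sets the honest error bar.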
Carnahan, Heather; Herold, Jodi
2015-01-01
Purpose: To review the literature on simulation-based learning experiences and to examine their potential to have a positive impact on physiotherapy (PT) learners' knowledge, skills, and attitudes in entry-to-practice curricula. Method: A systematic literature search was conducted in the MEDLINE, CINAHL, Embase Classic+Embase, Scopus, and Web of Science databases, using keywords such as physical therapy, simulation, education, and students. Results: A total of 820 abstracts were screened, and 23 articles were included in the systematic review. While there were few randomized controlled trials with validated outcome measures, the review yielded some findings about simulation that can positively affect the design of PT entry-to-practice curricula. Using simulators to provide specific output feedback can help students learn specific skills. Computer simulations can also augment students' learning experience. Human simulation experiences in managing the acute patient in the ICU are well received by students, positively influence their confidence, and decrease their anxiety. There is evidence that simulated learning environments can replace a portion of a full-time 4-week clinical rotation without impairing learning. Conclusions: Simulation-based learning activities are being effectively incorporated into PT curricula. More rigorously designed experimental studies that include a cost–benefit analysis are necessary to help curriculum developers make informed choices in curriculum design. PMID:25931672
The Abundance of Large Arcs From CLASH
NASA Astrophysics Data System (ADS)
Xu, Bingxiao; Postman, Marc; Meneghetti, Massimo; Coe, Dan A.; Clash Team
2015-01-01
We have developed an automated arc-finding algorithm to perform a rigorous comparison of the observed and simulated abundance of large lensed background galaxies (a.k.a. arcs). We use images from the CLASH program to derive our observed arc abundance. Simulated CLASH images are created by performing ray tracing through mock clusters generated by the N-body-simulation-calibrated tool MOKA and by N-body/hydrodynamic simulations (MUSIC), over the same mass and redshift range as the CLASH X-ray selected sample. We derive a lensing efficiency of 15 ± 3 arcs per cluster for the X-ray selected CLASH sample and 4 ± 2 arcs per cluster for the simulated sample. The marginally significant difference (3.0 σ) between the results for the observations and the simulations can be explained by the systematically smaller area with magnification larger than 3 (by a factor of ˜4) in both the MOKA and MUSIC mass models relative to those derived from the CLASH data. Accounting for this difference brings the observed and simulated arc statistics into full agreement. We find that the source redshift distribution does not have a big impact on the arc abundance, but the arc abundance is very sensitive to the concentration of the dark matter halos. Our results suggest that the solution to the "arc statistics problem" lies primarily in matching the cluster dark matter distribution.
Jiang, Jie; Yu, Wenbo; Zhang, Guangjun
2017-01-01
Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Requirements for accuracy assessment of an INS in a real work environment are exceedingly urgent because of the enormous differences between real work and laboratory test environments. An attitude accuracy assessment of an INS based on the intensified high dynamic star tracker (IHDST) is particularly suitable for a real, complex dynamic environment. However, the coupled systematic coordinate errors of the INS and the IHDST severely decrease the attitude assessment accuracy. Given that, a high-accuracy decoupling estimation method for the above systematic coordinate errors, based on the constrained least squares (CLS) method, is proposed in this paper. The reference frame of the IHDST is first converted to be consistent with that of the INS, because their reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established, and the CLS-based optimization method is utilized to estimate the errors accurately. After compensating for these errors, the attitude accuracy of the INS can be assessed based on the IHDST accurately. Both simulated experiments and real flight experiments of aircraft are conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real work environment. PMID:28991179
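The paper's decoupling model itself is not reproduced here, but the generic equality-constrained least-squares machinery it builds on (minimize ||Ax - b||² subject to Cx = d) can be sketched by solving the KKT linear system; the toy problem and all values are hypothetical:

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Solve min ||Ax - b||^2 subject to Cx = d via the KKT linear system:

        [ A^T A  C^T ] [ x      ]   [ A^T b ]
        [ C      0   ] [ lambda ] = [ d     ]
    """
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                      # discard the Lagrange multipliers

# Toy problem: fit 3 parameters while forcing them to sum to 1.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 3))
b = A @ np.array([0.2, 0.3, 0.5]) + 0.01 * rng.normal(size=20)
x = constrained_lsq(A, b, C=np.ones((1, 3)), d=np.array([1.0]))
assert np.isclose(x.sum(), 1.0)         # constraint satisfied exactly
```

The KKT formulation requires AᵀA to be positive definite on the null space of C and C to have full row rank, which holds for this toy problem.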
Crash testing difference-smoothing algorithm on a large sample of simulated light curves from TDC1
NASA Astrophysics Data System (ADS)
Rathna Kumar, S.
2017-09-01
In this work, we propose refinements to the difference-smoothing algorithm for the measurement of the time delay from the light curves of the images of a gravitationally lensed quasar. The refinements mainly consist of a more pragmatic approach to choosing the smoothing time-scale free parameter, generation of more realistic synthetic light curves for the estimation of the time delay uncertainty, and use of a plot of normalized χ2 computed over a wide range of trial time delay values to assess the reliability of a measured time delay and to identify instances of catastrophic failure. We rigorously tested the difference-smoothing algorithm on a large sample of more than a thousand pairs of simulated light curves having known true time delays between them, from the two most difficult 'rungs' - rung3 and rung4 - of the first edition of the Strong Lens Time Delay Challenge (TDC1), and found an inherent tendency of the algorithm to measure the magnitude of the time delay to be higher than the true value. However, we find that this systematic bias is eliminated by applying a correction to each measured time delay according to the magnitude and sign of the systematic error inferred by applying the time delay estimator to synthetic light curves simulating the measured time delay. Following these refinements, the TDC performance metrics for the difference-smoothing algorithm are competitive with those of the best-performing submissions of TDC1 for both tested 'rungs'. The MATLAB codes used in this work and the detailed results are made publicly available.
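The refined difference-smoothing algorithm itself is not shown here, but the underlying idea of scanning trial time delays and scoring each with a normalized χ² can be illustrated with a simplified grid-search sketch on synthetic light curves (all names and values are illustrative):

```python
import numpy as np

def chi2_curve(t, a, b, sigma, trial_delays):
    """Normalized chi^2 between curve a(t) and curve b evaluated at t + tau."""
    out = []
    for tau in trial_delays:
        b_shifted = np.interp(t + tau, t, b)             # interpolated shift of b
        valid = (t + tau >= t[0]) & (t + tau <= t[-1])   # drop extrapolated ends
        r = (a - b_shifted)[valid]
        out.append(np.mean((r / sigma) ** 2))
    return np.array(out)

def flux(x):
    """Synthetic quasar-like variability: two incommensurate sine components."""
    return np.sin(0.11 * x) + 0.4 * np.sin(0.031 * x)

# Synthetic image pair: curve b lags curve a by a true delay of 5.0 days.
t = np.linspace(0.0, 200.0, 401)
a, b = flux(t), flux(t - 5.0)

trials = np.arange(-20.0, 20.01, 0.1)
chi2 = chi2_curve(t, a, b, sigma=0.05, trial_delays=trials)
best = trials[np.argmin(chi2)]          # expected to land near the true 5.0 day delay
```

A real analysis would add noise, irregular sampling and microlensing trends, which is exactly where the χ²-vs-delay plot helps flag catastrophic failures.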
A systematic intercomparison of regional flood frequency analysis models in a simulation framework
NASA Astrophysics Data System (ADS)
Ganora, Daniele; Laio, Francesco; Claps, Pierluigi
2015-04-01
Regional frequency analysis (RFA) is a well-established methodology to provide an estimate of the flood frequency curve (or other discharge-related variables), based on the fundamental concept of substituting temporal information at a site (no data or short time series) by exploiting observations at other sites (spatial information). Different RFA paradigms exist, depending on the way the information is transferred to the site of interest. Despite the wide use of such methodology, a systematic comparison between these paradigms has not been performed. The aim of this study is to provide a framework for carrying out this intercomparison: we synthetically generate data through Monte Carlo simulations for a number of (virtual) stations, following a GEV parent distribution; different scenarios can be created to represent different spatial heterogeneity patterns by manipulating the parameters of the parent distribution at each station (e.g. with a linear variation in space of the shape parameter of the GEV). A special case is the homogeneous scenario, where each station record is sampled from the same parent distribution. For each scenario and each simulation, different regional models are applied to evaluate the 200-year growth factor at each station. Results are then compared to the exact growth factor of each station, which is known in our virtual world. The considered regional approaches include: (i) a single growth curve for the whole region; (ii) a multiple-region model based on cluster analysis, which searches for an adequate number of homogeneous subregions; (iii) a region-of-influence model, which defines a homogeneous subregion for each site; and (iv) a spatially smooth estimation procedure based on linear regressions. A further benchmark model is the at-site estimate based on the analysis of the local record.
A comprehensive analysis of the results of the simulations shows that, if the scenario is homogeneous (no spatial variability), all the regional approaches have comparable performances. Moreover, as expected, regional estimates are much more reliable than the at-site estimates. If the scenario is heterogeneous, the performances of the regional models depend on the pattern of heterogeneity; in general, however, the spatially smooth regional approach performs better than the others, and its performance improves with increasing record length. For heterogeneous scenarios, the at-site estimates appear comparatively more efficient than in the homogeneous case, and in general less biased than the regional estimates.
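The homogeneous scenario described above can be sketched end to end: sample every station's record from one GEV parent, then compare at-site fits against a pooled "single growth curve" estimate of the 200-year growth factor. This is a minimal illustration with assumed parameters, not the study's actual experimental design (note that scipy's shape parameter c equals -ξ in the common GEV convention):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
n_sites, record_len = 30, 40
parent = genextreme(c=-0.1, loc=100.0, scale=30.0)    # one common GEV parent

# Homogeneous scenario: every station record comes from the same parent.
records = parent.rvs(size=(n_sites, record_len), random_state=rng)

T = 200.0
q_true = parent.ppf(1 - 1 / T) / parent.mean()        # exact growth factor

# At-site benchmark: fit a GEV to each short record separately.
at_site = np.array([
    genextreme.ppf(1 - 1 / T, *genextreme.fit(rec)) / rec.mean()
    for rec in records
])

# Regional paradigm (i): pool all index-flood-normalized data, fit once.
pooled = (records / records.mean(axis=1, keepdims=True)).ravel()
regional = genextreme.ppf(1 - 1 / T, *genextreme.fit(pooled))

err_at_site = np.mean((at_site - q_true) ** 2)        # spread of at-site errors
err_regional = (regional - q_true) ** 2               # single regional error
```

In this virtual homogeneous world the pooled fit uses 1200 normalized observations instead of 40 per site, which is the intuition behind the finding that regional estimates are more reliable than at-site ones.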
Ogurtsova, Katherine; Heise, Thomas L; Linnenkamp, Ute; Dintsios, Charalabos-Markos; Lhachimi, Stefan K; Icks, Andrea
2017-12-29
Type 2 diabetes mellitus (T2DM), a highly prevalent chronic disease, puts a large burden on individual health and health care systems. Computer simulation models, used to evaluate the clinical and economic effectiveness of various interventions to handle T2DM, have become a well-established tool in diabetes research. Despite the broad consensus about the general importance of validation, especially external validation, as a crucial instrument for assessing and controlling the quality of these models, there are no systematic reviews comparing such validation of diabetes models. As a result, the main objectives of this systematic review are to identify and appraise the different approaches used for the external validation of existing models covering the development and progression of T2DM. We will perform adapted searches by applying respective search strategies to identify suitable studies from 14 electronic databases. Retrieved study records will be included or excluded based on predefined eligibility criteria as defined in this protocol. Among others, a publication filter will exclude studies published before 1995. We will run abstract and full-text screenings and then extract data from all selected studies by filling in a predefined data extraction spreadsheet. We will undertake a descriptive, narrative synthesis of findings to address the study objectives. We will pay special attention to aspects of quality of these models with regard to the external validation, based upon ISPOR and ADA recommendations as well as Mount Hood Challenge reports. All critical stages within the screening, data extraction and synthesis processes will be conducted by at least two authors. This protocol adheres to PRISMA and PRISMA-P standards. The proposed systematic review will provide a broad overview of the current practice in the external validation of models with respect to T2DM incidence and progression in humans built on simulation techniques. PROSPERO CRD42017069983.
2011-03-01
variability of the volunteers and draftees being inducted. The military use of aircraft led to an emerging need to systematically train pilots for...shown to be the least effective factor in deciding who would perform well. In another study, the U.S. Army developed a computer-based simulation...
2010-02-24
A nested Faraday probe was designed and fabricated to assess facility effects in a systematic study of ion migration in a Hall thruster plume...Current density distributions were studied at 8, 12, 16, and 20 thruster diameters downstream of the Hall thruster exit plane with four probe configurations...measurements are a significant improvement for comparisons with numerical simulations and investigations of Hall thruster performance.
In Situ Operating Room-Based Simulation: A Review.
Owei, Lily; Neylan, Christopher J; Rao, Raghavendra; Caskey, Robert C; Morris, Jon B; Sensenig, Richard; Brooks, Ari D; Dempsey, Daniel T; Williams, Noel N; Atkins, Joshua H; Baranov, Dimitry Y; Dumon, Kristoffel R
To systematically review the literature surrounding operating room-based in situ training in surgery. A systematic review was conducted of MEDLINE. The review was conducted based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, and employed the Population, Intervention, Comparator, Outcome (PICO) structure to define inclusion/exclusion criteria. The Kirkpatrick model was used to further classify the outcome of in situ training when possible. The search returned 308 database hits, and ultimately 19 articles were identified that met the stated PICO inclusion criteria. Operating room-based in situ simulation is used for a variety of purposes and in a variety of settings, and it has the potential to offer unique advantages over other types of simulation. Only one randomized controlled trial was conducted comparing in situ simulation to off-site simulation, which found few significant differences. One large-scale outcome study showed improved perinatal outcomes in obstetrics. Although in situ simulation theoretically offers certain advantages over other types of simulation, especially in addressing system-wide or environmental threats, its efficacy has yet to be clearly demonstrated. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Marathe, A R; Taylor, D M
2015-08-01
Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems.
The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.
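The "equal magnitude, different power distribution" manipulation described above can be imitated by spectrally shaping white noise and rescaling to a common RMS, so that only the frequency content differs; this is a simplified stand-in for the study's actual noise generation:

```python
import numpy as np

def shaped_noise(n, kind, rms, rng):
    """White noise pushed toward low or high frequencies, rescaled to a target RMS."""
    spec = np.fft.rfft(rng.normal(size=n))
    f = np.fft.rfftfreq(n)                       # normalized frequency, 0..0.5
    if kind == "low":
        spec *= 1.0 / (1.0 + 50.0 * f)           # emphasize slow-varying components
    else:
        spec *= f / f.max()                      # emphasize high-frequency jitter
    x = np.fft.irfft(spec, n)
    return x * (rms / np.sqrt(np.mean(x ** 2)))  # equalize "error magnitude"

rng = np.random.default_rng(3)
slow = shaped_noise(2000, "low", rms=1.0, rng=rng)    # slow-time-varying errors
fast = shaped_noise(2000, "high", rms=1.0, rng=rng)   # jittery errors
# Both signals now have identical RMS; only their spectra differ.
```

Adding such signals to a cursor velocity in real time reproduces the key experimental contrast: equal-magnitude errors whose frequency content, not size, drives the difficulty of correction.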
NASA Astrophysics Data System (ADS)
Marathe, A. R.; Taylor, D. M.
2015-08-01
Objective. Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. Approach. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Main results. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. Significance. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. 
The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.
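The noise-injection step of such a simulator can be sketched in a few lines: white Gaussian noise is shaped by a first-order low-pass filter to set its frequency content, then rescaled so the error *magnitude* is identical across conditions. This is a minimal illustration under our own assumptions (filter constant, RMS target, and helper names are ours), not the study's actual simulator.

```python
import math
import random

def rms(xs):
    """Root-mean-square amplitude of a sequence."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def lag1_autocorr(xs):
    """Lag-1 autocorrelation, to confirm the frequency-content difference."""
    return sum(a * b for a, b in zip(xs, xs[1:])) / sum(x * x for x in xs)

def make_error(n, smoothing, target_rms, seed=0):
    """White Gaussian noise passed through a first-order low-pass filter
    (smoothing in [0, 1): 0 = jittery/high-frequency, near 1 = slow drift),
    then rescaled to a fixed RMS so magnitude is matched across conditions."""
    rng = random.Random(seed)
    raw, state = [], 0.0
    for _ in range(n):
        state = smoothing * state + (1.0 - smoothing) * rng.gauss(0.0, 1.0)
        raw.append(state)
    scale = target_rms / rms(raw)
    return [x * scale for x in raw]

jittery = make_error(2000, smoothing=0.0, target_rms=0.5)
drifting = make_error(2000, smoothing=0.95, target_rms=0.5)

# Same magnitude, very different frequency content:
assert abs(rms(jittery) - 0.5) < 1e-9
assert abs(rms(drifting) - 0.5) < 1e-9
assert lag1_autocorr(drifting) > lag1_autocorr(jittery)
```

Matching RMS across conditions is what lets an experiment attribute performance differences to error *frequency content* rather than error size.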
Systematic review of coaching to enhance surgeons' operative performance.
Min, Hyeyoun; Morales, Dianali Rivera; Orgill, Dennis; Smink, Douglas S; Yule, Steven
2015-11-01
There is increasing attention on the coaching of surgeons and trainees to improve performance, but no comprehensive review of this topic exists. The purpose of this review is to summarize the quantity and the quality of studies involving surgical coaching methods and their effectiveness. We performed a systematic literature search through PubMed and PsycINFO by using predefined inclusion criteria. Evidence for main outcome categories was evaluated with the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system and the Medical Education Research Study Quality Instrument (MERSQI). Of a total of 3,063 articles, 23 met our inclusion criteria: 4 randomized controlled trials and 19 observational studies. We categorized the articles into 4 groups on the basis of the outcome studied: perception, attitude and opinion; technical skills; nontechnical skills; and performance measures. Overall strength of evidence for each outcome group was as follows: perception, attitude, and opinion (GRADE: Very Low; MERSQI: 10); technical skills (randomized controlled trials: High, MERSQI 13.1; observational studies: Very Low, MERSQI 11.5); nontechnical skills (Very Low, 12.4); and performance measures (Very Low, 13.6). Simulation was the most commonly used setting for coaching; more than half of the studies deployed an experienced surgeon as a coach and showed that coaching was effective. Surgical coaching interventions have a positive impact on learners' perceptions and attitudes, their technical and nontechnical skills, and performance measures. Evidence of impact on patient outcomes was limited, and the quality of research studies was variable. Despite this, our systematic review of different coaching interventions will benefit future coaching strategies and implementation to enhance operative performance. Copyright © 2015 Elsevier Inc. All rights reserved.
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric: the weighting factor for each model is proportional to a performance score, or inversely proportional to an error, for the model. While this conventional approach can provide appropriate combinations of multiple models, it faces a major challenge when multiple metrics are under consideration: a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. To date, there is no established best method for generating weighted multi-model ensembles based on multiple performance metrics. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combine multiple performance metrics for global climate models and their dynamically downscaled regional climate simulations over North America, and to generate a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly, with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. 
Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
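The conventional single-metric baseline the abstract describes can be sketched directly: weights inversely proportional to each model's error score, applied to combine forecasts. The error scores and forecast values below are hypothetical, and this one-metric sketch deliberately does not capture the paper's multi-objective trade-off between conflicting metrics.

```python
def inverse_error_weights(errors):
    """Weight each model inversely to its error score (the conventional
    single-metric approach); weights sum to 1."""
    inv = [1.0 / e for e in errors]
    total = sum(inv)
    return [w / total for w in inv]

def weighted_ensemble(forecasts, weights):
    """Combine per-model forecasts (equal-length lists) into one
    weighted ensemble time series."""
    return [sum(w * f[i] for w, f in zip(weights, forecasts))
            for i in range(len(forecasts[0]))]

# Three hypothetical models with RMSE-like error scores:
errors = [1.0, 2.0, 4.0]
w = inverse_error_weights(errors)
assert abs(sum(w) - 1.0) < 1e-12
assert w[0] > w[1] > w[2]          # best model gets the largest weight

forecasts = [[10.0, 12.0], [11.0, 13.0], [14.0, 18.0]]
ens = weighted_ensemble(forecasts, w)
```

Multi-objective optimization replaces the single `errors` list with several conflicting metrics and searches for Pareto-optimal weight vectors instead of this closed-form rule.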
ERIC Educational Resources Information Center
Gu, X.; Blackmore, K. L.
2015-01-01
This paper presents the results of a systematic review of agent-based modelling and simulation (ABMS) applications in the higher education (HE) domain. Agent-based modelling is a "bottom-up" modelling paradigm in which system-level behaviour (macro) is modelled through the behaviour of individual local-level agent interactions (micro).…
Willis, B H; Barton, P; Pearmain, P; Bryan, S; Hyde, C
2005-03-01
To assess the effectiveness and cost-effectiveness of adding automated image analysis to cervical screening programmes. Searching of all major electronic databases to the end of 2000 was supplemented by a detailed survey for unpublished UK literature. Four systematic reviews were conducted according to recognised guidance. The review of 'clinical effectiveness' included studies assessing reproducibility and impact on health outcomes and processes in addition to evaluations of test accuracy. A discrete event simulation model was developed, although the economic evaluation ultimately relied on a cost-minimisation analysis. The predominant finding from the systematic reviews was the very limited amount of rigorous primary research. None of the included studies referred to the only commercially available automated image analysis device in 2002, the AutoPap Guided Screening (GS) System. The results of the included studies were arguably most compatible with automated image analysis being equivalent in test performance to manual screening. Concerning process, there was evidence that automation does lead to reductions in average slide processing times: in the PRISMATIC trial this was reduced from 10.4 to 3.9 minutes, a statistically significant and practically important difference. The economic evaluation tentatively suggested that the AutoPap GS System may be efficient. The key proviso is that credible data become available to support that the AutoPap GS System has test performance and processing times equivalent to those obtained for PAPNET. The available evidence is still insufficient to recommend implementation of automated image analysis systems. The priority for action remains further research, particularly on the 'clinical effectiveness' of the AutoPap GS System. Assessing the cost-effectiveness of introducing automation alongside other approaches is also a priority.
A BLIND METHOD TO DETREND INSTRUMENTAL SYSTEMATICS IN EXOPLANETARY LIGHT CURVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morello, G., E-mail: giuseppe.morello.11@ucl.ac.uk
2015-07-20
The study of the atmospheres of transiting exoplanets requires a photometric precision, and repeatability, of one part in ∼10^4. This is beyond the original calibration plans of current observatories, hence the necessity to disentangle the instrumental systematics from the astrophysical signals in raw data sets. Most methods used in the literature are based on an approximate instrument model. The choice of parameters of the model and their functional forms can sometimes be subjective, causing controversies in the literature. Recently, Morello et al. (2014, 2015) developed a non-parametric detrending method that gave coherent and repeatable results when applied to Spitzer/IRAC data sets that were debated in the literature. Said method is based on independent component analysis (ICA) of individual pixel time series, hereafter "pixel-ICA". The main purpose of this paper is to investigate the limits and advantages of pixel-ICA on a series of simulated data sets with different instrument properties, and a range of jitter timescales and shapes, non-stationarity, sudden change points, etc. The performance of pixel-ICA is compared against that of other methods, in particular polynomial centroid division and the pixel-level decorrelation method. We find that in simulated cases pixel-ICA performs as well as or better than other methods, and it also guarantees a higher degree of objectivity, because of its purely statistical foundation with no prior information on the instrument systematics. The results of this paper, together with previous analyses of Spitzer/IRAC data sets, suggest that photometric precision and repeatability of one part in 10^4 can be achieved with current infrared space instruments.
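The "polynomial centroid" detrending that pixel-ICA is benchmarked against can be sketched in its simplest form: divide the light curve by a least-squares straight-line fit of flux against detector centroid position. Real analyses use higher-order two-dimensional polynomials in x and y; the function and synthetic data below are purely illustrative.

```python
def detrend_centroid(flux, centroid):
    """Divide the light curve by a straight-line least-squares fit of
    flux against centroid position -- a minimal 1D version of the
    polynomial-centroid detrending method."""
    n = len(flux)
    mx = sum(centroid) / n
    my = sum(flux) / n
    sxx = sum((x - mx) ** 2 for x in centroid)
    sxy = sum((x - mx) * (y - my) for x, y in zip(centroid, flux))
    b = sxy / sxx                      # least-squares slope
    a = my - b * mx                    # intercept
    return [y / (a + b * x) for y, x in zip(flux, centroid)]

# Synthetic data: constant star, flux modulated linearly by pointing jitter.
centroid = [0.0, 0.1, 0.2, 0.3, 0.4]
flux = [100.0 * (1.0 + 0.05 * x) for x in centroid]
corrected = detrend_centroid(flux, centroid)
assert all(abs(c - 1.0) < 1e-9 for c in corrected)
```

The subjectivity the abstract mentions enters through the choice of polynomial order and which centroid terms to include; pixel-ICA avoids that choice by being non-parametric.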
Simulated astigmatism impairs academic-related performance in children.
Narayanasamy, Sumithira; Vincent, Stephen J; Sampson, Geoff P; Wood, Joanne M
2015-01-01
Astigmatism is an important refractive condition in children. However, the functional impact of uncorrected astigmatism in this population is not well established, particularly with regard to academic performance. This study investigated the impact of simulated bilateral astigmatism on academic-related tasks before and after sustained near work in children. Twenty visually normal children (mean age: 10.8 ± 0.7 years; six males and 14 females) completed a range of standardised academic-related tests with and without 1.50 D of simulated bilateral astigmatism (with both academic-related tests and the visual condition administered in a randomised order). The simulated astigmatism was induced using a positive cylindrical lens while maintaining a plano spherical equivalent. Performance was assessed before and after 20 min of sustained near work, during two separate testing sessions. Academic-related measures included a standardised reading test (the Neale Analysis of Reading Ability), visual information processing tests (Coding and Symbol Search subtests from the Wechsler Intelligence Scale for Children) and a reading-related eye movement test (the Developmental Eye Movement test). Each participant was systematically assigned either with-the-rule (WTR, axis 180°) or against-the-rule (ATR, axis 90°) simulated astigmatism to evaluate the influence of axis orientation on any decrements in performance. Reading, visual information processing and reading-related eye movement performance were all significantly impaired by both simulated bilateral astigmatism (p < 0.001) and sustained near work (p < 0.001); however, there was no significant interaction between these factors (p > 0.05). Simulated astigmatism led to a reduction of between 5% and 12% in performance across the academic-related outcome measures, but there was no significant effect of the axis (WTR or ATR) of astigmatism (p > 0.05). 
Simulated bilateral astigmatism impaired children's performance on a range of academic-related outcome measures irrespective of the orientation of the astigmatism. These findings have implications for the clinical management of non-amblyogenic levels of astigmatism in relation to academic performance in children. Correction of low to moderate levels of astigmatism may improve the functional performance of children in the classroom. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rider, William J.; Witkowski, Walter R.; Mousseau, Vincent Andrew
2016-04-13
The importance of credible, trustworthy numerical simulations is obvious, especially when using the results for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use. This is commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using various frameworks such as the Predictive Capability Maturity Model (PCMM). There were several critical activities that follow in the areas of code and solution verification, validation and uncertainty quantification, which will be described in detail in the following sections. Here, we introduce the subject matter for general applications but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification & validation procedure is to perform a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific application intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner to perform such an assessment. Ideally, all stakeholders should be represented and contribute to perform an accurate credibility assessment. PIRTs and PCMMs are both described in brief detail below and the resulting assessments for an example project are given.
Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.
2014-01-01
New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed at systematically and objectively evaluating competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Oftentimes, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in the planning of experiments, we are better able to understand the strengths and weaknesses of competing algorithms, leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large-scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where "omics" features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting and our simulations corroborate well established results concerning the conditions under which each one of these methods is expected to perform best while providing several novel insights. PMID:25289666
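The planned-experiment idea can be sketched as a full-factorial design over the simulation factors the study varies: every combination of factor levels becomes one design point at which data are generated and each algorithm is run. The factor names mirror the study's axes, but the specific levels here are illustrative, not those used by the authors.

```python
import itertools

# Factors mirror the study's axes; levels are illustrative only.
factors = {
    "n_samples":   [100, 200, 400],
    "n_features":  [1000, 10000],
    "sparsity":    [0.01, 0.1],
    "snr":         [0.5, 2.0],
    "correlation": [0.0, 0.8],
}

def full_factorial(factors):
    """Yield one dict per combination of factor levels; each simulated
    data set (and each algorithm run) corresponds to one design point."""
    names = list(factors)
    for levels in itertools.product(*factors.values()):
        yield dict(zip(names, levels))

design = list(full_factorial(factors))
assert len(design) == 3 * 2 * 2 * 2 * 2   # 48 design points
```

A systematic grid like this is what makes it possible to say *under which conditions* one method beats another, instead of reporting a single favorable configuration.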
Warren, Jessie N; Luctkar-Flude, Marian; Godfrey, Christina; Lukewich, Julia
2016-11-01
High-fidelity simulation (HFS) is becoming an integral component in healthcare education programs. There is considerable evidence demonstrating the effectiveness of HFS on satisfaction and learning outcomes within undergraduate nursing programs; however, there are few studies that have investigated its use and effectiveness within nurse practitioner (NP) programs. To synthesize the best available evidence about the effectiveness of HFS within NP education programs worldwide. The specific review question was: what is the effect of HFS on learner satisfaction, knowledge, attitudes, and skill performance in NP education? Joanna Briggs Institute systematic review methodology was utilized. The following databases were searched: MEDLINE, CINAHL, EMBASE, Epistemonikos, PROSPERO, HealthSTAR, AMED, Cochrane, Global Health and PsycINFO. Studies were included if they were quantitative in nature and reported on any aspect of HFS within an NP program. Ten studies were included in the review. All studies were conducted in the United States and published between 2007 and 2014. Outcomes explored included: knowledge, attitudes, skills and satisfaction. The majority of studies compared HFS to online learning or traditional classroom lecture. Most study scenarios featured high acuity, low frequency events within acute care settings; only two studies utilized scenarios simulated within primary care. There is limited evidence supporting the use of HFS within NP programs. In general, HFS increases students' knowledge and confidence, and students are more satisfied with simulation-based teaching in comparison to other methods. Future studies should explore the effectiveness of simulation training within NP programs in reducing the theory to practice gap, and evaluate knowledge retention, transferability to real patient situations, and impact of simulation on patient outcomes. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zwickl, Titus; Carleer, Bart; Kubli, Waldemar
2005-08-01
In the past decade, sheet metal forming simulation became a well established tool to predict the formability of parts. In the automotive industry, this has enabled significant reduction in the cost and time for vehicle design and development, and has helped to improve the quality and performance of vehicle parts. However, production stoppages for troubleshooting and unplanned die maintenance, as well as production quality fluctuations continue to plague manufacturing cost and time. The focus therefore has shifted in recent times beyond mere feasibility to robustness of the product and process being engineered. Ensuring robustness is the next big challenge for the virtual tryout / simulation technology. We introduce new methods, based on systematic stochastic simulations, to visualize the behavior of the part during the whole forming process — in simulation as well as in production. Sensitivity analysis explains the response of the part to changes in influencing parameters. Virtual tryout allows quick exploration of changed designs and conditions. Robust design and manufacturing guarantees quality and process capability for the production process. While conventional simulations helped to reduce development time and cost by ensuring feasible processes, robustness engineering tools have the potential for far greater cost and time savings. Through examples we illustrate how expected and unexpected behavior of deep drawing parts may be tracked down, identified and assigned to the influential parameters. With this knowledge, defects can be eliminated or springback compensated, for example; the response of the part to uncontrollable noise can be predicted and minimized. The newly introduced methods enable more reliable and predictable stamping processes in general.
Low-resolution simulations of vesicle suspensions in 2D
NASA Astrophysics Data System (ADS)
Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George
2018-03-01
Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well-known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in vesicles' membranes, adaptive time stepping and a repulsion force for handling vesicle collisions, and correction of vesicles' area and arc-length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved, counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while it can be 10× to 100× faster.
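One of the listed fixes, correcting a vesicle's enclosed area, can be sketched for a polygonal membrane: rescale the vertices about the centroid so the shoelace area matches the reference value. This is a simplified 2D illustration of the idea under our own assumptions, not the paper's LRCA implementation (which also corrects arc-length and reparametrizes the membrane).

```python
import math

def shoelace_area(poly):
    """Signed area of a closed 2D polygon given as (x, y) vertices."""
    s = 0.0
    for i in range(len(poly)):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % len(poly)]
        s += x0 * y1 - x1 * y0
    return 0.5 * s

def correct_area(poly, target_area):
    """Rescale the membrane about its (vertex-average) centroid so the
    enclosed area matches the reference value."""
    cx = sum(p[0] for p in poly) / len(poly)
    cy = sum(p[1] for p in poly) / len(poly)
    k = math.sqrt(target_area / shoelace_area(poly))
    return [(cx + k * (x - cx), cy + k * (y - cy)) for x, y in poly]

# A square that has numerically "lost" area gets rescaled back:
square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]  # area 4
fixed = correct_area(square, 4.4)
assert abs(shoelace_area(fixed) - 4.4) < 1e-9
```

In a low-resolution time stepper, a correction like this is applied after each step so that discretization error does not slowly inflate or deflate the vesicle.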
Simulation-Based Abdominal Ultrasound Training - A Systematic Review.
Østergaard, M L; Ewertsen, C; Konge, L; Albrecht-Beste, E; Bachmann Nielsen, M
2016-06-01
The aim is to provide a complete overview of the different simulation-based training options for abdominal ultrasound and to explore the evidence of their effect. This systematic review was performed according to the PRISMA guidelines, and Medline, Embase, Web of Science, and the Cochrane Library were searched. Articles were divided into three categories based on study design (randomized controlled trials, before-and-after studies and descriptive studies) and assessed for level of evidence using the Oxford Centre for Evidence Based Medicine (OCEBM) system and for bias using the Cochrane Collaboration risk of bias assessment tool. Seventeen studies were included in the analysis: four randomized controlled trials, eight before-and-after studies with pre- and post-test evaluations, and five descriptive studies. No studies scored the highest level of evidence, and 14 had the lowest level. Bias was high for 11 studies, low for four, and unclear for two. No studies used a test with established evidence of validity or examined the correlation between obtained skills on the simulators and real-life clinical skills. Only one study used blinded assessors. The included studies were heterogeneous in the choice of simulator, study design, participants, and outcome measures, and the level of evidence for effect was inadequate. In all studies simulation training was equally or more beneficial than other instructions or no instructions. Study designs had significant built-in bias and confounding issues; therefore, further research should be based on randomized controlled trials using tests with validity evidence and blinded assessors. © Georg Thieme Verlag KG Stuttgart · New York.
Towards a cosmic-ray mass-composition study at Tunka Radio Extension
NASA Astrophysics Data System (ADS)
Kostunin, D.; Bezyazeekov, P. A.; Budnev, N. M.; Fedorov, O.; Gress, O. A.; Haungs, A.; Hiller, R.; Huege, T.; Kazarina, Y.; Kleifges, M.; Korosteleva, E. E.; Krömer, O.; Kungel, V.; Kuzmichev, L. A.; Lubsandorzhiev, N.; Mirgazov, R. R.; Monkhoev, R.; Osipova, E. A.; Pakhorukov, A.; Pankov, L.; Prosin, V. V.; Rubtsov, G. I.; Schröder, F. G.; Wischnewski, R.; Zagorodnikov, A.
2017-03-01
The Tunka Radio Extension (Tunka-Rex) is a radio detector at the TAIGA facility located in Siberia near the southern tip of Lake Baikal. Tunka-Rex measures air-showers induced by high-energy cosmic rays, in particular, the lateral distribution of the radio pulses. The depth of the air-shower maximum, which statistically depends on the mass of the primary particle, is determined from the slope of the lateral distribution function (LDF). Using a model-independent approach, we have studied possible features of the one-dimensional slope method and tried to find improvements for the reconstruction of primary mass. To study the systematic uncertainties given by different primary particles, we have performed simulations using the CONEX and CoREAS software packages of the recently released CORSIKA v7.5, including the modern high-energy hadronic models QGSJet-II.04 and EPOS-LHC. The simulations have shown that the largest systematic uncertainty in the energy deposit is due to the unknown primary particle. Finally, we studied the relation between the polarization and the asymmetry of the LDF.
Le Strat, Yann
2017-01-01
The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 implemented in the R package surveillance and two other methods. We estimated false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that performance measures were strongly influenced by the characteristics of time series. Past or current outbreak size and duration strongly influenced detection performance. PMID:28715489
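A simplified version of one of the simpler detectors in this family (an EARS-C1-style rule) illustrates how such algorithms trade sensitivity against false positives: flag a week whose count exceeds the mean plus z standard deviations of a short baseline window. The baseline length, threshold, and toy series below are our choices, not the paper's.

```python
import statistics

def c1_alarms(counts, baseline=7, z=2.0):
    """Flag week t when its count exceeds the mean + z standard
    deviations of the preceding `baseline` weeks (a simplified
    EARS-C1-style rule). Returns one boolean per evaluable week."""
    alarms = []
    for t in range(baseline, len(counts)):
        window = counts[t - baseline:t]
        mu = statistics.mean(window)
        sd = statistics.stdev(window)
        alarms.append(counts[t] > mu + z * sd)
    return alarms

# Flat background of ~10 weekly cases with one injected outbreak week:
series = [10, 11, 9, 10, 12, 10, 11, 10, 30, 10]
flags = c1_alarms(series)
assert flags == [False, True, False]   # only the spike week is flagged
```

Note the third evaluable week is *not* flagged even though its baseline now contains the outbreak: the spike inflates the baseline mean and standard deviation, which is exactly the kind of behavior the paper's regression analysis links to outbreak size and duration.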
Assessment of active methods for removal of LEO debris
NASA Astrophysics Data System (ADS)
Hakima, Houman; Emami, M. Reza
2018-03-01
This paper investigates the applicability of five active methods for removal of large low Earth orbit debris. The removal methods, namely net, laser, electrodynamic tether, ion beam shepherd, and robotic arm, are selected based on a set of high-level space mission constraints. Mission level criteria are then utilized to assess the performance of each redirection method in light of the results obtained from a Monte Carlo simulation. The simulation provides an insight into the removal time, performance robustness, and propellant mass criteria for the targeted debris range. The remaining attributes are quantified based on the models provided in the literature, which take into account several important parameters pertaining to each removal method. The means of assigning attributes to each assessment criterion is discussed in detail. A systematic comparison is performed using two different assessment schemes: Analytical Hierarchy Process and utility-based approach. A third assessment technique, namely the potential-loss analysis, is utilized to highlight the effect of risks in each removal method.
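The utility-based scheme can be sketched as a weighted additive score over normalized criteria (the Analytical Hierarchy Process differs in deriving the weights from pairwise comparison matrices). All scores and weights below are hypothetical, purely to show the mechanics of ranking removal methods.

```python
def utility_score(attributes, weights):
    """Simple additive utility: attributes are normalized to [0, 1]
    with 1 best; weights sum to 1."""
    return sum(w * a for w, a in zip(weights, attributes))

# Hypothetical normalized scores per method for three criteria
# (removal time, robustness, propellant mass) -- illustrative only:
methods = {
    "net":      [0.6, 0.7, 0.8],
    "laser":    [0.4, 0.9, 0.9],
    "ion beam": [0.7, 0.8, 0.5],
}
weights = [0.5, 0.3, 0.2]   # criterion importance, summing to 1
scores = {m: utility_score(a, weights) for m, a in methods.items()}
best = max(scores, key=scores.get)
```

Changing the criterion weights can reorder the ranking, which is why the paper cross-checks the result with a second scheme and a potential-loss (risk) analysis.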
Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.
Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas
2011-06-24
This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows transient operating conditions to be adjusted stage-wise, and provides capability to improve transient performance and to fulfill product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by a nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only significantly reduce both the duration of the transient phase and desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.
van der Meulen, Miriam P; Lansdorp-Vogelaar, Iris; van Heijningen, Else-Mariëtte B; Kuipers, Ernst J; van Ballegooijen, Marjolein
2016-06-01
If some adenomas do not bleed over several years, they will cause systematic false-negative fecal immunochemical test (FIT) results. The long-term effectiveness of FIT screening has been estimated without accounting for such systematic false-negativity. There are now data with which to evaluate this issue. The authors developed one microsimulation model (MISCAN [MIcrosimulation SCreening ANalysis]-Colon) without systematic false-negative FIT results and one model that allowed a percentage of adenomas to be systematically missed in successive FIT screening rounds. Both variants were adjusted to reproduce the first-round findings of the Dutch CORERO FIT screening trial. The authors then compared simulated detection rates in the second screening round with those observed, and adjusted the simulated percentage of systematically missed adenomas to those data. Finally, the authors calculated the impact of systematic false-negative FIT results on the effectiveness of repeated FIT screening. The model without systematic false-negativity simulated higher detection rates in the second screening round than observed. These observed rates could be reproduced when assuming that FIT systematically missed 26% of advanced and 73% of nonadvanced adenomas. To reduce the false-positive rate in the second round to the observed level, the authors also had to assume that 30% of false-positive findings were systematically false-positive. Systematic false-negative FIT results limit the long-term reduction achieved by biennial FIT screening in colorectal cancer incidence (35.6% vs 40.9%) and mortality (55.2% vs 59.0%) among participants. The results of the current study provide convincing evidence based on the combination of real-life and modeling data that a percentage of adenomas are systematically missed by repeat FIT screening. This impairs the efficacy of FIT screening. Cancer 2016;122:1680-8. © 2016 American Cancer Society.
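The core arithmetic behind systematic versus non-systematic false negatives can be shown directly: independent per-round misses compound away with repeat screening, while a systematically missed fraction never does. The 26% systematic fraction comes from the fitted model described above; the per-round miss rate here is illustrative only.

```python
def cumulative_detection(rounds, miss_rate, systematic_fraction=0.0):
    """Expected fraction of adenomas detected after `rounds` FIT rounds.
    Non-systematic misses are independent per round; a systematically
    false-negative fraction is never detected by repeat screening."""
    detectable = 1.0 - systematic_fraction
    return detectable * (1.0 - miss_rate ** rounds)

# With purely random misses, sensitivity compounds over rounds:
random_only = cumulative_detection(3, miss_rate=0.5)
# With 26% of adenomas systematically missed, repeat rounds
# cannot recover them:
with_systematic = cumulative_detection(3, miss_rate=0.5,
                                       systematic_fraction=0.26)
assert abs(random_only - 0.875) < 1e-9
assert abs(with_systematic - 0.6475) < 1e-9
```

This is why ignoring systematic false-negativity overstates the long-term incidence and mortality reductions of a repeat-screening programme.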
Simulation-based bronchoscopy training: systematic review and meta-analysis.
Kennedy, Cassie C; Maldonado, Fabien; Cook, David A
2013-07-01
Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n=8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n=7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, -1.47 to 2.69]) and process (0.33 [95% CI, -1.46 to 2.11]) outcomes (n=2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few.
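The pooling step can be sketched with the simpler fixed-effect inverse-variance estimator; the review used random-effects pooling, which adds a between-study variance term to each weight. The study effects and variances below are hypothetical.

```python
import math

def pooled_effect(effects, variances):
    """Inverse-variance (fixed-effect) pooled effect size with a 95% CI.
    A simplified stand-in for random-effects pooling, which would add a
    between-study variance component to each 1/v weight."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    est = sum(w * e for w, e in zip(weights, effects)) / total
    se = math.sqrt(1.0 / total)
    return est, (est - 1.96 * se, est + 1.96 * se)

# Two hypothetical studies with identical effects pool to that effect,
# with a CI narrower than either study alone:
est, ci = pooled_effect([1.2, 1.2], [0.04, 0.09])
assert abs(est - 1.2) < 1e-9
assert ci[0] < est < ci[1]
```

Precise studies (small variance) dominate the pooled estimate, which is why a pooled effect size like 1.21 (95% CI, 0.82-1.60) can be much tighter than any single trial's interval.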
NASA Astrophysics Data System (ADS)
Reich, Rebecca D.; Eddington, Donald
2002-05-01
Signal processing in a cochlear implant (CI) is primarily designed to convey speech and environmental sounds, and can cause distortion of musical timbre. Systematic investigation of musical instrument identification through a CI has not yet revealed how timbre is affected by the implant's processing. In this experiment, the bandpass filtering, rectification, and low-pass filtering of an implant are simulated in MATLAB. Synthesized signals representing 12 common instruments, each performing a major scale, are processed by simulations using up to 8 analysis channels. The unprocessed recordings, together with the 8 simulation conditions for 12 instruments, are presented in random order to each of the subjects. The subject's task is to identify the instrument represented by each item. The subjects also subjectively score each item based on similarity and pleasantness. We anticipate performance using the simulation will be worse than the unprocessed condition because of the limited information delivered by the envelopes of the analysis channels. These results will be analyzed as a confusion matrix and provide a basis for contrasting the information used by subjects listening to the unprocessed and processed materials. Understanding these differences should aid in the development of new processing strategies to better represent music for cochlear implant users.
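The bandpass-rectify-lowpass chain described above is the classic envelope-vocoder simulation of cochlear implant processing. The study used MATLAB; the sketch below is a rough Python equivalent, in which the analysis band edges, filter orders, envelope cutoff, and the noise carrier are all assumptions for illustration, not values taken from the experiment.

```python
import numpy as np
from scipy import signal

def ci_vocoder(x, fs=16000, n_channels=8, env_cutoff=400.0):
    """Noise-excited envelope vocoder approximating CI processing:
    bandpass filter bank -> half-wave rectification -> low-pass envelope,
    then each envelope modulates a band-limited noise carrier."""
    # Log-spaced channel edges over an assumed 200 Hz - 7 kHz analysis band.
    edges = np.geomspace(200.0, 7000.0, n_channels + 1)
    lp = signal.butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    carrier = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        bp = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = signal.sosfilt(bp, x)
        env = signal.sosfilt(lp, np.maximum(band, 0.0))  # rectify + smooth
        out += env * signal.sosfilt(bp, carrier)         # band-limited carrier
    return out
```

Reducing `n_channels` coarsens the spectral envelope, which is the manipulation that degrades timbre cues for instrument identification.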
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.
2015-01-01
A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed. These included an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations for SLS launch vehicle analysis. To the authors' knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.
Panchal, Mitesh B; Upadhyay, Sanjay H
2014-09-01
In this study, the feasibility of single-walled boron nitride nanotube (SWBNNT)-based biosensors for mass-based detection of various bacteria and viruses is assessed using a continuum-modelling simulation approach. Various bacteria and viruses are placed at the free end of a cantilevered SWBNNT acting as a biosensor, and a resonant frequency-shift analysis is performed, with the adsorbed bacterium or virus treated as additional mass on the SWBNNT-based sensor system. A continuum mechanics-based analytical approach accounting for effective wall thickness is used to validate the finite element method (FEM) simulation results, which are built on continuum volume-based modelling of the SWBNNT. As a systematic analysis approach, the FEM-based simulation results are found to be in excellent agreement with the analytical results, supporting the analysis of SWBNNTs for a wide range of applications such as nanoresonators, biosensors, gas sensors and transducers. The obtained results suggest that using a smaller SWBNNT enhances the sensitivity of the sensor system, enabling effective detection of a bacterium or virus with a mass of 4.28 × 10⁻²⁴ kg.
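The resonant frequency-shift principle behind such mass sensors reduces to a one-line relation for an end-loaded cantilever resonator. A minimal sketch follows; the effective spring constant and frequencies used here are illustrative stand-ins, not the SWBNNT values from the study.

```python
import math

def added_mass(k_eff, f0, f1):
    """End-loaded cantilever resonator obeys f = (1/2*pi) * sqrt(k_eff / m_eff),
    so the adsorbed mass follows from the downward resonance shift f0 -> f1:
        dm = (k_eff / 4*pi^2) * (1/f1^2 - 1/f0^2)
    """
    return k_eff / (4.0 * math.pi ** 2) * (1.0 / f1 ** 2 - 1.0 / f0 ** 2)
```

Because sensitivity scales with k_eff / m_eff, a smaller (lighter) nanotube produces a larger frequency shift for the same adsorbed mass, consistent with the study's conclusion.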
Transitional Flow in an Arteriovenous Fistula: Effect of Wall Distensibility
NASA Astrophysics Data System (ADS)
McGah, Patrick; Leotta, Daniel; Beach, Kirk; Aliseda, Alberto
2012-11-01
Arteriovenous fistulae are created surgically to provide adequate access for dialysis in patients with end-stage renal disease. Transitional flow and the subsequent pressure and shear stress fluctuations are thought to be causative in fistula failure. Since 50% of fistulae require surgical intervention within the first year, understanding the altered hemodynamic stresses is an important step toward improving clinical outcomes. We perform numerical simulations of a patient-specific model of a functioning fistula reconstructed from 3D ultrasound scans. Rigid wall simulations and fluid-structure interaction simulations using an in-house finite element solver for the wall deformations were performed and compared. In both the rigid and distensible wall cases, transitional flow is computed in the fistula, as evidenced by aperiodic high-frequency velocity and pressure fluctuations. The spectrum of the fluctuations is much more narrow-banded in the distensible case, however, suggesting a partial stabilizing effect by the vessel elasticity. As a result, the distensible wall simulations predict shear stresses that are systematically 10-30% lower than the rigid cases. We propose a possible mechanism for stabilization involving the phase lag in the fluid work needed to deform the vessel wall. Support from an NIDDK R21 - DK08-1823.
NASA Astrophysics Data System (ADS)
Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph
2018-05-01
This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.
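Non-intrusive spectral projection itself is compact to illustrate: the chaos coefficients of a model output are obtained by quadrature against polynomials orthogonal under the input's distribution. The sketch below uses a scalar Hermite chaos with one standard-normal input as a toy stand-in for the paper's ignition-delay model; the model function and truncation order are invented for the demonstration.

```python
from math import factorial
import numpy as np

def nisp_coeffs(model, order=4, nq=16):
    """Non-intrusive spectral projection: coefficients of model(xi),
    xi ~ N(0,1), in the probabilists' Hermite (He_k) basis, computed by
    Gauss-HermiteE quadrature. c_k = E[model(xi) He_k(xi)] / k!."""
    x, w = np.polynomial.hermite_e.hermegauss(nq)
    w = w / w.sum()  # normalize to a N(0,1) probability quadrature rule
    fx = model(x)
    coeffs = []
    for k in range(order + 1):
        He_k = np.polynomial.hermite_e.HermiteE.basis(k)(x)
        coeffs.append(np.sum(w * fx * He_k) / factorial(k))
    return np.array(coeffs)
```

The same projection applies when `model` wraps an expensive simulation evaluated at the quadrature nodes, which is why the approach can in principle scale to large eddy simulation, as the abstract notes.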
Ruggiero, Jeanne S; Redeker, Nancy S
2014-04-01
Night-shift workers are prone to sleep deprivation, misalignment of circadian rhythms, and subsequent sleepiness and sleep-related performance deficits. The purpose of this narrative systematic review is to critically review and synthesize the scientific literature regarding improvements in sleepiness and sleep-related performance deficits following planned naps taken during work-shift hours by night workers and to recommend directions for future research and practice. We conducted a literature search using the Medline, PsychInfo, CINAHL, Cochrane Library, and Health and Safety Science Abstracts databases and included English-language quasi-experimental and experimental studies that evaluated the effects of a nighttime nap taken during a simulated or actual night-work shift. We identified 13 relevant studies, which consisted primarily of small samples and mixed designs. Most investigators found that, despite short periods of sleep inertia immediately following naps, night-shift napping led to decreased sleepiness and improved sleep-related performance. None of the studies examined the effects of naps on safety outcomes in the workplace. Larger-scale randomized clinical trials of night-shift napping and direct safety outcomes are needed prior to wider implementation.
Reconciling solar and stellar magnetic cycles with nonlinear dynamo simulations.
Strugarek, A; Beaudoin, P; Charbonneau, P; Brun, A S; do Nascimento, J-D
2017-07-14
The magnetic fields of solar-type stars are observed to cycle over decadal periods (11 years in the case of the Sun). The fields originate in the turbulent convective layers of stars and have a complex dependency upon stellar rotation rate. We have performed a set of turbulent global simulations that exhibit magnetic cycles varying systematically with stellar rotation and luminosity. We find that the magnetic cycle period is inversely proportional to the Rossby number, which quantifies the influence of rotation on turbulent convection. The trend relies on a fundamentally nonlinear dynamo process and is compatible with the Sun's cycle and those of other solar-type stars. Copyright © 2017, American Association for the Advancement of Science.
Verrest, Luka; Dorlo, Thomas P C
2017-06-01
Neglected tropical diseases (NTDs) affect more than one billion people, mainly living in developing countries. For most of these NTDs, treatment is suboptimal. To optimize treatment regimens, clinical pharmacokinetic studies are required where they have not previously been conducted, since such studies enable the application of pharmacometric modeling and simulation techniques, which can provide substantial advantages. Our aim was to provide a systematic overview and summary of all clinical pharmacokinetic studies in NTDs and to assess the use of pharmacometrics in these studies, as well as to identify which of the NTDs or which treatments have not been sufficiently studied. PubMed was systematically searched for all clinical trials and case reports until the end of 2015 that described the pharmacokinetics of a drug in the context of treating any of the NTDs in patients or healthy volunteers. Eighty-two pharmacokinetic studies were identified. Most studies included small patient numbers (only five studies included >50 subjects) and only nine (11 %) studies included pediatric patients. Many of the studies were not recent: 56 % were published before 2000. Most studies applied non-compartmental analysis methods for pharmacokinetic analysis (62 %). Twelve studies used population-based compartmental analysis (15 %) and eight (10 %) additionally performed simulations or extrapolation. For ten out of the 17 NTDs, none or only very few pharmacokinetic studies could be identified. For most NTDs, adequate pharmacokinetic studies are lacking and population-based modeling and simulation techniques have not generally been applied. Pharmacokinetic clinical trials that enable population pharmacokinetic modeling are needed to make better use of the available data. Simulation-based studies should be employed to enable the design of improved dosing regimens and more optimally use the limited resources to effectively provide therapy in this neglected area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neylon, J; Min, Y; Qi, S
2014-06-15
Purpose: Deformable image registration (DIR) plays a pivotal role in head and neck adaptive radiotherapy, but a systematic validation of DIR algorithms has been limited by a lack of quantitative high-resolution ground truth. We address this limitation by developing a GPU-based framework that provides a systematic DIR validation by generating (a) model-guided synthetic CTs representing posture and physiological changes, and (b) model-guided landmark-based validation. Method: The GPU-based framework was developed to generate massive mass-spring biomechanical models from patient simulation CTs and contoured structures. The biomechanical model represented soft tissue deformations for known rigid skeletal motion. Posture changes were simulated by articulating skeletal anatomy, which subsequently applied elastic corrective forces upon the soft tissue. Physiological changes such as tumor regression and weight loss were simulated in a biomechanically precise manner. Synthetic CT data was then generated from the deformed anatomy. The initial and final positions for one hundred randomly-chosen mass elements inside each of the internal contoured structures were recorded as ground truth data. The process was automated to create 45 synthetic CT datasets for a given patient CT. For instance, the head rotation was varied between +/− 4 degrees along each axis, and tumor volumes were systematically reduced up to 30%. Finally, the original CT and deformed synthetic CT were registered using an optical flow based DIR. Results: Each synthetic data creation took approximately 28 seconds of computation time. The number of landmarks per data set varied between two and three thousand. The validation method is able to perform sub-voxel analysis of the DIR, and report the results by structure, giving a much more in-depth investigation of the error.
Conclusions: We presented a GPU-based high-resolution biomechanical head and neck model to validate DIR algorithms by generating CT-equivalent 3D volumes with simulated posture changes and physiological regression.
Precision Attitude Determination for an Infrared Space Telescope
NASA Technical Reports Server (NTRS)
Benford, Dominic J.
2008-01-01
We have developed performance simulations for a precision attitude determination system using a focal plane star tracker on an infrared space telescope. The telescope is being designed for the Destiny mission to measure cosmologically distant supernovae as one of the candidate implementations for the Joint Dark Energy Mission. Repeat observations of the supernovae require attitude control at the level of 0.010 arcseconds (0.05 microradians) during integrations and at repeat intervals up to and over a year. While absolute accuracy is not required, the repoint precision is challenging. We have simulated the performance of a focal plane star tracker in a multidimensional parameter space, including pixel size, read noise, and readout rate. Systematic errors such as proper motion, velocity aberration, and parallax can be measured and compensated out. Our prediction is that a relative attitude determination accuracy of 0.001 to 0.002 arcseconds (0.005 to 0.010 microradians) will be achievable.
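One way to see how pixel size, read noise, and photon count enter such a performance simulation is a Monte-Carlo estimate of star-centroid scatter: attitude precision then improves further roughly as the square root of the number of tracked stars. The sketch below is illustrative only, with an invented Gaussian PSF and made-up noise parameters, not the Destiny tracker model.

```python
import numpy as np

def centroid_precision(n_photons, fwhm_px, read_noise, n_trials=2000, box=9):
    """Monte-Carlo scatter (in pixels) of the x-centroid of a Gaussian
    star image under Poisson photon noise and Gaussian read noise."""
    rng = np.random.default_rng(0)
    sigma = fwhm_px / 2.355
    yy, xx = np.mgrid[:box, :box]
    psf = np.exp(-((xx - box // 2) ** 2 + (yy - box // 2) ** 2) / (2 * sigma ** 2))
    psf /= psf.sum()
    estimates = []
    for _ in range(n_trials):
        img = rng.poisson(n_photons * psf) + rng.normal(0.0, read_noise, (box, box))
        estimates.append((img * xx).sum() / img.sum())  # flux-weighted centroid
    return float(np.std(estimates))
```

Brighter stars (or longer integrations) shrink the centroid scatter, which is the lever that makes milliarcsecond-class relative pointing knowledge plausible when many stars are averaged.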
NASA Technical Reports Server (NTRS)
Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K.; Keyser, D. A.; Mccumber, M. C.
1983-01-01
The overall performance characteristics of a limited area, hydrostatic, fine (52 km) mesh, primitive equation, numerical weather prediction model are determined in anticipation of satellite data assimilations with the model. The synoptic and mesoscale predictive capabilities of version 2.0 of this model, the Mesoscale Atmospheric Simulation System (MASS 2.0), were evaluated. The two-part study is based on a sample of approximately thirty 12h and 24h forecasts of atmospheric flow patterns during spring and early summer. The synoptic scale evaluation results benchmark the performance of MASS 2.0 against that of an operational, synoptic scale weather prediction model, the Limited area Fine Mesh (LFM). The large sample allows for the calculation of statistically significant measures of forecast accuracy and the determination of systematic model errors. The synoptic scale benchmark is required before unsmoothed mesoscale forecast fields can be seriously considered.
Idealized gas turbine combustor for performance research and validation of large eddy simulations.
Williams, Timothy C; Schefer, Robert W; Oefelein, Joseph C; Shaddix, Christopher R
2007-03-01
This paper details the design of a premixed, swirl-stabilized combustor that was designed and built for the express purpose of obtaining validation-quality data for the development of large eddy simulations (LES) of gas turbine combustors. The combustor features nonambiguous boundary conditions, a geometrically simple design that retains the essential fluid dynamics and thermochemical processes that occur in actual gas turbine combustors, and unrestrictive access for laser and optical diagnostic measurements. After discussing the design detail, a preliminary investigation of the performance and operating envelope of the combustor is presented. With the combustor operating on premixed methane/air, both the equivalence ratio and the inlet velocity were systematically varied and the flame structure was recorded via digital photography. Interesting lean flame blowout and resonance characteristics were observed. In addition, the combustor exhibited a large region of stable, acoustically clean combustion that is suitable for preliminary validation of LES models.
2013-01-01
Background Unexpected obstetric emergencies threaten the safety of pregnant women. As emergencies are rare, they are difficult to learn. Therefore, simulation-based medical education (SBME) seems relevant. In non-systematic reviews on SBME, medical simulation has been suggested to be associated with improved learner outcomes. However, many questions on how SBME can be optimized remain unanswered. One unresolved issue is how 'in situ simulation' (ISS) versus 'off site simulation' (OSS) impact learning. ISS means simulation-based training in the actual patient care unit (in other words, the labor room and operating room). OSS means training in facilities away from the actual patient care unit, either at a simulation centre or in hospital rooms that have been set up for this purpose. Methods and design The objective of this randomized trial is to study the effect of ISS versus OSS on individual learning outcome, safety attitude, motivation, stress, and team performance amongst multi-professional obstetric-anesthesia teams. The trial is a single-centre randomized superiority trial including 100 participants. The inclusion criteria were health-care professionals employed at the department of obstetrics or anesthesia at Rigshospitalet, Copenhagen, who were working on shifts and gave written informed consent. Exclusion criteria were managers with staff responsibilities, and staff who were actively taking part in preparation of the trial. The same obstetric multi-professional training was conducted in the two simulation settings. The experimental group was exposed to training in the ISS setting, and the control group in the OSS setting. The primary outcome is the individual score on a knowledge test. 
Exploratory outcomes are individual scores on a safety attitudes questionnaire, a stress inventory, salivary cortisol levels, an intrinsic motivation inventory, results from a questionnaire evaluating perceptions of the simulation and suggested changes needed in the organization, a team-based score on video-assessed team performance and on selected clinical performance. Discussion The perspective is to provide new knowledge on contextual effects of different simulation settings. Trial registration ClinicalTrials.gov NCT01792674. PMID:23870501
Sørensen, Jette Led; Van der Vleuten, Cees; Lindschou, Jane; Gluud, Christian; Østergaard, Doris; LeBlanc, Vicki; Johansen, Marianne; Ekelund, Kim; Albrechtsen, Charlotte Krebs; Pedersen, Berit Woetman; Kjærgaard, Hanne; Weikop, Pia; Ottesen, Bent
2013-07-17
Unexpected obstetric emergencies threaten the safety of pregnant women. As emergencies are rare, they are difficult to learn. Therefore, simulation-based medical education (SBME) seems relevant. In non-systematic reviews on SBME, medical simulation has been suggested to be associated with improved learner outcomes. However, many questions on how SBME can be optimized remain unanswered. One unresolved issue is how 'in situ simulation' (ISS) versus 'off site simulation' (OSS) impact learning. ISS means simulation-based training in the actual patient care unit (in other words, the labor room and operating room). OSS means training in facilities away from the actual patient care unit, either at a simulation centre or in hospital rooms that have been set up for this purpose. The objective of this randomized trial is to study the effect of ISS versus OSS on individual learning outcome, safety attitude, motivation, stress, and team performance amongst multi-professional obstetric-anesthesia teams.The trial is a single-centre randomized superiority trial including 100 participants. The inclusion criteria were health-care professionals employed at the department of obstetrics or anesthesia at Rigshospitalet, Copenhagen, who were working on shifts and gave written informed consent. Exclusion criteria were managers with staff responsibilities, and staff who were actively taking part in preparation of the trial. The same obstetric multi-professional training was conducted in the two simulation settings. The experimental group was exposed to training in the ISS setting, and the control group in the OSS setting. The primary outcome is the individual score on a knowledge test. 
Exploratory outcomes are individual scores on a safety attitudes questionnaire, a stress inventory, salivary cortisol levels, an intrinsic motivation inventory, results from a questionnaire evaluating perceptions of the simulation and suggested changes needed in the organization, a team-based score on video-assessed team performance and on selected clinical performance. The perspective is to provide new knowledge on contextual effects of different simulation settings. ClinicalTrials.gov NCT01792674.
NASA Astrophysics Data System (ADS)
Davris, Theodoros; Lyulin, Alexey V.
2016-05-01
The significant drop of the storage modulus under uniaxial deformation (Payne effect) restrains the performance of elastomer-based composites and the development of possible new applications. In this paper, molecular-dynamics (MD) computer simulations using the LAMMPS MD package have been performed to study the mechanical properties of a coarse-grained model of this family of nanocomposite materials. Our goal is to provide simulation-based insights into the viscoelastic properties of filled elastomers, and to connect the macroscopic mechanics with the composite microstructure, the strength of the polymer-filler interactions, and the polymer mobility at different scales. To this end we simulate random copolymer films capped between two infinite solid (filler aggregate) walls. We systematically vary the strength of the polymer-substrate adhesion interactions, the degree of polymer confinement (film thickness), and the polymer crosslinking density, and study their influence on the equilibrium and non-equilibrium structure, segmental dynamics, and the mechanical properties of the simulated systems. The glass-transition temperature increased once the mesh size became smaller than the chain radius of gyration; otherwise it remained invariant to mesh-size variations. This increase in the glass-transition temperature was accompanied by a monotonic slowing-down of segmental dynamics on all studied length scales. This observation is attributed to the correspondingly decreased width of the bulk density layer that was obtained in films whose thickness was larger than the end-to-end distance of the bulk polymer chains. To test this hypothesis, additional simulations were performed in which the crystalline walls were replaced with amorphous or rough walls.
Colen, Hadewig B; Neef, Cees; Schuring, Roel W
2003-06-01
Worldwide, patient safety has become a major social policy problem for healthcare organisations. As in other organisations, patients in our hospital suffer from an inadequate drug distribution process, as incident reports involving medication errors make clear. Medisch Spectrum Twente is a top primary-care, clinical, teaching hospital. The hospital pharmacy takes care of 1070 internal beds and 1120 beds in an affiliated psychiatric hospital and nursing homes. In the beginning of 1999, our pharmacy group started a large interdisciplinary research project to develop a safe, effective and efficient drug distribution system by using systematic process redesign. The process redesign includes both organisational and technological components. This article describes the identification and verification of critical performance dimensions for the design of drug distribution processes in hospitals (phase 1 of the systematic process redesign of drug distribution). Based on reported errors and related causes, we suggested six generic performance domains. To assess the role of the performance dimensions, we used three approaches: flowcharts, interviews with stakeholders, and review of the existing performance using time studies and medication error studies. We were able to set targets for costs, quality of information, responsiveness, employee satisfaction, and degree of innovation. We still have to establish which drug distribution system, in respect of quality and cost-effectiveness, represents the best and most cost-effective way of preventing medication errors. We intend to develop an evaluation model, using the critical performance dimensions as a starting point. This model can be used as a simulation template to compare different drug distribution concepts in order to define the differences in quality and cost-effectiveness.
Performance of technology-driven simulators for medical students--a systematic review.
Michael, Michael; Abboudi, Hamid; Ker, Jean; Shamim Khan, Mohammed; Dasgupta, Prokar; Ahmed, Kamran
2014-12-01
Simulation-based education has evolved as a key training tool in high-risk industries such as aviation and the military. In parallel with these industries, the benefits of incorporating specialty-oriented simulation training within medical schools are vast. Adoption of simulators into medical school education programs has shown great promise and has the potential to revolutionize modern undergraduate education. An English literature search was carried out using MEDLINE, EMBASE, and PsycINFO databases to identify all randomized controlled studies pertaining to "technology-driven" simulators used in undergraduate medical education. A validity framework incorporating the "framework for technology enhanced learning" report by the Department of Health, United Kingdom, was used to evaluate the capabilities of each technology-driven simulator. Information was collected regarding the simulator type, characteristics, and brand name. Where possible, we extracted information from the studies on the simulators' performance with respect to validity status, reliability, feasibility, educational impact, acceptability, and cost effectiveness. We identified 19 studies, analyzing simulators for medical students across a variety of procedure-based specialities, including: cardiovascular (n = 2), endoscopy (n = 3), laparoscopic surgery (n = 8), vascular access (n = 2), ophthalmology (n = 1), obstetrics and gynecology (n = 1), anesthesia (n = 1), and pediatrics (n = 1). Incorporation of simulators has so far been on an institutional level; no national or international trends have yet emerged. Simulators are capable of providing a highly educational and realistic experience for medical students within a variety of speciality-oriented teaching sessions. Further research is needed to establish how best to incorporate simulators into a more primary stage of medical education; preclinical and clinical undergraduate medicine. Copyright © 2014 Elsevier Inc. All rights reserved.
Procedural virtual reality simulation in minimally invasive surgery.
Våpenstad, Cecilie; Buzink, Sonja N
2013-02-01
Simulation of procedural tasks has the potential to bridge the gap between basic skills training outside the operating room (OR) and performance of complex surgical tasks in the OR. This paper provides an overview of procedural virtual reality (VR) simulation currently available on the market and presented in scientific literature for laparoscopy (LS), flexible gastrointestinal endoscopy (FGE), and endovascular surgery (EVS). An online survey was sent to companies and research groups selling or developing procedural VR simulators, and a systematic search was done for scientific publications presenting or applying VR simulators to train or assess procedural skills in the PUBMED and SCOPUS databases. The results of five simulator companies were included in the survey. In the literature review, 116 articles were analyzed (45 on LS, 43 on FGE, 28 on EVS), presenting a total of 23 simulator systems. The companies stated to altogether offer 78 procedural tasks (33 for LS, 12 for FGE, 33 for EVS), of which 17 also were found in the literature review. Although study type and used outcomes vary between the three different fields, approximately 90 % of the studies presented in the retrieved publications for LS found convincing evidence to confirm the validity or added value of procedural VR simulation. This was the case in approximately 75 % for FGE and EVS. Procedural training using VR simulators has been found to improve clinical performance. There is nevertheless a large amount of simulated procedural tasks that have not been validated. Future research should focus on the optimal use of procedural simulators in the most effective training setups and further investigate the benefits of procedural VR simulation to improve clinical outcome.
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
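The idea of choosing a reduced tuner set by minimizing a steady-state error criterion can be caricatured in a few lines. The paper's routine constructs an optimal tuning vector via an iterative multi-variable search; the sketch below substitutes a simpler brute-force subset search over a toy linear model, with all matrices invented purely for illustration.

```python
import itertools
import numpy as np

def steady_state_cov(A, H, Q, R, iters=500):
    """Iterate the discrete Riccati recursion to the steady-state
    a-priori error covariance of a Kalman filter for (A, H, Q, R)."""
    P = np.eye(A.shape[0])
    for _ in range(iters):
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        P = A @ (P - K @ H @ P) @ A.T + Q         # time + measurement update
    return P

def best_tuner_subset(A, H, Q, R, k):
    """Brute force: keep k of the n candidate tuning states and pick the
    subset minimizing the trace of the steady-state error covariance."""
    n = A.shape[0]
    best = None
    for idx in itertools.combinations(range(n), k):
        i = list(idx)
        P = steady_state_cov(A[np.ix_(i, i)], H[:, i], Q[np.ix_(i, i)], R)
        mse = float(np.trace(P))
        if best is None or mse < best[1]:
            best = (idx, mse)
    return best
```

This captures only the selection criterion (steady-state mean-squared error); the paper's contribution is forming linear combinations of parameters rather than hard subsets.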
ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers
Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.
2009-01-01
Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211
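As an illustration of why two-face tests expose offset-type misalignments: the pointing error from a transverse offset reverses sign when the tracker reverses face, so the front-sight/back-sight difference doubles it. The toy model below is a simplification with invented numbers; real tracker error models, as the paper discusses, involve many coupled geometric terms.

```python
import math

def two_face_error_arcsec(offset_mm, range_m):
    """Apparent two-face angular error (arcseconds) from a transverse
    beam/transit offset d at range R: the single-face error atan(d/R)
    flips sign between faces, so the two-face difference is twice it."""
    single_face = math.atan2(offset_mm * 1e-3, range_m)  # radians
    return 2.0 * math.degrees(single_face) * 3600.0
```

The 1/R dependence is also why two-face tests at short range are the more sensitive diagnostic for this class of misalignment.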
Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients
Bruno, Mattia; Lehner, Christoph; Soni, Amarjit
2018-04-20
Here, we propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C1 and C2, related to the current-current four-quark operators, can be controlled and present a path towards precise determinations in subsequent works.
Prior video game utilization is associated with improved performance on a robotic skills simulator.
Harbin, Andrew C; Nadhan, Kumar S; Mooney, James H; Yu, Daohai; Kaplan, Joshua; McGinley-Hence, Nora; Kim, Andrew; Gu, Yiming; Eun, Daniel D
2017-09-01
Laparoscopic surgery and robotic surgery, two forms of minimally invasive surgery (MIS), have recently experienced a large increase in utilization. Prior studies have shown that video game experience (VGE) may be associated with improved laparoscopic surgery skills; however, similar data supporting a link between VGE and proficiency on a robotic skills simulator (RSS) are lacking. The objective of our study was to determine whether the volume or timing of VGE had any impact on RSS performance. Pre-clinical medical students completed a comprehensive questionnaire detailing previous VGE across several time periods. Seventy-five subjects were ultimately evaluated in 11 training exercises on the daVinci Si Skills Simulator. RSS skill was measured by overall score, time to completion, economy of motion, average instrument collisions, and improvement in Ring Walk 3 score. Using nonparametric tests and linear regression, these metrics were analyzed for systematic differences between non-users, light, and heavy video game users, based on their volume of use in each of the following four time periods: past 3 months, past year, past 3 years, and high school. Univariate analyses revealed significant differences between heavy users and non-users in all five performance metrics. These trends weakened as the period of VGE went further back. Our study showed a positive association between video game experience and robotic skills simulator performance that is stronger for more recent periods of video game use. The findings may have important implications for the evolution of robotic surgery training.
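Nonparametric two-group comparisons of the kind reported above can be sketched with a pair-counting Mann-Whitney U statistic. The scores below are made-up illustrative values, not study data:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U for group x versus group y: counts the (x, y) pairs
    in which the x value exceeds the y value, ties counted as 0.5."""
    return sum((xi > yi) + 0.5 * (xi == yi) for xi in x for yi in y)

# Hypothetical overall simulator scores: heavy users vs non-users.
heavy = [88, 92, 79, 85]
non_users = [70, 75, 81, 68]
u = mann_whitney_u(heavy, non_users)   # out of 4 * 4 = 16 possible pairs
```

A U near the maximum (here 16) indicates that one group's scores nearly always exceed the other's; the p-values quoted in such studies come from the sampling distribution of this statistic.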
Modeling, simulation, and concept design for hybrid-electric medium-size military trucks
NASA Astrophysics Data System (ADS)
Rizzoni, Giorgio; Josephson, John R.; Soliman, Ahmed; Hubert, Christopher; Cantemir, Codrin-Gruie; Dembski, Nicholas; Pisu, Pierluigi; Mikesell, David; Serrao, Lorenzo; Russell, James; Carroll, Mark
2005-05-01
A large scale design space exploration can provide valuable insight into vehicle design tradeoffs being considered for the U.S. Army's FMTV (Family of Medium Tactical Vehicles). Through a grant from TACOM (Tank-automotive and Armaments Command), researchers have generated detailed road, surface, and grade conditions representative of the performance criteria of this medium-sized truck and constructed a virtual powertrain simulator for both conventional and hybrid variants. The simulator incorporates the latest technology among vehicle design options, including scalable ultracapacitor and NiMH battery packs as well as a variety of generator and traction motor configurations. An energy management control strategy has also been developed to provide efficiency and performance. A design space exploration for the family of vehicles involves running a large number of simulations with systematically varied vehicle design parameters, where each variant is paced through several different mission profiles and multiple attributes of performance are measured. The resulting designs are filtered to remove dominated designs, exposing the multi-criterial surface of optimality (Pareto optimal designs), and revealing the design tradeoffs as they impact vehicle performance and economy. The results are not yet definitive because ride and drivability measures were not included, and work is not finished on fine-tuning the modeled dynamics of some powertrain components. However, the work so far completed demonstrates the effectiveness of the approach to design space exploration, and the results to date suggest the powertrain configuration best suited to the FMTV mission.
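The dominated-design filtering step described above can be sketched in a few lines, assuming every objective is to be minimized. The objective tuples below are hypothetical, not FMTV results:

```python
def pareto_front(designs):
    """Return the non-dominated designs; each design is a tuple of objective
    values where smaller is better. A design is dropped if another design is
    no worse in every objective and differs in at least one."""
    def dominated(d):
        return any(all(o <= v for o, v in zip(other, d)) and other != d
                   for other in designs)
    return [d for d in designs if not dominated(d)]

# Hypothetical (fuel use, 0-60 mph time) pairs for candidate powertrains.
candidates = [(10, 5), (8, 7), (9, 4), (12, 3), (8, 8)]
front = pareto_front(candidates)
```

The surviving tuples trace the tradeoff surface: improving one objective past a front point necessarily worsens another.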
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
NASA Astrophysics Data System (ADS)
Lauer, Axel; Jones, Colin; Eyring, Veronika; Evaldsson, Martin; Hagemann, Stefan; Mäkelä, Jarmo; Martin, Gill; Roehrig, Romain; Wang, Shiyu
2018-01-01
The performance of updated versions of the four earth system models (ESMs) CNRM, EC-Earth, HadGEM, and MPI-ESM is assessed in comparison to their predecessor versions used in Phase 5 of the Coupled Model Intercomparison Project. The Earth System Model Evaluation Tool (ESMValTool) is applied to evaluate selected climate phenomena in the models against observations. This is the first systematic application of the ESMValTool to assess and document the progress made during an extensive model development and improvement project. This study focuses on the South Asian monsoon (SAM) and the West African monsoon (WAM), the coupled equatorial climate, and Southern Ocean clouds and radiation, which are known to exhibit systematic biases in present-day ESMs. The analysis shows that the tropical precipitation in three out of four models is clearly improved. Two of three updated coupled models show an improved representation of tropical sea surface temperatures with one coupled model not exhibiting a double Intertropical Convergence Zone (ITCZ). Simulated cloud amounts and cloud-radiation interactions are improved over the Southern Ocean. Improvements are also seen in the simulation of the SAM and WAM, although systematic biases remain in regional details and the timing of monsoon rainfall. Analysis of simulations with EC-Earth at different horizontal resolutions from T159 up to T1279 shows that the synoptic-scale variability in precipitation over the SAM and WAM regions improves with higher model resolution. The results suggest that the reasonably good agreement of modeled and observed mean WAM and SAM rainfall in lower-resolution models may be a result of unrealistic intensity distributions.
Nature-based interventions in institutional and organisational settings: a scoping review.
Moeller, Chris; King, Nigel; Burr, Viv; Gibbs, Graham R; Gomersall, Tim
2018-04-26
The objective of this review was to scope the literature on nature-based interventions that could be conducted in institutional settings where people reside full-time for care or rehabilitation purposes. Systematic searches were conducted across CINAHL, Medline, Criminal Justice Abstracts, PsycINFO, Scopus, Social Care Online and Cochrane CENTRAL. A total of 85 studies (reported in 86 articles) were included. Four intervention modalities were identified: Gardening/therapeutic horticulture; animal-assisted therapies; care farming and virtual reality-based simulations of natural environments. The interventions were conducted across a range of settings, including inpatient wards, care homes, prisons and women's shelters. Generally, favourable impacts were seen across intervention types, although the reported effects varied widely. There is a growing body of literature on nature-based interventions that could be applied to a variety of institutional settings. Within most intervention types, there is sufficient research data available to perform full systematic reviews. Recommendations for future systematic reviews are offered.
Guo, Chaohua; Wei, Mingzhen; Liu, Hong
2018-01-01
Development of unconventional shale gas reservoirs (SGRs) has been boosted by advancements in two key technologies: horizontal drilling and multi-stage hydraulic fracturing. A large number of multi-stage fractured horizontal wells (MsFHW) have been drilled to enhance reservoir production performance. Gas flow in SGRs is a multi-mechanism process, including desorption, diffusion, and non-Darcy flow. The productivity of SGRs with MsFHW is influenced by both reservoir conditions and hydraulic fracture properties. However, little simulation work has been conducted for multi-stage hydraulically fractured SGRs. Most existing studies use well-testing methods, which rely on many unrealistic simplifications and assumptions; no systematic work has considered all relevant transport mechanisms, and very few sensitivity studies of uncertain parameters use realistic parameter ranges. Hence, a detailed and systematic reservoir simulation study with MsFHW is still necessary. In this paper, a dual-porosity model was constructed to estimate the effect of parameters on shale gas production with MsFHW. The simulation model was verified with available field data from the Barnett Shale. The following mechanisms were considered in the model: viscous flow, slip flow, Knudsen diffusion, and gas desorption. The Langmuir isotherm was used to simulate the gas desorption process. Sensitivity analysis of SGR production performance with MsFHW was conducted. Parameters influencing shale gas production were classified into two categories: reservoir parameters, including matrix permeability and matrix porosity; and hydraulic fracture parameters, including hydraulic fracture spacing and fracture half-length. Typical ranges of matrix parameters were reviewed, and sensitivity analyses were conducted to assess the effect of these factors on the production performance of SGRs.
The comparison shows that hydraulic fracture parameters are more sensitive than reservoir parameters: reservoir parameters mainly affect the later production period, whereas hydraulic fracture parameters have a significant effect on gas production from the early period onward. The results of this study can be used to improve the efficiency of the history-matching process, and can contribute to the design and optimization of hydraulic fracture treatments in unconventional SGRs.
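The desorption mechanism above follows the Langmuir isotherm, V(p) = V_L · p / (p_L + p). A minimal sketch (the Langmuir volume and pressure below are hypothetical values, not Barnett Shale parameters):

```python
def langmuir_volume(p, v_langmuir, p_langmuir):
    """Adsorbed gas content at pressure p (same units as p_langmuir):
    V(p) = V_L * p / (p_L + p). At p = p_L exactly half of V_L is adsorbed."""
    return v_langmuir * p / (p_langmuir + p)

# Hypothetical isotherm: V_L = 100 scf/ton, p_L = 600 psi.
v_at_langmuir_pressure = langmuir_volume(600.0, 100.0, 600.0)   # 50.0
```

As reservoir pressure drops during production, the difference between the isotherm at initial and current pressure gives the gas released by desorption, which is why this term matters mainly in the later, low-pressure production period.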
A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review.
Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha
2017-04-01
To systematically review the current literature on simulation in healthcare, including the structured steps used in the emergency healthcare sector, and to propose a framework for simulation in the emergency department. For data collection, the PubMed and ACM databases were searched for the years 2003 to 2013. The inclusion criteria were English-language articles available in full text whose objectives were closest to those of this review; of a total of 54 articles retrieved from the databases, 11 were selected for further analysis. The studies focused on the reduction of waiting time and patient stay, optimization of resource allocation, creation of crisis and maximum-demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of system operations and functions. Ten simulation steps were then derived from the relevant studies after expert evaluation. The 10-step approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as the software implementation of simulation problems.
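A discrete-events simulation of the kind reviewed above can be reduced to a few lines. This sketch models a single waiting line served by a pool of doctors with a fixed consultation time; arrival times and staffing are hypothetical, and real ED models would add stochastic arrivals, triage classes, and multiple resources:

```python
import heapq

def simulate_ed(arrival_times, consult_time, n_doctors=1):
    """Return each patient's waiting time: patients join one queue and are
    seen, in arrival order, by whichever doctor frees up first."""
    free_at = [0.0] * n_doctors            # min-heap of doctor-free times
    heapq.heapify(free_at)
    waits = []
    for t in sorted(arrival_times):
        start = max(t, heapq.heappop(free_at))   # wait until a doctor is free
        waits.append(start - t)
        heapq.heappush(free_at, start + consult_time)
    return waits
```

Running the model under different staffing levels exposes exactly the overcrowding bottlenecks and resource-allocation tradeoffs the reviewed studies examine.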
High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6
NASA Astrophysics Data System (ADS)
Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; Senior, Catherine A.; Bellucci, Alessio; Bao, Qing; Chang, Ping; Corti, Susanna; Fučkar, Neven S.; Guemas, Virginie; von Hardenberg, Jost; Hazeleger, Wilco; Kodama, Chihiro; Koenigk, Torben; Leung, L. Ruby; Lu, Jian; Luo, Jing-Jia; Mao, Jiafu; Mizielinski, Matthew S.; Mizuta, Ryo; Nobre, Paulo; Satoh, Masaki; Scoccimarro, Enrico; Semmler, Tido; Small, Justin; von Storch, Jin-Song
2016-11-01
Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. 
The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950-2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, "what are the origins and consequences of systematic model biases?", but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.
Systematic review on mentoring and simulation in laparoscopic colorectal surgery.
Miskovic, Danilo; Wyles, Susannah M; Ni, Melody; Darzi, Ara W; Hanna, George B
2010-12-01
To identify and evaluate the influence of mentoring and simulated training in laparoscopic colorectal surgery (LCS) and define the key components for learning advanced technical skills. Laparoscopic colorectal surgery is a complex procedure, often being self-taught by senior surgeons. Educational issues such as inadequate training facilities or a shortfall of training fellowships may result in a slow uptake of LCS. The effectiveness of mentored and simulated training, however, remains unclear. We conducted a systematic search, using Ovid databases. Four study categories were identified: mentored versus nonmentored cases, training case selection, simulation, and assessment. We performed a meta-analysis and a mixed model regression on the difference of the main outcome measures (conversion rates, morbidity, and mortality) for mentored trainees and expert surgeons. We also compared conversion rates of mentored and nonmentored cases. Meta-analysis of risk factors for conversion was performed using published and unpublished data sets requested from various investigators. For studies on simulation, we compared scores of surveys on the perception of different training courses. Thirty-seven studies were included. Pooled weighted outcomes of mentored cases (n = 751) showed a lower conversion rate (13.3% vs 20.5%, P = 0.0332) compared with nonmentored cases (n = 695). Compared to expert case series (n = 5313), there was no difference in conversion (P = 0.2835), anastomotic leak (P = 0.8342), or mortality (P = 0.5680). A meta-analysis of training case selection data (n = 4444) revealed male sex (P < 0.0001), previous abdominal surgery (P = 0.0200), a BMI greater than 30 (P = 0.0050), an ASA of less than 2 (P < 0.0001), colorectal cancer (P < 0.0001) and intra-abdominal fistula (P < 0.0001), but not older than 64 years (P = 0.4800), to significantly increase conversion risk.
Participants on cadaveric courses were highly satisfied with the teaching value, yet trainees on an animal course gave less positive feedback. Structured assessment for LCS has been only partially implemented. This review and meta-analysis supports the evidence that trainees can achieve clinical results similar to those of expert surgeons in laparoscopic colorectal surgery if supervised by an experienced trainer. Cadaveric models currently provide the best value for training in a simulated environment. There remains a need for further research into technical skills assessment and the educational value of simulated training.
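The pooled weighted outcomes quoted above amount to event-weighted proportions across studies. A minimal sketch, with invented study counts rather than the review's data:

```python
def pooled_rate(studies):
    """Pooled event rate across studies, each given as (events, patients):
    total events divided by total patients, i.e. weighting by study size."""
    events = sum(e for e, n in studies)
    patients = sum(n for e, n in studies)
    return events / patients

# Two hypothetical study arms: (conversions, cases) per study.
mentored = [(10, 100), (20, 100)]
rate = pooled_rate(mentored)   # 30 / 200 = 0.15
```

Pooling counts before dividing gives larger studies proportionally more weight, which is the simplest fixed-weight scheme; formal meta-analysis adds inverse-variance weighting and heterogeneity terms on top of this idea.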
Performance measurement of HARPO: A time projection chamber as a gamma-ray telescope and polarimeter
NASA Astrophysics Data System (ADS)
Gros, P.; Amano, S.; Attié, D.; Baron, P.; Baudin, D.; Bernard, D.; Bruel, P.; Calvet, D.; Colas, P.; Daté, S.; Delbart, A.; Frotin, M.; Geerebaert, Y.; Giebels, B.; Götz, D.; Hashimoto, S.; Horan, D.; Kotaka, T.; Louzir, M.; Magniette, F.; Minamiyama, Y.; Miyamoto, S.; Ohkuma, H.; Poilleux, P.; Semeniouk, I.; Sizun, P.; Takemoto, A.; Yamaguchi, M.; Yonamine, R.; Wang, S.
2018-01-01
We analyse the performance of a gas time projection chamber (TPC) as a high-performance gamma-ray telescope and polarimeter in the e+e- pair-creation regime. We use data collected at a gamma-ray beam of known polarisation. The TPC provides two orthogonal projections (x, z) and (y, z) of the tracks induced by each conversion in the gas volume. We use a simple vertex finder in which vertices and pseudo-tracks exiting from them are identified. We study the various contributions to the single-photon angular resolution using Monte Carlo simulations, compare them with the experimental data and find that they are in excellent agreement. The distribution of the azimuthal angle of pair conversions shows a bias due to the non-cylindrically-symmetric structure of the detector. This bias would average out for a long-duration exposure on a space mission, but for this pencil-beam characterisation we have ensured its accurate simulation by a double systematics-control scheme: data taking with the detector rotated at several angles with respect to the beam polarisation direction, and systematics control with a non-polarised beam. We measure, for the first time, the polarisation asymmetry of a linearly polarised gamma-ray beam in the low-energy pair-creation regime. This sub-GeV energy range is critical for cosmic sources, as their spectra are power laws that fall quickly with increasing energy. This work could pave the way to extending polarised gamma-ray astronomy beyond the MeV energy regime.
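The polarisation-asymmetry measurement above rests on the modulation of the pair-conversion azimuth; in a simplified model the azimuthal pdf is proportional to 1 + A·cos(2φ) for polarisation along φ = 0. A moment-based sketch under that assumption (the actual HARPO analysis, with detector bias corrections, is more involved):

```python
import numpy as np

def sample_azimuth(a_true, n, rng):
    """Rejection-sample n azimuthal angles from a pdf proportional to
    1 + a_true * cos(2 * phi)."""
    out = np.empty(0)
    while out.size < n:
        phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
        accept = rng.uniform(size=n) < (1.0 + a_true * np.cos(2.0 * phi)) / (1.0 + a_true)
        out = np.concatenate([out, phi[accept]])
    return out[:n]

def estimate_asymmetry(phi):
    """Moment estimator: under the model <cos(2*phi)> = A / 2,
    so A_hat = 2 * mean(cos(2*phi))."""
    return 2.0 * float(np.mean(np.cos(2.0 * phi)))

rng = np.random.default_rng(42)
phi = sample_azimuth(0.3, 100_000, rng)
a_hat = estimate_asymmetry(phi)
```

With 10^5 conversions the statistical uncertainty on A is a few parts per thousand, illustrating why systematics (the detector-rotation and unpolarised-beam controls above) dominate the error budget.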
Systematic and simulation-free coarse graining of homopolymer melts: a relative-entropy-based study.
Yang, Delian; Wang, Qiang
2015-09-28
We applied the systematic and simulation-free strategy proposed in our previous work (D. Yang and Q. Wang, J. Chem. Phys., 2015, 142, 054905) to the relative-entropy-based (RE-based) coarse graining of homopolymer melts. RE-based coarse graining provides a quantitative measure of the coarse-graining performance and can be used to select the appropriate analytic functional forms of the pair potentials between coarse-grained (CG) segments, which are more convenient to use than the tabulated (numerical) CG potentials obtained from structure-based coarse graining. In our general coarse-graining strategy for homopolymer melts using the RE framework proposed here, the bonding and non-bonded CG potentials are coupled and need to be solved simultaneously. Taking the hard-core Gaussian thread model (K. S. Schweizer and J. G. Curro, Chem. Phys., 1990, 149, 105) as the original system, we performed RE-based coarse graining using the polymer reference interaction site model theory under the assumption that the intrachain segment pair correlation functions of CG systems are the same as those in the original system, which de-couples the bonding and non-bonded CG potentials and simplifies our calculations (that is, we only calculated the latter). We compared the performance of various analytic functional forms of non-bonded CG pair potential and closures for CG systems in RE-based coarse graining, as well as the structural and thermodynamic properties of original and CG systems at various coarse-graining levels. Our results obtained from RE-based coarse graining are also compared with those from structure-based coarse graining.
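The relative-entropy objective behind this coarse-graining strategy is, for discrete configuration probabilities, the Kullback-Leibler divergence of the coarse-grained ensemble from the original one. A minimal sketch with two hypothetical distributions (the paper works with continuous polymer ensembles, not this toy case):

```python
import numpy as np

def relative_entropy(p_target, p_cg):
    """S_rel = sum_i p_target[i] * ln(p_target[i] / p_cg[i]); zero iff the
    CG ensemble reproduces the target distribution, positive otherwise."""
    p_t = np.asarray(p_target, dtype=float)
    p_c = np.asarray(p_cg, dtype=float)
    mask = p_t > 0                      # 0 * log(0) terms contribute nothing
    return float(np.sum(p_t[mask] * np.log(p_t[mask] / p_c[mask])))
```

Minimizing this quantity over the parameters of an analytic CG pair potential is what gives the method its quantitative measure of coarse-graining performance.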
Collins, L. A.; Boehly, T. R.; Ding, Y. H.; ...
2018-03-23
Polystyrene (CH), commonly known as “plastic,” has been one of the widely used ablator materials for capsule designs in inertial confinement fusion (ICF). Knowing its precise properties under high-energy-density conditions is crucial to understanding and designing ICF implosions through radiation–hydrodynamic simulations. For this purpose, systematic ab initio studies on the static, transport, and optical properties of CH, in a wide range of density and temperature conditions (ρ = 0.1 to 100 g/cm³ and T = 10³ to 4 × 10⁶ K), have been conducted using quantum molecular dynamics (QMD) simulations based on the density functional theory. We have built several wide-ranging, self-consistent material-properties tables for CH, such as the first-principles equation of state (FPEOS), the QMD-based thermal conductivity (Κ_QMD) and ionization, and the first-principles opacity table (FPOT). This paper is devoted to providing a review on (1) what results were obtained from these systematic ab initio studies; (2) how these self-consistent results were compared with both traditional plasma-physics models and available experiments; and (3) how these first-principles–based properties of polystyrene affect the predictions of ICF target performance, through both 1-D and 2-D radiation–hydrodynamic simulations. In the warm dense regime, our ab initio results, which can significantly differ from predictions of traditional plasma-physics models, compared favorably with experiments. When incorporated into hydrocodes for ICF simulations, these first-principles material properties of CH have produced significant differences over traditional models in predicting 1-D/2-D target performance of ICF implosions on OMEGA and direct-drive–ignition designs for the National Ignition Facility. Lastly, we will discuss the implications of these studies on the current small-margin ICF target designs using a CH ablator.
Vanniyasingam, Thuva; Daly, Caitlin; Jin, Xuejing; Zhang, Yuan; Foster, Gary; Cunningham, Charles; Thabane, Lehana
2018-06-01
This study reviews simulation studies of discrete choice experiments to determine (i) how survey design features affect statistical efficiency, (ii) and to appraise their reporting quality. Statistical efficiency was measured using relative design (D-) efficiency, D-optimality, or D-error. For this systematic survey, we searched Journal Storage (JSTOR), Since Direct, PubMed, and OVID which included a search within EMBASE. Searches were conducted up to year 2016 for simulation studies investigating the impact of DCE design features on statistical efficiency. Studies were screened and data were extracted independently and in duplicate. Results for each included study were summarized by design characteristic. Previously developed criteria for reporting quality of simulation studies were also adapted and applied to each included study. Of 371 potentially relevant studies, 9 were found to be eligible, with several varying in study objectives. Statistical efficiency improved when increasing the number of choice tasks or alternatives; decreasing the number of attributes, attribute levels; using an unrestricted continuous "manipulator" attribute; using model-based approaches with covariates incorporating response behaviour; using sampling approaches that incorporate previous knowledge of response behaviour; incorporating heterogeneity in a model-based design; correctly specifying Bayesian priors; minimizing parameter prior variances; and using an appropriate method to create the DCE design for the research question. The simulation studies performed well in terms of reporting quality. Improvement is needed in regards to clearly specifying study objectives, number of failures, random number generators, starting seeds, and the software used. These results identify the best approaches to structure a DCE. An investigator can manipulate design characteristics to help reduce response burden and increase statistical efficiency. 
Since the studies varied in their objectives, conclusions were drawn on several design characteristics; however, the validity of each conclusion was limited. Further research should explore these conclusions in various design settings and scenarios. Additional reviews covering other statistical efficiency outcomes and databases could also be performed to strengthen the conclusions identified in this review.
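As an illustration of the efficiency metrics discussed above, here is a minimal sketch of a D-error calculation for a linear main-effects design. The tiny effects-coded design matrices are hypothetical examples, not designs from the reviewed studies:

```python
import numpy as np

def d_error(X):
    """D-error of a design matrix X (n runs x k parameters):
    determinant of the normalized information matrix raised to -1/k.
    A lower D-error means a more statistically efficient design."""
    n, k = X.shape
    info = X.T @ X / n                      # normalized information matrix
    return np.linalg.det(info) ** (-1.0 / k)

# Hypothetical effects-coded designs for a 2-attribute, 2-level DCE
orthogonal = np.array([[ 1,  1], [ 1, -1], [-1,  1], [-1, -1]], float)
correlated = np.array([[ 1,  1], [ 1,  1], [-1, -1], [ 1, -1]], float)

print(d_error(orthogonal))   # orthogonal, balanced design
print(d_error(correlated))   # correlated attributes: higher D-error
```

For the orthogonal design the information matrix is the identity, giving the minimum D-error; correlation between attribute columns shrinks the determinant and inflates the D-error, which is the sense in which such designs are less efficient.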
Designing Successful Next-Generation Instruments to Detect the Epoch of Reionization
NASA Astrophysics Data System (ADS)
Thyagarajan, Nithyanandan; Hydrogen Epoch of Reionization Array (HERA) team, Murchison Widefield Array (MWA) team
2018-01-01
The Epoch of Reionization (EoR) signifies a period of intense evolution of the Inter-Galactic Medium (IGM) in the early Universe, driven by the first generations of stars and galaxies, wherein the neutral IGM became completely ionized by redshift z ≈ 6. This important epoch is poorly explored to date. Measurement of the redshifted 21 cm line from neutral hydrogen during the EoR promises to provide the most direct constraints on this epoch. Ongoing experiments to detect the redshifted 21 cm power spectrum during reionization, including the Murchison Widefield Array (MWA), the Precision Array for Probing the Epoch of Reionization (PAPER), and the Low Frequency Array (LOFAR), appear to be severely affected by bright foregrounds and unaccounted-for instrumental systematics. For example, the spectral structure introduced by wide-field effects, aperture shapes and angular power patterns of the antennas, electrical and geometrical reflections in the antennas and electrical paths, and antenna position errors can be major limiting factors. These mimic the 21 cm signal and severely degrade instrument performance. It is imperative for the next generation of experiments to eliminate these systematics at their source via robust instrument design. I will discuss a generic framework to set cosmologically motivated antenna performance specifications and design strategies using the Precision Radio Interferometry Simulator (PRISim), a high-precision tool that I have developed for simulations of foregrounds and the instrument transfer function, intended primarily for 21 cm EoR studies but also broadly applicable to interferometer-based intensity mapping experiments. The Hydrogen Epoch of Reionization Array (HERA), designed in part on the basis of this framework, is expected to detect the 21 cm signal with high significance. I will present this framework and the simulations, and their potential for designing upcoming radio instruments such as HERA and the Square Kilometre Array (SKA).
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Hasse, Katelyn; Neylon, John; Sheng, Ke; Santhanam, Anand P
2016-03-01
Breast elastography is a critical tool for improving the targeted radiotherapy treatment of breast tumors. Current breast radiotherapy imaging protocols only involve prone and supine CT scans. There is a lack of knowledge on the quantitative accuracy with which breast elasticity can be systematically measured using only prone and supine CT datasets. The purpose of this paper is to describe a quantitative elasticity estimation technique for breast anatomy using only these supine/prone patient postures. Using biomechanical, high-resolution breast geometry obtained from CT scans, a systematic assessment was performed in order to determine the feasibility of this methodology for clinically relevant elasticity distributions. A model-guided inverse analysis approach is presented in this paper. A graphics processing unit (GPU)-based linear elastic biomechanical model was employed as a forward model for the inverse analysis with the breast geometry in a prone position. The elasticity estimation was performed using a gradient-based iterative optimization scheme and a fast-simulated annealing (FSA) algorithm. Numerical studies were conducted to systematically analyze the feasibility of elasticity estimation. For simulating gravity-induced breast deformation, the breast geometry was anchored at its base, resembling the chest-wall/breast tissue interface. Ground-truth elasticity distributions were assigned to the model, representing tumor presence within breast tissue. Model geometry resolution was varied to estimate its influence on convergence of the system. A priori information was approximated and utilized to record the effect on time and accuracy of convergence. The role of the FSA process was also recorded. A novel error metric that combined elasticity and displacement error was used to quantify the systematic feasibility study. For the authors' purposes, convergence was set to be obtained when each voxel of tissue was within 1 mm of ground-truth deformation. 
The authors' analyses showed that ∼97% model convergence was systematically observed with no a priori information. Varying the model geometry resolution showed no significant accuracy improvements. The GPU-based forward model enabled the inverse analysis to be completed within 10-70 min. Using a priori information about the underlying anatomy, the computation time decreased by as much as 50%, while accuracy improved from 96.81% to 98.26%. The use of FSA was observed to allow the iterative estimation methodology to converge more precisely. By utilizing a forward iterative approach to solve the inverse elasticity problem, this work indicates the feasibility and potential of fast reconstruction of breast tissue elasticity using supine/prone patient postures.
A Systematic Approach for Real-Time Operator Functional State Assessment
NASA Technical Reports Server (NTRS)
Zhang, Guangfan; Wang, Wei; Pepe, Aaron; Xu, Roger; Schnell, Thomas; Anderson, Nick; Heitkamp, Dean; Li, Jiang; Li, Feng; McKenzie, Frederick
2012-01-01
A task overload condition often leads to high stress for an operator, causing performance degradation and possibly disastrous consequences. Just as dangerous, with automated flight systems, an operator may experience a task underload condition (during the en-route flight phase, for example), becoming easily bored and finding it difficult to maintain sustained attention. When an unexpected event occurs, either internal or external to the automated system, the disengaged operator may neglect, misunderstand, or respond slowly/inappropriately to the situation. In this paper, we discuss an approach for Operator Functional State (OFS) monitoring in a typical aviation environment. A systematic ground truth finding procedure has been designed based on subjective evaluations, performance measures, and strong physiological indicators. The derived OFS ground truth is continuous in time compared to a very sparse estimation of OFS based on an expert review or subjective evaluations. It can capture the variations of OFS during a mission to better guide through the training process of the OFS assessment model. Furthermore, an OFS assessment model framework based on advanced machine learning techniques was designed and the systematic approach was then verified and validated with experimental data collected in a high fidelity Boeing 737 simulator. Preliminary results show highly accurate engagement/disengagement detection making it suitable for real-time applications to assess pilot engagement.
Simulation-based training for nurses: Systematic review and meta-analysis.
Hegland, Pål A; Aarlie, Hege; Strømme, Hilde; Jamtvedt, Gro
2017-07-01
Simulation-based training is a widespread strategy to improve health-care quality. However, its effect on registered nurses has not previously been established in systematic reviews. The aim of this systematic review is to evaluate the effect of simulation-based training on nurses' skills and knowledge. We searched CDSR, DARE, HTA, CENTRAL, CINAHL, MEDLINE, Embase, ERIC, and SveMed+ for randomised controlled trials (RCTs) evaluating the effect of simulation-based training among nurses. Searches were completed in December 2016. Two reviewers independently screened abstracts and full-text, extracted data, and assessed risk of bias. We compared simulation-based training to other learning strategies, high-fidelity simulation to other simulation strategies, and different organisations of simulation training. Data were analysed through meta-analysis and narrative synthesis. GRADE was used to assess the quality of evidence. Fifteen RCTs met the inclusion criteria. For the comparison of simulation-based training to other learning strategies on nurses' skills, six studies in the meta-analysis showed a significant effect in favour of simulation (SMD -1.09, CI -1.72 to -0.47), with large heterogeneity (I² = 85%). For the other comparisons, there was large between-study variation in results. The quality of evidence for all comparisons was graded as low. The effect of simulation-based training varies substantially between studies. Our meta-analysis showed a significant effect of simulation training compared to other learning strategies, but the quality of evidence was low, indicating uncertainty. Other comparisons showed inconsistent results. Based on our findings, simulation training appears to be an effective strategy to improve nurses' skills, but further good-quality RCTs with adequate sample sizes are needed. Copyright © 2017 Elsevier Ltd. All rights reserved.
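Pooled effects of the kind reported in such reviews typically come from a random-effects meta-analysis. Below is a minimal DerSimonian-Laird sketch using hypothetical standardized mean differences and variances, not the review's actual study data:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study effect sizes with the DerSimonian-Laird random-effects
    model; returns (pooled effect, tau^2, I^2 in percent)."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                               # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)        # Cochran's Q statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, tau2, i2

# Hypothetical SMDs (negative favours simulation) and their variances
smd = [-1.8, -0.4, -1.2, -0.6, -2.0, -0.5]
var = [0.10, 0.08, 0.12, 0.09, 0.15, 0.07]
pooled, tau2, i2 = dersimonian_laird(smd, var)
print(round(pooled, 2), round(i2, 1))
```

A large I² like the one reported (85%) signals that between-study variance, not sampling error, drives most of the spread, which is why the review pairs the pooled estimate with a low GRADE rating.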
Wong, Chee-Woon; Chong, Kok-Keong; Tan, Ming-Hui
2015-07-27
This paper presents an approach to optimize the electrical performance of a dense-array concentrator photovoltaic system composed of a non-imaging dish concentrator, considering the effects of circumsolar radiation and slope error. Based on the simulated flux distribution, a systematic methodology is proposed to optimize the layout of the solar-cell interconnection circuit in the dense-array concentrator photovoltaic module by minimizing the current mismatch caused by non-uniformity of the concentrated sunlight. An optimized interconnection layout with a minimum electrical power loss of 6.5% can be achieved by minimizing the effects of both circumsolar radiation and slope error.
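The current-mismatch mechanism that such an optimization targets can be illustrated with a toy calculation. The string model below (series current limited by the weakest cell, current proportional to local concentration) and the flux values are assumptions for the sketch, not the paper's model:

```python
import numpy as np

def string_power_loss(cell_currents, voltage_per_cell=1.0):
    """Relative power loss of a series string whose current is limited
    by the weakest (least-illuminated) cell, compared with every cell
    operating at its own maximum power point."""
    i = np.asarray(cell_currents, float)
    ideal = i.sum() * voltage_per_cell             # each cell at its own MPP
    actual = i.min() * voltage_per_cell * i.size   # series string at min current
    return 1.0 - actual / ideal

# Illustrative concentration profiles (in suns) across a 4-cell string
uniform = [500, 500, 500, 500]
smeared = [520, 505, 480, 430]   # circumsolar + slope-error smearing
print(string_power_loss(uniform))   # no mismatch loss
print(string_power_loss(smeared))
```

Grouping cells so that each series string sees nearly equal flux reduces the spread between the minimum and mean cell current, which is exactly what drives the mismatch loss term above toward zero.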
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2015-10-24
Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed a systematic comparison of the behavior of different models under a consistent implementation of the WTG and DGW methods, and a systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce circulations of opposite sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for the WTG simulations with multiple stable equilibria, corresponding to either a dry equilibrium state when initialized dry or a precipitating equilibrium state when initialized moist. Multiple equilibria are seen in more WTG simulations at higher SST. In some models, the existence of multiple equilibria is sensitive to parameters in the WTG calculations.
Gostlow, Hannah; Marlow, Nicholas; Babidge, Wendy; Maddern, Guy
To examine and report on evidence relating to surgical trainees' voluntary participation in simulation-based laparoscopic skills training, specifically the underlying motivators, enablers, and barriers faced by surgical trainees with regard to attending training sessions on a regular basis. A systematic search of the literature (PubMed; CINAHL; EMBASE; Cochrane Collaboration) was conducted between May and July 2015. Studies were included if they reported on surgical trainee attendance at voluntary, simulation-based laparoscopic skills training sessions, in addition to qualitative data regarding participants' perceived barriers and motivators influencing their decision to attend such training. Factors affecting a trainee's motivation were categorized as either intrinsic (internal) or extrinsic (external). Two randomised controlled trials and seven case series met our inclusion criteria. Included studies were small and generally of poor quality. Overall, voluntary simulation-based laparoscopic skills training was not well attended. Intrinsic motivators included clearly defined personal performance goals and relevance to clinical practice. Extrinsic motivators included clinical responsibilities and available free time, simulator location close to clinical training, and setting obligatory assessments or mandated training sessions. The effect of each of these factors was variable, and largely dependent on the individual trainee. The greatest reported barrier to attending voluntary training was the lack of available free time. Although data quality is limited, it can be seen that providing unrestricted access to simulator equipment is not effective in motivating surgical trainees to voluntarily participate in simulation-based laparoscopic skills training. To successfully encourage participation, consideration needs to be given to the factors influencing motivation to attend training.
Further research, including better-designed randomised controlled trials and large-scale surveys, is required to provide more definitive answers regarding the degree to which various incentives influence trainees' motivation and actual attendance rates. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Weak lensing magnification in the Dark Energy Survey Science Verification data
NASA Astrophysics Data System (ADS)
Garcia-Fernandez, M.; Sanchez, E.; Sevilla-Noarbe, I.; Suchyta, E.; Huff, E. M.; Gaztanaga, E.; Aleksić, J.; Ponce, R.; Castander, F. J.; Hoyle, B.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Eifler, T. F.; Evrard, A. E.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; James, D. J.; Jarvis, M.; Kirk, D.; Krause, E.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; MacCrann, N.; Maia, M. A. G.; March, M.; Marshall, J. L.; Melchior, P.; Miquel, R.; Mohr, J. J.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Rykoff, E. S.; Scarpine, V.; Schubnell, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Tarle, G.; Thomas, D.; Walker, A. R.; Wester, W.; DES Collaboration
2018-05-01
In this paper, the effect of weak lensing magnification on galaxy number counts is studied by cross-correlating the positions of two galaxy samples, separated in redshift, using the Dark Energy Survey Science Verification data set. This analysis is carried out for galaxies selected only by their photometric redshifts. An extensive analysis of systematic effects is performed using new simulation-based methods, including a Monte Carlo sampling of the selection function of the survey.
Yourganov, Grigori; Schmah, Tanya; Churchill, Nathan W; Berman, Marc G; Grady, Cheryl L; Strother, Stephen C
2014-08-01
The field of fMRI data analysis is rapidly growing in sophistication, particularly in the domain of multivariate pattern classification. However, the interaction between the properties of the analytical model and the parameters of the BOLD signal (e.g. signal magnitude, temporal variance and functional connectivity) is still an open problem. We addressed this problem by evaluating a set of pattern classification algorithms on simulated and experimental block-design fMRI data. The set of classifiers consisted of linear and quadratic discriminants, a linear support vector machine, and linear and nonlinear Gaussian naive Bayes classifiers. For the linear discriminant, we used two methods of regularization: principal component analysis and ridge regularization. The classifiers were used (1) to classify the volumes according to the behavioral task that was performed by the subject, and (2) to construct spatial maps that indicated the relative contribution of each voxel to classification. Our evaluation metrics were: (1) accuracy of out-of-sample classification and (2) reproducibility of spatial maps. In simulated data sets, we performed an additional evaluation of spatial maps with ROC analysis. We varied the magnitude, temporal variance and connectivity of the simulated fMRI signal and identified the optimal classifier for each simulated environment. Overall, the best performers were the linear and quadratic discriminants (operating on principal components of the data matrix) and, in some rare situations, the nonlinear Gaussian naive Bayes classifier. The results from the simulated data were supported by within-subject analysis of experimental fMRI data, collected in a study of aging. This is the first study to systematically characterize the interaction between the analysis model and signal parameters (such as magnitude, variance and correlation) and its effect on the performance of pattern classifiers for fMRI. Copyright © 2014 Elsevier Inc. All rights reserved.
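One of the classifiers compared above, Gaussian naive Bayes, is simple enough to sketch from scratch on synthetic data. Everything below (the Gaussian "voxel" features, the 0.5 mean shift between conditions, the split sizes) is illustrative and not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "volumes": 2 task conditions, 20 voxels, small mean shift
n, p = 200, 20
X0 = rng.normal(0.0, 1.0, (n, p))
X1 = rng.normal(0.5, 1.0, (n, p))          # condition 2: shifted signal
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

def gnb_fit(X, y):
    """Estimate per-class feature means, variances, and class priors."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    var = np.array([X[y == c].var(axis=0) + 1e-9 for c in classes])
    prior = np.array([np.mean(y == c) for c in classes])
    return mu, var, prior

def gnb_predict(X, mu, var, prior):
    """Assign each sample to the class with the highest log-posterior."""
    ll = -0.5 * (((X[:, None, :] - mu) ** 2) / var
                 + np.log(2 * np.pi * var)).sum(axis=2)
    return np.argmax(ll + np.log(prior), axis=1)

# Split, train, and evaluate out-of-sample accuracy
idx = rng.permutation(2 * n)
train, test = idx[:300], idx[300:]
mu, var, prior = gnb_fit(X[train], y[train])
acc = np.mean(gnb_predict(X[test], mu, var, prior) == y[test])
print(acc)
```

The same train/test split could then be repeated across resamplings to estimate both the accuracy and the reproducibility of the per-voxel weights, mirroring the two evaluation metrics used in the study.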
Adaptive Control Model Reveals Systematic Feedback and Key Molecules in Metabolic Pathway Regulation
Moffitt, Richard A.; Merrill, Alfred H.; Wang, May D.
2011-01-01
Robust behavior in metabolic pathways resembles stabilized performance in systems under autonomous control. This suggests we can apply control theory to study existing regulation in these cellular networks. Here, we use model-reference adaptive control (MRAC) to investigate the dynamics of de novo sphingolipid synthesis regulation in a combined theoretical and experimental case study. The effects of serine palmitoyltransferase over-expression on this pathway are studied in vitro using human embryonic kidney cells. We report two key results from comparing numerical simulations with observed data. First, MRAC simulations of pathway dynamics are comparable to simulations from a standard model using mass-action kinetics. The root-sum-square (RSS) error between data and simulations in the two cases differs by less than 5%. Second, MRAC simulations suggest systematic pathway regulation in terms of adaptive feedback from individual molecules. In response to increased metabolite levels available for de novo sphingolipid synthesis, feedback from molecules along the main artery of the pathway is regulated more frequently and with greater amplitude than feedback from molecules along the branches. These biological insights are consistent with current knowledge while being novel enough that they may guide future research in sphingolipid biology. In summary, we report a novel approach to studying regulation in cellular networks by applying control theory in the context of robust metabolic pathways. We do this to uncover potential insight into the dynamics of regulation and the reverse engineering of cellular networks for systems biology. This new modeling approach and the implementation routines designed for this case study may be extended to other systems. Supplementary Material is available at www.liebertonline.com/cmb. PMID:21314456
Technology-enhanced simulation and pediatric education: a meta-analysis.
Cheng, Adam; Lang, Tara R; Starr, Stephanie R; Pusic, Martin; Cook, David A
2014-05-01
Pediatrics has embraced technology-enhanced simulation (TES) as an educational modality, but its effectiveness for pediatric education remains unclear. The objective of this study was to describe the characteristics and evaluate the effectiveness of TES for pediatric education. This review adhered to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) standards. A systematic search of Medline, Embase, CINAHL, ERIC, Web of Science, Scopus, key journals, and previous review bibliographies through May 2011 and an updated Medline search through October 2013 were conducted. Original research articles in any language evaluating the use of TES for educating health care providers at any stage, where the content solely focuses on patients 18 years or younger, were selected. Reviewers working in duplicate abstracted information on learners, clinical topic, instructional design, study quality, and outcomes. We coded skills (simulated setting) separately for time and nontime measures and similarly classified patient care behaviors and patient effects. We identified 57 studies (3666 learners) using TES to teach pediatrics. Effect sizes (ESs) were pooled by using a random-effects model. Among studies comparing TES with no intervention, pooled ESs were large for outcomes of knowledge, nontime skills (eg, performance in simulated setting), behaviors with patients, and time to task completion (ES = 0.80-1.91). Studies comparing the use of high versus low physical realism simulators showed small to moderate effects favoring high physical realism (ES = 0.31-0.70). TES for pediatric education is associated with large ESs in comparison with no intervention. Future research should include comparative studies that identify optimal instructional methods and incorporate pediatric-specific issues into educational interventions. Copyright © 2014 by the American Academy of Pediatrics.
A systematic review of phacoemulsification cataract surgery in virtual reality simulators.
Lam, Chee Kiang; Sundaraj, Kenneth; Sulaiman, Mohd Nazri
2013-01-01
The aim of this study was to review the capability of virtual reality simulators in the application of phacoemulsification cataract surgery training. Our review included the scientific publications on cataract surgery simulators that had been developed by different groups of researchers along with commercialized surgical training products, such as EYESI® and PhacoVision®. The review covers the simulation of the main cataract surgery procedures, i.e., corneal incision, capsulorrhexis, phacosculpting, and intraocular lens implantation in various virtual reality surgery simulators. Haptics realism and visual realism of the procedures are the main elements in imitating the actual surgical environment. The involvement of ophthalmology in research on virtual reality since the early 1990s has made a great impact on the development of surgical simulators. Most of the latest cataract surgery training systems are able to offer high fidelity in visual feedback and haptics feedback, but visual realism, such as the rotational movements of an eyeball with response to the force applied by surgical instruments, is still lacking in some of them. The assessment of the surgical tasks carried out on the simulators showed a significant difference in the performance before and after the training.
Hui, YU; Ramkrishna, MITRA; Jing, YANG; YuanYuan, LI; ZhongMing, ZHAO
2016-01-01
Identification of differential regulators is critical to understand the dynamics of cellular systems and molecular mechanisms of diseases. Several computational algorithms have recently been developed for this purpose by using transcriptome and network data. However, it remains largely unclear which algorithm performs better under a specific condition. Such knowledge is important for both appropriate application and future enhancement of these algorithms. Here, we systematically evaluated seven main algorithms (TED, TDD, TFactS, RIF1, RIF2, dCSA_t2t, and dCSA_r2t), using both simulated and real datasets. In our simulation evaluation, we artificially inactivated either a single regulator or multiple regulators and examined how well each algorithm detected known gold standard regulators. We found that all these algorithms could effectively discern signals arising from regulatory network differences, indicating the validity of our simulation schema. Among the seven tested algorithms, TED and TFactS were placed first and second when both discrimination accuracy and robustness against data variation were considered. When applied to two independent lung cancer datasets, both TED and TFactS replicated a substantial fraction of their respective differential regulators. Since TED and TFactS rely on two distinct features of transcriptome data, namely differential co-expression and differential expression, both may be applied as mutual references during practical application. PMID:25326829
Simulation of Graphene Field-Effect Transistor Biosensors for Bacterial Detection.
Wu, Guangfu; Meyyappan, Meyya; Lai, King Wai Chiu
2018-05-25
Foodborne illness is correlated with the presence of infectious pathogens such as bacteria in food and drinking water. Probe-modified graphene field-effect transistors (G-FETs) have been shown to be suitable for Escherichia coli (E. coli) detection. Here, G-FETs for bacterial detection are modeled and simulated with COMSOL Multiphysics to understand the operation of the biosensors. The motion of E. coli cells in electrolyte and the surface charge of graphene induced by E. coli are systematically investigated. Comparison between the simulation and experimental data shows the sensing-probe size to be a key parameter affecting the surface charge of graphene induced by bacteria. Finally, the relationship among the change in source-drain current (ΔI_ds), the graphene-bacteria distance, and the bacterial concentration is established. A shorter graphene-bacteria distance and a higher bacterial concentration give rise to better sensing performance (larger ΔI_ds) of the G-FET biosensors. The simulation here could serve as a guideline for the design and optimization of G-FET biosensors for various applications.
Hasnain, Sabeeha; McClendon, Christopher L; Hsu, Monica T; Jacobson, Matthew P; Bandyopadhyay, Pradipta
2014-01-01
A new coarse-grained model of the E. coli cytoplasm is developed by describing the proteins of the cytoplasm as flexible units consisting of one or more spheres that follow Brownian dynamics (BD), with hydrodynamic interactions (HI) accounted for by a mean-field approach. Extensive BD simulations were performed to calculate the diffusion coefficients of three different proteins in the cellular environment. The results are in close agreement with experimental or previously simulated values, where available. Control simulations without HI showed that use of HI is essential to obtain accurate diffusion coefficients. Anomalous diffusion inside the crowded cellular medium was investigated with Fractional Brownian motion analysis, and found to be present in this model. By running a series of control simulations in which various forces were removed systematically, it was found that repulsive interactions (volume exclusion) are the main cause for anomalous diffusion, with a secondary contribution from HI.
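Diffusion coefficients in such BD simulations are commonly extracted from the mean-squared displacement (MSD). A minimal free-diffusion sketch with a known input D (a plain random walk in arbitrary units, not the crowded-cytoplasm model itself):

```python
import numpy as np

rng = np.random.default_rng(42)

D_true = 1.0       # input diffusion coefficient (units^2 / time)
dt = 0.01          # time step
n_steps, n_particles = 2000, 500

# Free 3-D Brownian dynamics: Gaussian steps with variance 2*D*dt per axis
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (n_steps, n_particles, 3))
traj = np.cumsum(steps, axis=0)

# For free diffusion in 3-D, MSD(t) = 6*D*t; estimate D at the final time
msd = np.mean(np.sum(traj ** 2, axis=2), axis=1)   # average over particles
t = dt * np.arange(1, n_steps + 1)
D_est = msd[-1] / (6 * t[-1])
print(D_est)
```

In a crowded or anomalous regime, MSD(t) instead grows as t^α with α < 1; fitting log MSD against log t for the exponent α is the standard diagnostic for the subdiffusion the study reports.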
NASA Technical Reports Server (NTRS)
Phatak, A. V.
1980-01-01
A systematic analytical approach to the determination of helicopter IFR precision approach requirements is formulated. The approach is based upon the hypothesis that pilot acceptance level or opinion rating of a given system is inversely related to the degree of pilot involvement in the control task. A nonlinear simulation of the helicopter approach-to-landing task incorporating appropriate models for the UH-1H aircraft, the environmental disturbances and the human pilot was developed as a tool for evaluating the pilot acceptance hypothesis. The simulated pilot model is generic in nature and includes analytical representation of the human information acquisition, processing, and control strategies. Simulation analyses in the flight director mode indicate that the pilot model used is reasonable. Results of the simulation are used to identify candidate pilot workload metrics and to test the well-known performance-workload relationship. A pilot acceptance analytical methodology is formulated as a basis for further investigation, development and validation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soltz, R. A.; Danagoulian, A.; Sheets, S.
Theoretical calculations indicate that the value of the Feynman variance, Y2F, for the emitted distribution of neutrons from fissionable material exhibits a strong monotonic dependence on the multiplication, M, of a quantity of special nuclear material. In 2012 we performed a series of measurements at the Passport Inc. facility using a 9-MeV bremsstrahlung CW beam of photons incident on small quantities of uranium with liquid scintillator detectors. For the set of objects studied we observed deviations from the expected monotonic dependence, and these deviations were later confirmed by MCNP simulations. In this report, we modify the theory to account for the contribution from the initial photofission and benchmark the new theory with a series of MCNP simulations on DU, LEU, and HEU objects spanning a wide range of masses and multiplication values.
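The Feynman variance can be illustrated with synthetic gated counts: for a Poisson (non-multiplying) source the variance-to-mean ratio is 1, while correlated fission bursts inflate it. The burst model and rates below are toy assumptions, not the measured data:

```python
import numpy as np

rng = np.random.default_rng(1)

def feynman_y(counts):
    """Feynman-Y: variance-to-mean ratio of gated counts minus 1.
    Zero for a Poisson (uncorrelated) source; positive when counts
    arrive in correlated bursts, as from a multiplying source."""
    c = np.asarray(counts, float)
    return c.var() / c.mean() - 1.0

gates = 200_000

# Uncorrelated source: plain Poisson counts in each gate
background = rng.poisson(3.0, gates)

# Crude multiplying source: a Poisson number of fission events per gate,
# each contributing a correlated burst of counts (inflates the variance)
events = rng.poisson(1.0, gates)
burst = rng.poisson(3.0 * events)

print(feynman_y(background))   # near zero
print(feynman_y(burst))        # clearly positive
```

In this compound-Poisson toy the expected Y is E[N]Var(X)/E[C] + Var(N)E[X]²/E[C] - ... which works out to about 3 here; the point is only the qualitative contrast with the Poisson case, not the photofission-corrected theory of the report.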
Parasitic Parameters Extraction for InP DHBT Based on EM Method and Validation up to H-Band
NASA Astrophysics Data System (ADS)
Li, Oupeng; Zhang, Yong; Wang, Lei; Xu, Ruimin; Cheng, Wei; Wang, Yuan; Lu, Haiyan
2017-05-01
This paper presents a small-signal model for the InGaAs/InP double heterojunction bipolar transistor (DHBT). Parasitic parameters of the access vias and electrode fingers are extracted by 3-D electromagnetic (EM) simulation. By analyzing the equivalent circuits of seven special structures and using the EM simulation results, the parasitic parameters are extracted systematically. Compared with a multi-port s-parameter EM model, the equivalent circuit model has clear physical meaning and avoids complex internal port settings. The model is validated on a 0.5 × 7 μm2 InP DHBT up to 325 GHz and provides a good fit between measured and simulated multi-bias s-parameters over the full band. Finally, an H-band amplifier is designed and fabricated for further verification. The measured amplifier performance agrees well with the model prediction, indicating that the model is accurate in the submillimetre-wave band.
Using GPU parallelization to perform realistic simulations of the LPCTrap experiments
NASA Astrophysics Data System (ADS)
Fabian, X.; Mauger, F.; Quéméner, G.; Velten, Ph.; Ban, G.; Couratin, C.; Delahaye, P.; Durand, D.; Fabre, B.; Finlay, P.; Fléchard, X.; Liénard, E.; Méry, A.; Naviliat-Cuncic, O.; Pons, B.; Porobic, T.; Severijns, N.; Thomas, J. C.
2015-11-01
The LPCTrap setup is a sensitive tool to measure the β–ν angular correlation coefficient, a_βν, which can yield the mixing ratio ρ of a β-decay transition. The latter enables the extraction of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element V_ud. In such a measurement, the most relevant observable is the energy distribution of the recoiling daughter nuclei following the nuclear β decay, which is obtained using a time-of-flight technique. In order to maximize the precision, one can reduce the systematic errors through a thorough simulation of the whole setup, especially with a correct model of the trapped ion cloud. This paper presents such a simulation package and focuses on the ion-cloud features; particular attention is therefore paid to realistic descriptions of the trapping-field dynamics, buffer-gas cooling and N-body space-charge effects.
Deciphering Cryptic Binding Sites on Proteins by Mixed-Solvent Molecular Dynamics.
Kimura, S Roy; Hu, Hai Peng; Ruvinsky, Anatoly M; Sherman, Woody; Favia, Angelo D
2017-06-26
In recent years, molecular dynamics simulations of proteins in explicit mixed solvents have been applied to various problems in protein biophysics and drug discovery, including protein folding, protein surface characterization, fragment screening, allostery, and druggability assessment. In this study, we perform a systematic study of how mixtures of organic solvent probes in water can reveal cryptic ligand binding pockets that are not evident in crystal structures of apo proteins. We examine a diverse set of eight PDB proteins that show pocket opening induced by ligand binding and investigate whether mixed-solvent MD simulations on the apo structures can induce the binding site observed in the holo structures. The cosolvent simulations were found to induce conformational changes on the protein surface, which were characterized and compared with the holo structures. Analyses of the biological systems, the choice of probes and concentrations, the druggability of the resulting induced pockets, and applications to drug discovery are discussed.
MacKinnon, Karen; Marcellus, Lenora; Rivers, Julie; Gordon, Carol; Ryan, Maureen; Butcher, Diane
2015-01-01
The overall aim of this systematic review is to identify the appropriateness and meaningfulness of maternal-child simulation-based learning for undergraduate or pre-registration nursing students in educational settings to inform curriculum decision-making.
1. What are the experiences of nursing or health professional students participating in undergraduate or pre-licensure maternal-child simulation-based learning in educational settings?
2. What are the experiences of educators participating in undergraduate or pre-licensure maternal-child simulation-based learning in educational settings?
3. What teaching and learning practices in maternal-child simulation-based learning are considered appropriate and meaningful by students and educators?
Maternal-child care is one of the pillars of primary health care. Health promotion and illness/injury prevention begin in the preconception period and continue through pregnancy, birth, the postpartum period and the childrearing years. Thus, lifelong wellness is promoted across the continuum of perinatal and pediatric care, which influences family health and early child development. Registered nurses (RNs) are expected to have the knowledge and skills needed to provide evidence-based nursing with childbearing and child-rearing families to promote health and address health inequities in many settings, including inner city, rural, northern, indigenous and global communities. The Canadian Maternity Experiences survey and the Report by the Advisor on Healthy Children and Youth provide information on current shortages of perinatal and child health care providers and stress the importance of the role of nurses as providers of rural and remote care.
From a global health perspective, continued concern with both perinatal and child health morbidities and mortalities highlights the importance of maintaining and strengthening maternal and child health learning opportunities within undergraduate nursing curricula. Despite this importance, educators in many countries have acknowledged difficulties providing nursing students with maternal-child hospital learning experiences due to declining birth rates, women's changing expectations about childbirth (i.e. birth as an intimate experience), increased outpatient and community management of early childhood health conditions, and increased competition for clinical placements. Canadian nurse educators and practice leaders have also identified gaps in recent RN graduates' readiness to provide safe, competent and evidence-based care for childbearing and child-rearing families. Newly graduated RNs working in acute care hospitals and in rural/remote community practice settings report feeling unprepared to provide maternity, neonatal and early childhood care. Recent concerns about the clinical reasoning skills of new graduates and the link to poor patient outcomes (e.g. not recognizing deteriorating patients) have led to calls to reform nursing education. In the Carnegie report, Benner, Sutphen, Leonard and Day identified four essential themes needed in the thinking and approach to nursing education: (1) a shift in focus from covering decontextualized knowledge to "teaching for a sense of salience, situated cognition, and identifying action in particular clinical situations"; (2) better integration of classroom and clinical teaching; (3) more emphasis on clinical reasoning; and (4) an emphasis on identity formation rather than socialization. Brown and Hartrick Doane propose that nurses need to draw on a range of knowledge that enhances the nurse's "sensitivity and ability to be responsive in particular moments of practice".
Theoretical or decontextualized knowledge becomes a "pragmatic tool" used to improve nursing practice. Simulation has been identified as a promising pragmatic educational tool for practice learning that can be integrated with theoretical knowledge from nursing and other disciplines. Bland, Topping and Wood conducted a concept analysis and offered a definition of simulation in nursing education. They also proposed that "simulated learning is a dynamic concept that deserves empirical evaluation not merely to determine its effects but to uncover its full potential as a learning strategy". Simulation usually involves student(s) providing nursing care to a simulated patient, who might be a manikin or an actor, based on a standardized scenario. Following the experiential learning opportunity, the scenario is debriefed and the clinical situation analyzed, with opportunities for reflection on performance. In nursing education, simulation is usually used in a way that complements learning in practice settings. However, simulation has also been used to make up some clinical practice hours, to provide opportunities to practice and assess particular clinical skills, and for remedial learning when students encounter difficulties in practice settings. In addition, simulation provides the opportunity to focus on the quality and safety competencies (QSEN) that have been identified for nurses. New forms of simulation are being developed with multiple patients so that nursing students can learn to prioritize care needs and delegate care to other team members. Nurse educators have identified several advantages of simulation for learners, including: providing a safe environment in which to improve nursing competence, allowing learners to become more comfortable with receiving feedback about their clinical performance, providing consistent and comparable experiences for all students, and learning a mix of technical and non-technical skills including communication, teamwork and delegation.
Within the Canadian context, students and instructors have reported positive learning experiences with simulation, particularly in understanding complex patient care scenarios, multidisciplinary team scenarios, team-based learning, and reflective debriefing. Furthermore, simulation technology has been proposed as a strategy for developing clinical reasoning skills, enhancing nurses' abilities to build upon previous knowledge and past experiences, and managing new or unfamiliar situations. Simulation has previously been integrated into nursing curricula in a "piecemeal" fashion that lacks an integrative pedagogy or theoretical approach. More recently, a number of theoretical and pedagogical frameworks and best practice standards have been published. In April 2014 a preliminary search of the literature (in CINAHL, Medline, Academic Search Complete and Web of Science) was conducted with guidance from our library specialist to test the search strategy and ensure that there would be enough qualitative findings to include in the systematic review. A preliminary scan of the abstracts from these searches demonstrated that many experiential case reports with qualitative findings were missed when research limiters were used (including our search strategy specifically constructed to retrieve qualitative research), so the decision was made to err on the side of caution by searching more broadly and reviewing a larger number of abstracts for inclusion in the study. Nevertheless, a number of reports with qualitative findings were identified. For example, from a review of the abstracts from a CINAHL search dated April 17, qualitative research papers (including two dissertations), 12 evaluation study reports, six mixed methods studies and nine case reports with qualitative findings were identified.
It is timely, then, to review qualitative studies to better understand the meaningfulness and appropriateness of integrating maternal-child simulation-based learning activities into undergraduate nursing education programs. A search of both the Cochrane Library of Systematic Reviews and the Joanna Briggs Institute Database of Systematic Reviews and Implementation Reports has been conducted. No systematic reviews of qualitative studies of maternal-child simulation-based learning for undergraduate or pre-registration nursing students in educational settings are evident in the literature. Although a systematic review of the meaningfulness and appropriateness of using human patient simulation manikins as a teaching and learning strategy in undergraduate nursing education had been planned and a protocol registered in October 2009, we learned from contacting the lead author that this systematic review was not completed. Currently little is known about how nursing students and/or educators have experienced maternal-child simulation or their understandings of the appropriateness and meaningfulness of particular simulation-based learning practices. Our proposed systematic review therefore fulfills all requirements for the PROSPERO database. For this review we will use the definition of "simulation-based learning experience" adopted by the International Nursing Association for Clinical Simulation and Learning (INACSL). We will include any use of simulation in an educational setting (with pre-registration or pre-licensure or undergraduate nursing or health professional students) with a focus relevant to maternal-child nursing. Maternal-child nursing has been variously defined in the literature to include maternity care and pediatric nursing. For the purposes of this review, we will include perinatal, neonatal and pediatric contexts of care that focus on families with children under the age of five.
We will exclude studies that focus on school-age children, adolescents and/or youth. We have adapted an earlier definition of "appropriateness" as the "best conditions under which simulation can be integrated into undergraduate nursing education". In this review, "meaningfulness" refers to the experiences and reflections of undergraduate nursing or health professional students and educators as presented in the studies reviewed.
The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.
2006-01-01
Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reasons for these systematic errors, and for the effectiveness of these corrections, have remained a puzzle. In this work the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full-model experiments using the latest version of the Goddard Earth Observing System GCM.
MacKinnon, Ralph; Aitken, Deborah; Humphries, Christopher
2015-12-17
Technology-enhanced simulation is well established in healthcare teaching curricula, including those regarding wilderness medicine. Compellingly, the evidence base for the value of this educational modality in improving learner competencies and patient outcomes is growing. The aim was to systematically review the characteristics of technology-enhanced simulation reported in the wilderness medicine literature to date; the secondary aim was to explore how this technology has been used and whether its use has been associated with improved learner or patient outcomes. EMBASE and MEDLINE were systematically searched from 1946 to 2014 for articles on the use of technology-enhanced simulation to teach wilderness medicine. Working independently, the team evaluated the information against the criteria of learners, setting, instructional design, content, and outcomes. From a pool of 37 articles, 11 publications were eligible for systematic review. The majority of learners in the included publications were medical students; settings included both indoors and outdoors; and the main clinical content focus was initial trauma management, with some studies including leadership skills. The most prevalent instructional design components were clinical variation and cognitive interactivity, with learner satisfaction as the main outcome. The results confirm that the current provision of wilderness medicine teaching using technology-enhanced simulation is aligned with instructional design characteristics that have been used to achieve effective learning. Future research should aim to demonstrate the translation of learning into the clinical field to produce improved learner outcomes and, ultimately, improved patient outcomes.
A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review
Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha
2017-01-01
Objective: To systematically review the current literature of simulation in healthcare including the structured steps in the emergency healthcare sector by proposing a framework for simulation in the emergency department. Methods: For the purpose of collecting the data, PubMed and ACM databases were used between the years 2003 and 2013. The inclusion criteria were to select English-written articles available in full text with the closest objectives from among a total of 54 articles retrieved from the databases. Subsequently, 11 articles were selected for further analysis. Results: The studies focused on the reduction of waiting time and patient stay, optimization of resources allocation, creation of crisis and maximum demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of the system operations and functions. Subsequently, 10 simulation steps were derived from the relevant studies after an expert’s evaluation. Conclusion: The 10-steps approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as software implementation of simulation problems. PMID:28507994
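The discrete-event approach that the framework above structures can be illustrated with a deliberately small sketch: an emergency department reduced to a single doctor serving a Poisson stream of patients (an M/M/1 queue). The rates, names, and the single-server simplification are illustrative assumptions, not part of the reviewed framework.

```python
import random

def simulate_ed(n_patients, arrival_rate, service_rate, seed=1):
    """One-doctor ED modeled as an M/M/1 queue; returns the mean wait
    (time from arrival until treatment starts), in arbitrary time units."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)   # Poisson patient arrivals
        arrivals.append(t)
    doctor_free, total_wait = 0.0, 0.0
    for arrive in arrivals:
        start = max(arrive, doctor_free)     # queue if the doctor is busy
        total_wait += start - arrive
        doctor_free = start + rng.expovariate(service_rate)
    return total_wait / n_patients

# Raising utilisation lengthens queues: a crude "overcrowding bottleneck"
# of the kind the reviewed studies probe with crisis/maximum-demand scenarios.
print(simulate_ed(500, 0.9, 1.0) > simulate_ed(500, 0.5, 1.0))  # True
```

Real ED models add triage priorities, multiple resources (beds, nurses, labs) and empirical arrival profiles, but each is the same pattern: events advance a clock and contend for resources.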
Measurement of the $B^-$ lifetime using a simulation free approach for trigger bias correction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaltonen, T.; /Helsinki Inst. of Phys.; Adelman, J.
2010-04-01
The collection of a large number of B hadron decays to hadronic final states at the CDF II detector is possible due to a trigger that selects events based on track impact parameters. However, the nature of the trigger selection requirements introduces a large bias in the observed proper decay time distribution. A lifetime measurement must correct for this bias, and the conventional approach has been to use a Monte Carlo simulation; the leading sources of systematic uncertainty in that approach are differences between the data and the simulation. In this paper we present an analytic method for bias correction without using simulation, thereby removing any uncertainty arising from data-simulation differences. The method is presented in the form of a measurement of the B^- lifetime using the mode B^- → D^0 π^-. The B^- lifetime is measured as τ_B^- = 1.663 ± 0.023 ± 0.015 ps, where the first uncertainty is statistical and the second systematic. This new method results in a smaller systematic uncertainty than methods that use simulation to correct for the trigger bias.
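In the idealized unbiased limit (no trigger selection), the proper-decay-time fit underlying such a measurement reduces to the maximum-likelihood estimate for an exponential distribution, which is simply the mean observed decay time; the per-event acceptance functions that correct the trigger bias are beyond a short sketch. The toy check below uses the quoted lifetime value as the true parameter; all names are illustrative.

```python
import random

def fit_lifetime(times):
    """MLE for an exponential lifetime: tau_hat = sample mean of decay times."""
    return sum(times) / len(times)

rng = random.Random(7)
TAU = 1.663  # ps, the measured B- lifetime quoted in the abstract
decays = [rng.expovariate(1.0 / TAU) for _ in range(100_000)]
tau_hat = fit_lifetime(decays)
# The statistical error scales as TAU/sqrt(N) ~ 0.005 ps here.
print(abs(tau_hat - TAU) < 0.05)  # True
```

A biased trigger would truncate small decay times from `decays`, pulling the naive mean upward; the paper's contribution is correcting for exactly that effect analytically rather than with simulated acceptances.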
Performance Modeling of Experimental Laser Lightcrafts
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Chen, Yen-Sen; Liu, Jiwen; Myrabo, Leik N.; Mead, Franklin B., Jr.; Turner, Jim (Technical Monitor)
2001-01-01
A computational plasma aerodynamics model is developed to study the performance of a laser-propelled Lightcraft. The computational methodology is based on a time-accurate, three-dimensional, finite-difference, chemically reacting, unstructured-grid, pressure-based formulation. The underlying physics are added and tested systematically using a building-block approach. The physics modeled include non-equilibrium thermodynamics, non-equilibrium air-plasma finite-rate kinetics, specular ray tracing, laser beam energy absorption and refraction by the plasma, non-equilibrium plasma radiation, and plasma resonance. A series of transient computations is performed at several laser pulse energy levels, and the simulated physics are discussed and compared with tests and the literature. The predicted coupling coefficients for the Lightcraft compare reasonably well with those measured in tests conducted on a pendulum apparatus.
Exponential H∞ Synchronization of Chaotic Cryptosystems Using an Improved Genetic Algorithm
Hsiao, Feng-Hsiag
2015-01-01
This paper presents a systematic design methodology for neural-network- (NN-) based secure communications in multiple time-delay chaotic (MTDC) systems with optimal H∞ performance and cryptography. On the basis of the Improved Genetic Algorithm (IGA), which is demonstrated to perform better than a traditional GA, a model-based fuzzy controller is synthesized to stabilize the MTDC systems. The fuzzy controller not only realizes exponential synchronization but also achieves optimal H∞ performance by minimizing the disturbance attenuation level. Furthermore, the error of the recovered message is characterized using the n-shift cipher and key. Finally, a numerical example with simulations is given to demonstrate the effectiveness of the approach. PMID:26366432
Chung, C K; Shih, T R; Chen, T C; Wu, B H
2008-10-01
A planar micromixer with rhombic microchannels and a converging-diverging element has been systematically investigated by the Taguchi method, CFD-ACE simulations and experiments. To reduce the footprint and extend the operational range of Reynolds number (Re), the Taguchi method was used to study the performance of the micromixer numerically with an L9 orthogonal array. Mixing efficiency is strongly influenced by the geometrical parameters and Re. The four factors in the L9 orthogonal array are the number of rhombi, the turning angle, the width of the rhombic channel and the width of the throat. The sensitivities determined by the Taguchi method rank as: number of rhombi > width of the rhombic channel > width of the throat > turning angle of the rhombic channel. Increasing the number of rhombi, reducing the widths of the rhombic channel and throat, and lowering the turning angle resulted in better mixing efficiency. The optimal design of the micromixer shows over 90% mixing efficiency in simulations at both Re ≥ 80 and Re ≤ 0.1, and experimental results for the optimal design are consistent with the simulations. This planar rhombic micromixer simplifies the complex fabrication process of multi-layer or three-dimensional micromixers and improves on the performance of a previous rhombic micromixer at a reduced footprint and lower Re.
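The L9(3^4) orthogonal array used in that screening study has a standard construction: nine runs covering four three-level factors so that every pair of columns contains each level combination exactly once. The sketch below builds the standard array and ranks factor sensitivity by the range of level-mean responses, as in a Taguchi main-effects analysis; the response values are placeholders, not the paper's data.

```python
from itertools import combinations

# Standard Taguchi L9(3^4) array: 9 runs, four factors at three levels each.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Orthogonality: every pair of columns shows all 9 level pairs exactly once.
for i, j in combinations(range(4), 2):
    assert len({(row[i], row[j]) for row in L9}) == 9

def main_effect_range(results, col):
    """Factor sensitivity: spread of the mean response at each level."""
    means = [
        sum(r for row, r in zip(L9, results) if row[col] == lvl) / 3
        for lvl in (1, 2, 3)
    ]
    return max(means) - min(means)

# Hypothetical mixing efficiencies (%) for the nine runs (placeholders).
results = [52, 61, 70, 66, 74, 55, 77, 58, 63]
ranking = sorted(range(4), key=lambda c: -main_effect_range(results, c))
print(ranking)  # factor columns ordered most -> least influential
```

With real measured efficiencies in `results`, this ranking is exactly the "degree of sensitivity" ordering the abstract reports for the four geometric factors.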
Redundancy Maintenance and Garbage Collection Strategies in Peer-to-Peer Storage Systems
NASA Astrophysics Data System (ADS)
Liu, Xin; Datta, Anwitaman
Maintaining redundancy in P2P storage systems is essential for reliability guarantees. Numerous P2P storage system maintenance algorithms have been proposed in recent years, each supposedly improving upon previous approaches. We perform a systematic comparative study of the various strategies, also taking into account the influence of different garbage-collection mechanisms, an issue not studied so far. Our experiments show that while some strategies generally perform better than others, there is no universally best strategy; their relative superiority depends on various other design choices as well as on the specific evaluation criterion. Our results can be used by P2P storage system designers to make prudent design decisions, and our exploration of the various evaluation metrics also provides a more comprehensive framework for comparing algorithms for P2P storage systems. While numerous network simulators have been developed, some specifically to simulate peer-to-peer networks, no P2P storage simulator previously existed; a byproduct of this work is a generic, modular P2P storage system simulator which we provide as open source. Different redundancy, maintenance, placement and garbage-collection policies and churn scenarios can easily be integrated into the simulator to try out new schemes, providing a common framework to compare future P2P storage system designs, something which has not been possible so far.
Costa, Luciano T; Ribeiro, Mauro C C
2006-05-14
Molecular dynamics (MD) simulations have been performed for prototype models of polymer electrolytes in which the salt is an ionic liquid based on 1-alkyl-3-methylimidazolium cations and the polymer is poly(ethylene oxide), PEO. The MD simulations combined previously proposed models for pure ionic liquids and for polymer electrolytes containing simple inorganic ions. A systematic investigation of the effects of ionic liquid concentration, temperature, and 1-alkyl chain length ([1,3-dimethylimidazolium]PF6 and [1-butyl-3-methylimidazolium]PF6) on the resulting equilibrium structure is provided. It is shown that the ionic liquid is dispersed in the polymeric matrix, but ionic pairs remain in the polymer electrolyte. Imidazolium cations are coordinated by both the anions and the oxygen atoms of the PEO chains. Probability density maps of the occurrence of nearest neighbors around the imidazolium cations give a detailed physical picture of the environment experienced by the cations. Conformational changes of the PEO chains upon addition of the ionic liquid are identified. The equilibrium structure of the simulated systems is also analyzed in reciprocal space using the static structure factor, S(k). The calculated S(k) displays a low-wave-vector peak, indicating that extended-range spatial correlations prevail in the ionic liquid polymer electrolytes. These long-range correlations are assigned to the nonuniform distribution of ionic species within the simulation box.
Higher representations on the lattice: Numerical simulations, SU(2) with adjoint fermions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Del Debbio, Luigi; Patella, Agostino; Pica, Claudio
2010-05-01
We discuss the lattice formulation of gauge theories with fermions in arbitrary representations of the color group and present in detail the implementation of the hybrid Monte Carlo (HMC)/rational HMC algorithm for simulating dynamical fermions. We discuss the validation of the implementation through an extensive set of tests and the stability of simulations by monitoring the distribution of the lowest eigenvalue of the Wilson-Dirac operator. Working with two flavors of Wilson fermions in the adjoint representation, benchmark results for realistic lattice simulations are presented. Runs are performed on different lattice sizes ranging from 4^3 × 8 to 24^3 × 64 sites. For the two smallest lattices we also report the measured values of benchmark mesonic observables. These results can be used as a baseline for rapid cross-checks of simulations in higher representations. The results presented here are the first steps toward more extensive investigations with controlled systematic errors, aiming at a detailed understanding of the phase structure of these theories, and of their viability as candidates for strong dynamics beyond the standard model.
NASA Astrophysics Data System (ADS)
Nair, Shiny; Kathiresan, M.; Mukundan, T.
2018-02-01
Device characteristics of organic thin film transistors (OTFTs) fabricated with conducting polyaniline:polystyrene sulphonic acid (PANi-PSS) electrodes, patterned by the Parylene lift-off method, are systematically analyzed by two-dimensional numerical simulation. The device simulation takes into account field-dependent mobility, a low-mobility layer at the electrode-semiconductor interface, the trap distribution in the pentacene film and trapped charge at the organic/insulator interface. The electrical characteristics of the bottom-contact thin film transistor with PANi-PSS electrodes and pentacene active material are superior to those with palladium electrodes due to a lower charge-injection barrier. Contact resistance was extracted in both cases by the transfer line method (TLM). The charge concentration and potential profiles extracted from the two-dimensional numerical simulation were used to explain the observed electrical characteristics. The simulated device characteristics not only matched the experimental electrical characteristics but also gave insight into the charge injection, transport and trap properties of the OTFTs as a function of electrode material from the perspective of transistor operation.
Benchmarking of measurement and simulation of transverse rms-emittance growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeon, Dong-O
2008-01-01
Transverse emittance growth along the Alvarez DTL section is a major concern with respect to the preservation of beam quality of high-current beams at the GSI UNILAC. In order to define measures to reduce this growth, appropriate tools to simulate the beam dynamics are indispensable. This paper concerns the benchmarking of three beam dynamics simulation codes, i.e. DYNAMION, PARMILA, and PARTRAN, against systematic measurements of beam emittances for different machine settings. The experimental set-ups, data reduction, preparation of the simulations, and evaluation of the simulations are described. It was found that the measured 100%-rms-emittances behind the DTL exceed the simulated values. Comparing measured 90%-rms-emittances to simulated 95%-rms-emittances instead gives fair to good agreement. The sum of the horizontal and vertical emittances is even well described by the codes, as long as experimental 90%-rms-emittances are compared to simulated 95%-rms-emittances. Finally, the successful reduction of transverse emittance growth by systematic beam matching is reported.
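The rms emittance compared in such benchmarks follows the standard statistical definition ε_rms = sqrt(⟨x²⟩⟨x′²⟩ − ⟨xx′⟩²), computed from centroid-subtracted second moments of particle position x and divergence x′. A minimal sketch with made-up particle coordinates:

```python
import math

def rms_emittance(x, xp):
    """Statistical rms emittance: sqrt(<x^2><x'^2> - <x x'>^2),
    using centroid-subtracted second moments; max() guards against
    tiny negative determinants from floating-point round-off."""
    n = len(x)
    mx, mxp = sum(x) / n, sum(xp) / n
    x2 = sum((a - mx) ** 2 for a in x) / n
    xp2 = sum((b - mxp) ** 2 for b in xp) / n
    xxp = sum((a - mx) * (b - mxp) for a, b in zip(x, xp)) / n
    return math.sqrt(max(x2 * xp2 - xxp ** 2, 0.0))

# Uncorrelated (upright) phase-space ellipse: emittance = sigma_x * sigma_x'.
x = [-1.0, 1.0, 0.0, 0.0]
xp = [0.0, 0.0, -0.5, 0.5]
print(rms_emittance(x, xp))  # 0.25
```

In practice codes report the emittance of the innermost 90%, 95% or 100% of the particle distribution, which is exactly the distinction (90%-rms vs 95%-rms vs 100%-rms) that matters in the benchmark above.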
NASA Astrophysics Data System (ADS)
Liang, Dong; Song, Yimin; Sun, Tao; Jin, Xueying
2017-09-01
A systematic dynamic modeling methodology is presented to develop the rigid-flexible coupling dynamic model (RFDM) of an emerging flexible parallel manipulator with multiple actuation modes. By virtue of the assumed-mode method, the general dynamic model of an arbitrary flexible body with any number of lumped parameters is derived in an explicit closed form, which possesses a modular characteristic. The complete dynamic model of the system is then formulated based on flexible multi-body dynamics (FMD) theory and the augmented Lagrangian multiplier method. An approach combining the Udwadia-Kalaba formulation with the hybrid TR-BDF2 numerical algorithm is proposed to solve the nonlinear RFDM. Two simulation cases are performed to investigate the dynamic performance of the manipulator under different actuation modes. The results indicate that the redundant actuation modes can effectively attenuate vibration and guarantee higher dynamic performance compared to the traditional non-redundant actuation modes. Finally, a virtual prototype model is developed to demonstrate the validity of the presented RFDM. The systematic methodology proposed in this study can be conveniently extended to the dynamic modeling and controller design of other planar flexible parallel manipulators, especially emerging ones with multiple actuation modes.
Patient Safety and Quality Improvement in Otolaryngology Education: A Systematic Review.
Gettelfinger, John D; Paulk, P Barrett; Schmalbach, Cecelia E
2017-06-01
Objective The breadth and depth of patient safety/quality improvement (PS/QI) research dedicated to otolaryngology-head and neck surgery (OHNS) education remain unknown. This systematic review aims to define this scope and to identify knowledge gaps as well as potential areas of future study to improve PS/QI education and training in OHNS. Data Sources A computerized Ovid/Medline database search was conducted (January 1, 1965, to May 15, 2015). Similar computerized searches were conducted using the Cochrane Database, PubMed, and Google Scholar. Review Methods The study protocol was developed a priori using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Articles were classified by year, subspecialty, Institute of Medicine (IOM) Crossing the Chasm categories, and World Health Organization (WHO) subclass. Results Computerized searches yielded 8743 eligible articles, 267 (3.4%) of which met otolaryngology PS/QI inclusion criteria; 51 (19%) were dedicated to resident/fellow education and training. Simulation studies (39%) and performance/competency evaluation (23.5%) were the most common focus. Most projects involved general otolaryngology (47%), rhinology (18%), and otology (16%). Classification by the IOM included effective care (45%), safe/effective care (41%), and effective and efficient care (7.8%). Most research fell into the WHO category of "identifying solutions" (61%). Conclusion Nineteen percent of OHNS PS/QI articles are dedicated to education, the majority of which are simulation studies focused on effective care. Knowledge gaps for future research include facial plastics PS/QI and the WHO category of "studies translating evidence into safer care."
Benefits of simulation based training for neonatal resuscitation education: a systematic review.
Rakshasbhuvankar, A A; Patole, S K
2014-10-01
Simulation-based training (SBT) is being more frequently recommended for neonatal resuscitation education (NRE). It is important to assess whether SBT improves clinical outcomes, as neonatal resuscitation aims to improve survival without long-term neurodevelopmental impairment. We aimed to assess the evidence supporting benefits of SBT in NRE. A systematic review was conducted using the Cochrane methodology. PubMed, Embase, PsycInfo and Cochrane databases were searched. Related abstracts were scanned and full texts of the potentially relevant articles were studied. Randomised controlled trials (RCT) and quasi-experimental studies with controls (non-RCT) assessing SBT for NRE were eligible for inclusion in the review. Four small studies [three RCT (n=126) and one non-RCT (n=60)] evaluated SBT for NRE. Participants included medical students (one RCT and one non-RCT), residents (one RCT) and nursing staff (one RCT). Outcomes included performance in a simulation scenario, theoretical knowledge, and confidence in leading a resuscitation scenario. One RCT favoured simulation [improved resuscitation score (p=0.016), 2.31 more critical actions (p=0.017) and decreased time to achieve resuscitation steps (p<0.001)]. The remaining two RCTs and the non-RCT did not find any difference between SBT and alternative methods of instruction. None of the four studies reported clinical outcomes. Evidence regarding benefits of SBT for NRE is limited. There are no data on clinical outcomes following SBT for NRE. Large RCTs assessing clinically important outcomes are required before SBT can be recommended widely for NRE. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Sample design effects in landscape genetics
Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.
2012-01-01
An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), numbers of alleles per locus (5 and 10), numbers of individuals sampled (10-300), and generational times after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and numbers of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.
NASA Astrophysics Data System (ADS)
Durigon, Angelica; Lier, Quirijn de Jong van; Metselaar, Klaas
2016-10-01
To date, measuring plant transpiration at canopy scale is laborious, and numerical modelling can instead be used to estimate it at high time frequency. The model by Jacobs (1994) needs to be reparameterized when used to simulate transpiration of water-stressed plants. We compare the importance of model variables affecting simulated transpiration of water-stressed plants. A systematic literature review was performed to recover existing parameterizations to be tested in the model. Data from a field experiment with common bean under full and deficit irrigation were used to correlate estimations to forcing variables applying principal component analysis. New parameterizations resulted in a moderate reduction of prediction errors and in an increase in model performance. The A-gs model was sensitive to changes in the mesophyll conductance and leaf angle distribution parameterizations, allowing model improvement. Simulated transpiration could be separated into temporal components. Daily, afternoon-depression and long-term components for the fully irrigated treatment were more related to atmospheric forcing variables (specific humidity deficit between stomata and air, relative air humidity and canopy temperature). Daily and afternoon-depression components for the deficit-irrigated treatment were related to both atmospheric and soil dryness, and the long-term component was related to soil dryness.
in silico Surveillance: evaluating outbreak detection with simulation models
2013-01-01
Background Detecting outbreaks is a crucial task for public health officials, yet gaps remain in the systematic evaluation of outbreak detection protocols. The authors’ objectives were to design, implement, and test a flexible methodology for generating detailed synthetic surveillance data that provides realistic geographical and temporal clustering of cases, and to use it to evaluate outbreak detection protocols. Methods A detailed representation of the Boston area was constructed, based on data about individuals, locations, and activity patterns. Influenza-like illness (ILI) transmission was simulated, producing 100 years of in silico ILI data. Six different surveillance systems were designed and developed using cases gathered from the simulated disease data. Performance was measured by inserting test outbreaks into the surveillance streams and analyzing the likelihood and timeliness of detection. Results Detection of outbreaks varied from 21% to 95%. Increased coverage did not linearly improve detection probability for all surveillance systems. Relaxing the decision threshold for signaling outbreaks greatly increased false-positives, improved outbreak detection slightly, and led to earlier outbreak detection. Conclusions Geographical distribution can be more important than coverage level. Detailed simulations of infectious disease transmission can be configured to represent nearly any conceivable scenario. They are a powerful tool for evaluating the performance of surveillance systems and methods used for outbreak detection. PMID:23343523
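The threshold-relaxation trade-off measured above can be sketched with a toy signaling rule on a surveillance count stream. The baseline counts, the inserted outbreak, and the z-score rule below are illustrative assumptions, not the study's actual detection protocols.

```python
import statistics

# Toy outbreak signaling: flag days whose count exceeds a z-score threshold
# derived from a baseline period. All numbers here are invented.
def signals(counts, baseline, z=3.0):
    """Return indices of days whose count exceeds baseline mean + z * std."""
    mu = statistics.mean(baseline)
    sd = statistics.pstdev(baseline)          # population std of the baseline
    return [i for i, c in enumerate(counts) if c > mu + z * sd]

baseline = [10, 12, 9, 11, 10, 13, 10, 11]    # quiet period, no outbreak
stream = [11, 10, 12, 25, 30, 12, 11]         # inserted outbreak around days 3-4
alarms = signals(stream, baseline)            # -> [3, 4]
```

Lowering `z` relaxes the decision threshold: detection comes earlier but false positives on quiet days become more likely, which is the trade-off the study quantifies.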
Karacan, C. Özgen; Olea, Ricardo A.
2013-01-01
The systematic approach presented in this paper is the first in the literature in which history matching, training images (TIs) of gas in place (GIP), and filter simulations are used for degasification performance evaluation and for assessing GIP for mining safety. Results from this study showed that production history matching of coalbed methane wells to determine time-lapsed reservoir data could be used to compute spatial GIP and representative GIP TIs generated through Voronoi decomposition. Furthermore, filter simulations using point-wise data and TIs could be used to predict methane quantity in coal seams subjected to degasification. During the course of the study, it was shown that the material balance of gas produced by wellbores and the GIP reductions in coal seams predicted using filter simulations agreed closely, demonstrating the success of filter simulations for continuous variables in this case study. Quantitative results from filter simulations of GIP within the studied area showed that GIP was reduced from an initial ∼73 Bcf (median) to ∼46 Bcf (2011), a 37% decrease, varying spatially through degasification. It is forecast that there will be an additional ∼2 Bcf reduction in methane quantity between 2011 and 2015. This study showed that the applied methodology and techniques can be used to map GIP and its change within coal seams after degasification, which can further be used for ventilation design for methane control in coal mines.
Wayne—A Simulator for HST WFC3 IR Grism Spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varley, R.; Tsiaras, A.; Karpouzas, K., E-mail: r.varley@ucl.ac.uk
Wayne is an algorithm that simulates Hubble Space Telescope Wide Field Camera 3 (WFC3) grism spectroscopic frames, including sources of noise and systematics. It can simulate both staring and spatial scan modes, and observations such as the transit and the eclipse of an exoplanet. Unlike many other instrument simulators, the focus of Wayne is on creating frames with realistic systematics in order to test the effectiveness of different data analysis methods in a variety of different scenarios. This approach is critical for method validation and optimizing observing strategies. In this paper we describe the implementation of Wayne for WFC3 in the near-infrared channel with the G102 and G141 grisms. We compare the simulations to real data obtained for the exoplanet HD 209458b, to verify the accuracy of the simulation. The software is now available as open source at https://github.com/ucl-exoplanets/wayne.
Foray search: an effective systematic dispersal strategy in fragmented landscapes
L. Conradt; P.A. Zollner; T.J. Roper; C.D. Thomas
2003-01-01
In the absence of evidence to the contrary, population models generally assume that the dispersal trajectories of animals are random, but systematic dispersal could be more efficient at detecting new habitat and may therefore constitute a more realistic assumption. Here, we investigate, by means of simulations, the properties of a potentially widespread systematic...
Tominaga, Koji; Aherne, Julian; Watmough, Shaun A; Alveteg, Mattias; Cosby, Bernard J; Driscoll, Charles T; Posch, Maximilian; Pourmokhtarian, Afshin
2010-12-01
The performance and prediction uncertainty (owing to parameter and structural uncertainties) of four dynamic watershed acidification models (MAGIC, PnET-BGC, SAFE, and VSD) were assessed by systematically applying them to data from the Hubbard Brook Experimental Forest (HBEF), New Hampshire, where long-term records of precipitation and stream chemistry were available. In order to facilitate systematic evaluation, Monte Carlo simulation was used to randomly generate common model input data sets (n = 10,000) from parameter distributions; input data were subsequently translated among models to retain consistency. The model simulations were objectively calibrated against observed data (streamwater: 1963-2004, soil: 1983). The ensemble of calibrated models was used to assess future response of soil and stream chemistry to reduced sulfur deposition at the HBEF. Although both hindcast (1850-1962) and forecast (2005-2100) predictions were qualitatively similar across the four models, the temporal pattern of key indicators of acidification recovery (stream acid neutralizing capacity and soil base saturation) differed substantially. The range in predictions resulted from differences in model structure and their associated posterior parameter distributions. These differences can be accommodated by employing multiple models (ensemble analysis) but have implications for individual model applications.
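The common-input generation step described above can be sketched as random draws from parameter distributions. The parameter names, ranges, and uniform distributions below are invented for illustration; in the study, each draw would be translated into the native input format of every model (MAGIC, PnET-BGC, SAFE, VSD) to retain consistency.

```python
import random

# Hedged sketch: generate a common Monte Carlo ensemble of model inputs from
# per-parameter distributions. Names and ranges are illustrative only.
def sample_inputs(n, spec, seed=0):
    """spec maps parameter name -> (low, high); returns n uniform draws."""
    rng = random.Random(seed)                 # fixed seed for reproducibility
    return [{name: rng.uniform(lo, hi) for name, (lo, hi) in spec.items()}
            for _ in range(n)]

spec = {
    "soil_base_saturation": (0.05, 0.30),       # fraction, illustrative range
    "cation_exchange_capacity": (50.0, 150.0),  # meq/kg, illustrative range
}
ensemble = sample_inputs(10000, spec)           # n = 10,000 as in the study
```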
Dynamics of a flexible helical filament rotating in a viscous fluid near a rigid boundary
Jawed, M. K.; Reis, P. M.
2017-03-01
We study the effect of a no-slip rigid boundary on the dynamics of a flexible helical filament rotating in a viscous fluid, at low Reynolds number conditions (Stokes limit). This system is taken as a reduced model for the propulsion of uniflagellar bacteria, whose locomotion is known to be modified near solid boundaries. Specifically, we focus on how the propulsive force generated by the filament, as well as its buckling onset, are modified by the presence of a wall. We tackle this problem through numerical simulations that couple the elasticity of the filament, the hydrodynamic loading, and the wall effect. Each of these three ingredients is respectively modeled by the discrete elastic rods method (for a geometrically nonlinear description of the filament), Lighthill's slender body theory (for a nonlocal fluid force model), and the method of images (to emulate the boundary). The simulations are systematically validated by precision experiments on a rescaled macroscopic apparatus. We find that the propulsive force increases near the wall, while the critical rotation frequency for the onset of buckling usually decreases. A systematic parametric study is performed to quantify the dependence of the wall effects on the geometric parameters of the helical filament.
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Albert, A.; Allafort, A.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.;
2012-01-01
The Fermi Large Area Telescope (Fermi-LAT, hereafter LAT), the primary instrument on the Fermi Gamma-ray Space Telescope (Fermi) mission, is an imaging, wide field-of-view, high-energy gamma-ray telescope, covering the energy range from 20 MeV to more than 300 GeV. During the first years of the mission the LAT team has gained considerable insight into the in-flight performance of the instrument. Accordingly, we have updated the analysis used to reduce LAT data for public release as well as the Instrument Response Functions (IRFs), the description of the instrument performance provided for data analysis. In this paper we describe the effects that motivated these updates. Furthermore, we discuss how we originally derived IRFs from Monte Carlo simulations and later corrected those IRFs for discrepancies observed between flight and simulated data. We also give details of the validations performed using flight data and quantify the residual uncertainties in the IRFs. Finally, we describe techniques the LAT team has developed to propagate those uncertainties into estimates of the systematic errors on common measurements such as fluxes and spectra of astrophysical sources.
Improving Climate Projections Using "Intelligent" Ensembles
Baker, Noel C.; Taylor, Patrick C.
2015-01-01
Recent changes in the climate system have led to growing concern, especially in communities that are highly vulnerable to resource shortages and weather extremes. There is an urgent need for better climate information to develop solutions and strategies for adapting to a changing climate. Climate models provide excellent tools for studying the current state of climate and making future projections. However, these models are subject to biases created by structural uncertainties. Performance metrics, the systematic determination of model biases, succinctly quantify aspects of climate model behavior. Efforts to standardize climate model experiments and collect simulation data, such as the Coupled Model Intercomparison Project (CMIP), provide the means to directly compare and assess model performance. Performance metrics have been used to show that some models reproduce present-day climate better than others. Simulation data from multiple models are often used to add value to projections by creating a consensus projection from the model ensemble, in which each model is given an equal weight. It has been shown that the ensemble mean generally outperforms any single model. It is possible to use unequal weights to produce ensemble means, in which models are weighted based on performance (called "intelligent" ensembles). Can performance metrics be used to improve climate projections? Previous work introduced a framework for comparing the utility of model performance metrics, showing that the best metrics are related to the variance of top-of-atmosphere outgoing longwave radiation. These metrics improve present-day climate simulations of Earth's energy budget using the "intelligent" ensemble method. The current project identifies several approaches for testing whether performance metrics can be applied to future simulations to create "intelligent" ensemble-mean climate projections.
It is shown that certain performance metrics test key climate processes in the models, and that these metrics can be used to evaluate model quality in both current and future climate states. This information will be used to produce new consensus projections and provide communities with improved climate projections for urgent decision-making.
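A performance-weighted ("intelligent") ensemble mean of the kind described above can be sketched as follows. The toy projections, the error metrics, and the inverse-error weighting rule are illustrative assumptions, not the study's actual metric.

```python
import numpy as np

# Hedged sketch of an "intelligent" ensemble mean: each model's projection is
# weighted by the inverse of its performance error (lower error -> more weight).
def weighted_ensemble_mean(projections, errors):
    """projections: per-model values; errors: per-model error metric (lower is better)."""
    projections = np.asarray(projections, dtype=float)
    errors = np.asarray(errors, dtype=float)
    weights = 1.0 / errors            # simple inverse-error weighting (illustrative)
    weights /= weights.sum()          # normalize so the weights sum to 1
    return np.tensordot(weights, projections, axes=1)

# Three toy model projections of some climate variable, with toy skill metrics
proj = [2.1, 3.0, 2.4]
err = [0.5, 2.0, 1.0]
mean_equal = np.mean(proj)                       # conventional equal-weight mean
mean_smart = weighted_ensemble_mean(proj, err)   # pulled toward the best model
```

The weighted mean is pulled toward the low-error model's projection (2.1) relative to the equal-weight mean of 2.5, which is the intended effect of performance weighting.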
Boet, Sylvain; Bould, M Dylan; Fung, Lillia; Qosa, Haytham; Perrier, Laure; Tavares, Walter; Reeves, Scott; Tricco, Andrea C
2014-06-01
Simulation-based learning is increasingly used by healthcare professionals as a safe method to learn and practice non-technical skills, such as communication and leadership, required for effective crisis resource management (CRM). This systematic review was conducted to gain a better understanding of the impact of simulation-based CRM teaching on transfer of learning to the workplace and subsequent changes in patient outcomes. Studies on CRM, crisis management, crew resource management, teamwork, and simulation published up to September 2012 were searched in MEDLINE(®), EMBASE™, CINAHL, Cochrane Central Register of Controlled Trials, and ERIC. All studies that used simulation-based CRM teaching with outcomes measured at Kirkpatrick Level 3 (transfer of learning to the workplace) or 4 (patient outcome) were included. Studies measuring only learners' reactions or simple learning (Kirkpatrick Level 1 or 2, respectively) were excluded. Two authors independently reviewed all identified titles and abstracts for eligibility. Nine articles were identified as meeting the inclusion criteria. Four studies measured transfer of simulation-based CRM learning into the clinical setting (Kirkpatrick Level 3). In three of these studies, simulation-enhanced CRM training was found significantly more effective than no intervention or didactic teaching. Five studies measured patient outcomes (Kirkpatrick Level 4). Only one of these studies found that simulation-based CRM training made a clearly significant impact on patient mortality. Based on a small number of studies, this systematic review found that CRM skills learned at the simulation centre are transferred to clinical settings, and the acquired CRM skills may translate to improved patient outcomes, including a decrease in mortality.
Impact of bias-corrected reanalysis-derived lateral boundary conditions on WRF simulations
Moalafhi, Ditiro Benson; Sharma, Ashish; Evans, Jason Peter; Mehrotra, Rajeshwar; Rocheta, Eytan
2017-08-01
Lateral and lower boundary conditions derived from a suitable global reanalysis data set form the basis for deriving a dynamically consistent finer resolution downscaled product for climate and hydrological assessment studies. A problem with this, however, is that systematic biases have been noted to be present in the global reanalysis data sets that form these boundaries, biases which can be carried into the downscaled simulations thereby reducing their accuracy or efficacy. In this work, three Weather Research and Forecasting (WRF) model downscaling experiments are undertaken to investigate the impact of bias correcting European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim (ERA-I) reanalysis atmospheric temperature and relative humidity using Atmospheric Infrared Sounder (AIRS) satellite data. The downscaling is performed over a domain centered over southern Africa between the years 2003 and 2012. The sample mean and the mean as well as standard deviation at each grid cell for each variable are used for bias correction. The resultant WRF simulations of near-surface temperature and precipitation are evaluated seasonally and annually against global gridded observational data sets and compared with the ERA-I reanalysis driving field. The study reveals inconsistencies between the impact of the bias correction prior to downscaling and the resultant model simulations after downscaling. Mean and standard deviation bias-corrected WRF simulations are, however, found to be marginally better than mean-only bias-corrected WRF simulations and raw ERA-I reanalysis-driven WRF simulations. Performance, however, differs when assessing different attributes of the downscaled field. This raises questions about the efficacy of the correction procedures adopted.
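The mean-and-standard-deviation correction described above can be sketched per grid cell as a simple rescaling of the reanalysis series toward the reference statistics. The values below are illustrative, not actual ERA-I or AIRS data.

```python
import numpy as np

# Minimal sketch of mean-and-std bias correction at one grid cell: rescale the
# model series so its mean and standard deviation match a reference climatology.
def bias_correct(model, ref_mean, ref_std):
    """Return the model series with mean/std forced to the reference values."""
    m_mean, m_std = model.mean(), model.std()
    return (model - m_mean) / m_std * ref_std + ref_mean

series = np.array([1.0, 2.0, 3.0, 4.0])   # raw reanalysis values (illustrative)
corrected = bias_correct(series, ref_mean=10.0, ref_std=2.0)
# corrected now has mean 10.0 and standard deviation 2.0
```

A mean-only correction would drop the `ref_std / m_std` rescaling and shift the series by `ref_mean - m_mean`; the study finds the mean-and-std variant marginally better.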
Larsen, Christian Rifbjerg; Oestergaard, Jeanett; Ottesen, Bent S; Soerensen, Jette Led
2012-09-01
Virtual reality (VR) simulators for surgical training might possess the properties needed for basic training in laparoscopy. Evidence for training efficacy of VR has been investigated by research of varying quality over the past decade. To review randomized controlled trials regarding VR training efficacy compared with traditional or no training, with outcome measured as surgical performance in humans or animals. In June 2011 Medline, Embase, the Cochrane Central Register of Controlled Trials, Web of Science and Google Scholar were searched using the following medical subject headings (MeSH) terms: Laparoscopy/standards, Computing methodologies, Programmed instruction, Surgical procedures, Operative, and the following free-text terms: Virtual real* OR simulat* AND Laparoscop* OR train*. All randomized controlled trials investigating the effect of VR training in laparoscopy, with outcome measured as surgical performance, were considered. A total of 98 studies were screened, 26 selected and 12 included, with a total of 241 participants. Operation time was reduced by 17-50% by VR training, depending on simulator type and training principles. Proficiency-based training appeared superior to training based on fixed time or fixed numbers of repetition. Simulators offering training for complete operative procedures proved more efficient than simulators offering only basic skills training. Skills in laparoscopic surgery can be increased by proficiency-based procedural VR simulator training. There is substantial evidence (grade IA-IIB) to support the use of VR simulators in laparoscopic training. © 2012 The Authors. Acta Obstetricia et Gynecologica Scandinavica © 2012 Nordic Federation of Societies of Obstetrics and Gynecology.
In Situ Simulation in Continuing Education for the Health Care Professions: A Systematic Review
ERIC Educational Resources Information Center
Rosen, Michael A.; Hunt, Elizabeth A.; Pronovost, Peter J.; Federowicz, Molly A.; Weaver, Sallie J.
2012-01-01
Introduction: Education in the health sciences increasingly relies on simulation-based training strategies to provide safe, structured, engaging, and effective practice opportunities. While this frequently occurs within a simulation center, in situ simulations occur within an actual clinical environment. This blending of learning and work…
Bell, C.; Li, Y.; Lopez, E.; Hogue, T. S.
2017-12-01
Decision support tools that quantitatively estimate the cost and performance of infrastructure alternatives are valuable for urban planners. Such a tool is needed to aid in planning stormwater projects to meet diverse goals such as the regulation of stormwater runoff and its pollutants, minimization of economic costs, and maximization of environmental and social benefits in the communities served by the infrastructure. This work gives a brief overview of an integrated decision support tool, called i-DST, that is currently being developed to serve this need. This presentation focuses on the development of a default database for the i-DST that parameterizes water quality treatment efficiency of stormwater best management practices (BMPs) by region. Parameterizing the i-DST by region will allow the tool to perform accurate simulations in all parts of the United States. A national dataset of BMP performance is analyzed to determine which of a series of candidate regionalizations explains the most variance in the national dataset. The data used in the regionalization analysis comes from the International Stormwater BMP Database and data gleaned from an ongoing systematic review of peer-reviewed and gray literature. In addition to identifying a regionalization scheme for water quality performance parameters in the i-DST, our review process will also provide example methods and protocols for systematic reviews in the field of Earth Science.
Robust, open-source removal of systematics in Kepler data
Aigrain, S.; Parviainen, H.; Roberts, S.; Reece, S.; Evans, T.
2017-10-01
We present ARC2 (Astrophysically Robust Correction 2), an open-source Python-based systematics-correction pipeline for long-cadence light curves from the Kepler prime mission. The ARC2 pipeline identifies and corrects any isolated discontinuities in the light curves and then removes trends common to many light curves. These trends are modelled using the publicly available co-trending basis vectors, within an (approximate) Bayesian framework with 'shrinkage' priors to minimize the risk of overfitting and the injection of any additional noise into the corrected light curves, while keeping any astrophysical signals intact. We show that the ARC2 pipeline's performance matches that of the standard Kepler PDC-MAP data products using standard noise metrics, and demonstrate its ability to preserve astrophysical signals using injection tests with simulated stellar rotation and planetary transit signals. Although it is not identical, the ARC2 pipeline can thus be used as an open-source alternative to PDC-MAP, whenever the ability to model the impact of the systematics removal process on other kinds of signal is important.
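The trend removal with shrinkage described above can be sketched as a ridge regression of each light curve onto basis vectors, with the residual taken as the corrected curve. The synthetic trend, signal, and single basis vector below are illustrative assumptions; the actual ARC2 pipeline uses the public Kepler co-trending basis vectors within an approximate Bayesian framework.

```python
import numpy as np

# Hedged sketch of systematics removal: fit flux ~ basis @ w with an L2
# ("shrinkage") penalty, then subtract the fit. Data here are synthetic.
def detrend(flux, basis, lam=1.0):
    """Ridge-regress flux onto basis vectors; return flux minus the fitted trend."""
    B = np.atleast_2d(basis)                  # shape (n_vectors, n_cadences)
    A = B @ B.T + lam * np.eye(B.shape[0])    # regularized normal equations
    w = np.linalg.solve(A, B @ flux)          # shrunk coefficients
    return flux - B.T @ w

t = np.linspace(0.0, 1.0, 200)
trend = 0.5 * t                               # common instrumental drift
signal = 1e-3 * np.sin(20.0 * t)              # small astrophysical signal to keep
flux = signal + trend
clean = detrend(flux, basis=np.vstack([t]), lam=1e-6)
```

With a weak penalty the linear drift is removed almost entirely while the oscillatory signal survives; increasing `lam` shrinks the fit and trades residual trend for reduced risk of absorbing (overfitting) the astrophysical signal.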
Cooperative Robots to Observe Moving Targets: Review.
Khan, Asif; Rinner, Bernhard; Cavallaro, Andrea
2018-01-01
The deployment of multiple robots for achieving a common goal helps to improve the performance, efficiency, and/or robustness in a variety of tasks. In particular, the observation of moving targets is an important multirobot application that still exhibits numerous open challenges, including the effective coordination of the robots. This paper reviews control techniques for cooperative mobile robots monitoring multiple targets. The simultaneous movement of robots and targets makes this problem particularly interesting, and our review systematically addresses this cooperative multirobot problem for the first time. We classify and critically discuss the control techniques: cooperative multirobot observation of multiple moving targets; cooperative search, acquisition, and track; cooperative tracking; and multirobot pursuit evasion. We also identify the five major elements that characterize this problem, namely, the coordination method, the environment, the target, the robot and its sensor(s). These elements are used to systematically analyze the control techniques. The majority of the studied work is based on simulation and laboratory studies, which may not accurately reflect real-world operational conditions. Importantly, while our systematic analysis is focused on multitarget observation, our proposed classification is useful also for related multirobot applications.
Gomes, Lara Elena; Loss, Jefferson Fagundes
2015-01-01
The understanding of swimming propulsion is a key factor in the improvement of performance in this sport. Propulsive forces have been quantified under steady conditions since the 1970s, but actual swimming involves unsteady conditions. Thus, the purpose of the present article was to review the effects of unsteady conditions on swimming propulsion based on studies that have compared steady and unsteady conditions while exploring their methods, their limitations and their results, as well as encouraging new studies based on the findings of this systematic review. A multiple database search was performed, and only those studies that met all eligibility criteria were included. Six studies that compared steady and unsteady conditions using physical experiments or numerical simulations were selected. The selected studies verified the effects of one or more factors that characterise a condition as unsteady on the propulsive forces. Consequently, much research is necessary to understand the effect of each individual variable that characterises a condition as unsteady on swimming propulsion, as well as the effects of these variables as a whole on swimming propulsion.
Rutherford-Hemming, Tonya; Nye, Carla; Coram, Cathy
2016-02-01
The National Organization for Nurse Practitioner Faculty (NONPF) does not allow simulation to be used in lieu of traditional clinical hours. The NONPF cites a lack of empirical evidence related to learning outcomes with simulation as rationale for its stance. The purpose of this systematic review was to search, extract, appraise, and synthesize research related to the use of simulation in Nurse Practitioner (NP) education in order to answer the two following questions: 1) What research related to simulation in NP education has emerged in the literature between 2010 and April 2015?, and 2) Of the research studies that have emerged, what level of Kirkpatrick's Training Evaluation Model (1994) is evaluated? This review was reported in line with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). A literature search was completed in PubMed and CINAHL using a combination of medical subject headings, or Mesh terms, as well as keywords to retrieve non-indexed citations. The inclusion criteria for this review were broad in order to disseminate information on future research needed. The review considered studies related to NP education that included any form of simulation intervention, e.g. role-playing and standardized patients. The review considered studies that described original research, but no other design restrictions were imposed. The review was limited to studies published in the English language. The database search strategy yielded 198 citations. These results were narrowed down to 15 studies based on identified inclusion criteria. There is a lack of empirical evidence in the literature to support using simulation in lieu of direct patient care clinical hours in NP education. The evidence in this systematic review affirms NONPF's statement. Five years after the inception of NONPF's position statement, research to support learning outcomes with simulation in nurse practitioner education remains lacking. 
There is a need to produce rigorous scientific studies in the future in order to provide quantitative support to allow simulation to be counted as clinical hours in NP programs. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hu, Suxing; Collins, Lee A.; Goncharov, V. N.; ...
2016-05-26
Using first-principles (FP) methods, we have performed ab initio computations of the equation of state (EOS), thermal conductivity, and opacity of deuterium-tritium (DT) over a wide range of densities and temperatures for inertial confinement fusion (ICF) applications. These systematic investigations have recently been expanded to accurately compute the plasma properties of CH ablators under extreme conditions. In particular, the first-principles EOS and thermal-conductivity tables of CH are self-consistently built from such FP calculations, which are benchmarked by experimental measurements. When compared with the traditional models used for these plasma properties in hydrocodes, significant differences have been identified in the warm dense plasma regime. When these FP-calculated properties of DT and CH were used in our hydrodynamic simulations of ICF implosions, we found that the target performance in terms of neutron yield and energy gain can vary by a factor of 2 to 3, relative to traditional model simulations.
Simulations of hypervelocity impacts for asteroid deflection studies
NASA Astrophysics Data System (ADS)
Heberling, T.; Ferguson, J. M.; Gisler, G. R.; Plesko, C. S.; Weaver, R.
2016-12-01
The possibility of kinetic-impact deflection of threatening near-earth asteroids will be tested for the first time in the proposed AIDA (Asteroid Impact Deflection Assessment) mission, involving two independent spacecraft, NASA's DART (Double Asteroid Redirection Test) and ESA's AIM (Asteroid Impact Mission). The impact of the DART spacecraft onto the secondary of the binary asteroid 65803 Didymos, at a speed of 5 to 7 km/s, is expected to alter the mutual orbit by an observable amount. The velocity imparted to the secondary depends on the geometry and dynamics of the impact, and especially on the momentum enhancement factor, conventionally called beta. We use the Los Alamos hydrocodes Rage and Pagosa to estimate beta in laboratory-scale benchmark experiments and in the large-scale asteroid deflection test. Simulations are performed in two and three dimensions, using a variety of equations of state and strength models for both the lab-scale and large-scale cases. This work is being performed as part of a systematic benchmarking study for the AIDA mission that includes other hydrocodes.
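The relation behind the beta factor can be sketched with the standard momentum-balance estimate (a textbook approximation, not the hydrocode calculation; the masses and speed below are illustrative assumptions, not mission values):

```python
def deflection_delta_v(beta, m_impactor, v_impactor, m_target):
    """Velocity change imparted to the target by a kinetic impactor.

    beta = 1 means only the impactor's own momentum is transferred;
    beta > 1 accounts for extra momentum carried away by crater ejecta.
    """
    return beta * m_impactor * v_impactor / m_target

# Illustrative numbers (assumed, not mission values): a 500 kg spacecraft
# hitting a 5e9 kg body at 6 km/s with beta = 2.
dv = deflection_delta_v(2.0, 500.0, 6000.0, 5e9)
print(f"delta-v = {dv * 1000:.3f} mm/s")
```

Even a millimetre-per-second change of this order accumulates into an observable orbital shift over time, which is why beta is the key quantity the hydrocode studies aim to constrain.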
High-order-harmonic generation from H2+ molecular ions near plasmon-enhanced laser fields
NASA Astrophysics Data System (ADS)
Yavuz, I.; Tikman, Y.; Altun, Z.
2015-08-01
Simulations of plasmon-enhanced high-order-harmonic generation are performed for an H2+ molecular cation near metallic nanostructures. We employ the numerical solution of the time-dependent Schrödinger equation in reduced coordinates. We assume that the main axis of H2+ is aligned perfectly with the polarization direction of the plasmon-enhanced field. We perform systematic calculations on plasmon-enhanced harmonic generation based on an infinite-mass approximation, i.e., freezing nuclear vibrations. Our simulations show that molecular high-order-harmonic generation from plasmon-enhanced laser fields is possible. We observe the dispersion of a plateau of harmonics when the laser field is plasmon enhanced. We find that the maximum kinetic energy of the returning electron follows 4Up. We also find that when nuclear vibrations are enabled, the efficiency of the harmonics is greatly enhanced relative to that of static nuclei. However, the maximum kinetic energy 4Up is largely maintained.
Investigation for Molecular Attraction Impact Between Contacting Surfaces in Micro-Gears
NASA Astrophysics Data System (ADS)
Yang, Ping; Li, Xialong; Zhao, Yanfang; Yang, Haiying; Wang, Shuting; Yang, Jianming
2013-10-01
The aim of this research work is to provide a systematic method to evaluate the impact of molecular attraction between contacting surfaces in a micro-gear train. This method is established by integrating involute profile analysis and molecular dynamics simulation. A mathematical computation of the micro-gear involute is presented based on geometrical properties, the Taylor expansion and the Hamaker assumption. In the meantime, the Morse potential function and the cut-off radius are introduced with a molecular dynamics simulation. A hybrid computational method for the van der Waals force between the contacting faces in a micro-gear train is thus developed. An example is illustrated to show the performance of this method. The results show that the change of the van der Waals force in a micro-gear train has a nonlinear characteristic as parameters such as the gear module and tooth number change. The procedure implies the potential feasibility of controlling the van der Waals force by adjusting the manufacturing parameters in gear train design.
NASA Astrophysics Data System (ADS)
Wan, Kaidi; Xia, Jun; Vervisch, Luc; Liu, Yingzu; Wang, Zhihua; Cen, Kefa
2018-03-01
The numerical modelling of alkali metal reacting dynamics in turbulent pulverised-coal combustion is discussed using tabulated sodium chemistry in large eddy simulation (LES). A lookup table is constructed from a detailed sodium chemistry mechanism including five sodium species, i.e. Na, NaO, NaO2, NaOH and Na2O2H2, and 24 elementary reactions. This sodium chemistry table has four coordinates: the equivalence ratio, the mass fraction of the sodium element, the gas-phase temperature, and a progress variable. The table is first validated against the detailed sodium chemistry mechanism by zero-dimensional simulations. Then, LES of a turbulent pulverised-coal jet flame is performed and major coal-flame parameters are compared against experiments. The chemical percolation devolatilisation (CPD) model and the partially stirred reactor (PaSR) model are employed to predict coal pyrolysis and gas-phase combustion, respectively. The response of the five sodium species in the pulverised-coal jet flame is subsequently examined. Finally, a systematic global sensitivity analysis of the sodium lookup table is performed and the accuracy of the proposed tabulated sodium chemistry approach is assessed.
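Tabulated-chemistry approaches of this kind retrieve species data by interpolating in the precomputed table rather than integrating the mechanism at run time. A minimal sketch of such a lookup, reduced to two of the four coordinates for brevity (the table values and coordinate choices are invented for illustration):

```python
from bisect import bisect_right

def bilinear_lookup(table, xs, ys, x, y):
    """Bilinear interpolation in a 2-D lookup table; xs and ys are the
    ascending coordinate grids, table[i][j] the tabulated value at
    (xs[i], ys[j]). Queries outside the grid are clamped to the last cell.
    A real chemistry table would interpolate in all four coordinates."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

# Toy table: a species mass fraction tabulated on (temperature, progress
# variable); both the grid and the entries are made-up illustration values.
xs, ys = [1000.0, 2000.0], [0.0, 1.0]
table = [[0.0, 0.2], [0.4, 1.0]]
print(bilinear_lookup(table, xs, ys, 1500.0, 0.5))  # cell midpoint
```

At the cell midpoint the result is simply the average of the four corner entries, which is a quick sanity check for any such interpolation routine.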
Kumarasabapathy, N.; Manoharan, P. S.
2015-01-01
This paper proposes a fuzzy logic based new control scheme for the Unified Power Quality Conditioner (UPQC) to minimize voltage sag and total harmonic distortion in the distribution system and consequently improve power quality. The UPQC is a recent power electronic module which guarantees better power quality mitigation as it has both series-active and shunt-active power filters (APFs). The fuzzy logic controller has recently attracted a great deal of attention and offers conceptual simplicity in tackling complex systems with vagueness and ambiguity. In this research, the fuzzy logic controller is utilized for the generation of the reference signal controlling the UPQC. To enable this, a systematic approach for creating the fuzzy membership functions is carried out by using an ant colony optimization technique for optimal fuzzy logic control. An exhaustive simulation study using MATLAB/Simulink is carried out to investigate and demonstrate the performance of the proposed fuzzy logic controller, and the simulation results are compared with a PI controller in terms of its performance in improving power quality by minimizing voltage sag and total harmonic distortion. PMID:26504895
Prediction of rarefied micro-nozzle flows using the SPARTA library
NASA Astrophysics Data System (ADS)
Deschenes, Timothy R.; Grot, Jonathan
2016-11-01
The accurate numerical prediction of gas flows within micro-nozzles can help evaluate the performance and enable the design of optimal configurations for micro-propulsion systems. Viscous effects within the large boundary layers can have a strong impact on the nozzle performance. Furthermore, the variation in collision length scales from continuum to rarefied precludes the use of continuum-based computational fluid dynamics. In this paper, we describe the application of a massively parallel direct simulation Monte Carlo (DSMC) library to predict the steady-state and transient flow through a micro-nozzle. The nozzle's geometric configuration is described in a highly flexible manner to allow for the modification of the geometry in a systematic fashion. The transient simulation highlights a strong shock structure that forms within the converging portion of the nozzle when the expanded gas interacts with the nozzle walls. This structure has a strong impact on the buildup of the gas in the nozzle and affects the boundary layer thickness beyond the throat in the diverging section of the nozzle. Future work will look to examine the transient thrust and integrate this simulation capability into a web-based rarefied gas dynamics prediction software, which is currently under development.
A phenomenological continuum model for force-driven nano-channel liquid flows
NASA Astrophysics Data System (ADS)
Ghorbanian, Jafar; Celebi, Alper T.; Beskok, Ali
2016-11-01
A phenomenological continuum model is developed using systematic molecular dynamics (MD) simulations of force-driven liquid argon flows confined in gold nano-channels at a fixed thermodynamic state. Well known density layering near the walls leads to the definition of an effective channel height and a density deficit parameter. While the former defines the slip-plane, the latter parameter relates channel averaged density with the desired thermodynamic state value. Definitions of these new parameters require a single MD simulation performed for a specific liquid-solid pair at the desired thermodynamic state and used for calibration of model parameters. Combined with our observations of constant slip-length and kinematic viscosity, the model accurately predicts the velocity distribution and volumetric and mass flow rates for force-driven liquid flows in nano-channels of different heights. The model is verified for liquid argon flow at distinct thermodynamic states and using various argon-gold interaction strengths. Further verification is performed for water flow in silica and gold nano-channels, exhibiting slip lengths of 1.2 nm and 15.5 nm, respectively. Excellent agreements between the model and the MD simulations are reported for channel heights as small as 3 nm for various liquid-solid pairs.
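The role of the slip length and channel height in such models can be illustrated with the classical slip-corrected Poiseuille result for a planar channel (a generic continuum formula, not the paper's calibrated model; the forcing, viscosity, and geometry values below are assumed):

```python
def force_driven_flow_rate(f, h, nu, L_s):
    """Volumetric flow rate per unit width for force-driven flow in a
    planar channel of height h with equal slip length L_s on both walls.

    f: body force per unit mass, nu: kinematic viscosity.
    Reduces to the classical no-slip Poiseuille result when L_s = 0.
    """
    return f * h**3 / (12.0 * nu) * (1.0 + 6.0 * L_s / h)

# Slip strongly enhances nano-channel transport: a 3 nm channel with a
# 1.2 nm slip length (values of the order discussed for water in silica)
# flows 3.4x faster than the no-slip prediction, independent of f and nu.
h, L_s = 3e-9, 1.2e-9
enhancement = (force_driven_flow_rate(1.0, h, 1.0, L_s)
               / force_driven_flow_rate(1.0, h, 1.0, 0.0))
print(f"enhancement = {enhancement:.2f}")
```

The 1 + 6·L_s/h enhancement factor makes clear why slip effects that are negligible in macroscopic channels dominate when h shrinks to a few nanometres.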
Perspectives on continuum flow models for force-driven nano-channel liquid flows
NASA Astrophysics Data System (ADS)
Beskok, Ali; Ghorbanian, Jafar; Celebi, Alper
2017-11-01
A phenomenological continuum model is developed using systematic molecular dynamics (MD) simulations of force-driven liquid argon flows confined in gold nano-channels at a fixed thermodynamic state. Well known density layering near the walls leads to the definition of an effective channel height and a density deficit parameter. While the former defines the slip-plane, the latter parameter relates channel averaged density with the desired thermodynamic state value. Definitions of these new parameters require a single MD simulation performed for a specific liquid-solid pair at the desired thermodynamic state and used for calibration of model parameters. Combined with our observations of constant slip-length and kinematic viscosity, the model accurately predicts the velocity distribution and volumetric and mass flow rates for force-driven liquid flows in nano-channels of different heights. The model is verified for liquid argon flow at distinct thermodynamic states and using various argon-gold interaction strengths. Further verification is performed for water flow in silica and gold nano-channels, exhibiting slip lengths of 1.2 nm and 15.5 nm, respectively. Excellent agreements between the model and the MD simulations are reported for channel heights as small as 3 nm for various liquid-solid pairs.
Multibody simulation of vehicles equipped with an automatic transmission
NASA Astrophysics Data System (ADS)
Olivier, B.; Kouroussis, G.
2016-09-01
Nowadays automotive vehicles remain one of the most used modes of transportation. Furthermore, automatic transmissions are increasingly used to provide better driving comfort and a potential optimization of engine performance (by placing the gear shifts at specific engine and vehicle speeds). This paper presents an effective modeling of the vehicle using the multibody methodology (numerically computed under EasyDyn, an open source and in-house library dedicated to multibody simulations). However, the transmission part of the vehicle is described by the usual equations of motion computed using a systematic matrix approach: del Castillo's methodology for planetary gear trains. By coupling the analytic equations of the transmission and the equations computed by the multibody methodology, the performance of any vehicle can be obtained if the characteristics of each element in the vehicle are known. The multibody methodology offers the possibility of extending the vehicle modeling from 1D motion to 3D motion by taking into account the rotations and implementing tire models. The modeling presented in this paper remains very efficient and provides an easy and quick vehicle simulation tool which could be used to calibrate the automatic transmission.
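The planetary-train kinematics mentioned above obey the Willis relation, which can be sketched as follows (the generic textbook form, not del Castillo's full matrix formulation; the tooth counts and speeds are illustrative):

```python
def willis_sun_speed(omega_ring, omega_carrier, z_sun, z_ring):
    """Willis relation for a simple planetary train:
        (w_sun - w_carrier) / (w_ring - w_carrier) = -z_ring / z_sun,
    solved here for the sun-gear speed given ring and carrier speeds
    and the sun/ring tooth counts."""
    return omega_carrier - (z_ring / z_sun) * (omega_ring - omega_carrier)

# Fixed ring (w_ring = 0), carrier driven at 100 rad/s, 30-tooth sun and
# 90-tooth ring: the sun turns at 400 rad/s, i.e. a 4:1 step-up ratio.
print(willis_sun_speed(0.0, 100.0, 30, 90))
```

Each gear set in an automatic transmission contributes one such kinematic constraint, which is why a matrix formulation like del Castillo's scales naturally to multi-stage trains.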
MacKinnon, Ralph; Humphries, Christopher
2015-01-01
Background: Technology-enhanced simulation is well-established in healthcare teaching curricula, including those regarding wilderness medicine. Compellingly, the evidence base for the value of this educational modality to improve learner competencies and patient outcomes is increasing. Aims: The aim was to systematically review the characteristics of technology-enhanced simulation presented in the wilderness medicine literature to date. The secondary aim was to explore how this technology has been used and whether its use has been associated with improved learner or patient outcomes. Methods: EMBASE and MEDLINE were systematically searched from 1946 to 2014 for articles on the provision of technology-enhanced simulation to teach wilderness medicine. Working independently, the team evaluated the information on the criteria of learners, setting, instructional design, content, and outcomes. Results: From a pool of 37 articles, 11 publications were eligible for systematic review. The majority of learners in the included publications were medical students, settings included both indoors and outdoors, and the main clinical content focus was initial trauma management, with some including leadership skills. The most prevalent instructional design components were clinical variation and cognitive interactivity, with learner satisfaction as the main outcome. Conclusions: The results confirm that the current provision of wilderness medicine teaching utilizing technology-enhanced simulation is aligned with instructional design characteristics that have been used to achieve effective learning. Future research should aim to demonstrate the translation of learning into the clinical field to produce improved learner outcomes and improved patient outcomes. PMID:26824012
Arnold, Matthias
2017-12-02
The economic evaluation of stratified breast cancer screening is gaining momentum, but also produces very diverse results. Systematic reviews so far have focused on modeling techniques and epidemiologic assumptions. However, cost and utility parameters have received only little attention. This systematic review assesses simulation models for stratified breast cancer screening based on their cost and utility parameters in each phase of breast cancer screening and care. A literature review was conducted to compare economic evaluations with simulation models of personalized breast cancer screening. Study quality was assessed using reporting guidelines. Cost and utility inputs were extracted, standardized and structured using a care delivery framework. Studies were then clustered according to their study aim and parameters were compared within the clusters. Eighteen studies were identified within three study clusters. Reporting quality was very diverse in all three clusters. Only two studies in cluster 1, four studies in cluster 2 and one study in cluster 3 scored high in the quality appraisal. In addition to the quality appraisal, this review assessed whether the simulation models were consistent in integrating all relevant phases of care, whether utility parameters were consistent and methodologically sound, and whether costs were compatible and consistent in the actual parameters used for screening, diagnostic work-up and treatment. Of 18 studies, only three did not show signs of potential bias. This systematic review shows that a closer look into the cost and utility parameters can help to identify potential bias. Future simulation models should focus on integrating all relevant phases of care, using methodologically sound utility parameters and avoiding inconsistent cost parameters.
Simulation of the West African Monsoon using the MIT Regional Climate Model
NASA Astrophysics Data System (ADS)
Im, Eun-Soon; Gianotti, Rebecca L.; Eltahir, Elfatih A. B.
2013-04-01
We test the performance of the MIT Regional Climate Model (MRCM) in simulating the West African Monsoon. MRCM introduces several improvements over Regional Climate Model version 3 (RegCM3), including coupling of the Integrated Biosphere Simulator (IBIS) land surface scheme, a new albedo assignment method, a new convective cloud and rainfall auto-conversion scheme, and a modified boundary layer height and cloud scheme. Using MRCM, we carried out a series of experiments implementing two different land surface schemes (IBIS and BATS) and three convection schemes (Grell with the Fritsch-Chappell closure, standard Emanuel, and modified Emanuel that includes the new convective cloud scheme). Our analysis primarily focused on comparing the precipitation characteristics, surface energy balance and large scale circulations against various observations. We document a significant sensitivity of the West African monsoon simulation to the choices of the land surface and convection schemes. In spite of several deficiencies, the simulation with the combination of IBIS and modified Emanuel schemes shows the best performance, reflected in a marked improvement of precipitation in terms of spatial distribution and monsoon features. In particular, the coupling of IBIS leads to representations of the surface energy balance and partitioning that are consistent with observations. Therefore, the major components of the surface energy budget (including radiation fluxes) in the IBIS simulations are in better agreement with observation than those from our BATS simulation, or from previous similar studies (e.g. Steiner et al., 2009), both qualitatively and quantitatively. The IBIS simulations also reasonably reproduce the dynamical structure of the vertically stratified atmospheric circulation with three major components: westerly monsoon flow, African Easterly Jet (AEJ), and Tropical Easterly Jet (TEJ). 
In addition, since the modified Emanuel scheme tends to reduce the precipitation amount, it improves the precipitation over regions suffering from systematic wet bias.
Performance of the ATLAS muon trigger in pp collisions at √s = 8 TeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.
2015-03-13
The performance of the ATLAS muon trigger system is evaluated with proton–proton collision data collected in 2012 at the Large Hadron Collider at a centre-of-mass energy of 8 TeV. It is primarily evaluated using events containing a pair of muons from the decay of Z bosons. The efficiency of the single-muon trigger is measured for muons with transverse momentum 25 < pT < 100 GeV, with a statistical uncertainty of less than 0.01% and a systematic uncertainty of 0.6%. The pT range for efficiency determination is extended by using muons from decays of J/ψ mesons, W bosons, and top quarks. The muon trigger shows highly uniform and stable performance, and the performance is compared to the prediction of a detailed simulation.
Parametric modeling studies of turbulent non-premixed jet flames with thin reaction zones
NASA Astrophysics Data System (ADS)
Wang, Haifeng
2013-11-01
The Sydney piloted jet flame series (Flames L, B, and M) feature thinner reaction zones and hence impose greater challenges to modeling than the Sandia piloted jet flames (Flames D, E, and F). Recently, the Sydney flames received renewed interest due to these challenges. Several new modeling efforts have emerged. However, no systematic parametric modeling studies have been reported for the Sydney flames. A large set of modeling computations of the Sydney flames is presented here by using the coupled large eddy simulation (LES)/probability density function (PDF) method. Parametric studies are performed to gain insight into the model performance, its sensitivity and the effect of numerics.
NASA Astrophysics Data System (ADS)
Sánchez, M.; Oldenhof, M.; Freitez, J. A.; Mundim, K. C.; Ruette, F.
A systematic improvement of parametric quantum methods (PQM) is performed by considering: (a) a new application of the parameterization procedure to PQMs and (b) novel parametric functionals based on properties of elementary parametric functionals (EPF) [Ruette et al., Int J Quantum Chem 2008, 108, 1831]. Parameterization was carried out by using the simplified generalized simulated annealing (SGSA) method in the CATIVIC program. This code has been parallelized, and comparison with MOPAC2007 (PM6) and MINDO/SR was performed for a set of molecules with C-C, C-H, and H-H bonds. Results showed better accuracy than MINDO/SR and MOPAC2007 for a selected trial set of molecules.
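The parameterization idea, minimizing an error functional over parameter space by annealing, can be sketched with a plain simulated annealing loop (a generic sketch; the SGSA variant used in CATIVIC has its own visiting distribution and cooling schedule, which are not reproduced here, and the toy cost function is an assumption):

```python
import math
import random

def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.99,
                        iters=2000, seed=0):
    """Minimal simulated annealing minimiser: random Gaussian moves,
    downhill moves always accepted, uphill moves accepted with a
    Boltzmann probability that shrinks as the temperature cools."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy "parameterization": fit two parameters to minimise a quadratic error
# surface with its minimum at (1, -2).
best, err = simulated_annealing(
    lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, [0.0, 0.0])
print(best, err)
```

In an actual PQM parameterization the cost would be the deviation of computed molecular properties from reference data over the trial molecule set, and the parameter vector would hold the semiempirical functional parameters.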
Hunnicutt, Jacob N; Ulbricht, Christine M; Chrysanthopoulou, Stavroula A; Lapane, Kate L
2016-12-01
We systematically reviewed pharmacoepidemiologic and comparative effectiveness studies that use probabilistic bias analysis to quantify the effects of systematic error, including confounding, misclassification, and selection bias, on study results. We found articles published between 2010 and October 2015 through a citation search using Web of Science and Google Scholar and a keyword search using PubMed and Scopus. Eligibility of studies was assessed by one reviewer. Three reviewers independently abstracted data from eligible studies. Fifteen studies used probabilistic bias analysis and were eligible for data abstraction: nine simulated an unmeasured confounder and six simulated misclassification. The majority of studies simulating an unmeasured confounder did not specify the range of plausible estimates for the bias parameters. Studies simulating misclassification were in general clearer when reporting the plausible distribution of bias parameters. Regardless of the bias simulated, the probability distributions assigned to bias parameters, the number of simulated iterations, sensitivity analyses, and diagnostics were not discussed in the majority of studies. Despite the prevalence of, and concern about, bias in pharmacoepidemiologic and comparative effectiveness studies, probabilistic bias analysis to quantitatively model the effect of bias was not widely used. The quality of reporting and use of this technique varied and was often unclear. Further discussion and dissemination of the technique are warranted. Copyright © 2016 John Wiley & Sons, Ltd.
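The core of a probabilistic bias analysis for an unmeasured confounder can be sketched as follows (a minimal illustration of the general technique using the classical Bross confounding formula; the bias-parameter distributions are invented for illustration, not drawn from any reviewed study):

```python
import random

def simple_bias_analysis(rr_observed, n_sims=10000, seed=1):
    """Monte Carlo bias analysis for an unmeasured confounder: repeatedly
    draw bias parameters from assumed distributions, compute the
    confounding relative risk (Bross formula), and divide it out of the
    observed risk ratio. Returns the 2.5th, 50th and 97.5th percentiles
    of the simulated bias-adjusted estimates."""
    rng = random.Random(seed)
    adjusted = []
    for _ in range(n_sims):
        rr_cd = rng.uniform(1.5, 3.0)   # confounder-disease risk ratio
        p1 = rng.uniform(0.4, 0.6)      # confounder prevalence, exposed
        p0 = rng.uniform(0.1, 0.3)      # confounder prevalence, unexposed
        crr = (rr_cd * p1 + 1 - p1) / (rr_cd * p0 + 1 - p0)
        adjusted.append(rr_observed / crr)
    adjusted.sort()
    return (adjusted[int(0.025 * n_sims)],
            adjusted[n_sims // 2],
            adjusted[int(0.975 * n_sims)])

lo, med, hi = simple_bias_analysis(2.0)
print(f"adjusted RR ~ {med:.2f} (95% simulation interval {lo:.2f}-{hi:.2f})")
```

The review's central critique maps directly onto this sketch: the uniform ranges chosen for rr_cd, p1 and p0 are exactly the "plausible distributions of bias parameters" that many of the reviewed studies failed to report.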
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Shankar; Karri, Naveen K.; Gogna, Pawan K.
2012-03-13
Enormous military and commercial interests exist in developing quiet, lightweight, and compact thermoelectric (TE) power generation systems. This paper investigates design integration and analysis of an advanced TE power generation system implementing JP-8 fueled combustion and thermal recuperation. Design and development of a portable TE power system using a JP-8 combustor as a high-temperature heat source, with optimal process flows that depend on efficient heat generation, transfer, and recovery within the system, are explored. Design optimization of the system required considering the combustion system efficiency and TE conversion efficiency simultaneously. The combustor performance and TE sub-system performance were coupled directly through exhaust temperatures, fuel and air mass flow rates, heat exchanger performance, subsequent hot-side temperatures, and cold-side cooling techniques and temperatures. Systematic investigation of this system relied on accurate thermodynamic modeling of complex, high-temperature combustion processes concomitantly with detailed thermoelectric converter thermal/mechanical modeling. To this end, this work reports on design integration of system-level process flow simulations using the commercial software CHEMCAD with in-house thermoelectric converter and module optimization, and heat exchanger analyses using COMSOL software. High-performance, high-temperature TE materials and segmented TE element designs are incorporated in coupled design analyses to achieve predicted TE subsystem-level conversion efficiencies exceeding 10%. These TE advances are integrated with a high-performance microtechnology combustion reactor based on recent advances at the Pacific Northwest National Laboratory (PNNL). Predictions from this coupled simulation established a basis for optimal selection of fuel and air flow rates, thermoelectric module design and operating conditions, and microtechnology heat-exchanger design criteria. 
This paper will discuss this simulation process, which leads directly to system efficiency power maps defining potentially available optimal system operating conditions and regimes. This coupled simulation approach enables pathways for integrated use of high-performance combustor components, high-performance TE devices, and microtechnologies to produce a compact, lightweight, combustion-driven TE power system prototype that operates on common fuels.
Benchmarking hydrological model predictive capability for UK River flows and flood peaks.
NASA Astrophysics Data System (ADS)
Lane, Rosanna; Coxon, Gemma; Freer, Jim; Wagener, Thorsten
2017-04-01
Data and hydrological models are now available for national hydrological analyses. However, hydrological model performance varies between catchments, and lumped, conceptual models are not able to produce adequate simulations everywhere. This study aims to benchmark hydrological model performance for catchments across the United Kingdom within an uncertainty analysis framework. We have applied four hydrological models from the FUSE framework to 1128 catchments across the UK. These models are all lumped models and run at a daily timestep, but differ in the model structural architecture and process parameterisations, therefore producing different but equally plausible simulations. We apply FUSE over a 20-year period from 1988 to 2008, within a GLUE Monte Carlo uncertainty analysis framework. Model performance was evaluated for each catchment, model structure and parameter set using standard performance metrics. These were calculated both for the whole time series and to assess seasonal differences in model performance. The GLUE uncertainty analysis framework was then applied to produce simulated 5th and 95th percentile uncertainty bounds for the daily flow time-series and, additionally, the annual maximum prediction bounds for each catchment. The results show that the model performance varies significantly in space and time depending on catchment characteristics including climate, geology and human impact. We identify regions where models are systematically failing to produce good results, and present reasons why this could be the case. We also identify regions or catchment characteristics where one model performs better than others, and have explored what structural component or parameterisation enables certain models to produce better simulations in these catchments. Model predictive capability was assessed for each catchment by assessing the ability of the models to produce discharge prediction bounds which successfully bound the observed discharge. 
These results improve our understanding of the predictive capability of simple conceptual hydrological models across the UK and help us to identify where further effort is needed to develop modelling approaches to better represent different catchment and climate typologies.
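The GLUE procedure described above, rejecting non-behavioural parameter sets and forming likelihood-weighted prediction bounds, can be sketched as follows (a generic illustration, not the study's FUSE configuration; the simulated series, likelihood values and threshold are toy assumptions):

```python
def weighted_quantile(values, weights, q):
    """First value whose cumulative normalised weight reaches quantile q."""
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    cum = 0.0
    for v, w in pairs:
        cum += w / total
        if cum >= q:
            return v
    return pairs[-1][0]

def glue_bounds(simulations, likelihoods, threshold, q=(0.05, 0.95)):
    """GLUE-style prediction bounds: discard parameter sets whose informal
    likelihood (e.g. Nash-Sutcliffe efficiency) falls below the behavioural
    threshold, then form likelihood-weighted quantiles of the retained
    simulated series at each time step."""
    keep = [(s, l) for s, l in zip(simulations, likelihoods) if l >= threshold]
    sims = [s for s, _ in keep]
    ws = [l for _, l in keep]
    return [(weighted_quantile([s[t] for s in sims], ws, q[0]),
             weighted_quantile([s[t] for s in sims], ws, q[1]))
            for t in range(len(sims[0]))]

# Three parameter sets, two time steps; the low-likelihood set (0.1) is
# non-behavioural at threshold 0.3 and is excluded from the bounds.
sims = [[1.0, 2.0], [1.5, 2.5], [10.0, 20.0]]
bounds = glue_bounds(sims, [0.6, 0.4, 0.1], threshold=0.3)
print(bounds)
```

Checking whether observed discharge falls inside such bounds is exactly the predictive-capability test the study applies per catchment.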
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, Elise; Wolf, Rachel; Sako, Masao
2016-11-09
Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the 'Tripp' and 'Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of ~1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying Ωm, w0, α, β and a magnitude offset parameter, with no systematics we obtain Δ(w0) = w0(true) − w0(best fit) = −0.036 ± 0.109 (a ~11% 1σ uncertainty) using the Tripp metric and Δ(w0) = −0.055 ± 0.068 (a ~7% 1σ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain Δ(w0) = −0.062 ± 0.132 (a ~14% 1σ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on w0 with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
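The ABC idea the abstract describes, forward-simulating data and accepting parameter draws whose simulated summary lies close to the observed one, can be sketched with a plain rejection sampler (a generic illustration, not the superABC sampler; the Gaussian toy model, prior and tolerance are assumptions):

```python
import random
import statistics

def abc_rejection(observed, simulate, prior_sample, distance,
                  n_draws=5000, eps=0.2, seed=2):
    """Generic ABC rejection sampler: draw parameters from the prior,
    forward-simulate data, and keep draws whose simulated summary lies
    within eps of the observed summary under the chosen distance metric.
    The accepted draws approximate the posterior without a likelihood."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        sim = simulate(theta, rng)
        if distance(sim, observed) < eps:
            accepted.append(theta)
    return accepted

# Toy example: infer the mean of a Gaussian with known sigma = 1, using
# the sample mean of 50 simulated points as the summary statistic.
post = abc_rejection(
    observed=3.0,
    simulate=lambda th, rng: statistics.fmean(
        rng.gauss(th, 1.0) for _ in range(50)),
    prior_sample=lambda rng: rng.uniform(0.0, 6.0),
    distance=lambda a, b: abs(a - b),
)
print(f"posterior mean ~ {statistics.fmean(post):.2f} "
      f"from {len(post)} accepted draws")
```

In the paper's setting the forward simulation is the full SN Ia survey model with systematics, and the Tripp and Light Curve metrics play the role of the distance function here.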
Lu, Liang-Xing; Wang, Ying-Min; Srinivasan, Bharathi Madurai; Asbahi, Mohamed; Yang, Joel K W; Zhang, Yong-Wei
2016-09-01
We perform systematic two-dimensional energetic analysis to study the stability of various nanostructures formed by dewetting solid films deposited on patterned substrates. Our analytical results show that by controlling system parameters such as the substrate surface pattern, film thickness and wetting angle, a variety of equilibrium nanostructures can be obtained. Phase diagrams are presented to show the complex relations between these system parameters and various nanostructure morphologies. We further carry out both phase field simulations and dewetting experiments to validate the analytically derived phase diagrams. Good agreement between the results from our energetic analyses and those from our phase field simulations and experiments verifies our analysis. Hence, the phase diagrams presented here provide guidelines for using solid-state dewetting as a tool to achieve various nanostructures.
Modelling the reactive nitrogen budget across Germany using LOTOS-EUROS between 2000 and 2013
NASA Astrophysics Data System (ADS)
Schaap, Martijn; Banzhaf, Sabine; Hendriks, Carlijn; Kranenburg, Richard
2017-04-01
Nitrogen deposition causes soil acidification and enhances eutrophication causing biodiversity loss. Currently, a major contribution to N-deposition derives from ammonia. Furthermore, ammonia contributes to the formation of secondary inorganic aerosol, a major contributor to atmospheric particulate matter levels. The aerosol formation provides a means of long range transport of reactive nitrogen as the life time of the aerosols is larger than that of ammonia itself. Despite its central role in these environmental threats, little is known about the ammonia budget. In this study we report on recent modelling study to assess the ammonia and reactive nitrogen budget over Germany for a period of 14 years (2000-2013). Prior to the long term simulation the process descriptions in the LOTOS-EUROS CTM were updated and a sensitivity simulation was performed showing that the impact of the compensation point for ammonia and the changes in aerosol deposition had the largest impact against earlier studies. Next, sensitivity simulations were performed to assess the impact of newly reported emissions totals (with 30 higher emissions caused by adjusted emission factors for fertilizer spreading), different spatial and temporal emission variability. Long term evaluation showed that the model is well able to reproduce the variability in wet deposition fluxes induced by varying precipitation amounts, but that systematic changes remain. These sensitivity simulations showed that detailing the seasonal emission variability is more important to remove systematic differences than lowering the uncertainty in dry deposition parametrization. Evaluation with the ammonia retrievals of the IASI satellite confirm that the newly reported emission data for fertilizer application have positive impacts on the modelled ammonia distribution. 
The new emission information confirms an emission area observed by the satellite in the northeast of Germany, which was previously absent from the national-scale modelling exercises. This finding is supported by evaluating the model performance against wet deposition data and a compilation of ammonia passive sampler data. The new model setup was used to reassess nitrogen deposition and PM formation in Germany between 2000 and 2013. In comparison to previous studies, the nitrogen deposition estimates over Germany increased by 25%, with considerable variability across the country. Two thirds of the deposition could be attributed to German sources, whereas the rest is of foreign origin. About 70% of the natural ecosystems across Germany receive nitrogen in excess of their critical load.
Long, Guankui; Wu, Bo; Yang, Xuan; Kan, Bin; Zhou, Ye-Cheng; Chen, Li-Chuan; Wan, Xiangjian; Zhang, Hao-Li; Sum, Tze Chien; Chen, Yongsheng
2015-09-30
Both solution-processed polymer and small-molecule solar cells have achieved PCEs over 9% with the conventional device structure. However, for practical photovoltaic applications, further enhancement of both device performance and stability is urgently required, particularly for inverted-structure devices, since this architecture is probably the most promising for coming commercialization. In this work, we fabricated both conventional and inverted devices using the same small-molecule donor/acceptor materials and compared the performance of the two structures. The inverted device gave significantly improved performance: the highest PCE reported so far for an inverted device using small molecules as the donor. Furthermore, the inverted device shows remarkable stability, with almost no obvious degradation after three months. Systematic device physics and charge-generation dynamics studies, including optical simulation, light-intensity-dependent current-voltage experiments, photocurrent density-effective voltage analyses, transient absorption measurements, and electrical simulations, indicate that the significantly enhanced performance of the inverted device is ascribed to an increase in Jsc compared with the conventional device, which in turn is mainly attributed to increased absorption of photons in the active layers rather than to reduced nongeminate recombination.
Stress training improves performance during a stressful flight.
McClernon, Christopher K; McCauley, Michael E; O'Connor, Paul E; Warm, Joel S
2011-06-01
This study investigated whether stress training introduced during the acquisition of simulator-based flight skills enhances pilot performance during subsequent stressful flight operations in an actual aircraft. Despite knowledge that preconditions to aircraft accidents can be strongly influenced by pilot stress, little is known about the effectiveness of stress training and how it transfers to operational flight settings. For this study, 30 participants with no flying experience were assigned at random to a stress-trained treatment group or a control group. Stress training consisted of systematic pairing of skill acquisition in a flight simulator with stress coping mechanisms in the presence of a cold pressor. Control participants received identical flight skill acquisition training but without stress training. Participants then performed a stressful flying task in a Piper Archer aircraft. Stress-trained research participants flew the aircraft more smoothly, as recorded by aircraft telemetry data, and generally better, as recorded by flight instructor evaluations, than did control participants. Introducing stress coping mechanisms during flight training improved performance in a stressful flying task. The results of this study indicate that stress training during the acquisition of flight skills may serve to enhance pilot performance in stressful operational flight and, therefore, might mitigate the contribution of pilot stress to aircraft mishaps.
Performance Modeling of an Experimental Laser Propelled Lightcraft
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Chen, Yen-Sen; Liu, Jiwen; Myrabo, Leik N.; Mead, Franklin B., Jr.
2000-01-01
A computational plasma aerodynamics model is developed to study the performance of an experimental laser propelled lightcraft. The computational methodology is based on a time-accurate, three-dimensional, finite-difference, chemically reacting, unstructured grid, pressure-based formulation. The underlying physics are added and tested systematically using a building-block approach. The physics modeled include non-equilibrium thermodynamics, non-equilibrium air-plasma finite-rate kinetics, specular ray tracing, laser beam energy absorption and refraction by plasma, non-equilibrium plasma radiation, and plasma resonance. A series of transient computations are performed at several laser pulse energy levels and the simulated physics are discussed and compared with those of tests and literature. The predicted coupling coefficients for the lightcraft compared reasonably well with those of tests conducted on a pendulum apparatus.
Simulate what is measured: next steps towards predictive simulations (Conference Presentation)
NASA Astrophysics Data System (ADS)
Bussmann, Michael; Kluge, Thomas; Debus, Alexander; Hübl, Axel; Garten, Marco; Zacharias, Malte; Vorberger, Jan; Pausch, Richard; Widera, René; Schramm, Ulrich; Cowan, Thomas E.; Irman, Arie; Zeil, Karl; Kraus, Dominik
2017-05-01
Simulations of laser-matter interaction at extreme intensities that have predictive power are nowadays within reach for codes that make optimum use of high-performance compute architectures. Nevertheless, this is mostly true for very specific settings where model parameters are very well known from experiment and the underlying plasma dynamics is governed by Maxwell's equations alone. When including atomic effects, prepulse influences, radiation reaction and other physical phenomena, things look different. Not only is it harder to evaluate the sensitivity of the simulation result to variations in the various model parameters, but the numerical models are less well tested, and their combination can lead to subtle side effects that influence the simulation outcome. We propose to make optimum use of future compute hardware to compute statistical and systematic errors rather than just find the optimum set of parameters fitting an experiment. This requires including experimental uncertainties, which is a challenge for current state-of-the-art techniques. Moreover, it demands better comparison to experiments, as simulating the response of the diagnostics becomes important. We strongly advocate the use of open standards to achieve interoperability between codes for comparison studies, building complete tool chains for simulating laser-matter experiments from start to end.
Fatigue Tests with Random Flight Simulation Loading
NASA Technical Reports Server (NTRS)
Schijve, J.
1972-01-01
Crack propagation was studied in a full-scale wing structure under different simulated flight conditions. Omission of low-amplitude gust cycles had a small effect on the crack rate. Truncation of the infrequently occurring high-amplitude gust cycles to a lower level had a noticeably accelerating effect on crack growth. The application of fail-safe load (100 percent limit load) effectively stopped subsequent crack growth under resumed flight-simulation loading. In another flight-simulation test series on sheet specimens, the variables studied were the design stress level and the cyclic frequency of the random gust loading. In-flight mean stresses varied from 5.5 to 10.0 kg/sq mm. The effect of the stress level was larger for the 2024 alloy than for the 7075 alloy. Three frequencies were employed, namely 10 cps, 1 cps, and 0.1 cps; the frequency effect was small. The advantages and limitations of flight-simulation tests are compared with those of alternative test procedures such as constant-amplitude tests, program tests, and random-load tests. Various testing purposes are considered. The variables of flight-simulation tests are listed and their effects are discussed. A proposal is made for performing systematic flight-simulation tests in such a way that the compiled data may be used as a source of reference.
Evolving a Neural Olfactorimotor System in Virtual and Real Olfactory Environments
Rhodes, Paul A.; Anderson, Todd O.
2012-01-01
To provide a platform to enable the study of simulated olfactory circuitry in context, we have integrated a simulated neural olfactorimotor system with a virtual world which simulates both computational fluid dynamics as well as a robotic agent capable of exploring the simulated plumes. A number of the elements which we developed for this purpose have not, to our knowledge, been previously assembled into an integrated system, including: control of a simulated agent by a neural olfactorimotor system; continuous interaction between the simulated robot and the virtual plume; the inclusion of multiple distinct odorant plumes and background odor; the systematic use of artificial evolution driven by olfactorimotor performance (e.g., time to locate a plume source) to specify parameter values; the incorporation of the realities of an imperfect physical robot using a hybrid model where a physical robot encounters a simulated plume. We close by describing ongoing work toward engineering a high dimensional, reversible, low power electronic olfactory sensor which will allow olfactorimotor neural circuitry evolved in the virtual world to control an autonomous olfactory robot in the physical world. The platform described here is intended to better test theories of olfactory circuit function, as well as provide robust odor source localization in realistic environments. PMID:23112772
High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6
Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; ...
2016-11-22
Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean.
The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950–2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, “what are the origins and consequences of systematic model biases?”, but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.
NASA Astrophysics Data System (ADS)
Tian, Lin-Lin; Zhao, Ning; Song, Yi-Lei; Zhu, Chun-Ling
2018-05-01
This work performs a systematic sensitivity analysis of different turbulence models and various inflow boundary conditions in predicting the wake flow behind a horizontal-axis wind turbine represented by an actuator disc (AD). The tested turbulence models are the standard k-𝜀 model and the Reynolds Stress Model (RSM). A single wind turbine immersed both in uniform flow and in modeled atmospheric boundary layer (ABL) flow is studied. Simulation results are validated against field experimental data in terms of wake velocity and turbulence intensity.
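In such RANS simulations the actuator disc enters the momentum equations as a body force derived from the turbine thrust. As a point of reference, the classical one-dimensional momentum-theory relations behind the AD concept can be sketched as follows (a hedged illustration with hypothetical rotor parameters, not the solver setup used in the study):

```python
import math

def actuator_disc(u_inf, ct, rho=1.225, diameter=80.0):
    """1-D momentum theory for an actuator disc.

    Returns the axial induction factor a, the velocity at the disc,
    the far-wake velocity, and the thrust force. Valid for ct <= 1,
    where classical momentum theory holds.
    """
    area = math.pi * (diameter / 2.0) ** 2
    # Ct = 4a(1 - a)  ->  a = (1 - sqrt(1 - Ct)) / 2
    a = 0.5 * (1.0 - math.sqrt(1.0 - ct))
    u_disc = u_inf * (1.0 - a)        # velocity through the disc
    u_wake = u_inf * (1.0 - 2.0 * a)  # far-wake velocity
    thrust = 0.5 * rho * area * ct * u_inf ** 2
    return a, u_disc, u_wake, thrust

a, u_d, u_w, t = actuator_disc(u_inf=8.0, ct=0.75)
```

The thrust divided by the cell volumes spanned by the disc is what a CFD solver would apply as the momentum sink.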
NASA Astrophysics Data System (ADS)
Batic, Matej; Begalli, Marcia; Han, Min Cheol; Hauf, Steffen; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Han Sung; Grazia Pia, Maria; Saracco, Paolo; Weidenspointner, Georg
2014-06-01
A systematic review of methods and data for the Monte Carlo simulation of photon interactions is in progress: it concerns a wide set of theoretical modeling approaches and data libraries available for this purpose. Models and data libraries are assessed quantitatively with respect to an extensive collection of experimental measurements documented in the literature to determine their accuracy; this evaluation exploits rigorous statistical analysis methods. The computational performance of the associated modeling algorithms is evaluated as well. An overview of the assessment of photon interaction models and results of the experimental validation are presented.
Directly comparing gravitational wave data to numerical relativity simulations: systematics
NASA Astrophysics Data System (ADS)
Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Zlochower, Yosef; Shoemaker, Deirdre; Lovelace, Geoffrey; Pankow, Christopher; Brady, Patrick; Scheel, Mark; Pfeiffer, Harald; Ossokine, Serguei
2017-01-01
We compare synthetic data directly to complete numerical relativity simulations of binary black holes. In doing so, we circumvent ad hoc approximations introduced in semi-analytical models previously used in gravitational wave parameter estimation and compare the data against the most accurate waveforms, including higher modes. In this talk, we focus on the synthetic studies that test potential sources of systematic errors. We also run "end-to-end" studies of intrinsically different synthetic sources to show we can recover parameters for different systems.
Aydin, Denis; Feychting, Maria; Schüz, Joachim; Andersen, Tina Veje; Poulsen, Aslak Harbo; Prochazka, Michaela; Klaeboe, Lars; Kuehni, Claudia E; Tynes, Tore; Röösli, Martin
2011-07-01
Whether the use of mobile phones is a risk factor for brain tumors in adolescents is currently being studied. Case-control studies investigating this possible relationship are prone to recall error and selection bias. We assessed the potential impact of random and systematic recall error and selection bias on odds ratios (ORs) by performing simulations based on real data from an ongoing case-control study of mobile phones and brain tumor risk in children and adolescents (CEFALO study). Simulations were conducted for two mobile phone exposure categories: regular and heavy use. Our choice of levels of recall error was guided by a validation study that compared objective network operator data with the self-reported amount of mobile phone use in CEFALO. In our validation study, cases overestimated their number of calls by 9% on average and controls by 34%. Cases also overestimated their duration of calls by 52% on average and controls by 163%. The participation rates in CEFALO were 83% for cases and 71% for controls. In a variety of scenarios, the combined impact of recall error and selection bias on the estimated ORs was complex. These simulations are useful for the interpretation of previous case-control studies on brain tumor and mobile phone use in adults as well as for the interpretation of future studies on adolescents. Copyright © 2011 Wiley-Liss, Inc.
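The bias mechanism studied above can be illustrated with a deterministic toy calculation: when controls over-report their exposure more than cases do (34% vs. 9%, as in the validation study), a truly null odds ratio is pushed away from 1. A minimal sketch with hypothetical cell counts (not the CEFALO simulation code):

```python
def odds_ratio(exp_cases, unexp_cases, exp_controls, unexp_controls):
    """Odds ratio from a 2x2 exposure-by-outcome table."""
    return (exp_cases * unexp_controls) / (unexp_cases * exp_controls)

def apply_recall_inflation(exposed, unexposed, inflation):
    """Move a fraction of truly unexposed subjects into the exposed cell,
    mimicking over-reporting of phone use (differential recall error)."""
    moved = unexposed * inflation
    return exposed + moved, unexposed - moved

# Hypothetical true table with no association (OR = 1.0)
true_or = odds_ratio(100, 100, 100, 100)

# Cases over-report less (9%) than controls (34%)
ec, uc = apply_recall_inflation(100, 100, 0.09)
eg, ug = apply_recall_inflation(100, 100, 0.34)
biased_or = odds_ratio(ec, uc, eg, ug)  # pushed below 1 by differential recall
```

With these numbers the observed OR falls to roughly 0.6 despite a true OR of 1, showing how differential recall alone can mask or invert an association.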
Virtual Reality-Based Simulators for Cranial Tumor Surgery: A Systematic Review.
Mazur, Travis; Mansour, Tarek R; Mugge, Luke; Medhkour, Azedine
2018-02-01
Virtual reality (VR) simulators have become useful tools in various fields of medicine. Prominent uses of VR technologies include assessment of physician skills and presurgical planning. VR has shown effectiveness in multiple surgical specialties, yet its use in neurosurgery remains limited. To examine all current literature on VR-based simulation for presurgical planning and training in cranial tumor surgeries and to assess the quality of these studies. PubMed and Embase were systematically searched to identify studies that used VR for presurgical planning and/or studies that investigated the use of VR as a training tool from inception to May 25, 2017. The initial search identified 1662 articles. Thirty-seven full-text articles were assessed for inclusion. Nine studies were included. These studies were subdivided into presurgical planning and training using VR. Prospects for VR are bright when surgical planning and skills training are considered. In terms of surgical planning, VR has documented usefulness in the planning of cranial surgeries. Further, VR has been central to establishing reproducible benchmarks of performance in relation to cranial tumor resection, which are helpful not only in showing face and construct validity but also in enhancing neurosurgical training in a way not previously examined. Although additional studies are needed to better delineate the precise role of VR in each of these capacities, the included studies show the usefulness of VR in neurosurgery and highlight the need for further investigation. Published by Elsevier Inc.
Curuksu, Jeremy; Zacharias, Martin
2009-03-14
Although molecular dynamics (MD) simulations have been applied frequently to study flexible molecules, the sampling of conformational states separated by barriers is limited due to currently possible simulation time scales. Replica-exchange (Rex)MD simulations that allow for exchanges between simulations performed at different temperatures (T-RexMD) can achieve improved conformational sampling. However, in the case of T-RexMD the computational demand grows rapidly with system size. A Hamiltonian RexMD method that specifically enhances coupled dihedral angle transitions has been developed. The method employs added biasing potentials as replica parameters that destabilize available dihedral substates and was applied to study coupled dihedral transitions in nucleic acid molecules. The biasing potentials can be either fixed at the beginning of the simulation or optimized during an equilibration phase. The method was extensively tested and compared to conventional MD simulations and T-RexMD simulations on an adenine dinucleotide system and on a DNA abasic site. The biasing potential RexMD method showed improved sampling of conformational substates compared to conventional MD simulations similar to T-RexMD simulations but at a fraction of the computational demand. It is well suited to study systematically the fine structure and dynamics of large nucleic acids under realistic conditions including explicit solvent and ions and can be easily extended to other types of molecules.
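The replica-exchange step underlying both T-RexMD and the Hamiltonian variant described above accepts a swap between two replicas with a Metropolis criterion. A minimal sketch of that criterion for two replicas at the same temperature but with different biasing potentials (illustrative only; the energy values below are hypothetical):

```python
import math

def exchange_probability(u_ii, u_jj, u_ij, u_ji, beta):
    """Metropolis acceptance probability for swapping the configurations
    of two Hamiltonian replicas run at the same temperature.

    u_ab: potential energy of replica b's configuration evaluated with
    replica a's (biased) Hamiltonian; beta = 1/(kB*T).
    """
    delta = beta * ((u_ij + u_ji) - (u_ii + u_jj))
    return min(1.0, math.exp(-delta))

# A swap is always accepted when it lowers the combined energy
p_accept = exchange_probability(0.0, 0.0, -1.0, -1.0, beta=1.0)
```

In T-RexMD the same criterion appears with two temperatures and one Hamiltonian; here the biasing potentials supply the energy differences while the temperature stays fixed, which is why the method scales better with system size.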
NASA Astrophysics Data System (ADS)
Lee, S.-H.; Kim, S.-W.; Angevine, W. M.; Bianco, L.; McKeen, S. A.; Senff, C. J.; Trainer, M.; Tucker, S. C.; Zamora, R. J.
2011-03-01
The performance of different urban surface parameterizations in the WRF (Weather Research and Forecasting) in simulating urban boundary layer (UBL) was investigated using extensive measurements during the Texas Air Quality Study 2006 field campaign. The extensive field measurements collected on surface (meteorological, wind profiler, energy balance flux) sites, a research aircraft, and a research vessel characterized 3-dimensional atmospheric boundary layer structures over the Houston-Galveston Bay area, providing a unique opportunity for the evaluation of the physical parameterizations. The model simulations were performed over the Houston metropolitan area for a summertime period (12-17 August) using a bulk urban parameterization in the Noah land surface model (original LSM), a modified LSM, and a single-layer urban canopy model (UCM). The UCM simulation compared quite well with the observations over the Houston urban areas, reducing the systematic model biases in the original LSM simulation by 1-2 °C in near-surface air temperature and by 200-400 m in UBL height, on average. A more realistic turbulent (sensible and latent heat) energy partitioning contributed to the improvements in the UCM simulation. The original LSM significantly overestimated the sensible heat flux (~200 W m-2) over the urban areas, resulting in warmer and higher UBL. The modified LSM slightly reduced warm and high biases in near-surface air temperature (0.5-1 °C) and UBL height (~100 m) as a result of the effects of urban vegetation. The relatively strong thermal contrast between the Houston area and the water bodies (Galveston Bay and the Gulf of Mexico) in the LSM simulations enhanced the sea/bay breezes, but the model performance in predicting local wind fields was similar among the simulations in terms of statistical evaluations. These results suggest that a proper surface representation (e.g. 
urban vegetation, surface morphology) and explicit parameterizations of urban physical processes are required for accurate urban atmospheric numerical modeling.
Pennell, Christopher; McCulloch, Peter
2015-01-01
The purpose of this study was to determine whether American Board of Surgery Certifying Examination (CE) performance is improved among residents who prepare using simulated oral examinations (SOEs). EMBASE and MEDLINE were searched using predefined search terms. No language restrictions were imposed and the latest search date was in November 2014. Included studies must have reported on residents training in a general surgery residency in the United States who used SOEs to prepare for the CE and have measured their performance against those without exposure to SOEs. Studies meeting inclusion criteria were qualitatively and quantitatively analyzed and a fixed effects meta-analysis was performed to determine the net effect of SOEs on CE performance. Overall, 4 of 25 abstracts reviewed met inclusion criteria and are included in this review. The most common simulation format included public examinations in front of resident peers during scheduled education sessions. All 4 included studies trended toward improved performance with SOEs and in 2 of these studies the improvement was statistically significant. Overall, 3 studies were of adequate quality to perform a meta-analysis and demonstrated a relative risk for first-attempt CE success of 1.22 (95% CI: 1.07-1.39, p = 0.003) for residents preparing with SOEs compared to those without SOEs. The published literature evaluating SOEs is limited and generally of fair quality. A modest improvement in CE performance was identified when public SOEs were used as an educational tool aimed to improve professionalism and communication skills, encourage reading at home, and provide a regular review of clinically relevant topics. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
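The pooled relative risk of 1.22 reported above comes from a fixed-effects meta-analysis, which combines per-study log relative risks weighted by their inverse variances. A minimal sketch of that calculation (the study counts below are hypothetical, not the reviewed data):

```python
import math

def pooled_relative_risk(studies):
    """Fixed-effect (inverse-variance) pooling of per-study relative risks.

    Each study is (events_exposed, n_exposed, events_control, n_control).
    Returns the pooled RR with a 95% confidence interval.
    """
    num = den = 0.0
    for a, n1, c, n2 in studies:
        rr = (a / n1) / (c / n2)
        # Large-sample variance of log RR
        var = 1.0 / a - 1.0 / n1 + 1.0 / c - 1.0 / n2
        w = 1.0 / var
        num += w * math.log(rr)
        den += w
    log_rr = num / den
    se = math.sqrt(1.0 / den)
    return (math.exp(log_rr),
            math.exp(log_rr - 1.96 * se),
            math.exp(log_rr + 1.96 * se))

# Hypothetical first-attempt pass counts for SOE-prepared vs. unprepared residents
rr, lo, hi = pooled_relative_risk([(45, 50, 38, 50), (90, 100, 75, 100)])
```

Larger studies get proportionally more weight, which is why a single big cohort can dominate the pooled estimate.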
Saab, Mohamad M; McCarthy, Bridie; Andrews, Tom; Savage, Eileen; Drummond, Frances J; Walshe, Nuala; Forde, Mary; Breen, Dorothy; Henn, Patrick; Drennan, Jonathan; Hegarty, Josephine
2017-11-01
This review aims to determine the effect of adult Early Warning Systems education on nurses' knowledge, confidence and clinical performance. Early Warning Systems support timely identification of clinical deterioration and prevention of avoidable deaths. Several educational programmes have been designed to help nurses recognize and manage deteriorating patients. Little is known as to the effectiveness of these programmes. Systematic review. Academic Search Complete, CINAHL, MEDLINE, PsycINFO, PsycARTICLES, Psychology and Behavioral Science Collection, SocINDEX and the UK & Ireland Reference Centre, EMBASE, the Turning Research Into Practice database, the Cochrane Central Register of Controlled Trials (CENTRAL) and Grey Literature sources were searched between October and November 2015. This is a quantitative systematic review using Cochrane methods. Studies published in English between January 2011 and November 2015 were sought. The risk of bias, level of evidence and the quality of evidence per outcome were assessed. Eleven articles with 10 studies were included. Nine studies addressed clinical performance, four addressed knowledge and two addressed confidence. Knowledge, vital signs recording and Early Warning Score calculation were improved in the short term. Two interventions had no effect on nurses' response to clinical deterioration and use of communication tools. This review highlights the importance of measuring outcomes using standardized tools and valid and reliable instruments. Using longitudinal designs, researchers are encouraged to investigate the effect of Early Warning Systems educational programmes. These can include interactive e-learning, on-site interdisciplinary Early Warning Scoring systems training sessions and simulated scenarios. © 2017 John Wiley & Sons Ltd.
SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kathuria, K; Siebers, J
2014-06-01
Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets, and are as follows: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (same one leaf offset per segment per beam) and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total). Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Results showed very low dose error in all reasonably possible machine configurations, rare or otherwise, which could be simulated. Very low error in dose to PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or that of only one leaf per beam being offset (<.2%). The errors resulting from a high number of adjacent leaves (maximum of 5 out of 60 total leaf-pairs) being simultaneously offset in many (5) of the control points (total 10–18 in all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2–3%). Conclusions: The above results show that patient shifts and anatomical changes are the main source of errors in dose delivered, not machine delivery.
These two sources of error are “visually complementary” and uncorrelated (albeit not additive in the final error) and one can easily incorporate error resulting from machine delivery in an error model based purely on tumor motion.
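The leaf-offset schemes in the methods above can be sketched independently of any TPS: each scheme perturbs a bank of leaf positions before dose recomputation. A hedged illustration with hypothetical leaf positions (the Pinnacle file handling is omitted):

```python
import random

def offset_one_leaf_per_segment(segments, offset_mm):
    """Systematic uniform one-leaf offset: shift the same (here: first)
    leaf in every segment, leaving the input plan untouched."""
    out = [list(seg) for seg in segments]  # deep-copy each segment
    for seg in out:
        seg[0] += offset_mm
    return out

def offset_random_leaves(segments, offset_mm, n_leaves, n_segments, seed=0):
    """Offset n_leaves leaves, chosen uniformly at random, in n_segments
    randomly chosen segments (the third scheme in the abstract)."""
    rng = random.Random(seed)
    out = [list(seg) for seg in segments]
    for i in rng.sample(range(len(out)), n_segments):
        for j in rng.sample(range(len(out[i])), n_leaves):
            out[i][j] += offset_mm
    return out

# Hypothetical bank of leaf positions (mm): 3 segments x 4 leaves
plan = [[10.0, 12.0, 14.0, 16.0] for _ in range(3)]
shifted = offset_one_leaf_per_segment(plan, 5.0)
```

In the study the perturbed positions would be written back into the delivery file and the dose recomputed; here only the perturbation step is shown.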
NASA Astrophysics Data System (ADS)
Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The
2015-11-01
While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.
MUSiC - A general search for deviations from monte carlo predictions in CMS
NASA Astrophysics Data System (ADS)
Biallass, Philipp A.; CMS Collaboration
2009-06-01
A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.
MUSiC - A Generic Search for Deviations from Monte Carlo Predictions in CMS
NASA Astrophysics Data System (ADS)
Hof, Carsten
2009-05-01
We present a model independent analysis approach, systematically scanning the data for deviations from the Standard Model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.
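At the core of such a scan is a per-class comparison of the observed event count with the Monte Carlo expectation. A minimal sketch of the Poisson tail probability for an excess (hypothetical counts; the real algorithm additionally convolves systematic uncertainties into the expectation and corrects for the number of regions scanned):

```python
import math

def poisson_p_excess(n_obs, n_expected):
    """p-value for observing at least n_obs events given a Poisson
    expectation: P(N >= n_obs | mu = n_expected)."""
    # P(N >= n) = 1 - sum_{k < n} mu^k e^{-mu} / k!
    cdf = sum(n_expected ** k * math.exp(-n_expected) / math.factorial(k)
              for k in range(n_obs))
    return 1.0 - cdf

p = poisson_p_excess(12, 5.0)  # a 12-event excess over a 5-event expectation
```

Small p-values flag event classes whose data deviate significantly from the simulation and therefore merit closer inspection.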
NASA Technical Reports Server (NTRS)
Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil
2011-01-01
Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of the knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with a white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors: (1) an extended Kalman filter (EKF) augmented with Markov states, and (2) an unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
Ahmad, Peer Zahoor; Quadri, S M K; Ahmad, Firdous; Bahar, Ali Newaz; Wani, Ghulam Mohammad; Tantary, Shafiq Maqbool
2017-12-01
Quantum-dot cellular automata (QCA) is an extremely small-scale, ultra-low-power nanotechnology and a possible alternative to current CMOS technology. Reversible QCA logic is a key current approach to reducing power losses. This paper presents a novel reversible logic gate called the F-Gate. It is simple in design and offers a powerful technique for implementing reversible logic. A systematic approach has been used to implement a novel single-layer reversible full adder, full subtractor and full adder-subtractor using the F-Gate. The proposed full adder-subtractor achieves significant improvements in overall circuit parameters over the most cost-efficient previous designs that exploit the inevitable nano-level issues to perform arithmetic computing. The proposed designs have been verified and simulated using the QCADesigner tool, version 2.0.3.
ERIC Educational Resources Information Center
Srinivasan, Srilekha; Perez, Lance C.; Palmer, Robert D.; Brooks, David W.; Wilson, Kathleen; Fowler, David
2006-01-01
A systematic study of the implementation of simulation hardware (TIMS) replacing software (MATLAB) was undertaken for advanced undergraduate and early graduate courses in electrical engineering. One outcome of the qualitative component of the study was remarkable: most students interviewed (4/4 and 6/9) perceived the software simulations as…
Toward Theory Building in the Field of Instructional Games and Simulations
ERIC Educational Resources Information Center
Cruickshank, Donald R.; Mager, Gerald M.
1976-01-01
Three suggestions are made for improving on the present uncoordinated state of games and simulations: establish precise vocabulary, understand the relationships between simulation/gaming and other instructional alternatives, and instigate systematic research based on the descriptive--correlational--experimental loop model. (Author/LS)
Ukkola, A. M.; De Kauwe, M. G.; Pitman, A. J.; ...
2016-10-13
Land surface models (LSMs) must accurately simulate observed energy and water fluxes during droughts in order to provide reliable estimates of future water resources. We evaluated 8 different LSMs (14 model versions) for simulating evapotranspiration (ET) during periods of evaporative drought (Edrought) across six flux tower sites. Using an empirically defined Edrought threshold (a decline in ET below the observed 15th percentile), we show that LSMs simulated 58 Edrought days per year, on average, across the six sites, ~3 times as many as the observed 20 d. The simulated Edrought magnitude was ~8 times greater than observed and twice asmore » intense. Our findings point to systematic biases across LSMs when simulating water and energy fluxes under water-stressed conditions. The overestimation of key Edrought characteristics undermines our confidence in the models' capability in simulating realistic drought responses to climate change and has wider implications for phenomena sensitive to soil moisture, including heat waves.« less
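The empirical drought definition used in the Ukkola et al. abstract below (days with ET below the observed 15th percentile) is easy to make concrete. The sketch below uses made-up daily ET series, not the study's flux tower data, and counts "Edrought days" for an observed and a modelled series against the observation-derived threshold:

```python
def percentile(values, q):
    """Linearly interpolated percentile (q in [0, 100]) of a sample."""
    s = sorted(values)
    k = (q / 100.0) * (len(s) - 1)
    i = int(k)
    f = k - i
    j = min(i + 1, len(s) - 1)
    return s[i] * (1 - f) + s[j] * f

def edrought_days(et_series, threshold):
    """Number of days with evapotranspiration below the drought threshold."""
    return sum(1 for et in et_series if et < threshold)

# Hypothetical daily ET values (mm/day); the model is biased low by 2 mm/day.
obs_et = [float(d) for d in range(1, 21)]
mod_et = [et - 2.0 for et in obs_et]

threshold = percentile(obs_et, 15)   # observed 15th percentile
obs_days = edrought_days(obs_et, threshold)
mod_days = edrought_days(mod_et, threshold)
```

A dry-biased model crosses the observation-based threshold more often than the observations themselves, which is exactly the kind of overestimation of drought days the study reports.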
Hou, Tingjun; Wang, Junmei; Li, Youyong; Wang, Wei
2011-01-24
The Molecular Mechanics/Poisson-Boltzmann Surface Area (MM/PBSA) and the Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) methods calculate binding free energies for macromolecules by combining molecular mechanics calculations and continuum solvation models. To systematically evaluate the performance of these methods, we report here an extensive study of 59 ligands interacting with six different proteins. First, we explored the effects of the length of the molecular dynamics (MD) simulation, ranging from 400 to 4800 ps, and the solute dielectric constant (1, 2, or 4) on the binding free energies predicted by MM/PBSA. Three important conclusions could be drawn: (1) MD simulation length has an obvious impact on the predictions, and a longer MD simulation is not always necessary to achieve better predictions. (2) The predictions are quite sensitive to the solute dielectric constant, and this parameter should be carefully determined according to the characteristics of the protein/ligand binding interface. (3) Conformational entropy often shows large fluctuations in MD trajectories, and a large number of snapshots are necessary to achieve stable predictions. Next, we evaluated the accuracy of the binding free energies calculated by three Generalized Born (GB) models. We found that the GB model developed by Onufriev and Case was the most successful in ranking the binding affinities of the studied inhibitors. Finally, we evaluated the performance of MM/GBSA and MM/PBSA in predicting binding free energies. Our results showed that MM/PBSA performed better than MM/GBSA in calculating absolute, but not necessarily relative, binding free energies. Considering its computational efficiency, MM/GBSA can serve as a powerful tool in drug design, where correct ranking of inhibitors is often emphasized.
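The snapshot averaging at the heart of MM/PBSA-style estimates can be sketched as follows: for each MD snapshot an effective energy is computed for complex, receptor, and ligand, and the binding free energy is the ensemble average of the differences. This is a simplified illustration only; the entropy term and the actual PB/GB solvation solvers are omitted, and the numbers are invented, not from the study.

```python
from statistics import mean, stdev

def binding_free_energy(g_complex, g_receptor, g_ligand):
    """Mean binding free energy and its standard error over MD snapshots.

    Each argument lists a per-snapshot effective energy, i.e. the molecular
    mechanics energy plus a solvation term (PB or GB) for that species.
    """
    dg = [c - r - l for c, r, l in zip(g_complex, g_receptor, g_ligand)]
    sem = stdev(dg) / len(dg) ** 0.5   # standard error of the mean
    return mean(dg), sem

# Illustrative per-snapshot energies (kcal/mol) for three snapshots.
g_cpx = [-100.0, -102.0, -98.0]
g_rec = [-60.0, -60.0, -60.0]
g_lig = [-20.0, -20.0, -20.0]
dg_bind, dg_sem = binding_free_energy(g_cpx, g_rec, g_lig)
```

The abstract's point about needing many snapshots shows up here directly: the standard error shrinks with the square root of the number of snapshots averaged.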
Design and Simulation Plant Layout Using Systematic Layout Planning
NASA Astrophysics Data System (ADS)
Suhardini, D.; Septiani, W.; Fauziah, S.
2017-12-01
This research aims to redesign the factory layout of PT. Gunaprima Budiwijaya in order to increase production capacity. The problem faced by this company is an inappropriate layout that causes cross traffic on the production floor. The re-layout procedure consists of three steps: analysing the existing layout, designing the plant layout based on SLP, and evaluating and selecting an alternative layout by simulation using ProModel version 6. Systematic layout planning is used to generate a new layout rather than modifying the initial one. The SLP produced four layout alternatives, each evaluated against two criteria: material handling cost, using the Material Handling Evaluation Sheet (MHES), and processing time, by simulation. The results showed that production capacity increased by 37.5% with the addition of a machine and an operator, while material handling cost was reduced by the improved layout. The systematic layout planning method reduced material handling cost by 10.98% from the initial layout, amounting to Rp1,229,813.34.
A Computational Framework for Bioimaging Simulation.
Watabe, Masaki; Arjunan, Satya N V; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi
2015-01-01
Using bioimaging technology, biologists have attempted to identify and document the analytical interpretations that underlie biological phenomena in living cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks, and at understanding how higher-level functions emerge from the combined action of biomolecules. However, formidable challenges remain in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from the stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in bioimaging systems and hinder quantitative comparison between cell models and bioimages, and computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units.
An Event-Based Approach to Design a Teamwork Training Scenario and Assessment Tool in Surgery.
Nguyen, Ngan; Watson, William D; Dominguez, Edward
2016-01-01
Simulation is a technique recommended for teaching and measuring teamwork, but few published methodologies are available on how best to design simulation for teamwork training in surgery and health care in general. The purpose of this article is to describe a general methodology, called the event-based approach to training (EBAT), to guide the design of simulation for teamwork training, and to discuss its application to surgery. The EBAT methodology draws on the science of training by systematically introducing training exercise events that are linked to training requirements (i.e., the competencies being trained and the learning objectives) and to performance assessment. Of the 4 teamwork competencies endorsed by the Agency for Healthcare Research and Quality and the Department of Defense, "communication" was chosen to be the focus of our training efforts. A total of 5 learning objectives were defined based on 5 validated teamwork and communication techniques. Diagnostic laparoscopy was chosen as the clinical context to frame the training scenario, and 29 targeted knowledge, skills, and attitudes (KSAs) were defined based on a review of the published literature on patient safety and input from subject matter experts. Critical events included those that correspond to a specific phase in the normal flow of a surgical procedure as well as clinical events that may occur when performing the operation. Like the targeted KSAs, targeted responses to the critical events were developed from the existing literature and input from content experts. Finally, a 29-item EBAT-derived checklist was created to assess communication performance. Like any instructional tool, simulation is only effective if it is designed and implemented appropriately. 
It is recognized that the effectiveness of simulation depends on whether (1) it is built upon a theoretical framework, (2) it uses preplanned structured exercises or events to allow learners the opportunity to exhibit the targeted KSAs, (3) it assesses performance, and (4) it provides formative and constructive feedback to bridge the gap between the learners' KSAs and the targeted KSAs. The EBAT methodology guides the design of simulation that incorporates these 4 features and, thus, enhances training effectiveness with simulation. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Numerical simulations of groundwater flow at New Jersey Shallow Shelf
NASA Astrophysics Data System (ADS)
Fehr, Annick; Patterson, Fabian; Lofi, Johanna; Reiche, Sönke
2016-04-01
During IODP Expedition 313, three boreholes were drilled along the so-called New Jersey transect. Hydrochemical studies revealed that the groundwater situation is more complex than expected, characterized by several sharp boundaries between fresh and saline groundwater. Two conflicting hypotheses regarding the nature of these freshwater reservoirs are currently debated. One hypothesis is that these reservoirs are connected with onshore aquifers and continuously recharged by seaward-flowing groundwater. The second is that the fresh groundwater was emplaced during the last glacial period. In addition to the petrophysical properties measured during IODP Expedition 313, Nuclear Magnetic Resonance (NMR) measurements were performed on samples from boreholes M0027, M0028 and M0029 in order to deduce porosities and permeabilities. These results are compared with data from alternative laboratory measurements and with petrophysical properties inferred from downhole logging data. We incorporate these results into a 2D numerical model that reflects the shelf architecture as known from drilling and seismic data to perform submarine groundwater flow simulations. To account for uncertainties in the spatial distribution of physical properties, such as porosity and permeability, input parameters were varied systematically during simulation runs. The aim is to test the two conflicting hypotheses of fresh groundwater emplacement offshore New Jersey and to improve the understanding of fluid flow processes at marine passive margins.
Joda, Tim; Brägger, Urs; Gallucci, German
2015-01-01
Digital developments have led to the opportunity to compose simulated patient models based on three-dimensional (3D) skeletal, facial, and dental imaging. The aim of this systematic review is to provide an update on the current knowledge, to report on the technical progress in the field of 3D virtual patient science, and to identify further research needs to accomplish clinical translation. Searches were performed electronically (MEDLINE and OVID) and manually up to March 2014 for studies of 3D fusion imaging to create a virtual dental patient. Inclusion criteria were limited to human studies reporting on the technical protocol for superimposition of at least two different 3D data sets and medical field of interest. Of the 403 titles originally retrieved, 51 abstracts and, subsequently, 21 full texts were selected for review. Of the 21 full texts, 18 studies were included in the systematic review. Most of the investigations were designed as feasibility studies. Three different types of 3D data were identified for simulation: facial skeleton, extraoral soft tissue, and dentition. A total of 112 patients were investigated in the development of 3D virtual models. Superimposition of data on the facial skeleton, soft tissue, and/or dentition is a feasible technique to create a virtual patient under static conditions. Three-dimensional image fusion is of interest and importance in all fields of dental medicine. Future research should focus on the real-time replication of a human head, including dynamic movements, capturing data in a single step.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yidong, E-mail: yidongyang@med.miami.edu; Wang, Ken Kang-Hsin; Wong, John W.
2015-04-15
Purpose: The cone beam computed tomography (CBCT) guided small animal radiation research platform (SARRP) has been developed for focal tumor irradiation, allowing laboratory researchers to test basic biological hypotheses that can modify radiotherapy outcomes in ways that were not feasible previously. CBCT provides excellent bone to soft tissue contrast, but is incapable of differentiating tumors from surrounding soft tissue. Bioluminescence tomography (BLT), in contrast, allows direct visualization of even subpalpable tumors and quantitative evaluation of tumor response. Integration of BLT with CBCT offers complementary image information, with CBCT delineating anatomic structures and BLT differentiating luminescent tumors. This study is to develop a systematic method to calibrate an integrated CBCT and BLT imaging system which can be adopted onboard the SARRP to guide focal tumor irradiation. Methods: The integrated imaging system consists of CBCT, diffuse optical tomography (DOT), and BLT. The anatomy acquired from CBCT and optical properties acquired from DOT serve as a priori information for the subsequent BLT reconstruction. Phantoms were designed and procedures were developed to calibrate the CBCT, DOT/BLT, and the entire integrated system. Geometrical calibration was performed to calibrate the CBCT system. Flat field correction was performed to correct the nonuniform response of the optical imaging system. Absolute emittance calibration was performed to convert the camera readout to the emittance at the phantom or animal surface, which enabled the direct reconstruction of the bioluminescence source strength. Phantom and mouse imaging were performed to validate the calibration. Results: All calibration procedures were successfully performed. Both CBCT of a thin wire and a euthanized mouse revealed no spatial artifact, validating the accuracy of the CBCT calibration. 
The absolute emittance calibration was validated with a 650 nm laser source, resulting in a 3.0% difference between simulated and measured signal. The calibration of the entire system was confirmed through the CBCT and BLT reconstruction of a bioluminescence source placed inside a tissue-simulating optical phantom. Using a spatial region constraint, the source position was reconstructed with less than 1 mm error and the source strength reconstructed with less than 24% error. Conclusions: A practical and systematic method has been developed to calibrate an integrated x-ray and optical tomography imaging system, including the respective CBCT and optical tomography system calibration and the geometrical calibration of the entire system. The method can be modified and adopted to calibrate CBCT and optical tomography systems that are operated independently or hybrid x-ray and optical tomography imaging systems.
Yang, Yidong; Wang, Ken Kang-Hsin; Eslami, Sohrab; Iordachita, Iulian I.; Patterson, Michael S.; Wong, John W.
2015-01-01
Purpose: The cone beam computed tomography (CBCT) guided small animal radiation research platform (SARRP) has been developed for focal tumor irradiation, allowing laboratory researchers to test basic biological hypotheses that can modify radiotherapy outcomes in ways that were not feasible previously. CBCT provides excellent bone to soft tissue contrast, but is incapable of differentiating tumors from surrounding soft tissue. Bioluminescence tomography (BLT), in contrast, allows direct visualization of even subpalpable tumors and quantitative evaluation of tumor response. Integration of BLT with CBCT offers complementary image information, with CBCT delineating anatomic structures and BLT differentiating luminescent tumors. This study is to develop a systematic method to calibrate an integrated CBCT and BLT imaging system which can be adopted onboard the SARRP to guide focal tumor irradiation. Methods: The integrated imaging system consists of CBCT, diffuse optical tomography (DOT), and BLT. The anatomy acquired from CBCT and optical properties acquired from DOT serve as a priori information for the subsequent BLT reconstruction. Phantoms were designed and procedures were developed to calibrate the CBCT, DOT/BLT, and the entire integrated system. Geometrical calibration was performed to calibrate the CBCT system. Flat field correction was performed to correct the nonuniform response of the optical imaging system. Absolute emittance calibration was performed to convert the camera readout to the emittance at the phantom or animal surface, which enabled the direct reconstruction of the bioluminescence source strength. Phantom and mouse imaging were performed to validate the calibration. Results: All calibration procedures were successfully performed. Both CBCT of a thin wire and a euthanized mouse revealed no spatial artifact, validating the accuracy of the CBCT calibration. 
The absolute emittance calibration was validated with a 650 nm laser source, resulting in a 3.0% difference between simulated and measured signal. The calibration of the entire system was confirmed through the CBCT and BLT reconstruction of a bioluminescence source placed inside a tissue-simulating optical phantom. Using a spatial region constraint, the source position was reconstructed with less than 1 mm error and the source strength reconstructed with less than 24% error. Conclusions: A practical and systematic method has been developed to calibrate an integrated x-ray and optical tomography imaging system, including the respective CBCT and optical tomography system calibration and the geometrical calibration of the entire system. The method can be modified and adopted to calibrate CBCT and optical tomography systems that are operated independently or hybrid x-ray and optical tomography imaging systems. PMID:25832060
Aléx, Jonas; Gyllencreutz, Lina
2018-02-05
Trauma care at an accident site is of great importance for patient survival. The purpose of the study was to observe the compliance of ambulance nurses with the Prehospital Trauma Life Support (PHTLS) concept of trauma care in a simulation setting. The material consisted of video recordings of trauma simulations, and an observation protocol was designed to analyse the video material. The results showed weaknesses in the systematic examination and ineffective use of time at the scene of injury. Development of observation protocols for trauma simulation can ensure the quality of ambulance nurses' compliance with established concepts. Our pilot study shows that insufficiencies in systematic care lead to ineffective treatment of trauma patients, which in turn may increase the risk of complications and mortality.
A systematic Petri net approach for multiple-scale modeling and simulation of biochemical processes.
Chen, Ming; Hu, Minjie; Hofestädt, Ralf
2011-06-01
A method to exploit hybrid Petri nets for modeling and simulating biochemical processes in a systematic way is introduced. Both molecular biology and biochemical engineering aspects are addressed. With discrete and continuous elements, hybrid Petri nets can easily handle biochemical factors such as metabolite concentrations and kinetic behaviors. It is possible to translate both molecular biological behavior and biochemical process workflows into hybrid Petri nets in a natural manner. As an example, a penicillin production bioprocess is modeled to illustrate the concepts of the methodology. The dynamics of the production parameters in the bioprocess were simulated and displayed diagrammatically. Current problems and post-genomic perspectives are also discussed.
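As a minimal illustration of the discrete/continuous mix in a hybrid Petri net (this is not the authors' penicillin model), the sketch below treats substrate and product as continuous places and an enzyme-present token as a discrete place that gates a Michaelis-Menten-like continuous transition; all parameter values are invented.

```python
def simulate_hybrid(s0, vmax, km, enzyme_present, dt=0.01, steps=1000):
    """Euler integration of a continuous transition gated by a discrete token.

    s0:              initial substrate level (continuous place)
    enzyme_present:  discrete place; the transition can fire only if True
    """
    s, p = s0, 0.0
    for _ in range(steps):
        # Michaelis-Menten-like firing rate of the continuous transition
        rate = vmax * s / (km + s) if enzyme_present and s > 0 else 0.0
        flux = min(rate * dt, s)   # never remove more substrate than exists
        s -= flux                  # token flow out of the substrate place
        p += flux                  # token flow into the product place
    return s, p

s_left, p_made = simulate_hybrid(s0=10.0, vmax=1.0, km=0.5, enzyme_present=True)
```

Mass is conserved between the two continuous places by construction, and clearing the discrete token freezes the continuous dynamics, which is the essential hybrid behavior.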
Afman, Gregg; Garside, Richard M; Dinan, Neal; Gant, Nicholas; Betts, James A; Williams, Clyde
2014-12-01
Current recommendations for nutritional interventions in basketball are largely extrapolated from laboratory-based studies that are not sport-specific. We therefore adapted and validated a basketball simulation test relative to competitive basketball games using well-trained basketball players (n = 10), then employed this test to evaluate the effects of two common preexercise nutritional interventions on basketball-specific physical and skilled performance. Specifically, in a randomized and counterbalanced order, participants ingested solutions providing either 75 g carbohydrate (sucrose) 45 min before exercise (Study A; n = 10) or 2 × 0.2 g · kg(-1) sodium bicarbonate (NaHCO3) 90 and 20 min before exercise (Study B; n = 7), each relative to appropriate placebos (H2O and 2 × 0.14 g · kg(-1) NaCl, respectively). Heart rate, sweat rate, pedometer count, and perceived exertion did not systematically differ between the 60-min basketball simulation test and competitive basketball, with a strong positive correlation in heart rate response (r = .9, p < .001). Preexercise carbohydrate ingestion resulted in marked hypoglycemia (< 3.5 mmol · l(-1)) throughout the first quarter, coincident with impaired sprinting (+0.08 ± 0.05 second; p = .01) and layup shooting performance (8.5/11 versus 10.3/11 baskets; p < .01). However, ingestion of either carbohydrate or sodium bicarbonate before exercise offset fatigue such that sprinting performance was maintained into the final quarter relative to placebo (Study A: -0.07 ± 0.04 second; p < .01 and Study B: -0.08 ± 0.05 second; p = .02), although neither translated into improved skilled (layup shooting) performance. This basketball simulation test provides a valid reflection of physiological demands in competitive basketball and is sufficiently sensitive to detect meaningful changes in physical and skilled performance. 
While there are benefits of preexercise carbohydrate or sodium bicarbonate ingestion, these should be balanced against potential negative side effects.
NASA Astrophysics Data System (ADS)
Worqlul, Abeyou W.; Ayana, Essayas K.; Maathuis, Ben H. P.; MacAlister, Charlotte; Philpot, William D.; Osorio Leyton, Javier M.; Steenhuis, Tammo S.
2018-01-01
In many developing countries and remote areas of important ecosystems, good-quality precipitation data are neither available nor readily accessible. Satellite observations and processing algorithms are being extensively used to produce satellite rainfall estimates (SREs). Nevertheless, these products are prone to systematic errors and need extensive validation before they can be used for streamflow simulations. In this study, we investigated and corrected the bias of Multi-Sensor Precipitation Estimate-Geostationary (MPEG) data. The corrected MPEG dataset was used as input to the semi-distributed hydrological model Hydrologiska Byråns Vattenbalansavdelning (HBV) to simulate discharge of the Gilgel Abay and Gumara watersheds in the Upper Blue Nile basin, Ethiopia. The results indicated that the MPEG satellite rainfall captured 81% and 78% of the gauged rainfall variability, with a consistent bias of underestimating the gauged rainfall by 60%. A linear bias correction significantly reduced the bias while maintaining the coefficient of correlation. Flow simulated with the bias-corrected MPEG SRE was comparable to flow simulated with gauged rainfall for both watersheds. The study indicated the potential of MPEG SREs in water budget studies after applying a linear bias correction.
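A linear bias correction of the kind described above can be sketched as a single multiplicative factor that matches the satellite rainfall total to the gauge total over a calibration period. The numbers below are toy values, not the study's data:

```python
def linear_bias_factor(gauge, satellite):
    """Multiplicative factor matching satellite totals to gauge totals."""
    return sum(gauge) / sum(satellite)

def correct(satellite, factor):
    """Apply the bias factor to every satellite rainfall value."""
    return [r * factor for r in satellite]

# Toy series in which the satellite underestimates the gauges by 60%.
gauge_mm = [10.0, 20.0, 30.0]
sat_mm = [4.0, 8.0, 12.0]

factor = linear_bias_factor(gauge_mm, sat_mm)
sat_corrected = correct(sat_mm, factor)
```

Because the correction is a single scale factor, the correlation with the gauges is unchanged, which matches the abstract's observation that the correlation coefficient is maintained while the bias is removed.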
Hadronic Interaction Models and the Air Shower Simulation Program CORSIKA
NASA Astrophysics Data System (ADS)
Heck, D.; KASCADE Collaboration
The Monte Carlo program CORSIKA simulates the 4-dimensional evolution of extensive air showers in the atmosphere initiated by photons, hadrons or nuclei. It contains links to the hadronic interaction models DPMJET, HDPM, NEXUS, QGSJET, SIBYLL, and VENUS. These codes are employed to treat the hadronic interactions at energies above 80 GeV. Since their first implementation in 1996 the models DPMJET and SIBYLL have been revised to versions II.5 and 2.1, respectively. Also the treatment of diffractive interactions by QGSJET has been slightly modified. The models DPMJET, QGSJET and SIBYLL are able to simulate collisions even at the highest energies, reaching up to 10^20 eV, which are the focus of present research. The recently added NEXUS 2 program uses a unified approach combining Gribov-Regge theory and perturbative QCD. This model is based on the universality hypothesis of the behavior of high-energy interactions and presently works up to 10^17 eV. A comparison of simulations performed with different models gives an indication of the systematic uncertainties of simulated air shower properties, which arise from the extrapolations to energies, kinematic ranges, or projectile-target combinations not covered by man-made colliders. Results obtained with the most recent program versions are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romander, C. M.; Cagliostro, D. J.
Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-sec hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, an upper internals structure (UIS), and, in the more complex models SM 4 and SM 5, a Ni 200 thermal liner and core support structure. Water simulated the liquid sodium coolant and a low-density explosive simulated the HCDA loads.
NASA Astrophysics Data System (ADS)
Prasanna, Venkatraman
2016-04-01
This paper evaluates the performance of 29 state-of-the-art CMIP5 coupled atmosphere-ocean general circulation models (AOGCMs) in their representation of regional characteristics of the monsoon over South Asia. The AOGCMs, despite their relatively coarse resolution, show some reasonable skill in simulating the mean monsoon and precipitation variability over the South Asian monsoon region. However, considerable biases exist with reference to the observed precipitation, along with inter-model differences. The monsoon rainfall and surface flux biases with respect to observations, from the historical run for the period nominally 1850 to 2005, are discussed in detail. Our results show that the coupled model simulations over South Asia exhibit large uncertainties from one model to another. The analysis clearly brings out the presence of large systematic biases in coupled simulations of boreal summer precipitation, evaporation, and sea surface temperature (SST) in the Indian Ocean, often exceeding 50% of the climatological values. Many of the biases are common to most models. Overall, the coupled models need further improvement to realistically portray the boreal summer monsoon over the South Asian monsoon region.
NASA Technical Reports Server (NTRS)
Collinson, Glyn A.; Dorelli, John Charles; Avanov, Leon A.; Lewis, Gethyn R.; Moore, Thomas E.; Pollock, Craig; Kataria, Dhiren O.; Bedington, Robert; Arridge, Chris S.; Chornay, Dennis J.;
2012-01-01
We report our findings comparing the geometric factor (GF) as determined from simulations and laboratory measurements of the new Dual Electron Spectrometer (DES) being developed at NASA Goddard Space Flight Center as part of the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission. Particle simulations are increasingly playing an essential role in the design and calibration of electrostatic analyzers, facilitating the identification and mitigation of the many sources of systematic error present in laboratory calibration. While equations for laboratory measurement of the GF have been described in the literature, these are not directly applicable to simulation since the two are carried out under substantially different assumptions and conditions, making direct comparison very challenging. Starting from first principles, we derive generalized expressions for the determination of the GF in simulation and in the laboratory, and discuss how we have estimated errors in both cases. Finally, we apply these equations to the new DES instrument and show that the results agree within errors. Thus we show that the techniques presented here produce consistent results between laboratory and simulation, and we present the first description of the performance of the new DES instrument in the literature.
Learning with STEM Simulations in the Classroom: Findings and Trends from a Meta-Analysis
ERIC Educational Resources Information Center
D'Angelo, Cynthia M.; Rutstein, Daisy; Harris, Christopher J.
2016-01-01
This article presents a summary of the findings of a systematic review and meta-analysis of the literature on computer-based interactive simulations for K-12 science, technology, engineering, and mathematics (STEM) learning topics. For achievement outcomes, simulations had a moderate to strong effect on student learning. Overall, simulations have…
Assessing accuracy of point fire intervals across landscapes with simulation modelling
Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall
2007-01-01
We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, B; Reyes, J; Wong, J
Purpose: To overcome the limitations of CT/CBCT in guiding radiation for soft tissue targets, we developed a bioluminescence tomography (BLT) system for preclinical radiation research. We systematically assessed the system's performance in target localization and its ability to resolve two sources in simulations, phantom and in vivo environments. Methods: Multispectral images acquired in a single projection were used for the BLT reconstruction. Simulation studies were conducted for a single spherical source of radius 0.5 to 3 mm at depths of 3 to 12 mm. The same configuration was also applied for the double-source simulations, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, 2 sources with 3 and 5 mm separation at a depth of 5 mm, or 3 sources in the abdomen were also used to illustrate the in vivo localization capability of the BLT system. Results: Simulation and phantom results illustrate that our BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging: 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the 2-source study, both sources could be distinguished at 3 and 5 mm separations with approximately 1 mm accuracy using 3D BLT but not 2D bioluminescence imaging. Conclusion: Our BLT/CBCT system can potentially be applied to localize and resolve targets over a wide range of target sizes, depths and separations. The information provided in this study can be instructive for devising margins for BLT-guided irradiation and suggests that BLT could guide radiation for multiple targets, such as metastases. Drs. John W. Wong and Iulian I. Iordachita receive royalty payments from a licensing agreement between Xstrahl Ltd and Johns Hopkins University.
Nayahangan, L J; Konge, L; Schroeder, T V; Paltved, C; Lindorff-Larsen, K G; Nielsen, B U; Eiberg, J P
2017-04-01
Practical skills training in vascular surgery is facing challenges because of an increased number of endovascular procedures and fewer open procedures, as well as a move away from the traditional principle of "learning by doing." This change has established simulation as a cornerstone in providing trainees with the necessary skills and competences. However, the development of simulation based programs often evolves based on available resources and equipment, reflecting convenience rather than a systematic educational plan. The objective of the present study was to perform a national needs assessment to identify the technical procedures that should be integrated in a simulation based curriculum. A national needs assessment using a Delphi process was initiated by engaging 33 predefined key persons in vascular surgery. Round 1 was a brainstorming phase to identify technical procedures that vascular surgeons should learn. Round 2 was a survey that used a needs assessment formula to explore the frequency of procedures, the number of surgeons performing each procedure, risk and/or discomfort, and feasibility for simulation based training. Round 3 involved elimination and ranking of procedures. The response rate for round 1 was 70%, with 36 procedures identified. Round 2 had a 76% response rate and resulted in a preliminary prioritised list after exploring the need for simulation based training. Round 3 had an 85% response rate; 17 procedures were eliminated, resulting in a final prioritised list of 19 technical procedures. A national needs assessment using a standardised Delphi method identified a list of procedures that are highly suitable and may provide the basis for future simulation based training programs for vascular surgeons in training. Copyright © 2017 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Eluszkiewicz, Janusz; Nehrkorn, Thomas; Wofsy, Steven C.; Matross, Daniel; Gerbig, Christoph; Lin, John C.; Freitas, Saulo; Longo, Marcos; Andrews, Arlyn E.; Peters, Wouter
2007-01-01
This paper evaluates simulations of atmospheric CO2 measured in 2004 at continental surface and airborne receptors, intended to test the capability to use data with high temporal and spatial resolution for analyses of carbon sources and sinks at regional and continental scales. The simulations were performed using the Stochastic Time-Inverted Lagrangian Transport (STILT) model driven by the Weather Research and Forecasting (WRF) model, and linked to surface fluxes from the satellite-driven Vegetation Photosynthesis and Respiration Model (VPRM). The simulations provide detailed representations of hourly CO2 tower data and reproduce the shapes of airborne vertical profiles with high fidelity. WRF meteorology gives superior model performance compared with standard meteorological products, and the impact of including WRF convective mass fluxes in the STILT trajectory calculations is significant in individual cases. Important biases in the simulation are associated with the nighttime CO2 build-up and subsequent morning transition to convective conditions, and with errors in the advected lateral boundary condition. Comparison of STILT simulations driven by the WRF model against those driven by the Brazilian variant of the Regional Atmospheric Modeling System (BRAMS) shows that model-to-model differences are smaller than those between an individual transport model and observations, pointing to systematic errors in the simulated transport. Future developments in the WRF model's data assimilation capabilities, basic research into the fundamental aspects of trajectory calculations, and intercomparison studies involving other transport models are possible avenues for reducing these errors. Overall, STILT/WRF/VPRM offers a powerful tool for continental and regional scale carbon flux estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowdell, S; Grassberger, C; Paganetti, H
2014-06-01
Purpose: To evaluate the sensitivity of intensity-modulated proton therapy (IMPT) lung treatments to systematic and random setup uncertainties combined with motion effects. Methods: Treatment plans with single-field homogeneity restricted to ±20% (IMPT-20%) were compared to plans with no restriction (IMPT-full). 4D Monte Carlo simulations were performed for 10 lung patients using the patient CT geometry with either ±5 mm systematic or random setup uncertainties applied over a 35 × 2.5 Gy(RBE) fractionated treatment course. Intra-fraction, inter-field and inter-fraction motions were investigated. 50 fractionated treatments with systematic or random setup uncertainties applied to each fraction were generated for both IMPT delivery methods and three energy-dependent spot sizes (big spots - BS σ=18-9 mm, intermediate spots - IS σ=11-5 mm, small spots - SS σ=4-2 mm). These results were compared to a Monte Carlo recalculation of the original treatment plan, with results presented as the difference in EUD (ΔEUD), V95 (ΔV95) and target homogeneity (ΔD1-D99) between the 4D simulations and the Monte Carlo calculation on the planning CT. Results: The standard deviations in the ΔEUD were 1.95±0.47 (BS), 1.85±0.66 (IS) and 1.31±0.35 (SS) times higher in IMPT-full compared to IMPT-20% when ±5 mm systematic setup uncertainties were applied. The ΔV95 variations were also 1.53±0.26 (BS), 1.60±0.50 (IS) and 1.38±0.38 (SS) times higher for IMPT-full. For random setup uncertainties, the standard deviations of the ΔEUD from 50 simulated fractionated treatments were 1.94±0.90 (BS), 2.13±1.08 (IS) and 1.45±0.57 (SS) times higher in IMPT-full compared to IMPT-20%. For all spot sizes considered, the ΔD1-D99 coincided within the uncertainty limits for the two IMPT delivery methods, with the mean value always higher for IMPT-full. Statistical analysis showed significant differences between the IMPT-full and IMPT-20% dose distributions for the majority of scenarios studied. Conclusion: Lung IMPT-full treatments are more sensitive to both systematic and random setup uncertainties than IMPT-20%. This work was supported by NIH R01 CA111590.
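The plan comparison above is framed in terms of equivalent uniform dose (EUD). As background, a minimal sketch of the generalized EUD formula, gEUD = (mean(d_i^a))^(1/a), evaluated on a hypothetical voxel dose array; the doses and the parameter value are illustrative, not taken from the study:

```python
import numpy as np

def gEUD(doses, a):
    """Generalized equivalent uniform dose (gEUD).

    doses: voxel doses in Gy; a: tissue-specific parameter
    (a = 1 gives the mean dose; a large negative a emphasizes
    cold spots, the conventional choice for target volumes).
    """
    d = np.asarray(doses, dtype=float)
    return float(np.mean(d ** a) ** (1.0 / a))

# Toy comparison: a uniform 60 Gy target vs. the same target with a cold spot.
uniform = np.full(1000, 60.0)
cold_spot = uniform.copy()
cold_spot[:50] = 40.0  # 5% of voxels underdosed

print(gEUD(uniform, -10))    # 60 Gy
print(gEUD(cold_spot, -10))  # pulled well below 60 Gy by the cold spot
```

With a negative parameter a small cold spot dominates the metric, which is why ΔEUD is a sensitive summary of how setup errors degrade target coverage.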
USDA-ARS?s Scientific Manuscript database
Simulation models are extensively used to predict agricultural productivity and greenhouse gas (GHG) emissions. However, the uncertainties of (reduced) model ensemble simulations have not been assessed systematically for variables affecting food security and climate change mitigation, within multisp...
Towards a Theoretical Framework for Educational Simulations.
ERIC Educational Resources Information Center
Winer, Laura R.; Vazquez-Abad, Jesus
1981-01-01
Discusses the need for a sustained and systematic effort toward establishing a theoretical framework for educational simulations, proposes the adaptation of models borrowed from the natural and applied sciences, and describes three simulations based on such a model adapted using Brunerian learning theory. Sixteen references are listed. (LLS)
Impact of study design on development and evaluation of an activity-type classifier.
van Hees, Vincent T; Golubic, Rajna; Ekelund, Ulf; Brage, Søren
2013-04-01
Methods to classify activity types are often evaluated with an experimental protocol involving prescribed physical activities under confined (laboratory) conditions, which may not reflect real-life conditions. The present study aims to evaluate how study design may affect classifier performance in real life. Twenty-eight healthy participants (21-53 yr) were asked to wear nine triaxial accelerometers while performing 58 activity types selected to simulate activities in real life. For each sensor location, logistic classifiers were trained on subsets of up to 8 activities to distinguish between walking and nonwalking activities and were then evaluated on all 58 activities. Different weighting factors were used to convert the resulting confusion matrices into an estimate of the confusion matrix as it would apply in the real-life setting, by creating four different real-life scenarios as well as one traditional laboratory scenario. The sensitivity of a classifier estimated with a traditional laboratory protocol is within the range of estimates derived from real-life scenarios for any body location. The specificity, however, was systematically overestimated by the traditional laboratory scenario. Walking time was systematically overestimated, except for lower back sensor data (range: 7-757%). In conclusion, classifier performance under confined conditions may not accurately reflect classifier performance in real life. Future studies that aim to evaluate activity classification methods should pay special attention to the representativeness of experimental conditions for real-life conditions.
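The weighting step described above, converting laboratory confusion results into a real-life estimate, can be sketched as follows. The activities, per-activity error rates and time shares are all hypothetical; whether the laboratory protocol over- or underestimates specificity depends entirely on the assumed shares:

```python
# Per-activity confusion rows for a binary "walking vs. non-walking" classifier.
# Hypothetical numbers for illustration: each entry gives
# (is_walking_activity, [P(predicted walking), P(predicted non-walking)]).
activities = {
    "walking":   (True,  [0.92, 0.08]),
    "running":   (False, [0.30, 0.70]),
    "sitting":   (False, [0.02, 0.98]),
    "housework": (False, [0.15, 0.85]),
}

def weighted_performance(time_shares):
    """Reweight per-activity confusion rows by assumed time shares and
    return (sensitivity, specificity) for the walking class."""
    tp = fn = fp = tn = 0.0
    for name, (is_walk, (p_walk, p_nonwalk)) in activities.items():
        w = time_shares[name]
        if is_walk:
            tp += w * p_walk
            fn += w * p_nonwalk
        else:
            fp += w * p_walk
            tn += w * p_nonwalk
    return tp / (tp + fn), tn / (tn + fp)

# Traditional laboratory protocol: equal time per activity.
lab = {a: 0.25 for a in activities}
# One possible real-life scenario: mostly sedentary, little walking.
real = {"walking": 0.05, "running": 0.01, "sitting": 0.74, "housework": 0.20}

print(weighted_performance(lab))
print(weighted_performance(real))
```

The sensitivity here is unchanged by the reweighting (only one walking activity contributes), while the specificity estimate shifts with the assumed activity mix, mirroring the study's point that a lab protocol need not predict real-life performance.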
Going DEEP: guidelines for building simulation-based team assessments.
Grand, James A; Pearce, Marina; Rench, Tara A; Chao, Georgia T; Fernandez, Rosemarie; Kozlowski, Steve W J
2013-05-01
Whether for team training, research or evaluation, making effective use of simulation-based technologies requires robust, reliable and accurate assessment tools. Extant literature on simulation-based assessment practices has primarily focused on scenario and instructional design; however, relatively little direct guidance has been provided regarding the challenging decisions and fundamental principles related to assessment development and implementation. The objective of this manuscript is to introduce a generalisable assessment framework supplemented by specific guidance on how to construct and ensure valid and reliable simulation-based team assessment tools. The recommendations reflect best practices in assessment and are designed to empower healthcare educators, professionals and researchers with the knowledge to design and employ valid and reliable simulation-based team assessments. Information and actionable recommendations associated with creating assessments of team processes (non-technical 'teamwork' activities) and performance (demonstration of technical proficiency) are presented which provide direct guidance on how to Distinguish the underlying competencies one aims to assess, Elaborate the measures used to capture team member behaviours during simulation activities, Establish the content validity of these measures and Proceduralise the measurement tools in a way that is systematically aligned with the goals of the simulation activity while maintaining methodological rigour (DEEP). The DEEP framework targets fundamental principles and critical activities that are important for effective assessment, and should benefit healthcare educators, professionals and researchers seeking to design or enhance any simulation-based assessment effort.
Numerical study on the aerodynamic characteristics of both static and flapping wing with attachments
NASA Astrophysics Data System (ADS)
Xie, Lingwang; Zhang, Xingwei; Luo, Pan; Huang, Panpan
2017-10-01
The purpose of this paper is to investigate the aerodynamic mechanisms of airfoils under different icing situations: different icing types, icing times, and icing positions. Numerical simulation is carried out using the finite volume method for both static and flapping airfoils at a Reynolds number of 135,000. The differences in aerodynamic performance between airfoils with and without attachments are investigated by comparing force coefficients, lift-to-drag ratios and flow field contours. The present simulations reveal that some influences of the attachment are similar for the static and flapping airfoils. Specifically, an attachment derived from the glaze ice type causes worse aerodynamic performance than one derived from the rime ice type. The longer the icing time, the greater the attachment's influence on aerodynamic performance. Attachments on the leading edge have a greater influence on aerodynamic performance than those at other positions. There are only small differences between the static and flapping airfoils; compared with the static airfoil, however, a flapping airfoil with an attachment on the trailing edge shows worse aerodynamic performance. Attachments derived from both rime and glaze ice types deteriorate the aerodynamic performance of asymmetrical airfoils. The present work provides a systematic and comprehensive study of iced blades, which is conducive to the development of wind power generation technology.
Virtual Reality Simulation for the Operating Room
Gallagher, Anthony G.; Ritter, E Matt; Champion, Howard; Higgins, Gerald; Fried, Marvin P.; Moses, Gerald; Smith, C Daniel; Satava, Richard M.
2005-01-01
Summary Background Data: To inform surgeons about the practical issues to be considered for successful integration of virtual reality simulation into a surgical training program. The learning and practice of minimally invasive surgery (MIS) makes unique demands on surgical training programs. A decade ago, Satava proposed virtual reality (VR) surgical simulation as a solution for this problem. Only recently have robust scientific studies supported that vision. Methods: A review of the surgical education, human-factors, and psychology literature to identify important factors which will impinge on the successful integration of VR training into a surgical training program. Results: VR is more likely to be successful if it is systematically integrated into a well-thought-out education and training program which objectively assesses technical skills improvement proximate to the learning experience. Validated performance metrics should be relevant to the surgical task being trained, but in general will require trainees to reach an objectively determined proficiency criterion, based on tightly defined metrics, and to perform at this level consistently. VR training is more likely to be successful if the training schedule takes place on an interval basis rather than massed into a short period of extensive practice. High-fidelity VR simulations will confer the greatest skills transfer to the in vivo surgical situation, but less expensive VR trainers will also lead to considerably improved skills generalization. Conclusions: VR for improved performance of MIS is now a reality. However, VR is only a training tool that must be thoughtfully introduced into a surgical training curriculum for it to successfully improve surgical technical skills. PMID:15650649
Basic life support: evaluation of learning using simulation and immediate feedback devices.
Tobase, Lucia; Peres, Heloisa Helena Ciqueto; Tomazini, Edenir Aparecida Sartorelli; Teodoro, Simone Valentim; Ramos, Meire Bruna; Polastri, Thatiane Facholi
2017-10-30
Objective: to evaluate students' learning in an online course on basic life support with immediate feedback devices, during a simulation of care during cardiorespiratory arrest. Method: a quasi-experimental study, using a before-and-after design. An online course on basic life support was developed and administered to participants as an educational intervention. Theoretical learning was evaluated by means of a pre- and post-test and, to assess practice, simulation with immediate feedback devices was used. Results: there were 62 participants, 87% female, 90% in the first and second year of college, with a mean age of 21.47 years (standard deviation 2.39). With a 95% confidence level, the mean score was 6.4 in the pre-test (standard deviation 1.61) and 9.3 in the post-test (standard deviation 0.82, p<0.001); in practice, 9.1 (standard deviation 0.95), with performance equivalent to basic cardiopulmonary resuscitation according to the feedback device; 43.7 (standard deviation 26.86) mean duration of the compression cycle by second of 20.5 (standard deviation 9.47); number of compressions 167.2 (standard deviation 57.06); depth of compressions 48.1 mm (standard deviation 10.49); ventilation volume 742.7 (standard deviation 301.12); flow fraction percentage 40.3 (standard deviation 10.03). Conclusion: the online course contributed to learning of basic life support. In view of the need for technological innovations in teaching and the systematization of cardiopulmonary resuscitation, simulation and feedback devices are resources that favor learning and performance awareness in performing the maneuvers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romander, C M; Cagliostro, D J
Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-s hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, and an upper internals structure (UIS).
On the interpretation of synchronization in EEG hyperscanning studies: a cautionary note.
Burgess, Adrian P
2013-01-01
EEG hyperscanning is a method for studying two or more individuals simultaneously with the objective of elucidating how co-variations in their neural activity (i.e., hyperconnectivity) are influenced by their behavioral and social interactions. The aim of this study was to compare the performance of different hyperconnectivity measures using (i) simulated data, where the degree of coupling could be systematically manipulated, and (ii) individually recorded human EEG combined into pseudo-pairs of participants where no hyper-connections could exist. With simulated data we found that each of the most widely used measures of hyperconnectivity was biased and detected hyper-connections where none existed. With pseudo-pairs of human data we found spurious hyper-connections that arose because there were genuine similarities between the EEG recorded from different people independently but under the same experimental conditions. Specifically, there were systematic differences between experimental conditions in the rhythmicity of the EEG that were common across participants. As any imbalance between experimental conditions in terms of stimulus presentation or movement may affect the rhythmicity of the EEG, this problem could apply in many hyperscanning contexts. Furthermore, as these spurious hyper-connections reflected real similarities between the EEGs, they were not Type-1 errors that could be overcome by some appropriate statistical control. However, some measures that have not previously been used in hyperconnectivity studies, notably the circular correlation coefficient (CCorr), were less susceptible to detecting spurious hyper-connections of this type. The reason for this advantage in performance is discussed and the use of the CCorr as an alternative measure of hyperconnectivity is advocated.
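The circular correlation coefficient advocated above has a standard closed form (Jammalamadaka and SenGupta): each phase series is centred on its circular mean and the sines of the deviations are correlated. A minimal sketch on synthetic phase data; the coupling model and noise level are illustrative, not taken from the study:

```python
import numpy as np

def circ_mean(angles):
    """Circular mean of phase angles in radians."""
    return float(np.angle(np.mean(np.exp(1j * np.asarray(angles)))))

def ccorr(a, b):
    """Circular correlation coefficient (Jammalamadaka & SenGupta) between
    two phase-angle sequences, e.g. instantaneous EEG phases of two people."""
    a, b = np.asarray(a), np.asarray(b)
    sa = np.sin(a - circ_mean(a))
    sb = np.sin(b - circ_mean(b))
    return float(np.sum(sa * sb) / np.sqrt(np.sum(sa ** 2) * np.sum(sb ** 2)))

rng = np.random.default_rng(0)
phase = rng.uniform(-np.pi, np.pi, 1000)
coupled = phase + 0.3 + 0.1 * rng.standard_normal(1000)   # genuine phase coupling
independent = rng.uniform(-np.pi, np.pi, 1000)            # no coupling

print(ccorr(phase, coupled))      # close to 1
print(ccorr(phase, independent))  # close to 0
```

Because the statistic is invariant to a constant phase offset, a fixed lag between two genuinely coupled signals does not hide the coupling, while independent phase series score near zero.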
Electrical features of eighteen automated external defibrillators: a systematic evaluation.
Kette, Fulvio; Locatelli, Aldo; Bozzola, Marcella; Zoli, Alberto; Li, Yongqin; Salmoiraghi, Marco; Ristagno, Giuseppe; Andreassi, Aida
2013-11-01
Assessment and comparison of the electrical parameters (energy, current, and first- and second-phase waveform duration) among eighteen AEDs. Engineering bench tests for a descriptive systematic evaluation of commercially available AEDs. AEDs were tested using an ECG simulator, an impedance simulator, an oscilloscope and a measuring device detecting the energy delivered, peak and average current, and the duration of the first and second phases of the biphasic waveforms. All tests were performed at the engineering facility of the Lombardia Regional Emergency Service (AREU). Large variations in the energy delivered at the first shock were observed. The current showed a progressive decline as impedance increased. First- and second-phase durations varied substantially among the AEDs using the exponential biphasic waveform, unlike rectilinear-waveform AEDs, in which phase duration remained relatively constant. There is large variability in the electrical features of the AEDs tested. Energy is likely not the best indicator for shock strength selection. Current and shock duration should both be considered when assessing the technical features of AEDs. These findings may prompt further investigations to define the optimal current and duration of the shock waves to increase the success rate in the clinical setting. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Performance map of a cluster detection test using extended power
2013-01-01
Background: Conventional power studies possess limited ability to assess the performance of cluster detection tests. In particular, they cannot evaluate the accuracy of the cluster location, which is essential in such assessments. Furthermore, they usually estimate power for one or a few particular alternative hypotheses and thus cannot assess performance over an entire region. Takahashi and Tango developed the concept of extended power, which indicates both the rate of null hypothesis rejection and the accuracy of the cluster location. We propose a systematic assessment method, using extended power, to produce a map showing the performance of cluster detection tests over an entire region. Methods: To explore the behavior of a cluster detection test on identical cluster types at any possible location, we successively applied four different spatial and epidemiological parameters. These parameters determined four cluster collections, each covering the entire study region. We simulated 1,000 datasets for each cluster and analyzed them with Kulldorff's spatial scan statistic. From the area under the extended power curve, we constructed a map for each parameter set showing the performance of the test across the entire region. Results: Consistent with previous studies, the performance of the spatial scan statistic increased with the baseline incidence of disease, the size of the at-risk population and the strength of the cluster (i.e., the relative risk). Performance was heterogeneous, however, even for very similar clusters (i.e., similar with respect to the aforementioned factors), suggesting the influence of other factors. Conclusions: The area under the extended power curve is a single measure of performance and, although it needs further exploration, it is suitable for conducting a systematic spatial evaluation of performance. The performance map we propose enables epidemiologists to assess cluster detection tests across an entire study region. PMID:24156765
Performance optimization of a miniature Joule-Thomson cryocooler using numerical model
NASA Astrophysics Data System (ADS)
Ardhapurkar, P. M.; Atrey, M. D.
2014-09-01
The performance of a miniature Joule-Thomson cryocooler depends on the effectiveness of its heat exchanger, which is a Hampson-type recuperative heat exchanger. The design of an efficient heat exchanger is crucial for optimum performance of the cryocooler. In the present work, the heat exchanger is numerically simulated under steady-state conditions and the results are validated against experimental data available in the literature. An area correction factor is identified for the calculation of the effective heat transfer area, taking into account the effect of the helical geometry. To obtain optimum performance of the cryocooler, operating parameters such as mass flow rate and pressure, and design parameters such as heat exchanger length, helical coil diameter, fin dimensions and fin density, have to be identified. The present work systematically addresses this aspect of the design of a miniature J-T cryocooler.
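For the steady-state recuperator design discussed above, the textbook eps-NTU relation for a counterflow exchanger shows how an area correction factor feeds into effectiveness. This is a generic sketch under assumed numbers, not the paper's model or values:

```python
import math

def counterflow_effectiveness(ntu, cr):
    """Effectiveness of a counterflow recuperator via the standard
    eps-NTU relation (ntu = UA / C_min, cr = C_min / C_max)."""
    if abs(cr - 1.0) < 1e-12:          # balanced-flow limit
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# Hypothetical numbers: an area correction factor scales the effective UA
# of the helically coiled geometry (values are illustrative only).
ua, c_min, c_max = 12.0, 5.0, 6.0      # W/K
area_correction = 0.9
ntu = area_correction * ua / c_min
print(counterflow_effectiveness(ntu, c_min / c_max))
```

Since effectiveness rises monotonically with NTU, any correction that reduces the effective heat transfer area directly lowers the predicted effectiveness, which is why the correction factor matters for cryocooler sizing.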
Performance of the ATLAS muon trigger in pp collisions at √s = 8 TeV.
Aad, G; Abbott, B; Abdallah, J; Abdel Khalek, S; Abdinov, O; Aben, R; Abi, B; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Abreu, R; Abulaiti, Y; Acharya, B S; Adamczyk, L; Adams, D L; Adelman, J; Adomeit, S; Adye, T; Agatonovic-Jovin, T; Aguilar-Saavedra, J A; Agustoni, M; Ahlen, S P; Ahmadov, F; Aielli, G; Akerstedt, H; Åkesson, T P A; Akimoto, G; Akimov, A V; Alberghi, G L; Albert, J; Albrand, S; Alconada Verzini, M J; Aleksa, M; Aleksandrov, I N; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Alimonti, G; Alio, L; Alison, J; Allbrooke, B M M; Allison, L J; Allport, P P; Almond, J; Aloisio, A; Alonso, A; Alonso, F; Alpigiani, C; Altheimer, A; Alvarez Gonzalez, B; Alviggi, M G; Amako, K; Amaral Coutinho, Y; Amelung, C; Amidei, D; Amor Dos Santos, S P; Amorim, A; Amoroso, S; Amram, N; Amundsen, G; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anderson, K J; Andreazza, A; Andrei, V; Anduaga, X S; Angelidakis, S; Angelozzi, I; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A V; Anjos, N; Annovi, A; Antonaki, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aperio Bella, L; Apolle, R; Arabidze, G; Aracena, I; Arai, Y; Araque, J P; Arce, A T H; Arguin, J-F; Argyropoulos, S; Arik, M; Armbruster, A J; Arnaez, O; Arnal, V; Arnold, H; Arratia, M; Arslan, O; Artamonov, A; Artoni, G; Asai, S; Asbah, N; Ashkenazi, A; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Atkinson, M; Atlay, N B; Auerbach, B; Augsten, K; Aurousseau, M; Avolio, G; Azuelos, G; Azuma, Y; Baak, M A; Baas, A E; Bacci, C; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Backus Mayes, J; Badescu, E; Bagiacchi, P; Bagnaia, P; Bai, Y; Bain, T; Baines, J T; Baker, O K; Balek, P; Balli, F; Banas, E; Banerjee, Sw; Bannoura, A A E; Bansal, V; Bansil, H S; Barak, L; Baranov, S P; Barberio, E L; Barberis, D; Barbero, M; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnett, B M; Barnett, R M; Barnovska, Z; Baroncelli, A; Barone, G; Barr, A J; 
Barreiro, F; Barreiro Guimarães da Costa, J; Bartoldus, R; Barton, A E; Bartos, P; Bartsch, V; Bassalat, A; Basye, A; Bates, R L; Batley, J R; Battaglia, M; Battistin, M; Bauer, F; Bawa, H S; Beattie, M D; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, K; Becker, S; Beckingham, M; Becot, C; Beddall, A J; Beddall, A; Bedikian, S; Bednyakov, V A; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behr, K; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellerive, A; Bellomo, M; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Bensinger, J R; Benslama, K; Bentvelsen, S; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Beringer, J; Bernard, C; Bernat, P; Bernius, C; Bernlochner, F U; Berry, T; Berta, P; Bertella, C; Bertoli, G; Bertolucci, F; Bertsche, C; Bertsche, D; Besana, M I; Besjes, G J; Bessidskaia, O; Bessner, M; Besson, N; Betancourt, C; Bethke, S; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Bieniek, S P; Bierwagen, K; Biesiada, J; Biglietti, M; Bilbao De Mendizabal, J; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Black, C W; Black, J E; Black, K M; Blackburn, D; Blair, R E; Blanchard, J-B; Blazek, T; Bloch, I; Blocker, C; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Bock, C; Boddy, C R; Boehler, M; Boek, T T; Bogaerts, J A; Bogdanchikov, A G; Bogouch, A; Bohm, C; Bohm, J; Boisvert, V; Bold, T; Boldea, V; Boldyrev, A S; Bomben, M; Bona, M; Boonekamp, M; Borisov, A; Borissov, G; Borri, M; Borroni, S; Bortfeldt, J; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boterenbrood, H; Boudreau, J; Bouffard, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boutouil, S; Boveia, A; Boyd, J; Boyko, I R; Bozic, I; Bracinik, J; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Brazzale, S F; Brelier, B; Brendlinger, 
K; Brennan, A J; Brenner, R; Bressler, S; Bristow, K; Bristow, T M; Britton, D; Brochu, F M; Brock, I; Brock, R; Bromberg, C; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brosamer, J; Brost, E; Brown, J; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; Bruni, A; Bruni, G; Bruschi, M; Bryngemark, L; Buanes, T; Buat, Q; Bucci, F; Buchholz, P; Buckingham, R M; Buckley, A G; Buda, S I; Budagov, I A; Buehrer, F; Bugge, L; Bugge, M K; Bulekov, O; Bundock, A C; Burckhart, H; Burdin, S; Burghgrave, B; Burke, S; Burmeister, I; Busato, E; Büscher, D; Büscher, V; Bussey, P; Buszello, C P; Butler, B; Butler, J M; Butt, A I; Buttar, C M; Butterworth, J M; Butti, P; Buttinger, W; Buzatu, A; Byszewski, M; Cabrera Urbán, S; Caforio, D; Cakir, O; Calace, N; Calafiura, P; Calandri, A; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Calvet, D; Calvet, S; Camacho Toro, R; Camarda, S; Cameron, D; Caminada, L M; Caminal Armadans, R; Campana, S; Campanelli, M; Campoverde, A; Canale, V; Canepa, A; Cano Bret, M; Cantero, J; Cantrill, R; Cao, T; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capua, M; Caputo, R; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Casolino, M; Castaneda-Miranda, E; Castelli, A; Castillo Gimenez, V; Castro, N F; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Cattani, G; Caudron, J; Cavaliere, V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerio, B C; Cerny, K; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cerv, M; Cervelli, A; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chang, P; Chapleau, B; Chapman, J D; Charfeddine, D; Charlton, D G; Chau, C C; Chavez Barajas, C A; Cheatham, S; Chegwidden, A; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, K; Chen, L; Chen, S; Chen, X; Chen, Y; Chen, Y; Cheng, H C; Cheng, Y; Cheplakov, A; Cherkaoui El Moursli, R; Chernyatin, V; 
Cheu, E; Chevalier, L; Chiarella, V; Chiefari, G; Childers, J T; Chilingarov, A; Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Chouridou, S; Chow, B K B; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Chwastowski, J J; Chytka, L; Ciapetti, G; Ciftci, A K; Ciftci, R; Cinca, D; Cindro, V; Ciocio, A; Cirkovic, P; Citron, Z H; Citterio, M; Ciubancan, M; Clark, A; Clark, P J; Clarke, R N; Cleland, W; Clemens, J C; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Cogan, J G; Coggeshall, J; Cole, B; Cole, S; Colijn, A P; Collot, J; Colombo, T; Colon, G; Compostella, G; Conde Muiño, P; Coniavitis, E; Conidi, M C; Connell, S H; Connelly, I A; Consonni, S M; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cooper-Smith, N J; Copic, K; Cornelissen, T; Corradi, M; Corriveau, F; Corso-Radu, A; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Cottin, G; Cowan, G; Cox, B E; Cranmer, K; Cree, G; Crépé-Renaudin, S; Crescioli, F; Cribbs, W A; Crispin Ortuzar, M; Cristinziani, M; Croft, V; Crosetti, G; Cuciuc, C-M; Cuhadar Donszelmann, T; Cummings, J; Curatolo, M; Cuthbert, C; Czirr, H; Czodrowski, P; Czyczula, Z; D'Auria, S; D'Onofrio, M; Cunha Sargedas De Sousa, M J Da; Via, C Da; Dabrowski, W; Dafinca, A; Dai, T; Dale, O; Dallaire, F; Dallapiccola, C; Dam, M; Daniells, A C; Dano Hoffmann, M; Dao, V; Darbo, G; Darmora, S; Dassoulas, J A; Dattagupta, A; Davey, W; David, C; Davidek, T; Davies, E; Davies, M; Davignon, O; Davison, A R; Davison, P; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Cecco, S; De Groot, N; de Jong, P; De la Torre, H; De Lorenzi, F; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; Dearnaley, W J; Debbe, R; Debenedetti, C; Dechenaux, B; Dedovich, D V; Deigaard, I; Del Peso, J; Del Prete, T; Deliot, F; Delitzsch, C M; Deliyergiyev, M; Dell'Acqua, 
A; Dell'Asta, L; Dell'Orso, M; Della Pietra, M; Della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; Demers, S; Demichev, M; Demilly, A; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deterre, C; Deviveiros, P O; Dewhurst, A; Dhaliwal, S; Di Ciaccio, A; Di Ciaccio, L; Di Domenico, A; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Di Valentino, D; Dias, F A; Diaz, M A; Diehl, E B; Dietrich, J; Dietzsch, T A; Diglio, S; Dimitrievska, A; Dingfelder, J; Dionisi, C; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; do Vale, M A B; Do Valle Wemans, A; Dobos, D; Doglioni, C; Doherty, T; Dohmae, T; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donati, S; Dondero, P; Donini, J; Dopke, J; Doria, A; Dova, M T; Doyle, A T; Dris, M; Dubbert, J; Dube, S; Dubreuil, E; Duchovni, E; Duckeck, G; Ducu, O A; Duda, D; Dudarev, A; Dudziak, F; Duflot, L; Duguid, L; Dührssen, M; Dunford, M; Duran Yildiz, H; Düren, M; Durglishvili, A; Dwuznik, M; Dyndal, M; Ebke, J; Edson, W; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Endner, O C; Endo, M; Engelmann, R; Erdmann, J; Ereditato, A; Eriksson, D; Ernis, G; Ernst, J; Ernst, M; Ernwein, J; Errede, D; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Esposito, B; Etienvre, A I; Etzion, E; Evans, H; Ezhilov, A; Fabbri, L; Facini, G; Fakhrutdinov, R M; Falciano, S; Falla, R J; Faltova, J; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Favareto, A; Fayard, L; Federic, P; Fedin, O L; Fedorko, W; Fehling-Kaschek, M; Feigl, S; Feligioni, L; Feng, C; Feng, E J; Feng, H; Fenyuk, A B; Fernandez Perez, S; Ferrag, S; Ferrando, J; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; Ferretti, 
C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filipuzzi, M; Filthaut, F; Fincke-Keeler, M; Finelli, K D; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, A; Fischer, J; Fisher, W C; Fitzgerald, E A; Flechl, M; Fleck, I; Fleischmann, P; Fleischmann, S; Fletcher, G T; Fletcher, G; Flick, T; Floderus, A; Flores Castillo, L R; Florez Bustos, A C; Flowerdew, M J; Formica, A; Forti, A; Fortin, D; Fournier, D; Fox, H; Fracchia, S; Francavilla, P; Franchini, M; Franchino, S; Francis, D; Franconi, L; Franklin, M; Franz, S; Fraternali, M; French, S T; Friedrich, C; Friedrich, F; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fuster, J; Gabaldon, C; Gabizon, O; Gabrielli, A; Gabrielli, A; Gadatsch, S; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallo, V; Gallop, B J; Gallus, P; Galster, G; Gan, K K; Gao, J; Gao, Y S; Garay Walls, F M; Garberson, F; García, C; García Navarro, J E; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Gatti, C; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Gellerstedt, K; Gemme, C; Gemmell, A; Genest, M H; Gentile, S; George, M; George, S; Gerbaudo, D; Gershon, A; Ghazlane, H; Ghodbane, N; Giacobbe, B; Giagu, S; Giangiobbe, V; Giannetti, P; Gianotti, F; Gibbard, B; Gibson, S M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gilles, G; Gingrich, D M; Giokaris, N; Giordani, M P; Giordano, R; Giorgi, F M; Giorgi, F M; Giraud, P F; Giugni, D; Giuliani, C; Giulini, M; Gjelsten, B K; Gkaitatzis, S; Gkialas, I; Gladilin, L K; Glasman, C; Glatzer, J; Glaysher, P C F; Glazov, A; Glonti, G L; Goblirsch-Kolb, M; Goddard, J R; Godlewski, J; Goeringer, C; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gomez Fajardo, L S; Gonçalo, R; Goncalves Pinto Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez-Sevilla, S; Goossens, L; Gorbounov, P A; 
Gordon, H A; Gorelov, I; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Gouighri, M; Goujdami, D; Goulette, M P; Goussiou, A G; Goy, C; Gozpinar, S; Grabas, H M X; Graber, L; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Gramling, J; Gramstad, E; Grancagnolo, S; Grassi, V; Gratchev, V; Gray, H M; Graziani, E; Grebenyuk, O G; Greenwood, Z D; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grishkevich, Y V; Grivaz, J-F; Grohs, J P; Grohsjean, A; Gross, E; Grosse-Knetter, J; Grossi, G C; Groth-Jensen, J; Grout, Z J; Guan, L; Guescini, F; Guest, D; Gueta, O; Guicheney, C; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gumpert, C; Gunther, J; Guo, J; Gupta, S; Gutierrez, P; Gutierrez Ortiz, N G; Gutschow, C; Guttman, N; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haber, C; Hadavand, H K; Haddad, N; Haefner, P; Hageböeck, S; Hajduk, Z; Hakobyan, H; Haleem, M; Hall, D; Halladjian, G; Hamacher, K; Hamal, P; Hamano, K; Hamer, M; Hamilton, A; Hamilton, S; Hamity, G N; Hamnett, P G; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Hanke, P; Hanna, R; Hansen, J B; Hansen, J D; Hansen, P H; Hara, K; Hard, A S; Harenberg, T; Hariri, F; Harkusha, S; Harper, D; Harrington, R D; Harris, O M; Harrison, P F; Hartjes, F; Hasegawa, M; Hasegawa, S; Hasegawa, Y; Hasib, A; Hassani, S; Haug, S; Hauschild, M; Hauser, R; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayashi, T; Hayden, D; Hays, C P; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heim, T; Heinemann, B; Heinrich, L; Hejbal, J; Helary, L; Heller, C; Heller, M; Hellman, S; Hellmich, D; Helsens, C; Henderson, J; Henderson, R C W; Heng, Y; Hengler, C; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Hensel, C; Herbert, G H; Hernández Jiménez, Y; Herrberg-Schubert, R; Herten, G; Hertenberger, R; Hervas, L; Hesketh, G G; Hessey, N P; Hickling, R; Higón-Rodriguez, E; Hill, E; Hill, J C; Hiller, K H; 
Hillert, S; Hillier, S J; Hinchliffe, I; Hines, E; Hirose, M; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoenig, F; Hoffman, J; Hoffmann, D; Hofmann, J I; Hohlfeld, M; Holmes, T R; Hong, T M; Hooft van Huysduynen, L; Hopkins, W H; Horii, Y; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hsu, C; Hsu, P J; Hsu, S-C; Hu, D; Hu, X; Huang, Y; Hubacek, Z; Hubaut, F; Huegging, F; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hülsing, T A; Hurwitz, M; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibragimov, I; Iconomidou-Fayard, L; Ideal, E; Iengo, P; Igonkina, O; Iizawa, T; Ikegami, Y; Ikematsu, K; Ikeno, M; Ilchenko, Y; Iliadis, D; Ilic, N; Inamaru, Y; Ince, T; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Iturbe Ponce, J M; Iuppa, R; Ivarsson, J; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jackson, B; Jackson, M; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansen, H; Janssen, J; Janus, M; Jarlskog, G; Javadov, N; Javůrek, T; Jeanty, L; Jejelava, J; Jeng, G-Y; Jennens, D; Jenni, P; Jentzsch, J; Jeske, C; Jézéquel, S; Ji, H; Jia, J; Jiang, Y; Jimenez Belenguer, M; Jin, S; Jinaru, A; Jinnouchi, O; Joergensen, M D; Johansson, K E; Johansson, P; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Jongmanns, J; Jorge, P M; Joshi, K D; Jovicevic, J; Ju, X; Jung, C A; Jungst, R M; Jussel, P; Juste Rozas, A; Kaci, M; Kaczmarska, A; Kado, M; Kagan, H; Kagan, M; Kajomovitz, E; Kalderon, C W; Kama, S; Kamenshchikov, A; Kanaya, N; Kaneda, M; Kaneti, S; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kar, D; Karakostas, K; Karastathis, N; Kareem, M J; Karnevskiy, M; Karpov, S N; Karpova, Z M; Karthik, K; Kartvelishvili, V; Karyukhin, A N; Kashif, L; Kasieczka, G; Kass, R D; 
Kastanas, A; Kataoka, Y; Katre, A; Katzy, J; Kaushik, V; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Kazarinov, M Y; Keeler, R; Kehoe, R; Keil, M; Keller, J S; Kempster, J J; Keoshkerian, H; Kepka, O; Kerševan, B P; Kersten, S; Kessoku, K; Keung, J; Khalil-Zada, F; Khandanyan, H; Khanov, A; Khodinov, A; Khomich, A; Khoo, T J; Khoriauli, G; Khoroshilov, A; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H Y; Kim, H; Kim, S H; Kimura, N; Kind, O; King, B T; King, M; King, R S B; King, S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kiss, F; Kittelmann, T; Kiuchi, K; Kladiva, E; Klein, M; Klein, U; Kleinknecht, K; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klioutchnikova, T; Klok, P F; Kluge, E-E; Kluit, P; Kluth, S; Kneringer, E; Knoops, E B F G; Knue, A; Kobayashi, D; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koevesarki, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohout, Z; Kohriki, T; Koi, T; Kolanoski, H; Koletsou, I; Koll, J; Komar, A A; Komori, Y; Kondo, T; Kondrashova, N; Köneke, K; König, A C; König, S; Kono, T; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A A; Korolkov, I; Korolkova, E V; Korotkov, V A; Kortner, O; Kortner, S; Kostyukhin, V V; Kotov, V M; Kotwal, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kral, V; Kramarenko, V A; Kramberger, G; Krasnopevtsev, D; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kreiss, S; Kretz, M; Kretzschmar, J; Kreutzfeldt, K; Krieger, P; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Kruker, T; Krumnack, N; Krumshteyn, Z V; Kruse, A; Kruse, M C; Kruskal, M; Kubota, T; Kuday, S; Kuehn, S; Kugel, A; Kuhl, A; Kuhl, T; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunkle, J; Kupco, A; Kurashige, H; Kurochkin, Y A; Kurumida, R; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; La Rosa, A; La Rotonda, L; 
Lacasta, C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Laier, H; Lambourne, L; Lammers, S; Lampen, C L; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lankford, A J; Lanni, F; Lantzsch, K; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Lasagni Manghi, F; Lassnig, M; Laurelli, P; Lavrijsen, W; Law, A T; Laycock, P; Le Dortz, O; Le Guirriec, E; Le Menedeu, E; LeCompte, T; Ledroit-Guillon, F; Lee, C A; Lee, H; Lee, J S H; Lee, S C; Lee, L; Lefebvre, G; Lefebvre, M; Legger, F; Leggett, C; Lehan, A; Lehmacher, M; Lehmann Miotto, G; Lei, X; Leight, W A; Leisos, A; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Leney, K J C; Lenz, T; Lenzen, G; Lenzi, B; Leone, R; Leone, S; Leonidopoulos, C; Leontsinis, S; Leroy, C; Lester, C G; Lester, C M; Levchenko, M; Levêque, J; Levin, D; Levinson, L J; Levy, M; Lewis, A; Lewis, G H; Leyko, A M; Leyton, M; Li, B; Li, B; Li, H; Li, H L; Li, L; Li, L; Li, S; Li, Y; Liang, Z; Liao, H; Liberti, B; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Lin, S C; Lin, T H; Linde, F; Lindquist, B E; Linnemann, J T; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, B; Liu, D; Liu, J B; Liu, K; Liu, L; Liu, M; Liu, M; Liu, Y; Livan, M; Livermore, S S A; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E; Loch, P; Lockman, W S; Loddenkoetter, T; Loebinger, F K; Loevschall-Jensen, A E; Loginov, A; Lohse, T; Lohwasser, K; Lokajicek, M; Lombardo, V P; Long, B A; Long, J D; Long, R E; Lopes, L; Lopez Mateos, D; Lopez Paredes, B; Lopez Paz, I; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Lou, X; Lounis, A; Love, J; Love, P A; Lowe, A J; Lu, F; Lu, N; Lubatti, H J; Luci, C; Lucotte, A; Luehring, F; Lukas, W; Luminari, L; Lundberg, O; Lund-Jensen, B; Lungwitz, M; Lynn, D; Lysak, R; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Machado Miguens, J; Macina, D; 
Madaffari, D; Madar, R; Maddocks, H J; Mader, W F; Madsen, A; Maeno, M; Maeno, T; Maevskiy, A; Magradze, E; Mahboubi, K; Mahlstedt, J; Mahmoud, S; Maiani, C; Maidantchik, C; Maier, A A; Maio, A; Majewski, S; Makida, Y; Makovec, N; Mal, P; Malaescu, B; Malecki, Pa; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V M; Malyukov, S; Mamuzic, J; Mandelli, B; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manfredini, A; Manhaes de Andrade Filho, L; Manjarres Ramos, J A; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Mantifel, R; Mapelli, L; March, L; Marchand, J F; Marchiori, G; Marcisovsky, M; Marino, C P; Marjanovic, M; Marques, C N; Marroquim, F; Marsden, S P; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, B; Martin, T A; Martin, V J; Martin Dit Latour, B; Martinez, H; Martinez, M; Martin-Haugh, S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massa, L; Massol, N; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Mättig, P; Mattmann, J; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazzaferro, L; Mc Goldrick, G; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; McMahon, S J; McPherson, R A; Mechnich, J; Medinnis, M; Meehan, S; Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Melachrinos, C; Mellado Garcia, B R; Meloni, F; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Mergelmeyer, S; Meric, N; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Merritt, H; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Middleton, R P; Migas, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Milic, A; Miller, D W; Mills, C; Milov, A; Milstead, D A; Milstein, D; Minaenko, A A; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mirabelli, G; Mitani, T; Mitrevski, J; Mitsou, V A; Mitsui, S; Miucci, A; Miyagawa, P S; 
Mjörnmark, J U; Moa, T; Mochizuki, K; Mohapatra, S; Mohr, W; Molander, S; Moles-Valls, R; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Morange, N; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Moritz, S; Morley, A K; Mornacchi, G; Morris, J D; Morvaj, L; Moser, H G; Mosidze, M; Moss, J; Motohashi, K; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Muanza, S; Mudd, R D; Mueller, F; Mueller, J; Mueller, K; Mueller, T; Mueller, T; Muenstermann, D; Munwes, Y; Murillo Quijada, J A; Murray, W J; Musheghyan, H; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagel, M; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Nanava, G; Narayan, R; Nattermann, T; Naumann, T; Navarro, G; Nayyar, R; Neal, H A; Nechaeva, P Yu; Neep, T J; Nef, P D; Negri, A; Negri, G; Negrini, M; Nektarijevic, S; Nellist, C; Nelson, A; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neves, R M; Nevski, P; Newman, P R; Nguyen, D H; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolics, K; Nikolopoulos, K; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Norberg, S; Nordberg, M; Novgorodova, O; Nowak, S; Nozaki, M; Nozka, L; Ntekas, K; Nunes Hanninger, G; Nunnemann, T; Nurse, E; Nuti, F; O'Brien, B J; O'grady, F; O'Neil, D C; O'Shea, V; Oakham, F G; Oberlack, H; Obermann, T; Ocariz, J; Ochi, A; Ochoa, M I; Oda, S; Odaka, S; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohman, H; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olchevski, A G; Olivares Pino, S A; Oliveira Damazio, D; Oliver Garcia, E; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; 
Ospanov, R; Otero Y Garzon, G; Otono, H; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Oussoren, K P; Ouyang, Q; Ovcharova, A; Owen, M; Ozcan, V E; Ozturk, N; Pachal, K; Pacheco Pages, A; Padilla Aranda, C; Pagáčová, M; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Palestini, S; Palka, M; Pallin, D; Palma, A; Palmer, J D; Pan, Y B; Panagiotopoulou, E; Panduro Vazquez, J G; Pani, P; Panikashvili, N; Panitkin, S; Pantea, D; Paolozzi, L; Papadopoulou, Th D; Papageorgiou, K; Paramonov, A; Paredes Hernandez, D; Parker, M A; Parodi, F; Parsons, J A; Parzefall, U; Pasqualucci, E; Passaggio, S; Passeri, A; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Patricelli, S; Pauly, T; Pearce, J; Pedersen, L E; Pedersen, M; Pedraza Lopez, S; Pedro, R; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penwell, J; Perepelitsa, D V; Perez Codina, E; Pérez García-Estañ, M T; Perez Reale, V; Perini, L; Pernegger, H; Perrella, S; Perrino, R; Peschke, R; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Pettersson, N E; Pezoa, R; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinder, A; Pinfold, J L; Pingel, A; Pinto, B; Pires, S; Pitt, M; Pizio, C; Plazak, L; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Poddar, S; Podlyski, F; Poettgen, R; Poggioli, L; Pohl, D; Pohl, M; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Pollard, C S; Polychronakos, V; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Portell Bueso, X; Pospisil, S; Potamianos, K; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Pralavorio, P; Pranko, A; Prasad, S; Pravahan, R; Prell, S; Price, D; Price, J; Price, L E; Prieur, D; Primavera, M; Proissl, M; Prokofiev, K; Prokoshin, F; 
Protopapadaki, E; Protopopescu, S; Proudfoot, J; Przybycien, M; Przysiezniak, H; Ptacek, E; Puddu, D; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Qian, J; Qin, G; Qin, Y; Quadt, A; Quarrie, D R; Quayle, W B; Queitsch-Maitland, M; Quilty, D; Qureshi, A; Radeka, V; Radescu, V; Radhakrishnan, S K; Radloff, P; Rados, P; Ragusa, F; Rahal, G; Rajagopalan, S; Rammensee, M; Randle-Conde, A S; Rangel-Smith, C; Rao, K; Rauscher, F; Rave, T C; Ravenscroft, T; Raymond, M; Read, A L; Readioff, N P; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Rehnisch, L; Reisin, H; Relich, M; Rembser, C; Ren, H; Ren, Z L; Renaud, A; Rescigno, M; Resconi, S; Rezanova, O L; Reznicek, P; Rezvani, R; Richter, R; Ridel, M; Rieck, P; Rieger, J; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Ritsch, E; Riu, I; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Roda, C; Rodrigues, L; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Romero Adam, E; Rompotis, N; Ronzani, M; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, M; Rose, P; Rosendahl, P L; Rosenthal, O; Rossetti, V; Rossi, E; Rossi, L P; Rosten, R; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rurikova, Z; Rusakovich, N A; Ruschke, A; Rutherfoord, J P; Ruthmann, N; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sacerdoti, S; Saddique, A; Sadeh, I; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Sakamoto, H; Sakurai, Y; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Sales De Bruin, P H; Salihagic, D; Salnikov, A; Salt, J; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sandbach, R L; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, T; Sandoval, C; Sandstroem, R; Sankey, D P C; Sansoni, A; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, 
K; Sapronov, A; Saraiva, J G; Sarrazin, B; Sartisohn, G; Sasaki, O; Sasaki, Y; Sauvage, G; Sauvan, E; Savard, P; Savu, D O; Sawyer, C; Sawyer, L; Saxon, D H; Saxon, J; Sbarra, C; Sbrizzi, A; Scanlon, T; Scannicchio, D A; Scarcella, M; Scarfone, V; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaefer, R; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schoenrock, B D; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schramm, S; Schreyer, M; Schroeder, C; Schuh, N; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwanenberger, C; Schwartzman, A; Schwarz, T A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scott, W G; Scuri, F; Scutti, F; Searcy, J; Sedov, G; Sedykh, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Sellers, G; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Seuster, R; Severini, H; Sfiligoj, T; Sforza, F; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shang, R; Shank, J T; Shapiro, M; Shatalov, P B; Shaw, K; Shehu, C Y; Sherwood, P; Shi, L; Shimizu, S; Shimmin, C O; Shimojima, M; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, D; Shrestha, S; Shulga, E; Shupe, M A; Shushkevich, S; Sicho, P; Sidiropoulou, O; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simoniello, R; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skottowe, H P; Skovpen, K Yu; Skubic, P; 
Slater, M; Slavicek, T; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, K M; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snyder, S; Sobie, R; Socher, F; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solodkov, A A; Soloshenko, A; Solovyanov, O V; Solovyev, V; Sommer, P; Song, H Y; Soni, N; Sood, A; Sopczak, A; Sopko, B; Sopko, V; Sorin, V; Sosebee, M; Soualah, R; Soueid, P; Soukharev, A M; South, D; Spagnolo, S; Spanò, F; Spearman, W R; Spettel, F; Spighi, R; Spigo, G; Spiller, L A; Spousta, M; Spreitzer, T; Spurlock, B; Denis, R D St; Staerz, S; Stahlman, J; Stamen, R; Stamm, S; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Stavina, P; Steinberg, P; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Strubig, A; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, S; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Taccini, C; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tam, J Y C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanasijczuk, A J; Tannenwald, B B; Tannoury, N; Tapprogge, S; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Tavares Delgado, A; Tayalati, Y; Taylor, F E; Taylor, G N; Taylor, W; Teischinger, F A; Teixeira Dias Castanheira, M; 
Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Teoh, J J; Terada, S; Terashi, K; Terron, J; Terzo, S; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, R J; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thong, W M; Thun, R P; Tian, F; Tibbetts, M J; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tolley, E; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Topilin, N D; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Tran, H L; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turk Cakir, I; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Uchida, K; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Unverdorben, C; Urbaniec, D; Urquijo, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Den Wollenberg, W; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van der Ster, D; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vankov, P; Vannucci, F; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veloso, 
F; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigne, R; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Virzi, J; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, A; Vogel, M; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, P; Wagner, W; Wahlberg, H; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Walsh, B; Wang, C; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weigell, P; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wendland, D; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; White, A; White, M J; White, R; White, S; Whiteson, D; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilkens, H G; Will, J Z; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, A; Wilson, J A; Wingerter-Seez, I; Winklmeier, F; Winter, B T; Wittgen, M; Wittig, T; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wright, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wyatt, T R; Wynne, B M; Xella, S; Xiao, M; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yakabe, R; Yamada, M; Yamaguchi, H; 
Yamaguchi, Y; Yamamoto, A; Yamamoto, K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, U K; Yang, Y; Yanush, S; Yao, L; Yao, W-M; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yeletskikh, I; Yen, A L; Yildirim, E; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yurkewicz, A; Yusuff, I; Zabinski, B; Zaidan, R; Zaitsev, A M; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zeitnitz, C; Zeman, M; Zemla, A; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zevi Della Porta, G; Zhang, D; Zhang, F; Zhang, H; Zhang, J; Zhang, L; Zhang, X; Zhang, Z; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, L; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhukov, K; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, R; Zimmermann, S; Zimmermann, S; Zinonos, Z; Ziolkowski, M; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zurzolo, G; Zutshi, V; Zwalinski, L
The performance of the ATLAS muon trigger system is evaluated with proton-proton collision data collected in 2012 at the Large Hadron Collider at a centre-of-mass energy of 8 TeV. It is evaluated primarily using events containing a pair of muons from the decay of Z bosons. The efficiency of the single-muon trigger is measured for muons with transverse momentum [Formula: see text] GeV, with a statistical uncertainty of less than 0.01% and a systematic uncertainty of 0.6%. The transverse-momentum range for efficiency determination is extended by using muons from decays of J/ψ mesons, W bosons, and top quarks. The muon trigger shows highly uniform and stable performance. The performance is compared to the prediction of a detailed simulation.
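The efficiency quoted in the abstract above is a ratio of probe muons passing the trigger to all probe muons. As a minimal sketch with hypothetical counts (not the actual ATLAS numbers), the efficiency and its binomial statistical uncertainty can be computed like this; real analyses typically use Clopper-Pearson or Bayesian intervals, which behave better near 0 and 1:

```python
import math

def trigger_efficiency(n_pass, n_total):
    """Efficiency and normal-approximation (Wald) binomial uncertainty.

    n_pass / n_total are hypothetical probe-muon counts used only
    for illustration.
    """
    eff = n_pass / n_total
    # Standard error of a binomial proportion
    err = math.sqrt(eff * (1.0 - eff) / n_total)
    return eff, err

eff, err = trigger_efficiency(96_500, 100_000)
print(f"efficiency = {eff:.4f} +/- {err:.4f}")
```

With counts of this order, the statistical uncertainty is already well below a percent, which is why per-mille statistical precision is achievable with large Z samples.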
Simulator Motion as a Factor in Flight Simulator Training Effectiveness.
ERIC Educational Resources Information Center
Jacobs, Robert S.
The document reviews the literature on the training effectiveness of flight simulators and describes an experiment in progress at the University of Illinois' Institute of Aviation, an initial attempt to establish systematically the relationship between motion-cue fidelity and resultant training effectiveness. The literature review…
Evaluation of surgical training in the era of simulation
Shaharan, Shazrinizam; Neary, Paul
2014-01-01
AIM: To assess where we currently stand in relation to simulator-based training within modern surgical training curricula. METHODS: A systematic literature search was performed in the PubMed database using the keywords "simulation", "skills assessment" and "surgery". The studies retrieved were examined according to the inclusion and exclusion criteria. The time period reviewed was 2000 to 2013, and the methodology of skills assessment was examined. RESULTS: Five hundred and fifteen articles focused on simulator-based skills assessment. Fifty-two articles were identified that dealt with technical skills assessment in general surgery: 5 assessed open skills, 37 assessed laparoscopic skills, 4 assessed both open and laparoscopic skills and 6 assessed endoscopic skills. Only 12 articles integrated simulators into the surgical training curricula. Observational assessment tools, in the form of the Objective Structured Assessment of Technical Skills (OSATS), dominated the literature. CONCLUSION: Observational tools such as OSATS remain the top assessment instrument in surgical training, especially for open technical skills. Unlike the aviation industry, simulation-based assessment has only now begun to cross the threshold of incorporation into mainstream skills training. Over the next decade we expect the promise of simulator-based training to finally take flight and begin an exciting voyage of discovery for surgical trainees. PMID:25228946
A proposed method to investigate reliability throughout a questionnaire.
Wentzel-Larsen, Tore; Norekvål, Tone M; Ulvik, Bjørg; Nygård, Ottar; Pripp, Are H
2011-10-05
Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" for systematic change in random error, which could indicate changed reliability of answers. A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in the questionnaire. This slope was proposed as an awareness measure, for assessing whether respondents provide only a random answer or one based on substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Even though the assumptions in the simulation study might be limited compared with real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from the clinical dataset indicated that the awareness measure differed between scales.
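The proposed awareness measure can be illustrated with a minimal numeric sketch. This is not the authors' implementation: the subject traits, the growing-noise model, the crude clustering rule and the one-way ICC estimator below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_items = 200, 30
# latent subject traits define two groups of respondents
trait = rng.choice([-1.0, 1.0], size=n_subj)
# random error grows linearly across the questionnaire (waning attention)
noise_sd = np.linspace(0.5, 2.0, n_items)
answers = trait[:, None] + rng.normal(0.0, noise_sd, size=(n_subj, n_items))

# crude stand-in for cluster analysis: split subjects by overall mean answer
cluster = (answers.mean(axis=1) > 0).astype(int)

def icc_oneway(values, groups):
    """Crude one-way ICC: between-cluster variance over total variance."""
    grand = values.mean()
    between = sum(len(values[groups == g]) * (values[groups == g].mean() - grand) ** 2
                  for g in np.unique(groups)) / (len(values) - 1)
    within = sum(((values[groups == g] - values[groups == g].mean()) ** 2).sum()
                 for g in np.unique(groups)) / (len(values) - 1)
    return between / (between + within)

icc_per_item = np.array([icc_oneway(answers[:, j], cluster) for j in range(n_items)])
slope = np.polyfit(np.arange(n_items), icc_per_item, 1)[0]
print(f"slope of ICC across items: {slope:.3f}")
```

With random error growing across items, the fitted slope of ICC against item position comes out negative, mirroring the proposed signal of waning respondent awareness.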
NASA Astrophysics Data System (ADS)
Zhu, J.; Winter, C. L.; Wang, Z.
2015-08-01
Computational experiments are performed to evaluate the effects of locally heterogeneous conductivity fields on regional exchanges of water between stream and aquifer systems in the Middle Heihe River Basin (MHRB) of northwestern China. The effects are found to be nonlinear in the sense that simulated discharges from aquifers to streams are systematically lower than discharges produced by a base model parameterized with relatively coarse effective conductivity. A similar, but weaker, effect is observed for stream leakage. The study is organized around three hypotheses: (H1) small-scale spatial variations of conductivity significantly affect regional exchanges of water between streams and aquifers in river basins, (H2) aggregating small-scale heterogeneities into regional effective parameters systematically biases estimates of stream-aquifer exchanges, and (H3) the biases result from slow paths in groundwater flow that emerge due to small-scale heterogeneities. The hypotheses are evaluated by comparing stream-aquifer fluxes produced by the base model to fluxes simulated using realizations of the MHRB characterized by local (grid-scale) heterogeneity. Levels of local heterogeneity are manipulated as control variables by adjusting coefficients of variation. All models are implemented using the MODFLOW simulation environment, and the PEST tool is used to calibrate effective conductivities defined over 16 zones within the MHRB. The effective parameters are also used as expected values to develop log-normally distributed conductivity (K) fields on local grid scales. Stream-aquifer exchanges are simulated with K fields at both scales and then compared. Results show that small-scale heterogeneities significantly influence exchanges, with simulations based on local-scale heterogeneities always producing discharges lower than those produced by the base model.
Although aquifer heterogeneities are uncorrelated at local scales, they appear to induce coherent slow-paths in groundwater fluxes that in turn reduce aquifer-stream exchanges. Since surface water-groundwater exchanges are critical hydrologic processes in basin-scale water budgets, these results also have implications for water resources management.
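The log-normally distributed conductivity fields described above can be generated from a zone's effective (mean) conductivity and a coefficient of variation used as a control variable. A minimal moment-matching sketch, with the mean and CV values chosen for illustration only:

```python
import numpy as np

def lognormal_field(mean_k, cv, shape, rng):
    """Log-normal K field with prescribed arithmetic mean and coefficient of variation."""
    # moment matching: parameters of the underlying normal distribution
    sigma2 = np.log(1.0 + cv ** 2)
    mu = np.log(mean_k) - sigma2 / 2.0
    return rng.lognormal(mu, np.sqrt(sigma2), size=shape)

rng = np.random.default_rng(0)
K = lognormal_field(mean_k=5.0, cv=1.5, shape=(200, 200), rng=rng)
print(K.mean(), K.std() / K.mean())  # sample mean ~5, sample CV ~1.5
```

Raising `cv` while holding `mean_k` fixed reproduces the study's manipulation: the effective parameter is preserved in expectation while local heterogeneity increases.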
Numerical study of metal oxide hetero-junction solar cells with defects and interface states
NASA Astrophysics Data System (ADS)
Zhu, Le; Shao, Guosheng; Luo, J. K.
2013-05-01
Further to our previous work on ideal metal oxide (MO) hetero-junction solar cells, a systematic simulation has been carried out to investigate the effects of defects and interface states on the cells. Two structures, window/absorber (WA) and window/absorber/voltage-enhancer (WAV), were modelled with defect concentration, defect energy level, interface state (ISt) density and ISt energy level as parameters. The simulation showed that defects in the window layer and the voltage-enhancer layer have very limited effects on the performance of the cells, but those in the absorption layer have profound effects on cell performance. The interface states at the W/A interface have a limited effect on the performance even for a density up to 10^13 cm^-2, while those at the A/V interface cause the solar cell to deteriorate severely even at a density below 1 × 10^11 cm^-2. It also showed that the back surface field (BSF) induced by band-gap offset in the WAV structure loses its function when defects of even modest concentration exist in the absorption layer, and does not improve the open-circuit voltage at all.
A computational model that predicts behavioral sensitivity to intracortical microstimulation
Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.
2016-01-01
Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R^2 = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber's law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics. PMID:27977419
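The modeling approach in this abstract, a simulated population response followed by an ideal observer analysis, can be caricatured in a few lines. Everything numeric here (the baseline rate, the sigmoidal recruitment curve and the simple count-comparison decision rule) is an assumption for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def pop_response(amplitude_ua, n_neurons=10):
    # assumed sigmoidal recruitment of firing rate with pulse amplitude (uA)
    gain = 1.0 / (1.0 + np.exp(-(amplitude_ua - 40.0) / 10.0))
    return rng.poisson(5.0 + 30.0 * gain, size=n_neurons)

def detect_prob(amplitude_ua, n_trials=2000):
    # ideal-observer caricature: the stimulus is "detected" when the summed
    # population count on a stimulus trial exceeds that of a baseline trial
    stim = np.array([pop_response(amplitude_ua).sum() for _ in range(n_trials)])
    base = np.array([pop_response(0.0).sum() for _ in range(n_trials)])
    return float((stim > base).mean())

for amp in (10.0, 40.0, 80.0):
    print(f"{amp:5.1f} uA -> P(detect) = {detect_prob(amp):.2f}")
```

Sweeping amplitude traces out a monotonic psychometric function, the kind of curve the authors' full model fits across stimulation conditions.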
Ackermann, M.; Ajello, M.; Albert, A.; ...
2012-10-12
The Fermi Large Area Telescope (Fermi-LAT, hereafter LAT), the primary instrument on the Fermi Gamma-ray Space Telescope (Fermi) mission, is an imaging, wide field-of-view, high-energy γ-ray telescope, covering the energy range from 20 MeV to more than 300 GeV. During the first years of the mission, the LAT team has gained considerable insight into the in-flight performance of the instrument. Accordingly, we have updated the analysis used to reduce LAT data for public release as well as the instrument response functions (IRFs), the description of the instrument performance provided for data analysis. In this study, we describe the effects that motivated these updates. Furthermore, we discuss how we originally derived IRFs from Monte Carlo simulations and later corrected those IRFs for discrepancies observed between flight and simulated data. We also give details of the validations performed using flight data and quantify the residual uncertainties in the IRFs. In conclusion, we describe techniques the LAT team has developed to propagate those uncertainties into estimates of the systematic errors on common measurements such as fluxes and spectra of astrophysical sources.
Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha
2016-05-01
A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data, which helps in optimization, and displays performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values corroborate well with extensive experimental investigations and were found to be consistent under varying operating conditions such as operating pressure, operating flow rate and draw solute concentration. A low relative error (RE = 0.09) and a high Willmott d-index (d = 0.981) reflected a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.
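The two fit statistics quoted, relative error and the Willmott index of agreement, are standard and easy to compute. A sketch with invented observed/predicted values; RE is taken here as mean absolute relative error, one common convention:

```python
import numpy as np

def willmott_d(obs, pred):
    """Willmott index of agreement: 1 for a perfect fit, decreasing toward 0."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    obar = obs.mean()
    potential = ((np.abs(pred - obar) + np.abs(obs - obar)) ** 2).sum()
    return 1.0 - ((pred - obs) ** 2).sum() / potential

def relative_error(obs, pred):
    """Mean absolute relative error between predictions and observations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean(np.abs(pred - obs) / np.abs(obs)))

obs  = [10.0, 12.0, 15.0, 18.0, 20.0]   # illustrative observed values
pred = [10.5, 11.8, 14.6, 18.4, 19.5]   # illustrative model predictions
print(relative_error(obs, pred), willmott_d(obs, pred))
```

Low RE together with d close to 1, as reported in the abstract, indicates predictions that track observations closely in both scale and pattern.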
The impact of sleep deprivation in military surgical teams: a systematic review.
Parker, Rachael Sv; Parker, P
2017-06-01
Fatigue in military operations leads to safety and operational problems due to a decrease in alertness and performance. The primary method of counteracting the effects of sleep deprivation is to increase nightly sleep time, which in operational situations is not always feasible. History has taught us that surgeons and surgical teams are finite resources that cannot operate on patients indefinitely. A systematic review was conducted using the search terms 'sleep' and 'deprivation', examining the impact of sleep deprivation on cognitive performance in military surgical teams. Studies examining outcomes in intensive care patients and subjects with comorbidities were not addressed in this review. Sleep deprivation in any 'out-of-hours' surgery has a significant impact on overall morbidity and mortality. Sleep deprivation in surgeons and surgical trainees negatively impacts cognitive performance and puts their own and patients' health at risk. The published research lacks consensus when defining 'sleep deprivation' and 'rested' states. It is recognised that it would be unethical to conduct a well-designed randomised controlled trial to determine the effects of fatigue on performance in surgery; however, there is a paucity of evidence linking surrogate markers and simulated results to actual clinical performance. This requires further research. Recommended methods of combating fatigue include: prophylactically 'sleep-banking' prior to known periods of sleep deprivation, napping, use of stimulant or alerting substances such as modafinil, coordinated work schedules to reduce circadian desynchronisation, and regular breaks with enforced rest periods. A forward surgical team will become combat-ineffective after 48 hours of continuous operations. This systematic review recommends implementing on-call periods of no more than 12 hours in duration, with adequate rest periods every 24 hours.
Drug therapies and sleep banking may, in the short term, prevent negative effects of acute sleep deprivation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Bellot, Pau; Olsen, Catharina; Salembier, Philippe; Oliveras-Vergés, Albert; Meyer, Patrick E
2015-09-29
In the last decade, a great number of methods for reconstructing gene regulatory networks from expression data have been proposed. However, very few tools and datasets allow those methods to be evaluated accurately and reproducibly. Hence, we propose here a new tool, able to perform a systematic, yet fully reproducible, evaluation of transcriptional network inference methods. Our open-source and freely available Bioconductor package aggregates a large set of tools to assess the robustness of network inference algorithms against different simulators, topologies, sample sizes and noise intensities. The benchmarking framework, which uses various datasets, highlights the specialization of some methods toward particular network types and data. As a result, it is possible to identify the techniques that have broad overall performance.
A modified homogeneous relaxation model for CO2 two-phase flow in vapour ejector
NASA Astrophysics Data System (ADS)
Haida, M.; Palacz, M.; Smolka, J.; Nowak, A. J.; Hafner, A.; Banasiak, K.
2016-09-01
In this study, the homogeneous relaxation model (HRM) for CO2 flow in a two-phase ejector was modified in order to increase the accuracy of the numerical simulations. The two-phase flow model was implemented in the computational tool ejectorPL for fully automated and systematic computations of various ejector shapes and operating conditions. The modification of the HRM consisted of changing the relaxation time and the constants in the relaxation time equation, based on experimental results under operating conditions typical of supermarket refrigeration systems. The modified HRM was compared with homogeneous equilibrium model (HEM) results on the basis of motive nozzle and suction nozzle mass flow rates.
Current Status of Simulation-based Training Tools in Orthopedic Surgery: A Systematic Review.
Morgan, Michael; Aydin, Abdullatif; Salih, Alan; Robati, Shibby; Ahmed, Kamran
To conduct a systematic review of orthopedic training and assessment simulators with reference to their level of evidence (LoE) and level of recommendation. Medline and EMBASE library databases were searched for English language articles published between 1980 and 2016, describing orthopedic simulators or validation studies of these models. All studies were assessed for LoE, and each model was subsequently awarded a level of recommendation using a modified Oxford Centre for Evidence-Based Medicine classification, adapted for education. A total of 76 articles describing orthopedic simulators met the inclusion criteria, 47 of which described at least 1 validation study. The most commonly identified models (n = 34) and validation studies (n = 26) were for knee arthroscopy. Construct validation was the most frequent validation study attempted by authors. In all, 62% (47 of 76) of the simulator studies described arthroscopy simulators, which also contained validation studies with the highest LoE. Orthopedic simulators are increasingly being subjected to validation studies, although the LoE of such studies generally remain low. There remains a lack of focus on nontechnical skills and on cost analyses of orthopedic simulators. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Investigating the impact of moulage on simulation engagement - A systematic review.
Stokes-Parish, Jessica B; Duvivier, Robbert; Jolly, Brian
2018-05-01
Simulation Based Education (SBE) is used as a primer for clinical education in nursing and other health professions. Participant engagement strategies and good debriefing have been identified as key for effective simulations. The environment in which the simulation is situated also plays a large role in the degree of participant engagement, and various cues are staged within simulations to enhance this engagement process. Moulage techniques are used in current-day simulation to mimic illnesses and wounds, acting as visual and tactile cues for the learner. To utilise moulage effectively in simulation, significant expense is required to train simulation staff and to purchase relevant equipment. The aim was to explore the use of moulage in simulation practice today and its influence on participant engagement. Using a systematic process to extract papers, we reviewed the literature with a critical-realist lens. Ten databases (CINAHL Complete, ERIC, Embase, Medline, PsycINFO, SCOPUS, Web of Science, Proquest, Science Direct and SAGE) were systematically searched using the keyword "moulage" to answer the question "How does the authenticity of moulage impact on participant engagement?". 1318 records were identified before the exclusion criteria were applied; after excluding for English language and publication between 2005 and 2015, 10 articles were targeted for review. These 10 papers were assessed for quality using the Medical Education Research Study Quality Instrument (MERSQI). The majority of papers were situated in dermatology teaching, with only one nursing paper. Study participants were both undergraduate and postgraduate, and most of the studies were undertaken in a university setting. No papers comprehensively addressed whether the authenticity of moulage influences learner engagement. Results were limited, yet clearly outline a widely held assumption that moulage is essential in simulation-based education for improved realism and subsequent learner engagement.
Despite this, there is no clear evidence from the literature that this is the case, suggesting that further research to explore the impact of moulage on participant engagement is warranted. A number of recommendations are made for future research. Copyright © 2018 Elsevier Ltd. All rights reserved.
Lu, Liang-Xing; Wang, Ying-Min; Srinivasan, Bharathi Madurai; Asbahi, Mohamed; Yang, Joel K. W.; Zhang, Yong-Wei
2016-01-01
We perform systematic two-dimensional energetic analysis to study the stability of various nanostructures formed by dewetting solid films deposited on patterned substrates. Our analytical results show that by controlling system parameters such as the substrate surface pattern, film thickness and wetting angle, a variety of equilibrium nanostructures can be obtained. Phase diagrams are presented to show the complex relations between these system parameters and various nanostructure morphologies. We further carry out both phase field simulations and dewetting experiments to validate the analytically derived phase diagrams. Good agreements between the results from our energetic analyses and those from our phase field simulations and experiments verify our analysis. Hence, the phase diagrams presented here provide guidelines for using solid-state dewetting as a tool to achieve various nanostructures. PMID:27580943
Effect of damping on the laser induced ultrafast switching in rare earth-transition metal alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oniciuc, Eugen; Stoleriu, Laurentiu; Cimpoesu, Dorin
2014-06-02
In this paper, we present simulations of thermally induced magnetic switching in ferrimagnetic systems performed with a Landau-Lifshitz-Bloch (LLB) equation for a wide range of damping constant values. We have systematically studied the GdFeCo ferrimagnet with various concentrations of Gd and, for some parameter values, compared the LLB results with atomistic simulations. The agreement is remarkably good, which shows that the dynamics described by the ferrimagnetic LLB is a reasonable approximation of this complex physical phenomenon. As an important element, we show that the LLB is also able to describe the intermediate formation of a ferromagnetic state, which seems to be essential to understand laser-induced ultrafast switching. The study reveals the fundamental role of damping during the switching process.
Investigation and Modeling of Capacitive Human Body Communication.
Zhu, Xiao-Qi; Guo, Yong-Xin; Wu, Wen
2017-04-01
This paper presents a systematic investigation of capacitive human body communication (HBC). The measurement of HBC channels is performed using a novel battery-powered system to eliminate the effects of baluns, cables and instruments. To verify the measured results, a numerical model incorporating the entire HBC system is established. In addition, it is demonstrated that both the impedance and path-gain bandwidths of HBC channels are affected by the electrode configuration. Based on an analysis of the simulated electric field distribution, an equivalent circuit model is proposed and the circuit parameters are extracted using the finite element method. The transmission capability along the human body is also studied. The simulated results using the numerical and circuit models coincide very well with the measurements, which demonstrates that the proposed circuit model can effectively interpret the operation mechanism of capacitive HBC.
ERIC Educational Resources Information Center
McReynolds, William T.; Tori, Christopher
1972-01-01
It was predicted that phobic Ss receiving systematic desensitization would show greater reductions on both fear-related behavior measures and simulated fear measures than Ss receiving relaxation treatment or no treatment at all. The prediction was confirmed. (Author)
Doctor performance assessment in daily practise: does it help doctors or not? A systematic review.
Overeem, Karlijn; Faber, Marjan J; Arah, Onyebuchi A; Elwyn, Glyn; Lombarts, Kiki M J M H; Wollersheim, Hub C; Grol, Richard P T M
2007-11-01
Continuous assessment of the individual performance of doctors is crucial for lifelong learning and quality of care. Policy-makers and health educators should have good insight into the strengths and weaknesses of the methods available. The aim of this study was to systematically evaluate the feasibility of methods, the psychometric properties of instruments that are especially important for summative assessments, and the effectiveness of methods serving formative assessments used in routine practice to assess the performance of individual doctors. We searched the MEDLINE (1966-January 2006), PsycINFO (1972-January 2006), CINAHL (1982-January 2006), EMBASE (1980-January 2006) and Cochrane (1966-2006) databases for English language articles, and supplemented this with a hand-search of reference lists of relevant studies and bibliographies of review articles. Studies that aimed to assess the performance of individual doctors in routine practice were included. Two reviewers independently abstracted data regarding study design, setting and findings related to reliability, validity, feasibility and effectiveness using a standard data abstraction form. A total of 64 articles met our inclusion criteria. We observed 6 different methods of evaluating performance: simulated patients; video observation; direct observation; peer assessment; audit of medical records; and portfolio or appraisal. Peer assessment is the most feasible method in terms of costs and time. Little psychometric assessment of the instruments has been undertaken so far, and the effectiveness of formative assessments is poorly studied. All systems but 2 rely on a single method to assess performance. There is substantial potential to assess the performance of doctors in routine practice, but the long-term impact and effectiveness of formative performance assessments on education and quality of care remain largely unknown. Future research designs need to pay special attention to demonstrating effectiveness in terms of performance improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitarka, A.
We analyzed the performance of the Irikura and Miyake (2011) (IM2011) asperity-based kinematic rupture model generator, as implemented in the hybrid broadband ground-motion simulation methodology of Graves and Pitarka (2010), for simulating ground motion from crustal earthquakes of intermediate size. The primary objective of our study is to investigate the transportability of IM2011 into the framework used by the Southern California Earthquake Center broadband simulation platform. In our analysis, we performed broadband (0-20 Hz) ground motion simulations for a suite of M6.7 crustal scenario earthquakes in a hard-rock seismic velocity structure using rupture models produced with both IM2011 and the rupture generation method of Graves and Pitarka (2016) (GP2016). The level of simulated ground motions for the two approaches compares favorably with median estimates obtained from the 2014 Next Generation Attenuation-West2 Project (NGA-West2) ground-motion prediction equations (GMPEs) over the frequency band 0.1-10 Hz and for distances out to 22 km from the fault. We also found that, compared to GP2016, IM2011 generates ground motion with larger variability, particularly at near-fault distances (<12 km) and at long periods (>1 s). For this specific scenario, the largest systematic difference in ground motion level for the two approaches occurs in the period band 1-3 s, where the IM2011 motions are about 20-30% lower than those for GP2016. We found that increasing the rupture speed by 20% on the asperities in IM2011 produced ground motions in the 1-3 s bandwidth that are in much closer agreement with the GMPE medians and similar to those obtained with GP2016. The potential implications of this modification for other rupture mechanisms and magnitudes are not yet fully understood, and this topic is the subject of ongoing study.
Lights All Askew: Systematics in Galaxy Images from Megaparsecs to Microns
NASA Astrophysics Data System (ADS)
Bradshaw, Andrew Kenneth
The stars and galaxies are not where they seem. In the process of imaging and measurement, the light from distant objects is distorted, blurred, and skewed by several physical effects on scales from megaparsecs to microns. Charge-coupled devices (CCDs) provide sensitive detection of this light, but introduce their own problems in the form of systematic biases. Images of these stars and galaxies are formed in CCDs when incoming light generates photoelectrons which are then collected in a pixel's potential well and measured as signal. However, these signal electrons can be diverted from purely parallel paths toward the pixel wells by transverse fields sourced by structural elements of the CCD, accidental imperfections in fabrication, or dynamic electric fields induced by other collected charges. These charge transport anomalies lead to measurable systematic errors in the images which bias cosmological inferences based on them. The physics of imaging therefore deserves thorough investigation, which is performed in the laboratory using a unique optical beam simulator and in computer simulations of charge transport. On top of detector systematics, there are often biases in the mathematical analysis of pixelized images; in particular, the location, shape, and orientation of stars and galaxies. Using elliptical Gaussians as a toy model for galaxies, it is demonstrated how small biases in the computed image moments lead to observable orientation patterns in modern survey data. Also presented are examples of the reduction of data and fitting of optical aberrations of images in the lab and on the sky which are modeled by physically or mathematically-motivated methods. Finally, end-to-end analysis of the weak gravitational lensing signal is presented using deep sky data as well as in N-body simulations. It is demonstrated how measured weak lens shear can be transformed by signal matched filters which aid in the detection of mass overdensities and separate signal from noise. 
A commonly-used decomposition of shear into two components, E- and B-modes, is thoroughly tested and both modes are shown to be useful in the detection of large scale structure. We find several astrophysical sources of B-mode and explain their apparent origin. The methods presented therefore offer an optimal way to filter weak gravitational shear into maps of large scale structure through the process of cosmic mass cartography.
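The E/B split described above can be sketched in a few lines of numpy. This is the generic Fourier-space decomposition (kE = cos 2φ·γ1 + sin 2φ·γ2, kB = −sin 2φ·γ1 + cos 2φ·γ2, with φ the wavevector angle), not the author's survey pipeline; the grid size and random field are illustrative only.

```python
import numpy as np

def eb_decompose(g1, g2):
    """Split a shear field (g1, g2) into E- and B-mode convergence maps
    in Fourier space: kE = cos(2p) g1 + sin(2p) g2,
                      kB = -sin(2p) g1 + cos(2p) g2,
    where p is the angle of the wavevector."""
    n = g1.shape[0]
    lx = np.fft.fftfreq(n)[None, :]
    ly = np.fft.fftfreq(n)[:, None]
    p = np.arctan2(ly, lx)
    c2, s2 = np.cos(2 * p), np.sin(2 * p)
    G1, G2 = np.fft.fft2(g1), np.fft.fft2(g2)
    kE = np.fft.ifft2(c2 * G1 + s2 * G2).real
    kB = np.fft.ifft2(-s2 * G1 + c2 * G2).real
    return kE, kB

# Build a pure E-mode shear field from a band-limited convergence map
# (Nyquist modes are dropped because the angle grid aliases there).
rng = np.random.default_rng(0)
n = 64
K = np.fft.fft2(rng.normal(size=(n, n)))
K[n // 2, :] = 0.0
K[:, n // 2] = 0.0
kappa = np.fft.ifft2(K).real
lx = np.fft.fftfreq(n)[None, :]
ly = np.fft.fftfreq(n)[:, None]
p = np.arctan2(ly, lx)
g1 = np.fft.ifft2(np.cos(2 * p) * K).real
g2 = np.fft.ifft2(np.sin(2 * p) * K).real

kE, kB = eb_decompose(g1, g2)   # kE recovers kappa, kB vanishes
```

A pure E-mode field round-trips exactly, so any recovered B-mode flags either noise or a genuine systematic, which is what makes the decomposition a useful diagnostic.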
Systematic testing of flood adaptation options in urban areas through simulations
NASA Astrophysics Data System (ADS)
Löwe, Roland; Urich, Christian; Sto. Domingo, Nina; Mark, Ole; Deletic, Ana; Arnbjerg-Nielsen, Karsten
2016-04-01
While models can quantify flood risk in great detail, the results are subject to a number of deep uncertainties. Climate-dependent drivers such as sea level and rainfall intensities, population growth and economic development all have a strong influence on future flood risk, but future developments can only be estimated coarsely. In such a situation, robust decision-making frameworks call for the systematic evaluation of mitigation measures against ensembles of potential futures. We have coupled the urban development software DAnCE4Water and the 1D-2D hydraulic simulation package MIKE FLOOD to create a framework that allows for such systematic evaluations, considering mitigation measures under a variety of climate futures and urban development scenarios. A wide spectrum of mitigation measures can be considered in this setup, ranging from structural measures, such as modifications of the sewer network, local retention of rainwater and the modification of surface flow paths, to policy measures such as restrictions on urban development in flood-prone areas or master plans that encourage compact development. The setup was tested in a 300 ha residential catchment in Melbourne, Australia. The results clearly demonstrate the importance of considering a range of potential futures in the planning process. For example, local rainwater retention measures strongly reduce flood risk in a scenario with a moderate increase of rain intensities and moderate urban growth, but their performance varies strongly, yielding very little improvement in situations with pronounced climate change. The systematic testing of adaptation measures further allows for the identification of so-called adaptation tipping points, i.e. levels of the drivers of flood risk at which the desired level of flood risk is exceeded despite the implementation of (a combination of) mitigation measures.
Assuming a range of development rates for the drivers of flood risk, such tipping points can be translated into anticipated time spans over which a measure will be effective. While the new simulation setup is limited to situations where the planner is able to define realistic ranges for the development of drivers of flood risk, it certainly contributes to an improved consideration of deep uncertainties in the planning process. Future work will particularly focus on the application of the framework in a variety of urban development contexts.
Observer roles that optimise learning in healthcare simulation education: a systematic review.
O'Regan, Stephanie; Molloy, Elizabeth; Watterson, Leonie; Nestel, Debra
2016-01-01
Simulation is widely used in health professional education. The convention that learners are actively involved may limit access to this educational method. The aim of this paper is to review the evidence for learning methods that employ directed observation as an alternative to hands-on participation in scenario-based simulation training. We sought studies that included either direct comparison of the learning outcomes of observers with those of active participants or identified factors important for the engagement of observers in simulation. We systematically searched health and education databases and reviewed journals and bibliographies for studies investigating or referring to observer roles in simulation using mannequins, simulated patients or role play simulations. A quality framework was used to rate the studies. Nine studies met the inclusion criteria. Five studies suggest learning outcomes in observer roles are as good as or better than those in hands-on roles in simulation. Four studies document learner satisfaction in observer roles. Five studies used a tool to guide observers. Eight studies involved observers in the debrief. Learning and satisfaction in observer roles are closely associated with observer tools, learner engagement, role clarity and contribution to the debrief. Learners who valued observer roles described them as affording an overarching view, examination of details from a distance, and meaningful feedback during the debrief.
Learners who did not value observer roles described them as passive, or boring when compared to hands-on engagement in the simulation encounter. Learning outcomes and role satisfaction for observers are improved through learner engagement and the use of observer tools. The value that students attach to observer roles appears contingent on role clarity, use of observer tools, and inclusion of observers' perspectives in the debrief.
Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy
Testa, M.; Schümann, J.; Lu, H.-M.; Shin, J.; Faddegon, B.; Perl, J.; Paganetti, H.
2013-01-01
Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all the institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program in our institution as well as experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness and symmetry and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse-square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS' capability to simulate time-dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field.
Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the nuclear interaction models used in the simulations. Results: SOBP range and modulation width were reproduced, on average, with an accuracy of +1, −2 and ±3 mm, respectively. OF simulations reproduced measured data within ±3%. Simulated 2D dose profiles show field flatness and average field radius within ±3% of measured profiles. The field symmetry resulted, on average, in ±3% agreement with commissioned profiles. TOPAS accuracy in reproducing measured dose profiles downstream of the half-beam shifter is better than 2%. Dose rate function simulations reproduced the measurements within ∼2%, showing that the four-dimensional modeling of the passive modulation system was implemented correctly and that millimeter accuracy can be achieved in reproducing measured data. For MLFC simulations, 2% agreement was found between TOPAS and both sets of experimental measurements. The overall results show that TOPAS simulations are within the clinically accepted tolerances for all QA measurements performed at our institution. Conclusions: Our Monte Carlo simulations accurately reproduced the experimental data acquired through all the measurements performed in this study. Thus, TOPAS can reliably be applied to quality assurance for proton therapy and also as an input for commissioning of commercial treatment planning systems. This work also provides the basis for routine clinical dose calculations in patients for all passive scattering proton therapy centers using TOPAS. PMID:24320505
Pitarka, Arben; Graves, Robert; Irikura, Kojiro; Miyake, Hiroe; Rodgers, Arthur
2017-01-01
We analyzed the performance of the Irikura and Miyake (Pure and Applied Geophysics 168(2011):85–104, 2011) (IM2011) asperity-based kinematic rupture model generator, as implemented in the hybrid broadband ground motion simulation methodology of Graves and Pitarka (Bulletin of the Seismological Society of America 100(5A):2095–2123, 2010), for simulating ground motion from crustal earthquakes of intermediate size. The primary objective of our study is to investigate the transportability of IM2011 into the framework used by the Southern California Earthquake Center broadband simulation platform. In our analysis, we performed broadband (0–20 Hz) ground motion simulations for a suite of M6.7 crustal scenario earthquakes in a hard rock seismic velocity structure using rupture models produced with both IM2011 and the rupture generation method of Graves and Pitarka (Bulletin of the Seismological Society of America, 2016) (GP2016). The level of simulated ground motions for the two approaches compare favorably with median estimates obtained from the 2014 Next Generation Attenuation-West2 Project (NGA-West2) ground motion prediction equations (GMPEs) over the frequency band 0.1–10 Hz and for distances out to 22 km from the fault. We also found that, compared to GP2016, IM2011 generates ground motion with larger variability, particularly at near-fault distances (<12 km) and at long periods (>1 s). For this specific scenario, the largest systematic difference in ground motion level for the two approaches occurs in the period band 1–3 s where the IM2011 motions are about 20–30% lower than those for GP2016. We found that increasing the rupture speed by 20% on the asperities in IM2011 produced ground motions in the 1–3 s bandwidth that are in much closer agreement with the GMPE medians and similar to those obtained with GP2016. 
The potential implications of this modification for other rupture mechanisms and magnitudes are not yet fully understood, and this topic is the subject of ongoing study. We concluded that the IM2011 rupture generator performs well in ground motion simulations using the Graves and Pitarka hybrid method, and we therefore recommend that it be considered for inclusion in the framework used by the Southern California Earthquake Center broadband simulation platform.
Phase shifts in I = 2 ππ-scattering from two lattice approaches
NASA Astrophysics Data System (ADS)
Kurth, T.; Ishii, N.; Doi, T.; Aoki, S.; Hatsuda, T.
2013-12-01
We present a lattice QCD study of the phase shift of I = 2 ππ scattering on the basis of two different approaches: the standard finite-volume approach by Lüscher and the recently introduced HAL QCD potential method. Quenched QCD simulations are performed on lattices with extents Ns = 16, 24, 32, 48 and Nt = 128, a lattice spacing a ≈ 0.115 fm and a pion mass of mπ ≈ 940 MeV. The phase shift and the scattering length are calculated with both methods. In the potential method, the error is dominated by the systematic uncertainty associated with the violation of rotational symmetry due to the finite lattice spacing. In Lüscher's approach, such a systematic uncertainty is difficult to evaluate and is therefore not included in this work. A systematic uncertainty attributed to the quenched approximation is not evaluated in either method. In the potential method, the phase shift can be calculated for arbitrary energies below the inelastic threshold. The energy dependence of the phase shift is also obtained from Lüscher's method using different volumes and/or its non-rest-frame extension. The results are found to agree well with the potential method.
Alex, J; Kolisch, G; Krause, K
2002-01-01
The objective of the project presented here is to use the results of a CFD simulation to automatically, systematically and reliably generate an appropriate model structure for simulation of the biological processes using CSTR activated sludge compartments. Models and dynamic simulation have become important tools for research, but also increasingly for the design and optimisation of wastewater treatment plants. Besides the biological models, several applications of computational fluid dynamics (CFD) to wastewater treatment plants have been reported. One aim of the presented method to derive model structures from CFD results is to exclude the influence of empirical structure selection on the results of dynamic simulation studies of WWTPs. The second application of the approach developed is the analysis of badly performing treatment plants where the suspicion arises that poor flow behaviour, such as short-circuit flows, is part of the problem. The method suggested requires as a first step the calculation of the fluid dynamics of the biological treatment step at different loading situations by means of three-dimensional CFD simulation. This information is then used to automatically generate a suitable model structure for conventional dynamic simulation of the treatment plant, using a number of CSTR modules with a pattern of exchange flows between the tanks. The method is explained in detail and the application to the WWTP Wuppertal Buchenhofen is presented.
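The resulting model structure, a chain of CSTRs with exchange flows between neighbouring tanks, can be sketched as a toy tracer simulation. All tank counts, volumes and flow rates below are hypothetical placeholders, not values derived from the Wuppertal Buchenhofen CFD study.

```python
import numpy as np

def simulate_cstr_chain(n_tanks, Q, V, c_in, t_end, dt=0.01, q_ex=0.0):
    """Tanks-in-series surrogate of a reactor: main flow Q through n_tanks
    of volume V each, plus an optional back-mixing exchange flow q_ex between
    neighbours (the kind of structure one would derive from CFD results).
    Explicit Euler integration of dC_i/dt."""
    c = np.zeros(n_tanks)
    for _ in range(int(t_end / dt)):
        c_up = np.concatenate(([c_in], c[:-1]))     # upstream concentrations
        dc = Q / V * (c_up - c)                     # advective transport
        if q_ex > 0:
            # symmetric exchange between neighbouring tanks (back-mixing)
            ex = np.zeros(n_tanks)
            ex[:-1] += q_ex / V * (c[1:] - c[:-1])
            ex[1:] += q_ex / V * (c[:-1] - c[1:])
            dc += ex
        c = c + dt * dc
    return c

# Step response: feed tracer at c_in = 1 into an initially clean chain.
c = simulate_cstr_chain(n_tanks=5, Q=1.0, V=1.0, c_in=1.0, t_end=50.0, q_ex=0.3)
```

After many residence times every tank approaches the inlet concentration; the transient shape is what the exchange-flow pattern, fitted from CFD, is meant to reproduce.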
Reduced order model based on principal component analysis for process simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, Y.; Malacina, A.; Biegler, L.
2009-01-01
It is well known that distributed-parameter computational fluid dynamics (CFD) models provide more accurate results than the conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and that the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM, compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
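The PCA-based ROM workflow (sample the input domain, run the expensive model, compress the snapshots with PCA, fit a cheap input-to-coefficient map) can be sketched with a stand-in for the CFD solver. The `expensive_model` function and sampling ranges below are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for expensive CFD runs: each input x maps to a 200-point field.
grid = np.linspace(0.0, 1.0, 200)
def expensive_model(x):
    return x[0] * np.sin(np.pi * grid) + x[1] * grid**2

# 1. Sample the input domain and collect snapshots.
X = rng.uniform(0.0, 1.0, size=(30, 2))
S = np.array([expensive_model(x) for x in X])        # 30 x 200 snapshot matrix

# 2. PCA of the snapshot matrix via SVD.
mean = S.mean(axis=0)
U, s, Vt = np.linalg.svd(S - mean, full_matrices=False)
k = 2                                                # retained components
basis = Vt[:k]                                       # principal directions
scores = (S - mean) @ basis.T                        # reduced coordinates

# 3. Fit a cheap map from inputs to scores (here: linear least squares).
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)

def rom_predict(x):
    """Evaluate the ROM in microseconds instead of re-running the solver."""
    z = np.append(x, 1.0) @ coef
    return mean + z @ basis

err = np.abs(rom_predict([0.4, 0.7]) - expensive_model([0.4, 0.7])).max()
```

Because the toy model is linear in its inputs, two components reproduce it essentially exactly; for a real CFD response the retained-component count and the regression model become the accuracy/cost trade-off.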
Systematic and Performance Tests of the Hard X-ray Polarimeter X-Calibur
NASA Astrophysics Data System (ADS)
Endsley, Ryan; Beilicke, Matthias; Kislat, Fabian; Krawczynski, Henric; X-Calibur/InFOCuS
2015-01-01
X-ray polarimetry has great potential to reveal new astrophysical information about the emission processes of high-energy sources such as black hole environments, X-ray binary systems, and active galactic nuclei. Here we present the results and conclusions of systematic and performance measurements of the hard X-ray polarimeter X-Calibur. Designed to be flown on a balloon-borne X-ray telescope, X-Calibur will achieve unprecedented sensitivity; it makes use of the fact that polarized X-rays preferentially Compton-scatter perpendicular to their E-field vector. Extensive laboratory measurements taken at Washington University and the Cornell High-Energy Synchrotron Source (CHESS) indicate that X-Calibur combines a detection efficiency on the order of unity with a high modulation factor of µ ≈ 0.5 averaged over the whole detector assembly, and with values up to µ ≈ 0.7 for select subsections of the polarimeter. Additionally, we are able to suppress background flux by more than two orders of magnitude by utilizing an active shield and scintillator coincidence. Comparisons of laboratory data with Monte Carlo simulations of both polarized and unpolarized hard X-ray beams illustrate that we have an exceptional understanding of the detector response.
ERIC Educational Resources Information Center
Hirumi, Atsusi; Kleinsmith, Andrea; Johnsen, Kyle; Kubovec, Stacey; Eakins, Michael; Bogert, Kenneth; Rivera-Gutierrez, Diego J.; Reyes, Ramsamooj Javier; Lok, Benjamin; Cendan, Juan
2016-01-01
Systematic reviews and meta-analyses of randomized controlled studies conclude that virtual patient simulations are consistently associated with higher learning outcomes compared to other educational methods. However, we cannot assume that students will learn simply by being exposed to the simulations. The instructional features that are…
Flatness-Based Tracking Control and Nonlinear Observer for a Micro Aerial Quadcopter
NASA Astrophysics Data System (ADS)
Rivera, G.; Sawodny, O.
2010-09-01
This paper deals with the design of a nonlinear observer and a differential-flatness-based path tracking controller for a mini aerial quadcopter. Taking into account that only the inertial coordinates and the yaw angle are available for measurement, it is shown that the system is differentially flat, allowing a systematic design of a nonlinear tracking control in open and closed loop. A nonlinear observer is designed to estimate the roll and pitch angles as well as all the linear and angular velocities. Finally, the performance of the feedback controller and observer is illustrated in a computer simulation.
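As a minimal illustration of the flatness property, the feedforward thrust and attitude of a quadcopter can be computed algebraically from the flat outputs (position and yaw) and their derivatives. The mass value and the ZYX Euler-angle convention below are assumptions for the sketch, not taken from the paper.

```python
import numpy as np

g = 9.81  # m/s^2

def flat_feedforward(acc, yaw, mass=0.5):
    """Feedforward thrust and attitude from the flat outputs of a quadrotor.
    acc: desired inertial acceleration [ax, ay, az]; yaw: desired yaw angle.
    The thrust vector must equal mass * (acc + g * e3), which fixes the
    thrust magnitude and the body z-axis; roll and pitch follow from the
    body z-axis and the yaw angle (ZYX Euler convention, assumed here)."""
    t = np.array([acc[0], acc[1], acc[2] + g])     # thrust direction * (T/m)
    thrust = mass * np.linalg.norm(t)
    b3 = t / np.linalg.norm(t)                     # desired body z-axis
    roll = np.arcsin(np.sin(yaw) * b3[0] - np.cos(yaw) * b3[1])
    pitch = np.arctan2(np.cos(yaw) * b3[0] + np.sin(yaw) * b3[1], b3[2])
    return thrust, roll, pitch

# Hover check: zero desired acceleration gives thrust = m*g, level attitude.
thrust, roll, pitch = flat_feedforward(acc=[0.0, 0.0, 0.0], yaw=0.0)
```

Because every state and input can be written this way in terms of the flat outputs, a trajectory in (x, y, z, yaw) directly yields the nominal controls, which is the core of the systematic design the abstract refers to.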
Effect of the Magnus force on skyrmion relaxation dynamics
NASA Astrophysics Data System (ADS)
Brown, Barton L.; Täuber, Uwe C.; Pleimling, Michel
2018-01-01
We perform systematic Langevin molecular dynamics simulations of interacting skyrmions in thin films. The interplay between the Magnus force, the repulsive skyrmion-skyrmion interaction, and the thermal noise yields different regimes during nonequilibrium relaxation. In the noise-dominated regime, the Magnus force enhances the disordering effects of the thermal noise. In the Magnus-force-dominated regime, the Magnus force cooperates with the skyrmion-skyrmion interaction to yield a dynamic regime with slow decaying correlations. These two regimes are characterized by different values of the aging exponent. In general, the Magnus force accelerates the approach to the steady state.
Weight propagation and equivalent horsepower for alternate-engined cars
NASA Technical Reports Server (NTRS)
Klose, G. J.; Kurtz, D. W.
1978-01-01
In order to evaluate properly the consequences of replacing conventional Otto-cycle engines with alternate power systems, comparisons must be carried out at the vehicle level with functionally equivalent cars. This paper presents the development and application of a procedure for establishing equivalent vehicles. A systematic weight propagation methodology, based on detailed weight breakdowns and influence factors, yields the vehicle weight impacts due to changes in engine weight and power. Performance-matching criteria, utilizing a vehicle simulation program, are then employed to establish Otto-engine-equivalent vehicles, whose characteristics can form the basis for alternative engine evaluations.
Arisholm, Gunnar
2007-05-14
Group velocity mismatch (GVM) is a major concern in the design of optical parametric amplifiers (OPAs) and generators (OPGs) for pulses shorter than a few picoseconds. By simplifying the coupled propagation equations and exploiting their scaling properties, the number of free parameters for a collinear OPA is reduced to a level where the parameter space can be studied systematically by simulations. The resulting set of figures shows the combinations of material parameters and pulse lengths for which high performance can be achieved, and it can serve as a basis for a design.
Plasmon Enhanced Hetero-Junction Solar Cell
NASA Astrophysics Data System (ADS)
Long, Gen; Ching, Levine; Sadoqi, Mostafa; Xu, Huizhong
2015-03-01
Here we report a systematic study of plasmon-enhanced hetero-junction solar cells made of colloidal quantum dots (PbS) and nanowires (ZnO), with and without metal nanoparticles (Au). The structure of the solar cell devices was characterized by AFM, SEM and profilometry. The power conversion efficiencies of the solar cell devices were characterized using a solar simulator (OAI TriSOL, AM1.5G Class AAA). The enhancement in the photocurrent due to the introduction of metal nanoparticles was evident. We believe this is due to the plasmonic effect of the metal nanoparticles. The correlation between surface roughness, film uniformity and device performance was also studied.
Hydrogen fluoride capture by imidazolium acetate ionic liquid
NASA Astrophysics Data System (ADS)
Chaban, Vitaly
2015-04-01
Extraction of hydrofluoric acid (HF) from oils is a critically important problem in the petroleum industry, since HF causes rapid corrosion of pipelines and poses severe health hazards. Some ionic liquids (ILs) constitute promising scavenger agents thanks to their strong binding to polar compounds and their tunability. PM7-MD simulations and hybrid density functional theory are employed here to assess the HF capture ability of ILs. Discussing the effects and impacts of the cation and the anion, separately and together, we evaluate the performance of imidazolium acetate and outline systematic search guidelines for efficient adsorption and extraction of HF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Guangjin, E-mail: guangjin.ma@mpq.mpg.de; Max-Planck-Institut für Quantenoptik, D-85748 Garching; Dallari, William
2015-03-15
We have performed a systematic study through particle-in-cell simulations to investigate the generation of attosecond pulses from relativistic laser plasmas when the laser pulse duration approaches the few-cycle regime. A significant enhancement of the attosecond pulse energy was found, depending on the laser pulse duration, carrier-envelope phase, and plasma scale length. Based on the results obtained in this work, the potential of attaining isolated attosecond pulses with ∼100 μJ energy for photons >16 eV using state-of-the-art laser technology appears to be within reach.
Patient simulation: a literary synthesis of assessment tools in anesthesiology.
Edler, Alice A; Fanning, Ruth G; Chen, Michael I; Claure, Rebecca; Almazan, Dondee; Struyk, Brian; Seiden, Samuel C
2009-12-20
High-fidelity patient simulation (HFPS) has been hypothesized as a modality for assessing competency of knowledge and skill, but uniform methods for HFPS performance assessment (PA) have not yet been completely achieved. Anesthesiology as a field founded the HFPS discipline and also leads in its PA. This project reviews the types, quality, and designated purpose of HFPS PA tools in anesthesiology. We used the systematic review method and systematically reviewed anesthesiology literature referenced in PubMed to assess the quality and reliability of available PA tools in HFPS. Of 412 articles identified, 50 met our inclusion criteria. Seventy-seven percent of the studies have been published since 2000; more recent studies demonstrated higher quality. Investigators reported a variety of test construction and validation methods. The most commonly reported test construction methods included "modified Delphi techniques" for item selection, reliability measurement using inter-rater agreement, and intra-class correlations between test items or subtests. Modern test theory, in particular generalizability theory, was used in nine (18%) of the studies. Test score validity has been addressed in multiple investigations and has shown a significant improvement in reporting accuracy. However, the assessment of predictive validity has been low across the majority of studies. Usability and practicality of testing occasions and tools were only anecdotally reported. To more completely comply with the gold standards for PA design, both the shared experience of experts and recognition of test construction standards, including reliability and validity measurements, instrument piloting, rater training, and explicit identification of the purpose and proposed use of the assessment tool, are required.
Monte Carlo simulation of the neutron monitor yield function
NASA Astrophysics Data System (ADS)
Mangeard, P.-S.; Ruffolo, D.; Sáiz, A.; Madlee, S.; Nutaro, T.
2016-08-01
Neutron monitors (NMs) are ground-based detectors that measure variations of the Galactic cosmic ray flux at rigidities in the GV range. Differences in configuration, electronics, surroundings, and location induce systematic effects on the calculation of the yield functions of NMs worldwide. Different estimates of NM yield functions can differ by a factor of 2 or more. In this work, we present new Monte Carlo simulations to calculate NM yield functions and perform an absolute (not relative) comparison with the count rate of the Princess Sirindhorn Neutron Monitor (PSNM) at Doi Inthanon, Thailand, both for the entire monitor and for individual counter tubes. We model the atmosphere using profiles from the Global Data Assimilation System database and the Naval Research Laboratory Mass Spectrometer, Incoherent Scatter Radar Extended model. Using FLUKA software and the detailed geometry of PSNM, we calculated the PSNM yield functions for protons and alpha particles. An agreement better than 9% was achieved between the PSNM observations and the simulated count rate during the solar minimum of December 2009. The systematic effect from the electronic dead time was studied as a function of primary cosmic ray rigidity at the top of the atmosphere up to 1 TV. We show that the effect is not negligible and can reach 35% at high rigidity for a dead time >1 ms. We analyzed the response function of each counter tube at PSNM using its actual dead time, and we provide normalization coefficients between count rates for various tube configurations in the standard NM64 design that are valid to within ~1% for such stations worldwide.
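The scale of the dead-time systematic discussed above can be illustrated with the standard non-paralyzable counter model (a textbook approximation, not the FLUKA-based treatment used in the paper); the rate and dead-time values are invented for the example.

```python
def true_rate(measured_rate, dead_time):
    """Correct a counter's measured rate m (counts/s) for non-paralyzable
    dead time tau: n = m / (1 - m * tau). After each registered count the
    detector is blind for tau seconds, so a fraction m*tau of the true
    counts is lost."""
    return measured_rate / (1.0 - measured_rate * dead_time)

# With tau = 1.2 ms, a measured 250 counts/s implies substantial loss:
m, tau = 250.0, 1.2e-3
n = true_rate(m, tau)
loss = 1.0 - m / n     # fraction of true counts lost to dead time -> 0.3
```

In this model the lost fraction is simply m·tau, which is why a >1 ms dead time combined with the high secondary-particle multiplicities at high rigidity can produce effects of tens of percent.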
Systematic methods for knowledge acquisition and expert system development
NASA Technical Reports Server (NTRS)
Belkin, Brenda L.; Stengel, Robert F.
1991-01-01
Nine cooperating rule-based systems, collectively called AUTOCREW, which were designed to automate functions and decisions associated with a combat aircraft's subsystems, are discussed. The organization of tasks within each system is described; performance metrics were developed to evaluate the workload of each rule base and to assess the cooperation between the rule bases. Simulation and comparative workload results for two mission scenarios are given. The scenarios are inbound surface-to-air-missile attack on the aircraft and pilot incapacitation. The methodology used to develop the AUTOCREW knowledge bases is summarized. Issues involved in designing the navigation sensor selection expert in AUTOCREW's NAVIGATOR knowledge base are discussed in detail. The performance of seven navigation systems aiding a medium-accuracy INS was investigated using Kalman filter covariance analyses. A navigation sensor management (NSM) expert system was formulated from covariance simulation data using the analysis of variance (ANOVA) method and the ID3 algorithm. ANOVA results show that statistically different position accuracies are obtained when different navaids are used, the number of navaids aiding the INS is varied, the aircraft's trajectory is varied, and the performance history is varied. The ID3 algorithm determines the NSM expert's classification rules in the form of decision trees. The performance of these decision trees was assessed on two arbitrary trajectories, and the results demonstrate that the NSM expert adapts to new situations and provides reasonable estimates of the expected hybrid performance.
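The ID3 step used to build such decision trees chooses, at each node, the attribute with the greatest information gain. A generic sketch of that selection criterion (with a made-up toy table, not the covariance-simulation data) is:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Gain of splitting on attribute index attr -- the quantity ID3
    maximizes when choosing the next test in the decision tree."""
    n = len(labels)
    gain = entropy(labels)
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    for subset in by_value.values():
        gain -= len(subset) / n * entropy(subset)
    return gain

# Toy navaid-style table: attribute 0 perfectly predicts the label,
# attribute 1 is uninformative (hypothetical data, for illustration only).
rows = [("gps", "a"), ("gps", "b"), ("loran", "a"), ("loran", "b")]
labels = ["good", "good", "poor", "poor"]
g0 = information_gain(rows, labels, 0)   # -> 1.0 bit
g1 = information_gain(rows, labels, 1)   # -> 0.0 bits
```

Recursing on the best attribute until each subset is pure yields the classification trees described in the abstract.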
Hybrid simulation: bringing motivation to the art of teamwork training in the operating room.
Kjellin, A; Hedman, L; Escher, C; Felländer-Tsai, L
2014-12-01
Crew resource management-based operating room team training will be an integral part of future surgical training. Hybrid simulation in the operating room gives trainees the opportunity to perform higher-fidelity training of technical and non-technical skills in a realistic context. We focus on situational motivation and self-efficacy, two important factors for optimal learning, in the context of a prototype course for teams of residents in surgery and anesthesiology and nurses. Authentic operating room teams consisting of residents in anesthesia (n = 2), anesthesia nurses (n = 3), residents in surgery (n = 2), and scrub nurses (n = 6) were, during a one-day course, exposed to four different scenarios. Their situational motivation was self-assessed (ranging from 1 = does not correspond at all to 7 = corresponds exactly) immediately after training, and their self-efficacy (graded from 1 to 7) before and after training. Training was performed in a mock-up operating theater equipped with a hybrid patient simulator (SimMan 3G; Laerdal) and a laparoscopic simulator (Lap Mentor Express; Simbionix). The functionality of the systematic hybrid procedure simulation scenario was evaluated by an exit questionnaire (graded from 1 = disagree entirely to 5 = agree completely). The trainees were mostly intrinsically motivated, engaged for their own sake, and had a rather great degree of self-determination toward the training situation. Self-efficacy among the team members improved significantly from 4 to 6 (median). The overall evaluation showed very good results, with a median grading of 5. We conclude that hybrid simulation is feasible and makes it possible to train an authentic operating room team, improving individual motivation and confidence. © The Finnish Surgical Society 2014.
The influence of shrouded stator cavity flows on multistage compressor performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wellborn, S.R.; Okiishi, T.H.
1999-07-01
Experiments were performed on a low-speed multistage axial-flow compressor to assess the effects of shrouded stator cavity flows on aerodynamic performance. Five configurations, which involved systematic changes in seal-tooth leakage rates and/or elimination of the shrouded stator cavities, were tested. Rig data indicate increasing seal-tooth leakage substantially degraded compressor performance. For every 1 percent increase in seal-tooth clearance-to-span ratio, the decrease in pressure rise was 3 percent and the reduction in efficiency was 1 point. These observed performance penalties are comparable to those commonly reported for rotor and cantilevered stator tip clearance variations. The performance degradation observed with increased leakage was brought about in two distinct ways. First, increasing seal-tooth leakage directly spoiled the near-hub performance of the stator row in which leakage occurred. Second, the altered stator exit flow conditions, caused by increased leakage, impaired the performance of the next downstream stage by decreasing the work input of the rotor and increasing total pressure loss of the stator. These trends caused the performance of downstream stages to deteriorate progressively. Numerical simulations of the test rig stator flow field were also conducted to help resolve important fluid mechanic details associated with the interaction between the primary and cavity flows. Simulation results show that fluid originating in the upstream cavity collected on the stator suction surface when the cavity tangential momentum was low and on the pressure side when it was high. The convection of cavity fluid to the suction surface was a mechanism that reduced stator performance when leakage increased.
The Influence of Intrinsic Framework Flexibility on Adsorption in Nanoporous Materials
Witman, Matthew; Ling, Sanliang; Jawahery, Sudi; ...
2017-03-30
For applications of metal–organic frameworks (MOFs) such as gas storage and separation, flexibility is often seen as a parameter that can tune material performance. In this work we aim to determine the optimal flexibility for the shape selective separation of similarly sized molecules (e.g., Xe/Kr mixtures). To obtain systematic insight into how the flexibility impacts this type of separation, we develop a simple analytical model that predicts a material’s Henry regime adsorption and selectivity as a function of flexibility. We elucidate the complex dependence of selectivity on a framework’s intrinsic flexibility whereby performance is either improved or reduced with increasing flexibility, depending on the material’s pore size characteristics. However, the selectivity of a material with the pore size and chemistry that already maximizes selectivity in the rigid approximation is continuously diminished with increasing flexibility, demonstrating that the globally optimal separation exists within an entirely rigid pore. Molecular simulations show that our simple model predicts performance trends that are observed when screening the adsorption behavior of flexible MOFs. These flexible simulations provide better agreement with experimental adsorption data in a high-performance material that is not captured when modeling this framework as rigid, an approximation typically made in high-throughput screening studies. We conclude that, for shape selective adsorption applications, the globally optimal material will have the optimal pore size/chemistry and minimal intrinsic flexibility even though other nonoptimal materials’ selectivity can actually be improved by flexibility. Equally important, we find that flexible simulations can be critical for correctly modeling adsorption in these types of systems.
Technology-enhanced simulation in emergency medicine: a systematic review and meta-analysis.
Ilgen, Jonathan S; Sherbino, Jonathan; Cook, David A
2013-02-01
Technology-enhanced simulation is used frequently in emergency medicine (EM) training programs. Evidence for its effectiveness, however, remains unclear. The objective of this study was to evaluate the effectiveness of technology-enhanced simulation for training in EM and identify instructional design features associated with improved outcomes by conducting a systematic review. The authors systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Original research articles in any language were selected if they compared simulation to no intervention or another educational activity for the purposes of training EM health professionals (including student and practicing physicians, midlevel providers, nurses, and prehospital providers). Reviewers evaluated study quality and abstracted information on learners, instructional design (curricular integration, feedback, repetitive practice, mastery learning), and outcomes. From a collection of 10,903 articles, 85 eligible studies enrolling 6,099 EM learners were identified. Of these, 56 studies compared simulation to no intervention, 12 compared simulation with another form of instruction, and 19 compared two forms of simulation. Effect sizes were pooled using a random-effects model. Heterogeneity among these studies was large (I² ≥ 50%). Among studies comparing simulation to no intervention, pooled effect sizes were large (range = 1.13 to 1.48) for knowledge, time, and skills and small to moderate for behaviors with patients (0.62) and patient effects (0.43; all p < 0.02 except patient effects p = 0.12). Among comparisons between simulation and other forms of instruction, the pooled effect sizes were small (≤ 0.33) for knowledge, time, and process skills (all p > 0.1).
Qualitative comparisons of different simulation curricula are limited, although feedback, mastery learning, and higher fidelity were associated with improved learning outcomes. Technology-enhanced simulation for EM learners is associated with moderate or large favorable effects in comparison with no intervention and generally small and nonsignificant benefits in comparison with other instruction. Future research should investigate the features that lead to effective simulation-based instructional design. © 2013 by the Society for Academic Emergency Medicine.
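Pooling effect sizes under a random-effects model, as this review does, is commonly done with the DerSimonian–Laird estimator: a between-study variance is estimated from Cochran's Q and folded into the study weights. A stdlib sketch (the input effect sizes and variances below are illustrative, not the review's data):

```python
def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes under a DerSimonian-Laird
    random-effects model. Returns (pooled effect, tau^2, I^2 %)."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # heterogeneity, %
    return pooled, tau2, i2

# Illustrative standardized mean differences from three hypothetical studies.
pooled, tau2, i2 = dersimonian_laird([1.13, 1.48, 1.25], [0.05, 0.08, 0.04])
```

With large I² (as the ≥ 50% reported above), tau² inflates the variance of each study and the pooled estimate moves toward an unweighted average.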
A multimodel intercomparison of resolution effects on precipitation: simulations and theory
Rauscher, Sara A.; O'Brien, Travis A.; Piani, Claudio; ...
2016-02-27
An ensemble of six pairs of RCM experiments performed at 25 and 50 km for the period 1961–2000 over a large European domain is examined in order to evaluate the effects of resolution on the simulation of daily precipitation statistics. Application of the non-parametric two-sample Kolmogorov–Smirnov test, which tests for differences in the location and shape of the probability distributions of two samples, shows that the distribution of daily precipitation differs between the pairs of simulations over most land areas in both summer and winter, with the strongest signal over southern Europe. Two-dimensional histograms reveal that precipitation intensity increases with resolution over almost the entire domain in both winter and summer. In addition, the 25 km simulations have more dry days than the 50 km simulations. The increase in dry days with resolution is indicative of an improvement in model performance at higher resolution, while the more intense precipitation exceeds observed values. The systematic increase in precipitation extremes with resolution across all models suggests that this response is fundamental to model formulation. Simple theoretical arguments suggest that fluid continuity, combined with the emergent scaling properties of the horizontal wind field, results in an increase in resolved vertical transport as grid spacing decreases. This increase in resolution-dependent vertical mass flux then drives an intensification of convergence and resolvable-scale precipitation as grid spacing decreases. This theoretical result could help explain the increasingly, and often anomalously, large stratiform contribution to total rainfall observed with increasing resolution in many regional and global models.
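The two-sample Kolmogorov–Smirnov test used above compares entire distributions, not just means: the statistic is the maximum vertical distance between the two empirical CDFs. A self-contained sketch (toy samples, not the precipitation fields):

```python
import bisect

def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov statistic D: the maximum absolute
    difference between the empirical CDFs of samples x and y."""
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    for v in xs + ys:                       # D is attained at a sample point
        fx = bisect.bisect_right(xs, v) / len(xs)   # ECDF of x at v
        fy = bisect.bisect_right(ys, v) / len(ys)   # ECDF of y at v
        d = max(d, abs(fx - fy))
    return d

# Identical samples give D = 0; fully separated samples give D = 1.
d_overlap = ks_two_sample([1, 2, 3, 4], [3, 4, 5, 6])
```

In practice D would be compared against the KS critical value for the two sample sizes to decide whether the 25 km and 50 km daily-precipitation distributions differ significantly.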
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deichmann, Gregor; Marcon, Valentina; Vegt, Nico F. A. van der, E-mail: vandervegt@csi.tu-darmstadt.de
Molecular simulations of soft matter systems have been performed in recent years using a variety of systematically coarse-grained models. With these models, structural or thermodynamic properties can be quite accurately represented while the prediction of dynamic properties remains difficult, especially for multi-component systems. In this work, we use constraint molecular dynamics simulations for calculating dissipative pair forces which are used together with conditional reversible work (CRW) conservative forces in dissipative particle dynamics (DPD) simulations. The combined CRW-DPD approach aims to extend the representability of CRW models to dynamic properties and uses a bottom-up approach. Dissipative pair forces are derived from fluctuations of the direct atomistic forces between mapped groups. The conservative CRW potential is obtained from a similar series of constraint dynamics simulations and represents the reversible work performed to couple the direct atomistic interactions between the mapped atom groups. Neopentane, tetrachloromethane, cyclohexane, and n-hexane have been considered as model systems. These molecular liquids are simulated with atomistic molecular dynamics, coarse-grained molecular dynamics, and DPD. We find that the CRW-DPD models reproduce the liquid structure and diffusive dynamics of the liquid systems in reasonable agreement with the atomistic models when using single-site mapping schemes with beads containing five or six heavy atoms. For a two-site representation of n-hexane (3 carbons per bead), time scale separation can no longer be assumed and the DPD approach consequently fails to reproduce the atomistic dynamics.
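In standard DPD, each bead pair within a cutoff feels a dissipative force proportional to the relative velocity along the pair axis and a balancing random force; the CRW-DPD approach parametrizes these from constrained atomistic runs. A sketch of the textbook pairwise forces with a linear weight function (parameter values are illustrative defaults, not the paper's fitted ones):

```python
import math
import random

def dpd_forces(rij, vij, gamma=4.5, sigma=3.0, rc=1.0, dt=0.01):
    """Dissipative + random DPD pair force on bead i from bead j.
    rij, vij: relative position and velocity 3-vectors (tuples).
    Uses the common weight w(r) = 1 - r/rc, with w_D = w^2 and w_R = w,
    so the fluctuation-dissipation relation sigma^2 = 2*gamma*kT holds."""
    r = math.sqrt(sum(c * c for c in rij))
    if r >= rc or r == 0.0:
        return (0.0, 0.0, 0.0)                  # outside cutoff: no force
    e = [c / r for c in rij]                    # unit vector along the pair
    w = 1.0 - r / rc
    v_dot_e = sum(v * c for v, c in zip(vij, e))
    fd = -gamma * w * w * v_dot_e               # dissipative (drag) magnitude
    fr = sigma * w * random.gauss(0.0, 1.0) / math.sqrt(dt)  # random kick
    return tuple((fd + fr) * c for c in e)
```

Setting sigma to zero isolates the drag term, which is the quantity the constrained atomistic simulations are used to calibrate.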
NASA Astrophysics Data System (ADS)
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.
2017-02-01
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach, and up to 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.
Modeling Hohlraum-Based Laser Plasma Instability Experiments
NASA Astrophysics Data System (ADS)
Meezan, N. B.
2005-10-01
Laser fusion targets must control laser-plasma instabilities (LPI) in order to perform as designed. We present analyses of recent hohlraum LPI experiments from the Omega laser facility. The targets, gold hohlraums filled with gas or SiO2 foam, are preheated by several 3ω beams before an interaction beam (2ω or 3ω) is fired along the hohlraum axis. The experiments are simulated in 2-D and 3-D using the code hydra. The choice of electron thermal conduction model in hydra strongly affects the simulated plasma conditions. This work is part of a larger effort to systematically explore the usefulness of linear gain as a design tool for fusion targets. We find that the measured Raman and Brillouin backscatter scale monotonically with the peak linear gain calculated for the target; however, linear gain is not sufficient to explain all trends in the data. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-ENG-48.
Comparison of algorithms to generate event times conditional on time-dependent covariates.
Sylvestre, Marie-Pierre; Abrahamowicz, Michal
2008-06-30
The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
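The binomial-model class of algorithms compared above can be sketched as a discrete-time hazard simulation: at each interval, the event indicator is a Bernoulli draw whose probability depends on the current value of the time-dependent covariate. A minimal sketch (function name, baseline hazard, and covariate path are illustrative, not the paper's specification):

```python
import math
import random

def simulate_event_time(covariate_path, beta, base_hazard=0.02,
                        rng=random.Random(1)):
    """Draw one event time from a discrete-time (binomial) hazard model
    with a time-dependent covariate x_t. At interval t the event occurs
    with probability p_t = 1 - exp(-h0 * exp(beta * x_t)), i.e. a
    piecewise-constant proportional-hazards model on a unit grid."""
    for t, x_t in enumerate(covariate_path, start=1):
        p_t = 1.0 - math.exp(-base_hazard * math.exp(beta * x_t))
        if rng.random() < p_t:
            return t            # event occurs during interval t
    return None                 # censored at end of follow-up

# Hypothetical binary treatment switched on at interval 51.
path = [0] * 50 + [1] * 50
time_of_event = simulate_event_time(path, beta=0.7)
```

A permutational algorithm instead generates the marginal event times first and then matches them to covariate histories, which is what gives it the extra flexibility in the marginal distribution noted above.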
Solvation structures of water in trihexyltetradecylphosphonium-orthoborate ionic liquids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yong-Lei, E-mail: wangyonl@gmail.com; System and Component Design, Department of Machine Design, KTH Royal Institute of Technology, SE-100 44 Stockholm; Sarman, Sten
2016-08-14
Atomistic molecular dynamics simulations have been performed to investigate effective interactions of isolated water molecules dispersed in trihexyltetradecylphosphonium-orthoborate ionic liquids (ILs). The intrinsic free energy changes in solvating one water molecule from gas phase into bulk IL matrices were estimated as a function of temperature, and thereafter, the calculations of potential of mean force between two dispersed water molecules within different IL matrices were performed using umbrella sampling simulations. The systematic analyses of local ionic microstructures, orientational preferences, probability and spatial distributions of dispersed water molecules around neighboring ionic species indicate their preferential coordinations to central polar segments in orthoborate anions. The effective interactions between two dispersed water molecules are partially or totally screened as their separation distance increases due to interference of ionic species in between. These computational results connect microscopic anionic structures with macroscopically and experimentally observed difficulty in completely removing water from synthesized IL samples and suggest that the introduction of hydrophobic groups to central polar segments and the formation of conjugated ionic structures in orthoborate anions can effectively reduce residual water content in the corresponding IL samples.
NASA Technical Reports Server (NTRS)
Talbot, P. D.; Dugan, D. D.; Chen, R. T. N.; Gerdes, R. M.
1980-01-01
A coordinated analysis and ground simulator experiment was performed to investigate the effects on single rotor helicopter handling qualities of systematic variations in the main rotor hinge restraint, hub hinge offset, pitch-flap coupling, and blade lock number. Teetering rotor, articulated rotor, and hingeless rotor helicopters were evaluated by research pilots in special low level flying tasks involving obstacle avoidance at 60 to 100 knots airspeed. The results of the experiment are in the form of pilot ratings, pilot commentary, and some objective performance measures. Criteria for damping and sensitivity are reexamined when combined with the additional factors of cross coupling due to pitch and roll rates, pitch coupling with collective pitch, and longitudinal static stability. Ratings obtained with and without motion are compared. Acceptable flying qualities were obtained within each rotor type by suitable adjustment of the hub parameters; however, pure teetering rotors were found to lack control power for the tasks. A limit of 0.35 for the coupling parameter L_q/L_p is suggested.
NASA Astrophysics Data System (ADS)
Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.
2012-09-01
A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what would be the observed radio-maps if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithms on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with complex memory hierarchy that includes shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future works. The experiments show that the data-privatizing model scales efficiently on medium scale multi-socket, multi-core systems (up to 48 cores) while regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability over all used multi-socket, multi-core systems.
A Computational Framework for Bioimaging Simulation
Watabe, Masaki; Arjunan, Satya N. V.; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi
2015-01-01
Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher level functions emerge from the combined action of biomolecules. However, there still remain formidable challenges in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units. PMID:26147508
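One systematic effect any such bioimaging-simulation framework must reproduce before comparing at the level of photon-counting units is shot noise: each detected pixel count is Poisson-distributed around the noiseless expected photon count. A minimal stdlib sketch using Knuth's sampler (an illustration of the physics, not the framework's actual implementation):

```python
import math
import random

def poisson_sample(lam, rng=random):
    """Draw a Poisson-distributed photon count with mean lam
    (Knuth's multiplication algorithm; fine for small-to-moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def add_shot_noise(image, rng=random):
    """Replace each noiseless pixel intensity (expected photon count)
    with a Poisson-distributed measured count."""
    return [[poisson_sample(px, rng) for px in row] for row in image]

# A 2x2 "cell image" of expected counts becomes an integer photon-count map.
noisy = add_shot_noise([[0.0, 5.0], [20.0, 100.0]])
```

A fuller pipeline would also convolve with the microscope's point spread function and add camera read noise before the Poisson step's output is digitized.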
Systematic land climate and evapotranspiration biases in CMIP5 simulations.
Mueller, B; Seneviratne, S I
2014-01-16
Land climate is important for human population since it affects inhabited areas. Here we evaluate the realism of simulated evapotranspiration (ET), precipitation, and temperature in the CMIP5 multimodel ensemble on continental areas. For ET, a newly compiled synthesis data set prepared within the Global Energy and Water Cycle Experiment-sponsored LandFlux-EVAL project is used. The results reveal systematic ET biases in the Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations, with an overestimation in most regions, especially in Europe, Africa, China, Australia, Western North America, and part of the Amazon region. The global average overestimation amounts to 0.17 mm/d. This bias is more pronounced than in the previous CMIP3 ensemble (overestimation of 0.09 mm/d). Consistent with the ET overestimation, precipitation is also overestimated relative to existing reference data sets. We suggest that the identified biases in ET can explain respective systematic biases in temperature in many of the considered regions. The biases additionally display a seasonal dependence and are generally of opposite sign (ET underestimation and temperature overestimation) in boreal summer (June-August).
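The headline numbers above (e.g., a global mean ET overestimation of 0.17 mm/d) reduce to an area-weighted mean difference between a model field and a reference field over land grid cells. A minimal sketch (the values below are invented for illustration, not the LandFlux-EVAL numbers):

```python
def area_weighted_bias(model, reference, weights):
    """Area-weighted mean bias of a model field against a reference field
    (same units, e.g. mm/d for ET). `weights` are grid-cell areas, or
    cos(latitude) factors on a regular lat-lon grid."""
    wsum = sum(weights)
    return sum(w * (m - r)
               for m, r, w in zip(model, reference, weights)) / wsum

# Two hypothetical land cells: model overestimates ET in both.
bias = area_weighted_bias(model=[2.0, 3.0], reference=[1.8, 2.9],
                          weights=[1.0, 1.0])   # mm/d
```

A positive bias indicates overestimation; computing it per season (e.g., June-August only) exposes the sign flips noted in the abstract.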
NASA Astrophysics Data System (ADS)
Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.
2017-11-01
We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L ) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤2 as well as l ≤3 harmonic modes. Using the l ≤3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.
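Interpolating a single scalar (the marginalized log likelihood, ln L) across a set of NR simulations and locating its peak can be illustrated in one dimension by fitting a parabola through three sampled (parameter, ln L) points, since ln L is locally quadratic near its maximum. This is a toy sketch of the idea, not the paper's multidimensional procedure:

```python
def parabola_vertex(p1, p2, p3):
    """Location of the extremum of the unique parabola through three
    (x, lnL) samples, via the Lagrange-form coefficients. Near its peak,
    a log likelihood is well approximated by such a quadratic."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3 ** 2 * (y1 - y2) + x2 ** 2 * (y3 - y1) + x1 ** 2 * (y2 - y3)) / denom
    return -b / (2.0 * a)

# Hypothetical ln L evaluated at three mass-ratio samples; the estimated
# peak lies between the samples, away from any single NR simulation.
q_best = parabola_vertex((1.0, -4.0), (2.0, -1.0), (4.0, -1.0))
```

In more dimensions the same idea becomes fitting or interpolating a ln L surface over the simulation parameters and reading off its maximum and curvature (which sets the statistical error).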
Understanding the effects of laser imprint on plastic-target implosions on OMEGA
Hu, S. X.; Michel, D. T.; Davis, A. K.; ...
2016-10-03
Understanding the effects of laser imprint on target performance is critical to the success of direct-drive inertial confinement fusion. Directly measuring the disruption caused by laser imprints to the imploding shell and hot-spot formation, in comparison with multidimensional radiation–hydrodynamic simulations, can provide a clear picture of how laser nonuniformities cause target performance to degrade. With the recently developed x-ray self-emission imaging technique and the state-of-the-art physics models recently implemented in the two-dimensional hydrocode DRACO, a systematic study of laser-imprint effects on warm target implosions on OMEGA has been performed using both experimental results and simulations. By varying the laser-picket intensity, the imploding shells were set at different adiabats (from α = 2 to α = 6). As the shell adiabats decreased, it was observed that (1) the measured shell thickness at the time the hot spot lit up became larger than the uniform one-dimensional (1-D) predictions; (2) the hot-spot core emitted earlier than the corresponding 1-D predictions; (3) the measured neutron yield first increased then decreased as the shell adiabat α was reduced; and (4) the hot-spot size reduced as α decreased for cases where SSD (smoothing by spectral dispersion) was on but became larger for low-α shots in cases where SSD was off. Most of these experimental observations are well reproduced by DRACO simulations with laser imprints including modes up to λmax = 200. In addition, these studies identify the importance of laser imprint as the major source of degrading target performance for OMEGA implosions of adiabat α ≤ 3. Mitigating laser imprints is required to improve low-α target performance.
DeMaere, Matthew Z.
2016-01-01
Background Chromosome conformation capture, coupled with high throughput DNA sequencing in protocols like Hi-C and 3C-seq, has been proposed as a viable means of generating data to resolve the genomes of microorganisms living in naturally occurring environments. Metagenomic Hi-C and 3C-seq datasets have begun to emerge, but the feasibility of resolving genomes when closely related organisms (strain-level diversity) are present in the sample has not yet been systematically characterised. Methods We developed a computational simulation pipeline for metagenomic 3C and Hi-C sequencing to evaluate the accuracy of genomic reconstructions at, above, and below an operationally defined species boundary. We simulated datasets and measured accuracy over a wide range of parameters. Five clustering algorithms were evaluated (2 hard, 3 soft) using an adaptation of the extended B-cubed validation measure. Results When all genomes in a sample are below 95% sequence identity, all of the tested clustering algorithms performed well. When sequence data contains genomes above 95% identity (our operational definition of strain-level diversity), a naive soft-clustering extension of the Louvain method achieves the highest performance. Discussion Previously, only hard-clustering algorithms have been applied to metagenomic 3C and Hi-C data, yet none of these perform well when strain-level diversity exists in a metagenomic sample. Our simple extension of the Louvain method performed the best in these scenarios, however, accuracy remained well below the levels observed for samples without strain-level diversity. Strain resolution is also highly dependent on the amount of available 3C sequence data, suggesting that depth of sequencing must be carefully considered during experimental design. Finally, there appears to be great scope to improve the accuracy of strain resolution through further algorithm development. PMID:27843713
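The B-cubed measure used above (in an extended form that also handles soft clusterings) scores each item by how well its cluster overlaps its true genome bin, then averages. A minimal sketch of the plain hard-clustering version (the extended variant for overlapping clusters is more involved):

```python
def bcubed(truth, pred):
    """B-cubed precision and recall for a hard clustering.
    truth, pred: dicts mapping item -> cluster label (same keys)."""
    items = list(truth)

    def score(a, b):
        # Average, over items i, of the fraction of items sharing
        # i's cluster under labeling `a` that also share it under `b`.
        total = 0.0
        for i in items:
            same_a = [j for j in items if a[j] == a[i]]
            correct = sum(1 for j in same_a if b[j] == b[i])
            total += correct / len(same_a)
        return total / len(items)

    precision = score(pred, truth)   # purity of predicted clusters
    recall = score(truth, pred)      # completeness of true bins
    return precision, recall

# Merging two true genome bins into one predicted cluster keeps
# recall perfect but lowers precision.
p, r = bcubed({1: "a", 2: "a", 3: "b"}, {1: "x", 2: "x", 3: "x"})
```

Label names are irrelevant; only the induced partitions matter, which is why a relabeled-but-identical clustering scores (1.0, 1.0).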
Understanding the effects of laser imprint on plastic-target implosions on OMEGA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, S. X.; Michel, D. T.; Davis, A. K.
Understanding the effects of laser imprint on target performance is critical to the success of direct-drive inertial confinement fusion. Directly measuring the disruption caused by laser imprints to the imploding shell and hot-spot formation, in comparison with multidimensional radiation–hydrodynamic simulations, can provide a clear picture of how laser nonuniformities cause target performance to degrade. With the recently developed x-ray self-emission imaging technique and the state-of-the-art physics models recently implemented in the two-dimensional hydrocode DRACO, a systematic study of laser-imprint effects on warm target implosions on OMEGA has been performed using both experimental results and simulations. By varying the laser-picket intensity, the imploding shells were set at different adiabats (from α = 2 to α = 6). As the shell adiabats decreased, it was observed that (1) the measured shell thickness at the time the hot spot lit up became larger than the uniform one-dimensional (1-D) predictions; (2) the hot-spot core emitted earlier than the corresponding 1-D predictions; (3) the measured neutron yield first increased then decreased as the shell adiabat α was reduced; and (4) the hot-spot size reduced as α decreased for cases where SSD (smoothing by spectral dispersion) was on but became larger for low-α shots in cases where SSD was off. Most of these experimental observations are well reproduced by DRACO simulations with laser imprints including modes up to λmax = 200. In addition, these studies identify the importance of laser imprint as the major source of degrading target performance for OMEGA implosions of adiabat α ≤ 3. Mitigating laser imprints is required to improve low-α target performance.
Munroe, Belinda; Curtis, Kate; Murphy, Margaret; Strachan, Luke; Considine, Julie; Hardy, Jennifer; Wilson, Mark; Ruperto, Kate; Fethney, Judith; Buckley, Thomas
2016-08-01
The aim of this study was to evaluate the effect of the new evidence-informed nursing assessment framework HIRAID (History, Identify Red flags, Assessment, Interventions, Diagnostics, reassessment and communication) on the quality of patient assessment and fundamental nontechnical skills including communication, decision making, task management and situational awareness. Assessment is a core component of nursing practice and underpins clinical decisions and the safe delivery of patient care. Yet there is no universal or validated system used to teach emergency nurses how to comprehensively assess and care for patients. A pre-post design was used. The performance of thirty-eight emergency nurses from five Australian hospitals was evaluated before and after undertaking education in the application of the HIRAID assessment framework. Video recordings of participant performance in immersive simulations of common presentations to the emergency department were evaluated, as well as participant documentation during the simulations. Paired parametric and nonparametric tests were used to compare changes from pre to postintervention. From pre to postintervention, participant performance increases were observed in the percentage of patient history elements collected, critical indicators of urgency collected and reported to medical officers, and patient reassessments performed. Participants also demonstrated improvement in each of the four nontechnical skills categories: communication, decision making, task management and situational awareness. The HIRAID assessment framework improves clinical patient assessments performed by emergency nurses and has the potential to enhance patient care. HIRAID should be considered for integration into clinical practice to provide nurses with a systematic approach to patient assessment and potentially improve the delivery of safe patient care. © 2016 John Wiley & Sons Ltd.
Surrogate model approach for improving the performance of reactive transport simulations
NASA Astrophysics Data System (ADS)
Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris
2016-04-01
Reactive transport models can serve a large number of important geoscientific applications involving underground resources in industry and scientific research. It is common for simulation of reactive transport to consist of at least two coupled simulation models. First is a hydrodynamics simulator that is responsible for simulating the flow of groundwater and the transport of solutes. Hydrodynamics simulators are a well-established technology and can be very efficient. When hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. Second is a geochemical simulation model that is coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly. This makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular reactive transport benchmark problem involving 1-D calcite transport. This is a published benchmark problem (Kolditz, 2012) for simulation models, and for this reason we use it to test the surrogate model approach. To do this we tried a number of statistical models available through the caret and DiceEval packages for R as surrogate models. These were trained on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation, we use the surrogate model to predict the simulator output from the portion of the sampled input data that was not used for training the statistical model.
For this scenario we find that the multivariate adaptive regression splines (MARS) method provides the best trade-off between speed and accuracy. This proof-of-concept forms an essential step towards building an interactive visual analytics system to enable user-driven systematic creation of geochemical surrogate models. Such a system would enable reactive transport simulations with unprecedented spatial and temporal detail. References: Kolditz, O., Görke, U.J., Shao, H. and Wang, W., 2012. Thermo-hydro-mechanical-chemical processes in porous media: benchmarks and examples (Vol. 86). Springer Science & Business Media.
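The sample/train/validate workflow described above can be sketched in miniature. The authors trained MARS models via R's caret and DiceEval packages; in the Python sketch below a trivial nearest-neighbour regressor stands in for MARS, and an analytic function stands in for the costly geochemical simulator, purely to illustrate the hold-out validation loop (all names are hypothetical):

```python
# Sketch of the surrogate-model workflow: sample input-output pairs from
# the (expensive) geochemistry simulator, train a statistical model on
# part of them, and validate on the held-out part. A 1-nearest-neighbour
# regressor stands in for MARS; the "simulator" is a stand-in function.
import random

def geochem_simulator(x):          # stand-in for the costly simulator
    return 2.0 * x + 0.5           # e.g. solute concentration vs. input

class NearestNeighbourSurrogate:
    def fit(self, xs, ys):
        self.data = sorted(zip(xs, ys))
        return self
    def predict(self, x):
        # Return the output of the closest training input.
        return min(self.data, key=lambda p: abs(p[0] - x))[1]

random.seed(0)
xs = [random.uniform(0.0, 1.0) for _ in range(200)]
ys = [geochem_simulator(x) for x in xs]

# Random train/validation split, as in the abstract.
idx = list(range(len(xs)))
random.shuffle(idx)
train, valid = idx[:150], idx[150:]
surrogate = NearestNeighbourSurrogate().fit(
    [xs[i] for i in train], [ys[i] for i in train])

# Validation: compare surrogate predictions with simulator output
# on inputs the surrogate never saw during training.
mae = sum(abs(surrogate.predict(xs[i]) - ys[i]) for i in valid) / len(valid)
```

The point of the split is exactly the one made in the abstract: accuracy is judged only on simulator runs the surrogate was never trained on, so the measured error reflects generalization rather than memorization.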
Mock, U; Dieckmann, K; Wolff, U; Knocke, T H; Pötter, R
1999-08-01
Geometrical accuracy in patient positioning can vary substantially during external radiotherapy. This study estimated the set-up accuracy during pelvic irradiation for gynecological malignancies for determination of safety margins (planning target volume, PTV). Based on electronic portal imaging devices (EPID), 25 patients undergoing 4-field pelvic irradiation for gynecological malignancies were analyzed with regard to set-up accuracy during the treatment course. EPID images acquired at regular intervals were used to assess the systematic and random components of set-up displacements. Anatomical matching of verification and simulation images was followed by measuring corresponding distances between the central axis and anatomical features. Data analysis of set-up errors referred to the x-, y-, and z-axes. Additionally, cumulative frequencies were evaluated. A total of 50 simulation films and 313 verification images were analyzed. For the anterior-posterior (AP) beam direction, mean deviations along the x- and z-axes were 1.5 mm and -1.9 mm, respectively. Moreover, random errors of 4.8 mm (x-axis) and 3.0 mm (z-axis) were determined. Concerning the latero-lateral treatment fields, the systematic errors along the two axes were calculated to be 2.9 mm (y-axis) and -2.0 mm (z-axis), and random errors of 3.8 mm and 3.5 mm were found, respectively. The cumulative frequency of misalignments of 5 mm or less was 75% (AP fields) and 72% (latero-lateral fields). For deviations of 10 mm or less, the cumulative frequency was 97% for both beam directions. During external pelvic irradiation therapy for gynecological malignancies, regular EPID imaging revealed acceptable set-up inaccuracies. Safety margins (PTV) of 1 cm appear to be sufficient, accounting for more than 95% of all deviations.
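The decomposition into systematic and random set-up error components used above follows the usual recipe: per patient, the mean displacement is the individual systematic error and its standard deviation the random error; at the population level, the systematic component is the spread of the patient means and the random component the RMS of the patient SDs. A minimal sketch with hypothetical displacement data:

```python
# Sketch: decompose per-patient set-up displacements (mm) along one axis
# into systematic and random components, plus a cumulative frequency,
# as in the EPID analysis above. The displacement values are hypothetical.
from statistics import mean, pstdev

def setup_error_summary(displacements_by_patient, threshold_mm=5.0):
    patient_means = [mean(d) for d in displacements_by_patient.values()]
    patient_sds = [pstdev(d) for d in displacements_by_patient.values()]
    group_mean = mean(patient_means)                  # overall mean shift
    systematic_sd = pstdev(patient_means)             # spread of patient means
    random_sd = mean(s * s for s in patient_sds) ** 0.5   # RMS of patient SDs
    all_d = [x for d in displacements_by_patient.values() for x in d]
    within = sum(abs(x) <= threshold_mm for x in all_d) / len(all_d)
    return group_mean, systematic_sd, random_sd, within

# Hypothetical per-fraction displacements for two patients.
shifts = {"pt1": [1.0, 2.0, 3.0], "pt2": [-1.0, 0.0, 1.0]}
m, sys_sd, rnd_sd, frac = setup_error_summary(shifts)
```

The cumulative-frequency figure corresponds directly to the study's reported percentage of misalignments within 5 mm, which is what motivates the 1 cm PTV margin.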
Connell, Clifford J; Endacott, Ruth; Jackman, Jennifer A; Kiprillis, Noelleen R; Sparkes, Louise M; Cooper, Simon J
2016-09-01
Survival from in-hospital cardiac arrest is poor. Clinical features, including abnormal vital signs, often indicate patient deterioration prior to severe adverse events. Early warning systems and rapid response teams are commonly used to assist the health profession in the identification and management of the deteriorating patient. Education programs are widely used in the implementation of these systems. The effectiveness of the education is unknown. The aims of this study were to identify: (i) the evidence supporting educational effectiveness in the recognition and management of the deteriorating patient and (ii) outcome measures used to evaluate educational effectiveness. A mixed methods systematic review of the literature was conducted using studies published between 2002 and 2014. Included studies were assessed for quality and data were synthesized thematically, while original data are presented in tabular form. Twenty-three studies were included in the review. Most educational programs were found to be effective, reporting significant positive impacts upon learners, patient outcomes and organisational systems. Outcome measures related to: (i) learners, for example knowledge and performance; (ii) systems, including activation and responses of rapid response teams; and (iii) patients, including patient length of stay and adverse events. All but one of the programs used blended teaching, with >87% including medium- to high-fidelity simulation. In situ simulation was employed in two of the interventions. The median program time was eight hours. The longest program lasted 44 h; however, one of the most educationally effective programs was based upon a 40-min simulation program. Educational interventions designed to improve the recognition and management of patient deterioration can improve learner outcomes when they incorporate medium- to high-fidelity simulation.
High-fidelity simulation has demonstrated effectiveness when delivered in brief sessions lasting only forty minutes. In situ simulation has demonstrated sustained positive impact upon the real world implementation of rapid response systems. Outcome measures should include knowledge and skill developments but there are important benefits in understanding patient outcomes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Performance Evaluation and Improvement of Ferroelectric Field-Effect Transistor Memory
NASA Astrophysics Data System (ADS)
Yu, Hyung Suk
Flash memory is rapidly reaching scaling limitations due to the reduction of charge in floating gates, charge leakage, and capacitive coupling between cells, which cause threshold voltage fluctuations, short retention times, and interference. Many new memory technologies are being considered as alternatives to flash memory in an effort to overcome these limitations. The Ferroelectric Field-Effect Transistor (FeFET) is one of the main emerging candidates because of its structural similarity to conventional FETs and its fast switching speed. Nevertheless, the performance of FeFETs has not been systematically compared and analyzed against other competing technologies. In this work, we first benchmark the intrinsic performance of FeFETs and other memories by simulation in order to identify the strengths and weaknesses of FeFETs. To simulate realistic memory applications, we compare memories on an array structure. For the comparisons, we construct an accurate delay model and verify it by benchmarking against exact HSPICE simulations. Second, we propose an accurate model for the FeFET memory window, since the existing model has limitations: it assumes symmetric operation voltages, which is not valid for practical asymmetric operation voltages. In this modeling, we consider practical operation voltages and device dimensions. We also investigate realistic changes of the memory window over time and the retention time of FeFETs. Last, to improve the memory window and subthreshold swing, we suggest nonplanar junctionless structures for FeFETs. Using the suggested structures, we study the dimensional dependences of crucial parameters such as memory window and subthreshold swing and also analyze key interference mechanisms.
Sun, Wen; Ge, Yu; Zhang, Zhiqiang; Wong, Wai-Choong
2015-09-25
A wearable sensor system enables continuous and remote health monitoring and is widely considered the next generation of healthcare technology. The performance, the packet error rate (PER) in particular, of a wearable sensor system may deteriorate due to a number of factors, particularly interference from other wearable sensor systems in the vicinity. In this paper, we systematically evaluate the performance of the wearable sensor system in terms of PER in the presence of such interference. The factors that affect performance, such as density, traffic load, and transmission power, are all considered for a realistic moderate-scale hospital deployment. Simulation results show that with a 20% duty cycle, only 68.5% of data transmissions can achieve the targeted reliability requirement (PER less than 0.05), even in the off-peak period in the hospital. We then suggest some interference mitigation schemes based on the performance evaluation results in the case study.
Changes in running pattern due to fatigue and cognitive load in orienteering.
Millet, Guillaume Y; Divert, Caroline; Banizette, Marion; Morin, Jean-Benoit
2010-01-01
The aim of this study was to examine the influence of fatigue on running biomechanics in normal running, in normal running with a cognitive task, and in running while map reading. Nineteen international and less experienced orienteers performed a fatiguing running exercise of duration and intensity similar to a classic distance orienteering race on an instrumented treadmill while performing mental arithmetic, an orienteering simulation, and control running at regular intervals. Two-way repeated-measures analysis of variance did not reveal any significant difference between mental arithmetic and control running for any of the kinematic and kinetic parameters analysed eight times over the fatiguing protocol. However, these parameters were systematically different between the orienteering simulation and the other two conditions (mental arithmetic and control running). The adaptations in orienteering simulation running were significantly more pronounced in the elite group when step frequency, peak vertical ground reaction force, vertical stiffness, and maximal downward displacement of the centre of mass during contact were considered. The effects of fatigue on running biomechanics depended on whether the orienteers read their map or ran normally. It is concluded that adding a cognitive load does not modify running patterns. Therefore, all changes in running pattern observed during the orienteering simulation, particularly in elite orienteers, are the result of adaptations to enable efficient map reading and/or potentially prevent injuries. Finally, running patterns are not affected to the same extent by fatigue when a map reading task is added.
NASA Astrophysics Data System (ADS)
Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Rahim Hematiyan, Mohammad; Koontz, Craig; Meigooni, Ali S.
2015-12-01
Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost® brachytherapy. However, there is no systematic method for determination of the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 N compression force were determined, with 43% contraction in the loading direction and 37% expansion in the orthogonal direction. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%.
Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Hematiyan, Mohammad Rahim; Koontz, Craig; Meigooni, Ali S
2015-12-07
Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost(®) brachytherapy. However, there is no systematic method for determination of the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 N compression force were determined, with 43% contraction in the loading direction and 37% expansion in the orthogonal direction. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Titt, U; Suzuki, K
Purpose: The PTCH is preparing the ocular proton beam nozzle for clinical use. Currently, commissioning measurements are being performed using films, diodes and ionization chambers. In parallel, a Monte Carlo model of the beam line was created for integration into the automated Monte Carlo treatment plan computation system, MC². This work aims to compare Monte Carlo predictions to measured proton doses in order to validate the Monte Carlo model. Methods: A complete model of the double scattering ocular beam line has been created and is capable of simulating proton beams with a comprehensive set of beam modifying devices, including eleven different range modulator wheels. Simulations of doses in water were scored and compared with ion chamber measurements of depth doses, lateral dose profiles extracted from half beam block exposures of films, and diode measurements of lateral penumbrae at various depths. Results: All comparisons resulted in an average relative entrance dose difference of less than 3% and a peak dose difference of less than 2%. All range differences were smaller than 0.2 mm. The differences in the lateral beam profiles were smaller than 0.2 mm, and the differences in the penumbrae were all smaller than 0.4%. Conclusion: All available data show excellent agreement between simulations and measurements. More measurements will have to be performed in order to completely and systematically validate the model. Besides simulating and measuring PDDs and lateral profiles of all remaining range modulator wheels, the absolute dosimetry factors, in terms of the number of source protons per monitor unit, have to be determined.
A proposed method to investigate reliability throughout a questionnaire
2011-01-01
Background Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" for systematic change in random error, which could assess changed reliability of answers. Methods A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. Results The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in a questionnaire. This slope was proposed as an awareness measure, for assessing whether respondents provide only a random answer or one based on substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Conclusions Even though assumptions in the simulation study might be limited compared to real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales. PMID:21974842
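The awareness measure described above is the slope of the estimated ICC over successive questionnaire items. A minimal sketch of fitting that slope by ordinary least squares is given below; the per-item ICC values are hypothetical, and estimating the ICCs themselves (via cluster analysis of respondents) is outside the scope of this illustration:

```python
# Sketch of the proposed "awareness measure": the slope of estimated
# ICC values over successive questionnaire items, fitted by ordinary
# least squares. The ICC values below are hypothetical.
def slope(ys):
    """OLS slope of ys against item numbers 1..n."""
    xs = list(range(1, len(ys) + 1))
    n = len(ys)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

icc_by_item = [0.80, 0.78, 0.75, 0.70, 0.66]   # reliability declining
awareness = slope(icc_by_item)                 # negative slope: waning effort
```

A clearly negative slope would suggest respondents' answers become more random towards the end of a long questionnaire, which is the pattern the screening method is intended to detect.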
On the applicability of density dependent effective interactions in cluster-forming systems
NASA Astrophysics Data System (ADS)
Montes-Saralegui, Marta; Kahl, Gerhard; Nikoubashman, Arash
2017-02-01
We systematically studied the validity and transferability of the force-matching algorithm for computing effective pair potentials in a system of dendritic polymers, i.e., a particular class of ultrasoft colloids. We focused on amphiphilic dendrimers, macromolecules that can aggregate into clusters of overlapping particles to minimize the contact area with the surrounding implicit solvent. Simulations were performed for both the monomeric and coarse-grained models in the liquid phase at densities ranging from infinite dilution up to values close to the freezing point. The effective pair potentials for the coarse-grained simulations were computed from the monomeric simulations both in the zero-density limit (Φeff0) and at each investigated finite density (Φeff). Conducting the coarse-grained simulations with Φeff0 at higher densities is not appropriate, as they failed to reproduce the structural properties of the monomeric simulations. In contrast, we found excellent agreement between the spatial dendrimer distributions obtained from the coarse-grained simulations with Φeff and the microscopically detailed simulations at low densities, where the macromolecules were distributed homogeneously in the system. However, the reliability of the coarse-grained simulations deteriorated significantly as the density was increased further and the cluster occupation became more polydisperse. Under these conditions, the effective pair potential of the coarse-grained model can no longer be computed by averaging over the whole system; the local density needs to be taken into account instead.
Multidisciplinary team simulation for the operating theatre: a review of the literature.
Tan, Shaw Boon; Pena, Guilherme; Altree, Meryl; Maddern, Guy J
2014-01-01
Analyses of adverse events inside the operating theatre have demonstrated that many errors are caused by failures in non-technical skills and teamwork. While simulation has been used successfully for teaching and improving technical skills, more recently, multidisciplinary simulation has been used for training team skills. We hypothesized that this type of training is feasible and improves team skills in the operating theatre. A systematic search of the literature for studies describing true multidisciplinary operating theatre team simulation was conducted in November and December 2012. We examined the characteristics and outcomes of the team simulation programmes. In total, 1636 articles were initially retrieved. Utilizing a stepwise evaluation process, 26 articles were included in the review. The studies reveal that multidisciplinary operating theatre simulation has been used to provide training in technical and non-technical skills, to help implement new techniques and technologies, and to identify latent weaknesses within a health system. Most of the studies included are descriptions of training programmes with a low level of evidence. No randomized controlled trial was identified. Participants' reactions to the training programme were positive in all studies; however, none of them could objectively demonstrate that skills acquired from simulation are transferred to the operating theatre or show a demonstrable benefit in patient outcomes. Multidisciplinary operating room team simulation is feasible and widely accepted by participants. More studies are required to assess the impact of this type of training on operative performance and patient safety. © 2013 Royal Australasian College of Surgeons.
Multidisciplinary crisis simulations: the way forward for training surgical teams.
Undre, Shabnam; Koutantji, Maria; Sevdalis, Nick; Gautama, Sanjay; Selvapatt, Nowlan; Williams, Samantha; Sains, Parvinderpal; McCulloch, Peter; Darzi, Ara; Vincent, Charles
2007-09-01
High-reliability organizations have stressed the importance of non-technical skills for safety and of regularly providing such training to their teams. Recently safety skills training has been applied in the practice of medicine. In this study, we developed and piloted a module using multidisciplinary crisis scenarios in a simulated operating theatre to train entire surgical teams. Twenty teams participated (n = 80); each consisted of a trainee surgeon, anesthetist, operating department practitioner (ODP), and scrub nurse. Crisis scenarios such as difficult intubation, hemorrhage, or cardiac arrest were simulated. Technical and non-technical skills (leadership, communication, team skills, decision making, and vigilance), were assessed by clinical experts and by two psychologists using relevant technical and human factors rating scales. Participants received technical and non-technical feedback, and the whole team received feedback on teamwork. Trainees assessed the training favorably. For technical skills there were no differences between surgical trainees' assessment scores and the assessment scores of the trainers. However, nurses overrated their technical skill. Regarding non-technical skills, leadership and decision making were scored lower than the other three non-technical skills (communication, team skills, and vigilance). Surgeons scored lower than nurses on communication and teamwork skills. Surgeons and anesthetists scored lower than nurses on leadership. Multidisciplinary simulation-based team training is feasible and well received by surgical teams. Non-technical skills can be assessed alongside technical skills, and differences in performance indicate where there is a need for further training. Future work should focus on developing team performance measures for training and on the development and evaluation of systematic training for technical and non-technical skills to enhance team performance and safety in surgery.
Simulating a Senate Office: The Impact on Student Knowledge and Attitudes
ERIC Educational Resources Information Center
Lay, J. Celeste; Smarick, Kathleen J.
2006-01-01
Although many instructors are now using simulations and other experiential pedagogies in their classrooms, the effectiveness of such tools has generally not been examined in a systematic way. In this paper, we assess the effectiveness of a simulation of the legislative process in the U.S. Senate as a tool for teaching college students about the…
Neale, Chris; Madill, Chris; Rauscher, Sarah; Pomès, Régis
2013-08-13
All molecular dynamics simulations are susceptible to sampling errors, which degrade the accuracy and precision of observed values. The statistical convergence of simulations containing atomistic lipid bilayers is limited by the slow relaxation of the lipid phase, which can exceed hundreds of nanoseconds. These long conformational autocorrelation times are exacerbated in the presence of charged solutes, which can induce significant distortions of the bilayer structure. Such long relaxation times represent hidden barriers that induce systematic sampling errors in simulations of solute insertion. To identify optimal methods for enhancing sampling efficiency, we quantitatively evaluate convergence rates using generalized ensemble sampling algorithms in calculations of the potential of mean force for the insertion of the ionic side chain analog of arginine in a lipid bilayer. Umbrella sampling (US) is used to restrain solute insertion depth along the bilayer normal, the order parameter commonly used in simulations of molecular solutes in lipid bilayers. When US simulations are modified to conduct random walks along the bilayer normal using a Hamiltonian exchange algorithm, systematic sampling errors are eliminated more rapidly and the rate of statistical convergence of the standard free energy of binding of the solute to the lipid bilayer is increased 3-fold. We compute the ratio of the replica flux transmitted across a defined region of the order parameter to the replica flux that entered that region in Hamiltonian exchange simulations. We show that this quantity, the transmission factor, identifies sampling barriers in degrees of freedom orthogonal to the order parameter. 
The transmission factor is used to estimate the depth-dependent conformational autocorrelation times of the simulation system, some of which exceed the simulation time, and thereby to identify solute insertion depths that are prone to systematic sampling errors and to estimate the lower bound of the amount of sampling required to resolve these errors. Finally, we extend our simulations and verify that the conformational autocorrelation times estimated by the transmission factor accurately predict correlation times that exceed the simulation time scale, something that, to our knowledge, has never before been achieved.
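The transmission factor defined above, the ratio of replica flux transmitted across a region of the order parameter to the flux that entered it, reduces to a simple trajectory analysis. The sketch below operates on a hypothetical 1-D order-parameter time series and is an illustration of the counting logic only, not the authors' analysis code:

```python
# Sketch of a transmission factor: for excursions of a replica into a
# region [lo, hi] of the order parameter, count how many exit on the far
# side (transmitted) versus returning to the entry side (reflected).
def transmission_factor(trajectory, lo, hi):
    entered = transmitted = 0
    side = None            # which side of the region we approached from
    inside = False
    for z in trajectory:
        if not inside:
            if lo <= z <= hi:
                inside = True
                entered += 1
            else:
                side = "below" if z < lo else "above"
        elif z < lo or z > hi:
            exit_side = "below" if z < lo else "above"
            if exit_side != side:      # crossed to the far side
                transmitted += 1
            side = exit_side
            inside = False
    return transmitted / entered if entered else 0.0
```

A factor near 1 indicates free diffusion across the region, while a factor near 0 flags a hidden barrier in degrees of freedom orthogonal to the order parameter, which is how the measure is used to locate poorly sampled insertion depths.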
Discrete Particle Method for Simulating Hypervelocity Impact Phenomena.
Watson, Erkai; Steinhauser, Martin O
2017-04-02
In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulation of impact events at velocities beyond 5 km/s. We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.
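The core ingredients named above, spherical particles interacting through pair potentials under explicit time integration, can be sketched in one dimension. This is a toy two-particle illustration with hypothetical parameters (a linear repulsive contact force and velocity Verlet integration), not the authors' model or units:

```python
# Minimal sketch of a DEM-style integration: two spherical particles
# interacting through a repulsive contact potential, advanced with
# velocity Verlet. Parameters and units are hypothetical.
def force(r, k=100.0, d=1.0):
    # Linear repulsion when particles overlap (separation |r| < d).
    overlap = d - abs(r)
    return (k * overlap if overlap > 0 else 0.0) * (1 if r > 0 else -1)

def step(x, v, m, dt):
    # One velocity Verlet step for the two-particle system.
    f = force(x[1] - x[0])            # force on particle 1 along +r
    a = [-f / m, f / m]               # equal and opposite accelerations
    x = [x[i] + v[i] * dt + 0.5 * a[i] * dt * dt for i in range(2)]
    f_new = force(x[1] - x[0])
    a_new = [-f_new / m, f_new / m]
    v = [v[i] + 0.5 * (a[i] + a_new[i]) * dt for i in range(2)]
    return x, v

# Head-on approach: the particles collide, repel, and separate again;
# total momentum is conserved throughout the integration.
x, v = [0.0, 2.0], [1.0, -1.0]
for _ in range(1000):
    x, v = step(x, v, m=1.0, dt=1e-3)
```

Equal and opposite pair forces make the scheme momentum-conserving by construction, and velocity Verlet keeps the energy drift small, consistent with the stability the abstract reports for the full 3-D many-particle case.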
NASA Technical Reports Server (NTRS)
Matsui, Toshihisa; Zeng, Xiping; Tao, Wei-Kuo; Masunaga, Hirohiko; Olson, William S.; Lang, Stephen
2008-01-01
This paper proposes a methodology known as the Tropical Rainfall Measuring Mission (TRMM) Triple-Sensor Three-step Evaluation Framework (T3EF) for the systematic evaluation of precipitating cloud types and microphysics in a cloud-resolving model (CRM). T3EF utilizes multi-frequency satellite simulators and novel statistics of multi-frequency radiance and backscattering signals observed from the TRMM satellite. Specifically, T3EF compares CRM and satellite observations in the form of combined probability distributions of precipitation radar (PR) reflectivity, polarization-corrected microwave brightness temperature (Tb), and infrared Tb to evaluate the candidate CRM. T3EF is used to evaluate the Goddard Cumulus Ensemble (GCE) model for cases involving the South China Sea Monsoon Experiment (SCSMEX) and Kwajalein Experiment (KWAJEX). This evaluation reveals that the GCE properly captures the satellite-measured frequencies of different precipitating cloud types in the SCSMEX case but underestimates the frequencies of deep convective and deep stratiform types in the KWAJEX case. Moreover, the GCE tends to simulate excessively large and abundant frozen condensates in deep convective clouds, as inferred from the overestimated GCE-simulated radar reflectivities and microwave Tb depressions. Unveiling the detailed errors in the GCE's performance provides clear direction for model improvements.
Kang, Xianbiao; Zhang, Rong-Hua; Gao, Chuan; Zhu, Jieshun
2017-12-07
The El Niño-Southern Oscillation (ENSO) simulated in the Community Earth System Model of the National Center for Atmospheric Research (NCAR CESM) is much stronger than in reality. Here, satellite data are used to derive a statistical relationship between interannual variations in oceanic chlorophyll (CHL) and sea surface temperature (SST), which is then incorporated into the CESM to represent oceanic chlorophyll-induced climate feedback in the tropical Pacific. Numerical runs with and without the feedback (referred to as feedback and non-feedback runs) are performed and compared with each other. The ENSO amplitude simulated in the feedback run is more accurate than that in the non-feedback run; quantitatively, the Niño3 SST index is reduced by 35% when the feedback is included. Analysis of the underlying processes shows that interannual CHL anomalies exert a systematic modulating effect on the solar radiation penetrating into the subsurface layers, which induces differential heating in the upper ocean that affects vertical mixing and thus SST. The statistical modeling approach proposed in this work offers an effective and economical way of improving climate simulations.
Polar Processes in a 50-year Simulation of Stratospheric Chemistry and Transport
NASA Technical Reports Server (NTRS)
Kawa, S.R.; Douglass, A. R.; Patrick, L. C.; Allen, D. R.; Randall, C. E.
2004-01-01
The unique chemical, dynamical, and microphysical processes that occur in the winter polar lower stratosphere are expected to interact strongly with changing climate and trace gas abundances. Significant changes in ozone have been observed, and prediction of future ozone and climate interactions depends on modeling these processes successfully. We have conducted an off-line model simulation of the stratosphere for trace gas conditions representative of 1975-2025, using meteorology from the NASA finite-volume general circulation model. The objective of this simulation is to examine the sensitivity of stratospheric ozone and chemical change to varying meteorology and trace gas inputs. This presentation will examine the dependence of ozone and related processes in polar regions on the climatological and trace gas changes in the model. The model's past performance is baselined against available observations, and a future ozone recovery scenario is forecast. Overall the model ozone simulation is quite realistic, but initial analysis of the detailed evolution of some observable processes suggests systematic shortcomings in our description of the polar chemical rates and/or mechanisms. Model sensitivities, strengths, and weaknesses will be discussed, with implications for uncertainty and confidence in coupled climate-chemistry predictions.
An engineering closure for heavily under-resolved coarse-grid CFD in large applications
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Yu, Fujiang; Jordan, Thomas
2016-11-01
Even though high-performance computation allows a very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly resolve only the large scales, thus speeding up simulation time. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns, as often observed in heat exchangers or porous structures. It is proposed to use the inviscid Euler equations on a very coarse numerical mesh. This coarse mesh need not conform to the geometry in all details. To restore the physics on all smaller scales, cheap subgrid models are employed. The subgrid models are constructed systematically by analyzing well-resolved, representative generic simulations. By varying the flow conditions in these simulations, correlations are obtained that provide, for each individual coarse mesh cell, a volume force vector and a volume porosity and, for all vertices, surface porosities. CGCFD is related to the immersed boundary method, as both exploit volume forces and non-body-conformal meshes; yet CGCFD differs with respect to the coarser mesh and the use of the Euler equations. We describe the methodology based on a simple test case and apply the method to a 127-pin wire-wrap fuel bundle.
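The closure idea can be caricatured as follows: fine-scale reference simulations are distilled into a per-cell correlation that returns a volume force for the coarse Euler solver. The Darcy-Forchheimer-type fit below is an assumed functional form chosen for illustration; the paper's actual correlations, coefficients, and porosity treatment are not specified here:

```python
import numpy as np

def fit_volume_force_correlation(u_samples, f_samples):
    """Fit |f| = a*|u| + b*|u|^2 (a Darcy-Forchheimer-type form, assumed
    here for illustration) to force/velocity pairs extracted from
    well-resolved reference simulations. Returns (a, b) via least squares."""
    u = np.abs(np.asarray(u_samples, dtype=float))
    f = np.abs(np.asarray(f_samples, dtype=float))
    A = np.column_stack([u, u * u])  # linear in the unknowns (a, b)
    (a, b), *_ = np.linalg.lstsq(A, f, rcond=None)
    return a, b

def subgrid_force(u, a, b):
    """Volume force applied in a coarse Euler cell, opposing the flow."""
    return -(a + b * abs(u)) * u

# Synthetic "reference data" generated from known coefficients plus noise,
# standing in for the well-resolved generic simulations of the abstract.
rng = np.random.default_rng(2)
u_ref = np.linspace(0.1, 2.0, 50)
f_ref = (0.5 + 1.2 * u_ref) * u_ref + 0.01 * rng.normal(size=50)
a, b = fit_volume_force_correlation(u_ref, f_ref)
```

In an actual CGCFD solver the fitted coefficients would be tabulated per cell (alongside volume and surface porosities) and evaluated inside the coarse-mesh Euler update each time step.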
Modeling the Transfer Function for the Dark Energy Survey
Chang, C.
2015-03-04
We present a forward-modeling simulation framework designed to model the data products from the Dark Energy Survey (DES). This forward-model process can be thought of as a transfer function: a mapping from cosmological/astronomical signals to the final data products used by the scientists. Using output from cosmological simulations (the Blind Cosmology Challenge), we generate simulated images (with the Ultra Fast Image Simulator) and catalogs representative of the DES data. In this work we demonstrate the framework by simulating the 244 deg² coadd images and catalogs in five bands for the DES Science Verification data. The simulation output is compared with the corresponding data to show that the major characteristics of the images and catalogs can be captured. We also point out several directions for future improvements. Two practical examples, star-galaxy classification and proximity effects on object detection, are then used to illustrate how one can use the simulations to address systematics issues in data analysis. With a clear understanding of the simplifications in our model, we show that one can use the simulations side-by-side with the data products to interpret the measurements. This forward-modeling approach is generally applicable to other upcoming and future surveys, and provides a powerful tool for systematics studies that is sufficiently realistic and highly controllable.
Vandyk, Amanda D; Lalonde, Michelle; Merali, Sabrina; Wright, Erica; Bajnok, Irmajean; Davies, Barbara
2018-04-01
Evidence on the use of simulation to teach psychiatry and mental health (including addiction) content is emerging, yet no summary of the implementation processes or associated outcomes exists. The aim of this study was to systematically search and review empirical literature on the use of psychiatry-focused simulation in undergraduate nursing education. Objectives were to (i) assess the methodological quality of existing evidence on the use of simulation to teach mental health content to undergraduate nursing students, (ii) describe the operationalization of the simulations, and (iii) summarize the associated quantitative and qualitative outcomes. We conducted online database (MEDLINE, Embase, ERIC, CINAHL, PsycINFO from January 2004 to October 2015) and grey literature searches. Thirty-two simulation studies were identified describing and evaluating six types of simulations (standardized patients, audio simulations, high-fidelity simulators, virtual world, multimodal, and tabletop). Overall, 2724 participants were included in the studies. Studies reflected a limited number of intervention designs, and outcomes were evaluated with qualitative and quantitative methods incorporating a variety of tools. Results indicated that simulation was effective in reducing student anxiety and improving their knowledge, empathy, communication, and confidence. The summarized qualitative findings all supported the benefit of simulation; however, more research is needed to assess the comparative effectiveness of the types of simulations. Recommendations from the findings include the development of guidelines for educators to deliver each simulation component (briefing, active simulation, debriefing). Finally, consensus around appropriate training of facilitators is needed, as is consistent and agreed upon simulation terminology. © 2017 Australian College of Mental Health Nurses Inc.
Monte Carlo modeling of light-tissue interactions in narrow band imaging.
Le, Du V N; Wang, Quanzeng; Ramella-Roman, Jessica C; Pfefer, T Joshua
2013-01-01
Light-tissue interactions that influence vascular contrast enhancement in narrow band imaging (NBI) have not been the subject of extensive theoretical study. In order to elucidate the relevant mechanisms in a systematic and quantitative manner, we have developed and validated a Monte Carlo model of NBI and used it to study the effect of device and tissue parameters, specifically imaging wavelength (415 versus 540 nm) and vessel diameter and depth. Simulations provided quantitative predictions of contrast, including up to 125% improvement in small, superficial vessel contrast at 415 nm over 540 nm. Our findings indicated that absorption, rather than scattering (the mechanism often cited in prior studies), was the dominant factor behind spectral variations in vessel depth-selectivity. Narrow-band images of a tissue-simulating phantom showed good agreement with the simulations in terms of trends and quantitative values. Numerical modeling represents a powerful tool for elucidating the factors that affect the performance of spectral imaging approaches such as NBI.
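The absorption-driven depth-selectivity mechanism can be illustrated with a deliberately simplified toy model (a 1-D random walk rather than the paper's full 3-D Monte Carlo; the optical coefficients below are placeholders, not tissue values from the study). Stronger absorption, as hemoglobin exhibits near 415 nm, confines photon visitation closer to the surface than weaker absorption, as near 540 nm:

```python
import numpy as np

def mean_visitation_depth(mu_a, mu_s, n_photons=20000, seed=0):
    """Toy 1-D Monte Carlo: each photon takes free paths drawn from an
    exponential with mean 1/(mu_a + mu_s), is absorbed with probability
    mu_a/(mu_a + mu_s) at each interaction, and otherwise scatters
    isotropically (direction +/-1). Returns the mean maximum depth reached
    before absorption or escape back through the surface (z < 0).
    Coefficients are illustrative, not values from the paper."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    depths = []
    for _ in range(n_photons):
        z, direction, zmax = 0.0, 1.0, 0.0  # launched into the tissue
        while True:
            z += direction * rng.exponential(1.0 / mu_t)
            if z < 0.0:  # escaped back through the surface
                break
            zmax = max(zmax, z)
            if rng.random() < mu_a / mu_t:  # absorbed
                break
            direction = rng.choice([-1.0, 1.0])  # isotropic 1-D scatter
        depths.append(zmax)
    return float(np.mean(depths))

# Stronger absorption (415 nm-like) vs weaker absorption (540 nm-like).
d_strong = mean_visitation_depth(mu_a=3.0, mu_s=10.0)
d_weak = mean_visitation_depth(mu_a=0.5, mu_s=10.0)
```

The shallower mean visitation depth under strong absorption is the toy-model analogue of the superficial-vessel selectivity at 415 nm reported in the abstract.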
Nanoscale structure and morphology of sulfonated polyphenylenes via atomistic simulations
Abbott, Lauren J.; Frischknecht, Amalie L.
2017-01-23
We performed atomistic simulations on a series of sulfonated polyphenylenes, systematically varying the degree of sulfonation and water content to determine their effect on the nanoscale structure, particularly for the hydrophilic domains formed by the ionic groups and water molecules. We found that the local structure around the ionic groups depended on the sulfonation and hydration levels, with the sulfonate groups and hydronium ions less strongly coupled at higher water contents. In addition, we characterized the morphology of the ionic domains employing two complementary clustering algorithms. At low sulfonation and hydration levels, clusters were more elongated in shape and poorly connected throughout the system. As the degree of sulfonation and water content were increased, the clusters became more spherical, and a fully percolated ionic domain was formed. These structural details have important implications for ion transport.
An RF-only ion-funnel for extraction from high-pressure gases
Brunner, T.; Fudenberg, D.; Varentsov, V.; ...
2015-01-27
An RF ion-funnel technique has been developed to extract ions from a high-pressure (10 bar) noble-gas environment into vacuum (10⁻⁶ mbar). Detailed simulations have been performed and a prototype has been developed for the purpose of extracting 136Ba ions from Xe gas with high efficiency. With this prototype, ions have been extracted for the first time from high-pressure xenon gas and argon gas. Systematic studies have been carried out and compared to simulations. This demonstration of the extraction of ions with mass comparable to that of the gas generating the high pressure has applications to Ba tagging from a Xe-gas time-projection chamber for double-beta decay, as well as to the general problem of recovering trace amounts of an ionized element in a heavy (m > 40 u) carrier gas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, B.; Erni, W.; Krusche, B.
Simulation results for future measurements of electromagnetic proton form factors at $\overline{\rm P}$ANDA (FAIR) within the PandaRoot software framework are reported. The statistical precision with which the proton form factors can be determined is estimated. The signal channel p̄p → e⁺e⁻ is studied on the basis of two different but consistent procedures, and the suppression of the main background channel, p̄p → π⁺π⁻, is studied. Furthermore, the background versus signal efficiency and the statistical and systematic uncertainties on the extracted proton form factors are evaluated using two different procedures. The results are consistent with those of a previous simulation study that used an older, simplified framework; moreover, a slightly better precision is achieved in the PandaRoot study over a large range of momentum transfer, assuming the nominal beam conditions and detector performance.