NASA Astrophysics Data System (ADS)
Riaz, Muhammad
The purpose of this study was to examine how simulations in physics class, classroom management, laboratory practice, student engagement, critical thinking, cooperative learning, and use of simulations predicted the percentage of students achieving a grade point average of B or higher and their academic performance as reported by teachers in secondary school physics classes. The target population consisted of secondary school physics teachers who were members of the Science, Technology, Engineering and Mathematics Teachers of New York City (STEMteachersNYC) and the American Modeling Teachers Association (AMTA) and who used simulations in their physics classes in the 2013 and 2014 school years. Subjects for this study were volunteers. A survey was constructed based on a literature review, and eighty-two physics teachers completed it, reporting on their instructional practice in physics. All respondents were anonymous. Classroom management was the only predictor of the percentage of students achieving a grade point average of B or higher in high school physics class. Cooperative learning, use of simulations, and student engagement were predictors of teachers' views of student academic performance in high school physics class. All other variables -- classroom management, laboratory practice, critical thinking, and teacher self-efficacy -- were not predictors of teachers' views of student academic performance in high school physics class. The implications of these findings were discussed and recommendations for physics teachers to improve student learning were presented.
Ohtake, Patricia J; Lazarus, Marcilene; Schillo, Rebecca; Rosen, Michael
2013-02-01
Rehabilitation of patients in critical care environments improves functional outcomes. This finding has led to increased implementation of intensive care unit (ICU) rehabilitation programs, including early mobility, and an associated increased demand for physical therapists practicing in ICUs. Unfortunately, many physical therapists report being inadequately prepared to work in this high-risk environment. Simulation provides focused, deliberate practice in safe, controlled learning environments and may be a method to initiate academic preparation of physical therapists for ICU practice. The purpose of this study was to examine the effect of participation in simulation-based management of a patient with critical illness in an ICU setting on levels of confidence and satisfaction in physical therapist students. A one-group, pretest-posttest, quasi-experimental design was used. Physical therapist students (N=43) participated in a critical care simulation experience requiring technical (assessing bed mobility and pulmonary status), behavioral (patient and interprofessional communication), and cognitive (recognizing a patient status change and initiating appropriate responses) skill performance. Student confidence and satisfaction were surveyed before and after the simulation experience. Students' confidence in their technical, behavioral, and cognitive skill performance increased from "somewhat confident" to "confident" following the critical care simulation experience. Student satisfaction was highly positive, with strong agreement that the simulation experience was valuable, reinforced course content, and was a useful educational tool. Limitations of the study were the small sample from one university and the lack of a control group. Incorporating a simulated, interprofessional critical care experience into a required clinical course improved physical therapist student confidence in technical, behavioral, and cognitive performance measures and was associated with high student satisfaction. Through simulation, students were introduced to the critical care environment, which may increase their interest in working in this practice area.
Relation of Parallel Discrete Event Simulation algorithms with physical models
NASA Astrophysics Data System (ADS)
Shchur, L. N.; Shchur, L. V.
2015-09-01
We extend the concept of local simulation times in parallel discrete event simulation (PDES) to take into account the architecture of current hardware and software in high-performance computing. We briefly review previous research on the mapping of PDES onto physical problems, and emphasise how physical results may help to predict the behaviour of parallel algorithms.
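The local-time idea can be made concrete with a toy conservative PDES loop. The Python sketch below is purely illustrative and not the authors' model: the ring topology, lookahead value, and event payloads are all assumptions. Each logical process (LP) advances its local virtual time only within a globally safe window derived from the minimum pending timestamp plus a fixed lookahead.

```python
import heapq

# A minimal, illustrative conservative PDES: each logical process (LP) keeps a
# local virtual time (LVT) and only processes events inside a globally safe
# window [min pending timestamp, min + LOOKAHEAD), since no event generated in
# that window can influence another event inside it.

LOOKAHEAD = 1.0  # assumed minimum delay between any two LPs

class LP:
    def __init__(self, lid):
        self.lid, self.lvt, self.queue = lid, 0.0, []

    def post(self, t, payload):
        heapq.heappush(self.queue, (t, payload))

def step(lps):
    pending = [lp.queue[0][0] for lp in lps if lp.queue]
    if not pending:
        return 0
    horizon = min(pending) + LOOKAHEAD      # end of the safe window
    done = 0
    for lp in lps:                          # in real PDES this loop runs in parallel
        while lp.queue and lp.queue[0][0] < horizon:
            t, payload = heapq.heappop(lp.queue)
            lp.lvt = t                      # advance local virtual time
            nbr = lps[(lp.lid + 1) % len(lps)]
            nbr.post(t + LOOKAHEAD, payload)  # events never travel back in time
            done += 1
    return done

lps = [LP(i) for i in range(4)]
lps[0].post(0.5, "token")
while step(lps):
    if max(lp.lvt for lp in lps) > 5.0:
        break
print([round(lp.lvt, 1) for lp in lps])
```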
IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.
Ha, Vi Q; Lykotrafitis, George
2016-12-08
We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations - an object-oriented, easy-to-use, high-performance C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space, while a network facilitates long-range particle interactions. The Message Passing Interface is used for inter-processor communication in all simulations.
The effects of fatigue on performance in simulated nursing work.
Barker, Linsey M; Nussbaum, Maury A
2011-09-01
Fatigue is associated with increased rates of medical errors and healthcare worker injuries, yet existing research in this sector has not considered multiple dimensions of fatigue simultaneously. This study evaluated hypothesised causal relationships between mental and physical fatigue and performance. High and low levels of mental and physical fatigue were induced in 16 participants during simulated nursing work tasks in a laboratory setting. Task-induced changes in fatigue dimensions were quantified using both subjective and objective measures, as were changes in performance on physical and mental tasks. Completing the simulated work tasks increased total fatigue, mental fatigue and physical fatigue in all experimental conditions. Higher physical fatigue adversely affected measures of physical and mental performance, whereas higher mental fatigue had a positive effect on one measure of mental performance. Overall, these results suggest causal effects between manipulated levels of mental and physical fatigue and task-induced changes in mental and physical performance. STATEMENT OF RELEVANCE: Nurse fatigue and performance have implications for patient and provider safety. Results from this study demonstrate the importance of a multidimensional view of fatigue in understanding the causal relationships between fatigue and performance. The findings can guide future work aimed at predicting fatigue-related performance decrements and designing interventions.
High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics
Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis
2014-07-28
The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.
Development of IR imaging system simulator
NASA Astrophysics Data System (ADS)
Xiang, Xinglang; He, Guojing; Dong, Weike; Dong, Lu
2017-02-01
To overcome the disadvantages of traditional semi-physical simulation and injection simulation equipment in the performance evaluation of infrared imaging systems (IRIS), a low-cost and reconfigurable IRIS simulator, which can simulate the realistic physical process of infrared imaging, is proposed to test and evaluate the performance of the IRIS. Following the theoretical simulation framework and the theoretical models of the IRIS, the architecture of the IRIS simulator is constructed. The 3D scenes are generated and the infrared atmospheric transmission effects are simulated in real time on the computer using OGRE technology. The physical effects of the IRIS are classified into signal response, modulation transfer and noise characteristics, and they are simulated in real time on a single-board signal-processing platform built around an FPGA, using high-speed parallel computation.
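These three simulated effects map naturally onto a short image-processing chain. The Python sketch below is a minimal stand-in, not the paper's FPGA implementation: the Gaussian MTF, gain/offset values, noise levels, and ADC depth are assumptions chosen for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy imaging chain for the three IRIS effects named in the abstract:
# modulation transfer (modelled here as a Gaussian blur), signal response
# (gain/offset), and noise (shot noise plus Gaussian read noise).

rng = np.random.default_rng(0)
scene = rng.uniform(200.0, 400.0, size=(256, 256))  # toy radiance map

gain, offset = 50.0, 100.0     # assumed detector responsivity and offset [DN]
mtf_sigma_px = 1.2             # blur width standing in for the system MTF
read_noise_dn = 5.0            # assumed read-noise standard deviation [DN]

blurred = gaussian_filter(scene, sigma=mtf_sigma_px)         # modulation transfer
signal = gain * blurred / blurred.mean() + offset            # signal response
noisy = rng.poisson(np.clip(signal, 0, None)).astype(float)  # shot noise
noisy += rng.normal(0.0, read_noise_dn, signal.shape)        # read noise
image = np.clip(np.round(noisy), 0, 2**14 - 1)               # 14-bit ADC

print("mean %.1f DN, std %.1f DN" % (image.mean(), image.std()))
```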
Coswig, Victor S; Gentil, Paulo; Bueno, João C A; Follmer, Bruno; Marques, Vitor A; Del Vecchio, Fabrício B
2018-01-01
Among combat sports, Judo and Brazilian Jiu-Jitsu (BJJ) present elevated physical fitness demands from the high-intensity intermittent efforts. However, information regarding how metabolic and neuromuscular physical fitness is associated with technical-tactical performance in Judo and BJJ fights is not available. This study aimed to relate indicators of physical fitness with combat performance variables in Judo and BJJ. The sample consisted of Judo (n = 16) and BJJ (n = 24) male athletes. At the first meeting, the physical tests were applied and, in the second, simulated fights were performed for later notational analysis. The main findings indicate: (i) high reproducibility of the proposed instrument and protocol used for notational analysis in a mobile device; (ii) differences in the technical-tactical and time-motion patterns between modalities; (iii) performance-related variables are different in Judo and BJJ; and (iv) regression models based on metabolic fitness variables may account for up to 53% of the variances in technical-tactical and/or time-motion variables in Judo and up to 31% in BJJ, whereas neuromuscular fitness models can reach values up to 44 and 73% of prediction in Judo and BJJ, respectively. When all components are combined, they can explain up to 90% of high intensity actions in Judo. In conclusion, performance prediction models in simulated combat indicate that anaerobic, aerobic and neuromuscular fitness variables contribute to explain time-motion variables associated with high intensity and technical-tactical variables in Judo and BJJ fights.
Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.
2014-11-23
This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy's Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.
Real-time haptic cutting of high-resolution soft tissues.
Wu, Jun; Westermann, Rüdiger; Dick, Christian
2014-01-01
We present our systematic efforts in advancing the computational performance of physically accurate soft tissue cutting simulation, which is at the core of surgery simulators in general. We demonstrate real-time performance of 15 simulation frames per second for haptic soft tissue cutting of a deformable body at an effective resolution of 170,000 finite elements. This is achieved by the following innovative components: (1) a linked octree discretization of the deformable body, which allows for fast and robust topological modifications of the simulation domain, (2) a composite finite element formulation, which substantially reduces the number of simulation degrees of freedom and thus makes it possible to carefully balance simulation performance and accuracy, (3) a highly efficient geometric multigrid solver for solving the linear systems of equations arising from implicit time integration, (4) an efficient collision detection algorithm that effectively exploits the composition structure, and (5) a stable haptic rendering algorithm for computing the feedback forces. Considering that our method increases the finite element resolution for physically accurate real-time soft tissue cutting simulation by an order of magnitude, our technique has a high potential to significantly advance the realism of surgery simulators.
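Component (3), the geometric multigrid solver, is the kind of building block that can be sketched compactly. The Python below is a minimal 1D V-cycle for a Poisson model problem, assuming weighted-Jacobi smoothing, injection restriction, and linear prolongation; it only illustrates the cycle structure, not the paper's 3D composite-element solver.

```python
import numpy as np

# Minimal recursive geometric multigrid V-cycle for -u'' = f on [0,1] with
# homogeneous Dirichlet boundaries (assumed model problem).

def jacobi(u, f, h, sweeps, w=2/3):
    for _ in range(sweeps):                      # weighted-Jacobi smoothing
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    if u.size <= 3:                              # coarsest grid: solve directly
        return jacobi(u, f, h, sweeps=50)
    u = jacobi(u, f, h, sweeps=3)                # pre-smooth
    r = residual(u, f, h)
    ec = v_cycle(np.zeros(r[::2].size), r[::2].copy(), 2 * h)  # coarse correction
    e = np.zeros_like(u)                         # prolong by linear interpolation
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, sweeps=3)         # post-smooth

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                 # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error vs exact:", np.abs(u - np.sin(np.pi * x)).max())
```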
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g., neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate based UQ approach is developed, and its performance is compared to that of the KL approach and a brute-force Monte Carlo (MC) approach. In addition, an efficient Data Assimilation (DA) algorithm is developed to refine information about model parameters: nuclear data cross-sections and thermal-hydraulic parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling required for DA tractable for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes, and predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from individual sources of uncertainty, and experimental effort can subsequently be directed to further reduce the uncertainty associated with these sources.
In this dissertation, a subspace-based, gradient-free, nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for an assembly-level problem (CASL Progression Problem 6) and a core-wide problem representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem 9), modeled using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
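The Karhunen-Loeve reduction at the heart of such a UQ workflow can be illustrated in a few lines. The Python sketch below is not ROMUSE: the exponential input covariance, the 99% variance cutoff, and the toy model are assumptions standing in for correlated cross-section uncertainties and the coupled code.

```python
import numpy as np

# KL-style reduction for UQ: a high-dimensional correlated input is expanded
# in the few dominant eigenmodes of its covariance, and samples drawn in the
# reduced space are pushed through a (here: toy) model.

rng = np.random.default_rng(1)
n = 500                                   # nominal input dimension

# Assumed exponential covariance between input components.
idx = np.arange(n)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 50.0)

lam, phi = np.linalg.eigh(cov)            # eigen-decomposition of covariance
order = np.argsort(lam)[::-1]
lam, phi = lam[order], phi[:, order]
k = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99) + 1  # 99% of variance
print("retained KL modes:", k, "of", n)

mu = np.ones(n)                           # nominal (mean) input
def model(x):                             # toy stand-in for the coupled code
    return np.sum(np.sin(x)) / n

# Sample k standard normals instead of n correlated variables.
xi = rng.standard_normal((2000, k))
X = mu + (xi * np.sqrt(lam[:k])) @ phi[:, :k].T
qoi = np.array([model(x) for x in X])
print("QoI mean %.4f, std %.4f" % (qoi.mean(), qoi.std()))
```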
Weakly supervised classification in high energy physics
Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; ...
2017-05-01
As machine learning algorithms become increasingly sophisticated in exploiting subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification, in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics - quark versus gluon tagging - we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
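The proportions-only training idea can be demonstrated on a toy problem. The Python sketch below illustrates learning from label proportions in the same spirit, but it is not the paper's method or dataset: the Gaussian features, batch fractions, and squared-proportion loss are all assumptions.

```python
import numpy as np

# Weak supervision from class proportions: each batch carries a known signal
# fraction p but no per-example labels; a logistic model is trained so that
# its mean predicted probability per batch matches p.

rng = np.random.default_rng(0)

def make_batch(p, n=500):
    y = (rng.random(n) < p).astype(float)        # true labels (hidden in training)
    x = rng.normal(np.where(y == 1, 1.0, -1.0)[:, None], 2.0, size=(n, 2))
    return x, y, p

batches = [make_batch(p) for p in (0.2, 0.5, 0.8) for _ in range(20)]

w, b, lr = np.zeros(2), 0.0, 0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    for x, _, p in batches:                      # labels ignored on purpose
        q = sigmoid(x @ w + b)                   # per-example predictions
        err = q.mean() - p                       # proportion mismatch
        grad = q * (1 - q)                       # d(sigmoid)/dz
        w -= lr * 2 * err * (grad[:, None] * x).mean(axis=0)
        b -= lr * 2 * err * grad.mean()

# The learned score separates the classes despite never seeing a label.
x, y, _ = make_batch(0.5, n=2000)
acc = ((sigmoid(x @ w + b) > 0.5) == (y == 1)).mean()
print("accuracy vs hidden labels: %.2f" % acc)
```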
Foulis, Stephen A; Redmond, Jan E; Frykman, Peter N; Warr, Bradley J; Zambraski, Edward J; Sharp, Marilyn A
2017-12-01
Foulis, SA, Redmond, JE, Frykman, PN, Warr, BJ, Zambraski, EJ, and Sharp, MA. U.S. Army physical demands study: reliability of simulations of physically demanding tasks performed by combat arms soldiers. J Strength Cond Res 31(12): 3245-3252, 2017. Recently, the U.S. Army has mandated that soldiers must successfully complete the physically demanding tasks of their job to graduate from their Initial Military Training. Evaluating individual soldiers in the field is difficult; however, simulations of these tasks may aid in the assessment of soldiers' abilities. The purpose of this study was to determine the reliability of simulated physical soldiering tasks relevant to combat arms soldiers. Three cohorts of ∼50 soldiers each repeated a subset of 8 simulated tasks 4 times over 2 weeks. The simulations included a sandbag carry, a casualty drag, casualty evacuation from a vehicle turret, movement under direct fire, stowing ammunition on a tank, loading the main gun of a tank, transferring ammunition with a field artillery supply vehicle, and a 4-mile foot march. Reliability was assessed using intraclass correlation coefficients (ICCs), standard errors of measurement (SEMs), and 95% limits of agreement. Performance of the casualty drag and foot march did not improve across trials (p > 0.05), whereas improvements, suggestive of learning effects, were observed on the remaining 6 tasks (p ≤ 0.05). The ICCs ranged from 0.76 to 0.96, and the SEMs ranged from 3 to 16% of the mean. These 8 simulated tasks show high reliability. Given proper practice, they are suitable for evaluating the ability of combat arms soldiers to complete the physical requirements of their jobs.
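The reliability statistics quoted here follow standard definitions. The Python sketch below works through a two-way random-effects ICC(2,1) and the SEM on simulated repeated-trial data; the cohort size, learning effect, and noise level are invented for illustration and are not the study's data.

```python
import numpy as np

# ICC(2,1) by two-way ANOVA (Shrout & Fleiss) plus SEM = SD * sqrt(1 - ICC).

rng = np.random.default_rng(2)
n_soldiers, n_trials = 50, 4
ability = rng.normal(300.0, 30.0, n_soldiers)       # true task times [s]
practice = np.array([6.0, 3.0, 1.0, 0.0])           # assumed learning effect
scores = ability[:, None] + practice + rng.normal(0, 10.0, (n_soldiers, n_trials))

grand = scores.mean()
ms_rows = n_trials * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_soldiers - 1)
ms_cols = n_soldiers * ((scores.mean(axis=0) - grand) ** 2).sum() / (n_trials - 1)
ss_err = ((scores - scores.mean(1, keepdims=True)
           - scores.mean(0, keepdims=True) + grand) ** 2).sum()
ms_err = ss_err / ((n_soldiers - 1) * (n_trials - 1))

icc = (ms_rows - ms_err) / (ms_rows + (n_trials - 1) * ms_err
                            + n_trials * (ms_cols - ms_err) / n_soldiers)
sem = scores.std(ddof=1) * np.sqrt(1 - icc)
print("ICC(2,1) = %.2f   SEM = %.1f s (%.1f%% of mean)"
      % (icc, sem, 100 * sem / grand))
```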
Petascale computation of multi-physics seismic simulations
NASA Astrophysics Data System (ADS)
Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie; Duru, Kenneth C.
2017-04-01
Capturing the observed complexity of earthquake sources in concurrence with seismic wave propagation simulations is an inherently multi-scale, multi-physics problem. In this presentation, we present simulations of earthquake scenarios resolving high-detail dynamic rupture evolution and high-frequency ground motion. The simulations combine a multitude of representations of model complexity: non-linear fault friction, thermal and fluid effects, heterogeneous initial fault stress and fault strength, fault curvature and roughness, and on- and off-fault non-elastic failure to capture dynamic rupture behavior at the source; and seismic wave attenuation, 3D subsurface structure and bathymetry impacting seismic wave propagation. Performing such scenarios at the necessary spatio-temporal resolution requires highly optimized and massively parallel simulation tools which can efficiently exploit HPC facilities. Our up to multi-PetaFLOP simulations are performed with SeisSol (www.seissol.org), an open-source software package based on an ADER-Discontinuous Galerkin (DG) scheme solving the seismic wave equations in velocity-stress formulation in elastic, viscoelastic, and viscoplastic media with high-order accuracy in time and space. Our flux-based implementation of frictional failure remains free of spurious oscillations. Tetrahedral unstructured meshes allow for complicated model geometry. SeisSol has been optimized on all software levels, including: assembler-level DG kernels which obtain 50% peak performance on some of the largest supercomputers worldwide; an overlapping MPI-OpenMP parallelization shadowing the multiphysics computations; usage of local time stepping; parallel input and output schemes; and direct interfaces to community-standard data formats. All these factors serve to minimise the time-to-solution. The results presented highlight the fact that modern numerical methods and hardware-aware optimization for modern supercomputers are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis. Lastly, we conclude with an outlook on future exascale ADER-DG solvers for seismological applications.
Load management strategy for Particle-In-Cell simulations in high energy particle acceleration
NASA Astrophysics Data System (ADS)
Beck, A.; Frederiksen, J. T.; Dérouillat, J.
2016-09-01
In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performance. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue, as well as milestones towards a modern, accurate, high-performance PIC code for high energy particle acceleration.
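Why load imbalance dominates LWFA runs, and the usual remedy, can be shown with a toy repartitioning. The Python below is an assumed illustration, not the paper's algorithm: domain boundaries are recomputed from the cumulative particle count so that each rank carries a similar number of particles.

```python
import numpy as np

# In LWFA most particles bunch up in a small region, so a uniform domain
# decomposition leaves one rank doing nearly all the work. Rebalancing by
# cumulative particle count equalizes the per-rank load.

rng = np.random.default_rng(3)
n_cells, n_ranks = 1024, 8

counts = rng.poisson(5, n_cells).astype(float)   # dilute background plasma
counts[600:620] += 5000                          # dense accelerated bunch

cum = np.cumsum(counts)
targets = cum[-1] * (np.arange(1, n_ranks) / n_ranks)
cuts = np.searchsorted(cum, targets)             # cell index where each rank ends
bounds = np.concatenate(([0], cuts, [n_cells]))

loads = [counts[a:b].sum() for a, b in zip(bounds[:-1], bounds[1:])]
uniform = [counts[i * n_cells // n_ranks:(i + 1) * n_cells // n_ranks].sum()
           for i in range(n_ranks)]
print("uniform split imbalance: %.1fx" % (max(uniform) / np.mean(uniform)))
print("balanced split imbalance: %.2fx" % (max(loads) / np.mean(loads)))
```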
Nishihara, Yuichi; Isobe, Yoh; Kitagawa, Yuko
2017-12-01
A realistic simulator for transabdominal preperitoneal (TAPP) inguinal hernia repair would enhance surgeons' training experience before they enter the operating theater. The purpose of this study was to create a novel physical simulator for TAPP inguinal hernia repair and obtain surgeons' opinions regarding its efficacy. Our novel TAPP inguinal hernia repair simulator consists of a physical laparoscopy simulator and a handmade organ replica model. The physical laparoscopy simulator was created by three-dimensional (3D) printing technology, and it represents the trunk of the human body and the bendability of the abdominal wall under pneumoperitoneal pressure. The organ replica model was manually created by assembling materials. The TAPP inguinal hernia repair simulator allows for the performance of all procedures required in TAPP inguinal hernia repair. Fifteen general surgeons performed TAPP inguinal hernia repair using our simulator. Their opinions were scored on a 5-point Likert scale. All participants strongly agreed that the 3D-printed physical simulator and organ replica model were highly useful for TAPP inguinal hernia repair training (median, 5 points) and TAPP inguinal hernia repair education (median, 5 points). They felt that the simulator would be effective for TAPP inguinal hernia repair training before entering the operating theater. All surgeons considered that this simulator should be introduced in the residency curriculum. We successfully created a physical simulator for TAPP inguinal hernia repair training using 3D printing technology and a handmade organ replica model created with inexpensive, readily accessible materials. Preoperative TAPP inguinal hernia repair training using this simulator and organ replica model may be of benefit in the training of all surgeons. All general surgeons involved in the present study felt that this simulator and organ replica model should be used in their residency curriculum.
Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production
NASA Astrophysics Data System (ADS)
Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne
2018-05-01
A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses of high energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes, using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time of this fast simulation method is 10^4 times shorter than that of the full GEANT4 simulation.
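The geometric part of such a fast simulation is simple enough to sketch. The Python below is a toy under stated assumptions (exponential conversion depth with mean 9/7 of a radiation length, a characteristic opening angle of order m_e/E, straight-line tracks, and invented material and energy values); it is not the paper's model.

```python
import numpy as np

# Toy geometric model: a photon from the origin converts at radius r; the
# electron leaves the conversion point at a small angle theta to the photon
# direction, so its straight-line track misses the origin by d = r * sin(theta).

rng = np.random.default_rng(4)
X0 = 9.37                 # assumed radiation length of the material [cm]
E_gamma = 1000.0          # assumed photon energy [MeV]
m_e = 0.511               # electron mass [MeV]

n = 100_000
r = rng.exponential(9.0 / 7.0 * X0, n)      # pair-production conversion depth
r = r[r < 50.0]                             # keep conversions inside 50 cm
theta = rng.exponential(m_e / (E_gamma / 2.0), r.size)  # forward-peaked angle
d = r * np.sin(theta)                       # impact parameter w.r.t. the origin

print("conversions in acceptance:", r.size)
print("median impact parameter: %.1f micron" % (1e4 * np.median(d)))
```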
HEP Software Foundation Community White Paper Working Group - Detector Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apostolakis, J.
A working group on detector simulation was formed as part of the high-energy physics (HEP) Software Foundation's initiative to prepare a Community White Paper that describes the main software challenges and opportunities to be faced in the HEP field over the next decade. The working group met over a period of several months in order to review the current status of the Full and Fast simulation applications of HEP experiments and the improvements that will need to be made in order to meet the goals of future HEP experimental programmes. The scope of the topics covered includes the main components of a HEP simulation application, such as MC truth handling, geometry modeling, particle propagation in materials and fields, physics modeling of the interactions of particles with matter, the treatment of pileup and other backgrounds, as well as signal processing and digitisation. The resulting work programme described in this document focuses on the need to improve both the software performance and the physics of detector simulation. The goals are to increase the accuracy of the physics models and expand their applicability to future physics programmes, while achieving large factors in computing performance gains consistent with projections on available computing resources.
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models in sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than those in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models in higher resolution within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolution to capture the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million grids in a practical time (e.g., less than a second per time step).
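For reference, the scalability claim corresponds to the usual strong-scaling metrics, computed below in Python from invented timings (the numbers are illustrative, not the paper's measurements).

```python
# Strong-scaling metrics behind a claim of "almost linear speedup":
# speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p.

timings = {1: 10000.0, 1024: 10.4, 4096: 2.8, 16384: 0.78}  # toy seconds/step

t1 = timings[1]
for p, t in sorted(timings.items()):
    speedup = t1 / t
    print("p=%6d  S=%8.0f  efficiency=%5.1f%%" % (p, speedup, 100 * speedup / p))
```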
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-01-01
The simulation was performed on 64K cores of Intrepid, running at 0.25 simulated-years-per-day and taking 25 million core-hours. This is the first simulation using both the CAM5 physics and the highly scalable spectral element dynamical core. The animation of Total Precipitable Water clearly shows hurricanes developing in the Atlantic and Pacific.
Physical Scaffolding Accelerates the Evolution of Robot Behavior.
Buckingham, David; Bongard, Josh
2017-01-01
In some evolutionary robotics experiments, evolved robots are transferred from simulation to reality, while sensor/motor data flows back from reality to improve the next transferral. We envision a generalization of this approach: a simulation-to-reality pipeline. In this pipeline, increasingly embodied agents flow up through a sequence of increasingly physically realistic simulators, while data flows back down to improve the next transferral between neighboring simulators; physical reality is the last link in this chain. As a first proof of concept, we introduce a two-link chain: A fast yet low-fidelity (lo-fi) simulator hosts minimally embodied agents, which gradually evolve controllers and morphologies to colonize a slow yet high-fidelity (hi-fi) simulator. The agents are thus physically scaffolded. We show here that, given the same computational budget, these physically scaffolded robots reach higher performance in the hi-fi simulator than do robots that only evolve in the hi-fi simulator, but only for a sufficiently difficult task. These results suggest that a simulation-to-reality pipeline may strike a good balance between accelerating evolution in simulation while anchoring the results in reality, free the investigator from having to prespecify the robot's morphology, and pave the way to scalable, automated, robot-generating systems.
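The budget argument can be reproduced on a one-dimensional toy. The Python sketch below is an assumed illustration of the two-link chain (scalar "morphologies", made-up fitness functions and evaluation costs), not the paper's robot experiments.

```python
import random

# Physical scaffolding, toy version: evolve in a cheap lo-fi simulator first,
# then continue evolution of the survivors in an expensive hi-fi simulator,
# all under one shared evaluation budget.

random.seed(0)
TARGET = 0.7

def lofi(x):  return -abs(x - TARGET)                 # cheap approximate fitness
def hifi(x):  return -abs(x - TARGET) - 0.3 * x * x   # costly "true" fitness
HIFI_COST = 20                                        # hi-fi eval = 20 lo-fi evals

def evolve(pop, fitness, evals, cost):
    budget = evals
    while budget >= cost * len(pop):
        scored = sorted(pop, key=fitness, reverse=True)  # evaluate everyone
        budget -= cost * len(pop)
        parents = scored[: len(pop) // 2]                # truncation selection
        pop = parents + [p + random.gauss(0, 0.1) for p in parents]
    return pop

BUDGET = 4000
pop0 = [random.uniform(-2, 2) for _ in range(20)]

# Scaffolded: half the budget in lo-fi, then transfer to hi-fi.
scaffolded = evolve(evolve(list(pop0), lofi, BUDGET // 2, 1),
                    hifi, BUDGET // 2, HIFI_COST)
# Baseline: the whole budget spent directly in hi-fi.
baseline = evolve(list(pop0), hifi, BUDGET, HIFI_COST)

print("scaffolded best:", max(hifi(x) for x in scaffolded))
print("hi-fi only best:", max(hifi(x) for x in baseline))
```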
An Integrated Study on a Novel High Temperature High Entropy Alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Shizhong
2016-12-31
This report summarizes our recent work on theoretical modeling, simulation, and experimental validation of the simulation results for new refractory high entropy alloy (HEA) design and for oxide-doped refractory HEA research. Simulations of the stability and thermodynamics of potentially thermally stable candidates were performed, and related oxide-doped HEA samples were synthesized and characterized. The development of ab initio density functional theory and molecular dynamics methods for simulating HEA physical properties and of experimental texture validation techniques, the achievements reached so far, course work development, student and postdoc training, and directions for future research are briefly introduced.
Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing
2011-01-01
Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779
An intelligent tutoring system for the investigation of high performance skill acquisition
NASA Technical Reports Server (NTRS)
Fink, Pamela K.; Herren, L. Tandy; Regian, J. Wesley
1991-01-01
The issue of training high performance skills is of increasing concern. These skills include tasks such as driving a car, playing the piano, and flying an aircraft. Traditionally, the training of high performance skills has been accomplished through the use of expensive, high-fidelity, 3-D simulators, and/or on-the-job training using the actual equipment. Such an approach to training is quite expensive. This paper describes the design, implementation, and deployment of an intelligent tutoring system developed for the purpose of studying the effectiveness of skill acquisition using lower-cost, lower-physical-fidelity, 2-D simulation. Preliminary experimental results are quite encouraging, indicating that intelligent tutoring systems are a cost-effective means of training high performance skills.
plasmaFoam: An OpenFOAM framework for computational plasma physics and chemistry
NASA Astrophysics Data System (ADS)
Venkattraman, Ayyaswamy; Verma, Abhishek Kumar
2016-09-01
As emphasized in the 2012 Roadmap for low temperature plasmas (LTP), scientific computing has emerged as an essential tool for the investigation and prediction of the fundamental physical and chemical processes associated with these systems. While several in-house and commercial codes exist, each with its own advantages and disadvantages, a common framework that can be developed by researchers from all over the world will likely accelerate the impact of computational studies on advances in low-temperature plasma physics and chemistry. In this regard, we present a finite volume computational toolbox to perform high-fidelity simulations of LTP systems. This framework, primarily based on the OpenFOAM solver suite, allows us to enhance our understanding of multiscale plasma phenomena by performing massively parallel, three-dimensional simulations on unstructured meshes using well-established high performance computing tools that are widely used in the computational fluid dynamics community. In this talk, we will present preliminary results obtained using the OpenFOAM-based solver suite with benchmark three-dimensional simulations of microplasma devices including both dielectric and plasma regions. We will also discuss the future outlook for the solver suite.
2013-01-01
Background: The validity of studies describing clinicians' judgements based on their responses to paper cases is questionable, because commonly used paper case simulations only partly reflect real clinical environments. In this study we test whether paper case simulations evoke similar risk assessment judgements to the more realistic simulated patients used in high fidelity physical simulations. Methods: 97 nurses (34 experienced nurses and 63 student nurses) made dichotomous assessments of risk of acute deterioration on the same 25 simulated scenarios in both paper case and physical simulation settings. Scenarios were generated from real patient cases. Measures of judgement 'ecology' were derived from the same case records. The relationship between nurses' judgements, actual patient outcomes (i.e. ecological criteria), and patient characteristics was described using the methodology of judgement analysis. Logistic regression models were constructed to calculate Lens Model Equation parameters. Parameters were then compared between the modeled paper-case and physical-simulation judgements. Results: Participants had significantly less achievement (r_a) judging physical simulations than when judging paper cases. They used less modelable knowledge (G) with physical simulations than with paper cases, while retaining similar cognitive control and consistency on repeated patients. Respiration rate, the most important cue for predicting patient risk in the ecological model, was weighted most heavily by participants. Conclusions: To the extent that accuracy in judgement analysis studies is a function of task representativeness, improving task representativeness via high fidelity physical simulations resulted in lower judgement performance in risk assessments amongst nurses when compared to paper case simulations. Lens Model statistics could prove useful when comparing different options for the design of simulations used in clinical judgement analysis. The approach outlined may be of value to those designing and evaluating clinical simulations as part of education and training strategies aimed at improving clinical judgement and reasoning. PMID:23718556
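The Lens Model Equation parameters named here (achievement r_a, knowledge G, cognitive control R_s, ecological predictability R_e) can be computed in a few lines. The Python sketch below uses simulated data and linear least-squares cue models for simplicity; the study itself fitted logistic regressions to dichotomous judgements, so treat this only as an illustration of the decomposition.

```python
import numpy as np

# Lens Model Equation: r_a = G*R_s*R_e + C*sqrt(1-R_s^2)*sqrt(1-R_e^2),
# computed from parallel cue models of the judge and of the ecology.

rng = np.random.default_rng(5)
n = 200
cues = rng.normal(size=(n, 3))            # e.g. respiration rate, pulse, BP
beta_env = np.array([0.9, 0.3, 0.1])      # ecology: how cues drive outcomes
beta_jud = np.array([0.7, 0.5, 0.0])      # judge: how cues drive judgements
outcome = cues @ beta_env + rng.normal(0, 0.8, n)
judgement = cues @ beta_jud + rng.normal(0, 0.8, n)

X = np.column_stack([np.ones(n), cues])   # cue model with intercept
def fit(y):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ w

yhat_e, yhat_s = fit(outcome), fit(judgement)
r = lambda a, b: np.corrcoef(a, b)[0, 1]

r_a = r(judgement, outcome)               # achievement
G = r(yhat_s, yhat_e)                     # modelable knowledge
R_s = r(judgement, yhat_s)                # cognitive control
R_e = r(outcome, yhat_e)                  # ecological predictability
C = r(judgement - yhat_s, outcome - yhat_e)   # unmodeled knowledge

lme = G * R_s * R_e + C * np.sqrt(1 - R_s**2) * np.sqrt(1 - R_e**2)
print("r_a=%.3f  reconstructed=%.3f  G=%.3f" % (r_a, lme, G))
```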
Cheng, Adam; Hunt, Elizabeth A; Donoghue, Aaron; Nelson-McMillan, Kristen; Nishisaki, Akira; Leflore, Judy; Eppich, Walter; Moyer, Mike; Brett-Fleegler, Marisa; Kleinman, Monica; Anderson, Jodee; Adler, Mark; Braga, Matthew; Kost, Susanne; Stryjewski, Glenn; Min, Steve; Podraza, John; Lopreiato, Joseph; Hamilton, Melinda Fiedor; Stone, Kimberly; Reid, Jennifer; Hopkins, Jeffrey; Manos, Jennifer; Duff, Jonathan; Richard, Matthew; Nadkarni, Vinay M
2013-06-01
Resuscitation training programs use simulation and debriefing as an educational modality with limited standardization of debriefing format and content. Our study attempted to address this issue by using a debriefing script to standardize debriefings, and to determine whether use of a scripted debriefing by novice instructors and/or simulator physical realism affects knowledge and performance in simulated cardiopulmonary arrests. The study used a prospective, randomized, factorial design and was conducted from 2008 to 2011 at 14 Examining Pediatric Resuscitation Education Using Simulation and Scripted Debriefing (EXPRESS) network simulation programs. Interprofessional health care teams participated in 2 simulated cardiopulmonary arrests, before and after debriefing. Participants were randomized to 1 of 4 arms, permutations of scripted vs nonscripted debriefing and high-realism vs low-realism simulators: 97 participants (23 teams) to nonscripted low-realism; 93 participants (22 teams) to scripted low-realism; 103 participants (23 teams) to nonscripted high-realism; and 94 participants (22 teams) to scripted high-realism groups. Outcomes were the percentage difference (0%-100%) in multiple choice question (MCQ) test scores (individual knowledge), Behavioral Assessment Tool (BAT) scores (team leader performance), and Clinical Performance Tool (CPT) scores (team performance), compared postintervention vs preintervention (PPC). There was no significant difference at baseline between nonscripted and scripted groups for MCQ (P = .87), BAT (P = .99), and CPT (P = .95) scores. Scripted debriefing showed greater improvement in knowledge (mean [95% CI] MCQ-PPC, 5.3% [4.1%-6.5%] vs 3.6% [2.3%-4.7%]; P = .04) and team leader behavioral performance (median [interquartile range (IQR)] BAT-PPC, 16% [7.4%-28.5%] vs 8% [0.2%-31.6%]; P = .03). The improvement in clinical performance during simulated cardiopulmonary arrests was not significantly different (median [IQR] CPT-PPC, 7.9% [4.8%-15.1%] vs 6.7% [2.8%-12.7%]; P = .18). Level of physical realism of the simulator had no independent effect on these outcomes. The use of a standardized script by novice instructors to facilitate team debriefings improves acquisition of knowledge and team leader behavioral performance during subsequent simulated cardiopulmonary arrests. Implementation of debriefing scripts in resuscitation courses may help to improve learning outcomes and standardize delivery of debriefing, particularly for novice instructors.
Simulation of plasma loading of high-pressure RF cavities
NASA Astrophysics Data System (ADS)
Yu, K.; Samulyak, R.; Yonehara, K.; Freemire, B.
2018-01-01
Muon beam-induced plasma loading of radio-frequency (RF) cavities filled with high-pressure hydrogen gas with a 1% dry air dopant has been studied via numerical simulations. The electromagnetic code SPACE, which resolves the relevant atomic physics processes, including ionization by the muon beam, electron attachment to dopant molecules, and electron-ion and ion-ion recombination, has been used. Simulation studies have been performed in the range of parameters typical for practical muon cooling channels.
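The atomic physics listed here reduces, in zero dimensions, to a small set of rate equations. The Python sketch below integrates them with forward Euler; it is a toy stand-in for the SPACE code, and the source strength and rate coefficients are assumed values, not the paper's parameters.

```python
# Plasma-loading chemistry, zero-dimensional toy: beam ionization source S,
# electron attachment to the dopant (eta), electron-ion recombination
# (beta_ei), and ion-ion recombination (beta_ii).

S = 1e16          # ionization source [pairs/cm^3/s] while the beam is on (assumed)
eta = 1e7         # electron attachment rate to dopant [1/s]          (assumed)
beta_ei = 1e-7    # electron-ion recombination coefficient [cm^3/s]   (assumed)
beta_ii = 1e-8    # ion-ion recombination coefficient [cm^3/s]        (assumed)

ne = np_ = nm = 0.0            # electrons, positive ions, negative ions
dt, t_beam, t_end = 1e-10, 1e-6, 2e-6
for step in range(int(t_end / dt)):
    src = S if step * dt < t_beam else 0.0       # beam switches off at t_beam
    dne = src - beta_ei * ne * np_ - eta * ne
    dnp = src - beta_ei * ne * np_ - beta_ii * nm * np_
    dnm = eta * ne - beta_ii * nm * np_
    ne += dt * dne; np_ += dt * dnp; nm += dt * dnm

print("after beam-off decay: ne=%.2e  n+=%.2e  n-=%.2e cm^-3" % (ne, np_, nm))
```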
Design of a Modular Monolithic Implicit Solver for Multi-Physics Applications
NASA Technical Reports Server (NTRS)
Carton De Wiart, Corentin; Diosady, Laslo T.; Garai, Anirban; Burgess, Nicholas; Blonigan, Patrick; Ekelschot, Dirk; Murman, Scott M.
2018-01-01
The design of a modular multi-physics high-order space-time finite-element framework is presented, together with its extension to allow monolithic coupling of different physics. One of the main objectives of the framework is to perform efficient high-fidelity simulations of capsule/parachute systems. This problem requires simulating multiple physics including, but not limited to, the compressible Navier-Stokes equations, the dynamics of a moving body with mesh deformation and adaptation, the linear shell equations, non-reflective boundary conditions and wall modeling. The solver is based on high-order space-time finite element methods. Continuous, discontinuous and C1-discontinuous Galerkin methods are implemented, allowing one to discretize various physical models. Tangent and adjoint sensitivity analyses are also targeted in order to conduct gradient-based optimization, error estimation, mesh adaptation, and flow control, adding another layer of complexity to the framework. The decisions made to tackle these challenges are presented. The discussion focuses first on the "single-physics" solver and later on its extension to the monolithic coupling of different physics. The implementation of different physics modules, relevant to the capsule/parachute system, is also presented. Finally, examples of coupled computations are presented, paving the way to the simulation of the full capsule/parachute system.
Capacity Loss Studies on High Capacity Li-ion Cells for the Orbiter Advanced Hydraulic Power System
NASA Technical Reports Server (NTRS)
Jeevarajan, Judith A.; Irlbeck, Bradley W.
2004-01-01
Contents include the following: Introduction. Physical and electrochemical characteristics. Performance evaluation. Rate performance. Internal resistance. Performance at different temperatures. Safety evaluation. Overcharge. Overdischarge. External short. Simulated internal short. Heat-to-vent. Vibration. Drop test. Vent and burst pressure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael A.; Boldyrev, Stanislav; Fischer, Paul
This report details the impact exascale computing will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the DOE applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.
NASA Astrophysics Data System (ADS)
Pohl, E.; Maximini, M.; Bauschulte, A.; vom Schloß, J.; Hermanns, R. T. E.
2015-02-01
HT-PEM fuel cells suffer from performance losses due to degradation effects; the durability of HT-PEM is therefore currently an important focus of research and development. In this paper a novel approach is presented for an integrated short-term and long-term simulation of HT-PEM accelerated lifetime testing. The physical phenomena of short-term and long-term effects are commonly modeled separately because of their different time scales. However, in accelerated lifetime testing, long-term degradation effects have a crucial impact on the short-term dynamics. Our approach addresses this problem by applying a novel method for dual time scale simulation. A transient system simulation is performed for an open-voltage cycle test on a HT-PEM fuel cell covering a physical time of 35 days. The analysis describes the system dynamics by numerical electrochemical impedance spectroscopy. Furthermore, a performance assessment is carried out in order to demonstrate the efficiency of the approach. The presented approach reduces the simulation time by approximately 73% compared with a conventional simulation approach, without losing much accuracy. The approach promises a comprehensive perspective considering short-term dynamic behavior and long-term degradation effects.
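The dual-time-scale idea can be sketched with a toy model. The Python below is an assumed illustration, not the paper's method: slow degradation of the electrochemical surface area (ECSA) is integrated over days, while the fast cell dynamics are re-simulated only at coarse checkpoints instead of every second of the 35-day test.

```python
import math

# Dual time scale, toy version: a slow degradation state evolves between
# checkpoints; the fast dynamic simulation is evaluated only at each
# checkpoint, at the current degradation state.

def fast_cycle_voltage(ecsa):
    """Stand-in for a transient open-circuit-cycle simulation (seconds)."""
    return 0.95 + 0.05 * math.log10(max(ecsa, 1e-6))   # toy kinetic loss

def degrade(ecsa, hours):
    """Slow time scale: assumed first-order ECSA loss per hour."""
    return ecsa * math.exp(-2e-4 * hours)

ecsa, checkpoint_h = 1.0, 24.0           # re-run fast dynamics once per day
for day in range(36):                    # ~35-day accelerated lifetime test
    v = fast_cycle_voltage(ecsa)         # short-term dynamics at this state
    if day % 7 == 0:
        print("day %2d  ECSA=%.3f  cycle voltage=%.4f V" % (day, ecsa, v))
    ecsa = degrade(ecsa, checkpoint_h)   # long-term degradation between runs
```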
2012-03-01
such as FASCODE is accomplished. The assessment is limited by the correctness of the models used; validating the models is beyond the scope of this ... comparisons with other models and validation against data sets (Snell et al. 2000). 2.3.2 Previous Research: Several LADAR simulations have been produced ... performance models would better capture the atmosphere physics and climatological effects on these systems. Also, further validation needs to be performed
High performance MRI simulations of motion on multi-GPU systems.
Xanthis, Christos G; Venetis, Ioannis E; Aletras, Anthony H
2014-07-04
MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed, and myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times and to avoid the formation of spurious echoes. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Finally, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were obtained through the introduction of software crushers, without the need to further increase the computational load and GPU resources. MRISIMUL demonstrated almost linearly scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single-computer multi-GPU configuration. The incorporation of realistic motion models, such as cardiac motion, respiratory motion and flow, may benefit the design and optimization of existing or new MR pulse sequences, protocols and algorithms that examine motion-related MR applications.
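The per-timestep displacement update mentioned here is the essential mechanism. The Python sketch below is far simpler than MRISIMUL and entirely illustrative (constant flow, made-up gradient and duration): re-evaluating a spin's position at every timestep of a gradient makes the accrued phase velocity-dependent, which is the basis of flow encoding and of motion artifacts.

```python
# Phase accrual of a moving isochromat under a constant gradient, updated at
# every timestep, compared with the analytic first-moment result.

gamma = 2.675e8            # gyromagnetic ratio [rad/s/T]
G = 10e-3                  # constant gradient [T/m]           (assumed)
v = 0.01                   # spin velocity along the gradient [m/s] (assumed)
dt, T = 1e-6, 10e-3        # timestep and total duration

phase = 0.0
for step in range(int(T / dt)):
    x = v * (step * dt)               # displacement updated each timestep
    phase += gamma * G * x * dt       # instantaneous precession

analytic = gamma * G * v * T**2 / 2   # first-moment phase for constant flow
print("simulated %.4f rad  vs analytic %.4f rad" % (phase, analytic))
```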
Simulation of digital mammography images
NASA Astrophysics Data System (ADS)
Workman, Adam
2005-04-01
A number of different technologies are available for digital mammography. However, it is not clear how differences in the physical performance of the different imaging technologies affect clinical performance. Randomised controlled trials provide a means of gaining information on clinical performance; however, they do not provide a direct comparison of the different digital imaging technologies. This work describes a method of simulating the performance of different digital mammography systems. The method involves modifying the imaging performance parameters of images from a small-field digital mammography (SFDM) system, a high-resolution digital imaging system used for spot imaging. Under normal operating conditions this system produces images with a higher signal-to-noise ratio (SNR) over a wide spatial frequency range than current full-field digital mammography (FFDM) systems. The SFDM images can therefore be 'degraded' by computer processing to simulate the characteristics of a FFDM system. Initial work characterised the physical performance (MTF, NPS) of the SFDM detector and developed a model and method for simulating the signal transfer and noise properties of a FFDM system. It was found that the SNR properties of the simulated FFDM images were very similar to those measured from an actual FFDM system, verifying the methodology used. The application of this technique to clinical images from the small-field system will allow the clinical performance of different FFDM systems to be simulated and directly compared using the same clinical image datasets.
The change in critical technologies for computational physics
NASA Technical Reports Server (NTRS)
Watson, Val
1990-01-01
It is noted that the types of technology required for computational physics are changing as the field matures. Emphasis has shifted from computer technology to algorithm technology and, finally, to visual analysis technology as areas of critical research for this field. High-performance graphical workstations tied to a supercomputer with high-speed communications, along with the development of specially tailored visualization software, have enabled analysis of highly complex fluid-dynamics simulations. Particular reference is made here to the development of visual analysis tools at NASA's Numerical Aerodynamics Simulation Facility. The next technology this field requires is one that would eliminate visual clutter by extracting the key features of physics simulations in order to create displays that clearly portray those features. Research in the tuning of visual displays to human cognitive abilities is proposed. The immediate transfer of technology to all levels of computers, specifically the inclusion of visualization primitives in basic software developments for all workstations and PCs, is recommended.
Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics
Laney, Daniel; Langer, Steven; Weber, Christopher; ...
2014-01-01
This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
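The assessment loop the authors describe, compressing the state after every time step and tracking a physics-based metric against an uncompressed run, can be sketched as follows. The uniform quantizer and the one-dimensional diffusion "code" are stand-ins for the real compressors and simulation codes studied in the paper.

```python
import numpy as np

# Stand-ins: a uniform quantizer plays the lossy compressor; a 1D diffusion
# update plays the simulation. The tracked metric is physical (total
# 'energy' drift relative to an uncompressed reference), not an L2 error.

def lossy_roundtrip(field, bits=12):
    lo, hi = field.min(), field.max()
    q = np.round((field - lo) / (hi - lo) * (2**bits - 1))
    return lo + q / (2**bits - 1) * (hi - lo)

def advance(u, dt=1e-3):
    return u + dt * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

u_ref = np.sin(np.linspace(0, 2 * np.pi, 1024))
u_cmp = u_ref.copy()
for _ in range(1000):
    u_ref = advance(u_ref)
    u_cmp = lossy_roundtrip(advance(u_cmp))   # compress after every step

drift = abs(np.sum(u_cmp**2) - np.sum(u_ref**2)) / np.sum(u_ref**2)
print(f"relative energy drift introduced by compression: {drift:.2e}")
```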
Unstructured LES of Reacting Multiphase Flows in Realistic Gas Turbine Combustors
NASA Technical Reports Server (NTRS)
Ham, Frank; Apte, Sourabh; Iaccarino, Gianluca; Wu, Xiao-Hua; Herrmann, Marcus; Constantinescu, George; Mahesh, Krishnan; Moin, Parviz
2003-01-01
As part of the Accelerated Strategic Computing Initiative (ASCI) program, an accurate and robust simulation tool is being developed to perform high-fidelity LES studies of multiphase, multiscale turbulent reacting flows in aircraft gas turbine combustor configurations using hybrid unstructured grids. In the combustor, pressurized gas from the upstream compressor is reacted with atomized liquid fuel to produce the combustion products that drive the downstream turbine. The Large Eddy Simulation (LES) approach is used to simulate the combustor because of its demonstrated superiority over RANS in predicting turbulent mixing, which is central to combustion. This paper summarizes the accomplishments of the combustor group over the past year, concentrating mainly on the two major milestones achieved this year: 1) Large scale simulation: A major rewrite and redesign of the flagship unstructured LES code has allowed the group to perform large eddy simulations of the complete combustor geometry (all 18 injectors) with over 100 million control volumes; 2) Multi-physics simulation in complex geometry: The first multi-physics simulations including fuel spray breakup, coalescence, evaporation, and combustion are now being performed in a single periodic sector (1/18th) of an actual Pratt & Whitney combustor geometry.
The GeantV project: Preparing the future of simulation
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...
2015-12-23
Detector simulation is consuming at least half of the HEP computing cycles, and even so, experiments have to take hard decisions on what to simulate, as their needs greatly surpass the availability of computing resources. New experiments still in the design phase, such as FCC, CLIC and ILC, as well as upgraded versions of the existing LHC detectors, will push the simulation requirements further. Since the increase in computing resources is not likely to keep pace with our needs, it is necessary to explore innovative ways of speeding up simulation in order to sustain the progress of High Energy Physics. The GeantV project aims at developing a high-performance detector simulation system integrating fast and full simulation that can be ported to different computing architectures, including CPU accelerators. After more than two years of R&D the project has produced a prototype capable of transporting particles in complex geometries exploiting micro-parallelism, SIMD and multithreading. Portability is obtained via C++ template techniques that allow the development of machine-independent computational kernels. Furthermore, a set of tables derived from Geant4 for cross sections and final states provides realistic shower development and, having been ported into a Geant4 physics list, can be used as a basis for a direct performance comparison.
ERIC Educational Resources Information Center
Gale, Jessica; Wind, Stefanie; Koval, Jayma; Dagosta, Joseph; Ryan, Mike; Usselman, Marion
2016-01-01
This paper illustrates the use of simulation-based performance assessment (PA) methodology in a recent study of eighth-grade students' understanding of physical science concepts. A set of four simulation-based PA tasks was iteratively developed to assess student understanding of an array of physical science concepts, including net force,…
NASA Technical Reports Server (NTRS)
Gibson, Jim; Jordan, Joe; Grant, Terry
1990-01-01
Local Area Network Extensible Simulator (LANES) computer program provides method for simulating performance of high-speed local-area-network (LAN) technology. Developed as design and analysis software tool for networking computers on board proposed Space Station. Load, network, link, and physical layers of layered network architecture all modeled. Mathematically models two different lower-layer protocols: Fiber Distributed Data Interface (FDDI) and Star*Bus. Written in FORTRAN 77.
NASA Astrophysics Data System (ADS)
Rohde, Mitchell M.; Crawford, Justin; Toschlog, Matthew; Iagnemma, Karl D.; Kewlani, Guarav; Cummins, Christopher L.; Jones, Randolph A.; Horner, David A.
2009-05-01
It is widely recognized that simulation is pivotal to vehicle development, whether manned or unmanned. There are few dedicated choices, however, for those wishing to perform realistic, end-to-end simulations of unmanned ground vehicles (UGVs). The Virtual Autonomous Navigation Environment (VANE), under development by US Army Engineer Research and Development Center (ERDC), provides such capabilities but utilizes a High Performance Computing (HPC) Computational Testbed (CTB) and is not intended for on-line, real-time performance. A product of the VANE HPC research is a real-time desktop simulation application under development by the authors that provides a portal into the HPC environment as well as interaction with wider-scope semi-automated force simulations (e.g. OneSAF). This VANE desktop application, dubbed the Autonomous Navigation Virtual Environment Laboratory (ANVEL), enables analysis and testing of autonomous vehicle dynamics and terrain/obstacle interaction in real-time with the capability to interact within the HPC constructive geo-environmental CTB for high fidelity sensor evaluations. ANVEL leverages rigorous physics-based vehicle and vehicle-terrain interaction models in conjunction with high-quality, multimedia visualization techniques to form an intuitive, accurate engineering tool. The system provides an adaptable and customizable simulation platform that allows developers a controlled, repeatable testbed for advanced simulations. ANVEL leverages several key technologies not common to traditional engineering simulators, including techniques from the commercial video-game industry. These enable ANVEL to run on inexpensive commercial, off-the-shelf (COTS) hardware. In this paper, the authors describe key aspects of ANVEL and its development, as well as several initial applications of the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, Rafael Alves; Dundovic, Andrej; Sigl, Guenter
2016-05-01
We present the simulation framework CRPropa version 3, designed for the efficient development of astrophysical predictions for ultra-high energy particles. Users can assemble modules of the most relevant propagation effects in galactic and extragalactic space, include their own physics modules with new features, and receive as output primary and secondary cosmic messengers including nuclei, neutrinos and photons. Extending the propagation physics contained in the previous CRPropa version, the new version facilitates high-performance computing and comprises new physical features such as an interface for galactic propagation using lensing techniques, an improved photonuclear interaction calculation, and propagation in time-dependent environments to take into account cosmic evolution effects in anisotropy studies and variable sources. First applications using highlighted features are presented as well.
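The modular assembly the abstract describes can be pictured as a chain of propagation modules applied to a candidate particle until it deactivates. This is a generic sketch of that design only; CRPropa's actual Python API uses different class names and far more detailed physics, and the loss rate and energy cut below are made-up numbers.

```python
# Generic module-chain sketch: each propagation effect is a module acting
# on a candidate; users assemble the chain they need.

class Candidate:
    def __init__(self, energy_eV, distance_Mpc=0.0, active=True):
        self.energy = energy_eV
        self.distance = distance_Mpc
        self.active = active

class AdiabaticLoss:
    def process(self, c, step):
        c.energy *= (1 - 7.8e-5 * step)   # toy fractional loss per Mpc

class MinimumEnergy:
    def __init__(self, emin):
        self.emin = emin
    def process(self, c, step):
        if c.energy < self.emin:
            c.active = False              # stop propagating this candidate

def run(candidate, modules, step_Mpc=1.0, max_Mpc=1000.0):
    while candidate.active and candidate.distance < max_Mpc:
        for m in modules:
            m.process(candidate, step_Mpc)
        candidate.distance += step_Mpc
    return candidate

c = run(Candidate(1e20), [AdiabaticLoss(), MinimumEnergy(1e18)])
print(f"final energy {c.energy:.2e} eV after {c.distance:.0f} Mpc")
```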
A Generic Mesh Data Structure with Parallel Applications
ERIC Educational Resources Information Center
Cochran, William Kenneth, Jr.
2009-01-01
High performance, massively-parallel multi-physics simulations are built on efficient mesh data structures. Most data structures are designed from the bottom up, focusing on the implementation of linear algebra routines. In this thesis, we explore a top-down approach to design, evaluating the various needs of many aspects of simulation, not just…
Fracture Simulation of Highly Crosslinked Polymer Networks: Triglyceride-Based Adhesives
NASA Astrophysics Data System (ADS)
Lorenz, Christian; Stevens, Mark; Wool, Richard
2003-03-01
The ACRES program at the U. of Delaware has shown that triglyceride oils derived from plants are a favorable alternative to traditional adhesives. The triglyceride networks are formed from an initial mixture of styrene monomers, free-radical initiators and triglycerides. We have performed simulations to study how the composition and physical characteristics of the triglyceride network affect its strength. A coarse-grained, bead-spring model of the triglyceride system is used. The average triglyceride consists of 6 beads per chain, the styrenes are represented as single beads, and the initiators are two-bead chains. The polymer network is formed using an off-lattice 3D Monte Carlo simulation, in which the initiators activate the styrene and triglyceride reactive sites and bonds are then formed randomly between the styrene and active triglyceride monomers, producing a highly crosslinked polymer network. Molecular dynamics simulations of the network under tensile and shear strains were performed to determine the strength as a function of the network composition. The relationship between the network structure and its strength will also be discussed.
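The network-formation step can be sketched compactly: initiators activate reactive sites, and bonds then form at random between styrene beads and nearby active triglyceride sites. The bead counts, activation probability, and capture radius below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

# Toy off-lattice Monte Carlo crosslinking step: random activation followed
# by random bond formation within a capture radius.
rng = np.random.default_rng(0)
styrene = rng.random((500, 3))          # single-bead styrene positions
tri_sites = rng.random((300, 3))        # triglyceride reactive-site positions
active = rng.random(300) < 0.4          # initiator-activated sites (assumed)
R_CAPTURE = 0.08                        # bonding radius (assumed)

bonds = []
for i, s in enumerate(styrene):
    d = np.linalg.norm(tri_sites - s, axis=1)
    candidates = np.where(active & (d < R_CAPTURE))[0]
    if candidates.size:
        j = rng.choice(candidates)       # random bond to a nearby active site
        bonds.append((i, j))
        active[j] = False                # site consumed by the new crosslink
print(f"{len(bonds)} crosslinks formed")
```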
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped onto a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward the exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has proven to be a promising approach for achieving good accuracy in reasonable time. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed by comparing three network models which operate at different levels of accuracy. The comparison and model validation are performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
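The speed-accuracy trade-off between network models can be illustrated by comparing a contention-blind fixed-latency model with a simple load-dependent one whose latency inflates as offered load approaches link capacity. Both models and their parameters are illustrative sketches, not calibrated to XMT measurements.

```python
# Two network models at different accuracy levels: fixed latency versus a
# simple M/M/1-style load-dependent inflation. Constants are assumptions.

BASE_LATENCY_NS = 600.0

def latency_fixed(load_fraction):
    return BASE_LATENCY_NS              # fast to evaluate, contention-blind

def latency_contention(load_fraction, capacity=0.95):
    # latency grows without bound as load nears effective capacity
    rho = min(load_fraction, capacity - 1e-3)
    return BASE_LATENCY_NS / (1 - rho / capacity)

for load in (0.1, 0.5, 0.9):
    print(f"load {load:.1f}: fixed {latency_fixed(load):6.1f} ns, "
          f"contention {latency_contention(load):8.1f} ns")
```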
Hatala, Rose; Issenberg, S Barry; Kassen, Barry; Cole, Gary; Bacchus, C Maria; Scalese, Ross J
2008-06-01
High-stakes assessments of doctors' physical examination skills often employ standardised patients (SPs) who lack physical abnormalities. Simulation technology provides additional opportunities to assess these skills by mimicking physical abnormalities. The current study examined the relationship between internists' cardiac physical examination competence as assessed with simulation technology and as assessed with real patients (RPs). The cardiac physical examination skills and bedside diagnostic accuracy of 28 internists were assessed during an objective structured clinical examination (OSCE). The OSCE included 3 modalities of cardiac patients: RPs with cardiac abnormalities; SPs combined with computer-based audio-video simulations of auscultatory abnormalities; and a cardiac patient simulator (CPS) manikin. Four cardiac diagnoses and their associated cardiac findings were matched across modalities. At each station, 2 examiners independently rated a participant's physical examination technique and global clinical competence. Two investigators separately scored diagnostic accuracy. Inter-rater reliability between examiners for global ratings (GRs) ranged from 0.75 to 0.78 for the different modalities. Although there was no significant difference between participants' mean GRs for each modality, the correlations between participants' performances on each modality were low to modest: RP versus SP, r = 0.19; RP versus CPS, r = 0.22; SP versus CPS, r = 0.57 (P < 0.01). Methodological limitations included variability between modalities in the components contributing to examiners' GRs, a paucity of objective outcome measures and restricted case sampling. No modality provided a clear 'gold standard' for the assessment of cardiac physical examination competence. These limitations need to be addressed before determining the optimal patient modality for high-stakes assessment purposes.
2015-09-01
A high-resolution numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at diesel engine type conditions has been performed. A full understanding of the primary atomization process in diesel fuel sprays is complicated by the physical attributes present, including nozzle turbulence and large density ratios.
Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines
Tan, Yunhao; Hua, Jing; Qin, Hong
2009-01-01
In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can simultaneously and accurately represent the geometric, material, and other properties of the object. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior, because they unify the geometric and material properties in the simulation. The visualization can be computed directly from the object's geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation, without interpolation or resampling. We have applied the framework to the biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and with real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636
Multidisciplinary propulsion simulation using the numerical propulsion system simulator (NPSS)
NASA Technical Reports Server (NTRS)
Claus, Russel W.
1994-01-01
Implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large-scale system tests. The traditional design analysis procedure decomposes the engine into isolated components and focuses attention on each single physical discipline (e.g., fluid or structural dynamics). Consequently, the interactions that naturally occur between components and disciplines can be masked by the limited interactions that occur between the individuals or teams doing the design, and must be uncovered during expensive engine testing. This overview discusses a cooperative effort of NASA, industry, and universities to integrate disciplines, components, and high-performance computing into a Numerical Propulsion System Simulator (NPSS).
A flight simulator control system using electric torque motors
NASA Technical Reports Server (NTRS)
Musick, R. O.; Wagner, C. A.
1975-01-01
Control systems are required in flight simulators to provide representative stick and rudder pedal characteristics. A system has been developed that uses electric dc torque motors instead of the more common hydraulic actuators. The torque motor system overcomes certain disadvantages of hydraulic systems, such as high cost, high power consumption, noise, oil leaks, and safety problems. A description of the torque motor system is presented, including both electrical and mechanical design as well as performance characteristics. The system develops forces sufficiently high for most simulations, and is physically small and light enough to be used in most motion-base cockpits.
An investigation of bleed configurations and their effect on shock wave/boundary layer interactions
NASA Technical Reports Server (NTRS)
Hamed, Awatef
1995-01-01
The design of high-efficiency supersonic inlets is a complex task involving the optimization of a number of performance parameters, such as pressure recovery, spillage, drag, and exit distortion profile, over the flight Mach number range. Computational techniques must be capable of accurately simulating the physics of shock/boundary layer interactions, secondary corner flows, flow separation, and bleed if they are to be useful in the design. In particular, bleed and flow separation play an important role in inlet unstart and the associated pressure oscillations. Numerical simulations were conducted to investigate some of the basic physical phenomena associated with bleed in oblique shock wave/boundary layer interactions that affect inlet performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Michael K.; Davidson, Megan
As part of Sandia’s nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification and Sandia is using high performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal is dependent upon electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer generated models were used to assess the timing between the impact sensor’s response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance, but to also validate their computational models. The reverse ballistic test conducted at Sandia’s sled test facility sent a rocket sled with a representative target into a stationary B61-12 (NBSA) to characterize the nose crush and functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the model’s ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every single impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia’s HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail, and creating simulations that can provide insight to support evaluation of requirements and performance margins. As computing resources continue to improve, researchers at Sandia are hoping to improve these simulations so they provide increasingly credible analysis of the system response and performance over the full range of conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langer, Steven H.; Karlin, Ian; Marinak, Marty M.
HYDRA is used to simulate a variety of experiments carried out at the National Ignition Facility (NIF) [4] and other high energy density physics facilities. HYDRA has packages to simulate radiation transfer, atomic physics, hydrodynamics, laser propagation, and a number of other physics effects. HYDRA has over one million lines of code and includes both MPI and thread-level (OpenMP and pthreads) parallelism. This paper measures the performance characteristics of HYDRA using hardware counters on an IBM Blue Gene/Q system. We report key ratios such as bytes/instruction and memory bandwidth for several different physics packages. The total number of bytes read and written per time step is also reported. We show that none of the packages which use significant time are memory-bandwidth limited on a Blue Gene/Q. HYDRA currently issues very few SIMD instructions. The pressure on memory bandwidth will increase if high levels of SIMD instructions can be achieved.
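Ratios such as bytes/instruction and memory bandwidth follow from raw counter values by straightforward arithmetic, as in the hedged sketch below. The counter names and the 64-byte cache-line assumption are hypothetical placeholders; the actual Blue Gene/Q counters are accessed through a different interface and naming scheme.

```python
# Derive the reported ratios from (hypothetical) raw hardware counters.

def ratios(counters, elapsed_s, line_bytes=64):
    # assume each cache-line miss moves one full line to/from memory
    bytes_moved = (counters["ld_misses"] + counters["st_misses"]) * line_bytes
    return {
        "bytes_per_instruction": bytes_moved / counters["instructions"],
        "memory_bandwidth_GBps": bytes_moved / elapsed_s / 1e9,
        "simd_fraction": counters["simd_ops"] / counters["instructions"],
    }

sample = {"instructions": 4.1e12, "ld_misses": 9.0e9,
          "st_misses": 3.2e9, "simd_ops": 1.5e10}   # made-up sample values
print(ratios(sample, elapsed_s=120.0))
```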
Impact of detector simulation in particle physics collider experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elvira, V. Daniel
Through the last three decades, precise simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the accuracy of the physics results and publication turnaround, from data-taking to submission. It also presents the economic impact and cost of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data, taxing heavily the performance of simulation and reconstruction software for increasingly complex detectors. Consequently, it becomes urgent to find solutions to speed up simulation software in order to cope with the increased demand in a time of flat budgets. The study ends with a short discussion on the potential solutions that are being explored, by leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering of HEP code for concurrency and parallel computing.
Impact of detector simulation in particle physics collider experiments
Elvira, V. Daniel
2017-06-01
Through the last three decades, precise simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the accuracy of the physics results and publication turnaround, from data-taking to submission. It also presents the economic impact and cost of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data, taxing heavily the performance of simulation and reconstruction software for increasingly complex detectors. Consequently, it becomes urgent to find solutions to speed up simulation software in order to cope with the increased demand in a time of flat budgets. The study ends with a short discussion on the potential solutions that are being explored, by leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering of HEP code for concurrency and parallel computing.
Impact of detector simulation in particle physics collider experiments
NASA Astrophysics Data System (ADS)
Daniel Elvira, V.
2017-06-01
Through the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, taxing heavily the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand of computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion on the potential solutions that are being considered, based on leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.
The Laser Mega-Joule : LMJ & PETAL status and Program Overview
NASA Astrophysics Data System (ADS)
Miquel, J.-L.; Lion, C.; Vivini, P.
2016-03-01
The Laser Megajoule (LMJ), developed by the French Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA), will be a cornerstone of the French Simulation Program, which combines improvement of physics models, high-performance numerical simulation, and experimental validation. The LMJ facility is under construction at CEA CESTA near Bordeaux and will provide the experimental capabilities to study High-Energy Density Physics (HEDP). One of its goals is to obtain ignition and burn of DT-filled capsules imploded, through an indirect-drive scheme, inside a rugby-shaped hohlraum. The PETAL project consists of the addition of a short-pulse (ps), ultra-high-power, high-energy (kJ) beam to the LMJ facility. PETAL will offer a very high intensity multi-petawatt beam synchronized with the nanosecond beams of the LMJ. This combination will expand the LMJ experimental capabilities in HEDP. This paper presents an update of the LMJ and PETAL status, together with the development of the overall program, including targets, plasma diagnostics and simulation tools.
Pattern dependence in high-speed Q-modulated distributed feedback laser.
Zhu, Hongli; Xia, Yimin; He, Jian-Jun
2015-05-04
We investigate the pattern dependence in a high-speed Q-modulated distributed feedback (DFB) laser based on its complete physical structure and material properties. The structure parameters of the gain section, as well as those of the modulation and phase sections, are all taken into account in simulations based on an integrated traveling-wave model. Using this model, we show that an example Q-modulated DFB laser can achieve an extinction ratio of 6.8 dB with a jitter of 4.7 ps and a peak intensity fluctuation of less than 15% for a 40 Gbps RZ modulation signal. The simulation method proves very useful for complex laser structure design and high-speed performance optimization, as well as for providing physical insight into the operation mechanism.
Diaz-Lara, Francisco Javier; Del Coso, Juan; Portillo, Javier; Areces, Francisco; García, Jose Manuel; Abián-Vicén, Javier
2016-10-01
Although caffeine is one of the most commonly used substances in combat sports, information about its ergogenic effects on these disciplines is very limited. The aim was to determine the effectiveness of ingesting a moderate dose of caffeine to enhance overall performance during a simulated Brazilian jiu-jitsu (BJJ) competition. Fourteen elite BJJ athletes participated in a double-blind, placebo-controlled experimental design. In random order, the athletes ingested either 3 mg/kg body mass of caffeine or a placebo (cellulose, 0 mg/kg) and performed 2 simulated BJJ combats (with 20 min rest between them), following official BJJ rules. Specific physical tests, such as maximal handgrip dynamometry, maximal height during a countermovement jump, hold time during a maximal static-lift test, peak power in a bench-press exercise, and blood lactate concentration, were measured at 3 specific times: before the first combat and immediately after the first and second combats. The combats were video-recorded to analyze fight actions. After caffeine ingestion, participants spent more time in offensive actions in both combats and showed higher blood lactate values (P < .05). Performance in all physical tests carried out before the first combat was enhanced with caffeine (P < .05), and some improvements remained after the first combat (e.g., maximal static-lift test and bench-press exercise; P < .05). After the second combat, the values in all physical tests were similar between caffeine and placebo. Caffeine might be an effective ergogenic aid for improving intensity and physical performance during successive elite BJJ combats.
Andreatta, Pamela; Gans-Larty, Florence; Debpuur, Domitilla; Ofosu, Anthony; Perosky, Joseph
2011-10-01
Maternal mortality from postpartum hemorrhage remains high globally, in large part because women give birth in rural communities where unskilled attendants (traditional birth attendants) provide care for delivering mothers. Traditional attendants are neither trained nor equipped to recognize or manage postpartum hemorrhage as a life-threatening emergent condition. Recommended treatment includes using uterotonic agents and physical manipulation to aid uterine contraction. In resource-limited areas where obstetric first aid may be the only care option, physical methods such as bimanual uterine compression are easily taught, highly practical and, if performed correctly, highly effective. A simulator with objective performance feedback was designed to teach skilled and unskilled birth attendants to perform the technique. The aim was to evaluate the impact of simulation-based training on the ability of birth attendants to correctly perform bimanual compression in response to postpartum hemorrhage from uterine atony. Simulation-based training was conducted for skilled (N=111) and unskilled birth attendants (N=14) at two regional (Kumasi, Tamale) and two district (Savelugu, Sene) medical centers in Ghana. Training was evaluated using Kirkpatrick's 4-level model. All participants significantly increased their bimanual uterine compression skills after training (p < .001). There were no significant differences between 2-week delayed post-test performances, indicating retention (p = .52). Applied behavioral and clinical outcomes were reported for 9 months from a subset of birth attendants in Sene District: 425 births and 13 postpartum hemorrhages were reported, without concomitant maternal mortality. The results of this study suggest that simulation-based training for skilled and unskilled birth attendants to perform bimanual uterine compression as obstetric first aid for postpartum hemorrhage leads to improved applied procedural skills. Results from a smaller subset of the sample suggest that these skills could potentially lead to improved clinical outcomes; additional study is merited. Copyright © 2011 Elsevier Ltd. All rights reserved.
CISP: Simulation Platform for Collective Instabilities in the BRing of HIAF project
NASA Astrophysics Data System (ADS)
Liu, J.; Yang, J. C.; Xia, J. W.; Yin, D. Y.; Shen, G. D.; Li, P.; Zhao, H.; Ruan, S.; Wu, B.
2018-02-01
To simulate collective instabilities during the complicated beam manipulation in the BRing (Booster Ring) of HIAF (High Intensity heavy-ion Accelerator Facility) and other high-intensity accelerators, a code named CISP (Simulation Platform for Collective Instabilities) has been designed and constructed at the Institute of Modern Physics (IMP) in China. The CISP is a scalable multi-macroparticle simulation platform that can perform longitudinal and transverse tracking with chromaticity, space-charge effects, nonlinear magnets and wakes included. Due to its object-oriented design, the CISP also serves as a base platform for developing many other applications (such as feedback systems). Several simulations presented in this paper agree very well with analytical results, showing that the CISP is now fully functional and is a powerful platform for further collective-instability research in the BRing and other accelerators. In the future, the CISP can also be extended easily into a physics control system for HIAF or other facilities.
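One building block such a platform composes, longitudinal macroparticle tracking with a collective wake kick, can be sketched as follows. The phase-space rotation, wake shape, and all constants are illustrative, not BRing parameters.

```python
import numpy as np

# Toy longitudinal tracking: per turn, apply a binned collective wake kick,
# then a linear synchrotron rotation in normalized phase space.
rng = np.random.default_rng(1)
N = 20000
z = rng.normal(0.0, 0.1, N)             # longitudinal offset [m]
dp = rng.normal(0.0, 1e-3, N)           # relative momentum deviation

def turn(z, dp, qs=0.005, wake_amp=1e-6, nbins=200):
    counts, edges = np.histogram(z, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    in_front = np.cumsum(counts[::-1])[::-1] - counts   # charge ahead of bin
    dp = dp + np.interp(z, centers, -wake_amp * in_front / N)
    phi = 2 * np.pi * qs
    zn, dpn = z / 0.1, dp / 1e-3        # normalize to initial rms values
    zn, dpn = (zn * np.cos(phi) + dpn * np.sin(phi),
               -zn * np.sin(phi) + dpn * np.cos(phi))
    return zn * 0.1, dpn * 1e-3

for _ in range(500):
    z, dp = turn(z, dp)
print(f"rms bunch length after 500 turns: {z.std():.4f} m")
```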
Real-Time and High-Fidelity Simulation Environment for Autonomous Ground Vehicle Dynamics
NASA Technical Reports Server (NTRS)
Cameron, Jonathan; Myint, Steven; Kuo, Calvin; Jain, Abhi; Grip, Havard; Jayakumar, Paramsothy; Overholt, Jim
2013-01-01
This paper reports on a collaborative project between the U.S. Army TARDEC and the Jet Propulsion Laboratory (JPL) to develop an unmanned ground vehicle (UGV) simulation model using the ROAMS vehicle modeling framework. In addition to the physical suspension of the vehicle, the sensing and navigation of the HMMWV are simulated. Using models of urban and off-road environments, the HMMWV simulation was tested in several ways, including navigation in an urban environment with obstacle avoidance and the performance of a lane-change maneuver.
Design Considerations of a Virtual Laboratory for Advanced X-ray Sources
NASA Astrophysics Data System (ADS)
Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.
2004-11-01
The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space with realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area in which to further push the state of the art in high-fidelity, first-principles simulation tools. The physics of these x-ray sources combines kinetic plasma physics (electron beams) with dense fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational Unified Process and the Unified Modeling Language (UML) to provide a framework where multiple 3D parallel physics packages, such as a PIC code (ICEPIC), an MHD code (MACH), and an x-ray transport code (ITS), can co-exist in a system-of-systems approach to modeling advanced x-ray sources. Initial software design and assessments of the fidelity of the various physics algorithms will be presented.
A Computational Framework for Efficient Low Temperature Plasma Simulations
NASA Astrophysics Data System (ADS)
Verma, Abhishek Kumar; Venkattraman, Ayyaswamy
2016-10-01
Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, and metamaterials. To explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework allows us to enhance our understanding of multiscale plasma phenomena using high-performance computing tools, based mainly on the OpenFOAM finite-volume method (FVM) distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems involving LTPs. Salient features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is assessed through numerical benchmarks of accuracy and efficiency for problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.
High performance MRI simulations of motion on multi-GPU systems
2014-01-01
Background MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high-performance multi-GPU environment. Methods Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times and to avoid spurious echo formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Finally, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Results Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Improved simulation performance and image quality were achieved through the introduction of software crushers without the need to further increase the computational load and GPU resources. Finally, MRISIMUL demonstrated almost linearly scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. Conclusions MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single-computer multi-GPU configuration. The incorporation of realistic motion models, such as cardiac motion, respiratory motion and flow, may benefit the design and optimization of existing or new MR pulse sequences, protocols and algorithms that examine motion-related MR applications. PMID:24996972
Evaluating average and atypical response in radiation effects simulations
NASA Astrophysics Data System (ADS)
Weller, R. A.; Sternberg, A. L.; Massengill, L. W.; Schrimpf, R. D.; Fleetwood, D. M.
2003-12-01
We examine the limits of performing single-event simulations using pre-averaged radiation events. Geant4 simulations show the necessity, for future devices, of supplementing current methods with ensemble averaging of device-level responses to physically realistic radiation events. Initial Monte Carlo simulations have generated a significant number of extremal events in local energy deposition. These simulations strongly suggest that proton strikes of sufficient energy, even those that initiate purely electronic interactions, can produce device responses capable, in principle, of causing single-event upset or microdose damage in highly scaled devices.
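The paper's central point, that responding to the average event is not the same as averaging the responses over an ensemble of realistic events, is easy to demonstrate when the event distribution is heavy-tailed. The lognormal deposit distribution and sigmoid "device response" below are toy stand-ins for Geant4 event-by-event energy depositions and a real device model.

```python
import numpy as np

# Pre-averaged response vs ensemble-averaged response for a heavy-tailed
# event distribution and a thresholded (nonlinear) device response.
rng = np.random.default_rng(7)
deposits = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)  # heavy tail

def upset_probability(edep, threshold=20.0):
    return 1.0 / (1.0 + np.exp(-(edep - threshold)))          # toy response

pre_averaged = upset_probability(deposits.mean())   # average events first
ensemble = upset_probability(deposits).mean()       # respond to each event
print(f"pre-averaged: {pre_averaged:.2e}   ensemble: {ensemble:.2e}")
# The ensemble estimate is orders of magnitude larger: rare extremal
# events dominate the upset rate and are erased by pre-averaging.
```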
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever larger and more detailed problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component of the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP, and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the parallel performance of the TS-AWP on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
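At the heart of a code like the TS-AWP is a stencil update applied across a huge distributed grid. The sketch below shows the 1D staggered-grid velocity-stress form of such a kernel; the real code is 3D, higher-order, anelastic and MPI-decomposed, and the material constants here are illustrative.

```python
import numpy as np

# 1D staggered-grid velocity-stress update, a toy analogue of the stencil
# kernel inside a finite-difference wave propagation code.
nx, dt, dx = 400, 1e-3, 10.0
rho, mu = 2500.0, 3e10                 # density [kg/m^3], shear modulus [Pa]
v = np.zeros(nx)                       # particle velocity
s = np.zeros(nx + 1)                   # stress on the staggered grid
v[nx // 2] = 1.0                       # impulsive source in the middle

for _ in range(300):
    s[1:-1] += dt * mu * (v[1:] - v[:-1]) / dx      # stress update
    v += dt * (s[1:] - s[:-1]) / (rho * dx)         # velocity update

# CFL check: wave speed ~3464 m/s, so c*dt/dx ~ 0.35 < 1 (stable)
print(f"wavefield energy proxy: {np.sum(rho * v**2):.3e}")
```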
Data-Driven Anomaly Detection Performance for the Ares I-X Ground Diagnostic Prototype
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Schwabacher, Mark A.; Matthews, Bryan L.
2010-01-01
In this paper, we will assess the performance of a data-driven anomaly detection algorithm, the Inductive Monitoring System (IMS), which can be used to detect simulated Thrust Vector Control (TVC) system failures. However, the ability of IMS to detect these failures in a true operational setting may be related to the realistic nature of how they are simulated. As such, we will investigate both a low fidelity and high fidelity approach to simulating such failures, with the latter based upon the underlying physics. Furthermore, the ability of IMS to detect anomalies that were previously unknown and not previously simulated will be studied in earnest, as well as apparent deficiencies or misapplications that result from using the data-driven paradigm. Our conclusions indicate that robust detection performance of simulated failures using IMS is not appreciably affected by the use of a high fidelity simulation. However, we have found that the inclusion of a data-driven algorithm such as IMS into a suite of deployable health management technologies does add significant value.
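IMS-style monitoring is, at its core, a nominal-data clustering method: cluster archived nominal telemetry, then score new samples by their distance to the nearest learned cluster. The k-means formulation below is a simplification of the actual Inductive Monitoring System algorithm, with made-up dimensions and data.

```python
import numpy as np

# Simplified IMS-like monitor: learn clusters from nominal data, score new
# samples by distance to the nearest cluster center.

def fit_clusters(nominal, n_clusters=16, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = nominal[rng.choice(len(nominal), n_clusters, replace=False)]
    for _ in range(iters):                          # plain k-means refinement
        labels = np.argmin(((nominal[:, None] - centers)**2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = nominal[labels == k].mean(axis=0)
    return centers

def anomaly_score(sample, centers):
    return np.sqrt(((centers - sample)**2).sum(-1).min())

nominal = np.random.default_rng(1).normal(0, 1, (5000, 8))  # training data
centers = fit_clusters(nominal)
print("nominal-like:", anomaly_score(np.zeros(8), centers))
print("off-nominal: ", anomaly_score(np.full(8, 6.0), centers))
```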
A multi-scale model for geared transmission aero-thermodynamics
NASA Astrophysics Data System (ADS)
McIntyre, Sean M.
A multi-scale, multi-physics computational tool for the simulation of high-performance gearbox aero-thermodynamics was developed and applied to equilibrium and pathological loss-of-lubrication performance simulation. The physical processes at play in these systems include multiphase compressible flow of the air and lubricant within the gearbox, meshing kinematics and tribology, as well as heat transfer by conduction, and free and forced convection. These physics are coupled across their representative space and time scales in the computational framework developed in this dissertation. These scales span eight orders of magnitude, from the thermal response of the full gearbox O(10^0 m; 10^2 s), through effects at the tooth passage time scale O(10^-2 m; 10^-4 s), down to tribological effects on the meshing gear teeth O(10^-6 m; 10^-6 s). Direct numerical simulation of these coupled physics and scales is intractable. Accordingly, a scale-segregated simulation strategy was developed by partitioning and treating the contributing physical mechanisms as sub-problems, each with associated space and time scales, and appropriate coupling mechanisms. These are: (1) the long time scale thermal response of the system, (2) the multiphase (air, droplets, and film) aerodynamic flow and convective heat transfer within the gearbox, (3) the high-frequency, time-periodic thermal effects of gear tooth heating while in mesh and its subsequent cooling through the rest of rotation, (4) meshing effects including tribology and contact mechanics. The overarching goal of this dissertation was to develop software and analysis procedures for gearbox loss-of-lubrication performance. To accommodate these four physical effects and their coupling, each is treated in the CFD code as a sub-problem. These physics modules are coupled algorithmically. Specifically, the high-frequency conduction analysis derives its local heat transfer coefficient and near-wall air temperature boundary conditions from a quasi-steady cyclic-symmetric simulation of the internal flow. This high-frequency conduction solution is coupled directly with a model for the meshing friction, developed by a collaborator, which was adapted for use in a finite-volume CFD code. The local surface heat flux on solid surfaces is calculated by time-averaging the heat flux in the high-frequency analysis. This serves as a fixed-flux boundary condition in the long time scale conduction module. The temperature distribution from this long time scale heat transfer calculation serves as a boundary condition for the internal convection simulation, and as the initial condition for the high-frequency heat transfer module. Using this multi-scale model, simulations were performed for equilibrium and loss-of-lubrication operation of the NASA Glenn Research Center test stand. Results were compared with experimental measurements. In addition to the multi-scale model itself, several other specific contributions were made. Eulerian models for droplets and wall-films were developed and implemented in the CFD code. A novel approach to retaining liquid film on the solid surfaces, and strategies for its mass exchange with droplets, were developed and verified. Models for interfacial transfer between droplets and wall-film were implemented, and include the effects of droplet deposition, splashing, bouncing, as well as film breakup. These models were validated against airfoil data.
To mitigate the observed slow convergence of CFD simulations of the enclosed aerodynamic flows within gearboxes, Fourier stability analysis was applied to the SIMPLE-C fractional-step algorithm. From this, recommendations to accelerate the convergence rate through enhanced pressure-velocity coupling were made. These were shown to be effective. A fast-running finite-volume reduced-order model of the gearbox aero-thermodynamics was developed, and coupled with the tribology model to investigate the sensitivity of loss-of-lubrication predictions to various model and physical parameters. This sensitivity study was instrumental in guiding efforts toward improving the accuracy of the multi-scale model without undue increase in computational cost. In addition, the reduced-order model is now used extensively by a collaborator in tribology model development and testing. Experimental measurements of high-speed gear windage in partially and fully-shrouded configurations were performed to supplement the paucity of available validation data. This measurement program provided measurements of windage loss for a gear of design-relevant size and operating speed, as well as guidance for increasing the accuracy of future measurements.
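The algorithmic coupling described above can be made concrete with a sketch: the high-frequency conduction solve yields a time-averaged surface heat flux, which becomes a fixed-flux boundary condition for the long-time thermal solve, whose temperature in turn seeds the next high-frequency pass. All module internals and constants below are trivial stand-ins, not the dissertation's models.

```python
# Toy outer coupling loop between a fast (high-frequency) module and a slow
# (long-time) thermal module, exchanging time-averaged flux and temperature.

def high_frequency_conduction(T_surface, n_steps=100):
    # toy periodic heating (in mesh) and cooling (rest of rotation)
    fluxes = [1000.0 if i % 10 == 0 else -50.0 for i in range(n_steps)]
    return sum(fluxes) / n_steps                 # time-averaged flux [W/m^2]

def long_time_thermal(T_surface, mean_flux, dt=10.0, hA_over_mc=0.02):
    return T_surface + dt * hA_over_mc * mean_flux   # lumped thermal update

T = 320.0                                        # initial metal temperature [K]
for cycle in range(5):                           # outer coupling loop
    q_mean = high_frequency_conduction(T)
    T = long_time_thermal(T, q_mean)
    print(f"cycle {cycle}: mean flux {q_mean:.1f} W/m^2, T = {T:.2f} K")
```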
NASA Astrophysics Data System (ADS)
Bloser, P. F.; Legere, J. S.; Bancroft, C. M.; Jablonski, L. F.; Wurtz, J. R.; Ertley, C. D.; McConnell, M. L.; Ryan, J. M.
2014-11-01
Space-based gamma-ray detectors for high-energy astronomy and solar physics face severe constraints on mass, volume, and power, and must endure harsh launch conditions and operating environments. Historically, such instruments have usually been based on scintillator materials due to their relatively low cost, inherent ruggedness, high stopping power, and radiation hardness. New scintillator materials, such as LaBr3:Ce, feature improved energy and timing performance, making them attractive for future astronomy and solar physics space missions in an era of tightly constrained budgets. Despite this promise, the use of scintillators in space remains constrained by the volume, mass, power, and fragility of the associated light readout device, typically a vacuum photomultiplier tube (PMT). In recent years, silicon photomultipliers (SiPMs) have emerged as promising alternative light readout devices that offer gains and quantum efficiencies similar to those of PMTs, but with greatly reduced mass and volume, high ruggedness, low voltage requirements, and no sensitivity to magnetic fields. In order for SiPMs to replace PMTs in space-based instruments, however, it must be shown that they can provide comparable performance, and that their inherent temperature sensitivity can be corrected for. To this end, we have performed extensive testing and modeling of a small gamma-ray spectrometer composed of a 6 mm × 6 mm SiPM coupled to a 6 mm × 6 mm × 10 mm LaBr3:Ce crystal. A custom readout board monitors the temperature and adjusts the bias voltage to compensate for gain variations. We record an energy resolution of 5.7% (FWHM) at 662 keV at room temperature. We have also performed simulations of the scintillation process and optical light collection using Geant4, and of the SiPM response using the GosSiP package. The simulated energy resolution is in good agreement with the data from 22 keV to 662 keV. Above ~1 MeV, however, the measured energy resolution is systematically worse than the simulations. This discrepancy is likely due to the high input impedance of the readout board front-end electronics, which introduces a non-linear saturation effect in the SiPM for large light pulses. Analysis of the simulations indicates several additional steps that must be taken to optimize the energy resolution of SiPM-based scintillator detectors.
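The gain-stabilization scheme is worth making concrete: because the SiPM breakdown voltage rises roughly linearly with temperature, holding the gain constant amounts to tracking the breakdown voltage with the bias so that the overvoltage stays fixed. The coefficients below are typical orders of magnitude for SiPMs generally, not values for this detector or readout board.

```python
# Temperature-compensated bias setpoint for a SiPM: track the (assumed
# linear) breakdown-voltage drift so that the overvoltage, and hence the
# gain, stays constant. All constants are assumptions.

V_BREAKDOWN_25C = 24.5   # [V] breakdown voltage at 25 degC (assumed)
DVDT = 0.021             # [V/degC] breakdown temperature coefficient (assumed)
OVERVOLTAGE = 2.5        # [V] desired constant overvoltage

def bias_setpoint(temp_c):
    v_bd = V_BREAKDOWN_25C + DVDT * (temp_c - 25.0)
    return v_bd + OVERVOLTAGE

for t in (-20, 0, 25, 40):
    print(f"{t:>4} degC -> bias {bias_setpoint(t):.2f} V")
```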
Sleep restriction during simulated wildfire suppression: effect on physical task performance.
Vincent, Grace; Ferguson, Sally A; Tran, Jacqueline; Larsen, Brianna; Wolkow, Alexander; Aisbett, Brad
2015-01-01
To examine the effects of sleep restriction on firefighters' physical task performance during simulated wildfire suppression. Thirty-five firefighters were matched and randomly allocated to either a control condition (8-hour sleep opportunity, n = 18) or a sleep-restricted condition (4-hour sleep opportunity, n = 17). Performance on physical work tasks was evaluated across three days. In addition, heart rate, core temperature, and worker activity were measured continuously. Ratings of perceived exertion and effort sensation were evaluated during the physical work periods. There were no differences between the sleep-restricted and control groups in firefighters' task performance, heart rate, core temperature, or perceptual responses during self-paced simulated firefighting work tasks. However, the sleep-restricted group were less active during periods of non-physical work compared to the control group. Under self-paced work conditions, 4 h of sleep restriction did not adversely affect firefighters' performance on physical work tasks. However, the sleep-restricted group were less physically active throughout the simulation. This may indicate that sleep-restricted participants adapted their behaviour to conserve effort during rest periods, to subsequently ensure they were able to maintain performance during the firefighter work tasks. This work contributes new knowledge to inform fire agencies of firefighters' operational capabilities when their sleep is restricted during multi-day wildfire events. The work also highlights the need for further research to explore how sleep restriction affects physical performance during tasks of varying duration, intensity, and complexity.
Sleep Restriction during Simulated Wildfire Suppression: Effect on Physical Task Performance
Vincent, Grace; Ferguson, Sally A.; Tran, Jacqueline; Larsen, Brianna; Wolkow, Alexander; Aisbett, Brad
2015-01-01
Objectives To examine the effects of sleep restriction on firefighters’ physical task performance during simulated wildfire suppression. Methods Thirty-five firefighters were matched and randomly allocated to either a control condition (8-hour sleep opportunity, n = 18) or a sleep-restricted condition (4-hour sleep opportunity, n = 17). Performance on physical work tasks was evaluated across three days. In addition, heart rate, core temperature, and worker activity were measured continuously. Ratings of perceived exertion and effort sensation were evaluated during the physical work periods. Results There were no differences between the sleep-restricted and control groups in firefighters’ task performance, heart rate, core temperature, or perceptual responses during self-paced simulated firefighting work tasks. However, the sleep-restricted group were less active during periods of non-physical work compared to the control group. Conclusions Under self-paced work conditions, 4 h of sleep restriction did not adversely affect firefighters’ performance on physical work tasks. However, the sleep-restricted group were less physically active throughout the simulation. This may indicate that sleep-restricted participants adapted their behaviour to conserve effort during rest periods, to subsequently ensure they were able to maintain performance during the firefighter work tasks. This work contributes new knowledge to inform fire agencies of firefighters’ operational capabilities when their sleep is restricted during multi-day wildfire events. The work also highlights the need for further research to explore how sleep restriction affects physical performance during tasks of varying duration, intensity, and complexity. PMID:25615988
A precision device needs precise simulation: Software description of the CBM Silicon Tracking System
NASA Astrophysics Data System (ADS)
Malygina, Hanna; Friese, Volker
2017-10-01
Precise modelling of detectors in simulations is the key to the understanding of their performance, which, in turn, is a prerequisite for the proper design choice and, later, for the achievement of valid physics results. In this report, we describe the implementation of the Silicon Tracking System (STS), the main tracking device of the CBM experiment, in the CBM software environment. The STS makes use of double-sided silicon micro-strip sensors with double metal layers. We present a description of transport and detector response simulation, including all relevant physical effects such as charge creation and drift, charge collection, cross-talk and digitization. Of particular importance and novelty is the description of the time behaviour of the detector, since its readout will not be externally triggered but continuous. We also cover some aspects of local reconstruction, which in the CBM case has to be performed in real time and thus requires high-speed algorithms.
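The response chain described above (charge creation, cross-talk, thresholding, free-streaming time stamps) can be pictured with a toy strip digitizer. The sketch below is a hypothetical Python illustration, not code from the CBM software, and all parameter values are invented.

```python
# Toy strip-detector digitization: share charge between neighbouring strips
# (cross-talk), add electronic noise, apply a threshold, and emit digis that
# carry their own time stamp (untriggered, free-streaming readout).
import numpy as np

def digitize(strip_charges, event_time_ns, cross_talk=0.05,
             noise_enc=1000.0, threshold_e=4000.0):
    """Turn per-strip charge deposits (electrons) into time-stamped digis."""
    q = np.asarray(strip_charges, dtype=float)
    # Capacitive cross-talk: each strip leaks a fraction to its neighbours.
    shared = (1 - 2 * cross_talk) * q
    shared[1:] += cross_talk * q[:-1]
    shared[:-1] += cross_talk * q[1:]
    # Electronic noise (equivalent noise charge, Gaussian).
    shared += np.random.normal(0.0, noise_enc, size=shared.shape)
    # Every strip over threshold yields a digi with its own time stamp
    # instead of waiting for an external trigger.
    return [(strip, qe, event_time_ns)
            for strip, qe in enumerate(shared) if qe > threshold_e]

print(digitize([0, 22000, 3000, 0], event_time_ns=125.0))
```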
NASA Astrophysics Data System (ADS)
De Lucia, Marco; Kempka, Thomas; Jatnieks, Janis; Kühn, Michael
2017-04-01
Reactive transport simulations - where geochemical reactions are coupled with hydrodynamic transport of reactants - are extremely time consuming and suffer from significant numerical issues. Given the high uncertainties inherently associated with the geochemical models, which also constitute the major computational bottleneck, such requirements may seem inappropriate and probably constitute the main limitation for their wide application. A promising way to ease and speed up such coupled simulations is to employ statistical surrogates instead of "full-physics" geochemical models [1]. Data-driven surrogates are reduced models trained on a set of pre-calculated "full-physics" simulations, capturing their principal features while being extremely fast to compute. Model reduction of course comes at the price of a precision loss; however, this appears justified in the presence of large uncertainties regarding the parametrization of geochemical processes. This contribution illustrates the integration of surrogates into the flexible simulation framework currently being developed by the authors' research group [2]. The high-level language of choice for obtaining and dealing with surrogate models is R, which profits from state-of-the-art methods for statistical analysis of large simulation ensembles. A stand-alone advective mass transport module was furthermore developed in order to add such capability to any multiphase finite volume hydrodynamic simulator within the simulation framework. We present 2D and 3D case studies benchmarking the performance of surrogates and "full-physics" chemistry in scenarios pertaining to the assessment of geological subsurface utilization. [1] Jatnieks, J., De Lucia, M., Dransch, D., Sips, M.: "Data-driven surrogate model approach for improving the performance of reactive transport simulations.", Energy Procedia 97, 2016, p. 447-453. [2] Kempka, T., Nakaten, B., De Lucia, M., Nakaten, N., Otto, C., Pohl, M., Chabab [Tillner], E., Kühn, M.: "Flexible Simulation Framework to Couple Processes in Complex 3D Models for Subsurface Utilization Assessment.", Energy Procedia 97, 2016, p. 494-501.
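The surrogate workflow described above (train a fast regression model on an ensemble of pre-computed full-physics runs, then query it inside the transport loop) can be sketched as follows. The authors work in R; this minimal Python version uses a made-up chemistry() stand-in for the geochemical solver.

```python
# Data-driven surrogate: fit a regression model to an ensemble of expensive
# "full physics" evaluations, then use it as a cheap drop-in replacement.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def chemistry(inputs):
    # Stand-in for the expensive full-physics geochemistry step.
    x, y = inputs[:, 0], inputs[:, 1]
    return np.exp(-x) * y**2

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(500, 2))   # pre-calculated run ensemble
Y_train = chemistry(X_train)

surrogate = RandomForestRegressor(n_estimators=100).fit(X_train, Y_train)

# Inside the transport loop, the surrogate answers almost instantly:
X_new = rng.uniform(0.0, 1.0, size=(5, 2))
print(surrogate.predict(X_new))   # fast approximation
print(chemistry(X_new))           # reference full-physics values
```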
Development of a high resolution voxelised head phantom for medical physics applications.
Giacometti, V; Guatelli, S; Bazalova-Carter, M; Rosenfeld, A B; Schulte, R W
2017-01-01
Computational anthropomorphic phantoms have become an important investigation tool for medical imaging and for dosimetry in radiotherapy and radiation protection. The development of computational phantoms with realistic anatomical features contributes significantly to the development of novel methods in medical physics. For many applications, it is desirable that such computational phantoms have a real-world physical counterpart in order to verify the obtained results. In this work, we report the development of a voxelised phantom, the HIGH_RES_HEAD, modelling a paediatric head based on the commercial phantom 715-HN (CIRS). HIGH_RES_HEAD is unique for its anatomical detail and high spatial resolution (0.18×0.18 mm² pixel size). The development of such a phantom was required to investigate the performance of a new proton computed tomography (pCT) system, in terms of detector technology and image reconstruction algorithms. The HIGH_RES_HEAD was used in an ad hoc Geant4 simulation modelling the pCT system. The simulation application was previously validated with respect to experimental results. When compared to a standard-resolution voxelised phantom of the same paediatric head, it was shown that in pCT reconstruction studies the use of the HIGH_RES_HEAD reduces the average relative stopping power difference between experimental and simulated results from 2% to 0.7%, thus improving the overall quality of the head phantom simulation. The HIGH_RES_HEAD can also be used for other medical physics applications such as treatment planning studies. A second version of the voxelised phantom was created that contains a prototypic base-of-skull tumour and surrounding organs at risk.
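As a rough illustration of what a voxelised phantom is as a data structure (a grid of material IDs plus per-material properties such as relative stopping power), consider the toy construction below; the geometry and RSP values are invented, and only the in-plane voxel size follows the text.

```python
# Toy voxelised phantom: a 3-D array of material IDs and an RSP lookup table.
import numpy as np

nx, ny, nz = 256, 256, 100          # hypothetical grid dimensions
dx = dy = 0.18                      # mm, in-plane voxel size as in the text
material = np.zeros((nx, ny, nz), dtype=np.uint8)   # 0 = air

# Carve a crude "head": a soft-tissue ellipsoid inside a bone shell.
x, y, z = np.ogrid[:nx, :ny, :nz]
r2 = ((x - nx/2)/100)**2 + ((y - ny/2)/100)**2 + ((z - nz/2)/45)**2
material[r2 < 1.0] = 2              # bone
material[r2 < 0.85] = 1             # soft tissue

rsp = np.array([0.001, 1.03, 1.6])  # air, soft tissue, bone (illustrative)
print(f"in-plane voxel size: {dx} x {dy} mm")
print("mean RSP along one beam row:", rsp[material[:, ny//2, nz//2]].mean())
```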
Numerical Simulations of Spacecraft Charging: Selected Applications
NASA Astrophysics Data System (ADS)
Moulton, J. D.; Delzanno, G. L.; Meierbachtol, C.; Svyatskiy, D.; Vernon, L.; Borovsky, J.; Thomsen, M. F.
2016-12-01
The electrical charging of spacecraft due to bombarding charged particles affects their performance and operation. We study this charging using CPIC, a particle-in-cell code specifically designed for studying plasma-material interactions. CPIC is based on multi-block curvilinear meshes, resulting in near-optimal computational performance while maintaining geometric accuracy, and is interfaced to a mesh generator that creates a computational mesh conforming to complex objects like a spacecraft. Relevant plasma parameters can be imported from the SHIELDS framework (currently under development at LANL), which simulates geomagnetic storms and substorms in the Earth's magnetosphere. Selected physics results will be presented, together with an overview of the code. First, spacecraft-charging simulations with geometry representative of the Van Allen Probes spacecraft will be discussed, focusing on the conditions that can lead to significant spacecraft charging events. Second, results from a recent study that investigates the conditions under which a high-power (>keV) electron beam could be emitted from a magnetospheric spacecraft will be presented. The latter study proposes a spacecraft-charging mitigation strategy based on plasma contactor technology that might allow beam experiments to operate in the low-density magnetosphere. High-power electron beams could be used, for instance, to establish magnetic-field-line connectivity between the ionosphere and magnetosphere and help solve long-standing questions in ionospheric/magnetospheric physics.
Modeling, validation and analysis of a Whegs robot in the USARSim environment
NASA Astrophysics Data System (ADS)
Taylor, Brian K.; Balakirsky, Stephen; Messina, Elena; Quinn, Roger D.
2008-04-01
Simulation of robots in a virtual domain has multiple benefits. End users can use the simulation as a training tool to increase their skill with the vehicle without risking damage to the robot or surrounding environment. Simulation allows researchers and developers to benchmark robot performance in a range of scenarios without having the physical robot or environment present. The simulation can also help guide and generate new design concepts. USARSim (Unified System for Automation and Robot Simulation) is a tool that is being used to accomplish these goals, particularly within the realm of search and rescue. It is based on the Unreal Tournament 2004 gaming engine, which approximates the physics of how a robot interacts with its environment. A family of vehicles that can benefit from simulation in USARSim is the Whegs™ robots. Developed in the Biorobotics Laboratory at Case Western Reserve University, Whegs™ robots are highly mobile ground vehicles that use abstracted biological principles to achieve a robust level of locomotion, including passive gait adaptation and enhanced climbing abilities. This paper describes a Whegs™ robot model that was constructed in USARSim. The model was configured with the same kinds of behavioral characteristics found in real Whegs™ vehicles. Once these traits were implemented, a validation study was performed using identical performance metrics measured on both the virtual and real vehicles to quantify vehicle performance and to ensure that the virtual robot's performance matched that of the real robot.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, Panagiotis; Cary, John
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.
Stochastic optimization of GeantV code by use of genetic algorithms
NASA Astrophysics Data System (ADS)
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.
2017-10-01
GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
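The black-box tuning loop this abstract describes can be sketched in a few lines of Python; the throughput() fitness below is a made-up stand-in for running an instrumented GeantV benchmark, and the tunables are hypothetical.

```python
# Minimal genetic algorithm for black-box parameter tuning: find the
# simulation parameters that maximise throughput using only point-wise
# evaluations of the fitness function.
import random

def throughput(params):
    basket, threads = params                         # hypothetical tunables
    return -(basket - 64)**2 - 3*(threads - 16)**2   # unknown to the optimiser

def evolve(pop_size=20, generations=30, mut=0.3):
    pop = [(random.randint(1, 256), random.randint(1, 64))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=throughput, reverse=True)
        parents = pop[:pop_size // 2]                # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                     # one-point crossover
            if random.random() < mut:                # mutation
                child = (max(1, child[0] + random.randint(-8, 8)),
                         max(1, child[1] + random.randint(-4, 4)))
            children.append(child)
        pop = parents + children
    return max(pop, key=throughput)

print(evolve())   # converges near (64, 16) for this toy fitness
```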
Renden, Peter G; Savelsbergh, Geert J P; Oudejans, Raôul R D
2017-05-01
We investigated the effects of reflex-based self-defence training on police performance in simulated high-pressure arrest situations. Police officers received this training as well as a regular police arrest and self-defence skills training (control training) in a crossover design. Officers' performance was tested on several variables in six reality-based scenarios before and after each training intervention. Results showed improved performance after the reflex-based training, while there was no such effect of the regular police training. Improved performance could be attributed to better communication, situational awareness (scanning area, alertness), assertiveness, resolution, proportionality, control and converting primary responses into tactical movements. As officers trained complete violent situations (and not just physical skills), they learned to use their actions before physical contact both for de-escalation and to anticipate possible attacks. Furthermore, they learned to respond to attacks with skills based on their primary reflexes. The results of this study seem to suggest that reflex-based self-defence training better prepares officers for performing in high-pressure arrest situations than the current form of police arrest and self-defence skills training. Practitioner Summary: Police officers' performance in high-pressure arrest situations improved after a reflex-based self-defence training, while there was no such effect of a regular police training. As officers learned to anticipate possible attacks and to respond with skills based on their primary reflexes, they were better able to perform effectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amadio, G.; et al.
An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
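One form such an automated statistical test could take is a two-sample chi-square comparison of histogrammed observables from the two implementations; the sketch below uses synthetic samples in place of actual Geant4 and GeantV output.

```python
# Two-sample consistency check: histogram a sampled quantity from a reference
# and a candidate implementation, then apply a chi-square test.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(1)
reference = rng.exponential(1.0, 100_000)   # stand-in for the reference model
candidate = rng.exponential(1.0, 100_000)   # stand-in for the vectorised model

bins = np.histogram_bin_edges(reference, bins=50)
ref_counts, _ = np.histogram(reference, bins)
cand_counts, _ = np.histogram(candidate, bins)

mask = ref_counts > 5                       # avoid sparse-bin artefacts
# Scale expected counts so the totals match, as chisquare requires.
expected = ref_counts[mask] * cand_counts[mask].sum() / ref_counts[mask].sum()
stat, p = chisquare(cand_counts[mask], expected)
print(f"chi2 = {stat:.1f}, p = {p:.3f} ->", "PASS" if p > 0.01 else "FAIL")
```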
Extremely high frequency RF effects on electronics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loubriel, Guillermo Manuel; Vigliano, David; Coleman, Phillip Dale
The objective of this work was to understand the fundamental physics of extremely high frequency RF effects on electronics. To accomplish this objective, we produced models, conducted simulations, and performed measurements to identify the mechanisms of effects as frequency increases into the millimeter-wave regime. Our purpose was to answer the questions, 'What are the tradeoffs between coupling, transmission losses, and device responses as frequency increases?', and, 'How high in frequency do effects on electronic systems continue to occur?' Using full wave electromagnetics codes and a transmission-line/circuit code, we investigated how extremely high-frequency RF propagates on wires and printed circuit board traces. We investigated both field-to-wire coupling and direct illumination of printed circuit boards to determine the significant mechanisms for inducing currents at device terminals. We measured coupling to wires and attenuation along wires for comparison to the simulations, looking at plane-wave coupling as it launches modes onto single and multiconductor structures. We simulated the response of discrete and integrated circuit semiconductor devices to those high-frequency currents and voltages, using SGFramework, the open-source General-purpose Semiconductor Simulator (gss), and Sandia's Charon semiconductor device physics codes. This report documents our findings.
Alosco, Michael L.; Penn, Marc S.; Spitznagel, Mary Beth; Cleveland, Mary Jo; Ott, Brian R.
2015-01-01
OBJECTIVE. Reduced physical fitness secondary to heart failure (HF) may contribute to poor driving; reduced physical fitness is a known correlate of cognitive impairment and has been associated with decreased independence in driving. No study has examined the associations among physical fitness, cognition, and driving performance in people with HF. METHOD. Eighteen people with HF completed a physical fitness assessment, a cognitive test battery, and a validated driving simulator scenario. RESULTS. Partial correlations showed that poorer physical fitness was correlated with more collisions and stop signs missed and lower scores on a composite score of attention, executive function, and psychomotor speed. Cognitive dysfunction predicted reduced driving simulation performance. CONCLUSION. Reduced physical fitness in participants with HF was associated with worse simulated driving, possibly because of cognitive dysfunction. Larger studies using on-road testing are needed to confirm our findings and identify clinical interventions to maximize safe driving. PMID:26122681
NASA Astrophysics Data System (ADS)
Javernick, Luke; Redolfi, Marco; Bertoldi, Walter
2018-05-01
New data collection techniques offer numerical modelers the ability to gather and utilize high quality data sets with high spatial and temporal resolution. Such data sets are currently needed for calibration, verification, and to fuel future model development, particularly morphological simulations. This study explores the use of high quality spatial and temporal data sets of observed bed load transport in braided river flume experiments to evaluate the ability of a two-dimensional model, Delft3D, to predict bed load transport. This study uses a fixed bed model configuration and examines the model's shear stress calculations, which are the foundation for predicting the sediment fluxes necessary for morphological simulations. The evaluation is conducted for three flow rates, and the model setup used highly accurate Structure-from-Motion (SfM) topography and discharge boundary conditions. The model was hydraulically calibrated using bed roughness, and performance was evaluated based on depth and inundation agreement. Model bed load performance was evaluated in terms of critical shear stress exceedance area compared to maps of observed bed mobility in a flume. Following the standard hydraulic calibration, bed load performance was tested for sensitivity to horizontal eddy viscosity parameterization and bed morphology updating. Simulations produced depth errors equal to the SfM inherent errors, inundation agreement of 77-85%, and critical shear stress exceedance in agreement with 49-68% of the observed active area. This study provides insight into the ability of physically based, two-dimensional simulations to accurately predict bed load as well as the effects of horizontal eddy viscosity and bed updating. Further, this study highlights how using high spatial and temporal data to capture the physical processes at work during flume experiments can help to improve morphological modeling.
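The bed-load criterion used in the study, comparing modelled bed shear stress against a critical threshold and mapping the exceedance area, can be illustrated schematically; the drag-law form, grain size, and fields below are illustrative assumptions, not the study's configuration.

```python
# Shear-stress exceedance map: bed shear stress from depth-averaged velocity
# via a Chezy drag law, compared against a critical Shields stress.
import numpy as np

rho, rho_s, g = 1000.0, 2650.0, 9.81   # water/sediment density, gravity
d50 = 1.0e-3                           # grain size [m], hypothetical
chezy = 30.0                           # calibrated roughness [m^0.5/s]
theta_cr = 0.047                       # critical Shields parameter

U = np.random.uniform(0.0, 0.8, size=(100, 200))   # stand-in velocity [m/s]

tau = rho * g * U**2 / chezy**2                    # bed shear stress [Pa]
tau_cr = theta_cr * (rho_s - rho) * g * d50        # critical stress [Pa]

active = tau > tau_cr                              # predicted mobile cells
print(f"critical shear stress exceeded on {active.mean():.0%} of the bed")
```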
NASA Astrophysics Data System (ADS)
Li, Xiaoyi; Soteriou, Marios C.
2016-08-01
Recent advances in numerical methods coupled with the substantial enhancements in computing power and the advent of high performance computing have presented first principle, high fidelity simulation as a viable tool in the prediction and analysis of spray atomization processes. The credibility and potential impact of such simulations, however, has been hampered by the relative absence of detailed validation against experimental evidence. The numerical stability and accuracy challenges arising from the need to simulate the high liquid-gas density ratio across the sharp interfaces encountered in these flows are key reasons for this. In this work we challenge this status quo by presenting a numerical model able to deal with these challenges, employing it in simulations of liquid jet in crossflow atomization and performing extensive validation of its results against a carefully executed experiment with detailed measurements in the atomization region. We then proceed to the detailed analysis of the flow physics. The computational model employs the coupled level set and volume of fluid approach to directly capture the spatiotemporal evolution of the liquid-gas interface and the sharp-interface ghost fluid method to stably handle high liquid-air density ratio. Adaptive mesh refinement and Lagrangian droplet models are shown to be viable options for computational cost reduction. Moreover, high performance computing is leveraged to manage the computational cost. The experiment selected for validation eliminates the impact of inlet liquid and gas turbulence and focuses on the impact of the crossflow aerodynamic forces on the atomization physics. Validation is demonstrated by comparing column surface wavelengths, deformation, breakup locations, column trajectories and droplet sizes, velocities, and mass rates for a range of intermediate Weber numbers. Analysis of the physics is performed in terms of the instability and breakup characteristics and the features of downstream flow recirculation, and vortex shedding. Formation of "Λ" shape windward column waves is observed and explained by the combined upward and lateral surface motion. The existence of Rayleigh-Taylor instability as the primary mechanism for the windward column waves is verified for this case by comparing wavelengths from the simulations to those predicted by linear stability analyses. Physical arguments are employed to postulate that the type of instability manifested may be related to conditions such as the gas Weber number and the inlet turbulence level. The decreased column wavelength with increasing Weber number is found to cause enhanced surface stripping and early depletion of liquid core at higher Weber number. A peculiar "three-streak-two-membrane" liquid structure is identified at the lowest Weber number and explained as the consequence of the symmetric recirculation zones behind the jet column. It is found that the vortical flow downstream of the liquid column resembles a von Karman vortex street and that the coupling between the gas flow and droplet transport is weak for the conditions explored.
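The Rayleigh-Taylor verification mentioned above rests on a classical result: the most-amplified wavelength of an accelerated interface, λ_RT = 2π√(3σ/(ρ_l a)). A back-of-the-envelope check follows, with a drag-based estimate of the column acceleration and water-like properties; all values are assumed, not the paper's.

```python
# Most-unstable Rayleigh-Taylor wavelength for an aerodynamically
# accelerated liquid column (order-of-magnitude estimate).
import math

sigma = 0.07      # surface tension [N/m], water-like
rho_l = 1000.0    # liquid density [kg/m^3]
rho_g = 1.2       # gas density [kg/m^3]
U_g = 100.0       # crossflow speed [m/s]
d = 1.0e-3        # jet diameter [m]

# Column acceleration from aerodynamic drag on a cylinder per unit length.
Cd = 1.0
a = 2 * Cd * rho_g * U_g**2 / (math.pi * rho_l * d)

lam = 2 * math.pi * math.sqrt(3 * sigma / (rho_l * a))
print(f"a ~ {a:.0f} m/s^2, most-unstable RT wavelength ~ {lam*1e6:.0f} um")
```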
The impact of physical and mental tasks on pilot mental workload
NASA Technical Reports Server (NTRS)
Berg, S. L.; Sheridan, T. B.
1986-01-01
Seven instrument-rated pilots with a wide range of backgrounds and experience levels flew four different scenarios on a fixed-base simulator. The Baseline scenario was the simplest of the four and had few mental and physical tasks. An Activity scenario had many physical but few mental tasks. The Planning scenario had few physical and many mental tasks. A Combined scenario had high mental and physical task loads. The magnitude of each pilot's altitude and airspeed deviations was measured, subjective workload ratings were recorded, and the degree of pilot compliance with assigned memory/planning tasks was noted. Mental and physical performance was a strong function of the manual activity level but was not influenced by the mental task load. High manual task loads resulted in a large percentage of mental errors even under low mental task loads. Although all the pilots gave similar subjective ratings when the manual task load was high, subjective ratings showed greater individual differences with high mental task loads. Altitude or airspeed deviations and subjective ratings were most correlated when the total task load was very high. Although airspeed deviations, altitude deviations, and subjective workload ratings were similar for both low-experience and high-experience pilots, at very high total task loads, mental performance was much lower for the low-experience pilots.
NASA Astrophysics Data System (ADS)
Park, Jun; Hwang, Seung-On
2017-11-01
The impact of a spectral nudging technique on the dynamical downscaling of summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulations spanning the combinations of two shortwave radiation and four land surface model schemes, parameterizations known to be crucial for simulating the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of the two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than that from using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by securing consistency with the large-scale forcing over the regional domain. This in turn indirectly helps the two physical parameterizations to produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in a high-resolution downscaled climate.
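Conceptually, spectral nudging relaxes only the largest horizontal wavenumbers of a limited-area field toward the driving analysis while leaving small scales free; a schematic sketch follows, with an arbitrary wavenumber cutoff and relaxation time.

```python
# Schematic spectral nudging: relax the large-scale Fourier modes of a
# limited-area field toward the driving analysis; small scales stay free.
import numpy as np

def spectral_nudge(field, analysis, n_keep=3, tau=6*3600.0, dt=600.0):
    """Nudge wavenumbers |k|,|l| <= n_keep of `field` toward `analysis`."""
    fk = np.fft.fft2(field)
    ak = np.fft.fft2(analysis)
    kx = np.fft.fftfreq(field.shape[0]) * field.shape[0]
    ky = np.fft.fftfreq(field.shape[1]) * field.shape[1]
    large = (np.abs(kx)[:, None] <= n_keep) & (np.abs(ky)[None, :] <= n_keep)
    fk[large] += (dt / tau) * (ak[large] - fk[large])  # relax large scales only
    return np.real(np.fft.ifft2(fk))

model = np.random.randn(64, 64)    # stand-in model field
driver = np.random.randn(64, 64)   # stand-in analysis field
nudged = spectral_nudge(model, driver)
print(nudged.shape)
```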
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.
2009-08-07
This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]; updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas; updated LSP to support the use of Prism’s multi-frequency opacity tables; generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies; developed and implemented parallel processing techniques for the radiation physics algorithms in LSP; benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations; performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments; performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments; updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output; updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP; updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables); and developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.
NASA Astrophysics Data System (ADS)
Li, Xiaoyi; Gao, Hui; Soteriou, Marios C.
2017-08-01
Atomization of extremely high viscosity liquid can be of interest for many applications in aerospace, automotive, pharmaceutical, and food industries. While detailed atomization measurements usually face grand challenges, high-fidelity numerical simulations offer the advantage to comprehensively explore the atomization details. In this work, a previously validated high-fidelity first-principle simulation code HiMIST is utilized to simulate high-viscosity liquid jet atomization in crossflow. The code is used to perform a parametric study of the atomization process in a wide range of Ohnesorge numbers (Oh = 0.004-2) and Weber numbers (We = 10-160). Direct comparisons between the present study and previously published low-viscosity jet in crossflow results are performed. The effects of viscous damping and slowing on jet penetration, liquid surface instabilities, ligament formation/breakup, and subsequent droplet formation are investigated. Complex variations in near-field and far-field jet penetrations with increasing Oh at different We are observed and linked with the underlying jet deformation and breakup physics. Transition in breakup regimes and increase in droplet size with increasing Oh are observed, mostly consistent with the literature reports. The detailed simulations elucidate a distinctive edge-ligament-breakup dominated process with long surviving ligaments for the higher Oh cases, as opposed to a two-stage edge-stripping/column-fracture process for the lower Oh counterparts. The trend of decreasing column deflection with increasing We is reversed as Oh increases. A predominantly unimodal droplet size distribution is predicted at higher Oh, in contrast to the bimodal distribution at lower Oh. It has been found that both Rayleigh-Taylor and Kelvin-Helmholtz linear stability theories cannot be easily applied to interpret the distinct edge breakup process and further study of the underlying physics is needed.
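The two dimensionless groups that span the parametric study are the aerodynamic Weber number We = ρ_g U² d / σ and the Ohnesorge number Oh = μ_l / √(ρ_l σ d); a worked evaluation for one hypothetical high-viscosity operating point:

```python
# Dimensionless groups for a liquid jet in crossflow (illustrative values).
import math

rho_g, rho_l = 1.2, 1000.0   # gas / liquid density [kg/m^3]
U, d = 60.0, 1.0e-3          # crossflow speed [m/s], jet diameter [m]
sigma = 0.07                 # surface tension [N/m]
mu_l = 0.5                   # liquid viscosity [Pa s], very viscous case

We = rho_g * U**2 * d / sigma
Oh = mu_l / math.sqrt(rho_l * sigma * d)
print(f"We = {We:.0f}, Oh = {Oh:.2f}")   # We ~ 62, Oh ~ 1.9
```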
Have More Fun Teaching Physics: Simulating, Stimulating Software.
ERIC Educational Resources Information Center
Jenkins, Doug
1996-01-01
High school physics offers opportunities to use problem solving and lab practices as well as cement skills in research, technical writing, and software applications. Describes and evaluates computer software enhancing the high school physics curriculum including spreadsheets for laboratory data, all-in-one simulators, projectile motion simulators,…
Efficient evaluation of wireless real-time control networks.
Horvath, Peter; Yampolskiy, Mark; Koutsoukos, Xenofon
2015-02-11
In this paper, we present a system simulation framework for the design and performance evaluation of complex wireless cyber-physical systems. We describe the simulator architecture and the specific developments that are required to simulate cyber-physical systems relying on multi-channel, multi-hop mesh networks. We introduce realistic and efficient physical layer models and a system simulation methodology, which provides statistically significant performance evaluation results with low computational complexity. The capabilities of the proposed framework are illustrated in the example of WirelessHART, a centralized, real-time, multi-hop mesh network designed for industrial control and monitoring applications.
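A minimal example of the kind of statistical physical-layer model such a framework relies on is log-distance path loss with log-normal shadowing and an SNR reception threshold; the constants below are generic textbook values, not the paper's calibrated ones.

```python
# Log-distance path loss with log-normal shadowing; a packet is received
# when the resulting SNR clears a required threshold.
import numpy as np

rng = np.random.default_rng(7)

def packet_received(dist_m, tx_dbm=0.0, pl0_db=40.0, exponent=3.0,
                    shadow_sigma_db=4.0, noise_dbm=-95.0, snr_req_db=10.0):
    path_loss = (pl0_db + 10 * exponent * np.log10(dist_m)
                 + rng.normal(0.0, shadow_sigma_db))
    snr = tx_dbm - path_loss - noise_dbm
    return snr >= snr_req_db

# Empirical packet reception ratio over a 30 m link segment:
prr = np.mean([packet_received(30.0) for _ in range(10_000)])
print(f"packet reception ratio at 30 m: {prr:.2%}")
```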
Unsteady Analyses of Valve Systems in Rocket Engine Testing Environments
NASA Technical Reports Server (NTRS)
Shipman, Jeremy; Hosangadi, Ashvin; Ahuja, Vineet
2004-01-01
This paper discusses simulation technology used to support the testing of rocket propulsion systems by performing high fidelity analyses of feed system components. A generalized multi-element framework has been used to perform simulations of control valve systems. This framework provides the flexibility to resolve the structural and functional complexities typically associated with valve-based high pressure feed systems that are difficult to deal with using traditional Computational Fluid Dynamics (CFD) methods. In order to validate this framework for control valve systems, results are presented for simulations of a cryogenic control valve at various plug settings and compared to both experimental data and simulation results obtained at NASA Stennis Space Center. A detailed unsteady analysis has also been performed for a pressure regulator type control valve used to support rocket engine and component testing at Stennis Space Center. The transient simulation captures the onset of a modal instability that has been observed in the operation of the valve. A discussion of the flow physics responsible for the instability and a prediction of the dominant modes associated with the fluctuations is presented.
NASA Technical Reports Server (NTRS)
Negrut, Dan; Mazhar, Hammad; Melanz, Daniel; Lamb, David; Jayakumar, Paramsothy; Letherwood, Michael; Jain, Abhinandan; Quadrelli, Marco
2012-01-01
This paper is concerned with the physics-based simulation of light tracked vehicles operating on rough deformable terrain. The focus is on small autonomous vehicles, which weigh less than 100 lb and move on deformable and rough terrain that is feature rich and no longer representable using a continuum approach. A scenario of interest is, for instance, the simulation of a reconnaissance mission for a high mobility lightweight robot where objects such as a boulder or a ditch that could otherwise be considered small for a truck or tank, become major obstacles that can impede the mobility of the light autonomous vehicle and negatively impact the success of its mission. Analyzing and gauging the mobility and performance of these light vehicles is accomplished through a modeling and simulation capability called Chrono::Engine. Chrono::Engine relies on parallel execution on Graphics Processing Unit (GPU) cards.
Nonequilibrium radiative hypersonic flow simulation
NASA Astrophysics Data System (ADS)
Shang, J. S.; Surzhikov, S. T.
2012-08-01
Nearly all the required scientific disciplines for computational hypersonic flow simulation have been developed on the framework of gas kinetic theory. However, when high-temperature physical phenomena occur beneath the molecular and atomic scales, knowledge of quantum physics and quantum chemical-physics becomes essential. Therefore the most challenging topics in computational simulation can probably be identified as the chemical-physical models for a high-temperature gaseous medium. Thermal radiation is also associated with quantum transitions of molecular and electronic states. The radiative energy exchange is characterized by the mechanisms of emission, absorption, and scattering. In developing a simulation capability for nonequilibrium radiation, an efficient numerical procedure is equally important both for solving the radiative transfer equation and for generating the required optical data via the ab-initio approach. In computational simulation, the initial values and boundary conditions are paramount for physical fidelity. Precise information at the material interface of an ablating environment requires more than just a balance of the fluxes across the interface but must also consider the boundary deformation. The foundation of this theoretic development shall be built on the eigenvalue structure of the governing equations, which can be described by Reynolds' transport theorem. Recent innovations for possible aerospace vehicle performance enhancement via an electromagnetic effect appear to be very attractive. The effectiveness of this mechanism depends strongly on the degree of ionization of the flow medium, the consecutive interactions of fluid dynamics and electrodynamics, as well as an externally applied magnetic field. Some verified research results in this area will be highlighted. An assessment of the most recent advancements in nonequilibrium modeling of chemical kinetics, chemical-physics kinetics, ablation, radiative exchange, computational algorithms, and the aerodynamic-electromagnetic interaction is summarized and delineated. The critical basic research areas for physics-based hypersonic flow simulation should become self-evident through the present discussion. Nevertheless, intensive basic research efforts must be sustained in these areas for fundamental knowledge and future technology advancement.
Expanded Processing Techniques for EMI Systems
2012-07-01
It is possible to perform better target detection using physics-based algorithms and the entire data set, rather than simulating a simpler data set and mapping… [Figure 4.25: Plots of simulated MetalMapper data for two oblate spheroidal targets]
A New Simulation Framework for the Electron-Ion Collider
NASA Astrophysics Data System (ADS)
Arrington, John
2017-09-01
Last year, a collaboration between Physics Division and High-Energy Physics at Argonne was formed to enable significantly broader contributions to the development of the Electron-Ion Collider. This includes efforts in accelerator R&D, theory, simulations, and detector R&D. I will give a brief overview of the status of these efforts, with emphasis on the aspects aimed at enabling the community to more easily become involved in evaluation of physics, detectors, and details of spectrometer designs. We have put together a new, easy-to-use simulation framework using flexible software tools. The goal is to enable detailed simulations to evaluate detector performance and compare detector designs. In addition, a common framework capable of providing detailed simulations of different spectrometer designs will allow for fully consistent evaluations of the physics reach of different spectrometer designs or detector systems for a variety of physics channels. In addition, new theory efforts will provide self-consistent models of GPDs (including QCD evolution) and TMDs in nucleons and light nuclei, as well as providing more detailed physics input for the evaluation of some new observables. This material is based upon work supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract DE-AC02-06CH11357.
Analytical modeling of helium turbomachinery using FORTRAN 77
NASA Astrophysics Data System (ADS)
Balaji, Purushotham
Advanced Generation IV modular reactors, including Very High Temperature Reactors (VHTRs), utilize helium as the working fluid, with a potential for high efficiency power production utilizing helium turbomachinery. Helium is chemically inert and nonradioactive, which makes the gas ideal for a nuclear power-plant environment where radioactive leaks are a high concern. These properties of helium help to increase the safety features of the plant as well as to slow the aging of plant components. The lack of sufficient helium turbomachinery data has made it difficult to study the vital role played by the gas turbine components of these VHTR powered cycles. Therefore, this research work focuses on predicting the performance of helium compressors. A FORTRAN77 program is developed to simulate helium compressor operation, including surge line prediction. The resulting design point and off-design performance data can be used to develop compressor map files readable by the Numerical Propulsion System Simulation (NPSS) software. This multi-physics simulation software, developed for propulsion system analysis, has found applications in simulating power-plant cycles.
Simulating industrial plasma reactors - A fresh perspective
NASA Astrophysics Data System (ADS)
Mohr, Sebastian; Rahimi, Sara; Tennyson, Jonathan; Ansell, Oliver; Patel, Jash
2016-09-01
A key goal of the research project PowerBase presented here is to produce new integration schemes which enable the manufacturability of 3D integrated power smart systems with high precision TSV etched features. The necessary high aspect ratio etch is performed via the BOSCH process. Investigations in industrial research often use trial-and-improvement experimental methods. Simulations provide an alternative way to study the influence of external parameters on the final product, whilst also giving insights into the physical processes. This presentation investigates the process of simulating an industrial ICP reactor used over high power (up to 2x5 kW) and pressure (up to 200 mTorr) ranges, analysing the specific procedures to achieve a compromise between physical correctness and computational speed, while testing commonly made assumptions. This includes, for example, the effect of different physical models and the inclusion of different gas phase and surface reactions, with the aim of accurately predicting the dependence of surface rates and profiles on external parameters in SF6 and C4F8 discharges. This project has received funding from the Electronic Component Systems for European Leadership Joint Undertaking under Grant Agreement No. 662133 PowerBase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turinsky, Paul J., E-mail: turinsky@ncsu.edu; Kothe, Douglas B., E-mail: kothe@ornl.gov
The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, all while assuring nuclear safety. To accomplish this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate with long standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics “core simulator” based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multiphysics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M&S capabilities, which is in progress, will assist in addressing long-standing and future operational and safety challenges of the nuclear industry. - Highlights: • Complexity of physics based modeling of light water reactor cores being addressed. • Capability developed to help address problems that have challenged the nuclear power industry. • Simulation capabilities that take advantage of high performance computing developed.
Overview of the CLIC detector and its physics potential
NASA Astrophysics Data System (ADS)
Ström, Rickard
2017-12-01
The CLIC detector and physics study (CLICdp) is an international collaboration that investigates the physics potential of the Compact Linear Collider (CLIC). CLIC is a high-energy electron-positron collider under development, aiming for centre-of-mass energies from a few hundred GeV to 3 TeV. In addition to physics studies based on full Monte Carlo simulations of signal and background processes, CLICdp performs cutting-edge hardware R&D. In this contribution, recent results from physics prospect studies are presented, with emphasis on Higgs studies. Additionally, the new CLIC detector model and the recently updated CLIC baseline staging scenario are presented.
Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merzari, E.; Shemon, E. R.; Yu, Y. Q.
This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully-integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which provides a demonstration of the feasibility of the fully-integrated simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spong, D.A.
The design techniques and physics analysis of modern stellarator configurations for magnetic fusion research rely heavily on high performance computing and simulation. Stellarators, which are fundamentally 3-dimensional in nature, offer significantly more design flexibility than more symmetric devices such as the tokamak. By varying the outer boundary shape of the plasma, a variety of physics features, such as transport, stability, and heating efficiency can be optimized. Scientific visualization techniques are an important adjunct to this effort as they provide a necessary ergonomic link between the numerical results and the intuition of the human researcher. The authors have developed a variety of visualization techniques for stellarators which both facilitate the design optimization process and allow the physics simulations to be more readily understood.
NASA Technical Reports Server (NTRS)
Follen, G.; Naiman, C.; auBuchon, M.
2000-01-01
Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of propulsion systems for aircraft and space vehicles called the Numerical Propulsion System Simulation (NPSS). The NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Current "state-of-the-art" engine simulations are 0-dimensional in that there is no axial, radial, or circumferential resolution within a given component (e.g., a compressor or turbine has no internal station designations). In these 0-dimensional cycle simulations the individual component performance characteristics typically come from a table look-up (map) with adjustments for off-design effects such as variable geometry, Reynolds effects, and clearances. Zooming one or more of the engine components to a higher order, physics-based analysis means a higher order code is executed and the results from this analysis are used to adjust the 0-dimensional component performance characteristics within the system simulation. By drawing on the results from more predictive, physics-based higher order analysis codes, "cycle" simulations are refined to closely model and predict the complex physical processes inherent to engines. As part of the overall development of the NPSS, NASA and industry began the process of defining and implementing an object class structure that enables numerical zooming between the NPSS Version 1 (0-dimensional) and higher order 1-, 2-, and 3-dimensional analysis codes. The NPSS Version 1 preserves the historical cycle engineering practices but also extends these classical practices into the area of numerical zooming for use within a company's design system. What follows here is a description of successfully zooming 1-dimensional (row-by-row) high pressure compressor results back to an NPSS 0-dimensional engine simulation, and a discussion of the results illustrated using an advanced data visualization tool. This type of high fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the fidelity of the engine system simulation and enable the engine system to be "pre-validated" prior to commitment to engine hardware.
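The zooming mechanics, a 0-dimensional map look-up whose result is rescaled by a higher-order component analysis, can be caricatured in a few lines; the map values and the row_by_row() stand-in below are invented for illustration.

```python
# Toy "numerical zooming": a 0-D cycle code reads compressor efficiency from
# a map; a higher-order analysis returns a corrected value used to rescale
# the map point inside the system simulation.
import numpy as np

speeds = np.array([0.8, 0.9, 1.0, 1.1])        # corrected speed fraction
map_eta = np.array([0.82, 0.86, 0.88, 0.85])   # map efficiency values

def map_lookup(speed):
    return np.interp(speed, speeds, map_eta)    # 0-D table look-up

def row_by_row(speed):
    # Stand-in for the 1-D compressor code invoked when "zooming".
    return map_lookup(speed) - 0.012            # pretend it finds less margin

speed = 0.95
eta_map = map_lookup(speed)
scalar = row_by_row(speed) / eta_map            # adjustment factor
print(f"map eta = {eta_map:.3f}, zoomed eta = {eta_map * scalar:.3f}")
```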
1D-Var multilayer assimilation of X-band SAR data into a detailed snowpack model
NASA Astrophysics Data System (ADS)
Phan, X. V.; Ferro-Famil, L.; Gay, M.; Durand, Y.; Dumont, M.; Morin, S.; Allain, S.; D'Urso, G.; Girard, A.
2014-10-01
The structure and physical properties of a snowpack and their temporal evolution may be simulated using meteorological data and a snow metamorphism model. Such an approach may meet limitations related to potential divergences and accumulated errors, to a limited spatial resolution, to wind or topography-induced local modulations of the physical properties of a snow cover, etc. Exogenous data are then required in order to constrain the simulator and improve its performance over time. Synthetic-aperture radars (SARs) and, in particular, recent sensors provide reflectivity maps of snow-covered environments with high temporal and spatial resolutions. The radiometric properties of a snowpack measured at sufficiently high carrier frequencies are known to be tightly related to some of its main physical parameters, like its depth, snow grain size and density. SAR acquisitions may then be used, together with an electromagnetic backscattering model (EBM) able to simulate the reflectivity of a snowpack from a set of physical descriptors, in order to constrain a physical snowpack model. In this study, we introduce a variational data assimilation scheme coupling TerraSAR-X radiometric data into the snowpack evolution model Crocus. The physical properties of a snowpack, such as snow density and optical diameter of each layer, are simulated by Crocus, fed by the local reanalysis of meteorological data (SAFRAN) at a French Alpine location. These snowpack properties are used as inputs of an EBM based on dense media radiative transfer (DMRT) theory, which simulates the total backscattering coefficient of a dry snow medium at X and higher frequency bands. After evaluating the sensitivity of the EBM to snowpack parameters, a 1D-Var data assimilation scheme is implemented in order to minimize the discrepancies between EBM simulations and observations obtained from TerraSAR-X acquisitions by modifying the physical parameters of the Crocus-simulated snowpack. The algorithm then re-initializes Crocus with the modified snowpack physical parameters, allowing it to continue the simulation of snowpack evolution, with adjustments based on remote sensing information. This method is evaluated using multi-temporal TerraSAR-X images acquired over the specific site of the Argentière glacier (Mont-Blanc massif, French Alps) to constrain the evolution of Crocus. Results indicate that X-band SAR data can be taken into account to modify the evolution of snowpack simulated by Crocus.
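For reference, a 1D-Var scheme of this kind minimizes the standard variational cost function (generic textbook form; the authors' exact notation may differ):

$$ J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\big(\mathbf{y}-H(\mathbf{x})\big)^{\mathsf{T}}\mathbf{R}^{-1}\big(\mathbf{y}-H(\mathbf{x})\big) $$

where x collects the per-layer snowpack parameters (density, optical diameter), x_b is the Crocus background state, H is the DMRT-based EBM acting as the observation operator, y is the TerraSAR-X backscatter observation, and B and R are the background and observation error covariance matrices.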
Inter-Disciplinary Collaboration in Support of the Post-Standby TREAT Mission
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeHart, Mark; Baker, Benjamin; Ortensi, Javier
Although analysis methods have advanced significantly in the last two decades, high-fidelity multi-physics methods for reactor systems have been under development for only a few years and are presently neither mature nor deployed. Furthermore, very few methods provide the ability to simulate rapid transients in three dimensions. Data for validation of advanced time-dependent multi-physics are sparse; at TREAT, historical data were not collected for the purpose of validating three-dimensional methods, let alone multi-physics simulations. Existing data continue to be collected in an attempt to simulate the behavior of experiments and calibration transients, but they will be insufficient for the complete validation of analysis methods used for TREAT transient simulations. Hence, a 2018 restart will most likely occur without the direct application of advanced modeling and simulation methods. At present, the INL modeling and simulation team plans to work with TREAT operations staff in performing reactor simulations with MAMMOTH, in parallel with the software packages currently being used in preparation for core restart (e.g., MCNP5, RELAP5, ABAQUS). The TREAT team has also requested specific measurements to be performed during startup testing, currently scheduled to run from February to August of 2018. These startup measurements will be crucial in validating the new analysis methods in preparation for ultimate application to TREAT operations and experiment design. This document describes the collaboration between modeling and simulation staff and the restart, operations, instrumentation, and experiment development teams needed to interact effectively and achieve successful validation work during restart testing.
VERA Core Simulator Methodology for PWR Cycle Depletion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochunas, Brendan; Collins, Benjamin S; Jabaay, Daniel
2015-01-01
This paper describes the methodology developed and implemented in MPACT for performing high-fidelity pressurized water reactor (PWR) multi-cycle core physics calculations. MPACT is being developed primarily for application within the Consortium for Advanced Simulation of Light Water Reactors (CASL) as one of the main components of the VERA Core Simulator, the others being COBRA-TF and ORIGEN. The methods summarized in this paper include a methodology for performing resonance self-shielding and computing macroscopic cross sections, 2-D/1-D transport, nuclide depletion, thermal-hydraulic feedback, and other supporting methods. These methods represent a minimal set needed to simulate high-fidelity models of a realistic nuclear reactor. Results demonstrating this are presented from the simulation of a realistic model of the first cycle of Watts Bar Unit 1. The simulation, which approximates the cycle operation, is observed to be within 50 ppm boron (ppmB) reactivity for all simulated points in the cycle and approximately 15 ppmB for a consistent statepoint. The verification and validation of the PWR cycle depletion capability in MPACT is the focus of two companion papers.
Challenge toward the prediction of typhoon behaviour and downpour
NASA Astrophysics Data System (ADS)
Takahashi, K.; Onishi, R.; Baba, Y.; Kida, S.; Matsuda, K.; Goto, K.; Fuchigami, H.
2013-08-01
Mechanisms of interactions among different scale phenomena play important roles in the forecasting of weather and climate. The Multi-Scale Simulator for the Geoenvironment (MSSG), which deals with multi-scale multi-physics phenomena, is a coupled non-hydrostatic atmosphere-ocean model designed to run efficiently on the Earth Simulator. We present simulation results with the world's highest horizontal resolution of 1.9 km for the entire globe, regional heavy rain with 1 km horizontal resolution, and 5 m horizontal/vertical resolution for urban-area simulation. To gain high performance by exploiting the system capabilities, we employ performance evaluation metrics introduced in previous studies that incorporate the effects of the data caching mechanism between CPU and memory. With a useful code optimization guideline based on such metrics, we demonstrate that MSSG can achieve an excellent peak performance ratio of 32.2% on the Earth Simulator, with single-core performance found to be a key to reduced time-to-solution.
NASA Astrophysics Data System (ADS)
Magyar, Rudolph
2013-06-01
We report a computational and validation study of equation of state (EOS) properties of liquid / dense plasma mixtures of xenon and ethane to explore and to illustrate the physics of the molecular scale mixing of light elements with heavy elements. Accurate EOS models are crucial to achieve high-fidelity hydrodynamics simulations of many high-energy-density phenomena such as inertial confinement fusion and strong shock waves. While the EOS is often tabulated for separate species, the equation of state for arbitrary mixtures is generally not available, requiring properties of the mixture to be approximated by combining physical properties of the pure systems. The main goal of this study is to assess how accurate this approximation is under shock conditions. Density functional theory molecular dynamics (DFT-MD) at elevated temperature and pressure is used to assess the thermodynamics of the xenon-ethane mixture. The simulations are unbiased as to elemental species and therefore provide comparable accuracy when describing total energies, pressures, and other physical properties of mixtures as they do for pure systems. In addition, we have performed shock compression experiments using the Sandia Z-accelerator on pure xenon, ethane, and various mixture ratios thereof. The Hugoniot results are compared to the DFT-MD results and the predictions of different rules for combining EOS tables. The DFT-based simulation results compare well with the experimental points, and it is found that a mixing rule based on pressure equilibration performs reliably well for the mixtures considered. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
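A pressure-equilibration mixing rule of the kind found to perform well here partitions the mixture volume so that both species sit at a common pressure and temperature. The sketch below illustrates the idea with placeholder power-law EOS functions; the actual study used tabulated EOS data, and all functions here are illustrative assumptions.

```python
# Sketch of a pressure-equilibration (additive-volume) mixing rule.
# The pure-species EOS functions are hypothetical placeholders.
from scipy.optimize import brentq

def p_xe(v, t):    # pressure of pure xenon at per-particle volume v
    return 2.0 / v**1.4

def p_c2h6(v, t):  # pressure of pure ethane
    return 1.2 / v**1.3

def mixture_pressure(v_mix, t, x_xe):
    """Split v_mix between species so both see the same pressure."""
    x_eth = 1.0 - x_xe
    def residual(v1):
        v2 = (v_mix - x_xe * v1) / x_eth   # volume left for ethane
        return p_xe(v1, t) - p_c2h6(v2, t)
    v1 = brentq(residual, 1e-3, v_mix / x_xe * (1.0 - 1e-6))
    return p_xe(v1, t)

print(mixture_pressure(1.0, 1.0, 0.5))     # 50/50 mixture example
```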
High-resolution 3D simulations of NIF ignition targets performed on Sequoia with HYDRA
NASA Astrophysics Data System (ADS)
Marinak, M. M.; Clark, D. S.; Jones, O. S.; Kerbel, G. D.; Sepke, S.; Patel, M. V.; Koning, J. M.; Schroeder, C. R.
2015-11-01
Developments in the multiphysics ICF code HYDRA enable it to perform large-scale simulations on the Sequoia machine at LLNL. With an aggregate computing power of 20 Petaflops, Sequoia offers an unprecedented capability to resolve the physical processes in NIF ignition targets for a more complete, consistent treatment of the sources of asymmetry. We describe modifications to HYDRA that enable it to scale to over one million processes on Sequoia. These include new options for replicating parts of the mesh over a subset of the processes, to avoid strong scaling limits. We consider results from a 3D full ignition capsule-only simulation performed using over one billion zones run on 262,000 processors which resolves surface perturbations through modes l = 200. We also report progress towards a high-resolution 3D integrated hohlraum simulation performed using 262,000 processors which resolves surface perturbations on the ignition capsule through modes l = 70. These aim for the most complete calculations yet of the interactions and overall impact of the various sources of asymmetry for NIF ignition targets. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.
A model of motor performance during surface penetration: from physics to voluntary control.
Klatzky, Roberta L; Gershon, Pnina; Shivaprabhu, Vikas; Lee, Randy; Wu, Bing; Stetten, George; Swendsen, Robert H
2013-10-01
The act of puncturing a surface with a hand-held tool is a ubiquitous but complex motor behavior that requires precise force control to avoid potentially severe consequences. We present a detailed model of puncture over a time course of approximately 1,000 ms, which is fit to kinematic data from individual punctures, obtained via a simulation with high-fidelity force feedback. The model describes puncture as proceeding from purely physically determined interactions between the surface and tool, through decline of force due to biomechanical viscosity, to cortically mediated voluntary control. When fit to the data, it yields parameters for the inertial mass of the tool/person coupling, time characteristic of force decline, onset of active braking, stopping time and distance, and late oscillatory behavior, all of which the analysis relates to physical variables manipulated in the simulation. While the present data characterize distinct phases of motor performance in a group of healthy young adults, the approach could potentially be extended to quantify the performance of individuals from other populations, e.g., with sensory-motor impairments. Applications to surgical force control devices are also considered.
NASA Astrophysics Data System (ADS)
Huang, Shih-Chieh Douglas
In this dissertation, I investigate the effects of a grounded learning experience on college students' mental models of physics systems. The grounded learning experience consisted of a priming stage and an instruction stage, and within each stage, one of two different types of visuo-haptic representation was applied: visuo-gestural simulation (visual modality and gestures) and visuo-haptic simulation (visual modality, gestures, and somatosensory information). A pilot study involving N = 23 college students examined how using different types of visuo-haptic representation in instruction affected people's mental model construction for physics systems. Participants' abilities to construct mental models were operationalized through their pretest-to-posttest gain scores for a basic physics system and their performance on a transfer task involving an advanced physics system. Findings from this pilot study revealed that, while both simulations significantly improved participants' mental model construction for physics systems, visuo-haptic simulation was significantly better than visuo-gestural simulation. In addition, clinical interviews suggested that participants' mental model construction for physics systems benefited from receiving visuo-haptic simulation in a tutorial prior to the instruction stage. A dissertation study involving N = 96 college students examined how types of visuo-haptic representation in different applications support participants' mental model construction for physics systems. Participants' abilities to construct mental models were again operationalized through their pretest-to-posttest gain scores for a basic physics system and their performance on a transfer task involving an advanced physics system. Participants' physics misconceptions were also measured before and after the grounded learning experience. Findings from this dissertation study not only revealed that visuo-haptic simulation was significantly more effective in promoting mental model construction and remedying participants' physics misconceptions than visuo-gestural simulation, they also revealed that visuo-haptic simulation was more effective during the priming stage than during the instruction stage. Interestingly, the effects of visuo-haptic simulation in priming and visuo-haptic simulation in instruction on participants' pretest-to-posttest gain scores for a basic physics system appeared additive. These results suggested that visuo-haptic simulation is effective in physics learning, especially when it is used during the priming stage.
Simulation Study of Structure and Properties of Plasma Liners for the PLX-α Project
NASA Astrophysics Data System (ADS)
Samulyak, Roman; Shih, Wen; Hsu, Scott; PLX-Alpha Team
2017-10-01
Detailed numerical studies of the propagation and merger of high-Mach-number plasma jets and the formation and implosion of plasma liners have been performed using the FronTier code in support of the Plasma Liner Experiment-ALPHA (PLX-α) project. Physics models include radiation, physical diffusion, plasma-EOS models, and an anisotropic diffusion model that mimics deviations from fully collisional hydrodynamics in outer layers of plasma jets. Detailed structure and non-uniformity of plasma liners due to primary and secondary shock waves have been studied, as well as averaged quantities of ram pressure and Mach number. Synthetic data from simulations have been compared with available experimental data from a multi-chord interferometer and survey and high-resolution spectrometers. Numerical studies of the sensitivity of liner properties to experimental errors in the initial masses of jets and the synchronization of plasma gun valves have also been performed. Supported by the ARPA-E ALPHA program.
Multigrid accelerated simulations for Twisted Mass fermions
NASA Astrophysics Data System (ADS)
Bacchio, Simone; Alexandrou, Constantia; Finkerath, Jacob
2018-03-01
Simulations at physical quark masses are affected by the critical slowing down of the solvers. Multigrid preconditioning has proved to deal effectively with this problem. Multigrid accelerated simulations at the physical value of the pion mass are being performed to generate Nf = 2 and Nf = 2 + 1 + 1 gauge ensembles using twisted mass fermions. The adaptive aggregation-based domain decomposition multigrid solver, referred to as the DD-αAMG method, is employed for these simulations. Our simulation strategy consists of a hybrid approach involving different solvers: the Conjugate Gradient (CG), multi-mass-shift CG, and DD-αAMG solvers. We present an analysis of the multigrid performance during the simulations, discussing the stability of the method. This significantly speeds up the Hybrid Monte Carlo simulation by more than a factor of 4 at the physical pion mass compared to the usage of the CG solver.
Banducci, Sarah E.; Daugherty, Ana M.; Fanning, Jason; Awick, Elizabeth A.; Porter, Gwenndolyn C.; Burzynska, Agnieszka; Shen, Sa; Kramer, Arthur F.; McAuley, Edward
2017-01-01
Objectives. Despite evidence of self-efficacy and physical function's influences on functional limitations in older adults, few studies have examined relationships in the context of complex, real-world tasks. The present study tested the roles of self-efficacy and physical function in predicting older adults' street-crossing performance in single- and dual-task simulations. Methods. Lower-extremity physical function, gait self-efficacy, and street-crossing success ratio were assessed in 195 older adults (60–79 years old) at baseline of a randomized exercise trial. During the street-crossing task, participants walked on a self-propelled treadmill in a virtual reality environment. Participants crossed the street without distraction (single-task trials) and conversed on a cell phone (dual-task trials). Structural equation modeling was used to test hypothesized associations independent of demographic and clinical covariates. Results. Street-crossing performance was better on single-task trials when compared with dual-task trials. Direct effects of self-efficacy and physical function on success ratio were observed in dual-task trials only. The total effect of self-efficacy was significant in both conditions. The indirect path through physical function was evident in the dual-task condition only. Conclusion. Physical function can predict older adults' performance on high fidelity simulations of complex, real-world tasks. Perceptions of function (i.e., self-efficacy) may play an even greater role. The trial is registered with United States National Institutes of Health ClinicalTrials.gov (ID: NCT01472744; Fit & Active Seniors Trial). PMID:28255557
NASA Astrophysics Data System (ADS)
Gabriel, A. A.; Madden, E. H.; Ulrich, T.; Wollherr, S.
2016-12-01
Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require: non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and strength initial conditions, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman Earthquake and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions given distinct tectonic settings or distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, concurrently with the regional background stress state, provides a first-order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.
NASA Astrophysics Data System (ADS)
Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie
2017-04-01
Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require: non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and fault strength initial conditions, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman Earthquake, the 1994 Northridge earthquake and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions given distinct tectonic settings or distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, concurrently with the regional background stress state, provides a first-order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.
The Predictive Value of Ultrasound Learning Curves Across Simulated and Clinical Settings.
Madsen, Mette E; Nørgaard, Lone N; Tabor, Ann; Konge, Lars; Ringsted, Charlotte; Tolsgaard, Martin G
2017-01-01
The aim of the study was to explore whether learning curves on a virtual-reality (VR) sonographic simulator can be used to predict subsequent learning curves on a physical mannequin and learning curves during clinical training. Twenty midwives completed a simulation-based training program in transvaginal sonography. The training was conducted on a VR simulator as well as on a physical mannequin. A subgroup of 6 participants underwent subsequent clinical training. During each of the 3 steps, the participants' performance was assessed using instruments with established validity evidence, and they advanced to the next level only after attaining predefined levels of performance. The number of repetitions and time needed to achieve predefined performance levels were recorded along with the performance scores in each setting. Finally, the outcomes were correlated across settings. A good correlation was found between the time needed to achieve predefined performance levels on the VR simulator and on the physical mannequin (Pearson correlation coefficient .78; P < .001). Performance scores on the VR simulator correlated well with the clinical performance scores (Pearson correlation coefficient .81; P = .049). No significant correlations were found between the numbers of attempts needed to reach proficiency across the 3 different settings. A post hoc analysis found that the 50% fastest trainees at reaching proficiency during simulation-based training received higher clinical performance scores compared with trainees among the 50% slowest (P = .025). Performances during simulation-based sonography training may predict performance in related tasks and subsequent clinical learning curves. © 2016 by the American Institute of Ultrasound in Medicine.
PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Frederick, J. M.
2016-12-01
In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification ensures whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
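The core pattern of such a verification test is simple: solve a benchmark numerically, evaluate the closed-form solution on the same grid, and fail the test if the error norm exceeds a tolerance. The sketch below illustrates this with 1-D transient diffusion against its erfc solution; it is illustrative only, not PFLOTRAN's actual test harness.

```python
# Verification-style check: numerical 1-D diffusion vs. the closed-form
# erfc solution for a semi-infinite domain with c(0,t)=1, c(x,0)=0.
import numpy as np
from scipy.special import erfc

D, T = 1e-9, 3600.0                  # diffusivity (m^2/s), end time (s)
x = np.linspace(0.0, 0.01, 201)      # 1-D grid (m)

def analytical(x, t):
    return erfc(x / (2.0 * np.sqrt(D * t)))

def numerical(x, t, nsteps=20000):
    dx, dt = x[1] - x[0], t / nsteps
    c = np.zeros_like(x)
    c[0] = 1.0                                   # fixed-concentration boundary
    for _ in range(nsteps):                      # explicit FTCS update
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2*c[1:-1] + c[:-2])
    return c

err = np.max(np.abs(numerical(x, T) - analytical(x, T)))
assert err < 1e-2, f"verification failed: max error {err:.3e}"
print(f"verification passed: max error {err:.3e}")
```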
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Po-Lun; Rasch, Philip J.; Fast, Jerome D.
A suite of physical parameterizations (deep and shallow convection, turbulent boundary layer, aerosols, cloud microphysics, and cloud fraction) from the global climate model Community Atmosphere Model version 5.1 (CAM5) has been implemented in the regional model Weather Research and Forecasting with chemistry (WRF-Chem). A downscaling modeling framework with consistent physics has also been established in which both global and regional simulations use the same emissions and surface fluxes. The WRF-Chem model with the CAM5 physics suite is run at multiple horizontal resolutions over a domain encompassing the northern Pacific Ocean, northeast Asia, and northwest North America for April 2008, when the ARCTAS, ARCPAC, and ISDAC field campaigns took place. These simulations are evaluated against field campaign measurements, satellite retrievals, and ground-based observations, and are compared with simulations that use a set of common WRF-Chem parameterizations. This manuscript describes the implementation of the CAM5 physics suite in WRF-Chem, provides an overview of the modeling framework and an initial evaluation of the simulated meteorology, clouds, and aerosols, and quantifies the resolution dependence of the cloud and aerosol parameterizations. We demonstrate that some of the CAM5 biases, such as high estimates of cloud susceptibility to aerosols and the underestimation of aerosol concentrations in the Arctic, can be reduced simply by increasing horizontal resolution. We also show that the CAM5 physics suite performs similarly to a set of parameterizations commonly used in WRF-Chem, but produces higher ice and liquid water condensate amounts and near-surface black carbon concentration. Further evaluations that use other mesoscale model parameterizations and perform other case studies are needed to infer whether one parameterization consistently produces results more consistent with observations.
An ARM data-oriented diagnostics package to evaluate the climate model simulation
NASA Astrophysics Data System (ADS)
Zhang, C.; Xie, S.
2016-12-01
A set of diagnostics that utilize long-term high frequency measurements from the DOE Atmospheric Radiation Measurement (ARM) program is developed for evaluating the regional simulation of clouds, radiation and precipitation in climate models. The diagnostics results are computed and visualized automatically in a Python-based package that aims to serve as an easy entry point for evaluating climate simulations using the ARM data, as well as the CMIP5 multi-model simulations. Basic performance metrics are computed to measure the accuracy of mean state and variability of simulated regional climate. The evaluated physical quantities include vertical profiles of clouds, temperature, relative humidity, cloud liquid water path, total column water vapor, precipitation, sensible and latent heat fluxes, radiative fluxes, aerosol and cloud microphysical properties. Process-oriented diagnostics focusing on individual cloud and precipitation-related phenomena are developed for the evaluation and development of specific model physical parameterizations. Application of the ARM diagnostics package will be presented in the AGU session. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344; IM release number: LLNL-ABS-698645.
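The "basic performance metrics" in a package like this typically reduce to bias, RMSE, and correlation between model and observed time series. A minimal sketch (illustrative values; not the package's actual API):

```python
# Sketch of mean-state and variability metrics for a model/ARM comparison.
import numpy as np

def metrics(model, obs):
    diff = model - obs
    return {
        "mean_bias": np.mean(diff),                   # mean-state accuracy
        "rmse": np.sqrt(np.mean(diff**2)),
        "corr": np.corrcoef(model, obs)[0, 1],        # variability agreement
    }

obs = np.array([0.31, 0.28, 0.35, 0.40])   # e.g., monthly mean cloud fraction
mod = np.array([0.27, 0.30, 0.33, 0.45])
print(metrics(mod, obs))
```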
Toward a first-principles integrated simulation of tokamak edge plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C S; Klasky, Scott A; Cummings, Julian
2008-01-01
Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD, for study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.
NASA Astrophysics Data System (ADS)
Incerti, S.; Barberet, Ph.; Dévès, G.; Michelet, C.; Francis, Z.; Ivantchenko, V.; Mantero, A.; El Bitar, Z.; Bernal, M. A.; Tran, H. N.; Karamitros, M.; Seznec, H.
2015-09-01
The general purpose Geant4 Monte Carlo simulation toolkit is able to simulate radiative and non-radiative atomic de-excitation processes such as fluorescence and Auger electron emission, occurring after interaction of incident ionising radiation with target atomic electrons. In this paper, we evaluate the Geant4 modelling capability for the simulation of fluorescence spectra induced by 1.5 MeV proton irradiation of thin high-Z foils (Fe, GdF3, Pt, Au) with potential interest for nanotechnologies and life sciences. Simulation results are compared to measurements performed at the Centre d'Etudes Nucléaires de Bordeaux-Gradignan AIFIRA nanobeam line irradiation facility in France. Simulation and experimental conditions are described and the influence of Geant4 electromagnetic physics models is discussed.
Distributed communication and psychosocial performance in simulated space dwelling groups
NASA Astrophysics Data System (ADS)
Hienz, R. D.; Brady, J. V.; Hursh, S. R.; Ragusa, L. C.; Rouse, C. O.; Gasior, E. D.
2005-05-01
The present report describes the development and application of a distributed interactive multi-person simulation in a computer-generated planetary environment as an experimental test bed for modeling the human performance effects of variations in the types of communication modes available, and in the types of stress and incentive conditions underlying the completion of mission goals. The results demonstrated a high degree of interchangeability between communication modes (audio, text) when one mode was not available. Additionally, the addition of time pressure stress to complete tasks resulted in a reduction in performance effectiveness, and these performance reductions were ameliorated via the introduction of positive incentives contingent upon improved performances. The results obtained confirmed that cooperative and productive psychosocial interactions can be maintained between individually isolated and dispersed members of simulated spaceflight crews communicating and problem-solving effectively over extended time intervals without the benefit of one another's physical presence.
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Smithe, David
2016-10-01
Inefficiencies and detrimental physical effects may arise in conjunction with ICRF heating of tokamak plasmas. Large wall potential drops, associated with sheath formation near plasma-facing antenna hardware, give rise to high-Z impurity sputtering from plasma-facing components and subsequent radiative cooling. Linear and nonlinear wave excitations in the plasma edge/SOL also dissipate injected RF power and reduce overall antenna efficiency. Recent advances in finite-difference time-domain (FDTD) modeling techniques allow the physics of localized sheath potentials, and associated sputtering events, to be modeled concurrently with the physics of antenna near- and far-field behavior and RF power flow. The new methods enable time-domain modeling of plasma-surface interactions and ICRF physics in realistic experimental configurations at unprecedented spatial resolution. We present results/animations from high-performance (10k-100k core) FDTD/PIC simulations spanning half of Alcator C-Mod at mm-scale resolution, exploring impurity production due to localized sputtering (in response to self-consistent sheath potentials at antenna surfaces) and the physics of parasitic slow wave excitation near the antenna hardware and SOL. Supported by US DoE (Award DE-SC0009501) and the ALCC program.
NASA Astrophysics Data System (ADS)
Shaheed, M. Reaz
1995-01-01
Higher speed at lower cost and at low power consumption is a driving force for today's semiconductor technology. Despite a substantial effort toward achieving this goal via alternative technologies such as III-V compounds, silicon technology still dominates mainstream electronics. Progress in silicon technology will continue for some time with continual scaling of device geometry. However, there are foreseeable limits on achievable device performance, reliability and scaling for room temperature technologies. Thus, reduced temperature operation is commonly viewed as a means for continuing the progress towards higher performance. Although silicon CMOS will be the first candidate for low temperature applications, bipolar devices will be used in a hybrid fashion, as line drivers or in limited critical path elements. Silicon-germanium-base bipolar transistors look especially attractive for low-temperature bipolar applications. At low temperatures, various new physical phenomena become important in determining device behavior. Carrier freeze-out effects, which are negligible at room temperature, become of crucial importance for analyzing low temperature device characteristics. The conventional Pearson-Bardeen model of activation energy, used for calculation of carrier freeze-out, is based on an incomplete picture of the physics that takes place and hence leads to inaccurate results at low temperatures. Plasma-induced bandgap narrowing becomes more pronounced in device characteristics at low temperatures. Even with modern numerical simulators, this effect is not well modeled or simulated. In this dissertation, improved models for such physical phenomena are presented. For accurate simulation of carrier freeze-out, the Pearson-Bardeen model has been extended to include the temperature dependence of the activation energy. The extraction of the model is based on the rigorous, first-principle theoretical calculations available in the literature. The new model is shown to provide consistently accurate values for base sheet resistance for both Si- and SiGe-base transistors over a wide range of temperatures. A model for plasma-induced bandgap narrowing suitable for implementation in a numerical simulator has been developed. The appropriate method of incorporating this model in a drift-diffusion solver is described. The importance of including this model for low temperature simulation is demonstrated. With these models in place, the enhanced simulator has been used for evaluating and designing Si- and SiGe-base bipolar transistors. Silicon-germanium heterojunction bipolar transistors offer significant performance and cost advantages over conventional technologies in the production of integrated circuits for communications, computer and transportation applications. Their high frequency performance at low cost will find widespread use in the currently exploding wireless communication market. However, high performance SiGe-base transistors are prone to have a low common-emitter breakdown voltage. In this dissertation, a modification in the collector design is proposed for improving the breakdown voltage without sacrificing the high frequency performance. A comprehensive simulation study of p-n-p SiGe-base transistors has been performed. Different figures of merit such as drive current, current gain, cut-off frequency and Early voltage were compared between a graded germanium profile and an abrupt germanium profile.
The differences in performance level between the two profiles diminish as the base width is scaled down.
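For context, the baseline freeze-out physics the dissertation builds on is the textbook incomplete-ionization expression (the conventional Pearson-Bardeen picture, not the extended model developed in the dissertation):

$$ N_D^{+} = \frac{N_D}{1 + g_D \exp\!\left(\dfrac{E_F - E_D}{k_B T}\right)} $$

where N_D is the donor concentration, g_D the donor degeneracy factor, and E_C - E_D the donor activation energy. The dissertation's extension makes this activation energy temperature-dependent rather than constant, which is what restores accuracy at cryogenic temperatures.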
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
Addressing spatial scales and new mechanisms in climate impact ecosystem modeling
NASA Astrophysics Data System (ADS)
Poulter, B.; Joetzjer, E.; Renwick, K.; Ogunkoya, G.; Emmett, K.
2015-12-01
Climate change impacts on vegetation distributions are typically addressed using either an empirical approach, such as a species distribution model (SDM), or with process-based methods, for example, dynamic global vegetation models (DGVMs). Each approach has its own benefits and disadvantages. For example, an SDM is constrained by data and few parameters, but does not include adaptation or acclimation processes or other ecosystem feedbacks that may act to mitigate or enhance climate effects. Alternatively, a DGVM includes many mechanisms relating plant growth and disturbance to climate, but simulations are costly to perform at high spatial resolution and there remains large uncertainty in a variety of fundamental physical processes. To address these issues, we present two DGVM-based case studies in which i) high-resolution (1 km) simulations are performed for vegetation in the Greater Yellowstone Ecosystem using a biogeochemical forest gap model, LPJ-GUESS, and ii) new mechanisms for simulating tropical tree mortality are introduced. High-resolution DGVM simulations require not only computing and reorganizing code but also a consideration of scaling issues for vegetation dynamics and stochasticity, as well as for disturbance and migration. New mechanisms for simulating forest mortality must consider hydraulic limitations and carbon reserves and their interactions in source-sink dynamics and in controlling water potentials. Improving DGVM approaches by addressing spatial scale challenges and integrating new approaches for estimating forest mortality will provide new insights more relevant for land management and possibly reduce uncertainty by representing physical processes in ways more directly comparable to experimental and observational evidence.
Tuzer, Hilal; Dinc, Leyla; Elcin, Melih
2016-10-01
Existing research literature indicates that the use of various simulation techniques in the training of physical examination skills develops students' cognitive and psychomotor abilities in a realistic learning environment while improving patient safety. The study aimed to compare the effects of the use of a high-fidelity simulator and standardized patients on the knowledge and skills of students conducting thorax-lungs and cardiac examinations, and to explore the students' views and learning experiences. A mixed-method explanatory sequential design. The study was conducted in the Simulation Laboratory of a Nursing School, the Training Center at the Faculty of Medicine, and in the inpatient clinics of the Education and Research Hospital. Fifty-two fourth-year nursing students. Students were randomly assigned to Group 1 and Group 2. The students in Group 1 attended the thorax-lungs and cardiac examination training using a high-fidelity simulator, while the students in Group 2 trained using standardized patients. After the training sessions, all students practiced their skills on real patients in the clinical setting under the supervision of the investigator. Knowledge and performance scores of all students increased following the simulation activities; however, the students that worked with standardized patients achieved significantly higher knowledge scores than those that worked with the high-fidelity simulator, while there was no significant difference in performance scores between the groups. The mean performance scores of students on real patients were significantly higher compared to the post-simulation assessment scores (p<0.001). Results of this study revealed that the use of standardized patients was more effective than the use of a high-fidelity simulator in increasing the knowledge scores of students on thorax-lungs and cardiac examinations; however, practice on real patients increased performance scores of all students without any significant difference between the two groups. Copyright © 2016 Elsevier Ltd. All rights reserved.
Design and experimental measurement of a high-performance metamaterial filter
NASA Astrophysics Data System (ADS)
Xu, Ya-wen; Xu, Jing-cheng
2018-03-01
The metamaterial filter is a kind of promising optoelectronic device. In this paper, a metal/dielectric/metal (M/D/M) structure metamaterial filter is simulated and measured. Simulated results indicate that the perfect impedance matching condition between the metamaterial filter and free space leads to the transmission band. Measured results show that the proposed metamaterial filter achieves high-performance transmission for both TM and TE polarizations. Moreover, a high transmission rate can also be obtained when the incident angle reaches 45°. Further measured results show that the transmission band can be expanded by optimizing structural parameters. The central frequency of the transmission band can also be adjusted by optimizing structural parameters. The physical mechanism behind the central frequency shift is explained by establishing an equivalent resonant circuit model.
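The equivalent resonant circuit mentioned above reduces, in its simplest form, to the familiar LC resonance condition (a generic relation; the paper's specific circuit topology may differ):

$$ f_0 = \frac{1}{2\pi\sqrt{L_{\mathrm{eq}}\,C_{\mathrm{eq}}}} $$

Structural parameters such as the metal pattern dimensions and dielectric thickness map onto the effective inductance L_eq and capacitance C_eq of the M/D/M stack, which is why tuning them shifts the central frequency of the transmission band.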
NASA Technical Reports Server (NTRS)
Schmahl, Edward J.; Kundu, Mukul R.
1998-01-01
We have continued our previous efforts in studies of Fourier imaging methods applied to hard X-ray flares. We have performed physical and theoretical analysis of rotating collimator grids submitted to GSFC (Goddard Space Flight Center) for the High Energy Solar Spectroscopic Imager (HESSI). We have produced simulation algorithms which are currently being used to test imaging software and hardware for HESSI. We have developed Maximum-Entropy, Maximum-Likelihood, and "CLEAN" methods for reconstructing HESSI images from count-rate profiles. This work is expected to continue through the launch of HESSI in July, 2000. Section 1 shows a poster presentation "Image Reconstruction from HESSI Photon Lists" at the Solar Physics Division Meeting, June 1998; Section 2 shows the text and viewgraphs prepared for "Imaging Simulations" at HESSI's Preliminary Design Review on July 30, 1998.
pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2014-01-01
This work presents pWeb, a new language and compiler for parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled creating unprecedented applications on the web. The low performance of the web browser relative to native applications, however, remains the bottleneck for computationally intensive tasks, including visualization of complex scenes, real-time physical simulations, and image processing. The new proposed language is built upon web workers for multithreaded programming in HTML5. The language provides fundamental functionalities of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.
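The fork/join model pWeb layers on top of web workers can be illustrated by analogy with Python's concurrent.futures; this is not pWeb syntax (the abstract reproduces none), and all names here are illustrative.

```python
# Fork/join analogy: distribute independent compute tasks ("fork"),
# then combine their partial results ("join").
from concurrent.futures import ProcessPoolExecutor

def shade_tile(tile_id):
    # Stand-in for a compute-intensive kernel, e.g. rendering one tile
    # of a surgical-simulation scene.
    return sum(i * i for i in range(10000 * (tile_id + 1)))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:       # "fork": one task per tile
        partials = list(pool.map(shade_tile, range(8)))
    total = sum(partials)                     # "join": combine the results
    print(total)
```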
TGeoCad: an Interface between ROOT and CAD Systems
NASA Astrophysics Data System (ADS)
Luzzi, C.; Carminati, F.
2014-06-01
In the simulation of High Energy Physics experiments a very high precision in the description of the detector geometry is essential to achieve the required performance. The physicists in charge of Monte Carlo simulation of the detector need to collaborate efficiently with the engineers working on the mechanical design of the detector. Often, this collaboration is made hard by the usage of different and incompatible software. ROOT is an object-oriented C++ framework used by physicists for storing, analyzing and simulating data produced by high-energy physics experiments, while CAD (Computer-Aided Design) software is used for mechanical design in the engineering field. The necessity to improve the level of communication between physicists and engineers led to the implementation of an interface between the ROOT geometrical modeler used by the virtual Monte Carlo simulation software and the CAD systems. In this paper we describe the design and implementation of the TGeoCad interface that has been developed to enable the use of ROOT geometrical models in several CAD systems. To achieve this goal, the ROOT geometry description is converted into the STEP file format (ISO 10303), which can be imported and used by many CAD systems.
Asymmetric Uncertainty Expression for High Gradient Aerodynamics
NASA Technical Reports Server (NTRS)
Pinier, Jeremy T
2012-01-01
When the physics of the flow around an aircraft changes very abruptly either in time or space (e.g., flow separation/reattachment, boundary layer transition, unsteadiness, shocks, etc.), the measurements that are performed in a simulated environment like a wind tunnel test or a computational simulation will most likely incorrectly predict the exact location of where (or when) the change in physics happens. There are many reasons for this, including the error introduced by simulating a real system at a smaller scale and at non-ideal conditions, or the error due to turbulence models in a computational simulation. The uncertainty analysis principles that have been developed and are being implemented today do not fully account for uncertainty in the knowledge of the location of abrupt physics changes or sharp gradients, leading to a potentially underestimated uncertainty in those areas. To address this problem, a new asymmetric aerodynamic uncertainty expression containing an extra term to account for a phase uncertainty, the magnitude of which is emphasized in high-gradient aerodynamic regions, is proposed in this paper. Additionally, based on previous work, a method for dispersing aerodynamic data within asymmetric uncertainty bounds in a more realistic way has been developed for use within Monte Carlo-type analyses.
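One schematic reading of such an expression, consistent with the abstract but not taken from the paper itself, adds a gradient-weighted phase term to the baseline uncertainty of an aerodynamic coefficient C:

$$ U(x) \approx \sqrt{\,U_{\mathrm{basic}}^2 + \left(\frac{\partial C}{\partial x}\,\delta x_{\mathrm{phase}}\right)^2\,} $$

where δx_phase is the uncertainty in the location (or time) of the abrupt physics change. Because ∂C/∂x is large only in high-gradient regions, the extra term is emphasized exactly there, and the signed slope naturally produces different upper and lower bounds, hence the asymmetry.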
NASA Astrophysics Data System (ADS)
Supurwoko; Cari; Sarwanto; Sukarmin; Fauzi, Ahmad; Faradilla, Lisa; Summa Dewi, Tiarasita
2017-11-01
The process of learning and teaching in physics often involves abstract concepts, which are difficult for students to understand and for teachers to teach. One such topic is the Compton effect. The purpose of this research was to evaluate a computer simulation model of the Compton effect used to improve the higher-order thinking skills of pre-service physics teachers. This research is a case study. The subjects were physics education students who had attended Modern Physics lectures. Data were obtained through an essay test measuring students' higher-order thinking skills and through questionnaires measuring students' responses. The results indicate that the computer simulation model can be used to improve students' higher-order thinking skills and their responses. Based on these results, it is suggested that teachers use such simulation media in their instruction.
The numerical simulation of a high-speed axial flow compressor
NASA Technical Reports Server (NTRS)
Mulac, Richard A.; Adamczyk, John J.
1991-01-01
The advancement of high-speed axial-flow multistage compressors is impeded by a lack of detailed flow-field information. Recent development in compressor flow modeling and numerical simulation have the potential to provide needed information in a timely manner. The development of a computer program is described to solve the viscous form of the average-passage equation system for multistage turbomachinery. Programming issues such as in-core versus out-of-core data storage and CPU utilization (parallelization, vectorization, and chaining) are addressed. Code performance is evaluated through the simulation of the first four stages of a five-stage, high-speed, axial-flow compressor. The second part addresses the flow physics which can be obtained from the numerical simulation. In particular, an examination of the endwall flow structure is made, and its impact on blockage distribution assessed.
Evolving a Neural Olfactorimotor System in Virtual and Real Olfactory Environments
Rhodes, Paul A.; Anderson, Todd O.
2012-01-01
To provide a platform to enable the study of simulated olfactory circuitry in context, we have integrated a simulated neural olfactorimotor system with a virtual world which simulates both computational fluid dynamics as well as a robotic agent capable of exploring the simulated plumes. A number of the elements which we developed for this purpose have not, to our knowledge, been previously assembled into an integrated system, including: control of a simulated agent by a neural olfactorimotor system; continuous interaction between the simulated robot and the virtual plume; the inclusion of multiple distinct odorant plumes and background odor; the systematic use of artificial evolution driven by olfactorimotor performance (e.g., time to locate a plume source) to specify parameter values; the incorporation of the realities of an imperfect physical robot using a hybrid model where a physical robot encounters a simulated plume. We close by describing ongoing work toward engineering a high dimensional, reversible, low power electronic olfactory sensor which will allow olfactorimotor neural circuitry evolved in the virtual world to control an autonomous olfactory robot in the physical world. The platform described here is intended to better test theories of olfactory circuit function, as well as provide robust odor source localization in realistic environments. PMID:23112772
Physical environment virtualization for human activities recognition
NASA Astrophysics Data System (ADS)
Poshtkar, Azin; Elangovan, Vinayak; Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen
2015-05-01
Human activity recognition research relies heavily on extensive datasets to verify and validate performance of activity recognition algorithms. However, obtaining real datasets is expensive and highly time consuming. A physics-based virtual simulation can accelerate the development of context-based human activity recognition algorithms and techniques by generating relevant training and testing videos simulating diverse operational scenarios. In this paper, we discuss in detail the requisite capabilities of a virtual environment to aid as a test bed for evaluating and enhancing activity recognition algorithms. To demonstrate the numerous advantages of virtual environment development, a newly developed virtual environment simulation modeling (VESM) environment is presented here to generate calibrated multisource imagery datasets suitable for development and testing of recognition algorithms for context-based human activities. The VESM environment serves as a versatile test bed to generate a vast amount of realistic data for training and testing of sensor processing algorithms. To demonstrate the effectiveness of the VESM environment, we present various simulated scenarios and processed results to infer proper semantic annotations from the high-fidelity imagery data for human-vehicle activity recognition under different operational contexts.
NASA Astrophysics Data System (ADS)
Liu, Hongdong; Zhang, Lian; Chen, Zhi; Liu, Xinguo; Dai, Zhongying; Li, Qiang; Xu, Xie George
2017-09-01
In medical physics it is desirable to have a Monte Carlo code that is less complex, reliable, yet flexible for dose verification, optimization, and component design. TOPAS is a newly developed Monte Carlo simulation tool which combines the extensive radiation physics libraries available in the Geant4 code with easy-to-use geometry and support for visualization. Although TOPAS has been widely tested and verified in simulations of proton therapy, there has been no reported application for carbon ion therapy. To evaluate the feasibility and accuracy of TOPAS simulations for carbon ion therapy, a licensed TOPAS code (version 3_0_p1) was used to carry out a dosimetric study of therapeutic carbon ions. Depth dose profiles based on different physics models have been obtained and compared with measurements. It is found that the G4QMD model is at least as accurate as the TOPAS default BIC physics model for carbon ions, but when the energy is increased to relatively high levels such as 400 MeV/u, the G4QMD model shows preferable performance. Also, simulations of special components used in the treatment head at the Institute of Modern Physics facility were conducted to investigate the spread-out Bragg peak (SOBP) dose distribution in water. The physical dose in water of the SOBP was found to be consistent with the aim of the 6 cm ridge filter.
Billings, Jay Jay; Deyton, Jordan H.; Forest Hull, S.; ...
2015-07-17
Building new fission reactors in the United States presents many technical and regulatory challenges. Chief among the technical challenges is the need to share and present results from new high-fidelity, high-performance simulations in an easily consumable way. Because modern multi-scale, multi-physics simulations can generate petabytes of data, this will require the development of new techniques and methods to reduce the data to familiar quantities of interest with a more reasonable resolution and size. Furthermore, some of the results from these simulations may be new quantities for which visualization and analysis techniques are not immediately available in the community and need to be developed. Our paper describes a new system for managing high-performance simulation results in a domain-specific way that naturally exposes quantities of interest for light water and sodium-cooled fast reactors. It enables easy qualitative and quantitative comparisons between simulation results with a graphical user interface and cross-platform, multi-language input-output libraries for use by developers to work with the data. One example comparing results from two different simulation suites for a single assembly in a light-water reactor is presented along with a detailed discussion of the system's requirements and design.
NASA Astrophysics Data System (ADS)
Lin, Dongguo; Kang, Tae Gon; Han, Jun Sae; Park, Seong Jin; Chung, Seong Taek; Kwon, Young-Sam
2018-02-01
Both experimental and numerical analyses of powder injection molding (PIM) of Ti-6Al-4V alloy were performed to prepare a defect-free, high-performance Ti-6Al-4V part with low carbon/oxygen contents. The prepared feedstock was characterized with specific experiments to identify its viscosity, pressure-volume-temperature, and thermal properties in order to simulate its injection molding process. A finite-element-based numerical scheme was employed to simulate the thermomechanical process during injection molding. In addition, the injection molding, debinding, sintering, and hot isostatic pressing processes were performed in sequence to prepare the PIMed parts. With optimized processing conditions, the PIMed Ti-6Al-4V part exhibits excellent physical and mechanical properties, showing a final density of 99.8%, tensile strength of 973 MPa, and elongation of 16%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wardle, Kent E.; Frey, Kurt; Pereira, Candido
2014-02-02
This task is aimed at predictive modeling of solvent extraction processes in typical extraction equipment through multiple simulation methods at various scales of resolution. We have conducted detailed continuum fluid dynamics simulations at the process unit level as well as simulations of the molecular-level physical interactions which govern extraction chemistry. By combining information gained from simulations at each of these two tiers with advanced techniques such as the lattice Boltzmann method (LBM), which can bridge these two scales, we can develop the tools to work towards predictive simulation of solvent extraction at the equipment scale (Figure 1). The goal of such a tool, along with enabling optimized design and operation of extraction units, would be to allow prediction of stage extraction efficiency under specified conditions. Simulation efforts on each of the two scales are described below. As the initial application of FELBM in the work performed during FY10 was annular mixing, it is discussed in the context of the continuum scale. In the future, however, it is anticipated that the real value of FELBM will be as a tool for sub-grid model development through highly refined, DNS-like multiphase simulations, facilitating exploration and development of droplet models, including breakup and coalescence, that will be needed for the large-scale simulations in which droplet-level physics cannot be resolved. In this area it has a significant advantage over traditional CFD methods, as its high computational efficiency allows exploration of significantly greater physical detail, especially as computational resources increase in the future.
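The continuum-scale lattice Boltzmann approach mentioned above can be illustrated with a minimal single-phase D2Q9 BGK sketch (a periodic decaying shear wave). This is a generic textbook scheme, not the FELBM code itself, and the grid size and relaxation time are arbitrary:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                       # relaxation time; viscosity nu = (tau - 0.5)/3
nx, ny, nsteps = 64, 64, 500

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

# initial condition: uniform density, small sinusoidal shear wave u_x(y)
y = np.arange(ny)
rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * y / ny)[None, :] * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(nsteps):
    rho = f.sum(axis=0)                               # macroscopic moments
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau        # BGK collision
    for i in range(9):                                # streaming (periodic)
        f[i] = np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))

print("max |ux| after viscous decay:", np.abs(ux).max())
```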
GEANT4 and Secondary Particle Production
NASA Technical Reports Server (NTRS)
Patterson, Jeff
2004-01-01
GEANT4 is a Monte Carlo tool set developed by the high energy physics community (CERN, SLAC, etc.) to perform simulations of complex particle detectors. GEANT4 is the ideal tool to study radiation transport and should be applied to space environments and the complex geometries of modern-day spacecraft.
NASA Astrophysics Data System (ADS)
Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong
2016-11-01
To satisfy real-time and generality requirements, this paper proposes a laser target simulator for a semi-physical simulation system based on an RTX + LabWindows/CVI platform. Compared with the upper-lower computer simulation platform architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on the Windows platform, using the Windows RTX real-time extension subsystem together with a reflective memory network to guarantee real-time performance for tasks such as computing the simulation model, transmitting simulation data, and maintaining real-time communication. These real-time tasks run in the RTSS process. A graphical interface compiled with LabWindows/CVI handles the non-real-time tasks in the simulation, such as man-machine interaction and the display and storage of simulation data, which run in a Win32 process. Data interaction between the real-time RTSS process and the non-real-time Win32 process is accomplished through the design of RTX shared memory and a task-scheduling algorithm. Experimental results show that the system has strong real-time performance, high stability, and high simulation accuracy, along with good human-computer interaction.
NASA Astrophysics Data System (ADS)
Chini, Jacquelyn J.; Madsen, Adrian; Gire, Elizabeth; Rebello, N. Sanjay; Puntambekar, Sadhana
2012-06-01
Recent research results have failed to support the conventionally held belief that students learn physics best from hands-on experiences with physical equipment. Rather, studies have found that students who perform similar experiments with computer simulations perform as well or better on measures of conceptual understanding than their peers who used physical equipment. In this study, we explored how university-level nonscience majors’ understanding of the physics concepts related to pulleys was supported by experimentation with real pulleys and a computer simulation of pulleys. We report that when students use one type of manipulative (physical or virtual), the comparison is influenced both by the concept studied and the timing of the post-test. Students performed similarly on questions related to force and mechanical advantage regardless of the type of equipment used. On the other hand, students who used the computer simulation performed better on questions related to work immediately after completing the activities; however, the two groups performed similarly on the work questions on a test given one week later. Additionally, both sequences of experimentation (physical-virtual and virtual-physical) equally supported students’ understanding of all of the concepts. These results suggest that both the concept learned and the stability of learning gains should continue to be explored to improve educators’ ability to select the best learning experience for a given topic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ely, Geoffrey P.
2013-10-31
This project uses dynamic rupture simulations to investigate high-frequency seismic energy generation. The relevant phenomena (frictional breakdown, shear heating, effective normal-stress fluctuations, material damage, etc.) controlling rupture are strongly interacting and span many orders of magnitude in spatial scale, requiring high-resolution simulations that couple disparate physical processes (e.g., elastodynamics, thermal weakening, pore-fluid transport, and heat conduction). Compounding the computational challenge, we know that natural faults are not planar, but instead have roughness that can be approximated by power laws, potentially leading to large, multiscale fluctuations in normal stress. The capacity to perform 3D rupture simulations that couple these processes will provide guidance for constructing appropriate source models for high-frequency ground motion simulations. The improved rupture models from our multi-scale dynamic rupture simulations will be used to conduct physics-based (3D waveform-modeling-based) probabilistic seismic hazard analysis (PSHA) for California. These calculations will provide numerous important seismic hazard results, including a state-wide extended earthquake rupture forecast with rupture variations for all significant events, a synthetic seismogram catalog for thousands of scenario events, and more than 5000 physics-based seismic hazard curves for California.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dill, Eric D.; Folmer, Jacob C.W.; Martin, James D.
A series of simulations was performed to enable interpretation of the material and physical significance of the parameters defined in the Kolmogorov, Johnson and Mehl, and Avrami (KJMA) rate expression commonly used to describe phase-boundary-controlled reactions of condensed matter. The parameters k, n, and t_0 are shown to be highly correlated, which, if unaccounted for, seriously challenges mechanistic interpretation. It is demonstrated that rate measurements exhibit an intrinsic uncertainty without precise knowledge of the location and orientation of nucleation with respect to the free volume into which it grows. More significantly, it is demonstrated that the KJMA rate constant k is highly dependent on sample size. However, under the simulated conditions of slow nucleation relative to crystal growth, sample volume and sample anisotropy correction afford a means to eliminate the experimental-condition dependence of the KJMA rate constant k, producing the material-specific parameter, the velocity of the phase boundary, v_pb.
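For reference, the KJMA rate expression referred to above is commonly written as follows, with the parameters k, n, and t_0 named in the abstract (this is the standard textbook form; conventions differ on whether the rate constant is raised to the exponent n):

```latex
\alpha(t) = 1 - \exp\!\left\{-\left[k\,(t - t_0)\right]^{n}\right\}
```

where α(t) is the transformed fraction, k the rate constant, n the Avrami exponent, and t_0 the induction time; some authors instead write the argument as -k(t - t_0)^n.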
A generic framework for individual-based modelling and physical-biological interaction
2018-01-01
The increased availability of high-resolution ocean data globally has enabled more detailed analyses of physical-biological interactions and their consequences for the ecosystem. We present IBMlib, a versatile, portable and computationally efficient framework for conducting Lagrangian simulations in the marine environment. The purpose of the framework is to handle complex individual-level biological models of organisms, combined with realistic 3D oceanographic models of physics and biogeochemistry describing the environment of the organisms, without assumptions about spatial or temporal scales. The open-source framework features a minimal, robust interface to facilitate the coupling between individual-level biological models and oceanographic models, and we provide application examples including forward/backward simulations, habitat connectivity calculations, assessment of ocean conditions, comparison of physical circulation models, model ensemble runs and, most recently, posterior Eulerian simulations using the IBMlib framework. We present the code design ideas behind the longevity of the code, our implementation experiences, as well as code performance benchmarking. The framework may contribute substantially to progress in representing, understanding, predicting and eventually managing marine ecosystems. PMID:29351280
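The core of any such individual-based Lagrangian framework is advecting particles through a gridded velocity field. Below is a minimal sketch (forward-Euler advection with bilinear interpolation on a synthetic 2D field); it only illustrates the idea and is unrelated to IBMlib's actual interface:

```python
import numpy as np

# synthetic 2D circulation on the unit square (stand-in for ocean model output)
n = 50
yy, xx = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u = -(yy - 0.5)   # rotation about the domain center
v = (xx - 0.5)

def sample(field, x, y):
    """Bilinear interpolation of a gridded field at a fractional position."""
    gx, gy = x * (n - 1), y * (n - 1)
    i0, j0 = int(gx), int(gy)
    i1, j1 = min(i0 + 1, n - 1), min(j0 + 1, n - 1)
    fx, fy = gx - i0, gy - j0
    return ((1-fx)*(1-fy)*field[j0, i0] + fx*(1-fy)*field[j0, i1]
            + (1-fx)*fy*field[j1, i0] + fx*fy*field[j1, i1])

# forward-Euler advection of a cloud of particles
pts = np.random.default_rng(1).uniform(0.3, 0.7, size=(100, 2))
dt = 0.01
for _ in range(500):
    for p in pts:
        p[0] += dt * sample(u, p[0], p[1])
        p[1] += dt * sample(v, p[0], p[1])
        p[:] = np.clip(p, 0.0, 1.0)   # keep particles inside the domain
```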
Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.
2012-01-01
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study. Experiment 1: detection and categorisation of auditory and kinematic motion cues. Experiment 2: performance evaluation in a target-tracking task. Experiment 3: transferable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (experiment 2). In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
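The "optimal integration" referenced in experiment 1 is conventionally the maximum-likelihood cue combination rule, in which each cue is weighted by its reliability (inverse variance). The notation below is generic, not taken from the paper:

```latex
\hat{s} = w_A s_A + w_V s_V,
\qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2},
\qquad
\sigma_{\hat{s}}^2 = \frac{\sigma_A^2\,\sigma_V^2}{\sigma_A^2 + \sigma_V^2}
```

Here s_A and s_V are the single-cue estimates (e.g., auditory and visual) with variances σ_A² and σ_V²; the combined estimate has lower variance than either cue alone, which is the benchmark against which "optimal" integration is tested.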
Large eddy simulations of time-dependent and buoyancy-driven channel flows
NASA Technical Reports Server (NTRS)
Cabot, William H.
1993-01-01
The primary goal of this work has been to assess the performance of the dynamic SGS model in the large eddy simulation (LES) of channel flows in a variety of situations, viz., in temporal development of channel flow turned by a transverse pressure gradient and especially in buoyancy-driven turbulent flows such as Rayleigh-Benard and internally heated channel convection. For buoyancy-driven flows, there are additional buoyant terms that are possible in the base models, and one objective has been to determine if the dynamic SGS model results are sensitive to such terms. The ultimate goal is to determine the minimal base model needed in the dynamic SGS model to provide accurate results in flows with more complicated physical features. In addition, a program of direct numerical simulation (DNS) of fully compressible channel convection has been undertaken to determine stratification and compressibility effects. These simulations are intended to provide a comparative base for performing the LES of compressible (or highly stratified, pseudo-compressible) convection at high Reynolds number in the future.
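For context, the base model underlying the dynamic SGS procedure discussed above is typically the Smagorinsky eddy-viscosity closure, with the model coefficient computed dynamically from the resolved field via test filtering (the Germano identity). The form below is the standard one, not necessarily the exact variant used in this work:

```latex
\tau_{ij} - \tfrac{1}{3}\delta_{ij}\tau_{kk}
  = -2\,C\,\bar{\Delta}^{2}\,|\bar{S}|\,\bar{S}_{ij},
\qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}
```

where the overbar denotes grid filtering at width Δ̄ and C is determined dynamically rather than prescribed; for buoyancy-driven flows, additional buoyant terms may enter the base model, which is precisely the sensitivity examined in this work.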
Modeling the Effects of Turbulence in Rotating Detonation Engines
NASA Astrophysics Data System (ADS)
Towery, Colin; Smith, Katherine; Hamlington, Peter; van Schoor, Marthinus; TESLa Team; Midé Team
2014-03-01
Propulsion systems based on detonation waves, such as rotating and pulsed detonation engines, have the potential to substantially improve the efficiency and power density of gas turbine engines. Numerous technical challenges remain to be solved in such systems, however, including obtaining more efficient injection and mixing of air and fuels, more reliable detonation initiation, and better understanding of the flow in the ejection nozzle. These challenges can be addressed using numerical simulations. Such simulations are enormously challenging, however, since accurate descriptions of highly unsteady turbulent flow fields are required in the presence of combustion, shock waves, fluid-structure interactions, and other complex physical processes. In this study, we performed high-fidelity, three-dimensional simulations of a rotating detonation engine and examined turbulent flow effects on the operation, performance, and efficiency of the engine. Along with experimental data, these simulations were used to test the accuracy of commonly used Reynolds-averaged and subgrid-scale turbulence models when applied to detonation engines. The authors gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA).
IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.
This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high-performance computing (HPC) platforms to offer unprecedented resolution from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D[TM]). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price-responsive load scenarios.
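A hierarchical MPI co-simulation of the kind described can be sketched as follows. The split of ranks into one bulk-system coordinator and many distribution-feeder workers, and the quantities exchanged, are illustrative assumptions rather than IGMS's actual protocol:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_STEPS = 24  # co-simulation time steps (e.g., hourly)

if rank == 0:
    # coordinator: stand-in for the bulk transmission/market model
    for step in range(N_STEPS):
        voltage = 1.0 + 0.02 * np.sin(2 * np.pi * step / N_STEPS)
        for worker in range(1, size):
            comm.send(voltage, dest=worker, tag=step)   # boundary condition out
        loads = [comm.recv(source=w, tag=step) for w in range(1, size)]
        total_load = sum(loads)   # aggregate feeder demand fed back to bulk model
    print("final aggregate load:", total_load)
else:
    # worker: stand-in for one distribution-feeder simulator instance
    rng = np.random.default_rng(rank)
    for step in range(N_STEPS):
        voltage = comm.recv(source=0, tag=step)
        load = rng.uniform(0.8, 1.2) / voltage   # toy voltage-dependent load
        comm.send(load, dest=0, tag=step)
```

Run with, e.g., `mpiexec -n 4 python cosim.py`; each rank beyond 0 plays the role of one feeder instance.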
Outcomes from the DOE Workshop on Turbulent Flow Simulation at the Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael; Boldyrev, Stanislav; Chang, Choong-Seock
This paper summarizes the outcomes from the Turbulent Flow Simulation at the Exascale: Opportunities and Challenges Workshop, which was held 4-5 August 2015 and was sponsored by the U.S. Department of Energy Office of Advanced Scientific Computing Research. The workshop objective was to define and describe the challenges and opportunities that computing at the exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the U.S. Department of Energy applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.
Kinetic physics in ICF: present understanding and future directions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rinderknecht, Hans G.; Amendt, P. A.; Wilks, S. C.
Kinetic physics has the potential to impact the performance of indirect-drive inertial confinement fusion (ICF) experiments. Systematic anomalies in the National Ignition Facility implosion dataset have been identified in which kinetic physics may play a role, including inferred missing energy in the hohlraum, drive asymmetry in near-vacuum hohlraums, low areal density and high burn-averaged ion temperatures (⟨Ti⟩) compared with mainline simulations, and low ratios of the DD-neutron and DT-neutron yields and inferred ⟨Ti⟩. Several components of ICF implosions are likely to be influenced or dominated by kinetic physics: laser-plasma interactions in the LEH and hohlraum interior; the hohlraum wall blowoff, blowoff/gas and blowoff/ablator interfaces; the ablator and ablator/ice interface; and the DT fuel all present conditions in which kinetic physics can significantly affect the dynamics. This review presents the assembled experimental data and simulation results to date, which indicate that the effects of long mean-free-path plasma phenomena and self-generated electromagnetic fields may have a significant impact in ICF targets. Finally, simulation and experimental efforts are proposed to definitively quantify the importance of these effects at ignition-relevant conditions, including priorities for ongoing study.
Kinetic physics in ICF: present understanding and future directions
NASA Astrophysics Data System (ADS)
Rinderknecht, Hans G.; Amendt, P. A.; Wilks, S. C.; Collins, G.
2018-06-01
Kinetic physics has the potential to impact the performance of indirect-drive inertial confinement fusion (ICF) experiments. Systematic anomalies in the National Ignition Facility implosion dataset have been identified in which kinetic physics may play a role, including inferred missing energy in the hohlraum, drive asymmetry in near-vacuum hohlraums, low areal density and high burn-averaged ion temperatures (⟨Ti⟩) compared with mainline simulations, and low ratios of the DD-neutron and DT-neutron yields and inferred ⟨Ti⟩. Several components of ICF implosions are likely to be influenced or dominated by kinetic physics: laser-plasma interactions in the LEH and hohlraum interior; the hohlraum wall blowoff, blowoff/gas and blowoff/ablator interfaces; the ablator and ablator/ice interface; and the DT fuel all present conditions in which kinetic physics can significantly affect the dynamics. This review presents the assembled experimental data and simulation results to date, which indicate that the effects of long mean-free-path plasma phenomena and self-generated electromagnetic fields may have a significant impact in ICF targets. Simulation and experimental efforts are proposed to definitively quantify the importance of these effects at ignition-relevant conditions, including priorities for ongoing study.
An Empirical Study of Combining Communicating Processes in a Parallel Discrete Event Simulation
1990-12-01
…the dynamics of the cost/performance criteria which typically drive computer resource acquisition decisions, offering a broad range of tradeoffs in the way physical processes are decomposed into and combined as logical processes. It is the hypothesis of this study that the way communicating processes are combined has a significant impact on simulation performance. The surviving fragment of the simulation kernel is a next-event loop: while the event queue is not empty, pop the next event, advance the logical-process clock to its timestamp, and simulate it, consuming the event and enqueueing any new events it generates (sketched below).
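The event-loop fragment above can be rendered as a minimal runnable sketch of a single logical process; the queue discipline and event fields are generic assumptions, not the report's actual code:

```python
import heapq

# each event: (timestamp, description); heapq keeps the earliest event first
event_queue = [(0.4, "arrival"), (1.3, "departure"), (0.9, "arrival")]
heapq.heapify(event_queue)
lp_clock = 0.0  # local simulation clock of this logical process

while event_queue:
    timestamp, kind = heapq.heappop(event_queue)   # next-event = pop(queue)
    lp_clock = timestamp                           # advance the LP clock
    print(f"t={lp_clock:.1f}: simulating {kind}")
    if kind == "arrival" and lp_clock < 2.0:
        # simulating an event may enqueue new future events
        heapq.heappush(event_queue, (lp_clock + 1.0, "departure"))
```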
NASA Technical Reports Server (NTRS)
Shyam, Vikram; Ameri, Ali
2009-01-01
Unsteady 3-D RANS simulations have been performed on a highly loaded transonic turbine stage and results are compared to steady calculations as well as to experiment. A low Reynolds number k-epsilon turbulence model is employed to provide closure for the RANS system. A phase-lag boundary condition is used in the tangential direction. This allows the unsteady simulation to be performed by using only one blade from each of the two rows. The objective of this work is to study the effect of unsteadiness on rotor heat transfer and to glean any insight into unsteady flow physics. The role of the stator wake passing on the pressure distribution at the leading edge is also studied. The simulated heat transfer and pressure results agreed favorably with experiment. The time-averaged heat transfer predicted by the unsteady simulation is higher than the heat transfer predicted by the steady simulation everywhere except at the leading edge. The shock structure formed due to stator-rotor interaction was analyzed. Heat transfer and pressure at the hub and casing were also studied. Thermal segregation was observed that leads to the heat transfer patterns predicted by steady and unsteady simulations to be different.
The Aouda.X space suit simulator and its applications to astrobiology.
Groemer, Gernot E; Hauth, Stefan; Luger, Ulrich; Bickert, Klaus; Sattler, Birgit; Hauth, Eva; Föger, Daniel; Schildhammer, Daniel; Agerer, Christian; Ragonig, Christoph; Sams, Sebastian; Kaineder, Felix; Knoflach, Martin
2012-02-01
We have developed the space suit simulator Aouda.X, which is capable of reproducing the physical and sensory limitations a flight-worthy suit would have on Mars. Based upon a Hard-Upper-Torso design, it has an advanced human-machine interface and a sensory network connected to an On-Board Data Handling system to increase situational awareness in the field. Although the suit simulator is not pressurized, the physical forces that lead to a reduced working envelope and reduced physical performance are reproduced with a calibrated exoskeleton. This allows us to simulate various pressure regimes from 0.3 to 1 bar. Aouda.X has been tested in several laboratory and field settings, including sterile sampling at 2800 m altitude inside a glacial ice cave and a cryochamber at -110°C, and subsurface tests in connection with geophysical instrumentation relevant to astrobiology, including ground-penetrating radar, geoacoustics, and drilling. The communication subsystem allows for direct interaction with remote science teams via telemetry from a mission control center. Aouda.X is thus a versatile experimental platform for studying Mars exploration activities in a high-fidelity Mars analog environment with a focus on astrobiology and operations research, optimized to reduce the amount of biological cross contamination. We report on the performance envelope of the Aouda.X system and its operational limitations.
NASA Astrophysics Data System (ADS)
Yao, Bing; Yang, Hui
2016-12-01
This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in space, but also incorporates spatial and temporal regularization to improve prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice, such as Tikhonov zero-order, Tikhonov first-order, and L1 first-order regularization methods.
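For orientation, the baseline methods named above solve the regularized inverse problem in the standard Tikhonov form. Writing the body-surface potentials as y, the transfer matrix as A, and the heart-surface potentials as x (generic notation, not the paper's):

```latex
\hat{x} = \arg\min_{x}\; \|A x - y\|_2^2 + \lambda\,\|L x\|_2^2
```

Taking L = I gives zero-order Tikhonov, taking L as a first-derivative (gradient) operator gives first-order Tikhonov, and replacing the penalty with λ‖Lx‖₁ gives the L1 first-order variant; the STRE approach augments this framework with additional spatial and temporal penalty terms.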
Capstone: A Geometry-Centric Platform to Enable Physics-Based Simulation and Design of Systems
2015-10-05
…Capstone is the geometry foundation for the CREATE™-AV solvers Kestrel [11] and Helios [16,17], and for CREATE™-AV's air-vehicle early-design tool DaVinci [9], which enables the development of associative models of air vehicles. It is part of the Computational Research and Engineering Acquisition Tools and Environments (CREATE™) program [6], aimed at developing a suite of high-performance, physics-based computational tools addressing the needs of acquisition programs.
Update of global TC simulations using a variable resolution non-hydrostatic model
NASA Astrophysics Data System (ADS)
Park, S. H.
2017-12-01
Tropical cyclone (TC) forecasts were simulated using variable-resolution meshes in MPAS during the summer of 2017. Two physics suites were tested to explore the performance and biases of each for TC forecasting: a WRF physics suite selected from weather-forecasting experience, and CAM (Community Atmosphere Model) physics taken from AMIP-type climate simulations. Building on last year's results with the CAM5 physical parameterization package, and comparing against WRF physics, we investigated an intensity bias using an updated version of CAM physics (CAM6). We also compared these results with a coupled version of the TC simulations. In this talk, TC structure will be compared, especially around the boundary layer, to investigate the relationship between TC intensity and the different physics packages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirocha, Jeff D.; Simpson, Matthew D.; Fast, Jerome D.
Simulations of two periods featuring three consecutive low-level jet (LLJ) events in the US Upper Great Plains during the autumn of 2011 were conducted to explore the impacts of various setup configurations and physical process models on simulated flow parameters within the lowest 200 m above the surface, using the Weather Research and Forecasting (WRF) model. Sensitivities of simulated flow parameters to horizontal and vertical grid spacing and to planetary boundary layer (PBL) and land surface model (LSM) physics options were assessed. Data from a Light Detection and Ranging (lidar) system, deployed for the Weather Forecast Improvement Project (WFIP; Finley et al. 2013), were used to evaluate the accuracy of simulated wind speed and direction at 80 m above the surface, as well as their vertical distributions between 120 and 40 m, covering the typical span of contemporary tall wind turbines. All of the simulations qualitatively captured the overall diurnal cycle of wind speed and stratification, producing LLJs during each overnight period; however, large discrepancies from the observations occurred at certain times for each simulation. 54-member ensembles encompassing changes of the above-discussed configuration parameters displayed a wide range of simulated vertical distributions of wind speed, direction, and potential temperature, reflecting highly variable representations of stratification during the weakly stable overnight conditions. Root mean square error (RMSE) statistics show that different ensemble members performed better and worse for various simulated parameters at different times, with no clearly superior configuration. Simulations using a PBL parameterization designed specifically for the stable conditions investigated herein provided superior overall simulations of wind speed at 80 m, demonstrating the efficacy of targeting improvements of physical process models in areas of known deficiencies. However, the considerable magnitudes of the RMSE values of even the best-performing simulations indicate ample opportunities for further improvement.
Development of a Robust and Efficient Parallel Solver for Unsteady Turbomachinery Flows
NASA Technical Reports Server (NTRS)
West, Jeff; Wright, Jeffrey; Thakur, Siddharth; Luke, Ed; Grinstead, Nathan
2012-01-01
The traditional design and analysis practice for advanced propulsion systems relies heavily on expensive full-scale prototype development and testing. Over the past decade, use of high-fidelity analysis and design tools such as CFD early in the product development cycle has been identified as one way to alleviate testing costs and to develop these devices better, faster and cheaper. In the design of advanced propulsion systems, CFD plays a major role in defining the required performance over the entire flight regime, as well as in testing the sensitivity of the design to the different modes of operation. Increased emphasis is being placed on developing and applying CFD models to simulate the flow field environments and performance of advanced propulsion systems. This necessitates the development of next generation computational tools which can be used effectively and reliably in a design environment. The turbomachinery simulation capability presented here is being developed in a computational tool called Loci-STREAM [1]. It integrates proven numerical methods for generalized grids and state-of-the-art physical models in a novel rule-based programming framework called Loci [2] which allows: (a) seamless integration of multidisciplinary physics in a unified manner, and (b) automatic handling of massively parallel computing. The objective is to be able to routinely simulate problems involving complex geometries requiring large unstructured grids and complex multidisciplinary physics. An immediate application of interest is simulation of unsteady flows in rocket turbopumps, particularly in cryogenic liquid rocket engines. The key components of the overall methodology presented in this paper are the following: (a) high fidelity unsteady simulation capability based on Detached Eddy Simulation (DES) in conjunction with second-order temporal discretization, (b) compliance with Geometric Conservation Law (GCL) in order to maintain conservative property on moving meshes for second-order time-stepping scheme, (c) a novel cloud-of-points interpolation method (based on a fast parallel kd-tree search algorithm) for interfaces between turbomachinery components in relative motion which is demonstrated to be highly scalable, and (d) demonstrated accuracy and parallel scalability on large grids (approx 250 million cells) in full turbomachinery geometries.
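The cloud-of-points interpolation named in item (c) can be illustrated with a generic inverse-distance-weighted nearest-neighbor scheme built on a kd-tree. This sketch uses SciPy's cKDTree and is only an illustration of the idea, not the Loci-STREAM implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)

# donor cloud: solution values at cell centers on one side of the interface
donor_xyz = rng.uniform(0, 1, size=(10_000, 3))
donor_val = np.sin(donor_xyz[:, 0]) * np.cos(donor_xyz[:, 1])

# receiver points on the other side of the sliding interface
recv_xyz = rng.uniform(0, 1, size=(2_000, 3))

tree = cKDTree(donor_xyz)                    # fast spatial search structure
dist, idx = tree.query(recv_xyz, k=8)        # 8 nearest donors per receiver

w = 1.0 / np.maximum(dist, 1e-12)            # inverse-distance weights
w /= w.sum(axis=1, keepdims=True)
recv_val = (w * donor_val[idx]).sum(axis=1)  # interpolated values

exact = np.sin(recv_xyz[:, 0]) * np.cos(recv_xyz[:, 1])
print("max interpolation error:", np.abs(recv_val - exact).max())
```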
High-Temperature Gas-Cooled Test Reactor Point Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterbentz, James William; Bayless, Paul David; Nelson, Lee Orville
2016-04-01
A point design has been developed for a 200 MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched UCO fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technological readiness level, licensing approach and costs.
Multi-scale gyrokinetic simulations of an Alcator C-Mod, ELM-y H-mode plasma
NASA Astrophysics Data System (ADS)
Howard, N. T.; Holland, C.; White, A. E.; Greenwald, M.; Rodriguez-Fernandez, P.; Candy, J.; Creely, A. J.
2018-01-01
High-fidelity, multi-scale gyrokinetic simulations capable of capturing both ion-scale (k_θ ρ_s ~ O(1)) and electron-scale (k_θ ρ_e ~ O(1)) turbulence were performed in the core of an Alcator C-Mod ELM-y H-mode discharge which exhibits reactor-relevant characteristics. These simulations, performed with all experimental inputs and realistic ion-to-electron mass ratio ((m_i/m_e)^{1/2} = 60.0), provide insight into the physics fidelity that may be needed for accurate simulation of the core of fusion reactor discharges. Three multi-scale simulations and a series of separate ion- and electron-scale simulations performed using the GYRO code (Candy and Waltz 2003 J. Comput. Phys. 186 545) are presented. As with earlier multi-scale results in L-mode conditions (Howard et al 2016 Nucl. Fusion 56 014004), both ion-scale and multi-scale simulation results are compared with experimentally inferred ion and electron heat fluxes, as well as the measured values of electron incremental thermal diffusivities, indicative of the experimental electron temperature profile stiffness. Consistent with the L-mode results, cross-scale coupling is found to play an important role in the simulation of these H-mode conditions. Extremely stiff ion-scale transport is observed in these high-performance conditions, which is shown to likely play an important role in the reproduction of measurements of perturbative transport. These results provide important insight into the role of multi-scale plasma turbulence in the core of reactor-relevant plasmas and establish important constraints on the fidelity of models needed for predictive simulations.
Breimer, Gerben E; Haji, Faizal A; Bodani, Vivek; Cunningham, Melissa S; Lopez-Rios, Adriana-Lucia; Okrainec, Allan; Drake, James M
2017-02-01
The relative educational benefits of virtual reality (VR) and physical simulation models for endoscopic third ventriculostomy (ETV) have not been evaluated "head to head." To compare and identify the relative utility of a physical and VR ETV simulation model for use in neurosurgical training. Twenty-three neurosurgical residents and 3 fellows performed an ETV on both a physical and VR simulation model. Trainees rated the models using 5-point Likert scales evaluating the domains of anatomy, instrument handling, procedural content, and the overall fidelity of the simulation. Paired t tests were performed for each domain's mean overall score and individual items. The VR model has relative benefits compared with the physical model with respect to realistic representation of intraventricular anatomy at the foramen of Monro (4.5, standard deviation [SD] = 0.7 vs 4.1, SD = 0.6; P = .04) and the third ventricle floor (4.4, SD = 0.6 vs 4.0, SD = 0.9; P = .03), although the overall anatomy score was similar (4.2, SD = 0.6 vs 4.0, SD = 0.6; P = .11). For overall instrument handling and procedural content, the physical simulator outperformed the VR model (3.7, SD = 0.8 vs 4.5; SD = 0.5, P < .001 and 3.9; SD = 0.8 vs 4.2, SD = 0.6; P = .02, respectively). Overall task fidelity across the 2 simulators was not perceived as significantly different. Simulation model selection should be based on educational objectives. Training focused on learning anatomy or decision-making for anatomic cues may be aided with the VR simulation model. A focus on developing manual dexterity and technical skills using endoscopic equipment in the operating room may be better learned on the physical simulation model. Copyright © 2016 by the Congress of Neurological Surgeons
Training Knowledge Bots for Physics-Based Simulations Using Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Wong, Jay Ming
2014-01-01
Millions of complex physics-based simulations are required for the design of an aerospace vehicle. These simulations are usually performed by highly trained and skilled analysts, who execute, monitor, and steer each simulation. Analysts rely heavily on broad experience that may have taken 20-30 years to accumulate. In addition, the simulation software is complex in nature, requiring significant computational resources. Simulations of systems of systems become even more complex and are beyond human capacity to effectively learn their behavior. IBM has developed machines that can learn and compete successfully with a chess grandmaster and with the most successful Jeopardy contestants. These machines are capable of learning some complex problems much faster than humans can. In this paper, we propose using artificial neural networks to train knowledge bots to identify the idiosyncrasies of simulation software and recognize patterns that can lead to successful simulations. We examine the use of knowledge bots for applications of computational fluid dynamics (CFD), trajectory analysis, commercial finite-element analysis software, and slosh propellant dynamics. We show that machine learning algorithms can be used to learn the idiosyncrasies of computational simulations and identify regions of instability without including any additional information about their mathematical form or applied discretization approaches.
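A minimal version of the proposed pattern-recognition bot can be sketched as a neural-network classifier over simulation metadata. The features, labels, and the toy stability rule below are synthetic stand-ins, not the paper's actual data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# synthetic "simulation setup" features: cell count, time step, CFL number
n = 5_000
X = np.column_stack([
    rng.uniform(1e4, 1e7, n),    # mesh cell count
    rng.uniform(1e-6, 1e-2, n),  # time step
    rng.uniform(0.1, 2.0, n),    # CFL number
])
# toy stability rule: runs with CFL above 1 tend to fail
y = (X[:, 2] + rng.normal(0, 0.1, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
bot = make_pipeline(
    StandardScaler(),  # scale the wildly different feature magnitudes
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
bot.fit(X_train, y_train)
print("held-out accuracy:", bot.score(X_test, y_test))
```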
SciDAC GSEP: Gyrokinetic Simulation of Energetic Particle Turbulence and Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhihong
Energetic particle (EP) confinement is a key physics issue for the burning plasma experiment ITER, the crucial next step in the quest for clean and abundant energy, since ignition relies on self-heating by energetic fusion products (α-particles). Due to the strong coupling of EPs with burning thermal plasmas, plasma confinement in the ignition regime is one of the most uncertain factors when extrapolating from existing fusion devices to the ITER tokamak. EP populations in current tokamaks are mostly produced by auxiliary heating such as neutral beam injection (NBI) and radio frequency (RF) heating. Remarkable progress in developing comprehensive EP simulation codes and understanding basic EP physics has been made by two concurrent SciDAC EP projects (GSEP) funded by the Department of Energy (DOE) Office of Fusion Energy Sciences (OFES), which have successfully established gyrokinetic turbulence simulation as a necessary paradigm shift for studying EP confinement in burning plasmas. Verification and validation have rapidly advanced through close collaborations between simulation, theory, and experiment. Furthermore, productive collaborations with computational scientists have enabled EP simulation codes to effectively utilize current petascale computers and emerging exascale computers. We review here key physics progress in the GSEP projects regarding verification and validation of gyrokinetic simulations, nonlinear EP physics, EP coupling with thermal plasmas, and reduced EP transport models. Advances in high performance computing through collaborations with computational scientists that enable these large-scale electromagnetic simulations are also highlighted. These results have been widely disseminated in numerous peer-reviewed publications, including many Phys. Rev. Lett. papers, and in many invited presentations at prominent fusion conferences such as the biennial International Atomic Energy Agency (IAEA) Fusion Energy Conference and the annual meeting of the American Physical Society, Division of Plasma Physics (APS-DPP).
HACC: Simulating sky surveys on state-of-the-art supercomputing architectures
NASA Astrophysics Data System (ADS)
Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng
2016-01-01
Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
A Method for Functional Task Alignment Analysis of an Arthrocentesis Simulator.
Adams, Reid A; Gilbert, Gregory E; Buckley, Lisa A; Nino Fong, Rodolfo; Fuentealba, I Carmen; Little, Erika L
2018-05-16
During simulation-based education, simulators are subjected to procedures composed of a variety of tasks and processes. Simulators should functionally represent a patient in response to the physical actions of these tasks. The aim of this work was to describe a method for determining whether a simulator does or does not have sufficient functional task alignment (FTA) to be used in a simulation. Potential performance checklist items were gathered from published arthrocentesis guidelines and aggregated into a performance checklist using Lawshe's method. An expert panel used this performance checklist and an FTA analysis questionnaire to evaluate a simulator's ability to respond to the physical actions required by the performance checklist. Thirteen items, from a pool of 39, were included on the performance checklist. Experts had mixed reviews of the simulator's FTA and its suitability for use in simulation. Unexpectedly, some positive FTA was found for several tasks where the simulator lacked functionality. By developing a detailed list of specific tasks required to complete a clinical procedure, and surveying experts on the simulator's response to those actions, educators can gain insight into the simulator's clinical accuracy and suitability. The unexpected positive FTA ratings of functional deficits suggest that further revision of the survey method is required.
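Lawshe's method, used above to aggregate checklist items, retains an item based on its content validity ratio; in standard notation (not specific to this study):

```latex
\mathrm{CVR} = \frac{n_e - N/2}{N/2}
```

where n_e is the number of panelists rating the item "essential" and N is the total number of panelists. CVR ranges from -1 to +1, and items are retained when CVR exceeds a critical value that depends on panel size.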
Physical Uncertainty Bounds (PUB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaughan, Diane Elizabeth; Preston, Dean L.
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Numerical Simulation of Thermal Performance of Glass-Fibre-Reinforced Polymer
NASA Astrophysics Data System (ADS)
Zhao, Yuchao; Jiang, Xu; Zhang, Qilin; Wang, Qi
2017-10-01
Glass-fibre-reinforced polymer (GFRP), a developing construction material, has seen rapidly increasing application in civil engineering, and especially bridge engineering, in recent years, so far mainly as decorative material and reinforcing bars. Compared with traditional construction materials, such composites have clear advantages: high strength, low density, corrosion resistance, and ease of processing. Members can be formed by different processes, such as pultrusion and resin transfer moulding (RTM), which shape the raw material directly into the desired form. As a polymer composite, GFRP also possesses particular physical and mechanical properties, among them its thermal behavior: the polymer matrix softens when heated, giving these composites a potential for hot processing but also poor fire resistance. This paper focuses on the thermal performance of GFRP panels. First, a dynamic thermomechanical analysis (DMA) experiment is conducted to obtain the glass transition temperature (Tg) of the GFRP under study, and the curve of bending elastic modulus versus temperature is calculated from the experimental data. Values of other thermal parameters are then computed or estimated from the DMA experiment and the literature, and numerical simulations are performed for two conditions: (1) the heat transfer process in a GFRP panel whose surface is heated directly above Tg, and the hot processing under this temperature field; and (2) the physical and mechanical performance of a GFRP panel under fire. Condition (1) is mainly used to guide the development of high-temperature processing equipment, while condition (2) indicates that GFRP's performance under fire is unsatisfactory and that protective measures must be taken when it is adopted. Since composite material properties differ from product to product and their high-temperature parameters cannot be obtained through common methods, some parameters are estimated; the simulation is intended to guide subsequent high-temperature experiments, after which the parameters will be adjusted.
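Condition (1), the surface-heated panel, is in essence a transient heat-conduction problem. A minimal explicit finite-difference sketch is given below, with the thermal properties chosen as illustrative placeholders rather than measured GFRP values:

```python
import numpy as np

# illustrative material/geometry values (placeholders, not measured GFRP data)
L = 0.01          # panel thickness [m]
alpha = 2.0e-7    # thermal diffusivity [m^2/s]
nx = 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha   # explicit FTCS stability needs dt <= dx^2/(2*alpha)

T = np.full(nx, 20.0)      # initial temperature [deg C]
T_surface = 200.0          # heated surface held above Tg

t, t_end = 0.0, 60.0
while t < t_end:
    T[0] = T_surface                   # heated face (Dirichlet condition)
    T[-1] = T[-2]                      # insulated back face (zero gradient)
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    t += dt

print(f"mid-thickness temperature after {t_end:.0f} s: {T[nx // 2]:.1f} C")
```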
Route complexity and simulated physical ageing negatively influence wayfinding.
Zijlstra, Emma; Hagedoorn, Mariët; Krijnen, Wim P; van der Schans, Cees P; Mobach, Mark P
2016-09-01
The aim of this age-simulation field experiment was to assess the influence of route complexity and physical ageing on wayfinding. Seventy-five people (aged 18-28) performed a total of 108 wayfinding tasks (i.e., 42 participants performed two wayfinding tasks and 33 performed one wayfinding task), of which 59 tasks were performed wearing gerontologic ageing suits. Outcome variables were wayfinding performance (i.e., efficiency and walking speed) and physiological outcomes (i.e., heart and respiratory rates). Analysis of covariance showed that persons on more complex routes (i.e., those with more floor and building changes) walked less efficiently than persons on less complex routes. In addition, simulated elderly participants performed worse in wayfinding than young participants in terms of walking speed (p < 0.001). Moreover, a linear mixed model showed that simulated elderly persons had higher heart and respiratory rates than young people during a wayfinding task, suggesting that the simulated elderly consumed more energy during this task. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Novel Approach to Develop the Lower Order Model of Multi-Input Multi-Output System
NASA Astrophysics Data System (ADS)
Rajalakshmy, P.; Dharmalingam, S.; Jayakumar, J.
2017-10-01
A mathematical model is a virtual entity that uses mathematical language to describe the behavior of a system. Mathematical models are used particularly in the natural sciences and engineering disciplines such as physics, biology, and electrical engineering, as well as in the social sciences such as economics, sociology and political science. Physicists, engineers, computer scientists, and economists use mathematical models most extensively. With the advent of high-performance processors and advanced mathematical computation, it is possible to develop high-performing simulators for complicated Multi-Input Multi-Output (MIMO) systems such as quadruple-tank systems, aircraft, and boilers. This paper presents the development of the mathematical model of a 500 MW utility boiler, which is a highly complex system. A synergistic combination of operational experience, system identification, and a lower-order modeling philosophy has been used to develop a simplified but accurate model of the circulation system of a utility boiler, a MIMO system. The results obtained are found to be in good agreement with the physics of the process and with the results obtained through the design procedure. The model obtained can be used directly for control system studies and to realize hardware simulators for boiler testing and operator training.
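A reduced-order model of the kind described is typically realized as a small linear state-space system. The sketch below simulates a generic two-state, two-input, two-output model with SciPy; all matrices are invented for illustration and are not the boiler's identified model:

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# illustrative 2-state, 2-input, 2-output lower-order model (made-up matrices)
A = np.array([[-0.05, 0.01],
              [0.00, -0.02]])
B = np.array([[0.4, 0.0],
              [0.0, 0.1]])
C = np.eye(2)
D = np.zeros((2, 2))
sys = StateSpace(A, B, C, D)

# step on input 1 (e.g., fuel flow), input 2 held constant
t = np.linspace(0, 300, 601)
u = np.column_stack([np.ones_like(t), 0.5 * np.ones_like(t)])
tout, y, x = lsim(sys, U=u, T=t)

print("steady-state outputs:", y[-1])
```

A model in this form can be dropped directly into control-design and hardware-simulator workflows, which is the use case the abstract describes.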
Trissel, Lawrence A; Trusley, Craig; Ben, Michel; Kupiec, Thomas C
2007-06-01
The physical and chemical compatibility of palonosetron hydrochloride with fentanyl citrate, hydromorphone hydrochloride, meperidine hydrochloride, morphine sulfate, and sufentanil citrate during simulated Y-site administration was studied. Test samples were prepared in triplicate by mixing 7.5-mL samples of undiluted palonosetron 50 microg/mL (of palonosetron) with 7.5-mL samples of fentanyl citrate 50 microg/mL, morphine sulfate 15 mg/mL, hydromorphone hydrochloride 0.5 mg/mL, meperidine hydrochloride 10 mg/mL, and sufentanil citrate 12.5 microg/mL (of sufentanil), individually, in colorless 15-mL borosilicate glass screw-cap culture tubes with polypropylene caps. Physical stability of the admixtures was assessed by visual examination and by measuring turbidity and particle size and content. Chemical stability was assessed by stability-indicating high-performance liquid chromatography. Evaluations were performed immediately and one and four hours after mixing. All of the admixtures were initially clear and colorless in normal fluorescent room light and when viewed with a high-intensity monodirectional light (Tyndall beam) and were essentially without haze. Changes in turbidity were minor throughout the study. Particulates measuring 10 microm or larger were few in all samples throughout the observation period. The admixtures remained colorless throughout the study. No loss of palonosetron hydrochloride occurred with any of the opiate agonists tested over the four-hour period. Similarly, little or no loss of the opiate agonists occurred over the four-hour period. Palonosetron hydrochloride was physically and chemically stable with fentanyl citrate, hydromorphone hydrochloride, meperidine hydrochloride, morphine sulfate, and sufentanil citrate during simulated Y-site administration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, John Russell
This grant funded the development and dissemination of the Photon Simulator (PhoSim) for the purpose of studying dark energy at high precision with the upcoming Large Synoptic Survey Telescope (LSST) astronomical survey. The work was in collaboration with the LSST Dark Energy Science Collaboration (DESC). Several detailed physics improvements were made in the optics, atmosphere, and sensor, a number of validation studies were performed, and a significant number of usability features were implemented. Future work in DESC will use PhoSim as the image simulation tool for data challenges used by the analysis groups.
The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine
NASA Astrophysics Data System (ADS)
Liu, Yuan; Zhang, Xin; Zhang, Tianhong
2017-11-01
A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults using Kalman filter banks and reconstruct the signal with a real-time on-board adaptive model combining a simplified real-time model and an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment has been carried out. In addition to the real I/O interfaces, controller hardware, and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, the semi-physical simulation system has a higher degree of confidence. In order to meet the needs of semi-physical simulation, a rapid prototyping controller with fault-tolerant control ability based on the NI CompactRIO platform is designed and verified on the semi-physical simulation test platform. The results show that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
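As a rough illustration of the residual-based sensor fault diagnosis described above, the sketch below runs a single scalar Kalman filter and flags a fault when the normalized innovation exceeds a threshold. All noise levels, the injected bias, and the threshold are our assumptions; the paper's method uses a bank of filters on an engine model.

```python
import numpy as np

# Minimal sketch of residual-based sensor fault detection with a scalar
# Kalman filter. The "process" here is a constant quantity plus noise;
# noise levels, injected bias, and threshold are illustrative only.
rng = np.random.default_rng(0)
n = 200
truth = np.full(n, 100.0)                 # true sensed quantity
meas = truth + rng.normal(0, 1.0, n)
meas[120:] += 15.0                        # injected sensor bias fault

q, r = 0.01, 1.0                          # process / measurement noise variances
x, p = meas[0], 1.0
for k in range(1, n):
    p += q                                # predict (static process model)
    innov = meas[k] - x                   # innovation (residual)
    s = p + r                             # innovation variance
    if innov**2 / s > 16.0:               # ~4-sigma fault test
        print(f"fault detected at sample {k}; using model estimate {x:.1f}")
        break
    g = p / s                             # Kalman gain
    x += g * innov                        # measurement update
    p *= (1 - g)
```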
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panebianco, S.; Dore, D.; Giomataris, I.
Time Projection Chambers have been widely used for many years for tracking and identification of charged particles in high energy physics. We present a new R and D project to investigate the feasibility of a Micromegas TPC for low energy heavy ion detection. Two physics cases are relevant for this project. The first is the study of the nuclear fission of actinides by measuring the fission fragment properties (mass, nuclear charge, kinetic energy), which will be performed at different installations and in particular at the NFS facility to be built in the framework of the SPIRAL2 project at GANIL. The second physics case is the study of heavy ion reactions, like (α,γ), (α,p), (α,n) and all the inverse reactions in the energy range between 1.5 and 3 AMeV using both stable and radioactive beams. These reactions play a key role in the p-process in nuclear astrophysics to explain the synthesis of heavy proton-rich nuclei. Within the project, a large effort is devoted to Monte Carlo simulations, and a detailed benchmark of different simulation codes on the energy loss and range in gas of heavy ions at low energy has been performed. A new approach for simulating the ion charge state evolution in GEANT4 is also presented. Finally, preliminary results of an experimental test campaign on a prototype are discussed.
NASA Astrophysics Data System (ADS)
Tomshaw, Stephen G.
Physics education research has shown that students bring alternate conceptions to the classroom which can be quite resistant to traditional instruction methods (Clement, 1982; Halloun & Hestenes, 1985; McDermott, 1991). Microcomputer-based laboratory (MBL) experiments that employ an active-engagement strategy have been shown to improve student conceptual understanding in high school and introductory university physics courses (Thornton & Sokoloff, 1998). These MBL experiments require a specialized computer interface, type-specific sensors (e.g., motion detectors, force probes, accelerometers), and specialized software in addition to the standard physics experimental apparatus. Tao and Gunstone (1997) have shown that computer simulations used in an active-engagement environment can also lead to conceptual change. This study investigated 69 secondary physics students' use of computer simulations of MBL activities in place of the hands-on MBL laboratory activities. The average normalized gain
Simulation and assessment of ion kinetic effects in a direct-drive capsule implosion experiment
Le, Ari Yitzchak; Kwan, Thomas J. T.; Schmitt, Mark J.; ...
2016-10-24
The first simulations employing a kinetic treatment of both fuel and shell ions to model inertial confinement fusion experiments are presented, including results showing the importance of kinetic physics processes in altering fusion burn. A pair of direct-drive capsule implosions performed at the OMEGA facility with two different gas fills of deuterium, tritium, and helium-3 is analyzed. During implosion shock convergence, highly non-Maxwellian ion velocity distributions and separations in the density and temperature amongst the ion species are observed. Finally, diffusion of fuel into the capsule shell is identified as a principal process that degrades fusion burn performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rider, William J.; Witkowski, Walter R.; Mousseau, Vincent Andrew
2016-04-13
The importance of credible, trustworthy numerical simulations is obvious, especially when using the results for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties, and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use. This is commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using various frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow in the areas of code and solution verification, validation, and uncertainty quantification, which are described in detail in the following sections. Here, we introduce the subject matter for general applications, but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification and validation procedure is to perform a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific application's intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner to perform such an assessment. Ideally, all stakeholders should be represented and contribute to perform an accurate credibility assessment. PIRTs and PCMMs are both described in brief detail below, and the resulting assessments for an example project are given.
Educational aspects of molecular simulation
NASA Astrophysics Data System (ADS)
Allen, Michael P.
This article addresses some aspects of teaching simulation methods to undergraduates and graduate students. Simulation is increasingly a cross-disciplinary activity, which means that the students who need to learn about simulation methods may have widely differing backgrounds. Also, they may have a wide range of views on what constitutes an interesting application of simulation methods. Almost always, a successful simulation course includes an element of practical, hands-on activity: a balance always needs to be struck between treating the simulation software as a 'black box', and becoming bogged down in programming issues. With notebook computers becoming widely available, students often wish to take away the programs to run themselves, and access to raw computer power is not the limiting factor that it once was; on the other hand, the software should be portable and, if possible, free. Examples will be drawn from the author's experience in three different contexts. (1) An annual simulation summer school for graduate students, run by the UK CCP5 organization, in which practical sessions are combined with an intensive programme of lectures describing the methodology. (2) A molecular modelling module, given as part of a doctoral training centre in the Life Sciences at Warwick, for students who might not have a first degree in the physical sciences. (3) An undergraduate module in Physics at Warwick, also taken by students from other disciplines, teaching high performance computing, visualization, and scripting in the context of a physical application such as Monte Carlo simulation.
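As an example of the kind of self-contained, hands-on exercise such a module might begin with (our illustration, not drawn from the article), here is a minimal Monte Carlo estimate of pi that a student can run and extend on a notebook computer:

```python
import numpy as np

# Monte Carlo estimation of pi from uniform random points in the unit
# square: the fraction falling inside the quarter circle approaches pi/4.
rng = np.random.default_rng(42)
for n in (10**3, 10**5, 10**6):
    xy = rng.random((n, 2))
    inside = np.count_nonzero((xy**2).sum(axis=1) < 1.0)
    print(n, 4.0 * inside / n)   # estimate improves as n grows
```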
Ha, Jennifer F; Morrison, Robert J; Green, Glenn E; Zopf, David A
2017-06-01
Autologous cartilage grafting during open airway reconstruction is a complex skill instrumental to the success of the operation. Most trainees lack adequate opportunities to develop proficiency in this skill. We hypothesized that 3-dimensional (3D) printing and computer-aided design can be used to create a high-fidelity simulator for developing skills carving costal cartilage grafts for airway reconstruction. The rapid manufacturing and low cost of the simulator allow deployment in locations lacking expert instructors or cadaveric dissection, such as medical missions and Third World countries. In this blinded, prospective observational study, resident trainees completed a physical simulator exercise using a 3D-printed costal cartilage grafting tool. Participant assessment was performed using a Likert scale questionnaire, and airway grafts were assessed by a blinded expert surgeon. Most participants found this to be a very relevant training tool and highly rated the level of realism of the simulation tool.
Using computer simulations to facilitate conceptual understanding of electromagnetic induction
NASA Astrophysics Data System (ADS)
Lee, Yu-Fen
This study investigated the use of computer simulations to facilitate conceptual understanding in physics. The use of computer simulations in the present study was grounded in a conceptual framework drawn from findings related to the use of computer simulations in physics education. To achieve the goal of effective utilization of computers for physics education, I first reviewed studies pertaining to computer simulations in physics education categorized by three different learning frameworks, along with studies comparing the effects of different simulation environments. My intent was to identify the learning context and factors for successful use of computer simulations in past studies and to learn from the studies which did not obtain a significant result. Based on the analysis of the reviewed literature, I proposed effective approaches to integrating computer simulations in physics education. These approaches are consistent with well-established education principles such as those suggested by How People Learn (Bransford, Brown, Cocking, Donovan, & Pellegrino, 2000). The research-based approaches to integrating computer simulations in physics education form a learning framework called Concept Learning with Computer Simulations (CLCS) in the current study. The second component of this study was to examine the CLCS learning framework empirically. The participants were recruited from a public high school in Beijing, China. All participating students were randomly assigned to two groups, the experimental (CLCS) group and the control (TRAD) group. Research-based computer simulations developed by the physics education research group at the University of Colorado at Boulder were used to tackle common conceptual difficulties in learning electromagnetic induction. While interacting with computer simulations, CLCS students were asked to answer reflective questions designed to stimulate qualitative reasoning and explanation. After receiving model reasoning online, students were asked to submit their revised answers electronically. Students in the TRAD group were not granted access to the CLCS material and followed their normal classroom routine. At the end of the study, both the CLCS and TRAD students took a post-test. Questions on the post-test were divided into "what" questions, "how" questions, and an open response question. Analysis of students' post-test performance showed mixed results. While the TRAD students scored higher on the "what" questions, the CLCS students scored higher on the "how" questions and the open response question. This result suggested that more TRAD students knew what kinds of conditions may or may not cause electromagnetic induction without understanding how electromagnetic induction works. Analysis of the CLCS students' learning also suggested that frequent disruption and technical trouble might pose threats to the effectiveness of the CLCS learning framework. Despite the mixed post-test results, the study revealed some limitations of the CLCS learning framework in promoting conceptual understanding in physics. Improvement can be made by providing students with the background knowledge necessary to understand model reasoning and by incorporating the CLCS learning framework with other learning frameworks to promote integration of various physics concepts. In addition, the reflective questions in the CLCS learning framework may be refined to better address students' difficulties. Limitations of the study, as well as suggestions for future research, are also presented.
Large Eddy Simulation of High Reynolds Number Complex Flows
NASA Astrophysics Data System (ADS)
Verma, Aman
Marine configurations are subject to a variety of complex hydrodynamic phenomena affecting the overall performance of the vessel. The turbulent flow affects the hydrodynamic drag, propulsor performance and structural integrity, control-surface effectiveness, and acoustic signature of the marine vessel. Due to advances in massively parallel computers and numerical techniques, an unsteady numerical simulation methodology such as Large Eddy Simulation (LES) is well suited to study such complex turbulent flows, whose Reynolds numbers (Re) are typically on the order of 10^6. LES also promises increased accuracy over RANS-based methods in predicting unsteady phenomena such as cavitation and noise production. This dissertation develops the capability to enable LES of high-Re flows in complex geometries (e.g. a marine vessel) on unstructured grids and provides physical insight into the turbulent flow. LES is performed to investigate the geometry-induced separated flow past a marine propeller attached to a hull, in an off-design condition called crashback. LES shows good quantitative agreement with experiments and provides a physical mechanism to explain the increase in side-force on the propeller blades below an advance ratio of J=-0.7. Fundamental developments in the dynamic subgrid-scale model for LES are pursued to improve the LES predictions, especially for complex flows on unstructured grids. A dynamic procedure is proposed to estimate a Lagrangian time scale based on a surrogate correlation without any adjustable parameter. The proposed model is applied to turbulent channel, cylinder and marine propeller flows and predicts improved results over other model variants due to a physically consistent Lagrangian time scale. A wall model is proposed for application to LES of high Reynolds number wall-bounded flows. The wall model is formulated as the minimization of a generalized constraint in the dynamic model for LES and applied to LES of turbulent channel flow at various Reynolds numbers up to Reτ=10000 and coarse grid resolutions, obtaining significant improvement.
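For readers unfamiliar with the modeling layer being calibrated, the sketch below evaluates the static Smagorinsky eddy viscosity on a toy 2D periodic field; the dynamic Lagrangian procedure in the dissertation generalizes this by computing the coefficient on the fly. The velocity field and the value of Cs are illustrative assumptions, not taken from the work.

```python
import numpy as np

# Static Smagorinsky subgrid viscosity, nu_t = (Cs*Delta)^2 |S|, on a toy
# 2D periodic (Taylor-Green-like) field. Illustrative only.
N, L, Cs = 64, 2*np.pi, 0.17
dx = L / N
x = np.arange(N) * dx
u = np.sin(x)[:, None] * np.cos(x)[None, :]     # axis 0 = x, axis 1 = y
v = -np.cos(x)[:, None] * np.sin(x)[None, :]    # divergence-free toy field

def ddx(f, axis):
    # Central difference with periodic wrap-around.
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * dx)

S11, S22 = ddx(u, 0), ddx(v, 1)
S12 = 0.5 * (ddx(u, 1) + ddx(v, 0))
Smag = np.sqrt(2 * (S11**2 + S22**2 + 2 * S12**2))   # |S|
nu_t = (Cs * dx)**2 * Smag                           # eddy viscosity field
print(nu_t.mean())
```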
The high performance parallel algorithm for Unified Gas-Kinetic Scheme
NASA Astrophysics Data System (ADS)
Li, Shiyi; Li, Qibing; Fu, Song; Xu, Jinxiu
2016-11-01
A high performance parallel algorithm for UGKS is developed to simulate three-dimensional internal and external flows on arbitrary grid systems. The physical domain and velocity domain are divided into different blocks and distributed according to a two-dimensional Cartesian topology, with intra-communicators in the physical domain for data exchange and intra-communicators in the velocity domain for sum reduction of moment integrals. Numerical results for three-dimensional cavity flow and flow past a sphere agree well with the results from existing studies and validate the applicability of the algorithm. The scalability of the algorithm is tested on both small (1-16) and large (729-5832) processor counts. The measured speed-up ratio is near linear and thus the efficiency is around 1, which reveals the good scalability of the present algorithm.
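A minimal sketch of the communicator layout described above, written with mpi4py under our own naming; it is not the authors' code. Rows of a 2D Cartesian topology stand in for the physical-domain communicators, and columns for the velocity-domain communicators used in the moment reduction.

```python
# Run with e.g.: mpiexec -n 4 python ugks_topology_sketch.py
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
dims = MPI.Compute_dims(world.Get_size(), 2)
cart = world.Create_cart(dims, periods=[False, False], reorder=True)

phys_comm = cart.Sub([True, False])   # same velocity block, varying space
vel_comm = cart.Sub([False, True])    # same spatial block, varying velocity

# Each rank holds a partial moment integral over its velocity sub-domain;
# summing over vel_comm assembles the full moment for its spatial block.
partial = np.array([float(cart.Get_rank() + 1)])
moment = np.empty(1)
vel_comm.Allreduce(partial, moment, op=MPI.SUM)
print(cart.Get_rank(), moment[0])
```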
NASA Astrophysics Data System (ADS)
Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.
2012-12-01
This study compares two grid refinement approaches for high-resolution regional climate modeling: a global variable-resolution model and nesting. The global variable-resolution model, the Model for Prediction Across Scales (MPAS), and the limited area model, the Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and a variable resolution domain with a high-resolution region at 0.25 degree configured inside a coarse resolution global domain at 1 degree resolution. Similarly, WRF has been configured to run on a coarse (1 degree) and high (0.25 degree) resolution tropical channel domain as well as a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse resolution (1 degree) tropical channel. The variable resolution or nested simulations are compared against the high-resolution simulations that serve as virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by zonal anomalous Walker-like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and the sensitivity of model physics to grid resolution. This study highlights the need for "scale aware" parameterizations in variable resolution and nested regional models.
High-resolution coupled physics solvers for analysing fine-scale nuclear reactor design problems.
Mahadevan, Vijay S; Merzari, Elia; Tautges, Timothy; Jain, Rajeev; Obabko, Aleksandr; Smith, Michael; Fischer, Paul
2014-08-06
An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, to tightly couple neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the SHARP framework.
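The generic pattern behind such a coupling driver is a fixed-point (Picard) co-iteration between single-physics solvers. The toy sketch below illustrates that pattern only; the two mock solvers and their feedback constants are invented and unrelated to SHARP's actual neutronics and thermal-hydraulics modules.

```python
# Illustrative Picard (fixed-point) co-iteration between two mock
# single-physics solvers, the generic pattern a coupling driver
# orchestrates. Feedback coefficients are invented for the sketch.
def neutronics(T):
    # Mock power response: power rises as fuel temperature drops.
    return 1.0 / (1.0 + 0.001 * (T - 600.0))

def thermal_hydraulics(P):
    # Mock temperature response: temperature follows deposited power.
    return 500.0 + 150.0 * P

P, T = 1.0, 600.0
for it in range(50):
    P_new = neutronics(T)
    T_new = thermal_hydraulics(P_new)
    if abs(T_new - T) < 1e-8:        # converged coupled state
        break
    P, T = P_new, T_new
print(it, P_new, T_new)
```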
A near one-dimensional indirectly driven implosion at convergence ratio 30
NASA Astrophysics Data System (ADS)
MacLaren, S. A.; Masse, L. P.; Czajka, C. E.; Khan, S. F.; Kyrala, G. A.; Ma, T.; Ralph, J. E.; Salmonson, J. D.; Bachmann, B.; Benedetti, L. R.; Bhandarkar, S. D.; Bradley, P. A.; Hatarik, R.; Herrmann, H. W.; Mariscal, D. A.; Millot, M.; Patel, P. K.; Pino, J. E.; Ratledge, M.; Rice, N. G.; Tipton, R. E.; Tommasini, R.; Yeamans, C. B.
2018-05-01
Inertial confinement fusion cryogenic-layered implosions at the National Ignition Facility, while successfully demonstrating self-heating due to alpha-particle deposition, have fallen short of the performance predicted by one-dimensional (1D) multi-physics implosion simulations. The current understanding, from experimental evidence as well as simulations, suggests that engineering features such as the capsule tent and fill tube, as well as time-dependent low-mode asymmetry, are to blame for the lack of agreement. A short series of experiments designed specifically to avoid these degradations to the implosion are described here in order to understand if, once they are removed, a high-convergence cryogenic-layered deuterium-tritium implosion can achieve the 1D simulated performance. The result is a cryogenic layered implosion, round at stagnation, that matches closely the performance predicted by 1D simulations. This agreement can then be exploited to examine the sensitivity of approximations in the model to the constraints imposed by the data.
Physics and control of wall turbulence for drag reduction.
Kim, John
2011-04-13
Turbulence physics responsible for high skin-friction drag in turbulent boundary layers is first reviewed. A self-sustaining process of near-wall turbulence structures is then discussed from the perspective of controlling this process for the purpose of skin-friction drag reduction. After recognizing that key parts of this self-sustaining process are linear, a linear systems approach to boundary-layer control is discussed. It is shown that singular-value decomposition analysis of the linear system allows us to examine different approaches to boundary-layer control without carrying out the expensive nonlinear simulations. Results from the linear analysis are consistent with those observed in full nonlinear simulations, thus demonstrating the validity of the linear analysis. Finally, fundamental performance limit expected of optimal control input is discussed.
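A toy version of the singular-value analysis mentioned above: for any linear input-output map, the leading singular triplet gives the most amplified input direction without nonlinear simulation. The random matrix below is a stand-in of our own, not the linearized wall-turbulence operator.

```python
import numpy as np

# SVD of a linear input-output map y = A u. The leading singular value
# bounds the amplification; the corresponding right singular vector is
# the most amplified input direction. A is a random placeholder here.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50)) / np.sqrt(50)   # placeholder operator
U, s, Vt = np.linalg.svd(A)
print("max amplification:", s[0])
print("optimal input direction (first components):", Vt[0, :5])
```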
High-performance finite-difference time-domain simulations of C-Mod and ITER RF antennas
NASA Astrophysics Data System (ADS)
Jenkins, Thomas G.; Smithe, David N.
2015-12-01
Finite-difference time-domain methods have, in recent years, developed powerful capabilities for modeling realistic ICRF behavior in fusion plasmas [1, 2, 3, 4]. When coupled with the power of modern high-performance computing platforms, such techniques allow the behavior of antenna near and far fields, and the flow of RF power, to be studied in realistic experimental scenarios at previously inaccessible levels of resolution. In this talk, we present results and 3D animations from high-performance FDTD simulations on the Titan Cray XK7 supercomputer, modeling both Alcator C-Mod's field-aligned ICRF antenna and the ITER antenna module. Much of this work focuses on scans over edge density, and tailored edge density profiles, to study dispersion and the physics of slow wave excitation in the immediate vicinity of the antenna hardware and SOL. An understanding of the role of the lower-hybrid resonance in low-density scenarios is emerging, and possible implications of this for the NSTX launcher and power balance are also discussed. In addition, we discuss ongoing work centered on using these simulations to estimate sputtering and impurity production, as driven by the self-consistent sheath potentials at antenna surfaces.
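For orientation, the core of the method named in the title is the Yee leapfrog update; a minimal one-dimensional vacuum version is sketched below. Grid size, Courant number, and the Gaussian source are illustrative, and none of the plasma or antenna physics of the actual simulations is included.

```python
import numpy as np

# One-dimensional vacuum Yee/FDTD scheme in normalized units:
# staggered E and H fields advanced in a leapfrog fashion.
N, steps = 400, 600
Ez = np.zeros(N)
Hy = np.zeros(N - 1)
c = 0.5   # Courant number, c*dt/dx

for t in range(steps):
    Hy += c * (Ez[1:] - Ez[:-1])                   # update H from curl E
    Ez[1:-1] += c * (Hy[1:] - Hy[:-1])             # update E from curl H
    Ez[N // 2] += np.exp(-((t - 30) / 10.0)**2)    # soft Gaussian source
print(np.abs(Ez).max())
```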
High Performance Computing Software Applications for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.
The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.
Simulation of Plasma Jet Merger and Liner Formation within the PLX-α Project
NASA Astrophysics Data System (ADS)
Samulyak, Roman; Chen, Hsin-Chiang; Shih, Wen; Hsu, Scott
2015-11-01
Detailed numerical studies of the propagation and merger of high Mach number argon plasma jets and the formation of plasma liners have been performed using the newly developed method of Lagrangian particles (LP). The LP method significantly improves the accuracy and mathematical rigor of common particle-based numerical methods such as smoothed particle hydrodynamics while preserving their main advantages compared to grid-based methods. A brief overview of the LP method will be presented. The Lagrangian particle code implements the main relevant physics models, such as an equation of state for argon undergoing atomic physics transformations, radiation losses in the thin optical limit, and heat conduction. Simulations of the merger of two plasma jets are compared with experimental data from past PLX experiments. Simulations quantify the effect of oblique shock waves, ionization, and radiation processes on the jet merger process. Results of preliminary simulations of future PLX-α experiments involving the ~π/2-solid-angle plasma-liner configuration with 9 guns will also be presented. Partially supported by ARPA-E's ALPHA program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Wei; Zhang, Bing; Li, Hui
We perform 3D relativistic ideal magnetohydrodynamics (MHD) simulations to study the collisions between high-σ (Poynting-flux-dominated (PFD)) blobs which contain both poloidal and toroidal magnetic field components. This is meant to mimic the interactions inside a highly variable PFD jet. We discover significant electromagnetic field (EMF) energy dissipation at an Alfvénic rate, with an efficiency around 35%. Detailed analyses show that this dissipation is mostly facilitated by collision-induced magnetic reconnection. Additional resolution and parameter studies show a robust result that the relative EMF energy dissipation efficiency is nearly independent of the numerical resolution or most physical parameters in the relevant parameter range. The reconnection outflows in our simulation can potentially form the multi-orientation relativistic mini jets needed by several analytical models. We also find a linear relationship between the σ values before and after the major EMF energy dissipation process. Our results give support to the proposed astrophysical models that invoke significant magnetic energy dissipation in PFD jets, such as the internal collision-induced magnetic reconnection and turbulence model for gamma-ray bursts, and the reconnection-triggered mini jets model for active galactic nuclei. The simulation movies are available at http://www.physics.unlv.edu/∼deng/simulation1.html.
Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dali; Yuan, Fengming; Hernandez, Benjamin
Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.
Co-located haptic and 3D graphic interface for medical simulations.
Berkelman, Peter; Miyasaka, Muneaki; Bozlee, Sebastian
2013-01-01
We describe a system which provides high-fidelity haptic feedback in the same physical location as a 3D graphical display, in order to enable realistic physical interaction with virtual anatomical tissue during modelled procedures such as needle driving, palpation, and other interventions performed using handheld instruments. The haptic feedback is produced by the interaction between an array of coils located behind a thin flat LCD screen, and permanent magnets embedded in the instrument held by the user. The coil and magnet configuration permits arbitrary forces and torques to be generated on the instrument in real time according to the dynamics of the simulated tissue by activating the coils in combination. A rigid-body motion tracker provides position and orientation feedback of the handheld instrument to the computer simulation, and the 3D display is produced using LCD shutter glasses and a head-tracking system for the user.
gpuSPHASE-A shared memory caching implementation for 2D SPH using CUDA
NASA Astrophysics Data System (ADS)
Winkler, Daniel; Meister, Michael; Rezavand, Massoud; Rauch, Wolfgang
2017-04-01
Smoothed particle hydrodynamics (SPH) is a meshless Lagrangian method that has been successfully applied to computational fluid dynamics (CFD), solid mechanics and many other multi-physics problems. Using the method to solve transport phenomena in process engineering requires the simulation of several days to weeks of physical time. Given the high computational demand of CFD, such simulations in 3D would need computation times of years, so a reduction to a 2D domain is inevitable. In this paper gpuSPHASE, a new open-source 2D SPH solver implementation for graphics devices, is developed. It is optimized for simulations that must be executed with thousands of frames per second to be computed in reasonable time. A novel caching algorithm for Compute Unified Device Architecture (CUDA) shared memory is proposed and implemented. The software is validated and the performance is evaluated for the well-established dam-break test case.
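The density summation at the heart of any SPH solver, with the standard 2D cubic spline kernel, looks roughly as follows in plain NumPy. This is our illustration of the method, not code from gpuSPHASE, and the brute-force O(N^2) distance computation is exactly what caching and neighbor-search machinery like the paper's exists to avoid.

```python
import numpy as np

def cubic_spline_2d(r, h):
    # Standard 2D cubic spline SPH kernel with support radius 2h.
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h * h)
    w = np.where(q <= 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
        np.where(q <= 2.0, 0.25*(2.0 - q)**3, 0.0))
    return sigma * w

rng = np.random.default_rng(3)
pos = rng.random((500, 2))            # particle positions in unit square
m, h = 1.0 / 500, 0.05                # particle mass, smoothing length
r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
rho = (m * cubic_spline_2d(r, h)).sum(axis=1)   # SPH density summation
print(rho.mean())
```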
Workshop on data acquisition and trigger system simulations for high energy physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1992-12-31
This report discusses the following topics: DAQSIM: A Data Acquisition System Simulation Tool; Front End and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview; DAGAR -- A Synthesis System; Proposed Silicon Compiler for Physics Applications; Timed-LOTOS in a PROLOG Environment: an Algebraic Language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; and A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies.
Fowler, P; Duffield, R; Vaile, J
2015-06-01
The present study examined the effects of simulated air travel on physical performance. In a randomized crossover design, 10 physically active males completed a simulated 5-h domestic flight (DOM), 24-h simulated international travel (INT), and a control trial (CON). The mild hypoxia, seating arrangements, and activity levels typically encountered during air travel were simulated in a normobaric, hypoxic altitude room. Physical performance was assessed in the afternoon of the day before (D - 1 PM) and in the morning (D + 1 AM) and afternoon (D + 1 PM) of the day following each trial. Mood states and physiological and perceptual responses to exercise were also examined at these time points, while sleep quantity and quality were monitored throughout each condition. Sleep quantity and quality were significantly reduced during INT compared with CON and DOM (P < 0.01). Yo-Yo Intermittent Recovery level 1 test performance was significantly reduced at D + 1 PM following INT compared with CON and DOM (P < 0.01), in which performance remained unchanged (P > 0.05). Compared with baseline, physiological and perceptual responses to exercise and mood states were exacerbated following the INT trial (P < 0.05). Attenuated intermittent-sprint performance following simulated international air travel may be due to sleep disruption during travel and the subsequently exacerbated physiological and perceptual markers of fatigue.
The QuakeSim Project: Numerical Simulations for Active Tectonic Processes
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay; Lyzenga, Greg; Granat, Robert; Fox, Geoffrey; Pierce, Marlon; Rundle, John; McLeod, Dennis; Grant, Lisa; Tullis, Terry
2004-01-01
In order to develop a solid earth science framework for understanding and studying active tectonic and earthquake processes, this task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling, data manipulation, and pattern recognition technologies. We develop clearly defined, accessible data formats and code protocols as inputs to the simulations. These are adapted to high-performance computers because the solid earth system is extremely complex and nonlinear, resulting in computationally intensive problems with millions of unknowns. With these tools it will be possible to construct the more complex models and simulations necessary to develop hazard assessment systems critical for reducing future losses from major earthquakes.
Physical and digital simulations for IVA robotics
NASA Technical Reports Server (NTRS)
Hinman, Elaine; Workman, Gary L.
1992-01-01
Space-based materials processing experiments can be enhanced through the use of IVA robotic systems. A program to determine requirements for the implementation of robotic systems in a microgravity environment and to develop some preliminary concepts for acceleration control of small, lightweight arms has been initiated with the development of physical and digital simulation capabilities. The physical simulation facilities incorporate a robotic workcell containing a Zymark Zymate II robot instrumented for acceleration measurements, which is able to perform materials transfer functions while flying on NASA's KC-135 aircraft during parabolic maneuvers to simulate reduced gravity. Measurements of accelerations occurring during the reduced gravity periods will be used to characterize the impacts of robotic accelerations in a microgravity environment in space. Digital simulations are being performed with TREETOPS, a NASA-developed software package which is used for the dynamic analysis of systems with a tree topology. Extensive use of both simulation tools will enable the design of robotic systems with enhanced acceleration control for use in the space manufacturing environment.
Validation of a small-animal PET simulation using GAMOS: a GEANT4-based framework
NASA Astrophysics Data System (ADS)
Cañadas, M.; Arce, P.; Rato Mendes, P.
2011-01-01
Monte Carlo-based modelling is a powerful tool to help in the design and optimization of positron emission tomography (PET) systems. The performance of these systems depends on several parameters, such as detector physical characteristics, shielding or electronics, whose effects can be studied on the basis of realistic simulated data. The aim of this paper is to validate a comprehensive study of the Raytest ClearPET small-animal PET scanner using a new Monte Carlo simulation platform which has been developed at CIEMAT (Madrid, Spain), called GAMOS (GEANT4-based Architecture for Medicine-Oriented Simulations). This toolkit, based on the GEANT4 code, was originally designed to cover multiple applications in the field of medical physics from radiotherapy to nuclear medicine, but has since been applied by some of its users in other fields of physics, such as neutron shielding, space physics, high energy physics, etc. Our simulation model includes the relevant characteristics of the ClearPET system, namely, the double layer of scintillator crystals in phoswich configuration, the rotating gantry, the presence of intrinsic radioactivity in the crystals or the storage of single events for an off-line coincidence sorting. Simulated results are contrasted with experimental acquisitions including studies of spatial resolution, sensitivity, scatter fraction and count rates in accordance with the National Electrical Manufacturers Association (NEMA) NU 4-2008 protocol. Spatial resolution results showed a discrepancy between simulated and measured values equal to 8.4% (with a maximum FWHM difference over all measurement directions of 0.5 mm). Sensitivity results differ less than 1% for a 250-750 keV energy window. Simulated and measured count rates agree well within a wide range of activities, including under electronic saturation of the system (the measured peak of total coincidences, for the mouse-sized phantom, was 250.8 kcps reached at 0.95 MBq mL⁻¹ and the simulated peak was 247.1 kcps at 0.87 MBq mL⁻¹). Agreement better than 3% was obtained in the scatter fraction comparison study. We also measured and simulated a mini-Derenzo phantom obtaining images with similar quality using iterative reconstruction methods. We concluded that the overall performance of the simulation showed good agreement with the measured results and validates the GAMOS package for PET applications. Furthermore, its ease of use and flexibility recommends it as an excellent tool to optimize design features or image reconstruction techniques.
NASA Astrophysics Data System (ADS)
Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang
2017-08-01
Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.
NASA Technical Reports Server (NTRS)
Shyam, Vikram; Ameri, Ali; Luk, Daniel F.; Chen, Jen-Ping
2010-01-01
Unsteady three-dimensional RANS simulations have been performed on a highly loaded transonic turbine stage and results are compared to steady calculations as well as experiment. A low Reynolds number k- turbulence model is employed to provide closure for the RANS system. A phase-lag boundary condition is used in the periodic direction. This allows the unsteady simulation to be performed by using only one blade from each of the two rows. The objective of this paper is to study the effect of unsteadiness on rotor heat transfer and to glean any insight into unsteady flow physics. The role of the stator wake passing on the pressure distribution at the leading edge is also studied. The simulated heat transfer and pressure results agreed favorably with experiment. The time-averaged heat transfer predicted by the unsteady simulation is higher than the heat transfer predicted by the steady simulation everywhere except at the leading edge. The shock structure formed due to stator-rotor interaction was analyzed. Heat transfer and pressure at the hub and casing were also studied. Thermal segregation was observed that leads to the heat transfer patterns predicted by steady and unsteady simulations to be different.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, P.; Cary, J.
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization for software development and applications accounts for the natural domain areas (beam dynamics, electromagnetics, and advanced acceleration), and all areas depend on the enabling technologies activities, such as solvers and component technology, to deliver the desired performance and integrated simulation environment. The ComPASS applications focus on computationally challenging problems important for design or performance optimization at all major HEP, NP, and BES accelerator facilities. With the cost and complexity of particle accelerators rising, the use of computation to optimize their designs and find improved operating regimes becomes essential, potentially leading to significant cost savings with modest investment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip
This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
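The per-time-step co-iteration feature mentioned above follows a simple pattern: federates exchange interface values until they agree before time advances. The sketch below shows that pattern with two invented mock federates; the models and constants have no relation to the framework's actual components.

```python
# Two mock federates (a feeder voltage model and an end-use load model)
# co-iterate to a consistent state within each time step before the
# clock advances. Both models are invented for illustration.
def feeder_voltage(p_load):
    # Mock grid federate: voltage sags with load.
    return 1.0 - 0.05 * p_load

def end_use_load(v):
    # Mock end-use federate: roughly constant-impedance load.
    return 0.8 * v * v

for step in range(3):                 # outer time loop
    v, p = 1.0, 0.8
    for it in range(100):             # co-iteration within the step
        p_new = end_use_load(v)
        v_new = feeder_voltage(p_new)
        if abs(v_new - v) < 1e-10:    # interface values agree
            break
        v, p = v_new, p_new
    print(f"t={step}: converged in {it} iterations, V={v:.6f}, P={p:.6f}")
```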
Computational Design of Functional Ca-S-H and Oxide-Doped Alloy Systems
NASA Astrophysics Data System (ADS)
Yang, Shizhong; Chilla, Lokeshwar; Yang, Yan; Li, Kuo; Wicker, Scott; Zhao, Guang-Lin; Khosravi, Ebrahim; Bai, Shuju; Zhang, Boliang; Guo, Shengmin
Computer-aided functional materials design accelerates the discovery of novel materials. This presentation covers our recent research advances in property prediction for the Ca-S-H system and in property simulation and experimental validation of oxide-doped high entropy alloys. Several recently developed computational materials design methods were applied to predict the physical and chemical properties of the two systems. A comparison of simulation results to the corresponding experimental data is presented. This research is partially supported by NSF CIMM project (OIA-15410795 and the Louisiana BoR), NSF HBCU Supplement climate change and ecosystem sustainability subproject 3, and LONI high performance computing time allocation loni mat bio7.
Shiyuan Zhong; Xiuping Li; Xindi Bian; Warren E. Heilman; L. Ruby Leung; William I. Gustafson Jr.
2012-01-01
The performance of regional climate simulations is evaluated for the Great Lakes region. Three 10-year (1990-1999) current-climate simulations are performed using the MM5 regional climate model (RCM) with 36-km horizontal resolution. The simulations employed identical configuration and physical parameterizations, but different lateral boundary conditions and sea-...
Physically-based modelling of high magnitude torrent events with uncertainty quantification
NASA Astrophysics Data System (ADS)
Wing-Yuen Chow, Candace; Ramirez, Jorge; Zimmermann, Markus; Keiler, Margreth
2017-04-01
High magnitude torrent events are associated with the rapid propagation of vast quantities of water and available sediment downslope, where human settlements may be established. Assessing the vulnerability of built structures to these events is a part of consequence analysis, where hazard intensity is related to the degree of loss sustained. The specific contribution of the presented work is a procedure to simulate these damaging events by applying physically-based modelling and to include uncertainty information about the simulated results. This is a first step in the development of vulnerability curves based on several intensity parameters (i.e., maximum velocity, sediment deposition depth, and impact pressure). The investigation process begins with the collection, organization and interpretation of detailed post-event documentation and photograph-based observation data of affected structures in three sites that exemplify the impact of highly destructive mudflows and flood occurrences on settlements in Switzerland. Hazard intensity proxies are then simulated with the physically-based FLO-2D model (O'Brien et al., 1993). Prior to modelling, global sensitivity analysis is conducted to support a better understanding of model behaviour, parameterization and the quantification of uncertainties (Song et al., 2015). The inclusion of information describing the degree of confidence in the simulated results supports the credibility of vulnerability curves developed with the modelled data. First, key parameters are identified and selected based on a literature review. Truncated a priori ranges of parameter values were then defined by expert solicitation. Local sensitivity analysis is performed based on manual calibration to provide an understanding of the parameters relevant to the case studies of interest. Finally, automated parameter estimation is performed to comprehensively search for optimal parameter combinations and associated values, which are evaluated using the observed data collected in the first stage of the investigation. O'Brien, J.S., Julien, P.Y., Fullerton, W.T., 1993. Two-dimensional water flood and mudflow simulation. Journal of Hydraulic Engineering 119(2): 244-261. Song, X., Zhang, J., Zhan, C., Xuan, Y., Ye, M., Xu, C., 2015. Global sensitivity analysis in hydrological modeling: Review of concepts, methods, theoretical frameworks. Journal of Hydrology 523: 739-757.
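As a schematic of the screening step described here (not the authors' workflow), one can sample the truncated parameter ranges, run a model, and rank inputs by their influence on an output. Below, a toy surrogate stands in for FLO-2D, and the parameter names and ranges are hypothetical.

```python
import numpy as np

# Crude Monte Carlo sensitivity screening: sample hypothetical parameter
# ranges, evaluate a toy surrogate for peak flow depth, and rank inputs
# by their correlation with the output. Not FLO-2D.
rng = np.random.default_rng(7)
n = 2000
roughness = rng.uniform(0.02, 0.2, n)        # hypothetical ranges
yield_stress = rng.uniform(100.0, 2000.0, n)
viscosity = rng.uniform(10.0, 500.0, n)

depth = (2.0 + 5.0 * roughness + 1e-4 * yield_stress
         + 1e-3 * viscosity + rng.normal(0, 0.05, n))   # toy surrogate

for name, x in [("roughness", roughness),
                ("yield_stress", yield_stress),
                ("viscosity", viscosity)]:
    r = np.corrcoef(x, depth)[0, 1]
    print(f"{name:>12}: correlation with depth = {r:+.2f}")
```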
Dynamical simulation priors for human motion tracking.
Vondrak, Marek; Sigal, Leonid; Jenkins, Odest Chadwicke
2013-01-01
We propose a simulation-based dynamical motion prior for tracking human motion from video in presence of physical ground-person interactions. Most tracking approaches to date have focused on efficient inference algorithms and/or learning of prior kinematic motion models; however, few can explicitly account for the physical plausibility of recovered motion. Here, we aim to recover physically plausible motion of a single articulated human subject. Toward this end, we propose a full-body 3D physical simulation-based prior that explicitly incorporates a model of human dynamics into the Bayesian filtering framework. We consider the motion of the subject to be generated by a feedback “control loop” in which Newtonian physics approximates the rigid-body motion dynamics of the human and the environment through the application and integration of interaction forces, motor forces, and gravity. Interaction forces prevent physically impossible hypotheses, enable more appropriate reactions to the environment (e.g., ground contacts), and are produced from detected human-environment collisions. Motor forces actuate the body, ensure that proposed pose transitions are physically feasible, and are generated using a motion controller. For efficient inference in the resulting high-dimensional state space, we utilize an exemplar-based control strategy that reduces the effective search space of motor forces. As a result, we are able to recover physically plausible motion of human subjects from monocular and multiview video. We show, both quantitatively and qualitatively, that our approach performs favorably with respect to Bayesian filtering methods with standard motion priors.
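A tiny analogue of this idea, under our own simplifications, is a particle filter whose proposal is a physics simulation; below, 1D ballistic dynamics play the role of the full-body rigid-body simulator, and all noise constants are invented.

```python
import numpy as np

# Particle filter with a physics-simulation proposal: particles are
# propagated through 1D ballistic dynamics and reweighted by noisy
# position observations (SIR resampling). Constants are illustrative.
rng = np.random.default_rng(5)
dt, g, n_p, n_t = 0.05, -9.8, 500, 40
true = np.array([10.0, 0.0])                      # height, velocity
parts = np.column_stack([rng.normal(10, 1, n_p), rng.normal(0, 1, n_p)])

for t in range(n_t):
    true = true + dt * np.array([true[1], g])     # true motion
    z = true[0] + rng.normal(0, 0.3)              # noisy height observation
    # "Simulate" each particle forward under the physics prior.
    parts[:, 0] += dt * parts[:, 1] + rng.normal(0, 0.02, n_p)
    parts[:, 1] += dt * g + rng.normal(0, 0.1, n_p)
    w = np.exp(-0.5 * ((parts[:, 0] - z) / 0.3)**2)   # likelihood weights
    w /= w.sum()
    parts = parts[rng.choice(n_p, n_p, p=w)]      # resample (SIR)
print("true height:", true[0], "estimate:", parts[:, 0].mean())
```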
An objective measure of physical function of elderly outpatients. The Physical Performance Test.
Reuben, D B; Siu, A L
1990-10-01
Direct observation of physical function has the advantage of providing an objective, quantifiable measure of functional capabilities. We have developed the Physical Performance Test (PPT), which assesses multiple domains of physical function using observed performance of tasks that simulate activities of daily living of various degrees of difficulty. Two versions are presented: a nine-item scale that includes writing a sentence, simulated eating, turning 360 degrees, putting on and removing a jacket, lifting a book and putting it on a shelf, picking up a penny from the floor, a 50-foot walk test, and climbing stairs (scored as two items); and a seven-item scale that does not include stairs. The PPT can be completed in less than 10 minutes and requires only a few simple props. We then tested the validity of the PPT using 183 subjects (mean age, 79 years) in six settings including four clinical practices (one of Parkinson's disease patients), a board-and-care home, and a senior citizens' apartment. The PPT was reliable (Cronbach's alpha = 0.87 and 0.79, interrater reliability = 0.99 and 0.93 for the nine-item and seven-item tests, respectively) and demonstrated concurrent validity with self-reported measures of physical function. Scores on the PPT for both scales were highly correlated (.50 to .80) with modified Rosow-Breslau, Instrumental and Basic Activities of Daily Living scales, and Tinetti gait score. Scores on the PPT were more moderately correlated with self-reported health status, cognitive status, and mental health (.24 to .47), and negatively with age (-.24 and -.18). Thus, the PPT also demonstrated construct validity. The PPT is a promising objective measurement of physical function, but its clinical and research value for screening, monitoring, and prediction will have to be determined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K
2015-01-01
Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similar to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space will be discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
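The sample-then-interpolate construction can be sketched compactly. The toy below substitutes random sampling and radial-basis-function interpolation for the paper's sparse-grid and uncertainty-quantification machinery, and `single_cycle_sim` is a made-up response surface standing in for a CONVERGE single-cycle run:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def single_cycle_sim(params):
    """Stand-in for one expensive CFD cycle: maps (EGR fraction,
    spark timing in deg) to an integrated heat release metric."""
    egr, spark = params
    return np.exp(-5.0 * egr) * np.cos(0.1 * spark)

rng = np.random.default_rng(0)
# Sample the input space (the paper instead uses a sparse grid chosen to
# minimize the number of samples for a target accuracy).
samples = rng.uniform([0.0, -20.0], [0.4, 20.0], size=(200, 2))
responses = np.array([single_cycle_sim(p) for p in samples])

# The low-dimensional metamodel: cheap interpolation within the sampled space.
metamodel = RBFInterpolator(samples, responses)
print(metamodel([[0.25, 5.0]]))
```

Long synthetic cycle series can then be generated by iterating the metamodel, feeding each cycle's outputs into the next cycle's inputs, at a negligible fraction of the CFD cost.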
High performance ultrasonic field simulation on complex geometries
NASA Astrophysics Data System (ADS)
Chouh, H.; Rougeron, G.; Chatillon, S.; Iehl, J. C.; Farrugia, J. P.; Ostromoukhov, V.
2016-02-01
Ultrasonic field simulation is a key ingredient for the design of new testing methods as well as a crucial step for NDT inspection simulation. As presented in a previous paper [1], CEA-LIST has worked on the acceleration of these simulations focusing on simple geometries (planar interfaces, isotropic materials). In this context, significant accelerations were achieved on multicore processors and GPUs (Graphics Processing Units), bringing the execution time of realistic computations into the 0.1 s range. In this paper, we present recent works that aim at similar performances on a wider range of configurations. We adapted the physical model used by the CIVA platform to design and implement a new algorithm providing a fast ultrasonic field simulation that yields nearly interactive results for complex cases. The improvements over the CIVA pencil-tracing method include adaptive strategies for pencil subdivisions to achieve a good refinement of the sensor geometry while keeping a reasonable number of ray-tracing operations. Also, interpolation of the times of flight was used to avoid time-consuming computations in the impulse response reconstruction stage. To achieve the best performance, our algorithm runs on multi-core superscalar CPUs and uses high performance specialized libraries such as Intel Embree for ray-tracing, Intel MKL for signal processing and Intel TBB for parallelization. We validated the simulation results by comparing them to the ones produced by CIVA on identical test configurations including mono-element and multiple-element transducers, homogeneous, meshed 3D CAD specimens, isotropic and anisotropic materials and wave paths that can involve several interactions with interfaces. We show performance results on complete simulations that achieve computation times in the 1 s range.
Additions and improvements to the high energy density physics capabilities in the FLASH code
NASA Astrophysics Data System (ADS)
Lamb, D. Q.; Flocke, N.; Graziani, C.; Tzeferacos, P.; Weide, K.
2016-10-01
FLASH is an open source, finite-volume Eulerian, spatially adaptive radiation magnetohydrodynamics code that has the capabilities to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures, and has a broad user base. Extensive high energy density physics (HEDP) capabilities have been added to FLASH to make it an open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. In particular, we showcase the ability of FLASH to simulate the Faraday Rotation Measure produced by the presence of magnetic fields; and proton radiography, proton self-emission, and Thomson scattering diagnostics with and without the presence of magnetic fields. We also describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at the University of Chicago by the DOE NNSA ASC through the Argonne Institute for Computing in Science under field work proposal 57789; and the NSF under Grant PHY-0903997.
Strangeness S =-1 hyperon-nucleon interactions: Chiral effective field theory versus lattice QCD
NASA Astrophysics Data System (ADS)
Song, Jing; Li, Kai-Wen; Geng, Li-Sheng
2018-06-01
Hyperon-nucleon interactions serve as basic inputs to studies of hypernuclear physics and dense (neutron) stars. Unfortunately, a precise understanding of these important quantities has lagged far behind that of the nucleon-nucleon interaction due to the lack of high-precision experimental data. Historically, hyperon-nucleon interactions were formulated either in quark models or in meson-exchange models. In recent years, lattice QCD simulations and chiral effective field theory approaches have started to offer new insights from first principles. In the present work, we contrast the state-of-the-art lattice QCD simulations with the latest chiral hyperon-nucleon forces and show that the leading-order relativistic chiral results can already describe the lattice QCD data reasonably well. Given the fact that the lattice QCD simulations are performed with pion masses ranging from the (almost) physical point to 700 MeV, such studies provide a useful check on both the chiral effective field theory approaches and the lattice QCD simulations. Nevertheless, more precise lattice QCD simulations are eagerly needed to refine our understanding of hyperon-nucleon interactions.
Tackling some of the most intricate geophysical challenges via high-performance computing
NASA Astrophysics Data System (ADS)
Khosronejad, A.
2016-12-01
Recently, the world has been witnessing significant enhancements in the computing power of supercomputers. Computer clusters in conjunction with advanced mathematical algorithms have set the stage for developing and applying powerful numerical tools to tackle some of the most intricate geophysical challenges that today's engineers face. One such challenge is to understand how turbulent flows, in real-world settings, interact with (a) rigid and/or mobile complex bed bathymetry of waterways and sea-beds in the coastal areas; (b) objects with complex geometry that are fully or partially immersed; and (c) the free surface of waterways and water surface waves in the coastal area. This understanding is especially important because the turbulent flows in real-world environments are often bounded by geometrically complex boundaries, which dynamically deform and give rise to multi-scale and multi-physics transport phenomena, and are characterized by multi-lateral interactions among various phases (e.g. air/water/sediment phases). Herein, I present some of the multi-scale and multi-physics geophysical fluid mechanics processes that I have attempted to study using an in-house high-performance computational model, the so-called VFS-Geophysics. More specifically, I will present the simulation results of turbulence/sediment/solute/turbine interactions in real-world settings. Parts of the simulations I present are performed to gain scientific insights into processes such as sand wave formation (A. Khosronejad and F. Sotiropoulos (2014), Numerical simulation of sand waves in a turbulent open channel flow, Journal of Fluid Mechanics, 753:150-216), while others are carried out to predict the effects of climate change and large flood events on societal infrastructures (A. Khosronejad et al. (2016), Large eddy simulation of turbulence and solute transport in a forested headwater stream, Journal of Geophysical Research, doi: 10.1002/2014JF003423).
High-Order Methods for Incompressible Fluid Flow
NASA Astrophysics Data System (ADS)
Deville, M. O.; Fischer, P. F.; Mund, E. H.
2002-08-01
High-order numerical methods provide an efficient approach to simulating many physical problems. This book considers the range of mathematical, engineering, and computer science topics that form the foundation of high-order numerical methods for the simulation of incompressible fluid flows in complex domains. Introductory chapters present high-order spatial and temporal discretizations for one-dimensional problems. These are extended to multiple space dimensions with a detailed discussion of tensor-product forms, multi-domain methods, and preconditioners for iterative solution techniques. Numerous discretizations of the steady and unsteady Stokes and Navier-Stokes equations are presented, with particular attention given to the enforcement of incompressibility. Advanced discretizations, implementation issues, and parallel and vector performance are considered in the closing sections. Numerous examples are provided throughout to illustrate the capabilities of high-order methods in actual applications.
NASA Technical Reports Server (NTRS)
Garai, Anirban; Diosady, Laslo T.; Murman, Scott M.; Madavan, Nateri K.
2016-01-01
Recent progress towards developing a new computational capability for accurate and efficient high-fidelity direct numerical simulation (DNS) and large-eddy simulation (LES) of turbomachinery is described. This capability is based on an entropy-stable Discontinuous-Galerkin spectral-element approach that extends to arbitrarily high orders of spatial and temporal accuracy, and is implemented in a computationally efficient manner on a modern high performance computer architecture. An inflow turbulence generation procedure based on a linear forcing approach has been incorporated in this framework and DNS conducted to study the effect of inflow turbulence on the suction-side separation bubble in low-pressure turbine (LPT) cascades. The T106 series of airfoil cascades in both lightly (T106A) and highly loaded (T106C) configurations at exit isentropic Reynolds numbers of 60,000 and 80,000, respectively, are considered. The numerical simulations are performed using 8th-order accurate spatial and 4th-order accurate temporal discretization. The changes in separation bubble topology due to elevated inflow turbulence are captured by the present method and the physical mechanisms leading to the changes are explained. The present results are in good agreement with prior numerical simulations but some expected discrepancies with the experimental data for the T106C case are noted and discussed.
High Temperature Gas-Cooled Test Reactor Point Design: Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterbentz, James William; Bayless, Paul David; Nelson, Lee Orville
2016-01-01
A point design has been developed for a 200-MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched uranium oxycarbide fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technology readiness level, licensing approach, and costs of the test reactor point design.
Curing of Thick Thermoset Composite Laminates: Multiphysics Modeling and Experiments
NASA Astrophysics Data System (ADS)
Anandan, S.; Dhaliwal, G. S.; Huo, Z.; Chandrashekhara, K.; Apetre, N.; Iyyer, N.
2017-11-01
Fiber-reinforced polymer composites are used in high-performance aerospace applications as they are resistant to fatigue, corrosion-free, and possess high specific strength. The mechanical properties of these composite components depend on the degree of cure and residual stresses developed during the curing process. While these parameters are difficult to determine experimentally in large and complex parts, they can be simulated using numerical models in a cost-effective manner. These simulations can be used to develop cure cycles and change processing parameters to obtain high-quality parts. In the current work, a numerical model was built in COMSOL Multiphysics to simulate the cure behavior of a carbon/epoxy prepreg system (IM7/Cycom 5320-1). A thermal spike was observed in thick laminates when the recommended cure cycle was used. The cure cycle was modified to reduce the thermal spike and maintain the degree of cure at the laminate center. A parametric study was performed to evaluate the effect of air flow in the oven, post-cure cycles, and cure temperatures on the thermal spike and the resultant degree of cure in the laminate.
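For context, cure-kinetics models of this kind typically evolve the degree of cure under an Arrhenius-type rate law integrated alongside the heat equation; the exothermic source term is what produces the thermal spike in thick sections. The sketch below integrates a generic autocatalytic (Kamal-type) cure law through an idealized ramp-and-hold cycle; all coefficients are illustrative placeholders, not the IM7/Cycom 5320-1 parameters, and heat conduction is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # gas constant, J/(mol K)

def cure_rate(t, alpha, T_of_t, A=1e5, Ea=6.5e4, m=0.5, n=1.5):
    """Generic autocatalytic cure kinetics:
    d(alpha)/dt = A exp(-Ea / (R T)) * alpha^m * (1 - alpha)^n."""
    T = T_of_t(t)
    a = np.clip(alpha[0], 1e-6, 1.0)
    return [A * np.exp(-Ea / (R * T)) * a**m * (1.0 - a)**n]

# One-ramp-and-hold cycle: 2 K/min from 300 K up to a 450 K hold.
T_cycle = lambda t: min(300.0 + (2.0 / 60.0) * t, 450.0)
sol = solve_ivp(cure_rate, (0.0, 6 * 3600.0), [1e-3],
                args=(T_cycle,), max_step=30.0)
print("final degree of cure:", sol.y[0, -1])
```

Modifying the cure cycle then amounts to changing `T_cycle` and re-integrating, which is the kind of parametric exploration the paper performs with the full thermo-chemical model.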
Ouergui, Ibrahim; Davis, Philip; Houcine, Nizar; Marzouki, Hamza; Zaouali, Monia; Franchini, Emerson; Gmada, Nabil; Bouhlel, Ezzedine
2016-05-01
The aim of the current study was to investigate the hormonal, physiological, and physical responses to simulated kickboxing competition and evaluate if there was a difference between winners and losers. Twenty athletes of regional and national level participated in the study (mean ± SD age 21.3 ± 2.7 y, height 170.0 ± 5.0 cm). Hormone (cortisol, testosterone, growth hormone), blood lactate [La], and glucose concentrations, as well as upper-body Wingate test and countermovement-jump (CMJ) performances, were measured before and after combats. Heart rate (HR) was measured throughout rounds 1, 2, and 3 and rating of perceived exertion (RPE) was taken after each round. All combats were recorded and analyzed to determine the length of different activity phases (high-intensity, low-intensity, and referee pause) and the frequency of techniques. Hormones, glucose, [La], HR, and RPE increased (all P < .001) precombat to postcombat, while a decrease was observed for CMJ, Wingate test performance, body mass (all P < .001), and time of high-intensity activities (P = .005). There was no difference between winners and losers for hormonal, physiological, and physical variables (P > .05). However, winners executed more jab crosses, total punches, roundhouse kicks, total kicks, and total attacking techniques (all P < .042) than losers. Kickboxing is an intermittent, physically demanding sport that induces changes in stress-related hormones and heavily taxes the anaerobic lactic system. Training should be oriented to enhance kickboxers' anaerobic lactic fitness and their ability to strike at a sufficient rate. Further investigation is needed to identify possible differences in tactical and mental abilities that offer some insight into what makes winners winners.
NASA Astrophysics Data System (ADS)
Mishra, Gaurav; Ghosh, Karabi; Ray, Aditi; Gupta, N. K.
2018-06-01
Radiation hydrodynamic (RHD) simulations for four different potential high-Z hohlraum materials, namely Tungsten (W), Gold (Au), Lead (Pb), and Uranium (U), are performed in order to investigate their performance with respect to x-ray absorption, re-emission and ablation properties, when irradiated by constant temperature drives. A universal functional form is derived for estimating time-dependent wall albedo for high-Z materials. Among the high-Z materials studied, it is observed that for a fixed simulation time the albedo is maximum for Au below 250 eV, whereas it is maximum for U above 250 eV. New scaling laws for shock speed vs drive temperature, applicable over a wide temperature range of 100 eV to 500 eV, are proposed based on the physics of x-ray driven stationary ablation. The resulting scaling relation for the reference material Aluminium (Al) shows good agreement with Kauffman's power law for temperatures ranging from 100 eV to 275 eV. New scaling relations are also obtained for temperature-dependent mass ablation rate and ablation pressure, through RHD simulation. Finally, our study reveals that for temperatures above 250 eV, U serves as a better hohlraum material since it offers maximum re-emission for x-rays along with comparable mass ablation rate. Nevertheless, the traditional choice, Au, works well for temperatures below 250 eV. Besides inertial confinement fusion (ICF), the new scaling relations may find application in view-factor codes, which generally ignore atomic physics calculations of opacities and emissivities, details of laser-plasma interaction and hydrodynamic motions.
Fast Photon Monte Carlo for Water Cherenkov Detectors
NASA Astrophysics Data System (ADS)
Latorre, Anthony; Seibert, Stanley
2012-03-01
We present Chroma, a high performance optical photon simulation for large particle physics detectors, such as the water Cherenkov far detector option for LBNE. This software takes advantage of the CUDA parallel computing platform to propagate photons using modern graphics processing units. In a computer model of a 200 kiloton water Cherenkov detector with 29,000 photomultiplier tubes, Chroma can propagate 2.5 million photons per second, around 200 times faster than the same simulation with Geant4. Chroma uses a surface-based approach to modeling geometry, which offers many benefits over the solid-based modeling approach used in other simulations such as Geant4.
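To show the shape of the per-photon loop, here is a deliberately simplified, CPU-only Monte Carlo for photons in a one-dimensional slab: exponential free paths, with each interaction resolved as absorption or isotropic re-scattering. This is a toy under those stated assumptions, not Chroma's surface-mesh tracking; Chroma runs the analogous loop for millions of photons in parallel CUDA threads against triangle geometry.

```python
import numpy as np

def slab_transmission(n_photons, absorption_len, scattering_len,
                      slab_thickness, rng):
    """Fraction of photons transmitted through a 1D scattering slab."""
    z = np.zeros(n_photons)                 # depth into the slab
    mu = np.ones(n_photons)                 # direction cosine
    alive = np.ones(n_photons, dtype=bool)
    # Combined interaction length and per-interaction absorption probability.
    interaction_len = 1.0 / (1.0 / absorption_len + 1.0 / scattering_len)
    p_absorb = interaction_len / absorption_len
    while alive.any():
        step = rng.exponential(interaction_len, size=n_photons)
        z[alive] += mu[alive] * step[alive]             # fly to next interaction
        escaped = (z <= 0.0) | (z >= slab_thickness)
        absorbed = alive & ~escaped & (rng.random(n_photons) < p_absorb)
        alive &= ~(escaped | absorbed)
        mu[alive] = rng.uniform(-1.0, 1.0, size=alive.sum())  # isotropic scatter
    return (z >= slab_thickness).mean()

print(slab_transmission(100_000, 10.0, 2.0, 5.0, np.random.default_rng(1)))
```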
A ``Cyber Wind Facility'' for HPC Wind Turbine Field Experiments
NASA Astrophysics Data System (ADS)
Brasseur, James; Paterson, Eric; Schmitz, Sven; Campbell, Robert; Vijayakumar, Ganesh; Lavely, Adam; Jayaraman, Balaji; Nandi, Tarak; Jha, Pankaj; Dunbar, Alex; Motta-Mena, Javier; Craven, Brent; Haupt, Sue
2013-03-01
The Penn State "Cyber Wind Facility" (CWF) is a high-fidelity multi-scale high performance computing (HPC) environment in which "cyber field experiments" are designed and "cyber data" collected from wind turbines operating within the atmospheric boundary layer (ABL) environment. Conceptually the "facility" is akin to a high-tech wind tunnel with a controlled physical environment, but unlike a wind tunnel it replicates commercial-scale wind turbines operating in the field and forced by true atmospheric turbulence with controlled stability state. The CWF is built from state-of-the-art geometry and grid design, high-accuracy numerical methods, and high-resolution simulation strategies that blend unsteady RANS near the surface with high-fidelity large-eddy simulation (LES) in separated boundary layer, blade, and rotor wake regions, embedded within high-resolution LES of the ABL. CWF experiments complement physical field facility experiments that can capture wider ranges of meteorological events, but with minimal control over the environment and with very small numbers of sensors at low spatial resolution. I shall report on the first CWF experiments aimed at dynamical interactions between ABL turbulence and space-time wind turbine loadings. Supported by DOE and NSF.
Scaling effects in direct shear tests
Orlando, A.D.; Hanes, D.M.; Shen, H.H.
2009-01-01
Laboratory experiments of the direct shear test were performed on spherical particles of different materials and diameters. Results of the bulk friction vs. non-dimensional shear displacement are presented as a function of the non-dimensional particle diameter. Simulations of the direct shear test were performed using the Discrete Element Method (DEM). The simulation results show considerable differences from the physical experiments. Particle-level material properties, such as the coefficients of static friction, restitution, and rolling friction, need to be known a priori in order to guarantee that the simulation results are an accurate representation of the physical phenomenon. Furthermore, the laboratory results show a clear size dependency, with smaller particles having a higher bulk friction than larger ones. © 2009 American Institute of Physics.
Computational physics in RISC environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rhoades, C.E. Jr.
The new high performance Reduced Instruction Set Computers (RISC) promise near Cray-level performance at near personal-computer prices. This paper explores the performance, conversion and compatibility issues associated with developing, testing and using our traditional, large-scale simulation models in the RISC environments exemplified by the IBM RS6000 and MIPS R3000 machines. The questions of operating systems (CTSS versus UNIX), compilers (Fortran, C, pointers) and data are addressed in detail. Overall, it is concluded that the RISC environments are practical for a very wide range of computational physics activities. Indeed, all but the very largest two- and three-dimensional codes will work quite well, particularly in a single-user environment. Easily projected hardware-performance increases will revolutionize the field of computational physics. The way we do research will change profoundly in the next few years. There is, however, nothing more difficult to plan, nor more dangerous to manage than the creation of this new world.
Quantum Accelerators for High-performance Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S.; Britt, Keith A.; Mohiyaddin, Fahd A.
We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.
Computationally efficient optimization of radiation drives
NASA Astrophysics Data System (ADS)
Zimmerman, George; Swift, Damian
2017-06-01
For many applications of pulsed radiation, the temporal pulse shape is designed to induce a desired time-history of conditions. This optimization is normally performed using multi-physics simulations of the system, adjusting the shape until the desired response is induced. These simulations may be computationally intensive, and iterative forward optimization is then expensive and slow. In principle, a simulation program could be modified to adjust the radiation drive automatically until the desired instantaneous response is achieved, but this may be impracticable in a complicated multi-physics program. However, the computational time increment is typically much shorter than the time scale of changes in the desired response, so the radiation intensity can be adjusted so that the response tends toward the desired value. This relaxed in-situ optimization method can give an adequate design for a pulse shape in a single forward simulation, giving a typical gain in computational efficiency of a factor of tens to thousands. This approach was demonstrated for the design of laser pulse shapes to induce ramp loading to high pressure in target assemblies where different components had significantly different mechanical impedance, requiring careful pulse shaping. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
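A toy version of the relaxed in-situ idea, with a scalar first-order system standing in for the multi-physics code; the dynamics, gain, and names are illustrative assumptions only:

```python
import numpy as np

def design_drive(target_of_t, n_steps, dt, gain, loss=2.0):
    """Inside a single forward run, nudge the drive every time step so the
    simulated response relaxes toward the desired instantaneous response;
    the recorded drive history is the designed pulse shape."""
    response, drive = 0.0, 0.0
    history = np.empty(n_steps)
    for i in range(n_steps):
        t = i * dt
        drive += gain * (target_of_t(t) - response) * dt  # relaxation update
        drive = max(drive, 0.0)            # a radiation drive cannot go negative
        response += (drive - loss * response) * dt        # toy stand-in dynamics
        history[i] = drive
    return history

ramp = lambda t: 5.0 * t                   # desired ramp in the response
history = design_drive(ramp, n_steps=20_000, dt=1e-3, gain=50.0)
```

Because the adjustment happens while the simulation runs, one forward pass replaces the many iterations a conventional outer optimization loop would need, which is the source of the quoted efficiency gain.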
NASA Astrophysics Data System (ADS)
Urata, Yumi; Kuge, Keiko; Kase, Yuko
2015-02-01
Phase transitions of pore water have never been considered in dynamic rupture simulations with thermal pressurization (TP), although they may control TP. From numerical simulations of dynamic rupture propagation including TP, in the absence of any water phase transition process, we predict that frictional heating and TP are likely to change liquid pore water into supercritical water for a strike-slip fault under depth-dependent stress. This phase transition causes changes of a few orders of magnitude in viscosity, compressibility, and thermal expansion among physical properties of water, thus affecting the diffusion of pore pressure. Accordingly, we perform numerical simulations of dynamic ruptures with TP, considering physical properties that vary with the pressure and temperature of pore water on a fault. To observe the effects of the phase transition, we assume uniform initial stress and no fault-normal variations in fluid density and viscosity. The results suggest that the varying physical properties decrease the total slip in cases with high stress at depth and small shear zone thickness. When fault-normal variations in fluid density and viscosity are included in the diffusion equation, they activate TP much earlier than the phase transition. As a consequence, the total slip becomes greater than that in the case with constant physical properties, eradicating the phase transition effect. Varying physical properties do not affect the rupture velocity, irrespective of the fault-normal variations. Thus, the phase transition of pore water has little effect on dynamic ruptures. Fault-normal variations in fluid density and viscosity may play a more significant role.
NASA Astrophysics Data System (ADS)
Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.
2017-12-01
The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations including the Korean Meteorological Agency, Australian Bureau of Meteorology and US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational High Performance Computer in Europe - and the cost of a typical simulation is split roughly 50% in the "dynamics" and 50% in the "physics". Therefore there is a strong incentive to reduce the cost of weather forecasts, and Machine Learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation, the idea being to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of Machine Learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.
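The emulation recipe in miniature: generate input/output pairs from the expensive component offline, fit a cheap regressor, then call the regressor inside the time loop. The sketch below uses scikit-learn against a made-up `physics_tendency` function; it is a generic illustration, not the Unified Model's parameterization suite.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def physics_tendency(state):
    """Stand-in for an expensive sub-grid 'physics' call: maps a toy
    column state (T, q, p) to a heating tendency."""
    T, q, p = state.T
    return np.tanh(q) * np.exp(-((T - 280.0) / 20.0) ** 2) * p

rng = np.random.default_rng(0)
X = rng.uniform([240.0, 0.0, 0.1], [310.0, 5.0, 1.0], size=(20_000, 3))
y = physics_tendency(X)

# Train the emulator offline...
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300).fit(X, y)
# ...then, in the model's time loop, this call replaces the expensive one:
print(emulator.predict(X[:5]), y[:5])
```

The open questions the paper raises (accuracy, stability under feedback with the dynamics, behavior outside the training distribution) are exactly the ones such a toy hides.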
Multi-scale image segmentation and numerical modeling in carbonate rocks
NASA Astrophysics Data System (ADS)
Alves, G. C.; Vanorio, T.
2016-12-01
Numerical methods based on computational simulations can be an important tool in estimating physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as by optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave-equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by a larger grain/micrite ratio, results show that SEM-scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be more suited for numerical simulations.
Physical-layer network coding for passive optical interconnect in datacenter networks.
Lin, Rui; Cheng, Yuxin; Guan, Xun; Tang, Ming; Liu, Deming; Chan, Chun-Kit; Chen, Jiajia
2017-07-24
We introduce a physical-layer network coding (PLNC) technique in a passive optical interconnect (POI) architecture for datacenter networks. The implementation of PLNC in the POI at 2.5 Gb/s and 10 Gb/s has been experimentally validated, while the gains in terms of network-layer performance have been investigated by simulation. The results reveal that, in order to realize negligible packet drop, wavelength usage can be reduced by half, while a significant improvement in packet delay, especially under high traffic load, can be achieved by employing PLNC over the POI.
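The coding gain rests on the classic XOR relay idea: two nodes transmit in the same slot, the interconnect delivers the combined (XOR) packet, and each node cancels its own contribution to recover the other's data, halving the channel usage for a pairwise exchange. A minimal bitwise illustration, with the optical details of the POI abstracted away:

```python
def plnc_exchange(pkt_a: bytes, pkt_b: bytes):
    """Nodes A and B send simultaneously; the network yields the XOR of the
    two packets in one slot instead of forwarding each packet separately."""
    coded = bytes(a ^ b for a, b in zip(pkt_a, pkt_b))
    recovered_at_a = bytes(c ^ a for c, a in zip(coded, pkt_a))  # = pkt_b
    recovered_at_b = bytes(c ^ b for c, b in zip(coded, pkt_b))  # = pkt_a
    return recovered_at_a, recovered_at_b

assert plnc_exchange(b"hello", b"world") == (b"world", b"hello")
```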
Massively Parallel Real-Time TDDFT Simulations of Electronic Stopping Processes
NASA Astrophysics Data System (ADS)
Yost, Dillon; Lee, Cheng-Wei; Draeger, Erik; Correa, Alfredo; Schleife, Andre; Kanai, Yosuke
Electronic stopping describes transfer of kinetic energy from fast-moving charged particles to electrons, producing massive electronic excitations in condensed matter. Understanding this phenomenon for ion irradiation has implications in modern technologies, ranging from nuclear reactors, to semiconductor devices for aerospace missions, to proton-based cancer therapy. Recent advances in high-performance computing allow us to achieve an accurate parameter-free description of these phenomena through numerical simulations. Here we discuss results from our recently-developed large-scale real-time TDDFT implementation for electronic stopping processes in important example materials such as metals, semiconductors, liquid water, and DNA. We will illustrate important insight into the physics underlying electronic stopping and we discuss current limitations of our approach both regarding physical and numerical approximations. This work is supported by the DOE through the INCITE awards and by the NSF. Part of this work was performed under the auspices of U.S. DOE by LLNL under Contract DE-AC52-07NA27344.
Performance Analysis of Transposition Models Simulating Solar Radiation on Inclined Surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
2016-06-02
Transposition models have been widely used in the solar energy industry to simulate solar radiation on inclined photovoltaic panels. Following numerous studies comparing the performance of transposition models, this work aims to understand the quantitative uncertainty in state-of-the-art transposition models and the sources leading to the uncertainty. Our results show significant differences between two widely used isotropic transposition models, with one substantially underestimating the diffuse plane-of-array irradiances when diffuse radiation is perfectly isotropic. In the empirical transposition models, the selection of the empirical coefficients and land surface albedo can both result in uncertainty in the output. This study can be used as a guide for the future development of physics-based transposition models and evaluations of system performance.
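For reference, the isotropic-sky transposition that such models build on fits in a few lines: plane-of-array irradiance is the projected beam plus isotropic sky diffuse plus ground-reflected diffuse. This is the textbook Liu-Jordan form, given here as a sketch rather than either of the specific models evaluated in the study; the albedo default is an assumption.

```python
import numpy as np

def isotropic_poa(dni, dhi, ghi, tilt_deg, aoi_deg, albedo=0.2):
    """Plane-of-array irradiance (W/m^2) from the isotropic-sky model.
    dni/dhi/ghi: direct-normal, diffuse-horizontal, global-horizontal
    irradiance; aoi: angle of incidence of the beam on the panel."""
    tilt, aoi = np.radians(tilt_deg), np.radians(aoi_deg)
    beam = dni * max(np.cos(aoi), 0.0)
    sky_diffuse = dhi * (1.0 + np.cos(tilt)) / 2.0
    ground_reflected = ghi * albedo * (1.0 - np.cos(tilt)) / 2.0
    return beam + sky_diffuse + ground_reflected

print(isotropic_poa(dni=700.0, dhi=120.0, ghi=600.0, tilt_deg=30.0, aoi_deg=25.0))
```

The study's two uncertainty sources are visible directly in the code: the treatment of `sky_diffuse` (where empirical models replace the isotropic factor) and the `albedo` value.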
CHORUS code for solar and planetary convection
NASA Astrophysics Data System (ADS)
Wang, Junfeng
Turbulent, density stratified convection is ubiquitous in stars and planets. Numerical simulation has become an indispensable tool for understanding it. A primary contribution of this dissertation work is the creation of the Compressible High-ORder Unstructured Spectral-difference (CHORUS) code for simulating the convection and related fluid dynamics in the interiors of stars and planets. In this work, the CHORUS code is verified by using two newly defined benchmark cases and demonstrates excellent parallel performance. It has unique potential to simulate challenging physical phenomena such as multi-scale solar convection, core convection, and convection in oblate, rapidly-rotating stars. In order to exploit its unique capabilities, the CHORUS code has been extended to perform the first 3D simulations of convection in oblate, rapidly rotating solar-type stars. New insights are obtained with respect to the influence of oblateness on the convective structure and heat flux transport. With the presence of oblateness resulting from the centrifugal force effect, the convective structure in the polar regions decouples from the main convective modes in the equatorial regions. Our convection simulations predict that heat flux peaks in both the polar and equatorial regions, contrary to previous theoretical results that predict darker equators. High latitudinal zonal jets are also observed in the simulations.
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.
1992-01-01
The basic objective of this research is to extend the capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high-speed reacting flows. In the efforts related to LES, we were primarily involved with assessing the performance of the various modern Probability Density Function (PDF) methods for providing closures for treating the subgrid fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we concentrated on understanding some of the relevant physics of compressible reacting flows by means of statistical analysis of the data generated by DNS of such flows. In the research conducted in the second year of this program, our efforts focused on the modeling of homogeneous compressible turbulent flows by PDF methods, and on DNS of non-equilibrium reacting high-speed mixing layers. Some preliminary work is also in progress on PDF modeling of shear flows, and on LES of such flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fields, Laura; Genser, Krzysztof; Hatcher, Robert
Geant4 is the leading detector simulation toolkit used in high energy physics to design detectors and to optimize calibration and reconstruction software. It employs a set of carefully validated physics models to simulate interactions of particles with matter across a wide range of interaction energies. These models, especially the hadronic ones, rely largely on directly measured cross-sections and phenomenological predictions with physically motivated parameters estimated by theoretical calculation or measurement. Because these models are tuned to cover a very wide range of possible simulation tasks, they may not always be optimized for a given process or a given material. This raises several critical questions, e.g. how sensitive Geant4 predictions are to the variations of the model parameters, or what uncertainties are associated with a particular tune of a Geant4 physics model, or a group of models, or how to consistently derive guidance for Geant4 model development and improvement from a wide range of available experimental data. We have designed and implemented a comprehensive, modular, user-friendly software toolkit to study and address such questions. It allows one to easily modify parameters of one or several Geant4 physics models involved in the simulation, and to perform collective analysis of multiple variants of the resulting physics observables of interest and comparison against a variety of corresponding experimental data. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. flexible run-time configurable workflow, comprehensive bookkeeping, and an easy-to-expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented and illustrated with results obtained with Geant4 key hadronic models.
2011-03-25
number one and Nebulae at number three. Both systems rely on GPU co-processing and use Intel Xeon processors and NVIDIA Tesla C2050 GPUs. In spite of a theoretical peak capability of almost 3 Petaflop/s, Nebulae clocked at 1.271 PFlop/s when running the Linpack benchmark, which puts it
Perception-based synthetic cueing for night vision device rotorcraft hover operations
NASA Astrophysics Data System (ADS)
Bachelder, Edward N.; McRuer, Duane
2002-08-01
Helicopter flight using night-vision devices (NVDs) is difficult to perform, as evidenced by the high accident rate associated with NVD flight compared to day operation. The approach proposed in this paper is to augment the NVD image with synthetic cueing, whereby the cues would emulate position and motion and appear to be actually occurring in physical space on which they are overlaid. Synthetic cues allow for selective enhancement of perceptual state gains to match the task requirements. A hover cue set was developed based on an analogue of a physical target used in a flight handling qualities tracking task, a perceptual task analysis for hover, and fundamentals of human spatial perception. The display was implemented on a simulation environment, constructed using a virtual reality device, an ultrasound head-tracker, and a fixed-base helicopter simulator. Seven highly trained helicopter pilots were used as experimental subjects and tasked to maintain hover in the presence of aircraft positional disturbances while viewing a synthesized NVD environment and the experimental hover cues. Significant performance improvements were observed when using synthetic cue augmentation. This paper demonstrates that artificial magnification of perceptual states through synthetic cueing can be an effective method of improving night-vision helicopter hover operations.
Representing ductile damage with the dual domain material point method
Long, C. C.; Zhang, D. Z.; Bronkhorst, C. A.; ...
2015-12-14
In this study, we incorporate a ductile damage material model into a computational framework based on the Dual Domain Material Point (DDMP) method. As an example, simulations of a flyer plate experiment involving ductile void growth and material failure are performed. The results are compared with experiments performed on high-purity tantalum. We also compare the numerical results obtained from the DDMP method with those obtained from the traditional Material Point Method (MPM). Effects of an overstress model, artificial viscosity, and physical viscosity are investigated. Our results show that a physical bulk viscosity and overstress model are important in this impact and failure problem, while physical shear viscosity and artificial shock viscosity have negligible effects. A simple numerical procedure with guaranteed convergence is introduced to solve for the equilibrium plastic state from the ductile damage model.
Fast simulation of the NICER instrument
NASA Astrophysics Data System (ADS)
Doty, John P.; Wampler-Doty, Matthew P.; Prigozhin, Gregory Y.; Okajima, Takashi; Arzoumanian, Zaven; Gendreau, Keith
2016-07-01
The NICER mission uses a complicated physical system to collect information from objects that are, by x-ray timing science standards, rather faint. To get the most out of the data we will need a rigorous understanding of all instrumental effects. We are in the process of constructing a very fast, high-fidelity simulator that will help us to assess instrument performance, support simulation-based data reduction, and improve our estimates of measurement error. We will combine and extend existing optics, detector, and electronics simulations. We will employ the Compute Unified Device Architecture (CUDA) to parallelize these calculations. The price of suitable CUDA-compatible multi-giga-op cores is about $0.20/core, so this approach will be very cost-effective.
Cui, Yang; Hanley, Luke
2015-06-01
ChiMS is an open-source data acquisition and control software program written within LabVIEW for high-speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high-speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometer imaging and various other experiments in laser physics, physical chemistry, and surface science.
Youssef, Yassar; Lee, Gyusung; Godinez, Carlos; Sutton, Erica; Klein, Rosemary V; George, Ivan M; Seagull, F Jacob; Park, Adrian
2011-07-01
This study compares surgical techniques and surgeon's standing position during laparoscopic cholecystectomy (LC), investigating each with respect to surgeons' learning, performance, and ergonomics. Little homogeneity exists in LC performance and training. Variations in standing position (side-standing technique vs. between-standing technique) and hand technique (one-handed vs. two-handed) exist. Thirty-two LC procedures performed on a virtual reality simulator were video-recorded and analyzed. Each subject performed four different procedures: one-handed/side-standing, one-handed/between-standing, two-handed/side-standing, and two-handed/between-standing. Physical ergonomics were evaluated using the Rapid Upper Limb Assessment (RULA). Mental workload assessment was acquired with the National Aeronautics and Space Administration-Task Load Index (NASA-TLX). Virtual reality (VR) simulator-generated performance evaluation and a subjective survey were analyzed. RULA scores were consistently lower (indicating better ergonomics) for the between-standing technique and higher (indicating worse ergonomics) for the side-standing technique, regardless of whether one- or two-handed. Anatomical scores overall showed side-standing to have a detrimental effect on the upper arms and trunk. The NASA-TLX showed a significant association between the side-standing position and high physical demand, effort, and frustration (p<0.05). The two-handed technique in the side-standing position required more effort than the one-handed (p<0.05). No difference in operative time or complication rate was demonstrated among the four procedures. The two-handed/between-standing method was chosen as the best procedure to teach and standardize. Laparoscopic cholecystectomy poses a risk of physical injury to the surgeon. Because LC in the United States is commonly performed from the left side-standing position, surgeons may face increased physical demand and effort, resulting in ergonomically unsound conditions. Though further investigations should be conducted, adopting the between-standing position deserves serious consideration as it may be the best short-term ergonomic alternative.
Dynamic large eddy simulation: Stability via realizability
NASA Astrophysics Data System (ADS)
Mokhtarpoor, Reza; Heinz, Stefan
2017-10-01
The concept of dynamic large eddy simulation (LES) is highly attractive: such methods can dynamically adjust to changing flow conditions, which is known to be highly beneficial. For example, this avoids the use of empirical, case-dependent approximations (like damping functions). Ideally, dynamic LES should be local in physical space (without involving artificial clipping parameters), and it should be stable for a wide range of simulation time steps, Reynolds numbers, and numerical schemes. These properties are not trivial; dynamic LES has suffered from such problems for decades. We address these questions by performing dynamic LES of periodic hill flow including separation at a high Reynolds number Re = 37 000. For the case considered, the main result of our studies is that it is possible to design LES that has the desired properties. It requires physical consistency: a PDF-realizable and stress-realizable LES model, which requires the inclusion of the turbulent kinetic energy in the LES calculation. LES models that do not honor such physical consistency can become unstable. We do not find support for the previous assumption that long-term correlations of negative dynamic model parameters are responsible for instability. Instead, we concluded that instability is caused by the stable spatial organization of significant unphysical states, which are represented by wall-type gradient streaks of the standard deviation of the dynamic model parameter. The applicability of our realizability stabilization to other dynamic models (including the dynamic Smagorinsky model) is discussed.
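For orientation, the dynamic procedure at issue is usually the Germano-identity, least-squares determination of the model coefficient. In one common convention (sign and factor conventions vary between papers; this is the standard machinery such methods build on, not the realizability formulation proposed here):

```latex
L_{ij} = \widehat{\overline{u}_i\,\overline{u}_j} - \widehat{\overline{u}}_i\,\widehat{\overline{u}}_j ,
\qquad
C_d = \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle} ,
```

where the overbar denotes grid filtering, the hat test filtering, and $M_{ij}$ collects the difference of the modeled stresses at the two filter levels. Locally negative $C_d$ is conventionally handled by averaging or clipping; realizability constraints are meant to remove the need for exactly this kind of ad hoc stabilization.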
Terascale direct numerical simulations of turbulent combustion using S3D
NASA Astrophysics Data System (ADS)
Chen, J. H.; Choudhary, A.; de Supinski, B.; DeVries, M.; Hawkes, E. R.; Klasky, S.; Liao, W. K.; Ma, K. L.; Mellor-Crummey, J.; Podhorszki, N.; Sankaran, R.; Shende, S.; Yoo, C. S.
2009-01-01
Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS), specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular, that can discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence terascale DNS are computationally intensive, require massive amounts of computing power and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating its role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scaleable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data and automating the combustion workflow. The enabling computer science, applied to combustion science, is also required in many other terascale physics and engineering simulations. In particular, performance monitoring is used to identify the performance of key kernels in the DNS code, S3D and especially memory intensive loops in the code. Through the careful application of loop transformations, data reuse in cache is exploited thereby reducing memory bandwidth needs, and hence, improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histogram. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival and to provide a graphical display of run-time diagnostics.
Lu, Zeqin; Jhoja, Jaspreet; Klein, Jackson; Wang, Xu; Liu, Amy; Flueckiger, Jonas; Pond, James; Chrostowski, Lukas
2017-05-01
This work develops an enhanced Monte Carlo (MC) simulation methodology to predict the impacts of layout-dependent correlated manufacturing variations on the performance of photonic integrated circuits (PICs). First, to enable such performance prediction, we demonstrate a simple method with sub-nanometer accuracy to characterize photonics manufacturing variations, where the width and height of a fabricated waveguide can be extracted from the spectral response of a racetrack resonator. By measuring the spectral responses for a large number of identical resonators spread over a wafer, statistical results for the variations of waveguide width and height can be obtained. Second, we develop models for the layout-dependent enhanced MC simulation. Our models use netlist extraction to transfer physical layouts into circuit simulators. Spatially correlated physical variations across the PICs are simulated on a discrete grid and are mapped to each circuit component, so that the performance for each component can be updated according to its obtained variations, and therefore, circuit simulations take the correlated variations between components into account. The simulation flow and theoretical models for our layout-dependent enhanced MC simulation are detailed in this paper. As examples, several ring-resonator filter circuits are studied using the developed enhanced MC simulation, and statistical results from the simulations can predict both common-mode and differential-mode variations of the circuit performance.
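The core of the layout-dependent step is sampling one spatially correlated variation map per Monte Carlo trial and evaluating it at each component's position. A minimal sketch, assuming a Gaussian-process model with a squared-exponential covariance; the correlation length, sigma, and positions are illustrative, not the paper's extracted statistics:

```python
import numpy as np

def correlated_variation(xy, corr_length, sigma, rng):
    """One realization of a spatially correlated variation (e.g., waveguide
    width error in nm) at component positions `xy` (um), drawn from a
    Gaussian process with a squared-exponential covariance."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    cov = sigma**2 * np.exp(-((d / corr_length) ** 2))
    return rng.multivariate_normal(np.zeros(len(xy)), cov)

rng = np.random.default_rng(0)
xy = np.array([[0.0, 0.0], [50.0, 0.0], [1000.0, 200.0]])  # component positions
for trial in range(3):        # one map per enhanced-MC trial
    dw = correlated_variation(xy, corr_length=200.0, sigma=5.0, rng=rng)
    print(dw)  # the two nearby components receive nearly identical errors
```

Each component model is then perturbed by its own sample before the circuit is re-simulated, which is how correlated (rather than independent) variations propagate to common-mode and differential-mode circuit statistics.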
A real time sorting algorithm to time sort any deterministic time disordered data stream
NASA Astrophysics Data System (ADS)
Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S.
2017-12-01
In new-generation high-intensity, high-energy physics experiments, millions of free-streaming, high-rate data sources are to be read out. Free-streaming data with associated time-stamps can only be controlled by thresholds, as no trigger information is available to the readout. Therefore, these readouts are prone to collect large amounts of noise and unwanted data. For this reason, these experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to perform online processing of the data to extract useful information from the full data set. Without trigger information, pre-processing on the free-streaming data can only be done with time-based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, and therefore the unsorted merged data requires significant computational effort to sort in real time before analysis. The present work reports a new high-speed, scalable data stream sorting algorithm with its architectural design, verified through Field-Programmable Gate Array (FPGA) based hardware simulation. Realistic time-based simulated data, as would likely be collected in a high-energy physics experiment, have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero suppression features to make it efficient for high-rate data streams. This algorithm is best suited for online data streams with deterministic time disorder/unsorting on FPGA-like hardware.
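The buffering idea at the heart of such a sorter fits in a few lines of software: hold arriving hits in a priority queue and release one only when no earlier time-stamp can still arrive. The heap-based sketch below is a functional analogue assuming a known bound on the time-stamp skew; the paper's FPGA design achieves the same effect with parallel read-write memory blocks plus memory management and zero suppression.

```python
import heapq

def time_sort(stream, max_disorder):
    """Online time-sorter for a stream of (time_stamp, payload) pairs whose
    disorder is bounded by `max_disorder` (worst-case path-delay skew)."""
    heap = []
    for ts, payload in stream:
        heapq.heappush(heap, (ts, payload))
        # Anything older than the newest time minus the bound is safe to emit.
        while heap and heap[0][0] <= ts - max_disorder:
            yield heapq.heappop(heap)
    while heap:                      # flush at end of stream
        yield heapq.heappop(heap)

hits = [(3, "a"), (1, "b"), (2, "c"), (6, "d"), (4, "e")]
print(list(time_sort(hits, max_disorder=3)))   # emitted in time order
```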
Performance of a reconfigured atmospheric general circulation model at low resolution
NASA Astrophysics Data System (ADS)
Wen, Xinyu; Zhou, Tianjun; Wang, Shaowu; Wang, Bin; Wan, Hui; Li, Jian
2007-07-01
Paleoclimate simulations usually require model runs over a very long time. A fast-integration version of a state-of-the-art general circulation model (GCM), which shares the same physical and dynamical processes but has reduced horizontal resolution and an increased time step, is therefore usually developed. In this study, we configure a fast version of an atmospheric GCM (AGCM), the Grid Atmospheric Model of IAP/LASG (Institute of Atmospheric Physics/State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics), at low resolution (GAMIL-L, hereafter), and compare the simulation results with the NCEP/NCAR reanalysis and other data to examine its performance. GAMIL-L, which is derived from the original GAMIL, is a finite-difference AGCM with a 72×40 longitude-latitude grid and 26 vertical levels. To validate the simulated climatology and variability, two runs were performed: a 60-year control run with fixed climatological monthly sea surface temperature (SST) forcing, and a 50-year (1950-2000) integration with observed time-varying monthly SST forcing. Comparisons between these two cases and the reanalysis, including intra-seasonal and inter-annual variability, are also presented. In addition, the differences between GAMIL-L and the original version of GAMIL are investigated. The results show that GAMIL-L can capture most of the large-scale dynamical features of the atmosphere, especially in the tropics and mid-latitudes, although a few deficiencies exist, such as the underestimated Hadley cell and the consequently weak Asian summer monsoon. However, the simulated mean states over high latitudes, especially over the polar regions, are not acceptable. Apart from dynamics, the thermodynamic features mainly depend upon the physical parameterization schemes. Since the physical package of GAMIL-L is exactly the same as that of the original high-resolution version of GAMIL, in which the NCAR Community Atmosphere Model (CAM2) physical package was used, there are only small differences between them in the precipitation and temperature fields. Because our goal is to develop a fast-running AGCM and employ it in the coupled climate system model of IAP/LASG for paleoclimate studies such as ENSO and the Australia-Asia monsoon, particular attention has been paid to the model performance in the tropics. Further model validations, such as runs for the Southern Oscillation and the South Asian monsoon, indicate that GAMIL-L is reasonably competent and valuable in this regard.
Modelling of proton acceleration in application to a ground level enhancement
NASA Astrophysics Data System (ADS)
Afanasiev, A.; Vainio, R.; Rouillard, A. P.; Battarbee, M.; Aran, A.; Zucca, P.
2018-06-01
Context. The source of high-energy protons (above 500 MeV) responsible for ground level enhancements (GLEs) remains an open question in solar physics. One of the candidates is a shock wave driven by a coronal mass ejection, which is thought to accelerate particles via diffusive-shock acceleration. Aims: We perform physics-based simulations of proton acceleration using information on the shock and ambient plasma parameters derived from the observation of a real GLE event. We analyse the simulation results to find out which of the parameters are significant in controlling the acceleration efficiency and to get a better understanding of the conditions under which the shock can produce relativistic protons. Methods: We use the results of the recently developed technique to determine the shock and ambient plasma parameters, applied to the 17 May 2012 GLE event, and carry out proton acceleration simulations with the Coronal Shock Acceleration (CSA) model. Results: We performed proton acceleration simulations for nine individual magnetic field lines characterised by various plasma conditions. Analysis of the simulation results shows that the acceleration efficiency of the shock, i.e. its ability to accelerate particles to high energies, tends to be higher for those shock portions that are characterised by higher values of the scattering-centre compression ratio r_c and/or the fast-mode Mach number M_FM. At the same time, the acceleration efficiency can be strengthened by enhanced plasma density in the corresponding flux tube. The simulations show that protons can be accelerated to GLE energies in the shock portions characterised by the highest values of r_c. Analysis of the delays between the flare onset and the production times of protons of 1 GV rigidity for different field lines in our simulations, and a subsequent comparison of those with the observed values, indicate the possibility that quasi-perpendicular portions of the shock play the main role in producing relativistic protons.
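The role of the scattering-centre compression ratio can be made concrete with the standard steady-state diffusive-shock-acceleration estimate (a textbook result, not the time-dependent CSA model used in the paper):

```python
# Standard steady-state diffusive-shock-acceleration result: the accelerated
# particles form a power law f(p) ~ p^(-sigma) with sigma = 3 r_c / (r_c - 1).
# This is a textbook estimate offered only to illustrate why higher r_c
# favors acceleration to high energies.
def dsa_spectral_index(r_c):
    if r_c <= 1.0:
        raise ValueError("no acceleration for compression ratio <= 1")
    return 3.0 * r_c / (r_c - 1.0)

for r_c in (2.0, 3.0, 4.0, 6.0):
    print(f"r_c = {r_c}: sigma = {dsa_spectral_index(r_c):.2f}")
# Higher compression ratios give harder (flatter) spectra, consistent with
# the finding that shock portions with the largest r_c are the most
# efficient at producing relativistic protons.
```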
Simulating parameters of lunar physical libration on the basis of its analytical theory
NASA Astrophysics Data System (ADS)
Petrova, N.; Zagidullin, A.; Nefediev, Yu.
2014-04-01
Results of simulating the behavior of lunar physical libration parameters are presented. Some features in the rate of change of the impulse variables are revealed: fast periodic changes in p2 and long-periodic changes in p3. The problem of finding a dynamical explanation for this phenomenon is posed. The simulation was performed on the basis of the analytical libration theory [1] in the VBA programming environment.
Lee, Hyo Taek; Roh, Hyo Lyun; Kim, Yoon Sang
2016-01-01
[Purpose] Efficient management using exercise programs with various benefits should be provided by educational institutions for children in their growth phase. We analyzed the heart rates of children during ski-simulator exercise and the Harvard step test, and evaluated cardiopulmonary endurance by calculating the post-exercise recovery rate. [Subjects and Methods] The subjects (n = 77) were categorized into normal-weight and overweight/obese groups by body mass index. They performed each exercise for 3 minutes. Cardiorespiratory endurance was calculated using the Physical Efficiency Index formula. [Results] The ski simulator and the Harvard step test showed a significant difference in the heart rates of the 2 body-mass-index-based groups at each minute. The normal-weight group, and the ski-simulator exercise, yielded higher Physical Efficiency Index levels. [Conclusion] This study showed that a simulator exercise can produce a cumulative load even when performed at low intensity and can be effectively utilized as exercise equipment, since it resulted in higher Physical Efficiency Index levels than the Harvard step test. If schools can build exercise endurance while stimulating students' interest, the ski-simulator exercise can be used in programs designed to improve and strengthen students' physical fitness.
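The long-form Harvard step test computes the Physical Efficiency Index from the test duration and three 30-second recovery pulse counts; the sketch below uses that textbook formula with made-up counts, and the exact variant applied in this study's 3-minute protocol may differ:

```python
# Commonly cited long-form Physical Efficiency Index:
#   PEI = 100 * duration_seconds / (2 * sum of three 30-s recovery pulse counts)
# The pulse counts here are hypothetical.
def physical_efficiency_index(duration_s, pulse_counts):
    return 100.0 * duration_s / (2.0 * sum(pulse_counts))

# Hypothetical example: a 3-minute (180 s) test with recovery pulse counts
# taken at 1-1.5, 2-2.5 and 3-3.5 minutes after exercise.
pei = physical_efficiency_index(180, (60, 55, 50))
print(f"PEI = {pei:.1f}")  # higher values indicate faster recovery
```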
A survey of simulators for palpation training.
Zhang, Yan; Phillips, Roger; Ward, James; Pisharody, Sandhya
2009-01-01
Palpation is a widely used diagnostic method in medical practice. The sensitivity of palpation is highly dependent upon the skill of the clinician, which is often difficult to master, so there is a need for simulators in palpation training. This paper summarizes important work and the latest achievements in simulation for palpation training. Three types of simulators are surveyed: physical models, virtual reality (VR)-based simulations, and hybrid (computerized and physical) simulators. Comparisons among the different kinds of simulators are presented.
Evaluation of coupling approaches for thermomechanical simulations
Novascone, S. R.; Spencer, B. W.; Hales, J. D.; ...
2015-08-10
Many problems of interest, particularly in the nuclear engineering field, involve coupling between the thermal and mechanical response of an engineered system. The strength of the two-way feedback between the thermal and mechanical solution fields can vary significantly depending on the problem. Contact problems exhibit a particularly high degree of two-way feedback between those fields. This paper describes and demonstrates the application of a flexible simulation environment that permits the solution of coupled physics problems using either a tightly coupled approach or a loosely coupled approach. In the tight coupling approach, Newton iterations include the coupling effects between all physics, while in the loosely coupled approach, the individual physics models are solved independently, and fixed-point iterations are performed until the coupled system is converged. These approaches are applied to simple demonstration problems and to realistic nuclear engineering applications. The demonstration problems consist of single and multi-domain thermomechanics with and without thermal and mechanical contact. Simulations of a reactor pressure vessel under pressurized thermal shock conditions and a simulation of light water reactor fuel are also presented. Problems that include thermal and mechanical contact, such as the contact between the fuel and cladding in the fuel simulation, exhibit much stronger two-way feedback between the thermal and mechanical solutions, and as a result are better solved using a tight coupling strategy.
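The loosely coupled (fixed-point) strategy can be sketched on a toy thermomechanical system in which temperature and displacement feed back on each other; the solvers and coefficients are hypothetical stand-ins for the individual physics codes:

```python
# Toy fixed-point coupling: temperature T depends on a gap set by
# displacement u, and u depends on thermal expansion driven by T.
def solve_thermal(u):
    # Stand-in "thermal solve": a smaller gap conducts heat better.
    return 600.0 - 200.0 * u

def solve_mechanical(T):
    # Stand-in "mechanical solve": thermal expansion closes the gap.
    return 1.0e-3 * (T - 300.0)

T, u = 500.0, 0.0
for it in range(100):
    T_new = solve_thermal(u)
    u_new = solve_mechanical(T_new)
    if abs(T_new - T) < 1e-10 and abs(u_new - u) < 1e-13:
        print(f"fixed-point iteration converged in {it + 1} sweeps")
        break
    T, u = T_new, u_new
# A tightly coupled approach would instead assemble both residuals into one
# Newton system, which pays off when this feedback loop is strong (e.g.,
# thermal/mechanical contact) and fixed-point sweeps converge slowly.
```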
NASA Astrophysics Data System (ADS)
Adriani, O.; Albergo, S.; Auditore, L.; Basti, A.; Berti, E.; Bigongiari, G.; Bonechi, L.; Bonechi, S.; Bongi, M.; Bonvicini, V.; Bottai, S.; Brogi, P.; Carotenuto, G.; Castellini, G.; Cattaneo, P. W.; Daddi, N.; D'Alessandro, R.; Detti, S.; Finetti, N.; Italiano, A.; Lenzi, P.; Maestro, P.; Marrocchesi, P. S.; Mori, N.; Orzan, G.; Olmi, M.; Pacini, L.; Papini, P.; Pellegriti, M. G.; Rappoldi, A.; Ricciarini, S.; Sciuto, A.; Spillantini, P.; Starodubtsev, O.; Stolzi, F.; Suh, J. E.; Sulaj, A.; Tiberio, A.; Tricomi, A.; Trifiro', A.; Trimarchi, M.; Vannuccini, E.; Zampa, G.; Zampa, N.
2017-11-01
The direct detection of high-energy cosmic rays up to the PeV region is one of the major challenges for the next generation of space-borne cosmic-ray detectors. The physics performance will be primarily determined by their geometrical acceptance and energy resolution. CaloCube is a homogeneous calorimeter whose geometry allows an almost isotropic response, so as to detect particles arriving from every direction in space, thus maximizing the acceptance. A comparative study of different scintillating materials and mechanical structures has been performed by means of Monte Carlo simulation. The scintillation-Cherenkov dual-readout technique has also been considered and its benefits evaluated.
Progress towards computer simulation of NiH2 battery performance over life
NASA Technical Reports Server (NTRS)
Zimmerman, Albert H.; Quinzio, M. V.
1995-01-01
The long-term performance of rechargeable battery cells has traditionally been verified through life-testing, a procedure that generally requires significant commitments of funding and test resources. In the case of nickel hydrogen battery cells, which have the capability of providing extremely long cycle life, the time and cost required to conduct even accelerated testing have become a serious impediment to transitioning technology improvements into spacecraft applications. The use of computer simulations to indicate the changes in performance to be expected in response to design or operating changes in nickel hydrogen cells is therefore a particularly attractive tool in advanced battery development, as well as for verifying performance in different applications. Computer-based simulations of the long-term performance of rechargeable battery cells have typically had very limited success in the past. There are a number of reasons for the lack of progress in this area. First, and probably most important, all battery cells are relatively complex electrochemical systems in which performance is dictated by a large number of interacting physical and chemical processes. While the complexity alone is a significant part of the problem, in many instances the fundamental chemical and physical processes underlying long-term degradation and its effects on performance have not even been understood. Second, while specific chemical and physical changes within cell components have been associated with degradation, there has been no generalized simulation architecture that enables the chemical and physical structure (and changes therein) to be translated into cell performance. For the nickel hydrogen battery cell, our knowledge of the underlying reactions that control the performance of this cell has progressed to where it clearly is possible to model them. The recent development of a relatively general cell modelling approach provides the framework for translating the chemical and physical structure of the components inside a cell into its performance characteristics over its entire cycle life. This report describes our approach to this task in terms of defining those processes deemed critical in controlling performance over life, and the model architecture required to translate the fundamental cell processes into performance profiles.
NASA Astrophysics Data System (ADS)
Tasdighi, A.; Arabi, M.
2014-12-01
Calibration of physically based distributed hydrologic models has always been a challenging task and a subject of controversy in the literature. This study aims to investigate how different physiographic characteristics of watersheds call for adaptation of the methods used in order to obtain more robust and internally justifiable simulations. The Haw watershed (1,300 sq. mi.) is located in the Piedmont region of North Carolina and drains into B. Everett Jordan Lake, west of Raleigh. The major land covers in this watershed are forest (50%), urban/suburban (21%), and agriculture (25%), of which a large portion is pasture. Different hydrologic behaviors are observed in this watershed depending on the land use composition and size of the sub-watersheds. Highly urbanized sub-watersheds show flashier hydrographs and near-instantaneous hydrologic responses. This is also the case with smaller sub-watersheds with relatively lower percentages of urban area. The Soil and Water Assessment Tool (SWAT) has been widely used in the literature for hydrologic simulation on a daily basis using the Soil Conservation Service Curve Number method (SCS CN). However, it has not been used as frequently with the sub-daily routines. A number of studies in the literature have used coarse-time-scale (daily) precipitation with methods like SCS CN to calibrate SWAT for watersheds containing different types of land uses and soils, reporting satisfactory results at the outlet of the watershed. For physically based distributed models, however, the more important concern should be to check and analyze the internal processes leading to those results. In this study, the watershed is divided into several sub-watersheds to compare the performance of the SCS CN and Green & Ampt (GA) methods on different land uses at different spatial scales. The results suggest better performance of GA compared to SCS CN for smaller and highly urbanized sub-watersheds, although the advantage of GA is not very significant for the latter. The better performance of GA in simulating peak flows and the flashy behavior of the hydrographs is also notable. GA did not show a significant improvement over SCS CN in simulating the excess rainfall for larger sub-watersheds.
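The Green & Ampt method referenced above computes infiltration capacity from cumulative infiltration, which is what lets it respond to sub-daily rainfall; the parameter values in this sketch are illustrative, not calibrated SWAT inputs:

```python
# Green & Ampt infiltration capacity: f = K * (1 + psi * d_theta / F),
# where F is the cumulative infiltration. Parameters below are illustrative.
K = 0.65          # saturated hydraulic conductivity, cm/h
psi = 16.7        # wetting-front suction head, cm
d_theta = 0.3     # soil moisture deficit (porosity - initial water content)

def ga_capacity(F):
    return K * (1.0 + psi * d_theta / F)

# Step through a storm of constant rainfall intensity i, tracking runoff.
dt, i = 0.05, 2.0          # h, cm/h
F, t, runoff = 1e-6, 0.0, 0.0
for _ in range(60):        # 3-hour storm
    f = min(ga_capacity(F), i)   # can't infiltrate more than it rains
    F += f * dt
    runoff += (i - f) * dt
    t += dt
print(f"after {t:.1f} h: infiltrated {F:.2f} cm, runoff {runoff:.2f} cm")
# Because f responds to rainfall at the sub-daily time step, GA can capture
# the flashy response of small urbanized sub-watersheds that a daily
# curve-number budget smooths out.
```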
Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive and time consuming. One of the main contributors to the high cost and lengthy schedule is the need to perform many large-scale hardware tests and the inability to integrate all appropriate subsystems early in the design process. The NASA Glenn Research Center is developing the technologies required to enable simulations of full aerospace propulsion systems in sufficient detail to resolve critical design issues early in the design process, before hardware is built. This concept, called the Numerical Propulsion System Simulation (NPSS), is focused on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer with computing and communication technologies to capture complex physical processes in a timely and cost-effective manner. The vision for NPSS is to be a "numerical test cell" that enables full engine simulation overnight on cost-effective computing platforms. There are several key elements within NPSS that are required to achieve this capability: 1) clear data interfaces through the development and/or use of data exchange standards, 2) modular and flexible program construction through the use of object-oriented programming, 3) integrated multiple-fidelity analysis (zooming) techniques that capture the appropriate physics at the appropriate fidelity for the engine systems, 4) multidisciplinary coupling techniques, and finally 5) high-performance parallel and distributed computing. The current state of development in these five areas focuses on air-breathing gas turbine engines and is reported in this paper. However, many of the technologies are generic and can be readily applied to rocket-based systems and combined cycles currently being considered for low-cost access-to-space applications. Recent accomplishments include: (1) the development of an industry-standard engine cycle analysis program and plug 'n play architecture, called NPSS Version 1; (2) a full engine simulation that combines a 3D low-pressure subsystem with a 0D high-pressure core simulation, demonstrating the ability to integrate analyses at different levels of detail and to aerodynamically couple components, the fan/booster and low-pressure turbine, through a 3D computational fluid dynamics simulation; (3) simulation of all of the turbomachinery in a modern turbofan engine on a parallel computing platform for rapid and cost-effective execution, a capability that can also be used to generate a full compressor map, requiring both design and off-design simulation; (4) three levels of coupling that characterize the multidisciplinary analysis under NPSS: loosely coupled, process coupled, and tightly coupled, where the loosely coupled and process-coupled approaches require a common geometry definition to link CAD to analysis tools, and the tightly coupled approach is currently validating the use of an arbitrary Lagrangian/Eulerian formulation for rotating turbomachinery, with validation covering both centrifugal and axial compression systems and results reported in the paper; and (5) the demonstration of significant computing cost/performance reduction for turbine engine applications using PC clusters. The NPSS Project is supported under the NASA High Performance Computing and Communications Program.
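Element 2) of the capability list, modular construction through object-oriented programming, together with the zooming idea of element 3), can be sketched as follows; the Station state and the 0D component models are hypothetical placeholders, not NPSS classes:

```python
# Engine components expose a uniform run() interface so they can be chained,
# and any component can later be swapped for a higher-fidelity ("zoomed")
# implementation with the same interface. All models here are 0D stand-ins.
from dataclasses import dataclass

@dataclass
class Station:           # flow state passed across component interfaces
    p: float             # total pressure, kPa
    T: float             # total temperature, K

class Compressor:
    def __init__(self, pr, eff):
        self.pr, self.eff = pr, eff
    def run(self, s):
        # Ideal-gas 0D compression with an efficiency penalty.
        gamma = 1.4
        T_ideal = s.T * self.pr ** ((gamma - 1) / gamma)
        return Station(s.p * self.pr, s.T + (T_ideal - s.T) / self.eff)

class Burner:
    def __init__(self, dT, dp_frac):
        self.dT, self.dp_frac = dT, dp_frac
    def run(self, s):
        return Station(s.p * (1 - self.dp_frac), s.T + self.dT)

# Build a cycle by composing components; "zooming" would replace one element
# (e.g., Compressor) with a CFD-backed model exposing the same run().
engine = [Compressor(pr=20.0, eff=0.88), Burner(dT=1000.0, dp_frac=0.04)]
state = Station(p=101.325, T=288.15)
for component in engine:
    state = component.run(state)
print(f"burner exit: p = {state.p:.0f} kPa, T = {state.T:.0f} K")
```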
Geant4 Computing Performance Benchmarking and Monitoring
Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...
2015-12-23
Performance evaluation and analysis of large-scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute-intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call-graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, in both sequential and multi-threaded modes, includes FAST, IgProf and Open|Speedshop. Finally, the scalability of CPU time and memory performance in multi-threaded applications is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
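The throughput and memory-gain metrics used in the multi-threaded evaluation can be sketched as follows; the event counts, wall times, and resident-memory figures are made-up placeholders for profiler output, and only the derived metrics mirror the procedure:

```python
# Derived scalability metrics: event throughput (events per second of wall
# time) and memory gain relative to running n independent processes.
measurements = {
    #  threads: (events processed, wall time s, resident memory MB)
    1:  (1000,  950.0,  900.0),
    4:  (4000, 1010.0, 1350.0),
    8:  (8000, 1080.0, 1900.0),
}
base_events, base_time, base_mem = measurements[1]
base_throughput = base_events / base_time
for n, (events, wall, mem) in sorted(measurements.items()):
    throughput = events / wall
    speedup = throughput / base_throughput
    mem_gain = n * base_mem / mem   # memory saved vs n separate processes
    print(f"{n:2d} threads: {throughput:6.2f} evt/s "
          f"(speedup {speedup:4.2f}), memory gain {mem_gain:4.2f}x")
```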
Simulation-Based Training for Colonoscopy
Preisler, Louise; Svendsen, Morten Bo Søndergaard; Nerup, Nikolaj; Svendsen, Lars Bo; Konge, Lars
2015-01-01
Abstract The aim of this study was to create simulation-based tests with credible pass/fail standards for 2 different fidelities of colonoscopy models. Only competent practitioners should perform colonoscopy. Reliable and valid simulation-based tests could be used to establish basic competency in colonoscopy before practicing on patients. Twenty-five physicians (10 consultants with endoscopic experience and 15 fellows with very little endoscopic experience) were tested on 2 different simulator models: a virtual-reality simulator and a physical model. Tests were repeated twice on each simulator model. Metrics with discriminatory ability were identified for both modalities and reliability was determined. The contrasting-groups method was used to create pass/fail standards, and the consequences of these were explored. The consultants performed significantly faster and scored significantly higher than the fellows on both models (P < 0.001). Reliability analysis showed Cronbach α = 0.80 and 0.87 for the virtual-reality and the physical model, respectively. The established pass/fail standards failed one of the consultants (virtual-reality simulator) and allowed one fellow to pass (physical model). The 2 tested simulation-based modalities provided reliable and valid assessments of competence in colonoscopy, and credible pass/fail standards were established for both tests. We propose to use these standards in simulation-based training programs before proceeding to supervised training on patients. PMID:25634177
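The contrasting-groups method mentioned above can be sketched numerically: fit score distributions for the experienced and novice groups and place the pass/fail cutoff where the two densities intersect. The normal fits and the scores below are illustrative assumptions, not the study's data:

```python
# Contrasting-groups standard setting: cutoff at the intersection of the
# two groups' fitted score densities. Scores are hypothetical.
import statistics
from math import exp, sqrt, pi

def normal_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

consultants = [82, 85, 88, 79, 84, 90, 86, 83, 87, 81]
fellows     = [55, 60, 58, 62, 50, 65, 57, 59, 61, 54, 63, 52, 56, 64, 66]

mu_c, sd_c = statistics.mean(consultants), statistics.stdev(consultants)
mu_f, sd_f = statistics.mean(fellows), statistics.stdev(fellows)

# Locate the intersection numerically between the two group means.
cutoff = min((mu_f + i * (mu_c - mu_f) / 1000 for i in range(1001)),
             key=lambda x: abs(normal_pdf(x, mu_c, sd_c)
                               - normal_pdf(x, mu_f, sd_f)))
print(f"pass/fail cutoff ~ {cutoff:.1f}")
# Consequences are then checked exactly as in the study: how many experienced
# practitioners would fail, and how many novices would pass, at this cutoff.
```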
NASA Technical Reports Server (NTRS)
Drozda, Tomasz, G.; Cabell, Karen F.; Ziltz, Austin R.; Hass, Neil E.; Inman, Jennifer A.; Burns, Ross A.; Bathel, Brett F.; Danehy, Paul M.; Abul-Huda, Yasin M.; Gamba, Mirko
2017-01-01
The current work compares experimentally and computationally obtained nitric oxide (NO) planar laser-induced fluorescence (PLIF) images of the mixing flowfields for three types of high-speed fuel injectors: a strut, a ramp, and a rectangular flush-wall. These injection devices, which exhibited promising mixing performance at lower flight Mach numbers, are currently being studied as a part of the Enhanced Injection and Mixing Project (EIMP) at the NASA Langley Research Center. The EIMP aims to investigate scramjet fuel injection and mixing physics, and improve the understanding of underlying physical processes relevant to flight Mach numbers greater than eight. In the experiments, conducted in the NASA Langley Arc-Heated Scramjet Test Facility (AHSTF), the injectors are placed downstream of a Mach 6 facility nozzle, which simulates the high Mach number air flow at the entrance of a scramjet combustor. Helium is used as an inert substitute for hydrogen fuel. The PLIF is obtained by using a tunable laser to excite the NO, which is present in the AHSTF air as a direct result of arc-heating. Consequently, the absence of signal is an indication of pure helium (fuel). The PLIF images computed from the computational fluid dynamics (CFD) simulations are obtained by combining a fluorescence model for NO with the Reynolds-Averaged Simulation results carried out using the VULCAN-CFD solver to obtain a computational equivalent of the experimentally measured PLIF signal. The measured NO PLIF signal is mainly a function of NO concentration allowing for semi-quantitative comparisons between the CFD and the experiments. The PLIF signal intensity is also sensitive to pressure and temperature variations in the flow, allowing additional flow features to be identified and compared with the CFD. Good agreement between the PLIF and the CFD results provides increased confidence in the CFD simulations for investigations of injector performance.
High-resolution coupled physics solvers for analysing fine-scale nuclear reactor design problems
Mahadevan, Vijay S.; Merzari, Elia; Tautges, Timothy; ...
2014-06-30
An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, to tightly couple neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the SHARP framework.
Design and Principles Enabling the Space Reference FOM
NASA Technical Reports Server (NTRS)
Moeller, Bjoern; Dexter, Dan; Madden, Michael; Crues, Edwin Z.; Garro, Alfredo; Skuratovskiy, Anton
2017-01-01
A first complete draft of the Simulation Interoperability Standards Organization (SISO) Space Reference Federation Object Model (FOM) has now been produced. This paper provides some insights into its capabilities and discusses the opportunity for reuse in other domains. The focus of this first version of the standard is execution control, time management, coordinate systems and well-known reference frames, as well as some basic support for physical entities. The biggest part of the execution control is the coordinated start-up process. This process contains a number of steps, including checking of required federates, handling of early versus late joiners, sharing of federation-wide configuration data, and multi-phase initialization. An additional part of execution control is the coordinated and synchronized transition between Run mode, Freeze mode, and Shutdown. For time management, several timelines are defined, including real time, scenario time, High Level Architecture (HLA) logical time, and physical time. A strategy for mixing simulations that use different time steps is introduced, as well as an approach for finding common boundaries for a fully synchronized freeze. For describing spatial information, a mechanism with a set of reference frames is specified. Each reference frame has a position and orientation relative to a parent reference frame. This makes it possible for federates to perform calculations in reference frames that are convenient to them. An operation on the Moon can be performed using lunar coordinates, whereas an operation on Earth can be performed using Earth coordinates. At the same time, coordinates in one reference frame have an unambiguous relationship to coordinates in another reference frame. While the Space Reference FOM was originally developed for space operations, the authors believe that many parts of it can be reused for any simulation that focuses on physical processes with one or more coordinate systems and requires high fidelity and repeatability.
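The reference-frame mechanism can be sketched as a small tree of frames, each carrying a position and orientation relative to its parent; for brevity the sketch below uses 2D rotations and made-up offsets, whereas the Space Reference FOM itself uses full 3D attitude representations:

```python
# Each frame stores an offset and rotation relative to a parent; resolving a
# point walks up the tree, so coordinates in any frame map unambiguously to
# the root frame. Frame names and offsets are illustrative.
from math import cos, sin
from dataclasses import dataclass

@dataclass
class Frame:
    name: str
    parent: "Frame | None"      # None for the root frame
    offset: tuple               # origin of this frame expressed in parent
    angle: float                # orientation w.r.t. parent, radians

    def to_root(self, point):
        # Rotate into the parent frame, translate, then recurse upward.
        x, y = point
        c, s = cos(self.angle), sin(self.angle)
        px = self.offset[0] + c * x - s * y
        py = self.offset[1] + s * x + c * y
        return (px, py) if self.parent is None else self.parent.to_root((px, py))

root = Frame("EarthCentered", None, (0.0, 0.0), 0.0)
moon = Frame("MoonCentered", root, (384_400.0, 0.0), 0.3)
pad = Frame("LunarPad", moon, (1_737.0, 0.0), 0.0)

# An operation expressed in convenient lunar pad coordinates...
print(pad.to_root((0.0, 0.0)))   # ...still has one well-defined root position.
```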
A comprehensive combustion model for biodiesel-fueled engine simulations
NASA Astrophysics Data System (ADS)
Brakora, Jessica L.
Engine models for alternative fuels are available, but few are comprehensive, well-validated models that include accurate physical property data as well as a detailed description of the fuel chemistry. In this work, a comprehensive biodiesel combustion model was created for use in multi-dimensional engine simulations, specifically the KIVA3v R2 code. The model incorporates realistic physical properties in a vaporization model developed for multi-component fuel sprays and applies an improved mechanism for biodiesel combustion chemistry. A reduced mechanism was generated from the methyl decanoate (MD) and methyl-9-decenoate (MD9D) mechanism developed at Lawrence Livermore National Laboratory and combined with a multi-component mechanism to include n-heptane in the fuel chemistry. The biodiesel chemistry was represented using a combination of MD, MD9D, and n-heptane, which varied for a given fuel source. The reduced mechanism, which contained 63 species, accurately predicted ignition delay times of the detailed mechanism over a range of engine-specific operating conditions. Physical property data for the five methyl ester components of biodiesel were added to the KIVA library. Spray simulations were performed to ensure that the models adequately reproduce the liquid penetration observed in biodiesel spray experiments. Fuel composition affected liquid length as expected, with saturated species vaporizing more and penetrating less. Distillation curves were created to ensure the fuel vaporization process was comparable to available data. Engine validation was performed against low-speed, high-load, conventional combustion experiments, and the model was able to predict the performance and NOx formation seen in the experiments. High-speed, low-load, low-temperature combustion conditions were also modeled, and the emissions (HC, CO, NOx) and fuel consumption were well predicted for a sweep of injection timings. Finally, comparisons were made between results for different biodiesel compositions (palm vs. soy) and fuel blends (neat vs. B20). The model effectively reproduced the trends observed in the experiments.
NASA Astrophysics Data System (ADS)
Zarzycki, C. M.; Gettelman, A.; Callaghan, P.
2017-12-01
Accurately predicting weather extremes such as precipitation (floods and droughts) and temperature (heat waves) requires high resolution to resolve mesoscale dynamics and topography at horizontal scales of 10-30 km. Simulating such resolutions globally for climate scales (years to decades) remains computationally impractical, while simulating only a small region of the planet is more tractable. This work describes global simulations using variable-resolution static meshes with multiple dynamical cores that target the continental United States, using developmental versions of the Community Earth System Model version 2 (CESM2). CESM2 is tested in idealized, aquaplanet, and full-physics configurations to evaluate variable-mesh simulations against uniform high-resolution and uniform low-resolution simulations at resolutions down to 15 km. Different physical parameterization suites are also evaluated to gauge their sensitivity to resolution. Idealized variable-resolution mesh cases compare well to high-resolution tests. More recent versions of the atmospheric physics, including the cloud schemes for CESM2, are more stable with respect to changes in horizontal resolution. Most of the sensitivity is due to the timestep and to interactions between deep convection and large-scale condensation, as expected from the closure methods. The resulting full-physics model produces a climate comparable to the global low-resolution mesh and similar high-frequency statistics in the high-resolution region. Some biases are reduced (orographic precipitation in the western United States), but biases do not necessarily disappear at high resolution (e.g., summertime (JJA) surface temperature). The simulations are able to reproduce uniform high-resolution results, making them an effective tool for regional climate studies; these configurations are available in CESM2.
ERIC Educational Resources Information Center
Chini, Jacquelyn J.; Madsen, Adrian; Gire, Elizabeth; Rebello, N. Sanjay; Puntambekar, Sadhana
2012-01-01
Recent research results have failed to support the conventionally held belief that students learn physics best from hands-on experiences with physical equipment. Rather, studies have found that students who perform similar experiments with computer simulations perform as well or better on measures of conceptual understanding than their peers who…
3D Hybrid Simulations of Interactions of High-Velocity Plasmoids with Obstacles
NASA Astrophysics Data System (ADS)
Omelchenko, Y. A.; Weber, T. E.; Smith, R. J.
2015-11-01
Interactions of fast plasma streams and objects with magnetic obstacles (dipoles, mirrors, etc.) lie at the core of many space and laboratory plasma phenomena, ranging from magnetoshells and solar wind interactions with planetary magnetospheres to compact fusion plasmas (spheromaks and FRCs) to astrophysics-in-the-lab experiments. Properly modeling ion kinetic, finite-Larmor-radius, and Hall effects is essential for describing large-scale plasma dynamics, turbulence, and heating in complex magnetic field geometries. Using an asynchronous parallel hybrid code, HYPERS, we conduct 3D hybrid (particle-in-cell ion, fluid electron) simulations of such interactions under realistic conditions that include magnetic flux coils, ion-ion collisions, and the Chodura resistivity. HYPERS does not step simulation variables synchronously in time; instead it performs time integration by executing asynchronous discrete events: updates of particles and fields carried out as frequently as dictated by local physical time scales. Simulations are compared with data from the MSX experiment, which studies the physics of magnetized collisionless shocks through the acceleration and subsequent stagnation of FRC plasmoids against a strong magnetic mirror and a flux-conserving boundary.
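The asynchronous, event-driven time integration can be sketched with a priority queue of per-entity update events; the two-entity "physics" and the local time-step rule below are placeholders, not the HYPERS scheduler:

```python
# Instead of one global timestep, each entity schedules its own next update
# at a time dictated by its local physics, and a priority queue executes
# events in causal order.
import heapq

def local_dt(entity):
    # Stand-in for a local stability criterion (e.g., cyclotron period,
    # cell-crossing time); fast entities get small steps.
    return 0.01 if entity == "ions_near_mirror" else 0.1

events = [(local_dt(e), e) for e in ("ions_near_mirror", "fields_far_away")]
heapq.heapify(events)
t_end, updates = 1.0, {"ions_near_mirror": 0, "fields_far_away": 0}

while events and events[0][0] <= t_end:
    t, entity = heapq.heappop(events)
    updates[entity] += 1                    # advance this entity to time t
    heapq.heappush(events, (t + local_dt(entity), entity))

print(updates)   # the fast entity was updated ~10x more often
# A synchronous code would force every entity to the smallest dt; executing
# events asynchronously concentrates work where the physics demands it.
```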
Sound level exposure of high-risk infants in different environmental conditions.
Byers, Jacqueline F; Waugh, W Randolph; Lowman, Linda B
2006-01-01
To provide descriptive information about the sound levels to which high-risk infants are exposed in various actual environmental conditions in the NICU, including the impact of physical renovation on sound levels, and to assess the contributions of various types of equipment, alarms, and activities to sound levels in simulated conditions in the NICU. Descriptive and comparative design. Convenience sample of 134 infants at a southeastern quaternary children's hospital. A-weighted decibel (dBA) sound levels under various actual and simulated environmental conditions. The renovated NICU was, on average, 4-6 dBA quieter across all environmental conditions than a comparable nonrenovated room, representing a significant sound-level reduction. Sound levels remained above consensus recommendations despite physical redesign and staff training. Respiratory therapy equipment, alarms, staff talking, and infant fussiness contributed to higher sound levels. Evidence-based sound-reducing strategies are proposed. Findings were used to plan environment management as part of a developmental, family-centered care performance improvement program and in new NICU planning.
Beckwith, M. A.; Jiang, S.; Schropp, A.; ...
2017-05-01
Tuning the energy of an x-ray probe to an absorption line or edge can provide material-specific measurements that are particularly useful for interfaces. Simulated hard x-ray images above the Fe K-edge are presented to examine ion diffusion across an interface between Fe2O3 and SiO2 aerogel foam materials. The simulations demonstrate the feasibility of such a technique for measurements of density scale lengths near the interface with submicron spatial resolution. A proof-of-principle experiment was designed and performed at the Linac Coherent Light Source facility. Preliminary data show the change of the interface after shock compression and heating, with simultaneous fluorescence spectra for temperature determination. These results provide the first demonstration of using x-ray imaging at an absorption edge as a diagnostic to detect ultrafast phenomena for interface physics in high-energy-density systems.
Development of flat-plate solar collectors for the heating and cooling of buildings
NASA Technical Reports Server (NTRS)
Ramsey, J. W.; Borzoni, J. T.; Holland, T. H.
1975-01-01
The relevant design parameters in the fabrication of a solar collector for heating liquids were examined. The objective was to design, fabricate, and test a low-cost, flat-plate solar collector with high collection efficiency, high durability, and requiring little maintenance. Computer-aided math models of the heat transfer processes in the collector assisted in the design. The preferred physical design parameters were determined from a heat transfer standpoint and the absorber panel configuration, the surface treatment of the absorber panel, the type and thickness of insulation, and the number, spacing and material of the covers were defined. Variations of this configuration were identified, prototypes built, and performance tests performed using a solar simulator. Simulated operation of the baseline collector configuration was combined with insolation data for a number of locations and compared with a predicted load to determine the degree of solar utilization.
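The heat-transfer trade-offs that such models capture are commonly summarized by the Hottel-Whillier-Bliss efficiency line for flat-plate collectors; the sketch below uses that standard model with plausible placeholder coefficients, not the report's test results:

```python
# Hottel-Whillier-Bliss flat-plate efficiency line:
#   eta = F_R * (tau*alpha) - F_R * U_L * (T_in - T_amb) / G
# F_R, tau*alpha and U_L below are plausible placeholders.
def collector_efficiency(T_in, T_amb, G, FR=0.9, tau_alpha=0.85, UL=5.0):
    """eta from inlet temp (C), ambient temp (C), irradiance G (W/m^2)."""
    return FR * tau_alpha - FR * UL * (T_in - T_amb) / G

for T_in in (30, 50, 70, 90):
    eta = collector_efficiency(T_in, T_amb=20.0, G=800.0)
    print(f"T_in = {T_in} C: efficiency = {eta:.2f}")
# More covers lower U_L (less heat loss) but also lower tau*alpha (less
# transmitted sunlight) -- the trade-off such design studies quantify.
```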
NASA Astrophysics Data System (ADS)
Lee, Jong-Chul; Lee, Won-Ho; Kim, Woun-Jea
2015-09-01
The design and development procedures of SF6 gas circuit breakers are still largely based on trial and error through testing, although development costs rise every year. Computation cannot yet replace testing satisfactorily because not all of the real processes are taken into account. However, knowledge of the arc behavior and prediction of the thermal flow inside interrupters by numerical simulation have become more useful than experiments, owing to the difficulty of obtaining physical quantities experimentally and to the reduction of computational costs in recent years. In this paper, in order to gain further insight into the interruption process of an SF6 self-blast interrupter, which is based on a combination of thermal expansion and the arc-rotation principle, gas flow simulations with CFD-arc modeling are performed over the whole switching process: the high-current, pre-current-zero, and current-zero periods. Throughout this work, the pressure rise and the pressure ramp inside the chamber before current zero, as well as the post-arc current after current zero, prove to be good criteria for predicting the short-line-fault interruption performance of interrupters.
Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle
2009-10-19
Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert, in a pipeline fashion, to automatically locate and analyze high-energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and, based on the information derived, performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.
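The pipeline structure, per-timestep feature detection followed by integration across timesteps, can be sketched as follows; the energy threshold and particle records are illustrative, not the paper's detection criteria:

```python
# Stage 1: per-timestep detection. Stage 2: integrate detections across
# timesteps to recover temporally dynamic beams. Thresholds are assumed.
def detect_per_timestep(particles, e_min=1.0e8):
    # Keep only the IDs of high-energy particles in this timestep.
    return {pid for pid, energy in particles if energy > e_min}

def trace_across_timesteps(per_step_ids, min_steps=2):
    # A particle belongs to the beam if it stays above threshold in at
    # least min_steps timesteps (consecutive or not, in this toy version).
    counts = {}
    for ids in per_step_ids:
        for pid in ids:
            counts[pid] = counts.get(pid, 0) + 1
    return {pid for pid, n in counts.items() if n >= min_steps}

steps = [
    [(1, 2.0e8), (2, 5.0e7), (3, 1.5e8)],   # (particle id, energy) at t0
    [(1, 3.0e8), (2, 6.0e7), (3, 9.0e7)],   # t1: particle 3 fell below
    [(1, 4.0e8), (2, 7.0e7), (3, 2.0e8)],   # t2: particle 3 re-accelerated
]
beam = trace_across_timesteps([detect_per_timestep(s) for s in steps])
print(f"beam particle IDs: {sorted(beam)}")
```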
Self-Consistent Monte Carlo Study of the Coulomb Interaction under Nano-Scale Device Structures
NASA Astrophysics Data System (ADS)
Sano, Nobuyuki
2011-03-01
It has been pointed out that the Coulomb interaction between electrons is expected to be of crucial importance for predicting reliable device characteristics. In particular, device performance is greatly degraded by plasmon excitation, represented by dynamical potential fluctuations induced in the highly doped source and drain regions by the channel electrons. We employ self-consistent 3D Monte Carlo (MC) simulations, which reproduce both the correct mobility under various electron concentrations and the collective plasma waves, to study the physical impact of dynamical potential fluctuations on device performance in double-gate MOSFETs. The average force experienced by an electron due to the Coulomb interaction inside the device is evaluated by comparing the self-consistent MC simulations with fixed-potential MC simulations that exclude the Coulomb interaction. In addition, the band-tailing associated with local potential fluctuations in the highly doped source region is quantitatively evaluated, and it is found that the band-tailing becomes strongly dependent on position in real space even inside the uniform source region. This work was partially supported by Grants-in-Aid for Scientific Research B (No. 2160160) from the Ministry of Education, Culture, Sports, Science and Technology in Japan.
Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming
2017-02-01
The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead of parallel KMC simulations. We first propose a communication-aggregation algorithm to reduce the total number of messages and eliminate communication redundancy. Then, we utilize shared memory to reduce the memory-copy overhead of intra-node communication. Finally, we optimize the communication scheduling using neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
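Two of the optimizations can be sketched with mpi4py (run under mpirun): boundary events are aggregated into one message per neighbor, and the exchange uses a neighborhood collective on a distributed-graph topology so that only true neighbors communicate. The 1D periodic domain decomposition and the event records are illustrative assumptions, not the paper's implementation:

```python
# Communication aggregation + neighborhood collective, sketched with mpi4py.
# Run e.g.: mpirun -n 4 python kmc_comm_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

# Graph topology whose neighbors are the two adjacent subdomains.
graph = comm.Create_dist_graph_adjacent(sources=[left, right],
                                        destinations=[left, right])

# Aggregation: buffer boundary events locally instead of sending them one
# by one, then ship each neighbor a single combined message per sweep.
events_for = {left: [("vacancy_hop", 0, rank)],
              right: [("vacancy_hop", 99, rank), ("adatom", 98, rank)]}
send = [events_for[left], events_for[right]]   # order matches destinations
recv = graph.neighbor_alltoall(send)           # one exchange per sweep

print(f"rank {rank} received {sum(len(r) for r in recv)} boundary events")
```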
Numerical simulation of the helium gas spin-up channel performance of the relativity gyroscope
NASA Technical Reports Server (NTRS)
Karr, Gerald R.; Edgell, Josephine; Zhang, Burt X.
1991-01-01
The dependence of the spin-up system efficiency on each geometrical parameter of the spin-up channel and the exhaust passage of the Gravity Probe-B (GPB) is individually investigated. The spin-up model is coded into a computer program which simulates the spin-up process. Numerical results reveal optimal combinations of the geometrical parameters for the ultimate spin-up performance. Comparisons are also made between the numerical results and experimental data. The experimental leakage rate can only be reached when the gap between the channel lip and the rotor surface increases beyond its physical limit. The computed rotation frequency is roughly twice as high as the measured one, although the spin-up torques match fairly well.
Review of hardware-in-the-loop simulation and its prospects in the automotive area
NASA Astrophysics Data System (ADS)
Fathy, Hosam K.; Filipi, Zoran S.; Hagena, Jonathan; Stein, Jeffrey L.
2006-05-01
Hardware-in-the-loop (HIL) simulation is rapidly evolving from a control prototyping tool to a system modeling, simulation, and synthesis paradigm that synergistically combines many advantages of both physical and virtual prototyping. This paper provides a brief overview of the key enablers and numerous applications of HIL simulation, focusing on its metamorphosis from a control validation tool into a system development paradigm. It then describes a state-of-the-art engine-in-the-loop (EIL) simulation facility that highlights the use of HIL simulation for system-level experimental evaluation of powertrain interactions and the development of strategies for clean and efficient propulsion. The facility comprises a real diesel engine coupled to accurate real-time driver, driveline, and vehicle models through a highly responsive dynamometer. This enables the verification of both performance and fuel economy predictions for different conventional and hybrid powertrains. Furthermore, the facility can both replicate the highly dynamic interactions occurring within a real powertrain and measure their influence on transient emissions and visual signature through state-of-the-art instruments. The viability of this facility for integrated powertrain system development is demonstrated through a case study exploring the development of advanced High Mobility Multipurpose Wheeled Vehicle (HMMWV) powertrains.
Simulating Astrophysical Jets with Inertial Confinement Fusion Machines
NASA Astrophysics Data System (ADS)
Blue, Brent
2005-10-01
Large-scale directional outflows of supersonic plasma, also known as `jets', are ubiquitous phenomena in astrophysics. The traditional approach to understanding such phenomena is through theoretical analysis and numerical simulations. However, theoretical analysis might not capture all the relevant physics and numerical simulations have limited resolution and fail to scale correctly in Reynolds number and perhaps other key dimensionless parameters. Recent advances in high energy density physics using large inertial confinement fusion devices now allow controlled laboratory experiments on macroscopic volumes of plasma of direct relevance to astrophysics. This talk will present an overview of these facilities as well as results from current laboratory astrophysics experiments designed to study hydrodynamic jets and Rayleigh-Taylor mixing. This work is performed under the auspices of the U. S. DOE by Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48, Los Alamos National Laboratory under Contract No. W-7405-ENG-36, and the Laboratory for Laser Energetics under Contract No. DE-FC03-92SF19460.
Computational Nanoelectronics and Nanotechnology at NASA ARC
NASA Technical Reports Server (NTRS)
Saini, Subhash; Kutler, Paul (Technical Monitor)
1998-01-01
Both physical and economic considerations indicate that the scaling era of CMOS will run out of steam around the year 2010. However, physical laws also indicate that it is possible to compute at a rate of a billion times present speeds with the expenditure of only one Watt of electrical power. NASA has long-term needs where ultra-small semiconductor devices are needed for critical applications: high performance, low power, compact computers for intelligent autonomous vehicles and Petaflop computing technology are some key examples. To advance the design, development, and production of future generation micro- and nano-devices, IT Modeling and Simulation Group has been started at NASA Ames with a goal to develop an integrated simulation environment that addresses problems related to nanoelectronics and molecular nanotechnology. Overview of nanoelectronics and nanotechnology research activities being carried out at Ames Research Center will be presented. We will also present the vision and the research objectives of the IT Modeling and Simulation Group including the applications of nanoelectronic based devices relevant to NASA missions.
Divergence-Free SPH for Incompressible and Viscous Fluids.
Bender, Jan; Koschier, Dan
2017-03-01
In this paper we present a novel Smoothed Particle Hydrodynamics (SPH) method for the efficient and stable simulation of incompressible fluids. The most efficient SPH-based approaches enforce incompressibility either at the position or at the velocity level. However, the continuity equation for incompressible flow demands that both a constant density and a divergence-free velocity field be maintained. We propose a combination of two novel implicit pressure solvers enforcing both low volume compression and a divergence-free velocity field. While a compression-free fluid is essential for realistic physical behavior, a divergence-free velocity field drastically reduces the number of required solver iterations and increases the stability of the simulation significantly. Thanks to the improved stability, our method can handle larger time steps than previous approaches. This results in a substantial performance gain, since the computationally expensive neighborhood search has to be performed less frequently. Moreover, we introduce a third, optional implicit solver to simulate highly viscous fluids, which seamlessly integrates into our solver framework. Our implicit viscosity solver produces realistic results while introducing almost no numerical damping. We demonstrate the efficiency, robustness and scalability of our method in a variety of complex simulations, including scenarios with millions of turbulent particles or highly viscous materials.
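The two quantities the solvers enforce, particle density and velocity divergence, can be sketched in a few lines of SPH; the cubic-spline kernel in 2D and the particle data below are assumptions for illustration, not the paper's solver:

```python
# SPH density and velocity divergence at one particle, using a 2D cubic
# spline kernel. The smoothing length, mass and particle data are made up.
import numpy as np

h, m = 0.1, 1.0                         # smoothing length, particle mass

def w(r):                               # 2D cubic spline kernel
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)
    if q < 1.0:
        return sigma * (1 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2 - q) ** 3
    return 0.0

def grad_w(rij):                        # kernel gradient (vector)
    r = np.linalg.norm(rij)
    if r < 1e-12 or r >= 2 * h:
        return np.zeros(2)
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)
    if q < 1.0:
        dw = sigma * (-3 * q + 2.25 * q**2) / h
    else:
        dw = sigma * -0.75 * (2 - q) ** 2 / h
    return dw * rij / r

pos = np.array([[0.0, 0.0], [0.08, 0.0], [0.0, 0.09], [-0.07, 0.02]])
vel = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.4], [-0.3, 0.1]])

i = 0                                    # evaluate at particle 0
rho_i = sum(m * w(np.linalg.norm(pos[i] - pos[j])) for j in range(len(pos)))
div_v = sum(m / rho_i * (vel[j] - vel[i]) @ grad_w(pos[i] - pos[j])
            for j in range(len(pos)))
print(f"density {rho_i:.3f}, velocity divergence {div_v:.3f}")
# A DFSPH-style scheme iterates two implicit pressure solves per step until
# both the density deviation and this divergence fall below tolerances.
```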
ERIC Educational Resources Information Center
Rodrigues, João P. G. L. M.; Melquiond, Adrien S. J.; Bonvin, Alexandre M. J. J.
2016-01-01
Molecular modelling and simulations are nowadays an integral part of research in areas ranging from physics to chemistry to structural biology, as well as pharmaceutical drug design. This popularity is due to the development of high-performance hardware and of accurate and efficient molecular mechanics algorithms by the scientific community. These…
Utilization of Short-Simulations for Tuning High-Resolution Climate Model
NASA Astrophysics Data System (ADS)
Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.
2016-12-01
Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, guided by a combination of short (<10 days) and longer (1 year) Perturbed Parameter Ensemble (PPE) simulations at low resolution to identify the sensitivity of model features to parameter changes. The CAPT tests have been found effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that the approach is also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. Using CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution, and improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context, along with assessment in greater detail once an educated set of parameter choices is selected. Limitations of using short-term simulations for tuning climate models are also discussed.
Impact of chemistry on Standard High Solids Vessel Design mixing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poirier, M.
2016-03-02
The plan for resolving technical issues regarding mixing performance within vessels of the Hanford Waste Treatment Plant Pretreatment Facility directs that a chemical impact study be performed. The vessels involved are those that will process higher (e.g., 5 wt % or more) concentrations of solids. The mixing equipment design for these vessels includes both pulse jet mixers (PJM) and air spargers. This study assesses the impact of feed chemistry on the effectiveness of PJM mixing in the Standard High Solids Vessel Design (SHSVD). The overall purpose of this study is to complement the Properties that Matter document in helping to establish an acceptable physical simulant for full-scale testing. The specific objectives for this study are (1) to identify the relevant properties and behavior of the in-process tank waste that control the performance of the system being tested, (2) to assess the solubility limits of key components that are likely to precipitate or crystallize due to PJM and sparger interaction with the waste feeds, (3) to evaluate the impact of waste chemistry on rheology and agglomeration, (4) to assess the impact of temperature on rheology and agglomeration, (5) to assess the impact of organic compounds on PJM mixing, and (6) to provide the technical basis for using a physical-rheological simulant rather than a physical-rheological-chemical simulant for full-scale vessel testing. Among the conclusions reached are the following: The primary impact of precipitation or crystallization of salts due to interactions between PJMs or spargers and waste feeds is to increase the insoluble solids concentration in the slurries, which will increase the slurry yield stress. Slurry yield stress is a function of pH, ionic strength, insoluble solids concentration, and particle size. Ionic strength and chemical composition can affect particle size. Changes in temperature can affect SHSVD mixing through effects on properties such as viscosity, yield stress, solubility, and vapor pressure, or through chemical reactions that occur at high temperatures. Organic compounds will affect SHSVD mixing through their effects on properties such as rheology, particle agglomeration/size, particle density, and particle concentration.
Sword, David O; Thomas, K Jackson; Wise, Holly H; Brown, Deborah D
2017-01-01
Sophisticated high-fidelity human simulation (HFHS) manikins allow for practice of both evaluation and treatment techniques in a controlled environment in which real patients are not put at risk. However, due to high demand, student access to HFHS has been very competitive and limited. In the present study, a basic CPR manikin with a speaker implanted in the chest cavity and internet access to a variety of heart and breath sounds was used. Students were evaluated on their ability to locate and identify auscultation sites and heart/breath sounds. A five-point Likert-scale survey was administered to gain insight into student perceptions of this simulation method. Our results demonstrated that 95% of students successfully identified the heart and breath sounds. Furthermore, survey results indicated that 75% of students agreed or strongly agreed that this manner of evaluation was an effective way to assess their auscultation skills. Based on performance and perception, we conclude that a simulation method as described in this paper is a viable and cost-effective means of evaluating auscultation competency not only in student physical therapists but across other health professions as well.
Verifying and Validating Simulation Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemez, Francois M.
2015-02-23
This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
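The statistical sampling mentioned above can be illustrated with a minimal Monte Carlo propagation sketch; the model, parameter values, and distributions below are hypothetical and are not drawn from the presentation:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_samples = 100_000

# Hypothetical model: cantilever tip deflection y = F L^3 / (3 E I), with
# aleatoric variability in the load F (measured scatter) and epistemic
# uncertainty in the modulus E (imperfect knowledge).
F = rng.normal(1000.0, 50.0, n_samples)    # load [N]
E = rng.uniform(190e9, 210e9, n_samples)   # Young's modulus [Pa]
L, I = 2.0, 8.3e-6                         # geometry treated as exact [m, m^4]

y = F * L**3 / (3.0 * E * I)               # push all samples through the model
print(f"mean deflection = {y.mean():.4e} m, std = {y.std():.4e} m")
print("95% interval [m]:", np.percentile(y, [2.5, 97.5]))
```

Keeping the aleatoric input (F) and the epistemic input (E) as separately sampled sources mirrors the distinction the presentation draws between randomness and lack-of-knowledge.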
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.
2008-09-01
Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high consequence decision-making.
An assessment of coupling algorithms for nuclear reactor core physics simulations
Hamilton, Steven; Berrill, Mark; Clarno, Kevin; ...
2016-04-01
This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
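The fixed-point structure behind Picard iteration and Anderson acceleration can be sketched on a toy problem. This is a minimal illustration assuming a generic contraction map; the actual neutronics/thermal-hydraulics operators and JFNK preconditioners in the paper are far more involved:

```python
import numpy as np

def g(x):
    # Toy contraction standing in for one coupled multiphysics sweep;
    # the fixed point x* = g(x*) plays the role of the converged solution.
    return np.cos(x) * np.array([1.0, 0.5])

def picard(x, tol=1e-10, maxit=200):
    for k in range(maxit):
        x_new = g(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxit

def anderson_m1(x, tol=1e-10, maxit=200):
    # Depth-1 Anderson acceleration: a secant-like mixing of two iterates.
    gx_old, f_old = None, None
    for k in range(maxit):
        gx = g(x)
        f = gx - x                      # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x, k + 1
        if f_old is None:
            x_new = gx                  # first step is plain Picard
        else:
            df = f - f_old
            theta = f @ df / (df @ df)  # least-squares mixing coefficient
            x_new = gx - theta * (gx - gx_old)
        gx_old, f_old, x = gx, f, x_new
    return x, maxit

x0 = np.array([1.0, 1.0])
print("Picard:  ", picard(x0))
print("Anderson:", anderson_m1(x0))
```

On contractive problems like this toy map, the depth-1 Anderson variant typically converges in noticeably fewer iterations than plain Picard, which is the qualitative behavior the paper quantifies for the coupled reactor problem.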
First validation of the PASSPORT training environment for arthroscopic skills.
Tuijthof, Gabriëlle J M; van Sterkenburg, Maayke N; Sierevelt, Inger N; van Oldenrijk, Jakob; Van Dijk, C Niek; Kerkhoffs, Gino M M J
2010-02-01
The demand for high-quality care stands in contrast to the reduced training time available for residents to develop arthroscopic skills. To address this, simulators are introduced to train skills away from the operating room. In our clinic, a physical simulation environment to Practice Arthroscopic Surgical Skills for Perfect Operative Real-life Treatment (PASSPORT) is being developed. The PASSPORT concept consists of maintaining the normal arthroscopic equipment, replacing the human knee joint by a phantom, and integrating registration devices to provide performance feedback. The first prototype of the knee phantom allows inspection, treatment of menisci, irrigation, and limb stressing. PASSPORT was evaluated for face and construct validity. Construct validity was assessed by measuring the performance of two groups with different levels of arthroscopic experience (20 surgeons and 8 residents). Participants performed a navigation task five times on PASSPORT. Task times were recorded. Face validity was assessed by completion of a short questionnaire on the participants' impressions and comments for improvements. Construct validity was demonstrated as the surgeons (median task time 19.7 s [8.0-37.6]) were more efficient than the residents (55.2 s [27.9-96.6]) in task completion for each repetition (Mann-Whitney U test, P < 0.05). The prototype of the knee phantom sufficiently imitated limb outer appearance (79%), portal resistance (82%), and arthroscopic view (81%). Improvements are required for the stressing device and the material of the cruciate ligaments. Our physical simulation environment (PASSPORT) demonstrates its potential to evolve into a training modality. In the future, automated performance feedback is planned.
Large Eddy Simulation of Crashback in Marine Propulsors
NASA Astrophysics Data System (ADS)
Jang, Hyunchul
Crashback is an operating condition to quickly stop a propelled vehicle, where the propeller is rotated in the reverse direction to yield negative thrust. The crashback condition is dominated by the interaction of the free stream flow with the strong reverse flow. This interaction forms a highly unsteady vortex ring, which is a very prominent feature of crashback. Crashback causes highly unsteady loads and flow separation on the blade surface. The unsteady loads can cause propulsor blade damage, and also affect vehicle maneuverability. Crashback is therefore well known as one of the most challenging propeller states to analyze. This dissertation uses Large-Eddy Simulation (LES) to predict the highly unsteady flow field in crashback. A non-dissipative and robust finite volume method developed by Mahesh et al. (2004) for unstructured grids is applied to flow around marine propulsors. The LES equations are written in a rotating frame of reference. The objectives of this dissertation are: (1) to understand the flow physics of crashback in marine propulsors with and without a duct, (2) to develop a finite volume method for highly skewed meshes which usually occur in complex propulsor geometries, and (3) to develop a sliding interface method for simulations of rotor-stator propulsor on parallel platforms. LES is performed for an open propulsor in crashback and validated against experiments performed by Jessup et al. (2004). The LES results show good agreement with experiments. Effective pressures for thrust and side-force are introduced to more clearly understand the physical sources of thrust and side-force. Both thrust and side-force are seen to be mainly generated from the leading edge of the suction side of the propeller. This implies that thrust and side-force have the same source---the highly unsteady leading edge separation. Conditional averaging is performed to obtain quantitative information about the complex flow physics of high- or low-amplitude events. The events for thrust and side force show the same tendency. The conditional averages show that during high amplitude events, the vortex ring core is closer to the propeller blades, the reverse flow induced by the propeller rotation is lower, the forward flow is higher at the root of the blades, and leading and trailing edge flow separations are larger. The instantaneous flow field shows that during low amplitude events, the vortex ring is more axisymmetric and the stronger reverse flow induced by the vortex ring suppresses the forward flow so that flow separation on the blades is smaller. During high amplitude events, the vortex ring is less coherent and the weaker reverse flow cannot overcome the forward flow. The stronger forward flow makes flow separation on the blades larger. The effect of a duct on crashback is studied with LES. Thrust mostly arises from the blade surface, but most of side-force is generated from the duct surface. Both mean and RMS of pressure are much higher on inner surface of duct, especially near blade tips. This implies that side-force on the ducted propulsor is caused by the blade-duct interaction. Strong tip leakage flow is observed behind the suction side at the tip gap. The physical source of the tip leakage flow is seen to be the large pressure difference between pressure and suction sides. The conditional average for high amplitude event shows consistent results; the tip leakage flow and pressure difference are significantly higher when thrust and side-force are higher. 
A sliding interface method is developed to allow simulations of rotor-stator propulsors in crashback. The method allows relative rotation between different parts of the computational grid. A search algorithm for sliding elements, data structures for message passing, and an accurate interpolation scheme at the sliding interface are developed for arbitrarily shaped unstructured grids on parallel computing platforms. Preliminary simulations of an open propulsor in crashback show reasonable performance.
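The conditional averaging used above to separate high- and low-amplitude events can be illustrated with a minimal sketch on synthetic data; the array names and the one-standard-deviation event threshold are assumptions for illustration, not the dissertation's actual post-processing:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins: a thrust time series and co-sampled flow snapshots.
n_t, n_pts = 5000, 64
thrust = rng.normal(-0.5, 0.2, n_t)              # unsteady thrust coefficient
snapshots = rng.normal(0.0, 1.0, (n_t, n_pts))   # e.g. velocity on a probe line

# Define high-amplitude events as excursions beyond one standard deviation,
# then average the flow field conditioned on those instants.
mu, sigma = thrust.mean(), thrust.std()
high = np.abs(thrust - mu) > sigma
cond_avg_high = snapshots[high].mean(axis=0)
cond_avg_low = snapshots[~high].mean(axis=0)
print(f"{high.sum()} high-amplitude instants out of {n_t}")
print(f"max |high - low| conditional difference: "
      f"{np.abs(cond_avg_high - cond_avg_low).max():.3f}")
```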
2009. Rob's areas of expertise are daylighting, physically based lighting simulation, the integration of lighting simulation with whole-building energy simulations, and high-dynamic range imaging. Rob is an advisory member of the Illuminating Engineering Society.
Visual comparison testing of automotive paint simulation
NASA Astrophysics Data System (ADS)
Meyer, Gary; Fan, Hua-Tzu; Seubert, Christopher; Evey, Curtis; Meseth, Jan; Schnackenberg, Ryan
2015-03-01
An experiment was performed to determine whether typical industrial automotive color paint comparisons made using real physical samples could also be carried out using a digital simulation displayed on a calibrated color television monitor. A special light booth, designed to facilitate evaluation of the car paint color with reflectance angle, was employed in both the real and virtual color comparisons. Paint samples were measured using a multi-angle spectrophotometer and were simulated using a commercially available software package. Subjects performed the test quicker using the computer graphic simulation, and results indicate that there is only a small difference between the decisions made using the light booth and the computer monitor. This outcome demonstrates the potential of employing simulations to replace some of the time consuming work with real physical samples that still characterizes material appearance work in industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chubar O.; Berman, L; Chu, Y.S.
2012-04-04
Partially-coherent wavefront propagation calculations have proven to be feasible and very beneficial in the design of beamlines for 3rd and 4th generation Synchrotron Radiation (SR) sources. These types of calculations use the framework of classical electrodynamics for the description, on the same accuracy level, of the emission by relativistic electrons moving in magnetic fields of accelerators, and the propagation of the emitted radiation wavefronts through beamline optical elements. This enables accurate prediction of performance characteristics for beamlines exploiting high SR brightness and/or high spectral flux. Detailed analysis of radiation degree of coherence, offered by the partially-coherent wavefront propagation method, is of paramount importance for modern storage-ring based SR sources, which, thanks to extremely small sub-nanometer-level electron beam emittances, produce substantial portions of coherent flux in X-ray spectral range. We describe the general approach to partially-coherent SR wavefront propagation simulations and present examples of such simulations performed using 'Synchrotron Radiation Workshop' (SRW) code for the parameters of hard X-ray undulator based beamlines at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory. These examples illustrate general characteristics of partially-coherent undulator radiation beams in low-emittance SR sources, and demonstrate advantages of applying high-accuracy physical-optics simulations to the optimization and performance prediction of X-ray optical beamlines in these new sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Y. S.; Joo, H. G.; Yoon, J. I.
The nTRACER direct whole core transport code, employing the planar MOC solution based 3-D calculation method, the subgroup method for resonance treatment, the Krylov matrix exponential method for depletion, and a subchannel thermal/hydraulic calculation solver, was developed for practical high-fidelity simulation of power reactors. Its accuracy and performance are verified by comparison with the measurement data obtained for three pressurized water reactor cores. It is demonstrated that accurate and detailed multi-physics simulation of power reactors is practically realizable without any prior calculations or adjustments.
NASA Astrophysics Data System (ADS)
Yuan, Feng; Yoon, DooSoo; Li, Ya-Ping; Gan, Zhao-Ming; Ho, Luis C.; Guo, Fulai
2018-04-01
We investigate the effects of AGN feedback on the cosmological evolution of an isolated elliptical galaxy by performing two-dimensional high-resolution hydrodynamical numerical simulations. The inner boundary of the simulation is chosen so that the Bondi radius is resolved. Compared to previous works, the two accretion modes, namely hot and cold, which correspond to different accretion rates and have different radiation and wind outputs, are carefully discriminated, and the feedback effects by radiation and wind in each mode are taken into account. The most up-to-date AGN physics, including the descriptions of radiation and wind from hot accretion flows and of wind from cold accretion disks, is adopted. Physical processes like star formation and SNe Ia and II are taken into account. We study the AGN light curve, typical AGN lifetime, growth of the black hole mass, AGN duty cycle, star formation, and X-ray surface brightness of the galaxy. We compare our simulation results with observations and find general consistency. Comparisons with previous simulation works find significant differences, indicating the importance of AGN physics. The respective roles of radiation and wind feedback are examined, and it is found that they are different for different problems of interest, such as AGN luminosity and star formation. We find that it is hard to neglect either of them, so we suggest using the names “cold feedback mode” and “hot feedback mode” to replace the currently used ones.
NASA Astrophysics Data System (ADS)
Nellist, C.; Dinu, N.; Gkougkousis, E.; Lounis, A.
2015-06-01
The LHC accelerator complex will be upgraded between 2020 and 2022 to the High-Luminosity LHC, to considerably increase the statistics available for the various physics analyses. To operate under these challenging new conditions, and maintain excellent performance in track reconstruction and vertex location, the ATLAS pixel detector must be substantially upgraded and a full replacement is expected. Processing techniques for novel pixel designs are optimised through characterisation of test structures in a clean room and also through simulations with Technology Computer Aided Design (TCAD). A method to study non-perpendicular tracks through a pixel device is discussed. Comparison of TCAD simulations with Secondary Ion Mass Spectrometry (SIMS) measurements to investigate the doping profile of structures and validate the simulation process is also presented.
ERIC Educational Resources Information Center
Singh, Gurmukh
2012-01-01
The present article is primarily targeted at advanced college/university undergraduate students of chemistry/physics education, computational physics/chemistry, and computer science. A recent software system, MS Visual Studio .NET version 2010, is employed to perform computer simulations for modeling Bohr's quantum theory of…
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at predicting RWU patterns similar to the physical model. The statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
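For reference, the standard Feddes scheme discussed above combines a piecewise-linear stress reduction factor with a root-density weighting of the potential transpiration. A minimal sketch follows; the threshold heads and the uptake partitioning are illustrative textbook-style values, not the paper's calibrated parameters:

```python
import numpy as np

def feddes_alpha(h, h1=-10.0, h2=-25.0, h3=-400.0, h4=-8000.0):
    """Piecewise-linear Feddes stress reduction factor; pressure head h in cm
    (negative under suction). Threshold values are illustrative only."""
    h = np.asarray(h, dtype=float)
    alpha = np.zeros_like(h)
    wet = (h <= h1) & (h > h2)        # aeration-stress side, 0 -> 1
    optimal = (h <= h2) & (h >= h3)   # no stress
    dry = (h < h3) & (h >= h4)        # drought-stress side, 1 -> 0
    alpha[wet] = (h1 - h[wet]) / (h1 - h2)
    alpha[optimal] = 1.0
    alpha[dry] = (h[dry] - h4) / (h3 - h4)
    return alpha

# Uncompensated Feddes uptake per layer:
# S_i = alpha(h_i) * (R_i / sum(R)) * T_pot
h = np.array([-5.0, -50.0, -1000.0, -9000.0])   # pressure head per layer [cm]
R = np.array([0.4, 0.3, 0.2, 0.1])              # root length density fractions
T_pot = 5.0                                     # potential transpiration [mm/day]
S = feddes_alpha(h) * R / R.sum() * T_pot
print(S, "-> actual transpiration =", S.sum(), "mm/day")
```

The compensated variants evaluated in the paper redistribute the stressed layers' deficit to less-stressed layers instead of simply summing the uncompensated terms.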
Duration of mentally simulated movement before and after a golf shot.
Koyama, Satoshi; Tsuruhara, Kiyoshi; Yamamoto, Yuji
2009-02-01
This report examined the temporal consistency of preshot routines and the temporal similarity and variability between simulated movements before and after a shot. 12 male amateur golfers ages 32 to 69 years (M=53.4, SD=10.5) were assigned to two groups according to their handicaps: skilled (M=4.0 handicap, SD=3.1) and less-skilled (M=16.0 handicap, SD=6.5). They performed their shots mentally from their preshot routines to the points when the balls came to rest, then performed the same shots physically and again recalled the shots mentally. For each of four par-three holes, participants' performances were filmed, and the durations of mental and actual shots were timed. Analysis showed that the skilled golfers had more consistent preshot routines in actual movement, and they also had longer durations for the ball flight phase than the less-skilled golfers in simulated movement. The present findings support the importance of consistent preshot routines for high performance in golf; however, the duration of simulated movements was underestimated both before and after the shots. This also suggests that skilled golfers attend to performance goals both before and after shots to execute their shots under proceduralized control and to correct their movements for their next shot.
NASA Astrophysics Data System (ADS)
Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun
2018-03-01
Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion and internal friction angle, which are affected by a high degree of uncertainty, especially at a regional scale, resulting in unacceptable uncertainties in Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations, and the results are integrated into a single parameter. This parameter links the landslide probability to the uncertainties of the soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfalls on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides in 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. These testing results indicate that the new model can be operated in a highly efficient way and produces more reliable results owing to its high prediction accuracy. Accordingly, the new model can potentially be packaged into a forecasting system for shallow landslides, providing technological support for the mitigation of these disasters at the regional scale.
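A minimal sketch of the per-pixel Monte Carlo test is given below, using the common infinite-slope expression for Fs. The formula choice, parameter intervals, and water-table ratio are assumptions for illustration; the paper's model and calibrated inputs may differ:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # Monte Carlo realizations for one pixel

# Hypothetical soil parameter intervals (values for illustration only).
c = rng.uniform(2e3, 8e3, n)                   # cohesion [Pa]
phi = np.radians(rng.uniform(25.0, 35.0, n))   # internal friction angle
gamma, gamma_w = 18e3, 9.81e3                  # unit weights [N/m^3]
z, beta = 2.0, np.radians(35.0)                # soil depth [m], slope angle
m = 0.8                                        # water-table ratio (from rainfall)

# Infinite-slope factor of safety for each random draw.
fs = ((c + (gamma - m * gamma_w) * z * np.cos(beta)**2 * np.tan(phi))
      / (gamma * z * np.sin(beta) * np.cos(beta)))

p_landslide = np.mean(fs < 1.0)  # fraction of simulations predicting failure
print(f"landslide probability for this pixel: {p_landslide:.3f}")
```

Repeating this test per pixel turns the deterministic Fs map into the probability map the paper uses for warnings.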
Grid Sensitivity Study for Slat Noise Simulations
NASA Technical Reports Server (NTRS)
Lockard, David P.; Choudhari, Meelan M.; Buning, Pieter G.
2014-01-01
The slat noise from the 30P/30N high-lift system is being investigated through computational fluid dynamics simulations in conjunction with a Ffowcs Williams-Hawkings acoustics solver. Many previous simulations have been performed for the configuration, and the case was introduced as a new category for the Second AIAA workshop on Benchmark problems for Airframe Noise Configurations (BANC-II). However, the cost of the simulations has restricted the study of grid resolution effects to a baseline grid and coarser meshes. In the present study, two different approaches are being used to investigate the effect of finer resolution of near-field unsteady structures. First, a standard grid refinement by a factor of two is used, and the calculations are performed by using the same CFL3D solver employed in the majority of the previous simulations. Second, the OVERFLOW code is applied to the baseline grid, but with a 5th-order upwind spatial discretization as compared with the second-order discretization used in the CFL3D simulations. In general, the fine grid CFL3D simulation and OVERFLOW calculation are in very good agreement and exhibit the lowest levels of both surface pressure fluctuations and radiated noise. Although the smaller scales resolved by these simulations increase the velocity fluctuation levels, they appear to mitigate the influence of the larger scales on the surface pressure. These new simulations are used to investigate the influence of the grid on unsteady high-lift simulations and to gain a better understanding of the physics responsible for the noise generation and radiation.
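A standard companion to such grid-refinement studies is estimating the observed order of accuracy by Richardson extrapolation. The sketch below uses assumed sample values and is generic verification practice, not necessarily the analysis performed in the paper:

```python
import numpy as np

# Hypothetical integrated quantity (e.g., a band-limited pressure level)
# computed on coarse, medium, and fine grids with refinement ratio r = 2.
f_coarse, f_medium, f_fine = 128.4, 126.1, 125.3
r = 2.0

# Observed order of accuracy and Richardson-extrapolated estimate.
p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
print(f"observed order p = {p:.2f}, extrapolated value = {f_exact:.2f}")
```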
Performance studies of the P̄ANDA planar GEM-tracking detector in physics simulations
NASA Astrophysics Data System (ADS)
Divani Veis, Nazila; Firoozabadi, Mohammad M.; Karabowicz, Radoslaw; Maas, Frank; Saito, Takehiko R.; Voss, Bernd; P̄ANDA GEM-Tracker Subgroup
2018-03-01
The P̄ANDA experiment will be installed at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany, to study events from the annihilation of protons and antiprotons. The P̄ANDA detectors can cover a wide physics program on baryon spectroscopy and nucleon structure as well as the study of hadrons and hypernuclear physics, including the study of excited hyperon states. One very specific feature of most hyperon ground states is the long decay length of several centimeters in the forward direction. The central tracking detectors of the P̄ANDA setup are not sufficiently optimized for these long decay lengths. Therefore, using a set of planar GEM-tracking detectors in the forward region of interest can improve the results in the hyperon physics-benchmark channel. The currently designed P̄ANDA GEM-tracking stations contribute to the measurement of particles emitted at polar angles between about 2 and 22 degrees. For this design, detector performance and acceptance studies have been performed using one of the important hyperonic decay channels, p̄p → Λ̄Λ → p̄pπ⁺π⁻, in physics simulations. The simulations were carried out using the PandaRoot software packages based on the FairRoot framework.
NASA Astrophysics Data System (ADS)
Endo, S.; Lin, W.; Jackson, R. C.; Collis, S. M.; Vogelmann, A. M.; Wang, D.; Oue, M.; Kollias, P.
2017-12-01
Tropical convection is one of the main drivers of the climate system and recognized as a major source of uncertainty in climate models. High-resolution modeling is performed with a focus on the deep convection cases during the active monsoon period of the TWP-ICE field campaign to explore ways to improve the fidelity of convection permitting tropical simulations. Cloud resolving model (CRM) simulations are performed with WRF modified to apply flexible configurations for LES/CRM simulations. We have enhanced the capability of the forcing module to test different implementations of large-scale vertical advective forcing, including a function for optional use of large-scale thermodynamic profiles and a function for the condensate advection. The baseline 3D CRM configurations are, following Fridlind et al. (2012), driven by observationally-constrained ARM forcing and tested with diagnosed surface fluxes and fixed sea-surface temperature and prescribed aerosol size distributions. After the spin-up period, the simulations follow the observed precipitation peaks associated with the passages of precipitation systems. Preliminary analysis shows that the simulation is generally not sensitive to the treatment of the large-scale vertical advection of heat and moisture, while more noticeable changes in the peak precipitation rate are produced when thermodynamic profiles above the boundary layer were nudged to the reference profiles from the forcing dataset. The presentation will explore comparisons with observationally-based metrics associated with convective characteristics and examine the model performance with a focus on model physics, doubly-periodic vs. nested configurations, and different forcing procedures/sources. A radar simulator will be used to understand possible uncertainties in radar-based retrievals of convection properties. Fridlind, A. M., et al. (2012), A comparison of TWP-ICE observational data with cloud-resolving model results, J. Geophys. Res., 117, D05204, doi:10.1029/2011JD016595.
NASA Astrophysics Data System (ADS)
Lee, S.-H.; Kim, S.-W.; Angevine, W. M.; Bianco, L.; McKeen, S. A.; Senff, C. J.; Trainer, M.; Tucker, S. C.; Zamora, R. J.
2011-03-01
The performance of different urban surface parameterizations in the WRF (Weather Research and Forecasting) in simulating urban boundary layer (UBL) was investigated using extensive measurements during the Texas Air Quality Study 2006 field campaign. The extensive field measurements collected on surface (meteorological, wind profiler, energy balance flux) sites, a research aircraft, and a research vessel characterized 3-dimensional atmospheric boundary layer structures over the Houston-Galveston Bay area, providing a unique opportunity for the evaluation of the physical parameterizations. The model simulations were performed over the Houston metropolitan area for a summertime period (12-17 August) using a bulk urban parameterization in the Noah land surface model (original LSM), a modified LSM, and a single-layer urban canopy model (UCM). The UCM simulation compared quite well with the observations over the Houston urban areas, reducing the systematic model biases in the original LSM simulation by 1-2 °C in near-surface air temperature and by 200-400 m in UBL height, on average. A more realistic turbulent (sensible and latent heat) energy partitioning contributed to the improvements in the UCM simulation. The original LSM significantly overestimated the sensible heat flux (~200 W m-2) over the urban areas, resulting in warmer and higher UBL. The modified LSM slightly reduced warm and high biases in near-surface air temperature (0.5-1 °C) and UBL height (~100 m) as a result of the effects of urban vegetation. The relatively strong thermal contrast between the Houston area and the water bodies (Galveston Bay and the Gulf of Mexico) in the LSM simulations enhanced the sea/bay breezes, but the model performance in predicting local wind fields was similar among the simulations in terms of statistical evaluations. These results suggest that a proper surface representation (e.g. urban vegetation, surface morphology) and explicit parameterizations of urban physical processes are required for accurate urban atmospheric numerical modeling.
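The statistical evaluation referred to above typically reduces to paired bias and RMSE metrics between simulated and observed near-surface variables. A minimal sketch with synthetic numbers follows; the bias magnitudes loosely echo the 1-2 °C differences reported, but all values and names here are illustrative:

```python
import numpy as np

# Hypothetical paired series: observed vs simulated 2-m air temperature [deg C]
# at urban stations; both runs are synthetic stand-ins.
rng = np.random.default_rng(11)
t_obs = 28.0 + 3.0 * rng.standard_normal(500)
t_lsm = t_obs + 1.5 + 0.8 * rng.standard_normal(500)   # warm-biased run
t_ucm = t_obs + 0.2 + 0.8 * rng.standard_normal(500)   # improved run

for name, t_sim in [("original LSM", t_lsm), ("UCM", t_ucm)]:
    bias = np.mean(t_sim - t_obs)
    rmse = np.sqrt(np.mean((t_sim - t_obs) ** 2))
    print(f"{name}: bias = {bias:+.2f} C, RMSE = {rmse:.2f} C")
```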
Modeling and analysis of hybrid pixel detector deficiencies for scientific applications
NASA Astrophysics Data System (ADS)
Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.; Mohseni, Hooman
2015-08-01
Semiconductor hybrid pixel detectors often consist of a pixelated sensor layer bump bonded to a matching pixelated readout integrated circuit (ROIC). The sensor can range from high resistivity Si to III-V materials, whereas a Si CMOS process is typically used to manufacture the ROIC. Independent device physics and electronic design automation (EDA) tools are used to determine sensor characteristics and verify functional performance of ROICs, respectively, with significantly different solvers. Some physics solvers provide the capability of transferring data to the EDA tool. However, single pixel transient simulations are either not feasible due to convergence difficulties or are prohibitively long. A simplified sensor model, which includes a current pulse in parallel with a detector equivalent capacitor, is often used; even then, spice-type top-level (entire array) simulations range from days to weeks. In order to analyze detector deficiencies for a particular scientific application, accurately defined transient behavioral models of all the functional blocks are required. Furthermore, various simulations of the entire array, such as transient, noise, Monte Carlo, and inter-pixel effects, need to be performed within a reasonable time frame without trading off accuracy. The sensor and the analog front-end can be modeled using a real number modeling language as complex mathematical functions, or detailed data can be saved to text files for further top-level digital simulations. Parasitically aware digital timing is extracted in a standard delay format (sdf) from the pixel digital back-end layout as well as the periphery of the ROIC. For any given input, detector-level worst-case and best-case simulations are performed using a Verilog simulation environment to determine the output. Each top-level transient simulation takes no more than 10-15 minutes. The impact of changing key parameters such as sensor Poissonian shot noise, analog front-end bandwidth, and jitter due to clock distribution can be accurately analyzed to determine ROIC architectural viability and bottlenecks. Hence the impact of the detector parameters on the scientific application can be studied.
Rossler, Kelly L; Kimble, Laura P
2016-01-01
Didactic lecture does not lend itself to teaching interprofessional collaboration. High-fidelity human patient simulation with a focus on clinical situations/scenarios is highly conducive to interprofessional education. Consequently, a need for research supporting the incorporation of interprofessional education with high-fidelity patient simulation based technology exists. The purpose of this study was to explore readiness for interprofessional learning and collaboration among pre-licensure health professions students participating in an interprofessional education human patient simulation experience. Using a mixed methods convergent parallel design, a sample of 53 pre-licensure health professions students enrolled in nursing, respiratory therapy, health administration, and physical therapy programs within a college of health professions participated in high-fidelity human patient simulation experiences. Perceptions of interprofessional learning and collaboration were measured with the revised Readiness for Interprofessional Learning Scale (RIPLS) and the Health Professional Collaboration Scale (HPCS). Focus groups were conducted during the simulation post-briefing to obtain qualitative data. Statistical analysis included non-parametric, inferential statistics. Qualitative data were analyzed using a phenomenological approach. Pre- and post-RIPLS demonstrated pre-licensure health professions students reported significantly more positive attitudes about readiness for interprofessional learning post-simulation in the areas of team work and collaboration, negative professional identity, and positive professional identity. Post-simulation HPCS revealed pre-licensure nursing and health administration groups reported greater health collaboration during simulation than physical therapy students. Qualitative analysis yielded three themes: "exposure to experiential learning," "acquisition of interactional relationships," and "presence of chronology in role preparation." Quantitative and qualitative data converged around the finding that physical therapy students had less positive perceptions of the experience because they viewed physical therapy practice as occurring one-on-one rather than in groups. Findings support that pre-licensure students are ready to engage in interprofessional education through exposure to an experiential format such as high-fidelity human patient simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutland, Christopher J.
2009-04-26
The Terascale High-Fidelity Simulations of Turbulent Combustion (TSTC) project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of the approach is direct numerical simulation (DNS) featuring the highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for development of sub-models needed in device-level simulations. Under this component of the TSTC program the simulation code named S3D, developed and shared with coworkers at Sandia National Laboratories, has been enhanced with new numerical algorithms and physical models to provide predictive capabilities for turbulent liquid fuel spray dynamics. Major accomplishments include improved fundamental understanding of mixing and auto-ignition in multi-phase turbulent reactant mixtures and turbulent fuel injection spray jets.
NASA Astrophysics Data System (ADS)
Li, W. Q.; Wang, G.; Zhang, X. N.; Geng, H. P.; Shen, J. L.; Wang, L. S.; Zhao, J.; Xu, L. F.; Zhang, L. J.; Wu, Y. Q.; Tai, R. Z.; Chen, G.
2015-09-01
Here we present an in-depth and comprehensive study of the effect of the geometry and morphology of nanoarray (NA) substrates on their surface-enhanced Raman scattering (SERS) performance. The high-quality SERS-active NA substrates of various unit shapes and pitches are assembled through electron beam lithography and fabricated by electron beam physical vapor deposition. Good agreement is found on comparing the Raman scattering results with the integrals of the fourth power of local electric fields from the three-dimensional numerical simulations. A novel type of hybrid NA substrate composed of disordered nanoparticles and a periodic NA is fabricated and characterized. The morphology of NAs has little influence on the SERS performance of hybrid NA substrates and they perform better than both their counterparts pure NA and disordered nanoparticle substrates.
Tangible Landscape: Cognitively Grasping the Flow of Water
NASA Astrophysics Data System (ADS)
Harmon, B. A.; Petrasova, A.; Petras, V.; Mitasova, H.; Meentemeyer, R. K.
2016-06-01
Complex spatial forms like topography can be challenging to understand, much less intentionally shape, given the heavy cognitive load of visualizing and manipulating 3D form. Spatiotemporal processes like the flow of water over a landscape are even more challenging to understand and intentionally direct as they are dependent upon their context and require the simulation of forces like gravity and momentum. This cognitive work can be offloaded onto computers through 3D geospatial modeling, analysis, and simulation. Interacting with computers, however, can also be challenging, often requiring training and highly abstract thinking. Tangible computing - an emerging paradigm of human-computer interaction in which data is physically manifested so that users can feel it and directly manipulate it - aims to offload this added cognitive work onto the body. We have designed Tangible Landscape, a tangible interface powered by an open source geographic information system (GRASS GIS), so that users can naturally shape topography and interact with simulated processes with their hands in order to make observations, generate and test hypotheses, and make inferences about scientific phenomena in a rapid, iterative process. Conceptually Tangible Landscape couples a malleable physical model with a digital model of a landscape through a continuous cycle of 3D scanning, geospatial modeling, and projection. We ran a flow modeling experiment to test whether tangible interfaces like this can effectively enhance spatial performance by offloading cognitive processes onto computers and our bodies. We used hydrological simulations and statistics to quantitatively assess spatial performance. We found that Tangible Landscape enhanced 3D spatial performance and helped users understand water flow.
Semi-physical Simulation Platform of a Parafoil Nonlinear Dynamic System
NASA Astrophysics Data System (ADS)
Gao, Hai-Tao; Yang, Sheng-Bo; Zhu, Er-Lin; Sun, Qing-Lin; Chen, Zeng-Qiang; Kang, Xiao-Feng
2013-11-01
Focusing on the problems in the process of simulation and experiment on a parafoil nonlinear dynamic system, such as limited methods, high cost, and low efficiency, we present a semi-physical simulation platform. It is designed by connecting physical components to a computer, and it remedies the defect that a purely computational simulation is completely divorced from the real environment. The main components of the platform and its functions, as well as simulation flows, are introduced. The feasibility and validity are verified through a simulation experiment. The experimental results show that the platform is significant for improving the quality of the parafoil fixed-point airdrop system, shortening the development cycle, and saving cost.
Goble, David; Christie, Candice Jo-Anne
2017-06-01
The purpose of this study was to assess how cognitive and physical performance are affected during a prolonged, fatigue-inducing cricket-batting simulation. Fifteen amateur batters from three Eastern Cape schools in South Africa were recruited (mean ± SD: age 17 ± 0.92 years; stature 1.75 ± 0.07 m; body mass 78.3 ± 13.2 kg). Participants completed a 6-stage, 30-over batting simulation (BATEX©). During the protocol, there were five periods of cognitive assessment (CogState brief test battery, Melbourne, Australia). The primary outcome measures from each cognitive task were speed and accuracy/error rates. Physiological (heart rate) and physical (sprint times) responses were also recorded. Sprint times deteriorated (d = 0.84; P < 0.01) while physiological responses increased (d = 0.91; P < 0.01) as batting duration increased, with longest times and highest responses occurring in the final stage. Prolonged batting had a large effect on executive task performance (d = 0.85; P = 0.03), and moderate effects on visual attention and vigilance (d = 0.56; P = 0.21) and attention and working memory (d = 0.61; P = 0.11), reducing task performance after 30 overs. Therefore, prolonged batting with repeated shuttle running fatigues amateur batters and adversely affects higher-order cognitive function. This will affect decision-making, response selection, response execution and other batting-related executive processes. We recommend that training should incorporate greater proportions of centre-wicket batting with repeated, high-intensity shuttle running. This will improve batting-related skills and information processing when fatigued, making practice more representative of competition.
NASA Astrophysics Data System (ADS)
Mohammed, K.; Islam, A. S.; Khan, M. J. U.; Das, M. K.
2017-12-01
With the large number of hydrologic models presently available along with the global weather and geographic datasets, streamflows of almost any river in the world can be easily modeled. And if a reasonable amount of observed data from that river is available, then simulations of high accuracy can sometimes be performed after calibrating the model parameters against those observed data through inverse modeling. Although such calibrated models can succeed in simulating the general trend or mean of the observed flows very well, more often than not they fail to adequately simulate the extreme flows. This causes difficulty in tasks such as generating reliable projections of future changes in extreme flows due to climate change, which is obviously an important task due to floods and droughts being closely connected to people's lives and livelihoods. We propose an approach where the outputs of a physically-based hydrologic model are used as an input to a machine learning model to try and better simulate the extreme flows. To demonstrate this offline-coupling approach, the Soil and Water Assessment Tool (SWAT) was selected as the physically-based hydrologic model, the Artificial Neural Network (ANN) as the machine learning model and the Ganges-Brahmaputra-Meghna (GBM) river system as the study area. The GBM river system, located in South Asia, is the third largest in the world in terms of freshwater generated and forms the largest delta in the world. The flows of the GBM rivers were simulated separately in order to test the performance of this proposed approach in accurately simulating the extreme flows generated by different basins that vary in size, climate, hydrology and anthropogenic intervention on stream networks. Results show that by post-processing the simulated flows of the SWAT models with ANN models, simulations of extreme flows can be significantly improved. The mean absolute errors in simulating annual maximum/minimum daily flows were minimized from 4967 cusecs to 1294 cusecs for Ganges, from 5695 cusecs to 2115 cusecs for Brahmaputra and from 689 cusecs to 321 cusecs for Meghna. Using this approach, simulations of hydrologic variables other than streamflow can also be improved given that a decent amount of observed data for that variable is available.
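The offline coupling described above can be sketched as a supervised post-processing step: train a small network to map simulated flows (plus a short antecedent window) to observed flows. The sketch below uses synthetic data and scikit-learn; all names, values, and hyperparameters are illustrative assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data standing in for SWAT output and gauge observations.
rng = np.random.default_rng(7)
q_sim = rng.gamma(2.0, 5000.0, 3000)                  # simulated daily flow
q_obs = 1.15 * q_sim + rng.normal(0.0, 800.0, 3000)   # synthetic "observed" flow

# Feature row for day t: (q_sim[t-2], q_sim[t-1], q_sim[t]); target: q_obs[t].
X = np.column_stack([q_sim[:-2], q_sim[1:-1], q_sim[2:]])
y = q_obs[2:]

split = 2400  # chronological train/test split
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=2000, random_state=0))
model.fit(X[:split], y[:split])
q_corrected = model.predict(X[split:])

mae_raw = np.mean(np.abs(q_sim[2:][split:] - y[split:]))
mae_ann = np.mean(np.abs(q_corrected - y[split:]))
print(f"MAE raw SWAT: {mae_raw:.0f}   MAE ANN-corrected: {mae_ann:.0f}")
```

In the study itself the correction is evaluated specifically on annual maximum and minimum flows, where the physically-based model alone has the largest errors.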
ERIC Educational Resources Information Center
Stephens, A. Lynn
2012-01-01
The purpose of this study is to investigate student interactions with simulations, and teacher support of those interactions, within naturalistic high school physics classroom settings. This study focuses on data from two lesson sequences that were conducted in several physics classrooms. The lesson sequences were conducted in a whole class…
NASA Astrophysics Data System (ADS)
Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise
2018-05-01
Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying data base, additional samples are drawn via Finite-Element draping simulations according to a suitable design-table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian Regression meta-model is built from the data base. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance. Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
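A minimal sketch of the meta-model step follows, assuming two geometry parameters per doubly-curved region and a scalar formability score from pre-sampled draping simulations. The feature set, kernel, and training data here are illustrative, and the paper's Gaussian regression setup may differ:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training data: geometry parameters of doubly-curved regions
# (e.g. corner radius, depth) -> formability score from draping simulations.
rng = np.random.default_rng(3)
X = rng.uniform([5.0, 10.0], [50.0, 80.0], size=(40, 2))  # radius, depth [mm]
# Synthetic stand-in for a Finite-Element shear-angle based criterion.
y = 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.4 * X[:, 1]) / 5.0))

kernel = ConstantKernel(1.0) * RBF(length_scale=[10.0, 20.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Cheap evaluation of a new geometry variant; the predictive uncertainty
# can flag where additional draping simulations should be sampled.
x_new = np.array([[25.0, 40.0]])
mean, std = gp.predict(x_new, return_std=True)
print(f"predicted formability: {mean[0]:.2f} +/- {std[0]:.2f}")
```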
Particle-In-Cell Simulations of Asymmetric Dual Frequency Capacitive Discharge Physics
NASA Astrophysics Data System (ADS)
Wu, Alan; Lichtenberg, A. J.; Lieberman, M. A.; Verboncoeur, J. P.
2003-10-01
Dual frequency capacitive discharges are finding increasing use for etching in the microelectronics industry. In the ideal case, the high frequency power (typically 27.1-160 MHz) controls the plasma density and the low frequency power (typically 2-13.56 MHz) controls the ion energy. The electron power deposition and the dynamics of dual frequency rf sheaths are not well understood. We report on particle-in-cell computer simulations of an asymmetric dual frequency argon discharge. The simulations are performed in 1D (radial) geometry using the bounded electrostatic code XPDP1. Operating parameters are 27.1/2 MHz high/low frequencies, 10/13 cm inner/outer radii, 3-200 mTorr pressures, and 10^9-10^11 cm-3 densities. We determine the power deposition and sheath dynamics for the high frequency power alone, and with various added low frequency powers. We compare the simulation results to simple global models of dual frequency discharges. Support provided by Lam Research, NSF Grant ECS-0139956, California industries, and UC-SMART Contract SM99-10051.
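The separation of roles between the two frequencies can already be seen in the applied waveform, where the fast high-frequency oscillation rides on the slow low-frequency sweep that sets the sheath voltage scale. A minimal sketch follows; the voltage amplitudes are illustrative assumptions, not the simulation's operating point:

```python
import numpy as np

# Dual-frequency drive: the high frequency mainly controls plasma density,
# the low frequency mainly controls the ion energy (sheath voltage scale).
f_hf, f_lf = 27.1e6, 2.0e6        # frequencies [Hz], as in the simulations
V_hf, V_lf = 200.0, 800.0         # amplitudes [V], illustrative only

t = np.linspace(0.0, 1.0 / f_lf, 2000)   # one low-frequency period
V = V_hf * np.sin(2 * np.pi * f_hf * t) + V_lf * np.sin(2 * np.pi * f_lf * t)
print(f"peak applied voltage: {V.max():.0f} V (HF ripple on the LF sweep)")
```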
Fully-Coupled Thermo-Electrical Modeling and Simulation of Transition Metal Oxide Memristors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamaluy, Denis; Gao, Xujiao; Tierney, Brian David
2016-11-01
Transition metal oxide (TMO) memristors have recently attracted special attention from the semiconductor industry and academia. Memristors are one of the strongest candidates to replace flash memory, and possibly DRAM and SRAM, in the near future. Moreover, memristors have a high potential to enable beyond-CMOS technology advances in novel architectures for high performance computing (HPC). The utility of memristors has been demonstrated in reprogrammable logic (cross-bar switches), brain-inspired computing and in non-CMOS complementary logic. Indeed, the potential use of memristors as logic devices is especially important considering the inevitable end of CMOS technology scaling that is anticipated by 2025. In order to aid the on-going Sandia memristor fabrication effort with a memristor design tool and establish a clear physical picture of resistance switching in TMO memristors, we have created and validated with experimental data a simulation tool we name the Memristor Charge Transport (MCT) Simulator.
Ishikawa, Shun; Okamoto, Shogo; Isogai, Kaoru; Akiyama, Yasuhiro; Yanagihara, Naomi; Yamada, Yoji
2015-01-01
Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard.
Characterization of Stereo Vision Performance for Roving at the Lunar Poles
NASA Technical Reports Server (NTRS)
Wong, Uland; Nefian, Ara; Edwards, Larry; Furlong, Michael; Bouyssounouse, Xavier; To, Vinh; Deans, Matthew; Cannon, Howard; Fong, Terry
2016-01-01
Surface rover operations at the polar regions of airless bodies, particularly the Moon, are of special interest to future NASA science missions such as Resource Prospector (RP). Polar optical conditions present challenges to conventional imaging techniques, with repercussions for driving, safeguarding, and science. High dynamic range, long cast shadows, opposition, and white-out conditions are all significant factors in appearance. RP is currently undertaking an effort to characterize stereo vision performance in polar conditions through physical laboratory experimentation with regolith simulants, obstacle distributions, and oblique lighting.
Energy regeneration model of self-consistent field of electron beams into electric power
NASA Astrophysics Data System (ADS)
Kazmin, B. N.; Ryzhov, D. R.; Trifanov, I. V.; Snezhko, A. A.; Savelyeva, M. V.
2016-04-01
We consider physico-mathematical models of electric processes in electron beams, the conversion of beam parameters into electric power values, and their transformation for delivery to the users' electric power grid (the onboard spacecraft network). We perform computer simulations that validate the high energy efficiency of the studied processes for application in electric power technology, both for power production and for electric power plants and propulsion installations aboard spacecraft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopan, O; Kalet, A; Smith, W
2016-06-15
Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). The errors were scored for severity and frequency, and those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3 of the 6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in the record-and-verify system (38% [18–61%]) and incorrect isocenter localization in the planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. These data will guide future work on standardization and automation. Creating new checks or improving existing ones (e.g., via automation) will help in detecting those errors with low detection rates.
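For reference, the Wilson score interval used above is simple to reproduce. The sketch below uses assumed review counts chosen to approximate the reported 65% detection rate; they are illustrative, not the study's raw data.

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 ~ 95% CI)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical counts: 31 detections in 48 reviews (~65%), assumed for
# illustration only.
lo, hi = wilson_interval(31, 48)
print(f"{31/48:.0%} detected, 95% CI [{lo:.0%}, {hi:.0%}]")
```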
Abstract: Two physically based and deterministic models, CASC2-D and KINEROS, are evaluated and compared for their performance in modeling sediment movement on a small agricultural watershed over several events. Each model has a different conceptualization of a watershed. CASC...
Two physically based watershed models, GSSHA and KINEROS-2, are evaluated and compared for their performance in modeling flow and sediment movement. Each model has a different watershed conceptualization. GSSHA divides the watershed into cells, and flow and sediments are routed t...
Velocity Resolved---Scalar Modeled Simulations of High Schmidt Number Turbulent Transport
NASA Astrophysics Data System (ADS)
Verma, Siddhartha
The objective of this thesis is to develop a framework to conduct velocity resolved - scalar modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established will serve as a first step to enable future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc ≫ 1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier mode interactions in Direct Numerical Simulation (DNS) datasets and suggest that scalar modes in the viscous-convective subrange do not directly affect large-scale transport for high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be sufficiently large to wipe out any meaningful contribution from subfilter models. This provides strong incentive to develop more effective numerical schemes to support high Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the scalar being transported is enforced by applying derivative-limiting techniques, and physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm results in significant performance gain in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence, by examining vector orientation in the strain-rate eigenframe. The results indicate no discernible dependence on the modeled scalar field, and lead to the identification of the tensor-diffusivity model as a good representation of the subfilter flux. Velocity resolved - scalar modeled simulations of homogeneous isotropic turbulence are conducted to confirm the behavior theorized in these a priori analyses, and suggest that the tensor-diffusivity model is ideal for use in the viscous-convective subrange. Simulations of a turbulent mixing layer are also discussed, with the partial objective of analyzing the Schmidt number dependence of a variety of scalar statistics. Large-scale statistics are confirmed to be relatively independent of the Schmidt number for Sc ≫ 1, which is explained by the dominance of subfilter dissipation over resolved molecular dissipation in the simulations. Overall, the VR-SM framework presented is quite effective in predicting large-scale transport characteristics of high Schmidt number scalars; however, prediction of subfilter quantities would entail additional modeling intended specifically for this purpose. The VR-SM simulations presented in this thesis provide the opportunity to overlap with experimental studies, while at the same time creating an assortment of baseline datasets for future validation of LES models, thereby satisfying the objectives outlined for this work.
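As an illustration of derivative limiting for cubic Hermite interpolants, the sketch below implements the classic Fritsch-Carlson monotone limiter. It is a stricter relative of the thesis scheme, which additionally admits single sub-cell extrema to reduce numerical dissipation; the clamping thresholds here are the textbook ones, not the thesis values.

```python
import numpy as np

def limited_slopes(x, y):
    """Derivative limiting for monotone cubic Hermite interpolation
    (Fritsch-Carlson). A sketch of the general technique only; the
    thesis limiter differs in allowing single sub-cell extrema."""
    d = np.diff(y) / np.diff(x)          # secant slope on each interval
    m = np.zeros(len(y))
    m[1:-1] = 0.5 * (d[:-1] + d[1:])     # centered initial derivatives
    m[0], m[-1] = d[0], d[-1]
    for i, di in enumerate(d):
        if di == 0.0:                    # flat interval: clamp both ends
            m[i] = m[i + 1] = 0.0
            continue
        for j in (i, i + 1):
            r = m[j] / di
            if r < 0.0:                  # slope fights the data: zero it
                m[j] = 0.0
            elif r > 3.0:                # keep the cubic monotone
                m[j] = 3.0 * di
    return m

# Usage: m = limited_slopes(x, y); evaluate the Hermite cubic on each
# interval with endpoint values y and the limited derivatives m.
```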
Assessing groundwater policy with coupled economic-groundwater hydrologic modeling
NASA Astrophysics Data System (ADS)
Mulligan, Kevin B.; Brown, Casey; Yang, Yi-Chen E.; Ahlfeld, David P.
2014-03-01
This study explores groundwater management policies and the effect of modeling assumptions on the projected performance of those policies. The study compares an optimal economic allocation for groundwater use subject to streamflow constraints, achieved by a central planner with perfect foresight, with a uniform tax on groundwater use and a uniform quota on groundwater use. The policies are compared with two modeling approaches, the Optimal Control Model (OCM) and the Multi-Agent System Simulation (MASS). The economic decision models are coupled with a physically based representation of the aquifer using a calibrated MODFLOW groundwater model. The results indicate that uniformly applied policies perform poorly when simulated with more realistic, heterogeneous, myopic, and self-interested agents. In particular, the effects of the physical heterogeneity of the basin and the agents undercut the perceived benefits of policy instruments assessed with simple, single-cell groundwater modeling. This study demonstrates the results of coupling realistic hydrogeology and human behavior models to assess groundwater management policies. The Republican River Basin, which overlies a portion of the Ogallala aquifer in the High Plains of the United States, is used as a case study for this analysis.
Aerodynamic Simulation of Runback Ice Accretion
NASA Technical Reports Server (NTRS)
Broeren, Andy P.; Whalen, Edward A.; Busch, Greg T.; Bragg, Michael B.
2010-01-01
This report presents the results of recent investigations into the aerodynamics of simulated runback ice accretion on airfoils. Aerodynamic tests were performed on a full-scale model using a high-fidelity, ice-casting simulation at near-flight Reynolds (Re) number. The ice-casting simulation was attached to the leading edge of a 72-in. (1828.8-mm) chord NACA 23012 airfoil model. Aerodynamic performance tests were conducted at the ONERA F1 pressurized wind tunnel over a Reynolds number range of 4.7×10^6 to 16.0×10^6 and a Mach (M) number range of 0.10 to 0.28. For Re = 16.0×10^6 and M = 0.20, the simulated runback ice accretion on the airfoil decreased the maximum lift coefficient from 1.82 to 1.51 and decreased the stalling angle of attack from 18.1 deg to 15.0 deg. The pitching-moment slope was also increased and the drag coefficient was increased by more than a factor of two. In general, the performance effects were insensitive to Reynolds number and Mach number changes over the range tested. Follow-on, subscale aerodynamic tests were conducted on a quarter-scale NACA 23012 model (18-in. (457.2-mm) chord) at Re = 1.8×10^6 and M = 0.18, using low-fidelity, geometrically scaled simulations of the full-scale casting. It was found that simple, two-dimensional simulations of the upper- and lower-surface runback ridges provided the best representation of the full-scale, high Reynolds number iced-airfoil aerodynamics, whereas higher-fidelity simulations resulted in larger performance degradations. The experimental results were used to define a new subclassification of spanwise ridge ice that distinguishes between short and tall ridges. This subclassification is based upon the flow field and resulting aerodynamic characteristics, regardless of the physical size of the ridge and the ice-accretion mechanism.
NASA Technical Reports Server (NTRS)
Mocko, David M.; Sud, Y. C.
2000-01-01
Refinements to the snow-physics scheme of SSiB (Simplified Simple Biosphere Model) are described and evaluated. The upgrades include a partial redesign of the conceptual architecture to better simulate the diurnal temperature of the snow surface. For a deep snowpack, there are two separate prognostic temperature snow layers - the top layer responds to diurnal fluctuations in the surface forcing, while the deep layer exhibits a slowly varying response. In addition, the use of a very deep soil temperature and a treatment of snow aging, with its influence on snow density, is parameterized and evaluated. The upgraded snow scheme produces better timing of snow melt in GSWP-style simulations using ISLSCP Initiative I data for 1987-1988 in the Russian Wheat Belt region. To simulate more realistic runoff in regions with high orographic variability, additional improvements are made to SSiB's soil hydrology. These improvements include an orography-based surface runoff scheme as well as interaction with a water table below SSiB's three soil layers. The addition of these parameterizations further helps to simulate more realistic runoff and accompanying prognostic soil moisture fields in the GSWP-style simulations. In intercomparisons of the performance of the new snow-physics SSiB with its earlier versions using an 18-year single-site dataset from Valdai, Russia, the version of SSiB described in this paper again produces the earliest onset of snow melt. Soil moisture and deep soil temperatures also compare favorably with observations.
The physics of bat echolocation: Signal processing techniques
NASA Astrophysics Data System (ADS)
Denny, Mark
2004-12-01
The physical principles and signal processing techniques underlying bat echolocation are investigated. It is shown, by calculation and simulation, how the measured echolocation performance of bats can be achieved.
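A matched-filter (pulse-compression) sketch captures the core signal-processing idea: cross-correlating a frequency-modulated call with its echo yields the round-trip delay and hence the target range. All parameters below are illustrative, not measured bat calls.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 500e3                                   # sample rate [Hz]
T = 2e-3                                     # call duration [s]
t = np.arange(int(T * fs)) / fs
# Downward FM chirp, 80 kHz -> 20 kHz (illustrative, not a measured call).
call = np.sin(2 * np.pi * (80e3 * t - 15e6 * t**2))

true_delay = 1.5e-3                          # round-trip delay [s]
echo = np.concatenate([np.zeros(int(true_delay * fs)), 0.1 * call])
echo += 0.05 * rng.standard_normal(echo.size)    # receiver noise

# Matched filter: cross-correlate the received signal with the emitted call.
corr = np.correlate(echo, call, mode="valid")
est = np.argmax(corr) / fs
print(f"estimated delay {est*1e3:.2f} ms -> range {343 * est / 2:.2f} m")
```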
A performance study of WebDav access to storages within the Belle II collaboration
NASA Astrophysics Data System (ADS)
Pardi, S.; Russo, G.
2017-10-01
WebDav and HTTP are becoming popular protocols for data access in the High Energy Physics community. The most widely used Grid and Cloud storage solutions provide such interfaces; in this scenario, tuning and performance evaluation become crucial for promoting the adoption of these protocols within the Belle II community. In this work, we present the results of a large-scale test activity conducted to evaluate the performance and reliability of the WebDav protocol and to study its possible adoption for user analysis. More specifically, we considered a pilot infrastructure composed of a set of storage elements configured with the WebDav interface, hosted at the Belle II sites. The performance tests include a comparison with xrootd and gridftp. As reference tests we used a set of analysis jobs running under the Belle II software framework, accessing the input data with the ROOT I/O library, in order to simulate a realistic user activity as closely as possible. The final analysis shows the possibility of achieving promising performance with WebDav on different storage systems, and provides useful feedback for the Belle II community and for other high energy physics experiments.
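Because WebDav reads are plain HTTP, a minimal throughput probe can be a streamed GET with timing. The sketch below is generic; the URL and file name are placeholders, not Belle II infrastructure or part of its test framework.

```python
import time
import requests

# Hypothetical WebDAV endpoint; replace with a real storage element URL.
url = "https://storage.example.org/webdav/belle2/sample.root"

start = time.monotonic()
nbytes = 0
with requests.get(url, stream=True, timeout=30) as r:
    r.raise_for_status()
    for chunk in r.iter_content(chunk_size=1 << 20):   # 1 MiB chunks
        nbytes += len(chunk)
elapsed = time.monotonic() - start
print(f"{nbytes / 1e6:.1f} MB in {elapsed:.1f} s "
      f"-> {nbytes / 1e6 / elapsed:.1f} MB/s")
```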
Particle-in-cell numerical simulations of a cylindrical Hall thruster with permanent magnets
NASA Astrophysics Data System (ADS)
Miranda, Rodrigo A.; Martins, Alexandre A.; Ferreira, José L.
2017-10-01
The cylindrical Hall thruster (CHT) is a propulsion device that offers high propellant utilization and performance at smaller dimensions and lower power levels than traditional Hall thrusters. In this paper we present the first results of a numerical model of a CHT. This model solves particle and field dynamics self-consistently using a particle-in-cell approach. We describe a number of techniques applied to reduce the execution time of the numerical simulations. The specific impulse and thrust computed from our simulations are in agreement with laboratory experiments. This simplified model will allow for a detailed analysis of different thruster operational parameters and help obtain an optimal configuration to be implemented at the Plasma Physics Laboratory at the University of Brasília.
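The particle push at the heart of most particle-in-cell codes is the Boris scheme, sketched below in generic form; the field values and time step are illustrative and unrelated to the CHT model described above.

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One step of the standard Boris particle push.

    A generic sketch, not the specific CHT implementation: rotate v in B
    between two half-accelerations by E, then drift x. q_m is the
    charge-to-mass ratio [C/kg].
    """
    v_minus = v + 0.5 * q_m * dt * E          # first half electric kick
    t = 0.5 * q_m * dt * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)   # velocity rotated by B
    v_new = v_plus + 0.5 * q_m * dt * E       # second half electric kick
    return x + dt * v_new, v_new

# Example: electron gyrating in a uniform 10 mT axial field.
x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 0.01])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_m=-1.759e11, dt=1e-11)
```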
Laboratory simulations of Martian gullies on sand dunes
NASA Astrophysics Data System (ADS)
Védie, E.; Costard, F.; Font, M.; Lagarde, J. L.
2008-11-01
Small gullies observed on Mars could be formed by groundwater seepage from an underground aquifer or may result from the melting of near-surface ground ice at high obliquity. To test these hypotheses, a cold-room laboratory simulation has been performed. The experimental slope was designed to simulate debris flows on sand dune slopes at a range of angles and with different granulometries and permafrost characteristics. Preliminary results suggest that the typical morphology of gullies observed on Mars is best reproduced by the formation of linear debris flows related to the melting of near-surface ground ice with silty materials. This physical modelling highlights the role of periglacial conditions, especially the active-layer thickness, during debris-flow formation.
The new Langley Research Center advanced real-time simulation (ARTS) system
NASA Technical Reports Server (NTRS)
Crawford, D. J.; Cleveland, J. I., II
1986-01-01
Based on a survey of current local area network technology with special attention paid to high bandwidth and very low transport delay requirements, NASA's Langley Research Center designed a new simulation subsystem using the computer automated measurement and control (CAMAC) network. This required significant modifications to the standard CAMAC system and development of a network switch, a clocking system, new conversion equipment, new consoles, supporting software, etc. This system is referred to as the advanced real-time simulation (ARTS) system. It is presently being built at LaRC. This paper provides a functional and physical description of the hardware and a functional description of the software. The requirements which drove the design are presented as well as present performance figures and status.
SURFACTANT - POLYMER INTERACTION FOR IMPROVED OIL RECOVERY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unknown
1998-10-01
The goal of this research is to use the interaction between a surfactant and a polymer for efficient displacement of tertiary oil by improving slug integrity, adsorption, and mobility control. Surfactant-polymer flooding has been shown to be highly effective in laboratory-scale linear floods. The focus of this proposal is to design an inexpensive surfactant-polymer mixture that can efficiently recover tertiary oil by avoiding surfactant slug degradation, high adsorption, and viscous/heterogeneity fingering. A mixture comprising a "pseudo oil" with appropriate surfactant and polymer has been selected to study micellar-polymer chemical flooding. The physical properties and phase behavior of this system have been determined. A surfactant-polymer slug has been designed to achieve high-efficiency recovery by improving phase behavior and mobility control. Recovery experiments have been performed on linear cores and a quarter 5-spot. The same recovery experiments have been simulated using a commercially available simulator (UTCHEM). Good agreement between experimental data and simulation results has been achieved.
High-performance scientific computing in the cloud
NASA Astrophysics Data System (ADS)
Jorissen, Kevin; Vila, Fernando; Rehr, John
2011-03-01
Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.
Umar, Amara; Javaid, Nadeem; Ahmad, Ashfaq; Khan, Zahoor Ali; Qasim, Umar; Alrajeh, Nabil; Hayat, Amir
2015-06-18
Performance enhancement of Underwater Wireless Sensor Networks (UWSNs) in terms of throughput maximization, energy conservation, and Bit Error Rate (BER) minimization is a potential research area. However, limited available bandwidth, high propagation delay, highly dynamic network topology, and high error probability lead to performance degradation in these networks. In this regard, many cooperative communication protocols have been developed that investigate either the physical layer or the Medium Access Control (MAC) layer; however, the network layer is still unexplored. More specifically, cooperative routing has not yet been jointly considered with sink mobility. Therefore, this paper aims to enhance network reliability and efficiency via dominating-set-based cooperative routing and sink mobility. The proposed work is validated via simulations, which show relatively improved performance of our proposed work in terms of the selected performance metrics.
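The graph-theoretic step behind dominating-set-based routing can be sketched with a greedy approximation of a minimum dominating set, which selects candidate relay nodes so that every node either is a dominator or neighbors one. The toy topology and selection rule below are illustrative, not the paper's protocol (relay selection, cooperation, and sink mobility are beyond this sketch).

```python
def greedy_dominating_set(adj):
    """Greedy approximation of a minimum dominating set.

    `adj` maps node -> set of neighbours. Repeatedly pick the node that
    covers the most still-uncovered nodes until all are covered.
    """
    uncovered = set(adj)
    dominators = set()
    while uncovered:
        best = max(adj, key=lambda n: len(({n} | adj[n]) & uncovered))
        dominators.add(best)
        uncovered -= {best} | adj[best]
    return dominators

# Toy 6-node topology (illustrative only).
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1, 5}, 4: {2, 5}, 5: {3, 4}}
print(greedy_dominating_set(adj))   # e.g. {1, 4}
```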
Jiang, Xianan; Waliser, Duane E.; Xavier, Prince K.; ...
2015-05-27
Aimed at reducing deficiencies in representing the Madden-Julian oscillation (MJO) in general circulation models (GCMs), a global model evaluation project on the vertical structure and physical processes of the MJO was coordinated. In this paper, results from the climate simulation component of this project are reported. It is shown that the MJO remains a great challenge in these latest-generation GCMs. The systematic eastward propagation of the MJO is well simulated in only about one fourth of the participating models. The observed vertical westward tilt with altitude of the MJO is well simulated in good MJO models but not in the poor ones. Damped Kelvin wave responses to the east of convection in the lower troposphere could be responsible for the missing MJO preconditioning process in these poor MJO models. Several process-oriented diagnostics were conducted to discriminate key processes for realistic MJO simulations. While large-scale rainfall partition and low-level mean zonal winds over the Indo-Pacific in a model are not found to be closely associated with its MJO skill, two metrics, the low-level relative humidity difference between high- and low-rain events and the seasonal mean gross moist stability, exhibit statistically significant correlations with MJO performance. It is further indicated that increased cloud-radiative feedback tends to be associated with reduced amplitude of intraseasonal variability, which is incompatible with the radiative instability theory previously proposed for the MJO. Finally, results in this study confirm that inclusion of air-sea interaction can lead to significant improvement in simulating the MJO.
Hypersonic Combustor Model Inlet CFD Simulations and Experimental Comparisons
NASA Technical Reports Server (NTRS)
Venkatapathy, E.; TokarcikPolsky, S.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)
1995-01-01
Numerous two- and three-dimensional computational simulations were performed for the inlet associated with the combustor model for the hypersonic propulsion experiment in the NASA Ames 16-Inch Shock Tunnel. The inlet was designed to produce a combustor-inlet flow that is nearly two-dimensional and of sufficient mass flow rate for large-scale combustor testing. The three-dimensional simulations demonstrated that the inlet design met all the design objectives and that the inlet produced a very nearly two-dimensional combustor inflow profile. Numerous two-dimensional simulations were performed with various levels of approximation, such as in the choice of chemical and physical models, as well as numerical approximations. Parametric studies were conducted to better understand and characterize the inlet flow. Results from the two- and three-dimensional simulations were used to predict the mass flux entering the combustor, and a mass flux correlation as a function of facility stagnation pressure was developed. Surface heat flux and pressure measurements were compared with the computed results and good agreement was found. The computational simulations helped determine the inlet flow characteristics in the high-enthalpy environment, the important parameters that affect the combustor-inlet flow, and the sensitivity of the inlet flow to various modeling assumptions.
NASA Astrophysics Data System (ADS)
Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Christensen, Hannah M.; Juricke, Stephan; Subramanian, Aneesh; Watson, Peter A. G.; Weisheimer, Antje; Palmer, Tim N.
2017-03-01
The Climate SPHINX (Stochastic Physics HIgh resolutioN eXperiments) project is a comprehensive set of ensemble simulations aimed at evaluating the sensitivity of present and future climate to model resolution and stochastic parameterisation. The EC-Earth Earth system model is used to explore the impact of stochastic physics in a large ensemble of 30-year climate integrations at five different atmospheric horizontal resolutions (from 125 up to 16 km). The project includes more than 120 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), together with coupled transient runs (1850-2100). A total of 20.4 million core hours have been used, made available from a single year grant from PRACE (the Partnership for Advanced Computing in Europe), and close to 1.5 PB of output data have been produced on SuperMUC IBM Petascale System at the Leibniz Supercomputing Centre (LRZ) in Garching, Germany. About 140 TB of post-processed data are stored on the CINECA supercomputing centre archives and are freely accessible to the community thanks to an EUDAT data pilot project. This paper presents the technical and scientific set-up of the experiments, including the details on the forcing used for the simulations performed, defining the SPHINX v1.0 protocol. In addition, an overview of preliminary results is given. An improvement in the simulation of Euro-Atlantic atmospheric blocking following resolution increase is observed. It is also shown that including stochastic parameterisation in the low-resolution runs helps to improve some aspects of the tropical climate - specifically the Madden-Julian Oscillation and the tropical rainfall variability. These findings show the importance of representing the impact of small-scale processes on the large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).
Nagayama, T.; Bailey, J. E.; Loisel, G.; ...
2016-02-05
Recently, frequency-resolved iron opacity measurements at electron temperatures of 170–200 eV and electron densities of (0.7–4.0)×10^22 cm^-3 revealed a 30–400% disagreement with the calculated opacities [J. E. Bailey et al., Nature (London) 517, 56 (2015)]. The discrepancies have a high impact on astrophysics, atomic physics, and high-energy density physics, and it is important to verify our understanding of the experimental platform with simulations. Reliable simulations are challenging because the temporal and spatial evolution of the source radiation and of the sample plasma are both complex and incompletely diagnosed. In this article, we describe simulations that reproduce the measured temperature and density in recent iron opacity experiments performed at the Sandia National Laboratories Z facility. The time-dependent spectral irradiance at the sample is estimated using the measured time- and space-dependent source radiation distribution, in situ source-to-sample distance measurements, and a three-dimensional (3D) view-factor code. The inferred spectral irradiance is used to drive 1D sample radiation hydrodynamics simulations. The images recorded by slit-imaged space-resolved spectrometers are modeled by solving radiation transport of the source radiation through the sample. We find that the same drive radiation time history successfully reproduces the measured plasma conditions for eight different opacity experiments. These results provide a quantitative physical explanation for the observed dependence of both temperature and density on the sample configuration. Simulated spectral images for the experiments without the FeMg sample show quantitative agreement with the measured spectral images. The agreement in spectral profile, spatial profile, and brightness provides further confidence in our understanding of the backlight-radiation time history and image formation. Furthermore, these simulations bridge the static-uniform picture of the data interpretation and the dynamic-gradient reality of the experiments, and they will allow us to quantitatively assess the impact of effects neglected in the data interpretation.
Exploiting MIC architectures for the simulation of channeling of charged particles in crystals
NASA Astrophysics Data System (ADS)
Bagli, Enrico; Karpusenko, Vadim
2016-08-01
Coherent interaction of ultra-relativistic particles with crystals is an area of science under development. DYNECHARM++ is a toolkit for the simulation of coherent interactions between high-energy charged particles and complex crystal structures. The particle trajectory in a crystal is computed through numerical integration of the equation of motion. The code was revised and improved in order to exploit parallelization on multiple cores and vectorization of single instructions on multiple data. An Intel Xeon Phi card was adopted for the performance measurements. The computation time was shown to scale linearly as a function of the number of physical and virtual cores. By enabling the auto-vectorization flag of the compiler, a threefold speedup was obtained. The performance of the card was compared to that of a dual Xeon system.
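The trajectory integration named above can be sketched with a generic velocity-Verlet step; the harmonic interplanar potential and its strength are assumptions for illustration, not DYNECHARM++ internals.

```python
import numpy as np

def velocity_verlet(x0, v0, accel, dt, nsteps):
    """Velocity-Verlet integration of x'' = accel(x).

    A generic sketch of the integration step; the real code integrates
    the transverse equation of motion in the crystal potential.
    """
    x, v = x0, v0
    a = accel(x)
    xs = np.empty(nsteps)
    for i in range(nsteps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = accel(x)
        v += 0.5 * (a + a_new) * dt
        a = a_new
        xs[i] = x
    return xs

# Harmonic approximation of the interplanar potential; omega is an
# assumed dimensionless oscillation frequency for the channeled particle.
omega = 2 * np.pi
xs = velocity_verlet(x0=0.2, v0=0.0, accel=lambda x: -omega**2 * x,
                     dt=1e-3, nsteps=5000)
```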
The Oceanographic Multipurpose Software Environment (OMUSE v1.0)
NASA Astrophysics Data System (ADS)
Pelupessy, Inti; van Werkhoven, Ben; van Elteren, Arjen; Viebahn, Jan; Candy, Adam; Portegies Zwart, Simon; Dijkstra, Henk
2017-08-01
In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). OMUSE aims to provide a homogeneous environment for existing or newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales can be easily designed. Rapid development of simulation models is made possible through the creation of simple high-level scripts. The low-level core of the abstraction in OMUSE is designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides. Reproducibility in numerical experiments is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP (Parallel Ocean Program). The uniform access to the codes' simulation state and the extensive automation of data transfer and conversion operations aids the implementation of model couplings. We discuss the types of couplings that can be implemented using OMUSE. We also present example applications that demonstrate the straightforward model initialization and the concurrent use of data analysis tools on a running model. We give examples of multiscale and multiphysics simulations by embedding a regional ocean model into a global ocean model and by coupling a surface wave propagation model with a coastal circulation model.
Neurosurgery simulation in residency training: feasibility, cost, and educational benefit.
Gasco, Jaime; Holbrook, Thomas J; Patel, Achal; Smith, Adrian; Paulson, David; Muns, Alan; Desai, Sohum; Moisi, Marc; Kuo, Yong-Fan; Macdonald, Bart; Ortega-Barnett, Juan; Patterson, Joel T
2013-10-01
The effort required to introduce simulation in neurosurgery academic programs and the benefits perceived by residents have not been systematically assessed. Our objective was to create a neurosurgery simulation curriculum encompassing basic and advanced skills, cadaveric dissection, cranial and spine surgery simulation, and endovascular and computerized haptic training. A curriculum with 68 core exercises per academic year was distributed in individualized sets of 30 simulations to 6 neurosurgery residents. The total number of procedures completed during the academic year was set to 180. The curriculum includes 79 simulations with physical models, 57 cadaver dissections, and 44 haptic/computerized sessions. Likert-type evaluations regarding self-perceived performance were completed after each exercise. Subject identification was blinded to junior (postgraduate years 1-3) or senior (postgraduate years 4-6) resident status. Wilcoxon rank testing was used to detect differences within and between groups. One hundred eighty procedures and surveys were analyzed. Junior residents reported proficiency improvements in 82% of simulations performed (P < .001). Senior residents reported improvement in 42.5% of simulations (P < .001). Cadaver simulations accrued the highest reported benefit (71.5%; P < .001), followed by physical simulators (63.8%; P < .001) and haptic/computerized sessions (59.1%; P < .001). Initial cost is $341,978.00, with $27,876.36 for annual operational expenses. The systematic implementation of a simulation curriculum in a neurosurgery training program is feasible, is favorably regarded, and has a positive impact on trainees of all levels, particularly in junior years. All simulation forms, cadaver, physical, and haptic/computerized, have a role in different stages of learning and should be considered in the development of an educational simulation program.
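The Wilcoxon testing mentioned above compares paired ratings, for example pre/post self-assessments for one exercise. A minimal sketch with fabricated Likert data (not the study's data) using SciPy:

```python
from scipy.stats import wilcoxon

# Hypothetical paired pre/post self-ratings (1-5 Likert) for one exercise;
# values are made up to show the test, not drawn from the study.
pre  = [2, 3, 2, 4, 3, 2, 3, 2, 3, 4, 2, 3]
post = [4, 4, 3, 5, 4, 3, 4, 3, 4, 5, 3, 4]

stat, p = wilcoxon(pre, post)   # paired, non-parametric signed-rank test
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
```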
NASA Astrophysics Data System (ADS)
Byun, D. W.; Rappenglueck, B.; Lefer, B.
2007-12-01
Accurate meteorological and photochemical modeling efforts are necessary to understand the measurements made during the Texas Air Quality Study (TexAQS-II). The main objective of the study is to understand the meteorological and chemical processes of high-ozone and regional-haze events in eastern Texas, including the Houston-Galveston metropolitan area. Real-time and retrospective meteorological and photochemical model simulations were performed to study key physical and chemical processes in the Houston-Galveston area. In particular, the Vertical Mixing Experiment (VME) was performed at the University of Houston campus on selected days during TexAQS-II. Results of the MM5 meteorological model and CMAQ air quality model simulations were compared with the VME and other TexAQS-II measurements to understand the interaction of boundary layer dynamics and photochemical evolution affecting Houston air quality.
Comparison of deterministic and stochastic methods for time-dependent Wigner simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Sihong, E-mail: sihong@math.pku.edu.cn; Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg
2015-11-01
Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, a highly accurate deterministic method utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in detail. In particular, this allows us to depict a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.
Petascale supercomputing to accelerate the design of high-temperature alloys
Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; ...
2017-10-25
Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
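The machine-learning step can be sketched as a small surrogate-regression workflow: fit a model on descriptor vectors for the 34 solutes, then predict segregation energies for unseen elements without further DFT. The descriptors and data below are synthetic placeholders, not the paper's dataset or feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for 34 solutes x 4 elemental descriptors
# (e.g. atomic radius, electronegativity, ...); placeholders only.
rng = np.random.default_rng(0)
X = rng.normal(size=(34, 4))
y = X @ np.array([0.8, -0.5, 0.3, 0.1]) + 0.1 * rng.normal(size=34)

# Fit a surrogate and check predictive skill by cross-validation.
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
```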
Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model
NASA Astrophysics Data System (ADS)
Kumar, M.; Duffy, C.
2006-05-01
Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface, subsurface properties, and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes needs less computation, but this negatively affects the accuracy of model results and restricts physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy, and predictive uncertainty in relation to various approximations of physical processes; (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables; and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also, the types and time scales of hydrologic processes which are dominant in different parts of the basin are different. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch front. Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient, and accurate integration of the process model with the data model. The flexibility of this framework leads to implementation of multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of the existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a network of a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, model data structures, and parallel numerical algorithms to obtain high performance.
Functional performance testing for power and return to sports.
Manske, Robert; Reiman, Michael
2013-05-01
Functional performance testing of athletes can determine physical limitations that may affect sporting activities. Optimal functional performance testing simulates the athlete's activity. A Medline search from 1960 to 2012 was performed with the keywords functional testing, functional impairment testing, and functional performance testing, limited to the English language. Each author also undertook independent searches of article references. Functional performance tests can bridge the gap between general physical tests and full, unrestricted athletic activity.
Mechanism study and numerical simulation of Uranium nitriding induced by high energy laser
NASA Astrophysics Data System (ADS)
Zhu, Yuan; Xu, Jingjing; Qi, Yanwen; Li, Shengpeng; Zhao, Hui
2018-06-01
The gradients of interfacial tension induced by local heating lead to Marangoni convection, which has a significant effect on surface formation and mass transport in the laser nitriding of uranium. Direct experimental observation of the underlying processes is very difficult. In the present study, Marangoni convection was considered, and the computational fluid dynamics (CFD) analysis technique of the FLUENT program was used to determine physical processes such as heat transfer and mass transport. The process of gas-liquid falling-film desorption was represented by combining a phase-change model with the volume-of-fluid (VOF) model. The time-dependent distribution of the temperature was derived. Moreover, the concentration and distribution of nitrogen across the laser spot were calculated. The simulation results matched the experimental data. The numerical method provides a better approach to understanding the physical processes and dependencies of coating formation.
Fault-tolerant Control of a Cyber-physical System
NASA Astrophysics Data System (ADS)
Roxana, Rusu-Both; Eva-Henrietta, Dulf
2017-10-01
Cyber-physical systems represent a new and emerging field in automatic control. Fault handling is a key component, because modern, large-scale processes must meet high standards of performance, reliability, and safety. Fault propagation in large-scale chemical processes can lead to loss of production, energy, and raw materials, and even to environmental hazard. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional-order controllers for a (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault-tolerant control system, while the operational model of the process was implemented in the Matlab/SIMULINK environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. In order to verify the performance and prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.
NASA Astrophysics Data System (ADS)
Torres, Hilario; Iaccarino, Gianluca
2017-11-01
Soleil-X is a multi-physics solver being developed at Stanford University as part of the Predictive Science Academic Alliance Program II. Our goal is to conduct high-fidelity simulations of particle-laden turbulent flows in a radiation environment for solar energy receiver applications, as well as to demonstrate our readiness to effectively utilize next-generation Exascale machines. The novel aspect of Soleil-X is that it is built upon the Legion runtime system to enable easy portability to different parallel distributed heterogeneous architectures while also being written entirely in high-level/high-productivity languages (Ebb and Regent). An overview of the Soleil-X software architecture will be given. Results from coupled fluid flow, Lagrangian point-particle tracking, and thermal radiation simulations will be presented. Performance diagnostic tools and metrics corresponding to the same cases will also be discussed. US Department of Energy, National Nuclear Security Administration.
Laser-driven Ion Acceleration using Nanodiamonds
NASA Astrophysics Data System (ADS)
D'Hauthuille, Luc; Nguyen, Tam; Dollar, Franklin
2016-10-01
Interactions of high-intensity lasers with mass-limited nanoparticles enable the generation of extremely high electric fields. These fields accelerate ions, which has applications in nuclear medicine, high brightness radiography, as well as fast ignition for inertial confinement fusion. Previous studies have been performed with ensembles of nanoparticles, but this obscures the physics of the interaction due to the wide array of variables in the interaction. The work presented here looks instead at the interactions of a high intensity short pulse laser with an isolated nanodiamond. Specifically, we studied the effect of nanoparticle size and intensity of the laser on the interaction. A novel target scheme was developed to isolate the nanodiamond. Particle-in-cell simulations were performed using the EPOCH framework to show the sheath fields and resulting energetic ion beams.
An Integrated Simulation Module for Cyber-Physical Automation Systems.
Ferracuti, Francesco; Freddi, Alessandro; Monteriù, Andrea; Prist, Mariorosario
2016-05-05
The integration of Wireless Sensors Networks (WSNs) into Cyber Physical Systems (CPSs) is an important research problem to solve in order to increase the performances, safety, reliability and usability of wireless automation systems. Due to the complexity of real CPSs, emulators and simulators are often used to replace the real control devices and physical connections during the development stage. The most widespread simulators are free, open source, expandable, flexible and fully integrated into mathematical modeling tools; however, the connection at a physical level and the direct interaction with the real process via the WSN are only marginally tackled; moreover, the simulated wireless sensor motes are not able to generate the analogue output typically required for control purposes. A new simulation module for the control of a wireless cyber-physical system is proposed in this paper. The module integrates the COntiki OS JAva Simulator (COOJA), a cross-level wireless sensor network simulator, and the LabVIEW system design software from National Instruments. The proposed software module has been called "GILOO" (Graphical Integration of Labview and cOOja). It allows one to develop and to debug control strategies over the WSN both using virtual or real hardware modules, such as the National Instruments Real-Time Module platform, the CompactRio, the Supervisory Control And Data Acquisition (SCADA), etc. To test the proposed solution, we decided to integrate it with one of the most popular simulators, i.e., the Contiki OS, and wireless motes, i.e., the Sky mote. As a further contribution, the Contiki Sky DAC driver and a new "Advanced Sky GUI" have been proposed and tested in the COOJA Simulator in order to provide the possibility to develop control over the WSN. To test the performances of the proposed GILOO software module, several experimental tests have been made, and interesting preliminary results are reported. The GILOO module has been applied to a smart home mock-up where a networked control has been developed for the LED lighting system.
High-performance modeling of plasma-based acceleration and laser-plasma interactions
NASA Astrophysics Data System (ADS)
Vay, Jean-Luc; Blaclard, Guillaume; Godfrey, Brendan; Kirchen, Manuel; Lee, Patrick; Lehe, Remi; Lobet, Mathieu; Vincenti, Henri
2016-10-01
Large-scale numerical simulations are essential to the design of plasma-based accelerators and laser-plasma interactions for ultra-high intensity (UHI) physics. The electromagnetic Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations, as it is based on first principles, captures all kinetic effects, and scales favorably to many cores on supercomputers. The standard PIC algorithm relies on second-order finite-difference discretization of the Maxwell and Newton-Lorentz equations. We present here novel formulations, based on very high-order pseudo-spectral Maxwell solvers, which enable near-total elimination of the numerical Cherenkov instability and increased accuracy over the standard PIC method for standard laboratory-frame and Lorentz-boosted-frame simulations. We also present the latest implementations in the PIC modules Warp-PICSAR and FBPIC on the Intel Xeon Phi and GPU architectures. Examples of applications will be given on the simulation of laser-plasma accelerators and high-harmonic generation with plasma mirrors. Work supported by US-DOE Contracts DE-AC02-05CH11231 and by the European Commission through the Marie Skłodowska-Curie fellowship PICSSAR, Grant Number 624543. This research used resources of NERSC.
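The accuracy argument for pseudo-spectral solvers is easy to demonstrate in one dimension: an FFT-based derivative of a smooth periodic field is accurate to round-off for resolved modes, while a second-order centered difference (the standard PIC discretization) incurs much larger errors at modest resolution. A minimal sketch:

```python
import numpy as np

n = 64
x = 2 * np.pi * np.arange(n) / n
f = np.sin(3 * x)
exact = 3 * np.cos(3 * x)

# Spectral derivative: multiply each Fourier mode by i*k.
k = 1j * np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers
df_spec = np.real(np.fft.ifft(k * np.fft.fft(f)))

# Second-order centered finite difference on the same grid.
h = 2 * np.pi / n
df_fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)

print("spectral max error:", np.max(np.abs(df_spec - exact)))  # ~round-off
print("2nd-order FD error:", np.max(np.abs(df_fd - exact)))    # percent level
```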
A Physics-Based Vibrotactile Feedback Library for Collision Events.
Park, Gunhyuk; Choi, Seungmoon
2017-01-01
We present PhysVib: a software solution on the mobile platform extending an open-source physics engine in a multi-rate rendering architecture for automatic vibrotactile feedback upon collision events. PhysVib runs concurrently with a physics engine at a low update rate and generates vibrotactile feedback commands at a high update rate based on the simulation results of the physics engine, using an exponentially-decaying sinusoidal model. We demonstrate through a user study that this vibration model is more appropriate to our purpose in terms of perceptual quality than more complex models based on sound synthesis. We also evaluated the perceptual performance of PhysVib by comparing eight vibrotactile rendering methods. Experimental results suggested that PhysVib enables more realistic vibrotactile feedback than the other methods with respect to perceived similarity to the visual events. PhysVib is an effective solution for providing physically plausible vibrotactile responses while reducing application development time to a great extent.
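The exponentially-decaying sinusoidal model is compact enough to sketch directly; the frequency, decay rate, and amplitude mapping below are assumed values for illustration, not PhysVib's tuned parameters.

```python
import numpy as np

def collision_vibration(t, impact_speed, freq=150.0, decay=60.0):
    """a(t) = A * exp(-decay * t) * sin(2*pi*freq*t).

    A sketch of an exponentially-decaying sinusoid triggered at impact;
    freq [Hz], decay [1/s], and the speed-to-amplitude mapping are
    assumptions, not the paper's fitted values.
    """
    amplitude = min(1.0, 0.2 * impact_speed)   # clip to actuator range
    return amplitude * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

fs = 8000                                      # actuator sample rate [Hz]
t = np.arange(0, 0.15, 1 / fs)
waveform = collision_vibration(t, impact_speed=3.0)
```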
NASA Technical Reports Server (NTRS)
Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je
2010-01-01
The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operations that can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanisms using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.
Cosmological Hydrodynamics on a Moving Mesh
NASA Astrophysics Data System (ADS)
Hernquist, Lars
We propose to construct a model for the visible Universe using cosmological simulations of structure formation. Our simulations will include both dark matter and baryons, and will employ two entirely different schemes for evolving the gas: smoothed particle hydrodynamics (SPH) and a moving mesh approach as incorporated in the new code, AREPO. By performing simulations that are otherwise nearly identical, except for the hydrodynamics solver, we will isolate and understand differences in the properties of galaxies, galaxy groups and clusters, and the intergalactic medium caused by the computational approach that have plagued efforts to understand galaxy formation for nearly two decades. By performing simulations at different levels of resolution and with increasingly complex treatments of the gas physics, we will identify the results that are converged numerically and that are robust with respect to variations in unresolved physical processes, especially those related to star formation, black hole growth, and related feedback effects. In this manner, we aim to undertake a research program that will redefine the state of the art in cosmological hydrodynamics and galaxy formation. In particular, we will focus our scientific efforts on understanding: 1) the formation of galactic disks in a cosmological context; 2) the physical state of diffuse gas in galaxy clusters and groups so that they can be used as high-precision probes of cosmology; 3) the nature of gas inflows into galaxy halos and the subsequent accretion of gas by forming disks; 4) the co-evolution of galaxies and galaxy clusters with their central supermassive black holes and the implications of related feedback for galaxy evolution and the dichotomy between blue and red galaxies; 5) the physical state of the intergalactic medium (IGM) and the evolution of the metallicity of the IGM; and 6) the reaction of dark matter around galaxies to galaxy formation. Our proposed work will be of immediate significance for several NASA missions. Our simulations will allow a detailed comparison of observed CHANDRA X-ray groups and clusters with state-of-the-art theoretical models. Scaling relations and their evolution with redshift can constrain the processes occurring in cluster centers. At higher energies, data from the FERMI gamma-ray satellite combined with our simulated data set will permit us to estimate the non-thermal pressure support in clusters due to cosmic rays. Another science goal of FERMI is the search for annihilation radiation produced by dark matter. The high resolution of our proposed calculations gives us the capability of making predictions for the annihilation signature from large-scale structure. Our proposed work is also relevant to upcoming missions like the James Webb Space Telescope (JWST). With our scheme, we will study the morphological evolution of galaxies in a full cosmological setting for the first time. JWST is specifically designed to observe the high redshift structure of emerging galaxies and their subsequent evolution. Our simulations will thus provide an indispensable tool for understanding JWST observations. We will make our simulations available to the community, accessible through a web-based interface, including the simulation data as well as galaxy catalogs, images, and videos generated during the course of the calculations. This will be the first time that such a dataset, drawn from a hydrodynamical model of the Universe, will be made public.
As we anticipate that our simulations will have numerous applications in addition to those listed above, this will ensure that our work will have the greatest possible impact by fostering research on diverse problems related to the formation of galaxies and larger-scale structures.
Molecular Dynamics Simulations of Simple Liquids
ERIC Educational Resources Information Center
Speer, Owen F.; Wengerter, Brian C.; Taylor, Ramona S.
2004-01-01
An experiment, in which students were given the opportunity to perform molecular dynamics simulations on a series of molecular liquids using the Amber suite of programs, is presented. Students were introduced both to the physical theories underlying classical-mechanics simulations and to the atom-atom pair distribution function.
Ioannou, Ioanna; Kazmierczak, Edmund; Stern, Linda
2015-01-01
The use of virtual reality (VR) simulation for surgical training has gathered much interest in recent years. Despite increasing popularity and usage, limited work has been carried out in the use of automated objective measures to quantify the extent to which performance in a simulator resembles performance in the operating theatre, and the effects of simulator training on real world performance. To this end, we present a study exploring the effects of VR training on the performance of dentistry students learning a novel oral surgery task. We compare the performance of trainees in a VR simulator and in a physical setting involving ovine jaws, using a range of automated metrics derived by motion analysis. Our results suggest that simulator training improved the motion economy of trainees without adverse effects on task outcome. Comparison of surgical technique on the simulator with the ovine setting indicates that simulator technique is similar, but not identical to real world technique.
Critical Resolution and Physical Dependencies of Supernovae: Stars in Heat and Under Pressure
NASA Astrophysics Data System (ADS)
Vartanyan, David; Burrows, Adam Seth
2017-01-01
For over five decades, the mechanism of explosion in core-collapse supernovae has remained one of the last untoppled bastions in astrophysics, presenting both a technical and a physical problem. Motivated by advances in computation and nuclear physics and by the resilience of the core-collapse problem, collaborators Adam Burrows (Princeton), Joshua Dolence (LANL), and Aaron Skinner (LLNL) have developed FORNAX, a highly parallelizable multidimensional supernova simulation code featuring an explicit hydrodynamic and radiation-transfer solver. We present the results (Vartanyan et al. 2016, Burrows et al. 2016, both in preparation) of a sequence of two-dimensional axisymmetric simulations of core-collapse supernovae using FORNAX, probing both progenitor-mass dependence and the effect of physical inputs on explosiveness in our study of the revival of the stalled shock via the neutrino heating mechanism. We also performed a resolution study, testing spatial and energy-group resolutions as well as compilation flags. We illustrate that, when the protoneutron star bounded by a stalled shock is close to the critical explosion condition (Burrows & Goshy 1993), small changes of order 10% in neutrino energies and luminosities can result in explosion, and that these effects couple nonlinearly. We show that many-body medium effects on neutrino-nucleon scattering, as well as inelastic neutrino-nucleon and neutrino-electron scattering, are strongly favorable to earlier and more vigorous explosions by depositing energy in the gain region. Additionally, we probe the effects of a ray-by-ray+ transport solver (which does not include transverse velocity terms) employed by many groups and confirm that it artificially accelerates explosion (see also Skinner et al. 2016). In the coming year, we are gearing up for the first set of 3D simulations performed in the context of core-collapse supernovae employing 20 energy groups and one of the most complete nuclear physics modules in the field, with the ambitious goal of simulating supernova remnants like Cas A. The current environment for core-collapse supernova research provides invigorating optimism that a robust explosion mechanism is within reach on graduate-student lifetimes.
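For readers unfamiliar with the critical condition cited above (Burrows & Goshy 1993), it is usually summarized as follows: for a given mass accretion rate through the stalled shock there is a critical neutrino luminosity above which no steady-state stalled-shock solution exists, so the shock must be revived:

```latex
L_{\nu} \;>\; L_{\nu}^{\mathrm{crit}}(\dot{M})
\quad\Longrightarrow\quad
\text{no steady stalled-shock solution (shock revival)}
```

This is why order-10% changes in neutrino energies and luminosities near the critical curve can tip a model into explosion.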
Expected Performance of the ATLAS Experiment - Detector, Trigger and Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.; Abat, E.; Abbott, B.
2011-11-28
The Large Hadron Collider (LHC) at CERN promises a major step forward in the understanding of the fundamental nature of matter. The ATLAS experiment is a general-purpose detector for the LHC, whose design was guided by the need to accommodate the wide spectrum of possible physics signatures. The major remit of the ATLAS experiment is the exploration of the TeV mass scale, where groundbreaking discoveries are expected. The focus is on the investigation of electroweak symmetry breaking and, linked to this, the search for the Higgs boson, as well as the search for physics beyond the Standard Model. In this report a detailed examination of the expected performance of the ATLAS detector is provided, with a major aim being to investigate the experimental sensitivity to a wide range of measurements and potential observations of new physical processes. An earlier summary of the expected capabilities of ATLAS was compiled in 1999 [1]. A survey of physics capabilities of the CMS detector was published in [2]. The design of the ATLAS detector has now been finalised, and its construction and installation have been completed [3]. An extensive test-beam programme was undertaken. Furthermore, the simulation and reconstruction software code and frameworks have been completely rewritten. Revisions incorporated reflect improved detector modelling as well as major technical changes to the software technology. Greatly improved understanding of calibration and alignment techniques, and their practical impact on performance, is now in place. The studies reported here are based on full simulations of the ATLAS detector response. A variety of event generators were employed. The simulation and reconstruction of these large event samples thus provided an important operational test of the new ATLAS software system. In addition, the processing was distributed world-wide over the ATLAS Grid facilities and hence provided an important test of the ATLAS computing system - this is the origin of the expression 'CSC studies' ('computing system commissioning'), which is occasionally referred to in these volumes. The work reported generally assumes that the detector is fully operational, and in this sense represents an idealised detector: establishing the best performance of the ATLAS detector with LHC proton-proton collisions is a challenging task for the future. The results summarised here therefore represent the best estimate of ATLAS capabilities before real operational experience of the full detector with beam. Unless otherwise stated, simulations also do not include the effect of additional interactions in the same or other bunch-crossings, and the effect of neutron background is neglected. Thus simulations correspond to the low-luminosity performance of the ATLAS detector. This report is broadly divided into two parts: first the performance for identification of physics objects is examined in detail, followed by a detailed assessment of the performance of the trigger system. This part is subdivided into chapters surveying the capabilities for charged-particle tracking, electron/photon, muon and tau identification, jet and missing transverse energy reconstruction, b-tagging algorithms and performance, and finally the trigger system performance. In each chapter of the report, there is a further subdivision into shorter notes describing different aspects studied. The second major subdivision of the report addresses physics measurement capabilities and new physics search sensitivities.
Individual chapters in this part discuss ATLAS physics capabilities in Standard Model QCD and electroweak processes, in the top quark sector, in b-physics, in searches for Higgs bosons, supersymmetry searches, and finally searches for other new particles predicted in more exotic models.
Evolving locomotion for a 12-DOF quadruped robot in simulated environments.
Klaus, Gordon; Glette, Kyrre; Høvin, Mats
2013-05-01
We demonstrate the power of evolutionary robotics (ER) by comparing its performance and cost to those of a more traditional approach on the task of simulated robot locomotion. A novel quadruped robot is introduced, the legs of which - each having three non-coplanar degrees of freedom - are very maneuverable. Using a simplistic control architecture and a physics simulation of the robot, gaits are designed both by hand and using a highly parallel evolutionary algorithm (EA). It is found that the EA produces, in a small fraction of the time that it takes to design by hand, gaits that travel at two to four times the speed of the hand-designed one. The flexibility of this approach is demonstrated by applying it across a range of differently configured simulators. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
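A minimal sketch of the kind of evolutionary search described above, under stated assumptions: the fitness function runs the physics simulation for one parameter vector (e.g., 12 joint-controller parameters, one per degree of freedom) and returns the distance travelled; the operators and constants are illustrative, not the authors' algorithm. In the parallel setting, the fitness evaluations are what get distributed across workers.

```python
import random

def evolve_gait(fitness, n_params=12, pop_size=50, generations=200,
                sigma=0.1, elite_frac=0.2):
    """Elitist evolutionary search over gait parameters (illustrative)."""
    pop = [[random.uniform(-1.0, 1.0) for _ in range(n_params)]
           for _ in range(pop_size)]
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # best gaits first
        elite = pop[:n_elite]
        # Refill the population with Gaussian-mutated copies of the elite.
        pop = elite + [[g + random.gauss(0.0, sigma)
                        for g in random.choice(elite)]
                       for _ in range(pop_size - n_elite)]
    return max(pop, key=fitness)

# Toy usage with a stand-in fitness (a real one would call the simulator):
best = evolve_gait(lambda p: -sum(x * x for x in p))
```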
Samosky, Joseph T; Allen, Pete; Boronyak, Steve; Branstetter, Barton; Hein, Steven; Juhas, Mark; Nelson, Douglas A; Orebaugh, Steven; Pinto, Rohan; Smelko, Adam; Thompson, Mitch; Weaver, Robert A
2011-01-01
We are developing a simulator of peripheral nerve block utilizing a mixed-reality approach: the combination of a physical model, an MRI-derived virtual model, mechatronics and spatial tracking. Our design uses tangible (physical) interfaces to simulate surface anatomy, haptic feedback during needle insertion, mechatronic display of muscle twitch corresponding to the specific nerve stimulated, and visual and haptic feedback for the injection syringe. The twitch response is calculated incorporating the sensed output of a real neurostimulator. The virtual model is isomorphic with the physical model and is derived from segmented MRI data. This model provides the subsurface anatomy and, combined with electromagnetic tracking of a sham ultrasound probe and a standard nerve block needle, supports simulated ultrasound display and measurement of needle location and proximity to nerves and vessels. The needle tracking and virtual model also support objective performance metrics of needle targeting technique.
NASA Astrophysics Data System (ADS)
Imran, H. M.; Kala, J.; Ng, A. W. M.; Muthukumaran, S.
2018-04-01
Appropriate choice of physics options among the many available parameterizations is important when using the Weather Research and Forecasting (WRF) model. The responses of different physics parameterizations of the WRF model may vary with geographical location, the application of interest, and the temporal and spatial scales being investigated. Several studies have evaluated the performance of the WRF model in simulating the mean climate and extreme rainfall events for various regions in Australia. However, no study has explicitly evaluated the sensitivity of the WRF model in simulating heatwaves. Therefore, this study evaluates the performance of a WRF multi-physics ensemble comprising 27 model configurations for a series of heatwave events in Melbourne, Australia. Unlike most previous studies, we evaluate not only temperature, but also wind speed and relative humidity, which are key factors influencing heatwave dynamics. No single ensemble member showed the best performance for all events, all variables, and all evaluation metrics. This study also found that the choice of planetary boundary layer (PBL) scheme had the largest influence, the radiation scheme a moderate influence, and the microphysics scheme the least influence on temperature simulations. The PBL and microphysics schemes were found to be more sensitive than the radiation scheme for wind speed and relative humidity. Additionally, the study tested the role of the Urban Canopy Model (UCM) and three Land Surface Models (LSMs). Although the UCM did not play a significant role, the Noah LSM showed better performance than the CLM4 and Noah-MP LSMs in simulating the heatwave events. The study finally identifies an optimal configuration of WRF that will be a useful modelling tool for further investigations of heatwaves in Melbourne. Although our results are inevitably region-specific, they will be useful to WRF users investigating heatwave dynamics elsewhere.
Physical mechanisms affecting hot carrier-induced degradation in gallium nitride HEMTs
NASA Astrophysics Data System (ADS)
Mukherjee, Shubhajit
Gallium nitride (GaN)-based high electron mobility transistors (HEMTs) are currently among the most promising device technologies for several key military and civilian applications due to their excellent high-power and high-frequency performance. Even though the performance figures are outstanding, GaN-based HEMTs are not as mature as some competing technologies, which means that establishing the reliability of the technology is important to enable use in critical applications. The objective of this research is to understand the physical mechanisms affecting the reliability of GaN HEMTs at moderate drain biases (typically VDS < 30 V in the devices considered here). The degradation in device performance is believed to be due to the formation or modification of charged defects near the interface by hydrogen depassivation processes (electron-activated hydrogen removal) driven by energetic carriers. A rate equation describing the defect generation process is formulated based on this assumption. A combination of ensemble Monte Carlo (EMC) simulation statistics, ab initio density functional theory (DFT) calculations, and accelerated stress experiments is used to relate the candidate defects to the overall degradation behavior (VT and gm). The focus of this work is on the 'semi-ON' mode of transistor operation, in which the degradation is usually observed to be at its highest. This semi-ON state is reasonably close to the biasing region of class-AB high-power amplifiers, which are popular because of the combination of high efficiency and low distortion associated with this configuration. The carrier-energy distributions are obtained using an EMC simulator developed specifically for III-V HFETs. The rate equation is used to model the degradation at different operating conditions as well as at longer stress times from the result of one short-duration stress test, by utilizing the carrier-energy distribution obtained from EMC simulations for one baseline condition. This work also attempts to identify the spatial location of these defects and how this impacts the VT shift and gm degradation of the devices.
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
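A compact sketch of the unscented-transform step the abstract describes, with standard UT weights; the interface (a state mean/covariance in, a scalar EOL out of each simulation) is an assumption of this sketch, not the paper's API:

```python
import numpy as np

def unscented_eol(mean, cov, eol_sim, alpha=1.0, beta=0.0, kappa=0.0):
    """Approximate the mean and variance of EOL with 2n+1 simulations."""
    mean = np.asarray(mean, dtype=float)
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    # Sigma points: the mean plus symmetric spreads along each column.
    pts = [mean] + [mean + sqrt_cov[:, i] for i in range(n)] \
                 + [mean - sqrt_cov[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    eols = np.array([eol_sim(x) for x in pts])   # one EOL simulation each
    mean_eol = wm @ eols
    var_eol = wc @ (eols - mean_eol) ** 2
    return mean_eol, var_eol
```

The computational saving is visible in the loop: only 2n+1 simulations are needed for an n-dimensional state, versus the hundreds or thousands a Monte Carlo estimate of the same moments would require.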
NASA/ESA CV-990 Spacelab simulation. Appendix B: Experiment development and performance
NASA Technical Reports Server (NTRS)
Reller, J. O., Jr.; Neel, C. B.; Haughney, L. C.
1976-01-01
Eight experiments flown on the CV-990 airborne laboratory during the NASA/ESA joint Spacelab simulation mission are described in terms of their physical arrangement in the aircraft, their scientific objectives, developmental considerations dictated by mission requirements, checkout, integration into the aircraft, and the inflight operation and performance of the experiments.
Learning from avatars: Learning assistants practice physics pedagogy in a classroom simulator
NASA Astrophysics Data System (ADS)
Chini, Jacquelyn J.; Straub, Carrie L.; Thomas, Kevin H.
2016-06-01
[This paper is part of the Focused Collection on Preparing and Supporting University Physics Educators.] Undergraduate students are increasingly being used to support course transformations that incorporate research-based instructional strategies. While such students are typically selected based on strong content knowledge and possible interest in teaching, they often do not have previous pedagogical training. Current training models use real students, or classmates role-playing as students, as the test subjects. We present a new environment for facilitating the practice of physics pedagogy skills, a highly immersive mixed-reality classroom simulator, and assess its effectiveness for undergraduate physics learning assistants (LAs). LAs prepared, taught, and reflected on a lesson about motion graphs for five highly interactive computer-generated student avatars in the mixed-reality classroom simulator. To assess the effectiveness of the simulator for this population, we analyzed the pedagogical skills LAs intended to practice and exhibited during their lessons and explored LAs' descriptions of their experiences with the simulator. Our results indicate that the classroom simulator created a safe, effective environment for LAs to practice a variety of skills, such as questioning styles and wait time. Additionally, our analysis revealed areas for improvement in our preparation of LAs and use of the simulator. We conclude with a summary of research questions this environment could facilitate.
NASA Astrophysics Data System (ADS)
Yao, Yi; Kanai, Yosuke
Our ability to correctly model the association of oppositely charged ions in water is fundamental in physical chemistry and essential to various technological and biological applications of molecular dynamics (MD) simulations. MD simulations using classical force fields often show strong clustering of NaCl in aqueous ionic solutions as a consequence of a deep contact-pair minimum in the potential of mean force (PMF) curve. First-principles molecular dynamics (FPMD) based on density functional theory (DFT) with the popular PBE exchange-correlation approximation, on the other hand, shows a different result, with a shallow contact-pair minimum in the PMF. We employed two of the most promising exchange-correlation approximations, ωB97X-V by Mardirossian and Head-Gordon and SCAN by Sun, Ruzsinszky, and Perdew, to examine the PMF using FPMD simulations. ωB97X-V is highly empirical, optimized in the space of range-separated hybrid functionals with a dispersion correction, while SCAN is the most recent meta-GGA functional, constructed by satisfying various known conditions in well-defined physical limits. We will discuss our findings for the PMF, charge transfer, water dipoles, etc.
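For reference, the PMF along the ion-ion separation r is tied to the Na-Cl radial distribution function g(r) by the standard identity (biased-sampling estimators are used in practice, but the identity fixes the interpretation):

```latex
W(r) \;=\; -\,k_{\mathrm{B}}\,T\,\ln g(r)
```

so a deeper contact-pair minimum in W(r) corresponds directly to stronger ion pairing.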
AX-GADGET: a new code for cosmological simulations of Fuzzy Dark Matter and Axion models
NASA Astrophysics Data System (ADS)
Nori, Matteo; Baldi, Marco
2018-05-01
We present a new module of the parallel N-body code P-GADGET3 for cosmological simulations of light bosonic non-thermal dark matter, often referred to as Fuzzy Dark Matter (FDM). The dynamics of FDM features a highly non-linear Quantum Potential (QP) that suppresses the growth of structures at small scales. Most previous attempts at FDM simulations either evolved suppressed initial conditions, completely neglecting the dynamical effects of the QP throughout cosmic evolution, or resorted to numerically challenging full-wave solvers. Our code provides an interesting alternative, following the FDM evolution without impairing the overall performance. This is done by computing the QP acceleration through the Smoothed Particle Hydrodynamics (SPH) routines, with improved schemes to ensure precise and stable derivatives. As an extension of the P-GADGET3 code, it inherits all the additional physics modules implemented to date, opening a wide range of possibilities to constrain FDM models and explore their degeneracies with other physical phenomena. Simulations are compared with analytical predictions and with the results of other codes, validating the QP as a crucial player in structure formation at small scales.
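For reference, in the Madelung (fluid) formulation of FDM the quantum potential per unit mass, with ρ the FDM density and m the boson mass, takes the standard form

```latex
Q \;=\; -\,\frac{\hbar^{2}}{2\,m^{2}}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}}
```

(conventions differ by factors absorbed into definitions), and the SPH machinery described above evaluates the resulting acceleration, the negative gradient of Q, alongside gravity.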
Flocking and self-defense: experiments and simulations of avian mobbing
NASA Astrophysics Data System (ADS)
Kane, Suzanne Amador
2011-03-01
We have performed motion-capture studies in the field of avian mobbing, in which flocks of prey birds harass predatory birds. Our empirical studies cover both field observations of mobbing occurring in mid-air, where both predator and prey are in flight, and an experimental system using actual prey birds and simulated predator ``perch and wait'' strategies. To model our results and establish the effectiveness of mobbing flight paths at minimizing risk of capture while optimizing predator harassment, we have performed computer simulations using the actual measured trajectories of mobbing prey birds combined with model predator trajectories. To accurately simulate predator motion, we also measured raptor acceleration and flight dynamics, as well as prey-pursuit strategies. These experiments and theoretical studies were all performed with undergraduate research assistants in a liberal arts college setting. This work illustrates how biological physics provides undergraduate research projects well-suited to the abilities of physics majors with interdisciplinary science interests and diverse backgrounds.
NASA Astrophysics Data System (ADS)
Turinsky, Paul J.; Kothe, Douglas B.
2016-05-01
The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, all while assuring nuclear safety. Accomplishing this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate to long-standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics 'core simulator' based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and on the formulation of robust numerical solution algorithms. For multi-physics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M&S capabilities, which is in progress, will assist in addressing long-standing and future operational and safety challenges of the nuclear industry.
NASA Astrophysics Data System (ADS)
Hauschild, Dirk; Homburg, Oliver; Mitra, Thomas; Ivanenko, Mikhail; Jarczynski, Manfred; Meinschien, Jens; Bayer, Andreas; Lissotschenko, Vitalij
2009-02-01
High-power laser sources are used in various production tools for microelectronic products and solar cells, in applications including annealing, lithography, edge isolation, as well as dicing and patterning. Besides the right choice of the laser source, suitable high-performance optics for generating the appropriate beam profile and intensity distribution are of great importance for the right processing speed, quality and yield. Equally important for industrial applications is an adequate understanding of the physics of the light-matter interaction behind the process. Simulations of the tool performance carried out in advance can minimize technical and financial risk as well as lead times for prototyping and introduction into series production. LIMO has developed its own software, founded on the Maxwell equations, taking into account all important physical aspects of the laser-based process: the light source, the beam-shaping optical system and the light-matter interaction. Based on this knowledge, together with a unique free-form micro-lens array production technology and patented micro-optics beam-shaping designs, a number of novel solar cell production tool sub-systems have been built. The basic functionalities, design principles and performance results are presented with a special emphasis on resilience, cost reduction and process reliability.
Affect Response to Simulated Information Attack during Complex Task Performance
2014-12-02
[Report front-matter residue; recoverable figure caption: Figure 27, Task Load Index of mental demand, temporal demand, and physical demand.] ...situational awareness, affect, and trait characteristics interact with human performance during cyberspace attacks in the physical and information... Operator state was manipulated using emotional stimulation portrayed through the presentation of video segments. The effect of emotions on
Programmable prostate palpation simulator using property-changing pneumatic bladder.
Talhan, Aishwari; Jeon, Seokhee
2018-05-01
The currently available prostate palpation simulators are based on either a physical mock-up or a purely virtual simulation. Both have inherent limitations. The former lacks flexibility in presenting abnormalities and scenarios because of the static nature of the mock-up, and has usability issues because the prostate model must be replaced for different scenarios. The latter has realism issues, particularly in haptic feedback, because of the very limited performance of haptic hardware and inaccurate haptic simulation. This paper presents a highly flexible and programmable simulator with high haptic fidelity. Our new approach is based on a pneumatically driven, property-changing silicone prostate mock-up that can be embedded in a human torso mannequin. The mock-up has seven pneumatically controlled, multi-layered bladder cells to mimic the stiffness, size, and location changes of nodules in the prostate. The size is controlled by inflating the bladder with positive pressure in the chamber, and a hard nodule can be generated using the particle jamming technique: the fine sand in the bladder becomes stiff when it is vacuumed. The programmable valves and a system identification process enable us to precisely control the size and stiffness, which results in a simulator that can realistically present many different diseases without replacing anything. The three most common abnormalities of the prostate are selected for demonstration, and multiple progressive stages of each abnormality are carefully designed based on medical data. A human-perception experiment performed by medical professionals confirms that our simulator exhibits higher realism and usability than conventional simulators. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Weston, Brian; Nourgaliev, Robert; Delplanque, Jean-Pierre
2017-11-01
We present a new block-based Schur complement preconditioner for simulating all-speed compressible flow with phase change. The conservation equations are discretized with a reconstructed Discontinuous Galerkin method and integrated in time with fully implicit time discretization schemes. The resulting set of non-linear equations is converged using a robust Newton-Krylov framework. Due to the stiffness of the underlying physics, associated with stiff acoustic waves and viscous material strength effects, we solve for the primitive variables (pressure, velocity, and temperature). To enable convergence of the highly ill-conditioned linearized systems, we develop a physics-based preconditioner, utilizing approximate block factorization techniques to reduce the fully coupled 3×3 block system to a pair of reduced 2×2 systems. We demonstrate that our preconditioned Newton-Krylov framework converges on very stiff multi-physics problems, corresponding to large CFL and Fourier numbers, with excellent algorithmic and parallel scalability. Results are shown for the classic lid-driven cavity flow problem as well as for 3D laser-induced phase change. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
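As a generic sketch of the block-factorization idea (not the paper's exact operator splitting): a 2×2 block system admits the exact factorization

```latex
\begin{pmatrix} A & B \\ C & D \end{pmatrix}
\;=\;
\begin{pmatrix} I & 0 \\ C A^{-1} & I \end{pmatrix}
\begin{pmatrix} A & B \\ 0 & S \end{pmatrix},
\qquad
S \;=\; D - C A^{-1} B,
```

so a preconditioner application needs only (approximate) solves with A and the Schur complement S; applying the reduction recursively takes a fully coupled 3×3 pressure-velocity-temperature block system down to the pair of 2×2 systems mentioned above.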
NASA Technical Reports Server (NTRS)
Nishino, Takafumi; Hahn, Seonghyeon; Shariff, Karim
2010-01-01
This slide presentation reviews the large eddy simulation (LES) of a high-Reynolds-number Coanda flow separating from the round trailing edge of a circulation control airfoil. The objectives of the study are: (1) to investigate the detailed physics (flow structures and statistics) of the fully turbulent Coanda jet applied to a CC airfoil using LES, and (2) to compare LES and RANS results to determine how to improve the performance of existing RANS models for this type of flow.
X-ray simulation for structural integrity for aerospace components - A case study
NASA Astrophysics Data System (ADS)
Singh, Surendra; Gray, Joseph
2016-02-01
The use of Integrated Computational Materials Engineering (ICME) has rapidly evolved from an emerging technology to an industry standard in materials, manufacturing, chemical, civil, and aerospace engineering. Despite this, recognition of ICME's merits has been somewhat lacking within the NDE community. This is due in part to the makeup of NDE practitioners, a very diverse but regimented group. More than 80% of NDE experts are trained and certified as NDT Level 3s and auditors in order to perform their daily inspection jobs. These jobs involve detection of an attribute of interest, which may be a defect, a condition, or both, in a material. They are performed in strict compliance with procedures that have been developed over many years by trial and error, with minimal understanding of the underlying physics and the interplay between NDE method setup parameters. It is not in the nature of these trained Level 3 experts to look for alternate, out-of-the-box solutions. Instead, they follow the procedures for compliance as required by regulatory agencies. This approach is time-consuming, subjective, and treated as a bottleneck in today's manufacturing environments. As such, there is a need for new NDE tools that provide rapid, high-quality solutions for studying structural and dimensional integrity in parts at a reduced cost. NDE simulations offer such options by shortening NDE technique development time, attaining a new level in the scientific understanding of the physics of interactions between the interrogating energy and materials, and reducing costs. In this paper, we apply NDE simulation (XRSIM as an example) to simulate X-ray techniques for studying aerospace components. The results show that NDE simulations help: 1) significantly shorten NDE technique development time, 2) assist in training NDE experts by facilitating understanding of the underlying physics, and 3) improve both the capability and reliability of NDE methods in terms of coverage maps.
NASA Astrophysics Data System (ADS)
Tian, Jiyang; Liu, Jia; Wang, Jianhua; Li, Chuanzhe; Yu, Fuliang; Chu, Zhigang
2017-07-01
Mesoscale Numerical Weather Prediction systems can provide rainfall products at high resolution in space and time, playing an increasingly important role in water management and flood forecasting. The Weather Research and Forecasting (WRF) model is one of the most popular mesoscale systems and has been extensively used in research and practice. However, for hydrologists, an unsolved question must be addressed before each model application in a different target area: how should the most appropriate combinations of physical parameterisations be selected from the vast WRF library to provide the best downscaled rainfall? In this study, the WRF model was applied with 12 designed parameterisation schemes combining different physical parameterisations, including microphysics, radiation, planetary boundary layer (PBL), land-surface model (LSM) and cumulus parameterisations. The selected study areas are two semi-humid and semi-arid catchments located in the Daqinghe River basin, Northern China. The performance of WRF with the different parameterisation schemes is tested by simulating eight typical 24-h storm events with different evenness in space and time. In addition to the cumulative rainfall amount, the spatial and temporal patterns of the simulated rainfall are evaluated based on a two-dimensional composed verification statistic. Among the 12 parameterisation schemes, Scheme 4 outperforms the others with the best average performance in simulating rainfall totals and temporal patterns; in contrast, Scheme 6 is generally a good choice for simulating spatial rainfall distributions. Regarding the individual parameterisations, WRF Single-Moment 6-class (WSM6) for microphysics, Yonsei University (YSU) for the PBL, and Kain-Fritsch (KF) and Grell-Devenyi (GD) for cumulus are the better choices in the study area. These findings provide helpful information for WRF rainfall downscaling in semi-humid and semi-arid areas. The methodology for designing and testing the combination schemes of parameterisations can also be regarded as a reference for generating ensembles in numerical rainfall predictions using the WRF model.
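For readers reproducing such an ensemble, the scheme names map onto WRF namelist options along these lines; the option numbers follow common WRF 3.x conventions and should be checked against the WRF version in use (they are an assumption of this sketch, not taken from the paper):

```python
# Illustrative physics block for one ensemble member (WRF namelist options).
favoured_schemes = {
    "mp_physics":     6,   # WSM6 microphysics
    "bl_pbl_physics": 1,   # YSU planetary boundary layer
    "cu_physics":     1,   # Kain-Fritsch cumulus (Grell-Devenyi as an alternative)
}
```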
NASA Astrophysics Data System (ADS)
Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam
2016-12-01
Modelling of multi-million-atom semiconductor structures is important as it not only predicts properties of physically realizable novel materials but can also accelerate advanced device designs. This work elaborates a new Technology Computer-Aided Design (TCAD) tool for nanoelectronics modelling, which uses a sp3d5s* tight-binding approach to describe multi-million-atom structures and simulates their electronic structures with high performance computing (HPC), including atomic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool shows good scalability on traditional multi-core HPC clusters, implying a strong capability for large-scale electronic structure simulations, with particularly remarkable performance enhancement on the latest clusters of Intel Xeon Phi coprocessors. A review of a recent modelling study conducted to understand an experimental work on highly phosphorus-doped silicon nanowires is presented to demonstrate the utility of Q-AND. Having been developed via an Intel Parallel Computing Center project, Q-AND will be opened to the public to establish a sound framework for nanoelectronics modelling with advanced many-core HPC clusters. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development for researchers in the field of computational nanoelectronics.
Comparisons between MCNP, EGS4 and experiment for clinical electron beams.
Jeraj, R; Keall, P J; Ostwald, P M
1999-03-01
Understanding the limitations of Monte Carlo codes is essential in order to avoid systematic errors in simulations, and to suggest further improvement of the codes. MCNP and EGS4, Monte Carlo codes commonly used in medical physics, were compared and evaluated against electron depth dose data and experimental backscatter results obtained using clinical radiotherapy beams. Different physical models and algorithms used in the codes give significantly different depth dose curves and electron backscattering factors. The default version of MCNP calculates electron depth dose curves which are too penetrating. The MCNP results agree better with experiment if the ITS-style energy-indexing algorithm is used. EGS4 underpredicts electron backscattering for high-Z materials. The results slightly improve if optimal PRESTA-I parameters are used. MCNP simulates backscattering well even for high-Z materials. To conclude the comparison, a timing study was performed. EGS4 is generally faster than MCNP and use of a large number of scoring voxels dramatically slows down the MCNP calculation. However, use of a large number of geometry voxels in MCNP only slightly affects the speed of the calculation.
Maladen, Ryan D.; Ding, Yang; Umbanhowar, Paul B.; Kamor, Adam; Goldman, Daniel I.
2011-01-01
We integrate biological experiment, empirical theory, numerical simulation and a physical model to reveal principles of undulatory locomotion in granular media. High-speed X-ray imaging of the sandfish lizard, Scincus scincus, in 3 mm glass particles shows that it swims within the medium without using its limbs by propagating a single-period travelling sinusoidal wave down its body, resulting in a wave efficiency, η, the ratio of its average forward speed to the wave speed, of approximately 0.5. A resistive force theory (RFT) that balances granular thrust and drag forces along the body predicts η close to the observed value. We test this prediction against two other more detailed modelling approaches: a numerical model of the sandfish coupled to a discrete particle simulation of the granular medium, and an undulatory robot that swims within granular media. Using these models and analytical solutions of the RFT, we vary the ratio of undulation amplitude to wavelength (A/λ) and demonstrate an optimal condition for sand-swimming, which for a given A results from the competition between η and λ. The RFT, in agreement with the simulated and physical models, predicts that for a single-period sinusoidal wave, maximal speed occurs for A/λ ≈ 0.2, the same kinematics used by the sandfish. PMID:21378020
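A minimal sketch of the wave-efficiency computation named above, assuming tracked center-of-mass positions and measured undulation kinematics (the interface is illustrative, not the study's code):

```python
import numpy as np

def wave_efficiency(com_x, dt, freq_hz, wavelength):
    """eta = mean forward speed / wave speed, from frame-by-frame tracking.

    com_x: forward center-of-mass position per frame (m), frames dt apart;
    a body wave y(x, t) = A*sin(2*pi*(x/wavelength - freq_hz*t)) travels
    backward along the body at speed freq_hz * wavelength.
    """
    mean_speed = (com_x[-1] - com_x[0]) / (dt * (len(com_x) - 1))
    return mean_speed / (freq_hz * wavelength)

# For the sandfish kinematics above, eta comes out near 0.5, with maximal
# forward speed for a fixed amplitude A near A/wavelength ~ 0.2.
```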
Atmospheric cloud physics thermal systems analysis
NASA Technical Reports Server (NTRS)
1977-01-01
Engineering analyses performed on the Atmospheric Cloud Physics Laboratory (ACPL) Science Simulator expansion chamber and associated thermal control/conditioning system are reported. Analyses were made to develop a verified thermal model and to perform parametric thermal investigations to evaluate system performance characteristics. Thermal network representations of solid components and the complete fluid conditioning system were solved simultaneously using the Systems Improved Numerical Differencing Analyzer (SINDA) computer program.
NASA Astrophysics Data System (ADS)
Huang, Qian
2014-09-01
Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. To investigate physical properties of minerals at extreme conditions in computational mineral physics, parallel computing technology is used to speed up performance by utilizing multiple computer resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed using High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, there is tremendous growth in cloud computing. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means to access computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services at the IaaS layer still need to improve performance to be useful for research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application of it is developed. In this paper, an overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics and cross-disciplinary studies.
NASA Astrophysics Data System (ADS)
Kim, Go-Un; Seo, Kyong-Hwan
2018-01-01
A key physical factor regulating the performance of Madden-Julian oscillation (MJO) simulation is examined by using 26 climate model simulations from the World Meteorological Organization's Working Group for Numerical Experimentation/Global Energy and Water Cycle Experiment Atmospheric System Study (WGNE and MJO-Task Force/GASS) global model comparison project. For this, the intraseasonal moisture budget equation is analyzed and a simple, efficient physical quantity is developed. The result shows that MJO skill is most sensitive to the vertically integrated intraseasonal zonal wind convergence (ZC). In particular, a specific threshold value of the strength of the ZC can be used to distinguish between good and poor models. An additional finding is that good models exhibit the correct simultaneous convection and large-scale circulation phase relationship. In poor models, however, the peak circulation response appears 3 days after peak rainfall, suggesting unfavorable coupling between convection and circulation. To improve simulation of the MJO in climate models, we propose that this delay of circulation in response to convection be corrected in the cumulus parameterization scheme.
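One compact way to write such a diagnostic, with u' the intraseasonal (bandpass-filtered) zonal wind, p_s the surface pressure and p_t an upper integration limit (the sign convention and limits are this sketch's assumption, not the paper's exact definition):

```latex
\mathrm{ZC} \;=\; -\,\frac{1}{g}\int_{p_{t}}^{p_{s}} \frac{\partial u'}{\partial x}\,\mathrm{d}p
```

A threshold on the strength of this single quantity then separates the good from the poor MJO simulations.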
NASA Astrophysics Data System (ADS)
Borrás, E.; Ródenas, M.; Vera, T.; Muñoz, A.
2015-12-01
Atmospheric particulate matter has a large impact on climate, biosphere behaviour and human health. Its study is complex because a large number of species are present at low concentrations and evolve continuously in time, and they are not easily separable from meteorology and transport processes. Closed systems have been proposed that isolate specific reactions, pollutants or products and control the oxidizing environment. High-volume simulation chambers, such as the EUropean PHOtoREactor (EUPHORE), are an essential tool for simulating atmospheric photochemical reactions. This communication describes the latest results on the reactivity of prominent atmospheric pollutants and the subsequent particulate matter formation. Specific experiments focused on organic aerosols have been performed at the EUPHORE photo-reactor. The use of on-line instrumentation, supported by off-line techniques, has provided well-defined reaction profiles and physical properties, and up to 300 different species have been determined in particulate matter. The application fields include the degradation of anthropogenic and biogenic pollutants, and of pesticides, under several atmospheric conditions, studying their contribution to the formation of secondary organic aerosols (SOA). The studies performed at EUPHORE have improved the mechanistic understanding of atmospheric degradation processes and the knowledge of the chemical and physical properties of the atmospheric particulate matter formed during these processes.
Additions and improvements to the high energy density physics capabilities in the FLASH code
NASA Astrophysics Data System (ADS)
Lamb, D.; Bogale, A.; Feister, S.; Flocke, N.; Graziani, C.; Khiar, B.; Laune, J.; Tzeferacos, P.; Walker, C.; Weide, K.
2017-10-01
FLASH is an open-source, finite-volume Eulerian, spatially adaptive radiation magnetohydrodynamics code with the capability to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures and has a broad user base. Extensive high energy density physics (HEDP) capabilities exist in FLASH, which make it a powerful open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. We describe several non-ideal MHD capabilities that are being added to FLASH, including the Hall and Nernst effects, implicit resistivity, and a circuit model, which will allow modeling of Z-pinch experiments. We showcase the ability of FLASH to simulate Thomson scattering polarimetry, which measures Faraday rotation due to the presence of magnetic fields, as well as proton radiography, proton self-emission, and Thomson scattering diagnostics. Finally, we describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at U. Chicago by DOE NNSA ASC through the Argonne Institute for Computing in Science under FWP 57789; DOE NNSA under NLUF Grant DE-NA0002724; DOE SC OFES Grant DE-SC0016566; and NSF Grant PHY-1619573.
Physical employment standards for U.K. fire and rescue service personnel.
Blacker, S D; Rayson, M P; Wilkinson, D M; Carter, J M; Nevill, A M; Richmond, V L
2016-01-01
Evidence-based physical employment standards are vital for recruiting, training and maintaining the operational effectiveness of personnel in physically demanding occupations. The aims were to: (i) develop criterion tests for in-service physical assessment which simulate the role-related physical demands of UK fire and rescue service (UK FRS) personnel; (ii) develop practical physical selection tests for FRS applicants; and (iii) evaluate the validity of the selection tests to predict criterion test performance. Stage 1: we conducted a physical demands analysis involving seven workshops and an expert panel to document the key physical tasks required of UK FRS personnel and to develop 'criterion' and 'selection' tests. Stage 2: we measured the performance of 137 trainee and 50 trained UK FRS personnel on selection, criterion and 'field' measures of aerobic power, strength and body size. Statistical models were developed to predict criterion test performance. Stage 3: subject matter experts derived minimum performance standards. We developed single-person simulations of the key physical tasks required of UK FRS personnel as criterion and selection tests (rural fire, domestic fire, ladder lift, ladder extension, ladder climb, pump assembly, enclosed space search). Selection tests were marginally stronger predictors of criterion test performance (r = 0.88-0.94, 95% Limits of Agreement [LoA] 7.6-14.0%) than field test scores (r = 0.84-0.94, 95% LoA 8.0-19.8%) and offered greater face and content validity and more practical implementation. This study outlines the development of role-related, gender-free physical employment tests for the UK FRS, which conform to equal opportunities law. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Perkins, Casey; Muller, George
2015-10-08
The number of connections between physical and cyber security systems is rapidly increasing due to centralized control from automated and remotely connected means. As the number of interfaces between systems continues to grow, the interactions and interdependencies between them cannot be ignored. Historically, physical and cyber vulnerability assessments have been performed independently. This independent evaluation omits important aspects of the integrated system, where the impacts resulting from malicious or opportunistic attacks are not easily known or understood. Here, we describe a discrete event simulation model that uses information about integrated physical and cyber security systems, attacker characteristics and simple response rules to identify key safeguards that limit an attacker's likelihood of success. Key features of the proposed model include comprehensive data generation to support a variety of sophisticated analyses, and full parameterization of safeguard performance characteristics and attacker behaviours to evaluate a range of scenarios. Lastly, we also describe the core data requirements and the network of networks that serves as the underlying simulation structure.
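A toy version of such a discrete-event attack simulation can be written with a general-purpose DES library; everything here (the SimPy choice, the safeguard names, the delays and defeat probabilities) is illustrative rather than the authors' model:

```python
import random
import simpy

def attack(env, safeguards, results):
    """One attempt: each safeguard must be defeated in sequence."""
    for name, (mean_delay, p_defeat) in safeguards.items():
        yield env.timeout(random.expovariate(1.0 / mean_delay))  # time spent
        if random.random() > p_defeat:                # safeguard holds
            results.append(("stopped_by", name, env.now))
            return
    results.append(("success", None, env.now))

# Integrated physical and cyber safeguards: name -> (mean delay, P(defeat)).
safeguards = {"badge_reader": (2.0, 0.30),
              "firewall":     (5.0, 0.20),
              "vault":        (10.0, 0.05)}
results = []
for _ in range(1000):                 # Monte Carlo over attack attempts
    env = simpy.Environment()
    env.process(attack(env, safeguards, results))
    env.run()
success_rate = sum(r[0] == "success" for r in results) / 1000.0
```

Tallying which safeguard stops the most attempts is one way to identify the key safeguards the abstract refers to.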
Summary: Special Session SpS15: Data Intensive Astronomy
NASA Astrophysics Data System (ADS)
Montmerle, Thierry
2015-03-01
A new paradigm in astronomical research has been emerging: ``Data Intensive Astronomy'', which utilizes large amounts of data combined with statistical data analyses. The first research method in astronomy was observation with our eyes. It is well known that the invention of the telescope changed the human view of our Universe (although it was almost limited to the solar system) and led to Kepler's laws, which were later used by Newton to derive his mechanics. Newtonian mechanics then enabled astronomers to provide a theoretical explanation for the motion of the planets. Thus astronomers obtained the second paradigm, theoretical astronomy. Astronomers succeeded in applying various laws of physics to explain phenomena in the Universe; e.g., nuclear fusion was found to be the energy source of a star. Theoretical astronomy has been paired with observational astronomy to better understand the physics behind observed phenomena in the Universe. Although theoretical astronomy provided good qualitative physical explanations, it was not easy to achieve quantitative agreement with observations. Since the invention of high-performance computers, however, astronomers have had a third research method, simulations, which yields better agreement with observations. Simulation astronomy developed rapidly along with the development of computer hardware (CPUs, GPUs, memories, storage systems, networks, and others) and simulation codes.
NASA Technical Reports Server (NTRS)
Smith, Jeffrey
2003-01-01
The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high-resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields, from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for basic and applied research experiments, which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox, where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects. The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time, physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications, including CAD development, procedure design and simulation of human-machine systems in a desktop-sized work volume.
Wang, Y; Yin, D C; Liu, Y M; Shi, J Z; Lu, H M; Shi, Z H; Qian, A R; Shang, P
2011-03-01
A high-field superconducting magnet can provide both high-magnetic fields and large-field gradients, which can be used as a special environment for research or practical applications in materials processing, life science studies, physical and chemical reactions, etc. To make full use of a superconducting magnet, shared instruments (the operating platform, sample holders, temperature controller, and observation system) must be prepared as prerequisites. This paper introduces the design of a set of sample holders and a temperature controller in detail with an emphasis on validating the performance of the force and temperature sensors in the high-magnetic field.
QCD thermodynamics with two flavors at Nt=6
NASA Astrophysics Data System (ADS)
Bernard, Claude; Ogilvie, Michael C.; Degrand, Thomas A.; Detar, Carleton; Gottlieb, Steven; Krasnitz, Alex; Sugar, R. L.; Toussaint, D.
1992-05-01
The first results of numerical simulations of quantum chromodynamics on the Intel iPSC/860 parallel processor are presented. We performed calculations with two flavors of Kogut-Susskind quarks at Nt=6 with masses of 0.15T and 0.075T (0.025 and 0.0125 in lattice units) in order to locate the crossover from the low-temperature regime of ordinary hadronic matter to the high-temperature chirally symmetric regime. As with other recent two-flavor simulations, these calculations are insufficient to distinguish between a rapid crossover and a true phase transition. The phase transition is either absent or feeble at this quark mass. An improved estimate of the crossover temperature in physical units is given and results are presented for the hadronic screening lengths in both the high- and low-temperature regimes.
Wind tunnel simulation of air pollution dispersion in a street canyon.
Civis, Svatopluk; Strizík, Michal; Janour, Zbynek; Holpuch, Jan; Zelinger, Zdenek
2002-01-01
Physical simulation was used to study pollution dispersion in a street canyon. The street canyon model was designed for the measurement of flow and concentration fields. A method of CO2-laser photoacoustic spectrometry was applied for detection of trace concentrations of gas pollution. The advantage of this method is its high sensitivity and broad dynamic range, permitting monitoring of concentrations from trace to saturation values. Application of this method enabled us to propose a simple model based on a line permeation pollutant source, developed on the principle of concentration standards, to ensure high precision and homogeneity of the concentration flow. Spatial measurement of the concentration distribution inside the street canyon was performed on the model at a reference velocity of 1.5 m/s.
Method for computationally efficient design of dielectric laser accelerator structures
Hughes, Tyler; Veronis, Georgios; Wootton, Kent P.; ...
2017-06-22
Here, dielectric microstructures have generated much interest in recent years as a means of accelerating charged particles when powered by solid state lasers. The acceleration gradient (or particle energy gain per unit length) is an important figure of merit. To design structures with high acceleration gradients, we explore the adjoint variable method, a highly efficient technique used to compute the sensitivity of an objective with respect to a large number of parameters. With this formalism, the sensitivity of the acceleration gradient of a dielectric structure with respect to its entire spatial permittivity distribution is calculated by the use of only two full-field electromagnetic simulations, the original and ‘adjoint’. The adjoint simulation corresponds physically to the reciprocal situation of a point charge moving through the accelerator gap and radiating. Using this formalism, we perform numerical optimizations aimed at maximizing acceleration gradients, which generate fabricable structures of greatly improved performance in comparison to previously examined geometries.
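As a toy illustration of the two-simulation property described above, consider a generic discretized linear model A(p)x = b with objective J = g^T x: the gradient of J with respect to all parameters follows from one forward solve and one adjoint solve. Everything below (the matrix, the parameterization, and the finite-difference check) is an invented sketch, not the paper's electromagnetic solver:

    import numpy as np

    # Toy discretized model: A(p) x = b, objective J = g^T x.
    # Adjoint sensitivity: dJ/dp_k = -lambda^T (dA/dp_k) x, with A^T lambda = g.
    n = 5
    rng = np.random.default_rng(0)
    A0 = np.eye(n) * 2.0 + 0.1 * rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    g = rng.standard_normal(n)
    # One basis matrix dA/dp_k per parameter (here: one per diagonal entry).
    dA = [np.zeros((n, n)) for _ in range(n)]
    for k in range(n):
        dA[k][k, k] = 1.0

    def assemble(p):
        return A0 + sum(pk * dAk for pk, dAk in zip(p, dA))

    p = np.zeros(n)
    A = assemble(p)
    x = np.linalg.solve(A, b)        # "original" simulation
    lam = np.linalg.solve(A.T, g)    # single "adjoint" simulation
    grad = np.array([-lam @ (dAk @ x) for dAk in dA])

    # Finite-difference check of one gradient component.
    eps = 1e-6
    p1 = p.copy(); p1[0] += eps
    J0, J1 = g @ x, g @ np.linalg.solve(assemble(p1), b)
    print(grad[0], (J1 - J0) / eps)  # the two values should agree closely

Note that the gradient with respect to all n parameters costs only the two solves, which is exactly the scaling property that makes the adjoint method attractive for a full spatial permittivity distribution.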
Design of high-fidelity haptic display for one-dimensional force reflection applications
NASA Astrophysics Data System (ADS)
Gillespie, Brent; Rosenberg, Louis B.
1995-12-01
This paper discusses the development of a virtual reality platform for the simulation of medical procedures which involve needle insertion into human tissue. The paper's focus is the hardware and software requirements for haptic display of a particular medical procedure known as epidural analgesia. To perform this delicate manual procedure, an anesthesiologist must carefully guide a needle through various layers of tissue using only haptic cues for guidance. As a simplifying aspect for the simulator design, all motions and forces involved in the task occur along a fixed line once insertion begins. To create a haptic representation of this procedure, we have explored both physical modeling and perceptual modeling techniques. A preliminary physical model was built based on CT-scan data of the operative site. A preliminary perceptual model was built based on current training techniques for the procedure provided by a skilled instructor. We compare and contrast these two modeling methods and discuss the implications of each. We select and defend the perceptual model as a superior approach for the epidural analgesia simulator.
DYNAMICO, an atmospheric dynamical core for high-performance climate modeling
NASA Astrophysics Data System (ADS)
Dubos, Thomas; Meurdesoif, Yann; Spiga, Aymeric; Millour, Ehouarn; Fita, Lluis; Hourdin, Frédéric; Kageyama, Masa; Traore, Abdoul-Khadre; Guerlet, Sandrine; Polcher, Jan
2017-04-01
Institut Pierre Simon Laplace has developed a very scalable atmospheric dynamical core, DYNAMICO, based on energy-conserving finite-difference/finite-volume numerics on a quasi-uniform icosahedral-hexagonal mesh. Scalability is achieved by combining hybrid MPI/OpenMP parallelism with asynchronous I/O. This dynamical core has been coupled to radiative transfer physics tailored to the atmosphere of Saturn, allowing unprecedented simulations of the climate of this giant planet. For terrestrial climate studies DYNAMICO is being integrated into the IPSL Earth System Model IPSL-CM. Preliminary aquaplanet and AMIP-style simulations yield reasonable results when compared to outputs from IPSL-CM5. The observed performance suggests that an order of magnitude may be gained with respect to IPSL-CM CMIP5 simulations, either in the duration of simulations or in their resolution. Longer simulations would be of interest for the study of paleoclimate, while higher resolution could improve certain aspects of the modeled climate, such as extreme events, as will be explored in the HighResMIP project. Following IPSL's strategic vision of building a unified global-regional modelling system, a fully-compressible, non-hydrostatic prototype of DYNAMICO has been developed, enabling future convection-resolving simulations. Work supported by ANR project "HEAT", grant number CE23_2014_HEAT. Dubos, T., Dubey, S., Tort, M., Mittal, R., Meurdesoif, Y., and Hourdin, F.: DYNAMICO-1.0, an icosahedral hydrostatic dynamical core designed for consistency and versatility, Geosci. Model Dev., 8, 3131-3150, doi:10.5194/gmd-8-3131-2015, 2015.
Shibuta, Yasushi; Sakane, Shinji; Miyoshi, Eisuke; Okita, Shin; Takaki, Tomohiro; Ohno, Munekazu
2017-04-05
Can completely homogeneous nucleation occur? Large-scale molecular dynamics simulations performed on a graphics-processing-unit-rich supercomputer can shed light on this long-standing issue. Here, a billion-atom molecular dynamics simulation of homogeneous nucleation from an undercooled iron melt reveals that some satellite-like small grains surrounding previously formed large grains exist in the middle of the nucleation process, and these are not distributed uniformly. At the same time, grains with a twin boundary are formed by heterogeneous nucleation from the surface of the previously formed grains. The local heterogeneity in the distribution of grains is caused by the local accumulation of the icosahedral structure in the undercooled melt near the previously formed grains. This insight is mainly attributable to the multi-graphics-processing-unit parallel computation combined with the rapid progress in high-performance computational environments. Nucleation is a fundamental physical process; however, it is a long-standing issue whether completely homogeneous nucleation can occur. Here the authors reveal, via a billion-atom molecular dynamics simulation, that local heterogeneity exists during homogeneous nucleation in an undercooled iron melt.
Multak, Nina; Newell, Karen; Spear, Sherrie; Scalese, Ross J; Issenberg, S Barry
2015-06-01
Research demonstrates limitations in the ability of health care trainees/practitioners, including physician assistants (PAs), to identify important cardiopulmonary examination findings and diagnose corresponding conditions. Studies also show that simulation-based training leads to improved performance and that these skills can transfer to real patients. This study evaluated the effectiveness of a newly developed curriculum incorporating simulation with deliberate practice for teaching cardiopulmonary physical examination/bedside diagnosis skills in the PA population. This multi-institutional study used a pretest/posttest design. Participants, PA students from 4 different programs, received a standardized curriculum including instructor-led activities interspersed among small-group/independent self-study time. Didactic sessions and independent study featured practice with the "Harvey" simulator and use of specially developed computer-based multimedia tutorials. Preintervention: participants completed demographic questionnaires, rated self-confidence, and underwent baseline evaluation of knowledge and cardiopulmonary physical examination skills. Students logged self-study time using various learning resources. Postintervention: students again rated self-confidence and underwent repeat cognitive/performance testing using equivalent written/simulator-based assessments. Physician assistant students (N = 56) demonstrated significant gains in knowledge, cardiac examination technique, recognition of total cardiac findings, identification of key auscultatory findings (extra heart sounds, systolic/diastolic murmurs), and the ability to make correct diagnoses. Learner self-confidence also improved significantly. This study demonstrated the effectiveness of a simulation-based curriculum for teaching essential physical examination/bedside diagnosis skills to PA students. Its results reinforce those of similar/previous research, which suggest that simulation-based training is most effective under certain educational conditions. Future research will include subgroup analyses/correlation of other variables to explore best features/uses of simulation technology for training PAs.
Modeling Materials: Design for Planetary Entry, Electric Aircraft, and Beyond
NASA Technical Reports Server (NTRS)
Thompson, Alexander; Lawson, John W.
2014-01-01
NASA missions push the limits of what is possible. The development of high-performance materials must keep pace with the agency's demanding, cutting-edge applications. Researchers at NASA's Ames Research Center are performing multiscale computational modeling to accelerate development times and further the design of next-generation aerospace materials. Multiscale modeling combines several computationally intensive techniques ranging from the atomic level to the macroscale, passing output from one level as input to the next level. These methods are applicable to a wide variety of materials systems. For example: (a) Ultra-high-temperature ceramics for hypersonic aircraft: we utilized the full range of multiscale modeling to characterize thermal protection materials for faster, safer air- and spacecraft; (b) Planetary entry heat shields for space vehicles: we computed thermal and mechanical properties of ablative composites by combining several methods, from atomistic simulations to macroscale computations; (c) Advanced batteries for electric aircraft: we performed large-scale molecular dynamics simulations of advanced electrolytes for ultra-high-energy-capacity batteries to enable long-distance electric aircraft service; and (d) Shape-memory alloys for high-efficiency aircraft: we used high-fidelity electronic structure calculations to determine phase diagrams in shape-memory transformations. Advances in high-performance computing have been critical to the development of multiscale materials modeling. We used nearly one million processor hours on NASA's Pleiades supercomputer to characterize electrolytes with a fidelity that would be otherwise impossible. For this and other projects, Pleiades enables us to push the physics and accuracy of our calculations to new levels.
High-Performance Integrated Control of water quality and quantity in urban water reservoirs
NASA Astrophysics Data System (ADS)
Galelli, S.; Castelletti, A.; Goedbloed, A.
2015-11-01
This paper contributes a novel High-Performance Integrated Control framework to support the real-time operation of urban water supply storages affected by water quality problems. We use a 3-D, high-fidelity simulation model to predict the main water quality dynamics and inform a real-time controller based on Model Predictive Control. The integration of the simulation model into the control scheme is performed by a model reduction process that identifies a low-order, dynamic emulator running 4 orders of magnitude faster. The model reduction, which relies on a semiautomatic procedural approach integrating time series clustering and variable selection algorithms, generates a compact and physically meaningful emulator that can be coupled with the controller. The framework is used to design the hourly operation of Marina Reservoir, a 3.2 Mm3 storm-water-fed reservoir located in the center of Singapore, operated for drinking water supply and flood control. Because of its recent formation from a former estuary, the reservoir suffers from high salinity levels, whose behavior is modeled with Delft3D-FLOW. Results show that our control framework reduces the minimum salinity levels by nearly 40% and cuts the average annual deficit of drinking water supply by about 2 times the active storage of the reservoir (about 4% of the total annual demand).
Simulation of process identification and controller tuning for flow control system
NASA Astrophysics Data System (ADS)
Chew, I. M.; Wong, F.; Bono, A.; Wong, K. I.
2017-06-01
The PID controller is undeniably the most popular method used in controlling various industrial processes. The ability to tune the three elements of PID allows the controller to deal with the specific needs of industrial processes. This paper discusses the three elements of control action and the improvement of controller robustness through combinations of these control actions in various forms. A plant model is simulated using the Process Control Simulator in order to evaluate the controller performance. At first, the open-loop response of the plant is studied by applying a step input to the plant and collecting the output data. Then, a first-order plus dead time (FOPDT) model of the physical plant is formed using both Matlab-Simulink and the process reaction curve (PRC) method. Next, the controller settings are calculated to find the values of Kc and τi that give satisfactory control in the closed-loop system. The performance of the closed-loop system is then analyzed through set point tracking and disturbance rejection tests. To optimize the overall physical system performance, a refined tuning or detuning of the PID is further conducted to ensure a consistent closed-loop response to set point changes and disturbances. As a result, PB = 100 (%) and τi = 2.0 (s) are preferred for set point tracking, while PB = 100 (%) and τi = 2.5 (s) are selected for rejecting the imposed disturbance. In short, the selection of tuning values likewise depends on the required control objective for the stability performance of the overall physical model.
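To make the tuning workflow above concrete, here is a minimal sketch of set point tracking for a PI controller acting on an FOPDT plant. It assumes the proportional band converts to controller gain as Kc = 100/PB, and the plant parameters (K, tau, theta) are invented placeholders, not values from the paper:

    import numpy as np

    # FOPDT plant: tau * dy/dt + y = K * u(t - theta), integrated with Euler steps,
    # under PI control u = Kc * (e + (1/tau_i) * integral(e)).
    K, tau, theta = 1.0, 5.0, 1.0          # assumed plant gain, time constant, dead time
    Kc, tau_i = 1.0, 2.0                   # PB = 100 (%) -> Kc = 1.0; tau_i = 2.0 (s)
    dt, t_end = 0.01, 40.0
    n = int(t_end / dt)
    delay = int(theta / dt)

    y, integral = 0.0, 0.0
    u_hist = np.zeros(n)                    # buffer implementing the dead time
    for i in range(n):
        sp = 1.0 if i * dt >= 1.0 else 0.0  # unit set point step at t = 1 s
        e = sp - y
        integral += e * dt
        u = Kc * (e + integral / tau_i)
        u_hist[i] = u
        u_delayed = u_hist[i - delay] if i >= delay else 0.0
        y += dt * (K * u_delayed - y) / tau # Euler update of the FOPDT ODE
    print(f"final output y = {y:.3f} (should settle near the set point 1.0)")

Swapping in τi = 2.5 s reproduces the disturbance-rejection setting reported above; rerunning the loop with an added load disturbance would show the trade-off the abstract describes.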
NASA Astrophysics Data System (ADS)
Kaplinger, Brian Douglas
For the past few decades, both the scientific community and the general public have been becoming more aware that the Earth lives in a shooting gallery of small objects. We classify all of these asteroids and comets, known or unknown, that cross Earth's orbit as near-Earth objects (NEOs). A look at our geologic history tells us that NEOs have collided with Earth in the past, and we expect that they will continue to do so. With thousands of known NEOs crossing the orbit of Earth, there has been significant scientific interest in developing the capability to deflect an NEO from an impacting trajectory. This thesis applies the ideas of Smoothed Particle Hydrodynamics (SPH) theory to the NEO disruption problem. A simulation package was designed that allows efficacy simulation to be integrated into the mission planning and design process. This is done by applying ideas in high-performance computing (HPC) on the computer graphics processing unit (GPU). Rather than prove a concept through large standalone simulations on a supercomputer, a highly parallel structure allows for flexible, target dependent questions to be resolved. Built around nonclassified data and analysis, this computer package will allow academic institutions to better tackle the issue of NEO mitigation effectiveness.
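As background to the SPH approach mentioned above, the sketch below shows the core density summation ρ_i = Σ_j m_j W(|r_i − r_j|, h) with a standard cubic-spline kernel. The particle layout and smoothing length are illustrative, and this is not the thesis's GPU disruption code:

    import numpy as np

    # Minimal SPH density summation with the standard (Monaghan-style)
    # cubic-spline kernel in 3D.
    def cubic_spline_W(r, h):
        q = r / h
        sigma = 1.0 / (np.pi * h**3)       # 3D normalization
        w = np.where(q <= 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
             np.where(q <= 2.0, 0.25 * (2.0 - q)**3, 0.0))
        return sigma * w

    rng = np.random.default_rng(2)
    pos = rng.uniform(0.0, 1.0, size=(200, 3))   # particle positions
    m = np.full(200, 1.0 / 200)                  # equal particle masses
    h = 0.15                                     # smoothing length

    # O(N^2) neighbor loop for clarity; production codes use cell lists or
    # trees, often mapped to the GPU as described above.
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    rho = (m[None, :] * cubic_spline_W(r, h)).sum(axis=1)
    print("mean density:", rho.mean())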
The energy performance of thermochromic glazing
NASA Astrophysics Data System (ADS)
Diamantouros, Pavlos
This study investigated the energy performance of thermochromic glazing. This was done by simulating a model of a small building in a whole-building energy simulation program (EnergyPlus - U.S. DOE). The physical attributes of the thermochromic samples examined came from actual laboratory samples fabricated in UCL's Department of Chemistry (Prof. I. P. Parkin). It was found that they can substantially reduce cooling loads while requiring the same heating loads as a high-end low-e double glazing. The reductions in annual cooling energy required were in the 20%-40% range, depending on sample, location, and building layout. A series of sensitivity analyses showed the importance of the switching temperature and emissivity factor in the performance of the glazing. Finally, an idealized pane was designed to explore the limits of this technology.
A new framework for the analysis of continental-scale convection-resolving climate simulations
NASA Astrophysics Data System (ADS)
Leutwyler, D.; Charpilloz, C.; Arteaga, A.; Ban, N.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Schulthess, T. C.; Christoph, S.
2017-12-01
High-resolution climate simulations at horizontal resolutions of O(1-4 km) allow explicit treatment of deep convection (thunderstorms and rain showers). Explicitly treating convection by the governing equations reduces uncertainties associated with parametrization schemes and allows a model formulation closer to physical first principles [1,2]. But kilometer-scale climate simulations with long integration periods and large computational domains are expensive, and data storage becomes unbearably voluminous. Hence new approaches to performing analysis are required. In the crCLIM project we propose a new climate modeling framework that allows scientists to conduct analysis at high spatial and temporal resolution. We tackle the computational cost by using the largest available supercomputers, such as hybrid CPU-GPU architectures; for this, the COSMO model has been adapted to run on such architectures [2]. We then alleviate the I/O bottleneck by employing a simulation data-virtualizer (SDaVi) that allows storage (space) to be traded off against computational effort (time). This is achieved by caching the simulation outputs and efficiently launching re-simulations in case of cache misses, all transparently to the analysis applications [3]. For the re-runs this approach requires a bit-reproducible version of COSMO, that is, a model that produces identical results on different architectures to ensure coherent recomputation of the requested data [4]. In this contribution we present a version of SDaVi, a first performance model, and a strategy to obtain bit-reproducibility across hardware architectures. [1] N. Ban, J. Schmidli, C. Schär. Evaluation of the convection-resolving regional climate modeling approach in decade-long simulations. J. Geophys. Res. Atmos., 7889-7907, 2014. [2] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, C. Schär. Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19. Geosci. Model Dev., 3393-3412, 2016. [3] S. Di Girolamo, P. Schmid, T. Schulthess, T. Hoefler. Virtualized Big Data: Reproducing Simulation Output on Demand. Submitted to the 23rd ACM Symposium PPoPP'18, Vienna, Austria. [4] A. Arteaga, O. Fuhrer, T. Hoefler. Designing Bit-Reproducible Portable High-Performance Applications. IEEE 28th IPDPS, 2014.
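The cache-or-recompute idea behind the data virtualizer can be sketched in a few lines. The function names and cache layout below are hypothetical stand-ins, not the SDaVi API, and bit-reproducibility of the re-run is assumed:

    import os, pickle, hashlib

    # Toy version of the storage-vs-compute trade-off: serve cached simulation
    # outputs when present, otherwise deterministically re-run the simulation.
    CACHE_DIR = "sim_cache"

    def run_simulation(params):
        # Placeholder for an expensive, bit-reproducible model run.
        return {"params": params, "field": [p * 2.0 for p in params]}

    def fetch(params):
        os.makedirs(CACHE_DIR, exist_ok=True)
        key = hashlib.sha256(repr(params).encode()).hexdigest()
        path = os.path.join(CACHE_DIR, key + ".pkl")
        if os.path.exists(path):                 # cache hit: storage pays the cost
            with open(path, "rb") as f:
                return pickle.load(f)
        result = run_simulation(params)          # cache miss: compute pays the cost
        with open(path, "wb") as f:
            pickle.dump(result, f)
        return result

    out = fetch([1.0, 2.0, 3.0])  # first call recomputes; repeat calls are served from disk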
Machine learning strategies for systems with invariance properties
NASA Astrophysics Data System (ADS)
Ling, Julia; Jones, Reese; Templeton, Jeremy
2016-08-01
In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds Averaged Navier Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
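A toy version of the comparison between the two strategies, assuming a rotation-invariant target f(x) = |x| in 2D and a random-forest regressor (the actual case studies used turbulence and elasticity data), might look like this:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(500, 2))
    y = np.linalg.norm(X, axis=1)   # rotation-invariant target

    # Method 1: embed the invariance via an invariant input basis (the radius).
    m1 = RandomForestRegressor(random_state=0).fit(
        np.linalg.norm(X, axis=1, keepdims=True), y)

    # Method 2: teach the invariance by augmenting with random rotations.
    angles = rng.uniform(0, 2 * np.pi, size=10)
    rot = lambda a: np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    Xa = np.vstack([X @ rot(a) for a in angles])
    ya = np.tile(y, len(angles))
    m2 = RandomForestRegressor(random_state=0).fit(Xa, ya)

    Xt = rng.uniform(-1, 1, size=(200, 2))
    yt = np.linalg.norm(Xt, axis=1)
    print("invariant basis :", m1.score(np.linalg.norm(Xt, axis=1, keepdims=True), yt))
    print("augmentation    :", m2.score(Xt, yt))

The invariant-basis model also trains on one tenth of the data used by the augmented model, which mirrors the reduced training cost reported above.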
Next generation interatomic potentials for condensed systems
NASA Astrophysics Data System (ADS)
Handley, Christopher Michael; Behler, Jörg
2014-07-01
The computer simulation of condensed systems is a challenging task. While electronic structure methods like density-functional theory (DFT) usually provide a good compromise between accuracy and efficiency, they are computationally very demanding and thus applicable only to systems containing up to a few hundred atoms. Unfortunately, many interesting problems require simulations to be performed on much larger systems involving thousands of atoms or more. Consequently, more efficient methods are urgently needed, and a lot of effort has been spent on the development of a large variety of potentials enabling simulations with significantly extended time and length scales. Most commonly, these potentials are based on physically motivated functional forms and thus perform very well for the applications they have been designed for. On the other hand, they are often highly system-specific and thus cannot easily be transferred from one system to another. Moreover, their numerical accuracy is restricted by the intrinsic limitations of the imposed functional forms. In recent years, several novel types of potentials have emerged, which are not based on physical considerations. Instead, they aim to reproduce a set of reference electronic structure data as accurately as possible by using very general and flexible functional forms. In this review we will survey a number of these methods. While they differ in the choice of the employed mathematical functions, they all have in common that they provide high-quality potential-energy surfaces, while the efficiency is comparable to conventional empirical potentials. It has been demonstrated that in many cases these potentials now offer a very interesting new approach to study complex systems with hitherto unreached accuracy.
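The idea of fitting a flexible functional form to reference data, rather than imposing a physically motivated one, can be sketched with a toy pair potential. Here Lennard-Jones energies stand in for electronic-structure reference data; real schemes of the kind reviewed above fit many-body atomic environments (e.g., via symmetry functions), not a single pair distance:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # "Reference" energies: a Lennard-Jones pair potential sampled over a
    # range of separations, standing in for DFT data.
    rng = np.random.default_rng(3)
    r = rng.uniform(0.95, 2.5, size=(400, 1))
    E = 4.0 * ((1.0 / r)**12 - (1.0 / r)**6)

    # Flexible functional form: a small neural network with no built-in physics.
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    model.fit(r, E.ravel())

    r_test = np.array([[1.0], [1.12], [1.5], [2.0]])
    print(model.predict(r_test))                          # fitted potential
    print(4.0 * ((1.0 / r_test)**12 - (1.0 / r_test)**6).ravel())  # reference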
CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences
NASA Technical Reports Server (NTRS)
Slotnick, Jeffrey; Khodadoust, Abdollah; Alonso, Juan; Darmofal, David; Gropp, William; Lurie, Elizabeth; Mavriplis, Dimitri
2014-01-01
This report documents the results of a study to address the long range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.
ERIC Educational Resources Information Center
Geigel, Joan; And Others
A self-paced program designed to integrate the use of computers and physics courseware into the regular classroom environment is offered for physics high school teachers in this module on projectile and circular motion. A diversity of instructional strategies including lectures, demonstrations, videotapes, computer simulations, laboratories, and…
Large-scale large eddy simulation of nuclear reactor flows: Issues and perspectives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merzari, Elia; Obabko, Aleks; Fischer, Paul
Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. Finally, the focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.
Physical and chemical characterization of petroleum products by GC-MS.
Mendez, A; Meneghini, R; Lubkowitz, J
2007-01-01
There is a need for reliable and fast means of monitoring refining, conversion, and upgrading processes that aim to increase the yield of light distillates and thus reduce the oil barrel bottoms. By simultaneously utilizing the FID and mass selective detectors while splitting the column effluent in a controlled way, it is possible to obtain identical gas chromatograms and total ion chromatograms from a single run. This means that besides the intensity vs. time graphs, the intensity vs. mass and boiling point can also be obtained. As a result, physical and chemical characterization can be performed in a simple and rapid manner. Experimental results on middle and heavy distillates and crude oil fractions clearly show the effect of upgrading processes on the chemical composition and yields of diesel, jet fuel, and high vacuum gasoil fractions. The methodology is fully compliant with ASTM D-2887, D-7213, D-6352, and D-7169 for simulated distillation and the previously mentioned mass spectrometry standards. The group type analysis correlated satisfactorily with high-performance liquid chromatography data.
NASA Astrophysics Data System (ADS)
Greene, Patrick T.; Eldredge, Jeff D.; Zhong, Xiaolin; Kim, John
2016-07-01
In this paper, we present a method for performing uniformly high-order direct numerical simulations of high-speed flows over arbitrary geometries. The method was developed with the goal of simulating and studying the effects of complex isolated roughness elements on the stability of hypersonic boundary layers. The simulations are carried out on Cartesian grids with the geometries imposed by a third-order cut-stencil method. A fifth-order hybrid weighted essentially non-oscillatory scheme was implemented to capture any steep gradients in the flow created by the geometries and a third-order Runge-Kutta method is used for time advancement. A multi-zone refinement method was also utilized to provide extra resolution at locations with expected complex physics. The combination results in a globally fourth-order scheme in space and third order in time. Results confirming the method's high order of convergence are shown. Two-dimensional and three-dimensional test cases are presented and show good agreement with previous results. A simulation of Mach 3 flow over the logo of the Ubuntu Linux distribution is shown to demonstrate the method's capabilities for handling complex geometries. Results for Mach 6 wall-bounded flow over a three-dimensional cylindrical roughness element are also presented. The results demonstrate that the method is a promising tool for the study of hypersonic roughness-induced transition.
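For reference, the classic fifth-order WENO reconstruction (Jiang-Shu) that underlies schemes of this kind can be sketched as follows. This is a generic textbook formulation, not the authors' hybrid cut-stencil implementation:

    import numpy as np

    # Fifth-order WENO reconstruction of the left-biased interface value
    # v_{i+1/2} from cell averages v_{i-2..i+2}.
    def weno5_left(vm2, vm1, v0, vp1, vp2, eps=1e-6):
        # Smoothness indicators of the three candidate stencils.
        b0 = 13/12 * (vm2 - 2*vm1 + v0)**2 + 0.25 * (vm2 - 4*vm1 + 3*v0)**2
        b1 = 13/12 * (vm1 - 2*v0 + vp1)**2 + 0.25 * (vm1 - vp1)**2
        b2 = 13/12 * (v0 - 2*vp1 + vp2)**2 + 0.25 * (3*v0 - 4*vp1 + vp2)**2
        # Third-order candidate reconstructions.
        p0 = (2*vm2 - 7*vm1 + 11*v0) / 6
        p1 = (-vm1 + 5*v0 + 2*vp1) / 6
        p2 = (2*v0 + 5*vp1 - vp2) / 6
        # Nonlinear weights biased toward smooth stencils.
        a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
        s = a0 + a1 + a2
        return (a0*p0 + a1*p1 + a2*p2) / s

    # On smooth data the weights approach the ideal (0.1, 0.6, 0.3) and the
    # full fifth-order stencil is recovered; near a discontinuity the weights
    # collapse onto the smooth sub-stencils.
    x = np.linspace(0, 1, 11)
    v = np.sin(2 * np.pi * x)
    print(weno5_left(v[2], v[3], v[4], v[5], v[6]))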
Mixed martial arts induces significant fatigue and muscle damage up to 24 hours post-combat.
Ghoul, Nihel; Tabben, Montassar; Miarka, Bianca; Tourny, Claire; Chamari, Karim; Coquart, Jeremy
2017-06-22
This study investigates the physiological/physical responses to a simulated mixed martial arts (MMA) competition over 24 hr. Twelve fighters performed a simulated MMA competition, consisting of three 5-min MMA matches. Physiological/physical data were assessed before (Trest), directly after round 1 (Trd1), round 2 (Trd2) and round 3 (Trd3), and then 30-min (Trecovery30min) and/or 24-hr (Trecovery24h) post-competition. Heart rate (HR), rating of perceived exertion (RPE) and blood lactate concentration ([La]) were assessed at Trest, Trd1, Trd2 and Trd3. Biological data were collected at Trest, Trd3, Trecovery30min and Trecovery24h. Physical tests were performed at Trest, Trecovery30min and Trecovery24h. HR, RPE and [La] were high during competition. Leukocytes, hemoglobin, total protein and glycemia were increased at Trd3 compared with all other time points (p<0.05). Cortisol was increased at Trd3 compared with Trest and Trecovery24h (p<0.05). Testosterone was higher at Trd3 and Trecovery30min than Trest (p<0.001). Higher values of uric acid were noted during recovery periods (p<0.001). Lactate dehydrogenase was lower at Trest compared with Trd3, Trecovery30min and Trecovery24h (p<0.05). Countermovement jump was higher at Trest than Trecovery30min (p=0.020). Consequently, MMA is a high-intensity intermittent combat sport that induces significant fatigue and muscle damage, both of which are still present 24-hr post-competition.
Generalized simulation technique for turbojet engine system analysis
NASA Technical Reports Server (NTRS)
Seldner, K.; Mihaloew, J. R.; Blaha, R. J.
1972-01-01
A nonlinear analog simulation of a turbojet engine was developed. The purpose of the study was to establish simulation techniques applicable to propulsion system dynamics and controls research. A schematic model was derived from a physical description of a J85-13 turbojet engine. Basic conservation equations were applied to each component along with their individual performance characteristics to derive a mathematical representation. The simulation was mechanized on an analog computer. The simulation was verified in both steady-state and dynamic modes by comparing analytical results with experimental data obtained from tests performed at the Lewis Research Center with a J85-13 engine. In addition, comparison was also made with performance data obtained from the engine manufacturer. The comparisons established the validity of the simulation technique.
Ade, C J; Broxterman, R M; Craig, J C; Schlup, S J; Wilcox, S L; Barstow, T J
2014-11-01
The purpose was to evaluate the relationships between tests of fitness and two activities that simulate components of Lunar- and Martian-based extravehicular activities (EVA). Seventy-one subjects completed two field tests: a physical abilities test and a 10km Walkback test. The relationships between test times and the following parameters were determined: running V˙O2max, gas exchange threshold (GET), speed at V˙O2max (s-V˙O2max), highest sustainable rate of aerobic metabolism [critical speed (CS)], and the finite distance that could be covered above CS (D'): arm cranking V˙O2peak, GET, critical power (CP), and the finite work that can be performed above CP (W'). CS, running V˙O2max, s-V˙O2max, and arm cranking V˙O2peak had the highest correlations with the physical abilities field test (r=0.66-0.82, P<0.001). For the 10km Walkback, CS, s-V˙O2max, and running V˙O2max were significant predictors (r=0.64-0.85, P<0.001). CS and to a lesser extent V˙O2max are most strongly associated with tasks that simulate aspects of EVA performance, highlighting CS as a method for evaluating astronaut physical capacity. Copyright © 2014 Elsevier B.V. All rights reserved.
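The critical speed parameters referenced above follow from the linear distance-time form of the speed-duration relationship, d = CS·t + D'. A minimal sketch with invented trial data (not the study's measurements):

    import numpy as np

    # Critical speed (CS) and D' from time-to-exhaustion trials:
    # distance = CS * time + D', so a linear fit gives slope = CS and
    # intercept = D'.  The three trials below are illustrative only.
    t = np.array([120.0, 300.0, 720.0])     # trial durations (s)
    d = np.array([600.0, 1350.0, 3000.0])   # distances covered (m)

    CS, D_prime = np.polyfit(t, d, 1)       # slope = CS (m/s), intercept = D' (m)
    print(f"CS = {CS:.2f} m/s, D' = {D_prime:.0f} m")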
Preface to advances in numerical simulation of plasmas
NASA Astrophysics Data System (ADS)
Parker, Scott E.; Chacon, Luis
2016-10-01
This Journal of Computational Physics Special Issue, titled "Advances in Numerical Simulation of Plasmas," presents a snapshot of the international state of the art in the field of computational plasma physics. The articles herein are a subset of the topics presented as invited talks at the 24th International Conference on the Numerical Simulation of Plasmas (ICNSP), August 12-14, 2015 in Golden, Colorado. The choice of papers was highly selective. The ICNSP is held every other year and is the premier scientific meeting in the field of computational plasma physics.
NASA Technical Reports Server (NTRS)
Drozda, Tomasz G.; Cabell, Karen F.; Ziltz, Austin R.; Hass, Neal E.; Inman, Jennifer A.; Burns, Ross A.; Bathel, Brett F.; Danehy, Paul M.
2017-01-01
The current work compares experimentally and computationally obtained nitric oxide (NO) planar laser-induced fluorescence (PLIF) images of the mixing flowfields for three types of high-speed fuel injectors: a strut, a ramp, and a rectangular flushwall. These injection devices, which exhibited promising mixing performance at lower flight Mach numbers, are currently being studied as a part of the Enhanced Injection and Mixing Project (EIMP) at the NASA Langley Research Center. The EIMP aims to investigate scramjet fuel injection and mixing physics, and improve the understanding of underlying physical processes relevant to flight Mach numbers greater than eight. In the experiments, conducted in the NASA Langley Arc-Heated Scramjet Test Facility (AHSTF), the injectors are placed downstream of a Mach 6 facility nozzle, which simulates the high Mach number air flow at the entrance of a scramjet combustor. Helium is used as an inert substitute for hydrogen fuel. Both schlieren and PLIF techniques are applied to obtain mixing flowfield flow visualizations. The experimental PLIF is obtained by using a UV laser sheet to interrogate a plane of the flow by exciting fluorescence from the NO molecules, which are present in the AHSTF air. Consequently, the absence of signal in the resulting PLIF images is an indication of pure helium (fuel). The computational PLIF is obtained by applying a fluorescence model for NO to the results of the Reynolds-averaged simulations (RAS) of the mixing flow field carried out using the VULCAN-CFD solver. This approach is required because the PLIF signal is a nonlinear function of not only NO concentration, but also pressure, temperature, and the flow velocity. This complexity allows additional flow features to be identified and compared with those obtained from the computational fluid dynamics (CFD) simulations; however, such comparisons are only semiquantitative. Three-dimensional image reconstruction, similar to that used in magnetic resonance imaging, is also used to obtain images in the streamwise and spanwise planes from select cross-stream PLIF plane data. Synthetic schlieren is also computed from the RAS data. Good agreement between the experimental and computational results provides increased confidence in the CFD simulations for investigations of injector performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Chun; Leung, L. Ruby; Park, Sang-Hun
Advances in computing resources are gradually moving regional and global numerical forecasting simulations towards sub-10 km resolution, but global high resolution climate simulations remain a challenge. The non-hydrostatic Model for Prediction Across Scales (MPAS) provides a global framework to achieve very high resolution using regional mesh refinement. Previous studies using the hydrostatic version of MPAS (H-MPAS) with the physics parameterizations of Community Atmosphere Model version 4 (CAM4) found notable resolution dependent behaviors. This study revisits the resolution sensitivity using the non-hydrostatic version of MPAS (NH-MPAS) with both CAM4 and CAM5 physics. A series of aqua-planet simulations at global quasi-uniform resolutions ranging from 240 km to 30 km and global variable resolution simulations with a regional mesh refinement of 30 km resolution over the tropics are analyzed, with a primary focus on the distinct characteristics of NH-MPAS in simulating precipitation, clouds, and large-scale circulation features compared to H-MPAS-CAM4. The resolution sensitivity of total precipitation and column integrated moisture in NH-MPAS is smaller than that in H-MPAS-CAM4. This contributes importantly to the reduced resolution sensitivity of large-scale circulation features such as the inter-tropical convergence zone and Hadley circulation in NH-MPAS compared to H-MPAS. In addition, NH-MPAS shows almost no resolution sensitivity in the simulated westerly jet, in contrast to the obvious poleward shift in H-MPAS with increasing resolution, which is partly explained by differences in the hyperdiffusion coefficients used in the two models that influence wave activity. With the reduced resolution sensitivity, simulations in the refined region of the NH-MPAS global variable resolution configuration exhibit zonally symmetric features that are more comparable to the quasi-uniform high-resolution simulations than those from H-MPAS that displays zonal asymmetry in simulations inside the refined region. Overall, NH-MPAS with CAM5 physics shows less resolution sensitivity compared to CAM4. These results provide a reference for future studies to further explore the use of NH-MPAS for high-resolution climate simulations in idealized and realistic configurations.
Development of a contrast phantom for active millimeter-wave imaging systems
NASA Astrophysics Data System (ADS)
Barber, Jeffrey; Weatherall, James C.; Brauer, Carolyn S.; Smith, Barry T.
2011-06-01
As the development of active millimeter wave imaging systems continues, it is necessary to validate materials that simulate the expected response of explosives. While physics-based models have been used to develop simulants, it is desirable to image both the explosive and simulant together in a controlled fashion in order to demonstrate success. To this end, a millimeter wave contrast phantom has been created to calibrate image grayscale while controlling the configuration of the explosive and simulant such that direct comparison of their respective returns can be performed. The physics of the phantom are described, with millimeter wave images presented to show successful development of the phantom and simulant validation at GHz frequencies.
Numerical Simulation of Rocket Exhaust Interaction with Lunar Soil
NASA Technical Reports Server (NTRS)
Liever, Peter; Tosh, Abhijit; Curtis, Jennifer
2012-01-01
This technology development originated from the need to assess the debris threat resulting from soil material erosion induced by landing spacecraft rocket plume impingement on extraterrestrial planetary surfaces. The impact of soil debris was observed to be highly detrimental during NASA's Apollo lunar missions and will pose a threat for any future landings on the Moon, Mars, and other exploration targets. The innovation developed under this program provides a simulation tool that combines modeling of the diverse disciplines of rocket plume impingement gas dynamics, granular soil material liberation, and soil debris particle kinetics into one unified simulation system. The Unified Flow Solver (UFS) developed by CFDRC enabled the efficient, seamless simulation of mixed continuum and rarefied rocket plume flow utilizing a novel direct numerical simulation technique of the Boltzmann gas dynamics equation. The characteristics of the soil granular material response and modeling of the erosion and liberation processes were enabled through novel first principle-based granular mechanics models developed by the University of Florida specifically for the highly irregularly shaped and cohesive lunar regolith material. These tools were integrated into a unique simulation system that accounts for all relevant physics aspects: (1) Modeling of spacecraft rocket plume impingement flow under lunar vacuum environment resulting in a mixed continuum and rarefied flow; (2) Modeling of lunar soil characteristics to capture soil-specific effects of particle size and shape composition, soil layer cohesion and granular flow physics; and (3) Accurate tracking of soil-borne debris particles beginning with aerodynamically driven motion inside the plume to purely ballistic motion in lunar far field conditions. In the earlier project phase of this innovation, the capabilities of the UFS for mixed continuum and rarefied flow situations were validated and demonstrated for lunar lander rocket plume flow impingement under lunar vacuum conditions. Applications and improvements to the granular flow simulation tools contributed by the University of Florida were tested against Earth environment experimental results. Requirements for developing, validating, and demonstrating this solution environment were clearly identified, and an effective second phase execution plan was devised. In this phase, the physics models were refined and fully integrated into a production-oriented simulation tool set. Three-dimensional simulations of Apollo Lunar Excursion Module (LEM) and Altair landers (including full-scale lander geometry) established the practical applicability of the UFS simulation approach and its advanced performance level for large-scale realistic problems.
Ultrasound imaging based on nonlinear pressure field properties
NASA Astrophysics Data System (ADS)
Bouakaz, Ayache; Frinking, Peter J. A.; de Jong, Nico
2000-07-01
Ultrasound image quality has experienced a significant improvement over the past years with the utilization of harmonic frequencies. This brings the need to understand the physical processes involved in the propagation of finite-amplitude sound beams, and the issues involved in redesigning and optimizing phased-array transducers. New arrays with higher imaging performance are essential for tissue imaging and contrast imaging as well. This study presents measurements and simulations on a 4.6 MHz square transducer. The numerical scheme used solves the KZK equation in the time domain. Comparison of measured and computed data showed good agreement for low and high excitation levels. In a similar way, a numerical simulation was performed on a linear array with five elements. The simulation showed that the second harmonic beam is narrower than the fundamental, with less energy in the near field. In addition, the grating lobes are significantly lower. Accordingly, selective harmonic imaging shows fewer near-field artifacts and will lower the clutter, resulting in much cleaner images.
Detached Eddy Simulation for the F-16XL Aircraft Configuration
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Abdol-Hamid, Khaled; Parlette, Edward B.
2015-01-01
Numerical simulations for the flow around the F-16XL configuration as a contribution to the Cranked Arrow Wing Aerodynamic Project International 2 (CAWAPI-2) have been performed. The NASA Langley Tetrahedral Unstructured Software System (TetrUSS) with its USM3D solver was used to perform the unsteady flow field simulations for the subsonic high angle-of-attack case corresponding to flight condition (FC) 25. Two approaches were utilized to capture the unsteady vortex flow over the wing of the F-16XL. The first approach was to use Unsteady Reynolds-Averaged Navier-Stokes (URANS) coupled with standard turbulence closure models. The second approach was to use Detached Eddy Simulation (DES), which creates a hybrid model that attempts to combine the most favorable elements of URANS models and Large Eddy Simulation (LES). Computed surface static pressure profiles are presented and compared with flight data. Time-averaged and instantaneous results obtained on coarse, medium and fine grids are compared with the flight data. The intent of this study is to demonstrate that the DES module within the USM3D solver can be used to provide valuable data in predicting vortex-flow physics on a complex configuration.
A Numerical and Experimental Study of Damage Growth in a Composite Laminate
NASA Technical Reports Server (NTRS)
McElroy, Mark; Ratcliffe, James; Czabaj, Michael; Wang, John; Yuan, Fuh-Gwo
2014-01-01
The present study has three goals: (1) perform an experiment where a simple laminate damage process can be characterized in high detail; (2) evaluate the performance of existing commercially available laminate damage simulation tools by modeling the experiment; (3) observe and understand the underlying physics of damage in a composite honeycomb sandwich structure subjected to low-velocity impact. A quasi-static indentation experiment has been devised to provide detailed information about a simple mixed-mode damage growth process. The test specimens consist of an aluminum honeycomb core with a cross-ply laminate facesheet supported on a stiff uniform surface. When the sample is subjected to an indentation load, the honeycomb core provides support to the facesheet resulting in a gradual and stable damage growth process in the skin. This enables real time observation as a matrix crack forms, propagates through a ply, and then causes a delamination. Finite element analyses were conducted in ABAQUS/Explicit(TradeMark) 6.13 that used continuum and cohesive modeling techniques to simulate facesheet damage and a geometric and material nonlinear model to simulate core crushing. The high fidelity of the experimental data allows a detailed investigation and discussion of the accuracy of each numerical modeling approach.
Effect of Knudsen thermal force on the performance of low-pressure micro gas sensor
NASA Astrophysics Data System (ADS)
Barzegar Gerdroodbary, M.; Ganji, D. D.; Taeibi-Rahni, M.; Vakilipour, Shidvash
2017-07-01
In this paper, Direct Simulation Monte Carlo (DSMC) simulations were applied to investigate the mechanism of force generation inside a low-pressure gas sensor. The flow features and force generation mechanism inside a rectangular enclosure with hot and cold arms as the non-isothermal walls are comprehensively explained. In addition, extensive parametric studies are performed to study the effects of physical parameters on the performance and characteristics of this device under different operating conditions. In this research, the Knudsen number is varied from 0.1 to 4.5 (0.5 to 11 torr) to reveal all the characteristics of the thermally driven force inside the MEMS sensor. In order to simulate the rarefied gas inside the micro gas detector, the Boltzmann equations are solved to obtain high-precision results. The effects of ambient pressure and the temperature difference of the arms are comprehensively investigated. Our findings show that the maximum force increases more than 7 times when the temperature difference of the cold and hot arms is increased from 10 to 100 K. In addition, the results demonstrate that the thermal gradient at rarefied pressures induces complex flow structures, and the mechanism of force generation varies strongly under different pressure conditions.
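The Knudsen-number range quoted above follows from Kn = λ/L with the hard-sphere mean free path λ = k_B·T/(√2·π·d²·p). Here is a sketch with an air-like molecule and an illustrative device length scale, chosen only so the two pressure endpoints roughly bracket the quoted Kn range; none of these values are from the paper's DSMC setup:

    import numpy as np

    # Knudsen number Kn = lambda / L with the hard-sphere mean free path
    # lambda = k_B * T / (sqrt(2) * pi * d^2 * p).
    kB = 1.380649e-23           # Boltzmann constant (J/K)
    d = 3.7e-10                 # effective molecular diameter (m), air-like
    T = 300.0                   # temperature (K)
    L = 23e-6                   # characteristic length scale (m), illustrative

    for p_torr in (0.5, 11.0):
        p = p_torr * 133.322    # torr -> Pa
        lam = kB * T / (np.sqrt(2) * np.pi * d**2 * p)
        print(f"p = {p_torr:5.1f} torr -> Kn = {lam / L:.2f}")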
ERIC Educational Resources Information Center
Fogarty, Ian; Geelan, David
2013-01-01
Students in 4 Canadian high school physics classes completed instructional sequences in two key physics topics related to motion--Straight Line Motion and Newton's First Law. Different sequences of laboratory investigation, teacher explanation (lecture) and the use of computer-based scientific visualizations (animations and simulations) were…
Mechanical impact of dynamic phenomena in Francis turbines at off design conditions
NASA Astrophysics Data System (ADS)
Duparchy, F.; Brammer, J.; Thibaud, M.; Favrel, A.; Lowys, P. Y.; Avellan, F.
2017-04-01
At partial load and overload conditions, Francis turbines are subjected to hydraulic instabilities that can potentially result in high dynamic loading of the turbine components and significantly reduce their lifetime. This study presents both experimental data and numerical simulations, used as complementary approaches to study this dynamic loading. Measurements performed on a reduced-scale physical model, including a special runner instrumented with on-board strain gauges and pressure sensors, were used to investigate the dynamic phenomena experienced by the runner. They were also taken as the reference to validate the numerical simulation results. After validation, advantage was taken of the numerical simulations to highlight the mechanical response of the structure to the unsteady hydraulic phenomena, as well as their impact on the fatigue damage of the runner.
Numerical simulations for active tectonic processes: increasing interoperability and performance
NASA Technical Reports Server (NTRS)
Donnellan, A.; Fox, G.; Rundle, J.; McLeod, D.; Tullis, T.; Grant, L.
2002-01-01
The objective of this project is to produce a system to fully model earthquake-related data. This task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling.
Yang, Tao; Sezer, Hayri; Celik, Ismail B.; ...
2015-06-02
In the present paper, a physics-based procedure combining experiments and multi-physics numerical simulations is developed for the overall analysis of SOFC operational diagnostics and performance predictions. In this procedure, essential information about the fuel cell is first extracted by utilizing empirical polarization analysis in conjunction with experiments, and then refined by multi-physics numerical simulations via simultaneous analysis and calibration of the polarization curve and impedance behavior. The performance at different utilization cases and operating currents is also predicted to confirm the accuracy of the proposed model. It is demonstrated that, with the present electrochemical model, three air/fuel flow conditions are needed to produce a complete data set for better understanding of the processes occurring within SOFCs. After calibration against button cell experiments, the methodology can be used to assess the performance of planar cells without further calibration. The proposed methodology should accelerate the calibration process and improve the efficiency of design and diagnostics.
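The abstract does not give the exact form of the empirical polarization analysis, but a generic decomposition of the cell voltage into ohmic, activation, and concentration losses conveys the idea; all parameter values below are illustrative placeholders, not the authors' calibrated values:

```python
import numpy as np

R, F = 8.314, 96485.0   # gas constant, Faraday constant

def cell_voltage(i, T=1073.0, E_ocv=1.05, R_ohm=0.15, i0=0.2, i_L=2.0, n=2):
    """Empirical polarization model: open-circuit voltage minus ohmic,
    activation (Butler-Volmer in asinh form), and concentration losses.
    i, i0, i_L in A/cm^2; R_ohm in ohm*cm^2. All values are illustrative."""
    eta_act = (2 * R * T / (n * F)) * np.arcsinh(i / (2 * i0))
    eta_conc = -(R * T / (n * F)) * np.log(1.0 - i / i_L)
    return E_ocv - i * R_ohm - eta_act - eta_conc

for i in (0.1, 0.5, 1.0):
    print(f"{i:.1f} A/cm^2 -> {cell_voltage(i):.3f} V")
```

Fitting such a curve against measured polarization and impedance data is what supplies the "essential information" the multi-physics model then refines.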
Functional Performance Testing for Power and Return to Sports
Manske, Robert; Reiman, Michael
2013-01-01
Context: Functional performance testing of athletes can determine physical limitations that may affect sporting activities. Optimal functional performance testing simulates the athlete’s activity. Evidence Acquisition: A Medline search from 1960 to 2012 was implemented with the keywords functional testing, functional impairment testing, and functional performance testing in the English language. Each author also undertook independent searches of article references. Conclusion: Functional performance tests can bridge the gap between general physical tests and full, unrestricted athletic activity. PMID:24427396
Design of invisibility cloaks with an open tunnel.
Ako, Thomas; Yan, Min; Qiu, Min
2010-12-20
In this paper we apply the methodology of transformation optics to design a novel invisibility cloak that possesses an open tunnel. Such a cloak facilitates the insertion (or retrieval) of matter into (or from) the cloak's interior without significantly affecting the cloak's performance, overcoming the matter-exchange bottleneck inherent to most previously proposed cloak designs. We achieve this by applying a transformation which expands a point at the origin in electromagnetic space to a finite area in physical space in a highly anisotropic manner. The invisibility performance of the proposed cloak is verified by full-wave finite-element simulations.
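The specific point-expansion map is not reproduced in the abstract, but the underlying transformation-optics material rule is standard: a coordinate map with Jacobian J converts the medium's permittivity (and, identically, its permeability) as in this short sketch, where the uniaxial stretch stands in for the paper's highly anisotropic transformation:

```python
import numpy as np

def transformed_permittivity(jacobian, eps=np.eye(3)):
    """Transformation-optics rule: a map x -> x' with Jacobian J turns a
    medium eps into eps' = J eps J^T / det(J); the same rule applies to mu.
    This is how distorting electromagnetic space prescribes the
    anisotropic material parameters of the cloak shell."""
    J = np.asarray(jacobian, dtype=float)
    return J @ eps @ J.T / np.linalg.det(J)

# Example: a simple uniaxial stretch of free space
J = np.diag([3.0, 1.0, 1.0])
print(transformed_permittivity(J))   # diag(3, 1/3, 1/3)
```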
Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu
2014-12-01
High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.
Visualization Methods for Viability Studies of Inspection Modules for the Space Shuttle
NASA Technical Reports Server (NTRS)
Mobasher, Amir A.
2005-01-01
An effective simulation of an object, process, or task must be similar to that object, process, or task. A simulation could consist of a physical device, a set of mathematical equations, a computer program, a person, or some combination of these. There are many reasons for the use of simulators. Although some of the reasons are unique to a specific situation, there are many general reasons and purposes for using simulators, including but not limited to (1) safety, (2) scarce resources, (3) teaching/education, (4) additional capabilities, (5) flexibility, and (6) cost. Robot simulators are in use for all of these reasons. Virtual environments such as simulators eliminate physical contact with humans and hence increase the safety of the work environment. Corporations with limited funding and resources may utilize simulators to accomplish their goals while saving manpower and money. A computer simulation is safer than working with a real robot. Robots are typically a scarce resource: schools typically don't have a large number of robots, if any, and factories don't want robots kept from useful work unless absolutely necessary. Robot simulators are useful in teaching robotics, since a simulator gives a student hands-on experience, even if only with a simulated robot. The simulator is also more flexible: a user can quickly change the robot configuration or the workcell, or even replace the robot with a different one altogether. In order to be useful, a robot simulator must create a model that accurately performs like the real robot. A powerful simulator is usually thought of as a combination of a CAD package with simulation capabilities. Computer Aided Design (CAD) techniques are used extensively by engineers in virtually all areas of engineering. Parts are designed interactively, aided by the graphical display of both wireframe and more realistic shaded renderings. Once a part's dimensions have been specified to the CAD package, designers can view the part from any direction to examine how it will look and perform in relation to other parts. If changes are deemed necessary, the designer can easily make the changes and view the results graphically. However, a complex process of moving parts intended for operation in a complex environment can only be fully understood through animated graphical simulation. A CAD package with simulation capabilities allows the designer to develop geometrical models of the process being designed, as well as the environment in which the process will be used, and then test the process in graphical animation much as the actual physical system would be run. By operating the system of moving and stationary parts, the designer is able to see in simulation how the system will perform under a wide variety of conditions. If, for example, undesired collisions occur between parts of the system, design changes can be made easily, without the expense or potential danger of testing the physical system.
High heat flux testing of CFC composites for the tokamak physics experiment
NASA Astrophysics Data System (ADS)
Valentine, P. G.; Nygren, R. E.; Burns, R. W.; Rocket, P. D.; Colleraine, A. P.; Lederich, R. J.; Bradley, J. T.
1996-10-01
High heat flux (HHF) testing of carbon fiber reinforced carbon composites (CFCs) was conducted under the General Atomics program to develop plasma-facing components (PFCs) for Princeton Plasma Physics Laboratory's tokamak physics experiment (TPX). As part of the process of selecting TPX CFC materials, a series of HHF tests was conducted with the 30 kW electron beam test system (EBTS) facility at Sandia National Laboratories and with the plasma disruption simulator I (PLADIS-I) facility at the University of New Mexico. The purpose of the tests was to assess the thermal performance and erosion behavior of CFC materials. Tests were conducted on 42 different CFC materials. In general, the CFC materials withstood the rapid thermal pulse environments without fracturing, delaminating, or degrading in a non-uniform manner; nevertheless, significant differences in thermal performance, erosion behavior, vapor evolution, etc. were observed, and preliminary findings are presented. The CFCs exposed to the hydrogen plasma pulses in PLADIS-I exhibited greater erosion rates than the CFC materials exposed to the electron-beam pulses in EBTS. The results support the continued consideration of a variety of CFC composites for TPX PFCs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydom, Gerhard; Bostelmann, F.
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high-fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, and modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies, and results obtained.) SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on stochastic sampling methods or on derivative-based approaches such as generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties), extensive sensitivity and uncertainty studies are needed to quantify the impact of different sources of uncertainty on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well-validated calculation tools to meet design target accuracies. In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on HTGR Uncertainty Analysis in Modelling (UAM) be implemented. This CRP is a continuation of previous IAEA and Organization for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) international activities on Verification and Validation (V&V) of available analytical capabilities for HTGR simulation for design and safety evaluations. Within the framework of these activities, different numerical and experimental benchmark problems were performed and insight was gained about specific physics phenomena and the adequacy of analysis methods.
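As a schematic of the sampling-based uncertainty propagation described here (not the CRP's actual codes or covariance data), one can push assumed nuclear-data uncertainties through a cheap stand-in model and rank inputs by their effect on the output:

```python
import numpy as np

rng = np.random.default_rng(42)

def k_eff_model(capture_xs, fission_xs):
    """Stand-in for an expensive coupled neutronics/thermal-hydraulics
    run; an arbitrary smooth response used purely for illustration."""
    return 1.0 + 0.05 * fission_xs - 0.08 * capture_xs

# Sample nuclear-data inputs from assumed uncertainty distributions
n = 10_000
capture = rng.normal(1.00, 0.03, n)     # assumed 3% relative uncertainty
fission = rng.normal(1.00, 0.02, n)     # assumed 2% relative uncertainty

k = k_eff_model(capture, fission)
print(f"k_eff = {k.mean():.5f} +/- {k.std(ddof=1):.5f}")

# Crude sensitivity ranking via correlation of each input with the output
for name, x in (("capture", capture), ("fission", fission)):
    print(name, "corr:", round(float(np.corrcoef(x, k)[0, 1]), 3))
```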
Dosimetry in small-animal CT using Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Lee, C.-L.; Park, S.-J.; Jeon, P.-H.; Jo, B.-D.; Kim, H.-J.
2016-01-01
Small-animal computed tomography (micro-CT) imaging devices are increasingly being used in biological research. While investigators are mainly interested in high-contrast, low-noise, high-resolution anatomical images, relatively large radiation doses are required, and there is growing concern over the radiological risk from preclinical experiments. This study was conducted to determine the radiation dose in a mouse model for dosimetric estimates using the GEANT4 application for tomographic emission simulations (GATE) and to extend these techniques to various small-animal CT applications. Radiation dose simulations were performed with the same parameters as those for the measured micro-CT data, using the MOBY phantom, a pencil ion chamber, and an electrometer with a CT detector. For physical validation of the radiation dose, the absorbed doses of the brain and liver in the mouse were evaluated by comparing simulated results with data measured using thermoluminescent dosimeters (TLDs). The mean difference between simulated and measured data was less than 2.9% for the 50 kVp X-ray source. The absorbed doses of 37 brain tissues and the major organs of the mouse were evaluated as a function of tube voltage. The absorbed dose across all measurements in the brain (37 types of tissue) consistently increased with tube voltage and ranged from 42.4 to 104.0 mGy. Among the brain tissues, the absorbed dose of the hypothalamus (157.8-414.3 mGy) was the highest for beams at 50-80 kVp, and that of the corpus callosum (11.2-26.6 mGy) was the lowest. These results can be used as a dosimetric database to control mouse doses in preclinical targeted radiotherapy experiments. In addition, to accurately calculate the mouse absorbed dose, the X-ray spectrum, detector alignment, and uncertainty in the elemental composition of the simulated materials must be accurately modeled.
NASA Astrophysics Data System (ADS)
Massmann, J.; Nagel, T.; Bilke, L.; Böttcher, N.; Heusermann, S.; Fischer, T.; Kumar, V.; Schäfers, A.; Shao, H.; Vogel, P.; Wang, W.; Watanabe, N.; Ziefle, G.; Kolditz, O.
2016-12-01
As part of the German site selection process for a high-level nuclear waste repository, different repository concepts in the geological candidate formations rock salt, clay stone, and crystalline rock are being discussed. An open assessment of these concepts using numerical simulations requires physical models capturing the individual particularities of each rock type and associated geotechnical barrier concept to a comparable level of sophistication. In a joint work group of the Helmholtz Centre for Environmental Research (UFZ) and the German Federal Institute for Geosciences and Natural Resources (BGR), scientists at the UFZ are developing and implementing multiphysical process models while BGR scientists apply them to large-scale analyses. The advances in simulation methods for waste repositories are incorporated into the open-source code OpenGeoSys. Here, recent application-driven progress in this context is highlighted. A robust implementation of visco-plasticity with temperature-dependent properties in a framework for the thermo-mechanical analysis of rock salt is shown. The model enables the simulation of heat transport along with its consequences on the elastic response as well as on primary and secondary creep or the occurrence of dilatancy in the repository near field. Transverse isotropy, non-isothermal hydraulic processes, and their coupling to mechanical stresses are taken into account for the analysis of repositories in clay stone. These processes are also considered in the near-field analyses of engineered barrier systems, including the swelling/shrinkage of the bentonite material. The temperature-dependent saturation evolution around the heat-emitting waste container is described by different multiphase flow formulations. For all of these applications, we illustrate the workflow from model development and implementation, through verification and validation, to repository-scale application simulations using methods of high performance computing.
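The abstract does not state which creep law is implemented in OpenGeoSys, but secondary creep of rock salt is commonly written as a Norton power law with an Arrhenius temperature dependence, which illustrates why heat emission accelerates near-field deformation. A sketch with placeholder coefficients:

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def norton_creep_rate(sigma, T, A=0.18, n=5.0, Q=54_000.0):
    """Steady-state (secondary) creep rate in the spirit of a Norton
    power law with Arrhenius temperature dependence:
        eps_dot = A * sigma^n * exp(-Q / (R * T)).
    sigma in MPa, T in K; A, n, Q are illustrative placeholder values,
    not calibrated rock-salt parameters."""
    return A * sigma**n * np.exp(-Q / (R_GAS * T))

# Heat emitted by the waste raises T and accelerates creep closure:
for T in (300.0, 350.0):
    print(f"T = {T} K -> creep rate = {norton_creep_rate(10.0, T):.3e} 1/s")
```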
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations can be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems; 2) geophysical inversion routines which can be used to characterize physical systems; and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems that maximize the performance of scientific and engineering codes. Using three case studies (a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method), it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
Dynamic Biological Functioning Important for Simulating and Stabilizing Ocean Biogeochemistry
NASA Astrophysics Data System (ADS)
Buchanan, P. J.; Matear, R. J.; Chase, Z.; Phipps, S. J.; Bindoff, N. L.
2018-04-01
The biogeochemistry of the ocean exerts a strong influence on the climate by modulating atmospheric greenhouse gases. In turn, ocean biogeochemistry depends on numerous physical and biological processes that change over space and time. Accurately simulating these processes is fundamental for accurately simulating the ocean's role within the climate. However, our simulation of these processes is often simplistic, despite a growing understanding of underlying biological dynamics. Here we explore how new parameterizations of biological processes affect simulated biogeochemical properties in a global ocean model. We combine 6 different physical realizations with 6 different biogeochemical parameterizations (36 unique ocean states). The biogeochemical parameterizations, all previously published, aim to more accurately represent the response of ocean biology to changing physical conditions. We make three major findings. First, oxygen, carbon, alkalinity, and phosphate fields are more sensitive to changes in the ocean's physical state. Only nitrate is more sensitive to changes in biological processes, and we suggest that assessment protocols for ocean biogeochemical models formally include the marine nitrogen cycle to assess their performance. Second, we show that dynamic variations in the production, remineralization, and stoichiometry of organic matter in response to changing environmental conditions benefit the simulation of ocean biogeochemistry. Third, dynamic biological functioning reduces the sensitivity of biogeochemical properties to physical change. Carbon and nitrogen inventories were 50% and 20% less sensitive to physical changes, respectively, in simulations that incorporated dynamic biological functioning. These results highlight the importance of a dynamic biology for ocean properties and climate.
A large eddy lattice Boltzmann simulation of magnetohydrodynamic turbulence
NASA Astrophysics Data System (ADS)
Flint, Christopher; Vahala, George
2018-02-01
Large eddy simulations (LES) of a lattice Boltzmann magnetohydrodynamic (LB-MHD) model are performed for the magnetized Kelvin-Helmholtz jet instability. This algorithm is an extension of Ansumali et al. [1] to MHD, in which one first performs an expansion in the filter width on the kinetic equations, followed by the usual low-Knudsen-number expansion. These two perturbation operations do not commute. Closure is achieved by invoking the physical constraint that subgrid effects occur at transport time scales. The simulations are in very good agreement with direct numerical simulations.
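For readers unfamiliar with the lattice Boltzmann framework underlying the LB-MHD model, a minimal single-relaxation-time (BGK) step on the standard D2Q9 lattice for plain hydrodynamics is sketched below; the paper's MHD extension and filter-width LES closure are substantially more involved:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium for each direction."""
    cu = np.einsum('qd,xyd->xyq', c, u)              # c_q . u
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def bgk_collide_stream(f, tau=0.6):
    """One BGK step: relax toward equilibrium, then stream along c_q."""
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau          # collision
    for q, (cx, cy) in enumerate(c):                 # periodic streaming
        f[..., q] = np.roll(np.roll(f[..., q], cx, axis=0), cy, axis=1)
    return f

# Start from rest with uniform density on a 32x32 periodic box
f = equilibrium(np.ones((32, 32)), np.zeros((32, 32, 2)))
f = bgk_collide_stream(f)
print(f.shape, f.sum())   # mass is conserved by collision and streaming
```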
An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.
2017-01-01
The simulation-optimization approach entails a large number of model runs, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates for the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which was developed to simulate groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is replaced by a statistical surrogate developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs in space and time. Only sensitive parameters are included in the calibration process to further improve computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
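A minimal sketch of the surrogate-based calibration loop follows. A random forest (itself a bootstrap-aggregated tree ensemble) stands in for BMARS, since MARS implementations live in third-party packages such as py-earth; the two-parameter "model" and the observations are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def nrmse(observed, simulated):
    """Normalized RMSE, the calibration objective named in the abstract."""
    rmse = np.sqrt(np.mean((observed - simulated) ** 2))
    return rmse / (np.max(observed) - np.min(observed))

def physical_model(params):
    """Stand-in for one expensive MODFLOW run: heads at 8 observation
    wells as a smooth function of two parameters (e.g. log K, recharge)."""
    wells = np.linspace(0, 1, 8)
    return np.sin(3 * params[0] * wells) + 0.5 * params[1]

rng = np.random.default_rng(0)
train_params = rng.uniform(0, 1, (300, 2))
train_heads = np.array([physical_model(p) for p in train_params])

# Bagged surrogate fitted to the training set generated by the model
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(train_params, train_heads)

observed = physical_model(np.array([0.42, 0.71]))   # synthetic "truth"
candidates = rng.uniform(0, 1, (20_000, 2))          # cheap surrogate search
scores = [nrmse(observed, s) for s in surrogate.predict(candidates)]
print("calibrated parameters ~", candidates[np.argmin(scores)].round(2))
```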
Simulation teaching method in Engineering Optics
NASA Astrophysics Data System (ADS)
Lu, Qieni; Wang, Yi; Li, Hongbin
2017-08-01
Here we introduce a pedagogical method of theoretical simulation as one major means of the teaching process of "Engineering Optics" in the course quality improvement action plan (Qc) at our school. Students, in groups of three to five, complete simulations of interference, diffraction, electromagnetism, and polarization of light; each student is evaluated and scored on the basis of his or her performance in teacher-student interviews, and each student can opt to be interviewed repeatedly until satisfied with his or her score and learning. After three years of Qc practice, remarkable teaching and learning effects have been obtained. Such theoretical simulation experiments are a valuable teaching method for physical optics, which is highly theoretical and abstract. This teaching methodology works well in training students to ask questions and to solve problems, and it also stimulates their interest in research-oriented learning, develops their initiative, and builds their self-confidence and sense of innovation.
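As an example of the kind of interference simulation such students might complete, the Fraunhofer double-slit intensity pattern can be computed in a few lines; the wavelength and slit dimensions below are arbitrary illustrative choices:

```python
import numpy as np

def double_slit_intensity(theta, wavelength=633e-9, d=50e-6, a=10e-6):
    """Fraunhofer double-slit pattern: two-slit interference with slit
    spacing d, modulated by the single-slit diffraction envelope of a
    slit of width a. theta is the observation angle in radians."""
    beta = np.pi * d * np.sin(theta) / wavelength    # interference term
    alpha = np.pi * a * np.sin(theta) / wavelength   # diffraction envelope
    envelope = np.sinc(alpha / np.pi) ** 2           # np.sinc(x) = sin(pi x)/(pi x)
    return np.cos(beta) ** 2 * envelope

theta = np.linspace(-0.02, 0.02, 9)
print(double_slit_intensity(theta).round(3))
```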
V&V Of CFD Modeling Of The Argonne Bubble Experiment: FY15 Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoyt, Nathaniel C.; Wardle, Kent E.; Bailey, James L.
2015-09-30
In support of the development of accelerator-driven production of the fission product Mo-99, computational fluid dynamics (CFD) simulations of an electron-beam irradiated, experimental-scale bubble chamber have been conducted in order to aid in the interpretation of existing experimental results, provide additional insights into the physical phenomena, and develop predictive thermal-hydraulic capabilities that can be applied to full-scale target solution vessels. Toward that end, a custom hybrid Eulerian-Eulerian-Lagrangian multiphase solver was developed, and simulations have been performed on high-resolution meshes. Good agreement between experiments and simulations has been achieved, especially with respect to the prediction of the maximum temperature of the uranyl sulfate solution in the experimental vessel. These positive results suggest that the simulation methodology that has been developed will prove suitable for assisting in the development of full-scale production hardware.
High Fidelity Simulation of Transcritical Liquid Jet in Crossflow
NASA Astrophysics Data System (ADS)
Li, Xiaoyi; Soteriou, Marios
2017-11-01
Transcritical injection of liquid fuel occurs in many practical applications such as diesel, rocket, and gas turbine engines. In these applications, the liquid fuel, at supercritical pressure and subcritical temperature, is introduced into an environment where both the pressure and the temperature exceed the critical point of the fuel. The complex physics of the transition from subcritical to supercritical conditions poses great challenges for both experimental and numerical investigations. In this work, a numerical simulation of a binary system, in which a subcritical liquid is injected into a supercritical gaseous crossflow, is performed. The spatially varying fluid thermodynamic and transport properties are evaluated using an established cubic equation of state and extended corresponding-states principles with standard mixing rules. To efficiently account for the large spatial gradients in the property variations, an adaptive mesh refinement technique is employed. The transcritical simulation results are compared with predictions from traditional subcritical jet atomization simulations.
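As a small illustration of the cubic equation-of-state evaluation mentioned above, the Peng-Robinson form can be solved for density at a given pressure and temperature. Nitrogen constants are used here for definiteness; the paper's corresponding-states property evaluation and mixing rules for the binary system are not reproduced:

```python
import numpy as np

R = 8.314  # J/(mol K)

def pr_density(p, T, Tc=126.2, pc=3.39e6, omega=0.037, M=0.028):
    """Mass density from the Peng-Robinson cubic equation of state
    (nitrogen critical constants assumed; a fuel/nitrogen mixture would
    add mixing rules on top of this single-species evaluation)."""
    a = 0.45724 * R**2 * Tc**2 / pc
    b = 0.07780 * R * Tc / pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
    A = a * alpha * p / (R * T)**2
    B = b * p / (R * T)
    # Compressibility: Z^3 - (1-B)Z^2 + (A-3B^2-2B)Z - (AB-B^2-B^3) = 0
    roots = np.roots([1.0, -(1.0 - B), A - 3*B**2 - 2*B, -(A*B - B**2 - B**3)])
    Z = roots[np.isreal(roots)].real.max()   # vapor-like root
    return p * M / (Z * R * T)               # kg/m^3

# Density varies steeply near the critical point, hence the adaptive mesh
for T in (150.0, 300.0):
    print(f"T = {T} K, p = 5 MPa -> rho = {pr_density(5e6, T):.1f} kg/m^3")
```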
Development of Multistep and Degenerate Variational Integrators for Applications in Plasma Physics
NASA Astrophysics Data System (ADS)
Ellison, Charles Leland
Geometric integrators yield high-fidelity numerical results by retaining conservation laws in the time advance. A particularly powerful class of geometric integrators is symplectic integrators, which are widely used in orbital mechanics and accelerator physics. An important application presently lacking symplectic integrators is the guiding center motion of magnetized particles represented by non-canonical coordinates. Because guiding center trajectories are foundational to many simulations of magnetically confined plasmas, geometric guiding center algorithms have high potential for impact. The motivation is compounded by the need to simulate long-pulse fusion devices, including ITER, and by opportunities in high-performance computing, including the use of petascale resources and beyond. This dissertation uses a systematic procedure for constructing geometric integrators, known as variational integration, to deliver new algorithms for guiding center trajectories and other plasma-relevant dynamical systems. These variational integrators are non-trivial because the Lagrangians of interest are degenerate: the Euler-Lagrange equations are first-order differential equations and the Legendre transform is not invertible. The first contribution of this dissertation is to show that variational integrators for degenerate Lagrangian systems are typically multistep methods. Multistep methods admit parasitic mode instabilities that can ruin the numerical results. These instabilities motivate the second major contribution: degenerate variational integrators. By replicating the degeneracy of the continuous system, degenerate variational integrators avoid parasitic mode instabilities. The new methods are therefore robust geometric integrators for degenerate Lagrangian systems. These developments in variational integration theory culminate in one-step degenerate variational integrators for non-canonical magnetic field line flow and guiding center dynamics. The guiding center integrator assumes coordinates such that one component of the magnetic field is zero; it is shown how to construct such coordinates for nested magnetic surface configurations. Additionally, collisional drag effects are incorporated in the variational guiding center algorithm for the first time, allowing simulation of energetic particle thermalization. Advantages relative to existing canonical-symplectic and non-geometric algorithms are demonstrated numerically. All algorithms have been implemented as part of a modern, parallel, ODE-solving library suitable for use in high-performance simulations.
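The dissertation's degenerate variational integrators are beyond a short sketch, but the basic payoff of geometric integration (bounded energy error instead of secular drift) can be shown by comparing explicit and symplectic Euler on a pendulum:

```python
import numpy as np

def explicit_euler(q, p, dt):
    """Non-geometric first-order step for the pendulum H = p^2/2 - cos(q)."""
    return q + dt * p, p - dt * np.sin(q)

def symplectic_euler(q, p, dt):
    """Symplectic first-order step: update p first, then q with the new
    momentum; this preserves the symplectic two-form of the flow."""
    p_new = p - dt * np.sin(q)
    return q + dt * p_new, p_new

def energy(q, p):
    return 0.5 * p**2 - np.cos(q)

q1 = q2 = 1.0
p1 = p2 = 0.0
E0 = energy(q1, p1)
for _ in range(10_000):
    q1, p1 = explicit_euler(q1, p1, 0.01)
    q2, p2 = symplectic_euler(q2, p2, 0.01)

print(f"energy drift, explicit:   {energy(q1, p1) - E0:+.4f}")   # grows
print(f"energy drift, symplectic: {energy(q2, p2) - E0:+.4f}")   # bounded
```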
Carbon Nanofiber-Based, High-Frequency, High-Q, Miniaturized Mechanical Resonators
NASA Technical Reports Server (NTRS)
Kaul, Anupama B.; Epp, Larry W.; Bagge, Leif
2011-01-01
High-Q resonators are a critical component of stable, low-noise communication systems, radar, and precise timing applications such as atomic clocks. In electronic resonators based on Si integrated circuits, resistive losses increase as a result of the continued reduction in device dimensions, which decreases their Q values. On the other hand, due to the mechanical construction of bulk acoustic wave (BAW) and surface acoustic wave (SAW) resonators, such loss mechanisms are absent, enabling higher Q values for both BAW and SAW resonators compared to their electronic counterparts. Another advantage of mechanical resonators is their inherently higher radiation tolerance, a factor that makes them attractive for NASA's extreme-environment planetary missions, for example to the Jovian environments where radiation doses are at hostile levels. Despite these advantages, both BAW and SAW resonators suffer from low resonant frequencies and are physically large, which precludes their integration into miniaturized electronic systems. Because there is a need to move the resonant frequency of oscillators to the order of gigahertz, new technologies and materials are being investigated that will make performance at those frequencies attainable. By moving to nanoscale structures, in this case vertically oriented, cantilevered carbon nanotubes (CNTs) that have large aspect ratios (length/thickness) and extremely high elastic moduli, it is possible to overcome both disadvantages of BAW and SAW resonators. Nano-electro-mechanical systems (NEMS) that utilize high-aspect-ratio nanomaterials exhibiting high elastic moduli (e.g., carbon-based nanomaterials) benefit from high Qs, operate at high frequency, and have small force constants that translate to high responsivity, resulting in improved sensitivity, lower power consumption, and improved tunability. NEMS resonators have recently been demonstrated using top-down, lithographically fabricated approaches to form cantilever or bridge-type structures. Top-down approaches, however, rely on complicated and expensive e-beam lithography and often require a release mechanism. Resonance effects in structures synthesized using bottom-up approaches have also recently been reported based on carbon nanotubes, but such approaches have relied on a planar two-dimensional (2D) geometry. In this innovation, vertically aligned tubes synthesized using a bottom-up approach have been considered, where the vertical orientation of the tubes has the potential to increase integration density even further. The simulation of a vertically oriented, cantilevered carbon nanotube was performed using COMSOL Multiphysics, a finite element simulation package. All simulations were performed in a 2D geometry that provided consistent results and minimized computational complexity. The simulations assumed a vertically oriented, cantilevered nanotube of uniform density (1.5 g/cm3). The elastic modulus was assumed to be 600 GPa, the relative permittivity of the nanotube was assumed to be 5.0, and Poisson's ratio was assumed to be 0.2. It should be noted that the relative permittivity and Poisson's ratio for the nanotubes of interest are not known accurately. However, as in previous simulations, the relative permittivity and Poisson's ratio were treated as weak variables, and no significant changes were observed when these variables were varied.
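Using the material values quoted in the abstract (E = 600 GPa, density 1.5 g/cm3) together with Euler-Bernoulli beam theory, one can estimate the fundamental flexural frequency of such a cantilever. The solid-rod idealization and the dimensions below are assumptions for illustration, not values from the study:

```python
import numpy as np

def cantilever_f1(L, r, E=600e9, rho=1500.0):
    """Fundamental flexural resonance of a solid cylindrical cantilever
    from Euler-Bernoulli beam theory:
        f1 = (lambda1^2 / (2 pi)) * sqrt(E I / (rho A)) / L^2.
    E and rho follow the abstract (600 GPa, 1.5 g/cm^3); treating the
    nanotube as a solid rod, and the L and r used below, are assumptions."""
    lam1 = 1.8751                   # first-mode eigenvalue of a cantilever
    I = np.pi * r**4 / 4.0          # second moment of area
    A = np.pi * r**2                # cross-sectional area
    return (lam1**2 / (2 * np.pi)) * np.sqrt(E * I / (rho * A)) / L**2

# A 200 nm long, 5 nm radius rod lands in the gigahertz range
print(f"f1 = {cantilever_f1(L=200e-9, r=5e-9) / 1e9:.2f} GHz")
```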
Visualization and Analysis of Climate Simulation Performance Data
NASA Astrophysics Data System (ADS)
Röber, Niklas; Adamidis, Panagiotis; Behrens, Jörg
2015-04-01
Visualization is the key process of transforming abstract (scientific) data into a graphical representation that aids in understanding the information hidden within the data. Climate simulation data sets are typically quite large, time varying, and consist of many different variables sampled on an underlying grid. A large variety of climate models - and sub-models - exist to simulate various aspects of the climate system. Generally, one is mainly interested in the physical variables produced by the simulation runs, but model developers are also interested in performance data measured along with these simulations. Climate simulation models are carefully developed, complex software systems designed to run in parallel on large HPC systems. An important goal is to utilize the entire hardware as efficiently as possible, that is, to distribute the workload as evenly as possible among the individual components. This is a very challenging task, and detailed performance data, such as timings and cache misses, have to be used to locate and understand performance problems in order to optimize the model implementation. Furthermore, correlating performance data with the processes of the application and the sub-domains of the decomposed underlying grid is vital when addressing communication and load-imbalance issues. High-resolution climate simulations are carried out on tens to hundreds of thousands of cores, yielding a vast amount of profiling data that cannot be analyzed without appropriate visualization techniques. This PICO presentation displays and discusses the ICON simulation model, which is jointly developed by the Max Planck Institute for Meteorology and the German Weather Service in partnership with DKRZ. The visualization and analysis of the model's performance data allow us to optimize and fine-tune the model, as well as to understand its execution on the HPC system. We show and discuss our workflow, and present new ideas and solutions that have greatly aided our understanding. The software employed is based on Avizo Green, ParaView, and SimVis, as well as our own software extensions.
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a given response time.
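The parallelization pattern described, independent particle histories combined by message passing, reduces to a scatter-compute-reduce structure. A toy mpi4py sketch (not the MC4 code) with per-rank random streams:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank gets an independent stream; production codes use dedicated
# parallel generators such as SPRNG or DCMT, as in the paper.
rng = np.random.default_rng(seed=12345 + rank)

def track_particles(n, rng):
    """Toy stand-in for particle transport: exponentially distributed
    path lengths; returns the mean penetration depth over n histories."""
    return rng.exponential(scale=1.0, size=n).mean()

n_local = 1_000_000            # particle histories per rank
local_mean = track_particles(n_local, rng)

# Histories are independent, so the problem is embarrassingly parallel
# and speedup is near-linear, consistent with the reported results.
total = comm.reduce(local_mean, op=MPI.SUM, root=0)
if rank == 0:
    print(f"mean depth over {size} ranks: {total / size:.5f}")
```

Run with, e.g., `mpirun -n 4 python mc_sketch.py`.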
Investigation of Turbulent Tip Leakage Vortex in an Axial Water Jet Pump with Large Eddy Simulation
NASA Technical Reports Server (NTRS)
Hah, Chunill; Katz, Joseph
2012-01-01
Detailed steady and unsteady numerical studies were performed to investigate tip clearance flow in an axial water jet pump. The primary objective is to understand the physics of the unsteady tip clearance flow, the unsteady tip leakage vortex, and cavitation inception in an axial water jet pump. The steady pressure field and the resulting steady tip leakage vortex from a steady flow analysis do not seem to explain the measured cavitation inception correctly. The measured flow field near the tip is unsteady, and the measured cavitation inception is highly transient. Flow visualization with cavitation bubbles shows that the leakage vortex oscillates significantly and that many intermittent vortex ropes are present between the suction side of the blade and the tip leakage core vortex. Although the flow field is highly transient, the overall flow structure is stable and a characteristic frequency seems to exist. To capture the relevant flow physics as fully as possible, a Reynolds-averaged Navier-Stokes (RANS) calculation and a Large Eddy Simulation (LES) were applied in the current investigation. The present study reveals that several vortices from the tip leakage vortex system periodically cross the tip gap of the adjacent blade. Sudden changes in the local pressure field inside the tip gap due to these vortices create vortex ropes. The instantaneous pressure field inside the tip gap is drastically different from that of the steady flow simulation. An unsteady flow simulation that can calculate unsteady vortex motion is necessary to calculate cavitation inception accurately, even at the design flow condition, in such a water jet pump.
Vincent, Grace E; Ferguson, Sally; Larsen, Brianna; Ridgers, Nicola D; Snow, Rod; Aisbett, Brad
2018-04-06
To examine the effects of sleep restriction on firefighters' physical task performance, physical activity, and physiological and perceived exertion during simulated hot wildfire conditions. Thirty-one firefighters were randomly allocated to either the hot (n = 18, HOT; 33 °C, 8-h sleep opportunity) or hot and sleep-restricted (n = 13, HOT + SR; 33 °C, 4-h sleep opportunity) condition. Intermittent, self-paced work circuits of six firefighting tasks were performed for 3 days. Firefighters self-reported ratings of perceived exertion. Heart rate, core temperature, and physical activity were measured continuously. Fluids were consumed ad libitum, and all food and fluids consumed were recorded. Urine volume and urine specific gravity (USG) were analysed, and sleep was assessed using polysomnography (PSG). There were no differences between the HOT and HOT + SR groups in firefighters' physical task performance, heart rate, core temperature, USG, or fluid intake. Ratings of perceived exertion were higher (p < 0.05) in the HOT + SR group for two of the six firefighting tasks. The HOT group spent approximately 7 min more undertaking moderate physical activity throughout the 2-h work circuits than the HOT + SR group. Two nights of sleep restriction did not influence firefighters' physical task performance or physiological responses during 3 days of simulated wildfire suppression. Further research is needed to explore firefighters' pacing strategies during real wildfire suppression.
Nonlinear intrinsic variables and state reconstruction in multiscale simulations
NASA Astrophysics Data System (ADS)
Dsilva, Carmeline J.; Talmon, Ronen; Rabin, Neta; Coifman, Ronald R.; Kevrekidis, Ioannis G.
2013-11-01
Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.
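NIV builds on diffusion-map-style embeddings. A bare-bones version of that underlying construction, without the Mahalanobis/noise-covariance machinery that makes NIV invariant to the measurement modality, looks like this:

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """Bare-bones diffusion map: a Gaussian kernel on pairwise distances,
    row-normalized into a Markov matrix whose leading nontrivial
    eigenvectors give low-dimensional intrinsic coordinates. NIV replaces
    the Euclidean distance with a Mahalanobis distance estimated from
    local noise covariances, which is what merges different observations
    of the same underlying process into one reference frame."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)      # Markov normalization
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)            # eigenvalue 1 is trivial
    return vecs.real[:, order[1:n_coords + 1]]

# Noisy observations of a 1D process embedded in 3D
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 2 * np.pi, 300))
X = np.c_[np.cos(t), np.sin(t), 0.1 * rng.normal(size=t.size)]
coords = diffusion_map(X, eps=0.5)
print(coords.shape)   # (300, 2) intrinsic coordinates
```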
High-Performance Modeling and Simulation of Anchoring in Granular Media for NEO Applications
NASA Technical Reports Server (NTRS)
Quadrelli, Marco B.; Jain, Abhinandan; Negrut, Dan; Mazhar, Hammad
2012-01-01
NASA is interested in designing a spacecraft capable of visiting a near-Earth object (NEO), performing experiments, and then returning safely. Certain periods of this mission would require the spacecraft to remain stationary relative to the NEO, in an environment characterized by very low gravity levels; such situations require an anchoring mechanism that is compact, easy to deploy, and, upon mission completion, easy to remove. The design philosophy used in this task relies on the simulation capability of a high-performance multibody dynamics physics engine. On Earth, it is difficult to create low-gravity conditions, and testing in low-gravity environments, whether artificial or in space, can be costly and very difficult to achieve. Through simulation, the effect of gravity can be controlled with great accuracy, making it ideally suited to analyze the problem at hand. Using Chrono::Engine, a simulation package capable of utilizing massively parallel Graphics Processing Unit (GPU) hardware, several validation experiments were performed. Modeling of the regolith interaction was carried out, after which anchor penetration tests were performed and analyzed. The regolith was modeled by a granular medium composed of very large numbers of convex three-dimensional rigid bodies, subject to microgravity levels and interacting with each other through contact, friction, and cohesive forces. The multibody dynamics simulation approach used for simulating anchors penetrating a soil uses a differential variational inequality (DVI) methodology to solve the contact problem posed as a linear complementarity problem (LCP). Implemented within a GPU processing environment, collision detection is greatly accelerated compared to traditional CPU (central processing unit)-based collision detection. Hence, systems of millions of particles interacting with complex dynamic systems can be efficiently analyzed, and design recommendations can be made in a much shorter time. The figure shows an example of this capability, in which the Brazil-nut problem is simulated: as a container full of granular material is vibrated, the large ball slowly moves upwards. This capability was expanded to account for anchors of different shapes and penetration velocities interacting with granular soils.
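The DVI/LCP contact solver described above is far more sophisticated than can be sketched here; a simple penalty-force alternative for sphere contacts under a microgravity body force at least illustrates the quantities a granular model tracks (all parameters arbitrary, and friction and cohesion are omitted):

```python
import numpy as np

def sphere_contact_forces(pos, radius=0.01, k=1e4, g=1e-5):
    """Spring-penalty normal contacts between equal spheres under a
    microgravity body force. The paper instead solves a differential
    variational inequality with friction and cohesion; the penalty law
    is used here only to keep the sketch short."""
    f = np.zeros_like(pos)
    f[:, 2] -= g                              # micro-g body force
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2 * radius - dist
            if overlap > 0:                   # bodies interpenetrate
                fn = k * overlap * d / dist   # push them apart
                f[i] -= fn
                f[j] += fn
    return f

pos = np.array([[0, 0, 0.0], [0.015, 0, 0.0], [0.005, 0, 0.018]])
print(sphere_contact_forces(pos))
```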
xyZET: A Simulation Program for Physics Teaching.
ERIC Educational Resources Information Center
Hartel, Hermann
2000-01-01
Discusses xyZET, a simulation program that provides a 3D space for numerous experiments in basic mechanics and electricity and was developed to support physics teaching. Tests course material for 11th grade at German high schools under classroom conditions and reports on its stability and effectiveness. (Contains 15 references.) (Author/YDS)
Fero, Laura J; O'Donnell, John M; Zullo, Thomas G; Dabbs, Annette DeVito; Kitutu, Julius; Samosky, Joseph T; Hoffman, Leslie A
2010-10-01
This paper is a report of an examination of the relationship between metrics of critical thinking skills and performance in simulated clinical scenarios. Paper and pencil assessments are commonly used to assess critical thinking but may not reflect simulated performance. In 2007, a convenience sample of 36 nursing students participated in measurement of critical thinking skills and simulation-based performance using videotaped vignettes, high-fidelity human simulation, the California Critical Thinking Disposition Inventory and California Critical Thinking Skills Test. Simulation-based performance was rated as 'meeting' or 'not meeting' overall expectations. Test scores were categorized as strong, average, or weak. Most (75.0%) students did not meet overall performance expectations using videotaped vignettes or high-fidelity human simulation; most difficulty related to problem recognition and reporting findings to the physician. There was no difference between overall performance based on method of assessment (P = 0.277). More students met subcategory expectations for initiating nursing interventions (P ≤ 0.001) using high-fidelity human simulation. The relationship between videotaped vignette performance and critical thinking disposition or skills scores was not statistically significant, except for problem recognition and overall critical thinking skills scores (Cramer's V = 0.444, P = 0.029). There was a statistically significant relationship between overall high-fidelity human simulation performance and overall critical thinking disposition scores (Cramer's V = 0.413, P = 0.047). Students' performance reflected difficulty meeting expectations in simulated clinical scenarios. High-fidelity human simulation performance appeared to approximate scores on metrics of critical thinking best. Further research is needed to determine if simulation-based performance correlates with critical thinking skills in the clinical setting. © 2010 The Authors. Journal of Advanced Nursing © 2010 Blackwell Publishing Ltd.
Cross-flow turbines: physical and numerical model studies towards improved array simulations
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2015-12-01
Cross-flow, or vertical-axis, turbines show potential in marine hydrokinetic (MHK) and wind energy applications. As turbine designs mature, the research focus is shifting from individual devices towards improving turbine array layouts to maximize overall power output, i.e., minimizing wake interference for axial-flow turbines, or taking advantage of constructive wake interaction for cross-flow turbines. Numerical simulations are generally better suited to exploring the turbine array design parameter space, as physical model studies of large arrays at large model scale would be expensive. However, since the computing power available today is not sufficient to simulate the flow in and around large arrays of turbines with fully resolved turbine geometries, the turbines' interaction with the energy resource needs to be parameterized, or modeled. Most models in use today, e.g. the actuator disk, are not able to predict the unique wake structure generated by cross-flow turbines. Experiments were carried out using a high-resolution turbine test bed in a large cross-section tow tank, designed to achieve Reynolds numbers high enough for the results to be Reynolds-number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to its standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free-surface effects on submerged MHK devices. An additional sub-model is considered for injecting turbulence-model scalar quantities based on actuator line element loading. Results are presented for the simulation of performance and wake dynamics of axial- and cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET grant 1150797.
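The core of an actuator line model is the blade-element force lookup. The sketch below uses a crude flat-plate-like coefficient curve in place of real foil data (which the authors' OpenFOAM library would interpolate, together with the dynamic stall and flow curvature corrections named above); all numbers are arbitrary:

```python
import numpy as np

def alm_element_force(u_rel, chord, dz, alpha_deg, rho=1000.0):
    """Blade-element force for one actuator line element: look up lift
    and drag coefficients at the local angle of attack and scale by the
    local dynamic pressure times the element's planform area. The toy
    coefficient curves below are stand-ins for tabulated foil data."""
    alpha = np.radians(alpha_deg)
    cl = 2 * np.pi * np.sin(alpha) * np.cos(alpha)   # toy lift curve
    cd = 0.02 + 1.3 * np.sin(alpha) ** 2             # toy drag curve
    q_area = 0.5 * rho * u_rel**2 * chord * dz       # dyn. pressure x area
    return q_area * cl, q_area * cd                  # lift, drag [N]

lift, drag = alm_element_force(u_rel=2.0, chord=0.14, dz=0.1, alpha_deg=8.0)
print(f"lift = {lift:.1f} N, drag = {drag:.1f} N")
```

In the CFD solver, these element forces are then distributed back onto the flow field as a smoothed body force, typically via a Gaussian projection kernel, which is what makes the approach feasible for full arrays.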
Application of Plasma Waveguides to High Energy Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milchberg, Howard M
2013-03-30
The eventual success of laser-plasma based acceleration schemes for high-energy particle physics will require the focusing and stable guiding of short, intense laser pulses in reproducible plasma channels. For this goal to be realized, many scientific issues need to be addressed. These issues include an understanding of the basic physics of, and an exploration of various schemes for, plasma channel formation. In addition, the coupling of intense laser pulses to these channels and the stable propagation of pulses in the channels require study. Finally, new theoretical and computational tools need to be developed to aid in the design and analysis of experiments and future accelerators. Here we propose a 3-year renewal of our combined theoretical and experimental program on the applications of plasma waveguides to high-energy accelerators. During the past grant period we made a number of significant advances in the science of laser-plasma based acceleration. We pioneered the development of clustered gases as a new, highly efficient medium for plasma channel formation. Our contributions here include theoretical and experimental studies of the physics of cluster ionization, heating, explosion, and channel formation. We demonstrated for the first time the generation of, and guiding in, a corrugated plasma waveguide. The fine structure demonstrated in these guides is only possible with cluster jet heating by lasers. The corrugated guide is a slow-wave structure operable at arbitrarily high laser intensities, allowing direct laser acceleration, a process we have explored in detail with simulations. The development of these guides opens the possibility of direct laser acceleration, a true miniature analogue of the SLAC RF-based accelerator. Our theoretical studies during this period also contributed to the further development of the simulation codes Wake and QuickPIC, which can be used for both laser-driven and beam-driven plasma-based acceleration schemes. We will continue our development of advanced simulation tools by modifying the QuickPIC algorithm to allow for the simulation of plasma particle pick-up by the wake fields. We have also performed extensive simulations of plasma slow-wave structures for efficient THz generation by guided laser beams or accelerated electron beams. We will pursue experimental studies of direct laser acceleration and of THz generation by two methods: ponderomotive-induced THz polarization, and THz radiation by laser-accelerated electron beams. We also plan to study both conventional and corrugated plasma channels using our new 30 TW laser in our new laboratory facilities. We will investigate the production of very long (5 cm) hydrogen plasma waveguides. We will study guiding at increasing power levels through the onset of laser-induced cavitation (the bubble regime) to assess the role played by the preformed channel. Experiments in direct acceleration will be performed, using laser-plasma wakefields as the electron injector. Finally, we will use two-colour ionization of gases as a high-frequency THz source (<60 THz) for femtosecond measurements of low plasma densities in waveguides and beams.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miles, Paul C.
2015-03-01
The development and application of optically accessible engines to further our understanding of in-cylinder combustion processes is reviewed, spanning early efforts in simplified engines to the more recent development of high-pressure, high-speed engines that retain the geometric complexities of modern production engines. Limitations of these engines with respect to the reproduction of realistic metal test engine characteristics and performance are identified, as well as methods that have been used to overcome these limitations. Finally, the role of the work performed in these engines in clarifying the fundamental physical processes governing combustion and in laying the foundation for predictive engine simulation is summarized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varner, R.L.; Blankenship, J.L.; Beene, J.R.
1998-02-01
Custom monolithic electronic circuits have been developed recently for large detector applications in high energy physics, where subsystems require tens of thousands of channels of signal processing and data acquisition. In the design and construction of these enormous detectors, it has been found that monolithic circuits offer significant advantages over discrete implementations through increased performance, flexible packaging, lower power, and reduced cost per channel. Much of the integrated circuit design for the high energy physics community is directly applicable to intermediate energy heavy-ion and electron physics. This STTR project, conducted in collaboration with researchers at the Holifield Radioactive Ion Beam Facility (HRIBF) at Oak Ridge National Laboratory, sought to develop a new integrated circuit chip set for barium fluoride (BaF{sub 2}) detector arrays based upon existing CMOS monolithic circuit designs created for high energy physics experiments. The work under STTR Phase 1 demonstrated, through the design, simulation, and testing of several prototype chips, the feasibility of using custom CMOS integrated circuits for processing signals from BaF{sub 2} detectors. Function blocks including charge-sensitive amplifiers, comparators, one-shots, time-to-amplitude converters, analog memory circuits, and buffer amplifiers were implemented during the Phase 1 effort. Experimental results from bench testing and laboratory testing with sources were documented.
Mock Data Challenge for the MPD/NICA Experiment on the HybriLIT Cluster
NASA Astrophysics Data System (ADS)
Gertsenberger, Konstantin; Rogachevsky, Oleg
2018-02-01
Simulating data processing before the first experimental data arrive is an important task in high-energy physics experiments. This article presents the current Event Data Model and the Mock Data Challenge for the MPD experiment at the NICA accelerator complex, which uses ongoing simulation studies to stress-test the distributed computing infrastructure and experiment software in a full production environment, from simulated data through to physics analysis.
NASA Astrophysics Data System (ADS)
Fang, Ye; Feng, Sheng; Tam, Ka-Ming; Yun, Zhifeng; Moreno, Juana; Ramanujam, J.; Jarrell, Mark
2014-10-01
Monte Carlo simulations of the Ising model play an important role in the field of computational statistical physics, and they have revealed many properties of the model over the past few decades. However, the effect of frustration due to random disorder, in particular the possible spin glass phase, remains a crucial but poorly understood problem. One of the obstacles in the Monte Carlo simulation of random frustrated systems is their long relaxation time, making an efficient parallel implementation on state-of-the-art computation platforms highly desirable. The graphics processing unit (GPU) is one such platform, providing an opportunity to significantly enhance the computational performance and thus gain new insight into this problem. In this paper, we present optimization and tuning approaches for the CUDA implementation of the spin glass simulation on GPUs. We discuss the integration of various design alternatives, such as GPU kernel construction with minimal communication, memory tiling, and look-up tables. We present a binary data format, Compact Asynchronous Multispin Coding (CAMSC), which provides an additional 28.4% speedup compared with the traditionally used Asynchronous Multispin Coding (AMSC). Our overall design sustains a performance of 33.5 ps per spin-flip attempt for simulating the three-dimensional Edwards-Anderson model with parallel tempering, which significantly improves the performance over existing GPU implementations.
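To ground the sampling workload described above, here is a plain single-spin Python sketch of the two kernels involved: a Metropolis sweep of the 3D Edwards-Anderson model and a replica-exchange (parallel tempering) step. The paper's GPU code packs many replicas into machine words (multispin coding) and is far more elaborate; names and data layout here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, J, beta):
    """One Metropolis sweep of the 3D Edwards-Anderson model.
    spins: (L, L, L) array of +/-1; J['x'|'y'|'z'][i, j, k] couples
    site (i, j, k) to its neighbor in the +x/+y/+z direction (periodic)."""
    L = spins.shape[0]
    for _ in range(spins.size):
        i, j, k = rng.integers(0, L, size=3)
        h = (J['x'][i, j, k]           * spins[(i + 1) % L, j, k]
           + J['x'][(i - 1) % L, j, k] * spins[(i - 1) % L, j, k]
           + J['y'][i, j, k]           * spins[i, (j + 1) % L, k]
           + J['y'][i, (j - 1) % L, k] * spins[i, (j - 1) % L, k]
           + J['z'][i, j, k]           * spins[i, j, (k + 1) % L]
           + J['z'][i, j, (k - 1) % L] * spins[i, j, (k - 1) % L])
        dE = 2.0 * spins[i, j, k] * h      # energy cost of flipping this spin
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j, k] *= -1

def tempering_swaps(replicas, energies, betas):
    """Replica-exchange step: swap configurations at adjacent temperatures
    with probability min(1, exp[(beta_a - beta_b)(E_a - E_b)])."""
    for r in range(len(betas) - 1):
        d = (betas[r] - betas[r + 1]) * (energies[r] - energies[r + 1])
        if d >= 0 or rng.random() < np.exp(d):
            replicas[r], replicas[r + 1] = replicas[r + 1], replicas[r]
            energies[r], energies[r + 1] = energies[r + 1], energies[r]
```

The swap criterion lets cold replicas borrow the fast decorrelation of hot ones, which is precisely what makes the long relaxation times of frustrated systems tractable and why the method parallelizes naturally across replicas.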
NASA Astrophysics Data System (ADS)
Kemp, Gregory Elijah
Ultra-intense laser (>10^18 W/cm^2) interactions with matter are capable of producing relativistic electrons, which have a variety of applications in state-of-the-art scientific and medical research conducted at universities and national laboratories across the world. Control of various aspects of these hot-electron distributions is highly desired to optimize a particular outcome. Hot-electron generation in low-contrast interactions, where significant amounts of under-dense pre-plasma are present, can be plagued by highly non-linear relativistic laser-plasma instabilities and quasi-static magnetic field generation, often resulting in less desirable and less predictable electron source characteristics. High-contrast interactions offer more controlled conditions, but often at the cost of lower overall coupling and increased sensitivity to initial target conditions. An experiment studying the differences in hot-electron generation between high- and low-contrast pulse interactions with solid density targets was performed on the Titan laser platform at the Jupiter Laser Facility at Lawrence Livermore National Laboratory in Livermore, CA. To date, the hot electrons generated in the laboratory are not directly observable at the source of the interaction. Instead, indirect studies are performed using state-of-the-art simulations, constrained by the various experimental measurements. These measurements, more often than not, rely on secondary processes generated by the transport of these electrons through the solid density materials, which can be susceptible to a variety of instabilities and target material/geometry effects. Although often neglected in these types of studies, the specularly reflected light can provide invaluable insight, as it is directly influenced by the interaction. In this thesis, I address the use of (personally obtained) experimental specular reflectivity measurements to indirectly study hot-electron generation in the context of high-contrast, relativistic laser-plasma interactions. Spatial, temporal, and spectral properties of the incident and specular pulses, both near and far from the interaction region where experimental measurements are obtained, are used to benchmark simulations designed to infer the dominant hot-electron acceleration mechanisms and their corresponding energy/angular distributions. To handle this highly coupled interaction, I employed particle-in-cell modeling using a wide variety of algorithms (verified to be numerically stable and consistent with analytic expressions) and physical models (validated by experimental results) to reasonably model the interaction's sweeping range of plasma densities, temporal and spatial scales, electromagnetic wave propagation, and its interaction with solid density matter. Due to fluctuations in the experimental conditions and limited computational resources, only a limited number of full-scale simulations were performed under typical experimental conditions to infer the relevant physical phenomena in the interactions. I show the usefulness of the often overlooked specular reflectivity measurements in constraining both high- and low-contrast simulations, as well as the limitations of their experimental interpretations. Using these experimental measurements to reasonably constrain the simulation results, I discuss the sensitivity of relativistic electron generation in ultra-intense laser-plasma interactions to initial target conditions and the dynamic evolution of the interaction region.
Efficient Numerical Simulation of Aerothermoelastic Hypersonic Vehicles
NASA Astrophysics Data System (ADS)
Klock, Ryan J.
Hypersonic vehicles operate in a high-energy flight environment characterized by high dynamic pressures, high thermal loads, and non-equilibrium flow dynamics. This environment induces strong fluid, thermal, and structural dynamics interactions that are unique to this flight regime. If these vehicles are to be effectively designed and controlled, then a robust and intuitive understanding of each of these disciplines must be developed not only in isolation, but also when coupled. Limitations on scaling and the availability of adequate test facilities mean that physical investigation is infeasible. Ever-growing computational power offers the ability to perform elaborate numerical simulations, but also has its own limitations. The state of the art in numerical simulation is either to create ever more high-fidelity physics models that do not couple well and require too much processing power to consider more than a few seconds of flight, or to use low-fidelity analytical models that can be tightly coupled and processed quickly, but do not represent realistic systems due to their simplifying assumptions. Reduced-order models offer a middle ground by distilling the dominant trends of high-fidelity training solutions into a form that can be quickly processed and more tightly coupled. This thesis presents a variably coupled, variable-fidelity, aerothermoelastic framework for the simulation and analysis of high-speed vehicle systems using analytical, reduced-order, and surrogate modeling techniques. Full launch-to-landing flights of complete vehicles are considered and used to define flight envelopes with aeroelastic, aerothermal, and thermoelastic limits, tune in-the-loop flight controllers, and inform future design considerations. A partitioned approach to vehicle simulation is considered in which regions dominated by particular combinations of processes are separated from the overall solution and simulated by a specialized set of models to improve overall processing speed and solution fidelity. A number of enhancements to this framework are made through (1) the implementation of a publish-subscribe code architecture for rapid prototyping of physics and process models; (2) the implementation of a selection of linearization and model identification methods, including high-order pseudo-time forward difference, complex step, and direct identification from ordinary differential equation inspection; and (3) improvements to the aeroheating and thermal models with non-equilibrium gas dynamics and generalized temperature-dependent material thermal properties. A variety of model reduction and surrogate modeling techniques are applied to a representative hypersonic vehicle on a terminal trajectory to enable complete aerothermoelastic flight simulations. Multiple terminal trajectories of various starting altitudes and Mach numbers are optimized to maximize the final kinetic energy of the vehicle upon reaching the surface. Surrogate models are compared for representing the variation of material thermal properties with temperature; a new method is developed and shown to be both accurate and computationally efficient. While the numerically efficient simulation of high-speed vehicles is developed within the presented framework, the goal of real-time simulation is hampered by the necessity of multiple nested convergence loops. An alternative all-in-one surrogate model method based on singular-value decomposition and regression is developed that runs in near real time.
Finally, the aeroelastic stability of pressurized cylindrical shells is investigated in the context of a maneuvering axisymmetric high-speed vehicle. Moderate internal pressurization is numerically shown to decrease stability, an effect shown experimentally in the literature but not well reproduced analytically. Insights are drawn from time simulation results and used to inform approaches for future vehicle model development.
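Among the linearization methods enumerated in the preceding abstract, complex-step differentiation has a compact, self-contained illustration. The sketch below is a generic implementation of that standard technique, not code from the thesis; the function names are hypothetical.

```python
import numpy as np

def complex_step_jacobian(f, x, h=1e-30):
    """Jacobian of f at x via complex-step differentiation.
    Unlike finite differences, the step h can be tiny without
    subtractive cancellation: df_i/dx_j ~= Im(f_i(x + i*h*e_j)) / h."""
    x = np.asarray(x, dtype=complex)
    fx = np.asarray(f(x))
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += 1j * h          # perturb one input along the imaginary axis
        J[:, j] = np.imag(f(xp)) / h
    return J

# usage: f must be written with complex-safe operations
f = lambda x: np.array([x[0] * np.sin(x[1]), x[0] ** 2 + x[1]])
print(complex_step_jacobian(f, [1.0, 0.5]))
```

Because no subtraction of nearly equal numbers occurs, derivatives come out accurate to machine precision, which is why the method suits Jacobian extraction for model identification and linearization loops.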
NASA Astrophysics Data System (ADS)
Wang, LiFeng; Ye, WenHua; He, XianTu; Wu, JunFeng; Fan, ZhengFeng; Xue, Chuang; Guo, HongYu; Miao, WenYong; Yuan, YongTeng; Dong, JiaQin; Jia, Guo; Zhang, Jing; Li, YingJun; Liu, Jie; Wang, Min; Ding, YongKun; Zhang, WeiYan
2017-05-01
Inertial fusion energy (IFE) has been considered a promising, nearly inexhaustible source of sustainable carbon-free power for the world's energy future. It has long been recognized that the control of hydrodynamic instabilities is of critical importance for ignition and high gain in the inertial-confinement fusion (ICF) hot-spot ignition scheme. In this mini-review, we summarize the progress of theoretical and simulation research on hydrodynamic instabilities in the ICF central hot-spot implosion in our group over the past decade. In order to obtain sufficient understanding of the growth of hydrodynamic instabilities in ICF, we first decompose the problem into different stages according to the implosion physics processes. The decomposed essential physics processes associated with ICF implosions, such as the Rayleigh-Taylor instability (RTI), the Richtmyer-Meshkov instability (RMI), the Kelvin-Helmholtz instability (KHI), convergent geometry effects, and perturbation feed-through, are reviewed. Analytical models in planar, cylindrical, and spherical geometries have been established to study different physical aspects, including density-gradient, interface-coupling, geometry, and convergence effects. The influence of ablation in the presence of preheating on the RTI has been extensively studied by numerical simulations. The KHI with the ablation effect included has been discussed in detail for the first time. A series of single-mode ablative RTI experiments has been performed on the Shenguang-II laser facility. The theoretical and simulation research provides physical insight into the linear and weakly nonlinear growth, and the nonlinear evolution, of hydrodynamic instabilities in ICF implosions, which has directly supported the research of ICF ignition target design. An ICF hot-spot ignition implosion design that uses several controlling features, some of them novel, based on our current understanding of hydrodynamic instabilities to address shell implosion stability is briefly described.
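For orientation, the linear growth rates at the heart of the RTI discussion above take the familiar textbook forms below. The ablative expression is a commonly quoted Takabe-type fit; the coefficients beta_1 and beta_2 and the notation (wavenumber k, acceleration g, Atwood number A_T, minimum density-gradient scale length L_m, ablation velocity v_a) are illustrative, as the exact forms and coefficients vary between studies.

```latex
\begin{align}
  \gamma_{\mathrm{classical}} &= \sqrt{A_T\, k\, g},
  \qquad A_T = \frac{\rho_h - \rho_l}{\rho_h + \rho_l},\\
  \gamma_{\mathrm{ablative}} &\simeq
  \beta_1 \sqrt{\frac{k\, g}{1 + k L_m}} \;-\; \beta_2\, k\, v_a .
\end{align}
```

The finite-gradient factor 1/(1 + k L_m) and the ablative term proportional to k v_a capture the two stabilizing effects, density-gradient and ablation, that the review examines.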
Anderson, Collin; Boehme, Sabrina; Ouellette, Jacquelyn; Stidham, Chanelle; MacKay, Mark
2014-01-01
Purpose: The physical and chemical compatibility of intravenous acetaminophen with commonly administered injectable medications was evaluated. Methods: Simulated Y-site evaluation was accomplished by mixing 2 mL of acetaminophen (10 mg/mL) with 2 mL of an alternative intravenous medication and subsequently storing the mixture in a polypropylene syringe for 4 hours. The aliquot solutions were visually inspected and evaluated for crystal content at 4 hours by infusing 4 mL of the medication mixture through a 0.45-μm nitrocellulose filter disc. Medication mixtures selected for chemical stability testing were analyzed by high-performance liquid chromatography at 0, 1, and 4 hours using a Zorbax Eclipse Plus C18, 4.6 x 100 mm, 3.5-μm column for separation of analytes with subsequent diode-array detection. Medications were considered chemically compatible if the concentrations of all components were >90% of the original concentrations during the 4-hour simulated Y-site compatibility test. Results: U.S. Pharmacopeial Convention (USP) standards for physical particle counts were met for acetaminophen injection (10 mg/mL) when combined with cefoxitin, ceftriaxone, clindamycin, dexamethasone, diphenhydramine, dolasetron, fentanyl, granisetron, hydrocortisone, hydromorphone, ketorolac, meperidine, methylprednisolone, midazolam, morphine, nalbuphine, ondansetron, piperacillin/tazobactam, ranitidine, and vancomycin. Injectable acetaminophen is incompatible with acyclovir and diazepam and therefore should not be administered concomitantly with either of these products. Further testing confirmed the chemical compatibility of acetaminophen with ceftriaxone, diphenhydramine, granisetron, ketorolac, nalbuphine, ondansetron, piperacillin/tazobactam, and vancomycin. Conclusion: All medications tested with acetaminophen were physically compatible except for acyclovir and diazepam. All 8 medications tested for chemical compatibility with acetaminophen were stable over the 4-hour simulated Y-site administration study. PMID:24421562
NASA Astrophysics Data System (ADS)
Yücel, M.; Emirhan, E.; Bayrak, A.; Ozben, C. S.; Yücel, E. Barlas
2015-11-01
The design and production of a simple, low-cost X-ray imaging system for light industrial applications was targeted in the Nuclear Physics Laboratory of Istanbul Technical University. In this study, the production, transmission, and detection of X-rays were simulated for the proposed imaging device. An OX/70-P dental tube was used, and X-ray spectra simulated with Geant4 were validated by comparison with X-ray spectra measured between 20 and 35 keV. The relative detection efficiency of the detector was also determined to confirm the physics processes used in the simulations. Various time-optimization techniques were applied to reduce the simulation time.
NASA Astrophysics Data System (ADS)
Puranik, Bhalchandra; Watvisave, Deepak; Bhandarkar, Upendra
2016-11-01
The interaction of a shock with a density interface is observed in several technological applications such as supersonic combustion, inertial confinement fusion, and shock-induced fragmentation of kidney stones and gallstones. The central physical process in this interaction is the mechanism of the Richtmyer-Meshkov instability (RMI). The specific situation where the density interface is initially an isolated spherical or cylindrical gas bubble presents a relatively simple geometry that exhibits all the essential RMI processes, such as reflected and refracted shocks, secondary instabilities, turbulence, and mixing of the species. If the incident shocks are strong, the calorically imperfect nature of the gas must be modeled. In the present work, we have carried out simulations of the shock-bubble interaction using the direct simulation Monte Carlo (DSMC) method for such situations. Specifically, an investigation of the shock-bubble interaction with diatomic gases involving rotational and vibrational excitations at high temperatures is performed, and the effects of such high-temperature phenomena will be presented.
NASA Astrophysics Data System (ADS)
Trivedi, Nitin; Kumar, Manoj; Haldar, Subhasis; Deswal, S. S.; Gupta, Mridula; Gupta, R. S.
2017-09-01
A charge-plasma-based dopingless (DL) accumulation-mode (AM) junctionless (JL) cylindrical surrounding gate (CSG) MOSFET is proposed and extensively investigated. The proposed device has no physical junctions at the source-to-channel and channel-to-drain interfaces; the complete silicon pillar is undoped. The high free-electron density (induced N+) regions are created by keeping the work function of the source/drain metal contacts lower than that of undoped silicon. Fabrication complexity is thus drastically reduced by eliminating the need for high-temperature doping techniques. The electrical/analog characteristics of the proposed device have been extensively investigated using numerical simulation and compared with a conventional junctionless cylindrical surrounding gate (JL-CSG) MOSFET of identical dimensions; the ATLAS-3D device simulator is used for the numerical simulations. The results show that the proposed device is more immune to short-channel effects than the conventional JL-CSG MOSFET and, owing to its higher I_ON/I_OFF ratio, is suitable for faster switching applications.
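As background, the work-function criterion this abstract relies on is the standard charge-plasma condition from the device literature; the values and form below are taken from that general literature, not from this paper, so treat them as assumptions. With electron affinity chi_Si of about 4.05 eV and band gap E_g of about 1.12 eV, inducing an electron plasma (N+) in undoped silicon requires

```latex
% Charge-plasma condition for an induced N+ region in undoped silicon:
\phi_M \;<\; \chi_{\mathrm{Si}} + \frac{E_g}{2}
\qquad\Longrightarrow\qquad
\phi_M \lesssim 4.6\ \mathrm{eV}.
```

Low-work-function metals such as hafnium are commonly cited choices for such contacts, which is consistent with the fabrication simplification the abstract claims.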
The Effects of Training on Anxiety and Task Performance in Simulated Suborbital Spaceflight.
Blue, Rebecca S; Bonato, Frederick; Seaton, Kimberly; Bubka, Andrea; Vardiman, Johnené L; Mathers, Charles; Castleberry, Tarah L; Vanderploeg, James M
2017-07-01
In commercial spaceflight, anxiety could become mission-impacting, causing negative experiences or endangering the flight itself. We studied layperson response to four varied-length training programs (ranging from 1 h to 2 d of preparation) prior to centrifuge simulation of the launch and re-entry acceleration profiles expected during suborbital spaceflight. We examined subject task execution, evaluating performance in high-stress conditions, and sought to identify any trends in demographics, hemodynamics, or similar factors in subjects with the highest anxiety or poorest tolerance of the experience. Volunteers participated in one of four centrifuge training programs of varied complexity and duration, culminating in two simulated suborbital spaceflights. At most, subjects underwent seven centrifuge runs over 2 d, including two +Gz runs (peak +3.5 Gz, Run 2) and two +Gx runs (peak +6.0 Gx, Run 4), followed by three runs approximating suborbital spaceflight profiles (combined +Gx and +Gz; peak +6.0 Gx and +4.0 Gz). Two cohorts also received dedicated anxiety-mitigation training. Subjects were evaluated on their performance on various tasks, including a simulated emergency. A total of 148 subjects (105 men, 43 women; age range 19-72 yr, mean 39.4 ± 13.2 yr; body mass index range 17.3-38.1, mean 25.1 ± 3.7) participated in 2-7 centrifuge exposures. Ten subjects withdrew or limited their G exposure; a history of motion sickness was associated with opting out. Shorter training programs were associated with elevated hemodynamic responses. Single-directional G training did not significantly improve tolerance. Training programs appear most effective when they are high fidelity, and sequential exposures may improve tolerance of physical/psychological flight stressors. The studied variables did not predict anxiety-related responses to these centrifuge profiles. Blue RS, Bonato F, Seaton K, Bubka A, Vardiman JL, Mathers C, Castleberry TL, Vanderploeg JM. The effects of training on anxiety and task performance in simulated suborbital spaceflight. Aerosp Med Hum Perform. 2017; 88(7):641-650.
NASA Astrophysics Data System (ADS)
Willson, D.; Rask, J. C.; George, S. C.; de Leon, P.; Bonaccorsi, R.; Blank, J.; Slocombe, J.; Silburn, K.; Steele, H.; Gargarno, M.; McKay, C. P.
2014-01-01
We conducted simulated Apollo extravehicular activities (EVAs) on the 3.45 Ga 'Pilbara Dawn of Life' trail (Western Australia) with field and non-field scientists, using the University of North Dakota's NDX-1 pressurizable space suit, to assess how effectively scientist astronauts can employ their field observation skills while looking for stromatolite fossil evidence. Off-world scientist astronauts will face space suit limitations in vision, sensory perception, mobility, dexterity, and suit fit, as well as time limits and the psychological fear of death from accidents, all causing physical fatigue that reduces field science performance. Finding evidence of visible biosignatures of past life on Mars, such as stromatolite fossils, would be a very significant discovery. Our preliminary trials showed that in simulated EVAs, 25% of stromatolite fossil evidence was missed and more features were incorrectly identified compared with ground-truth surveys, but the quality of characterization descriptions became less affected by simulated EVA limitations as the scientific importance of the features increased. Field scientists focused more on capturing high-value characterization detail from the rock features, whereas non-field scientists focused more on finding many features. We identified technologies and training to improve off-world field science performance. The data collected are also useful for the requirements of NASA's "EVA performance and crew health" research program, but further work will be required to confirm the conclusions.
Physical Ability-Task Performance Models: Assessing the Risk of Omitted Variable Bias
2008-09-15
… association was evaluated in a study of simulated job performance in men and women. The study measured four major abilities, Static Strength (SS), Dynamic… ability-performance interface for physical tasks. Methods. Sample: participants were active-duty naval personnel (64 men, 38 women) between ages 20… bench with feet flat on the floor. Position was adjusted so the bar was between the shoulder and nipple line. Handles were gripped at a comfortable…
Molecular dynamics simulations of amphiphilic graft copolymer molecules at a water/air interface.
Anderson, Philip M; Wilson, Mark R
2004-11-01
Fully atomistic molecular dynamics simulations of amphiphilic graft copolymer molecules have been performed at a range of surface concentrations at a water/air interface. These simulations are compared with experimental results from a corresponding system over a similar range of surface concentrations. Neutron reflectivity data calculated from the simulation trajectories agree well with experimentally acquired profiles; in particular, excellent agreement is found for the lower surface concentration simulations. A simulation of a poly(ethylene oxide) (PEO) chain in aqueous solution has also been performed, allowing the conformational behavior of the free PEO chain to be compared with that of the chains tethered to the interface in the previous simulations. (c) 2004 American Institute of Physics.
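The reflectivity comparison described above requires turning a simulated scattering-length-density profile into R(q); a common route is the kinematic (Born) approximation sketched below. This is a generic textbook formula, not necessarily the exact analysis the authors used, and the function names and example numbers are illustrative.

```python
import numpy as np

def kinematic_reflectivity(z, rho, q):
    """Neutron reflectivity of a scattering-length-density profile rho(z)
    in the kinematic (Born) approximation, valid well above the critical
    edge:  R(q) = (16 pi^2 / q^4) |FT{ d rho / dz }(q)|^2."""
    drho = np.gradient(rho, z)                      # d(rho)/dz
    phase = np.exp(1j * q[:, None] * z[None, :])    # e^{i q z} for each q
    ft = np.trapz(drho[None, :] * phase, z, axis=1)
    return (16.0 * np.pi**2 / q**4) * np.abs(ft)**2

# usage: a smooth interface of ~5 A width between air and D2O
z = np.linspace(-50.0, 50.0, 2001)                  # Angstroms
rho = 6.36e-6 * 0.5 * (1.0 + np.tanh(z / 5.0))      # SLD in A^-2
q = np.linspace(0.02, 0.3, 100)                     # A^-1
print(kinematic_reflectivity(z, rho, q)[:3])
```

In practice the density profile would be histogrammed directly from the MD trajectory, and broader interfaces damp the high-q reflectivity, which is what makes the comparison sensitive to the simulated interfacial structure.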
Enhancing physical performance in elite junior tennis players with a caffeinated energy drink.
Gallo-Salazar, César; Areces, Francisco; Abián-Vicén, Javier; Lara, Beatriz; Salinero, Juan José; Gonzalez-Millán, Cristina; Portillo, Javier; Muñoz, Victor; Juarez, Daniel; Del Coso, Juan
2015-04-01
The aim of this study was to investigate the effectiveness of a caffeinated energy drink in enhancing physical performance in elite junior tennis players. In 2 different sessions separated by 1 wk, 14 young (16 ± 1 y) elite-level tennis players ingested 3 mg of caffeine per kg of body mass in the form of an energy drink or the same drink without caffeine (placebo). After 60 min, participants performed a handgrip-strength test, a maximal-velocity serving test, and an 8 × 15-m sprint test and then played a simulated singles match (best of 3 sets). Instantaneous running speed during the matches was assessed using global positioning (GPS) devices. Furthermore, the matches were videotaped and notated afterward. In comparison with the placebo drink, ingestion of the caffeinated energy drink increased handgrip force by ~4.2% ± 7.2% (P = .03) in both hands, the running pace at high intensity (46.7 ± 28.5 vs 63.3 ± 27.7 m/h, P = .02), and the number of sprints (12.1 ± 1.7 vs 13.2 ± 1.7, P = .05) during the simulated match. There was a tendency for increased maximal running velocity during the sprint test (22.3 ± 2.0 vs 22.9 ± 2.1 km/h, P = .07) and a higher percentage of points won on service with the caffeinated energy drink (49.7% ± 9.8% vs 56.4% ± 10.0%, P = .07) in comparison with the placebo drink. The energy drink did not improve ball velocity during the serving test (42.6 ± 4.8 vs 42.7 ± 5.0 m/s, P = .49). The pre-exercise ingestion of caffeinated energy drinks was effective in enhancing some aspects of the physical performance of elite junior tennis players.
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy until an appropriate simulated image is found that matches the acquired image. A numerical image model is presented consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally efficient procedure for the merit function calculation, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point locations. The method is capable of compensating for lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method is shown, with a digital micromirror device used to physically simulate an object with known edge geometry. Experimental results for various high-temperature materials within the temperature range 1000°C to 2400°C are presented.
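As a toy illustration of the fit-a-simulated-image idea, the sketch below works in 1D rather than the paper's full 2D Fourier-domain model, with a Gaussian-blurred step (an error function) standing in for the brightness and aberration models; all names and the parameterization are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

def simulate_edge(params, x):
    """1D cut of an edge image: edge position, blur width (a crude stand-in
    for the lens PSF), and background/object brightness levels."""
    pos, sigma, b0, b1 = params
    return b0 + (b1 - b0) * 0.5 * (1.0 + erf((x - pos) / (np.sqrt(2) * sigma)))

def fit_edge(acquired, x0):
    """Recover model parameters by conjugate-gradient minimization of the
    L2 distance between simulated and acquired samples."""
    x = np.arange(acquired.size, dtype=float)
    merit = lambda p: np.sum((simulate_edge(p, x) - acquired) ** 2)
    res = minimize(merit, x0, method='CG')   # conjugate-gradient optimizer
    return res.x                              # res.x[0]: subpixel edge position

# usage: recover a subpixel edge location from noisy synthetic samples
x = np.arange(64, dtype=float)
truth = simulate_edge((31.37, 1.8, 10.0, 200.0), x)
data = truth + np.random.default_rng(1).normal(0.0, 1.0, truth.size)
print(fit_edge(data, x0=(30.0, 1.0, 0.0, 150.0))[0])  # close to 31.37
```

The recovered position is not tied to the pixel grid, which is the essence of the subpixel claim; the paper's method additionally fits a full 2D brightness and aberration model with an analytically supported gradient rather than the generic numerical one used here.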