Nishihara, Yuichi; Isobe, Yoh; Kitagawa, Yuko
2017-12-01
A realistic simulator for transabdominal preperitoneal (TAPP) inguinal hernia repair would enhance surgeons' training experience before they enter the operating theater. The purpose of this study was to create a novel physical simulator for TAPP inguinal hernia repair and obtain surgeons' opinions regarding its efficacy. Our novel TAPP inguinal hernia repair simulator consists of a physical laparoscopy simulator and a handmade organ replica model. The physical laparoscopy simulator was created by three-dimensional (3D) printing technology, and it represents the trunk of the human body and the bendability of the abdominal wall under pneumoperitoneal pressure. The organ replica model was manually created by assembling materials. The TAPP inguinal hernia repair simulator allows for the performance of all procedures required in TAPP inguinal hernia repair. Fifteen general surgeons performed TAPP inguinal hernia repair using our simulator. Their opinions were scored on a 5-point Likert scale. All participants strongly agreed that the 3D-printed physical simulator and organ replica model were highly useful for TAPP inguinal hernia repair training (median, 5 points) and TAPP inguinal hernia repair education (median, 5 points). They felt that the simulator would be effective for TAPP inguinal hernia repair training before entering the operating theater. All surgeons considered that this simulator should be introduced in the residency curriculum. We successfully created a physical simulator for TAPP inguinal hernia repair training using 3D printing technology and a handmade organ replica model created with inexpensive, readily accessible materials. Preoperative TAPP inguinal hernia repair training using this simulator and organ replica model may be of benefit in the training of all surgeons. All general surgeons involved in the present study felt that this simulator and organ replica model should be used in their residency curriculum.
NASA Astrophysics Data System (ADS)
Riaz, Muhammad
The purpose of this study was to examine how simulations in physics class, class management, laboratory practice, student engagement, critical thinking, cooperative learning, and use of simulations predicted the percentage of students achieving a grade point average of B or higher and their academic performance as reported by teachers in secondary school physics classes. The target population consisted of secondary school physics teachers who were members of the Science, Technology, Engineering, and Mathematics Teachers of New York City (STEMteachersNYC) and the American Modeling Teachers Association (AMTA). They used simulations in their physics classes in the 2013 and 2014 school years. Subjects for this study were volunteers. A survey was constructed based on a literature review. Eighty-two physics teachers completed the survey about instructional practice in physics. All respondents were anonymous. Classroom management was the only predictor of the percentage of students achieving a grade point average of B or higher in high school physics class. Cooperative learning, use of simulations, and student engagement were predictors of teachers' views of student academic performance in high school physics class. All other variables -- class management, laboratory practice, critical thinking, and teacher self-efficacy -- were not predictors of teachers' views of student academic performance in high school physics class. The implications of these findings were discussed and recommendations for physics teachers to improve student learning were presented.
BSM Kaon Mixing at the Physical Point
NASA Astrophysics Data System (ADS)
Boyle, Peter; Garron, Nicolas; Kettle, Julia; Khamseh, Ava; Tsang, Justus Tobias
2018-03-01
We present a progress update on the RBC-UKQCD calculation of beyond the standard model (BSM) kaon mixing matrix elements at the physical point. Simulations are performed using 2+1 flavour domain wall lattice QCD with the Iwasaki gauge action at 3 lattice spacings and with pion masses ranging from 430 MeV to the physical pion mass.
ERIC Educational Resources Information Center
Riaz, Muhammad
2015-01-01
The purpose of this study was to examine how simulations in physics class, class management, laboratory practice, student engagement, critical thinking, cooperative learning, and use of simulations predicted the percentage of students achieving a grade point average of B or higher and their academic performance as reported by teachers in secondary…
Semi-physical Simulation Platform of a Parafoil Nonlinear Dynamic System
NASA Astrophysics Data System (ADS)
Gao, Hai-Tao; Yang, Sheng-Bo; Zhu, Er-Lin; Sun, Qing-Lin; Chen, Zeng-Qiang; Kang, Xiao-Feng
2013-11-01
Focusing on the problems in the process of simulation and experimentation on a parafoil nonlinear dynamic system, such as limited methods, high cost, and low efficiency, we present a semi-physical simulation platform. It is designed by connecting parts of the physical system to a computer, and it remedies the defect of a pure computer simulation, which is entirely divorced from the real environment. The main components of the platform and their functions, as well as the simulation flow, are introduced. The feasibility and validity of the platform are verified through a simulation experiment. The experimental results show that the platform is significant for improving the quality of the parafoil fixed-point airdrop system, shortening the development cycle, and saving costs.
Tawalbeh, Loai I
2017-08-01
Simulation is an effective teaching strategy. However, no study in Jordan has examined the effect of simulation on the confidence of university nursing students in applying heart and lung physical examination skills. The current study aimed to test the effect of simulation on the confidence of university nursing students in applying heart and lung physical examination skills. A randomized controlled trial design was applied. The researcher introduced a simulation scenario regarding cardiopulmonary examination skills, consisting of a 1-hour PowerPoint presentation and video, for the experimental group (n = 35), and a PowerPoint presentation and a video showing a traditional demonstration in the laboratory for the control group (n = 34). Confidence in applying cardiopulmonary physical examination skills was measured for both groups at baseline, 1 day posttest, and 3 months posttest. A paired t test showed that confidence was significantly higher in the posttest than in the pretest for both groups. An independent t test showed statistically significant differences between the two groups in confidence in applying physical examination skills at the first posttest (t(67) = -42.95, p < .001) and the second posttest (t(67) = -43.36, p < .001). Both simulation and traditional training in the laboratory significantly improved participants' confidence in applying cardiopulmonary assessment skills; however, simulation training had a stronger effect than traditional training in enhancing the confidence of nursing students in applying physical examination skills.
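As a hedged illustration of the statistical machinery reported above (not the study's data), the sketch below runs the same two tests in Python: paired t tests within each group and an independent t test on the between-group gain scores, with group sizes matching the abstract and all scores synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_exp, n_ctl = 35, 34                               # group sizes from the abstract

# Synthetic pre/post confidence scores (invented effect sizes).
pre_exp = rng.normal(2.8, 0.5, n_exp)
post_exp = pre_exp + rng.normal(1.4, 0.3, n_exp)    # simulation-group gain
pre_ctl = rng.normal(2.8, 0.5, n_ctl)
post_ctl = pre_ctl + rng.normal(0.8, 0.3, n_ctl)    # traditional-group gain

# Paired t test: posttest vs. pretest within each group.
t_exp, p_exp = stats.ttest_rel(post_exp, pre_exp)
t_ctl, p_ctl = stats.ttest_rel(post_ctl, pre_ctl)

# Independent t test: gain scores between groups (df = 35 + 34 - 2 = 67).
t_btw, p_btw = stats.ttest_ind(post_exp - pre_exp, post_ctl - pre_ctl)
print(f"within: t={t_exp:.2f}, t={t_ctl:.2f}; between: t={t_btw:.2f}, p={p_btw:.2g}")
```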
NASA Astrophysics Data System (ADS)
Ueda, Yoshikatsu; Omura, Yoshiharu; Kojima, Hiro
Spacecraft observation is essentially a one-point measurement, while numerical simulation can reproduce a whole system of physical processes on a computer. By performing particle simulations of plasma wave instabilities and calculating correlations of waves and particles observed at a single point, we examine how well we can infer the characteristics of the whole system from a one-point measurement. We perform various simulation runs with different plasma parameters using a one-dimensional electromagnetic particle code (KEMPO1) and calculate E·v and other moments at a single point. We find good correlation between the one-point measurement and the macroscopic fluctuations of the total simulation region. We make use of the results of these computer experiments in the system design of a new instrument, the One-chip Wave Particle Interaction Analyzer (OWPIA).
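As a hedged sketch of the quantity being measured, the snippet below accumulates the wave-particle energy-transfer moment qE·v at one "observation point" from synthetic field and velocity records; the arrays, normalized units, and growth rate are invented stand-ins for actual KEMPO1 output.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, npart, dt, q = 2000, 50, 0.05, -1.0   # steps, particles in the cell, time step, charge

t = np.arange(nt) * dt
# Synthetic one-point records: a slowly growing wave field E(t) and the
# velocities of the particles found near the observation point.
E = np.sin(2.0 * t) * np.exp(0.01 * t)
v = 0.5 * np.sin(2.0 * t - 0.4)[:, None] + 0.1 * rng.standard_normal((nt, npart))

jdotE = q * E[:, None] * v                # per-particle q E v at the point
transfer = jdotE.sum(axis=1)              # local "J·E" moment per step
cumulative = np.cumsum(transfer) * dt     # integrated wave-particle energy exchange

print("net energy gained by the particles at this point:", cumulative[-1])
```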
Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale
NASA Astrophysics Data System (ADS)
Barrios, M. I.
2013-12-01
Hydrological science requires a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Therefore, understanding scaling is a key issue in advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage over field experimentation of dealing with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations in discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium to teach the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results showed numerical stability issues under particular conditions, revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicated that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by the students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved the students' ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
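Since the abstract names the Green-Ampt model for point-scale infiltration, a minimal sketch of it may help: cumulative infiltration F satisfies F = Kt + ψΔθ ln(1 + F/(ψΔθ)) under ponded conditions, which a fixed-point iteration solves reliably. The loam-like parameter values below are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def green_ampt_F(t, K, psi, dtheta, tol=1e-10, max_iter=200):
    """Cumulative infiltration F [cm] after time t [h] under ponded conditions."""
    pd = psi * dtheta
    F = max(K * t, 1e-9)                       # initial guess
    for _ in range(max_iter):
        F_new = K * t + pd * np.log(1.0 + F / pd)   # fixed-point update
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

# Illustrative loam-like parameters (assumed):
K, psi, dtheta = 0.65, 11.01, 0.3              # cm/h, cm, (-)
for t in (0.5, 1.0, 2.0):
    F = green_ampt_F(t, K, psi, dtheta)
    f = K * (1.0 + psi * dtheta / F)           # infiltration rate f(t) = K(1 + psi*dtheta/F)
    print(f"t={t:4.1f} h  F={F:6.3f} cm  f={f:6.3f} cm/h")
```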
The validity of multiphase DNS initialized on the basis of single--point statistics
NASA Astrophysics Data System (ADS)
Subramaniam, Shankar
1999-11-01
A study of the point-process statistical representation of a spray reveals that single-point statistical information contained in the droplet distribution function (ddf) is related to a sequence of single surrogate-droplet pdf's, which are in general different from the physical single-droplet pdf's. The results of this study have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the average number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets.
2+1 flavor lattice QCD toward the physical point
NASA Astrophysics Data System (ADS)
Aoki, S.; Ishikawa, K.-I.; Ishizuka, N.; Izubuchi, T.; Kadoh, D.; Kanaya, K.; Kuramashi, Y.; Namekawa, Y.; Okawa, M.; Taniguchi, Y.; Ukawa, A.; Ukita, N.; Yoshié, T.
2009-02-01
We present the first results of the PACS-CS project which aims to simulate 2+1 flavor lattice QCD on the physical point with the nonperturbatively O(a)-improved Wilson quark action and the Iwasaki gauge action. Numerical simulations are carried out at β=1.9, corresponding to a lattice spacing of a=0.0907(13) fm, on a 32³×64 lattice with the use of the domain-decomposed HMC algorithm to reduce the up-down quark mass. Further algorithmic improvements make possible a simulation whose up-down quark mass is as light as the physical value. The resulting pseudoscalar meson masses range from 702 MeV down to 156 MeV, which clearly exhibit the presence of chiral logarithms. An analysis of the pseudoscalar meson sector with SU(3) chiral perturbation theory reveals that the next-to-leading order corrections are large at the physical strange quark mass. In order to estimate the physical up-down quark mass, we employ the SU(2) chiral analysis expanding the strange quark contributions analytically around the physical strange quark mass. The SU(2) low energy constants l̄_3 and l̄_4 are comparable with the recent estimates by other lattice QCD calculations. We determine the physical point together with the lattice spacing employing mπ, mK and mΩ as input. The hadron spectrum extrapolated to the physical point shows agreement with the experimental values at the few-percent level of statistical errors, albeit there remain possible cutoff effects. We also find that our results for fπ, fK and their ratio, where renormalization is carried out perturbatively at one loop, are compatible with the experimental values. For the physical quark masses we obtain m_ud and m_s in the MS-bar scheme, extracted from the axial-vector Ward-Takahashi identity with the perturbative renormalization factors. We also briefly discuss the results for the static quark potential.
Multi-physics CFD simulations in engineering
NASA Astrophysics Data System (ADS)
Yamamoto, Makoto
2013-08-01
Nowadays, Computational Fluid Dynamics (CFD) software is adopted as a design and analysis tool in a great number of engineering fields, and single-physics CFD can be said to have sufficiently matured from the practical point of view. The main target of existing CFD software is single-phase flows such as water and air. However, many multi-physics problems exist in engineering. Most of them consist of flow coupled with other physics, and the interactions between the different physics are very important. Obviously, multi-physics phenomena are critical in developing machines and processes. A multi-physics phenomenon tends to be very complex, and it is difficult to predict by simply adding other physics to a flow simulation. Therefore, multi-physics CFD techniques are still under research and development. This is because the processing speed of current computers is not fast enough to conduct multi-physics simulations, and, furthermore, physical models other than flow physics have not been suitably established. Therefore, in the near future, we have to develop various physical models and efficient CFD techniques in order to succeed at multi-physics simulations in engineering. In the present paper, I describe the present state of multi-physics CFD simulations, and then show some numerical results, such as ice accretion and the electro-chemical machining process of a three-dimensional compressor blade, which were obtained in my laboratory. Multi-physics CFD simulation will be a key technology in the near future.
Kawamura, Kazuya; Kobayashi, Yo; Fujie, Masakatsu G
2007-01-01
Medical technology has advanced with the introduction of robot technology, making previously very difficult medical treatments far more feasible. However, operation of a surgical robot demands substantial training and continual practice on the part of the surgeon because it requires difficult techniques that are different from those of traditional surgical procedures. We focused on simulation technology based on the physical characteristics of organs. In this research, we proposed the development of a surgical simulation, based on a physical model, for intra-operative navigation by the surgeon. In this paper, we describe the design of our system, in particular our organ deformation calculator. The proposed simulation system consists of an organ deformation calculator and virtual slave manipulators. We obtained adequate experimental results for a target node near the point of interaction, because this point ensures better accuracy in our simulation model. The next research step will be to focus on a surgical environment in which internal organ models are integrated into a slave simulation system.
Number of Women in Physics Departments: A Simulation Analysis. Report
ERIC Educational Resources Information Center
White, Susan; Ivie, Rachel
2013-01-01
Women's representation in physics lags behind most other STEM disciplines. Currently, women make up about 13% of faculty members in all physics degree-granting departments, and there are physics departments with no women faculty members at all. These two data points are often cited as evidence of a lack of equity for women. In this article,…
Statistical representation of a spray as a point process
NASA Astrophysics Data System (ADS)
Subramaniam, S.
2000-10-01
The statistical representation of a spray as a finite point process is investigated. One objective is to develop a better understanding of how single-point statistical information contained in descriptions such as the droplet distribution function (ddf), relates to the probability density functions (pdfs) associated with the droplets themselves. Single-point statistical information contained in the droplet distribution function (ddf) is shown to be related to a sequence of single surrogate-droplet pdfs, which are in general different from the physical single-droplet pdfs. It is shown that the ddf contains less information than the fundamental single-point statistical representation of the spray, which is also described. The analysis shows which events associated with the ensemble of spray droplets can be characterized by the ddf, and which cannot. The implications of these findings for the ddf approach to spray modeling are discussed. The results of this study also have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the droplet number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets. Implications of these findings for large eddy simulations of multiphase flows are also discussed.
Compound simulator IR radiation characteristics test and calibration
NASA Astrophysics Data System (ADS)
Li, Yanhong; Zhang, Li; Li, Fan; Tian, Yi; Yang, Yang; Li, Zhuo; Shi, Rui
2015-10-01
Hardware-in-the-loop simulation can reproduce, in the testing room, the physical radiation of targets and interference and the interception process of a product in flight. In particular, simulating the environment is difficult when the radiation energy is high and the interference model is complicated. Here, the development of an IR scene generator based on a fiber-array imaging transducer with circumferential lamp spot sources is introduced. The IR simulation capability includes effective simulation of aircraft signatures and point-source IR countermeasures. Two point sources acting as interference can move in random two-dimensional directions. To simulate the process of interference release, the radiation and motion characteristics were tested. Through zero calibration of the simulator's optical axis, the radiation can be projected accurately onto the product detector. The test and calibration results show that the new compound simulator can be used in hardware-in-the-loop simulation trials.
Miller, Thomas F; Manolopoulos, David E; Madden, Paul A; Konieczny, Martin; Oberhofer, Harald
2005-02-01
We show that the two phase points considered in the recent simulations of liquid para-hydrogen by Hone and Voth lie in the liquid-vapor coexistence region of a purely classical molecular dynamics simulation. By contrast, their phase point for ortho-deuterium was in the one-phase liquid region for both classical and quantum simulations. These observations are used to account for their report that quantum mechanical effects enhance the diffusion in liquid para-hydrogen and decrease it in ortho-deuterium. (c) 2005 American Institute of Physics.
Sensitivity of air quality simulation to smoke plume rise
Yongqiang Liu; Gary Achtemeier; Scott Goodrick
2008-01-01
Plume rise is the height smoke plumes can reach. This information is needed by air quality models such as the Community Multiscale Air Quality (CMAQ) model to simulate the physical and chemical processes of point-source fire emissions. This study seeks to understand the importance of plume rise to CMAQ air quality simulations of prescribed burning. CMAQ...
NASA Astrophysics Data System (ADS)
Mert, A.
2016-12-01
The main motivation of this study is the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in İstanbul. This study provides the results of a physically based Probabilistic Seismic Hazard Analysis (PSHA) methodology, using broadband strong ground motion simulations, for sites within the Marmara region, Turkey, due to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We include the effects of all earthquakes of considerable magnitude. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real small-magnitude earthquakes recorded by a local seismic array are used as Empirical Green's Functions (EGFs). For the frequencies below 0.5 Hz, the simulations are obtained using Synthetic Green's Functions (SGFs), which are synthetic seismograms calculated by an explicit 2D/3D elastic finite difference wave propagation routine. Using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we provide a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here follows the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, whereas this approach utilizes the full rupture of earthquakes along faults. Further, conventional PSHA predicts ground-motion parameters using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for earthquakes of all magnitudes to obtain ground-motion parameters. PSHA results are produced for 2%, 10% and 50% hazards for all studied sites in the Marmara Region.
Rocks in a Box: A Three-Point Problem.
ERIC Educational Resources Information Center
Leyden, Michael B.
1981-01-01
Describes a simulation drilling core activity involving the use of a physical model from which students gather data and solve a three-point problem to determine the strike and dip of a buried stratum. Includes descriptions of model making, data plots, and additional problems involving strike and dip. (DS)
NASA Astrophysics Data System (ADS)
Tierz, Pablo; Sandri, Laura; Ramona Stefanescu, Elena; Patra, Abani; Marzocchi, Warner; Costa, Antonio; Sulpizio, Roberto
2014-05-01
Explosive volcanoes and, especially, Pyroclastic Density Currents (PDCs) pose an enormous threat to populations living in the surroundings of volcanic areas. Difficulties in the modeling of PDCs are related to (i) very complex and stochastic physical processes, intrinsic to their occurrence, and (ii) a lack of knowledge about how these processes actually form and evolve. This means that there are deep uncertainties (namely, of aleatory nature due to point (i) above, and of epistemic nature due to point (ii) above) associated with the study and forecast of PDCs. Consequently, the assessment of their hazard is better described in terms of probabilistic approaches rather than by deterministic ones. What is actually done to assess probabilistic hazard from PDCs is to couple deterministic simulators with statistical techniques that can, eventually, supply probabilities and inform about the uncertainties involved. In this work, some examples of both PDC numerical simulators (Energy Cone and TITAN2D) and uncertainty quantification techniques (Monte Carlo sampling -MC-, Polynomial Chaos Quadrature -PCQ- and Bayesian Linear Emulation -BLE-) are presented, and their advantages, limitations and future potential are underlined. The key point in choosing a specific method lies in the balance between its computational cost, the physical reliability of the simulator, and the pursued target of the hazard analysis (type of PDCs considered, time-scale selected for the analysis, particular guidelines received from decision-making agencies, etc.). Although current numerical and statistical techniques have brought important advances in probabilistic volcanic hazard assessment from PDCs, some of them may be further applicable to more sophisticated simulators. In addition, forthcoming improvements could be focused on three main multidisciplinary directions: 1) Validate the simulators frequently used (through comparison with PDC deposits and other simulators), 2) Decrease simulator runtimes (whether by increasing the knowledge about the physical processes or by more efficient programming, parallelization, ...) and 3) Improve uncertainty quantification techniques.
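A minimal sketch of the Monte Carlo/energy-cone combination described above: uncertain collapse height and mobility ratio are sampled from assumed priors, and the fraction of sampled energy cones reaching a site estimates the inundation probability. All numbers are illustrative, not calibrated to any volcano.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
H = rng.lognormal(mean=np.log(500.0), sigma=0.5, size=n)   # collapse height [m] (assumed prior)
HL = rng.uniform(0.2, 0.4, size=n)                         # H/L mobility ratio (assumed prior)
runout = H / HL                                            # energy-cone runout on flat terrain [m]

d_site = 2_000.0                                           # site distance from the vent [m]
p_reach = np.mean(runout >= d_site)                        # Monte Carlo inundation probability
print(f"P(PDC reaches {d_site/1e3:.0f} km) = {p_reach:.3f}")
```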
High Temperature Gas-Cooled Test Reactor Point Design: Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterbentz, James William; Bayless, Paul David; Nelson, Lee Orville
2016-01-01
A point design has been developed for a 200-MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched uranium oxycarbide fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technology readiness level, licensing approach, and costs of the test reactor point design.
Breimer, Gerben E; Haji, Faizal A; Bodani, Vivek; Cunningham, Melissa S; Lopez-Rios, Adriana-Lucia; Okrainec, Allan; Drake, James M
2017-02-01
The relative educational benefits of virtual reality (VR) and physical simulation models for endoscopic third ventriculostomy (ETV) have not been evaluated "head to head." The aim of this study was to compare and identify the relative utility of physical and VR ETV simulation models for use in neurosurgical training. Twenty-three neurosurgical residents and 3 fellows performed an ETV on both a physical and a VR simulation model. Trainees rated the models using 5-point Likert scales evaluating the domains of anatomy, instrument handling, procedural content, and the overall fidelity of the simulation. Paired t tests were performed for each domain's mean overall score and individual items. The VR model has relative benefits compared with the physical model with respect to realistic representation of intraventricular anatomy at the foramen of Monro (4.5, standard deviation [SD] = 0.7 vs 4.1, SD = 0.6; P = .04) and the third ventricle floor (4.4, SD = 0.6 vs 4.0, SD = 0.9; P = .03), although the overall anatomy score was similar (4.2, SD = 0.6 vs 4.0, SD = 0.6; P = .11). For overall instrument handling and procedural content, the physical simulator outperformed the VR model (3.7, SD = 0.8 vs 4.5, SD = 0.5; P < .001 and 3.9, SD = 0.8 vs 4.2, SD = 0.6; P = .02, respectively). Overall task fidelity across the 2 simulators was not perceived as significantly different. Simulation model selection should be based on educational objectives. Training focused on learning anatomy or decision-making for anatomic cues may be aided by the VR simulation model. A focus on developing manual dexterity and technical skills using endoscopic equipment in the operating room may be better learned on the physical simulation model. Copyright © 2016 by the Congress of Neurological Surgeons
High-Temperature Gas-Cooled Test Reactor Point Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterbentz, James William; Bayless, Paul David; Nelson, Lee Orville
2016-04-01
A point design has been developed for a 200 MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched UCO fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technological readiness level, licensing approach and costs.
NIMROD resistive magnetohydrodynamic simulations of spheromak physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hooper, E. B.; Cohen, B. I.; McLean, H. S.
The physics of spheromak plasmas is addressed by time-dependent, three-dimensional, resistive magnetohydrodynamic simulations with the NIMROD code [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)]. Included in some detail are the formation of a spheromak driven electrostatically by a coaxial plasma gun with a flux-conserver geometry and power systems that accurately model the sustained spheromak physics experiment [R. D. Wood et al., Nucl. Fusion 45, 1582 (2005)]. The controlled decay of the spheromak plasma over several milliseconds is also modeled as the programmable current and voltage relax, resulting in simulations of entire experimental pulses. Reconnection phenomena and the effects of current profile evolution on the growth of symmetry-breaking toroidal modes are diagnosed; these in turn affect the quality of magnetic surfaces and the energy confinement. The sensitivity of the simulation results to variations in both physical and numerical parameters, including spatial resolution, is addressed. There are significant points of agreement between the simulations and the observed experimental behavior, e.g., in the evolution of the magnetics and the sensitivity of the energy confinement to the presence of symmetry-breaking magnetic fluctuations.
NIMROD Resistive Magnetohydrodynamic Simulations of Spheromak Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hooper, E B; Cohen, B I; McLean, H S
The physics of spheromak plasmas is addressed by time-dependent, three-dimensional, resistive magneto-hydrodynamic simulations with the NIMROD code. Included in some detail are the formation of a spheromak driven electrostatically by a coaxial plasma gun with a flux-conserver geometry and power systems that accurately model the Sustained Spheromak Physics Experiment (SSPX) (R. D. Wood, et al., Nucl. Fusion 45, 1582 (2005)). The controlled decay of the spheromak plasma over several milliseconds is also modeled as the programmable current and voltage relax, resulting in simulations of entire experimental pulses. Reconnection phenomena and the effects of current profile evolution on the growth of symmetry-breaking toroidal modes are diagnosed; these in turn affect the quality of magnetic surfaces and the energy confinement. The sensitivity of the simulation results to variations in both physical and numerical parameters, including spatial resolution, is addressed. There are significant points of agreement between the simulations and the observed experimental behavior, e.g., in the evolution of the magnetics and the sensitivity of the energy confinement to the presence of symmetry-breaking magnetic fluctuations.
Studying Turbulence Using Numerical Simulation Databases. Proceedings of the 1987 Summer Program
NASA Technical Reports Server (NTRS)
Moin, Parviz (Editor); Reynolds, William C. (Editor); Kim, John (Editor)
1987-01-01
The focus was on the use of databases obtained from direct numerical simulations of turbulent flows, for study of turbulence physics and modeling. Topics addressed included: stochastic decomposition/chaos/bifurcation; two-point closure (or k-space) modeling; scalar transport/reacting flows; Reynolds stress modeling; and structure of turbulent boundary layers.
Interactive Physics: the role of interactive learning objects in teaching Physics in Engineering
NASA Astrophysics Data System (ADS)
Benito, R. M.; Cámara, M. E.; Arranz, F. J.
2009-04-01
In this work we present the results of a project in educational innovation entitled "Interactive Physics". We have developed resources for teaching Physics to students of Engineering, with an emphasis on conceptual reinforcement and on addressing the shortcomings of students entering the University. The resources developed include hypertext, graphics, equations, quizzes and more elaborate problems that cover the customary syllabus in first-year Physics: kinematics and dynamics, Newton's laws, electricity and magnetism, elementary circuits… The role of vector quantities is stressed and we also provide help for the most common mathematical tools (calculus and trigonometric formulas). The structure and level of detail of the resources are fitted to the conceptual difficulties that most of the students encounter. Some of the most advanced resources we have developed are interactive simulations. These are real simulations of key physical situations, not merely animations. They serve as learning objects, in the well-known sense of small reusable digital objects that are self-contained and tagged with metadata. In this sense, we use them to link concepts and content through interaction, with active engagement of the student. The development of an interactive simulation involves several steps. First, we identify common pitfalls in the conceptual framework of the students and the points at which they frequently stumble. Then we think of a way to make the physical concepts clear using a simulation. After that, we program the simulation (using Flash or Java), and finally the simulation is tested with the students, and we rework parts of it to improve usability. In our communication, we discuss the usefulness of these interactive simulations in teaching Physics for engineers, and their integration into a more comprehensive b-learning system.
Representing ductile damage with the dual domain material point method
Long, C. C.; Zhang, D. Z.; Bronkhorst, C. A.; ...
2015-12-14
In this study, we incorporate a ductile damage material model into a computational framework based on the Dual Domain Material Point (DDMP) method. As an example, simulations of a flyer plate experiment involving ductile void growth and material failure are performed. The results are compared with experiments performed on high purity tantalum. We also compare the numerical results obtained from the DDMP method with those obtained from the traditional Material Point Method (MPM). Effects of an overstress model, artificial viscosity, and physical viscosity are investigated. Our results show that a physical bulk viscosity and overstress model are important in this impact and failure problem, while physical shear viscosity and artificial shock viscosity have negligible effects. A simple numerical procedure with guaranteed convergence is introduced to solve for the equilibrium plastic state from the ductile damage model.
Li, Yong; Wang, Hanpeng; Zhu, Weishen; Li, Shucai; Liu, Jian
2015-08-31
Fiber Bragg Grating (FBG) sensors are widely recognized as structural stability monitoring devices for all kinds of geo-materials, either embedded into or bonded onto structural entities. A physical model in geotechnical engineering is an actual physical entity that, provided the similarity principles are satisfied, can accurately simulate construction processes and their effects on the stability of underground caverns. Using a physical model test of the underground caverns of the Shuangjiangkou Hydropower Station, FBG sensors were used to determine how to measure the small displacements of key monitoring points in the large-scale physical model during excavation. In building the test specimen, the most successful approach was to embed the FBG sensors in the physical model by making an opening and adding quick-set silicone. The experimental results show that the FBG sensor has higher measuring accuracy than conventional sensors such as electrical resistance strain gages and extensometers. The experimental results are also in good agreement with the numerical simulation results. In conclusion, FBG sensors can effectively measure small displacements of monitoring points throughout a physical model test. The experimental results reveal the deformation and failure characteristics of the surrounding rock mass and provide guidance for in situ engineering construction.
NASA Astrophysics Data System (ADS)
Mert, Aydin; Fahjan, Yasin M.; Hutchings, Lawrence J.; Pınar, Ali
2016-08-01
The main motivation for this study was the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in Istanbul. This study provides the results of a physically based probabilistic seismic hazard analysis (PSHA) methodology, using broadband strong ground motion simulations, for sites within the Marmara region, Turkey, that may be vulnerable to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We included the effects of all considerable-magnitude earthquakes. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real, small-magnitude earthquakes recorded by a local seismic array were used as empirical Green's functions. For the frequencies below 0.5 Hz, the simulations were obtained by using synthetic Green's functions, which are synthetic seismograms calculated by an explicit 2D /3D elastic finite difference wave propagation routine. By using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we produced a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here followed the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, and this approach utilizes the full rupture of earthquakes along faults. Furthermore, conventional PSHA predicts ground motion parameters by using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for all magnitudes of earthquakes to obtain ground motion parameters. PSHA results were produced for 2, 10, and 50 % hazards for all sites studied in the Marmara region.
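As a hedged sketch of the bookkeeping that distinguishes this approach from conventional PSHA, the snippet below combines hypothetical rupture scenarios (each with an annual rate and a set of simulated peak ground accelerations at a site) into annual exceedance rates and 50-year hazard probabilities; all rates and distributions are invented placeholders for the simulated seismograms.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical scenarios: (annual rate, simulated PGA samples at the site, in g)
scenarios = [
    (1 / 150.0, rng.lognormal(np.log(0.35), 0.5, 200)),   # e.g. M~7.2 full rupture
    (1 / 60.0,  rng.lognormal(np.log(0.15), 0.5, 200)),   # e.g. M~6.5 segment
    (1 / 25.0,  rng.lognormal(np.log(0.06), 0.5, 200)),   # e.g. M~5.8 segment
]

def annual_exceedance(a):
    # lambda(PGA > a) = sum_i rate_i * P_i(PGA > a), P_i estimated from simulations
    return sum(rate * np.mean(pga > a) for rate, pga in scenarios)

T = 50.0                               # exposure time [yr]
for a in (0.1, 0.2, 0.4):
    lam = annual_exceedance(a)
    p = 1.0 - np.exp(-lam * T)         # Poisson occurrence assumption
    print(f"PGA>{a:.1f} g: lambda={lam:.4f}/yr, P({T:.0f} yr)={p:.2%}")
```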
A simulation model for analysing brain structure deformations.
Di Bona, Sergio; Lutzemberger, Ludovico; Salvetti, Ovidio
2003-12-21
Recent developments in medical software applications--from the simulation to the planning of surgical operations--have revealed the need for modelling human tissues and organs, not only from a geometric point of view but also from a physical one, i.e. soft tissues, rigid body, viscoelasticity, etc. This has given rise to the term 'deformable objects', which refers to objects with a morphology and a physical and mechanical behaviour of their own that reflect their natural properties. In this paper, we propose a model, based upon physical laws, suitable for the realistic manipulation of geometric reconstructions of volumetric data taken from MR and CT scans. In particular, a physically based model of the brain is presented that is able to simulate the evolution of pathological intra-cranial phenomena of different natures, such as haemorrhages, neoplasms and haematomas, and to describe the consequences caused by their volume expansion and the influence they have on the anatomical and neuro-functional structures of the brain.
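As a hedged illustration of the physically based ingredients such models build on (not the authors' formulation), the sketch below integrates a single Kelvin-Voigt viscoelastic element, a spring and dashpot in parallel, whose creep response under a step load is qualitatively closer to soft tissue than a purely elastic law.

```python
import numpy as np

E, eta = 3.0e3, 1.5e3        # elastic modulus [Pa], viscosity [Pa.s] (assumed values)
sigma0 = 100.0               # applied step stress [Pa]
dt, T = 1e-3, 3.0
n = int(T / dt)

eps = np.zeros(n)
for i in range(n - 1):
    # sigma = E*eps + eta*deps/dt  =>  deps/dt = (sigma0 - E*eps)/eta
    eps[i + 1] = eps[i] + dt * (sigma0 - E * eps[i]) / eta

# Creep approaches the elastic limit sigma0/E with time constant eta/E.
print(f"strain at t={T} s: {eps[-1]:.5f} (analytic limit {sigma0/E:.5f})")
```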
NASA Astrophysics Data System (ADS)
Becherer, Nico; Hesser, Jürgen; Kornmesser, Ulrike; Schranz, Dietmar; Männer, Reinhard
2007-03-01
Simulation systems are becoming increasingly essential in medical education, and capturing the physical behaviour of the real world requires sophisticated modelling of instruments within the virtual environment. Most models currently used are not capable of user-interactive simulation because of the cost of computing the complex underlying analytical equations. Alternatives are often based on simplified mass-spring systems, which deliver high update rates at the cost of less realistic motion. In addition, most techniques are limited to narrow and tubular vessel structures or restrict shape alterations to two degrees of freedom, not allowing instrument deformations like torsion. In contrast, our approach combines high update rates with highly realistic motion and can, in addition, be used with arbitrary structures like vessels or cavities (e.g. atrium, ventricle) without limiting the degrees of freedom. Based on energy minimization, bending energies and vessel structures are treated as linear elastic elements; energies are evaluated at regularly spaced points on the instrument, while the distance between the points is fixed, i.e. we simulate an articulated structure of joints with fixed connections between them. Arbitrary tissue structures are modeled through adaptive distance fields and are connected by nodes via an undirected graph system. The instrument points are linked to nodes by a system of rules. Energy minimization uses a quasi-Newton method without preconditioning, whereby gradients are estimated using a combination of analytical and numerical terms. Results show high-quality motion simulation when compared to a phantom model. The approach is also robust and fast: simulating an instrument with 100 joints runs at 100 Hz on a 3 GHz PC.
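A minimal sketch of the energy-minimization idea described above, under stated assumptions: an articulated instrument of fixed-length segments parametrized by joint angles, a linear-elastic bending energy, and a quasi-Newton solver (scipy's BFGS standing in for the paper's unpreconditioned method). The tip-constraint penalty is a hypothetical stand-in for the vessel and tissue coupling.

```python
import numpy as np
from scipy.optimize import minimize

n, seg_len, k_bend = 20, 1.0, 5.0
target = np.array([12.0, 8.0])           # hypothetical tip constraint

def positions(theta):
    ang = np.cumsum(theta)               # absolute segment angles
    steps = np.stack([seg_len * np.cos(ang), seg_len * np.sin(ang)], axis=1)
    return np.cumsum(steps, axis=0)      # joint positions along the instrument

def energy(theta):
    bend = k_bend * np.sum(np.diff(theta) ** 2)                 # elastic bending energy
    tip = 50.0 * np.sum((positions(theta)[-1] - target) ** 2)   # constraint penalty
    return bend + tip

res = minimize(energy, np.zeros(n), method="BFGS")   # quasi-Newton minimization
print("converged:", res.success, " residual energy:", res.fun)
```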
Stochastic optimization of GeantV code by use of genetic algorithms
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...
2017-10-01
GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
Stochastic optimization of GeantV code by use of genetic algorithms
NASA Astrophysics Data System (ADS)
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.
2017-10-01
GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
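As a hedged sketch of the tuning procedure named above, the snippet below runs a tiny (mu+lambda) evolution strategy on a synthetic black-box "throughput" objective; the three-dimensional parameter vector and the objective are invented stand-ins for GeantV's program parameters and an expensive simulation run.

```python
import numpy as np

rng = np.random.default_rng(3)

def throughput(x):
    # Black-box objective (synthetic); a real run would launch a simulation.
    return -np.sum((x - np.array([16.0, 4.0, 64.0])) ** 2)

mu, lam, sigma, dim = 4, 16, 2.0, 3
pop = rng.uniform(1.0, 100.0, (mu, dim))           # initial parent population

for gen in range(60):
    parents = pop[rng.integers(0, mu, lam)]        # sample parents with replacement
    offspring = parents + sigma * rng.standard_normal((lam, dim))  # Gaussian mutation
    union = np.vstack([pop, offspring])
    fitness = np.array([throughput(x) for x in union])
    pop = union[np.argsort(fitness)[-mu:]]         # (mu+lambda) survivor selection
    sigma *= 0.97                                  # simple step-size decay

print("best parameters found:", np.round(pop[-1], 2))
```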
The physics of volume rendering
NASA Astrophysics Data System (ADS)
Peters, Thomas
2014-11-01
Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics. Computer scientists use radiation transfer, among other things, for the visualization of complex data sets with direct volume rendering. In this article, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D. I show examples for the use of this module on analytical models and simulation data.
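A minimal sketch of the connection the article points out: direct volume rendering discretizes the emission-absorption radiation transfer equation, compositing per-sample emission c_i and opacity alpha_i = 1 - exp(-kappa_i ds) front to back along a ray. The field samples below are random placeholders for real volume data.

```python
import numpy as np

rng = np.random.default_rng(5)
ns, ds = 256, 0.05
kappa = rng.uniform(0.0, 1.0, ns)        # absorption coefficient samples along the ray
color = rng.uniform(0.0, 1.0, ns)        # emission ("color") samples along the ray

alpha = 1.0 - np.exp(-kappa * ds)        # per-sample opacity
I, T = 0.0, 1.0                          # accumulated intensity, transmittance
for c, a in zip(color, alpha):
    I += T * c * a                       # emission attenuated by the medium in front
    T *= (1.0 - a)                       # update transmittance
    if T < 1e-4:                         # early ray termination
        break

print(f"pixel intensity {I:.4f}, residual transmittance {T:.2e}")
```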
An infrared sky model based on the IRAS point source data
NASA Technical Reports Server (NTRS)
Cohen, Martin; Walker, Russell; Wainscoat, Richard; Volk, Kevin; Walker, Helen; Schwartz, Deborah
1990-01-01
A detailed model for the infrared point source sky is presented that comprises geometrically and physically realistic representations of the galactic disk, bulge, spheroid, spiral arms, molecular ring, and absolute magnitudes. The model was guided by a parallel Monte Carlo simulation of the Galaxy. The content of the galactic source table constitutes an excellent match to the 12 micrometer luminosity function in the simulation, as well as the luminosity functions at V and K. Models are given for predicting the density of asteroids to be observed, and the diffuse background radiance of the Zodiacal cloud. The model can be used to predict the character of the point source sky expected for observations from future infrared space experiments.
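As a hedged sketch of the parallel Monte Carlo idea mentioned above (not the paper's actual model), the snippet below draws sources along one line of sight from an exponential disk, assigns absolute magnitudes from a toy luminosity function, and counts those brighter than a hypothetical survey limit.

```python
import numpy as np

rng = np.random.default_rng(9)
n, d_max = 200_000, 20.0                 # trial sources, max distance [kpc]
h_R = 3.5                                # disk scale length [kpc] (assumed)

# Sample heliocentric distances r weighted by r^2 * exp(-r/h_R) (volume x density)
r = rng.uniform(0.0, d_max, n)
w = r**2 * np.exp(-r / h_R)
keep = rng.uniform(0.0, w.max(), n) < w  # rejection sampling
r = r[keep]

M = rng.normal(-1.0, 2.0, r.size)        # toy absolute-magnitude distribution
m = M + 5.0 * np.log10(r * 1e3 / 10.0)   # distance modulus, with r converted to pc

m_lim = 9.0                              # hypothetical survey magnitude limit
print(f"{np.mean(m < m_lim):.3%} of drawn sources are brighter than m={m_lim}")
```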
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan, Frank; Dennis, John; MacCready, Parker
This project aimed to improve long term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation. The main computational objectives were: 1. To develop computationally efficient, but physically based, parameterizations of estuary and continental shelf mixing processes for use in an Earth System Model (CESM). 2. To develop a two-way nested regional modeling framework in order to dynamically downscale the climate response of particular coastal ocean regions and to upscale the impact of the regional coastal processes to the global climate in an Earth System Model (CESM). 3. To develop computational infrastructure to enhance the efficiency of data transfer between specific sources and destinations, i.e., a point-to-point communication capability, (used in objective 1) within POP, the ocean component of CESM.
NASA Hybrid Reflectometer Project
NASA Technical Reports Server (NTRS)
Lynch, Dana; Mancini, Ron (Technical Monitor)
2002-01-01
Time-domain and frequency-domain reflectometry have been used for about forty years to locate opens and shorts in cables. Interpretation of reflectometry data is as much art as science. Is there information in the data that is being missed? Can the reflectometers be improved to allow us to detect and locate defects in cables that are not outright shorts or opens? The Hybrid Reflectometer Project was begun this year at NASA Ames Research Center, initially to model wire physics, simulating time-domain reflectometry (TDR) signals in those models and validating the models against actual TDR data taken on testbed cables. Theoretical models of reflectometry in wires will give us an understanding of the merits and limits of these techniques and will guide the application of a proposed hybrid reflectometer with the aim of enhancing reflectometer sensitivity to the point that wire defects can be detected. We will point out efforts by some other researchers to apply wire physics models to the problem of defect detection in wires and we will describe our own initial efforts to create wire physics models and report on testbed validation of the TDR simulations.
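A minimal sketch of the wire physics being modelled, under stated assumptions: a step launched into a line of impedance Z0 reflects off a discontinuity Z1 with coefficient Gamma = (Z1 - Z0)/(Z1 + Z0), and the echo delay locates the feature at d = v*t/2. Impedances, propagation velocity, and fault distance below are illustrative.

```python
import numpy as np

Z0, Z1 = 50.0, 75.0                      # line and discontinuity impedance [ohm] (example)
v = 2.0e8                                # propagation velocity [m/s] (~0.66 c)
d_fault, t_max, dt = 12.0, 250e-9, 0.5e-9

gamma = (Z1 - Z0) / (Z1 + Z0)            # reflection coefficient (+0.2 here)
t = np.arange(0.0, t_max, dt)
trace = np.ones_like(t)                  # incident step, normalized to 1
t_echo = 2.0 * d_fault / v               # round-trip time to the discontinuity
trace[t >= t_echo] += gamma              # echo superimposed on the step

print(f"Gamma={gamma:+.3f}; echo at {t_echo*1e9:.1f} ns -> "
      f"estimated distance {v * t_echo / 2:.2f} m")
```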
NASA Astrophysics Data System (ADS)
Badjin, D. A.; Glazyrin, S. I.; Manukovskiy, K. V.; Blinnikov, S. I.
2016-06-01
We describe our modelling of radiatively cooling shocks and their thin shells with various numerical tools in different physical and computational setups. We inspect the structure of the dense shell and its formation and evolution, pointing out physical and numerical factors that sustain its shape and may also lead to instabilities. We have found that under certain physical conditions, circularly shaped shells show a strong bending instability and subsequent fragmentation on Cartesian grids soon after their formation, while remaining almost unperturbed when simulated on polar meshes. We explain this by physical Rayleigh-Taylor-like instabilities triggered by corrugation of the dense shell surfaces by numerical noise. Conditions for these instabilities follow both from the shell structure itself and from episodes of transient acceleration during the re-establishment of dynamical pressure balance after a sudden onset of radiative cooling. They are also easily excited by physical perturbations of the ambient medium. In contrast, tests with physical perturbations show that the widely mentioned non-linear thin-shell instability has only a limited chance to develop in real radiative shocks, as it seems to require a special spatial arrangement of fluctuations to be excited efficiently. The described phenomena also set new requirements on further simulations of radiatively cooling shocks if they are to be physically correct and free of numerical artefacts.
Equatorial waves simulated by the NCAR community climate model
NASA Technical Reports Server (NTRS)
Cheng, Xinhua; Chen, Tsing-Chang
1988-01-01
The equatorial planetary waves simulated by the NCAR CCM1 general circulation model were investigated in terms of space-time spectral analysis (Kao, 1968; Hayashi, 1971, 1973) and energetic analysis (Hayashi, 1980). These analyses are particularly applied to grid-point data on latitude circles. In order to test some physical factors which may affect the generation of tropical transient planetary waves, three different model simulations with the CCM1 (the control, the no-mountain, and the no-cloud experiments) were analyzed.
Tokunaga, Jin; Takamura, Norito; Ogata, Kenji; Setoguchi, Nao; Sato, Keizo
2013-01-01
Bedside training for fourth-year students, as well as seminars in hospital pharmacy (vital sign seminars) for fifth-year students at the Department of Pharmacy of Kyushu University of Health and Welfare, has been implemented using patient training models and various patient simulators. The introduction of simulation-based pharmaceutical education, where no patients are present, promotes visual, aural, and tactile learning regarding the evaluation of vital signs and the implementation of physical assessment when disease symptoms are present or adverse effects occur. A patient simulator also supports the creation of training programs for emergency and critical care, with which basic as well as advanced life support can be practiced. In addition, an advanced objective structured clinical examination (OSCE) trial has been implemented to evaluate skills regarding vital signs and physical assessment. Pharmacists are required to examine vital signs and conduct physical assessment from a pharmaceutical point of view. The introduction of these pharmacy clinical skills will improve the efficacy of drugs, support the prevention or early detection of adverse effects, and promote the appropriate use of drugs. Simulation-based pharmaceutical education is considered essential for understanding physical assessment, and such education will ideally be applied and developed according to on-site practices.
A compact physical model for the simulation of pNML-based architectures
NASA Astrophysics Data System (ADS)
Turvani, G.; Riente, F.; Plozner, E.; Schmitt-Landsiedel, D.; Breitkreutz-v. Gamm, S.
2017-05-01
Among emerging technologies, perpendicular nanomagnetic logic (pNML) seems very promising because of its capability of combining logic and memory in the same device, its scalability, 3D integration, and low power consumption. Recently, Full Adder (FA) structures clocked by a global magnetic field have been experimentally demonstrated, and detailed characterizations of the switching process governing the domain wall (DW) nucleation probability Pnuc and time tnuc have been performed. However, the design of pNML architectures represents a crucial point in the study of this technology, with a remarkable impact on the reliability of pNML structures. Here, we present a compact model developed in VHDL which makes it possible to simulate complex pNML architectures while taking critical physical parameters into account. These parameters have been extracted from experiments, fitted by the corresponding physical equations, and encapsulated into the proposed model. Within it, magnetic structures are decomposed into a few basic elements (nucleation centers, nanowires, inverters, etc.), each represented by the corresponding physical description. To validate the model, we redesigned an FA and compared our simulation results to the experiment. With this compact model of pNML devices we have envisioned a new methodology which makes it possible to simulate and test the physical behavior of complex architectures at very low computational cost.
Simulation of 100-300 GHz solid-state harmonic sources
NASA Technical Reports Server (NTRS)
Zybura, Michael F.; Jones, J. Robert; Jones, Stephen H.; Tait, Gregory B.
1995-01-01
Accurate and efficient simulations of the large-signal time-dependent characteristics of second-harmonic Transferred Electron Oscillators (TEO's) and Heterostructure Barrier Varactor (HBV) frequency triplers have been obtained. This is accomplished by using a novel and efficient harmonic-balance circuit analysis technique which facilitates the integration of physics-based hydrodynamic device simulators. The integrated hydrodynamic device/harmonic-balance circuit simulators allow TEO and HBV circuits to be co-designed from both a device and a circuit point of view. Comparisons have been made with published experimental data for both TEO's and HBV's. For TEO's, excellent correlation has been obtained at 140 GHz and 188 GHz in second-harmonic operation. Excellent correlation has also been obtained for HBV frequency triplers operating near 200 GHz. For HBV's, both a lumped quasi-static equivalent circuit model and the hydrodynamic device simulator have been linked to the harmonic-balance circuit simulator. This comparison illustrates the importance of representing active devices with physics-based numerical device models rather than analytical device models.
NASA Astrophysics Data System (ADS)
Farmer, W. H.; Kiang, J. E.
2017-12-01
The development, deployment and maintenance of water resources management infrastructure and practices rely on hydrologic characterization, which requires an understanding of local hydrology. With regard to streamflow, this understanding is typically quantified with statistics derived from long-term streamgage records. However, a fundamental problem is how to characterize local hydrology without the luxury of streamgage records, a problem that complicates water resources management at ungaged locations and for long-term future projections. This problem has typically been addressed through the development of point estimators, such as regression equations, to estimate particular statistics. Physically-based precipitation-runoff models, which are capable of producing simulated hydrographs, offer an alternative to point estimators. The advantage of simulated hydrographs is that they can be used to compute any number of streamflow statistics from a single source (the simulated hydrograph) rather than relying on a diverse set of point estimators. However, the use of simulated hydrographs introduces a degree of model uncertainty that is propagated through to estimated streamflow statistics and may have drastic effects on management decisions. We compare the accuracy and precision of streamflow statistics (e.g. the mean annual streamflow, the annual maximum streamflow exceeded in 10% of years, and the minimum seven-day average streamflow exceeded in 90% of years, among others) derived from point estimators (e.g. regressions, kriging, machine learning) to that of statistics derived from simulated hydrographs across the continental United States. Initial results suggest that the error introduced through hydrograph simulation may substantially bias the resulting hydrologic characterization.
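The trade-off described above, one simulated hydrograph versus many point estimators, can be made concrete: once a hydrograph exists, every named statistic falls out of the same array. A minimal sketch on a synthetic daily series (the lognormal flows are stand-in data, not results from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, days = 30, 365
# hypothetical simulated daily hydrograph, one row per year (m^3/s)
q = rng.lognormal(mean=1.0, sigma=0.8, size=(n_years, days))

mean_annual = q.mean(axis=1)        # mean flow of each year
annual_max = q.max(axis=1)          # annual peak flows
# minimum 7-day moving average within each year
kernel = np.ones(7) / 7
min7 = np.array([np.convolve(year, kernel, mode="valid").min() for year in q])

# the statistics named in the abstract, as empirical percentiles across years
print("mean annual streamflow:", mean_annual.mean())
print("annual max exceeded in 10% of years:", np.percentile(annual_max, 90))
print("7-day min exceeded in 90% of years:", np.percentile(min7, 10))
```

A point estimator would require a separately fitted regression for each of these three quantities; the simulated hydrograph yields all of them, at the cost of the simulation bias the abstract warns about.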
2D modeling of direct laser metal deposition process using a finite particle method
NASA Astrophysics Data System (ADS)
Anedaf, T.; Abbès, B.; Abbès, F.; Li, Y. M.
2018-05-01
Direct laser metal deposition is one of the additive manufacturing processes used to produce complex metallic parts. A thorough understanding of the underlying physical phenomena is required to obtain high-quality parts. In this work, a mathematical model is presented to simulate the coaxial laser direct deposition process, taking into account mass addition, heat transfer, and fluid flow with a free surface and melting. The fluid flow in the melt pool, together with the mass and energy balances, is solved using the Computational Fluid Dynamics (CFD) software NOGRID-points, based on the meshless Finite Pointset Method (FPM). The basis of the computations is a point cloud, which represents the continuum fluid domain. Each finite point carries all fluid information (density, velocity, pressure, and temperature). The dynamic shape of the molten zone is explicitly described by the point cloud. The proposed model is used to simulate a single-layer cladding.
A systems engineering analysis of three-point and four-point wind turbine drivetrain configurations
Guo, Yi; Parsons, Tyler; Dykes, Katherine; ...
2016-08-24
This study compares the impact of drivetrain configuration on the mass and capital cost of a series of wind turbines ranging from 1.5 MW to 5.0 MW power ratings for both land-based and offshore applications. The analysis is performed with a new physics-based drivetrain analysis and sizing tool, Drive Systems Engineering (DriveSE), which is part of the Wind-Plant Integrated System Design & Engineering Model. DriveSE uses physics-based relationships to size all major drivetrain components according to given rotor loads simulated based on International Electrotechnical Commission design load cases. The model's sensitivity to input loads that contain a high degree of variability was analyzed. Aeroelastic simulations are used to calculate the rotor forces and moments imposed on the drivetrain for each turbine design. DriveSE is then used to size all of the major drivetrain components for each turbine for both three-point and four-point configurations. The simulation results quantify the trade-offs in mass and component costs for the different configurations. On average, a 16.7% decrease in total nacelle mass can be achieved when using a three-point drivetrain configuration, resulting in a 3.5% reduction in turbine capital cost. This analysis is driven by extreme loads and does not consider fatigue; thus, the effects of configuration choices on reliability and serviceability are not captured. Furthermore, a first-order estimate of the sizing, dimensioning, and costing of major drivetrain components is made, which can be used in larger system studies that consider trade-offs between subsystems such as the rotor, drivetrain, and tower.
Optimization and Comparison of Different Digital Mammographic Tomosynthesis Reconstruction Methods
2008-04-01
physical measurements of impulse response analysis, modulation transfer function (MTF) and noise power spectrum (NPS). (Months 5-12). This task has ... and 2 impulse-added: projection images with simulated impulse and the 1/r2 shading difference. Other system blur and noise issues are not ... blur, and suppressed high-frequency noise. Point-by-point BP rather than traditional SAA should be considered as the basis of further deblurring
Detecting a Protein in its Natural Environment with a MOSFET Transistor
NASA Astrophysics Data System (ADS)
Perez, Benjamin; Balijepalli, Arvind
2015-03-01
Our group's goal is to make a MOSFET transistor that has a nanopore through it. We want to have proteins flow through this device and examine their structure based on the modulation they cause in the current. This process does not harm the protein and allows it to be studied in its natural environment. The electric field and electric potential of a point charge were computed within a nano-transistor. The simulations were used to see whether the point charge had enough influence on the current to cause a modulation. The point charge did cause a rise in the current, making the modulation concept a viable one for medical applications. COMSOL Multiphysics software was used to perform all simulations. This work was supported by the Society of Physics Students internship program and NIST.
Mariani, Alberto; Brunner, S.; Dominski, J.; ...
2018-01-17
Reducing the uncertainty on physical input parameters derived from experimental measurements is essential towards improving the reliability of gyrokinetic turbulence simulations. This can be achieved by introducing physical constraints. Amongst them, the zero particle flux condition is considered here. A first attempt is also made to match as well the experimental ion/electron heat flux ratio. This procedure is applied to the analysis of a particular Tokamak à Configuration Variable discharge. A detailed reconstruction of the zero particle flux hyper-surface in the multi-dimensional physical parameter space at fixed time of the discharge is presented, including the effect of carbon as the main impurity. Both collisionless and collisional regimes are considered. Hyper-surface points within the experimental error bars are found. In conclusion, the analysis is done performing gyrokinetic simulations with the local version of the GENE code, computing the fluxes with a Quasi-Linear (QL) model and validating the QL results with non-linear simulations in a subset of cases.
Incerti, S; Kyriakou, I; Bernal, M A; Bordage, M C; Francis, Z; Guatelli, S; Ivanchenko, V; Karamitros, M; Lampe, N; Lee, S B; Meylan, S; Min, C H; Shin, W G; Nieminen, P; Sakata, D; Tang, N; Villagrasa, C; Tran, H; Brown, J M C
2018-06-14
This Special Report presents a description of Geant4-DNA user applications dedicated to the simulation of track structures (TS) in liquid water and associated physical quantities (e.g. range, stopping power, mean free path…). These example applications are included in the Geant4 Monte Carlo toolkit and are available in open access. Each application is described, and comparisons to recent international recommendations (e.g. ICRU, MIRD) are shown, when available. The influence of the physics models available in Geant4-DNA for the simulation of electron interactions in liquid water is discussed. Thanks to these applications, the authors show that the most recent sets of physics models available in Geant4-DNA (the so-called "option 4" and "option 6" sets) enable more accurate simulation of stopping powers, dose point kernels, and W-values in liquid water than the default set of models ("option 2") initially provided in Geant4-DNA. They also serve as reference applications for Geant4-DNA users interested in TS simulations.
A method to map errors in the deformable registration of 4DCT images
Vaman, Constantin; Staub, David; Williamson, Jeffrey; Murphy, Martin J.
2010-01-01
Purpose: To present a new approach to the problem of estimating errors in deformable image registration (DIR) applied to sequential phases of a 4DCT data set. Methods: A set of displacement vector fields (DVFs) are made by registering a sequence of 4DCT phases. The DVFs are assumed to display anatomical movement, with the addition of errors due to the imaging and registration processes. The positions of physical landmarks in each CT phase are measured as ground truth for the physical movement in the DVF. Principal component analysis of the DVFs and the landmarks is used to identify and separate the eigenmodes of physical movement from the error eigenmodes. By subtracting the physical modes from the principal components of the DVFs, the registration errors are exposed and reconstructed as DIR error maps. The method is demonstrated via a simple numerical model of 4DCT DVFs that combines breathing movement with simulated maps of spatially correlated DIR errors. Results: The principal components of the simulated DVFs were observed to share the basic properties of principal components for actual 4DCT data. The simulated error maps were accurately recovered by the estimation method. Conclusions: Deformable image registration errors can have complex spatial distributions. Consequently, point-by-point landmark validation can give unrepresentative results that do not accurately reflect the registration uncertainties away from the landmarks. The authors are developing a method for mapping the complete spatial distribution of DIR errors using only a small number of ground truth validation landmarks. PMID:21158288
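A toy version of the estimation idea above, assuming the physical motion modes have already been identified from landmark analysis (here the first mode, by construction); the DVF stack and error magnitudes are synthetic, not from the paper:

```python
import numpy as np

# hypothetical stack of displacement vector fields: one flattened DVF per 4DCT phase
rng = np.random.default_rng(1)
n_phases, n_dofs = 10, 3000
breathing = np.outer(np.sin(np.linspace(0, 2 * np.pi, n_phases)),
                     rng.normal(size=n_dofs))          # one "physical" motion mode
errors = 0.1 * rng.normal(size=(n_phases, n_dofs))     # registration errors (toy)
dvf = breathing + errors

# principal components of the DVF ensemble
dvf_centered = dvf - dvf.mean(axis=0)
u, s, vt = np.linalg.svd(dvf_centered, full_matrices=False)

# suppose landmark analysis identified the first k modes as physical movement
k = 1
physical = (u[:, :k] * s[:k]) @ vt[:k]
error_map = dvf_centered - physical   # residual exposes the registration errors
print("residual RMS:", np.sqrt((error_map ** 2).mean()))
```

The point of the method is visible even in this toy: the error map is recovered over the whole field, not only at the handful of landmark positions.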
Simulation of process identification and controller tuning for flow control system
NASA Astrophysics Data System (ADS)
Chew, I. M.; Wong, F.; Bono, A.; Wong, K. I.
2017-06-01
PID control is undeniably the most popular method used in controlling various industrial processes. The ability to tune the three PID elements allows the controller to deal with the specific needs of industrial processes. This paper discusses the three control actions and the improvement of controller robustness through combinations of these actions in various forms. A plant model is simulated using the Process Control Simulator in order to evaluate controller performance. First, the open-loop response of the plant is studied by applying a step input and collecting the output data. A first-order-plus-dead-time (FOPDT) model of the physical plant is then formed using both Matlab-Simulink and the process reaction curve (PRC) method, and the controller settings Kc and τi that give satisfactory closed-loop control are calculated. The performance of the closed-loop system is then analyzed through set-point tracking and disturbance rejection tests. To optimize overall performance, a refined tuning (or detuning) of the PID controller is further conducted to ensure a consistent closed-loop response to set-point changes and disturbances. As a result, PB = 100 (%) and τi = 2.0 (s) are preferred for set-point tracking, while PB = 100 (%) and τi = 2.5 (s) are selected for rejecting the imposed disturbance. In a nutshell, the choice of tuning values likewise depends on the control objective and the stability required of the overall physical system.
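The FOPDT-to-controller step described above can be sketched with one common open-loop (process reaction curve) rule; the Ziegler-Nichols constants and the FOPDT parameters below are illustrative assumptions, not the paper's fitted values:

```python
def fopdt_pi_tuning(K, tau, theta):
    """Ziegler-Nichols open-loop (process reaction curve) PI settings for a
    first-order-plus-dead-time model K*exp(-theta*s)/(tau*s + 1)."""
    Kc = 0.9 * tau / (K * theta)   # controller gain
    tau_i = 3.33 * theta           # integral (reset) time, seconds
    return Kc, tau_i

# hypothetical FOPDT fit of a flow process step response
Kc, tau_i = fopdt_pi_tuning(K=2.0, tau=5.0, theta=1.0)
print(f"Kc = {Kc:.2f}, tau_i = {tau_i:.2f} s, PB = {100 / Kc:.0f} %")
```

The proportional band quoted in the abstract relates to gain as PB = 100/Kc (%), which is why detuning toward a larger PB gives a gentler, more robust closed-loop response.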
Bayerl, Manfred
2015-01-01
Peyronie's disease is a connective tissue disorder of the soft tissue of the penis. Its underlying cause is not well understood, but it is thought to result from trauma or injury to the penis during sexual intercourse. The purpose of this interdisciplinary cooperation between urological surgery and physics is the development of a physical simulation tool that gives a prognosis of possible tunica albuginea fibre rupture at a given degree of deviation of the penis. The first challenge for our group was to translate the penis as a human organ into a physical model. Starting and boundary parameters had to be defined, some of which had to be based on assumptions, as physical data on living human tissue have rarely been measured to date. The algorithm and its dependencies then had to be developed. This paper is a first step toward a three-dimensional mathematical-physical simulation, under the assumption of a 100% filled rigid penis. The calculation supports the hypothesis that the fibre-load angle of the penis is less than 12 degrees. Physical simulation is thus able to provide the surgeon with a simple instrument to calculate and forecast the risk for the individual patient. PMID:25648614
Bias correction factors for near-Earth asteroids
NASA Technical Reports Server (NTRS)
Benedix, Gretchen K.; Mcfadden, Lucy Ann; Morrow, Esther M.; Fomenkova, Marina N.
1992-01-01
Knowledge of the population size and physical characteristics (albedo, size, and rotation rate) of near-Earth asteroids (NEA's) is biased by observational selection effects which are functions of the population's intrinsic properties and the size of the telescope, detector sensitivity, and search strategy used. The NEA population is modeled in terms of orbital and physical elements: a, e, i, omega, Omega, M, albedo, and diameter, and an asteroid search program is simulated using actual telescope pointings of right ascension, declination, date, and time. The position of each object in the model population is calculated at the date and time of each telescope pointing. The program tests to see if that object is within the field of view (FOV = 8.75 degrees) of the telescope and above the limiting magnitude (V = +16.5) of the film. The effect of the starting population on the outcome of the simulation's discoveries is compared to the actual discoveries in order to define a most probable starting population.
Monte Carlo Study of Melting of a Model Bulk Ice.
NASA Astrophysics Data System (ADS)
Han, Kyu-Kwang
The methods of NVT (constant number, volume, and temperature) and NPT (constant number, pressure, and temperature) Monte Carlo computer simulation are used to examine the melting of a periodic hexagonal ice (ice Ih) sample with a unit cell of 192 (rigid) water molecules interacting via the revised central force potentials of Stillinger and Rahman (RSL2). In the NVT Monte Carlo simulations, a P-T plot at constant density (0.904 g/cm^3) is used to locate the onset of the liquid-solid coexistence region (where the slope of the pressure changes sign) and estimate the (constant-density) melting point. The slope reversal is a natural consequence of the constant-density condition for substances that expand upon freezing, and it is pointed out that this analysis is extremely useful for substances such as water. In this study, a sign reversal of the pressure slope is observed near 280 K, indicating that the RSL2 potentials reproduce the freezing expansion expected for water and support a bulk ice Ih system that melts below 280 K. The internal energy, specific heat, and two-dimensional structure factors of the constant-density H2O system are also examined at temperatures between 100 and 370 K and support the P-T analysis for the location of the melting point. This P-T analysis might likewise be useful for determining a (constant-density) freezing point or, with multiple simulations at appropriate densities, the triple point. For the NPT Monte Carlo simulations, preliminary results are presented. Here the density, enthalpy, specific heat, and structure factor dependences on temperature are monitored during a sequential heating of the system from 100 to 370 K at a constant pressure (1 atm). A jump in density upon melting is observed and indicates that the RSL2 potentials reproduce the melting contraction of ice. From the temperature dependences of the monitored physical properties, an upper bound on the melting temperature is estimated. This work presents the first analysis and calculation of the P-T curve for ice Ih melting at constant volume and the first NPT study of ice and of ice melting. In the NVT simulation we find, for ρ = 0.904 g/cm^3, T_m ≈ 280 K, which is much closer to the physical T_m than any other published NVT simulation of ice. Finally, it is shown that the RSL2 potentials do a credible job of describing the thermodynamic properties of ice Ih near its melting point.
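The constant-density melting-point criterion described above reduces to finding where the slope of the P-T curve changes sign. A minimal sketch with invented P-T samples chosen to flip near 280 K (illustrative numbers, not the RSL2 data):

```python
import numpy as np

# hypothetical NVT pressure-temperature samples at fixed density (arbitrary units)
T = np.array([240., 250., 260., 270., 280., 290., 300., 310.])
P = np.array([-80., -60., -35., -10., 5., -5., -20., -30.])

slope = np.diff(P) / np.diff(T)
flip = np.where(np.sign(slope[:-1]) != np.sign(slope[1:]))[0]
if flip.size:
    i = flip[0] + 1
    print(f"pressure slope changes sign near T = {T[i]:.0f} K "
          "-> onset of the liquid-solid coexistence region")
```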
Simulation of the DNA force-extension curve
NASA Astrophysics Data System (ADS)
Shinaberry, Gregory; Mikhaylov, Ivan; Balaeff, Alexander
A molecular dynamics simulation study of the force-extension curve of double-stranded DNA is presented. Extended simulations of the DNA at multiple points along the force-extension curve are conducted with DNA end-to-end length constrained at each point. The calculated force-extension curve qualitatively reproduces the experimental one. The DNA conformational ensemble at each extension shows that the famous plateau of the force-extension curve results from B-DNA melting, whereas the formation of the earlier-predicted novel DNA conformation called 'zip-DNA' takes place at extensions past the plateau. An extensive analysis of the DNA conformational ensemble in terms of base configuration, backbone configuration, solvent interaction energy, etc., is conducted in order to elucidate the physical origin of DNA elasticity and the main interactions responsible for the shape of the force-extension curve.
Numerical Simulations of a Jet-Cloud Collision and Starburst: Application to Minkowski’s Object
NASA Astrophysics Data System (ADS)
Fragile, P. Chris; Anninos, Peter; Croft, Steve; Lacy, Mark; Witry, Jason W. L.
2017-12-01
We present results of three-dimensional, multi-physics simulations of an AGN jet colliding with an intergalactic cloud. The purpose of these simulations is to assess the degree of “positive feedback,” i.e., jet-induced star formation, that results. We have specifically tailored our simulation parameters to facilitate a comparison with recent observations of Minkowski’s Object (MO), a stellar nursery located at the termination point of a radio jet coming from galaxy NGC 541. As shown in our simulations, such a collision triggers shocks, which propagate around and through the cloud. These shocks condense the gas and under the right circumstances may trigger cooling instabilities, creating runaway increases in density, to the point that individual clumps can become Jeans unstable. Our simulations provide information about the expected star formation rate, total mass converted to H I, H2, and stars, and the relative velocity of the stars and gas. Our results confirm the possibility of jet-induced star formation, and agree well with the observations of MO.
Smooth Sensor Motion Planning for Robotic Cyber Physical Social Sensing (CPSS)
Tang, Hong; Li, Liangzhi; Xiao, Nanfeng
2017-01-01
Although many researchers have begun to study the area of Cyber Physical Social Sensing (CPSS), few have focused on robotic sensors. We successfully utilize robots in CPSS and propose a sensor trajectory planning method in this paper. Trajectory planning is a fundamental problem in mobile robotics, but traditional methods are not suited to robotic sensors because of their low efficiency, instability, and the non-smooth paths they generate. This paper adopts an optimizing function to generate several intermediate points and regresses these discrete points to a quintic polynomial that yields a smooth trajectory for the robotic sensor. Simulations demonstrate that our approach is robust and efficient, and can be well applied in the CPSS field. PMID:28218649
Numerical Simulation of Measurements during the Reactor Physical Startup at Unit 3 of Rostov NPP
NASA Astrophysics Data System (ADS)
Tereshonok, V. A.; Kryakvin, L. V.; Pitilimov, V. A.; Karpov, S. A.; Kulikov, V. I.; Zhylmaganbetov, N. M.; Kavun, O. Yu.; Popykin, A. I.; Shevchenko, R. A.; Shevchenko, S. A.; Semenova, T. V.
2017-12-01
The results of numerical calculations and measurements of some reactor parameters during the physical startup tests at unit 3 of Rostov NPP are presented. The following parameters are considered: the critical boric acid concentration and the currents from the ionization chambers (IC) during the scram system efficiency evaluation. The scram system efficiency was determined using the inverse point kinetics equation with the measured and simulated IC currents. The results of steady-state calculations of the relative power distribution and of the efficiency of the scram system and of separate control rod groups of the control and protection system are also presented. The calculations are performed using several codes, including precision ones.
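The inverse point kinetics step mentioned above can be sketched with a single delayed-neutron group: integrate the precursor balance using the measured trace, then solve the kinetics equation for reactivity. All constants and the decaying current trace below are illustrative assumptions, not plant data:

```python
import numpy as np

# one-group delayed-neutron data (illustrative values)
beta, lam, LAMBDA = 0.0065, 0.08, 1e-4   # lam: decay constant (1/s); LAMBDA: generation time (s)

def inverse_kinetics(t, n):
    """Reactivity history from a measured flux/current trace n(t) via the
    one-group inverse point kinetics equation rho = LAMBDA*n'/n + beta - LAMBDA*lam*C/n."""
    c = beta * n[0] / (LAMBDA * lam)      # precursors at initial equilibrium
    rho = np.zeros_like(n)
    for k in range(1, len(n)):
        dt = t[k] - t[k - 1]
        c += (beta * n[k] / LAMBDA - lam * c) * dt   # precursor balance
        dndt = (n[k] - n[k - 1]) / dt
        rho[k] = LAMBDA * dndt / n[k] + beta - LAMBDA * lam * c / n[k]
    return rho

t = np.linspace(0, 10, 1001)
n = np.exp(-0.5 * t) + 0.2                # hypothetical decaying IC current after scram
print("reactivity at end of trace (dollars):", inverse_kinetics(t, n)[-1] / beta)
```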
A Special Topic From Nuclear Reactor Dynamics for the Undergraduate Physics Curriculum
ERIC Educational Resources Information Center
Sevenich, R. A.
1977-01-01
Presents an intuitive derivation of the point reactor equations followed by formulation of equations for inverse and direct kinetics which are readily programmed on a digital computer. Suggests several computer simulations involving the effect of control rod motion on reactor power. (MLH)
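A sketch of the kind of simulation the article suggests: the one-delayed-group point reactor equations integrated with explicit Euler, with control rod motion idealized as a reactivity step. The constants are illustrative, not from the article:

```python
import numpy as np

beta, lam, LAMBDA = 0.0065, 0.08, 1e-4   # one delayed group, illustrative values

def point_kinetics(rho_of_t, t_end=20.0, dt=1e-4):
    """Integrate dn/dt = (rho-beta)/LAMBDA*n + lam*C and dC/dt = beta/LAMBDA*n - lam*C
    for a given reactivity history (e.g., a control-rod step)."""
    n, c = 1.0, beta / (LAMBDA * lam)    # equilibrium initial conditions (n0 = 1)
    ts, ns, t = [0.0], [1.0], 0.0
    while t < t_end:
        rho = rho_of_t(t)
        dn = ((rho - beta) / LAMBDA * n + lam * c) * dt
        dc = (beta / LAMBDA * n - lam * c) * dt
        n, c, t = n + dn, c + dc, t + dt
        ts.append(t); ns.append(n)
    return np.array(ts), np.array(ns)

# 10-cent positive rod withdrawal modeled as a reactivity step at t = 1 s
ts, ns = point_kinetics(lambda t: 0.1 * beta if t > 1.0 else 0.0)
print("relative power after 20 s:", ns[-1])
```

The small time step is needed because the prompt-neutron time constant (of order LAMBDA/beta) makes the system stiff, which is itself a worthwhile teaching point.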
Calculation of electron Dose Point Kernel in water with GEANT4 for medical application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guimaraes, C. C.; Sene, F. F.; Martinelli, J. R.
2009-06-03
The rapid insertion of new technologies into medical physics in recent years, especially in nuclear medicine, has been accompanied by a great development of faster Monte Carlo algorithms. GEANT4 is a Monte Carlo toolkit that contains the tools to simulate particle transport through matter. In this work, GEANT4 was used to calculate the dose point kernel (DPK) for monoenergetic electrons in water, which is an important reference medium for nuclear medicine. The three different physical models of electromagnetic interactions provided by GEANT4 - Low Energy, Penelope, and Standard - were employed. To verify the adequacy of these models, the results were compared with references from the literature. For all energies and physical models, the agreement between calculated DPKs and reported values is satisfactory.
Practices to enable the geophysical research spectrum: from fundamentals to applications
NASA Astrophysics Data System (ADS)
Kang, S.; Cockett, R.; Heagy, L. J.; Oldenburg, D.
2016-12-01
In a geophysical survey, a source injects energy into the earth and a response is measured. These physical systems are governed by partial differential equations and their numerical solutions are obtained by discretizing the earth. Geophysical simulations and inversions are tools for understanding physical responses and constructing models of the subsurface given a finite amount of data. SimPEG (http://simpeg.xyz) is our effort to synthesize geophysical forward and inverse methodologies into a consistent framework. The primary focus of our initial development has been on the electromagnetics (EM) package, with recent extensions to magnetotelluric, direct current (DC), and induced polarization. Across these methods, and applied geophysics in general, we require tools to explore and build an understanding of the physics (behaviour of fields, fluxes), and work with data to produce models through reproducible inversions. If we consider DC or EM experiments, with the aim of understanding responses from subsurface conductors, we require resources that provide multiple "entry points" into the geophysical problem. To understand the physical responses and measured data, we must simulate the physical system and visualize electric fields, currents, and charges. Performing an inversion requires that many moving pieces be brought together: simulation, physics, linear algebra, data processing, optimization, etc. Each component must be trusted, accessible to interrogation and manipulation, and readily combined in order to enable investigation into inversion methodologies. To support such research, we not only require "entry points" into the software, but also extensibility to new situations. In our development of SimPEG, we have sought to use leading practices in software development with the aim of supporting and promoting collaborations across a spectrum of geophysical research: from fundamentals to applications. Designing software to enable this spectrum puts unique constraints on both the architecture of the codebase as well as the development practices that are employed. In this presentation, we will share some lessons learned and, in particular, how our prioritization of testing, documentation, and refactoring has impacted our own research and fostered collaborations.
Infant phantom head circuit board for EEG head phantom and pediatric brain simulation
NASA Astrophysics Data System (ADS)
Almohsen, Safa
The infant skull differs from the adult skull because of the characteristic features of the human skull during early development. The fontanels and the conductivity of the infant skull influence the surface currents, generated by neurons, which underlie electroencephalography (EEG) signals. An electric circuit was built to power a set of simulated neural sources for an infant brain activity simulator. In addition, three phantom tissues were created using saline solution plus agarose gel to mimic the conductivity of each layer of the head (scalp, skull, brain). The conductivity measurements were made with two different techniques: the four-point measurement technique and a conductivity meter. Test results showed that the optimized phantom tissues had appropriate conductivities to simulate each tissue layer in a fabricated physical head phantom. The next step is to test the electrical neural circuit with the physical model to generate simulated EEG data and to use those data to solve both the forward and the inverse problems for the purpose of localizing the neural sources in the head phantom.
Disturbance characteristics of half-selected cells in a cross-point resistive switching memory array
NASA Astrophysics Data System (ADS)
Chen, Zhe; Li, Haitong; Chen, Hong-Yu; Chen, Bing; Liu, Rui; Huang, Peng; Zhang, Feifei; Jiang, Zizhen; Ye, Hongfei; Gao, Bin; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng; Wong, H.-S. Philip; Yu, Shimeng
2016-05-01
Disturbance characteristics of cross-point resistive random access memory (RRAM) arrays are comprehensively studied in this paper. An analytical model is developed to quantify the number of pulses (#Pulse) the cell can bear before disturbance occurs under various sub-switching voltage stresses based on physical understanding. An evaluation methodology is proposed to assess the disturb behavior of half-selected (HS) cells in cross-point RRAM arrays by combining the analytical model and SPICE simulation. The characteristics of cross-point RRAM arrays such as energy consumption, reliable operating cycles and total error bits are evaluated by the methodology. A possible solution to mitigate disturbance is proposed.
Non-magnetic photospheric bright points in 3D simulations of the solar atmosphere
NASA Astrophysics Data System (ADS)
Calvo, F.; Steiner, O.; Freytag, B.
2016-11-01
Context. Small-scale bright features in the photosphere of the Sun, such as faculae or G-band bright points, appear in connection with small-scale magnetic flux concentrations. Aims: Here we report on a new class of photospheric bright points that are free of magnetic fields; so far, these are visible in numerical simulations only. We explore the conditions required for their observational detection. Methods: Numerical radiation (magneto-)hydrodynamic simulations of the near-surface layers of the Sun were carried out. The magnetic-field-free simulations show tiny bright points, reminiscent of magnetic bright points, only smaller. A simple toy model for these non-magnetic bright points (nMBPs) was established that serves as a base for the development of an algorithm for their automatic detection. Basic physical properties of 357 detected nMBPs were extracted and statistically evaluated. We produced synthetic intensity maps that mimic observations with various solar telescopes to obtain hints on their detectability. Results: The nMBPs of the simulations show a mean bolometric intensity contrast of approximately 20% with respect to their intergranular surroundings and a size of 60-80 km, and the isosurface of optical depth unity is depressed at their location by 80-100 km. They are caused by swirling downdrafts that provide, by means of the centripetal force, the pressure gradient needed to form a funnel of reduced mass density that reaches from the subsurface layers into the photosphere. Similar, frequently occurring funnels that do not reach into the photosphere do not produce bright points. Conclusions: Non-magnetic bright points are the observable manifestation of vertically extending vortices (vortex tubes) in the photosphere. The resolving power of 4-m-class telescopes, such as the DKIST, is needed for their unambiguous detection. The movie associated with Fig. 1 is available at http://www.aanda.org
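A detection algorithm in the spirit of the one described, thresholding intensity contrast against the local intergranular background, can be sketched as follows. The 20% contrast comes from the abstract, but the filter sizes, noise level, and synthetic map are assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy import ndimage

def detect_bright_points(intensity, contrast=0.2, size_px=5):
    """Flag pixels exceeding the local background by a given contrast,
    then keep only compact connected features of roughly bright-point size."""
    background = ndimage.median_filter(intensity, size=25)  # local "intergranular" level
    mask = intensity > (1.0 + contrast) * background
    labels, n = ndimage.label(mask)
    keep = [i for i in range(1, n + 1) if (labels == i).sum() <= size_px ** 2]
    return labels, keep

rng = np.random.default_rng(2)
img = 1.0 + 0.05 * rng.normal(size=(200, 200))  # stand-in synthetic intensity map
img[100:103, 50:53] += 0.5                      # one injected bright feature
labels, keep = detect_bright_points(img)
print("candidate bright points:", len(keep))
```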
Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility
NASA Astrophysics Data System (ADS)
Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.
2017-12-01
The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. The simulations are also used here for active-learning purposes: they help students understand how physical processes happen and what kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position plus time) reconstruction of the dynamic scene is obtained through the composition of several 3D models derived from dense image matching. The final textured 4D model allows one to revisit a completed experiment at any time in a dynamic and interactive mode. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence from the camera in front of the simulation facility. While the 4D model provides a qualitative description and documentation of the running experiment, the DIC analysis outputs quantitative information, such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.
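The DIC step described above amounts to tracking a template patch between frames by maximizing normalized cross-correlation. A minimal brute-force sketch; the window and search sizes are arbitrary choices, and the shifted random image is stand-in data:

```python
import numpy as np

def ncc_displacement(ref, cur, point, w=10, search=15):
    """Track one surface point between two frames by maximizing the
    normalized cross-correlation of a (2w+1)x(2w+1) template."""
    y, x = point
    tpl = ref[y - w:y + w + 1, x - w:x + w + 1].astype(float)
    tpl = (tpl - tpl.mean()) / tpl.std()
    best, disp = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[y + dy - w:y + dy + w + 1,
                      x + dx - w:x + dx + w + 1].astype(float)
            win = (win - win.mean()) / win.std()
            score = (tpl * win).mean()   # NCC score in [-1, 1]
            if score > best:
                best, disp = score, (dy, dx)
    return disp  # pixel displacement; divide by the frame interval for velocity

rng = np.random.default_rng(3)
ref = rng.random((100, 100))
cur = np.roll(ref, (3, -2), axis=(0, 1))     # frame shifted down 3 px, left 2 px
print(ncc_displacement(ref, cur, (50, 50)))  # expect (3, -2)
```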
NASA Astrophysics Data System (ADS)
Klejment, Piotr; Kosmala, Alicja; Foltyn, Natalia; Dębski, Wojciech
2017-04-01
The earthquake focus is the point where a rock under external stress starts to fracture. Understanding earthquake nucleation and dynamics thus requires an understanding of the fracturing of brittle materials. This, however, remains a continuing problem and an enduring challenge for geoscience. In spite of significant progress, we still do not fully understand the failure of rock materials under the extreme stress concentrations found in natural conditions. One reason is that the available information about natural or induced seismic events is still not sufficient for a precise description of the physical processes in seismic foci. One possibility for improving this situation is the use of numerical simulations, a powerful tool of contemporary physics. For this reason we used an advanced implementation of the Discrete Element Method (DEM). DEM calculates the physical properties of materials represented as assemblies of a great number of particles interacting with each other. We analyze the possibility of using DEM to describe materials during the so-called Brazilian test, a method for obtaining the tensile strength of brittle materials. One of the primary reasons for conducting such simulations is to measure macroscopic parameters of the rock sample. We report our efforts to describe the fracturing process during the Brazilian test from the microscopic point of view and to give insight into the physical processes preceding material failure.
SAFSIM theory manual: A computer program for the engineering simulation of flow systems
NASA Astrophysics Data System (ADS)
Dobranich, Dean
1993-12-01
SAFSIM (System Analysis Flow SIMulator) is a FORTRAN computer program for simulating the integrated performance of complex flow systems. SAFSIM provides sufficient versatility to allow the engineering simulation of almost any system, from a backyard sprinkler system to a clustered nuclear reactor propulsion system. In addition to versatility, speed and robustness are primary SAFSIM development goals. SAFSIM contains three basic physics modules: (1) a fluid mechanics module with flow network capability; (2) a structure heat transfer module with multiple convection and radiation exchange surface capability; and (3) a point reactor dynamics module with reactivity feedback and decay heat capability. Any or all of the physics modules can be implemented, as the problem dictates. SAFSIM can be used for compressible and incompressible, single-phase, multicomponent flow systems. Both the fluid mechanics and structure heat transfer modules employ a one-dimensional finite element modeling approach. This document contains a description of the theory incorporated in SAFSIM, including the governing equations, the numerical methods, and the overall system solution strategies.
GIS-Based Noise Simulation Open Source Software: N-GNOIS
NASA Astrophysics Data System (ADS)
Vijay, Ritesh; Sharma, A.; Kumar, M.; Shende, V.; Chakrabarti, T.; Gupta, Rajesh
2015-12-01
Geographical information system (GIS)-based noise simulation software (N-GNOIS) has been developed to simulate noise scenarios due to point and mobile sources, considering the impact of geographical features and meteorological parameters. These are addressed in the software through attenuation modules for atmosphere, vegetation, and barriers. N-GNOIS is a user-friendly, platform-independent and Open Geospatial Consortium (OGC)-compliant software package. It has been developed using open-source technology (QGIS) and an open-source language (Python). N-GNOIS has unique features such as the cumulative impact of point and mobile sources, building structures, and honking due to traffic. Honking is a very common phenomenon in developing countries and is frequently observed on all types of roads. N-GNOIS also helps in designing physical barriers and vegetation cover to check the propagation of noise, and acts as a decision-making tool for the planning and management of the noise component in environmental impact assessment (EIA) studies.
ViSBARD: Visual System for Browsing, Analysis and Retrieval of Data
NASA Astrophysics Data System (ADS)
Roberts, D. Aaron; Boller, Ryan; Rezapkin, V.; Coleman, J.; McGuire, R.; Goldstein, M.; Kalb, V.; Kulkarni, R.; Luckyanova, M.; Byrnes, J.; Kerbel, U.; Candey, R.; Holmes, C.; Chimiak, R.; Harris, B.
2018-04-01
ViSBARD interactively visualizes and analyzes space physics data. It provides an interactive, integrated 3-D and 2-D environment for determining correlations between measurements across many spacecraft. It supports a variety of spacecraft data products and MHD models and is easily extensible to others. ViSBARD provides a way of visualizing multiple vector and scalar quantities as measured by many spacecraft at once. The data are displayed three-dimensionally along the orbits, which may be displayed either as connected lines or as points. The data display allows the rapid determination of vector configurations, correlations between many measurements at multiple points, and global relationships. With the addition of magnetohydrodynamic (MHD) model data, this environment can also be used to validate simulation results against observed data, use simulated data to provide a global context for sparse observed data, and apply feature detection techniques to the simulated data.
Atmospheric Modeling And Sensor Simulation (AMASS) study
NASA Technical Reports Server (NTRS)
Parker, K. G.
1984-01-01
The capabilities of the atmospheric modeling and sensor simulation (AMASS) system were studied in order to enhance them. This system is used in processing atmospheric measurements which are utilized in the evaluation of sensor performance, conducting design-concept simulation studies, and also in the modeling of the physical and dynamical nature of atmospheric processes. The study tasks proposed in order to both enhance the AMASS system utilization and to integrate the AMASS system with other existing equipment to facilitate the analysis of data for modeling and image processing are enumerated. The following array processors were evaluated for anticipated effectiveness and/or improvements in throughput by attachment of the device to the P-e: (1) Floating Point Systems AP-120B; (2) Floating Point Systems 5000; (3) CSP, Inc. MAP-400; (4) Analogic AP500; (5) Numerix MARS-432; and (6) Star Technologies, Inc. ST-100.
Material point method of modelling and simulation of reacting flow of oxygen
NASA Astrophysics Data System (ADS)
Mason, Matthew; Chen, Kuan; Hu, Patrick G.
2014-07-01
Aerospace vehicles are continually being designed to sustain flight at higher speeds and higher altitudes than previously attainable. At hypersonic speeds, gases within a flow begin to chemically react and the fluid's physical properties are modified. It is desirable to model these effects within the Material Point Method (MPM). The MPM is a combined Eulerian-Lagrangian particle-based solver that calculates the physical properties of individual particles and uses a background grid for information storage and exchange. This study introduces chemically reacting flow modelling within the MPM numerical algorithm and illustrates a simple application using the AeroElastic Material Point Method (AEMPM) code. The governing equations of reacting flows are introduced and their direct application within an MPM code is discussed. A flow of 100% oxygen is illustrated and the results are compared with independently developed computational non-equilibrium algorithms. Observed trends agree well with results from an independently developed source.
de Vries, R
2004-02-15
Electrostatic complexation of flexible polyanions with the whey proteins alpha-lactalbumin and beta-lactoglobulin is studied using Monte Carlo simulations. The proteins are considered at their respective isoelectric points. Discrete charges on the model polyelectrolytes and proteins interact through Debye-Hückel potentials. Protein excluded volume is taken into account through a coarse-grained model of the protein shape. Consistent with experimental results, it is found that alpha-lactalbumin complexes much more strongly than beta-lactoglobulin. For alpha-lactalbumin, strong complexation is due to localized binding to a single large positive "charge patch," whereas for beta-lactoglobulin, weak complexation is due to diffuse binding to multiple smaller charge patches.
A statistical physics perspective on criticality in financial markets
NASA Astrophysics Data System (ADS)
Bury, Thomas
2013-11-01
Stock markets are complex systems exhibiting collective phenomena and particular features such as synchronization, fluctuations distributed as power-laws, non-random structures and similarity to neural networks. Such specific properties suggest that markets operate at a very special point. Financial markets are believed to be critical by analogy to physical systems, but little statistically founded evidence has been given. Through a data-based methodology and comparison to simulations inspired by the statistical physics of complex systems, we show that the Dow Jones and index sets are not rigorously critical. However, financial systems are closer to criticality in the crash neighborhood.
Quantum simulation from the bottom up: the case of rebits
NASA Astrophysics Data System (ADS)
Enshan Koh, Dax; Yuezhen Niu, Murphy; Yoder, Theodore J.
2018-05-01
Typically, quantum mechanics is thought of as a linear theory with unitary evolution governed by the Schrödinger equation. While this is technically true and useful for a physicist, it is an unfortunately narrow point of view with regard to computation. Just as a classical computer can simulate highly nonlinear functions of classical states, so too can the more general quantum computer simulate nonlinear evolutions of quantum states. We detail one particular simulation of nonlinearity on a quantum computer, showing how the entire class of ℝ-unitary evolutions (on n qubits) can be simulated using a unitary, real-amplitude quantum computer (consisting of n + 1 qubits in total). These operators can be represented as the sum of a linear and an antilinear operator, and they add an intriguing new set of nonlinear quantum gates to the toolbox of the quantum algorithm designer. Furthermore, a subgroup of these nonlinear evolutions, called the ℝ-Cliffords, can be efficiently classically simulated, by making use of the fact that Clifford operators can simulate non-Clifford (in fact, nonlinear) operators. This perspective of using the physical operators that we have to simulate non-physical ones that we do not is what we call bottom-up simulation, and we give some examples of its broader implications.
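The standard real-amplitude encoding that underlies such constructions, where one extra qubit stores the imaginary parts, is easy to sketch. This illustrates the general (n+1)-rebit simulation of an ordinary complex unitary, not the paper's specific nonlinear gadgets:

```python
import numpy as np

def rebit_encode(psi):
    """Encode an n-qubit complex state as an (n+1)-qubit real state:
    the ancilla's |0> branch carries the real part, |1> the imaginary part."""
    return np.concatenate([psi.real, psi.imag])

def rebit_unitary(U):
    """Real orthogonal matrix simulating the complex unitary U = A + iB."""
    A, B = U.real, U.imag
    return np.block([[A, -B], [B, A]])

# check: a T gate acting on |+>, simulated with real amplitudes only
T = np.diag([1.0, np.exp(1j * np.pi / 4)])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
out_complex = T @ plus
out_real = rebit_unitary(T) @ rebit_encode(plus)
assert np.allclose(out_real, rebit_encode(out_complex))
print("rebit simulation matches:", out_real.round(3))
```

The block form works because (A + iB)(x + iy) = (Ax - By) + i(Bx + Ay), so complex linearity becomes a real rotation mixing the two ancilla branches.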
NASA Astrophysics Data System (ADS)
Yudin, M. S.
2017-11-01
In the present paper, stratification effects on surface pressure in the propagation of an atmospheric gravity current (cold front) over flat terrain are estimated with a non-hydrostatic finite-difference model of atmospheric dynamics. Artificial compressibility is introduced into the model in order to make its equations hyperbolic. For comparison with available simulation data, the physical processes under study are assumed to be adiabatic. The influence of orography is also eliminated. The front surface is explicitly described by a special equation. A time filter is used to suppress the non-physical oscillations. The results of simulations of surface pressure under neutral and stable stratification are presented. Under stable stratification the front moves faster and shows an abrupt pressure jump at the point of observation. This fact is in accordance with observations and the present-day theory of atmospheric fronts.
NASA Astrophysics Data System (ADS)
Martin-Bragado, I.; Castrillo, P.; Jaraiz, M.; Pinacho, R.; Rubio, J. E.; Barbolla, J.; Moroz, V.
2005-09-01
Atomistic process simulation is expected to play an important role for the development of next generations of integrated circuits. This work describes an approach for modeling electric charge effects in a three-dimensional atomistic kinetic Monte Carlo process simulator. The proposed model has been applied to the diffusion of electrically active boron and arsenic atoms in silicon. Several key aspects of the underlying physical mechanisms are discussed: (i) the use of the local Debye length to smooth out the atomistic point-charge distribution, (ii) algorithms to correctly update the charge state in a physically accurate and computationally efficient way, and (iii) an efficient implementation of the drift of charged particles in an electric field. High-concentration effects such as band-gap narrowing and degenerate statistics are also taken into account. The efficiency, accuracy, and relevance of the model are discussed.
Load management strategy for Particle-In-Cell simulations in high energy particle acceleration
NASA Astrophysics Data System (ADS)
Beck, A.; Frederiksen, J. T.; Dérouillat, J.
2016-09-01
In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgansen, K.A.; Pin, F.G.
A new method for mitigating the unexpected impact of a redundant manipulator with an object in its environment is presented. Kinematic constraints are utilized with the recently developed method known as Full Space Parameterization (FSP). The system performance criterion and constraints are changed at impact to return the end effector to the point of impact and halt the arm. Since large joint accelerations could occur as the manipulator is halted, joint acceleration bounds are imposed to simulate physical actuator limitations. Simulation results are presented for the case of a simple redundant planar manipulator.
Numerical simulation of the early-time high altitude electromagnetic pulse
NASA Astrophysics Data System (ADS)
Meng, Cui; Chen, Yu-Sheng; Liu, Shun-Kun; Xie, Qin-Chuan; Chen, Xiang-Yue; Gong, Jian-Cheng
2003-12-01
In this paper, the finite difference method is used to develop the Fortran software MCHII. The physical process in which the electromagnetic signal is generated by the interaction of nuclear-explosion-induced Compton currents with the geomagnetic field is numerically simulated. The electromagnetic pulse waveforms below the burst point are investigated. The effects of the height of burst, yield and the time-dependence of gamma-rays are calculated by using the MCHII code. The results agree well with those obtained by using the code CHAP.
Strangeness S =-1 hyperon-nucleon interactions: Chiral effective field theory versus lattice QCD
NASA Astrophysics Data System (ADS)
Song, Jing; Li, Kai-Wen; Geng, Li-Sheng
2018-06-01
Hyperon-nucleon interactions serve as basic inputs to studies of hypernuclear physics and dense (neutron) stars. Unfortunately, a precise understanding of these important quantities has lagged far behind that of the nucleon-nucleon interaction due to the lack of high-precision experimental data. Historically, hyperon-nucleon interactions have been formulated in either quark models or meson exchange models. In recent years, lattice QCD simulations and chiral effective field theory approaches have begun to offer new insights from first principles. In the present work, we contrast the state-of-the-art lattice QCD simulations with the latest chiral hyperon-nucleon forces and show that the leading order relativistic chiral results can already describe the lattice QCD data reasonably well. Given that the lattice QCD simulations are performed with pion masses ranging from the (almost) physical point to 700 MeV, such studies provide a useful check on both the chiral effective field theory approach and the lattice QCD simulations. Nevertheless, more precise lattice QCD simulations are eagerly needed to refine our understanding of hyperon-nucleon interactions.
Time domain simulations of preliminary breakdown pulses in natural lightning
Carlson, B E; Liang, C; Bitzer, P; Christian, H
2015-01-01
Lightning discharge is a complicated process with relevant physical scales spanning many orders of magnitude. In an effort to understand the electrodynamics of lightning and connect physical properties of the channel to observed behavior, we construct a simulation of charge and current flow on a narrow conducting channel embedded in three-dimensional space with the time domain electric field integral equation, the method of moments, and the thin-wire approximation. The method includes approximate treatment of resistance evolution due to lightning channel heating and the corona sheath of charge surrounding the lightning channel. Focusing our attention on preliminary breakdown in natural lightning by simulating stepwise channel extension with a simplified geometry, our simulation reproduces the broad features observed in data collected with the Huntsville Alabama Marx Meter Array. Some deviations in pulse shape details are evident, suggesting future work focusing on the detailed properties of the stepping mechanism. Key Points: preliminary breakdown pulses can be reproduced by simulated channel extension; channel heating and corona sheath formation are crucial to proper pulse shape; extension processes and channel orientation significantly affect observations. PMID:26664815
Entropy in Collisionless Self-gravitating Systems
NASA Astrophysics Data System (ADS)
Barnes, Eric; Williams, L.
2010-01-01
Collisionless systems, like simulated dark matter halos or gas-less elliptical galaxies, often have properties suggesting that a common physical principle controls their evolution. For example, N-body simulations of dark matter halos present nearly scale-free density/velocity-cubed profiles. In an attempt to understand the origins of such relationships, we adopt a thermodynamics approach. While it is well known that self-gravitating systems do not have physically realizable thermal equilibrium configurations, we are interested in the behavior of entropy as mechanical equilibrium is achieved. We will discuss entropy production in these systems from a kinetic theory point of view. This material is based upon work supported by the National Aeronautics and Space Administration under grant NNX07AG86G issued through the Science Mission Directorate.
The Numerical Analysis of a Turbulent Compressible Jet. Degree awarded by Ohio State Univ., 2000
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2001-01-01
A numerical method to simulate high Reynolds number jet flows was formulated and applied to gain a better understanding of the flow physics. Large-eddy simulation was chosen as the most promising approach to model the turbulent structures due to its compromise between accuracy and computational expense. The filtered Navier-Stokes equations were developed including a total energy form of the energy equation. Subgrid scale models for the momentum and energy equations were adapted from compressible forms of Smagorinsky's original model. The effect of using disparate temporal and spatial accuracy in a numerical scheme was discovered through one-dimensional model problems, and a new uniformly fourth-order accurate numerical method was developed. Results from two- and three-dimensional validation exercises show that the code accurately reproduces both viscous and inviscid flows. Numerous axisymmetric jet simulations were performed to investigate the effect of grid resolution, numerical scheme, exit boundary conditions and subgrid scale modeling on the solution, and the results were used to guide the three-dimensional calculations. Three-dimensional calculations of a Mach 1.4 jet showed that this LES simulation accurately captures the physics of the turbulent flow. The agreement with experimental data was relatively good and much better than results in the current literature. Turbulent intensities indicate that the turbulent structures at this level of modeling are not isotropic, and this information could lend itself to the development of improved subgrid scale models for LES and turbulence models for RANS simulations. A two-point correlation technique was used to quantify the turbulent structures. Two-point space correlations were used to obtain a measure of the integral length scale, which proved to be approximately 1/2 D(sub j). Two-point space-time correlations were used to obtain the convection velocity for the turbulent structures. This velocity ranged from 0.57 to 0.71 U(sub j).
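The two-point correlation analysis mentioned at the end is standard enough to sketch. The code below estimates a normalized two-point space correlation and an integral length scale from a one-dimensional fluctuating signal; the surrogate signal and the first-zero-crossing integration cutoff are assumptions, not details from the thesis.

```python
import numpy as np

def two_point_correlation(u, dx):
    """Normalized two-point correlation R(r) of a fluctuating signal u along one direction."""
    u = u - u.mean()
    n = len(u)
    R = np.array([np.mean(u[:n - k] * u[k:]) for k in range(n // 2)])
    R /= R[0]
    r = np.arange(n // 2) * dx
    return r, R

dx = 0.01
u = np.cumsum(np.random.randn(4096)) * 0.1        # surrogate "turbulent" signal
r, R = two_point_correlation(u, dx)

# integral length scale: integrate R(r) up to its first zero crossing
zero = np.argmax(R < 0) if np.any(R < 0) else len(R)
L_int = np.trapz(R[:zero], r[:zero])
```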
Low-Speed Flight Dynamic Tests and Analysis of the Orion Crew Module Drogue Parachute System
NASA Technical Reports Server (NTRS)
Hahne, David E.; Fremaux, C. Michael
2008-01-01
A test of a dynamically scaled model of the NASA Orion Crew Module (CM) with drogue parachutes was conducted in the NASA-Langley 20-Foot Vertical Spin Tunnel. The primary test objective was to assess the ability of the Orion Crew Module drogue parachute system to adequately stabilize the CM and reduce angular rates at low subsonic Mach numbers. Two attachment locations were tested: the current design nominal and an alternate. Experimental results indicated that the alternate attachment location showed a somewhat greater tendency to attenuate initial roll rate and reduce roll rate oscillations than the nominal location. Comparison of the experimental data to a Program To Optimize Simulated Trajectories (POST II) simulation of the experiment yielded results for the nominal attachment point that indicate differences between the low-speed pitch and yaw damping derivatives in the aerodynamic database and the physical model. Comparisons for the alternate attachment location indicate that riser twist plays a significant role in determining roll rate attenuation characteristics. Reevaluating the impact of the alternate attachment points using a simulation modified to account for these results showed significantly reduced roll rate attenuation tendencies when compared to the original simulation. Based on this modified simulation the alternate attachment point does not appear to offer a significant increase in allowable roll rate over the nominal configuration.
Effective charges and virial pressure of concentrated macroion solutions
Boon, Niels; Guerrero-García, Guillermo Ivan; van Roij, René; ...
2015-07-13
The stability of colloidal suspensions is crucial in a wide variety of processes, including the fabrication of photonic materials and scaffolds for biological assemblies. The ionic strength of the electrolyte that suspends charged colloids is widely used to control the physical properties of colloidal suspensions. The extensively used two-body Derjaguin-Landau-Verwey-Overbeek (DLVO) approach allows for a quantitative analysis of the effective electrostatic forces between colloidal particles. DLVO relates the ionic double layers, which enclose the particles, to their effective electrostatic repulsion. Nevertheless, the double layer is distorted at high macroion volume fractions. Therefore, DLVO cannot describe the many-body effects that arise in concentrated suspensions. In this paper, we show that this problem can be largely resolved by identifying effective point charges for the macroions using cell theory. This extrapolated point charge (EPC) method assigns effective point charges in a consistent way, taking into account the excluded volume of highly charged macroions at any concentration, and thereby naturally accounting for high volume fractions in both salt-free and added-salt conditions. We provide an analytical expression for the effective pair potential and validate the EPC method by comparing molecular dynamics simulations of macroions and monovalent microions that interact via Coulombic potentials to simulations of macroions interacting via the derived EPC effective potential. The simulations reproduce the macroion-macroion spatial correlation and the virial pressure obtained with the EPC model. Finally, our findings provide a route to relate the physical properties such as pressure in systems of screened Coulomb particles to experimental measurements.
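For readers who want the flavor of the derived effective interaction, the sketch below evaluates a screened-Coulomb (Yukawa) pair potential of the DLVO form that effective point charges would feed into; the specific Z_eff, screening constant, and Bjerrum length values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def yukawa_pair_potential(r, Z_eff, kappa, a, lambda_B):
    """u(r)/kT between two charged colloids of radius a at center separation r (DLVO form)."""
    prefactor = (Z_eff * np.exp(kappa * a) / (1.0 + kappa * a)) ** 2
    return prefactor * lambda_B * np.exp(-kappa * r) / r

# lengths in units of the colloid radius a = 1; parameters are illustrative
r = np.linspace(2.1, 10.0, 200)
u = yukawa_pair_potential(r, Z_eff=500, kappa=1.5, a=1.0, lambda_B=0.0007)
```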
GF-7 Imaging Simulation and DSM Accuracy Estimate
NASA Astrophysics Data System (ADS)
Yue, Q.; Tang, X.; Gao, X.
2017-05-01
GF-7 is a two-line-array stereo imaging satellite for surveying and mapping scheduled for launch in 2018. Its resolution is about 0.8 meter at the subastral point, corresponding to a swath width of about 20 km, and the viewing angles of its forward and backward cameras are 5 and 26 degrees. This paper proposes an imaging simulation method for GF-7 stereo images. WorldView-2 stereo images were used as the basic data for simulation. That is, rather than using a DSM and DOM as basic data (an "ortho-to-stereo" method), we used a "stereo-to-stereo" method, which better reflects the differences in geometry and radiation at different looking angles. The drawback is that geometric error is introduced by two factors: the difference in looking angles between the basic and simulated images, and inaccurate or missing ground reference data. We generated a DSM from the WorldView-2 stereo images. This WorldView-2 DSM was used both as the reference DSM for estimating the accuracy of the DSM generated from the simulated GF-7 stereo images, and as "ground truth" for establishing the correspondence between WorldView-2 image points and simulated image points. The static MTF was simulated by filtering on the instantaneous focal-plane "image". The SNR was simulated in the electronic sense: the digital value of a WorldView-2 image point was converted to radiance and used as the radiance seen by the simulated GF-7 camera. This radiance was converted to an electron number n according to the physical parameters of the GF-7 camera, and the noise electron number n1 was drawn as a random number between -√n and √n. The overall electron number accumulated by the TDI CCD was then converted to the digital value of the simulated GF-7 image. Sinusoidal curves with different amplitudes, frequencies and initial phases were used as attitude curves. Geometric installation errors of the CCD tiles were also simulated, considering rotation and translation factors. Finally, an accuracy estimate was made for the DSM generated from the simulated images.
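The electronic-sense SNR recipe above maps naturally to a few lines of code. This is a hedged sketch under assumed camera parameters (radiance-to-electron gain, TDI stage count, electron-to-DN gain); only the noise draw between -√n and √n and the TDI accumulation follow the description in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pixel(L, rad_to_e=900.0, tdi_stages=24, e_to_dn=0.02):
    """Simulate one pixel: radiance L -> electrons per TDI stage -> noisy sum -> digital value."""
    n = L * rad_to_e                          # mean electrons per TDI stage
    noise = rng.uniform(-np.sqrt(n), np.sqrt(n), size=tdi_stages)
    total = np.sum(n + noise)                 # electrons accumulated by the TDI CCD
    return total * e_to_dn                    # convert back to a digital number

dn = simulate_pixel(L=50.0)
```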
El Niño/Southern Oscillation response to global warming
Latif, M.; Keenlyside, N. S.
2009-01-01
The El Niño/Southern Oscillation (ENSO) phenomenon, originating in the Tropical Pacific, is the strongest natural interannual climate signal and has widespread effects on the global climate system and the ecology of the Tropical Pacific. Any strong change in ENSO statistics will therefore have serious climatic and ecological consequences. Most global climate models do simulate ENSO, although large biases exist with respect to its characteristics. The ENSO response to global warming differs strongly from model to model and is thus highly uncertain. Some models simulate an increase in ENSO amplitude, others a decrease, and others virtually no change. Extremely strong changes constituting tipping point behavior are not simulated by any of the models. Nevertheless, some interesting changes in ENSO dynamics can be inferred from observations and model integrations. Although no tipping point behavior is envisaged in the physical climate system, smooth transitions in it may give rise to tipping point behavior in the biological, chemical, and even socioeconomic systems. For example, the simulated weakening of the Pacific zonal sea surface temperature gradient in the Hadley Centre model (with dynamic vegetation included) caused rapid Amazon forest die-back in the mid-twenty-first century, which in turn drove a nonlinear increase in atmospheric CO2, accelerating global warming. PMID:19060210
Evaluation of coupling approaches for thermomechanical simulations
Novascone, S. R.; Spencer, B. W.; Hales, J. D.; ...
2015-08-10
Many problems of interest, particularly in the nuclear engineering field, involve coupling between the thermal and mechanical response of an engineered system. The strength of the two-way feedback between the thermal and mechanical solution fields can vary significantly depending on the problem. Contact problems exhibit a particularly high degree of two-way feedback between those fields. This paper describes and demonstrates the application of a flexible simulation environment that permits the solution of coupled physics problems using either a tightly coupled approach or a loosely coupled approach. In the tight coupling approach, Newton iterations include the coupling effects between all physics, while in the loosely coupled approach, the individual physics models are solved independently, and fixed-point iterations are performed until the coupled system is converged. These approaches are applied to simple demonstration problems and to realistic nuclear engineering applications. The demonstration problems consist of single and multi-domain thermomechanics with and without thermal and mechanical contact. Simulations of a reactor pressure vessel under pressurized thermal shock conditions and a simulation of light water reactor fuel are also presented. Here, problems that include thermal and mechanical contact, such as the contact between the fuel and cladding in the fuel simulation, exhibit much stronger two-way feedback between the thermal and mechanical solutions, and as a result, are better solved using a tight coupling strategy.
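The loosely coupled strategy can be illustrated with a toy fixed-point (Picard) iteration between two stand-in solvers. The closures below are invented for the sketch; in the actual framework each call would be a full thermal or mechanical solve.

```python
# Toy loose coupling: alternate the two single-physics "solves" until the
# coupled fields stop changing (fixed-point / Picard iteration).

def solve_thermal(u):        # temperature given displacement (toy closure)
    return 1.0 + 0.1 * u

def solve_mechanics(T):      # displacement given temperature (toy closure)
    return 0.05 * (T - 1.0)

T, u = 1.0, 0.0
for it in range(100):
    T_new = solve_thermal(u)
    u_new = solve_mechanics(T_new)
    if abs(T_new - T) + abs(u_new - u) < 1e-12:   # coupled system converged
        break
    T, u = T_new, u_new
print(f"converged in {it} fixed-point iterations: T={T:.6f}, u={u:.6f}")
```

A tightly coupled Newton solve would instead linearize both residuals simultaneously, which pays off when the two-way feedback is strong, as in the contact problems described above.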
Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud
NASA Astrophysics Data System (ADS)
Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.
2018-04-01
Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has been gradually applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have some distinct features (such as point density, distribution and complexity). Some filtering algorithms designed for airborne LiDAR data have been applied directly to mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm to handle mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of the algorithm, which yields total errors of 0.44 %, 0.77 % and 1.20 %, respectively. Additionally, a large-area dataset is also tested to further validate the effectiveness of the algorithm, and the results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.
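A heavily simplified sketch of the CSF idea follows: invert the point cloud, let a "cloth" grid settle onto the inverted surface, and label points close to the settled cloth as ground. The real algorithm also models cloth-internal springs and rigidness constraints; this keeps only the core geometric picture, with cell size and threshold as assumed parameters.

```python
import numpy as np

def csf_like_filter(xyz, cell=1.0, threshold=0.5):
    """Label points as ground if they lie near a cloth draped over the inverted cloud."""
    z_inv = -xyz[:, 2]                                   # invert the surface
    ix = ((xyz[:, :2] - xyz[:, :2].min(axis=0)) // cell).astype(int)
    nx, ny = ix.max(axis=0) + 1
    cloth = np.full((nx, ny), -np.inf)
    for (i, j), z in zip(ix, z_inv):
        cloth[i, j] = max(cloth[i, j], z)    # cloth rests on the highest inverted point per cell
    ground = np.abs(z_inv - cloth[ix[:, 0], ix[:, 1]]) < threshold
    return ground

xyz = np.random.rand(10000, 3) * [100.0, 100.0, 5.0]     # toy scene
is_ground = csf_like_filter(xyz)
```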
Enhanced Verification Test Suite for Physics Simulation Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamm, J R; Brock, J S; Brandon, S T
2008-10-10
This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) hydrodynamics; (b) transport processes; and (c) dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater sophistication or other physics regimes (e.g., energetic material response, magneto-hydrodynamics), would represent a scientifically desirable complement to the fundamental test cases discussed in this report. The authors believe that this document can be used to enhance the verification analyses undertaken at the DOE WP Laboratories and, thus, to improve the quality, credibility, and usefulness of the simulation codes that are analyzed with these problems.
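A minimal example of the kind of computation verification analysis rests on is the observed order of accuracy, sketched below for errors measured against an exact benchmark on two grid resolutions; the error values are made up for illustration.

```python
import numpy as np

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed convergence order from errors on two grids related by a refinement ratio."""
    return np.log(err_coarse / err_fine) / np.log(refinement_ratio)

# e.g. L1 errors of a code against an exact Riemann (Sod-type) solution
p = observed_order(err_coarse=4.0e-3, err_fine=1.1e-3)
print(f"observed order ~ {p:.2f}")   # compare against the scheme's design order
```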
Limits in point-to-point resolution of MOS-based pixel detector arrays
NASA Astrophysics Data System (ADS)
Fourches, N.; Desforge, D.; Kebbiri, M.; Kumar, V.; Serruys, Y.; Gutierrez, G.; Leprêtre, F.; Jomard, F.
2018-01-01
In high energy physics, point-to-point resolution is a key prerequisite for particle detector pixel arrays. Current and future experiments require the development of inner detectors able to resolve the tracks of particles down to the micron range. Present-day technologies, although not fully implemented in actual detectors, can reach a 5-μm limit, this limit being based on statistical measurements, with a pixel pitch in the 10 μm range. This paper is devoted to the evaluation of the building blocks for use in pixel arrays enabling accurate tracking of charged particles. Based on simulations, we make a quantitative evaluation of the physical and technological limits on pixel size. Attempts to design small pixels based on SOI technology are briefly recalled. A design based on CMOS-compatible technologies that allows a reduction of the pixel size below one micrometer is introduced here. Its physical principle relies on a buried carrier-localizing collecting gate. The fabrication process needed by this pixel design can be based on existing process steps used in silicon microelectronics. The pixel characteristics will be discussed as well as the design of pixel arrays. The existing bottlenecks and how to overcome them will be discussed in the light of recent ion implantation and material characterization experiments.
NASA Astrophysics Data System (ADS)
Pikulin, D. I.; Franz, M.
2017-07-01
A system of Majorana zero modes with random infinite-range interactions—the Sachdev-Ye-Kitaev (SYK) model—is thought to exhibit an intriguing relation to the horizons of extremal black holes in two-dimensional anti-de Sitter space. This connection provides a rare example of holographic duality between a solvable quantum-mechanical model and dilaton gravity. Here, we propose a physical realization of the SYK model in a solid-state system. The proposed setup employs the Fu-Kane superconductor realized at the interface between a three-dimensional topological insulator and an ordinary superconductor. The requisite N Majorana zero modes are bound to a nanoscale hole fabricated in the superconductor that is threaded by N quanta of magnetic flux. We show that when the system is tuned to the surface neutrality point (i.e., chemical potential coincident with the Dirac point of the topological insulator surface state) and the hole has sufficiently irregular shape, the Majorana zero modes are described by the SYK Hamiltonian. We perform extensive numerical simulations to demonstrate that the system indeed exhibits physical properties expected of the SYK model, including thermodynamic quantities and two-point as well as four-point correlators, and discuss ways in which these can be observed experimentally.
Fowler, P; Duffield, R; Vaile, J
2015-06-01
The present study examined effects of simulated air travel on physical performance. In a randomized crossover design, 10 physically active males completed a simulated 5-h domestic flight (DOM), 24-h simulated international travel (INT), and a control trial (CON). The mild hypoxia, seating arrangements, and activity levels typically encountered during air travel were simulated in a normobaric, hypoxic altitude room. Physical performance was assessed in the afternoon of the day before (D - 1 PM) and in the morning (D + 1 AM) and afternoon (D + 1 PM) of the day following each trial. Mood states and physiological and perceptual responses to exercise were also examined at these time points, while sleep quantity and quality were monitored throughout each condition. Sleep quantity and quality were significantly reduced during INT compared with CON and DOM (P < 0.01). Yo-Yo Intermittent Recovery level 1 test performance was significantly reduced at D + 1 PM following INT compared with CON and DOM (P < 0.01), where performance remained unchanged (P > 0.05). Compared with baseline, physiological and perceptual responses to exercise, and mood states were exacerbated following the INT trial (P < 0.05). Attenuated intermittent-sprint performance following simulated international air travel may be due to sleep disruption during travel and the subsequent exacerbated physiological and perceptual markers of fatigue.
How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.
Lecca, Paola
2018-01-01
We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, using Monte Carlo simulation approaches, we show that these models fit quite accurately almost the entire release profile when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology may be much more straightforward and efficient than hypothesizing and implementing from scratch complex mathematical models of the physical processes involved in drug release. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them mathematically, may allow avoiding time-consuming, trial-and-error regression procedures. Three bullet points highlight the customization of the procedure:
• An efficient heuristic based on Monte Carlo methods for simulating drug release from a solid dosage form is presented. It specifies the model of the physical process in a simple but accurate way through the formula of the Monte Carlo Micro Step (MCS) time interval.
• Given the experimentally observed curve of drug release, we point out how Monte Carlo heuristics can be integrated in an evolutionary algorithmic approach to infer the mode of MCS best fitting the observed data, and thus the observed release kinetics.
• The software implementing the method is written in R, the free language most used in the bioinformatics community.
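To make the micro-step idea concrete, here is a minimal Monte Carlo release sketch: molecules random-walk on a one-dimensional slab lattice and are counted as released once they cross the boundary. It illustrates the general approach only, not the authors' R implementation; the lattice size and step rule are assumptions, and the MCS time interval is where a physical model would enter (here it is simply constant).

```python
import numpy as np

rng = np.random.default_rng(1)
L = 50                                   # lattice half-width of the dosage form
pos = rng.integers(-L, L, size=5000)     # initial molecule positions (1D slab)
released = np.zeros(5000, dtype=bool)
release_curve = []

for step in range(20000):
    # each un-released molecule takes one random step per MCS
    pos[~released] += rng.choice([-1, 1], size=(~released).sum())
    released |= np.abs(pos) >= L         # molecule has left the dosage form
    release_curve.append(released.mean())
```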
MO-DE-BRA-02: SIMAC: A Simulation Tool for Teaching Linear Accelerator Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlone, M; Harnett, N; Department of Radiation Oncology, University of Toronto, Toronto, Ontario
Purpose: The first goal of this work is to develop software that can simulate the physics of linear accelerators (linacs). The second goal is to show that this simulation tool is effective in teaching linac physics to medical physicists and linac service engineers. Methods: Linacs were modeled using analytical expressions that can correctly describe the physical response of a linac to parameter changes in real time. These expressions were programmed with a graphical user interface in order to produce an environment similar to that of linac service mode. The software, "SIMAC", has been used as a learning aid in a professional development course 3 times (2014-2016) as well as in a physics graduate program. Exercises were developed to supplement the didactic components of the courses, consisting of activities designed to reinforce the concepts of beam loading; the effect of steering coil currents on beam symmetry; and the relationship between beam energy and flatness. Results: SIMAC was used to teach 35 professionals (medical physicists, regulators and service engineers; 1-week course) as well as 20 graduate students (1-month project). In the student evaluations, 85% of the students rated the effectiveness of SIMAC as very good or outstanding, and 70% rated the software as the most effective part of the courses. Exercise results were collected showing that 100% of the students were able to use the software correctly. In exercises involving gross changes to linac operating points (i.e., energy changes), the majority of students were able to correctly perform these beam adjustments. Conclusion: Simulation software (SIMAC) can be used to teach linac physics effectively. In short courses, students were able to correctly make gross parameter adjustments that typically require much longer training times using conventional training methods.
Robustness of critical points in a complex adaptive system: Effects of hedge behavior
NASA Astrophysics Data System (ADS)
Liang, Yuan; Huang, Ji-Ping
2013-08-01
In our recent papers, we have identified a class of phase transitions in the market-directed resource-allocation game, and found that there exists a critical point at which the phase transitions occur. The critical point is given by a certain resource ratio. Here, by performing computer simulations and theoretical analysis, we report that the critical point is robust against various kinds of human hedge behavior where the numbers of herds and contrarians can be varied widely. This means that the critical point can be independent of the total number of participants composed of normal agents, herds and contrarians, under some conditions. This finding means that the critical points we identified in this complex adaptive system (with adaptive agents) may also be an intensive quantity, similar to those revealed in traditional physical systems (with non-adaptive units).
Modeling digital breast tomosynthesis imaging systems for optimization studies
NASA Astrophysics Data System (ADS)
Lau, Beverly Amy
Digital breast tomosynthesis (DBT) is a new imaging modality for breast imaging. In tomosynthesis, multiple images of the compressed breast are acquired at different angles, and the projection view images are reconstructed to yield images of slices through the breast. One of the main problems to be addressed in the development of DBT is determining the optimal parameter settings to obtain images ideal for the detection of cancer. Since it would be unethical to irradiate women multiple times to explore potentially optimum geometries for tomosynthesis, it is ideal to use a computer simulation to generate projection images. Existing tomosynthesis models have modeled scatter and detector without accounting for the oblique angles of incidence that tomosynthesis introduces. Moreover, these models frequently use geometry-specific physical factors measured from real systems, which severely limits the robustness of their algorithms for optimization. The goal of this dissertation was to design the framework for a computer simulation of tomosynthesis that would produce images that are sensitive to changes in acquisition parameters, so that an optimization study would be feasible. A computer physics simulation of the tomosynthesis system was developed. The x-ray source was modeled as a polychromatic spectrum based on published spectral data, and the inverse-square law was applied. Scatter was applied using a convolution method with angle-dependent scatter point spread functions (sPSFs), followed by scaling using an angle-dependent scatter-to-primary ratio (SPR). Monte Carlo simulations were used to generate sPSFs for a 5-cm breast with a 1-cm air gap. Detector effects were included through geometric propagation of the image onto layers of the detector, which were blurred using depth-dependent detector point response functions (PRFs). Depth-dependent PRFs were calculated every 5 microns through a 200-micron-thick CsI detector using Monte Carlo simulations. Electronic noise was added as Gaussian noise as a last step of the model. The sPSFs and detector PRFs were verified to match published data, and the noise power spectrum (NPS) from simulated flat-field images was shown to match empirically measured data from a digital mammography unit. A novel anthropomorphic software breast phantom was developed for 3D imaging simulation. Projection view images of the phantom were shown to have similar structure to real breasts in the spatial frequency domain, using the power-law exponent beta to quantify tissue complexity. The physics simulation and computer breast phantom were used together, following methods from a published study with real tomosynthesis images of real breasts. The simulation model and 3D numerical breast phantoms were able to reproduce the trends in the experimental data. This result demonstrates the ability of the tomosynthesis physics model to generate images sensitive to changes in acquisition parameters.
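The scatter model described above (convolution with an sPSF, then scaling to a target SPR) is easy to sketch. In the code below the kernel shape and SPR value are placeholders for the Monte Carlo derived quantities in the dissertation.

```python
import numpy as np
from scipy.signal import fftconvolve

def add_scatter(primary, spsf, spr):
    """Add a scatter estimate to a primary image: convolve with sPSF, scale to the given SPR."""
    scatter = fftconvolve(primary, spsf, mode="same")
    scatter *= spr * primary.sum() / scatter.sum()     # enforce scatter/primary = spr
    return primary + scatter

y, x = np.mgrid[-32:33, -32:33]
spsf = np.exp(-np.hypot(x, y) / 8.0)                   # broad placeholder kernel
spsf /= spsf.sum()
primary = np.random.poisson(1000.0, size=(256, 256)).astype(float)
image = add_scatter(primary, spsf, spr=0.4)
```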
A Hybrid Physics-Based Data-Driven Approach for Point-Particle Force Modeling
NASA Astrophysics Data System (ADS)
Moore, Chandler; Akiki, Georges; Balachandar, S.
2017-11-01
This study improves upon the physics-based pairwise interaction extended point-particle (PIEP) model. The PIEP model leverages a physical framework to predict fluid-mediated interactions between solid particles. While the PIEP model is a powerful tool, its pairwise assumption leads to increased error in flows with high particle volume fractions. To reduce this error, a regression algorithm is used to model the differences between the current PIEP model's predictions and the results of direct numerical simulations (DNS) for an array of monodisperse solid particles subjected to various flow conditions. The resulting statistical model and the physical PIEP model are superimposed to construct a hybrid, physics-based data-driven PIEP model. It must be noted that the performance of a pure data-driven approach without the model form provided by the physical PIEP model is substantially inferior. The hybrid model's predictive capabilities are analyzed using additional DNS. In every case tested, the hybrid PIEP model's predictions are more accurate than those of the physical PIEP model. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1315138 and the U.S. DOE, NNSA, ASC Program, as a Cooperative Agreement under Contract No. DE-NA0002378.
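The hybrid construction, a physics prediction plus a regression model of the DNS residual, can be sketched generically. Everything below (the toy physics closure, the feature set, the random-forest choice) is an assumption standing in for the PIEP model and its neighbor-configuration inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def physics_force(features):
    """Stand-in for the physics-based (pairwise) force prediction."""
    return features[:, 0] * 0.8

X = np.random.rand(2000, 5)             # e.g. Re, volume fraction, neighbor geometry...
F_dns = X[:, 0] + 0.3 * X[:, 1] * X[:, 2] + 0.05 * np.random.randn(2000)

# train the regressor on what the physics model misses
residual = F_dns - physics_force(X)
correction = RandomForestRegressor(n_estimators=100).fit(X, residual)

def hybrid_force(features):
    """Physics prediction plus learned data-driven correction."""
    return physics_force(features) + correction.predict(features)
```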
Parallel Adaptive Simulation of Detonation Waves Using a Weighted Essentially Non-Oscillatory Scheme
NASA Astrophysics Data System (ADS)
McMahon, Sean
The purpose of this thesis was to develop a code that could be used to develop a better understanding of the physics of detonation waves. First, a detonation was simulated in one dimension using ZND theory. Then, using the 1D solution as an initial condition, a detonation was simulated in two dimensions using a weighted essentially non-oscillatory scheme on an adaptive mesh, with the smallest length scales being equal to 2-3 flamelet lengths. The code development in linking Chemkin for chemical kinetics to the adaptive mesh refinement flow solver was completed. The detonation evolved in a way that qualitatively matched the experimental observations; however, the simulation was unable to progress past the formation of the triple point.
NASA Astrophysics Data System (ADS)
Alexandrou, Constantia; Constantinou, Martha; Hadjiyiannakou, Kyriakos; Jansen, Karl; Kallidonis, Christos; Koutsou, Giannis; Vaquero Avilés-Casco, Alejandro
2018-03-01
We present results on the isovector and isoscalar nucleon axial form factors including disconnected contributions, using an ensemble of Nf = 2 twisted-mass clover-improved Wilson fermions simulated with approximately the physical value of the pion mass. The light disconnected quark loops are computed using exact deflation, while the strange and the charm quark loops are evaluated using the truncated solver method. Techniques such as the summation and the two-state fits have been employed to access ground-state dominance.
Transient Simulation of the Multi-SERTTA Experiment with MAMMOTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortensi, Javier; Baker, Benjamin; Wang, Yaqi
This work details the MAMMOTH reactor physics simulations of the Static Environment Rodlet Transient Test Apparatus (SERTTA) conducted at Idaho National Laboratory in FY-2017. TREAT static-environment experiment vehicles are being developed to enable transient testing of Pressurized Water Reactor (PWR) type fuel specimens, including fuel concepts with enhanced accident tolerance (Accident Tolerant Fuels, ATF). The MAMMOTH simulations include point reactor kinetics as well as spatial dynamics for a temperature-limited transient. The strongly coupled multi-physics solutions of the neutron flux and temperature fields are second order accurate in both the spatial and temporal domains. MAMMOTH produces pellet stack powers that are within 1.5% of the Monte Carlo reference solutions. Some discrepancies between the MCNP model used in the design of the flux collars and the Serpent/MAMMOTH models lead to higher power and energy deposition values in Multi-SERTTA unit 1. The TREAT core results compare well with the safety case computed with point reactor kinetics in RELAP5-3D. The reactor period is 44 msec, which corresponds to a reactivity insertion of 2.685% Δk/k. The peak core power in the spatial dynamics simulation is 431 MW, which the point kinetics model over-predicts by 12%. The pulse width at half the maximum power is 0.177 sec. Subtle transient effects are apparent at the beginning of the insertion in the experimental samples due to the control rod removal. Additional differences due to transient effects are observed in the sample powers and enthalpy. The time dependence of the power coupling factor (PCF) is calculated for the various fuel stacks of the Multi-SERTTA vehicle. Sample temperatures in excess of 3100 K, the melting point of UO2, are computed with the adiabatic heat transfer model. The planned shaped transient might introduce additional effects that cannot be predicted with PRK models. Future modeling will focus on the shaped transient by improving the control rod models in MAMMOTH and adding the BISON thermo-elastic models and thermal-fluids heat transfer.
An ARM data-oriented diagnostics package to evaluate the climate model simulation
NASA Astrophysics Data System (ADS)
Zhang, C.; Xie, S.
2016-12-01
A set of diagnostics that utilize long-term, high-frequency measurements from the DOE Atmospheric Radiation Measurement (ARM) program is developed for evaluating the regional simulation of clouds, radiation and precipitation in climate models. The diagnostic results are computed and visualized automatically in a Python-based package that aims to serve as an easy entry point for evaluating climate simulations using the ARM data, as well as the CMIP5 multi-model simulations. Basic performance metrics are computed to measure the accuracy of the mean state and variability of the simulated regional climate. The evaluated physical quantities include vertical profiles of clouds, temperature, relative humidity, cloud liquid water path, total column water vapor, precipitation, sensible and latent heat fluxes, radiative fluxes, aerosol and cloud microphysical properties. Process-oriented diagnostics focusing on individual cloud- and precipitation-related phenomena are developed for the evaluation and development of specific model physical parameterizations. Application of the ARM diagnostics package will be presented in the AGU session. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344; IM release number: LLNL-ABS-698645.
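The kind of basic performance metric such a package computes can be sketched in a few lines; the bias/RMSE/correlation trio below and the synthetic annual cycle are illustrative assumptions, not the package's actual metric list.

```python
import numpy as np

def metrics(model, obs):
    """Basic mean-state metrics for a simulated series against observations."""
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    corr = np.corrcoef(model, obs)[0, 1]
    return {"bias": bias, "rmse": rmse, "corr": corr}

obs = 10 + 5 * np.sin(np.linspace(0, 2 * np.pi, 12))   # e.g. an observed annual cycle
model = obs + np.random.randn(12) + 1.5                # a biased model series
print(metrics(model, obs))
```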
Kinetic Simulations of Type II Radio Burst Emission Processes
NASA Astrophysics Data System (ADS)
Ganse, U.; Spanier, F. A.; Vainio, R. O.
2011-12-01
The fundamental emission process of Type II Radio Bursts has been under discussion for many decades. While analytic deliberations point to three-wave interaction as the source of fundamental and harmonic radio emissions, sparse in-situ observational data and high computational demands for kinetic simulations have not allowed a definite conclusion to be reached. A popular model puts the radio emission into the foreshock region of a coronal mass ejection's shock front, where shock drift acceleration can create electron beam populations in the otherwise quiescent foreshock plasma. Beam-driven instabilities are then assumed to create waves, forming the starting point of three-wave interaction processes. Using our kinetic particle-in-cell code, we have studied a number of emission scenarios based on electron beam populations in a CME foreshock, with focus on wave-interaction microphysics on kinetic scales. The self-consistent, fully kinetic simulations with a fully physical mass ratio show fundamental and harmonic emission of transverse electromagnetic waves and allow for detailed statistical analysis of all contributing wave modes and their couplings.
NASA Astrophysics Data System (ADS)
Fan, Zuhui
2000-01-01
The linear bias of the dark halos from a model under the Zeldovich approximation is derived and compared with the fitting formula of simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description for the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty of the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for the large-scale structure formation in the universe.
Roussis; Fitzgerald
2000-04-01
The coupling of gas chromatographic simulated distillation with mass spectrometry for the determination of the distillation profiles of crude oils is reported. The method provides the boiling point distributions of both weight and volume percent amounts. The weight percent distribution is obtained from the measured total ion current signal. The total ion current signal is converted to weight percent amount by calibration with a reference crude oil of a known distillation profile. Knowledge of the chemical composition of the crude oil across the boiling range permits the determination of the volume percent distribution. The long-term repeatability is equivalent to or better than the short-term repeatability of the currently available American Society for Testing and Materials (ASTM) gas chromatographic method for simulated distillation. Results obtained by the mass spectrometric method are in very good agreement with results obtained by conventional methods of physical distillation. The compositional information supplied by the method can be used to extensively characterize crude oils.
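The conversion from a total-ion-current trace to a boiling-point distribution can be sketched as a calibration-and-cumulation step. The retention-time/boiling-point calibration pairs below are illustrative placeholders, not ASTM values, and the surrogate TIC trace stands in for measured data.

```python
import numpy as np

t = np.linspace(0, 30, 600)                        # retention time, min
tic = np.exp(-0.5 * ((t - 12) / 5) ** 2)           # surrogate TIC signal

cal_rt = np.array([2, 6, 10, 15, 20, 25, 30])      # calibration retention times, min
cal_bp = np.array([36, 126, 216, 287, 344, 391, 431])  # boiling points, deg C

bp = np.interp(t, cal_rt, cal_bp)                  # retention time -> boiling point
wt_pct = 100 * np.cumsum(tic) / tic.sum()          # cumulative weight percent recovered

# e.g. the temperature at which 50 wt% of the sample is recovered:
t50 = np.interp(50.0, wt_pct, bp)
```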
Hu, Yipeng; Morgan, Dominic; Ahmed, Hashim Uddin; Pendsé, Doug; Sahu, Mahua; Allen, Clare; Emberton, Mark; Hawkes, David; Barratt, Dean
2008-01-01
A method is described for generating a patient-specific, statistical motion model (SMM) of the prostate gland. Finite element analysis (FEA) is used to simulate the motion of the gland using an ultrasound-based 3D FE model over a range of plausible boundary conditions and soft-tissue properties. By applying principal component analysis to the displacements of the FE mesh node points inside the gland, the simulated deformations are then used as training data to construct the SMM. The SMM is used to both predict the displacement field over the whole gland and constrain a deformable surface registration algorithm, given only a small number of target points on the surface of the deformed gland. Using 3D transrectal ultrasound images of the prostates of five patients, acquired before and after imposing a physical deformation, to evaluate the accuracy of predicted landmark displacements, the mean target registration error was found to be less than 1.9 mm.
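The PCA step at the heart of the statistical motion model can be sketched directly on stacked node displacements. The array shapes and the 95% variance cutoff below are assumptions for illustration.

```python
import numpy as np

# each row: one FEA simulation's stacked (dx, dy, dz) displacements for all nodes
n_sims, n_nodes = 200, 3000
D = np.random.randn(n_sims, 3 * n_nodes)        # placeholder training deformations

mean = D.mean(axis=0)
U, S, Vt = np.linalg.svd(D - mean, full_matrices=False)
var = S**2 / (n_sims - 1)
k = np.searchsorted(np.cumsum(var) / var.sum(), 0.95) + 1   # keep 95% of variance
modes = Vt[:k]                                  # principal deformation modes

def predict_displacement(weights):
    """Displacement field for a point in the low-dimensional SMM space."""
    return mean + weights @ modes

sample = predict_displacement(np.sqrt(var[:k]) * np.random.randn(k))
```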
NASA Astrophysics Data System (ADS)
Gómez, Eudoxio Ramos; Zenit, Roberto; Rivera, Carlos González; Trápaga, Gerardo; Ramírez-Argáez, Marco A.
2013-04-01
In this work, a 3D numerical simulation using a Euler-Euler-based model implemented into a commercial CFD code was used to simulate fluid flow and turbulence structure in a water physical model of an aluminum ladle equipped with an impeller for degassing treatment. The effect of critical process parameters such as rotor speed, gas flow rate, and the point of gas injection (conventional injection through the shaft vs a novel injection through the bottom of the ladle) on the fluid flow and vortex formation was analyzed with this model. The commercial CFD code PHOENICS 3.4 was used to solve all conservation equations governing the process for this two-phase fluid flow system. The mathematical model was reasonably well validated against experimentally measured liquid velocity and vortex sizes in a water physical model built specifically for this investigation. From the results, it was concluded that the angular speed of the impeller is the most important parameter in promoting better stirred baths and creating smaller and better distributed bubbles in the liquid. The pumping effect of the impeller is increased as the impeller rotation speed increases. Gas flow rate is detrimental to bath stirring and diminishes the pumping effect of the impeller. Finally, although the injection point was the least significant variable, it was found that the "novel" injection improves stirring in the ladle.
Relationship Between Optimal Gain and Coherence Zone in Flight Simulation
NASA Technical Reports Server (NTRS)
Gracio, Bruno Jorge Correia; Pais, Ana Rita Valente; van Paassen, M. M.; Mulder, Max; Kelly, Lon C.; Houck, Jacob A.
2011-01-01
In motion simulation the inertial information generated by the motion platform is most of the time different from the visual information in the simulator displays. This occurs due to the physical limits of the motion platform. However, for small motions that are within the physical limits of the motion platform, one-to-one motion, i.e. visual information equal to inertial information, is possible. It has been shown in previous studies that one-to-one motion is often judged as too strong, causing researchers to lower the inertial amplitude. When trying to measure the optimal inertial gain for a visual amplitude, we found a zone of optimal gains instead of a single value. Such a result seems related to the coherence zones that have been measured in flight simulation studies. However, the optimal gain results were never directly related to the coherence zones. In this study we investigated whether the optimal gain measurements are the same as the coherence zone measurements. We also try to infer whether the results obtained from the two measurements can be used to differentiate between simulators with different configurations. An experiment was conducted at the NASA Langley Research Center which used both the Cockpit Motion Facility and the Visual Motion Simulator. The results show that the inertial gains obtained with the optimal gain measurements are different from the ones obtained with the coherence zone measurements. The optimal gain is within the coherence zone. The point of mean optimal gain was lower and further away from the one-to-one line than the point of mean coherence. The zone width obtained for the coherence zone measurements was dependent on the visual amplitude and frequency. For the optimal gain, the zone width remained constant when the visual amplitude and frequency were varied. We found no effect of the simulator configuration in either the coherence zone or the optimal gain measurements.
Simulated annealing model of acupuncture
NASA Astrophysics Data System (ADS)
Shang, Charles; Szu, Harold
2015-05-01
The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can cause perturbation of a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) compared to perturbation at non-singular points (placebo control points). Such difference diminishes as the number of perturbed points increases due to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: 1. Properly chosen single-acupoint treatment for certain disorders can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the result can be highly repeatable if the patients are relatively healthy and young, but is usually mixed if the patients are old, frail and have multiple disorders at the same time, as the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity and patient age. This is the first biological-physical model of acupuncture which can predict and guide clinical acupuncture research.
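Since the model leans on the simulated annealing analogy, a classic annealing loop is worth sketching; the energy landscape, cooling schedule, and starting excitation below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def energy(x):
    """A multi-minima landscape standing in for the 'disorder' state space."""
    return 0.1 * x**2 + np.sin(3 * x)

x, T = 8.0, 5.0                      # strong initial excitation ~ high temperature
for step in range(5000):
    x_new = x + rng.normal(0, 1)
    dE = energy(x_new) - energy(x)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        x = x_new                    # accept downhill moves, or uphill thermally
    T *= 0.999                       # gradual relaxation of the perturbation
print(f"settled near x={x:.2f}, E={energy(x):.2f}")
```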
Equilibration of experimentally determined protein structures for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Walton, Emily B.; Van Vliet, Krystyn J.
2006-12-01
Preceding molecular dynamics simulations of biomolecular interactions, the molecule of interest is often equilibrated with respect to an initial configuration. This so-called equilibration stage is required because the input structure is typically not within the equilibrium phase space of the simulation conditions, particularly in systems as complex as proteins, which can lead to artifactual trajectories of protein dynamics. The time at which nonequilibrium effects from the initial configuration are minimized—what we will call the equilibration time—marks the beginning of equilibrium phase-space exploration. Note that the identification of this time does not imply exploration of the entire equilibrium phase space. We have found that current equilibration methodologies contain ambiguities that lead to uncertainty in determining the end of the equilibration stage of the trajectory. This results in equilibration times that are either too long, resulting in wasted computational resources, or too short, resulting in the simulation of molecular trajectories that do not accurately represent the physical system. We outline and demonstrate a protocol for identifying the equilibration time that is based on the physical model of Normal Mode Analysis. We attain the computational efficiency required of large-protein simulations via a stretched exponential approximation that enables an analytically tractable and physically meaningful form of the root-mean-square deviation of atoms comprising the protein. We find that the fitting parameters (which correspond to physical properties of the protein) fluctuate initially but then stabilize for increased simulation time, independently of the simulation duration or sampling frequency. We define the end of the equilibration stage—and thus the equilibration time—as the point in the simulation when these parameters attain constant values. Compared to existing methods, our approach provides the objective identification of the time at which the simulated biomolecule has entered an energetic basin. For the representative protein considered, bovine pancreatic trypsin inhibitor, existing methods indicate a range of 0.2-10ns of simulation until a local minimum is attained. Our approach identifies a substantially narrower range of 4.5-5.5ns , which will lead to a much more objective choice of equilibration time.
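The protocol's core step, fitting a stretched exponential to RMSD over growing analysis windows and watching the fitted parameters stabilize, can be sketched as follows; the RMSD data and parameter values here are synthetic, not results from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, A, tau, beta):
    """Stretched-exponential approach of RMSD toward a plateau A."""
    return A * (1.0 - np.exp(-(t / tau) ** beta))

t = np.linspace(0.01, 10, 500)                     # simulation time, ns
rmsd = stretched_exp(t, 1.8, 1.2, 0.6) + 0.03 * np.random.randn(t.size)

params = []
for end in range(100, 501, 100):                   # growing analysis windows
    p, _ = curve_fit(stretched_exp, t[:end], rmsd[:end], p0=(1, 1, 0.5))
    params.append(p)
# the window at which A, tau, beta stop drifting marks the equilibration time
```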
Perspective: Memcomputing: Leveraging memory and physics to compute efficiently
NASA Astrophysics Data System (ADS)
Di Ventra, Massimiliano; Traversa, Fabio L.
2018-05-01
It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization that may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to compute efficiently a wide range of hard problems. In this perspective, we discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing. In particular, we focus on digital memcomputing machines (DMMs) that are scalable. DMMs can be realized with non-linear dynamical systems with memory. The latter property allows the realization of a new type of Boolean logic, one that is self-organizing. Self-organizing logic gates are "terminal-agnostic," namely, they do not distinguish between the input and output terminals. When appropriately assembled to represent a given combinatorial/optimization problem, the corresponding self-organizing circuit converges to the equilibrium points that express the solutions of the problem at hand. In doing so, DMMs take advantage of the long-range order that develops during the transient dynamics. This collective dynamical behavior, reminiscent of a phase transition, or even the "edge of chaos," is mediated by families of classical trajectories (instantons) that connect critical points of increasing stability in the system's phase space. The topological character of the solution search renders DMMs robust against noise and structural disorder. Since DMMs are non-quantum systems described by ordinary differential equations, not only can they be built in hardware with the available technology, they can also be simulated efficiently on modern classical computers. As an example, we will show the polynomial-time solution of the subset-sum problem for the worst cases, and point to other types of hard problems where simulations of DMMs' equations of motion on classical computers have already demonstrated substantial advantages over traditional approaches. We conclude this article by outlining further directions of study.
The Material Point Method and Simulation of Wave Propagation in Heterogeneous Media
NASA Astrophysics Data System (ADS)
Bardenhagen, S. G.; Greening, D. R.; Roessig, K. M.
2004-07-01
The mechanical response of polycrystalline materials, particularly under shock loading, is of significant interest in a variety of munitions and industrial applications. Homogeneous continuum models have been developed to describe material response, including Equation of State, strength, and reactive burn models. These models provide good estimates of bulk material response. However, there is little connection to underlying physics and, consequently, they cannot be applied far from their calibrated regime with confidence. Both explosives and metals have important structure at the (energetic or single crystal) grain scale. The anisotropic properties of the individual grains and the presence of interfaces result in the localization of energy during deformation. In explosives energy localization can lead to initiation under weak shock loading, and in metals to material ejecta under strong shock loading. To develop accurate, quantitative and predictive models it is imperative to develop a sound physical understanding of the grain-scale material response. Numerical simulations are performed to gain insight into grain-scale material response. The Generalized Interpolation Material Point Method family of numerical algorithms, selected for their robust treatment of large deformation problems and convenient framework for implementing material interface models, are reviewed. A three-dimensional simulation of wave propagation through a granular material indicates the scale and complexity of a representative grain-scale computation. Verification and validation calculations on model bimaterial systems indicate the minimum numerical algorithm complexity required for accurate simulation of wave propagation across material interfaces and demonstrate the importance of interfacial decohesion. Preliminary results are presented which predict energy localization at the grain boundary in a metallic bicrystal.
Time-dependent spectral renormalization method
NASA Astrophysics Data System (ADS)
Cole, Justin T.; Musslimani, Ziad H.
2017-11-01
The spectral renormalization method was introduced by Ablowitz and Musslimani (2005) as an effective way to numerically compute (time-independent) bound states for certain nonlinear boundary value problems. In this paper, we extend those ideas to the time domain and introduce a time-dependent spectral renormalization method as a numerical means to simulate linear and nonlinear evolution equations. The essence of the method is to convert the underlying evolution equation from its partial or ordinary differential form (using Duhamel's principle) into an integral equation. The solution sought is then viewed as a fixed point in both space and time. The resulting integral equation is then numerically solved using a simple renormalized fixed-point iteration method. Convergence is achieved by introducing a time-dependent renormalization factor which is numerically computed from the physical properties of the governing evolution equation. The proposed method has the ability to incorporate physics into the simulations in the form of conservation laws or dissipation rates. This novel scheme is implemented on benchmark evolution equations: the classical nonlinear Schrödinger (NLS), the integrable PT-symmetric nonlocal NLS, and the viscous Burgers' equations, each a prototypical example of a conservative or dissipative dynamical system. Numerical implementation and algorithm performance are also discussed.
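The iteration described above can be sketched for the focusing NLS, i u_t + u_xx/2 + |u|^2 u = 0. The grid, trapezoidal quadrature, and tolerances below are our assumptions; the renormalization factor is chosen to enforce conservation of the power, in the spirit of the method.

```python
import numpy as np

# Hedged sketch of a renormalized fixed-point step for the focusing NLS,
# i u_t + u_xx/2 + |u|^2 u = 0, in Duhamel (integral) form.
N, Lx = 256, 40.0
x = np.linspace(-Lx / 2, Lx / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=Lx / N)
u = 1.0 / np.cosh(x)                        # exact bright-soliton profile
dt = 1e-3
power0 = np.trapz(np.abs(u) ** 2, x)        # conserved quantity (power)

def linear_flow(v, t):
    # exact linear propagator exp(-i k^2 t / 2), applied in Fourier space
    return np.fft.ifft(np.exp(-0.5j * k ** 2 * t) * np.fft.fft(v))

def duhamel_step(u0):
    # fixed point of u = e^{Lt} u0 + int_0^t e^{L(t-s)} i|u|^2 u ds
    u1 = linear_flow(u0, dt)
    f0 = linear_flow(1j * np.abs(u0) ** 2 * u0, dt)
    for _ in range(50):
        f1 = 1j * np.abs(u1) ** 2 * u1
        cand = linear_flow(u0, dt) + 0.5 * dt * (f0 + f1)
        R = np.sqrt(power0 / np.trapz(np.abs(cand) ** 2, x))  # renormalization
        u1 = R * cand
        if abs(R - 1.0) < 1e-13:
            break
    return u1

for _ in range(1000):
    u = duhamel_step(u)
print("relative power drift:", np.trapz(np.abs(u) ** 2, x) / power0 - 1.0)
```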
The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.
2012-04-01
Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies creates a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
Automated Extraction of Flow Features
NASA Technical Reports Server (NTRS)
Dorney, Suzanne (Technical Monitor); Haimes, Robert
2005-01-01
Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is detecting features such as shocks, re-circulation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.
Automated Extraction of Flow Features
NASA Technical Reports Server (NTRS)
Dorney, Suzanne (Technical Monitor); Haimes, Robert
2004-01-01
Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is detecting features such as shocks, recirculation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.
Simulation and fitting of complex reaction network TPR: The key is the objective function
Savara, Aditya Ashi
2016-07-07
In this research, a method has been developed for finding improved fits during simulation and fitting of data from complex reaction network temperature programmed reactions (CRN-TPR). It was found that simulation and fitting of CRN-TPR presents additional challenges relative to simulation and fitting of simpler TPR systems. The method used here can enable checking the plausibility of proposed chemical mechanisms and kinetic models. The most important finding was that, when choosing an objective function, use of an objective function based on integrated production provides more utility in finding improved fits when compared to an objective function based on the rate of production. The response surface produced by using the integrated production is monotonic, suppresses effects from experimental noise, requires fewer points to capture the response behavior, and can be simulated numerically with smaller errors. For CRN-TPR, there is increased importance (relative to simple reaction network TPR) in resolving peaks prior to fitting, as well as in weighting of experimental data points. Using an implicit ordinary differential equation solver was found to be inadequate for simulating CRN-TPR. Lastly, the method employed here was capable of attaining improved fits in simulation and fitting of CRN-TPR when starting with a postulated mechanism and physically realistic initial guesses for the kinetic parameters.
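A stripped-down illustration of the finding about objective functions: for a synthetic first-order TPR peak, the sum of squared errors can be evaluated on the rate of production or on the integrated production, the latter giving the smoother, noise-suppressed response surface described above. All kinetic parameters below are invented for the demonstration, not taken from the paper.

```python
import numpy as np

# Synthetic first-order TPR peak (invented parameters): compare an objective
# on the rate of production with one on the integrated production.
T = np.linspace(300.0, 800.0, 501)           # 1 K steps, ramp beta = 1 K/s
rng = np.random.default_rng(0)

def rate(Ea, nu=1e13, beta=1.0, R=8.314):
    theta, out = 1.0, []
    for Ti in T:
        kk = nu * np.exp(-Ea / (R * Ti))
        out.append(kk * theta)
        theta = max(theta - kk * theta * (T[1] - T[0]) / beta, 0.0)
    return np.array(out)

data = rate(120e3) + rng.normal(0.0, 0.002, T.size)   # noisy "experiment"

def sse_rate(Ea):        # objective on the rate of production
    return np.sum((rate(Ea) - data) ** 2)

def sse_integrated(Ea):  # objective on integrated (cumulative) production
    return np.sum((np.cumsum(rate(Ea)) - np.cumsum(data)) ** 2)

for Ea in (110e3, 115e3, 120e3, 125e3, 130e3):
    print(Ea, sse_rate(Ea), sse_integrated(Ea))
```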
A Novel Approach to Visualizing Dark Matter Simulations.
Kaehler, R; Hahn, O; Abel, T
2012-12-01
In the last decades cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques, common in SPH (Smoothed Particle Hydrodynamics)-codes. This paper proposes three GPU-assisted rendering approaches, based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular they preserve caustics, regions of high density that emerge, when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.
NASA Astrophysics Data System (ADS)
Gerszewski, Daniel James
Physical simulation has become an essential tool in computer animation. As the use of visual effects increases, the need for simulating real-world materials increases. In this dissertation, we consider three problems in physics-based animation: large-scale splashing liquids, elastoplastic material simulation, and dimensionality reduction techniques for fluid simulation. Fluid simulation has been one of the greatest successes of physics-based animation, generating hundreds of research papers and a great many special effects over the last fifteen years. However, the animation of large-scale, splashing liquids remains challenging. We show that a novel combination of unilateral incompressibility, mass-full FLIP, and blurred boundaries is extremely well-suited to the animation of large-scale, violent, splashing liquids. Materials that incorporate both plastic and elastic deformations, also referred to as elastoplastic materials, are frequently encountered in everyday life. Methods for animating such common real-world materials are useful for effects practitioners and have been successfully employed in films. We describe a point-based method for animating elastoplastic materials. Our primary contribution is a simple method for computing the deformation gradient for each particle in the simulation. Given the deformation gradient, we can apply arbitrary constitutive models and compute the resulting elastic forces. Our method has two primary advantages: we do not store or compare to an initial rest configuration and we work directly with the deformation gradient. The first advantage avoids poor numerical conditioning and the second naturally leads to a multiplicative model of deformation appropriate for finite deformations. One of the most significant drawbacks of physics-based animation is that ever-higher fidelity leads to an explosion in the number of degrees of freedom. This problem leads us to the consideration of dimensionality reduction techniques. We present several enhancements to model-reduced fluid simulation that allow improved simulation bases and two-way solid-fluid coupling. Specifically, we present a basis enrichment scheme that allows us to combine data-driven or artistically derived bases with more general analytic bases derived from Laplacian Eigenfunctions. Additionally, we handle two-way solid-fluid coupling in a time-splitting fashion---we alternately timestep the fluid and rigid body simulators, while taking into account the effects of the fluid on the rigid bodies and vice versa. We employ the vortex panel method to handle solid-fluid coupling and use dynamic pressure to compute the effect of the fluid on rigid bodies. Taken together, these contributions have advanced the state of the art in physics-based animation and are practical enough to be used in production pipelines.
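The per-particle deformation gradient mentioned above can be estimated, in one standard formulation, by least squares over neighbor offsets. The snippet below is our generic illustration of that building block, not the dissertation's actual code.

```python
import numpy as np

# Least-squares deformation gradient F minimizing sum_j |dx_j - F dX_j|^2,
# where dX_j are reference-frame and dx_j current-frame neighbor offsets.
def deformation_gradient(X_nbrs, x_nbrs, X0, x0):
    dX = X_nbrs - X0                  # (m, 3) reference offsets
    dx = x_nbrs - x0                  # (m, 3) current offsets
    A = dx.T @ dX                     # 3x3 moment matrices
    B = dX.T @ dX
    return A @ np.linalg.inv(B)       # F = A B^{-1}

# usage: recover a known simple-shear deformation of a small neighborhood
X = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]])
F_true = np.array([[1.0, 0.5, 0], [0, 1, 0], [0, 0, 1]])
x = X @ F_true.T
print(deformation_gradient(X, x, np.zeros(3), np.zeros(3)))  # recovers F_true
```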
A General Simulation Method for Multiple Bodies in Proximate Flight
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
2003-01-01
Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.
NASA Technical Reports Server (NTRS)
Wacker, John F.
1989-01-01
The sorption of Ne, Ar, Kr, and Xe was studied in carbon black, acridine carbon, and diamond in an attempt to understand the origin of trapped noble gases in meteorites. The results support a model in which gases are physically adsorbed on interior surfaces formed by a pore labyrinth within amorphous carbons. The data show that: (1) the adsorption/desorption times are controlled by choke points that restrict the movement of noble gas atoms within the pore labyrinth, and (2) the physical adsorption controls the temperature behavior and elemental fractionation patterns.
Color image display and visual perception in computer graphics
NASA Astrophysics Data System (ADS)
Bouatouch, Kadi; Tellier, Pierre
1996-03-01
This paper puts emphasis on two points that are crucial when the aim is physically based lighting simulation. The first is the spectral approach, which treats emitted, reflected, diffused, and transmitted light as wavelength dependent. The second concerns the steps required to convert the radiance arriving at the viewpoint through the pixels of a screen into RGB components.
United States Air Force Graduate Student Research Program. 1989 Program Technical Report. Volume 1
1989-12-01
Analysis is required to supplement the experimental observations, which requires the formulation of a realistic model of the physical problem... RECOMMENDATION: a. From our point of view, the research team considers the NASTRAN model correct due to the vibrational frequencies, but we are still... structure of the program was understood, attempts were made to change the model from a thunderstorm simulation...
The evolving energy budget of accretionary wedges
NASA Astrophysics Data System (ADS)
McBeck, Jessica; Cooke, Michele; Maillot, Bertrand; Souloumiac, Pauline
2017-04-01
The energy budget of evolving accretionary systems reveals how deformational processes partition energy as faults slip, topography uplifts, and layer-parallel shortening produces distributed off-fault deformation. The energy budget provides a quantitative framework for evaluating the energetic contribution or consumption of diverse deformation mechanisms. We investigate energy partitioning in evolving accretionary prisms by synthesizing data from physical sand accretion experiments and numerical accretion simulations. We incorporate incremental strain fields and cumulative force measurements from two suites of experiments to design numerical simulations that represent accretionary wedges with stronger and weaker detachment faults. One suite of the physical experiments includes a basal glass bead layer and the other does not. Two physical experiments within each suite implement different boundary conditions (stable base versus moving base configuration). Synthesizing observations from the differing base configurations reduces the influence of sidewall friction because the force vector produced by sidewall friction points in opposite directions depending on whether the base is fixed or moving. With the numerical simulations, we calculate the energy budget at two stages of accretion: at the maximum force preceding the development of the first thrust pair, and at the minimum force following the development of the pair. To identify the appropriate combination of material and fault properties to apply in the simulations, we systematically vary the Young's modulus and the fault static and dynamic friction coefficients in numerical accretion simulations, and identify the set of parameters that minimizes the misfit between the normal force measured on the physical backwall and the numerically simulated force. Following this derivation of the appropriate material and fault properties, we calculate the components of the work budget in the numerical simulations and in the simulated increments of the physical experiments. The work budget components of the physical experiments are determined from backwall force measurements and incremental velocity fields calculated via digital image correlation. Comparison of the energy budget preceding and following the development of the first thrust pair quantifies the tradeoff between work done in distributed deformation and work expended in frictional slip due to the development of the first backthrust and forethrust. In both the numerical and physical experiments, after the pair develops, internal work decreases at the expense of frictional work, which increases. Despite the increase in frictional work, the total external work of the system decreases, revealing that accretion faulting leads to gains in efficiency. Comparison of the energy budget of the accretion experiments and simulations with the strong and weak detachments indicates that when the detachment is strong, the total energy consumed in frictional sliding and internal deformation is larger than when the detachment is relatively weak.
Gelb, Lev D; Chakraborty, Somendra Nath
2011-12-14
The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase. © 2011 American Institute of Physics
Analyses of Simulated Reconnection-Driven Solar Polar Jets
NASA Astrophysics Data System (ADS)
Roberts, M. A.; Uritsky, V. M.; Karpen, J. T.; DeVore, C. R.
2014-12-01
Solar polar jets are observed to originate in regions within the open field of solar coronal holes. These so-called "anemone" regions are generally accepted to be regions of opposite polarity, and are associated with an embedded dipole topology, consisting of a fan separatrix and a spine line emanating from a null point occurring at the top of the dome-shaped fan surface. Previous analysis of these jets (Pariat et al. 2009, 2010), modeled using the Adaptively Refined Magnetohydrodynamics Solver (ARMS), has supported the claim that magnetic reconnection across current sheets formed at the null point between the highly twisted closed field of the dipole and the open field lines surrounding it releases the energy necessary to drive these jets. However, these initial simulations assumed a "static" environment for the jets, neglecting effects due to gravity, the solar wind, and the expanding spherical geometry. A new set of ARMS simulations taking into account these additional physical processes was recently performed. Initial results are qualitatively consistent with the earlier Cartesian studies, demonstrating the robustness of the underlying ideal and resistive mechanisms. We focus on density and velocity fluctuations within a narrow radial slit aligned with the direction of the spine of the jet, as well as other physical properties, in order to identify and refine their signatures in the lower heliosphere. These refined signatures can be used as parameters by which plasma processes initiated by these jets may be identified in situ by future missions such as Solar Orbiter and Solar Probe Plus.
An interior-point method-based solver for simulation of aircraft parts riveting
NASA Astrophysics Data System (ADS)
Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael
2018-05-01
The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound of O(√n log(n/ε)) on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations, because the associated matrix is ill conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.
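The paper's preconditioner is tailored to the riveting contact problem and is not reproduced here; the sketch below only illustrates the generic effect of preconditioning an ill-conditioned symmetric positive-definite solve, using a simple Jacobi preconditioner on an invented stiffness matrix.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Ill-conditioned SPD system (invented data) solved with and without a
# Jacobi preconditioner; we count conjugate-gradient iterations.
n = 2000
d = np.logspace(1, 8, n)                        # widely varying diagonal
A = diags([-np.ones(n - 1), d, -np.ones(n - 1)], [-1, 0, 1], format="csr")
b = np.ones(n)
M = LinearOperator((n, n), matvec=lambda v: v / d)   # Jacobi: M ~ diag(A)^-1

iters = {"plain": 0, "jacobi": 0}
cg(A, b, maxiter=20000,
   callback=lambda xk: iters.__setitem__("plain", iters["plain"] + 1))
cg(A, b, M=M, maxiter=20000,
   callback=lambda xk: iters.__setitem__("jacobi", iters["jacobi"] + 1))
print(iters)   # the preconditioned solve needs far fewer iterations
```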
Detector Position Estimation for PET Scanners.
Pierce, Larry; Miyaoka, Robert; Lewellen, Tom; Alessio, Adam; Kinahan, Paul
2012-06-11
Physical positioning of scintillation crystal detector blocks in Positron Emission Tomography (PET) scanners is not always exact. We test a proof of concept methodology for the determination of the six degrees of freedom for detector block positioning errors by utilizing a rotating point source over stepped axial intervals. To test our method, we created computer simulations of seven Micro Crystal Element Scanner (MiCES) PET systems with randomized positioning errors. The computer simulations show that our positioning algorithm can estimate the positions of the block detectors to an average of one-seventh of the crystal pitch tangentially, and one-third of the crystal pitch axially. Virtual acquisitions of a point source grid and a distributed phantom show that our algorithm improves both the quantitative and qualitative accuracy of the reconstructed objects. We believe this estimation algorithm is a practical and accurate method for determining the spatial positions of scintillation detector blocks.
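Recovering six degrees of freedom from matched point sets is classically done with an SVD-based (Kabsch) rigid fit; the sketch below shows that generic building block on invented data, not the authors' estimation pipeline.

```python
import numpy as np

# Kabsch-style rigid registration: find R, t minimizing sum_i |R p_i + t - q_i|^2.
def rigid_fit(P, Q):
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Qc.T @ Pc)               # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])    # guard against reflections
    R = U @ D @ Vt
    return R, Q.mean(0) - R @ P.mean(0)

# usage: recover a known rotation about z plus a translation
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
P = np.random.default_rng(5).normal(size=(40, 3))
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_fit(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))
```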
NASA Astrophysics Data System (ADS)
Foffi, Giuseppe; Kahl, Gerhard
2010-03-01
Interest in colloidal physics has grown at an incredible pace over the past few decades. To a great extent this remarkable development is due to the fact that colloidal systems are highly relevant in everyday applications as well as in basic research. On the one hand, colloids are ubiquitous in our daily lives, and a deeper understanding of their physical properties is therefore highly relevant in applied areas ranging from biomedicine and food sciences to technology. On the other hand, a seemingly unlimited freedom in designing colloidal particles with desired properties, in combination with new, low-cost experimental techniques, makes them—compared to hard matter systems—considerably more attractive for a wide range of basic investigations. All these investigations are carried out in close cooperation between experimentalists, theoreticians and simulators, thereby reuniting, on a highly interdisciplinary level, physicists, chemists, and biologists. In an effort to give credit to some of these new developments in colloidal physics, two proposals for workshops were submitted independently to CECAM in the fall of 2008; both of them were approved and organized as consecutive events. This decision undoubtedly had many practical and organizational advantages. Furthermore, and more relevant from the scientific point of view, the organizers could welcome a total of 69 participants, presenting 42 oral and 21 poster contributions. We are proud to say that nearly all the colleagues we contacted at submission time accepted our invitation, and we are happy to report that the number of additional participants was rather high. Because both workshops took place within one week, quite a few participants originally registered for one of these meetings extended their participation to the other event as well. In total, 23 contributions have been submitted to this special issue, covering the main scientific topics addressed in these workshops. We consider this relatively high number of contributions an indicator that the topics presented at these workshops represent substantial scientific developments. The particular motivation to organize these two workshops came from the fact that experimental work in colloidal physics is advancing rapidly around the globe. In contrast, theoretical and simulation approaches to investigate the wide range of new and surprising physical phenomena of colloidal systems are lagging behind this experimental progress. This is all the more regrettable since theory and simulation might provide a more profound understanding of many phenomena in soft and bio-related physics, such as phase behaviour, self-assembly strategies, or rheological properties, to name but a few. Furthermore, this insight might help guide experiments in designing new colloid-based materials with desired properties. The declared aim of the two workshops was thus to bring together scientists who have recently contributed to new developments in colloidal physics and to share and discuss their latest innovations. While CECAM workshops traditionally bring together scientists from the theoretical and simulation communities, from the very beginning the organizers considered it indispensable to invite experimentalists.
And indeed, the organizers are happy to confirm that the participation of experimentalists, theoreticians, and simulators was highly fruitful and mutually inspiring: discussions between all communities helped clarify the possibilities and limitations of experiment, theory, and simulation. By thus uniting all forces, the workshops contributed to a deeper understanding of colloidal physics and helped identify future questions that might lead to applied problems of technological relevance. The first workshop, entitled 'Computer Simulation Approaches to Study Self-Assembly: From Patchy Nano-Colloids to Virus Capsids' (organized by Jonathan Doye—University of Oxford, Ard A Louis—University of Oxford and Athanassios Panagiotopoulos—Princeton University), focused on the remarkable ability of colloidal systems to self-organize into well-defined composite objects. New simulation techniques and theoretical approaches were presented and discussed that offer a deeper understanding of self-assembly phenomena in colloidal physics and may, eventually, uncover design rules for self-assembly. Particular emphasis was put on an emerging new class of colloidal particles, so-called patchy colloids. The second workshop, entitled 'New Trends in Simulating Colloids: From Models to Applications' (organized by Giuseppe Foffi—Ecole Polytechnique Fédérale de Lausanne, Gerhard Kahl—Vienna Technical University and Richard Vink—Georg-August-Universität Göttingen), focused on new methodological developments in theoretical and simulation approaches that provide a more profound insight into colloidal physics in general. A large variety of theoretical tools, ranging from different simulation techniques and classical density functional theory to efficient optimization techniques, were presented. For details about the tools presented in both workshops we refer the reader to the contributions of this special issue. The 'round table' discussion meetings were highly useful in providing an overview of yet unsolved problems and in pointing out directions for future work. From the phenomenological point of view, among these are the question of the relevance of hydrodynamic interactions, the problem of whether to treat solvents in an explicit or implicit way, and the relevance of multibody interactions, to name but a few. With respect to methods, it was agreed that future developments in dynamic Monte Carlo simulations and in rare-event and multiscale techniques are urgently required. The presence of the experimentalists was also of great help in focusing attention on the systems that will represent the scientific challenges of the coming years. It was interesting that while new materials like DNA-coated colloids or Janus and patchy particles are generating a lot of interest, more traditional systems, like colloidal glasses/gels and proteins, are far from being completely understood. The relevance of these two workshops was reflected by the general consensus that within a few years' time events with similar aims should be organized to discuss the progress that has been achieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
English, Shawn Allen; Nelson, Stacy Michelle; Briggs, Timothy
Presented is a model verification and validation effort using low-velocity impact (LVI) of carbon fiber reinforced polymer laminate experiments. A flat cylindrical indenter impacts the laminate with enough energy to produce delamination, matrix cracks and fiber breaks. Included in the experimental efforts are ultrasonic scans of the damage for qualitative validation of the models. However, the primary quantitative metrics of validation are the force time history measured through the instrumented indenter and the initial and final velocities. The simulations, which are run on Sandia's Sierra finite element codes, consist of all physics and material parameters of importance as determined by a sensitivity analysis conducted on the LVI simulation. A novel orthotropic damage and failure constitutive model that is capable of predicting progressive composite damage and failure is described in detail, and material properties are measured, estimated from micromechanics, or optimized through calibration. A thorough verification and calibration to the accompanying experiments are presented. Special emphasis is given to the four-point bend experiment. For all simulations of interest, the mesh and material behavior is verified through extensive convergence studies. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output. The result is a quantifiable confidence in material characterization and model physics when simulating this phenomenon in structures of interest.
Numerical simulation support to the ESA/THOR mission
NASA Astrophysics Data System (ADS)
Valentini, F.; Servidio, S.; Perri, S.; Perrone, D.; De Marco, R.; Marcucci, M. F.; Daniele, B.; Bruno, R.; Camporeale, E.
2016-12-01
THOR is a spacecraft concept currently undergoing study phase as a candidate for the next ESA medium size mission M4. THOR has been designed to solve the longstanding physical problems of particle heating and energization in turbulent plasmas. It will provide high resolution measurements of electromagnetic fields and particle distribution functions with unprecedented resolution, with the aim of exploring the so-called kinetic scales. We present the numerical simulation framework which is supporting the THOR mission during the study phase. The THOR team includes many scientists developing and running different simulation codes (Eulerian-Vlasov, Particle-In-Cell, Gyrokinetics, Two-fluid, MHD, etc.), addressing the physics of plasma turbulence, shocks, magnetic reconnection and so on. These numerical codes are being used during the study phase, mainly with the aim of addressing the following points: (i) to simulate the response of real particle instruments on board THOR, by employing an electrostatic analyser simulator which mimics the response of the CSW, IMS and TEA instruments to the particle velocity distributions of protons, alpha particles and electrons, as obtained from kinetic numerical simulations of plasma turbulence; (ii) to compare multi-spacecraft with single-spacecraft configurations in measuring current density, by making use of both numerical models of synthetic turbulence and real data from MMS spacecraft; (iii) to investigate the validity of the Taylor hypothesis in different configurations of plasma turbulence.
A fast simulation method for radiation maps using interpolation in a virtual environment.
Li, Meng-Kun; Liu, Yong-Kuo; Peng, Min-Jun; Xie, Chun-Li; Yang, Li-Qun
2018-05-10
In nuclear decommissioning, virtual simulation technology is a useful tool to achieve an effective work process by using virtual environments to represent the physical and logical scheme of a real decommissioning project. This technology is cost-saving and time-saving, with the capacity to develop various decommissioning scenarios and reduce the risk of retrofitting. The method utilises a radiation map in a virtual simulation as the basis for the assessment of exposure to a virtual human. In this paper, we propose a fast simulation method using a known radiation source. The method has a unique advantage over point kernel and Monte Carlo methods because it generates the radiation map using interpolation in a virtual environment. The simulation of the radiation map including the calculation and the visualisation were realised using UNITY and MATLAB. The feasibility of the proposed method was tested on a hypothetical case and the results obtained are discussed in this paper.
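A minimal sketch of the interpolation idea, assuming a synthetic inverse-square dose field sampled at scattered survey points; the real method works in a 3D virtual environment with a known source term, and all names and values below are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

# Build a dense dose-rate map from sparse samples of a synthetic 1/r^2 field.
rng = np.random.default_rng(0)
pts = rng.uniform(-5.0, 5.0, size=(200, 2))          # sparse survey positions
src = np.array([1.0, -2.0])                          # assumed source location
dose = 1.0 / np.maximum(np.sum((pts - src) ** 2, axis=1), 0.25)  # capped 1/r^2

gx, gy = np.mgrid[-5:5:200j, -5:5:200j]              # dense evaluation grid
dose_map = griddata(pts, dose, (gx, gy), method="linear")
print(np.nanmax(dose_map))                           # NaN outside convex hull
```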
Black holes as critical point of quantum phase transition.
Dvali, Gia; Gomez, Cesar
We reformulate the quantum black hole portrait in the language of modern condensed matter physics. We show that black holes can be understood as a graviton Bose-Einstein condensate at the critical point of a quantum phase transition, identical to what has been observed in systems of cold atoms. The Bogoliubov modes that become degenerate and nearly gapless at this point are the holographic quantum degrees of freedom responsible for the black hole entropy and the information storage. They have no (semi)classical counterparts and become inaccessible in this limit. These findings indicate a deep connection between the seemingly remote systems and suggest a new quantum foundation of holography. They also open an intriguing possibility of simulating black hole information processing in table-top labs.
Nutation and precession control of the High Energy Solar Physics (HESP) satellite
NASA Technical Reports Server (NTRS)
Jayaraman, C. P.; Robertson, B. P.
1993-01-01
The High Energy Solar Physics (HESP) spacecraft is an intermediate class satellite proposed by NASA to study solar high-energy phenomena during the next cycle of high solar activity in the 1998 to 2005 time frame. The HESP spacecraft is a spinning satellite which points to the sun with stringent pointing requirements. The natural dynamics of a spinning satellite includes an undesirable effect: nutation, which is due to the presence of disturbances and offsets of the spin axis from the angular momentum vector. The proposed Attitude Control System (ACS) attenuates nutation with reaction wheels. Precessing the spacecraft to track the sun in the north-south and east-west directions is accomplished with the use of torques from magnetic torquer bars. In this paper, the basic dynamics of a spinning spacecraft are derived, control algorithms to meet HESP science requirements are discussed and simulation results to demonstrate feasibility of the ACS concept are presented.
Study on the CFD simulation of refrigerated container
NASA Astrophysics Data System (ADS)
Arif Budiyanto, Muhammad; Shinoda, Takeshi; Nasruddin
2017-10-01
The objective of this study is to perform a Computational Fluid Dynamics (CFD) simulation of a refrigerated container in a container port. A refrigerated container is a thermal cargo container constructed with insulated walls to carry perishable goods. The CFD simulation was carried out on a cross-section of the container walls to predict the surface temperatures of the refrigerated container and to estimate its cooling load. The simulation model is based on the solution of the partial differential equations governing the fluid flow and heat transfer processes. The physical heat-transfer processes considered in this simulation are solar radiation from the sun, heat conduction in the container walls, heat convection at the container surfaces, and thermal radiation among the solid surfaces. The simulation model was validated using surface temperatures at center points on each container wall obtained from measurement experiments in a previous study. The results show that the surface temperatures of the simulation model are in good agreement with the measurement data on all container walls.
NASA Technical Reports Server (NTRS)
Fisher, Travis C.; Carpenter, Mark H.; Nordstroem, Jan; Yamaleev, Nail K.; Swanson, R. Charles
2011-01-01
Simulations of nonlinear conservation laws that admit discontinuous solutions are typically restricted to discretizations of equations that are explicitly written in divergence form. This restriction is, however, unnecessary. Herein, linear combinations of divergence and product rule forms that have been discretized using diagonal-norm skew-symmetric summation-by-parts (SBP) operators are shown to satisfy the sufficient conditions of the Lax-Wendroff theorem and thus are appropriate for simulations of discontinuous physical phenomena. Furthermore, special treatments are not required at points near physical boundaries (i.e., discrete conservation is achieved throughout the entire computational domain, including the boundaries). Examples are presented of a fourth-order, SBP finite-difference operator with second-order boundary closures. Sixth- and eighth-order constructions are derived, and included in E. Narrow-stencil difference operators for linear viscous terms are also derived; these guarantee the conservative form of the combined operator.
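For readers unfamiliar with SBP operators, the classical second-order diagonal-norm first-derivative operator below illustrates the defining property D = H^{-1} Q with Q + Q^T = diag(-1, 0, ..., 0, 1); the paper's fourth- through eighth-order closures are substantially more elaborate, and this sketch is only a generic textbook example.

```python
import numpy as np

# Second-order diagonal-norm SBP first-derivative operator and a check of
# the summation-by-parts property Q + Q^T = diag(-1, 0, ..., 0, 1).
def sbp_d1(n, h):
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = 0.5 * h                 # diagonal norm (quadrature)
    Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))  # central interior stencil
    Q[0, 0], Q[-1, -1] = -0.5, 0.5                # one-sided boundary closure
    return H, np.linalg.inv(H) @ Q                # D = H^{-1} Q

n, h = 11, 0.1
H, D = sbp_d1(n, h)
B = H @ D + (H @ D).T
print(np.allclose(B, np.diag([-1.0] + [0.0] * (n - 2) + [1.0])))  # SBP property
x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(D @ x ** 2 - 2 * x)))  # exact in interior; O(h) at boundaries
```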
Automated Fluid Feature Extraction from Transient Simulations
NASA Technical Reports Server (NTRS)
Haimes, Robert
2000-01-01
In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense.
Modeling the electrophoretic separation of short biological molecules in nanofluidic devices
NASA Astrophysics Data System (ADS)
Fayad, Ghassan; Hadjiconstantinou, Nicolas
2010-11-01
Via comparisons with Brownian Dynamics simulations of the worm-like-chain and rigid-rod models, and the experimental results of Fu et al. [Phys. Rev. Lett., 97, 018103 (2006)], we demonstrate that, for the purposes of low-to-medium field electrophoretic separation in periodic nanofilter arrays, sufficiently short biomolecules can be modeled as point particles, with their orientational degrees of freedom accounted for using partition coefficients. This observation is used in the present work to build a particularly simple and efficient Brownian Dynamics simulation method. Particular attention is paid to the model's ability to quantitatively capture experimental results using realistic values of all physical parameters. A variance-reduction method is developed for efficiently simulating arbitrarily small forcing electric fields.
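The point-particle Brownian dynamics referred to above reduces, in its simplest form, to an Euler-Maruyama update with drift; every parameter in the sketch below is an invented placeholder rather than a value from the paper, and the periodic barrier only stands in for the nanofilter geometry.

```python
import numpy as np

# Overdamped Brownian dynamics (Euler-Maruyama) for point particles driven
# through a periodic energy landscape; parameters are illustrative only.
rng = np.random.default_rng(1)
D, kT, F_ext, dt, period = 1.0, 1.0, 0.5, 1e-3, 1.0

def landscape_force(x):            # force from an assumed periodic barrier
    return -2.0 * np.pi * np.sin(2.0 * np.pi * x / period) / period

x = np.zeros(1000)                 # ensemble of particles
steps = 20000
for _ in range(steps):
    drift = (D / kT) * (F_ext + landscape_force(x))   # Einstein mobility D/kT
    x += drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)
print("mean drift velocity:", x.mean() / (steps * dt))
```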
A model of the 8-25 micron point source infrared sky
NASA Technical Reports Server (NTRS)
Wainscoat, Richard J.; Cohen, Martin; Volk, Kevin; Walker, Helen J.; Schwartz, Deborah E.
1992-01-01
We present a detailed model for the IR point-source sky that comprises geometrically and physically realistic representations of the Galactic disk, bulge, stellar halo, spiral arms (including the 'local arm'), molecular ring, and the extragalactic sky. We represent each of the distinct Galactic components by up to 87 types of Galactic source, each fully characterized by scale heights, space densities, and absolute magnitudes at BVJHK, 12, and 25 microns. The model is guided by a parallel Monte Carlo simulation of the Galaxy at 12 microns. The content of our Galactic source table constitutes a good match to the 12 micron luminosity function in the simulation, as well as to the luminosity functions at V and K. We are able to produce differential and cumulative IR source counts for any bandpass lying fully within the IRAS Low-Resolution Spectrometer's range (7.7-22.7 microns) as well as for the IRAS 12 and 25 micron bands. These source counts match the IRAS observations well. The model can be used to predict the character of the point source sky expected for observations from IR space experiments.
Time-reversal transcranial ultrasound beam focusing using a k-space method
Jing, Yun; Meral, F. Can; Clement, Greg. T.
2012-01-01
This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, a synthetic point source at the focal point is numerically excited and propagated through the skull, using acoustic properties acquired from registered computed tomography of the skull being studied. The received data outside the skull contain the correction information and can be phase conjugated (time reversed) and then physically generated to achieve tight focusing inside the skull, assuming quasi-plane transmission where shear waves are not present or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave propagation simulation, it will be shown that the k-space method is significantly more accurate even for a relatively coarse spatial resolution, leading to a dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that precise focusing can be realized using the k-space method with a spatial resolution as low as only 2.56 grid points per wavelength, thus allowing treatment planning computation on the order of minutes. PMID:22290477
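The phase-conjugation step can be demonstrated in a one-path frequency-domain toy: a pulse scrambled by an unknown all-pass medium re-forms when its conjugated spectrum is sent back through the same medium. The actual k-space skull propagation is far more involved; everything below is an assumption-laden caricature.

```python
import numpy as np

# One-dimensional toy of time reversal by phase conjugation through a
# reciprocal, all-pass medium H (random phases stand in for the skull).
fs, f0 = 1e7, 5e5
t = np.arange(0.0, 2e-4, 1.0 / fs)
pulse = np.sin(2 * np.pi * f0 * t) * np.exp(-((t - 2e-5) / 5e-6) ** 2)

H = np.exp(1j * np.pi * np.random.default_rng(4).uniform(-1, 1, t.size))
received = np.fft.ifft(H * np.fft.fft(pulse))          # scrambled at the array

refocused = np.fft.ifft(H * np.conj(np.fft.fft(received)))
# |H| = 1, so H * conj(H P) = conj(P): the pulse re-forms, time-mirrored.
print(np.argmax(np.abs(refocused)) / fs)
```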
NASA Astrophysics Data System (ADS)
Kreiss, Gunilla; Holmgren, Hanna; Kronbichler, Martin; Ge, Anthony; Brant, Luca
2017-11-01
The conventional no-slip boundary condition leads to a non-integrable stress singularity at a moving contact line. This makes numerical simulations of two-phase flow challenging, especially when capillarity of the contact point is essential for the dynamics of the flow. We will describe a modeling methodology, which is suitable for numerical simulations, and present results from numerical computations. The methodology is based on combining a relation between the apparent contact angle and the contact line velocity, with the similarity solution for Stokes flow at a planar interface. The relation between angle and velocity can be determined by theoretical arguments, or from simulations using a more detailed model. In our approach we have used results from phase field simulations in a small domain, but using a molecular dynamics model should also be possible. In both cases more physics is included and the stress singularity is removed.
NASA Astrophysics Data System (ADS)
Gottschalk, Ian P.; Hermans, Thomas; Knight, Rosemary; Caers, Jef; Cameron, David A.; Regnery, Julia; McCray, John E.
2017-12-01
Geophysical data have proven to be very useful for lithological characterization. However, quantitatively integrating the information gained from acquiring geophysical data generally requires colocated lithological and geophysical data for constructing a rock-physics relationship. In this contribution, the issue of integrating noncolocated geophysical and lithological data is addressed, and the results are applied to simulate groundwater flow in a heterogeneous aquifer in the Prairie Waters Project North Campus aquifer recharge site, Colorado. Two methods of constructing a rock-physics transform between electrical resistivity tomography (ERT) data and lithology measurements are assessed. In the first approach, a maximum likelihood estimation (MLE) is used to fit a bimodal lognormal distribution to horizontal cross-sections of the ERT resistivity histogram. In the second approach, a spatial bootstrap is applied to approximate the rock-physics relationship. The rock-physics transforms provide soft data for multiple-point statistics (MPS) simulations. Subsurface models are used to run groundwater flow and tracer test simulations. Each model's uncalibrated, predicted breakthrough time is evaluated based on its agreement with measured subsurface travel time values from infiltration basins to selected groundwater recovery wells. We find that incorporating geophysical information into uncalibrated flow models reduces the difference with observed values, as compared to flow models without geophysical information incorporated. The integration of geophysical data also narrows the variance of predicted tracer breakthrough times substantially. Accuracy is highest and variance is lowest in breakthrough predictions generated by the MLE-based rock-physics transform. Calibrating the ensemble of geophysically constrained models would help produce a suite of realistic flow models for predictive purposes at the site. We find that the success of breakthrough predictions is highly sensitive to the definition of the rock-physics transform; it is therefore important to model this transfer function accurately.
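The MLE step can be sketched as fitting a two-component lognormal mixture to resistivity values and converting the fit into per-sample facies probabilities (the soft data for MPS). The data, starting values, and facies labels below are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Fit a bimodal lognormal mixture to synthetic resistivities by maximum
# likelihood, then convert to per-sample facies probabilities (soft data).
rng = np.random.default_rng(2)
rho = np.concatenate([rng.lognormal(2.0, 0.3, 400),   # e.g. fine-grained
                      rng.lognormal(4.0, 0.4, 600)])  # e.g. coarse-grained
z = np.log(rho)

def nll(p):
    w, m1, s1, m2, s2 = p
    mix = w * norm.pdf(z, m1, s1) + (1 - w) * norm.pdf(z, m2, s2)
    return -np.sum(np.log(mix + 1e-300))

res = minimize(nll, x0=[0.5, 1.5, 0.5, 4.5, 0.5],
               bounds=[(0.01, 0.99), (None, None), (1e-3, None),
                       (None, None), (1e-3, None)])
w, m1, s1, m2, s2 = res.x
p1 = w * norm.pdf(z, m1, s1)
p_facies1 = p1 / (p1 + (1 - w) * norm.pdf(z, m2, s2))  # soft data for MPS
print(res.x, p_facies1[:5])
```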
Semantic Information Processing of Physical Simulation Based on Scientific Concept Vocabulary Model
NASA Astrophysics Data System (ADS)
Kino, Chiaki; Suzuki, Yoshio; Takemiya, Hiroshi
Scientific Concept Vocabulary (SCV) has been developed to actualize the Cognitive methodology based Data Analysis System (CDAS), which supports researchers in analyzing large-scale data efficiently and comprehensively. SCV is an information model for processing semantic information for physics and engineering. In the SCV model, all semantic information is related to substantial data and algorithms. Consequently, SCV enables a data analysis system to recognize the meaning of execution results output from a numerical simulation. This method has allowed a data analysis system to extract important information from a scientific viewpoint. Previous research has shown that SCV is able to describe simple scientific indices and scientific perceptions. However, it is difficult to describe complex scientific perceptions with the currently proposed SCV. In this paper, a new data structure for SCV is proposed in order to describe scientific perceptions in more detail. Additionally, a prototype of the new model has been constructed and applied to actual data from a numerical simulation. The results show that the new SCV is able to describe more complex scientific perceptions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostick, D.T.; Steele, W.V.
1999-08-01
This document describes physical and thermophysical property determinations that were made in order to resolve questions associated with the decontamination of Savannah River Site (SRS) waste streams using ion exchange on crystalline silicotitanate (CST). The research will aid in the understanding of potential issues associated with cooling of feed streams within SRS waste treatment processes. Toward this end, the thermophysical properties of engineered CST, manufactured under the trade name Ionsive® IE-911 by UOP, Mobile, AL, were determined. The heating profiles of CST samples from several manufacturers' production runs were observed using differential scanning calorimetric (DSC) measurements. DSC data were obtained over the region of 10 to 215 C to check for the possibility of a phase transition or any other enthalpic event in that temperature region. Finally, the heat capacity, thermal conductivity, density, viscosity, and salting-out point were determined for SRS waste simulants designated as the Average, High NO3-, and High OH- simulants.
Khatchadourian, R; Davis, S; Evans, M; Licea, A; Seuntjens, J; Kildea, J
2012-07-01
Photoneutrons are a major component of the equivalent dose in the maze and near the door of linac bunkers. Physical measurements and Monte Carlo (MC) calculations of neutron dose are key for validating bunker design with respect to health regulations. We attempted to use bubble detectors and a 3He neutron spectrometer to measure neutron equivalent dose and neutron spectra in the maze and near the door of one of our bunkers. We also ran MC simulations with MCNP5 to calculate the neutron fluence in the same region. Using a point source of neutrons, a Clinac 1800 linac operating at 10 MV was simulated and the fluence calculated at various locations of interest. We describe the challenges faced when measuring dose with bubble detectors in the maze and the complexity of photoneutron spectrometry with linacs operating in pulsed mode. Finally, we report on the development of a user-friendly GUI for shielding calculations based on the NCRP 151 formalism. © 2012 American Association of Physicists in Medicine.
Angular Dispersions in Terahertz Metasurfaces: Physics and Applications
NASA Astrophysics Data System (ADS)
Qiu, Meng; Jia, Min; Ma, Shaojie; Sun, Shulin; He, Qiong; Zhou, Lei
2018-05-01
Angular dispersion—the response of a metasurface strongly depending on the impinging angle—is an intrinsic property of metasurfaces, but its physical origin remains obscure, which hinders its applications in metasurface design. We establish a theory to quantitatively describe such intriguing effects in metasurfaces, and we verify it by both experiments and numerical simulations on a typical terahertz metasurface. The physical understanding gained motivates us to propose an alternative strategy to design metadevices exhibiting impinging-angle-dependent multifunctionalities. As an illustration, we design a polarization-control metadevice that can behave as a half- or quarter-wave plate under different excitation angles. Our results not only reveal the physical origin of the angular dispersion but also point out an additional degree of freedom to manipulate light, both of which are important for designing metadevices facing versatile application requests.
Automated Fluid Feature Extraction from Transient Simulations
NASA Technical Reports Server (NTRS)
Haimes, Robert
1998-01-01
In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: shocks; vortex cores; regions of recirculation; boundary layers; wakes.
Physics of the Geospace Response to Powerful HF Radio Waves
2012-10-31
studies of the response of the Earth's space plasma to high-power HF radio waves from the High-frequency Active Auroral Research Program (HAARP)... of HF heating and explored to simulate artificial ducts. DMSP-HAARP experiments revealed that HF-created ion outflows and artificial density ducts... in the topside ionosphere appeared faster than predicted by the models, pointing to kinetic (suprathermal) effects. CHAMP/GRACE-HAARP experiments...
Simulating Nonequilibrium Radiation via Orthogonal Polynomial Refinement
2015-01-07
measured by the preprocessing time, computer memory space, and average query time. In many search procedures for the number of points np of a data set, a... analytic expression for the radiative flux density is possible by the commonly accepted local thermal equilibrium (LTE) approximation. A semi... Vol. 227, pp. 9463-9476, 2008. 10. Galvez, M., Ray-Tracing model for radiation transport in three-dimensional LTE system, App. Physics, Vol. 38...
Volovik, G E
1999-05-25
There are several classes of homogeneous Fermi systems that are characterized by the topology of the energy spectrum of fermionic quasiparticles: (i) gapless systems with a Fermi surface, (ii) systems with a gap in their spectrum, (iii) gapless systems with topologically stable point nodes (Fermi points), and (iv) gapless systems with topologically unstable lines of nodes (Fermi lines). Superfluid 3He-A and electroweak vacuum belong to the universality class 3. The fermionic quasiparticles (particles) in this class are chiral: they are left-handed or right-handed. The collective bosonic modes of systems of class 3 are the effective gauge and gravitational fields. The great advantage of superfluid 3He-A is that we can perform experiments by using this condensed matter and thereby simulate many phenomena in high energy physics, including axial anomaly, baryoproduction, and magnetogenesis. 3He-A textures induce a nontrivial effective metrics of the space, where the free quasiparticles move along geodesics. With 3He-A one can simulate event horizons, Hawking radiation, rotating vacuum, etc. High-temperature superconductors are believed to belong to class 4. They have gapless fermionic quasiparticles with a "relativistic" spectrum close to gap nodes, which allows application of ideas developed for superfluid 3He-A.
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2011-07-01
We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^XY(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_n^XY(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
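A toy kinetic Monte Carlo version of the 1D point-island model (critical nucleus size i* = 1) is easy to write down and suffices to histogram gap sizes; the lattice size, D/F ratio, and coverage below are arbitrary choices, not the authors' simulation parameters.

```python
import numpy as np

# Toy 1D point-island growth (i* = 1): monomers deposit at flux F, hop with
# rate D, and stick irreversibly; islands occupy a single site. We then
# collect the island-island gap sizes.
rng = np.random.default_rng(3)
L, DF, cov = 10_000, 1e6, 0.02       # lattice size, D/F ratio, coverage
site = np.zeros(L, np.int8)          # 0 empty, 1 monomer, 2 island
mono, deposited = [], 0

def occupy(j):
    if site[j] == 0:                 # empty site -> new free monomer
        site[j] = 1
        mono.append(j)
    else:                            # occupied site -> (point) island
        if site[j] == 1:
            mono.remove(j)
        site[j] = 2

while deposited < cov * L:
    n = len(mono)
    if n == 0 or rng.random() < L / (L + DF * n):   # deposition event
        deposited += 1
        occupy(rng.integers(L))
    else:                                           # a random monomer hops
        i = mono.pop(rng.integers(n))
        site[i] = 0
        occupy((i + rng.choice((-1, 1))) % L)

gaps = np.diff(np.flatnonzero(site == 2))
print("islands:", int((site == 2).sum()), "mean gap:", gaps.mean())
```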
A Computational Model of Human Table Tennis for Robot Application
NASA Astrophysics Data System (ADS)
Mülling, Katharina; Peters, Jan
Table tennis is a difficult motor skill that requires all basic components of a general motor skill learning system. In order to get a step closer to such a generic approach to the automatic acquisition and refinement of table tennis skills, we study table tennis from a human motor control point of view. We make use of the basic models of discrete human movement phases, virtual hitting points, and the operational timing hypothesis. Using these components, we create a computational model aimed at reproducing human-like behavior. We verify the functionality of this model in a physically realistic simulation of a Barrett WAM.
PHYSICAL MODELING OF CONTRACTED FLOW.
Lee, Jonathan K.
1987-01-01
Experiments on steady flow over uniform grass roughness through centered single-opening contractions were conducted in the Flood Plain Simulation Facility at the U. S. Geological Survey's Gulf Coast Hydroscience Center near Bay St. Louis, Miss. The experimental series was designed to provide data for calibrating and verifying two-dimensional, vertically averaged surface-water flow models used to simulate flow through openings in highway embankments across inundated flood plains. Water-surface elevations, point velocities, and vertical velocity profiles were obtained at selected locations for design discharges ranging from 50 to 210 cfs. Examples of observed water-surface elevations and velocity magnitudes at basin cross-sections are presented.
Comparison among mathematical models of the photovoltaic cell for computer simulation purposes
NASA Astrophysics Data System (ADS)
Tofoli, Fernando Lessa; Pereira, Denis de Castro; Josias De Paula, Wesley; Moreira Vicente, Eduardo; Vicente, Paula dos Santos; Braga, Henrique Antonio Carvalho
2017-07-01
This paper presents a comparison among mathematical models used in the simulation of solar photovoltaic modules that can be easily integrated with power electronic converters. To perform the analysis, three models available in the literature and also the physical model of the module in the software PSIM® are used. Results regarding the respective I × V and P × V curves are presented, and advantages and possible limitations are discussed. In addition, a DC-DC buck converter performs maximum power point tracking using the perturb-and-observe method, and the performance of each of the aforementioned models is investigated.
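To give the flavor of such a comparison, the sketch below implements the simplest candidate model, a single-diode equation with series resistance only, together with a bare-bones perturb-and-observe MPPT loop (Python; all parameter values, and the simplification to one diode with no shunt resistance, are illustrative assumptions rather than the specific models compared in the paper):

```python
import math

def pv_current(v, iph=8.0, i0=1e-9, n=1.3, ns=60, rs=0.2, t=298.15):
    """Single-diode module current, series resistance only (illustrative).

    Solves I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) by fixed-point
    iteration; Vt = kT/q is the thermal voltage.
    """
    vt = 1.380649e-23 * t / 1.602176634e-19
    i = 0.0
    for _ in range(200):
        i = iph - i0 * (math.exp((v + i * rs) / (n * ns * vt)) - 1.0)
    return max(i, 0.0)

def perturb_and_observe(v=20.0, dv=0.5, steps=100):
    """P&O MPPT: keep stepping V in the direction that raised power."""
    p_old, direction = 0.0, 1.0
    for _ in range(steps):
        p = v * pv_current(v)
        if p < p_old:
            direction = -direction   # power dropped: reverse the step
        p_old, v = p, v + direction * dv
    return v, p_old

vmpp, pmpp = perturb_and_observe()
print(f"~MPP near {vmpp:.1f} V, {pmpp:.0f} W")
```

The loop deliberately shows the classic P&O behavior: it climbs the P × V curve and then oscillates in a small band around the maximum power point.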
Multioverlap Simulations of the 3D Edwards-Anderson Ising Spin Glass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berg, B.A.; Berg, B.A.; Janke, W.
1998-05-01
We introduce a novel method for numerical spin glass investigations: simulations of two replicas at fixed temperature, weighted to achieve a broad distribution of the Parisi overlap parameter q (multioverlap). We demonstrate the feasibility of the approach by studying the 3D Edwards-Anderson Ising (J_ik = ±1) spin glass in the broken phase (β = 1). This makes it possible to obtain reliable results about spin glass tunneling barriers. In addition, our results indicate a nontrivial scaling behavior of the canonical q distributions not only at the freezing point but also deep in the broken phase.
On the efficient and reliable numerical solution of rate-and-state friction problems
NASA Astrophysics Data System (ADS)
Pipping, Elias; Kornhuber, Ralf; Rosenau, Matthias; Oncken, Onno
2016-03-01
We present a mathematically consistent numerical algorithm for the simulation of earthquake rupture with rate-and-state friction. Its main features are adaptive time stepping, a novel algebraic solution algorithm involving nonlinear multigrid and a fixed point iteration for the rate-and-state decoupling. The algorithm is applied to a laboratory scale subduction zone which allows us to compare our simulations with experimental results. Using physical parameters from the experiment, we find a good fit of recurrence time of slip events as well as their rupture width and peak slip. Computations in 3-D confirm efficiency and robustness of our algorithm.
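The flavor of the rate-and-state decoupling can be conveyed in a much simpler setting than the paper's multigrid solver: a single quasi-dynamic spring-slider obeying the aging law. In the sketch below (Python with SciPy; all parameter values are illustrative, and the scalar bracketing solve stands in for the paper's nonlinear multigrid), each time step alternates a force-balance solve for the slip rate V with the state variable theta frozen and an implicit update of theta with V frozen, until the pair converges:

```python
import math
from scipy.optimize import brentq

# Illustrative spring-slider parameters (not from the paper)
sig, a, b, mu0 = 1.0e6, 0.010, 0.015, 0.6      # normal stress, RSF a, b, mu0
v0, dc = 1.0e-6, 1.0e-5                        # reference rate, state length
k, vl, eta = 1.0e7, 1.0e-6, 5.0e6              # stiffness, load rate, damping

def mu(v, th):
    """Rate-and-state friction coefficient."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * th / dc)

def step(tau, th_n, dt):
    """One time step via the fixed-point rate/state decoupling."""
    th, v_old = th_n, 0.0
    for _ in range(100):
        # (i) force balance for V with the state variable frozen
        g = lambda vv: tau + k * (vl - vv) * dt - sig * mu(vv, th) - eta * vv
        v = brentq(g, 1e-14, 1e3)
        # (ii) implicit Euler for the aging law d(th)/dt = 1 - v*th/dc
        th = (th_n + dt) / (1.0 + v * dt / dc)
        if abs(v - v_old) <= 1e-10 * v:
            break
        v_old = v
    return tau + k * (vl - v) * dt, th, v

tau, th = sig * mu(vl, dc / vl) + eta * vl, 0.2 * dc / vl  # perturbed start
for n in range(5):
    tau, th, v = step(tau, th, dt=1.0)
    print(f"step {n}: V = {v:.3e} m/s")
```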
Design and modelling of a 3D compliant leg for Bioloid
NASA Astrophysics Data System (ADS)
Couto, Mafalda; Santos, Cristina; Machado, José
2012-09-01
In the growing field of rehabilitation robotics, modelling a real robot is a complex and fascinating challenge. At the crossing point of mechanics, physics, and computer science, the development of a complete 3D model requires knowledge of the different physical properties for an accurate simulation. In this paper, we propose the design of an efficient three-dimensional model of the quadruped Bioloid robot fitted with segmented pantographic legs, in order to actively retract the quadruped's legs during locomotion and minimize the large forces due to shocks, so that the robot can safely and dynamically interact with the user or the environment.
Quantification of topological changes of vorticity contours in two-dimensional Navier-Stokes flow.
Ohkitani, Koji; Al Sulti, Fayeza
2010-06-01
A characterization of the reconnection of vorticity contours is made by direct numerical simulations of two-dimensional Navier-Stokes flow at a relatively low Reynolds number. We identify all the critical points of the vorticity field and classify them by solving an eigenvalue problem of its Hessian matrix on the basis of critical-point theory. The numbers of hyperbolic (saddle) and elliptic (minimum and maximum) points are confirmed numerically to satisfy Euler's index theorem. The time evolution of these indices is studied for a simple initial condition. Generally speaking, the indices decrease in number with time. This result is discussed in connection with related work on streamline topology, in particular the relationship between stagnation points and dissipation. The associated elementary process in physical space, the merging of vortices, is studied in detail for a number of snapshots. A similar analysis is also done using the stream function.
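A minimal version of such a critical-point census is easy to reproduce. The sketch below (Python with NumPy) classifies every point of a doubly periodic random scalar field, standing in for the vorticity, by the number of sign changes of neighbour-minus-centre differences around the 8-neighbour ring; this is a standard discrete alternative to the Hessian-eigenvalue classification used in the paper, with 0 changes marking an extremum and 4 a saddle. On the torus, Euler's index theorem requires (#maxima + #minima) - #saddles = 0:

```python
import numpy as np

rng = np.random.default_rng(1)

# Smooth doubly periodic random field (a stand-in for vorticity)
n = 96
k = np.fft.fftfreq(n) * n
KX, KY = np.meshgrid(k, k, indexing="ij")
amp = np.exp(-(KX**2 + KY**2) / 18.0)
w = np.real(np.fft.ifft2(amp * np.fft.fft2(rng.standard_normal((n, n)))))

# Sign changes of (neighbour - centre) around the cyclic 8-neighbour ring
ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
diffs = np.stack([np.roll(w, (-di, -dj), (0, 1)) - w for di, dj in ring])
signs = np.sign(diffs)
changes = (signs != np.roll(signs, 1, axis=0)).sum(axis=0)

n_extrema = int((changes == 0).sum())   # maxima and minima together
n_saddles = int((changes == 4).sum())
print(n_extrema, n_saddles, n_extrema - n_saddles)  # last number ~ 0
```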
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molina, Raquel; Hu, Bitao; Doering, Michael
Several lattice QCD simulations of meson-meson scattering in the p-wave, isospin-1 channel with Nf = 2 + 1 flavours have been carried out recently. Unitarized Chiral Perturbation Theory is used to perform extrapolations to the physical point. In contrast to previous findings from analyses of Nf = 2 lattice data, where most of the data seem to be in agreement, some discrepancies are detected in the Nf = 2 + 1 lattice data analyses, which could be due to different masses of the strange quark, meson decay constants, initial constraints in the simulation, or other lattice artifacts. In addition, the low-energy constants are compared to those from a recent analysis of Nf = 2 lattice data.
Physical lumping methods for developing linear reduced models for high speed propulsion systems
NASA Technical Reports Server (NTRS)
Immel, S. M.; Hartley, Tom T.; Deabreu-Garcia, J. Alex
1991-01-01
In gasdynamic systems, information travels in one direction for supersonic flow and in both directions for subsonic flow. A shock occurs at the transition from supersonic to subsonic flow. Thus, to simulate these systems, any simulation method implemented for the quasi-one-dimensional Euler equations must have the ability to capture the shock. In this paper, a technique combining both backward and central differencing is presented. The equations are subsequently linearized about an operating point and formulated into a linear state space model. After proper implementation of the boundary conditions, the model order is reduced from 123 to less than 10 using the Schur method of balancing. Simulations comparing frequency and step response of the reduced order model and the original system models are presented.
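The reduction step the authors describe, taking a 123-state linearized model down to fewer than 10 states, can be sketched with the standard square-root balanced-truncation algorithm (Python with SciPy). This generic Gramian-based routine is a stand-in for the Schur method of balancing named in the abstract, and the random stable test system is purely illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def sqrt_psd(w):
    """Robust factor L with w ~ L @ L.T for a near-singular Gramian."""
    lam, vec = np.linalg.eigh(w)
    lam = np.clip(lam, 1e-12 * lam.max(), None)
    return vec * np.sqrt(lam)

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system (A, B, C).

    A generic stand-in for the Schur balancing in the paper: compute
    the controllability/observability Gramians, balance them, and keep
    the r largest Hankel singular values.
    """
    Wc = solve_continuous_lyapunov(A, -B @ B.T)   # A Wc + Wc A^T = -B B^T
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc, Lo = sqrt_psd(Wc), sqrt_psd(Wo)
    U, s, Vt = svd(Lo.T @ Lc)
    T = Lc @ Vt[:r].T / np.sqrt(s[:r])            # balancing transformation
    Ti = (U[:, :r] / np.sqrt(s[:r])).T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s

# Illustrative random stable 123-state system reduced to 8 states
rng = np.random.default_rng(0)
A = rng.standard_normal((123, 123))
A -= 1.5 * np.abs(np.linalg.eigvals(A)).max() * np.eye(123)   # make stable
B, C = rng.standard_normal((123, 1)), rng.standard_normal((1, 123))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=8)
print(hsv[:10] / hsv[0])   # fast decay justifies keeping r << 123 states
```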
Tropical Oceanic Precipitation Processes over Warm Pool: 2D and 3D Cloud Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, W.- K.; Johnson, D.
1998-01-01
Rainfall is a key link in the hydrologic cycle as well as the primary heat source for the atmosphere. The vertical distribution of convective latent-heat release modulates the large-scale circulations of the tropics. Furthermore, changes in the moisture distribution at middle and upper levels of the troposphere can affect cloud distributions and cloud liquid water and ice contents. How the incoming solar and outgoing longwave radiation respond to these changes in clouds is a major factor in assessing climate change. Present large-scale weather and climate models simulate cloud processes only crudely, reducing confidence in their predictions on both global and regional scales. One of the most promising methods to test physical parameterizations used in General Circulation Models (GCMs) and climate models is to use field observations together with Cloud Resolving Models (CRMs). The CRMs use more sophisticated and physically realistic parameterizations of cloud microphysical processes and allow for their complex interactions with solar and infrared radiative transfer processes. The CRMs can resolve reasonably well the evolution, structure, and life cycles of individual clouds and cloud systems. The major objective of this paper is to investigate the latent heating, moisture, and momentum budgets associated with several convective systems that developed during the TOGA COARE IFA westerly wind burst event (late December 1992). The tool for this study is the Goddard Cumulus Ensemble (GCE) model, which includes a 3-class ice-phase microphysical scheme. The model domain contains 256 x 256 grid points (at 2 km resolution) in the horizontal and 38 grid points (to a depth of 22 km) in the vertical; the 2D domain has 1024 grid points. The simulations are performed over a 7 day period. We will examine (1) the precipitation processes (i.e., condensation/evaporation) and their interaction with the warm pool; (2) the heating and moisture budgets in the convective and stratiform regions; (3) the cloud (upward-downward) mass fluxes in convective and stratiform regions; (4) characteristics of clouds (such as cloud size, updraft intensity, and cloud lifetime) and the comparison of simulated clouds with radar observations; and (5) differences and similarities in the organization of convection between simulated 2D and 3D cloud systems. Preliminary results indicate that there are major differences between the 2D and 3D simulated stratiform rainfall amounts and convective updraft and downdraft mass fluxes.
Many Point Optical Velocimetry for Gas Gun Applications
NASA Astrophysics Data System (ADS)
Pena, Michael; Becker, Steven; Garza, Anselmo; Hanache, Michael; Hixson, Robert; Jennings, Richard; Matthes, Melissa; O'Toole, Brendan; Roy, Shawoon; Trabia, Mohamed
2015-06-01
With the emergence of the multiplexed photonic Doppler velocimeter (MPDV), it is now practical to record many velocity traces simultaneously on shock physics experiments. Optical measurements of plastic deformation during high velocity impact have historically been constrained to a few measurement points. We have applied a 32-channel MPDV system to gas gun experiments in order to measure plastic deformation of a steel plate. A two dimensional array of measurement points allowed for diagnostic coverage over a large surface area of the target plate. This provided experimental flexibility to accommodate platform uncertainties as well as provide for a wealth of data from a given experiment. The two dimensional array of measurement points was imaged from an MT fiber-optic connector using off-the-shelf optical components to allow for an economical and easy-to-assemble, many-fiber probe. A two-stage, light gas gun was used to launch a Lexan projectile at velocities ranging from 4 to 6 km/s at a 12.7 mm thick A36 steel plate. Plastic deformation of the back surface was measured and compared with simulations from two different models: LS-DYNA and CTH. Comparison of results indicates that the computational analysis using both codes can reasonably simulate experiments of this type.
NASA Astrophysics Data System (ADS)
Sangiovanni, Davide G.; Alling, Björn; Hultman, Lars; Abrikosov, Igor A.
2015-03-01
We use ab initio and classical molecular dynamics (AIMD, CMD) to simulate the diffusion of N vacancy and N self-interstitial point defects in B1 TiN. The physical properties of TiN, an important material system for thin-film and coating applications, are largely dictated by the concentration and mobility of point defects. We determine N dilute-point-defect diffusion pathways, activation energies, attempt frequencies, and diffusion coefficients as a function of temperature. In addition, MD simulations reveal an unanticipated atomistic process, which controls the spontaneous formation of N-self-interstitial/N-vacancy pairs (Frenkel pairs) in defect-free TiN. This entails that a N lattice atom leaves its bulk position and bonds to a neighboring N lattice atom. In most cases, the Frenkel-pair NI and NV recombine within a fraction of a nanosecond; 50% of these processes result in the exchange of two nitrogen lattice atoms. Occasionally, however, Frenkel-pair N-interstitial atoms permanently escape from the anion vacancy site, thus producing unpaired NI and NV point defects. This work was supported by the Knut and Alice Wallenberg Foundation (Isotope Project, 2011.0094), the Swedish Research Council (VR) Linköping Linnaeus Initiative LiLi-NFM (Grant 2008-6572), and the Swedish Government Strategic Research grant MatLiU (2009-00971).
A simulation study of radial expansion of an electron beam injected into an ionospheric plasma
NASA Technical Reports Server (NTRS)
Koga, J.; Lin, C. S.
1994-01-01
Injections of nonrelativistic electron beams from a finite equipotential conductor into an ionospheric plasma have been simulated using a two-dimensional electrostatic particle code. The purpose of the study is to survey the simulation parameters for understanding the dependence of beam radius on physical variables. The conductor is charged to a high potential when the background plasma density is less than the beam density. Beam electrons attracted by the charged conductor are decelerated to zero velocity near the stagnation point, which is at a few Debye lengths from the conductor. The simulations suggest that the beam electrons at the stagnation point receive a large transverse kick and the beam expands radially thereafter. The buildup of beam electrons at the stagnation point produces a large electrostatic force responsible for the transverse kick. However, for the weak charging cases where the background plasma density is larger than the beam density, the radial expansion mechanism is different; the beam plasma instability is found to be responsible for the radial expansion. The simulations show that the electron beam radius for high spacecraft charging cases is of the order of the beam gyroradius, defined as the beam velocity divided by the gyrofrequency. In the weak charging cases, the beam radius is only a fraction of the beam gyroradius. The parameter survey indicates that the beam radius increases with beam density and decreases with magnetic field and beam velocity. The beam radius normalized by the beam gyroradius is found to scale according to the ratio of the beam electron Debye length to the ambient electron Debye length. The parameter dependence deduced would be useful for interpreting the beam radius and beam density of electron beam injection experiments conducted from rockets and the space shuttle.
NASA Astrophysics Data System (ADS)
Caudevilla, Oriol; Zhou, Wei; Stoupin, Stanislav; Verman, Boris; Brankov, J. G.
2016-09-01
Analyzer-based X-ray phase contrast imaging (ABI) belongs to a broader family of phase-contrast (PC) X-ray imaging modalities. Unlike conventional X-ray radiography, which measures only X-ray absorption, PC imaging also measures the deflection of X-rays induced by the object's refractive properties. It has been shown that refraction imaging provides better contrast when imaging soft tissue, which is of great interest in medical imaging applications. In this paper, we introduce a simulation tool specifically designed to model an analyzer-based X-ray phase contrast imaging system with a conventional polychromatic X-ray source. By utilizing ray tracing and basic physical principles of diffraction theory, our simulation tool can predict the X-ray beam profile shape, the energy content, and the total throughput (photon count) at the detector. In addition, we can evaluate the imaging system's point-spread function for various system configurations.
Three-dimensional electron microscopy simulation with the CASINO Monte Carlo software.
Demers, Hendrix; Poirier-Demers, Nicolas; Couture, Alexandre Réal; Joly, Dany; Guilmain, Marc; de Jonge, Niels; Drouin, Dominique
2011-01-01
Monte Carlo software is widely used to understand the capabilities of electron microscopes. To study more realistic applications with complex samples, 3D Monte Carlo codes are needed. In this article, the development of the 3D version of CASINO is presented. The software features a graphical user interface, an efficient (in terms of simulation time and memory use) 3D simulation model, and accurate physics models for electron microscopy applications, and it is freely available to the scientific community at www.gel.usherbrooke.ca/casino/index.html. It can be used to model backscattered, secondary, and transmitted electron signals as well as absorbed energy. Software features such as scan points and shot noise allow the simulation and study of realistic experimental conditions. This software has an improved energy range for scanning electron microscopy and scanning transmission electron microscopy applications.
Magnetic biosensors: Modelling and simulation.
Nabaei, Vahid; Chandrawati, Rona; Heidari, Hadi
2018-04-30
In the past few years, magnetoelectronics has emerged as a promising platform technology for biosensors aimed at the detection, identification, localisation, and manipulation of a wide spectrum of biological, physical, and chemical agents. The methods are based on sensing the magnetic field of a magnetically labelled biomolecule interacting with a complementary biomolecule bound to a magnetic field sensor. This review presents various schemes of magnetic biosensor techniques from both simulation and modelling as well as analytical and numerical analysis points of view, and the performance variations under magnetic fields at steady and nonstationary states. This is followed by magnetic sensor modelling and simulations using advanced multiphysics modelling software (e.g., the Finite Element Method (FEM)) and home-made tools. Furthermore, the outlook and future directions of modelling and simulation of magnetic biosensors across different technologies and materials are critically discussed.
High Fidelity Simulation of Transcritical Liquid Jet in Crossflow
NASA Astrophysics Data System (ADS)
Li, Xiaoyi; Soteriou, Marios
2017-11-01
Transcritical injection of liquid fuel occurs in many practical applications such as diesel, rocket, and gas turbine engines. In these applications, the liquid fuel, at a supercritical pressure and a subcritical temperature, is introduced into an environment where both the pressure and temperature exceed the critical point of the fuel. The convoluted physics of the transition from subcritical to supercritical conditions poses great challenges for both experimental and numerical investigations. In this work, a numerical simulation of a binary system, with a subcritical liquid injected into a supercritical gaseous crossflow, is performed. The spatially varying fluid thermodynamic and transport properties are evaluated using established cubic equations of state and extended corresponding-states principles with standard mixing rules. To efficiently account for the large spatial gradients in property variations, an adaptive mesh refinement technique is employed. The transcritical simulation results are compared with predictions from traditional subcritical jet atomization simulations.
XIMPOL: a new x-ray polarimetry observation-simulation and analysis framework
NASA Astrophysics Data System (ADS)
Omodei, Nicola; Baldini, Luca; Pesce-Rollins, Melissa; di Lalla, Niccolò
2017-08-01
We present a new simulation framework, XIMPOL, based on the Python programming language and the SciPy stack, specifically developed for X-ray polarimetric applications. XIMPOL is not tied to any specific mission or instrument design and is meant to produce fast and yet realistic observation-simulations, given as basic inputs: (i) an arbitrary source model including morphological, temporal, spectral and polarimetric information, and (ii) the response functions of the detector under study, i.e., the effective area, the energy dispersion, the point-spread function and the modulation factor. The format of the response files is OGIP compliant, and the framework can produce output files that can be directly fed into the standard visualization and analysis tools used by the X-ray community, including XSPEC, which makes it useful not only for simulating physical systems but also for developing and testing end-to-end analysis chains.
ls1 mardyn: The Massively Parallel Molecular Dynamics Code for Large Systems.
Niethammer, Christoph; Becker, Stefan; Bernreuther, Martin; Buchholz, Martin; Eckhardt, Wolfgang; Heinecke, Alexander; Werth, Stephan; Bungartz, Hans-Joachim; Glass, Colin W; Hasse, Hans; Vrabec, Jadran; Horsch, Martin
2014-10-14
The molecular dynamics simulation code ls1 mardyn is presented. It is a highly scalable code, optimized for massively parallel execution on supercomputing architectures and currently holds the world record for the largest molecular simulation with over four trillion particles. It enables the application of pair potentials to length and time scales that were previously out of scope for molecular dynamics simulation. With an efficient dynamic load balancing scheme, it delivers high scalability even for challenging heterogeneous configurations. Presently, multicenter rigid potential models based on Lennard-Jones sites, point charges, and higher-order polarities are supported. Due to its modular design, ls1 mardyn can be extended to new physical models, methods, and algorithms, allowing future users to tailor it to suit their respective needs. Possible applications include scenarios with complex geometries, such as fluids at interfaces, as well as nonequilibrium molecular dynamics simulation of heat and mass transfer.
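For a feel for the building blocks involved, the 12-6 Lennard-Jones site is the elementary interaction underlying the multicenter models the code supports. The sketch below (Python with NumPy, reduced units, naive O(N^2) sum with no cutoff or cell lists, so purely illustrative next to a code built for trillions of particles) evaluates the pair energy and force:

```python
import numpy as np

def lj(r2, eps=1.0, sigma=1.0):
    """Lennard-Jones energy and force factor for squared distance r2.

    U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6); the 12-6 site is the
    basic building block of the multicenter potentials described above.
    Returns (U, f_over_r) with the force vector F = f_over_r * r_vec.
    """
    s6 = (sigma * sigma / r2) ** 3
    u = 4.0 * eps * s6 * (s6 - 1.0)
    f_over_r = 24.0 * eps * s6 * (2.0 * s6 - 1.0) / r2
    return u, f_over_r

def total_energy(x):
    """O(N^2) pair sum over particle positions x of shape (N, 3)."""
    dx = x[:, None, :] - x[None, :, :]
    r2 = (dx * dx).sum(-1)
    iu = np.triu_indices(len(x), k=1)   # each pair counted once
    return lj(r2[iu])[0].sum()

x = np.array([[0.0, 0.0, 0.0], [2 ** (1 / 6), 0.0, 0.0]])  # LJ minimum
print(total_energy(x))   # -> -1.0 (the well depth, -eps)
```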
Simulation of Flow Through Breach in Leading Edge at Mach 24
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Alter, Stephen J.
2004-01-01
A baseline solution for CFD Point 1 (Mach 24) in the STS-107 accident investigation was modified to include effects of holes through the leading edge into a vented cavity. The simulations were generated relatively quickly and early in the investigation by making simplifications to the leading edge cavity geometry. These simplifications in the breach simulations enabled: 1) A very quick grid generation procedure; 2) High fidelity corroboration of jet physics with internal surface impingements ensuing from a breach through the leading edge, fully coupled to the external shock layer flow at flight conditions. These simulations provided early evidence that the flow through a 2 inch diameter (or larger) breach enters the cavity with significant retention of external flow directionality. A normal jet directed into the cavity was not an appropriate model for these conditions at CFD Point 1 (Mach 24). The breach diameters were of the same order or larger than the local, external boundary-layer thickness. High impingement heating and pressures on the downstream lip of the breach were computed. It is likely that hole shape would evolve as a slot cut in the direction of the external streamlines. In the case of the 6 inch diameter breach the boundary layer is fully ingested.
A novel method for inverse fiber Bragg grating structure design
NASA Astrophysics Data System (ADS)
Yin, Yu-zhe; Chen, Xiang-fei; Dai, Yi-tang; Xie, Shi-zhong
2003-12-01
A novel grating inverse design method is proposed in this paper, which is direct in its physical meaning and easy to implement. The key point of the method is to design and implement the desired spectral response in the grating-strength modulation domain rather than in the grating-period chirp domain. Simulated results are in good agreement with the design target. By transforming grating period chirp into grating strength modulation, a novel grating with opposite dispersion characteristics is proposed.
A Novel Piggyback Selection Scheme in IEEE 802.11e HCCA
NASA Astrophysics Data System (ADS)
Lee, Hyun-Jin; Kim, Jae-Hyun
A control frame can be piggybacked onto a data frame to increase channel efficiency in wireless communication. However, if a control frame carrying global control information is piggybacked, the delay of data frames from an access point increases even if there is only one station with a low physical transmission rate. This is similar to the anomaly phenomenon in a network that supports multi-rate transmission. In this letter, we define this phenomenon as “the piggyback problem at low physical transmission rate” and evaluate its effect with respect to physical transmission rate and normalized traffic load. We then propose a delay-based piggyback scheme. Simulations show that the proposed scheme reduces average frame transmission delay and improves channel utilization by about 24% and 25%, respectively.
Contemporary machine learning: techniques for practitioners in the physical sciences
NASA Astrophysics Data System (ADS)
Spears, Brian
2017-10-01
Machine learning is the science of using computers to find relationships in data without explicitly knowing or programming those relationships in advance. Often without realizing it, we employ machine learning every day as we use our phones or drive our cars. Over the last few years, machine learning has found increasingly broad application in the physical sciences. This most often involves building a model relationship between a dependent, measurable output and an associated set of controllable, but complicated, independent inputs. The methods are applicable both to experimental observations and to databases of simulated output from large, detailed numerical simulations. In this tutorial, we will present an overview of current tools and techniques in machine learning - a jumping-off point for researchers interested in using machine learning to advance their work. We will discuss supervised learning techniques for modeling complicated functions, beginning with familiar regression schemes, then advancing to more sophisticated decision trees, modern neural networks, and deep learning methods. Next, we will cover unsupervised learning and techniques for reducing the dimensionality of input spaces and for clustering data. We'll show example applications from both magnetic and inertial confinement fusion. Along the way, we will describe methods for practitioners to help ensure that their models generalize from their training data to as-yet-unseen test data. We will finally point out some limitations to modern machine learning and speculate on some ways that practitioners from the physical sciences may be particularly suited to help. This work was performed by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Zhang, Qi; Chang, Ming; Zhou, Shengzhen; Chen, Weihua; Wang, Xuemei; Liao, Wenhui; Dai, Jianing; Wu, ZhiYong
2017-11-01
There has been rapid growth of reactive nitrogen (Nr) deposition worldwide over the past decades. The Pearl River Delta region is one of the areas with a high loading of nitrogen deposition, but large uncertainties remain in the study of dry deposition because of the complex processes of physical chemistry and vegetation physiology involved. At present, the forest canopy parameterization used in the WRF-Chem model is a single-layer "big leaf" model, and its simulation of radiation transmission and energy balance in the forest canopy is neither detailed nor accurate. The Noah-MP land surface model (Noah-MP) is based on the Noah land surface model (Noah LSM) and has multiple parameterization options to simulate the energy, momentum, and material interactions of the vegetation-soil-atmosphere system. Therefore, to investigate how coupling with Noah-MP improves WRF-Chem simulations of nitrogen deposition in forest areas, and to reduce the influence of meteorological simulation biases on the simulated dry deposition velocity, a single-point dry deposition model coupling Noah-MP with the WRF-Chem dry deposition module (WDDM) was used to simulate the deposition velocity (Vd). The model was driven by micro-meteorological observations from the Dinghushan Forest Ecosystem Location Station, and a series of numerical experiments was carried out to identify the key processes influencing the calculation of the dry deposition velocity; the effects of various surface physical and plant physiological processes on dry deposition were discussed. The model captured the observed Vd well but still underestimated it. The inherent limitations of the Wesely scheme applied in WDDM, together with inaccuracies in WDDM's built-in parameters and in the input data for Noah-MP (e.g., LAI), were the key factors causing the underestimation of Vd. Future work is therefore needed to improve the model mechanisms and parameterization.
Femur Model Reconstruction Based on Reverse Engineering and Rapid Prototyping
NASA Astrophysics Data System (ADS)
Tang, Tongming; Zhang, Zheng; Ni, Hongjun; Deng, Jiawen; Huang, Mingyu
Precise reconstruction of 3D models is fundamental to research on the human femur. In this paper we present our approach to tackling this problem. The surface of a human femur was scanned using a hand-held 3D laser scanner. The data obtained, in the form of a point cloud, were then processed using the reverse engineering software Geomagic and the CAD/CAM software CimatronE to reconstruct a digital 3D model. The digital model was then used by a rapid prototyping machine to build a physical model of the femur using 3D printing. The geometric characteristics of the obtained physical model matched those of the original femur. The process of "physical object - 3D data - digital 3D model - physical model" presented in this paper provides a foundation of precise modeling for the digital manufacturing, virtual assembly, stress analysis, and simulated surgery of artificial bionic femurs.
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models, including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, the error covariance matrix, and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) site of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of the large-scale forcing data, which points to deficiencies in the physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are also found in the global simulation of CAM5 when it is compared with satellite data.
Leptonic-decay-constant ratio f(K+)/f(π+) from lattice QCD with physical light quarks.
Bazavov, A; Bernard, C; DeTar, C; Foley, J; Freeman, W; Gottlieb, Steven; Heller, U M; Hetrick, J E; Kim, J; Laiho, J; Levkova, L; Lightman, M; Osborn, J; Qiu, S; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R
2013-04-26
A calculation of the ratio of leptonic decay constants f(K+)/f(π+) makes possible a precise determination of the ratio of Cabibbo-Kobayashi-Maskawa (CKM) matrix elements |V(us)|/|V(ud)| in the standard model, and places a stringent constraint on the scale of new physics that would lead to deviations from unitarity in the first row of the CKM matrix. We compute f(K+)/f(π+) numerically in unquenched lattice QCD using gauge-field ensembles recently generated that include four flavors of dynamical quarks: up, down, strange, and charm. We analyze data at four lattice spacings a ≈ 0.06, 0.09, 0.12, and 0.15 fm with simulated pion masses down to the physical value 135 MeV. We obtain f(K+)/f(π+) = 1.1947(26)(37), where the errors are statistical and total systematic, respectively. This is our first physics result from our N(f) = 2+1+1 ensembles, and the first calculation of f(K+)/f(π+) from lattice-QCD simulations at the physical point. Our result is the most precise lattice-QCD determination of f(K+)/f(π+), with an error comparable to the current world average. When combined with experimental measurements of the leptonic branching fractions, it leads to a precise determination of |V(us)|/|V(ud)| = 0.2309(9)(4) where the errors are theoretical and experimental, respectively.
Advanced computations in plasma physics
NASA Astrophysics Data System (ADS)
Tang, W. M.
2002-05-01
Scientific simulation, in tandem with theory and experiment, is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas, with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales, together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPPs). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point operations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time steps, would not have been possible without access to powerful present-generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.
Theory and Simulation of Reconnection. In memoriam Harry Petschek
NASA Astrophysics Data System (ADS)
Büchner, J.
2006-06-01
Reconnection is a major commonality of solar and magnetospheric physics. It was conjectured by Giovanelli in 1946 to explain particle acceleration in solar flares near magnetic neutral points. Since then it has been broadly applied in space physics, including magnetospheric physics. In a special way this is due to Harry Petschek, who in 1964 published his groundbreaking solution for a 2D magnetized plasma flow in regions containing singularities of vanishing magnetic field. Petschek's reconnection theory was questioned in endless disputes and arguments, but his work stimulated the further investigation of this phenomenon like no other. However, there are questions left open. We consider two of them: "anomalous" resistivity in collisionless space plasmas and the nature of reconnection in three dimensions. The CLUSTER and SOHO missions address these two aspects of reconnection in a complementary way -- the resistivity problem in situ in the magnetosphere and the 3D aspect by remote sensing of the Sun. We demonstrate that the search for answers to both questions leads beyond the applicability of analytical theories and that appropriate numerical approaches are necessary to investigate the essentially nonlinear and nonlocal processes involved. Both micro-physical, kinetic Vlasov-equation-based methods of investigation and large-scale (MHD) simulations are necessary to obtain the geometry and topology of the acting fields and flows.
VERA Core Simulator Methodology for PWR Cycle Depletion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochunas, Brendan; Collins, Benjamin S; Jabaay, Daniel
2015-01-01
This paper describes the methodology developed and implemented in MPACT for performing high-fidelity pressurized water reactor (PWR) multi-cycle core physics calculations. MPACT is being developed primarily for application within the Consortium for Advanced Simulation of Light Water Reactors (CASL) as one of the main components of the VERA Core Simulator, the others being COBRA-TF and ORIGEN. The methods summarized in this paper include a methodology for performing resonance self-shielding and computing macroscopic cross sections, 2-D/1-D transport, nuclide depletion, thermal-hydraulic feedback, and other supporting methods. These methods represent a minimal set needed to simulate high-fidelity models of a realistic nuclear reactor. Results demonstrating this are presented from the simulation of a realistic model of the first cycle of Watts Bar Unit 1. The simulation, which approximates the cycle operation, is observed to be within 50 ppm boron (ppmB) reactivity for all simulated points in the cycle and approximately 15 ppmB for a consistent statepoint. The verification and validation of the PWR cycle depletion capability in MPACT is the focus of two companion papers.
Modeling and Validation of Microwave Ablations with Internal Vaporization
Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L.
2014-01-01
Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this work, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10 and 20 mm away from the heating zone of the microwave antenna in a homogenized ex vivo bovine liver setup. The cross-sectional area of water vapor transport was validated through intra-procedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard index. In general, the correlation in ablation size dimensions improved as the ablation procedure proceeded, with Jaccard indices of 0.27, 0.49, 0.61, 0.67 and 0.69 at 1, 2, 3, 4, and 5 minutes. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481
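The overlap metric used in that validation is simple to state and compute: the Jaccard index is the area of the intersection divided by the area of the union of two binary regions. A minimal sketch follows (Python with NumPy; the two offset discs are synthetic stand-ins for a CT iso-density contour and a simulated vapor-concentration contour):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index |A intersect B| / |A union B| of two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Two offset discs standing in for measured and simulated ablation zones
y, x = np.mgrid[:100, :100]
measured = (x - 50) ** 2 + (y - 50) ** 2 < 20 ** 2
simulated = (x - 56) ** 2 + (y - 50) ** 2 < 20 ** 2
print(f"{jaccard(measured, simulated):.2f}")   # ~0.68, cf. 0.69 at 5 min
```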
Bryce, Richard A
2011-04-01
The ability to accurately predict the interaction of a ligand with its receptor is a key limitation in computer-aided drug design approaches such as virtual screening and de novo design. In this article, we examine current strategies for a physics-based approach to scoring of protein-ligand affinity, as well as outlining recent developments in force fields and quantum chemical techniques. We also consider advances in the development and application of simulation-based free energy methods to study protein-ligand interactions. Fuelled by recent advances in computational algorithms and hardware, there is the opportunity for increased integration of physics-based scoring approaches at earlier stages in computationally guided drug discovery. Specifically, we envisage increased use of implicit solvent models and simulation-based scoring methods as tools for computing the affinities of large virtual ligand libraries. Approaches based on end point simulations and reference potentials allow the application of more advanced potential energy functions to prediction of protein-ligand binding affinities. Comprehensive evaluation of polarizable force fields and quantum mechanical (QM)/molecular mechanical and QM methods in scoring of protein-ligand interactions is required, particularly in their ability to address challenging targets such as metalloproteins and other proteins that make highly polar interactions. Finally, we anticipate increasingly quantitative free energy perturbation and thermodynamic integration methods that are practical for optimization of hits obtained from screened ligand libraries.
Bochmann, Esther S; Steffens, Kristina E; Gryczke, Andreas; Wagner, Karl G
2018-03-01
Simulation of HME processes is a valuable tool for increased process understanding and ease of scale-up. However, the experimental determination of all required input parameters is tedious, namely the melt rheology of the amorphous solid dispersion (ASD) in question. Hence, a procedure to simplify the application of hot-melt extrusion (HME) simulation for forming amorphous solid dispersions (ASDs) is presented. The commercial 1D simulation software Ludovic® was used to conduct (i) simulations using a full experimental data set of all input variables, including melt rheology, and (ii) simulations using model-based melt viscosity data derived only from the glass transition of the ASD and the physical properties of the polymeric matrix. Both types of HME computation were further compared to experimental HME results. Variations in physical properties (e.g., heat capacity, density) and several process characteristics of HME (residence time distribution, energy consumption) among the simulations and experiments were evaluated. The model-based melt viscosity was calculated from the glass transition temperature (Tg) of the investigated blend and the melt viscosity of the polymeric matrix by means of a Tg-viscosity correlation. The results for measured and model-based melt viscosity were similar, with only a few exceptions, leading to similar HME simulation outcomes. In the end, the experimental effort prior to HME simulation could be minimized, and the procedure provides a good starting point for the rational development of ASDs by means of HME. As model excipients, vinylpyrrolidone-vinyl acetate copolymer (COP) in combination with various APIs (carbamazepine, dipyridamole, indomethacin, and ibuprofen) or polyethylene glycol (PEG 1500) as plasticizer was used to form the ASDs.
NASA Astrophysics Data System (ADS)
Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter
2016-10-01
This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.
Monte-Carlo background simulations of present and future detectors in x-ray astronomy
NASA Astrophysics Data System (ADS)
Tenzer, C.; Kendziorra, E.; Santangelo, A.
2008-07-01
Reaching a low-level and well understood internal instrumental background is crucial for the scientific performance of an X-ray detector and, therefore, a main objective of the instrument designers. Monte-Carlo simulations of the physics processes and interactions taking place in a space-based X-ray detector as a result of its orbital environment can be applied to explain the measured background of existing missions. They are thus an excellent tool to predict and optimize the background of future observatories. Weak points of a design and the main sources of the background can be identified and methods to reduce them can be implemented and studied within the simulations. Using the Geant4 Monte-Carlo toolkit, we have created a simulation environment for space-based detectors and we present results of such background simulations for XMM-Newton's EPIC pn-CCD camera. The environment is also currently used to estimate and optimize the background of the future instruments Simbol-X and eRosita.
Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA
NASA Astrophysics Data System (ADS)
Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude
2017-05-01
When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections extracted from the CPA100 MCTS code has been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, two sets of DPKs simulated with "Geant4-DNA-CPA100" were compared - the first set using Geant4's default settings, and the second using CPA100's original default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences were observed between 1 keV and 10 keV. Notably, the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100". The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and "Geant4-DNA-CPA100" were caused solely by their different cross sections. The different scoring and interpolation methods used in CPA100 and Geant4 to calculate DPKs produced differences close to 3.0% near the source.
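The scoring step behind such kernels can be illustrated compactly: a DPK is built by histogramming energy-deposit events into concentric spherical shells around the point source and dividing by each shell's mass. The sketch below (Python with NumPy) does exactly that for synthetic deposit events; the gamma-distributed radii and uniform ~10 eV deposits are placeholders, not Geant4-DNA or CPA100 output:

```python
import numpy as np

rng = np.random.default_rng(2)

def dose_point_kernel(r, e, r_max, n_bins, rho=1.0e3):
    """Score a dose-point kernel: energy per unit mass (Gy) deposited in
    concentric spherical shells around an isotropic point source.

    r: radial distance of each event (m); e: deposited energy (J);
    rho: density of liquid water (kg/m^3).
    """
    edges = np.linspace(0.0, r_max, n_bins + 1)
    e_shell, _ = np.histogram(r, bins=edges, weights=e)
    vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return e_shell / (rho * vol)

# Synthetic stand-in for Monte Carlo track-structure output
r = rng.gamma(2.0, 5.0e-9, 100_000)   # event radii (m)
e = np.full(r.size, 1.6e-18)          # ~10 eV per event, in joules
dpk = dose_point_kernel(r, e, r_max=5.0e-8, n_bins=25)
print(dpk[:5])                        # dose falls off with radius
```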
Air source integrated heat pump simulation model for EnergyPlus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Bo; New, Joshua; Baxter, Van
An Air Source Integrated Heat Pump (AS-IHP) is an air-source, multi-functional space-conditioning unit with a water heating (WH) function, which can yield large energy savings by recovering condensing waste heat for domestic water heating. This paper summarizes the development of the EnergyPlus AS-IHP model, introducing the physics, sub-models, working modes, and control logic. Based on the model, building energy simulations were conducted that demonstrate greater than 50% annual energy savings, in comparison to a baseline heat pump with an electric water heater, across 10 US cities, using the EnergyPlus quick-service restaurant template building. We assessed the water heating energy saving potential of the AS-IHP versus both gas and electric baseline systems, and pointed out climate zones where AS-IHPs are promising. In addition, a grid integration strategy was investigated to reveal further energy saving and electricity cost reduction potentials, via increasing the water heating set point temperature during off-peak hours and using larger water tanks.
MacLaren, S. A.; Schneider, M. B.; Widmann, K.; ...
2014-03-13
Here, indirect drive experiments at the National Ignition Facility are designed to achieve fusion by imploding a fuel capsule with x rays from a laser-driven hohlraum. Previous experiments have been unable to determine whether a deficit in measured ablator implosion velocity relative to simulations is due to inadequate models of the hohlraum or of the ablator physics. ViewFactor experiments allow for the first time a direct measure of the x-ray drive from the capsule point of view. The experiments show a 15%-25% deficit relative to simulations and thus explain nearly all of the disagreement with the velocity data. In addition, the data from this open geometry provide much greater constraints on a predictive model of laser-driven hohlraum performance than the nominal ignition target.
Effect of point defects on the thermal conductivity of UO2: molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xiang-Yang; Stanek, Christopher Richard; Andersson, Anders David Ragnar
2015-07-21
The thermal conductivity of uranium dioxide (UO2) fuel is an important materials property that affects fuel performance, since it is a key parameter determining the temperature distribution in the fuel, thus governing, e.g., dimensional changes due to thermal expansion, fission gas release rates, etc. [1] The thermal conductivity of UO2 nuclear fuel is also affected by fission gas, fission products, defects, and microstructural features such as grain boundaries. Here, molecular dynamics (MD) simulations are carried out to determine quantitatively the effect of irradiation-induced point defects on the thermal conductivity of UO2, as a function of defect concentration, for a range of temperatures, 300-1500 K. The results will be used to develop enhanced continuum thermal conductivity models for MARMOT and BISON by INL. These models express the thermal conductivity as a function of microstructure state variables, thus enabling thermal conductivity models with a closer connection to the physical state of the fuel [2].
Computational modeling of unsteady loads in tidal boundary layers
NASA Astrophysics Data System (ADS)
Alexander, Spencer R.
As ocean current turbines move from the design stage into production and installation, a better understanding of oceanic turbulent flows and localized loading is required to more accurately predict turbine performance and durability. In the present study, large eddy simulations (LES) are used to estimate the unsteady loads and bending moments that would be experienced by an ocean current turbine placed in a tidal channel. The LES model captures currents due to winds, waves, thermal convection, and tides, thereby providing a high degree of physical realism. Probability density functions, means, and variances of the unsteady loads are calculated, and further statistical measures of the turbulent environment are also examined, including vertical profiles of Reynolds stresses, two-point correlations, and velocity structure functions. The simulations show that waves and tidal velocity had the largest impact on the strength of off-axis turbine loads. By contrast, boundary layer stability and wind speeds were shown to have minimal impact on the strength of off-axis turbine loads. It is shown both analytically and using simulation results that either transverse velocity structure functions or two-point transverse velocity spatial correlations are good predictors of unsteady loading in tidal channels.
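Since the text singles out transverse velocity structure functions as load predictors, here is a compact way to compute one from gridded output. The sketch below (Python with NumPy) evaluates the second-order transverse structure function S2(r) = <(v(x + r e_x) - v(x))^2> on a periodic plane; the smooth random field is a synthetic stand-in for a simulated velocity component, not LES data from the study:

```python
import numpy as np

def transverse_s2(v, dx, seps):
    """S2(r) = <(v(x + r e_x) - v(x))^2> on a periodic grid, with v the
    velocity component transverse to the separation direction e_x."""
    s2 = [np.mean((np.roll(v, -s, axis=0) - v) ** 2) for s in seps]
    return np.asarray(seps) * dx, np.asarray(s2)

# Synthetic smooth random field standing in for one velocity component
rng = np.random.default_rng(3)
n = 256
k = np.fft.fftfreq(n) * n
KX, KY = np.meshgrid(k, k, indexing="ij")
spec = np.exp(-(KX**2 + KY**2) / 200.0) * np.fft.fft2(rng.standard_normal((n, n)))
v = np.real(np.fft.ifft2(spec))

r, s2 = transverse_s2(v, dx=1.0, seps=[1, 2, 4, 8, 16, 32, 64])
for ri, s2i in zip(r, s2):
    print(f"r = {ri:5.1f}   S2 = {s2i:.3e}")   # grows, then saturates
```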
Orientation and spread of reconnection x-line in asymmetric current sheets
NASA Astrophysics Data System (ADS)
Liu, Y. H.; Hesse, M.; Wendel, D. E.; Kuznetsova, M.; Wang, S.
2017-12-01
The magnetic field in solar wind plasmas can shear with Earth's dipole magnetic field at arbitrary angles, and the plasma conditions on the two sides of the (magnetopause) current sheet can greatly differ. One of the outstanding questions in such asymmetric geometry is what local physics controls the orientation of the reconnection x-line; while the x-line in a simplified 2D model (simulation) always points out of the simulation plane by design, it is unclear how to predict the orientation of the x-line in a fully three-dimensional (3D) system. Using kinetic simulations run on Blue Waters, we develop an approach to explore this 3D nature of the reconnection x-line, and test hypotheses including maximizing the reconnection rate, the tearing mode growth rate, or the reconnection outflow speed, as well as the bisection solution. Practically, this orientation should correspond to the M-direction of the local LMN coordinate system that is often employed to analyze diffusion region crossings by the Magnetospheric Multiscale Mission (MMS). In this talk, we will also discuss how an x-line spreads from a point source in asymmetric geometries, and the boundary effect on the development of the reconnection x-line and turbulence.
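The LMN frame mentioned above is conventionally built from field data with minimum variance analysis (MVA). The sketch below shows that standard construction, not the authors' hypothesis tests; the input B is an assumed time series of magnetic field vectors across the current sheet.

```python
import numpy as np

def mva_lmn(B):
    """Minimum variance analysis of a magnetic field series B ([n, 3]).
    Returns unit vectors (L, M, N) from maximum to minimum variance."""
    Bm = B - B.mean(axis=0)
    cov = Bm.T @ Bm / B.shape[0]          # 3x3 magnetic variance matrix
    evals, evecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    N = evecs[:, 0]   # minimum variance: current sheet normal
    M = evecs[:, 1]   # intermediate variance: nominal x-line direction
    L = evecs[:, 2]   # maximum variance: reconnecting field direction
    return L, M, N
```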
Phase-space dynamics of runaway electrons in magnetic fields
Guo, Zehua; McDevitt, Christopher Joseph; Tang, Xian-Zhu
2017-02-16
Dynamics of runaway electrons in magnetic fields are governed by the competition of three dominant physical processes: parallel electric field acceleration, Coulomb collisions, and synchrotron radiation. Examination of the energy and pitch-angle flows reveals that the presence of a local vortex structure and a global circulation is crucial to the saturation of primary runaway electrons. Models for the vortex structure, which has an O-point to X-point connection, and for the bump in the runaway electron distribution in energy space have been developed and compared against the simulation data. Lastly, identification of these velocity-space structures opens a new venue to re-examine the conventional understanding of runaway electron dynamics in magnetic fields.
NASA Astrophysics Data System (ADS)
Wang, Hai-Xiao; Chen, Yige; Hang, Zhi Hong; Kee, Hae-Young; Jiang, Jian-Hua
2017-09-01
The Dirac equation for relativistic electron waves is the parent model for Weyl and Majorana fermions as well as topological insulators. Simulation of Dirac physics in three-dimensional photonic crystals, though fundamentally important for topological phenomena at optical frequencies, encounters the challenge of synthesis of both Kramers double degeneracy and parity inversion. Here we show how type-II Dirac points—exotic Dirac relativistic waves yet to be discovered—are robustly realized through the nonsymmorphic screw symmetry. The emergent type-II Dirac points carry nontrivial topology and are the mother states of type-II Weyl points. The proposed all-dielectric architecture enables robust cavity states at photonic-crystal—air interfaces and anomalous refraction, with very low energy dissipation.
Fuels characterization studies. [jet fuels
NASA Technical Reports Server (NTRS)
Seng, G. T.; Antoine, A. C.; Flores, F. J.
1980-01-01
Current analytical techniques used in the characterization of broadened-property fuels are briefly described. Included are liquid chromatography, gas chromatography, and nuclear magnetic resonance spectroscopy. High performance liquid chromatographic group-type methods development is being approached from several directions, including aromatic fraction standards development and the elimination of standards through removal or partial removal of the alkene and aromatic fractions or through the use of whole fuel refractive index values. More sensitive methods for alkene determination using an ultraviolet-visible detector are also being pursued. Some of the more successful gas chromatographic physical property determinations for petroleum-derived fuels are the distillation curve (simulated distillation), heat of combustion, hydrogen content, API gravity, viscosity, flash point, and (to a lesser extent) freezing point.
Multiple-component Decomposition from Millimeter Single-channel Data
NASA Astrophysics Data System (ADS)
Rodríguez-Montoya, Iván; Sánchez-Argüelles, David; Aretxaga, Itziar; Bertone, Emanuele; Chávez-Dagostino, Miguel; Hughes, David H.; Montaña, Alfredo; Wilson, Grant W.; Zeballos, Milagros
2018-03-01
We present an implementation of a blind source separation algorithm to remove foregrounds from millimeter surveys made by single-channel instruments. In order to make such a decomposition possible over single-wavelength data, we generate levels of artificial redundancy, then perform a blind decomposition, calibrate the resulting maps, and lastly measure physical information. We simulate the reduction pipeline using mock data: atmospheric fluctuations, extended astrophysical foregrounds, and point-like sources; we then apply the same methodology to the Astronomical Thermal Emission Camera (AzTEC)/ASTE survey of the Great Observatories Origins Deep Survey–South (GOODS-S). In both applications, our technique robustly decomposes redundant maps into their underlying components, reducing flux bias, improving signal-to-noise ratio, and minimizing information loss. In particular, GOODS-S is decomposed into four independent physical components: one of them is the already-known map of point sources, two are atmospheric and systematic foregrounds, and the fourth component is an extended emission that can be interpreted as the confusion background of faint sources.
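As a toy illustration of blind decomposition over redundant single-wavelength maps, the sketch below mixes three mock components (point sources, extended emission, atmosphere) and separates them with scikit-learn's FastICA. The mixing model and component shapes are invented; the paper's redundancy-generation and calibration steps are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
npix = 4096

points = np.zeros(npix)                            # point-like sources
points[rng.choice(npix, 30, replace=False)] = rng.uniform(5, 10, 30)
x = np.linspace(0, 1, npix)
extended = np.sin(2 * np.pi * 3 * x)               # extended foreground
atmosphere = 0.5 * rng.standard_normal(npix)       # atmospheric fluctuations
S = np.column_stack([points, extended, atmosphere])

A = rng.uniform(0.5, 1.5, size=(3, 3))             # unknown mixing matrix
X = S @ A.T                                        # redundant observed "maps"

ica = FastICA(n_components=3, random_state=0)
S_est = ica.fit_transform(X)    # recovered components, up to order and scale
```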
Exploring the implication of climate process uncertainties within the Earth System Framework
NASA Astrophysics Data System (ADS)
Booth, B.; Lambert, F. H.; McNeal, D.; Harris, G.; Sexton, D.; Boulton, C.; Murphy, J.
2011-12-01
Uncertainties in the magnitude of future climate change have been a focus of a great deal of research. Much of the work with General Circulation Models has focused on the atmospheric response to changes in atmospheric composition, while other processes remain outside these frameworks. Here we introduce an ensemble of new simulations, based on an Earth System configuration of HadCM3C, designed to explore uncertainties in both physical (atmospheric, oceanic, and aerosol physics) and carbon cycle processes, using perturbed parameter approaches previously used to explore atmospheric uncertainty. Framed in the context of the climate response to future changes in emissions, the resultant future projections represent significantly broader uncertainty than existing concentration-driven GCM assessments. The systematic nature of the ensemble design enables interactions between components to be explored. For example, we show how metrics of physical processes (such as climate sensitivity) are also influenced by carbon cycle parameters. The suggestion from this work is that carbon cycle processes contribute an uncertainty in future climate projections comparable to that of the more conventionally explored atmospheric feedbacks. The broad range of climate responses explored within these ensembles, rather than representing a reason for inaction, provides information on lower-likelihood but high-impact changes. For example, while the majority of these simulations suggest that future Amazon forest extent is resilient to the projected climate changes, a small number simulate dramatic forest dieback. This ensemble represents a framework to examine these risks, breaking them down into physical processes (such as ocean temperature drivers of rainfall change) and vegetation processes (where uncertainties point towards requirements for new observational constraints).
Tavakoli, Mohammad Bagher; Reiazi, Reza; Mohammadi, Mohammad Mehdi; Jabbari, Keyvan
2015-01-01
After the idea of antiproton cancer treatment was proposed in 1984, many experiments were launched to investigate different aspects of the physical and radiobiological properties of the antiproton, which come from its annihilation reactions. One of these experiments has been done at the European Organization for Nuclear Research (CERN) using the antiproton decelerator. The ultimate goal of this experiment was to assess the dosimetric and radiobiological properties of beams of antiprotons in order to estimate the suitability of antiprotons for radiotherapy. One difficulty along this way was the unavailability of the antiproton beam at CERN for a long time, so verification of Monte Carlo codes to simulate the antiproton depth dose could be useful. Among available simulation codes, Geant4 provides acceptable flexibility and extensibility, which has progressively led to the development of novel Geant4 applications in research domains, especially modeling the biological effects of ionizing radiation at the sub-cellular scale. In this study, the depth dose corresponding to the CERN antiproton beam energy was calculated with Geant4 using all the standard physics lists currently available and benchmarked for other use cases. Overall, none of the standard physics lists was able to reproduce the antiproton percentage depth dose. Although the results with some models were promising, the Bragg peak level remained a point of concern for our study. It is concluded that the Bertini model with high-precision neutron tracking (QGSP_BERT_HP) best matches the experimental data, though it is also the slowest of the physics lists for simulating events.
Computational fluid dynamics uses in fluid dynamics/aerodynamics education
NASA Technical Reports Server (NTRS)
Holst, Terry L.
1994-01-01
The field of computational fluid dynamics (CFD) has advanced to the point where it can now be used for the purpose of fluid dynamics physics education. Because of the tremendous wealth of information available from numerical simulation, certain fundamental concepts can be efficiently communicated using an interactive graphical interrogation of the appropriate numerical simulation data base. In other situations, a large amount of aerodynamic information can be communicated to the student by interactive use of simple CFD tools on a workstation or even in a personal computer environment. The emphasis in this presentation is to discuss ideas for how this process might be implemented. Specific examples, taken from previous publications, will be used to highlight the presentation.
An adaptive approach to the physical annealing strategy for simulated annealing
NASA Astrophysics Data System (ADS)
Hasegawa, M.
2013-02-01
A new and reasonable method for adaptive implementation of simulated annealing (SA) is studied on two types of random traveling salesman problems. The idea is based on a previous finding on the search characteristics of threshold algorithms, namely, the primary role of the relaxation dynamics in their finite-time optimization process. It is shown that the effective temperature for optimization can be predicted from the system's behavior, analogous to the stabilization phenomenon occurring in the heating process starting from a quenched solution. The subsequent slow cooling near the predicted point draws out the inherent optimizing ability of finite-time SA in a more straightforward manner than the conventional adaptive approach.
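A minimal sketch of the two-phase idea on a random TSP appears below: reheat a quenched tour until its energy first destabilizes, take that temperature as the effective optimization temperature, and then cool slowly around it. The 2-opt move set and the simple 2% destabilization threshold are stand-ins, not the paper's stabilization analysis.

```python
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def sweep(tour, pts, T, n_moves=200):
    # One Metropolis sweep of random 2-opt moves at temperature T
    E = tour_length(tour, pts)
    for _ in range(n_moves):
        i, j = sorted(random.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        dE = tour_length(cand, pts) - E
        if dE <= 0 or random.random() < math.exp(-dE / T):
            tour, E = cand, E + dE
    return tour, E

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(60)]
tour, E0 = sweep(list(range(60)), pts, T=1e-9)   # quench-like start

# Phase 1: heat until the energy first rises appreciably; use that
# temperature as the predicted effective optimization temperature.
T = 1e-4
tour, E = sweep(tour, pts, T)
while E <= 1.02 * E0:                            # stand-in criterion
    T *= 1.5
    tour, E = sweep(tour, pts, T)
T_eff = T

# Phase 2: slow cooling in a narrow band around the predicted point
while T > 0.1 * T_eff:
    tour, E = sweep(tour, pts, T)
    T *= 0.97
```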
Physics Based Modeling of Compressible Turbulence
2016-11-07
Sword, David O; Thomas, K Jackson; Wise, Holly H; Brown, Deborah D
2017-01-01
Sophisticated high-fidelity human simulation (HFHS) manikins allow for practice of both evaluation and treatment techniques in a controlled environment in which real patients are not put at risk. However, due to high demand, access to HFHS by students has been very competitive and limited. In the present study, a basic CPR manikin with a speaker implanted in the chest cavity and internet access to a variety of heart and breath sounds was used. Students were evaluated on their ability to locate and identify auscultation sites and heart/breath sounds. A five-point Likert scale survey was administered to gain insight into student perceptions on the use of this simulation method. Our results demonstrated that 95% of students successfully identified the heart and breath sounds. Furthermore, survey results indicated that 75% of students agreed or strongly agreed that this manner of evaluation was an effective way to assess their auscultation skills. Based on performance and perception, we conclude that a simulation method as described in this paper is a viable and cost-effective means of evaluating auscultation competency in not only student physical therapists but across other health professions as well.
Numerical Coupling and Simulation of Point-Mass System with the Turbulent Fluid Flow
NASA Astrophysics Data System (ADS)
Gao, Zheng
A computational framework that combines the Eulerian description of the turbulence field with a Lagrangian point-mass ensemble is proposed in this dissertation. Depending on the Reynolds number, the turbulence field is simulated using Direct Numerical Simulation (DNS) or an eddy viscosity model. Meanwhile, the particle systems, such as a spring-mass system and cloud droplets, are modeled using ordinary differential equations, which are stiff and hence pose a challenge to the stability of the entire system. This computational framework is applied to the numerical study of parachute deceleration and cloud microphysics. These two distinct problems can be uniformly modeled with Partial Differential Equations (PDEs) and Ordinary Differential Equations (ODEs), and numerically solved in the same framework. For the parachute simulation, a novel porosity model is proposed to simulate the porous effects of the parachute canopy. This model is easy to implement with the projection method and is able to reproduce Darcy's law as observed in experiments. Moreover, the impacts of using different versions of the k-epsilon turbulence model in the parachute simulation have been investigated; the study concludes that the standard and Re-Normalisation Group (RNG) models may overestimate the turbulence effects when the Reynolds number is small, while the Realizable model performs consistently at both large and small Reynolds numbers. For the other application, cloud microphysics, the cloud entrainment-mixing problem is studied in the same numerical framework. Three sets of DNS are carried out with both decaying and forced turbulence. The numerical results suggest a new way to parameterize the cloud mixing degree using dynamical measures. The numerical experiments also verify the negative relationship between the droplet number concentration and the vorticity field. The results imply that gravity has less impact on forced turbulence than on decaying turbulence. In summary, the proposed framework can be used to solve physics problems that involve a turbulence field and a point-mass system, and it therefore has broad applications.
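The stiffness described above is typically handled with an implicit integrator rather than a vanishingly small explicit step. The sketch below couples a light, stiffly sprung point mass to a prescribed stand-in fluid velocity through linear drag and integrates with SciPy's implicit Radau method; the parameters and one-way coupling are illustrative assumptions, not the dissertation's solver.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, c = 1e-3, 1e4, 0.5            # light mass, stiff spring, drag coefficient

def u_fluid(t, x):
    return np.sin(2 * np.pi * t)    # surrogate for the DNS/LES velocity field

def rhs(t, y):
    x, v = y
    drag = c * (u_fluid(t, x) - v)  # one-way fluid-to-particle coupling
    return [v, (-k * x + drag) / m]

# Implicit Radau integrates the stiff system without a tiny explicit step
sol = solve_ivp(rhs, (0.0, 2.0), [0.01, 0.0], method="Radau", rtol=1e-6)
```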
Electrical circuit modeling and analysis of microwave acoustic interaction with biological tissues.
Gao, Fei; Zheng, Qian; Zheng, Yuanjin
2014-05-01
Numerical study of microwave imaging and microwave-induced thermoacoustic imaging utilizes finite difference time domain (FDTD) analysis to simulate microwave and acoustic interaction with biological tissues. This approach is time consuming due to complex grid segmentation and numerous calculations, is not straightforward to interpret because it offers no analytical solution or physical explanation, and is incompatible with hardware development, which requires a circuit simulator such as SPICE. In this paper, instead of conventional FDTD numerical simulation, an equivalent electrical circuit model is proposed to model the microwave acoustic interaction with biological tissues for fast simulation and quantitative analysis in both one and two dimensions (2D). The equivalent circuit of an ideal point-like tissue for microwave-acoustic interaction is proposed, including a transmission line, a voltage-controlled current source, an envelope detector, and a resistor-inductor-capacitor (RLC) network, to model the microwave scattering, thermal expansion, and acoustic generation. Based on this, a two-port network of the point-like tissue is built and characterized using pseudo S-parameters and transducer gain. A two-dimensional circuit network, including an acoustic scatterer and an acoustic channel, is also constructed to model the 2D spatial information and the acoustic scattering effect in heterogeneous media. FDTD simulations, circuit simulations, and experimental measurements are all performed and compared in terms of time domain, frequency domain, and pseudo S-parameter characterization. 2D circuit network simulations are also performed under different scenarios, including different tumor sizes and the effect of an acoustic scatterer. The proposed circuit model of microwave acoustic interaction with biological tissue agrees well with FDTD-simulated and experimentally measured results. The pseudo S-parameters and characteristic gain can globally evaluate the performance of tumor detection. The 2D circuit network enables the potential to combine quasi-numerical simulation and circuit simulation in a uniform simulator for codesign and simulation of a microwave acoustic imaging system, bridging bioeffect studies and hardware development seamlessly.
Speth, Jana; Frenzel, Clemens; Voss, Ursula
2013-09-01
We present Activity Analysis as a new method for the quantification of subjective reports of altered states of consciousness with regard to the indicated level of simulated motor activity. Empirical linguistic activity analysis was conducted on dream reports obtained immediately after EEG-controlled periods of hypnagogic hallucinations and REM sleep in the sleep laboratory. Reports of REM dreams exhibited a significantly higher level of simulated physical dreamer activity, while hypnagogic hallucinations appear to be experienced mostly from the point of view of a passive observer. This study lays the groundwork for clinical research on the level of simulated activity in pathologically altered states of subjective experience, for example in the REM dreams of clinically depressed patients, or in intrusions and dreams of patients diagnosed with PTSD.
Clocked Magnetostriction-Assisted Spintronic Device Design and Simulation
NASA Astrophysics Data System (ADS)
Mousavi Iraei, Rouhollah; Kani, Nickvash; Dutta, Sourav; Nikonov, Dmitri E.; Manipatruni, Sasikanth; Young, Ian A.; Heron, John T.; Naeemi, Azad
2018-05-01
We propose a heterostructure device composed of magnets and piezoelectrics that significantly improves the delay and the energy dissipation of an all-spin logic (ASL) device. This paper studies and models the physics of the device, illustrates its operation, and benchmarks its performance using SPICE simulations. We show that the proposed device maintains the low-voltage operation, non-reciprocity, non-volatility, cascadability, and thermal reliability of the original ASL device. Moreover, by utilizing the deterministic switching of a magnet from the saddle point of the energy profile, the device is more efficient in terms of energy and delay and is robust to thermal fluctuations. The simulation results show that, compared to ASL devices, the proposed device achieves 21x shorter delay and 27x lower energy dissipation per bit for a 32-bit arithmetic-logic unit (ALU).
Cosmological Simulations of Galaxy Clusters
NASA Astrophysics Data System (ADS)
Borgani, Stefano; Kravtsov, Andrey
2011-02-01
We review recent progress in the description of the formation and evolution of galaxy clusters in a cosmological context by using state-of-the-art numerical simulations. We focus our presentation on the comparison between simulated and observed X-ray properties, and we also discuss numerical predictions on properties of the galaxy population in clusters, as observed in the optical band. Many of the salient observed properties of clusters, such as scaling relations between X-ray observables and total mass, radial profiles of entropy and density of the intracluster gas, and the radial distribution of galaxies, are reproduced quite well. In particular, the outer regions of clusters, at radii beyond about 10 per cent of the virial radius, are quite regular and exhibit scaling with mass remarkably close to that expected in the simplest case, in which only the action of gravity determines the evolution of the intra-cluster gas. However, simulations generally fail to reproduce the observed "cool core" structure of clusters: simulated clusters generally exhibit a significant excess of gas cooling in their central regions, which causes both an overestimate of the star formation in the cluster centers and incorrect temperature and entropy profiles. The total baryon fraction in clusters is below the mean universal value, by an amount which depends on the cluster-centric distance and the physics included in the simulations, with interesting tensions between observed stellar and gas fractions in clusters and the predictions of simulations. Besides their important implications for the cosmological application of clusters, these puzzles also point towards the important role played by additional physical processes, beyond those already included in the simulations. We review the role played by these processes, along with the difficulty of their implementation, and discuss the outlook for future progress in the numerical modeling of clusters.
Computational physics of the mind
NASA Astrophysics Data System (ADS)
Duch, Włodzisław
1996-08-01
In the XIX century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz, and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics makes it possible to simulate complex neural processes, giving a chance to answer not only the original psychophysical questions but also to create models of the mind. In this paper several approaches relevant to modeling of the mind are outlined. Since direct modeling of brain functions is rather limited due to the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neurosciences, to the mind, or cognitive sciences, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in the understanding of the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.
Web-Based Computational Chemistry Education with CHARMMing I: Lessons and Tutorial
Miller, Benjamin T.; Singh, Rishi P.; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S.; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R.; Woodcock, H. Lee
2014-01-01
This article describes the development, implementation, and use of web-based “lessons” to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that “point and click” simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance. PMID:25057988
Steady State Global Simulations of Microturbulence
NASA Astrophysics Data System (ADS)
Lee, W. W.
2004-11-01
Critical physics issues for the steady state simulation of ion temperature gradient (ITG) drift instabilities are associated with collisionless and collisional dissipation processes. In this paper, we will report on recent investigations involving the inclusion of the velocity-space nonlinearity term in our global Gyrokinetic Toroidal Code (GTC) [1]. It is important to point out that this term has not been critically examined in the turbulence simulation community [2], although it has attracted some recent interest for energy conservation considerations as well as for its effect on transport [3]. The nonlinearity in question is actually of the same order as the nonlinear zonal flow, and it can also play an interesting role in entropy balance for steady state transport [4]. Our initial results with adiabatic electrons have shown that the velocity-space nonlinearity for the ions can have a small but non-negligible effect at the early nonlinear stage of the ITG simulation. In the later stage, it can actually enhance the level of zonal flow and, in turn, reduce the steady state thermal flux. Enhanced fluctuation of the (n=0, m=1) mode has also been observed. More detailed simulation results, including collisions [5], as well as a theoretical attempt to understand the nonlinear physics of mode-coupling and entropy balance, will be reported. The implication of the present work for transport-time-scale simulation including Alfven kinetic-MHD physics [6] will also be discussed. [1] Z. Lin, T. S. Hahm, W. W. Lee, W. M. Tang and R. White, Science 281, 1835 (1998). [2] W. M. Nevins et al., Plasma Microturbulence Project, this conference. [3] L. Villard et al., Nuclear Fusion 44, 172 (2004). [4] W. W. Lee and W. M. Tang, Phys. Fluids 31, 612 (1988). [5] Z. Lin, T. S. Hahm, W. W. Lee, W. M. Tang and R. White, Phys. Plasmas 7, 1857 (2000). [6] W. W. Lee and H. Qin, Phys. Plasmas 10, 3196 (2003).
Use of Computational Fluid Dynamics for improvement of Balloon Borne Frost Point Hygrometer
NASA Astrophysics Data System (ADS)
Jorge, Teresa; Brunamonti, Simone; Wienhold, Frank G.; Peter, Thomas
2017-04-01
In the StratoClim 2016 Balloon Campaign in Nainital (India) during the Asian Summer Monsoon, balloon-borne payloads containing the EN-SCI CFH (Cryogenic Frost point Hygrometer) were flown to observe water vapor and cloud formation processes in the Upper Troposphere and Lower Stratosphere. Some of the recorded atmospheric water vapor profiles showed unexpected values above the tropopause and were considered contaminated. To interpret these contaminated results, and in the scope of the development of a new frost point hygrometer, the Peltier Cooled Frost point Hygrometer (PCFH), computational fluid dynamics (CFD) simulations with the ANSYS Fluent software have been carried out. These simulations incorporate the fluid and thermodynamic characteristics of stratospheric air to predict airflow in the inlet tube of the instrument. An ice-wall boundary layer, based on the Murphy and Koop (2005) ice-vapor parametrization, was modeled as the suspected source of the unexpected water vapor. Sensitivity was tested with respect to the CFD mesh, the ice-wall surface, the inlet flow, the inlet tube dimensions, the sensor head location, and variations of atmospheric conditions. The development of the PCFH uses the results of this study and other CFD studies, concerning the whole-instrument boundary layer and heat exchanger design, to improve on previous realizations of frost point hygrometers. As a novelty in the field of frost point hygrometry, Optimal Control Theory will be used to optimize the cooling of the mirror by the Peltier element, which will be described in a physical "plant model", since the cooling capacity of a cryogenic liquid will no longer be available in the new instrument.
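For reference, the Murphy and Koop (2005) saturation vapor pressure over ice can be coded in a few lines. The coefficients below are transcribed from the commonly quoted form of that parametrization and should be checked against the original paper before use.

```python
import numpy as np

def p_sat_ice(T):
    """Saturation vapor pressure over ice (Pa), T in kelvin; commonly
    quoted Murphy & Koop (2005) form, stated valid for T > 110 K."""
    return np.exp(9.550426 - 5723.265 / T + 3.53068 * np.log(T)
                  - 0.00728332 * T)

# Frost point conditions near the tropical tropopause (illustrative)
print(p_sat_ice(np.array([180.0, 190.0, 200.0])))
```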
A 3-D Finite-Volume Non-hydrostatic Icosahedral Model (NIM)
NASA Astrophysics Data System (ADS)
Lee, Jin
2014-05-01
The Nonhydrostatic Icosahedral Model (NIM) implements the latest numerical innovations in three-dimensional finite-volume discretization on a quasi-uniform icosahedral grid suitable for ultra-high resolution simulations. NIM's modeling goal is to improve numerical accuracy for weather and climate simulations as well as to utilize state-of-the-art computing architectures, such as massively parallel CPUs and GPUs, to deliver routine high-resolution forecasts in a timely manner. NIM dynamical core innovations include: * a local coordinate system that remaps the spherical surface to a plane for numerical accuracy (Lee and MacDonald, 2009); * grid points in a table-driven horizontal loop that allow any horizontal point sequence (A. E. MacDonald et al., 2010); * Flux-Corrected Transport formulated on finite-volume operators to maintain conservative, positive-definite transport (J.-L. Lee et al., 2010); * icosahedral grid optimization (Wang and Lee, 2011); * all differentials evaluated as three-dimensional finite-volume integrals around the control volume. The three-dimensional finite-volume solver in NIM is designed to improve the pressure gradient calculation and orographic precipitation over complex terrain. The NIM dynamical core has been successfully verified with various non-hydrostatic benchmark test cases, such as internal gravity waves and mountain waves, in the Dynamical Core Model Intercomparison Project (DCMIP). Physical parameterizations suitable for NWP are incorporated into the NIM dynamical core and successfully tested with multi-month aqua-planet simulations. Recently, NIM has started real-data simulations using GFS initial conditions. Results from the idealized tests as well as the real-data simulations will be shown at the conference.
Operational analysis for the drug detection problem
NASA Astrophysics Data System (ADS)
Hoopengardner, Roger L.; Smith, Michael C.
1994-10-01
New techniques and sensors to identify the molecular, chemical, or elemental structures unique to drugs are being developed under several national programs. However, the challenge faced by U.S. drug enforcement and Customs officials goes far beyond the simple technical capability to detect an illegal drug. Entry points into the U.S. include ports, border crossings, and airports where cargo ships, vehicles, and aircraft move huge volumes of freight. Current technology and personnel are able to physically inspect only a small fraction of the entering cargo containers. The complexity of how best to utilize new technology to aid the detection process, and yet not adversely affect the processing of vehicles and time-sensitive cargo, is the challenge faced by these officials. This paper describes an ARPA-sponsored initiative to develop a simple, yet useful, method for examining the operational consequences of utilizing various procedures and technologies in combination to achieve an 'acceptable' level of detection probability. Since Customs entry points into the U.S. vary from huge seaports to a one-lane highway checkpoint on the border with Canada or Mexico, no one system can possibly be right for all points. This approach can examine alternative concepts for using different techniques/systems for different types of entry points. Operational measures reported include the average time to process vehicles and containers, the average and maximum numbers in the system at any time, and the utilization of inspection teams. The method is implemented via a PC-based simulation written in the GPSS-PC language. Input to the simulation model is (1) the individual detection probabilities and false positive rates for each detection technology or procedure, (2) the inspection time for each procedure, (3) the system configuration, and (4) the physical distance between inspection stations. The model offers on-line graphics to examine effects as the model runs.
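A minimal Python re-sketch of this kind of single-lane inspection model (the original was written in GPSS-PC) follows; the arrival rate, inspection time, and detection/false-positive probabilities are placeholders, not values from the study.

```python
import random

def simulate(n_vehicles=5000, arrival_rate=1.0, mean_inspect=0.8,
             p_contraband=0.02, p_detect=0.9, p_false=0.05, seed=0):
    """One inspection lane: Poisson arrivals, exponential inspection times.
    Returns mean time in system, inspector utilization, and counts of
    true detections and false positives."""
    random.seed(seed)
    t = free_at = busy = 0.0
    hits = false_alarms = 0
    times = []
    for _ in range(n_vehicles):
        t += random.expovariate(arrival_rate)        # next vehicle arrives
        start = max(t, free_at)                      # queue if lane is busy
        service = random.expovariate(1.0 / mean_inspect)
        free_at = start + service
        busy += service
        times.append(free_at - t)                    # total time in system
        if random.random() < p_contraband:
            hits += random.random() < p_detect       # true detection
        else:
            false_alarms += random.random() < p_false
    return sum(times) / len(times), busy / free_at, hits, false_alarms

print(simulate())
```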
NASA Astrophysics Data System (ADS)
Sanz, Eduardo
2009-03-01
We study the kinetics of the liquid-to-crystal transformation and of gel formation in colloidal suspensions of oppositely charged particles. We analyse, by means of both computer simulations and experiments, the evolution of a fluid quenched to a state point of the phase diagram where the most stable state is either a homogeneous crystalline solid or a solid phase in contact with a dilute gas. On the one hand, at high temperatures and high packing fractions, close to an ordered-solid/disordered-solid coexistence line, we find that the fluid-to-crystal pathway does not follow the minimum free energy route. On the other hand, a quench to a state point far from the ordered-crystal/disordered-crystal coexistence border is followed by a fluid-to-solid transition through the minimum free energy pathway. At low temperatures and packing fractions we observe that the system undergoes a gas-liquid spinodal decomposition that, at some point, arrests, giving rise to a gel-like structure. Both our simulations and experiments suggest that increasing the interaction range favors crystallization over vitrification in gel-like structures. In collaboration with Chantal Valeriani, Soft Condensed Matter, Debye Institute for Nanomaterials Science, Utrecht University, Princetonplein 5, 3584 CC Utrecht, The Netherlands and SUPA, School of Physics, University of Edinburgh, JCMB King's Buildings, Mayfield Road, Edinburgh EH9 3JZ, UK; Teun Vissers, Andrea Fortini, Mirjam E. Leunissen, and Alfons van Blaaderen, Soft Condensed Matter, Debye Institute for Nanomaterials Science, Utrecht University; Daan Frenkel, FOM Institute for Atomic and Molecular Physics, Kruislaan 407, 1098 SJ Amsterdam, The Netherlands and Department of Chemistry, University of Cambridge, Lensfield Road, CB2 1EW, Cambridge, UK; and Marjolein Dijkstra, Soft Condensed Matter, Debye Institute for Nanomaterials Science, Utrecht University.
NASA Astrophysics Data System (ADS)
Zhang, S.; Tang, L.
2007-05-01
Panjiakou Reservoir is an important drinking water resource in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, water pollution in the reservoir has become more serious owing to non-point pollution as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loading of non-point and point pollution and their distribution over the upstream watershed must be fully understood. The SWAT model is used to simulate the production and transport of non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years, and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are all derived from the DEM with ArcGIS software. The soil and land use data are reclassified, and the soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area. The calibration results show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year, and a dry year. The time and space distributions of flow, sediment, and non-point source pollution were analyzed based on the simulated results. The differences in results among the hydrologic years are dramatic: the loading of non-point source pollution is relatively large in the wet year and smaller in the dry year, since non-point source pollutants are mainly transported by runoff. Within a year, the pollution loading is mainly produced in the flood season. Because SWAT is a distributed model, it is possible to view model output as it varies across the basin, so critical areas and reaches can be identified in the study area. According to the simulation results, different land uses yield different results, and fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution for the Panjiakou Reservoir are presented based on the analysis of the model results.
Numerical-experimental investigation of load paths in DP800 dual phase steel during Nakajima test
NASA Astrophysics Data System (ADS)
Bergs, Thomas; Nick, Matthias; Feuerhack, Andreas; Trauth, Daniel; Klocke, Fritz
2018-05-01
Fuel efficiency requirements demand lightweight construction of vehicle body parts. The use of advanced high-strength steels permits a reduction of sheet thickness while still maintaining the overall strength required for crash safety. However, damage, internal defects (voids, inclusions, micro-fractures), microstructural defects (varying grain size distribution, precipitates on grain boundaries, anisotropy), and surface defects (micro-fractures, grooves) act as concentration points for stress and consequently as initiation points for failure, both during deep drawing and in service. Considering damage evolution in the design of car body deep drawing processes allows a further reduction in material usage and therefore body weight. Preliminary research has shown that a modification of load paths in forming processes can help mitigate the effects of damage on the material. This paper investigates the load paths in Nakajima tests of a DP800 dual phase steel to study damage in deep drawing processes. The investigation is performed with a finite element model using experimentally validated material data for a DP800 dual phase steel. Numerical simulation allows for the investigation of load paths with respect to stress states, strain rates, and temperature evolution, which cannot be easily observed in physical experiments. Stress triaxiality and the Lode parameter are used to describe the stress states. Their evolution during the Nakajima tests serves as an indicator for damage evolution. The large variety of sheet-metal-forming-specific load paths in Nakajima tests allows a comprehensive evaluation of damage for deep drawing. The results of the numerical simulation conducted in this project and further physical experiments will later be used to calibrate a damage model for the simulation of deep drawing processes.
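The two stress-state descriptors used above have standard invariant-based definitions: triaxiality eta = sigma_m / sigma_eq and the normalized Lode parameter xi = 27 J3 / (2 sigma_eq^3). The helper below evaluates both from a Cauchy stress tensor; it is a generic post-processing sketch, not the authors' finite element pipeline.

```python
import numpy as np

def stress_state(sigma):
    """Triaxiality and normalized Lode parameter of a symmetric 3x3
    Cauchy stress tensor, using standard invariant definitions."""
    sigma_m = np.trace(sigma) / 3.0              # mean (hydrostatic) stress
    s = sigma - sigma_m * np.eye(3)              # deviatoric stress
    J2 = 0.5 * np.tensordot(s, s)                # second deviatoric invariant
    J3 = np.linalg.det(s)                        # third deviatoric invariant
    sigma_eq = np.sqrt(3.0 * J2)                 # von Mises equivalent stress
    return sigma_m / sigma_eq, 27.0 * J3 / (2.0 * sigma_eq ** 3)

# Uniaxial tension: expect triaxiality 1/3 and Lode parameter 1
print(stress_state(np.diag([500.0, 0.0, 0.0])))  # stress in MPa, illustrative
```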
NASA Astrophysics Data System (ADS)
Ceberio, Mikel; Almudí, José Manuel; Franco, Ángel
2016-08-01
In recent years, interactive computer simulations have been progressively integrated into the teaching of the sciences and have contributed significant improvements to the teaching-learning process. Practicing problem-solving is a key factor in science and engineering education. The aim of this study was to design simulation-based problem-solving teaching materials and assess their effectiveness in improving students' ability to solve problems in university-level physics. Firstly, we analyze the effect of using simulation-based materials on the development of students' skills in employing procedures that are typically used in the scientific method of problem-solving. We found that a significant percentage of the experimental students used expert-type scientific procedures such as qualitative analysis of the problem, making hypotheses, and analysis of results. At the end of the course, only a minority of the students persisted with habits based solely on mathematical equations. Secondly, we compare the problem-solving effectiveness of the experimental group students with that of students taught conventionally. We found that the implementation of the problem-solving strategy improved the experimental students' ability to obtain academically correct solutions to standard textbook problems. Thirdly, we explore students' satisfaction with the simulation-based problem-solving teaching materials and found that the majority appear to be satisfied with the proposed methodology and showed a favorable attitude toward learning problem-solving. The research was carried out among first-year Engineering Degree students.
Evaluation of Pseudo-Haptic Interactions with Soft Objects in Virtual Environments.
Li, Min; Sareh, Sina; Xu, Guanghua; Ridzuan, Maisarah Binti; Luo, Shan; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar
2016-01-01
This paper proposes a pseudo-haptic feedback method conveying simulated soft-surface stiffness information through a visual interface. The method exploits a combination of two feedback techniques, namely visual feedback of soft-surface deformation and control of the indenter avatar speed, to convey stiffness information about a simulated surface of a soft object in virtual environments. The proposed method was effective in distinguishing different sizes of virtual hard nodules integrated into the simulated soft bodies. To further improve the interactive experience, the approach was extended to create a multi-point pseudo-haptic feedback system. A comparison against a tablet computer incorporating vibration feedback was conducted, using (a) nodule detection sensitivity and (b) elapsed time as performance indicators in hard-nodule detection experiments. The multi-point pseudo-haptic interaction is shown to be more time-efficient than the single-point pseudo-haptic interaction, and it performs comparably to the vibration-based feedback method on both performance measures. This shows that the proposed method can be used to convey detailed haptic information for virtual environment tasks, even subtle ones, using either a computer mouse or a pressure-sensitive device as the input device. The pseudo-haptic feedback method provides an opportunity for low-cost simulation of objects with soft surfaces and hard inclusions, as occur, for example, in ever more realistic video games with increasing emphasis on interaction with the physical environment, and in minimally invasive surgery on soft-tissue organs with embedded cancer nodules. Hence, the method can be used in many low-budget applications where haptic sensation is required, such as surgeon training or video games, on either desktop computers or portable devices, showing reasonably high fidelity in conveying stiffness perception to the user.
NASA Astrophysics Data System (ADS)
Zhang, Lucy
In this talk, we present a robust numerical framework to model and simulate gas-liquid-solid three-phase flows. The overall algorithm adopts a non-boundary-fitted approach that avoids frequent mesh-updating procedures by defining independent meshes and explicit interfacial points to represent each phase. In this framework, we couple the immersed finite element method (IFEM) and the connectivity-free front tracking (CFFT) method, which model fluid-solid and gas-liquid interactions, respectively, for the three-phase models. CFFT is used here to simulate gas-liquid multi-fluid flows; it uses explicit interfacial points to represent the gas-liquid interface and easily handles interface topology changes. Instead of defining different levels simultaneously, as done in level set methods, an indicator function naturally couples the two methods together to represent and track each of the three phases. Several 2-D and 3-D test cases are performed to demonstrate the robustness and capability of the coupled numerical framework in dealing with complex three-phase problems, in particular free surfaces interacting with deformable solids. The solution technique offers accuracy and stability, which provides a means to simulate various engineering applications. The author would like to acknowledge support from NIH/DHHS R01-2R01DC005642-10A1 and the National Natural Science Foundation of China (NSFC) 11550110185.
Dosimetry applications in GATE Monte Carlo toolkit.
Papadimitroulas, Panagiotis
2017-09-01
Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications in simulated diagnostic and therapeutic protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications, such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT), and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described, including molecular imaging, radio-immunotherapy, radiotherapy, and brachytherapy. GATE has been efficiently used in several applications, such as Dose Point Kernels, S-values, and brachytherapy parameters, and has been compared against various MC codes that have been considered standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and for simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study covering several dosimetric applications of GATE and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment.
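As one concrete example of the quantities listed, the MIRD S-value is S = sum_i y_i E_i phi_i / m_target, where in practice the absorbed fractions phi_i come from the MC transport itself. The sketch below merely evaluates that formula; all input numbers are placeholders.

```python
MEV_TO_J = 1.602176634e-13

def s_value(energies_MeV, yields, absorbed_fractions, target_mass_kg):
    """MIRD S-value in Gy per decay: sum_i y_i * E_i * phi_i / m_target."""
    total = sum(y * E * phi for y, E, phi
                in zip(yields, energies_MeV, absorbed_fractions))
    return total * MEV_TO_J / target_mass_kg

# Two-emission radionuclide with placeholder yields/energies/fractions
print(s_value([0.1, 0.3], [0.9, 0.1], [0.6, 0.2], 0.02))
```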
New Expression for Collisionless Magnetic Reconnection Rate
NASA Technical Reports Server (NTRS)
Klimas, Alexander J.
2014-01-01
For 2D, symmetric, anti-parallel, collisionless magnetic reconnection, a new expression for the reconnection rate in the electron diffusion region is introduced. It is shown that this expression can be derived in just a few simple steps from a physically intuitive starting point; the derivation is given in its entirety and the validity of each step is confirmed. The predictions of this expression are compared to the results of several long-duration, open-boundary PIC reconnection simulations to demonstrate excellent agreement.
A simulation of the instrument pointing system for the Astro-1 mission
NASA Technical Reports Server (NTRS)
Whorton, M.; West, M.; Rakoczy, J.
1991-01-01
NASA has recently completed a shuttle-borne stellar ultraviolet astronomy mission known as Astro-1. A three axis instrument pointing system (IPS) was employed to accurately point the science instruments. In order to analyze the pointing control system and verify pointing performance, a simulation of the IPS was developed using the multibody dynamics software TREETOPS. The TREETOPS IPS simulation is capable of accurately modeling the multibody IPS system undergoing large angle, nonlinear motion. The simulation is documented and example cases are presented demonstrating disturbance rejection, fine pointing operations, and multiple target pointing and slewing of the IPS.
Physically detached 'compact groups'
NASA Technical Reports Server (NTRS)
Hernquist, Lars; Katz, Neal; Weinberg, David H.
1995-01-01
A small fraction of galaxies appear to reside in dense compact groups, whose inferred crossing times are much shorter than a Hubble time. These short crossing times have led to considerable disagreement among researchers attempting to deduce the dynamical state of these systems. In this paper, we suggest that many of the observed groups are not physically bound but are chance projections of galaxies well separated along the line of sight. Unlike earlier similar proposals, ours does not require that the galaxies in the compact group be members of a more diffuse, but physically bound entity. The probability of physically separated galaxies projecting into an apparent compact group is nonnegligible if most galaxies are distributed in thin filaments. We illustrate this general point with a specific example: a simulation of a cold dark matter universe, in which hydrodynamic effects are included to identify galaxies. The simulated galaxy distribution is filamentary and end-on views of these filaments produce apparent galaxy associations that have sizes and velocity dispersions similar to those of observed compact groups. The frequency of such projections is sufficient, in principle, to explain the observed space density of groups in the Hickson catalog. We discuss the implications of our proposal for the formation and evolution of groups and elliptical galaxies. The proposal can be tested by using redshift-independent distance estimators to measure the line-of-sight spatial extent of nearby compact groups.
Transport of Zinc Oxide Nanoparticles in a Simulated Gastric Environment
NASA Astrophysics Data System (ADS)
Mayfield, Ryan T.
Recent years have seen a growing interest in the use of many types of nano-sized materials in the consumer sector. Potential uses include encapsulation of nutrients, providing antimicrobial activity, altering texture, or changing the bioavailability of nutrients. Engineered nanoparticles (ENP) possess properties that are different from those of larger particles made of the same constituents. Properties such as solubility, aggregation state, and toxicity can all change as a function of size. The gastric environment is an important area for the study of engineered nanoparticles because of the varied physical, chemical, and enzymatic processes that are prevalent there. These all have the potential to alter those properties of ENP that make them different from their bulk counterparts. The Human Gastric Simulator (HGS) is an advanced in vitro model that can be used to study many facets of digestion. The HGS consists of a plastic lining that acts as the stomach cavity, with two sets of U-shaped arms on belts that provide the physical forces needed to replicate peristalsis. Altering the position of the arms or changing the speed of the motor that powers them allows one to tightly tune and replicate varied digestive conditions. Gastric juice, consisting of salts, enzymes, and acid levels that replicate physiological conditions, is introduced into the cavity at a controllable rate. The release of digested food from the lumen of the simulated stomach is controlled by a peristaltic pump. The goal of the HGS is to accurately and repeatably simulate human digestion. This study focused on introducing foods spiked with zinc oxide ENP and bulk zinc oxide into the HGS and then monitoring how the concentration of each changed at two locations in the HGS over a two-hour period. The two locations chosen were the highest point in the lumen of the stomach, representing the fundus, and a point just beyond the equivalent of the pylorus, representing the antrum of the stomach. These points were chosen in order to elucidate if and how two different particle sizes of the same material are transported during digestion. Results showed that particles preferentially collected at Location A; time played a minor role in the separation between the two locations, while particle size did not play any role.
Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldhaber, Steve; Holland, Marika
The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.
Analytical modeling of helium turbomachinery using FORTRAN 77
NASA Astrophysics Data System (ADS)
Balaji, Purushotham
Advanced Generation IV modular reactors, including Very High Temperature Reactors (VHTRs), utilize helium as the working fluid, with a potential for high-efficiency power production utilizing helium turbomachinery. Helium is chemically inert and non-radioactive, which makes the gas ideal for a nuclear power-plant environment where radioactive leaks are a high concern. These properties of helium help to increase safety as well as to slow the aging of plant components. The lack of sufficient helium turbomachinery data has made it difficult to study the vital role played by the gas turbine components of these VHTR-powered cycles. Therefore, this research work focuses on predicting the performance of helium compressors. A FORTRAN 77 program was developed to simulate helium compressor operation, including surge line prediction. The resulting design point and off-design performance data can be used to develop compressor map files readable by the Numerical Propulsion System Simulation (NPSS) software. This multi-physics simulation software, developed for propulsion system analysis, has found applications in simulating power-plant cycles.
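Compressor maps of the kind NPSS reads are normally tabulated in corrected parameters; a minimal sketch of that standard normalization follows, with reference conditions and the example operating point chosen for illustration only.

```python
T_REF, P_REF = 288.15, 101325.0        # sea-level reference state (K, Pa)

def corrected_flow(mdot, T_in, P_in):
    # m_corr = mdot * sqrt(theta) / delta, theta = T/T_ref, delta = P/P_ref
    return mdot * (T_in / T_REF) ** 0.5 / (P_in / P_REF)

def corrected_speed(N, T_in):
    # N_corr = N / sqrt(theta)
    return N / (T_in / T_REF) ** 0.5

# Illustrative helium compressor inlet state (values are placeholders)
print(corrected_flow(120.0, 400.0, 7.0e6), corrected_speed(6000.0, 400.0))
```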
Variance change point detection for fractional Brownian motion based on the likelihood ratio test
NASA Astrophysics Data System (ADS)
Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz
2018-01-01
Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon in self-similar processes. It appears that for many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection for cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. Here, we also propose a statistical test for testing the significance of the estimated critical point. In addition, an extensive simulation study is provided to test the performance of the proposed method.
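For the independent-increment (ordinary Brownian motion) case that the method extends, the likelihood ratio scan for a single variance change in zero-mean Gaussian increments has a simple closed form; the sketch below implements that baseline, not the full fBm-covariance version developed in the paper.

```python
import numpy as np

def variance_change_point(x, min_seg=10):
    """Likelihood-ratio scan for one variance change in zero-mean Gaussian
    increments x; returns the estimated change index and the statistic."""
    n = len(x)
    full = n * np.log(np.mean(x ** 2))
    best_k, best_stat = None, -np.inf
    for k in range(min_seg, n - min_seg):
        s1, s2 = np.mean(x[:k] ** 2), np.mean(x[k:] ** 2)
        stat = full - k * np.log(s1) - (n - k) * np.log(s2)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# Synthetic test: diffusion coefficient doubles at index 500
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1.0, 500), rng.normal(0, 2.0, 500)])
print(variance_change_point(x))   # estimated change point should be near 500
```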
Method for simulating discontinuous physical systems
Baty, Roy S.; Vaughn, Mark R.
2001-01-01
The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.
Mapping the current–current correlation function near a quantum critical point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prodan, Emil, E-mail: prodan@yu.edu; Bellissard, Jean
2016-05-15
The current–current correlation function is a useful concept in the theory of electron transport in homogeneous solids. The finite-temperature conductivity tensor as well as Anderson's localization length can be computed entirely from this correlation function. Based on the critical behavior of these two physical quantities near the plateau–insulator or plateau–plateau transitions in the integer quantum Hall effect, we derive an asymptotic formula for the current–current correlation function, which enables us to make several theoretical predictions about its generic behavior. For the disordered Hofstadter model, we employ numerical simulations to map the current–current correlation function, obtain its asymptotic form near a critical point and confirm the theoretical predictions.
Multibody Parachute Flight Simulations for Planetary Entry Trajectories Using "Equilibrium Points"
NASA Technical Reports Server (NTRS)
Raiszadeh, Ben
2003-01-01
A method has been developed to reduce numerical stiffness and computer CPU requirements of high fidelity multibody flight simulations involving parachutes for planetary entry trajectories. Typical parachute entry configurations consist of entry bodies suspended from a parachute, connected by flexible lines. To accurately calculate line forces and moments, the simulations need to keep track of the point where the flexible lines meet (the confluence point). In previous multibody parachute flight simulations, the confluence point has been modeled as a point mass. Using a point mass for the confluence point tends to make the simulation numerically stiff, because its mass is typically much less than the masses of the main rigid bodies. One solution for stiff differential equations is to use a very small integration time step; however, this results in large computer CPU requirements. In the method described in this paper, the need for using a mass as the confluence point has been eliminated. Instead, the confluence point is modeled using an "equilibrium point". This point is calculated at every integration step as the point at which the sum of all line forces is zero (static equilibrium). The use of this "equilibrium point" has the advantage of both reducing the numerical stiffness of the simulations and eliminating the dynamical equations associated with vibration of a lumped mass on a high-tension string.
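A minimal sketch of the equilibrium-point idea (the anchor positions, line stiffnesses, and unstretched lengths below are invented, and the elastic tension-only line model is a stand-in for the simulation's actual line force model): at each step the confluence point is located by driving the net line force to zero with a least-squares root solve.

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical geometry: one riser anchor above, two suspended bodies below
    anchors = np.array([[0.0, 0.0, 10.0], [-1.0, 0.0, -5.0], [1.0, 0.5, -5.0]])
    k_line = np.array([5e4, 2e4, 2e4])   # line stiffnesses, N/m (assumed)
    L0 = np.array([8.0, 4.0, 4.0])       # unstretched line lengths, m (assumed)

    def net_force(p):
        # Tension-only elastic lines pull the candidate confluence point p
        # toward each anchor; equilibrium is where the forces sum to zero.
        f = np.zeros(3)
        for a, k, l0 in zip(anchors, k_line, L0):
            d = a - p
            length = np.linalg.norm(d)
            f += k * max(length - l0, 0.0) * d / length
        return f

    sol = least_squares(net_force, x0=np.array([0.0, 0.0, 0.0]))
    print("confluence point:", sol.x, "residual force:", net_force(sol.x))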
Chemorheology of highly filled thermosets: Effects of fillers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halley, P.J.
1996-12-31
Highly filled thermosets are utilised in the manufacture of many high value added products in the aerospace, communication, computer and automobile industries. A fundamental understanding of the processing of these materials has, however, been hindered by the inherent complexities of the flow and cure properties of these composite thermoset materials. A chemorheological (gel point and chemoviscosity) testing procedure is described here that uses dynamic multiwave tests on a modified Rheometrics RDSII system, using a filled epoxy moulding compound (EMC). Data from this testing procedure has been combined with physical property and kinetic data to produce realistic simulation results using flow simulation software (TSET; MOLDFLOW Pty Ltd). This chemorheological testing procedure has now been successfully implemented commercially by MOLDFLOW Pty Ltd.
Ható, Zoltán; Valiskó, Mónika; Kristóf, Tamás; Gillespie, Dirk; Boda, Dezsö
2017-07-21
In a multiscale modeling approach, we present computer simulation results for a rectifying bipolar nanopore at two modeling levels. In an all-atom model, we use explicit water to simulate ion transport directly with the molecular dynamics technique. In a reduced model, we use implicit water and apply the Local Equilibrium Monte Carlo method together with the Nernst-Planck transport equation. This hybrid method makes the fast calculation of ion transport possible at the price of lost details. We show that the implicit-water model is an appropriate representation of the explicit-water model when we look at the system at the device (i.e., input vs. output) level. The two models produce qualitatively similar behavior of the electrical current for different voltages and model parameters. Looking at the details of concentration and potential profiles, we find profound differences between the two models. These differences, however, do not influence the basic behavior of the model as a device because they do not influence the z-dependence of the concentration profiles which are the main determinants of current. These results then address an old paradox: how do reduced models, whose assumptions should break down in a nanoscale device, predict experimental data? Our simulations show that reduced models can still capture the overall device physics correctly, even though they get some important aspects of the molecular-scale physics quite wrong; reduced models work because they include the physics that is necessary from the point of view of device function. Therefore, reduced models can suffice for general device understanding and device design, but more detailed models might be needed for molecular level understanding.
NASA Astrophysics Data System (ADS)
Magyar, Rudolph
2013-06-01
We report a computational and validation study of equation of state (EOS) properties of liquid/dense plasma mixtures of xenon and ethane to explore and to illustrate the physics of the molecular-scale mixing of light elements with heavy elements. Accurate EOS models are crucial to achieve high-fidelity hydrodynamics simulations of many high-energy-density phenomena such as inertial confinement fusion and strong shock waves. While the EOS is often tabulated for separate species, the equation of state for arbitrary mixtures is generally not available, requiring properties of the mixture to be approximated by combining physical properties of the pure systems. The main goal of this study is to assess how accurate this approximation is under shock conditions. Density functional theory molecular dynamics (DFT-MD) at elevated temperature and pressure is used to assess the thermodynamics of the xenon-ethane mixture. The simulations are unbiased as to elemental species and therefore provide comparable accuracy when describing total energies, pressures, and other physical properties of mixtures as they do for pure systems. In addition, we have performed shock compression experiments using the Sandia Z-accelerator on pure xenon, ethane, and various mixture ratios thereof. The Hugoniot results are compared to the DFT-MD results and the predictions of different rules for combining EOS tables. The DFT-based simulation results compare well with the experimental points, and it is found that a mixing rule based on pressure equilibration performs reliably well for the mixtures considered. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
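To make the pressure-equilibration mixing rule concrete, here is a minimal sketch with invented power-law isotherms standing in for the tabulated xenon and ethane EOS (the prefactors and exponents are illustrative only): both components are placed at the common pressure, and their specific volumes are mass-fraction averaged to give the mixture density.

    import numpy as np
    from scipy.optimize import brentq

    # Invented monotone P(rho) isotherms standing in for tabulated EOS data
    def P_xe(rho):  return 2.0e5 * rho**1.4     # "xenon" (illustrative)
    def P_eth(rho): return 5.0e5 * rho**1.3     # "ethane" (illustrative)

    def rho_at(P, eos):
        # Invert a monotone isotherm for the density at pressure P
        return brentq(lambda r: eos(r) - P, 1e-6, 1e5)

    def mixture_density(P, w_xe):
        # Pressure equilibration: each component sits at the common pressure
        # P, and the pure-component specific volumes are mass-averaged.
        v_mix = w_xe / rho_at(P, P_xe) + (1.0 - w_xe) / rho_at(P, P_eth)
        return 1.0 / v_mix

    print(mixture_density(P=1.0e7, w_xe=0.5))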
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Keh-Fei; Draper, Terrence
It is emphasized in the 2015 NSAC Long Range Plan that "understanding the structure of hadrons in terms of QCD's quarks and gluons is one of the central goals of modern nuclear physics." Over the last three decades, lattice QCD has developed into a powerful tool for ab initio calculations of strong-interaction physics. Up until now, it is the only theoretical approach to solving QCD with controlled statistical and systematic errors. Since 1985, we have proposed and carried out first-principles calculations of nucleon structure and hadron spectroscopy using lattice QCD, which entails both algorithmic development and large-scale computer simulation. We started out by calculating the nucleon form factors -- electromagnetic, axial-vector, πNN, and scalar form factors, the quark spin contribution to the proton spin, the strangeness magnetic moment, the quark orbital angular momentum, the quark momentum fraction, and the quark and glue decomposition of the proton momentum and angular momentum. The first round of calculations were done with Wilson fermions in the 'quenched' approximation, where the dynamical effects of the quarks in the sea are not taken into account in the Monte Carlo simulation used to generate the background gauge configurations. Beginning in 2000, we started implementing the overlap fermion formulation in the spectroscopy and structure calculations, mainly because the overlap fermion honors chiral symmetry as in the continuum. It is going to be more and more important to take the symmetry into account as the simulations move closer to the physical point where the u and d quark masses are as light as a few MeV only. We began with lattices whose sea quark masses correspond to a pion mass of ~300 MeV and obtained the strange form factors, the charm and strange quark masses, the charmonium spectrum and the D_s meson decay constant f_{D_s}, the strangeness and charmness, the meson mass decomposition and the strange quark spin from the anomalous Ward identity. Recently, we have started to include multiple lattices with different lattice spacings and different volumes, including large lattices at the physical pion mass point. We are getting quite close to being able to calculate the hadron structure at the physical point and to do the continuum and large-volume extrapolations, which is our ultimate aim. We have now finished several projects which include these systematic corrections: the leptonic decay width of the ρ, the πN sigma and strange sigma terms, and the strange quark magnetic moment. Over the years, we have also studied hadron spectroscopy with lattice calculations and in phenomenology, including the Roper resonance, the pentaquark state, the charmonium spectrum, glueballs, the scalar mesons a0(1450) and σ(600) and other scalar mesons, and the 1^-+ meson. In addition, we have employed the canonical approach to explore the first-order phase transition and the critical point at finite density and finite temperature. We have also discovered a new parton degree of freedom -- the connected sea partons -- from the path-integral formulation of the hadronic tensor, which explains the experimentally observed Gottfried sum rule violation. Combining the experimental result on the strange parton distribution, the CT10 global fitting results for the total u and d anti-partons, and the lattice result for the ratio of the momentum fraction of the strange quark to that of u or d in the disconnected insertion, we have shown that the connected sea partons can be isolated.
In this final technical report, we shall present a few representative highlights that have been achieved in the project.
The analysis of thermal comfort requirements through the simulation of an occupied building.
Thellier, F; Cordier, A; Monchoux, F
1994-05-01
Building simulation usually focuses on the study of physical indoor parameters, but we must not forget the main aim of a house: to provide comfort to the occupants. This study was undertaken in order to build a complete tool to model thermal behaviour that will enable the prediction of thermal sensations of humans in a real environment. A human thermoregulation model was added to TRNSYS, a building simulation program. For our purposes, improvements had to be made to the original physiological model by refining the calculation of all heat exchanges with the environment and adding a representation of clothes. This paper briefly describes the program and its modifications, and compares its results with experimental ones. An example of potential use is given, which points out the usefulness of such models in seeking the best solutions to reach optimal environmental conditions for the global, and especially local, comfort of building occupants.
Robust sensorimotor representation to physical interaction changes in humanoid motion learning.
Shimizu, Toshihiko; Saegusa, Ryo; Ikemoto, Shuhei; Ishiguro, Hiroshi; Metta, Giorgio
2015-05-01
This paper proposes a learning from demonstration system based on a motion feature, called the phase transfer sequence. The system aims to synthesize the knowledge of humanoid whole-body motions learned during teacher-supported interactions, and to apply this knowledge to different physical interactions between a robot and its surroundings. The phase transfer sequence represents the temporal order of the changing points in multiple time sequences. It encodes the dynamical aspects of the sequences so as to absorb the gaps in timing and amplitude derived from interaction changes. The phase transfer sequence was evaluated in reinforcement learning of sitting-up and walking motions conducted by a real humanoid robot and a compatible simulator. In both tasks, the robotic motions were less dependent on physical interactions when learned with the proposed feature than with conventional similarity measurements. The phase transfer sequence also enhanced the convergence speed of motion learning. Our proposed feature is original primarily because it absorbs the gaps caused by changes in the physical interactions under which the motions were originally acquired, thereby enhancing the learning speed in subsequent interactions.
NASA Technical Reports Server (NTRS)
Leboissertier, Anthony; Okong'O, Nora; Bellan, Josette
2005-01-01
Large-eddy simulation (LES) is conducted of a three-dimensional temporal mixing layer whose lower stream is initially laden with liquid drops which may evaporate during the simulation. The gas-phase equations are written in an Eulerian frame for two perfect gas species (carrier gas and vapour emanating from the drops), while the liquid-phase equations are written in a Lagrangian frame. The effect of drop evaporation on the gas phase is considered through mass, species, momentum and energy source terms. The drop evolution is modelled using physical drops, or using computational drops to represent the physical drops. Simulations are performed using various LES models previously assessed on a database obtained from direct numerical simulations (DNS). These LES models are for: (i) the subgrid-scale (SGS) fluxes and (ii) the filtered source terms (FSTs) based on computational drops. The LES, which are compared to filtered-and-coarsened (FC) DNS results at the coarser LES grid, are conducted with 64 times fewer grid points than the DNS, and up to 64 times fewer computational drops than physical drops. It is found that both constant-coefficient and dynamic Smagorinsky SGS-flux models, though numerically stable, are overly dissipative and damp generated small-resolved-scale (SRS) turbulent structures. Although the global growth and mixing predictions of LES using Smagorinsky models are in good agreement with the FC-DNS, the spatial distributions of the drops differ significantly. In contrast, the constant-coefficient scale-similarity model and the dynamic gradient model perform well in predicting most flow features, with the latter model having the advantage of not requiring a priori calibration of the model coefficient. The ability of the dynamic models to determine the model coefficient during LES is found to be essential since the constant-coefficient gradient model, although more accurate than the Smagorinsky model, is not consistently numerically stable despite using DNS-calibrated coefficients. With accurate SGS-flux models, namely scale-similarity and dynamic gradient, the FST model allows up to a 32-fold reduction in computational drops compared to the number of physical drops, without degradation of accuracy; a 64-fold reduction leads to a slight decrease in accuracy.
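For reference, the constant-coefficient Smagorinsky model named above computes an eddy viscosity from the resolved strain rate, nu_t = (Cs*dx)^2 * |S| with |S| = sqrt(2 S_ij S_ij). A minimal sketch on a uniform grid (synthetic velocity field; the value of Cs is one common choice, not taken from this paper):

    import numpy as np

    def smagorinsky_nu_t(vel, dx, Cs=0.17):
        # vel has shape (3, nx, ny, nz); g[i, j] = d u_i / d x_j
        g = np.stack([np.stack(np.gradient(vel[i], dx), axis=0)
                      for i in range(3)])
        S = 0.5 * (g + g.transpose(1, 0, 2, 3, 4))     # resolved strain rate
        S_mag = np.sqrt(2.0 * np.sum(S * S, axis=(0, 1)))
        return (Cs * dx)**2 * S_mag                    # eddy viscosity field

    rng = np.random.default_rng(1)
    vel = rng.standard_normal((3, 32, 32, 32))   # synthetic resolved field
    print(smagorinsky_nu_t(vel, dx=1.0 / 32).mean())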
Real-time simulation of thermal shadows with EMIT
NASA Astrophysics Data System (ADS)
Klein, Andreas; Oberhofer, Stefan; Schätz, Peter; Nischwitz, Alfred; Obermeier, Paul
2016-05-01
Modern missile systems use infrared imaging for tracking or target detection algorithms. The development and validation processes of these missile systems need high fidelity simulations capable of stimulating the sensors in real-time with infrared image sequences from a synthetic 3D environment. The Extensible Multispectral Image Generation Toolset (EMIT) is a modular software library developed at MBDA Germany for the generation of physics-based infrared images in real-time. EMIT is able to render radiance images in full 32-bit floating point precision using state of the art computer graphics cards and advanced shader programs. An important functionality of an infrared image generation toolset is the simulation of thermal shadows as these may cause matching errors in tracking algorithms. However, for real-time simulations, such as hardware in the loop simulations (HWIL) of infrared seekers, thermal shadows are often neglected or precomputed as they require a thermal balance calculation in four-dimensions (3D geometry in one-dimensional time up to several hours in the past). In this paper we will show the novel real-time thermal simulation of EMIT. Our thermal simulation is capable of simulating thermal effects in real-time environments, such as thermal shadows resulting from the occlusion of direct and indirect irradiance. We conclude our paper with the practical use of EMIT in a missile HWIL simulation.
NASA Astrophysics Data System (ADS)
Corni, Federico; Michelini, Marisa
2018-01-01
Rutherford backscattering spectrometry is a nuclear analysis technique widely used for materials science investigation. Despite the strict technical requirements to perform the data acquisition, the interpretation of a spectrum is within the reach of general physics students. The main phenomena occurring during a collision between helium ions—with energy of a few MeV—and matter are: elastic nuclear collision, elastic scattering, and, in the case of non-surface collision, ion stopping. To interpret these phenomena, we use classical physics models: material point elastic collision, unscreened Coulomb scattering, and inelastic energy loss of ions with electrons, respectively. We present the educational proposal for Rutherford backscattering spectrometry, within the framework of the model of educational reconstruction, following a rationale that links basic physics concepts with quantities for spectra analysis. This contribution offers the opportunity to design didactic specific interventions suitable for undergraduate and secondary school students.
Kirchhoff and Ohm in action: solving electric currents in continuous extended media
NASA Astrophysics Data System (ADS)
Dolinko, A. E.
2018-03-01
In this paper we show a simple and versatile computational simulation method for determining electric currents and electric potential in 2D and 3D media with an arbitrary distribution of resistivity. One of the highlights of the proposed method is that the simulation space containing the distribution of resistivity and the points of externally applied voltage are introduced by means of digital images or bitmaps, which easily allows simulating any phenomena involving distributions of resistivity. The simulation is based on Kirchhoff's laws of electric currents and is solved by means of an iterative procedure. The method is also generalised to account for media with distributions of reactive impedance. At the end of this work, we show an example of application of the simulation, consisting of reproducing the response obtained with the geophysical method of electric resistivity tomography in the presence of soil cracks. This paper is aimed at undergraduate or graduate students interested in computational physics and electricity, and also at researchers involved in the area of continuous electric media, who may find it a simple and powerful tool for investigation.
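A minimal sketch of such an iterative Kirchhoff solver (not the author's exact scheme): node potentials on a pixel grid are relaxed until Kirchhoff's current law holds at every free node, with inter-pixel conductances taken as harmonic means. The edges wrap periodically purely for brevity, and the crack geometry and electrode positions are invented.

    import numpy as np

    def solve_potential(sigma, fixed, V_fix, n_iter=5000):
        # Jacobi relaxation of Kirchhoff's current law on a pixel grid;
        # inter-pixel conductance is the harmonic mean of the two pixels.
        g = lambda a, b: 2.0 * a * b / (a + b)
        gN, gS = g(sigma, np.roll(sigma, 1, 0)), g(sigma, np.roll(sigma, -1, 0))
        gW, gE = g(sigma, np.roll(sigma, 1, 1)), g(sigma, np.roll(sigma, -1, 1))
        V = np.where(fixed, V_fix, 0.0)
        for _ in range(n_iter):
            num = (gN * np.roll(V, 1, 0) + gS * np.roll(V, -1, 0)
                 + gW * np.roll(V, 1, 1) + gE * np.roll(V, -1, 1))
            V = np.where(fixed, V_fix, num / (gN + gS + gW + gE))
        return V

    sigma = np.ones((64, 64)); sigma[20:44, 32] = 1e-6   # resistive "crack"
    fixed = np.zeros((64, 64), bool); fixed[32, 4] = fixed[32, 59] = True
    V_fix = np.zeros((64, 64)); V_fix[32, 4], V_fix[32, 59] = 1.0, -1.0
    V = solve_potential(sigma, fixed, V_fix)   # potential map around the crack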
Using Voronoi Tessellations to identify groups in N-body Simulation
NASA Astrophysics Data System (ADS)
Gonzalez, R. E.; Theuns, T.
Dark matter N-body simulations often use a friends-of-friends (FOF) group finder to link together particles above a specified density threshold. An overdensity of 200 picks out objects that can be identified with virialised dark matter haloes, based on the spherical collapse model for the formation of structure. When the halo contains significant substructure, as is the case in very high resolution simulations, FOF will simply link all substructure to the parent halo. Many cosmological simulations now also include gas and stars, and these are often distributed differently from the dark matter. It is then not clear whether the structures identified by FOF are physically meaningful. Here we use Voronoi tessellations to identify structures in hydrodynamical cosmological simulations that contain dark matter, gas and stars. This adaptive technique allows accurate estimates of densities, and density gradients, for a non-structured distribution of points. We discuss how these estimates allow us to identify structures in the dark matter that can be identified with haloes, and in the stars, to identify galaxies.
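The density estimate underlying this kind of tessellation approach is commonly taken as the inverse of each particle's Voronoi cell volume. A minimal sketch with SciPy (uniform mock particles; cells touching the unbounded exterior are simply skipped), not the authors' pipeline:

    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull

    def voronoi_density(points):
        # Density at each particle = 1 / volume of its Voronoi cell;
        # unbounded boundary cells are left as NaN.
        vor = Voronoi(points)
        dens = np.full(len(points), np.nan)
        for i, reg_idx in enumerate(vor.point_region):
            region = vor.regions[reg_idx]
            if len(region) == 0 or -1 in region:
                continue
            dens[i] = 1.0 / ConvexHull(vor.vertices[region]).volume
        return dens

    rng = np.random.default_rng(2)
    pts = rng.random((2000, 3))                  # mock particle positions
    print(np.nanmedian(voronoi_density(pts)))    # ~ mean number density (2000)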
Tail reconnection in the global magnetospheric context: Vlasiator first results
NASA Astrophysics Data System (ADS)
Palmroth, Minna; Hoilijoki, Sanni; Juusola, Liisa; Pulkkinen, Tuija I.; Hietala, Heli; Pfau-Kempf, Yann; Ganse, Urs; von Alfthan, Sebastian; Vainio, Rami; Hesse, Michael
2017-11-01
The key dynamics of the magnetotail have been researched for decades and have been associated with either three-dimensional (3-D) plasma instabilities and/or magnetic reconnection. We apply a global hybrid-Vlasov code, Vlasiator, to simulate reconnection self-consistently at ion kinetic scales in the noon-midnight meridional plane, including both dayside and nightside reconnection regions within the same simulation box. Our simulation represents a numerical experiment which turns off the 3-D instabilities but models ion-scale reconnection in a physically accurate way in 2-D. We demonstrate that many known tail dynamics are present in the simulation without a full description of 3-D instabilities or a detailed description of the electrons. While multiple reconnection sites can coexist in the plasma sheet, one reconnection point can start a global reconfiguration process in which magnetic field lines become detached and a plasmoid is released. As the simulation run features temporally steady solar wind input, this global reconfiguration is not associated with sudden changes in the solar wind. Further, we show that lobe density variations originating from dayside reconnection may play an important role in stabilising tail reconnection.
Simulation of the Physics of Flight
ERIC Educational Resources Information Center
Lane, W. Brian
2013-01-01
Computer simulations continue to prove to be a valuable tool in physics education. Based on the needs of an Aviation Physics course, we developed the PHYSics of FLIght Simulator (PhysFliS), which numerically solves Newton's second law for an airplane in flight based on standard aerodynamics relationships. The simulation can be used to pique…
Drift-based scrape-off particle width in X-point geometry
NASA Astrophysics Data System (ADS)
Reiser, D.; Eich, T.
2017-04-01
The Goldston heuristic estimate of the scrape-off layer width (Goldston 2012 Nucl. Fusion 52 013009) is reconsidered using a fluid description for the plasma dynamics. The basic ingredient is the inclusion of a compressible diamagnetic drift for the particle cross-field transport. Instead of testing the heuristic model in a sophisticated numerical simulation including several physical mechanisms working together, the purpose of this work is to point out basic consequences of a drift-dominated cross-field transport using a reduced fluid model. To evaluate the model equations and prepare them for subsequent numerical solution, a specific analytical model for 2D magnetic field configurations with X-points is employed. In a first step, parameter scans on high-resolution grids for isothermal plasmas are performed to assess the basic formulas of the heuristic model with respect to the functional dependence of the scrape-off width on the poloidal magnetic field and the plasma temperature. Particular features of the 2D fluid calculations (especially the appearance of supersonic parallel flows and shock-wave-like bifurcational jumps) are discussed and can be partly understood in the framework of a reduced 1D model. The resulting semi-analytical findings may give hints for experimental verification and for implementation in more elaborate fluid simulations.
Physical Models and Virtual Reality Simulators in Otolaryngology.
Javia, Luv; Sardesai, Maya G
2017-10-01
The increasing role of simulation in the medical education of future otolaryngologists has followed suit with other surgical disciplines. Simulators make it possible for the resident to explore and learn in a safe and less stressful environment. The various subspecialties in otolaryngology use physical simulators and virtual-reality simulators. Although physical simulators allow the operator to make direct contact with their components, virtual-reality simulators allow the operator to interact with a computer-generated environment. This article gives an overview of the various types of physical simulators and virtual-reality simulators used in otolaryngology that have been reported in the literature.
NASA Astrophysics Data System (ADS)
Schrage, Dean Stewart
1998-11-01
This dissertation presents a combined mathematical and experimental analysis of the fluid dynamics of a gas-liquid, dispersed-phase cyclonic separation device. The global objective of this research is to develop a simulation model of the separation process in order to predict the void fraction field within a cyclonic separation device. The separation process is approximated by analyzing the dynamic motion of many single bubbles, moving under the influence of the far field and interacting with physical boundaries and other bubbles. The dynamic motion of each bubble is described by treating the bubble as a point mass and writing an inertial force balance, equating the force applied at the bubble point location to the inertial acceleration of the bubble mass (also applied at the point location). The forces applied to the bubble are determined by an integration of the surface pressure over the bubble. The surface pressure is coupled to the intrinsic motion of the bubble and is very difficult to obtain exactly. However, at moderate Reynolds numbers, the wake trailing a bubble is small and the near-field flow can be approximated as an inviscid flow field. Unconventional potential flow techniques are employed to solve for the surface pressure; the hydrodynamic forces are described as a hydrodynamic mass tensor operating on the bubble acceleration vector. The inviscid flow model is augmented with adjunct forces describing drag, dynamic lift, and far-field pressure forces. The dynamic equations of motion are solved both analytically and numerically for the bubble trajectory in specific flow field examples. A validation of these equations is performed by comparing with experimentally derived trajectories of a single bubble released at varying positions into a cylindrical Couette flow field (inner cylinder rotating). Finally, a simulation of a cyclonic separation device is performed by extending the single-bubble dynamic model to a multi-bubble ensemble. A simplified model is developed to predict the effects of bubble interaction. The simulation qualitatively depicts the separation physics encountered in an actual cyclonic separation device, supporting the original tenet that the separation process can be approximated by the collective motions of single bubbles.
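A heavily reduced sketch of the point-bubble idea (a simplified Maxey-Riley-type model with added mass and linear drag, not the dissertation's potential-flow hydrodynamic-mass tensor; the values of beta, tau, and omega are invented): in a rotating flow, the pressure-gradient and added-mass forces drive a light bubble toward the axis, which is the basic mechanism exploited in cyclonic separation.

    import numpy as np
    from scipy.integrate import solve_ivp

    beta, tau, omega = 3.0, 5e-3, 10.0   # bubble density-ratio factor, drag
                                         # response time (s), rotation rate (rad/s)

    def rhs(t, y):
        x, v = y[:2], y[2:]
        u = np.array([-omega * x[1], omega * x[0]])   # solid-body rotation
        DuDt = -omega**2 * x                          # centripetal fluid accel.
        return np.concatenate([v, beta * DuDt + (u - v) / tau])

    y0 = [0.05, 0.0, 0.0, 0.5]                        # released off-axis
    sol = solve_ivp(rhs, (0.0, 2.0), y0, max_step=1e-3)
    print("final radius:", np.hypot(sol.y[0, -1], sol.y[1, -1]))  # spirals inward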
Performance Evaluation of 18F Radioluminescence Microscopy Using Computational Simulation
Wang, Qian; Sengupta, Debanti; Kim, Tae Jin; Pratx, Guillem
2017-01-01
Purpose: Radioluminescence microscopy can visualize the distribution of beta-emitting radiotracers in live single cells with high resolution. Here, we perform a computational simulation of 18F positron imaging using this modality to better understand how radioluminescence signals are formed and to assist in optimizing the experimental setup and image processing. Methods: First, the transport of charged particles through the cell and scintillator and the resulting scintillation are modeled using the GEANT4 Monte-Carlo simulation. Then, the propagation of the scintillation light through the microscope is modeled by a convolution with a depth-dependent point-spread function, which models the microscope response. Finally, the physical measurement of the scintillation light using an electron-multiplying charge-coupled device (EMCCD) camera is modeled using a stochastic numerical photosensor model, which accounts for various sources of noise. The simulated output of the EMCCD camera is further processed using our ORBIT image reconstruction methodology to evaluate the endpoint images. Results: The EMCCD camera model was validated against experimentally acquired images, and the simulated noise, as measured by the standard deviation of a blank image, was found to be accurate to within 2% of the actual detection. Furthermore, point-source simulations found that a reconstructed spatial resolution of 18.5 μm can be achieved near the scintillator. As the source is moved away from the scintillator, spatial resolution degrades at a rate of 3.5 μm per μm of distance. These results agree well with the experimentally measured spatial resolution of 30–40 μm (live cells). The simulation also shows that the system sensitivity is 26.5%, which is also consistent with our previous experiments. Finally, an image of a simulated sparse set of single cells is visually similar to the measured cell image. Conclusions: Our simulation methodology agrees with experimental measurements taken with radioluminescence microscopy. This in silico approach can be used to guide further instrumentation developments and to provide a framework for improving image reconstruction.
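A minimal sketch of the optical and camera stages of such a pipeline (a fixed Gaussian PSF standing in for the depth-dependent microscope response, and a simple stochastic EMCCD model; the quantum efficiency, EM gain, read noise, and ADU conversion values are assumed, not taken from this paper):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(3)

    ideal = np.zeros((128, 128)); ideal[64, 64] = 2000.0   # point source, photons
    optical = gaussian_filter(ideal, sigma=3.0)   # stand-in for the depth-
                                                  # dependent microscope PSF

    def emccd(photons, qe=0.9, gain=300.0, read_noise=30.0, adu=12.0):
        # Photoelectron shot noise -> EM register (gamma approximation of
        # multiplication noise) -> Gaussian read noise -> digitization.
        electrons = rng.poisson(qe * photons)
        amplified = np.where(electrons > 0,
                             rng.gamma(np.maximum(electrons, 1), gain), 0.0)
        return np.round((amplified + rng.normal(0, read_noise, photons.shape)) / adu)

    frame = emccd(optical)    # one simulated camera frame in ADU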
Understanding the Magnetosphere: The Counter-intuitive Simplicity of Cosmic Electrodynamics
NASA Astrophysics Data System (ADS)
Vasyliūnas, V. M.
2008-12-01
Planetary magnetospheres exhibit an amazing variety of phenomena, unlimited in complexity if followed into endlessly fine detail. The challenge of theory is to understand this variety and complexity, ultimately by seeing how the observed effects follow from the basic equations of physics (a point emphasized by Eugene Parker). The basic equations themselves are remarkably simple, only their consequences being exceedingly complex (a point emphasized by Fred Hoyle). In this lecture I trace the development of electrodynamics as an essential ingredient of magnetospheric physics, through the three stages it has undergone to date. Stage I is the initial application of MHD concepts and constraints (sometimes phrased in equivalent single-particle terms). Stage II is the classical formulation of self-consistent coupling between magnetosphere and ionosphere. Stage III is the more recent recognition that properly elucidating time sequence and cause-effect relations requires Maxwell's equations combined with the unique constraints of large-scale plasma. Problems and controversies underlie the transition from each stage to the next. For each stage, there are specific observed aspects of the magnetosphere that can be understood at its level; also, each stage implies a specific way to formulate unresolved questions (particularly important in this age of extensive multi-point observations and ever-more-detailed numerical simulations).
NASA Astrophysics Data System (ADS)
Huang, Shih-Chieh Douglas
In this dissertation, I investigate the effects of a grounded learning experience on college students' mental models of physics systems. The grounded learning experience consisted of a priming stage and an instruction stage, and within each stage, one of two different types of visuo-haptic representation was applied: visuo-gestural simulation (visual modality and gestures) and visuo-haptic simulation (visual modality, gestures, and somatosensory information). A pilot study involving N = 23 college students examined how using different types of visuo-haptic representation in instruction affected people's mental model construction for physics systems. Participants' abilities to construct mental models were operationalized through their pretest-to-posttest gain scores for a basic physics system and their performance on a transfer task involving an advanced physics system. Findings from this pilot study revealed that, while both simulations significantly improved participants' mental model construction for physics systems, visuo-haptic simulation was significantly better than visuo-gestural simulation. In addition, clinical interviews suggested that participants' mental model construction for physics systems benefited from receiving visuo-haptic simulation in a tutorial prior to the instruction stage. A dissertation study involving N = 96 college students examined how types of visuo-haptic representation in different applications support participants' mental model construction for physics systems. Participants' abilities to construct mental models were again operationalized through their pretest-to-posttest gain scores for a basic physics system and their performance on a transfer task involving an advanced physics system. Participants' physics misconceptions were also measured before and after the grounded learning experience. Findings from this dissertation study not only revealed that visuo-haptic simulation was significantly more effective in promoting mental model construction and remedying participants' physics misconceptions than visuo-gestural simulation, they also revealed that visuo-haptic simulation was more effective during the priming stage than during the instruction stage. Interestingly, the effects of visuo-haptic simulation in priming and visuo-haptic simulation in instruction on participants' pretest-to-posttest gain scores for a basic physics system appeared additive. These results suggested that visuo-haptic simulation is effective in physics learning, especially when it is used during the priming stage.
NASA Astrophysics Data System (ADS)
Liu, Cheng-Wei
Phase transitions and their associated critical phenomena are of fundamental importance and play a crucial role in the development of statistical physics for both classical and quantum systems. Phase transitions embody diverse aspects of physics and also have numerous applications outside physics, e.g., in chemistry, biology, and combinatorial optimization problems in computer science. Many problems can be reduced to a system consisting of a large number of interacting agents, which under some circumstances (e.g., changes of external parameters) exhibit collective behavior; this type of scenario also underlies phase transitions. The theoretical understanding of equilibrium phase transitions was put on a solid footing with the establishment of the renormalization group. In contrast, non-equilibrium phase transitions are less well understood and are currently a very active research topic. One important milestone here is the Kibble-Zurek (KZ) mechanism, which provides a useful framework for describing a system with a transition point approached through a non-equilibrium quench process. I developed two efficient Monte Carlo techniques for studying phase transitions, one for classical phase transitions and the other for quantum phase transitions, both within the framework of KZ scaling. For classical phase transitions, I develop a non-equilibrium quench (NEQ) simulation that can completely avoid the critical slowing-down problem. For quantum phase transitions, I develop a new algorithm, named the quasi-adiabatic quantum Monte Carlo (QAQMC) algorithm, for studying quantum quenches. I demonstrate the utility of QAQMC on the quantum Ising model and obtain high-precision results at the transition point, in particular showing generalized dynamic scaling in the quantum system. To further extend the methods, I study more complex systems such as spin-glasses and random graphs. The techniques allow us to investigate these problems efficiently. From the classical perspective, using the NEQ approach I verify the universality class of the 3D Ising spin-glasses. I also investigate random 3-regular graphs in terms of both classical and quantum phase transitions. I demonstrate that under this simulation scheme, one can extract information associated with the classical and quantum spin-glass transitions without any knowledge prior to the simulation.
Size-dependent bending modulus of nanotubes induced by the imperfect boundary conditions
Zhang, Jin
2016-01-01
The size-dependent bending modulus of nanotubes, which was widely observed in most existing three-point bending experiments [e.g., J. Phys. Chem. B 117, 4618–4625 (2013)], has been tacitly assumed to originate from the shear effect. In this paper, taking boron nitride nanotubes as an example, we directly measured the shear effect by molecular dynamics (MD) simulations and found that the shear effect is not the major factor responsible for the observed size-dependent bending modulus of nanotubes. To further explain the size-dependence phenomenon, we abandoned the assumption of perfect boundary conditions (BCs) utilized in the aforementioned experiments and studied the influence of the BCs on the bending modulus of nanotubes based on MD simulations. The results show that the imperfect BCs also make the bending modulus of nanotubes size-dependent. Moreover, the size-dependence phenomenon induced by the imperfect BCs is much more significant than that induced by the shear effect, which suggests that the imperfect BC is a possible physical origin that leads to the strong size-dependence of the bending modulus found in the aforementioned experiments. To capture the physics behind the MD simulation results, a beam model with the general BCs is proposed and found to fit the experimental data very well.
2D imaging of helium ion velocity in the DIII-D divertor
NASA Astrophysics Data System (ADS)
Samuell, C. M.; Porter, G. D.; Meyer, W. H.; Rognlien, T. D.; Allen, S. L.; Briesemeister, A.; Mclean, A. G.; Zeng, L.; Jaervinen, A. E.; Howard, J.
2018-05-01
Two-dimensional imaging of parallel ion velocities is compared to fluid modeling simulations to understand the role of ions in determining divertor conditions and to benchmark the UEDGE fluid modeling code. Pure helium discharges are used so that spectroscopic He+ measurements represent the main-ion population at small electron temperatures. Electron temperatures and densities in the divertor match simulated values to within about 20%-30%, establishing the experiment/model match as being at least as good as those normally obtained in the more regularly simulated deuterium plasmas. The He+ brightness (HeII) comparison indicates that the degree of detachment is captured well by UEDGE, principally due to the inclusion of E×B drifts. Tomographically inverted Coherence Imaging Spectroscopy measurements are used to determine the He+ parallel velocities, which display excellent agreement between the model and the experiment near the divertor target, where He+ is predicted to be the main-ion species and where electron-dominated physics dictates the parallel momentum balance. Upstream near the X-point, where He+ is a minority species and ion-dominated physics plays a more important role, the flow velocity magnitude is underestimated by a factor of 2-3. These results indicate that more effort is required to correctly predict ion momentum in these challenging regimes.
Coupling of Noah-MP and the High Resolution CI-WATER ADHydro Hydrological Model
NASA Astrophysics Data System (ADS)
Moreno, H. A.; Goncalves Pureza, L.; Ogden, F. L.; Steinke, R. C.
2014-12-01
ADHydro is a physics-based, high-resolution, distributed hydrological model suitable for simulating large watersheds in a massively parallel computing environment. It simulates important processes such as rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow, and water management. For the vegetation and evapotranspiration processes, ADHydro uses the validated community land surface model (LSM) Noah-MP. Noah-MP offers multiple options for key land-surface hydrology processes and was developed to facilitate climate predictions with physically based ensembles. This presentation discusses the lessons learned in coupling Noah-MP to ADHydro. Noah-MP is delivered with a main driver program rather than as a library with a clear interface to be called from other codes, so some investigation was required to determine the correct functions to call and the appropriate parameter values. ADHydro runs Noah-MP as a point process on each mesh element and provides initialization and forcing data for each element. Modeling data are acquired from various sources including the Soil Survey Geographic Database (SSURGO), the Weather Research and Forecasting (WRF) model, and internal ADHydro simulation states. Despite these challenges in coupling Noah-MP to ADHydro, the use of Noah-MP provides the benefits of a supported community code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or a particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work has been done on obtaining reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies in the solution. An ad hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
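A minimal 1D sketch of the contrast drawn here between histogram and KDE tallies (mock exponential collision sites; the bandwidth h is an arbitrary choice): each event contributes a kernel centered on its location, so every tally point receives a score.

    import numpy as np

    rng = np.random.default_rng(4)
    collisions = rng.exponential(scale=3.0, size=5000)   # mock collision sites (cm)

    def kde_tally(events, grid, h=0.3):
        # Every event scores at every tally point through a Gaussian kernel,
        # unlike a histogram where it contributes to a single bin.
        u = (grid[:, None] - events[None, :]) / h
        return np.exp(-0.5 * u**2).sum(1) / (len(events) * h * np.sqrt(2 * np.pi))

    grid = np.linspace(0.0, 10.0, 101)
    kde = kde_tally(collisions, grid)
    hist, edges = np.histogram(collisions, bins=20, range=(0, 10), density=True)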
Bayesian calibration of coarse-grained forces: Efficiently addressing transferability
NASA Astrophysics Data System (ADS)
Patrone, Paul N.; Rosch, Thomas W.; Phelan, Frederick R.
2016-04-01
Generating and calibrating forces that are transferable across a range of state-points remains a challenging task in coarse-grained (CG) molecular dynamics. In this work, we present a coarse-graining workflow, inspired by ideas from uncertainty quantification and numerical analysis, to address this problem. The key idea behind our approach is to introduce a Bayesian correction algorithm that uses functional derivatives of CG simulations to rapidly and inexpensively recalibrate initial estimates f0 of forces anchored by standard methods such as force-matching. Taking density-temperature relationships as a running example, we demonstrate that this algorithm, in concert with various interpolation schemes, can be used to efficiently compute physically reasonable force curves on a fine grid of state-points. Importantly, we show that our workflow is robust to several choices available to the modeler, including the interpolation schemes and tools used to construct f0. In a related vein, we also demonstrate that our approach can speed up coarse-graining by reducing the number of atomistic simulations needed as inputs to standard methods for generating CG forces.
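A deliberately toy caricature of the correction loop described above (the "simulation" is replaced by a smooth stand-in function, and a single scalar force parameter stands in for a force curve; none of this reproduces the paper's workflow): an observable is driven to its target by Newton-style updates using a finite-difference estimate of the response derivative, the role played by the functional derivatives of CG simulations in the paper.

    import numpy as np

    def simulate(theta):
        # Smooth stand-in for a CG simulation observable at one state point
        return 0.8 + 0.3 * np.tanh(theta - 1.0)

    target, theta, eps = 0.95, 0.5, 1e-3   # theta: initial force-matched scale
    for _ in range(10):
        dA = (simulate(theta + eps) - simulate(theta - eps)) / (2 * eps)
        theta += (target - simulate(theta)) / dA   # Newton-style correction
    print(theta, simulate(theta))   # observable now matches the target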
Point-particle method to compute diffusion-limited cellular uptake.
Sozza, A; Piazza, F; Cencini, M; De Lillo, F; Boffetta, G
2018-02-01
We present an efficient point-particle approach to simulate reaction-diffusion processes of spherical absorbing particles in the diffusion-limited regime, as simple models of cellular uptake. The exact solution for a single absorber is used to calibrate the method, linking the numerical parameters to the physical particle radius and uptake rate. We study configurations of multiple absorbers of increasing complexity to examine the performance of the method by comparing our simulations with available exact analytical or numerical results. We demonstrate the potential of the method to resolve the complex diffusive interactions, here quantified by the Sherwood number, measuring the uptake rate in terms of that of isolated absorbers. We implement the method in a pseudospectral solver that can be generalized to include fluid motion and fluid-particle interactions. As a test case in the presence of a flow, we consider the uptake rate of a particle in a linear shear flow. Overall, our method represents a powerful and flexible computational tool that can be employed to investigate many complex situations in biology, chemistry, and related sciences.
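For intuition about the diffusion-limited regime the method targets, here is a brute-force (not point-particle) sketch: random walkers released at radius b between an absorbing sphere of radius a and an escape radius R are captured with exact probability a(R-b)/(b(R-a)). All numerical parameters are illustrative, and the coarse time step slightly biases the boundary detection.

    import numpy as np

    rng = np.random.default_rng(5)

    def capture_probability(a=1.0, b=2.0, R=10.0, D=1.0, dt=1e-2, n=500):
        # Walkers start at radius b between an absorbing sphere (radius a)
        # and an outer escape radius R; exact answer is a*(R-b)/(b*(R-a)).
        captured = 0
        for _ in range(n):
            x = np.array([b, 0.0, 0.0])
            r = b
            while a <= r <= R:
                x += np.sqrt(2 * D * dt) * rng.standard_normal(3)
                r = np.linalg.norm(x)
            captured += r < a
        return captured / n

    print(capture_probability(), "exact:", 1.0 * (10 - 2) / (2 * (10 - 1)))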
NASA Astrophysics Data System (ADS)
Khare, Ketan S.; Phelan, Frederick R., Jr.
Specialized applications of single-walled carbon nanotubes (SWCNTs) require an efficient and reliable method to sort these materials into monodisperse fractions with respect to their defining metrics (chirality, length, etc.) while retaining their physical and chemical integrity. A popular method to achieve this goal is to use surfactants that individually disperse SWCNTs in water and then to separate the resulting colloidal mixture into fractions that are enriched in monodisperse SWCNTs. Recently, experiments at NIST have shown that subtle point mutations of chemical groups in bile salt surfactants have a large impact on the hydrodynamic properties of SWCNT-surfactant complexes during ultracentrifugation. These results provide strong motivation for understanding the rich physics underlying the assembly of surfactants around SWCNTs, the structure and dynamics of counter ions around the resulting complex, and propagation of these effects into the first hydration shell. Here, all-atom molecular dynamics simulations are used to investigate the thermodynamics of SWCNT-bile salt surfactant complexes in water with an emphasis on the buoyant characteristics of the SWCNT-surfactant complexes. Simulation results will be presented along with a comparison with experimental data. Official contribution of the National Institute of Standards and Technology; not subject to copyright in the United States.
Finite-volume and partial quenching effects in the magnetic polarizability of the neutron
NASA Astrophysics Data System (ADS)
Hall, J. M. M.; Leinweber, D. B.; Young, R. D.
2014-03-01
There has been much progress in the experimental measurement of the electric and magnetic polarizabilities of the nucleon. Similarly, lattice QCD simulations have recently produced dynamical QCD results for the magnetic polarizability of the neutron approaching the chiral regime. In order to compare the lattice simulations with experiment, calculation of partial quenching and finite-volume effects is required prior to an extrapolation in quark mass to the physical point. These dependencies are described using chiral effective field theory. Corrections to the partial quenching effects associated with the sea-quark-loop electric charges are estimated by modeling corrections to the pion cloud. These are compared to the uncorrected lattice results. In addition, the behavior of the finite-volume corrections as a function of pion mass is explored. Box sizes of approximately 7 fm are required to achieve a result within 5% of the infinite-volume result at the physical pion mass. A variety of extrapolations are shown at different box sizes, providing a benchmark to guide future lattice QCD calculations of the magnetic polarizabilities. A relatively precise value for the physical magnetic polarizability of the neutron is presented, β_n = 1.93(11)_stat(11)_sys × 10^-4 fm^3, which is in agreement with current experimental results.
NASA Astrophysics Data System (ADS)
Anantua, Richard; Blandford, Roger; McKinney, Jonathan; Tchekhovskoy, Alexander
2016-01-01
We carry out the process of "observing" simulations of active galactic nuclei (AGN) with relativistic jets (hereafter called jet/accretion disk/black hole (JAB) systems), from ray tracing between image plane and source to convolving the resulting images with a point spread function. Images are generated at arbitrary observer angle relative to the black hole spin axis by implementing spatial and temporal interpolation of conserved magnetohydrodynamic flow quantities from a time series of output datablocks from fully general relativistic 3D simulations. We also describe the evolution of the JAB systems' dynamical and kinematic variables, e.g., velocity shear and momentum density, respectively, and the variation of these variables with respect to observer polar and azimuthal angles. We produce, at frequencies from radio to optical, fixed-observer-time intensity and polarization maps using various plasma-physics-motivated prescriptions for the emissivity as a function of physical quantities from the simulation output, and analyze the corresponding light curves. Our hypothesis is that this approach reproduces observed features of JAB systems, such as superluminal bulk flow projections and quasi-periodic oscillations in the light curves, more closely than extant stylized analytical models, e.g., cannonball bulk flows. Moreover, our development of user-friendly, versatile C++ routines for processing images of state-of-the-art simulations of JAB systems may afford greater flexibility for observing a wide range of sources, from high-power BL Lacs to low-power quasars (possibly with the same simulation), without requiring years of observation using multiple telescopes. Advantages of observing simulations instead of observing astrophysical sources directly include the absence of a diffraction limit, panoramic views of the same object, and the ability to freely track features. Light travel time effects become significant for high Lorentz factors and small angles between the observer direction and incident light rays; this regime is relevant for the study of AGN blazars in JAB simulations.
A macrochip interconnection network enabled by silicon nanophotonic devices.
Zheng, Xuezhe; Cunningham, John E; Koka, Pranay; Schwetman, Herb; Lexau, Jon; Ho, Ron; Shubin, Ivan; Krishnamoorthy, Ashok V; Yao, Jin; Mekis, Attila; Pinguet, Thierry
2010-03-01
We present an advanced wavelength-division multiplexing point-to-point network enabled by silicon nanophotonic devices. This network offers strictly non-blocking all-to-all connectivity while maximizing bisection bandwidth, making it ideal for multi-core and multi-processor interconnections. We introduce one of the key components, the nanophotonic grating coupler, and discuss, for the first time, how this device can be useful for practical implementations of the wavelength-division multiplexing network using optical proximity communication. Finite-difference time-domain simulation of the nanophotonic grating coupler indicates that it can be made compact (20 μm × 50 μm), low loss (3.8 dB), and broadband (100 nm). These couplers require subwavelength material modulation at the nanoscale to achieve the desired functionality. We show that optical proximity communication provides unmatched optical I/O bandwidth density to electrical chips, which enables the application of the wavelength-division multiplexing point-to-point network in the macrochip with unprecedented bandwidth density. The envisioned physical implementation is discussed. The benefits of such an interconnect network include a 5-6× improvement in latency when compared to a purely electronic implementation. Performance analysis shows that the wavelength-division multiplexing point-to-point network offers better overall performance than other optical network architectures.
Impact of tool wear on cross wedge rolling process stability and on product quality
NASA Astrophysics Data System (ADS)
Gutierrez, Catalina; Langlois, Laurent; Baudouin, Cyrille; Bigot, Régis; Fremeaux, Eric
2017-10-01
Cross wedge rolling (CWR) is a metal forming process used in the automotive industry. One of its applications is in the manufacturing process of connecting rods. CWR transforms a cylindrical billet into a complex axisymmetrical shape with an accurate distribution of material. This preform is then forged into shape in a forging die. In order to improve CWR tool lifecycle and product quality, it is essential to understand tool wear evolution and the physical phenomena of the CWR process that change as the tool geometry evolves under wear. Understanding CWR tool wear behavior requires numerical simulation; however, if the simulations are performed with the CAD geometry of the tool, the results are of limited value. To address this difficulty, two numerical simulations with FORGE® were performed using the real geometry of the tools (both upper and lower rolls) at two different states: (1) before entering service and (2) at the end of the lifecycle. The tools were measured in 3D with the ATOS Triple Scan system by GOM® using optical 3D measuring techniques. The result was a high-resolution point cloud of the entire geometry of each tool. Each 3D point cloud was digitized and converted into STL format, and the STL geometry of the tools served as input for the 3D simulations. The two simulations were compared, and the product defects obtained in simulation were compared with the main defects of products found industrially. The two main defects are: (a) surface defects on the preform that are not corrected in the die-forging operation; and (b) a bent (no longer straight) preform, which can either prevent the robot from grabbing it to take it to the forging stage or lead to an unfilled section in the forging operation.
NASA Astrophysics Data System (ADS)
Warmer, F.; Beidler, C. D.; Dinklage, A.; Wolf, R.; The W7-X Team
2016-07-01
As a starting point for a more in-depth discussion of a research strategy leading from Wendelstein 7-X to a HELIAS power plant, the respective steps in physics and engineering are considered from different vantage points. The first approach discusses the direct extrapolation of selected physics and engineering parameters. This is followed by an examination of advancing the understanding of stellarator optimisation. Finally, combining a dimensionless-parameter approach with an empirical energy confinement time scaling, the necessary development steps are highlighted. From this analysis it is concluded that an intermediate-step burning-plasma stellarator is the most prudent approach to bridging the gap between W7-X and a HELIAS power plant. Using a systems code approach in combination with transport simulations, a range of possible conceptual designs is analysed. This range is exemplified by two bounding cases: a fast-track, cost-efficient device with a low magnetic field and no blanket, and a device similar to a demonstration power plant, with a blanket and net electricity production.
Chaix, Basile; Kestens, Yan; Duncan, Scott; Merrien, Claire; Thierry, Benoît; Pannier, Bruno; Brondeel, Ruben; Lewin, Antoine; Karusisi, Noëlla; Perchoux, Camille; Thomas, Frédérique; Méline, Julie
2014-09-27
Accurate information is lacking on the extent of transportation as a source of physical activity, on the physical activity gains from public transportation use, and on the extent to which population shifts in the use of transportation modes could increase the percentage of people reaching official physical activity recommendations. In 2012-2013, 234 participants of the RECORD GPS Study (French Paris region, median age = 58) wore a portable GPS receiver and an accelerometer for 7 consecutive days and completed a 7-day GPS-based mobility survey (participation rate = 57.1%). Information on transportation modes and accelerometry data aggregated at the trip level [number of steps taken, energy expended, moderate-to-vigorous physical activity (MVPA), and sedentary time] were available for 7,644 trips. Associations between transportation modes and accelerometer-derived physical activity were estimated at the trip level with multilevel linear models. Participants spent a median of 1 h 58 min per day in transportation (8.2% of total time). Thirty-eight percent of steps taken, 31% of energy expended, and 33% of MVPA over 7 days were attributable to transportation. Walking and biking trips, but also public transportation trips with all four transit modes examined, were associated with greater steps, MVPA, and energy expenditure when compared to trips by personal motorized vehicle. Two simulated scenarios, implying a shift of approximately 14% and 33% of all motorized trips to public transportation or walking, were associated with predicted 6-point and 13-point increases in the percentage of participants achieving the current physical activity recommendation. Collecting data with GPS receivers, accelerometers, and a GPS-based electronic mobility survey of activities and transportation modes allowed us to investigate relationships between transportation modes and physical activity at the trip level. Our findings suggest that an increase in active transportation participation and public transportation use may have substantial impacts on the percentage of people achieving physical activity recommendations.
Generalized Fluid System Simulation Program (GFSSP) - Version 6
NASA Technical Reports Server (NTRS)
Majumdar, Alok; LeClair, Andre; Moore, Ric; Schallhorn, Paul
2015-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume-based general-purpose computer program for analyzing steady-state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, flow control valves, and external body forces such as gravity and centrifugal force. The thermo-fluid system to be analyzed is discretized into nodes, branches, and conductors. Scalar properties such as pressure, temperature, and concentration are calculated at nodes, while mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 24 different resistance/source options are provided for modeling momentum sources or sinks in the branches. Users can introduce new physics and non-linear, time-dependent boundary conditions through user subroutines.
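As a concrete illustration of the node-branch formulation described above, here is a minimal Python sketch, not GFSSP itself: boundary-node pressures are held fixed, branch flow rates follow an assumed linear pressure-conductance law (far simpler than GFSSP's real-fluid options), and interior node pressures are relaxed until mass is conserved at every node. All names and values are hypothetical.

```python
import numpy as np

def solve_network(n_nodes, branches, fixed_p, iters=5000, relax=0.05):
    """branches: list of (i, j, conductance); fixed_p: {node: pressure}."""
    p = np.zeros(n_nodes)
    for node, value in fixed_p.items():
        p[node] = value
    for _ in range(iters):
        residual = np.zeros(n_nodes)      # net mass inflow at each node
        for i, j, c in branches:
            q = c * (p[i] - p[j])         # assumed linear branch flow law
            residual[i] -= q
            residual[j] += q
        for node in range(n_nodes):
            if node not in fixed_p:       # only interior pressures move
                p[node] += relax * residual[node]
    return p

# Three nodes in series: fixed pressures at both ends, one interior node.
print(solve_network(3, [(0, 1, 1.0), (1, 2, 1.0)], {0: 100.0, 2: 50.0}))
```

The production code of course couples mass, momentum, energy, and concentration with real-fluid properties; this toy shows only the pressure-flow skeleton of a nodal mass balance.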
Spin-Ice Thin Films: Large-N Theory and Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Lantagne-Hurtubise, Étienne; Rau, Jeffrey G.; Gingras, Michel J. P.
2018-04-01
We explore the physics of highly frustrated magnets in confined geometries, focusing on the Coulomb phase of pyrochlore spin ices. As a specific example, we investigate thin films of nearest-neighbor spin ice, using a combination of analytic large-N techniques and Monte Carlo simulations. In the simplest film geometry, with surfaces perpendicular to the [001] crystallographic direction, we observe pinch points in the spin-spin correlations characteristic of a two-dimensional Coulomb phase. We then consider the consequences of crystal symmetry breaking on the surfaces of the film through the inclusion of orphan bonds. We find that when these bonds are ferromagnetic, the Coulomb phase is destroyed by the presence of fluctuating surface magnetic charges, leading to a classical Z2 spin liquid. Building on this understanding, we discuss other film geometries with surfaces perpendicular to the [110] or the [111] direction. We generically predict the appearance of surface magnetic charges and discuss their implications for the physics of such films, including the possibility of an unusual Z3 classical spin liquid. Finally, we comment on open questions and promising avenues for future research.
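The Monte Carlo side of such a study is, at its core, Metropolis dynamics on Ising pseudospins. The sketch below is a deliberately simplified stand-in: a 2D square lattice with periodic boundaries rather than the pyrochlore film with [001] surfaces and orphan bonds, and illustrative values for L, J, and T rather than the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
L, J, T = 16, 1.0, 2.0
s = rng.choice([-1, 1], size=(L, L))      # Ising pseudospins

def sweep(s):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2.0 * J * s[i, j] * nn       # energy cost of flipping s[i, j]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1                 # Metropolis accept

for _ in range(100):
    sweep(s)
print("magnetization per site:", s.mean())
```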
Linking market interaction intensity of 3D Ising type financial model with market volatility
NASA Astrophysics Data System (ADS)
Fang, Wen; Ke, Jinchuan; Wang, Jun; Feng, Ling
2016-11-01
Microscopic interaction models in physics have been used to investigate the complex phenomena of economic systems. The simple interactions involved can lead to complex behaviors and help the understanding of mechanisms in the financial market at a systemic level. This article develops a financial time series model through a 3D (three-dimensional) Ising dynamic system, which is widely used as an interacting-spins model to explain ferromagnetism in physics. Through Monte Carlo simulations of the financial model and numerical analysis of both the simulated return time series and historical return data of the Hushen 300 (HS300) index in the Chinese stock market, we show that despite its simplicity, this model displays stylized facts similar to those seen in real financial markets. We demonstrate a possible underlying link between volatility fluctuations of the real stock market and changes in the interaction strengths of market participants in the financial model. In particular, the stochastic interaction strength in our model suggests that the real market may be consistently operating near the critical point of the system.
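A common construction in this family of models, sketched here under assumed parameters (not the paper's), is to run a 3D Ising lattice with Metropolis dynamics and read the return time series off changes in magnetization; the scale factor alpha and the lattice size are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
L, T = 8, 4.5                            # 3D Ising critical T is ~4.51 J/k_B
s = rng.choice([-1, 1], size=(L, L, L))

def sweep(s):
    for _ in range(L**3):
        i, j, k = rng.integers(L, size=3)
        nn = (s[(i+1) % L, j, k] + s[(i-1) % L, j, k] +
              s[i, (j+1) % L, k] + s[i, (j-1) % L, k] +
              s[i, j, (k+1) % L] + s[i, j, (k-1) % L])
        dE = 2.0 * s[i, j, k] * nn
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j, k] *= -1

alpha, m_prev, returns = 1.0, s.mean(), []
for t in range(200):
    sweep(s)
    m = s.mean()
    returns.append(alpha * (m - m_prev))  # proxy for the log-return r(t)
    m_prev = m
print("sample volatility:", np.std(returns))
```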
The adaption and use of research codes for performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebetrau, A.M.
1987-05-01
Models of real-world phenomena are developed for many reasons and are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time; one example of an RS code is a finite-element code for solving the complex systems of differential equations that describe mass transfer through a geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods.
Comparison of APSIM and DNDC simulations of nitrogen transformations and N2O emissions.
Vogeler, I; Giltrap, D; Cichota, R
2013-11-01
Various models have been developed to better understand nitrogen (N) cycling in soils, which is governed by a complex interaction of physical, chemical and biological factors. Two process-based models, the Agricultural Production Systems sIMulator (APSIM) and DeNitrification DeComposition (DNDC), were used to simulate nitrification, denitrification and nitrous oxide (N2O) emissions from soils following N input from either fertiliser or excreta deposition. The effects of environmental conditions on N transformations as simulated by the two models were compared. Temperature had a larger effect on nitrification in APSIM, whereas in DNDC water content produced a larger response. In contrast, simulated denitrification showed a larger response to temperature, and also to organic carbon content, in DNDC. While denitrification in DNDC is triggered by rainfall of ≥5 mm/h, in APSIM the driving factor is soil water content, with a trigger point at field capacity. The two models also showed different responses to N load: DNDC simulated N2O emission rates that increase nearly linearly with N load, whereas APSIM simulated a lower rate of increase. Increasing rainfall intensity decreased APSIM-simulated N2O emissions but increased those simulated by DNDC.
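The contrasting denitrification triggers reported above reduce to two simple rules, sketched here with made-up soil parameters purely for illustration:

```python
# Toy contrast of the two denitrification triggers: DNDC-style
# (rainfall >= 5 mm/h) versus APSIM-style (volumetric water content
# at or above field capacity). theta_fc is an assumed value.
def dndc_denitrifies(rain_mm_per_h: float) -> bool:
    return rain_mm_per_h >= 5.0           # rainfall-driven trigger

def apsim_denitrifies(theta: float, theta_fc: float = 0.30) -> bool:
    return theta >= theta_fc              # water-content-driven trigger

print(dndc_denitrifies(6.0), apsim_denitrifies(0.25))   # True False
```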
Simulation of white light generation and near light bullets using a novel numerical technique
NASA Astrophysics Data System (ADS)
Zia, Haider
2018-01-01
An accurate and efficient simulation has been devised, employing a new numerical technique, to solve the derivative generalised non-linear Schrödinger equation in all three spatial dimensions and time. The simulation models all pertinent effects, such as self-steepening and plasma, for the non-linear propagation of ultrafast optical radiation in bulk material. Simulation results are compared to published experimental spectral data of an example ytterbium aluminum garnet system at 3.1 μm radiation and agree to within a factor of 5. The simulation shows that there is a stability point near the end of the 2 mm crystal where a quasi-light bullet (spatio-temporal soliton) is present. Within this region, the pulse is collimated at a reduced diameter (a factor of ∼2) and a near temporal soliton exists at the spatial center. The temporal intensity within this stable region is compressed by a factor of ∼4 compared to the input. The simulation thus highlights new physical phenomena, based on the interplay of various linear, non-linear and plasma effects, that go beyond the experiment, and it is therefore integral to achieving accurate designs of white light generation systems for optical applications. An adaptive error reduction algorithm tailor-made for this simulation is also presented in the appendix.
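For orientation, the standard starting point that such propagation codes generalize is the split-step Fourier method for the scalar cubic NLSE, written here in the fiber convention i u_z = (β₂/2) u_tt − γ|u|²u. The 1D sketch below is not the paper's 3D derivative-generalized solver or its new technique; all parameters are illustrative (normalized units, fundamental soliton input).

```python
import numpy as np

n, t_max = 1024, 20.0
t = np.linspace(-t_max, t_max, n, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(n, d=t[1] - t[0])
beta2, gamma, dz, steps = -1.0, 1.0, 1e-3, 5000   # anomalous dispersion

u = 1.0 / np.cosh(t)                              # fundamental soliton input
half_disp = np.exp(0.5j * beta2 * w**2 * dz / 2)  # half-step of dispersion
for _ in range(steps):
    u = np.fft.ifft(half_disp * np.fft.fft(u))    # linear half step
    u *= np.exp(1j * gamma * np.abs(u)**2 * dz)   # full nonlinear step
    u = np.fft.ifft(half_disp * np.fft.fft(u))    # linear half step

# A true soliton keeps its shape, so peak intensity should stay near 1.
print("peak intensity after propagation:", np.abs(u).max()**2)
```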
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, J. R.; Peng, E.; Ahmad, Z.
2015-05-15
We present a comprehensive methodology for the simulation of astronomical images from optical survey telescopes. We use a photon Monte Carlo approach to construct images by sampling photons from models of astronomical source populations, and then simulating those photons through the system as they interact with the atmosphere, telescope, and camera. We demonstrate that all physical effects for optical light that determine the shapes, locations, and brightnesses of individual stars and galaxies can be accurately represented in this formalism. By using large-scale grid computing, modern processors, and an efficient implementation that can produce 400,000 photons s⁻¹, we demonstrate that even very large optical surveys can now be simulated. We demonstrate that we are able to (1) construct kilometer-scale phase screens necessary for wide-field telescopes, (2) reproduce atmospheric point-spread function moments using a fast novel hybrid geometric/Fourier technique for non-diffraction-limited telescopes, (3) accurately reproduce the expected spot diagrams for complex aspheric optical designs, and (4) recover the system effective area predicted from analytic photometry integrals. This new code, the Photon Simulator (PhoSim), is publicly available. We have implemented the Large Synoptic Survey Telescope design, and it can be extended to other telescopes. We expect that because of the comprehensive physics implemented in PhoSim, it will be used by the community to plan future observations, interpret detailed existing observations, and quantify systematics related to various astronomical measurements. Future development and validation by comparisons with real data will continue to improve the fidelity and usability of the code.
TSARINA: A computer model for assessing conventional and chemical attacks on air bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emerson, D.E.; Wegner, L.H.
This Note describes the latest version of the TSARINA (TSAR INputs using AIDA) airbase damage assessment computer program, which has been developed to estimate the on-base concentration of toxic agents that would be deposited by a chemical attack and to assess losses to various on-base resources from conventional attacks, as well as the physical damage to runways, taxiways, buildings, and other facilities. Although the model may be used as a general-purpose, complex-target damage assessment model, its primary role is to support the TSAR (Theater Simulation of Airbase Resources) aircraft sortie generation simulation program. When used with TSAR, multiple trials of a multibase airbase-attack campaign can be assessed with TSARINA, and the impact of those attacks on sortie generation can be derived using the TSAR simulation model. TSARINA, as currently configured, permits damage assessments of attacks on an airbase (or other) complex that is composed of up to 1000 individual targets (buildings, taxiways, etc.) and 2500 packets of resources. TSARINA determines the actual impact points (pattern centroids for CBUs and container burst points for chemical weapons) by Monte Carlo procedures, i.e., by random selections from the appropriate error distributions. Uncertainties in wind velocity and heading are also considered for chemical weapons. Point-impact weapons that impact within a specified distance of each target type are classed as hits, and the damage to structures and to the various classes of support resources is assessed using cookie-cutter weapon-effects approximations.
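The Monte Carlo impact-point logic described above can be illustrated with a toy sketch: a bivariate normal delivery error perturbs each aim point, and a cookie-cutter test classes a weapon as a hit if it lands within a lethal radius of the target. The error sigma, radius, and geometry are assumptions for illustration, not TSARINA's models.

```python
import numpy as np

rng = np.random.default_rng(7)
target_xy = np.array([50.0, 80.0])
lethal_radius, sigma = 15.0, 25.0          # illustrative values

def trial(n_weapons=10):
    # Perturb the aim point with a bivariate normal delivery error.
    impacts = target_xy + rng.normal(0.0, sigma, size=(n_weapons, 2))
    dist = np.linalg.norm(impacts - target_xy, axis=1)
    return int((dist <= lethal_radius).sum())  # cookie-cutter hit count

hits = [trial() for _ in range(1000)]          # multiple attack trials
print("mean hits per attack:", np.mean(hits))
```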
From HADES to PARADISE—atomistic simulation of defects in minerals
NASA Astrophysics Data System (ADS)
Parker, Stephen C.; Cooke, David J.; Kerisit, Sebastien; Marmier, Arnaud S.; Taylor, Sarah L.; Taylor, Stuart N.
2004-07-01
The development of the HADES code by Michael Norgett in the 1970s enabled, for the first time, the routine simulation of point defects in inorganic solids at the atomic scale. Using examples from current research we illustrate how the scope and applications of atomistic simulations have widened with time and yet still follow an approach readily identifiable with this early work. First we discuss the use of the Mott-Littleton methodology to study the segregation of various isovalent cations to the (00.1) and (01.2) surfaces of haematite (α-Fe2O3). The results show that the size of the impurities has a considerable effect on the magnitude of the segregation energy. We then extend these simulations to investigate the effect of the concentration of the impurities at the surface on the segregation process using a supercell approach. We next consider the effect of segregation to stepped surfaces, illustrating this with recent work on the segregation of La3+ to CaF2 surfaces, which shows enhanced segregation to step edges. We then discuss the application of lattice dynamics to modelling point defects in complex oxide materials, applying this to the study of hydrogen incorporation into β-Mg2SiO4. Finally, our attention is turned to a method for treating the surface energy of physically defective surfaces, which we illustrate by considering the low-index surfaces of α-Al2O3.
NASA Astrophysics Data System (ADS)
Wasklewicz, Thad; Zhu, Zhen; Gares, Paul
2017-12-01
Rapid technological advances, sustained funding, and a greater recognition of the value of topographic data have helped develop an increasing archive of topographic data sources. Advances in basic and applied research related to Earth surface changes require researchers to integrate recent high-resolution topography (HRT) data with legacy datasets. Several technical challenges and data uncertainty issues persist when integrating legacy datasets with more recent HRT data. The disparate data sources required to extend the topographic record back in time are often stored in formats that are not readily compatible with more recent HRT data. Legacy data may also contain unknown or unreported error that makes accounting for data uncertainty difficult. There are also cases of known deficiencies in legacy datasets, which can significantly bias results. Finally, scientists are faced with the daunting challenge of definitively deriving the extent to which a landform or landscape has changed or will continue to change in response to natural and/or anthropogenic processes. Here, we examine the question: how do we evaluate and portray data uncertainty from varied legacy topographic sources and combine this uncertainty with current spatial data collection techniques to detect meaningful topographic changes? We view topographic uncertainty as a stochastic process that takes into consideration spatial and temporal variations, using a numerical simulation and a physical modeling experiment. The numerical simulation incorporates numerous topographic data sources typically found across the range from legacy data to present-day high-resolution data, while the physical model focuses on more recent HRT data acquisition techniques. Elevation uncertainties observed at anchor points in the digital terrain models are modeled as "states" in a stochastic estimator. Stochastic estimators trace the temporal evolution of the uncertainties and are natively capable of incorporating sensor measurements observed at various times in history. The geometric relationship between an anchor point and a sensor measurement can be approximated via spatial correlation even when a sensor does not directly observe the anchor point. Findings from the numerical simulation indicate that the estimated error coincides with the actual error for certain sensors (kinematic GNSS, ALS, TLS, and SfM-MVS). Data from 2D imagery and static GNSS did not perform as well at the time the sensor was integrated into the estimator, largely as a result of the low density of data added from these sources. The estimator provides a history of DEM estimation as well as the uncertainties and cross-correlations observed at anchor points. Our work provides preliminary evidence that this approach is valid for integrating legacy data with HRT and warrants further exploration and field validation.
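In the spirit of the "states in a stochastic estimator" idea above, the sketch below runs a minimal scalar Kalman filter on one anchor point's elevation: each survey, legacy or HRT, arrives with its own variance, and the filter carries both the estimate and its uncertainty forward in time. The sensor variances and the process noise q are illustrative assumptions, not values from the study.

```python
def kalman_elevation(measurements, q=0.01):
    """measurements: list of (elevation_m, variance_m2), time-ordered."""
    z0, r0 = measurements[0]
    x, p = z0, r0                         # initialize from the first survey
    history = [(x, p)]
    for z, r in measurements[1:]:
        p += q                            # predict: uncertainty grows with time
        k = p / (p + r)                   # Kalman gain
        x += k * (z - x)                  # update with the new survey
        p *= (1.0 - k)
        history.append((x, p))
    return history

surveys = [(102.40, 1.00),   # coarse legacy DEM
           (102.10, 0.20),   # airborne laser scan
           (102.15, 0.05)]   # terrestrial laser scan
for est, var in kalman_elevation(surveys):
    print(f"elevation {est:.3f} m, variance {var:.4f} m^2")
```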
Modeling and simulation of satellite subsystems for end-to-end spacecraft modeling
NASA Astrophysics Data System (ADS)
Schum, William K.; Doolittle, Christina M.; Boyarko, George A.
2006-05-01
During the past ten years, the Air Force Research Laboratory (AFRL) has been simultaneously developing high-fidelity spacecraft payload models as well as a robust distributed simulation environment for modeling spacecraft subsystems. Much of this research has occurred in the Distributed Architecture Simulation Laboratory (DASL). AFRL developers working in the DASL have effectively combined satellite power, attitude pointing, and communication link analysis subsystem models with robust satellite sensor models to create a first-order end-to-end satellite simulation capability. The merging of these two simulation areas has advanced the field of spacecraft simulation, design, and analysis, and enabled more in-depth mission and satellite utility analyses. A core capability of the DASL is the support of a variety of modeling and analysis efforts, ranging from physics and engineering-level modeling to mission and campaign-level analysis. The flexibility and agility of this simulation architecture will be used to support space mission analysis, military utility analysis, and various integrated exercises with other military and space organizations via direct integration, or through DOD standards such as Distributed Interaction Simulation. This paper discusses the results and lessons learned in modeling satellite communication link analysis, power, and attitude control subsystems for an end-to-end satellite simulation. It also discusses how these spacecraft subsystem simulations feed into and support military utility and space mission analyses.
Simbios: an NIH national center for physics-based simulation of biological structures.
Delp, Scott L; Ku, Joy P; Pande, Vijay S; Sherman, Michael A; Altman, Russ B
2012-01-01
Physics-based simulation provides a powerful framework for understanding biological form and function. Simulations can be used by biologists to study macromolecular assemblies and by clinicians to design treatments for diseases. Simulations help biomedical researchers understand the physical constraints on biological systems as they engineer novel drugs, synthetic tissues, medical devices, and surgical interventions. Although individual biomedical investigators make outstanding contributions to physics-based simulation, the field has been fragmented. Applications are typically limited to a single physical scale, and individual investigators usually must create their own software. These conditions created a major barrier to advancing simulation capabilities. In 2004, we established a National Center for Physics-Based Simulation of Biological Structures (Simbios) to help integrate the field and accelerate biomedical research. In 6 years, Simbios has become a vibrant national center, with collaborators in 16 states and eight countries. Simbios focuses on problems at both the molecular scale and the organismal level, with a long-term goal of uniting these in accurate multiscale simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K.; Brateman, Libby F.
2014-11-01
Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28-0.51, with absolute differences averaging −0.01. The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than by the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the previously published physical measurements and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of scatter-correction software can be simplified by ignoring the variations in density among breast tissues.
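The MRE-fitting step lends itself to a short sketch: synthetic ring-tally counts are fit to a single-exponential point-spread function PSF(r) = A exp(-r/MRE), the functional form implied above (the authors note a biexponential may suit nongrid data better). All numbers here are synthetic, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def psf(r, amplitude, mre):
    # Single-exponential PSF; mre is the mean radial extent.
    return amplitude * np.exp(-r / mre)

r = np.linspace(1.0, 60.0, 30)                 # ring radii in mm
true_a, true_mre = 1000.0, 12.0
noise = np.random.default_rng(3).normal(1.0, 0.05, r.size)
counts = psf(r, true_a, true_mre) * noise      # synthetic ring tallies

(fit_a, fit_mre), _ = curve_fit(psf, r, counts, p0=(500.0, 5.0))
print(f"fitted MRE = {fit_mre:.2f} mm (true {true_mre} mm)")
```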
VERA Core Simulator methodology for pressurized water reactor cycle depletion
Kochunas, Brendan; Collins, Benjamin; Stimpson, Shane; ...
2017-01-12
This paper describes the methodology developed and implemented in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS) to perform high-fidelity, pressurized water reactor (PWR), multicycle, core physics calculations. Depletion of the core with pin-resolved power and nuclide detail is a significant advance in the state of the art for reactor analysis, providing the level of detail necessary to address the problems of the U.S. Department of Energy Nuclear Reactor Simulation Hub, the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS has three main components: the neutronics solver MPACT, the thermal-hydraulic (T-H) solver COBRA-TF (CTF), and the nuclide transmutation solver ORIGEN. This paper focuses on MPACT and provides an overview of the resonance self-shielding methods, macroscopic-cross-section calculation, two-dimensional/one-dimensional (2-D/1-D) transport, nuclide depletion, T-H feedback, and other supporting methods representing a minimal set of the capabilities needed to simulate high-fidelity models of a commercial nuclear reactor. Results are presented from the simulation of a model of the first cycle of Watts Bar Unit 1. The simulation is within 16 parts per million boron (ppmB) reactivity for all state points compared to cycle measurements, with an average reactivity bias of <5 ppmB for the entire cycle. Comparisons to cycle 1 flux map data are also provided, and the average 2-D root-mean-square (rms) error during cycle 1 is 1.07%. To demonstrate the multicycle capability, a state point at beginning of cycle (BOC) 2 was also simulated and compared to plant data. The comparison of the cycle 2 BOC state has a reactivity difference of +3 ppmB from measurement, and the 2-D rms of the comparison in the flux maps is 1.77%. Lastly, these results provide confidence in VERA-CS's capability to perform high-fidelity calculations for practical PWR reactor problems.
Reducing Childhood Obesity through U.S. Federal Policy
Kristensen, Alyson H.; Flottemesch, Thomas J.; Maciosek, Michael V.; Jenson, Jennifer; Barclay, Gillian; Ashe, Marice; Sanchez, Eduardo J.; Story, Mary; Teutsch, Steven M.; Brownson, Ross C.
2016-01-01
Background Childhood obesity prevalence remains high in the U.S., especially among racial/ethnic minorities and low-income populations. Federal policy is important in improving public health given its broad reach. Information is needed about federal policies that could reduce childhood obesity rates and by how much. Purpose To estimate the impact of three federal policies on childhood obesity prevalence in 2032, after 20 years of implementation. Methods Criteria were used to select the three following policies to reduce childhood obesity from 26 recommended policies: afterschool physical activity programs, a $0.01/ounce sugar-sweetened beverage (SSB) excise tax, and a ban on child-directed fast food TV advertising. For each policy, the literature was reviewed from January 2000 through July 2012 to find evidence of effectiveness and create average effect sizes. In 2012, a Markov microsimulation model estimated each policy’s impact on diet or physical activity, and then BMI, in a simulated school-aged population in 2032. Results The microsimulation predicted that afterschool physical activity programs would reduce obesity the most among children aged 6–12 years (1.8 percentage points) and the advertising ban would reduce obesity the least (0.9 percentage points). The SSB excise tax would reduce obesity the most among adolescents aged 13–18 years (2.4 percentage points). All three policies would reduce obesity more among blacks and Hispanics than whites, with the SSB excise tax reducing obesity disparities the most. Conclusions All three policies would reduce childhood obesity prevalence by 2032. However, a national $0.01/ounce SSB excise tax is the best option. PMID:25175764
The calibration of an HF radar used for ionospheric research
NASA Astrophysics Data System (ADS)
From, W. R.; Whitehead, J. D.
1984-02-01
The HF radar on Bribie Island, Australia, uses crossed-fan beams produced by crossed linear transmitter and receiver arrays of 10 elements each to simulate a pencil beam. The beam points vertically when all the array elements are in phase, and is steerable by up to 20 deg off vertical at the central one of the three operating frequencies. Phase and gain changes within the transmitters and receivers are compensated for by an automatic system of adjustment. The 10 transmitting antennas are, as nearly as possible, physically identical as are the 10 receiving antennas. Antenna calibration using high flying aircraft or satellites is not possible. A method is described for using the ionospheric reflections to measure the polar diagram and also to correct for errors in the direction of pointing.
Prediction of the Aerothermodynamic Environment of the Huygens Probe
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Striepe, Scott A.; Wright, Michael J.; Bose, Deepak; Sutton, Kenneth; Takashima, Naruhisa
2005-01-01
An investigation of the aerothermodynamic environment of the Huygens entry probe has been conducted. A Monte Carlo simulation of the trajectory of the probe during entry into Titan's atmosphere was performed to identify a worst-case heating rate trajectory. Flowfield and radiation transport computations were performed at points along this trajectory to obtain convective and radiative heat-transfer distributions on the probe's heat shield. This investigation identified important physical and numerical factors, including atmospheric CH4 concentration, transition to turbulence, numerical diffusion modeling, and radiation modeling, which strongly influenced the aerothermodynamic environment.
Equations of motion of a space station with emphasis on the effects of the gravity gradient
NASA Technical Reports Server (NTRS)
Tuell, L. P.
1987-01-01
The derivation of the equations of motion is based upon the principle of virtual work. As developed, these equations apply only to a space vehicle whose physical model consists of a rigid central carrier supporting several flexible appendages (not interconnected), smaller rigid bodies, and point masses. The equations give considerably more attention to the influence of the Earth's gravity field than has been customary in simulating vehicle motion. The effect of unpredictable crew motion is ignored.
Temperature Scaling Law for Quantum Annealing Optimizers.
Albash, Tameem; Martin-Mayor, Victor; Hen, Itay
2017-09-15
Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of quantum annealers that operate at a fixed finite temperature, which prevents them from functioning as competitive scalable optimizers, and show that to serve as optimizers annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that the temperature must drop at the very least in a logarithmic manner, but possibly as a power law, with problem size. We corroborate our results by experiments and simulations and discuss their implications for practical annealers.
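Stated schematically (this is our plain-notation reading of the abstract; the paper's precise constants and exponent are not reproduced here), the requirement on the operating temperature T as a function of problem size N is:

```latex
% Schematic scaling requirement, as described in the abstract:
% at the very least a logarithmic drop with problem size N,
% and possibly a power-law drop.
T(N) \lesssim \frac{c_1}{\log N}
\qquad \text{or} \qquad
T(N) \lesssim c_2\, N^{-\alpha}, \quad \alpha > 0
```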
Constraint methods that accelerate free-energy simulations of biomolecules.
Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A
2015-12-28
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierret, C.; Maunoury, L.; Biri, S.
The goal of this article is to present simulations of the extraction from an electron cyclotron resonance ion source (ECRIS). The aim of this work is to find an extraction system that reduces the emittances and increases the current of the extracted ion beam at the focal point of the analyzing dipole. First, however, software able to reproduce the specific physics of an ion beam had to be identified. To perform the simulations, the following software packages were tested: SIMION 3D, AXCEL, CPO 3D, and, especially for the magnetic field calculation, MATHEMATICA coupled with the RADIA module. Emittance calculations have been done with two types of ECRIS, one with a hexapole and one without, and the differences are discussed.
Black-hole Merger Simulations for LISA Science
NASA Technical Reports Server (NTRS)
Kelly, Bernard J.; Baker, John G.; vanMeter, James R.; Boggs, William D.; Centrella, Joan M.; McWilliams, Sean T.
2009-01-01
The strongest expected sources of gravitational waves in the LISA band are the mergers of massive black holes. LISA may observe these systems to high redshift, z>10, to uncover details of the origin of massive black holes, and of the relationship between black holes and their host structures, and structure formation itself. These signals arise from the final stage in the development of a massive black-hole binary emitting strong gravitational radiation that accelerates the system's inspiral toward merger. The strongest part of the signal, at the point of merger, carries much information about the system and provides a probe of extreme gravitational physics. Theoretical predictions for these merger signals rely on supercomputer simulations to solve Einstein's equations. We discuss recent numerical results and their impact on LISA science expectations.
Novel characterization of capsule x-ray drive at the National Ignition Facility.
MacLaren, S A; Schneider, M B; Widmann, K; Hammer, J H; Yoxall, B E; Moody, J D; Bell, P M; Benedetti, L R; Bradley, D K; Edwards, M J; Guymer, T M; Hinkel, D E; Hsing, W W; Kervin, M L; Meezan, N B; Moore, A S; Ralph, J E
2014-03-14
Indirect drive experiments at the National Ignition Facility are designed to achieve fusion by imploding a fuel capsule with x rays from a laser-driven hohlraum. Previous experiments have been unable to determine whether a deficit in measured ablator implosion velocity relative to simulations is due to inadequate models of the hohlraum or ablator physics. ViewFactor experiments allow for the first time a direct measure of the x-ray drive from the capsule point of view. The experiments show a 15%-25% deficit relative to simulations and thus explain nearly all of the disagreement with the velocity data. In addition, the data from this open geometry provide much greater constraints on a predictive model of laser-driven hohlraum performance than the nominal ignition target.
Pilot estimates of glidepath and aim point during simulated landing approaches
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
1981-01-01
Pilot perceptions of glidepath angle and aim point were measured during simulated landings. A fixed-base cockpit simulator was used with video recordings of simulated landing approaches shown on a video projector. Pilots estimated the magnitudes of approach errors during observation without attempting to make corrections. Pilots estimated glidepath angular errors well, but had difficulty estimating aim-point errors. The data make plausible the hypothesis that pilots are little concerned with aim point during most of an approach, concentrating instead on keeping close to the nominal glidepath and trusting this technique to guide them to the proper touchdown point.
NASA Astrophysics Data System (ADS)
José Gómez-Navarro, Juan; Raible, Christoph C.; Blumer, Sandro; Martius, Olivia; Felder, Guido
2016-04-01
Extreme precipitation episodes, although rare, are natural phenomena that can threaten human activities, especially in densely populated areas such as Switzerland. Their relevance demands the design of public policies that protect public assets and private property. This in turn requires improving the current understanding of such exceptional situations, i.e. the climatic characterisation of their triggering circumstances, severity, frequency, and spatial distribution. Such increased knowledge should eventually lead to more reliable projections of the behaviour of these events under ongoing climate change. Unfortunately, the study of extreme situations is hampered by the short instrumental record, which precludes a proper characterisation of events with return periods exceeding a few decades. This study proposes a new approach that allows storms to be studied based on a synthetic, but physically consistent, database of weather situations obtained from a long climate simulation. Our starting point is a 500-yr control simulation carried out with the Community Earth System Model (CESM). In a second step, this dataset is dynamically downscaled with the Weather Research and Forecasting model (WRF) to a final resolution of 2 km over the Alpine area. Downscaling the full CESM simulation at such high resolution is infeasible nowadays, however, so a number of case studies are selected first. This selection examines the precipitation averaged over an area encompassing Switzerland in the ESM. Using a hydrological criterion, precipitation is accumulated over several temporal windows: 1 day, 2 days, 3 days, 5 days and 10 days. The 4 most extreme events in each category and season are selected, leading to a total of 336 days to be simulated. The simulated events are affected by systematic biases that have to be accounted for before this dataset can be used as input to hydrological models; quantile mapping is therefore used to remove such biases. For this task, a 20-yr high-resolution control simulation is carried out. The extreme events belong to this distribution and can be mapped onto the distribution of precipitation from a gridded precipitation product provided by MeteoSwiss. This procedure yields bias-free extreme precipitation events that serve as input to hydrological models, which eventually produce simulated, yet physically consistent, flooding events. The proposed methodology thereby guarantees consistency with the underlying physics of extreme events and reproduces plausible impacts of up to one-in-five-centuries situations.
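The bias-correction step lends itself to a compact sketch: empirical quantile mapping passes each simulated value through the control run's empirical CDF and onto the observed quantiles. The toy arrays below stand in for the 20-yr WRF control run and the MeteoSwiss gridded product; a real application would operate per grid cell and season.

```python
import numpy as np

def quantile_map(x_event, x_control, x_obs):
    """Map event values through control-run quantiles onto observed quantiles."""
    quantiles = (np.searchsorted(np.sort(x_control), x_event, side="right")
                 / float(len(x_control)))
    quantiles = np.clip(quantiles, 0.0, 1.0)
    return np.quantile(x_obs, quantiles)

rng = np.random.default_rng(4)
control = rng.gamma(2.0, 5.0, 7300)   # ~20 yr of simulated daily precipitation
obs = rng.gamma(2.0, 4.0, 7300)       # observed daily precipitation (stand-in)
event = np.array([80.0, 120.0])       # extreme simulated events (mm/day)
print(quantile_map(event, control, obs))
```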
Standard Model and new physics for ε'_K/ε_K
NASA Astrophysics Data System (ADS)
Kitahara, Teppei
2018-05-01
The first lattice-simulation result and improved perturbative calculations have pointed to a discrepancy between data on ε'_K/ε_K and the standard-model (SM) prediction. Several new physics (NP) models can explain this discrepancy, and such NP models are likely to predict deviations of ℬ(K → πνν̄) from the SM prediction.
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil
2010-01-01
This users manual provides in-depth information concerning installation and execution of LAURA, version 5. LAURA is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 LAURA code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, LAURA now shares gas-physics modules, MPI modules, and other low-level modules with the FUN3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, William L.
2013-01-01
This users manual provides in-depth information concerning installation and execution of LAURA, version 5. LAURA is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 LAURA code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent recompilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, LAURA now shares gas-physics modules, MPI modules, and other low-level modules with the Fun3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil
2011-01-01
This users manual provides in-depth information concerning installation and execution of Laura, version 5. Laura is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 Laura code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, Laura now shares gas-physics modules, MPI modules, and other low-level modules with the Fun3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.
Metrological assurance and traceability for Industry 4.0 and additive manufacturing in Ukraine
NASA Astrophysics Data System (ADS)
Skliarov, Volodymyr; Neyezhmakov, Pavel; Prokopov, Alexander
2018-03-01
The national measurement standards, from the point of view of traceability of measurement results in additive manufacturing in Ukraine, are considered in this paper. The metrological characteristics of the national primary measurement standards in the fields of geometric, temperature, optical-physical and time-frequency measurements, which took part in international comparisons within COOMET projects, are presented. Accurate geometric, temperature, optical-physical and time-frequency measurements are key to controlling the quality of additive manufacturing. The use of advanced CAD/CAE/CAM systems allows the process of additive manufacturing to be simulated at each stage. In accordance with the areas of additive manufacturing technology, ways of improving the national measurement standards of Ukraine for the growing metrology needs of additive manufacturing are considered.
24 CFR 902.27 - Physical condition portion of total PHAS points.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 24, Housing and Urban Development: Public Housing Assessment System, PHAS Indicator #1: Physical Condition. § 902.27 Physical condition portion of total PHAS points. Of the total 100 points available for a PHAS...
Development of a Robust and Efficient Parallel Solver for Unsteady Turbomachinery Flows
NASA Technical Reports Server (NTRS)
West, Jeff; Wright, Jeffrey; Thakur, Siddharth; Luke, Ed; Grinstead, Nathan
2012-01-01
The traditional design and analysis practice for advanced propulsion systems relies heavily on expensive full-scale prototype development and testing. Over the past decade, use of high-fidelity analysis and design tools such as CFD early in the product development cycle has been identified as one way to alleviate testing costs and to develop these devices better, faster, and cheaper. In the design of advanced propulsion systems, CFD plays a major role in defining the required performance over the entire flight regime, as well as in testing the sensitivity of the design to the different modes of operation. Increased emphasis is being placed on developing and applying CFD models to simulate the flow field environments and performance of advanced propulsion systems. This necessitates the development of next-generation computational tools which can be used effectively and reliably in a design environment. The turbomachinery simulation capability presented here is being developed in a computational tool called Loci-STREAM [1]. It integrates proven numerical methods for generalized grids and state-of-the-art physical models in a novel rule-based programming framework called Loci [2] which allows: (a) seamless integration of multidisciplinary physics in a unified manner, and (b) automatic handling of massively parallel computing. The objective is to be able to routinely simulate problems involving complex geometries requiring large unstructured grids and complex multidisciplinary physics. An immediate application of interest is the simulation of unsteady flows in rocket turbopumps, particularly in cryogenic liquid rocket engines. The key components of the overall methodology presented in this paper are the following: (a) high-fidelity unsteady simulation capability based on Detached Eddy Simulation (DES) in conjunction with second-order temporal discretization, (b) compliance with the Geometric Conservation Law (GCL) in order to maintain the conservative property on moving meshes for the second-order time-stepping scheme, (c) a novel cloud-of-points interpolation method (based on a fast parallel kd-tree search algorithm) for interfaces between turbomachinery components in relative motion, which is demonstrated to be highly scalable, and (d) demonstrated accuracy and parallel scalability on large grids (approximately 250 million cells) in full turbomachinery geometries.
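The cloud-of-points interface treatment can be illustrated, though not as Loci-STREAM's actual implementation, by a kd-tree nearest-neighbor search followed by inverse-distance weighting from donor cells to receiver points across the sliding interface:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_interp(donor_xyz, donor_vals, recv_xyz, k=4, eps=1e-12):
    tree = cKDTree(donor_xyz)                  # fast spatial search structure
    dist, idx = tree.query(recv_xyz, k=k)      # k nearest donors per receiver
    w = 1.0 / (dist + eps)                     # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    return (w * donor_vals[idx]).sum(axis=1)

rng = np.random.default_rng(5)
donors = rng.random((1000, 3))
vals = donors[:, 0] + 2.0 * donors[:, 1]       # smooth test field
receivers = rng.random((5, 3))
print(cloud_interp(donors, vals, receivers))
print(receivers[:, 0] + 2.0 * receivers[:, 1]) # compare to the exact field
```

One reason such a construction scales well is that each receiver point's query is independent of the others, so the interface interpolation parallelizes naturally.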
NASA Technical Reports Server (NTRS)
1981-01-01
The software developed to simulate the ground control point navigation system is described. The Ground Control Point Simulation Program (GCPSIM) is designed as an analysis tool to predict the performance of the navigation system. The system consists of two star trackers, a global positioning system receiver, a gyro package, and a landmark tracker.
Adaptive LES Methodology for Turbulent Flow Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oleg V. Vasilyev
2008-06-12
Although turbulent flows are common in the world around us, a solution to the fundamental equations that govern turbulence still eludes the scientific community. Turbulence has often been called one of the last unsolved problems in classical physics, yet it is clear that the need to accurately predict the effect of turbulent flows impacts virtually every field of science and engineering. As an example, a critical step in making modern computational tools useful in designing aircraft is to be able to accurately predict the lift, drag, and other aerodynamic characteristics in numerical simulations in a reasonable amount of time. Simulations that take months to years to complete are much less useful to the design cycle. Much work has been done toward this goal (Lee-Rausch et al. 2003, Jameson 2003) and as cost-effective, accurate tools for simulating turbulent flows evolve, we will all benefit from new scientific and engineering breakthroughs. The problem of simulating high Reynolds number (Re) turbulent flows of engineering and scientific interest would have been solved with the advent of Direct Numerical Simulation (DNS) techniques if unlimited computing power, memory, and time could be applied to each particular problem. Yet, given the current and near-future computational resources and a reasonable limit on the amount of time an engineer or scientist can wait for a result, the DNS technique will not be useful for more than 'unit' problems for the foreseeable future (Moin & Kim 1997, Jimenez & Moin 1991). The high computational cost of DNS of three-dimensional turbulent flows results from the fact that such flows have eddies of significant energy in a range of scales from the characteristic length scale of the flow all the way down to the Kolmogorov length scale. The actual cost of doing a three-dimensional DNS scales as Re^(9/4) due to the large disparity in scales that need to be fully resolved. State-of-the-art DNS calculations of isotropic turbulence have recently been completed at the Japanese Earth Simulator (Yokokawa et al. 2002, Kaneda et al. 2003) using a resolution of 4096^3 (approximately 10^11) grid points with a Taylor-scale Reynolds number of 1217 (Re ~ 10^6). Impressive as these calculations are, performed on one of the world's fastest supercomputers, more brute computational power would be needed to simulate the flow over the fuselage of a commercial aircraft at cruising speed. Such a calculation would require on the order of 10^16 grid points, would have a Reynolds number in the range of 10^8, and would take several thousand years to simulate one minute of flight time on today's fastest supercomputers (Moin & Kim 1997). Even using state-of-the-art zonal approaches, which allow DNS calculations that resolve the necessary range of scales within predefined 'zones' in the flow domain, this calculation would take far too long for the result to be of engineering interest when it is finally obtained. Since computing power, memory, and time are all scarce resources, the problem of simulating turbulent flows has become one of how to abstract or simplify the complexity of the physics represented in the full Navier-Stokes (NS) equations in such a way that the 'important' physics of the problem is captured at a lower cost. To do this, a portion of the modes of the turbulent flow field needs to be approximated by a low-order model that is cheaper than the full NS calculation. This model can then be used along with a numerical simulation of the 'important' modes of the problem that cannot be well represented by the model. The decision of what part of the physics to model and what kind of model to use has to be based on what physical properties are considered 'important' for the problem. It should be noted that 'nothing is free', so any use of a low-order model will by definition lose some information about the original flow.
NASA Astrophysics Data System (ADS)
Divine, D. V.; Godtliebsen, F.; Rue, H.
2012-01-01
The paper proposes an approach to assessing timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process, with parameters partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density on age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise-linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice cores and two lake/marine sediment cores, representing typical examples of paleoproxy archives with age models based on tie points of mixed origin.
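The Gamma-process construction can be made concrete with a small Monte Carlo sketch: accumulation between two absolutely dated tie points is drawn as i.i.d. Gamma increments, each realization is pinned at the tie points, and the ensemble spread gives empirical confidence bands on the timescale. The shape and scale values and the layer count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(6)
age_top, age_bottom = 0.0, 1000.0        # absolutely dated tie points (yr)
n_layers, n_sims = 100, 5000

profiles = np.empty((n_sims, n_layers + 1))
for m in range(n_sims):
    inc = rng.gamma(shape=2.0, scale=1.0, size=n_layers)  # layer thicknesses
    depth = np.concatenate(([0.0], np.cumsum(inc)))
    # Linear accumulation model: age grows with accumulated mass,
    # pinned at both tie points.
    profiles[m] = age_top + (age_bottom - age_top) * depth / depth[-1]

mid = n_layers // 2
lo, hi = np.percentile(profiles[:, mid], [2.5, 97.5])
print(f"age at mid-depth: 95% interval [{lo:.1f}, {hi:.1f}] yr")
```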
The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine
NASA Astrophysics Data System (ADS)
Liu, Yuan; Zhang, Xin; Zhang, Tianhong
2017-11-01
A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults using Kalman filter banks and reconstruct the signal using a real-time on-board adaptive model, combining a simplified real-time model with an improved Kalman filter. To verify the feasibility of the proposed method, a semi-physical simulation experiment was carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, a semi-physical simulation system has a higher degree of confidence. To meet the needs of the semi-physical simulation, a rapid-prototyping controller with fault-tolerant control capability, based on the NI CompactRIO platform, was designed and verified on the semi-physical simulation test platform. The results show that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
Mobile health IT: the effect of user interface and form factor on doctor-patient communication.
Alsos, Ole Andreas; Das, Anita; Svanæs, Dag
2012-01-01
Introducing computers into primary care can have negative effects on the doctor-patient dialogue, but little is known about such effects of mobile health IT in point-of-care situations. The objective was to assess how different mobile information devices used by physicians in point-of-care situations support or hinder aspects of doctor-patient communication, such as face-to-face dialogue, nonverbal communication, and action transparency. The study draws on two experimental simulation studies in which 22 doctors, in 80 simulated ward rounds, accessed patient-related information from a paper chart, a PDA, and a laptop mounted on a trolley. Video recordings from the simulations were analyzed qualitatively; interviews with clinicians and patients were used to triangulate the findings and to verify the realism and results of the simulations. The paper chart afforded smooth re-establishment of eye contact, better verbal and non-verbal contact, more gesturing, good visibility of actions, and quick information retrieval. The digital information devices lacked many of these affordances: physicians' actions were not visible to the patients, the user interfaces required much attention, gesturing was harder, and re-establishing eye contact took more time. Physicians used the devices to display their actions to the patients. The analysis revealed that the findings were related to the user interface and form factor of the information devices, as well as to the personal characteristics of the physician. When information is needed and has to be located at the point of care, the user interface and the physical form factor of the mobile information device are influential elements for successful collaboration between doctors and patients. Both elements need to be carefully designed so that physicians can use the devices to support face-to-face dialogue, nonverbal communication, and action visibility. The ability to facilitate and support doctor-patient collaboration is a noteworthy usability factor in the design of mobile EPR systems. The paper also presents possible design guidelines for mobile point-of-care systems for improved doctor-patient communication.
Active point out-of-plane ultrasound calibration
NASA Astrophysics Data System (ADS)
Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.
2015-03-01
Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common intraoperative medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to be a single physical point. In our approach, we minimize the distances between the circular subsets of each image, so that they ideally intersect at a single point. We ran simulations for noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64 mm.
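The circle-based formulation lends itself to a compact least-squares statement. Below is a synthetic sketch (our own construction, not the authors' code): each observation constrains the unknown physical point to lie on a 3D circle, and we jointly solve for the point and one angle per circle.

```python
import numpy as np
from scipy.optimize import least_squares

# Each image constrains the physical point to a circle (here: generic 3D
# circles all passing through a common true point). We recover that point
# by minimizing the distance from a sample C_i(theta_i) on each circle to p.
rng = np.random.default_rng(1)
p_true = np.array([10.0, 5.0, 2.0])

def circle(center, u, v, r, theta):
    # point on a 3D circle with orthonormal in-plane axes u, v
    return center + r * (np.cos(theta) * u + np.sin(theta) * v)

circles = []
for _ in range(8):
    u = rng.normal(size=3); u /= np.linalg.norm(u)
    v = rng.normal(size=3); v -= (v @ u) * u; v /= np.linalg.norm(v)
    r = rng.uniform(2.0, 6.0)
    th = rng.uniform(0, 2 * np.pi)
    # choose the center so the circle passes exactly through p_true
    center = p_true - r * (np.cos(th) * u + np.sin(th) * v)
    circles.append((center, u, v, r))

def residuals(params):
    p, thetas = params[:3], params[3:]
    return np.concatenate([circle(*c, t) - p for c, t in zip(circles, thetas)])

x0 = np.concatenate([p_true + rng.normal(0, 1.0, 3), np.zeros(len(circles))])
sol = least_squares(residuals, x0)
print("recovered point:", sol.x[:3], "error:", np.linalg.norm(sol.x[:3] - p_true))
```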
Gallo, Paola; Amann-Winkel, Katrin; Angell, Charles Austen; Anisimov, Mikhail Alexeevich; Caupin, Frédéric; Chakravarty, Charusita; Lascaris, Erik; Loerting, Thomas; Panagiotopoulos, Athanassios Zois; Russo, John; Sellberg, Jonas Alexander; Stanley, Harry Eugene; Tanaka, Hajime; Vega, Carlos; Xu, Limei; Pettersson, Lars Gunnar Moody
2016-07-13
Water is the most abundant liquid on earth and also the substance with the largest number of anomalies in its properties. It is a prerequisite for life and as such a most important subject of current research in chemical physics and physical chemistry. In spite of its simplicity as a liquid, it has an enormously rich phase diagram where different types of ices, amorphous phases, and anomalies disclose a path that points to unique thermodynamics of its supercooled liquid state that still hides many secrets yet to be unraveled. In this review we describe the behavior of water in the regime from ambient conditions to the deeply supercooled region. The review describes simulations and experiments on this anomalous liquid. Several scenarios have been proposed to explain the anomalous properties that become strongly enhanced in the supercooled region. Among those, the second critical-point scenario has been investigated extensively, and at present most experimental evidence points to this scenario. Starting from very low temperatures, a coexistence line between a high-density amorphous phase and a low-density amorphous phase would continue in a coexistence line between a high-density and a low-density liquid phase terminating in a liquid-liquid critical point, LLCP. On approaching this LLCP from the one-phase region, a crossover in thermodynamics and dynamics can be found. This is discussed based on a picture of a temperature-dependent balance between a high-density liquid and a low-density liquid favored by, respectively, entropy and enthalpy, leading to a consistent picture of the thermodynamics of bulk water. Ice nucleation is also discussed, since this is what severely impedes experimental investigation of the vicinity of the proposed LLCP. Experimental investigation of stretched water, i.e., water at negative pressure, gives access to a different regime of the complex water diagram. Different ways to inhibit crystallization through confinement and aqueous solutions are discussed through results from experiments and simulations using the most sophisticated and advanced techniques. These findings represent tiles of a global picture that still needs to be completed. Some of the possible experimental lines of research that are essential to complete this picture are explored.
Calatayud, Joaquin; Jakobsen, Markus D; Sundstrup, Emil; Casaña, Jose; Andersen, Lars L
2015-12-01
Regular physical activity is important for longevity and health, but the optimal dose of physical activity for maintaining good work ability is unknown. This study investigates the association between the intensity and duration of physical activity during leisure time and work ability in relation to the physical demands of the job. From the 2010 round of the Danish Work Environment Cohort Study, currently employed wage earners with physically demanding work (n = 2952) replied to questions about work, lifestyle and health. Excellent (100 points), very good (75 points), good (50 points), fair (25 points) and poor (0 points) work ability in relation to the physical demands of the job was experienced by 18%, 40%, 30%, 10% and 2% of the respondents, respectively. General linear models that controlled for gender, age, physical and psychosocial work factors, lifestyle and chronic disease showed that the duration of high-intensity physical activity during leisure was positively associated with work ability, in a dose-response fashion (p < 0.001). Those performing ⩾ 5 hours of high-intensity physical activity per week had on average 8 points higher work ability than those not performing such activities. The duration of low-intensity leisure-time physical activity was not associated with work ability (p = 0.5668). The duration of high-intensity physical activity during leisure time is associated in a dose-response fashion with work ability, in workers with physically demanding jobs. © 2015 the Nordic Societies of Public Health.
Incipient triple point for adsorbed xenon monolayers: Pt(111) versus graphite substrates
NASA Astrophysics Data System (ADS)
Novaco, Anthony D.; Bruch, L. W.; Bavaresco, Jessica
2015-04-01
Simulation evidence of an incipient triple point is reported for xenon submonolayers adsorbed on the (111) surface of platinum. This is in stark contrast to the "normal" triple point found in simulations and experiments for xenon on the basal plane surface of graphite. The motions of the atoms in the surface plane are treated with standard 2D "NVE" molecular dynamics simulations using modern interaction potentials. The simulation evidence strongly suggests an incipient triple point in the 120-150 K range for adsorption on the Pt(111) surface, while the adsorption on graphite shows a normal triple point at about 100 K.
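For readers unfamiliar with the simulation style referenced here, the following is a minimal 2D "NVE" molecular dynamics sketch in reduced Lennard-Jones units (generic toy parameters, not the xenon/Pt(111) interaction potentials used in the study): velocity-Verlet integration of a small periodic system at constant energy.

```python
import numpy as np

# Minimal 2D NVE molecular dynamics: Lennard-Jones atoms, periodic box,
# velocity-Verlet integration. Total energy should stay nearly constant.
rng = np.random.default_rng(2)
N, L, dt, steps = 36, 8.0, 0.004, 500
side = int(np.sqrt(N))
pos = np.array([[i, j] for i in range(side) for j in range(side)], float) * (L / side)
vel = rng.normal(0, 0.5, (N, 2))
vel -= vel.mean(axis=0)                      # remove net momentum

def forces(pos):
    f, pot = np.zeros_like(pos), 0.0
    for i in range(N - 1):
        d = pos[i + 1:] - pos[i]
        d -= L * np.round(d / L)             # minimum-image convention
        r2 = (d * d).sum(axis=1)
        inv6 = 1.0 / r2 ** 3
        pot += np.sum(4 * (inv6 ** 2 - inv6))
        fij = (24 * (2 * inv6 ** 2 - inv6) / r2)[:, None] * d
        f[i + 1:] += fij                     # force on atoms j > i
        f[i] -= fij.sum(axis=0)              # reaction on atom i
    return f, pot

f, pot = forces(pos)
for step in range(steps):
    vel += 0.5 * dt * f                      # half kick
    pos = (pos + dt * vel) % L               # drift with periodic wrap
    f, pot = forces(pos)
    vel += 0.5 * dt * f                      # half kick
    if step % 100 == 0:
        kin = 0.5 * (vel * vel).sum()
        print(f"step {step:4d}  E = {kin + pot:.4f}  T = {kin / N:.3f}")
```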
MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yan; Gohar, Yousry
2015-11-01
In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical for producing periodic power pulses. Such periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with current reactor physics computer programs. Based on past experience, the utilization of the point kinetics approximations gives considerable errors in predicting the magnitude and the shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the transfer function TRCL/TR of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during a neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.
Laboratory evaluation of the pointing stability of the ASPS Vernier System
NASA Technical Reports Server (NTRS)
1980-01-01
The annular suspension and pointing system (ASPS) is an end-mount experiment pointing system designed for use in the space shuttle. The results of the ASPS Vernier System (AVS) pointing stability tests conducted in a laboratory environment are documented. A simulated zero-G suspension was used to support the test payload in the laboratory. The AVS and the suspension were modelled and incorporated into a simulation of the laboratory test. Error sources were identified and pointing stability sensitivities were determined via simulation. Statistical predictions of laboratory test performance were derived and compared to actual laboratory test results. The predicted mean pointing stability during simulated shuttle disturbances was 1.22 arc seconds; the actual mean laboratory test pointing stability was 1.36 arc seconds. The successful prediction of laboratory test results provides increased confidence in the analytical understanding of the AVS magnetic bearing technology and allows confident prediction of in-flight performance. Computer simulations of ASPS, operating in the shuttle disturbance environment, predict in-flight pointing stability errors less than 0.01 arc seconds.
Elucidation of Iron Gettering Mechanisms in Boron-Implanted Silicon Solar Cells
Laine, Hannu S.; Vahanissi, Ville; Liu, Zhengjun; ...
2017-12-15
To facilitate cost-effective manufacturing of boron-implanted silicon solar cells as an alternative to BBr3 diffusion, we performed a quantitative test of the gettering induced by solar-typical boron implants with the potential for low saturation current density emitters (<50 fA/cm2). We show that depending on the contamination level and the gettering anneal chosen, such boron-implanted emitters can induce more than a 99.9% reduction in bulk iron point defect concentration. The iron point defect results as well as synchrotron-based nano-X-ray-fluorescence investigations of iron precipitates formed in the implanted layer imply that, with the chosen experimental parameters, iron precipitation is the dominant gettering mechanism, with segregation-based gettering playing a smaller role. We reproduce the measured iron point defect and precipitate distributions via kinetics modeling. First, we simulate the structural defect distribution created by the implantation process, and then we model these structural defects as heterogeneous precipitation sites for iron. Unlike previous theoretical work on gettering via boron or phosphorus implantation, our model is free of adjustable simulation parameters. The close agreement between the model and experimental results indicates that the model successfully captures the necessary physics to describe the iron gettering mechanisms operating in boron-implanted silicon. Furthermore, this modeling capability allows high-performance, cost-effective implanted silicon solar cells to be designed.
Comparison and validation of point spread models for imaging in natural waters.
Hou, Weilin; Gray, Deric J; Weidemann, Alan D; Arnone, Robert A
2008-06-23
It is known that scattering by particulates within natural waters is the main cause of the blur in underwater images. Underwater images can be better restored or enhanced with knowledge of the point spread function (PSF) of the water. This will extend the performance range as well as the information retrieval from underwater electro-optical systems, which is critical in many civilian and military applications, including target and especially mine detection, search and rescue, and diver visibility. A better understanding of the physical process involved also helps to predict system performance and simulate it accurately on demand. The present effort first reviews several PSF models and introduces a semi-analytical PSF expressed in terms of the optical properties of the medium: scattering albedo, mean scattering angle, and optical range. The models under comparison include the empirical model of Duntley, a modified PSF model by Dolin et al., and the numerical integration of analytical forms from Wells as a benchmark of theoretical results. For experimental results, in addition to those of Duntley, we validate the above models against measured point spread functions by applying field-measured scattering properties in Monte Carlo simulations. Results from these comparisons suggest that the three parameters listed above are both necessary and sufficient to model PSFs. The simplified approach introduced also provides adequate accuracy and flexibility for imaging applications, as shown by examples of restored underwater images.
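As a small illustration of why knowledge of the PSF enables restoration, the sketch below blurs a synthetic scene with a known PSF and recovers it by Wiener deconvolution (a generic Gaussian PSF and an assumed signal-to-noise ratio stand in for measured underwater quantities).

```python
import numpy as np

# Blur a test scene with a known PSF, add noise, then restore it with a
# Wiener filter built from that same PSF.
rng = np.random.default_rng(3)
n = 128
x, y = np.meshgrid(np.arange(n), np.arange(n))
scene = ((x // 16 + y // 16) % 2).astype(float)      # checkerboard test scene

r2 = (x - n // 2) ** 2 + (y - n // 2) ** 2
psf = np.exp(-r2 / (2 * 3.0 ** 2))                   # Gaussian stand-in PSF
psf /= psf.sum()

H = np.fft.fft2(np.fft.ifftshift(psf))               # centered PSF -> transfer fn
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
blurred += rng.normal(0, 0.01, blurred.shape)        # sensor noise

snr = 100.0                                          # assumed signal-to-noise ratio
G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)        # Wiener filter
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

print("rms error blurred :", np.sqrt(np.mean((blurred - scene) ** 2)))
print("rms error restored:", np.sqrt(np.mean((restored - scene) ** 2)))
```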
Developing a molecular dynamics force field for both folded and disordered protein states.
Robustelli, Paul; Piana, Stefano; Shaw, David E
2018-05-07
Molecular dynamics (MD) simulation is a valuable tool for characterizing the structural dynamics of folded proteins and should be similarly applicable to disordered proteins and proteins with both folded and disordered regions. It has been unclear, however, whether any physical model (force field) used in MD simulations accurately describes both folded and disordered proteins. Here, we select a benchmark set of 21 systems, including folded and disordered proteins, simulate these systems with six state-of-the-art force fields, and compare the results to over 9,000 available experimental data points. We find that none of the tested force fields simultaneously provided accurate descriptions of folded proteins, of the dimensions of disordered proteins, and of the secondary structure propensities of disordered proteins. Guided by simulation results on a subset of our benchmark, however, we modified parameters of one force field, achieving excellent agreement with experiment for disordered proteins, while maintaining state-of-the-art accuracy for folded proteins. The resulting force field, a99SB-disp, should thus greatly expand the range of biological systems amenable to MD simulation. A similar approach could be taken to improve other force fields. Copyright © 2018 the Author(s). Published by PNAS.
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is iterative approximation of the image in 2D with subpixel accuracy until a simulated image is found that matches the acquired image. A numerical image model is presented consisting of three parts: an edge model, a model of the object and background brightness distributions, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by means of conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally effective procedure for the merit function calculation, along with a sufficiently accurate gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point locations. The method is capable of compensating for lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method with a digital micromirror device used to physically simulate an object with known edge geometry is shown. Experimental results for various high-temperature materials within the temperature range of 1000 °C to 2400 °C are presented.
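The fit-the-simulated-image principle can be illustrated in one dimension: model a blurred step edge with an erf profile and recover the subpixel edge position by conjugate-gradient minimization of the L2 misfit. This is a toy stand-in for the paper's full 2D, aberration-aware simulator; all parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

# Recover a subpixel edge position by fitting a simulated profile to data.
rng = np.random.default_rng(4)
pix = np.arange(64, dtype=float)
true = dict(edge=31.37, blur=1.8, lo=0.2, hi=0.9)    # hypothetical ground truth

def model(p, x):
    # ideal step edge convolved with a Gaussian blur -> erf profile
    edge, blur, lo, hi = p
    return lo + (hi - lo) * 0.5 * (1 + erf((x - edge) / (np.sqrt(2) * blur)))

acquired = model(list(true.values()), pix) + rng.normal(0, 0.01, pix.size)

def l2(p):
    # merit function: L2 distance between simulated and acquired profiles
    return np.sum((model(p, pix) - acquired) ** 2)

fit = minimize(l2, x0=[30.0, 1.0, 0.0, 1.0], method="CG")
print("estimated edge position: %.3f px (true %.3f)" % (fit.x[0], true["edge"]))
```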
NASA Astrophysics Data System (ADS)
Charco, María; González, Pablo J.; Galán del Sastre, Pedro
2017-04-01
The Kilauea volcano (Hawaii, USA) is one of the most active volcanoes worldwide and therefore one of the best monitored volcanoes in the world. Its complex system provides a unique opportunity to investigate the dynamics of magma transport and supply. Geodetic techniques, such as Interferometric Synthetic Aperture Radar (InSAR), are being extensively used to monitor ground deformation at volcanic areas. The quantitative interpretation of such surface ground deformation measurements requires both physical modelling to simulate the observed signals and inversion approaches to estimate the magmatic source parameters. Here, we use synthetic aperture radar data from the Sentinel-1 radar interferometry satellite mission to image volcano deformation sources during the inflation along Kilauea's Southwest Rift Zone in April-May 2015. We propose a Finite Element Model (FEM) for the calculation of Green functions in a mechanically heterogeneous domain. The key aspect of the methodology lies in applying the reciprocity relationship of the Green functions between the station and the source for efficient numerical inversions. The search for the best-fitting magmatic (point) source(s) is generally conducted over an array of 3-D locations extending below a predefined volume region. However, our approach allows us to reduce the total number of Green functions to the number of observation points by using the above-mentioned reciprocity relationship. This new methodology accurately represents magmatic processes using physical models capable of simulating volcano deformation in domains with non-uniform material property distributions, which will eventually lead to a better description of the status of the volcano.
THE THREE-DIMENSIONAL EVOLUTION TO CORE COLLAPSE OF A MASSIVE STAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couch, Sean M.; Chatzopoulos, Emmanouil; Arnett, W. David
2015-07-20
We present the first three-dimensional (3D) simulation of the final minutes of iron core growth in a massive star, up to and including the point of core gravitational instability and collapse. We capture the development of strong convection driven by violent Si burning in the shell surrounding the iron core. This convective burning builds the iron core to its critical mass and collapse ensues, driven by electron capture and photodisintegration. The non-spherical structure and motion generated by 3D convection are substantial at the point of collapse, with convective speeds of several hundred km s^-1. We examine the impact of such physically realistic 3D initial conditions on the core-collapse supernova mechanism using 3D simulations including multispecies neutrino leakage and find that the enhanced post-shock turbulence resulting from 3D progenitor structure aids successful explosions. We conclude that non-spherical progenitor structure should not be ignored, and should have a significant and favorable impact on the likelihood for neutrino-driven explosions. In order to make simulating the 3D collapse of an iron core feasible, we were forced to make approximations to the nuclear network, making this effort only a first step toward accurate, self-consistent 3D stellar evolution models of the end states of massive stars.
Maigne, L; Perrot, Y; Schaart, D R; Donnarieix, D; Breton, V
2011-02-07
The GATE Monte Carlo simulation platform based on the GEANT4 toolkit has come into widespread use for simulating positron emission tomography (PET) and single photon emission computed tomography (SPECT) imaging devices. Here, we explore its use for calculating electron dose distributions in water. Mono-energetic electron dose point kernels and pencil beam kernels in water are calculated for different energies between 15 keV and 20 MeV by means of GATE 6.0, which makes use of the GEANT4 version 9.2 Standard Electromagnetic Physics Package. The results are compared to the well-validated codes EGSnrc and MCNP4C. It is shown that recent improvements made to the GEANT4/GATE software result in significantly better agreement with the other codes. We furthermore illustrate several issues of general interest to GATE and GEANT4 users who wish to perform accurate simulations involving electrons. Provided that the electron step size is sufficiently restricted, GATE 6.0 and EGSnrc dose point kernels are shown to agree to within less than 3% of the maximum dose between 50 keV and 4 MeV, while pencil beam kernels are found to agree to within less than 4% of the maximum dose between 15 keV and 20 MeV.
Critical evaluation of mechanistic two-phase flow pipeline and well simulation models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhulesia, H.; Lopez, D.
1996-12-31
Mechanistic steady-state simulation models, rather than empirical correlations, are used for the design of multiphase production systems including wells, pipelines, and downstream installations. Among the available models, PEPITE, WELLSIM, OLGA, TACITE, and TUFFP are widely used for this purpose; consequently, a critical evaluation of these models is needed. An extensive validation methodology is proposed which consists of two distinct steps: first, validating the hydrodynamic point model using test loop data, and then validating the overall simulation model using data from real pipelines and wells. The test loop databank used in this analysis contains about 5952 data sets originating from four different test loops, and a majority of these data were obtained at high pressures (up to 90 bars) with real hydrocarbon fluids. Before performing the model evaluation, physical analysis of the test loop data is required to eliminate non-coherent data. The evaluation of these point models demonstrates that the TACITE and OLGA models can be applied to any configuration of pipes. The TACITE model performs better than the OLGA model because it uses the most appropriate closure laws from the literature, validated on a large number of data. The comparison of predicted and measured pressure drops for various real pipelines and wells demonstrates that the TACITE model is a reliable tool.
Toward simulating complex systems with quantum effects
NASA Astrophysics Data System (ADS)
Kenion-Hanrath, Rachel Lynn
Quantum effects like tunneling, coherence, and zero point energy often play a significant role in phenomena on the scales of atoms and molecules. However, the exact quantum treatment of a system scales exponentially with dimensionality, making it impractical for characterizing reaction rates and mechanisms in complex systems. An ongoing effort in the field of theoretical chemistry and physics is extending scalable, classical trajectory-based simulation methods capable of capturing quantum effects to describe dynamic processes in many-body systems; in the work presented here we explore two such techniques. First, we detail an explicit electron, path integral (PI)-based simulation protocol for predicting the rate of electron transfer in condensed-phase transition metal complex systems. Using a PI representation of the transferring electron and a classical representation of the transition metal complex and solvent atoms, we compute the outer sphere free energy barrier and dynamical recrossing factor of the electron transfer rate while accounting for quantum tunneling and zero point energy effects. We are able to achieve this employing only a single set of force field parameters to describe the system rather than parameterizing along the reaction coordinate. Following our success in describing a simple model system, we discuss our next steps in extending our protocol to technologically relevant materials systems. The latter half focuses on the Mixed Quantum-Classical Initial Value Representation (MQC-IVR) of real-time correlation functions, a semiclassical method which has demonstrated its ability to "tune" between quantum- and classical-limit correlation functions while maintaining dynamic consistency. Specifically, this is achieved through a parameter that determines the quantumness of individual degrees of freedom. Here, we derive a semiclassical correction term for the MQC-IVR to systematically characterize the error introduced by different choices of simulation parameters, and demonstrate the ability of this approach to optimize MQC-IVR simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochunas, Brendan; Collins, Benjamin; Stimpson, Shane
This paper describes the methodology developed and implemented in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS) to perform high-fidelity, pressurized water reactor (PWR), multicycle, core physics calculations. Depletion of the core with pin-resolved power and nuclide detail is a significant advance in the state of the art for reactor analysis, providing the level of detail necessary to address the problems of the U.S. Department of Energy Nuclear Reactor Simulation Hub, the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS has three main components: the neutronics solver MPACT, the thermal-hydraulic (T-H) solver COBRA-TF (CTF), and the nuclide transmutation solver ORIGEN. This paper focuses on MPACT and provides an overview of the resonance self-shielding methods, macroscopic-cross-section calculation, two-dimensional/one-dimensional (2-D/1-D) transport, nuclide depletion, T-H feedback, and other supporting methods representing a minimal set of the capabilities needed to simulate high-fidelity models of a commercial nuclear reactor. Results are presented from the simulation of a model of the first cycle of Watts Bar Unit 1. The simulation is within 16 parts per million boron (ppmB) reactivity for all state points compared to cycle measurements, with an average reactivity bias of <5 ppmB for the entire cycle. Comparisons to cycle 1 flux map data are also provided, and the average 2-D root-mean-square (rms) error during cycle 1 is 1.07%. To demonstrate the multicycle capability, a state point at beginning of cycle (BOC) 2 was also simulated and compared to plant data. The comparison of the cycle 2 BOC state has a reactivity difference of +3 ppmB from measurement, and the 2-D rms of the comparison in the flux maps is 1.77%. Lastly, these results provide confidence in VERA-CS's capability to perform high-fidelity calculations for practical PWR reactor problems.
Patient-specific CT dosimetry calculation: a feasibility study.
Fearon, Thomas; Xie, Huchen; Cheng, Jason Y; Ning, Holly; Zhuge, Ying; Miller, Robert W
2011-11-15
Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms, and calculations based on mathematical representations of "standard man". Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose-volumes (after image segmentation) for a GE Light Speed Ultra Plus CT scanner. The RTPS calculation algorithm is based on a semi-empirical, measured correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantom) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLD) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representation). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point-by-point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations developed by the National Research Council of Canada (NRCC) were utilized to perform the Monte Carlo (MC) simulation. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans. With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%-20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient-specific dose estimation. It also provides the basis for a more elaborate reporting of dosimetric results, such as patient-specific organ dose volumes after image segmentation.
Quantum catastrophes: a case study
NASA Astrophysics Data System (ADS)
Znojil, Miloslav
2012-11-01
The bound-state spectrum of a Hamiltonian H is assumed real in a non-empty domain D of physical values of parameters. This means that for these parameters, H may be called crypto-Hermitian, i.e. made Hermitian via an ad hoc choice of the inner product in the physical Hilbert space of quantum bound states (i.e. via an ad hoc construction of the operator Θ called the metric). The name quantum catastrophe is then assigned to the N-tuple exceptional-point crossing, i.e. to the scenario in which we leave the domain D along such a path that at the boundary of D, an N-plet of bound-state energies degenerates and, subsequently, complexifies. At any fixed N ⩾ 2, this process is simulated via an N × N benchmark effective matrix Hamiltonian H, which is assigned a closed-form metric made unique via an N-extrapolation-friendliness requirement. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Quantum physics with non-Hermitian operators’.
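The eigenvalue complexification at an exceptional point is easy to reproduce numerically. The sketch below uses a standard 2 x 2 toy matrix (not the paper's benchmark Hamiltonian): its eigenvalues ±sqrt(1 - g²) are real for g < 1, degenerate at g = 1, and complex beyond.

```python
import numpy as np

# Exceptional-point crossing for N = 2: sweep the coupling g through the
# boundary g = 1 and watch the real eigenvalue pair merge and complexify.
for g in [0.5, 0.9, 1.0, 1.1, 1.5]:
    H = np.array([[1.0, g], [-g, -1.0]])     # non-Hermitian toy Hamiltonian
    ev = np.linalg.eigvals(H)                # analytically +/- sqrt(1 - g^2)
    print(f"g = {g:4.2f}  eigenvalues = {np.round(ev, 4)}")
```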
Managing and capturing the physics of robotic systems
NASA Astrophysics Data System (ADS)
Werfel, Justin
Algorithmic and other theoretical analyses of robotic systems often use a discretized or otherwise idealized framework, while the real world is continuous-valued and noisy. This disconnect can make theoretical work sometimes problematic to apply successfully to real-world systems. One approach to bridging the separation can be to design hardware to take advantage of simple physical effects mechanically, in order to guide elements into a desired set of discrete attracting states. As a result, the system behavior can effectively approximate a discretized formalism, so that proofs based on an idealization remain directly relevant, while control can be made simpler. It is important to note, conversely, that such an approach does not make a physical instantiation unnecessary nor a purely theoretical treatment sufficient. Experiments with hardware in practice always reveal physical effects not originally accounted for in simulation or analytic modeling, which lead to unanticipated results and require nontrivial modifications to control algorithms in order to achieve desired outcomes. I will discuss these points in the context of swarm robotic systems recently developed at the Self-Organizing Systems Research Group at Harvard.
Update of global TC simulations using a variable resolution non-hydrostatic model
NASA Astrophysics Data System (ADS)
Park, S. H.
2017-12-01
Tropical cyclone (TC) forecasts were simulated using variable-resolution meshes in MPAS during the summer of 2017. Two physics suites were tested to explore the performance and bias of each for TC forecasting. A WRF physics suite was selected based on experience in weather forecasting, and CAM (Community Atmosphere Model) physics was taken from an AMIP-type climate simulation. Building on last year's results from the CAM5 physical parameterization package and comparing with WRF physics, we investigated an intensity bias issue using the updated version of CAM physics (CAM6). We also compared these results with a coupled version of the TC simulations. In this talk, TC structure will be compared, especially around the boundary layer, and we will investigate the relationship between TC intensity and the different physics packages.
Modeling the Interaction of Radiation Between Vegetation and the Seasonal Snowcover
NASA Astrophysics Data System (ADS)
Tribbeck, M. J.; Gurney, R. J.; Morris, E. M.; Pearson, D.
2001-12-01
Prediction of meltwater runoff is crucial to communities where the seasonal snowpack is the major water supply. Water is itself a vital resource and it carries nutrients both in solution and in suspension. Simulation of snowpack depletion at a point in open areas has previously been shown to produce accurate results using physically based models such as SNTHERM. However, the radiation balance is more complex under a forest canopy as radiation is scattered and absorbed by canopy elements. This can alter the timing and magnitude of snowpack runoff substantially. The interaction of radiation between a forest canopy and its underlying snowcover is modeled by the coupling of a physically based snow model and an optical and thermal radiation canopy model. The snow model, which is based on SNTHERM (Jordan, 1991), is a discrete, multi-layer, one-dimensional mass and energy budget model for snow and is formulated with an adaptive grid system that compresses with the compacting snowpack and allows retention of snowpack stratigraphy. The vegetation canopy model approximates the canopy as a series of discrete, randomly orientated elements that scatter and absorb optical and thermal radiation. Multiple scattering of radiation between canopy and snow surface is modeled to conserve energy. The coupled model SNOWCAN differs from other vegetation-snow models such as GORT or SNOBAL as it models the albedo feedback mechanism. This is important as the albedo both affects and is affected by (through grain growth) the radiation balance. SNOWCAN is driven by standard atmospheric variables (including incident solar and thermal radiation) measured outside of the canopy and simulates snowpack properties such as temperature and density profiles as well as the sub-canopy radiation balance. The coupled snow and vegetation energy budget model was used to simulate snow depth at an old jack pine site during the 1994 BOREAS campaign. Measured and simulated snow depth showed good agreement throughout the accumulation and ablation periods, yielding an r2 correlation coefficient of 0.94. The snowpack development was also simulated at a point site within a fir stand in Reynolds Creek Experimental Watershed, Idaho, USA for the water year 2000-2001. A sensitivity analysis was carried out and comparisons were made with field observations of snowpack properties and sub-canopy radiation data for model validation.
Rigorous vector wave propagation for arbitrary flat media
NASA Astrophysics Data System (ADS)
Bos, Steven P.; Haffert, Sebastiaan Y.; Keller, Christoph U.
2017-08-01
Precise modelling of the (off-axis) point spread function (PSF) to identify geometrical and polarization aberrations is important for many optical systems. In order to characterise the PSF of the system in all Stokes parameters, an end-to-end simulation of the system has to be performed in which Maxwell's equations are rigorously solved. We present the first results of a Python code that we are developing to perform multiscale end-to-end wave propagation simulations that include all relevant physics. Currently we can handle plane-parallel near- and far-field vector diffraction effects of propagating waves in homogeneous isotropic and anisotropic materials, refraction and reflection at flat parallel surfaces, interference effects in thin films, and unpolarized light. We show that the code has a numerical precision on the order of 10^-16 for non-absorbing isotropic and anisotropic materials. For absorbing materials the precision is on the order of 10^-8. The capabilities of the code are demonstrated by simulating a converging beam reflecting from a flat aluminium mirror at normal incidence.
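One ingredient mentioned above, reflection from a flat mirror, reduces at normal incidence to complex Fresnel coefficients. A minimal sketch follows, with an illustrative complex refractive index for aluminium (an assumed textbook-like value, not data from the paper):

```python
import numpy as np

# Normal-incidence reflection from an absorbing mirror via the complex
# Fresnel amplitude coefficient r = (n1 - n2) / (n1 + n2).
n_air = 1.0
n_al = 0.77 + 6.08j          # complex index of Al near 500 nm (assumed)

r = (n_air - n_al) / (n_air + n_al)   # amplitude reflection coefficient
R = abs(r) ** 2                        # intensity reflectance
phase = np.angle(r)                    # phase shift on reflection
print(f"reflectance R = {R:.3f}, phase shift = {np.degrees(phase):.1f} deg")
```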
Simulating the decentralized processes of the human immune system in a virtual anatomy model.
Sarpe, Vladimir; Jacob, Christian
2013-01-01
Many physiological processes within the human body can be perceived and modeled as large systems of interacting particles or swarming agents. The complex processes of the human immune system prove to be challenging to capture and illustrate without proper reference to the spatial distribution of immune-related organs and systems. Our work focuses on physical aspects of immune system processes, which we implement through swarms of agents. This is our first prototype for integrating different immune processes into one comprehensive virtual physiology simulation. Using agent-based methodology and a 3-dimensional modeling and visualization environment (LINDSAY Composer), we present an agent-based simulation of the decentralized processes in the human immune system. The agents in our model - such as immune cells, viruses and cytokines - interact through simulated physics in two different, compartmentalized and decentralized 3-dimensional environments namely, (1) within the tissue and (2) inside a lymph node. While the two environments are separated and perform their computations asynchronously, an abstract form of communication is allowed in order to replicate the exchange, transportation and interaction of immune system agents between these sites. The distribution of simulated processes, that can communicate across multiple, local CPUs or through a network of machines, provides a starting point to build decentralized systems that replicate larger-scale processes within the human body, thus creating integrated simulations with other physiological systems, such as the circulatory, endocrine, or nervous system. Ultimately, this system integration across scales is our goal for the LINDSAY Virtual Human project. Our current immune system simulations extend our previous work on agent-based simulations by introducing advanced visualizations within the context of a virtual human anatomy model. We also demonstrate how to distribute a collection of connected simulations over a network of computers. As a future endeavour, we plan to use parameter tuning techniques on our model to further enhance its biological credibility. We consider these in silico experiments and their associated modeling and optimization techniques as essential components in further enhancing our capabilities of simulating a whole-body, decentralized immune system, to be used both for medical education and research as well as for virtual studies in immunoinformatics.
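A drastically simplified, particle-style sketch of one such compartment is given below (toy rules and rates of our own choosing, not the LINDSAY model): immune-cell agents random-walk through a 2D "tissue" and remove virus agents that come within reach, while viruses occasionally replicate.

```python
import numpy as np

# Minimal agent-based immune sketch: cells and viruses as 2D points in a
# 100 x 100 periodic tissue patch.
rng = np.random.default_rng(5)
viruses = rng.uniform(0, 100, (40, 2))
cells = rng.uniform(0, 100, (15, 2))

for step in range(100):
    cells = (cells + rng.normal(0, 2.0, cells.shape)) % 100    # random walk
    viruses = (viruses + rng.normal(0, 0.5, viruses.shape)) % 100
    if len(viruses):
        # remove any virus within reach of an immune cell
        d = np.linalg.norm(viruses[:, None, :] - cells[None, :, :], axis=2)
        viruses = viruses[d.min(axis=1) > 3.0]
    if len(viruses) and rng.random() < 0.3:                    # occasional replication
        child = (viruses[rng.integers(len(viruses))] + rng.normal(0, 1, 2)) % 100
        viruses = np.vstack([viruses, child])
    if step % 20 == 0:
        print(f"step {step:3d}: {len(viruses)} viruses remain")
```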
Can we approach the gas-liquid critical point using slab simulations of two coexisting phases?
Goujon, Florent; Ghoufi, Aziz; Malfreyt, Patrice; Tildesley, Dominic J
2016-09-28
In this paper, we demonstrate that it is possible to approach the gas-liquid critical point of the Lennard-Jones fluid by performing simulations in a slab geometry using a cut-off potential. In the slab simulation geometry, it is essential to apply an accurate tail correction to the potential energy, applied during the course of the simulation, to study the properties of states close to the critical point. Using the Janeček slab-based method developed for two-phase Monte Carlo simulations [J. Janeček, J. Chem. Phys. 131, 6264 (2006)], the coexisting densities and surface tension in the critical region are reported as a function of the cutoff distance in the intermolecular potential. The results obtained using slab simulations are compared with those obtained using grand canonical Monte Carlo simulations of isotropic systems and finite-size scaling techniques. There is good agreement between these two approaches. The two-phase simulations can be used in approaching the critical point for temperatures up to 0.97 T_C* (T* = 1.26). The critical-point exponents describing the dependence of the density, surface tension, and interfacial thickness on the temperature are calculated near the critical point.
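The order-parameter exponent mentioned in the last sentence is typically extracted from a log-log fit of the density difference against reduced temperature. A minimal synthetic sketch (illustrative numbers, not the paper's data):

```python
import numpy as np

# Fit rho_l - rho_g = B * (1 - T/Tc)^beta on synthetic coexistence data;
# the slope of the log-log fit estimates the critical exponent beta.
rng = np.random.default_rng(6)
Tc, beta_true, B = 1.26, 0.325, 0.55          # 3D-Ising-like exponent (assumed)
T = np.linspace(1.10, 1.22, 10)
drho = B * (1 - T / Tc) ** beta_true * (1 + rng.normal(0, 0.01, T.size))

slope, intercept = np.polyfit(np.log(1 - T / Tc), np.log(drho), 1)
print(f"fitted beta = {slope:.3f} (true {beta_true})")
```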
A physical data model for fields and agents
NASA Astrophysics Data System (ADS)
de Jong, Kor; de Bakker, Merijn; Karssenberg, Derek
2016-04-01
Two approaches exist in simulation modeling: agent-based and field-based modeling. In agent-based (or individual-based) simulation modeling, the entities representing the system's state are represented by objects, which are bounded in space and time. Individual objects, like an animal, a house, or a more abstract entity like a country's economy, have properties representing their state. In an agent-based model this state is manipulated. In field-based modeling, the entities representing the system's state are represented by fields. Fields capture the state of a continuous property within a spatial extent, examples of which are elevation, atmospheric pressure, and water flow velocity. With respect to the technology used to create these models, the domains of agent-based and field-based modeling have often been separate worlds. In environmental modeling, widely used logical data models include feature data models for point, line and polygon objects, and the raster data model for fields. Simulation models are often either agent-based or field-based, even though the modeled system might contain both entities that are better represented by individuals and entities that are better represented by fields. We think that the reason for this dichotomy in kinds of models might be that the traditional object and field data models underlying those models are relatively low level. We have developed a higher level conceptual data model for representing both non-spatial and spatial objects, and spatial fields (De Bakker et al. 2016). Based on this conceptual data model we designed a logical and physical data model for representing many kinds of data, including the kinds used in earth system modeling (e.g. hydrological and ecological models). The goal of this work is to be able to create high level code and tools for the creation of models in which entities are representable by both objects and fields. Our conceptual data model is capable of representing the traditional feature data models and the raster data model, among many other data models. Our physical data model is capable of storing a first set of kinds of data, like omnipresent scalars, mobile spatio-temporal points and property values, and spatio-temporal rasters. With our poster we will provide an overview of the physical data model expressed in HDF5 and show examples of how it can be used to capture both object- and field-based information. Reference: De Bakker, M., K. de Jong, and D. Karssenberg. 2016. A conceptual data model and language for fields and agents. EGU General Assembly 2016, Vienna.
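To give a flavor of how such a physical data model can be laid out, here is an h5py sketch of one possible HDF5 layout (our own illustrative schema, not the authors' actual format), storing per-agent spatio-temporal point data alongside a spatio-temporal raster field.

```python
import numpy as np
import h5py

# One file holding both agents and fields, each with a time dimension.
rng = np.random.default_rng(7)
with h5py.File("fields_and_agents.h5", "w") as f:
    agents = f.create_group("agents/animals")
    agents.create_dataset("id", data=np.arange(5))
    # mobile spatio-temporal points: axes are (time, agent, xy)
    agents.create_dataset("location", data=rng.uniform(0, 1000, (10, 5, 2)))
    agents.create_dataset("weight", data=rng.uniform(50, 90, (10, 5)))

    field = f.create_group("fields/elevation")
    # spatio-temporal raster: axes are (time, row, col)
    field.create_dataset("raster", data=rng.normal(100, 5, (10, 64, 64)))
    field.attrs["cell_size"] = 25.0              # metres, hypothetical
    field.attrs["origin"] = (0.0, 0.0)

with h5py.File("fields_and_agents.h5", "r") as f:
    print(f["agents/animals/location"].shape, f["fields/elevation/raster"].shape)
```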
HABEBEE: habitability of eyeball-exo-Earths.
Angerhausen, Daniel; Sapers, Haley; Citron, Robert; Bergantini, Alexandre; Lutz, Stefanie; Queiroz, Luciano Lopes; da Rosa Alexandre, Marcelo; Araujo, Ana Carolina Vieira
2013-03-01
Extrasolar Earth and super-Earth planets orbiting within the habitable zone of M dwarf host stars may play a significant role in the discovery of habitable environments beyond Earth. Spectroscopic characterization of these exoplanets with respect to habitability requires the determination of habitability parameters with respect to remote sensing. The habitable zone of dwarf stars is located in close proximity to the host star, such that exoplanets orbiting within this zone will likely be tidally locked. On terrestrial planets with an icy shell, this may produce a liquid water ocean at the substellar point, one particular "Eyeball Earth" state. In this research proposal, HABEBEE: exploring the HABitability of Eyeball-Exo-Earths, we define the parameters necessary to achieve a stable icy Eyeball Earth capable of supporting life. Astronomical and geochemical research will define parameters needed to simulate potentially habitable environments on an icy Eyeball Earth planet. Biological requirements will be based on detailed studies of microbial communities within Earth analog environments. Using the interdisciplinary results of both the physical and biological teams, we will set up a simulation chamber to expose a cold- and UV-tolerant microbial community to the theoretically derived Eyeball Earth climate states, simulating the composition, atmosphere, physical parameters, and stellar irradiation. Combining the results of both studies will enable us to derive observable parameters as well as target decision guidance and feasibility analysis for upcoming astronomical platforms.
Surface signature of Mediterranean water eddies in a long-term high-resolution simulation
NASA Astrophysics Data System (ADS)
Ciani, D.; Carton, X.; Barbosa Aguiar, A. C.; Peliz, A.; Bashmachnikov, I.; Ienna, F.; Chapron, B.; Santoleri, R.
2017-12-01
We study the surface signatures of Mediterranean water eddies (Meddies) in the context of a regional, primitive equations model simulation (using the Regional Oceanic Modeling System, ROMS). This model simulation was previously performed to study the mean characteristics and pathways of Meddies during their evolution in the Atlantic Ocean. The advantage of our approach is to take into account different physical mechanisms acting on the evolution of Meddies and their surface signature, having full information on the 3D distribution of all physical variables of interest. The evolution of around 90 long-lived Meddies (whose lifetimes exceeded one year) was investigated. In particular, their surface signature was determined in sea-surface height, temperature and salinity. The Meddy-induced anomalies were studied as a function of the Meddy structure and of the oceanic background. We show that the Meddies can generate positive anomalies in the elevation of the oceanic free-surface and that these anomalies are principally related to the Meddies potential vorticity structure at depth (around 1000 m below the sea-surface). On the contrary, the Meddies thermohaline surface signatures proved to be mostly dominated by local surface conditions and little correlated to the Meddy structure at depth. This work essentially points out that satellite altimetry is the most suitable approach to track subsurface vortices from observations of the sea-surface.
From large-eddy simulation to multi-UAVs sampling of shallow cumulus clouds
NASA Astrophysics Data System (ADS)
Lamraoui, Fayçal; Roberts, Greg; Burnet, Frédéric
2016-04-01
In-situ sampling of clouds that provides simultaneous measurements at spatio-temporal resolutions sufficient to capture small-scale 3D physical processes continues to present challenges. This project (SKYSCANNER) aims to develop cloud sampling strategies for a swarm of unmanned aerial vehicles (UAVs) based on large-eddy simulation (LES). Multi-UAV field campaigns with a personalized sampling strategy for individual clouds and cloud fields will significantly improve the understanding of unresolved cloud physical processes. An extensive set of LES experiments for case studies from the ARM-SGP site has been performed using the MesoNH model at high resolutions, down to 10 m. These simulations led to a macroscopic model that quantifies the interrelationship between the micro- and macrophysical properties of shallow convective clouds. Both the geometry and the evolution of individual clouds are critical to multi-UAV cloud sampling and path planning. Preliminary findings of the project reveal several linear relationships that associate cloud geometric parameters with cloud-related meteorological variables. In addition, horizontal wind speed has a proportional impact on cloud number concentration and both triggers and prolongs the occurrence of cumulus clouds. In the framework of a joint collaboration involving a multidisciplinary team (including institutes specializing in aviation, robotics, and atmospheric science), this model will be a reference point for multi-UAV sampling strategies and path planning.
Implementation of new physics models for low energy electrons in liquid water in Geant4-DNA.
Bordage, M C; Bordes, J; Edel, S; Terrissol, M; Franceries, X; Bardiès, M; Lampe, N; Incerti, S
2016-12-01
A new alternative set of elastic and inelastic cross sections has been added to the very low energy extension of the Geant4 Monte Carlo simulation toolkit, Geant4-DNA, for the simulation of electron interactions in liquid water. These cross sections have been obtained from the CPA100 Monte Carlo track structure code, which has been a reference in the microdosimetry community for many years. They are compared to the default Geant4-DNA cross sections and show better agreement with published data. In order to verify the correct implementation of the CPA100 cross section models in Geant4-DNA, simulations of the number of interactions and ranges were performed using Geant4-DNA with this new set of models, and the results were compared with corresponding results from the original CPA100 code. Good agreement is observed between the implementations, with relative differences lower than 1% regardless of the incident electron energy. Useful quantities related to the deposited energy at the scale of the cell or the organ of interest for internal dosimetry, like dose point kernels, are also calculated using these new physics models. They are compared with results obtained using the well-known Penelope Monte Carlo code. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
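The implementation check described above amounts to comparing paired outputs from the two codes and asserting sub-1% relative differences. A minimal sketch with invented numbers (not the published results):

```python
import numpy as np

# Hypothetical arrays: number of interactions per incident electron computed
# with the CPA100 models in Geant4-DNA and with the original CPA100 code.
geant4_dna = np.array([105.2, 88.7, 63.1, 41.9, 20.4])
cpa100_ref = np.array([105.9, 88.3, 63.4, 41.7, 20.5])

# Relative difference per energy point; the abstract reports < 1 % agreement.
rel_diff = np.abs(geant4_dna - cpa100_ref) / cpa100_ref
assert np.all(rel_diff < 0.01), "implementations disagree by more than 1 %"
print(rel_diff * 100)  # percent differences
```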
Calibration of a rotating accelerometer gravity gradiometer using centrifugal gradients
NASA Astrophysics Data System (ADS)
Yu, Mingbiao; Cai, Tijing
2018-05-01
The purpose of this study is to calibrate scale factors and equivalent zero biases of a rotating accelerometer gravity gradiometer (RAGG). We calibrate scale factors by determining the relationship between the centrifugal gradient excitation and RAGG response. Compared with calibration by changing the gravitational gradient excitation, this method does not need test masses and is easier to implement. The equivalent zero biases are superpositions of self-gradients and the intrinsic zero biases of the RAGG. A self-gradient is the gravitational gradient produced by surrounding masses, and it correlates well with the RAGG attitude angle. We propose a self-gradient model that includes self-gradients and the intrinsic zero biases of the RAGG. The self-gradient model is a function of the RAGG attitude, and it includes parameters related to surrounding masses. The calibration of equivalent zero biases determines the parameters of the self-gradient model. We provide detailed procedures and mathematical formulations for calibrating scale factors and parameters in the self-gradient model. A RAGG physical simulation system substitutes for the actual RAGG in the calibration and validation experiments. Four point masses simulate four types of surrounding masses producing self-gradients. Validation experiments show that the self-gradients predicted by the self-gradient model are consistent with those from the outputs of the RAGG physical simulation system, suggesting that the presented calibration method is valid.
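A minimal sketch of the scale-factor calibration idea, assuming a linear response model u = k*Gamma_c + b, where Gamma_c is the centrifugal gradient excitation, k the scale factor and b the equivalent zero bias; all numbers are illustrative and the real procedure involves the full excitation dynamics.

```python
import numpy as np

# Simulated RAGG responses to a swept centrifugal gradient excitation.
gamma_c = np.linspace(0.0, 500.0, 11)        # excitation (Eotvos, assumed)
rng = np.random.default_rng(0)
u = 2.5e-3 * gamma_c + 0.7 + rng.normal(0.0, 0.05, gamma_c.size)  # response

# Linear least squares recovers k and b without any test masses.
A = np.column_stack([gamma_c, np.ones_like(gamma_c)])
(k_hat, b_hat), *_ = np.linalg.lstsq(A, u, rcond=None)
print(f"scale factor ~ {k_hat:.4e}, zero bias ~ {b_hat:.3f}")
```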
How Well Can We Classify SWOT-Derived Water Surface Profiles?
NASA Astrophysics Data System (ADS)
Frasson, R. P. M.; Wei, R.; Picamilh, C.; Durand, M. T.
2015-12-01
The upcoming Surface Water and Ocean Topography (SWOT) mission will detect water bodies and measure water surface elevation throughout the globe. Within its continental high-resolution mask, SWOT is expected to deliver measurements of river width, water elevation and slope for rivers wider than ~50 m. The definition of river reaches is an integral step in the computation of discharge based on SWOT's observables. As poorly defined reaches can negatively affect the accuracy of discharge estimations, we seek strategies to break up rivers into physically meaningful sections. In the present work, we investigate how accurately we can classify water surface profiles based on simulated SWOT observations. We assume that most river sections can be classified as either M1 (mild slope, with depth larger than the normal depth) or A1 (adverse slope, with depth larger than the critical depth). This assumption allows the classification to be based solely on the second derivative of the water surface profile, with convex profiles being classified as A1 and concave profiles as M1. We consider a HEC-RAS model of the Sacramento River as a representation of the true state of the river. We employ the SWOT instrument simulator to generate a synthetic pass of the river, which includes our best estimates of height measurement noise and geolocation errors. We process the resulting point cloud of water surface heights with the RiverObs package, which delineates the river centerline and draws the water surface profile. Next, we identify inflection points in the water surface profile and classify the sections between the inflection points. Finally, we compare our limited classification of the simulated SWOT-derived water surface profile to the "exact" classification of the modeled Sacramento River. With this exercise, we expect to determine whether SWOT observations can be used to find inflection points in water surface profiles, which would bring knowledge of flow regimes into the definition of river reaches.
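A minimal sketch of the classification step, assuming the profile has already been extracted (e.g., by RiverObs): it labels the sections between curvature sign changes as M1 (concave) or A1 (convex). The smoothing a real noisy SWOT point cloud would need is omitted.

```python
import numpy as np

def classify_profile(x, h):
    """Split a water surface profile h(x) at inflection points and label each
    section M1 (concave) or A1 (convex) from the sign of the curvature."""
    curv = np.gradient(np.gradient(h, x), x)        # second derivative
    sign = np.sign(curv)
    inflections = np.where(np.diff(sign) != 0)[0]   # curvature sign changes
    segments = np.split(np.arange(x.size), inflections + 1)
    labels = ["M1" if curv[seg].mean() < 0 else "A1" for seg in segments]
    return inflections, labels

# Synthetic reach: concave (M1) water surface upstream, convex (A1) downstream.
x = np.linspace(0.0, 10_000.0, 400)            # along-stream distance (m)
h = 10.0 - np.tanh((x - 5_000.0) / 1_500.0)    # water surface height (m)
infl, labels = classify_profile(x, h)
print(x[infl], labels)                         # one inflection near 5 km
```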
BC404 scintillators as gamma locators studied via Geant4 simulations
NASA Astrophysics Data System (ADS)
Cortés, M. L.; Hoischen, R.; Eisenhauer, K.; Gerl, J.; Pietralla, N.
2014-05-01
In many applications in industry and academia, an accurate determination of the direction from which gamma rays are emitted is either needed or desirable. Ion-beam therapy treatments, the search for orphan sources, and homeland security applications are examples of fields that can benefit from directional sensitivity to gamma radiation. Scintillation detectors are a good option for these types of applications as they have relatively low cost, are easy to handle and can be produced in a large range of sizes. In this work, a Geant4 simulation was developed to study the directional sensitivity of different BC404 scintillator geometries and arrangements. The simulation includes all the physical processes relevant for gamma detection in a scintillator; in particular, the creation and propagation of optical photons inside the scintillator was included. A simplified photomultiplier tube model was also simulated. The physical principle exploited is the angular dependence of the shape of the energy spectrum obtained from thin scintillator layers when irradiated from different angles. After an experimental confirmation of the working principle of the device and a check of the simulation, the possibilities and limitations of directional sensitivity to gamma radiation using scintillator layers were tested. For this purpose, point-like sources of typical energies expected in ion-beam therapy were used. Optimal scintillator thicknesses for different energies were determined and the setup efficiencies calculated. The use of arrays of scintillators to reconstruct the direction of incoming gamma rays was also studied. For this case, a spherical source emitting Bremsstrahlung radiation was used together with a setup consisting of scintillator layers. The capability of this setup to identify the center of the extended source was studied together with its angular resolution.
NASA Astrophysics Data System (ADS)
Abdul Ghani, B.
2005-09-01
"TEA CO 2 Laser Simulator" has been designed to simulate the dynamic emission processes of the TEA CO 2 laser based on the six-temperature model. The program predicts the behavior of the laser output pulse (power, energy, pulse duration, delay time, FWHM, etc.) depending on the physical and geometrical input parameters (pressure ratio of gas mixture, reflecting area of the output mirror, media length, losses, filling and decay factors, etc.). Program summaryTitle of program: TEA_CO2 Catalogue identifier: ADVW Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVW Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: P.IV DELL PC Setup: Atomic Energy Commission of Syria, Scientific Services Department, Mathematics and Informatics Division Operating system: MS-Windows 9x, 2000, XP Programming language: Delphi 6.0 No. of lines in distributed program, including test data, etc.: 47 315 No. of bytes in distributed program, including test data, etc.:7 681 109 Distribution format:tar.gz Classification: 15 Laser Physics Nature of the physical problem: "TEA CO 2 Laser Simulator" is a program that predicts the behavior of the laser output pulse by studying the effect of the physical and geometrical input parameters on the characteristics of the output laser pulse. The laser active medium consists of a CO 2-N 2-He gas mixture. Method of solution: Six-temperature model, for the dynamics emission of TEA CO 2 laser, has been adapted in order to predict the parameters of laser output pulses. A simulation of the laser electrical pumping was carried out using two approaches; empirical function equation (8) and differential equation (9). Typical running time: The program's running time mainly depends on both integration interval and step; for a 4 μs period of time and 0.001 μs integration step (defaults values used in the program), the running time will be about 4 seconds. Restrictions on the complexity: Using a very small integration step might leads to stop the program run due to the huge number of calculating points and to a small paging file size of the MS-Windows virtual memory. In such case, it is recommended to enlarge the paging file size to the appropriate size, or to use a bigger value of integration step.
NASA Astrophysics Data System (ADS)
Alexandrou, C.; Constantinou, M.; Dimopoulos, P.; Frezzotti, R.; Hadjiyiannakou, K.; Jansen, K.; Kallidonis, C.; Kostrzewa, B.; Koutsou, G.; Mangin-Brinet, M.; Vaquero Avilès-Casco, A.; Wenger, U.
2017-06-01
We present results on the light, strange and charm nucleon scalar and tensor charges from lattice QCD, using simulations with Nf=2 flavors of twisted mass clover-improved fermions with a physical value of the pion mass. Both connected and disconnected contributions are included, enabling us to extract the isoscalar, strange and charm charges for the first time directly at the physical point. Furthermore, the renormalization is computed nonperturbatively for both isovector and isoscalar quantities. We investigate excited-state effects by analyzing several sink-source time separations and by employing a set of methods to probe ground-state dominance. Our final results for the scalar charges are gSu=5.20(42)(15)(12), gSd=4.27(26)(15)(12), gSs=0.33(7)(1)(4), and gSc=0.062(13)(3)(5), and for the tensor charges gTu=0.794(16)(2)(13), gTd=-0.210(10)(2)(13), gTs=0.00032(24)(0), and gTc=0.00062(85)(0), in the MS-bar scheme at 2 GeV. The first error is statistical, the second is the systematic error due to the renormalization, and the third is the systematic error arising from estimating the contamination due to excited states, when our data are precise enough to probe the first excited state.
DOE Office of Scientific and Technical Information (OSTI.GOV)
BRISC is a developmental prototype for a next-generation systems-level integrated performance and safety code (IPSC) for nuclear reactors. Its development served to demonstrate how a lightweight multi-physics coupling approach can be used to tightly couple the physics models in several different physics codes (written in a variety of languages) into one integrated package for simulating accident scenarios in a liquid-sodium-cooled burner nuclear reactor. For example, the RIO Fluid Flow and Heat Transfer code developed at Sandia (SNL: Chris Moen, Dept. 08005) is used in BRISC to model fluid flow and heat transfer, as well as conduction heat transfer in solids. Because BRISC is a prototype, its most practical application is as a foundation or starting point for developing a true production code. The sub-codes and the associated models and correlations currently employed within BRISC were chosen to cover the required application space and demonstrate feasibility, but were not optimized or validated against experimental data within the context of their use in BRISC.
Report on SNL RCBC control options
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponciroli, R.; Vilim, R. B.
The attractive performance of the S-CO2 recompression cycle arises from the thermo-physical properties of carbon dioxide near the critical point. However, to ensure efficient operation of the cycle near the critical point, precise control of the heat removal rate by the Printed Circuit Heat Exchanger (PCHE) upstream of the main compressor is required. Accomplishing this task is not trivial because of the large variations in fluid properties with respect to temperature and pressure near the critical point. The use of a model-based approach for the design of a robust feedback regulator is being investigated to achieve acceptable control of the heat removal rate at different operating conditions. A first step in this procedure is the development of a dynamic model of the heat exchanger. In this work, a one-dimensional (1-D) control-oriented model of the PCHE was developed using the General Plant Analyzer and System Simulator (GPASS) code. GPASS is a transient simulation code that supports analysis and control of power conversion cycles based on the S-CO2 Brayton cycle. This modeling capability was used this fiscal year to analyze experiment data obtained from the heat exchanger in the SNL recompression Brayton cycle. The analysis suggested that the error in the water flowrate measurement was greater than required for achieving precise control of the heat removal rate. Accordingly, a new water flowmeter was installed, significantly improving the quality of the measurement. Comparison of heat exchanger measurements in subsequent experiments with code simulations yielded good agreement, establishing a reliable basis for the use of the GPASS PCHE model for future development of a model-based feedback controller.
Sensor data fusion for textured reconstruction and virtual representation of alpine scenes
NASA Astrophysics Data System (ADS)
Häufel, Gisela; Bulatov, Dimitri; Solbrig, Peter
2017-10-01
The concept of remote sensing is to provide information about a wide-range area without making physical contact with it. If, in addition to satellite imagery, images and videos taken by drones provide more up-to-date data at a higher resolution, or accurate vector data is downloadable from the Internet, one speaks of sensor data fusion. The concept of sensor data fusion is relevant for many applications, such as virtual tourism, automatic navigation, hazard assessment, etc. In this work, we describe sensor data fusion aiming to create a semantic 3D model of an extremely interesting yet challenging dataset: an alpine region in Southern Germany. A particular challenge of this work is that rock faces including overhangs are present in the input airborne laser point cloud. The proposed procedure for identification and reconstruction of overhangs from point clouds comprises four steps: point cloud preparation, filtering out vegetation, mesh generation and texturing. Further object types are extracted in several interesting subsections of the dataset: building models with textures from UAV (Unmanned Aerial Vehicle) videos, hills reconstructed as generic surfaces and textured by the orthophoto, individual trees detected by the watershed algorithm, as well as vector data for roads retrieved from openly available shapefiles and GPS-device tracks. We pursue geo-specific reconstruction by assigning texture and width to roads of several pre-determined types and modeling isolated trees and rocks using commercial software. For visualization and simulation of the area, we have chosen the simulation system Virtual Battlespace 3 (VBS3). It becomes clear that the proposed concept of sensor data fusion allows a coarse reconstruction of a large scene and, at the same time, an accurate and up-to-date representation of its relevant subsections, in which simulation can take place.
NASA Astrophysics Data System (ADS)
Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun
2018-03-01
Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion and internal friction angle, which are affected by a high degree of uncertainty, especially at a regional scale, resulting in unacceptable uncertainties in Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations, which are integrated into a single parameter. This parameter links the landslide probability to the uncertainties of the soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with the heavy rainfall of 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides at 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. These testing results indicate that the new model can be operated in a highly efficient way and produces reliable results owing to its high prediction accuracy. Accordingly, the new model can potentially be packaged into a forecasting system for shallow landslides, providing technological support for the mitigation of these disasters at the regional scale.
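A minimal sketch of the Monte Carlo step for a single pixel, assuming a generic infinite-slope stability model rather than the paper's exact formulation; parameter values and ranges are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                                    # Monte Carlo samples per pixel

slope = np.radians(35.0)                      # slope angle of the pixel
gamma, z = 18.0e3, 2.0                        # unit weight (N/m^3), depth (m)

# Uncertain soil parameters drawn from defined intervals (uniform here).
cohesion = rng.uniform(5.0e3, 15.0e3, n)      # effective cohesion (Pa)
phi = np.radians(rng.uniform(25.0, 35.0, n))  # internal friction angle

# Infinite-slope factor of safety for each random draw (dry conditions).
tau = gamma * z * np.sin(slope) * np.cos(slope)            # driving stress
resist = cohesion + gamma * z * np.cos(slope) ** 2 * np.tan(phi)
fs = resist / tau

# Landslide probability of the pixel = fraction of samples with Fs < 1.
print("P(failure) =", np.mean(fs < 1.0))
```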
IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.
Ha, Vi Q; Lykotrafitis, George
2016-12-08
We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations, an object-oriented, easy-to-use, high-performance C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially in cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space, while a network facilitates long-range particle interactions. The Message Passing Interface is used for inter-processor communication in all simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Spacing distribution functions for 1D point island model with irreversible attachment
NASA Astrophysics Data System (ADS)
Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto
2011-03-01
We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_xy^n(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_xy^n(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).
Su, Yi-Huang; Keller, Peter E
2018-01-29
Motor simulation has been implicated in how musicians anticipate the rhythm of another musician's action to achieve interpersonal synchronization. Here, we investigated whether similar mechanisms govern a related form of rhythmic action: dance. We examined (1) whether synchronization with visual dance stimuli was influenced by movement agency, (2) whether music training modulated simulation efficiency, and (3) what cues were relevant for simulating the dance rhythm. Participants were first recorded dancing the basic Charleston steps paced by a metronome, and later in a synchronization task they tapped to the rhythm of their own point-light dance stimuli, stimuli of another physically matched participant or one matched in movement kinematics, and a quantitative average across individuals. Results indicated that, while there was no overall "self advantage" and synchronization was generally most stable with the least variable (averaged) stimuli, motor simulation was driven by familiar movement kinematics rather than morphological features, as indicated by high tap-beat variability correlations. Furthermore, music training facilitated simulation, such that musicians outperformed non-musicians when synchronizing with others' movements but not with their own movements. These findings support action simulation as underlying synchronization in dance, linking action observation and rhythm processing in a common motor framework.
Virtual Reality for Artificial Intelligence: human-centered simulation for social science.
Cipresso, Pietro; Riva, Giuseppe
2015-01-01
There is a long-lasting tradition in Artificial Intelligence of using robots endowed with human peculiarities, from a cognitive and emotional point of view, and not only in shape. Today Artificial Intelligence is more oriented to several forms of collective intelligence, also building robot simulators (hardware or software) to deeply understand collective behaviors in human beings and society as a whole. Modeling has also been crucial in the social sciences to understand how complex systems can arise from simple rules. However, while engineers' simulations can be performed in the physical world using robots, for social scientists this is impossible. For decades, researchers have tried to improve simulations by endowing artificial agents with simple and complex rules that emulate human behavior, also by using artificial intelligence (AI). Including human beings and their real intelligence within artificial societies is now the big challenge. We present a hybrid (human-artificial) platform where experiments can be performed in simulated artificial worlds in the following manner: 1) agents' behaviors are regulated by the behaviors shown in Virtual Reality by real human beings exposed to the specific situations to be simulated, and 2) technology transfers these rules into the artificial world. This forms a closed loop of real behaviors inserted into artificial agents, which can be used to study real society.
NASA Astrophysics Data System (ADS)
Divel, Sarah E.; Christensen, Soren; Wintermark, Max; Lansberg, Maarten G.; Pelc, Norbert J.
2017-03-01
Computer simulation is a powerful tool in CT; however, long simulation times for complex phantoms and systems, especially when modeling many physical aspects (e.g., spectrum, finite detector and source size), hinder the ability to realistically and efficiently evaluate and optimize CT techniques. Long simulation times primarily result from the tracing of hundreds of line integrals through each of the hundreds of geometrical shapes defined within the phantom. However, when the goal is to perform dynamic simulations or test many scan protocols using a particular phantom, traditional simulation methods inefficiently and repeatedly calculate line integrals through the same set of structures although only a few parameters change in each new case. In this work, we have developed a new simulation framework that overcomes such inefficiencies by dividing the phantom into material-specific regions with the same time attenuation profiles, acquiring and storing monoenergetic projections of the regions, and subsequently scaling and combining the projections to create equivalent polyenergetic sinograms. The simulation framework is especially efficient for the validation and optimization of CT perfusion, which requires analysis of many stroke cases and testing hundreds of scan protocols on a realistic and complex numerical brain phantom. Using this updated framework to conduct a 31-time-point simulation with 80 mm of z-coverage of a brain phantom on two 16-core Linux servers, we have reduced the simulation time from 62 hours to under 2.6 hours, a 95% reduction.
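A minimal sketch of the core idea (scaling and combining stored monoenergetic region projections into a polyenergetic sinogram); the spectrum, attenuation values, and region names below are invented for illustration.

```python
import numpy as np

energies = np.array([40.0, 60.0, 80.0])       # keV spectrum samples
weights = np.array([0.2, 0.5, 0.3])           # normalized spectrum w(E)

# Stored projections: path length through each region for every detector ray,
# traced once and then reused across protocols and time points.
L = {"brain": np.array([10.0, 12.0, 9.0]),
     "bone":  np.array([1.0, 0.5, 2.0])}

# Energy-dependent attenuation (1/cm); a perfusion simulation would rescale
# mu per time point (e.g., with contrast agent) without re-tracing any rays.
mu = {"brain": np.array([0.27, 0.21, 0.18]),
      "bone":  np.array([1.00, 0.60, 0.45])}

# Combine: polyenergetic sinogram from scaled monoenergetic projections.
line_int = sum(np.outer(mu[r], L[r]) for r in L)   # shape (energies, rays)
intensity = weights @ np.exp(-line_int)            # detected fraction per ray
sinogram = -np.log(intensity)                      # polyenergetic sinogram
print(sinogram)
```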
Dynamical Scaling and Phase Coexistence in Topologically Constrained DNA Melting.
Fosado, Y A G; Michieletto, D; Marenduzzo, D
2017-09-15
There is a long-standing experimental observation that the melting of topologically constrained DNA, such as circular closed plasmids, is less abrupt than that of linear molecules. This finding points to an important role of topology in the physics of DNA denaturation, which is, however, poorly understood. Here, we shed light on this issue by combining large-scale Brownian dynamics simulations with an analytically solvable phenomenological Landau mean field theory. We find that the competition between melting and supercoiling leads to phase coexistence of denatured and intact phases at the single-molecule level. This coexistence occurs in a wide temperature range, thereby accounting for the broadening of the transition. Finally, our simulations show an intriguing topology-dependent scaling law governing the growth of denaturation bubbles in supercoiled plasmids, which can be understood within the proposed mean field theory.
Displaying Computer Simulations Of Physical Phenomena
NASA Technical Reports Server (NTRS)
Watson, Val
1991-01-01
Paper discusses computer simulation as means of experiencing and learning to understand physical phenomena. Covers both present simulation capabilities and major advances expected in near future. Visual, aural, tactile, and kinesthetic effects used to teach such physical sciences as dynamics of fluids. Recommends classrooms in universities, government, and industry be linked to advanced computing centers so computer simulations integrated into education process.
Fly-by-feel aeroservoelasticity
NASA Astrophysics Data System (ADS)
Suryakumar, Vishvas Samuel
Recent experiments have suggested a strong correlation between local flow features on the airfoil surface such as the leading edge stagnation point (LESP), transition or the flow separation point with global integrated quantities such as aerodynamic lift. "Fly-By-Feel" refers to a physics-based sensing and control framework where local flow features are tracked in real-time to determine aerodynamic loads. This formulation offers possibilities for the development of robust, low-order flight control architectures. An essential contribution towards this objective is the theoretical development showing the direct relationship of the LESP with circulation for small-amplitude, unsteady, airfoil maneuvers. The theory is validated through numerical simulations and wind tunnel tests. With the availability of an aerodynamic observable, a low-order, energy-based control formulation is derived for aeroelastic stabilization and gust load alleviation. The sensing and control framework is implemented on the Nonlinear Aeroelastic Test Apparatus at Texas A&M University. The LESP is located using hot-film sensors distributed around the wing leading edge. Stabilization of limit cycle oscillations exhibited by a nonlinear wing section is demonstrated in the presence of gusts. Aeroelastic stabilization is also demonstrated on a flying wing configuration exhibiting body freedom flutter through numerical simulations.
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for the simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in the stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
Simulating direct shear tests with the Bullet physics library: A validation study.
Izadi, Ehsan; Bezuijen, Adam
2018-01-01
This study focuses on the possible uses of physics engines, and more specifically the Bullet physics library, to simulate granular systems. Physics engines are employed extensively in the video gaming, animation and movie industries to create physically plausible scenes. They are designed to deliver a fast, stable, and optimal simulation of certain systems such as rigid bodies, soft bodies and fluids. This study focuses exclusively on simulating granular media in the context of rigid body dynamics with the Bullet physics library. The first step was to validate the results of simulations of direct shear testing on uniform-sized metal beads against laboratory experiments. The difference in the average angle of mobilized friction was found to be only 1.0°. In addition, a very close match was found between dilatancy in the laboratory samples and in the simulations. A comprehensive study was then conducted to determine the failure and post-failure mechanism. We conclude with the presentation of a simulation of a direct shear test on real soil, which demonstrated that Bullet has all the capabilities needed to be used as software for simulating granular systems.
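For readers who want to experiment with this approach, here is a minimal sketch using the pybullet Python bindings (an assumption; the study itself used the Bullet C++ library) that drops a small packing of uniform beads under gravity. The shear box, loading plate, and stress measurement are omitted for brevity.

```python
import pybullet as p

p.connect(p.DIRECT)                        # headless physics server
p.setGravity(0.0, 0.0, -9.81)

# Static floor standing in for the lower half of the shear box.
floor = p.createMultiBody(
    baseMass=0.0,
    baseCollisionShapeIndex=p.createCollisionShape(p.GEOM_PLANE))

beads = []
radius = 0.005                             # uniform metal beads (m)
sphere = p.createCollisionShape(p.GEOM_SPHERE, radius=radius)
for i in range(5):
    for j in range(5):
        for k in range(4):
            b = p.createMultiBody(
                baseMass=0.001,
                baseCollisionShapeIndex=sphere,
                basePosition=[i * 0.011, j * 0.011, 0.01 + k * 0.011])
            p.changeDynamics(b, -1, lateralFriction=0.25)  # bead friction
            beads.append(b)

for _ in range(2_000):                     # let the packing settle
    p.stepSimulation()
print(p.getBasePositionAndOrientation(beads[0]))
```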
NASA Astrophysics Data System (ADS)
Tian, Kaiwen; Goldsby, David L.; Carpick, Robert W.
2018-05-01
Rate and state friction (RSF) laws are widely used empirical relationships that describe macroscale to microscale frictional behavior. They entail a linear combination of the direct effect (the increase of friction with sliding velocity due to the reduced influence of thermal excitations) and the evolution effect (the change in friction with changes in contact "state," such as the real contact area or the degree of interfacial chemical bonds). Recent atomic force microscope (AFM) experiments and simulations found that nanoscale single-asperity amorphous silica-silica contacts exhibit logarithmic aging (increasing friction with time) over several decades of contact time, due to the formation of interfacial chemical bonds. Here we establish a physically based RSF relation for such contacts by combining the thermally activated Prandtl-Tomlinson (PTT) model with an evolution effect based on the physics of chemical aging. This thermally activated Prandtl-Tomlinson model with chemical aging (PTTCA), like the PTT model, uses the loading point velocity for describing the direct effect, not the tip velocity (as in conventional RSF laws). Also, in the PTTCA model, the combination of the evolution and direct effects may be nonlinear. We present AFM data consistent with the PTTCA model whereby in aging tests, for a given hold time, static friction increases with the logarithm of the loading point velocity. Kinetic friction also increases with the logarithm of the loading point velocity at sufficiently high velocities, but at a different increasing rate. The discrepancy between the rates of increase of static and kinetic friction with velocity arises from the fact that appreciable aging during static contact changes the energy landscape. Our approach extends the PTT model, originally used for crystalline substrates, to amorphous materials. It also establishes how conventional RSF laws can be modified for nanoscale single-asperity contacts to provide a physically based friction relation for nanoscale contacts that exhibit chemical bond-induced aging, as well as other aging mechanisms with similar physical characteristics.
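A minimal sketch of the qualitative trends described above, assuming a generic logarithmic direct effect plus a logarithmic chemical-aging term; the coefficients are illustrative, not the paper's fitted values.

```python
import numpy as np

def static_friction(v_lp, t_hold, mu0=0.3, a=0.02, b=0.05,
                    v0=1e-7, t_c=1e-2):
    """Direct effect grows with the log of loading-point velocity v_lp (m/s);
    chemical aging grows with the log of hold time t_hold (s)."""
    direct = a * np.log(v_lp / v0)           # thermally activated (PTT-like)
    aging = b * np.log(1.0 + t_hold / t_c)   # interfacial bond formation
    return mu0 + direct + aging

for v in (1e-7, 1e-6, 1e-5):
    print(f"v_lp = {v:.0e} m/s -> mu_s = {static_friction(v, 1.0):.3f}")
```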
2013-01-01
Background: The validity of studies describing clinicians' judgements based on their responses to paper cases is questionable, because commonly used paper case simulations only partly reflect real clinical environments. In this study we test whether paper case simulations evoke similar risk assessment judgements to the more realistic simulated patients used in high-fidelity physical simulations. Methods: 97 nurses (34 experienced nurses and 63 student nurses) made dichotomous assessments of the risk of acute deterioration on the same 25 simulated scenarios in both paper case and physical simulation settings. Scenarios were generated from real patient cases. Measures of judgement 'ecology' were derived from the same case records. The relationship between nurses' judgements, actual patient outcomes (i.e. ecological criteria), and patient characteristics was described using the methodology of judgement analysis. Logistic regression models were constructed to calculate Lens Model Equation parameters. Parameters were then compared between the modeled paper-case and physical-simulation judgements. Results: Participants showed significantly lower achievement (r_a) when judging physical simulations than when judging paper cases. They used less modelable knowledge (G) with physical simulations than with paper cases, while retaining similar cognitive control and consistency on repeated patients. Respiration rate, the most important cue for predicting patient risk in the ecological model, was weighted most heavily by participants. Conclusions: To the extent that accuracy in judgement analysis studies is a function of task representativeness, improving task representativeness via high-fidelity physical simulations resulted in lower judgement performance in risk assessments amongst nurses when compared with paper case simulations. Lens Model statistics could prove useful when comparing different options for the design of simulations used in clinical judgement analysis. The approach outlined may be of value to those designing and evaluating clinical simulations as part of education and training strategies aimed at improving clinical judgement and reasoning. PMID:23718556
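A minimal sketch of deriving Lens Model quantities from dichotomous judgements with logistic regression, using simulated stand-in data (the study's nurse and patient data are not reproduced here).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
cues = rng.normal(size=(200, 4))                        # patient cues
outcome = (cues[:, 0] + rng.normal(0, 1.0, 200) > 0) * 1  # ecological criterion
judge = (cues[:, 0] + rng.normal(0, 1.5, 200) > 0) * 1    # nurse judgement

eco = LogisticRegression().fit(cues, outcome)           # ecology model
jud = LogisticRegression().fit(cues, judge)             # judge's policy model
y_e = eco.predict_proba(cues)[:, 1]                     # modelled criterion
y_j = jud.predict_proba(cues)[:, 1]                     # modelled judgement

r = lambda a, b: np.corrcoef(a, b)[0, 1]
ra = r(judge, outcome)                   # achievement
G = r(y_j, y_e)                          # modelable knowledge
Re, Rs = r(y_e, outcome), r(y_j, judge)  # environmental predictability, control
print(f"ra={ra:.2f}  G={G:.2f}  Re={Re:.2f}  Rs={Rs:.2f}")
```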
NASA Astrophysics Data System (ADS)
Denissenkov, Pavel; Perdikakis, Georgios; Herwig, Falk; Schatz, Hendrik; Ritter, Christian; Pignatari, Marco; Jones, Samuel; Nikas, Stylianos; Spyrou, Artemis
2018-05-01
The first-peak s-process elements Rb, Sr, Y and Zr in the post-AGB star Sakurai's object (V4334 Sagittarii) have been proposed to be the result of i-process nucleosynthesis in a post-AGB very-late thermal pulse event. We estimate the nuclear physics uncertainties in the i-process model predictions to determine whether the remaining discrepancies with observations are significant and point to potential issues with the underlying astrophysical model. We find that the dominant source in the nuclear physics uncertainties are predictions of neutron capture rates on unstable neutron rich nuclei, which can have uncertainties of more than a factor 20 in the band of the i-process. We use a Monte Carlo variation of 52 neutron capture rates and a 1D multi-zone post-processing model for the i-process in Sakurai's object to determine the cumulative effect of these uncertainties on the final elemental abundance predictions. We find that the nuclear physics uncertainties are large and comparable to observational errors. Within these uncertainties the model predictions are consistent with observations. A correlation analysis of the results of our MC simulations reveals that the strongest impact on the predicted abundances of Rb, Sr, Y and Zr is made by the uncertainties in the (n, γ) reaction rates of 85Br, 86Br, 87Kr, 88Kr, 89Kr, 89Rb, 89Sr, and 92Sr. This conclusion is supported by a series of multi-zone simulations in which we increased and decreased to their maximum and minimum limits one or two reaction rates per run. We also show that simple and fast one-zone simulations should not be used instead of more realistic multi-zone stellar simulations for nuclear sensitivity and uncertainty studies of convective–reactive processes. Our findings apply more generally to any i-process site with similar neutron exposure, such as rapidly accreting white dwarfs with near-solar metallicities.
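A minimal sketch of the Monte Carlo rate variation and correlation analysis, with a stand-in algebraic "model" in place of the full multi-zone nucleosynthesis simulation; the indices and coefficients are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)
n_rates, n_samples = 52, 1_000

# Rate multipliers drawn log-uniformly within a factor-20 uncertainty band.
f = np.exp(rng.uniform(-np.log(20.0), np.log(20.0), (n_samples, n_rates)))

# Stand-in for the stellar model: a final Sr abundance responding mostly to
# a few key rates (indices 0-2 playing the role of rates such as 85Br(n,g)).
sr = 1.0 - 0.30 * np.log(f[:, 0]) - 0.15 * np.log(f[:, 1]) \
         - 0.10 * np.log(f[:, 2]) + 0.01 * rng.normal(size=n_samples)

# Correlation analysis identifies which rate uncertainties dominate.
corr = [np.corrcoef(np.log(f[:, i]), sr)[0, 1] for i in range(n_rates)]
print(np.argsort(np.abs(corr))[::-1][:5])  # most influential rate indices
```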
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleury, Leesa M.; Moore, Guy D.
2016-05-03
If the axion exists and if the initial axion field value is uncorrelated at causally disconnected points, then it should be possible to predict the efficiency of cosmological axion production, relating the axionic dark matter density to the axion mass. The main obstacle to making this prediction is correctly treating the axion string cores. We develop a new algorithm for treating the axionic string cores correctly in 2+1 dimensions. When the axionic string cores are given their full physical string tension, axion production is about twice as efficient as in previous simulations. We argue that the string network in 2+1 dimensions should behave very differently than in 3+1 dimensions, so this result cannot be simply carried over to the physical case. We outline how to extend our method to 3+1D axion string dynamics.
Water/rock interactions in experimentally simulated dirty snowball and dirty iceball cometary nuclei
NASA Technical Reports Server (NTRS)
Gooding, James L.; Allton, Judith H.
1991-01-01
In the dirty snowball model for cometary nuclei, comet-nucleus materials are regarded as mixtures of volatile ices and relatively non-volatile minerals or chemical compounds. Carbonaceous chondrite meteorites are regarded as useful analogs for the rocky component. To help elucidate the possible physical geochemistry of cometary nuclei, preliminary results are reported of calorimetric experiments with two-component systems involving carbonaceous chondrites and water ice. Based on collective knowledge of the physics of water ice, three general types of interactions can be expected between water and minerals at sub-freezing temperatures: (1) heterogeneous nucleation of ice by insoluble minerals; (2) adsorption of water vapor by hygroscopic phases; and (3) freezing- and melting-point depression of liquid water sustained by soluble minerals. The relative and absolute magnitude of all three effects are expected to vary with mineral composition.
Physical Patterns Associated with 27 April 2011 Tornado Outbreak
NASA Astrophysics Data System (ADS)
Ramos, Fernanda; Salem, Thomas
2012-02-01
The National Weather Service office in Memphis, Tennessee has aimed its efforts at improving severe tornado forecasting. Not everything is known about tornadogenesis, but one thing is: tornadoes tend to form within supercell thunderstorms. Both 27 April 2011 and 25 May 2011 were days when a tornado outbreak was expected. Although 22 tornadoes struck the region on 27 April 2011, only 1 impacted the area on 25 May 2011. In order to understand both events, comparisons of their physical features were made. These parameters were studied using the Weather Event Simulator system and the NOAA/NWS Storm Prediction database. This research concentrated on the Surface Frontal Analysis, NAM40 700mb Dew-Points, NAM80 250mb Wind Speed and NAM20 500mb Vorticity images, as well as 0-6 km Shear, MUCAPE and VGP mesoscale patterns. As a result of this research, a dry line ahead of a cold front, dew points of 5 °C and higher, and high vorticity values were the synoptic patterns that contributed to the formation of supercell tornadoes. Finally, MUCAPE and VGP favored the possibility of tornado occurrence on 25 May 2011, but shear was the factor that made 27 April 2011 a tornado outbreak day.
Physical Scaffolding Accelerates the Evolution of Robot Behavior.
Buckingham, David; Bongard, Josh
2017-01-01
In some evolutionary robotics experiments, evolved robots are transferred from simulation to reality, while sensor/motor data flows back from reality to improve the next transferral. We envision a generalization of this approach: a simulation-to-reality pipeline. In this pipeline, increasingly embodied agents flow up through a sequence of increasingly physically realistic simulators, while data flows back down to improve the next transferral between neighboring simulators; physical reality is the last link in this chain. As a first proof of concept, we introduce a two-link chain: a fast yet low-fidelity (lo-fi) simulator hosts minimally embodied agents, which gradually evolve controllers and morphologies to colonize a slow yet high-fidelity (hi-fi) simulator. The agents are thus physically scaffolded. We show here that, given the same computational budget, these physically scaffolded robots reach higher performance in the hi-fi simulator than do robots that only evolve in the hi-fi simulator, but only for a sufficiently difficult task. These results suggest that a simulation-to-reality pipeline may strike a good balance between accelerating evolution in simulation while anchoring the results in reality, free the investigator from having to prespecify the robot's morphology, and pave the way to scalable, automated, robot-generating systems.
Physics based simulation of seismicity induced in the vicinity of a high-pressure fluid injection
NASA Astrophysics Data System (ADS)
McCloskey, J.; NicBhloscaidh, M.; Murphy, S.; O'Brien, G. S.; Bean, C. J.
2013-12-01
High-pressure fluid injection into the subsurface is known, in some cases, to induce earthquakes in the surrounding volume. The increasing importance of 'fracking' as a potential source of hydrocarbons has made the seismic hazard from this effect an important issue in the adjudication of planning applications, and it is likely that poor understanding of the process will be used as justification for the refusal of planning permission in Ireland and the UK. Here we attempt to understand some of the physical controls on the size and frequency of induced earthquakes using a physics-based simulation of the process, and we examine the resulting earthquake catalogues. The driver for seismicity in our simulations is identical to that used in the paper by Murphy et al. in this session. Fluid injection is simulated using pore fluid movement throughout a permeable layer from a high-pressure point source using a lattice Boltzmann scheme. Diffusivities and frictional parameters can be defined independently at individual nodes/cells, allowing us to reproduce 3-D geological structures. Active faults in the model follow a fractal size distribution and exhibit characteristic event size, resulting in a power-law frequency-size distribution. The fluid injection is not hydraulically connected to the fault (i.e. fluid does not come into physical contact with the fault); however, stress perturbations from the injection drive the seismicity model. The duration and pressure-time function of the fluid injection can be adjusted to model any given injection scenario, and the rate of induced seismicity is controlled by the local structures and ambient stress field as well as by the stress perturbations resulting from the fluid injection. Results from the rate and state fault models of Murphy et al. are incorporated to include the effect of fault strengthening in seismically quiet areas. Initial results show similarities with observed induced seismic catalogues. Seismicity is only induced where the active faults have not been rotated far from the ambient stress field; the 'structural keel' provided by the geology suppresses induction, since the fluid-induced stress levels are much smaller than the breaking strain of the host rocks. In addition, we observe a systematic increase in the largest observed event magnitude with time during any injection, indicating that in none of our simulations is the maximum-magnitude event observed; m_max is in fact not estimable from any of our simulations and is unlikely to be observed in any given injection scenario.
Ablation dynamics - from absorption to heat accumulation/ultra-fast laser matter interaction
NASA Astrophysics Data System (ADS)
Kramer, Thorsten; Remund, Stefan; Jäggi, Beat; Schmid, Marc; Neuenschwander, Beat
2018-05-01
Ultra-short laser radiation is used in many industrial applications today. Although state-of-the-art laser sources provide an average power of 10-100 W with repetition rates of up to several megahertz, most applications do not benefit from it. On the one hand, the processing speed is limited to some hundred millimeters per second by the dynamics of mechanical axes or galvanometric scanners. On the other hand, high repetition rates require consideration of new physical effects such as heat accumulation and shielding that might reduce the process efficiency. For ablation processes, process efficiency can be expressed by the specific removal rate, i.e., the ablated volume per unit time and average power. The analysis of the specific removal rate for different laser parameters, like average power, repetition rate or pulse duration, and process parameters, like scanning speed or material, can be used to find the best operating point for microprocessing applications. Analytical models and molecular dynamics simulations based on the so-called two-temperature model reveal the causes of the limiting physical effects. The findings of the models and simulations can be used to optimize processing strategies.
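A minimal sketch of the two-temperature model named above, integrating electron and lattice temperatures through an ultra-short pulse; the parameter values are rough metal-like numbers, not fitted to any material, and the temperature dependence of the electron heat capacity is neglected.

```python
import numpy as np

Ce, Cl, G = 2.0e4, 2.5e6, 1.0e17  # heat capacities (J/m^3 K), coupling (W/m^3 K)
dt, steps = 1e-15, 20_000         # 1 fs step over a 20 ps window
Te, Tl = 300.0, 300.0             # electron and lattice temperatures (K)

def source(t, S0=1e21, t0=1e-12, tau=1e-13):
    """Gaussian ultra-short pulse absorbed by the electrons (W/m^3)."""
    return S0 * np.exp(-((t - t0) / tau) ** 2)

for i in range(steps):            # explicit Euler integration
    t = i * dt
    dTe = (-G * (Te - Tl) + source(t)) / Ce * dt
    dTl = (G * (Te - Tl)) / Cl * dt
    Te, Tl = Te + dTe, Tl + dTl

print(f"after {steps * dt * 1e12:.0f} ps: Te = {Te:.0f} K, Tl = {Tl:.0f} K")
```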
Rettmann, Maryam E.; Holmes, David R.; Kwartowitz, David M.; Gunawan, Mia; Johnson, Susan B.; Camp, Jon J.; Cameron, Bruce M.; Dalegrave, Charles; Kolasa, Mark W.; Packer, Douglas L.; Robb, Richard A.
2014-01-01
Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved landmark-only registration provided the noise in the surface points is not excessively high. Increased variability on the landmark fiducials resulted in increased registration errors; however, refinement of the initial landmark registration by the surface-based algorithm can compensate for small initial misalignments. The surface-based registration algorithm is quite robust to noise on the surface points and continues to improve landmark registration even at high levels of noise on the surface points. Both the canine and patient studies also demonstrate that combined landmark and surface registration has lower errors than landmark registration alone. Conclusions: In this work, we describe a model for evaluating the impact of noise variability on the input parameters of a registration algorithm in the context of cardiac ablation therapy. The model can be used to predict both registration error as well as assess which inputs have the largest effect on registration accuracy. PMID:24506630
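A minimal sketch of the Monte Carlo idea behind these studies: perturb the landmark fiducials with increasing noise, run a rigid point-based (Kabsch) registration, and record the target registration error. The geometry and noise levels below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
landmarks = rng.uniform(-40, 40, (6, 3))      # atrial landmark positions (mm)
target = np.array([10.0, -20.0, 5.0])         # ablation target (mm)

def rigid_fit(A, B):
    """Least-squares rotation/translation mapping points A onto B."""
    cA, cB = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - cA).T @ (B - cB))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cB - R @ cA

for sigma in (0.5, 1.0, 2.0):                 # landmark noise levels (mm)
    tre = []
    for _ in range(2_000):
        noisy = landmarks + rng.normal(0, sigma, landmarks.shape)
        R, t = rigid_fit(noisy, landmarks)
        tre.append(np.linalg.norm(R @ target + t - target))
    print(f"sigma = {sigma} mm -> mean TRE = {np.mean(tre):.2f} mm")
```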
NASA Astrophysics Data System (ADS)
Koch, Jonas; Nowak, Wolfgang
2013-04-01
At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long term. In industrialized countries, the number of such contaminated sites is so high that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups in which we vary the physical and stochastic dependencies of the input parameters and simulated processes. Under these changes, the probability density functions show strong shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.
Formation of X-ray emitting stationary shocks in magnetized protostellar jets
NASA Astrophysics Data System (ADS)
Ustamujic, S.; Orlando, S.; Bonito, R.; Miceli, M.; Gómez de Castro, A. I.; López-Santiago, J.
2016-12-01
Context. X-ray observations of protostellar jets show evidence of strong shocks heating the plasma up to temperatures of a few million degrees. In some cases, the shocked features appear to be stationary. They are interpreted as shock diamonds. Aims: We investigate the physics that guides the formation of X-ray emitting stationary shocks in protostellar jets; the role of the magnetic field in determining the location, stability, and detectability in X-rays of these shocks; and the physical properties of the shocked plasma. Methods: We performed a set of 2.5-dimensional magnetohydrodynamic numerical simulations that modelled supersonic jets ramming into a magnetized medium and explored different configurations of the magnetic field. The model takes into account the most relevant physical effects, namely thermal conduction and radiative losses. We compared the model results with observations, via the emission measure and the X-ray luminosity synthesized from the simulations. Results: Our model explains the formation of X-ray emitting stationary shocks in a natural way. The magnetic field collimates the plasma at the base of the jet and forms a magnetic nozzle there. After an initial transient, the nozzle leads to the formation of a shock diamond at its exit, which is stationary over the time covered by the simulations (40-60 yr; comparable with the timescales of the observations). The shock generates a point-like X-ray source located close to the base of the jet with a luminosity comparable with that inferred from X-ray observations of protostellar jets. For the range of parameters explored, the evolution of the post-shock plasma is dominated by radiative cooling, whereas thermal conduction slightly affects the structure of the shock. A movie is available at http://www.aanda.org
Low Order Modeling Tools for Preliminary Pressure Gain Combustion Benefits Analyses
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
2012-01-01
Pressure gain combustion (PGC) offers the promise of higher thermodynamic cycle efficiency and greater specific power in propulsion and power systems. This presentation describes a model, developed under a cooperative agreement between NASA and AFRL, for preliminary assessment of the performance enhancement and size requirements of PGC components, either as stand-alone thrust producers or coupled with surrounding turbomachinery. The model is implemented in the Numerical Propulsion System Simulation (NPSS) environment, allowing various configurations to be examined at numerous operating points. The validated model is simple yet physics-based; it executes quickly in NPSS yet produces realistic results.
Medium-heavy nuclei from nucleon-nucleon interactions in lattice QCD
NASA Astrophysics Data System (ADS)
Inoue, Takashi; Aoki, Sinya; Charron, Bruno; Doi, Takumi; Hatsuda, Tetsuo; Ikeda, Yoichi; Ishii, Noriyoshi; Murano, Keiko; Nemura, Hidekatsu; Sasaki, Kenji; HAL QCD Collaboration
2015-01-01
On the basis of the Brueckner-Hartree-Fock method with the nucleon-nucleon forces obtained from lattice QCD simulations, the properties of medium-heavy doubly magic nuclei such as 16O and 40Ca are investigated. We found that those nuclei are bound for a pseudoscalar meson mass M_PS ≃ 470 MeV. The mass-number dependence of the binding energies, single-particle spectra, and density distributions is qualitatively consistent with that expected from empirical data at the physical point, although these hypothetical nuclei at heavy quark mass have smaller binding energies than the real nuclei.
The phase slip factor of the electrostatic cryogenic storage ring CSR
NASA Astrophysics Data System (ADS)
Grieser, Manfred; von Hahn, Robert; Vogel, Stephen; Wolf, Andreas
2017-07-01
To determine the momentum spread of an ion beam from the measured revolution frequency distribution, knowledge of the phase slip factor of the storage ring is necessary. The slip factor was measured for various working points of the cryogenic storage ring CSR at the MPI for Nuclear Physics, Heidelberg, and compared with simulations. The predicted functional relationship between the slip factor and the horizontal tune depends on the different islands of stability, which has been verified experimentally. This behavior of the slip factor is in clear contrast to that of magnetic storage rings.
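For reference, under a common accelerator-physics sign convention (conventions vary between texts), the slip factor η relates the revolution-frequency spread to the momentum spread, which is how the measured frequency distribution yields Δp/p:

    \frac{\Delta f}{f} = -\eta\,\frac{\Delta p}{p}
    \quad\Longrightarrow\quad
    \frac{\Delta p}{p} = -\frac{1}{\eta}\,\frac{\Delta f}{f}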
Etien, Erik
2013-05-01
This paper deals with the design of a speed soft sensor for an induction motor. The sensor is based on the physical model of the motor. Because the validation step highlights that the sensor cannot be validated at all operating points, the model is modified in order to obtain a fully validated sensor over the whole speed range. An original feature of the proposed approach is that the modified model is derived from a stability analysis using automatic control theory. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Design of invisibility cloaks with an open tunnel.
Ako, Thomas; Yan, Min; Qiu, Min
2010-12-20
In this paper we apply the methodology of transformation optics to the design of a novel invisibility cloak that possesses an open tunnel. Such a cloak facilitates the insertion (retrieval) of matter into (from) the cloak's interior without significantly affecting the cloak's performance, overcoming the matter-exchange bottleneck inherent to most previously proposed cloak designs. We achieve this by applying a transformation which expands a point at the origin in electromagnetic space to a finite area in physical space in a highly anisotropic manner. The invisibility performance of the proposed cloak is verified by full-wave finite-element simulations.
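For context, transformation optics prescribes the cloak's constitutive tensors from the Jacobian J = ∂x'/∂x of the map from electromagnetic to physical space; this is the standard prescription, not a result specific to this design:

    \varepsilon' = \frac{J\,\varepsilon\,J^{\mathsf{T}}}{\det J},
    \qquad
    \mu' = \frac{J\,\mu\,J^{\mathsf{T}}}{\det J}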
Millimeter image of the HL Tau Disk: gaps opened by planets?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hui
2015-10-20
Several observed features which favor planet-induced gaps in the disk are pointed out. Parameters of a two-fluid simulation model are listed, and some model results are shown. It is concluded that (1) interaction between planets, gas, and dust can explain the main features in the ALMA observation; (2) the millimeter image of a disk is determined by the dust profile, which in turn is influenced by planetary masses, viscosity, disk self-gravity, etc.; and (3) models that focus on the complex physics between gas and dust (and planets) are crucial in interpreting the (sub)millimeter images of disks.
NASA Astrophysics Data System (ADS)
Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.
2009-04-01
An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (southern Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ a continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding in boundary conditions regularly. An alternative approach is based on frequent re-initialization of the atmospheric fields, so that the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 applied the re-initialization technique. Surface temperature and accumulated precipitation (at daily and monthly scales) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data; however, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some particularly problematic subregions where precipitation is poorly captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very similar results for both temperature and precipitation, and no configuration seems to outperform the others for the whole region and for every season. Nevertheless, some marked differences between areas within the domain appear when analyzing certain physics options, particularly for precipitation. Some of the physics options, such as radiation, have little impact on model performance with respect to precipitation, and results do not vary when the scheme is modified. On the other hand, cumulus and boundary layer parameterizations are responsible for most of the differences obtained between configurations. Acknowledgements: The Spanish Ministry of Science and Innovation, with additional support from the European Community Funds (FEDER), project CGL2007-61151/CLI, and the Regional Government of Andalusia, project P06-RNM-01622, financed this study. The "Centro de Servicios de Informática y Redes de Comunicaciones" (CSIRC), Universidad de Granada, provided the computing time. Key words: MM5 mesoscale model, parameterization schemes, temperature and precipitation, southern Spain.
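A station-wise evaluation of the kind described can be sketched as follows (Python; the arrays are hypothetical placeholders for model output interpolated to station locations and the corresponding observed series):

    import numpy as np

    # 110 stations x 5 years of daily temperature (deg C); hypothetical data
    rng_o, rng_m = np.random.default_rng(0), np.random.default_rng(1)
    obs = rng_o.normal(15.0, 8.0, (110, 1826))
    mod = obs + rng_m.normal(0.5, 2.0, obs.shape)   # model = obs + error (toy)

    bias = np.mean(mod - obs, axis=1)               # per-station mean error
    rmse = np.sqrt(np.mean((mod - obs) ** 2, axis=1))
    print(f"median bias {np.median(bias):.2f} C, median RMSE {np.median(rmse):.2f} C")

Aggregating both series to monthly means before differencing typically reduces the apparent error, consistent with the monthly-versus-daily precipitation behavior reported above.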
Creating Interactive Physics Simulations Using the Power of GeoGebra
ERIC Educational Resources Information Center
Walsh, Tom
2017-01-01
I have long incorporated physics simulations in my physics teaching, and truly appreciate those who have made their simulations available to the public. I often would think of an idea for a simulation I would love to be able to use, but with no real programming background I did not know how I could make my own. That was the case until I discovered…
PHYSICAL PROPERTIES OF FLUORINATED PROPANE AND BUTANE DERIVATIVES AS ALTERNATIVE REFRIGERANTS
Physical property measurements are presented for 24 fluorinated propane and butane derivatives and one fluorinated ether. These measurements include melting point, boiling point, vapor pressure below the boiling point, heat of vaporization at the boiling point, critical propertie...
Physical Uncertainty Bounds (PUB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaughan, Diane Elizabeth; Preston, Dean L.
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Have More Fun Teaching Physics: Simulating, Stimulating Software.
ERIC Educational Resources Information Center
Jenkins, Doug
1996-01-01
High school physics offers opportunities to use problem solving and lab practices as well as cement skills in research, technical writing, and software applications. Describes and evaluates computer software enhancing the high school physics curriculum including spreadsheets for laboratory data, all-in-one simulators, projectile motion simulators,…
Multi-scale Modeling of Arctic Clouds
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Roesler, E. L.; Dexheimer, D.
2017-12-01
The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scales of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded in each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows more explicit simulation of small-scale processes while also allowing interaction between the small and large scales. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.
NASA Astrophysics Data System (ADS)
Perez, J. C.; Chandran, B. D. G.
2017-12-01
In this work we present recent results from high-resolution direct numerical simulations and from a phenomenological model that describes the radial evolution of reflection-driven Alfven wave turbulence in the solar atmosphere and the inner solar wind. The simulations are performed inside a narrow magnetic flux tube that models a coronal hole extending from the solar surface through the chromosphere and into the solar corona to approximately 21 solar radii. The simulations include prescribed empirical profiles that account for the inhomogeneities in density, background flow, and background magnetic field present in coronal holes. Alfven waves are injected into the solar corona by imposing random, time-dependent velocity and magnetic field fluctuations at the photosphere. The phenomenological model incorporates three important features observed in the simulations: dynamic alignment, weak/strong nonlinear AW-AW interactions, and the splitting of the outward-propagating AWs launched by the Sun into two populations with different characteristic frequencies. The model and simulations are in good agreement and show that, when the key physical parameters are chosen within observational constraints, reflection-driven Alfven turbulence is a plausible mechanism for the heating and acceleration of the fast solar wind. By flying a virtual Parker Solar Probe (PSP) through the simulations, we also compare the model and simulation output with the kind of single-point measurements that PSP will provide.
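The "virtual probe" idea amounts to interpolating simulated fields along a moving trajectory to produce single-point time series; a minimal sketch (Python; the field snapshot and trajectory are hypothetical placeholders, and a real flythrough would interpolate in all three spatial dimensions):

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # hypothetical simulation output: a field on a (time, radius) grid
    t = np.linspace(0.0, 3600.0, 600)         # s
    r = np.linspace(1.0, 21.0, 200)           # heliocentric distance (R_sun)
    field = np.random.default_rng(3).standard_normal((t.size, r.size))

    interp = RegularGridInterpolator((t, r), field)
    traj_t = np.linspace(0.0, 3600.0, 500)    # probe clock
    traj_r = 21.0 - 15.0 * traj_t / 3600.0    # toy sunward trajectory
    series = interp(np.column_stack([traj_t, traj_r]))  # single-point series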
A new equilibrium torus solution and GRMHD initial conditions
NASA Astrophysics Data System (ADS)
Penna, Robert F.; Kulkarni, Akshay; Narayan, Ramesh
2013-11-01
Context. General relativistic magnetohydrodynamic (GRMHD) simulations are providing influential models for black hole spin measurements, gamma ray bursts, and supermassive black hole feedback. Many of these simulations use the same initial condition: a rotating torus of fluid in hydrostatic equilibrium. A persistent concern is that simulation results sometimes depend on arbitrary features of the initial torus. For example, the Bernoulli parameter (which is related to outflows) appears to be controlled by the Bernoulli parameter of the initial torus. Aims: In this paper, we give a new equilibrium torus solution and describe two applications for the future. First, it can be used as a more physical initial condition for GRMHD simulations than earlier torus solutions. Second, it can be used in conjunction with earlier torus solutions to isolate the simulation results that depend on initial conditions. Methods: We assume axisymmetry, an ideal gas equation of state, constant entropy, and ignore self-gravity. We fix an angular momentum distribution and solve the relativistic Euler equations in the Kerr metric. Results: The Bernoulli parameter, rotation rate, and geometrical thickness of the torus can be adjusted independently. Our torus tends to be more bound and have a larger radial extent than earlier torus solutions. Conclusions: While this paper was in preparation, several GRMHD simulations appeared based on our equilibrium torus. We believe it will continue to provide a more realistic starting point for future simulations.
Integration of Irma tactical scene generator into directed-energy weapon system simulation
NASA Astrophysics Data System (ADS)
Owens, Monte A.; Cole, Madison B., III; Laine, Mark R.
2003-08-01
Integrated high-fidelity physics-based simulations that include engagement models, image generation, electro-optical hardware models and control system algorithms have previously been developed by Boeing-SVS for various tracking and pointing systems. These simulations, however, had always used images with featureless or random backgrounds and simple target geometries. With the requirement to engage tactical ground targets in the presence of cluttered backgrounds, a new type of scene generation tool was required to fully evaluate system performance in this challenging environment. To answer this need, Irma was integrated into the existing suite of Boeing-SVS simulation tools, allowing scene generation capabilities with unprecedented realism. Irma is a US Air Force research tool used for high-resolution rendering and prediction of target and background signatures. The MATLAB/Simulink-based simulation achieves closed-loop tracking by running track algorithms on the Irma-generated images, processing the track errors through optical control algorithms, and moving simulated electro-optical elements. The geometry of these elements determines the sensor orientation with respect to the Irma database containing the three-dimensional background and target models. This orientation is dynamically passed to Irma through a Simulink S-function to generate the next image. This integrated simulation provides a test-bed for development and evaluation of tracking and control algorithms against representative images including complex background environments and realistic targets calibrated using field measurements.
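The closed-loop structure described above can be summarized schematically as follows (Python; every component is a toy stand-in for illustration: the real system renders images with Irma via a Simulink S-function, and the tracker, control law, and pixel scale here are hypothetical):

    import numpy as np

    def render(los, target):
        # toy stand-in for the Irma S-function: the target appears offset
        # from the image center by the pointing error
        offset_px = (target - los) * 1.0e4      # rad -> pixels (toy scale)
        y, x = np.mgrid[:64, :64]
        cy, cx = 31.5 + offset_px[0], 31.5 + offset_px[1]
        return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 8.0)

    def track_error(img):
        # centroid tracker: pixel offset of the target from the image center
        y, x = np.mgrid[:64, :64]
        m = img.sum()
        return np.array([(img * y).sum() / m - 31.5,
                         (img * x).sum() / m - 31.5])

    target = np.array([50e-6, -30e-6])          # true target direction (rad)
    los = np.zeros(2)                           # current line of sight (rad)
    for _ in range(50):
        err_px = track_error(render(los, target))
        los += 0.2 * err_px / 1.0e4             # proportional control update
    # los has now converged toward the target direction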
Three-dimensional Simulations of Pure Deflagration Models for Thermonuclear Supernovae
NASA Astrophysics Data System (ADS)
Long, Min; Jordan, George C., IV; van Rossum, Daniel R.; Diemer, Benedikt; Graziani, Carlo; Kessler, Richard; Meyer, Bradley; Rich, Paul; Lamb, Don Q.
2014-07-01
We present a systematic study of the pure deflagration model of Type Ia supernovae (SNe Ia) using three-dimensional, high-resolution, full-star hydrodynamical simulations, nucleosynthetic yields calculated using Lagrangian tracer particles, and light curves calculated using radiation transport. We evaluate the simulations by comparing their predicted light curves with many observed SNe Ia using the SALT2 data-driven model and find that the simulations may correspond to under-luminous SNe Iax. We explore the effects of the initial conditions on our results by varying the number of randomly selected ignition points from 63 to 3500, and the radius of the centered sphere in which they are confined from 128 to 384 km. We find that the rate of nuclear burning depends on the number of ignition points at early times, the density of ignition points at intermediate times, and the radius of the confining sphere at late times. The results depend primarily on the number of ignition points, but we do not expect this to be the case in general. The simulations with few ignition points release more nuclear energy E_nuc, have larger kinetic energies E_K, and produce more 56Ni than those with many ignition points, and differ in the distribution of 56Ni, Si, and C/O in the ejecta. For these reasons, the simulations with few ignition points exhibit higher peak B-band absolute magnitudes M_B and light curves that rise and decline more quickly; their M_B and light curves resemble those of under-luminous SNe Iax, while those for simulations with many ignition points do not.
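For illustration, uniformly sampling ignition points inside a centered confining sphere, as in the initial conditions described (the function name and interface are ours):

    import numpy as np

    def sample_ignition_points(n_points, radius_km, rng=None):
        """Uniform sampling inside a centered sphere (e.g. 63-3500 points,
        confining radii of 128-384 km, as in the study's grid of models)."""
        if rng is None:
            rng = np.random.default_rng()
        v = rng.standard_normal((n_points, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)   # isotropic directions
        radii = radius_km * rng.random(n_points) ** (1.0 / 3.0)  # uniform density
        return v * radii[:, None]

    points = sample_ignition_points(384, 128.0)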
Incorporating Haptic Feedback in Simulation for Learning Physics
ERIC Educational Resources Information Center
Han, Insook; Black, John B.
2011-01-01
The purpose of this study was to investigate the effectiveness of a haptic augmented simulation in learning physics. The results indicate that haptic augmented simulations, both the force and kinesthetic and the purely kinesthetic simulations, were more effective than the equivalent non-haptic simulation in providing perceptual experiences and…
Pointing System Simulation Toolbox with Application to a Balloon Mission Simulator
NASA Technical Reports Server (NTRS)
Maringolo Baldraco, Rosana M.; Aretskin-Hariton, Eliot D.; Swank, Aaron J.
2017-01-01
The development of attitude estimation and pointing-control algorithms is necessary in order to achieve high-fidelity modeling for a Balloon Mission Simulator (BMS). A pointing system simulation toolbox was developed to enable this. The toolbox consists of a star-tracker (ST) and Inertial Measurement Unit (IMU) signal generator, a UDP (User Datagram Protocol) communication file (bridge), and an indirect-multiplicative extended Kalman filter (imEKF). This document describes the Python toolbox developed and the results of its implementation of the imEKF.
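A minimal sketch of an indirect (error-state), multiplicative EKF attitude update of the kind the toolbox implements (Python; quaternion convention is scalar-last, the error state is a 3-vector attitude error, and this is our schematic, not the toolbox's actual API):

    import numpy as np

    # Quaternions are scalar-last: q = [x, y, z, w].
    def quat_mult(q, r):
        x1, y1, z1, w1 = q
        x2, y2, z2, w2 = r
        return np.array([w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2,
                         w1*w2 - x1*x2 - y1*y2 - z1*z2])

    def small_angle_quat(dtheta):
        # quaternion for a small rotation vector dtheta (rad)
        return np.concatenate([0.5 * dtheta, [1.0]])

    def propagate(q, omega, dt):
        # first-order propagation with body-frame gyro rate omega (rad/s)
        q = quat_mult(q, small_angle_quat(omega * dt))
        return q / np.linalg.norm(q)

    def st_update(q, P, q_meas, R):
        # error-state update: the observation is the small-angle attitude
        # error between the star-tracker quaternion and the current estimate
        q_conj = np.array([-q[0], -q[1], -q[2], q[3]])
        dq = quat_mult(q_meas, q_conj)
        z = 2.0 * np.sign(dq[3]) * dq[:3]       # measured error angle (rad)
        H = np.eye(3)                           # error state observed directly
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        q = quat_mult(small_angle_quat(K @ z), q)  # multiplicative correction
        P = (np.eye(3) - K @ H) @ P
        return q / np.linalg.norm(q), P

Keeping the covariance over the 3-vector error state rather than the quaternion itself is what makes the filter "indirect-multiplicative": the quaternion stays unit-norm, and the correction is folded back by quaternion multiplication.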
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Bao, Lin; Tong, Binggang
2009-12-01
This paper investigates the variation of stagnation-point heat flux for hypersonic pointed bodies from continuum to rarefied flow states, using theoretical analysis and numerical simulation. Newly developed near-space hypersonic cruise vehicles have sharp noses and wingtips, which demand accurate and relatively simple methods for estimating the stagnation-point heat flux. As the curvature radius of the leading edge decreases, the flow becomes gradually rarefied, and viscous interaction effects and rarefied gas effects appear successively; consequently, the classical Fay-Riddell equation, derived under the continuum hypothesis, becomes invalid, and the variation of stagnation-point heat flux follows a new trend. The heat flux approaches the free-molecular flow limit, rather than an infinite value, as the curvature radius of the leading edge tends to zero. The physical mechanism behind this phenomenon requires theoretical study. First, because the whole flow regime can be described by the Boltzmann equation, the continuum and rarefied flows are analyzed within a uniform framework. A relationship is established between the insufficiency of molecular collisions in rarefied flow and the failure of Fourier's heat conduction law, along with the increasing significance of the nonlinear heat flux. Then, drawing on the Burnett approximation, the controlling factors are identified and a specific heat flux expression containing the nonlinear term is constructed for the stagnation region of a hypersonic leading edge. Together with a flow pattern analysis, the ratio of nonlinear to linear heat flux, W_r, is obtained theoretically as a parameter that reflects the influence of nonlinear factors, i.e., a criterion for classifying hypersonic rarefied flows. Finally, based on the characteristic parameter W_r, a bridge function with a physical basis is constructed; it predicts reasonable results that agree well with DSMC and experimental data over the whole flow regime.
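For orientation, the two limits that bracket the behavior described are the continuum Fay-Riddell scaling and the free-molecular value (standard results stated schematically, with α an accommodation coefficient; the paper's bridge function in W_r interpolates between them):

    q_{\mathrm{cont}} \propto \sqrt{\rho_\infty / R_n}\; V_\infty^{3},
    \qquad
    q_{\mathrm{fm}} \approx \tfrac{1}{2}\,\alpha\,\rho_\infty V_\infty^{3}

so the heat flux grows like R_n^{-1/2} as the nose sharpens, until rarefaction takes over and it saturates at the free-molecular limit.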
Enriching Triangle Mesh Animations with Physically Based Simulation.
Li, Yijing; Xu, Hongyi; Barbic, Jernej
2017-10-01
We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.
Workshop on data acquisition and trigger system simulations for high energy physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1992-12-31
This report discusses the following topics: DAQSIM: A Data Acquisition System Simulation Tool; Front End and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- An Overview; DAGAR -- A Synthesis System; Proposed Silicon Compiler for Physics Applications; Timed-LOTOS in a PROLOG Environment: An Algebraic Language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; and A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies.
NASA Astrophysics Data System (ADS)
Ficaro, Edward Patrick
The 252Cf-source-driven noise analysis (CSDNA) method requires the measurement of the cross power spectral density (CPSD) G_23(ω) between a pair of neutron detectors (subscripts 2 and 3) located in or near the fissile assembly, and the CPSDs G_12(ω) and G_13(ω) between the neutron detectors and an ionization chamber 1 containing 252Cf, also located in or near the fissile assembly. The key advantage of this method is that the subcriticality of the assembly can be obtained from the ratio of spectral densities, G_12*(ω) G_13(ω) / [G_11(ω) G_23(ω)], using a point kinetic model formulation that is independent of the detectors' properties and of a reference measurement. The multigroup Monte Carlo code KENO-NR was developed to eliminate the dependence of the measurement on the point kinetic formulation. This code utilizes time-dependent, analog neutron tracking to simulate the experimental method, in addition to the underlying nuclear physics, as closely as possible. From a direct comparison of simulated and measured data, the calculational model and cross sections are validated for the calculation, and KENO-NR can then be rerun to provide a distributed-source k_eff calculation. Depending on the fissile assembly, a few hours to a couple of days of computation time are needed for a typical simulation executed on a desktop workstation. In this work, KENO-NR demonstrated the ability to accurately estimate the measured ratio of spectral densities from experiments using capture detectors performed on uranium metal cylinders, a cylindrical tank filled with aqueous uranyl nitrate, and arrays of safe storage bottles filled with uranyl nitrate. Good agreement was also seen between simulated and measured values of the prompt neutron decay constant from the fitted CPSDs. Poor agreement was seen between simulated and measured results using composite 6Li-glass-plastic scintillators at large subcriticalities for the tank of uranyl nitrate. It is believed that the response of these detectors is not well known and is incorrectly modeled in KENO-NR. In addition to these tests, several benchmark calculations were also performed to provide insight into the properties of the point kinetic formulation.
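The ratio of spectral densities can be formed directly from detector time series; a sketch (Python with SciPy; the three series below are correlated white-noise surrogates, not real detector data):

    import numpy as np
    from scipy.signal import csd

    # surrogates for the chamber (x1) and the two neutron detectors (x2, x3)
    fs = 1.0e6
    rng = np.random.default_rng(0)
    common = rng.standard_normal(2**18)        # shared fission-chain signal (toy)
    x1 = common + 0.5 * rng.standard_normal(common.size)
    x2 = common + 0.5 * rng.standard_normal(common.size)
    x3 = common + 0.5 * rng.standard_normal(common.size)

    f, g12 = csd(x1, x2, fs=fs, nperseg=4096)
    _, g13 = csd(x1, x3, fs=fs, nperseg=4096)
    _, g11 = csd(x1, x1, fs=fs, nperseg=4096)
    _, g23 = csd(x2, x3, fs=fs, nperseg=4096)

    # the CSDNA ratio of spectral densities
    ratio = np.conj(g12) * g13 / (g11 * g23)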
The Excess Chemical Potential of Water at the Interface with a Protein from End Point Simulations.
Zhang, Bin W; Cui, Di; Matubayasi, Nobuyuki; Levy, Ronald M
2018-05-03
We use end point simulations to estimate the excess chemical potential of water in the homogeneous liquid and at the interface with a protein in solution. When the pure liquid is taken as the reference, the excess chemical potential of interfacial water is the difference between the solvation free energy of a water molecule at the interface and in the bulk. Using the homogeneous liquid as an example, we show that the solvation free energy for growing a water molecule can be estimated by applying UWHAM to the simulation data generated at the initial and final states (i.e., "the end points") instead of running multistate free energy perturbation simulations, because the configurations sampled at the end points can overlap. End point simulations are then used to estimate the solvation free energy of water at the interface with a protein in solution. The estimate of the solvation free energy at the interface from two simulations at the end points agrees with the benchmark using 32 states within a 95% confidence interval for most interfacial locations. The ability to accurately estimate the excess chemical potential of water from end point simulations facilitates the statistical thermodynamic analysis of diverse interfacial phenomena. Our focus is on analyzing the excess chemical potential of water at protein receptor binding sites, with the goal of using this information to assist in the design of tight-binding ligands.
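For orientation, the simplest end-point estimator is the one-sided exponential (Zwanzig) average over configurations sampled at the non-interacting end state; UWHAM improves on this by pooling the samples from both end points (the formula below is the standard free-energy-perturbation identity, not a quotation from the paper):

    \mu^{\mathrm{ex}} = -k_{\mathrm{B}}T \,\ln\!\left\langle e^{-\Delta U / k_{\mathrm{B}}T} \right\rangle_{0}

where ΔU is the solute-solvent interaction energy switched on between the end points, and the average is taken in the initial (non-interacting) ensemble.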
Yılmaz, Bülent; Çiftçi, Emre
2013-06-01
Extracorporeal Shock Wave Lithotripsy (ESWL) is based on disintegration of the kidney stone by delivering high-energy shock waves that are created outside the body and transmitted through the skin and body tissues. Nowadays, high-energy shock waves are also used in orthopedic operations and are being investigated for the treatment of myocardial infarction and cancer. Because of these new application areas, novel lithotriptor designs are needed for different kinds of treatment strategies. In this study, our aim was to develop a versatile computer simulation environment that gives device designers working on the various medical applications that use the shock wave principle a substantial amount of flexibility while testing the effects of new parameters such as reflector size, material properties of the medium, water temperature, and different clinical scenarios. For this purpose, we created a finite-difference time-domain (FDTD)-based computational model in which most of the physical system parameters were defined as inputs and/or as variables in the simulations. We constructed a realistic computational model of a commercial electrohydraulic lithotriptor and optimized our simulation program using results obtained by the manufacturer in an experimental setup. We then compared the simulation results with results from an experimental setup in which the oxygen level in water was varied. Finally, we studied the effects of changing input parameters such as ellipsoid size and material, temperature change in the wave propagation media, and shock wave source point misalignment. The simulation results were consistent with the experimental results and the expected effects of variation in the physical parameters of the system. The results of this study encourage further investigation and provide adequate evidence that numerical modeling of a shock wave therapy system is feasible and can provide a practical means to test novel ideas in new device design procedures. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
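The FDTD approach referred to above advances pressure and particle velocity on a staggered grid; a minimal 1D acoustic sketch (Python; the grid, water-like material constants, and source are illustrative, not the paper's lithotriptor model):

    import numpy as np

    nx, nt = 400, 1200
    dx = 1.0e-4                  # m
    c, rho = 1500.0, 1000.0      # water-like sound speed (m/s), density (kg/m^3)
    dt = 0.5 * dx / c            # satisfies the CFL stability condition
    kappa = rho * c**2           # bulk modulus

    p = np.zeros(nx)             # pressure at cell centers
    u = np.zeros(nx + 1)         # particle velocity at cell faces
    for n in range(nt):
        # update velocity from the pressure gradient
        u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
        # update pressure from the velocity divergence
        p -= dt * kappa / dx * (u[1:] - u[:-1])
        # soft source: short Gaussian pulse near the left boundary
        p[5] += np.exp(-((n - 80) / 20.0) ** 2)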
DEVELOPMENT OF AN IMPROVED SIMULATOR FOR CHEMICAL AND MICROBIAL IOR METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gary A. Pope; Kamy Sepehrnoori; Mojdeh Delshad
2001-10-01
This is the final report of a three-year research project on further development of a chemical and microbial improved oil recovery reservoir simulator. The objective of this research was to extend the capability of an existing simulator (UTCHEM) to improved oil recovery methods which use surfactants, polymers, gels, alkaline chemicals, microorganisms and foam as well as various combinations of these in both conventional and naturally fractured oil reservoirs. The first task was the addition of a dual-porosity model for chemical IOR in naturally fractured oil reservoirs. They formulated and implemented a multiphase, multicomponent dual porosity model for enhanced oil recovery from naturally fractured reservoirs. The multiphase dual porosity model was tested against analytical solutions, coreflood data, and commercial simulators. The second task was the addition of a foam model. They implemented a semi-empirical surfactant/foam model in UTCHEM and validated the foam model by comparison with published laboratory data. The third task addressed several numerical and coding enhancements that will greatly improve its versatility and performance. Major enhancements were made in UTCHEM output files and memory management. A graphical user interface to set up the simulation input and to process the output data on a Windows PC was developed. New solvers for solving the pressure equation and geochemical system of equations were implemented and tested. A corner point grid geometry option for gridding complex reservoirs was implemented and tested. Enhancements of physical property models for both chemical and microbial IOR simulations were included in the final task of this proposal. Additional options for calculating the physical properties such as relative permeability and capillary pressure were added. A microbiological population model was developed and incorporated into UTCHEM. They have applied the model to microbial enhanced oil recovery (MEOR) processes by including the capability of permeability reduction due to biomass growth and retention. The formations of bio-products such as surfactant and polymer surfactant have also been incorporated.
NASA Technical Reports Server (NTRS)
Mocko, David M.; Sud, Y. C.; Einaudi, Franco (Technical Monitor)
2000-01-01
Present-day climate models produce large climate drifts that interfere with the climate signals simulated in modelling studies. The simplifying assumptions of the physical parameterization of snow and ice processes lead to large biases in the annual cycles of surface temperature, evapotranspiration, and the water budget, which in turn causes erroneous land-atmosphere interactions. Since land processes are vital for climate prediction, and snow and snowmelt processes have been shown to affect Indian monsoons and North American rainfall and hydrology, special attention is now being given to cold land processes and their influence on the simulated annual cycle in GCMs. The snow model of the SSiB land-surface model being used at Goddard has evolved from a unified single snow-soil layer interacting with a deep soil layer through a force-restore procedure to a two-layer snow model atop a ground layer separated by a snow-ground interface. When the snow cover is deep, force-restore occurs within the snow layers. However, several other simplifying assumptions such as homogeneous snow cover, an empirical depth related surface albedo, snowmelt and melt-freeze in the diurnal cycles, and neglect of latent heat of soil freezing and thawing still remain as nagging problems. Several important influences of these assumptions will be discussed with the goal of improving them to better simulate the snowmelt and meltwater hydrology. Nevertheless, the current snow model (Mocko and Sud, 2000, submitted) better simulates cold land processes as compared to the original SSiB. This was confirmed against observations of soil moisture, runoff, and snow cover in global GSWP (Sud and Mocko, 1999) and point-scale Valdai simulations over seasonal snow regions. New results from the current snow model SSiB from the 10-year PILPS 2e intercomparison in northern Scandinavia will be presented.
Impact of detector simulation in particle physics collider experiments
NASA Astrophysics Data System (ADS)
Daniel Elvira, V.
2017-06-01
Through the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, taxing heavily the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand of computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion on the potential solutions that are being considered, based on leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.
Shock waves simulated using the dual domain material point method combined with molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Duan Z.; Dhakal, Tilak Raj
2017-01-17
In this work we combine the dual domain material point method with molecular dynamics in an attempt to create a multiscale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically nonequilibrium state, and conventional constitutive relations or equations of state are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a molecular dynamics simulation of a group of atoms surrounding the material point. Rather than restricting the multiscale simulation to a small spatial region, such as phase interfaces or crack tips, this multiscale method can be used to consider nonequilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore molecular dynamics simulations for material points can be performed independently in parallel. The dual domain material point method is chosen for this multiscale method because it can be used in history-dependent problems with large deformation without generating numerical noise as material points move across cells, and also because of its convergence and conservation properties. To demonstrate the feasibility and accuracy of this method, we compare the results of a shock wave propagation in a cerium crystal calculated using direct molecular dynamics simulation with the results from this combined multiscale calculation.
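The coupling loop described can be sketched as follows (Python; the MD closure is replaced by a linear-elastic stub with illustrative constants, and the grid transfer steps are elided -- the point is that the per-point closure evaluations are independent and parallelize trivially):

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def md_stress(F):
        """Stand-in for the MD closure: the real method runs a molecular
        dynamics simulation of atoms surrounding the material point and
        returns the stress; here a linear-elastic stub is used instead."""
        strain = 0.5 * (F + F.T) - np.eye(3)
        lam, mu = 60.0e9, 25.0e9            # illustrative Lame constants (Pa)
        return lam * np.trace(strain) * np.eye(3) + 2.0 * mu * strain

    def timestep(points):
        # 1) scatter particle mass/momentum to grid nodes (elided)
        # 2) closure: evaluate stress at every material point; the points
        #    never talk to each other, so these calls run in parallel
        with ProcessPoolExecutor() as pool:
            stresses = list(pool.map(md_stress, [p["F"] for p in points]))
        for p, s in zip(points, stresses):
            p["stress"] = s
        # 3) solve the grid momentum equation, then update particle
        #    positions and deformation gradients F (elided)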
Multipoint propagators in cosmological gravitational instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernardeau, Francis; Crocce, Martin; Scoccimarro, Roman
2008-11-15
We introduce the concept of multipoint propagators between linear cosmic fields and their nonlinear counterparts in the context of cosmological perturbation theory. Such functions express how a nonlinearly evolved Fourier mode depends on the full ensemble of modes in the initial density field. We identify and resum the dominant diagrams in the large-k limit, showing explicitly that multipoint propagators decay into the nonlinear regime at the same rate as the two-point propagator. These analytic results generalize the large-k limit behavior of the two-point propagator to arbitrary order. We measure the three-point propagator as a function of triangle shape in numerical simulations and confirm the results of our high-k resummation. We show that any n-point spectrum can be reconstructed from multipoint propagators, which leads to a physical connection between nonlinear corrections to the power spectrum at small scales and higher-order correlations at large scales. As a first application of these results, we calculate the reduced bispectrum at one loop in renormalized perturbation theory and show that we can predict the decrease in its dependence on triangle shape at redshift zero, when standard perturbation theory is least successful.
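For orientation (standard renormalized-perturbation-theory definitions; normalization conventions assumed here, not quoted from the paper), the two-point propagator measures the response of a nonlinear mode to the initial field and acquires a Gaussian large-k damping set by the linear velocity dispersion; the result above is that the n-point generalizations decay at this same rate:

    G_{ab}(k)\,\delta_{\mathrm{D}}(\mathbf{k}-\mathbf{k}') =
        \left\langle \frac{\delta \Psi_a(\mathbf{k})}{\delta \phi_b(\mathbf{k}')} \right\rangle,
    \qquad
    G(k) \;\xrightarrow{\,k\to\infty\,}\; G_{\mathrm{tree}}\, e^{-k^{2}\sigma_v^{2}/2},
    \qquad
    \sigma_v^{2} = \frac{1}{6\pi^{2}}\int_0^\infty P_0(q)\,dq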
The Curvature-Augmented Closest Point method with vesicle inextensibility application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogl, Christopher J.
2017-06-06
The Closest Point method, initially developed by Ruuth and Merriman, allows for the numerical solution of surface partial differential equations without the need for a parameterization of the surface itself. Surface quantities are embedded into the surrounding domain by assigning each value at a given spatial location to the corresponding value at the closest point on the surface. This embedding allows surface derivatives to be replaced by their Cartesian counterparts (e.g., ∇_s = ∇). This equivalence is only valid on the surface, and thus interpolation is used to enforce what is known as the side condition away from the surface. To improve upon the method, this work derives an operator embedding that incorporates curvature information, making it valid in a neighborhood of the surface. With this, direct enforcement of the side condition is no longer needed. Comparisons in R^2 and R^3 show that the resulting Curvature-Augmented Closest Point method has better accuracy and requires less memory, through increased matrix sparsity, than the Closest Point method, while maintaining similar matrix condition numbers. To demonstrate the utility of the method in a physical application, simulations of inextensible, bi-lipid vesicles evolving toward equilibrium shapes are also included.
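A minimal sketch of the baseline Closest Point method that the paper augments (Python; heat flow on the unit circle embedded in a 2D grid, with nearest-neighbor closest-point extension to keep the sketch short -- production codes use higher-order interpolation):

    import numpy as np

    n = 101
    x = np.linspace(-2.0, 2.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    h = x[1] - x[0]

    r = np.sqrt(X**2 + Y**2)
    r[r == 0] = 1.0                       # avoid division by zero at the origin
    CPx, CPy = X / r, Y / r               # closest point on the unit circle

    u = np.cos(np.arctan2(Y, X))          # surface data, extended off-surface

    dt = 0.1 * h**2
    for _ in range(200):
        # standard 5-point Cartesian Laplacian (valid on-surface after extension)
        lap = np.zeros_like(u)
        lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                           u[1:-1, 2:] + u[1:-1, :-2] - 4 * u[1:-1, 1:-1]) / h**2
        u = u + dt * lap
        # closest-point extension: re-sample u at the closest surface points,
        # which is the step enforcing the side condition away from the surface
        i = np.clip(np.round((CPx + 2.0) / h).astype(int), 0, n - 1)
        j = np.clip(np.round((CPy + 2.0) / h).astype(int), 0, n - 1)
        u = u[i, j]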
A breakthrough for experiencing and understanding simulated physics
NASA Technical Reports Server (NTRS)
Watson, Val
1988-01-01
The use of computer simulation in physics research is discussed, focusing on improvements to graphics workstations. Simulation capabilities and applications of enhanced visualization tools are outlined. The elements of an ideal computer simulation are presented, and the potential for improving various simulation elements is examined. The human-computer interface and simulation models are considered. Recommendations are made for changes in computer simulation practices and applications of simulation technology in education.
Bistatic synthetic aperture radar
NASA Astrophysics Data System (ADS)
Yates, Gillian
Synthetic aperture radar (SAR) allows all-weather, day and night, surface surveillance and has the ability to detect, classify and geolocate objects at long stand-off ranges. Bistatic SAR, where the transmitter and the receiver are on separate platforms, is seen as a potential means of countering the vulnerability of conventional monostatic SAR to electronic countermeasures, particularly directional jamming, and avoiding physical attack of the imaging platform. As the receiving platform can be totally passive, it does not advertise its position by RF emissions. The transmitter is not susceptible to jamming and can, for example, operate at long stand-off ranges to reduce its vulnerability to physical attack. This thesis examines some of the complications involved in producing high-resolution bistatic SAR imagery. The effect of bistatic operation on resolution is examined from a theoretical viewpoint and analytical expressions for resolution are developed. These expressions are verified by simulation work using a simple 'point by point' processor. This work is extended to look at using modern practical processing engines for bistatic geometries. Adaptations of the polar format algorithm and range migration algorithm are considered. The principal achievement of this work is a fully airborne demonstration of bistatic SAR. The route taken in reaching this is given, along with some results. The bistatic SAR imagery is analysed and compared to the monostatic imagery collected at the same time. Demonstrating high-resolution bistatic SAR imagery using two airborne platforms represents what I believe to be a European first and is likely to be the first time that this has been achieved outside the US (the UK has very little insight into US work on this topic). Bistatic target characteristics are examined through the use of simulations. This also compares bistatic imagery with monostatic and gives further insight into the utility of bistatic SAR.
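For context, a standard bistatic result (assumed here, not quoted from the thesis): for signal bandwidth B and bistatic angle β, the achievable range resolution along the bisector direction degrades relative to the monostatic value c/(2B):

    \delta_r \approx \frac{c}{2B\cos(\beta/2)}

which recovers the monostatic resolution as β → 0 and worsens as the transmitter-receiver geometry opens up.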
Impact of Crop Conversions on Runoff and Sediment Output in the Lower Mississippi River Basin
NASA Astrophysics Data System (ADS)
Momm, H.; Bingner, R. L.; Elkadiri, R.; Yaraser, L.; Porter, W.
2017-12-01
Farming management practices influence sediment and agrochemical loads exiting fields and entering downstream water bodies. These practices impact multiple physical processes responsible for sediment and nutrient detachment, transport, and deposition. Recent changes in farming practices in the Southern United States coincide with increased grain production, replacing traditional crops such as cotton with corn and soybeans. To grow these crops in the South, adapted crop management practices are needed (irrigation, fertilizer, etc.). In this study, the impact of grain crop adoption on hydrologic processes and non-point source pollutant production is quantified. A watershed located in the Big Sunflower River drainage basin (14,179 km2) - a part of the greater Lower Mississippi River basin - was selected due to its economic relevance, historical agricultural output, and depiction of recent farming management trends. Estimates of runoff and sediment loads were produced using the U.S. Department of Agriculture supported Annualized Agriculture Non-Point Source Pollution (AnnAGNPS) watershed pollution and management model. Existing physical conditions during a 16-year period (2000-2015) were characterized using 3,992 sub-catchments and 1,602 concentrated flow paths. Algorithms were developed to integrate continuous land use/land cover information, variable spatio-temporal irrigation practices, and crop output yield in order to generate a total of 2,922 unique management practices and corresponding soil-disturbing operations. A simulation representing existing conditions was contrasted with simulations depicting alternatives of management, irrigation practices, and temporal variations in crop yield. Quantification of anthropogenic impacts to water quality and water availability at a watershed scale supports the development of targeted pollution mitigation and custom conservation strategies.
The Shale Hills Critical Zone Observatory for Embedded Sensing and Simulation
NASA Astrophysics Data System (ADS)
Duffy, C.; Davis, K.; Kane, T.; Boyer, E.
2009-04-01
The future of environmental observing systems will utilize embedded sensor networks with continuous real-time measurement of hydrologic, atmospheric, biogeochemical, and ecological variables across diverse terrestrial environments. Embedded environmental sensors, benefitting from advances in information sciences, networking technology, materials science, computing capacity, and data synthesis methods, are undergoing revolutionary change. It is now possible to field spatially distributed, multi-node sensor networks that provide density and spatial coverage previously accessible only via numerical simulation. At the same time, computational tools are advancing rapidly to the point where it is now possible to simulate the physical processes controlling individual parcels of water and solutes through the complete terrestrial water cycle. Our goal for the Penn State Critical Zone Observatory is to apply environmental sensor arrays and integrated hydrologic models, deployed and coordinated at a testbed within the Penn State Experimental Forest. The NSF-funded CZO is designed to observe the detailed space and time complexities of the water and energy cycle for a watershed, and ultimately the river basin, for all physical states and fluxes (groundwater, soil moisture, temperature, streamflow, latent heat, snowmelt, chemistry, isotopes, etc.). Fully coupled physical models are currently being developed that link the atmosphere, land, vegetation, and subsurface into a single distributed system. During the last 5 years the Penn State Integrated Hydrologic Modeling System has been under development as an open-source community modeling project funded by NSF EAR/GEO and NSF CBET/ENG. PIHM represents a strategy for the formulation and solution of fully-coupled process equations at the watershed and river basin scales, and includes a tightly coupled GIS tool for data handling, domain decomposition, optimal unstructured grid generation, and model parameterization. (PIHM; http://sourceforge.net/projects/pihmmodel/; http://sourceforge.net/projects/pihmgis/ ) The CZO sensor and simulation system is being developed to have the following elements: 1) extensive, spatially-distributed smart sensor networks to gather intensive soil, geologic, hydrologic, geochemical and isotopic data; 2) spatially-explicit multiphysics models/solutions of the land-subsurface-vegetation-atmosphere system; 3) parallel/distributed, adaptive algorithms for rapidly simulating the states of the watershed at high resolution; and 4) signal processing tools for data mining and parameter estimation. The prototype sensor array and simulation system is demonstrated with preliminary results from our first year.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, Hai-En; Swanson, Kelly K.; Barber, Sam K.; ...
2018-04-13
The injection physics in a shock-induced density down-ramp injector was characterized, demonstrating precise control of a laser-plasma accelerator (LPA). Using a jet-blade assembly, experiments systematically varied the shock injector profile, including shock angle, shock position, up-ramp width, and acceleration length. Our work demonstrates that beam energy, energy spread, and pointing can be controlled by adjusting these parameters. As a result, an electron beam that was highly tunable from 25 to 300 MeV with 8% energy spread (ΔE_FWHM/E), 1.5 mrad divergence, and 0.35 mrad pointing fluctuation was produced. Particle-in-cell simulation characterized how variation in the shock angle and up-ramp width impacted the injection process. This highly controllable LPA represents a suitable, compact electron beam source for LPA applications such as Thomson sources and free-electron lasers.
NASA Astrophysics Data System (ADS)
Henriquez, Miguel F.; Thompson, Derek S.; Kenily, Shane; Khaziev, Rinat; Good, Timothy N.; McIlvain, Julianne; Siddiqui, M. Umair; Curreli, Davide; Scime, Earl E.
2016-10-01
Understanding particle distributions in plasma boundary regions is critical to predicting plasma-surface interactions. Ions in the presheath exhibit complex behavior because of collisions and the presence of boundary-localized electric fields. A complete picture of the particle dynamics is necessary for understanding the critical problems of tokamak wall loading and Hall thruster channel wall erosion. We report measurements of 3D argon ion velocity distribution functions (IVDFs) in the vicinity of an absorbing boundary oriented obliquely to a background magnetic field. Measurements were obtained via argon ion laser-induced fluorescence throughout a spatial volume upstream of the boundary. These distribution functions reveal kinetic details that provide a point-to-point check on particle-in-cell and 1D3V Boltzmann simulations. We present the results of this comparison and discuss some implications for plasma boundary interaction physics.
Camera calibration based on the back projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui
2015-12-01
Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method yields a more accurate calibration result that is also more physically meaningful. Simulated and real data are given to demonstrate the accuracy of the proposed method.
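The core of a BPP-style refinement can be sketched in a few lines: pixel detections are back-projected onto the calibration plane, and the 3D residuals, rather than 2D reprojection errors, drive a non-linear minimization. The sketch below is an illustrative reconstruction, not the authors' implementation; it assumes a pinhole model with known intrinsics K, a planar target at Z = 0, and refines extrinsics only (intrinsics could be appended to the parameter vector), with scipy chosen as the optimizer.

```python
import numpy as np
from scipy.optimize import least_squares

def back_project_to_plane(K, R, t, img_pts):
    """Back-project pixel points onto the calibration plane Z=0 (world frame).

    Rays through the camera center are intersected with the target plane,
    giving 3D points that can be compared against the known grid coordinates.
    """
    K_inv = np.linalg.inv(K)
    cam_center = -R.T @ t                        # camera center in world frame
    pts3d = []
    for u, v in img_pts:
        ray_cam = K_inv @ np.array([u, v, 1.0])  # ray direction, camera frame
        ray_world = R.T @ ray_cam                # rotate into world frame
        s = -cam_center[2] / ray_world[2]        # intersect with plane Z=0
        pts3d.append(cam_center + s * ray_world)
    return np.array(pts3d)

def bpp_residuals(params, img_pts, board_pts, K):
    # params: rotation vector (Rodrigues) and translation, refined jointly.
    rvec, t = params[:3], params[3:6]
    theta = np.linalg.norm(rvec)
    k = rvec / theta if theta > 1e-12 else np.zeros(3)
    Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * (Kx @ Kx)
    pts3d = back_project_to_plane(K, R, t, img_pts)
    return (pts3d - board_pts).ravel()           # 3D error, not 2D reprojection

# refined = least_squares(bpp_residuals, x0, args=(img_pts, board_pts, K))
```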
Capturing Revolute Motion and Revolute Joint Parameters with Optical Tracking
NASA Astrophysics Data System (ADS)
Antonya, C.
2017-12-01
Optical tracking of users and various technical systems is becoming more and more popular. It consists of analysing sequences of recorded images using video capturing devices and image processing algorithms. The returned data contain mainly point clouds, coordinates of markers, or coordinates of points of interest. These data can be used to retrieve the geometry of the objects, but also to extract parameters for the analytical model of the system, useful in a variety of computer-aided engineering simulations. Parameter identification of joints deals with the extraction of physical parameters (mainly geometric parameters) for the purpose of constructing accurate kinematic and dynamic models. The input data are the time series of the marker positions. The least-squares method was used to fit the data to different geometric shapes (ellipse, circle, plane) and to obtain the position and orientation of revolute joints.
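For a single revolute joint, the least-squares step reduces to fitting a circle to the tracked marker positions. A minimal sketch under that assumption (planar marker motion; the Kåsa algebraic fit, one common choice, not necessarily the paper's exact formulation) is:

```python
import numpy as np

def fit_revolute_axis_2d(xy):
    """Kåsa least-squares circle fit to a marker trajectory (Nx2 array).

    A marker rigidly attached to a body rotating about a fixed revolute
    joint traces a circle; the fitted centre estimates the joint position
    and the radius the marker's lever arm.
    """
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)
    return (cx, cy), radius
```

For a 3D marker cloud one would first fit a plane to the points and project into it; the plane normal then estimates the orientation of the joint axis.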
Application of Stereo Vision to the Reconnection Scaling Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klarenbeek, Johnny; Sears, Jason A.; Gao, Kevin W.
The measurement and simulation of the three-dimensional structure of magnetic reconnection in astrophysical and lab plasmas is a challenging problem. At Los Alamos National Laboratory we use the Reconnection Scaling Experiment (RSX) to model 3D magnetohydrodynamic (MHD) relaxation of plasma filled tubes. These magnetic flux tubes are called flux ropes. In RSX, the 3D structure of the flux ropes is explored with insertable probes. Stereo triangulation can be used to compute the 3D position of a probe from point correspondences in images from two calibrated cameras. While common applications of stereo triangulation include 3D scene reconstruction and robotics navigation, we will investigate the novel application of stereo triangulation in plasma physics to aid reconstruction of 3D data for RSX plasmas. Several challenges will be explored and addressed, such as minimizing 3D reconstruction errors in stereo camera systems and dealing with point correspondence problems.
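The triangulation step itself is standard. Below is a minimal sketch of the linear (DLT) variant, assuming the two cameras have already been calibrated to 3x4 projection matrices P1 and P2; handling of the correspondence problem and refinement against reprojection error are omitted.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2 are 3x4 camera projection matrices; x1, x2 are the matching
    pixel coordinates (u, v) of the probe tip in each image.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]           # homogeneous -> Euclidean 3D point
```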
NASA Astrophysics Data System (ADS)
Tsai, Hai-En; Swanson, Kelly K.; Barber, Sam K.; Lehe, Remi; Mao, Hann-Shin; Mittelberger, Daniel E.; Steinke, Sven; Nakamura, Kei; van Tilborg, Jeroen; Schroeder, Carl; Esarey, Eric; Geddes, Cameron G. R.; Leemans, Wim
2018-04-01
The injection physics in a shock-induced density down-ramp injector was characterized, demonstrating precise control of a laser-plasma accelerator (LPA). Using a jet-blade assembly, experiments systematically varied the shock injector profile, including shock angle, shock position, up-ramp width, and acceleration length. Our work demonstrates that beam energy, energy spread, and pointing can be controlled by adjusting these parameters. As a result, an electron beam that was highly tunable from 25 to 300 MeV with 8% energy spread (ΔE_FWHM/E), 1.5 mrad divergence, and 0.35 mrad pointing fluctuation was produced. Particle-in-cell simulation characterized how variation in the shock angle and up-ramp width impacted the injection process. This highly controllable LPA represents a suitable, compact electron beam source for LPA applications such as Thomson sources and free-electron lasers.
Using the PhysX engine for physics-based virtual surgery with force feedback.
Maciel, Anderson; Halic, Tansel; Lu, Zhonghua; Nedel, Luciana P; De, Suvranu
2009-09-01
The development of modern surgical simulators is highly challenging, as they must support complex simulation environments. The demand for higher realism in such simulators has driven researchers to adopt physics-based models, which are computationally very demanding. This poses a major problem, since real-time interaction requires graphical update rates of 30 Hz and a much higher rate of 1 kHz for force feedback (haptics). Recently several physics engines have been developed which offer multi-physics simulation capabilities, including rigid and deformable bodies, cloth and fluids. While such physics engines provide unique opportunities for the development of surgical simulators, their higher latencies, compared to what is necessary for real-time graphics and haptics, pose significant barriers to their use in interactive simulation environments. In this work, we propose solutions to this problem and demonstrate how a multimodal surgical simulation environment may be developed based on NVIDIA's PhysX physics library. Hence, models that undergo relatively low-frequency updates in PhysX can exist in an environment that demands much higher frequency updates for haptics. We use a collision handling layer to interface between the physical response provided by PhysX and the haptic rendering device to provide both real-time tissue response and force feedback. Our simulator integrates a bimanual haptic interface for force feedback and per-pixel shaders for graphics realism in real time. To demonstrate the effectiveness of our approach, we present the simulation of the laparoscopic adjustable gastric banding (LAGB) procedure as a case study. Developing complex and realistic surgical trainers with realistic organ geometries and tissue properties demands stable physics-based deformation methods, which are not always compatible with the interaction rates required for such trainers. We have shown that combining different modelling strategies for behaviour, collision and graphics is possible and desirable. Such multimodal environments enable suitable rates to simulate the major steps of the LAGB procedure.
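The key architectural idea, decoupling the roughly 30 Hz physics update from the 1 kHz haptic loop through a cached contact state, can be sketched as follows. This is a schematic reconstruction, not PhysX or the authors' code; the class, parameter names, and stiffness values are invented for illustration.

```python
import numpy as np

class ContactState:
    """Contact information cached by the low-frequency physics update."""
    def __init__(self, point, normal, stiffness=800.0, damping=2.0):
        self.point = np.asarray(point)    # contact point from last physics step
        self.normal = np.asarray(normal)  # surface normal at the contact
        self.k = stiffness                # hypothetical virtual-coupling gains
        self.b = damping

def haptic_force(contact, tool_pos, tool_vel):
    """Spring-damper 'virtual coupling' evaluated every 1 ms.

    Penetration depth is measured against the contact plane cached by the
    ~30 Hz physics update, so the rendered force stays smooth between
    engine steps instead of jumping at each physics frame.
    """
    depth = np.dot(contact.point - tool_pos, contact.normal)
    if depth <= 0.0:
        return np.zeros(3)                # tool is outside the tissue
    v_n = np.dot(tool_vel, contact.normal)
    return (contact.k * depth - contact.b * v_n) * contact.normal
```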
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhakal, Tilak Raj
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that material points communicate only with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress at each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.
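Because closure quantities travel only from material points to grid nodes, the per-point MD calls are embarrassingly parallel. The sketch below illustrates only that structure; a stand-in linear-elastic closure replaces the GPU MD run, and all names and constants are hypothetical.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def md_stress(strain):
    # Stand-in for the MD closure. In the actual method this would launch a
    # CUDA molecular-dynamics run on the atoms surrounding one material point
    # and return the averaged virial stress; a linear elastic response with
    # hypothetical Lame constants keeps the sketch runnable.
    lam, mu = 60.0, 26.0
    return lam * np.trace(strain) * np.eye(3) + 2.0 * mu * strain

def update_stresses(strains):
    """Evaluate the closure at every material point independently.

    Material points exchange information only through grid nodes, so the
    per-point stress evaluations run in parallel with no communication.
    """
    with ProcessPoolExecutor() as pool:
        return list(pool.map(md_stress, strains))

if __name__ == "__main__":
    strains = [1e-3 * np.random.randn(3, 3) for _ in range(8)]
    strains = [0.5 * (e + e.T) for e in strains]   # symmetrize
    stresses = update_stresses(strains)
```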
Developing iPad-Based Physics Simulations That Can Help People Learn Newtonian Physics Concepts
ERIC Educational Resources Information Center
Lee, Young-Jin
2015-01-01
The aims of this study are: (1) to develop iPad-based computer simulations called iSimPhysics that can help people learn Newtonian physics concepts; and (2) to assess its educational benefits and pedagogical usefulness. To facilitate learning, iSimPhysics visualizes abstract physics concepts, and allows for conducting a series of computer…
Circuit-based versus full-wave modelling of active microwave circuits
NASA Astrophysics Data System (ADS)
Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.
2018-03-01
Modern full-wave computational tools enable rigorous simulations of linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of the circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and a general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between the simulations and measurements. We here design a reconfigurable power amplifier, as a case study, using both a circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the obtained differences, pointing out the importance of de-embedding the measured parameters and appropriate modelling of discrete components, and giving specific recipes for good modelling practices.
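As a concrete illustration of the de-embedding point: if the measured fixture-plus-device response and the two fixture halves are expressed as ABCD (chain) matrices, the device response follows from two matrix inversions. This is the generic textbook step, not the authors' specific procedure; the conversion from measured S-parameters to ABCD matrices is omitted.

```python
import numpy as np

def deembed_abcd(t_meas, t_in, t_out):
    """De-embed a two-port from measurements of the full fixture.

    With cascaded chain matrices T_meas = T_in @ T_dut @ T_out at each
    frequency point, the device-under-test response is recovered as
    T_dut = inv(T_in) @ T_meas @ inv(T_out).
    """
    return np.linalg.inv(t_in) @ t_meas @ np.linalg.inv(t_out)

# Per-frequency usage: arrays shaped (n_freq, 2, 2) broadcast directly,
# since numpy inverts and multiplies over the last two axes.
```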
Multi-scale sensitivity analysis of pile installation using DEM
NASA Astrophysics Data System (ADS)
Esposito, Ricardo Gurevitz; Velloso, Raquel Quadros; Vargas, Eurípedes do Amaral, Jr.; Danziger, Bernadete Ragoni
2017-12-01
The disturbances experienced by the soil due to pile installation and dynamic soil-structure interaction still present major challenges to foundation engineers. These phenomena exhibit complex behaviors that are difficult to measure in physical tests and to reproduce in numerical models. Due to the simplified approach used by the discrete element method (DEM) to simulate large deformations and nonlinear stress-dilatancy behavior of granular soils, the DEM is an excellent tool to investigate these processes. This study presents a sensitivity analysis of the effects of introducing a single pile using the PFC2D software developed by Itasca Co. The different scales investigated in these simulations include point and shaft resistance, alterations in porosity and stress fields, and particle displacement. Several simulations were conducted to investigate the effects of different numerical approaches, indicating that the installation method and particle rotation can greatly influence the conditions around the numerical pile. Minor effects were also noted due to changes in penetration velocity and pile-soil friction. The difference in behavior between a moving and a stationary pile shows good qualitative agreement with previous experimental results, indicating the necessity of performing a force-equilibrium step prior to any simulated load test.
Black Hole Coalescence and Mergers: Review, Status, and ``Where are We Heading?''
NASA Astrophysics Data System (ADS)
Seidel, E.
I review recent progress in 3D numerical relativity, focusing on simulations involving black holes evolved with singularity avoiding slicings. After a long series of axisymmetric and perturbative studies of distorted black holes and black hole collisions, similar studies were carried out with full 3D codes. The results show that such black hole simulations can be carried out extremely accurately, although instabilities plague the simulation at uncomfortably early times. However, new formulations of Einstein's equations allow much more stable 3D evolutions than ever before, enabling the first studies of 3D gravitational collapse to a black hole. With these new formulations, for example, it has become possible to perform the first detailed simulations of 3D grazing collisions of black holes with unequal mass and spin, and with orbital angular momentum. I discuss the 3D black hole physics that can now be studied, and prospects for the future. Such studies may be able to provide information about the final plunge of two black holes, which is relevant to gravitational wave astronomy, and will be very useful as a foundation for future studies when advanced techniques like black hole excision mature to the point that they permit full orbital coalescence simulations.
Elliott, Lydia; DeCristofaro, Claire; Carpenter, Alesia
2012-09-01
This article describes the development and implementation of integrated use of personal handheld devices (personal digital assistants, PDAs) and high-fidelity simulation in an advanced health assessment course in a graduate family nurse practitioner (NP) program. A teaching tool was developed that can be utilized as a template for clinical case scenarios blending these separate technologies. Review of the evidence-based literature, including peer-reviewed articles and reviews. Blending the technologies of high-fidelity simulation and handheld devices (PDAs) provided a positive learning experience for graduate NP students in a teaching laboratory setting. Combining both technologies in clinical case scenarios offered a more real-world learning experience, with a focus on point-of-care service and integration of interview and physical assessment skills with existing standards of care and external clinical resources. Faculty modeling and advance training with PDA technology were crucial to success. Faculty developed a general template tool and systems-based clinical scenarios integrating PDA and high-fidelity simulation. Faculty observations, the general template tool, and one scenario example are included in this article.
The effects of atmospheric cloud radiative forcing on climate
NASA Technical Reports Server (NTRS)
Randall, David A.
1989-01-01
In order to isolate the effects of atmospheric cloud radiative forcing (ACRF) on climate, the general circulation of an ocean-covered earth called 'Seaworld' was simulated using the Colorado State University GCM. Most current climate models, however, do not include an interactive ocean. The key simplifications in 'Seaworld' are the fixed boundary temperature with no land points, the lack of mountains, and the zonal uniformity of the boundary conditions. Two 90-day 'perpetual July' simulations were performed, and the last sixty days of each were analyzed. The first run included all the model's physical parameterizations, while the second omitted the effects of clouds in both the solar and terrestrial radiation parameterizations. Fixed and identical boundary temperatures were set for the two runs, so the differences between them reveal the direct and indirect effects of the ACRF on the large-scale circulation and the parameterized hydrologic processes.
A numerical investigation of the effect of surface wettability on the boiling curve.
Hsu, Hua-Yi; Lin, Ming-Chieh; Popovic, Bridget; Lin, Chii-Ruey; Patankar, Neelesh A
2017-01-01
Surface wettability is recognized as playing an important role in pool boiling and the corresponding heat transfer curve. In this work, a systematic study of pool boiling heat transfer on smooth surfaces of varying wettability (contact angle range of 5°-180°) has been conducted and reported. Based on numerical simulations, boiling curves are calculated and boiling dynamics in each regime are studied using a volume-of-fluid method with a contact angle model. The calculated trends in critical heat flux and Leidenfrost point as functions of surface wettability are obtained and compared with prior experimental and theoretical predictions, giving good agreement. For the first time, the effect of contact angle on the complete boiling curve is shown. It is demonstrated that the simulation methodology can be used for studying pool boiling and related dynamics and providing more physical insights.
Angular Momentum Content of the ρ Meson in Lattice QCD
NASA Astrophysics Data System (ADS)
Glozman, Leonid Ya.; Lang, C. B.; Limmer, Markus
2009-09-01
The variational method allows one to study the mixing of interpolators with different chiral transformation properties in the nonperturbatively determined physical state. It is then possible to define and calculate in a gauge-invariant manner the chiral as well as the partial wave content of the quark-antiquark component of a meson in the infrared, where mass is generated. Using a unitary transformation from the chiral basis to the ${}^{2S+1}L_J$ basis one may extract a partial wave content of a meson. We present results for the ground state of the ρ meson using quenched simulations as well as simulations with $n_f = 2$ dynamical quarks, all for lattice spacings close to 0.15 fm. We point out that these results indicate a simple ${}^3S_1$-wave composition of the ρ meson in the infrared, like in the SU(6) flavor-spin quark model.
Statistical moments of quantum-walk dynamics reveal topological quantum transitions.
Cardano, Filippo; Maffei, Maria; Massa, Francesco; Piccirillo, Bruno; de Lisio, Corrado; De Filippis, Giulio; Cataudella, Vittorio; Santamato, Enrico; Marrucci, Lorenzo
2016-04-22
Many phenomena in solid-state physics can be understood in terms of their topological properties. Recently, controlled protocols of quantum walk (QW) are proving to be effective simulators of such phenomena. Here we report the realization of a photonic QW showing both the trivial and the non-trivial topologies associated with chiral symmetry in one-dimensional (1D) periodic systems. We find that the probability distribution moments of the walker position after many steps can be used as direct indicators of the topological quantum transition: while varying a control parameter that defines the system phase, these moments exhibit a slope discontinuity at the transition point. Numerical simulations strongly support the conjecture that these features are general of 1D topological systems. Extending this approach to higher dimensions, different topological classes, and other typologies of quantum phases may offer general instruments for investigating and experimentally detecting quantum transitions in such complex systems.
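The moment-based indicator is straightforward to reproduce numerically. The sketch below implements a generic 1D split-step quantum walk (a common protocol with two coin angles, assumed here for illustration rather than the specific photonic setup of the experiment) and returns the first moment of the walker distribution; scanning one coin angle traces out the slope discontinuity at the transition.

```python
import numpy as np

def rot(theta):
    """2x2 coin rotation."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def split_step_walk(theta1, theta2, steps=60):
    """1D split-step quantum walk; returns <x> of the final distribution.

    Moments of the walker position carry the topological signature: plotted
    against a coin angle they show a kink at the transition point.
    """
    n = 2 * steps + 1
    psi = np.zeros((n, 2), complex)              # (site, coin) amplitudes
    psi[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # symmetric initial coin
    R1, R2 = rot(theta1), rot(theta2)
    for _ in range(steps):
        psi = psi @ R1.T                         # first coin rotation
        psi[:, 0] = np.roll(psi[:, 0], -1)       # shift spin-up one site
        psi = psi @ R2.T                         # second coin rotation
        psi[:, 1] = np.roll(psi[:, 1], +1)       # shift spin-down the other way
    prob = (np.abs(psi) ** 2).sum(axis=1)
    x = np.arange(n) - steps
    return np.sum(x * prob)

# Scan the control parameter: a kink in the moment marks the transition.
# moments = [split_step_walk(np.pi / 4, t) for t in np.linspace(-np.pi, np.pi, 81)]
```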
Angular momentum content of the rho meson in lattice QCD.
Glozman, Leonid Ya; Lang, C B; Limmer, Markus
2009-09-18
The variational method allows one to study the mixing of interpolators with different chiral transformation properties in the nonperturbatively determined physical state. It is then possible to define and calculate in a gauge-invariant manner the chiral as well as the partial wave content of the quark-antiquark component of a meson in the infrared, where mass is generated. Using a unitary transformation from the chiral basis to the ${}^{2S+1}L_J$ basis one may extract a partial wave content of a meson. We present results for the ground state of the rho meson using quenched simulations as well as simulations with $n_f = 2$ dynamical quarks, all for lattice spacings close to 0.15 fm. We point out that these results indicate a simple ${}^3S_1$-wave composition of the rho meson in the infrared, like in the SU(6) flavor-spin quark model.
Numerical simulation of aerothermal loads in hypersonic engine inlets due to shock impingement
NASA Technical Reports Server (NTRS)
Ramakrishnan, R.
1992-01-01
The effect of shock impingement on an axial corner simulating the inlet of a hypersonic vehicle engine is modeled using a finite-difference procedure. A three-dimensional dynamic grid adaptation procedure is utilized to move the grids to regions with strong flow gradients. The adaptation procedure uses a grid relocation stencil that is valid at both the interior and boundary points of the finite-difference grid. A linear combination of spatial derivatives of specific flow variables, calculated with finite-element interpolation functions, is used as the adaptation measure. This computational procedure is used to study laminar and turbulent Mach 6 flows in the axial corner. The description of flow physics and qualitative measures of heat transfer distributions on cowl and strut surfaces obtained from the analysis are compared with experimental observations. Conclusions are drawn regarding the capability of the numerical scheme for enhanced modeling of high-speed compressible flows.
Numerical Model Studies of the Martian Mesoscale Circulations
NASA Technical Reports Server (NTRS)
Segal, Moti; Arritt, Raymond W.
1997-01-01
The study objectives were to evaluate by numerical modeling various possible mesoscale circulations on Mars and related atmospheric boundary layer processes. The study was in collaboration with J. Tillman of the University of Washington (who supported the study observationally). Interaction has been made with J. Prusa of Iowa State University in a numerical modeling investigation of the dynamical effects of topographically-influenced flow. Modeling simulations evaluated the effects of surface physical characteristics on (i) the Martian atmospheric boundary layer and (ii) thermally and dynamically forced mesoscale flows. Special model evaluations were made in support of selection of the Pathfinder landing sites. J. Tillman's finding of a VL-2 inter-annual temperature difference was followed by model simulations attempting to identify the forcing for this feature. Publication of the results in the peer-reviewed literature is pending completion of the manuscripts in preparation.
NASA Astrophysics Data System (ADS)
Torres, Hilario; Iaccarino, Gianluca
2017-11-01
Soleil-X is a multi-physics solver being developed at Stanford University as a part of the Predictive Science Academic Alliance Program II. Our goal is to conduct high fidelity simulations of particle laden turbulent flows in a radiation environment for solar energy receiver applications as well as to demonstrate our readiness to effectively utilize next generation Exascale machines. The novel aspect of Soleil-X is that it is built upon the Legion runtime system to enable easy portability to different parallel distributed heterogeneous architectures while also being written entirely in high-level/high-productivity languages (Ebb and Regent). An overview of the Soleil-X software architecture will be given. Results from coupled fluid flow, Lagrangian point particle tracking, and thermal radiation simulations will be presented. Performance diagnostic tools and metrics corresponding to the same cases will also be discussed. US Department of Energy, National Nuclear Security Administration.
Compact and controlled microfluidic mixing and biological particle capture
NASA Astrophysics Data System (ADS)
Ballard, Matthew; Owen, Drew; Mills, Zachary Grant; Hesketh, Peter J.; Alexeev, Alexander
2016-11-01
We use three-dimensional simulations and experiments to develop a multifunctional microfluidic device that performs rapid and controllable microfluidic mixing and specific particle capture. Our device uses a compact microfluidic channel decorated with magnetic features. A rotating magnetic field precisely controls individual magnetic microbeads orbiting around the features, enabling effective continuous-flow mixing of fluid streams over a compact mixing region. We use computer simulations to elucidate the underlying physical mechanisms that lead to effective mixing and compare them with experimental mixing results. We study the effect of various system parameters on microfluidic mixing to design an efficient micromixer. We also experimentally and numerically demonstrate that orbiting microbeads can effectively capture particles transported by the fluid, which has major implications in pre-concentration and detection of biological particles including various cells and bacteria, with applications in areas such as point-of-care diagnostics, biohazard detection, and food safety. Support from NSF and USDA is gratefully acknowledged.
Revealing missing charges with generalised quantum fluctuation relations.
Mur-Petit, J; Relaño, A; Molina, R A; Jaksch, D
2018-05-22
The non-equilibrium dynamics of quantum many-body systems is one of the most fascinating problems in physics. Open questions range from how they relax to equilibrium to how to extract useful work from them. A critical point lies in assessing whether a system has conserved quantities (or 'charges'), as these can drastically influence its dynamics. Here we propose a general protocol to reveal the existence of charges based on a set of exact relations between out-of-equilibrium fluctuations and equilibrium properties of a quantum system. We apply these generalised quantum fluctuation relations to a driven quantum simulator, demonstrating their relevance to obtain unbiased temperature estimates from non-equilibrium measurements. Our findings will help guide research on the interplay of quantum and thermal fluctuations in quantum simulation, in studying the transition from integrability to chaos and in the design of new quantum devices.
Optical simulation of flying targets using physically based renderer
NASA Astrophysics Data System (ADS)
Cheng, Ye; Zheng, Quan; Peng, Junkai; Lv, Pin; Zheng, Changwen
2018-02-01
The simulation of aerial flying targets is widely needed in many fields. This paper proposes a physically based method for optical simulation of flying targets. In the first step, three-dimensional target models are built and the motion speed and direction are defined. Next, the material of the outward appearance of a target is also simulated. Then the illumination conditions are defined. After all definitions are given, all settings are encoded in a description file. Finally, simulated results are generated by Monte Carlo ray tracing in a physically based renderer. Experiments show that this method is able to simulate materials, lighting and motion blur for flying targets, and it can generate convincing and high-quality simulation results.
3-D simulation of hanging wall effect at dam site
NASA Astrophysics Data System (ADS)
Zhang, L.; Xu, Y.
2017-12-01
The hanging wall effect is one of the near-fault effects. This paper focuses on the difference between the ground motions on the hanging wall side and the footwall side of the fault at a dam site, considering key factors such as the actual topography and the rupture process. For this purpose, 3-D ground motions are numerically simulated by the spectral element method (SEM), which takes into account the physical mechanism of generation and propagation of seismic waves. With an SEM model of 548 million DOFs, excitation and propagation of seismic waves are simulated to compare the difference between the ground motion on the hanging wall side and that on the footwall side. Taking the Dagangshan region in China as an example, several seismogenic finite faults with different dip angles are simulated to investigate the hanging wall effect. Furthermore, by comparing the ground motions of the receiving points, the influence of several factors on the hanging wall effect is investigated, such as the dip of the fault and the fault type (strike-slip or dip-slip). The peak acceleration on the hanging wall side is obviously larger than that on the footwall side, which numerically evidences the hanging wall effect. Besides, the simulation shows that the hanging wall effect deserves attention only when the dip is less than 70°.
Statistics of Magnetic Reconnection X-Lines in Kinetic Turbulence
NASA Astrophysics Data System (ADS)
Haggerty, C. C.; Parashar, T.; Matthaeus, W. H.; Shay, M. A.; Wan, M.; Servidio, S.; Wu, P.
2016-12-01
In this work we examine the statistics of magnetic reconnection sites (x-lines) and their associated reconnection rates in intermittent current sheets generated in turbulent plasmas. Although such statistics have been studied previously for fluid simulations (e.g. [1]), they have not yet been generalized to fully kinetic particle-in-cell (PIC) simulations. A significant problem with PIC simulations, however, is electrostatic fluctuations generated by numerical particle counting statistics. We find that analyzing gradients of the magnetic vector potential from the raw PIC field data identifies numerous artificial (non-physical) x-points. Using small Orszag-Tang vortex PIC simulations, we analyze x-line identification and show that these artificial x-lines can be removed using sub-Debye-length filtering of the data. We examine how turbulent properties such as the magnetic spectrum and scale-dependent kurtosis are affected by particle noise and sub-Debye-length filtering. We subsequently apply these analysis methods to a large-scale kinetic PIC turbulence simulation. Consistent with previous fluid models, we find a range of normalized reconnection rates as large as ½, but with the bulk of the rates below about 0.1. [1] Servidio, S., W. H. Matthaeus, M. A. Shay, P. A. Cassak, and P. Dmitruk (2009), Magnetic reconnection and two-dimensional magnetohydrodynamic turbulence, Phys. Rev. Lett., 102, 115003.
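In 2D, the procedure reduces to locating critical points of the magnetic vector potential and classifying them by the local Hessian: saddles of A are X-points, extrema are O-points. A minimal sketch follows, with a Gaussian smoother standing in for the sub-Debye-length filter; the filter width and the simple zero-crossing test are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def find_critical_points(A, debye_sigma=1.0):
    """Locate X- and O-points of a 2D field from its vector potential A.

    Smoothing at sub-Debye scales (width `debye_sigma` in grid cells)
    suppresses the artificial nulls produced by PIC particle noise.
    Saddles of A (negative Hessian determinant) are X-points; local
    extrema (positive determinant) are O-points.
    """
    A = gaussian_filter(A, debye_sigma)
    Ay, Ax = np.gradient(A)          # axis 0 is y, axis 1 is x
    Ayy, Ayx = np.gradient(Ay)
    Axy, Axx = np.gradient(Ax)
    x_points, o_points = [], []
    for i in range(1, A.shape[0] - 1):
        for j in range(1, A.shape[1] - 1):
            # crude zero-crossing test for both gradient components
            if Ax[i, j] * Ax[i, j + 1] < 0 and Ay[i, j] * Ay[i + 1, j] < 0:
                det_h = Axx[i, j] * Ayy[i, j] - Axy[i, j] * Ayx[i, j]
                (x_points if det_h < 0 else o_points).append((i, j))
    return x_points, o_points
```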
Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merzari, E.; Shemon, E. R.; Yu, Y. Q.
This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully-integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which provides a demonstration of the feasibility of the fully-integrated simulation.
Structure and Feedback in 30 Doradus. II. Structure and Chemical Abundances
NASA Astrophysics Data System (ADS)
Pellegrini, E. W.; Baldwin, J. A.; Ferland, G. J.
2011-09-01
We use our new optical-imaging and spectrophotometric survey of key diagnostic emission lines in 30 Doradus, together with CLOUDY photoionization models, to study the physical conditions and ionization mechanisms along over 4000 individual lines of sight at points spread across the face of the extended nebula, out to a projected radius of 75 pc from R136 at the center of the ionizing cluster NGC 2070. We focus on the physical conditions, geometry, and importance of radiation pressure on a point-by-point basis, with the aim of setting observational constraints on important feedback processes. We find that the dynamics and large-scale structure of 30 Dor are set by a confined system of X-ray bubbles in rough pressure equilibrium with each other and with the confining molecular gas. Although the warm (10,000 K) gas is photoionized by the massive young stars in NGC 2070, the radiation pressure does not currently play a major role in shaping the overall structure. The completeness of our survey also allows us to create a composite spectrum of 30 Doradus, simulating the observable spectrum of a spatially unresolved, distant giant extragalactic H II region. We find that the highly simplified models used in the "strong line" abundance technique do in fact reproduce our observed line strengths and deduced chemical abundances, in spite of the more than one order of magnitude range in the ionization parameter and density of the actual gas in 30 Dor.
ERIC Educational Resources Information Center
Gale, Jessica; Wind, Stefanie; Koval, Jayma; Dagosta, Joseph; Ryan, Mike; Usselman, Marion
2016-01-01
This paper illustrates the use of simulation-based performance assessment (PA) methodology in a recent study of eighth-grade students' understanding of physical science concepts. A set of four simulation-based PA tasks were iteratively developed to assess student understanding of an array of physical science concepts, including net force,…
Relation of Parallel Discrete Event Simulation algorithms with physical models
NASA Astrophysics Data System (ADS)
Shchur, L. N.; Shchur, L. V.
2015-09-01
We extend the concept of local simulation times in parallel discrete event simulation (PDES) in order to take into account the architecture of current hardware and software in high-performance computing. We briefly review previous research on the mapping of PDES onto physical problems, and emphasise how physical results may help to predict the behaviour of parallel algorithms.
Augmented versus Virtual Reality Laparoscopic Simulation: What Is the Difference?
Botden, Sanne M.B.I.; Buzink, Sonja N.; Schijven, Marlies P.
2007-01-01
Background Virtual reality (VR) is an emerging new modality for laparoscopic skills training; however, most simulators lack realistic haptic feedback. Augmented reality (AR) is a new laparoscopic simulation system offering a combination of physical objects and VR simulation. Laparoscopic instruments are used within a hybrid mannequin on tissue or objects while using video tracking. This study was designed to assess the difference in realism, haptic feedback, and didactic value between AR and VR laparoscopic simulation. Methods The ProMIS AR and LapSim VR simulators were used in this study. The participants performed a basic skills task and a suturing task on both simulators, after which they filled out a questionnaire about their demographics and their opinion of both simulators scored on a 5-point Likert scale. The participants were allotted to 3 groups depending on their experience: experts, intermediates and novices. Significant differences were calculated with the paired t-test. Results There was general consensus in all groups that the ProMIS AR laparoscopic simulator is more realistic than the LapSim VR laparoscopic simulator in both the basic skills task (mean 4.22 resp. 2.18, P < 0.000) as well as the suturing task (mean 4.15 resp. 1.85, P < 0.000). The ProMIS is regarded as having better haptic feedback (mean 3.92 resp. 1.92, P < 0.000) and as being more useful for training surgical residents (mean 4.51 resp. 2.94, P < 0.000). Conclusions In comparison with the VR simulator, the AR laparoscopic simulator was regarded by all participants as a better simulator for laparoscopic skills training on all tested features. PMID:17361356
A mixed-reality part-task trainer for subclavian venous access.
Robinson, Albert R; Gravenstein, Nikolaus; Cooper, Lou Ann; Lizdas, David; Luria, Isaac; Lampotang, Samsun
2014-02-01
Mixed-reality (MR) procedural simulators combine virtual and physical components and visualization software that can be used for debriefing, and offer an alternative way to learn subclavian central venous access (SCVA). We present an SCVA MR simulator, a part-task trainer, which can assist in the training of medical personnel. Sixty-five participants were involved in the following: (1) simulation trial 1; (2) a teaching intervention followed by trial 2 (with the simulator's visualization software); and (3) trial 3, a final simulation assessment. The main test parameters were the time to complete SCVA and the SCVA score, a composite of efficiency and safety metrics generated by the simulator's scoring algorithm. Residents and faculty completed questionnaires presimulation and postsimulation that assessed their confidence in obtaining access and learner satisfaction questions, for example, realism of the simulator. The average SCVA score was improved by 24.5 (n=65). Repeated-measures analysis of variance showed significant reductions in average time (F=31.94, P<0.0001), number of attempts (F=10.56, P<0.0001), and score (F=18.59, P<0.0001). After the teaching intervention and practice with the MR simulator, the results no longer showed a difference in performance between the faculty and residents. On a 5-point scale (5=strongly agree), participants agreed that the SCVA simulator was realistic (M=4.3) and strongly agreed that it should be used as an educational tool (M=4.9). The SCVA mixed-reality simulator offers a realistic representation of subclavian central venous access and offers new debriefing capabilities.
Evaluation of null-point detection methods on simulation data
NASA Astrophysics Data System (ADS)
Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as is the case for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.
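Once a null is located, its type can be read off the eigenvalues of the magnetic-field Jacobian there: an all-real spectrum corresponds to a radial (X-type) null, while a complex-conjugate pair marks a spiral (O-type) null. A simplified two-way classifier in this spirit, assuming the Jacobian has already been estimated, e.g., from four-point gradients:

```python
import numpy as np

def classify_null(grad_b):
    """Classify a magnetic null from the 3x3 Jacobian of B at the null.

    Since div B = 0 the eigenvalues sum to zero; an all-real spectrum gives
    a radial (X-type) null, a complex pair a spiral (O-type) null. This is
    a reduced form of the standard Lau-Finn taxonomy, for illustration.
    """
    eig = np.linalg.eigvals(np.asarray(grad_b, float))
    spiral = np.any(np.abs(eig.imag) > 1e-10 * np.max(np.abs(eig)))
    return "O-type (spiral)" if spiral else "X-type (radial)"

# Example: a rotational field gradient yields a spiral (O-type) null.
print(classify_null([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]))
```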
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jun; College of Physics and Electronic Engineering, Henan Normal University, 453007 Xinxiang, Henan; Zhang, Xiangdong, E-mail: zhangxd@bit.edu.cn
2015-09-28
Simultaneous negative refraction for both the fundamental frequency (FF) and second-harmonic (SH) fields in two-dimensional nonlinear photonic crystals has been found through both physical analysis and exact numerical simulation. By combining such a property with the phase-matching condition and strong second-order susceptibility, we have designed a SH lens to realize focusing for both the FF and SH fields at the same time. Good-quality non-near-field images for both FF and SH fields have been observed. The physical mechanism for such SH focusing phenomena has been disclosed, which is different from the backward SH generation that has been pointed out in previous investigations. In addition, the effect of absorption losses on the phenomena has also been discussed. Thus, potential applications of these phenomena to biphotonic microscopy techniques are anticipated.
On storm movement and its applications
NASA Astrophysics Data System (ADS)
Niemczynowicz, Janusz
Rainfall-runoff models applicable to the design and analysis of sewage systems in urban areas are being further developed in order to better represent the different physical processes occurring on an urban catchment. However, one important part of the modelling procedure, the generation of the rainfall input, is still a weak point. The main problem is the lack of adequate rainfall data representing the temporal and spatial variations of the natural rainfall process. Storm movement is a natural phenomenon which influences urban runoff. However, rainfall movement and its influence on the runoff generation process are not represented in presently available urban runoff simulation models. A physical description of rainfall movement and its parameters is given based on detailed measurements performed with twelve gauges in Lund, Sweden. The paper discusses the significance of rainfall movement for the runoff generation process and suggests how the rainfall movement parameters may be used in runoff modelling.
Flow Visualization and Pattern Formation in Vertically Falling Liquid Films
NASA Astrophysics Data System (ADS)
Balakotaiah, Vemuri; Malamataris, Nikolaos
2008-11-01
Analytical results of a low-dimensional two-equation h-q model and results of a direct numerical simulation of the transient two-dimensional Navier-Stokes equations are presented for vertically falling liquid films along a solid wall. The numerical study aims at the elucidation of the hydrodynamics of the falling film. The analytical study aims at the calculation of the parameter space where pattern formation occurs for this flow. It has been found that when the wave amplitude exceeds a certain magnitude, flow reversal occurs in the film underneath the minimum of the waves [1]. The instantaneous vortical structures possess two hyperbolic points on the vertical wall and an elliptic point in the film. As the wave amplitude increases further, the elliptic point reaches the free surface of the film and two more hyperbolic points are formed in the free surface that replace the elliptic point. Between the two hyperbolic points on the free surface, the streamwise component of velocity is negative and the film is divided into asymmetric patterns of up and down flows. Depending on the value of the Kapitza number, these patterns are either stationary or oscillatory. Physical reasons for the influence of the Kapitza number on pattern formation are given. Movies are shown where the pattern formation is demonstrated. [1] N. A. Malamataris and V. Balakotaiah (2008), AIChE J., 54(7), pp. 1725-1740.
NASA Astrophysics Data System (ADS)
Stella, J. C.; Harper, E. B.; Fremier, A. K.; Hayden, M. K.; Battles, J. J.
2009-12-01
In high-order alluvial river systems, physical factors of flooding and channel migration are particularly important drivers of riparian forest dynamics because they regulate habitat creation, resource fluxes of water, nutrients and light that are critical for growth, and mortality from fluvial disturbance. Predicting vegetation composition and dynamics at individual sites in this setting is challenging, both because of the stochastic nature of the flood regime and the spatial variability of flood events. Ecological models that correlate environmental factors with species’ occurrence and abundance (e.g., ’niche models’) often work well in infrequently-disturbed upland habitats, but are less useful in river corridors and other dynamic zones where environmental conditions fluctuate greatly and selection pressures on disturbance-adapted organisms are complex. In an effort to help conserve critical riparian forest habitat along the middle Sacramento River, CA, we are taking a mechanistic approach to quantify linkages between fluvial and biotic processes for Fremont cottonwood (Populus fremontii), a keystone pioneer tree in dryland rivers ecosystems of the U.S. Southwest. To predict the corridor-wide population effects of projected changes to the disturbance regime from flow regulation, climate change, and landscape modifications, we have coupled a physical model of channel meandering with a patch-based population model that incorporates the climatic, hydrologic, and topographic factors critical for tree recruitment and survival. We employed these linked simulations to study the relative influence of the two most critical habitat types--point bars and abandoned channels--in sustaining the corridor-wide cottonwood population over a 175-year period. The physical model uses discharge data and channel planform to predict the spatial distribution of new habitat patches; the population model runs on top of this physical template to track tree colonization and survival on each patch. Model parameters of tree life-history traits (e.g., dispersal timing) and hydrogeomorphic processes (e.g., sedimentation rate) were determined by field and experimental studies, and aerial LIDAR, with separate range of values for point bar versus floodplain habitats. In most runs, abandoned channels were colonized one third as frequently as point bars, but supported much larger forest patches when colonization was successful (from 15-99% of forest area, depending on point bar success). Independent evaluation of aerial photos confirm that cottonwood forest stands associated with abandoned channels were less frequent (38% of all stands) but more extensive (53% of all forest area) relative to those caused by migrating point bars. Results indicate that changes to the rate and scale of river migration, and particularly channel abandonment, from human and climatic alterations to the flow regime will likely influence riparian corridor-wide tree population structure and forest dynamics, with consequences for the community of organisms that depend on this habitat.
Developments on the Toroid Ion Trap Analyzer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lammert, S.A.; Thompson, C.V.; Wise, M.B.
1999-06-13
Investigations into several areas of research have been undertaken to address the performance limitations of the toroid analyzer. The Simion 3D6 (2) ion optics simulation program was used to determine whether the potential well minimum of the toroid trapping field is in the physical center of the trap electrode structure. The results (Figure 1) indicate that the minimum of the potential well is shifted towards the inner ring electrode by an amount approximately equal to 10% of the r0 dimension. A simulation of the standard 3D ion trap under similar conditions was performed as a control. In this case, the ions settle to the minimum of the potential well at a point that is coincident with the physical center (both radial and axial) of the trapping electrodes. It is proposed that by using simulation programs, a set of new analyzer electrodes can be fashioned that will correct for the non-linear fields introduced by curving the substantially quadrupolar field about the toroid axis, in order to provide a trapping field similar to the 3D ion trap cross-section. A new toroid electrode geometry has been devised to allow the use of channeltron-style detectors in place of the more expensive multichannel plate detector. Two different versions have been designed and constructed - one using the current ion trap cross-section (Figure 2) and another using the linear quadrupole cross-section design first reported by Bier and Syka (3).
NASA Astrophysics Data System (ADS)
Rose, D. V.; Welch, D. R.; Clark, R. E.; Thoma, C.; Zimmerman, W. R.; Bruner, N.; Rambo, P. K.; Atherton, B. W.
2011-09-01
Streamer and leader formation in high pressure devices is a dynamic process involving a broad range of physical phenomena. These include elastic and inelastic particle collisions in the gas, radiation generation, transport and absorption, and electrode interactions. Accurate modeling of these physical processes is essential for a number of applications, including high-current, laser-triggered gas switches. Towards this end, we present a new 3D implicit particle-in-cell simulation model of gas breakdown leading to streamer formation in electronegative gases. The model uses a Monte Carlo treatment for all particle interactions and includes discrete photon generation, transport, and absorption for ultra-violet and soft x-ray radiation. Central to the realization of this fully kinetic particle treatment is an algorithm that manages the total particle count by species while preserving the local momentum distribution functions and conserving charge [D. R. Welch, T. C. Genoni, R. E. Clark, and D. V. Rose, J. Comput. Phys. 227, 143 (2007)]. The simulation model is fully electromagnetic, making it capable of following, for example, the evolution of a gas switch from the point of laser-induced localized breakdown of the gas between electrodes through the successive stages of streamer propagation, initial electrode current connection, and high-current conduction channel evolution, where self-magnetic field effects are likely to be important. We describe the model details and underlying assumptions used and present sample results from 3D simulations of streamer formation and propagation in SF6.
GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations
Cardall, Christian Y.; Budiardja, Reuben D.
2015-06-11
Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual 'unit test' programs and larger example problems demonstrating their use. Lastly, these classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.
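As a flavor of what a units-and-constants layer buys a simulation code, a tiny Python illustration (the document describes Fortran 2003 classes; this stand-in and its names and values are ours, not GenASiS code):

    class PhysicalUnit:
        """Tiny illustration of a units utility (a Python stand-in for the
        Fortran 2003 classes described above; names and values illustrative).
        Quantities become plain SI floats after multiplication."""
        def __init__(self, si_value, label):
            self.si_value, self.label = si_value, label
        def __rmul__(self, number):                    # e.g. 10.0 * KILOMETER
            return number * self.si_value

    KILOMETER = PhysicalUnit(1.0e3, "km")
    SOLAR_MASS = PhysicalUnit(1.989e30, "M_sun")
    GRAVITATIONAL_G = 6.674e-11                        # SI, m^3 kg^-1 s^-2
    SPEED_OF_LIGHT = 2.998e8                           # SI, m/s

    radius = 10.0 * KILOMETER                          # -> meters
    mass = 1.4 * SOLAR_MASS                            # -> kilograms
    compactness = GRAVITATIONAL_G * mass / (radius * SPEED_OF_LIGHT ** 2)
    print(f"neutron-star compactness GM/(Rc^2) = {compactness:.3f}")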
Zhang, Yong; Weissmann, Gary S; Fogg, Graham E; Lu, Bingqing; Sun, HongGuang; Zheng, Chunmiao
2018-06-05
Groundwater susceptibility to non-point source contamination is typically quantified by stable indexes, while groundwater quality evolution (or deterioration globally) can be a long-term process that may last for decades and exhibit strong temporal variations. This study proposes a three-dimensional (3-d), transient index map built upon physical models to characterize the complete temporal evolution of deep aquifer susceptibility. For illustration purposes, the backward travel time probability density (BTTPD) approach is extended to assess the 3-d deep groundwater susceptibility to non-point source contamination within a sequence stratigraphic framework observed in the Kings River fluvial fan (KRFF) aquifer. The BTTPD, which represents complete age distributions underlying a single groundwater sample in a regional-scale aquifer, is used as a quantitative, transient measure of aquifer susceptibility. The resultant 3-d imaging of susceptibility using the simulated BTTPDs in KRFF reveals the strong influence of regional-scale heterogeneity on susceptibility. The regional-scale incised-valley fill deposits increase the susceptibility of aquifers by enhancing rapid downward solute movement and displaying relatively narrow and young age distributions. In contrast, the regional-scale sequence-boundary paleosols within the open-fan deposits "protect" deep aquifers by slowing downward solute movement and displaying a relatively broad and old age distribution. Further comparison of the simulated susceptibility index maps to known contaminant distributions shows that these maps are generally consistent with the high concentration and quick evolution of 1,2-dibromo-3-chloropropane (DBCP) in groundwater around the incised-valley fill since the 1970s. This application demonstrates that the BTTPDs can be used as quantitative and transient measures of deep aquifer susceptibility to non-point source contamination.
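A schematic of how a travel-time probability density translates into a transient susceptibility index; the synthetic particle-tracking ages and the 30-year horizon below are our own illustrative assumptions, not values from the study.

    import numpy as np

    def bttpd(ages_years, bins=50):
        """Empirical (backward) travel-time probability density: the age
        distribution underlying a single groundwater sample."""
        pdf, edges = np.histogram(ages_years, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, pdf

    def susceptibility_index(centers, pdf, horizon_years):
        """Fraction of sampled water younger than the horizon: a simple
        transient index in which high values flag rapid downward solute movement."""
        width = centers[1] - centers[0]
        return float(pdf[centers <= horizon_years].sum() * width)

    ages = np.random.lognormal(mean=3.5, sigma=0.8, size=20000)   # synthetic ages (yr)
    c, p = bttpd(ages)
    print("P(age < 30 yr) =", round(susceptibility_index(c, p, 30.0), 3))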
Maximizing Information Diffusion in the Cyber-physical Integrated Network
Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan
2015-01-01
Nowadays, our living environment has been embedded with smart objects, such as smart sensors, smart watches and smart phones. Their abundant capabilities of sensing, communication and computation integrate cyberspace and physical space, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed algorithm for maximizing the probability of information diffusion (DMPID) in the cyber-physical integrated network. Unlike previous approaches that only consider the size of the CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively-weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulations show that DMPID can nearly double the information diffusion probability, while keeping a reasonable size of selection with low overhead in different distributed networks. PMID:26569254
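A greatly simplified sketch of weight-aware forwarding-set selection in Python: a greedy dominating set scored by link diffusion probabilities. Unlike DMPID it is centralized and does not enforce connectivity; the graph and link weights are synthetic.

    import networkx as nx

    def greedy_weighted_ds(G):
        """Greedy dominating-set selection that, unlike size-only CDS methods,
        scores candidates by the diffusion probability on their links (a much
        simplified stand-in for DMPID; connectivity is not guaranteed)."""
        uncovered, selected = set(G.nodes), []
        while uncovered:
            def gain(v):
                return sum(G[v][u].get("p", 0.5)
                           for u in G.neighbors(v) if u in uncovered)
            best = max(uncovered, key=gain)          # progress guaranteed
            selected.append(best)
            uncovered -= {best} | set(G.neighbors(best))
        return selected

    G = nx.erdos_renyi_graph(100, 0.08, seed=1)
    for u, v in G.edges:
        G[u][v]["p"] = 0.2 + 0.6 * ((u * v) % 7) / 7.0   # synthetic link weights
    print("forwarding set size:", len(greedy_weighted_ds(G)))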
Discriminating topology in galaxy distributions using network analysis
NASA Astrophysics Data System (ADS)
Hong, Sungryong; Coutinho, Bruno C.; Dey, Arjun; Barabási, Albert-L.; Vogelsberger, Mark; Hernquist, Lars; Gebhardt, Karl
2016-07-01
The large-scale distribution of galaxies is generally analysed using the two-point correlation function. However, this statistic does not capture the topology of the distribution, and it is necessary to resort to higher order correlations to break degeneracies. We demonstrate that an alternate approach using network analysis can discriminate between topologically different distributions that have similar two-point correlations. We investigate two galaxy point distributions, one produced by a cosmological simulation and the other by a Lévy walk. For the cosmological simulation, we adopt the redshift z = 0.58 slice from Illustris and select galaxies with stellar masses greater than 10^8 M⊙. The two-point correlation function of these simulated galaxies follows a single power law, ξ(r) ∼ r^(-1.5). Then, we generate Lévy walks matching the correlation function and abundance of the simulated galaxies. We find that, while the two simulated galaxy point distributions have the same abundance and two-point correlation function, their spatial distributions are very different; most prominently, the filamentary structures seen in the simulation are absent in the Lévy fractals. To quantify these missing topologies, we adopt network analysis tools and measure diameter, giant component, and transitivity from networks built by a conventional friends-of-friends recipe with various linking lengths. Unlike the abundance and two-point correlation function, these network quantities reveal a clear separation between the two simulated distributions; therefore, the galaxy distribution simulated by Illustris is, quantitatively, not a Lévy fractal. We find that the described network quantities offer an efficient tool for discriminating topologies and for comparing observed and theoretical distributions.
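The friends-of-friends network construction and the three network statistics can be sketched with scipy and networkx on synthetic points; the linking length and point count here are arbitrary choices, not the paper's.

    import numpy as np
    import networkx as nx
    from scipy.spatial import cKDTree

    def fof_network(points, linking_length):
        """Friends-of-friends graph: link every pair of points closer than the
        linking length, then measure topology-sensitive network statistics."""
        tree = cKDTree(points)
        G = nx.Graph()
        G.add_nodes_from(range(len(points)))
        G.add_edges_from(tree.query_pairs(linking_length))
        return G

    pts = np.random.rand(2000, 3)                 # stand-in for galaxy positions
    G = fof_network(pts, 0.05)
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    print("giant component fraction:", giant.number_of_nodes() / G.number_of_nodes())
    print("transitivity:", nx.transitivity(G))
    print("diameter of giant component:", nx.diameter(giant))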
Automated Fluid Feature Extraction from Transient Simulations
NASA Technical Reports Server (NTRS)
Haimes, Robert; Lovely, David
1999-01-01
In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snap-shot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3), and methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: (1) shocks, (2) vortex cores, (3) regions of recirculation, (4) boundary layers, (5) wakes. Three papers and an initial specification for the FX (Fluid eXtraction) tool kit Programmer's guide were included. The papers, submitted to the AIAA Computational Fluid Dynamics Conference, are entitled: (1) Using Residence Time for the Extraction of Recirculation Regions, (2) Shock Detection from Computational Fluid Dynamics Results, and (3) On the Velocity Gradient Tensor and Fluid Feature Extraction.
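As an illustration of the automated-detection idea, a crude shock detector in Python; the divergence/pressure-gradient criterion and both thresholds are simplifications of our own, not the methods in the cited papers.

    import numpy as np

    def detect_shocks(u, v, p, dx, div_thresh=-1.0):
        """Crude shock detector on a 2D grid: flag cells with strongly negative
        velocity divergence coinciding with a steep pressure rise."""
        dudx = np.gradient(u, dx, axis=1)
        dvdy = np.gradient(v, dx, axis=0)
        divergence = dudx + dvdy
        gpy, gpx = np.gradient(p, dx)
        pressure_jump = np.hypot(gpx, gpy)
        return (divergence < div_thresh) & (pressure_jump > pressure_jump.mean())

    n = 128
    x = np.linspace(0, 1, n)
    u = np.where(x < 0.5, 2.0, 1.0) * np.ones((n, n))   # velocity jump at x = 0.5
    v = np.zeros((n, n))
    p = np.where(x < 0.5, 1.0, 2.5) * np.ones((n, n))   # pressure jump at the shock
    mask = detect_shocks(u, v, p, dx=1.0 / n)
    print("flagged cells:", int(mask.sum()))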
Assessing accuracy of point fire intervals across landscapes with simulation modelling
Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall
2007-01-01
We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...
ERIC Educational Resources Information Center
Meibauer, Gustav; Aagaard Nøhr, Andreas
2018-01-01
This article is about designing and implementing PowerPoint-based interactive simulations for use in International Relations (IR) introductory undergraduate classes based on core pedagogical literature, models of human skill acquisition, and previous research on simulations in IR teaching. We argue that simulations can be usefully employed at the…
Monte Carlo based, patient-specific RapidArc QA using Linac log files.
Teke, Tony; Bergman, Alanah M; Kwa, William; Gill, Bradford; Duzenli, Cheryl; Popescu, I Antoniu
2010-01-01
A Monte Carlo (MC) based QA process to validate the dynamic beam delivery accuracy for Varian RapidArc (Varian Medical Systems, Palo Alto, CA) using Linac delivery log files (DynaLog) is presented. Using DynaLog file analysis and MC simulations, the goals of this article are (a) to confirm that adequate sampling is used in the RapidArc optimization algorithm (177 static gantry angles) and (b) to assess the physical machine performance [gantry angle and monitor unit (MU) delivery accuracy]. Ten clinically acceptable RapidArc treatment plans were generated for various tumor sites and delivered to a water-equivalent cylindrical phantom on the treatment unit. Three Monte Carlo simulations were performed to calculate dose to the CT phantom image set: (a) one using a series of static gantry angles defined by 177 control points with treatment planning system (TPS) MLC control files (planning files), (b) one using continuous gantry rotation with TPS generated MLC control files, and (c) one using continuous gantry rotation with actual Linac delivery log files. Monte Carlo simulated dose distributions were compared to both ionization chamber point measurements and RapidArc TPS calculated doses. The 3D dose distributions were compared using a 3D gamma-factor analysis, employing a 3%/3 mm distance-to-agreement criterion. The dose difference between MC simulations, TPS, and ionization chamber point measurements was less than 2.1%. For all plans, the MC calculated 3D dose distributions agreed well with the TPS calculated doses (gamma-factor values were less than 1 for more than 95% of the points considered). Machine performance QA was supplemented with an extensive DynaLog file analysis, which showed that leaf position errors were less than 1 mm 94% of the time, with no leaf errors greater than 2.5 mm. The mean standard deviations in MU and gantry angle were 0.052 MU and 0.355 degrees, respectively, for the ten cases analyzed. The accuracy and flexibility of the Monte Carlo based RapidArc QA system were demonstrated. Good machine performance and accurate dose distribution delivery of RapidArc plans were observed. The sampling used in the TPS optimization algorithm was found to be adequate.
Ohtake, Patricia J; Lazarus, Marcilene; Schillo, Rebecca; Rosen, Michael
2013-02-01
Rehabilitation of patients in critical care environments improves functional outcomes. This finding has led to increased implementation of intensive care unit (ICU) rehabilitation programs, including early mobility, and an associated increased demand for physical therapists practicing in ICUs. Unfortunately, many physical therapists report being inadequately prepared to work in this high-risk environment. Simulation provides focused, deliberate practice in safe, controlled learning environments and may be a method to initiate academic preparation of physical therapists for ICU practice. The purpose of this study was to examine the effect of participation in simulation-based management of a patient with critical illness in an ICU setting on levels of confidence and satisfaction in physical therapist students. A one-group, pretest-posttest, quasi-experimental design was used. Physical therapist students (N=43) participated in a critical care simulation experience requiring technical (assessing bed mobility and pulmonary status), behavioral (patient and interprofessional communication), and cognitive (recognizing a patient status change and initiating appropriate responses) skill performance. Student confidence and satisfaction were surveyed before and after the simulation experience. Students' confidence in their technical, behavioral, and cognitive skill performance increased from "somewhat confident" to "confident" following the critical care simulation experience. Student satisfaction was highly positive, with strong agreement that the simulation experience was valuable, reinforced course content, and was a useful educational tool. Limitations of the study were the small sample from one university and the absence of a control group. Incorporating a simulated, interprofessional critical care experience into a required clinical course improved physical therapist student confidence in technical, behavioral, and cognitive performance measures and was associated with high student satisfaction. Using simulation, students were introduced to the critical care environment, which may increase interest in working in this practice area.
Resolving Low-Density Lipoprotein (LDL) on the Human Aortic Surface Using Large Eddy Simulation
NASA Astrophysics Data System (ADS)
Lantz, Jonas; Karlsson, Matts
2011-11-01
The prediction and understanding of the genesis of vascular diseases is one of the grand challenges in biofluid engineering. The progression of atherosclerosis is correlated with the build-up of LDL on the arterial surface, which is affected by the blood flow. A multi-physics simulation of LDL mass transport in the blood and through the arterial wall of a subject-specific human aorta was performed, employing an LES turbulence model to resolve the turbulent flow. Geometry and velocity measurements from magnetic resonance imaging (MRI) were incorporated to assure the physiological relevance of the simulation. Due to the turbulent nature of the flow, consecutive cardiac cycles are not identical, neither in vivo nor in the simulations. A phase average based on a large number of cardiac cycles is therefore computed, which is the proper way to get reliable statistical results from an LES simulation. In total, 50 cardiac cycles were simulated, yielding over 2.5 billion data points to be post-processed. An inverse relation between LDL and WSS was found; LDL accumulated at locations where WSS was low and vice versa. Large temporal differences were present, with the concentration level decreasing during systolic acceleration and increasing during the deceleration phase. This method makes it possible to resolve the localization of LDL accumulation in the normal human aorta with its complex transitional flow.
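The phase-averaging step is simple to state in code; a minimal numpy sketch assuming a uniformly sampled signal and synthetic wall-shear-stress-like data (cycle counts and noise level are illustrative).

    import numpy as np

    def phase_average(signal, cycles, samples_per_cycle):
        """Phase-average a quantity sampled over many cardiac cycles: reshape
        to (cycles, phase) and average across cycles, which is the proper way
        to obtain statistics from an LES where no two cycles are identical."""
        trimmed = signal[: cycles * samples_per_cycle]
        return trimmed.reshape(cycles, samples_per_cycle).mean(axis=0)

    t = np.arange(50 * 200)                        # 50 cycles, 200 samples each
    wss = np.sin(2 * np.pi * t / 200) + 0.3 * np.random.randn(t.size)  # noisy signal
    print(phase_average(wss, 50, 200).shape)       # -> (200,): one averaged cycle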
NASA Astrophysics Data System (ADS)
Sagui, Celeste
2006-03-01
An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as this stabilizes much of the delicate 3-d structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign "partial charges" to every atom in a simulation in order to model the interatomic electrostatic forces, so that the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate the artifacts associated with the point charges (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules) used in the force fields in a physically meaningful way? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of the molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions -- Wannier, Boys, and Edmiston-Ruedenberg -- was introduced, which allows for a first-principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles all the way up to hexadecapoles without prohibitive extra costs. The importance of these methods for large-scale simulations will be discussed and exemplified by simulations of polarizable DNA models.
NASA Astrophysics Data System (ADS)
Tahir, N. A.; Burkart, F.; Schmidt, R.; Shutov, A.; Wollmann, D.; Piriz, A. R.
2016-12-01
Experiments have been done at the CERN HiRadMat (High Radiation to Materials) facility in which large cylindrical copper targets were irradiated with a 440 GeV proton beam generated by the Super Proton Synchrotron (SPS). The primary purpose of these experiments was to confirm the existence of hydrodynamic tunneling of ultra-relativistic protons and their hadronic shower in solid materials, which had been predicted by previous numerical simulations. The experimental measurements have shown very good agreement with the simulation results. This provides confidence in our simulations of the interaction of the 7 TeV LHC (Large Hadron Collider) protons and the 50 TeV Future Circular Collider (FCC) protons with solid materials, respectively. This work is important from the machine protection point of view. The numerical simulations have also shown that in the HiRadMat experiments, a significant part of the target material is converted into different phases of High Energy Density (HED) matter, including a two-phase solid-liquid mixture, expanded as well as compressed hot liquid phases, a two-phase liquid-gas mixture and a gaseous state. The HiRadMat facility is therefore a unique ion beam facility worldwide that is currently available for studying the thermophysical properties of HED matter. In the present paper we discuss the numerical simulation results and present a comparison with the experimental measurements.
NASA Astrophysics Data System (ADS)
Tolhurst, Jeffrey Wayne
Most students enrolled in lower division physical geology courses are non-majors and tend to finish the course with little appreciation of what geologists really do. They may also be expected to analyze, synthesize, and apply knowledge from previous laboratory experiences with little or no instruction and/or practice in utilizing the critical thinking skills necessary to do so. This study sought to answer two research questions: (1) do physical geology students enrolled in a course designed around a mining simulation activity perform better cognitively than students who are taught the same curriculum in the traditional fashion; and (2) do students enrolled in the course gain a greater appreciation of physical geology and the work that geologists do. Eighty students enrolled in the course at Columbia College, Sonora, California over a two-year period. During the first year, thirty-one students were taught the traditional physical geology curriculum. During the second year, forty-nine students were taught the traditional curriculum up until week nine, and then a cooperative learning mining simulation activity for three weeks. A static group, split-plot, repeated measures design was used. Pre- and post-tests were administered to students in both the control and treatment groups. The cognitive assessment instrument was validated by content area experts in the University of South Carolina Geological Sciences Department. Students were given raw lithologic, gravimetric, topographic, and environmental data with which to construct maps and perform an overlay analysis. They were tested on the cognitive reasoning and spatial analysis they used to make decisions about where to test drill for valuable metallic ores. The affective instrument used a six-point Likert scale to assess students' perceived enjoyment, interest, and importance of the material. Gains-score analysis of the cognitive achievement data showed a mean of 2.43 for the control group and 4.47 for the treatment group, a statistically significant difference at the alpha = 0.05 level (p = 0.0038). Gains scores for the affective data indicated no statistically significant differences between the treatment and control groups. The simulation seems to make a difference in terms of students' intellectual performance, but not in terms of their attitudinal perceptions of the course. Results support the hypothesis that cognitive achievement is improved by a cooperative learning mining simulation activity. One implication might include adapting and implementing the model in lower division physical geology courses. Another would be to develop similar activities for other lower division, non-majors earth science courses (e.g., environmental geology, astronomy, meteorology, oceanography) that could improve students' subject matter knowledge. Additionally, the research supports shifting the locus of control from the instructor to students as well as the use of the principles of active learning, cooperative learning, and confluent education in the science classroom.
A physical anthropomorphic phantom of a one year old child with real-time dosimetry
NASA Astrophysics Data System (ADS)
Bower, Mark William
A physical heterogeneous phantom has been created with epoxy resin based tissue substitutes. The phantom is based on the Cristy and Eckerman mathematical phantom, which in turn is a modification of the Medical Internal Radiation Dose (MIRD) model of a one-year-old child as presented by the Society of Nuclear Medicine. The Cristy and Eckerman mathematical phantom, and the physical phantom, are comprised of three different tissue types: bone, lung tissue and soft tissue. The bone tissue substitute is a homogenous mixture of bone tissues: active marrow, inactive marrow, trabecular bone, and cortical bone. Soft tissue organs are represented by a homogeneous soft tissue substitute at a particular location. Point doses were measured within the phantom with a Metal Oxide Semiconductor Field Effect Transistor (MOSFET)-based Patient Dose Verification System modified from the original radiotherapy application. The system features multiple dosimeters that are used to monitor entrance or exit skin doses and intracavity doses in the phantom in real time. Two different MOSFET devices were evaluated: the typical therapy MOSFET and a developmental MOSFET device that has an oxide layer twice as thick as the therapy MOSFET, making it of higher sensitivity. The average sensitivity (free-in-air, including backscatter) of the 'high-sensitivity' MOSFET dosimeters ranged from 1.15×10^5 mV per C kg^-1 (29.7 mV/R) to 1.38×10^5 mV per C kg^-1 (35.7 mV/R), depending on the energy of the x-ray field. The integrated physical phantom was utilized to obtain point measurements of the absorbed dose from diagnostic x-ray examinations. Organ doses were calculated based on these point dose measurements. The phantom dosimetry system functioned well, providing real-time measurement of the dose to particular organs. The system was less reliable at low doses, where the main contribution to the dose was from scattered radiation. The system also was of limited utility for determining the absorbed dose in larger systems such as the skeleton. The point-dose method of estimating the organ dose to large, disperse organs such as this is of questionable accuracy, since only a limited number of points are measured in a field with potentially large exposure variations. The MOSFET system was simple to use and considerably faster than traditional thermoluminescent dosimetry. The one-year-old physical phantom with the real-time MOSFET dosimeters provides a method to easily evaluate the risk to a previously understudied population from diagnostic radiographic procedures.
Development of IR imaging system simulator
NASA Astrophysics Data System (ADS)
Xiang, Xinglang; He, Guojing; Dong, Weike; Dong, Lu
2017-02-01
To overcome the disadvantages of traditional semi-physical simulation and injection simulation equipment in the performance evaluation of the infrared imaging system (IRIS), a low-cost and reconfigurable IRIS simulator, which can simulate the realistic physical process of infrared imaging, is proposed to test and evaluate the performance of the IRIS. According to the theoretical simulation framework and the theoretical models of the IRIS, the architecture of the IRIS simulator is constructed. The 3D scenes are generated and the infrared atmospheric transmission effects are simulated in real time on the computer using OGRE technology. The physical effects of the IRIS are classified as the signal response characteristic, the modulation transfer characteristic and the noise characteristic, and they are simulated in real time on a single-board signal processing platform based on the core processor FPGA, using a high-speed parallel computation method.
Singularity free N-body simulations called 'Dynamic Universe Model' don't require dark matter
NASA Astrophysics Data System (ADS)
Naga Parameswara Gupta, Satyavarapu
For finding the trajectories of the Pioneer satellites (the Pioneer anomaly) and the New Horizons satellite en route to Pluto, the calculations of the Dynamic Universe Model can be successfully applied. No dark matter is assumed within the solar system radius. The effect on the masses around the Sun appears as though there were an extra gravitational pull toward the Sun. The model solves the dynamics of extra-solar planets such as Planet X and of satellites such as Pioneer and New Horizons, giving 3-position, 3-velocity and 3-acceleration for their masses, considering the complex situation of multiple planets, stars, galaxy parts, the Galaxy center and other galaxies, using simple Newtonian physics. It has already solved the missing-mass problem in galaxies observed via galaxy circular-velocity curves. Singularity-free Newtonian N-body simulations: historically, King Oscar II of Sweden announced a prize for a solution of the N-body problem, with advice given by Gösta Mittag-Leffler in 1887. He announced: 'Given a system of arbitrarily many mass points that attract each according to Newton's law, under the assumption that no two points ever collide, try to find a representation of the coordinates of each point as a series in a variable that is some known function of time and for all of whose values the series converges uniformly.' [This is taken from Wikipedia.] The announced deadline was 1 June 1888. After that deadline, on 21 January 1889, the great mathematician Poincaré claimed the prize. Later he himself sent a telegram to the journal Acta Mathematica to stop printing the special issue after finding an error in his solution; for such a man of science, reputation mattered more than money. [Ref.: 'Celestial Mechanics: The Waltz of the Planets' by Alessandra Celletti and Ettore Perozzi, page 27.] He realized that he had been wrong in his general stability result. To this day nobody has solved that problem or claimed that prize; later solutions given by many people all resulted in singularities and collisions of masses. Now I can say that the Dynamic Universe Model solves this classical N-body problem using only the Newtonian gravitation law and classical physics. The solution converges at all points. There are no multiple values, diverging solutions or divide-by-zero singularities. Collisions of masses depend on the physical values of the masses and their space distribution only; they do not happen due to internal inherent problems of the Dynamic Universe Model. If the mass distribution is homogeneous and isotropic, the masses will collide. If the mass distribution is heterogeneous and anisotropic, they do not collide. This approach solves many problems which otherwise cannot be solved by general relativity, the steady-state universe model, etc.
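As a generic illustration of Newtonian N-body integration of the kind discussed above, a minimal velocity-Verlet sketch in Python. The Plummer softening length eps is our own numerical assumption for keeping close encounters finite; it is not a feature claimed by the Dynamic Universe Model, whose actual formulation is not reproduced here.

    import numpy as np

    def accelerations(pos, mass, G=6.674e-11, eps=1e7):
        """Pairwise Newtonian gravity; eps (meters) is an assumed Plummer
        softening so that close approaches never divide by zero."""
        d = pos[None, :, :] - pos[:, None, :]          # d[i, j] = r_j - r_i
        r2 = (d ** 2).sum(-1) + eps ** 2
        np.fill_diagonal(r2, np.inf)                   # no self-interaction
        return (G * mass[None, :, None] * d / r2[:, :, None] ** 1.5).sum(axis=1)

    def verlet_step(pos, vel, mass, dt):
        """One velocity-Verlet step (time-symmetric, good energy behavior)."""
        a0 = accelerations(pos, mass)
        pos = pos + vel * dt + 0.5 * a0 * dt ** 2
        a1 = accelerations(pos, mass)
        return pos, vel + 0.5 * (a0 + a1) * dt

    rng = np.random.default_rng(3)
    n = 100
    pos = rng.normal(0.0, 1e11, (n, 3))                # meters
    vel = rng.normal(0.0, 1e3, (n, 3))                 # m/s
    mass = np.full(n, 2e30)                            # kg (solar-mass scale)
    for _ in range(1000):
        pos, vel = verlet_step(pos, vel, mass, dt=1e4) # dt in seconds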
NASA Astrophysics Data System (ADS)
Messina, Luca; Castin, Nicolas; Domain, Christophe; Olsson, Pär
2017-02-01
The quality of kinetic Monte Carlo (KMC) simulations of microstructure evolution in alloys relies on the parametrization of point-defect migration rates, which are complex functions of the local chemical composition and can be calculated accurately with ab initio methods. However, constructing reliable models that ensure the best possible transfer of physical information from ab initio to KMC is a challenging task. This work presents an innovative approach, where the transition rates are predicted by artificial neural networks trained on a database of 2000 migration barriers, obtained with density functional theory (DFT) in place of interatomic potentials. The method is tested on copper precipitation in thermally aged iron alloys, by means of a hybrid atomistic-object KMC model. For the object part of the model, the stability and mobility properties of copper-vacancy clusters are analyzed by means of independent atomistic KMC simulations, driven by the same neural networks. The cluster diffusion coefficients and mean free paths are found to increase with size, confirming the dominant role of coarsening of medium- and large-sized clusters in the precipitation kinetics. The evolution under thermal aging is in better agreement with experiments than a previous interatomic-potential model, especially concerning the experimental time scales. However, the model underestimates the solubility of copper in iron due to the excessively high solution energy predicted by the chosen DFT method. Nevertheless, this work proves the capability of neural networks to transfer complex ab initio physical properties to higher-scale models, and facilitates the extension to systems with increasing chemical complexity, laying the ground for reliable microstructure evolution simulations in a wide range of alloys and applications.
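The core coupling, a learned barrier feeding an Arrhenius jump rate, can be sketched as follows. The descriptor, training data, network size and attempt frequency are placeholders of ours, not the paper's 2000-barrier DFT database or fitted model.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    KB_EV = 8.617e-5                                   # Boltzmann constant, eV/K

    # Placeholder training set: binary occupation vectors encoding the local
    # chemical environment -> migration barriers in eV (synthetic numbers).
    rng = np.random.default_rng(4)
    X = rng.integers(0, 2, size=(2000, 20)).astype(float)
    y = 0.6 + 0.3 * X[:, :5].mean(axis=1) - 0.2 * X[:, 5:10].mean(axis=1)

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000).fit(X, y)

    def jump_rate(env, T=563.0, nu0=6.0e12):
        """Arrhenius rate driving the KMC: rate = nu0 * exp(-E_m / (kB*T)),
        with the barrier E_m predicted by the trained network."""
        barrier = float(model.predict(env.reshape(1, -1))[0])
        return nu0 * np.exp(-barrier / (KB_EV * T))

    env = rng.integers(0, 2, size=20).astype(float)
    print(f"predicted jump rate at 563 K: {jump_rate(env):.3e} s^-1")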
May, Christian P; Kolokotroni, Eleni; Stamatakos, Georgios S; Büchler, Philippe
2011-10-01
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Lawston, Patricia M.; Santanello, Joseph A.; Rodell, Matthew; Franz, Trenton E.
2017-01-01
Irrigation increases soil moisture, which in turn controls water and energy fluxes from the land surface to the planetary boundary layer and determines plant stress and productivity. Therefore, developing a realistic representation of irrigation is critical to understanding land-atmosphere interactions in agricultural areas. Irrigation parameterizations are becoming more common in land surface models and are growing in sophistication, but there is difficulty in assessing the realism of these schemes, due to limited observations (e.g., soil moisture, evapotranspiration) and scant reporting of irrigation timing and quantity. This study uses the Noah land surface model run at high resolution within NASA's Land Information System to assess the physics of a sprinkler irrigation simulation scheme and model sensitivity to the choice of irrigation intensity and greenness fraction datasets over a small, high-resolution domain in Nebraska. Differences between experiments are small at the interannual scale but become more apparent at seasonal and daily time scales. In addition, this study uses point and gridded soil moisture observations from fixed and roving Cosmic Ray Neutron Probes and co-located human practice data to evaluate the realism of irrigation amounts and soil moisture impacts simulated by the model. Results show that field-scale heterogeneity resulting from the individual actions of farmers is not captured by the model and that the amount of irrigation applied by the model exceeds that applied at the two irrigated fields. However, the seasonal timing of irrigation and the soil moisture contrasts between irrigated and non-irrigated areas are simulated well by the model. Overall, the results underscore the necessity of both high-quality meteorological forcing data and proper representation of irrigation for accurate simulation of water and energy states and fluxes over cropland.
NASA Astrophysics Data System (ADS)
Abbiati, Giuseppe; La Salandra, Vincenzo; Bursi, Oreste S.; Caracoglia, Luca
2018-02-01
Successful online hybrid (numerical/physical) dynamic substructuring simulations have shown their potential in enabling realistic dynamic analysis of almost any type of non-linear structural system (e.g., an as-built/isolated viaduct, a petrochemical piping system subjected to non-stationary seismic loading, etc.). Moreover, owing to faster and more accurate testing equipment, a number of different offline experimental substructuring methods, operating both in the time domain (e.g. impulse-based substructuring) and the frequency domain (i.e. Lagrange multiplier frequency-based substructuring), have been employed in mechanical engineering to examine dynamic substructure coupling. Numerous studies have dealt with the above-mentioned methods and with the consequent uncertainty propagation issues, associated either with experimental errors or with modelling assumptions. Nonetheless, a limited number of publications have systematically cross-examined the performance of the various Experimental Dynamic Substructuring (EDS) methods and the possibility of their exploitation in a complementary way to expedite a hybrid experiment/numerical simulation. From this perspective, this paper performs a comparative uncertainty propagation analysis of three EDS algorithms for coupling physical and numerical subdomains with a dual assembly approach based on localized Lagrange multipliers. The main results and comparisons are based on a series of Monte Carlo simulations carried out on a five-DoF linear/non-linear chain-like system that includes typical aleatoric uncertainties emerging from measurement errors and excitation loads. In addition, we propose a new Composite-EDS (C-EDS) method to fuse both online and offline algorithms into a unique simulator. Capitalizing on the results of a more complex case study composed of a coupled isolated tank-piping system, we provide a feasible way to employ the C-EDS method when nonlinearities and multi-point constraints are present in the emulated system.
Investigating the Cosmic Web with Topological Data Analysis
NASA Astrophysics Data System (ADS)
Cisewski-Kehe, Jessi; Wu, Mike; Fasy, Brittany; Hellwing, Wojciech; Lovell, Mark; Rinaldo, Alessandro; Wasserman, Larry
2018-01-01
Data exhibiting complicated spatial structures are common in many areas of science (e.g. cosmology, biology), but can be difficult to analyze. Persistent homology is a popular approach within the area of Topological Data Analysis (TDA) that offers a new way to represent, visualize, and interpret complex data by extracting topological features, which can be used to infer properties of the underlying structures. In particular, TDA may be useful for analyzing the large-scale structure (LSS) of the Universe, which is an intricate and spatially complex web of matter. In order to understand the physics of the Universe, theoretical and computational cosmologists develop large-scale simulations that allow for visualizing and analyzing the LSS under varying physical assumptions. Each point in the 3D data set represents a galaxy or a cluster of galaxies, and topological summaries ("persistence diagrams") can be obtained summarizing the different ordered holes in the data (e.g. connected components, loops, voids). The topological summaries are interesting and informative descriptors of the Universe on their own, but hypothesis tests using the topological summaries would provide a way to make more rigorous comparisons of LSS under different theoretical models. For example, the currently accepted cosmological model includes cold dark matter (CDM); however, while the case for CDM is strong, there are some observational inconsistencies with this theory. Another possibility is warm dark matter (WDM). It is of interest to see if a CDM Universe and a WDM Universe produce LSS that is topologically distinct. We present several possible test statistics for two-sample hypothesis tests using the topological summaries, carry out a simulation study to investigate the suitability of the proposed test statistics using simulated data from a variation of the Voronoi foam model, and finally apply the proposed inference framework to WDM vs. CDM cosmological simulation data.
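A minimal example of extracting the topological summaries with the ripser.py package (our choice of tool, not necessarily the authors'); the point clouds are uniform-random stand-ins for the simulated universes, and total persistence is just one candidate scalar statistic.

    import numpy as np
    from ripser import ripser          # pip install ripser

    rng = np.random.default_rng(5)
    cloud_a = rng.random((200, 3))     # stand-in for one simulated universe
    cloud_b = rng.random((200, 3))     # stand-in for the other

    def persistence_diagrams(points):
        """H0 (connected components), H1 (loops) and H2 (voids) diagrams --
        the ordered holes described above."""
        return ripser(points, maxdim=2)["dgms"]

    def total_persistence(diagram):
        """Candidate test statistic: summed lifetime of all finite features."""
        finite = diagram[np.isfinite(diagram[:, 1])]
        return float((finite[:, 1] - finite[:, 0]).sum())

    for name, cloud in (("A", cloud_a), ("B", cloud_b)):
        stats = [round(total_persistence(d), 3) for d in persistence_diagrams(cloud)]
        print(name, "total persistence (H0, H1, H2):", stats)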
Numerical Modeling of Thermal Edge Flow
NASA Astrophysics Data System (ADS)
Ibrayeva, Aizhan
A gas flow can be induced between two interdigitated arrays of thin vanes when one of the arrays is uniformly heated or cooled. Sharply curved isotherms near the vane edges lead to a momentum imbalance among incident particles, which creates a Knudsen force on the vanes and a thermal edge flow in the gas. The flow is observed in a rarefied gas, when the mean free path of the molecules is comparable with the characteristic length scale of the system. In order to understand the physical mechanism of the flow and the Knudsen force, the configuration was numerically investigated under different gas rarefaction degrees and temperature gradients in the system by the direct simulation Monte Carlo (DSMC) method. From the simulations, the highest force value is obtained when the Knudsen number is around 0.5, and the force becomes negligible in the free molecular and continuum regimes. DSMC results are analyzed from the theoretical point of view and compared to experimental data. Validation of the simulations is done with the RKDG method. The effect of various geometric parameters on the performance of the actuator was investigated and suggestions were made for an improved design of the device.
Experimental Verification of Bayesian Planet Detection Algorithms with a Shaped Pupil Coronagraph
NASA Astrophysics Data System (ADS)
Savransky, D.; Groff, T. D.; Kasdin, N. J.
2010-10-01
We evaluate the feasibility of applying Bayesian detection techniques to discovering exoplanets using high contrast laboratory data with simulated planetary signals. Background images are generated at the Princeton High Contrast Imaging Lab (HCIL), with a coronagraphic system utilizing a shaped pupil and two deformable mirrors (DMs) in series. Estimates of the electric field at the science camera are used to correct for quasi-static speckle and produce symmetric high contrast dark regions in the image plane. Planetary signals are added in software, or via a physical star-planet simulator which adds a second off-axis point source before the coronagraph with a beam recombiner, calibrated to a fixed contrast level relative to the source. We produce a variety of images, with varying integration times and simulated planetary brightness. We then apply automated detection algorithms such as matched filtering to attempt to extract the planetary signals. This allows us to evaluate the efficiency of these techniques in detecting planets in a high noise regime and eliminating false positives, as well as to test existing algorithms for calculating the required integration times for these techniques to be applicable.
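The matched-filtering step can be illustrated with a small numpy/scipy sketch on synthetic residuals; the Gaussian PSF, injected contrast, and noise model are assumptions for demonstration, not the HCIL pipeline itself.

    import numpy as np
    from scipy.signal import fftconvolve

    def matched_filter_snr(image, psf):
        """Cross-correlate the image with the expected planet PSF; with white
        noise of unit variance and a unit-norm template, the output is an
        approximate per-pixel detection SNR."""
        template = psf / np.sqrt((psf ** 2).sum())          # unit-norm template
        return fftconvolve(image, template[::-1, ::-1], mode="same") / image.std()

    rng = np.random.default_rng(0)
    image = rng.normal(0.0, 1.0, (256, 256))                # residual speckle noise
    yy, xx = np.mgrid[-3:4, -3:4]
    psf = np.exp(-(xx ** 2 + yy ** 2) / 2.0)                # Gaussian stand-in PSF
    image[100:107, 80:87] += 5.0 * psf                      # injected planet signal
    snr = matched_filter_snr(image, psf)
    print("peak SNR:", round(float(snr.max()), 1),
          "at pixel", np.unravel_index(int(snr.argmax()), snr.shape))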
Numerical simulation of temperature field in K9 glass irradiated by ultraviolet pulse laser
NASA Astrophysics Data System (ADS)
Wang, Xi; Fang, Xiaodong
2015-10-01
Optical components in photoelectric systems are easily damaged by the irradiation of high-power pulsed lasers, so the effect of high-power pulsed laser irradiation on K9 glass was studied. A thermodynamic model of K9 glass irradiated by an ultraviolet pulsed laser was established using the finite element software ANSYS. The article analyzes some key problems in the ANSYS simulation of ultraviolet pulsed laser damage to K9 glass: building the finite element model, meshing, applying the pulsed laser load, setting the initial and boundary conditions, and setting the thermophysical parameters of the material. The finite element method (FEM) model was established and a numerical analysis was performed to calculate the temperature field in K9 glass irradiated by an ultraviolet pulsed laser. The simulation results showed that the temperature of the irradiated area exceeded the melting point of K9 glass even when the incident laser energy was low. Thermal damage dominated the damage mechanism of K9 glass, with melting the most distinct phenomenon.
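A stripped-down explicit finite-difference version of such a thermal model (pure conduction with a Gaussian pulsed source and periodic boundaries for brevity); all material values, the beam parameters and the melting threshold are assumed for illustration, not taken from the paper.

    import numpy as np

    # Assumed K9-like material properties and threshold (illustrative only):
    k, rho, cp = 1.1, 2500.0, 850.0            # W/(m K), kg/m^3, J/(kg K)
    alpha = k / (rho * cp)                     # thermal diffusivity, m^2/s
    T_MELT = 1000.0                            # assumed melting threshold, K

    n, L = 101, 2e-3                           # grid points per side, domain (m)
    dx = L / (n - 1)
    dt = min(0.2 * dx ** 2 / alpha, 1e-9)      # explicit stability + ns resolution
    T = np.full((n, n), 300.0)                 # initial temperature field, K

    ax = np.linspace(-L / 2, L / 2, n)
    xx, yy = np.meshgrid(ax, ax)
    beam = np.exp(-(xx ** 2 + yy ** 2) / (0.2e-3) ** 2)   # Gaussian spot profile

    I0, pulse, t = 2e17, 10e-9, 0.0            # absorbed power density (W/m^3), 10 ns pulse
    while t < 50e-9:
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx ** 2
        source = I0 * beam if t < pulse else 0.0
        T = T + dt * (alpha * lap + source / (rho * cp))
        t += dt
    print(f"peak temperature {T.max():.0f} K; above melt threshold: {T.max() > T_MELT}")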
The Gravity Wave Response Above Deep Convection in a Squall Line Simulation
NASA Technical Reports Server (NTRS)
Alexander, M. J.; Holton, J. R.; Durran, D. R.
1995-01-01
High-frequency gravity waves generated by convective storms likely play an important role in the general circulation of the middle atmosphere. Yet little is known about waves from this source. This work utilizes a fully compressible, nonlinear, numerical, two-dimensional simulation of a midlatitude squall line to study vertically propagating waves generated by deep convection. The model includes a deep stratosphere layer with high enough resolution to characterize the wave motions at these altitudes. A spectral analysis of the stratospheric waves provides an understanding of the necessary characteristics of the spectrum for future studies of their effects on the middle atmosphere in realistic mean wind scenarios. The wave spectrum also displays specific characteristics that point to the physical mechanisms within the storm responsible for their forcing. Understanding these forcing mechanisms and the properties of the storm and atmosphere that control them are crucial first steps toward developing a parameterization of waves from this source. The simulation also provides a description of some observable signatures of convectively generated waves, which may promote observational verification of these results and help tie any such observations to their convective source.
Modeling Vortex Generators in a Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2011-01-01
A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
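The source-term idea reduces to computing a vane lift force from the three user inputs; a hedged sketch, with a thin-airfoil lift slope standing in for whatever calibrated lift response the actual Wind-US model uses.

    import numpy as np

    def vg_source_force(rho, velocity, planform_area, alpha_deg, cl_alpha=2 * np.pi):
        """Lift force for one vane-type vortex generator, to be distributed over
        the user-specified range of grid points as a momentum source term.
        The thin-airfoil lift slope (2*pi per radian) is our assumption."""
        q = 0.5 * rho * float(np.dot(velocity, velocity))   # dynamic pressure
        lift = q * planform_area * cl_alpha * np.radians(alpha_deg)
        u_hat = velocity / np.linalg.norm(velocity)
        lift_dir = np.array([-u_hat[1], u_hat[0], 0.0])     # normal to the local flow
        return lift * lift_dir

    # Example: one vane in a duct flow; dividing by cell volume would give the
    # per-cell source added to the momentum equations.
    F = vg_source_force(rho=1.2, velocity=np.array([100.0, 0.0, 0.0]),
                        planform_area=2e-4, alpha_deg=16.0)
    print("vane lift force [N]:", F.round(3))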
The plastic response of Tantalum in Quasi-Isentropic Compression Ramp and Release
NASA Astrophysics Data System (ADS)
Moore, Alexander; Brown, Justin; Lim, Hojun; Lane, J. Matthew D.
2017-06-01
The mechanical response of various forms of tantalum under extreme pressures and strain rates is studied using dynamic quasi-isentropic compression loading conditions in atomistic simulations. Ramp compression in bcc metals under these conditions tends to show a significant strengthening effect with increasing pressure; however, due to the limitations of experimental methods in such regimes, the underlying physics of this phenomenon is not well understood. Molecular dynamics simulations provide important information about the plasticity mechanisms and can be used to investigate this strengthening. MD simulations are performed on nanocrystalline Ta and on defective single-crystal Ta with dislocations and point defects to uncover how the material responds and what the underlying plasticity mechanisms are. The different systems of solid Ta are seen to plastically deform through different mechanisms. A fundamental understanding of tantalum plasticity in these high pressure and strain rate regimes is needed to model and fully understand experimental results. Sandia National Labs is a multi-program laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Modeling of a microchannel plate working in pulsed mode
NASA Astrophysics Data System (ADS)
Secroun, Aurelia; Mens, Alain; Segre, Jacques; Assous, Franck; Piault, Emmanuel; Rebuffie, Jean-Claude
1997-05-01
MicroChannel Plates (MCPs) are used in high speed cinematography systems such as MCP framing cameras and streak camera readouts. In order to know the dynamic range or the signal-to-noise ratio available in these devices, a good knowledge of the performance of the MCP is essential. The point of interest of our simulation is the working mode of the microchannel plate--that is, pulsed light mode--in which the signal level is relatively high and its duration can be shorter than the time needed to replenish the channel wall, whereas other papers have mainly studied night-vision applications with weak, continuous, nearly single-electron input signals. Our method also allows the simulation of saturation phenomena due to the large number of electrons involved, whereas the discrete models previously used for simulating pulsed mode might not be properly adapted. We present the choices made in modeling the microchannel, specifically the physics laws, the secondary-emission parameters and the 3D geometry. In the last part, first results are shown.
Evolution simulation of lightning discharge based on a magnetohydrodynamics method
NASA Astrophysics Data System (ADS)
Fusheng, WANG; Xiangteng, MA; Han, CHEN; Yao, ZHANG
2018-07-01
In order to address the load problem for aircraft lightning strikes, the evolution of the lightning channel is simulated using the key physical parameters of aircraft lightning current component C. A numerical model of the discharge channel is established based on magnetohydrodynamics (MHD) and implemented in the FLUENT software. With the aid of user-defined functions and a user-defined scalar, the Lorentz force, Joule heating and material parameters of an air thermal plasma are added. A three-dimensional lightning arc channel is simulated and the arc evolution in space is obtained. The results show that the temperature distribution of the lightning channel is symmetrical and that the hottest region occurs at the center of the lightning channel. The distributions of potential and current density are obtained, showing that the difference in electric potential or energy between two points tends to make the arc channel develop downwards. The arc channel expands on the anode surface due to stagnation of the thermal plasma, and there is impingement on the copper plate when the arc channel comes into contact with the anode plate.
Magnetic reconnection process in transient coaxial helicity injection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebrahimi, F.; Hooper, E. B.; Sovinec, C. R.
The physics of magnetic reconnection and fast flux closure in transient coaxial helicity injection experiments in NSTX is examined using resistive MHD simulations. These simulations have been performed using the NIMROD code with fixed boundary flux (including NSTX poloidal coil currents) in the NSTX experimental geometry. Simulations show that an X point is formed in the injector region, followed by formation of closed flux surfaces within 0.5 ms after the driven injector voltage and injector current begin to rapidly decrease. As the injector voltage is turned off, the field lines tend to untwist in the toroidal direction, and magnetic field compression exerts a radial J × B force and generates a bi-directional radial E_toroidal × B_poloidal pinch flow to bring oppositely directed field lines closer together to reconnect. At sufficiently low magnetic diffusivity (high Lundquist number), and with a sufficiently narrow injector flux footprint width, the oppositely directed field lines have sufficient time to reconnect (before dissipating), leading to the formation of closed flux surfaces. The reconnection process is shown to have transient Sweet-Parker characteristics.
NASA Astrophysics Data System (ADS)
Chen, Goong; Wang, Yi-Ching; Perronnet, Alain; Gu, Cong; Yao, Pengfei; Bin-Mohsin, Bandar; Hajaiej, Hichem; Scully, Marlan O.
2017-03-01
Computational mathematics, physics and engineering form a major constituent of modern computational science, which now stands on an equal footing with the established branches of theoretical and experimental sciences. Computational mechanics solves problems in science and engineering based upon mathematical modeling and computing, bypassing the need for expensive and time-consuming laboratory setups and experimental measurements. Furthermore, it allows the numerical simulation of large-scale systems, such as the formation of galaxies, that could not be done in any earthbound laboratory. This article is written as part of the 21st Century Frontiers Series to illustrate some state-of-the-art computational science. We emphasize how to do numerical modeling and visualization in the study of a contemporary event, the pulverizing crash of the Germanwings Flight 9525 on March 24, 2015, as a showcase. Such numerical modeling and the ensuing simulation of aircraft crashes into land or mountain are complex tasks as they involve both theoretical study and supercomputing of a complex physical system. The most tragic type of crash involves ‘pulverization’ such as the one suffered by this Germanwings flight. Here, we visualize pulverizing airliner crashes through video animations produced by supercomputer applications of the numerical modeling tool LS-DYNA. A sound validation process is challenging but essential for any sophisticated calculation. We achieve this by validating against the experimental data from a crash test done in 1993 of an F4 Phantom II fighter jet into a wall. We have developed a method by hybridizing two primary methods: finite element analysis and smoothed particle hydrodynamics. This hybrid method also enhances visualization by showing a ‘debris cloud’. Based on our supercomputer simulations and the visualization, we point out that prior works on this topic based on ‘hollow interior’ modeling can be quite problematic and, thus, not likely to be correct. We discuss the effects of terrain on pulverization using the information from the recovered flight data recorder and show our forensics and assessments of what may have happened during the final moments of the crash. Finally, we point out that our study has the potential to be made into real-time flight crash simulators to help the study of crashworthiness and survivability for future aviation safety. Some forward-looking statements are also made.
NASA Astrophysics Data System (ADS)
Rodgers, A. J.; Pitarka, A.; Wagoner, J. L.; Helmberger, D. V.
2017-12-01
The FLASK underground nuclear explosion (UNE) was conducted in Area 2 of Yucca Flat at the Nevada Test Site on May 26, 1970. The yield was 105 kilotons (DOE/NV-209-Rev 16) and the working point was 529 m below the surface. This test was detonated in faulted Tertiary volcanic rocks of Yucca Flat. Coincidentally, the FLASK UNE ground zero (GZ) is close (< 600 m) to the U2ez hole where the Source Physics Experiment will be conducting Phase II of its chemical high explosives test series at the so-called Dry Alluvium Geology (DAG) site. Ground motions from FLASK were recorded by twelve (12) three-component seismic stations in the near field at ranges of 3-4 km. We digitized the paper records and used available metadata on peak particle velocity measurements made at the time to adjust the amplitudes. These waveforms show great variability in amplitude and waveform complexity with azimuth from the shot, likely due to structure along the propagation path, such as the geometry of the hard-rock/alluvium contact above the working point. Peak particle velocities at stations in the deeper alluvium to the north, east and south of GZ have larger amplitudes than those to the west, where the basement rock is much shallower. Interestingly, the transverse components show a similar trend with azimuth. In fact, the transverse component amplitudes are similar to the other components for many stations overlying deeper basement. In this study, we simulated the seismic response at the available near-field stations using the SW4 three-dimensional (3D) finite difference code. SW4 can simulate seismic wave propagation in 3D inelastic earth structure, including surface topography. SW4 includes vertical mesh refinement, which greatly reduces the computational resources needed to run a specific problem. Simulations are performed on high-performance computers with grid spacing as small as 10 meters and resolution to 6 Hz. We are testing various subsurface models to identify the role of 3D structure in path propagation effects from the source. We are also testing 3D models to constrain structure for the upcoming DAG experiments in 2018.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tregillis, I. L.
The Los Alamos Physics and Engineering Models (PEM) program has developed a model for Richtmyer-Meshkov instability (RMI) based ejecta production from shock-melted surfaces, along with a prescription for a self-similar velocity distribution (SSVD) of the resulting ejecta particles. We have undertaken an effort to validate this source model using data from explosively driven tin coupon experiments. The model's current formulation lacks a crucial piece of physics: a method for determining the duration of the ejecta production interval. Without a mechanism for terminating ejecta production, the model is not predictive. Furthermore, when the production interval is hand-tuned to match time-integrated mass data, the predicted time-dependent mass accumulation on a downstream sensor rises too sharply at early times and too slowly at late times, because the SSVD overestimates the amount of mass stored in the fastest particles and underestimates the mass stored in the slowest particles. The functional form of the resulting m(t) is inconsistent with the available time-dependent data; numerical simulations and analytic studies agree on this point. Simulated mass tallies are highly sensitive to radial expansion of the ejecta cloud. It is not clear if the same effect is present in the experimental data, but if so, depending on the degree, this may challenge the model's compatibility with tin coupon data. The current implementation of the model in FLAG is sensitive to the detailed interaction between kinematics (hydrodynamic methods) and thermodynamics (material models); this sensitivity prohibits certain physics modeling choices. The appendices contain an extensive analytic study of piezoelectric ejecta mass measurements, along with test problems, excerpted from a longer work (LA-UR-17-21218).
Expansion of transient operating data
NASA Astrophysics Data System (ADS)
Chipman, Christopher; Avitabile, Peter
2012-08-01
Real-time operating data are very important for understanding actual system response. Unfortunately, the number of physical data points typically collected is very small, and interpretation of the data is often difficult. Expansion techniques have been developed that use traditional experimental modal data to augment this limited data set. This expansion process allows for a much improved description of the real-time operating response. This paper presents results from several different structures to show the robustness of the technique. Comparisons are made to a more complete set of measured data to validate the approach. Both analytical simulations and actual experimental data are used to illustrate the usefulness of the technique.
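A common form of such expansion projects the sparse measurements onto analytical mode shapes and reconstructs the response at all degrees of freedom (a SEREP-style projection). The sketch below illustrates only the projection algebra on random placeholder matrices; it is not the authors' implementation or data:

```python
# Minimal sketch of modal expansion of sparse operating data:
# x_full ≈ Phi_full @ pinv(Phi_meas) @ x_meas. All matrices are random
# placeholders, not the structures or mode shapes from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_full, n_meas, n_modes = 100, 6, 4

Phi_full = rng.standard_normal((n_full, n_modes))    # analytical mode shapes
meas_dofs = rng.choice(n_full, size=n_meas, replace=False)
Phi_meas = Phi_full[meas_dofs, :]                    # rows at measured DOFs

# Synthetic "true" operating response living in the modal subspace:
q_true = rng.standard_normal(n_modes)
x_full_true = Phi_full @ q_true
x_meas = x_full_true[meas_dofs]

# Expansion: least-squares modal coordinates from the sparse measurements,
# then reconstruction at all DOFs.
q_hat = np.linalg.pinv(Phi_meas) @ x_meas
x_full_expanded = Phi_full @ q_hat

print("max reconstruction error:", np.abs(x_full_expanded - x_full_true).max())
```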
Atmospheric scattering of middle uv radiation from an internal source.
Meier, R R; Lee, J S; Anderson, D E
1978-10-15
A Monte Carlo model has been developed that simulates the multiple scattering of middle-uv radiation in the lower atmosphere. The source of radiation is assumed to be monochromatic and located at a point. The physical effects taken into account in the model are Rayleigh and Mie scattering, pure absorption by particulates and trace atmospheric gases, and ground albedo. The model output consists of the multiply scattered radiance as a function of the look angle of a detector located within the atmosphere. Several examples are discussed, and comparisons are made with the direct-source and single-scattered contributions to the signal received by the detector.
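The random-walk core of such a model can be sketched in a few lines. The version below uses isotropic scattering in a homogeneous slab and omits the Mie phase function, trace-gas absorption profiles, and ground albedo that the paper includes; all coefficients are assumed:

```python
# Minimal Monte Carlo sketch: multiple scattering of photons from a point
# source in a homogeneous slab, with isotropic scattering and absorption.
# Coefficients and geometry are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
sigma_s, sigma_a = 0.10, 0.02        # scattering/absorption coefficients [1/km] (assumed)
sigma_t = sigma_s + sigma_a
omega0 = sigma_s / sigma_t           # single-scattering albedo
top = 10.0                           # slab top altitude [km] (assumed)

def random_direction():
    """Isotropic unit vector (stand-in for Rayleigh/Mie phase functions)."""
    mu = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    s = np.sqrt(1.0 - mu * mu)
    return np.array([s * np.cos(phi), s * np.sin(phi), mu])

orders = []                          # scattering order of each escaping photon
for _ in range(20_000):
    pos = np.array([0.0, 0.0, 5.0])  # point source in mid-slab
    direction = random_direction()
    n_scat = 0
    while True:
        pos = pos + direction * rng.exponential(1.0 / sigma_t)
        if pos[2] < 0.0 or pos[2] > top:   # escaped the slab (albedo ignored)
            orders.append(n_scat)
            break
        if rng.random() > omega0:          # absorbed
            break
        direction = random_direction()     # scattered; continue the walk
        n_scat += 1

print("mean scattering order of escaping photons:", np.mean(orders))
```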
Models for Type Ia Supernovae and Related Astrophysical Transients
NASA Astrophysics Data System (ADS)
Röpke, Friedrich K.; Sim, Stuart A.
2018-06-01
We give an overview of recent efforts to model Type Ia supernovae and related astrophysical transients resulting from thermonuclear explosions in white dwarfs. In particular, we point out the challenges arising from the multi-physics, multi-scale nature of the problem and discuss possible numerical approaches for meeting them in hydrodynamical explosion simulations and radiative transfer modeling. We give examples of how these methods are applied to several explosion scenarios that have been proposed to explain distinct subsets or, in some cases, the majority of the observed events. In each case we comment on some of the successes and shortcomings of these scenarios and highlight important outstanding issues.
Statistical crystallography of surface micelle spacing
NASA Technical Reports Server (NTRS)
Noever, David A.
1992-01-01
The aggregation of the recently reported surface micelles of block polyelectrolytes is analyzed using techniques of statistical crystallography. A polygonal lattice (Voronoi mosaic) connects center-to-center points, yielding statistical agreement with crystallographic predictions; Aboav-Weaire's law and Lewis's law are verified. This protocol supplements the standard analysis of surface micelles leading to aggregation number determination and, when compared to numerical simulations, allows further insight into the random partitioning of surface films. In particular, agreement with Lewis's law has been linked to the geometric packing requirements of filling two-dimensional space which compete with (or balance) physical forces such as interfacial tension, electrostatic repulsion, and van der Waals attraction.
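A minimal version of this kind of crystallographic check can be run on synthetic points: build the Voronoi mosaic of the centers and test whether mean cell area grows roughly linearly with side number, as Lewis's law predicts. The points below are random placeholders, not micelle coordinates from the study:

```python
# Minimal sketch: Voronoi mosaic of synthetic "micelle centers" and a check
# of Lewis's law (mean cell area roughly linear in the number of sides).
from collections import defaultdict

import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, size=(2000, 2))   # placeholder micelle centers
vor = Voronoi(pts)

def polygon_area(vertices):
    """Shoelace formula for a 2D polygon given ordered vertices."""
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

areas_by_sides = defaultdict(list)
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if len(region) == 0 or -1 in region:      # skip unbounded boundary cells
        continue
    areas_by_sides[len(region)].append(polygon_area(vor.vertices[region]))

for n in sorted(areas_by_sides):              # Lewis's law: mean area ~ a + b*n
    print(n, np.mean(areas_by_sides[n]))
```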
A survey of simulators for palpation training.
Zhang, Yan; Phillips, Roger; Ward, James; Pisharody, Sandhya
2009-01-01
Palpation is a widely used diagnostic method in medical practice. The sensitivity of palpation is highly dependent upon the skill of clinicians, which is often difficult to master. There is a need for simulators in palpation training. This paper summarizes important work and the latest achievements in simulation for palpation training. Three types of simulators are surveyed: physical models, virtual reality (VR) based simulations, and hybrid (computerized and physical) simulators. Comparisons among the different kinds of simulators are presented.
"Physically-based" numerical experiment to determine the dominant hillslope processes during floods?
NASA Astrophysics Data System (ADS)
Gaume, Eric; Esclaffer, Thomas; Dangla, Patrick; Payrastre, Olivier
2016-04-01
To study the dynamics of hillslope response during flood events, a fully coupled "physically-based" model for the combined numerical simulation of surface runoff and underground flow has been developed. Particular attention was given to selecting appropriate numerical schemes for modelling both processes and their coupling. Surprisingly, the most difficult numerical question was not the coupling of two processes with contrasting kinetics, such as surface and underground flows, but the high-gradient infiltration fronts appearing in soils, a source of numerical diffusion, instability, and sometimes divergence. Once elaborated, the model was successfully tested against high-quality experiments conducted on a laboratory sandy slope in the early eighties, which is still considered a reference hillslope experimental setting (Abdul & Guilham). The model accurately simulated the pore pressure distributions observed in this 1.5-meter-deep and -wide laboratory hillslope, as well as the shapes of its outflow hydrographs and the measured respective contributions of direct runoff and groundwater to those hydrographs. Building on this success, the same model was used to simulate the response of a theoretical 100-meter-wide, 10%-sloped hillslope with a 2-meter-deep pervious soil over impervious bedrock. Three rain events were tested: a 100-millimeter rainfall over 10 days, over 1 day, or over one hour. The simulated responses are not hydrologically realistic; in particular, the fast response component that is generally observed in the real world, and that explains flood events, is almost absent from the simulated response. On reflection, this result is entirely logical given the proposed model: the simulated response, in fact a recession hydrograph, corresponds to piston flow through a relatively uniformly saturated hillslope, yielding a near-constant discharge over several days. Some ingredients are clearly missing from the proposed model if it is to reproduce hydrologically sensible responses. Heterogeneities are needed to generate a variety of residence times, and preferential flows in particular must be present to generate the fast component of hillslope responses. The importance of preferential flows in hillslope hydrology has since been confirmed by several hillslope field experiments. We let readers draw their own conclusions about the numerous numerical models that closely resemble the one proposed here, though generally much more simplified, yet represent watersheds as far too homogeneous, neglecting heterogeneities and preferential flows, while claiming to be "physically based"…
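For contrast with the fully coupled model above, a sharp infiltration front can also be caricatured with the classical Green-Ampt approximation, which sidesteps the numerical difficulties of high-gradient fronts at the cost of strong simplification. This sketch is a generic textbook scheme with assumed soil parameters, not the authors' model:

```python
# Minimal sketch of Green-Ampt infiltration: a sharp wetting front whose
# infiltration capacity decays as cumulative infiltration grows. All soil
# parameters are illustrative assumptions.
Ks = 1.0e-5       # saturated hydraulic conductivity [m/s] (assumed)
psi = 0.1         # wetting-front suction head [m] (assumed)
dtheta = 0.3      # soil moisture deficit [-] (assumed)

F = 1.0e-4        # cumulative infiltration [m], small nonzero start
dt = 1.0          # time step [s]
for _ in range(3600):                        # one hour of ponded infiltration
    f = Ks * (1.0 + psi * dtheta / F)        # Green-Ampt infiltration capacity
    F += f * dt                              # explicit update

print(f"cumulative infiltration after 1 h: {F * 1000:.1f} mm")
```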
Renosh, P R; Schmitt, Francois G; Loisel, Hubert
2015-01-01
Satellite remote sensing observations allow the ocean surface to be sampled synoptically over large spatio-temporal scales. Images from visible and thermal infrared satellite observations are widely used in physical, biological, and ecological oceanography. The present work proposes a method for understanding the multi-scaling properties of satellite products such as chlorophyll-a (Chl-a) and sea surface temperature (SST), which have rarely been studied in this respect. The specific objective of this study is to show how the small-scale heterogeneities of satellite images can be characterised using tools borrowed from the field of turbulence. For that purpose, we show how the structure function, classically used in scaling time-series analysis, can also be applied in 2D. The main advantage of this method is that it can process images with missing data. Based on both simulated and real images, we demonstrate that coarse-graining (CG) of a gradient-modulus transform of the original image does not provide correct scaling exponents. Using a 2D fractional Brownian simulation, we show that the structure function (SF) can be computed from randomly sampled pairs of points, and verify that one million pairs provide sufficient statistics.
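A reduced version of the pair-sampling estimator is sketched below: a second-order structure function computed from randomly drawn pairs at fixed horizontal separation, with pairs touching missing pixels simply discarded. The field is a synthetic stand-in, not a Chl-a or SST image, and isotropic averaging over directions is omitted for brevity:

```python
# Minimal sketch: second-order 2D structure function S_2(r) estimated from
# randomly sampled pairs of points, skipping missing (NaN) pixels. The image
# is synthetic; real Chl-a/SST fields are not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
img = rng.standard_normal((512, 512)).cumsum(0).cumsum(1)  # smooth synthetic field
img[rng.random(img.shape) < 0.2] = np.nan                  # 20% missing data

def structure_function(img, r, n_pairs=100_000, q=2):
    """S_q(r) = <|I(x + r) - I(x)|^q> over random horizontal pairs."""
    ny, nx = img.shape
    y = rng.integers(0, ny, n_pairs)
    x = rng.integers(0, nx - r, n_pairs)
    d = img[y, x + r] - img[y, x]
    d = d[~np.isnan(d)]              # drop any pair touching missing data
    return np.mean(np.abs(d) ** q)

for r in (1, 2, 4, 8, 16, 32):       # scaling shows up as a power law in r
    print(r, structure_function(img, r))
```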
Sources of spurious force oscillations from an immersed boundary method for moving-body problems
NASA Astrophysics Data System (ADS)
Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo
2011-04-01
When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on the solid body. In the present study, we identify two sources of these force oscillations. One source is the spatial discontinuity in pressure across the immersed boundary when a grid point located inside the solid body becomes a fluid point through the body motion. The addition of a mass source/sink together with momentum forcing, proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150], reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is the temporal discontinuity in velocity at grid points where fluid becomes solid through the body motion. The magnitude of the velocity discontinuity decreases with decreasing grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing grid spacing and increasing computational time step size, but they depend more on the grid spacing than on the time step size.
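The discrete-forcing idea the abstract refers to can be reduced to a one-line formula: at forcing points, a momentum source drives the provisional velocity to the body velocity within one time step. The sketch below shows only that formula with made-up numbers; Kim et al.'s full finite-volume scheme and the mass source/sink treatment are not reproduced:

```python
# Minimal sketch of direct (discrete) momentum forcing at immersed-boundary
# points: f = (u_body - u_star) / dt. Values are illustrative only.
import numpy as np

dt = 1.0e-3
u_star = np.array([0.8, 0.5, 0.1])   # provisional velocities at three forcing points
u_body = np.array([1.0, 1.0, 1.0])   # desired (body) velocity at those points

forcing = (u_body - u_star) / dt     # momentum forcing per unit mass
u_new = u_star + dt * forcing        # velocity now matches the body exactly

print("forcing:", forcing, " corrected velocity:", u_new)
```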