Propulsion simulator for magnetically-suspended wind tunnel models
NASA Technical Reports Server (NTRS)
Joshi, Prakash B.; Goldey, C. L.; Sacco, G. P.; Lawing, Pierce L.
1991-01-01
The objective of phase two of a current investigation sponsored by NASA Langley Research Center is to demonstrate the measurement of aerodynamic forces/moments, including the effects of exhaust gases, in magnetic suspension and balance system (MSBS) wind tunnels. Two propulsion simulator models are being developed: a small-scale and a large-scale unit, both employing compressed, liquefied carbon dioxide as propellant. The small-scale unit was designed, fabricated, and statically tested at Physical Sciences Inc. (PSI). The large-scale simulator is currently in the preliminary design stage. The small-scale simulator design/development is presented, and the data from its static firing on a thrust stand are discussed. The analysis of these data provides important information for the design of the large-scale unit. A description of the preliminary design of the device is also presented.
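For context, thrust-stand data of the kind described are conventionally reduced with the standard thrust equation; this is textbook background, not an equation quoted from the paper:

F = \dot{m}\,v_e + (p_e - p_a)\,A_e

where \dot{m} is the propellant mass flow rate, v_e the exhaust velocity, p_e and p_a the nozzle exit and ambient pressures, and A_e the nozzle exit area.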
Ruano, M V; Ribes, J; Seco, A; Ferrer, J
2011-01-01
This paper presents a computer tool called DSC (Simulation based Controllers Design) that enables an easy design of control systems and strategies applied to wastewater treatment plants. Although the control systems are developed and evaluated by simulation, this tool aims to facilitate the direct implementation of the designed control system to the PC of the full-scale WWTP (wastewater treatment plant). The designed control system can be programmed in a dedicated control application and can be connected to either the simulation software or the SCADA of the plant. To this end, the developed DSC incorporates an OPC server (OLE for process control) which provides an open-standard communication protocol for different industrial process applications. The potential capabilities of the DSC tool are illustrated through the example of a full-scale application. An aeration control system applied to a nutrient-removing WWTP was designed, tuned, and evaluated with the DSC tool before its implementation in the full-scale plant. The control parameters obtained by simulation were suitable for the full-scale plant with only a few modifications to improve the control performance. With the DSC tool, control system performance can be easily evaluated by simulation. Once developed and tuned by simulation, the control systems can be directly applied to the full-scale WWTP.
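A minimal sketch of the pattern the abstract describes follows: one controller implementation that can be pointed at either a simulator or a plant SCADA through a common tag read/write interface. The OpcLink class, the tag names, and the gains are hypothetical stand-ins; the DSC tool's actual internals are not given in the abstract.

```python
# Sketch of a simulation-tuned PI aeration controller that talks to either a
# process simulator or a plant gateway through one read/write interface.
# `OpcLink`, tag names, gains, and limits are invented for illustration.

class OpcLink:
    """Stand-in for an OPC client: read/write named process tags."""
    def __init__(self, backend):
        self.backend = backend            # a simulator object or SCADA gateway
    def read(self, tag):
        return self.backend.read(tag)
    def write(self, tag, value):
        self.backend.write(tag, value)

class PIAerationController:
    def __init__(self, link, setpoint=2.0, kp=50.0, ki=5.0, dt=60.0):
        self.link, self.setpoint = link, setpoint   # DO setpoint in mg/L
        self.kp, self.ki, self.dt = kp, ki, dt      # gains tuned by simulation
        self.integral = 0.0

    def step(self):
        do = self.link.read("AER1.DO")              # dissolved oxygen, mg/L
        error = self.setpoint - do
        self.integral += error * self.dt
        airflow = self.kp * error + self.ki * self.integral
        airflow = max(0.0, min(airflow, 5000.0))    # actuator limits, Nm3/h
        self.link.write("AER1.AIRFLOW_SP", airflow)
```

The portability claim in the abstract corresponds to swapping the `backend` object: the same controller code runs against the simulator while tuning and against the plant afterwards.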
TASK 2: QUENCH ZONE SIMULATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fusselman, Steve
Aerojet Rocketdyne (AR) has developed an innovative gasifier concept incorporating advanced technologies in an ultra-dense phase dry feed system, rapid mix injector, and advanced component cooling to significantly improve gasifier performance, life, and cost compared to commercially available state-of-the-art systems. A key feature of the AR gasifier design is the transition from the gasifier outlet into the quench zone, where the raw syngas is cooled to ~400°C by injection and vaporization of atomized water. Earlier pilot plant testing revealed a propensity for the original gasifier outlet design to accumulate slag in the outlet, leading to erratic syngas flow from the outlet. Subsequent design modifications successfully resolved this issue in the pilot plant gasifier. In order to gain greater insight into the physical phenomena occurring within this zone, AR developed with Coanda Research & Development a cold flow simulation apparatus with a high degree of similitude to hot-fire conditions in the pilot-scale gasifier design, capable of accommodating a scaled-down quench zone for a demonstration-scale gasifier. The objective of this task was to validate the similitude of the cold flow simulation model by comparison with pilot-scale outlet design performance, and to assess demonstration-scale gasifier design feasibility from testing of a scaled-down outlet design. Test results exhibited a strong correspondence with the two pilot-scale outlet designs, indicating credible similitude for the cold flow simulation device. Testing of the scaled-down outlet revealed important considerations for the design and operation of the demonstration-scale gasifier, in particular pertaining to the relative momentum between the downcoming raw syngas and the sprayed quench water and the associated impacts on flow patterns within the quench zone. This report describes key findings from the test program, including assessment of pilot plant configuration simulations relative to actual results on the pilot plant gasifier, and demonstration plant design recommendations based on cold flow simulation results.
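The gas–spray momentum competition mentioned above is conventionally characterized by a momentum flux ratio; this is standard jet-in-crossflow background, not a value taken from the report:

J = \frac{\rho_w v_w^2}{\rho_g v_g^2}

where subscripts w and g denote the injected quench-water spray and the downcoming syngas, respectively. Matching J (along with geometric similitude) is one typical basis for arguing that a cold flow rig reproduces hot-fire flow patterns.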
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilke, Jeremiah J; Kenny, Joseph P.
2015-02-01
Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
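As a toy illustration of the discrete-event approach (not SST's actual API), the sketch below advances a large number of virtual ranks purely through a timestamped event queue, which is why extreme rank counts fit on small machines; the latency constant and the one-message "program" are invented.

```python
# Toy parallel-discrete-event sketch in the spirit of SST/macro (not its API):
# virtual MPI ranks advance only through timestamped events, so very large
# rank counts cost memory rather than real cores.
import heapq

LATENCY = 1e-6  # assumed per-message network latency, seconds

def simulate(num_ranks):
    clocks = [0.0] * num_ranks        # virtual time per rank
    events = []                       # (time, seq, dest_rank, payload)
    for seq, src in enumerate(range(1, num_ranks)):
        heapq.heappush(events, (LATENCY, seq, 0, f"hello from rank {src}"))
    delivered = 0
    while events:
        t, _, dest, _payload = heapq.heappop(events)
        clocks[dest] = max(clocks[dest], t)   # receiving advances local clock
        delivered += 1
    return clocks[0], delivered

# 100,000 virtual ranks simulated inside one real process
print(simulate(100_000))
```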
Franklin, Ashley E; Burns, Paulette; Lee, Christopher S
2014-10-01
In 2006, the National League for Nursing published three measures related to novice nurses' beliefs about self-confidence, scenario design, and educational practices associated with simulation. Despite the extensive use of these measures, little is known about their reliability and validity. The psychometric properties of the Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire were studied among a sample of 2200 surveys completed by novice nurses from a liberal arts university in the southern United States. Psychometric tests included item analysis, confirmatory and exploratory factor analyses in randomly split subsamples, concordant and discordant validity, and internal consistency. All three measures have sufficient reliability and validity to be used in education research. There is room for improvement in content validity for the Student Satisfaction and Self-Confidence in Learning Scale and the Simulation Design Scale. This work provides robust evidence to ensure that judgments made about self-confidence after simulation, simulation design, and educational practices are valid and reliable.
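For readers unfamiliar with the internal-consistency statistic referenced above, here is a minimal sketch of Cronbach's alpha; the data are random placeholders, not the study's survey responses, and the item count is an assumption.

```python
# Cronbach's alpha from a respondents-by-items matrix (standard formula).
import numpy as np

def cronbach_alpha(items):
    """items: 2D array, rows = respondents, cols = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
fake = rng.normal(size=(2200, 13))     # e.g., 13 hypothetical scale items
fake += rng.normal(size=(2200, 1))     # shared factor -> alpha well above 0
print(round(cronbach_alpha(fake), 2))
```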
Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2016-01-01
An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
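The single-extra-simulation cost claim follows from the standard discrete-adjoint identity, stated here as general background rather than quoted from the overview. For a converged residual R(q, D) = 0 with flow state q, design variables D, and output J(q, D), one adjoint solve

\left(\frac{\partial R}{\partial q}\right)^{T} \lambda = \left(\frac{\partial J}{\partial q}\right)^{T}

yields the complete gradient

\frac{dJ}{dD} = \frac{\partial J}{\partial D} - \lambda^{T}\,\frac{\partial R}{\partial D}

at a cost that is independent of the number of design variables in D.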
A quantitative approach to evaluating caring in nursing simulation.
Eggenberger, Terry L; Keller, Kathryn B; Chase, Susan K; Payne, Linda
2012-01-01
This study was designed to test a quantitative method of measuring caring in the simulated environment. Since competency in caring is central to nursing practice, ways of including caring concepts in designing scenarios and in evaluation of performance need to be developed. Coates' Caring Efficacy scales were adapted for simulation and named the Caring Efficacy Scale-Simulation Student Version (CES-SSV) and Caring Efficacy Scale-Simulation Faculty Version (CES-SFV). A correlational study was designed to compare student self-ratings with faculty ratings on caring efficacy during an adult acute simulation experience with traditional and accelerated baccalaureate students in a nursing program grounded in caring theory. Student self-ratings were significantly correlated with objective ratings (r = 0.345, 0.356). Both the CES-SSV and the CES-SFV were found to have excellent internal consistency and significantly correlated interrater reliability. They were useful in measuring caring in the simulated learning environment.
Design of full-scale adsorption systems typically includes expensive and time-consuming pilot studies to simulate full-scale adsorber performance. Accordingly, the rapid small-scale column test (RSSCT) was developed and evaluated experimentally. The RSSCT can simulate months of f...
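Since the record above is truncated, the scaling relation usually cited for RSSCT design (e.g., by Crittenden and co-workers) is included here as background: small-column (SC) and large-column (LC) empty-bed contact times are related through the adsorbent particle diameters,

\frac{\mathrm{EBCT}_{SC}}{\mathrm{EBCT}_{LC}} = \left(\frac{d_{SC}}{d_{LC}}\right)^{2-X} = \frac{t_{SC}}{t_{LC}}

where X = 0 for constant intraparticle diffusivity and X = 1 for diffusivity proportional to particle size; this is what lets a small column reproduce months of full-scale operation in days or weeks.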
The mechanical design and simulation of a scaled H⁻ Penning ion source.
Rutter, T; Faircloth, D; Turner, D; Lawrie, S
2016-02-01
The existing ISIS Penning H⁻ source is unable to produce the beam parameters required for the front end test stand and so a new, high duty factor, high brightness scaled source is being developed. This paper details first the development of an electrically biased aperture plate for the existing ISIS source and second, the design, simulation, and development of a prototype scaled source.
The mechanical design and simulation of a scaled H⁻ Penning ion source
NASA Astrophysics Data System (ADS)
Rutter, T.; Faircloth, D.; Turner, D.; Lawrie, S.
2016-02-01
The existing ISIS Penning H⁻ source is unable to produce the beam parameters required for the front end test stand and so a new, high duty factor, high brightness scaled source is being developed. This paper details first the development of an electrically biased aperture plate for the existing ISIS source and second, the design, simulation, and development of a prototype scaled source.
Design, construction, and evaluation of a 1:8 scale model binaural manikin.
Robinson, Philip; Xiang, Ning
2013-03-01
Many experiments in architectural acoustics require presenting listeners with simulations of different rooms to compare. Acoustic scale modeling is a feasible means to create accurate simulations of many rooms at reasonable cost. A critical component in a scale model room simulation is a receiver that properly emulates a human receiver. For this purpose, a scale model artificial head has been constructed and tested. This paper presents the design and construction methods used, proper equalization procedures, and measurements of its response. A headphone listening experiment examining sound externalization with various reflection conditions is presented that demonstrates its use for psycho-acoustic testing.
The mechanical design and simulation of a scaled H⁻ Penning ion source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutter, T., E-mail: theo.rutter@stfc.ac.uk; Faircloth, D.; Turner, D.
2016-02-15
The existing ISIS Penning H⁻ source is unable to produce the beam parameters required for the front end test stand and so a new, high duty factor, high brightness scaled source is being developed. This paper details first the development of an electrically biased aperture plate for the existing ISIS source and second, the design, simulation, and development of a prototype scaled source.
Survey of factors influencing learner engagement with simulation debriefing among nursing students.
Roh, Young Sook; Jang, Kie In
2017-12-01
Simulation-based education has escalated worldwide, yet few studies have rigorously explored predictors of learner engagement with simulation debriefing. The purpose of this cross-sectional, descriptive survey was to identify factors that determine learner engagement with simulation debriefing among nursing students. A convenience sample of 296 Korean nursing students enrolled in a simulation-based course completed the survey. A total of five instruments were used: (i) Characteristics of Debriefing; (ii) Debriefing Assessment for Simulation in Healthcare - Student Version; (iii) the Korean version of the Simulation Design Scale; (iv) Communication Skills Scale; and (v) Clinical-Based Stress Scale. Multiple regression analysis was performed using these variables to investigate the influencing factors. The results indicated that the factors influencing learner engagement with simulation debriefing were simulation design, confidentiality, stress, and number of students. Simulation design was the most important factor. Video-assisted debriefing was not a significant factor affecting learner engagement. Educators should organize and conduct debriefing activities with these factors in mind to effectively induce learner engagement. Further study is needed to identify the effects on learning outcomes of debriefing sessions that target learners' needs and consider situational factors.
Computational Modeling Approaches to Multiscale Design of Icephobic Surfaces
NASA Technical Reports Server (NTRS)
Tallman, Aaron; Wang, Yan; Vargas, Mario
2017-01-01
To aid in the design of surfaces that prevent icing, a model and computational simulation of impact ice formation at the single-droplet scale was implemented. The nucleation of a single supercooled droplet impacting a substrate, in rime ice conditions, was simulated using open-source computational fluid dynamics (CFD) software. No existing model simulates the simultaneous impact and freezing of a single supercooled water droplet; for the 10-week project, a low-fidelity feasibility study was the goal.
Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows
NASA Astrophysics Data System (ADS)
Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel
2017-11-01
We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
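The "eddy viscosity plus nondissipative nonlinear term" structure described above can be written schematically as follows; this is a generic mixed-model form consistent with the abstract's description, not the authors' exact model coefficients:

\tau_{ij}^{\mathrm{mod}} = -2\,\nu_e\,\bar{S}_{ij} + \mu_e\left(\bar{S}_{ik}\bar{\Omega}_{kj} - \bar{\Omega}_{ik}\bar{S}_{kj}\right)

where \bar{S}_{ij} and \bar{\Omega}_{ij} are the filtered rate-of-strain and rate-of-rotation tensors. The first term is purely dissipative, while the second has zero contraction with \bar{S}_{ij} and therefore redistributes energy without net dissipation, which is what allows it to represent transport processes such as those induced by rotation.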
A large meteorological wind tunnel was used to simulate a suburban atmospheric boundary layer. The model-prototype scale was 1:300 and the roughness length was approximately 1.0 m full scale. The model boundary layer simulated full scale dispersion from ground-level and elevated ...
Why build a virtual brain? Large-scale neural simulations as jump start for cognitive computing
NASA Astrophysics Data System (ADS)
Colombo, Matteo
2017-03-01
Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, the pay-offs of pursuing this project remain controversial. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates this claim and argues that the main challenge this era is facing is not the lack of biological realism. The challenge lies in identifying general neurocomputational principles for the design of artificial systems, which could display the robust flexibility characteristic of biological intelligence.
Manned remote work station development article
NASA Technical Reports Server (NTRS)
1978-01-01
The two prime objectives of the Manned Remote Work Station (MRWS) Development Article Study are, first, to evaluate the MRWS flight article roles and associated design concepts for fundamental requirements and to embody key technology developments into a simulation program; and, second, to provide detailed manufacturing drawings and schedules for a simulator development test article. An approach is outlined which establishes flight article requirements based on past studies of the Solar Power Satellite, orbital construction support equipment, construction bases, and near-term shuttle operations. Simulation objectives are established for those technology issues that can best be addressed on a simulator. Concepts for full-scale and sub-scale simulators are then studied to establish an overall approach to studying MRWS requirements. Emphasis then shifts to design and specification of a full-scale development test article.
If You've Got It, Use It (Simulation, That Is...)
NASA Technical Reports Server (NTRS)
Frost, Chad; Tucker, George
2006-01-01
This viewgraph presentation reviews the Rotorcraft Aircrew Systems Concept Airborne Laboratory (RASCAL) UH-60 in-flight simulator, the use of simulation in support of safety-monitor design specification development, the development of a failure/recovery (F/R) rating scale, the use of the F/R rating scale as a common element between simulation and flight evaluation, and the expansion of the flight envelope without benefit of simulation.
Demonstration of coal reburning for cyclone boiler NOx control. Appendix, Book 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Based on the industry need for a pilot-scale cyclone boiler simulator, Babcock & Wilcox (B&W) designed, fabricated, and installed such a facility at its Alliance Research Center (ARC) in 1985. The project involved conversion of an existing pulverized coal-fired facility to be cyclone-firing capable. Additionally, convective section tube banks were installed in the upper furnace in order to simulate a typical boiler convection pass. The small boiler simulator (SBS) is designed to simulate most fireside aspects of full-size utility boilers, such as combustion and flue gas emissions characteristics, fireside deposition, etc. Prior to the design of the pilot-scale cyclone boiler simulator, the various cyclone boiler types were reviewed in order to identify the inherent cyclone boiler design characteristics applicable to the majority of these boilers. The cyclone boiler characteristics that were reviewed include NOx emissions, furnace exit gas temperature (FEGT), carbon loss, and total furnace residence time. Previous pilot-scale cyclone-fired furnace experience identified the following concerns: (1) operability of a small cyclone furnace (e.g., continuous slag tapping capability); (2) the optimum cyclone(s) configuration for the pilot-scale unit; (3) compatibility of NOx levels, carbon burnout, cyclone ash carryover to the convection pass, cyclone temperature, furnace residence time, and FEGT.
Javaherchi, Teymour
2016-06-08
Attached are the .cas and .dat files, along with the required User Defined Functions (UDFs) and look-up table of lift and drag coefficients, for the Reynolds Averaged Navier-Stokes (RANS) simulation of three coaxially located lab-scaled DOE RM1 turbines implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a re-designed geometry, based on the full-scale DOE RM1 design, producing the same power output as the full-scale model while operating at matched Tip Speed Ratio values at achievable laboratory Reynolds numbers (see attached paper). In this case study the flow field around and in the wake of the lab-scaled DOE RM1 turbines in a coaxial array is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled. The effect of the turbines' rotating blades is modeled using Blade Element Theory. This simulation provides an accurate estimate of the performance of each device and the structure of their turbulent far wakes. The results of these simulations were validated against in-house experimental data. Simulations for other turbine configurations are available upon request.
Design and Analysis of Windmill Simulation and Pole by Solidwork Program
NASA Astrophysics Data System (ADS)
Mulyana, Tatang; Sebayang, Darwin; R, Akmal Muamar. D.; A, Jauharah H. D.; Yahya Shomit, M.
2018-03-01
The Indonesian archipelago has great wind energy potential. For micro-scale power generation, the energy obtained from a windmill can be connected directly to the electrical load and used without problems. For macro-scale power generation, however, problems arise, such as the design of the blade shape; accurate simulation and experiment are needed to produce blades with a special shape that can capture wind energy. In addition, daily and yearly wind-speed calculations are required to determine the best latitude and longitude positions for building windmills. This paper presents a solution to the problem of producing a windmill that is practical to build and mobile enough to be relocated. Before a windmill prototype is built, the best windmill design should be obtained; therefore, simulation of the designed windmill is of crucial importance. SolidWorks SimulationXpress is a tool that serves to generate simulations of a design. Factors that can affect a design result include the load-bearing and remaining parts, material selection, the applied load, the safety margin of the design, and changes in shape due to the load applied to the design. In this paper, static and thermal simulations of the designed windmill are presented. The simulation results show that the design is very satisfactory, so the prototype fabrication process can proceed.
Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.
2002-01-01
Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.
Martin, Phillip K; Schroeder, Ryan W
2014-06-01
The Designs subtest allows for accumulation of raw score points by chance alone, creating the potential for artificially inflated performances, especially in older patients. A random number generator was used to simulate the random selection and placement of cards by 100 test-naive participants, resulting in a mean raw score of 36.26 (SD = 3.86). This resulted in relatively high scaled scores in the 45-54, 55-64, and 65-69 age groups on Designs II. In the latter age group, in particular, the mean simulated performance resulted in a scaled score of 7, with scores 1 SD below and above the performance mean translating to scaled scores of 5 and 8, respectively. The findings indicate that clinicians should use caution when interpreting Designs II performance in these age groups, as our simulations demonstrated that low average to average range scores occur frequently when patients are relying solely on chance performance.
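A purely illustrative Monte Carlo in the spirit of the study is sketched below. The grid size, card counts, items per administration, and scoring rule are invented placeholders, NOT the actual Designs subtest administration or scoring rules; only the overall approach (score random responding many times, inspect the distribution) matches the abstract.

```python
# Monte Carlo of chance-level responding under an invented scoring rule.
import random

def random_trial(n_cells=16, n_targets=4, n_choices=8):
    targets = random.sample(range(n_cells), n_targets)   # "correct" locations
    picks = random.sample(range(n_choices), n_targets)   # chance card choice
    cells = random.sample(range(n_cells), n_targets)     # chance placement
    score = 0
    for pick, cell in zip(picks, cells):
        if pick < n_targets:          # right card chosen by luck
            score += 1
            if cell in targets:       # and placed in a credited location
                score += 1
    return score

random.seed(1)
sims = [sum(random_trial() for _ in range(4)) for _ in range(100)]
mean = sum(sims) / len(sims)
sd = (sum((s - mean) ** 2 for s in sims) / (len(sims) - 1)) ** 0.5
print(mean, sd)   # distribution shape is the point, not the reported values
```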
Health care planning and education via gaming-simulation: a two-stage experiment.
Gagnon, J H; Greenblat, C S
1977-01-01
A two-stage process of gaming-simulation design was conducted: the first stage of design concerned national planning for hemophilia care; the second stage of design was for gaming-simulation concerning the problems of hemophilia patients and health care providers. The planning design was intended to be adaptable to large-scale planning for a variety of health care problems. The educational game was designed using data developed in designing the planning game. A broad range of policy-makers participated in the planning game.
NASA Astrophysics Data System (ADS)
Ren, Lei; Zhang, Lin; Tao, Fei; Zhang, Xiaolong (Luke); Luo, Yongliang; Zhang, Yabin
2012-08-01
Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise a HPS platform. This research is driven by the issues concerning large-scale simulation resources deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources, and highly reliable simulation with fault tolerance. A framework of a virtualisation-based simulation platform (VSIM) is first proposed. Then the article investigates and discusses key approaches in VSIM, including simulation resources modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction, and (3) achieve fault-tolerant simulation.
Modification of Obstetric Emergency Simulation Scenarios for Realism in a Home-Birth Setting.
Komorowski, Janelle; Andrighetti, Tia; Benton, Melissa
2017-01-01
Clinical competency and clear communication are essential for intrapartum care providers who encounter high-stakes, low-frequency emergencies. The challenge for these providers is to maintain infrequently used skills. The challenge is even more significant for midwives who manage births at home and who, due to low practice volume and low-risk clientele, may rarely encounter an emergency. In addition, access to team simulation may be limited for home-birth midwives. This project modified existing validated obstetric simulation scenarios for a home-birth setting. Twelve certified professional midwives (CPMs) in active home-birth practice participated in shoulder dystocia and postpartum hemorrhage simulations. The simulations were staged to resemble home-birth settings, supplies, and personnel. Fidelity (realism) of the simulations was assessed with the Simulation Design Scale, and satisfaction and self-confidence were assessed with the Student Satisfaction and Self-Confidence in Learning Scale. Both utilized a 5-point Likert scale, with higher scores suggesting greater levels of fidelity, participant satisfaction, and self-confidence. Simulation Design Scale scores indicated participants agreed fidelity was achieved for the home-birth setting, while scores on the Student Satisfaction and Self-Confidence in Learning indicated high levels of participant satisfaction and self-confidence. If offered without modification, simulation scenarios designed for use in hospitals may lose fidelity for home-birth midwives, particularly in the environmental and psychological components. Simulation is standard of care in most settings, an excellent vehicle for maintaining skills, and some evidence suggests it results in improved perinatal outcomes. Additional study is needed in this area to support home-birth providers in maintaining skills. This pilot study suggests that simulation scenarios intended for hospital use can be successfully adapted to the home-birth setting.
Simulation Framework for Intelligent Transportation Systems
DOT National Transportation Integrated Search
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System. The simulator is designed for running on parallel computers and distributed (networked) computer systems, but ca...
Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelmann, Christian; Lauer, Frank
This work focuses on tools for investigating algorithm performance at extreme scale with millions of concurrent threads and for evaluating the impact of future architecture choices to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper is able to provide this capability using a parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale, and its performance properties can be evaluated. The results of an initial prototype are encouraging as a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.
Andersen, Simone Nyholm; Broberg, Ole
2015-11-01
Current application of work system simulation in participatory ergonomics (PE) design includes a variety of different simulation media. However, the actual influence of the media attributes on the simulation outcome has received less attention. This study investigates two simulation media: full-scale mock-ups and table-top models. The aim is to compare how the media attributes of fidelity and affordance influence ergonomics identification and evaluation in PE design of hospital work systems. The results illustrate how the full-scale mock-ups' high fidelity of room layout and affordance of tool operation support ergonomics identification and evaluation related to the work system entities space and technologies & tools. The table-top models' high fidelity of function relations and affordance of a helicopter view support ergonomics identification and evaluation related to the entity organization. Furthermore, the study addresses the form of the identified and evaluated conditions, which are either identified challenges or tangible design criteria.
49 CFR Appendix A to Part 239 - Schedule of Civil Penalties 1
Code of Federal Regulations, 2014 CFR
2014-10-01
... emergency responders to participate in emergency simulations 3,000 6,000 (iii) Distribution of applicable... passengers with disabilities 2,500 5,000 239.103 Failure to conduct a required full-scale simulation in... debriefing and critique session after an emergency or full-scale simulation 4,000 7,500 (c) Failure to design...
Acoustic Treatment Design Scaling Methods. Volume 1; Overview, Results, and Recommendations
NASA Technical Reports Server (NTRS)
Kraft, R. E.; Yu, J.
1999-01-01
Scale model fan rigs that simulate new generation ultra-high-bypass engines at about 1/5 scale are achieving increased importance as development vehicles for the design of low-noise aircraft engines. Testing at small scale allows the tests to be performed in existing anechoic wind tunnels, which provides an accurate simulation of the important effects of aircraft forward motion on the noise generation. The ability to design, build, and test miniaturized acoustic treatment panels on scale model fan rigs representative of the full-scale engine provides not only cost savings but also an opportunity to optimize the treatment by allowing tests of different designs. The primary objective of this study was to develop methods that will allow scale model fan rigs to be successfully used as acoustic treatment design tools. The study focuses on finding methods to extend the upper limit of the frequency range of impedance prediction models and acoustic impedance measurement methods for subscale treatment liner designs, and on confirming the predictions by correlation with measured data. This phase of the program had as a goal doubling the upper limit of impedance measurement from 6 kHz to 12 kHz. The program utilizes combined analytical and experimental methods to achieve the objectives.
Argonne Simulation Framework for Intelligent Transportation Systems
DOT National Transportation Integrated Search
1996-01-01
A simulation framework has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed to run on parallel computers and distribu...
Dong, Hui; Loomer, Peter; Barr, Alan; LaRoche, Charles; Young, Ed; Rempel, David
2007-01-01
Work-related upper extremity musculoskeletal disorders, including carpal tunnel syndrome, are prevalent among dentists and dental hygienists. An important risk factor for developing these disorders is forceful pinching which occurs during periodontal work such as dental scaling. Ergonomically designed dental scaling instruments may help reduce the prevalence of carpal tunnel syndrome among dental practitioners. In this study, 8 custom-designed dental scaling instruments with different handle shapes were used by 24 dentists and dental hygienists to perform a simulated tooth scaling task. The muscle activity of two extensors and two flexors in the forearm was recorded with electromyography while thumb pinch force was measured by pressure sensors. The results demonstrated that the instrument handle with a tapered, round shape and a 10 mm diameter required the least muscle load and pinch force when performing simulated periodontal work. The results from this study can guide dentists and dental hygienists in selection of dental scaling instruments. PMID:17156742
RANS Simulation (Rotating Reference Frame Model [RRF]) of Single Lab-Scaled DOE RM1 MHK Turbine
Javaherchi, Teymour; Stelzenmuller, Nick; Aliseda, Alberto; Seydel, Joseph
2014-04-15
Attached are the .cas and .dat files for the Reynolds Averaged Navier-Stokes (RANS) simulation of a single lab-scaled DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a re-designed geometry, based on the full-scale DOE RM1 design, producing the same power output as the full-scale model while operating at matched Tip Speed Ratio values at achievable laboratory Reynolds numbers (see attached paper). In this case study, taking advantage of the symmetry of the lab-scaled DOE RM1 geometry, only half of the geometry is modeled using the (single) Rotating Reference Frame [RRF] model. In this model RANS equations, coupled with the k-ω turbulence closure model, are solved in the rotating reference frame. The actual geometry of the turbine blade is included, and the turbulent boundary layer along the blade span is simulated using a wall-function approach. The rotation of the blade is modeled by applying periodic boundary conditions to the planes of symmetry. This case study simulates the performance and flow field in the near and far wake of the device at the desired operating conditions. The results of these simulations were validated against in-house experimental data. Please see the attached paper.
Design of a V/STOL propulsion system for a large-scale fighter model
NASA Technical Reports Server (NTRS)
Willis, W. S.
1981-01-01
Modifications were made to the existing large-scale STOL fighter model to simulate a V/STOL configuration. Modifications included the substitution of two-dimensional lift/cruise exhaust nozzles in the nacelles, and the addition of a third J97 engine in the fuselage to supply a remote exhaust nozzle simulating a Remote Augmented Lift System. A preliminary design of the inlet and exhaust ducting for the third engine was developed, and a detailed design of the hot exhaust ducting and remote nozzle was completed.
Simulating the Response of a Composite Honeycomb Energy Absorber. Part 2; Full-Scale Impact Testing
NASA Technical Reports Server (NTRS)
Fasanella, Edwin L.; Annett, Martin S.; Jackson, Karen E.; Polanco, Michael A.
2012-01-01
NASA has sponsored research to evaluate an externally deployable composite honeycomb designed to attenuate loads in the event of a helicopter crash. The concept, designated the Deployable Energy Absorber (DEA), is an expandable Kevlar® honeycomb. The DEA has a flexible hinge that allows the honeycomb to be stowed collapsed until needed during an emergency. Evaluation of the DEA began with material characterization of the Kevlar®-129 fabric/epoxy, and ended with a full-scale crash test of a retrofitted MD-500 helicopter. During each evaluation phase, finite element models of the test articles were developed and simulations were performed using the dynamic finite element code LS-DYNA®. The paper focuses on simulations of two full-scale impact tests involving the DEA: a mass-simulator test and a full-scale crash of an instrumented MD-500 helicopter. Isotropic (MAT24) and composite (MAT58) material models, which were assigned to the DEA shell elements, were compared. Based on simulation results, the MAT58 model showed better agreement with test.
Initialization of high resolution surface wind simulations using NWS gridded data
J. Forthofer; K. Shannon; Bret Butler
2010-01-01
WindNinja is a standalone computer model designed to provide the user with simulations of surface wind flow. It is deterministic and steady-state. It is currently being modified to allow the user to initialize the flow calculation using the National Digital Forecast Database. It essentially allows the user to downscale the coarse-scale simulations from meso-scale models to...
Spatial adaptive sampling in multiscale simulation
NASA Astrophysics Data System (ADS)
Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.
2014-07-01
In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
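The sketch below illustrates the core idea in one dimension: interpolate cached fine-scale results across the macro grid and launch a new fine-scale run only where needed. The fine-scale "model" and the distance-based refinement criterion are deliberately crude stand-ins, not the paper's HMM elastodynamics solver.

```python
# Spatial adaptive sampling toy: reuse cached fine-scale samples wherever one
# lies nearby; otherwise pay for a new fine-scale evaluation and cache it.
import numpy as np

def fine_scale(x):
    """Expensive fine-scale model (placeholder)."""
    return np.sin(3.0 * x) + 0.1 * x ** 2

def adaptive_field(grid, spacing=0.03):
    xs = [grid[0], grid[-1]]                  # cached sample locations
    ys = [fine_scale(xs[0]), fine_scale(xs[1])]
    calls = 2
    out = np.empty_like(grid)
    for i, x in enumerate(grid):
        if min(abs(x - np.asarray(xs))) > spacing:   # no cached run nearby
            y = fine_scale(x)                        # refine with a new run
            calls += 1
            xs.append(x)
            ys.append(y)
            out[i] = y
        else:
            order = np.argsort(xs)                   # interpolate from cache
            out[i] = np.interp(x, np.asarray(xs)[order], np.asarray(ys)[order])
    return out, calls

grid = np.linspace(0.0, 1.0, 1000)
_, n_calls = adaptive_field(grid)
print(n_calls, "fine-scale runs for", grid.size, "macro grid points")
```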
NASA Technical Reports Server (NTRS)
Fujiwara, Gustavo; Bragg, Mike; Triphahn, Chris; Wiberg, Brock; Woodard, Brian; Loth, Eric; Malone, Adam; Paul, Bernard; Pitera, David; Wilcox, Pete;
2017-01-01
This report presents the key results from the first two years of a program to develop experimental icing simulation capabilities for full-scale swept wings. This investigation was undertaken as part of a larger collaborative research effort on ice accretion and aerodynamics for large-scale swept wings. Ice accretion and the resulting aerodynamic effect on large-scale swept wings present a significant airplane design and certification challenge to airframe manufacturers, certification authorities, and research organizations alike. While the effect of ice accretion on straight wings has been studied in detail for many years, the available data on swept-wing icing are much more limited, especially at larger scales.
NASA Astrophysics Data System (ADS)
Toshimitsu, Kazuhiko; Narihara, Takahiko; Kikugawa, Hironori; Akiyoshi, Arata; Kawazu, Yuuya
2017-04-01
The effects of the turbulence intensity and vortex scale of simulated natural wind on the performance of a horizontal axis wind turbine (HAWT) are investigated in this paper. In particular, the unsteadiness and turbulence of wind in Japan are generally stronger than in Europe and North America. Hence, Japanese engineers should take account of the velocity unsteadiness of natural wind at the installation site to design a higher performance wind turbine. Using five originally designed wind turbines based on NACA and MEL blades, the dependence of turbine performance on the wind frequency and vortex scale of the simulated natural wind is presented. As a result, the power coefficient of the newly designed MEL3-type rotor in the simulated natural wind is 130% larger than that in steady wind.
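Performance comparisons of this kind are conventionally made through the power coefficient and tip speed ratio; these are standard definitions, not values from the paper:

C_p = \frac{P}{\tfrac{1}{2}\rho A V^3}, \qquad \lambda = \frac{\Omega R}{V}

where P is rotor power, \rho air density, A swept area, V wind speed, \Omega rotor angular speed, and R rotor radius; in unsteady wind, V is typically taken as an appropriate mean value.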
VISUALIZATION AND SIMULATION OF NON-AQUEOUS PHASE LIQUIDS SOLUBILIZATION IN PORE NETWORKS
The design of in-situ remediation of contaminated soils is mostly based on a description at the macroscopic scale using averaged quantities. These cannot address issues at the pore and pore-network scales. In this paper, visualization experiments and numerical simulations in ...
Fractal Simulations of African Design in Pre-College Computing Education
ERIC Educational Resources Information Center
Eglash, Ron; Krishnamoorthy, Mukkai; Sanchez, Jason; Woodbridge, Andrew
2011-01-01
This article describes the use of fractal simulations of African design in a high school computing class. Fractal patterns--repetitions of shape at multiple scales--are a common feature in many aspects of African design. In African architecture we often see circular houses grouped in circular complexes, or rectangular houses in rectangular…
NASA Astrophysics Data System (ADS)
Matsuzaki, F.; Yoshikawa, N.; Tanaka, M.; Fujimaki, A.; Takai, Y.
2003-10-01
Recently, many single flux quantum (SFQ) logic circuits containing several thousand Josephson junctions have been designed successfully by using digital-domain simulation based on a hardware description language (HDL). In present HDL-based design of SFQ circuits, a structure-level HDL description is used, where circuits are made up of basic gate cells. However, in order to analyze large-scale SFQ digital systems, such as a microprocessor, a higher level of circuit abstraction is necessary to reduce the circuit simulation time. In this paper we investigate how to describe the functionality of large-scale SFQ digital circuits with a behavior-level HDL description. In this method, the functionality and timing of a circuit block are defined directly by describing its behavior in the HDL. Using this method, we can dramatically reduce the simulation time of large-scale SFQ digital circuits.
Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling
USDA-ARS?s Scientific Manuscript database
We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...
Haines, Brian M.; Yi, S. A.; Olson, R. E.; ...
2017-07-10
The wetted foam capsule design for inertial confinement fusion capsules, which includes a foam layer wetted with deuterium-tritium liquid, enables layered capsule implosions with a wide range of hot-spot convergence ratios (CR) on the National Ignition Facility. In this paper, we present a full-scale wetted foam capsule design that demonstrates high gain in one-dimensional simulations. In these simulations, increasing the convergence ratio leads to an improved capsule yield due to higher hot-spot temperatures and increased fuel areal density. High-resolution two-dimensional simulations of this design are presented with detailed and well resolved models for the capsule fill tube, support tent, surface roughness, and predicted asymmetries in the x-ray drive. Our modeling of these asymmetries is validated by comparisons with available experimental data. In 2D simulations of the full-scale wetted foam capsule design, jetting caused by the fill tube is prevented by the expansion of the tungsten-doped shell layer due to preheat. While the impacts of surface roughness and predicted asymmetries in the x-ray drive are enhanced by convergence effects, likely underpredicted in 2D at high CR, simulations predict that the capsule is robust to these features. Nevertheless, the design is highly susceptible to the effects of the capsule support tent, which negates all of the one-dimensional benefits of increasing the convergence ratio. Indeed, when the support tent is included in simulations, the yield decreases as the convergence ratio is increased for CR > 20. Finally, the results nevertheless suggest that the full-scale wetted foam design has the potential to outperform ice layer capsules given currently achievable levels of asymmetries when fielded at low convergence ratios (CR < 20).
NASA Astrophysics Data System (ADS)
Haines, Brian M.; Yi, S. A.; Olson, R. E.; Khan, S. F.; Kyrala, G. A.; Zylstra, A. B.; Bradley, P. A.; Peterson, R. R.; Kline, J. L.; Leeper, R. J.; Shah, R. C.
2017-07-01
The wetted foam capsule design for inertial confinement fusion capsules, which includes a foam layer wetted with deuterium-tritium liquid, enables layered capsule implosions with a wide range of hot-spot convergence ratios (CR) on the National Ignition Facility. We present a full-scale wetted foam capsule design that demonstrates high gain in one-dimensional simulations. In these simulations, increasing the convergence ratio leads to an improved capsule yield due to higher hot-spot temperatures and increased fuel areal density. High-resolution two-dimensional simulations of this design are presented with detailed and well resolved models for the capsule fill tube, support tent, surface roughness, and predicted asymmetries in the x-ray drive. Our modeling of these asymmetries is validated by comparisons with available experimental data. In 2D simulations of the full-scale wetted foam capsule design, jetting caused by the fill tube is prevented by the expansion of the tungsten-doped shell layer due to preheat. While the impacts of surface roughness and predicted asymmetries in the x-ray drive are enhanced by convergence effects, likely underpredicted in 2D at high CR, simulations predict that the capsule is robust to these features. Nevertheless, the design is highly susceptible to the effects of the capsule support tent, which negates all of the one-dimensional benefits of increasing the convergence ratio. Indeed, when the support tent is included in simulations, the yield decreases as the convergence ratio is increased for CR > 20. Nevertheless, the results suggest that the full-scale wetted foam design has the potential to outperform ice layer capsules given currently achievable levels of asymmetries when fielded at low convergence ratios (CR < 20).
Keiski, Michelle A; Shore, Douglas L; Hamilton, Joanna M; Malec, James F
2015-04-01
The purpose of this study was to characterize the operating characteristics of the Personality Assessment Inventory (PAI) validity scales in distinguishing simulators feigning symptoms of traumatic brain injury (TBI) while completing the PAI (n = 84) from a clinical sample of patients with TBI who achieved adequate scores on performance validity tests (n = 112). The simulators were divided into two groups: (a) Specific Simulators feigning cognitive and somatic symptoms only or (b) Global Simulators feigning cognitive, somatic, and psychiatric symptoms. The PAI overreporting scales were indeed sensitive to the simulation of TBI symptoms in this analogue design. However, these scales were less sensitive to the feigning of somatic and cognitive TBI symptoms than the feigning of a broad range of cognitive, somatic, and emotional symptoms often associated with TBI. The relationships of TBI simulation to consistency and underreporting scales are also explored.
Aerodynamic Simulation of the MARINTEK Braceless Semisubmersible Wave Tank Tests
NASA Astrophysics Data System (ADS)
Stewart, Gordon; Muskulus, Michael
2016-09-01
Model-scale experiments of floating offshore wind turbines are important both for platform design in industry and for numerical model validation in the research community. An important consideration in wave tank testing of offshore wind turbines is scaling effects, especially the tension between accurate scaling of hydrodynamic and aerodynamic forces. The recent MARINTEK braceless semisubmersible wave tank experiment utilizes a novel aerodynamic force actuator to decouple the scaling of the aerodynamic forces. This actuator consists of an array of motors that pull on cables to provide aerodynamic forces, which are calculated by a blade-element momentum code in real time as the experiment is conducted. This type of system has the advantage of supplying realistically scaled aerodynamic forces that include dynamic forces from platform motion, but it does not provide the insights into the accuracy of the aerodynamic models that an actual model-scale rotor could provide. The modeling of this system presents an interesting challenge, as there are two ways to simulate the aerodynamics: either by using the turbulent wind fields as inputs to the aerodynamic model of the design code, or by bypassing the aerodynamic model and using the forces applied to the experimental turbine as direct inputs to the simulation. This paper investigates best practices for modeling this type of novel aerodynamic actuator using a modified wind turbine simulation tool, and demonstrates that bypassing the dynamic aerodynamics solver of design codes can lead to erroneous results.
Error simulation of paired-comparison-based scaling methods
NASA Astrophysics Data System (ADS)
Cui, Chengwu
2000-12-01
Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
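A sketch of the kind of simulation described is shown below: inject binomial choice errors into paired-comparison data and measure the spread of the recovered scale values, Thurstone Case V style. The true scale values, number of judgments per pair, and replication count are arbitrary choices for illustration, not the paper's settings.

```python
# Monte Carlo of scaling error in paired-comparison (Thurstone Case V) data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
true = np.array([0.0, 0.4, 0.9, 1.3, 2.0])    # true stimulus scale values
n_stim, n_obs, n_rep = len(true), 30, 200     # judgments per pair, replications

errors = []
for _ in range(n_rep):
    z = np.zeros((n_stim, n_stim))
    for i in range(n_stim):
        for j in range(i + 1, n_stim):
            p_true = norm.cdf((true[i] - true[j]) / np.sqrt(2))  # Case V model
            wins = rng.binomial(n_obs, p_true)        # binomial choice errors
            p_hat = np.clip(wins / n_obs, 0.5 / n_obs, 1 - 0.5 / n_obs)
            z[i, j] = norm.ppf(p_hat)                 # observed z-score
            z[j, i] = -z[i, j]
    est = np.sqrt(2) * z.mean(axis=1)                 # recovered scale values
    errors.append(np.std((est - est.mean()) - (true - true.mean())))

# Average SD of scaled-value error; grows as n_obs and n_stim shrink.
print("average SD of scaled-value error:", round(float(np.mean(errors)), 3))
```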
Introducing CGOLS: The Cholla Galactic Outflow Simulation Suite
NASA Astrophysics Data System (ADS)
Schneider, Evan E.; Robertson, Brant E.
2018-06-01
We present the Cholla Galactic OutfLow Simulations (CGOLS) suite, a set of extremely high resolution global simulations of isolated disk galaxies designed to clarify the nature of multiphase structure in galactic winds. Using the GPU-based code Cholla, we achieve unprecedented resolution in these simulations, modeling galaxies over a 20 kpc region at a constant resolution of 5 pc. The simulations include a feedback model designed to test the effects of different mass- and energy-loading factors on galactic outflows over kiloparsec scales. In addition to describing the simulation methodology in detail, we also present the results from an adiabatic simulation that tests the frequently adopted analytic galactic wind model of Chevalier & Clegg. Our results indicate that the Chevalier & Clegg model is a good fit to nuclear starburst winds in the nonradiative region of parameter space. Finally, we investigate the role of resolution and convergence in large-scale simulations of multiphase galactic winds. While our largest-scale simulations show convergence of observable features like soft X-ray emission, our tests demonstrate that simulations of this kind with resolutions greater than 10 pc are not yet converged, confirming the need for extreme resolution in order to study the structure of winds and their effects on the circumgalactic medium.
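The mass- and energy-loading factors referenced above, together with the Chevalier & Clegg asymptotic wind speed, can be summarized as follows; these are the standard relations, not CGOLS-specific values:

\dot{M}_{\mathrm{wind}} = \beta\,\dot{M}_{\mathrm{SFR}}, \qquad \dot{E}_{\mathrm{wind}} = \alpha\,\dot{E}_{\mathrm{SN}}, \qquad v_\infty \simeq \sqrt{2\dot{E}_{\mathrm{wind}}/\dot{M}_{\mathrm{wind}}}

so that varying \alpha and \beta in the feedback model directly sets the asymptotic speed and thermal content of the hot wind phase.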
RANS Simulation (Virtual Blade Model [VBM]) of Single Lab Scaled DOE RM1 MHK Turbine
Javaherchi, Teymour; Stelzenmuller, Nick; Aliseda, Alberto; Seydel, Joseph
2014-04-15
Attached are the .cas and .dat files for the Reynolds Averaged Navier-Stokes (RANS) simulation of a single lab-scaled DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a re-designed geometry, based on the full-scale DOE RM1 design, producing the same power output as the full-scale model while operating at matched Tip Speed Ratio values at achievable laboratory Reynolds numbers (see attached paper). In this case study the flow field around and in the wake of the lab-scaled DOE RM1 turbine is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled. The effect of the turbine's rotating blades is modeled using Blade Element Theory. This simulation provides an accurate estimate of the performance of the device and the structure of its turbulent far wake. Due to the simplifications implemented for modeling the rotating blades, VBM cannot capture details of the flow field in the near-wake region of the device. The required User Defined Functions (UDFs) and look-up table of lift and drag coefficients are included along with the .cas and .dat files.
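The sketch below shows the kind of per-element force evaluation a Virtual Blade Model injects as momentum sources instead of resolving blade geometry. The chord, twist, and Cl/Cd table are illustrative placeholders, not the DOE RM1 look-up table, and the nearest-entry lookup stands in for proper interpolation.

```python
# Blade-element force evaluation (turbine sign convention).
import math

CL_CD = {0: (0.2, 0.010), 4: (0.7, 0.012), 8: (1.1, 0.020), 12: (1.3, 0.045)}

def lookup(aoa_deg):
    """Nearest-entry lookup standing in for table interpolation."""
    key = min(CL_CD, key=lambda a: abs(a - aoa_deg))
    return CL_CD[key]

def element_forces(u_axial, omega, r, chord, twist_deg, rho=998.0, dr=0.05):
    """Thrust and torque contribution of one blade element at radius r."""
    u_tan = omega * r                                 # blade speed
    w = math.hypot(u_axial, u_tan)                    # relative flow speed
    phi = math.atan2(u_axial, u_tan)                  # inflow angle (rad)
    aoa = math.degrees(phi) - twist_deg               # local angle of attack
    cl, cd = lookup(aoa)
    q = 0.5 * rho * w ** 2 * chord * dr               # dynamic pressure * area
    lift, drag = q * cl, q * cd
    dT = lift * math.cos(phi) + drag * math.sin(phi)        # axial load
    dQ = (lift * math.sin(phi) - drag * math.cos(phi)) * r  # driving torque
    return dT, dQ

print(element_forces(u_axial=1.0, omega=10.0, r=0.2, chord=0.06, twist_deg=5.0))
```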
The Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, databases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
Everson, Naleya; Levett-Jones, Tracy; Pitt, Victoria; Lapkin, Samuel; Van Der Riet, Pamela; Rossiter, Rachel; Jones, Donovan; Gilligan, Conor; Courtney Pratt, Helen
2018-04-25
Abstract Background Empathic concern has been found to decline in health professional students. Few effective educational programs have been reported, and there is a lack of validated scales. Previous analyses of the Empathic Concern scale of the Emotional Response Questionnaire have reported both one and two latent constructs. Aim To evaluate the impact of simulation on nursing students' empathic concern and to test the psychometric properties of the Empathic Concern scale. Methods The study used a one-group pre-test/post-test design with a convenience sample of 460 nursing students. Empathic concern was measured pre- and post-simulation with the Empathic Concern scale. Factor analysis was undertaken to investigate the structure of the scale. Results There was a statistically significant increase in Empathic Concern scores between pre-simulation (mean 5.57, SD = 1.04) and post-simulation (mean 6.10, SD = 0.95). Factor analysis of the Empathic Concern scale identified one latent dimension. Conclusion Immersive simulation may promote empathic concern. The Empathic Concern scale measured a single latent construct in this cohort.
Challenges of NDE Simulation Tool
NASA Technical Reports Server (NTRS)
Leckey, Cara A. C.; Juarez, Peter D.; Seebo, Jeffrey P.; Frank, Ashley L.
2015-01-01
Realistic nondestructive evaluation (NDE) simulation tools enable inspection optimization and predictions of inspectability for new aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of advanced aerospace components, potentially shortening the time from material development to implementation by industry and government. Furthermore, modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided wave based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation cannot rapidly simulate damage detection techniques for large-scale, complex-geometry composite components/vehicles with realistic damage types. This paper discusses some of the challenges of model development and validation for composites, such as the level of realism and scale of simulation needed for NASA's applications. Ongoing model development work is described along with examples of model validation studies. The paper will also discuss examples of the use of simulation tools at NASA to develop new damage characterization methods, and associated challenges of validating those methods.
Unver, Vesile; Basak, Tulay; Watts, Penni; Gaioso, Vanessa; Moss, Jacqueline; Tastan, Sevinc; Iyigun, Emine; Tosun, Nuran
2017-02-01
The purpose of this study was to adapt the "Student Satisfaction and Self-Confidence in Learning Scale" (SCLS), "Simulation Design Scale" (SDS), and "Educational Practices Questionnaire" (EPQ) developed by Jeffries and Rizzolo into Turkish and to establish the reliability and validity of these translated scales. A sample of 87 nursing students participated in this study. These scales were cross-culturally adapted through a process including translation, comparison with the original version, back translation, and pretesting. Construct validity was evaluated by factor analysis, and criterion validity was evaluated using the Perceived Learning Scale, Patient Intervention Self-confidence/Competency Scale, and Educational Belief Scale. Cronbach's alpha values were found to be 0.77-0.85 for SCLS, 0.73-0.86 for SDS, and 0.61-0.86 for EPQ. The results of this study show that the Turkish versions of all scales are valid and reliable measurement tools.
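For readers unfamiliar with the reliability statistic reported above, a minimal sketch of the Cronbach's alpha computation follows (the data are synthetic; the 87-respondent shape mirrors the study's sample size, but the item values are invented).

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(87, 1))                 # 87 respondents, as in the study
items = latent + 0.8 * rng.normal(size=(87, 5))   # 5 correlated items (synthetic)
print(round(cronbach_alpha(items), 2))
```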
Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake
2017-03-24
Complex physics and long computation times hinder the adoption of computer-aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation-of-time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to the particle, electrode, and cell length scales treated in the previous work, the present formulation extends to the bus bar and multi-cell module length scales. We provide example simulations for several variants of GH electrode-domain models.
Nonlocal and collective relaxation in stellar systems
NASA Technical Reports Server (NTRS)
Weinberg, Martin D.
1993-01-01
The modal response of stellar systems to fluctuations at large scales is presently investigated by means of analytic theory and n-body simulation; the stochastic excitation of these modes is shown to increase the relaxation rate even for a system which is moderately far from instability. The n-body simulations, when designed to suppress relaxation at small scales, clearly show the effects of large-scale fluctuations. It is predicted that large-scale fluctuations will be largest for such marginally bound systems as forming star clusters and associations.
Towards Application of NASA Standard for Models and Simulations in Aeronautical Design Process
NASA Astrophysics Data System (ADS)
Vincent, Luc; Dunyach, Jean-Claude; Huet, Sandrine; Pelissier, Guillaume; Merlet, Joseph
2012-08-01
Even powerful computational techniques like simulation have limitations in their validity domain. Consequently, using simulation models requires caution to avoid making biased design decisions for new aeronautical products on the basis of inadequate simulation results. The fidelity, accuracy, and validity of simulation models must therefore be monitored in context throughout the design phases to build confidence that the goals of modelling and simulation are achieved. In the CRESCENDO project, we adapt the Credibility Assessment Scale method from the NASA standard for models and simulations, developed for the space programme, to aircraft design in order to assess the quality of simulations. The proposed eight quality-assurance metrics aggregate information to indicate the level of confidence in results. They are displayed in a management dashboard and can secure design trade-off decisions at programme milestones. The application of this technique is illustrated in an aircraft design context with a specific thermal Finite Element Analysis. This use case shows how to judge the fitness-for-purpose of simulation as a virtual testing means and then green-light the continuation of the Simulation Lifecycle Management (SLM) process.
Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott
2017-11-01
Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, "non-intrusive" LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.
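For orientation, one common statement of the LSS problem (following the least-squares shadowing literature; this is a sketch of the general formulation, not necessarily the exact variant used in the talk) is the constrained minimization

```latex
\min_{v,\,\eta}\ \frac{1}{2}\int_0^T \left(\|v\|^2 + \alpha^2\eta^2\right)dt
\quad\text{subject to}\quad
\frac{dv}{dt} = \frac{\partial f}{\partial u}\,v + \frac{\partial f}{\partial s} + \eta\,f(u,s),
```

where du/dt = f(u,s) is the chaotic flow, s a design parameter, v the trajectory perturbation, and η a time-dilation variable; the sensitivity of a long-time-averaged objective then follows from the minimizing v and η. The non-intrusive variant restricts this minimization to the span of the unstable modes, which allows it to wrap an existing solver rather than require modifications to it.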
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Weizhao; Ren, Huaqing; Wang, Zequn
2016-10-19
An integrated computational materials engineering method is proposed in this paper for analyzing the design and preforming process of woven carbon fiber composites. The goal is to reduce the cost and time needed for the mass production of structural composites. It integrates simulation methods from the micro-scale to the macro-scale to capture the behavior of the composite material in the preforming process. In this way, the time-consuming and high-cost physical experiments and prototypes in the development of the manufacturing process can be circumvented. This method contains three parts: the micro-scale representative volume element (RVE) simulation to characterize the material; the metamodeling algorithm to generate the constitutive equations; and the macro-scale preforming simulation to predict the behavior of the composite material during forming. The results show the potential of this approach as guidance for the design of composite materials and their manufacturing process.
NASA Technical Reports Server (NTRS)
Brown, Christopher A.
1993-01-01
The approach of the project is to base the design of multi-function, reflective topographies on the theory that topographically dependent phenomena react with surfaces and interfaces at certain scales. The first phase of the project emphasizes the development of methods for understanding the sizes of topographic features which influence reflectivity. Subsequent phases, if necessary, will address the scales of interaction for adhesion and manufacturing processes. A simulation of the interaction of electromagnetic radiation, or light, with a reflective surface is performed using specialized software. Reflectivity of the surface as a function of scale is evaluated and the results from the simulation are compared with reflectivity measurements made on multi-function, reflective surfaces.
Development of fire test methods for airplane interior materials
NASA Technical Reports Server (NTRS)
Tustin, E. A.
1978-01-01
Fire tests were conducted in a 737 airplane fuselage at NASA-JSC to characterize jet fuel fires in open steel pans (simulating post-crash fire sources and a ruptured airplane fuselage) and to characterize fires in some common combustibles (simulating in-flight fire sources). Design post-crash and in-flight fire source selections were based on these data. Large panels of airplane interior materials were exposed to closely-controlled large scale heating simulations of the two design fire sources in a Boeing fire test facility utilizing a surplused 707 fuselage section. Small samples of the same airplane materials were tested by several laboratory fire test methods. Large scale and laboratory scale data were examined for correlative factors. Published data for dangerous hazard levels in a fire environment were used as the basis for developing a method to select the most desirable material where trade-offs in heat, smoke and gaseous toxicant evolution must be considered.
A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator
Engelmann, Christian; Naughton, III, Thomas J.
2016-03-22
Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
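For context, overhead percentages of this kind are conventionally defined relative to native (non-simulated) execution time; assuming that convention here,

```latex
\text{overhead} = \frac{t_{\mathrm{sim}} - t_{\mathrm{native}}}{t_{\mathrm{native}}} \times 100\%,
```

so a reduction from 1,020% to 238% for CG means a simulated run went from roughly 11.2 times to roughly 3.4 times the native runtime.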
Menzies, Kevin
2014-08-13
The growth in simulation capability over the past 20 years has led to remarkable changes in the design process for gas turbines. The availability of relatively cheap computational power coupled to improvements in numerical methods and physical modelling in simulation codes have enabled the development of aircraft propulsion systems that are more powerful and yet more efficient than ever before. However, the design challenges are correspondingly greater, especially to reduce environmental impact. The simulation requirements to achieve a reduced environmental impact are described along with the implications of continued growth in available computational power. It is concluded that achieving the environmental goals will demand large-scale multi-disciplinary simulations requiring significantly increased computational power, to enable optimization of the airframe and propulsion system over the entire operational envelope. However even with massive parallelization, the limits imposed by communications latency will constrain the time required to achieve a solution, and therefore the position of such large-scale calculations in the industrial design process. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, K; Seymour, R; Wang, W
2009-02-17
A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on hybrid implementation combining message passing and critical section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops·day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microsecond).
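Reading this figure of merit at face value (an assumption about its units), NT is the product of system size in millions of atoms and simulated time in microseconds per petaflops·day of computing, so

```latex
NT = 2.14 \;\Rightarrow\; T \approx \frac{2.14}{N/10^{6}}\ \mu\mathrm{s}\ \text{per petaflops}\cdot\text{day};
```

e.g., a 214-million-atom system would advance about 0.01 μs of simulated dynamics per petaflops·day.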
Binary optical filters for scale invariant pattern recognition
NASA Technical Reports Server (NTRS)
Reid, Max B.; Downie, John D.; Hine, Butler P.
1992-01-01
Binary synthetic discriminant function (BSDF) optical filters which are invariant to scale changes in the target object of more than 50 percent are demonstrated in simulation and experiment. Efficient databases of scale invariant BSDF filters can be designed which discriminate between two very similar objects at any view scaled over a factor of 2 or more. The BSDF technique has considerable advantages over other methods for achieving scale invariant object recognition, as it also allows determination of the object's scale. In addition to scale, the technique can be used to design recognition systems invariant to other geometric distortions.
Computational analysis of fluid dynamics in pharmaceutical freeze-drying.
Alexeenko, Alina A; Ganguly, Arnab; Nail, Steven L
2009-09-01
Analysis of water vapor flows encountered in pharmaceutical freeze-drying systems, laboratory-scale and industrial, is presented based on computational fluid dynamics (CFD) techniques. The flows under continuum gas conditions are analyzed using the solution of the Navier-Stokes equations, whereas the rarefied flow solutions are obtained by the direct simulation Monte Carlo (DSMC) method for the Boltzmann equation. Examples of the application of CFD techniques to laboratory-scale and industrial-scale freeze-drying processes are discussed, with an emphasis on the utility of CFD for improvement of design and experimental characterization of pharmaceutical freeze-drying hardware and processes. The current article presents a two-dimensional simulation of a laboratory-scale dryer, with an emphasis on the importance of drying conditions and hardware design on process control, and a three-dimensional simulation of an industrial dryer, containing a comparison of the obtained results with analytical viscous flow solutions. It was found that the presence of clean-in-place (CIP)/sterilize-in-place (SIP) piping in the duct led to significant changes in the flow field characteristics. The simulation results for vapor flow rates in an industrial freeze-dryer have been compared to tunable diode laser absorption spectroscopy (TDLAS) and gravimetric measurements.
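The continuum-versus-rarefied split described above is conventionally decided with the Knudsen number. The sketch below is illustrative only: the water-vapor molecular diameter and the regime thresholds are textbook approximations, not values from the paper.

```python
import math

def knudsen(T, p, L, d=4.6e-10):
    """Kn = mean free path / characteristic length.

    T [K], p [Pa], L [m]; d is an effective molecular diameter
    (~4.6e-10 m assumed here for water vapor).
    """
    k_B = 1.380649e-23
    mfp = k_B * T / (math.sqrt(2.0) * math.pi * d**2 * p)
    return mfp / L

# Typical primary-drying conditions: ~250 K, ~10 Pa chamber pressure.
for L in (0.5, 0.01, 0.001):          # duct, vial neck, pore scale [m]
    kn = knudsen(250.0, 10.0, L)
    regime = ("continuum (Navier-Stokes)" if kn < 0.01
              else "slip/transitional (DSMC)" if kn < 10
              else "free molecular")
    print(f"L = {L} m: Kn = {kn:.3g} -> {regime}")
```

At these pressures the mean free path is a few tenths of a millimeter, so duct-scale flow is near-continuum while small passages fall into the rarefied regime, which is why both Navier-Stokes and DSMC solvers appear in the study.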
Mantle convection on modern supercomputers
NASA Astrophysics Data System (ADS)
Weismüller, Jens; Gmeiner, Björn; Mohr, Marcus; Waluga, Christian; Wohlmuth, Barbara; Rüde, Ulrich; Bunge, Hans-Peter
2015-04-01
Mantle convection is the cause for plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic to mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, design and implementation of simulation models for next generation large-scale architectures demand an interdisciplinary co-design. Here we report about recent advances of the TERRA-NEO project, which is part of the high visibility SPPEXA program, and a joint effort of four research groups in computer sciences, mathematics and geophysical application under the leadership of FAU Erlangen. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10^12 grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection assessing the impact of small scale processes on global mantle flow.
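For orientation, the conservation laws mentioned above are commonly written in non-dimensional Boussinesq form (a textbook statement, not necessarily TERRA-NEO's exact rheological model):

```latex
\nabla\cdot\mathbf{u} = 0, \qquad
-\nabla p + \nabla\cdot\!\left[\eta\left(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf T}\right)\right]
= \mathrm{Ra}\,T\,\hat{\mathbf{e}}_r, \qquad
\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T = \nabla^{2} T + H,
```

with velocity u, pressure p, temperature T, viscosity η, Rayleigh number Ra, and internal heating H. The very large Ra of Earth's mantle is what produces the disparity of length scales noted above.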
Wind-tunnel simulation of store jettison with the aid of magnetic artificial gravity
NASA Technical Reports Server (NTRS)
Stephens, T.; Adams, R.
1972-01-01
A method employed in the simulation of jettison of stores from aircraft involving small scale wind-tunnel drop tests from a model of the parent aircraft is described. Proper scaling of such experiments generally dictates that the gravitational acceleration should ideally be a test variable. A method of introducing a controllable artificial component of gravity by magnetic means has been proposed. The use of a magnetic artificial gravity facility based upon this idea, in conjunction with small scale wind-tunnel drop tests, would improve the accuracy of simulation. A review of the scaling laws as they apply to the design of such a facility is presented. The design constraints involved in the integration of such a facility with a wind tunnel are defined. A detailed performance analysis procedure applicable to such a facility is developed. A practical magnet configuration is defined which is capable of controlling the strength and orientation of the magnetic artificial gravity field in the vertical plane, thereby allowing simulation of store jettison from a diving or climbing aircraft. The factors involved in the choice between continuous or intermittent operation of the facility, and the use of normal or superconducting magnets, are defined.
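A standard similitude argument (a sketch of the reasoning, not necessarily the report's exact derivation) shows why gravity must become a test variable: dynamic similarity of a falling store requires matching the Froude number,

```latex
Fr = \frac{V^{2}}{gL}, \qquad
\frac{V_m^{2}}{g_m L_m} = \frac{V_f^{2}}{g_f L_f}
\;\Rightarrow\;
g_m = g_f \left(\frac{V_m}{V_f}\right)^{2} \frac{L_f}{L_m},
```

so, for example, a 1/20-scale model tested at half the full-scale speed would need an effective gravity of 9.81 × (1/2)² × 20 ≈ 49 m/s², five times normal gravity, which a magnetic artificial-gravity field could supply.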
Aerodynamic design on high-speed trains
NASA Astrophysics Data System (ADS)
Ding, San-San; Li, Qiang; Tian, Ai-Qin; Du, Jian; Liu, Jia-Li
2016-04-01
Compared with the traditional train, the operational speed of the high-speed train has largely improved, and the dynamic environment of the train has changed from one of mechanical domination to one of aerodynamic domination. The aerodynamic problem has become the key technological challenge of high-speed trains and significantly affects the economy, environment, safety, and comfort. In this paper, the relationships among the aerodynamic design principle, aerodynamic performance indexes, and design variables are first studied, and the research methods of train aerodynamics are proposed, including numerical simulation, a reduced-scale test, and a full-scale test. Technological schemes of train aerodynamics involve the optimization design of the streamlined head and the smooth design of the body surface. Optimization design of the streamlined head includes conception design, project design, numerical simulation, and a reduced-scale test. Smooth design of the body surface is mainly used for the key parts, such as electric-current collecting system, wheel truck compartment, and windshield. The aerodynamic design method established in this paper has been successfully applied to various high-speed trains (CRH380A, CRH380AM, CRH6, CRH2G, and the Standard electric multiple unit (EMU)) that have met expected design objectives. The research results can provide an effective guideline for the aerodynamic design of high-speed trains.
Cold Flow Testing for Liquid Propellant Rocket Injector Scaling and Throttling
NASA Technical Reports Server (NTRS)
Kenny, Jeremy R.; Moser, Marlow D.; Hulka, James; Jones, Gregg
2006-01-01
Scaling and throttling of combustion devices are important capabilities to demonstrate in the development of liquid rocket engines for NASA's Space Exploration Mission. Scaling provides the ability to design new injectors and injection elements with predictable performance on the basis of test experience with existing injectors and elements, and could be a key aspect of future development programs. Throttling is the reduction of thrust with fixed designs and is a critical requirement in lunar and other planetary landing missions. A task in the Constellation University Institutes Program (CUIP) has been designed to evaluate spray characteristics when liquid propellant rocket engine injectors are scaled and throttled. The specific objectives of the present study are to characterize injection and primary atomization using cold flow simulations of the reacting sprays. These simulations can provide relevant information because injection and primary atomization are believed to be the spray processes least affected by the propellant reaction. Cold flow studies also provide acceptable test conditions for a university environment. Three geometric scales - 1/4-scale, 1/2-scale, and full-scale - of two different injector element types - swirl coaxial and shear coaxial - will be designed, fabricated, and tested. A literature review is currently being conducted to revisit and compile the previous scaling documentation. Because it is simple to perform, throttling will also be examined in the present work by measuring primary atomization characteristics as the mass flow rate and pressure drop of the six injector element concepts are reduced, with corresponding changes in chamber backpressure. Simulants will include water and gaseous nitrogen, and an optically accessible chamber will be used for visual and laser-based diagnostics. The chamber will include curtain flow capability to suppress recirculation, and additional gas injection to provide independent control of the backpressure. This paper provides a short review of the appropriate literature, as well as descriptions of plans for experimental hardware, test chamber instrumentation, diagnostics, and testing.
Formalizing Knowledge in Multi-Scale Agent-Based Simulations
Somogyi, Endre; Sluka, James P.; Glazier, James A.
2017-01-01
Multi-scale, agent-based simulations of cellular and tissue biology are increasingly common. These simulations combine and integrate a range of components from different domains. Simulations continuously create, destroy and reorganize constituent elements causing their interactions to dynamically change. For example, the multi-cellular tissue development process coordinates molecular, cellular and tissue scale objects with biochemical, biomechanical, spatial and behavioral processes to form a dynamic network. Different domain specific languages can describe these components in isolation, but cannot describe their interactions. No current programming language is designed to represent in human readable and reusable form the domain specific knowledge contained in these components and interactions. We present a new hybrid programming language paradigm that naturally expresses the complex multi-scale objects and dynamic interactions in a unified way and allows domain knowledge to be captured, searched, formalized, extracted and reused. PMID:29338063
Joint Live Fire (JLF) Final Report for Instrumentation for Local Accelerative Loading
2016-07-22
... test designs and results prior to full-scale testing. Correlating simulation to test data can aid in increasing confidence in the models to further ... test and test-to-simulation with the current instrumentation used during testing. Recent advances in accelerometer design must be evaluated and ...
HRLSim: a high performance spiking neural network simulator for GPGPU clusters.
Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan
2014-02-01
Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.
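As a point of reference for what such a simulator integrates, here is a minimal leaky integrate-and-fire update (a generic textbook sketch in Python, not HRLSim code; all parameters are illustrative).

```python
import numpy as np

def lif_step(v, i_syn, dt=1e-4, tau=0.02, v_rest=-0.065,
             v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """One Euler step for a population of LIF neurons.

    v: membrane potentials [V]; i_syn: synaptic currents [A].
    Returns updated potentials and a boolean spike mask.
    """
    v = v + (-(v - v_rest) + r_m * i_syn) * (dt / tau)
    spikes = v >= v_thresh
    v = np.where(spikes, v_reset, v)          # reset neurons that fired
    return v, spikes

rng = np.random.default_rng(0)
v = np.full(1000, -0.065)                     # 1000 neurons at rest
i_syn = rng.uniform(0.0, 8e-9, size=v.size)   # fixed drive per neuron
total_spikes = 0
for _ in range(200):                          # 20 ms of simulated time
    v, spikes = lif_step(v, i_syn)
    total_spikes += int(spikes.sum())
print(total_spikes, "spikes in 20 ms")
```

A GPGPU-cluster simulator like the one described distributes millions of such state updates and the resulting spike communication across devices; the numerical core remains this simple.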
Geotechnical Centrifuge Experiments to Evaluate Piping in Foundation Soils
2014-05-01
... verifiable results. These tests were successful in design, construction, and execution of a realistic simulation of internal erosion leading to failure ... possible “scale effects,” a “modeling of models” testing protocol should be included in the test program. Also, the model design should minimize the scale ... recommendations for improving the centrifuge tests include the following: design an improved system for reservoir control to provide definitive and ...
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiu, Dongbin
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wardle, Kent E.; Frey, Kurt; Pereira, Candido
2014-02-02
This task is aimed at predictive modeling of solvent extraction processes in typical extraction equipment through multiple simulation methods at various scales of resolution. We have conducted detailed continuum fluid dynamics simulations at the process unit level as well as simulations of the molecular-level physical interactions which govern extraction chemistry. By combining information gained through simulations at each of these two tiers, along with advanced techniques such as the Lattice Boltzmann Method (LBM) which can bridge these two scales, we can develop the tools to work towards predictive simulation for solvent extraction on the equipment scale (Figure 1). The goal of such a tool, along with enabling optimized design and operation of extraction units, would be to allow prediction of stage extraction efficiency under specified conditions. Simulation efforts on each of the two scales will be described below. As the initial application of FELBM in the work performed during FY10 has been on annular mixing, it will be discussed in the context of the continuum scale. In the future, however, it is anticipated that the real value of FELBM will be in its use as a tool for sub-grid model development through highly refined DNS-like multiphase simulations, facilitating exploration and development of droplet models, including breakup and coalescence, which will be needed for the large-scale simulations where droplet-level physics cannot be resolved. In this area, it can have a significant advantage over traditional CFD methods, as its high computational efficiency allows exploration of significantly greater physical detail, especially as computational resources increase in the future.
The Numerical Propulsion System Simulation: A Multidisciplinary Design System for Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Lytle, John K.
1999-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of the design process and to provide the designer with critical information about the components early in the design process. This paper describes the development of the Numerical Propulsion System Simulation (NPSS), a multidisciplinary system of analysis tools that is focused on extending the simulation capability from components to the full system. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
Wide band design on the scaled absorbing material filled with flaky CIPs
NASA Astrophysics Data System (ADS)
Xu, Yonggang; Yuan, Liming; Gao, Wei; Wang, Xiaobing; Liang, Zichang; Liao, Yi
2018-02-01
Scaled-target measurement is an important method for obtaining target characteristics. Radar-absorbing materials are widely used on low-detectability targets; because of the frequency-dispersion characteristics of absorbing materials, designing and manufacturing scaled radar-absorbing materials for a scaled target is very difficult. This paper proposes a wide-band design method for the scaled absorbing material of a thin absorption coating with added carbonyl iron particles. According to the theoretical radar cross section (RCS) of the plate, the reflection loss determined by the permittivity and permeability was chosen as the main design factor. Then, the parameters of the scaled absorbing materials were designed using effective medium theory, and the scaled absorbing material was constructed. Finally, the full-size coating plate and scaled coating plates (under three different scale factors) were simulated; the RCSs of the coating plates were numerically calculated and measured at 4 GHz and a scale factor of 2. The results showed that the compensated RCS of the scaled coating plate was close to that of the full-size coating plate, that is, the mean deviation was less than 0.5 dB, and the design method for the scaled material was very effective.
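The "compensated RCS" mentioned above follows from the standard electromagnetic model-scaling relations (stated here as the usual textbook rules, which appear consistent with the 4 GHz / factor-of-2 setup): for a geometric scale factor s, the model is measured at the scaled-up frequency and its RCS is compensated by s²,

```latex
f_m = s\,f_f, \qquad \sigma_f = s^{2}\,\sigma_m
\quad\Longleftrightarrow\quad
\sigma_f\,[\mathrm{dBsm}] = \sigma_m\,[\mathrm{dBsm}] + 20\log_{10} s,
```

so with s = 2 the 4 GHz measurement of the half-size plate corresponds to 2 GHz on the full-size plate, with a +6.02 dB compensation. The difficulty addressed by the paper is that the coating's permittivity and permeability do not obey this simple scaling, hence the need for a purpose-designed scaled material.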
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip
This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
The Energy-Environment Simulator as a Classroom Aid.
ERIC Educational Resources Information Center
Sell, Nancy J.; Van Koevering, Thomas E.
1981-01-01
Energy-Environment Simulators, provided by the U.S. Department of Energy, can be used to help individuals experience the effects of unbridled energy consumption for the next century on a national or worldwide scale. The simulator described is a specially designed analog computer which models the real-world energy situation. (MP)
Performance of a pilot-scale constructed wetland system for treating simulated ash basin water.
Dorman, Lane; Castle, James W; Rodgers, John H
2009-05-01
A pilot-scale constructed wetland treatment system (CWTS) was designed and built to decrease the concentration and toxicity of constituents of concern in ash basin water from coal-burning power plants. The CWTS was designed to promote the following treatment processes for metals and metalloids: precipitation as non-bioavailable sulfides, co-precipitation with iron oxyhydroxides, and adsorption onto iron oxides. Concentrations of Zn, Cr, Hg, As, and Se in simulated ash basin water were reduced by the CWTS to less than USEPA-recommended water quality criteria. The removal efficiency (defined as the percent concentration decrease from influent to effluent) was dependent on the influent concentration of the constituent, while the extent of removal (defined as the concentration of a constituent of concern in the CWTS effluent) was independent of the influent concentration. Results from toxicity experiments illustrated that the CWTS eliminated influent toxicity with regard to survival and reduced influent toxicity with regard to reproduction. Reduction in potential for scale formation and biofouling was achieved through treatment of the simulated ash basin water by the pilot-scale CWTS.
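In the paper's own terms, the two metrics can be written as (restating the definitions given above)

```latex
\text{removal efficiency (\%)} = \frac{C_{\mathrm{in}} - C_{\mathrm{out}}}{C_{\mathrm{in}}}\times 100,
\qquad
\text{extent of removal} = C_{\mathrm{out}},
```

which makes explicit why the first can depend on the influent concentration while the second need not.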
Ultra-dense magnetoresistive mass memory
NASA Technical Reports Server (NTRS)
Daughton, J. M.; Sinclair, R.; Dupuis, T.; Brown, J.
1992-01-01
This report details the progress and accomplishments of Nonvolatile Electronics (NVE), Inc., on the design of the wafer-scale MRAM mass memory system during the fifth quarter of the project. NVE has made significant progress this quarter on the one-megabit design in several different areas. A test chip, which will verify a working GMR bit with the dimensions required by the 1 Meg chip, has been designed, laid out, and is currently being processed in the NVE labs. This test chip will allow electrical specifications, tolerances, and processing issues to be finalized before construction of the actual chip, thus providing a greater assurance of success of the final 1 Meg design. A model has been developed to accurately simulate the parasitic effects of unselected sense lines. This model gives NVE the ability to perform accurate simulations of the array electronics and to test different design concepts. Much of the circuit design for the 1 Meg chip has been completed and simulated, and these designs are included. Progress has been made in the wafer-scale design area to verify the reliable operation of the 16 K macrocell. This is currently being accomplished with the design and construction of two stand-alone test systems which will perform life tests and gather data on reliability and wearout mechanisms for analysis.
Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
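A small sketch of the kind of metric such instrumentation exposes: the "event efficiency" of an optimistic PDES run, defined here by one common convention (ROSS's exact bookkeeping may differ) from committed versus rolled-back event counts. The counter values are hypothetical.

```python
def event_efficiency(committed_events, rolled_back_events):
    """Fraction of processed events that were useful (not rolled back)."""
    processed = committed_events + rolled_back_events
    return committed_events / processed if processed else 1.0

# Hypothetical per-processing-element counters gathered by instrumentation:
counters = [(1_200_000, 40_000), (1_150_000, 310_000), (1_180_000, 90_000)]
for pe, (committed, rolled_back) in enumerate(counters):
    print(f"PE {pe}: efficiency = {event_efficiency(committed, rolled_back):.3f}")
```

A per-PE view like this is exactly what helps locate the load imbalance or lookahead problems that drive rollbacks.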
Numerical Propulsion System Simulation (NPSS) 1999 Industry Review
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Greg; Naiman, Cynthia; Evans, Austin
2000-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. In addition, the paper contains a summary of the feedback received from industry partners in the development effort and the actions taken over the past year to respond to that feedback. The NPSS development was supported in FY99 by the High Performance Computing and Communications Program.
Numerical Simulations of Subscale Wind Turbine Rotor Inboard Airfoils at Low Reynolds Number
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaylock, Myra L.; Maniaci, David Charles; Resor, Brian R.
2015-04-01
New blade designs are planned to support future research campaigns at the SWiFT facility in Lubbock, Texas. The sub-scale blades will reproduce specific aerodynamic characteristics of utility-scale rotors. Reynolds numbers for megawatt-, utility-scale rotors are generally above 2-8 million. The thickness of inboard airfoils for these large rotors is typically as high as 35-40%. The thickness and the proximity to three-dimensional flow of these airfoils present design and analysis challenges, even at the full scale. However, more than a decade of experience with the airfoils in numerical simulation, in the wind tunnel, and in the field has generated confidence in their performance. Reynolds number regimes for the sub-scale rotor are significantly lower for the inboard blade, ranging from 0.7 to 1 million. Performance of the thick airfoils in this regime is uncertain because of the lack of wind tunnel data and the inherent challenge associated with numerical simulations. This report documents efforts to determine the most capable analysis tools to support these simulations in an effort to improve understanding of the aerodynamic properties of thick airfoils in this Reynolds number regime. Numerical results from various codes for four airfoils are verified against previously published wind tunnel results where data at those Reynolds numbers are available. Results are then computed for other Reynolds numbers of interest.
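A quick check of the Reynolds-number gap the report describes, using the standard chord Reynolds number (the section speeds and chords below are hypothetical values chosen to land in the stated ranges):

```python
def reynolds(V, c, nu=1.46e-5):
    """Chord Reynolds number: Re = V*c/nu (air at ~15 C assumed)."""
    return V * c / nu

# Hypothetical inboard-section conditions: relative speed V [m/s], chord c [m].
print(f"utility-scale: Re = {reynolds(60.0, 1.5):.2e}")   # ~6e6, in the 2-8M range
print(f"sub-scale:     Re = {reynolds(45.0, 0.25):.2e}")  # ~8e5, in the 0.7-1M range
```

Because both speed and chord shrink at sub-scale, Re drops by nearly an order of magnitude, pushing the thick inboard sections into a regime where laminar separation effects become significant and wind tunnel data are scarce.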
Hall, Matthew; Goupee, Andrew; Jonkman, Jason
2017-08-24
Hybrid modeling, combining physical testing and numerical simulation in real time, opens new opportunities in floating wind turbine research. Wave basin testing is an important validation step for floating support structure design, but the conventional approaches that use physical wind above the basin are limited by scaling problems in the aerodynamics. Applying wind turbine loads with an actuation system that is controlled by a simulation responding to the basin test in real time offers a way to avoid scaling problems and reduce cost barriers for floating wind turbine design validation in realistic coupled wind and wave conditions. This paper demonstrates the development of performance specifications for a system that couples a wave basin experiment with a wind turbine simulation. Two different points for the hybrid coupling are considered: the tower-base interface and the aero-rotor interface (the boundary between aerodynamics and the rotor structure). Analyzing simulations of three floating wind turbine designs across seven load cases reveals the motion and force requirements of the coupling system. By simulating errors in the hybrid coupling system, the sensitivity of the floating wind turbine response to coupling quality can be quantified. The sensitivity results can then be used to determine tolerances for motion tracking errors, force actuation errors, bandwidth limitations, and latency in the hybrid coupling system. These tolerances can guide the design of hybrid coupling systems to achieve desired levels of accuracy. An example demonstrates how the developed methods can be used to generate performance specifications for a system at 1:50 scale. Results show that sensitivities vary significantly between support structure designs and that coupling at the aero-rotor interface has less stringent requirements than those for coupling at the tower base. As a result, the methods and results presented here can inform the design of future hybrid coupling systems and enhance understanding of how test results are affected by hybrid coupling quality.
NASA Astrophysics Data System (ADS)
Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.
2018-01-01
This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that the use of E-HYPE with this upscaling methodology enables the simulation of the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale to the correct order-of-magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.
Hierarchical Engine for Large-scale Infrastructure Co-Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-04-24
HELICS is designed to support very-large-scale (100,000+ federates) cosimulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross platform operating system support, the integration of both event driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
Ren, Nanqi; Guo, Wanqian; Liu, Bingfeng; Cao, Guangli; Ding, Jie
2011-06-01
Among different technologies of hydrogen production, bio-hydrogen production exhibits perhaps the greatest potential to replace fossil fuels. Based on recent research on dark fermentative hydrogen production, this article reviews the following aspects towards scaled-up application of this technology: bioreactor development and parameter optimization, process modeling and simulation, exploitation of cheaper raw materials and combining dark-fermentation with photo-fermentation. Bioreactors are necessary for dark-fermentation hydrogen production, so the design of reactor type and optimization of parameters are essential. Process modeling and simulation can help engineers design and optimize large-scale systems and operations. Use of cheaper raw materials will surely accelerate the pace of scaled-up production of biological hydrogen. And finally, combining dark-fermentation with photo-fermentation holds considerable promise, and has successfully achieved maximum overall hydrogen yield from a single substrate. Future development of bio-hydrogen production will also be discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Separate versus Concurrent Calibration Methods in Vertical Scaling.
ERIC Educational Resources Information Center
Karkee, Thakur; Lewis, Daniel M.; Hoskens, Machteld; Yao, Lihua; Haug, Carolyn
Two methods to establish a common scale across grades within a content area using a common item design (separate and concurrent) have previously been studied under simulated conditions. Separate estimation is accomplished through separate calibration and grade-by-grade chained linking. Concurrent calibration established the vertical scale in a…
Design of a high-speed digital processing element for parallel simulation
NASA Technical Reports Server (NTRS)
Milner, E. J.; Cwynar, D. S.
1983-01-01
A prototype of a custom-designed computer to be used as a processing element in a multiprocessor-based jet engine simulator is described. The purpose of the custom design was to give the computer the speed and versatility required to simulate a jet engine in real time. Real-time simulations are needed for closed-loop testing of digital electronic engine controls. The prototype computer has a microcycle time of 133 nanoseconds. This speed was achieved by prefetching the next instruction while the current one is executing, transporting data over high-speed data buses, and using state-of-the-art components such as a very large scale integration (VLSI) multiplier. Included are discussions of processing element requirements, design philosophy, the architecture of the custom-designed processing element, the comprehensive instruction set, the diagnostic support software, and the development status of the custom design.
Ganguly, Arnab; Alexeenko, Alina A; Schultz, Steven G; Kim, Sherry G
2013-10-01
A physics-based model of the sublimation-transport-condensation processes occurring in pharmaceutical freeze-drying, coupling product attributes and equipment capabilities into a unified simulation framework, is presented. The system-level model is used to determine the effect of operating conditions such as shelf temperature, chamber pressure, and load size on the occurrence of choking for a production-scale dryer. Several data sets corresponding to production-scale runs with loads from 120 to 485 L have been compared with simulations. A subset of the data is used for calibration, whereas another data set corresponding to a load of 150 L is used for model validation. The model predictions for both the onset and extent of choking, as well as for the measured product temperature, agree well with the production-scale measurements. Additionally, we study the effect of the resistance to vapor transport presented by the duct with a valve and a baffle in the production-scale freeze-dryer. Computational Fluid Dynamics (CFD) techniques augmented with a system-level unsteady heat and mass transfer model allow dynamic process conditions to be predicted while taking specific dryer designs into consideration. CFD modeling of the flow structure in the duct, presented here for a production-scale freeze-dryer, quantifies the benefit of reducing the obstruction to the flow through several design modifications. It is found that the use of a combined valve-baffle system can increase the vapor flow rate by a factor of 2.2. Moreover, minor design changes, such as moving the baffle downstream by about 10 cm, can increase the flow rate by 54%. The proposed design changes can increase drying rates, improve efficiency, and reduce cycle times due to fewer obstructions in the vapor flow path. The comprehensive simulation framework combining the system-level model and the detailed CFD computations can provide a process analytical tool for more efficient and robust freeze-drying of bio-pharmaceuticals. Copyright © 2013 Elsevier B.V. All rights reserved.
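As a back-of-the-envelope companion to the choking discussion, the sketch below evaluates the standard isentropic choked-flow limit for water vapor through a duct throat and compares it with an assumed sublimation rate. Every numerical value here is an illustrative assumption, not a figure from the paper.

import numpy as np

# Choked (sonic) mass flow through a throat of area A at stagnation P0, T0:
#   mdot_max = A*P0*sqrt(gamma/(Rs*T0)) * (2/(gamma+1))**((gamma+1)/(2*(gamma-1)))
gamma = 1.33          # ratio of specific heats for water vapor (approx.)
Rs = 461.5            # specific gas constant of water vapor, J/(kg K)
T0 = 273.0            # chamber stagnation temperature, K (assumed)
P0 = 20.0             # chamber pressure, Pa (typical primary-drying range)
A = np.pi * 0.15**2   # throat area for an assumed 30 cm diameter duct, m^2

mdot_max = A * P0 * np.sqrt(gamma / (Rs * T0)) \
           * (2 / (gamma + 1))**((gamma + 1) / (2 * (gamma - 1)))

mdot_sublimation = 0.3 / 3600.0   # assumed total sublimation rate, kg/s
print(f"choked-flow limit: {mdot_max*3600:.2f} kg/h")
print("duct is choked" if mdot_sublimation > mdot_max else "duct is not choked")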
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, William Michael; Plimpton, Steven James; Wang, Peng
2010-03-01
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
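For readers who want to try the code, a minimal Lennard-Jones "melt" run can be driven through LAMMPS's Python interface as sketched below. This assumes a LAMMPS build with the shared library and Python package installed; the script content follows the classic melt example rather than anything specific to this record.

# Minimal LJ melt via the LAMMPS Python module.
from lammps import lammps

lmp = lammps()
for cmd in [
    "units lj",
    "atom_style atomic",
    "lattice fcc 0.8442",
    "region box block 0 10 0 10 0 10",
    "create_box 1 box",
    "create_atoms 1 box",
    "mass 1 1.0",
    "velocity all create 3.0 87287",
    "pair_style lj/cut 2.5",
    "pair_coeff 1 1 1.0 1.0 2.5",
    "fix 1 all nve",
    "thermo 100",
]:
    lmp.command(cmd)
lmp.command("run 1000")   # run 1000 MD steps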
Secure web-based invocation of large-scale plasma simulation codes
NASA Astrophysics Data System (ADS)
Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.
2004-12-01
We present our design and initial implementation of a web-based system for running Particle-In-Cell (PIC) plasma simulation codes, both in parallel and in serial, with automatic post-processing and generation of visual diagnostics.
Fully Coupled Simulation of Lithium Ion Battery Cell Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trembacki, Bradley L.; Murthy, Jayathi Y.; Roberts, Scott Alan
Lithium-ion battery particle-scale (non-porous electrode) simulations applied to resolved electrode geometries predict localized phenomena and can lead to better informed decisions on electrode design and manufacturing. This work develops and implements a fully-coupled finite volume methodology for the simulation of the electrochemical equations in a lithium-ion battery cell. The model implementation is used to investigate 3D battery electrode architectures that offer potential energy density and power density improvements over traditional layer-by-layer particle bed battery geometries. Advancement of micro-scale additive manufacturing techniques has made it possible to fabricate these 3D electrode microarchitectures. A variety of 3D battery electrode geometries are simulated and compared across various battery discharge rates and length scales in order to quantify performance trends and investigate geometrical factors that improve battery performance. The energy density and power density of the 3D battery microstructures are compared in several ways, including a uniform surface area to volume ratio comparison as well as a comparison requiring a minimum manufacturable feature size. Significant performance improvements over traditional particle bed electrode designs are observed, and electrode microarchitectures derived from minimal surfaces are shown to be superior. A reduced-order volume-averaged porous electrode theory formulation for these unique 3D batteries is also developed, allowing simulations on the full-battery scale. Electrode concentration gradients are modeled using the diffusion length method, and results for plate and cylinder electrode geometries are compared to particle-scale simulation results. Additionally, effective diffusion lengths that minimize error with respect to particle-scale results for gyroid and Schwarz P electrode microstructures are determined.
Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System
NASA Astrophysics Data System (ADS)
He, Qing; Li, Hong
Belt conveyors are among the most important devices for transporting bulk-solid material over long distances. Dynamic analysis is key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible. Studying dynamic properties is essential for improving efficiency and productivity and for guaranteeing safe, reliable, and stable running. The dynamic research on, and applications of, large-scale belt conveyors are discussed, and the main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work will focus on dynamic analysis, modeling, and simulation of the main components and the whole system, and on nonlinear modeling, simulation, and vibration analysis of large-scale conveyor systems.
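A common entry point for such dynamic analysis is a lumped-parameter longitudinal model of the belt. The Python sketch below integrates a chain of point masses joined by viscoelastic (spring-damper) elements under a ramped drive speed; all parameter values are illustrative assumptions only.

import numpy as np
from scipy.integrate import solve_ivp

n = 20          # number of lumped masses along the belt
m = 50.0        # kg per segment (assumed)
k = 2.0e5       # N/m, segment stiffness (assumed)
c = 500.0       # N s/m, segment damping (assumed)

def drive_pos_vel(t):
    # soft start: drive speed ramps linearly to 4 m/s over the first 5 s
    if t < 5.0:
        return 0.4 * t * t, 0.8 * t
    return 10.0 + 4.0 * (t - 5.0), 4.0

def rhs(t, y):
    x, v = y[:n], y[n:]
    xd, vd = drive_pos_vel(t)
    f = np.zeros(n)
    # element connecting the first mass to the drive pulley
    f[0] += k * (xd - x[0]) + c * (vd - v[0])
    # internal viscoelastic elements between neighbouring masses
    elem = k * (x[1:] - x[:-1]) + c * (v[1:] - v[:-1])
    f[:-1] += elem
    f[1:] -= elem
    return np.concatenate([v, f / m])

sol = solve_ivp(rhs, (0.0, 20.0), np.zeros(2 * n), max_step=0.01)
print("tail-mass velocity at t = 20 s:", sol.y[-1, -1])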
Design and Evaluation of Simulations for the Development of Complex Decision-Making Skills.
ERIC Educational Resources Information Center
Hartley, Roger; Varley, Glen
2002-01-01
Command and Control Training Using Simulation (CACTUS) is a computer digital mapping system used by police to manage large-scale public events. Audio and video records of adaptive training scenarios using CACTUS show how the simulation develops decision-making skills for strategic and tactical event management. (SK)
Investigating the Effectiveness of Computer Simulations for Chemistry Learning
ERIC Educational Resources Information Center
Plass, Jan L.; Milne, Catherine; Homer, Bruce D.; Schwartz, Ruth N.; Hayward, Elizabeth O.; Jordan, Trace; Verkuilen, Jay; Ng, Florrie; Wang, Yan; Barrientos, Juan
2012-01-01
Are well-designed computer simulations an effective tool to support student understanding of complex concepts in chemistry when integrated into high school science classrooms? We investigated scaling up the use of a sequence of simulations of kinetic molecular theory and associated topics of diffusion, gas laws, and phase change, which we designed…
McDonald, Richard R.; Nelson, Jonathan M.; Fosness, Ryan L.; Nelson, Peter O.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan
2016-01-01
Two- and three-dimensional morphodynamic simulations are becoming common in studies of channel form and process. The performance of these simulations is often validated against measurements from laboratory studies. Collecting channel-change information in natural settings for model validation is difficult because it can be expensive, and under most channel-forming flows the resulting channel change is generally small. Several channel restoration projects on the Kootenai River, ID, designed in part to armor large meanders with several large spurs constructed of wooden piles, have resulted in rapid bed elevation change following construction. Monitoring of these restoration projects includes post-restoration (as-built) Digital Elevation Models (DEMs) as well as additional channel surveys following high channel-forming flows post-construction. The resulting sequence of measured bathymetry provides excellent validation data for morphodynamic simulations at the reach scale of a real river. In this paper we test the performance of a quasi-three-dimensional morphodynamic simulation against the measured elevation change. The resulting simulations predict the pattern of channel change reasonably well, but many of the details, such as the maximum scour, are underpredicted.
A GENERAL SIMULATION MODEL FOR INFORMATION SYSTEMS: A REPORT ON A MODELLING CONCEPT
The report is concerned with the design of large-scale management information systems (MIS). A special design methodology was created, along with a design model to complement it. The purpose of the paper is to present the model.
Electromagnetic Simulations for Aerospace Application Final Report CRADA No. TC-0376-92
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madsen, N.; Meredith, S.
Electromagnetic (EM) simulation tools play an important role in the design cycle, allowing optimization of a design before it is fabricated for testing. The purpose of this cooperative project was to provide Lockheed with state-of-the-art electromagnetic (EM) simulation software that would enable the optimal design of the next generation of low-observable (LO) military aircraft through the VHF regime. More particularly, the project was principally code development and validation, its goal being to produce a 3-D, conforming-grid, time-domain (TD) EM simulation tool, consisting of a mesh generator, a DSI3D-based simulation kernel, and an RCS postprocessor, useful in the optimization of LO aircraft, both for full-aircraft simulations run on a massively parallel computer and for small-scale problems run on a UNIX workstation.
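The project's solver was a 3-D conforming-grid code; purely as a conceptual illustration of time-domain EM updating, the sketch below steps a textbook 1-D FDTD (Yee) scheme in normalized units. This is not the discrete-surface-integral method referenced in the CRADA.

import numpy as np

nz, nt = 400, 1000
ez = np.zeros(nz)          # electric field
hy = np.zeros(nz - 1)      # magnetic field, staggered half a cell
S = 0.5                    # Courant number (c*dt/dz), stable for S <= 1 in 1-D

for n in range(nt):
    hy += S * np.diff(ez)                    # update H from the curl of E
    ez[1:-1] += S * np.diff(hy)              # update E from the curl of H
    ez[50] += np.exp(-((n - 80) / 20.0)**2)  # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(ez).max())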
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monnet, Ghiath; Bacon, David J; Osetskiy, Yury N
2010-01-01
Given the time and length scales in molecular dynamics (MD) simulations of dislocation-defect interactions, quantitative MD results cannot be used directly in larger-scale simulations or compared directly with experiment. A method to extract fundamental quantities from MD simulations is proposed here. The first quantity is a critical stress defined to characterise the obstacle resistance. This mesoscopic parameter, rather than the obstacle 'strength' designed for a point obstacle, is to be used for an obstacle of finite size. At finite temperature, our analyses of MD simulations allow the activation energy to be determined as a function of temperature. The results confirm the proportionality between activation energy and temperature that is frequently observed in experiment. By coupling the data for the activation energy and the critical stress as functions of temperature, we show how the activation energy can be deduced at a given value of the critical stress.
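The proportionality between activation energy and temperature noted above amounts to a linear relation that can be checked with a simple fit. The sketch below does so on made-up placeholder (T, E_a) pairs rather than actual MD output.

import numpy as np

T  = np.array([100., 200., 300., 400., 500.])   # temperature, K (placeholder)
Ea = np.array([0.12, 0.25, 0.36, 0.49, 0.61])   # activation energy, eV (placeholder)

slope, intercept = np.polyfit(T, Ea, 1)          # linear fit E_a = slope*T + intercept
print(f"E_a ~ {slope:.2e} eV/K * T + {intercept:.3f} eV")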
MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow
NASA Astrophysics Data System (ADS)
Samani, N.; Kompani-Zare, M.; Barry, D. A.
2004-01-01
Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.
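A classical analytical benchmark for such comparisons is the Theis solution for unsteady flow to a fully penetrating well in a confined aquifer; the paper does not specify which solutions were used, so this is offered only as a representative example. The sketch below evaluates it with SciPy's exponential integral, using illustrative parameter values.

import numpy as np
from scipy.special import exp1

# Theis (1935): s = Q/(4*pi*T) * W(u), u = r^2*S/(4*T*t), W(u) = E1(u).
Q = 0.01     # pumping rate, m^3/s (assumed)
T = 1e-3     # transmissivity, m^2/s (assumed)
S = 1e-4     # storativity, dimensionless (assumed)
r = 10.0     # radial distance from the well, m
t = np.array([60., 600., 6000., 60000.])   # times, s

u = r**2 * S / (4 * T * t)
s = Q / (4 * np.pi * T) * exp1(u)
for ti, si in zip(t, s):
    print(f"t = {ti:8.0f} s   drawdown = {si:.3f} m")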
Enabling Co-Design of Multi-Layer Exascale Storage Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carothers, Christopher
Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10¹⁸ floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables "what if" exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.
NASA Astrophysics Data System (ADS)
Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao
2017-01-01
Multi-scale high-resolution modeling of the rock failure process is a powerful means in modern rock mechanics studies to reveal complex failure mechanisms and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation and damage to failure, places high demands on the design, implementation scheme, and computational capacity of the numerical software system. This study is aimed at developing a parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties and to handle and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of the representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows-Linux interactive platform. A numerical model is built to test the parallel performance of the FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and on field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computational efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In the laboratory-scale simulation, well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In the field-scale simulation, the formation process of net fracture spacing, from initiation and propagation to saturation, can be revealed completely. In the engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.
Modeling and Simulation of the Second-Generation Orion Crew Module Air Bag Landing System
NASA Technical Reports Server (NTRS)
Timmers, Richard B.; Welch, Joseph V.; Hardy, Robin C.
2009-01-01
Air bags were evaluated as the landing attenuation system for earth landing of the Orion Crew Module (CM). An important element of the air bag system design process is proper modeling of the proposed configuration to determine whether the resulting performance meets requirements. Analysis conducted to date shows that airbags are capable of providing a graceful landing of the CM in nominal and off-nominal conditions such as parachute failure, high horizontal winds, and unfavorable vehicle/ground angle combinations. The efforts presented here surround a second generation of the airbag design developed by ILC Dover and are based on previous design, analysis, and testing efforts. In order to fully evaluate the second-generation air bag design and correlate the dynamic simulations, a series of drop tests were carried out at NASA Langley's Landing and Impact Research (LandIR) facility. The tests consisted of a full-scale set of air bags attached to a full-scale test article representing the Orion Crew Module. The techniques used to collect experimental data, construct the simulations, and make comparisons to experimental data are discussed.
A novel, highly efficient cavity backshort design for far-infrared TES detectors
NASA Astrophysics Data System (ADS)
Bracken, C.; de Lange, G.; Audley, M. D.; Trappe, N.; Murphy, J. A.; Gradziel, M.; Vreeling, W.-J.; Watson, D.
2018-03-01
In this paper we present a new cavity backshort design for TES (transition edge sensor) detectors which will provide increased coupling of the incoming astronomical signal to the detectors. The increased coupling results from the improved geometry of the cavities, where the geometry is a consequence of the proposed chemical etching manufacturing technique. Using a number of modelling techniques, predicted results for the performance of the cavities at frequencies of 4.3-10 THz are presented and compared to more standard cavity designs. Excellent optical efficiency is demonstrated, with improved response flatness across the band. In order to verify the simulated results, a scaled model cavity was built for testing at the lower W-band frequencies (75-100 GHz) with a VNA system. Further testing of the scale model at THz frequencies was carried out using a globar and bolometer via an FTS measurement set-up. The experimental results are presented and compared to the simulations. Although the comparison between simulation and measurement is relatively poor at some frequencies, the discrepancies are explained by higher-order mode excitation in the measured cavity that is not accounted for in the single-mode simulations. To verify this assumption, a better-behaved cylindrical cavity was simulated and measured, and excellent agreement is demonstrated in those results. It can be concluded that both the simulations and the supporting measurements give confidence that this novel cavity design will indeed provide much-improved optical coupling for TES detectors in the far-infrared/THz band.
HACC: Simulating sky surveys on state-of-the-art supercomputing architectures
NASA Astrophysics Data System (ADS)
Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng
2016-01-01
Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
Mars aerobrake assembly simulation
NASA Technical Reports Server (NTRS)
Filatovs, G. J.; Lee, Gordon K. F.; Garvey, John
1992-01-01
On-orbit assembly operation simulations in neutral buoyancy conditions are presently undertaken by a partial/full-scale Mars mission aerobrake mockup, whose design, conducted in the framework of an engineering senior students' design project, involved several levels of constraints for critical physical and operational features. Allowances had to be made for the auxiliary constraints introduced by underwater testing, as well as the subsegmenting required for overland shipment to the neutral-buoyancy testing facility. This mockup aerobrake's fidelity is determined by the numerous, competing design objectives.
Space Shuttle Orbital Drag Parachute Design
NASA Technical Reports Server (NTRS)
Meyerson, Robert E.
2001-01-01
The drag parachute system was added to the Space Shuttle Orbiter's landing deceleration subsystem beginning with flight STS-49 in May 1992. The addition of this subsystem to an existing space vehicle required a detailed set of ground tests and analyses. The aerodynamic design and performance testing of the system consisted of wind tunnel tests, numerical simulations, pilot-in-the-loop simulations, and full-scale testing. This analysis and design resulted in a fully qualified system that is deployed on every flight of the Space Shuttle.
NASA Technical Reports Server (NTRS)
Williams, Powtawche N.
1998-01-01
To assess engine performance during the testing of Space Shuttle Main Engines (SSMEs), the design of an optimal altitude diffuser is studied for future Space Transportation Systems (STS). For other Space Transportation Systems, rocket propellants based on kerosene are also studied. Methane and dodecane have reaction schemes similar to that of kerosene and are used to simulate kerosene combustion processes at various temperatures. The equations for the methane combustion mechanism at high temperature are given, and engine combustion is simulated with the General Aerodynamic Simulation Program (GASP). The successful design of an altitude diffuser depends on the study of a sub-scale diffuser model tested through two-dimensional (2-D) flow techniques. The subroutines presented calculate the static temperature and pressure at each Mach number within the diffuser flow. Implementing these subroutines in program code for the properties of 2-D compressible fluid flow determines all fluid characteristics and will be used in the development of an optimal diffuser design.
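Such subroutines typically evaluate the standard isentropic-flow relations. A minimal sketch, assuming a calorically perfect gas with gamma = 1.4 and illustrative stagnation conditions, is given below.

# Static temperature and pressure from stagnation values at Mach number M:
#   T = T0 / (1 + (gamma-1)/2 * M^2),   p = p0 * (T/T0)^(gamma/(gamma-1))
def static_conditions(M, T0, p0, gamma=1.4):
    """Return (T, p) at Mach number M for stagnation values T0, p0."""
    T = T0 / (1.0 + 0.5 * (gamma - 1.0) * M**2)
    p = p0 * (T / T0) ** (gamma / (gamma - 1.0))
    return T, p

for M in (0.5, 1.0, 2.0, 3.0):
    T, p = static_conditions(M, T0=3000.0, p0=7.0e6)  # illustrative chamber values
    print(f"M = {M:3.1f}:  T = {T:7.1f} K,  p = {p/1e3:9.1f} kPa")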
USDA-ARS's Scientific Manuscript database
Water quality modeling requires across-scale support of combined digital soil elements and simulation parameters. This paper presents the unprecedented development of a large spatial scale (1:250,000) ArcGIS geodatabase coverage designed as a functional repository of soil-parameters for modeling an...
Design and simulation of a cable-pulley-based transmission for artificial ankle joints
NASA Astrophysics Data System (ADS)
Liu, Huaxin; Ceccarelli, Marco; Huang, Qiang
2016-06-01
In this paper, a mechanical transmission based on cable pulleys is proposed for human-like actuation in human-scale artificial ankle joints. The anatomical and articular characteristics of the human ankle are discussed for proper biomimetic inspiration in designing accurate, efficient, and robust motion control of artificial ankle joint devices. The design procedure is presented through conceptual considerations and design details for an interactive solution of the transmission system. A mechanical design is elaborated for the ankle joint's angular pitch motion. A multi-body dynamic simulation model is elaborated accordingly and evaluated numerically in the ADAMS environment. Results of the numerical simulations are discussed to evaluate the dynamic performance of the proposed design solution and to investigate its feasibility in future applications for humanoid robots.
An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing
2002-08-01
simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems , Intelligent Systems...the methodology for a stand-alone real time system. Then it will scale up to distributed real time systems . For both systems, step-wise simulation...MODEL CONTINUITY Intelligent real time systems monitor, respond to, or control, an external environment. This environment is connected to the digital
A review of the analytical simulation of aircraft crash dynamics
NASA Technical Reports Server (NTRS)
Fasanella, Edwin L.; Carden, Huey D.; Boitnott, Richard L.; Hayduk, Robert J.
1990-01-01
A large number of full-scale tests of general aviation aircraft, helicopters, and one unique air-to-ground controlled impact of a transport aircraft were performed. Additionally, research was conducted on seat dynamic performance, load-limiting seats, load-limiting subfloor designs, and emergency locator transmitters (ELTs). Computer programs were developed to provide designers with methods for predicting accelerations, velocities, and displacements of collapsing structure and for estimating the human response to crash loads. The results of full-scale aircraft and component tests were used to verify and guide the development of analytical simulation tools and to demonstrate impact-load-attenuating concepts. Analytical simulation of metal and composite aircraft crash dynamics is addressed. Finite element models are examined to determine their degree of corroboration by experimental data and to reveal deficiencies requiring further development.
On the estimation and detection of the Rees-Sciama effect
NASA Astrophysics Data System (ADS)
Fullana, M. J.; Arnau, J. V.; Thacker, R. J.; Couchman, H. M. P.; Sáez, D.
2017-02-01
Maps of the Rees-Sciama (RS) effect are simulated using the parallel N-body code HYDRA and a run-time ray-tracing procedure. A method designed for the analysis of small, square cosmic microwave background (CMB) maps is applied to our RS maps. Each of these techniques has been tested and successfully applied in previous papers. Within a range of angular scales, our estimate of the RS angular power spectrum due to variations in the peculiar gravitational potential on scales smaller than 42/h megaparsecs is shown to be robust. An exhaustive study of the redshifts and spatial scales relevant to the production of RS anisotropy is developed for the first time. Results from this study demonstrate that (i) to estimate the full integrated RS effect, the initial redshift for the calculations (integration) must be greater than 25; (ii) the effect produced by strongly non-linear structures is very small and peaks at angular scales close to 4.3 arcmin; and (iii) the RS anisotropy cannot be detected either directly, in temperature CMB maps, or by looking for cross-correlations between these maps and tracers of the dark matter distribution. To estimate the RS effect produced by scales larger than 42/h megaparsecs, where the density contrast is not strongly non-linear, high-accuracy N-body simulations appear unnecessary. Simulations based on approximations such as the Zel'dovich approximation and adhesion prescriptions, for example, may be adequate. These results can be used to guide the design of future RS simulations.
A small scale CSP-based cooling system prototype (300W cooling capacity) and the system performance simulation tool will be developed as a proof of concept. Practical issues will be identified to improve our design.
Bayramzadeh, Sara; Joseph, Anjali; Allison, David; Shultz, Jonas; Abernathy, James
2018-07-01
This paper describes the process and tools developed as part of a multidisciplinary, collaborative, simulation-based approach for iterative design and evaluation of operating room (OR) prototypes. Full-scale physical mock-ups of healthcare spaces offer an opportunity to actively communicate with and engage multidisciplinary stakeholders in the design process. While mock-ups are increasingly being used in healthcare facility design projects, they are rarely evaluated in a manner that supports active user feedback and engagement. Researchers and architecture students worked closely with clinicians and architects to develop OR design prototypes and engaged clinical end-users in simulated scenarios. An evaluation toolkit was developed to compare design prototypes. The mock-up evaluation helped the team make key decisions about room size, OR table location, intra-room zoning, and door locations. Structured simulation-based mock-up evaluations conducted during the design process can help stakeholders visualize their future workspace and provide active feedback. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carothers, Christopher D.; Meredith, Jeremy S.; Blanco, Marc
Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next-generation supercomputing systems, because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and their lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications, and we have implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized, and representative of real applications with computation events. They are not resource-intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug in which the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when executing both torus and dragonfly network models on up to 4K Blue Gene/Q nodes using 32K MPI ranks; Durango also avoids the overheads and complexities associated with extreme-scale trace files.
2014-09-30
software developed with this project support. S1 Cork School 2013: I. UPPEcore Simulator design and usage, Simulation examples II. Nonlinear pulse...pulse propagation 08/28/13 — 08/02/13, University College Cork, Ireland S2 ACMS MURI School 2012: Computational Methods for Nonlinear PDEs describing
Increasing the relevance of GCM simulations for Climate Services
NASA Astrophysics Data System (ADS)
Smith, L. A.; Suckling, E.
2012-12-01
The design and interpretation of model simulations for climate services differ significantly from experimental design for the advancement of the fundamental research on predictability that underpins it. Climate services consider the sources of the best information available today; this calls for a frank evaluation of model skill against statistical benchmarks defined by empirical models. The fact that physical simulation models are thought to provide the only reliable method for extrapolating into conditions not previously observed has no bearing on whether or not today's simulation models outperform empirical models. Evidence on the length scales on which today's simulation models fail to outperform empirical benchmarks is presented; it is illustrated that this occurs even on global scales in decadal prediction. At all timescales considered thus far (as of July 2012), predictions based on simulation models are improved by blending with the output of statistical models. Blending is shown to be more interesting in the climate context than in the weather context, where blending with a history-based climatology is straightforward. As GCMs improve and as the Earth's climate moves further from that of the last century, the skill of simulation models and their relevance to climate services are expected to increase. Examples from both seasonal and decadal forecasting are used to discuss a third approach that may increase the role of current GCMs more quickly. Specifically, aspects of the experimental design in previous hindcast experiments are shown to hinder the use of GCM simulations for climate services. Alternative designs are proposed.
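One minimal way to realize the blending described above is a least-squares weight between simulation-model and empirical hindcasts; the actual blending schemes used in the talk are not specified here, so the sketch below is only a generic illustration on placeholder arrays.

import numpy as np

def blend_weight(sim, emp, obs):
    """Weight alpha minimizing the MSE of alpha*sim + (1-alpha)*emp vs obs."""
    d = sim - emp
    denom = np.dot(d, d)
    alpha = np.dot(obs - emp, d) / denom if denom > 0 else 0.0
    return float(np.clip(alpha, 0.0, 1.0))

sim = np.array([0.30, 0.50, 0.20, 0.60])   # simulation-model hindcasts (placeholder)
emp = np.array([0.40, 0.40, 0.40, 0.40])   # empirical benchmark, e.g. climatology
obs = np.array([0.35, 0.45, 0.30, 0.55])   # verifying observations (placeholder)

print(f"blend weight on simulation model: {blend_weight(sim, emp, obs):.2f}")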
NASA Technical Reports Server (NTRS)
Balasubramanian, Kunjithapatham; Hoppe, Daniel J.; Halverson, Peter G.; Wilson, Daniel W.; Echternach, Pierre M.; Shi, Fang; Lowman, Andrew E.; Niessner, Albert F.; Trauger, John T.; Shaklan, Stuart B.
2005-01-01
Occulting focal plane masks for the Terrestrial Planet Finder Coronagraph (TPF-C) could be designed with a continuous gray-scale profile of the occulting pattern, such as 1−sinc², on a suitable material, or with micron-scale binary transparent and opaque structures of a metallic pattern on glass. We have designed, fabricated, and tested both kinds of masks. The fundamental characteristics of such masks and initial test results from the High Contrast Imaging Testbed (HCIT) at JPL are presented.
Design of an Indoor Sonic Boom Simulator at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Klos, Jacob; Sullivan, Brenda M.; Shepherd, Kevin P.
2008-01-01
Construction of a simulator to recreate the soundscape inside residential buildings exposed to sonic booms is scheduled to start during the summer of 2008 at NASA Langley Research Center. The new facility should be complete by the end of the year. The design of the simulator allows independent control of several factors that create the indoor soundscape. Variables that will be isolated include such factors as boom duration, overpressure, rise time, spectral shape, level of rattle, level of squeak, source of rattle and squeak, level of vibration and source of vibration. Test subjects inside the simulator will be asked to judge the simulated soundscape, which will represent realistic indoor boom exposure. Ultimately, this simulator will be used to develop a functional relationship between human response and the sound characteristics creating the indoor soundscape. A conceptual design has been developed by NASA personnel, and is currently being vetted through small-scale risk reduction tests that are being performed in-house. The purpose of this document is to introduce the conceptual design, identify how the indoor response will be simulated, briefly outline some of the risk reduction tests that have been completed to vet the design, and discuss the impact of these tests on the simulator design.
Beam-Beam Study on the Upgrade of Beijing Electron Positron Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, S. (Beijing, Inst. High Energy Phys.); Cai, Y.
2006-02-10
Beam-beam interaction is an important issue in the design and performance of a high-luminosity collider such as BEPCII, the upgrade of the Beijing Electron Positron Collider. Weak-strong simulation is generally used during the design of a collider. To perform a large-scale tune scan, weak-strong simulation studies of the beam-beam interaction were carried out, with geometry effects taken into account. Strong-strong simulation studies were performed to investigate the luminosity goal and the dependence of the luminosity on the beam parameters.
Parameter Studies, time-dependent simulations and design with automated Cartesian methods
NASA Technical Reports Server (NTRS)
Aftosmis, Michael
2005-01-01
Over the past decade, NASA has made a substantial investment in developing adaptive Cartesian grid methods for aerodynamic simulation. Cartesian-based methods played a key role in both the Space Shuttle Accident Investigation and in NASA's return-to-flight activities. The talk will provide an overview of recent technological developments, focusing on the generation of large-scale aerodynamic databases, automated CAD-based design, and time-dependent simulations of bodies in relative motion. Automation, scalability, and robustness underlie all of these applications, and research in each of these topics will be presented.
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-11-25
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.
CHARACTERIZATION OF EMISSIONS FROM THE SIMULATED OPEN BURNING OF SCRAP TIRES
The report gives results of a small-scale combustion study, designed to collect, identify, and quantify products emitted during the simulated open burning of scrap tires. Fixed combustion gas, volatile and semi-volatile organic, particulate, and airborne metals data were collected.
NASA Astrophysics Data System (ADS)
Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy
2017-09-01
An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major components: subgrid-scale turbulence, combustion, soot, and radiation models, which are fully coupled. It is designed to simulate the temporal and fluid-dynamical effects of turbulent reacting flow for non-premixed diffusion flames. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m-long test hall facility. Several turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5, and Smagorinsky constants ranging from 0.18 to 0.23, were investigated. It was found that the temperature and flow field predictions were most accurate with turbulent Prandtl and Schmidt numbers of 0.3 and a Smagorinsky constant of 0.2. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
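For reference, the Smagorinsky constant and the turbulent Prandtl and Schmidt numbers tuned above enter the standard subgrid-scale closure, written here in its textbook form (not reproduced from the paper):

\nu_t = (C_s \Delta)^2 \,\lvert \bar{S} \rvert, \qquad
\lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
\mathrm{Pr}_t = \frac{\nu_t}{\alpha_t}, \qquad
\mathrm{Sc}_t = \frac{\nu_t}{D_t}

where \nu_t is the subgrid eddy viscosity, \Delta the filter width, \bar{S}_{ij} the resolved strain-rate tensor, and \alpha_t and D_t the subgrid thermal and species diffusivities implied by the chosen Pr_t and Sc_t.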
Building simulation: Ten challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Langevin, Jared; Sun, Kaiyu
2018-04-12
Buildings consume more than one-third of the world’s primary energy. Reducing energy use and greenhouse-gas emissions in the buildings sector through energy conservation and efficiency improvements constitutes a key strategy for achieving global energy and environmental goals. Building performance simulation has been increasingly used as a tool for designing, operating and retrofitting buildings to save energy and utility costs. However, opportunities remain for researchers, software developers, practitioners and policymakers to maximize the value of building performance simulation in the design and operation of low energy buildings and communities that leverage interdisciplinary approaches to integrate humans, buildings, and the power grid at a large scale. This paper presents ten challenges that highlight some of the most important issues in building performance simulation, covering the full building life cycle and a wide range of modeling scales. In conclusion, the formulation and discussion of each challenge aims to provide insights into the state-of-the-art and future research opportunities for each topic, and to inspire new questions from young researchers in this field.
IslandFAST: A Semi-numerical Tool for Simulating the Late Epoch of Reionization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Yidong; Chen, Xuelei; Yue, Bin
2017-08-01
We present the algorithm and main results of our semi-numerical simulation, islandFAST, which was developed from 21cmFAST and designed for the late stage of reionization. The islandFAST simulation predicts the evolution and size distribution of the large-scale underdense neutral regions (neutral islands), and we find that the late Epoch of Reionization proceeds very fast, showing a characteristic scale of the neutral islands at each redshift. Using islandFAST, we compare the impact of two types of absorption systems, i.e., the large-scale underdense neutral islands versus small-scale overdense absorbers, in regulating the reionization process. The neutral islands dominate the morphology of the ionization field, while the small-scale absorbers dominate the mean-free path of ionizing photons, and also delay and prolong the reionization process. With our semi-numerical simulation, the evolution of the ionizing background can be derived self-consistently given a model for the small absorbers. The hydrogen ionization rate of the ionizing background is reduced by an order of magnitude in the presence of dense absorbers.
Ice Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel
NASA Technical Reports Server (NTRS)
Broeren, Andy; Potapczuk, Mark; Lee, Sam; Malone, Adam; Paul, Ben; Woodard, Brian
2016-01-01
The design and certification of modern transport airplanes for flight in icing conditions increasingly relies on three-dimensional numerical simulation tools for ice accretion prediction. There is currently no publicly available, high-quality ice accretion database with which to evaluate the performance of icing simulation tools for large-scale swept wings representative of modern commercial transport airplanes. This presentation reports the results of a series of icing wind tunnel test campaigns whose aim was to provide an ice accretion database for large-scale swept wings.
BlazeDEM3D-GPU: A Large Scale DEM simulation code for GPUs
NASA Astrophysics Data System (ADS)
Govender, Nicolin; Wilke, Daniel; Pizette, Patrick; Khinast, Johannes
2017-06-01
Accurately predicting the dynamics of particulate materials is of importance to numerous scientific and industrial areas, with applications ranging across particle scales from powder flow to ore crushing. Computational discrete element simulation is a viable option to aid in the understanding of particulate dynamics and the design of devices such as mixers, silos, and ball mills, as laboratory-scale tests come at significant cost. However, the computational time required for an industrial-scale simulation consisting of tens of millions of particles can run to months on large CPU clusters, making the Discrete Element Method (DEM) infeasible for industrial applications. Simulations are therefore typically restricted to tens of thousands of particles with highly detailed particle shapes, or a few million particles with often oversimplified particle shapes. However, a number of applications require accurate representation of the particle shape to capture the macroscopic behaviour of the particulate system. In this paper we give an overview of the recent extensions to the open-source GPU-based DEM code BlazeDEM3D-GPU, which can simulate millions of polyhedra and tens of millions of spheres on a desktop computer with a single GPU or multiple GPUs.
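At its core, a DEM step resolves pairwise contacts and integrates Newton's equations. The sketch below shows the simplest case, a linear spring-dashpot normal contact between two spheres with explicit integration; the parameters are illustrative and unrelated to BlazeDEM3D-GPU's GPU kernels.

import numpy as np

R, m = 0.005, 1e-3          # sphere radius (m) and mass (kg), assumed
kn, cn = 1e4, 0.05          # normal stiffness (N/m) and damping (N s/m), assumed
dt = 1e-6                   # explicit time step, well below the contact period

x = np.array([0.0, 0.0101])     # centre positions along a line (m)
v = np.array([1.0, -1.0])       # approach velocities (m/s)

for step in range(20000):
    overlap = 2 * R - (x[1] - x[0])
    if overlap > 0.0:                     # spheres in contact
        rel_v = v[0] - v[1]
        f = kn * overlap + cn * rel_v     # repulsive normal force magnitude
        a = np.array([-f, f]) / m         # equal and opposite accelerations
    else:
        a = np.zeros(2)
    v += a * dt
    x += v * dt

print("post-collision velocities:", v)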
Scaling of counter-current imbibition recovery curves using artificial neural networks
NASA Astrophysics Data System (ADS)
Jafari, Iman; Masihi, Mohsen; Nasiri Zarandi, Masoud
2018-06-01
The scaling of imbibition curves is of great importance in the characterization and simulation of oil production from naturally fractured reservoirs. Parameters such as matrix porosity and permeability, oil and water viscosities, matrix dimensions, and oil/water interfacial tension all affect the imbibition process. Studies of imbibition-curve scaling under different assumptions have produced various scaling equations. In this work, using an artificial neural network (ANN) method, a novel technique is presented for scaling imbibition recovery curves, which can be used for scaling experimental and field-scale imbibition cases. The imbibition recovery curves for training and testing the neural network were gathered through the simulation of different scenarios using a commercial reservoir simulator. In this ANN-based method, six parameters were assumed to affect the imbibition process and were considered as the inputs for training the network. Using the ‘Bayesian regularization’ training algorithm, the network was trained and tested. The training and testing phases showed superior results in comparison with other scaling methods. It is concluded that the new technique is useful for scaling imbibition recovery curves, especially for complex cases for which the common scaling methods are not designed.
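To make the idea concrete, the sketch below trains a small feed-forward network mapping six inputs to recovery on synthetic placeholder data. It uses scikit-learn's plain MLP rather than the Bayesian-regularization training the paper employs, and the input set (porosity, permeability, oil and water viscosities, matrix size, interfacial tension) is an assumption for illustration.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))                 # six scaled input parameters
y = 1.0 - np.exp(-3.0 * X.mean(axis=1))        # synthetic recovery response

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X[:400], y[:400])                    # train on the first 400 samples
print("test R^2:", model.score(X[400:], y[400:]))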
Status of the Flooding Fragility Testing Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, C. L.; Savage, B.; Bhandari, B.
2016-06-01
This report provides an update on research addressing nuclear power plant component reliability under flooding conditions. The research includes use of the Component Flooding Evaluation Laboratory (CFEL), where individual components and component subassemblies will be tested to failure under various flooding conditions. The resulting component reliability data can then be incorporated with risk simulation strategies to provide a more thorough representation of overall plant risk. The CFEL development strategy consists of four interleaved phases. Phase 1 addresses design and application of CFEL with water rise and water spray capabilities allowing testing of passive and active components including fully electrified components. Phase 2 addresses research into wave generation techniques followed by the design and addition of the wave generation capability to CFEL. Phase 3 addresses methodology development activities including small-scale component testing, development of full-scale component testing protocol, and simulation techniques including Smoothed Particle Hydrodynamics (SPH) based computer codes. Phase 4 involves full-scale component testing including work on full-scale component testing in a surrogate CFEL testing apparatus.
NASA Astrophysics Data System (ADS)
Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.
2017-08-01
It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing material performance, but current research relies largely on experimental trial and error, and the use of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that spans ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge-trapping effect of the nanofillers. Charge transfer, charge energy relaxation, and space-charge effects are modeled at their respective hierarchical scales by distinct simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting promise for the computer-aided material design of high-performance dielectrics.
Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...
2016-11-01
A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how best to gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kojima, S.; Yokosawa, M.; Matsuyama, M.
To study the practical application of a tritium separation process using Self-Developing Gas Chromatography (SDGC) with a Pd-Pt alloy, intermediate scale-up experiments (22 mm ID x 2 m length column) and the development of a computational simulation method have been conducted. In addition, intermediate-scale production of Pd-Pt powder has been developed for the scale-up experiments. The following results were obtained: (1) a 50-fold scale-up from 3 mm to 22 mm causes no significant impact on the SDGC process; (2) the Pd-Pt alloy powder is applicable to a large-size SDGC process; and (3) the simulation enables preparation of a conceptual design of a SDGC process for tritium separation.
Yoshikawa, Katsunori; Aikawa, Shimpei; Kojima, Yuta; Toya, Yoshihiro; Furusawa, Chikara; Kondo, Akihiko; Shimizu, Hiroshi
2015-01-01
Arthrospira (Spirulina) platensis is a promising feedstock and host strain for bioproduction because of its high accumulation of glycogen and superior characteristics for industrial production. Metabolic simulation using a genome-scale metabolic model and flux balance analysis is a powerful method that can be used to design metabolic engineering strategies for the improvement of target molecule production. In this study, we constructed a genome-scale metabolic model of A. platensis NIES-39 including 746 metabolic reactions and 673 metabolites, and developed novel strategies to improve the production of valuable metabolites, such as glycogen and ethanol. The simulation results obtained using the metabolic model showed high consistency with experimental results for growth rates under several trophic conditions and growth capabilities on various organic substrates. The metabolic model was further applied to design a metabolic network to improve the autotrophic production of glycogen and ethanol. Decreased flux of reactions related to the TCA cycle and the phosphoenolpyruvate reaction was found to improve glycogen production. Furthermore, in silico knockout simulation indicated that deletion of genes related to the respiratory chain, such as NAD(P)H dehydrogenase and cytochrome-c oxidase, could enhance ethanol production by using ammonium as a nitrogen source. PMID:26640947
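For readers unfamiliar with flux balance analysis, the optimization at its core is a linear program: maximize an objective flux subject to steady-state mass balance and flux bounds, with an in silico knockout simulated by clamping a reaction's flux bounds to zero. The sketch below is a minimal Python illustration on a hypothetical three-reaction network, not the 746-reaction A. platensis model from the study.

```python
# Minimal flux balance analysis (FBA) sketch. This is NOT the A. platensis
# NIES-39 model; it is a hypothetical 3-reaction toy network used only to
# illustrate the linear program behind FBA and an in silico knockout.
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows: metabolites, cols: reactions).
# R0: uptake -> A, R1: A -> B ("biomass"), R2: A -> C ("ethanol-like product")
S = np.array([
    [ 1, -1, -1],   # metabolite A (internal, balanced at steady state)
    [ 0,  1,  0],   # metabolite B (exported)
    [ 0,  0,  1],   # metabolite C (exported)
])
A_eq = S[:1, :]          # enforce S @ v = 0 only for the internal metabolite
b_eq = np.zeros(1)

def fba(objective, knockout=None):
    bounds = [(10, 10), (0, 10), (0, 10)]   # fix uptake flux at 10 units
    if knockout is not None:
        bounds[knockout] = (0, 0)           # gene deletion: zero flux allowed
    c = np.zeros(3)
    c[objective] = -1.0                     # linprog minimizes, so negate
    res = linprog(c=c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x

print("maximize biomass (R1), wild type     :", fba(objective=1))
print("maximize product (R2), R1 knocked out:", fba(objective=2, knockout=1))
```

The knockout redirects the fixed uptake flux into the product reaction, which is the qualitative behavior the abstract's knockout simulations exploit.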
Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.
Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys
2018-04-01
Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.
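Percent agreement, the inter-rater reliability statistic quoted above (80% and 94%), can be computed directly from double-coded checklists. The snippet below is an illustrative calculation on hypothetical binary codes, not the study's actual coding instrument.

```python
# Illustrative inter-rater reliability (IRR) as simple percent agreement
# between two coders over binary skill-checklist items. The items and codes
# below are hypothetical placeholders, not data from the Bihar program.
def percent_agreement(coder_a, coder_b):
    """Fraction of items on which two coders assigned the same code."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical double-coded checklist for one simulation video:
rater1 = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
rater2 = [1, 1, 0, 1, 1, 1, 1, 1, 0, 1]
print(f"agreement: {percent_agreement(rater1, rater2):.0%}")  # 90%
```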
Development and validation of the Simulation Learning Effectiveness Scale for nursing students.
Pai, Hsiang-Chu
2016-11-01
To develop and validate the Simulation Learning Effectiveness Scale, which is based on Bandura's social cognitive theory. A simulation programme is a significant teaching strategy for nursing students. Nevertheless, there are few evidence-based instruments that validate the effectiveness of simulation learning in Taiwan. This is a quantitative descriptive design. In Study 1, a nonprobability convenience sample of 151 student nurses completed the Simulation Learning Effectiveness Scale. Exploratory factor analysis was used to examine the factor structure of the instrument. In Study 2, which involved 365 student nurses, confirmatory factor analysis and structural equation modelling were used to analyse the construct validity of the Simulation Learning Effectiveness Scale. In Study 1, exploratory factor analysis yielded three components: self-regulation, self-efficacy and self-motivation. The three factors explained 29·09, 27·74 and 19·32% of the variance, respectively. The final 12-item instrument with the three factors explained 76·15% of variance. Cronbach's alpha was 0·94. In Study 2, confirmatory factor analysis identified a second-order factor termed Simulation Learning Effectiveness Scale. Goodness-of-fit indices showed an acceptable fit overall with the full model (χ²/df (51) = 3·54, comparative fit index = 0·96, Tucker-Lewis index = 0·95 and standardised root-mean-square residual = 0·035). In addition, teacher's competence was found to encourage learning, and self-reflection and insight were significantly and positively associated with the Simulation Learning Effectiveness Scale. Teacher's competence in encouraging learning also was significantly and positively associated with self-reflection and insight. Overall, these variables explained 21·9% of the variance in the students' learning effectiveness. The Simulation Learning Effectiveness Scale is a reliable and valid means to assess simulation learning effectiveness for nursing students. The Simulation Learning Effectiveness Scale can be used to examine nursing students' learning effectiveness and serve as a basis to improve students' learning efficiency through simulation programmes. Future implementation research that focuses on the relationship between learning effectiveness and nursing competence in nursing students is recommended. © 2016 John Wiley & Sons Ltd.
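The reliability figure reported above (Cronbach's alpha of 0·94) follows from the standard internal-consistency formula, sketched below on synthetic response data rather than the study's sample.

```python
# Cronbach's alpha for an item-by-respondent matrix, the internal-consistency
# statistic reported for the 12-item scale. The data below are random
# placeholders, not the study's responses.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(151, 1))                # shared trait per respondent
responses = latent + 0.5 * rng.normal(size=(151, 12))
print(f"alpha = {cronbach_alpha(responses):.2f}")
```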
Nuclear Power Plant Mechanical Component Flooding Fragility Experiments Status
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, C. L.; Savage, B.; Johnson, B.
This report describes progress on Nuclear Power Plant mechanical component flooding fragility experiments and supporting research. The progress includes execution of full scale fragility experiments using hollow-core doors, design of improvements to the Portal Evaluation Tank, equipment procurement and initial installation of PET improvements, designation of experiments exploiting the improved PET capabilities, fragility mathematical model development, Smoothed Particle Hydrodynamic simulations, wave impact simulation device research, and pipe rupture mechanics research.
We present results from a study testing a new boundary layer parameterization method, the canopy drag approach (DA), which is designed to explicitly simulate the effects of building, street, and tree canopies on the dynamic and thermodynamic structure and dispersion fields in urban...
Simulating forest fuel and fire risk dynamics across landscapes--LANDIS fuel module design
Hong S. He; Bo Z. Shang; Thomas R. Crow; Eric J. Gustafson; Stephen R. Shifley
2004-01-01
Understanding fuel dynamics over large spatial (10³-10⁶ ha) and temporal scales (10¹-10³ years) is important in comprehensive wildfire management. We present a modeling approach to simulate fuel and fire risk dynamics as well as impacts of alternative fuel treatments. The...
Microfiltration of thin stillage: Process simulation and economic analyses
USDA-ARS?s Scientific Manuscript database
In plant scale operations, multistage membrane systems have been adopted for cost minimization. We considered design optimization and operation of a continuous microfiltration (MF) system for the corn dry grind process. The objectives were to develop a model to simulate a multistage MF system, optim...
A pilot rating scale for evaluating failure transients in electronic flight control systems
NASA Technical Reports Server (NTRS)
Hindson, William S.; Schroeder, Jeffery A.; Eshow, Michelle M.
1990-01-01
A pilot rating scale was developed to describe the effects of transients in helicopter flight-control systems on safety-of-flight and on pilot recovery action. The scale was applied to the evaluation of hardovers that could potentially occur in the digital flight-control system being designed for a variable-stability UH-60A research helicopter. Tests were conducted in a large moving-base simulator and in flight. The results of the investigation were combined with existing airworthiness criteria to determine quantitative reliability design goals for the control system.
Rogers, R; Sewell, K W; Morey, L C; Ustad, K L
1996-12-01
Psychological assessment with multiscale inventories is largely dependent on the honesty and forthrightness of those persons evaluated. We investigated the effectiveness of the Personality Assessment Inventory (PAI) in detecting participants feigning three specific disorders: schizophrenia, major depression, and generalized anxiety disorder. With a simulation design, we tested the PAI validity scales on 166 naive (undergraduates with minimal preparation) and 80 sophisticated (doctoral psychology students with 1 week preparation) participants. We compared their results to persons with the designated disorders: schizophrenia (n = 45), major depression (n = 136), and generalized anxiety disorder (n = 40). Although moderately effective with naive simulators, the validity scales evidenced only modest positive predictive power with their sophisticated counterparts. Therefore, we performed a two-stage discriminant analysis that yielded a moderately high hit rate (> 80%) that was maintained in the cross-validation sample, irrespective of the feigned disorder or the sophistication of the simulators.
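As a rough illustration of the discriminant-analysis step, the sketch below fits a linear discriminant to synthetic stand-ins for validity-scale scores and reports a held-out hit rate. The group sizes mirror the study's totals (166 + 80 simulators, 45 + 136 + 40 patients), but the features and results are simulated, not the PAI data.

```python
# Sketch of a discriminant-analysis classification of feigners vs. genuine
# patients, in the spirit of the analysis described above. Features are
# synthetic stand-ins for PAI validity-scale scores; the study's actual
# scales and coefficients are not reproduced.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Simulators elevate "validity" features on average; patients do not.
feigners = rng.normal(loc=1.0, scale=1.0, size=(246, 4))
patients = rng.normal(loc=0.0, scale=1.0, size=(221, 4))
X = np.vstack([feigners, patients])
y = np.array([1] * len(feigners) + [0] * len(patients))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
hit_rate = lda.score(X_te, y_te)          # overall classification accuracy
print(f"held-out hit rate: {hit_rate:.0%}")
```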
A manipulator arm for zero-g simulations
NASA Technical Reports Server (NTRS)
Brodie, S. B.; Grant, C.; Lazar, J. J.
1975-01-01
A 12-ft counterbalanced Slave Manipulator Arm (SMA) was designed and fabricated to be used for resolving the questions of operational applications, capabilities, and limitations for such remote manned systems as the Payload Deployment and Retrieval Mechanism (PDRM) for the shuttle, the Free-Flying Teleoperator System, the Advanced Space Tug, and Planetary Rovers. As a developmental tool for the shuttle manipulator system (or PDRM), the SMA represents an approximate one-quarter scale working model for simulating and demonstrating payload handling, docking assistance, and satellite servicing. For the Free-Flying Teleoperator System and the Advanced Tug, the SMA provides a near full-scale developmental tool for satellite servicing, docking, and deployment/retrieval procedures, techniques, and support equipment requirements. For the Planetary Rovers, it provides an oversize developmental tool for sample handling and soil mechanics investigations. The design of the SMA was based on concepts developed for a 40-ft NASA technology arm to be used for zero-g shuttle manipulator simulations.
Study on the millimeter-wave scale absorber based on the Salisbury screen
NASA Astrophysics Data System (ADS)
Yuan, Liming; Dai, Fei; Xu, Yonggang; Zhang, Yuan
2018-03-01
To address the problem of designing a millimeter-wave scale absorber, a Salisbury screen absorber is employed and designed on the basis of reflection loss (RL). By optimizing parameters including the sheet resistance of the surface resistive layer and the permittivity and thickness of the grounded dielectric layer, the RL of the Salisbury screen absorber can be made identical to that of the theoretical scale absorber. An example is given to verify the effectiveness of the method: the Salisbury screen absorber is designed by the proposed method and compared with the theoretical scale absorber. Plate models and tri-corner reflector (TCR) models are then constructed according to the designed result, and their scattering properties are simulated with FEKO. Results reveal that the deviation between the designed Salisbury screen absorber and the theoretical scale absorber falls within the tolerance of radar cross section (RCS) measurement. The work in this paper has important theoretical and practical significance for electromagnetic measurement at large scale ratios.
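The transmission-line model behind a Salisbury screen makes the RL computation compact: a resistive sheet in parallel with a shorted (grounded) dielectric spacer. The sketch below implements that textbook model; the frequency, permittivity, and thickness values are illustrative, not the paper's optimized design.

```python
# Reflection-loss (RL) calculation for an ideal Salisbury screen: a resistive
# sheet of sheet resistance Rs on a grounded dielectric spacer of relative
# permittivity eps_r and thickness d. Standard transmission-line model at
# normal incidence; parameter values below are illustrative only.
import numpy as np

Z0 = 376.73                      # free-space wave impedance (ohm)
c = 2.998e8                      # speed of light (m/s)

def salisbury_rl(freq, Rs, eps_r, d):
    """Reflection loss (dB) at normal incidence."""
    lam = c / freq
    beta = 2 * np.pi * np.sqrt(eps_r) / lam        # phase constant in spacer
    Zc = Z0 / np.sqrt(eps_r)                       # spacer wave impedance
    Zd = 1j * Zc * np.tan(beta * d)                # shorted (grounded) line
    Zin = Rs * Zd / (Rs + Zd)                      # sheet parallel to spacer
    gamma = (Zin - Z0) / (Zin + Z0)
    return 20 * np.log10(np.abs(gamma))

# Classic quarter-wave design: Rs = Z0, d = lam / (4*sqrt(eps_r)) at f0.
f0, eps_r = 35e9, 2.0
d = (c / f0) / (4 * np.sqrt(eps_r))
# At f0 the screen is perfectly matched (RL -> -inf); evaluate nearby instead.
print(f"RL at 0.9*f0: {salisbury_rl(0.9 * f0, Z0, eps_r, d):.1f} dB")
```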
Simulating recurrent event data with hazard functions defined on a total time scale.
Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald
2015-03-08
In medical studies with recurrent event data, a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on conditional distributions of the inter-event times conditional on the total time of the preceding event or study start. Closed form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that particularly allows for complex study designs where no analytical sample size formulas might exist. The derived simulation algorithm is seen to be useful for the simulation of recurrent event data that follow an Andersen-Gill model. Next to the use of a total time scale, it allows for intra-patient correlation and risk-free intervals as are often observed in clinical trial data. Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.
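The core of such an algorithm is inversion sampling from the conditional inter-event time distribution on the total time scale. A minimal sketch for a Weibull-type hazard, where the inversion has a closed form, is given below; covariates, frailties, and the risk-free intervals of the full algorithm (and its R implementation) are omitted.

```python
# Total-time-scale recurrent event simulation: each inter-event time is drawn
# from the conditional distribution of the next event time given the total
# time of the preceding event. Shown for a Weibull-type hazard
# h(t) = lam * gam * t**(gam - 1), i.e. Lambda(t) = lam * t**gam.
import numpy as np

def simulate_recurrent(lam, gam, follow_up, rng):
    """Event times on a total time scale, censored at `follow_up`."""
    times, t = [], 0.0
    while True:
        u = rng.uniform()
        # Solve Lambda(t_next) - Lambda(t) = -log(u) in closed form:
        t = (t**gam - np.log(u) / lam) ** (1.0 / gam)
        if t > follow_up:
            return times
        times.append(t)

rng = np.random.default_rng(42)
events = simulate_recurrent(lam=0.5, gam=1.5, follow_up=5.0, rng=rng)
print("simulated event times:", np.round(events, 2))
```

With gam > 1 the hazard increases over total time, so events cluster later in follow-up, which a gap-time simulation would not reproduce.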
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Taylor, Zachary T.
Recent advances in residential building energy efficiency and codes have resulted in increased interest in detailed residential building energy models using the latest energy simulation software. One of the challenges of developing residential building models to characterize new residential building stock is allowing for flexibility to address variability in house features such as geometry, configuration, and HVAC systems. Researchers solved this problem in a novel way by creating a simulation structure capable of generating fully-functional EnergyPlus batch runs using a completely scalable residential EnergyPlus template system. This system was used to create a set of thirty-two residential prototype building models covering single- and multifamily buildings, four common foundation types and four common heating system types found in the United States (US). A weighting scheme with detailed state-wise and national weighting factors was designed to supplement the residential prototype models. The complete set is designed to represent a majority of new residential construction stock. The entire structure consists of a system of utility programs developed around the core EnergyPlus simulation engine to automate the creation and management of large-scale simulation studies with minimal human effort. The simulation structure and the residential prototype building models have been used for numerous large-scale studies, one of which is briefly discussed in this paper.
Enabling parallel simulation of large-scale HPC network systems
Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...
2016-04-07
Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
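At the heart of any discrete-event network simulator is a timestamped event queue. The toy below shows that sequential core for packets hopping around a small ring; it is only a didactic sketch, since CODES/ROSS additionally partition the event queue across processes and execute optimistically with rollback.

```python
# A minimal sequential discrete-event core: a priority queue of timestamped
# events, here moving packets around a 4-node ring with fixed per-hop latency.
# This illustrates event scheduling only; topology and latency are made up.
import heapq

NUM_NODES, HOP_LATENCY = 4, 2.0
events = []                                   # entries: (time, seq, pkt, node)
seq = 0
for pkt in range(3):                          # inject 3 packets at t=0, node 0
    heapq.heappush(events, (0.0, seq, pkt, 0))
    seq += 1

while events:
    t, _, pkt, node = heapq.heappop(events)
    dest = (pkt + 1) % NUM_NODES              # hypothetical routing target
    if node == dest:
        print(f"t={t:4.1f}: packet {pkt} delivered at node {node}")
        continue
    nxt = (node + 1) % NUM_NODES              # forward one hop on the ring
    heapq.heappush(events, (t + HOP_LATENCY, seq, pkt, nxt))
    seq += 1
```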
Results of Small-scale Solid Rocket Combustion Simulator testing at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
Goldberg, Benjamin E.; Cook, Jerry
1993-01-01
The Small-scale Solid Rocket Combustion Simulator (SSRCS) program was established at the Marshall Space Flight Center (MSFC), and used a government/industry team consisting of Hercules Aerospace Corporation, Aerotherm Corporation, United Technology Chemical Systems Division, Thiokol Corporation and MSFC personnel to study the feasibility of simulating the combustion species, temperatures and flow fields of a conventional solid rocket motor (SRM) with a versatile simulator system. The SSRCS design is based on hybrid rocket motor principles. The simulator uses a solid fuel and a gaseous oxidizer. Verification of the feasibility of a SSRCS system as a test bed was completed using flow field and system analyses, as well as empirical test data. A total of 27 hot firings of a subscale SSRCS motor were conducted at MSFC. Testing of the Small-scale SSRCS program was completed in October 1992. This paper, a compilation of reports from the above team members and additional analysis of the instrumentation results, will discuss the final results of the analyses and test programs.
Challenge toward the prediction of typhoon behaviour and down pour
NASA Astrophysics Data System (ADS)
Takahashi, K.; Onishi, R.; Baba, Y.; Kida, S.; Matsuda, K.; Goto, K.; Fuchigami, H.
2013-08-01
Mechanisms of interactions among phenomena at different scales play important roles in the forecasting of weather and climate. The Multi-scale Simulator for the Geoenvironment (MSSG), which deals with multi-scale multi-physics phenomena, is a coupled non-hydrostatic atmosphere-ocean model designed to run efficiently on the Earth Simulator. We present simulation results at the world's highest 1.9 km horizontal resolution for the entire globe, together with regional heavy-rain simulations at 1 km horizontal resolution and urban-area simulations at 5 m horizontal/vertical resolution. To gain high performance by exploiting the system capabilities, we apply performance evaluation metrics introduced in previous studies that incorporate the effects of the data caching mechanism between CPU and memory. With a useful code optimization guideline based on such metrics, we demonstrate that MSSG can achieve an excellent peak performance ratio of 32.2% on the Earth Simulator, with single-core performance found to be a key to a reduced time-to-solution.
Simulation of the human-telerobot interface on the Space Station
NASA Technical Reports Server (NTRS)
Stuart, Mark A.; Smith, Randy L.
1993-01-01
Many issues remain unresolved concerning the components of the human-telerobot interface presented in this work. It is critical that these components be optimally designed and arranged to ensure not only that the overall system's goals are met, but also that the intended end-user has been optimally accommodated. With sufficient testing and evaluation throughout the development cycle, the selection of the components to use in the final telerobotic system can promote efficient, error-free performance. It is recommended that whole-system simulation with full-scale mockups be used to help design the human-telerobot interface. It is contended that the use of simulation can facilitate this design and evaluation process.
Detector Development for the MARE Neutrino Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galeazzi, M.; Bogorin, D.; Molina, R.
2009-12-16
The MARE experiment is designed to measure the mass of the neutrino with sub-eV sensitivity by measuring the beta decay of ¹⁸⁷Re with cryogenic microcalorimeters. A preliminary analysis shows that, to achieve the necessary statistics, between 10,000 and 50,000 detectors are likely necessary. We have fabricated and characterized Iridium transition edge sensors with high reproducibility and uniformity for such a large scale experiment. We have also started a full scale simulation of the experimental setup for MARE, including thermalization in the absorber, detector response, and optimum filter analysis, to understand the issues related to reaching a sub-eV sensitivity and to optimize the design of the MARE experiment. We present our characterization of the Ir devices, including reproducibility, uniformity, and sensitivity, and we discuss the implementation and capabilities of our full scale simulation.
Pizzitutti, Francesco; Pan, William; Feingold, Beth; Zaitchik, Ben; Álvarez, Carlos A; Mena, Carlos F
2018-01-01
Though malaria control initiatives have markedly reduced malaria prevalence in recent decades, global eradication is far from actuality. Recent studies show that environmental and social heterogeneities in low-transmission settings have an increased weight in shaping malaria micro-epidemiology. New integrated and more localized control strategies should be developed and tested. Here we present a set of agent-based models designed to study the influence of local scale human movements on local scale malaria transmission in a typical Amazon environment, where malaria transmission is low and strongly connected with seasonal riverine flooding. The agent-based simulations show that the overall malaria incidence is essentially not influenced by local scale human movements. In contrast, the locations of malaria high risk spatial hotspots heavily depend on human movements because simulated malaria hotspots are mainly centered on farms, where laborers work during the day. The agent-based models are then used to test the effectiveness of two different malaria control strategies, both designed to reduce local scale malaria incidence by targeting hotspots. The first control scenario consists of treating against mosquito bites those people who, during the simulation, enter at least once a hotspot identified from the actual sites where individuals were infected. The second scenario involves treating people entering hotspots calculated by assuming that the infection site of every infected individual is located in the household where that individual lives. Simulations show that both scenarios perform better in controlling malaria than a randomized treatment, although targeting household hotspots shows slightly better performance.
Requirements for future development of small scale rainfall simulators
NASA Astrophysics Data System (ADS)
Iserloh, Thomas; Ries, Johannes B.; Seeger, Manuel
2013-04-01
Rainfall simulation with small scale simulators is a method used worldwide to assess the generation of overland flow, soil erosion, infiltration and interrelated processes such as soil sealing, crusting, splash and redistribution of solids and solutes. Following the outcomes of the project "Comparability of simulation results of different rainfall simulators as input data for soil erosion modelling (Deutsche Forschungsgemeinschaft - DFG, Project No. Ri 835/6-1)" and the "International Rainfall Simulator Workshop 2011" in Trier, the necessity for further technical improvements of simulators and strategies towards an adaption of designs and methods becomes obvious. Uniform measurements of artificially generated rainfall and comparative measurements on a prepared bare fallow with rainfall simulators used by European research groups showed limitations of the comparability of the results. The following requirements, essential for small portable rainfall simulators, were identified: (I) Low and efficient water consumption for use in areas with water shortage, (II) easy handling and control of test conditions, (III) homogeneous spatial rainfall distribution, (IV) best possible drop spectrum (physically), (V) reproducibility and knowledge of spatial distribution and drop spectrum, (VI) easy and fast training of operators to obtain reproducible experiments and (VII) good mobility and easy installation for use in remote areas and in regions where highly erosive rainfall events are rare or irregular. The presentation discusses possibilities for a common use of identical plot designs, rainfall intensities and nozzles.
Thibault, J. C.; Roe, D. R.; Eilbeck, K.; Cheatham, T. E.; Facelli, J. C.
2015-01-01
Biomolecular simulations aim to simulate structure, dynamics, interactions, and energetics of complex biomolecular systems. With the recent advances in hardware, it is now possible to use more complex and accurate models, but also reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data – both within the same organization and among different ones – remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulations data. Our efforts include the design of data models and dictionary tools that allow the standardization of the metadata used to describe the biomedical simulations, the development of a thesaurus and ontology for computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models to manage and share the data at a large scale (iBIOMES), and within smaller groups of researchers at laboratory scale (iBIOMES Lite), that take advantage of the standardization of the meta data used to describe biomolecular simulations. PMID:26387907
A Simulated Research Problem for Undergraduate Metamorphic Petrology.
ERIC Educational Resources Information Center
Amenta, Roddy V.
1984-01-01
Presents a laboratory problem in metamorphic petrology designed to simulate a research experience. The problem deals with data on scales ranging from a geologic map to hand specimens to thin sections. Student analysis includes identifying metamorphic index minerals, locating their isograds on the map, and determining the folding sequence. (BC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Critical infrastructures around the world are at constant risk from earthquakes. Most of these critical structures are designed using archaic seismic simulation methods built on early digital computers from the 1970s. Idaho National Laboratory's Seismic Research Group is working to modernize these simulation methods through computational research and large-scale laboratory experiments.
From micro-scale 3D simulations to macro-scale model of periodic porous media
NASA Astrophysics Data System (ADS)
Crevacore, Eleonora; Tosco, Tiziana; Marchisio, Daniele; Sethi, Rajandrea; Messina, Francesca
2015-04-01
In environmental engineering, the transport of colloidal suspensions in porous media is studied to understand the fate of potentially harmful nano-particles and to design new remediation technologies. In this perspective, averaging techniques applied to micro-scale numerical simulations are a powerful tool to extrapolate accurate macro-scale models. Choosing two simplified packing configurations of soil grains and starting from a single elementary cell (module), it is possible to take advantage of the periodicity of the structures to reduce the computational cost of full 3D simulations. Steady-state flow simulations for an incompressible fluid in the laminar regime are implemented. Transport simulations are based on the pore-scale advection-diffusion equation, which can be enriched by introducing the Stokes velocity (to account for gravity) and the interception mechanism. Simulations are carried out on a domain composed of several elementary modules, which serve as control volumes in a finite volume method at the macro scale. The periodicity of the medium implies the periodicity of the flow field, which is of great importance during the up-scaling procedure, allowing relevant simplifications. Micro-scale numerical data are treated in order to compute the mean concentration (volume and area averages) and fluxes on each module. The simulation results are used to compare the micro-scale averaged equation to the integral form of the macroscopic one, making a distinction between those terms that can be computed exactly and those for which a closure is needed. Of particular interest is the investigation of the origin of macro-scale terms such as dispersion and tortuosity, describing them with known micro-scale quantities. Traditionally, studies of colloidal transport introduce many simplifications, such as ultra-simplified geometries that account for a single collector. Gradual removal of such hypotheses leads to a detailed description of colloidal transport mechanisms. Starting from nearly realistic 3D geometries, the ultimate purpose of this work is to develop an improved understanding of the fate of colloidal particles through, for example, an accurate description of the deposition efficiency, in order to design efficient remediation techniques. G. Boccardo, D.L. Marchisio, R. Sethi, Journal of Colloid and Interface Science, Vol. 417C, pp. 227-237, 2014; M. Icardi, G. Boccardo, D.L. Marchisio, T. Tosco, R. Sethi, Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 2014; S. Torkzaban, S.S. Tazehkand, S.L. Walker, S.A. Bradford, Water Resources Research, Vol. 44, 2008; S.M. Hassanizadeh, Advances in Water Resources, Vol. 2, pp. 131-144, 1979; S. Whitaker, AIChE Journal, Vol. 13, No. 3, pp. 420-428, May 1967.
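The averaging step itself is straightforward once the pore-scale fields are available: each module's mean concentration becomes one control-volume value for the macro-scale balance. A minimal sketch on a synthetic 3-D field follows; the axis layout and module size are assumptions, not the authors' actual discretization.

```python
# Volume-averaging sketch: collapse a pore-scale 3-D concentration field into
# per-module averages, the quantity compared against the macro-scale model.
# The array below is synthetic; real input would come from the CFD solver.
import numpy as np

def module_averages(c, cells_per_module):
    """Average a 3-D field over consecutive modules stacked along axis 0.

    c: concentration field, shape (nx, ny, nz); nx must be a multiple of
    cells_per_module (each module is one elementary periodic cell).
    """
    nx = c.shape[0]
    n_modules = nx // cells_per_module
    blocks = c.reshape(n_modules, cells_per_module, *c.shape[1:])
    return blocks.mean(axis=(1, 2, 3))        # one mean value per module

rng = np.random.default_rng(7)
field = rng.random((40, 16, 16))              # 5 modules of 8 cells each
print("module-averaged concentrations:",
      np.round(module_averages(field, 8), 3))
```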
SIMSAT: An object oriented architecture for real-time satellite simulation
NASA Technical Reports Server (NTRS)
Williams, Adam P.
1993-01-01
Real-time satellite simulators are vital tools in the support of satellite missions. They are used in the testing of ground control systems, the training of operators, the validation of operational procedures, and the development of contingency plans. The simulators must provide high-fidelity modeling of the satellite, which requires detailed system information, much of which is not available until relatively near launch. The short time-scales and resulting high productivity required of such simulator developments culminates in the need for a reusable infrastructure which can be used as a basis for each simulator. This paper describes a major new simulation infrastructure package, the Software Infrastructure for Modelling Satellites (SIMSAT). It outlines the object oriented design methodology used, describes the resulting design, and discusses the advantages and disadvantages experienced in applying the methodology.
NASA Technical Reports Server (NTRS)
Darmofal, David L.
2003-01-01
The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Brian; Gutowska, Izabela; Chiger, Howard
Computer simulations of nuclear reactor thermal-hydraulic phenomena are often used in the design and licensing of nuclear reactor systems. In order to assess the accuracy of these computer simulations, computer codes and methods are often validated against experimental data. This experimental data must be of sufficiently high quality in order to conduct a robust validation exercise. In addition, this experimental data is generally collected at experimental facilities that are of a smaller scale than the reactor systems being simulated, due to cost considerations. Therefore, smaller scale test facilities must be designed and constructed in such a fashion as to ensure that the prototypical behavior of a particular nuclear reactor system is preserved. The work completed through this project has resulted in scaling analyses and conceptual design development for a test facility capable of collecting code validation data for the following high temperature gas reactor systems and events: (1) passive natural circulation core cooling system, (2) pebble bed gas reactor concept, (3) General Atomics Energy Multiplier Module reactor, and (4) prismatic block design steam-water ingress event. In the event that code validation data for these systems or events is needed in the future, significant progress in the design of an appropriate integral-type test facility has already been completed as a result of this project. Where applicable, the next step would be to begin the detailed design development and material procurement. As part of this project, applicable scaling analyses were completed and test facility design requirements developed. Conceptual designs were developed for the implementation of these design requirements at the Oregon State University (OSU) High Temperature Test Facility (HTTF). The original HTTF is based on a ¼-scale model of a high temperature gas reactor concept with the capability for both forced and natural circulation flow through a prismatic core with an electrical heat source. The peak core region temperature capability is 1400°C. As part of this project, an inventory of test facilities that could be used for these experimental programs was completed. Several of these facilities showed some promise; however, upon further investigation it became clear that only the OSU HTTF had the power and/or peak temperature limits that would allow for the experimental programs envisioned herein. Thus the conceptual design and feasibility study development focused on examining the feasibility of configuring the current HTTF to collect validation data for these experimental programs. In addition to the scaling analyses and conceptual design development, a test plan was developed for the envisioned modified test facility. This test plan included a discussion of an appropriate shakedown test program as well as the specific matrix tests. Finally, a feasibility study was completed to determine the cost and schedule considerations that would be important to any test program developed to investigate these designs and events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.; Moridis, G.J.; Pruess, K.
1994-01-01
The emplacement of liquids under controlled viscosity conditions is investigated by means of numerical simulations. Design calculations are performed for a laboratory experiment on a decimeter scale and a field experiment on a meter scale. The purpose of the laboratory experiment is to study the behavior of multiple grout plumes injected into a porous medium. The calculations for the field trial aim at designing a grout injection test from a vertical well in order to create a grout plume of significant extent in the subsurface.
The GlueX central drift chamber: Design and performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Haarlem, Y; Barbosa, F; Dey, B
2010-10-01
Tests and studies concerning the design and performance of the GlueX Central Drift Chamber (CDC) are presented. A full-scale prototype was built to test and steer the mechanical and electronic design. Small scale prototypes were constructed to test for sagging and to do timing and resolution studies of the detector. These studies were used to choose the gas mixture and to program a Monte Carlo simulation that can predict the detector response in an external magnetic field. Particle identification and charge division possibilities were also investigated.
1993-04-01
…wave buoy provided by SEATEX, Norway (Figure 3). The modified Mills-cross array was designed to provide spatial estimates of the variation in wave and wind… designed for SWADE to examine the wave physics at different spatial and temporal scales, and the usefulness of a nested system. Each grid is supposed to…field specification. SWADE Model: This high-resolution grid was designed to simulate the small-scale wave physics and to improve and verify the source…
Evolution of an interfacial crack on the concrete-embankment boundary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glascoe, Lee; Antoun, Tarabay; Kanarska, Yuliya
2013-07-10
Failure of a dam can have subtle beginnings. A small crack or dislocation at the interface of the concrete dam and the surrounding embankment soil, initiated by, for example, a seismic or an explosive event, can lead to a catastrophic failure of the dam. The dam may ‘self-rehabilitate’ if a properly designed granular filter is engineered around the embankment. Currently, the design criteria for such filters have been based only on experimental studies. We demonstrate the numerical prediction of filter effectiveness at the soil grain scale. This joint LLNL-ERDC basic research project, funded by the Department of Homeland Security’s Science and Technology Directorate (DHS S&T), consists of validating advanced high performance computer simulations of soil erosion and transport, for grain- and dam-scale models, against detailed centrifuge and soil erosion tests. Validated computer predictions highlight that a resilient filter is consistent with the current design specifications for dam filters. These predictive simulations, unlike the design specifications, can be used to assess filter success or failure under different soil or loading conditions and can lead to meaningful estimates of the timing and nature of full-scale dam failure.
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-01-01
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design. PMID:25404761
NASA Astrophysics Data System (ADS)
Antsiferov, S. I.; Eltsov, M. Iu; Khakhalev, P. A.
2018-03-01
This paper considers a newly designed electronic digital model of a robotic complex for implementing full-scale additive technologies, funded under a Federal Target Program. The electronic digital model was used to solve the problem of simulating the movement of the robotic complex in the NX CAD/CAM/CAE system. As part of solving this problem, the virtual mechanism was built and the main assemblies, joints, and drives were identified. In addition, the maximum allowed printable area was determined for the robotic complex, and a simulation of printing a rectangular-shaped article was carried out.
Finite Element Simulation of the Shear Effect of Ultrasonic on Heat Exchanger Descaling
NASA Astrophysics Data System (ADS)
Lu, Shaolv; Wang, Zhihua; Wang, Hehui
2018-03-01
The shear effect at the interface between a metal plate and its attached scale is an important mechanism of ultrasonic descaling; it is caused by the different propagation speeds of the ultrasonic wave in the two media. The propagation of the ultrasonic wave on the shell is simulated using ANSYS/LS-DYNA explicit dynamic analysis. The distribution of shear stress along different paths under ultrasonic vibration is obtained through the finite element analysis, revealing the main descaling mechanism of the shear effect. The simulation results inform the rational design and application of ultrasonic descaling technology on heat exchangers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Zhijie; Li, Tingwen
In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber’s performance and capture efficiency under various operating conditions. This paper focuses on detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered models to capture the effect of unresolved details on the coarser mesh, allowing simulations with reasonable accuracy and manageable computational effort. Two previously developed filtered models for horizontal-cylinder drag, heat transfer, and reaction kinetics have been modified to derive 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical) on the adsorber’s hydrodynamics and CO2 capture performance are then examined. The simulation results are subsequently compared and contrasted with those predicted by a one-dimensional three-region process model.
NASA Technical Reports Server (NTRS)
Bezos, Gaudy M.; Campbell, Bryan A.
1993-01-01
A large-scale, outdoor, ground-based test capability for acquiring aerodynamic data in a simulated rain environment was developed at the Langley Aircraft Landing Dynamics Facility (ALDF) to assess the effect of heavy rain on airfoil performance. The ALDF test carriage was modified to transport a 10-ft-chord NACA 64210 wing section along a 3000-ft track at full-scale aircraft approach speeds. An overhead rain simulation system was constructed along a 525-ft section of the track with the capability of producing simulated rain fields of 2, 10, 30, and 40 in/hr. The facility modifications, the aerodynamic testing and rain simulation capability, the design and calibration of the rain simulation system, and the operational procedures developed to minimize the effect of wind on the simulated rain field and aerodynamic data are described in detail. The data acquisition and reduction processes are also presented along with sample force data illustrating the environmental effects on data accuracy and repeatability for the 'rain-off' test condition.
A Hybrid Multiscale Framework for Subsurface Flow and Transport Simulations
Scheibe, Timothy D.; Yang, Xiaofan; Chen, Xingyuan; ...
2015-06-01
Extensive research efforts have been invested in reducing model errors to improve the predictive ability of biogeochemical earth and environmental system simulators, with applications ranging from contaminant transport and remediation to impacts of biogeochemical elemental cycling (e.g., carbon and nitrogen) on local ecosystems and regional to global climate. While the bulk of this research has focused on improving model parameterizations in the face of observational limitations, the more challenging type of model error/uncertainty to identify and quantify is model structural error which arises from incorrect mathematical representations of (or failure to consider) important physical, chemical, or biological processes, properties, or system states in model formulations. While improved process understanding can be achieved through scientific study, such understanding is usually developed at small scales. Process-based numerical models are typically designed for a particular characteristic length and time scale. For application-relevant scales, it is generally necessary to introduce approximations and empirical parameterizations to describe complex systems or processes. This single-scale approach has been the best available to date because of limited understanding of process coupling combined with practical limitations on system characterization and computation. While computational power is increasing significantly and our understanding of biological and environmental processes at fundamental scales is accelerating, using this information to advance our knowledge of the larger system behavior requires the development of multiscale simulators. Accordingly there has been much recent interest in novel multiscale methods in which microscale and macroscale models are explicitly coupled in a single hybrid multiscale simulation. A limited number of hybrid multiscale simulations have been developed for biogeochemical earth systems, but they mostly utilize application-specific and sometimes ad-hoc approaches for model coupling. We are developing a generalized approach to hierarchical model coupling designed for high-performance computational systems, based on the Swift computing workflow framework. In this presentation we will describe the generalized approach and provide two use cases: 1) simulation of a mixing-controlled biogeochemical reaction coupling pore- and continuum-scale models, and 2) simulation of biogeochemical impacts of groundwater – river water interactions coupling fine- and coarse-grid model representations. This generalized framework can be customized for use with any pair of linked models (microscale and macroscale) with minimal intrusiveness to the at-scale simulators. It combines a set of python scripts with the Swift workflow environment to execute a complex multiscale simulation utilizing an approach similar to the well-known Heterogeneous Multiscale Method. User customization is facilitated through user-provided input and output file templates and processing function scripts, and execution within a high-performance computing environment is handled by Swift, such that minimal to no user modification of at-scale codes is required.
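The structure of such a hybrid coupling can be reduced to a loop in which each macro-scale step queries a micro-scale model for a closure quantity. The placeholder sketch below illustrates only that pattern; the two functions stand in for the pore-scale and continuum-scale simulators coupled through Swift in the actual framework.

```python
# Skeleton of a hierarchical macro/micro coupling loop in the spirit of the
# Heterogeneous Multiscale Method: at each macro step a micro-scale model is
# evaluated and its result closes the macro-scale update. Both models are
# placeholder functions, not the at-scale simulators used in the framework.
def micro_model(state):
    """Stand-in for a fine-scale simulation returning an effective rate."""
    return 0.1 * state                      # hypothetical closure relation

def macro_step(state, dt):
    rate = micro_model(state)               # downscale: query micro model
    return state - dt * rate                # upscale: use result in update

state, dt = 1.0, 0.5
for step in range(4):
    state = macro_step(state, dt)
    print(f"step {step}: macro state = {state:.4f}")
```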
Modeling and Simulation of the Second-Generation Orion Crew Module Air Bag Landing System
NASA Technical Reports Server (NTRS)
Timmers, Richard B.; Hardy, Robin C.; Willey, Cliff E.; Welch, Joseph V.
2009-01-01
Air bags were evaluated as the landing attenuation system for earth landing of the Orion Crew Module (CM). Analysis conducted to date shows that airbags are capable of providing a graceful landing of the CM in nominal and off-nominal conditions such as parachute failure, high horizontal winds, and unfavorable vehicle/ground angle combinations, while meeting crew and vehicle safety requirements. The analyses and associated testing presented here surround a second generation of the airbag design developed by ILC Dover, building off of relevant first-generation design, analysis, and testing efforts. In order to fully evaluate the second generation air bag design and correlate the dynamic simulations, a series of drop tests were carried out at NASA Langley's Landing and Impact Research (LandIR) facility in Hampton, Virginia. The tests consisted of a full-scale set of air bags attached to a full-scale test article representing the Orion Crew Module. The techniques used to collect experimental data, develop the simulations, and make comparisons to experimental data are discussed.
Xie, Yi; Mun, Sungyong; Kim, Jinhyun; Wang, Nien-Hwa Linda
2002-01-01
A tandem simulated moving bed (SMB) process for insulin purification has been proposed and validated experimentally. The mixture to be separated consists of insulin, high molecular weight proteins, and zinc chloride. A systematic approach based on the standing wave design, rate model simulations, and experiments was used to develop this multicomponent separation process. The standing wave design was applied to specify the SMB operating conditions of a lab-scale unit with 10 columns. The design was validated with rate model simulations prior to experiments. The experimental results show 99.9% purity and 99% yield, which closely agree with the model predictions and the standing wave design targets. The agreement proves that the standing wave design can ensure high purity and high yield for the tandem SMB process. Compared to a conventional batch size-exclusion chromatography (SEC) process, the tandem SMB has 10% higher yield, 400% higher throughput, and 72% lower eluant consumption. In contrast, a design that ignores the effects of mass transfer and nonideal flow cannot meet the purity requirement and gives less than 96% yield.
A Decade-Long European-Scale Convection-Resolving Climate Simulation on GPUs
NASA Astrophysics Data System (ADS)
Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.
2016-12-01
Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that involve conventional multi-core CPUs and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the size of the simulation domain to areas spanning continents and the time period up to one decade. We present results from a decade-long, convection-resolving climate simulation over Europe using the GPU-enabled COSMO version on a computational domain with 1536x1536x60 gridpoints. The simulation is driven by the ERA-interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss some of the advantages and prospects of using GPUs, and focus on the performance of the convection-resolving modeling approach on the European scale. Specifically, we investigate the organization of convective clouds and validate hourly rainfall distributions against various high-resolution data sets.
NASA Astrophysics Data System (ADS)
Aguirre, Rodolfo, II
Cadmium telluride (CdTe) is a material used to make solar cells because it absorbs sunlight very efficiently and converts it into electricity. However, CdTe modules suffer from degradation of about 1% per year. Improvements in efficiency and stability can be achieved by designing better materials at the atomic scale. Experimental techniques for studying materials at the atomic scale, such as atom probe tomography (APT) and transmission electron microscopy (TEM), are expensive and time consuming. On the other hand, Molecular Dynamics (MD) offers an inexpensive and fast computer simulation technique to study the growth evolution of materials with atomic-scale resolution. In combination with advanced characterization software, MD simulations provide atomistic visualization, defect analysis, structure maps, 3-D atomistic views, and composition profiles. MD simulations help to design better quality materials by predicting material behavior at the atomic scale. In this work, a new MD method is established to study several phenomena, including polycrystalline growth of CdTe-based materials, interdiffusion of atoms at interfaces, and deposition of a copper-doped ZnTe back contact. Results are compared with experimental data found in the literature and with experiments performed in this work, and are shown to be in remarkably good agreement.
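As a flavor of what an MD engine does at each step, the sketch below integrates two Lennard-Jones atoms with velocity Verlet in reduced units. Production CdTe simulations use many-body interatomic potentials and far larger systems, so this is purely illustrative.

```python
# Minimal molecular-dynamics flavor: velocity-Verlet integration of two atoms
# interacting through a Lennard-Jones potential (reduced units, epsilon =
# sigma = mass = 1). Real CdTe growth simulations use many-body potentials
# and thousands of atoms; this only shows the integration loop MD rests on.
import numpy as np

def lj_force(r_vec):
    """Lennard-Jones force on the atom displaced by r_vec from its partner."""
    r = np.linalg.norm(r_vec)
    return 24.0 * (2.0 / r**13 - 1.0 / r**7) * (r_vec / r)

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros((2, 3))
dt, mass = 0.005, 1.0

f = np.array([lj_force(pos[0] - pos[1]), lj_force(pos[1] - pos[0])])
for step in range(1000):                       # velocity-Verlet integration
    vel += 0.5 * dt * f / mass                 # half-kick with old forces
    pos += dt * vel                            # drift
    f_new = np.array([lj_force(pos[0] - pos[1]), lj_force(pos[1] - pos[0])])
    vel += 0.5 * dt * f_new / mass             # half-kick with new forces
    f = f_new

print("separation after 1000 steps:", np.linalg.norm(pos[0] - pos[1]))
```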
Recovery Act: Oxy-Combustion Technology Development for Industrial-Scale Boiler Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levasseur, Armand
2014-04-30
Alstom Power Inc. (Alstom), under U.S. DOE/NETL Cooperative Agreement No. DE-NT0005290, is conducting a development program to generate detailed technical information needed for application of oxy-combustion technology. The program is designed to provide the necessary information and understanding for the next step of large-scale commercial demonstration of oxy-combustion in tangentially fired boilers and to accelerate the commercialization of this technology. The main project objectives include: • Design and develop an innovative oxyfuel system for existing tangentially-fired boiler units that minimizes overall capital investment and operating costs. • Evaluate performance of oxyfuel tangentially fired boiler systems in pilot-scale tests at Alstom's 15 MWth tangentially fired Boiler Simulation Facility (BSF). • Address technical gaps for the design of oxyfuel commercial utility boilers by focused testing and improvement of engineering and simulation tools. • Develop the design, performance and costs for a demonstration-scale oxyfuel boiler and auxiliary systems. • Develop the design and costs for both industrial and utility commercial-scale reference oxyfuel boilers and auxiliary systems that are optimized for overall plant performance and cost. • Define key design considerations and develop general guidelines for application of results to utility and different industrial applications. The project was initiated in October 2008 and the scope extended in 2010 under an ARRA award. The project completion date was April 30, 2014. Central to the project was 15 MWth testing in the BSF, which provided in-depth understanding of oxy-combustion under boiler conditions, detailed data for improvement of design tools, and key information for application to commercial-scale oxy-fired boiler design. Eight comprehensive 15 MWth oxy-fired test campaigns were performed with different coals, providing detailed data on combustion, emissions, and thermal behavior over a matrix of fuels, oxy-process variables and boiler design parameters. Significant improvement of CFD modeling tools and validation against 15 MWth experimental data has been completed. Oxy-boiler demonstration and large reference designs have been developed, supported with the information and knowledge gained from the 15 MWth testing. The results from the 15 MWth testing in the BSF and complementary bench-scale testing are addressed in this volume (Volume II) of the final report. The results of the modeling efforts (Volume III) and the oxy-boiler design efforts (Volume IV) are reported in separate volumes.
Final Report for DE-FG02-99ER45795
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilkins, John Warren
The research supported by this grant focuses on atomistic studies of defects, phase transitions, electronic and magnetic properties, and mechanical behaviors of materials. We have been studying novel properties of various emerging nanoscale materials on multiple levels of length and time scales, and have made accurate predictions of many technologically important properties. A significant part of our research has been devoted to exploring properties of novel nanoscale materials by pushing the limit of quantum mechanical simulations, and to the development of a rigorous scheme for designing accurate classical interatomic potentials for larger-scale atomistic simulations of many technologically important metals and metal alloys.
Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19
NASA Astrophysics Data System (ADS)
Leutwyler, David; Fuhrer, Oliver; Lapillonne, Xavier; Lüthi, Daniel; Schär, Christoph
2016-09-01
The representation of moist convection in climate models represents a major challenge due to the small scales involved. Using horizontal grid spacings of O(1 km), convection-resolving weather and climate models allow one to explicitly resolve deep convection. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in supercomputing have led to new hybrid node designs, mixing conventional multi-core hardware and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to these architectures is the COSMO (Consortium for Small-scale Modeling) model. Here we present the convection-resolving COSMO model on continental scales using a version of the model capable of using GPU accelerators. The verification of a week-long simulation containing winter storm Kyrill shows that, for this case, convection-parameterizing simulations and convection-resolving simulations agree well. Furthermore, we demonstrate the applicability of the approach to longer simulations by conducting a 3-month-long simulation of the summer season 2006. Its results corroborate findings from smaller domains, such as a more credible representation of the diurnal cycle of precipitation in convection-resolving models and a tendency to produce more intense hourly precipitation events. Both simulations also show how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. This includes the formation of sharp cold-frontal structures, convection embedded in fronts and small eddies, or the formation and organization of propagating cold pools. Finally, we assess the performance gain from using heterogeneous hardware equipped with GPUs relative to multi-core hardware. With the COSMO model, we now use a weather and climate model that has all the necessary modules required for real-case convection-resolving regional climate simulations on GPUs.
NASA Astrophysics Data System (ADS)
Courbat, J.; Canonica, M.; Teyssieux, D.; Briand, D.; de Rooij, N. F.
2011-01-01
The design of ultra-low-power micro-hotplates on a polyimide (PI) substrate, supported by thermal simulations and characterization, is presented. By establishing a method for the thermal simulation of very small scale heating elements, the goal of this study was to decrease the power consumption of PI micro-hotplates to a few milliwatts to make them suitable for very low power applications. To this end, the mean heat transfer coefficients in air of the devices were extracted by finite element analysis combined with very precise thermographic measurements. A simulation model was implemented for these hotplates to investigate the influence of both their downscaling and the bulk micromachining of the polyimide substrate on lowering their power consumption. Simulations were in very good agreement with the experimental results. The main parameters significantly influencing the power consumption at such dimensions were identified, and guidelines were defined allowing the design of very small (15 × 15 µm) and ultra-low-power heating elements (6 mW at 300 °C). These very low power heating structures enable the realization of flexible sensors, such as gas, flow or wind sensors, for applications in autonomous wireless sensor networks or RFID applications, and make them compatible with large-scale production on foil such as roll-to-roll or printing processes.
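As a first-order check on how such a design scales, the sketch below applies the lumped relation P ≈ h·A·ΔT with an assumed effective heat-transfer coefficient of the kind the authors extract by FEM and thermography; the values are illustrative and omit membrane and lead conduction, which also contribute to the measured budget.

    # Hedged estimate; all values are assumptions, not the paper's data. At
    # micrometer scales the effective h in air is orders of magnitude larger
    # than the macroscopic ~10 W m^-2 K^-1.
    h = 2.0e4            # W m^-2 K^-1, assumed mean heat-transfer coefficient
    side = 15e-6         # m, heater side length (the 15 x 15 um design)
    dT = 300.0 - 25.0    # K, heater rise above ambient
    P_conv = h * side**2 * dT
    print(f"{P_conv * 1e3:.2f} mW")  # convective share of the hold power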
Analysis of detection performance of multi band laser beam analyzer
NASA Astrophysics Data System (ADS)
Du, Baolin; Chen, Xiaomei; Hu, Leili
2017-10-01
Compared with microwave radar, laser radar offers high resolution, strong anti-interference ability, and good covertness, which makes it a focus of laser technology engineering applications. A large-scale laser radar cross section (LRCS) measurement system is designed and experimentally tested. First, the boundary conditions are measured and the long-range laser echo power is estimated according to the actual requirements. The estimation results show that the echo power is greater than the detector's response power. Second, a large-scale LRCS measurement system is designed according to this analysis and estimation. The system mainly consists of a laser shaping and beam emitting device, a laser echo receiving device, and an integrated control device. Finally, using the designed lidar cross section measurement system, the scattering cross section of a target is simulated and tested. The simulation results agree closely with the test results, confirming the validity of the system.
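The echo-power estimation step can be illustrated with one common form of the lidar range equation for an extended Lambertian target; all parameter values below are assumptions for illustration, not the system's actual specifications.

    import math

    def echo_power(P_t, rho, A_r, R, T_atm=0.8, eta=0.5):
        """P_t: transmitted power (W); rho: target reflectivity;
        A_r: receiver aperture area (m^2); R: range (m);
        T_atm: one-way atmospheric transmittance; eta: optics efficiency."""
        return P_t * rho * A_r / (math.pi * R**2) * T_atm**2 * eta

    # e.g. 100 W peak power, 30% reflectivity, 0.02 m^2 aperture, 2 km range
    P_r = echo_power(P_t=100.0, rho=0.3, A_r=0.02, R=2000.0)
    print(f"{P_r:.2e} W")  # compare with the detector's minimum detectable power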
Multiscale Modeling in the Clinic: Drug Design and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, Colleen E.; An, Gary; Cannon, William R.
A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tome, Carlos N; Caro, J A; Lebensohn, R A
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model the nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.
IMPLEMENTATION OF GREEN ROOF SUSTAINABILITY IN ARID CONDITIONS
We successfully designed and fabricated accurately scaled prototypes of a green roof and a conventional white roof and began testing in simulated conditions of 115-70°F with relative humidity of 13%. The design parameters were based on analytical models created through ver...
A large scale software system for simulation and design optimization of mechanical systems
NASA Technical Reports Server (NTRS)
Dopker, Bernhard; Haug, Edward J.
1989-01-01
The concept of an advanced integrated, networked simulation and design system is outlined. Such an advanced system can be developed utilizing existing codes without compromising the integrity and functionality of the system. An example has been used to demonstrate the applicability of the concept of the integrated system outlined here. The development of an integrated system can be done incrementally. Initial capabilities can be developed and implemented without having a detailed design of the global system. Only a conceptual global system must exist. For a fully integrated, user friendly design system, further research is needed in the areas of engineering data bases, distributed data bases, and advanced user interface design.
NASA Astrophysics Data System (ADS)
Blain, Matthew G.; Riter, Leah S.; Cruz, Dolores; Austin, Daniel E.; Wu, Guangxiang; Plass, Wolfgang R.; Cooks, R. Graham
2004-08-01
Breakthrough improvements in simplicity and reductions in the size of mass spectrometers are needed for high-consequence fieldable applications, including error-free detection of chemical/biological warfare agents, medical diagnoses, and explosives and contraband discovery. These improvements are most likely to be realized with the reconceptualization of the mass spectrometer, rather than by incremental steps towards miniaturization. Microfabricated arrays of mass analyzers represent such a conceptual advance. A massively parallel array of micrometer-scaled mass analyzers on a chip has the potential to set the performance standard for hand-held sensors due to the inherent selectivity, sensitivity, and universal applicability of mass spectrometry as an analytical method. While the effort to develop a complete micro-MS system must include innovations in ultra-small-scale sample introduction, ion sources, mass analyzers, detectors, and vacuum and power subsystems, the first step towards radical miniaturization lies in the design, fabrication, and characterization of the mass analyzer itself. In this paper we discuss design considerations and results from simulations of ion trapping behavior for a micrometer-scale cylindrical ion trap (CIT) mass analyzer (internal radius r0 = 1 µm). We also present a description of the design and microfabrication of a 0.25 cm² array of 10^6 one-micrometer CITs, including integrated ion detectors, constructed in tungsten on a silicon substrate.
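To see why micrometer-scale traps imply GHz-range drive at only a few volts, the sketch below evaluates the Mathieu stability parameter for an ideal quadrupole ion trap, a common first approximation for a CIT; the ion mass, RF amplitude, and drive frequency are assumptions, not the fabricated array's operating point.

    import math

    e, amu = 1.602e-19, 1.661e-27   # SI constants
    m = 100 * amu                   # a 100-Da ion (assumed)
    r0 = 1e-6                       # m, internal radius from the paper
    V = 5.0                         # V, zero-to-peak RF amplitude (assumed)
    Omega = 2 * math.pi * 1e9       # rad/s, 1 GHz drive frequency (assumed)
    q_z = 4 * e * V / (m * r0**2 * Omega**2)
    print(round(q_z, 3))            # stable trapping typically needs q_z < ~0.908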
Flow field prediction in full-scale Carrousel oxidation ditch by using computational fluid dynamics.
Yang, Yin; Wu, Yingying; Yang, Xiao; Zhang, Kai; Yang, Jiakuan
2010-01-01
In order to optimize the flow field in a full-scale Carrousel oxidation ditch with many sets of disc aerators operating simultaneously, an experimentally validated numerical tool based on computational fluid dynamics (CFD) was proposed. A full-scale, closed-loop bioreactor (Carrousel oxidation ditch) in the Ping Dingshan Sewage Treatment Plant in Ping Dingshan City, a medium-sized city in Henan Province of China, was evaluated using CFD. A moving wall model was created to simulate the many sets of disc aerators that drive fluid motion in the ditch. The simulated results compared acceptably with the experimental data, and the following conclusions were obtained: (1) the new moving wall method can simulate the flow field in a Carrousel oxidation ditch with many sets of disc aerators operating simultaneously, while significantly decreasing the total number of grid cells and hence the computational cost; and (2) CFD modeling generally characterized the flow pattern in the full-scale tank, and 3D simulation can be a good supplement for improving the hydrodynamic performance of oxidation ditch designs.
Layout-aware simulation of soft errors in sub-100 nm integrated circuits
NASA Astrophysics Data System (ADS)
Balbekov, A.; Gorbunov, M.; Bobkov, S.
2016-12-01
A single event transient (SET) caused by a charged particle traveling through the sensitive volume of an integrated circuit (IC) may lead to various errors in digital circuits. In technologies below 180 nm, a single particle can affect multiple devices, causing multiple SETs. This adds complexity to the design of fault-tolerant devices, because schematic-level design techniques become ineffective without consideration of the layout. The most common layout mitigation technique is spatial separation of the sensitive nodes of hardened circuits. Spatial separation decreases circuit performance and increases power consumption; spacing should therefore be reasonable, and its scaling follows the scaling trend of device dimensions. This paper presents the development of a SET simulation approach comprising SPICE simulation with a "double exponential" current source as the SET model. The technique uses the layout in GDSII format to locate nearby devices that can be affected by a single particle and that can share the generated charge. The developed software tool automates multiple simulations and gathers the produced data, presenting it as a sensitivity map. Examples of simulations of fault-tolerant cells and their sensitivity maps are presented in this paper.
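The "double exponential" SET stimulus named in the abstract has a standard analytic form; the sketch below implements it with illustrative rise and fall time constants, normalized so the pulse integrates to the deposited charge Q.

    import numpy as np

    def set_current(t, Q, tau_rise=5e-12, tau_fall=200e-12):
        """Double-exponential current pulse; its time integral equals Q."""
        return Q / (tau_fall - tau_rise) * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

    t = np.linspace(0.0, 1e-9, 1001)
    i = set_current(t, Q=100e-15)          # 100 fC collected charge (assumed)
    print(f"peak {i.max() * 1e3:.3f} mA")  # injected into the struck node(s)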
Simbios: an NIH national center for physics-based simulation of biological structures.
Delp, Scott L; Ku, Joy P; Pande, Vijay S; Sherman, Michael A; Altman, Russ B
2012-01-01
Physics-based simulation provides a powerful framework for understanding biological form and function. Simulations can be used by biologists to study macromolecular assemblies and by clinicians to design treatments for diseases. Simulations help biomedical researchers understand the physical constraints on biological systems as they engineer novel drugs, synthetic tissues, medical devices, and surgical interventions. Although individual biomedical investigators make outstanding contributions to physics-based simulation, the field has been fragmented. Applications are typically limited to a single physical scale, and individual investigators usually must create their own software. These conditions created a major barrier to advancing simulation capabilities. In 2004, we established a National Center for Physics-Based Simulation of Biological Structures (Simbios) to help integrate the field and accelerate biomedical research. In 6 years, Simbios has become a vibrant national center, with collaborators in 16 states and eight countries. Simbios focuses on problems at both the molecular scale and the organismal level, with a long-term goal of uniting these in accurate multiscale simulations.
A Generalized Hybrid Multiscale Modeling Approach for Flow and Reactive Transport in Porous Media
NASA Astrophysics Data System (ADS)
Yang, X.; Meng, X.; Tang, Y. H.; Guo, Z.; Karniadakis, G. E.
2017-12-01
Using emerging understanding of biological and environmental processes at fundamental scales to advance predictions of larger system behavior requires the development of multiscale approaches, and there is strong interest in coupling models at different scales together in a hybrid multiscale simulation framework. A limited number of hybrid multiscale simulation methods have been developed for subsurface applications, mostly using application-specific approaches for model coupling. The proposed generalized hybrid multiscale approach is designed with minimal intrusiveness to the (pre-selected) at-scale simulators and provides a set of lightweight C++ scripts to manage a complex multiscale workflow utilizing a concurrent coupling approach. The workflow includes at-scale simulators (using the lattice Boltzmann method, LBM, at the pore and Darcy scales, respectively), scripts for boundary treatment (coupling and kriging), and a multiscale universal interface (MUI) for data exchange. The current study aims to apply the generalized hybrid multiscale modeling approach to couple pore- and Darcy-scale models for flow and mixing-controlled reaction with precipitation/dissolution in heterogeneous porous media. The model domain is heterogeneously packed, so that the mixing front geometry is more complex and not known a priori. To address these challenges, the generalized hybrid multiscale modeling approach is further developed to (1) adaptively define the locations of pore-scale subdomains, (2) provide a suite of physical boundary coupling schemes, and (3) account for the dynamic change of pore structures due to mineral precipitation/dissolution. The results are validated and evaluated by comparison with single-scale simulations in terms of velocities, reactive concentrations, and computing cost.
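A minimal, hedged sketch of the concurrent-coupling control loop follows: two stub functions stand in for the pore- and Darcy-scale LBM solvers, and a dictionary-backed object mimics the push/fetch pattern of a MUI-style interface (the real MUI is a C++ library with spatial interpolation samplers).

    class Interface:
        """Stand-in for a multiscale universal interface (push/fetch buffer)."""
        def __init__(self):
            self._store = {}
        def push(self, key, value):
            self._store[key] = value
        def fetch(self, key, default=None):
            return self._store.get(key, default)

    def couple(darcy_step, pore_step, iface, n_steps):
        for _ in range(n_steps):
            bc = darcy_step(iface.fetch("pore_flux"))   # continuum advance
            iface.push("darcy_bc", bc)                  # boundary data for pore model
            flux = pore_step(iface.fetch("darcy_bc"))   # pore-scale advance
            iface.push("pore_flux", flux)               # upscaled feedback

    couple(lambda q: 1.0 if q is None else 0.9 * q,     # stub Darcy solver
           lambda bc: 0.5 * bc,                         # stub pore solver
           Interface(), n_steps=10)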
DOT National Transportation Integrated Search
2011-03-01
An instrumented, simulated bridge pier was constructed, and two full-scale collisions with an : 80,000-lb van-type tractor-trailer were performed on it. The trailer was ballasted with bags of sand on : pallets. The simulated pier was 36 inches in dia...
Spatial application of WEPS for estimating wind erosion in the Pacific Northwest
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on croplands and was originally designed to run field scale simulations. This research is an extension of the WEPS model to run on multiple fields (grids) covering a larger region. We modified the WEPS source code to allow it...
Spatial application of WEPS for estimating wind erosion in the Pacific Northwest
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on cropland and was originally designed to run simulations on a field-scale size. This study extended WEPS to run on multiple fields (grids) independently to cover a large region and to conduct an initial investigation to ass...
PHOTOCHEMICAL SIMULATIONS OF POINT SOURCE EMISSIONS WITH THE MODELS-3 CMAQ PLUME-IN-GRID APPROACH
A plume-in-grid (PinG) approach has been designed to provide a realistic treatment for the simulation of the dynamic and chemical processes impacting pollutant species in major point source plumes during a subgrid-scale phase within an Eulerian grid modeling framework. The PinG sci...
Kim, Moonkeun; Lee, Sang-Kyun; Yang, Yil Suk; Jeong, Jaehwa; Min, Nam Ki; Kwon, Kwang-Ho
2013-12-01
We fabricated dual-beam cantilevers on the microelectromechanical system (MEMS) scale with an integrated Si proof mass. A Pb(Zr,Ti)O3 (PZT) cantilever was designed as a mechanical vibration energy-harvesting system for low-power applications. The resonant frequencies of the multilayer composite cantilevers were simulated using the finite element method (FEM), with parametric analysis carried out in the design process. According to the simulations, the resonant frequency, voltage, and average power of a dual-beam cantilever were 69.1 Hz, 113.9 mV, and 0.303 µW, respectively, at optimal resistance and 0.5 g (where g is the gravitational acceleration in m/s²). Based on these data, we subsequently fabricated cantilever devices using dual-beam cantilevers. The harvested power density of the dual-beam cantilever compared favorably with the simulation. Experiments revealed the resonant frequency, voltage, and average power density to be 78.7 Hz, 118.5 mV, and 0.34 µW, respectively. The error between the measured and simulated results was about 10%. The maximum average power and power density of the fabricated dual-beam cantilever at 1 g were 0.803 µW and 1322.80 µW cm⁻³, respectively. Furthermore, the possibility of a MEMS-scale power source for energy conversion experiments was also tested.
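A lumped-parameter cross-check of the FEM resonance result is straightforward; the sketch below treats the dual-beam device as two parallel end-loaded springs with a proof mass, with all dimensions and material values assumed for illustration rather than taken from the paper.

    import math

    E = 1.7e11                     # Pa, Young's modulus (Si layer assumed dominant)
    L, w, t = 3e-3, 1e-3, 10e-6    # m, beam length, width, thickness (assumed)
    I = w * t**3 / 12              # second moment of area
    k = 3 * E * I / L**3           # tip stiffness of one beam
    m_proof = 1e-5                 # kg, integrated proof mass (assumed)
    f0 = math.sqrt(2 * k / m_proof) / (2 * math.pi)   # two beams in parallel
    print(f"{f0:.1f} Hz")          # same order as the tens-of-hertz regime reported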
Evaluation of dispersion strengthened nickel-base alloy heat shields for space shuttle application
NASA Technical Reports Server (NTRS)
Johnson, R., Jr.; Killpatrick, D. H.
1973-01-01
The work reported constitutes the first phase of a two-phase program. Vehicle environments having critical effects on the thermal protection system are defined; TD Ni-20Cr material characteristics are reviewed and compared with TD Ni-20Cr produced in previous development efforts; cyclic load, temperature, and pressure effects on TD Ni-20Cr sheet material are investigated; the effects of braze reinforcement in improving the efficiency of spotwelded, diffusion-bonded, or seam-welded joints are evaluated through tests of simple lap-shear joint samples; parametric studies of metallic radiative thermal protection systems are reported; and the design, instrumentation, and testing of full-scale subsize heat shield panels are described. Tests of full-scale subsize panels included simulated meteoroid impact tests; simulated entry flight aerodynamic heating in an arc-heated plasma stream; programmed differential pressure loads and temperatures simulating mission conditions; and acoustic tests simulating sound levels experienced by heat shields during boost flight. Test results are described, and the performances of two heat shield designs are compared and evaluated.
THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS
Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel
2010-01-01
Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618
Probabilistic Simulation of Multi-Scale Composite Behavior
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2012-01-01
A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to the uncertainties in the constituent (fiber and matrix) properties, in the fabrication process, and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties and laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented in the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in typical composite laminates and comparing the results with the Monte Carlo simulation method. Available experimental data on composite laminate behavior at all scales fall within the scatter predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and the probabilistic design of a composite radome in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the radome reliability by several orders of magnitude without increasing the laminate thickness, a unique feature of structural composites. The age of the original reference indicates that nothing fundamental has been done since that time.
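The Monte Carlo benchmark referred to in the abstract amounts to propagating constituent scatter to laminate-scale properties; a minimal sketch using the rule of mixtures follows, with distributions that are illustrative rather than PICAN/IPACS inputs.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    E_f = rng.normal(230.0, 12.0, n)      # GPa, fiber modulus (assumed scatter)
    E_m = rng.normal(3.5, 0.3, n)         # GPa, matrix modulus (assumed scatter)
    V_f = rng.normal(0.60, 0.02, n)       # fiber volume fraction (assumed)
    E_1 = V_f * E_f + (1.0 - V_f) * E_m   # longitudinal ply modulus, rule of mixtures
    print(f"mean {E_1.mean():.1f} GPa, std {E_1.std():.1f} GPa")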
NASA Technical Reports Server (NTRS)
Kalinowski, Kevin F.; Tucker, George E.; Moralez, Ernesto, III
2006-01-01
Engineering development and qualification of a Research Flight Control System (RFCS) for the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) JUH-60A has motivated the development of a pilot rating scale for evaluating failure transients in fly-by-wire flight control systems. The RASCAL RFCS includes a highly-reliable, dual-channel Servo Control Unit (SCU) to command and monitor the performance of the fly-by-wire actuators and protect against the effects of erroneous commands from the flexible, but single-thread Flight Control Computer. During the design phase of the RFCS, two piloted simulations were conducted on the Ames Research Center Vertical Motion Simulator (VMS) to help define the required performance characteristics of the safety monitoring algorithms in the SCU. Simulated failures, including hard-over and slow-over commands, were injected into the command path, and the aircraft response and safety monitor performance were evaluated. A subjective Failure/Recovery Rating (F/RR) scale was developed as a means of quantifying the effects of the injected failures on the aircraft state and the degree of pilot effort required to safely recover the aircraft. A brief evaluation of the rating scale was also conducted on the Army/NASA CH-47B variable stability helicopter to confirm that the rating scale was likely to be equally applicable to in-flight evaluations. Following the initial research flight qualification of the RFCS in 2002, a flight test effort was begun to validate the performance of the safety monitors and to validate their design for the safe conduct of research flight testing. Simulated failures were injected into the SCU, and the F/RR scale was applied to assess the results. The results validate the performance of the monitors, and indicate that the Failure/Recovery Rating scale is a very useful tool for evaluating failure transients in fly-by-wire flight control systems.
Characteristics of Waves Generated Beneath the Solar Convection Zone by Penetrative Overshoot
NASA Technical Reports Server (NTRS)
Julien, Keith
2000-01-01
The goal of this project was to theoretically and numerically characterize the waves generated beneath the solar convection zone by penetrative overshoot. Three-dimensional model simulations were designed to isolate the effects of rotation and shear. In order to overcome the numerically imposed limitation of finite Reynolds numbers (Re) below solar values, a series of simulations was designed to elucidate the Reynolds-number dependence (hoped to exhibit mathematically simple scaling in Re) so that one could cautiously extrapolate to solar values.
Rahman Prize Lecture: Lattice Boltzmann simulation of complex states of flowing matter
NASA Astrophysics Data System (ADS)
Succi, Sauro
Over the last three decades, the Lattice Boltzmann (LB) method has gained a prominent role in the numerical simulation of complex flows across an impressively broad range of scales, from fully-developed turbulence in real-life geometries, to multiphase flows in micro-fluidic devices, all the way down to biopolymer translocation in nanopores and lately, even quark-gluon plasmas. After a brief introduction to the main ideas behind the LB method and its historical developments, we shall present a few selected applications to complex flow problems at various scales of motion. Finally, we shall discuss prospects for extreme-scale LB simulations of outstanding problems in the physics of fluids and its interfaces with material sciences and biology, such as the modelling of fluid turbulence, the optimal design of nanoporous gold catalysts and protein folding/aggregation in crowded environments.
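For readers unfamiliar with the method, a toy instance of one D2Q9 BGK collision-streaming step on a periodic grid is sketched below; it is a minimal illustration of the lattice Boltzmann update, not a production solver.

    import numpy as np

    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)   # D2Q9 lattice weights

    def equilibrium(rho, u):
        cu = np.einsum('qd,xyd->xyq', c, u)
        usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
        return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    def lb_step(f, tau=0.8):
        rho = f.sum(axis=-1)                                   # density
        u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]    # velocity
        f = f + (equilibrium(rho, u) - f) / tau                # BGK collision
        for q in range(9):                                     # periodic streaming
            f[..., q] = np.roll(f[..., q], tuple(c[q]), axis=(0, 1))
        return f

    f = np.tile(w, (64, 64, 1))   # uniform fluid at rest
    for _ in range(100):
        f = lb_step(f)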
Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2016-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations, prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.
2000 Numerical Propulsion System Simulation Review
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Greg; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac
2001-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 1999 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2000 by the High Performance Computing and Communications Program.
2001 Numerical Propulsion System Simulation Review
NASA Technical Reports Server (NTRS)
Lytle, John; Follen, Gregory; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac
2002-01-01
The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 2000 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2001 by the High Performance Computing and Communications Program.
An adaptive response surface method for crashworthiness optimization
NASA Astrophysics Data System (ADS)
Shi, Lei; Yang, Ren-Jye; Zhu, Ping
2013-11-01
Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
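The article's Bayesian selection metric is not reproduced in the abstract; as a stand-in, the sketch below ranks a small candidate library with the Bayesian information criterion, which captures the same fit-versus-complexity trade-off under noisy data.

    import numpy as np

    def bic(y, y_hat, n_params):
        """BIC for a least-squares surrogate: fit quality penalized by size."""
        n = len(y)
        rss = float(np.sum((y - y_hat) ** 2))
        return n * np.log(rss / n) + n_params * np.log(n)

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 30)                           # design points
    y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=30)   # noisy responses
    for order in (1, 2, 3, 5, 7):                           # candidate surfaces
        coef = np.polyfit(x, y, order)
        print(order, round(bic(y, np.polyval(coef, x), order + 1), 1))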
Why continuous simulation? The role of antecedent moisture in design flood estimation
NASA Astrophysics Data System (ADS)
Pathiraja, S.; Westra, S.; Sharma, A.
2012-06-01
Continuous simulation for design flood estimation is increasingly becoming a viable alternative to traditional event-based methods. The advantage of continuous simulation approaches is that the catchment moisture state prior to the flood-producing rainfall event is implicitly incorporated within the modeling framework, provided the model has been calibrated and validated to produce reasonable simulations. This contrasts with event-based models in which both information about the expected sequence of rainfall and evaporation preceding the flood-producing rainfall event, as well as catchment storage and infiltration properties, are commonly pooled together into a single set of "loss" parameters which require adjustment through the process of calibration. To identify the importance of accounting for antecedent moisture in flood modeling, this paper uses a continuous rainfall-runoff model calibrated to 45 catchments in the Murray-Darling Basin in Australia. Flood peaks derived using the historical daily rainfall record are compared with those derived using resampled daily rainfall, for which the sequencing of wet and dry days preceding the heavy rainfall event is removed. The analysis shows that there is a consistent underestimation of the design flood events when antecedent moisture is not properly simulated, which can be as much as 30% when only 1 or 2 days of antecedent rainfall are considered, compared to 5% when this is extended to 60 days of prior rainfall. These results show that, in general, it is necessary to consider both short-term memory in rainfall associated with synoptic scale dependence, as well as longer-term memory at seasonal or longer time scale variability in order to obtain accurate design flood estimates.
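The resampling experiment at the heart of the comparison can be sketched as follows: the k days preceding each major event are replaced with days drawn at random from the record, destroying the wet/dry sequencing that encodes antecedent moisture. The synthetic rainfall and event selection are placeholders for the study's calibrated catchment data.

    import numpy as np

    def shuffle_antecedent(rain, event_idx, k, rng):
        """Randomize the k days before each event, breaking sequencing."""
        r = rain.copy()
        for i in event_idx:
            lo = max(0, i - k)
            r[lo:i] = rng.choice(rain, size=i - lo)
        return r

    rng = np.random.default_rng(42)
    rain = rng.gamma(0.4, 8.0, size=365 * 30)   # synthetic daily rainfall (mm)
    events = np.argsort(rain)[-30:]             # the 30 largest daily totals
    rain_2d = shuffle_antecedent(rain, events, k=2, rng=rng)
    rain_60d = shuffle_antecedent(rain, events, k=60, rng=rng)
    # Drive the calibrated rainfall-runoff model with each series and compare
    # flood-frequency curves against the historical-record simulation.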
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kant, Deepender, E-mail: dkc@ceeri.ernet.in; Joshi, L. M.; Janyani, Vijay
The klystron is a well-known microwave amplifier that uses the kinetic energy of an electron beam for amplification of an RF signal. Conventional single-beam klystrons have limitations, such as high operating voltage, low efficiency, and bulky size at higher power levels, which are very effectively addressed in the multi-beam klystron (MBK), which uses multiple low-perveance electron beams for RF interaction. Each beam propagates along its individual transit path through a resonant cavity structure. Multi-beam klystron cavity design is a critical task due to the asymmetric cavity structure and can be simulated only with a 3D code. The present paper discusses the design of multi-beam RF cavities for klystrons operating at 2856 MHz (S-band) and 5 GHz (C-band), respectively. The design approach uses scaling laws to derive the electron beam parameters of the multi-beam device from its single-beam counterpart. The scaled beam parameters are then used to determine the design parameters of the multi-beam cavities. The design of the desired multi-beam cavity can be optimized through iterative simulations in CST Microwave Studio.
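The voltage benefit of the multi-beam approach follows directly from the perveance definition K = I/V^(3/2): at fixed per-beam perveance, beam power is P = N·K·V^(5/2), so V = (P/(N·K))^(2/5). The sketch below illustrates this with an assumed perveance limit; it is the generic scaling argument, not the paper's specific design procedure.

    def beam_voltage(P_total, n_beams, micropervs=0.5):
        """Voltage needed for total beam power P_total (W) with n_beams
        beams, each at an assumed perveance limit (in microperveance)."""
        K = micropervs * 1e-6            # A / V**1.5 per beam
        return (P_total / (n_beams * K)) ** 0.4

    for n in (1, 6):                     # single-beam vs. a six-beam design
        print(n, "beams:", round(beam_voltage(1e6, n) / 1e3, 1), "kV")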
Dhillon, Sonya; Bagby, R Michael; Kushner, Shauna C; Burchett, Danielle
2017-04-01
The Personality Inventory for DSM-5 (PID-5) is a 220-item self-report instrument that assesses the alternative model of personality psychopathology in Section III (Emerging Measures and Models) of DSM-5. Despite its relatively recent introduction, the PID-5 has generated an impressive accumulation of studies examining its psychometric properties, and the instrument is already widely and frequently used in research studies. Although the PID-5 is psychometrically sound overall, reviews of this instrument express concern that it does not possess validity scales to detect invalidating levels of response bias, such as underreporting and overreporting. McGee Ng et al. (2016), using a "known-groups" (partial) criterion design, demonstrated that both underreporting and overreporting grossly affect mean scores on PID-5 scales. In the current investigation, we replicate these findings using an analog simulation design. An important extension of this replication study was the finding that the construct validity of the PID-5 was also significantly compromised by response bias, with statistically significant attenuation noted in validity coefficients of the PID-5 domain scales with scales from other instruments measuring congruent constructs. This attenuation was found for both underreporting and overreporting bias. We believe there is a need to develop validity scales to screen for data-distorting response bias in research contexts and in clinical assessments where response bias is likely or otherwise suspected.
Comparative Model Evaluation Studies of Biogenic Trace Gas Fluxes in Tropical Forests
NASA Technical Reports Server (NTRS)
Potter, C. S.; Peterson, David L. (Technical Monitor)
1997-01-01
Simulation modeling can play a number of important roles in large-scale ecosystem studies, including synthesis of patterns and changes in carbon and nutrient cycling dynamics, scaling up to regional estimates, and formulation of testable hypotheses for process studies. Recent comparative studies have shown that ecosystem models of soil trace gas exchange with the atmosphere are evolving into several distinct simulation approaches. Different levels of detail exist among process models in the treatment of physical controls on ecosystem nutrient fluxes and organic substrate transformations leading to gas emissions. These differences arise in part from distinct objectives of scaling and extrapolation. Parameter requirements for initialization, scaling, boundary conditions, and time-series drivers therefore vary among ecosystem simulation models, such that the design of field experiments for integration with modeling should consider a consolidated series of measurements that will satisfy most of the various model requirements. For example, variables that provide information on soil moisture holding capacity, moisture retention characteristics, potential evapotranspiration and drainage rates, and rooting depth appear to be of first-order importance in model evaluation trials for tropical moist forest ecosystems. The amount and nutrient content of labile organic matter in the soil, based on accurate plant production estimates, are also key parameters that determine emission model response. Based on comparative model results, it is possible to construct a preliminary evaluation matrix along categories of key diagnostic parameters and temporal domains. Nevertheless, as large-scale studies are planned, it is notable that few existing models are designed to simulate transient states of ecosystem change, a feature which will be essential for assessment of anthropogenic disturbance on regional gas budgets and effects of long-term climate variability on biosphere-atmosphere exchange.
Three-dimensional hydrodynamic simulations of OMEGA implosions
NASA Astrophysics Data System (ADS)
Igumenshchev, I. V.; Michel, D. T.; Shah, R. C.; Campbell, E. M.; Epstein, R.; Forrest, C. J.; Glebov, V. Yu.; Goncharov, V. N.; Knauer, J. P.; Marshall, F. J.; McCrory, R. L.; Regan, S. P.; Sangster, T. C.; Stoeckl, C.; Schmitt, A. J.; Obenschain, S.
2017-05-01
The effects of large-scale (with Legendre modes ≲ 10) asymmetries in OMEGA direct-drive implosions caused by laser illumination nonuniformities (beam-power imbalance and beam mispointing and mistiming), target offset, and variation in target-layer thickness were investigated using the low-noise, three-dimensional Eulerian hydrodynamic code ASTER. Simulations indicate that these asymmetries can significantly degrade the implosion performance. The most important sources of the asymmetries are the target offsets (~10 to 20 μm), beam-power imbalance (σ_rms ~ 10%), and variations (~5%) in target-layer thickness. Large-scale asymmetries distort implosion cores, resulting in a reduced hot-spot confinement and an increased residual kinetic energy of implosion targets. The ion temperature inferred from the width of simulated neutron spectra is influenced by bulk fuel motion in the distorted hot spot and can result in up to an ~1-keV increase in apparent temperature. Similar temperature variations along different lines of sight are observed. Demonstrating hydrodynamic equivalence to ignition designs on OMEGA requires a reduction in large-scale target and laser-imposed nonuniformities, minimizing target offset, and employing highly efficient mid-adiabat (α = 4) implosion designs, which mitigate cross-beam energy transfer and suppress short-wavelength Rayleigh-Taylor growth.
Three-dimensional hydrodynamic simulations of OMEGA implosions
Igumenshchev, I. V.; Michel, D. T.; Shah, R. C.; ...
2017-03-30
Here, the effects of large-scale (with Legendre modes ≲10) asymmetries in OMEGA direct-drive implosions caused by laser illumination nonuniformities (beam-power imbalance and beam mispointing and mistiming), target offset, and variation in target-layer thickness were investigated using the low-noise, three-dimensional Eulerian hydrodynamic code ASTER. Simulations indicate that these asymmetries can significantly degrade the implosion performance. The most important sources of the asymmetries are the target offsets (~10 to 20 μm), beam-power imbalance (σ_rms ~ 10%), and variations (~5%) in target-layer thickness. Large-scale asymmetries distort implosion cores, resulting in a reduced hot-spot confinement and an increased residual kinetic energy of implosion targets. The ion temperature inferred from the width of simulated neutron spectra is influenced by bulk fuel motion in the distorted hot spot and can result in up to an ~1-keV increase in apparent temperature. Similar temperature variations along different lines of sight are observed. Demonstrating hydrodynamic equivalence to ignition designs on OMEGA requires a reduction in large-scale target and laser-imposed nonuniformities, minimizing target offset, and employing highly efficient mid-adiabat (α = 4) implosion designs, which mitigate cross-beam energy transfer and suppress short-wavelength Rayleigh–Taylor growth.
Disadvantages of the Horsfall-Barratt Scale for estimating severity of citrus canker
USDA-ARS?s Scientific Manuscript database
Direct visual estimation of disease severity to the nearest percent was compared to using the Horsfall-Barratt (H-B) scale. Data from a simulation model designed to sample two diseased populations were used to investigate the probability of the two methods to reject a null hypothesis (H0) using a t-...
The large scale microelectronics Computer-Aided Design and Test (CADAT) system
NASA Technical Reports Server (NTRS)
Gould, J. M.
1978-01-01
The CADAT system consists of a number of computer programs written in FORTRAN that provide the capability to simulate, lay out, analyze, and create the artwork for large scale microelectronics. The function of each software component of the system is described with references to specific documentation for each software component.
Acoustic Treatment Design Scaling Methods. Phase 2
NASA Technical Reports Server (NTRS)
Clark, L. (Technical Monitor); Parrott, T. (Technical Monitor); Jones, M. (Technical Monitor); Kraft, R. E.; Yu, J.; Kwan, H. W.; Beer, B.; Seybert, A. F.; Tathavadekar, P.
2003-01-01
The ability to design, build and test miniaturized acoustic treatment panels on scale model fan rigs representative of full-scale engines provides not only cost savings, but also an opportunity to optimize the treatment by allowing multiple tests. To use scale model treatment as a design tool, the impedance of the sub-scale liner must be known with confidence. This study was aimed at developing impedance measurement methods for high frequencies. A normal incidence impedance tube method that extends the upper frequency range to 25,000 Hz without grazing flow effects was evaluated. The free field method was investigated as a potential high frequency technique. The potential of the two-microphone in-situ impedance measurement method was evaluated in the presence of grazing flow. Difficulties in achieving the high frequency goals were encountered in all methods. Results of developing a time-domain finite difference resonator impedance model indicated that a re-interpretation of the empirical fluid mechanical models used in the frequency-domain model for nonlinear resistance and mass reactance may be required. A scale model treatment design that could be tested on the Universal Propulsion Simulator vehicle was proposed.
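For context, the normal-incidence two-microphone transfer-function method reduces to a few lines; the sketch below uses illustrative geometry and shows why 25 kHz is demanding, since the half-wavelength in air there is only about 6.9 mm.

    import numpy as np

    def normalized_impedance(H12, f, s, l, c0=343.0):
        """H12: measured pressure transfer function between the two mics;
        s: mic spacing (m); l: distance from the mic nearer the sample
        to the sample face (m). Returns z = Z / (rho0 * c0)."""
        k = 2 * np.pi * f / c0
        R = (H12 - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - H12)
        R = R * np.exp(2j * k * (l + s))   # refer reflection factor to the face
        return (1 + R) / (1 - R)

    print(normalized_impedance(H12=0.5 + 0.2j, f=25e3, s=4e-3, l=8e-3))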
An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator
Wang, Runchun M.; Thakur, Chetan S.; van Schaik, André
2018-01-01
This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks. PMID:29692702
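The neuron update that the hardware implements is the standard LIF rule; a hedged, vectorized software sketch (with illustrative constants) is given below for reference.

    import numpy as np

    def lif_step(v, i_syn, dt=1e-3, tau=20e-3, v_rest=0.0, v_th=1.0, v_reset=0.0):
        v = v + (dt / tau) * (v_rest - v + i_syn)    # leaky integration
        spiked = v >= v_th                           # threshold crossing
        return np.where(spiked, v_reset, v), spiked  # reset fired neurons

    rng = np.random.default_rng(3)
    v = np.zeros(1000)                               # a 1000-neuron toy population
    for _ in range(100):
        v, spikes = lif_step(v, i_syn=rng.uniform(0.0, 2.0, size=1000))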
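The hierarchical minicolumn/hypercolumn routing is the paper's key memory-saving idea; the neuron dynamics themselves are standard leaky-integrate-and-fire. As a rough illustration of the LIF update the abstract refers to, here is a minimal vectorized sketch; all constants (time step, membrane time constant, thresholds) and the toy input drive are illustrative placeholders, not the values implemented on the FPGA:

```python
import numpy as np

# Illustrative parameters -- not the values used on the FPGA.
DT = 1e-3          # time step (s)
TAU_M = 20e-3      # membrane time constant (s)
V_REST, V_THRESH, V_RESET = 0.0, 1.0, 0.0

def lif_step(v, i_syn):
    """One Euler step of the leaky-integrate-and-fire dynamics
    dv/dt = (-(v - V_REST) + i_syn) / TAU_M for a vector of neurons."""
    v = v + DT * (-(v - V_REST) + i_syn) / TAU_M
    spiked = v >= V_THRESH
    v[spiked] = V_RESET          # reset the neurons that fired
    return v, spiked

# One "minicolumn" of neurons; spikes would be routed per cluster
# rather than through a per-synapse look-up table.
v = np.zeros(1024)
for _ in range(1000):
    i_syn = np.random.poisson(2.0, v.shape) * 0.05   # toy input drive
    v, spiked = lif_step(v, i_syn)
```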
Earthquake simulator tests and associated study of a 1/6-scale nine-story RC model
NASA Astrophysics Data System (ADS)
Sun, Jingjiang; Wang, Tao; Qi, Hu
2007-09-01
Earthquake simulator tests of a 1/6-scale nine-story reinforced concrete frame-wall model are described in the paper. The test results and associated numerical simulation are summarized and discussed. Based on the test data, a relationship between maximum inter-story drift and damage state is established. Equations for the variation of structural characteristics (natural frequency and equivalent stiffness) with overall drift are derived by data fitting, and can be used to estimate the structural damage state when the structural characteristics can be measured. A comparison of the analytical and experimental results shows that both the commonly used equivalent beam and fiber element models can simulate the nonlinear seismic response of structures very well. Finally, conclusions associated with the seismic design and damage evaluation of RC structures are presented.
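The abstract reports data-fitted equations for natural frequency and equivalent stiffness as functions of overall drift. The sketch below shows how such a relationship might be fitted; the data points and the decay form of the fitting function are hypothetical, not values from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical test data: overall drift ratio vs measured natural frequency.
drift = np.array([0.0005, 0.001, 0.002, 0.004, 0.008])   # overall drift
freq  = np.array([4.0, 3.7, 3.2, 2.6, 2.0])              # Hz, illustrative

def freq_model(d, f0, a, b):
    # One plausible fitting form: frequency decaying with increasing drift.
    return f0 / (1.0 + a * d**b)

popt, _ = curve_fit(freq_model, drift, freq, p0=[4.0, 100.0, 1.0])
# For a SDOF idealization the equivalent stiffness ratio follows as (f/f0)^2.
stiff_ratio = (freq_model(drift, *popt) / popt[0])**2
```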
Development of mpi_EPIC model for global agroecosystem modeling
Kang, Shujiang; Wang, Dali; Nichols, Jeff A.; ...
2014-12-31
Models that address policy-maker concerns about multi-scale effects of food and bioenergy production systems are computationally demanding. We integrated the message passing interface algorithm into the process-based EPIC model to accelerate computation of ecosystem effects. Simulation performance was further enhanced by applying the Vampir framework. When this enhanced mpi_EPIC model was tested, total execution time for a global 30-year simulation of a switchgrass cropping system was shortened to less than 0.5 hours on a supercomputer. The results illustrate that mpi_EPIC using parallel design can balance simulation workloads and facilitate large-scale, high-resolution analysis of agricultural production systems, management alternatives and environmental effects.
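The abstract does not spell out the decomposition mpi_EPIC uses; a common pattern for this kind of embarrassingly parallel, cell-by-cell agroecosystem workload is a cyclic distribution of grid cells over MPI ranks. The mpi4py sketch below illustrates only that pattern, under stated assumptions (run_epic_cell is a hypothetical stand-in for a full EPIC point simulation):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical global work list: one EPIC run per grid cell.
N_CELLS = 100_000
my_cells = range(rank, N_CELLS, size)     # cyclic decomposition balances load

def run_epic_cell(cell_id):
    # Placeholder for a 30-year point simulation of one grid cell.
    return cell_id * 0.001                # e.g., mean annual yield

local = sum(run_epic_cell(c) for c in my_cells)
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("global aggregate:", total)
```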
"Football" test coil: a simulated service test of internally-cooled, cabled superconductor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marston, P.G.; Iwasa, Y.; Thome, R.J.
Internally-cooled, cabled superconductor (ICCS) appears from small-scale tests to be a viable alternative to pool-boiling-cooled superconductors for large superconducting magnets. Potential advantages include savings in helium inventory, smaller structure, and ease of fabrication. Questions remain, however, about the structural performance of these systems. The "football" test coil has been designed to simulate the actual field-current-stress-thermal operating conditions of a 25 kA ICCS in a commercial-scale MHD magnet. The test procedure will permit demonstration of the 20-year cyclic life of such a magnet in less than 20 days. This paper describes the design, construction and test of that coil, which is wound of copper-stabilized niobium-titanium cable in steel conduit. 2 refs.
A comparative study of internally and externally capped balloons using small scale test balloons
NASA Technical Reports Server (NTRS)
Bell, Douglas P.
1994-01-01
Caps have been used to structurally reinforce scientific research balloons since the late 1950s. The scientific research balloons used by the National Aeronautics and Space Administration (NASA) use internal caps. A NASA cap placement specification does not exist, since no empirical information exists concerning cap placement. To develop a cap placement specification, NASA has completed two in-hangar inflation tests comparing the structural contributions of internal caps and external caps. The tests used small-scale test balloons designed to develop the highest possible stresses within the constraints of the hangar and balloon materials. An externally capped test balloon and an internally capped test balloon were designed, built, inflated, and simulated to determine the structural contributions and benefits of each. The results of the tests and simulations are presented.
Ultra-Scale Computing for Emergency Evacuation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhaduri, Budhendra L; Nutaro, James J; Liu, Cheng
2010-01-01
Emergency evacuations are carried out in anticipation of a disaster, such as hurricane landfall or flooding, and in response to a disaster that strikes without warning. Existing emergency evacuation modeling and simulation tools are primarily designed for evacuation planning and are of limited value in operational support for real-time evacuation management. In order to align with desktop computing, these models reduce the data and computational complexities through simple approximations and representations of real network conditions and traffic behaviors, which rarely represent real-world scenarios. With the emergence of high-resolution physiographic, demographic, and socioeconomic data and supercomputing platforms, it is possible to develop micro-simulation-based emergency evacuation models that can foster development of novel algorithms for human behavior and traffic assignments, and can simulate evacuation of millions of people over a large geographic area. However, such advances in evacuation modeling and simulation demand computational capacity beyond desktop scales and can be supported by high-performance computing platforms. This paper explores the motivation and feasibility of ultra-scale computing for increasing the speed of high-resolution emergency evacuation simulations.
Fuermaier, Anselm B M; Tucha, Oliver; Koerts, Janneke; Butzbach, Marah; Weisbrod, Matthias; Aschenbrenner, Steffen; Tucha, Lara
2018-05-01
A growing body of research questions the reliance on symptom self-reports in the clinical evaluation of attention-deficit/hyperactivity disorder (ADHD) in adulthood. A recent study suggested that impairment reports are also vulnerable to noncredible responses, as derived from a simulation design using a global functional impairment scale. The present study aims to add evidence on this issue by using an ADHD-specific impairment scale in a simulation design with large samples. Impairment ratings on the Weiss Functional Impairment Rating Scale (WFIRS) of 62 patients with ADHD were compared to those of 142 healthy individuals who were instructed to show normal behavior. Furthermore, impairment ratings of patients with ADHD were compared to ratings of 330 healthy individuals who were randomly assigned to one of four simulation conditions and instructed to complete the scale as if they had ADHD. Patients with ADHD reported higher levels of impairment than the healthy control group in all domains of life. Furthermore, individuals instructed to feign ADHD indicated higher levels of impairment in most domains of life compared to control participants and genuine patients with ADHD. The group differences between individuals feigning ADHD and individuals with genuine ADHD, however, were only small to moderate. Further analyses revealed that the WFIRS was not useful for differentiating genuine from feigned ADHD. The present study confirms the conclusion that self-reported impairments are susceptible to noncredible responses and should be used with caution in the clinical evaluation of adult ADHD.
Simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed for running on parallel computers and distributed (networked) computer systems, but can run on standalone workstations for smaller simulations. The simulator currently models instrumented smart vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe-vehicle tracking capabilities (displaying position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. Realistic modeling of variations of the posted driving speed is based on human-factors studies that take into consideration weather, road conditions, driver personality and behavior, and vehicle type. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on parallel computers, such as ANL's IBM SP-2, for large-scale problems. A novel feature of the approach is that vehicles are represented by autonomous computer processes which exchange messages with other processes. The vehicles have a behavior model which governs route selection and driving behavior, and can react to external traffic events much like real vehicles. With this approach, the simulation is scalable to take advantage of emerging massively parallel processor (MPP) systems.
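A toy rendering of the paper's central design idea, vehicles as autonomous processes exchanging messages with a TMC, is sketched below; the queues, step counts, and trivial driving model are illustrative only and bear no relation to the ANL implementation:

```python
import multiprocessing as mp

def vehicle(vid, inbox, tmc_queue):
    """Each vehicle is an autonomous process: it reports its position to
    the TMC and reacts to advisories, much as the abstract describes."""
    position = 0.0
    for step in range(5):
        position += 1.0                       # toy driving model
        tmc_queue.put((vid, position))        # probe report to the TMC
        if not inbox.empty():
            advisory = inbox.get()            # e.g., a revised link time
            position += advisory              # trivial reaction

if __name__ == "__main__":
    tmc_queue = mp.Queue()
    inboxes = [mp.Queue() for _ in range(3)]
    procs = [mp.Process(target=vehicle, args=(i, q, tmc_queue))
             for i, q in enumerate(inboxes)]
    for p in procs:
        p.start()
    for q in inboxes:
        q.put(0.5)                            # TMC pushes an advisory
    for p in procs:
        p.join()
    while not tmc_queue.empty():              # TMC consumes probe reports
        print(tmc_queue.get())
```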
Constraints in distortion-invariant target recognition system simulation
NASA Astrophysics Data System (ADS)
Iftekharuddin, Khan M.; Razzaque, Md A.
2000-11-01
Automatic target recognition (ATR) is a mature but active research area. In an earlier paper, we proposed a novel ATR approach for recognition of targets varying in fine details, rotation, and translation using a Learning Vector Quantization (LVQ) Neural Network (NN). The proposed approach performed segmentation of multiple objects and identification of the objects using the LVQ NN. In the current paper, we extend the previous approach to recognition of targets varying in rotation, translation, scale, and combinations of all three distortions. We obtain analytical results for the system-level design showing that the approach performs well under some constraints. The first constraint determines the size of the input images and input filters. The second constraint limits the amount of rotation, translation, and scale of input objects. We present simulation verification of the constraints using DARPA's Moving and Stationary Target Recognition (MSTAR) images with different depression and pose angles. The simulation results using MSTAR images verify the analytical constraints of the system-level design.
NASA Astrophysics Data System (ADS)
Stegen, Ronald; Gassmann, Matthias
2017-04-01
The use of a broad range of agrochemicals is essential to modern industrialized agriculture. During the last decades, awareness of the side effects of their use has grown, and with it the need to reproduce, understand and predict the behaviour of these agrochemicals in the environment, in order to optimize their use and minimize their side effects. Modern modelling has made great progress in understanding and predicting the fate of these chemicals by numerical methods. However, while the behaviour of the applied chemicals is often investigated and modelled, most studies simulate only the parent compounds, assuming complete disappearance of the substance. In reality, owing to a diversity of chemical, physical and biological processes, the substances are instead transformed into new chemicals, which are themselves transformed until, at the end of the chain, the substance is completely mineralized. During this process, the fate of each transformation product is governed by its own environmental characteristics, and the pathway and products of transformation can differ widely with substance and environmental influences, which may vary between compartments of the same site. Simulating transformation products introduces additional model uncertainties; thus, the calibration effort increases compared with simulations of the transport and degradation of the parent substance alone. Simulation of the necessary physical processes is also computationally expensive. Consequently, few physically based models offer the possibility to simulate transformation products at all, and mostly at the field scale. The few models available for the catchment scale are not optimized for this task, i.e. they can simulate only a single parent compound and up to two transformation products, and for simulations over large physico-chemical parameter spaces the calculation time of the underlying hydrological model diminishes the overall performance. In this study, the structure of the model ZIN-AGRITRA was re-designed for the transport and transformation of an unlimited number of agrochemicals in the soil-water-plant system at catchment scale. The focus is, besides a sound hydrological basis, on a flexible representation of transformation processes and on optimization for large numbers of different substances. The new design reduces the calculation time per tested substance, allowing faster exploration of parameter spaces. Additionally, the new concept allows different transformation processes and products to be considered in different environmental compartments. A first test of the calculation-time improvements and flexible transformation pathways was performed in a Mediterranean meso-scale catchment, using the insecticide Chlorpyrifos and two of its transformation products, which emerge from different transformation processes, as test substances.
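For concreteness, the kind of transformation chain such a model must track can be written as a set of coupled first-order ODEs. The sketch below solves a parent -> TP1 -> TP2 chain; the rate constants are invented for illustration and are not fitted Chlorpyrifos values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order transformation chain: parent -> TP1 -> TP2 -> mineralized.
# Rate constants are illustrative, not fitted values for Chlorpyrifos.
k = np.array([0.05, 0.02, 0.01])     # 1/day

def chain(t, c):
    parent, tp1, tp2 = c
    return [-k[0] * parent,
            k[0] * parent - k[1] * tp1,
            k[1] * tp1 - k[2] * tp2]

sol = solve_ivp(chain, (0.0, 365.0), [1.0, 0.0, 0.0], dense_output=True)
# sol.sol(t) gives concentrations of the parent and both transformation
# products at any time t; each compartment could carry its own k vector.
```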
Pan, Wenxiao; Yang, Xiu; Bao, Jie; ...
2017-01-01
We develop a new mathematical framework to study the optimal design of air electrode microstructures for lithium-oxygen (Li-O2) batteries. It can effectively reduce the number of expensive experiments for testing different air electrodes, thereby minimizing the cost in the design of Li-O2 batteries. The design parameters that characterize an air-electrode microstructure include the porosity, surface-to-volume ratio, and parameters associated with the pore-size distribution. A surrogate model (also known as a response surface) for discharge capacity is first constructed as a function of these design parameters. The surrogate model is accurate and easy to evaluate such that an optimization can be performed based on it. In particular, a Gaussian process regression method, co-kriging, is employed due to its accuracy and efficiency in predicting high-dimensional responses from a combination of multi-fidelity data. Specifically, a small amount of data from high-fidelity simulations is combined with a large number of data obtained from computationally efficient low-fidelity simulations. The high-fidelity simulation is based on a multiscale modeling approach that couples the microscale (pore-scale) and macroscale (device-scale) models, whereas the low-fidelity simulation is based on an empirical macroscale model. The constructed response surface provides quantitative understanding and prediction of how air electrode microstructures affect the discharge performance of Li-O2 batteries. The succeeding sensitivity analysis via Sobol indices and optimization via a genetic algorithm ultimately offer reliable guidance on the optimal design of air electrode microstructures. The proposed mathematical framework can be generalized to investigate other new energy storage techniques and materials.
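A minimal two-fidelity surrogate in the spirit of the co-kriging described above can be sketched with an autoregressive (Kennedy-O'Hagan style) correction: a GP fitted to abundant low-fidelity data plus a GP fitted to the scaled residuals at the few high-fidelity points. The toy functions below stand in for the empirical macroscale model and the multiscale simulation; nothing here reproduces the authors' actual kernels or data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def lofi(x):   # cheap empirical macroscale model (illustrative)
    return np.sin(8 * x) * x

def hifi(x):   # expensive multiscale simulation (illustrative)
    return 1.2 * lofi(x) + 0.3 * (x - 0.5)

X_lo = rng.uniform(0, 1, (40, 1)); y_lo = lofi(X_lo).ravel()
X_hi = rng.uniform(0, 1, (6, 1));  y_hi = hifi(X_hi).ravel()

kern = ConstantKernel() * RBF(length_scale=0.2)
gp_lo = GaussianProcessRegressor(kernel=kern).fit(X_lo, y_lo)

# Autoregressive correction: model the high-fidelity response as
# rho * gp_lo(x) + delta(x), with delta itself a GP.
rho = np.polyfit(gp_lo.predict(X_hi), y_hi, 1)[0]
gp_delta = GaussianProcessRegressor(kernel=kern).fit(
    X_hi, y_hi - rho * gp_lo.predict(X_hi))

def surrogate(x):
    """Cheap response-surface evaluation, usable inside an optimizer."""
    return rho * gp_lo.predict(x) + gp_delta.predict(x)
```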
A Decade-long Continental-Scale Convection-Resolving Climate Simulation on GPUs
NASA Astrophysics Data System (ADS)
Leutwyler, David; Fuhrer, Oliver; Lapillonne, Xavier; Lüthi, Daniel; Schär, Christoph
2016-04-01
The representation of moist convection in climate models represents a major challenge due to the small scales involved. Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. Using horizontal grid spacings of O(1 km), they allow deep convection to be explicitly resolved, leading to an improved representation of the water cycle. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that combine conventional multicore CPUs with accelerators such as graphics processing units (GPUs). One of the first atmospheric models to have been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the size of the simulation domain to areas spanning continents and the time period up to one decade. We present results from a decade-long, convection-resolving climate simulation using the GPU-enabled COSMO version. The simulation is driven by the ERA-Interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss the performance of the convection-resolving modeling approach on the European scale. Specifically, we focus on the annual cycle of convection in Europe, on the organization of convective clouds, and on the verification of hourly rainfall with various high-resolution datasets.
ERIC Educational Resources Information Center
Bos, Nathan D.; Shami, N. Sadat; Naab, Sara
2006-01-01
There is an increasing need for business students to be taught the ability to think through ethical dilemmas faced by corporations conducting business on a global scale. This article describes a multiplayer online simulation game, ISLAND TELECOM, that exposes students to ethical dilemmas in international business. Through role playing and…
USDA-ARS?s Scientific Manuscript database
A pilot-scale, recirculating-flow-through, non-steady-state (RFT-NSS) chamber system was designed for quantifying nitrous oxide (N2O) emissions from simulated open-lot beef cattle feedlot pens. The system employed five 1 square meter steel pans. A lid was placed systematically on each pan and heads...
Propulsion simulation for magnetically suspended wind tunnel models
NASA Technical Reports Server (NTRS)
Joshi, Prakash B.; Beerman, Henry P.; Chen, James; Krech, Robert H.; Lintz, Andrew L.; Rosen, David I.
1990-01-01
The feasibility of simulating propulsion-induced aerodynamic effects on scaled aircraft models in wind tunnels employing Magnetic Suspension and Balance Systems (MSBS) was investigated. The investigation concerned techniques for generating exhaust jets of appropriate characteristics. The objectives were to: (1) define thrust and mass flow requirements of jets; (2) evaluate techniques for generating propulsive gas within volume limitations imposed by magnetically-suspended models; (3) conduct simple diagnostic experiments for techniques involving new concepts; and (4) recommend experiments for demonstration of propulsion simulation techniques. Various techniques of generating exhaust jets of appropriate characteristics were evaluated for scaled aircraft models in wind tunnels with MSBS. Four concepts of remotely-operated propulsion simulators were examined. Three conceptual designs involving innovative adaptation of convenient technologies (compressed gas cylinders, liquid, and solid propellants) were developed. The fourth innovative concept, namely the laser-assisted thruster, which can potentially simulate both inlet and exhaust flows, was found to require very high power levels for small thrust levels.
1965-08-10
Artists used paintbrushes and airbrushes to recreate the lunar surface on each of the four models comprising the LOLA simulator. Project LOLA, or Lunar Orbit and Landing Approach, was a simulator built at Langley to study problems related to landing on the lunar surface. It was a complex project that cost nearly $2 million. James Hansen wrote: "This simulator was designed to provide a pilot with a detailed visual encounter with the lunar surface; the machine consisted primarily of a cockpit, a closed-circuit TV system, and four large murals or scale models representing portions of the lunar surface as seen from various altitudes. The pilot in the cockpit moved along a track past these murals which would accustom him to the visual cues for controlling a spacecraft in the vicinity of the moon. Unfortunately, such a simulation--although great fun and quite aesthetic--was not helpful because flight in lunar orbit posed no special problems other than the rendezvous with the LEM, which the device did not simulate. Not long after the end of Apollo, the expensive machine was dismantled." (p. 379) Ellis J. White described the simulator as follows: "Model 1 is a 20-foot-diameter sphere mounted on a rotating base and is scaled 1 in. = 9 miles. Models 2, 3, and 4 are approximately 15x40 feet scaled sections of model 1. Model 4 is a scaled-up section of the Crater Alphonsus and the scale is 1 in. = 200 feet. All models are in full relief except the sphere." -- Published in James R. Hansen, Spaceflight Revolution: NASA Langley Research Center From Sputnik to Apollo (Washington: NASA, 1995), p. 379; Ellis J. White, "Discussion of Three Typical Langley Research Center Simulation Programs," Paper presented at the Eastern Simulation Council (EAI's Princeton Computation Center), Princeton, NJ, October 20, 1966.
1964-10-28
Artists used paintbrushes and airbrushes to recreate the lunar surface on each of the four models comprising the LOLA simulator. Project LOLA or Lunar Orbit and Landing Approach was a simulator built at Langley to study problems related to landing on the lunar surface. It was a complex project that cost nearly $2 million. James Hansen wrote: "This simulator was designed to provide a pilot with a detailed visual encounter with the lunar surface; the machine consisted primarily of a cockpit, a closed-circuit TV system, and four large murals or scale models representing portions of the lunar surface as seen from various altitudes. The pilot in the cockpit moved along a track past these murals which would accustom him to the visual cues for controlling a spacecraft in the vicinity of the moon. Unfortunately, such a simulation--although great fun and quite aesthetic--was not helpful because flight in lunar orbit posed no special problems other than the rendezvous with the LEM, which the device did not simulate. Not long after the end of Apollo, the expensive machine was dismantled." (p. 379) Ellis J. White further described LOLA in his paper "Discussion of Three Typical Langley Research Center Simulation Programs," "Model 1 is a 20-foot-diameter sphere mounted on a rotating base and is scaled 1 in. = 9 miles. Models 2,3, and 4 are approximately 15x40 feet scaled sections of model 1. Model 4 is a scaled-up section of the Crater Alphonsus and the scale is 1 in. = 200 feet. All models are in full relief except the sphere." -- Published in James R. Hansen, Spaceflight Revolution, NASA SP-4308, p. 379; Ellis J. White, "Discussion of Three Typical Langley Research Center Simulation Programs," Paper presented at the Eastern Simulation Council (EAI's Princeton Computation Center), Princeton, NJ, October 20, 1966.
NASA Astrophysics Data System (ADS)
Shi, Ao; Lu, Bo; Yang, Dangguo; Wang, Xiansheng; Wu, Junqiang; Zhou, Fangqi
2018-05-01
Coupling between aero-acoustic noise and structural vibration under high-speed open-cavity flow-induced oscillation can produce severe random vibration of the structure and even drive it to fatigue failure, threatening flight safety. Vibro-acoustic experiments on scaled-down models are an effective means of clarifying the effects of high-intensity cavity noise on structural vibration. Therefore, with a view to vibro-acoustic experiments on cavities in wind tunnels, and taking a typical elastic cavity as the research object, dimensional analysis and the finite element method were adopted to establish similitude relations for the structural inherent characteristics and dynamics of distorted models, and the proposed similitude relations were verified by experiments and numerical simulation. The research shows that the established similitude relations allow a scaled-down model to accurately reproduce the structural dynamic characteristics of the actual model, providing theoretical guidance for the structural design and vibro-acoustic testing of scaled-down elastic cavity models.
Multi-phase CFD modeling of solid sorbent carbon capture system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, E. M.; DeCroix, D.; Breault, R.
2013-07-01
Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian-Eulerian and Eulerian-Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state-of-the-art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian-Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian-Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian-Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.
Atahan, Ali O; Hiekmann, J Marten; Himpe, Jeffrey; Marra, Joseph
2018-07-01
Road restraint systems are designed to minimize the undesirable effects of roadside accidents and improve the safety of road users. These systems are placed at either side or in the median section of roads to contain and redirect errant vehicles. Although restraint systems are mainly designed against car, truck and bus impacts, there is increasing pressure from the motorcycle industry to incorporate motorcycle protection systems into them. In this paper, development details of a new and versatile motorcycle barrier, CMPS, coupled with an existing vehicle barrier are presented. CMPS is intended to safely contain and redirect motorcyclists during a collision event. First, the crash performance of the CMPS design was evaluated by means of the three-dimensional computer simulation program LS-DYNA. Then full-scale crash tests were used to verify the acceptability of the CMPS design. Crash tests were performed at the CSI proving ground facility using a motorcycle dummy in accordance with the prEN 1317-8 specification. Full-scale crash test results show that CMPS is able to successfully contain and redirect the dummy with minimal injury risk. Damage to the barrier was also minimal, proving the robustness of the CMPS design. Based on the test findings and further review by the authorities, implementation of CMPS on the highway system was recommended.
NASA Astrophysics Data System (ADS)
Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.
2003-09-01
In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs, thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system comprises ten component models, each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
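The key coupling parameter discussed above is the communication interval: each subsystem integrates with its own internal step and exchanges interface variables only at interval boundaries. The sketch below illustrates the pattern with two toy first-order subsystems; the dynamics and interval values are invented, not the JSF component models:

```python
import numpy as np

H, DT = 0.01, 0.001          # communication interval vs internal step (s)

def advance_electrical(x, u, dt):
    return x + dt * (-5.0 * x + u)        # toy stand-in subsystem

def advance_mechanical(y, v, dt):
    return y + dt * (-1.0 * y + 0.5 * v)  # toy stand-in subsystem

x, y = 1.0, 0.0
for _ in range(100):                 # 100 communication intervals
    u, v = y, x                      # exchange interface variables once
    for _ in range(int(H / DT)):     # then integrate independently
        x = advance_electrical(x, u, DT)
        y = advance_mechanical(y, v, DT)
```

A larger H cuts communication overhead but holds the interface variables stale for longer, which is exactly the accuracy/speed tradeoff the paper's interval-selection discussion addresses.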
Theory and data for simulating fine-scale human movement in an urban environment
Perkins, T. Alex; Garcia, Andres J.; Paz-Soldán, Valerie A.; Stoddard, Steven T.; Reiner, Robert C.; Vazquez-Prokopec, Gonzalo; Bisanzio, Donal; Morrison, Amy C.; Halsey, Eric S.; Kochel, Tadeusz J.; Smith, David L.; Kitron, Uriel; Scott, Thomas W.; Tatem, Andrew J.
2014-01-01
Individual-based models of infectious disease transmission depend on accurate quantification of fine-scale patterns of human movement. Existing models of movement either pertain to overly coarse scales, simulate some aspects of movement but not others, or were designed specifically for populations in developed countries. Here, we propose a generalizable framework for simulating the locations that an individual visits, time allocation across those locations, and population-level variation therein. As a case study, we fit alternative models for each of five aspects of movement (number, distance from home and types of locations visited; frequency and duration of visits) to interview data from 157 residents of the city of Iquitos, Peru. Comparison of alternative models showed that location type and distance from home were significant determinants of the locations that individuals visited and how much time they spent there. We also found that for most locations, residents of two neighbourhoods displayed indistinguishable preferences for visiting locations at various distances, despite differing distributions of locations around those neighbourhoods. Finally, simulated patterns of time allocation matched the interview data in a number of ways, suggesting that our framework constitutes a sound basis for simulating fine-scale movement and for investigating factors that influence it. PMID:25142528
Modeling a Million-Node Slim Fly Network Using Parallel Discrete-Event Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, Noah; Carothers, Christopher; Mubarak, Misbah
As supercomputers close in on exascale performance, the increased number of processors and processing power translates to an increased demand on the underlying network interconnect. The Slim Fly network topology, a new low-diameter and low-latency interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this paper, we present a high-fidelity Slim Fly flit-level model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate our Slim Fly model against the Kathareios et al. Slim Fly model results provided at moderately sized network scales. We further scale the model size up to an unprecedented 1 million compute nodes; and through visualization of network simulation metrics such as link bandwidth, packet latency, and port occupancy, we gain insight into the network behavior at the million-node scale. We also show linear strong scaling of the Slim Fly model on an Intel cluster, achieving a peak event rate of 36 million events per second using 128 MPI tasks to process 7 billion events. Detailed analysis of the underlying discrete-event simulation performance shows that a million-node Slim Fly model simulation can execute in 198 seconds on the Intel cluster.
Field Scale Optimization for Long-Term Sustainability of Best Management Practices in Watersheds
NASA Astrophysics Data System (ADS)
Samuels, A.; Babbar-Sebens, M.
2012-12-01
Agricultural and urban land use changes have led to disruption of natural hydrologic processes and impairment of streams and rivers. Multiple previous studies have evaluated Best Management Practices (BMPs) as means for restoring existing hydrologic conditions and reducing impairment of water resources. However, planning of these practices has relied on watershed-scale hydrologic models that identify locations and types of practices at scales much coarser than the actual field scale, where landowners have to plan, design, and implement the practices. Field-scale hydrologic modeling provides means for identifying relationships between BMP type, spatial location, and the interaction between BMPs at a finer farm/field scale that is usually more relevant to the decision maker (i.e., the landowner). This study focuses on development of a simulation-optimization approach for field-scale planning of BMPs in the School Branch stream system of Eagle Creek Watershed, Indiana, USA. The Agricultural Policy Environmental eXtender (APEX) tool is used as the field-scale hydrologic model, and a multi-objective optimization algorithm is used to search for optimal alternatives. Multiple climate scenarios downscaled to the watershed scale are used to test the long-term performance of these alternatives under extreme weather conditions. The effectiveness of these BMPs under multiple weather conditions is included within the simulation-optimization approach as a criterion/goal to assist landowners in identifying sustainable designs of practices. The results from these scenarios will further enable efficient BMP planning for current and future usage.
NASA Astrophysics Data System (ADS)
Lin, Shian-Jiann; Harris, Lucas; Chen, Jan-Huey; Zhao, Ming
2014-05-01
A multi-scale High-Resolution Atmosphere Model (HiRAM) is being developed at the NOAA Geophysical Fluid Dynamics Laboratory. The model's dynamical framework is the non-hydrostatic extension of the vertically Lagrangian finite-volume dynamical core (Lin 2004, Monthly Wea. Rev.) constructed on a stretchable (via Schmidt transformation) cubed-sphere grid. Physical parametrizations originally designed for IPCC-type climate predictions are in the process of being modified and made more "scale-aware", in an effort to make the model suitable for multi-scale weather-climate applications, with horizontal resolution ranging from 1 km (near the target high-resolution region) to as low as 400 km (near the antipodal point). One of the main goals of this development is to enable simulation of high-impact weather phenomena (such as tornadoes, thunderstorms, and category-5 hurricanes) within an IPCC-class climate modeling system, previously thought impossible. We will present preliminary results covering a very wide spectrum of temporal-spatial scales, ranging from simulation of tornado genesis (hours), Madden-Julian Oscillations (intra-seasonal), and tropical cyclones (seasonal), to Quasi-Biennial Oscillations (intra-decadal), using the same global multi-scale modeling system.
Hyman, Jeffrey De'Haven; Painter, S. L.; Viswanathan, H.; ...
2015-09-12
We investigate how the choice of injection mode impacts transport properties in kilometer-scale three-dimensional discrete fracture networks (DFN). The choice of injection mode, resident or flux-weighted, is designed to mimic different physical phenomena. It has been hypothesized that solute plumes injected under resident conditions evolve to behave similarly to solutes injected under flux-weighted conditions. Previously, computational limitations have prohibited the large-scale simulations required to investigate this hypothesis. We investigate this hypothesis by using a high-performance DFN suite, dfnWorks, to simulate flow in kilometer-scale three-dimensional DFNs based on fractured granite at the Forsmark site in Sweden, and adopt a Lagrangian approach to simulate transport therein. Results show that after traveling through a pre-equilibrium region, both injection methods exhibit linear scaling of the first moment of travel time and power law scaling of the breakthrough curve with similar exponents, slightly larger than 2. Lastly, the physical mechanisms behind this evolution appear to be the combination of in-network channeling of mass into larger fractures, which offer reduced resistance to flow, and in-fracture channeling, which results from the topology of the DFN.
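The two injection modes can be illustrated directly: resident injection samples inlet locations uniformly, while flux-weighted injection samples them with probability proportional to the local volumetric flux. In the sketch below the flux distribution is a hypothetical lognormal, not the Forsmark DFN output:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical inlet-plane fracture intersections with local volumetric flux.
flux = rng.lognormal(mean=0.0, sigma=1.5, size=500)

n_particles = 10_000
# Resident injection: particles placed uniformly over the inlet plane.
resident = rng.integers(0, flux.size, n_particles)
# Flux-weighted injection: placement probability proportional to local flux.
flux_weighted = rng.choice(flux.size, n_particles, p=flux / flux.sum())

# Mean flux sampled by each mode; flux-weighting favors fast channels,
# which is why resident plumes need a pre-equilibrium region to catch up.
print(flux[resident].mean(), flux[flux_weighted].mean())
```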
Integrated layout based Monte-Carlo simulation for design arc optimization
NASA Astrophysics Data System (ADS)
Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James
2016-03-01
Design rules are created by considering a wafer fail mechanism with the relevant design levels under various design cases, and the values are set to cover the worst-case scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling. As an example, SRAM designs always need extensive ground-rule waivers. Furthermore, dense design also often involves a "design arc", a collection of design rules the sum of which equals the critical pitch defined by the technology. In a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout-Based Monte-Carlo Simulation (LBMCS) with multiple integrated ground-rule checks. We apply this methodology to the SRAM word-line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div., Semiconductor Research and Development Center, Hopewell Junction, NY 12533.
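A layout-based Monte-Carlo rule check differs from worst-casing in that all coupled rules of an arc are evaluated jointly on each sampled process instance. The sketch below illustrates this for two invented rules tied to a shared pitch; the process assumptions and limits are placeholders, not IBM values:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000                         # Monte-Carlo samples of process variation

# Hypothetical process assumptions (nm): mean and sigma for two arc terms.
cd_contact = rng.normal(20.0, 1.5, N)     # contact CD
overlay    = rng.normal(0.0, 2.0, N)      # contact-to-gate overlay
PITCH = 64.0                              # critical pitch fixed by technology

# In a design arc the rules are coupled: the remaining space is whatever
# the pitch leaves after the sampled CD and overlay shift.
space = PITCH - cd_contact - np.abs(overlay)

# Joint fail probability, checking both rules on the *same* sample,
# instead of budgeting each rule separately for its own worst case.
fail = (cd_contact < 17.0) | (space < 40.0)
print("estimated fail rate:", fail.mean())
```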
NASA Astrophysics Data System (ADS)
Frantziskonis, George N.; Gur, Sourav
2017-06-01
Thermally induced phase transformation in NiTi shape memory alloys (SMAs) shows strong size and shape dependence, collectively termed length scale effects, at the nano- to micrometer scales, which has important implications for the design and use of devices and structures at such scales. This paper, based on a recently developed multiscale model that utilizes molecular dynamics (MD) simulations at small scales and MD-verified phase field (PhF) simulations at larger scales, reports results on specific length scale effects, i.e. length scale effects on martensite phase fraction (MPF) evolution, on transformation temperatures (martensite and austenite start and finish), and on the thermally cyclic transformation between the austenitic and martensitic phases. The multiscale study identifies saturation points for length scale effects and studies, for the first time, the length scale effect on the kinetics (i.e. developed internal strains) in the B19' phase during phase transformation. The major part of the work addresses small-scale single crystals in specific orientations. However, the multiscale method is used in a unique and novel way to indirectly study length scale and grain size effects on evolution kinetics in polycrystalline NiTi, and to compare the simulation results to experiments. The interplay of the grain size and the length scale effect on the thermally induced MPF evolution is also shown in the present study. Finally, the multiscale coupling results are employed to improve phenomenological material models for NiTi SMAs.
Design research of nanopositioner based on SPM and its simulation of FEM
NASA Astrophysics Data System (ADS)
Zhang, Zhenyu; Li, Hongqi; Zhou, Hongxiu; Li, Linan; Liu, Xiangjun
2006-01-01
A novel nanopositioning stage was designed according to the scanning properties of SPM, with flexure hinges as the kinematic structure and piezoelectric ceramics as actuators. The kinematic precision and X-direction range of the nanopositioner are 1.55 nm and 26.4 microns, respectively, as demonstrated by kinetic analysis and finite element method (FEM) simulation. The designed SPM-based nanopositioner moves in three dimensions at the nanometer scale, and its motion in the X, Y, and Z directions is decoupled and isotropic. Furthermore, the frame of the nanopositioner is simple and convenient to manufacture, which gives it broad prospects in the field of nanopositioning and nanotracing.
Properties important to mixing and simulant recommendations for WTP full-scale vessel testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poirier, M. R.; Martino, C. J.
2015-12-01
Full Scale Vessel Testing (FSVT) is being planned by Bechtel National, Inc., to demonstrate the ability of the standard high solids vessel design (SHSVD) to meet mixing requirements over the range of fluid properties planned for processing in the Pretreatment Facility (PTF) of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. WTP personnel requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in FSVT. Among the tasks assigned to SRNL was to develop a list of waste properties that are important to pulse-jet mixer (PJM) performance in WTP vessels with elevated concentrations of solids.
ReaDDy - A Software for Particle-Based Reaction-Diffusion Dynamics in Crowded Cellular Environments
Schöneberg, Johannes; Noé, Frank
2013-01-01
We introduce the software package ReaDDy for simulation of detailed spatiotemporal mechanisms of dynamical processes in the cell, based on reaction-diffusion dynamics with particle resolution. In contrast to other particle-based reaction kinetics programs, ReaDDy supports particle interaction potentials. This permits effects such as space exclusion, molecular crowding and aggregation to be modeled. The biomolecules simulated can be represented as a sphere, or as a more complex geometry such as a domain structure or polymer chain. ReaDDy bridges the gap between small-scale but highly detailed molecular dynamics or Brownian dynamics simulations and large-scale but little-detailed reaction kinetics simulations. ReaDDy has a modular design that enables the exchange of the computing core by efficient platform-specific implementations or dynamical models that are different from Brownian dynamics. PMID:24040218
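The distinguishing feature named above, interaction potentials within a particle-based reaction-diffusion scheme, amounts to adding pairwise forces to the Brownian update. Below is a minimal sketch with an assumed harmonic space-exclusion potential and illustrative constants; it mimics the idea, not ReaDDy's actual API:

```python
import numpy as np

KT, DT = 1.0, 1e-4                 # thermal energy and time step (reduced units)
rng = np.random.default_rng(3)

def harmonic_repulsion(r_vec, sigma=1.0, k=10.0):
    """Pairwise space-exclusion force: harmonic repulsion inside contact."""
    r = np.linalg.norm(r_vec)
    if r >= sigma or r == 0.0:
        return np.zeros(3)
    return k * (sigma - r) * r_vec / r

def bd_step(pos, D=1.0):
    """One Brownian-dynamics step with interaction potentials, the feature
    that distinguishes this scheme from purely kinetic particle methods."""
    forces = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            f = harmonic_repulsion(pos[i] - pos[j])
            forces[i] += f
            forces[j] -= f
    noise = rng.normal(0.0, np.sqrt(2 * D * DT), pos.shape)
    return pos + D * DT / KT * forces + noise

pos = rng.uniform(0, 5, (50, 3))   # 50 spherical particles in a toy box
for _ in range(100):
    pos = bd_step(pos)
```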
A full scale hydrodynamic simulation of pyrotechnic combustion
NASA Astrophysics Data System (ADS)
Kim, Bohoon; Jang, Seung-Gyo; Yoh, Jack
2017-06-01
A full-scale hydrodynamic simulation that requires an accurate reproduction of shock-induced detonation was conducted for the design of an energetic component system. A series of small-scale gap tests and detailed hydrodynamic simulations were used to validate the reactive flow model for predicting shock propagation in a train configuration and to quantify the shock sensitivity of the energetic materials. The energetic component system is composed of four main components, namely a donor unit (HNS + HMX), a bulkhead (STS), an acceptor explosive (RDX), and a propellant (BKNO3) for gas generation. The pressurized gases generated from the burning propellant were purged into a 10 cc release chamber for study of the inherent oscillatory flow induced by the interference between shock and rarefaction waves. The pressure fluctuations measured from experiment and calculation were investigated to further validate the peculiar peak at a specific characteristic frequency (ωc = 8.3 kHz). In this paper, a step-by-step numerical description of the detonation of high-explosive components, deflagration of the propellant component, and deformation of the metal component is given in order to facilitate the proper implementation of the outlined formulation into a shock physics code for a full-scale hydrodynamic simulation of the energetic component system.
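The characteristic frequency quoted above is the kind of feature extracted from pressure traces by spectral analysis. A minimal sketch, applied to a synthetic stand-in signal rather than the measured chamber data:

```python
import numpy as np

FS = 100_000                       # sampling rate (Hz), illustrative
t = np.arange(0, 0.1, 1 / FS)
# Stand-in pressure trace: oscillation near the reported 8.3 kHz plus noise.
p = np.sin(2 * np.pi * 8300 * t) + 0.3 * np.random.randn(t.size)

# Windowed FFT; the dominant bin recovers the characteristic frequency.
spec = np.abs(np.fft.rfft(p * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / FS)
print("dominant frequency: %.1f kHz" % (freqs[spec.argmax()] / 1e3))
```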
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran
Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids, because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.
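The abstract does not name the fitted settling and viscosity closures; Richardson-Zaki hindered settling and Krieger-Dougherty viscosity are common volume-fraction-dependent choices and appear in the sketch below purely as stand-ins, with invented parameters:

```python
import numpy as np

PHI_MAX = 0.6        # maximum packing fraction (assumed)
V_STOKES = 1e-4      # single-particle settling velocity, m/s (assumed)

def settling_velocity(phi, n=4.65):
    """Hindered settling; Richardson-Zaki is one common closure, used here
    only as an illustrative stand-in for the model fitted in the paper."""
    return V_STOKES * (1.0 - phi / PHI_MAX) ** n

def relative_viscosity(phi, intrinsic=2.5):
    """Krieger-Dougherty effective viscosity, again an illustrative choice."""
    return (1.0 - phi / PHI_MAX) ** (-intrinsic * PHI_MAX)

phi = np.linspace(0.0, 0.55, 12)
print(settling_velocity(phi)[:3], relative_viscosity(phi)[:3])
```

Both closures feed the Eulerian mixture model as volume-fraction-dependent coefficients: settling moves solids downward while the diverging viscosity stiffens the settled bed.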
Principles for scaling of distributed direct potable water reuse systems: a modeling study.
Guo, Tianjiao; Englehardt, James D
2015-05-15
Scaling of direct potable water reuse (DPR) systems involves tradeoffs between treatment-facility economy of scale and the cost and energy of conveyance, including the energy for upgradient distribution of treated water and the retention of wastewater thermal energy. In this study, a generalized model of the cost of DPR as a function of treatment plant scale, assuming futuristic, optimized conveyance networks, was constructed for the purpose of developing design principles. Fractal landscapes representing flat, hilly, and mountainous topographies were simulated, with urban, suburban, and rural housing distributions placed by a modified preferential-growth algorithm. Treatment plants were allocated by agglomerative hierarchical clustering and networked to buildings by minimum spanning tree. Simulations assume an advanced-oxidation-based DPR system design, with a 20-year design life and the capability to mineralize chemical oxygen demand below normal detection limits, allowing implementation in regions where disposal of concentrate containing hormones and antiscalants is not practical. Results indicate that total DPR capital and O&M costs in rural areas, where systems that return nutrients to the land may be more appropriate, are high. However, costs in urban/suburban areas are competitive with current water/wastewater service costs at scales of ca. one plant per 10,000 residences. This size is relatively small, and costs do not increase significantly until plant service areas fall below 100 to 1000 homes. Based on these results, distributed DPR systems are recommended for consideration in urban/suburban water and wastewater system capacity expansion projects.
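The allocation pipeline described above (cluster homes to plants, then network each cluster by a minimum spanning tree) can be sketched compactly with SciPy; the housing locations are synthetic and the cluster count arbitrary, so this reproduces the mechanics, not the paper's cost model:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)
homes = rng.uniform(0, 10, (300, 2))          # synthetic housing locations

# Agglomerative hierarchical clustering allocates one plant per cluster.
labels = fcluster(linkage(homes, method="ward"), t=30, criterion="maxclust")

pipe_len = 0.0
for c in np.unique(labels):
    pts = homes[labels == c]
    if len(pts) < 2:
        continue
    # A minimum spanning tree networks the cluster's buildings together.
    mst = minimum_spanning_tree(squareform(pdist(pts)))
    pipe_len += mst.sum()
print("total conveyance length:", pipe_len)
```

Sweeping the cluster count (plants per population) against treatment economy-of-scale curves is the kind of tradeoff the study quantifies.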
Computational Modeling in Plasma Processing for 300 mm Wafers
NASA Technical Reports Server (NTRS)
Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
Migration toward the 300 mm wafer size has been initiated recently due to process economics and to meet future demands for integrated circuits. A major issue facing the semiconductor community at this juncture is the development of suitable processing equipment, for example, plasma processing reactors that can accommodate 300 mm wafers. In this Invited Talk, scaling of reactors will be discussed with the aid of computational fluid dynamics results. We have undertaken reactor simulations using CFD with reactor geometry, pressure, and precursor flow rates as parameters in a systematic investigation. These simulations provide guidelines for scale-up in reactor design.
Self-reconfigurable ship fluid-network modeling for simulation-based design
NASA Astrophysics Data System (ADS)
Moon, Kyungjin
Our world is filled with large-scale engineering systems, which provide various services and conveniences in our daily life. A distinctive trend in the development of today's large-scale engineering systems is the extensive and aggressive adoption of automation and autonomy, which enables significant improvement of systems' robustness, efficiency, and performance with considerably reduced manning and maintenance costs; the U.S. Navy's DD(X), the next-generation destroyer program, is considered an extreme example of this trend. This thesis pursues a modeling solution for performing simulation-based analysis in the conceptual or preliminary design stage of an intelligent, self-reconfigurable ship fluid system, which is one of the concepts of DD(X) engineering plant development. Through investigation of the Navy's approach to designing a more survivable ship system, it is found that the current naval simulation-based analysis environment is limited by capability gaps in damage modeling, dynamic model reconfiguration, and the simulation speed of domain-specific models, especially fluid network models. As enablers for filling these gaps, two essential elements were identified in the formulation of the modeling method. The first is the graph-based topological modeling method, which is employed for rapid model reconstruction and damage modeling; the second is the recurrent-neural-network-based, component-level surrogate modeling method, which is used to improve the affordability and efficiency of the modeling and simulation (M&S) computations. The integration of the two methods delivers computationally efficient, flexible, and automation-friendly M&S, creating an environment for more rigorous damage analysis and exploration of design alternatives. As a demonstration for evaluating the developed method, a simulation model of a notional ship fluid system was created and a damage analysis was performed. Next, models representing different design configurations of the fluid system were created, and damage analyses were performed with them in order to find an optimal design configuration for system survivability. Finally, the benefits and drawbacks of the developed method are discussed based on the results of the demonstration.
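The graph-based topological modeling element can be illustrated with a toy fluid network: damage is modeled by deleting hit nodes, and reconfiguration is assessed by asking which service loads remain connected to a source. The topology and naming below are invented for illustration and do not represent the thesis's notional ship system:

```python
import networkx as nx

# Toy topology: two pumps feed service loads through a looped main (assumed).
G = nx.Graph()
G.add_edges_from([("pump1", "a"), ("a", "b"), ("b", "load1"),
                  ("pump2", "c"), ("c", "b"), ("a", "c"), ("c", "load2")])

def survivable_loads(graph, damaged_nodes):
    """Damage modeling by graph surgery: remove hit components, then ask
    which loads still reach any pump -- the reconfiguration question."""
    g = graph.copy()
    g.remove_nodes_from(damaged_nodes)
    pumps = {n for n in g if n.startswith("pump")}
    return {n for n in g if n.startswith("load")
            and any(nx.has_path(g, n, p) for p in pumps)}

print(survivable_loads(G, {"a"}))     # loads still served after a hit on "a"
```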
NASA Technical Reports Server (NTRS)
Schmidt, Gene I.; Rossow, Vernon J.; Vanaken, Johannes M.; Parrish, Cynthia L.
1987-01-01
The features of a 1/50-scale model of the National Full-Scale Aerodynamics Complex are first described. An overview is then given of some results from the various tests conducted with the model to aid in the design of the full-scale facility. It was found that the model tunnel accurately simulated many of the operational characteristics of the full-scale circuits. Some characteristics predicted by the model were, however, noted to differ from previous full-scale results by about 10%.
2010-08-04
...airway management practices in the PACU have been deemed successful by KMC anesthesia management. Subject terms: Human Patient Simulation; Emergency... of South Alabama and KMC Clinical Research Laboratory (CRL) were received. The training sessions were planned for two 4-hour sessions in the HPS... assistance of the KMC CRL research statistician. Findings: results of the NLN Simulation Design Scale surveys showed seven of eight nurses in the...
Multi-Scale Hierarchical and Topological Design of Structures for Failure Resistance
2013-10-04
materials, simulation, 3D printing, advanced manufacturing, design, fracture. Markus J. Buehler, Massachusetts Institute of Technology (MIT), 77... by Mineralized Natural Materials: Computation, 3D Printing, and Testing, Advanced Functional Materials (09 2013). doi: 10.1002/adfm.201300215 ... have made substantial progress. Recent work focuses on the analysis of topological effects of composite design, 3D printing of bioinspired and...
1991-04-01
Boiler and Pressure Vessel Code. Other design requirements are developed from standard safe... Boiler and Pressure Vessel Code. The following three conditions constitute the primary design parameters for pressure vessels: (a) Design Working... rules and practices of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code. Section VIII, Division 1 of the ASME
Use of a PhET Interactive Simulation in General Chemistry Laboratory: Models of the Hydrogen Atom
ERIC Educational Resources Information Center
Clark, Ted M.; Chamberlain, Julia M.
2014-01-01
An activity supporting the PhET interactive simulation, Models of the Hydrogen Atom, has been designed and used in the laboratory portion of a general chemistry course. This article describes the framework used to successfully accomplish implementation on a large scale. The activity guides students through a comparison and analysis of the six…
Investigation related to hydrogen isotopes separation by cryogenic distillation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bornea, A.; Zamfirache, M.; Stefanescu, I.
2008-07-15
Research conducted in the last fifty years has shown that one of the most efficient techniques of removing tritium from the heavy water used as moderator and coolant in CANDU reactors (such as that operated at Cernavoda, Romania) is hydrogen cryogenic distillation. Designing and implementing the concept of cryogenic distillation columns requires experiments to be conducted as well as computer simulations. In particular, computer simulations are of great importance when designing and evaluating the performance of a column or a series of columns. Experimental data collected from laboratory work will be used as input for computer simulations run at larger scale (for the Pilot Plant for Tritium and Deuterium Separation) in order to increase confidence in the simulated results. Studies carried out were focused on the following: - Quantitative analyses of important parameters such as the number of theoretical plates, inlet area, reflux flow, extraction flow-rates, working pressure, etc. - Columns connected in series in such a way as to fulfil the separation requirements. Experiments were carried out on a laboratory-scale installation to investigate the performance of contact elements with continuous packing. The packing was manufactured in our institute. (authors)
Mathematical and Numerical Techniques in Energy and Environmental Modeling
NASA Astrophysics Data System (ADS)
Chen, Z.; Ewing, R. E.
Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor and pharmaceutical design to large-scale applications such as global weather modeling and astrophysics. In particular, simulation of the environmental effects of air pollution is extensive. Here we address the need for similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms
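As a concrete instance of the discretization step discussed above, the following sketch solves 1D advection-dispersion of a contaminant plume with an explicit upwind finite-difference scheme; the velocity, dispersion coefficient, and boundary choices are illustrative assumptions, not from the book.

```python
# Minimal 1D advection-dispersion sketch of contaminant transport,
# using an explicit upwind finite-difference scheme.
import numpy as np

n, L = 200, 100.0                   # grid points, domain length (m)
dx = L / n
v, D = 0.5, 0.05                    # pore velocity (m/d), dispersion (m^2/d)
dt = 0.4 * min(dx / v, dx**2 / (2 * D))   # respect CFL and diffusion limits

c = np.zeros(n)
c[0] = 1.0                          # constant-concentration inlet boundary
for _ in range(400):
    adv = -v * (c[1:-1] - c[:-2]) / dx                # upwind advection
    dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2  # dispersion
    c[1:-1] += dt * (adv + dif)
    c[0], c[-1] = 1.0, c[-2]        # inlet fixed, outlet zero-gradient

print("plume front (c > 0.5) at x =", dx * int(np.argmin(c > 0.5)), "m")
```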
NASA Astrophysics Data System (ADS)
Penna, James; Morgan, Kyle; Grubb, Isaac; Jarboe, Thomas
2017-10-01
The Helicity Injected Torus - Steady Inductive 3 (HIT-SI3) experiment forms and maintains spheromaks via Steady Inductive Helicity Injection (SIHI), using discrete injectors that inject magnetic helicity via a non-axisymmetric perturbation and drive toroidally symmetric current. Newer designs for larger SIHI-driven spheromaks incorporate a set of injectors connected to a single external manifold to allow more freedom in the toroidal structure of the applied perturbation. Simulations have been carried out using the NIMROD code to assess the effectiveness of various imposed mode structures and injector schemes in driving current via Imposed Dynamo Current Drive (IDCD). Results are presented here for varying flux conserver shapes on a device approximately 1.5 times larger than the current HIT-SI3 experiment. The imposed mode structures and spectra of the simulated spheromaks are analyzed in order to examine magnetic structure and stability and to determine an optimal regime for IDCD sustainment in a large device. The development of scaling laws for manifold operation is also presented, and simulation results are analyzed and assessed as part of the development path for the large-scale device.
Multidisciplinary propulsion simulation using the numerical propulsion system simulator (NPSS)
NASA Technical Reports Server (NTRS)
Claus, Russel W.
1994-01-01
Implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large-scale system tests. The traditional design analysis procedure decomposes the engine into isolated components and focuses attention on each single physical discipline (e.g., fluid or structural dynamics). Consequently, the interactions that naturally occur between components and disciplines can be masked by the limited interactions that occur between the individuals or teams doing the design, and must be uncovered during expensive engine testing. This overview discusses a cooperative effort of NASA, industry, and universities to integrate disciplines, components, and high-performance computing into a Numerical Propulsion System Simulator (NPSS).
ASIC implementation of recursive scaled discrete cosine transform algorithm
NASA Astrophysics Data System (ADS)
On, Bill N.; Narasimhan, Sam; Huang, Victor K.
1994-05-01
A program to implement the Recursive Scaled Discrete Cosine Transform (DCT) algorithm proposed by H. S. Hou has been undertaken at the Institute of Microelectronics. The design was implemented using a top-down methodology with VHDL (VHSIC Hardware Description Language) for chip modeling. Once the VHDL simulation had been satisfactorily completed, the design was synthesized into gates using a synthesis tool. The architecture consists of two processing units together with a memory module for data storage and transposition. Each processing unit is composed of four pipelined stages, which allows the internal clock to run at one-eighth (1/8) the speed of the pixel clock. Each stage operates on eight pixels in parallel. As the data flow through each stage, various adders and multipliers transform them into the desired coefficients. The Scaled IDCT was implemented in a similar fashion, with the adders and multipliers rearranged to perform the inverse DCT algorithm. The chip has been verified using Field Programmable Gate Array devices and the design is operational. The combination of fewer required multiplications and a pipelined architecture gives Hou's Recursive Scaled DCT good potential for achieving high performance at low cost in a Very Large Scale Integration implementation.
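For orientation, the sketch below shows the "scaled DCT" idea in NumPy/SciPy terms: coefficients are computed up to fixed per-coefficient scale factors, which can be folded into a downstream stage such as quantization. It uses SciPy's reference DCT rather than Hou's recursive butterfly structure, so it illustrates the concept only.

```python
# "Scaled DCT" concept: compute DCT coefficients up to a known
# per-coefficient scale factor and restore the factors downstream.
import numpy as np
from scipy.fft import dct

x = np.arange(8.0)                          # one row of 8 pixels
X = dct(x, type=2, norm="ortho")            # exact 8-point DCT-II

s = np.full(8, np.sqrt(2.0 / 8))            # per-coefficient scale factors
s[0] = np.sqrt(1.0 / 8)                     # (orthonormal DCT-II scaling)
X_scaled = X / s                            # "scaled" coefficients
assert np.allclose(X_scaled * s, X)         # factors restored downstream
```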
NASA Astrophysics Data System (ADS)
Arshadi, Amir
Image-based simulation of complex materials is a very important tool for understanding their mechanical behavior and an effective tool for the successful design of composite materials. In this thesis an image-based multi-scale finite element approach is developed to predict the mechanical properties of asphalt mixtures. In this approach the "up-scaling" and homogenization of each scale to the next is critically designed to improve accuracy. In addition to this multi-scale efficiency, this study introduces an approach for considering particle contacts at each of the scales in which mineral particles exist. One of the most important pavement distresses, seriously affecting pavement performance, is fatigue cracking. As this cracking generally takes place in the binder phase of the asphalt mixture, the binder fatigue behavior is assumed to be one of the main factors influencing overall pavement fatigue performance. It is also known that aggregate gradation, mixture volumetric properties, and filler type and concentration can affect damage initiation and progression in asphalt mixtures. This study was conducted to develop a tool to characterize the damage properties of asphalt mixtures at all scales. In the present study the viscoelastic continuum damage model is implemented in the well-known finite element software ABAQUS via the user material subroutine (UMAT) in order to simulate the state of damage in the binder phase under repeated uniaxial sinusoidal loading. The inputs are based on experimentally derived measurements of the binder properties. For the mastic and mortar scales, artificial 2-dimensional images were generated and used to characterize the properties of those scales. Finally, 2D scanned images of asphalt mixtures are used to study asphalt mixture fatigue behavior under loading. In order to validate the proposed model, experimental test results and simulation results were compared. Indirect tensile fatigue tests were conducted on asphalt mixture samples. A comparison between experimental results and simulation results shows that the model developed in this study is capable of predicting the effect of asphalt binder properties and aggregate micro-structure on the mechanical behavior of asphalt concrete under loading.
Impact gages for detecting meteoroid and other orbital debris impacts on space vehicles.
NASA Technical Reports Server (NTRS)
Mastandrea, J. R.; Scherb, M. V.
1973-01-01
Impacts on space vehicles have been simulated using the McDonnell Douglas Aerophysics Laboratory (MDAL) Light-Gas Guns to launch particles at hypervelocity speeds into scaled space structures. Using impact gages and a triangulation technique, these impacts have been detected and accurately located. This paper describes in detail the various types of impact gages (piezoelectric PZT-5A, quartz, electret, and off-the-shelf plastics) used. This description includes gage design and experimental results for gages installed on single-walled scaled payload carriers, multiple-walled satellites and space stations, and single-walled full-scale Delta tank structures. A brief description of the triangulation technique, the impact simulation, and the data acquisition system is also included.
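A hedged sketch of the kind of arrival-time triangulation mentioned above: given gage positions and a known wave speed, a least-squares fit recovers the impact location. The sensor layout, wave speed, and arrival times are synthetic assumptions, not the paper's data.

```python
# Locate an impact from gage arrival times via least squares,
# assuming a known, constant wave speed in the structure.
import numpy as np
from scipy.optimize import least_squares

gages = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # m
c = 5000.0                                  # assumed wave speed (m/s)
true_xy = np.array([0.3, 0.7])
t = np.linalg.norm(gages - true_xy, axis=1) / c   # arrival times (s)

def residuals(p):
    x, y, t0 = p                            # impact point and impact time
    return np.linalg.norm(gages - [x, y], axis=1) / c + t0 - t

sol = least_squares(residuals, x0=[0.5, 0.5, 0.0])
print("estimated impact at", sol.x[:2])
```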
The impact of ARM on climate modeling
Randall, David A.; Del Genio, Anthony D.; Donner, Lee J.; ...
2016-07-15
Climate models are among humanity’s most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability, including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of Earth down to 100 km or smaller and implicitly include the effects of processes on even smaller scales, down to a micron or so. In addition, the atmospheric component of a climate model can be called an atmospheric general circulation model (AGCM).
NASA Astrophysics Data System (ADS)
Hu, S. X.; Michel, D. T.; Edgell, D. H.; Froula, D. H.; Follett, R. K.; Goncharov, V. N.; Myatt, J. F.; Skupsky, S.; Yaakobi, B.
2013-03-01
Direct-drive-ignition designs with plastic CH ablators create plasmas of long density scale lengths (Ln ≥ 500 μm) at the quarter-critical density (Nqc) region of the driving laser. The two-plasmon-decay (TPD) instability can exceed its threshold in such long-scale-length plasmas (LSPs). To investigate the scaling of TPD-induced hot electrons with laser intensity and plasma conditions, a series of planar experiments have been conducted at the Omega Laser Facility with 2-ns square pulses at the maximum laser energies available on OMEGA and OMEGA EP. Radiation-hydrodynamic simulations have been performed for these LSP experiments using the two-dimensional hydrocode draco. The simulated hydrodynamic evolution of such long-scale-length plasmas has been validated with the time-resolved full-aperture backscattering and Thomson-scattering measurements. draco simulations for the CH ablator indicate that (1) ignition-relevant long-scale-length plasmas of Ln approaching ~400 μm have been created; (2) the density scale length at Nqc scales as Ln(μm) ≃ RDPP × I^(1/4)/2; and (3) the electron temperature Te at Nqc scales as Te(keV) ≃ 0.95 × √I, with the incident intensity (I) measured in 10^14 W/cm^2, for plasmas created on both OMEGA and OMEGA EP configurations with different-sized (RDPP) distributed phase plates. These intensity scalings are in good agreement with the self-similar model predictions. The measured conversion fraction of laser energy into hot electrons, fhot, is found to have a similar behavior for both configurations: a rapid growth [fhot ≃ fc × (Gc/4)^6 for Gc < 4] followed by a saturation of the form fhot ≃ fc × (Gc/4)^1.2 for Gc ≥ 4, where the common-wave gain is defined as Gc = 3 × 10^-2 × Iqc·Ln·λ0/Te, with the laser intensity contributing to common-wave gain Iqc, Ln, Te at Nqc, and the laser wavelength λ0 measured in [10^14 W/cm^2], [μm], [keV], and [μm], respectively. The saturation level fc is observed to be fc ≃ 10^-2 at around Gc ≃ 4. The hot-electron temperature scales roughly linearly with Gc. Furthermore, to mitigate the TPD instability in long-scale-length plasmas, different ablator materials such as saran and aluminum have been investigated on OMEGA EP. Hot-electron generation has been reduced by a factor of 3-10 for saran and aluminum plasmas, compared to the CH case at the same incident laser intensity. draco simulations suggest that saran might be a better ablator for direct-drive-ignition designs as it balances TPD mitigation with an acceptable hydro-efficiency.
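The quoted scalings transcribe directly into a small estimator; the sketch below does so under the stated units, with placeholder values for the intensities and the phase-plate radius RDPP.

```python
# Direct transcription of the empirical scalings quoted above.
# Units per the abstract: I in 1e14 W/cm^2, Ln in microns, Te in keV,
# wavelength in microns. R_dpp and I_qc14 are experiment-specific inputs.
import math

def tpd_scalings(I14, R_dpp, I_qc14, lam0_um=0.351, f_c=1e-2):
    Ln = R_dpp * I14**0.25 / 2               # density scale length (um)
    Te = 0.95 * math.sqrt(I14)               # electron temperature (keV)
    Gc = 3e-2 * I_qc14 * Ln * lam0_um / Te   # common-wave gain
    f_hot = f_c * (Gc / 4) ** (6 if Gc < 4 else 1.2)
    return Ln, Te, Gc, f_hot

print(tpd_scalings(I14=6.0, R_dpp=600.0, I_qc14=3.0))
```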
Construction of multi-functional open modulized Matlab simulation toolbox for imaging ladar system
NASA Astrophysics Data System (ADS)
Wu, Long; Zhao, Yuan; Tang, Meng; He, Jiang; Zhang, Yong
2011-06-01
Ladar system simulation models a ladar system in software in order to predict its performance. This paper reviews developments in laser imaging radar simulation in domestic and international studies for different application requirements. The LadarSim and FOI-LadarSIM simulation facilities of Utah State University and the Swedish Defence Research Agency are introduced in detail. Domestic research in imaging ladar simulation has been limited in scale and non-unified in design, mostly achieving simple functional simulation based on ladar ranging equations. A laser imaging radar simulation with an open, modularized structure is therefore proposed, with unified modules for the ladar system, laser emitter, atmosphere models, target models, signal receiver, parameter settings, and system controller. A unified Matlab toolbox and standard control modules have been built, with regulated function inputs and outputs and communication protocols between hardware modules. A simulation of an ICCD gain-modulated imaging ladar system observing a space shuttle was performed with the toolbox. The simulation results show that the models and parameter settings of the Matlab toolbox simulate the actual detection process precisely. The unified control module and predefined parameter settings simplify the simulation of imaging ladar detection. The open structure enables the toolbox to be modified for specialized requirements, and the modularization gives simulations flexibility.
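The ranging-equation simulators mentioned above reduce to a single formula; here is a minimal Python sketch (the paper's toolbox itself is Matlab) of a monostatic ladar range equation for a Lambertian target, with all parameter values as illustrative assumptions.

```python
# Monostatic ladar range equation for a Lambertian target; parameters
# (power, reflectivity, aperture, extinction) are illustrative only.
import math

def received_power(P_t, R, rho=0.3, D_rx=0.1, eta_sys=0.8, alpha=1e-4):
    """P_t: transmit power (W); R: range (m); rho: target reflectivity;
    D_rx: receiver aperture (m); alpha: atmospheric extinction (1/m)."""
    T_atm = math.exp(-2 * alpha * R)            # two-way transmission
    A_rx = math.pi * (D_rx / 2) ** 2            # receiver aperture area
    return P_t * rho * A_rx / (math.pi * R**2) * T_atm * eta_sys

print(f"P_r at 1 km: {received_power(P_t=1e3, R=1000.0):.3e} W")
```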
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyler, L.L.; Trent, D.S.
The TEMPEST computer program was used to simulate fluid and thermal mixing in the cold leg and downcomer of a pressurized water reactor under emergency core cooling high-pressure injection (HPI), which is of concern to the pressurized thermal shock (PTS) problem. Application of the code was made in performing an analysis simulation of a full-scale Westinghouse three-loop plant design cold leg and downcomer. Verification/assessment of the code was performed and analysis procedures developed using data from Creare 1/5-scale experimental tests. Results of three simulations are presented. The first is a no-loop-flow case with high-velocity, low-negative-buoyancy HPI in a 1/5-scale model of a cold leg and downcomer. The second is a no-loop-flow case with low-velocity, high-negative-density (modeled with salt water) injection in a 1/5-scale model. Comparison of TEMPEST code predictions with experimental data for these two cases shows good agreement. The third simulation is a three-dimensional model of one loop of a full-size Westinghouse three-loop plant design. Included in this latter simulation are loop components extending from the steam generator to the reactor vessel and a one-third sector of the vessel downcomer and lower plenum. No data were available for this case. For the Westinghouse plant simulation, thermally coupled conduction heat transfer in structural materials is included. The cold leg pipe and fluid mixing volumes of the primary pump, the stillwell, and the riser to the steam generator are included in the model. In the reactor vessel, the thermal shield, pressure vessel cladding, and pressure vessel wall are thermally coupled to the fluid and thermal mixing in the downcomer. The inlet plenum mixing volume is included in the model. A 10-min (real-time) transient beginning at the initiation of HPI is computed to determine temperatures at the beltline of the pressure vessel wall.
Multi Length Scale Finite Element Design Framework for Advanced Woven Fabrics
NASA Astrophysics Data System (ADS)
Erol, Galip Ozan
Woven fabrics are integral parts of many engineering applications spanning from personal protective garments to surgical scaffolds. They provide a wide range of opportunities in designing advanced structures because of their high tenacity, flexibility, high strength-to-weight ratios and versatility. These advantages result from their inherent multi scale nature where the filaments are bundled together to create yarns while the yarns are arranged into different weave architectures. Their highly versatile nature opens up potential for a wide range of mechanical properties which can be adjusted based on the application. While woven fabrics are viable options for design of various engineering systems, being able to understand the underlying mechanisms of the deformation and associated highly nonlinear mechanical response is important and necessary. However, the multiscale nature and relationships between these scales make the design process involving woven fabrics a challenging task. The objective of this work is to develop a multiscale numerical design framework using experimentally validated mesoscopic and macroscopic length scale approaches by identifying important deformation mechanisms and recognizing the nonlinear mechanical response of woven fabrics. This framework is exercised by developing mesoscopic length scale constitutive models to investigate plain weave fabric response under a wide range of loading conditions. A hyperelastic transversely isotropic yarn material model with transverse material nonlinearity is developed for woven yarns (commonly used in personal protection garments). The material properties/parameters are determined through an inverse method where unit cell finite element simulations are coupled with experiments. The developed yarn material model is validated by simulating full scale uniaxial tensile, bias extension and indentation experiments, and comparing to experimentally observed mechanical response and deformation mechanisms. Moreover, mesoscopic unit cell finite elements are coupled with a design-of-experiments method to systematically identify the important yarn material properties for the macroscale response of various weave architectures. To demonstrate the macroscopic length scale approach, two new material models for woven fabrics were developed. The Planar Material Model (PMM) utilizes two important deformation mechanisms in woven fabrics: (1) yarn elongation, and (2) relative yarn rotation due to shear loads. The yarns' uniaxial tensile response is modeled with a nonlinear spring using constitutive relations while a nonlinear rotational spring is implemented to define fabric's shear stiffness. The second material model, Sawtooth Material Model (SMM) adopts the sawtooth geometry while recognizing the biaxial nature of woven fabrics by implementing the interactions between the yarns. Material properties/parameters required by both PMM and SMM can be directly determined from standard experiments. Both macroscopic material models are implemented within an explicit finite element code and validated by comparing to the experiments. Then, the developed macroscopic material models are compared under various loading conditions to determine their accuracy. Finally, the numerical models developed in the mesoscopic and macroscopic length scales are linked thus demonstrating the new systematic design framework involving linked mesoscopic and macroscopic length scale modeling approaches. 
The approach is demonstrated with both Planar and Sawtooth Material Models and the simulation results are verified by comparing the results obtained from meso and macro models.
Simulating flow around scaled model of a hypersonic vehicle in wind tunnel
NASA Astrophysics Data System (ADS)
Markova, T. V.; Aksenov, A. A.; Zhluktov, S. V.; Savitsky, D. V.; Gavrilov, A. D.; Son, E. E.; Prokhorov, A. N.
2016-11-01
A prospective hypersonic HEXAFLY aircraft is considered in this paper. In order to obtain the aerodynamic characteristics of a new design of the aircraft, experiments with a scaled model have been carried out in a wind tunnel under different conditions. The runs were performed at different angles of attack, with and without hydrogen combustion in the scaled propulsion engine. However, the measured physical quantities do not provide all the information about the flowfield. Numerical simulation can complement the experimental data as well as reduce the number of wind tunnel experiments. Besides that, reliable CFD software can be used to calculate the aerodynamic characteristics of any possible design of the full-scale aircraft under different operating conditions. The reliability of the numerical predictions must be confirmed in a verification study of the software. The present work is aimed at numerical investigation of the flowfield around and inside the scaled model of the HEXAFLY-CIAM module under wind tunnel conditions. A cold run (without combustion) was selected for this study. The calculations were performed in the FlowVision CFD software. The flow characteristics are compared against the available experimental data. The verification study carried out confirms the capability of the FlowVision CFD software to calculate the flows discussed.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
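A simplified sketch of the event-driven idea behind the BCPNN implementation described above: rather than updating synaptic state every time-step, a trace is decayed in closed form between spike events. The single z-trace and its time constant are simplifying assumptions, not the paper's full BCPNN equations.

```python
# Event-driven synaptic trace: decay analytically between spikes
# instead of integrating every simulation time-step.
import math

class EventDrivenTrace:
    def __init__(self, tau=20.0):
        self.tau, self.z, self.t_last = tau, 0.0, 0.0

    def on_spike(self, t):
        # closed-form exponential decay since the previous event
        self.z *= math.exp(-(t - self.t_last) / self.tau)
        self.z += 1.0                      # increment at the spike
        self.t_last = t

trace = EventDrivenTrace()
for t in [5.0, 12.0, 40.0]:               # presynaptic spike times (ms)
    trace.on_spike(t)
print(f"z at t=40 ms: {trace.z:.3f}")
```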
An automated methodology development. [software design for combat simulation]
NASA Technical Reports Server (NTRS)
Hawley, L. R.
1985-01-01
The design methodology employed in testing the applicability of Ada in large-scale combat simulations is described. Ada was considered as a substitute for FORTRAN to lower life-cycle costs and ease program development. An object-oriented approach was taken, which featured definitions of military targets, the capability of manipulating their condition in real time, and one-to-one correlation between the object states and real-world states. The simulation design process was automated by the problem statement language (PSL)/problem statement analyzer (PSA). The PSL/PSA system accessed the problem data base directly to enhance code efficiency by, for example, eliminating unused subroutines, and provided for automated report generation, besides allowing for functional and interface descriptions. The ways in which the methodology satisfied the responsiveness, reliability, transportability, modifiability, timeliness, and efficiency goals are discussed.
NASA Astrophysics Data System (ADS)
Du, Wenbo
A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
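A toy version of the combined surrogate- and gradient-based step described above: sample an "expensive" model, fit a cheap polynomial surrogate, and run a gradient optimizer on the surrogate. The quadratic objective and sampling ranges are stand-ins, not a battery model.

```python
# Surrogate-based optimization sketch: least-squares quadratic fit of
# sampled objective values, then BFGS on the cheap surrogate.
import numpy as np
from scipy.optimize import minimize

def expensive_model(x):                    # (thickness, porosity) -> -energy
    th, por = x
    return (th - 80.0) ** 2 / 400.0 + (por - 0.35) ** 2 * 50.0

rng = np.random.default_rng(1)
X = rng.uniform([40, 0.2], [120, 0.5], size=(40, 2))   # design samples
y = np.array([expensive_model(x) for x in X])

# Quadratic polynomial surrogate via least squares
A = np.column_stack([np.ones(len(X)), X, X**2, X[:, :1] * X[:, 1:]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x):
    th, por = x
    feats = np.array([1.0, th, por, th**2, por**2, th * por])
    return feats @ coef

res = minimize(surrogate, x0=[60.0, 0.3], method="BFGS")
print("surrogate optimum (thickness, porosity):", res.x)
```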
NASA Astrophysics Data System (ADS)
Bianco, C.; Tosco, T.; Sethi, R.
2017-12-01
Nanoremediation is a promising in-situ technology for the reclamation of contaminated aquifers. It consists of the subsurface injection of a reactive colloidal suspension for the in-situ treatment of pollutants. The overall success of this technology at the field scale is closely tied to achieving an effective and efficient emplacement of the nanoparticles (NP) inside the contaminated area. Mathematical models can be used to support the design of nanotechnology-based remediation by assessing the expected NP mobility at the field scale. Several analytical and numerical tools have been developed in recent years to model the transport of NPs in simplified geometries and boundary conditions. The numerical tool MNMs was developed by the authors of this work to simulate colloidal transport in 1D Cartesian and radial coordinates. A new modelling tool, MNM3D (Micro and Nanoparticle transport Model in 3D geometries), was also proposed for the simulation of injection and transport of NP suspensions in generic complex scenarios. MNM3D accounts for the simultaneous dependency of NP transport on water ionic strength and velocity. The software was developed to predict NP mobility at different stages of a nanoremediation application, from the design stage to the prediction of the long-term fate after injection. In this work an integrated experimental-modelling procedure is applied to support the design of a field-scale injection of goethite NPs carried out in the framework of the H2020 European project Reground. Column tests were performed at different injection flowrates using natural sand collected at the contaminated site as the porous medium. The tests are interpreted using MNMs to characterize the NP mobility and derive the constitutive equations describing the suspension behavior in the natural porous medium. MNM3D is then used to predict NP behavior during the field-scale injection and to assess the long-term mobility of the injected slurry. Finally, different injection scenarios were simulated to obtain a reliable estimation of several operating parameters, e.g. particle distribution around the injection well, radius of influence, and number of required wells.
Huang, Yi-Shao; Liu, Wel-Ping; Wu, Min; Wang, Zheng-Wu
2014-09-01
This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. The design of the hybrid adaptive fuzzy controller is then extended to address a general large-scale uncertain nonlinear system. It is shown that the resultant closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The advantages of the scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.
Maestro: an orchestration framework for large-scale WSN simulations.
Riliskis, Laurynas; Osipov, Evgeny
2014-03-18
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
Ha, Jennifer F; Morrison, Robert J; Green, Glenn E; Zopf, David A
2017-06-01
Autologous cartilage grafting during open airway reconstruction is a complex skill instrumental to the success of the operation. Most trainees lack adequate opportunities to develop proficiency in this skill. We hypothesized that 3-dimensional (3D) printing and computer-aided design can be used to create a high-fidelity simulator for developing skills carving costal cartilage grafts for airway reconstruction. The rapid manufacturing and low cost of the simulator allow deployment in locations lacking expert instructors or cadaveric dissection, such as medical missions and Third World countries. In this blinded, prospective observational study, resident trainees completed a physical simulator exercise using a 3D-printed costal cartilage grafting tool. Participant assessment was performed using a Likert scale questionnaire, and airway grafts were assessed by a blinded expert surgeon. Most participants found this to be a very relevant training tool and highly rated the level of realism of the simulation tool.
Monte Carlo capabilities of the SCALE code system
Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...
2014-09-12
SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.
Kunkel, Susanne; Schenck, Wolfram
2017-01-01
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
NASA Astrophysics Data System (ADS)
Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.; Samoylova, Liubov; Buzmakov, Alexey; Jurek, Zoltan; Ziaja, Beata; Santra, Robin; Loh, N. Duane; Tschentscher, Thomas; Mancuso, Adrian P.
2016-04-01
The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. We demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise yields interpretable predictions for structure determination that are crucial for experiment design yet currently unavailable by other means.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Sepehrnoori, K.
1994-09-01
The objective of this research is to develop cost-effective surfactant flooding technology by using surfactant simulation studies to evaluate and optimize alternative design strategies, taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. The goal of Task 2 is to understand and generalize the impact of both process and reservoir characteristics on the optimal design of surfactant flooding. We have studied the effect of process parameters such as salinity gradient, surfactant adsorption, surfactant concentration, surfactant slug size, pH, polymer concentration, and well constraints on surfactant floods. In this report, we show three-dimensional field-scale simulation results to illustrate the impact of one important design parameter, the salinity gradient. Although the use of a salinity gradient to improve the efficiency and robustness of surfactant flooding has been studied and applied for many years, this is the first time that we have evaluated it using stochastic simulations rather than simulations using the traditional layered reservoir description. The surfactant flooding simulations were performed using The University of Texas chemical flooding simulator, UTCHEM.
Chemical process simulation has long been used as a design tool in the development of chemical plants, and has long been considered a means to evaluate different design options. With the advent of large scale computer networks and interface models for program components, it is po...
ERIC Educational Resources Information Center
Tran, Huu-Khoa; Chiou, Juing -Shian; Peng, Shou-Tao
2016-01-01
In this paper, a Genetic Algorithm Optimization (GAO) education-software-based Fuzzy Logic Controller (GAO-FLC) for simulating the flight motion control of Unmanned Aerial Vehicles (UAVs) is designed and its feasibility demonstrated. The generated flight trajectories integrate the fuzzy controller gains with Scaling Factors (SF) optimized by the GAO algorithm. The…
Vu-Bac, N.; Bessa, M. A.; Rabczuk, Timon; ...
2015-09-10
In this paper, we present experimentally validated molecular dynamics predictions of the quasi-static yield and post-yield behavior for a highly cross-linked epoxy polymer under general stress states and for different temperatures. In addition, a hierarchical multiscale model is presented where the nano-scale simulations obtained from molecular dynamics were homogenized to a continuum thermoplastic constitutive model for the epoxy that can be used to describe the macroscopic behavior of the material. Three major conclusions were achieved: (1) the yield surfaces generated from the nano-scale model for different temperatures agree well with the paraboloid yield criterion, supporting previous macroscopic experimental observations; (2) rescaling of the entire yield surfaces to the quasi-static case is possible by considering Argon’s theoretical predictions for pure compression of the polymer at absolute zero temperature; (3) nano-scale simulations can be used for an experimentally-free calibration of macroscopic continuum models, opening new avenues for the design of materials and structures through multi-scale simulations that provide structure-property-performance relationships.
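For reference, one common form of the paraboloid yield criterion mentioned above, written in terms of the stress invariants I1 and J2, with a quick check that uniaxial tension and compression lie on the surface; the strength values are placeholders, not the paper's MD results.

```python
# Paraboloid yield criterion (one common invariant form):
# f = 6*J2 + 2*I1*(sig_c - sig_t) - 2*sig_c*sig_t, with f <= 0 elastic.
import numpy as np

def paraboloid(stress, sig_t, sig_c):
    s = np.asarray(stress)                  # 3x3 stress tensor
    I1 = np.trace(s)
    dev = s - I1 / 3.0 * np.eye(3)          # deviatoric part
    J2 = 0.5 * np.tensordot(dev, dev)       # second deviatoric invariant
    return 6.0 * J2 + 2.0 * I1 * (sig_c - sig_t) - 2.0 * sig_c * sig_t

sig_t, sig_c = 80.0, 110.0                  # MPa, illustrative strengths
print(paraboloid(np.diag([sig_t, 0.0, 0.0]), sig_t, sig_c))    # ~0
print(paraboloid(np.diag([-sig_c, 0.0, 0.0]), sig_t, sig_c))   # ~0
```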
New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations
NASA Technical Reports Server (NTRS)
Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.
2012-01-01
In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation-minus-analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large-scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project, which quantified the impact of uncertainty in satellite-constrained CO2 flux estimates on atmospheric mixing ratios, to assess the major factors governing uncertainty in global and regional trace gas distributions.
Assessment of zero-equation SGS models for simulating indoor environment
NASA Astrophysics Data System (ADS)
Taghinia, Javad; Rahman, Md Mizanur; Tse, Tim K. T.
2016-12-01
Understanding air flow in enclosed spaces plays a key role in designing ventilation systems and indoor environments. From a computational fluid dynamics standpoint, large eddy simulation (LES) offers a suitable means to analyze complex flows with recirculation and streamline-curvature effects, providing more robust and accurate details than Reynolds-averaged Navier-Stokes simulations. This work assesses the performance of two zero-equation sub-grid scale models: the Rahman-Agarwal-Siikonen-Taghinia (RAST) model with a single grid-filter, and the dynamic Smagorinsky model with grid-filter and test-filter scales. This in turn allows a cross-comparison of the effect of two different LES methods in simulating indoor air flows with forced and mixed (natural + forced) convection. Better performance against experiments is indicated with the RAST model in wall-bounded, non-equilibrium indoor air flows, owing to its sensitivity to both the shear and vorticity parameters.
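To illustrate what "zero-equation" means here: a Smagorinsky-type SGS closure builds the eddy viscosity algebraically from the resolved strain rate, with no additional transport equations. The sketch below evaluates this on a toy 2D velocity field; the field and constant are illustrative assumptions, not either model in the paper.

```python
# Zero-equation (Smagorinsky-type) SGS closure: nu_t = (Cs*dx)^2 * |S|,
# computed algebraically from the resolved strain rate -- no PDE solved.
import numpy as np

n, L = 64, 1.0
dx = L / n
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)   # toy resolved field
v = -np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)

dudx, dudy = np.gradient(u, dx)
dvdx, dvdy = np.gradient(v, dx)
S11, S22 = dudx, dvdy
S12 = 0.5 * (dudy + dvdx)
S_mag = np.sqrt(2 * (S11**2 + S22**2 + 2 * S12**2))  # |S| = sqrt(2 Sij Sij)

Cs = 0.17                                   # Smagorinsky constant
nu_t = (Cs * dx) ** 2 * S_mag               # algebraic eddy viscosity
print(f"mean eddy viscosity: {nu_t.mean():.3e}")
```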
Progress on the Development of the hPIC Particle-in-Cell Code
NASA Astrophysics Data System (ADS)
Dart, Cameron; Hayes, Alyssa; Khaziev, Rinat; Marcinko, Stephen; Curreli, Davide; Laboratory of Computational Plasma Physics Team
2017-10-01
Advancements were made in the development of the kinetic-kinetic electrostatic Particle-in-Cell code, hPIC, designed for large-scale simulation of the Plasma-Material Interface. hPIC achieved a weak scaling efficiency of 87% using the Algebraic Multigrid Solver BoomerAMG from the PETSc library on more than 64,000 cores of the Blue Waters supercomputer at the University of Illinois at Urbana-Champaign. The code successfully simulates two-stream instability and a volume of plasma over several square centimeters of surface extending out to the presheath in kinetic-kinetic mode. Results from a parametric study of the plasma sheath in strongly magnetized conditions will be presented, as well as a detailed analysis of the plasma sheath structure at grazing magnetic angles. The distribution function and its moments will be reported for plasma species in the simulation domain and at the material surface for plasma sheath simulations.
Simulation of German PKL refill/reflood experiment K9A using RELAP4/MOD7. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, M.T.; Davis, C.B.; Behling, S.R.
This paper describes a RELAP4/MOD7 simulation of West Germany's Kraftwerk Union (KWU) Primary Coolant Loop (PKL) refill/reflood experiment K9A. RELAP4/MOD7, a best-estimate computer program for the calculation of thermal and hydraulic phenomena in a nuclear reactor or related system, is the latest version in the RELAP4 code development series. This study was the first major simulation using RELAP4/MOD7 since its release by the Idaho National Engineering Laboratory (INEL). The PKL facility is a reduced-scale (1:134) representation of a typical West German four-loop 1300 MW pressurized water reactor (PWR). A prototypical scale of the total volume to power ratio was maintained. The test facility was designed specifically for an experiment simulating the refill/reflood phase of a Loss-of-Coolant Accident (LOCA).
The Postoperative Pain Assessment Skills pilot trial.
McGillion, Michael; Dubrowski, Adam; Stremler, Robyn; Watt-Watson, Judy; Campbell, Fiona; McCartney, Colin; Victor, Charles; Wiseman, Jeffrey; Snell, Linda; Costello, Judy; Robb, Anja; Nelson, Sioban; Stinson, Jennifer; Hunter, Judith; Dao, Thuan; Promislow, Sara; McNaughton, Nancy; White, Scott; Shobbrook, Cindy; Jeffs, Lianne; Mauch, Kianda; Leegaard, Marit; Beattie, W Scott; Schreiber, Martin; Silver, Ivan
2011-01-01
BACKGROUND: Pain-related misbeliefs among health care professionals (HCPs) are common and contribute to ineffective postoperative pain assessment. While standardized patients (SPs) have been effectively used to improve HCPs' assessment skills, not all centres have SP programs. The present equivalence randomized controlled pilot trial examined the efficacy of an alternative simulation method - deteriorating patient-based simulation (DPS) - versus SPs for improving HCPs' pain knowledge and assessment skills. Seventy-two HCPs were randomly assigned to a 3 h SP or DPS simulation intervention. Measures were recorded at baseline, immediately postintervention, and two months postintervention. The primary outcome was HCPs' pain assessment performance as measured by the postoperative Pain Assessment Skills Tool (PAST). Secondary outcomes included HCPs' knowledge of pain-related misbeliefs, and perceived satisfaction and quality of the simulation. These outcomes were measured by the Pain Beliefs Scale (PBS), the Satisfaction with Simulated Learning Scale (SSLS) and the Simulation Design Scale (SDS), respectively. Student's t tests were used to test for overall group differences in postintervention PAST, SSLS and SDS scores. One-way analysis of covariance tested for overall group differences in PBS scores. DPS and SP groups did not differ on post-test PAST, SSLS or SDS scores. Knowledge of pain-related misbeliefs was also similar between groups. These pilot data suggest that DPS is an effective simulation alternative for HCPs' education on postoperative pain assessment, with improvements in performance and knowledge comparable with SP-based simulation. An equivalence trial to examine the effectiveness of deteriorating patient-based simulation versus standardized patients is warranted.
NASA Astrophysics Data System (ADS)
El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel
This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.
Study and Development of an Air Conditioning System Operating on a Magnetic Heat Pump Cycle
NASA Technical Reports Server (NTRS)
Wang, Pao-Lien
1991-01-01
This report describes the design of a laboratory scale demonstration prototype of an air conditioning system operating on a magnetic heat pump cycle. Design parameters were selected through studies performed by a Kennedy Space Center (KSC) System Simulation Computer Model. The heat pump consists of a rotor turning through four magnetic fields that are created by permanent magnets. Gadolinium was selected as the working material for this demonstration prototype. The rotor was designed to be constructed of flat parallel disks of gadolinium with very little space in between. The rotor rotates in an aluminum housing. The laboratory scale demonstration prototype is designed to provide a theoretical Carnot Cycle efficiency of 62 percent and a Coefficient of Performance of 16.55.
Three-dimensional (3D) printed endovascular simulation models: a feasibility study.
Mafeld, Sebastian; Nesbitt, Craig; McCaslin, James; Bagnall, Alan; Davey, Philip; Bose, Pentop; Williams, Rob
2017-02-01
Three-dimensional (3D) printing is a manufacturing process in which an object is created by specialist printers designed to print in additive layers to create a 3D object. Whilst there are initial promising medical applications of 3D printing, a lack of evidence to support its use remains a barrier for larger scale adoption into clinical practice. Endovascular virtual reality (VR) simulation plays an important role in the safe training of future endovascular practitioners, but existing VR models have disadvantages including cost and accessibility which could be addressed with 3D printing. This study sought to evaluate the feasibility of 3D printing an anatomically accurate human aorta for the purposes of endovascular training. A 3D printed model was successfully designed and printed and used for endovascular simulation. The stages of development and practical applications are described. Feedback from 96 physicians who answered a series of questions using a 5 point Likert scale is presented. Initial data supports the value of 3D printed endovascular models although further educational validation is required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, T.Y.; Bentz, J.H.; Bergeron, K.D.
1994-04-01
The possibility of achieving in-vessel core retention by flooding the reactor cavity, the "flooded cavity" concept, is an accident management strategy currently under consideration for advanced light water reactors (ALWR), as well as for existing light water reactors (LWR). The CYBL (CYlindrical BoiLing) facility is specifically designed to perform large-scale confirmatory testing of the flooded cavity concept. CYBL has a tank-within-a-tank design; the inner 3.7 m diameter tank simulates the reactor vessel, and the outer tank simulates the reactor cavity. The energy deposition on the bottom head is simulated with an array of radiant heaters. The array can deliver a tailored heat flux distribution corresponding to that resulting from core melt convection. The present paper provides a detailed description of the capabilities of the facility, as well as results of recent experiments with heat flux in the range of interest for in-vessel retention in typical ALWRs. The paper concludes with a discussion of other experiments for flooded cavity applications.
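For orientation on how flooded-cavity heat fluxes of this kind are typically screened, a first estimate compares the delivered bottom-head flux against a pool-boiling critical heat flux (CHF). The Zuber flat-plate correlation below is the standard textbook form; the actual limit on a curved, downward-facing vessel head depends on orientation and local geometry, so this is only a rough bound and is not taken from the paper.

```latex
q''_{\mathrm{CHF}} \approx 0.131\, h_{fg}\, \rho_g^{1/2}
\left[ \sigma\, g\, (\rho_l - \rho_g) \right]^{1/4}
```

For saturated water at atmospheric pressure this evaluates to roughly 1.1 MW/m2, which sets the scale against which in-vessel retention margins are usually discussed.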
Challenges of NDE simulation tool validation, optimization, and utilization for composites
NASA Astrophysics Data System (ADS)
Leckey, Cara A. C.; Seebo, Jeffrey P.; Juarez, Peter
2016-02-01
Rapid, realistic nondestructive evaluation (NDE) simulation tools can aid in inspection optimization and prediction of inspectability for advanced aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of aerospace components; potentially shortening the time from material development to implementation by industry and government. Furthermore, ultrasound modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided wave based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation is still far from the goal of rapidly simulating damage detection techniques for large scale, complex geometry composite components/vehicles containing realistic damage types. Ongoing work at NASA Langley Research Center is focused on advanced ultrasonic simulation tool development. This paper discusses challenges of simulation tool validation, optimization, and utilization for composites. Ongoing simulation tool development work is described along with examples of simulation validation and optimization challenges that are more broadly applicable to all NDE simulation tools. The paper will also discuss examples of simulation tool utilization at NASA to develop new damage characterization methods for composites, and associated challenges in experimentally validating those methods.
NASA Astrophysics Data System (ADS)
Ruopp, A.; Ruprecht, A.; Riedelbauch, S.; Arnaud, G.; Hamad, I.
2014-03-01
The development of a hydro-kinetic prototype is presented, including the compound structure, guide vanes, runner blades and a draft tube section with a steeply sloping, short spoiler. The design process for the hydrodynamic layout was split into three major steps. First, the compound and the draft tube section were designed and the best operating point was identified, using porous media as a replacement for the guide vane and runner sections (step one). The best operating point, the volume flux and the pressure drop were identified and used for the design of the guide vane and runner sections. Both were designed and simulated independently (step two). In step three, all parts were merged in stationary simulation runs to detect peak power and operational bandwidth. In addition, the full-scale demonstrator was installed in August 2010 in the St. Lawrence River in Quebec, where the average inflow velocity was measured using an ADCP (Acoustic Doppler Current Profiler) and the generator power output was recorded over the variable rotational speed. Simulation data and measurements are in good agreement. The presented approach is thus a suitable way to design a hydro-kinetic turbine.
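The porous-media stand-in used in step one is commonly implemented as a Darcy-Forchheimer momentum sink calibrated to reproduce a target pressure drop at the design volume flux. The sketch below illustrates that calibration; the function names and all coefficient values are illustrative assumptions, not taken from the paper.

```python
# Darcy-Forchheimer pressure drop across a porous zone of thickness L:
#   dp = (mu/K * u + 0.5 * rho * C2 * u**2) * L
# Calibrate the inertial coefficient C2 so the zone mimics the target
# pressure drop of the replaced guide vane/runner section at design flow.

def forchheimer_dp(u, L, mu, rho, K, C2):
    """Pressure drop [Pa] at superficial velocity u [m/s]."""
    return (mu / K * u + 0.5 * rho * C2 * u**2) * L

def calibrate_C2(dp_target, u_design, L, mu, rho, K):
    """Solve dp_target = forchheimer_dp(u_design, ...) for C2."""
    darcy_part = mu / K * u_design * L
    return 2.0 * (dp_target - darcy_part) / (rho * u_design**2 * L)

# Illustrative numbers only (water, 1 m thick zone, 2 m/s design velocity):
C2 = calibrate_C2(dp_target=8000.0, u_design=2.0, L=1.0,
                  mu=1.0e-3, rho=998.0, K=1.0e-5)
print(C2, forchheimer_dp(2.0, 1.0, 1.0e-3, 998.0, 1.0e-5, C2))  # ~3.9, 8000 Pa
```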
Aeroacoustic prediction of turbulent free shear flows
NASA Astrophysics Data System (ADS)
Bodony, Daniel Joseph
2005-12-01
For many people living in the immediate vicinity of an active airport the noise of jet aircraft flying overhead can be a nuisance, if not worse. Airports, which are held accountable for the noise they produce, and upcoming international noise limits are pressuring the major airframe and jet engine manufacturers to bring quieter aircraft into service. However, component designers need a predictive tool that can estimate the sound generated by a new configuration. Current noise prediction techniques are almost entirely based on previously collected experimental data and are applicable only to evolutionary, not revolutionary, changes in the basic design. Physical models of final candidate designs must still be built and tested before a single design is selected. By focusing on the noise produced in the jet engine exhaust at take-off conditions, the prediction of sound generated by turbulent flows is addressed. The technique of large-eddy simulation is used to calculate directly the radiated sound produced by jets at different operating conditions. Predicted noise spectra agree with measurements for frequencies up to, and slightly beyond, the peak frequency. Higher frequencies are missed, however, due to the limited resolution of the simulations. Two methods of estimating the 'missing' noise are discussed. In the first a subgrid scale noise model, analogous to a subgrid scale closure model, is proposed. In the second method the governing equations are expressed in a wavelet basis from which simplified time-dependent equations for the subgrid scale fluctuations can be derived. These equations are inexpensively integrated to yield estimates of the subgrid scale fluctuations with proper space-time dynamics.
A Hybrid Coarse-graining Approach for Lipid Bilayers at Large Length and Time Scales
Ayton, Gary S.; Voth, Gregory A.
2009-01-01
A hybrid analytic-systematic (HAS) coarse-grained (CG) lipid model is developed and employed in a large-scale simulation of a liposome. The methodology is termed hybrid analytic-systematic because one component of the interaction between CG sites is variationally determined from the multiscale coarse-graining (MS-CG) methodology, while the remaining component utilizes an analytic potential. The systematic component models the in-plane center-of-mass interaction of the lipids as determined from an atomistic-level MD simulation of a bilayer. The analytic component is based on the well-known Gay-Berne ellipsoid-of-revolution liquid crystal model and is designed to model the highly anisotropic interactions at a highly coarse-grained level. The HAS CG approach is the first step in an “aggressive” CG methodology designed to model multi-component biological membranes at very large length and time scales. PMID:19281167
Large scale rigidity-based flexibility analysis of biomolecules
Streinu, Ileana
2016-01-01
KINematics And RIgidity (KINARI) is an on-going project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large scale experiments, in particular, for (a) very large molecules, including bioassemblies with a high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) working as seamlessly as possible with the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583
Siddique, Radwanul Hasan; Diewald, Silvia; Leuthold, Juerg; Hölscher, Hendrik
2013-06-17
Morpho butterflies are well-known for their iridescence originating from nanostructures in the scales of their wings. These optically active structures integrate three design principles leading to the wide-angle reflection: alternating lamellae layers, a "Christmas tree"-like shape, and offsets between neighboring ridges. We study their individual effects rigorously by 2D FEM simulations of the nanostructures of the Morpho sulkowskyi butterfly and show how the reflection spectrum can be controlled by the design of the nanostructures. The width of the spectrum is broad (≈ 90 nm) for alternating lamellae layers (or "branches") of the structure, while the "Christmas tree" pattern together with a height offset between neighboring ridges reduces the directionality of the reflectance. Furthermore, we fabricated the simulated structures by e-beam lithography. The resulting samples mimicked all important optical features of the original Morpho butterfly scales and featured the intense blue iridescence with a wide angular range of reflection.
Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao
2014-01-01
The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions is developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling to represent the variability, the other for improving the spatial prediction used to evaluate remote sensing products. The reasonableness of the optimized EHWSN is validated in terms of representativeness, variogram modeling and spatial prediction accuracy using 15 types of simulation fields generated with unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Fields are then predicted at multiple scales. As the scale increases, the estimated fields show higher similarity to the simulation fields at block sizes exceeding 240 m. These validations prove that the hybrid sampling method is effective for both objectives when the characteristics of the optimized variables are unknown. PMID:25317762
MagLIF scaling on Z and future machines
NASA Astrophysics Data System (ADS)
Slutz, Stephen; Stygar, William; Gomez, Matthew; Campbell, Edward; Peterson, Kyle; Sefkow, Adam; Sinars, Daniel; Vesey, Roger
2015-11-01
The MagLIF (Magnetized Liner Inertial Fusion) concept [S.A. Slutz et al Phys. Plasmas 17, 056303, 2010] has demonstrated [M.R. Gomez et al., PRL 113, 155003, 2014] fusion-relevant plasma conditions on the Z machine. We present 2D numerical simulations of the scaling of MagLIF on Z indicating that deuterium/tritium (DT) fusion yields greater than 100 kJ could be possible on Z when operated at a peak current of 25 MA. Much higher yields are predicted for MagLIF driven with larger peak currents. Two high performance pulsed-power machines (Z300 and Z800) have been designed based on Linear Transformer Driver (LTD) technology. The Z300 design would provide approximately 48 MA to a MagLIF load, while Z800 would provide about 66 MA. We used a parameterized Thevenin equivalent circuit to drive a series of 1D and 2D numerical simulations with currents between and beyond these two designs. Our simulations indicate that 5-10 MJ yields may be possible with Z300, while yields of about 1 GJ may be possible with Z800. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
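The "parameterized Thevenin equivalent circuit" mentioned above is, at its simplest, an open-circuit voltage waveform behind a series resistance and inductance driving the load; the sketch below integrates that circuit equation explicitly. All waveform and circuit values are invented placeholders for illustration, not the Z, Z300, or Z800 parameters.

```python
import math

# Minimal Thevenin-driven load model: V_oc(t) behind series R and L,
# integrated with explicit Euler.  L dI/dt + R I = V_oc(t).
R = 0.05                 # ohms, source impedance (placeholder)
L = 10e-9                # henries, total series inductance (placeholder)
V0, tau = 10e6, 100e-9   # peak open-circuit voltage, rise time (placeholders)

def v_oc(t):
    # half-sine pulse as a stand-in for the pulsed-power drive waveform
    return V0 * math.sin(math.pi * t / (2.0 * tau)) if t < 2.0 * tau else 0.0

I, dt, I_max = 0.0, 1e-10, 0.0
for step in range(3000):           # 300 ns of simulated time
    t = step * dt
    I += (v_oc(t) - R * I) / L * dt
    I_max = max(I_max, I)
print(f"peak load current: {I_max/1e6:.1f} MA")
```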
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbird, David; Sitaraman, Hariswaran; Stickel, Jonathan
If advanced biofuels are to measurably displace fossil fuels in the near term, they will have to operate at levels of scale, efficiency, and margin unprecedented in the current biotech industry. For aerobically-grown products in particular, scale-up is complex and the practical size, cost, and operability of extremely large reactors is not well understood. Put simply, the problem of how to attain fuel-class production scales comes down to cost-effective delivery of oxygen at high mass transfer rates and low capital and operating costs. To that end, very large reactor vessels (>500 m3) are proposed in order to achieve favorable economies of scale. Additionally, techno-economic evaluation indicates that bubble-column reactors are more cost-effective than stirred-tank reactors in many low-viscosity cultures. In order to advance the design of extremely large aerobic bioreactors, we have performed computational fluid dynamics (CFD) simulations of bubble-column reactors. A multiphase Euler-Euler model is used to explicitly account for the spatial distribution of air (i.e., gas bubbles) in the reactor. Expanding on the existing bioreactor CFD literature (typically focused on the hydrodynamics of bubbly flows), our simulations include interphase mass transfer of oxygen and a simple phenomenological reaction representing the uptake and consumption of dissolved oxygen by submerged cells. The simulations reproduce the expected flow profiles, with net upward flow in the center of the column and downward flow near the wall. At high simulated oxygen uptake rates (OUR), oxygen-depleted regions can be observed in the reactor. By increasing the gas flow to enhance mixing and eliminate depleted areas, a maximum oxygen transfer rate (OTR) is obtained as a function of superficial velocity. These insights regarding minimum superficial velocity and maximum reactor size are incorporated into NREL's larger techno-economic models to supplement standard reactor design equations.
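The OTR/OUR balance described above can be checked, to first order, without CFD: volumetric transfer OTR = kLa (C* - C) must meet or exceed the culture's uptake OUR. The sketch below finds the minimum superficial gas velocity under an assumed power-law kLa correlation; the coefficients, saturation value, and OUR are illustrative assumptions, not NREL's fitted values.

```python
# First-order oxygen transfer vs. uptake check for a bubble column.
# OTR = kLa * (C_star - C); require OTR >= OUR everywhere.
a, b = 0.32, 0.7      # assumed kLa = a * u_g**b correlation [1/s], u_g in m/s
C_star = 8.0e-3       # kg O2 / m3, saturation concentration (assumed)
C_min = 2.0e-3        # kg O2 / m3, minimum allowed dissolved O2 (assumed)

def otr(u_g):
    """Oxygen transfer rate [kg O2 / (m3 s)] at superficial velocity u_g."""
    return a * u_g**b * (C_star - C_min)

def min_superficial_velocity(our, lo=1e-4, hi=1.0):
    """Bisect for the smallest u_g satisfying otr(u_g) >= our."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if otr(mid) >= our else (mid, hi)
    return hi

# OUR of 0.8e-3 kg O2/(m3 s) is ~2.9 kg O2/(m3 h), an aggressive aerobic culture
print(min_superficial_velocity(our=0.8e-3))  # -> ~0.29 m/s
```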
Evaluation of dispersion strengthened nickel-base alloy heat shields for space shuttle application
NASA Technical Reports Server (NTRS)
Johnson, R., Jr.; Killpatrick, D. H.
1975-01-01
The design, fabrication, and testing of a full-size, full-scale TD Ni-20Cr heat shield test array in simulated mission environments are described, along with the design and fabrication of two additional full-size, full-scale test arrays to be tested in flowing gas test facilities at the NASA Langley Research Center. Cost and reusability evaluations of TD Ni-20Cr heat shield systems are presented, and weight estimates of a TD Ni-20Cr heat shield system for use on a shuttle orbiter vehicle are made. The safe-life expectancy of a TD Ni-20Cr heat shield system is assessed. Non-destructive test techniques are evaluated to determine their effectiveness in quality assurance checks of TD Ni-20Cr components such as heat shields, heat shield supports, close-out panels, formed cover strips, and edge seals. Results of tests on a braze-reinforced full-scale, subsize panel are included. Results show only minor structural degradation in the main TD Ni-20Cr heat shields of the test array during simulated mission test cycles.
Scaled Jump in Gravity-Reduced Virtual Environments.
Kim, MyoungGon; Cho, Sunglk; Tran, Tanh Quang; Kim, Seong-Pil; Kwon, Ohung; Han, JungHyun
2017-04-01
The reduced gravity experienced in lunar or Martian surfaces can be simulated on the earth using a cable-driven system, where the cable lifts a person to reduce his or her weight. This paper presents a novel cable-driven system designed for the purpose. It is integrated with a head-mounted display and a motion capture system. Focusing on jump motion within the system, this paper proposes to scale the jump and reports the experiments made for quantifying the extent to which a jump can be scaled without the discrepancy between physical and virtual jumps being noticed by the user. With the tolerable range of scaling computed from these experiments, an application named retargeted jump is developed, where a user can jump up onto virtual objects while physically jumping in the real-world flat floor. The core techniques presented in this paper can be extended to develop extreme-sport simulators such as parasailing and skydiving.
NASA Astrophysics Data System (ADS)
Yang, Jian; Sun, Shuaishuai; Tian, Tongfei; Li, Weihua; Du, Haiping; Alici, Gursel; Nakano, Masami
2016-03-01
Protecting civil engineering structures from uncontrollable events such as earthquakes while maintaining their structural integrity and serviceability is very important; this paper describes the performance of a stiffness softening magnetorheological elastomer (MRE) isolator in a scaled three storey building. In order to construct a closed-loop system, a scaled three storey building was designed and built according to the scaling laws, and then four MRE isolator prototypes were fabricated and utilised to isolate the building from the motion induced by a scaled El Centro earthquake. Fuzzy logic was used to output the current signals to the isolators, based on the real-time responses of the building floors, and then a simulation was used to evaluate the feasibility of this closed loop control system before carrying out an experimental test. The simulation and experimental results showed that the stiffness softening MRE isolator controlled by fuzzy logic could suppress structural vibration well.
A normal stress subgrid-scale eddy viscosity model in large eddy simulation
NASA Technical Reports Server (NTRS)
Horiuti, K.; Mansour, N. N.; Kim, John J.
1993-01-01
The Smagorinsky subgrid-scale eddy viscosity model (SGS-EVM) is commonly used in large eddy simulations (LES) to represent the effects of the unresolved scales on the resolved scales. This model is known to be limited because its constant must be optimized in different flows, and it must be modified with a damping function to account for near-wall effects. The recent dynamic model is designed to overcome these limitations but is computationally intensive compared to the traditional SGS-EVM. In a recent study using direct numerical simulation data, Horiuti has shown that these drawbacks are due mainly to the use of an improper velocity scale in the SGS-EVM. He also proposed the use of the subgrid-scale normal stress as a new velocity scale, inspired by a high-order anisotropic representation model. The testing by Horiuti, however, was conducted using DNS data from a low Reynolds number channel flow simulation. Further testing at higher Reynolds numbers, and with flows other than wall-bounded shear flows, was considered a necessary step to establish the validity of the new model. This is the primary motivation of the present study. The objective is to test the new model using DNS databases of high Reynolds number channel and fully developed turbulent mixing layer flows. The use of both channel (wall-bounded) and mixing layer flows is important for the development of accurate LES models because these two flows encompass many characteristic features of complex turbulent flows.
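For reference, the baseline SGS-EVM discussed above is the standard Smagorinsky closure, in which the deviatoric subgrid stress is modeled with an eddy viscosity built from the filter width and the resolved strain rate; the proposal attributed to Horiuti replaces the velocity scale implied by the product of filter width and resolved strain rate with one based on the subgrid-scale normal stress (the exact replacement form is not reproduced here).

```latex
\tau_{ij} - \tfrac{1}{3}\delta_{ij}\tau_{kk} = -2\,\nu_t\,\bar{S}_{ij},
\qquad
\nu_t = (C_s\,\bar{\Delta})^2\,|\bar{S}|,
\qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}
```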
Simulations of space charge neutralization in a magnetized electron cooler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerity, James; McIntyre, Peter M.; Bruhwiler, David Leslie
Magnetized electron cooling at relativistic energies and Ampere scale current is essential to achieve the proposed ion luminosities in a future electron-ion collider (EIC). Neutralization of the space charge in such a cooler can significantly increase the magnetized dynamic friction and, hence, the cooling rate. The Warp framework is being used to simulate magnetized electron beam dynamics during and after the build-up of neutralizing ions, via ionization of residual gas in the cooler. The design follows previous experiments at Fermilab as a verification case. We also discuss the relevance to EIC designs.
Statistical Inference for Big Data Problems in Molecular Biophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Arvind; Savol, Andrej; Burger, Virginia
2012-01-01
We highlight the role of statistical inference techniques in providing biological insights from analyzing long time-scale molecular simulation data. Technological and algorithmic improvements in computation have brought molecular simulations to the forefront of techniques applied to investigating the basis of living systems. While these longer simulations, increasingly complex and presently reaching petabyte scales, promise a detailed view into microscopic behavior, teasing out the important information has now become a true challenge on its own. Mining this data for important patterns is critical to automating therapeutic intervention discovery, improving protein design, and fundamentally understanding the mechanistic basis of cellular homeostasis.
NASA Technical Reports Server (NTRS)
Frederick, D. K.; Lashmet, P. K.; Moyer, W. R.; Sandor, G. N.; Shen, C. N.; Smith, E. J.; Yerazunis, S. W.
1973-01-01
The following tasks related to the design, construction, and evaluation of a mobile planetary vehicle for unmanned exploration of Mars are discussed: (1) design and construction of a 0.5 scale dynamic vehicle; (2) mathematical modeling of vehicle dynamics; (3) experimental 0.4 scale vehicle dynamics measurements and interpretation; (4) vehicle electro-mechanical control systems; (5) remote control systems; (6) collapsibility and deployment concepts and hardware; (7) design, construction and evaluation of a wheel with increased lateral stiffness, (8) system design optimization; (9) design of an on-board computer; (10) design and construction of a laser range finder; (11) measurement of reflectivity of terrain surfaces; (12) obstacle perception by edge detection; (13) terrain modeling based on gradients; (14) laser scan systems; (15) path selection system simulation and evaluation; (16) gas chromatograph system concepts; (17) experimental chromatograph separation measurements and chromatograph model improvement and evaluation.
NASA Technical Reports Server (NTRS)
Gelder, Thomas F.; Moore, Royce D.; Shyne, Rickey J.; Boldman, Donald R.
1987-01-01
Two turning vane designs were experimentally evaluated for the fan-drive corner (corner 2) coupled to an upstream diffuser and the high-speed corner (corner 1) of the 0.1 scale model of NASA Lewis Research Center's proposed Altitude Wind Tunnel. For corner 2 both a controlled-diffusion vane design (vane A4) and a circular-arc vane design (vane B) were studied. The corner 2 total pressure loss coefficient was about 0.12 with either vane design. This was about 25 percent less loss than when corner 2 was tested alone. Although the vane A4 design has the advantage of 20 percent fewer vanes than the vane B design, its vane shape is more complex. The effects of simulated inlet flow distortion on the overall losses for corner 1 or 2 were small.
Damage Assessment of a Full-Scale Six-Story wood-frame Building Following Triaxial shake Table Tests
John W. van de Lindt; Rakesh Gupta; Shiling Pei; Kazuki Tachibana; Yasuhiro Araki; Douglas Rammer; Hiroshi Isoda
2012-01-01
In the summer of 2009, a full-scale midrise wood-frame building was tested under a series of simulated earthquakes on the world's largest shake table in Miki City, Japan. The objective of this series of tests was to validate a performance-based seismic design approach by qualitatively and quantitatively examining the building's seismic performance in terms of...
Hypersonic Combustor Model Inlet CFD Simulations and Experimental Comparisons
NASA Technical Reports Server (NTRS)
Venkatapathy, E.; TokarcikPolsky, S.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)
1995-01-01
Numerous two- and three-dimensional computational simulations were performed for the inlet associated with the combustor model for the hypersonic propulsion experiment in the NASA Ames 16-Inch Shock Tunnel. The inlet was designed to produce a combustor-inlet flow that is nearly two-dimensional and of sufficient mass flow rate for large scale combustor testing. The three-dimensional simulations demonstrated that the inlet design met all the design objectives and that the inlet produced a very nearly two-dimensional combustor inflow profile. Numerous two-dimensional simulations were performed with various levels of approximation in the choice of chemical and physical models, as well as numerical approximations. Parametric studies were conducted to better understand and characterize the inlet flow. Results from the two- and three-dimensional simulations were used to predict the mass flux entering the combustor, and a mass flux correlation as a function of facility stagnation pressure was developed. Surface heat flux and pressure measurements were compared with the computed results and good agreement was found. The computational simulations helped determine the inlet flow characteristics in the high enthalpy environment, the important parameters that affect the combustor-inlet flow, and the sensitivity of the inlet flow to various modeling assumptions.
Modeling, simulation, and concept design for hybrid-electric medium-size military trucks
NASA Astrophysics Data System (ADS)
Rizzoni, Giorgio; Josephson, John R.; Soliman, Ahmed; Hubert, Christopher; Cantemir, Codrin-Gruie; Dembski, Nicholas; Pisu, Pierluigi; Mikesell, David; Serrao, Lorenzo; Russell, James; Carroll, Mark
2005-05-01
A large scale design space exploration can provide valuable insight into vehicle design tradeoffs being considered for the U.S. Army's FMTV (Family of Medium Tactical Vehicles). Through a grant from TACOM (Tank-automotive and Armaments Command), researchers have generated detailed road, surface, and grade conditions representative of the performance criteria of this medium-sized truck and constructed a virtual powertrain simulator for both conventional and hybrid variants. The simulator incorporates the latest technology among vehicle design options, including scalable ultracapacitor and NiMH battery packs as well as a variety of generator and traction motor configurations. An energy management control strategy has also been developed to provide efficiency and performance. A design space exploration for the family of vehicles involves running a large number of simulations with systematically varied vehicle design parameters, where each variant is paced through several different mission profiles and multiple attributes of performance are measured. The resulting designs are filtered to remove dominated designs, exposing the multi-criteria surface of optimality (Pareto optimal designs) and revealing the design tradeoffs as they impact vehicle performance and economy; a generic sketch of this filtering step follows below. The results are not yet definitive because ride and drivability measures were not included, and work is not finished on fine-tuning the modeled dynamics of some powertrain components. However, the work so far completed demonstrates the effectiveness of the approach to design space exploration, and the results to date suggest the powertrain configuration best suited to the FMTV mission.
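Filtering dominated designs has a compact generic implementation. The sketch below assumes every performance attribute has been oriented so that larger is better; it is a generic non-dominated filter, not the study's actual tooling.

```python
def pareto_filter(designs):
    """Return the non-dominated subset of a list of score tuples.

    A design is dominated if another design is >= on every attribute and
    strictly > on at least one (all attributes oriented 'larger is better').
    """
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

# Toy example: (fuel economy, acceleration score, gradeability)
candidates = [(7.1, 0.8, 0.9), (6.5, 0.9, 0.9), (7.1, 0.7, 0.8), (5.0, 0.5, 0.5)]
print(pareto_filter(candidates))
# -> [(7.1, 0.8, 0.9), (6.5, 0.9, 0.9)]  (the other two are dominated)
```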
NASA Technical Reports Server (NTRS)
Adoue, J. A.
1984-01-01
In support of preflight design loads definition, preliminary water impact scale model tests of the space shuttle rocket boosters are being conducted. The model to be used, as well as the instrumentation, test facilities, and test procedures, are described for water impact tests conducted at conditions simulating full-scale initial impact at vertical velocities from 65 to 85 ft/sec, zero horizontal velocity, and impact angles of 0, 5, and 10 degrees.
Real-time gray-scale photolithography for fabrication of continuous microstructure
NASA Astrophysics Data System (ADS)
Peng, Qinjun; Guo, Yongkang; Liu, Shijie; Cui, Zheng
2002-10-01
A novel real-time gray-scale photolithography technique for the fabrication of continuous microstructures that uses a LCD panel as a real-time gray-scale mask is presented. The principle of design of the technique is explained, and computer simulation results based on partially coherent imaging theory are given for the patterning of a microlens array and a zigzag grating. An experiment is set up, and a microlens array and a zigzag grating on panchromatic silver halide sensitized gelatin with trypsinase etching are obtained.
Development of a Scale-up Tool for Pervaporation Processes
Thiess, Holger; Strube, Jochen
2018-01-01
In this study, an engineering tool for the design and optimization of pervaporation processes is developed based on physico-chemical modelling coupled with laboratory/mini-plant experiments. The model incorporates the solution-diffusion-mechanism, polarization effects (concentration and temperature), axial dispersion, pressure drop and the temperature drop in the feed channel due to vaporization of the permeating components. The permeance, being the key model parameter, was determined via dehydration experiments on a mini-plant scale for the binary mixtures ethanol/water and ethyl acetate/water. A second set of experimental data was utilized for the validation of the model for two chemical systems. The industrially relevant ternary mixture, ethanol/ethyl acetate/water, was investigated close to its azeotropic point and compared to a simulation conducted with the determined binary permeance data. Experimental and simulation data proved to agree very well for the investigated process conditions. In order to test the scalability of the developed engineering tool, large-scale data from an industrial pervaporation plant used for the dehydration of ethanol was compared to a process simulation conducted with the validated physico-chemical model. Since the membranes employed in both mini-plant and industrial scale were of the same type, the permeance data could be transferred. The comparison of the measured and simulated data proved the scalability of the derived model. PMID:29342956
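For reference, the solution-diffusion flux used in such models is conventionally written with a permeance and a partial-pressure driving force, as below; the notation is the common pervaporation convention, not copied from the paper. Here Q_i is the permeance fitted from the dehydration experiments, x_i and γ_i the feed-side mole fraction and activity coefficient, p_i^sat(T) the saturation pressure, and y_i p_p the permeate-side partial pressure.

```latex
J_i = Q_i \left( x_i\, \gamma_i\, p_i^{\mathrm{sat}}(T) - y_i\, p_p \right)
```

Concentration and temperature polarization then enter by evaluating x_i and T at the membrane surface rather than in the bulk feed, which is how the flux law couples to the feed-channel balances.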
Helium segregation on surfaces of plasma-exposed tungsten
NASA Astrophysics Data System (ADS)
Maroudas, Dimitrios; Blondel, Sophie; Hu, Lin; Hammond, Karl D.; Wirth, Brian D.
2016-02-01
We report a hierarchical multi-scale modeling study of implanted helium segregation on surfaces of tungsten, considered as a plasma facing component in nuclear fusion reactors. We employ a hierarchy of atomic-scale simulations based on a reliable interatomic interaction potential, including molecular-statics simulations to understand the origin of helium surface segregation, targeted molecular-dynamics (MD) simulations of near-surface cluster reactions, and large-scale MD simulations of implanted helium evolution in plasma-exposed tungsten. We find that small, mobile He n (1 ⩽ n ⩽ 7) clusters in the near-surface region are attracted to the surface due to an elastic interaction force that provides the thermodynamic driving force for surface segregation. This elastic interaction force induces drift fluxes of these mobile He n clusters, which increase substantially as the migrating clusters approach the surface, facilitating helium segregation on the surface. Moreover, the clusters’ drift toward the surface enables cluster reactions, most importantly trap mutation, in the near-surface region at rates much higher than in the bulk material. These near-surface cluster dynamics have significant effects on the surface morphology, near-surface defect structures, and the amount of helium retained in the material upon plasma exposure. We integrate the findings of such atomic-scale simulations into a properly parameterized and validated spatially dependent, continuum-scale reaction-diffusion cluster dynamics model, capable of predicting implanted helium evolution, surface segregation, and its near-surface effects in tungsten. This cluster-dynamics model sets the stage for development of fully atomistically informed coarse-grained models for computationally efficient simulation predictions of helium surface segregation, as well as helium retention and surface morphological evolution, toward optimal design of plasma facing components.
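The segregation mechanism summarized above, a thermodynamic driving force superposed on cluster diffusion, takes the standard Smoluchowski drift-diffusion form in continuum cluster-dynamics models; a generic statement is given below, where c_n is the concentration of He_n clusters, D_n their diffusivity, and F_n the elastic interaction force derived from the cluster's interaction energy E_n with the surface. This is the generic form of such models, not the paper's exact parameterization.

```latex
\mathbf{J}_n = -D_n\, \nabla c_n + \frac{D_n}{k_B T}\, c_n\, \mathbf{F}_n,
\qquad \mathbf{F}_n = -\nabla E_n
```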
3D-printed pediatric endoscopic ear surgery simulator for surgical training.
Barber, Samuel R; Kozin, Elliott D; Dedmon, Matthew; Lin, Brian M; Lee, Kyuwon; Sinha, Sumi; Black, Nicole; Remenschneider, Aaron K; Lee, Daniel J
2016-11-01
Surgical simulators are designed to improve operative skills and patient safety. Transcanal Endoscopic Ear Surgery (TEES) is a relatively new surgical approach with a slow learning curve due to one-handed dissection. A reusable and customizable 3-dimensional (3D)-printed endoscopic ear surgery simulator may facilitate the development of surgical skills with high fidelity and low cost. Herein, we aim to design, fabricate, and test a low-cost and reusable 3D-printed TEES simulator. The TEES simulator was designed in computer-aided design (CAD) software using anatomic measurements taken from anthropometric studies. Cross sections from external auditory canal samples were traced as vectors and serially combined into a mesh construct. A modified tympanic cavity with a modular testing platform for simulator tasks was incorporated. Components were fabricated using calcium sulfate hemihydrate powder and multiple colored infiltrants via a commercial inkjet 3D-printing service. All components of a left-sided ear were printed to scale. Six right-handed trainees completed three trials each. Mean trial time (n = 3) ranged from 23.03 to 62.77 s using the dominant hand for all dissection. Statistically significant differences between first and last completion time with the dominant hand (p < 0.05) and average completion time for junior and senior residents (p < 0.05) suggest construct validity. A 3D-printed simulator is feasible for TEES simulation. Otolaryngology training programs with access to a 3D printer may readily fabricate a TEES simulator, resulting in inexpensive yet high-fidelity surgical simulation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Comparison of plastic, high density carbon, and beryllium as indirect drive NIF ablators
NASA Astrophysics Data System (ADS)
Kritcher, A. L.; Clark, D.; Haan, S.; Yi, S. A.; Zylstra, A. B.; Callahan, D. A.; Hinkel, D. E.; Berzak Hopkins, L. F.; Hurricane, O. A.; Landen, O. L.; MacLaren, S. A.; Meezan, N. B.; Patel, P. K.; Ralph, J.; Thomas, C. A.; Town, R.; Edwards, M. J.
2018-05-01
Detailed radiation hydrodynamic simulations calibrated to experimental data have been used to compare the relative strengths and weaknesses of three candidate indirect drive ablator materials now tested at the NIF: plastic, high density carbon or diamond, and beryllium. We apply a common simulation methodology to several currently fielded ablator platforms to benchmark the model and extrapolate designs to the full NIF envelope to compare on a more equal footing. This paper focuses on modeling of the hohlraum energetics which accurately reproduced measured changes in symmetry when changes to the hohlraum environment were made within a given platform. Calculations suggest that all three ablator materials can achieve a symmetric implosion at a capsule outer radius of ˜1100 μm, a laser energy of 1.8 MJ, and a DT ice mass of 185 μg. However, there is more uncertainty in the symmetry predictions for the plastic and beryllium designs. Scaled diamond designs had the most calculated margin for achieving symmetry and the highest fuel absorbed energy at the same scale compared to plastic or beryllium. A comparison of the relative hydrodynamic stability was made using ultra-high resolution capsule simulations and the two dimensional radiation fluxes described in this work [Clark et al., Phys. Plasmas 25, 032703 (2018)]. These simulations, which include low and high mode perturbations, suggest that diamond is currently the most promising for achieving higher yields in the near future followed by plastic, and more data are required to understand beryllium.
Mobility analysis, simulation, and scale model testing for the design of wheeled planetary rovers
NASA Technical Reports Server (NTRS)
Lindemann, Randel A.; Eisen, Howard J.
1993-01-01
The use of computer based techniques to model and simulate wheeled rovers on rough natural terrains is considered. Physical models of a prototype vehicle can be used to test the correlation of the simulations in scaled testing. The computer approaches include a quasi-static planar or two dimensional analysis and design tool based on the traction necessary for the vehicle to have imminent mobility. The computer program modeled a six by six wheel drive vehicle of original kinematic configuration, called the Rocker Bogie. The Rocker Bogie was optimized using the quasi-static software with respect to its articulation parameters prior to fabrication of a prototype. In another approach used, the dynamics of the Rocker Bogie vehicle in 3-D space was modeled on an engineering workstation using commercial software. The model included the complex and nonlinear interaction of the tire and terrain. The results of the investigation yielded numerical and graphical results of the rover traversing rough terrain on the earth, moon, and Mars. In addition, animations of the rover excursions were also generated. A prototype vehicle was then used in a series of testbed and field experiments. Correspondence was then established between the computer models and the physical model. The results indicated the utility of the quasi-static tool for configurational design, as well as the predictive ability of the 3-D simulation to model the dynamic behavior of the vehicle over short traverses.
The Parallel System for Integrating Impact Models and Sectors (pSIMS)
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian
2014-01-01
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
A detailed model for simulation of catchment scale subsurface hydrologic processes
NASA Technical Reports Server (NTRS)
Paniconi, Claudio; Wood, Eric F.
1993-01-01
A catchment scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation data bases and of information extracted from these data bases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment.
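For reference, the governing equation named above is commonly written in the mixed form below, with θ the volumetric moisture content, ψ the pressure head, K(ψ) the unsaturated hydraulic conductivity, z the vertical coordinate, and S a source/sink term; this is the textbook statement of the 3D transient Richards equation rather than the paper's discretized form.

```latex
\frac{\partial \theta(\psi)}{\partial t}
= \nabla \cdot \left[ K(\psi)\, \nabla(\psi + z) \right] + S
```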
Ranganathan, Panneerselvam; Savithri, Sivaraman
2018-06-01
Computational Fluid Dynamics (CFD) technique is used in this work to simulate the hydrothermal liquefaction of Nannochloropsis sp. microalgae in a lab-scale continuous plug-flow reactor to understand the fluid dynamics, heat transfer, and reaction kinetics in a HTL reactor under hydrothermal condition. The temperature profile in the reactor and the yield of HTL products from the present simulation are obtained and they are validated with the experimental data available in the literature. Furthermore, the parametric study is carried out to study the effect of slurry flow rate, reactor temperature, and external heat transfer coefficient on the yield of products. Though the model predictions are satisfactory in comparison with the experimental results, it still needs to be improved for better prediction of the product yields. This improved model will be considered as a baseline for design and scale-up of large-scale HTL reactor. Copyright © 2018 Elsevier Ltd. All rights reserved.
Preliminary design, analysis, and costing of a dynamic scale model of the NASA space station
NASA Technical Reports Server (NTRS)
Gronet, M. J.; Pinson, E. D.; Voqui, H. L.; Crawley, E. F.; Everman, M. R.
1987-01-01
The difficulty of testing the next generation of large flexible space structures on the ground places an emphasis on other means for validating predicted on-orbit dynamic behavior. Scale model technology represents one way of verifying analytical predictions with ground test data. This study investigates the preliminary design, scaling and cost trades for a Space Station dynamic scale model. The scaling of nonlinear joint behavior is studied from theoretical and practical points of view. Suspension system interaction trades are conducted for the ISS Dual Keel Configuration and Build-Up Stages suspended in the proposed NASA/LaRC Large Spacecraft Laboratory. Key issues addressed are scaling laws, replication vs. simulation of components, manufacturing, suspension interactions, joint behavior, damping, articulation capability, and cost. These issues are the subject of parametric trades versus the scale model factor. The results of these detailed analyses are used to recommend scale factors for four different scale model options, each with varying degrees of replication. Potential problems in constructing and testing the scale model are identified, and recommendations for further study are outlined.
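As a concrete instance of the scaling laws traded in such studies: for a geometric replica built at length scale factor λ from the same materials, elastic natural frequencies scale inversely with λ, so a subscale model is dynamically "faster" than the flight article. This is the standard replica-scaling result, stated here for orientation rather than taken from the report.

```latex
\ell_m = \lambda\, \ell_p,\quad E_m = E_p,\quad \rho_m = \rho_p
\;\Longrightarrow\;
f_m = \frac{f_p}{\lambda}, \qquad m_m = \lambda^3\, m_p
```

For example, a 1/5-scale replica (λ = 1/5) would exhibit modal frequencies five times those of the full-scale structure, which directly affects suspension-system interaction and instrumentation choices.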
Zwier, Matthew C.; Adelman, Joshua L.; Kaus, Joseph W.; Pratt, Adam J.; Wong, Kim F.; Rego, Nicholas B.; Suárez, Ernesto; Lettieri, Steven; Wang, David W.; Grabe, Michael; Zuckerman, Daniel M.; Chong, Lillian T.
2015-01-01
The weighted ensemble (WE) path sampling approach orchestrates an ensemble of parallel calculations with intermittent communication to enhance the sampling of rare events, such as molecular associations or conformational changes in proteins or peptides. Trajectories are replicated and pruned in a way that focuses computational effort on under-explored regions of configuration space while maintaining rigorous kinetics. To enable the simulation of rare events at any scale (e.g. atomistic, cellular), we have developed an open-source, interoperable, and highly scalable software package for the execution and analysis of WE simulations: WESTPA (The Weighted Ensemble Simulation Toolkit with Parallelization and Analysis). WESTPA scales to thousands of CPU cores and includes a suite of analysis tools that have been implemented in a massively parallel fashion. The software has been designed to interface conveniently with any dynamics engine and has already been used with a variety of molecular dynamics (e.g. GROMACS, NAMD, OpenMM, AMBER) and cell-modeling packages (e.g. BioNetGen, MCell). WESTPA has been in production use for over a year, and its utility has been demonstrated for a broad set of problems, ranging from atomically detailed host-guest associations to non-spatial chemical kinetics of cellular signaling networks. The following describes the design and features of WESTPA, including the facilities it provides for running WE simulations, storing and analyzing WE simulation data, as well as examples of input and output. PMID:26392815
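The replicate-and-prune bookkeeping at the heart of WE is compact enough to sketch. The toy resampler below keeps a target number of walkers per bin by splitting high-weight walkers and merging low-weight ones while conserving total probability; it is a schematic of the method, not WESTPA's actual API.

```python
import random

def resample_bin(walkers, target):
    """walkers: list of (weight, state). Return ~target walkers, same total weight.

    Splitting copies a walker and halves its weight; merging combines the two
    lightest walkers, keeping one state with probability proportional to
    weight. Total weight is conserved exactly.
    """
    walkers = sorted(walkers)                     # lightest first
    while len(walkers) > target:                  # merge the two lightest
        (w1, s1), (w2, s2) = walkers[0], walkers[1]
        keep = s1 if random.random() < w1 / (w1 + w2) else s2
        walkers = sorted([(w1 + w2, keep)] + walkers[2:])
    while len(walkers) < target:                  # split the heaviest
        w, s = walkers.pop()                      # heaviest is last
        walkers += [(w / 2, s), (w / 2, s)]
        walkers.sort()
    return walkers

bin_walkers = [(0.5, "A"), (0.3, "B"), (0.1, "C"), (0.1, "D")]
print(resample_bin(bin_walkers, target=3))  # total weight stays 1.0
```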
Measuring multielectron beam imaging fidelity with a signal-to-noise ratio analysis
NASA Astrophysics Data System (ADS)
Mukhtar, Maseeh; Bunday, Benjamin D.; Quoi, Kathy; Malloy, Matt; Thiel, Brad
2016-07-01
Java Monte Carlo Simulator for Secondary Electrons (JMONSEL) simulations are used to generate expected imaging responses of chosen test cases of patterns and defects with the ability to vary parameters for beam energy, spot size, pixel size, and/or defect material and form factor. The patterns are representative of the design rules for an aggressively scaled FinFET-type design. With these simulated images and resulting shot noise, a signal-to-noise framework is developed, which relates to defect detection probabilities. Additionally, with this infrastructure, the effect of detection chain noise and frequency-dependent system response can be made, allowing for targeting of best recipe parameters for multielectron beam inspection validation experiments. Ultimately, these results should lead to insights into how such parameters will impact tool design, including necessary doses for defect detection and estimations of scanning speeds for achieving high throughput for high-volume manufacturing.
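The link between SNR and defect detection probability that such a framework exploits can be illustrated with a simple Gaussian-noise model: given a mean defect signal separated from the background by ΔS with per-pixel noise σ, a threshold midway between the two levels yields the detection and false-alarm probabilities below. This is a generic single-pixel illustration under assumed statistics, not JMONSEL's actual analysis chain.

```python
import math

def detection_stats(delta_signal, sigma):
    """Gaussian model with the threshold midway between the two levels.

    Returns (P_detect, P_false_alarm) for a single pixel; SNR = delta/sigma.
    """
    snr = delta_signal / sigma
    # Both tails sit snr/2 standard deviations from their respective means.
    p = 0.5 * math.erfc(snr / (2.0 * math.sqrt(2.0)))
    return 1.0 - p, p

for snr in (1.0, 3.0, 5.0):
    pd, pfa = detection_stats(snr, 1.0)
    print(f"SNR {snr}: P_detect {pd:.4f}, P_false_alarm {pfa:.2e}")
```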
Comparison of batch sorption tests, pilot studies, and modeling for estimating GAC bed life.
Scharf, Roger G; Johnston, Robert W; Semmens, Michael J; Hozalski, Raymond M
2010-02-01
Saint Paul Regional Water Services (SPRWS) in Saint Paul, MN experiences annual taste and odor episodes during the warm summer months. These episodes are attributed primarily to geosmin that is produced by cyanobacteria growing in the chain of lakes used to convey and store the source water pumped from the Mississippi River. Batch experiments, pilot-scale experiments, and model simulations were performed to determine the geosmin removal performance and bed life of a granular activated carbon (GAC) filter-sorber. Using batch adsorption isotherm parameters, the estimated bed life for the GAC filter-sorber ranged from 920 to 1241 days when challenged with a constant concentration of 100 ng/L of geosmin. The estimated bed life obtained using the AdDesignS model and the actual pilot-plant loading history was 594 days. Based on the pilot-scale GAC column data, the actual bed life (>714 days) was much longer than the simulated values because bed life was extended by biological degradation of geosmin. The continuous feeding of high concentrations of geosmin (100-400 ng/L) in the pilot-scale experiments enriched for a robust geosmin-degrading culture that was sustained when the geosmin feed was turned off for 40 days. It is unclear, however, whether a geosmin-degrading culture can be established in a full-scale filter that experiences taste and odor episodes for only 1 or 2 months per year. The results of this research indicate that care must be exercised in the design and interpretation of pilot-scale experiments and model simulations for predicting taste and odor removal in full-scale GAC filter-sorbers. Adsorption and the potential for biological degradation must be considered to estimate GAC bed life for the conditions of intermittent geosmin loading typically experienced by full-scale systems. (c) 2009 Elsevier Ltd. All rights reserved.
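The isotherm-based bed-life estimate referenced above follows from a simple capacity balance: the mass of geosmin the bed can hold at equilibrium with the challenge concentration, divided by the mass loading rate. The sketch below shows that arithmetic with placeholder Freundlich and plant parameters; it deliberately ignores the biodegradation credit that extended the actual pilot column's life.

```python
def freundlich_q(K, n_inv, c_ng_L):
    """Equilibrium capacity q_e [ng/mg GAC] at concentration c [ng/L]."""
    return K * c_ng_L**n_inv

def bed_life_days(K, n_inv, c_ng_L, gac_mass_kg, flow_L_per_day):
    """Ideal (adsorption-only) bed life: total capacity / mass loading rate."""
    q_e = freundlich_q(K, n_inv, c_ng_L)           # ng geosmin per mg GAC
    capacity_ng = q_e * gac_mass_kg * 1e6          # kg GAC -> mg GAC
    loading_ng_per_day = c_ng_L * flow_L_per_day
    return capacity_ng / loading_ng_per_day

# Placeholder parameters, not SPRWS's fitted values:
print(bed_life_days(K=8.0, n_inv=0.5, c_ng_L=100.0,
                    gac_mass_kg=500.0, flow_L_per_day=4.0e6))  # -> 100 days
```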
On testing VLSI chips for the big Viterbi decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.
1989-01-01
A general technique that can be used in testing very large scale integrated (VLSI) chips for the Big Viterbi Decoder (BVD) system is described. The test technique is divided into functional testing and fault-coverage testing. The purpose of functional testing is to verify that the design works functionally. Functional test vectors are converted from outputs of software simulations which simulate the BVD functionally. Fault-coverage testing is used to detect and, in some cases, to locate faulty components caused by bad fabrication. This type of testing is useful in screening out bad chips. Finally, design for testability, which is included in the BVD VLSI chip design, is described in considerable detail. Both the observability and controllability of a VLSI chip are greatly enhanced by including the design for the testability feature.
Ekama, G A; Marais, P
2004-02-01
The applicability of the one-dimensional idealized flux theory (1DFT) for the design of secondary settling tanks (SSTs) is evaluated by comparing its predicted maximum surface overflow rate (SOR) and solids loading rate (SLR) with those calculated with the two-dimensional computational fluid dynamics model SettlerCAD, using as a basis 35 full-scale SST stress tests conducted on different SSTs with diameters from 30 to 45 m and 2.25-4.1 m side water depth (SWD), with and without Stamford baffles. From the simulations, a relatively consistent pattern appeared, i.e. that the 1DFT can be used for design but its predicted maximum SLR needs to be reduced by an appropriate flux rating, the magnitude of which depends mainly on SST depth and hydraulic loading rate (HLR). Simulations of the Watts et al. (Water Res. 30(9)(1996)2112) SST with doubled SWDs, and of the Darvill new (4.1 m) and old (2.5 m) SSTs with interchanged depths, were run to confirm the sensitivity of the flux rating to depth and HLR. Simulations with and without a Stamford baffle were also performed. While the design of the internal features of the SST, such as baffling, has a marked influence on the effluent SS concentration while the SST is underloaded, these features appeared to have only a small influence on the flux rating, i.e. capacity, of the SST. Until more information is obtained, it would appear from the simulations that the flux rating of 0.80 of the 1DFT maximum SLR recommended by Ekama and Marais (Water Pollut. Control 85(1)(1986)101) remains a reasonable value to apply in the design of full-scale SSTs: for deep SSTs (4 m SWD) the flux rating could be increased to 0.85, and for shallow SSTs (2.5 m SWD) decreased to 0.75. It is recommended that (i) while the apparent interrelationship between SST flux rating and depth suggests some optimization of the volume of the SST, this be avoided, and (ii) the depth of the SST be designed independently of the surface area, as is usually the practice, and once selected, the appropriate flux rating applied to the 1DFT estimate of the surface area.
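In design terms, the recommendation above reduces to multiplying the 1DFT maximum solids loading rate by a depth-dependent flux rating. A minimal sketch of that arithmetic follows; the three rating values come from the abstract, while the linear interpolation between depths is an assumption of this sketch only.

```python
def design_slr(slr_1dft_max, swd_m):
    """Apply the depth-dependent flux rating to the 1DFT maximum SLR.

    Ratings: 0.75 for shallow (~2.5 m SWD), 0.85 for deep (~4 m SWD) tanks;
    the linear interpolation in between is an assumption, not from the paper.
    """
    if swd_m <= 2.5:
        rating = 0.75
    elif swd_m >= 4.0:
        rating = 0.85
    else:
        rating = 0.75 + (swd_m - 2.5) / (4.0 - 2.5) * 0.10
    return rating * slr_1dft_max

print(design_slr(slr_1dft_max=8.0, swd_m=3.0))  # kg/(m2 h) -> ~6.27
```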
On the Representation of Subgrid Microtopography Effects in Process-based Hydrologic Models
NASA Astrophysics Data System (ADS)
Jan, A.; Painter, S. L.; Coon, E. T.
2017-12-01
Increased availability of high-resolution digital elevation data are enabling process-based hydrologic modeling on finer and finer scales. However, spatial variability in surface elevation (microtopography) exists below the scale of a typical hyper-resolution grid cell and has the potential to play a significant role in water retention, runoff, and surface/subsurface interactions. Though the concept of microtopographic features (depressions, obstructions) and their implications for flow and discharge are well established, representing those effects in watershed-scale integrated surface/subsurface hydrology models remains a challenge. Using the complex and coupled hydrologic environment of the Arctic polygonal tundra as an example, we study the effects of submeter topography and present a subgrid model, parameterized by small-scale spatial heterogeneities, for use in hyper-resolution models with polygons at a scale of 15-20 meters forming the surface cells. The subgrid model alters the flow and storage terms in the diffusion wave equation for surface flow. We compare our results against sub-meter scale simulations, which act as a benchmark, and against hyper-resolution models without the subgrid representation. The initiation of runoff in the fine-scale simulations is delayed and the recession curve is slowed relative to simulated runoff using the hyper-resolution model with no subgrid representation. Our subgrid modeling approach improves the representation of runoff and water retention relative to models that ignore subgrid topography. We evaluate different strategies for parameterizing the subgrid model and present a classification-based method to move efficiently to larger landscapes. This work was supported by the Interoperable Design of Extreme-scale Application Software (IDEAS) project and the Next-Generation Ecosystem Experiments-Arctic (NGEE Arctic) project. NGEE-Arctic is supported by the Office of Biological and Environmental Research in the DOE Office of Science.
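The surface-flow equation being modified is the diffusion-wave approximation with Manning friction, whose standard form is shown below, with h the ponded depth, z the bed elevation, H = h + z the water-surface elevation, n Manning's roughness, and S a source term. The subgrid closure then replaces the depth appearing in the storage and conveyance terms with effective functions of the sub-cell microtopography; the equation shown is the textbook form, not the authors' modified terms.

```latex
\frac{\partial h}{\partial t} + \nabla \cdot \mathbf{q} = S,
\qquad
\mathbf{q} = -\frac{h^{5/3}}{n\,\sqrt{\lVert \nabla H \rVert}}\, \nabla H,
\qquad H = h + z
```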
Simulation for Supporting Scale-Up of a Fluidized Bed Reactor for Advanced Water Oxidation
Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan
2014-01-01
Simulation of a fluidized bed reactor (FBR) was accomplished for treating wastewater using the Fenton reaction, an advanced oxidation process (AOP). The simulation was performed to determine characteristics of FBR performance, the concentration profile of the contaminants, and various prominent hydrodynamic properties (e.g., Reynolds number, velocity, and pressure) in the reactor. The simulation was implemented for a 2.8 L working volume using hydrodynamic correlations, the continuity equation, and simplified kinetic information for phenol degradation as a model. The simulation shows that, using Fe3+ and Fe2+ mixtures as catalyst, TOC degradation up to 45% was achieved for contaminant concentrations of 40–90 mg/L within 60 min. The concentration profiles and hydrodynamic characteristics were also generated. A subsequent scale-up study was conducted using the similitude method. The analysis shows that the models developed are applicable up to a 10 L working volume. The study proves that, with appropriate modeling and simulation, data can be predicted for designing and operating FBRs for wastewater treatment.
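As a hedged illustration of the kinetic side of such a simulation, the sketch below integrates a pseudo-first-order TOC decay, treating the 2.8 L reactor as well-mixed for simplicity; the rate constant is back-calculated from the reported ~45% removal in 60 min rather than taken from the study's kinetics.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed pseudo-first-order rate constant chosen to give 45% TOC removal at t = 60 min.
k = np.log(1 / 0.55) / 60.0                # 1/min
toc0 = 70.0                                # mg/L, within the reported 40-90 mg/L range

sol = solve_ivp(lambda t, c: -k * c, (0, 60), [toc0], dense_output=True)
t = np.linspace(0, 60, 7)
print(dict(zip(t, sol.sol(t)[0].round(1))))   # TOC (mg/L) every 10 min
```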
NASA Technical Reports Server (NTRS)
Kadambi, J. R.; Schneider, S. J.; Stewart, W. A.
1986-01-01
The natural circulation of a single-phase fluid in a scale model of a pressurized water reactor system during a postulated degraded-core accident is analyzed. The fluids utilized were water and SF6. The design of the reactor model and the similitude requirements are described. Four LDA tests were conducted: water with 28 kW of heat in the simulated core, with and without the participation of simulated steam generators; water with 28 kW of heat in the simulated core, with the participation of simulated steam generators and with a cold upflow of 12 lbm/min from the lower plenum; and SF6 with 0.9 kW of heat in the simulated core, without the participation of the simulated steam generators. For the water tests, the velocity of the water in the center of the core increases with vertical height and continues to increase in the upper plenum. For SF6, the velocities are an order of magnitude higher than those of water; however, the velocity patterns are similar.
Simulation research: A vital step for human missions to Mars
NASA Astrophysics Data System (ADS)
Perino, Maria Antonietta; Apel, Uwe; Bichi, Alessandro
The complex nature of the challenge as humans embark on exploration missions beyond Earth orbit will require that, in the early stages, simulation facilities be established at least on Earth. Suitable facilities in Low Earth Orbit and on the lunar surface would provide complementary information of critical importance for the overall design of a human mission to Mars. A full range of simulation campaigns is in fact required to reach a better understanding of the complexities involved in exploration missions that will bring humans back to the Moon and then outward to Mars. The corresponding simulation means may range from small-scale environmental simulation chambers and/or computer models that will aid in the development of new materials, to full-scale mock-ups of spacecraft, planetary habitats, and/or orbiting infrastructures. This paper describes how a suitable simulation campaign will contribute to the definition of the required countermeasures with respect to the expected duration of the flight. This will allow countermeasure payload and astronaut time to be traded against effort in the technological development of propulsion systems.
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2015-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer; hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged / large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, no predictive improvement was noted over the results obtained from the explicit algebraic stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e. approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides the end-user a mechanism for reducing model-form errors.
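The autocorrelation-based convergence check mentioned above can be sketched in a few lines; the velocity history below is synthetic, and the sample interval is an assumption.

```python
import numpy as np

def autocorr(u):
    """Normalized autocorrelation of a signal at lags 0..N-1."""
    up = u - u.mean()
    r = np.correlate(up, up, mode='full')[len(up) - 1:]
    return r / r[0]

dt = 1e-5                          # sample interval (s), assumed
u = np.random.randn(20000)         # placeholder for a probe velocity history
rho = autocorr(u)
izero = np.argmax(rho < 0)         # first zero crossing of the autocorrelation
T_int = np.trapz(rho[:izero], dx=dt)          # integral time scale
n_indep = len(u) * dt / (2 * T_int)           # effective independent samples
print(f"integral time scale = {T_int:.2e} s, ~{n_indep:.0f} independent samples")
```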
Simulating wall and corner fire tests on wood products with the OSU room fire model
H. C. Tran
1994-01-01
This work demonstrates the complexity of modeling wall and corner fires in a compartment. The model chosen for this purpose is the Ohio State University (OSU) room fire model. This model was designed to simulate fire growth on walls in a compartment and therefore lends itself to direct comparison with standard room test results. The model inputs were bench-scale data...
Park, Pyung-Kyu; Lee, Sangho; Cho, Jae-Seok; Kim, Jae-Hong
2012-08-01
The objective of this study is to further develop a previously reported mechanistic predictive model that simulates boron removal in full-scale seawater reverse osmosis (RO) desalination processes, so as to take into account the effect of membrane fouling. The decrease in boron removal and the reduction in water production rate caused by membrane fouling, through enhanced concentration polarization, were simulated as a decrease in the solute mass transfer coefficient in the boundary layer on the membrane surface. Various design and operating options under fouling conditions were examined, including single- versus double-pass configurations, different numbers of RO elements per vessel, use of RO membranes with enhanced boron rejection, and pH adjustment. These options were quantitatively compared by normalizing the performance of the system in terms of E(min), the minimum energy cost per unit of product water. Simulation results suggested that the most viable options to enhance boron rejection among those tested include: i) minimizing fouling, ii) exchanging the existing SWRO elements for boron-specific ones, and iii) increasing pH in the second pass. The model developed in this study is expected to help in the design and optimization of RO processes to achieve the target boron removal at the target water recovery under realistic conditions, where membrane fouling occurs during operation.
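A film-theory sketch of the fouling mechanism described above: reducing the boundary-layer mass transfer coefficient k raises the membrane-wall concentration by exp(Jw/k) and hence boron passage. The permeability and flux values are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def boron_passage(Jw, k, B, Cb):
    """Observed boron passage for water flux Jw, mass transfer coeff k,
    boron permeability B, bulk concentration Cb (film theory)."""
    Cm = Cb * np.exp(Jw / k)          # wall concentration from concentration polarization
    Cp = B * Cm / (Jw + B)            # permeate concentration, from Jb = B*(Cm - Cp) = Jw*Cp
    return Cp / Cb

Jw, Cb, B = 1.4e-5, 1.0, 4e-6         # m/s, normalized mg/L, m/s (all assumed)
for k in (1e-4, 3e-5):                # clean vs fouled boundary layer
    print(f"k = {k:.0e} m/s -> boron passage = {boron_passage(Jw, k, B, Cb):.3f}")
```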
Germovsek, Eva; Barker, Charlotte I S; Sharland, Mike; Standing, Joseph F
2018-04-19
Pharmacokinetic/pharmacodynamic (PKPD) modeling is important in the design and conduct of clinical pharmacology research in children. During drug development, PKPD modeling and simulation should underpin rational trial design and facilitate extrapolation to investigate efficacy and safety. The application of PKPD modeling to optimize dosing recommendations and therapeutic drug monitoring is also increasing, and PKPD model-based dose individualization will become a core feature of personalized medicine. Following extensive progress on pediatric PK modeling, a greater emphasis now needs to be placed on PD modeling to understand age-related changes in drug effects. This paper discusses the principles of PKPD modeling in the context of pediatric drug development, summarizing how important PK parameters, such as clearance (CL), are scaled with size and age, and highlights a standardized method for CL scaling in children. One standard scaling method would facilitate comparison of PK parameters across multiple studies, thus increasing the utility of existing PK models and facilitating optimal design of new studies.
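A common form of the standardized scaling referred to above combines a fixed 0.75 allometric weight exponent with a sigmoid maturation function of postmenstrual age; the maturation parameters in the sketch below are illustrative placeholders, not recommended values.

```python
def scaled_cl(cl_std_70kg, wt_kg, pma_wk, tm50_wk=47.7, hill=3.4):
    """Clearance scaled for size (fixed allometry) and age (sigmoid maturation).
    tm50_wk and hill are illustrative, not drug-specific estimates."""
    size = (wt_kg / 70.0) ** 0.75
    maturation = pma_wk ** hill / (tm50_wk ** hill + pma_wk ** hill)
    return cl_std_70kg * size * maturation

# e.g. a term neonate (3.5 kg, PMA 40 wk) vs a 20 kg child (PMA ~6 years)
print(scaled_cl(10.0, 3.5, 40), scaled_cl(10.0, 20.0, 6 * 52 + 40))
```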
Chiang, Kuo-Szu; Bock, Clive H; Lee, I-Hsuan; El Jarroudi, Moussa; Delfosse, Philippe
2016-12-01
The effect of rater bias and assessment method on hypothesis testing was studied for representative experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed "balanced" and those with unequal numbers of replicate estimates are termed "unbalanced". The three assessment methods considered were nearest percent estimates (NPEs), an amended 10% incremental scale, and the Horsfall-Barratt (H-B) scale. Estimates of severity of Septoria leaf blotch on leaves of winter wheat were used to develop distributions for a simulation model. The experimental designs are presented here in the context of simulation experiments which consider the optimal design for the number of specimens (individual units sampled) and the number of replicate estimates per specimen for a fixed total number of observations (total sample size for the treatments being compared). The criterion used to gauge each method was the power of the hypothesis test. As expected, at a given fixed number of observations, the balanced experimental designs invariably resulted in a higher power compared with the unbalanced designs at different disease severity means, mean differences, and variances. Based on these results, with unbiased estimates using NPE, the recommended number of replicate estimates taken per specimen is 2 (from a sample of specimens of at least 30), because this conserves resources. Furthermore, for biased estimates, an apparent difference in the power of the hypothesis test was observed between assessment methods and between experimental designs. Results indicated that, regardless of experimental design or rater bias, an amended 10% incremental scale has slightly less power compared with NPEs, and that the H-B scale is more likely than the others to cause a type II error. These results suggest that choice of assessment method, optimizing sample number and number of replicate estimates, and using a balanced experimental design are important criteria to consider to maximize the power of hypothesis tests for comparing treatments using disease severity estimates.
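The balanced-versus-unbalanced comparison lends itself to a compact Monte Carlo sketch; the severity distributions below are illustrative normals rather than the fitted Septoria distributions, and a two-sample t test (equivalent to a one-way ANOVA for two groups) stands in for the full analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power(n1, n2, mu1=20.0, mu2=25.0, sd=8.0, reps=5000, alpha=0.05):
    """Estimated power of a two-sample test for a fixed total sample size."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(mu1, sd, n1)
        b = rng.normal(mu2, sd, n2)
        hits += stats.ttest_ind(a, b).pvalue < alpha
    return hits / reps

total = 60                       # fixed total number of observations
print("balanced  :", power(30, 30))
print("unbalanced:", power(45, 15))
```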
Design and Analysis of A Spin-Stabilized Projectile Experimental Apparatus
NASA Astrophysics Data System (ADS)
Siegel, Noah; Rodebaugh, Gregory; Elkins, Christopher; van Poppel, Bret; Benson, Michael; Cremins, Michael; Lachance, Austin; Ortega, Raymond; Vanderyacht, Douglas
2017-11-01
Spinning objects experience an effect termed the Magnus moment due to an uneven pressure distribution based on rotation within a crossflow. Unlike the Magnus force, which is often small for spin-stabilized projectiles, the Magnus moment can have a strong detrimental effect on aerodynamic flight stability. Simulations often fail to accurately predict the Magnus moment in the subsonic flight regime. In an effort to characterize the conditions that cause the Magnus moment, the researchers in this work employed Magnetic Resonance Velocimetry (MRV) techniques to measure three-dimensional, three-component, sub-millimeter resolution fluid velocity fields around a scaled model of a spinning projectile in flight. The team designed, built, and tested a novel water channel apparatus that was fully MRI-compliant (water-tight and non-ferrous) and capable of spinning a projectile at a constant rotational speed. A supporting numerical simulation effort informed the design of the scaled projectile, which was shaped to thicken the hydrodynamic boundary layer near its outer surface. Preliminary testing produced two-dimensional and three-dimensional velocity data and revealed an asymmetric boundary layer around the projectile, which is indicative of the Magnus effect.
NASA Astrophysics Data System (ADS)
Youn, Sung-Won; Suzuki, Kenta; Hiroshima, Hiroshi
2018-06-01
A software program for modifying a mold design to obtain a uniform residual layer thickness (RLT) distribution has been developed, and its validity was verified by UV-nanoimprint lithography (UV-NIL) simulation. First, the effects of granularity (G) on both residual layer uniformity and filling characteristics were characterized. For a constant complementary pattern depth and a granularity sufficiently larger than the minimum pattern width, filling time decreased with decreasing granularity. For a pattern design with a wide density range and an irregular distribution, choosing a small granularity was not always a good strategy, since the etching depth required for a complementary pattern occasionally increased sharply as granularity decreased. On the basis of these results, the automated method was applied to a chip-scale pattern modification. Simulation results showed a marked improvement in residual layer thickness uniformity for a capacity-equalized (CE) mold. For the given conditions, the standard deviation of the RLT decreased to between 1/3 and 1/5 of its original value, depending on the pattern design.
Yang, Hanbae; McCoy, Edward L; Grewal, Parwinder S; Dick, Warren A
2010-08-01
Rain gardens are bioretention systems that have the potential to reduce peak runoff flow and improve water quality in a natural and aesthetically pleasing manner. We compared the hydraulic performance and the removal efficiencies for nutrients and atrazine of a monophasic rain garden design versus a biphasic design at column scale using simulated runoff. The biphasic rain garden was designed to increase retention time and the removal efficiency of runoff pollutants by creating a sequence of water-saturated to unsaturated conditions. We also evaluated the effect of C substrate availability on pollutant removal efficiency in the biphasic rain garden. Five simulated runoff events with various concentrations of runoff pollutants (i.e. nitrate, phosphate, and atrazine) were applied to the monophasic and biphasic rain gardens once every 5 d. Hydraulic performance was consistent over the five simulated runoff events. Peak flow was reduced by approximately 56% for the monophasic design and 80% for the biphasic design. Both rain garden systems showed excellent removal efficiency for phosphate (89-100%) and atrazine (84-100%). However, significantly (p<0.001) higher removal of nitrate was observed in the biphasic (42-63%) than in the monophasic rain garden (29-39%). Addition of C substrate in the form of glucose increased the removal efficiency of nitrate significantly (p<0.001), achieving up to 87% removal at a treatment C/N ratio of 2.0. This study demonstrates the importance of retention time, environmental conditions (i.e. saturated/unsaturated conditions), and availability of C substrate for the bioremediation of pollutants, especially nitrate, in rain gardens.
NASA Astrophysics Data System (ADS)
Johnson, Marcus; Jung, Youngsun; Dawson, Daniel; Supinie, Timothy; Xue, Ming; Park, Jongsook; Lee, Yong-Hee
2018-07-01
The UK Met Office Unified Model (UM) is employed by many weather forecasting agencies around the globe. The model is designed to run across spatial and time scales and is known to produce skillful predictions for large-scale weather systems. However, it has only recently begun running operationally at horizontal grid spacings of ~1.5 km [e.g., at the UK Met Office and the Korea Meteorological Administration (KMA)]. As its microphysics scheme was originally designed and tuned for large-scale precipitation systems, we investigate the performance of the UM microphysics to identify potential inherent biases or weaknesses. Two rainfall cases from the KMA forecasting system are considered in this study: a Changma (quasi-stationary) front, and Typhoon Sanba (2012). The UM output is compared to polarimetric radar observations in terms of simulated polarimetric radar variables. Results show that the UM generally underpredicts median reflectivity in stratiform rain, producing high-reflectivity cores with precipitation gaps between them. This is partially due to the diagnostic rain intercept parameter formulation used in the one-moment microphysics scheme. Modeled drop size is both under- and overpredicted compared to observations. UM frozen hydrometeors favor generic ice (crystals and snow) rather than graupel, which is reasonable for the Changma and typhoon cases. The model performed best for the typhoon case in terms of simulated precipitation coverage.
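The link between the intercept parameter and reflectivity bias can be made concrete with an exponential drop size distribution: for a fixed rain water content, Z depends strongly on the assumed intercept N0. The values below are illustrative (the classic Marshall-Palmer N0 and an arbitrary contrast), not the UM's actual diagnostic formulation.

```python
import numpy as np

rho_w = 1000.0                       # kg/m3, water density
W = 0.5e-3                           # rain water content, kg/m3 (illustrative)

def reflectivity_dbz(W, N0):
    """dBZ for an exponential DSD N(D) = N0*exp(-lam*D) with water content W."""
    lam = (np.pi * rho_w * N0 / W) ** 0.25   # from W = pi*rho_w*N0/lam^4
    Z = 720.0 * N0 / lam**7                  # 6th moment: Z = N0*Gamma(7)/lam^7, m^6/m^3
    return 10 * np.log10(Z * 1e18)           # convert m^6/m^3 -> mm^6/m^3

for N0 in (8e6, 4e7):                # Marshall-Palmer vs a larger (smaller-drop) intercept
    print(f"N0 = {N0:.0e} m^-4 -> {reflectivity_dbz(W, N0):.1f} dBZ")
```

For the same water content, the larger intercept (more, smaller drops) yields several dB less reflectivity, the direction of the stratiform underprediction noted above.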
The use of vestibular models for design and evaluation of flight simulator motion
NASA Technical Reports Server (NTRS)
Bussolari, Steven R.; Young, Laurence R.; Lee, Alfred T.
1989-01-01
Quantitative models for the dynamics of the human vestibular system are applied to the design and evaluation of flight simulator platform motion. An optimal simulator motion control algorithm is generated to minimize the vector difference between the perceived spatial orientation estimated in flight and in simulation. The motion controller has been implemented on the Vertical Motion Simulator at NASA Ames Research Center and evaluated experimentally through measurement of pilot performance and subjective rating during VTOL aircraft simulation. In general, pilot performance in a longitudinal tracking task (formation flight) did not appear to be sensitive to variations in platform motion condition as long as motion was present. However, pilot assessments of motion fidelity, made by means of a rating scale designed for this purpose, were sensitive to motion controller design. Platform motion generated with the optimal motion controller was found to be generally equivalent to that generated by conventional linear crossfeed washout. The vestibular models were also used to evaluate the motion fidelity of a transport category aircraft (Boeing 727) simulation in a pilot performance and simulator acceptability study at the Man-Vehicle Systems Research Facility at NASA Ames Research Center. Eighteen airline pilots, currently flying the B-727, were given a series of flight scenarios in the simulator under various conditions of simulator motion. The scenarios were chosen to reflect the flight maneuvers that these pilots might expect to be given during a routine pilot proficiency check. Pilot performance and subjective ratings of simulator fidelity were relatively insensitive to the motion condition, despite large differences in the amplitude of motion provided. This lack of sensitivity may be explained by means of the vestibular models, which predict little difference in the modeled motion sensations of the pilots when different motion conditions are imposed.
Design, Activation, and Operation of the J2-X Subscale Simulator (JSS)
NASA Technical Reports Server (NTRS)
Saunders, Grady P.; Raines, Nickey G.; Varner, Darrel G.
2009-01-01
The purpose of this paper is to give a detailed description of the design, activation, and operation of the J2-X Subscale Simulator (JSS) installed in Cell 1 of the E3 test facility at Stennis Space Center, MS (SSC). The primary purpose of the JSS is to simulate the installation of the J2-X engine in the A3 Subscale Rocket Altitude Test Facility at SSC. The JSS is designed to give aerodynamically and thermodynamically similar plume properties to the J2-X engine currently under development for use as the upper stage engine on the ARES I and ARES V launch vehicles. The JSS is a subscale, pressure-fed, LOX/GH-fueled rocket that is geometrically similar to the J2-X from the throat to the nozzle exit plane (NEP) and is operated at the same oxidizer-to-fuel ratios and chamber pressures. This paper describes the heritage hardware used as the basis of the JSS design, the newly designed rocket hardware, the igniter systems used, and the activation and operation of the JSS.
WE-H-BRA-04: Biological Geometries for the Monte Carlo Simulation Toolkit TOPAS-nBio
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNamara, A; Held, K; Paganetti, H
2016-06-15
Purpose: New advances in radiation therapy are most likely to come from the complex interface of physics, chemistry and biology. Computational simulations offer a powerful tool for quantitatively investigating radiation interactions with biological tissue and can thus help bridge the gap between physics and biology. The aim of TOPAS-nBio is to provide a comprehensive tool to generate advanced radiobiology simulations. Methods: TOPAS wraps and extends the Geant4 Monte Carlo (MC) simulation toolkit. TOPAS-nBio is an extension to TOPAS which utilizes the physics processes in Geant4-DNA to model biological damage from very low energy secondary electrons. Specialized cell, organelle and molecular geometries were designed for the toolkit. Results: TOPAS-nBio gives the user the capability of simulating biological geometries, ranging from the micron scale (e.g. cells and organelles) to complex nano-scale geometries (e.g. DNA and proteins). The user interacts with TOPAS-nBio through easy-to-use input parameter files. For example, in a simple cell simulation the user can specify the cell type and size as well as the type, number and size of included organelles. For more detailed nuclear simulations, the user can specify chromosome territories containing chromatin fiber loops, the latter comprising nucleosomes on a double helix. The chromatin fibers can be arranged in simple rigid geometries or within fractal globules, mimicking realistic chromosome territories. TOPAS-nBio also provides users with the capability of reading Protein Data Bank 3D structural files to simulate radiation damage to proteins or nucleic acids, e.g. histones or RNA. TOPAS-nBio has been validated by comparing results to other track structure simulation software and published experimental measurements. Conclusion: TOPAS-nBio provides users with a comprehensive MC simulation tool for radiobiological simulations, giving users without advanced programming skills the ability to design and run complex simulations.
NASA Technical Reports Server (NTRS)
Allen, B. Danette; Alexandrov, Natalia
2016-01-01
Incremental approaches to air transportation system development inherit current architectural constraints, which, in turn, place hard bounds on system capacity, efficiency of performance, and complexity. To enable airspace operations of the future, a clean-slate (ab initio) airspace design(s) must be considered. This ab initio National Airspace System (NAS) must be capable of accommodating increased traffic density, a broader diversity of aircraft, and on-demand mobility. System and subsystem designs should scale to accommodate the inevitable demand for airspace services that include large numbers of autonomous Unmanned Aerial Vehicles and a paradigm shift in general aviation (e.g., personal air vehicles) in addition to more traditional aerial vehicles such as commercial jetliners and weather balloons. The complex and adaptive nature of ab initio designs for the future NAS requires new approaches to validation, adding a significant physical experimentation component to analytical and simulation tools. In addition to software modeling and simulation, the ability to exercise system solutions in a flight environment will be an essential aspect of validation. The NASA Langley Research Center (LaRC) Autonomy Incubator seeks to develop a flight simulation infrastructure for ab initio modeling and simulation that assumes no specific NAS architecture and models vehicle-to-vehicle behavior to examine interactions and emergent behaviors among hundreds of intelligent aerial agents exhibiting collaborative, cooperative, coordinative, selfish, and malicious behaviors. The air transportation system of the future will be a complex adaptive system (CAS) characterized by complex and sometimes unpredictable (or unpredicted) behaviors that result from temporal and spatial interactions among large numbers of participants. A CAS not only evolves with a changing environment and adapts to it, it is closely coupled to all systems that constitute the environment. Thus, the ecosystem that contains the system and other systems evolves with the CAS as well. The effects of the emerging adaptation and co-evolution are difficult to capture with only combined mathematical and computational experimentation. Therefore, an ab initio flight simulation environment must accommodate individual vehicles, groups of self-organizing vehicles, and large-scale infrastructure behavior. Inspired by Massively Multiplayer Online Role Playing Games (MMORPG) and Serious Gaming, the proposed ab initio simulation environment is similar to online gaming environments in which player participants interact with each other, affect their environment, and expect the simulation to persist and change regardless of any individual player's active participation.
Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.
Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A
2014-01-01
Multiple software programs are available for designing and running the large-scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs, and so on. It is therefore desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database stores and manages all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
Cotes-Ruiz, Iván Tomás; Prado, Rocío P.; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás
2017-01-01
Nowadays, the growing computational capabilities of Cloud systems rely on reducing the power consumed by their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to the workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose successful management in terms of energy saving is still in its infancy. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, a power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new energy-saving management strategies that consider computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent across different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of a data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapping the intra-host DVFS technique.
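A back-of-the-envelope version of the DVFS energy accounting such a simulator embeds is sketched below; all constants are arbitrary illustrative values, not WorkflowSim's.

```python
# Toy DVFS energy model: dynamic power scales as C*V^2*f, execution time as 1/f.
def task_energy(cycles, f, v, p_static=0.5, c_eff=1e-9):
    t = cycles / f                        # execution time (s)
    p_dyn = c_eff * v**2 * f              # dynamic power (W)
    return (p_static + p_dyn) * t, t

for f, v in [(2.0e9, 1.20), (1.2e9, 0.95)]:   # two governor operating points (assumed)
    e, t = task_energy(1e12, f, v)
    print(f"f = {f/1e9:.1f} GHz: {t:.0f} s, {e:.0f} J")
```

With this small static floor, scaling down saves energy at the cost of runtime; with a large static floor, running fast and idling can win instead, which is why governor choice matters.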
Adaptive Control of a Utility-Scale Wind Turbine Operating in Region 3
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Balas, Mark J.; Wright, Alan D.
2009-01-01
Adaptive control techniques are well suited to nonlinear applications, such as wind turbines, which are difficult to model accurately and which are affected by poorly known operating environments. The turbulent and unpredictable conditions in which wind turbines operate create many challenges for their operation. In this paper, we design an adaptive collective pitch controller for a high-fidelity simulation of a utility-scale, variable-speed, horizontal axis wind turbine. The objective of the adaptive pitch controller in Region 3 is to regulate generator speed and reject step disturbances. The control objective is accomplished by collectively pitching the turbine blades. We use an extension of the Direct Model Reference Adaptive Control (DMRAC) approach to track a reference point and to reject persistent disturbances. The turbine simulation models the Controls Advanced Research Turbine (CART) of the National Renewable Energy Laboratory in Golden, Colorado. The CART is a utility-scale wind turbine with a well-developed and extensively verified simulator. The adaptive collective pitch controller for Region 3 was compared in simulations with a baseline classical Proportional-Integral (PI) collective pitch controller. In the simulations, the adaptive pitch controller showed improved speed regulation in Region 3 compared with the baseline PI pitch controller, and it demonstrated robustness to modeling errors.
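The flavor of model-reference adaptive control can be conveyed with a scalar sketch: a one-state stand-in for the generator-speed dynamics (not the CART model) tracks a reference model while adaptive gains absorb an unknown step disturbance. Plant parameters, gains, and the disturbance are all assumptions.

```python
# Scalar direct MRAC: plant x' = a*x + b*u + d tracks reference xm' = am*xm + bm*r.
a, b = -0.5, 2.0            # "unknown" plant parameters (sign of b assumed known)
am, bm = -2.0, 2.0          # reference model
gamma, dt = 5.0, 1e-3       # adaptation gain, time step
x = xm = 0.0
kx = kr = kd = 0.0          # adaptive gains (state feedback, feedforward, disturbance)
for i in range(int(60 / dt)):
    t = i * dt
    r = 1.0                                  # speed set point
    d = 0.8 if t > 30 else 0.0               # step disturbance at t = 30 s
    u = kx * x + kr * r + kd                 # control law
    e = x - xm                               # tracking error
    kx -= gamma * e * x * dt                 # gradient adaptation laws
    kr -= gamma * e * r * dt
    kd -= gamma * e * dt
    x  += (a * x + b * u + d) * dt           # plant update (Euler)
    xm += (am * xm + bm * r) * dt            # reference model update
print(f"final tracking error = {x - xm:.4f}")
```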
Analysis of large-scale tablet coating: Modeling, simulation and experiments.
Boehling, P; Toschkoff, G; Knop, K; Kleinebudde, P; Just, S; Funke, A; Rehbaum, H; Khinast, J G
2016-07-30
This work concerns a tablet coating process in an industrial-scale drum coater. We set up a full-scale Design of Simulation Experiment (DoSE) using the Discrete Element Method (DEM) to investigate the influence of various process parameters (the spray rate, the number of nozzles, the rotation rate and the drum load) on the coefficient of inter-tablet coating variation (cv,inter). The coater was filled with up to 290 kg of material, equivalent to 1,028,369 tablets. To mimic the tablet shape, the glued-sphere approach was followed, with each modeled tablet consisting of eight spheres. We simulated the process with the eXtended Particle System (XPS), proving that it is possible to accurately simulate the tablet coating process at industrial scale. The process time required to reach a uniform tablet coating was extrapolated from the simulated data and was in good agreement with experimental results. The results are provided at various levels of detail, from a thorough investigation of the influence of the process parameters on cv,inter and the number of tablets that visit the spray zone during the simulated 90 s, to the velocity in the spray zone and the spray and bed cycle times. It was found that increasing the number of nozzles and decreasing the spray rate had the greatest influence on cv,inter. Although increasing the drum load and the rotation rate increased the tablet velocity, these parameters did not have a relevant influence on cv,inter or the process time.
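Why more nozzles and a lower per-pass spray rate tighten the coating distribution can be seen from a toy visit-count model: if spray-zone visits are roughly Poisson, cv,inter decays as one over the square root of the mean visit count. The visit rate and per-visit deposit below are made-up values; the DEM study resolves these mechanistically.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tablets = 100_000
visit_rate = 0.5          # mean spray-zone visits per tablet per minute (assumed)
mg_per_visit = 1.0        # mean deposit per visit (assumed)
for minutes in (60, 120, 180):
    visits = rng.poisson(visit_rate * minutes, n_tablets)
    coat = visits * mg_per_visit
    print(f"{minutes:4d} min: cv_inter = {coat.std() / coat.mean():.3f}")
# cv_inter ~ 1/sqrt(mean visits): halving the per-visit dose while doubling
# the run time (or nozzle count) cuts the variability by ~sqrt(2).
```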
Transport Simulations for Fast Ignition on NIF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strozzi, D J; Tabak, M; Grote, D P
2009-10-26
We are designing a full hydro-scale cone-guided, indirect-drive FI coupling experiment for NIF with the ARC-FIDO short-pulse laser. Current rad-hydro designs with limited fuel jetting into the cone tip are not yet adequate for ignition; designs are improving. Electron beam transport simulations (implicit-PIC LSP) show: (1) magnetic fields and smaller angular spreads increase coupling to the ignition-relevant 'hot spot' (20 um radius); (2) plastic CD (for a warm target) produces somewhat better coupling than pure D (cryogenic target) due to enhanced resistive B fields; and (3) the optimal T_hot for this target is ~1 MeV; coupling falls by 3x as T_hot rises to 4 MeV.
Impact of 50% Synthesized Iso-Paraffins (SIP) on F-76 Fuel Coalescence
2013-12-16
petroleum JP-5 and Synthesized Iso-Paraffins (SIP). SIP fuels are made from direct fermentation of sugar into olefinic hydrocarbons. The olefinic hydrocarbons ... manufactured scaled-down filter/coalescer and separator to simulate the performance of a full-scale filter separator system. This test is designed to predict ...
ProtoMD: A prototyping toolkit for multiscale molecular dynamics
NASA Astrophysics Data System (ADS)
Somogyi, Endre; Mansour, Andrew Abi; Ortoleva, Peter J.
2016-05-01
ProtoMD is a toolkit that facilitates the development of algorithms for multiscale molecular dynamics (MD) simulations. It is designed for multiscale methods that capture the dynamic transfer of information across multiple spatial scales, such as the atomic to the mesoscopic scale, via coevolving microscopic and coarse-grained (CG) variables. ProtoMD can also be used to calibrate parameters needed in traditional CG-MD methods. The toolkit integrates 'GROMACS wrapper' to initiate MD simulations and 'MDAnalysis' to analyze and manipulate trajectory files. It facilitates experimentation with a spectrum of coarse-grained variables, prototyping rare events (such as chemical reactions), and simulating nanocharacterization experiments such as terahertz spectroscopy, AFM, nanopore, and time-of-flight mass spectroscopy. ProtoMD is written in Python and is freely available under the GNU General Public License from github.com/CTCNano/proto_md.
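As a sketch of the kind of coarse-grained variable such a toolkit coevolves, the snippet below computes per-residue centers of mass along a GROMACS trajectory directly with MDAnalysis; the file names are placeholders, and this is an illustration of the concept rather than ProtoMD's own API.

```python
import MDAnalysis as mda

# Placeholder GROMACS files; one CG bead (center of mass) per residue per frame.
u = mda.Universe("topol.tpr", "traj.xtc")
protein = u.select_atoms("protein")

cg_trajectory = []
for ts in u.trajectory:
    cg = [res.atoms.center_of_mass() for res in protein.residues]
    cg_trajectory.append(cg)
print(f"{len(cg_trajectory)} frames, {len(cg_trajectory[0])} CG beads per frame")
```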
Design and simulation of multifunctional optical devices using metasurfaces
NASA Astrophysics Data System (ADS)
Alyammahi, Saleimah
In classical optics, components such as lenses and microscopes are unable to focus light to deep-subwavelength or nanometer scales because of the diffraction limit. However, recent developments in nanophotonics have enabled researchers to control light at subwavelength scales and overcome the diffraction limit. Using subwavelength structures, we can create a new class of optical materials with unusual optical responses or with properties that are not attainable in nature. Such artificial materials can be created by structuring conventional materials on the subwavelength scale, giving rise to unusual optical properties arising from the electric and magnetic responses of each meta-atom. These materials are called metamaterials: engineered materials that exhibit exciting phenomena such as nonlinear optical responses and negative refraction. Metasurfaces are two-dimensional arrays of meta-atoms arranged at subwavelength distances; they are therefore planar, ultrathin versions of metamaterials that offer fascinating possibilities for manipulating the wavefront of optical fields. Recently, control of light properties such as phase, amplitude, and polarization has been demonstrated by introducing abrupt phase changes across a subwavelength scale. Phase discontinuities at the interface can be attained with engineered metasurfaces, enabling new applications and functionalities that have not been realized with bulk or multilayer materials. In this work, highly efficient, planar metasurfaces based on the geometric phase are designed to realize various functionalities. The designs include metalenses, axicon lenses, vortex beam generators, and Bessel vortex beam generators. The capability of planar metasurfaces to focus incident beams and shape the optical wavefront is demonstrated numerically. COMSOL simulations are used to prove the capability of these metasurfaces to transform incident beams into complex beams that carry orbital angular momentum (OAM). New designs of ultrathin, planar metasurfaces may lead to a new type of photonic device with reduced loss and broad bandwidth. Advances in metasurface design will lead to ultrathin, low-cost devices with surprising functionalities. These novel designs may offer more possibilities for applications in quantum optic devices, pulse shaping, spatial light modulators, nano-scale sensing and imaging, and so on.
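For the geometric-phase designs mentioned above, the required nanofin rotation follows directly from the target phase profile, since the Pancharatnam-Berry phase imparted on circularly polarized light is twice the local rotation angle. The sketch below builds a hyperbolic metalens phase map; wavelength, focal length, and lattice pitch are assumed values.

```python
import numpy as np

lam, f, pitch, n = 0.633e-6, 50e-6, 0.3e-6, 201   # wavelength, focus, pitch (m), grid
xs = (np.arange(n) - n // 2) * pitch
X, Y = np.meshgrid(xs, xs)

# Hyperbolic phase profile focusing a plane wave to focal length f.
phi = (2 * np.pi / lam) * (f - np.sqrt(X**2 + Y**2 + f**2))
phi = np.mod(phi, 2 * np.pi)
theta = phi / 2.0                   # geometric phase: nanofin rotation = phase / 2
print(f"rotation range: {np.degrees(theta).min():.1f} to {np.degrees(theta).max():.1f} deg")
```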
NASA Astrophysics Data System (ADS)
Gómez, Walter; Chávez, Carlos; Salgado, Hugo; Vásquez, Felipe
2017-11-01
We present the design, implementation, and evaluation of a subsidy program to introduce cleaner and more efficient household wood combustion technologies. The program was conducted in the city of Temuco, one of the most polluted cities in southern Chile, as a pilot study to design a new national stove replacement initiative for pollution control. In this city, around 90% of the total emissions of suspended particulate matter are caused by households burning wood. We created a simulated market in which households could choose among different combustion technologies with an assigned subsidy. The subsidy was a relevant factor in the decision to participate, and the inability to secure credit was a significant constraint on the participation of low-income households. Given the practical difficulties and challenges associated with implementing large-scale programs that encourage technological innovation at the household level, it is strongly advisable to start with a small-scale pilot that can provide useful insights for the final design of a full, larger-scale program.
When Feedback Fails: The Scaling and Saturation of Star Formation Efficiency
NASA Astrophysics Data System (ADS)
Grudic, Michael Y.; Hopkins, Philip F.; Faucher-Giguere, Claude-Andre; Quataert, Eliot; Murray, Norman W.; Keres, Dusan
2017-06-01
We present a suite of 3D multi-physics MHD simulations following star formation in isolated turbulent molecular gas disks ranging from 5 to 500 parsecs in radius. These simulations are designed to survey the range of surface densities between those typical of Milky Way GMCs (˜100 M⊙pc-2) and extreme ULIRG environments (˜104M⊙pc-2) so as to map out the scaling of star formation efficiency (SFE) between these two regimes. The simulations include prescriptions for supernova, stellar wind, and radiative feedback, which we find to be essential in determining both the instantaneous (ɛff) and integrated (ɛint) star formation efficiencies. In all simulations, the gas disks form stars until a critical stellar mass has been reached and the remaining gas is blown out by stellar feedback. We find that surface density is the best predictor of ɛint of all of the gas cloud's global properties, as suggested by analytic force balance arguments from previous works. Furthermore, SFE eventually saturates to ˜1 at high surface density, with very good agreement across different spatial scales. We also find a roughly proportional relationship between ɛff and ɛint. These results have implications for star formation in galactic disks, the nature and fate of nuclear starbursts, and the formation of bound star clusters. The scaling of ɛff also contradicts star formation models in which ɛff˜1% universally, including popular subgrid models for galaxy simulations.
Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.; Samoylova, Liubov; Buzmakov, Alexey; Jurek, Zoltan; Ziaja, Beata; Santra, Robin; Loh, N. Duane; Tschentscher, Thomas; Mancuso, Adrian P.
2016-01-01
The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. We demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial yet currently unavailable for experiment design.
Enhancing nursing students' understanding of poverty through simulation.
Patterson, Nena; Hulton, Linda J
2012-01-01
The purposes of this study were (a) to describe the implementation of a poverty simulation, (b) to evaluate its effect on nursing students' attitudes about poverty, and (c) to offer lessons learned. Using a mixed-method design, a convenience sample of senior undergraduate nursing students (n = 43) from a public university in a mid-Atlantic state participated in a poverty simulation experience. Students assumed the roles of real-life families and were given limited amounts of resources with which to survive in a simulated community. The simulation took place during a community health practicum clinical day. The short form of the Attitudes about Poverty and Poor Populations Scale (APPPS) was adapted for this evaluation. This 21-item scale includes factors of personal deficiency, stigma, and structural perspective, measuring a range of attitudes toward poverty and poor people. The results of this evaluation demonstrated that nursing students viewed the poverty simulation as an effective teaching strategy and participated actively. In particular, nursing students' scores on the stigma-of-poverty factor showed statistically significant changes. With proper planning, organization, and reflection, a poverty simulation experience can be a positive impetus for lifelong learning and civic engagement.
Khan, Asaduzzaman; Chien, Chi-Wen; Bagraith, Karl S
2015-04-01
To investigate whether using a parametric statistic in comparing groups leads to different conclusions when using summative scores from rating scales compared with using their corresponding Rasch-based measures. A Monte Carlo simulation study was designed to examine between-group differences in the change scores derived from summative scores from rating scales, and those derived from their corresponding Rasch-based measures, using 1-way analysis of variance. The degree of inconsistency between the 2 scoring approaches (i.e. summative and Rasch-based) was examined, using varying sample sizes, scale difficulties and person ability conditions. This simulation study revealed scaling artefacts that could arise from using summative scores rather than Rasch-based measures for determining the changes between groups. The group differences in the change scores were statistically significant for summative scores under all test conditions and sample size scenarios. However, none of the group differences in the change scores were significant when using the corresponding Rasch-based measures. This study raises questions about the validity of the inference on group differences of summative score changes in parametric analyses. Moreover, it provides a rationale for the use of Rasch-based measures, which can allow valid parametric analyses of rating scale data.
A new approach to flow simulation using hybrid models
NASA Astrophysics Data System (ADS)
Solgi, Abazar; Zarei, Heidar; Nourani, Vahid; Bahmani, Ramin
2017-11-01
The need to predict river flow for proper water resources management, to determine inflow to dam reservoirs, to design efficient flood warning systems, and so forth, has always led water researchers to seek models with fast response and low error. In recent years, the development of artificial neural networks and wavelet theory, and the use of combinations of models, has helped researchers estimate river flow ever more accurately. In this study, daily and monthly scales were used to simulate the flow of the Gamasiyab River, Nahavand, Iran. The first simulation was done using two model types, ANN and ANFIS. Then, using wavelet theory to decompose the input signals of the parameters used, sub-signals were obtained and fed into the ANN and ANFIS to obtain the hybrid models WANN and WANFIS. In addition to precipitation and flow, the parameters of temperature and evaporation were used to analyze their effects on the simulation. The results showed that using the wavelet transform improved the performance of the models on both monthly and daily scales, with a larger effect on the monthly scale, and that WANFIS was the best model.
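A minimal sketch of the hybrid idea, with assumptions flagged: a stationary wavelet transform (chosen here because it keeps sub-signals the same length as the input) decomposes a synthetic flow series, and an MLP maps the sub-signals to next-day flow. The wavelet choice and network size are assumptions, not the study's settings, and non-causal wavelet filtering can leak future information in real forecasting; this sketch ignores that subtlety.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic daily flow: seasonal cycle plus noise (length divisible by 2^level).
q = np.sin(np.arange(2048) * 2 * np.pi / 365) + 0.3 * rng.standard_normal(2048)

coeffs = pywt.swt(q, 'db4', level=2)              # [(cA2, cD2), (cA1, cD1)]
feats = np.column_stack([c for pair in coeffs for c in pair])

X, y = feats[:-1], q[1:]                          # predict q(t+1) from sub-signals at t
split = 1600
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", round(model.score(X[split:], y[split:]), 3))
```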
An engineering closure for heavily under-resolved coarse-grid CFD in large applications
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Yu, Fujiang; Jordan, Thomas
2016-11-01
Even though high-performance computing allows a very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly resolve only the large scales, thus reducing simulation time. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns, as often observed in heat exchangers or porous structures. It is proposed to solve the inviscid Euler equations on a very coarse numerical mesh. This coarse mesh need not conform to the geometry in all details. To reinstate the physics of the unresolved smaller scales, computationally cheap subgrid models are employed. The subgrid models are constructed systematically by analyzing well-resolved, generic, representative simulations. By varying the flow conditions in these simulations, correlations are obtained. These provide, for each individual coarse-mesh cell, a volume force vector and a volume porosity; in addition, surface porosities are derived for all vertices. CGCFD is related to the immersed boundary method, as both exploit volume forces and non-body-conformal meshes. Yet CGCFD differs in its coarser mesh and its use of the Euler equations. We describe the methodology on a simple test case and apply the method to a 127-pin wire-wrap fuel bundle.
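One way such a subgrid closure might be constructed, sketched under stated assumptions: fit a porous-media-like volume-force correlation per coarse cell to reference data from well-resolved simulations. Here synthetic numbers stand in for those reference results, and the linear-plus-quadratic form is an assumed ansatz.

```python
import numpy as np

# "Reference" cell velocities and the volume forces a resolved simulation
# says each coarse cell must exert (synthetic stand-ins).
u_ref = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # m/s
F_ref = 12.0 * u_ref + 3.5 * u_ref * np.abs(u_ref)     # N/m^3

# Least-squares fit of F = a*u + b*u*|u| (Darcy-Forchheimer-like ansatz).
A = np.column_stack([u_ref, u_ref * np.abs(u_ref)])
a, b = np.linalg.lstsq(A, F_ref, rcond=None)[0]

def volume_force(u):
    """Closure applied in each coarse Euler-mesh cell."""
    return a * u + b * u * np.abs(u)

print(f"fitted a = {a:.2f}, b = {b:.2f}; F(3 m/s) = {volume_force(3.0):.1f}")
```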
Multiscale design and life-cycle based sustainability assessment of polymer nanocomposite coatings
NASA Astrophysics Data System (ADS)
Uttarwar, Rohan G.
In recent years, nanocoatings with exceptionally improved and new performance properties have found numerous applications in the automotive, aerospace, ship-making, chemical, electronics, steel, construction, and many other industries. Formulations providing multiple functionalities to cured paint films, especially, are expected to dominate the coatings market in the near future. This has shifted the focus of research toward building sustainable coating recipes that can deliver multiple functionalities through applied films. The challenge in this exciting area of research arises from insufficient knowledge of the structure-property correlations of nanocoating materials and from their design complexity. Experimental efforts have succeeded in developing certain types of nanopaints exhibiting improved properties. However, multifunctional nanopaint design optimality is extremely difficult, if not impossible, to address solely through experiments. In addition, the environmental implications and societal risks associated with this growing field of nanotechnology raise several questions related to its sustainable development. This research focuses on a multiscale, sustainable nanocoating design methodology that spans novel function envisioning and idea refinement, knowledge discovery and design solution derivation, and performance testing in industrial applications. The nanocoating design is studied using computational simulations of nano- to macro-scale models together with a life-cycle sustainability assessment. The computational simulations aim at integrating top-down (goals/means, inductive) and bottom-up (cause-and-effect, deductive) systems engineering approaches for material development. The in-silico paint resin system is a water-dispersible acrylic polymer with hydrophilic nanoparticles incorporated into it. The nano-scale atomistic and micro-scale coarse-grained (CG) simulations are performed using molecular dynamics to study structural and morphological features such as the effects of polymer molecular weight, polydispersity, rheology, and nanoparticle volume fraction, size, shape, and chemical nature on the bulk mechanical and self-cleaning properties of the coating film. At the macro scale, a paint spray system used for automotive coating application is studied using CFD-based simulation to generate crucial information about the effects of nanocoating technology on environmental emissions and coating film quality. The cradle-to-grave life-cycle sustainability assessment addresses the critical issues related to the economic benefits, environmental implications, and societal effects of nanocoating technology through case studies of automotive coating systems. This is accomplished by identifying crucial correlations among measurable parameters at different stages and developing sustainability indicator matrices for the analysis of each life-cycle stage. The findings from this research have great potential to support the future development of coating systems with novel functionalities and improved sustainability.
An Engineering Methodology for Implementing and Testing VLSI (Very Large Scale Integrated) Circuits
1989-03-01
...the pad frame and associated routing, conducted additional testing, and submitted the finished design effort to MOSIS for manufacturing. ... TSTCON allows the XNOR circuitry to enter the TEST register bank; PADIN is a test signal to check operation of the input pad; VCC is the power connection. ... the MOSSIM II simulation program, but the design offered little observability within the circuit. The initial design used 35 pins of a 40-pin pad frame.
Baseline process description for simulating plutonium oxide production for the PreCalc Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pike, J. A.
Savannah River National Laboratory (SRNL) started a multi-year project, the PreCalc Project, to develop a computational simulation of a plutonium oxide (PuO2) production facility with the objective to study the fundamental relationships between morphological and physicochemical properties. This report provides a detailed baseline process description to be used by SRNL personnel and collaborators to facilitate the initial design and construction of the simulation. The PreCalc Project team selected the HB-Line Plutonium Finishing Facility as the basis for a nominal baseline process since the facility is operational and significant model validation data can be obtained. The process boundary, as well as the process and facility design details necessary for multi-scale, multi-physics models, are provided.
High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6
NASA Astrophysics Data System (ADS)
Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; Senior, Catherine A.; Bellucci, Alessio; Bao, Qing; Chang, Ping; Corti, Susanna; Fučkar, Neven S.; Guemas, Virginie; von Hardenberg, Jost; Hazeleger, Wilco; Kodama, Chihiro; Koenigk, Torben; Leung, L. Ruby; Lu, Jian; Luo, Jing-Jia; Mao, Jiafu; Mizielinski, Matthew S.; Mizuta, Ryo; Nobre, Paulo; Satoh, Masaki; Scoccimarro, Enrico; Semmler, Tido; Small, Justin; von Storch, Jin-Song
2016-11-01
Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950-2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, "what are the origins and consequences of systematic model biases?", but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.
Numerical Simulation of nZVI at the Field Scale
NASA Astrophysics Data System (ADS)
Chowdhury, A. I.; Krol, M.; Sleep, B. E.; O'Carroll, D. M.
2014-12-01
Nano-scale zero valent iron (nZVI) has been used at a number of contaminated sites over the last decade. At most of these sites, significant decreases in contaminant concentrations have resulted from the application of nZVI. However, limited work has investigated nZVI mobility at the field scale. In this study a three-dimensional, three-phase, finite-difference numerical simulator (CompSim) was used to simulate nZVI and polymer transport in a variably saturated site. The model accurately predicted the field-observed head data without parameter fitting. In addition, the numerical simulator estimated the amount of nZVI delivered to the saturated and unsaturated zones as well as the phase of the nZVI (i.e., attached or aqueous). The simulation results showed that the injected slurry migrated radially outward from the injection well, and therefore nZVI transport was governed by the injection velocity as well as the viscosity of the injected solution. A suite of sensitivity analyses was performed to investigate the impact of different injection scenarios (e.g., different volumes and injection rates) on nZVI migration. Simulation results showed that injecting a larger volume of nZVI delivered more iron particles to a given distance, but did not extend the travel distance in proportion to the increase in volume. This study suggests that on-site synthesized nZVI particles are mobile in the subsurface and that the numerical simulator can be a valuable tool for the optimal design of nZVI applications.
NASA Astrophysics Data System (ADS)
Churchfield, M.; Wang, Q.; Scholbrock, A.; Herges, T.; Mikkelsen, T.; Sjöholm, M.
2016-09-01
We describe the process of using large-eddy simulations of wind turbine wake flow to help design a wake measurement campaign. The main goal of the experiment is to measure wakes and wake deflection that result from intentional yaw misalignment under a variety of atmospheric conditions at the Scaled Wind Farm Technology facility operated by Sandia National Laboratories in Lubbock, Texas. Prior simulation studies have shown that wake deflection may be used for wind-plant control that maximizes plant power output. In this study, simulations are performed to characterize wake deflection and general behavior before the experiment is performed to ensure better upfront planning. Beyond characterizing the expected wake behavior, we also use the large-eddy simulation to test a virtual version of the lidar we plan to use to measure the wake and better understand our lidar scan strategy options. This work is an excellent example of a “simulation-in-the-loop” measurement campaign.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naitoh, Masanori; Ujita, Hiroshi; Nagumo, Hiroichi
1997-07-01
The Nuclear Power Engineering Corporation (NUPEC) has initiated a long-term program to develop the simulation system "IMPACT" for analysis of hypothetical severe accidents in nuclear power plants. IMPACT employs advanced methods of physical modeling and numerical computation, and can simulate a wide spectrum of scenarios ranging from normal operation to hypothetical, beyond-design-basis-accident events. Designed as a large-scale system of interconnected, hierarchical modules, IMPACT's distinguishing features include mechanistic models based on first principles and high-speed simulation on parallel processing computers. The present plan is a ten-year program starting from 1993, consisting of an initial one-year preparatory phase followed by three technical phases: Phase-1 for development of a prototype system; Phase-2 for completion of the simulation system, incorporating new achievements from basic studies; and Phase-3 for refinement through extensive verification and validation against test results and available real plant data.
1991-08-01
specifications are taken primarily from the 1983 version of the ASME Boiler and Pressure Vessel Code. Other design requirements were developed from standard safe... rules and practices of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code to provide a safe and reliable system
A real-time simulator of a turbofan engine
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Delaat, John C.; Merrill, Walter C.
1989-01-01
A real-time digital simulator of a Pratt and Whitney F100 engine has been developed for real-time code verification and for actuator diagnosis during full-scale engine testing. This self-contained unit can operate in an open-loop stand-alone mode or as part of a closed-loop control system. It can also be used for control system design and development. Tests conducted in conjunction with the NASA Advanced Detection, Isolation, and Accommodation program show that the simulator is a valuable tool for real-time code verification and as a real-time actuator simulator for actuator fault diagnosis. Although currently a small-perturbation model, advances in microprocessor hardware should allow the simulator to evolve into a real-time, full-envelope, full-engine simulation.
A numerical tool for reproducing driver behaviour: experiments and predictive simulations.
Casucci, M; Marchitto, M; Cacciabue, P C
2010-03-01
This paper presents the simulation tool SDDRIVE (Simple Simulation of Driver performance), which is the numerical computerised implementation of the theoretical architecture describing Driver-Vehicle-Environment (DVE) interactions contained in Cacciabue and Carsten [Cacciabue, P.C., Carsten, O. A simple model of driver behaviour to sustain design and safety assessment of automated systems in automotive environments, 2010]. Following a brief description of the basic algorithms that simulate the performance of drivers, the paper presents and discusses a set of experiments carried out in a full-scale virtual reality simulator for validating the simulation. The predictive potential of the tool is then shown by discussing two case studies of DVE interactions, performed in the presence of different driver attitudes in similar traffic conditions.
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.; Zimmerli, Gregory A.
2012-01-01
Good antenna-mode coupling is needed for determining the amount of propellant in a tank through the method of radio frequency mass gauging (RFMG). The antenna configuration and position in a tank are important factors in coupling the antenna to the natural electromagnetic modes. In this study, different monopole and dipole antenna mounting configurations and positions were modeled and responses simulated in a full-scale tank model with the transient solver of CST Microwave Studio (CST Computer Simulation Technology of America, Inc.). The study was undertaken to qualitatively understand the effect of antenna design and placement within a tank on the resulting radio frequency (RF) tank spectrum.
Time-and-Spatially Adapting Simulations for Efficient Dynamic Stall Predictions
2015-09-01
..."Experimental Investigation and Fundamental Understanding of a Full-Scale Slowed Rotor at High Advance Ratios," Journal of the American Helicopter... remains a major roadblock in the design and analysis of conventional rotors as well as new concepts for future vertical lift. Several approaches to reduce the cost of these dynamic stall simulations...
Computer-aided software development process design
NASA Technical Reports Server (NTRS)
Lin, Chi Y.; Levary, Reuven R.
1989-01-01
The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as the modeling methodology. The resulting Software LIfe-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.
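As a rough illustration of the system-dynamics modeling style mentioned above, the Python sketch below integrates a single development phase as stocks (pending and completed tasks) and flows (completion and rework). All rates and the rework fraction are invented placeholders, not SLICS parameters.

```python
# Minimal system-dynamics sketch (Euler integration) of one development
# phase: tasks flow from a pending stock to a completed stock, and a
# fraction of finished work returns as rework.
pending, completed = 1000.0, 0.0
productivity = 25.0      # tasks/week the team can finish (placeholder)
rework_frac = 0.15       # fraction of finished work found defective (placeholder)
dt = 0.25                # time step, weeks

t = 0.0
while pending > 1.0:
    finish_rate = min(productivity, pending / dt)
    rework_rate = rework_frac * finish_rate
    pending += (rework_rate - finish_rate) * dt
    completed += (finish_rate - rework_rate) * dt
    t += dt
print(f"phase completes after ~{t:.1f} weeks")
```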
NASA Technical Reports Server (NTRS)
Haas, J. E.; Kofskey, M. G.; Hotz, G. M.; Futral, S. M., Jr.
1978-01-01
Performance data were obtained experimentally for a 0.4 linear scale version of the LF460 lift fan turbine for a range of scroll inlet total to diffuser exit static pressure ratios at design equivalent speed with simulated fan leakage air. Tests were conducted for full and partial admission operation with three separate combinations of rotor inlet and rotor exit leakage air. Data were compared to the results obtained from previous investigations in which no leakage air was present. Results are presented in terms of mass flow, torque, and efficiency.
NASA Astrophysics Data System (ADS)
Braun, Robert Joseph
The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales, thereby simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below near-future regulatory standards. However, before the advantages of fuel cells can be realized, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life-cycle costs. The models are used to evaluate optimal cell-stack power output and the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found by minimizing the life-cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in the process design of the stack periphery is found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis), 85% cogenerative, and simple economic paybacks of 5-8 years are feasible for 1-2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions related to fuel cell-stack sizing and operating strategy (base-load or load-following and cogeneration or electric-only) are also presented.
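To illustrate the kind of life-cycle-cost minimization over design parameters described above, here is a hedged Python sketch: a brute-force search over cell voltage and fuel utilization against a stand-in cost model. The cost expressions and every coefficient are invented for illustration and do not come from the dissertation.

```python
import itertools

def life_cycle_cost(v_cell, fuel_util):
    """Toy life-cycle cost (placeholder coefficients): lower voltage
    means lower efficiency and higher lifetime fuel cost; higher
    utilization saves fuel but penalizes stack size/cost."""
    efficiency = v_cell / 1.1 * fuel_util              # crude proxy
    fuel_cost = 8000.0 / max(efficiency, 1e-6)         # lifetime fuel, $
    stack_cost = 3000.0 / max(1.05 - fuel_util, 0.05)  # stack-area penalty, $
    return fuel_cost + stack_cost

# Brute-force search over the two design parameters.
candidates = itertools.product(
    [0.60, 0.65, 0.70, 0.75, 0.80],   # cell voltage, V
    [0.60, 0.70, 0.80, 0.85, 0.90],   # fuel utilization, -
)
best = min(candidates, key=lambda p: life_cycle_cost(*p))
print("optimal (voltage, utilization):", best)
```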
Using CellML with OpenCMISS to Simulate Multi-Scale Physiology
Nickerson, David P.; Ladd, David; Hussan, Jagir R.; Safaei, Soroush; Suresh, Vinod; Hunter, Peter J.; Bradley, Christopher P.
2014-01-01
OpenCMISS is an open-source modeling environment aimed, in particular, at the solution of bioengineering problems. OpenCMISS consists of two main parts: a computational library (OpenCMISS-Iron) and a field manipulation and visualization library (OpenCMISS-Zinc). OpenCMISS is designed for the solution of coupled multi-scale, multi-physics problems in a general-purpose parallel environment. CellML is an XML format designed to encode biophysically based systems of ordinary differential equations and both linear and non-linear algebraic equations. A primary design goal of CellML is to allow mathematical models to be encoded in a modular and reusable format to aid reproducibility and interoperability of modeling studies. In OpenCMISS, we make use of CellML models to enable users to configure various aspects of their multi-scale physiological models. This avoids the need for users to be familiar with the OpenCMISS internal code in order to perform customized computational experiments. Examples of this are: cellular electrophysiology models embedded in tissue electrical propagation models; material constitutive relationships for mechanical growth and deformation simulations; time-varying boundary conditions for various problem domains; and fluid constitutive relationships and lumped-parameter models. In this paper, we provide implementation details describing how CellML models are integrated into multi-scale physiological models in OpenCMISS. The external interface OpenCMISS presents to users is also described, including specific examples exemplifying the extensibility and usability that these tools provide to the physiological modeling and simulation community. We conclude with some thoughts on future extension of OpenCMISS to make use of other community-developed information standards, such as FieldML, SED-ML, and BioSignalML. Plans for the integration of accelerator code (graphical processing unit and field programmable gate array) generated from CellML models are also discussed. PMID:25601911
NASA Technical Reports Server (NTRS)
Shiva, S. G.; Shah, A. M.
1980-01-01
The details of digital systems can be conveniently input into a design automation system by means of a hardware description language (HDL). The computer aided design and test (CADAT) system at NASA MSFC is used for LSI design. The digital design language (DDL) was selected as the HDL for the CADAT system. DDL translator output can be used for the hardware implementation of the digital design. Problems of selecting the standard cells from the CADAT standard cell library to realize the logic implied by the DDL description of the system are addressed.
Environmentally Benign Battlefield Effects Black Smoke Simulator
2006-11-01
...tested and results:

Fuel               Oxidizer   Color of Smoke   Density of Smoke
Sugar (Sucrose)    KNO3       Grey             Medium
Dextrin            KNO3       Grey             Thin
Microcrystalline...

...design. 3.5 Initial Prototype Scale Fiberboard Testing. Several quality black smoke formulations were identified in the small pellet testing to...
Generative Representations for Automated Design of Robots
NASA Technical Reports Server (NTRS)
Homby, Gregory S.; Lipson, Hod; Pollack, Jordan B.
2007-01-01
A method of automated design of complex, modular robots involves an evolutionary process in which generative representations of designs are used. The term "generative representations" as used here signifies, loosely, representations that consist of or include algorithms, computer programs, and the like, wherein encoded designs can reuse elements of their encoding and thereby evolve toward greater complexity. Automated design of robots through synthetic evolutionary processes has already been demonstrated, but it is not clear whether genetically inspired search algorithms can yield designs that are sufficiently complex for practical engineering. The ultimate success of such algorithms as tools for automation of design depends on the scaling properties of the representations of designs. A nongenerative representation (one in which each element of the encoded design is used at most once in translating to the design) scales linearly with the number of elements. Search algorithms that use nongenerative representations quickly become intractable (search times vary approximately exponentially with the number of design elements), and thus are not amenable to scaling to complex designs. Generative representations are compact representations and were devised as a means to circumvent this fundamental restriction on scalability. In the present method, a robot is defined by a compact programmatic form (its generative representation) and the evolutionary variation takes place on this form. The evolutionary process is an iterative one, wherein each cycle consists of the following steps: 1. Generative representations are generated in an evolutionary subprocess. 2. Each generative representation is a program that, when compiled, produces an assembly procedure. 3. In a computational simulation, a constructor executes an assembly procedure to generate a robot. 4. A physical-simulation program tests the performance of the simulated constructed robot, evaluating the performance according to a fitness criterion to yield a figure of merit that is fed back into the evolutionary subprocess of the next iteration. In comparison with prior approaches to automated evolutionary design of robots, the use of generative representations offers two advantages: First, a generative representation enables the reuse of components in regular and hierarchical ways and thereby serves as a systematic means of creating more complex modules out of simpler ones. Second, the evolved generative representation may capture intrinsic properties of the design problem, so that variations in the representations move through the design space more effectively than do equivalent variations in a nongenerative representation. This method has been demonstrated by using it to design some robots that move, variously, by walking, rolling, or sliding. Some of the robots were built (see figure). Although these robots are very simple in comparison with robots designed by humans, their structures are more regular, modular, hierarchical, and complex than are those of evolved designs of comparable functionality synthesized by use of nongenerative representations.
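The four-step cycle above maps naturally onto a generate-compile-construct-evaluate loop. The following Python sketch is a toy illustration under invented assumptions: a genome holding a reusable module and a repeat count stands in for the generative program, and the fitness function stands in for the physical simulation. It is not the authors' system.

```python
import random

def compile_representation(genome):
    """Expand a generative genome into an assembly procedure; the
    repeated reuse of 'module' is what makes it generative."""
    return genome["module"] * genome["repeats"]

def construct(assembly):
    """Constructor: turn the assembly procedure into a 'robot'
    (a list of parts in this toy model)."""
    return list(assembly)

def fitness(robot):
    """Stand-in physical simulation: reward leg-rich, longer bodies."""
    return robot.count("L") + 0.1 * len(robot)

def mutate(genome):
    child = dict(genome)
    if random.random() < 0.5:
        child["repeats"] = max(1, child["repeats"] + random.choice([-1, 1]))
    else:
        child["module"] = random.choice(["L", "B", "LB"])
    return child

# Iterative evolutionary subprocess: select the best, mutate, repeat.
population = [{"module": "B", "repeats": 2} for _ in range(20)]
for generation in range(50):
    scored = sorted(population,
                    key=lambda g: fitness(construct(compile_representation(g))),
                    reverse=True)
    population = [mutate(random.choice(scored[:5])) for _ in range(20)]
print(max(fitness(construct(compile_representation(g))) for g in population))
```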
Modelling and scale-up of chemical flooding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Lake, L.W.; Sepehrnoori, K.
1990-03-01
The objective of this research is to develop, validate, and apply a comprehensive chemical flooding simulator for chemical recovery processes involving surfactants, polymers, and alkaline chemicals in various combinations. This integrated program includes laboratory experiments, physical property modelling, scale-up theory, and numerical analysis as necessary and integral components of the simulation activity. We have continued to develop, test, and apply our chemical flooding simulator (UTCHEM) to a wide variety of laboratory and reservoir problems involving tracers, polymers, polymer gels, surfactants, and alkaline agents. Part I is an update on the Application of Higher-Order Methods in Chemical Flooding Simulation. This update focuses on the comparison of grid orientation effects for four different numerical methods implemented in UTCHEM. Part II is on Simulation Design Studies and is a continuation of Saad's Big Muddy surfactant pilot simulation study reported last year. Part III reports on the Simulation of Gravity Effects under conditions similar to those of some of the oil reservoirs in the North Sea. Part IV is on Determining Oil Saturation from Interwell Tracers, in which UTCHEM is used for large-scale interwell tracer tests. A systematic procedure for estimating oil saturation from interwell tracer data is developed and a specific example based on actual field data provided by Sun E P Co. is given. Part V reports on the Application of Vectorization and Microtasking for Reservoir Simulation. Part VI reports on Alkaline Simulation. The alkaline/surfactant/polymer flood compositional simulator (UTCHEM) reported last year is further extended to include reactions involving chemical species containing magnesium, aluminium, and silicon as constituent elements. Part VII reports on the permeability and trapping of microemulsion.
In-wheel hub SRM simulation and analysis
NASA Astrophysics Data System (ADS)
Sager, Milton W., III
Is it feasible to replace the conventional gasoline engine and drive system of a motorcycle with an electric switched reluctance motor (SRM) by placing the SRM inside the rear wheel, thereby removing the need for a clutch, chain, transmission, gears, and sprockets? The goal of this thesis is to study the theoretical aspects of prototyping and analyzing an in-wheel electric hub motor to replace the standard gasoline engine traditionally found on motorcycles. With the recent push for clean energy, electric vehicles are becoming more common. All currently produced electric motorcycles use conventional, prefabricated electric motors connected to the traditional sprocket-and-chain design. This greatly restricts the efficiency and range of these motorcycles. My design stands apart by turning the rear wheel into an SRM, which uses electromagnets around a non-magnetic core to convert electrical energy into the mechanical force driving the rear wheel. To my knowledge, there is currently no motorcycle designed with an in-wheel hub SRM. A three-phase SRM and a five-phase SRM will be simulated and analyzed using MATLAB with Simulink. Factors such as friction, weight, and power will be taken into account in order to create a realistic simulation, as if the motor were inside the rear wheel of a motorcycle. Since time and finances will not allow for a full-scale build, a scaled model of the three-phase SRM will be attempted for demonstration purposes.
Ryan, Patrick B; Schuemie, Martijn J
2013-10-01
There has been only limited evaluation of statistical methods for identifying safety risks of drug exposure in observational healthcare data. Simulations can support empirical evaluation, but have not been shown to adequately model the real-world phenomena that challenge observational analyses. The objectives were to design and evaluate a probabilistic framework (OSIM2) for generating simulated observational healthcare data, and to use these data for evaluating the performance of methods in identifying associations between drug exposure and health outcomes of interest. Seven observational designs, including case-control, cohort, self-controlled case series, and self-controlled cohort designs, were applied to 399 drug-outcome scenarios in 6 simulated datasets with no effect and injected relative risks of 1.25, 1.5, 2, 4, and 10, respectively. Longitudinal data for 10 million simulated patients were generated using a model derived from an administrative claims database, with associated demographics, periods of drug exposure derived from pharmacy dispensings, and medical conditions derived from diagnoses on medical claims. Simulation validation was performed through descriptive comparison with the real source data. Method performance was evaluated using area under the ROC curve (AUC), bias, and mean squared error. OSIM2 replicates the prevalence and types of confounding observed in real claims data. When simulated data are injected with relative risks (RR) ≥ 2, all designs have good predictive accuracy (AUC > 0.90), but when RR < 2, no method achieves perfect predictions. Each method exhibits a different bias profile, which changes with the effect size. OSIM2 can support methodological research. Results from simulation suggest method operating characteristics are far from nominal properties.
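For readers unfamiliar with the evaluation metric, the sketch below shows in Python how AUC can be computed for one design: the scores are a design's estimated relative risks, and the labels mark whether a scenario carried an injected signal. The score distributions are synthetic placeholders, not OSIM2 output.

```python
import random

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy stand-in for one design's estimates: null scenarios score near
# RR = 1, injected scenarios score near the true effect (RR = 2) plus noise.
random.seed(0)
labels, scores = [], []
for _ in range(200):
    labels.append(0)
    scores.append(random.lognormvariate(0.0, 0.3))   # estimated RR, null
for _ in range(200):
    labels.append(1)
    scores.append(random.lognormvariate(0.69, 0.3))  # estimated RR, true RR = 2
print("AUC:", round(auc(labels, scores), 3))
```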
Development of the CSI phase-3 evolutionary model testbed
NASA Technical Reports Server (NTRS)
Gronet, M. J.; Davis, D. A.; Tan, M. K.
1994-01-01
This report documents the development effort for the reconfiguration of the Controls-Structures Integration (CSI) Evolutionary Model (CEM) Phase-2 testbed into the CEM Phase-3 configuration. This step responds to the need to develop and test CSI technologies associated with typical planned earth science and remote sensing platforms. The primary objective of the CEM Phase-3 ground testbed is to simulate the overall on-orbit dynamic behavior of the EOS AM-1 spacecraft. Key elements of the objective include approximating the low-frequency appendage dynamic interaction of EOS AM-1, allowing for the changeout of components, and simulating the free-free on-orbit environment using an advanced suspension system. The fundamentals of appendage dynamic interaction are reviewed. A new version of the multiple scaling method is used to design the testbed to have the full-scale geometry and dynamics of the EOS AM-1 spacecraft, but at one-tenth the weight. The testbed design is discussed, along with the testing of the solar array, high gain antenna, and strut components. Analytical performance comparisons show that the CEM Phase-3 testbed simulates the EOS AM-1 spacecraft with good fidelity for the important parameters of interest.
NASA Technical Reports Server (NTRS)
Iversen, J. D.
1991-01-01
The aeolian wind tunnel is a special case of a larger subset of the wind tunnel family which is designed to simulate the atmospheric surface layer winds to small scale (a member of this larger subset is usually called an atmospheric boundary layer wind tunnel or environmental wind tunnel). The atmospheric boundary layer wind tunnel is designed to simulate, as closely as possible, the mean velocity and turbulence that occur naturally in the atmospheric boundary layer (defined as the lowest portion of the atmosphere, of the order of 500 m, in which the winds are most greatly affected by surface roughness and topography). The aeolian wind tunnel is used for two purposes: to simulate the physics of the saltation process and to model at small scale the erosional and depositional processes associated with topographic surface features. For purposes of studying aeolian effects on the surface of Mars and Venus as well as on Earth, the aeolian wind tunnel continues to prove to be a useful tool for estimating wind speeds necessary to move small particles on the three planets as well as to determine the effects of topography on the evolution of aeolian features such as wind streaks and dune patterns.
Low-floor bus design preferences of walking aid users during simulated boarding and alighting.
D'souza, Clive; Paquet, Victor; Lenker, James; Steinfeld, Edward; Bareria, Piyush
2012-01-01
Low-floor buses represent a significant improvement in accessible public transit for passengers with limited mobility. However, there is still a need for research on the inclusive design of transit buses to identify specific low-floor bus design conditions that are either particularly accommodating or challenging for passengers with functional and mobility impairments. These include doorway locations, seating configuration, and the large front wheel-well covers that collectively impact the boarding, alighting, and interior movement of passengers. Findings are presented from a laboratory study that used a static full-scale simulation of a low-floor bus to evaluate the impact of seating configuration and crowding on interior movement and accessibility for individuals with and without walking aids (n=41). Simulated bus journeys that included boarding, fare payment, seating, and alighting were performed. Results from video observations and subjective assessments showed differences in boarding and alighting performance and in users' perceptions of task difficulty. The need for assistive design features (e.g., handholds, stanchions), legroom, and stowage space for walking aids was evident. These results demonstrate that specific design conditions in low-floor buses can significantly impact design preference among those who use walking aids. Consideration of ergonomics and inclusive design can therefore be used to improve the design of low-floor buses.
Evaluation of dispersion strengthened nickel-base alloy heat shields for space shuttle application
NASA Technical Reports Server (NTRS)
Johnson, R., Jr.; Killpatrick, D. H.
1976-01-01
The results obtained in a program to evaluate dispersion-strengthened nickel-base alloys for use in a metallic radiative thermal protection system operating at surface temperatures to 1477 K for the space shuttle were presented. Vehicle environments having critical effects on the thermal protection system are defined; TD Ni-20Cr characteristics of material used in the current study are compared with previous results; cyclic load, temperature, and pressure effects on sheet material residual strength are investigated; the effects of braze reinforcement in improving the efficiency of spotwelded joints are evaluated; parametric studies of metallic radiative thermal protection systems are reported; and the design, instrumentation, and testing of full scale subsize heat shield panels in two configurations are described. Initial tests of full scale subsize panels included simulated meteoroid impact tests, simulated entry flight aerodynamic heating, programmed differential pressure loads and temperatures simulating mission conditions, and acoustic tests simulating sound levels experienced during boost flight.
NASA Astrophysics Data System (ADS)
Ying, Shen; Li, Lin; Gao, Yurong
2009-10-01
Spatial visibility analysis is an important direction in the study of pedestrian behavior, because visual perception of space is the most direct way to acquire environmental information and guide one's actions. Based on agent modeling and an up-to-bottom method, this paper develops a framework for analyzing pedestrian flow as a function of visibility. We use viewsheds for the visibility analysis and impose the resulting parameters on an agent simulation to direct agent motion in urban space. We analyze pedestrian behavior at the micro- and macro-scales of urban open space. At the micro-scale of an urban street or district, an individual agent uses visual affordance to determine its direction of motion. At the macro-scale, we compare the distribution of pedestrian flow with the spatial configuration of the urban environment, and mine the relationship between pedestrian flow and the distribution of urban facilities and functions. The paper first computes the visibility conditions at vantage points in urban open space, such as the street network, and quantifies the visibility parameters. The multiple agents use these visibility parameters to decide their directions of motion, and the pedestrian flow finally reaches a stable state through the multi-agent simulation of the urban environment. We then compare the morphology of the visibility parameters and the pedestrian distribution with the urban function and facility layout to confirm their consistency, which can be used for decision support in urban design.
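A minimal Python sketch of the viewshed-guided agent step described above: an agent scores each candidate direction by how far it can see along that direction on an occupancy grid and moves toward the most open view. The grid, the ray depth, and the scoring rule are invented for illustration.

```python
import numpy as np

# Toy occupancy grid: 1 = open, 0 = blocked. A crude stand-in for the
# viewshed parameters described above.
grid = np.array([
    [1, 1, 1, 1, 1],
    [1, 0, 0, 1, 1],
    [1, 1, 1, 1, 0],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 1, 1],
])

def visibility(pos, direction, depth=3):
    """Count open cells visible along a short ray from pos."""
    r, c = pos
    score = 0
    for step in range(1, depth + 1):
        rr, cc = r + step * direction[0], c + step * direction[1]
        if not (0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]):
            break
        if grid[rr, cc] == 0:        # line of sight blocked
            break
        score += 1
    return score

def next_direction(pos):
    """Agent step: pick the candidate move with the most open view."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return max(moves, key=lambda d: visibility(pos, d))

print(next_direction((2, 2)))   # agent at the grid centre picks the most open move
```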
Mantle Convection on Modern Supercomputers
NASA Astrophysics Data System (ADS)
Weismüller, J.; Gmeiner, B.; Huber, M.; John, L.; Mohr, M.; Rüde, U.; Wohlmuth, B.; Bunge, H. P.
2015-12-01
Mantle convection is the cause of plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic of mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, design and implementation of simulation models for next generation large-scale architectures is handled successfully only in an interdisciplinary context. A new priority program - named SPPEXA - by the German Research Foundation (DFG) addresses this issue, and brings together computer scientists, mathematicians and application scientists around grand challenges in HPC. Here we report from the TERRA-NEO project, which is part of the high visibility SPPEXA program, and a joint effort of four research groups. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10^12 grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection and assess the impact of small scale processes on global mantle flow.
OpenSHS: Open Smart Home Simulator.
Alshammari, Nasser; Alshammari, Talal; Sedky, Mohamed; Champion, Justin; Bauer, Carolin
2017-05-02
This paper develops a new hybrid, open-source, cross-platform 3D smart home simulator, OpenSHS, for dataset generation. OpenSHS offers an opportunity for researchers in the field of the Internet of Things (IoT) and machine learning to test and evaluate their models. Following a hybrid approach, OpenSHS combines advantages from both interactive and model-based approaches. This approach reduces the time and effort required to generate simulated smart home datasets. We have designed a replication algorithm for extending and expanding a dataset: a small sample dataset produced by OpenSHS can be extended without affecting the logical order of the events. The replication provides a solution for generating large representative smart home datasets. We have built an extensible library of smart devices that facilitates the simulation of current and future smart home environments. Our tool divides the dataset generation process into three distinct phases: first, design: the researcher designs the initial virtual environment by building the home, importing smart devices, and creating contexts; second, simulation: the participant simulates his/her context-specific events; and third, aggregation: the researcher applies the replication algorithm to generate the final dataset. We conducted a study to assess the ease of use of our tool on the System Usability Scale (SUS).
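The replication idea, extending a short recorded sample into a longer dataset while preserving the order of events, can be sketched as follows in Python. The event tuples and the fixed inter-replica gap are assumptions for illustration; OpenSHS's actual algorithm may differ.

```python
from datetime import datetime, timedelta

def replicate(events, copies, gap=timedelta(minutes=5)):
    """Extend a small event log by appending time-shifted copies, so the
    logical order of events within each replica is preserved."""
    out = list(events)
    for _ in range(copies):
        last = out[-1][0]
        offset = last + gap - events[0][0]   # shift each replica past the log's end
        out.extend((t + offset, device, state) for t, device, state in events)
    return out

# Hypothetical sample: (timestamp, device, state) triples.
sample = [
    (datetime(2017, 5, 2, 8, 0), "bedroom_light", "on"),
    (datetime(2017, 5, 2, 8, 5), "kitchen_door", "open"),
    (datetime(2017, 5, 2, 8, 7), "bedroom_light", "off"),
]
extended = replicate(sample, copies=2)
print(len(extended), extended[-1])
```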
NASA Technical Reports Server (NTRS)
Kuhl, Christopher A.
2008-01-01
The Aerial Regional-Scale Environmental Survey (ARES) is a Mars exploration mission concept that utilizes a rocket-propelled airplane to take scientific measurements of atmospheric, surface, and subsurface phenomena. The liquid rocket propulsion system design has matured through several design cycles and trade studies since the inception of the ARES concept in 2002. This paper describes the process of selecting a bipropellant system over other propulsion system options, and provides details on the rocket system design, thrusters, propellant tank and PMD design, propellant isolation, and flow control hardware. The paper also summarizes computer model results of thruster plume interactions and simulated flight performance. The airplane has a 6.25 m wingspan with a total wet mass of 185 kg and has the ability to fly over 600 km through the atmosphere of Mars with 45 kg of MMH/MON3 propellant.
Using Generative Representations to Evolve Robots. Chapter 1
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2004-01-01
Recent research has demonstrated the ability of evolutionary algorithms to automatically design both the physical structure and software controller of real physical robots. One of the challenges for these automated design systems is to improve their ability to scale to the high complexities found in real-world problems. Here we claim that for automated design systems to scale in complexity they must use a representation which allows for the hierarchical creation and reuse of modules, which we call a generative representation. Not only is the ability to reuse modules necessary for functional scalability, but it is also valuable for improving efficiency in testing and construction. We then describe an evolutionary design system with a generative representation capable of hierarchical modularity and demonstrate it for the design of locomoting robots in simulation. Finally, results from our experiments show that evolution with our generative representation produces better robots than those evolved with a non-generative representation.
NASA Astrophysics Data System (ADS)
Martínez-Lucas, G.; Pérez-Díaz, J. I.; Sarasúa, J. I.; Cavazzini, G.; Pavesi, G.; Ardizzon, G.
2017-04-01
This paper presents a dynamic simulation model of a laboratory-scale pumped-storage power plant (PSPP) operating in pumping mode with variable speed. The model considers the dynamic behavior of the conduits by means of an elastic water column approach, and synthetically generates both pressure and torque pulsations that reproduce the operation of the hydraulic machine in its instability region. The pressure and torque pulsations are each generated from a different set of sinusoidal functions. These functions were calibrated from the results of a CFD model, which was in turn validated against experimental data. The simulation model results match the numerical results of the CFD model with reasonable accuracy. The pump-turbine model (including the functions used to generate the pressure and torque pulsations) was up-scaled by hydraulic similarity according to the design parameters of a real PSPP and included in a dynamic simulation model of that PSPP. Preliminary conclusions on the impact of unstable operating conditions on penstock fatigue were obtained by means of a Monte Carlo simulation-based fatigue analysis.
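A hedged Python sketch of the synthetic-pulsation idea: each signal is a sum of sinusoids whose amplitudes, frequencies, and phases would, in the paper's approach, come from CFD calibration. The numbers below are placeholders, not the paper's calibrated values.

```python
import numpy as np

# Each pulsation signal is a sum of sinusoids; the (amplitude,
# frequency, phase) triples stand in for CFD-calibrated values.
pressure_terms = [(0.04, 4.2, 0.0), (0.015, 8.4, 1.3)]   # (amp [bar], f [Hz], phase [rad])
torque_terms   = [(12.0, 4.2, 0.7), (5.0, 12.6, 2.1)]    # (amp [N·m], f [Hz], phase [rad])

def pulsation(t, terms):
    """Evaluate the synthetic pulsation at times t (array, seconds)."""
    return sum(a * np.sin(2 * np.pi * f * t + ph) for a, f, ph in terms)

t = np.linspace(0.0, 2.0, 4001)
p_fluct = pulsation(t, pressure_terms)   # added to the mean draft-tube pressure
q_fluct = pulsation(t, torque_terms)     # added to the mean shaft torque
print(p_fluct.max(), q_fluct.max())
```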
On the Scaling of Small, Heat Simulated Jet Noise Measurements to Moderate Size Exhaust Jets
NASA Technical Reports Server (NTRS)
McLaughlin, Dennis K.; Bridges, James; Kuo, Ching-Wen
2010-01-01
Modern military aircraft jet engines are designed with variable geometry nozzles to provide optimum thrust in different operating conditions, depending on the flight envelope. However, acoustic measurements for such nozzles are scarce, due to the cost involved in making full-scale measurements and the lack of details about the exact geometry of these nozzles. Thus the present effort at The Pennsylvania State University and the NASA Glenn Research Center, in partnership with GE Aviation, aims to study and characterize the acoustic field produced by supersonic jets issuing from converging-diverging military-style nozzles. An equally important objective is to validate a methodology for using data obtained from small- and moderate-scale experiments to reliably predict the most important components of full-scale engine noise. The experimental results presented show reasonable agreement between small-scale and moderate-scale jet acoustic data, as well as between heated jets and heat-simulated ones. Unresolved issues are identified, however, that are currently receiving our attention, in particular the effect of the small bypass-ratio airflow. Future activities will identify and test promising noise reduction techniques in an effort to predict how well such concepts will work with full-scale engines in flight conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stepinski, Dominique C.; Vandegrift, G. F.
2015-09-30
Argonne is assisting SHINE Medical Technologies (SHINE) in their efforts to develop SHINE, an accelerator-driven process that will utilize a uranyl-sulfate solution for the production of fission product Mo-99. An integral part of the process is the development of a column for the separation and recovery of Mo-99, followed by a concentration column to reduce the product volume from 15-25 L to <1 L. Argonne has collected data from batch studies and breakthrough column experiments to utilize the VERSE (Versatile Reaction Separation) simulation program (Purdue University) to design plant-scale product recovery and concentration processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Amy B.; Zyvoloski, George Anthony; Weaver, Douglas James
The simulation work presented in this report supports DOE-NE Used Fuel Disposition Campaign (UFDC) goals related to the development of drift-scale in-situ field testing of heat-generating nuclear waste (HGNW) in salt formations. Numerical code verification and validation is an important part of the lead-up to field testing, allowing exploration of potential heater emplacement designs, monitoring locations, and perhaps most importantly the ability to predict heat and mass transfer around an evolving test. Such predictions are crucial for the design and location of sampling and monitoring that can be used to validate our understanding of a drift-scale test that is likely to span several years.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pourrezaei, K.
1982-01-01
A neutral beam probe capable of measuring plasma space potential in a fully 3-dimensional magnetic field geometry has been developed. This neutral beam was successfully used to measure an arc target plasma contained within the ALEX baseball magnetic coil. A computer simulation of the experiment was performed to refine the experimental design and to develop a numerical model for scaling the ALEX neutral beam probe to other cases of fully 3-dimensional magnetic fields. Based on this scaling, a 30 to 50 keV neutral cesium beam probe capable of measuring space potential in the thermal barrier region of TMX Upgrade was designed.
Noise Reduction Techniques and Scaling Effects towards Photon Counting CMOS Image Sensors
Boukhayma, Assim; Peizerat, Arnaud; Enz, Christian
2016-01-01
This paper presents an overview of the read noise in CMOS image sensors (CISs) based on four-transistor (4T) pixels, column-level amplification, and correlated multiple sampling. Starting from the input-referred noise analytical formula, process-level optimizations, device choices, and circuit techniques at the pixel and column level of the readout chain are derived and discussed. The noise reduction techniques that can be implemented at the column and pixel level are verified by transient noise simulations, measurements, and results from recently published low-noise CISs. We show how recently reported process refinement, leading to the reduction of the sense-node capacitance, can be combined with an optimal in-pixel source follower design to reach a sub-0.3 e− rms read noise at room temperature. This paper also discusses the impact of technology scaling on the CIS read noise. It shows how designers can take advantage of scaling and how the Metal-Oxide-Semiconductor (MOS) transistor gate leakage tunneling current appears as a challenging limitation. For this purpose, both simulation results of the gate leakage current and 1/f noise data reported from different foundries and technology nodes are used.
Argonne simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-04-01
A simulation framework has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed to run on parallel computers and distributed (networked) computer systems; however, a version for a stand-alone workstation is also available. The ITS simulator includes an Expert Driver Model (EDM) of instrumented "smart" vehicles with in-vehicle navigation units. The EDM is capable of performing optimal route planning and communicating with Traffic Management Centers (TMC). A dynamic road-map database is used for optimal route planning, where the data is updated periodically to reflect any changes in road or weather conditions. The TMC has probe-vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces that incorporate human-factors studies to support safety and operational research. Realistic modeling of variations of the posted driving speed is based on human-factors studies that take into consideration weather, road conditions, the driver's personality and behavior, and vehicle type. The simulator has been developed on a distributed system of networked UNIX computers, but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of the developed simulator is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. Vehicle processes interact with each other and with ITS components by exchanging messages. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
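The message-exchange architecture can be sketched in Python with threads standing in for the autonomous vehicle processes and a queue standing in for the TMC's message channel. The travel-time model and route encoding are invented for illustration and are not the framework's actual interfaces.

```python
import queue
import threading

# Toy sketch: each vehicle runs as its own thread (standing in for the
# autonomous processes described above) and reports its link travel
# time to a Traffic Management Center over a shared message queue.
tmc_inbox = queue.Queue()

def vehicle(vehicle_id, route):
    for link in route:
        link_time = 10 + 2 * len(link)          # placeholder travel-time model
        tmc_inbox.put((vehicle_id, link, link_time))

threads = [threading.Thread(target=vehicle, args=(i, ["A-B", "B-C"]))
           for i in range(3)]
for th in threads:
    th.start()
for th in threads:
    th.join()

# The TMC drains its inbox and could aggregate link times into advisories.
while not tmc_inbox.empty():
    print("TMC received:", tmc_inbox.get())
```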
Three-Dimensional Hydrodynamic Simulations of OMEGA Implosions
NASA Astrophysics Data System (ADS)
Igumenshchev, I. V.
2016-10-01
The effects of large-scale (with Legendre modes less than 30) asymmetries in OMEGA direct-drive implosions caused by laser illumination nonuniformities (beam-power imbalance and beam mispointing and mistiming) and target offset, mount, and layer nonuniformities were investigated using three-dimensional (3-D) hydrodynamic simulations. Simulations indicate that the performance degradation in cryogenic implosions is caused mainly by target offsets (~10 to 20 μm), beam-power imbalance (σrms ~10%), and initial target asymmetry (~5% ρR variation), which distort implosion cores, resulting in reduced hot-spot confinement and increased residual kinetic energy of the stagnated target. The ion temperature inferred from the width of simulated neutron spectra is influenced by bulk fuel motion in the distorted hot spot and can show up to a 2-keV apparent temperature increase. Similar temperature variations along different lines of sight are observed. Simulated x-ray images of implosion cores in the 4- to 8-keV energy range show good agreement with experiments. Demonstrating hydrodynamic equivalence to ignition designs on OMEGA requires reducing large-scale target and laser-imposed nonuniformities, minimizing target offset, and employing highly efficient mid-adiabat (α = 4) implosion designs that mitigate cross-beam energy transfer (CBET) and suppress short-wavelength Rayleigh-Taylor growth. These simulations use a new low-noise 3-D Eulerian hydrodynamic code, ASTER. Existing 3-D hydrodynamic codes for direct-drive implosions currently lack CBET and noise-free ray-trace laser-deposition algorithms. ASTER overcomes these limitations using a simplified 3-D laser-deposition model, which includes CBET and is capable of simulating the effects of beam-power imbalance, beam mispointing, mistiming, and target offset. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
Main steam line break accident simulation of APR1400 using the model of ATLAS facility
NASA Astrophysics Data System (ADS)
Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.
2018-02-01
A main steam line break simulation for the APR1400, an advanced PWR design, has been performed using the RELAP5 code. The simulation was conducted with a model of the thermal-hydraulic test facility ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, whose initial conditions and assumptions for the analysis were utilized in performing the simulation and analysis of the selected parameters. The objective of this work was to conduct a benchmark activity by comparing the simulation results of the CESEC-III code, a conservative-approach code, with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of the selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis by comparison with another best-estimate code. Uncertainties arising from the ATLAS model should be minimized by taking into account more specific data in developing the APR1400 model.
A Goddard Multi-Scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.;
2008-01-01
Numerical cloud-resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that numerical weather prediction (NWP) and regional-scale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol, and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through the Earth satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art weather research and forecasting model (WRF), and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with the unified physics/simulator and examples is presented in this article.
Optimal output fast feedback in two-time scale control of flexible arms
NASA Technical Reports Server (NTRS)
Siciliano, B.; Calise, A. J.; Jonnalagadda, V. R. P.
1986-01-01
Control of lightweight flexible arms moving along predefined paths can be successfully synthesized on the basis of a two-time scale approach. A model following control can be designed for the reduced order slow subsystem. The fast subsystem is a linear system in which the slow variables act as parameters. The flexible fast variables which model the deflections of the arm along the trajectory can be sensed through strain gage measurements. For full state feedback design the derivatives of the deflections need to be estimated. The main contribution of this work is the design of an output feedback controller which includes a fixed order dynamic compensator, based on a recent convergent numerical algorithm for calculating LQ optimal gains. The design procedure is tested by means of simulation results for the one link flexible arm prototype in the laboratory.
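As a side note to the LQ design above, the full-state LQR baseline against which such output-feedback algorithms are typically benchmarked can be computed directly from the continuous-time algebraic Riccati equation. The sketch below uses hypothetical modal parameters for a two-mode flexible fast subsystem; all matrices are invented for illustration, and the paper's fixed-order dynamic compensator algorithm is iterative and not reproduced here.

```python
# Illustrative LQ gain computation for a fast-subsystem model (hypothetical
# matrices); only the full-state LQR baseline is shown, not the fixed-order
# output-feedback algorithm from the paper.
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-mode flexible fast subsystem: x = [d1, d2, d1_dot, d2_dot]
w1, w2, z = 12.0, 35.0, 0.005          # assumed modal frequencies (rad/s), damping
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.diag([w1**2, w2**2]), -2 * z * np.diag([w1, w2])]])
B = np.array([[0.0], [0.0], [1.0], [0.5]])   # assumed input influence
Q = np.diag([1e3, 1e3, 1.0, 1.0])            # penalize deflections most
R = np.array([[0.1]])

P = solve_continuous_are(A, B, Q, R)         # solve A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)              # optimal full-state gain, u = -Kx
print("LQ gain:", K)
```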
Bench-Scale Filtration Testing in Support of the Pretreatment Engineering Platform (PEP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Billing, Justin M.; Daniel, Richard C.; Kurath, Dean E.
Pacific Northwest National Laboratory (PNNL) has been tasked by Bechtel National Inc. (BNI) on the River Protection Project-Hanford Tank Waste Treatment and Immobilization Plant (RPP-WTP) project to perform research and development activities to resolve technical issues identified for the Pretreatment Facility (PTF). The Pretreatment Engineering Platform (PEP) was designed, constructed and operated as part of a plan to respond to issue M12, “Undemonstrated Leaching Processes.” The PEP is a 1/4.5-scale test platform designed to simulate the WTP pretreatment caustic leaching, oxidative leaching, ultrafiltration solids concentration, and slurry washing processes. The PEP testing program specifies that bench-scale testing is to be performed in support of specific operations, including filtration, caustic leaching, and oxidative leaching.
NASA Astrophysics Data System (ADS)
Piniewski, Mikołaj
2016-05-01
The objective of this study was to apply a previously developed large-scale and high-resolution SWAT model of the Vistula and the Odra basins, calibrated with a focus on natural flow simulation, in order to assess the impact of three different dam reservoirs on streamflow using the Indicators of Hydrologic Alteration (IHA). A tailored spatial calibration approach was designed, in which calibration was focused on a large set of relatively small, non-nested sub-catchments with semi-natural flow regimes. These were classified into calibration clusters based on flow statistics similarity. After calibration and validation gave overall positive results, the calibrated parameter values were transferred to the remaining part of the basins using an approach based on the hydrological similarity of donor and target catchments. The calibrated model was applied in three case studies with the purpose of assessing the effect of dam reservoirs (the Włocławek, Siemianówka and Czorsztyn Reservoirs) on streamflow alteration. Both the assessment based on gauged streamflow (Before-After design) and the one based on simulated natural streamflow showed large alterations in selected flow statistics related to magnitude, duration, high and low flow pulses and rate of change. The benefits of using a large-scale and high-resolution hydrological model for the assessment of streamflow alteration include: (1) providing an alternative or complementary approach to the classical Before-After designs, (2) isolating the climate variability effect from the dam (or any other source of alteration) effect, and (3) providing a practical tool that can be applied at a range of spatial scales over a large area, such as a country, in a uniform way. Thus, the presented approach can be applied to design more natural flow regimes, which is crucial for river and floodplain ecosystem restoration in the context of the European Union's policy on environmental flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T.; Jessee, Matthew Anderson
The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.
DOT National Transportation Integrated Search
1992-09-01
The Louisiana Transportation Research Center has established a Pavement Research Facility (PRF). The core of the PRF is a testing machine that is capable of conducting full-scale simulated and accelerated load testing of pavement materials, construct...
Zheng, Chaohui; Liu, Yi; Bluemling, Bettina; Mol, Arthur P J; Chen, Jining
2015-01-01
To minimize the negative environmental impact of livestock production, policy-makers face the challenge of designing and implementing more effective policy instruments for livestock farmers at different scales. This research builds an assessment framework on the basis of an agent-based model, named ANEM, to explore the nutrient mitigation potentials of five policy instruments, using pig production in Zhongjiang county, southwest China, as the empirical case. The effects of different policy scenarios are simulated and compared using four indicators, differentiating between small-, medium- and large-scale pig farms. Technology standards, biogas subsidies and information provisioning prove to be the most effective policies, while pollution fees and manure markets fail to environmentally improve manure management in pig livestock farming. Medium-scale farms are the most relevant scale category for a more environmentally sound development of Chinese livestock production. A number of policy recommendations are formulated in conclusion, and some limitations and prospects of the simulations are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.
Reese, Caitlin S; Suhr, Julie A; Riddle, Tara L
2012-03-01
Prior research shows that Digit Span is a useful embedded measure of malingering. However, the Wechsler Adult Intelligence Scale-IV (Wechsler, 2008) altered Digit Span in meaningful ways, necessitating another look at Digit Span as an embedded measure of malingering. Using a simulated malingerer design, we examined the predictive accuracy of existing Digit Span validity indices and explored whether patterns of performance utilizing the new version would provide additional evidence for malingering. Undergraduates with a history of mild head injury performed with best effort or simulated impaired cognition and were also compared with a large sample of non-head-injured controls. Previously established cutoffs for the age-corrected scaled score and Reliable Digit Span (RDS) performed similarly in the present samples. Patterns of RDS length using all three subscales of the new scale were different in malingerers when compared with both head-injured and non-head-injured controls. Two potential alternative RDS scores were introduced, which showed better sensitivity than the traditional RDS, while retaining specificity to malingering.
Efficient implicit LES method for the simulation of turbulent cavitating flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Schmidt, Steffen J.; Hickel, Stefan
2016-07-01
We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test-cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.
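The idea of a sensor functional that localizes numerical dissipation can be illustrated with a generic Jameson-type pressure switch. The sketch below is not the paper's functional for cavitating mixtures, only a minimal 1D analogue showing how a non-dissipative central flux is blended with a dissipative one near discontinuities.

```python
# Generic 1D shock-sensor sketch (Jameson-type pressure switch), illustrating
# how numerical dissipation can be localized; the paper's actual functional
# for cavitating mixtures is not reproduced here.
import numpy as np

def sensor(p):
    """Normalized second difference of pressure: ~1 at shocks, ~0 in smooth flow."""
    s = np.zeros_like(p)
    s[1:-1] = np.abs(p[2:] - 2 * p[1:-1] + p[:-2]) / (p[2:] + 2 * p[1:-1] + p[:-2])
    return s / (s.max() + 1e-12)

def hybrid_flux(f_central, f_upwind, p, threshold=0.05):
    """Blend a central (non-dissipative) flux with an upwind (dissipative) flux."""
    w = np.clip(sensor(p) / threshold, 0.0, 1.0)   # 0 -> central, 1 -> upwind
    return (1.0 - w) * f_central + w * f_upwind
```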
Static analysis techniques for semiautomatic synthesis of message passing software skeletons
Sottile, Matthew; Dagit, Jason; Zhang, Deli; ...
2015-06-29
The design of high-performance computing architectures demands performance analysis of large-scale parallel applications to derive various parameters concerning hardware design and software development. The process of performance analysis and benchmarking an application can be done in several ways with varying degrees of fidelity. One of the most cost-effective ways is to do a coarse-grained study of large-scale parallel applications through the use of program skeletons. The concept of a “program skeleton” that we discuss in this article is an abstracted program that is derived from a larger program where source code that is determined to be irrelevant is removed for the purposes of the skeleton. In this work, we develop a semiautomatic approach for extracting program skeletons based on compiler program analysis. Finally, we demonstrate correctness of our skeleton extraction process by comparing details from communication traces, as well as show the performance speedup of using skeletons by running simulations in the SST/macro simulator.
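A minimal structural analogue of skeleton extraction can be written against Python's own AST: keep statements that mention message-passing calls and reduce everything else to no-ops. The communication-call names below are assumptions; the paper's tool targets compiled message-passing codes with real compiler analysis.

```python
# Toy skeletonizer over Python ASTs: keep only statements that (transitively)
# contain a message-passing call, replace everything else with `pass`. This is
# a structural analogue of the paper's approach, not its implementation.
import ast

COMM_PREFIXES = ("send", "recv", "bcast", "reduce", "barrier")  # assumed names

def mentions_comm(node):
    for sub in ast.walk(node):
        if isinstance(sub, ast.Call):
            fn = sub.func
            name = fn.attr if isinstance(fn, ast.Attribute) else getattr(fn, "id", "")
            if name.lower().startswith(COMM_PREFIXES):
                return True
    return False

class Skeletonizer(ast.NodeTransformer):
    def generic_visit(self, node):
        super().generic_visit(node)
        if isinstance(node, ast.stmt) and not mentions_comm(node):
            return ast.Pass()   # drop computation irrelevant to communication
        return node

src = "x = setup()\ncomm.send(x, dest=1)\ny = heavy_compute(x)\n"
tree = Skeletonizer().visit(ast.parse(src))
print(ast.unparse(ast.fix_missing_locations(tree)))   # keeps only the send
```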
Evaluating the performance of parallel subsurface simulators: An illustrative example with PFLOTRAN
Hammond, G E; Lichtner, P C; Mills, R T
2014-01-01
To better inform the subsurface scientist on the expected performance of parallel simulators, this work investigates performance of the reactive multiphase flow and multicomponent biogeochemical transport code PFLOTRAN as it is applied to several realistic modeling scenarios run on the Jaguar supercomputer. After a brief introduction to the code's parallel layout and code design, PFLOTRAN's parallel performance (measured through strong and weak scalability analyses) is evaluated in the context of conceptual model layout, software and algorithmic design, and known hardware limitations. PFLOTRAN scales well (with regard to strong scaling) for three realistic problem scenarios: (1) in situ leaching of copper from a mineral ore deposit within a 5-spot flow regime, (2) transient flow and solute transport within a regional doublet, and (3) a real-world problem involving uranium surface complexation within a heterogeneous and extremely dynamic variably saturated flow field. Weak scalability is discussed in detail for the regional doublet problem, and several difficulties with its interpretation are noted. PMID:25506097
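For reference, the strong- and weak-scaling metrics used in studies like this reduce to a few lines; the numbers below are illustrative, not PFLOTRAN measurements.

```python
# Minimal strong/weak-scaling metrics of the kind reported in such studies
# (illustrative inputs, not PFLOTRAN data).
def strong_scaling(t1, tp, p):
    speedup = t1 / tp
    return speedup, speedup / p          # speedup and parallel efficiency

def weak_scaling_efficiency(t1, tp):
    return t1 / tp                       # ideal: runtime flat as work/core is fixed

print(strong_scaling(t1=1000.0, tp=40.0, p=32))      # -> (25.0, 0.78125)
print(weak_scaling_efficiency(t1=100.0, tp=120.0))   # -> 0.833...
```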
DENA: A Configurable Microarchitecture and Design Flow for Biomedical DNA-Based Logic Design.
Beiki, Zohre; Jahanian, Ali
2017-10-01
DNA is known as the building block for storing the codes of life and transferring genetic features through the generations. However, DNA strands can also be used for a new type of computation that opens fascinating horizons in computational medicine. Significant contributions have addressed the design of DNA-based logic gates for medical and computational applications, but serious challenges remain in designing medium- and large-scale DNA circuits. In this paper, a new microarchitecture and corresponding design flow are proposed to facilitate the design of multistage large-scale DNA logic systems. The feasibility and efficiency of the proposed microarchitecture are evaluated by implementing a full adder, and its cascadability is then determined by implementing a multistage 8-bit adder. Simulation results show the salient features of the proposed design style and microarchitecture in terms of scalability, implementation cost, and signal integrity of the DNA-based logic system compared to traditional approaches.
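The cascade being tested (a full adder chained into an 8-bit ripple-carry adder) can be stated in plain Boolean logic; the sketch below shows only the logic topology, not the DNA strand-displacement chemistry that implements it.

```python
# Boolean model of the cascaded design: a full adder, chained 8 times into a
# ripple-carry adder. Only the logic topology is modeled here.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_adder8(x, y):
    carry, out = 0, 0
    for i in range(8):                       # stage i consumes stage i-1's carry
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out, carry

print(ripple_adder8(200, 100))   # -> (44, 1): 300 mod 256, with carry-out set
```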
Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.
Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin
2017-02-01
The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
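The core of the comparison can be reproduced in a few lines: fit a single-pole Cole-Cole model to permittivity data sampled on linear versus logarithmic frequency grids. The parameter values below are illustrative stand-ins, not the paper's tissue data.

```python
# Sketch of the comparison: fit a single-pole Cole-Cole model to synthetic
# permittivity data on linear vs. logarithmic frequency grids (illustrative
# parameters, not measured tissue data).
import numpy as np
from scipy.optimize import least_squares

def cole_cole(f, eps_inf, d_eps, tau, alpha):
    w = 2 * np.pi * f
    return eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))

def residuals(p, f, meas):
    diff = cole_cole(f, *p) - meas
    return np.concatenate([diff.real, diff.imag])   # stack complex residuals

true = (4.0, 50.0, 1e-8, 0.1)
for grid in (np.linspace(1e6, 20e9, 101),             # linear frequency scale
             np.logspace(6, np.log10(20e9), 101)):    # logarithmic scale
    data = cole_cole(grid, *true)
    fit = least_squares(residuals, x0=(2.0, 30.0, 1e-9, 0.05), args=(grid, data))
    print(fit.x)   # recovered (eps_inf, d_eps, tau, alpha) per grid
```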
The accurate particle tracer code
NASA Astrophysics Data System (ADS)
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun
2017-11-01
The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by means of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.
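As an example of the class of geometric algorithms such a code is built on, the volume-preserving Boris push is the canonical particle pusher; the sketch below is a generic implementation, not APT's actual scheme.

```python
# The volume-preserving Boris push, a canonical geometric particle pusher
# shown as a representative of this class of algorithms (not APT's code).
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """Advance position/velocity one step in given E, B fields (SI units)."""
    v_minus = v + 0.5 * q_m * dt * E          # half electric kick
    t = 0.5 * q_m * dt * B                    # magnetic rotation vector
    s = 2 * t / (1 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)   # magnitude-preserving rotation
    v_new = v_plus + 0.5 * q_m * dt * E       # second half electric kick
    return x + v_new * dt, v_new

# Example: electron gyration in a uniform magnetic field
x, v = np.zeros(3), np.array([1e6, 0.0, 0.0])
E, B, q_m = np.zeros(3), np.array([0.0, 0.0, 1e-3]), -1.7588e11
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_m, dt=1e-12)
print(np.linalg.norm(v))   # speed conserved to machine precision (E = 0)
```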
Achieving bioinspired flapping wing hovering flight solutions on Mars via wing scaling.
Bluman, James E; Pohly, Jeremy; Sridhar, Madhu; Kang, Chang-Kwon; Landrum, David Brian; Fahimi, Farbod; Aono, Hikaru
2018-05-29
Achieving atmospheric flight on Mars is challenging due to the low density of the Martian atmosphere. Aerodynamic forces are proportional to the atmospheric density, which limits the use of conventional aircraft designs on Mars. Here, we show using numerical simulations that a flapping wing robot can fly on Mars via bioinspired dynamic scaling. Trimmed, hovering flight is possible in a simulated Martian environment when dynamic similarity with insects on Earth is achieved by preserving the relevant dimensionless parameters while scaling up the wings to three to four times their normal size. The analysis is performed using a well-validated two-dimensional Navier-Stokes equation solver, coupled to a three-dimensional flight dynamics model to simulate free flight. The majority of the power required is due to the inertia of the wing because of the ultra-low density. The inertial flap power can be substantially reduced through the use of a torsional spring. The minimum total power consumption is 188 W/kg when the torsional spring is driven at its natural frequency. © 2018 IOP Publishing Ltd.
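The dynamic-scaling argument can be checked with back-of-envelope arithmetic: for a flapping wing whose chord scales with span R, the wing-tip Reynolds number scales roughly as (ρ/μ)·f·R². All gas-property values below are rough assumptions, and the paper's coupled Navier-Stokes/flight-dynamics analysis is far more complete.

```python
# Back-of-envelope check of the similarity argument (approximate property
# values; not the paper's analysis). Re ~ (rho/mu) * f * R^2 for chord ~ R.
rho_e, mu_e = 1.225, 1.8e-5      # Earth air, rough sea-level values
rho_m, mu_m = 0.017, 1.1e-5      # Martian CO2, rough near-surface values

# Wing scale factor needed to keep Re fixed at equal flapping frequency:
scale_equal_f = ((rho_e / mu_e) / (rho_m / mu_m)) ** 0.5
print(round(scale_equal_f, 1))   # ~6.6

# Flapping faster relaxes the size requirement: e.g. with f_mars = 3*f_earth,
scale = (((rho_e / mu_e) / (rho_m / mu_m)) / 3) ** 0.5
print(round(scale, 1))           # ~3.8, in the 3-4x range the abstract reports
```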
NASA Astrophysics Data System (ADS)
Jiao, J.; Trautz, A.; Zhang, Y.; Illangasekera, T.
2017-12-01
Subsurface flow and transport characterization under data-sparse conditions is addressed by a new and computationally efficient inverse theory that simultaneously estimates parameters, state variables, and boundary conditions. Uncertainty in static data can be accounted for, while the parameter structure can be complex due to process uncertainty. The approach has been successfully extended to inverting transient and unsaturated flows as well as to contaminant source identification under unknown initial and boundary conditions. In one example, by sampling numerical experiments simulating two-dimensional steady-state flow in which a tracer migrates, a sequential inversion scheme first estimates the flow field and permeability structure before the evolution of the tracer plume and the dispersivities are jointly estimated. Compared to traditional inversion techniques, the theory does not use forward simulations to assess model-data misfits, thus knowledge of the difficult-to-determine site boundary condition is not required. To test the general applicability of the theory, data generated during high-precision intermediate-scale experiments (i.e., at a scale intermediate between the field and column scales) in large synthetic aquifers can be used. The design of such experiments is not trivial, as laboratory conditions have to be selected to mimic natural systems in order to provide useful data, thus requiring a variety of sensors and data collection strategies. This paper presents the design of such an experiment in a synthetic, multi-layered aquifer with dimensions of 242.7 cm x 119.3 cm x 7.7 cm. Different experimental scenarios that will generate data to validate the theory are presented.
Multi-scale evaporator architectures for geothermal binary power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabau, Adrian S; Nejad, Ali; Klett, James William
2016-01-01
In this paper, novel geometries of heat exchanger architectures are proposed for evaporators that are used in Organic Rankine Cycles. A multi-scale heat exchanger concept was developed by employing successive plenums at several length-scale levels. Flow passages contain features at both macro-scale and micro-scale, which are designed from Constructal Theory principles. Aside from pumping power and overall thermal resistance, several factors were considered in order to fully assess the performance of the new heat exchangers, such as weight of metal structures, surface area per unit volume, and total footprint. Component simulations based on laminar flow correlations for supercritical R134a were used to obtain performance indicators.
Sharp Interface Tracking in Rotating Microflows of Solvent Extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glimm, James; Almeida, Valmor de; Jiao, Xiangmin
2013-01-08
The objective of this project is to develop a specialized sharp interface tracking simulation capability for predicting interaction of micron-sized drops and bubbles in rotating flows relevant to optimized design of contactor devices used in solvent extraction processes of spent nuclear fuel reprocessing. The primary outcomes of this project include the capability to resolve drops and bubbles micro-hydrodynamics in solvent extraction contactors, determining from first principles continuum fluid mechanics how micro-drops and bubbles interact with each other and the surrounding shearing fluid for realistic flows. In the near term, this effort will play a central role in providing parameters and insight into the flow dynamics of models that average over coarser scales, say at the millimeter unit length. In the longer term, it will prove to be the platform to conduct full-device, detailed simulations as parallel computing power reaches the exaflop level. The team will develop an accurate simulation tool for flows containing interacting droplets and bubbles with sharp interfaces under conditions that mimic those found in realistic contactor operations. The main objective is to create an off-line simulation capability to model drop and bubble interactions in a domain representative of the averaged length scale. The technical approach is to combine robust interface tracking software, subgrid modeling, validation quality experiments, powerful computational hardware, and a team with simulation modeling, physical modeling and technology integration experience. Simulations will then fully resolve the microflow of drops and bubbles at the microsecond time scale. This approach is computationally intensive but very accurate in treating important coupled physical phenomena in the vicinity of interfaces. The method makes it possible to resolve spatial scales smaller than the typical distance between bubbles and to model some non-equilibrium thermodynamic features such as finite critical tension in cavitating liquids.
NASA Astrophysics Data System (ADS)
Komm, M.; Gunn, J. P.; Dejarnac, R.; Pánek, R.; Pitts, R. A.; Podolník, A.
2017-12-01
Predictive modelling of the heat flux distribution on ITER tungsten divertor monoblocks is a critical input to the design choice for component front surface shaping and for the understanding of power loading in the case of small-scale exposed edges. This paper presents results of particle-in-cell (PIC) simulations of plasma interaction in the vicinity of poloidal gaps between monoblocks in the high heat flux areas of the ITER outer vertical target. The main objective of the simulations is to assess the role of local electric fields which are accounted for in a related study using the ion orbit approach including only the Lorentz force (Gunn et al 2017 Nucl. Fusion 57 046025). Results of the PIC simulations demonstrate that even if in some cases the electric field plays a distinct role in determining the precise heat flux distribution, when heat diffusion into the bulk material is taken into account, the thermal responses calculated using the PIC or ion orbit approaches are very similar. This is a consequence of the small spatial scales over which the ion orbits distribute the power. The key result of this study is that the computationally much less intensive ion orbit approximation can be used with confidence in monoblock shaping design studies, thus validating the approach used in Gunn et al (2017 Nucl. Fusion 57 046025).
Simulating nanoparticle transport in 3D geometries with MNM3D
NASA Astrophysics Data System (ADS)
Bianco, Carlo; Tosco, Tiziana; Sethi, Rajandrea
2017-04-01
The application of NP transport modelling to real cases, such as the design of a field-scale injection or the prediction of the long-term fate of nanoparticles (NPs) in the environment, requires the support of mathematical tools to effectively assess the expected NP mobility at the field scale. In general, micro- and nanoparticle transport in porous media is controlled by particle-particle and particle-porous media interactions, which are in turn affected by flow velocity and pore-water chemistry. During the injection, a strong perturbation of the flow field is induced around the well, and the NP transport is mainly controlled by the consequent sharp variation of pore-water velocity. Conversely, when the injection is stopped, the particles are transported solely by the natural flow, and the influence of groundwater geochemistry (ionic strength, IS, in particular) on particle behaviour becomes predominant. Pore-water velocity and IS are therefore important parameters influencing particle transport in groundwater, and they have to be taken into account by the numerical codes used to simulate NP transport. Several analytical and numerical tools have been developed in recent years to model the transport of colloidal particles under simplified geometry and boundary conditions. For instance, the numerical tool MNMs was developed by the authors of this work to simulate colloidal transport in 1D Cartesian and radial coordinates. Only a few simulation tools are instead available for 3D colloid transport, and none of them implements direct correlations accounting for variations of groundwater IS and flow velocity. In this work a new modelling tool, MNM3D (Micro and Nanoparticle transport Model in 3D geometries), is proposed for the simulation of injection and transport of nanoparticle suspensions in generic complex scenarios. MNM3D implements a new formulation to account for the simultaneous dependency of the attachment and detachment kinetic coefficients on groundwater IS and velocity. The software was developed in the framework of the FP7 European research project NanoRem and can be used to predict NP mobility at different stages of a nanoremediation application, both in the planning and design stages (i.e. to support the design of the injection plan) and later to predict the long-term particle mobility after injection (i.e. to support the monitoring of the final fate of the injected particles). In this work, an integrated experimental-modelling procedure is used to assess and predict nanoparticle transport in porous media at different spatial and time scales: laboratory tests are performed and interpreted using MNMs to characterize the nanoparticle mobility and derive the constitutive equations describing the suspension behaviour in groundwater. MNM3D is then used to predict the NP transport at the field scale. The procedure is here applied to two practical cases: a 3D pilot-scale injection of CARBO-IRON® in a large-scale flume carried out at the VEGAS facilities in the framework of the NanoRem project, and the long-term fate of a hypothetical release of nanoparticles into the environment from a landfill.
Lansberg, Maarten G; Bhat, Ninad S; Yeatts, Sharon D; Palesch, Yuko Y; Broderick, Joseph P; Albers, Gregory W; Lai, Tze L; Lavori, Philip W
2016-12-01
Adaptive trial designs that allow enrichment of the study population through subgroup selection can increase the chance of a positive trial when there is a differential treatment effect among patient subgroups. The goal of this study is to illustrate the potential benefit of adaptive subgroup selection in endovascular stroke studies. We simulated the performance of a trial design with adaptive subgroup selection and compared it with that of a traditional design. Outcome data were based on 90-day modified Rankin Scale scores, observed in IMS III (Interventional Management of Stroke III), among patients with a vessel occlusion on baseline computed tomographic angiography (n=382). Patients were categorized based on 2 methods: (1) according to location of the arterial occlusive lesion and onset-to-randomization time and (2) according to onset-to-randomization time alone. The power to demonstrate a treatment benefit was based on 10 000 trial simulations for each design. The treatment effect was relatively homogeneous across categories when patients were categorized based on arterial occlusive lesion and time. Consequently, the adaptive design had similar power (47%) compared with the fixed trial design (45%). There was a differential treatment effect when patients were categorized based on time alone, resulting in greater power with the adaptive design (82%) than with the fixed design (57%). These simulations, based on real-world patient data, indicate that adaptive subgroup selection has merit in endovascular stroke trials as it substantially increases power when the treatment effect differs among subgroups in a predicted pattern. © 2016 American Heart Association, Inc.
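A schematic Monte Carlo of this kind of comparison is sketched below: a binary good-outcome endpoint, two time subgroups, and an interim rule that keeps only the subgroup with the larger observed effect. Effect sizes, sample sizes, and the test are invented for illustration, and the sketch ignores the type I error control a real adaptive design requires.

```python
# Schematic simulation of adaptive subgroup selection vs. a fixed design
# (invented parameters; not the IMS III analysis).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def trial(p_ctrl, p_trt, n_stage=100, adaptive=True):
    # Stage 1: enroll both subgroups (e.g. early vs. late treatment)
    stage1 = [(rng.binomial(n_stage, p_ctrl[g]), rng.binomial(n_stage, p_trt[g]))
              for g in range(2)]
    if adaptive:  # interim rule: keep only the subgroup with larger effect
        groups = [max(range(2), key=lambda g: stage1[g][1] - stage1[g][0])]
    else:
        groups = [0, 1]
    # Stage 2 enrolls the retained subgroup(s); pool both stages
    c = sum(stage1[g][0] + rng.binomial(n_stage, p_ctrl[g]) for g in groups)
    t = sum(stage1[g][1] + rng.binomial(n_stage, p_trt[g]) for g in groups)
    n = 2 * n_stage * len(groups)                 # per-arm sample size
    p_pool = (c + t) / (2 * n)
    z = (t / n - c / n) / np.sqrt(2 * p_pool * (1 - p_pool) / n + 1e-12)
    return z > norm.ppf(0.975)

p_ctrl, p_trt = (0.40, 0.40), (0.55, 0.42)   # benefit concentrated in subgroup 0
for adaptive in (False, True):
    power = np.mean([trial(p_ctrl, p_trt, adaptive=adaptive) for _ in range(2000)])
    print("adaptive" if adaptive else "fixed   ", round(power, 2))
```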
Assessing Performance in Shoulder Arthroscopy: The Imperial Global Arthroscopy Rating Scale (IGARS).
Bayona, Sofia; Akhtar, Kash; Gupte, Chinmay; Emery, Roger J H; Dodds, Alexander L; Bello, Fernando
2014-07-02
Surgical training is undergoing major changes with reduced resident work hours and an increasing focus on patient safety and surgical aptitude. The aim of this study was to create a valid, reliable method for an assessment of arthroscopic skills that is independent of time and place and is designed for both real and simulated settings. The validity of the scale was tested using a virtual reality shoulder arthroscopy simulator. The study consisted of two parts. In the first part, an Imperial Global Arthroscopy Rating Scale for assessing technical performance was developed using a Delphi method. Application of this scale required installing a dual-camera system to synchronously record the simulator screen and body movements of trainees to allow an assessment that is independent of time and place. The scale includes aspects such as efficient portal positioning, angles of instrument insertion, proficiency in handling the arthroscope and adequately manipulating the camera, and triangulation skills. In the second part of the study, a validation study was conducted. Two experienced arthroscopic surgeons, blinded to the identities and experience of the participants, each assessed forty-nine subjects performing three different tests using the Imperial Global Arthroscopy Rating Scale. Results were analyzed using two-way analysis of variance with measures of absolute agreement. The intraclass correlation coefficient was calculated for each test to assess inter-rater reliability. The scale demonstrated high internal consistency (Cronbach alpha, 0.918). The intraclass correlation coefficient demonstrated high agreement between the assessors: 0.91 (p < 0.001). Construct validity was evaluated using Kruskal-Wallis one-way analysis of variance (chi-square test, 29.826; p < 0.001), demonstrating that the Imperial Global Arthroscopy Rating Scale distinguishes significantly between subjects with different levels of experience utilizing a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale has a high internal consistency and excellent inter-rater reliability and offers an approach for assessing technical performance in basic arthroscopy on a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale provides detailed information on surgical skills. Although it requires further validation in the operating room, this scale, which is independent of time and place, offers a robust and reliable method for assessing arthroscopic technical skills. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
Multi-scale computation methods: Their applications in lithium-ion battery research and development
NASA Astrophysics Data System (ADS)
Siqi, Shi; Jian, Gao; Yue, Liu; Yan, Zhao; Qu, Wu; Wangwei, Ju; Chuying, Ouyang; Ruijuan, Xiao
2016-01-01
Based upon advances in theoretical algorithms, modeling and simulations, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a big shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance in multiple spatial and temporal scales. Project supported by the National Natural Science Foundation of China (Grant Nos. 51372228 and 11234013), the National High Technology Research and Development Program of China (Grant No. 2015AA034201), and Shanghai Pujiang Program, China (Grant No. 14PJ1403900).
Pérez, Ramón José; Álvarez, Ignacio; Enguita, José María
2016-01-01
This article presents, by means of computational simulation tools, a full analysis and design of an Interferometric Fiber-Optic Gyroscope (IFOG) prototype based on a closed-loop configuration with sinusoidal bias phase-modulation. The complete design of the different blocks, optical and electronic, is presented, including some novelties such as the sinusoidal bias phase-modulation and the use of an integrator to generate the serrodyne phase-modulation signal. The paper includes detailed calculation of most parameter values, and plots of the resulting signals obtained from simulation tools. The design is focused on the use of a standard single-mode optical fiber, allowing a cost-competitive implementation compared to commercial IFOGs, at the expense of reduced sensitivity. The design contains an IFOG model that meets tactical- and industrial-grade requirements (sensitivity ≤ 0.055 °/h). This design presents two important properties: (1) an optical subsystem with an advanced conception: depolarization of the optical wave by means of Lyot depolarizers, which allows the use of a sensing coil made of standard optical fiber instead of polarization-maintaining fiber, with consequent cost savings; and (2) a novel and simple electronic design that incorporates a linear analog integrator with reset in the feedback chain, this integrator generating a serrodyne voltage wave applied to the Phase-Modulator (PM) so that interferometric phase cancellation is obtained. This particular feedback design with a sawtooth-wave generated signal for a closed-loop configuration with sinusoidal bias phase modulation has not been reported until now in the scientific literature and represents a considerable simplification with regard to previous designs based on similar configurations. The sensing coil consists of an 8 cm average diameter spool that contains 300 m of standard single-mode optical fiber (SMF-28 type) realized by quadrupolar winding. The working wavelength is 1310 nm. The theoretically calculated values of threshold sensitivity and dynamic range for this prototype are 0.052 °/h and 101.38 dB (from ±1.164 × 10−5 °/s up to ±78.19 °/s), respectively. The Scale-Factor (SF) non-linearity for this model is 5.404% relative to full scale, this value being obtained from simulation results. PMID:27128924
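The quoted figures can be sanity-checked against the ideal open-loop Sagnac relation φ = (2πLD/λc)·Ω, using the coil parameters given in the abstract; the speed of light is the only added constant.

```python
# Sanity check of the quoted numbers with the ideal Sagnac scale factor.
import math

L, D, lam, c = 300.0, 0.08, 1310e-9, 2.998e8   # coil length/diameter, wavelength
sf = 2 * math.pi * L * D / (lam * c)           # rad of phase per (rad/s) of rotation
print(sf)                                      # ~0.38 rad/(rad/s)

omega_max = math.radians(78.19)                # quoted full-scale rotation rate
print(sf * omega_max)                          # ~0.52 rad, i.e. ~pi/6 of phase
```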
Developing 3D morphologies for simulating building energy demand in urban microclimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
New, Joshua Ryan; Omitaomu, Olufemi A.; Allen, Melissa R.
In order to simulate the effect of interactions between urban morphology and microclimate on demand for heating and cooling in buildings, we utilize source elevation data to create 3D building geometries at the neighborhood and city scale. Additionally, we use urban morphology concepts to design virtual morphologies for simulation scenarios in an undeveloped land parcel. Using these morphologies, we compute building-energy parameters such as the density for each surface and the frontal area index for each of the buildings to be able to effectively model the microclimate for the urban area.
When feedback fails: the scaling and saturation of star formation efficiency
NASA Astrophysics Data System (ADS)
Grudić, Michael Y.; Hopkins, Philip F.; Faucher-Giguère, Claude-André; Quataert, Eliot; Murray, Norman; Kereš, Dušan
2018-04-01
We present a suite of 3D multiphysics MHD simulations following star formation in isolated turbulent molecular gas discs ranging from 5 to 500 parsecs in radius. These simulations are designed to survey the range of surface densities between those typical of Milky Way giant molecular clouds (GMCs) (~10² M⊙ pc⁻²) and extreme ultraluminous infrared galaxy environments (~10⁴ M⊙ pc⁻²) so as to map out the scaling of the cloud-scale star formation efficiency (SFE) between these two regimes. The simulations include prescriptions for supernova, stellar wind, and radiative feedback, which we find to be essential in determining both the instantaneous per-freefall (ɛff) and integrated (ɛint) star formation efficiencies. In all simulations, the gas discs form stars until a critical stellar surface density has been reached and the remaining gas is blown out by stellar feedback. We find that surface density is a good predictor of ɛint, as suggested by analytic force balance arguments from previous works. SFE eventually saturates to ~1 at high surface density. We also find a proportional relationship between ɛff and ɛint, implying that star formation is feedback-moderated even over very short time-scales in isolated clouds. These results have implications for star formation in galactic discs, the nature and fate of nuclear starbursts, and the formation of bound star clusters. The scaling of ɛff with surface density is not consistent with the notion that ɛff is always ~1 per cent on the scale of GMCs, but our predictions recover the ~1 per cent value for GMC parameters similar to those found in spiral galaxies, including our own.
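The force-balance scaling referred to above is often written as ε_int = (1 + Σ_crit/Σ)⁻¹, which rises with surface density and saturates at unity. The functional form and the critical surface density used below are illustrative stand-ins for the paper's fits.

```python
# Schematic form of the force-balance SFE scaling: eps_int rises with gas
# surface density and saturates at 1. Sigma_crit here is an assumed,
# illustrative value, not the paper's fitted number.
sigma_crit = 3.0e3                        # Msun / pc^2, assumed
for sigma in (1e2, 1e3, 1e4):             # GMC -> ULIRG range from the abstract
    eps_int = 1.0 / (1.0 + sigma_crit / sigma)
    print(f"Sigma = {sigma:.0e} Msun/pc^2  ->  eps_int ~ {eps_int:.2f}")
```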
Pulse Jet Mixing Tests With Noncohesive Solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Perry A.; Bamberger, Judith A.; Enderlin, Carl W.
2012-02-17
This report summarizes results from pulse jet mixing (PJM) tests with noncohesive solids in Newtonian liquid. The tests were conducted during FY 2007 and 2008 to support the design of mixing systems for the Hanford Waste Treatment and Immobilization Plant (WTP). Tests were conducted at three geometric scales using noncohesive simulants, and the test data were used to develop models predicting two measures of mixing performance for full-scale WTP vessels. The models predict the cloud height (the height to which solids will be lifted by the PJM action) and the critical suspension velocity (the minimum velocity needed to ensure all solids are suspended off the floor, though not fully mixed). From the cloud height, the concentration of solids at the pump inlet can be estimated. The predicted critical suspension velocity for lifting all solids is not precisely the same as the mixing requirement for 'disturbing' a sufficient volume of solids, but the values will be similar and closely related. These predictive models were successfully benchmarked against larger scale tests and compared well with results from computational fluid dynamics simulations. The application of the models to assess mixing in WTP vessels is illustrated in examples for 13 distinct designs and selected operational conditions. The values selected for these examples are not final; thus, the estimates of performance should not be interpreted as final conclusions of design adequacy or inadequacy. However, this work does reveal that several vessels may require adjustments to design, operating features, or waste feed properties to ensure confidence in operation. The models described in this report will prove to be valuable engineering tools to evaluate options as designs are finalized for the WTP. Revision 1 refines data sets used for model development and summarizes models developed since the completion of Revision 0.
Apollo experience report: Guidance and control systems. Engineering simulation program
NASA Technical Reports Server (NTRS)
Gilbert, D. W.
1973-01-01
The Apollo Program experience from early 1962 to July 1969 with respect to the engineering-simulation support and the problems encountered is summarized in this report. Engineering simulation in support of the Apollo guidance and control system is discussed in terms of design analysis and verification, certification of hardware in closed-loop operation, verification of hardware/software compatibility, and verification of both software and procedures for each mission. The magnitude, time, and cost of the engineering simulations are described with respect to hardware availability, NASA and contractor facilities (for verification of the command module, the lunar module, and the primary guidance, navigation, and control system), and scheduling and planning considerations. Recommendations are made regarding implementation of similar, large-scale simulations for future programs.
Hybrid stochastic simplifications for multiscale gene networks.
Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu
2009-09-07
Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene networks dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
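The jump-to-diffusion step (the partial Kramers-Moyal expansion) can be illustrated on the simplest gene-expression model, a birth-death process: exact Gillespie simulation versus its chemical Langevin approximation. Rates below are arbitrary illustrative values.

```python
# Exact Gillespie simulation of a birth-death gene-expression model versus its
# chemical Langevin (diffusion) approximation -- the kind of jump-to-continuous
# reduction the hybrid framework applies per sub-model. Illustrative rates.
import numpy as np

rng = np.random.default_rng(1)
k_prod, k_deg, T = 50.0, 1.0, 20.0      # production/degradation rates, horizon

def gillespie():
    t, x = 0.0, 0
    while True:
        a_prod, a_deg = k_prod, k_deg * x
        a0 = a_prod + a_deg
        t += rng.exponential(1.0 / a0)   # exponential waiting time
        if t > T:
            return x
        x += 1 if rng.random() < a_prod / a0 else -1

def langevin(dt=1e-3):
    x = 0.0
    for _ in range(int(T / dt)):         # Euler-Maruyama on the CLE
        a_prod, a_deg = k_prod, k_deg * max(x, 0.0)
        x += (a_prod - a_deg) * dt + np.sqrt((a_prod + a_deg) * dt) * rng.normal()
    return x

exact = [gillespie() for _ in range(300)]
approx = [langevin() for _ in range(300)]
print(np.mean(exact), np.std(exact))    # stationary mean ~50, std ~sqrt(50)
print(np.mean(approx), np.std(approx))  # diffusion limit matches both moments
```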
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Ann E; Barker, Ashley D; Bland, Arthur S Buddy
Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation billions of gallons of fuel.
Modeling and Analysis of Realistic Fire Scenarios in Spacecraft
NASA Technical Reports Server (NTRS)
Brooker, J. E.; Dietrich, D. L.; Gokoglu, S. A.; Urban, D. L.; Ruff, G. A.
2015-01-01
An accidental fire inside a spacecraft is an unlikely, but very real emergency situation that can easily have dire consequences. While much has been learned over the past 25+ years of dedicated research on flame behavior in microgravity, a quantitative understanding of the initiation, spread, detection and extinguishment of a realistic fire aboard a spacecraft is lacking. Virtually all combustion experiments in microgravity have been small-scale, by necessity (hardware limitations in ground-based facilities and safety concerns in space-based facilities). Large-scale, realistic fire experiments are unlikely for the foreseeable future (unlike in terrestrial situations). Therefore, NASA will have to rely on scale modeling, extrapolation of small-scale experiments and detailed numerical modeling to provide the data necessary for vehicle and safety system design. This paper presents the results of parallel efforts to better model the initiation, spread, detection and extinguishment of fires aboard spacecraft. The first is a detailed numerical model using the freely available Fire Dynamics Simulator (FDS). FDS is a CFD code that numerically solves a large eddy simulation form of the Navier-Stokes equations. FDS provides a detailed treatment of the smoke and energy transport from a fire. The simulations provide a wealth of information, but are computationally intensive and not suitable for parametric studies where the detailed treatment of the mass and energy transport are unnecessary. The second path extends a model previously documented at ICES meetings that attempted to predict maximum survivable fires aboard spacecraft. This one-dimensional model simplifies the treatment of heat and mass transfer, as well as toxic species production from a fire. These simplifications result in a code that is faster and more suitable for parametric studies (having already been used to help in the hatch design of the Multi-Purpose Crew Vehicle, MPCV).
WindPACT Reference Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dykes, Katherine L; Rinker, Jennifer
To fully understand how loads and turbine cost scale with turbine size, it is necessary to have identical turbine models that have been scaled to different rated powers. The report presents the WindPACT baseline models, which are a series of four baseline models that were designed to facilitate investigations into the scalings of loads and turbine cost with size. The models have four different rated powers (750 kW, 1.5 MW, 3.0 MW, and 5.0 MW), and each model was designed to its specified rated power using the same design methodology. The models were originally implemented in FAST_AD, the predecessor to NREL's open-source wind turbine simulator FAST, but have yet to be implemented in FAST. This report contains the specifications for all four WindPACT baseline models - including structural, aerodynamic, and control specifications - along with the inherent assumptions and equations that were used to calculate the model parameters. It is hoped that these baseline models will serve as extremely useful resources for investigations into the scalings of costs, loads, or optimization routines.
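The similarity scaling behind such a baseline family is simple: rated power grows with swept area (P ∝ R²) and mass with volume (m ∝ R³). The reference radius below is an assumed value for illustration, not a WindPACT specification.

```python
# Classical similarity scaling across a family of ratings: P ~ R^2, m ~ R^3.
# The 750 kW reference radius is assumed for illustration only.
ratings_kw = [750, 1500, 3000, 5000]
base_kw, base_radius = 750, 25.0                    # assumed reference radius (m)
for p in ratings_kw:
    r = base_radius * (p / base_kw) ** 0.5          # radius from P ~ R^2
    m_rel = (r / base_radius) ** 3                  # mass growth from m ~ R^3
    print(f"{p:>4} kW: R ~ {r:5.1f} m, relative mass ~ {m_rel:4.1f}x")
```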
Analysis of BJ493 diesel engine lubrication system properties
NASA Astrophysics Data System (ADS)
Liu, F.
2017-12-01
The BJ493ZLQ4A diesel engine design is based on the primary BJ493ZLQ3 model, whose exhaust emissions level has been upgraded to the national GB5 standard through an improved design of the combustion and injection systems. Given the resulting changes in the diesel lubrication system, its properties are analyzed in this paper. According to the structures, technical parameters and indices of the lubrication system, a lubrication system model of the BJ493ZLQ4A diesel engine was constructed using the Flowmaster flow simulation software. The properties of the diesel engine lubrication system, such as the oil flow rate and pressure at different rotational speeds, were analyzed for schemes involving large- and small-scale oil filters. The calculated values of the main oil channel pressure are in good agreement with the experimental results, which verifies the feasibility of the proposed model. The calculation results show that the main oil channel pressure and maximum oil flow rate values for the large-scale oil filter scheme satisfy the design requirements, while the small-scale scheme yields a main oil channel pressure that is too low and a flow resistance that is too high. Therefore, the application of small-scale oil filters is hazardous, and the large-scale scheme is recommended.
NASA Astrophysics Data System (ADS)
Conti, Roberto; Meli, Enrico; Pugi, Luca; Malvezzi, Monica; Bartolini, Fabio; Allotta, Benedetto; Rindi, Andrea; Toni, Paolo
2012-05-01
Scaled roller rigs used for railway applications play a fundamental role in the development of new technologies and new devices, combining the hardware-in-the-loop (HIL) benefits with reduced economic investment. The main problem of a scaled roller rig with respect to full-scale ones is the increased complexity due to the scaling factors. For this reason, before building the test rig, the development of a software model of the HIL system can be useful to analyse the system behaviour in different operating conditions. One has to consider the multi-body behaviour of the scaled roller rig, the controller and the model of the virtual vehicle whose dynamics has to be reproduced on the rig. The main purpose of this work is the development of a complete model that satisfies these requirements and, in particular, the analysis of the performance of the controller and of the dynamical behaviour of the scaled roller rig when disturbances are simulated under low adhesion conditions. Since the scaled roller rig will be used to simulate degraded adhesion conditions, an accurate and realistic wheel-roller contact model also has to be included. The contact model consists of two parts: contact point detection and the adhesion model. The first part is based on a numerical method described in previous studies for the wheel-rail case, modified to simulate the three-dimensional contact between revolute surfaces (wheel-roller). The second part consists of the evaluation of the contact forces by means of Hertz theory for the normal problem and Kalker theory for the tangential problem. Numerical tests were performed in which low adhesion conditions were simulated and bogie hunting and dynamical imbalance of the wheelsets were introduced. The tests were devoted to verifying the robustness of the control system with respect to some of the more frequent disturbances that may influence the roller rig dynamics. In particular, we verified that wheelset imbalance could significantly influence system performance; to reduce the effect of this disturbance, a multistate filter was designed.
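A minimal sketch of the two-part contact model described above follows: a Hertzian point-contact normal force plus a Kalker-type linear tangential creep force, saturated by Coulomb friction. The material, geometric and friction parameters are assumptions for illustration only.

```python
import numpy as np

# Hertz normal problem + linear (Kalker-type) tangential creep force.
# All parameter values are assumed, not taken from the paper.
E_star = 1.1e11   # effective contact modulus, Pa (steel-on-steel, assumed)
R_eff = 0.05      # effective curvature radius at contact, m (assumed)
mu = 0.15         # friction coefficient under degraded adhesion (assumed)

def hertz_normal(delta):
    """Hertz point-contact normal force for penetration delta (m)."""
    return (4.0 / 3.0) * E_star * np.sqrt(R_eff) * delta**1.5

def creep_force(f11, creepage, n_force):
    """Kalker linear creep force, saturated at the Coulomb friction limit."""
    f = -f11 * creepage
    limit = mu * n_force
    return np.clip(f, -limit, limit)

N = hertz_normal(1e-5)   # penetration of ~10 microns
print(N, creep_force(f11=8e6, creepage=0.002, n_force=N))
```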
Spatial Translation and Scaling Up of LID Practices in Deer Creek Watershed in East Missouri
NASA Astrophysics Data System (ADS)
Di Vittorio, Damien
This study investigated two important aspects of the hydrologic effects of low impact development (LID) practices at the watershed scale by (1) examining the potential benefits of scaling up LID design, and (2) evaluating the downstream effects of LID design and its spatial translation within a watershed. The Personal Computer Storm Water Management Model (PCSWMM) was used to model runoff reduction with the implementation of LID practices in Deer Creek watershed (DCW), Missouri. The model was calibrated from 2003 to 2007 (R2 = 0.58 and NSE = 0.57) and validated from 2008 to 2012 (R2 = 0.64 and NSE = 0.65) for daily direct runoff. Runoff simulated for the study period, 2003 to 2012 (NSE = 0.61; R2 = 0.63), was used as the baseline for comparison to LID scenarios. Using 1958 aerial imagery to assign land cover, a predevelopment scenario was constructed and simulated to assess the LID scenarios' ability to restore predevelopment hydrologic conditions. The baseline and all LID scenarios were simulated using the 2006 National Land Cover Dataset. The watershed was divided into 117 subcatchments, which were clustered into six groups of approximately equal area, and two scaling concepts, incremental scaling and spatial scaling, were modelled. Incremental scaling was investigated using three LID practices (rain barrel, porous pavement, and rain garden). Each LID practice was simulated at four implementation levels (25%, 50%, 75%, and 100%) in all subcatchments for the study period (2003 to 2012). Results showed increased runoff reduction, ranging from 3% to 31%, with increased implementation level. Spatial scaling was investigated by increasing the spatial extent of LID practices using the subcatchment groups, with all three LID practices (combined) implemented at the 50% level. Results indicated that as the spatial extent of LID practices increased, the runoff reduction at the outlet also increased, ranging from 3% to 19%. Spatial variability of LID implementation was examined by normalizing the LID-treated area to the impervious area for each subcatchment group. The normalized LID implementation levels for each group revealed a reduction in runoff at the outlet of the watershed ranging from 0.6% to 3.7%. This study showed that over a long-term period LID practices could restore predevelopment hydrologic conditions. The optimal location for LID practice implementation within the study area was found to be near the outlet; however, these results cannot be generalized to all watersheds.
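The goodness-of-fit metrics quoted above (NSE and R2) are standard and easy to reproduce; a minimal sketch with made-up runoff values follows.

```python
import numpy as np

# Nash-Sutcliffe efficiency (NSE) and coefficient of determination (R^2)
# between observed and simulated daily runoff series.
def nse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    r = np.corrcoef(obs, sim)[0, 1]
    return r * r

obs = np.array([1.2, 0.8, 2.5, 3.1, 0.4])   # made-up daily runoff, mm
sim = np.array([1.0, 0.9, 2.2, 3.4, 0.5])
print(f"NSE = {nse(obs, sim):.2f}, R2 = {r_squared(obs, sim):.2f}")
```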
Slope Stability of Geosynthetic Clay Liner Test Plots
Fourteen full-scale field test plots containing five types of geosynthetic clay liners (GCLs) were constructed on 2H:1V and 3H:1V slopes for the purpose of assessing slope stability. The test plots were designed to simulate typical final cover systems for landfills. Slides occurr...
Wang, Guan; Zhao, Junfei; Haringa, Cees; Tang, Wenjun; Xia, Jianye; Chu, Ju; Zhuang, Yingping; Zhang, Siliang; Deshmukh, Amit T; van Gulik, Walter; Heijnen, Joseph J; Noorman, Henk J
2018-05-01
In a 54 m³ large-scale penicillin fermentor, the cells experience substrate gradient cycles at the timescale of the global mixing time, about 20-40 s. Here, we used an intermittent feeding regime (IFR) and a two-compartment reactor (TCR) to mimic these substrate gradients in laboratory-scale continuous cultures. The IFR was applied to simulate the substrate dynamics experienced by the cells at full scale at timescales of tens of seconds to minutes (30 s, 3 min and 6 min), while the TCR was designed to simulate substrate gradients at an applied mean residence time (τc) of 6 min. A biological systems analysis of the response of an industrial high-yielding P. chrysogenum strain has been performed in these continuous cultures. Compared to an undisturbed continuous feeding regime in a single reactor, the penicillin productivity (q_PenG) was reduced in all scale-down simulators. The dynamic metabolomics data indicated that in the IFRs, the cells accumulated high levels of the central metabolites during the feast phase to actively cope with external substrate deprivation during the famine phase. In contrast, in the TCR system, the storage pools (e.g. mannitol and arabitol) constituted a large contribution of the carbon supply in the non-feed compartment. Further, transcript analysis revealed that all scale-down simulators gave different expression levels of the glucose/hexose transporter genes and the penicillin gene clusters. The results showed that q_PenG did not correlate well with exposure to the substrate regimes (excess, limitation and starvation), but there was a clear inverse relation between q_PenG and the intracellular glucose level. © 2018 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
A novel hybrid algorithm for the design of the phase diffractive optical elements for beam shaping
NASA Astrophysics Data System (ADS)
Jiang, Wenbo; Wang, Jun; Dong, Xiucheng
2013-02-01
In this paper, a novel hybrid algorithm for the design of phase diffractive optical elements (PDOE) is proposed. It combines the genetic algorithm (GA) with the transformable scale BFGS (Broyden-Fletcher-Goldfarb-Shanno) algorithm, and a penalty function is used in the cost function definition. The hybrid algorithm has the global search merits of the genetic algorithm as well as the local improvement capabilities of the transformable scale BFGS algorithm. We designed the PDOE using both the conventional simulated annealing algorithm and the novel hybrid algorithm. To compare the performance of the two algorithms, three indices - diffraction efficiency, uniformity error and signal-to-noise ratio - are considered in numerical simulation. The results show that the hybrid algorithm has good convergence properties and good stability. As an application example, the PDOE was used for Gaussian beam shaping; high diffraction efficiency, low uniformity error and a high signal-to-noise ratio were obtained. The PDOE can be used for high-quality beam shaping in applications such as inertial confinement fusion (ICF), excimer laser lithography, fiber coupling of laser diode arrays and laser welding, and thus shows wide application value.
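The hybrid idea (global GA exploration followed by local quasi-Newton polishing, with a penalty term in the cost) can be sketched generically; the cost function below is a toy stand-in for the PDOE merit function, and the GA operators are deliberately minimal.

```python
import numpy as np
from scipy.optimize import minimize

# GA for global search, then BFGS refinement of the best individual.
# The objective and penalty are illustrative, not the paper's merit function.
rng = np.random.default_rng(0)

def cost(phase):
    fitness = np.sum(np.sin(phase) ** 2)                  # toy objective
    penalty = np.sum(np.clip(phase - 2 * np.pi, 0, None) ** 2 +
                     np.clip(-phase, 0, None) ** 2)       # keep phases in [0, 2*pi)
    return fitness + 10.0 * penalty

def ga(pop_size=40, n_gen=100, dim=16):
    pop = rng.uniform(0, 2 * np.pi, (pop_size, dim))
    for _ in range(n_gen):
        scores = np.apply_along_axis(cost, 1, pop)
        parents = pop[np.argsort(scores)[: pop_size // 2]]       # selection
        children = parents + rng.normal(0, 0.1, parents.shape)   # mutation
        pop = np.vstack([parents, children])
    return pop[np.argmin(np.apply_along_axis(cost, 1, pop))]

best = ga()
polished = minimize(cost, best, method="BFGS")   # local refinement
print(cost(best), polished.fun)
```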
Terascale direct numerical simulations of turbulent combustion using S3D
NASA Astrophysics Data System (ADS)
Chen, J. H.; Choudhary, A.; de Supinski, B.; DeVries, M.; Hawkes, E. R.; Klasky, S.; Liao, W. K.; Ma, K. L.; Mellor-Crummey, J.; Podhorszki, N.; Sankaran, R.; Shende, S.; Yoo, C. S.
2009-01-01
Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution, and will utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS) specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular to discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence, terascale DNS are computationally intensive, require massive amounts of computing power and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating the role of DNS in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scalable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data, and automation of the combustion workflow. The enabling computer science, applied here to combustion, is also required in many other terascale physics and engineering simulations. In particular, performance monitoring is used to identify the performance of key kernels in the DNS code, S3D, especially memory-intensive loops. Through the careful application of loop transformations, data reuse in cache is exploited, thereby reducing memory bandwidth needs and hence improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histograms. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival, and to provide a graphical display of run-time diagnostics.
A status report on NASA general aviation stall/spin flight testing
NASA Technical Reports Server (NTRS)
Patton, J. M., Jr.
1980-01-01
The NASA Langley Research Center has undertaken a comprehensive program involving spin tunnel, static and rotary balance wind tunnel, full-scale wind tunnel, free flight radio control model, flight simulation, and full-scale testing. Work underway includes aerodynamic definition of various configurations at high angles of attack, testing of stall and spin prevention concepts, definition of spin and spin recovery characteristics, and development of test techniques and emergency spin recovery systems. This paper presents some interesting results to date for the first aircraft (low-wing, single-engine) in the program, in the areas of tail design, wing leading edge design, mass distribution, center of gravity location, and small airframe changes, with associated pilot observations. The design philosophy of the spin recovery parachute system is discussed in addition to test techniques.
Design, Construction and Testing of an In-Pile Loop for PWR (Pressurized Water Reactor) Simulation.
1987-06-01
computer modeling remains at best semiempirical (C-i), this large variation in scaling factor makes extrapolation of data impossible. The DIDO Water...in a full scale PWR are not practical. The reactor plant is not controlled to tolerances necessary for research, and utilities are reluctant to vary...MIT Reactor Safeguards Committee, in revision 1 to the PCCL Safety Evaluation Report (SER), for final approval to begin in-pile testing and
2010-08-01
petroleum industry. Moreover, heterogeneity control strategies can be applied to improve the efficiency of a variety of in situ remediation technologies...conditions that differ significantly from those found in environmental systems . Therefore many of the design criteria used by the petroleum industry for...were helpful in constructing numerical models in up-scaled systems (2-D tanks). The UTCHEM model was able to successfully simulate 2-D experimental
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance limitations of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems arising from the Newton iterations. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
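The inexact Newton idea mentioned above is sketched below in generic form: each Newton step solves the Jacobian system only approximately with a Krylov method (GMRES at a loose inner tolerance), with a matrix-free finite-difference Jacobian-vector product. The residual function is a toy stand-in, not the discretized black oil equations.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    return u**3 - np.linspace(1.0, 2.0, u.size)   # toy nonlinear system

def inexact_newton(u0, tol=1e-8, eta=1e-2, max_it=50):
    u = u0.copy()
    for _ in range(max_it):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        # Matrix-free Jacobian-vector product via finite differences
        eps = 1e-7
        jv = LinearOperator((u.size, u.size), dtype=float,
                            matvec=lambda v: (residual(u + eps * v) - r) / eps)
        # Inexact step: loose inner tolerance (older SciPy uses tol= here)
        du, _ = gmres(jv, -r, rtol=eta)
        u += du
    return u

print(inexact_newton(np.ones(8)))
```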
Simulating New Drop Test Vehicles and Test Techniques for the Orion CEV Parachute Assembly System
NASA Technical Reports Server (NTRS)
Morris, Aaron L.; Fraire, Usbaldo, Jr.; Bledsoe, Kristin J.; Ray, Eric; Moore, Jim W.; Olson, Leah M.
2011-01-01
The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is engaged in a multi-year design and test campaign to qualify a parachute recovery system for human use on the Orion Spacecraft. Test and simulation techniques have evolved concurrently to keep up with the demands of a challenging and complex system. The primary simulations used for preflight predictions and post-test data reconstructions are the Decelerator System Simulation (DSS), the Decelerator System Simulation Application (DSSA), and the Drop Test Vehicle Simulation (DTV-SIM). The goal of this paper is to provide future programs with a roadmap to the test-technique challenges and obstacles involved in executing a large-scale, multi-year parachute test program. A focus on flight simulation modeling and its correlation to the test techniques executed to obtain parachute performance parameters is presented.
Large Eddy Simulation of Vertical Axis Wind Turbines
NASA Astrophysics Data System (ADS)
Hezaveh, Seyed Hossein
Due to several design advantages and operational characteristics, particularly in offshore farms, vertical axis wind turbines (VAWTs) are being reconsidered as a complementary technology to horizontal axis wind turbines (HAWTs). However, considerable gaps remain in our understanding of VAWT performance, since they have been significantly less studied than HAWTs. This thesis examines the performance of isolated VAWTs based on different design parameters and evaluates their characteristics in large wind farms. An actuator line model (ALM) is implemented in an atmospheric boundary layer large eddy simulation (LES) code, with offline coupling to a high-resolution blade-scale unsteady Reynolds-averaged Navier-Stokes (URANS) model. The LES captures the turbine-to-farm scale dynamics, while the URANS captures the blade-to-turbine scale flow. The simulation results are found to be in good agreement with existing experimental datasets. Subsequently, a parametric study of the flow over an isolated VAWT is carried out by varying solidities, height-to-diameter aspect ratios, and tip speed ratios. The analyses of the wake area and power deficits yield an improved understanding of the evolution of VAWT wakes, which in turn enables a more informed selection of turbine designs for wind farms. One of the most important advantages of VAWTs compared to HAWTs is their potential for synergistic interactions that increase their performance when placed in close proximity. Field experiments have confirmed that, unlike HAWTs, VAWTs can enhance total power production when placed near each other. Based on these experiments and using ALM-LES, we also present and test new approaches to VAWT farm configuration. We first design clusters of three turbines and then configure farms consisting of clusters of VAWTs rather than individual turbines. The results confirm that by using a cluster design, the average power density of wind farms can be increased by as much as 60% relative to regular arrays. Finally, the thesis investigates the influence of farm length (parallel to the wind) to assess the fetch needed for equilibrium to be reached, as well as the origin of the kinetic energy extracted by the turbines.
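The actuator line concept referenced above can be sketched compactly: blade forces computed from tabulated airfoil coefficients are projected onto the LES grid as a smeared body force through a Gaussian regularization kernel. Everything below (coefficients, smearing width, grid) is an illustrative assumption, not the thesis implementation.

```python
import numpy as np

def blade_point_force(u_rel, chord, cl, cd, rho=1.225):
    """2-D aerodynamic force per unit span from the local relative velocity."""
    q = 0.5 * rho * np.dot(u_rel, u_rel) * chord
    drag_dir = u_rel / np.linalg.norm(u_rel)
    lift_dir = np.array([-drag_dir[1], drag_dir[0]])   # rotate 90 degrees
    return q * (cl * lift_dir + cd * drag_dir)

def project_force(grid_xy, point_xy, force, eps=0.05):
    """Gaussian kernel spreading a point force onto the grid as a body force."""
    r2 = np.sum((grid_xy - point_xy) ** 2, axis=-1)
    kernel = np.exp(-r2 / eps**2) / (eps**2 * np.pi)
    return kernel[..., None] * force

f = blade_point_force(np.array([8.0, 2.0]), chord=0.14, cl=1.1, cd=0.02)
grid = np.stack(np.meshgrid(np.linspace(0, 2, 64),
                            np.linspace(0, 2, 64)), axis=-1)
body_force = project_force(grid, np.array([1.0, 1.0]), f)
```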
New Challenges in Computational Thermal Hydraulics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadigaroglu, George; Lakehal, Djamel
New needs and opportunities drive the development of novel computational methods for the design and safety analysis of light water reactors (LWRs). Some new methods are likely to be three dimensional. Coupling is expected between system codes, computational fluid dynamics (CFD) modules, and cascades of computations at scales ranging from the macro- or system scale to the micro- or turbulence scales, with the various levels continuously exchanging information back and forth. The ISP-42/PANDA and the international SETH project provide opportunities for testing applications of single-phase CFD methods to LWR safety problems. Although industrial single-phase CFD applications are commonplace, computational multifluid dynamics is still under development. However, first applications are appearing; the state of the art and its potential uses are discussed. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water is a perfect illustration of a simulation cascade: At the top of the hierarchy of scales, system behavior can be modeled with a system code; at the central level, the volume-of-fluid method can be applied to predict large-scale bubbling behavior; at the bottom of the cascade, direct-contact condensation can be treated with direct numerical simulation, in which turbulent flow (in both the gas and the liquid), interfacial dynamics, and heat/mass transfer are directly simulated without resorting to models.
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Abe, S.
2014-06-01
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. The nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors as the critical charge decreases. It is also found that the high-energy component of secondary cosmic-ray neutrons, from 10 MeV up to several hundreds of MeV, is the most significant source of soft errors regardless of design rule.
Pradhan, Ranjan; Misra, Manjusri; Erickson, Larry; Mohanty, Amar
2010-11-01
A laboratory-scale simulated composting facility (per ASTM D 5338) was designed and utilized to determine and evaluate the extent of degradation of polylactic acid (PLA), untreated wheat and soy straw, and injection-moulded composites of PLA-wheat straw (70:30) and PLA-soy straw (70:30). The outcomes of the study revealed the suitability of the test protocol and the validity of the test system, and defined the compostability of the composites of PLA with unmodified natural substrates. The study would help in designing composites using modified soy straw and wheat straw as reinforcement/filler to satisfy ASTM D 6400 specifications. Copyright 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chatterjee, Tanmoy; Peet, Yulia T.
2017-07-01
A large eddy simulation (LES) methodology coupled with near-wall modeling has been implemented in the current study for high-Re neutral atmospheric boundary layer flows, using an exponentially accurate spectral element method in the open-source research code Nek5000. The effect of artificial length scales due to subgrid-scale (SGS) and near-wall modeling (NWM) on the scaling laws and structure of the inner- and outer-layer eddies is studied using varying SGS and NWM parameters in the spectral element framework. The study provides an understanding of the length scales and dynamics of the eddies affected by the LES model, as well as the fundamental physics of the inner- and outer-layer eddies responsible for the correct behavior of the mean statistics, in accordance with Townsend's definition of equilibrium layers. An economical and accurate LES model based on capturing the near-wall coherent eddies has been designed, which succeeds in eliminating artificial length-scale effects such as the log-layer mismatch and secondary peak generation in the streamwise variance.
Modularized Parallel Neutron Instrument Simulation on the TeraGrid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Meili; Cobb, John W; Hagen, Mark E
2007-01-01
In order to build a bridge between the TeraGrid (TG), a national-scale cyberinfrastructure resource, and neutron science, the Neutron Science TeraGrid Gateway (NSTG) is focused on introducing productive HPC usage to the neutron science community, primarily the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). Monte Carlo simulations are used as a powerful tool for instrument design and optimization at SNS. One of the successful efforts of a collaboration team composed of NSTG HPC experts and SNS instrument scientists is the development of a software facility named PSoNI, Parallelizing Simulations of Neutron Instruments. By parallelizing the traditional serial instrument simulation on TeraGrid resources, PSoNI quickly computes full instrument simulations at sufficient statistical levels for instrument design. Following successful SNS commissioning, three of the five commissioned instruments in the SNS target station will be available for initial users by the end of 2007. Advanced instrument study, proposal feasibility evaluation, and experiment planning are on the immediate schedule of SNS, which poses further requirements, such as flexibility and high runtime efficiency, on fast instrument simulation. PSoNI has been redesigned to meet these new challenges and a preliminary version has been developed on TeraGrid. This paper explores the motivation and goals of the new design and the improved software structure. Further, it describes the new features realized, as seen from MPI-parallelized McStas running high-resolution design simulations of the SEQUOIA and BSS instruments at SNS. A discussion regarding future work, targeted at fast simulation for automated experiment adjustment and comparing models to data in analysis, is also presented.
Computational Simulation of Composite Structural Fatigue
NASA Technical Reports Server (NTRS)
Minnetyan, Levon; Chamis, Christos C. (Technical Monitor)
2005-01-01
Progressive damage and fracture of composite structures subjected to monotonically increasing static, tension-tension cyclic, pressurization, and flexural cyclic loading are evaluated via computational simulation. Constituent material properties, stress and strain limits are scaled up to the structure level to evaluate the overall damage and fracture propagation for composites. Damage initiation, growth, accumulation, and propagation to fracture due to monotonically increasing static and cyclic loads are included in the simulations. Results show the number of cycles to failure at different temperatures and the damage progression sequence during different degradation stages. A procedure is outlined for use of computational simulation data in the assessment of damage tolerance, determination of sensitive parameters affecting fracture, and interpretation of results with insight for design decisions.
Efficiently passing messages in distributed spiking neural network simulation.
Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan
2013-01-01
Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased, so has the size of the computing systems required to simulate them. In addition, the information exchange among these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with InfiniBand hardware, is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
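One of the MPI mechanisms such a comparison covers is collective exchange; a minimal sketch of one spike-exchange step using mpi4py follows (the neuron ID layout and spike probability are assumptions for illustration, and this is not the paper's benchmark code).

```python
from mpi4py import MPI
import numpy as np

# Each rank contributes the IDs of its neurons that fired this timestep;
# Allgatherv delivers the global spike list to all ranks.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Locally generated spikes for this step (assumed neuron ID layout)
local_spikes = (np.where(np.random.rand(1000) < 0.01)[0].astype('i')
                + rank * 1000)

# First share counts, then the variable-length spike lists
counts = np.array(comm.allgather(local_spikes.size))
global_spikes = np.empty(counts.sum(), dtype='i')
comm.Allgatherv(local_spikes, (global_spikes, counts))

if rank == 0:
    print(f"{counts.sum()} spikes exchanged across {size} ranks")
```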
Lai, Canhai; Xu, Zhijie; Li, Tingwen; ...
2017-08-05
In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on the detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered sub-grid models to capture the effect of unresolved details in the coarser mesh, enabling simulations with reasonable accuracy and manageable computational effort. Previously developed filtered models for horizontal-cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical tubes) on the adsorber's hydrodynamics and CO2 capture performance are then examined. A one-dimensional three-region process model is briefly introduced for comparison purposes. The CFD model matches the process model reasonably well while providing additional information about the flow field that is not available from the process model.
Catchment-scale Validation of a Physically-based, Post-fire Runoff and Erosion Model
NASA Astrophysics Data System (ADS)
Quinn, D.; Brooks, E. S.; Robichaud, P. R.; Dobre, M.; Brown, R. E.; Wagenbrenner, J.
2017-12-01
The cascading consequences of fire-induced ecological changes have profound impacts on both natural and managed forest ecosystems. Forest managers tasked with implementing post-fire mitigation strategies need robust tools to evaluate the effectiveness of their decisions, particularly those affecting hydrological recovery. Various hillslope-scale interfaces of the physically-based Water Erosion Prediction Project (WEPP) model have been successfully validated for this purpose using fire-affected plot experiments; however, these interfaces are explicitly designed to simulate single hillslopes. Spatially-distributed, catchment-scale WEPP interfaces have been developed over the past decade, but none have been validated for post-fire simulations, posing a barrier to adoption by forest managers. In this validation study, we compare WEPP simulations with pre- and post-fire hydrological records for three forested catchments (W. Willow, N. Thomas, and S. Thomas) that burned in the 2011 Wallow Fire in northeastern Arizona, USA. Simulations were conducted using two approaches: the first using automatically created inputs from an online, spatial, post-fire WEPP interface, and the second using manually created inputs that incorporate the spatial variability of fire effects observed in the field. Both approaches were compared to five years of observed post-fire sediment and flow data to assess goodness of fit.
Jamroz, Michal; Orozco, Modesto; Kolinski, Andrzej; Kmiecik, Sebastian
2013-01-08
It is widely recognized that atomistic Molecular Dynamics (MD), a classical simulation method, captures the essential physics of protein dynamics. That idea is supported by a theoretical study showing that various MD force-fields provide a consensus picture of protein fluctuations in aqueous solution [Rueda, M. et al. Proc. Natl. Acad. Sci. U.S.A. 2007, 104, 796-801]. However, atomistic MD cannot be applied to most biologically relevant processes due to its limitation to relatively short time scales. Much longer time scales can be accessed by properly designed coarse-grained models. We demonstrate that the aforementioned consensus view of protein dynamics from short (nanosecond) time scale MD simulations is fairly consistent with the dynamics of the coarse-grained protein model - the CABS model. The CABS model employs stochastic dynamics (a Monte Carlo method) and a knowledge-based force-field, which is not biased toward the native structure of a simulated protein. Since CABS-based dynamics allows for the simulation of entire folding (or multiple folding events) in a single run, integration of the CABS approach with all-atom MD promises a convenient (and computationally feasible) means for the long-time multiscale molecular modeling of protein systems with atomistic resolution.
Simulation of Deep Convective Clouds with the Dynamic Reconstruction Turbulence Closure
NASA Astrophysics Data System (ADS)
Shi, X.; Chow, F. K.; Street, R. L.; Bryan, G. H.
2017-12-01
The terra incognita (TI), or gray zone, in simulations is a range of grid spacing comparable to the diameter of the most energetic eddies. Grid spacing in mesoscale simulations is much larger than the eddies, and turbulence is parameterized with one-dimensional vertical mixing. Large eddy simulations (LES) have grid spacing much smaller than the energetic eddies and use three-dimensional models of turbulence. Studies of convective weather use convection-permitting resolutions, which lie in the TI. Neither mesoscale turbulence models nor LES models are designed for the TI, so TI turbulence parameterization needs to be discussed. Here, the effects of sub-filter-scale (SFS) closure schemes on the simulation of deep tropical convection are evaluated by comparing three closures: the Smagorinsky model, a Deardorff-type TKE model, and the dynamic reconstruction model (DRM), which partitions SFS turbulence into resolvable sub-filter scales (RSFS) and unresolved sub-grid scales (SGS). The RSFS are reconstructed, and the SGS are modeled with a dynamic eddy viscosity/diffusivity model. The RSFS stresses/fluxes allow backscatter of energy/variance via counter-gradient stresses/fluxes. In high-resolution (100 m) simulations of tropical convection, the choice of turbulence model did not lead to significant differences in cloud water/ice distribution, precipitation flux, or vertical fluxes of momentum and heat. When the model resolution is coarsened, the Smagorinsky and TKE models overestimate cloud ice and produce a large-amplitude downward heat flux in the middle troposphere (not found in the high-resolution simulations). This error is a result of unrealistically large eddy diffusivities: in the coarse-resolution simulations, the eddy diffusivity of the DRM is on the order of 1, while that of the Smagorinsky and TKE models is on the order of 100. Splitting the eddy viscosity/diffusivity into vertical and horizontal components by using different length scales and strain-rate components helps to reduce the errors, but does not completely remedy the problem. In contrast, the coarse-resolution simulations using the DRM produce results that are more consistent with the high-resolution results, suggesting that the DRM is a more appropriate turbulence model for simulating convection in the TI.
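For reference, the Smagorinsky closure compared above computes an eddy viscosity from the resolved strain rate, nu_t = (Cs * Delta)^2 * |S|; a minimal 2-D sketch follows, where the velocity-gradient fields are synthetic and only the formula is from standard LES theory.

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Smagorinsky eddy viscosity from 2-D resolved velocity gradients."""
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))  # |S|
    return (cs * delta) ** 2 * s_mag

rng = np.random.default_rng(1)
g = rng.normal(size=(4, 32, 32))          # synthetic velocity gradients
nu_t = smagorinsky_nu_t(*g, delta=0.1)    # delta = grid filter width
print(nu_t.mean())
```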
Turbulence modeling for Francis turbine water passages simulation
NASA Astrophysics Data System (ADS)
Maruzewski, P.; Hayashi, H.; Munch, C.; Yamaishi, K.; Hashii, T.; Mombelli, H. P.; Sugow, Y.; Avellan, F.
2010-08-01
The application of Computational Fluid Dynamics (CFD) to hydraulic machines requires the ability to handle turbulent flows and to take into account the effects of turbulence on the mean flow. Nowadays, Direct Numerical Simulation (DNS) is still not a good candidate for hydraulic machine simulations due to its prohibitive computational cost. Large Eddy Simulation (LES), though in the same category as DNS, could be an alternative, whereby only the small-scale turbulent fluctuations are modeled and the larger-scale fluctuations are computed directly. Nevertheless, Reynolds-Averaged Navier-Stokes (RANS) models have become the widespread standard basis for numerous hydraulic machine design procedures. However, for many applications involving wall-bounded flows and attached boundary layers, various hybrid combinations of LES and RANS are being considered, such as Detached Eddy Simulation (DES), whereby the RANS approximation is kept in the regions where the boundary layers are attached to the solid walls. Furthermore, the accuracy of CFD simulations is highly dependent on grid quality, in terms of grid uniformity in complex configurations. Moreover, any successful structured or unstructured CFD code has to offer a wide range of turbulence models, from classic RANS to complex hybrid models. The aim of this study is to compare the behavior of turbulent simulations for both structured and unstructured grid topologies with two different CFD codes applied to the same Francis turbine. The study outlines the discrepancies encountered in predicting the wake of turbine blades using either the standard k-epsilon model or the SST shear stress transport model in a steady CFD simulation. Finally, comparisons are made with experimental data from reduced-scale model measurements at the EPFL Laboratory for Hydraulic Machines.
A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses
Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria
2013-01-01
Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367
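The surrogate idea described above (learn from a corpus of Monte Carlo runs, then predict without further simulation) can be sketched generically; the synthetic "corpus", features and target below are stand-ins for real Monte Carlo output, and the regressor choice is an assumption, not the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Learn a map from synapse parameters to the fraction of open receptors
# at a given time, using a synthetic corpus standing in for Monte Carlo runs.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0.1, 1.0, n),    # cleft width (assumed feature)
    rng.uniform(10, 500, n),     # number of receptors (assumed feature)
    rng.uniform(0.0, 5.0, n),    # time since release, ms
])
# Synthetic target: open-receptor fraction decaying in time
y = np.exp(-X[:, 2] / (1.0 + X[:, 0])) * (X[:, 1] / (X[:, 1] + 100.0))

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")  # folds + validation
print("cross-validated R^2:", scores.mean())
```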
NASA Astrophysics Data System (ADS)
Ji, H.; Bhattacharjee, A.; Prager, S.; Daughton, W. S.; Bale, S. D.; Carter, T. A.; Crocker, N.; Drake, J. F.; Egedal, J.; Sarff, J.; Wallace, J.; Chen, Y.; Cutler, R.; Fox, W. R., II; Heitzenroeder, P.; Kalish, M.; Jara-Almonte, J.; Myers, C. E.; Ren, Y.; Yamada, M.; Yoo, J.
2015-12-01
The FLARE device (flare.pppl.gov) is a new intermediate-scale plasma experiment under construction at Princeton to study magnetic reconnection in regimes directly relevant to space, solar and astrophysical plasmas. Existing small-scale experiments have focused on the single X-line reconnection process, either with small effective sizes or at low Lundquist numbers, both of which are typically very large in natural plasmas. The configuration of the FLARE device is designed to provide experimental access to the new regimes involving multiple X-lines, as guided by a reconnection "phase diagram" [Ji & Daughton, PoP (2011)]. Most of the major components of the FLARE device have been designed and are under construction. The device will be assembled and installed in 2016, followed by commissioning and operation in 2017. The planned research on FLARE as a user facility will be discussed, on topics including the multiple-scale nature of magnetic reconnection from global fluid scales to ion and electron kinetic scales. Results from scoping simulations based on particle and fluid codes and possible comparative research with space measurements will be presented.
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2000-01-01
Aircraft engines are assemblies of dynamically interacting components. Engine updates to keep present aircraft flying safely, and engines for new aircraft, are progressively required to operate under more demanding technological and environmental requirements. Designs to effectively meet those requirements are necessarily collections of multi-scale, multi-level, multi-disciplinary analysis and optimization methods, and probabilistic methods are necessary to quantify the respective uncertainties. These types of methods are the only ones that can formally evaluate advanced composite designs which satisfy those progressively demanding requirements while assuring minimum cost, maximum reliability and maximum durability. Recent research activities at NASA Glenn Research Center have focused on developing multi-scale, multi-level, multidisciplinary analysis and optimization methods. Multi-scale refers to formal methods which describe complex material behavior, metal or composite; multi-level refers to integration of participating disciplines to describe a structural response at the scale of interest; multidisciplinary refers to an open-ended framework for the various existing and yet-to-be-developed discipline constructs required to formally predict/describe a structural response in engine operating environments. For example, these include, but are not limited to: multi-factor models for material behavior, multi-scale composite mechanics, general-purpose structural analysis, progressive structural fracture for evaluating durability and integrity, noise and acoustic fatigue, emission requirements, hot fluid mechanics, heat transfer and probabilistic simulations. Many of these, as well as others, are encompassed in an integrated computer code identified as the Engine Structures Technology Benefits Estimator (EST/BEST), or Multi-faceted/Engine Structures Optimization (MP/ESTOP). The discipline modules integrated in MP/ESTOP include: engine cycle (thermodynamics), engine weights, internal fluid mechanics, cost, mission, coupled structural/thermal analysis, various composite property simulators, and probabilistic methods to evaluate uncertainty effects (scatter ranges) in all the design parameters. The objective of this paper is to briefly describe a multi-faceted design analysis and optimization capability for coupled multi-discipline engine structures optimization. Results are presented for engine- and aircraft-level metrics to illustrate the versatility of that capability, and for reliability, noise and fatigue to illustrate its inclusiveness. For example, replacing metal rotors with composites reduces engine weight by 20 percent and noise by 15 percent, and improves reliability by an order of magnitude. Composite designs exist that increase fatigue life by at least two orders of magnitude compared to state-of-the-art metals.
Survey of outcomes in a faculty development program on simulation pedagogy.
Roh, Young Sook; Kim, Mi Kang; Tangkawanich, Thitiarpha
2016-06-01
Although many nursing programs use simulation as a teaching-learning modality, there are few systematic approaches to help nursing educators learn this pedagogy. This study evaluates the effects of a simulation pedagogy nursing faculty development program on participants' learning perceptions using a retrospective pre-course and post-course design. Sixteen Thai participants completed a two-day nursing faculty development program on simulation pedagogy. Thirteen questionnaires were used in the final analysis. The participants' self-perceived learning about simulation teaching showed significant post-course improvement. On a five-point Likert scale, the composite mean attitude, subjective norm, and perceived behavioral control scores, as well as intention to use a simulator, showed a significant post-course increase. A faculty development program on simulation pedagogy induced favorable learning and attitudes. Further studies must test how faculty performance affects the cognitive, emotional, and social dimensions of learning in a simulation-based learning domain. © 2015 Wiley Publishing Asia Pty Ltd.
NASA Astrophysics Data System (ADS)
Wosnik, Martin; Bachant, Peter
2016-11-01
Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines, with diameters of order 1 m, using a turbine test bed in a large cross-section tow tank designed to achieve Reynolds numbers high enough for the results to be Reynolds-number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.
Tian, Yang; Liu, Zhilin; Li, Xiaoqian; Zhang, Lihua; Li, Ruiqing; Jiang, Ripeng; Dong, Fang
2018-05-01
Ultrasonic sonotrodes play an essential role in transmitting power ultrasound into large-scale metallic castings. However, cavitation erosion considerably impairs the in-service performance of ultrasonic sonotrodes, leading to marginal microstructural refinement. In this work, the cavitation erosion behaviour of ultrasonic sonotrodes in large-scale castings was explored using industry-level experiments on Al alloy cylindrical ingots (i.e. 630 mm in diameter and 6000 mm in length). When introducing power ultrasound, severe cavitation erosion was found to reproducibly occur at specific positions on the ultrasonic sonotrodes. However, no cavitation erosion was present on ultrasonic sonotrodes that were not driven by the electric generator. Vibratory examination showed that cavitation erosion depended on the vibration state of the ultrasonic sonotrodes. Moreover, a finite element (FE) model was developed to simulate the evolution and distribution of acoustic pressure in the 3-D solidification volume. The FE simulation results confirmed that significant dynamic interaction between sonotrodes and melt only occurred at the specific positions corresponding to severe cavitation erosion. This work will allow for the development of more advanced ultrasonic sonotrodes with better cavitation erosion resistance, in particular for large-scale castings, from the perspectives of ultrasonic physics and mechanical design. Copyright © 2018 Elsevier B.V. All rights reserved.
Design and realization of retina-like three-dimensional imaging based on a MOEMS mirror
NASA Astrophysics Data System (ADS)
Cao, Jie; Hao, Qun; Xia, Wenze; Peng, Yuxin; Cheng, Yang; Mu, Jiaxing; Wang, Peng
2016-07-01
To balance the conflicting demands of high-resolution, large-field-of-view and real-time imaging, a retina-like imaging method based on time-of-flight (TOF) is proposed. Mathematical models of 3D imaging based on a MOEMS mirror are developed. Based on this method, we perform simulations of retina-like scanning properties, including compression of redundant information as well as rotation and scaling invariance. To validate the theory, we developed a prototype and conducted relevant experiments. The preliminary results agree well with the simulations.
Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems
NASA Technical Reports Server (NTRS)
Gates, Thomas S.; Hinkley, Jeffrey A.
2003-01-01
The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.
Stahnke, Amanda M.; Behnen, Erin M.
2015-01-01
Objective. To assess the impact of a 6-week patient/provider interaction simulation on empathy and self-efficacy levels of diabetes management skills in third-year pharmacy students. Design. Pharmacy students enrolled in a diabetes elective course were paired to act as a patient with diabetes or as a provider assisting in the management of that patient during a 6-week simulation activity. After 3 weeks, students switched roles. The simulation was designed with activities to build empathy. Assessment. The Jefferson Scale of Empathy (JSE) and a self-efficacy survey were administered to assess change in empathy and confidence levels from baseline to the end of the activity. Completion of the activity resulted in significant improvement in total JSE scores. Additionally, significant improvements in overall self-efficacy scores regarding diabetes management were noted. Conclusion. The 6-week patient/provider interaction simulation improved empathy and self-efficacy levels in third-year pharmacy students. PMID:25995517
An Automatic Instrument to Study the Spatial Scaling Behavior of Emissivity
Tian, Jing; Zhang, Renhua; Su, Hongbo; Sun, Xiaomin; Chen, Shaohui; Xia, Jun
2008-01-01
In this paper, the design of an automatic instrument for measuring the spatial distribution of land surface emissivity is presented, which makes direct in situ measurement of the spatial distribution of emissivity possible. The significance of this new instrument lies in two aspects. One is that it helps to investigate the spatial scaling behavior of emissivity and temperature; the other is that the design of the instrument provides theoretical and practical foundations for implementing surface emissivity distribution measurements on airborne or spaceborne platforms. To improve the accuracy of the measurements, the emissivity measurement and its uncertainty are examined in a series of carefully designed experiments. The impact of the variation of target temperature and the environmental irradiance on the measurement of emissivity is analyzed as well. In addition, the ideal temperature difference between the hot environment and the cool environment is obtained based on numerical simulations. Finally, the scaling behavior of surface emissivity caused by the heterogeneity of the target is discussed. PMID:27879735