Validation of Supersonic Film Cooling Modeling for Liquid Rocket Engine Applications
NASA Technical Reports Server (NTRS)
Morris, Christopher I.; Ruf, Joseph H.
2010-01-01
Topics include: upper stage engine key requirements and design drivers; Calspan "stage 1" results, He slot injection into hypersonic flow (air); test articles for shock generator diagram, slot injector details, and instrumentation positions; test conditions; modeling approach; 2-d grid used for film cooling simulations of test article; heat flux profiles from 2-d flat plate simulations (run #4); heat flux profiles from 2-d backward facing step simulations (run #43); isometric sketch of single coolant nozzle, and x-z grid of half-nozzle domain; comparison of 2-d and 3-d simulations of coolant nozzles (run #45); flowfield properties along coolant nozzle centerline (run #45); comparison of 3-d CFD nozzle flow calculations with experimental data; nozzle exit plane reduced to linear profile for use in 2-d film-cooling simulations (run #45); synthetic Schlieren image of coolant injection region (run #45); axial velocity profiles from 2-d film-cooling simulation (run #45); coolant mass fraction profiles from 2-d film-cooling simulation (run #45); heat flux profiles from 2-d film cooling simulations (run #45); heat flux profiles from 2-d film cooling simulations (runs #47, #45, and #47); 3-d grid used for film cooling simulations of test article; heat flux contours from 3-d film-cooling simulation (run #45); and heat flux profiles from 3-d and 2-d film cooling simulations (runs #44, #46, and #47).
Reducing EnergyPlus Run Time For Code Compliance Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.
2014-09-12
Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which affects how quickly compliance results can be fed back to the user. EnergyPlus annual simulations for proposed and code-baseline building models, together with mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used to determine the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
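As a rough illustration of the shortened-run-period idea, the sketch below simulates one representative week per quarter, weights each by the number of weeks in its quarter, and compares the total against a full annual run. The `simulate_energy_use` function is a toy stand-in, not an EnergyPlus interface, and the weather data are synthetic.

```python
import random

# Toy surrogate for an EnergyPlus call: energy use grows with the
# deviation of outdoor temperature from 18 C. Not an EnergyPlus API.
def simulate_energy_use(hourly_weather):
    return sum(abs(t - 18.0) for t in hourly_weather)

random.seed(0)
annual_weather = [10.0 + 15.0 * random.random() for _ in range(8760)]

# Full annual run (all 52 weeks of hourly data).
annual_result = simulate_energy_use(annual_weather)

# Shortened run: one week (168 h) per quarter, scaled by 13 weeks/quarter.
weeks_per_quarter = 13
shortened = 0.0
for quarter in range(4):
    start = quarter * 2190          # ~quarter boundary in hours
    week = annual_weather[start:start + 168]
    shortened += weeks_per_quarter * simulate_energy_use(week)

error_pct = 100.0 * (shortened - annual_result) / annual_result
print(f"annual={annual_result:.0f}  shortened={shortened:.0f}  "
      f"error={error_pct:+.1f}%")
```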
NASA Technical Reports Server (NTRS)
Caldwell, E. C.; Cowley, M. S.; Scott-Pandorf, M. M.
2010-01-01
Develop a model that simulates a human running in 0 G using the European Space Agency's (ESA) Subject Loading System (SLS). The model provides ground reaction forces (GRF) based on speed and pull-down forces (PDF). DESIGN: The theoretical basis for the Running Model was a simple spring-mass model. The dynamic properties of the spring-mass model express theoretical vertical GRF (GRFv) and shear GRF in the posterior-anterior direction (GRFsh) during running gait. ADAMS View software was used to build the model, which has a pelvis, a thigh segment, a shank segment, and a spring foot (see Figure 1). The model's movement simulates the joint kinematics of a human running at Earth gravity with the aim of generating GRF data. DEVELOPMENT & VERIFICATION: ESA provided parabolic flight data of subjects running while using the SLS for further characterization of the model's GRF. Peak GRF data were fit to a linear regression dependent on PDF and speed. Interpolation and extrapolation of the regression equation provided a theoretical data matrix, which is used to drive the model's motion equations. The model was verified by running it at 4 different speeds, each at 3 different PDFs. The model's GRF data fell within a 1-standard-deviation boundary derived from the empirical ESA data. CONCLUSION: The Running Model aids in conducting various simulations (potential scenarios include a fatigued runner or a powerful runner generating high loads at a fast cadence) to determine limitations for the T2 vibration isolation system (VIS) aboard the International Space Station. This model can predict how running with the ESA SLS affects the T2 VIS and may be used for other exercise analyses in the future.
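The stance-phase mechanics of such a spring-mass model are compact enough to sketch. The following is a minimal illustration of the vertical dynamics only, with invented parameter values; it is not the ADAMS/SLS implementation described above.

```python
# Vertical spring-mass ("SLIP-like") stance dynamics: during stance
# the leg spring compresses and the vertical ground reaction force
# is k*(L0 - z). Parameters are illustrative, not the ESA/SLS values.
m, k, L0, g = 75.0, 20000.0, 1.0, 9.81   # mass [kg], stiffness [N/m], leg length [m]
z, vz = L0, -1.0                          # touchdown: leg just contacts ground
dt = 1e-4

t, grf_peak = 0.0, 0.0
while z <= L0:                            # stance phase: spring compressed
    grf = k * (L0 - z)                    # vertical ground reaction force [N]
    az = (grf - m * g) / m
    vz += az * dt
    z += vz * dt
    t += dt
    grf_peak = max(grf_peak, grf)

print(f"stance time = {t*1e3:.0f} ms, peak GRFv = {grf_peak:.0f} N "
      f"({grf_peak/(m*g):.1f} body weights)")
```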
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than current-generation simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzes EnergyPlus run time from comprehensive perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time in light of advances in computers and improvements to the EnergyPlus code, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations for improving EnergyPlus run time from the modeler's perspective and for choosing adequate computing platforms. Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.
Kunkel, Susanne; Schenck, Wolfram
2017-01-01
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
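A dry-run mode of this kind can be sketched generically: a single process acts as one rank of a large virtual parallel job, performs its local work, and accounts for communication volume without actually communicating. The sketch below is conceptual Python, not the NEST implementation.

```python
# Conceptual dry-run sketch (not the NEST API): one process behaves as
# rank `rank` of `total_procs`, building only its share of the network
# and skipping real MPI exchange, so memory use and local workload can
# be measured without a supercomputer allocation.
def dry_run(rank, total_procs, n_neurons, steps):
    # Round-robin distribution of neurons, as in typical parallel setups.
    local_neurons = [gid for gid in range(n_neurons)
                     if gid % total_procs == rank]
    state = {gid: 0.0 for gid in local_neurons}

    for _ in range(steps):
        # Local update phase: identical to a real parallel run.
        for gid in state:
            state[gid] += 0.1
        # Communication phase: a real run would do an MPI all-to-all of
        # spike buffers; the dry run only accounts for the buffer size
        # it *would* send, without sending anything.
        fake_send_buffer_len = len(local_neurons)

    return len(local_neurons), fake_send_buffer_len

local_count, buf = dry_run(rank=0, total_procs=1024,
                           n_neurons=1_000_000, steps=10)
print(f"rank 0 of 1024 owns {local_count} neurons; "
      f"per-step send buffer ~{buf} entries")
```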
Running Parallel Discrete Event Simulators on Sierra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, P. D.; Jefferson, D. R.
2015-12-03
In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.
A Storm Surge and Inundation Model of the Back River Watershed at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Loftis, Jon Derek; Wang, Harry V.; DeYoung, Russell J.
2013-01-01
This report on a Virginia Institute of Marine Science project demonstrates that the sub-grid modeling technology (now part of the Chesapeake Bay Inundation Prediction System, CIPS) can incorporate high-resolution lidar measurements provided by NASA Langley Research Center into the sub-grid model framework to resolve detailed topographic features, allowing its use as a hydrological transport model for run-off simulations within NASA Langley and Langley Air Force Base. The model was tested by simulating run-off induced by heavy precipitation, in which rainfall over land accumulates in the ditches and channels resolved by the model sub-grid. Possessing capabilities for both storm surge and run-off simulation, the CIPS model was then applied to simulate real storm events, starting with Hurricane Isabel in 2003. It will be shown that the model can generate highly accurate on-land inundation maps, as demonstrated by excellent agreement of the Langley tidal gauge time series data (CAPABLE.larc.nasa.gov) and spatial patterns of real storm wrack line measurements with the model results simulated during Hurricanes Isabel (2003), Irene (2011), and a 2009 Nor'easter. With confidence built upon the model's performance, sea level rise scenarios from the ICCP (International Climate Change Partnership) were also included in the model scenario runs to simulate future inundation cases.
NASA Astrophysics Data System (ADS)
Mohaghegh, Shahab
2010-05-01
The Surrogate Reservoir Model (SRM) is a new solution for fast-track, comprehensive reservoir analysis (solving both direct and inverse problems) using existing reservoir simulation models. An SRM is defined as a replica of the full-field reservoir simulation model that runs and provides accurate results in real time (one simulation run takes only a fraction of a second). The SRM mimics the capabilities of a full-field model with high accuracy. Reservoir simulation is the industry standard for reservoir management. It is used in all phases of field development in the oil and gas industry. The routine of simulation studies calls for integration of static and dynamic measurements into the reservoir model. Full-field reservoir simulation models have become the major source of information for analysis, prediction, and decision making. Large prolific fields usually go through several versions (updates) of their model. Each new version usually is a major improvement over the previous one. The updated model incorporates the latest available information along with adjustments that usually result from single-well or multi-well history matching. As the number of reservoir layers (thickness of the formations) increases, the number of cells representing the model approaches several million. As reservoir models grow in size, so does the time required for each run. Schemes such as grid computing and parallel processing help to a certain degree but do not provide the speed required for tasks such as field development strategies using comprehensive reservoir analysis, solving the inverse problem for injection/production optimization, quantifying uncertainties associated with the geological model, and real-time optimization and decision making. These types of analyses require hundreds or thousands of runs. Furthermore, with the new push for smart fields in the oil/gas industry, a natural growth of smart completions and smart wells, the need for real-time reservoir modeling becomes more pronounced. The SRM is developed using the state of the art in neural computing and fuzzy pattern recognition to address the ever-growing need in the oil and gas industry to perform accurate but high-speed simulation and modeling. Unlike conventional geo-statistical approaches (response surfaces, proxy models …) that require hundreds of simulation runs for development, an SRM is developed with only a few (from 10 to 30) simulation runs. An SRM can be developed regularly (as new versions of the full-field model become available) off-line and can be put online for real-time processing to guide important decisions. The SRM has proven its value in the field. An SRM was developed for a giant oil field in the Middle East. The model included about one million grid blocks with more than 165 horizontal wells and took ten hours for a single run on 12 parallel CPUs. Using only 10 simulation runs, an SRM was developed that was able to accurately mimic the behavior of the reservoir simulation model. Performing a comprehensive reservoir analysis that included making millions of SRM runs, wells in the field were divided into five clusters. It was predicted that wells in clusters one and two were the best candidates for rate relaxation with minimal long-term water production, while wells in clusters four and five were susceptible to high water cuts. Two and a half years and 20 wells later, rate relaxation results from the field proved that all the predictions made by the SRM analysis were correct.
While incremental oil production increased in all wells (wells in cluster 1 produced the most, followed by wells in clusters 2, 3, …), the percent change in average monthly water cut for wells in each cluster clearly demonstrated the analytic power of the SRM. As correctly predicted, wells in clusters 1 and 2 actually experienced a reduction in water cut, while a substantial increase in water cut was observed in wells classified into clusters 4 and 5. Performing these analyses would have been impossible using the original full-field simulation model.
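The core surrogate idea (fit a fast statistical model to a small number of expensive simulator runs) can be sketched with off-the-shelf tools. The code below uses a small scikit-learn neural network and a toy stand-in for the reservoir simulator; it is illustrative only, not the authors' SRM workflow.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

def expensive_simulator(x):
    """Toy stand-in for a full-field reservoir simulation run."""
    return np.sin(3 * x[0]) + 0.5 * x[1] ** 2

# "From 10 to 30 runs": here, 20 training runs of the full simulator.
X_train = rng.uniform(-1, 1, size=(20, 2))
y_train = np.array([expensive_simulator(x) for x in X_train])

srm = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(X_train, y_train)

# Millions of surrogate evaluations are now cheap.
X_query = rng.uniform(-1, 1, size=(100_000, 2))
predictions = srm.predict(X_query)
print(f"{len(predictions)} surrogate evaluations done; "
      f"sample prediction: {predictions[0]:.3f}")
```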
Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre
2009-01-01
The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system by which interested scientists can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them into the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in a portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in the interpretation of simulation results. In the eight years since the Run-on-Request system became available, CCMC has archived the results of almost 3000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general-purpose runs with modeled conditions that are used for education and research. Archiving the results of simulations performed in support of several Modeling Challenges helps evaluate progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.
NASA Astrophysics Data System (ADS)
Kemp, E. M.; Putman, W. M.; Gurganus, J.; Burns, R. W.; Damon, M. R.; McConaughy, G. R.; Seablom, M. S.; Wojcik, G. S.
2009-12-01
We present a regional downscaling system (RDS) suitable for high-resolution weather and climate simulations in multiple supercomputing environments. The RDS is built on the NASA Workflow Tool, a software framework for configuring, running, and managing computer models on multiple platforms with a graphical user interface. The Workflow Tool is used to run the NASA Goddard Earth Observing System Model Version 5 (GEOS-5), a global atmosphere-ocean model for weather and climate simulations down to 1/4 degree resolution; the NASA Land Information System Version 6 (LIS-6), a land surface modeling system that can simulate soil temperature and moisture profiles; and the Weather Research and Forecasting (WRF) community model, a limited-area atmospheric model for weather and climate simulations down to 1-km resolution. The Workflow Tool allows users to customize model settings to their needs; saves and organizes simulation experiments; distributes model runs across different computer clusters (e.g., the DISCOVER cluster at Goddard Space Flight Center, the Cray CX-1 Desktop Supercomputer, etc.); and handles all file transfers and network communications (e.g., scp connections). Together, the RDS is intended to aid researchers by making simulations as easy as possible to generate on the computer resources available. Initial conditions for LIS-6 and GEOS-5 are provided by Modern Era Retrospective-Analysis for Research and Applications (MERRA) reanalysis data stored on DISCOVER. LIS-6 is first run for 2-4 years forced by MERRA atmospheric analyses, generating initial conditions for the WRF soil physics. GEOS-5 is then initialized from MERRA data and run for the period of interest. Large-scale atmospheric data, sea-surface temperatures, and sea ice coverage from GEOS-5 are used as boundary conditions for WRF, which is run for the same period of interest. Multiply nested grids are used for both LIS-6 and WRF, with the innermost grid run at a resolution sufficient for typical local weather features (terrain, convection, etc.). All model runs, restarts, and file transfers are coordinated by the Workflow Tool. Two use cases are being pursued. First, the RDS generates regional climate simulations down to 4 km for the Chesapeake Bay region, with WRF output provided as input to more specialized models (e.g., ocean/lake, hydrological, marine biology, and air pollution models). This will allow assessment of climate impacts on local interests (e.g., changes in Bay water levels and temperatures, inundation, fish kills, etc.). Second, the RDS generates high-resolution hurricane simulations in the tropical North Atlantic. This use case will support Observing System Simulation Experiments (OSSEs) of dynamically targeted lidar observations as part of the NASA Sensor Web Simulator project. Sample results will be presented at the AGU Fall Meeting.
A new synoptic scale resolving global climate simulation using the Community Earth System Model
NASA Astrophysics Data System (ADS)
Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana
2014-12-01
High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was run at 0.25° grid spacing, and the ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El Niño-Southern Oscillation variability were well simulated compared to standard-resolution models. Tropical and southern Atlantic SST also had much-reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and tropical cyclones. Associated single-component runs and standard-resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, cost 250 thousand processor-hours per simulated year, and achieved about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."
NASA Astrophysics Data System (ADS)
Liu, Fei; Zhao, Jiuwei; Fu, Xiouhua; Huang, Gang
2018-02-01
By conducting idealized experiments in a general circulation model (GCM) and in a toy theoretical model, we test the hypothesis that shallow convection (SC) is responsible for the boreal summer intraseasonal oscillation (BSISO) preferring to propagate northward. Two simulations are performed using ECHAM4, with the control run using a standard detrainment rate for SC and the sensitivity run turning off the detrainment of SC. These two simulations display dramatically different BSISO characteristics. The control run simulates the realistic northward propagation (NP) of the BSISO, while the sensitivity run with little SC simulates only stationary signals. In the sensitivity run, the meridional asymmetries of the vorticity and humidity fields are simulated under the monsoon vertical wind shear (VWS); thus, frictional convergence can be excited to the north of the BSISO. However, the lack of SC leaves the lower and middle troposphere very dry, which prohibits further development of deeper convection. A theoretical BSISO model is also constructed, and the result shows that SC is key to conveying the asymmetric vorticity effect that induces the BSISO to move northward. Thus, both the GCM and theoretical model results demonstrate the importance of SC in promoting the NP of the BSISO.
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of the millions of computers on the Internet and use them to run large-scale environmental simulations and models for the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language, JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. A web-based system lets users start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational chunks. A relational database system is used to manage data connections and the work queue for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
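The actual platform runs JavaScript in volunteers' browsers; the Python sketch below only illustrates the chunked work-queue pattern it describes. The chunk size and the summing "model" are invented placeholders.

```python
from queue import Queue

def make_chunks(domain_cells, chunk_size):
    """Split a model domain into small work units for volunteers."""
    return [domain_cells[i:i + chunk_size]
            for i in range(0, len(domain_cells), chunk_size)]

# Fill the queue with small spatial/computational work units.
work_queue = Queue()
for chunk in make_chunks(list(range(10_000)), 250):
    work_queue.put(chunk)

results = []
while not work_queue.empty():
    chunk = work_queue.get()          # a volunteer node requests work
    results.append(sum(chunk))        # stand-in for running the model chunk
    work_queue.task_done()

print(f"{len(results)} work units completed by volunteers")
```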
NASA Astrophysics Data System (ADS)
Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.
2015-12-01
Climate model simulations are used to understand the evolution and variability of Earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected, but most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time needed to complete a simulation constrains the productivity of climate scientists. Our implementation of near-real-time data visualization analytics allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server, located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible. The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software aims to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.
Weather model performance on extreme rainfall event simulations over the Western Iberian Peninsula
NASA Astrophysics Data System (ADS)
Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.
2012-08-01
This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the Portugal mainland rainy season. The periods of heavy to extremely heavy rainfall were due to several low surface-pressure systems associated with frontal surfaces. The total amount of precipitation for December exceeded the climatological mean for the 1971-2000 period by 89 mm on average, varying from 190 mm (southern part of the country) to 1175 mm (northern part of the country). Three model runs were conducted to assess possible improvements in model performance: (1) WRF-ARW forced with the initial fields from a global domain model (RunRef); (2) with data assimilation for a specific location included (RunObsN); (3) with nudging used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but the model performance of the other two runs was also good, so the selected extreme rainfall episode was successfully reproduced.
Designing Crop Simulation Web Service with Service Oriented Architecture Principle
NASA Astrophysics Data System (ADS)
Chinnachodteeranun, R.; Hung, N. D.; Honda, K.
2015-12-01
Crop simulation models are efficient tools for simulating crop growth processes and yield. Running crop models requires data from various sources as well as time-consuming data processing, such as data quality checking and data formatting, before those data can be input to the model. This makes the use of crop modeling limited to crop modelers. We aim to make running crop models convenient for a range of users so that the utilization of crop models will expand, directly improving agricultural applications. As a first step, we developed a prototype that runs DSSAT on the Web, called Tomorrow's Rice (v. 1). It predicts rice yields based on a planting date, rice variety, and soil characteristics using the DSSAT crop model. A user only needs to select a planting location on the Web GUI; the system then queries historical weather data from available sources and returns the expected yield. Currently, we are working on weather data connection via the Sensor Observation Service (SOS) interface defined by the Open Geospatial Consortium (OGC). Weather data can be automatically connected to a weather generator for generating weather scenarios for running the crop model. To expand these services further, we are designing a web service framework consisting of layers of web services to support composition and execution of crop simulations. This framework allows a third-party application to call and cascade each service as it needs for data preparation and running the DSSAT model using a dynamic web service mechanism. The framework has a module to manage data format conversion, which means users do not need to spend their time curating the data inputs. Dynamic linking of data sources and services is implemented using the Service Component Architecture (SCA). This agricultural web service platform demonstrates interoperability of weather data through the SOS interface, convenient connections between weather data sources and the weather generator, and the chaining of various services for running crop models for decision support.
NASA Technical Reports Server (NTRS)
Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.
2012-01-01
This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was written in FORTRAN 77 (The Portland Group, Lake Oswego, OR) to run in a command shell, similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation environment that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
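The general pattern, two locally running simulations exchanging data at fixed synchronization points, can be sketched with standard inter-process communication. The sketch below is conceptual and does not reflect the actual LAPIN/Simulink mechanism.

```python
# Conceptual sketch of periodic inter-process data exchange (not the
# LAPIN/Simulink implementation): two local processes trade a small
# state value once per simulation step over a pipe, keeping the two
# solvers in lockstep.
import multiprocessing as mp

def legacy_sim(conn, steps):
    x = 0.0
    for _ in range(steps):
        x += 1.0                      # legacy solver advances one step
        conn.send(x)                  # publish state
        _ = conn.recv()               # wait for partner's reply (sync point)

def gui_sim(conn, steps):
    for _ in range(steps):
        partner_x = conn.recv()       # block until legacy step finishes
        conn.send(partner_x * 0.5)    # send back its own derived state

if __name__ == "__main__":
    a, b = mp.Pipe()
    p1 = mp.Process(target=legacy_sim, args=(a, 5))
    p2 = mp.Process(target=gui_sim, args=(b, 5))
    p1.start(); p2.start(); p1.join(); p2.join()
    print("both simulations completed 5 synchronized steps")
```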
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Schlicher, Bob G
Vulnerability in the security of an information system is quantitatively predicted. The information system may receive malicious actions against its security and may receive corrective actions for restoring the security. A game-oriented, agent-based model (ABM) is constructed in a simulator application. The game ABM represents security activity in the information system. It has two opposing participants, an attacker and a defender, probabilistic game rules, and allowable game states. A specified number of simulations is run, and a probabilistic number of the allowable game states is reached in each simulation run. The probability of reaching a specified game state is unknown prior to running each simulation. Data generated during the game states are collected to determine the probability of one or more aspects of security in the information system.
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
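A minimal 1D analogue of such a scheme can convey the wet/dry treatment: a Godunov-type (local Lax-Friedrichs) update with a depth threshold below which cells are treated as dry and their momentum is zeroed. This sketch is far simpler than the paper's 2-D unstructured model and uses invented parameters.

```python
import numpy as np

g, h_dry = 9.81, 1e-6          # gravity; depth below which a cell is "dry"
n, dx, dt = 100, 1.0, 0.02

h = np.where(np.arange(n) < 50, 2.0, 0.0)   # dam break onto a dry bed
hu = np.zeros(n)

def rusanov_step(h, hu):
    # Velocity is zeroed in dry cells to avoid division by ~0.
    u = np.where(h > h_dry, hu / np.maximum(h, h_dry), 0.0)
    c = np.sqrt(g * np.maximum(h, 0.0))
    F_h, F_hu = hu, hu * u + 0.5 * g * h * h
    # Interface fluxes between cells i and i+1 (local Lax-Friedrichs).
    a = np.maximum(np.abs(u[:-1]) + c[:-1], np.abs(u[1:]) + c[1:])
    Fh_star = 0.5 * (F_h[:-1] + F_h[1:]) - 0.5 * a * (h[1:] - h[:-1])
    Fhu_star = 0.5 * (F_hu[:-1] + F_hu[1:]) - 0.5 * a * (hu[1:] - hu[:-1])
    h_new, hu_new = h.copy(), hu.copy()
    h_new[1:-1] -= dt / dx * (Fh_star[1:] - Fh_star[:-1])
    hu_new[1:-1] -= dt / dx * (Fhu_star[1:] - Fhu_star[:-1])
    # Wet/dry fix: clip tiny negative depths, zero momentum in dry cells.
    hu_new[h_new < h_dry] = 0.0
    h_new = np.maximum(h_new, 0.0)
    return h_new, hu_new

for _ in range(200):
    h, hu = rusanov_step(h, hu)
print(f"wet front has advanced to cell {np.max(np.nonzero(h > h_dry))}")
```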
Australia's marine virtual laboratory
NASA Astrophysics Data System (ADS)
Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe
2014-05-01
In all modelling studies of realistic scenarios, a researcher has to go through a number of steps to set up a model in order to produce a model simulation of value. The steps are generally the same, independent of the modelling system chosen. These steps include determining the time and space scales and processes of the required simulation; obtaining data for the initial set-up and for input during the simulation time; obtaining observational data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time-consuming and resource-hungry, and have to be done every time irrespective of the simulation; the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the model preparation steps and bring the researcher faster to the stage of simulation and analysis. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching. In MARVL we are developing a web-based open-source application which provides a number of model choices and provides search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region, for any historical time period, with model specifications of their choice, through a user-friendly web application; b) access data sets to force a model and nest a model into; c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format that is suitable for model evaluation or data assimilation; and d) run the assembled configuration in a cloud computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered will include biogeochemical models.
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach: carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment of Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent on the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50%, and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
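The scaling behaviour described above reduces to simple speedup arithmetic. The sketch below computes speedup and parallel efficiency from wall-clock times; the times used are hypothetical, not the paper's measurements.

```python
# Parallel speedup and efficiency relative to the smallest core count.
def scaling_table(times_by_cores):
    base_cores = min(times_by_cores)
    base_time = times_by_cores[base_cores]
    for cores in sorted(times_by_cores):
        speedup = base_time / times_by_cores[cores]
        ideal = cores / base_cores
        print(f"{cores:4d} cores: speedup {speedup:4.1f}x "
              f"(ideal {ideal:4.1f}x), efficiency {speedup/ideal:5.1%}")

# Hypothetical wall-clock hours for one simulated year of CESM.
scaling_table({16: 10.0, 32: 5.4, 64: 3.0, 128: 2.9})
```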
Li, Chen; Nagasaki, Masao; Koh, Chuan Hock; Miyano, Satoru
2011-05-01
Mathematical modeling and simulation studies are playing an increasingly important role in helping researchers elucidate how living organisms function in cells. In systems biology, researchers typically tune many parameters manually to achieve simulation results that are consistent with biological knowledge. This severely limits the size and complexity of the simulation models built. To break this limitation, we propose a computational framework to automatically estimate kinetic parameters for a given network structure. We utilized an online (on-the-fly) model checking technique (which saves resources compared to the offline approach) with a quantitative modeling and simulation architecture named hybrid functional Petri net with extension (HFPNe). We demonstrate the applicability of this framework by analysis of the underlying model for the neuronal cell fate decision model (ASE fate model) in Caenorhabditis elegans. First, we built a quantitative ASE fate model containing 3327 components emulating nine genetic conditions. Then, using our efficient online model checker, MIRACH 1.0, together with parameter estimation, we performed 20 million simulation runs and were able to locate 57 parameter sets for the 23 parameters in the model that are consistent with 45 biological rules extracted from published biological articles, without much manual intervention. To evaluate the robustness of these 57 parameter sets, we ran another 20 million simulation runs using different magnitudes of noise. Among these models, one proved the most reasonable and robust owing to its high stability against stochastic noise. Our simulation results provide interesting biological findings which could be used for future wet-lab experiments.
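The workflow of sampling kinetic parameters, simulating, and retaining only the parameter sets whose trajectories pass every qualitative rule can be sketched as a plain rejection loop. The dynamics and rules below are toy placeholders, not the HFPNe ASE fate model or MIRACH's checking logic.

```python
import random
random.seed(1)

def simulate(params, steps=100):
    """Toy dynamics standing in for an HFPNe simulation run."""
    x, traj = 1.0, []
    for _ in range(steps):
        x += params["k_prod"] - params["k_deg"] * x
        traj.append(x)
    return traj

# Qualitative rules, standing in for rules mined from the literature.
rules = [
    lambda traj: traj[-1] > 0.5,           # expression persists
    lambda traj: max(traj) < 10.0,         # no unbounded blow-up
    lambda traj: traj[-1] < traj[0] * 5,   # bounded fold change
]

accepted = []
for _ in range(10_000):
    params = {"k_prod": random.uniform(0, 1),
              "k_deg": random.uniform(0.1, 1)}
    traj = simulate(params)
    if all(rule(traj) for rule in rules):  # online check per run
        accepted.append(params)

print(f"{len(accepted)} of 10000 parameter sets satisfy all rules")
```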
Sams, J. I.; Witt, E. C.
1995-01-01
The Hydrological Simulation Program - Fortran (HSPF) was used to simulate streamflow and sediment transport in two surface-mined basins of Fayette County, Pa. Hydrologic data from the Stony Fork Basin (0.93 square miles) was used to calibrate HSPF parameters. The calibrated parameters were applied to an HSPF model of the Poplar Run Basin (8.83 square miles) to evaluate the transfer value of model parameters. The results of this investigation provide information to the Pennsylvania Department of Environmental Resources, Bureau of Mining and Reclamation, regarding the value of the simulated hydrologic data for use in cumulative hydrologic-impact assessments of surface-mined basins. The calibration period was October 1, 1985, through September 30, 1988 (water years 1986-88). The simulated data were representative of the observed data from the Stony Fork Basin. Mean simulated streamflow was 1.64 cubic feet per second compared to measured streamflow of 1.58 cubic feet per second for the 3-year period. The difference between the observed and simulated peak stormflow ranged from 4.0 to 59.7 percent for 12 storms. The simulated sediment load for the 1987 water year was 127.14 tons (0.21 ton per acre), which compares to a measured sediment load of 147.09 tons (0.25 ton per acre). The total simulated suspended-sediment load for the 3-year period was 538.2 tons (0.30 ton per acre per year), which compares to a measured sediment load of 467.61 tons (0.26 ton per acre per year). The model was verified by comparing observed and simulated data from October 1, 1988, through September 30, 1989. The results obtained were comparable to those from the calibration period. The simulated mean daily discharge was representative of the range of data observed from the basin and of the frequency with which specific discharges were equalled or exceeded. The calibrated and verified parameters from the Stony Fork model were applied to an HSPF model of the Poplar Run Basin. The two basins are in a similar physical setting. Data from October 1, 1987, through September 30, 1989, were used to evaluate the Poplar Run model. In general, the results from the Poplar Run model were comparable to those obtained from the Stony Fork model. The difference between observed and simulated total streamflow was 1.1 percent for the 2-year period. The mean annual streamflow simulated by the Poplar Run model was 18.3 cubic feet per second. This compares to an observed streamflow of 18.15 cubic feet per second. For the 2-year period, the simulated sediment load was 2,754 tons (0.24 ton per acre per year), which compares to a measured sediment load of 3,051.2 tons (0.27 ton per acre per year) for the Poplar Run Basin. Cumulative frequency-distribution curves of the observed and simulated streamflow compared well. The comparison between observed and simulated data improved as the time span increased. Simulated annual means and totals were more representative of the observed data than hourly data used in comparing storm events. The structure and organization of the HSPF model facilitated the simulation of a wide range of hydrologic processes. The simulation results from this investigation indicate that model parameters may be transferred to ungaged basins to generate representative hydrologic data through modeling techniques.
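The calibration summaries quoted above amount to simple percent-difference checks between simulated and observed totals. The sketch below reproduces that arithmetic using the Stony Fork numbers from the abstract; the exact acceptance criteria used in HSPF calibration may differ.

```python
# Percent difference between simulated and observed totals, a common
# calibration summary. Values are taken from the abstract above.
def pct_diff(simulated, observed):
    return 100.0 * (simulated - observed) / observed

checks = {
    "mean streamflow (ft^3/s), WY1986-88": (1.64, 1.58),
    "sediment load (tons), WY1987":        (127.14, 147.09),
    "sediment load (tons), 3-yr total":    (538.2, 467.61),
}
for name, (sim, obs) in checks.items():
    print(f"{name}: sim={sim}, obs={obs}, diff={pct_diff(sim, obs):+.1f}%")
```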
NOTE: Implementation of angular response function modeling in SPECT simulations with GATE
NASA Astrophysics Data System (ADS)
Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.
2010-05-01
Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance, and run times. For a simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations; at the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well for a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of the collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.
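The ARF idea replaces photon tracking through the collimator septa with a table lookup on incidence angle. The sketch below illustrates that substitution with a placeholder Gaussian table; a real ARF table would itself be generated by a detailed one-off simulation, and this is not GATE code.

```python
import numpy as np

# Tabulated response: acceptance probability vs incidence angle (deg).
# The Gaussian shape is a placeholder for a properly simulated ARF.
angles = np.linspace(0.0, 5.0, 51)
arf_table = np.exp(-0.5 * (angles / 1.0) ** 2)

def arf_weight(theta_deg):
    """Detection weight for a photon hitting the plane at theta_deg."""
    return np.interp(theta_deg, angles, arf_table, right=0.0)

rng = np.random.default_rng(7)
thetas = np.abs(rng.normal(0.0, 2.0, size=100_000))  # incident photons
weights = arf_weight(thetas)                          # lookup, no tracking
print(f"effective detected counts: {weights.sum():.0f} "
      f"out of {len(thetas)} photons reaching the plane")
```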
High-resolution dynamical downscaling of the future Alpine climate
NASA Astrophysics Data System (ADS)
Bozhinova, Denica; José Gómez-Navarro, Juan; Raible, Christoph
2017-04-01
The Alpine region and Switzerland are a challenging area for simulating and analysing Global Climate Model (GCM) results. This is mostly due to the combination of very complex topography and the still rather coarse horizontal resolution of current GCMs, which cannot resolve all of the many-scale processes that drive the local weather and climate. In our study, the Weather Research and Forecasting (WRF) model is used to dynamically downscale a GCM simulation to a resolution as high as 2 km x 2 km. WRF is driven by initial and boundary conditions produced with the Community Earth System Model (CESM) for the recent past (control run) and until 2100 under the RCP8.5 climate scenario (future run). The control run downscaled with WRF covers the period 1976-2005, while the future run investigates a 20-year slice simulated for 2080-2099. We compare the control WRF-CESM simulation to an observational product provided by MeteoSwiss and to an additional WRF simulation driven by the ERA-Interim reanalysis, to estimate the bias introduced by the extra modelling step of our framework. Several bias-correction methods are evaluated, including a quantile mapping technique, to ameliorate the bias in the control WRF-CESM simulation. In the next step of our study these corrections are applied to the future WRF-CESM run. The resulting downscaled and bias-corrected data are analysed for the properties of precipitation and wind speed in the future climate. We focus in particular on the absolute quantities simulated for these meteorological variables, as these are used to identify extreme events such as wind storms and situations that can lead to floods.
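Of the bias-correction methods mentioned, quantile mapping is straightforward to sketch: build a transfer function from control-period model and observed quantiles, then apply it to future model output. The data below are synthetic placeholders.

```python
import numpy as np

def quantile_map(model_ctrl, obs_ctrl, model_future):
    """Correct model_future using control-period model/obs quantiles."""
    quantiles = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_ctrl, quantiles)
    obs_q = np.quantile(obs_ctrl, quantiles)
    # Piecewise-linear transfer function: each future value is mapped
    # to the observed value at the same control-period quantile.
    return np.interp(model_future, model_q, obs_q)

rng = np.random.default_rng(3)
obs_ctrl = rng.gamma(2.0, 3.0, size=10_000)       # "observed" precipitation
model_ctrl = rng.gamma(2.0, 4.0, size=10_000)     # biased control run
model_future = rng.gamma(2.0, 4.5, size=10_000)   # biased future run

corrected = quantile_map(model_ctrl, obs_ctrl, model_future)
print(f"raw future mean {model_future.mean():.2f} -> "
      f"corrected {corrected.mean():.2f} (obs ctrl mean {obs_ctrl.mean():.2f})")
```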
Development of the CELSS emulator at NASA. Johnson Space Center
NASA Technical Reports Server (NTRS)
Cullingford, Hatice S.
1990-01-01
The Closed Ecological Life Support System (CELSS) Emulator is under development. It will be used to investigate computer simulations of integrated CELSS operations involving humans, plants, and process machinery. Described here is Version 1.0 of the CELSS Emulator, initiated in 1988 on the Johnson Space Center (JSC) Multi Purpose Applications Console Test Bed as the simulation framework. The run model of the simulation system now contains a CELSS model called BLSS. The CELSS simulator enables us to generate model data sets, store libraries of results for further analysis, and display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.
NASA Astrophysics Data System (ADS)
Shen, Wenqiang; Tang, Jianping; Wang, Yuan; Wang, Shuyu; Niu, Xiaorui
2017-04-01
In this study, the characteristics of tropical cyclones (TCs) over the East Asia Coordinated Regional Downscaling Experiment domain are examined with the Weather Research and Forecasting (WRF) model. Eight 20-year (1989-2008) simulations are performed using the WRF model, with lateral boundary forcing from the ERA-Interim reanalysis, to test the sensitivity of TC simulation to interior spectral nudging (SN, including nudging time interval and nudging variables) and radiation schemes [Community Atmosphere Model (CAM), Rapid Radiative Transfer Model (RRTM)]. The simulated TCs are compared with observations from the Regional Specialized Meteorological Centers TC best tracks. It is found that all WRF runs can simulate the climatology of key TC features, such as the tracks and location/frequency of genesis, reasonably well, and reproduce the inter-annual variations and seasonal cycle of TC counts. The SN runs produce enhanced TC activity compared to the runs without SN. The thermodynamic profile suggests that nudging with horizontal wind increases the instability of thermodynamic states in the tropics, which results in excessive TC genesis. The experiments with wind and temperature nudging reduce the overestimation of TC numbers and, in particular, suppress excessive TC intensification by correcting the thermodynamic profile. A weak SN coefficient enhances TC activity significantly even with wind and temperature nudging. The analysis of TC numbers and large-scale circulation shows that the SN parameters adopted in our experiments do not appear to suppress the formation of TCs. The excessive TC activity in CAM runs relative to RRTM runs is also due to enhanced atmospheric instability.
Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance
2013-10-01
Projectile impacts (.30 cal AP M2 and 7.62x39 PS projectiles) at velocities in the range 700 m/s to 1000 m/s are modeled using SPH elements. Model validation runs with monolithic SiC tiles are conducted based on the DoP experiments described in the reference. Subject terms: .30 cal AP M2 projectile, 7.62x39 PS projectile, SPH, aluminum 5083, SiC, DoP experiments, AutoDyn simulations, tile gap.
User's instructions for the cardiovascular Walters model
NASA Technical Reports Server (NTRS)
Croston, R. C.
1973-01-01
The model is a combined, steady-state cardiovascular and thermal model. It was originally developed for interactive use, but was converted to batch mode simulation for the Sigma 3 computer. The model has the purpose to compute steady-state circulatory and thermal variables in response to exercise work loads and environmental factors. During a computer simulation run, several selected variables are printed at each time step. End conditions are also printed at the completion of the run.
Statistical Emulator for Expensive Classification Simulators
NASA Technical Reports Server (NTRS)
Ross, Jerret; Samareh, Jamshid A.
2016-01-01
Expensive simulators prevent any kind of meaningful analysis from being performed on the phenomena they model. To get around this problem, the concept of using a statistical emulator as a surrogate representation of the simulator was introduced in the 1980s. Present-day simulators have become more and more complex, and as a result running a single example on them can take days, weeks, or even months. Many new techniques, termed criteria, have been introduced that sequentially select the next best (most informative to the emulator) point to run on the simulator. These criteria allow the creation of an emulator with only a small number of simulator runs. We follow and extend this framework to expensive classification simulators.
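A sequential-design loop of this kind can be sketched with a probabilistic classifier as the emulator: query the simulator where the emulator's predicted class probability is closest to 0.5. The simulator and criterion below are toy stand-ins; the paper's criteria are more sophisticated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def expensive_classifier_sim(x):
    """Toy stand-in: class 1 inside a circle of radius 0.6."""
    return int(x[0] ** 2 + x[1] ** 2 < 0.36)

# Initial design; the two fixed points guarantee both classes appear.
X = np.vstack([[0.0, 0.0], [0.9, 0.9], rng.uniform(-1, 1, size=(8, 2))])
y = np.array([expensive_classifier_sim(x) for x in X])

candidates = rng.uniform(-1, 1, size=(2000, 2))
for _ in range(30):                              # 30 further simulator runs
    emu = RandomForestClassifier(n_estimators=100,
                                 random_state=0).fit(X, y)
    prob = emu.predict_proba(candidates)[:, 1]
    pick = int(np.argmin(np.abs(prob - 0.5)))    # most ambiguous candidate
    X = np.vstack([X, candidates[pick]])
    y = np.append(y, expensive_classifier_sim(candidates[pick]))

print(f"emulator trained with {len(X)} simulator runs; "
      f"training accuracy {emu.score(X, y):.2f}")
```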
NASA Astrophysics Data System (ADS)
Gomes, J. L.; Chou, S. C.; Yaguchi, S. M.
2012-04-01
Physics parameterizations and the model's vertical and horizontal resolutions, for example, can contribute significantly to uncertainty in numerical weather predictions, especially in regions with complex topography. The objective of this study is to assess the influence of the model precipitation production schemes and horizontal resolution on the diurnal cycle of precipitation in the Eta Model. The model was run in hydrostatic mode at 3- and 5-km grid sizes, the vertical resolution was set to 50 layers, and the time steps to 6 and 10 s, respectively. The initial and boundary conditions were taken from the ERA-Interim reanalysis. Over the sea, the 0.25-degree sea surface temperature from NOAA was used. The model was set up to run at each resolution over Angra dos Reis, located in the Southeast region of Brazil, for the rainy period between 18 December 2009 and 1 January 2010; the model simulation range was 48 hours. In one set of runs the cumulus parameterization was switched off, so that the model precipitation was produced entirely by the cloud microphysics scheme; in the other set the model was run with weak cumulus convection. The results show that as the model horizontal resolution increases from 5 to 3 km, the spatial pattern of the precipitation hardly changes, although the maximum precipitation core increases in magnitude. Daily data from automatic stations were used to evaluate the runs and show that the diurnal cycles of temperature and precipitation were better simulated at 3 km when compared against observations. The configuration without cumulus convection shows a small contraction of the precipitating area and an increase in the simulated maximum values. The diurnal cycle of precipitation was better simulated with some activity of the cumulus convection scheme. The skill scores for the period and for different forecast ranges are higher at weak and moderate precipitation rates.
Pasta nucleosynthesis: Molecular dynamics simulations of nuclear statistical equilibrium
NASA Astrophysics Data System (ADS)
Caplan, M. E.; Schneider, A. S.; Horowitz, C. J.; Berry, D. K.
2015-06-01
Background: Exotic nonspherical nuclear pasta shapes are expected in nuclear matter just below saturation density because of competition between short-range nuclear attraction and long-range Coulomb repulsion. Purpose: We explore the impact nuclear pasta may have on nucleosynthesis during neutron star mergers, when cold dense nuclear matter is ejected and decompressed. Methods: We use a hybrid CPU/GPU molecular dynamics (MD) code to perform decompression simulations of cold dense matter with 51,200 and 409,600 nucleons, from 0.080 fm^-3 down to 0.00125 fm^-3. Simulations are run for proton fractions YP = 0.05, 0.10, 0.20, 0.30, and 0.40 at temperatures T = 0.5, 0.75, and 1.0 MeV. The final composition of each simulation is obtained using a cluster algorithm and compared to a constant-density run. Results: The sizes of nuclei in the final state of the decompression runs are in good agreement with nuclear statistical equilibrium (NSE) models at a temperature of 1 MeV, while the constant-density runs produce nuclei smaller than those obtained with NSE. Our MD simulations produce unphysical results, with large rod-like nuclei in the final state, for the T = 0.5 MeV runs. Conclusions: Our MD model is valid at higher densities than simple nuclear statistical equilibrium models and may help determine the initial temperatures and proton fractions of matter ejected in mergers.
Volume 2: Compendium of Abstracts
2017-06-01
[Truncated abstract fragments from the compendium] ...simulation work using a standard running model for legged systems, the Spring Loaded Inverted Pendulum (SLIP) model. In this model, the dynamics of a single... bar SLIP model is analyzed using basin-of-attraction analyses to determine the optimal configuration for running at different velocities and... acquisition, and the automatic target acquisition were then compared to each other. After running trials with the current system, it will be...
Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration
NASA Astrophysics Data System (ADS)
Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.
2017-06-01
Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which are used to assess an explosive material's initiation behavior. Such data can be used to calibrate reactive flow models by running hydrocode simulations and successively tweaking model parameters until a match with experiment is achieved. Simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Laboratory (ARL) was modeled using CTH (an SNL hydrocode) in 1D, 2D, and 3D space in order to determine whether there was any justification for using simplified models. A simulation was also performed using the BCAT code (a CTH companion tool) that assumes a plate-impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to affect numerical predictions only for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.
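Pop-plot data are conventionally summarized by a straight-line fit in log-log space, which is the form reactive-flow calibrations target. The sketch below fits and evaluates such a line on invented data points.

```python
import numpy as np

# Invented pop-plot data for illustration only (not ARL measurements).
pressure_gpa = np.array([3.0, 4.0, 5.5, 7.0, 9.0])   # input shock pressure
run_dist_mm = np.array([12.0, 7.5, 4.2, 2.8, 1.7])   # run to detonation

# Classic pop-plot form: log10(run distance) linear in log10(pressure).
slope, intercept = np.polyfit(np.log10(pressure_gpa),
                              np.log10(run_dist_mm), 1)
print(f"log10(x*) = {intercept:.2f} + {slope:.2f} * log10(P)")

# Predicted run distance at 6 GPa from the fit:
print(f"x*(6 GPa) ~ {10 ** (intercept + slope * np.log10(6.0)):.1f} mm")
```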
Simulated tsunami run-up amplification factors around Penang Island for preliminary risk assessment
NASA Astrophysics Data System (ADS)
Lim, Yong Hui; Kh'ng, Xin Yi; Teh, Su Yean; Koh, Hock Lye; Tan, Wai Kiat
2017-08-01
The Andaman mega-tsunami that struck Malaysia on 26 December 2004 affected 200 kilometers of the northwest Peninsular Malaysia coastline from Perlis to Selangor. It is anticipated by the tsunami scientific community that the next mega-tsunami could occur at any time. This rare catastrophic event has drawn the attention of the Malaysian government to the need for appropriate risk reduction measures, including timely and orderly evacuation. To effectively evacuate citizens to safe ground or the nearest designated emergency shelter, a well-prepared evacuation route is essential, with the estimated tsunami run-up heights and inundation distances on land clearly indicated on the evacuation map. The run-up heights and inundation distances are simulated by the in-house 2-D model TUNA-RP, based upon credible scientific tsunami source scenarios derived from tectonic activity around the region. To provide a useful tool for estimating run-up heights along the entire coast of Penang Island, we compute tsunami amplification factors in this paper based upon 2-D TUNA-RP model simulations. The inundation map and run-up amplification factors in six domains along the entire coastline of Penang Island are provided. The comparison between measured tsunami wave heights for the 2004 Andaman tsunami and TUNA-RP model simulated values demonstrates good agreement.
Design of Flight Control Panel Layout using Graphical User Interface in MATLAB
NASA Astrophysics Data System (ADS)
Wirawan, A.; Indriyanto, T.
2018-04-01
This paper introduces the design of a Flight Control Panel (FCP) layout using the Graphical User Interface tools in MATLAB. The FCP is the interface used to send commands to the simulation and to monitor model variables while the simulation is running. The commands accommodated by the FCP are the altitude command, angle-of-sideslip command, heading command, and the settings for the turbulence model. The FCP was also designed to monitor flight parameters while the simulation is running.
Sensitivity study of a dynamic thermodynamic sea ice model
NASA Astrophysics Data System (ADS)
Holland, David M.; Mysak, Lawrence A.; Manak, Davinder K.; Oberhuber, Josef M.
1993-02-01
A numerical simulation of the seasonal sea ice cover in the Arctic Ocean and the Greenland, Iceland, and Norwegian seas is presented. The sea ice model is extracted from Oberhuber's (1990) coupled sea ice-mixed layer-isopycnal general circulation model and is written in spherical coordinates. The advantage of such a model over previous sea ice models is that it can be easily coupled to either global atmospheric or ocean general circulation models written in spherical coordinates. In this model, the thermodynamics are a modification of those of Parkinson and Washington (1979), while the dynamics use the full Hibler (1979) viscous-plastic rheology. Monthly thermodynamic and dynamic forcing fields for the atmosphere and ocean are specified. The simulations of the seasonal cycle of ice thickness, compactness, and velocity, for a control set of parameters, compare favorably with the known seasonal characteristics of these fields. A sensitivity study of the control simulation of the seasonal sea ice cover is presented. The sensitivity runs are carried out under three different themes, namely, numerical conditions, parameter values, and physical processes. This last theme refers to experiments in which physical processes are either newly added or completely removed from the model. Approximately 80 sensitivity runs have been performed in which a change from the control run environment has been implemented. Comparisons have been made between the control run and a particular sensitivity run based on time series of the seasonal cycle of the domain-averaged ice thickness, compactness, areal coverage, and kinetic energy. In addition, spatially varying fields of ice thickness, compactness, velocity, and surface temperature for each season are presented for selected experiments. A brief description and discussion of the more interesting experiments are presented. The simulation of the seasonal cycle of Arctic sea ice cover is shown to be robust.
Hulme, Adam; Thompson, Jason; Nielsen, Rasmus Oestergaard; Read, Gemma J M; Salmon, Paul M
2018-06-18
There have been recent calls for the application of the complex systems approach in sports injury research. However, beyond theoretical description and static models of complexity, little progress has been made towards formalising this approach in a way that is practical for sports injury scientists and clinicians. Therefore, our objective was to use a computational modelling method to develop a dynamic simulation in sports injury research. Agent-based modelling (ABM) was used to model the occurrence of sports injury in a synthetic athlete population. The ABM was developed based on sports injury causal frameworks and was applied in the context of distance running-related injury (RRI). Using the acute:chronic workload ratio (ACWR), we simulated the dynamic relationship between changes in weekly running distance and RRI through the manipulation of various 'athlete management tools'. The findings confirmed that building weekly running distances over time, even within the reported ACWR 'sweet spot', will eventually result in RRI as athletes reach and surpass their individual physical workload limits. Introducing training-related error into the simulation and modelling a 'hard ceiling' dynamic resulted in a higher RRI incidence proportion across the population at higher absolute workloads. The presented simulation offers a practical starting point for applying more sophisticated computational models that can account for the complex nature of sports injury aetiology. Alongside traditional forms of scientific inquiry, the use of ABM and other simulation-based techniques could be considered as a complementary and alternative methodological approach in sports injury research.
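For readers unfamiliar with the ACWR, it is usually computed as the most recent week's workload divided by the rolling average of the preceding weeks, with values outside roughly 0.8-1.3 (the reported 'sweet spot') associated with elevated injury risk. A minimal sketch, assuming one common definition (the paper's ABM may implement details differently):

```python
import numpy as np

def acwr(weekly_km):
    """Acute:chronic workload ratio: this week's load divided by the mean of
    the previous four weeks (one common definition; details vary by study)."""
    w = np.asarray(weekly_km, dtype=float)
    out = np.full(len(w), np.nan)
    for t in range(4, len(w)):
        chronic = w[t - 4:t].mean()
        if chronic > 0:
            out[t] = w[t] / chronic
    return out

weeks = [30, 32, 34, 36, 45]     # a sudden jump in week 5
print(acwr(weeks))               # last value ~1.36, outside the ~0.8-1.3 zone
```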
FARSITE: Fire Area Simulator-model development and evaluation
Mark A. Finney
1998-01-01
A computer simulation model, FARSITE, includes existing fire behavior models for surface, crown, spotting, point-source fire acceleration, and fuel moisture. The model's components and assumptions are documented. Simulations were run for simple conditions that illustrate the effect of individual fire behavior models on two-dimensional fire growth.
An extension of the OpenModelica compiler for using Modelica models in a discrete event simulation
Nutaro, James
2014-11-03
In this article, a new back-end and run-time system is described for the OpenModelica compiler. This new back-end transforms a Modelica model into a module for the adevs discrete event simulation package, thereby extending adevs to encompass complex, hybrid dynamical systems. The new run-time system that has been built within the adevs simulation package supports models with state-events and time-events, including models that comprise differential-algebraic systems with high index. Finally, although the procedure for effecting this transformation is based on adevs and the Discrete Event System Specification, it can be adapted to any discrete event simulation package.
The Q continuum simulation: Harnessing the power of GPU accelerated supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Frontiere, Nicholas; Sewell, Chris
2015-08-01
Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume, cosmological simulations. We report the first results from the "Q Continuum" cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)^3 and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≈ 1.5 × 10^8 M_⊙. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching, in a large cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan's GPU accelerators.
Full-Body Musculoskeletal Model for Muscle-Driven Simulation of Human Gait.
Rajagopal, Apoorva; Dembia, Christopher L; DeMers, Matthew S; Delp, Denny D; Hicks, Jennifer L; Delp, Scott L
2016-10-01
Musculoskeletal models provide a non-invasive means to study human movement and predict the effects of interventions on gait. Our goal was to create an open-source 3-D musculoskeletal model with high-fidelity representations of the lower limb musculature of healthy young individuals that can be used to generate accurate simulations of gait. Our model includes bony geometry for the full body, 37 degrees of freedom to define joint kinematics, Hill-type models of 80 muscle-tendon units actuating the lower limbs, and 17 ideal torque actuators driving the upper body. The model's musculotendon parameters are derived from previous anatomical measurements of 21 cadaver specimens and magnetic resonance images of 24 young healthy subjects. We tested the model by evaluating its computational time and accuracy of simulations of healthy walking and running. Generating muscle-driven simulations of normal walking and running took approximately 10 minutes on a typical desktop computer. The differences between our muscle-generated and inverse dynamics joint moments were within 3% (RMSE) of the peak inverse dynamics joint moments in both walking and running, and our simulated muscle activity showed qualitative agreement with salient features from experimental electromyography data. These results suggest that our model is suitable for generating muscle-driven simulations of healthy gait. We encourage other researchers to further validate and apply the model to study other motions of the lower extremity. The model is implemented in the open-source software platform OpenSim. The model and data used to create and test the simulations are freely available at https://simtk.org/home/full_body/, allowing others to reproduce these results and create their own simulations.
The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models
NASA Technical Reports Server (NTRS)
Penn, John M.
2016-01-01
The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.
An Evaluation of the Predictability of Austral Summer Season Precipitation over South America.
NASA Astrophysics Data System (ADS)
Misra, Vasubandhu
2004-03-01
In this study, the predictability of austral summer seasonal precipitation over South America is investigated using a 12-yr set of 3.5-month-range (seasonal) and 17-yr-range (continuous multiannual) five-member ensemble integrations of the Center for Ocean-Land-Atmosphere Studies (COLA) atmospheric general circulation model (AGCM). These integrations were performed with prescribed observed sea surface temperature (SST); the skill attained therefore represents an estimate of the upper bound of the skill achievable by the COLA AGCM with predicted SST. The seasonal runs outperform the multiannual model integrations in both deterministic and probabilistic skill. The simulation of the January-February-March (JFM) seasonal climatology of precipitation is vastly superior in the seasonal runs except over the Nordeste region, where the multiannual runs show a marginal improvement. The teleconnection of the ensemble-mean JFM precipitation over tropical South America with global contemporaneous observed sea surface temperature in the seasonal runs conforms more closely to observations than that in the multiannual runs. Both sets of runs clearly beat persistence in predicting the interannual precipitation anomalies over the Amazon River basin, Nordeste, the South Atlantic convergence zone, and subtropical South America. However, both types of runs display poorer simulations over the subtropical regions than over the tropical areas of South America. The examination of the probabilistic skill of precipitation supports the conclusion from the deterministic skill analysis that the seasonal runs yield superior simulations to the multiannual-type runs.
Kang, Xianbiao; Zhang, Rong-Hua; Gao, Chuan; Zhu, Jieshun
2017-12-07
The El Niño-Southern Oscillation (ENSO) simulated in the Community Earth System Model of the National Center for Atmospheric Research (NCAR CESM) is much stronger than in reality. Here, satellite data are used to derive a statistical relationship between interannual variations in oceanic chlorophyll (CHL) and sea surface temperature (SST), which is then incorporated into the CESM to represent oceanic chlorophyll-induced climate feedback in the tropical Pacific. Numerical runs with and without the feedback (referred to as feedback and non-feedback runs) are performed and compared with each other. The ENSO amplitude simulated in the feedback run is more accurate than that in the non-feedback run; quantitatively, the Niño3 SST index is reduced by 35% when the feedback is included. The underlying processes are analyzed, and the results show that interannual CHL anomalies exert a systematic modulating effect on the solar radiation penetrating into the subsurface layers, which induces differential heating in the upper ocean that affects vertical mixing and thus SST. The statistical modeling approach proposed in this work offers an effective and economical way of improving climate simulations.
On the use of tower-flux measurements to assess the performance of global ecosystem models
NASA Astrophysics Data System (ADS)
El Maayar, M.; Kucharik, C.
2003-04-01
Global ecosystem models are important tools for the study of biospheric processes and their responses to environmental changes. Such models typically translate knowledge gained from local observations into estimates of regional or even global outcomes of ecosystem processes. A typical test of ecosystem models consists of comparing their output against tower-flux measurements of land surface-atmosphere exchange of heat and mass. To perform such tests, models are typically run using detailed information on soil properties (texture, carbon content, ...) and vegetation structure observed at the experimental site (e.g., vegetation height, vegetation phenology, leaf photosynthetic characteristics, ...). In global simulations, however, the earth's vegetation is typically represented by a limited number of plant functional types (PFTs; groups of plant species that have similar physiological and ecological characteristics). For each PFT (e.g., temperate broadleaf trees, boreal conifer evergreen trees, ...), which can cover a very large area, a set of typical physiological and physical parameters is assigned. Thus, a legitimate question arises: How does the performance of a global ecosystem model run using detailed site-specific parameters compare with the performance of a less detailed global version where generic parameters are attributed to the group of vegetation species forming a PFT? To answer this question, we used a multiyear dataset, measured at two forest sites with contrasting environments, to compare the seasonal and interannual variability of the surface-atmosphere exchange of water and carbon predicted by the Integrated BIosphere Simulator-Dynamic Global Vegetation Model. Two types of simulations were thus performed: a) detailed runs, in which observed vegetation characteristics (leaf area index, vegetation height, ...) and soil carbon content, in addition to climate and soil type, are specified for the model run; and b) generic runs, in which only the observed climates and soil types at the measurement sites are used to run the model. The generic runs were performed for a number of years equal to the current age of the forests, initialized with no vegetation and a soil carbon density of zero.
Reliable results from stochastic simulation models
Donald L., Jr. Gochenour; Leonard R. Johnson
1973-01-01
Development of a computer simulation model is usually done without fully considering how long the model should run (e.g., in computer time) before the results are reliable. However, construction of confidence intervals (CI) about critical output parameters from the simulation model makes it possible to determine the point at which model results are reliable. If the results are...
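A minimal sketch of the idea, assuming independent replications and a t-based interval (the names, numbers, and stopping rule are illustrative): keep adding runs until the confidence interval about the output mean is tight enough to call the results reliable.

```python
import numpy as np
from scipy import stats

def mean_ci(samples, level=0.95):
    """t-based confidence interval half-width for the mean of replications."""
    samples = np.asarray(samples, dtype=float)
    half = stats.t.ppf(0.5 + level / 2, len(samples) - 1) * stats.sem(samples)
    return samples.mean(), half

# Keep adding independent replications until the CI half-width is small
rng = np.random.default_rng(1)
runs = []
while True:
    runs.append(rng.normal(100.0, 15.0))   # stand-in for one model run's output
    if len(runs) >= 10:
        m, half = mean_ci(runs)
        if half < 0.05 * m:                # stop at +/-5% relative precision
            break
print(f"{len(runs)} runs: mean = {m:.1f} +/- {half:.1f}")
```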
Antonioletti, Mario; Biktashev, Vadim N; Jackson, Adrian; Kharche, Sanjay R; Stary, Tomas; Biktasheva, Irina V
2017-01-01
The BeatBox simulation environment combines a flexible scripting-language user interface with robust computational tools in order to set up cardiac electrophysiology in silico experiments without low-level re-coding, so that cell excitation, tissue/anatomy models, and stimulation protocols may be included in a BeatBox script, and simulations run either sequentially or in parallel (MPI) without re-compilation. BeatBox is free software written in the C language to be run on a Unix-based platform. It provides the whole spectrum of multiscale tissue modelling, from 0-dimensional individual cell simulation, through 1-dimensional fibre, 2-dimensional sheet, and 3-dimensional slab of tissue, up to anatomically realistic whole-heart simulations, with run-time measurements including cardiac re-entry tip/filament tracing, ECG, local/global samples of any variables, etc. BeatBox solvers and cell and tissue/anatomy model repositories are extended via robust and flexible interfaces, thus providing an open framework for new developments in the field. In this paper we give an overview of the current state of BeatBox, together with a description of the main computational methods and MPI parallelisation approaches.
Regional model simulations of New Zealand climate
NASA Astrophysics Data System (ADS)
Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.
1998-03-01
Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21 resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperatures improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different to those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.
Mathematical model simulation of a diesel spill in the Potomac River
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, S.S.; Nicolette, J.P.; Markarian, R.K.
1995-12-31
A mathematical modeling technique was used to simulate the transport and fate of approximately 400,000 gallons of spilled diesel fuel and its impact on the aquatic biota in the Potomac River and Sugarland Run. Sugarland Run is a tributary about 21 miles upstream from Washington, DC. The mass balance model predicted the dynamic (spatial and temporal) distribution of spilled oil. The distributions were presented in terms of surface oil slick and sheen, dissolved and undissolved total petroleum hydrocarbons (TPH) in the water surface, water column, river sediments, shoreline and atmosphere. The processes simulated included advective movement, dispersion, dissolution, evaporation, volatilization, sedimentation, shoreline deposition, biodegradation, and removal of oil from cleanup operations. The model predicted that the spill resulted in a water column dissolved TPH concentration range of 0.05 to 18.6 ppm in Sugarland Run. The spilled oil traveled 10 miles along Sugarland Run before it reached the Potomac River. At the Potomac River, the water column TPH concentration was predicted to have decreased to the range of 0.0 to 0.43 ppm. These levels were consistent with field samples. To assess biological injury, the model used 4, 8, 24, 48, and 96-hr LC values in computing the fish injury caused by the fuel oil. The model used the maximum running average of dissolved TPH and exposure time to predict levels of fish mortality in the range of 38 to 40% in Sugarland Run. This prediction was consistent with field fisheries surveys. The model also computed the amount of spilled oil that adsorbed and settled into the river sediments.
Arifin, S M Niaz; Madey, Gregory R; Collins, Frank H
2013-08-21
Agent-based models (ABMs) have been used to estimate the effects of malaria-control interventions. Early studies have shown the efficacy of larval source management (LSM) and insecticide-treated nets (ITNs) as vector-control interventions, applied both in isolation and in combination. However, the robustness of results can be affected by several important modelling assumptions, including the type of boundary used for landscapes and the number of replicated simulation runs reported in results. Selection of the ITN coverage definition may also affect the predictive findings. Hence, independent verification, by replication, of the prior findings of published models bears special importance. A spatially explicit entomological ABM of Anopheles gambiae is used to simulate the resource-seeking process of mosquitoes in grid-based landscapes. To explore LSM and replicate the results of an earlier LSM study, the original landscapes and scenarios are reproduced using a landscape generator tool, and 1,800 replicated simulations are run using absorbing and non-absorbing boundaries. To explore ITNs and evaluate the relative impacts of the different ITN coverage schemes, the settings of an earlier ITN study are replicated, the coverage schemes are defined and simulated, and 9,000 replicated simulations for three ITN parameters (coverage, repellence, and mortality) are run. To evaluate LSM and ITNs in combination, landscapes with varying densities of houses and human populations are generated, and 12,000 simulations are run. General agreement with the earlier LSM study is observed when an absorbing boundary is used. However, using a non-absorbing boundary produces significantly different results, which may be attributed to the unrealistic killing effect of an absorbing boundary. Abundance cannot be completely suppressed by removing aquatic habitats within 300 m of houses. Also, with density-dependent oviposition, removal of an insufficient number of aquatic habitats may prove counter-productive. The importance of performing a large number of simulation runs is also demonstrated. For ITNs, the choice of coverage scheme has important implications, and excessively high repellence yields detrimental effects. When LSM and ITNs are applied in combination, ITNs' mortality can play a more important role at higher densities of houses. With partial mortality, increasing ITN coverage is more effective than increasing LSM coverage, and integrating both interventions yields more synergy as the density of houses increases. Using a non-absorbing boundary and reporting average results from a sufficiently large number of simulation runs are strongly recommended for malaria ABMs. Several guidelines (code and data sharing, relevant documentation, and standardized models) are also recommended for future modellers.
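The boundary effect reported here is easy to see in a toy random-walk version of mosquito movement; the grid size, step counts, and agent counts below are arbitrary illustrations, not the study's ABM:

```python
import numpy as np

SIZE = 100          # landscape dimension (arbitrary for illustration)

def step(pos, rng, boundary):
    """One random-walk step for a mosquito agent on a SIZE x SIZE grid.
    'wrap' (toroidal) is one non-absorbing choice; 'absorb' removes agents
    that leave the landscape."""
    new = pos + rng.integers(-1, 2, size=2)
    if boundary == "wrap":
        return new % SIZE, True
    return new, bool(np.all((0 <= new) & (new < SIZE)))

rng = np.random.default_rng(42)
for boundary in ("wrap", "absorb"):
    survivors = 0
    for _ in range(500):                       # 500 agents per landscape
        pos, alive = np.array([SIZE // 2, SIZE // 2]), True
        for _ in range(2000):                  # 2000 steps per agent
            pos, alive = step(pos, rng, boundary)
            if not alive:
                break
        survivors += alive
    print(f"{boundary} boundary: {survivors} of 500 agents remain")
```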
Hindcasting the Madden‐Julian Oscillation With a New Parameterization of Surface Heat Fluxes
Wang, Jingfeng; Lin, Wenshi
2017-01-01
The recently developed maximum entropy production (MEP) model, an alternative parameterization of surface heat fluxes, is incorporated into the Weather Research and Forecasting (WRF) model. A pair of WRF cloud-resolving experiments (5 km grids) using the bulk transfer model (WRF default) and the MEP model of surface heat fluxes are performed to hindcast the October Madden-Julian oscillation (MJO) event observed during the 2011 Dynamics of the MJO (DYNAMO) field campaign. The simulated surface latent and sensible heat fluxes in the MEP and bulk transfer model runs are in general consistent with in situ observations from two research vessels. Compared to the bulk transfer model, the convection envelope is strengthened in the MEP run and shows a more coherent propagation over the Maritime Continent. The simulated precipitable water in the MEP run is in closer agreement with the observations. Precipitation in the MEP run is enhanced during the active phase of the MJO with significantly reduced regional dry and wet biases. Large-scale ocean evaporation is stronger in the MEP run leading to stronger boundary layer moistening to the east of the convection center, which facilitates the eastward propagation of the MJO.
Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S
2014-12-01
We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.
Real-time visual simulation of APT system based on RTW and Vega
NASA Astrophysics Data System (ADS)
Xiong, Shuai; Fu, Chengyu; Tang, Tao
2012-10-01
The Matlab/Simulink simulation model of an APT (acquisition, pointing, and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that the simulation result of running the C code is the same as that of running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, an APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add some necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used, based on the programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost, and good simulation fidelity.
Spatial application of WEPS for estimating wind erosion in the Pacific Northwest
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on croplands and was originally designed to run field scale simulations. This research is an extension of the WEPS model to run on multiple fields (grids) covering a larger region. We modified the WEPS source code to allow it...
GEANT4 distributed computing for compact clusters
NASA Astrophysics Data System (ADS)
Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.
2014-11-01
A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 for large discrete data sets such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions or simply speeding the through-put of a single model.
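The work-ticket pattern itself is independent of GEANT4; the sketch below shows the shape of the idea using Python's multiprocessing (the real g4DistributedRunManager is a C++ class with its own inter-node protocol, and `run_simulation` here is a hypothetical stand-in, not GEANT4 code):

```python
from multiprocessing import Pool

def run_simulation(ticket):
    """Hypothetical stand-in for one GEANT4 run; a real client would
    configure the toolkit from the ticket and return its tallies."""
    angle_deg, n_events = ticket
    # ... launch the run here; we just echo the parameters back
    return angle_deg, n_events

if __name__ == "__main__":
    # One work ticket per projection angle, e.g. a tomography sweep
    tickets = [(angle, 10_000) for angle in range(0, 180, 5)]
    with Pool(processes=8) as pool:
        for angle, n in pool.imap_unordered(run_simulation, tickets):
            print(f"angle {angle:3d} deg done ({n} events)")
```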
Development of the CELSS Emulator at NASA JSC
NASA Technical Reports Server (NTRS)
Cullingford, Hatice S.
1989-01-01
The Controlled Ecological Life Support System (CELSS) Emulator is under development at the NASA Johnson Space Center (JSC) with the purpose of investigating computer simulations of integrated CELSS operations involving humans, plants, and process machinery. This paper describes Version 1.0 of the CELSS Emulator, which was initiated in 1988 on the JSC Multi Purpose Applications Console Test Bed as the simulation framework. The run module of the simulation system now contains a CELSS model called BLSS. The CELSS Emulator makes it possible to generate model data sets, store libraries of results for further analysis, and also display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.
Do downscaled general circulation models reliably simulate historical climatic conditions?
Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight
2018-01-01
The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
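A minimal sketch of the comparison step, assuming two monthly PPT series (the gamma-distributed values below are invented stand-ins; the study used gridded station data and SD GCM output for 1950-2005):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Invented stand-ins for monthly PPT, 1950-2005 (56 years x 12 months):
ppt_gsd = rng.gamma(shape=2.0, scale=40.0, size=672)   # 'station' data
ppt_sd = rng.gamma(shape=2.2, scale=38.0, size=672)    # 'downscaled GCM'

stat, p = stats.ks_2samp(ppt_gsd, ppt_sd)
# Failing to reject at 0.05 means the SD GCM is 'reliable' for this variable
print(f"KS statistic = {stat:.3f}, p = {p:.3f}, reliable: {p >= 0.05}")
```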
Modeling and Simulation: PowerBoosting Productivity with Simulation.
ERIC Educational Resources Information Center
Riley, Suzanne
Minnesota high school students and teachers are learning the technology of simulation and integrating it into business and industrial technology courses. Modeling and simulation is the science of using software to construct a system within an organization and then running simulations of proposed changes to assess results before funds are spent. In…
Modeling a maintenance simulation of the geosynchronous platform
NASA Technical Reports Server (NTRS)
Kleiner, A. F., Jr.
1980-01-01
A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete event approach with two basic events: failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass, and after the last pass a report is printed. Items of interest typically include the time to first maintenance, the total number of maintenance trips for each pass, the average capability of the system, etc.
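A minimal sketch of such a pass, assuming exponential times to failure and fixed-interval maintenance (the two event types match the description above; the numbers and the maintenance policy are invented for illustration):

```python
import heapq
import random

def one_pass(mtbf=400.0, maint_interval=1000.0, mission=5000.0):
    """One mission 'pass' with the two basic events: failure and maintenance."""
    events = [(random.expovariate(1.0 / mtbf), "failure"),
              (maint_interval, "maintenance")]
    heapq.heapify(events)
    failures, t_first_maint = 0, None
    while events:
        t, kind = heapq.heappop(events)          # next event in time order
        if t > mission:
            break                                # mission lifetime reached
        if kind == "failure":
            failures += 1
            heapq.heappush(events, (t + random.expovariate(1.0 / mtbf), "failure"))
        else:
            t_first_maint = t if t_first_maint is None else t_first_maint
            heapq.heappush(events, (t + maint_interval, "maintenance"))
    return failures, t_first_maint

random.seed(3)
results = [one_pass() for _ in range(1000)]      # model re-initialized each pass
print("mean failures per mission:",
      sum(f for f, _ in results) / len(results))
```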
Modeling of Aerodynamic Force Acting in Tunnel for Analysis of Riding Comfort in a Train
NASA Astrophysics Data System (ADS)
Kikko, Satoshi; Tanifuji, Katsuya; Sakanoue, Kei; Nanba, Kouichiro
In this paper, we aim to model the aerodynamic force that acts on a train running at high speed in a tunnel. An analytical model of the aerodynamic force is developed from pressure data measured on the car-body sides of a test train running at the maximum revenue operation speed. The simulation of an 8-car train running while subjected to the modeled aerodynamic force gives the following results. The simulated car-body vibration corresponds to the actual vibration both qualitatively and quantitatively for the cars at the rear of the train. The separation of the airflow at the tail end of the train increases the yawing vibration of the tail-end car, while it has little effect on the car-body vibration of the adjoining car. Regarding the effect of the moving velocity of the aerodynamic force on the car-body vibration, it is clarified that simulating under the assumption of a stationary aerodynamic force can markedly increase the computed car-body vibration.
Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale
NASA Astrophysics Data System (ADS)
González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.
2017-12-01
Tsunami hazard assessment is tackled by means of numerical simulations, giving as a result the areas flooded by the tsunami wave inland. Some input data are required, i.e., the high-resolution topobathymetry of the study area, the earthquake focal mechanism parameters, etc. The computational cost of these kinds of simulations is still excessive. An important restriction for the elaboration of large-scale maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. An alternative, traditional method consists of the application of empirical-analytical formulations to calculate run-up at several coastal profiles (e.g., Synolakis, 1987), combined with numerical simulations offshore that do not include coastal inundation. In this case, the numerical simulations are faster, but limitations are added because the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model formed by two models that were coupled ad hoc for this work: a non-linear shallow water equations model (NLSWE) for the offshore part of the propagation and a Volume of Fluid model (VOF) for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles have been parameterized using 5 parameters (2 depths and 3 slopes). In addition, tsunami waves have also been parameterized, by their height and period. As an application of the numerical-flume methodology, the parameterized coastal profiles and tsunami waves have been combined to build a populated database of run-up calculations; the combinations were evaluated by means of numerical simulations in the numerical flume. The result is a tsunami run-up database that considers real profile shapes, realistic tsunami waves, and optimized numerical simulations. This database allows the calculation of the run-up of any new tsunami wave, in a short period of time, by interpolation on the database, based on the tsunami wave characteristics provided as an output of the NLSWE model along the coast in a large-scale domain (regional or national scale).
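Once the database exists, the run-up for a new NLSWE wave reduces to a fast table lookup. A minimal sketch, with invented numbers, for a single parameterized profile class (the real database spans the profile parameters as well):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical precomputed run-up (m) for ONE profile class, tabulated over
# tsunami wave height (m) and period (min); all values invented
heights = np.array([0.5, 1.0, 2.0, 4.0])
periods = np.array([10.0, 20.0, 40.0])
runup_table = np.array([[0.8, 1.0, 1.3],
                        [1.5, 1.9, 2.4],
                        [2.8, 3.5, 4.4],
                        [5.1, 6.3, 7.9]])

interp = RegularGridInterpolator((heights, periods), runup_table)

# NLSWE model output at one coastal point: H = 1.6 m, T = 25 min
print(f"estimated run-up: {interp([[1.6, 25.0]])[0]:.2f} m")
```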
Bridging the scales in atmospheric composition simulations using a nudging technique
NASA Astrophysics Data System (ADS)
D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco
2010-05-01
Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where we are interested in studying the impact of these sources. Describing all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain; run A) and fine (0.1°, Central Mediterranean domain; run B) horizontal resolution are performed, using the coarse resolution as the boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields, remapped onto the coarse grid, are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas are computed for O3 and PM. It is observed that although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of run A and run B. A propagation of the signal outside the nudging region is observed and is evaluated in terms of differences between the coarse-resolution runs (with and without nudging) and the fine-resolution simulation.
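The nudging step itself amounts to a Newtonian relaxation of the coarse field toward the remapped fine field inside the target area. A minimal sketch (field values, mask, and relaxation time are illustrative assumptions, not BOLCHEM's actual configuration):

```python
import numpy as np

def nudge(coarse, fine_remapped, mask, dt, tau):
    """Newtonian relaxation: inside `mask` (e.g. the Po Valley), pull the
    coarse-grid concentration toward the remapped fine-grid field with
    time scale tau; outside the mask, leave the field untouched."""
    increment = (fine_remapped - coarse) * (dt / tau)
    return np.where(mask, coarse + increment, coarse)

# Toy 1-D example: model time step dt = 300 s, relaxation time tau = 3600 s
c_coarse = np.array([40.0, 42.0, 45.0, 43.0])   # e.g. O3 (ppb), invented
c_fine = np.array([40.0, 50.0, 55.0, 43.0])     # remapped fine-grid field
mask = np.array([False, True, True, False])     # nudging region
print(nudge(c_coarse, c_fine, mask, dt=300.0, tau=3600.0))
```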
NASA Astrophysics Data System (ADS)
Hardiman, Steven C.; Butchart, Neal; O'Connor, Fiona M.; Rumbold, Steven T.
2017-03-01
Free-running and nudged versions of a Met Office chemistry-climate model are evaluated and used to investigate the impact of dynamics versus transport and chemistry within the model on the simulated evolution of stratospheric ozone. Metrics of the dynamical processes relevant for simulating stratospheric ozone are calculated, and the free-running model is found to outperform the previous model version in 10 of the 14 metrics. In particular, large biases in stratospheric transport and tropical tropopause temperature, which existed in the previous model version, are substantially reduced, making the current model more suitable for the simulation of stratospheric ozone. The spatial structure of the ozone hole, the area of polar stratospheric clouds, and the increased ozone concentrations in the Northern Hemisphere winter stratosphere following sudden stratospheric warmings, were all found to be sensitive to the accuracy of the dynamics and were better simulated in the nudged model than in the free-running model. Whilst nudging can, in general, provide a useful tool for removing the influence of dynamical biases from the evolution of chemical fields, this study shows that issues can remain in the climatology of nudged models. Significant biases in stratospheric vertical velocities, age of air, water vapour, and total column ozone still exist in the Met Office nudged model. Further, these can lead to biases in the downward flux of ozone into the troposphere.
Software Framework for Advanced Power Plant Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
John Widmann; Sorin Munteanu; Aseem Jain
2010-08-01
This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.
NASA Technical Reports Server (NTRS)
Harvey, Jason; Moore, Michael
2013-01-01
The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.
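At the core of the hydraulic-electric analogy is the same linear-algebra step regardless of domain: assemble a conductance matrix from the branch elements and solve for the node potentials (voltages, pressures, or temperatures). A minimal sketch with an invented three-node network (GUNNS itself is C++ and also handles nonlinear and transient terms):

```python
import numpy as np

# Hypothetical three-node network; node 0 is the reference ('ground').
# Branches are (node_a, node_b, conductance); the same matrix form covers
# electrical (current/voltage), fluid (flow/pressure), or thermal networks.
branches = [(0, 1, 0.20), (1, 2, 0.10), (2, 0, 0.15)]
injections = {1: 1.0}                     # unit flow injected at node 1

n = 3
G = np.zeros((n, n))
for a, b, g in branches:                  # assemble the conductance matrix
    G[a, a] += g; G[b, b] += g
    G[a, b] -= g; G[b, a] -= g

Gr = G[1:, 1:]                            # remove the reference row/column
rhs = np.array([injections.get(k, 0.0) for k in range(1, n)])
v = np.linalg.solve(Gr, rhs)              # node potentials relative to node 0
print("node potentials:", v)
```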
SIM_EXPLORE: Software for Directed Exploration of Complex Systems
NASA Technical Reports Server (NTRS)
Burl, Michael; Wang, Esther; Enke, Brian; Merline, William J.
2013-01-01
Physics-based numerical simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. While such codes may provide the highest- fidelity representation of system behavior, they are often so slow to run that insight into the system is limited. Trying to understand the effects of inputs on outputs by conducting an exhaustive grid-based sweep over the input parameter space is simply too time-consuming. An alternative approach called "directed exploration" has been developed to harvest information from numerical simulators more efficiently. The basic idea is to employ active learning and supervised machine learning to choose cleverly at each step which simulation trials to run next based on the results of previous trials. SIM_EXPLORE is a new computer program that uses directed exploration to explore efficiently complex systems represented by numerical simulations. The software sequentially identifies and runs simulation trials that it believes will be most informative given the results of previous trials. The results of new trials are incorporated into the software's model of the system behavior. The updated model is then used to pick the next round of new trials. This process, implemented as a closed-loop system wrapped around existing simulation code, provides a means to improve the speed and efficiency with which a set of simulations can yield scientifically useful results. The software focuses on the case in which the feedback from the simulation trials is binary-valued, i.e., the learner is only informed of the success or failure of the simulation trial to produce a desired output. The software offers a number of choices for the supervised learning algorithm (the method used to model the system behavior given the results so far) and a number of choices for the active learning strategy (the method used to choose which new simulation trials to run given the current behavior model). The software also makes use of the LEGION distributed computing framework to leverage the power of a set of compute nodes. The approach has been demonstrated on a planetary science application in which numerical simulations are used to study the formation of asteroid families.
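The directed-exploration loop with binary feedback can be sketched with any probabilistic classifier standing in for the behavior model; below, a hypothetical cheap `simulate` function replaces the expensive code, and a logistic model picks the most uncertain untried candidate each round (illustrative only; SIM_EXPLORE offers several learner and strategy choices):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(x):
    """Hypothetical stand-in for an expensive run: success iff x0 + 2*x1 > 1."""
    return int(x[0] + 2.0 * x[1] > 1.0)

cand = rng.uniform(0.0, 1.0, size=(500, 2))        # candidate input settings
tried = [int(i) for i in rng.choice(500, size=10, replace=False)]
y = [simulate(cand[i]) for i in tried]
while len(set(y)) < 2:                              # need both outcomes to fit
    i = int(rng.integers(500)); tried.append(i); y.append(simulate(cand[i]))

for _ in range(30):                                 # directed-exploration loop
    clf = LogisticRegression().fit(cand[tried], y)
    p = clf.predict_proba(cand)[:, 1]
    uncertainty = -np.abs(p - 0.5)                  # highest near the boundary
    uncertainty[tried] = -np.inf                    # never re-run a trial
    nxt = int(np.argmax(uncertainty))
    tried.append(nxt)
    y.append(simulate(cand[nxt]))

print(f"{len(tried)} trials; learned boundary coefficients: {clf.coef_[0]}")
```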
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is commonly combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running times of spatially explicit forest models, a consequence of their complexity, it is usually infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
NASA Astrophysics Data System (ADS)
Dilmen, Derya I.; Titov, Vasily V.; Roe, Gerard H.
2015-12-01
On September 29, 2009, an Mw = 8.1 earthquake at 17:48 UTC in the Tonga Trench generated a tsunami that caused heavy damage across Samoa, American Samoa, and the Tonga islands. Tutuila island, located 250 km from the earthquake epicenter, experienced tsunami flooding and strong currents on the north and east coasts, causing 34 fatalities (out of 192 total deaths from this tsunami) and widespread structural and ecological damage. The surrounding coral reefs also suffered heavy damage, which was formally evaluated based on detailed surveys before and immediately after the tsunami. This setting thus provides a unique opportunity to evaluate the relationship between tsunami dynamics and coral damage. In this study, estimates of the maximum wave amplitudes and coastal inundation of the tsunami are obtained with the MOST model (Titov and Synolakis, J. Waterway Port Coast Ocean Eng: pp 171, 1998; Titov and Gonzalez, NOAA Tech. Memo. ERL PMEL 112:11, 1997), which is now the operational tsunami forecast tool used by the National Oceanic and Atmospheric Administration (NOAA). The earthquake source function was constrained using the real-time deep-ocean tsunami data from three DART® (Deep-ocean Assessment and Reporting of Tsunamis) systems in the far field, and by tide-gauge observations in the near field. We compare the simulated run-up with observations to evaluate the simulation performance. We present an overall synthesis of the tide-gauge data, survey results of the run-up, inundation measurements, and the datasets of coral damage around the island. These data are used to assess the overall accuracy of the model run-up prediction for Tutuila and to evaluate the model accuracy over the coral reef environment during the tsunami event. Our primary findings are that: (1) MOST-simulated run-up correlates well with observed run-up for this event (r = 0.8), but the model tends to underestimate amplitudes over the coral reef environment around Tutuila (in 15 of 31 villages, run-up is underestimated by more than 10%; in only 5 was run-up overestimated by more than 10%); and (2) the locations where the model underestimates run-up also tend to have experienced heavy or very heavy coral damage (8 of the 15 villages), whereas well-estimated run-up locations characteristically experienced low or very low damage (7 of 11 villages). These findings imply that the numerical model may overestimate the energy loss of the tsunami waves during their interaction with the coral reef. We plan future studies to quantify this energy loss and to explore what improvements can be made in simulations of tsunami run-up in coastal environments with fringing coral reefs.
Mars-solar wind interaction: LatHyS, an improved parallel 3-D multispecies hybrid model
NASA Astrophysics Data System (ADS)
Modolo, Ronan; Hess, Sebastien; Mancini, Marco; Leblanc, Francois; Chaufray, Jean-Yves; Brain, David; Leclercq, Ludivine; Esteban-Hernández, Rosa; Chanteur, Gerard; Weill, Philippe; González-Galindo, Francisco; Forget, Francois; Yagi, Manabu; Mazelle, Christian
2016-07-01
In order to better represent the Mars-solar wind interaction, we present an unprecedented model achieving spatial resolution down to 50 km, a so far unexplored resolution for global kinetic models of the Martian ionized environment. Such resolution approaches the ionospheric plasma scale height. In practice, the model is derived from a first version described in Modolo et al. (2005). An important effort of parallelization has been conducted and is presented here. A better description of the ionosphere was also implemented, including ionospheric chemistry, electrical conductivities, and a drag force modeling the ion-neutral collisions in the ionosphere. This new version of the code, named LatHyS (Latmos Hybrid Simulation), is here used to characterize the impact of various spatial resolutions on simulation results. In addition, and following a global model challenge effort, we present the results of simulation runs for three cases, which allow us to address the effect of the suprathermal corona and of the solar EUV activity on the magnetospheric plasma boundaries and on the global escape. Simulation results showed that global patterns are relatively similar for the different spatial resolution runs, but the finest-grid runs provide a better representation of the ionosphere and display more details of the planetary plasma dynamics. Simulation results suggest that a significant fraction of escaping O+ ions originates from below 1200 km altitude.
Challenges in Visual Analysis of Ensembles
Crossno, Patricia
2018-04-12
Modeling physical phenomena through computational simulation increasingly relies on generating a collection of related runs, known as an ensemble. In this paper, we explore the challenges we face in developing analysis and visualization systems for large and complex ensemble data sets, which we seek to understand without having to view the results of every simulation run. Implementing approaches and ideas developed in response to this goal, we demonstrate the analysis of a 15K run material fracturing study using Slycat, our ensemble analysis system.
Shadow: Running Tor in a Box for Accurate and Efficient Experimentation
2011-09-23
Modeling the speed of a target CPU is done by running an OpenSSL [31] speed test on a real CPU of that type. This provides us with the raw CPU processing...rate, but we are also interested in the processing speed of an application. By running application benchmarks on the same CPU as the OpenSSL speed test...simulation, saving CPU cycles on our simulation host machine. Shadow removes cryptographic processing by preloading the main OpenSSL [31] functions used
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, Srikanta; Jin, Larry; He, Jincong
2015-06-30
Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO2 storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO2-water systems simulated using a compositional procedure. Stanford's Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO2 injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full-order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements for about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. Results in this case involve only small differences between training and test runs, though they do demonstrate that the approach is able to capture basic solution trends. The impact of some of the detailed numerical treatments within the POD-TPWL formulation is considered in an Appendix.
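For readers unfamiliar with the method, the following minimal numpy sketch illustrates one POD-TPWL update under stated assumptions: the basis, states, Jacobians, and function names are invented for illustration and are not the AD-GPRS implementation.

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD basis from training snapshots (columns): the first r left singular vectors."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def tpwl_step(Phi, x_train_next, x_train, u_train, J_x, J_u, x, u):
    """One TPWL update in POD-reduced coordinates:
    x_{n+1} ~= x_train_{n+1} + A (x_n - x_train_n) + B (u - u_train),
    with A = J_x, B = J_u the reduced Jacobians saved from the training run."""
    dz = Phi.T @ (x - x_train)   # reduced deviation from the training state
    du = u - u_train             # deviation in controls (e.g., injection rate)
    z_next = Phi.T @ x_train_next + J_x @ dz + J_u @ du
    return Phi @ z_next          # lift back to the full-order state
```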
A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON
King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix
2008-01-01
As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, and interaction with virtual environments to analysis and visualization. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597
Growth-simulation model for lodgepole pine in central Oregon.
Walter G. Dahms
1983-01-01
A growth-simulation model for central Oregon lodgepole pine (Pinus contorta Dougl.) has been constructed by combining data from temporary and permanent sample plots. The model is similar to a conventional yield table with the added capacity for dealing with the stand-density variable. The simulator runs on a desktop computer.
Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System
NASA Astrophysics Data System (ADS)
He, Qing; Li, Hong
The belt conveyor is one of the most important devices for transporting bulk solid material over long distances. Dynamic analysis is key to deciding whether a design is technically sound, safe and reliable in operation, and economically feasible. Studying dynamic properties is essential for improving efficiency and productivity and for guaranteeing safe, reliable, and stable running. Dynamic research on and applications of large-scale belt conveyors are discussed, and the main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work should focus on dynamic analysis, modeling, and simulation of the main components and the whole system, as well as nonlinear modeling, simulation, and vibration analysis of large-scale conveyor systems.
Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?
NASA Astrophysics Data System (ADS)
Lin, Guangxing; Wan, Hui; Zhang, Kai; Qian, Yun; Ghan, Steven J.
2016-09-01
Efficient simulation strategies are crucial for the development and evaluation of high-resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrating examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity of the constrained simulations depends on the detailed implementation of nudging and the mechanism through which the perturbed parameter affects precipitation and cloud. The relative computational costs of nudged and free-running simulations are determined by the magnitude of internal variability in the physical quantities of interest, as well as the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, temperature and/or wind nudging with a 6 h relaxation time scale leads to nonnegligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while 1-year free-running simulations can satisfactorily capture the annual mean precipitation and cloud forcing sensitivities. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1-year free-running simulations are strongly affected by natural noise, while nudging winds effectively reduces the noise and reasonably reproduces the sensitivities. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.
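The nudging discussed above is Newtonian relaxation toward a reference state. The sketch below shows the generic tendency term with the 6 h relaxation time scale mentioned in the abstract; the names and the toy forward-Euler loop are illustrative, not CAM5 code.

```python
import numpy as np

def nudged_tendency(x, x_ref, tau_seconds=6 * 3600.0):
    """Relaxation tendency added to the model's own tendency:
    dX/dt = F(X) + (X_ref - X) / tau."""
    return (x_ref - x) / tau_seconds

# Toy integration of a scalar field toward reference (e.g., reanalysis) values
dt = 1800.0                       # 30-minute model time step
x = np.array([250.0, 260.0])      # model temperatures (K)
x_ref = np.array([252.0, 258.0])  # reference temperatures (K)
for _ in range(48):               # one model day
    x += dt * nudged_tendency(x, x_ref)
```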
Advanced ETC/LSS computerized analytical models, CO2 concentration. Volume 1: Summary document
NASA Technical Reports Server (NTRS)
Taylor, B. N.; Loscutoff, A. V.
1972-01-01
Computer simulations have been prepared for the concepts of CO2 concentration which have the potential for maintaining a CO2 partial pressure of 3.0 mmHg, or less, in a spacecraft environment. The simulations were performed using the G-189A Generalized Environmental Control computer program. In preparing the simulations, new subroutines to model the principal functional components for each concept were prepared and integrated into the existing program. Sample problems were run to demonstrate the methods of simulation and performance characteristics of the individual concepts. Comparison runs for each concept can be made for parametric values of cabin pressure, crew size, cabin air dry and wet bulb temperatures, and mission duration.
Mars Tumbleweed Simulation Using Singular Perturbation Theory
NASA Technical Reports Server (NTRS)
Raiszadeh, Behzad; Calhoun, Phillip
2005-01-01
The Mars Tumbleweed is a new surface rover concept that utilizes Martian winds as the primary source of mobility. Several designs have been proposed for the Mars Tumbleweed, all using aerodynamic drag to generate force for traveling about the surface. The Mars Tumbleweed, in its deployed configuration, must be large and lightweight to provide the ratio of drag force to rolling resistance necessary to initiate motion from the Martian surface. This paper discusses the dynamic simulation details of a candidate Tumbleweed design. The dynamic simulation model must properly evaluate and characterize the motion of the tumbleweed rover to support the selection of system design parameters. Several factors, such as model flexibility, simulation run time, and model accuracy, needed to be considered when making modeling assumptions. The simulation was required to address the flexibility of the rover and its interaction with the ground, and to properly evaluate its mobility. Assumptions needed to be made such that the simulated dynamic motion was accurate and realistic while not overly burdened by long simulation run times. This paper also shows results that provided reasonable correlation between the simulation and a drop/roll test of a tumbleweed prototype.
Simulation of ozone production in a complex circulation region using nested grids
NASA Astrophysics Data System (ADS)
Taghavi, M.; Cautenet, S.; Foret, G.
2003-07-01
During the ESCOMPTE precampaign (15 June to 10 July 2000), three days of intensive pollution (IOP0) were observed and simulated. The comprehensive RAMS model, version 4.3, coupled online with a chemical module including 29 species, has been used to follow the chemistry of the polluted zone over southern France. This online method can be used because the code is parallelized and the SGI 3800 computer is very powerful. Two runs have been performed: run 1 with one grid and run 2 with two nested grids. The redistribution of simulated chemical species (ozone, carbon monoxide, sulphur dioxide, and nitrogen oxides) was compared to aircraft measurements and surface station data. The 2-grid run gave substantially better results than the one-grid run, chiefly because the former takes the outer pollutants into account. This online method helps to explain the dynamics and to retrieve the chemical species redistribution with good agreement.
Simulation of ozone production in a complex circulation region using nested grids
NASA Astrophysics Data System (ADS)
Taghavi, M.; Cautenet, S.; Foret, G.
2004-06-01
During the ESCOMPTE precampaign (summer 2000, over southern France), a 3-day period of intensive observation (IOP0), associated with ozone peaks, has been simulated. The comprehensive RAMS model, version 4.3, coupled on-line with a chemical module including 29 species, is used to follow the chemistry of the polluted zone. This efficient but time-consuming method can be used because the code is installed on a parallel computer, the SGI 3800. Two runs are performed: run 1 with a single grid and run 2 with two nested grids. The simulated fields of ozone, carbon monoxide, nitrogen oxides, and sulfur dioxide are compared with aircraft and surface station measurements. The 2-grid run performs substantially better than the one-grid run because the former takes the outer pollutants into account. This on-line method helps to satisfactorily retrieve the chemical species redistribution and to explain the impact of dynamics on this redistribution.
ERIC Educational Resources Information Center
Nordmark, Staffan
1984-01-01
This report contains a theoretical model for describing the motion of a passenger car. The simulation program based on this model is used in conjunction with an advanced driving simulator and run in real time. The mathematical model is complete in the sense that the dynamics of the engine, transmission and steering system is described in some…
The evolution of extreme precipitations in high resolution scenarios over France
NASA Astrophysics Data System (ADS)
Colin, J.; Déqué, M.; Somot, S.
2009-09-01
Over the past years, improving the modelling of extreme events and their variability at climatic time scales has become one of the challenging issues raised in the regional climate research field. This study shows the results of a high resolution (12 km) scenario run over France with the limited area model (LAM) ALADIN-Climat, regarding the representation of extreme precipitation. The runs were conducted in the framework of the ANR-SCAMPEI national project on high resolution scenarios over French mountains. As a first step, we attempt to quantify one of the uncertainties implied by the use of a LAM: the size of the area on which the model is run. In particular, we address the issue of whether a relatively small domain allows the model to create its own small-scale processes. Indeed, high resolution scenarios cannot be run on large domains because of the computation time, so this preliminary question must be answered before producing and analyzing such scenarios. To do so, we worked in the framework of a « big brother » experiment. We performed a 23-year long global simulation in present-day climate (1979-2001) with the ARPEGE-Climat GCM, at a resolution of approximately 50 km over Europe (stretched grid). This first simulation, named ARP50, constitutes the « big brother » reference of our experiment. It has been validated against the CRU climatology. We then filtered the short waves (up to 200 km) from ARP50 in order to obtain the equivalent of coarse resolution lateral boundary conditions (LBC). We carried out three ALADIN-Climat simulations at a 50 km resolution with these LBC, using different configurations of the model: FRA50, run over a small domain (2000 x 2000 km, centered over France); EUR50, run over a larger domain (5000 x 5000 km, also centered over France); and EUR50-SN, run over the large domain using spectral nudging. Considering that the ARPEGE-Climat and ALADIN-Climat models share the same physics and dynamics, and that both regional and global simulations were run at the same resolution, ARP50 can be regarded as a reference with which FRA50, EUR50, and EUR50-SN should each be compared. After an analysis of the differences between the regional simulations and ARP50 in annual and seasonal means, we focus on the representation of rainfall extremes, comparing two-dimensional fields of various indices inspired by STARDEX as well as quantile-quantile plots. The results show a good agreement with the ARP50 reference for all three regional simulations, and few differences are found between them. This result indicates that the use of small domains is not significantly detrimental to the modelling of extreme precipitation events. It also shows that the spectral nudging technique has no detrimental effect on the extreme precipitation. Therefore, high resolution scenarios performed on a relatively small domain, such as the ones run for SCAMPEI, can be regarded as good tools to explore the possible evolution of precipitation extremes in the future climate. Preliminary results on the response of precipitation extremes over South-East France are given.
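The « big brother » filtering step lends itself to a short sketch: remove wavelengths shorter than a cutoff from a gridded field to emulate coarse-resolution LBC. This is a generic numpy low-pass filter under assumed grid spacing and cutoff, not the actual ARPEGE/ALADIN processing chain.

```python
import numpy as np

def lowpass_2d(field, dx_km=50.0, cutoff_km=200.0):
    """Zero out spectral components with wavelengths shorter than cutoff_km."""
    ny, nx = field.shape
    kx = np.fft.fftfreq(nx, d=dx_km)   # wavenumbers in cycles per km
    ky = np.fft.fftfreq(ny, d=dx_km)
    KX, KY = np.meshgrid(kx, ky)
    mask = np.hypot(KX, KY) <= 1.0 / cutoff_km   # keep wavelengths >= cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(field) * mask))

coarse_lbc_field = lowpass_2d(np.random.rand(100, 100))
```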
NASA Astrophysics Data System (ADS)
Oglesby, R. J.; Erickson, D. J.; Hernandez, J. L.; Irwin, D.
2005-12-01
Central America covers a relatively small area, but is topographically very complex, has long coastlines, large inland bodies of water, and very diverse land cover which is both natural and human-induced. As a result, Central America is plagued by hydrologic extremes, especially major flooding and drought events, in a region where many people still barely manage to eke out a living through subsistence. Therefore, considerable concern exists about whether these extreme events will change, either in magnitude or in number, as climate changes in the future. To address this concern, we have used global climate model simulations of future climate change to drive a regional climate model centered on Central America. We use the IPCC 'business as usual' scenario 21st century run made with the NCAR CCSM3 global model to drive the regional model MM5 at 12 km resolution. We chose the 'business as usual' scenario to focus on the largest possible changes that are likely to occur. Because we are most interested in near-term changes, our simulations are for the years 2010, 2015, and 2025. A long 'present-day' run (for 2005) allows us to distinguish between climate variability and any signal due to climate change. Furthermore, a multi-year run with MM5 forced by NCEP reanalyses allows an assessment of how well the coupled global-regional model performs over Central America. Our analyses suggest that the coupled model does a credible job simulating the current climate and hydrologic regime, though lack of sufficient observations strongly complicates this comparison. The suite of model runs for the future years is currently nearing completion, and key results will be presented at the meeting.
The Prodiguer Messaging Platform
NASA Astrophysics Data System (ADS)
Greenslade, Mark; Denvil, Sebastien; Raciazek, Jerome; Carenton, Nicolas; Levavasseur, Guillame
2014-05-01
CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output (data and meta-data) are just some of the complexities that CONVERGENCE aims to resolve. The Institut Pierre Simon Laplace (IPSL) is responsible for running climate simulations upon a set of heterogeneous HPC environments within France. With heterogeneity comes added complexity in terms of simulation instrumentation and control. Obtaining a global perspective upon the state of all simulations running upon all HPC environments has hitherto been problematic. In this presentation we detail how, within the context of CONVERGENCE, the implementation of the Prodiguer messaging platform resolves complexity and permits the development of real-time applications such as: 1. a simulation monitoring dashboard; 2. a simulation metrics visualizer; 3. an automated simulation runtime notifier; 4. an automated output data & meta-data publishing pipeline. The Prodiguer messaging platform leverages a widely used open source message broker software called RabbitMQ. RabbitMQ itself implements the Advanced Message Queuing Protocol (AMQP). Hence it will be demonstrated that the Prodiguer messaging platform is built upon both open source and open standards.
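As a concrete illustration of the messaging pattern described above, the sketch below publishes a simulation-status message to RabbitMQ with the widely used pika Python client. The exchange, routing key, and message schema are invented for illustration; they are not Prodiguer's actual conventions.

```python
import json
import pika  # standard Python client for RabbitMQ

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="simulation.status", exchange_type="topic")

# Hypothetical heartbeat from a running simulation on an HPC node
message = {"simulation_id": "ipsl-run-001", "state": "running", "progress": 0.42}
channel.basic_publish(
    exchange="simulation.status",
    routing_key="hpc.node1.heartbeat",
    body=json.dumps(message),
)
connection.close()
```

A monitoring dashboard would bind a queue to the same exchange and consume these messages to display simulation state in real time.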
2011-03-21
throughout the experimental runs. Reliable and validated measures of anxiety (Spielberger, 1983), as well as custom-constructed questionnaires about... Crowd modeling and simulation technologies. Transactions on Modeling and Computer Simulation, 20(4). Spielberger, C. D. (1983
2013-01-01
Background Agent-based models (ABMs) have been used to estimate the effects of malaria-control interventions. Early studies have shown the efficacy of larval source management (LSM) and insecticide-treated nets (ITNs) as vector-control interventions, applied both in isolation and in combination. However, the robustness of results can be affected by several important modelling assumptions, including the type of boundary used for landscapes and the number of replicated simulation runs reported in results. Selection of the ITN coverage definition may also affect the predictive findings. Hence, independent verification, by replication, of prior findings of published models is especially important. Methods A spatially explicit entomological ABM of Anopheles gambiae is used to simulate the resource-seeking process of mosquitoes in grid-based landscapes. To explore LSM and replicate results of an earlier LSM study, the original landscapes and scenarios are replicated by using a landscape generator tool, and 1,800 replicated simulations are run using absorbing and non-absorbing boundaries. To explore ITNs and evaluate the relative impacts of the different ITN coverage schemes, the settings of an earlier ITN study are replicated, the coverage schemes are defined and simulated, and 9,000 replicated simulations for three ITN parameters (coverage, repellence and mortality) are run. To evaluate LSM and ITNs in combination, landscapes with varying densities of houses and human populations are generated, and 12,000 simulations are run. Results General agreement with an earlier LSM study is observed when an absorbing boundary is used. However, using a non-absorbing boundary produces significantly different results, which may be attributed to the unrealistic killing effect of an absorbing boundary. Abundance cannot be completely suppressed by removing aquatic habitats within 300 m of houses. Also, with density-dependent oviposition, removal of an insufficient number of aquatic habitats may prove counter-productive. The importance of performing a large number of simulation runs is also demonstrated. For ITNs, the choice of coverage scheme has important implications, and excessively high repellence yields detrimental effects. When LSM and ITNs are applied in combination, ITN mortality plays a more important role at higher densities of houses. With partial mortality, increasing ITN coverage is more effective than increasing LSM coverage, and integrating both interventions yields more synergy as the density of houses increases. Conclusions Using a non-absorbing boundary and reporting average results from a sufficiently large number of simulation runs are strongly recommended for malaria ABMs. Several guidelines (code and data sharing, relevant documentation, and standardized models) for future modellers are also recommended. PMID:23965136
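The boundary distinction that drives the Results above can be made concrete with a minimal sketch: an absorbing boundary discards agents that step off the grid (the unrealistic killing effect), while a non-absorbing, here periodic, boundary wraps them around. Grid size and movement are illustrative, not the published model.

```python
import numpy as np

SIZE = 100  # landscape is SIZE x SIZE cells

def move(pos, step, absorbing=False):
    """Return the agent's new position, or None if it is absorbed at the edge."""
    new = pos + step
    if absorbing:
        if np.any(new < 0) or np.any(new >= SIZE):
            return None    # agent leaves the landscape and is lost
        return new
    return new % SIZE      # periodic wrap-around keeps the agent in play

pos = np.array([0, 50])
print(move(pos, np.array([-1, 0]), absorbing=True))   # None: absorbed
print(move(pos, np.array([-1, 0]), absorbing=False))  # [99 50]: wrapped
```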
DOT National Transportation Integrated Search
1979-12-01
An econometric model is developed which provides long-run policy analysis and forecasting of annual trends, for U.S. auto stock, new sales, and their composition by auto size-class. The concept of "desired" (equilibrium) stock is introduced. "Desired...
Vulnerability Model. A Simulation System for Assessing Damage Resulting from Marine Spills
1975-06-01
used and the scenario simulated. The test runs were made on an IBM 360/65 computer. Running times were generally between 15 and 35 CPU seconds... fect further north. A petroleum tank-truck operation was located within 600 feet of a stock pond on which the crude oil had dammed up. At 5 A.M.
Lytton, William W; Neymotin, Samuel A; Hines, Michael L
2008-06-30
In an effort to design a simulation environment that is more similar to that of neurophysiology, we introduce a virtual slice setup in the NEURON simulator. The virtual slice setup runs continuously and permits parameter changes, including changes to synaptic weights and time courses and to intrinsic cell properties. The virtual slice setup permits shocks to be applied at chosen locations and activity to be sampled intra- or extracellularly from chosen locations. By default, a summed population display is shown during a run to indicate the level of activity, and no states are saved. Simulations can run for hours of model time, so it is not practical to save all of the state variables. These, in any case, are primarily of interest at discrete times when experiments are being run: the simulation can be stopped momentarily at such times to save activity patterns. The virtual slice setup maintains an automated notebook recording shocks and parameter changes as well as user comments. We demonstrate how interaction with a continuously running simulation encourages experimental prototyping and can suggest additional dynamical features such as ligand wash-in and wash-out, alternatives to the typical instantaneous parameter change. The virtual slice setup currently uses event-driven cells and runs at approximately 2 min/h on a laptop.
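The continuous-run interaction described above can be sketched with NEURON's standard run system: integrate for a stretch of model time, change a parameter without re-initializing, and continue. This single-compartment example is only illustrative; the actual virtual slice uses event-driven cells and a population display.

```python
from neuron import h

h.load_file("stdrun.hoc")        # standard run system providing h.continuerun

soma = h.Section(name="soma")
soma.insert("hh")                # Hodgkin-Huxley channels
stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 5.0, 1e9, 0.1   # effectively continuous drive

h.finitialize(-65.0)
h.continuerun(200.0)             # run the first 200 ms of model time

stim.amp = 0.2                   # mid-run parameter change, no re-initialization
h.continuerun(400.0)             # continue from t = 200 ms to t = 400 ms
```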
Capsule modeling of high foot implosion experiments on the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, D. S.; Kritcher, A. L.; Milovich, J. L.
2017-03-21
This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects—both nominal and off-nominal—such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.
Simulations of Eurasian winter temperature trends in coupled and uncoupled CFSv2
NASA Astrophysics Data System (ADS)
Collow, Thomas W.; Wang, Wanqiu; Kumar, Arun
2018-01-01
Conflicting results have been presented regarding the link between Arctic sea-ice loss and midlatitude cooling, particularly over Eurasia. This study analyzes uncoupled (atmosphere-only) and coupled (ocean-atmosphere) simulations by the Climate Forecast System, version 2 (CFSv2), to examine this linkage during the Northern Hemisphere winter, focusing on the simulation of the observed surface cooling trend over Eurasia during the last three decades. The uncoupled simulations are Atmospheric Model Intercomparison Project (AMIP) runs forced with mean seasonal cycles of sea surface temperature (SST) and sea ice, using combinations of SST and sea ice from different time periods to assess the role that each plays individually, and to assess the role of atmospheric internal variability. Coupled runs are used to further investigate the role of internal variability via the analysis of initialized predictions and the evolution of the forecast with lead time. The AMIP simulations show a mean warming response over Eurasia due to SST changes, but little response to changes in sea ice. Individual runs simulate cooler periods over Eurasia, and this is shown to be concurrent with a stronger Siberian high and warming over Greenland. No substantial differences in the variability of Eurasian surface temperatures are found between the different model configurations. In the coupled runs, the region of significant warming over Eurasia is small at short leads, but increases at longer leads. It is concluded that, although the models have some capability in highlighting the temperature variability over Eurasia, the observed cooling may still be a consequence of internal variability.
Global and local waveform simulations using the VERCE platform
NASA Astrophysics Data System (ADS)
Garth, Thomas; Saleh, Rafiq; Spinuso, Alessandro; Gemund, Andre; Casarotti, Emanuele; Magnoni, Federica; Krischner, Lion; Igel, Heiner; Schlichtweg, Horst; Frank, Anton; Michelini, Alberto; Vilotte, Jean-Pierre; Rietbrock, Andreas
2017-04-01
In recent years the potential to increase the resolution of seismic imaging by full waveform inversion has been demonstrated on a range of scales, from basin to continental. These techniques rely on harnessing the computational power of large supercomputers and running large parallel codes to simulate the seismic wave field in a three-dimensional geological setting. The VERCE platform is designed to make these full waveform techniques accessible to a far wider spectrum of the seismological community. The platform supports the two widely used spectral element simulation programs, SPECFEM3D Cartesian and SPECFEM3D Globe, allowing users to run a wide range of simulations. In the SPECFEM3D Cartesian implementation the user can run waveform simulations on a range of pre-loaded meshes and velocity models for specific areas, or upload their own velocity model and mesh. In the new SPECFEM3D Globe implementation, the user will be able to select from a number of continent-scale model regions, or perform waveform simulations for the whole Earth. Earthquake focal mechanisms can be downloaded within the platform, for example from the GCMT catalogue, or users can upload their own focal mechanism catalogue through the platform. The simulations can be run on a range of European supercomputers in the PRACE network. Once a job has been submitted and run through the platform, the simulated waveforms can be manipulated or downloaded for further analysis. The misfit between the simulated and recorded waveforms can then be calculated within the platform through three interoperable workflows: raw-data access (FDSN) and caching, pre-processing, and finally misfit calculation. The last workflow makes use of the Pyflex analysis software. In addition, the VERCE platform can be used to produce animations of waveform propagation through the velocity model, and synthetic shakemaps. All these data products are made discoverable and re-usable thanks to the VERCE data and metadata management layer. We demonstrate the functionality of the VERCE platform with two use cases, one using the pre-loaded velocity model and mesh for the Maule area of Chile with the SPECFEM3D Cartesian workflow, and one showing the output of a global simulation using the SPECFEM3D Globe workflow. It is envisioned that this tool will allow a much greater range of seismologists to access these full waveform inversion tools, and will aid full waveform tomographic and source inversion, synthetic shakemap production, and other full waveform applications in a wide range of tectonic settings.
NASA Technical Reports Server (NTRS)
Mcenulty, R. E.
1977-01-01
The G189A simulation of the Shuttle Orbiter ECLSS was upgraded. All simulation library versions and simulation models were converted from the EXEC2 to the EXEC8 computer system, and a new program, G189PL, was added to the combination master program library. The program permits the post-plotting of up to 100 frames of plot data over any time interval of a G189 simulation run. The overlay structure of the G189A simulations was restructured to conserve computer core and minimize run time requirements.
Atmosphere-ocean feedbacks in a coastal upwelling system
NASA Astrophysics Data System (ADS)
Alves, J. M. R.; Peliz, A.; Caldeira, R. M. A.; Miranda, P. M. A.
2018-03-01
The COAWST (Coupled Ocean-Atmosphere-Wave-Sediment Transport) modelling system is used in different configurations to simulate the Iberian upwelling during the 2012 summer, aiming to assess the atmosphere-ocean feedbacks in the upwelling dynamics. When model results are compared with satellite measurements and in-situ data, two-way coupling is found to have a moderate impact on data-model statistics. A significant reinforcement of atmosphere-ocean coupling coefficients is, however, observed in the two-way coupled run, and in the WRF and ROMS runs forced by previously simulated SST and wind fields, respectively. The increase in the coupling coefficient is associated with slight but potentially important changes in the low-level coastal jet in the atmospheric marine boundary layer. While these results do not imply the need for fully coupled simulations in many applications, they show that in seasonal numerical studies such simulations do not degrade the overall model performance, and contribute to producing better dynamical fields.
NASA Astrophysics Data System (ADS)
Silva, F.; Maechling, P. J.; Goulet, C. A.; Somerville, P.; Jordan, T. H.
2014-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving geoscientists, earthquake engineers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform (BBP) is open-source scientific software that can generate broadband (0-100 Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms for a well-observed historical earthquake. Then, the BBP calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, and several new data products, such as map and distance-based goodness of fit plots. As the number and complexity of scenarios simulated using the Broadband Platform increases, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeff S.
1992-01-01
Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.
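To make the event-processing substrate concrete, here is a minimal discrete-event core: a priority queue of time-stamped events popped in order. This illustrates only the generic mechanism that synchronization algorithms such as Breathing Time Buckets build upon; it is not SPEEDES code, and the parallel protocols themselves are not shown.

```python
import heapq

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []   # heap of (time, sequence, handler, payload)
        self._seq = 0      # tie-breaker for simultaneous events

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler, payload))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler, payload = heapq.heappop(self._queue)
            handler(self, payload)

def ping(sim, count):
    print(f"t={sim.now:.1f} event #{count}")
    if count < 3:
        sim.schedule(1.5, ping, count + 1)  # events schedule future events

sim = Simulator()
sim.schedule(0.0, ping, 1)
sim.run(until=10.0)
```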
Li, Zhengdong; Zou, Donghua; Liu, Ningguo; Zhong, Liangwei; Shao, Yu; Wan, Lei; Huang, Ping; Chen, Yijiu
2013-06-10
The elucidation and prediction of the biomechanics of lower limb fractures could serve as a useful tool in forensic practice. Finite element (FE) analysis could potentially help in the understanding of the fracture mechanisms of lower limb fractures frequently caused by car-pedestrian accidents. Our aim was (1) to develop and validate an FE model of the human lower limb, (2) to assess the biomechanics of specific injuries under run-over and impact loading conditions, and (3) to reconstruct one real car-pedestrian collision case using the model created in this study. We developed a novel lower limb FE model and simulated three different loading scenarios. The geometry of the model was reconstructed using Mimics 13.0 based on computed tomography (CT) scans from an actual traffic accident. The material properties were based upon a synthesis of data found in published literature. The FE model validation and injury reconstruction were conducted using the LS-DYNA code. The FE model was validated by comparing the simulation results of three-point bending and overall lateral impact tests with published postmortem human surrogate (PMHS) results. Simulated loading scenarios of a wheel running over the thigh, an impact on the upper leg, and an impact on the lower thigh were conducted at velocities of 10 m/s, 20 m/s, and 40 m/s, respectively. We compared the injuries resulting from one actual case with the simulated results in order to explore the possible fracture biomechanism. The peak fracture forces, maximum bending moments, and energy loss ratio exhibited no significant differences between the FE simulations and the literature data. Under simulated run-over conditions, a segmental fracture pattern was formed, and the femur fracture patterns and mechanisms were consistent with the actual injury features of the case. Our study demonstrated that this simulation method could potentially be effective in reconstructing forensic cases and exploring the injury mechanisms of lower limb fractures caused by inflicted trauma. This model can also help to distinguish between possible and impossible scenarios.
Simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed for running on parallel computers and distributed (networked) computer systems, but can run on standalone workstations for smaller simulations. The simulator currently models instrumented smart vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. Realistic modeling of variations of the posted driving speed is based on human-factors studies that take into consideration weather, road conditions, driver personality and behavior, and vehicle type. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on parallel computers, such as ANL's IBM SP-2, for large-scale problems. A novel feature of the approach is that vehicles are represented by autonomous computer processes which exchange messages with other processes. The vehicles have a behavior model which governs route selection and driving behavior, and can react to external traffic events much like real vehicles. With this approach, the simulation is scalable to take advantage of emerging massively parallel processor (MPP) systems.
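The vehicles-as-autonomous-processes idea can be sketched with Python's multiprocessing module: each vehicle process reports probe data to a TMC process and reacts to advisories sent back. The message schema and behavior model are invented for illustration and are far simpler than the framework described above.

```python
import multiprocessing as mp

def vehicle(vid, inbox, tmc_queue):
    position = 0.0
    for _ in range(3):
        position += 1.0                          # trivial stand-in driving model
        tmc_queue.put(("probe", vid, position))  # probe-vehicle report to the TMC
        if not inbox.empty():
            kind, payload = inbox.get()
            if kind == "advisory":
                position += payload              # react to the advisory

def tmc(tmc_queue, inboxes, n_reports):
    for _ in range(n_reports):
        kind, vid, pos = tmc_queue.get()
        print(f"TMC: vehicle {vid} at {pos:.1f}")
        inboxes[vid].put(("advisory", 0.5))      # two-way interaction with traffic

if __name__ == "__main__":
    tmc_queue = mp.Queue()
    inboxes = {0: mp.Queue(), 1: mp.Queue()}
    cars = [mp.Process(target=vehicle, args=(v, inboxes[v], tmc_queue)) for v in inboxes]
    center = mp.Process(target=tmc, args=(tmc_queue, inboxes, 6))
    for p in cars + [center]:
        p.start()
    for p in cars + [center]:
        p.join()
```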
NASA Astrophysics Data System (ADS)
Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.
2016-12-01
In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on resulting estimates of unknown quantities of interest. One approach that is being increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often infeasible to carry out the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions, using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
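The emulation idea above, fitting a cheap statistical surrogate to a limited set of forward runs and predicting elsewhere with uncertainty, can be sketched in a few lines with scikit-learn. The "simulator" is a stand-in function, not NAME, and the Gaussian-process kernel choice is an illustrative assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulator(theta):
    """Stand-in for a forward run of a complex simulator."""
    return np.sin(3.0 * theta) + 0.5 * theta

theta_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)  # a few training runs
y_train = expensive_simulator(theta_train).ravel()

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
emulator.fit(theta_train, y_train)

theta_new = np.array([[1.3]])                # untried parameter value
mean, std = emulator.predict(theta_new, return_std=True)
print(f"emulated output: {mean[0]:.3f} +/- {std[0]:.3f}")
```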
Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios
Banta, Edward R.
2014-01-01
Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.
Run-up Variability due to Source Effects
NASA Astrophysics Data System (ADS)
Del Giudice, Tania; Zolezzi, Francesca; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.
2010-05-01
This paper investigates the variability of tsunami run-up at a specific location due to uncertainty in earthquake source parameters. It is important to quantify this 'inter-event' variability for probabilistic assessments of tsunami hazard. In principle, this aspect of variability could be studied by comparing field observations at a single location from a number of tsunamigenic events caused by the same source. As such an extensive dataset does not exist, we decided to study the inter-event variability through numerical modelling. We attempt to answer the question 'What is the potential variability of tsunami wave run-up at a specific site, for a given magnitude earthquake occurring at a known location?'. The uncertainty is expected to arise from the lack of knowledge regarding the specific details of the fault rupture 'source' parameters. The following steps were followed: the statistical distributions of the main earthquake source parameters affecting the tsunami height were established by studying fault plane solutions of known earthquakes; a case study based on a possible tsunami impact on the Egyptian coast was set up and simulated, varying the geometrical parameters of the source; simulation results were analyzed, deriving relationships between run-up height and source parameters; using the derived relationships, a Monte Carlo simulation was performed in order to create the dataset necessary to investigate the inter-event variability of the run-up height along the coast; and the inter-event variability of the run-up height along the coast was investigated. Given the distribution of source parameters and their variability, we studied how this variability propagates to the run-up height, using the Cornell 'Multi-grid coupled Tsunami Model' (COMCOT). The case study was based on the large thrust faulting offshore of the south-western Greek coast, thought to have been responsible for the infamous 1303 tsunami. Numerical modelling of the event was used to assess the impact on the North African coast. The effects of uncertainty in fault parameters were assessed by perturbing the base model and observing the variation in wave height along the coast. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7 E and 33.8 E. To assess the effects of fault parameter uncertainty, input model parameters were varied and the effects on run-up were analyzed. The simulations show that for a given point there are linear relationships between run-up and both fault dislocation and rupture length. A superposition analysis shows that a linear combination of the effects of the different source parameters leads to a good approximation of the simulated results. This relationship is then used as the basis for a Monte Carlo simulation. The Monte Carlo simulation was performed for 1600 scenarios at each of the 4020 points along the coast. The coefficient of variation (the ratio between the standard deviation of the results and the average of the run-up heights along the coast) ranges between 0.14 and 3.11, with an average value along the coast equal to 0.67. The coefficient of variation of normalized run-up has been compared with the standard deviation of spectral acceleration attenuation laws used for probabilistic seismic hazard assessment studies. These values have a similar meaning, and the uncertainty in the two cases is similar. The 'rule of thumb' relationship between mean and sigma can be expressed as μ + σ ≈ 2μ. The implication is that the uncertainty in run-up estimation should give a range of values within approximately two times the average. This uncertainty should be considered in tsunami hazard analysis, such as inundation and risk maps, evacuation plans, and the other related steps.
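The Monte Carlo step described above can be sketched as follows: sample the source parameters, evaluate the fitted linear response for run-up, and compute the coefficient of variation at a site. The distributions and slope coefficients below are invented for illustration, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1600                                            # scenarios, as in the study

slip = rng.normal(loc=8.0, scale=2.0, size=n)       # fault dislocation (m)
length = rng.normal(loc=120.0, scale=30.0, size=n)  # rupture length (km)

# Linear response surface for run-up at one coastal point (slopes are invented)
runup = 0.25 * slip + 0.01 * length

cov = runup.std() / runup.mean()                    # coefficient of variation
print(f"coefficient of variation of run-up at this site: {cov:.2f}")
```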
NASA Astrophysics Data System (ADS)
Chou, S. C.; Zolino, M. M.; Gomes, J. L.; Bustamante, J. F.; Lima-e-Silva, P. P.
2012-04-01
The Eta Model has been used operationally by CPTEC to produce weather forecasts over South America since 1997, and has gone through upgrades. In order to prepare the model for operational higher resolution forecasts, the model is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The Eta Model was configured with 2-km horizontal resolution and 50 layers. The Eta-2km is a second nesting: it is driven by the Eta-15km, which in turn is driven by ERA-Interim reanalyses. The model domain includes the two Brazilian cities of Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain and coastline. Mountains can rise up to about 700 m. The region suffers frequent events of floods and landslides. The objective of this work is to evaluate high resolution simulations of wind and temperature in this complex area. Verification of model runs uses observations taken from a nuclear power plant. Accurate near-surface wind direction and magnitude are needed for the plant emergency plan, and winds are highly sensitive to model spatial resolution and atmospheric stability. Verification of two cases during summer shows that the model has a clear diurnal-cycle signal for wind in the region. The area is characterized by weak winds, which makes the simulation more difficult. The simulated wind magnitude is about 1.5 m/s, which is close to observations of about 2 m/s; however, the observed change of wind direction of the sea breeze is fast, whereas it is slow in the simulations. Nighttime katabatic flow is captured by the simulations. Comparison against Eta-5km runs shows that the valley circulation is better described in the 2-km resolution run. Simulated temperatures follow closely the observed diurnal cycle. Experiments improving some surface conditions, such as the surface temperature and land cover, show reduced simulation error and an improved diurnal cycle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilke, Jeremiah J; Kenny, Joseph P.
2015-02-01
Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, a design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
Moyer, Douglas; Hyer, Kenneth
2003-01-01
Impairment of surface waters by fecal coliform bacteria is a water-quality issue of national scope and importance. Section 303(d) of the Clean Water Act requires that each State identify surface waters that do not meet applicable water-quality standards. In Virginia, more than 175 stream segments are on the 1998 Section 303(d) list of impaired waters because of violations of the water-quality standard for fecal coliform bacteria. A total maximum daily load (TMDL) will need to be developed by 2006 for each of these impaired streams and rivers by the Virginia Departments of Environmental Quality and Conservation and Recreation. A TMDL is a quantitative representation of the maximum load of a given water-quality constituent, from all point and nonpoint sources, that a stream can assimilate without violating the designated water-quality standard. Blacks Run, in Rockingham County, Virginia, is one of the stream segments listed by the State of Virginia as impaired by fecal coliform bacteria. Watershed modeling and bacterial source tracking were used to develop the technical components of the fecal coliform bacteria TMDL for Blacks Run. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate streamflow, fecal coliform concentrations, and source-specific fecal coliform loading in Blacks Run. Ribotyping, a bacterial source tracking technique, was used to identify the dominant sources of fecal coliform bacteria in the Blacks Run watershed. Ribotyping also was used to determine the relative contributions of specific sources to the observed fecal coliform load in Blacks Run. Data from the ribotyping analysis were incorporated into the calibration of the fecal coliform model. Study results provide information regarding the calibration of the streamflow and fecal coliform bacteria models and also identify the reductions in fecal coliform loads required to meet the TMDL for Blacks Run. The calibrated streamflow model simulated observed streamflow characteristics with respect to total annual runoff, seasonal runoff, average daily streamflow, and hourly stormflow. The calibrated fecal coliform model simulated the patterns and range of observed fecal coliform bacteria concentrations. Observed fecal coliform bacteria concentrations during low-flow periods ranged from 40 to 7,000 colonies per 100 milliliters, and peak concentrations during storm-flow periods ranged from 33,000 to 260,000 colonies per 100 milliliters. Simulated source-specific contributions of fecal coliform bacteria to the instream load were matched to the observed contributions from the dominant sources, which were cats, cattle, deer, dogs, ducks, geese, horses, humans, muskrats, poultry, raccoons, and sheep. According to model results, a 95-percent reduction in the current fecal coliform load delivered from the watershed to Blacks Run would result in compliance with the designated water-quality goals and associated TMDL.
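The compliance logic behind the quoted 95-percent figure can be sketched generically: scale the simulated loads (and hence concentrations) by a reduction factor and test against a water-quality criterion. The criterion and concentration values below are illustrative placeholders, not the calibrated Blacks Run model output.

```python
STANDARD = 400.0   # colonies per 100 mL; illustrative criterion, not Virginia's

def compliant(concentrations, reduction):
    """True if all concentrations, scaled by (1 - reduction), meet the standard."""
    return all(c * (1.0 - reduction) <= STANDARD for c in concentrations)

simulated = [500.0, 2000.0, 8000.0]   # hypothetical simulated concentrations
for r in (0.50, 0.90, 0.95):
    print(f"{int(r * 100)}% load reduction -> compliant: {compliant(simulated, r)}")
```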
Macroeconomics and oil-supply disruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hubbard, R.G.; Fry, R.C. Jr.
1981-04-01
Energy-economy interactions and domestic linkages have been combined in a system of models. Domestic economic aggregates are linked with a model of the world oil market by a core macroeconomic model with real and financial sectors. The model can be used to examine the policy ramifications of various short-run scenarios. Demand factors are not taken as exogenous to the world oil market, nor are oil prices taken as exogenous to the US economy. Simulations of the model have generated endogenous cycles in the world oil market, which then affect the US economy primarily through output and inflation channels. Policy simulation was centered around the short-run imposition of a disruption tariff. The disruption tariff exhibited at least some of the desirable features noted by its proponents, though it did not function as a shield against the short-run output loss forced by the disruption. One might also simulate the rebate of tariff revenues as a reduction in the social security payroll tax. Other possible simulations include the use of any of the fiscal and monetary instruments included in the model. The effectiveness of these other policy instruments will be examined in a later paper.
Large-eddy simulation of dust-uplift by a haboob density current
NASA Astrophysics Data System (ADS)
Huang, Qian; Marsham, John H.; Tian, Wenshou; Parker, Douglas J.; Garcia-Carreras, Luis
2018-04-01
Cold pool outflows have been shown from both observations and convection-permitting models to be a dominant source of dust emissions ("haboobs") in the summertime Sahel and Sahara, and to cause dust uplift over deserts across the world. In this paper, Met Office Large Eddy Model (LEM) simulations, which resolve the turbulence within the cold pools much better than previous studies of haboobs with convection-permitting models, are used to investigate the winds that uplift dust in cold pools, and the resultant dust transport. In order to simulate the cold pool outflow, an idealized cooling is added in the model during the first 2 h of the 5.7 h run time. Given the short duration of the runs, dust is treated as a passive tracer. Dust uplift largely occurs in the "head" of the density current, consistent with the few existing observations. In the modeled density current, dust is largely restricted to the lowest, coldest and well mixed layers of the cold pool outflow (below around 400 m), except above the "head" of the cold pool where some dust reaches 2.5 km. This rapid transport to above 2 km will contribute to long atmospheric lifetimes of large dust particles from haboobs. Decreasing the model horizontal grid-spacing from 1.0 km to 100 m resolves more turbulence, locally increasing winds, increasing mixing and reducing the propagation speed of the density current. Total accumulated dust uplift is approximately twice as large in the 1.0 km runs compared with the 100 m runs, suggesting that for studying haboobs in convection-permitting runs the representation of turbulence and mixing is significant. Simulations with surface sensible heat fluxes representative of those from a desert region during daytime show that increasing surface fluxes slows the density current due to increased mixing, but increases dust uplift rates due to increased downward transport of momentum to the surface.
How well do the GCMs replicate the historical precipitation variability in the Colorado River Basin?
NASA Astrophysics Data System (ADS)
Guentchev, G.; Barsugli, J. J.; Eischeid, J.; Raff, D. A.; Brekke, L.
2009-12-01
Observed precipitation variability measures are compared to measures obtained using the World Climate Research Programme (WCRP) Coupled Model Intercomparison Project (CMIP3) General Circulation Model (GCM) data from 36 model projections downscaled by Brekke et al. (2007) and 30 model projections downscaled by Jon Eischeid. Three groups of variability measures are considered in this historical period (1951-1999) comparison: a) basic variability measures, such as the standard deviation and interdecadal standard deviation; b) exceedance probability values, i.e., 10% (extreme wet years) and 90% (extreme dry years) exceedance probability values of series of n-year running mean annual amounts, where n = 1 to 12, and 10% exceedance probability values of annual maximum monthly precipitation (extreme wet months); and c) runs variability measures, e.g., the frequency of negative and positive runs of annual precipitation amounts and the total number of negative and positive runs. Two gridded precipitation data sets produced from observations are used: the Maurer et al. (2002) and the Daly et al. (1994) Precipitation Regression on Independent Slopes Method (PRISM) data sets. The data consist of monthly grid-point precipitation averaged on a United States Geological Survey (USGS) hydrological sub-region scale. The statistical significance of the obtained model minus observed measure differences is assessed using a block bootstrapping approach. The analyses were performed on annual, seasonal, and monthly scales. The results indicate that the interdecadal standard deviation is underestimated, in general, on all time scales by the downscaled model data. The differences are statistically significant at a 0.05 significance level for several Lower Colorado Basin sub-regions on annual and seasonal scales, and for several sub-regions located mostly in the Upper Colorado River Basin for the months of March, June, July and November. Although the models simulate drier extreme wet years, wetter extreme dry years and drier extreme wet months for the Upper Colorado basin, the differences are mostly not significant. Exceptions are the results for the extreme wet years for n=3 for sub-region White-Yampa and for n=6, 7, and 8 for sub-region Upper Colorado-Dolores, and for the extreme dry years for n=11 for sub-region Great Divide-Upper Green. None of the results for the sub-regions in the Lower Colorado Basin were significant. For most of the Upper Colorado sub-regions the models simulate a significantly lower frequency of negative and positive 4-6 year runs, while for several sub-regions a significantly higher frequency of 2-year negative runs is evident in the model versus Maurer data comparisons. The model projections versus PRISM data comparison reveals similar results for the negative runs, while for the positive runs the results indicate that the models simulate a higher frequency of the 2-6 year runs. The results for the Lower Colorado basin sub-regions are similar, in general, to those for the Upper Colorado sub-regions. The differences between the simulated and the observed total number of negative or positive runs were not significant for almost all of the sub-regions within the Colorado River Basin.
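To make the significance test concrete, the following is a minimal sketch of a moving-block bootstrap comparison of a variability measure between downscaled model output and observations. The names (block_length, n_boot) and the use of the standard deviation as the test statistic are illustrative assumptions, not details taken from the study.

    import numpy as np

    def moving_block_bootstrap(x, block_length, rng):
        # resample a series by concatenating randomly chosen overlapping blocks
        n = len(x)
        n_blocks = -(-n // block_length)          # ceiling division
        starts = rng.integers(0, n - block_length + 1, size=n_blocks)
        return np.concatenate([x[s:s + block_length] for s in starts])[:n]

    def block_bootstrap_test(model, obs, stat=np.std, block_length=5,
                             n_boot=2000, seed=0):
        # two-sided p-value for stat(model) - stat(obs), preserving autocorrelation
        rng = np.random.default_rng(seed)
        observed = stat(model) - stat(obs)
        diffs = np.array([stat(moving_block_bootstrap(model, block_length, rng)) -
                          stat(moving_block_bootstrap(obs, block_length, rng))
                          for _ in range(n_boot)])
        diffs -= diffs.mean()                     # center to represent the null
        return observed, float(np.mean(np.abs(diffs) >= abs(observed)))

Block resampling, rather than resampling individual years, keeps short runs of wet and dry years intact, which matters for the runs-based measures the study evaluates.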
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models are now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
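The decomposition idea lends itself to a compact illustration. The sketch below is our own reading of the general approach, not the tRIBS code: it accumulates unit work down a channel-network DAG (edges pointing downstream), places cuts once the accumulated work reaches roughly N/p, and labels every node by the first cut node (or the outlet) reached going downstream. All names are hypothetical.

    import networkx as nx

    def choose_cuts(G, outlet, n_parts):
        # accumulate unit work from headwaters downstream; cut at the target size
        target = G.number_of_nodes() / n_parts
        residual = {v: 1 for v in G}
        cuts = set()
        for v in nx.topological_sort(G):
            for u in G.predecessors(v):
                residual[v] += residual[u]
            if residual[v] >= target and v != outlet:
                cuts.add(v)
                residual[v] = 0              # work above the cut is assigned away
        return cuts

    def partition(G, outlet, n_parts):
        # label each node by the first cut node (or the outlet) downstream of it;
        # assumes each node has exactly one downstream neighbor, as in a river tree
        cuts = choose_cuts(G, outlet, n_parts)
        labels = {}
        for v in G:
            node = v
            while node not in cuts and node != outlet:
                node = next(iter(G.successors(node)))
            labels[v] = node
        return labels

The cut nodes are natural points for inter-processor exchange, since all hydrologic fluxes out of a sub-basin pass through its downstream cut.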
Aviation Safety Simulation Model
NASA Technical Reports Server (NTRS)
Houser, Scott; Yackovetsky, Robert (Technical Monitor)
2001-01-01
The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.
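The core check is a proximity test between sampled flight-path points and obstruction locations. The following is a minimal sketch of that idea with hypothetical data; the model's actual terrain representation and distance rules are not documented in this abstract.

    import math

    def violations(path, obstructions, min_dist):
        # report path points closer than min_dist to any obstruction;
        # both are lists of (x, y, z) coordinates in the same units
        hits = []
        for i, p in enumerate(path):
            for q in obstructions:
                d = math.dist(p, q)
                if d < min_dist:
                    hits.append((i, q, round(d, 1)))
        return hits

    # a descending path passing near a hypothetical 100 m tower
    path = [(0, 0, 500), (1000, 0, 300), (2000, 0, 120)]
    towers = [(2000, 50, 100)]
    print(violations(path, towers, min_dist=150.0))   # flags the final point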
Preventing Pirates from Boarding Commercial Vessels - A Systems Approach
2014-09-01
A model of the operational environment was developed in MATLAB to run simulations designed to estimate the relative effectiveness of each assessed countermeasure. A cost analysis was also performed. The project indicated that the P-Trap countermeasure, designed to entangle the pirate's propellers with thin lines, is both effective and economically viable.
Convective aggregation in realistic convective-scale simulations
NASA Astrophysics Data System (ADS)
Holloway, Christopher E.
2017-06-01
To investigate the real-world relevance of idealized-model convective self-aggregation, five 15 day cases of real organized convection in the tropics are simulated. These include multiple simulations of each case to test sensitivities of the convective organization and mean states to interactive radiation, interactive surface fluxes, and evaporation of rain. These simulations are compared to self-aggregation seen in the same model configured to run in idealized radiative-convective equilibrium. Analysis of the budget of the spatial variance of column-integrated frozen moist static energy shows that control runs have significant positive contributions to organization from radiation and negative contributions from surface fluxes and transport, similar to idealized runs once they become aggregated. Despite identical lateral boundary conditions for all experiments in each case, systematic differences in mean column water vapor (CWV), CWV distribution shape, and CWV autocorrelation length scale are found between the different sensitivity runs, particularly for those without interactive radiation, showing that there are at least some similarities in sensitivities to these feedbacks in both idealized and realistic simulations (although the organization of precipitation shows less sensitivity to interactive radiation). The magnitudes and signs of these systematic differences are consistent with a rough equilibrium between (1) equalization due to advection from the lateral boundaries and (2) disaggregation due to the absence of interactive radiation, implying disaggregation rates comparable to those in idealized runs with aggregated initial conditions and noninteractive radiation. This points to a plausible similarity in the way that radiation feedbacks maintain aggregated convection in both idealized simulations and the real world.
A School Finance Computer Simulation Model
ERIC Educational Resources Information Center
Boardman, Gerald R.
1974-01-01
Presents a description of the computer simulation model developed by the National Educational Finance Project for use by States in planning and evaluating alternative approaches for State support programs. Provides a general introduction to the model, a program operation overview, a sample run, and some conclusions. (Author/WM)
Feedbacks between Air Pollution and Weather, Part 1: Effects on Weather
The meteorological predictions of fully coupled air-quality models running in “feedback” versus “nofeedback” simulations were compared against each other as part of Phase 2 of the Air Quality Model Evaluation International Initiative. The model simulations included a “no-feedback...
NASA Astrophysics Data System (ADS)
Shin, Sun-Hee; Kim, Ok-Yeon; Kim, Dongmin; Lee, Myong-In
2017-07-01
Using 32 CMIP5 (Coupled Model Intercomparison Project Phase 5) models, this study examines the fidelity of the simulated cloud amounts and their radiative effects (CREs) in the historical run driven by observed external radiative forcing for 1850-2005, and their future changes in the RCP (Representative Concentration Pathway) 4.5 scenario runs for 2006-2100. Validation metrics for the historical run are designed to examine the accuracy of the representation of spatial patterns for the climatological mean, and the annual and interannual variations of clouds and CREs. The models show large spread in the simulation of cloud amounts, specifically in the low cloud amount. The observed relationship between cloud amount and the controlling large-scale environment is also reproduced diversely by the various models. Based on the validation metrics, four models—ACCESS1.0, ACCESS1.3, HadGEM2-CC, and HadGEM2-ES—are selected as the best models, and the average of these four models performs more skillfully than the multimodel ensemble average. All models project global-mean SST warming as greenhouse gases increase, but the magnitude varies across the simulations between 1 and 2 K, which is largely attributable to differences in the change of cloud amount and distribution. The models that simulate more SST warming show a greater increase in the net CRE due to reduced low cloud and increased incoming shortwave radiation, particularly over the regions of the marine boundary layer in the subtropics. The selected best-performing models project a significant reduction in global-mean cloud amount of about -0.99% K-1 and a net radiative warming of 0.46 W m-2 K-1, suggesting a role of positive feedback in global warming.
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
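The variance-reduction step can be illustrated generically. The sketch below pairs each uniform draw u with its antithetic mirror 1 - u, which induces negative correlation for a monotone response and so shrinks the standard error of the pair average; the response function is a stand-in, not the UKPDS 68 equations.

    import numpy as np

    rng = np.random.default_rng(1)

    def model(u):
        # stand-in for one simulated outcome driven by a uniform draw
        return np.exp(1.5 * u)

    n = 10_000
    plain = model(rng.random(n))                 # n independent draws
    u = rng.random(n // 2)                       # antithetic: half as many draws
    paired = 0.5 * (model(u) + model(1.0 - u))   # each draw paired with its mirror

    print("plain     ", plain.mean(), plain.std(ddof=1) / np.sqrt(n))
    print("antithetic", paired.mean(), paired.std(ddof=1) / np.sqrt(n // 2))

The antithetic estimator reaches comparable precision with roughly half the model evaluations, which is the mechanism behind the ~53% reduction in replications reported above.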
NASA Astrophysics Data System (ADS)
Schalge, Bernd; Rihani, Jehan; Haese, Barbara; Baroni, Gabriele; Erdal, Daniel; Haefliger, Vincent; Lange, Natascha; Neuweiler, Insa; Hendricks-Franssen, Harrie-Jan; Geppert, Gernot; Ament, Felix; Kollet, Stefan; Cirpka, Olaf; Saavedra, Pablo; Han, Xujun; Attinger, Sabine; Kunstmann, Harald; Vereecken, Harry; Simmer, Clemens
2017-04-01
Currently, an integrated approach to simulating the earth system is evolving where several compartment models are coupled to achieve the best possible physically consistent representation. We used the model TerrSysMP, which fully couples subsurface, land surface and atmosphere, in a synthetic study that mimicked the Neckar catchment in Southern Germany. A virtual reality run was made at a high resolution of 400 m for the land surface and subsurface and 1.1 km for the atmosphere. Ensemble runs at a lower resolution (800 m for the land surface and subsurface) were also made. The ensemble was generated by systematically varying soil and vegetation parameters and lateral atmospheric forcing among the different ensemble members. It was found that for some variables and some time periods the ensemble runs deviated largely from the virtual reality reference run (the reference run was not covered by the ensemble), which could be related to the different model resolutions. This was, for example, the case for river discharge in the summer. We also analyzed the spread of model states as a function of time and found clear relations between the spread and the time of year and weather conditions. For example, the ensemble spread of latent heat flux related to uncertain soil parameters was larger under dry soil conditions than under wet soil conditions. Another example is that the ensemble spread of atmospheric states was more influenced by uncertain soil and vegetation parameters under conditions of low air pressure gradients (in summer) than under conditions with larger air pressure gradients in winter. The analysis of the ensemble of fully coupled model simulations provided valuable insights into the dynamics of land-atmosphere feedbacks, which we will further highlight in the presentation.
Identification of stochastic interactions in nonlinear models of structural mechanics
NASA Astrophysics Data System (ADS)
Kala, Zdeněk
2017-07-01
In the paper, a polynomial approximation is presented by which the Sobol sensitivity analysis can be evaluated with all sensitivity indices. The nonlinear FEM model is approximated. The input space is mapped using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of simulation runs of the Latin Hypercube Sampling method can be applied. The presented method also makes it possible to evaluate higher-order sensitivity indices, which could not be identified directly from the nonlinear FEM model.
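A minimal sketch of the workflow described, an LHS design, a polynomial surrogate, and Sobol indices estimated on the surrogate, is given below. The FEM response is replaced by a cheap stand-in, and the Saltelli pick-and-freeze estimator is a standard choice rather than necessarily the paper's.

    import numpy as np
    from scipy.stats import qmc
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    def fem(x):
        # cheap stand-in for the expensive nonlinear FEM run
        return x[:, 0]**2 + 0.5 * x[:, 0] * x[:, 1] + 0.1 * x[:, 2]

    d = 3
    X = qmc.LatinHypercube(d=d, seed=0).random(200)   # LHS design on [0, 1]^d
    surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surrogate.fit(X, fem(X))

    # first-order Sobol indices evaluated on the cheap surrogate
    rng = np.random.default_rng(0)
    A, B = rng.random((100_000, d)), rng.random((100_000, d))
    yA, yB = surrogate.predict(A), surrogate.predict(B)
    V = np.var(np.concatenate([yA, yB]))
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                           # swap in column i from B
        print(f"S_{i+1} ~ {np.mean(yB * (surrogate.predict(ABi) - yA)) / V:.3f}")

Because the surrogate is cheap, the Monte Carlo estimates of the indices can use as many samples as needed, exactly the point of approximating the FEM model first.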
1-D blood flow modelling in a running human body.
Szabó, Viktor; Halász, Gábor
2017-07-01
In this paper an attempt was made to simulate blood flow in a mobile human arterial network, specifically, in a running human subject. In order to simulate the effect of motion, a previously published immobile 1-D model was modified by including an inertial force term in the momentum equation. To calculate the inertial force, gait analysis was performed at different levels of speed. Our results show that motion has a significant effect on the amplitudes of the blood pressure and flow rate, but the average values are not affected significantly.
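For orientation, a standard area-flow (A, Q) form of such a momentum equation with an added inertial forcing term might read as follows. This is a hedged reconstruction in our own notation (alpha the momentum correction coefficient, K_R the viscous resistance, a_ext(t) the axial acceleration of the vessel segment obtained from gait analysis); the paper's exact formulation may differ:

    \frac{\partial Q}{\partial t}
    + \frac{\partial}{\partial x}\!\left(\alpha \frac{Q^{2}}{A}\right)
    + \frac{A}{\rho}\frac{\partial p}{\partial x}
    = -K_R \frac{Q}{A} - A\, a_{\mathrm{ext}}(t)

The last term is the only change relative to the immobile model: it vanishes when the subject stands still and oscillates with the gait cycle when running.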
Weak simulated extratropical responses to complete tropical deforestation
Findell, K.L.; Knutson, T.R.; Milly, P.C.D.
2006-01-01
The Geophysical Fluid Dynamics Laboratory atmosphere-land model version 2 (AM2/LM2) coupled to a 50-m-thick slab ocean model has been used to investigate remote responses to tropical deforestation. Magnitudes and significance of differences between a control run and a deforested run are assessed through comparisons of 50-yr time series, accounting for autocorrelation and field significance. Complete conversion of the broadleaf evergreen forests of South America, central Africa, and the islands of Oceania to grasslands leads to highly significant local responses. In addition, a broad but mild warming is seen throughout the tropical troposphere (<0.2 °C between 700 and 150 mb), significant in northern spring and summer. However, the simulation results show very little statistically significant response beyond the Tropics. There are no significant differences in any hydroclimatic variables (e.g., precipitation, soil moisture, evaporation) in either the northern or the southern extratropics. Small but statistically significant local differences in some geopotential height and wind fields are present in the southeastern Pacific Ocean. Use of the same statistical tests on two 50-yr segments of the control run show that the small but significant extratropical differences between the deforested run and the control run are similar in magnitude and area to the differences between nonoverlapping segments of the control run. These simulations suggest that extratropical responses to complete tropical deforestation are unlikely to be distinguishable from natural climate variability.
Hostetler, S.W.; Alder, J.R.; Allan, A.M.
2011-01-01
We have completed an array of high-resolution simulations of present and future climate over Western North America (WNA) and Eastern North America (ENA) by dynamically downscaling global climate simulations using a regional climate model, RegCM3. The simulations are intended to provide long time series of internally consistent surface and atmospheric variables for use in climate-related research. In addition to providing high-resolution weather and climate data for the past, present, and future, we have developed an integrated data flow and methodology for processing, summarizing, viewing, and delivering the climate datasets to a wide range of potential users. Our simulations were run over 50- and 15-kilometer model grids in an attempt to capture more of the climatic detail associated with processes such as topographic forcing than can be captured by general circulation models (GCMs). The simulations were run using output from four GCMs. All simulations span the present (for example, 1968-1999), common periods of the future (2040-2069), and two simulations continuously cover 2010-2099. The trace gas concentrations in our simulations were the same as those of the GCMs: the IPCC 20th century time series for 1968-1999 and the A2 time series for simulations of the future. We demonstrate that RegCM3 is capable of producing present day annual and seasonal climatologies of air temperature and precipitation that are in good agreement with observations. Important features of the high-resolution climatology of temperature, precipitation, snow water equivalent (SWE), and soil moisture are consistently reproduced in all model runs over WNA and ENA. The simulations provide a potential range of future climate change for selected decades and display common patterns of the direction and magnitude of changes. As expected, there are some model to model differences that limit interpretability and give rise to uncertainties. Here, we provide background information about the GCMs and the RegCM3, a basic evaluation of the model output and examples of simulated future climate. We also provide information needed to access the web applications for visualizing and downloading the data, and give complete metadata that describe the variables in the datasets.
The natural oscillation of two types of ENSO events based on analyses of CMIP5 model control runs
NASA Astrophysics Data System (ADS)
Xu, Kang; Su, Jingzhi; Zhu, Congwen
2014-07-01
The eastern- and central-Pacific El Niño-Southern Oscillation (EP- and CP-ENSO) have been found to be dominant in the tropical Pacific Ocean, and are characterized by interannual and decadal oscillation, respectively. In the present study, we defined the EP- and CP-ENSO modes by singular value decomposition (SVD) between SST and sea level pressure (SLP) anomalous fields. We evaluated the natural features of these two types of ENSO modes as simulated by the pre-industrial control runs of 20 models involved in phase five of the Coupled Model Intercomparison Project (CMIP5). The results suggested that all the models show good skill in simulating the SST and SLP anomaly dipolar structures for the EP-ENSO mode, but only 12 exhibit good performance in simulating the tripolar CP-ENSO modes. Wavelet analysis suggested that the ensemble principal components in these 12 models exhibit an interannual and multi-decadal oscillation related to the EP- and CP-ENSO, respectively. Since there are no changes in external forcing in the pre-industrial control runs, such a result implies that the decadal oscillation of CP-ENSO is possibly a result of natural climate variability rather than external forcing.
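The mode definition by SVD of coupled anomaly fields (often called maximum covariance analysis) is easy to sketch. The arrays below are random placeholders standing in for gridded SST and SLP anomalies; all variable names are ours.

    import numpy as np

    rng = np.random.default_rng(0)
    t, n_sst, n_slp = 600, 500, 400          # months x grid points (placeholders)
    sst = rng.standard_normal((t, n_sst))
    slp = rng.standard_normal((t, n_slp))

    sst -= sst.mean(axis=0)                  # anomalies: remove time means
    slp -= slp.mean(axis=0)
    C = sst.T @ slp / (t - 1)                # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)

    sst_pattern, slp_pattern = U[:, 0], Vt[0, :]   # leading coupled mode
    pc_sst = sst @ sst_pattern               # expansion coefficient time series
    pc_slp = slp @ slp_pattern
    frac = s[0]**2 / np.sum(s**2)            # squared covariance fraction explained

The expansion coefficient time series are what would then be wavelet-analyzed for the interannual (EP) and multi-decadal (CP) oscillations described above.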
Understanding climate variability and global climate change using high-resolution GCM simulations
NASA Astrophysics Data System (ADS)
Feng, Xuelei
In this study, three climate processes are examined using long-term simulations from multiple climate models with increasing horizontal resolutions. These simulations include the European Center for Medium-range Weather Forecasts (ECMWF) atmospheric general circulation model (AGCM) runs forced with observed sea surface temperatures (SST) (the Athena runs) and a set of coupled ocean-atmosphere seasonal hindcasts (the Minerva runs). Both sets of runs use different AGCM resolutions, the highest at 16 km. A pair of Community Climate System Model (CCSM) simulations with ocean general circulation model (OGCM) resolutions at 100 and 10 km are also examined. The higher resolution CCSM run fully resolves oceanic mesoscale eddies. The resolution influence on the precipitation climatology over the Gulf Stream (GS) region is first investigated. In the Athena simulations, the resolution increase moderately enhances mean GS precipitation in both large-scale and sub-scale rainfall in the North Atlantic, with the latter more tightly confined near the oceanic front. However, the non-eddy-resolving OGCM in the Minerva runs simulates a weaker oceanic front and weakens the mean GS precipitation response. On the other hand, an increase in CCSM oceanic resolution from the non-eddy-resolving to the eddy-resolving regime greatly improves the model's GS precipitation climatology, resulting in both stronger intensity and more realistic structure. Further analyses show that the improvement of the GS precipitation climatology due to resolution increases is caused by the enhanced atmospheric response to an increased SST gradient near the oceanic front, which leads to stronger surface convergence and upper-level divergence. Another focus of this study is on the global warming impacts on precipitation characteristic changes using the high-resolution Athena simulations under SST forcing from the observations and a global warming scenario. As a comparison, results from the coarse resolution simulation are also analyzed to examine the dependence on resolution. The increasing rates of globally averaged precipitation amount for the high and low resolution simulations are 1.7% K-1 and 1.8% K-1, respectively. The sensitivities for heavy, moderate, light and drizzle rain are 6.8, -1.2, 0.0, and 0.2% K-1 for the low and 6.3, -1.5, 0.4, and -0.2% K-1 for the high resolution simulations. The number of rainy days decreases in a warming scenario, by 3.4 and 4.2 days per year, respectively. The most sensitive response of 6.3-6.8% K-1 for the heavy rain approaches the 7% K-1 Clausius-Clapeyron scaling limit. During the twenty-first century simulation, the increases in precipitation are larger over high latitudes and wet regions in the low and mid-latitudes. Over dry regions, such as the subtropics, the precipitation amount and frequency decrease. There is a higher occurrence of light and heavy rain from the tropics to mid-latitudes at the expense of a decrease in the frequency of moderate rain. In the third part, the inter-annual variability of the Northern Hemisphere storm tracks is examined. In the Athena simulations, the leading modes of the observed storm track variability are reproduced realistically by all runs. In general, the fluctuations of the model storm tracks in the North Pacific and Atlantic basins are largely independent of each other. Within each basin, the variations are characterized by the intensity change near the climatological center and the meridional shift of the storm track location.
These two modes are associated with major teleconnection patterns of the low frequency atmospheric variations. These model results are not sensitive to resolution. Using the Minerva hindcast initialized in November, it is shown that a portion of the winter (December-January) storm track variability is predictable, mainly due to the influences of the atmospheric wave trains induced by the El Nino and Southern Oscillation.
Reduced-Order Models Based on POD-Tpwl for Compositional Subsurface Flow Simulation
NASA Astrophysics Data System (ADS)
Durlofsky, L. J.; He, J.; Jin, L. Z.
2014-12-01
A reduced-order modeling procedure applicable for compositional subsurface flow simulation will be described and applied. The technique combines trajectory piecewise linearization (TPWL) and proper orthogonal decomposition (POD) to provide highly efficient surrogate models. The method is based on a molar formulation (which uses pressure and overall component mole fractions as the primary variables) and is applicable for two-phase, multicomponent systems. The POD-TPWL procedure expresses new solutions in terms of linearizations around solution states generated and saved during previously simulated 'training' runs. High-dimensional states are projected into a low-dimensional subspace using POD. Thus, at each time step, only a low-dimensional linear system needs to be solved. Results will be presented for heterogeneous three-dimensional simulation models involving CO2 injection. Both enhanced oil recovery and carbon storage applications (with horizontal CO2 injectors) will be considered. Reasonably close agreement between full-order reference solutions and compositional POD-TPWL simulations will be demonstrated for 'test' runs in which the well controls differ from those used for training. Construction of the POD-TPWL model requires preprocessing overhead computations equivalent to about 3-4 full-order runs. Runtime speedups using POD-TPWL are, however, very significant - typically O(100-1000). The use of POD-TPWL for well control optimization will also be illustrated. For this application, some amount of retraining during the course of the optimization is required, which leads to smaller, but still significant, speedup factors.
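In generic notation (ours, not necessarily the authors'), the combination can be written compactly: TPWL expresses the next state by linearizing around the closest saved training state i, and POD projects that update onto a low-dimensional basis Phi:

    x^{n+1} \approx x_i^{n+1} + E_i\,(x^n - x_i^n) + B_i\,(u^{n+1} - u_i^{n+1}),
    \qquad x \approx \Phi z
    \;\Longrightarrow\;
    z^{n+1} \approx z_i^{n+1} + \Phi^{T} E_i \Phi\,(z^n - z_i^n)
    + \Phi^{T} B_i\,(u^{n+1} - u_i^{n+1})

Here E_i and B_i are Jacobians saved during the training runs and u are the well controls; each time step of the surrogate then requires only a low-dimensional linear solve, which is the source of the O(100-1000) speedups quoted above.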
Automated Knowledge Discovery From Simulators
NASA Technical Reports Server (NTRS)
Burl, Michael; DeCoste, Dennis; Mazzoni, Dominic; Scharenbroich, Lucas; Enke, Brian; Merline, William
2007-01-01
A computational method, SimLearn, has been devised to facilitate efficient knowledge discovery from simulators. Simulators are complex computer programs used in science and engineering to model diverse phenomena such as fluid flow, gravitational interactions, coupled mechanical systems, and nuclear, chemical, and biological processes. SimLearn uses active-learning techniques to efficiently address the "landscape characterization problem." In particular, SimLearn tries to determine which regions in "input space" lead to a given output from the simulator, where "input space" refers to an abstraction of all the variables going into the simulator, e.g., initial conditions, parameters, and interaction equations. Landscape characterization can be viewed as an attempt to invert the forward mapping of the simulator and recover the inputs that produce a particular output. Given that a single simulation run can take days or weeks to complete even on a large computing cluster, SimLearn attempts to reduce costs by reducing the number of simulations needed to effect discoveries. Unlike conventional data-mining methods that are applied to static predefined datasets, SimLearn involves an iterative process in which a most informative dataset is constructed dynamically by using the simulator as an oracle. On each iteration, the algorithm models the knowledge it has gained through previous simulation trials and then chooses which simulation trials to run next. Running these trials through the simulator produces new data in the form of input-output pairs. The overall process is embodied in an algorithm that combines support vector machines (SVMs) with active learning. SVMs use learning from examples (the examples are the input-output pairs generated by running the simulator) and a principle called maximum margin to derive predictors that generalize well to new inputs. In SimLearn, the SVM plays the role of modeling the knowledge that has been gained through previous simulation trials. Active learning is used to determine which new input points would be most informative if their output were known. The selected input points are run through the simulator to generate new information that can be used to refine the SVM. The process is then repeated. SimLearn carefully balances exploration (semi-randomly searching around the input space) versus exploitation (using the current state of knowledge to conduct a tightly focused search). During each iteration, SimLearn uses not one, but an ensemble of SVMs. Each SVM in the ensemble is characterized by different hyper-parameters that control various aspects of the learned predictor - for example, whether the predictor is constrained to be very smooth (nearby points in input space lead to similar output predictions) or whether the predictor is allowed to be "bumpy." The various SVMs will have different preferences about which input points they would like to run through the simulator next. SimLearn includes a formal mechanism for balancing the ensemble SVM preferences so that a single choice can be made for the next set of trials.
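One SimLearn-style iteration can be sketched with off-the-shelf tools: fit an ensemble of SVMs with different hyper-parameters, query the candidate points where the members disagree most, and run only those through the simulator. The simulator below is a trivial stand-in, and the hyper-parameter grid and query rule are illustrative assumptions, not NASA's implementation.

    import numpy as np
    from sklearn.svm import SVC

    def simulator(X):
        # stand-in oracle: which inputs produce the output of interest
        return (X[:, 0]**2 + X[:, 1]**2 < 0.5).astype(int)

    rng = np.random.default_rng(0)
    X_train = rng.uniform(-1, 1, size=(20, 2))        # a few seed trials
    y_train = simulator(X_train)

    for _ in range(10):
        # ensemble with different smoothness preferences (C, gamma)
        ensemble = [SVC(C=c, gamma=g).fit(X_train, y_train)
                    for c in (0.5, 5.0) for g in (0.5, 2.0)]
        pool = rng.uniform(-1, 1, size=(2000, 2))     # candidate inputs
        votes = np.mean([m.predict(pool) for m in ensemble], axis=0)
        query = pool[np.argsort(np.abs(votes - 0.5))[:5]]   # maximal disagreement
        X_train = np.vstack([X_train, query])         # run trials, then retrain
        y_train = np.concatenate([y_train, simulator(query)])

Querying where the ensemble disagrees is the exploitation half of the balance described above; the random candidate pool supplies the exploration half.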
John B Kim; Erwan Monier; Brent Sohngen; G Stephen Pitts; Ray Drapek; James McFarland; Sara Ohrel; Jefferson Cole
2016-01-01
We analyze a set of simulations to assess the impact of climate change on global forests where MC2 dynamic global vegetation model (DGVM) was run with climate simulations from the MIT Integrated Global System Model-Community Atmosphere Model (IGSM-CAM) modeling framework. The core study relies on an ensemble of climate simulations under two emissions scenarios: a...
Simulating three dimensional wave run-up over breakwaters covered by antifer units
NASA Astrophysics Data System (ADS)
Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader
2014-06-01
The paper presents the numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble mound breakwaters. The results also showed that the placement pattern of the antifer units had a great impact on wave run-up values: changing the placement pattern from regular to double pyramid reduced the wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer, and reduced wave run-up due to inflow into the armour and stone layer.
The Role of Sea Ice in 2 x CO2 Climate Model Sensitivity. Part 2: Hemispheric Dependencies
NASA Technical Reports Server (NTRS)
Rind, D.; Healy, R.; Parkinson, C.; Martinson, D.
1997-01-01
How sensitive are doubled CO2 simulations to GCM control-run sea ice thickness and extent? This issue is examined in a series of 10 control-run simulations with different sea ice and corresponding doubled CO2 simulations. Results show that with increased control-run sea ice coverage in the Southern Hemisphere, temperature sensitivity with climate change is enhanced, while (reasonable) variations in control-run sea ice thickness have little effect on temperature sensitivity. In the Northern Hemisphere the situation is reversed: sea ice thickness is the key parameter, while (reasonable) variations in control-run sea ice coverage are of less importance. In both cases, the quantity of sea ice that can be removed in the warmer climate is the determining factor. Overall, the Southern Hemisphere sea ice coverage change had a larger impact on global temperature, because Northern Hemisphere sea ice was sufficiently thick to limit its response to doubled CO2, and sea ice changes generally occurred at higher latitudes, reducing the sea ice-albedo feedback. In both these experiments and earlier ones in which sea ice was not allowed to change, the model displayed a sensitivity of -0.02 °C of global warming per percent change in Southern Hemisphere sea ice coverage.
Simulation of a master-slave event set processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comfort, J.C.
1984-03-01
Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer, in which all non-event set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed, and was run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant parameters of the system (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulation run times for a one-processor system was used to assist in the validation of the simulation.
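For context, the serial data structure being offloaded here is just a time-ordered event set. A minimal sketch of its two core operations, with hypothetical event payloads, follows; the master-slave hardware replaces exactly these operations with parallel ones.

    import heapq
    import itertools

    class EventSet:
        # minimal event set: schedule and extract-next in simulated-time order
        def __init__(self):
            self._heap = []
            self._tie = itertools.count()     # deterministic tie-breaking

        def schedule(self, time, event):
            heapq.heappush(self._heap, (time, next(self._tie), event))

        def next_event(self):
            time, _, event = heapq.heappop(self._heap)
            return time, event

    es = EventSet()
    es.schedule(12.0, "arrival")
    es.schedule(3.5, "departure")
    print(es.next_event())                    # (3.5, 'departure')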
Platform-Independence and Scheduling In a Multi-Threaded Real-Time Simulation
NASA Technical Reports Server (NTRS)
Sugden, Paul P.; Rau, Melissa A.; Kenney, P. Sean
2001-01-01
Aviation research often relies on real-time, pilot-in-the-loop flight simulation as a means to develop new flight software, flight hardware, or pilot procedures. Often these simulations become so complex that a single processor is incapable of performing the necessary computations within a fixed time-step. Threads are an elegant means to distribute the computational work-load when running on a symmetric multi-processor machine. However, programming with threads often requires operating system specific calls that reduce code portability and maintainability. While a multi-threaded simulation allows a significant increase in the simulation complexity, it also increases the workload of a simulation operator by requiring that the operator determine which models run on which thread. To address these concerns an object-oriented design was implemented in the NASA Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. The design provides a portable and maintainable means to use threads and also provides a mechanism to automatically load balance the simulation models.
Global Reference Atmosphere Model (GRAM)
NASA Technical Reports Server (NTRS)
Woodrum, A. W.
1989-01-01
The GRAM series is a four-dimensional atmospheric model validated by years of data. The original GRAM program is still available. More current are GRAM 86, which includes atmospheric data from 1986 and runs on the DEC VAX, and GRAM 88, which runs on the IBM 3084. The program generates altitude profiles of atmospheric parameters along any simulated trajectory through the atmosphere, and is also useful for global circulation and diffusion studies.
Humans running in place on water at simulated reduced gravity.
Minetti, Alberto E; Ivanenko, Yuri P; Cappellini, Germana; Dominici, Nadia; Lacquaniti, Francesco
2012-01-01
On Earth only a few legged species, such as water strider insects, some aquatic birds and lizards, can run on water. For most other species, including humans, this is precluded by body size and proportions, lack of appropriate appendages, and limited muscle power. However, if gravity is reduced to less than Earth's gravity, running on water should require less muscle power. Here we use a hydrodynamic model to predict the gravity levels at which humans should be able to run on water. We test these predictions in the laboratory using a reduced gravity simulator. We adapted a model equation, previously used by Glasheen and McMahon to explain the dynamics of the basilisk lizard, to predict the body mass, stride frequency and gravity necessary for a person to run on water. Progressive body-weight unloading of a person running in place on a wading pool confirmed the theoretical predictions that a person could run on water, at lunar (or lower) gravity levels, using relatively small rigid fins. Three-dimensional motion capture of reflective markers on major joint centers showed that humans, similarly to the basilisk lizard and the western grebe, keep the head-trunk segment at a nearly constant height, despite the high stride frequency and the intensive locomotor effort. Trunk stabilization at a nearly constant height differentiates running on water from other, more usual human gaits. The results showed that a hydrodynamic model of lizards running on water can also be applied to humans, despite the enormous difference in body size and morphology.
Can High-resolution WRF Simulations Be Used for Short-term Forecasting of Lightning?
NASA Technical Reports Server (NTRS)
Goodman, S. J.; Lapenta, W.; McCaul, E. W., Jr.; LaCasse, K.; Petersen, W.
2006-01-01
A number of research teams have begun to make quasi-operational forecast simulations at high resolution with models such as the Weather Research and Forecast (WRF) model. These model runs have used horizontal meshes of 2-4 km grid spacing, and thus resolved convective storms explicitly. In the light of recent global satellite-based observational studies that reveal robust relationships between total lightning flash rates and integrated amounts of precipitation-size ice hydrometeors in storms, it is natural to inquire about the capabilities of these convection-resolving models in representing the ice hydrometeor fields faithfully. If they do, this might make operational short-term forecasts of lightning activity feasible. We examine high-resolution WRF simulations from several Southeastern cases for which either NLDN or LMA lightning data were available. All the WRF runs use a standard microphysics package that depicts only three ice species, cloud ice, snow and graupel. The realism of the WRF simulations is examined by comparisons with both lightning and radar observations and with additional even higher-resolution cloud-resolving model runs. Preliminary findings are encouraging in that they suggest that WRF often makes convective storms of the proper size in approximately the right location, but they also indicate that higher resolution and better hydrometeor microphysics would be helpful in improving the realism of the updraft strengths, reflectivity and ice hydrometeor fields.
NASA Astrophysics Data System (ADS)
KIM, J.; Smith, M. B.; Koren, V.; Salas, F.; Cui, Z.; Johnson, D.
2017-12-01
The National Oceanic and Atmospheric Administration (NOAA)-National Weather Service (NWS) developed the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) framework as an initial step towards spatially distributed modeling at River Forecast Centers (RFCs). Recently, the NOAA/NWS worked with the National Center for Atmospheric Research (NCAR) to implement the National Water Model (NWM) for nationally-consistent water resources prediction. The NWM is based on the WRF-Hydro framework and is run at a 1 km spatial resolution and 1-hour time step over the contiguous United States (CONUS) and contributing areas in Canada and Mexico. In this study, we compare streamflow simulations from HL-RDHM and WRF-Hydro to observations from 279 USGS stations. For the streamflow simulations, HL-RDHM is run on 4 km grids with a temporal resolution of 1 hour for a 5-year period (Water Years 2008-2012), using a priori parameters provided by NOAA-NWS. The WRF-Hydro streamflow simulations for the same time period are extracted from NCAR's 23-year retrospective run of the NWM (version 1.0) over CONUS based on 1 km grids. We chose 279 USGS stations that are relatively less affected by dams or reservoirs, in the domains of six different RFCs. We use the daily average values of simulations and observations for the convenience of comparison. The main purpose of this research is to evaluate how HL-RDHM and WRF-Hydro perform at USGS gauge stations. We compare daily time series of observations and both simulations, and calculate error values using a variety of error functions. Using these plots and error values, we evaluate the performance of the HL-RDHM and WRF-Hydro models. Our results show a mix of model performance across geographic regions.
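The abstract does not name its error functions; two standard choices for daily streamflow, RMSE and the Nash-Sutcliffe efficiency, are sketched below with made-up gauge values for illustration.

    import numpy as np

    def rmse(sim, obs):
        return float(np.sqrt(np.mean((sim - obs) ** 2)))

    def nse(sim, obs):
        # Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than mean(obs)
        return float(1.0 - np.sum((sim - obs) ** 2)
                     / np.sum((obs - obs.mean()) ** 2))

    # daily mean flows at one hypothetical USGS gauge (m^3/s)
    obs = np.array([10.0, 12.0, 30.0, 18.0, 11.0])
    sim = np.array([ 9.0, 14.0, 26.0, 20.0, 10.0])
    print(f"RMSE = {rmse(sim, obs):.2f}, NSE = {nse(sim, obs):.3f}")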
Automatic mathematical modeling for real time simulation system
NASA Technical Reports Server (NTRS)
Wang, Caroline; Purinton, Steve
1988-01-01
A methodology for automatic mathematical modeling and the generation of simulation models is described. The models are verified by running in a test environment using standard profiles, with the results compared against known results. The major objective is to create a user-friendly environment for engineers to design, maintain, and verify their models, and to automatically convert the mathematical model into conventional code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine Simulation. It is written in LISP and MACSYMA and runs on a Symbolics 3670 Lisp Machine. The program provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. It contains an initial set of component process elements for the Space Shuttle Main Engine Simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. The system is then able to automatically generate the model and FORTRAN code. The future goal, currently under development, is to download the FORTRAN code to a VAX/VMS system for conventional computation. The SSME mathematical model will be verified in a test environment and the solution compared with the real data profile. The use of artificial intelligence techniques has shown that the process of simulation modeling can be simplified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ansari, A.; Mohaghegh, S.; Shahnam, M.
To ensure the usefulness of simulation technologies in practice, their credibility needs to be established with Uncertainty Quantification (UQ) methods. In this project, a smart proxy is introduced to significantly reduce the computational cost of conducting the large number of multiphase CFD simulations typically required for non-intrusive UQ analysis. Smart proxies for CFD models are developed using the pattern recognition capabilities of Artificial Intelligence (AI) and Data Mining (DM) technologies. Several CFD simulation runs with different inlet air velocities for a rectangular fluidized bed are used to create a smart CFD proxy that is capable of replicating the CFD results for the entire geometry and inlet velocity range. The smart CFD proxy is validated with blind CFD runs (CFD runs that have not played any role during the development of the smart CFD proxy). The developed and validated smart CFD proxy generates its results in seconds with reasonable error (less than 10%). Upon completion of this project, UQ studies that rely on hundreds or thousands of smart CFD proxy runs can be accomplished in minutes. The following figure demonstrates a validation example (blind CFD run) showing the results from the MFiX simulation and the smart CFD proxy for pressure distribution across a fluidized bed at a given time-step (the layer number corresponds to the vertical location in the bed).
Investigation on the Practicality of Developing Reduced Thermal Models
NASA Technical Reports Server (NTRS)
Lombardi, Giancarlo; Yang, Kan
2015-01-01
Throughout the spacecraft design and development process, detailed instrument thermal models are created to simulate their on-orbit behavior and to ensure that they do not exceed any thermal limits. These detailed models, while generating highly accurate predictions, can sometimes lead to long simulation run times, especially when integrated with a spacecraft observatory model. Therefore, reduced models containing less detail are typically produced in tandem with the detailed models so that results may be more readily available, albeit less accurate. In the current study, both reduced and detailed instrument models are integrated with their associated spacecraft bus models to examine the impact of instrument model reduction on run time and accuracy. Preexisting instrument-bus thermal model pairs from several projects were used to determine trends between detailed and reduced thermal models; namely, the Mirror Optical Bench (MOB) on the Gravity and Extreme Magnetism Small Explorer (GEMS) spacecraft, the Advanced Topography Laser Altimeter System (ATLAS) on the Ice, Cloud, and land Elevation Satellite 2 (ICESat-2), and the Neutral Mass Spectrometer (NMS) on the Lunar Atmosphere and Dust Environment Explorer (LADEE). Hot and cold cases were run for each model to capture the behavior of the models at both thermal extremes. It was found that, though decreasing the number of nodes from a detailed to a reduced model brought about a reduction in run time, a large time savings was not observed, nor was the relationship between the percentage of nodes reduced and the time saved linear. However, significant losses in accuracy were observed with greater model reduction. It was found that while reduced models are useful in decreasing run time, there exists a threshold of reduction beyond which the loss in accuracy outweighs the benefit of the reduced model run time.
Understanding resonance graphs using Easy Java Simulations (EJS) and why we use EJS
NASA Astrophysics Data System (ADS)
Wee, Loo Kang; Lee, Tat Leong; Chew, Charles; Wong, Darren; Tan, Samuel
2015-03-01
This paper reports a computer model simulation created using Easy Java Simulation (EJS) for learners to visualize how the steady-state amplitude of a driven oscillating system varies with the frequency of the periodic driving force. The simulation shows (N = 100) identical spring-mass systems being subjected to (1) a periodic driving force of equal amplitude but different driving frequencies, and (2) different amounts of damping. The simulation aims to create a visually intuitive way of understanding how the series of amplitude versus driving frequency graphs is obtained by showing how the displacement of the system changes over time as it transits from the transient to the steady state. A suggested 'how to use' guide for the model is included to help educators and students in their teaching and learning, in which we explain the time conditions, derived from the theoretical steady-state equation, at which the model begins recording maximum amplitudes so that they closely match the theoretical equation, as well as the steps to collect runs with different degrees of damping. We also discuss two of the design features in our computer model: displaying the instantaneous oscillation together with the achieved steady-state amplitudes, and the explicit world view overlaid with the scientific representation for runs with different degrees of damping. Three advantages of using EJS include: (1) open source code and creative commons attribution licenses for scaling up of interactively engaging educational practices; (2) the models made can run on almost any device, including Android and iOS; and (3) it allows the redefinition of physics educational practices through computer modeling.
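For reference, the theoretical steady-state equation referred to is presumably the standard result for a driven, damped oscillator m x'' + b x' + k x = F_0 cos(omega t); this is textbook physics rather than anything specific to the EJS model:

    A(\omega) = \frac{F_0 / m}
    {\sqrt{\left(\omega_0^{2} - \omega^{2}\right)^{2} + \left(b\,\omega/m\right)^{2}}},
    \qquad \omega_0 = \sqrt{k/m}

With light damping the peak sits just below omega_0, at omega_res = sqrt(omega_0^2 - b^2/(2m^2)), which is why the recorded maximum amplitudes shift and flatten as the damping in successive runs increases.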
Graphical User Interface for Simulink Integrated Performance Analysis Model
NASA Technical Reports Server (NTRS)
Durham, R. Caitlyn
2009-01-01
The J-2X Engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for its Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, to save the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink Model, without opening or altering the contents of the model. The GUI must allow for test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink Model, and get the output from the Simulink Model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI constructs a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared to other values.
Evaluation of high-resolution climate simulations for West Africa using COSMO-CLM
NASA Astrophysics Data System (ADS)
Dieng, Diarra; Smiatek, Gerhard; Bliefernicht, Jan; Laux, Patrick; Heinzeller, Dominikus; Kunstmann, Harald; Sarr, Abdoulaye; Thierno Gaye, Amadou
2017-04-01
The climate change modeling activities within the WASCAL program (West African Science Service Center on Climate Change and Adapted Land Use) concentrate on the provision of future climate change scenario data at high spatial and temporal resolution and quality in West Africa. Such information is much needed for impact studies in water resources and agriculture and for the development of reliable climate change adaptation and mitigation strategies. In this study, we present a detailed evaluation of high-resolution simulation runs based on the regional climate model COSMO model in CLimate Mode (COSMO-CLM). The model is applied over West Africa in a nested approach with two simulation domains at 0.44° and 0.11° resolution using reanalysis data from ERA-Interim (1979-2013). The model runs are compared to several state-of-the-art observational references (e.g., CRU, CHIRPS), including daily precipitation data provided by national meteorological services in West Africa. Special attention is paid to the reproduction of the dynamics of the West African Monsoon (WAM), its associated precipitation patterns, and crucial agro-climatological indices such as the onset of the rainy season. In addition, first outcomes of the regional climate change simulations driven by MPI-ESM-LR are presented for a historical period (1980 to 2010) and two future periods (2020 to 2050, 2070 to 2100). The evaluation of the reanalysis runs shows that COSMO-CLM is able to reproduce the observed major climate characteristics, including the West African Monsoon, within the range of comparable RCM evaluation studies. However, substantial uncertainties remain, especially in the Sahel zone. The added value of the higher resolution of the nested run is reflected in a smaller bias in extreme precipitation statistics with respect to the reference data.
NASA Technical Reports Server (NTRS)
1974-01-01
The Stanford Watershed Model, the Kentucky Watershed Model and OPSET program, and the NASA-IBM system for simulation and analysis of watersheds are described in terms of their applications to the study of remote sensing of water resources. Specific calibration processes and input and output parameters that are instrumental in the simulations are explained for the following kinds of data: (1) hourly precipitation data; (2) daily discharge data; (3) flood hydrographs; (4) temperature and evaporation data; and (5) snowmelt data arrays. The Sensitivity Analysis Task, which provides a method for evaluation of any of the separate simulation runs in the form of performance indices, is also reported. The method is defined and a summary of results is given which indicates the values obtained in the simulation runs performed for Town Creek, Alabama; Alamosa Creek, Colorado; and Pearl River, Louisiana. The results are shown in tabular and plot graph form. For Vol. 1, see N74-27813.
NASA One-Dimensional Combustor Simulation--User Manual for S1D_ML
NASA Technical Reports Server (NTRS)
Stueber, Thomas J.; Paxson, Daniel E.
2014-01-01
The work presented in this paper promotes research leading to a closed-loop control system to actively suppress thermo-acoustic instabilities. To serve as a model for such a closed-loop control system, a one-dimensional combustor simulation has been written using MATLAB software tools. This MATLAB-based process is similar to a precursor one-dimensional combustor simulation that was written as FORTRAN 77 source code. The previous simulation process required modification of the FORTRAN 77 source code, compiling, and linking whenever a new combustor simulation executable file was created. The MATLAB-based simulation does not require changes to the source code, recompiling, or linking. Furthermore, the MATLAB-based simulation can be run from script files within the MATLAB environment, or with a compiled copy of the executable file running in the Command Prompt window without requiring a licensed copy of MATLAB. This report presents a general simulation overview. Details on how to set up and initiate a simulation are also presented. Finally, the post-processing section describes the two types of files created while running the simulation and includes simulation results for a default simulation included with the source code.
NASA Technical Reports Server (NTRS)
Shen, B.-W.; Atlas, R.; Reale, O.; Lin, S.-J.; Chern, J.-D.; Chang, J.; Henze, C.
2006-01-01
Hurricane Katrina was the sixth most intense hurricane on record in the Atlantic. Katrina's forecast posed major challenges, the most important of which was its rapid intensification. Hurricane intensity forecasting with General Circulation Models (GCMs) is difficult because of their coarse resolution. In this article, six 5-day simulations with the ultra-high-resolution finite-volume GCM are conducted on the NASA Columbia supercomputer to show the effects of increased resolution on the intensity predictions of Katrina. It is found that the 0.125 degree runs give tracks comparable to the 0.25 degree runs, but provide better intensity forecasts, bringing the center pressure much closer to observations, with differences of only plus or minus 12 hPa. In the runs initialized at 1200 UTC 25 August, the 0.125 degree run simulates a more realistic intensification rate and better near-eye wind distributions. Moreover, the first global 0.125 degree simulation without convection parameterization (CP) produces even better intensity evolution and near-eye winds than the control run with CP.
New NASA 3D Animation Shows Seven Days of Simulated Earth Weather
2014-08-11
This visualization shows early test renderings of a global computational model of Earth's atmosphere based on data from NASA's Goddard Earth Observing System Model, Version 5 (GEOS-5). This particular run, called Nature Run 2, was carried out on a supercomputer, spanned 2 years of simulation time at 30-minute intervals, and produced petabytes of output. The visualization spans a little more than 7 days of simulation time, which corresponds to 354 time steps. The time period was chosen because a simulated category-4 typhoon developed off the coast of China. The 7-day period is repeated several times during the course of the visualization. Credit: NASA's Scientific Visualization Studio. Read more or download here: svs.gsfc.nasa.gov/goto?4180
NASA Astrophysics Data System (ADS)
Fuhrer, Oliver; Chadha, Tarun; Hoefler, Torsten; Kwasniewski, Grzegorz; Lapillonne, Xavier; Leutwyler, David; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph; Schulthess, Thomas C.; Vogt, Hannes
2018-05-01
The best hope for reducing long-standing global climate model biases is by increasing resolution to the kilometer scale. Here we present results from an ultrahigh-resolution non-hydrostatic climate model for a near-global setup running on the full Piz Daint supercomputer on 4888 GPUs (graphics processing units). The dynamical core of the model has been completely rewritten using a domain-specific language (DSL) for performance portability across different hardware architectures. Physical parameterizations and diagnostics have been ported using compiler directives. To our knowledge this represents the first complete atmospheric model being run entirely on accelerators on this scale. At a grid spacing of 930 m (1.9 km), we achieve a simulation throughput of 0.043 (0.23) simulated years per day and an energy consumption of 596 MWh per simulated year. Furthermore, we propose a new memory usage efficiency (MUE) metric that considers how efficiently the memory bandwidth - the dominant bottleneck of climate codes - is being used.
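The quoted throughput and energy figures relate directly: simulated years per day (SYPD) is simulated time over wall-clock time, and energy per simulated year divided by wall-clock hours per simulated year gives the implied average power draw. A small worked check using only the numbers reported above:

```python
# Reported figures at 930 m grid spacing:
sypd = 0.043               # simulated years per day
energy_per_sy_mwh = 596    # MWh per simulated year

wallclock_hours_per_sy = 24.0 / sypd                    # ~558 wall-clock h per SY
implied_avg_power_mw = energy_per_sy_mwh / wallclock_hours_per_sy
print(f"{wallclock_hours_per_sy:.0f} h/SY, "
      f"implied average draw ~ {implied_avg_power_mw:.2f} MW")
# ~558 h per simulated year, i.e. roughly 1.07 MW sustained
# across the 4888-GPU partition.
```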
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, B.; Schneider, E.K.
1995-10-01
Two surface wind stress datasets for 1979-91, one based on observations and the other from an integration of the COLA atmospheric general circulation model (AGCM) with prescribed SST, are used to drive the GFDL ocean general circulation model. These two runs are referred to as the control and COLA experiments, respectively. Simulated SST and upper-ocean heat contents (HC) in the tropical Pacific Ocean are compared with observations and between experiments. Both simulations reproduce the observed mean SST and HC fields as well as their annual cycles realistically. Major errors common to both runs are colder-than-observed SST in the eastern equatorial ocean and HC in the western Pacific south of the equator, with errors generally larger in the COLA experiment. New errors arising from the AGCM wind forcing include higher SST near the South American coast throughout the year and weaker HC gradients along the equator in boreal spring. The former is associated with coastal upwelling suppressed by weak alongshore AGCM winds, and the latter is caused by weaker equatorial easterlies in boreal spring. The low-frequency ENSO fluctuations are also realistic in both runs. Correlations between the observed and simulated SST anomalies from the COLA simulation are as high as those from the control run in the central equatorial Pacific. A major problem in the COLA simulation is the appearance of unrealistic tropical cold anomalies during the boreal spring of mature El Nino years. These anomalies propagate along the equator from the western Pacific to the eastern coast in about three months, and temporarily eliminate the warm SST and HC anomalies in the eastern Pacific. This erroneous oceanic response in the COLA simulation is caused by a reversal of the westerly wind anomalies on the equator, associated with an unrealistic southward shift of the ITCZ in boreal spring during El Nino events. 66 refs., 16 figs.
The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation
NASA Astrophysics Data System (ADS)
Silva, F.; Goulet, C. A.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.
2016-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100 Hz) ground motions for earthquakes at regional scales. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The BBP scientific software modules implement kinematic rupture generation, low- and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, several ground motion intensity measure calculations, and various ground motion goodness-of-fit tools. These modules are integrated into a software system that provides user-defined, repeatable, calculation of ground-motion seismograms, using multiple alternative ground motion simulation methods, and software utilities to generate tables, plots, and maps. The BBP has been developed over the last five years in a collaborative project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The SCEC BBP software released in 2016 can be compiled and run on recent Linux and Mac OS X systems with GNU compilers. It includes five simulation methods, seven simulation regions covering California, Japan, and Eastern North America, and the ability to compare simulation results against empirical ground motion models (aka GMPEs). The latest version includes updated ground motion simulation methods, a suite of new validation metrics and a simplified command line user interface.
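As a hedged illustration of the kind of goodness-of-fit measure such platforms compute (a common generic choice, not necessarily the BBP's own definition), one can score the natural-log residual between simulated and observed response-spectral values, averaged over stations:

```python
import numpy as np

def log_gof_bias(obs_sa, sim_sa):
    """Mean and spread of log residuals between simulated and observed
    spectral acceleration, per period.

    obs_sa, sim_sa: arrays of shape (n_stations, n_periods).
    A mean of 0 indicates no bias; positive means over-prediction.
    """
    residuals = np.log(sim_sa / obs_sa)
    return residuals.mean(axis=0), residuals.std(axis=0)

# Synthetic example: 3 stations, 4 spectral periods (values invented).
obs = np.array([[0.31, 0.22, 0.12, 0.05],
                [0.45, 0.30, 0.15, 0.07],
                [0.28, 0.20, 0.11, 0.04]])
sim = obs * np.array([1.1, 0.9, 1.3, 0.8])   # hypothetical model bias
bias, spread = log_gof_bias(obs, sim)
```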
NASA Astrophysics Data System (ADS)
Kourafalou, Vassiliki H.; Androulidakis, Yannis S.; Halliwell, George R.; Kang, HeeSook; Mehari, Michael M.; Le Hénaff, Matthieu; Atlas, Robert; Lumpkin, Rick
2016-11-01
A high resolution, free-running model has been developed for the hurricane region of the North Atlantic Ocean. The model is evaluated with a variety of observations to ensure that it adequately represents both the ocean climatology and variability over this region, with a focus on processes relevant to hurricane-ocean interactions. As such, it can be used as the "Nature Run" (NR) model within the framework of Observing System Simulation Experiments (OSSEs), designed specifically to improve the ocean component of coupled ocean-atmosphere hurricane forecast models. The OSSE methodology provides quantitative assessment of the impact of specific observations on the skill of forecast models and enables the comprehensive design of future observational platforms and the optimization of existing ones. Ocean OSSEs require a state-of-the-art, high-resolution free-running model simulation that represents the true ocean (the NR). This study concentrates on the development and data-based evaluation of the NR model component, which leads to a reliable model simulation that has a dual purpose: (a) to provide the basis for future hurricane-related OSSEs; (b) to explore process-oriented studies of hurricane-ocean interactions. A specific example is presented, where the impact of Hurricane Bill (2009) on the eastward extension and transport of the Gulf Stream is analyzed. The hurricane-induced cold wake is shown in both the NR simulation and observations. Interaction of storm-forced currents with the Gulf Stream produced a temporary large reduction in eastward transport downstream from Cape Hatteras and had a marked influence on frontal displacement in the upper ocean. The kinetic energy due to ageostrophic currents showed a significant increase as the storm passed, and then decreased to pre-storm levels within 8 days after the hurricane advanced further north. This is a unique result of direct hurricane impact on a western boundary current, with possible implications for the ocean feedback on hurricane evolution.
Mastin, Mark
2012-01-01
A previous collaborative effort between the U.S. Geological Survey and the Bureau of Reclamation resulted in a watershed model for four watersheds that discharge into Potholes Reservoir, Washington. Since the model was constructed, two new meteorological sites have been established that provide more reliable real-time information. The Bureau of Reclamation was interested in incorporating this new information into the existing watershed model developed in 2009, and adding measured snowpack information to update simulated results and to improve forecasts of runoff. This report includes descriptions of procedures to aid a user in making model runs, including a description of the Object User Interface for the watershed model with details on specific keystrokes to generate model runs for the contributing basins. A new real-time, data-gathering computer program automates the creation of the model input files and includes the new meteorological sites. The 2009 watershed model was updated with the new sites and validated by comparing simulated results to measured data. As in the previous study, the updated model (2012 model) does a poor job of simulating individual storms, but a reasonably good job of simulating seasonal runoff volumes. At three streamflow-gaging stations, the January 1 to June 30 retrospective forecasts of runoff volume for years 2010 and 2011 were within 40 percent of the measured runoff volume for five of the six comparisons, ranging from -39.4 to 60.3 percent difference. A procedure for collecting measured snowpack data and using the data in the watershed model for forecast model runs, based on the Ensemble Streamflow Prediction method, is described, with an example that uses 2004 snow-survey data.
Simulation of linear mechanical systems
NASA Technical Reports Server (NTRS)
Sirlin, S. W.
1993-01-01
A dynamics and controls analyst is typically presented with a structural dynamics model and must perform various input/output tests and design control laws. The required time/frequency simulations need to be done many times as models change and control designs evolve. This paper examines some simple ways that open and closed loop frequency and time domain simulations can be done using the special structure of the system equations usually available. Routines were developed to run under Pro-Matlab in a mixture of the Pro-Matlab interpreter and FORTRAN (using the .mex facility). These routines are often orders of magnitude faster than trying the typical 'brute force' approach of using built-in Pro-Matlab routines such as bode. This makes the analyst's job easier since not only does an individual run take less time, but much larger models can be attacked, often allowing the whole model reduction step to be eliminated.
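The speed-up described comes from exploiting structure rather than brute force. For a proportionally damped model already in modal form, each mode contributes a closed-form term to the frequency response, so a whole sweep reduces to a vectorized sum instead of a dense solve at every frequency. A sketch of the idea in Python/NumPy (the modal parameters here are invented; Pro-Matlab itself predates this style, so this only illustrates the underlying technique):

```python
import numpy as np

def modal_freq_response(omega, wn, zeta, b, c):
    """Frequency response H(jw) of a proportionally damped modal model.

    Each mode k contributes c[k]*b[k] / (wn[k]^2 - w^2 + 2j*zeta[k]*wn[k]*w),
    so the sweep is a vectorized sum over modes rather than a general
    matrix solve (as a generic routine like bode performs) per frequency.
    """
    w = omega[:, None]                              # shape (n_freq, 1)
    denom = wn**2 - w**2 + 2j * zeta * wn * w       # shape (n_freq, n_modes)
    return ((c * b) / denom).sum(axis=1)            # shape (n_freq,)

# Three modes, single input/output, swept over 500 frequencies.
wn = np.array([2.0, 11.0, 40.0]) * 2 * np.pi   # modal frequencies, rad/s
zeta = np.array([0.005, 0.01, 0.02])           # modal damping ratios
b = np.array([0.7, 0.2, 0.1])                  # modal input gains
c = np.array([1.0, -0.5, 0.3])                 # modal output gains
omega = np.logspace(0, 3, 500)
H = modal_freq_response(omega, wn, zeta, b, c)
```

Because the cost scales with the number of retained modes rather than the full state dimension, very large models can be swept directly, which is what allows the model reduction step to be skipped.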
Simulation for Grid Connected Wind Turbines with Fluctuating
NASA Astrophysics Data System (ADS)
Ye, Ying; Fu, Yang; Wei, Shurong
This paper establishes a complete dynamic model of a wind turbine generator system, comprising a wind speed model and a doubly-fed induction generator (DFIG) wind turbine model. A simulation example based on these mathematical models is built using MATLAB. The performance characteristics of grid-connected DFIGs are studied under a three-phase ground fault and under disturbances from gusts and mixed wind. The modeled wind farm has a capacity of 9 MW and consists of DFIGs. Simulation results demonstrate that a three-phase ground fault on the grid side has relatively little effect on the stability of the doubly-fed wind generators. However, because the wind is the power source, fluctuations in wind speed have a large impact on the stability of the doubly-fed wind generators. The results also show that if the two disturbances occur simultaneously, the impact on stability is severe.
A new climate modeling framework for convection-resolving simulation at continental scale
NASA Astrophysics Data System (ADS)
Charpilloz, Christophe; di Girolamo, Salvatore; Arteaga, Andrea; Fuhrer, Oliver; Hoefler, Torsten; Schulthess, Thomas; Schär, Christoph
2017-04-01
Major uncertainties remain in our understanding of the processes that govern the water cycle in a changing climate and of their representation in weather and climate models. Of particular concern are heavy precipitation events of convective origin (thunderstorms and rain showers). The aim of the crCLIM project [1] is to propose a new climate modeling framework that alleviates the I/O bottleneck in large-scale, convection-resolving climate simulations and thus enables new analysis techniques for climate scientists. Due to the large computational costs, convection-resolving simulations are currently restricted to small computational domains or very short time scales, unless the largest available supercomputer systems, such as hybrid CPU-GPU architectures, are used [3]. Hence, the COSMO model has been adapted to run on these architectures for research and production purposes [2]. However, the amount of generated data also increases, and storing this data becomes infeasible, making the analysis of simulation results impractical. To circumvent this problem and enable high-resolution models in climate research, we propose a data-virtualization layer (DVL) that re-runs simulations on demand and transparently manages the data for analysis; that is, we trade off computational effort (time) for storage (space). This approach also requires a bit-reproducible version of the COSMO model that produces identical results on different architectures (CPUs and GPUs) [4], which will be coupled with a performance model in order to enable optimal re-runs depending on the requirements of the re-run and the available resources. In this contribution, we discuss the strategy for developing the DVL, a first performance model, the challenge of bit-reproducibility, and the first results of the crCLIM project. [1] http://www.c2sm.ethz.ch/research/crCLIM.html [2] O. Fuhrer, C. Osuna, X. Lapillonne, T. Gysi, M. Bianco, and T. Schulthess. "Towards GPU-accelerated operational weather forecasting." In The GPU Technology Conference, GTC, 2013. [3] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, and C. Schär. "Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19." Geoscientific Model Development 9, no. 9 (2016): 3393. [4] A. Arteaga, O. Fuhrer, and T. Hoefler. "Designing bit-reproducible portable high-performance applications." In Parallel and Distributed Processing Symposium, 2014 IEEE 28th International, pp. 1235-1244. IEEE, 2014.
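A toy sketch of the trade-off at the heart of such a data-virtualization layer: an analysis request is served from storage when the field is cached, and otherwise triggers a deterministic (bit-reproducible) re-computation of the requested segment. All names are illustrative; the actual DVL design is considerably more involved (performance model, scheduling, distributed storage):

```python
import os
import numpy as np

CACHE_DIR = "dvl_cache"

def rerun_segment(segment_id):
    """Deterministically re-simulate one output segment.

    Placeholder for re-launching the bit-reproducible model over the
    requested time window; determinism is what makes a re-run a valid
    substitute for stored output.
    """
    rng = np.random.default_rng(seed=segment_id)   # stand-in for the model
    return rng.standard_normal((10, 10))

def fetch(segment_id):
    """Serve a field from cache if present, else trade time for space."""
    path = os.path.join(CACHE_DIR, f"segment_{segment_id}.npy")
    if os.path.exists(path):
        return np.load(path)                # cheap path: read stored data
    data = rerun_segment(segment_id)        # expensive path: recompute
    os.makedirs(CACHE_DIR, exist_ok=True)
    np.save(path, data)
    return data
```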
NASA Astrophysics Data System (ADS)
Yao, Zhixiong; Tang, Youmin; Chen, Dake; Zhou, Lei; Li, Xiaojing; Lian, Tao; Ul Islam, Siraj
2016-12-01
This study examines the possible impacts of coupling processes on simulations of the Indian Ocean Dipole (IOD). Emphasis is placed on the atmospheric model resolution and physics. Five experiments were conducted for this purpose, including one control run of the ocean-only model and four coupled experiments using two different versions of the Community Atmosphere Model (CAM4 and CAM5) and two different resolutions. The results show that the control run could effectively simulate various features of the IOD. The coupled experiments run at the higher resolution yielded a more realistic IOD period and intensity than their counterparts at the lower resolution. The coupled experiments using CAM5 generally showed better simulation skill for the tropical Indian SST climatology and phase-locking than those using CAM4, but the wind anomalies were stronger and the IOD period was longer in the former experiments than in the latter. In all coupled experiments, the IOD intensity was much stronger than the observed intensity, which is attributable to the wind-thermocline depth feedback and the thermocline depth-subsurface temperature feedback. The CAM5 physics seems beneficial for the simulation of summer rainfall over the eastern equatorial Indian Ocean, and the CAM4 physics tends to produce smaller biases over the western equatorial Indian Ocean, whereas the higher resolution tends to generate unrealistically strong meridional winds. The IOD-ENSO relationship was captured reasonably well in the coupled experiments, with improvements in CAM5 relative to CAM4. However, the teleconnections of the IOD with the Indian summer monsoon and of ENSO with the Indian summer monsoon were not realistically simulated in any of the experiments.
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
A mathematical model of a high-performance airplane capable of vertical attitude takeoff and landing (VATOL) was developed. An off-line digital simulation program incorporating this model was developed to provide trim conditions and dynamic check runs for the piloted simulation studies and to support dynamic analyses of proposed VATOL configurations and flight control concepts. Development details for the various simulation component models and the application of the off-line simulation program, the Vertical Attitude Take-Off and Landing Simulation (VATLAS), to the development of a baseline control system for the Vought SF-121 VATOL airplane concept are described.
NASA Astrophysics Data System (ADS)
Xavier, V. F.; Chandrasekar, A.; Singh, Devendra
2006-12-01
The present study utilized the Penn State/NCAR mesoscale model (MM5) to assimilate INSAT-CMV (Indian National Satellite System Cloud Motion Vector) wind observations using analysis nudging, in order to improve the prediction of a monsoon depression that occurred over the Arabian Sea, India, during 14 September 2005 to 17 September 2005. The NCEP FNL analysis was utilized for the initial and lateral boundary conditions, and two sets of numerical experiments were designed to reveal the impact of assimilating satellite-derived winds. In the NOFDDA run, the model was integrated from 14 September 2005 00 UTC to 17 September 2005 00 UTC with just the NCEP FNL analysis. In the FDDA run, the NCEP FNL analysis fields were improved by assimilating the INSAT-CMV winds (wind speed and wind direction) as well as QuikSCAT sea surface winds during the 24-hour pre-forecast period (14 September 2005 00 UTC to 15 September 2005 00 UTC) using analysis nudging. The model was subsequently run in free-forecast mode from 15 September 2005 00 UTC to 17 September 2005 12 UTC. The simulated sea level pressure field from the NOFDDA run reveals a relatively stronger system compared to the FDDA run. However, the sea level pressure fields from the FDDA run are closer to the analysis. The simulated lower-tropospheric winds from both experiments reveal a well-developed cyclonic circulation, as in the analysis.
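For reference, analysis nudging adds a Newtonian relaxation term to the tendency of each nudged prognostic variable. In the standard formulation used in MM5-type FDDA (this is the generic textbook form, with conventional symbols, not a result specific to this study):

$$\frac{\partial \alpha}{\partial t} = F(\alpha, \mathbf{x}, t) + G_{\alpha}\, W(\mathbf{x}, t)\,\bigl(\hat{\alpha}_{0} - \alpha\bigr),$$

where $F$ collects the model's physical and dynamical tendencies, $\hat{\alpha}_0$ is the gridded analysis toward which the variable $\alpha$ is relaxed, $G_\alpha$ is the nudging coefficient (an inverse relaxation timescale), and $W$ is a spatio-temporal weighting function.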
Stenemo, Fredrik; Jørgensen, Peter R; Jarvis, Nicholas
2005-09-01
The one-dimensional pesticide fate model MACRO was loose-linked to the three-dimensional discrete fracture/matrix diffusion model FRAC3DVS to describe transport of the pesticide mecoprop in a fractured moraine till and a local sand aquifer (5-5.5 m depth) overlying a regional limestone aquifer (16 m depth) at Havdrup, Denmark. Alternative approaches to describing the upper boundary in the groundwater model were examined. Field-scale simulations were run to compare a uniform upper boundary condition with a spatially variable upper boundary derived from Monte Carlo simulations with MACRO. Plot-scale simulations were run to investigate the influence of the temporal resolution of the upper boundary fluxes in the groundwater model and the effects of different assumptions concerning the macropore/fracture connectivity between the two models. The influence of within-field variability of leaching on simulated mecoprop concentrations in the local aquifer was relatively small. A fully transient simulation with FRAC3DVS gave 20 times larger leaching to the regional aquifer compared to the case with steady-state water flow, assuming full connectivity with respect to macropores/fractures across the boundary between the two models. For fully transient simulations, 'disconnecting' the macropores/fractures at the interface between the two models reduced leaching by a factor of 24. A fully connected, transient simulation with FRAC3DVS, with spatially uniform upper boundary fluxes derived from a MACRO simulation with 'effective' parameters, is therefore recommended for assessing leaching risks to the regional aquifer at this and similar sites.
Fall, Mamadou Lamine; Van der Heyden, Hervé; Carisse, Odile
2016-01-01
Lettuce downy mildew, caused by the oomycete Bremia lactucae Regel, is a major threat to lettuce production worldwide. Lettuce downy mildew is a polycyclic disease driven by airborne spores. A weather-based dynamic simulation model for B. lactucae airborne spores was developed to simulate the aerobiological characteristics of the pathogen. The model was built on the STELLA platform following the system dynamics methodology. The model was developed using published equations describing disease subprocesses (e.g., sporulation) and assembled knowledge of the interactions among pathogen, host, and weather. The model was evaluated with four years of independent data by comparing model simulations with observations of hourly and daily airborne spore concentrations. The results show an accurate simulation of the trend and shape of the temporal dynamics of B. lactucae airborne spore concentration. The model simulated hourly and daily peaks in airborne spore concentrations. In more than 95% of the simulation runs, the daily simulated airborne conidia concentration was 0 when no airborne conidia were observed. The relationship between the simulated and the observed airborne spore concentrations was linear. In more than 94% of the simulation runs, the proportion of the linear variation in the hourly observed values explained by the variation in the hourly simulated values was greater than 0.7 in all years except one. Most of the errors came from deviation from the 1:1 line, and the proportion of errors due to model bias was low. This model is the only dynamic model developed to mimic the dynamics of airborne inoculum and represents an initial step towards improved understanding, forecasting, and management of lettuce downy mildew.
Comprehensive Assessment of Models and Events based on Library tools (CAMEL)
NASA Astrophysics Data System (ADS)
Rastaetter, L.; Boblitt, J. M.; DeZeeuw, D.; Mays, M. L.; Kuznetsova, M. M.; Wiegand, C.
2017-12-01
At the Community Coordinated Modeling Center (CCMC), the assessment of modeling skill using a library of model-data comparison metrics is taken to the next level by fully integrating the ability to request a series of runs with the same model parameters for a list of events. The CAMEL framework initiates and runs a series of selected, pre-defined simulation settings for participating models (e.g., WSA-ENLIL and SWMF-SC+IH for the heliosphere; SWMF-GM, OpenGGCM, LFM, and GUMICS for the magnetosphere) and performs post-processing using existing tools for a host of different output parameters. The framework compares the resulting time series data with the respective observational data and computes a suite of metrics such as Prediction Efficiency, Root Mean Square Error, Probability of Detection, Probability of False Detection, and Heidke Skill Score for each model-data pair. The system then plots scores by event and aggregated over all events for all participating models and run settings. We are building on past experience with model-data comparisons of magnetosphere and ionosphere model outputs in the GEM 2008, GEM-CEDAR CETI 2010, and Operational Space Weather Model challenges (2010-2013). The framework can also be applied to solar-heliosphere and radiation belt models. The CAMEL framework takes advantage of model simulations described with Space Physics Archive Search and Extract (SPASE) metadata and a database backend design developed for a next-generation Run-on-Request system at the CCMC.
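For concreteness, the categorical scores named above follow from a 2x2 contingency table of hits, false alarms, misses, and correct negatives. A minimal sketch using the standard textbook definitions (this illustrates the metrics themselves, not CCMC's implementation):

```python
import numpy as np

def categorical_scores(a, b, c, d):
    """Standard verification scores from a 2x2 contingency table:
    a = hits, b = false alarms, c = misses, d = correct negatives."""
    pod = a / (a + c)            # Probability of Detection
    pofd = b / (b + d)           # Probability of False Detection
    hss = 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, pofd, hss        # hss: Heidke Skill Score

def prediction_efficiency(obs, model):
    """PE = 1 - MSE / variance of observations; 1 is perfect,
    0 is no better than always predicting the observed mean."""
    obs, model = np.asarray(obs), np.asarray(model)
    return 1.0 - np.mean((obs - model) ** 2) / np.var(obs)

def rmse(obs, model):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(model)) ** 2)))
```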
NASA Technical Reports Server (NTRS)
Meng, J. C. S.; Thomson, J. A. L.
1975-01-01
A data analysis program constructed to assess LDV system performance, to validate the simulation model, and to test various vortex location algorithms is presented. Real or simulated Doppler spectra versus range and elevation are used, and the spatial distributions of various spectral moments or other spectral characteristics are calculated and displayed. Each of the real or simulated scans can be processed by one of three different procedures: simple frequency or wavenumber filtering, matched filtering, or deconvolution filtering. The final output is displayed as contour plots in an x-y coordinate system, as well as in the form of vortex tracks deduced from the maxima of the processed data. A detailed analysis of run number 1023 and run number 2023 is presented to demonstrate the data analysis procedure. Vortex tracks and system range resolutions are compared with theoretical predictions.
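Of the three procedures, matched filtering is the easiest to sketch: the scan is correlated against a template of the expected signature, which maximizes output signal-to-noise ratio for a known shape in white noise. An illustrative 1-D version (the Gaussian template and all values are toys, not the actual LDV vortex signature):

```python
import numpy as np

def matched_filter(signal, template):
    """Correlate the measured scan with the expected signature.

    For a known signature in additive white noise, correlating with the
    time-reversed template maximizes the output SNR at the signature location.
    """
    kernel = template[::-1]                     # time-reversed template
    return np.convolve(signal, kernel, mode="same")

# Toy example: a Gaussian "signature" buried in noise.
template = np.exp(-0.5 * np.linspace(-3, 3, 40) ** 2)
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.5, 400)
signal[180:220] += template                     # embed the signature
response = matched_filter(signal, template)
estimated_location = int(np.argmax(response))   # peak marks the signature
```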
Mean Line Pump Flow Model in Rocket Engine System Simulation
NASA Technical Reports Server (NTRS)
Veres, Joseph P.; Lavelle, Thomas M.
2000-01-01
A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.
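A mean-line analysis of the kind described rests on Euler's turbomachinery equation, which gives the ideal head rise from velocities evaluated at the mean line; the actual head then follows by subtracting the diffusion-system and other losses. A generic sketch of that relation (standard turbomachinery theory, not PUMPA's internals; all numbers are invented):

```python
import numpy as np

def euler_head(U2, Cu2, U1=0.0, Cu1=0.0, g=9.81):
    """Ideal (Euler) head rise of a pump stage from mean-line velocities.

    U = blade speed, Cu = tangential fluid velocity, at inlet (1) and
    exit (2). Real head = Euler head minus diffusion and other losses.
    """
    return (U2 * Cu2 - U1 * Cu1) / g

# Illustrative centrifugal stage evaluated at the mean line.
omega = 2 * np.pi * 15000 / 60        # shaft speed, rad/s (15 krpm, assumed)
r2 = 0.06                             # exit mean-line radius, m (assumed)
U2 = omega * r2                       # exit blade speed
Cu2 = 0.85 * U2                       # assumed exit swirl (slip included)
print(f"Euler head ~ {euler_head(U2, Cu2):.0f} m")
```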
Optimum Vehicle Component Integration with InVeST (Integrated Vehicle Simulation Testbed)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, W; Paddack, E; Aceves, S
2001-12-27
We have developed an Integrated Vehicle Simulation Testbed (InVeST). InVeST is based on the concept of co-simulation, and it allows the development of virtual vehicles that can be analyzed and optimized as an overall integrated system. The virtual vehicle is defined by selecting different vehicle components from a component library. Vehicle component models can be written in multiple programming languages running on different computer platforms. At the same time, InVeST provides full protection for proprietary models. Co-simulation is a cost-effective alternative to competing methodologies, such as developing a translator or selecting a single programming language for all vehicle components. InVeST has recently been demonstrated using a transmission model and a transmission controller model. The transmission model was written in SABER and ran on a Sun/Solaris workstation, while the transmission controller was written in MATRIXx and ran on a PC running Windows NT. The demonstration was performed successfully. Future plans include applying co-simulation and InVeST to the analysis and optimization of multiple complex systems, including those of Intelligent Transportation Systems.
Building Simulation Modelers: are we big-data ready?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Jibonananda; New, Joshua Ryan
Recent advances in computing and sensor technologies have pushed the amount of data we collect or generate to limits previously unheard of. Sub-minute resolution data from dozens of channels is becoming increasingly common and is expected to increase with the prevalence of non-intrusive load monitoring. Experts are running larger building simulation experiments and are faced with an increasingly complex data set to analyze and from which to derive meaningful insight. This paper focuses on the data management challenges that building modeling experts may face in data collected from a large array of sensors, or generated from running a large number of building energy/performance simulations. The paper highlights the technical difficulties that were encountered and overcome in order to run 3.5 million EnergyPlus simulations on supercomputers and generate over 200 TB of simulation output. This extreme case involved development of technologies and insights that will be beneficial to modelers in the immediate future. The paper discusses different database technologies (including relational databases, columnar storage, and schema-less Hadoop) in order to contrast the advantages and disadvantages of employing each for storage of EnergyPlus output. Scalability, analysis requirements, and the adaptability of these database technologies are discussed. Additionally, unique attributes of EnergyPlus output are highlighted which make data entry non-trivial for multiple simulations. Practical experience regarding cost-effective strategies for big-data storage is provided. The paper also discusses network performance issues when transferring large amounts of data across a network to different computing devices. Practical issues involving lag, bandwidth, and methods for synchronizing or transferring logical portions of the data are presented. A cornerstone of big data is its use for analytics; data is useless unless information can be meaningfully derived from it. In addition to technical aspects of managing big data, the paper details design of experiments in anticipation of large volumes of data. The cost of re-reading output into an analysis program is elaborated, and analysis techniques that perform analysis in-situ with the simulations as they are run are discussed. The paper concludes with an example and elaboration of the tipping point where it becomes more expensive to store the output than to re-run a set of simulations.
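One in-situ pattern alluded to above can be sketched compactly: accumulate statistics as timesteps stream out of the simulation, so that only the reductions, never the raw output, need to be stored. A sketch using Welford's numerically stable running mean/variance (illustrative only, not the paper's actual pipeline):

```python
import numpy as np

class RunningStats:
    """Welford's algorithm: per-gridpoint mean and variance accumulated
    one timestep at a time, so raw output never has to be written."""

    def __init__(self, shape):
        self.n = 0
        self.mean = np.zeros(shape)
        self.m2 = np.zeros(shape)

    def update(self, field):
        self.n += 1
        delta = field - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (field - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else np.zeros_like(self.m2)

# Consume timesteps as the simulation produces them (synthetic here).
stats = RunningStats(shape=(64, 64))
rng = np.random.default_rng(1)
for step in range(8760):                # e.g., hourly output for one year
    stats.update(rng.standard_normal((64, 64)))  # stand-in for a model field
```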
Sensitivity of WRF Regional Climate Simulations to Choice of Land Use Dataset
The goal of this study is to assess the sensitivity of regional climate simulations run with the Weather Research and Forecasting (WRF) model to the choice of datasets representing land use and land cover (LULC). Within a regional climate modeling application, an accurate repres...
Limits to high-speed simulations of spiking neural networks using general-purpose computers.
Zenke, Friedemann; Gerstner, Wulfram
2014-01-01
To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
Simulating Microbial Community Patterning Using Biocellion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Seung-Hwa; Kahan, Simon H.; Momeni, Babak
2014-04-17
Mathematical modeling and computer simulation are important tools for understanding complex interactions between cells and their biotic and abiotic environment: similarities and differences between modeled and observed behavior provide the basis for hypothesis formation. Momeni et al. [5] investigated pattern formation in communities of yeast strains engaging in different types of ecological interactions, comparing the predictions of mathematical modeling and simulation to actual patterns observed in wet-lab experiments. However, simulations of millions of cells in a three-dimensional community are extremely time-consuming. One simulation run in MATLAB may take a week or longer, inhibiting exploration of the vast space of parameter combinations and assumptions. Improving the speed, scale, and accuracy of such simulations facilitates hypothesis formation and expedites discovery. Biocellion is a high-performance software framework for accelerating discrete agent-based simulation of biological systems with millions to trillions of cells. Simulations of comparable scale and accuracy to those taking a week of computer time using MATLAB require just hours using Biocellion on a multicore workstation. Biocellion further accelerates large-scale, high-resolution simulations using cluster computers by partitioning the work to run on multiple compute nodes. Biocellion targets computational biologists who have mathematical modeling backgrounds and basic C++ programming skills. This chapter describes the necessary steps to adapt Momeni et al.'s original model to the Biocellion framework as a case study.
The Prodiguer Messaging Platform
NASA Astrophysics Data System (ADS)
Denvil, S.; Greenslade, M. A.; Carenton, N.; Levavasseur, G.; Raciazek, J.
2015-12-01
CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French global climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output are some of the complexities that CONVERGENCE aims to resolve. At any one moment in time, researchers affiliated with the Institut Pierre Simon Laplace (IPSL) climate modeling group are running hundreds of global climate simulations. These simulations execute upon a heterogeneous set of French High Performance Computing (HPC) environments. The IPSL's simulation execution runtime, libIGCM (library for the IPSL Global Climate Modeling group), has recently been enhanced so as to support hitherto impossible realtime use cases such as simulation monitoring, data publication, metrics collection, simulation control, and visualization. At the core of this enhancement is Prodiguer: an AMQP (Advanced Message Queuing Protocol) based, event-driven, asynchronous distributed messaging platform. libIGCM now dispatches copious amounts of information, in the form of messages, to the platform for remote processing by Prodiguer software agents at IPSL servers in Paris. Such processing takes several forms: persisting message content to databases; launching rollback jobs upon simulation failure; notifying downstream applications; and automating visualization pipelines. We will describe and/or demonstrate the platform's technical implementation, its inherent ease of scalability, its inherent adaptiveness in respect of supervising simulations, and a web portal receiving simulation notifications in realtime.
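A minimal sketch of the publishing side of such a platform, using the widely used pika AMQP client for Python; the broker address, exchange name, routing key, and message fields below are illustrative assumptions, not the actual Prodiguer schema:

```python
import json
import pika  # AMQP 0-9-1 client

# Connect to the message broker (host is a placeholder).
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="broker.example.org"))
channel = connection.channel()
channel.exchange_declare(exchange="simulation-events", exchange_type="topic")

# A monitoring message a runtime like libIGCM might emit (fields invented).
message = {
    "simulation_id": "abc-123",
    "event": "timestep_complete",
    "model_day": 3650,
}
channel.basic_publish(
    exchange="simulation-events",
    routing_key="monitoring.ipsl.timestep",
    body=json.dumps(message),
)
connection.close()
```

The appeal of the design is that the simulation only fires and forgets; consumers (databases, dashboards, rollback handlers) subscribe independently and can be added without touching the model runtime.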
Analyzing Spacecraft Telecommunication Systems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric
2004-01-01
Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
CMAQ was run to simulate urban conditions in the southeastern U.S. in July 1999 at 32, 8, and 2 km grid spacings. Runs were made with two older mechanisms, Carbon Bond IV (CB4) and the Regional Acid Deposition Model, version 2 (RADM2), and with the more recent California Statewid...
WMT: The CSDMS Web Modeling Tool
NASA Astrophysics Data System (ADS)
Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.
2015-12-01
The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can: design a model from a set of components; edit component parameters; save models to a web-accessible server; share saved models with the community; submit runs to an HPC system; and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db, a database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged and uploaded to a data server, where it is stored and from which a user can download it as a single compressed archive file.
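A hedged sketch of how a client might drive such a RESTful API with Python's requests library; the endpoint paths, JSON fields, and component names below are invented stand-ins, not WMT's documented interface:

```python
import requests

BASE = "https://example.edu/wmt-api"   # placeholder server URL

# Fetch component metadata (JSON-encoded, as described above).
components = requests.get(f"{BASE}/components").json()

# Configure a two-component coupled model (field names are hypothetical).
model = {
    "name": "delta-building-run",
    "components": [
        {"id": "hydrotrend", "parameters": {"run_duration": 100}},
        {"id": "cem", "parameters": {"grid_spacing": 200.0}},
    ],
}
resp = requests.post(f"{BASE}/models", json=model)
model_id = resp.json()["id"]

# Submit the saved model for execution on a remote execution server.
run = requests.post(f"{BASE}/runs",
                    json={"model": model_id, "host": "hpc.example.edu"})
print(run.json()["status"])
```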
NONMEMory: a run management tool for NONMEM.
Wilkins, Justin J
2005-06-01
NONMEM is an extremely powerful tool for nonlinear mixed-effect modelling and simulation of pharmacokinetic and pharmacodynamic data. However, it is a console-based application whose output does not lend itself to rapid interpretation or efficient management. NONMEMory has been created to be a comprehensive project manager for NONMEM, providing detailed summary, comparison and overview of the runs comprising a given project, including the display of output data, simple post-run processing, fast diagnostic plots and run output management, complementary to other available modelling aids. Analysis time ought not to be spent on trivial tasks, and NONMEMory's role is to eliminate these as far as possible by increasing the efficiency of the modelling process. NONMEMory is freely available from http://www.uct.ac.za/depts/pha/nonmemory.php.
Parallelization and automatic data distribution for nuclear reactor simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, L.M.
1997-07-01
Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.
Blum, Yvonne; Vejdani, Hamid R; Birn-Jeffery, Aleksandra V; Hubicki, Christian M; Hurst, Jonathan W; Daley, Monica A
2014-01-01
To achieve robust and stable legged locomotion in uneven terrain, animals must effectively coordinate limb swing and stance phases, which involve distinct yet coupled dynamics. Recent theoretical studies have highlighted the critical influence of swing-leg trajectory on stability, disturbance rejection, leg loading and economy of walking and running. Yet, simulations suggest that not all these factors can be simultaneously optimized. A potential trade-off arises between the optimal swing-leg trajectory for disturbance rejection (to maintain steady gait) versus regulation of leg loading (for injury avoidance and economy). Here we investigate how running guinea fowl manage this potential trade-off by comparing experimental data to predictions of hypothesis-based simulations of running over a terrain drop perturbation. We use a simple model to predict swing-leg trajectory and running dynamics. In simulations, we generate optimized swing-leg trajectories based upon specific hypotheses for task-level control priorities. We optimized swing trajectories to achieve i) constant peak force, ii) constant axial impulse, or iii) perfect disturbance rejection (steady gait) in the stance following a terrain drop. We compare simulation predictions to experimental data on guinea fowl running over a visible step down. Swing and stance dynamics of running guinea fowl closely match simulations optimized to regulate leg loading (priorities i and ii), and do not match the simulations optimized for disturbance rejection (priority iii). The simulations reinforce previous findings that swing-leg trajectory targeting disturbance rejection demands large increases in stance leg force following a terrain drop. Guinea fowl negotiate a downward step using unsteady dynamics with forward acceleration, and recover to steady gait in subsequent steps. Our results suggest that guinea fowl use swing-leg trajectory consistent with priority for load regulation, and not for steadiness of gait. Swing-leg trajectory optimized for load regulation may facilitate economy and injury avoidance in uneven terrain.
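The "simple model" family used in such studies is typically a spring-loaded inverted pendulum (SLIP): a point mass on a massless spring leg during stance. A minimal stance-phase integration is sketched below; the parameters are loosely bird-scale and purely illustrative, not fitted to the guinea fowl data:

```python
import numpy as np
from scipy.integrate import solve_ivp

# SLIP stance phase: point mass m on a massless spring leg of rest
# length L0 and stiffness k, with the foot fixed at the origin.
m, k, L0, g = 1.5, 2000.0, 0.22, 9.81   # illustrative, roughly bird-scale

def slip_stance(t, state):
    x, y, vx, vy = state
    L = np.hypot(x, y)                  # current leg length
    F = k * (L0 - L)                    # spring force along the leg
    ax = F * (x / L) / m
    ay = F * (y / L) / m - g
    return [vx, vy, ax, ay]

def liftoff(t, state):
    return np.hypot(state[0], state[1]) - L0   # leg back at rest length
liftoff.terminal = True
liftoff.direction = 1

# Touchdown with the leg angled forward, moving at running speed;
# start marginally compressed to avoid a zero-valued event at t = 0.
theta = np.radians(70.0)                # touchdown leg angle from horizontal
state0 = [-0.999 * L0 * np.cos(theta), 0.999 * L0 * np.sin(theta), 1.8, -0.3]
sol = solve_ivp(slip_stance, (0.0, 0.5), state0, events=liftoff, max_step=1e-3)
peak_leg_force = k * (L0 - np.hypot(sol.y[0], sol.y[1])).max()
```

In studies like the one above, the swing-leg trajectory (touchdown angle and leg length as functions of time) is the controlled quantity, and stance outcomes such as peak leg force follow from integrations of this kind.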
Information Architecture for Interactive Archives at the Community Coordinated Modeling Center
NASA Astrophysics Data System (ADS)
De Zeeuw, D.; Wiegand, C.; Kuznetsova, M.; Mullinix, R.; Boblitt, J. M.
2017-12-01
The Community Coordinated Modeling Center (CCMC) is upgrading its metadata system for model simulations to be compliant with the SPASE metadata standard. This work is helping to enhance the SPASE standards for simulations to better describe the wide variety of models and their output. It will enable much more sophisticated and automated metrics and validation efforts at the CCMC, as well as much more robust searches for specific types of output. The new metadata will also allow much more tailored run submissions, as it will allow some code options to be selected for Run-on-Request models. We will also demonstrate data accessibility through an implementation of the Heliophysics Application Programmer's Interface (HAPI) protocol for data otherwise available through the integrated Space Weather Analysis system (iSWA).
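HAPI is a published, uniform request interface for time-series data, which makes client access easy to sketch. In the sketch below the server URL and dataset id are placeholders, and the endpoint and time-keyword names follow the HAPI 2.x specification (later versions rename some keywords):

```python
import requests

SERVER = "https://example.gov/hapi"      # placeholder HAPI server

# Discover available datasets, then the parameters of one of them.
catalog = requests.get(f"{SERVER}/catalog").json()
info = requests.get(f"{SERVER}/info",
                    params={"id": "simulation_run_001"}).json()

# Request a time slice of the run output as CSV (HAPI 2.x time keywords).
data = requests.get(
    f"{SERVER}/data",
    params={
        "id": "simulation_run_001",
        "time.min": "2017-09-06T00:00:00Z",
        "time.max": "2017-09-07T00:00:00Z",
    },
)
rows = [line.split(",") for line in data.text.strip().splitlines()]
```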
Influence of hydrodynamic thrust bearings on the nonlinear oscillations of high-speed rotors
NASA Astrophysics Data System (ADS)
Chatzisavvas, Ioannis; Boyaci, Aydin; Koutsovasilis, Panagiotis; Schweizer, Bernhard
2016-10-01
This paper investigates the effect of hydrodynamic thrust bearings on the nonlinear vibrations and the bifurcations occurring in rotor/bearing systems. In order to examine the influence of thrust bearings, run-up simulations may be carried out. To be able to perform such run-up calculations, a computationally efficient thrust bearing model is mandatory. Direct discretization of the Reynolds equation for thrust bearings by means of a Finite Element or Finite Difference approach entails rather large simulation times, since in every time-integration step a discretized model of the Reynolds equation has to be solved simultaneously with the rotor model. Implementation of such a coupled rotor/bearing model may be accomplished by a co-simulation approach. Such an approach prevents, however, a thorough analysis of the rotor/bearing system based on extensive parameter studies. A major point of this work is the derivation of a very time-efficient but rather precise model for transient simulations of rotors with hydrodynamic thrust bearings. The presented model makes use of a global Galerkin approach, where the pressure field is approximated by global trial functions. For the considered problem, an analytical evaluation of the relevant integrals is possible. As a consequence, the system of equations of the discretized bearing model is obtained symbolically. In combination with a proper decomposition of the governing system matrix, a numerically efficient implementation can be achieved. Using run-up simulations with the proposed model, the effect of thrust bearings on the bifurcation points as well as on the amplitudes and frequencies of the subsynchronous rotor oscillations is investigated. In particular, the influence of the magnitude of the axial force, the geometry of the thrust bearing, and the oil parameters is examined. It is shown that the thrust bearing exerts a large influence on the nonlinear rotor oscillations, especially on those related to the conical mode of the rotor. A comparison between a full co-simulation approach and a reduced Galerkin implementation is carried out. It is shown that a speed-up of 10-15 times may be obtained with the Galerkin model compared to the co-simulation model at the same accuracy.
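The global Galerkin step can be summarized generically as follows (notation chosen here for illustration, not taken from the paper): the pressure field solving the Reynolds equation is expanded in $N$ global trial functions, and the residual $R$ is required to be orthogonal to each of them,

$$p(r, \varphi, t) \approx \sum_{i=1}^{N} a_i(t)\, \psi_i(r, \varphi), \qquad \int_{\Omega} R\Bigl(\sum_i a_i \psi_i\Bigr)\, \psi_j \, \mathrm{d}\Omega = 0, \quad j = 1, \dots, N.$$

This yields a small algebraic system for the coefficients $a_i(t)$; because the integrals admit closed-form evaluation for this problem, the system matrix can be assembled symbolically once, and the bearing forces follow from integrating the approximated pressure over the pad surface.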
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian W; Brunhart-Lupo, Nicholas J; Gruchalla, Kenny M
This brochure describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian W; Brunhart-Lupo, Nicholas J; Gruchalla, Kenny M
This presentation describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.
Effects of simulated weightlessness on fish otolith growth: Clinostat versus Rotating-Wall Vessel
NASA Astrophysics Data System (ADS)
Brungs, Sonja; Hauslage, Jens; Hilbig, Reinhard; Hemmersbach, Ruth; Anken, Ralf
2011-09-01
Stimulus dependence is a general feature of developing sensory systems. It has been shown earlier that the growth of inner ear heavy stones (otoliths) of late-stage Cichlid fish (Oreochromis mossambicus) and Zebrafish (Danio rerio) is slowed down by hypergravity, whereas microgravity during space flight yields an opposite effect, i.e. larger than 1 g otoliths, in Swordtail (Xiphophorus helleri) and in Cichlid fish late-stage embryos. These and related studies proposed that otolith growth is actively adjusted via a feedback mechanism to produce a test mass of the appropriate physical capacity. Using ground-based techniques to apply simulated weightlessness, long-term clinorotation (CR; exposure on a fast-rotating Clinostat with one axis of rotation) led to larger than 1 g otoliths in late-stage Cichlid fish. Larger than normal otoliths were also found in early-stage Zebrafish embryos after short-term Wall Vessel Rotation (WVR; also regarded as a method to simulate weightlessness). These results are basically in line with the results obtained on Swordtails from space flight. Thus, the growth of fish inner ear otoliths seems to be an appropriate parameter to assess the quality of "simulated weightlessness" provided by a particular simulation device. Since CR and WVR are in worldwide use to simulate weightlessness conditions on ground using small-sized specimens, we were prompted to directly compare the effects of CR and WVR on otolith growth using developing Cichlids as the model organism. Animals were simultaneously subjected to CR and WVR from a point of time when otolith primordia had begun to calcify both within the utricle (gravity perception) and the saccule (hearing); the respective otoliths are the lapilli and the sagittae. Three such runs were subsequently carried out, using three different batches of fish. The runs were discontinued when the animals began to hatch. In the course of all three runs performed, CR led to larger than normal lapilli, whereas WVR had no effect on the growth of these otoliths. Regarding sagittae, CR resulted in larger than normal stones in one of the three runs. The other CR runs and all WVR runs had no effect on sagittal growth. These results clearly indicate that CR rather than WVR can be regarded as a device to simulate weightlessness using the Cichlid as a model organism. Since WVR has earlier been shown to affect otolith growth in Zebrafish, the lifestyle of an animal (mouth-breeding versus egg-laying) seems to be of considerable importance. Further studies using a variety of simulation techniques (including, e.g. magnetic levitation and random positioning) and various species are needed in order to identify the most appropriate technique to simulate weightlessness for a particular model organism.
A Method for Generating Reduced-Order Linear Models of Multidimensional Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1998-01-01
Simulation of high speed propulsion systems may be divided into two categories, nonlinear and linear. The nonlinear simulations are usually based on multidimensional computational fluid dynamics (CFD) methodologies and tend to provide high resolution results that show the fine detail of the flow. Consequently, these simulations are large, numerically intensive, and run much slower than real-time. The linear simulations are usually based on large lumping techniques that are linearized about a steady-state operating condition. These simplistic models often run at or near real-time but do not always capture the detailed dynamics of the plant. Under a grant sponsored by the NASA Lewis Research Center, Cleveland, Ohio, a new method has been developed that can be used to generate improved linear models for control design from multidimensional steady-state CFD results. This CFD-based linear modeling technique provides a small perturbation model that can be used for control applications and real-time simulations. It is important to note the utility of the modeling procedure; all that is needed to obtain a linear model of the propulsion system is the geometry and steady-state operating conditions from a multidimensional CFD simulation or experiment. This research represents a beginning step in establishing a bridge between the controls discipline and the CFD discipline so that the control engineer is able to effectively use multidimensional CFD results in control system design and analysis.
Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis
NASA Technical Reports Server (NTRS)
Bradley, James R.
2012-01-01
This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required for development of that model is then determined; we conclude that six hours of training are required to teach the skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue, therefore, that industry workers with the same technical skill set as students who have completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
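The claim that the model "contains all the fundamental functionality of a simulation model" rests on the standard discrete-event machinery: a simulation clock plus a future-event list. A minimal sketch of that machinery, using a single-server queue rather than the paper's retail supply chain (and in Python rather than Excel), might look like:

```python
import heapq, random

# Minimal discrete-event core: a clock plus a future-event list (heap).
# A single-server queue is the illustrative model, not the paper's supply chain.
random.seed(1)
events = [(random.expovariate(1.0), "arrival")]   # (time, kind)
queue, busy, served, t_end = 0, False, 0, 1000.0

while events:
    t, kind = heapq.heappop(events)               # advance clock to next event
    if t > t_end:
        break
    if kind == "arrival":
        heapq.heappush(events, (t + random.expovariate(1.0), "arrival"))
        if busy:
            queue += 1
        else:
            busy = True
            heapq.heappush(events, (t + random.expovariate(1.25), "departure"))
    else:                                         # departure: free or reuse server
        served += 1
        if queue:
            queue -= 1
            heapq.heappush(events, (t + random.expovariate(1.25), "departure"))
        else:
            busy = False

print(f"customers served in {t_end:.0f} time units: {served}")
```

In a spreadsheet the same event list is typically laid out as rows sorted by event time, but the clock/event-list logic is identical.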
A simulation model for probabilistic analysis of Space Shuttle abort modes
NASA Technical Reports Server (NTRS)
Hage, R. T.
1993-01-01
A simulation model which was developed to provide a probabilistic analysis tool to study the various space transportation system abort mode situations is presented. The simulation model is based on Monte Carlo simulation of an event-tree diagram which accounts for events during the space transportation system's ascent and its abort modes. The simulation model considers just the propulsion elements of the shuttle system (i.e., external tank, main engines, and solid boosters). The model was developed to provide a better understanding of the probability of occurrence and successful completion of abort modes during the vehicle's ascent. The results of the simulation runs discussed are for demonstration purposes only; they are not official NASA probability estimates.
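The mechanics of Monte Carlo simulation over an event tree are simple to illustrate. The sketch below uses invented branch probabilities purely as placeholders; like the report's own runs, the numbers are for demonstration only and are not NASA estimates.

```python
import random

# Monte Carlo sweep over a toy ascent event tree.
# All probabilities are illustrative placeholders, not NASA estimates.
random.seed(42)

P_ENGINE_FAIL  = 0.01    # a main-engine failure occurs during ascent
P_ABORT_RTLS_OK = 0.90   # return-to-launch-site abort succeeds
P_ABORT_TAL_OK  = 0.95   # transatlantic-abort-landing succeeds

N = 1_000_000
nominal = abort_ok = abort_fail = 0
for _ in range(N):
    if random.random() > P_ENGINE_FAIL:
        nominal += 1
        continue
    # The (random) failure time selects which abort mode is available.
    mode_ok = P_ABORT_RTLS_OK if random.random() < 0.4 else P_ABORT_TAL_OK
    if random.random() < mode_ok:
        abort_ok += 1
    else:
        abort_fail += 1

print(f"nominal {nominal/N:.4f}, abort success {abort_ok/N:.5f}, "
      f"abort failure {abort_fail/N:.5f}")
```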
Systematic approach to verification and validation: High explosive burn models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph; Scovel, Christina A.
2012-04-16
Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data are scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code, run a simulation, and generate a comparison plot showing simulated and experimental velocity gauge data. These scripts are then applied to several series of experiments and to several HE burn models. The same systematic approach is applicable to other types of material models; for example, equations of state models and material strength models.
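The automation the authors describe, reading key experimental parameters from a standardized header, generating a hydro-code input deck, running it, and plotting simulated against measured gauge velocities, can be sketched as below. The header format, input-deck keywords, and the `hydrocode` command line are all hypothetical stand-ins, not the LANL tooling.

```python
import json, subprocess
import numpy as np
import matplotlib.pyplot as plt

def compare(data_file: str, burn_model: str):
    # The data file is assumed to carry a JSON header of key experimental
    # parameters, followed by the velocity-gauge records (format hypothetical).
    header, gauges = open(data_file).read().split("\n\n", 1)
    meta = json.loads(header)

    # Generate a hydro-code input deck from the header metadata.
    deck = f"{data_file}.{burn_model}.deck"
    with open(deck, "w") as f:
        f.write(f"explosive  = {meta['HE']}\n")
        f.write(f"density    = {meta['rho0']}\n")
        f.write(f"burn_model = {burn_model}\n")

    # Run the simulation (hypothetical CLI, not a real code's interface).
    subprocess.run(["hydrocode", "-i", deck, "-o", deck + ".out"], check=True)

    # Overlay simulated and measured gauge velocities in one comparison plot.
    exp = np.loadtxt(gauges.splitlines())
    sim = np.loadtxt(deck + ".out")
    plt.plot(exp[:, 0], exp[:, 1], "k.", label="experiment")
    plt.plot(sim[:, 0], sim[:, 1], label=burn_model)
    plt.xlabel("time"); plt.ylabel("gauge velocity"); plt.legend()
    plt.savefig(deck + ".png")
```

Looping such a function over every experiment in the database and every burn model is exactly what turns scattered one-off comparisons into systematic V and V.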
Debris flow run-out simulation and analysis using a dynamic model
NASA Astrophysics Data System (ADS)
Melo, Raquel; van Asch, Theo; Zêzere, José L.
2018-02-01
Only two months after a huge forest fire occurred in the upper part of a valley located in central Portugal, several debris flows were triggered by intense rainfall. The event caused infrastructural and economic damage, although no lives were lost. The present research aims to simulate the run-out of two debris flows that occurred during the event as well as to calculate via back-analysis the rheological parameters and the excess rain involved. Thus, a dynamic model was used, which integrates surface runoff, concentrated erosion along the channels, propagation and deposition of flow material. Afterwards, the model was validated using 32 debris flows triggered during the same event that were not considered for calibration. The rheological and entrainment parameters obtained for the most accurate simulation were then used to perform three scenarios of debris flow run-out at the basin scale. The results were compared with the existing buildings exposed in the study area, and the worst-case scenario showed a potential inundation that may affect 345 buildings. In addition, six streams where debris flows occurred in the past and caused material damage and loss of lives were identified.
Characterization of the Body-to-Body Propagation Channel for Subjects during Sports Activities.
Mohamed, Marshed; Cheffena, Michael; Moldsvor, Arild
2018-02-18
Body-to-body wireless networks (BBWNs) have great potential to find applications in team sports activities among others. However, successful design of such systems requires great understanding of the communication channel, as the movement of the body components causes time-varying shadowing and fading effects. In this study, we present results of a measurement campaign of BBWNs during running and cycling activities. Among others, the results indicated the presence of good and bad states, with each state following a specific distribution for the considered propagation scenarios. This motivated the development of a two-state semi-Markov model for simulation of the communication channels. The simulation model was validated against the available measurement data in terms of first and second order statistics and showed good agreement. The first order statistics obtained from the simulation model as well as the measured results were then used to analyze the performance of the BBWN channels under running and cycling activities in terms of capacity and outage probability. Cycling channels showed better performance than running channels, having higher channel capacity and lower outage probability, regardless of the speed of the subjects involved in the measurement campaign.
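A two-state semi-Markov channel is straightforward to simulate: the chain alternates between a good and a bad shadowing state, each with its own dwell-time distribution and per-state fading distribution. The distributions and parameter values below are illustrative assumptions, not the ones fitted to the measurement data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state semi-Markov channel sketch: alternate "good"/"bad" shadowing
# states with non-exponential dwell times (hence semi-Markov) and a
# state-specific gain distribution. All parameters are illustrative.
def simulate(T=60.0, fs=100.0):
    t, state, gains = 0.0, "good", []
    while t < T:
        # Lognormal dwell times; bad states are shorter on average here.
        dwell = rng.lognormal(0.0, 0.5) if state == "good" else rng.lognormal(-0.7, 0.5)
        n = max(1, int(dwell * fs))
        if state == "good":   # mild fading about a strong mean (dB)
            gains.append(rng.normal(-60, 2, n))
        else:                 # deep shadowing with larger spread (dB)
            gains.append(rng.normal(-75, 5, n))
        state = "bad" if state == "good" else "good"
        t += dwell
    return np.concatenate(gains)

g = simulate()
print(f"{g.size} samples, mean {g.mean():.1f} dB, std {g.std():.1f} dB")
```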
Active Learning for Directed Exploration of Complex Systems
NASA Technical Reports Server (NTRS)
Burl, Michael C.; Wang, Esther
2009-01-01
Physics-based simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. Such codes provide the highest-fidelity representation of system behavior, but are often so slow to run that insight into the system is limited. For example, conducting an exhaustive sweep over a d-dimensional input parameter space with k steps along each dimension requires k^d simulation trials (translating into k^d CPU-days for one of our current simulations). An alternative is directed exploration, in which the next simulation trials are cleverly chosen at each step. Given the results of previous trials, supervised learning techniques (SVM, KDE, GP) are applied to build up simplified predictive models of system behavior. These models are then used within an active learning framework to identify the most valuable trials to run next. Several active learning strategies are examined, including a recently-proposed information-theoretic approach. Performance is evaluated on a set of thirteen synthetic oracles, which serve as surrogates for the more expensive simulations and enable the experiments to be replicated by other researchers.
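One round of such directed exploration, fitting a surrogate to the completed trials and choosing the next trial where the surrogate is least certain, can be sketched with a Gaussian-process regressor. The cheap analytic "oracle" below stands in for the expensive simulation, and maximum predictive standard deviation is just one simple acquisition rule among those the paper examines.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Cheap synthetic oracle standing in for the expensive simulation code.
oracle = lambda x: np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

X = rng.uniform(0, 1, (5, 2))             # a few seed trials
y = oracle(X)
candidates = rng.uniform(0, 1, (2000, 2)) # pool of possible next trials

for step in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)][None, :]   # most uncertain = run it next
    X = np.vstack([X, x_next])
    y = np.append(y, oracle(x_next))

print(f"ran {len(X)} trials; final max predictive std "
      f"{gp.predict(candidates, return_std=True)[1].max():.3f}")
```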
Running SW4 On New Commodity Technology Systems (CTS-1) Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, Arthur J.; Petersson, N. Anders; Pitarka, Arben
We have recently been running earthquake ground motion simulations with SW4 on the new capacity computing systems, called the Commodity Technology Systems - 1 (CTS-1), at Lawrence Livermore National Laboratory (LLNL). SW4 is a fourth order time domain finite difference code developed by LLNL and distributed by the Computational Infrastructure for Geodynamics (CIG). SW4 simulates seismic wave propagation in complex three-dimensional Earth models including anelasticity and surface topography. We are modeling near-fault earthquake strong ground motions for the purposes of evaluating the response of engineered structures, such as nuclear power plants and other critical infrastructure. Engineering analysis of structures requires the inclusion of high frequencies which can cause damage, but are often difficult to include in simulations because of the need for large memory to model fine grid spacing on large domains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John
Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature, using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds-Averaged Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).
2006-06-01
The models generated for this thesis were set to run for 60 minutes. To run the simulation for the set time, the analyst provides a random number seed to … The IMPRINT workload value of 60 has been used by a consensus of workload modeling SMEs to represent the 'high' threshold, while the …
CAMPUS-MINNESOTA User Information Manual. Project PRIME Report, Number 12.
ERIC Educational Resources Information Center
Andrew, Gary M.
The purpose of this report is to aid the use of the computer simulation model, CAMPUS-M, in 4 specific areas: (1) the conceptual modeling of the institution; (2) the preparation of machine readable input data; (3) the preparation of simulation and report commands for the model; and (4) the actual running of the program on a CDC 6600 computer.…
Zhu, Q.; Jiang, H.; Liu, J.; Wei, X.; Peng, C.; Fang, X.; Liu, S.; Zhou, G.; Yu, S.; Ju, W.
2010-01-01
The Integrated Biosphere Simulator is used to evaluate the spatial and temporal patterns of the crucial hydrological variables [run-off and actual evapotranspiration (AET)] of the water balance across China for the period 1951–2006 including a precipitation analysis. Results suggest three major findings. First, simulated run-off captured 85% of the spatial variability and 80% of the temporal variability for 85 hydrological gauges across China. The mean relative errors were within 20% for 66% of the studied stations and within 30% for 86% of the stations. The Nash–Sutcliffe coefficients indicated that the quantity pattern of run-off was also captured acceptably except for some watersheds in southwestern and northwestern China. The possible reasons for underestimation of run-off in the Tibetan plateau include underestimation of precipitation and uncertainties in other meteorological data due to complex topography, and simplified representations of the soil depth attribute and snow processes in the model. Second, simulated AET matched reasonably with estimated values calculated as the residual of precipitation and run-off for watersheds controlled by the hydrological gauges. Finally, trend analysis based on the Mann–Kendall method indicated that significant increasing and decreasing patterns in precipitation appeared in the northwest part of China and the Yellow River region, respectively. Significant increasing and decreasing trends in AET were detected in the Southwest region and the Yangtze River region, respectively. In addition, the Southwest region, northern China (including the Heilongjiang, Liaohe, and Haihe Basins), and the Yellow River Basin showed significant decreasing trends in run-off, and the Zhemin hydrological region showed a significant increasing trend.
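For reference, the Nash–Sutcliffe coefficient used above is NSE = 1 − Σ(Qsim − Qobs)² / Σ(Qobs − Q̄obs)²: a value of 1 is a perfect match, 0 matches the mean-flow benchmark, and negative values are worse than predicting the observed mean. A minimal implementation with illustrative (invented) numbers:

```python
import numpy as np

def nash_sutcliffe(q_obs, q_sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean-flow
    benchmark, negative is worse than predicting the observed mean."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 1.0 - np.sum((q_sim - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

# Toy monthly run-off series (illustrative numbers only, mm/month).
obs = [12.0, 30.5, 55.1, 80.3, 60.2, 25.4]
sim = [10.2, 33.0, 50.7, 84.9, 57.8, 28.1]
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```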
Overcoming Microsoft Excel's Weaknesses for Crop Model Building and Simulations
ERIC Educational Resources Information Center
Sung, Christopher Teh Boon
2011-01-01
Using spreadsheets such as Microsoft Excel for building crop models and running simulations can be beneficial. Excel is easy to use, powerful, and versatile, and it requires the least proficiency in computer programming compared to other programming platforms. Excel, however, has several weaknesses: it does not directly support loops for iterative…
Key algorithms used in GR02: A computer simulation model for predicting tree and stand growth
Garrett A. Hughes; Paul E. Sendak
1985-01-01
GR02 is an individual tree, distance-independent simulation model for predicting tree and stand growth over time. It performs five major functions during each run: (1) updates diameter at breast height, (2) updates total height, (3) estimates mortality, (4) determines regeneration, and (5) updates crown class.
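The per-cycle structure of such a distance-independent tree-list simulator is easy to sketch. The growth, mortality, and regeneration relations below are invented placeholders, not GR02's fitted equations; only the five-step skeleton follows the description above.

```python
import random

# Schematic of a GR02-style growth cycle over a distance-independent tree
# list. All growth/mortality/regeneration relations are invented placeholders.
random.seed(7)
stand = [{"dbh": random.uniform(10, 40), "height": random.uniform(8, 25),
          "crown": "codominant"} for _ in range(50)]

def grow_cycle(stand):
    survivors = []
    for tree in stand:
        tree["dbh"] += 0.4 * (1 - tree["dbh"] / 60)         # (1) update DBH
        tree["height"] += 0.25 * (1 - tree["height"] / 30)  # (2) update total height
        if random.random() < 0.01:                          # (3) stochastic mortality
            continue
        survivors.append(tree)
    if random.random() < 0.3:                               # (4) regeneration
        survivors.append({"dbh": 2.5, "height": 2.0, "crown": "suppressed"})
    for tree in survivors:                                  # (5) update crown class
        tree["crown"] = "dominant" if tree["dbh"] > 35 else tree["crown"]
    return survivors

for _ in range(10):
    stand = grow_cycle(stand)
print(f"{len(stand)} trees after 10 cycles, "
      f"mean DBH {sum(t['dbh'] for t in stand)/len(stand):.1f} cm")
```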
A Digital Computer Simulation of Cardiovascular and Renal Physiology.
ERIC Educational Resources Information Center
Tidball, Charles S.
1979-01-01
Presents the physiological MACPEE, one of a family of digital computer simulations used in Canada and Great Britain. A general description of the model is provided, along with a sample of computer output format, options for making interventions, advanced capabilities, an evaluation, and technical information for running a MAC model. (MA)
Using Voice Recognition Equipment to Run the Warfare Environmental Simulator (WES),
1981-03-01
Simulations and models are often used. War games are a type of simulation frequently used by the military to evaluate C3 effectiveness. Through the use of a … to 162 words or short phrases (Appendix B). For the experiment, a Threshold Model T600 discrete … The Model T600 terminal used in this experiment consists of an analog speech preprocessor, microcomputer, CRT/keyboard unit, and magnetic tape cartridge unit
Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dali; Yuan, Fengming; Hernandez, Benjamin
Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.
Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations
Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...
2017-01-01
Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.
An experimental investigation of flow around a vehicle passing through a tornado
NASA Astrophysics Data System (ADS)
Suzuki, Masahiro; Obara, Kouhei; Okura, Nobuyuki
2016-03-01
Flow around a vehicle running through a tornado was investigated experimentally. A tornado simulator was developed to generate a tornado-like swirl flow. A PIV study confirmed that the simulator generates the two-celled vortices that are observed in natural tornadoes. A moving test rig was developed to run a 1/40-scale train-shaped model vehicle under the tornado simulator. The car contained pressure sensors and a data logger with an A/D converter to measure unsteady surface pressures during its run through the swirling flow. Aerodynamic forces acting on the vehicle were estimated from the pressure data. The results show that the aerodynamic forces change their magnitude and direction depending on the position of the car in the swirling flow. The asymmetry of the forces about the vortex centre suggests the vehicle itself may deform the flow field.
The Initial Conditions and Evolution of Isolated Galaxy Models: Effects of the Hot Gas Halo
NASA Astrophysics Data System (ADS)
Hwang, Jeong-Sun; Park, Changbom; Choi, Jun-Hwan
2013-02-01
We construct several Milky Way-like galaxy models containing a gas halo (as well as gaseous and stellar disks, a dark matter halo, and a stellar bulge) following either an isothermal or an NFW density profile with varying mass and initial spin. In addition, galactic winds associated with star formation are tested in some of the simulations. We evolve these isolated galaxy models using the GADGET-3 N-body/hydrodynamic simulation code, paying particular attention to the effects of the gaseous halo on the evolution. We find that the evolution of the models is strongly affected by the adopted gas halo component, particularly in the gas dissipation and the star formation activity in the disk. The model without a gas halo shows an increasing star formation rate (SFR) at the beginning of the simulation for some hundreds of millions of years and then a continuously decreasing rate to the end of the run at 3 Gyr. The SFRs in the models with a gas halo, depending on the density profile and the total mass of the halo, instead emerge to be either relatively flat throughout the simulations or increasing until the middle of the run (over a gigayear) and then decreasing to the end. The models with the more centrally concentrated NFW gas halo show overall higher SFRs than those with the isothermal gas halo of equal mass. Gas accretion from the halo onto the disk also occurs more in the models with the NFW gas halo; however, this is shown to take place mostly in the inner part of the disk and not to contribute significantly to star formation unless the gas halo has a very high central density. Rotation of the gas halo is found to lower the SFR in the model. The SFRs in the runs including galactic winds are found to be lower than those in the same runs without winds. We conclude that the effects of a hot gaseous halo on the evolution of galaxies are generally too significant to be simply ignored. We also expect that more hydrodynamical processes in galaxies could be understood through numerical simulations employing both gas disk and gas halo components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S. Y.
The Au beam at the RHIC ramp in run 2014 is reviewed together with runs 2011 and 2012. Observed bunch length and longitudinal emittance are compared with the IBS simulations. The IBS growth rate of the longitudinal emittance in run 2014 is similar to that in run 2011, and both are larger than in run 2012. This is explained by the large transverse emittance at high intensity observed in run 2012, but not in run 2014. The big improvement of the AGS ramping in run 2014 might be related to this change. The importance of the injector intensity improvement in run 2014 is emphasized, which gives rise to the initial luminosity improvement of 50% in run 2014, compared with the previous Au-Au run 2011. In addition, a modified IBS model, which is calibrated using the RHIC Au runs from 9.8 GeV/n to 100 GeV/n, is presented and used in the study.
A Lagrangian stochastic model for aerial spray transport above an oak forest
Wang, Yansen; Miller, David R.; Anderson, Dean E.; McManus, Michael L.
1995-01-01
A transport model for aerial spray droplets has been developed by applying recent advances in Lagrangian stochastic simulation of heavy particles. A two-dimensional Lagrangian stochastic model was adopted to simulate spray droplet dispersion in atmospheric turbulence by adjusting the Lagrangian integral time scale along the drop trajectory. The other major physical processes affecting the transport of spray droplets above a forest canopy, the aircraft wingtip vortices and droplet evaporation, were also included in each time step of the droplets' transport. The model was evaluated using data from an aerial spray field experiment. In generally neutral stability conditions, the accuracy of the model predictions varied from run to run as expected. The average root-mean-square error was 24.61 IU cm−2, and the average relative error was 15%. The model prediction was adequate in two-dimensional steady wind conditions, but was less accurate in variable wind conditions. The results indicated that the model can successfully simulate the ensemble-average transport of aerial spray droplets under neutral, steady atmospheric wind conditions.
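The core of such a model is a Langevin step for the droplet's turbulent velocity, with the Lagrangian integral time scale reduced along the trajectory because a settling heavy particle falls out of the eddies carrying it. The one-particle vertical sketch below uses illustrative parameter values and a standard textbook form of the heavy-particle time-scale correction, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# One-particle Lagrangian stochastic (Langevin) sketch for a heavy droplet:
# the turbulent vertical velocity decorrelates over an integral time scale
# T_L, reduced for heavy particles (crossing-trajectories effect).
# All values are illustrative.
dt, sigma_w, TL_fluid, v_settle = 0.01, 0.5, 2.0, 0.3  # s, m/s, s, m/s
z, w = 30.0, 0.0                                       # release height, fluctuation

while z > 0.0:
    # Heavy-particle time scale shrinks as settling outruns the eddies.
    TL = TL_fluid / np.sqrt(1.0 + (v_settle / sigma_w) ** 2)
    # Langevin update: memory term plus random forcing with matched variance.
    w += -w * dt / TL + sigma_w * np.sqrt(2.0 * dt / TL) * rng.standard_normal()
    z += (w - v_settle) * dt

print("droplet deposited")
```

An ensemble of such trajectories, with the wingtip-vortex velocity field and evaporation added at each step, yields the deposition patterns the model predicts.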
Challenges in the development of very high resolution Earth System Models for climate science
NASA Astrophysics Data System (ADS)
Rasch, Philip J.; Xie, Shaocheng; Ma, Po-Lun; Lin, Wuyin; Wan, Hui; Qian, Yun
2017-04-01
The authors represent the 20+ members of the ACME atmosphere development team. The US Department of Energy (DOE) has, like many other organizations around the world, identified the need for an Earth System Model capable of rapid completion of decade- to century-length simulations at very high (vertical and horizontal) resolution with good climate fidelity. Two years ago DOE initiated a multi-institution effort called ACME (Accelerated Climate Modeling for Energy) to meet this extraordinary challenge, targeting a model eventually capable of running at 10-25 km horizontal and 20-400 m vertical resolution through the troposphere on exascale computational platforms at speeds sufficient to complete 5+ simulated years per day. I will outline the challenges our team has encountered in development of the atmosphere component of this model, and the strategies we have been using for tuning and debugging a model that we can barely afford to run on today's computational platforms. These strategies include: 1) evaluation at lower resolutions; 2) ensembles of short simulations to explore parameter space, and perform rough tuning and evaluation; 3) use of regionally refined versions of the model for probing high resolution model behavior at less expense; 4) use of "auto-tuning" methodologies for model tuning; and 5) brute force long climate simulations.
Aspen Modeling of the Bayer Process
NASA Astrophysics Data System (ADS)
Langa, J. M.; Russell, T. G.; O'Neill, G. A.; Gacka, P.; Shah, V. B.; Stephenson, J. L.; Snyder, J. G.
The ASPEN simulator was used to model Alcoa's Pt. Comfort Bayer refinery. All areas of the refinery including the lakes and powerhouse were modeled. Each area model was designed to be run stand alone or integrated with others for a full plant model.
Pilot-in-the Loop CFD Method Development
2016-10-20
… State University. All software supporting piloted simulations must run at real-time speeds or faster. This requirement drives the number of … objects in the environment. In turn, this flowfield affects the local aerodynamics of the main rotor blade sections, affecting blade air loads, and … model, empirical models of ground effect and rotor/airframe interactions) are disabled when running in fully coupled mode, so as to not "double count"
NASA Astrophysics Data System (ADS)
Khodayari, Arezoo; Olsen, Seth C.; Wuebbles, Donald J.; Phoenix, Daniel B.
2015-07-01
Atmospheric chemistry-climate models are often used to calculate the effect of aviation NOx emissions on atmospheric ozone (O3) and methane (CH4). Due to the long (∼10 yr) atmospheric lifetime of methane, model simulations must be run for long time periods, typically for more than 40 simulation years, to reach steady-state if using CH4 emission fluxes. Because of the computational expense of such long runs, studies have traditionally used specified CH4 mixing ratio lower boundary conditions (BCs) and then applied a simple parameterization based on the change in CH4 lifetime between the control and NOx-perturbed simulations to estimate the change in CH4 concentration induced by NOx emissions. In this parameterization a feedback factor (typically a value of 1.4) is used to account for the feedback of CH4 concentrations on its lifetime. Modeling studies comparing simulations using CH4 surface fluxes and fixed mixing ratio BCs are used to examine the validity of this parameterization. The latest version of the Community Earth System Model (CESM), with the CAM5 atmospheric model, was used for this study. Aviation NOx emissions for 2006 were obtained from the AEDT (Aviation Environmental Design Tool) global commercial aircraft emissions. Results show a 31.4 ppb change in CH4 concentration when estimated using the parameterization and a 1.4 feedback factor, and a 28.9 ppb change when the concentration was directly calculated in the CH4 flux simulations. The model calculated value for CH4 feedback on its own lifetime agrees well with the 1.4 feedback factor. Systematic comparisons between the separate runs indicated that the parameterization technique overestimates the CH4 concentration by 8.6%. Therefore, it is concluded that the estimation technique is good to within ∼10% and decreases the computational requirements in our simulations by nearly a factor of 8.
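The arithmetic behind the quoted comparison is compact enough to show directly: the parameterization estimates the CH4 change from the NOx-induced lifetime change scaled by the feedback factor, and the two headline numbers from the abstract give the quoted overestimate.

```python
# Comparison quoted in the abstract: parameterized vs. directly simulated
# CH4 change. Only these two numbers come from the text; the 1.4 feedback
# factor is the conventional value the parameterization uses.
d_ch4_param = 31.4   # ppb, from the fixed-BC + feedback-factor parameterization
d_ch4_flux  = 28.9   # ppb, computed directly in the CH4-flux simulations

overestimate = (d_ch4_param - d_ch4_flux) / d_ch4_flux
print(f"parameterization overestimates by {100 * overestimate:.2f}%")  # ~8.6%
```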
Red-light running violation prediction using observational and simulator data.
Jahangiri, Arash; Rakha, Hesham; Dingus, Thomas A
2016-11-01
In the United States, 683 people were killed and an estimated 133,000 were injured in crashes due to running red lights in 2012. To help prevent or mitigate crashes caused by running red lights, these violations need to be identified before they occur, so that both the road users in potential danger (i.e., drivers, pedestrians, etc.) and the infrastructure can be notified and actions can be taken accordingly. Two different data sets were used to assess the feasibility of developing red-light running (RLR) violation prediction models: (1) observational data and (2) driver simulator data. Both data sets included common factors, such as time to intersection (TTI), distance to intersection (DTI), and velocity at the onset of the yellow indication. However, the observational data set provided additional factors that the simulator data set did not, and vice versa. The observational data included vehicle information (e.g., speed, acceleration, etc.) for several different time frames. For each vehicle approaching an intersection in the observational data set, the required data were extracted from several time frames as the vehicle drew closer to the intersection. However, since the observational data were inherently anonymous, driver factors such as age and gender were unavailable in the observational data set. Conversely, the simulator data set contained age and gender. In addition, the simulator data included a secondary (non-driving) task factor and a treatment factor (i.e., incoming/outgoing calls while driving). The simulator data only included vehicle information for certain time frames (e.g., yellow onset); the data did not provide vehicle information for several different time frames while vehicles were approaching an intersection. In this study, the random forest (RF) machine-learning technique was adopted to develop RLR violation prediction models. Factor importance was obtained for different models and different data sets to show how differently the factors influence the performance of each model. A sensitivity analysis showed that the factor importance for identifying RLR violations changed when data from different time frames were used to develop the prediction models. TTI, DTI, the required deceleration parameter (RDP), and velocity at the onset of a yellow indication were among the most important factors identified by both the models constructed using observational data and those using simulator data. Furthermore, in addition to the factors obtained from a point in time (i.e., yellow onset), valuable information suitable for RLR violation prediction was obtained from defined monitoring periods. It was found that period lengths of 2-6m contributed to the best model performance.
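A random-forest violation classifier on these factors takes only a few lines with scikit-learn. The records below are synthetic stand-ins generated from an invented decision rule, not the study's observational or simulator data; only the feature set (TTI, DTI, RDP, yellow-onset speed) follows the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic vehicle records at yellow onset (illustrative, not the study data).
n = 5000
v = rng.uniform(8, 22, n)            # speed at yellow onset (m/s)
dti = rng.uniform(5, 90, n)          # distance to intersection (m)
tti = dti / v                        # time to intersection (s)
rdp = v**2 / (2 * 9.81 * dti)        # required deceleration parameter (g)

# Toy ground truth: short TTI and high RDP make stopping unlikely -> violation.
p_viol = 1 / (1 + np.exp(-(2.5 * (rdp - 0.35) + 1.5 * (2.0 - tti))))
y = rng.random(n) < p_viol

X = np.column_stack([tti, dti, rdp, v])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
rf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_tr, y_tr)

print("AUC:", round(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]), 3))
for name, imp in zip(["TTI", "DTI", "RDP", "speed"], rf.feature_importances_):
    print(f"{name}: {imp:.2f}")     # feature importances, as in the paper's analysis
```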
NASA Astrophysics Data System (ADS)
Caballero Bendixsen, Luis; Bott-Suzuki, Simon; Cordaro, Samuel; Krishnan, Mahadevan; Chapman, Stephen; Coleman, Phil; Chittenden, Jeremy
2015-11-01
Results will be shown from coordinated experiments and MHD simulations of magnetically driven implosions, with an emphasis on current diffusion and heat transport. Experiments are run on a Mather-type dense plasma focus (DPF-3, Vc: 20 kV, Ip: 480 kA, E: 5.8 kJ). Typical experiments are run at 300 kA and a 0.33 Hz repetition rate with different gas loads (Ar, Ne, and He) at pressures of ~1-3 Torr, usually gathering 1000 shots per day. Simulations are run on a 96-core HP blade server cluster using 3 GHz processors with 4 GB RAM per node. Preliminary results show axial- and radial-phase plasma sheath velocities of ~1×10^5 m/s. These are in agreement with the snow-plough model of DPFs. Peak magnetic fields of ~1 Tesla are measured in the radial compression phase. Electron densities on the order of 10^18 cm^-3 are anticipated. Comparison between 2D and 3D models and empirical results shows good agreement in the axial and radial phases.
NASA Astrophysics Data System (ADS)
Wang, Jiajia; Ward, Steven N.; Xiao, Lili
2015-06-01
Flow-like landslides are rapidly moving fluid-solid mixtures that can cause significant destruction along paths that run far from their original sources. Existing models for run-out prediction and motion simulation of flow-like landslides have many limitations. In this paper, we develop a new method named `Tsunami Squares' to simulate the generation, propagation and stoppage of flow-like landslides based on conservation of volume and momentum. Landslide materials in the new method form divisible squares that are displaced, then further fractured. The squares move under the influence of gravity-driven acceleration and suffer decelerations due to basal and dynamic frictions. Distinctively, this method takes into account solid and fluid mechanics, particle interactions and flow regime transitions. We apply this approach to simulate the 1982 El Picacho landslide in San Salvador, capital city of El Salvador. Landslide products from Tsunami Squares such as run-out distance, velocities, erosion and deposition depths and impacted area agree well with field-investigated and eyewitness data.
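The momentum balance that drives such run-out models, gravity-driven acceleration opposed by basal (Coulomb) and velocity-dependent dynamic friction, can be caricatured with a single sliding block. This is not the Tsunami Squares algorithm (which tracks many divisible, fracturing squares and conserves volume as well as momentum), and all coefficients below are illustrative.

```python
import math

# One-block caricature of the run-out momentum balance: gravity minus
# Coulomb basal friction minus velocity-squared dynamic friction.
g, dt = 9.81, 0.1
slope_deg, mu_basal, k_dyn = 25.0, 0.30, 0.004   # slope, friction coefficients
theta = math.radians(slope_deg)

v, x = 0.0, 0.0
while True:
    if x > 400.0:
        theta = 0.0          # flatten the slope after 400 m to force deposition
    a = g * math.sin(theta) - mu_basal * g * math.cos(theta) - k_dyn * v * v
    v = max(0.0, v + a * dt)
    x += v * dt
    if v == 0.0 and x > 400.0:
        break

print(f"run-out distance ~ {x:.0f} m")
```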
SIDM on FIRE: hydrodynamical self-interacting dark matter simulations of low-mass dwarf galaxies
NASA Astrophysics Data System (ADS)
Robles, Victor H.; Bullock, James S.; Elbert, Oliver D.; Fitts, Alex; González-Samaniego, Alejandro; Boylan-Kolchin, Michael; Hopkins, Philip F.; Faucher-Giguère, Claude-André; Kereš, Dušan; Hayward, Christopher C.
2017-12-01
We compare a suite of four simulated dwarf galaxies formed in 10^10 M⊙ haloes of collisionless cold dark matter (CDM) with galaxies simulated in the same haloes with an identical galaxy formation model but a non-zero cross-section for DM self-interactions. These cosmological zoom-in simulations are part of the Feedback In Realistic Environments (FIRE) project and utilize the FIRE-2 model for hydrodynamics and galaxy formation physics. We find the stellar masses of the galaxies formed in self-interacting dark matter (SIDM) with σ/m = 1 cm^2 g^-1 are very similar to those in CDM (spanning M⋆ ≈ 10^5.7-7.0 M⊙) and all runs lie on a similar stellar mass-size relation. The logarithmic DM density slope (α = d log ρ/d log r) in the central 250-500 pc remains steeper than α = -0.8 for the CDM-Hydro simulations with stellar mass M⋆ ∼ 10^6.6 M⊙ and core-like in the most massive galaxy. In contrast, every SIDM hydrodynamic simulation yields a flatter profile, with α > -0.4. Moreover, the central density profiles predicted in SIDM runs without baryons are similar to the SIDM runs that include FIRE-2 baryonic physics. Thus, SIDM appears to be much more robust to the inclusion of (potentially uncertain) baryonic physics than CDM on this mass scale, suggesting that SIDM will be easier to falsify than CDM using low-mass galaxies. Our FIRE simulations predict that galaxies less massive than M⋆ ≲ 3 × 10^6 M⊙ provide potentially ideal targets for discriminating models, with SIDM producing substantial cores in such tiny galaxies and CDM producing cusps.
NASA Astrophysics Data System (ADS)
Bel Hadj Kacem, Mohamed Salah
All hydrological processes are affected by the spatial variability of the physical parameters of the watershed, and also by human intervention on the landscape. The water outflow from a watershed strictly depends on the spatial and temporal variabilities of the physical parameters of the watershed. It is now apparent that the integration of mathematical models into GISs can benefit both GIS and three-dimensional environmental models: a true modeling capability can help the modeling community bridge the gap between planners, scientists, decision-makers and end-users. The main goal of this research is to design a practical tool to simulate run-off water surface using Geographic Information Systems and to simulate the hydrological behavior by the Finite Element Method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Covey, Curt; Lucas, Donald D.; Trenberth, Kevin E.
2016-03-02
This document presents the large scale water budget statistics of a perturbed input-parameter ensemble of atmospheric model runs. The model is Version 5.1.02 of the Community Atmosphere Model (CAM). These runs are the “C-Ensemble” described by Qian et al., “Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5” (Journal of Advances in Modeling the Earth System, 2015). As noted by Qian et al., the simulations are “AMIP type” with temperature and sea ice boundary conditions chosen to match surface observations for the five year period 2000-2004. There are 1100 ensemble members in addition to one run with default input-parameter values.
Case Studies of Forecasting Ionospheric Total Electron Content
NASA Astrophysics Data System (ADS)
Mannucci, A. J.; Meng, X.; Verkhoglyadova, O. P.; Tsurutani, B.; McGranaghan, R. M.
2017-12-01
We report on medium-range forecast-mode runs of ionosphere-thermosphere coupled models that calculate ionospheric total electron content (TEC), focusing on low-latitude daytime conditions. A medium-range forecast-mode run refers to simulations that are driven by inputs that can be predicted 2-3 days in advance, for example based on simulations of the solar wind. We will present results from a weak geomagnetic storm caused by a high-speed solar wind stream on June 29, 2012. Simulations based on the Global Ionosphere Thermosphere Model (GITM) and the Thermosphere Ionosphere Electrodynamic General Circulation Model (TIEGCM) significantly overestimate TEC in certain low-latitude daytime regions, compared to TEC maps based on observations. We will present the results from a more intense coronal mass ejection (CME) driven storm where the simulations are closer to observations. We compare high-latitude data sets to model inputs, such as auroral boundary and convection patterns, to assess the degree to which poorly estimated high-latitude drivers may be the largest cause of discrepancy between simulations and observations. Our results reveal many factors that can affect the accuracy of forecasts, including the fidelity of empirical models used to estimate high-latitude precipitation patterns, or observation proxies for solar EUV spectra, such as the F10.7 index. Implications for forecasts with few-day lead times are discussed.
Numerical simulations of Hurricane Katrina (2005) in the turbulent gray zone
NASA Astrophysics Data System (ADS)
Green, Benjamin W.; Zhang, Fuqing
2015-03-01
Current numerical simulations of tropical cyclones (TCs) use a horizontal grid spacing as small as Δx = 10^3 m, with all boundary layer (BL) turbulence parameterized. Eventually, TC simulations can be conducted at Large Eddy Simulation (LES) resolution, which requires Δx to fall in the inertial subrange (often <10^2 m) to adequately resolve the large, energy-containing eddies. Between the two lies the so-called "terra incognita" because some of the assumptions used by mesoscale models and LES to treat BL turbulence are invalid. This study performs several 4-6 h simulations of Hurricane Katrina (2005) without a BL parameterization at extremely fine Δx [333, 200, and 111 m, hereafter "Large Eddy Permitting (LEP) runs"] and compares with mesoscale simulations with BL parameterizations (Δx = 3 km, 1 km, and 333 m, hereafter "PBL runs"). There are profound differences in the hurricane BL structure between the PBL and LEP runs: the former have a deeper inflow layer and secondary eyewall formation, whereas the latter have a shallow inflow layer without a secondary eyewall. Among the LEP runs, decreased Δx yields weaker subgrid-scale vertical momentum fluxes, but the sum of subgrid-scale and "grid-scale" fluxes remains similar. There is also evidence that the size of the prevalent BL eddies depends upon Δx, suggesting that convergence to true LES has not yet been reached. Nevertheless, the similarities in the storm-scale BL structure among the LEP runs indicate that the net effect of the BL on the rest of the hurricane may be somewhat independent of Δx.
Evolution Model and Simulation of Profit Model of Agricultural Products Logistics Financing
NASA Astrophysics Data System (ADS)
Yang, Bo; Wu, Yan
2018-03-01
The agricultural products logistics financing warehouse business mainly involves three parties: agricultural production and processing enterprises, third-party logistics enterprises, and financial institutions. To enable the three parties to achieve a win-win outcome, the article first gives the replicator dynamics and evolutionarily stable strategies of the three parties' business participation, then uses the NetLogo simulation platform and the Multi-Agent overall modeling and simulation method to establish an evolutionary game simulation model, runs the model under different revenue parameters, and finally analyzes the simulation results.
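The tripartite replicator dynamics reduce to three coupled ODEs of the form dx/dt = x(1 − x)(U_participate − U_not). The payoff-gap expressions and revenue parameters below are schematic placeholders, not the paper's; with these values cooperation is stable, and other parameter choices drive the system to the no-participation equilibrium, which is exactly the kind of sweep the NetLogo runs perform.

```python
# Replicator-dynamics sketch for the tripartite game: x, y, z are the
# probabilities that producers, 3PL firms, and banks choose "participate".
# The payoff gaps below are schematic placeholders, not the paper's.
def payoff_gaps(x, y, z):
    gx = 4.0 * y * z - 1.0        # producer gains mainly when both partners join
    gy = 3.0 * x * z - 0.8        # 3PL firm's participation payoff gap
    gz = 5.0 * x * y - 1.2        # bank's participation payoff gap
    return gx, gy, gz

x, y, z, dt = 0.6, 0.7, 0.5, 0.01
for _ in range(20000):            # forward-Euler integration of the ODEs
    gx, gy, gz = payoff_gaps(x, y, z)
    x += dt * x * (1 - x) * gx    # dx/dt = x(1-x)(U_participate - U_not)
    y += dt * y * (1 - y) * gy
    z += dt * z * (1 - z) * gz

print(f"long-run participation: producers {x:.2f}, 3PL {y:.2f}, banks {z:.2f}")
```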
2017-06-01
Maintenance times from the fleet are randomly resampled when running the model to enhance model realism. The use of a simulation model to represent the … helicopter regiment. The EC665, or Airbus Helicopter TIGER (Figure 3), is a four-bladed, twin-engine multi-role attack … migrated into the automated management system SAP Standard Product Family (SASPF), and the usage clock starts to run with the amount of the current
Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance
2013-09-09
Aluminum and ceramic-faced aluminum targets impacted by the .30cal AP M2 projectile were modeled using SPH elements. Model validation runs were conducted based on the DoP experiments described in reference … effect of material properties on DoP. Subject terms: .30cal AP M2 projectile, 7.62x39 PS projectile, SPH, Aluminum 5083, SiC, DoP experiments.
Hutchinson, C.B.
1984-01-01
This report describes a quasi-three-dimensional finite-difference model for simulation of steady-state ground-water flow in the Floridan aquifer over a 932-square-mile area that contains 10 municipal well fields. The overlying surficial aquifer contains a water table and is coupled to the Floridan aquifer by a leakage term that represents flow through a confining layer separating the two aquifers. Under the steady-state condition, all storage terms are set to zero. Use of the head-controlled flux condition allows simulated head and flow changes to occur in the Floridan aquifer at the model boundaries. Procedures used to calibrate the model, test its sensitivity to input-parameter errors, and validate its accuracy for predictive purposes are described. Also included are attachments that describe setting up and running the model. Example model-interrogation runs show anticipated drawdowns under high, average, and low recharge conditions with 10 well fields pumping simultaneously at the maximum annual permitted rates totaling 186.9 million gallons per day. (USGS)
High-speed GPU-based finite element simulations for NDT
NASA Astrophysics Data System (ADS)
Huthwaite, P.; Shi, F.; Van Pamel, A.; Lowe, M. J. S.
2015-03-01
The finite element method solved with explicit time increments is a general approach which can be applied to many ultrasound problems. It is widely used as a powerful tool within NDE for developing and testing inspection techniques, and can also be used in inversion processes. However, the solution technique is computationally intensive, requiring many calculations to be performed for each simulation, so traditionally speed has been an issue. For maximum speed, an implementation of the method, called Pogo [Huthwaite, J. Comp. Phys. 2014, doi: 10.1016/j.jcp.2013.10.017], has been developed to run on graphics cards, exploiting the highly parallelisable nature of the algorithm. Pogo typically demonstrates speed improvements of 60-90x over commercial CPU alternatives. Pogo is applied to three NDE examples, where the speed improvements are important: guided wave tomography, where a full 3D simulation must be run for each source transducer and every different defect size; scattering from rough cracks, where many simulations need to be run to build up a statistical model of the behaviour; and ultrasound propagation within coarse-grained materials where the mesh must be highly refined and many different cases run.
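What makes the explicit time-increment scheme so GPU-friendly is that, with a lumped (diagonal) mass matrix, each step is a purely local update with no system solve. A minimal 1D wave-propagation analogue of that update follows (illustrative parameters, with a finite-difference stencil standing in for the assembled FE operator):

```python
import numpy as np

# Core of an explicit time-domain solver: with a lumped mass matrix the
# central-difference update is purely local, hence highly parallelisable.
# 1D wave on a fixed-ends bar; parameters are illustrative.
n, L, c = 400, 1.0, 1.0
dx = L / (n - 1)
dt = 0.9 * dx / c                 # CFL-limited stable time step

x = np.linspace(0.0, L, n)
u = np.exp(-((x - 0.3) / 0.02) ** 2)   # initial Gaussian pulse
u_prev = u.copy()                       # zero initial velocity

for _ in range(500):
    lap = np.zeros(n)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]     # discrete spatial operator
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap  # central difference in time
    u_next[0] = u_next[-1] = 0.0                 # fixed ends
    u_prev, u = u, u_next

print(f"energy proxy after 500 steps: {np.sum(u**2):.4f}")
```

Every degree of freedom updates independently from its neighbours' previous values, which is the property a GPU implementation such as Pogo exploits.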
NASA Astrophysics Data System (ADS)
Ricchi, Antonio; Miglietta, M. Marcello; Barbariol, Francesco; Benetazzo, Alvise; Bonaldo, Davide; Falcieri, Francesco; Russo, Aniello; Sclavo, Mauro; Carniel, Sandro
2016-04-01
On November 6-8, 2011, near the Balearic Islands, an extra-tropical depression developed into a Tropical-Like Cyclone (TLC) characterized by a deep warm core, leading to a mean sea level pressure minimum of about 991 hPa, 10 m wind speeds higher than 28 m/s around the eye, and very intense rainfall, especially in the Gulf of Lion. To explore in detail the effect of the sea surface temperature on the Medicane evolution, we employed the coupled modeling system COAWST, which consists of the ROMS model for the hydrodynamics, the WRF model for the atmosphere, and SWAN for surface wave modeling. All models run over a 5 km domain (the same domain for ROMS and SWAN). COAWST was used in different configurations: in Stand Alone (SA) mode (that is, with only the atmospheric part), in atmosphere-ocean coupled mode (AO), and in a fully coupled version including also surface waves (AOW). Several sensitivity simulations performed with the SA approach were undertaken to simulate the TLC evolution. Especially in the later stage of the lifetime, when the cyclone was weaker, the predictability appears limited. Sensitivity simulations have considered the effect of the cumulus scheme (with an explicit scheme the Medicane does not develop and remains an extra-tropical depression) and the PBL scheme (with MYJ or MYNN the resulting Medicanes are extremely similar, although the roughness appears rather different between the two experiments). Comparing the three runs, the effects on the Medicane tracks are significant only in the later stage of the cyclone lifetime. Over the whole modeled basin, wind intensity is higher in the SA case than in both coupled runs. Compared to case AO, SA winds are about 1 m/s stronger, even though the spatial distribution is very similar (possibly because of the lower SST produced by case AO). Case AOW produces less intense winds than the SA and AO cases in the areas where the wave field is most developed (differences are about 2-4 m/s), while winds are more intense in the neighborhood of the eye of the cyclone. Moreover, the inclusion of the wave model (AOW) has implications for the water column, changing the depth of the ocean mixed layer along the track of the Medicane, so that eventually the SST in the AOW run is colder than in AO. The date chosen for the run initialization appears important: an earlier initial condition makes it possible to properly simulate the evolution of the cyclone from cyclogenesis and to include the effect of air-sea interaction through the coupled models.
The “Dry-Run” Analysis: A Method for Evaluating Risk Scores for Confounding Control
Wyss, Richard; Hansen, Ben B.; Ellis, Alan R.; Gagne, Joshua J.; Desai, Rishi J.; Glynn, Robert J.; Stürmer, Til
2017-01-01
A propensity score (PS) model's ability to control confounding can be assessed by evaluating covariate balance across exposure groups after PS adjustment. The optimal strategy for evaluating a disease risk score (DRS) model's ability to control confounding is less clear. DRS models cannot be evaluated through balance checks within the full population, and they are usually assessed through prediction diagnostics and goodness-of-fit tests. A proposed alternative is the “dry-run” analysis, which divides the unexposed population into “pseudo-exposed” and “pseudo-unexposed” groups so that differences on observed covariates resemble differences between the actual exposed and unexposed populations. With no exposure effect separating the pseudo-exposed and pseudo-unexposed groups, a DRS model is evaluated by its ability to retrieve an unconfounded null estimate after adjustment in this pseudo-population. We used simulations and an empirical example to compare traditional DRS performance metrics with the dry-run validation. In simulations, the dry run often improved assessment of confounding control, compared with the C statistic and goodness-of-fit tests. In the empirical example, PS and DRS matching gave similar results and showed good performance in terms of covariate balance (PS matching) and controlling confounding in the dry-run analysis (DRS matching). The dry-run analysis may prove useful in evaluating confounding control through DRS models. PMID:28338910
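On synthetic data the dry-run recipe takes only a few lines: reproduce exposure-like selection within the unexposed, fit the DRS among the pseudo-unexposed, and check that adjustment recovers the known null. The data-generating model and fitting choices below are invented for illustration and are much simpler than the paper's simulations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)

# Synthetic cohort with confounded exposure (all details invented).
n = 20000
x = rng.normal(size=(n, 3))                       # confounders
p_exp = 1 / (1 + np.exp(-(x @ [0.8, -0.5, 0.4])))
exposed = rng.random(n) < p_exp
y = x @ [1.0, 0.7, -0.6] + 0.5 * exposed + rng.normal(size=n)

# Reproduce exposure-like selection *within* the unexposed only, so the
# pseudo-groups' covariate differences mimic the real exposed/unexposed contrast.
ps_model = LogisticRegression().fit(x, exposed)
unexp = ~exposed
p_pseudo = ps_model.predict_proba(x[unexp])[:, 1]
pseudo_exposed = rng.random(unexp.sum()) < p_pseudo

# DRS fitted among the pseudo-unexposed, then used to adjust the pseudo-contrast;
# with no true effect in this subpopulation, the estimate should be near zero.
drs_model = LinearRegression().fit(x[unexp][~pseudo_exposed], y[unexp][~pseudo_exposed])
resid = y[unexp] - drs_model.predict(x[unexp])
est = resid[pseudo_exposed].mean() - resid[~pseudo_exposed].mean()
print(f"dry-run pseudo-effect (should be ~0): {est:.3f}")
```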
Methods Data Qualification Interim Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Sam Alessi; Tami Grimmett; Leng Vang
The overall goal of the Next Generation Nuclear Plant (NGNP) Data Management and Analysis System (NDMAS) is to maintain data provenance for all NGNP data, including the Methods component of NGNP data. Multiple means are available to access data stored in NDMAS. A web portal environment allows users to access data, view the results of qualification tests, and view graphs and charts of various attributes of the data. NDMAS also has methods for the management of the data output from VHTR simulation models and data generated from experiments designed to verify and validate the simulation codes. These simulation models represent the outcome of mathematical representations of VHTR components and systems. The methods data management approaches described herein will handle data that arise from experiment, simulation, and external sources for the main purpose of facilitating parameter estimation and model verification and validation (V&V). A model integration environment entitled ModelCenter is used to automate the storing of data from simulation model runs to the NDMAS repository. This approach does not adversely change the way computational scientists conduct their work. The method is to be used mainly to store the results of model runs that need to be preserved for auditing purposes or for display to the NDMAS web portal. This interim report demonstrates the current development of NDMAS for Methods data and discusses the data and its qualification that are currently part of NDMAS.
A series of simulated rainfall run-off experiments with applications of different manure types (cattle solid pats, poultry dry litter, swine slurry) was conducted across four seasons on a field containing 36 plots (0.75 × 2 m each), resulting in 144 rainfall run-off events....
Assimilation of Cloud Information in Numerical Weather Prediction Model in Southwest China
NASA Astrophysics Data System (ADS)
HENG, Z.
2016-12-01
Based on the ARPS Data Analysis System (ADAS) and the Weather Research and Forecasting (WRF) model, simulation experiments from July 1st 2015 to August 1st 2015 were conducted over Southwest China. In the assimilation experiment (EXP), surface observations were assimilated, and cloud information retrieved from weather Doppler radar and the Fengyun-2E (FY-2E) geostationary satellite was incorporated using the complex cloud analysis scheme in ADAS to insert microphysical variables and adjust the humidity structure in the initial condition. In the control run (CTL), surface observations were assimilated, but no cloud information was used in ADAS. The simulation of a rainstorm caused by the Southwest Vortex during 14-15 July 2015 shows that the EXP run better represents the shape and intensity of precipitation, especially the rainstorm center. A one-month inter-comparison of the initial and prediction results between the EXP and CTL runs revealed that the EXP runs produce more realistic rainfall and score higher on precipitation prediction. Keywords: NWP, rainstorm, data assimilation
The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation
NASA Astrophysics Data System (ADS)
Goulet, C.; Silva, F.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.
2015-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100 Hz) ground motions for earthquakes at regional scales. The BBP scientific software modules implement kinematic rupture generation, low- and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, ground motion amplitude calculations, and goodness-of-fit measurements. These modules are integrated into a software system that provides user-defined, repeatable calculation of ground motion seismograms using multiple alternative ground motion simulation methods, together with software utilities that can generate plots, charts, and maps. The BBP has been developed over the last five years in a collaborative scientific, engineering, and software development project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The SCEC BBP software released in 2015 can be compiled and run on recent Linux systems with GNU compilers. It includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, updated ground motion simulation methods, and a simplified command-line user interface.
Simulation environment and graphical visualization environment: a COPD use-case.
Huertas-Migueláñez, Mercedes; Mora, Daniel; Cano, Isaac; Maier, Dieter; Gomez-Cabrero, David; Lluch-Ariet, Magí; Miralles, Felip
2014-11-28
Today, many different tools are developed to execute and visualize physiological models of human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages. In addition, interoperability between such models remains an unresolved issue. In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims at helping and supporting bio-researchers and medical students to understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, a web interface through which the user can interact with the models, and a simulation workflow management system composed of a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. It has been shown that the simulation environment presented here allows the user to research and study the internal mechanisms of human physiology through the use of models via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios.
Predictive simulation of gait at low gravity reveals skipping as the preferred locomotion strategy
Ackermann, Marko; van den Bogert, Antonie J.
2012-01-01
The investigation of gait strategies at low gravity environments gained momentum recently as manned missions to the Moon and to Mars are reconsidered. Although reports by astronauts of the Apollo missions indicate alternative gait strategies might be favored on the Moon, computational simulations and experimental investigations have been almost exclusively limited to the study of either walking or running, the locomotion modes preferred under Earth's gravity. In order to investigate the gait strategies likely to be favored at low gravity a series of predictive, computational simulations of gait are performed using a physiological model of the musculoskeletal system, without assuming any particular type of gait. A computationally efficient optimization strategy is utilized allowing for multiple simulations. The results reveal skipping as more efficient and less fatiguing than walking or running and suggest the existence of a walk-skip rather than a walk-run transition at low gravity. The results are expected to serve as a background to the design of experimental investigations of gait under simulated low gravity. PMID:22365845
A Computing Infrastructure for Supporting Climate Studies
NASA Astrophysics Data System (ADS)
Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team
2011-12-01
Climate change is one of the major challenges facing us on Earth in the 21st century. Scientists build many models to simulate the past and predict climate change for the coming decades or century. Most of the models run at low resolution, with some targeting high resolution in linkage to practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA Climate@Home project to build a supercomputer based on advanced computing technologies, such as cloud computing, grid computing, and others. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage potential spikes in access to the centralized components, such as the grid computing server for dispatching model runs and collecting results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations and to collect simulation results and contribution statistics; 3) a portal serves as the entry point for the project, providing management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can follow the latest news about the project on Twitter and Facebook. This paper introduces the latest progress of the project and demonstrates the operational system during the AGU fall meeting. It also discusses how this technology can become a trailblazer for other climate studies and relevant sciences, and shares how the challenges in computation and software integration were solved.
Passive turbulent flamelet propagation
NASA Technical Reports Server (NTRS)
Ashurst, William T.; Ruetsch, G. R.; Lund, T. S.
1994-01-01
We analyze results for a premixed, constant-density flame propagating in three-dimensional turbulence, using a flame model developed by Kerstein et al. (1988). Simulations with constant and evolving velocity fields are used; peculiar results were obtained from the constant-velocity-field runs. Data from the evolving-flow runs with various flame speeds are used to determine two-point correlations of the fluctuating scalar field, and implications for flamelet modeling are discussed.
Canadian crop calendars in support of the early warning project
NASA Technical Reports Server (NTRS)
Trenchard, M. H.; Hodges, T. (Principal Investigator)
1980-01-01
The Canadian crop calendars for LACIE are presented. Long term monthly averages of daily maximum and daily minimum temperatures for subregions of provinces were used to simulate normal daily maximum and minimum temperatures. The Robertson (1968) spring wheat and Williams (1974) spring barley phenology models were run using the simulated daily temperatures and daylengths for appropriate latitudes. Simulated daily temperatures and phenology model outputs for spring wheat and spring barley are given.
A high-resolution global-scale groundwater model
NASA Astrophysics Data System (ADS)
de Graaf, I. E. M.; Sutanudjaja, E. H.; van Beek, L. P. H.; Bierkens, M. F. P.
2015-02-01
Groundwater is the world's largest accessible source of fresh water. It plays a vital role in satisfying basic needs for drinking water, agriculture, and industrial activities. During times of drought, groundwater sustains baseflow to rivers and wetlands, thereby supporting ecosystems. Most global-scale hydrological models (GHMs) do not include a groundwater flow component, mainly due to the lack of geohydrological data at the global scale. For the simulation of lateral flow and groundwater head dynamics, a realistic physical representation of the groundwater system is needed, especially for GHMs that run at finer resolutions. In this study we present a global-scale groundwater model (run at 6' resolution) using MODFLOW to construct an equilibrium water table in its natural state as the result of long-term climatic forcing. The aquifer schematization and properties used are based on available global data sets of lithology and transmissivities, combined with the estimated thickness of an upper, unconfined aquifer. The model is forced with outputs from the land-surface PCRaster Global Water Balance (PCR-GLOBWB) model, specifically net recharge and surface water levels. A sensitivity analysis, in which the model was run with various parameter settings, showed that variation in saturated conductivity has the largest impact on the simulated groundwater levels. Validation against observed groundwater heads showed that heads are reasonably well simulated for many regions of the world, especially for sediment basins (R2 = 0.95). The simulated regional-scale groundwater patterns and flow paths demonstrate the relevance of lateral groundwater flow in GHMs. Inter-basin groundwater flows can be a significant part of a basin's water budget and help to sustain river baseflows, especially during droughts. Water availability of larger aquifer systems can also be positively affected by additional recharge from inter-basin groundwater flows.
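For intuition about what an equilibrium water table computation involves, the toy sketch below relaxes a steady-state groundwater-flow equation on a small 2D grid with uniform transmissivity, fixed boundary heads, and constant recharge; all values are arbitrary assumptions, and this is in no way a substitute for the MODFLOW setup described above.

```python
# Toy equilibrium water table: relax T * laplacian(h) = -R with Jacobi iteration.
import numpy as np

nx = ny = 64
T = 1e-2            # transmissivity, m^2/s (uniform, assumed)
R = 1e-9            # net recharge, m/s (assumed)
dx = 1000.0         # cell size, m
h = np.zeros((ny, nx))              # heads; boundary held at 0 m (river level)

for it in range(50_000):            # Jacobi relaxation toward equilibrium
    h_new = h.copy()
    h_new[1:-1, 1:-1] = 0.25 * (h[2:, 1:-1] + h[:-2, 1:-1]
                                + h[1:-1, 2:] + h[1:-1, :-2]
                                + R * dx**2 / T)
    converged = np.max(np.abs(h_new - h)) < 1e-7
    h = h_new
    if converged:
        break
print(f"stopped after {it + 1} iterations; max head = {h.max():.1f} m above the boundary")
```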
NASA Astrophysics Data System (ADS)
Josse, P.; Caniaux, G.; Giordani, H.; Planton, S.
1999-04-01
A mesoscale non-hydrostatic atmospheric model has been coupled with a mesoscale oceanic model. The case study is a four-day simulation of a strong storm event observed during the SEMAPHORE experiment over a 500 × 500 km² domain. This domain encompasses a thermohaline front associated with the Azores current. In order to analyze the effect of mesoscale coupling, three simulations are compared: the first with the atmospheric model forced by realistic sea surface temperature analyses; the second with the ocean model forced by atmospheric fields derived from weather forecast re-analyses; and the third with the models coupled. For all three simulations the surface fluxes were computed with the same bulk parametrization. All three simulations succeed well in representing the main oceanic and atmospheric features observed during the storm. Comparison of surface fields with in situ observations reveals that the winds of the fine-mesh atmospheric model are more realistic than those of the weather forecast re-analyses. The low-level winds simulated with the atmospheric model in the forced and coupled simulations are appreciably stronger than the re-analyzed winds, and they also generate stronger fluxes. The coupled simulation has the strongest surface heat fluxes: the difference in the net heat budget from the oceanic forced simulation reaches on average 50 W m⁻² over the simulation period. Sea surface temperature cooling is too weak in both simulations, but it is improved in the coupled run and better matches the cooling observed with drifters. The spatial distributions of sea surface temperature cooling and surface fluxes are strongly inhomogeneous over the simulation domain. The amplitude of the flux variation is largest in the coupled run. Moreover, the weak correlation between the cooling and heat flux patterns indicates that the surface fluxes are not responsible for the whole cooling and suggests that the response of the ocean mixed layer to the atmosphere is highly non-local and enhanced in the coupled simulation.
Snowmelt runoff modeling in simulation and forecasting modes with the Martinec-Rango model
NASA Technical Reports Server (NTRS)
Shafer, B.; Jones, E. B.; Frick, D. M. (Principal Investigator)
1982-01-01
The Martinec-Rango snowmelt runoff model was applied to two watersheds in the Rio Grande basin, Colorado: the South Fork Rio Grande, a drainage encompassing 216 sq mi without reservoirs or diversions, and the Rio Grande above Del Norte, a drainage encompassing 1,320 sq mi without major reservoirs. The model was successfully applied to both watersheds when run in simulation mode for the period 1973-79, which included both high and low runoff seasons. Central to adapting the model to run in forecast mode was the need to develop a technique for forecasting the shape of the snow cover depletion curves between satellite data points. Four separate approaches were investigated: simple linear estimation, multiple regression, parabolic exponential, and type curve. Only the parabolic exponential and type curve methods were run on the South Fork and Rio Grande watersheds for the 1980 runoff season, using satellite snow cover updates when available. Although reasonable forecasts were obtained in certain situations, neither method seemed ready for truly operational forecasts, possibly due to the large amount of estimated climatic data for one or two primary base stations during the 1980 season.
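As a rough illustration of interpolating depletion curves between sparse satellite observations, the sketch below fits a parabolic-exponential-style curve to synthetic snow-covered-area points; the functional form exp(-a t^b) and the data are assumptions for illustration, not the study's calibrated curves.

```python
# Fit a depletion curve to sparse satellite snow-cover points (illustrative).
import numpy as np
from scipy.optimize import curve_fit

def depletion(t, a, b):
    """Snow-covered area fraction decaying from 1.0 over the melt season."""
    return np.exp(-a * t**b)

t_obs = np.array([5.0, 20.0, 41.0, 62.0])      # days since melt onset (satellite passes)
sca_obs = np.array([0.95, 0.70, 0.35, 0.10])   # observed snow-covered area fraction

(a, b), _ = curve_fit(depletion, t_obs, sca_obs, p0=(0.01, 1.5))
t = np.arange(0, 90)
sca_daily = depletion(t, a, b)                 # daily SCA to drive the melt model
print(f"fitted a={a:.4f}, b={b:.2f}; SCA on day 30: {sca_daily[30]:.2f}")
```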
NASA Technical Reports Server (NTRS)
Strahan, Susan E.; Douglass, Anne R.; Einaudi, Franco (Technical Monitor)
2001-01-01
The Global Modeling Initiative (GMI) team developed objective criteria for model evaluation in order to identify the best representation of the stratosphere. This work created a method to quantitatively and objectively discriminate between different models. In the original GMI study, three different meteorological data sets were used to run an offline chemistry and transport model (CTM). Observationally based grading criteria were derived and applied to these simulations, various aspects of stratospheric transport were evaluated, and grades were assigned. Here we report on the application of the GMI evaluation criteria to CTM simulations integrated with a new assimilated wind data set and a new general circulation model (GCM) wind data set. The Finite Volume Community Climate Model (FV-CCM) is a new GCM developed at Goddard which uses the NCAR CCM physics and the Lin and Rood advection scheme. The FV Data Assimilation System (FV-DAS) is a new data assimilation system which uses the FV-CCM as its core model. One-year CTM simulations at 2.5 degrees longitude by 2 degrees latitude resolution were run for each wind data set. We present the evaluation of temperature and annual transport cycles in the lower and middle stratosphere in the two new CTM simulations, including an evaluation of high-latitude transport, which was not part of the original GMI criteria. Grades for the new simulations will be compared with those assigned during the original GMI evaluations, and areas of improvement will be identified.
Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance
2013-11-01
Tile gaps were found to increase the depth of penetration (DoP) compared to a single tile, consistent with the experiments described in ARL-TR-2219 (2000). The next step will be to run simulations on narrower and wider gap sizes. Smoothed-particle hydrodynamics (SPH) was used for all parts, with an SPH particle size of 0.40 mm, totaling 278k particles.
Stretching Your Energetic Budget: How Tendon Compliance Affects the Metabolic Cost of Running
Uchida, Thomas K.; Hicks, Jennifer L.; Dembia, Christopher L.; Delp, Scott L.
2016-01-01
Muscles attach to bones via tendons that stretch and recoil, affecting muscle force generation and metabolic energy consumption. In this study, we investigated the effect of tendon compliance on the metabolic cost of running using a full-body musculoskeletal model with a detailed model of muscle energetics. We performed muscle-driven simulations of running at 2–5 m/s with tendon force–strain curves that produced between 1 and 10% strain when the muscles were developing maximum isometric force. We computed the average metabolic power consumed by each muscle when running at each speed and with each tendon compliance. Average whole-body metabolic power consumption increased as running speed increased, regardless of tendon compliance, and was lowest at each speed when tendon strain reached 2–3% as muscles were developing maximum isometric force. When running at 2 m/s, the soleus muscle consumed less metabolic power at high tendon compliance because the strain of the tendon allowed the muscle fibers to operate nearly isometrically during stance. In contrast, the medial and lateral gastrocnemii consumed less metabolic power at low tendon compliance because less compliant tendons allowed the muscle fibers to operate closer to their optimal lengths during stance. The software and simulations used in this study are freely available at simtk.org and enable examination of muscle energetics with unprecedented detail. PMID:26930416
Development and testing of a fast conceptual river water quality model.
Keupers, Ingrid; Willems, Patrick
2017-04-15
Modern, model-based river water quality management strongly relies on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach is too computationally demanding, especially when simulating the long time periods needed for statistical analysis of the results, or when model sensitivity analysis, calibration, and validation require a large number of model runs. To overcome this problem, a structure identification method to set up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as Plug Flow Reactors (PFRs) and Continuously Stirred Tank Reactors (CSTRs) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results and calibrated to the detailed water quality simulation results. The developed approach allows for much faster calculation times (a factor of 10^5) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs. Copyright © 2017 Elsevier Ltd. All rights reserved.
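A minimal sketch of the conceptual-reservoir idea follows: one river reach represented as a plug-flow delay feeding a short chain of CSTRs with first-order decay. Residence times, the decay rate, and the inflow series are illustrative assumptions rather than values from the paper.

```python
# One river reach as PFR delay + CSTR chain with first-order decay (illustrative).
import numpy as np

dt = 300.0                                  # time step, s
n_steps = 2000
tau = np.array([1800.0, 3600.0, 2700.0])    # CSTR residence times, s (assumed)
k = 1e-5                                    # first-order decay rate, 1/s (assumed)
delay = 6                                   # plug-flow delay, in time steps

c_in = np.where(np.arange(n_steps) < 400, 2.0, 0.5)   # inflow concentration, mg/L
c = np.zeros(len(tau))
out = np.zeros(n_steps)

for i in range(n_steps):
    upstream = c_in[max(i - delay, 0)]      # PFR: pure advective delay
    for j in range(len(tau)):               # CSTRs: dc/dt = (cin - c)/tau - k*c
        inflow = upstream if j == 0 else c[j - 1]
        c[j] += dt * ((inflow - c[j]) / tau[j] - k * c[j])
    out[i] = c[-1]
print(f"peak outflow concentration: {out.max():.2f} mg/L")
```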
NASA Astrophysics Data System (ADS)
Smith, L. A.
2001-05-01
Many sources of uncertainty come into play when modelling geophysical systems by simulation. These include uncertainty in the initial condition, uncertainty in model parameter values (and the parameterisations themselves), and error in the model class from which the model(s) was selected. In recent decades, climate simulations have focused resources on reducing the last of these by including more and more detail in the model. One can question when this "kitchen sink" approach should be complemented with realistic estimates of the impact of the other uncertainties noted above. Indeed, while the impact of model error can never be fully quantified, as all simulation experiments are interpreted under the rosy scenario which assumes a priori that nothing crucial is missing, the impact of the other uncertainties can be quantified at only the cost of computational power, as illustrated, for example, in ensemble climate modelling experiments like Casino-21. This talk illustrates the interplay of uncertainties in the context of a trivial nonlinear system and an ensemble of models. The simple systems considered in this small-scale experiment, Keno-21, are meant to illustrate issues of experimental design; they are not intended to provide true climate simulations. The use of simulation models with huge numbers of parameters given limited data is usually justified by an appeal to the laws of physics: the number of free degrees of freedom is many fewer than the number of variables; the variables, parameterisations, and parameter values are all constrained by "the physics"; and the resulting simulation yields a realistic reproduction of the entire planet's climate system to within reasonable bounds. But what bounds, exactly? In a single model run under a transient forcing scenario, there are good statistical grounds for considering only large space and time averages; most of these reasons vanish if an ensemble of runs is made. Ensemble runs can quantify the (in)ability of a model to provide insight on regional changes: if a model cannot capture regional variations in the data on which the model was constructed (that is, in-sample), claims that out-of-sample predictions of those same regional averages should be used in policy making are vacuous. While motivated by climate modelling and illustrated on a trivial nonlinear system, these issues have implications across the range of geophysical modelling, including implications for appropriate resource allocation, for the making of science policy, and for the public understanding of science and the role of uncertainty in decision making.
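In the spirit of the "trivial nonlinear system" mentioned above, the sketch below runs a small ensemble of logistic-map simulations with uncertain initial conditions and an uncertain parameter and tracks how the ensemble spread grows; the map and all numbers are illustrative stand-ins, not Keno-21 itself.

```python
# Ensemble of logistic-map runs under initial-condition and parameter uncertainty.
import numpy as np

rng = np.random.default_rng(42)
n_members, n_steps = 100, 50
r = 3.8 + rng.normal(0, 0.01, n_members)     # parameter uncertainty
x = 0.3 + rng.normal(0, 1e-4, n_members)     # initial-condition uncertainty

spread = np.empty(n_steps)
for t in range(n_steps):
    x = r * x * (1 - x)                      # logistic map step
    spread[t] = x.std()                      # ensemble spread at this time

print(f"ensemble spread: t=5 -> {spread[4]:.4f}, t=25 -> {spread[24]:.4f}")
```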
Visual Elements in Flight Simulation
1975-07-01
control. In consequence, current efforts to create appropriate visual simulations run the gamut from efforts toward almost complete replication of the...
Parallel distributed, reciprocal Monte Carlo radiation in coupled, large eddy combustion simulations
NASA Astrophysics Data System (ADS)
Hunsaker, Isaac L.
Radiation is the dominant mode of heat transfer in high-temperature combustion environments. Radiative heat transfer affects the gas and particle phases, including all the associated combustion chemistry. The radiative properties are in turn affected by the turbulent flow field. This bi-directional coupling of radiation-turbulence interactions poses a major challenge in creating parallel-capable, high-fidelity combustion simulations. In this work, a new model was developed in which reciprocal Monte Carlo radiation was coupled with a turbulent, large-eddy simulation combustion model. A technique wherein domain patches are stitched together was implemented to allow for scalable parallelism. The combustion model runs in parallel on a decomposed domain. The radiation model runs in parallel on a recomposed domain. The recomposed domain is stored on each processor after information sharing of the decomposed domain is handled via the Message Passing Interface. Verification and validation testing of the new radiation model were favorable. Strong scaling analyses were performed on the Ember cluster and the Titan cluster for the CPU radiation model and the GPU radiation model, respectively. The model demonstrated strong scaling to over 1,700 and 16,000 processing cores on Ember and Titan, respectively.
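The decomposed/recomposed domain exchange can be pictured with a few lines of mpi4py, where each rank gathers the full field before the radiation step; the 1D decomposition and array sizes are illustrative assumptions, not the dissertation's implementation.

```python
# Schematic decomposed -> recomposed exchange before a radiation step (illustrative).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_cells = 1000                       # this rank's slice of the domain (assumed)
local_temperature = np.full(local_cells, 300.0 + 50.0 * rank)

# Combustion model: runs on the decomposed (local) domain.
# Radiation model: needs the whole field, so recompose it on every rank.
full_temperature = np.empty(local_cells * size)
comm.Allgather(local_temperature, full_temperature)

# Each rank can now trace Monte Carlo rays across the entire recomposed domain
# while still owning only its local slice for the combustion solve.
print(f"rank {rank}: recomposed field has {full_temperature.size} cells")
```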
Assessing Climate Change Risks Using a Multi-Model Approach
NASA Astrophysics Data System (ADS)
Knorr, W.; Scholze, M.; Prentice, C.
2007-12-01
We quantify the risks of climate-induced changes in key ecosystem processes during the 21st century by forcing a dynamic global vegetation model with multiple scenarios from the IPCC AR4 data archive using 16 climate models, and mapping the proportions of model runs showing exceedance of natural variability in wildfire frequency and freshwater supply or shifts in vegetation cover. Our analysis does not assign probabilities to scenarios. Instead, we consider the distribution of outcomes within three sets of model runs grouped according to the amount of global warming they simulate: <2 °C (including committed climate change simulations), 2-3 °C, and >3 °C. Here, we contrast two different methods for calculating the risks: first, an equal-weighting approach that gives every model within one of the three sets the same weight; and second, weighting the models according to their ability to model ENSO. The differences underscore the need for the development of more robust performance metrics for global climate models.
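The risk metric described above reduces to counting weighted exceedances within each warming band. The sketch below computes it for one band under equal weighting and under hypothetical ENSO-skill weights; all numbers are invented for illustration.

```python
# Fraction of runs exceeding natural variability, equal vs skill weighting.
import numpy as np

rng = np.random.default_rng(1)
n_models = 16
# Change in wildfire frequency per model run, in units of natural variability (sigma)
change = rng.normal(1.2, 0.8, n_models)
exceed = np.abs(change) > 1.0                 # exceedance of natural variability

equal_w = np.full(n_models, 1.0 / n_models)
skill = rng.uniform(0.2, 1.0, n_models)       # stand-in for ENSO performance scores
skill_w = skill / skill.sum()

print(f"risk, equal weighting: {np.sum(equal_w * exceed):.2f}")
print(f"risk, skill weighting: {np.sum(skill_w * exceed):.2f}")
```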
NASA Astrophysics Data System (ADS)
Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.
2017-12-01
The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations, including the Korea Meteorological Administration, the Australian Bureau of Meteorology, and the US Air Force) for both weather and climate applications. Dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational high-performance computer in Europe - and the cost of a typical simulation is split roughly 50% in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and machine learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation: the idea is to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of machine learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options are presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques are discussed.
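A minimal emulation sketch, assuming a toy stand-in for an expensive parameterisation: fit a fast regression mapping column inputs to the "physics" tendency, then call the emulator instead of the slow routine. The physics function and variable names are assumptions for illustration, not the Met Office's scheme.

```python
# Train a fast emulator of a slow "physics" routine (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def slow_physics(state):
    """Stand-in for an expensive sub-grid parameterisation."""
    t, q = state[:, 0], state[:, 1]
    return np.tanh(0.1 * t) * q - 0.01 * q**2      # some nonlinear tendency

X_train = rng.uniform([-30, 0], [30, 20], size=(5000, 2))   # (temperature, humidity)
y_train = slow_physics(X_train)

emulator = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

X_test = rng.uniform([-30, 0], [30, 20], size=(1000, 2))
err = np.abs(emulator.predict(X_test) - slow_physics(X_test))
print(f"mean |emulator - physics| tendency error: {err.mean():.4f}")
```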
A Case Study of a Low Power Vapour Compression Refrigeration System
NASA Astrophysics Data System (ADS)
Abinav, R.; Nambiar, G. K.; Sahu, Debjyoti
2016-09-01
Reported in this paper is a case study of a vapor compression refrigeration system intended to be run by photovoltaic panels so as to draw minimum grid power. A small 120 W refrigerator is fabricated from commercially available components and run by an inverter and battery connected to a solar photovoltaic panel as well as to the grid. Temperatures at several points were measured and the performance was evaluated. The coefficient of performance (COP) of the refrigerator is estimated after numerical simulation of the major components, namely the evaporator, condenser, and capillary tube. The simulation was performed to obtain an effective cooling temperature, and the results were compared with the measured temperatures. The calculations prove to be in conformity with the actual model.
SPH Simulation of Impact of a Surge on a Wall
NASA Astrophysics Data System (ADS)
Diwakar, Manoj Kumar; Mohapatra, Pranab Kumar; Tripathi, Shivam
2014-05-01
Structures located downstream of a dam are prone to the impact of a surge due to dam-break flow. Ramsden (1996) experimentally studied the run-up height on a vertical wall due to the propagation of bores and surges on a dry bed and measured their impact on the wall. Mohapatra et al. (2000) applied the Navier-Stokes equations to numerically study the impact of bores on vertical and inclined walls; they also obtained the evolution of a surge on a dry bed. In the present work, the impact of a surge wave due to dam-break flow against a wall is modeled with a two-dimensional smoothed particle hydrodynamics (SPH) model. SPH is a mesh-free method that relies on a particle view of the field problem and approximates the continuity and momentum equations on a set of particles. The method solves the strong form of the Navier-Stokes equations; the governing equations are solved numerically in the vertical plane. The propagation of the surge wave, its impact, and the maximum run-up on a wall located at the boundary are analyzed. Surface profiles, velocity fields, and pressure distributions are simulated. The non-dimensional run-up height obtained from the present numerical model is 0.86, in good agreement with the available experimental data of Ramsden (1996), which lie in the range 0.75-0.9. The simulated profile of the surge tip is also comparable to the empirical equations referred to in Ramsden (1996). The model is applied to study the maximum force and run-up height on inclined walls with different inclinations. The results indicate that the maximum force and the run-up height on the wall increase with increasing wall inclination. Comparison of the numerical results with analytical solutions derived from the shallow water equations clearly shows the breakdown of the shallow water assumption during impact. In addition, the numerical simulation yields the complete velocity and pressure fields, which may be used to design structures located in the path of a dam-break wave. The study shows that smoothed particle hydrodynamics can effectively simulate fluid flow dynamics. References: Mohapatra, P. K., Bhallamudi, S. M., and Eswaran, V. (2000). 'Numerical simulation of impact of bores against inclined walls.' J. Hydraul. Eng., ASCE, 126(12), 942-945. Ramsden, J. D. (1996). 'Forces on a vertical wall due to long waves, bores, and dry-bed surges.' J. Waterway, Port, Coastal, and Ocean Eng., ASCE, 122(3), 134-141.
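To make the "particle view" concrete, the sketch below evaluates the standard SPH density summation with a 2D cubic-spline kernel over random particles; the kernel and layout are textbook choices used for illustration, not the paper's exact formulation.

```python
# SPH density summation with a 2D cubic-spline kernel (illustrative).
import numpy as np

def w_cubic(r, h):
    """2D cubic-spline smoothing kernel."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    w = np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2, 0.25 * (2 - q)**3, 0.0))
    return sigma * w

h, m = 0.04, 1.0                                   # smoothing length, particle mass
pts = np.random.default_rng(0).random((500, 2))    # particle positions in a unit box
r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
rho = (m * w_cubic(r, h)).sum(axis=1)              # density at each particle
print(f"mean SPH density: {rho.mean():.1f} (500 particles in a unit box)")
```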
Robotics On-Board Trainer (ROBoT)
NASA Technical Reports Server (NTRS)
Johnson, Genevieve; Alexander, Greg
2013-01-01
ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: a Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and the portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS4.5 Linux operating system. The JEMRMS simulation software includes real-time HIL dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software uses DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop running the CentOS4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides real-time views of various station and shuttle configurations.
Speeding up N-body simulations of modified gravity: chameleon screening models
NASA Astrophysics Data System (ADS)
Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo
2017-02-01
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High-resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high-resolution, large-sized simulations which, to date, are only possible for the standard model, and it also makes it feasible to run large numbers of lower-resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
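The contrast between the two update strategies can be illustrated on a stand-in per-cell equation u³ + pu + q = 0, which (like the recast f(R) equation) admits a closed-form solution: a Newton-iteration update versus a single analytic (Cardano) solve. The coefficient fields here are arbitrary assumptions, not the paper's discretised equation.

```python
# Iterative vs closed-form per-cell solves of u^3 + p*u + q = 0 (illustrative).
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.5, 2.0, 512**2)      # per-cell coefficients (p > 0)
q = rng.uniform(-1.0, 1.0, 512**2)

# Old-style approach: Newton iteration per cell until converged.
u = np.zeros_like(q)
for _ in range(50):
    f = u**3 + p * u + q
    if np.max(np.abs(f)) < 1e-12:
        break
    u -= f / (3 * u**2 + p)

# New-style approach: one closed-form (Cardano) solve, no iteration.
d = np.sqrt((q / 2)**2 + (p / 3)**3)   # real since p > 0 here
u_direct = np.cbrt(-q / 2 + d) + np.cbrt(-q / 2 - d)

print(f"max |iterative - direct| = {np.max(np.abs(u - u_direct)):.2e}")
```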
Pumping Optimization Model for Pump and Treat Systems - 15091
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, S.; Ivarson, Kristine A.; Karanovic, M.
2015-01-15
Pump and Treat systems are being utilized to remediate contaminated groundwater in the Hanford 100 Areas adjacent to the Columbia River in eastern Washington. Design of the systems was supported by a three-dimensional (3D) fate and transport model. This model provided sophisticated simulation capabilities but requires many hours to calculate results for each simulation considered. Many simulations are required to optimize system performance, so a two-dimensional (2D) model was created to reduce run time. The 2D model was developed as an equivalent-property version of the 3D model that derives boundary conditions and aquifer properties from the 3D model. It produces predictions that are very close to those of the 3D model, allowing it to be used for comparative remedy analyses. Any potential system modifications identified using the 2D version are verified by running the 3D model to confirm performance. The 2D model was incorporated into a comprehensive analysis system (the Pumping Optimization Model, POM) to simplify analysis of multiple simulations. It allows rapid turnaround by utilizing a graphical user interface that (1) allows operators to create hypothetical scenarios for system operation, (2) feeds the input to the 2D fate and transport model, and (3) displays the scenario results to evaluate performance improvement. All of the above is accomplished within the user interface. Complex analyses can be completed within a few hours, and multiple simulations can be compared side by side. The POM utilizes standard office computing equipment and established groundwater modeling software.
NASA Astrophysics Data System (ADS)
Kunz, Robert; Haworth, Daniel; Dogan, Gulkiz; Kriete, Andres
2006-11-01
Three-dimensional, unsteady simulations of multiphase flow, gas exchange, and particle/aerosol deposition in the human lung are reported. Surface data for human tracheo-bronchial trees are derived from CT scans and are used to generate three-dimensional CFD meshes for the first several generations of branching. One-dimensional meshes for the remaining generations down to the respiratory units are generated using branching algorithms based on those proposed in the literature, and a zero-dimensional respiratory unit (pulmonary acinus) model is attached at the end of each terminal bronchiole. The process is automated to facilitate rapid model generation. The model is exercised through multiple breathing cycles to compute the spatial and temporal variations in flow, gas exchange, and particle/aerosol deposition. The depth of the 3D/1D transition (at branching generation n) is a key parameter and can be varied. High-fidelity models (large n) are run on massively parallel distributed-memory clusters and are used to generate physical insight and to calibrate/validate the 1D and 0D models. Suitably validated lower-order models (small n) can be run on single-processor PCs with run times that allow model-based clinical intervention for individual patients.
An Open Simulation System Model for Scientific Applications
NASA Technical Reports Server (NTRS)
Williams, Anthony D.
1995-01-01
A model for a generic and open environment for running multi-code or multi-application simulations, called the Open Simulation System Model (OSSM), is proposed and defined. This model attempts to meet the requirements of complex systems like the Numerical Propulsion Simulator System (NPSS). OSSM places no restrictions on the types of applications that can be integrated at any stage of its evolution, including applications of different disciplines, fidelities, etc. An implementation strategy is proposed that starts with a basic prototype and evolves over time to accommodate an increasing number of applications. Potential (standard) software that may aid in the design and implementation of the system is also identified.
NASA Technical Reports Server (NTRS)
Chapman, Jeffryes W.; Lavelle, Thomas M.; May, Ryan D.; Litt, Jonathan S.; Guo, Ten-Huei
2014-01-01
A simulation toolbox has been developed for the creation of both steady-state and dynamic thermodynamic software models. This paper describes the Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS), which combines generic thermodynamic and controls modeling libraries with a numerical iterative solver to create a framework for the development of thermodynamic system simulations, such as gas turbine engines. The objective of this paper is to present an overview of T-MATS, the theory used in the creation of the module sets, and a possible propulsion simulation architecture. A model comparison was conducted by matching steady-state performance results from a T-MATS developed gas turbine simulation to a well-documented steady-state simulation. Transient modeling capabilities are then demonstrated when the steady-state T-MATS model is updated to run dynamically.
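The pattern T-MATS describes (component models closed by a numerical iterative solver) can be sketched with a toy two-equation steady-state match driven to zero by a standard root-finder; the residual equations below are invented for illustration and bear no relation to a real engine deck.

```python
# Toy "component stack + iterative solver" steady-state match (illustrative).
import numpy as np
from scipy.optimize import fsolve

def residuals(u):
    shaft_speed, fuel_flow = u
    # residuals vanish when power and flow balance across the toy components
    power_balance = 0.8 * fuel_flow - 1e-4 * shaft_speed**2
    flow_match = np.tanh(fuel_flow) - shaft_speed / 120.0
    return [power_balance, flow_match]

solution = fsolve(residuals, x0=[50.0, 1.0])
print("steady-state match:", solution, "residuals:", residuals(solution))
```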
Simulation environment and graphical visualization environment: a COPD use-case
2014-01-01
Background Today, many different tools are developed to execute and visualize physiological models of human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages. In addition, interoperability between such models remains an unresolved issue. Results In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims at helping and supporting bio-researchers and medical students to understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, a web interface through which the user can interact with the models, and a simulation workflow management system composed of a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. This simulation environment has been validated with the integration of three models: two deterministic, i.e., based on linear and differential equations, and one probabilistic, i.e., based on probability theory. These models were selected based on the disease under study in this project, i.e., chronic obstructive pulmonary disease. Conclusion It has been shown that the simulation environment presented here allows the user to research and study the internal mechanisms of human physiology through the use of models via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios. PMID:25471327
Rönner-Holm, S G E; Kaufmann Alves, I; Steinmetz, H; Holm, N C
2009-01-01
Integrated dynamic simulation analysis of a full-scale municipal sequential batch reactor (SBR) wastewater treatment plant (WWTP) was performed using the KOSMO pollution load simulation model for the combined sewer system (CSS) and the ASM3 + EAWAG-BioP model for the WWTP. Various optimization strategies for dry and storm weather conditions were developed, based on simulation studies with the calibrated WWTP model, to raise purification and hydraulic performance and to reduce operating costs. Implementing some of these strategies on the plant led to lower effluent values and an average annual saving of 49,000 euro including sewage tax, which is 22% of the total running costs. Dynamic simulation analysis of the CSS for an increased WWTP influent over a period of one year showed high potential for reducing combined sewer overflow (CSO) volume by 18-27% and CSO loads by 22% for COD and by 33% for NH4-N and P(total). In addition, the SBR WWTP could easily handle much higher influents without exceeding the monitoring values. During the integrated simulation of representative storm events, the total emission load for COD dropped to 90% of its previous value, with the sewer system emitting 47% less, whereas the pollution load in the WWTP effluent increased by only 14%, with 2% higher running costs.
Phast4Windows: A 3D graphical user interface for the reactive-transport simulator PHAST
Charlton, Scott R.; Parkhurst, David L.
2013-01-01
Phast4Windows is a Windows® program for developing and running groundwater-flow and reactive-transport models with the PHAST simulator. This graphical user interface allows definition of grid-independent spatial distributions of model properties—the porous media properties, the initial head and chemistry conditions, boundary conditions, and locations of wells, rivers, drains, and accounting zones—and other parameters necessary for a simulation. Spatial data can be defined without reference to a grid by drawing, by point-by-point definitions, or by importing files, including ArcInfo® shape and raster files. All definitions can be inspected, edited, deleted, moved, copied, and switched from hidden to visible through the data tree of the interface. Model features are visualized in the main panel of the interface, so that it is possible to zoom, pan, and rotate features in three dimensions (3D). PHAST simulates single phase, constant density, saturated groundwater flow under confined or unconfined conditions. Reactions among multiple solutes include mineral equilibria, cation exchange, surface complexation, solid solutions, and general kinetic reactions. The interface can be used to develop and run simple or complex models, and is ideal for use in the classroom, for analysis of laboratory column experiments, and for development of field-scale simulations of geochemical processes and contaminant transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
Solar radiation can be computed using radiative transfer models, such as the Rapid Radiative Transfer Model (RRTM) and its general circulation model applications, and used for various energy applications. Due to the complexity of computing radiation fields in aerosol-laden and cloudy atmospheres, simulating solar radiation can be extremely time-consuming, but many approximations (e.g., the two-stream approach and the delta-M truncation scheme) can be utilized. To provide a new fast option for computing solar radiation, we developed the Fast All-sky Radiation Model for Solar applications (FARMS) by parameterizing the diffuse horizontal irradiance and direct normal irradiance simulated for cloudy conditions in RRTM runs using a 16-stream discrete ordinates radiative transfer method. The solar irradiance at the surface was simulated by combining the cloud irradiance parameterizations with a fast clear-sky model, REST2. To understand the accuracy and efficiency of the newly developed fast model, we analyzed FARMS runs using cloud optical and microphysical properties retrieved from GOES data from 2009-2012. The global horizontal irradiance for cloudy conditions was simulated using FARMS and RRTM for global circulation modeling with a two-stream approximation and compared to measurements taken at the U.S. Department of Energy's Atmospheric Radiation Measurement Climate Research Facility Southern Great Plains site. Our results indicate that the accuracy of FARMS is comparable to or better than the two-stream approach; however, FARMS is approximately 400 times more efficient because it does not explicitly solve the radiative transfer equation for each individual cloud condition. Radiative transfer model runs are computationally expensive, but this model is promising for broad applications in solar resource assessment and forecasting. It is currently being used in the National Solar Radiation Database, which is publicly available from the National Renewable Energy Laboratory at http://nsrdb.nrel.gov.
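A schematic of the FARMS idea, not its actual parameterisation: combine a fast clear-sky irradiance estimate with a pre-parameterised cloud transmittance looked up by cloud optical thickness, instead of solving the radiative transfer equation per scene. All coefficients below are illustrative.

```python
# Clear-sky irradiance * parameterised cloud transmittance (illustrative).
import numpy as np

def clearsky_ghi(cos_zenith):
    """Very crude clear-sky global horizontal irradiance, W/m^2 (assumed form)."""
    return np.maximum(0.0, 1100.0 * cos_zenith**1.2)

# Stand-in for the parameterised cloud transmittance vs optical thickness;
# in FARMS this information comes from 16-stream RRTM runs.
tau_grid = np.array([0.0, 1.0, 5.0, 10.0, 30.0, 100.0])
trans_grid = np.array([1.0, 0.85, 0.55, 0.35, 0.12, 0.02])

def allsky_ghi(cos_zenith, cloud_tau):
    t_cloud = np.interp(cloud_tau, tau_grid, trans_grid)
    return clearsky_ghi(cos_zenith) * t_cloud

print(f"GHI, thin cloud:  {allsky_ghi(0.8, 2.0):7.1f} W/m^2")
print(f"GHI, thick cloud: {allsky_ghi(0.8, 40.0):7.1f} W/m^2")
```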
InMAP: A model for air pollution interventions
Tessum, Christopher W.; Hill, Jason D.; Marshall, Julian D.; ...
2017-04-19
Mechanistic air pollution modeling is essential in air quality management, yet the extensive expertise and computational resources required to run most models prevent their use in many situations where their results would be useful. We present InMAP (Intervention Model for Air Pollution), which offers an alternative to comprehensive air quality models for estimating the air pollution health impacts of emission reductions and other potential interventions. InMAP estimates annual-average changes in primary and secondary fine particle (PM2.5) concentrations (the air pollution outcome generally causing the largest monetized health damages) attributable to annual changes in precursor emissions. InMAP leverages pre-processed physical and chemical information from the output of a state-of-the-science chemical transport model and a variable spatial resolution computational grid to perform simulations that are several orders of magnitude less computationally intensive than comprehensive model simulations. In comparisons we run, InMAP recreates comprehensive model predictions of changes in total PM2.5 concentrations with a population-weighted mean fractional bias (MFB) of -17% and a population-weighted R2 = 0.90. Although InMAP is not specifically designed to reproduce total observed concentrations, it is able to do so within published air quality model performance criteria for total PM2.5. Potential uses of InMAP include studying exposure, health, and environmental justice impacts of potential shifts in emissions for annual-average PM2.5. InMAP can be trained to run for any spatial and temporal domain given the availability of appropriate simulation output from a comprehensive model. The InMAP model source code and input data are freely available online under an open-source license.
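Conceptually, a reduced-form model of this kind turns an intervention study into a cheap linear-algebra step once the heavy lifting has been done offline. The toy sketch below applies an assumed source-receptor matrix to an emissions change; the matrix, its units, and the population weights are invented for illustration and are not InMAP's algorithm.

```python
# Reduced-form intervention estimate via a pre-derived source-receptor matrix.
import numpy as np

rng = np.random.default_rng(3)
n_sources, n_receptors = 50, 200
S = np.abs(rng.normal(0.0, 0.02, (n_receptors, n_sources)))  # ug/m^3 per t/yr (assumed)
delta_emissions = np.zeros(n_sources)
delta_emissions[:5] = -100.0          # reduce 5 sources by 100 t/yr each

delta_pm25 = S @ delta_emissions      # annual-average PM2.5 change per receptor
population = rng.uniform(1e3, 1e5, n_receptors)
pw_mean = np.average(delta_pm25, weights=population)
print(f"population-weighted mean PM2.5 change: {pw_mean:+.3f} ug/m^3")
```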
InMAP: A model for air pollution interventions
Hill, Jason D.; Marshall, Julian D.
2017-01-01
Mechanistic air pollution modeling is essential in air quality management, yet the extensive expertise and computational resources required to run most models prevent their use in many situations where their results would be useful. Here, we present InMAP (Intervention Model for Air Pollution), which offers an alternative to comprehensive air quality models for estimating the air pollution health impacts of emission reductions and other potential interventions. InMAP estimates annual-average changes in primary and secondary fine particle (PM2.5) concentrations (the air pollution outcome generally causing the largest monetized health damages) attributable to annual changes in precursor emissions. InMAP leverages pre-processed physical and chemical information from the output of a state-of-the-science chemical transport model and a variable spatial resolution computational grid to perform simulations that are several orders of magnitude less computationally intensive than comprehensive model simulations. In comparisons run here, InMAP recreates comprehensive model predictions of changes in total PM2.5 concentrations with a population-weighted mean fractional bias (MFB) of -17% and a population-weighted R2 = 0.90. Although InMAP is not specifically designed to reproduce total observed concentrations, it is able to do so within published air quality model performance criteria for total PM2.5. Potential uses of InMAP include studying exposure, health, and environmental justice impacts of potential shifts in emissions for annual-average PM2.5. InMAP can be trained to run for any spatial and temporal domain given the availability of appropriate simulation output from a comprehensive model. The InMAP model source code and input data are freely available online under an open-source license. PMID:28423049
Evaluation of The Operational Benefits Versus Costs of An Automated Cargo Mover
2016-12-01
The logistics footprint and life-cycle cost are presented as part of this report. Analysis of modeling and simulation results identified statistically significant differences...
Integration of MATLAB Simulink(Registered Trademark) Models with the Vertical Motion Simulator
NASA Technical Reports Server (NTRS)
Lewis, Emily K.; Vuong, Nghia D.
2012-01-01
This paper describes the integration of MATLAB Simulink (Registered Trademark) models into the Vertical Motion Simulator (VMS) at NASA Ames Research Center. The VMS is a high-fidelity, large motion flight simulator that is capable of simulating a variety of aerospace vehicles. The integration of MATLAB Simulink models into the VMS needed to retain the development flexibility of the MATLAB environment and allow rapid deployment of model changes. The process developed at the VMS was used successfully in a number of recent simulation experiments. This accomplishment demonstrated that the model integrity was preserved, while working within the hard real-time run environment of the VMS architecture, and maintaining the unique flexibility of the VMS to meet diverse research requirements.
Severe Nuclear Accident Program (SNAP) - a real time model for accidental releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saltbones, J.; Foss, A.; Bartnicki, J.
1996-12-31
The model: Severe Nuclear Accident Program (SNAP) has been developed at the Norwegian Meteorological Institute (DNMI) in Oslo to provide decision makers and Government officials with a real-time tool for simulating large accidental releases of radioactivity from nuclear power plants or other sources. SNAP is developed in the Lagrangian framework, in which atmospheric transport of radioactive pollutants is simulated by emitting a large number of particles from the source. The main advantage of the Lagrangian approach is the possibility of precise parameterization of advection processes, especially close to the source. SNAP can be used to predict the transport and deposition of a radioactive cloud in the future (up to 48 hours, in the present version) or to analyze the behavior of the cloud in the past. It is also possible to run the model in a mixed mode (partly analysis and partly forecast). In the routine run we assume unit (1 g s^-1) emission in each of three classes. This assumption is very convenient for the main user of the model output in case of emergency, the Norwegian Radiation Protection Agency. Due to the linearity of the model equations, the user can test different emission scenarios as a post-processing task by assigning different weights to the concentration and deposition fields corresponding to each of the three emission classes. SNAP is fully operational and can be run by the meteorologist on duty at any time. The output from SNAP has two forms. First, on maps of Europe, or selected parts of Europe, individual particles are shown during the simulation period. Second, immediately after the simulation, concentration/deposition fields can be shown for every three hours of the simulation period as isoline maps for each emission class. In addition, concentration and deposition maps, as well as some meteorological data, are stored on a publicly accessible disk for further processing by the model users.
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background: Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results: ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion: ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705
cellGPU: Massively parallel simulations of dynamic vertex models
NASA Astrophysics Data System (ADS)
Sussman, Daniel M.
2017-10-01
Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations. Program Files doi:http://dx.doi.org/10.17632/6j2cj29t3r.1 Licensing provisions: MIT Programming language: CUDA/C++ Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate. Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU. Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation
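The energy functional typically used in such two-dimensional vertex models penalizes deviations of each cell's area and perimeter from target values; a minimal Python sketch under that assumption follows (cellGPU's precise force laws may differ, and all parameter values here are illustrative):

    import numpy as np

    def cell_energy(vertices, A0=1.0, P0=3.8, KA=1.0, KP=1.0):
        """Energy of one polygonal cell in a standard 2D vertex model:
        E = KA*(A - A0)**2 + KP*(P - P0)**2 (form assumed here)."""
        v = np.asarray(vertices)
        x, y = v[:, 0], v[:, 1]
        A = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))  # shoelace area
        P = np.sum(np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1))        # perimeter
        return KA * (A - A0) ** 2 + KP * (P - P0) ** 2

    # Regular hexagon with unit circumradius
    hexagon = [(np.cos(t), np.sin(t)) for t in np.linspace(0, 2 * np.pi, 7)[:-1]]
    print(cell_energy(hexagon))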
Martin, Gary R.; Zarriello, Phillip J.; Shipp, Allison A.
2001-01-01
Rainfall, streamflow, and water-quality data collected in the Chenoweth Run Basin during February 1996–January 1998, in combination with the available historical sampling data, were used to characterize hydrologic conditions and to develop and calibrate a Hydrological Simulation Program–Fortran (HSPF) model for continuous simulation of rainfall, streamflow, suspended-sediment, and total-orthophosphate (TPO4) transport relations. Study results provide an improved understanding of basin hydrology and a hydrologic-modeling framework with analytical tools for use in comprehensive water-resource planning and management. Chenoweth Run Basin, encompassing 16.5 mi2 in suburban eastern Jefferson County, Kentucky, contains expanding urban development, particularly in the upper third of the basin. Historical water-quality problems have interfered with designated aquatic-life and recreation uses in the stream main channel (approximately 9 mi in length) and have been attributed to organic enrichment, nutrients, metals, and pathogens in urban runoff and wastewater inflows. Hydrologic conditions in Jefferson County are highly varied. In the Chenoweth Run Basin, as in much of the eastern third of the county, relief is moderately sloping to steep. Also, internal drainage in pervious areas is impeded by the shallow, fine-textured subsoils that contain abundant silts and clays. Thus, much of the precipitation here tends to move rapidly as overland flow and (or) shallow subsurface flow (interflow) to the stream channels. Data were collected at two streamflow-gaging stations, one rain gage, and four water-quality-sampling sites in the basin. Precipitation, streamflow, and, consequently, constituent loads were above normal during the data-collection period of this study. Nonpoint sources contributed the largest portion of the sediment loads. However, the three wastewater-treatment plants (WWTPs) were the source of the majority of estimated total phosphorus (TP) and TPO4 transport downstream from the WWTPs. HSPF, a hydrologic model capable of simulating mixed-land-use basins, includes land-surface, subsurface, and instream water-quantity- and water-quality-modeling components. The HSPF model was used to represent several important hydrologic features of the Chenoweth Run Basin including (1) numerous small lakes and ponds, through which approximately 25 percent of the basin drains; (2) potential seasonal ground-water-seepage losses in stream channels; (3) contributions from WWTP effluents and bypass flows; and (4) the transport and transformations of sediments and nutrients. The HSPF model was calibrated and verified for flow simulation on the basis of measured total, annual, seasonal, monthly, daily, hourly, and 5-minute-interval storm discharge data. The occurrence of numerous storms during the study period permitted a split-sample procedure to be used for a model verification on the basis of storm volumes and peaks. Total simulated and observed discharge during the model calibration period differed by approximately -5.4 percent at the upper gaging station and 3.1 percent at the lower station. The model results for the total and annual water balances were classified as very good on the basis of the calibration criteria reported in other modeling studies. The model had correlation coefficients ranging from 0.89 to 0.98 for hourly to monthly mean flows, respectively. The coefficients of model-fit efficiency for daily and monthly discharge simulations were near the excellent range (exceeding 0.97).
However, the model was calibrated for a comparatively short 24-month period during which flows were above normal. Increased model error might be expected during an extended period of near-normal flows. The model was calibrated for simulation of sediment and TPO4 transport. The simulated mean-annual load (over 24 months) ranged from -33 to -28 percent of the estimated sediment load and within +/- 1 percent of the estimated TPO4 load at the two streamflow-gaging stations.
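The goodness-of-fit figures quoted above can be illustrated with standard calculations; assuming the "coefficient of model-fit efficiency" referred to is the Nash-Sutcliffe coefficient (a common choice in HSPF studies, though not confirmed by the abstract), a minimal Python sketch:

    import numpy as np

    def nash_sutcliffe(simulated, observed):
        """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean."""
        s, o = np.asarray(simulated), np.asarray(observed)
        return 1.0 - np.sum((s - o) ** 2) / np.sum((o - o.mean()) ** 2)

    def percent_bias(simulated, observed):
        """Percent difference between total simulated and observed volumes."""
        s, o = np.asarray(simulated), np.asarray(observed)
        return 100.0 * (s.sum() - o.sum()) / o.sum()

    # Hypothetical daily discharges (cfs)
    obs = np.array([12.0, 30.0, 22.0, 15.0])
    sim = np.array([11.0, 33.0, 20.0, 14.0])
    print(nash_sutcliffe(sim, obs), percent_bias(sim, obs))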
NASA Astrophysics Data System (ADS)
Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.
2015-12-01
For many environmental systems model runtimes have remained very long as more capable computers have been used to add more processes and more time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and surface water and groundwater modeling.
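The rule of thumb stated above (computational demand equals run time multiplied by number of model runs divided by parallelization opportunities) can be written directly; the function name and example numbers below are illustrative:

    def computational_demand(run_time_hours, n_runs, parallel_ways=1):
        """Wall-clock demand = run time x number of runs / parallelization opportunities."""
        return run_time_hours * n_runs / parallel_ways

    # e.g., a 6-hour model, 100 runs for a parameter sweep, 20 concurrent jobs
    print(computational_demand(6.0, 100, 20), "hours")  # -> 30.0 hours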
OBERON: OBliquity and Energy balance Run on N-body systems
NASA Astrophysics Data System (ADS)
Forgan, Duncan H.
2016-08-01
OBERON (OBliquity and Energy balance Run on N-body systems) models the climate of Earthlike planets under the effects of an arbitrary number and arrangement of other bodies, such as stars, planets and moons. The code, written in C++, simultaneously computes N body motions using a 4th order Hermite integrator, simulates climates using a 1D latitudinal energy balance model, and evolves the orbital spin of bodies using the equations of Laskar (1986a,b).
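As context for the 1D latitudinal energy balance component mentioned above, a minimal explicit time step of a standard North-type EBM is sketched below in Python; the constants and the simple ice-albedo switch are textbook values, not OBERON's actual configuration:

    import numpy as np

    # One explicit step of a 1D latitudinal energy balance model (a sketch of the
    # standard formulation, not OBERON's code): per latitude band,
    # C dT/dt = S*(1 - albedo) - (A + B*T) + D * d2T/dphi2
    nbands, dt = 18, 3600.0                  # 10-degree bands, 1-hour step (s)
    C, A, B, D = 2.1e8, 203.3, 2.09, 0.44    # mixed-layer heat capacity, OLR, diffusion
    lat = np.linspace(-85, 85, nbands) * np.pi / 180.0
    T = 288.0 - 30.0 * np.sin(lat) ** 2      # initial pole-to-equator gradient (K)
    S = 1361.0 / 4.0 * (1.0 - 0.477 * (1.5 * np.sin(lat) ** 2 - 0.5))  # mean insolation
    albedo = np.where(T < 263.0, 0.6, 0.3)   # crude ice-albedo switch
    dphi = lat[1] - lat[0]
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dphi ** 2
    lap[0] = lap[-1] = 0.0                   # insulating poles (simplification)
    T = T + dt / C * (S * (1 - albedo) - (A + B * (T - 273.15)) + D * lap)
    print(T.round(1))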
An effective online data monitoring and saving strategy for large-scale climate simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xian, Xiaochen; Archibald, Rick; Mayer, Benjamin
2018-01-22
Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to understand the changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with both high temporal and spatial resolution information; however, how to effectively monitor and record the climate changes based on these large-scale simulation results that are continuously produced in real time still remains to be resolved. Due to the slow process of writing data to disk, the current practice is to save a snapshot of the simulation results at a constant, slow rate although the data generation process runs at a very high speed. This study proposes an effective online data monitoring and saving strategy over the temporal and spatial domains with the consideration of practical storage and memory capacity constraints. Finally, our proposed method is able to intelligently select and record the most informative extreme values in the raw data generated from real-time simulations in the context of better monitoring climate changes.
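The budgeted online-saving idea can be illustrated with a fixed-memory stream filter that retains only the k most extreme values seen so far; this heap-based sketch is illustrative only and is not the authors' algorithm, which also handles spatial structure:

    import heapq

    def record_extremes(stream, k=100):
        """Keep the k largest values (with time index) from a simulation stream
        using O(k) memory -- an illustration of budgeted online saving."""
        heap = []  # min-heap of (value, t); root is the smallest retained value
        for t, value in enumerate(stream):
            if len(heap) < k:
                heapq.heappush(heap, (value, t))
            elif value > heap[0][0]:
                heapq.heapreplace(heap, (value, t))
        return sorted(heap, reverse=True)

    # e.g., the 3 hottest hourly temperatures from a long series
    print(record_extremes([14.2, 31.0, 22.5, 35.1, 28.9], k=3))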
Stream channels of the Upper San Pedro with percent difference between results from two SWAT simulations run through AGWA: one using the 1973 NALC landcover for model parameterization, and the other using the 1997 NALC landcover.
Subwatersheds of the Upper San Pedro basin with percent difference between results from two SWAT simulations run through AGWA: one using the 1973 NALC landcover for model parameterization, and the other using the 1997 NALC landcover.
NASA Technical Reports Server (NTRS)
Shen, B.-W.; Atlas, R.; Reale, O.; Chern, J.-D.; Li, S.-J.; Lee, T.; Chang, J.; Henze, C.; Yeh, K.-S.
2006-01-01
It is known that General Circulation Models (GCMs) do not have sufficient resolution to accurately simulate hurricane near-eye structure and intensity. To overcome this limitation, the mesoscale-resolving finite-volume GCM (fvGCM) has been experimentally deployed on the NASA Columbia supercomputer, and its performance is evaluated choosing hurricane Katrina as an example in this study. In late August 2005 Katrina underwent two stages of rapid intensification and became the sixth most intense hurricane in the Atlantic. Six 5-day simulations of Katrina at both 0.25 deg and 0.125 deg show comparable track forecasts, but the 0.125 deg runs provide much better intensity forecasts, producing center pressure with errors of only +/- 12 hPa. The 0.125 deg runs also simulate better near-eye wind distributions and a more realistic average intensification rate. A convection parameterization (CP) is one of the major limitations in a GCM; the 0.125 deg run with CP disabled produces very encouraging results.
Efficient generation of low-energy folded states of a model protein
NASA Astrophysics Data System (ADS)
Gordon, Heather L.; Kwan, Wai Kei; Gong, Chunhang; Larrass, Stefan; Rothstein, Stuart M.
2003-01-01
A number of short simulated annealing runs are performed on a highly-frustrated 46-"residue" off-lattice model protein. We perform, in an iterative fashion, a principal component analysis of the 946 nonbonded interbead distances, followed by two varieties of cluster analyses: hierarchical and k-means clustering. We identify several distinct sets of conformations with reasonably consistent cluster membership. Nonbonded distance constraints are derived for each cluster and are employed within a distance geometry approach to generate many new conformations, previously unidentified by the simulated annealing experiments. Subsequent analyses suggest that these new conformations are members of the parent clusters from which they were generated. Furthermore, several novel, previously unobserved structures with low energy were uncovered, augmenting the ensemble of simulated annealing results, and providing a complete distribution of low-energy states. The computational cost of this approach to generating low-energy conformations is small when compared to the expense of further Monte Carlo simulated annealing runs.
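The PCA-plus-clustering pipeline described above maps to a few lines of Python with scikit-learn; the array shape follows the abstract (946 interbead distances per conformation), but the data, component count, and cluster count here are stand-ins:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # conformations x 946 nonbonded interbead distances (random stand-in data)
    rng = np.random.default_rng(0)
    distances = rng.random((200, 946))

    scores = PCA(n_components=10).fit_transform(distances)  # reduce dimensionality
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)
    for c in range(5):
        print("cluster", c, "members:", int(np.sum(labels == c)))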
NASA Astrophysics Data System (ADS)
Wichmann, Volker
2017-09-01
The Gravitational Process Path (GPP) model can be used to simulate the process path and run-out area of gravitational processes based on a digital terrain model (DTM). The conceptual model combines several components (process path, run-out length, sink filling and material deposition) to simulate the movement of a mass point from an initiation site to the deposition area. For each component several modeling approaches are provided, which makes the tool configurable for different processes such as rockfall, debris flows or snow avalanches. The tool can be applied to regional-scale studies such as natural hazard susceptibility mapping but also contains components for scenario-based modeling of single events. Both the modeling approaches and precursor implementations of the tool have proven their applicability in numerous studies, also including geomorphological research questions such as the delineation of sediment cascades or the study of process connectivity. This is the first open-source implementation, completely re-written, extended and improved in many ways. The tool has been committed to the main repository of the System for Automated Geoscientific Analyses (SAGA) and thus will be available with every SAGA release.
Transient analysis of a pulsed detonation combustor using the numerical propulsion system simulation
NASA Astrophysics Data System (ADS)
Hasler, Anthony Scott
The performance of a hybrid mixed flow turbofan (with detonation tubes installed in the bypass duct) is investigated in this study and compared with a baseline model of a mixed flow turbofan with a standard combustion chamber as a duct burner. Previous studies have shown that pulsed detonation combustors have the potential to be more efficient than standard combustors, but they also present new challenges that must be overcome before they can be utilized. The Numerical Propulsion System Simulation (NPSS) is used to perform the analysis with a pulsed detonation combustor model based on a numerical simulation by Endo, Fujiwara, et al. Three different cases are run using both models, representing a take-off situation, a subsonic cruise, and a supersonic cruise. Because this study investigates a transient analysis, the pulsed detonation combustor is first run in a rig setup and its pressure and temperature are averaged over the cycle to obtain quasi-steady results.
NASA Technical Reports Server (NTRS)
Jonathan L. Case; Kumar, Sujay V.; Srikishen, Jayanthi; Jedlovec, Gary J.
2010-01-01
One of the most challenging weather forecast problems in the southeastern U.S. is daily summertime pulse-type convection. During the summer, atmospheric flow and forcing are generally weak in this region; thus, convection typically initiates in response to local forcing along sea/lake breezes, and other discontinuities often related to horizontal gradients in surface heating rates. Numerical simulations of pulse convection usually have low skill, even in local predictions at high resolution, due to the inherent chaotic nature of these precipitation systems. Forecast errors can arise from assumptions within parameterization schemes, model resolution limitations, and uncertainties in both the initial state of the atmosphere and land surface variables such as soil moisture and temperature. For this study, it is hypothesized that high-resolution, consistent representations of surface properties such as soil moisture, soil temperature, and sea surface temperature (SST) are necessary to better simulate the interactions between the surface and atmosphere, and ultimately improve predictions of summertime pulse convection. This paper describes a sensitivity experiment using the Weather Research and Forecasting (WRF) model. Interpolated land and ocean surface fields from a large-scale model are replaced with high-resolution datasets provided by unique NASA assets in an experimental simulation: the Land Information System (LIS) and Moderate Resolution Imaging Spectroradiometer (MODIS) SSTs. The LIS is run in an offline mode for several years at the same grid resolution as the WRF model to provide compatible land surface initial conditions in an equilibrium state. The MODIS SSTs provide detailed analyses of SSTs over the oceans and large lakes compared to current operational products. The WRF model runs initialized with the LIS+MODIS datasets result in a reduction in the overprediction of rainfall areas; however, the skill is almost equally as low in both experiments using traditional verification methodologies. Output from object-based verification within NCAR's Meteorological Evaluation Tools reveals that the WRF runs initialized with LIS+MODIS data consistently generated precipitation objects that better matched observed precipitation objects, especially at higher precipitation intensities. The LIS+MODIS runs produced on average a 4% increase in matched precipitation areas and a simultaneous 4% decrease in unmatched areas during three months of daily simulations.
GRODY - GAMMA RAY OBSERVATORY DYNAMICS SIMULATOR IN ADA
NASA Technical Reports Server (NTRS)
Stark, M.
1994-01-01
Analysts use a dynamics simulator to test the attitude control system algorithms used by a satellite. The simulator must simulate the hardware, dynamics, and environment of the particular spacecraft and provide user services which enable the analyst to conduct experiments. Researchers at Goddard's Flight Dynamics Division developed GRODY alongside GROSS (GSC-13147), a FORTRAN simulator which performs the same functions, in a case study to assess the feasibility and effectiveness of the Ada programming language for flight dynamics software development. They used popular object-oriented design techniques to link the simulator's design with its function. GRODY is designed for analysts familiar with spacecraft attitude analysis. The program supports maneuver planning as well as analytical testing and evaluation of the attitude determination and control system used on board the Gamma Ray Observatory (GRO) satellite. GRODY simulates the GRO on-board computer and Control Processor Electronics. The analyst/user sets up and controls the simulation. GRODY allows the analyst to check and update parameter values and ground commands, obtain simulation status displays, interrupt the simulation, analyze previous runs, and obtain printed output of simulation runs. The video terminal screen display allows visibility of command sequences, full-screen display and modification of parameters using input fields, and verification of all input data. Data input available for modification includes alignment and performance parameters for all attitude hardware, simulation control parameters which determine simulation scheduling and simulator output, initial conditions, and on-board computer commands. GRODY generates eight types of output: simulation results data set, analysis report, parameter report, simulation report, status display, plots, diagnostic output (which helps the user trace any problems that have occurred during a simulation), and a permanent log of all runs and errors. The analyst can send results output in graphical or tabular form to a terminal, disk, or hardcopy device, and can choose to have any or all items plotted against time or against each other. Goddard researchers developed GRODY on a VAX 8600 running VMS version 4.0. For near real time performance, GRODY requires a VAX at least as powerful as a model 8600 running VMS 4.0 or a later version. To use GRODY, the VAX needs an Ada Compilation System (ACS), Code Management System (CMS), and 1200K memory. GRODY is written in Ada and FORTRAN.
IPSL-CM5A2. An Earth System Model designed to run long simulations for past and future climates.
NASA Astrophysics Data System (ADS)
Sepulchre, Pierre; Caubel, Arnaud; Marti, Olivier; Hourdin, Frédéric; Dufresne, Jean-Louis; Boucher, Olivier
2017-04-01
The IPSL-CM5A model was developed and released in 2013 "to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5)" [Dufresne et al., 2013]. Although this model has also been used for numerous paleoclimate studies, a major limitation was its computation time, which averaged 10 model-years/day on 32 cores of the Curie supercomputer (at the TGCC computing center, France). Such performance was compatible with the experimental designs of intercomparison projects (e.g. CMIP, PMIP) but became limiting for modelling activities involving several multi-millennial experiments, which are typical for Quaternary or "deep-time" paleoclimate studies, in which a fully equilibrated deep ocean is mandatory. Here we present the Earth System model IPSL-CM5A2. Based on IPSL-CM5A, technical developments have been performed both on separate components and on the coupling system in order to speed up the whole coupled model. These developments include the integration of hybrid MPI-OpenMP parallelization in the LMDz atmospheric component, the use of a new input-output library to perform parallel asynchronous input/output by using computing cores as "IO servers", and the use of a parallel coupling library between the ocean and the atmospheric components. Running on 304 cores, the model can now simulate 55 years per day, opening new gates towards multi-millennial simulations. Apart from obtaining better computing performance, one aim of setting up IPSL-CM5A2 was also to overcome the cold bias in global surface air temperature (t2m) seen in IPSL-CM5A. We present the tuning strategy used to overcome this bias as well as the main characteristics (including biases) of the pre-industrial climate simulated by IPSL-CM5A2. Lastly, we briefly present paleoclimate simulations run with this model, for the Holocene and for deeper timescales in the Cenozoic, for which the particular continental configuration was handled by a new design of the ocean tripolar grid.
Uptake and storage of anthropogenic CO2 in the pacific ocean estimated using two modeling approaches
NASA Astrophysics Data System (ADS)
Li, Yangchun; Xu, Yongfu
2012-07-01
A basin-wide ocean general circulation model (OGCM) of the Pacific Ocean is employed to estimate the uptake and storage of anthropogenic CO2 using two different simulation approaches. One simulation (named BIO) makes use of a carbon model with biological processes and full thermodynamic equations to calculate surface water partial pressure of CO2, whereas the other simulation (named PTB) makes use of a perturbation approach to calculate surface water partial pressure of anthropogenic CO2. The results from the two simulations agree well with the estimates based on observation data in most important aspects of the vertical distribution as well as the total inventory of anthropogenic carbon. The storage of anthropogenic carbon from BIO is closer to the observation-based estimate than that from PTB. The Revelle factor in 1994 obtained in BIO is generally larger than that obtained in PTB in the whole Pacific, except for the subtropical South Pacific. This, to a large extent, leads to the difference in the surface anthropogenic CO2 concentration between the two runs. The relative difference in the annual uptake between the two runs is almost constant during the integration after 1850. This is probably not caused by dissolved inorganic carbon (DIC), but rather by a factor independent of time. In both runs, the rate of change in anthropogenic CO2 fluxes with time is consistent with the rate of change in the growth rate of atmospheric partial pressure of CO2.
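For reference, the Revelle factor compared between the two runs is conventionally defined as the ratio of the fractional change in surface water pCO2 to the fractional change in DIC; a small Python sketch with hypothetical numbers (not values from the paper):

    def revelle_factor(pco2, dpco2, dic, ddic):
        """Revelle factor: fractional change in seawater pCO2 per fractional
        change in DIC (standard definition)."""
        return (dpco2 / pco2) / (ddic / dic)

    # Hypothetical perturbation: +1 uatm on 350 uatm vs +0.4 umol/kg on 2000 umol/kg
    print(revelle_factor(350.0, 1.0, 2000.0, 0.4))  # ~ 14.3, a typical value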
Verification of component mode techniques for flexible multibody systems
NASA Technical Reports Server (NTRS)
Wiens, Gloria J.
1990-01-01
Investigations were conducted in the modeling aspects of flexible multibodies undergoing large angular displacements. Models were to be generated and analyzed through application of computer simulation packages employing the 'component mode synthesis' techniques. Multibody Modeling, Verification and Control Laboratory (MMVC) plan was implemented, which includes running experimental tests on flexible multibody test articles. From these tests, data was to be collected for later correlation and verification of the theoretical results predicted by the modeling and simulation process.
Modular use of human body models of varying levels of complexity: Validation of head kinematics.
Decker, William; Koya, Bharath; Davis, Matthew L; Gayzik, F Scott
2017-05-29
The significant computational resources required to execute detailed human body finite-element models have motivated the development of faster running, simplified models (e.g., GHBMC M50-OS). Previous studies have demonstrated the ability to modularly incorporate the validated GHBMC M50-O brain model into the simplified model (GHBMC M50-OS+B), which allows for localized analysis of the brain in a fraction of the computation time required for the detailed model. The objective of this study is to validate the head and neck kinematics of the GHBMC M50-O and M50-OS (detailed and simplified versions of the same model) against human volunteer test data in frontal and lateral loading. Furthermore, the effect of modular insertion of the detailed brain model into the M50-OS is quantified. Data from the Navy Biodynamics Laboratory (NBDL) human volunteer studies, including a 15g frontal, 8g frontal, and 7g lateral impact, were reconstructed and simulated using LS-DYNA. A five-point restraint system was used for all simulations, and initial positions of the models were matched with volunteer data using settling and positioning techniques. Both the frontal and lateral simulations were run with the M50-O, M50-OS, and M50-OS+B with active musculature for a total of nine runs. Normalized run times for the various models used in this study were 8.4 min/ms for the M50-O, 0.26 min/ms for the M50-OS, and 0.97 min/ms for the M50-OS+B, a 32- and 9-fold reduction in run time, respectively. Corridors were reanalyzed for head and T1 kinematics from the NBDL studies. Qualitative evaluation of head rotational accelerations and linear resultant acceleration, as well as linear resultant T1 acceleration, showed reasonable results between all models and the experimental data. Objective evaluation of the results for head center of gravity (CG) accelerations was completed via ISO TS 18571, and indicated scores of 0.673 (M50-O), 0.638 (M50-OS), and 0.656 (M50-OS+B) for the 15g frontal impact. Scores at lower g levels yielded similar results: 0.667 (M50-O), 0.675 (M50-OS), and 0.710 (M50-OS+B) for the 8g frontal impact. The 7g lateral simulations also compared fairly, with an average ISO score of 0.565 for the M50-O, 0.634 for the M50-OS, and 0.606 for the M50-OS+B. The three HBMs experienced similar head and neck motion in the frontal simulations, but the M50-O predicted significantly greater head rotation in the lateral simulation. The greatest departure from the detailed occupant models was noted in lateral flexion, potentially indicating the need for further study. Precise modeling of the belt system, however, was limited by available data. A sensitivity study of these parameters in the frontal condition showed that belt slack and muscle activation have a modest effect on the ISO score. The reduction in computation time of the M50-OS+B reduces the burden of high computational requirements when handling detailed HBMs. Future work will focus on harmonizing the lateral head response of the models and studying localized injury criteria within the brain from the M50-O and M50-OS+B.
Solving Equations of Multibody Dynamics
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Lim, Christopher
2007-01-01
Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for use in dynamical simulations performed in designing and analyzing, and developing software for the control of, complex mechanical systems. Darts++ is based on the Spatial Operator Algebra formulation for multibody dynamics. This software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed is sufficient to make Darts++ suitable for use in real-time closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of system topology at run time; in contrast, in related prior software, system topology is fixed during initialization. Darts++ provides an interface to scripting languages, including Tcl and Python, that enables the user to configure and interact with simulation objects at run time.
Simulation and analysis of a model dinoflagellate predator-prey system
NASA Astrophysics Data System (ADS)
Mazzoleni, M. J.; Antonelli, T.; Coyne, K. J.; Rossi, L. F.
2015-12-01
This paper analyzes the dynamics of a model dinoflagellate predator-prey system and uses simulations to validate theoretical and experimental studies. A simple model for predator-prey interactions is derived by drawing upon analogies from chemical kinetics. This model is then modified to account for inefficiencies in predation. Simulation results are shown to closely match the model predictions. Additional simulations are then run which are based on experimental observations of predatory dinoflagellate behavior, and this study specifically investigates how the predatory dinoflagellate Karlodinium veneficum uses toxins to immobilize its prey and increase its feeding rate. These simulations account for complex dynamics that were not included in the basic models, and the results from these computational simulations closely match the experimentally observed predatory behavior of K. veneficum and reinforce the notion that predatory dinoflagellates utilize toxins to increase their feeding rate.
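A minimal version of the kind of mass-action predator-prey model described above can be written with SciPy; the rate constants and the predation-inefficiency factor here are illustrative, not the paper's calibrated values:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Mass-action predator-prey model in the spirit of the chemical-kinetics
    # analogy described above (all rate constants are made up for illustration).
    def rhs(t, y, k_growth=0.5, k_feed=0.02, eff=0.3, k_death=0.4):
        prey, pred = y
        feeding = k_feed * prey * pred            # encounter-driven predation
        return [k_growth * prey - feeding,
                eff * feeding - k_death * pred]   # eff < 1 models predation inefficiency

    sol = solve_ivp(rhs, (0.0, 100.0), [50.0, 5.0], max_step=0.1)
    print(sol.y[:, -1])  # final prey and predator densities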
Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
NASA Astrophysics Data System (ADS)
Junghans, Christoph; Mniszewski, Susan; Voter, Arthur; Perez, Danny; Eidenbenz, Stephan
2014-03-01
We present an example of a new class of tools that we call application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation (PDES). We demonstrate our approach with a TADSim application simulator that models the Temperature Accelerated Dynamics (TAD) method, which is an algorithmically complex member of the Accelerated Molecular Dynamics (AMD) family. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We further extend TADSim to model algorithm extensions to standard TAD, such as speculative spawning of the compute-bound stages of the algorithm, and predict performance improvements without having to implement such a method. Focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights into the TAD algorithm behavior and suggested extensions to the TAD method.
Scaling NS-3 DCE Experiments on Multi-Core Servers
2016-06-15
…that work well together. 3.2 Simulation Server Details: We ran the simulations on a Dell PowerEdge M520 blade server [8] running Ubuntu Linux 14.04… To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server… The MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on
Differences in Train-induced Vibration between Hard Soil and Soft Soil
NASA Astrophysics Data System (ADS)
Noyori, M.; Yokoyama, H.
2017-12-01
Vibration and noise caused by running trains sometimes raise environmental issues. Train-induced vibration is caused by moving static and dynamic axle loads. To reduce the vibration, it is important to clarify the conditions under which train-induced vibration increases. In this study, we clarified the differences in train-induced vibration between hard soil and soft soil using a numerical simulation method. The numerical simulation method we used is a combination of two analyses. The first is a coupled vibration analysis of a running train, a track, and a supporting structure, in which the excitation force applied to the viaduct slabs by a running train is computed. The second is a three-dimensional vibration analysis of the supporting structure and the ground, into which the excitation force computed by the first analysis is input. As a result of the numerical simulation, the ground vibration in the area not more than 25 m from the center of the viaduct is larger under the soft soil condition than under the hard soil condition in almost all frequency ranges. On the other hand, the ground vibration at 40 and 50 Hz at a point 50 m from the center of the viaduct is larger under the hard soil condition than under the soft soil condition. These results are consistent with those of a two-dimensional FEM based on a ground model alone. Thus, we concluded that they arise not from the effects of the running train but from the vibration characteristics of the ground.
AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, D.; Alfonsi, A.; Talbot, P.
2016-10-01
The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large amount of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may be not viable for certain cases. A solution that is being evaluated to assist with the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
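As a toy illustration of replacing expensive simulation runs with a fast surrogate, the sketch below fits a cheap polynomial response surface to a handful of sampled runs; the RISMC tools use more sophisticated reduced order models, so this is only a schematic of the idea, with made-up data:

    import numpy as np

    # Fit a cheap polynomial surrogate to a few expensive simulation runs.
    x_runs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])      # sampled input parameter
    y_runs = np.array([310., 335., 372., 428., 505.])   # expensive-code outputs (made up)

    surrogate = np.polynomial.Polynomial.fit(x_runs, y_runs, deg=2)
    print(surrogate(0.6))   # microsecond evaluation in place of an hours-long run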
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merkin, V. G.; Lionello, R.; Linker, J.
2016-11-01
Two well-established magnetohydrodynamic (MHD) codes are coupled to model the solar corona and the inner heliosphere. The corona is simulated using the MHD algorithm outside a sphere (MAS) model. The Lyon–Fedder–Mobarry (LFM) model is used in the heliosphere. The interface between the models is placed in a spherical shell above the critical point and allows both models to work in either a rotating or an inertial frame. Numerical tests are presented examining the coupled model solutions from 20 to 50 solar radii. The heliospheric simulations are run with both LFM and the MAS extension into the heliosphere, and use the same polytropic coronal MAS solutions as the inner boundary condition. The coronal simulations are performed for idealized magnetic configurations, with an out-of-equilibrium flux rope inserted into an axisymmetric background, with and without including the solar rotation. The temporal evolution at the inner boundary of the LFM and MAS solutions is shown to be nearly identical, as are the steady-state background solutions, prior to the insertion of the flux rope. However, after the coronal mass ejection has propagated through a significant portion of the simulation domain, the heliospheric solutions diverge. Additional simulations with different resolution are then performed and show that the MAS heliospheric solutions approach those of LFM when run with progressively higher resolution. Following these detailed tests, a more realistic simulation driven by the thermodynamic coronal MAS is presented, which includes solar rotation and an azimuthally asymmetric background and extends to the Earth's orbit.
Road simulation for four-wheel vehicle whole input power spectral density
NASA Astrophysics Data System (ADS)
Wang, Jiangbo; Qiang, Baomin
2017-05-01
The vibration of a running vehicle mainly comes from the road and influences ride performance, so simulation of the road roughness power spectral density is of great significance for analyzing automobile suspension vibration system parameters and evaluating ride comfort. Firstly, based on the mathematical model of road roughness power spectral density, this paper established the integrated white noise method for generating random road profiles. Then, in the MATLAB/Simulink environment, following the usual progression of automobile suspension research from a simple two-degree-of-freedom single-wheel vehicle model to a complex multiple-degree-of-freedom vehicle model, this paper built a simple single-excitation input simulation model. Finally, the spectral matrix was used to build a whole-vehicle excitation input simulation model. This simulation method is based on reliable and accurate mathematical theory and can be applied to random road simulation of any specified spectrum, providing a pavement excitation model and a foundation for vehicle ride performance research and vibration simulation.
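The integrated (filtered) white noise road model mentioned above is commonly written as a first-order shaping filter driven by white noise; a discrete-time Python sketch with illustrative class-B roughness constants follows (the paper's exact parameterization is not given in the abstract):

    import numpy as np

    # Filtered white noise road profile: dq/dt = -2*pi*n1*v*q + 2*pi*n0*sqrt(Gq*v)*w(t)
    v, dt, n_steps = 20.0, 0.001, 10000        # speed (m/s), time step (s), steps
    n0, n1, Gq = 0.1, 0.0628, 64e-6            # ref. spatial freq. (1/m), cutoff, roughness (m^3)
    rng = np.random.default_rng(1)
    q = np.zeros(n_steps)                      # road elevation under the wheel (m)
    for k in range(n_steps - 1):
        w = rng.normal(0.0, 1.0 / np.sqrt(dt))       # unit-intensity white noise sample
        dq = -2 * np.pi * n1 * v * q[k] + 2 * np.pi * n0 * np.sqrt(Gq * v) * w
        q[k + 1] = q[k] + dq * dt
    print(q.std())  # RMS roughness of the generated profile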
Wyant, M. C.; Bretherton, Christopher S.; Wood, Robert; ...
2015-01-09
A diverse collection of models are used to simulate the marine boundary layer in the southeast Pacific region during the period of the October–November 2008 VOCALS REx (VAMOS Ocean Cloud Atmosphere Land Study Regional Experiment) field campaign. Regional models simulate the period continuously in boundary-forced free-running mode, while global forecast models and GCMs (general circulation models) are run in forecast mode. The models are compared to extensive observations along a line at 20° S extending westward from the South American coast. Most of the models simulate cloud and aerosol characteristics and gradients across the region that are recognizably similar to observations, despite the complex interaction of processes involved in the problem, many of which are parameterized or poorly resolved. Some models simulate the regional low cloud cover well, though many models underestimate MBL (marine boundary layer) depth near the coast. Most models qualitatively simulate the observed offshore gradients of SO2, sulfate aerosol, and CCN (cloud condensation nuclei) concentration in the MBL as well as differences in concentration between the MBL and the free troposphere. Most models also qualitatively capture the decrease in cloud droplet number away from the coast. However, there are large quantitative intermodel differences in both means and gradients of these quantities. Many models are able to represent episodic offshore increases in cloud droplet number and aerosol concentrations associated with periods of offshore flow. Most models underestimate CCN (at 0.1% supersaturation) in the MBL and free troposphere. The GCMs also have difficulty simulating coastal gradients in CCN and cloud droplet number concentration near the coast. The overall performance of the models demonstrates their potential utility in simulating aerosol–cloud interactions in the MBL, though quantitative estimation of aerosol–cloud interactions and aerosol indirect effects of MBL clouds with these models remains uncertain.
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
NASA Technical Reports Server (NTRS)
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
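One of the topics above, how many runs are necessary for verification of requirements, has a standard zero-failure binomial answer that is easy to sketch; this is the textbook formula, not necessarily the TP's exact derivation:

    import math

    def runs_required(p_success, confidence):
        """Minimum Monte Carlo runs with zero allowed failures needed to demonstrate
        a success probability of at least p_success at the given confidence level
        (standard binomial zero-failure formula)."""
        return math.ceil(math.log(1.0 - confidence) / math.log(p_success))

    # e.g., demonstrating 99.7% success probability at 90% confidence
    print(runs_required(0.997, 0.90))  # -> 767 runs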
Dvarskas, Anthony
2017-03-01
While the development of the tourism industry can bring economic benefits to an area, it is important to consider the long-run impact of the industry on a given location. Particularly when the tourism industry relies upon a certain ecological state, those weighing different development options need to consider the long-run impacts of increased tourist numbers upon measures of ecological condition. This paper presents one approach for linking a model of recreational visitor behavior with an ecological model that estimates the impact of the increased visitors upon the environment. Two simulations were run for the model using initial parameters available from survey data and water quality data for beach locations in Croatia. Results suggest that the resilience of a given tourist location to the changes brought by increasing tourism numbers is important in determining its long-run sustainability. Further work should investigate additional model components, including the tourism industry, refinement of the relationships assumed by the model, and application of the proposed model in additional areas.
Emulation for probabilistic weather forecasting
NASA Astrophysics Data System (ADS)
Cornford, Dan; Barillec, Remi
2010-05-01
Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the 'ensemble runs' which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space, rather it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data assimilation like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
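The Gaussian-process emulation workflow described above can be sketched with scikit-learn: fit a GP to a small designed set of simulator runs, then predict with uncertainty at new inputs. The toy one-dimensional "simulator" below stands in for a weather model and is purely illustrative:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def simulator(x):
        """Stand-in for an expensive numerical model."""
        return np.sin(3.0 * x) + 0.5 * x

    X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)   # crude experimental design
    y_train = simulator(X_train).ravel()                # "ensemble" of training runs

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X_train, y_train)
    X_new = np.array([[0.33], [1.71]])
    mean, std = gp.predict(X_new, return_std=True)      # emulated output + uncertainty
    print(mean, std)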
Wei, Gaofeng; Tang, Gang; Fu, Zengliang; Sun, Qiuming; Tian, Feng
2010-10-01
The China Mechanical Virtual Human (CMVH) is a human musculoskeletal biomechanical simulation platform based on China Visible Human slice images, and it has great significance for realistic applications. This paper introduces the construction method of the CMVH 3D models. A simulation system solution based on Creator/Vega is then put forward to handle the complex and gigantic data of the 3D models. Finally, combined with MFC technology, the CMVH simulation system is developed and a running simulation scene is presented. This paper provides a new way for the virtual reality application of CMVH.
AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics
NASA Astrophysics Data System (ADS)
Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.
2017-05-01
We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.
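As context for the particle-to-grid stage mentioned above (calculating macroscopic properties of particles on a grid), a generic cloud-in-cell deposit in one dimension is sketched below; this is a standard PIC building block, not AMITIS source code:

    import numpy as np

    # Cloud-in-cell (CIC) deposit of particle charge onto a 1D periodic grid.
    nx, dx = 64, 1.0
    rng = np.random.default_rng(2)
    x = rng.uniform(0.0, nx * dx, 10000)       # particle positions
    q = np.full(x.size, 1.0)                   # particle charges (weights)

    density = np.zeros(nx)
    cell = np.floor(x / dx).astype(int)
    frac = x / dx - cell                       # fractional position within the cell
    np.add.at(density, cell % nx, q * (1.0 - frac))   # share to left grid node
    np.add.at(density, (cell + 1) % nx, q * frac)     # share to right node (periodic)
    print(density.sum())  # total deposited charge equals the sum of particle charges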
NASA Technical Reports Server (NTRS)
Morin, Cory; Monaghan, Andrew; Quattrochi, Dale; Crosson, William; Hayden, Mary; Ernst, Kacey
2015-01-01
Dengue fever is a mosquito-borne viral disease reemerging throughout much of the tropical Americas. Dengue virus transmission is explicitly influenced by climate and the environment through its primary vector, Aedes aegypti. Temperature regulates Ae. aegypti development, survival, and replication rates as well as the incubation period of the virus within the mosquito. Precipitation provides water for many of the preferred breeding habitats of the mosquito, including buckets, old tires, and other places water can collect. Although transmission regularly occurs along the border region in Mexico, dengue virus transmission in bordering Arizona has not occurred. Using NASA's TRMM (Tropical Rainfall Measuring Mission) satellite for precipitation input and Daymet for temperature and supplemental precipitation input, we modeled dengue transmission along a US-Mexico transect using a dynamic dengue transmission model that includes interacting vector ecology and epidemiological components. Model runs were performed for 5 cities in Sonora, Mexico and southern Arizona. Employing a Monte Carlo approach, we performed ensembles of several thousands of model simulations in order to resolve the model uncertainty arising from using different combinations of parameter values that are not well known. For cities with reported dengue case data, the top model simulations that best reproduced dengue case numbers were retained and their parameter values were extracted for comparison. These parameter values were used to run simulations in areas where dengue virus transmission does not occur or where dengue fever case data was unavailable. Additional model runs were performed to reveal how changes in climate or parameter values could alter transmission risk along the transect. The relative influence of climate variability and model parameters on dengue virus transmission is assessed to help public health workers prepare location specific infection prevention strategies.
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU, with hybrid parallelization (each of the multiple simulations runs simultaneously, and the computational tasks within each simulation are parallelized), is about 16 times faster than a sequential simulation on a CPU. We also improved the memory access pattern and reduced the memory footprint in order to optimize the computations on the GPU, and implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936
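For readers unfamiliar with the SSA, the sketch below implements the Gillespie direct method for a single decay reaction, with many independent realizations advanced side by side — a CPU stand-in for the per-thread realizations the GPU version parallelizes. The A → B model and all parameter values are assumptions for illustration.

```python
# Gillespie direct-method SSA for A -> B decay, many realizations at once.
import numpy as np

rng = np.random.default_rng(1)
n_real, k, t_end = 1000, 0.5, 10.0
count = np.full(n_real, 100)                  # copies of species A per run
t = np.zeros(n_real)

active = count > 0
while active.any():
    a0 = k * count[active].astype(float)      # propensity of A -> B
    tau = rng.exponential(1.0 / a0)           # time to next reaction
    t[active] += tau
    fire = t[active] <= t_end                 # only fire within the horizon
    idx = np.flatnonzero(active)
    count[idx[fire]] -= 1
    active = (count > 0) & (t <= t_end)

print("mean A remaining:", count.mean(), "theory:", 100 * np.exp(-k * t_end))
```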
NASA Astrophysics Data System (ADS)
Lupu, A.; Semeniuk, K.; McConnell, J. C.; Kaminski, J. W.; Toyota, K.; Neary, L.
2012-12-01
The Global Environmental Multiscale Air Quality (GEM-AQ) model was run in global and limited area model (LAM) modes for the baseline year 2000 and one future year, 2050, on three different horizontal grids of increasing resolution from global (1.5°) to North American (LAM, 0.45°) to Ontario regional scale (LAM, 0.15°). For the future simulation we used the high greenhouse emissions scenario RCP8.5. Boundary conditions for the LAM runs were taken from the coarser resolution runs. All simulations had 54 vertical sigma-pressure hybrid levels from the ground to the stratopause (˜50 km), which should give a good representation of ozone injection into the troposphere from the stratosphere. The model uses the interactive land surface scheme ISBA. Sea surface and lake temperatures are prescribed, but ice cover is partially interactive based on prescribed fields. A lake model, FLAKE, was coupled to GEM-AQ in order to capture the impacts of the Great Lakes on the meteorology when the model is run at high resolution. For the Ontario regional simulation the interactive lake model allowed for self-consistent water temperatures and moisture fluxes. The simulation for the year 2000 shows that the model reproduces the observed monthly surface temperatures across the US. Monthly surface ozone is reproduced at the level of detail of most other air quality models driven by year 2000 weather, as opposed to a free run forced by SSTs. Our year 2050 simulation shows that ozone levels during the summer throughout most of Ontario and Canada will increase. Regions south of the latitude of Lake Superior will generally see decreased levels of summer (JJA) ozone, except for around large urban areas such as Toronto, Chicago and Montreal. However, NOx levels will decrease during the summer, reflecting decreased emissions. Ozone levels in the US will generally improve. Indices other than simple averages yield a different perspective: if the MDA8 ozone metric and the NO2 one-hour 98th percentile are used, then air quality across Canada and the US will generally improve. From the perspective of meteorology, the most significant surface warming that is likely to occur by 2050 is during winter. The winter warming also reflects changes in large scale circulation, with baroclinic eddy storm tracks moving north. Winter warming contributes to a surface ozone increase by 2050 in spite of reduced emissions. In addition, we note that in the Ontario region and environs for 2050 there is a significant increase (~40 days) in the number of DD5 days, i.e. days where the temperature is above 5°C, a metric useful for the length of the growing season for agriculture. Conditions that affect forests and the movement of disease vectors will also change.
NASA Astrophysics Data System (ADS)
Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong
2016-11-01
To satisfy real-time and generality requirements, this paper proposes a laser target simulator for a semi-physical simulation system based on an RTX + LabWindows/CVI platform. Compared with the upper-lower computer architecture used in most current real-time simulation platforms, this system offers better maintainability and portability. The system runs on Windows, using the RTX real-time extension subsystem together with a reflective memory network to guarantee real-time performance for tasks such as evaluating the simulation model, transmitting simulation data, and maintaining real-time communication. These real-time tasks run in the RTSS process. A graphical interface written in LabWindows/CVI handles the non-real-time tasks, such as man-machine interaction and the display and storage of simulation data, which run in a Win32 process. Data exchange between the real-time RTSS process and the non-real-time Win32 process is accomplished through RTX shared memory and a task-scheduling algorithm. Experimental results show that the system achieves strong real-time performance, high stability, and high simulation accuracy, along with good human-computer interaction.
Deterministic Stress Modeling of Hot Gas Segregation in a Turbine
NASA Technical Reports Server (NTRS)
Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger
1998-01-01
Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.
The report gives results of a series of computer runs using the DOE-2.1E building energy model, simulating a small office in a hot, humid climate (Miami). These simulations assessed the energy and relative humidity (RH) penalties when the outdoor air (OA) ventilation rate is inc...
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
An OpenStudio Measure is a script that can manipulate an OpenStudio model and associated data to apply energy conservation measures (ECMs), run supplemental simulations, or visualize simulation results. The OpenStudio software development kit (SDK) and accessibility of the Ruby scripting language makes measure authorship accessible to both software developers and energy modelers. This paper discusses the life cycle of an OpenStudio Measure from development, testing, and distribution, to application.
Zhu, Hao; Sun, Yan; Rajagopal, Gunaretnam; Mondry, Adrian; Dhar, Pawan
2004-01-01
Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods described. PMID:15339335
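As a flavor of the extended-cellular-automaton approach, the sketch below implements a Greenberg-Hastings-style excitable-media rule in 2-D, the kind of local update a whole-heart model stacks into 3-D with cell-type-specific parameters. The grid size, state coding, periodic boundaries, and refractory length are illustrative assumptions, not the paper's rule.

```python
# Excitable-media cellular automaton (Greenberg-Hastings flavor):
# 0 = rest, 1 = excited, 2..n_refractory+1 = refractory states.
import numpy as np

n, n_refractory, threshold = 128, 4, 1
state = np.zeros((n, n), dtype=int)
state[n // 2, n // 2] = 1                 # point stimulus at the center

def step(s):
    excited = (s == 1).astype(int)
    # count excited von Neumann neighbors (periodic boundaries, a toy choice)
    nbrs = (np.roll(excited, 1, 0) + np.roll(excited, -1, 0) +
            np.roll(excited, 1, 1) + np.roll(excited, -1, 1))
    new = s.copy()
    new[(s == 0) & (nbrs >= threshold)] = 1               # rest -> excited
    adv = (s >= 1) & (s <= n_refractory)
    new[adv] = s[adv] + 1                                 # age through refractory
    new[s == n_refractory + 1] = 0                        # back to rest
    return new

for _ in range(50):
    state = step(state)                   # a circular wavefront propagates out
```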
2007-01-01
Aid (IWEDA) we developed techniques that allowed significant improvement in weather effects and impacts for wargames. TAWS was run for numerous and...found that the wargame realism was increased without impacting the run time. While these techniques are applicable to wargames in general, we tested...them by incorporation into the Advanced Warfighting Simulation (AWARS) model. AWARS was modified to incorporate weather impacts upon sensor
Hutchinson, C.B.; Johnson, Dale M.; Gerhart, James M.
1981-01-01
A two-dimensional finite-difference model was developed for simulation of steady-state ground-water flow in the Floridan aquifer throughout a 932-square-mile area, which contains nine municipal well fields. The overlying surficial aquifer contains a constant-head water table and is coupled to the Floridan aquifer by a leakage term that represents flow through a confining layer separating the two aquifers. Under the steady-state condition, all storage terms are set to zero. Utilization of the head-controlled flux condition allows head and flow to vary at the model-grid boundaries. Procedures are described to calibrate the model, test its sensitivity to input-parameter errors, and verify its accuracy for predictive purposes. Also included are attachments that describe setting up and running the model. An example model-interrogation run shows anticipated drawdowns that should result from pumping at the newly constructed Cross Bar Ranch and Morris Bridge well fields. (USGS)
Computer Laboratory for Multi-scale Simulations of Novel Nanomaterials
2014-09-15
schemes for multiscale modeling of polymers. Permselective ion-exchange membranes for protective clothing, fuel cells, and batteries are of special...polyelectrolyte membranes (PEM) with chemical warfare agents (CWA) and their simulants and (2) development of new simulation methods and computational...chemical potential using gauge cell method and calculation of density profiles. However, the code does not run in parallel environments. For mesoscale
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
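The emulation workflow the abstract describes can be sketched as follows: draw a space-filling design over the parameter space, evaluate a cheap, stand-in "simulation" at the design points, and train a Gaussian-process emulator that predicts the observable elsewhere. scipy/scikit-learn stand in for the production machinery, and the 2-D toy function is an assumption.

```python
# Hedged sketch of the emulation workflow: design -> runs -> GP emulator.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fake_simulation(theta):
    """Stand-in for an expensive N-body run returning one observable."""
    return np.sin(3 * theta[:, 0]) * np.exp(-theta[:, 1])

sampler = qmc.LatinHypercube(d=2, seed=0)     # space-filling design
X_train = sampler.random(26)                  # 26 design points, as in the paper
y_train = fake_simulation(X_train)

emu = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X_train, y_train)
X_new = sampler.random(5)
pred, sd = emu.predict(X_new, return_std=True)
print(np.c_[pred, fake_simulation(X_new), sd])   # emulator vs truth vs uncertainty
```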
NASA Astrophysics Data System (ADS)
Benjamin, J.; Rosser, N. J.; Dunning, S.; Hardy, R. J.; Karim, K.; Szczucinski, W.; Norman, E. C.; Strzelecki, M.; Drewniak, M.
2014-12-01
Risk assessments of the threat posed by rock avalanches rely upon numerical modelling of potential run-out and spreading, and are contingent upon a thorough understanding of the flow dynamics inferred from deposits left by previous events. Few records exist of multiple rock avalanches with boundary conditions sufficiently consistent to develop a set of more generalised rules for behaviour across events. A unique cluster of 20 large (3 × 10⁶ - 94 × 10⁶ m³) rock avalanche deposits along the Vaigat Strait, West Greenland, offers a rare opportunity to model a large sample of adjacent events sourced from a stretch of coastal mountains of relatively uniform geology and structure. Our simulations of these events were performed using VolcFlow, a geophysical mass flow code developed to simulate volcanic debris avalanches. Rheological calibration of the model was performed using a well-constrained event at Paatuut (AD 2000). The best-fit simulation assumes a constant retarding stress with a collisional stress coefficient (T0 = 250 kPa, ξ = 0.01), and simulates run-out to within ±0.3% of that observed. Despite being widely used to simulate rock avalanche propagation, other models that assume either a Coulomb frictional or a Voellmy rheology failed to reproduce the observed event characteristics and deposit distribution at Paatuut. We applied this calibration to 19 other events, simulating rock avalanche motion across 3D terrain of varying levels of complexity. Our findings illustrate the utility and sensitivity of modelling a single rock avalanche satisfactorily as a function of rheology, alongside the validity of applying the same parameters elsewhere, even within similar boundary conditions. VolcFlow can plausibly account for the observed morphology of a series of deposits emplaced by events of different types, although its performance is sensitive to a range of topographic and geometric factors. These exercises show encouraging results in the model's ability to simulate a series of events using a single set of parameters obtained by back-analysis of the Paatuut event alone. The results also hold important implications for our process understanding of rock avalanches in confined fjord settings, where correctly modelling material flux at the point of entry into the water is critical in tsunami generation.
Data-driven modelling of vertical dynamic excitation of bridges induced by people running
NASA Astrophysics Data System (ADS)
Racic, Vitomir; Morin, Jean Benoit
2014-02-01
With increasingly popular marathon events in urban environments, structural designers face a great deal of uncertainty when assessing dynamic performance of bridges occupied and dynamically excited by people running. While the dynamic loads induced by pedestrians walking have been intensively studied since the infamous lateral sway of the London Millennium Bridge in 2000, reliable and practical descriptions of running excitation are still very rare and limited. This interdisciplinary study has addressed the issue by bringing together a database of individual running force signals recorded by two state-of-the-art instrumented treadmills and two attempts to mathematically describe the measurements. The first modelling strategy is adopted from the available design guidelines for human walking excitation of structures, featuring perfectly periodic and deterministic characterisation of pedestrian forces presentable via Fourier series. This modelling approach proved to be inadequate for running loads due to the inherent near-periodic nature of the measured signals, a great inter-personal randomness of the dominant Fourier amplitudes and the lack of strong correlation between the amplitudes and running footfall rate. Hence, utilising the database established and motivated by the existing models of wind and earthquake loading, speech recognition techniques and a method of replicating electrocardiogram signals, this paper finally presents a numerical generator of random near-periodic running force signals which can reliably simulate the measurements. Such a model is an essential prerequisite for future quality models of dynamic loading induced by individuals, groups and crowds running under a wide range of conditions, such as perceptibly vibrating bridges and different combinations of visual, auditory and tactile cues.
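A minimal illustration of generating a near-periodic force signal — a Fourier-series template per footfall with small cycle-to-cycle jitter in period and amplitude — is sketched below. The harmonic count, jitter levels, and amplitudes are assumptions for illustration, not the paper's fitted values.

```python
# Hedged generator of a near-periodic vertical running-force signal:
# each cycle is a Fourier-series pulse with jittered period and amplitudes.
import numpy as np

rng = np.random.default_rng(2)
fs, f_step, n_steps = 1000, 2.8, 20            # sample rate (Hz), footfall rate, cycles
amps = np.array([1.6, 0.5, 0.2])               # mean harmonic amplitudes (body weight)

signal = []
for _ in range(n_steps):
    T = (1.0 / f_step) * (1 + 0.03 * rng.standard_normal())   # period jitter
    t = np.arange(0, T, 1 / fs)
    a = amps * (1 + 0.05 * rng.standard_normal(amps.size))    # amplitude jitter
    cycle = sum(ai * np.sin(np.pi * (i + 1) * t / T) for i, ai in enumerate(a))
    signal.append(np.clip(cycle, 0, None))     # contact force cannot be negative
force = np.concatenate(signal)                 # force history in body-weight units
```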
Automatic Fitting of Spiking Neuron Models to Electrophysiological Recordings
Rossant, Cyrille; Goodman, Dan F. M.; Platkiewicz, Jonathan; Brette, Romain
2010-01-01
Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models. PMID:20224819
FLAME: A platform for high performance computing of complex systems, applied for three case studies
Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...
2011-01-01
FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated in a short time. Modellers are typically hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine; FLAME overcomes both obstacles. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.
Tropical pacing of Antarctic sea ice increase
NASA Astrophysics Data System (ADS)
Schneider, D. P.
2015-12-01
One reason why coupled climate model simulations generally do not reproduce the observed increase in Antarctic sea ice extent may be that their internally generated climate variability is not synchronized with the observed phases of phenomena like the Pacific Decadal Oscillation (PDO) and ENSO. For example, it is unlikely for a free-running coupled model simulation to capture the shift of the PDO from its positive to negative phase during 1998, and the subsequent ~15 year duration of the negative PDO phase. In previously presented work based on atmospheric models forced by observed tropical SSTs and stratospheric ozone, we demonstrated that tropical variability is key to explaining the wind trends over the Southern Ocean during the past ~35 years, particularly in the Ross, Amundsen and Bellingshausen Seas, the regions of the largest trends in sea ice extent and ice season duration. Here, we extend this idea to coupled model simulations with the Community Earth System Model (CESM) in which the evolution of SST anomalies in the central and eastern tropical Pacific is constrained to match the observations. This ensemble of 10 "tropical pacemaker" simulations shows a more realistic evolution of Antarctic sea ice anomalies than does its unconstrained counterpart, the CESM Large Ensemble (both sets of runs include stratospheric ozone depletion and other time-dependent radiative forcings). In particular, the pacemaker runs show that increased sea ice in the eastern Ross Sea is associated with a deeper Amundsen Sea Low (ASL) and stronger westerlies over the south Pacific. These circulation patterns in turn are linked with the negative phase of the PDO, characterized by negative SST anomalies in the central and eastern Pacific. The timing of tropical decadal variability with respect to ozone depletion further suggests a strong role for tropical variability in the recent acceleration of the Antarctic sea ice trend, as ozone depletion stabilized by the late 1990s, prior to the most recent major shift in tropical climate. In the pacemaker runs, the positive sea ice trend in the eastern Ross Sea is stronger during the most recent period (~2000-2014) than during the period of rapid ozone depletion (~1980-1996).
NASA Astrophysics Data System (ADS)
Ivanovic, Ruza; Gregoire, Lauren; Kageyama, Masa; Roche, Didier; Valdes, Paul; Burke, Andrea; Drummond, Rosemarie; Peltier, W. Richard; Tarasov, Lev
2016-04-01
The last deglaciation, which marked the transition between the last glacial and present interglacial periods, was punctuated by a series of rapid (centennial and decadal) climate changes. Numerical climate models are useful for investigating mechanisms that underpin the events, especially now that some of the complex models can be run for multiple millennia. We have set up a Paleoclimate Modelling Intercomparison Project (PMIP) working group to coordinate efforts to run transient simulations of the last deglaciation, and to facilitate the dissemination of expertise between modellers and those engaged with reconstructing the climate of the last 21 thousand years. Here, we present the design of a coordinated Core simulation over the period 21-9 thousand years before present (ka) with time varying orbital forcing, greenhouse gases, ice sheets, and other geographical changes. A choice of two ice sheet reconstructions is given. Additional focussed simulations will also be coordinated on an ad-hoc basis by the working group, for example to investigate the effect of ice sheet and iceberg meltwater, and the uncertainty in other forcings. Some of these focussed simulations will concentrate on shorter durations around specific events to allow the more computationally expensive models to take part. Ivanovic, R. F., Gregoire, L. J., Kageyama, M., Roche, D. M., Valdes, P. J., Burke, A., Drummond, R., Peltier, W. R., and Tarasov, L.: Transient climate simulations of the deglaciation 21-9 thousand years before present; PMIP4 Core experiment design and boundary conditions, Geosci. Model Dev. Discuss., 8, 9045-9102, doi:10.5194/gmdd-8-9045-2015, 2015.
Connectionist agent-based learning in bank-run decision making
NASA Astrophysics Data System (ADS)
Huang, Weihong; Huang, Qiao
2018-05-01
It is of the utmost importance for policy makers, bankers, and investors to thoroughly understand the probability of bank run (PBR), which was often neglected in the classical models. Bank runs are not merely due to miscoordination (Diamond and Dybvig, 1983) or deterioration of bank assets (Allen and Gale, 1998) but to various factors. This paper presents simulation results for the nonlinear dynamic probabilities of bank runs based on the global games approach, with the distinct assumption that heterogeneous agents hold highly correlated but unidentical beliefs about the true payoffs. The specific technique used in the simulation is to give agents an integrated cognitive-affective network. It is observed that, even when the economy is good, agents are significantly affected by the cognitive-affective network in reacting to bad news, which might lead to a bank run. Increases in both the late payoff, R, and the early payoff, r, decrease the effect of the affective process. Increased risk sharing may or may not increase the PBR, while an increase in the late payoff helps prevent bank runs. This paper is among the first to link agent-based computational economics and behavioral economics.
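A hedged sketch of estimating the PBR by simulation in a global-games flavor is given below: agents observe highly correlated but unidentical noisy signals of the fundamental and withdraw when their signal falls below a threshold, and a run occurs when withdrawals exceed the bank's liquid capacity. The cognitive-affective network is omitted, and all numbers are assumptions.

```python
# Monte Carlo estimate of the probability of bank run (PBR) under
# correlated-but-unidentical beliefs about the fundamental.
import numpy as np

rng = np.random.default_rng(3)
n_agents, n_sims = 500, 20_000
rho, cutoff, liquidity = 0.8, 0.0, 0.5     # belief correlation, withdrawal cutoff, capacity

runs = 0
for _ in range(n_sims):
    theta = rng.normal(0.5, 1.0)           # true payoff fundamental
    common = rng.standard_normal()         # shared noise component
    # highly correlated but unidentical beliefs about theta
    signals = (theta + np.sqrt(rho) * common
               + np.sqrt(1 - rho) * rng.standard_normal(n_agents))
    withdraw_frac = np.mean(signals < cutoff)
    runs += withdraw_frac > liquidity      # run if withdrawals exceed capacity
print("estimated PBR:", runs / n_sims)
```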
Computer-Aided System Engineering and Analysis (CASE/A) Programmer's Manual, Version 5.0
NASA Technical Reports Server (NTRS)
Knox, J. C.
1996-01-01
The Computer Aided System Engineering and Analysis (CASE/A) Version 5.0 Programmer's Manual provides the programmer and user with information regarding the internal structure of the CASE/A 5.0 software system. CASE/A 5.0 is a trade study tool that provides modeling/simulation capabilities for analyzing environmental control and life support systems and active thermal control systems. CASE/A has been successfully used in studies such as the evaluation of carbon dioxide removal in the space station. CASE/A modeling provides a graphical and command-driven interface for the user. This interface allows the user to construct a model by placing equipment components in a graphical layout of the system hardware, then connect the components via flow streams and define their operating parameters. Once the equipment is placed, the simulation time and other control parameters can be set to run the simulation based on the model constructed. After completion of the simulation, graphical plots or text files can be obtained for evaluation of the simulation results over time. Additionally, users have the capability to control the simulation and extract information at various times in the simulation (e.g., control equipment operating parameters over the simulation time or extract plot data) by using "User Operations (OPS) Code." This OPS code is written in FORTRAN with a canned set of utility subroutines for performing common tasks. CASE/A version 5.0 software runs under the VAX VMS(Trademark) environment. It utilizes the Tektronix 4014(Trademark) graphics display system and the VT100(Trademark) text manipulation/display system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert
2017-07-10
Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An advantage and attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate compared to running a high-fidelity numerical simulation: a reduced-order model takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well/skin factor, bottom-hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are developed by minimizing a nonlinear least-squares misfit function using the Levenberg–Marquardt algorithm. The misfit function is based on the difference between numerical simulation data and the reduced-order model. ROM-1 is constructed from polynomials up to fourth order. It accurately reproduces the power output of the numerical simulations for low values of permeability and certain features of the field-scale data. ROM-2 uses more analytical functions, consisting of polynomials up to order eight, exponential functions, and smooth approximations of Heaviside functions, and accurately describes the field data. At higher permeabilities, ROM-2 reproduces numerical results better than ROM-1; however, it deviates considerably from numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data; ROM-3 provides a middle ground for model parsimony. Based on R²-values for the training, validation, and prediction data sets, we found ROM-3 to be a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. In terms of computational time, all the ROMs are 10⁴ times faster than running a high-fidelity numerical simulation. In conclusion, this makes the proposed regression-based ROMs attractive for real-time EGS applications because they are fast and provide reasonably good predictions for thermal power output.
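The ROM-fitting step can be sketched in a few lines: a polynomial model (echoing the quartic form of ROM-1) is fit to a synthetic power-output curve by nonlinear least squares with the Levenberg–Marquardt algorithm via scipy's method="lm". The training data here are an assumption, not the paper's simulations.

```python
# Fit a quartic ROM to a (synthetic) power-output curve with Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 1.0, 50)                      # normalized time
truth = 10.0 - 3.0 * t - 2.0 * t**2 + 0.5 * t**4   # stand-in simulation output
data = truth + 0.05 * np.random.default_rng(4).standard_normal(t.size)

def residual(c):
    rom = c[0] + c[1] * t + c[2] * t**2 + c[3] * t**3 + c[4] * t**4
    return rom - data                              # misfit: ROM minus simulation

fit = least_squares(residual, x0=np.ones(5), method="lm")
print("ROM-1-style coefficients:", np.round(fit.x, 3))
```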
NASA Astrophysics Data System (ADS)
Carrier, M.; Ngodock, H.; Smith, S. R.; Souopgui, I.
2016-02-01
NASA's Surface Water and Ocean Topography (SWOT) satellite, scheduled for launch in 2020, will provide sea surface height anomaly (SSHA) observations with a wider swath width and higher spatial resolution than current satellite altimeters. It is expected that this will help to further constrain ocean models in terms of the mesoscale circulation. In this work, this expectation is investigated by way of twin data assimilation experiments using the Navy Coastal Ocean Model Four Dimensional Variational (NCOM-4DVAR) data assimilation system using a weak constraint formulation. Here, a nature run is created from which SWOT observations are sampled, as well as along-track SSHA observations from simulated Jason-2 tracks. The simulated SWOT data has appropriate spatial coverage, resolution, and noise characteristics based on an observation-simulator program provided by the SWOT science team. The experiment is run for a three-month period during which the analysis is updated every 24 hours and each analysis is used to initialize a 96 hour forecast. The forecasts in each experiment are compared to the available nature run to determine the impact of the assimilated data. It is demonstrated here that the SWOT observations help to constrain the model mesoscale in a more consistent manner than traditional altimeter observations. The findings of this study suggest that data from SWOT may have a substantial impact on improving the ocean model analysis and forecast of mesoscale features and surface ocean transport.
Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea
2015-09-01
The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced-order modeling techniques. During FY2015, we investigated and applied reduced-order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and employing surrogate models instead of the actual simulation codes. This report focuses on reduced-order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much faster time (µs instead of hours/days). We apply reduced-order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Huan; Cheng, Liang; Chuah, Mooi Choo
In the generation, transmission, and distribution sectors of the smart grid, intelligence of field devices is realized by programmable logic controllers (PLCs). Many smart-grid subsystems are essentially cyber-physical energy systems (CPES): For instance, the power system process (i.e., the physical part) within a substation is monitored and controlled by a SCADA network with hosts running miscellaneous applications (i.e., the cyber part). To study the interactions between the cyber and physical components of a CPES, several co-simulation platforms have been proposed. However, the network simulators/emulators of these platforms do not include a detailed traffic model that takes into account the impacts of the execution model of PLCs on traffic characteristics. As a result, network traces generated by co-simulation only reveal the impacts of the physical process on the contents of the traffic generated by SCADA hosts, whereas the distinction between PLCs and computing nodes (e.g., a hardened computer running a process visualization application) has been overlooked. To generate realistic network traces using co-simulation for the design and evaluation of applications relying on accurate traffic profiles, it is necessary to establish a traffic model for PLCs. In this work, we propose a parameterized model for PLCs that can be incorporated into existing co-simulation platforms. We focus on the DNP3 subsystem of slave PLCs, which automates the processing of packets from the DNP3 master. To validate our approach, we extract model parameters from both the configuration and network traces of real PLCs. Simulated network traces are generated and compared against those from PLCs. Our evaluation shows that our proposed model captures the essential traffic characteristics of DNP3 slave PLCs, which can be used to extend existing co-simulation platforms and gain further insights into the behaviors of CPES.
Secular trends and climate drift in coupled ocean-atmosphere general circulation models
NASA Astrophysics Data System (ADS)
Covey, Curt; Gleckler, Peter J.; Phillips, Thomas J.; Bader, David C.
2006-02-01
Coupled ocean-atmosphere general circulation models (coupled GCMs) with interactive sea ice are the primary tool for investigating possible future global warming and numerous other issues in climate science. A long-standing problem with such models is that when different components of the physical climate system are linked together, the simulated climate can drift away from observation unless constrained by ad hoc adjustments to interface fluxes. However, 11 modern coupled GCMs, including three that do not employ flux adjustments, behave much better in this respect than the older generation of models. Surface temperature trends in control run simulations (with external climate forcing such as solar brightness and atmospheric carbon dioxide held constant) are small compared with observed trends, which include 20th century climate change due to both anthropogenic and natural factors. Sea ice changes in the models are dominated by interannual variations. Deep ocean temperature and salinity trends are small enough for model control runs to extend over 1000 simulated years or more, but trends in some regions, most notably the Arctic, differ substantially among the models and may be problematic. Methods used to initialize coupled GCMs can mitigate climate drift but cannot eliminate it. Lengthy "spin-ups" of models, made possible by increasing computer power, are one reason for the improvements this paper documents.
Topographic filtering simulation model for sediment source apportionment
NASA Astrophysics Data System (ADS)
Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin
2018-05-01
We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify those locations that are likely to contribute most of the sediment load delivered from a watershed. The reduced-complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of the sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery; instead, it uses model conditioning to develop a large number of possible solutions. For each model run, the locations that contribute 90% of the sediment loading are identified, and those that appear in this set in most of the 10,000 model runs are flagged as the sources most likely to contribute the bulk of the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.
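The conditioning loop can be sketched as below: each candidate parameter pair defines transfer functions of distance and elevation drop (an exponential form is assumed here for illustration), parameter sets whose routed load matches the observed load are retained, and cells that recur among the top contributors across retained runs are flagged. The synthetic erosion field, parameter ranges, and tolerances are all assumptions.

```python
# Hedged sketch of ensemble conditioning for sediment-source apportionment.
import numpy as np

rng = np.random.default_rng(5)
n_cells = 1000
erosion = rng.lognormal(0.0, 1.0, n_cells)        # annual soil erosion per cell
dist = rng.uniform(0, 2000, n_cells)              # distance to channel (m)
drop = rng.uniform(0, 50, n_cells)                # elevation change (m)
observed_load = 0.25 * erosion.sum()              # toy watershed-outlet load

hits = np.zeros(n_cells)
kept = 0
for _ in range(10_000):
    k_d, k_z = rng.uniform(100, 3000), rng.uniform(1, 100)
    sdr = np.exp(-dist / k_d) * np.exp(-drop / k_z)       # sediment delivery ratio
    delivered = erosion * sdr
    load = delivered.sum()
    if abs(load - observed_load) / observed_load < 0.05:  # conditioned (retained) run
        kept += 1
        order = np.argsort(delivered)[::-1]
        top = order[np.cumsum(delivered[order]) <= 0.9 * load]
        hits[top] += 1                                    # cells in the 90% set
likely = np.flatnonzero(hits > 0.9 * kept)
print(kept, "runs retained;", likely.size, "cells flagged as dominant sources")
```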
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs, so that diffuse reflectance values can be calculated rapidly for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor in GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
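The rescaling idea itself ("white Monte Carlo") fits in a few lines, sketched here in Python rather than the paper's MATLAB/CUDA package: store photon path lengths from one absorption-free run, then obtain diffuse reflectance for any absorption coefficient by reweighting each detected photon with exp(-mu_a × pathlength), with no re-simulation. The stored path lengths below are synthetic assumptions, not output of a real run.

```python
# Rescale one stored Monte Carlo run to new absorption coefficients.
import numpy as np

rng = np.random.default_rng(6)
n_detected = 100_000
pathlength = rng.gamma(shape=3.0, scale=0.5, size=n_detected)   # cm, toy paths
base_weight = np.full(n_detected, 1.0 / n_detected)             # detected weights

def diffuse_reflectance(mu_a):
    """Reweight the stored run for a new absorption coefficient (1/cm)."""
    return np.sum(base_weight * np.exp(-mu_a * pathlength))

for mu_a in (0.01, 0.1, 1.0):
    print(f"mu_a = {mu_a:5.2f} /cm -> Rd = {diffuse_reflectance(mu_a):.4f}")
```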
NASA Technical Reports Server (NTRS)
Throop, David R.
1992-01-01
The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.
Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.
Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A
2014-01-01
Multiple software programs are available for designing and running large scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specifics are preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients. PMID:25374542
NRL 1989 Beam Propagation Studies in Support of the ATA Multi-Pulse Propagation Experiment
1990-08-31
papers presented here were all written prior to the completion of the experiment. The first of these papers presents simulation results which modeled ...beam stability and channel evolution for an entire five pulse burst. The second paper describes a new air chemistry model used in the SARLAC...Experiment: A new air chemistry model for use in the propagation codes simulating the MPPE was developed by making analytic fits to benchmark runs with
2014-04-30
cost to acquire systems as design maturity could be verified incrementally as the system was developed vice waiting for specific large “ big bang ...Framework (MBAF) be applied to simulate or optimize process variations on programs? LSI Roles and Responsibilities A review of the roles and...the model/process optimization process. It is the current intent that NAVAIR will use the model to run simulations on process changes in an attempt to
NASA Astrophysics Data System (ADS)
Taxak, A. K.; Ojha, C. S. P.
2017-12-01
Land use and land cover (LULC) changes within a watershed are recognised as an important factor affecting hydrological processes and water resources. LULC changes continuously, not only over the long term but also at inter-annual and seasonal scales, and these changes affect interception, storage, and soil moisture. A widely used approach in rainfall-runoff modelling through land surface models (LSMs) and hydrological models is to keep the LULC fixed throughout the model run period. In long-term simulations where land use changes during the run period, a single static LULC does not represent the true ground conditions and can result in stationarity of the model responses. The present work presents a case study in which changes in LULC are incorporated by using multiple LULC layers. LULC maps for the study period were created using imagery from the Landsat series, Sentinel, and EO-1 ALI. The distributed, physically based Variable Infiltration Capacity (VIC) model was modified to allow inclusion of LULC as a time-varying variable, just like climate. The Narayani basin was simulated with LULC, leaf area index (LAI), albedo and climate data for 1992-2015. The results showed that the time-varying parameterization approach yields a large improvement over the conventional fixed parameterization approach in terms of long-term water balance. The proposed modelling approach could improve hydrological modelling for applications such as land cover change studies and water budget studies.
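A minimal sketch of the time-varying parameterization follows: instead of one static map, the driver selects the LULC layer valid for each simulation year before stepping the model. The file layout and the `vic_step` placeholder are hypothetical, not the modified VIC interface.

```python
# Hedged sketch: drive a hydrological model with a time-varying LULC layer.
lulc_years = {1992: "lulc_1992.npy", 2000: "lulc_2000.npy", 2010: "lulc_2010.npy"}

def layer_for(year):
    """Pick the most recent LULC map at or before the simulation year."""
    valid = [y for y in sorted(lulc_years) if y <= year]
    return lulc_years[valid[-1]]

def vic_step(year, lulc_file):
    pass  # hypothetical placeholder for one year of VIC water-balance simulation

for year in range(1992, 2016):
    vic_step(year, layer_for(year))   # LULC varies with time, like climate
```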
The Madden-Julian oscillation in ECHAM4 coupled and uncoupled general circulation models
Sperber, Kenneth R.; Gualdi, Silvio; Legutke, Stephanie; ...
2005-06-29
The Madden-Julian oscillation (MJO) dominates tropical variability on timescales of 30–70 days. During the boreal winter/spring, it is manifested as an eastward propagating disturbance, with a strong convective signature over the eastern hemisphere. The space–time structure of the MJO is analyzed using simulations with the ECHAM4 atmospheric general circulation model run with observed monthly mean sea-surface temperatures (SSTs), and coupled to three different ocean models. The coherence of the eastward propagation of MJO convection is sensitive to the ocean model to which ECHAM4 is coupled. For ECHAM4/OPYC and ECHO-G, models for which ~100 years of daily data is available, Monte Carlo sampling indicates that their metrics of eastward propagation are different at the 1% significance level. The flux-adjusted coupled simulations, ECHAM4/OPYC and ECHO-G, maintain a more realistic mean-state, and have a more realistic MJO simulation than the nonadjusted scale interaction experiment (SINTEX) coupled runs. The SINTEX model exhibits a cold bias in Indian Ocean and tropical West Pacific Ocean sea-surface temperature of ~0.5°C. This cold bias affects the distribution of time-mean convection over the tropical eastern hemisphere. Furthermore, the eastward propagation of MJO convection in this model is not as coherent as in the two models that used flux adjustment or when compared to an integration of ECHAM4 with prescribed observed SST. This result suggests that simulating a realistic basic state is at least as important as air–sea interaction for organizing the MJO. While all of the coupled models simulate the warm (cold) SST anomalies that precede (succeed) the MJO convection, the interactions of the components of the net surface heat flux that lead to these anomalies are different over the Indian Ocean. The ECHAM4/OPYC model, in which the atmosphere is run at a horizontal resolution of T42, has eastward propagating zonal wind anomalies and latent heat flux anomalies. However, the integrations with ECHO-G and SINTEX, which used T30 atmospheres, produce westward propagation of the latent heat flux anomalies, contrary to reanalysis. Furthermore, it is suggested that the differing ability of the models to represent the near-surface westerlies over the Indian Ocean is related to the different horizontal resolutions of the atmospheric model employed.
The Scylla Multi-Code Comparison Project
NASA Astrophysics Data System (ADS)
Maller, Ariyeh; Stewart, Kyle; Bullock, James; Oñorbe, Jose; Scylla Team
2016-01-01
Cosmological hydrodynamical simulations are one of the main techniques used to understand galaxy formation and evolution. However, it is far from clear to what extent different numerical techniques and different implementations of feedback yield different results. The Scylla Multi-Code Comparison Project seeks to address this issue by running identical initial-condition simulations with different popular hydrodynamic galaxy formation codes. Here we compare simulations of a Milky Way mass halo using the codes enzo, ramses, art, arepo and gizmo-psph. The different runs produce galaxies with a variety of properties. There are many differences, but also many similarities. For example, we find that cold flow disks exist in all runs: extended gas structures, far beyond the galactic disk, that show signs of rotation. Also, the angular momentum of warm gas in the halo is much larger than the angular momentum of the dark matter. We also find notable differences between runs. The temperature and density distribution of hot gas can differ by over an order of magnitude between codes, and the stellar mass to halo mass relation also varies widely. These results suggest that observations of galaxy gas halos and the stellar mass to halo mass relation can be used to constrain the correct model of feedback.
NASA Astrophysics Data System (ADS)
Lukey, B. T.; Sheffield, J.; Bathurst, J. C.; Lavabre, J.; Mathys, N.; Martin, C.
1995-08-01
The sediment yield of two catchments in southern France was modelled using the newly developed sediment code of SHETRAN. A fire in August 1990 denuded the Rimbaud catchment, providing an opportunity to study the effect of vegetation cover on sediment yield by running the model for both pre- and post-fire cases. Model output is in the form of upper and lower bounds on sediment discharge, reflecting the uncertainty in the erodibility of the soil. The results are encouraging, since measured sediment discharge falls largely between the predicted bounds, and simulated sediment yield is dramatically lower for the catchment before the fire, in agreement with observation. SHETRAN is also applied to the Laval catchment, which is subject to badlands gully erosion. Again using the principle of generating upper and lower bounds on sediment discharge, the model is shown to be capable of predicting the bulk sediment discharge over periods of months. To simulate the effect of reforestation, the model is run with vegetation cover equivalent to that of a neighbouring fully forested basin. The results obtained indicate that SHETRAN provides a powerful tool for predicting the impact of environmental change and land management on sediment yield.
NASA Astrophysics Data System (ADS)
Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.
2014-08-01
The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems, from the top of the atmosphere down to the control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase-screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase, the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computational resources and allows for variability in the power contained in the frozen-flow and stochastic components of the atmosphere. Users can either generate atmosphere datacubes in advance of runs, where memory constraints allow, to save computation time, or compute the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
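The abstract does not spell out the recursion, but a common form of such an autoregressive update, and a minimal sketch of the idea, blends the previous screen with a freshly drawn Kolmogorov-filtered innovation; the coefficient alpha sets the split between frozen-flow-like persistence and stochastic boiling (all parameter values below are illustrative):

```python
import numpy as np

def kolmogorov_screen(n, r0, dx, rng):
    """Draw one random phase screen with a Kolmogorov-like spectrum.

    White noise is filtered in Fourier space by sqrt(PSD); the overall
    amplitude scaling here is schematic rather than rigorously calibrated.
    """
    f = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(f, f)
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0 / (n * dx)          # avoid the singular piston mode
    psd = 0.023 * r0 ** (-5.0 / 3.0) * k ** (-11.0 / 3.0)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.real(np.fft.ifft2(noise * np.sqrt(psd))) * n

rng = np.random.default_rng(0)
n, alpha = 128, 0.999                 # alpha near 1 -> mostly frozen flow
screen = kolmogorov_screen(n, r0=0.15, dx=0.01, rng=rng)
for _ in range(1000):                 # evolve in place: no datacube in memory
    innovation = kolmogorov_screen(n, r0=0.15, dx=0.01, rng=rng)
    screen = alpha * screen + np.sqrt(1.0 - alpha**2) * innovation
```

Because the innovation is drawn with the same spatial statistics as the screen, the AR(1) blend preserves those statistics at every step while holding only the current screen in memory, rather than a precomputed datacube.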
Comparing Simulated and Experimental Data from UCN τ
NASA Astrophysics Data System (ADS)
Howard, Dezrick; Holley, Adam
2017-09-01
The UCN τ experiment is designed to measure the average lifetime of a free neutron (τn) by trapping ultracold neutrons (UCN) in a magneto-gravitational trap and allowing them to β-decay, with the ultimate goal of minimizing the uncertainty to approximately 0.01% (0.1 s). Understanding the systematics of the experiment at the level necessary to reach this high precision may help to better understand the disparity between measurements from cold neutron beam and UCN bottle experiments (τn ≈ 888 s and τn ≈ 878 s, respectively). To assist in evaluating systematics that might conceivably contribute at this level, a neutron spin-tracking Monte Carlo simulation, which models a UCN population's behavior throughout a run, is currently under development. The simulation will utilize an empirical map of the magnetic field in the trap (see poster by K. Hoffman) by interpolating the field between measured points (see poster by J. Felkins) in order to model the depolarization mechanism with high fidelity. As a preliminary step, I have checked that the Monte Carlo model can reasonably reproduce the observed behavior of the experiment. In particular, I will present a comparison between simulated data and data acquired from the 2016-2017 UCN τ run cycle.
Scidac-Data: Enabling Data Driven Modeling of Exascale Computing
NASA Astrophysics Data System (ADS)
Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; Tsaris, Aristeidis; Norman, Andrew; Lyon, Adam; Ross, Robert
2017-10-01
The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project examines the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular, we describe how the Sequential Access via Metadata (SAM) data-handling system, in combination with the dCache/Enstore-based data archive facilities, has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.
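The measured SciDAC-Data workflow distributions are not reproduced here; as a toy stand-in for the kind of cache question these queuing simulations address, the sketch below replays a synthetic access trace with Zipf-like file popularity (an assumption, not the project's data) through an LRU cache:

```python
import numpy as np
from collections import OrderedDict

def lru_hit_rate(accesses, cache_size):
    """Replay a file-access trace through an LRU cache; return hit fraction."""
    cache, hits = OrderedDict(), 0
    for f in accesses:
        if f in cache:
            hits += 1
            cache.move_to_end(f)          # mark as most recently used
        else:
            cache[f] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

rng = np.random.default_rng(1)
# Zipf-like popularity is a common stand-in for dataset-reuse skew.
trace = rng.zipf(1.3, size=100_000) % 10_000
for size in (100, 1_000, 5_000):
    print(size, lru_hit_rate(trace, size))
```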
Phast4Windows: a 3D graphical user interface for the reactive-transport simulator PHAST.
Charlton, Scott R; Parkhurst, David L
2013-01-01
Phast4Windows is a Windows® program for developing and running groundwater-flow and reactive-transport models with the PHAST simulator. This graphical user interface allows definition of grid-independent spatial distributions of model properties (the porous media properties, the initial head and chemistry conditions, boundary conditions, and locations of wells, rivers, drains, and accounting zones) and other parameters necessary for a simulation. Spatial data can be defined without reference to a grid by drawing, by point-by-point definitions, or by importing files, including ArcInfo® shape and raster files. All definitions can be inspected, edited, deleted, moved, copied, and switched from hidden to visible through the data tree of the interface. Model features are visualized in the main panel of the interface, so that it is possible to zoom, pan, and rotate features in three dimensions (3D). PHAST simulates single phase, constant density, saturated groundwater flow under confined or unconfined conditions. Reactions among multiple solutes include mineral equilibria, cation exchange, surface complexation, solid solutions, and general kinetic reactions. The interface can be used to develop and run simple or complex models, and is ideal for use in the classroom, for analysis of laboratory column experiments, and for development of field-scale simulations of geochemical processes and contaminant transport. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.
Structure of High Latitude Currents in Magnetosphere-Ionosphere Models
NASA Astrophysics Data System (ADS)
Wiltberger, M.; Rigler, E. J.; Merkin, V.; Lyon, J. G.
2017-03-01
Using three resolutions of the Lyon-Fedder-Mobarry global magnetosphere-ionosphere model (LFM) and the Weimer 2005 empirical model, we examine the structure of the high latitude field-aligned current patterns. Each resolution was run for the entire Whole Heliosphere Interval, which contained two high speed solar wind streams and modest interplanetary magnetic field strengths. Average states of the field-aligned current (FAC) patterns for 8 interplanetary magnetic field clock angle directions are computed using data from these runs. Generally speaking, the patterns obtained agree well with results obtained from the Weimer 2005 model computed using the solar wind and IMF conditions that correspond to each bin. As the simulation resolution increases, the currents become more intense and narrow. A machine learning analysis of the FAC patterns shows that the ratio of Region 1 (R1) to Region 2 (R2) currents decreases as the simulation resolution increases. This brings the simulation results into better agreement with observational predictions and the Weimer 2005 model results. The increase in R2 current strengths also results in the cross polar cap potential (CPCP) pattern being concentrated at higher latitudes. Current-voltage relationships between the R1 currents and the CPCP are quite similar at the higher resolutions, indicating the simulation is converging on a common solution. We conclude that LFM simulations are capable of reproducing the statistical features of FAC patterns.
Structure of high latitude currents in global magnetospheric-ionospheric models
Wiltberger, M; Rigler, E. J.; Merkin, V; Lyon, J. G
2016-01-01
Using three resolutions of the Lyon-Fedder-Mobarry global magnetosphere-ionosphere model (LFM) and the Weimer 2005 empirical model, we examine the structure of the high latitude field-aligned current patterns. Each resolution was run for the entire Whole Heliosphere Interval, which contained two high speed solar wind streams and modest interplanetary magnetic field strengths. Average states of the field-aligned current (FAC) patterns for 8 interplanetary magnetic field clock angle directions are computed using data from these runs. Generally speaking, the patterns obtained agree well with results obtained from the Weimer 2005 model computed using the solar wind and IMF conditions that correspond to each bin. As the simulation resolution increases, the currents become more intense and narrow. A machine learning analysis of the FAC patterns shows that the ratio of Region 1 (R1) to Region 2 (R2) currents decreases as the simulation resolution increases. This brings the simulation results into better agreement with observational predictions and the Weimer 2005 model results. The increase in R2 current strengths also results in the cross polar cap potential (CPCP) pattern being concentrated at higher latitudes. Current-voltage relationships between the R1 currents and the CPCP are quite similar at the higher resolutions, indicating the simulation is converging on a common solution. We conclude that LFM simulations are capable of reproducing the statistical features of FAC patterns.
KINEXP: Computer Simulation in Enzyme Kinetics.
ERIC Educational Resources Information Center
Gelpi, Josep Lluis; Domenech, Carlos
1988-01-01
Describes a program which allows students to identify and characterize several kinetic inhibitory mechanisms. Uses the generic model of reversible inhibition of a monosubstrate enzyme but can be easily modified to run other models such as bisubstrate enzymes. Uses MS-DOS BASIC. (MVL)
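The original BASIC source is not included in the record; a minimal modern sketch of the rate laws such a program evaluates, for reversible inhibition of a monosubstrate (Michaelis-Menten) enzyme with illustrative parameter values, might look like:

```python
import numpy as np

def rate(S, I, Vmax=1.0, Km=0.5, Ki=0.2, mechanism="competitive"):
    """Initial rate of a monosubstrate enzyme under reversible inhibition."""
    if mechanism == "competitive":       # inhibitor binds free enzyme: Km rises
        return Vmax * S / (Km * (1 + I / Ki) + S)
    if mechanism == "noncompetitive":    # binds E and ES equally: Vmax falls
        return (Vmax / (1 + I / Ki)) * S / (Km + S)
    if mechanism == "uncompetitive":     # binds only ES: Vmax and Km both fall
        return (Vmax / (1 + I / Ki)) * S / (Km / (1 + I / Ki) + S)
    raise ValueError(mechanism)

S = np.linspace(0.01, 5.0, 50)           # substrate concentrations
for I in (0.0, 0.2, 0.5):                # inhibitor concentrations
    v = rate(S, I, mechanism="competitive")
```

Plotting 1/v against 1/S for each inhibitor concentration reproduces the Lineweaver-Burk diagnostics students use to tell the mechanisms apart.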
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy; Idier, Deborah; Bulteau, Thomas; Paris, François
2016-04-01
From a risk management perspective, it can be of high interest to identify the critical set of offshore conditions that lead to inundation of key assets in the studied territory (e.g., assembly points, evacuation routes, hospitals, etc.). This inverse approach to risk assessment (Idier et al., NHESS, 2013) can be of primary importance either for estimating the coastal flood hazard return period or for constraining early warning networks based on hydro-meteorological forecasts or observations. However, full process-based models for coastal flooding simulation have a very large computational cost (typically several hours per run), which often limits the analysis to a few scenarios. Recently, it has been shown that meta-modelling approaches can efficiently handle this difficulty (e.g., Rohmer & Idier, NHESS, 2012). Yet the full process-based models are expected to present strong non-linearities (non-regularities) or shocks (discontinuities), i.e. dynamics controlled by thresholds. For instance, in the case of a coastal defense, the dynamics are characterized first by a linear behavior of the waterline position (increasing with increasing offshore conditions) as long as there is no overtopping, and then by a very strong increase as soon as the offshore conditions are energetic enough to lead to wave overtopping, and then overflow. Such behavior might make the training phase of the meta-model very tedious. In the present study, we explore the feasibility of active learning techniques, also known as semi-supervised machine learning, to track the set of critical conditions with a reduced number of long-running simulations. The basic idea relies on identifying the simulation scenarios which should both reduce the meta-model error and improve the prediction of the critical contour of interest. To overcome the aforementioned difficulty related to non-regularity, we rely on Support Vector Machines, which have shown very high performance for structural reliability assessment. The developments are carried out on a cross-shore case, using the process-based SWASH model; the computational time is 10 hours for a single run. The dynamic forcing conditions are parametrized by several factors (storm surge S, significant wave height Hs, dephasing between tide and surge, etc.). In particular, we validated the approach with respect to a reference set of 400 long-running simulations in the (S; Hs) domain. Our tests showed that the tracking of the critical contour can be achieved with a reasonable number of long-running simulations, of the order of a few tens.
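The exact acquisition rule combining meta-model error and contour improvement is not detailed in the abstract. A minimal sketch of the general margin-based active-learning loop it describes, with a cheap stand-in for the 10-hour SWASH run and a purely hypothetical flooding threshold, could look like:

```python
import numpy as np
from sklearn.svm import SVC

def expensive_simulator(x):
    """Stand-in for one 10-hour SWASH run: 1 if offshore conditions (S, Hs)
    produce inundation. The linear threshold below is purely hypothetical."""
    return int(x[0] + 0.8 * x[1] > 2.0)

rng = np.random.default_rng(2)
candidates = rng.uniform(0.0, 2.5, size=(2000, 2))   # (surge S, wave height Hs)

# Seed design: one clearly dry and one clearly flooded scenario plus a few
# extras, so both classes are present before the first SVM fit.
X = np.vstack([[0.1, 0.1], [2.4, 2.4], candidates[:8]])
y = np.array([expensive_simulator(x) for x in X])

for _ in range(30):                       # budget of 30 additional long runs
    svm = SVC(kernel="rbf", C=100.0, gamma="scale").fit(X, y)
    margin = np.abs(svm.decision_function(candidates))
    pick = candidates[np.argmin(margin)]  # most ambiguous scenario next
    X = np.vstack([X, pick])
    y = np.append(y, expensive_simulator(pick))

# The zero level set of svm.decision_function now approximates the
# critical (S, Hs) contour separating flooding from non-flooding.
```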
Weather extremes in very large, high-resolution ensembles: the weatherathome experiment
NASA Astrophysics Data System (ADS)
Allen, M. R.; Rosier, S.; Massey, N.; Rye, C.; Bowery, A.; Miller, J.; Otto, F.; Jones, R.; Wilson, S.; Mote, P.; Stone, D. A.; Yamazaki, Y. H.; Carrington, D.
2011-12-01
Resolution and ensemble size are often seen as alternatives in climate modelling. Models with sufficient resolution to simulate many classes of extreme weather cannot normally be run often enough to assess the statistics of rare events, still less how these statistics may be changing. As a result, assessments of the impact of external forcing on regional climate extremes must be based either on statistical downscaling from relatively coarse-resolution models, or on statistical extrapolation from 10-year to 100-year events. Under the weatherathome experiment, part of the climateprediction.net initiative, we have compiled the Met Office Regional Climate Model HadRM3P to run on personal computers volunteered by the general public at 25 and 50 km resolution, embedded within the HadAM3P global atmosphere model. With a global network of about 50,000 volunteers, this allows us to run time-slice ensembles of essentially unlimited size, exploring the statistics of extreme weather under a range of scenarios for surface forcing and atmospheric composition, allowing for uncertainty in both boundary conditions and model parameters. Current experiments, developed with the support of Microsoft Research, focus on three regions: the Western USA, Europe and Southern Africa. We initially simulate the period 1959-2010 to establish which variables are realistically simulated by the model and on what scales. Our next experiments focus on the Event Attribution problem, exploring how the probability of various types of extreme weather would have been different over the recent past in a world unaffected by human influence, following the design of Pall et al (2011), but extended to a longer period and higher spatial resolution. We will present the first results of this unique, global, participatory experiment and discuss the implications for the attribution of recent weather events to anthropogenic influence on climate.
Grace: A cross-platform micromagnetic simulator on graphics processing units
NASA Astrophysics Data System (ADS)
Zhu, Ru
2015-12-01
A micromagnetic simulator running on graphics processing units (GPUs) is presented. Unlike the GPU implementations of other research groups, which predominantly run on NVidia's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware-platform independent. It runs on GPUs from vendors including NVidia, AMD and Intel, and achieves a significant performance boost compared to previous central processing unit (CPU) simulators, up to two orders of magnitude. The simulator paves the way for running large micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.
NASA Astrophysics Data System (ADS)
Colin, Jeanne; Déqué, Michel; Radu, Raluca; Somot, Samuel
2010-10-01
We assess the impact of two sources of uncertainty in a limited area model (LAM) on the representation of intense precipitation: the size of the domain of integration and the use of the spectral nudging technique (driving of the large scales within the domain of integration). We work in a perfect-model approach, where the LAM is driven by a general circulation model (GCM) run at the same resolution and sharing the same physics and dynamics as the LAM. A set of three 50 km resolution simulations run over Western Europe with the LAM ALADIN-Climate and the GCM ARPEGE-Climate is performed to address this issue. Results are consistent with previous studies regarding the seasonal-mean fields. Furthermore, they show that neither the use of spectral nudging nor the choice of a small domain is detrimental to the modelling of heavy precipitation in the present experiment.
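Spectral nudging itself is easy to state: inside the LAM domain, only the largest scales are relaxed toward the driving model while smaller scales evolve freely. A minimal sketch on a doubly periodic grid (illustrative only; ALADIN-Climate applies the relaxation in its own spectral space with its own scale-dependent coefficients):

```python
import numpy as np

def spectral_nudge(lam_field, driver_field, kmax=3, alpha=0.1):
    """Relax only the largest scales of a LAM field toward the driving model.

    Modes with total wavenumber <= kmax are nudged with strength alpha per
    call; smaller scales are left free to develop their own variability.
    """
    n = lam_field.shape[0]
    f_lam = np.fft.fft2(lam_field)
    f_drv = np.fft.fft2(driver_field)
    idx = np.fft.fftfreq(n) * n                   # integer wavenumbers
    kx, ky = np.meshgrid(idx, idx)
    mask = np.hypot(kx, ky) <= kmax
    f_lam[mask] += alpha * (f_drv[mask] - f_lam[mask])
    return np.real(np.fft.ifft2(f_lam))

# One nudging step applied to synthetic fields:
rng = np.random.default_rng(3)
lam = rng.standard_normal((64, 64))
gcm = rng.standard_normal((64, 64))
nudged = spectral_nudge(lam, gcm)
```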
Effects of Real-Time NASA Vegetation Data on Model Forecasts of Severe Weather
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Bell, Jordan R.; LaFontaine, Frank J.; Peters-Lidard, Christa D.
2012-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed a Greenness Vegetation Fraction (GVF) dataset, which is updated daily using swaths of Normalized Difference Vegetation Index data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the NASA-EOS Aqua and Terra satellites. NASA SPoRT started generating daily real-time GVF composites at 1-km resolution over the Continental United States beginning 1 June 2010. A companion poster presentation (Bell et al.) primarily focuses on impact results in an offline configuration of the Noah land surface model (LSM) for the 2010 warm season, comparing the SPoRT/MODIS GVF dataset to the current operational monthly climatology GVF available within the National Centers for Environmental Prediction (NCEP) and Weather Research and Forecasting (WRF) models. This paper/presentation primarily focuses on individual case studies of severe weather events to determine the impacts and possible improvements of using the real-time, high-resolution SPoRT-MODIS GVFs in place of the coarser-resolution NCEP climatological GVFs in model simulations. The NASA-Unified WRF (NU-WRF) modeling system is employed to conduct the sensitivity simulations of individual events. The NU-WRF is an integrated modeling system, based on the Advanced Research WRF dynamical core, that is designed to represent aerosol, cloud, precipitation, and land processes at satellite-resolved scales in a coupled simulation environment. For this experiment, the coupling between the NASA Land Information System (LIS) and the WRF model is utilized to measure the impacts of the daily SPoRT/MODIS versus the monthly NCEP climatology GVFs. First, a spin-up run of the LIS is integrated for two years using the Noah LSM to ensure that the land surface fields reach an equilibrium state on the 4-km grid mesh used. Next, the spin-up LIS is run in two separate modes beginning on 1 June 2010, one continuing with the climatology GVFs while the other uses the daily SPoRT/MODIS GVFs. Finally, snapshots of the LIS land surface fields are used to initialize two different simulations of the NU-WRF, one running with climatology LIS and GVFs, and the other running with experimental LIS and NASA/SPoRT GVFs. In this paper/presentation, case study results will be highlighted in regions with significant differences in GVF between the NCEP climatology and the SPoRT product during severe weather episodes.
Convergence in France facing Big Data era and Exascale challenges for Climate Sciences
NASA Astrophysics Data System (ADS)
Denvil, Sébastien; Dufresne, Jean-Louis; Salas, David; Meurdesoif, Yann; Valcke, Sophie; Caubel, Arnaud; Foujols, Marie-Alice; Servonnat, Jérôme; Sénési, Stéphane; Derouillat, Julien; Voury, Pascal
2014-05-01
The presentation will introduce a French national project, CONVERGENCE, which has been funded for four years. This project tackles the big data and computational challenges faced by the climate modelling community in an HPC context. Model simulations are central to the study of complex mechanisms and feedbacks in the climate system and to providing estimates of future and past climate changes. Recent trends in climate modelling are to add more physical components to the modelled system, to increase the resolution of each individual component, and to make more systematic use of large suites of simulations to address many scientific questions. Climate simulations may therefore differ in their initial state, parameter values, representation of physical processes, spatial resolution, model complexity, and degree of realism or idealisation. In addition, there is a strong need for evaluating, improving and monitoring the performance of climate models using a large ensemble of diagnostics, and for better integration of model outputs and observational data. High performance computing is currently reaching the exascale and has the potential to produce this exponential increase in the size and number of simulations. However, post-processing, analysis, and exploration of the generated data have stalled, and there is a strong need for new tools to cope with the growing size and complexity of the underlying simulations and datasets. Exascale simulations require new scalable software tools to generate, manage and mine those simulations and data, to extract the relevant information and to make the correct decisions. The primary purpose of this project is to develop a platform capable of running large ensembles of simulations with a suite of models, handling the complex and voluminous datasets generated, and facilitating the evaluation and validation of the models and the use of higher-resolution models. We propose to gather interdisciplinary skills to design, using a component-based approach, a specific programming environment for scalable scientific simulations and analytics, integrating new and efficient ways of deploying and analysing the applications on High Performance Computing (HPC) systems. CONVERGENCE, gathering HPC and informatics expertise that cuts across the individual partners and the broader HPC community, will allow the national climate community to leverage information technology (IT) innovations to address its specific needs. Our methodology consists in developing an ensemble of generic elements needed to run the French climate models with different grids and different resolutions, ensuring efficient and reliable execution of these models, managing large volumes and numbers of data, and allowing analysis of the results and precise evaluation of the models. These elements include data structure definition and input-output (IO), code coupling and interpolation, as well as runtime and pre/post-processing environments. A common data and metadata structure will allow transferring consistent information between the various elements. All these generic elements will be open source and publicly available. The IPSL-CM and CNRM-CM climate models will make use of these elements, which will constitute a national platform for climate modelling. This platform will be used, in its entirety, to optimise and tune the next version of the IPSL-CM model and to develop a global coupled climate model with regional grid refinement.
It will also be used, at least partially, to run ensembles of the CNRM-CM model at relatively high resolution and to run a very-high-resolution prototype of this model. The climate models we have developed are already involved in many international projects. For instance, we participate in the CMIP (Coupled Model Intercomparison Project) exercise, which is very demanding but has high visibility: its results are widely used and are, in particular, synthesised in the IPCC (Intergovernmental Panel on Climate Change) assessment reports. The CONVERGENCE project will constitute an invaluable step for the French climate community in preparing for and better contributing to the next phase of CMIP.
Visualization and Analysis of Climate Simulation Performance Data
NASA Astrophysics Data System (ADS)
Röber, Niklas; Adamidis, Panagiotis; Behrens, Jörg
2015-04-01
Visualization is the key process of transforming abstract (scientific) data into a graphical representation, to aid in the understanding of the information hidden within the data. Climate simulation data sets are typically quite large, time varying, and consist of many different variables sampled on an underlying grid. A large variety of climate models - and sub-models - exist to simulate various aspects of the climate system. Generally, one is mainly interested in the physical variables produced by the simulation runs, but model developers are also interested in performance data measured along with these simulations. Climate simulation models are carefully developed complex software systems, designed to run in parallel on large HPC systems. An important goal thereby is to utilize the entire hardware as efficiently as possible, that is, to distribute the workload as evenly as possible among the individual components. This is a very challenging task, and detailed performance data, such as timings, cache misses etc., have to be used to locate and understand performance problems in order to optimize the model implementation. Furthermore, the correlation of performance data to the processes of the application and the sub-domains of the decomposed underlying grid is vital when addressing communication and load imbalance issues. High resolution climate simulations are carried out on tens to hundreds of thousands of cores, thus yielding a vast amount of profiling data, which cannot be analyzed without appropriate visualization techniques. This PICO presentation displays and discusses the ICON simulation model, which is jointly developed by the Max Planck Institute for Meteorology and the German Weather Service, in partnership with DKRZ. The visualization and analysis of the model's performance data allow us to optimize and fine-tune the model, as well as to understand its execution on the HPC system. We show and discuss our workflow, as well as present new ideas and solutions that greatly aided our understanding. The software employed is based on Avizo Green, ParaView and SimVis, as well as our own software extensions.
Accelerating cardiac bidomain simulations using graphics processing units.
Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G
2012-08-01
Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.
Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units
Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf
2013-01-01
Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867
Simulating Ideal Assistive Devices to Reduce the Metabolic Cost of Running
Uchida, Thomas K.; Seth, Ajay; Pouya, Soha; Dembia, Christopher L.; Hicks, Jennifer L.; Delp, Scott L.
2016-01-01
Tools have been used for millions of years to augment the capabilities of the human body, allowing us to accomplish tasks that would otherwise be difficult or impossible. Powered exoskeletons and other assistive devices are sophisticated modern tools that have restored bipedal locomotion in individuals with paraplegia and have endowed unimpaired individuals with superhuman strength. Despite these successes, designing assistive devices that reduce energy consumption during running remains a substantial challenge, in part because these devices disrupt the dynamics of a complex, finely tuned biological system. Furthermore, designers have hitherto relied primarily on experiments, which cannot report muscle-level energy consumption and are fraught with practical challenges. In this study, we use OpenSim to generate muscle-driven simulations of 10 human subjects running at 2 and 5 m/s. We then add ideal, massless assistive devices to our simulations and examine the predicted changes in muscle recruitment patterns and metabolic power consumption. Our simulations suggest that an assistive device should not necessarily apply the net joint moment generated by muscles during unassisted running, and an assistive device can reduce the activity of muscles that do not cross the assisted joint. Our results corroborate and suggest biomechanical explanations for similar effects observed by experimentalists, and can be used to form hypotheses for future experimental studies. The models, simulations, and software used in this study are freely available at simtk.org and can provide insight into assistive device design that complements experimental approaches. PMID:27656901
Principled Design of an Augmented Reality Trainer for Medics
2018-04-13
retake test is scheduled. In addition, extensive simulation capstone scenarios are run with a full body manikin that includes airway management...platform so they could run with high quality graphical resolution. We updated the underlying data models to reflect the training scenario parameters...Sedeh, P., Schumann, M., & Groeben, H. (2009). Laryngoscopy via Macintosh blade versus GlideScope: Success rate and time for endotracheal intubation
Processes influencing model-data mismatch in drought-stressed, fire-disturbed eddy flux sites
NASA Astrophysics Data System (ADS)
Mitchell, Stephen; Beven, Keith; Freer, Jim; Law, Beverly
2011-06-01
Semiarid forests are very sensitive to climatic change and among the most difficult ecosystems to accurately model. We tested the performance of the Biome-BGC model against eddy flux data taken from young (years 2004-2008), mature (years 2002-2008), and old-growth (year 2000) ponderosa pine stands at Metolius, Oregon, and subsequently examined several potential causes for model-data mismatch. We used the Generalized Likelihood Uncertainty Estimation (GLUE) methodology, which involved 500,000 model runs for each stand (1,500,000 total). Each simulation was run with randomly generated parameter values from a uniform distribution based on published parameter ranges, resulting in modeled estimates of net ecosystem CO2 exchange (NEE) that were compared to measured eddy flux data. Simulations for the young stand exhibited the highest level of performance, though they overestimated ecosystem C accumulation (-NEE) 99% of the time. Among the simulations for the mature and old-growth stands, 100% and 99% of the simulations underestimated ecosystem C accumulation. One obvious area of model-data mismatch is soil moisture, which was overestimated by the model in the young and old-growth stands yet underestimated in the mature stand. However, modeled estimates of soil water content and associated water deficits did not appear to be the primary cause of model-data mismatch; our analysis indicated that gross primary production can be accurately modeled even if soil moisture content is not. Instead, difficulties in adequately modeling ecosystem respiration, mainly autotrophic respiration, appeared to be the fundamental cause of model-data mismatch.
Processes influencing model-data mismatch in drought-stressed, fire-disturbed eddy flux sites
NASA Astrophysics Data System (ADS)
Mitchell, S. R.; Beven, K.; Freer, J. E.; Law, B. E.
2010-12-01
Semi-arid forests are very sensitive to climatic change and among the most difficult ecosystems to accurately model. We tested the performance of the Biome-BGC model against eddy flux data taken from young (years 2004-2008), mature (years 2002-2008), and old-growth (year 2000) Ponderosa pine stands at Metolius, Oregon, and subsequently examined several potential causes for model-data mismatch. We used the generalized likelihood uncertainty estimation (GLUE) methodology, which involved 500,000 model runs for each stand (1,500,000 total). Each simulation was run with randomly generated parameter values from a uniform distribution based on published parameter ranges, resulting in modeled estimates of net ecosystem CO2 exchange (NEE) that were compared to measured eddy flux data. Simulations for the young stand exhibited the highest level of performance, though they over-estimated ecosystem C accumulation (-NEE) 99% of the time. Among the simulations for the mature and old-growth stands, 100% and 99% of the simulations under-estimated ecosystem C accumulation. One obvious area of model-data mismatch is soil moisture, which was overestimated by the model in the young and old-growth stands yet underestimated in the mature stand. However, modeled estimates of soil water content and associated water deficits did not appear to be the primary cause of model-data mismatch; our analysis indicated that gross primary production can be accurately modeled even if soil moisture content is not. Instead, difficulties in adequately modeling ecosystem respiration, both autotrophic and heterotrophic, appeared to be fundamental causes of model-data mismatch.
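A minimal sketch of the GLUE loop described above is given below; the Nash-Sutcliffe likelihood and the 0.6 behavioural threshold are common GLUE choices, not necessarily the ones used in the study:

```python
import numpy as np

def glue(simulate, priors, nee_obs, n_runs=10_000, threshold=0.6, seed=0):
    """Minimal GLUE loop: uniform Monte Carlo sampling + behavioural filtering.

    `simulate(params)` must return a modelled NEE series. The study ran
    500,000 simulations per stand; a smaller default is used here.
    """
    rng = np.random.default_rng(seed)
    behavioural = []
    denom = np.sum((nee_obs - nee_obs.mean()) ** 2)
    for _ in range(n_runs):
        # One parameter set drawn uniformly from its published range.
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in priors.items()}
        nee_mod = simulate(params)
        nse = 1.0 - np.sum((nee_mod - nee_obs) ** 2) / denom
        if nse > threshold:              # keep only behavioural parameter sets
            behavioural.append((nse, params))
    return behavioural                   # likelihoods weight predictive bounds
```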
A Multiplicative Cascade Model for High-Resolution Space-Time Downscaling of Rainfall
NASA Astrophysics Data System (ADS)
Raut, Bhupendra A.; Seed, Alan W.; Reeder, Michael J.; Jakob, Christian
2018-02-01
Distributions of rainfall with the time and space resolutions of minutes and kilometers, respectively, are often needed to drive the hydrological models used in a range of engineering, environmental, and urban design applications. The work described here is the first step in constructing a model capable of downscaling rainfall to scales of minutes and kilometers from time and space resolutions of several hours and a hundred kilometers. A multiplicative random cascade model known as the Short-Term Ensemble Prediction System is run with parameters derived from radar observations at Melbourne (Australia). Orographic effects are added through a multiplicative correction factor after the model is run. In the first set of model calculations, 112 significant rain events over Melbourne are simulated 100 times. Because of the stochastic nature of the cascade model, the simulations represent 100 possible realizations of the same rain event. The cascade model produces realistic spatial and temporal patterns of rainfall at 6 min and 1 km resolution (the resolution of the radar data), the statistical properties of which are in close agreement with observation. In the second set of calculations, the cascade model is run continuously for all days from January 2008 to August 2015 and the rainfall accumulations are compared at 12 locations in the greater Melbourne area. The statistical properties of the observations lie within the envelope of the 100 ensemble members. The model successfully reproduces the frequency distribution of the 6 min rainfall intensities, storm durations, interarrival times, and autocorrelation function.
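The STEPS cascade itself is fit to radar data and operates in space and time; as a minimal sketch of the core multiplicative-cascade idea, with illustrative parameters, each coarse cell can be split recursively into children carrying mean-one lognormal weights:

```python
import numpy as np

def cascade_downscale(coarse, levels=4, sigma=0.3, rng=None):
    """Discrete multiplicative cascade: each cell splits into 2x2 children
    multiplied by mean-one lognormal weights, so rainfall is preserved on
    average while small-scale variability is injected."""
    rng = rng or np.random.default_rng(4)
    field = np.asarray(coarse, dtype=float)
    for _ in range(levels):
        shape = (field.shape[0] * 2, field.shape[1] * 2)
        # mean = -sigma^2/2 makes E[w] = 1, conserving rain on average.
        w = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=shape)
        field = np.kron(field, np.ones((2, 2))) * w
    return field

# One realization: a single 100 km cell carrying 12 mm of rain, downscaled
# through 4 levels to 6.25 km; repeating with new seeds yields an ensemble.
fine = cascade_downscale(np.array([[12.0]]))
```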
NASA Astrophysics Data System (ADS)
Dong, Shaojiang; Sun, Dihua; Xu, Xiangyang; Tang, Baoping
2017-06-01
Extracting fault features from the vibration signal of a space bearing is difficult because of several kinds of noise: a running-trend component, high-frequency noise and, especially, strong power line interference (50 Hz) and its harmonics, introduced by ground-based space-simulation equipment. This article proposes a combined method to eliminate them. First, empirical mode decomposition (EMD) is used to remove the running trend, which otherwise degrades signal-processing accuracy. Then a morphological filter is used to eliminate the high-frequency noise. Finally, the components and characteristics of the power line interference are analysed and, based on these characteristics, a revised blind source separation model is used to remove it. Analysis of simulations and a practical application suggests that the proposed method can effectively eliminate these noise sources.
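A minimal sketch of this processing chain is below, assuming the open-source PyEMD package for the decomposition and substituting a simple 50 Hz notch filter for the paper's revised blind-source-separation step (the signal itself is synthetic):

```python
import numpy as np
from scipy import signal, ndimage
from PyEMD import EMD                      # PyEMD: the EMD-signal PyPI package

fs = 2000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = (np.sin(2 * np.pi * 35 * t)            # bearing tone of interest
     + 0.5 * t                             # running-trend component
     + 0.8 * np.sin(2 * np.pi * 50 * t)    # power line interference
     + 0.1 * np.random.default_rng(5).standard_normal(t.size))

# Step 1: EMD detrending; the lowest-frequency component approximates the trend.
imfs = EMD()(x)
detrended = x - imfs[-1]

# Step 2: morphological filtering (mean of opening and closing) to suppress
# spiky high-frequency noise.
size = 9
smoothed = 0.5 * (ndimage.grey_opening(detrended, size=size)
                  + ndimage.grey_closing(detrended, size=size))

# Step 3: stand-in for the revised blind source separation: a 50 Hz notch.
b, a = signal.iirnotch(50.0, Q=30.0, fs=fs)
cleaned = signal.filtfilt(b, a, smoothed)
```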
Next-Generation Climate Modeling Science Challenges for Simulation, Workflow and Analysis Systems
NASA Astrophysics Data System (ADS)
Koch, D. M.; Anantharaj, V. G.; Bader, D. C.; Krishnan, H.; Leung, L. R.; Ringler, T.; Taylor, M.; Wehner, M. F.; Williams, D. N.
2016-12-01
We will present two examples of current and future high-resolution climate-modeling research that are challenging existing simulation run-time I/O, model-data movement, storage and publishing, and analysis systems. In each case, we will consider lessons learned as current workflow systems are broken by these large-data science challenges, as well as strategies to repair or rebuild the systems. First we consider the science and workflow challenges posed by the CMIP6 multi-model HighResMIP, involving around a dozen modeling groups performing quarter-degree simulations, in 3-member ensembles for 100 years, with high-frequency (1-6 hourly) diagnostics, which is expected to generate over 4PB of data. An example of science derived from these experiments will be to study how resolution affects the ability of models to capture extreme events such as hurricanes or atmospheric rivers. Expected methods to transfer (using parallel Globus) and analyze (using parallel "TECA" software tools) HighResMIP data for such feature-tracking by the DOE CASCADE project will be presented. A second example will be from the Accelerated Climate Modeling for Energy (ACME) project, which is currently addressing challenges involving multiple century-scale coupled high resolution (quarter-degree) climate simulations on DOE Leadership Class computers. ACME is anticipating production of over 5PB of data during the next 2 years of simulations, in order to investigate the drivers of water cycle changes, sea-level rise, and carbon cycle evolution. The ACME workflow, from simulation to data transfer, storage, analysis and publication, will be presented. Current and planned methods to accelerate the workflow, including implementing run-time diagnostics and implementing server-side analysis to avoid moving large datasets, will be presented.
NASA Astrophysics Data System (ADS)
Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim
2016-04-01
The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-year climate integrations as a function of model resolution (from 80 km up to 16 km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total of 20 million core hours will have been used by the end of the project (March 2016), and about 150 TBytes of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic region following the resolution increase. More specifically, the well-known negative atmospheric blocking bias over Europe is definitely resolved. High resolution runs also show improved fidelity in the representation of tropical variability - such as the MJO and its propagation - over the low resolution simulations. It is shown that including stochastic parameterization in the low resolution runs helps to improve some aspects of the MJO propagation further. These findings show the importance of representing the impact of small-scale processes on large-scale climate variability either explicitly (with high resolution simulations) or stochastically (in low resolution simulations).
Development of Aspen: A microanalytic simulation model of the US economy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pryor, R.J.; Basu, N.; Quint, T.
1996-02-01
This report describes the development of an agent-based microanalytic simulation model of the US economy. The microsimulation model capitalizes on recent technological advances in evolutionary learning and parallel computing. Results are reported for a test problem that was run using the model. The test results demonstrate the model's ability to predict business-like cycles in an economy where prices and inventories are allowed to vary. Since most economic forecasting models have difficulty predicting any kind of cyclic behavior, these results show the potential of microanalytic simulation models to improve economic policy analysis and to provide new insights into underlying economic principles. Work has already begun on a more detailed model.
Chung, S W; Lee, H S
2009-01-01
In monsoon climate areas, turbidity flows typically induced by flood runoff cause numerous environmental impacts, such as impairment of fish habitat and river attraction, and degradation of water supply efficiency. This study aimed to characterize the physical dynamics of a turbidity plume induced into a stratified reservoir, using field monitoring and numerical simulations, and to assess the effect of different withdrawal scenarios on the control of downstream water quality. Three different turbidity models (RUN1, RUN2, RUN3) were developed based on a two-dimensional, laterally averaged hydrodynamic and transport model, and validated against field data. RUN1 assumed a constant settling velocity of suspended sediment, while RUN2 estimated the settling velocity as a function of particle size, density, and water temperature to account for vertical stratification. RUN3 included a lumped first-order turbidity attenuation rate taking into account the effects of particle aggregation and degradable organic particles. RUN3 showed the best performance in replicating the observed variations of in-reservoir and release turbidity. Numerical experiments implemented to assess the effectiveness of different withdrawal depths showed that alterations of the withdrawal depth can modify the pathway and flow regimes of the turbidity plume, but the effect on the control of release water quality could be trivial.
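A RUN2-style temperature-dependent settling velocity is typically built on Stokes' law; a minimal sketch, using textbook fits for water density and viscosity that are not necessarily those of the study:

```python
def stokes_settling_velocity(d, rho_s, T):
    """Stokes' law settling velocity (m/s) for a sphere of diameter d (m).

    Water density and viscosity vary with temperature T (deg C), which is
    how a RUN2-style model couples settling to thermal stratification.
    Both fits below are rough textbook approximations.
    """
    rho_w = 1000.0 * (1.0 - 7e-6 * (T - 4.0) ** 2)   # water density (kg/m^3)
    mu = 2.414e-5 * 10 ** (247.8 / (T + 133.15))     # dynamic viscosity (Pa s)
    g = 9.81
    return g * (rho_s - rho_w) * d ** 2 / (18.0 * mu)

# A 10-micron particle (density 2650 kg/m^3) settles roughly 40% slower in
# 5 deg C hypolimnetic water than in 25 deg C surface water:
v_cold = stokes_settling_velocity(10e-6, 2650.0, 5.0)   # ~6e-5 m/s
v_warm = stokes_settling_velocity(10e-6, 2650.0, 25.0)  # ~1e-4 m/s
```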
Street-running LRT may not affect a neighbour's sleep
NASA Astrophysics Data System (ADS)
Sarkar, S. K.; Wang, J.-N.
2003-10-01
A comprehensive dynamic finite difference model and analysis was conducted, simulating LRT running at a speed of 24 km/h on a city street. The analysis predicted ground borne vibration (GBV) to remain at or below the FTA criterion of an RMS velocity of 72 VdB (0.004 in/s) at the nearest residence. In the model, site-specific stratigraphy and dynamic soil and rock properties were used that were determined from in situ testing. The dynamic input load from an LRT vehicle running at 24 km/h was computed from actual measured data from Portland, Oregon's West Side LRT project, which used a low-floor vehicle similar to the one proposed for the NJ Transit project. During initial trial runs of the LRT system, vibration and noise measurements were taken at three street locations while the vehicles were running at about the 20-24 km/h operating speed. The measurements confirmed the predictions and satisfied FTA criteria for noise and vibration for frequent events. This paper presents the analytical model, GBV predictions, site measurement data and a comparison with the FTA criterion.
NASA Astrophysics Data System (ADS)
Pembroke, A. D.; Colbert, J. A.
2015-12-01
The Community Coordinated Modeling Center (CCMC) provides hosting for many of the simulations used by the space weather community of scientists, educators, and forecasters. CCMC users may submit model runs through the Runs on Request system, which produces static visualizations of model output in the browser, while further analysis may be performed off-line via Kameleon, CCMC's cross-language access and interpolation library. Off-line analysis may be suitable for power-users, but storage and coding requirements present a barrier to entry for non-experts. Moreover, a lack of a consistent framework for analysis hinders reproducibility of scientific findings. To that end, we have developed Kameleon Live, a cloud based interactive analysis and visualization platform. Kameleon Live allows users to create scientific studies built around selected runs from the Runs on Request database, perform analysis on those runs, collaborate with other users, and disseminate their findings among the space weather community. In addition to showcasing these novel collaborative analysis features, we invite feedback from CCMC users as we seek to advance and improve on the new platform.
2018-01-01
Mathematical models simulating different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For different values of the parameters that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problem. Although the model is extremely sensitive to these parameters, no assumptions are made as regards linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data, published in the scientific literature, to show the reliability of the model. PMID:29518121
Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F
2018-01-01
Mathematical models simulating different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For different values of the parameters that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problem. Although the model is extremely sensitive to these parameters, no assumptions are made as regards linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data, published in the scientific literature, to show the reliability of the model.
NASA Technical Reports Server (NTRS)
Kawamura, K.; Beale, G. O.; Schaffer, J. D.; Hsieh, B. J.; Padalkar, S.; Rodriguez-Moscoso, J. J.
1985-01-01
The results of the first phase of Research on an Expert System for Database Operation of Simulation/Emulation Math Models are described. Techniques from artificial intelligence (AI) were brought to bear on task domains of interest to NASA Marshall Space Flight Center. One such domain is simulation of spacecraft attitude control systems. Two related software systems were developed and delivered to NASA. One was a generic simulation model for spacecraft attitude control, written in FORTRAN. The second was an expert system which understands the usage of a class of spacecraft attitude control simulation software and can assist the user in running the software. This NASA Expert Simulation System (NESS), written in LISP, contains general knowledge about digital simulation, specific knowledge about the simulation software, and self-knowledge.
CO2 Push-Pull Dual (Conjugate) Faults Injection Simulations
Oldenburg, Curtis (ORCID:0000000201326016); Lee, Kyung Jae; Doughty, Christine; Jung, Yoojin; Borgia, Andrea; Pan, Lehua; Zhang, Rui; Daley, Thomas M.; Altundas, Bilgin; Chugunov, Nikita
2017-07-20
This submission contains datasets and a final manuscript associated with a project simulating carbon dioxide push-pull into a conjugate fault system modeled after Dixie Valley: sensitivity analysis of significant parameters and uncertainty prediction by data-worth analysis. Datasets include: (1) forward simulation runs of standard cases (push and pull phases), (2) local sensitivity analyses (push and pull phases), and (3) data-worth analysis (push and pull phases).
Computer Simulation Of Cyclic Oxidation
NASA Technical Reports Server (NTRS)
Probst, H. B.; Lowell, C. E.
1990-01-01
Computer model developed to simulate cyclic oxidation of metals. With relatively few input parameters, kinetics of cyclic oxidation simulated for wide variety of temperatures, durations of cycles, and total numbers of cycles. Program written in BASICA and run on any IBM-compatible microcomputer. Used in variety of ways to aid experimental research. In minutes, effects of duration of cycle and/or number of cycles on oxidation kinetics of material surveyed.
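The brief gives only the program's scope, not its listing. As a minimal sketch of the kind of kinetics such a model computes (parabolic scale growth interrupted by spallation on each cool-down), with an assumed fixed spall fraction q and oxide oxygen mass fraction f_o, both illustrative:

```python
def cyclic_oxidation(n_cycles, dt, kp, q, f_o=0.47):
    """Toy cyclic-oxidation kinetics; not the NASA program itself.

    Per cycle: the retained scale thickens parabolically with rate constant
    kp over a hot dwell of length dt, then a fraction q spalls on cool-down.
    Net specimen weight change = oxygen held in the retained scale minus
    metal carried off by spall (f_o ~ 0.47 is the oxygen mass fraction of
    Al2O3, used here as an example).
    """
    retained, spalled_total, history = 0.0, 0.0, []
    for _ in range(n_cycles):
        retained = (retained ** 2 + kp * dt) ** 0.5   # parabolic growth segment
        spall = q * retained                          # cool-down spallation
        retained -= spall
        spalled_total += spall
        history.append(f_o * retained - (1.0 - f_o) * spalled_total)
    return history

# Illustrative inputs reproduce the classic rise-then-fall weight-change curve:
curve = cyclic_oxidation(n_cycles=200, dt=1.0, kp=1e-2, q=0.05)
```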
Investigating NARCCAP Precipitation Extremes via Bivariate Extreme Value Theory (Invited)
NASA Astrophysics Data System (ADS)
Weller, G. B.; Cooley, D. S.; Sain, S. R.; Bukovsky, M. S.; Mearns, L. O.
2013-12-01
We introduce methodology from statistical extreme value theory to examine the ability of reanalysis-driven regional climate models to simulate past daily precipitation extremes. Going beyond a comparison of summary statistics such as 20-year return values, we study whether the most extreme precipitation events produced by climate model simulations exhibit correspondence to the most extreme events seen in observational records. The extent of this correspondence is formulated via the statistical concept of tail dependence. We examine several case studies of extreme precipitation events simulated by the six models of the North American Regional Climate Change Assessment Program (NARCCAP) driven by NCEP reanalysis. It is found that the NARCCAP models generally reproduce daily winter precipitation extremes along the Pacific coast quite well; in contrast, simulation of past daily summer precipitation extremes in a central US region is poor. Some differences in the strength of extremal correspondence are seen in the central region between models which employ spectral nudging and those which do not. We demonstrate how these techniques may be used to draw a link between extreme precipitation events and large-scale atmospheric drivers, as well as to downscale extreme precipitation simulated by a future run of a regional climate model. Specifically, we examine potential future changes in the nature of extreme precipitation along the Pacific coast produced by the pineapple express (PE) phenomenon. A link between extreme precipitation events and a "PE Index" derived from North Pacific sea-surface pressure fields is found. This link is used to study PE-influenced extreme precipitation produced by a future-scenario climate model run.
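The tail-dependence idea can be made concrete with a simple empirical estimator: for a threshold quantile u, estimate the chance the model exceeds its u-quantile on the days the observations exceed theirs (the data below are synthetic; the NARCCAP series themselves are not reproduced here):

```python
import numpy as np

def chi_hat(x, y, u=0.98):
    """Empirical tail-dependence coefficient chi(u) between two daily series:
    the fraction of y's u-quantile exceedance days on which x also exceeds
    its own u-quantile. chi -> 0 indicates asymptotically independent tails."""
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    exceed_y = y > qy
    return np.mean(x[exceed_y] > qx)

rng = np.random.default_rng(6)
obs = rng.gamma(2.0, 5.0, size=10_000)                      # stand-in obs precip
model = 0.7 * obs + 0.3 * rng.gamma(2.0, 5.0, size=10_000)  # correlated "model"
print(chi_hat(model, obs))
```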
SCEC Earthquake System Science Using High Performance Computing
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.
2008-12-01
The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, reusable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts, and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1Hz deterministic simulation results with 10Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high performance codes were run on NSF TeraGrid sites including simulations that use the full PSC Big Ben supercomputer (4096 cores) and simulations that ran on more than 10K cores at TACC Ranger. The SCEC/CME group used scientific workflow tools and grid-computing to run more than 1.5 million jobs at NCSA for the CyberShake project. Visualizations produced by a SCEC/CME researcher of the 10Hz ShakeOut 1.2 scenario simulation data were used by USGS in ShakeOut publications and public outreach efforts. OpenSHA was ported onto an NSF supercomputer and was used to produce very high resolution PSHA hazard maps that contained more than 1.6 million hazard curves.
VERIFICATION OF THE HYDROLOGIC EVALUATION OF LANDFILL PERFORMANCE (HELP) MODEL USING FIELD DATA
The report describes a study conducted to verify the Hydrologic Evaluation of Landfill Performance (HELP) computer model using existing field data from a total of 20 landfill cells at 7 sites in the United States. Simulations using the HELP model were run to compare the predicted...
Structure of high latitude currents in magnetosphere-ionosphere models
NASA Astrophysics Data System (ADS)
Wiltberger, M. J.; Lyon, J.; Merkin, V. G.; Rigler, E. J.
2016-12-01
Using three resolutions of the Lyon-Fedder-Mobarry global magnetosphere-ionosphere model (LFM) and the Weimer 2005 empirical model, the structure of the high latitude field-aligned current patterns is examined. Each LFM resolution was run for the entire Whole Heliosphere Interval (WHI), which contained two high-speed solar wind streams and modest interplanetary magnetic field strengths. Average states of the field-aligned current (FAC) patterns for 8 interplanetary magnetic field clock angle directions are computed using data from these runs. Generally speaking, the patterns obtained agree well with results from the Weimer 2005 model computed using the solar wind and IMF conditions that correspond to each bin. As the simulation resolution increases, the currents become more intense and confined. A machine learning analysis of the FAC patterns shows that the ratio of Region 1 (R1) to Region 2 (R2) currents decreases as the simulation resolution increases. This brings the simulation results into better agreement with observational predictions and the Weimer 2005 model results. The increase in R2 current strengths in the model also results in better shielding of the mid- and low-latitude ionosphere from the polar cap convection, also in agreement with observations. Current-voltage relationships between the R1 strength and the cross-polar cap potential (CPCP) are quite similar at the higher resolutions, indicating the simulation is converging on a common solution. We conclude that LFM simulations are capable of reproducing the statistical features of FAC patterns.
NASA Astrophysics Data System (ADS)
Nabil, Mahdi; Rattner, Alexander S.
The volume-of-fluid (VOF) approach is a mature technique for simulating two-phase flows. However, VOF simulation of phase-change heat transfer is still in its infancy. Multiple closure formulations have been proposed in the literature, each suited to different applications. While these have enabled significant research advances, few implementations are publicly available, actively maintained, or inter-operable. Here, a VOF solver is presented (interThermalPhaseChangeFoam), which incorporates an extensible framework for phase-change heat transfer modeling, enabling simulation of diverse phenomena in a single environment. The solver employs object-oriented OpenFOAM library features, including Run-Time Type Identification (RTTI), to enable rapid implementation and run-time selection of phase change and surface tension force models. The solver is packaged with multiple phase change and surface tension closure models, adapted and refined from earlier studies. This code has previously been applied to study wavy film condensation, Taylor flow evaporation, nucleate boiling, and dropwise condensation. Tutorial cases are provided for simulation of horizontal film condensation, smooth and wavy falling film condensation, nucleate boiling, and bubble condensation. Validation and grid sensitivity studies, interfacial transport models, effects of spurious currents from surface tension models, effects of artificial heat transfer due to numerical factors, and parallel scaling performance are described in detail in the Supplemental Material (see Appendix A). By incorporating the framework and demonstration cases into a single environment, users can rapidly apply the solver to study phase-change processes of interest.
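The actual run-time selection mechanism is OpenFOAM's C++ RTTI machinery; purely as an illustration of the pattern, the Python sketch below (model names and rate laws are hypothetical) shows closure models registering themselves by name and being instantiated from a case-dictionary entry:

```python
# Registry mapping a model name (as it would appear in a case file)
# to the class implementing that closure model.
PHASE_CHANGE_MODELS = {}

def register(name):
    def wrap(cls):
        PHASE_CHANGE_MODELS[name] = cls
        return cls
    return wrap

@register("HiLoRelaxed")          # hypothetical model name
class HiLoRelaxed:
    def mass_source(self, T, T_sat):
        return max(T - T_sat, 0.0) * 1.0e-3   # placeholder rate law

@register("InterfaceResistance")  # hypothetical model name
class InterfaceResistance:
    def mass_source(self, T, T_sat):
        return (T - T_sat) * 5.0e-4           # placeholder rate law

# A case dictionary would name the model; the solver looks it up at run time,
# so adding a new closure requires no change to the solver core.
chosen = PHASE_CHANGE_MODELS["HiLoRelaxed"]()
print(chosen.mass_source(T=374.0, T_sat=373.15))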
NASA Astrophysics Data System (ADS)
Cao, Duc; Moses, Gregory; Delettrez, Jacques; Collins, Timothy
2014-10-01
A design process is presented for the nonlocal thermal transport iSNB (implicit Schurtz, Nicolai, and Busquet) model to provide reliable nonlocal thermal transport in polar-drive ICF simulations. Results from the iSNB model are known to be sensitive to changes in the SNB ``mean free path'' formula, and the latter's original form required modification to obtain realistic preheat levels. In the presented design process, SNB mean free paths are first modified until the model can match temperatures from Goncharov's thermal transport model in 1D temperature relaxation simulations. Afterwards the same mean free paths are tested in a 1D polar-drive surrogate simulation to match adiabats from Goncharov's model. After passing the two previous steps, the model can then be run in a full 2D polar-drive simulation. This research is supported by the University of Rochester Laboratory for Laser Energetics.
NASA Astrophysics Data System (ADS)
Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.
2017-04-01
The marginal opportunity cost of water refers to benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management as it can be used for better water allocation or better system operation, and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet, such models' use of optimization means the models had difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding in the model a small quantity of water at the place and time where the opportunity cost should be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then provide an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet, this method requires one simulation run per node and per time step, which is demanding computationally for large-scale systems and short time steps (e.g., a day or a week). Besides, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time. Both backward inductions only require linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results 1) can be obtained from the results of a rule-based simulation using a single post-processing run, and 2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation allows visualizing the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and its variations using simulation models with multiple runs (e.g. of stochastically generated plausible future river inflows).
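A minimal sketch of the double backward induction described above is given below, on a toy four-node network with hypothetical marginal benefits and a storage carry-over factor (all names and values are illustrative; the London case study is far larger):

```python
# Toy river network stored as node -> downstream-node links.
DOWNSTREAM = {"headwater_a": "confluence", "headwater_b": "confluence",
              "confluence": "outlet", "outlet": None}
T = 4                                     # number of time steps (e.g. weeks)
# Hypothetical local marginal benefit of one extra unit of water at (node, t).
mb = {n: [1.0, 0.5, 2.0, 0.2] if n == "confluence" else [0.3] * T
      for n in DOWNSTREAM}
carryover = 0.9                           # discount on value stored to t+1

# Nodes ordered from outlet up to headwaters (reverse topological order).
order = ["outlet", "confluence", "headwater_a", "headwater_b"]

value = {n: [0.0] * (T + 1) for n in DOWNSTREAM}  # value[n][T] = 0 (terminal)
for t in reversed(range(T)):              # induction backwards in time
    for n in order:                       # induction outlet -> headwaters
        # Extra water at (n, t) can be used here now, sent downstream now,
        # or (via storage) kept for the next time step at the same node.
        options = [mb[n][t], carryover * value[n][t + 1]]
        if DOWNSTREAM[n] is not None:
            options.append(value[DOWNSTREAM[n]][t])
        value[n][t] = max(options)

print(value["headwater_a"][0])  # opportunity cost of water at a headwater
```

As in the method described, each node/time value is obtained from already-computed downstream and later-time values, so a single backward pass over a simulated trajectory yields the full field of opportunity costs.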
System analysis for the Huntsville Operational Support Center distributed computer system
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mauldin, J.
1984-01-01
The Huntsville Operations Support Center (HOSC) is a distributed computer system used to provide real time data acquisition, analysis and display during NASA space missions and to perform simulation and study activities during non-mission times. The primary purpose is to provide a HOSC system simulation model that is used to investigate the effects of various HOSC system configurations. Such a model would be valuable in planning the future growth of HOSC and in ascertaining the effects of data rate variations, update table broadcasting and smart display terminal data requirements on the HOSC HYPERchannel network system. A simulation model was developed in PASCAL, and results of the simulation model for various system configurations were obtained. A tutorial of the model is presented and the results of simulation runs are presented. Some very high data rate situations were simulated to observe the effects of the HYPERchannel switchover from contention to priority mode under high channel loading.
International Futures (IFs): A Global Issues Simulation for Teaching and Research.
ERIC Educational Resources Information Center
Hughes, Barry B.
This paper describes the International Futures (IFs) computer-assisted simulation game for use with undergraduates. Written in Standard Fortran IV, the model currently runs on mainframe or mini computers, but has not been adapted for micros. It has been successfully installed on Harris, Burroughs, Telefunken, CDC, Univac, IBM, and Prime machines.…
Interactions between hyporheic flow produced by stream meanders, bars, and dunes
Stonedahl, Susa H.; Harvey, Judson W.; Packman, Aaron I.
2013-01-01
Stream channel morphology from grain-scale roughness to large meanders drives hyporheic exchange flow. In practice, it is difficult to model hyporheic flow over the wide spectrum of topographic features typically found in rivers. As a result, many studies only characterize isolated exchange processes at a single spatial scale. In this work, we simulated hyporheic flows induced by a range of geomorphic features including meanders, bars and dunes in sand bed streams. Twenty cases were examined with 5 degrees of river meandering. Each meandering river model was run initially without any small topographic features. Models were run again after superimposing only bars and then only dunes, and then run a final time after including all scales of topographic features. This allowed us to investigate the relative importance of, and interactions between, flows induced by different scales of topography. We found that dunes typically contributed more to hyporheic exchange than bars and meanders. Furthermore, our simulations show that the volume of water exchanged and the distributions of hyporheic residence times resulting from various scales of topographic features are close to linearly additive, but not exactly so. These findings can potentially be used to develop scaling laws for hyporheic flow that can be widely applied in streams and rivers.
Cosmic Reionization On Computers: Numerical and Physical Convergence
Gnedin, Nickolay Y.
2016-04-01
In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.
A Lunar Surface Operations Simulator
NASA Technical Reports Server (NTRS)
Nayar, H.; Balaram, J.; Cameron, J.; Jain, A.; Lim, C.; Mukherjee, R.; Peters, S.; Pomerantz, M.; Reder, L.; Shakkottai, P.;
2008-01-01
The Lunar Surface Operations Simulator (LSOS) is being developed to support planning and design of space missions to return astronauts to the moon. Vehicles, habitats, dynamic and physical processes and related environment systems are modeled and simulated in LSOS to assist in the visualization and design optimization of systems for lunar surface operations. A parametric analysis tool and a data browser were also implemented to provide an intuitive interface to run multiple simulations and review their results. The simulator and parametric analysis capability are described in this paper.
The Trick Simulation Toolkit: A NASA/Open source Framework for Running Time Based Physics Models
NASA Technical Reports Server (NTRS)
Penn, John M.; Lin, Alexander S.
2016-01-01
This paper describes the design and use of the Trick Simulation Toolkit, a simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes Trick's design goals and how the development environment attempts to achieve those goals. It describes how Trick is used in some of the many training and engineering simulations at NASA. Finally, it describes the Trick NASA/Open source project on GitHub.
ESIM_DSN Web-Enabled Distributed Simulation Network
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth; Novotny, John
2002-01-01
In this paper, the eSim(sup DSN) approach to achieving distributed simulation capability using the Internet is presented. With this approach, a complete simulation can be assembled from component subsystems that run on different computers. The subsystems interact with each other via the Internet. The distributed simulation uses a hub-and-spoke type network topology. It provides the ability to dynamically link simulation subsystem models to different computers as well as the ability to assign a particular model to each computer. A proof-of-concept demonstrator is also presented. The eSim(sup DSN) demonstrator can be accessed at http://www.jsc.draper.com/esim, which hosts various examples of Web-enabled simulations.
Daily hydro- and morphodynamic simulations at Duck, NC, USA using Delft3D
NASA Astrophysics Data System (ADS)
Penko, Allison; Veeramony, Jay; Palmsten, Margaret; Bak, Spicer; Brodie, Katherine; Hesser, Tyler
2017-04-01
Operational forecasting of the coastal nearshore has wide ranging societal and humanitarian benefits, specifically for the prediction of natural hazards due to extreme storm events. However, understanding the model limitations and uncertainty is as important as the predictions themselves. By comparing and contrasting the predictions of multiple high-resolution models in a location with near real-time collection of observations, we are able to perform a rigorous analysis of the model results in order to achieve more robust and certain predictions. In collaboration with the U.S. Army Corps of Engineers Field Research Facility (USACE FRF) as part of the Coastal Model Test Bed (CMTB) project, we have set up Delft3D at Duck, NC, USA to run in near-real time, driven by measured wave data at the boundary. The CMTB at the USACE FRF allows for the unique integration of operational wave, circulation, and morphology models with real-time observations. The FRF has an extensive array of in-situ and remotely sensed oceanographic, bathymetric, and meteorological data that is broadcast in near-real time onto a publicly accessible server. Wave, current, and bed elevation instruments are permanently installed across the model domain, including 2 waverider buoys in 17-m and 26-m water depths at 3.5-km and 17-km offshore, respectively, that record directional wave data every 30 minutes. Here, we present the workflow and output of the Delft3D hydro- and morphodynamic simulations at Duck, and show the tactical benefits and operational potential of such a system. A nested Delft3D simulation runs a parent grid that extends 12-km in the along-shore and 3.5-km in the cross-shore with 50-m resolution and a maximum depth of approximately 17-m. The bathymetry for the parent grid was obtained from a regional digital elevation model (DEM) generated by the Federal Emergency Management Agency (FEMA). The inner nested grid extends 1.8-km in the along-shore and 1-km in the cross-shore with 5-m resolution and a maximum depth of approximately 8-m. The inner nested grid initial model bathymetry is set to either the predicted bathymetry from the previous day's simulation or a survey, whichever is more recent. Delft3D-WAVE runs in the parent grid and is driven with the real-time spectral wave measurements from the waverider buoy in 17-m depth. The spectral output from Delft3D-WAVE in the parent grid is then used as the boundary condition for the inner nested high-resolution grid, in which the coupled Delft3D wave-flow-morphology model is run. The model results are then compared to the wave, current, and bathymetry observations collected at the FRF as well as other models that are run in the CMTB.
Parser for Sabin-to-Mahoney Transition Model of Quasispecies Replication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ecale Zhou, Carol
2016-01-03
This code is a data parser for preparing output from the Qspp agent-based stochastic simulation model for plotting in Excel. This code is specific to a set of simulations that were run for the purpose of preparing data for a publication. It is necessary to make this code open-source in order to publish the model code (Qspp), which has already been released. There is a necessity of assuring that results from using Qspp for a publication
Grid-based Meteorological and Crisis Applications
NASA Astrophysics Data System (ADS)
Hluchy, Ladislav; Bartok, Juraj; Tran, Viet; Lucny, Andrej; Gazak, Martin
2010-05-01
We present several applications from the domains of meteorology and crisis management that we have developed and/or plan to develop. In particular, we present the IMS Model Suite - a complex software system designed to address the needs of accurate forecasting of weather and hazardous weather phenomena, environmental pollution assessment, and prediction of the consequences of nuclear accidents and radiological emergencies. We discuss the requirements on computational means and our experience in meeting them through grid computing. The process of pollution assessment and prediction of the consequences in case of a radiological emergency results in complex data flows and workflows among databases, models and simulation tools (geographical databases, meteorological and dispersion models, etc.). A pollution assessment and prediction requires running a 3D meteorological model (4 nests with resolution from 50 km to 1.8 km centered on the nuclear power plant site, 38 vertical levels) as well as running the dispersion model, which simulates the transport and deposition of the released pollutant with respect to the numerical weather prediction data, released material description, topography, land use description and user-defined simulation scenario. Several post-processing options can be selected according to the particular situation (e.g., dose calculations). Another example is the forecasting of fog, one of the meteorological phenomena hazardous to aviation as well as road traffic. It requires a complicated physical model and high-resolution meteorological modeling due to its dependence on local conditions (precise topography, shorelines and land use classes). An installed fog modeling system requires a four-times-nested parallelized 3D meteorological model with 1.8 km horizontal resolution and 42 vertical levels (approx. 1 million points in 3D space) to be run four times daily. The 3D model outputs and a multitude of local measurements are utilized by an SPMD-parallelized 1D fog model run every hour. The fog forecast model is subject to parameterization and parameter optimization before its real deployment. The parameter optimization requires tens of evaluations of the parameterized model accuracy, and each evaluation of the model parameters requires re-running hundreds of meteorological situations collected over the years and comparing the model output with the observed data. The architecture and inherent heterogeneity of both examples, their computational complexity, and their interfaces to other systems and services make them well suited for decomposition into a set of web and grid services. Such decomposition has been performed within several projects in which we have participated or are participating in cooperation with academia, namely int.eu.grid (dispersion model deployed as a pilot application to an interactive grid), SEMCO-WS (semantic composition of web and grid services), DMM (development of a significant meteorological phenomena prediction system based on data mining), VEGA 2009-2011 and EGEE III. We present useful and practical applications of high performance computing technologies. The use of grid technology provides access to much higher computation power not only for modeling and simulation, but also for model parameterization and validation. This results in optimized model parameters and more accurate simulation outputs.
Taking into account that the simulations are used for aviation, road traffic and crisis management, even a small improvement in prediction accuracy may result in a significant improvement in safety as well as cost reduction. We found grid computing useful for our applications. We are satisfied with this technology, and our experience encourages us to extend its use. Within an ongoing project (DMM) we plan to include processing of satellite images, which rapidly increases our computational requirements. We believe that thanks to grid computing we are able to handle the job almost in real time.
Sailfish: A flexible multi-GPU implementation of the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Januszewski, M.; Kostur, M.
2014-09-01
We present Sailfish, an open source fluid simulation package implementing the lattice Boltzmann method (LBM) on modern Graphics Processing Units (GPUs) using CUDA/OpenCL. We take a novel approach to GPU code implementation and use run-time code generation techniques and a high level programming language (Python) to achieve state of the art performance, while allowing easy experimentation with different LBM models and tuning for various types of hardware. We discuss the general design principles of the code, scaling to multiple GPUs in a distributed environment, as well as the GPU implementation and optimization of many different LBM models, both single component (BGK, MRT, ELBM) and multicomponent (Shan-Chen, free energy). The paper also presents results of performance benchmarks spanning the last three NVIDIA GPU generations (Tesla, Fermi, Kepler), which we hope will be useful for researchers working with this type of hardware and similar codes. Catalogue identifier: AETA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public License, version 3 No. of lines in distributed program, including test data, etc.: 225864 No. of bytes in distributed program, including test data, etc.: 46861049 Distribution format: tar.gz Programming language: Python, CUDA C, OpenCL. Computer: Any with an OpenCL or CUDA-compliant GPU. Operating system: No limits (tested on Linux and Mac OS X). RAM: Hundreds of megabytes to tens of gigabytes for typical cases. Classification: 12, 6.5. External routines: PyCUDA/PyOpenCL, Numpy, Mako, ZeroMQ (for multi-GPU simulations), scipy, sympy Nature of problem: GPU-accelerated simulation of single- and multi-component fluid flows. Solution method: A wide range of relaxation models (LBGK, MRT, regularized LB, ELBM, Shan-Chen, free energy, free surface) and boundary conditions within the lattice Boltzmann method framework. Simulations can be run in single or double precision using one or more GPUs. Restrictions: The lattice Boltzmann method works for low Mach number flows only. Unusual features: The actual numerical calculations run exclusively on GPUs. The numerical code is built dynamically at run-time in CUDA C or OpenCL, using templates and symbolic formulas. The high-level control of the simulation is maintained by a Python process. Additional comments: !!!!! The distribution file for this program is over 45 Mbytes and therefore is not delivered directly when Download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. !!!!! Running time: Problem-dependent, typically minutes (for small cases or short simulations) to hours (large cases or long simulations).
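As a toy illustration of the run-time code-generation idea described above (Sailfish itself uses Mako templates and PyCUDA/PyOpenCL; the sketch below only renders source text with stdlib tooling, and the kernel body and placeholder names are illustrative):

```python
from string import Template

# A kernel skeleton with holes that are filled in at run time, so the same
# solver can emit different relaxation models without hand-written variants.
KERNEL_TEMPLATE = Template("""
__kernel void collide(__global float *f, const float omega) {
    int i = get_global_id(0);
    /* $model relaxation, generated at run time */
    f[i] = f[i] - omega * (f[i] - $equilibrium);
}
""")

src = KERNEL_TEMPLATE.substitute(model="BGK", equilibrium="feq(i)")
print(src)  # in Sailfish, a string like this would be compiled for the GPU
```

Because the numerical code is assembled as text before compilation, model variants and hardware-specific tuning can be swapped in without touching the high-level Python control layer.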
Digital data processing system dynamic loading analysis
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Tucker, A. E.
1976-01-01
Simulation and analysis of the Space Shuttle Orbiter Digital Data Processing System (DDPS) are reported. The mated flight and postseparation flight phases of the space shuttle's approach and landing test (ALT) configuration were modeled utilizing the Information Management System Interpretive Model (IMSIM) in a computerized simulation of the ALT hardware, software, and workload. System requirements simulated for the ALT configuration were defined. Sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and the sensitivity analyses, a test design is described for adapting, parameterizing, and executing the IMSIM. Varying load and stress conditions for the model execution are given. The analyses of the computer simulation runs were documented as results, conclusions, and recommendations for DDPS improvements.
Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Becker, D. A.
1977-01-01
Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the orbital flight test (OFT) ascent configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.
Modelling past hydrology of an interfluve area in the Campine region (NE Belgium)
NASA Astrophysics Data System (ADS)
Leterme, Bertrand; Beerten, Koen; Gedeon, Matej; Vandersteen, Katrijn
2015-04-01
This study aims at hydrological model verification for a small lowland interfluve area (18.6 km²) in NE Belgium, under conditions different from today's. We compare the current state with five reference periods in the past (AD 1500, 1770, 1854, 1909 and 1961) representing important stages of landscape evolution in the study area. Historical information and proxy data are used to derive conceptual model features and boundary conditions specific to each period: topography, surface water geometry (canal, drains and lakes), land use, soils, vegetation and climate. The influence of landscape evolution on the hydrological cycle is assessed using numerical simulations with a coupled unsaturated zone - groundwater model (HYDRUS-MODFLOW). The induced hydrological changes are assessed in terms of groundwater level, recharge, evapotranspiration, and surface water discharge. The HYDRUS-MODFLOW coupling allows the inclusion of important processes such as the groundwater contribution to evapotranspiration. Major land use change occurred between AD 1854 and 1909, with about 41% of the study area being converted from heath to coniferous forest, together with the development of a drainage network. Results show that this led to a significant decrease of groundwater recharge and a lowering of the groundwater table. A limitation of the study lies in the comparison of simulated past hydrology with appropriate palaeo-records. Examples are given of how some indicators (groundwater head, swamp zones) can be used to work toward model validation. Quantifying the relative impact of land use and climate changes requires running sensitivity simulations in which the models using alternative land use are run with the climate forcing of other periods. A few examples of such sensitivity runs are presented in order to compare the influence of land use and climate change on the study area hydrology.
Simulating an Exploding Fission-Bomb Core
NASA Astrophysics Data System (ADS)
Reed, Cameron
2016-03-01
A time-dependent desktop-computer simulation of the core of an exploding fission bomb (nuclear weapon) has been developed. The simulation models a core comprising a mixture of two isotopes: a fissile one (such as U-235) and an inert one (such as U-238) that captures neutrons and removes them from circulation. The user sets the enrichment percentage and scattering and fission cross-sections of the fissile isotope, the capture cross-section of the inert isotope, the number of neutrons liberated per fission, the number of ``initiator'' neutrons, the radius of the core, and the neutron-reflection efficiency of a surrounding tamper. The simulation, which is predicated on ordinary kinematics, follows the three-dimensional motions and fates of neutrons as they travel through the core. Limitations of time and computer memory render it impossible to model a real-life core, but results of numerous runs clearly demonstrate the existence of a critical mass for a given set of parameters and the dramatic effects of enrichment and tamper efficiency on the growth (or decay) of the neutron population. The logic of the simulation will be described and results of typical runs will be presented and discussed.
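A much-reduced sketch of this kind of neutron bookkeeping is shown below (Python; the crude radial random walk, interaction probabilities, and core size are illustrative stand-ins, not the author's simulation), tracking the neutron population generation by generation:

```python
import random

def run_core(radius=8.0, mfp=3.0, p_fission=0.4, p_capture=0.2,
             nu=2.5, reflect=0.3, generations=12, n0=100):
    """Follow neutrons through a sphere; fission multiplies, capture and
    escape (less tamper reflection) remove them from circulation."""
    n = n0
    for g in range(generations):
        nxt = 0
        for _ in range(n):
            r = 0.0
            while True:
                # Draw a free path of mean length mfp; crude radial walk.
                r = abs(r + random.expovariate(1.0 / mfp) *
                        random.uniform(-1, 1))
                if r > radius:                    # reached the surface
                    if random.random() < reflect:  # tamper reflection
                        r = radius
                        continue
                    break                          # escaped
                u = random.random()
                if u < p_fission:                  # fission: nu new neutrons
                    nxt += int(nu) + (random.random() < nu - int(nu))
                    break
                elif u < p_fission + p_capture:    # capture (inert isotope)
                    break
                # else: scattered, keep walking
        print(f"generation {g:2d}: {n:6d} neutrons")
        n = nxt
        if n == 0 or n > 2_000_000:
            break

run_core()
```

Sweeping `radius` or `reflect` in such a toy model reproduces, qualitatively, the critical-mass and tamper-efficiency behavior the abstract describes.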
Fully 3D modeling of tokamak vertical displacement events with realistic parameters
NASA Astrophysics Data System (ADS)
Pfefferle, David; Ferraro, Nathaniel; Jardin, Stephen; Bhattacharjee, Amitava
2016-10-01
In this work, we model the complex multi-domain and highly non-linear physics of Vertical Displacement Events (VDEs), one of the most damaging off-normal events in tokamaks, with the implicit 3D extended MHD code M3D-C1. The code has recently acquired the capability to include finite thickness conducting structures within the computational domain. By exploiting the possibility of running a linear 3D calculation on top of a non-linear 2D simulation, we monitor the non-axisymmetric stability and assess the eigen-structure of kink modes as the simulation proceeds. Once a stability boundary is crossed, a fully 3D non-linear calculation is launched for the remainder of the simulation, starting from an earlier time of the 2D run. This procedure, along with adaptive zoning, greatly increases the efficiency of the calculation, and allows VDE simulations to be performed with realistic parameters and high resolution. Simulations are being validated with NSTX data, where both axisymmetric (toroidally averaged) and non-axisymmetric induced and conductive (halo) currents have been measured. This work is supported by US DOE Grant DE-AC02-09CH11466.
Models and Simulations as a Service: Exploring the Use of Galaxy for Delivering Computational Models
Walker, Mark A.; Madduri, Ravi; Rodriguez, Alex; Greenstein, Joseph L.; Winslow, Raimond L.
2016-01-01
We describe the ways in which Galaxy, a web-based reproducible research platform, can be used for web-based sharing of complex computational models. Galaxy allows users to seamlessly customize and run simulations on cloud computing resources, a concept we refer to as Models and Simulations as a Service (MaSS). To illustrate this application of Galaxy, we have developed a tool suite for simulating a high spatial-resolution model of the cardiac Ca2+ spark that requires supercomputing resources for execution. We also present tools for simulating models encoded in the SBML and CellML model description languages, thus demonstrating how Galaxy’s reproducible research features can be leveraged by existing technologies. Finally, we demonstrate how the Galaxy workflow editor can be used to compose integrative models from constituent submodules. This work represents an important novel approach, to our knowledge, to making computational simulations more accessible to the broader scientific community. PMID:26958881
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Sownak; Li, Baojiu; He, Jian-hua
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
Self-adaptive Fault-Tolerance of HLA-Based Simulations in the Grid Environment
NASA Astrophysics Data System (ADS)
Huang, Jijie; Chai, Xudong; Zhang, Lin; Li, Bo Hu
The objects of an HLA-based simulation can access model services to update their attributes. However, the grid server may become overloaded and refuse to let the model service handle object accesses. Because these objects accessed this model service during the last simulation loop and their intermediate states are stored on this server, such a refusal may terminate the simulation. A fault-tolerance mechanism must therefore be introduced into simulations. Traditional fault-tolerance methods cannot meet this need, because the transmission latency between a federate and the RTI in a grid environment varies from several hundred milliseconds to several seconds. By adding model service URLs to the OMT and expanding the HLA services and model services with some interfaces, this paper proposes a self-adaptive fault-tolerance mechanism for simulations based on the characteristics of federates accessing model services. Benchmark experiments indicate that the expanded HLA/RTI can make simulations run self-adaptively in the grid environment.
Influence of plasticity models upon the outcome of simulated hypervelocity impacts
NASA Astrophysics Data System (ADS)
Thomas, John N.
1994-07-01
This paper describes the results of numerical simulations of aluminum-on-aluminum impacts, performed with the CTH hydrocode to determine the effect of plasticity formulations on the final perforation size in the targets. The targets were 1 mm and 5 mm thick plates, and the projectiles were 10 mm by 10 mm right circular cylinders. Both targets and projectiles were represented as 2024 aluminum alloy. The hydrocode simulations were run in a two-dimensional cylindrical geometry. Normal impacts at velocities between 5 and 15 km/s were simulated. Three isotropic yield stress models were explored in the simulations: an elastic-perfectly plastic model and the Johnson-Cook and Steinberg-Guinan-Lund viscoplastic models. The fracture behavior was modeled by a simple tensile pressure criterion. The simulations show that the three strength models resulted in only minor differences in the final perforation diameter. The simulation results were used to construct an equation to predict the final hole size resulting from impacts on thin targets.
The Variation of Hydrocarbon Abundances with Latitude and Season in Saturn's Stratosphere
NASA Technical Reports Server (NTRS)
Moses, J. I.; Greathouse, T. K.
2005-01-01
We have developed a realistic, time-variable, one-dimensional, seasonal model for stratospheric photochemistry on Saturn using the Caltech/JPL KINETICS code [1,2,3]. The model accounts for variations in ultraviolet flux due to orbital position, solar-cycle variations, and ring-shadowing effects. The results for two Saturnian years, starting at Ls = 0 in 1950 and running until the upcoming northern vernal equinox in 2009, are presented for numerous latitudes. The same two model years are run repeatedly until the model converges, to make sure that high-altitude effects have had a chance to propagate down through the atmosphere. We use the SOLAR2000 model [4,5], in combination with the spectra presented in [6], to predict the ultraviolet flux at any wavelength and any point in time during the simulation. Saturn's orbital position during the simulation was taken from the ephemeris calculator at http://ssd.jpl.nasa.gov/horizons.html [7]. The photochemical model is derived from "Model C" of [8] and uses a hydrocarbon reaction list that has been extensively updated from that presented in [3].
PHREEQCI; a graphical user interface for the geochemical computer program PHREEQC
Charlton, Scott R.; Macklin, Clifford L.; Parkhurst, David L.
1997-01-01
PhreeqcI is a Windows-based graphical user interface for the geochemical computer program PHREEQC. PhreeqcI provides the capability to generate and edit input data files, run simulations, and view text files containing simulation results, all within the framework of a single interface. PHREEQC is a multipurpose geochemical program that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. Interactive access to all of the capabilities of PHREEQC is available with PhreeqcI. The interface is written in Visual Basic and will run on personal computers under the Windows 3.1, Windows 95, and Windows NT operating systems.
Modeling of Hall Thruster Lifetime and Erosion Mechanisms (Preprint)
2007-09-01
A model of the Hall thruster plasma discharge has been upgraded to simulate the erosion of the thruster acceleration channel, the degradation of which is the main life-limiting factor of the propulsion system. Evolution of the thruster geometry as a result of material removal due to sputtering is modeled by calculating wall erosion rates, stepping the grid boundary by a chosen time step, and altering the computational mesh between simulation runs. The code is first tuned to predict the nose cone erosion of a 200 W Busek Hall thruster, the BHT-200. Simulated erosion
Yildirim, Ilyas; Park, Hajeung; Disney, Matthew D.; Schatz, George C.
2013-01-01
One class of functionally important RNA is repeating transcripts that cause disease through various mechanisms. For example, expanded r(CAG) repeats can cause Huntington's and other diseases through translation of toxic proteins. Herein, a crystal structure of r[5ʹUUGGGC(CAG)3GUCC]2, a model of CAG expanded transcripts, refined to 1.65 Å resolution is disclosed that shows both anti-anti and syn-anti orientations for 1×1 nucleotide AA internal loops. Molecular dynamics (MD) simulations using the Amber force field in explicit solvent were run for over 500 ns on model systems r(5ʹGCGCAGCGC)2 (MS1) and r(5ʹCCGCAGCGG)2 (MS2). In these MD simulations, both anti-anti and syn-anti AA base pairs appear to be stable. While anti-anti AA base pairs were dynamic and sampled multiple anti-anti conformations, no syn-anti↔anti-anti transformations were observed. Umbrella sampling simulations were run on MS2, and a 2D free energy surface was created to extract transformation pathways. In addition, an over 800 ns explicit solvent MD simulation was run on r[5ʹGGGC(CAG)3GUCC]2, which closely represents the refined crystal structure. One of the terminal AA base pairs (syn-anti conformation) transformed to the anti-anti conformation. The pathway followed in this transformation was the one predicted by umbrella sampling simulations. Further analysis showed a binding pocket near AA base pairs in syn-anti conformations. Computational results combined with the refined crystal structure show that the global minimum conformation of 1×1 nucleotide AA internal loops in r(CAG) repeats is anti-anti, but syn-anti can be adopted depending on the environment. These results are important for understanding RNA dynamic-function relationships and for developing small molecules that target RNA dynamic ensembles. PMID:23441937
Predictions of Cockpit Simulator Experimental Outcome Using System Models
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1984-01-01
This study involved predicting the outcome of a cockpit simulator experiment where pilots used cockpit displays of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. The experiments were run on the NASA Ames Research Center multicab cockpit simulator facility. Prior to the experiments, a mathematical model of the pilot/aircraft/CDTI flight system was developed which included relative in-trail and vertical dynamics between aircraft in the approach string. This model was used to construct a digital simulation of the string dynamics including response to initial position errors. The model was then used to predict the outcome of the in-trail following cockpit simulator experiments. Outcome included performance and sensitivity to different separation criteria. The experimental results were then used to evaluate the model and its prediction accuracy. Lessons learned in this modeling and prediction study are noted.
Ditching Tests of Two Models of the Army B-36 Airplane
NASA Technical Reports Server (NTRS)
Fisher, Lloyd J.; Cederborg, Gibson A.
1948-01-01
The ditching characteristics of the Army B-36 airplane were determined by testing 1/20- and 1/30-scale dynamic models in calm water in Langley tank no. 2 and at the outdoor catapult. The scope of the tests consisted of ditching the models at various conditions of simulated damage, landing attitudes, and speeds, with various flap settings using several degrees of restraint of the flap hinges. The ditching behavior was evaluated from recordings of deceleration, length of run, and motions of the models. The results showed that the airplane should be ditched at an attitude of about 9 deg with flaps full down. The probable ditching behavior will be a smooth run with a maximum longitudinal deceleration of 3g to 4g and a landing run of 4 to 5 fuselage lengths. Structural failure of the underside of the fuselage will not seriously affect the behavior of the airplane.
A software-based sensor for combined sewer overflows.
Leonhardt, G; Fach, S; Engelhard, C; Kinzel, H; Rauch, W
2012-01-01
A new methodology for online estimation of excess flow from combined sewer overflow (CSO) structures based on simulation models is presented. If sufficient flow and water level data from the sewer system are available, no rainfall data are needed to run the model. An inverse rainfall-runoff model was developed to simulate net rainfall based on flow and water level data. Excess flow at all CSO structures in a catchment can then be simulated with a rainfall-runoff model. The method is applied to a case study, and results show that the inverse rainfall-runoff model can be used instead of missing rain gauges. Online operation is ensured by software providing an interface to the operator's SCADA system and controlling the model. A water quality model could be included to also simulate pollutant concentrations in the excess flow.
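As a hedged illustration of the inverse idea: if runoff were modeled as a single linear reservoir dS/dt = R - Q with Q = S/k, the net rainfall R could be recovered from an observed flow series by inverting that relation, R = k dQ/dt + Q. The reservoir constant and flow series below are illustrative only; the paper's model is more elaborate.

```python
import numpy as np

k, dt = 3.0, 1.0                                 # hypothetical reservoir constant, time step
Q = np.array([0.2, 0.8, 1.5, 1.2, 0.9, 0.7])     # observed flow series (m3/s)
R = k * np.diff(Q) / dt + Q[:-1]                 # inferred net rainfall input
print(R)
```

The inferred R can then drive an ordinary rainfall-runoff model of the rest of the catchment, which is the role the inverse model plays in the methodology described.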
Massively parallel algorithms for trace-driven cache simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.
1991-01-01
Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, then it is said to be a miss, and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy, which regardless of the set size C runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
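For reference, the sequential baseline that these parallel algorithms accelerate is straightforward; the sketch below (Python, with an illustrative trace) counts misses for a single C-line LRU set. The paper's contribution is computing the same answer for the whole trace in O(log N) parallel time, which this sketch does not attempt.

```python
from collections import OrderedDict

def lru_misses(trace, set_lines=4):
    """Sequential trace-driven LRU simulation of a single C-line cache set."""
    cache = OrderedDict()          # keys ordered least -> most recently used
    misses = 0
    for x in trace:
        if x in cache:
            cache.move_to_end(x)   # hit: x becomes most recently used
        else:
            misses += 1            # miss: load x, evicting the LRU line
            if len(cache) == set_lines:
                cache.popitem(last=False)
            cache[x] = None
    return misses

print(lru_misses([1, 2, 3, 4, 1, 5, 1, 2], set_lines=4))  # -> 6
```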
Gutchess, Kristina; Jin, Li; Ledesma, José L J; Crossman, Jill; Kelleher, Christa; Lautz, Laura; Lu, Zunli
2018-02-06
The long-term application of road salts has led to a rise in surface water chloride (Cl-) concentrations. While models have been used to assess the potential future impacts of continued deicing practices, prior approaches have not incorporated changes in climate that are projected to impact hydrogeology in the 21st century. We use an INtegrated CAtchment (INCA) model to simulate Cl- concentrations in the Tioughnioga River watershed. The model was run over a baseline period (1961-1990) and climate simulations from a range of GCMs run over three 30-year intervals (2010-2039; 2040-2069; 2070-2099). Model projections suggest that Cl- concentrations in the two river branches will continue to rise for several decades, before beginning to decline around 2040-2069, with all GCM scenarios indicating reductions in snowfall and associated salt applications over the 21st century. The delay in stream response is most likely attributed to climate change and continued contribution of Cl- from aquifers. By 2100, surface water Cl- concentrations will decrease to below 1960s values. Catchments dominated by urban lands will experience a decrease in average surface water Cl-, although moderate compared to more rural catchments.
NASA Technical Reports Server (NTRS)
Schubert, Siegfried; Kang, In-Sik; Reale, Oreste
2009-01-01
This talk gives an update on the progress and further plans for a coordinated project to carry out and analyze high-resolution simulations of tropical storm activity with a number of state-of-the-art global climate models. Issues addressed include the mechanisms by which SSTs control tropical storm activity on inter-annual and longer time scales, the modulation of that activity by the Madden-Julian Oscillation on sub-seasonal time scales, as well as the sensitivity of the results to model formulation. The project also encourages companion coarser-resolution runs to help assess resolution dependence, and the ability of the models to capture the large-scale and long-term changes in the parameters important for hurricane development. Addressing the above science questions is critical to understanding the nature of the variability of the Asian-Australian monsoon and its regional impacts, and thus CLIVAR RAMP fully endorses the proposed tropical storm simulation activity. The project is open to all interested organizations and investigators, and the results from the runs will be shared among the participants, as well as made available to the broader scientific community for analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.
Regional cloud-permitting model simulations of cloud populations observed during the 2011 ARM Madden-Julian Oscillation Investigation Experiment/Dynamics of the Madden-Julian Oscillation (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain rate statistics to parameters and parameterization of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates rain rate from large and deep convective cores. Sensitivity runs involving variation of parameters that affect rain drop or ice particle size distribution (e.g., a more aggressive break-up process) generally reduce the bias in rain-rate and boundary layer temperature statistics, as the smaller particles become more vulnerable to evaporation. Furthermore, significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while the statistics are worsened at 4 km grid spacing, as increased turbulence enhances evaporation. The results suggest that modulation of evaporation processes, through parameterization of turbulent mixing and break-up of hydrometeors, may provide a potential avenue for correcting cloud statistics and associated boundary layer temperature biases in regional and global cloud-permitting model simulations.
Revisiting ocean carbon sequestration by direct injection: a global carbon budget perspective
NASA Astrophysics Data System (ADS)
Reith, Fabian; Keller, David P.; Oschlies, Andreas
2016-11-01
In this study we look beyond the previously studied effects of oceanic CO2 injections on atmospheric and oceanic reservoirs and also account for carbon cycle and climate feedbacks between the atmosphere and the terrestrial biosphere. Considering these additional feedbacks is important since backfluxes from the terrestrial biosphere to the atmosphere in response to reducing atmospheric CO2 can further offset the targeted reduction. To quantify these dynamics we use an Earth system model of intermediate complexity to simulate direct injection of CO2 into the deep ocean as a means of emissions mitigation during a high CO2 emission scenario. In three sets of experiments with different injection depths, we simulate a 100-year injection period of a total of 70 Gt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reyniers, G.C.; Froment, G.F.; Kopinke, F.D.
1994-11-01
An extensive experimental program has been carried out in a pilot unit for the thermal cracking of hydrocarbons. On the basis of the experimental information and the insight into the mechanisms of coke formation in pyrolysis reactors, a mathematical model describing the coke formation has been derived. This model has been incorporated into the existing simulation tools at the Laboratorium voor Petrochemische Techniek, and the run length of an industrial naphtha cracking furnace has been accurately simulated. In this way the coking model has been validated.
Numerical Model Simulation of Atmosphere above A.C. Airport
NASA Astrophysics Data System (ADS)
Lutes, Tiffany; Trout, Joseph
2014-03-01
In this research project, the Weather Research & Forecasting (WRF) model from the National Center for Atmospheric Research (NCAR) is used to investigate past and present weather conditions. The area of interest is the Atlantic City Airport area in southern New Jersey. Long-term hourly data are analyzed and model simulations are created. By inputting high-resolution surface data, a more accurate picture of the effects of different weather conditions can be portrayed. Currently, the impact of gridded model runs is being tested, and the influence of surface characteristics is being investigated.
Performance of a reconfigured atmospheric general circulation model at low resolution
NASA Astrophysics Data System (ADS)
Wen, Xinyu; Zhou, Tianjun; Wang, Shaowu; Wang, Bin; Wan, Hui; Li, Jian
2007-07-01
Paleoclimate simulations usually require model runs over a very long time. A fast-integration version of a state-of-the-art general circulation model (GCM), which shares the same physical and dynamical processes but has reduced horizontal resolution and an increased time step, is therefore usually developed. In this study, we configure a fast version of an atmospheric GCM (AGCM), the Grid Atmospheric Model of IAP/LASG (Institute of Atmospheric Physics/State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics), at low resolution (GAMIL-L, hereafter), and compare the simulation results with the NCEP/NCAR reanalysis and other data to examine its performance. GAMIL-L, which is derived from the original GAMIL, is a finite-difference AGCM with 72×40 grids in longitude and latitude and 26 vertical levels. To validate the simulated climatology and variability, two runs were performed. One was a 60-year control run with fixed climatological monthly sea surface temperature (SST) forcing, and the other was a 50-year (1950-2000) integration with observed time-varying monthly SST forcing. Comparisons between these two cases and the reanalysis, including intra-seasonal and inter-annual variability, are also presented. In addition, the differences between GAMIL-L and the original version of GAMIL are investigated. The results show that GAMIL-L can capture most of the large-scale dynamical features of the atmosphere, especially in the tropics and midlatitudes, although a few deficiencies exist, such as the underestimated Hadley cell and the consequently weak Asian summer monsoon. However, the simulated mean states over high latitudes, especially over the polar regions, are not acceptable. Apart from dynamics, the thermodynamic features mainly depend upon the physical parameterization schemes. Since the physical package of GAMIL-L is exactly the same as that of the original high-resolution version of GAMIL, which uses the NCAR Community Atmosphere Model (CAM2) physical package, there are only small differences between them in the precipitation and temperature fields. Because our goal is to develop a fast-running AGCM and employ it in the coupled climate system model of IAP/LASG for paleoclimate studies such as ENSO and the Australia-Asia monsoon, particular attention has been paid to the model performance in the tropics. Further model validations, such as those run for the Southern Oscillation and the South Asia monsoon, indicate that GAMIL-L is reasonably competent and valuable in this regard.
Pairwise velocities in the "Running FLRW" cosmological model
NASA Astrophysics Data System (ADS)
Bibiano, Antonio; Croton, Darren J.
2017-05-01
We present an analysis of the pairwise velocity statistics from a suite of cosmological N-body simulations describing the 'Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Λ cold dark matter (CDM) with a time-evolving vacuum energy density, ρ_Λ. To enforce local conservation of matter, a time-evolving gravitational coupling is also included. Our results constitute the first study of velocities in the R-FLRW cosmology, and we also compare with other dark energy simulation suites, repeating the same analysis. We find a strong degeneracy between the pairwise velocity and σ8 at z = 0 for almost all scenarios considered, which remains even when we look back to epochs as early as z = 2. We also investigate various coupled dark energy models, some of which show minimal degeneracy, and reveal interesting deviations from ΛCDM that could be readily exploited by future cosmological observations to test and further constrain our understanding of dark energy.
NASA Astrophysics Data System (ADS)
Tavakkol, Sasan; Lynett, Patrick
2017-08-01
In this paper, we introduce an interactive coastal wave simulation and visualization software package called Celeris. Celeris is open-source software which requires minimal preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications, and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.
Crashworthiness simulations with DYNA3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauer, D.A.; Hoover, C.G.; Kay, G.J.
1996-04-01
Current progress in parallel algorithm research and applications in vehicle crash simulation is described for the explicit, finite element algorithms in DYNA3D. Problem partitioning methods and parallel algorithms for contact at material interfaces are the two challenging algorithm research problems that are addressed. Two prototype parallel contact algorithms have been developed for treating the cases of local and arbitrary contact. Demonstration problems for local contact are crashworthiness simulations with 222 locally defined contact surfaces and a vehicle/barrier collision modeled with arbitrary contact. A simulation of crash tests conducted for a vehicle impacting a U-channel small sign post embedded in soil has been run on both the serial and parallel versions of DYNA3D. A significant reduction in computational time has been observed when running these problems on the parallel version. However, to achieve maximum efficiency, complex problems must be appropriately partitioned, especially when contact dominates the computation.
Constructive Engineering of Simulations
NASA Technical Reports Server (NTRS)
Snyder, Daniel R.; Barsness, Brendan
2011-01-01
Joint experimentation that investigates sensor optimization, re-tasking, and management has far-reaching implications for Department of Defense, Interagency, and multinational partners. An adaptation of traditional human-in-the-loop (HITL) Modeling and Simulation (M&S) was one approach used to generate the findings necessary to derive and support these implications. Here an entity-based simulation was re-engineered to run on USJFCOM's High Performance Computer (HPC). The HPC was used to support the vast number of constructive runs necessary to produce statistically significant data in a timely manner. Then, from the resulting sensitivity analysis, event designers blended the necessary visualization and decision-making components into a synthetic environment for the HITL simulation trials. These trials focused on areas where human decision making had the greatest impact on the sensor investigations. Thus, this paper discusses how re-engineering existing M&S for constructive applications can positively influence the design of an associated HITL experiment.
Dshell++: A Component Based, Reusable Space System Simulation Framework
NASA Technical Reports Server (NTRS)
Lim, Christopher S.; Jain, Abhinandan
2009-01-01
This paper describes the multi-mission Dshell++ simulation framework for high fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.
NASA Astrophysics Data System (ADS)
D'Alessandro, J.; Diao, M.; Chen, M.
2015-12-01
Ice crystal formation requires the prerequisite condition of ice supersaturation, i.e., relative humidity with respect to ice (RHi) greater than 100%. The formation and evolution of ice supersaturated regions (ISSRs) has a large impact on the subsequent formation of ice clouds. To examine the characteristics of simulated ice supersaturated regions at various model spatial resolutions, case studies comparing airborne in-situ measurements from the NSF Deep Convective Clouds and Chemistry (DC3) campaign (May-June 2012) with WRF simulations are conducted in this work. Recent studies using ~200 m in-situ observations showed that ice supersaturated regions are mostly around 1 km in horizontal scale (Diao et al. 2014). Yet it is still unclear whether such observed characteristics can be represented by WRF simulations at various spatial resolutions. In this work, we compare the WRF-simulated anvil cirrus spatial characteristics with those observed in the DC3 campaign over the Southern Great Plains in the US. The WRF model is run at 1 km and 3 km horizontal grid spacing with a recent update of the Thompson microphysics scheme. Our comparisons focus on the spatial characteristics of ISSRs and cirrus clouds, including the distributions of their horizontal scales, the maximum relative humidity with respect to ice (RHi), and the relationship between RHi and temperature. Our previous work on the NCAR CM1 cloud-resolving model shows that the higher-resolution runs (i.e., 250 m and 1 km) generally have better agreement with observations than the coarser-resolution (4 km) runs. We will examine whether a similar trend exists for WRF simulations in deep convection cases. In addition, we will compare the simulation results between WRF and CM1, particularly for spatial correlations between ISSRs and cirrus and their evolution (based on the method of Diao et al. 2013). Overall, our work will help to assess the representation of ISSRs and cirrus in WRF simulations based on comparisons with in-situ observations.
CMB constraints on running non-Gaussianity
NASA Astrophysics Data System (ADS)
Oppizzi, F.; Liguori, M.; Renzi, A.; Arroja, F.; Bartolo, N.
2018-05-01
We develop a complete set of tools for CMB forecasting, simulation, and estimation of primordial running bispectra, arising from a variety of curvaton and single-field (DBI) models of inflation. We validate our pipeline using mock CMB running non-Gaussianity realizations and test it on real data by obtaining experimental constraints on the fNL running spectral index, nNG, using WMAP 9-year data. Our final bounds (68% C.L.) read -0.6 < nNG < 1.4, -0.3 < nNG < 1.2, -1.1
The AAO fiber instrument data simulator
NASA Astrophysics Data System (ADS)
Goodwin, Michael; Farrell, Tony; Smedley, Scott; Heald, Ron; Heijmans, Jeroen; De Silva, Gayandhi; Carollo, Daniela
2012-09-01
The fiber instrument data simulator is an in-house software tool that simulates detector images of fiber-fed spectrographs developed by the Australian Astronomical Observatory (AAO). In addition to helping validate the instrument designs, the resulting simulated images are used to develop the required data reduction software. Example applications that have benefited from the tool are the HERMES and SAMI instrument projects for the Anglo-Australian Telescope (AAT). Given the sophistication of these projects, an end-to-end data simulator that accurately models the predicted detector images is required. The data simulator encompasses all aspects of the transmission and optical aberrations of the light path: from the science object, through the atmosphere, telescope, fibers, and spectrograph, and finally to the camera detectors. The simulator runs under a Linux environment and uses pre-calculated information derived from ZEMAX models and processed data from MATLAB. In this paper, we discuss aspects of the model, the software, example simulations, and verification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Nan; Dimitrovski, Aleksandar D; Simunovic, Srdjan
2016-01-01
The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporally decomposing a simulation run into a coarse simulation on the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potential of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports the testing results on the IEEE 39-bus system and a 327-generator, 2383-bus Polish system model.
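A minimal sketch of the Parareal structure described above may help: a cheap coarse propagator sweeps the whole interval, fine propagators run independently on each sub-interval, and a correction links the two. The forward-Euler propagators, the test equation, and all names here are illustrative assumptions, not from the paper (the actual implementation uses reduced generator models for the coarse solver).

```python
import numpy as np

def parareal(f, y0, t0, t1, n_windows, coarse_step, fine_step, n_iter=3):
    """Parareal sketch: coarse propagator G (large steps) corrects
    fine propagators F (small steps) run independently per window."""
    def propagate(y, ta, tb, h):
        # forward-Euler integrator; the step size h is the only difference
        # between the coarse (cheap) and fine (accurate) propagators
        t = ta
        while t < tb:
            dt = min(h, tb - t)
            y = y + dt * f(t, y)
            t += dt
        return y

    T = np.linspace(t0, t1, n_windows + 1)
    # initial guess at window boundaries from a single coarse sweep
    U = [y0]
    for k in range(n_windows):
        U.append(propagate(U[k], T[k], T[k + 1], coarse_step))
    G_old = U[1:]

    for _ in range(n_iter):
        # fine solves per window are independent -> parallel in practice
        F = [propagate(U[k], T[k], T[k + 1], fine_step) for k in range(n_windows)]
        U_new, G_new = [y0], []
        for k in range(n_windows):
            g = propagate(U_new[k], T[k], T[k + 1], coarse_step)
            G_new.append(g)
            # Parareal correction: new coarse + (fine - old coarse)
            U_new.append(g + F[k] - G_old[k])
        U, G_old = U_new, G_new
    return T, U

# usage: exponential decay dy/dt = -y over 8 windows
T, U = parareal(lambda t, y: -y, 1.0, 0.0, 4.0,
                n_windows=8, coarse_step=0.5, fine_step=0.01)
```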
NASA Astrophysics Data System (ADS)
Heptinstall, David; Bouvet de Maisonneuve, Caroline; Neuberg, Jurgen; Taisne, Benoit; Collinson, Amy
2016-04-01
Heat flow models can bring new insights into the thermal and rheological evolution of volcanic systems. We investigate the thermal processes and timescales in a crystallizing, static magma column, with a heat flow model of Soufriere Hills Volcano (SHV), Montserrat. The latent heat of crystallization is initially computed with MELTS, as a function of pressure and temperature for an andesitic melt (SHV groundmass starting composition). Three fractional crystallization simulations are performed; two with initial pressures of 34 MPa (runs 1 & 2) and one of 25 MPa (run 3). Decompression rate was varied between 0.1 MPa/°C (runs 1 & 3) and 0.2 MPa/°C (run 2). Natural and experimental matrix glass compositions are accurately reproduced by all MELTS runs. The cumulative latent heat released for runs 1, 2 and 3 differs by less than 9% (8.69e5 J/kg*K, 9.32e5 J/kg*K, and 9.49e5 J/kg*K respectively). The 2D axisymmetric conductive cooling simulations consider a 30 m-diameter conduit that extends from the surface to a depth of 1500 m (34 MPa). The temporal evolution of temperature is closely tracked at depths of 10 m, 750 m and 1400 m in the centre of the conduit, at the conduit walls, and 20 m from the walls into the host rock. Following initial cooling by 7-15°C at 10 m depth inside the conduit, the magma temperature rebounds through latent heat release by 32-35°C over 85-123 days to a maximum temperature of 1002-1005°C. At 10 m depth, it takes 4.1-9.2 years for the magma column to cool by 108-131°C and crystallize to 75 wt%, at which point it cannot be easily remobilized. It takes 11-31.5 years to reach the same crystallinity at 750-1400 m depth. We find a wide range in cooling timescales, particularly at depths of 750 m or greater, attributed to the initial run pressure and the dominant latent-heat-producing crystallizing phase, albite-rich plagioclase feldspar. Run 1 is shown to cool fastest and run 3 the slowest, with surface emissivity having the strongest cooling influence in the upper tens of meters of the conduit in all runs.
NASA Astrophysics Data System (ADS)
Heptinstall, D. A.; Neuberg, J. W.; Bouvet de Maisonneuve, C.; Collinson, A.; Taisne, B.; Morgan, D. J.
2015-12-01
Heat flow models can bring new insights into the thermal and rheological evolution of volcanic systems. We investigate the thermal processes and timescales in a crystallizing, static magma column, with a heat flow model of Soufriere Hills Volcano (SHV), Montserrat. The latent heat of crystallization is initially computed with MELTS, as a function of pressure and temperature for an andesitic melt (SHV groundmass starting composition). Three fractional crystallization simulations are performed; two with initial pressures of 34 MPa (runs 1 & 2) and one of 25 MPa (run 3). Decompression rate was varied between 0.1 MPa/°C (runs 1 & 3) and 0.2 MPa/°C (run 2). Natural and experimental matrix glass compositions are accurately reproduced by all MELTS runs. The cumulative latent heat released for runs 1, 2 and 3 differs by less than 9% (8.69e5 J/kg*K, 9.32e5 J/kg*K, and 9.49e5 J/kg*K respectively). The 2D axisymmetric conductive cooling simulations consider a 30 m-diameter conduit that extends from the surface to a depth of 1500 m (34 MPa). The temporal evolution of temperature is closely tracked at depths of 10 m, 750 m and 1400 m in the center of the conduit, at the conduit walls, and 20 m from the walls into the host rock. Following initial cooling by 7-15°C at 10 m depth inside the conduit, the magma temperature rebounds through latent heat release by 32-35°C over 85-123 days to a maximum temperature of 1002-1005°C. At 10 m depth, it takes 4.1-9.2 years for the magma column to cool by 108-130°C and crystallize to 75 wt%, at which point it cannot be easily remobilized. It takes 11-31.5 years to reach the same crystallinity at 750-1400 m depth. We find a wide range in cooling timescales, particularly at depths of 750 m or greater, attributed to the initial run pressure and the dominant latent-heat-producing crystallizing phases (quartz), where run 1 cools fastest and run 3 cools slowest. Surface cooling, by comparison, has the strongest influence on the upper tens of meters in all runs.
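The conductive cooling calculation both abstracts describe can be illustrated with a one-dimensional radial analogue. The sketch below is a bare explicit finite-difference step for radially symmetric conduction; the grid, diffusivity, temperatures, and boundary handling are illustrative assumptions, and the latent-heat source term of the actual model is omitted.

```python
import numpy as np

def cool_conduit(T, r, dt, kappa=1e-6, T_host=None):
    """One explicit finite-difference step of radially symmetric conduction,
    dT/dt = kappa * (1/r) d/dr (r dT/dr) -- a stripped-down 1D analogue of
    the 2D axisymmetric cooling model (no latent heat term here)."""
    dr = r[1] - r[0]
    Tn = T.copy()
    Tn[1:-1] += dt * kappa * (
        (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
        + (T[2:] - T[:-2]) / (2 * dr * r[1:-1])
    )
    Tn[0] = Tn[1]                    # symmetry condition at the conduit axis
    if T_host is not None:
        Tn[-1] = T_host              # fixed host-rock temperature far field
    return Tn

# usage: a 15 m radius conduit at 1010 C inside 300 C host rock
r = np.linspace(0.01, 50.0, 501)
T = np.where(r <= 15.0, 1010.0, 300.0)
dt = 0.2 * (r[1] - r[0])**2 / 1e-6   # safely under the explicit stability limit
for _ in range(1000):
    T = cool_conduit(T, r, dt, T_host=300.0)
```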
USDA-ARS?s Scientific Manuscript database
Coupled Model Intercomparison Project 3 simulations of surface temperature were evaluated over the period 1902-1999 to assess their ability to reproduce historical temperature variability at 211 global locations. Model performance was evaluated using the running Mann Whitney-Z method, a technique th...
Due to the computational cost of running regional-scale numerical air quality models, reduced form models (RFM) have been proposed as computationally efficient simulation tools for characterizing the pollutant response to many different types of emission reductions. The U.S. Envi...
NASA Astrophysics Data System (ADS)
Warner, Thomas T.; Sheu, Rong-Shyang; Bowers, James F.; Sykes, R. Ian; Dodd, Gregory C.; Henn, Douglas S.
2002-05-01
Ensemble simulations made using a coupled atmospheric dynamic model and a probabilistic Lagrangian puff dispersion model were employed in a forensic analysis of the transport and dispersion of a toxic gas that may have been released near Al Muthanna, Iraq, during the Gulf War. The ensemble study had two objectives, the first of which was to determine the sensitivity of the calculated dosage fields to the choices that must be made about the configuration of the atmospheric dynamic model. In this test, various choices were used for model physics representations and for the large-scale analyses that were used to construct the model initial and boundary conditions. The second study objective was to examine the dispersion model's ability to use ensemble inputs to predict dosage probability distributions. Here, the dispersion model was used with the ensemble mean fields from the individual atmospheric dynamic model runs, including the variability in the individual wind fields, to generate dosage probabilities. These are compared with the explicit dosage probabilities derived from the individual runs of the coupled modeling system. The results demonstrate that the specific choices made about the dynamic-model configuration and the large-scale analyses can have a large impact on the simulated dosages. For example, the area near the source that is exposed to a selected dosage threshold varies by up to a factor of 4 among members of the ensemble. The agreement between the explicit and ensemble dosage probabilities is relatively good for both low and high dosage levels. Although only one ensemble was considered in this study, the encouraging results suggest that a probabilistic dispersion model may be of value in quantifying the effects of uncertainties in a dynamic-model ensemble on dispersion model predictions of atmospheric transport and dispersion.
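A sketch of the ensemble dosage-probability idea: the "explicit" probabilities mentioned above amount to counting, at each grid cell, the fraction of ensemble members whose simulated dosage exceeds a threshold. The grid shape, member count, and synthetic dosage fields below are illustrative placeholders.

```python
import numpy as np

def exceedance_probability(dosage_members, threshold):
    """Fraction of ensemble members whose simulated dosage exceeds a
    threshold at each grid cell -- the explicit probability estimate
    against which an internal ensemble-mean mode can be compared."""
    stacked = np.stack(dosage_members)          # (n_members, ny, nx)
    return (stacked > threshold).mean(axis=0)

# usage: 8 synthetic members on a 50x50 grid
rng = np.random.default_rng(0)
members = [rng.lognormal(mean=0.0, sigma=1.0, size=(50, 50)) for _ in range(8)]
p_exceed = exceedance_probability(members, threshold=2.0)
area_at_risk = (p_exceed > 0.5).sum()           # cells exceeded in most members
```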
The use of a block diagram simulation language for rapid model prototyping
NASA Technical Reports Server (NTRS)
Whitlow, Jonathan E.
1995-01-01
The research performed this summer focused on the development of a predictive model for the loading of liquid oxygen (LO2) into the external tank (ET) of the shuttle prior to launch. A predictive model can greatly aid operational personnel since instrumentation aboard the orbiter and ET is limited due to weight constraints. The model, which focuses primarily on the orbiter section of the system, was developed using a block-diagram-based simulation language known as VisSim. Simulations were run on LO2 loading data for shuttle flights STS-50 and STS-55, and the model was demonstrated to accurately predict the sensor data recorded for these flights. As a consequence of the simulation results, it can be concluded that the software tool can be very useful for rapid prototyping of complex models.
Technical Note: On the use of nudging for aerosol–climate model intercomparison studies
Zhang, K.; Wan, H.; Liu, X.; Ghan, S. J.; Kooperman, G. J.; Ma, P.-L.; Rasch, P. J.; Neubauer, D.; Lohmann, U.
2014-08-26
Nudging as an assimilation technique has seen increased use in recent years in the development and evaluation of climate models. Constraining the simulated wind and temperature fields using global weather reanalysis facilitates more straightforward comparison between simulation and observation, and reduces uncertainties associated with natural variabilities of the large-scale circulation. On the other hand, the forcing introduced by nudging can be strong enough to change the basic characteristics of the model climate. In this paper we show that for the Community Atmosphere Model version 5 (CAM5), due to the systematic temperature bias in the standard model and the sensitivity of simulated ice formation to anthropogenic aerosol concentration, nudging towards reanalysis results in substantial reductions in the ice cloud amount and the impact of anthropogenic aerosols on long-wave cloud forcing. In order to reduce discrepancies between the nudged and unconstrained simulations, and meanwhile retain the advantages of nudging, two alternative experimentation methods are evaluated. The first one constrains only the horizontal winds. The second method nudges both winds and temperature, but replaces the long-term climatology of the reanalysis with that of the model. Results show that both methods lead to substantially improved agreement with the free-running model in terms of the top-of-atmosphere radiation budget and cloud ice amount. The wind-only nudging is more convenient to apply, and provides higher correlations of the wind fields, geopotential height, and specific humidity between simulation and reanalysis. Results from both CAM5 and a second aerosol-climate model, ECHAM6-HAM2, also indicate that, compared to the wind-and-temperature nudging, constraining only winds leads to better agreement with the free-running model in terms of the estimated shortwave cloud forcing and the simulated convective activities. This suggests that nudging the horizontal winds but not temperature is a good strategy for the investigation of aerosol indirect effects, since it provides well-constrained meteorology without strongly perturbing the model's mean climate.
Simple Queueing Model Applied to the City of Portland
NASA Astrophysics Data System (ADS)
Simon, Patrice M.; Esser, Jörg; Nagel, Kai
We use a simple traffic micro-simulation model based on queueing dynamics as introduced by Gawron [IJMPC, 9(3):393, 1998] in order to simulate traffic in Portland, Oregon. Links have a flow capacity, that is, they do not release more vehicles per second than is possible according to their capacity. This leads to queue build-up if demand exceeds capacity. Links also have a storage capacity, which means that once a link is full, vehicles that want to enter the link need to wait. This leads to queue spill-back through the network. The model is compatible with route-plan-based approaches such as TRANSIMS, where each vehicle attempts to follow its pre-computed path. Yet, both the data requirements and the computational requirements are considerably lower than for the full TRANSIMS microsimulation. Indeed, the model uses standard emme/2 network data, and runs about eight times faster than real time with more than 100 000 vehicles simultaneously in the simulation on a single Pentium-type CPU. We derive the model's fundamental diagrams and explain them. The simulation is used to simulate traffic on the emme/2 network of the Portland (Oregon) metropolitan region (20 000 links). Demand is generated by a simplified home-to-work destination assignment which generates about half a million trips for the morning peak. Route assignment is done by iterative feedback between micro-simulation and router. An iterative solution of the route assignment for the above problem can be achieved within about half a day of computing time on a desktop workstation. We compare results with field data and with results of traditional assignment runs by the Portland Metropolitan Planning Organization. Thus, with a model such as this one, it is possible to use a dynamic, activities-based approach to transportation simulation (such as in TRANSIMS) with affordable data and hardware. This should enable systematic research about the coupling of demand generation, route assignment, and micro-simulation output.
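A minimal sketch of the two queueing constraints described above: flow capacity (a limited number of releases per time step) and storage capacity (spill-back when the downstream link is full). Class and method names are illustrative; the actual Gawron-style implementation differs in detail.

```python
from collections import deque

class Link:
    """Queue-model link: flow capacity limits how many vehicles leave per
    time step; storage capacity causes spill-back when the link is full."""
    def __init__(self, flow_cap, storage_cap, free_time):
        self.q = deque()                 # entries: (vehicle, earliest exit time)
        self.flow_cap = flow_cap         # vehicles released per time step
        self.storage_cap = storage_cap   # vehicles the link can hold
        self.free_time = free_time       # free-flow traversal time (steps)

    def has_space(self):
        return len(self.q) < self.storage_cap

    def enter(self, veh, now):
        self.q.append((veh, now + self.free_time))

    def release(self, now, next_link):
        # move at most flow_cap vehicles whose free-flow time has elapsed,
        # and only while the downstream link still has storage (spill-back)
        moved = 0
        while (self.q and moved < self.flow_cap
               and self.q[0][1] <= now and next_link.has_space()):
            veh, _ = self.q.popleft()
            next_link.enter(veh, now)
            moved += 1

# usage: one vehicle traverses link a, then link b
a, b = Link(flow_cap=2, storage_cap=10, free_time=3), Link(2, 10, 3)
a.enter("veh-1", now=0)
for now in range(6):
    a.release(now, b)
```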
Simulating Human Cognition in the Domain of Air Traffic Control
NASA Technical Reports Server (NTRS)
Freed, Michael; Johnston, James C.; Null, Cynthia H. (Technical Monitor)
1995-01-01
Experiments intended to assess performance in human-machine interactions are often prohibitively expensive, unethical or otherwise impractical to run. Approximations of experimental results can be obtained, in principle, by simulating the behavior of subjects using computer models of human mental behavior. Computer simulation technology has been developed for this purpose. Our goal is to produce a cognitive model suitable to guide the simulation machinery and enable it to closely approximate a human subject's performance in experimental conditions. The described model is designed to simulate a variety of cognitive behaviors involved in routine air traffic control. As the model is elaborated, our ability to predict the effects of novel circumstances on controller error rates and other performance characteristics should increase. This will enable the system to project the impact of proposed changes to air traffic control procedures and equipment on controller performance.
NASA Technical Reports Server (NTRS)
Stevens, M. E.; Roskam, J.
1985-01-01
The problem of determining the vertical-axis control requirements for landing a VTOL aircraft on a moving ship deck in various sea states is examined. Both a fixed-base piloted simulation and a nonpiloted simulation were used to determine the landing performance as influenced by thrust-to-weight ratio, vertical damping, and engine lags. The piloted simulation was run using a fixed-base simulator at Ames Research Center. Simplified versions of an existing AV-8A Harrier model and an existing head-up display format were used. The ship model used was that of a DD963-class destroyer. Simplified linear models of the pilot, aircraft, ship motion, and ship air-wake turbulence were developed for the nonpiloted simulation. A unique aspect of the nonpiloted simulation was the development of a model of the piloting strategy used for shipboard landing. This model was refined during the piloted simulation until it provided a reasonably good representation of observed pilot behavior.
Modeling the 2004 Indian Ocean Tsunami for Introductory Physics Students
ERIC Educational Resources Information Center
DiLisi, Gregory A.; Rarick, Richard A.
2006-01-01
In this paper we develop materials to address student interest in the Indian Ocean tsunami of December 2004. We discuss the physical characteristics of tsunamis and some of the specific data regarding the 2004 event. Finally, we create an easy-to-make tsunami tank to run simulations in the classroom. The simulations exhibit three dramatic…
NASA Astrophysics Data System (ADS)
Silva, F.; Maechling, P. J.; Goulet, C.; Somerville, P.; Jordan, T. H.
2013-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving SCEC researchers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform is open-source scientific software that can generate broadband (0-100Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Broadband Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms of a historical earthquake for which observed strong ground motion data is available. Also in validation mode, the Broadband Platform calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. During the past year, we have modified the software to enable the addition of a large number of historical events, and we are now adding validation simulation inputs and observational data for 23 historical events covering the Eastern and Western United States, Japan, Taiwan, Turkey, and Italy. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. By establishing an interface between scientific modules with a common set of input and output files, the Broadband Platform facilitates the addition of new scientific methods, which are written by earth scientists in a number of languages such as C, C++, Fortran, and Python. The Broadband Platform's modular design also supports the reuse of existing software modules as building blocks to create new scientific methods. Additionally, the Platform implements a wrapper around each scientific module, converting input and output files to and from the specific formats required (or produced) by individual scientific codes. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes the addition of 3 new simulation methods and several new data products, such as map and distance-based goodness of fit plots. Finally, as the number and complexity of scenarios simulated using the Broadband Platform increase, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
NASA Technical Reports Server (NTRS)
Manobianco, John; Zack, John W.; Taylor, Gregory E.
1996-01-01
This paper describes the capabilities and operational utility of a version of the Mesoscale Atmospheric Simulation System (MASS) that has been developed to support operational weather forecasting at the Kennedy Space Center (KSC) and Cape Canaveral Air Station (CCAS). The implementation of local, mesoscale modeling systems at KSC/CCAS is designed to provide detailed short-range (less than 24 h) forecasts of winds, clouds, and hazardous weather such as thunderstorms. Short-range forecasting is a challenge for daily operations, and manned and unmanned launches since KSC/CCAS is located in central Florida where the weather during the warm season is dominated by mesoscale circulations like the sea breeze. For this application, MASS has been modified to run on a Stardent 3000 workstation. Workstation-based, real-time numerical modeling requires a compromise between the requirement to run the system fast enough so that the output can be used before expiration balanced against the desire to improve the simulations by increasing resolution and using more detailed physical parameterizations. It is now feasible to run high-resolution mesoscale models such as MASS on local workstations to provide timely forecasts at a fraction of the cost required to run these models on mainframe supercomputers. MASS has been running in the Applied Meteorology Unit (AMU) at KSC/CCAS since January 1994 for the purpose of system evaluation. In March 1995, the AMU began sending real-time MASS output to the forecasters and meteorologists at CCAS, Spaceflight Meteorology Group (Johnson Space Center, Houston, Texas), and the National Weather Service (Melbourne, Florida). However, MASS is not yet an operational system. The final decision whether to transition MASS for operational use will depend on a combination of forecaster feedback, the AMU's final evaluation results, and the life-cycle costs of the operational system.
Status of the AIAA Modeling and Simulation Format Standard
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Hildreth, Bruce L.
2008-01-01
The current draft AIAA Standard for flight simulation models represents an on-going effort to improve the productivity of practitioners of the art of digital flight simulation (one of the original digital computer applications). This initial release provides the capability for the efficient representation and exchange of an aerodynamic model in full fidelity; the DAVE-ML format can be easily imported (with development of site-specific import tools) in an unambiguous way with automatic verification. An attractive feature of the standard is the ability to coexist with existing legacy software or tools. The draft Standard is currently limited in scope to static elements of dynamic flight simulations; however, these static elements represent the bulk of typical flight simulation mathematical models. It is already seeing application within U.S. and Australian government agencies in an effort to improve productivity and reduce model rehosting overhead. An existing tool allows import of DAVE-ML models into a popular simulation modeling and analysis tool, and other community-contributed tools and libraries can simplify the use of DAVE-ML compliant models at compile- or run-time of high-fidelity flight simulation.
Large-eddy simulations of a Salt Lake Valley cold-air pool
NASA Astrophysics Data System (ADS)
Crosman, Erik T.; Horel, John D.
2017-09-01
Persistent cold-air pools are often poorly forecast by mesoscale numerical weather prediction models, in part due to inadequate parameterization of planetary boundary-layer physics in stable atmospheric conditions, and also because of errors in the initialization and treatment of the model surface state. In this study, an improved numerical simulation of the 27-30 January 2011 cold-air pool in Utah's Great Salt Lake Basin is obtained using a large-eddy simulation with more realistic surface state characterization. Compared to a Weather Research and Forecasting model configuration run as a mesoscale model with a planetary boundary-layer scheme where turbulence is highly parameterized, the large-eddy simulation more accurately captured turbulent interactions between the stable boundary-layer and flow aloft. The simulations were also found to be sensitive to variations in the Great Salt Lake temperature and Salt Lake Valley snow cover, illustrating the importance of land surface state in modelling cold-air pools.
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task, and many avenues can be explored, among which are the improvement of spatial representation, the search for more robust parametrization, better formulation of some processes, and the modification of model structures by a trial-and-error procedure. Several past works indicate that model parameters and structure can depend on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps to find solutions for improvement. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, using a large data set of 240 catchments. To this end, fine-time-step hydro-climatic information at sub-hourly resolution is used as input to a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration, and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance for identifying the model components that should be improved. Our analysis indicates that the baseline model structure must be modified at sub-daily time steps to guarantee the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI: 10.1016/S0022-1694(03)00225-7
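The time-step sensitivity of the interception flux noted above can be illustrated with a toy interception store: the same daily forcing, time-stepped at different resolutions, yields different cumulative interception. This is not the GR interception component; the capacity, forcing, and parameter values are illustrative assumptions.

```python
import numpy as np

def interception_flux(rain, pet, dt_hours, cap=2.0):
    """Toy interception store (capacity in mm): rainfall fills the store,
    evaporation empties it at the potential rate; the fraction of rain
    intercepted depends on how finely the same forcing is time-stepped."""
    store, intercepted = 0.0, 0.0
    steps_per_day = int(24 / dt_hours)
    for r_day, e_day in zip(rain, pet):
        r, e = r_day / steps_per_day, e_day / steps_per_day
        for _ in range(steps_per_day):
            store = min(cap, store + r)   # fill; the overflow is throughfall
            evap = min(store, e)          # evaporate at the potential rate
            store -= evap
            intercepted += evap
    return intercepted

# usage: the same 240-day forcing stepped daily, 6-hourly, and hourly
rng = np.random.default_rng(2)
rain = rng.exponential(3.0, 240)          # mm/day
pet = np.full(240, 2.5)                   # mm/day
for dt in (24, 6, 1):
    print(dt, round(interception_flux(rain, pet, dt), 1))
```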
Storm Surge Modeling of Typhoon Haiyan at the Naval Oceanographic Office Using Delft3D
NASA Astrophysics Data System (ADS)
Gilligan, M. J.; Lovering, J. L.
2016-02-01
The Naval Oceanographic Office provides estimates of the rise in sea level along the coast due to storm surge associated with tropical cyclones, typhoons, and hurricanes. Storm surge modeling and prediction helps the US Navy by providing a threat assessment tool to help protect Navy assets and provide support for humanitarian assistance/disaster relief efforts. Recent advancements in our modeling capabilities include the use of the Delft3D modeling suite as part of a Naval Research Laboratory (NRL) developed Coastal Surge Inundation Prediction System (CSIPS). Model simulations were performed on Typhoon Haiyan, which made landfall in the Philippines in November 2013. Comparisons of model simulations using forecast and hindcast track data highlight the importance of accurate storm track information for storm surge predictions. Model runs using the forecast track prediction and hindcast track information give maximum storm surge elevations of 4 meters and 6.1 meters, respectively. Model results for the hindcast simulation were compared with data published by the JSCE-PICE Joint survey for locations in San Pedro Bay (SPB) and on the Eastern Samar Peninsula (ESP). In SPB, where wind-induced set-up predominates, the model run using the forecast track predicted surge within 2 meters in 38% of survey locations and within 3 meters in 59% of the locations. When the hindcast track was used, the model predicted within 2 meters in 77% of the locations and within 3 meters in 95% of the locations. The model was unable to predict the high surge reported along the ESP produced by infragravity wave-induced set-up, which is not simulated in the model. Additional modeling capabilities incorporating infragravity waves are required to predict storm surge accurately along open coasts with steep bathymetric slopes, such as those seen in island arcs.
MAGIC: Model and Graphic Information Converter
NASA Technical Reports Server (NTRS)
Herbert, W. C.
2009-01-01
MAGIC is a software tool capable of converting highly detailed 3D models from an open, standard format, VRML 2.0/97, into the proprietary DTS file format used by the Torque Game Engine from GarageGames. MAGIC is used to convert 3D simulations from authoritative sources into the data needed to run the simulations in NASA's Distributed Observer Network. The Distributed Observer Network (DON) is a simulation presentation tool built by NASA to facilitate the simulation sharing requirements of the Data Presentation and Visualization effort within the Constellation Program. DON is built on top of the Torque Game Engine (TGE) and has chosen TGE's Dynamix Three Space (DTS) file format to represent 3D objects within simulations.
Memory interface simulator: A computer design aid
NASA Technical Reports Server (NTRS)
Taylor, D. S.; Williams, T.; Weatherbee, J. E.
1972-01-01
Results are presented of a study conducted with a digital simulation model being used in the design of the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. The model simulates the activity involved as instructions are fetched from random access memory for execution in one of the system central processing units. A series of model runs measured instruction execution time under various assumptions pertaining to the CPUs and the interface between the CPUs and RAM. Design tradeoffs are presented in the following areas: bus widths, CPU microprogram read-only memory cycle time, multiple instruction fetch, and instruction mix.
Adapting NBODY4 with a GRAPE-6a Supercomputer for Web Access, Using NBodyLab
NASA Astrophysics Data System (ADS)
Johnson, V.; Aarseth, S.
2006-07-01
A demonstration site has been developed by the authors that enables researchers and students to experiment with the capabilities and performance of NBODY4 running on a GRAPE-6a over the web. NBODY4 is a sophisticated open-source N-body code for high accuracy simulations of dense stellar systems (Aarseth 2003). In 2004, NBODY4 was successfully tested with a GRAPE-6a, yielding an unprecedented low-cost tool for astrophysical research. The GRAPE-6a is a supercomputer card developed by astrophysicists to accelerate high accuracy N-body simulations with a cluster or a desktop PC (Fukushige et al. 2005, Makino & Taiji 1998). The GRAPE-6a card became commercially available in 2004, runs at 125 Gflops peak, has a standard PCI interface, and costs less than $10,000. Researchers running the widely used NBODY6 (which does not require GRAPE hardware) can compare their own PC or laptop performance with simulations run on http://www.NbodyLab.org. Such comparisons may help justify acquisition of a GRAPE-6a. For workgroups such as university physics or astronomy departments, the demonstration site may be replicated or serve as a model for a shared computing resource. The site was constructed using an NBodyLab server-side framework.
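For context, the computational kernel that GRAPE hardware accelerates is the O(N^2) direct summation of pairwise gravitational forces. The sketch below shows that kernel with Plummer softening; NBODY4 itself handles close encounters by regularization rather than softening, so this is an illustration of the workload, not the NBODY4 algorithm.

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Direct-summation O(N^2) gravitational accelerations (G = 1) -- the
    inner loop that GRAPE hardware accelerates. The softening eps is an
    illustrative stand-in for NBODY4's regularization of close encounters."""
    d = pos[None, :, :] - pos[:, None, :]            # pairwise separations
    r2 = (d ** 2).sum(-1) + eps ** 2
    np.fill_diagonal(r2, np.inf)                     # exclude self-force
    inv_r3 = r2 ** -1.5
    return (d * (mass[None, :] * inv_r3)[:, :, None]).sum(axis=1)

# usage: 1024 equal-mass particles in a unit cube
rng = np.random.default_rng(1)
pos = rng.random((1024, 3))
mass = np.full(1024, 1.0 / 1024)
acc = accelerations(pos, mass)
```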
Large eddy simulation of dust-uplift by haboob density currents
NASA Astrophysics Data System (ADS)
Huang, Q.
2017-12-01
Cold pool outflows have been shown from both observations and convection-permitting models to be a dominant source of dust uplift ("haboobs") in the summertime Sahel and Sahara, and to cause dust uplift over deserts across the world. In this paper, large eddy model (LEM) simulations, which resolve the turbulence within the cold pools much better than previous studies of haboobs that used convection-permitting models, are used to investigate the winds that cause dust uplift in cold pools, and the resultant dust uplift and transport. Dust uplift largely occurs in the head of the density current, consistent with the few existing observations. In the modeled density current, dust is largely restricted to the lowest, coldest, well-mixed layer of the cold pool outflow (below around 400 m), except above the head of the cold pool, where some dust reaches 2.5 km. This rapid transport to high altitude will contribute to the long atmospheric lifetimes of large dust particles from haboobs. Decreasing the model horizontal grid spacing from 1.0 km to 100 m resolves more turbulence, locally increasing winds, increasing mixing, and reducing the propagation speed of the density current. Total accumulated dust uplift is approximately twice as large in 1.0 km runs compared with 100 m runs, suggesting that for studying haboobs in convection-permitting runs the representation of turbulence and mixing is significant. Simulations with surface sensible heat fluxes representative of those from a desert region in daytime show that increasing surface fluxes slows the density current due to increased mixing, but increases dust uplift rates, due to increased downward transport of momentum to the surface.
Compressed quantum computation using a remote five-qubit quantum computer
NASA Astrophysics Data System (ADS)
Hebenstreit, M.; Alsina, D.; Latorre, J. I.; Kraus, B.
2017-05-01
The notion of compressed quantum computation is employed to simulate the Ising interaction of a one-dimensional chain consisting of n qubits using the universal IBM cloud quantum computer running on log2(n) qubits. The external field parameter that controls the quantum phase transition of this model translates into particular settings of the quantum gates that generate the circuit. We measure the magnetization, which displays the quantum phase transition, on a two-qubit system, which simulates a four-qubit Ising chain, and show its agreement with the theoretical prediction within a certain error. We also discuss the relevant question of how to assess errors when using a cloud quantum computer with a limited number of runs. As a solution, we propose to use validating circuits, that is, to run independent controlled quantum circuits of similar complexity to the circuit of interest.
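The theoretical prediction that the measured magnetization is compared against can be reproduced classically for small chains. The sketch below exactly diagonalizes a transverse-field Ising Hamiltonian and evaluates the ground-state transverse magnetization across a field sweep; the Hamiltonian convention, chain length, and field values are illustrative assumptions, and this is a classical reference calculation, not the compressed quantum circuit itself.

```python
import numpy as np
from functools import reduce

def op_at(site, op, n):
    """Embed a single-qubit operator at a given site of an n-qubit chain."""
    I = np.eye(2)
    return reduce(np.kron, [op if i == site else I for i in range(n)])

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

def magnetization(n, h, J=1.0):
    """Ground-state transverse magnetization of an open Ising chain,
    by exact diagonalization (feasible only for small n, as here)."""
    H = -J * sum(op_at(i, sz, n) @ op_at(i + 1, sz, n) for i in range(n - 1))
    H -= h * sum(op_at(i, sx, n) for i in range(n))
    w, v = np.linalg.eigh(H)
    gs = v[:, 0]                                   # lowest-energy eigenvector
    Mx = sum(op_at(i, sx, n) for i in range(n)) / n
    return gs @ (Mx @ gs)

# sweep the external field through the critical region of a 4-qubit chain
for h in (0.2, 0.6, 1.0, 1.4, 1.8):
    print(h, magnetization(4, h))
```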
Changes in running pattern due to fatigue and cognitive load in orienteering.
Millet, Guillaume Y; Divert, Caroline; Banizette, Marion; Morin, Jean-Benoit
2010-01-01
The aim of this study was to examine the influence of fatigue on running biomechanics in normal running, in normal running with a cognitive task, and in running while map reading. Nineteen international and less experienced orienteers performed a fatiguing running exercise of duration and intensity similar to a classic distance orienteering race on an instrumented treadmill while performing mental arithmetic, an orienteering simulation, and control running at regular intervals. Two-way repeated-measures analysis of variance did not reveal any significant difference between mental arithmetic and control running for any of the kinematic and kinetic parameters analysed eight times over the fatiguing protocol. However, these parameters were systematically different between the orienteering simulation and the other two conditions (mental arithmetic and control running). The adaptations in orienteering simulation running were significantly more pronounced in the elite group when step frequency, peak vertical ground reaction force, vertical stiffness, and maximal downward displacement of the centre of mass during contact were considered. The effects of fatigue on running biomechanics depended on whether the orienteers read their map or ran normally. It is concluded that adding a cognitive load does not modify running patterns. Therefore, all changes in running pattern observed during the orienteering simulation, particularly in elite orienteers, are the result of adaptations to enable efficient map reading and/or potentially prevent injuries. Finally, running patterns are not affected to the same extent by fatigue when a map reading task is added.
Users guide: The LaRC human-operator-simulator-based pilot model
NASA Technical Reports Server (NTRS)
Bogart, E. H.; Waller, M. C.
1985-01-01
A Human Operator Simulator (HOS) based pilot model has been developed for use at NASA LaRC for analysis of flight management problems. The model is currently configured to simulate piloted flight of an advanced transport airplane. The generic HOS operator and machine model was originally developed under U.S. Navy sponsorship by Analytics, Inc. and through a contract with LaRC was configured to represent a pilot flying a transport airplane. A version of the HOS program runs in batch mode on LaRC's (60-bit-word) central computer system. This document provides a guide for using the program and describes in some detail the assortment of files used during its operation.
NASA Technical Reports Server (NTRS)
Otter-Nacke, S.; Godwin, D. C.; Ritchie, J. T.
1986-01-01
CERES-Wheat is a computer simulation model of the growth, development, and yield of spring and winter wheat. It was designed to be used in any location throughout the world where wheat can be grown. The model is written in Fortran 77, operates on a daily time step, and runs on a range of computer systems from microcomputers to mainframes. Two versions of the model were developed: one, CERES-Wheat, assumes nitrogen to be nonlimiting; in the other, CERES-Wheat-N, the effects of nitrogen deficiency are simulated. The report provides comparisons of simulations and measurements for about 350 wheat data sets collected from throughout the world.
Modeling DNP3 Traffic Characteristics of Field Devices in SCADA Systems of the Smart Grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Huan; Cheng, Liang; Chuah, Mooi Choo
In the generation, transmission, and distribution sectors of the smart grid, intelligence of field devices is realized by programmable logic controllers (PLCs). Many smart-grid subsystems are essentially cyber-physical energy systems (CPES): for instance, the power system process (i.e., the physical part) within a substation is monitored and controlled by a SCADA network with hosts running miscellaneous applications (i.e., the cyber part). To study the interactions between the cyber and physical components of a CPES, several co-simulation platforms have been proposed. However, the network simulators/emulators of these platforms do not include a detailed traffic model that takes into account the impacts of the execution model of PLCs on traffic characteristics. As a result, network traces generated by co-simulation only reveal the impacts of the physical process on the contents of the traffic generated by SCADA hosts, whereas the distinction between PLCs and computing nodes (e.g., a hardened computer running a process visualization application) has been overlooked. To generate realistic network traces using co-simulation for the design and evaluation of applications relying on accurate traffic profiles, it is necessary to establish a traffic model for PLCs. In this work, we propose a parameterized model for PLCs that can be incorporated into existing co-simulation platforms. We focus on the DNP3 subsystem of slave PLCs, which automates the processing of packets from the DNP3 master. To validate our approach, we extract model parameters from both the configuration and network traces of real PLCs. Simulated network traces are generated and compared against those from PLCs. Our evaluation shows that our proposed model captures the essential traffic characteristics of DNP3 slave PLCs, which can be used to extend existing co-simulation platforms and gain further insights into the behaviors of CPES.
Preference pulses and the win-stay, fix-and-sample model of choice.
Hachiga, Yosuke; Sakagami, Takayuki; Silberberg, Alan
2015-11-01
Two groups of six rats each were trained to respond to two levers for a food reinforcer. One group was trained on concurrent variable-ratio 20 extinction schedules of reinforcement. The second group was trained on a concurrent variable-interval 27-s extinction schedule. In both groups, lever-schedule assignments changed randomly following reinforcement; a light cued the lever providing the next reinforcer. In the next condition, the light cue was removed and reinforcer assignment strictly alternated between levers. The next two conditions redetermined, in order, the first two conditions. Preference pulses, defined as a tendency for relative response rate to decline to the just-reinforced alternative with time since reinforcement, only appeared during the extinction schedule. Although the pulse's functional form was well described by a reinforcer-induction equation, there was a large residual between actual data and a pulse-as-artifact simulation (McLean, Grace, Pitts, & Hughes, 2014) used to discern reinforcer-dependent contributions to pulsing. However, if that simulation was modified to include a win-stay tendency (a propensity to stay on the just-reinforced alternative), the residual was greatly reduced. Additional modifications of the parameter values of the pulse-as-artifact simulation enabled it to accommodate the present results as well as those it originally accommodated. In its revised form, this simulation was used to create a model that describes response runs to the preferred alternative as terminating probabilistically, and runs to the unpreferred alternative as punctate with occasional perseverative response runs. After reinforcement, choices are modeled as returning briefly to the lever location that had been just reinforced. This win-stay propensity is hypothesized as due to reinforcer induction. © Society for the Experimental Analysis of Behavior.
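A generative sketch of the revised model described above, in which runs on the preferred lever terminate probabilistically, visits to the unpreferred lever are punctate with occasional perseveration, and choice briefly returns to the just-reinforced lever. All parameter values and the reinforcement rule here are illustrative assumptions, not the fitted pulse-as-artifact simulation.

```python
import random

def simulate_choices(n_resp, p_end_run=0.15, n_stay=3, p_persev=0.1):
    """Generative sketch: win-stay responses after reinforcement, runs on
    the preferred lever (A) ending probabilistically, and punctate visits
    to the unpreferred lever (B) with occasional perseveration."""
    seq, just_reinforced = [], "A"
    while len(seq) < n_resp:
        seq += [just_reinforced] * n_stay        # brief win-stay after food
        while random.random() > p_end_run:       # run on the preferred lever
            seq.append("A")
        seq.append("B")                          # single sample of B...
        while random.random() < p_persev:        # ...occasionally perseverative
            seq.append("B")
        just_reinforced = random.choice("AB")    # side of the next reinforcer
    return seq[:n_resp]

# usage: proportion of responses on the preferred lever
seq = simulate_choices(10_000)
print(seq.count("A") / len(seq))
```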
Snow hydrology in a general circulation model
NASA Technical Reports Server (NTRS)
Marshall, Susan; Roads, John O.; Glatzmaier, Gary
1994-01-01
A snow hydrology has been implemented in an atmospheric general circulation model (GCM). The snow hydrology consists of parameterizations of snowfall and snow cover fraction, a prognostic calculation of snow temperature, and a model of the snow mass and hydrologic budgets. Previously, only snow albedo had been included by a specified snow line. A 3-year GCM simulation with this now more complete surface hydrology is compared to a previous GCM control run with the specified snow line, as well as with observations. In particular, the authors discuss comparisons of the atmospheric and surface hydrologic budgets and the surface energy budget for U.S. and Canadian areas. The new snow hydrology changes the annual cycle of the surface moisture and energy budgets in the model. There is a noticeable shift in the runoff maximum from winter in the control run to spring in the snow hydrology run. A substantial amount of GCM winter precipitation is now stored in the seasonal snowpack. Snow cover also acts as an important insulating layer between the atmosphere and the ground. Wintertime soil temperatures are much higher in the snow hydrology experiment than in the control experiment. Seasonal snow cover is important for dampening large fluctuations in GCM continental skin temperature during the Northern Hemisphere winter. Snow depths and snow extent show good agreement with observations over North America. The geographic distribution of maximum depths is not as well simulated by the model due, in part, to the coarse resolution of the model. The patterns of runoff are qualitatively and quantitatively similar to observed patterns of streamflow averaged over the continental United States. The seasonal cycles of precipitation and evaporation are also reasonably well simulated by the model, although their magnitudes are larger than is observed. This is due, in part, to a cold bias in this model, which results in a dry model atmosphere and enhances the hydrologic cycle everywhere.
Evaluation of the transport matrix method for simulation of ocean biogeochemical tracers
NASA Astrophysics Data System (ADS)
Kvale, Karin F.; Khatiwala, Samar; Dietze, Heiner; Kriest, Iris; Oschlies, Andreas
2017-06-01
Conventional integration of Earth system and ocean models can accrue considerable computational expense, particularly for marine biogeochemical applications. Offline numerical schemes in which only the biogeochemical tracers are time stepped and transported using a pre-computed circulation field can substantially reduce the burden and are thus an attractive alternative. One such scheme is the transport matrix method (TMM), which represents tracer transport as a sequence of sparse matrix-vector products that can be performed efficiently on distributed-memory computers. While the TMM has been used for a variety of geochemical and biogeochemical studies, to date the resulting solutions have not been comprehensively assessed against their online counterparts. Here, we present a detailed comparison of the two. It is based on simulations of the state-of-the-art biogeochemical sub-model embedded within the widely used coarse-resolution University of Victoria Earth System Climate Model (UVic ESCM). The default, non-linear advection scheme was first replaced with a linear, third-order upwind-biased advection scheme to satisfy the linearity requirement of the TMM. Transport matrices were extracted from an equilibrium run of the physical model and subsequently used to integrate the biogeochemical model offline to equilibrium. The identical biogeochemical model was also run online. Our simulations show that offline integration introduces some bias to biogeochemical quantities through the omission of the polar filtering used in UVic ESCM and in the offline application of time-dependent forcing fields, with high latitudes showing the largest differences with respect to the online model. Differences in other regions and in the seasonality of nutrients and phytoplankton distributions are found to be relatively minor, giving confidence that the TMM is a reliable tool for offline integration of complex biogeochemical models. Moreover, while UVic ESCM is a serial code, the TMM can be run on a parallel machine with no change to the underlying biogeochemical code, thus providing orders of magnitude speed-up over the online model.
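The core TMM time step, tracer transport as sparse matrix-vector products, can be sketched in a few lines. This is a generic sketch following the common explicit/implicit TMM formulation, with random placeholder matrices standing in for matrices extracted from an ocean-model equilibrium run; it is not the UVic ESCM configuration.

```python
import numpy as np
import scipy.sparse as sp

# Toy problem size and placeholder transport matrices. In a real TMM
# application these sparse matrices are pre-computed from the physical
# circulation model, not random.
n = 1000                                   # number of wet grid boxes
A_exp = sp.random(n, n, density=0.005, format="csr")  # explicit transport
A_imp = sp.random(n, n, density=0.005, format="csr")  # implicit transport
c = np.ones(n)                             # tracer concentration vector
dt = 1.0

def biogeochem_source(c):
    # placeholder for the biogeochemical model's local source/sink terms
    return -0.01 * c

# One TMM step: local biogeochemistry, then two sparse mat-vec products.
for step in range(10):
    c = A_imp @ (A_exp @ c + dt * biogeochem_source(c))
```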
Tenbus, Frederick J.; Fleck, William B.
2001-01-01
Military activity at Graces Quarters, a former open-air chemical-agent facility at Aberdeen Proving Ground, Maryland, has resulted in ground-water contamination by chlorinated hydrocarbons. As part of a ground-water remediation feasibility study, a three-dimensional model was constructed to simulate transport of four chlorinated hydrocarbons (1,1,2,2-tetrachloroethane, trichloroethene, carbon tetrachloride, and chloroform) that are components of a contaminant plume in the surficial and middle aquifers underlying the east-central part of Graces Quarters. The model was calibrated to steady-state hydraulic head at 58 observation wells and to the concentration of 1,1,2,2-tetrachloroethane in 58 observation wells and 101 direct-push probe samples from the mid-1990s. Simulations using the same basic model with minor adjustments were then run for each of the other plume constituents. The error statistics between the simulated and measured concentrations of each of the constituents compared favorably to the error statistics of the 1,1,2,2-tetrachloroethane calibration. Model simulations were used in conjunction with contaminant concentration data to examine the sources and degradation of the plume constituents. It was determined from this that mixed contaminant sources with no ambient degradation was the best approach for simulating multi-species solute transport at the site. Forward simulations were run to show potential solute transport 30 years and 100 years into the future with and without source removal. Although forward simulations are subject to uncertainty, they can be useful for illustrating various aspects of the conceptual model and its implementation. The forward simulation with no source removal indicates that contaminants would spread throughout various parts of the surficial and middle aquifers, with the 100-year simulation showing potential discharge areas in either the marshes at the end of the Graces Quarters peninsula or just offshore in the estuaries. The simulation with source removal indicates that if the modeling assumptions are reasonable and ground-water cleanup within 30 years is important, source removal alone is not a sufficient remedy, and cleanup might not even occur within 100 years.
Development of an in silico stochastic 4D model of tumor growth with angiogenesis.
Forster, Jake C; Douglass, Michael J J; Harriss-Phillips, Wendy M; Bezak, Eva
2017-04-01
A stochastic computer model of tumour growth with spatial and temporal components that includes tumour angiogenesis was developed. In the current work it was used to simulate head and neck tumour growth. The model also provides the foundation for a 4D cellular radiotherapy simulation tool. The model, developed in Matlab, contains cell positions randomised in 3D space without overlap. Blood vessels are represented by strings of blood vessel units which branch outwards to achieve the desired tumour relative vascular volume. Hypoxic cells have an increased cell cycle time and become quiescent at oxygen tensions less than 1 mmHg. Necrotic cells are resorbed. A hierarchy of stem cells, transit cells and differentiated cells is considered along with differentiated cell loss. Model parameters include the relative vascular volume (2-10%), blood oxygenation (20-100 mmHg), distance from vessels to the onset of necrosis (80-300 μm) and probability for stem cells to undergo symmetric division (2%). Simulations were performed to observe the effects of hypoxia on tumour growth rate for head and neck cancers. Simulations were run on a supercomputer with eligible parts running in parallel on 12 cores. Using biologically plausible model parameters for head and neck cancers, the tumour volume doubling time varied from 45 ± 5 days (n = 3) for well oxygenated tumours to 87 ± 5 days (n = 3) for severely hypoxic tumours. The main achievements of the current model were randomised cell positions and the connected vasculature structure between the cells. These developments will also be beneficial when irradiating the simulated tumours using Monte Carlo track structure methods. © 2017 American Association of Physicists in Medicine.
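A toy sketch of the oxygen-dependent cell-state rule summarized above follows; the 1 mmHg quiescence threshold and the 80-300 um necrosis-distance range come from the abstract, while the baseline cycle time and the hypoxic slowdown law are invented placeholders.

```python
# Illustrative thresholds only; the slowdown law below is not the paper's.
BASE_CYCLE_H = 33.0         # assumed baseline cell-cycle time (hours)
NECROSIS_DIST_UM = 150.0    # chosen within the 80-300 um range quoted above

def cell_state(oxygen_mmHg, dist_to_vessel_um):
    if dist_to_vessel_um > NECROSIS_DIST_UM:
        return "necrotic"        # necrotic cells are resorbed in the model
    if oxygen_mmHg < 1.0:
        return "quiescent"       # below 1 mmHg cells stop cycling
    return "proliferating"

def cycle_time_h(oxygen_mmHg):
    # hypoxic cells cycle more slowly; this linear slowdown is a placeholder
    slowdown = max(1.0, 5.0 / max(oxygen_mmHg, 0.5))
    return BASE_CYCLE_H * slowdown

print(cell_state(0.5, 100.0), cycle_time_h(0.5))
```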
Suppressing correlations in massively parallel simulations of lattice models
NASA Astrophysics Data System (ADS)
Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle
2017-11-01
For lattice Monte Carlo simulations, parallelization is crucial to make studies of large systems and long simulation times feasible, while sequential simulations remain the gold standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30× over a parallel CPU implementation on a single socket and at least 180× with respect to the sequential reference.
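The idea behind such decomposition schemes can be illustrated with a toy dead-border decomposition in which the domain origin is shifted randomly between sweeps, so that the static borders do not imprint correlations on the dynamics. This sketch is generic and differs in detail from the paper's GPU implementation.

```python
import numpy as np

rng = np.random.default_rng()

L, D = 64, 8                            # lattice size and domain size
lattice = rng.integers(0, 2, size=(L, L))

def sweep(lattice):
    ox, oy = rng.integers(0, D, size=2)  # random domain offset per sweep
    for bx in range(L // D):
        for by in range(L // D):         # domains could run in parallel:
            for i in range(1, D - 1):    # only interior sites are touched,
                for j in range(1, D - 1):  # a 1-site dead border is skipped
                    x = (bx * D + i + ox) % L
                    y = (by * D + j + oy) % L
                    lattice[x, y] ^= rng.integers(0, 2)  # placeholder update
    return lattice

sweep(lattice)
```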
Report on results of current and future metal casting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unal, Cetin; Carlson, Neil N.
2015-09-28
New modeling capabilities needed to simulate the casting of metallic fuels have been added to the Truchas code. In this report we summarize improvements made in FY2015 in three areas: (1) analysis of new casting experiments conducted with the BCS and EFL designs, (2) simulation of INL's U-Zr casting experiments with the Flow3D computer program, and (3) implementation of a surface tension model in Truchas for the unstructured meshes required to run U-Zr casting simulations.
NASA Astrophysics Data System (ADS)
Bellafiore, Debora; McKiver, William J.; Ferrarin, Christian; Umgiesser, Georg
2018-05-01
Dense water (DW) formation commonly occurs in the shallow Northern Adriatic Sea during winter outbreaks, when there is a combination of the cooling of surface waters by the winds and high salinity as a result of reduced river inputs. These DWs subsequently propagate southwards over a period of weeks/months, eventually arriving in the Southern Adriatic Sea. The investigation is based on a new nonhydrostatic (NH) formulation of the 3D finite element model SHYFEM that is validated for a number of theoretical test cases. Subsequently this model is used to simulate, through high-resolution numerical simulations, an extreme DW event that occurred in the Adriatic Sea in 2012. We perform both hydrostatic (HY) and NH simulations in order to explicitly see the impact of NH processes on the DW dynamics. The modeled results are compared to observations collected in the field campaign of March-April 2012 in the Southern Adriatic Sea. The NH run correctly reproduces the across isobath bottom-trapped gravity current characterizing the canyon DW pathways. It also more accurately captures the frequency and intensity of dense water cascading pulsing events, as the inclusion of NH processes produces stronger currents with different DW mixing characteristics. Finally, the NH run simulates internal gravity waves (IGW), generated during the cascading at the edge of the canyon, which propagate downslope. This IGW activity is not captured in the HY case.
Sakhteman, Amirhossein; Zare, Bijan
2016-01-01
An interactive application, Modelface, is presented for the Modeller software on the Windows platform. The application can run all steps of homology modeling, including PDB-to-FASTA generation, running Clustal, model building, and loop refinement. Other Modeller modules, including energy calculation, energy minimization, and the ability to make single-point mutations in PDB structures, are also implemented inside Modelface. The application is a simple batch-based tool with a small memory footprint and is free of charge for academic use. It is also able to repair missing atom types in PDB structures, making it suitable for many molecular modeling studies such as docking and molecular dynamics simulation. Some successful instances of modeling studies using Modelface are also reported. PMID:28243276
Consequence modeling using the fire dynamics simulator.
Ryder, Noah L; Sutula, Jason A; Schemel, Christopher F; Hamer, Andrew J; Van Brunt, Vincent
2004-11-11
The use of Computational Fluid Dynamics (CFD) and in particular Large Eddy Simulation (LES) codes to model fires provides an efficient tool for the prediction of large-scale effects that include plume characteristics, combustion product dispersion, and heat effects to adjacent objects. This paper illustrates the strengths of the Fire Dynamics Simulator (FDS), an LES code developed by the National Institute of Standards and Technology (NIST), through several small- and large-scale validation runs and process safety applications. The paper presents two fire experiments--a small room fire and a large (15 m diameter) pool fire. The model results are compared to experimental data and demonstrate good agreement between the models and data. The validation work is then extended to demonstrate applicability to process safety concerns by detailing a model of a tank farm fire and a model of the ignition of a gaseous fuel in a confined space. In this simulation, a room was filled with propane, given time to disperse, and then ignited. The model yields accurate results for the dispersion of the gas throughout the space. This information can be used to determine flammability and explosive limits in a space and can be used in subsequent models to determine the pressure and temperature waves that would result from an explosion. The model dispersion results were compared to an experiment performed by Factory Mutual. Using the above examples, this paper demonstrates that FDS is ideally suited to building realistic models of process geometries in which large-scale explosion and fire failure risks can be evaluated, with several distinct advantages over more traditional CFD codes. Namely, transient solutions to fire and explosion growth can be produced with less sophisticated, lower-cost hardware than traditional CFD codes require (a PC-type computer versus a UNIX workstation) and can be solved for longer time histories (on the order of hundreds of seconds of computed time) with minimal computer resources and model run time. Additionally, results can be analyzed, viewed, and tabulated during and following a model run within a PC environment. There are some tradeoffs, however, as rapid computations on PCs may require a sacrifice in grid resolution or in sub-grid modeling, depending on the size of the geometry modeled.
The Challenge of Grounding Planning in Simulation with an Interactive Model Development Environment
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Frank, Jeremy D.; Chachere, John M.; Smith, Tristan B.; Swanson, Keith J.
2011-01-01
A principal obstacle to fielding automated planning systems is the difficulty of modeling. Physical systems are modeled conventionally based on specification documents and the modeler's understanding of the system. Thus, the model is developed in a way that is disconnected from the system's actual behavior and is vulnerable to manual error. Another obstacle to fielding planners is testing and validation. For a space mission, generated plans must often be validated by translating them into command sequences that are run in a simulation testbed. Testing in this way is complex and onerous because of the large number of possible plans and states of the spacecraft. However, if used as a source of domain knowledge, the simulator can ease validation. This paper poses a challenge: to ground planning models in the system physics represented by simulation. A proposed interactive model development environment illustrates the integration of planning and simulation to meet the challenge. This integration reveals research paths for automated model construction and validation.
Quasi-decadal Oscillation in the CMIP5 and CMIP3 Climate Model Simulations: California Case
NASA Astrophysics Data System (ADS)
Wang, J.; Yin, H.; Reyes, E.; Chung, F. I.
2014-12-01
The ongoing three-year drought in California recalls two other historical long drought periods: 1987-1992 and 1928-1934. This kind of interannual variability corresponds to the dominant 7-15 yr quasi-decadal oscillation in precipitation and streamflow in California. When using global climate model projections to assess the climate change impact on water resources planning in California, it is natural to ask whether global climate models are able to reproduce observed interannual variability such as the 7-15 yr quasi-decadal oscillation. Spectral analysis of tree-ring-reconstructed precipitation and the historical precipitation record confirms the existence of the 7-15 yr quasi-decadal oscillation in California. However, wavelet-based spectral analysis of all the CMIP5 and CMIP3 global climate model historical simulations found that only two models in CMIP3, CGCM 2.3.2a of MRI and NCAR PCM1.0, and only two models in CMIP5, MIROC5 and CESM1-WACCM, have statistically significant 7-15 yr quasi-decadal oscillations in California. More interestingly, the existence of the 7-15 yr quasi-decadal oscillation in a global climate model simulation is also sensitive to initial conditions: a 12-13 yr quasi-decadal oscillation occurs in one ensemble run of CGCM 2.3.2a of MRI but does not exist in the other four ensemble runs.
The joy of interactive modeling
NASA Astrophysics Data System (ADS)
Donchyts, Gennadii; Baart, Fedor; van Dam, Arthur; Jagers, Bert
2013-04-01
The conventional way of working with hydrodynamical models usually consists of the following steps: 1) define a schematization (e.g., in a graphical user interface, or by editing input files); 2) run the model from start to end; 3) visualize results; 4) repeat any of the previous steps. This cycle commonly takes from hours to several days. What if we could make this happen instantly? As most of the research done using numerical models is in fact qualitative and exploratory (Oreskes et al., 1994), why not use these models as such? How can we adapt models so that we can edit model input, run, and visualize results at the same time? More and more, interactive models become available as online apps, mainly for demonstration and educational purposes. These models often simplify the physics behind flows and run on simplified model geometries, particularly when compared with state-of-the-art scientific simulation packages. Here we show how the aforementioned conventional standalone models ("static, run once") can be transformed into interactive models. The basic concepts behind turning existing (conventional) model engines into interactive engines are the following. The engine does not run the model from start to end, but is always available in memory, and can be fed new boundary conditions or state changes at any time. The model can be run continuously, per step, or up to a specified time. The Hollywood principle dictates that the model engine is instructed from 'outside', instead of taking all necessary actions on its own initiative. The underlying techniques that facilitate these concepts are introspection of the computation engine, which exposes its state variables and control functions, e.g. for time stepping, via a standardized interface such as BMI (Peckham et al., 2012). In this work we have used the shallow water flow model engine D-Flow Flexible Mesh. The model was converted from an executable to a library, and coupled to the graphical modelling environment Delta Shell. Both the engine and the environment are open source tools under active development at Deltares. The combination provides direct interactive control over the time loop and model state, and offers live 3D visualization of the running model using the VTK library.
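A minimal sketch of the introspection-and-control pattern described above, loosely modeled on BMI-style verbs (initialize/update/get_value/set_value); the trivial engine below is a stand-in for a real model library such as D-Flow Flexible Mesh, and its state variable is invented for illustration.

```python
# Toy engine exposing a BMI-like control interface: the model never runs
# start-to-end on its own; an outer environment steps it and edits state.
class ToyFlowEngine:
    def initialize(self, config=None):
        self.t, self.dt = 0.0, 1.0
        self.state = {"water_level": 0.0}

    def update(self):                      # advance exactly one time step
        self.state["water_level"] += 0.1 * self.dt
        self.t += self.dt

    def get_value(self, name):             # introspection of state variables
        return self.state[name]

    def set_value(self, name, value):      # interactive edit while running
        self.state[name] = value

engine = ToyFlowEngine()
engine.initialize()
for _ in range(3):
    engine.update()                        # a GUI could pause/resume here
engine.set_value("water_level", 2.0)       # user intervenes mid-run
engine.update()
print(engine.t, engine.get_value("water_level"))
```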
Performance of distributed multiscale simulations
Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.
2014-01-01
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258
NASA Technical Reports Server (NTRS)
Veres, Joseph
2001-01-01
This report outlines the detailed simulation of an aircraft turbofan engine. The objectives were to develop a detailed flow model of a full turbofan engine that runs on parallel workstation clusters overnight, and to develop an integrated system of codes for combustor design and analysis to enable significant reductions in design time and cost. The model will initially simulate the 3-D flow in the primary flow path, including the flow and chemistry in the combustor, and ultimately result in a multidisciplinary model of the engine. The overnight 3-D simulation capability of the primary flow path in a complete engine will enable significant reduction in the design and development time of gas turbine engines. In addition, the NPSS (Numerical Propulsion System Simulation) multidisciplinary integration and analysis are discussed.
Numerical Simulation of Nonperiodic Rail Operation Diagram Characteristics
Qian, Yongsheng; Wang, Bingbing; Zeng, Junwei; Wang, Xin
2014-01-01
This paper uses a cellular automata (CA) model to simulate train operation under the four-aspect color light signaling system and to obtain the nonperiodic diagram of mixed passenger and freight tracks. The model simulates wagon behavior, preventing trains from colliding when stopping and restarting, captures real-time changes in train speed and displacement, and tracks the current states of trains at departure and arrival. Finally, the model produces a train diagram that simulates train operation for different freight-train ratios, and analyzes several characteristic parameters of the train running process, such as time, speed, through capacity, interval departing time, and departure numbers. PMID:25435863
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
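As a rough illustration of why BCPNN is memory- and update-intensive, the rule maintains cascaded, exponentially filtered traces per unit and per synapse. The heavily simplified sketch below (spike to Z to P, omitting the intermediate E stage of the full rule) uses invented time constants and toy spike trains; it is not the SpiNNaker implementation.

```python
import math

# Illustrative time constants and step size (seconds); not fitted values.
TAU_Z, TAU_P, DT = 0.01, 1.0, 0.001

def update_trace(trace, drive, tau):
    # first-order low-pass filter: d(trace)/dt = (drive - trace) / tau
    return trace + DT * (drive - trace) / tau

z_i = z_j = p_i = p_j = p_ij = 0.01
for step in range(1000):
    spike_i = 1.0 if step % 100 == 0 else 0.0   # toy periodic spiking
    spike_j = 1.0 if step % 100 == 5 else 0.0
    z_i = update_trace(z_i, spike_i / DT, TAU_Z)  # fast presynaptic trace
    z_j = update_trace(z_j, spike_j / DT, TAU_Z)  # fast postsynaptic trace
    p_i = update_trace(p_i, z_i, TAU_P)           # slow probability traces
    p_j = update_trace(p_j, z_j, TAU_P)
    p_ij = update_trace(p_ij, z_i * z_j, TAU_P)   # per-synapse joint trace

# Bayesian weight estimate from the probability traces.
w_ij = math.log(max(p_ij, 1e-8) / (p_i * p_j))
print(w_ij)
```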
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gragg, Evan James; Middleton, Richard Stephen
This report describes the benefits of the BECCUS screening tools. The goals of this project are to utilize the NATCARB database for site screening, enhance the NATCARB database, and run CO2-EOR simulations and economic models using updated reservoir data sets (SCO2T-EOR).
Effect of reduced gravity on the preferred walk-run transition speed
NASA Technical Reports Server (NTRS)
Kram, R.; Domingo, A.; Ferris, D. P.
1997-01-01
We investigated the effect of reduced gravity on the human walk-run gait transition speed and interpreted the results using an inverted-pendulum mechanical model. We simulated reduced gravity using an apparatus that applied a nearly constant upward force at the center of mass, and the subjects walked and ran on a motorized treadmill. In the inverted pendulum model for walking, gravity provides the centripetal force needed to keep the pendulum in contact with the ground. The ratio of the centripetal and gravitational forces (mv^2/L)/(mg) reduces to the dimensionless Froude number (v^2/gL). Applying this model to a walking human, m is body mass, v is forward velocity, L is leg length and g is gravity. In normal gravity, humans and other bipeds with different leg lengths all choose to switch from a walk to a run at different absolute speeds but at approximately the same Froude number (0.5). We found that, at lower levels of gravity, the walk-run transition occurred at progressively slower absolute speeds but at approximately the same Froude number. This supports the hypothesis that the walk-run transition is triggered by the dynamics of an inverted-pendulum system.
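A short worked example of the Froude-number prediction: holding the transition at Fr ≈ 0.5, the predicted transition speed scales as the square root of gravity. The leg length below is an assumed representative value, not a measurement from the study.

```python
import math

FR_TRANSITION = 0.5      # Froude number at the walk-run transition
LEG_LENGTH_M = 0.9       # assumed representative leg length

def transition_speed(g):
    # invert Fr = v^2 / (g * L) for the transition speed v*
    return math.sqrt(FR_TRANSITION * g * LEG_LENGTH_M)

for label, g in [("Earth", 9.81), ("Mars", 3.71), ("Moon", 1.62)]:
    print(f"{label}: g = {g:4.2f} m/s^2 -> v* = {transition_speed(g):.2f} m/s")
```

At Earth gravity this gives about 2.1 m/s, consistent with the commonly cited walk-run transition speed; at lower gravity the predicted transition speed falls, as the study observed.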
MM5 simulations for air quality modeling: An application to a coastal area with complex terrain
NASA Astrophysics Data System (ADS)
Lee, Sang-Mi; Princevac, Marko; Mitsutomi, Satoru; Cassmassi, Joe
A series of modifications was implemented in the MM5 simulation in order to account for wind along the Santa Clarita valley, a north-south running valley located north of Los Angeles. Because of the high mountain ranges to the north and east of the Los Angeles Air Basin, the sea breeze entering Los Angeles exits in two directions: one branch moves toward the eastern part of the basin and the other to the north toward the Santa Clarita valley. However, the northward flow had not been examined thoroughly, nor simulated successfully, in previous studies. In the present study, we proposed four modifications to trigger the flow separation: (1) increasing drag over the ocean, (2) increasing soil moisture content, (3) selective observational nudging, and (4) one-way nesting for the innermost domain. The Control run overpredicted near-surface wind speed over the ocean and sensible heat flux in an urbanized area, which justifies the first and second modifications. The Modified run provided an improvement in near-surface temperature, sensible heat flux, and wind fields, including the southeasterly flow along the Santa Clarita valley. The improved MM5 wind field triggered transport to the Santa Clarita valley, generating a plume elongated from the urban center to the north, which did not exist in the MM5 Control run. In all, the modified MM5 fields yielded better agreement in both CO and O3 simulations, especially in the Santa Clarita area.
Demonstration of the Water Erosion Prediction Project (WEPP) internet interface and services
USDA-ARS?s Scientific Manuscript database
The Water Erosion Prediction Project (WEPP) model is a process-based FORTRAN computer simulation program for prediction of runoff and soil erosion by water at hillslope profile, field, and small watershed scales. To effectively run the WEPP model and interpret results, additional software has been developed.
Autoshaping and Automaintenance: A Neural-Network Approach
ERIC Educational Resources Information Center
Burgos, Jose E.
2007-01-01
This article presents an interpretation of autoshaping, and positive and negative automaintenance, based on a neural-network model. The model makes no distinction between operant and respondent learning mechanisms, and takes into account knowledge of hippocampal and dopaminergic systems. Four simulations were run, each one using an "A-B-A" design…
Non-linear structure formation in the `Running FLRW' cosmological model
NASA Astrophysics Data System (ADS)
Bibiano, Antonio; Croton, Darren J.
2016-07-01
We present a suite of cosmological N-body simulations describing the `Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Lambda cold dark matter (ΛCDM) with a time-evolving vacuum density, Λ(z), and time-evolving gravitational Newton's coupling, G(z). In this paper, we review the model and introduce the necessary analytical treatment needed to adapt a reference N-body code. Our resulting simulations represent the first realization of the full growth history of structure in the R-FLRW cosmology into the non-linear regime, and our normalization choice makes them fully consistent with the latest cosmic microwave background data. The post-processing data products also allow, for the first time, an analysis of the properties of the halo and sub-halo populations. We explore the degeneracies of many statistical observables and discuss the steps needed to break them. Furthermore, we provide a quantitative description of the deviations of R-FLRW from ΛCDM, which could be readily exploited by future cosmological observations to test and further constrain the model.
Advanced overlay: sampling and modeling for optimized run-to-run control
NASA Astrophysics Data System (ADS)
Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.
2016-03-01
In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner, or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process- or metrology-induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to estimate stability, and ultimately high volume manufacturing tests to monitor OPO using densely measured OVL data.
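The bias-variance trade-off the authors describe can be illustrated generically (this is not their proprietary modeling scheme): fitting a synthetic one-dimensional "overlay signature" with polynomials of increasing order shows low orders underfitting (bias) while high orders chase metrology noise (variance). All values below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-1, 1, 15)                 # sample locations across a wafer
true_signature = 0.5 * x**3 - 0.2 * x      # assumed underlying signature
trials = 200                               # repeated noisy "measurements"

for order in (1, 3, 7):
    fits = []
    for _ in range(trials):
        y = true_signature + rng.normal(0, 0.05, size=x.size)  # noisy metrology
        fits.append(np.polyval(np.polyfit(x, y, order), x))
    fits = np.array(fits)
    bias2 = np.mean((fits.mean(axis=0) - true_signature) ** 2)  # underfit error
    var = fits.var(axis=0).mean()                               # noise chasing
    print(f"order {order}: bias^2 = {bias2:.2e}, variance = {var:.2e}")
```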
Analysis of model output and science data in the Virtual Model Repository (VMR).
NASA Astrophysics Data System (ADS)
De Zeeuw, D.; Ridley, A. J.
2014-12-01
Big scientific data include not only large repositories of data from scientific platforms like satellites and ground observation, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through associated metadata, and larger collections of runs can now also be studied, with statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to look at overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. The methodology for this analysis as well as case studies will be presented.
A new framework for the analysis of continental-scale convection-resolving climate simulations
NASA Astrophysics Data System (ADS)
Leutwyler, D.; Charpilloz, C.; Arteaga, A.; Ban, N.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Schulthess, T. C.; Christoph, S.
2017-12-01
High-resolution climate simulations at horizontal resolutions of O(1-4 km) allow explicit treatment of deep convection (thunderstorms and rain showers). Explicitly treating convection by the governing equations reduces uncertainties associated with parametrization schemes and allows a model formulation closer to physical first principles [1,2]. But kilometer-scale climate simulations with long integration periods and large computational domains are expensive, and data storage becomes unmanageably voluminous. Hence new approaches to performing analysis are required. In the crCLIM project we propose a new climate modeling framework that allows scientists to conduct analysis at high spatial and temporal resolution. We tackle the computational cost by using the largest available supercomputers, such as hybrid CPU-GPU architectures; for this, the COSMO model has been adapted to run on such architectures [2]. We then alleviate the I/O bottleneck by employing a simulation data virtualizer (SDaVi) that allows storage (space) to be traded off against computational effort (time). This is achieved by caching the simulation outputs and efficiently launching re-simulations in case of cache misses, all transparently to the analysis applications [3]. For the re-runs this approach requires a bit-reproducible version of COSMO, that is, a model that produces identical results on different architectures, to ensure coherent recomputation of the requested data [4]. In this contribution we present a version of SDaVi, a first performance model, and a strategy to obtain bit-reproducibility across hardware architectures. [1] N. Ban, J. Schmidli, C. Schär. Evaluation of the convection-resolving regional climate modeling approach in decade-long simulations. J. Geophys. Res. Atmos., 7889-7907, 2014. [2] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, C. Schär. Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19. Geosci. Model Dev., 3393-3412, 2016. [3] S. Di Girolamo, P. Schmid, T. Schulthess, T. Hoefler. Virtualized Big Data: Reproducing Simulation Output on Demand. Submitted to the 23rd ACM Symposium on PPoPP '18, Vienna, Austria. [4] A. Arteaga, O. Fuhrer, T. Hoefler. Designing Bit-Reproducible Portable High-Performance Applications. IEEE 28th IPDPS, 2014.
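The cache-or-recompute trade at the heart of a simulation data virtualizer can be sketched as memoization over model segments. Everything below (names, segment granularity) is invented for illustration; the scheme relies on re-runs being bit-reproducible so recomputed output matches what was cached.

```python
# Toy cache over simulation output, keyed by time index.
cache = {}

def rerun_segment(t0, t1):
    # stand-in for relaunching a bit-reproducible model over [t0, t1)
    print(f"re-simulating [{t0}, {t1})")
    return {t: f"field@{t}" for t in range(t0, t1)}

def get_output(t, segment=10):
    if t not in cache:                     # cache miss: recompute the segment
        t0 = (t // segment) * segment
        cache.update(rerun_segment(t0, t0 + segment))
    return cache[t]                        # cache hit: no recomputation

print(get_output(7))    # triggers a re-simulation of [0, 10)
print(get_output(3))    # served from cache
```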
Operational on-line coupled chemical weather forecasts for Europe with WRF/Chem
NASA Astrophysics Data System (ADS)
Hirtl, Marcus; Mantovani, Simone; Krüger, Bernd C.; Flandorfer, Claudia; Langer, Matthias
2014-05-01
Air quality is a key element for the well-being and quality of life of European citizens. Air pollution measurements and modeling tools are essential for the assessment of air quality according to EU legislation. As the national weather service of Austria, ZAMG supports the federal states and the public on questions connected to the protection of the environment, through advisory and counseling services as well as expert opinions. ZAMG conducts daily air-quality forecasts using the online coupled model WRF/Chem, in which meteorology is simulated simultaneously with the emissions, turbulent mixing, transport, transformation, and fate of trace gases and aerosols. The emphasis of the application is on predicting pollutants over Austria. Two domains are used for the simulations: the mother domain covers Europe with a resolution of 12 km, and the inner domain covers the alpine region with a horizontal resolution of 4 km; 45 model levels are used in the vertical direction. The model runs twice per day for a period of 72 hours and is initialized with ECMWF forecasts. Online coupled models allow two-way interactions to be considered between different atmospheric processes, including chemistry (both gases and aerosols), clouds, radiation, the boundary layer, emissions, meteorology, and climate. In the operational set-up, direct, indirect, and semi-direct effects between meteorology and air chemistry are enabled. The model runs on the HPCF (High Performance Computing Facility) of ZAMG; in the current set-up, 1248 CPUs are used. Because the simulations require a large amount of computing resources, a method to save I/O time was implemented: every MPI task writes all its output into the shared-memory filesystem of the compute nodes, and once the WRF/Chem integration is finished, all split NetCDF files are merged and saved on the global file system. The merge routine is based on parallel-NetCDF. With this method the model runs about 30% faster on the SGI-ICEX. Different additional external data sources can be used to improve the forecasts: satellite measurements of the aerosol optical thickness (AOT) and ground-based PM10 measurements are combined into highly resolved initial fields using regression and assimilation techniques. The local emission inventories provided by the different Austrian regional governments were harmonized and are used for the model simulations. A model evaluation for a selected episode in February 2010 is presented with respect to PM10 forecasts; during that month, exceedances of PM10 thresholds occurred at many measurement stations of the Austrian network. Different model runs (model only / only ground stations assimilated / satellite and ground stations assimilated) are compared to the respective measurements.
Compressed quantum simulation of the Ising model.
Kraus, B
2011-12-16
Jozsa et al. [Proc. R. Soc. A 466, 809 (2009)] have shown that a matchgate circuit running on n qubits can be compressed to a universal quantum computation on log(n)+3 qubits. Here, we show how this compression can be employed to simulate the Ising interaction of a 1D chain consisting of n qubits using a universal quantum computer running on log(n) qubits. We demonstrate how the adiabatic evolution can be realized on this exponentially smaller system and how the magnetization, which displays a quantum phase transition, can be measured. This shows that the quantum phase transition of very large systems can be observed experimentally with current technology. © 2011 American Physical Society
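A quick worked example of the compression scaling, using the log(n)+3 figure quoted above with base-2 logarithms and power-of-two chain lengths so the logarithm is exact:

```python
import math

# A matchgate circuit on n qubits compresses to log2(n) + 3 qubits.
for n in (8, 1024, 2**20):
    print(f"n = {n:>7} spins -> {int(math.log2(n)) + 3} compressed qubits")
```

Even a chain of over a million spins (n = 2^20) maps to a universal computation on only 23 qubits, which is why the authors argue such phase transitions are observable with current technology.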
Parameter Uncertainty on AGCM-simulated Tropical Cyclones
NASA Astrophysics Data System (ADS)
He, F.
2015-12-01
This work studies parameter uncertainty in tropical cyclone (TC) simulations in Atmospheric General Circulation Models (AGCMs) using the Reed-Jablonowski TC test case, implemented in the Community Atmosphere Model (CAM). It examines the impact of 24 parameters across the physical parameterization schemes that represent the convection, turbulence, precipitation, and cloud processes in AGCMs. The one-at-a-time (OAT) sensitivity analysis method first quantifies their relative importance for TC simulations and identifies the key parameters for six different TC characteristics: intensity, precipitation, longwave cloud radiative forcing (LWCF), shortwave cloud radiative forcing (SWCF), cloud liquid water path (LWP), and ice water path (IWP). Then, 8 physical parameters are chosen and perturbed using the Latin hypercube sampling (LHS) method. The comparison between the OAT ensemble run and the LHS ensemble run shows that the simulated TC intensity is mainly affected by the parcel fractional mass entrainment rate in the Zhang-McFarlane (ZM) deep convection scheme. The nonlinear interactive effect among different physical parameters is negligible for simulated TC intensity. In contrast, this nonlinear interactive effect plays a significant role in the other simulated tropical cyclone characteristics (precipitation, LWCF, SWCF, LWP, and IWP) and greatly enlarges their simulated uncertainties. The statistical emulator Extended Multivariate Adaptive Regression Splines (EMARS) is applied to characterize the response functions for the nonlinear effect. Last, we find that the intensity uncertainty caused by physical parameters is comparable in magnitude to the uncertainty caused by model structure (e.g., grid) and initial conditions (e.g., sea surface temperature, atmospheric moisture). These findings suggest the importance of using the perturbed physics ensemble (PPE) method to revisit tropical cyclone prediction under climate change scenarios.
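For readers unfamiliar with the LHS step, a generic sketch of drawing a space-filling sample over 8 parameters is shown below. The bounds are placeholders, not the CAM parameter ranges, and the sketch uses SciPy's qmc module rather than whatever sampler the authors used.

```python
from scipy.stats import qmc

# Draw a Latin-hypercube sample over 8 parameters in the unit cube,
# then scale each column to its (placeholder) physical range.
sampler = qmc.LatinHypercube(d=8, seed=42)
unit_sample = sampler.random(n=100)        # 100 ensemble members in [0, 1)^8

lower = [0.1] * 8                          # assumed lower bounds
upper = [1.0] * 8                          # assumed upper bounds
ensemble = qmc.scale(unit_sample, lower, upper)

# Each row is one perturbed-parameter model configuration to run.
print(ensemble.shape, ensemble[0])
```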
NASA Astrophysics Data System (ADS)
Rusgiyarto, Ferry; Sjafruddin, Ade; Frazila, Russ Bona; Suprayogi
2017-06-01
Increasing container traffic and land-acquisition problems for terminal expansion lead to the use of an external yard in a port buffer area. This condition influences terminal performance because the road connecting the terminal and the external yard is also used by non-container traffic. A location choice problem was considered to address this condition, but previous research has not accounted for the stochastic nature of container arrival rates and service times. A bi-level programming framework was used to find the optimum location configuration. In the lower level, it is difficult to construct an equation correlating terminal operations and road traffic because they equilibrate on different time cycles: containers move from the quay to the terminal gate on a daily time scale, whereas they move from the terminal gate to the external yard along the road on a time scale of minutes. An equation formulated in hourly units cannot capture the container movement characteristics in the terminal, while one formulated in daily units cannot capture the traffic movement characteristics on the road. This problem can be addressed using a simulation model: a discrete event simulation model was used to simulate import container flows in the container terminal and the external yard. The optimum location configuration in the upper level is a combinatorial problem, which was solved by a full enumeration approach. The objective function of the external yard location model was to minimize user transport cost (or time) and to maximize operator benefit. A numerical experiment was run for a scenario with two container handling ways, three external yards, and a thirty-day simulation period, using container characteristics data from the Jakarta International Container Terminal (JICT) as reference. Based on five runs with 5, 10, 15, 20, and 30 repetitions, operating one of the three available external yards (external yard 3) was the optimum result. The model thus confirmed the hypothesis that there is an optimum configuration of the external yard. Nevertheless, the model needs further elaboration of the objective function and the optimization constraints, and it requires detailed validation, in terms of the service time values, distribution patterns, and arrival rates of each server modeled, in the next stage of the research. The model gave unique and relatively consistent values in each run, indicating that the method has a chance of solving the research problem.
NASA Astrophysics Data System (ADS)
Ivanovic, Ruza F.; Gregoire, Lauren J.; Kageyama, Masa; Roche, Didier M.; Valdes, Paul J.; Burke, Andrea; Drummond, Rosemarie; Peltier, W. Richard; Tarasov, Lev
2016-07-01
The last deglaciation, which marked the transition between the last glacial and present interglacial periods, was punctuated by a series of rapid (centennial and decadal) climate changes. Numerical climate models are useful for investigating mechanisms that underpin the climate change events, especially now that some of the complex models can be run for multiple millennia. We have set up a Paleoclimate Modelling Intercomparison Project (PMIP) working group to coordinate efforts to run transient simulations of the last deglaciation, and to facilitate the dissemination of expertise between modellers and those engaged with reconstructing the climate of the last 21 000 years. Here, we present the design of a coordinated Core experiment over the period 21-9 thousand years before present (ka) with time-varying orbital forcing, greenhouse gases, ice sheets and other geographical changes. A choice of two ice sheet reconstructions is given, and we make recommendations for prescribing ice meltwater (or not) in the Core experiment. Additional focussed simulations will also be coordinated on an ad hoc basis by the working group, for example to investigate more thoroughly the effect of ice meltwater on climate system evolution, and to examine the uncertainty in other forcings. Some of these focussed simulations will target shorter durations around specific events in order to understand them in more detail and allow for the more computationally expensive models to take part.
Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing
NASA Technical Reports Server (NTRS)
Fricker, David M.
1997-01-01
The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. Also, the tests explored the effects of scaling the simulation with the number of processors in addition to distributing a constant size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with ethernet and ATM networking plus a shared memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared memory machine is fastest and ATM networking provides acceptable performance. The limitations of ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.
Rover Attitude and Pointing System Simulation Testbed
NASA Technical Reports Server (NTRS)
Vanelli, Charles A.; Grinblat, Jonathan F.; Sirlin, Samuel W.; Pfister, Sam
2009-01-01
The MER (Mars Exploration Rover) Attitude and Pointing System Simulation Testbed Environment (RAPSSTER) provides a simulation platform used for the development and test of GNC (guidance, navigation, and control) flight algorithm designs for the Mars rovers. It was specifically tailored to the MERs, but has since been used in the development of rover algorithms for the Mars Science Laboratory (MSL) as well. The software provides an integrated simulation and software testbed environment for the development of Mars rover attitude and pointing flight software. It is able to run the MER GNC flight software directly (as opposed to running an algorithmic model of the MER GNC flight code), which improves simulation fidelity and confidence in the results. Furthermore, the simulation environment allows the user to single-step through its execution, pausing and restarting at will. The system also provides for the introduction of simulated faults specific to Mars rover environments that cannot be replicated in other testbed platforms, to stress-test the GNC flight algorithms under examination. The software provides facilities to do these stress tests in ways that cannot be done in the real-time flight system testbeds, such as time-jumping (both forwards and backwards) and the introduction of simulated actuator faults that would be difficult, expensive, and/or destructive to implement in the real-time testbeds. Actual flight-quality codes can be incorporated back into the development-test suite of GNC developers, closing the loop between the GNC developers and the flight software developers. The software provides fully automated scripting, allowing multiple tests to be run with varying parameters without human supervision.
Bridging Scientific Model Outputs with Emergency Response Needs in Catastrophic Earthquake Responses
ERIC Educational Resources Information Center
Johannes, Tay W.
2010-01-01
In emergency management, scientific models are widely used for running hazard simulations and estimating losses, often in support of planning and mitigation efforts. This work expands the utility of the scientific model into the response phase of emergency management. The focus is on the common operating picture as it gives context to emergency…
ERIC Educational Resources Information Center
Leth-Steensen, Craig; Gallitto, Elena
2016-01-01
A large number of approaches have been proposed for estimating and testing the significance of indirect effects in mediation models. In this study, four sets of Monte Carlo simulations involving full latent variable structural equation models were run in order to contrast the effectiveness of the currently popular bias-corrected bootstrapping…
A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software
NASA Astrophysics Data System (ADS)
Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.
2017-10-01
Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.
Insights into the paleoclimate of the PETM from an ensemble of EMIC simulations
NASA Astrophysics Data System (ADS)
Keery, John; Holden, Philip; Edwards, Neil; Monteiro, Fanny; Ridgwell, Andy
2016-04-01
The Eocene epoch, and in particular the Paleocene-Eocene Thermal Maximum (PETM) of 55.8 Ma, exhibits several features of particular interest for probing our understanding of the Earth system and carbon cycle. CO2 levels have not yet been definitively established, but are known to have varied considerably, peaking at up to several times modern values. Temperatures were several degrees higher than in the modern era, and there were periods of relatively rapid warming, with substantial variability in carbon cycle processes. The Eocene is therefore highly relevant to our understanding of the climate of the 21st century. Earth system models of intermediate complexity (EMICs), with less detailed simulation of the dynamics of the atmosphere and oceans than general circulation models (GCMs), are sufficiently fast to allow climate modelling over long periods of geological time in comparatively short periods of computer run-time. This speed advantage of EMICs over GCMs permits an ensemble of model simulations to be run, allowing statistical analysis of results to be carried out and the uncertainties in model predictions to be estimated. Here we apply the EMICs PLASIM-GENIE and GENIE-1 with an Eocene paleogeography which incorporates the major continental configurations and ocean connections, including a shallow strait linking the Arctic to the Tethys, but with neither the Tasman Gateway nor the Drake Passage yet open. Our two-model strategy benefits from the detailed simulation of ocean biogeochemistry in GENIE-1, and the 3D spectral atmospheric dynamics in PLASIM-GENIE, which also provides boundary conditions for the GENIE-1 simulations. Using a 50-member ensemble of 1000-year quasi-equilibrium simulations with PLASIM-GENIE, we investigate the relative contributions of orbital and CO2 variability to climate and equator-pole temperature gradients. Results from PLASIM-GENIE are used to configure a harmonised ensemble of GENIE-1 simulations, which will be compared with newly obtained geochemical data on ocean oxygenation through the Eocene from the UK NERC RESPIRE project.
Hittle, Elizabeth
2011-01-01
In small watersheds, runoff entering local waterways from large storms can cause rapid and profound changes in the streambed that can contribute to flooding. Wymans Run, a small stream in Cochranton Borough, Crawford County, experienced a large rain event in June 2008 that caused sediment to be deposited at a bridge. A hydrodynamic model, Flow and Sediment Transport and Morphological Evolution of Channels (FaSTMECH), which is incorporated into the U.S. Geological Survey Multi-Dimensional Surface-Water Modeling System (MD_SWMS), was constructed to predict boundary shear stress and velocity in Wymans Run using data from the June 2008 event. Shear stress and velocity values can be used to indicate areas of a stream where sediment transported downstream can be deposited on the streambed. Because of the short duration of the June 2008 rain event, streamflow was not directly measured but was estimated using the U.S. Army Corps of Engineers one-dimensional Hydrologic Engineering Center's River Analysis System (HEC-RAS). Scenarios examining possible engineering solutions to decrease the amount of sediment at the bridge, including bridge expansion, channel expansion, and dredging upstream from the bridge, were simulated using the FaSTMECH model. Each scenario was evaluated for potential effects on water-surface elevation, boundary shear stress, and velocity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markidis, S.; Rizwan, U.
The use of a virtual nuclear control room can be an effective and powerful tool for training personnel working in nuclear power plants. Operators can experience and simulate the functioning of the plant, even in critical situations, without being in a real power plant or running any risk. 3D models can be exported to virtual reality formats and then displayed in the virtual reality environment, providing an immersive 3D experience. However, two major limitations of this approach are that the 3D models exhibit static textures, and that they are not fully interactive and therefore cannot be used effectively in training personnel. In this paper we first describe a possible solution for embedding the output of a computer application in a 3D virtual scene, coupling real-world applications and VR systems. The VR system reported here grabs the output of an application running on an X server, creates a texture with the output, and then displays it on a screen or a wall in the virtual reality environment. We then propose a simple model for providing interaction between the user in the VR system and the running simulator. This approach is based on the use of an internet-based application that can be commanded by a laptop or tablet PC added to the virtual environment. (authors)
Development of a simulation model for dynamic derailment analysis of high-speed trains
NASA Astrophysics Data System (ADS)
Ling, Liang; Xiao, Xin-Biao; Jin, Xue-Song
2014-12-01
With the rapid development of high-speed railways around the world, the running safety of high-speed trains has become a major concern of current railway research. The basic safety requirement is to prevent derailment. The root causes of the dynamic derailment of high-speed trains operating in severe environments are not easy to identify using field tests or laboratory experiments. Numerical simulation using an advanced train-track interaction model is a highly efficient and low-cost approach to investigating the dynamic derailment behavior and mechanism of high-speed trains. This paper presents a three-dimensional dynamic model of a high-speed train coupled with a ballast track for dynamic derailment analysis. The model considers a train composed of multiple vehicles and the nonlinear inter-vehicle connections. The ballast track model consists of rails, fastenings, sleepers, ballast, and roadbed, which are modeled by Euler beams, nonlinear spring-damper elements, equivalent ballast bodies, and continuous viscoelastic elements; the modal superposition method is used to reduce the order of the partial differential equations of the Euler beams. The derailment safety assessment criteria commonly used around the world are embedded in the simulation model. The train-track model was then used to investigate the dynamic derailment responses of a high-speed train passing over a buckled track, and the derailment mechanism and train running posture during the dynamic derailment process were analyzed in detail. The effects of train and track modelling on dynamic derailment analysis were also discussed. The numerical results indicate that the train and track modelling options have a significant effect on the dynamic derailment analysis, and that the inter-vehicle impacts and the track flexibility and nonlinearity should be considered in dynamic derailment simulations.
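The order-reduction idea behind the rail model can be sketched in a few lines: expand the Euler-Bernoulli beam deflection in its mode shapes and keep a finite number of modes. The sketch below uses a simply supported span with generic rail-like properties under a static point load; the span length, load, and truncation order are illustrative assumptions, not parameters from the paper.

    # Sketch: truncated modal superposition for a simply supported
    # Euler-Bernoulli beam, the order-reduction idea used for the rails.
    import numpy as np

    E, I = 210e9, 3.0e-5      # Young's modulus (Pa), second moment of area (m^4)
    rho_A = 60.0              # mass per unit length (kg/m), rail-like value
    L = 6.0                   # illustrative simply supported span, m
    n_modes = 20              # truncation order of the modal expansion

    x = np.linspace(0.0, L, 241)
    F, x_load = 1.0e4, L / 2.0   # 10 kN point load at midspan

    # phi_n(x) = sin(n pi x / L); omega_n^2 = (n pi / L)^4 * E I / (rho A)
    w = np.zeros_like(x)
    for n in range(1, n_modes + 1):
        k = n * np.pi / L
        omega_sq = k**4 * E * I / rho_A
        modal_mass = rho_A * L / 2.0
        q_n = F * np.sin(k * x_load) / (modal_mass * omega_sq)  # static modal amplitude
        w += q_n * np.sin(k * x)

    # converges toward the closed form F*L^3/(48*E*I) ~ 7.1 mm
    print(f"midspan deflection: {w[len(x) // 2] * 1e3:.2f} mm")

In the dynamic case each modal amplitude obeys an ordinary differential equation, so a handful of modes replaces the beam's partial differential equation.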
NASA Technical Reports Server (NTRS)
Oman, Luke D.; Strahan, Susan E.
2017-01-01
Simulations using reanalysis meteorological fields have long been used to understand the causes of atmospheric composition change in the recent past. Using the new MERRA-2 reanalysis, we are conducting chemistry simulations to create products covering 1980-2016 for the atmospheric composition community. These simulations use the Global Modeling Initiative (GMI) chemical mechanism in two different models: the GMI Chemical Transport Model (CTM) and the GEOS-5 model in Replay mode. In Replay mode, the GEOS-5 general circulation model is integrated while being incrementally adjusted each time step toward the MERRA-2 reanalysis. The GMI CTM is a 1 deg x 1.25 deg simulation, and the MERRA-2 GMI Replay simulation uses the native MERRA-2 grid of approximately 1/2 deg horizontal resolution on the cubed sphere. A specialized set of transport diagnostics is included in both runs to better understand trace gas transport and its variability in the recent past.
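The "incremental adjustment" of Replay can be illustrated as relaxation of a free-running model state toward a reference value each step. The scalar toy dynamics and the 6-hour relaxation time below are assumptions for illustration, not the GEOS-5 implementation.

    # Sketch: nudging a free-running model state toward a reference
    # (reanalysis) value each time step, the idea behind Replay mode.
    dt = 1800.0          # model time step, s
    tau = 6 * 3600.0     # relaxation timescale, s (assumed)

    def tendency(x):
        return -0.1 * x / 3600.0   # toy stand-in for the free model dynamics

    x = 1.0                        # model state
    x_ref = 0.2                    # reanalysis value (held constant in this toy)
    for _ in range(48):            # one day of 30-minute steps
        x += dt * tendency(x)             # free model integration
        x += dt * (x_ref - x) / tau       # incremental adjustment toward reanalysis
    print(f"nudged state after one day: {x:.3f}")

Because the adjustment is applied inside the running general circulation model, the chemistry sees dynamically consistent fields rather than externally prescribed winds as in the offline CTM.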
The Impact of TRMM on Mesoscale Model Simulation of Super Typhoon Paka
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Jia, Y.; Halverson, J.; Hou, A.; Olson, W.; Rodgers, E.; Simpson, J.
1999-01-01
Tropical cyclone Paka formed during the first week of December 1997 and underwent three periods of rapid intensification over the following two weeks. During one of these periods, which initiated early on December 10, Paka's Dvorak-estimated wind speed increased from 23 to 60 m/s over a 48-hr period. On December 18, during the last rapid deepening episode, Paka became a supertyphoon with a maximum wind speed of about 80 m/s. In this study, the Penn State/NCAR Mesoscale Model (MM5) with improved physics (i.e., cloud microphysics, radiation, land-soil-vegetation-surface processes, and the TOGA COARE flux scheme) and a multiple-level nesting technique (135, 45, and 15 km horizontal resolution) will be used to simulate supertyphoon Paka. We performed two runs initialized with Goddard Earth Observing System (GEOS) data sets. The first GEOS data set does not incorporate either TRMM (Tropical Rainfall Measuring Mission) or SSM/I (Special Sensor Microwave/Imager) observed rainfall fields into the GEOS assimilation system, while the second one does. Preliminary results show that the MM5 simulated surface pressure deepened by more than 25 mb (45 km resolution domain) in the run initialized with the GEOS data set incorporating TRMM and SSM/I derived rainfall, compared to the one initialized without. However, the track and precipitation patterns are quite similar between the runs. In our presentation, we will show the impact of TRMM rainfall upon the MM5 simulation of Paka at various horizontal resolutions. We will also examine the physical processes associated with the initial explosive development by comparing MM5 simulated rainfall and latent heat release. In addition, budget (vorticity, PV, momentum, and heat) calculations and sensitivity tests will be performed to examine the upper-tropospheric and SST mechanisms responsible for the explosive development of Paka.
Nonlinear relaxation algorithms for circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, R.A.
Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (ICs). However, the standard techniques used in programs such as SPICE result in very long computer run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.
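The relaxation idea can be sketched on a toy problem: partition the circuit into subcircuits and solve each one over a time window while holding the other subcircuit's waveform fixed, then sweep until the waveforms stop changing. The two-node linear system below is a stand-in for the circuit partitioning concept; it is not the ITA or WRN algorithm from the thesis itself.

    # Sketch: Gauss-Seidel waveform relaxation on two coupled "subcircuits"
    #   x' = -x + 0.5*y,   y' = -y + 0.5*x,   x(0) = y(0) = 1.
    import numpy as np

    T, dt = 5.0, 0.01
    t = np.arange(0.0, T + dt, dt)
    x = np.ones_like(t)   # initial guess waveforms: constant at initial condition
    y = np.ones_like(t)

    for sweep in range(50):                        # relaxation sweeps over the window
        x_old = x.copy()
        for i in range(len(t) - 1):                # solve node x with y held fixed
            x[i + 1] = x[i] + dt * (-x[i] + 0.5 * y[i])
        for i in range(len(t) - 1):                # then node y using the updated x
            y[i + 1] = y[i] + dt * (-y[i] + 0.5 * x[i])
        if np.max(np.abs(x - x_old)) < 1e-10:      # waveform convergence check
            print(f"converged after {sweep + 1} sweeps")
            break

Inactive subcircuits whose waveforms have already converged can simply be skipped on later sweeps, which is exactly the latency that makes relaxation methods fast on large circuits.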
Neuronvisio: A Graphical User Interface with 3D Capabilities for NEURON.
Mattioni, Michele; Cohen, Uri; Le Novère, Nicolas
2012-01-01
The NEURON simulation environment is a commonly used tool to perform electrical simulations of neurons and neuronal networks. The NEURON User Interface, based on the now discontinued InterViews library, provides some limited facilities to explore models and to plot their simulation results. Other limitations include the inability to generate a three-dimensional visualization and the lack of a standard means to save the results of simulations or to store the model geometry within the results. Neuronvisio (http://neuronvisio.org) aims to address these deficiencies through a set of well-designed Python APIs and provides an improved UI, allowing users to explore and interact with the model. Neuronvisio also facilitates access to previously published models, allowing users to browse, download, and locally run NEURON models stored in ModelDB. Neuronvisio uses the matplotlib library to plot simulation results and the standard HDF format to store them. Neuronvisio can be viewed as an extension of NEURON, facilitating typical user workflows such as model browsing, selection, download, compilation, and simulation. The 3D viewer simplifies the exploration of complex model structures, while matplotlib permits the plotting of high-quality graphs. The newly introduced ability to save numerical results allows users to perform additional analysis on their previous simulations.
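The kind of bookkeeping Neuronvisio automates, storing simulation traces together with model metadata in one HDF file, can be sketched with h5py. The dataset names, attributes, and the synthetic voltage trace below are assumptions for illustration, not Neuronvisio's actual on-disk schema.

    # Sketch: saving a simulation trace plus metadata to HDF5, then reloading it.
    import numpy as np
    import h5py

    t = np.arange(0.0, 100.0, 0.025)                            # time, ms
    v_soma = -65.0 + 30.0 * np.exp(-((t - 50.0) / 2.0) ** 2)    # toy spike-like trace

    with h5py.File("results.h5", "w") as f:
        run = f.create_group("run_0001")
        run.create_dataset("time_ms", data=t)
        run.create_dataset("v_soma_mV", data=v_soma)
        run.attrs["dt_ms"] = 0.025
        run.attrs["model"] = "demo_cell"        # model/geometry identifier

    with h5py.File("results.h5", "r") as f:     # reload later for further analysis
        print(list(f["run_0001"].keys()), dict(f["run_0001"].attrs))

Keeping the traces and the identifying metadata in the same self-describing file is what makes later re-analysis of old simulations practical.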
GEO2D - Two-Dimensional Computer Model of a Ground Source Heat Pump System
James Menart
2013-06-07
This file contains a zipped archive of the files required to run GEO2D. GEO2D is a computer code for simulating ground source heat pump (GSHP) systems in two dimensions. GEO2D performs a detailed finite-difference simulation of the heat transfer occurring within the working fluid, the tube wall, the grout, and the ground. Both horizontal and vertical wells can be simulated with this program, but it should be noted that the vertical well is modeled as a single tube. The program also models the heat pump in conjunction with the heat transfer occurring: GEO2D simulates the heat pump and ground loop as a system. Many results are produced by GEO2D as a function of time and position, such as heat transfer rates, temperatures, and heat pump performance. In addition to this information, GEO2D provides an economic comparison between the simulated geothermal system and a comparable air heat pump system, or a comparable gas, oil, or propane heating system with a vapor-compression air conditioner. The version of GEO2D in the attached file has been coupled to the DOE heating and cooling load software EnergyPlus. This is a great convenience for the user because heating and cooling loads are an input to GEO2D. GEO2D is a user-friendly program that uses a graphical user interface for inputs and outputs. These make entering data simple, and they produce many plotted results that are easy to understand. Running GEO2D requires access to MATLAB. If MATLAB is not available on your computer, you can download the 64-bit version of MCRInstaller.exe from the MATLAB website or from this geothermal depository; this free download will enable you to run GEO2D.
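The core of such a simulation is a finite-difference update of the heat conduction equation on a 2-D grid. The sketch below shows an explicit forward-Euler version with a fixed-temperature "tube" region; the soil properties, grid, and boundary temperatures are illustrative assumptions, not GEO2D inputs or its numerical scheme.

    # Sketch: explicit 2-D finite-difference ground heat conduction.
    import numpy as np

    alpha = 1.0e-6             # soil thermal diffusivity, m^2/s (assumed)
    dx = 0.1                   # grid spacing, m
    dt = 0.2 * dx**2 / alpha   # stable explicit step (below dx^2 / (4*alpha))

    T = np.full((50, 50), 10.0)        # initial ground temperature, deg C
    for _ in range(5000):              # roughly 115 days of simulated time
        lap = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
               - 4.0 * T[1:-1, 1:-1]) / dx**2
        T[1:-1, 1:-1] += alpha * dt * lap          # forward-Euler diffusion update
        T[24:26, 24:26] = 30.0                     # hold "tube" cells at fluid temperature
        T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 10.0   # far-field boundary

    print(f"ground temperature 1 m from the tube: {T[25, 35]:.2f} C")

GEO2D layers the working fluid, tube wall, and grout onto this kind of grid and couples the resulting ground-loop temperatures to a heat pump model, which is what turns conduction physics into system performance predictions.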
The role of historical forcings in simulating the observed Atlantic multidecadal oscillation
NASA Astrophysics Data System (ADS)
Murphy, Lisa N.; Bellomo, Katinka; Cane, Mark; Clement, Amy
2017-03-01
We analyze the Atlantic multidecadal oscillation (AMO) in the preindustrial (PI) and historical (HIST) simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to assess the drivers of the observed AMO from 1865 to 2005. We draw 141-year samples from the PI runs of 41 CMIP5 models and compare the correlation and variance between the observed AMO and the simulated PI and HIST AMO. The correlation coefficients in 38 forced (HIST) models are above the 90% confidence level and explain up to 56% of the observed variance. The probability that any of the unforced (PI) models do as well is less than 3% in 31 models. Multidecadal variability is larger in 39 CMIP5 HIST simulations, and in all HIST members of the Community Earth System Model Large Ensemble, than in their corresponding PI runs. We conclude that there is an essential role for external forcing in driving the observed AMO.
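The sampling logic can be illustrated with synthetic series: slide a 141-year window through a long unforced control run and ask how often it matches the observed record as well as a forced run does. The red-noise stand-ins and thresholds below are purely illustrative; they are not the CMIP5 data or the paper's exact statistical test.

    # Sketch: drawing 141-year windows from a "PI control" series and
    # correlating each against an "observed" index.
    import numpy as np

    rng = np.random.default_rng(1)

    def red_noise(n, r=0.9):
        """AR(1) series as a crude stand-in for a North Atlantic SST index."""
        x = np.zeros(n)
        for i in range(1, n):
            x[i] = r * x[i - 1] + rng.standard_normal()
        return x

    obs = red_noise(141)        # stand-in for the observed AMO, 1865-2005
    pi_run = red_noise(2000)    # stand-in for a long PI control run

    corrs = np.array([
        np.corrcoef(obs, pi_run[s:s + 141])[0, 1]
        for s in range(len(pi_run) - 141)
    ])
    print(f"max unforced correlation: {corrs.max():.2f}")
    print(f"fraction of windows with r > 0.5: {np.mean(corrs > 0.5):.3f}")

If unforced windows rarely reach the correlation achieved by forced simulations, that is evidence the observed AMO requires external forcing, which is the paper's conclusion.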
User's manual for a computer program for simulating intensively managed allowable cut.
Robert W. Sassaman; Ed Holt; Karl Bergsvik
1972-01-01
Detailed operating instructions are described for SIMAC, a computerized forest simulation model which calculates the allowable cut assuming volume regulation for forests with intensively managed stands. A sample problem illustrates the required inputs and expected output. SIMAC is written in FORTRAN IV and runs on a CDC 6400 computer with a SCOPE 3.3 operating system....
McDonald, Richard; Nelson, Jonathan; Kinzel, Paul; Conaway, Jeffrey S.
2006-01-01
The Multi-Dimensional Surface-Water Modeling System (MD_SWMS) is a Graphical User Interface for surface-water flow and sediment-transport models. The capabilities of MD_SWMS for developing models include: importing raw topography and other ancillary data; building the numerical grid and defining initial and boundary conditions; running simulations; visualizing results; and comparing results with measured data.
Mining data from CFD simulation for aneurysm and carotid bifurcation models.
Miloš, Radović; Dejan, Petrović; Nenad, Filipović
2011-01-01
Arterial geometry variability is present both within and across individuals. To analyze the influence of geometric parameters, blood density, dynamic viscosity, and blood velocity on the wall shear stress (WSS) distribution in the human carotid artery bifurcation and aneurysm, computer simulations were run to generate data pertaining to this phenomenon. In our work we evaluate two prediction models for capturing these relationships: a neural network model and a k-nearest-neighbor model. The results revealed that both models have high predictive ability for this task. The achieved results represent progress toward real-time assessment of stroke risk from a given patient's data.
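One of the two surrogate models is easy to sketch: a k-nearest-neighbor regressor mapping geometric and flow parameters to WSS. The synthetic training data below use a Poiseuille-like relation as a stand-in for CFD output; the feature ranges and target formula are illustrative assumptions, not the paper's dataset or models.

    # Sketch: k-NN regression from flow/geometry parameters to wall shear stress.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    n = 500
    X = np.column_stack([
        rng.uniform(4e-3, 8e-3, n),    # vessel diameter, m
        rng.uniform(1030, 1070, n),    # blood density, kg/m^3
        rng.uniform(3e-3, 4e-3, n),    # dynamic viscosity, Pa*s
        rng.uniform(0.2, 0.8, n),      # mean inlet velocity, m/s
    ])
    # Poiseuille-like toy target: tau_w = 8 * mu * u / d (stand-in for CFD WSS)
    y = 8.0 * X[:, 2] * X[:, 3] / X[:, 0]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
    model.fit(X_tr, y_tr)
    print(f"R^2 on held-out samples: {model.score(X_te, y_te):.3f}")

Feature scaling matters here: without it, the density column would dominate the distance metric even though it barely affects the target.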
NASA Astrophysics Data System (ADS)
Manninen, L. M.
1993-12-01
This document describes TKKMOD, a simulation model developed at Helsinki University of Technology for a specific wind-diesel system layout, with special emphasis on the battery submodel and its use in simulation. The model has been included in the European wind-diesel modeling software package WDLTOOLS under the CEC JOULE project 'Engineering Design Tools for Wind-Diesel Systems' (JOUR-0078). WDLTOOLS serves as the user interface and processes the input and output data of the logistic simulation models developed by the project participants; TKKMOD cannot be run without this shell. The report describes only the simulation principles and model-specific parameters of TKKMOD and gives model-specific user instructions. The input and output data processing performed outside the model is described in the documentation of the shell. The simulation model is used to calculate the long-term performance of the reference system configuration for given wind and load conditions. The main results are energy flows, losses in the system components, diesel fuel consumption, and the number of diesel engine starts.
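The logistic bookkeeping such a model performs can be sketched as an hourly dispatch loop: surplus wind charges the battery, deficits discharge it, and the diesel covers what remains, while engine starts are counted. The capacities, efficiency, and toy time series below are assumptions for illustration, not TKKMOD parameters or its dispatch rules.

    # Sketch: hourly wind-diesel-battery dispatch loop (kW ~ kWh at 1 h steps).
    import numpy as np

    rng = np.random.default_rng(3)
    hours = 24 * 7
    wind = np.clip(rng.normal(40.0, 20.0, hours), 0.0, 80.0)        # kW available
    load = 50.0 + 10.0 * np.sin(np.arange(hours) * 2 * np.pi / 24)  # kW demand

    cap, soc, eff = 200.0, 100.0, 0.85   # battery kWh, initial state, charge efficiency
    diesel_kwh, starts, diesel_on = 0.0, 0, False

    for w, l in zip(wind, load):
        net = w - l                              # surplus (+) or deficit (-)
        if net >= 0.0:
            soc = min(cap, soc + net * eff)      # charge battery with surplus
            diesel_on = False
        elif soc + net > 0.0:
            soc += net                           # battery covers the deficit
            diesel_on = False
        else:                                    # battery too low: run the diesel
            if not diesel_on:
                starts += 1                      # count engine starts
                diesel_on = True
            diesel_kwh += -net                   # diesel covers the full deficit

    print(f"diesel energy: {diesel_kwh:.0f} kWh, engine starts: {starts}")

Summing such flows over a simulated year is what yields the model's headline outputs: component losses, fuel consumption, and engine start counts.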
Cosmic Reionization on Computers: Numerical and Physical Convergence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gnedin, Nickolay Y.
In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite-resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ∼20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, such as stellar masses and metallicities. Yet other properties of model galaxies, for example, their H i masses, are recovered in the weakly converged runs only within a factor of 2.
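The idea of a weak convergence correction can be illustrated as a resolution-dependent rescaling of the star formation recipe so that coarse runs approximate the converged star formation history. The power-law form, exponent, and reference resolution below are illustrative assumptions only, not the calibration from the paper.

    # Sketch: a resolution-dependent correction factor in a star formation recipe.
    def corrected_sf_efficiency(eps: float, dx_pc: float,
                                dx_converged_pc: float = 100.0,
                                slope: float = 0.25) -> float:
        """Boost the star formation efficiency at coarse resolution (assumed form)."""
        return eps * (dx_pc / dx_converged_pc) ** slope

    for dx in (800.0, 400.0, 200.0, 100.0):
        # coarser grids get a larger efficiency, compensating for unresolved SF
        print(f"dx = {dx:5.0f} pc -> eps = {corrected_sf_efficiency(0.01, dx):.4f}")

The factor is calibrated so that resolution-sensitive outputs such as star formation histories line up across a resolution ladder, even though the individual runs are not formally converged.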