Sample records for simulation model part

  1. Imaging simulation of active EO-camera

    NASA Astrophysics Data System (ADS)

    Pérez, José; Repasi, Endre

    2018-04-01

    A modeling scheme for active imaging through atmospheric turbulence is presented. The model consists of two parts: In the first part, the illumination laser beam is propagated to a target that is described by its reflectance properties, using the well-known split-step Fourier method for wave propagation. In the second part, the reflected intensity distribution imaged on a camera is computed using an empirical model developed for passive imaging through atmospheric turbulence. The split-step Fourier method requires carefully chosen simulation parameters. These simulation requirements together with the need to produce dynamic scenes with a large number of frames led us to implement the model on GPU. Validation of this implementation is shown for two different metrics. This model is well suited for Gated-Viewing applications. Examples of imaging simulation results are presented here.
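
    The split-step Fourier method mentioned above alternates between applying a phase screen in the spatial domain and free-space propagation in the spatial-frequency domain. Below is a minimal, illustrative sketch of that scheme in Python/NumPy; the grid size, step length, and white-noise phase screen (a crude stand-in for properly correlated Kolmogorov screens) are assumptions for demonstration, not the authors' GPU implementation.

```python
# Minimal split-step Fourier beam propagation sketch (illustrative only).
# A Gaussian beam is propagated over a few steps, applying a random phase
# screen (standing in for turbulence) and a Fresnel free-space transfer
# function in the spatial-frequency domain.
import numpy as np

N, L = 256, 0.5                        # grid points and physical grid size [m] (assumed)
wavelength, dz = 1.55e-6, 200.0        # wavelength [m] and step length [m] (assumed)

x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / 0.05**2).astype(complex)     # initial Gaussian beam

fx = np.fft.fftfreq(N, d=L / N)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))  # Fresnel transfer function

rng = np.random.default_rng(0)
for _ in range(5):                                   # five split steps
    screen = 0.5 * rng.standard_normal((N, N))       # crude stand-in for a turbulence phase screen
    field *= np.exp(1j * screen)                     # apply phase screen in the spatial domain
    field = np.fft.ifft2(np.fft.fft2(field) * H)     # free-space propagation in the frequency domain

print("peak intensity after propagation:", np.abs(field).max() ** 2)
```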

  2. Phase II, improved work zone design guidelines and enhanced model of traffic delays in work zones : final report, March 2009.

    DOT National Transportation Integrated Search

    2009-03-01

    This project contains three major parts. In the first part a digital computer simulation model was developed with the aim to model the traffic through a freeway work zone situation. The model was based on the Arena simulation software and used cumula...

  3. Phase II, improved work zone design guidelines and enhanced model of traffic delays in work zones : executive summary report.

    DOT National Transportation Integrated Search

    2009-03-01

    This project contains three major parts. In the first part a digital computer simulation model was developed with the aim to model the traffic through a freeway work zone situation. The model was based on the Arena simulation software and used cumula...

  4. Feedbacks between Air Pollution and Weather, Part 1: Effects on Weather

    EPA Science Inventory

    The meteorological predictions of fully coupled air-quality models running in “feedback” versus “nofeedback” simulations were compared against each other as part of Phase 2 of the Air Quality Model Evaluation International Initiative. The model simulations included a “no-feedback...

  5. Modelling of stamping of DP steel automotive part accounting for the effect of hard components in the microstructure

    NASA Astrophysics Data System (ADS)

    Ambrozinski, Mateusz; Bzowski, Krzysztof; Mirek, Michal; Rauch, Lukasz; Pietrzyk, Maciej

    2013-05-01

    The paper presents simulations of the manufacturing of an automotive part that strongly influences passenger safety. Two approaches to the Finite Element (FE) modelling of stamping of a part that provides extra stiffening of structural subassemblies in the back of a car were considered. The first is a conventional simulation, which assumes that the material is a continuum with a flow stress model and anisotropy coefficients determined from tensile tests. In the second approach, the two-phase microstructure of the DP steel is accounted for in the simulations. The FE2 method, which belongs to the family of upscaling techniques, is used. The Representative Volume Element (RVE), which is the basis of the upscaling approach and reflects the real microstructure, was obtained by image analysis of a micrograph of the DP steel. However, since FE2 simulations with the real picture of the microstructure at the micro scale are extremely time consuming, the idea of the Statistically Similar Representative Volume Element (SSRVE) was applied. The SSRVE obtained for the DP steel used for production of the automotive part is presented in the paper in the form of a 3D inclusion. The macro-scale model of the simulated part is described in detail, as well as the results obtained for the macro and micro-macro simulations.

  6. Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads

    NASA Astrophysics Data System (ADS)

    Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank

    2017-09-01

    In this paper the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling, the general strategy of characterizing the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads, which are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. The main result is that full-wave simulation results can be reproduced in this way. Furthermore, it is clearly seen that the equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insights regarding the coupling between the HPEM field excitation and the nonlinearly loaded loop antenna.

  7. Knowledge representation and qualitative simulation of salmon redd functioning. Part I: qualitative modeling and simulation.

    PubMed

    Guerrin, F; Dumas, J

    2001-02-01

    This work aims at representing the empirical knowledge of freshwater ecologists on the functioning of salmon redds (spawning areas of salmon) and its impact on the mortality of early stages. For this, we use Qsim, a qualitative simulator. In this first part, we provide unfamiliar readers with the underlying qualitative differential equation (QDE) ontology of Qsim: representing quantities, qualitative variables, qualitative constraints, and QDE structure. Based on a very simple example taken from the salmon redd application, we show how informal biological knowledge may be represented and simulated using an approach that was first intended to qualitatively analyze ordinary differential equation systems. A companion paper (Part II) gives the full description and simulation of the salmon redd qualitative model. This work was part of a project aimed at assessing the impact of the environment on salmon population dynamics by the use of models of processes acting at different levels: catchment, river, and redds. Only the latter level is dealt with in this paper.

  8. A dynamic model of the marriage market-Part 2: simulation of marital states and application to empirical data.

    PubMed

    Matthews, A P; Garenne, M L

    2013-09-01

    A dynamic, two-sex, age-structured marriage model is presented. Part 1 focused on first marriage only and described a marriage market matching algorithm. In Part 2 the model is extended to include divorce, widowing, and remarriage. The model produces a self-consistent set of marital states distributed by age and sex in a stable population by means of a gender-symmetric numerical method. The model is compared with empirical data for the case of Zambia. Furthermore, a dynamic marriage function for a changing population is demonstrated in simulations of three hypothetical scenarios of elevated mortality in young to middle adulthood. The marriage model has its primary application to simulation of HIV-AIDS epidemics in African countries. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Simulation of water level, streamflow, and mass transport for the Cooper and Wando rivers near Charleston, South Carolina, 1992-95

    USGS Publications Warehouse

    Conrads, P.A.; Smith, P.A.

    1996-01-01

    The one-dimensional, unsteady-flow model, BRANCH, and the Branched Lagrangian Transport Model (BLTM) were calibrated and validated for the Cooper and Wando Rivers near Charleston, South Carolina. Data used to calibrate the BRANCH model included water-level data at four locations on the Cooper River and two locations on the Wando River, measured tidal-cycle streamflows at five locations on the Wando River, and simulated tidal-cycle streamflows (using an existing validated BRANCH model of the Cooper River) for four locations on the Cooper River. The BRANCH model was used to generate the necessary hydraulic data used in the BLTM model. The BLTM model was calibrated and validated using time series of salinity concentrations at two locations on the Cooper River and at two locations on the Wando River. Successful calibration and validation of the BRANCH and BLTM models to water levels, stream flows, and salinity were achieved after applying a positive 0.45 foot datum correction to the downstream boundary. The sensitivity of the simulated salinity concentrations to changes in the downstream gage datum, channel geometry, and roughness coefficient in the BRANCH model, and to the dispersion factor in the BLTM model was evaluated. The simulated salinity concentrations were most sensitive to changes in the downstream gage datum. A decrease of 0.5 feet in the downstream gage datum increased the simulated 3-day mean salinity concentration by 107 percent (12.7 to 26.3 parts per thousand). The range of the salinity concentration went from a tidal oscillation with a standard deviation of 3.9 parts per thousand to a nearly constant concentration with a standard deviation of 0.0 parts per thousand. An increase in the downstream gage datum decreased the simulated 3-day mean salinity concentration by 47 percent (12.7 to 6.7 parts per thousand) and decreased the standard deviation from 3.9 to 3.4 parts per thousand.

  10. THE MARK I BUSINESS SYSTEM SIMULATION MODEL

    DTIC Science & Technology

    of a large-scale business simulation model as a vehicle for doing research in management controls. The major results of the program were the...development of the Mark I business simulation model and the Simulation Package (SIMPAC). SIMPAC is a method and set of programs facilitating the construction...of large simulation models. The object of this document is to describe the Mark I Corporation model, state why parts of the business were modeled as they were, and indicate the research applications of the model. (Author)

  11. A Multi-Model Assessment for the 2006 and 2010 Simulations under the Air Quality Model Evaluation International Initiative (AQMEII) Phase 2 over North America: Part II. Evaluation of Column Variable Predictions Using Satellite Data

    EPA Science Inventory

    Within the context of the Air Quality Model Evaluation International Initiative phase 2 (AQMEII2) project, this part II paper performs a multi-model assessment of major column abundances of gases, radiation, aerosol, and cloud variables for 2006 and 2010 simulations with three on...

  12. Springback Simulation and Compensation for High Strength Parts Using JSTAMP

    NASA Astrophysics Data System (ADS)

    Shindo, Terumasa; Sugitomo, Nobuhiko; Ma, Ninshu

    2011-08-01

    The stamping parts made from high strength steel have a large springback which is difficult to control. With the development of simulation technology, the springback can be accurately predicted using advanced kinematic material models and CAE systems. In this paper, a stamping process for a pillar part made from several classes of high strength steel was simulated using a Yoshida-Uemori kinematic material model and the springback was well predicted. To obtain the desired part shape, CAD surfaces of the stamping tools were compensated by the CAE system JSTAMP. After applying the compensation 2 or 3 times, the dimensional accuracy of the simulated part shape reached about 0.5 mm. The compensated CAD surfaces of the stamping tools were directly exported from JSTAMP to CAM for machining. The effectiveness of the compensation was verified by an experiment using the compensated tools.
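
    The compensation loop described above can be illustrated with a minimal sketch: the tool surface is adjusted opposite to the predicted springback deviation until the part shape converges. The formed_shape() function below is a hypothetical stand-in for an FE springback prediction, not a JSTAMP interface, and all numbers are illustrative.

```python
# Illustrative iterative springback compensation (displacement adjustment):
# shift the tool surface against the predicted deviation from the target shape.
import numpy as np

def formed_shape(tool_surface):
    """Toy stand-in for an FE springback prediction: the formed part keeps
    only 70% of the tool curvature (i.e. it springs back toward flat)."""
    return 0.7 * tool_surface

target = 5.0 * np.sin(np.linspace(0.0, np.pi, 50))   # desired part profile [mm] (assumed)
tool = target.copy()                                  # first tool equals the nominal geometry

for iteration in range(3):                            # 2-3 compensation loops, as in the paper
    part = formed_shape(tool)                         # predicted shape after springback
    deviation = part - target
    tool -= deviation                                 # over-bend the tool against the deviation
    print(f"iteration {iteration + 1}: max |deviation| = {np.abs(deviation).max():.3f} mm")
```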

  13. Hydrogeology and simulation of groundwater flow and land-surface subsidence in the northern part of the Gulf Coast aquifer system, Texas, 1891-2009

    USGS Publications Warehouse

    Kasmarek, Mark C.

    2012-01-01

    The MODFLOW-2000 groundwater flow model described in this report, the Houston Area Groundwater Model (HAGM), comprises four layers, one for each of the hydrogeologic units of the aquifer system except the Catahoula confining system, the assumed no-flow base of the system. The HAGM is composed of 137 rows and 245 columns of 1-square-mile grid cells with lateral no-flow boundaries at the extent of each hydrogeologic unit to the northwest, at groundwater divides associated with large rivers to the southwest and northeast, and at the downdip limit of freshwater to the southeast. The model was calibrated within the specified criteria by using trial-and-error adjustment of selected model-input data in a series of transient simulations until the model output (potentiometric surfaces, land-surface subsidence, and selected water-budget components) acceptably reproduced field-measured (or estimated) aquifer responses, including water levels and subsidence. The HAGM-simulated subsidence generally compared well to 26 Predictions Relating Effective Stress to Subsidence (PRESS) models in Harris, Galveston, and Fort Bend Counties. Simulated HAGM results indicate that as much as 10 feet (ft) of subsidence has occurred in southeastern Harris County. Measured subsidence and model results indicate that a larger geographic area encompassing this area of maximum subsidence and much of central to southeastern Harris County has subsided at least 6 ft. For the western part of the study area, the HAGM simulated as much as 3 ft of subsidence in Wharton, Jackson, and Matagorda Counties. For the eastern part of the study area, the HAGM simulated as much as 3 ft of subsidence at the boundary of Hardin and Jasper Counties. Additionally, in the southeastern part of the study area in Orange County, the HAGM simulated as much as 3 ft of subsidence. Measured subsidence for these areas in the western and eastern parts of the HAGM has not been documented.

  14. Simulating The Technological Movements Of The Equipment Used For Manufacturing Prosthetic Devices Using 3D Models

    NASA Astrophysics Data System (ADS)

    Chicea, Anca-Lucia

    2015-09-01

    The paper presents the process of building geometric and kinematic models of technological equipment used in the manufacturing of prosthetic devices. First, the process of building the model for a six-axis industrial robot is presented. In the second part of the paper, the process of building the model for a five-axis CNC milling machining center is shown. Both models can be used for accurate simulation of cutting processes for complex parts, such as prosthetic devices.

  15. Modelling and scale-up of chemical flooding: First annual report for the period October 1985-September 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Lake, L.W.; Sepehrnoori, K.

    1987-07-01

    This report consists of three parts. Part A describes the development of our chemical flood simulator UTCHEM during the past year, simulation studies, and physical property modelling and experiments. Part B is a report on the optimization and vectorization of UTCHEM on our Cray supercomputer to speed it up. Part C describes our use of UTCHEM to investigate the use of tracers for interwell reservoir tests. Part A of this Annual Report consists of five sections. In the first section, we give a general description of the simulator and recent changes in it along with a test case for a slightly compressible fluid. In the second section, we describe the major changes which were needed to add gel and alkaline reactions and give preliminary simulation results for these processes. In the third section, comparisons with a surfactant pilot field test are given. In the fourth section, process scaleup and design simulations are given and also our recent mesh refinement results. In the fifth section, experimental results and associated physical property modelling studies are reported. Part B gives our results on the speedup of UTCHEM on a Cray supercomputer. Depending on the size of the problem, this speedup factor was at least tenfold and resulted from a combination of a faster solver, vectorization, and code optimization. Part C describes our use of UTCHEM for field tracer studies and gives the results of a comparison with field tracer data on the same field (Big Muddy) as was simulated and compared with the surfactant pilot reported in section 3 of Part A. 120 figs., 37 tabs.

  16. Uniaxial ratchetting of 316FR steel at room temperature -- Part 2. Constitutive modeling and simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohno, N.; Abdel-Karim, M.

    2000-01-01

    Uniaxial ratchetting experiments of 316FR steel at room temperature reported in Part 1 are simulated using a new kinematic hardening model which has two kinds of dynamic recovery terms. The model, which features the capability of simulating slight opening of stress-strain hysteresis loops robustly, is formulated by furnishing the Armstrong and Frederick model with the critical state of dynamic recovery introduced by Ohno and Wang (1993). The model is then combined with a viscoplastic equation, and the resulting constitutive model is applied successfully to simulating the experiments. It is shown that for ratchetting under stress cycling with negative stress ratio, viscoplasticity and slight opening of hysteresis loops are effective mainly in early and subsequent cycles, respectively, whereas for ratchetting under zero-to-tension only viscoplasticity is effective.
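
    For orientation, the sketch below integrates the classical uniaxial Armstrong and Frederick kinematic hardening rule under asymmetric stress cycling and prints the accumulated plastic (ratchetting) strain per cycle. The material constants and stress limits are arbitrary, and the model shown is the plain AF rule, not the modified model with a critical state of dynamic recovery used in the paper.

```python
# Explicit, stress-driven integration of the uniaxial Armstrong-Frederick
# kinematic hardening model under asymmetric stress cycling (illustrative
# constants). The accumulated plastic strain grows cycle by cycle: ratchetting.
import numpy as np

sig_y = 200.0                  # yield stress [MPa] (assumed)
C, gamma = 50.0e3, 300.0       # AF hardening modulus and dynamic-recovery rate (assumed)

sig_min, sig_max = -250.0, 300.0   # asymmetric stress cycle with nonzero mean stress
n_cycles, n_steps = 10, 500

alpha, eps_p = 0.0, 0.0            # backstress and accumulated plastic strain
sig_old = 0.0

def apply_increment(sig, sig_old, alpha, eps_p):
    """One small stress increment of the AF model (explicit update)."""
    dsig = sig - sig_old
    xi = sig_old - alpha
    n = np.sign(xi) if xi != 0.0 else np.sign(dsig)
    if abs(xi) >= sig_y and n * dsig > 0.0:        # plastic loading
        h = C - gamma * alpha * n                  # uniaxial plastic modulus
        dp = n * dsig / h                          # consistency condition, dp >= 0
        eps_p += n * dp
        alpha += C * n * dp - gamma * alpha * dp   # AF evolution of the backstress
    return alpha, eps_p

# initial ramp from zero stress, then asymmetric stress cycling
for sig in np.linspace(0.0, sig_min, n_steps)[1:]:
    alpha, eps_p = apply_increment(sig, sig_old, alpha, eps_p)
    sig_old = sig

for cycle in range(n_cycles):
    path = np.concatenate([np.linspace(sig_min, sig_max, n_steps),
                           np.linspace(sig_max, sig_min, n_steps)])
    for sig in path[1:]:
        alpha, eps_p = apply_increment(sig, sig_old, alpha, eps_p)
        sig_old = sig
    print(f"cycle {cycle + 1}: accumulated plastic strain = {eps_p:.5f}")
```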

  17. Estimating Welfare Effects Consistent with Forward-Looking Behavior. Part I: Lessons from a Simulation Exercise. Part II: Empirical Results.

    ERIC Educational Resources Information Center

    Keane, Michael P.; Wolpin, Kenneth I.

    2002-01-01

    Part I uses simulations of a model of welfare participation and women's fertility decisions, showing that increases in per-child payments have substantial impact on fertility. Part II uses estimations of decision rules of forward-looking women regarding welfare participation, fertility, marriage, work, and schooling. (SK)

  18. Simulating the x-ray image contrast to setup techniques with desired flaw detectability

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2015-04-01

    The paper provides simulation data from previous work by the author in developing a model for estimating the detectability of crack-like flaws in radiography. The methodology is developed to help in the implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing the detector resolution. Applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs in calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source sizes, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.
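
    As a rough illustration of the kind of contrast calculation such a tool performs, the sketch below applies the textbook Beer-Lambert relation to estimate the subject contrast and signal-to-noise ratio of a crack-like air gap in a plate. The attenuation coefficient, thickness, and noise level are assumed values, and this is not the author's flaw size parameter model.

```python
# Generic Beer-Lambert subject-contrast estimate for a crack-like gap in a
# plate (illustrative only; NOT the author's flaw size parameter model).
import numpy as np

mu = 0.11          # linear attenuation coefficient of the part [1/mm] (assumed)
T = 12.0           # part thickness along the beam [mm] (assumed)
crack_depths = np.array([0.1, 0.25, 0.5, 1.0])    # air gap along the beam [mm]

I0 = 1.0
I_background = I0 * np.exp(-mu * T)               # intensity through sound material
I_crack = I0 * np.exp(-mu * (T - crack_depths))   # beam crosses less material at the crack

contrast = (I_crack - I_background) / I_background
noise = 0.01                                      # relative detector noise (assumed)
snr = contrast / noise

for d, c, s in zip(crack_depths, contrast, snr):
    print(f"crack depth {d:.2f} mm: subject contrast {c:.3%}, SNR {s:.1f}")
```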

  19. Autonomous control of production networks using a pheromone approach

    NASA Astrophysics Data System (ADS)

    Armbruster, D.; de Beer, C.; Freitag, M.; Jagalski, T.; Ringhofer, C.

    2006-04-01

    The flow of parts through a production network is usually pre-planned by a central control system. Such central control fails in the presence of highly fluctuating demand and/or unforeseen disturbances. To manage such dynamic networks with low work-in-progress and short throughput times, an autonomous control approach is proposed. Autonomous control means a decentralized routing of the autonomous parts themselves. The parts' decisions are based on backward-propagated information about the throughput times of finished parts on different routes, so routes with shorter throughput times attract more parts. This process can be compared to ants leaving pheromones on their way to communicate with following ants. The paper focuses on a mathematical description of such autonomously controlled production networks. A fluid model with limited service rates in a general network topology is derived and compared to a discrete-event simulation model. Whereas the discrete-event simulation of production networks is straightforward, the formulation of the addressed scenario in terms of a fluid model is challenging. Here it is shown how several problems in a fluid model formulation (e.g. discontinuities) can be handled mathematically. Finally, some simulation results for the pheromone-based control with both the discrete-event simulation model and the fluid model are presented for a time-dependent influx.
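
    The routing idea can be sketched compactly: finished parts feed back their observed throughput times, the route estimates are exponentially smoothed (the pheromone analogue), and new parts pick routes with probability weighted toward shorter estimated times. The routes, times, and smoothing factor below are illustrative assumptions, not values from the paper.

```python
# Toy sketch of pheromone-style autonomous routing with throughput-time feedback.
import random

routes = {"A": 10.0, "B": 14.0}        # current throughput-time estimates [h] (assumed)
true_time = {"A": 12.0, "B": 9.0}      # unknown "real" processing times [h] (assumed)
alpha = 0.3                            # smoothing factor (pheromone update strength)

def choose_route(estimates):
    # shorter estimated throughput time -> higher selection probability
    weights = {r: 1.0 / t for r, t in estimates.items()}
    total = sum(weights.values())
    x, acc = random.random() * total, 0.0
    for r, w in weights.items():
        acc += w
        if x <= acc:
            return r
    return r

random.seed(1)
for part in range(30):
    r = choose_route(routes)
    observed = true_time[r] + random.uniform(-1.0, 1.0)     # noisy feedback from a finished part
    routes[r] = (1 - alpha) * routes[r] + alpha * observed  # backward-propagated information

print("final throughput-time estimates:", routes)
```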

  20. Modelling and scale-up of chemical flooding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Lake, L.W.; Sepehrnoori, K.

    1990-03-01

    The objective of this research is to develop, validate, and apply a comprehensive chemical flooding simulator for chemical recovery processes involving surfactants, polymers, and alkaline chemicals in various combinations. This integrated program includes components of laboratory experiments, physical property modelling, scale-up theory, and numerical analysis as necessary and integral components of the simulation activity. We have continued to develop, test, and apply our chemical flooding simulator (UTCHEM) to a wide variety of laboratory and reservoir problems involving tracers, polymers, polymer gels, surfactants, and alkaline agents. Part I is an update on the Application of Higher-Order Methods in Chemical Flooding Simulation. This update focuses on the comparison of grid orientation effects for four different numerical methods implemented in UTCHEM. Part II is on Simulation Design Studies and is a continuation of Saad's Big Muddy surfactant pilot simulation study reported last year. Part III reports on the Simulation of Gravity Effects under conditions similar to those of some of the oil reservoirs in the North Sea. Part IV is on Determining Oil Saturation from Interwell Tracers. UTCHEM is used for large-scale interwell tracer tests. A systematic procedure for estimating oil saturation from interwell tracer data is developed and a specific example based on the actual field data provided by Sun E P Co. is given. Part V reports on the Application of Vectorization and Microtasking for Reservoir Simulation. Part VI reports on Alkaline Simulation. The alkaline/surfactant/polymer flood compositional simulator (UTCHEM) reported last year is further extended to include reactions involving chemical species containing magnesium, aluminium and silicon as constituent elements. Part VII reports on permeability and trapping of microemulsion.

  1. Scale effects in wind tunnel modeling of an urban atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Kozmar, Hrvoje

    2010-03-01

    Precise urban atmospheric boundary layer (ABL) wind tunnel simulations are essential for a wide variety of atmospheric studies in built-up environments, including wind loading of structures and air pollutant dispersion. One of the key issues in addressing these problems is a proper choice of the simulation length scale. In this study, an urban ABL was reproduced in a boundary layer wind tunnel at different scales to study possible scale effects. Two full-depth simulations and one part-depth simulation were carried out using a castellated barrier wall, vortex generators, and a fetch of roughness elements. Redesigned “Counihan” vortex generators were employed in the part-depth ABL simulation. A hot-wire anemometry system was used to measure mean velocity and velocity fluctuations. Experimental results are presented as mean velocity, turbulence intensity, Reynolds stress, integral length scale of turbulence, and power spectral density of velocity fluctuations. Results suggest that variations in the length-scale factor do not influence the generated ABL models when using the similarity criteria applied in this study. The part-depth ABL simulation compares well with the two full-depth ABL simulations, indicating that the truncated vortex generators developed for this study can be successfully employed in urban ABL part-depth simulations.
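
    For reference, the statistics reported in such wind tunnel studies are computed from velocity time series roughly as in the sketch below, which derives mean velocity, streamwise turbulence intensity, and the kinematic Reynolds stress from synthetic hot-wire-like signals; the signals themselves are placeholders, not measured data.

```python
# Basic turbulence statistics from (synthetic) hot-wire velocity time series.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
u = 10.0 + 1.2 * rng.standard_normal(n)                      # streamwise velocity [m/s] (synthetic)
w = -0.4 * (u - 10.0) + 0.5 * rng.standard_normal(n)         # vertical velocity [m/s] (synthetic)

U = u.mean()
u_fluct, w_fluct = u - U, w - w.mean()

turbulence_intensity = u_fluct.std() / U                     # Iu = sigma_u / U
reynolds_stress = -(u_fluct * w_fluct).mean()                # -<u'w'> (kinematic)

print(f"mean velocity U = {U:.2f} m/s")
print(f"turbulence intensity Iu = {turbulence_intensity:.3f}")
print(f"Reynolds stress -<u'w'> = {reynolds_stress:.3f} m^2/s^2")
```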

  2. Evaluation of subgrid-scale models in large-eddy simulations of turbulent flow in a centrifugal pump impeller

    NASA Astrophysics Data System (ADS)

    Yang, Zhengjun; Wang, Fujun; Zhou, Peijian

    2012-09-01

    Current research on large eddy simulation (LES) of turbulent flow in pumps mainly concentrates on applying conventional subgrid-scale (SGS) models to simulate the turbulent flow, aiming at obtaining the flow field in the pump. The selection of the SGS model is usually not considered seriously, so the accuracy and efficiency of the simulation cannot be ensured. Three SGS models, the Smagorinsky-Lilly model, the dynamic Smagorinsky model and the dynamic mixed model, are comparatively studied by using the commercial CFD code Fluent combined with its user-defined functions. The simulations are performed for the turbulent flow in a centrifugal pump impeller. The simulation results indicate that the mean flows predicted by the three SGS models agree well with experimental data obtained from detailed measurements of the flow inside the rotating passages of a six-bladed shrouded centrifugal pump impeller using particle image velocimetry (PIV) and laser Doppler velocimetry (LDV). The comparison shows that the dynamic mixed model gives the most accurate results for the mean flow in the centrifugal pump impeller. The SGS stress of the dynamic mixed model is decomposed into a scale-similar part and an eddy-viscous part. The scale-similar part of the SGS stress plays a significant role in high-curvature regions, such as the leading edge and trailing edge of the pump blade. It is also found that the dynamic mixed model is more adaptive for computing turbulence in the pump impeller. The research results presented are useful for improving the computational accuracy and efficiency of LES for centrifugal pumps, and provide an important reference for carrying out simulations in similar fluid machinery.
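
    The baseline of the three models, the Smagorinsky eddy viscosity, can be evaluated as in the minimal sketch below, which computes nu_t = (Cs*Delta)^2 |S| on a synthetic 2D velocity field. The field, grid, and constant Cs are illustrative; the study itself uses 3D fields and Fluent user-defined functions.

```python
# Minimal Smagorinsky eddy-viscosity evaluation on a synthetic 2D field.
import numpy as np

N, L = 64, 1.0
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# synthetic resolved velocity field (a simple vortex pattern)
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
v = -np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)

dudx, dudy = np.gradient(u, dx, dx)
dvdx, dvdy = np.gradient(v, dx, dx)

S11, S22 = dudx, dvdy
S12 = 0.5 * (dudy + dvdx)
S_norm = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))   # |S| = sqrt(2 Sij Sij)

Cs, Delta = 0.17, dx                                        # Smagorinsky constant and filter width (assumed)
nu_t = (Cs * Delta) ** 2 * S_norm                           # subgrid eddy viscosity
print("mean subgrid eddy viscosity:", nu_t.mean())
```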

  3. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    PubMed

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
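
    The error-propagation procedure can be outlined with a small sketch: perturb the factor settings and the measured responses with assumed noise levels, refit the DOE regression many times, and read the coefficient spread from the resulting ensemble. The design, true coefficients, and noise magnitudes below are illustrative assumptions.

```python
# Monte Carlo propagation of input and response uncertainty into the
# coefficients of a simple DOE-style linear regression model.
import numpy as np

rng = np.random.default_rng(0)

# nominal 2-factor, 8-run design and nominal responses (illustrative)
X_nom = np.array([[x1, x2] for x1 in (-1, 1) for x2 in (-1, 1)] * 2, dtype=float)
beta_true = np.array([2.0, 5.0, -3.0])               # intercept and two main effects
y_nom = beta_true[0] + X_nom @ beta_true[1:]

sd_x, sd_y = 0.05, 0.2                               # assumed input / response noise levels
coefs = []
for _ in range(5000):
    X = X_nom + rng.normal(0.0, sd_x, X_nom.shape)   # perturb factor settings
    y = y_nom + rng.normal(0.0, sd_y, y_nom.shape)   # perturb measured responses
    A = np.column_stack([np.ones(len(X)), X])        # design matrix with intercept
    coefs.append(np.linalg.lstsq(A, y, rcond=None)[0])

coefs = np.array(coefs)
print("coefficient means:", coefs.mean(axis=0).round(3))
print("coefficient standard deviations:", coefs.std(axis=0).round(3))
```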

  4. Viscous and thermal modelling of thermoplastic composites forming process

    NASA Astrophysics Data System (ADS)

    Guzman, Eduardo; Liang, Biao; Hamila, Nahiene; Boisse, Philippe

    2016-10-01

    Thermoforming thermoplastic prepregs is a fast manufacturing process. It is suitable for automotive composite parts manufacturing. The simulation of thermoplastic prepreg forming is achieved by alternate thermal and mechanical analyses. The thermal properties are obtained from a mesoscopic analysis and a homogenization procedure. The forming simulation is based on a viscous-hyperelastic approach. The thermal simulations define the coefficients of the mechanical model that depend on the temperature. The forming simulations modify the boundary conditions and the internal geometry of the thermal analyses. The comparison of the simulation with an experimental thermoforming of a part representative of automotive applications shows the efficiency of the approach.

  5. Analysis of geologic terrain models for determination of optimum SAR sensor configuration and optimum information extraction for exploration of global non-renewable resources. Pilot study: Arkansas Remote Sensing Laboratory, part 1, part 2, and part 3

    NASA Technical Reports Server (NTRS)

    Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.; Stiles, J. A.; Frost, F. S.; Shanmugam, K. S.; Smith, S. A.; Narayanan, V.; Holtzman, J. C. (Principal Investigator)

    1982-01-01

    Computer-generated radar simulations and mathematical geologic terrain models were used to establish the optimum radar sensor operating parameters for geologic research. An initial set of mathematical geologic terrain models was created for three basic landforms and families of simulated radar images were prepared from these models for numerous interacting sensor, platform, and terrain variables. The tradeoffs between the various sensor parameters and the quantity and quality of the extractable geologic data were investigated as well as the development of automated techniques of digital SAR image analysis. Initial work on a texture analysis of SEASAT SAR imagery is reported. Computer-generated radar simulations are shown for combinations of two geologic models and three SAR angles of incidence.

  6. Hybrid ray-FDTD model for the simulation of the ultrasonic inspection of CFRP parts

    NASA Astrophysics Data System (ADS)

    Jezzine, Karim; Ségur, Damien; Ecault, Romain; Dominguez, Nicolas; Calmon, Pierre

    2017-02-01

    Carbon Fiber Reinforced Polymers (CFRP) are commonly used in structural parts in the aeronautic industry, to reduce the weight of aircraft while maintaining high mechanical performances. Simulation of the ultrasonic inspections of these parts has to face the highly heterogeneous and anisotropic characteristics of these materials. To model the propagation of ultrasound in these composite structures, we propose two complementary approaches. The first one is based on a ray model predicting the propagation of the ultrasound in an anisotropic effective medium obtained from a homogenization of the material. The ray model is designed to deal with possibly curved parts and subsequent continuously varying anisotropic orientations. The second approach is based on the coupling of the ray model, and a finite difference scheme in time domain (FDTD). The ray model handles the ultrasonic propagation between the transducer and the FDTD computation zone that surrounds the composite part. In this way, the computational efficiency is preserved and the ultrasound scattering by the composite structure can be predicted. Inspections of flat or curved composite panels, as well as stiffeners can be performed. The models have been implemented in the CIVA software platform and compared to experiments. We also present an application of the simulation to the performance demonstration of the adaptive inspection technique SAUL (Surface Adaptive Ultrasound).

  7. Knowledge representation and qualitative simulation of salmon redd functioning. Part II: qualitative model of redds.

    PubMed

    Guerrin, F; Dumas, J

    2001-02-01

    This paper describes a qualitative model of the functioning of salmon redds (spawning areas of salmon) and its impact on mortality rates of early stages. For this, we use Qsim, a qualitative simulator, which appeared adequate for representing available qualitative knowledge of freshwater ecology experts (see Part I of this paper). Since the number of relevant variables was relatively large, it appeared necessary to decompose the model into two parts, corresponding to processes occurring at separate time-scales. A qualitative clock allows us to submit the simulation of salmon developmental stages to the calculation of accumulated daily temperatures (degree-days), according to the clock ticks and a water temperature regime set by the user. Therefore, this introduces some way of real-time dating and duration in a purely qualitative model. Simulating both sub-models, either separately or by means of alternate transitions, allows us to generate the evolutions of variables of interest, such as the mortality rates according to two factors (flow of oxygenated water and plugging of gravel interstices near the bed surface), under various scenarios.
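
    The degree-day bookkeeping behind the qualitative clock can be illustrated as below: daily water temperatures are accumulated, and a developmental stage is reached once its degree-day threshold is passed. The temperature series and thresholds are placeholder values, not those used in the redd model.

```python
# Degree-day accumulation sketch for developmental stages (illustrative values).
import numpy as np

rng = np.random.default_rng(0)
daily_temp = np.clip(rng.normal(8.0, 2.0, 200), 0.0, None)   # water temperature [deg C] (synthetic)

thresholds = {"eyed egg": 220.0, "hatching": 440.0, "emergence": 880.0}   # [degree-days] (assumed)

cumulative = np.cumsum(daily_temp)
for stage, dd in thresholds.items():
    day = int(np.argmax(cumulative >= dd)) + 1 if cumulative[-1] >= dd else None
    print(f"{stage}: reached on day {day}" if day else f"{stage}: not reached")
```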

  8. Research on motion model for the hypersonic boost-glide aircraft

    NASA Astrophysics Data System (ADS)

    Xu, Shenda; Wu, Jing; Wang, Xueying

    2015-11-01

    A motion model for the hypersonic boost-glide (HBG) aircraft is proposed in this paper, and the precision of the model is analyzed through simulation. First, the trajectory of the HBG aircraft is analyzed and a scheme is adopted that divides the trajectory into two parts and builds a motion model for each part. Second, a constrained model of the boosting stage and a constrained model of J2 perturbation are established, and the observation model is set up. Finally, the analysis of the simulation results shows the feasibility and high accuracy of the model and raises expectations for further in-depth research.

  9. Development of NASA's Models and Simulations Standard

    NASA Technical Reports Server (NTRS)

    Bertch, William J.; Zang, Thomas A.; Steele, Martin J.

    2008-01-01

    Following the Space Shuttle Columbia Accident Investigation, several NASA-wide actions were initiated. One of these actions was to develop a standard for the development, documentation, and operation of Models and Simulations. Over the course of two-and-a-half years, a team of NASA engineers representing nine of the ten NASA Centers developed a Models and Simulations Standard to address this action. The standard consists of two parts. The first is the traditional requirements section addressing programmatics, development, documentation, verification, validation, and the reporting of results from both the M&S analysis and the examination of compliance with this standard. The second part is a scale for evaluating the credibility of model and simulation results using levels of merit associated with 8 key factors. This paper provides an historical account of the challenges faced by and the processes used in this committee-based development effort. This account provides insights into how other agencies might approach similar developments. Furthermore, we discuss some specific applications of models and simulations used to assess the impact of this standard on future model and simulation activities.

  10. 40 CFR Appendix C to Part 75 - Missing Data Estimation Procedures

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... certification of a parametric, empirical, or process simulation method or model for calculating substitute data... available process simulation methods and models. 1.2 Petition Requirements Continuously monitor, determine... desulfurization, a corresponding empirical correlation or process simulation parametric method using appropriate...

  11. Evaluation of The Operational Benefits Versus Costs of An Automated Cargo Mover

    DTIC Science & Technology

    2016-12-01

    logistics footprint and life-cycle cost are presented as part of this report. Analysis of modeling and simulation results identified statistically significant differences...

  12. Three phase heat and mass transfer model for unsaturated soil freezing process: Part 2 - model validation

    NASA Astrophysics Data System (ADS)

    Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin

    2018-04-01

    This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results from previous studies and experiments were used for the validation. The results showed that the correlation coefficients for the simulated and experimental water contents at different soil depths were between 0.83 and 0.92. The correlation coefficients for the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. With these high accuracies, the developed model can be well used to predict the water contents at different soil depths and temperatures.
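
    The validation metric used above is the ordinary correlation coefficient between simulated and measured water contents at matched points; a minimal sketch is given below with placeholder arrays rather than data from the study.

```python
# Correlation coefficient between simulated and measured water contents
# (placeholder values, not data from the study).
import numpy as np

measured = np.array([0.31, 0.29, 0.26, 0.24, 0.22, 0.21])    # water content [-] (assumed)
simulated = np.array([0.30, 0.28, 0.27, 0.23, 0.22, 0.20])   # model output [-] (assumed)

r = np.corrcoef(measured, simulated)[0, 1]
print(f"correlation coefficient r = {r:.2f}")
```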

  13. Real-time dynamic simulation of the Cassini spacecraft using DARTS. Part 2: Parallel/vectorized real-time implementation

    NASA Technical Reports Server (NTRS)

    Fijany, A.; Roberts, J. A.; Jain, A.; Man, G. K.

    1993-01-01

    Part 1 of this paper presented the requirements for the real-time simulation of Cassini spacecraft along with some discussion of the DARTS algorithm. Here, in Part 2 we discuss the development and implementation of parallel/vectorized DARTS algorithm and architecture for real-time simulation. Development of the fast algorithms and architecture for real-time hardware-in-the-loop simulation of spacecraft dynamics is motivated by the fact that it represents a hard real-time problem, in the sense that the correctness of the simulation depends on both the numerical accuracy and the exact timing of the computation. For a given model fidelity, the computation should be computed within a predefined time period. Further reduction in computation time allows increasing the fidelity of the model (i.e., inclusion of more flexible modes) and the integration routine.

  14. A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON

    PubMed Central

    King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix

    2008-01-01

    As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, interaction with virtual environments to analysis and visualizations. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adopted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597

  15. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    NASA Astrophysics Data System (ADS)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using a DC load flow approximation). Chapter 9 shows the price results. In contrast to prior market power simulations of these markets, much greater variability in price-cost margins is found when using a realistic model of hourly conditions on such a large network. Chapter 10 shows that the conventional concentration indices (HHIs) are poorly correlated with PCMs. Finally, Chapter 11 proposes that the simulation models are applied to merger analysis and provides two large-scale merger examples. (Abstract shortened by UMI.)

  16. Simulation of transonic flows through a turbine blade cascade with various prescription of outlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Louda, Petr; Straka, Petr; Příhoda, Jaromír

    2018-06-01

    The contribution deals with the numerical simulation of transonic flows through a linear turbine blade cascade. Numerical simulations were carried out partly for the standard computational domain with various outlet boundary conditions using the algebraic transition model of Straka and Příhoda [1] connected with the EARSM turbulence model of Hellsten [2], and partly for the computational domain corresponding to the geometrical arrangement in the wind tunnel using the γ-ζ transition model of Dick et al. [3] with the SST turbulence model. Numerical results were compared with experimental data. The agreement of the numerical results with the experimental data is acceptable given the complicated experimental configuration.

  17. Simulating the X-Ray Image Contrast to Set-Up Techniques with Desired Flaw Detectability

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2015-01-01

    The paper provides simulation data from previous work by the author in developing a model for estimating the detectability of crack-like flaws in radiography. The methodology is being developed to help in the implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing X-ray detector resolution for crack detection. Applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs in calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source sizes, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.

  18. Wind-Stress Simulations and Equatorial Dynamics in an AGCM. Part 1; Basic Results from a 1979-1999 Forced SST Experiment

    NASA Technical Reports Server (NTRS)

    Bacmeister, Julio T.; Suarez, Max J.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    This is the first of a two-part study examining the connection of the equatorial momentum budget in an AGCM (Atmospheric General Circulation Model) with simulated equatorial surface wind stresses over the Pacific. The AGCM used in this study forms part of a newly developed coupled forecasting system used at NASA's Seasonal-to-Interannual Prediction Project. Here we describe the model and present results from a 20-year (1979-1999) AMIP-type experiment forced with observed SSTs (Sea Surface Temperatures). Model results are compared with available observational data sets. The climatological pattern of extra-tropical planetary waves as well as their ENSO-related variability is found to agree quite well with re-analysis estimates. The model's surface wind stress is examined in detail, and reveals a reasonable overall simulation of seasonal and interannual variability, as well as seasonal mean distributions. However, an excessive annual oscillation in wind stress over the equatorial central Pacific is found. We examine the model's divergent circulation over the tropical Pacific and compare it with estimates based on re-analysis data. These comparisons are generally good, but reveal excessive upper-level convergence in the central Pacific. In Part II of this study a direct examination of individual terms in the AGCM's momentum budget is presented. We relate the results of this analysis to the model's simulation of surface wind stress.

  19. Simulating optoelectronic systems for remote sensing with SENSOR

    NASA Astrophysics Data System (ADS)

    Boerner, Anko

    2003-04-01

    The consistent end-to-end simulation of airborne and spaceborne remote sensing systems is an important task and sometimes the only way for the adaptation and optimization of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software ENvironment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. It allows the simulation of a wide range of optoelectronic systems for remote sensing. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. Part three consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimization requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and examples of its use are given. The verification of SENSOR is demonstrated.

  20. EVALUATION OF THE COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODEL VERSION 4.5: UNCERTAINTIES AND SENSITIVITIES IMPACTING MODEL PERFORMANCE: PART II - PARTICULATE MATTER

    EPA Science Inventory

    This paper presents an analysis of the CMAQ v4.5 model performance for particulate matter and its chemical components for the simulated year 2001. This is part two of a two-part series of papers that examines the model performance of CMAQ v4.5.

  1. Improving the iterative Linear Interaction Energy approach using automated recognition of configurational transitions.

    PubMed

    Vosmeer, C Ruben; Kooi, Derk P; Capoferri, Luigi; Terpstra, Margreet M; Vermeulen, Nico P E; Geerke, Daan P

    2016-01-01

    Recently an iterative method was proposed to enhance the accuracy and efficiency of ligand-protein binding affinity prediction through linear interaction energy (LIE) theory. For ligand binding to flexible Cytochrome P450s (CYPs), this method was shown to decrease the root-mean-square error and standard deviation of error prediction by combining interaction energies of simulations starting from different conformations. Thereby, different parts of protein-ligand conformational space are sampled in parallel simulations. The iterative LIE framework relies on the assumption that separate simulations explore different local parts of phase space, and do not show transitions to other parts of configurational space that are already covered in parallel simulations. In this work, a method is proposed to (automatically) detect such transitions during the simulations that are performed to construct LIE models and to predict binding affinities. Using noise-canceling techniques and splines to fit time series of the raw data for the interaction energies, transitions during simulation between different parts of phase space are identified. Boolean selection criteria are then applied to determine which parts of the interaction energy trajectories are to be used as input for the LIE calculations. Here we show that this filtering approach benefits the predictive quality of our previous CYP 2D6-aryloxypropanolamine LIE model. In addition, an analysis is performed of the gain in computational efficiency that can be obtained from monitoring simulations using the proposed filtering method and by prematurely terminating simulations accordingly.
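
    The filtering idea can be sketched as follows: fit a heavily smoothed spline to the noisy interaction-energy time series and flag a configurational transition when the smoothed curve drifts from its initial level by more than a chosen threshold, keeping only the pre-transition frames for the LIE averages. The synthetic trajectory, smoothing factor, and threshold below are illustrative assumptions, not the published selection criteria.

```python
# Spline smoothing of a noisy interaction-energy time series and a simple
# threshold-based transition flag (illustrative protocol only).
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.arange(2000, dtype=float)                              # MD frames (synthetic)
energy = np.where(t < 1200, -45.0, -38.0) + rng.normal(0.0, 2.0, t.size)

spline = UnivariateSpline(t, energy, s=t.size * 4.0)          # heavily smoothed fit
smooth = spline(t)

baseline = np.median(smooth[:200])                            # reference level at the start
threshold = 3.0                                               # energy shift treated as a transition (assumed)
transition = np.flatnonzero(np.abs(smooth - baseline) > threshold)

if transition.size:
    first = transition[0]
    print(f"transition detected near frame {first}; keep frames 0..{first} for LIE averaging")
else:
    print("no transition detected; keep the full trajectory")
```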

  2. Multiscale Issues and Simulation-Based Science and Engineering for Materials-by-Design

    DTIC Science & Technology

    2010-05-15

    planning and execution of programs to achieve the vision of "material-by-design". A key part of this effort has been to examine modeling at the mesoscale.

  3. The GFDL global atmosphere and land model AM4.0/LM4.0: 1. Simulation characteristics with prescribed SSTs

    USGS Publications Warehouse

    Zhao, M.; Golaz, J.-C.; Held, I. M.; Guo, H.; Balaji, V.; Benson, R.; Chen, J.-H.; Chen, X.; Donner, L. J.; Dunne, J. P.; Dunne, Krista A.; Durachta, J.; Fan, S.-M.; Freidenreich, S. M.; Garner, S. T.; Ginoux, P.; Harris, L. M.; Horowitz, L. W.; Krasting, J. P.; Langenhorst, A. R.; Liang, Z.; Lin, P.; Lin, S.-J.; Malyshev, S. L.; Mason, E.; Milly, Paul C.D.; Ming, Y.; Naik, V.; Paulot, F.; Paynter, D.; Phillipps, P.; Radhakrishnan, A.; Ramaswamy, V.; Robinson, T.; Schwarzkopf, D.; Seman, C. J.; Shevliakova, E.; Shen, Z.; Shin, H.; Silvers, L.; Wilson, J. R.; Winton, M.; Wittenberg, A. T.; Wyman, B.; Xiang, B.

    2018-01-01

    In this two‐part paper, a description is provided of a version of the AM4.0/LM4.0 atmosphere/land model that will serve as a base for a new set of climate and Earth system models (CM4 and ESM4) under development at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). This version, with roughly 100 km horizontal resolution and 33 levels in the vertical, contains an aerosol model that generates aerosol fields from emissions and a “light” chemistry mechanism designed to support the aerosol model but with prescribed ozone. In Part 1, the quality of the simulation in AMIP (Atmospheric Model Intercomparison Project) mode—with prescribed sea surface temperatures (SSTs) and sea‐ice distribution—is described and compared with previous GFDL models and with the CMIP5 archive of AMIP simulations. The model's Cess sensitivity (response in the top‐of‐atmosphere radiative flux to uniform warming of SSTs) and effective radiative forcing are also presented. In Part 2, the model formulation is described more fully and key sensitivities to aspects of the model formulation are discussed, along with the approach to model tuning.

  4. The GFDL Global Atmosphere and Land Model AM4.0/LM4.0: 1. Simulation Characteristics With Prescribed SSTs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Ming; Golaz, J. -C.; Held, I. M.

    In this two-part paper, a description is provided of a version of the AM4.0/LM4.0 atmosphere/land model that will serve as a base for a new set of climate and Earth system models (CM4 and ESM4) under development at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). This version, with roughly 100 km horizontal resolution and 33 levels in the vertical, contains an aerosol model that generates aerosol fields from emissions and a “light” chemistry mechanism designed to support the aerosol model but with prescribed ozone. In Part 1, the quality of the simulation in AMIP (Atmospheric Model Intercomparison Project) mode—with prescribed sea surface temperatures (SSTs) and sea-ice distribution—is described and compared with previous GFDL models and with the CMIP5 archive of AMIP simulations. Here, the model's Cess sensitivity (response in the top-of-atmosphere radiative flux to uniform warming of SSTs) and effective radiative forcing are also presented. In Part 2, the model formulation is described more fully and key sensitivities to aspects of the model formulation are discussed, along with the approach to model tuning.

  5. The GFDL Global Atmosphere and Land Model AM4.0/LM4.0: 1. Simulation Characteristics With Prescribed SSTs

    DOE PAGES

    Zhao, Ming; Golaz, J. -C.; Held, I. M.; ...

    2018-02-19

    In this two-part paper, a description is provided of a version of the AM4.0/LM4.0 atmosphere/land model that will serve as a base for a new set of climate and Earth system models (CM4 and ESM4) under development at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). This version, with roughly 100 km horizontal resolution and 33 levels in the vertical, contains an aerosol model that generates aerosol fields from emissions and a “light” chemistry mechanism designed to support the aerosol model but with prescribed ozone. In Part 1, the quality of the simulation in AMIP (Atmospheric Model Intercomparison Project) mode—with prescribed sea surface temperatures (SSTs) and sea-ice distribution—is described and compared with previous GFDL models and with the CMIP5 archive of AMIP simulations. Here, the model's Cess sensitivity (response in the top-of-atmosphere radiative flux to uniform warming of SSTs) and effective radiative forcing are also presented. In Part 2, the model formulation is described more fully and key sensitivities to aspects of the model formulation are discussed, along with the approach to model tuning.

  6. The GFDL Global Atmosphere and Land Model AM4.0/LM4.0: 1. Simulation Characteristics With Prescribed SSTs

    NASA Astrophysics Data System (ADS)

    Zhao, M.; Golaz, J.-C.; Held, I. M.; Guo, H.; Balaji, V.; Benson, R.; Chen, J.-H.; Chen, X.; Donner, L. J.; Dunne, J. P.; Dunne, K.; Durachta, J.; Fan, S.-M.; Freidenreich, S. M.; Garner, S. T.; Ginoux, P.; Harris, L. M.; Horowitz, L. W.; Krasting, J. P.; Langenhorst, A. R.; Liang, Z.; Lin, P.; Lin, S.-J.; Malyshev, S. L.; Mason, E.; Milly, P. C. D.; Ming, Y.; Naik, V.; Paulot, F.; Paynter, D.; Phillipps, P.; Radhakrishnan, A.; Ramaswamy, V.; Robinson, T.; Schwarzkopf, D.; Seman, C. J.; Shevliakova, E.; Shen, Z.; Shin, H.; Silvers, L. G.; Wilson, J. R.; Winton, M.; Wittenberg, A. T.; Wyman, B.; Xiang, B.

    2018-03-01

    In this two-part paper, a description is provided of a version of the AM4.0/LM4.0 atmosphere/land model that will serve as a base for a new set of climate and Earth system models (CM4 and ESM4) under development at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). This version, with roughly 100 km horizontal resolution and 33 levels in the vertical, contains an aerosol model that generates aerosol fields from emissions and a "light" chemistry mechanism designed to support the aerosol model but with prescribed ozone. In Part 1, the quality of the simulation in AMIP (Atmospheric Model Intercomparison Project) mode—with prescribed sea surface temperatures (SSTs) and sea-ice distribution—is described and compared with previous GFDL models and with the CMIP5 archive of AMIP simulations. The model's Cess sensitivity (response in the top-of-atmosphere radiative flux to uniform warming of SSTs) and effective radiative forcing are also presented. In Part 2, the model formulation is described more fully and key sensitivities to aspects of the model formulation are discussed, along with the approach to model tuning.
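    The Cess sensitivity quoted above is diagnosed from a pair of AMIP-style runs, one with observed SSTs and one with SSTs warmed uniformly. As a minimal sketch (not the AM4.0 diagnostic code), assuming the conventional +2 K SST perturbation and gridded net top-of-atmosphere fluxes from the control and perturbed runs, the sensitivity is the area-weighted change in TOA flux per kelvin of imposed warming:

```python
import numpy as np

def global_mean(field, lat):
    """Area-weighted global mean of a (lat, lon) field using cos(latitude) weights."""
    w = np.cos(np.deg2rad(lat))[:, None]                 # weights broadcast over longitude
    return float((field * w).sum() / (w * np.ones_like(field)).sum())

def cess_sensitivity(net_toa_ctrl, net_toa_warm, lat, delta_sst=2.0):
    """Change in global-mean net TOA radiative flux per kelvin of uniform SST warming
    (W m-2 K-1); the +2 K perturbation is the conventional Cess setup (assumed here)."""
    d_flux = global_mean(net_toa_warm, lat) - global_mean(net_toa_ctrl, lat)
    return d_flux / delta_sst

# Illustrative synthetic fields on a coarse 2-degree grid (not model output).
lat = np.arange(-89.0, 90.0, 2.0)
lon = np.arange(0.0, 360.0, 2.0)
rng = np.random.default_rng(0)
ctrl = rng.normal(0.5, 5.0, (lat.size, lon.size))        # net downward TOA flux, control run
warm = ctrl - 2.6 + rng.normal(0.0, 0.5, ctrl.shape)     # more outgoing flux when warmed

print(f"Cess sensitivity ~ {cess_sensitivity(ctrl, warm, lat):.2f} W m-2 K-1")
```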

  7. Dynamic stresses in a Francis model turbine at deep part load

    NASA Astrophysics Data System (ADS)

    Weber, Wilhelm; von Locquenghien, Florian; Conrad, Philipp; Koutnik, Jiri

    2017-04-01

    A comparison of numerically obtained dynamic stresses in a Francis model turbine at deep part load with experimental ones is presented. Due to the change in the electrical power mix toward a larger share of new renewable energy sources, Francis turbines are forced to operate at deep part load in order to compensate for the stochastic nature of wind and solar power and to ensure grid stability. Extending the operating range toward deep part load requires improved understanding of the harsh flow conditions and their impact on material fatigue of hydraulic components in order to ensure a long lifetime of the power unit. In this paper, pressure loads on a model turbine runner from an unsteady two-phase computational fluid dynamics simulation at deep part load are used to calculate mechanical stresses by finite element analysis. Therewith, the stress distribution over time is determined. Since only a few runner rotations are simulated due to the enormous numerical cost, more effort has to be spent on the evaluation procedure in order to obtain objective results. By comparing the numerical results with measured strains, the accuracy of the whole simulation procedure is verified.

  8. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten

    2016-06-08

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.

  9. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    NASA Astrophysics Data System (ADS)

    Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang

    2016-06-01

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
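    For readers unfamiliar with the variance decomposition measure named above, the sketch below computes first-order Sobol indices on a standard analytic test function (the Ishigami function) rather than on a laser-drilling metamodel; the pick-freeze (Saltelli-type) estimator used here is one common choice and is an assumption, not necessarily the estimator used by the authors:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Standard 3-parameter test function with known Sobol indices."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

def first_order_sobol(model, dim, n=100_000, seed=0):
    """First-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y) via a pick-freeze estimator."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, dim))
    B = rng.uniform(-np.pi, np.pi, (n, dim))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # replace column i of A with column i of B
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

print(first_order_sobol(ishigami, dim=3))        # roughly [0.31, 0.44, 0.00] for a=7, b=0.1
```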

  10. Integrating O/S models during conceptual design, part 1

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles E.

    1994-01-01

    The University of Dayton is pleased to submit this report to the National Aeronautics and Space Administration (NASA), Langley Research Center, which integrates a set of models for determining operational capabilities and support requirements during the conceptual design of proposed space systems. This research provides for the integration of the reliability and maintainability (R&M) model, both new and existing simulation models, and existing operations and support (O&S) costing equations in arriving at a complete analysis methodology. Details concerning the R&M model and the O&S costing model may be found in previous reports accomplished under this grant (NASA Research Grant NAG1-1327). In the process of developing this comprehensive analysis approach, significant enhancements were made to the R&M model, updates to the O&S costing model were accomplished, and a new simulation model developed. This is the 1st part of a 3 part technical report.

  11. Computer modeling and simulators as part of university training for NPP operating personnel

    NASA Astrophysics Data System (ADS)

    Volman, M.

    2017-01-01

    This paper considers aspects of a program for training future nuclear power plant personnel developed by the NPP Department of Ivanovo State Power Engineering University. Computer modeling is used for numerical experiments on the kinetics of nuclear reactors in Mathcad. Simulation modeling is carried out on a computer simulator and a full-scale simulator of a water-cooled power reactor to simulate neutron-physical reactor measurements and the start-up and shutdown processes.

  12. Calibrating and testing a gap model for simulating forest management in the Oregon Coast Range

    Treesearch

    Robert J. Pabst; Matthew N. Goslin; Steven L. Garman; Thomas A. Spies

    2008-01-01

    The complex mix of economic and ecological objectives facing today's forest managers necessitates the development of growth models with a capacity for simulating a wide range of forest conditions while producing outputs useful for economic analyses. We calibrated the gap model ZELIG to simulate stand level forest development in the Oregon Coast Range as part of a...

  13. Hybrid methods for simulating hydrodynamics and heat transfer in multiscale (1D-3D) models

    NASA Astrophysics Data System (ADS)

    Filimonov, S. A.; Mikhienkova, E. I.; Dekterev, A. A.; Boykov, D. V.

    2017-09-01

    The work is devoted to application of different-scale models in the simulation of hydrodynamics and heat transfer of large and/or complex systems, which can be considered as a combination of extended and “compact” elements. The model consisting of simultaneously existing three-dimensional and network (one-dimensional) elements is called multiscale. The paper examines the relevance of building such models and considers three main options for their implementation: the spatial and the network parts of the model are calculated separately; spatial and network parts are calculated simultaneously (hydraulically unified model); network elements “penetrate” the spatial part and are connected through the integral characteristics at the tube/channel walls (hydraulically disconnected model). Each proposed method is analyzed in terms of advantages and disadvantages. The paper presents a number of practical examples demonstrating the application of multiscale models.

  14. Modeling dolomitized carbonate-ramp reservoirs: A case study of the Seminole San Andres unit. Part 2 -- Seismic modeling, reservoir geostatistics, and reservoir simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, F.P.; Dai, J.; Kerans, C.

    1998-11-01

    In part 1 of this paper, the authors discussed the rock-fabric/petrophysical classes for dolomitized carbonate-ramp rocks, the effects of rock fabric and pore type on petrophysical properties, petrophysical models for analyzing wireline logs, the critical scales for defining geologic framework, and 3-D geologic modeling. Part 2 focuses on geophysical and engineering characterizations, including seismic modeling, reservoir geostatistics, stochastic modeling, and reservoir simulation. Synthetic seismograms of 30 to 200 Hz were generated to study the level of seismic resolution required to capture the high-frequency geologic features in dolomitized carbonate-ramp reservoirs. Outcrop data were collected to investigate effects of sampling interval and scale-up of block size on geostatistical parameters. Semivariogram analysis of outcrop data showed that the sill of log permeability decreases and the correlation length increases with an increase of horizontal block size. Permeability models were generated using conventional linear interpolation, stochastic realizations without stratigraphic constraints, and stochastic realizations with stratigraphic constraints. Simulations of a fine-scale Lawyer Canyon outcrop model were used to study the factors affecting waterflooding performance. Simulation results show that waterflooding performance depends strongly on the geometry and stacking pattern of the rock-fabric units and on the location of production and injection wells.
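    The block-size dependence of the sill reported above comes from empirical semivariograms of log permeability. The sketch below computes a 1-D experimental semivariogram for a transect of hypothetical log-permeability samples; the simple block averaging used to mimic increasing horizontal block size is an illustrative assumption, not the authors' upscaling procedure:

```python
import numpy as np

def semivariogram(z, max_lag):
    """Experimental semivariogram gamma(h) = 0.5 * mean[(z(x+h) - z(x))^2] for lags 1..max_lag."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h])**2) for h in range(1, max_lag + 1)])

rng = np.random.default_rng(1)
# Hypothetical correlated log-permeability transect (moving average of white noise).
logk = np.convolve(rng.normal(size=2000), np.ones(25) / 25, mode="valid")

# Upscale by averaging over progressively larger "blocks" and compare semivariograms.
for block in (1, 5, 10):
    zb = logk[: len(logk) // block * block].reshape(-1, block).mean(axis=1)
    gamma = semivariogram(zb, max_lag=40)
    print(f"block={block:2d}  approximate sill={gamma[-10:].mean():.4f}")  # sill drops as block grows
```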

  15. 3D modelling of the flow of self-compacting concrete with or without steel fibres. Part I: slump flow test

    NASA Astrophysics Data System (ADS)

    Deeb, R.; Kulasegaram, S.; Karihaloo, B. L.

    2014-12-01

    In part I of this two-part paper, a three-dimensional Lagrangian smooth particle hydrodynamics method has been used to model the flow of self-compacting concrete (SCC) with or without short steel fibres in the slump cone test. The constitutive behaviour of this non-Newtonian viscous fluid is described by a Bingham-type model. The 3D simulation of SCC without fibres is focused on the distribution of large aggregates (larger than or equal to 8 mm) during the flow. The simulation of self-compacting high- and ultra-high- performance concrete containing short steel fibres is focused on the distribution of fibres and their orientation during the flow. The simulation results show that the fibres and/or heavier aggregates do not precipitate but remain homogeneously distributed in the mix throughout the flow.

  16. SENSOR++: Simulation of Remote Sensing Systems from Visible to Thermal Infrared

    NASA Astrophysics Data System (ADS)

    Paproth, C.; Schlüßler, E.; Scherbaum, P.; Börner, A.

    2012-07-01

    During the development process of a remote sensing system, the optimization and the verification of the sensor system are important tasks. To support these tasks, the simulation of the sensor and its output is valuable. This enables the developers to test algorithms, estimate errors, and evaluate the capabilities of the whole sensor system before the final remote sensing system is available and produces real data. The presented simulation concept, SENSOR++, consists of three parts. The first part is the geometric simulation, which calculates where the sensor is looking by using a ray tracing algorithm. This also determines whether the observed part of the scene is shadowed or not. The second part describes the radiometry and results in the spectral at-sensor radiance from the visible spectrum to the thermal infrared according to the simulated sensor type. In the case of earth remote sensing, it also includes a model of the radiative transfer through the atmosphere. The final part uses the at-sensor radiance to generate digital images by using an optical and an electronic sensor model. Using SENSOR++ for an optimization requires the additional application of task-specific data processing algorithms. The principle of the simulation approach is explained, all relevant concepts of SENSOR++ are discussed, and examples of its use are given, such as a camera simulation for a moon lander. Finally, the verification of SENSOR++ is demonstrated.
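    Of the three parts described above, the last (optical plus electronic sensor model) is the most compact to illustrate. The sketch below converts an at-sensor spectral radiance into digital numbers through an idealized aperture, quantum-efficiency, noise, and ADC chain; all parameter values and the single-band treatment are assumptions for illustration and do not reproduce SENSOR++ itself:

```python
import numpy as np

H, C = 6.626e-34, 2.998e8   # Planck constant [J s], speed of light [m s-1]

def radiance_to_dn(L, wavelength=0.65e-6, bandwidth=0.05e-6, aperture_area=1e-4,
                   pixel_solid_angle=1e-8, optics_tau=0.8, qe=0.6, t_int=1e-3,
                   read_noise_e=20.0, full_well=60_000.0, bits=12, seed=0):
    """Convert at-sensor spectral radiance L [W m-2 sr-1 m-1] to digital numbers."""
    rng = np.random.default_rng(seed)
    energy = L * aperture_area * pixel_solid_angle * bandwidth * optics_tau * t_int
    electrons = qe * energy / (H * C / wavelength)               # photon energy h*c/lambda
    electrons = rng.poisson(electrons).astype(float)             # shot noise
    electrons += rng.normal(0.0, read_noise_e, electrons.shape)  # read noise
    dn = np.clip(electrons, 0, full_well) / full_well * (2**bits - 1)
    return np.round(dn).astype(int)

scene_radiance = np.full((4, 4), 8.0e7)   # hypothetical uniform radiance field
print(radiance_to_dn(scene_radiance))
```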

  17. The GFDL global atmosphere and land model AM4.0/LM4.0: 2. Model description, sensitivity studies, and tuning strategies

    USGS Publications Warehouse

    Zhao, M.; Golaz, J.-C.; Held, I. M.; Guo, H.; Balaji, V.; Benson, R.; Chen, J.-H.; Chen, X.; Donner, L. J.; Dunne, J. P.; Dunne, Krista A.; Durachta, J.; Fan, S.-M.; Freidenreich, S. M.; Garner, S. T.; Ginoux, P.; Harris, L. M.; Horowitz, L. W.; Krasting, J. P.; Langenhorst, A. R.; Liang, Z.; Lin, P.; Lin, S.-J.; Malyshev, S. L.; Mason, E.; Milly, Paul C.D.; Ming, Y.; Naik, V.; Paulot, F.; Paynter, D.; Phillipps, P.; Radhakrishnan, A.; Ramaswamy, V.; Robinson, T.; Schwarzkopf, D.; Seman, C. J.; Shevliakova, E.; Shen, Z.; Shin, H.; Silvers, L.; Wilson, J. R.; Winton, M.; Wittenberg, A. T.; Wyman, B.; Xiang, B.

    2018-01-01

    In Part 2 of this two‐part paper, documentation is provided of key aspects of a version of the AM4.0/LM4.0 atmosphere/land model that will serve as a base for a new set of climate and Earth system models (CM4 and ESM4) under development at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). The quality of the simulation in AMIP (Atmospheric Model Intercomparison Project) mode has been provided in Part 1. Part 2 provides documentation of key components and some sensitivities to choices of model formulation and values of parameters, highlighting the convection parameterization and orographic gravity wave drag. The approach taken to tune the model's clouds to observations is a particular focal point. Care is taken to describe the extent to which aerosol effective forcing and Cess sensitivity have been tuned through the model development process, both of which are relevant to the ability of the model to simulate the evolution of temperatures over the last century when coupled to an ocean model.

  18. The GFDL Global Atmosphere and Land Model AM4.0/LM4.0: 2. Model Description, Sensitivity Studies, and Tuning Strategies

    DOE PAGES

    Zhao, Ming; Golaz, J. -C.; Held, I. M.; ...

    2018-02-19

    Here, in Part 2 of this two-part paper, documentation is provided of key aspects of a version of the AM4.0/LM4.0 atmosphere/land model that will serve as a base for a new set of climate and Earth system models (CM4 and ESM4) under development at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). The quality of the simulation in AMIP (Atmospheric Model Intercomparison Project) mode has been provided in Part 1. Part 2 provides documentation of key components and some sensitivities to choices of model formulation and values of parameters, highlighting the convection parameterization and orographic gravity wave drag. The approach taken to tune the model's clouds to observations is a particular focal point. Care is taken to describe the extent to which aerosol effective forcing and Cess sensitivity have been tuned through the model development process, both of which are relevant to the ability of the model to simulate the evolution of temperatures over the last century when coupled to an ocean model.

  19. The GFDL Global Atmosphere and Land Model AM4.0/LM4.0: 2. Model Description, Sensitivity Studies, and Tuning Strategies

    NASA Astrophysics Data System (ADS)

    Zhao, M.; Golaz, J.-C.; Held, I. M.; Guo, H.; Balaji, V.; Benson, R.; Chen, J.-H.; Chen, X.; Donner, L. J.; Dunne, J. P.; Dunne, K.; Durachta, J.; Fan, S.-M.; Freidenreich, S. M.; Garner, S. T.; Ginoux, P.; Harris, L. M.; Horowitz, L. W.; Krasting, J. P.; Langenhorst, A. R.; Liang, Z.; Lin, P.; Lin, S.-J.; Malyshev, S. L.; Mason, E.; Milly, P. C. D.; Ming, Y.; Naik, V.; Paulot, F.; Paynter, D.; Phillipps, P.; Radhakrishnan, A.; Ramaswamy, V.; Robinson, T.; Schwarzkopf, D.; Seman, C. J.; Shevliakova, E.; Shen, Z.; Shin, H.; Silvers, L. G.; Wilson, J. R.; Winton, M.; Wittenberg, A. T.; Wyman, B.; Xiang, B.

    2018-03-01

    In Part 2 of this two-part paper, documentation is provided of key aspects of a version of the AM4.0/LM4.0 atmosphere/land model that will serve as a base for a new set of climate and Earth system models (CM4 and ESM4) under development at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). The quality of the simulation in AMIP (Atmospheric Model Intercomparison Project) mode has been provided in Part 1. Part 2 provides documentation of key components and some sensitivities to choices of model formulation and values of parameters, highlighting the convection parameterization and orographic gravity wave drag. The approach taken to tune the model's clouds to observations is a particular focal point. Care is taken to describe the extent to which aerosol effective forcing and Cess sensitivity have been tuned through the model development process, both of which are relevant to the ability of the model to simulate the evolution of temperatures over the last century when coupled to an ocean model.

  20. The GFDL Global Atmosphere and Land Model AM4.0/LM4.0: 2. Model Description, Sensitivity Studies, and Tuning Strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Ming; Golaz, J. -C.; Held, I. M.

    Here, in Part 2 of this two-part paper, documentation is provided of key aspects of a version of the AM4.0/LM4.0 atmosphere/land model that will serve as a base for a new set of climate and Earth system models (CM4 and ESM4) under development at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). The quality of the simulation in AMIP (Atmospheric Model Intercomparison Project) mode has been provided in Part 1. Part 2 provides documentation of key components and some sensitivities to choices of model formulation and values of parameters, highlighting the convection parameterization and orographic gravity wave drag. The approach taken to tune the model's clouds to observations is a particular focal point. Care is taken to describe the extent to which aerosol effective forcing and Cess sensitivity have been tuned through the model development process, both of which are relevant to the ability of the model to simulate the evolution of temperatures over the last century when coupled to an ocean model.

  1. Grinding Method and Error Analysis of Eccentric Shaft Parts

    NASA Astrophysics Data System (ADS)

    Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua

    2017-12-01

    Eccentric shaft parts are widely used in RV reducers and various mechanical transmissions, and precision grinding technology for such parts is now in demand. In this paper, a model of the X-C linkage relation for eccentric shaft grinding is studied. Using an inversion method, the contour curve of the wheel envelope is deduced, with the distance from the center of the eccentric circle held constant. Simulation software for eccentric shaft grinding is developed and the correctness of the model is verified; the influence of the X-axis feed error, the C-axis feed error, and the wheel radius error on the grinding process is analyzed, and a corresponding error calculation model is proposed. The simulation analysis is carried out to provide a basis for contour error compensation.
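    The X-C linkage referred to above couples the rotary (C) axis of the workpiece to the radial (X) infeed of the wheel so that the wheel stays tangent to the eccentric journal. As a hedged geometric sketch (standard eccentric-journal grinding kinematics, not necessarily the exact model of the paper), with eccentricity e, journal radius r, and wheel radius R, the constant tangency distance R + r gives X(C) = e*cos(C) + sqrt((R + r)^2 - (e*sin(C))^2):

```python
import numpy as np

def wheel_center_distance(c_deg, e=10.0, r=25.0, R=250.0):
    """X-axis position [mm] keeping the wheel tangent to an eccentric journal.

    The journal center sits at (e*cos(C), e*sin(C)); the wheel center lies on the X axis,
    so tangency requires |wheel center - journal center| = R + r, giving
    X(C) = e*cos(C) + sqrt((R + r)^2 - (e*sin(C))^2).
    All dimensions here are illustrative assumptions.
    """
    c = np.deg2rad(c_deg)
    return e * np.cos(c) + np.sqrt((R + r)**2 - (e * np.sin(c))**2)

for angle in (0, 90, 180, 270):
    print(angle, round(float(wheel_center_distance(angle)), 3))
```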

  2. Traffic analysis toolbox volume XIII : integrated corridor management analysis, modeling, and simulation guide

    DOT National Transportation Integrated Search

    2017-02-01

    As part of the Federal Highway Administration (FHWA) Traffic Analysis Toolbox (Volume XIII), this guide was designed to help corridor stakeholders implement the Integrated Corridor Management (ICM) Analysis, Modeling, and Simulation (AMS) methodology...

  3. Traffic analysis toolbox volume XIII : integrated corridor management analysis, modeling, and simulation guide.

    DOT National Transportation Integrated Search

    2017-02-01

    As part of the Federal Highway Administration (FHWA) Traffic Analysis Toolbox (Volume XIII), this guide was designed to help corridor stakeholders implement the Integrated Corridor Management (ICM) Analysis, Modeling, and Simulation (AMS) methodology...

  4. Digital-computer model of the principal ground-water reservoir in Beryl-Enterprise area, Escalante Desert, Utah

    USGS Publications Warehouse

    Mower, R.W.; Bartholoma, Scott D.

    1981-01-01

    The computer model presented in this report was used to simulate the principal ground-water reservoir in the Beryl-Enterprise area, Escalante Desert, Beaver, Iron, and Washington Counties, Utah (Mower, 1981). The details of the formulation of the model, testing of its validity, and the results of predictions are discussed in the cited report. This report was prepared as part of a cooperative program with the Utah Department of Natural Resources, Division of Water Rights, to investigate the water resources of the State. It is an addendum to the principal interpretive report, and it is presented in order to make the model available to anyone desiring to use it for additional predictions. The main program used was the finite-difference model for aquifer simulation in two dimensions documented by Trescott, Pinder, and Larson (1976). Minor modifications were made to adapt the program to the principal ground-water reservoir in the Beryl-Enterprise area. All the modifications are listed at the top of table 1, and were related to parameter input and output; thus none of the computational subroutines were affected. The parameter arrays (table 1) and map of the area with a grid overlay (pl. 1) are given on following pages. The model simulates an aquifer under water-table conditions, mostly composed of unconsolidated basin-fill deposits. The boundaries of the modeled area (pl. 1) generally coincide with the boundaries of the saturated basin fill. However, in the southwest-central part of the model, permeable consolidated rock is included; and that part of the northern boundary between the Black and Wah Wah Mountains is an arbitrary boundary in basin fill between the Beryl-Enterprise area and the Milford area that lies to the northeast. The ignimbrite at Table Butte also was included in the active part of the model. The model includes simulation of discharge by evapotranspiration from phreatophytes. The areal recharge array was used to simulate recharge entering the modeled area at its boundaries and from stream infiltration in the southern corner near Enterprise. In addition, this array included discharge by wells operated during the period simulated as being under steady-state conditions (virtually 1937), and discharging wells simulating flow of water northeast to the Milford area. These wells also were included in the transient-state simulation (1937-77), although any changes in this discharge were modeled using the pumpage array (Group IV, table 1). The wells simulating outflow to the Milford area are shown on plate 1, but the wells pumping in 1937 are not shown unless they also were pumped during 1937-77. The pumpage array was used to simulate: (1) discharge from wells, (2) discharge after 1977 from a mine in the southwest-central part of the model and recharge resulting from the mine discharge (pl. 1), and (3) changes in discharge in wells operated during the steady-state period. Recharge from irrigation was simulated by reducing pumpage from nodes where irrigation occurs. Discharge from all wells was reduced by 5 percent by multiplying all pumpage by 0.95 in the computer program. North of Newcastle, in T. 35 S., R. 15 W., pumpage was reduced by 35 percent because surface materials are very permeable.

  5. Integrating O/S models during conceptual design, part 3

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles E.

    1994-01-01

    Space vehicles, such as the Space Shuttle, require intensive ground support prior to, during, and after each mission. Maintenance is a significant part of that ground support. All space vehicles require scheduled maintenance to ensure operability and performance. In addition, components of any vehicle are not one-hundred percent reliable so they exhibit random failures. Once detected, a failure initiates unscheduled maintenance on the vehicle. Maintenance decreases the number of missions which can be completed by keeping vehicles out of service so that the time between the completion of one mission and the start of the next is increased. Maintenance also requires resources such as people, facilities, tooling, and spare parts. Assessing the mission capability and resource requirements of any new space vehicle, in addition to performance specification, is necessary to predict the life cycle cost and success of the vehicle. Maintenance and logistics support has been modeled by computer simulation to estimate mission capability and resource requirements for evaluation of proposed space vehicles. The simulation was written with Simulation Language for Alternative Modeling II (SLAM II) for execution on a personal computer. For either one or a fleet of space vehicles, the model simulates the preflight maintenance checks, the mission and return to earth, and the post flight maintenance in preparation to be sent back into space. The model enables prediction of the number of missions possible and vehicle turn-time (the time between completion of one mission and the start of the next) given estimated values for component reliability and maintainability. The model also facilitates study of the manpower and vehicle requirements for the proposed vehicle to meet its desired mission rate. This is the 3rd part of a 3 part technical report.

  6. A Primer on Simulation and Gaming.

    ERIC Educational Resources Information Center

    Barton, Richard F.

    In a primer intended for the administrative professions, for the behavioral sciences, and for education, simulation and its various aspects are defined, illustrated, and explained. Man-model simulation, man-computer simulation, all-computer simulation, and analysis are discussed as techniques for studying object systems (parts of the "real…

  7. Ionospheric Simulation System for Satellite Observations and Global Assimilative Model Experiments - ISOGAME

    NASA Technical Reports Server (NTRS)

    Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga; Stephens, Philip; Iijima, Bryron A.

    2013-01-01

    Modeling and imaging the Earth's ionosphere as well as understanding its structures, inhomogeneities, and disturbances is a key part of NASA's Heliophysics Directorate science roadmap. This invention provides a design tool for scientific missions focused on the ionosphere. It is a scientifically important and technologically challenging task to assess the impact of a new observation system quantitatively on our capability of imaging and modeling the ionosphere. This question is often raised whenever a new satellite system is proposed, a new type of data is emerging, or a new modeling technique is developed. The proposed constellation would be part of a new observation system with more low-Earth orbiters tracking more radio occultation signals broadcast by Global Navigation Satellite System (GNSS) than those offered by the current GPS and COSMIC observation system. A simulation system was developed to fulfill this task. The system is composed of a suite of software that combines the Global Assimilative Ionospheric Model (GAIM) including first-principles and empirical ionospheric models, a multiple- dipole geomagnetic field model, data assimilation modules, observation simulator, visualization software, and orbit design, simulation, and optimization software.

  8. Theory, modeling, and integrated studies in the Arase (ERG) project

    NASA Astrophysics Data System (ADS)

    Seki, Kanako; Miyoshi, Yoshizumi; Ebihara, Yusuke; Katoh, Yuto; Amano, Takanobu; Saito, Shinji; Shoji, Masafumi; Nakamizo, Aoi; Keika, Kunihiro; Hori, Tomoaki; Nakano, Shin'ya; Watanabe, Shigeto; Kamiya, Kei; Takahashi, Naoko; Omura, Yoshiharu; Nose, Masahito; Fok, Mei-Ching; Tanaka, Takashi; Ieda, Akimasa; Yoshikawa, Akimasa

    2018-02-01

    Understanding of underlying mechanisms of drastic variations of the near-Earth space (geospace) is one of the current focuses of magnetospheric physics. The science target of the geospace research project Exploration of energization and Radiation in Geospace (ERG) is to understand the geospace variations with a focus on the relativistic electron acceleration and loss processes. In order to achieve the goal, the ERG project consists of three parts: the Arase (ERG) satellite, ground-based observations, and theory/modeling/integrated studies. The role of the theory/modeling/integrated studies part is to promote relevant theoretical and simulation studies as well as integrated data analysis to combine different kinds of observations and modeling. Here we provide technical reports on simulation and empirical models related to the ERG project together with their roles in the integrated studies of dynamic geospace variations. The simulation and empirical models covered include the radial diffusion model of the radiation belt electrons, GEMSIS-RB and RBW models, CIMI model with global MHD simulation REPPU, GEMSIS-RC model, plasmasphere thermosphere model, self-consistent wave-particle interaction simulations (electron hybrid code and ion hybrid code), the ionospheric electric potential (GEMSIS-POT) model, and SuperDARN electric field models with data assimilation. ERG (Arase) science center tools to support integrated studies with various kinds of data are also briefly introduced.

  9. Simulation of ground-water flow and transport of chlorinated hydrocarbons at Graces Quarters, Aberdeen Proving Ground, Maryland

    USGS Publications Warehouse

    Tenbus, Frederick J.; Fleck, William B.

    2001-01-01

    Military activity at Graces Quarters, a former open-air chemical-agent facility at Aberdeen Proving Ground, Maryland, has resulted in ground-water contamination by chlorinated hydrocarbons. As part of a ground-water remediation feasibility study, a three-dimensional model was constructed to simulate transport of four chlorinated hydrocarbons (1,1,2,2-tetrachloroethane, trichloroethene, carbon tetrachloride, and chloroform) that are components of a contaminant plume in the surficial and middle aquifers underlying the east-central part of Graces Quarters. The model was calibrated to steady-state hydraulic head at 58 observation wells and to the concentration of 1,1,2,2-tetrachloroethane in 58 observation wells and 101 direct-push probe samples from the mid-1990s. Simulations using the same basic model with minor adjustments were then run for each of the other plume constituents. The error statistics between the simulated and measured concentrations of each of the constituents compared favorably to the error statistics of the 1,1,2,2-tetrachloroethane calibration. Model simulations were used in conjunction with contaminant concentration data to examine the sources and degradation of the plume constituents. It was determined from this that mixed contaminant sources with no ambient degradation was the best approach for simulating multi-species solute transport at the site. Forward simulations were run to show potential solute transport 30 years and 100 years into the future with and without source removal. Although forward simulations are subject to uncertainty, they can be useful for illustrating various aspects of the conceptual model and its implementation. The forward simulation with no source removal indicates that contaminants would spread throughout various parts of the surficial and middle aquifers, with the 100-year simulation showing potential discharge areas in either the marshes at the end of the Graces Quarters peninsula or just offshore in the estuaries. The simulation with source removal indicates that if the modeling assumptions are reasonable and ground-water cleanup within 30 years is important, source removal alone is not a sufficient remedy, and cleanup might not even occur within 100 years.

  10. Hydrology and digital simulation of the regional aquifer system, eastern Snake River Plain, Idaho

    USGS Publications Warehouse

    Garabedian, S.P.

    1992-01-01

    The transient model was used to simulate aquifer changes from 1981 to 2010 in response to three hypothetical development alternatives: (1) Continuation of 1980 hydrologic conditions, (2) increased pumpage, and (3) increased recharge. Simulation of continued 1980 hydrologic conditions for 30 years indicated that head declines of 2 to 8 feet might be expected in the central part of the plain. The magnitude of simulated head declines was consistent with head declines measured during the 1980 water year. Larger declines were calculated along model boundaries, but these changes may have resulted from underestimation of tributary drainage-basin underflow and inadequate aquifer definition. Simulation of increased ground-water pumpage (an additional 2,400 cubic feet per second) for 30 years indicated head declines of 10 to 50 feet in the central part of the plain. These relatively large head declines were accompanied by increased simulated river leakage of 50 percent and decreased spring discharge of 20 percent. The effect of increased recharge (800 cubic feet per second) for 30 years was a rise in simulated heads of 0 to 5 feet in the central part of the plain.

  11. Characterization, modeling and simulation of fused deposition modeling fabricated part surfaces

    NASA Astrophysics Data System (ADS)

    Taufik, Mohammad; Jain, Prashant K.

    2017-12-01

    Surface roughness is generally used for characterization, modeling and simulation of fused deposition modeling (FDM) fabricated part surfaces. But the average surface roughness is not able to provide the insight of surface characteristics with sharp peaks and deep valleys. It deals in the average sense for all types of surfaces, including FDM fabricated surfaces with distinct surface profile features. The present research work shows that kurtosis and skewness can be used for characterization, modeling and simulation of FDM surfaces because these roughness parameters have the ability to characterize a surface with sharp peaks and deep valleys. It can be critical in certain application areas in tribology and biomedicine, where the surface profile plays an important role. Thus, in this study along with surface roughness, skewness and kurtosis are considered to show a novel strategy to provide new transferable knowledge about FDM fabricated part surfaces. The results suggest that the surface roughness, skewness and kurtosis are significantly different at 0° and in the range (0°, 30°], [30°, 90°] of build orientation.
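    Since the argument above rests on skewness and kurtosis distinguishing profiles that share a similar average roughness, a short sketch of the standard profile parameters may help; the synthetic profiles below are illustrative and are not FDM measurement data:

```python
import numpy as np

def roughness_parameters(z):
    """Ra (arithmetic mean deviation), Rq (RMS), Rsk (skewness), Rku (kurtosis) of a profile."""
    d = z - z.mean()
    ra = np.mean(np.abs(d))
    rq = np.sqrt(np.mean(d**2))
    rsk = np.mean(d**3) / rq**3
    rku = np.mean(d**4) / rq**4
    return ra, rq, rsk, rku

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 2000)
symmetric = 5 * np.sin(2 * np.pi * x)                                           # smooth, symmetric
deep_valleys = 5 * np.sin(2 * np.pi * x) - 4 * np.exp(-((x % 1 - 0.5) / 0.05)**2)  # sharp valleys

for name, prof in (("symmetric", symmetric), ("deep valleys", deep_valleys)):
    ra, rq, rsk, rku = roughness_parameters(prof + rng.normal(0, 0.1, x.size))
    print(f"{name:12s}  Ra={ra:.2f}  Rsk={rsk:+.2f}  Rku={rku:.2f}")
```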

  12. SENSOR: a tool for the simulation of hyperspectral remote sensing systems

    NASA Astrophysics Data System (ADS)

    Börner, Anko; Wiest, Lorenz; Keller, Peter; Reulke, Ralf; Richter, Rolf; Schaepman, Michael; Schläpfer, Daniel

    The consistent end-to-end simulation of airborne and spaceborne earth remote sensing systems is an important task, and sometimes the only way for the adaptation and optimisation of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software Environment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray-tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. The third part consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimisation requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and first examples of its use are given. The verification of SENSOR is demonstrated. This work is closely related to the Airborne PRISM Experiment (APEX), an airborne imaging spectrometer funded by the European Space Agency.

  13. Wind field near complex terrain using numerical weather prediction model

    NASA Astrophysics Data System (ADS)

    Chim, Kin-Sang

    The PennState/NCAR MM5 model was modified to simulate an idealized flow past a 3D obstacle in the Micro-Alpha Scale domain. The obstacles used were an idealized Gaussian obstacle and the real topography of Lantau Island of Hong Kong. The Froude numbers under study range from 0.22 to 1.5. Regime diagrams for both the idealized Gaussian obstacle and Lantau Island were constructed. This work is divided into five parts. The first part is the problem definition and the literature review of the related publications. The second part briefly discusses the PennState/NCAR MM5 model and includes a case study of long-range transport. The third part is devoted to the modification and the verification of the PennState/NCAR MM5 model on the Micro-Alpha Scale domain. The implementation of the Orlanski (1976) open boundary condition is included with the method of single sounding initialization of the model. Moreover, an upper dissipative layer, Klemp and Lilly (1978), is implemented in the model. The simulated result is verified by the Automatic Weather Station (AWS) data and the Wind Profiler data. Four different types of Planetary Boundary Layer (PBL) parameterization schemes have been investigated in order to find out the most suitable one for the Micro-Alpha Scale domain in terms of both accuracy and efficiency. The bulk aerodynamic type of PBL parameterization scheme is found to be the most suitable. Investigation of the free-slip lower boundary condition is performed and the simulated result is compared with that obtained with friction. The fourth part is the use of the modified PennState/NCAR MM5 model for an idealized flow simulation. The idealized uniform flow used is nonhydrostatic and has a constant Froude number. Sensitivity tests are performed by varying the Froude number, and the regime diagram is constructed. Moreover, nondimensional drag is found to be useful for regime identification. The model result is also compared with the analytic results by Miles (1969) and Smith (1980, 1985), and the numerical results of Stein (1992), Miranda and James (1992) and Olaffson and Bougeault (1997). It is found that the simulated result in the present study is comparable with those. The fifth part is the construction of the regime diagram for Lantau Island of Hong Kong. All eight major wind directions are discussed.
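    The regime diagrams mentioned above are organized by the Froude number of the upstream flow, Fr = U / (N h), where N is the Brunt-Vaisala frequency of the upstream sounding and h the obstacle height. The sketch below computes Fr from an illustrative sounding; the single regime threshold is an illustrative assumption and not the thresholds derived in the thesis:

```python
import numpy as np

G = 9.81  # gravitational acceleration [m s-2]

def brunt_vaisala(theta, z):
    """Brunt-Vaisala frequency N = sqrt(g/theta * dtheta/dz) from a potential-temperature profile."""
    dtheta_dz = np.gradient(theta, z)
    return np.sqrt(G / theta * dtheta_dz)

def froude_number(u, n, h):
    """Obstacle Froude number Fr = U / (N h)."""
    return u / (n * h)

# Illustrative upstream sounding: potential temperature increasing 3 K per km.
z = np.linspace(0.0, 3000.0, 31)
theta = 300.0 + 3.0e-3 * z
n_mean = brunt_vaisala(theta, z).mean()          # about 0.01 s-1

for u in (2.0, 5.0, 10.0):                       # upstream wind speeds [m s-1]
    fr = froude_number(u, n_mean, h=934.0)       # Lantau Peak height, roughly 934 m
    regime = "flow mostly around the obstacle" if fr < 1.0 else "flow mostly over the obstacle"
    print(f"U={u:4.1f} m/s  Fr={fr:.2f}  -> {regime} (illustrative threshold)")
```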

  14. Simulation of APEX data: the SENSOR approach

    NASA Astrophysics Data System (ADS)

    Boerner, Anko; Schaepman, Michael E.; Schlaepfer, Daniel; Wiest, Lorenz; Reulke, Ralf

    1999-10-01

    The consistent simulation of airborne and spaceborne hyperspectral data is an important task and sometimes the only way for the adaptation and optimization of a sensor and its observing conditions, the choice and test of algorithms for data processing, error estimations and the evaluation of the capabilities of the whole sensor system. The integration of three approaches is suggested for the data simulation of APEX (Airborne Prism Experiment): (1) a spectrally consistent approach (e.g. using AVIRIS data), (2) a geometrically consistent approach (e.g. using CASI data), and (3) an end-to-end simulation of the sensor system. In this paper, the last approach is discussed in detail. Such a technique should be used if there is no simple deterministic relation between input and output parameters. The simulation environment SENSOR (Software Environment for the Simulation of Optical Remote Sensing Systems) presented here includes a full model of the sensor system, the observed object and the atmosphere. The simulator consists of three parts. The first part describes the geometrical relations between object, sun, and sensor using a ray tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor-radiance using a pre-calculated multidimensional lookup-table for the atmospheric boundary conditions and bi-directional reflectances. Part three consists of an optical and an electronic sensor model for the generation of digital images. Application-specific algorithms for data processing must be considered additionally. The benefit of using an end-to-end simulation approach is demonstrated, an example of a simulated APEX data cube is given, and preliminary steps of evaluation of SENSOR are carried out.

  15. Preclinical endoscopic training using a part-task simulator: learning curve assessment and determination of threshold score for advancement to clinical endoscopy.

    PubMed

    Jirapinyo, Pichamol; Abidi, Wasif M; Aihara, Hiroyuki; Zaki, Theodore; Tsay, Cynthia; Imaeda, Avlin B; Thompson, Christopher C

    2017-10-01

    Preclinical simulator training has the potential to decrease endoscopic procedure time and patient discomfort. This study aims to characterize the learning curve of endoscopic novices in a part-task simulator and propose a threshold score for advancement to initial clinical cases. Twenty novices with no prior endoscopic experience underwent repeated endoscopic simulator sessions using the part-task simulator. Simulator scores were collected; their inverse was averaged and fit to an exponential curve. The incremental improvement after each session was calculated. Plateau was defined as the session after which incremental improvement in simulator score model was less than 5%. Additionally, all participants filled out questionnaires regarding simulator experience after sessions 1, 5, 10, 15, and 20. A visual analog scale and NASA task load index were used to assess levels of comfort and demand. Twenty novices underwent 400 simulator sessions. Mean simulator scores at sessions 1, 5, 10, 15, and 20 were 78.5 ± 5.95, 176.5 ± 17.7, 275.55 ± 23.56, 347 ± 26.49, and 441.11 ± 38.14. The best-fit exponential model was [time/score] = 26.1 × [session #]^(-0.615); r² = 0.99. This corresponded to an incremental improvement in score of 35% after the first session, 22% after the second, 16% after the third and so on. Incremental improvement dropped below 5% after the 12th session, corresponding to the predicted score of 265. Simulator training was related to higher comfort maneuvering an endoscope and increased readiness for supervised clinical endoscopy, both plateauing between sessions 10 and 15. Mental demand, physical demand, and frustration levels decreased with increased simulator training. Preclinical training using an endoscopic part-task simulator appears to increase comfort level and decrease mental and physical demand associated with endoscopy. Based on a rigorous model, we recommend that novices complete a minimum of 12 training sessions and obtain a simulator score of at least 265 to be best prepared for clinical endoscopy.
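    The plateau criterion above (first session at which the fitted incremental improvement drops below 5%) can be reproduced on any score series by fitting the same power-law form to the inverse scores. The sketch below uses synthetic scores rather than the study data, and the log-log least-squares fit is one straightforward way to estimate the coefficients, an assumption about the authors' fitting procedure:

```python
import numpy as np

def fit_inverse_power_law(scores):
    """Fit 1/score = a * session**b by least squares in log-log space; returns (a, b)."""
    sessions = np.arange(1, len(scores) + 1)
    b, log_a = np.polyfit(np.log(sessions), np.log(1.0 / np.asarray(scores)), 1)
    return float(np.exp(log_a)), float(b)

def plateau_session(a, b, threshold=0.05, max_sessions=100):
    """Session after which the fitted session-to-session score improvement first falls below threshold."""
    s = np.arange(1, max_sessions + 1)
    score = 1.0 / (a * s**b)
    improvement = np.diff(score) / score[:-1]       # fractional gain from session i to i+1
    idx = int(np.argmax(improvement < threshold))   # first gain below the threshold
    return int(s[idx]), float(score[idx])

# Synthetic 20-session score series roughly following a power law plus noise.
rng = np.random.default_rng(0)
scores = 80.0 * np.arange(1, 21)**0.615 * rng.normal(1.0, 0.05, 20)

a, b = fit_inverse_power_law(scores)
session, score_at_plateau = plateau_session(a, b)
print(f"fit: 1/score = {a:.4g} * session^{b:.3f}")
print(f"plateau after session {session}, predicted score {score_at_plateau:.0f}")
```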

  16. Modeling an anode layer Hall thruster and its plume

    NASA Astrophysics Data System (ADS)

    Choi, Yongjun

    This thesis consists of two parts: a study of the D55 Hall thruster channel using a hydrodynamic model; and particle simulations of plasma plume flow from the D55 Hall thruster. The first part of this thesis investigates the xenon plasma properties within the D55 thruster channel using a hydrodynamic model. The discharge voltage (V) and current (I) characteristic of the D55 Hall thruster are studied. The hydrodynamic model fails to accurately predict the V-I characteristics. This analysis shows that the model needs to be improved. Also, the hydrodynamic model is used to simulate the plasma flow within the D55 Hall thruster. This analysis is performed to investigate the plasma properties of the channel exit. It is found that the hydrodynamic model is very sensitive to initial conditions, and fails to simulate the complete domain of the D55 Hall thruster. However, the model successfully calculates the channel domain of the D55 Hall thruster. The results show that, at the thruster exit, the plasma density has a maximum value while the ion velocity has a minimum at the channel center. Also, the results show that the flow angle varies almost linearly across the exit plane and increases from the center to the walls. Finally, the hydrodynamic model results are used to estimate the plasma properties at the thruster nozzle exit. The second part of the thesis presents two dimensional axisymmetric simulations of xenon plasma plume flow fields from the D55 anode layer Hall thruster. A hybrid particle-fluid method is used for the simulations. The magnetic field near the Hall thruster exit is included in the calculation. The plasma properties obtained from the hydrodynamic model are used to determine boundary conditions for the simulations. In these simulations, the Boltzmann model and a detailed fluid model are used to compute the electron properties, the direct simulation Monte Carlo method models the collisions of heavy particles, and the Particle-In-Cell method models the transport of ions in an electric field. The accuracy of the simulation is assessed through comparison with various sets of measured data. It is found that a magnetic field significantly affects the profile of the plasma in the Detailed model. For instance, the plasma potential decreases more rapidly with distance from the thruster in the presence of a magnetic field. Results predicted by the Detailed model with the magnetic field are in better agreement with experimental data than those obtained with other models investigated.
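    The Boltzmann electron model mentioned above closes the hybrid simulation by relating the plasma potential to the quasi-neutral plasma density. As a minimal sketch of that relation only (isothermal electrons, with reference density, electron temperature, and potential values assumed for illustration, not the thesis configuration):

```python
import numpy as np

def boltzmann_potential(n_e, n_ref, te_ev, phi_ref=0.0):
    """Plasma potential from the isothermal Boltzmann electron relation.

    n_e = n_ref * exp(e*(phi - phi_ref) / (k*Te)) inverts to
    phi = phi_ref + Te[V] * ln(n_e / n_ref), since kTe/e expressed in volts equals Te in eV.
    """
    return phi_ref + te_ev * np.log(n_e / n_ref)

# Illustrative plume densities decaying away from the thruster exit [m-3].
n_e = np.array([1e17, 3e16, 1e16, 1e15])
phi = boltzmann_potential(n_e, n_ref=1e17, te_ev=3.0, phi_ref=20.0)
print(np.round(phi, 2))   # potential [V] drops as the plume expands
```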

  17. Simulation of Neural Firing Dynamics: A Student Project.

    ERIC Educational Resources Information Center

    Kletsky, E. J.

    This paper describes a student project in digital simulation techniques that is part of a graduate systems analysis course entitled Biosimulation. The students chose different simulation techniques to solve a problem related to the neuron model. (MLH)

  18. Active Transportation and Demand Management (ATDM) foundational research : Analysis, Modeling, and Simulation (AMS) Concept of Operations (CONOPS).

    DOT National Transportation Integrated Search

    2013-06-01

    As part of the Federal Highway Administration's (FHWA) Active Transportation and Demand Management (ATDM) Foundational Research, this ATDM Analysis, Modeling and Simulation (AMS) Concept of Operations (CONOPS) provides the description of the ATDM A...

  19. Modeling and FE Simulation of Quenchable High Strength Steels Sheet Metal Hot Forming Process

    NASA Astrophysics Data System (ADS)

    Liu, Hongsheng; Bao, Jun; Xing, Zhongwen; Zhang, Dejin; Song, Baoyu; Lei, Chengxi

    2011-08-01

    High strength steel (HSS) sheet metal hot forming process is investigated by means of numerical simulations. With regard to a reliable numerical process design, the knowledge of the thermal and thermo-mechanical properties is essential. In this article, tensile tests are performed to examine the flow stress of the material HSS 22MnB5 at different strains, strain rates, and temperatures. A constitutive model based on a phenomenological approach is developed to describe the thermo-mechanical properties of the material 22MnB5 by fitting the experimental data. A 2D coupled thermo-mechanical finite element (FE) model is developed to simulate the HSS sheet metal hot forming process for a U-channel part. The ABAQUS/explicit model is used to conduct the hot forming stage simulations, and the ABAQUS/implicit model is used for accurately predicting the springback which happens at the end of the hot forming stage. Material modeling and FE numerical simulations are carried out to investigate the effect of the processing parameters on the hot forming process. The processing parameters have a significant influence on the microstructure of the U-channel part. The springback after the hot forming stage is the main factor impairing the shape precision of the hot-formed part. The mechanism of springback is proposed and verified through numerical simulations and tensile loading-unloading tests. Creep strain is found in the tensile loading-unloading test under isothermal condition and has a distinct effect on springback. According to the numerical and experimental results, it can be concluded that springback is mainly caused by different cooling rates and the nonhomogeneous shrinkage of the material during the hot forming process; the creep strain is the main factor influencing the amount of the springback.
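    The constitutive model above is fitted to tensile data at different strains, strain rates, and temperatures. As a deliberately reduced sketch, the snippet fits the simple Hollomon hardening law sigma = K * eps^n to flow-stress points at a single temperature and strain rate; the full phenomenological 22MnB5 model of the paper also carries temperature and strain-rate dependence, which is omitted here, and the numbers are synthetic:

```python
import numpy as np

def fit_hollomon(strain, stress):
    """Fit sigma = K * eps^n by linear regression in log-log space; returns (K, n)."""
    n, log_k = np.polyfit(np.log(strain), np.log(stress), 1)
    return float(np.exp(log_k)), float(n)

# Synthetic isothermal flow-stress data (true K = 650 MPa, n = 0.18, plus noise).
rng = np.random.default_rng(0)
strain = np.linspace(0.02, 0.20, 10)
stress = 650.0 * strain**0.18 * rng.normal(1.0, 0.01, strain.size)

K, n = fit_hollomon(strain, stress)
print(f"K = {K:.0f} MPa, n = {n:.3f}")
```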

  20. The Sensitivity of WRF Daily Summertime Simulations over West Africa to Alternative Parameterizations. Part 1: African Wave Circulation

    NASA Technical Reports Server (NTRS)

    Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew

    2014-01-01

    The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional-atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of simulations in September. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. The 64 combinations combine WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in a second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40-0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year, while a parallel benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations, but less realistic spatiotemporal variability. The largest favorable impact on WRF vorticity validation is achieved by selecting the Grell-Devenyi cumulus convection scheme, resulting in higher correlations against reanalysis than simulations using the Kain-Fritsch convection scheme. Other parameterizations have less obvious impact, although WRF configurations incorporating one surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.

  1. Mechatronic modeling of a 750kW fixed-speed wind energy conversion system using the Bond Graph Approach.

    PubMed

    Khaouch, Zakaria; Zekraoui, Mustapha; Bengourram, Jamaa; Kouider, Nourreeddine; Mabrouki, Mustapha

    2016-11-01

    In this paper, we focus on modeling the main parts of a wind turbine (blades, gearbox, tower, generator, and pitching system) from a mechatronics viewpoint using the Bond-Graph Approach (BGA). These parts are then combined in order to simulate the complete system. Moreover, the real dynamic behavior of the wind turbine is taken into account, and with the new model the final load simulation is more realistic, offering benefits and reliable system performance. This model can be used to develop control algorithms to reduce fatigue loads and enhance power production. Different simulations are carried out in order to validate the proposed wind turbine model, using real data provided in the open literature (blade profile and gearbox parameters for a 750 kW wind turbine). Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  2. A comparison of three approaches for simulating fine-scale surface winds in support of wildland fire management. Part II. An exploratory study of the effect of simulated winds on fire growth simulations

    Treesearch

    Jason M. Forthofer; Bret W. Butler; Charles W. McHugh; Mark A. Finney; Larry S. Bradshaw; Richard D. Stratton; Kyle S. Shannon; Natalie S. Wagenbrenner

    2014-01-01

    The effect of fine-resolution wind simulations on fire growth simulations is explored. The wind models are (1) a wind field consisting of constant speed and direction applied everywhere over the area of interest; (2) a tool based on the solution of the conservation of mass only (termed mass-conserving model) and (3) a tool based on a solution of conservation of mass...

  3. Hydrogeology and simulation of ground-water flow and land-surface subsidence in the northern part of the Gulf Coast aquifer system, Texas

    USGS Publications Warehouse

    Kasmarek, Mark C.; Robinson, James L.

    2004-01-01

    As a part of the Texas Water Development Board Ground-Water Availability Modeling program, the U.S. Geological Survey developed and tested a numerical finite-difference (MODFLOW) model to simulate ground-water flow and land-surface subsidence in the northern part of the Gulf Coast aquifer system in Texas from predevelopment (before 1891) through 2000. The model is intended to be a tool that water-resource managers can use to address future ground-water-availability issues. From land surface downward, the Chicot aquifer, the Evangeline aquifer, the Burkeville confining unit, the Jasper aquifer, and the Catahoula confining unit are the hydrogeologic units of the Gulf Coast aquifer system. Withdrawals of large quantities of ground water have resulted in potentiometric surface (head) declines in the Chicot, Evangeline, and Jasper aquifers and land-surface subsidence (primarily in the Houston area) from depressurization and compaction of clay layers interbedded in the aquifer sediments. In a generalized conceptual model of the aquifer system, water enters the ground-water-flow system in topographically high outcrops of the hydrogeologic units in the northwestern part of the approximately 25,000-square-mile model area. Water that does not discharge to streams flows to intermediate and deep zones of the system southeastward of the outcrop areas, where it is discharged by wells and by upward leakage in topographically low areas near the coast. The uppermost parts of the aquifer system, which include outcrop areas, are under water-table conditions. As depth increases in the aquifer system and as interbedded sand and clay accumulate, water-table conditions evolve into confined conditions. The model comprises four layers, one for each of the hydrogeologic units of the aquifer system except the Catahoula confining unit, the assumed no-flow base of the system. Each layer consists of 137 rows and 245 columns of uniformly spaced grid blocks, each block representing 1 square mile. Lateral no-flow boundaries were located on the basis of outcrop extent (northwestern), major streams (southwestern, northeastern), and downdip limit of freshwater (southeastern). The MODFLOW general-head boundary package was used to simulate recharge and discharge in the outcrops of the hydrogeologic units. Simulation of land-surface subsidence (actually, compaction of clays) and release of water from storage in the clays of the Chicot and Evangeline aquifers was accomplished using the Interbed-Storage Package designed for use with the MODFLOW model. The model was calibrated by trial-and-error adjustment of selected model input data in a series of transient simulations until the model output (potentiometric surfaces, land-surface subsidence, and selected water-budget components) reasonably reproduced field-measured (or estimated) aquifer responses. Model calibration comprised four elements: The first was qualitative comparison of simulated and measured heads in the aquifers for 1977 and 2000, and quantitative comparison by computation and areal distribution of the root-mean-square error between simulated and measured heads. The second calibration element was comparison of simulated and measured hydrographs from wells in the aquifers in a number of counties throughout the modeled area. The third calibration element was comparison of simulated water-budget components (primarily recharge and discharge) to estimates of physically reasonable ranges of actual water-budget components. The fourth calibration element was comparison of simulated land-surface subsidence from predevelopment to 2000 to measured land-surface subsidence from 1906 through 1995.
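
    The abstract describes a conventional MODFLOW grid and boundary setup (four layers of 1-square-mile cells, a general-head boundary in the outcrop area, withdrawal wells). A minimal FloPy sketch of that kind of setup is shown below; the grid dimensions follow the abstract, but every hydraulic value and package choice is an illustrative assumption, and the Interbed-Storage (subsidence) package used in the actual study is omitted.

    # Illustrative FloPy sketch of a 4-layer MODFLOW grid similar to the one described above;
    # all property values are placeholders, not calibrated data from the study.
    import flopy

    mf = flopy.modflow.Modflow("gulf_coast_sketch", exe_name="mf2005")

    nlay, nrow, ncol = 4, 137, 245              # Chicot, Evangeline, Burkeville, Jasper
    delr = delc = 5280.0                        # 1-mile-square cells, in feet
    top = 200.0
    botm = [-400.0, -1200.0, -1400.0, -2600.0]  # assumed layer bottoms (ft)

    dis = flopy.modflow.ModflowDis(mf, nlay, nrow, ncol, delr=delr, delc=delc,
                                   top=top, botm=botm, nper=1, perlen=365.0)
    bas = flopy.modflow.ModflowBas(mf, ibound=1, strt=top)
    lpf = flopy.modflow.ModflowLpf(mf, hk=[50.0, 30.0, 0.1, 20.0],
                                   vka=[5.0, 3.0, 0.01, 2.0], laytyp=[1, 0, 0, 0])

    # General-head boundary standing in for recharge/discharge in the outcrop row
    ghb_cells = [[0, 0, j, 150.0, 1000.0] for j in range(ncol)]   # lay, row, col, head, cond
    ghb = flopy.modflow.ModflowGhb(mf, stress_period_data={0: ghb_cells})

    # One illustrative withdrawal well in the deepest aquifer layer
    wel = flopy.modflow.ModflowWel(mf, stress_period_data={0: [[3, 70, 120, -50000.0]]})

    pcg = flopy.modflow.ModflowPcg(mf)
    oc = flopy.modflow.ModflowOc(mf)

    mf.write_input()   # running the model requires a MODFLOW-2005 executable on the path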

  4. Simulation of ground-water flow in coastal Georgia and adjacent parts of South Carolina and Florida-predevelopment, 1980, and 2000

    USGS Publications Warehouse

    Payne, Dorothy F.; Rumman, Malek Abu; Clarke, John S.

    2005-01-01

    A digital model was developed to simulate steady-state ground-water flow in a 42,155-square-mile area of coastal Georgia and adjacent parts of South Carolina and Florida. The model was developed to (1) understand and refine the conceptual model of regional ground-water flow, (2) serve as a framework for the development of digital subregional ground-water flow and solute-transport models, and (3) serve as a tool for future evaluations of hypothetical pumping scenarios used to facilitate water management in the coastal area. Single-density ground-water flow was simulated using the U.S. Geological Survey finite-difference code MODFLOW-2000 for mean-annual conditions during predevelopment (pre-1900) and the years 1980 and 2000. The model comprises seven layers: the surficial aquifer system, the Brunswick aquifer system, the Upper Floridan aquifer, the Lower Floridan aquifer, and the intervening confining units. A combination of boundary conditions was applied, including a general-head boundary condition on the top active cells of the model and a time-variable fixed-head boundary condition along part of the southern lateral boundary. Simulated heads for 1980 and 2000 conditions indicate a good match to observed values, based on a plus-or-minus 10-foot (ft) calibration target and calibration statistics. The root-mean-square error of residual water levels for the Upper Floridan aquifer was 13.0 ft for the 1980 calibration and 9.94 ft for the 2000 calibration. Some spatial patterns of residuals were indicated for the 1980 and 2000 simulations, and are likely a result of model-grid cell size and insufficiently detailed hydraulic-property and pumpage data in some areas. Simulated potentiometric surfaces for predevelopment, 1980, and 2000 conditions all show major flow system features that are indicated by estimated potentiometric maps. During 1980-2000, simulated water levels at the centers of pumping at Savannah and Brunswick rose more than 20 ft and 8 ft, respectively, in response to decreased pumping. Simulated drawdown exceeded 10 ft in the Upper Floridan aquifer across much of the western half of the model area, with drawdown exceeding 20 ft along parts of the western, northern, and southern boundaries where irrigation pumping increased during this period. From predevelopment to 2000 conditions, the simulated water budget showed an increase in inflow from, and decrease in outflow to, the general-head boundaries, and a reversal from net seaward flow to net landward flow across the coastline. Simulated changes in recharge and discharge distribution from predevelopment to 2000 conditions showed an increase in extent and magnitude of net recharge cells in the northern part of the model area, and a decrease in discharge or change to recharge in cells containing major streams and beneath major pumping centers. The model is relatively sensitive to pumping and the controlling head at the fixed-head boundary and less sensitive to the distribution of aquifer properties in general. Model limitations include: (1) its spatial scale and discretization, (2) the extent to which data are available to physically define the flow system, (3) the type of boundary conditions and controlling parameters used, (4) uncertainty in the distribution of pumping, and (5) uncertainty in field-scale hydraulic properties. The model could be improved with more accurate estimates of ground-water pumpage and better characterization of recharge and discharge.
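
    A short sketch of the kind of calibration check cited above (residuals against a plus-or-minus 10 ft target and a root-mean-square error); the arrays below are made-up placeholders, not the study's data.

    # Hypothetical calibration check: residuals and RMSE against a +/-10 ft target.
    import numpy as np

    observed  = np.array([35.2, 12.8, -4.1, 58.0, 21.5])   # measured heads, ft (placeholders)
    simulated = np.array([33.0, 18.5, -1.0, 55.2, 30.0])   # simulated heads, ft (placeholders)

    residuals = simulated - observed
    rmse = np.sqrt(np.mean(residuals ** 2))
    within_target = np.abs(residuals) <= 10.0

    print(f"RMSE = {rmse:.2f} ft")
    print(f"{within_target.sum()} of {len(residuals)} wells within +/-10 ft")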

  5. ASSESSING RESIDENTIAL EXPOSURE USING THE STOCHASTIC HUMAN EXPOSURE AND DOSE SIMULATION (SHEDS) MODEL

    EPA Science Inventory

    As part of a workshop sponsored by the Environmental Protection Agency's Office of Research and Development and Office of Pesticide Programs, the Aggregate Stochastic Human Exposure and Dose Simulation (SHEDS) Model was used to assess potential aggregate residential pesticide e...

  6. A program code generator for multiphysics biological simulation using markup languages.

    PubMed

    Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi

    2012-01-01

    To cope with the complexity of biological function simulation models, model representation with a description language is becoming popular. However, the simulation software itself becomes complex in these environments; thus, it is difficult to modify the simulation conditions, target computation resources, or calculation methods. Complex biological function simulation software involves 1) model equations, 2) boundary conditions and 3) calculation schemes. Use of a description model file is helpful for the first point and partly for the second, but the third point is difficult to handle because a variety of calculation schemes is required for simulation models constructed from two or more elementary models. We introduce a simulation software generation system that uses a description-language-based specification of the coupling calculation scheme together with a cell model description file. By using this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is shown.

  7. Assessment of NASA GISS CMIP5 and Post-CMIP5 Simulated Clouds and TOA Radiation Budgets Using Satellite Observations. Part 2; TOA Radiation Budget and CREs

    NASA Technical Reports Server (NTRS)

    Stanfield, Ryan E.; Dong, Xiquan; Xi, Baike; Del Genio, Anthony D.; Minnis, Patrick; Doelling, David; Loeb, Norman

    2014-01-01

    In Part I of this study, the NASA GISS Coupled Model Intercomparison Project (CMIP5) and post-CMIP5 (herein called C5 and P5, respectively) simulated cloud properties were assessed utilizing multiple satellite observations, with a particular focus on the southern midlatitudes (SMLs). This study applies the knowledge gained from Part I of this series to evaluate the modeled TOA radiation budgets and cloud radiative effects (CREs) globally using CERES EBAF (CE) satellite observations and the impact of regional cloud properties and water vapor on the TOA radiation budgets. Comparisons revealed that the P5- and C5-simulated global means of clear-sky and all-sky outgoing longwave radiation (OLR) match well with CE observations, while biases are observed regionally. Negative biases are found in both P5- and C5-simulated clear-sky OLR. P5-simulated all-sky albedo slightly increased over the SMLs due to the increase in low-level cloud fraction from the new planetary boundary layer (PBL) scheme. Shortwave, longwave, and net CRE are quantitatively analyzed as well. Regions of strong large-scale atmospheric upwelling/downwelling motion are also defined to compare regional differences across multiple cloud and radiative variables. In general, the P5 and C5 simulations agree with the observations better over the downwelling regime than over the upwelling regime. Comparing the results herein with the cloud property comparisons presented in Part I, the modeled TOA radiation budgets and CREs agree well with the CE observations. These results, combined with results in Part I, have quantitatively estimated how much improvement is found in the P5-simulated cloud and radiative properties, particularly over the SMLs and tropics, due to the implementation of the new PBL and convection schemes.
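
    For reference, the cloud radiative effect (CRE) compared in studies like this one is conventionally the clear-sky flux minus the all-sky flux at the top of the atmosphere. A small sketch with placeholder values (not CERES EBAF or GISS model output):

    # Conventional TOA cloud radiative effect from clear-sky and all-sky fluxes (W m-2).
    olr_all,  olr_clr  = 240.0, 265.0   # outgoing longwave radiation (placeholders)
    rsut_all, rsut_clr = 100.0,  55.0   # reflected shortwave at TOA (placeholders)

    cre_lw  = olr_clr - olr_all         # positive: clouds reduce OLR (warming effect)
    cre_sw  = rsut_clr - rsut_all       # negative: clouds reflect more shortwave (cooling)
    cre_net = cre_lw + cre_sw

    print(cre_lw, cre_sw, cre_net)      # 25.0 -45.0 -20.0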

  8. Simulation of raw water and treatment parameters in support of the disinfection by-products regulatory impact analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Regli, S.; Cromwell, J.; Mosher, J.

    The U.S. EPA has undertaken an effort to model how the water supply industry may respond to possible rules and how those responses may affect human health risk. The model is referred to as the Disinfection By-Product Regulatory Analysis Model (DBPRAM). The paper is concerned primarily with presenting and discussing the methods, underlying data, assumptions, limitations, and results for the first part of the model. This part of the model covers the creation of sets of simulated water supplies that are representative of the conditions currently encountered by public water supplies with respect to certain raw water quality and water treatment characteristics.

  9. Relationship between a solar drying model of red pepper and the kinetics of pure water evaporation (2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passamai, V.; Saravia, L.

    1997-05-01

    In part one, a simple drying model of red pepper related to water evaporation was developed. In this second part the drying model is applied by means of related experiments. Both laboratory and open air drying experiments were carried out to validate the model and simulation results are presented.

  10. A time-dependent, three-dimensional model of the Delaware Bay and River system. Part 2: Three-dimensional flow fields and residual circulation

    NASA Astrophysics Data System (ADS)

    Galperin, Boris; Mellor, George L.

    1990-09-01

    The three-dimensional model of Delaware Bay, River and adjacent continental shelf was described in Part 1. Here, Part 2 of this two-part paper demonstrates that the model is capable of realistic simulation of current and salinity distributions, tidal cycle variability, events of strong mixing caused by high winds and rapid salinity changes due to high river runoff. The 25-h average subtidal circulation strongly depends on the wind forcing. Monthly residual currents and salinity distributions demonstrate a classical two-layer estuarine circulation wherein relatively low salinity water flows out at the surface and compensating high salinity water from the shelf flows at the bottom. The salinity intrusion is most vigorous along deep channels in the Bay. Winds can generate salinity fronts inside and outside the Bay and enhance or weaken the two-layer circulation pattern. Since the portion of the continental shelf included in the model is limited, the model shelf circulation is locally wind-driven and excludes such effects as coastally trapped waves and interaction with Gulf Stream rings; nevertheless, a significant portion of the coastal elevation variability is hindcast by the model. Also, inclusion of the shelf improves simulation of salinity inside the Bay compared with simulations where the salinity boundary condition is specified at the mouth of the Bay.

  11. Dynamical diagnostics of the SST annual cycle in the eastern equatorial Pacific: Part II analysis of CMIP5 simulations

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Ying; Jin, Fei-Fei

    2017-12-01

    In this study, a simple coupled framework established in Part I is utilized to investigate inter-model diversity in simulating the equatorial Pacific SST annual cycle (SSTAC). It demonstrates that the simulated amplitude and phase characteristics of SSTAC in models are controlled by two internal dynamical factors (the damping rate and phase speed) and two external forcing factors (the strength of the annual and semi-annual harmonic forcing). These four diagnostic factors are further condensed into a dynamical response factor and a forcing factor to derive theoretical solutions of amplitude and phase of SSTAC. The theoretical solutions are in remarkable agreement with observations and CMIP5 simulations. The great diversity in the simulated SSTACs is related to the spreads in these dynamic and forcing factors. Most models tend to simulate a weak SSTAC, due to their weak damping rate and annual harmonic forcing. The latter is due to bias in the meridional asymmetry of the annual mean state of the tropical Pacific, represented by the weak cross-equatorial winds in the cold tongue region.

  12. The effect of improving task representativeness on capturing nurses’ risk assessment judgements: a comparison of written case simulations and physical simulations

    PubMed Central

    2013-01-01

    Background: The validity of studies describing clinicians' judgements based on their responses to paper cases is questionable, because commonly used paper case simulations only partly reflect real clinical environments. In this study we test whether paper case simulations evoke similar risk assessment judgements to the more realistic simulated patients used in high-fidelity physical simulations. Methods: 97 nurses (34 experienced nurses and 63 student nurses) made dichotomous assessments of risk of acute deterioration on the same 25 simulated scenarios in both paper case and physical simulation settings. Scenarios were generated from real patient cases. Measures of judgement 'ecology' were derived from the same case records. The relationship between nurses' judgements, actual patient outcomes (i.e. ecological criteria), and patient characteristics was described using the methodology of judgement analysis. Logistic regression models were constructed to calculate Lens Model Equation parameters. Parameters were then compared between the modeled paper-case and physical-simulation judgements. Results: Participants had significantly less achievement (ra) judging physical simulations than when judging paper cases. They used less modelable knowledge (G) with physical simulations than with paper cases, while retaining similar cognitive control and consistency on repeated patients. Respiration rate, the most important cue for predicting patient risk in the ecological model, was weighted most heavily by participants. Conclusions: To the extent that accuracy in judgement analysis studies is a function of task representativeness, improving task representativeness via high-fidelity physical simulations resulted in lower judgement performance in risk assessments amongst nurses when compared to paper case simulations. Lens Model statistics could prove useful when comparing different options for the design of simulations used in clinical judgement analysis. The approach outlined may be of value to those designing and evaluating clinical simulations as part of education and training strategies aimed at improving clinical judgement and reasoning. PMID:23718556
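
    The Lens Model Equation parameters mentioned above (achievement, knowledge, consistency) are related by the standard decomposition; this is the textbook form of Tucker's Lens Model Equation, not a formula reproduced from the paper:

    r_a = G \cdot R_s \cdot R_e + C \cdot \sqrt{1 - R_s^2} \cdot \sqrt{1 - R_e^2}

    where r_a is achievement (correlation between judgements and the criterion), R_e is environmental predictability (criterion versus the ecological model), R_s is cognitive control or consistency (judgements versus the judge's own model), G is the correlation between the two model outputs (modelable knowledge), and C captures residual, unmodelled agreement.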

  13. FEA Simulation of Free-Bending - a Preforming Step in the Hydroforming Process Chain

    NASA Astrophysics Data System (ADS)

    Beulich, N.; Craighero, P.; Volk, W.

    2017-09-01

    High-strength steel and aluminum alloys are essential for developing innovative, lightweight space-frame concepts. The intended design is built from car body parts with high geometrical complexity and reduced material thickness. Over the past few years, many complex car body parts have been produced using hydroforming. To increase the accuracy of hydroforming for prospective car concepts, virtual simulation of the forming process is becoming more important. As a part of process digitalization, it is necessary to develop a simulation model for the hydroforming process chain. The preforming of longitudinally welded tubes is therefore implemented by the use of three-dimensional free-bending. This technique is able to reproduce complex deflection curves in combination with innovative low-thickness material design for hydroforming processes. As a first step towards the complete process simulation, this paper deals with the development of a finite element simulation model for the free-bending process with 6 degrees of freedom. A mandrel built from spherical segments connected by a steel rope is located inside the tube to prevent geometrical instability. Critical parameters for the result of the bending process are evaluated and optimized. The simulation model is verified by surface measurements of a two-dimensional bending test.

  14. A Last Glacial Maximum world-ocean simulation at eddy-permitting resolution - Part 1: Experimental design and basic evaluation

    NASA Astrophysics Data System (ADS)

    Ballarotta, M.; Brodeau, L.; Brandefelt, J.; Lundberg, P.; Döös, K.

    2013-01-01

    Most state-of-the-art climate models include a coarsely resolved oceanic component, which has difficulties in capturing detailed dynamics; therefore, eddy-permitting/eddy-resolving simulations have been developed to reproduce the observed World Ocean. In this study, an eddy-permitting numerical experiment is conducted to simulate the global ocean state for a period of the Last Glacial Maximum (LGM, ~26,500 to 19,000 yr ago) and to investigate the improvements gained by taking these finer spatial scales into account. The ocean general circulation model is forced by a 49-yr sample of LGM atmospheric fields constructed from a quasi-equilibrated climate-model simulation. The initial state and the bottom boundary condition conform to the Paleoclimate Modelling Intercomparison Project (PMIP) recommendations. Before evaluating the model's ability to represent the paleo-proxy reconstruction of the surface state, the LGM experiment is, in this first part of the investigation, compared with a present-day eddy-permitting hindcast simulation as well as with the available PMIP results. It is shown that the LGM eddy-permitting simulation is consistent with the quasi-equilibrated climate-model simulation, but large discrepancies are found with the PMIP model analyses, probably due to the different equilibration states. The strongest meridional gradients of the sea-surface temperature are located near 40° N and S, owing to the particularly large North Atlantic and Southern Ocean sea-ice covers. These also modify the locations of the convection sites (where deep water forms), and most of the LGM Conveyor Belt circulation consequently takes place in a thinner layer than today. Despite some discrepancies with other LGM simulations, a glacial state is captured, and the eddy-permitting simulation undertaken here yielded a useful set of data for comparisons with paleo-proxy reconstructions.

  15. Numerical simulation of a compressible homogeneous, turbulent shear flow. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Feiereisen, W. J.; Reynolds, W. C.; Ferziger, J. H.

    1981-01-01

    A direct, low-Reynolds-number numerical simulation was performed on a homogeneous turbulent shear flow. The full compressible Navier-Stokes equations were used in a simulation on the ILLIAC IV computer with a 64,000-point mesh. The flow fields generated by the code are used as an experimental data base to examine the behavior of the Reynolds stresses in this simple, compressible flow. The variation of the structure of the stresses and their dynamic equations as the character of the flow changed is emphasized. The structure of the stress tensor is more heavily dependent on the shear number and less on the fluctuating Mach number. The pressure-strain correlation tensor in the dynamic equations is directly calculated in this simulation. These correlations are decomposed into several parts, as contrasted with the traditional incompressible decomposition into two parts. The performance of existing models for the conventional terms is examined, and a model is proposed for the 'mean fluctuating' part.
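
    For reference, the pressure-strain correlation tensor referred to above has the standard definition (written here in its usual incompressible-flow form; the thesis's own compressible decomposition is not reproduced):

    \Pi_{ij} = \overline{ p' \left( \frac{\partial u'_i}{\partial x_j} + \frac{\partial u'_j}{\partial x_i} \right) }

    In incompressible flow this term is conventionally split into a "slow" (turbulence-turbulence) part and a "rapid" (mean-shear) part via the Poisson equation for the fluctuating pressure; the compressible simulation above adds further contributions beyond this two-part split.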

  16. Friction and lubrication modelling in sheet metal forming: Influence of lubrication amount, tool roughness and sheet coating on product quality

    NASA Astrophysics Data System (ADS)

    Hol, J.; Wiebenga, J. H.; Carleer, B.

    2017-09-01

    In the stamping of automotive parts, friction and lubrication play a key role in achieving high-quality products. In the development process of new automotive parts, it is therefore crucial to accurately account for these effects in sheet metal forming simulations. This paper presents a selection of results considering friction and lubrication modelling in sheet metal forming simulations of a front fender product. For varying lubrication conditions, the front fender can show either wrinkling or fractures. The front fender is modelled using different lubrication amounts, tool roughnesses and sheet coatings to show the strong influence of friction on both part quality and the overall production stability. For this purpose, the TriboForm software is used in combination with the AutoForm software. The results demonstrate that the TriboForm software enables the simulation of friction behaviour for varying lubrication conditions, resulting in a generally applicable approach for friction characterization under industrial sheet metal forming process conditions.

  17. Underwater Electromagnetic Sensor Networks, Part II: Localization and Network Simulations

    PubMed Central

    Zazo, Javier; Valcarcel Macua, Sergio; Zazo, Santiago; Pérez, Marina; Pérez-Álvarez, Iván; Jiménez, Eugenio; Cardona, Laura; Brito, Joaquín Hernández; Quevedo, Eduardo

    2016-01-01

    In the first part of the paper, we modeled and characterized the underwater radio channel in shallow waters. In the second part, we analyze the application requirements for an underwater wireless sensor network (U-WSN) operating in the same environment and perform detailed simulations. We consider two localization applications, namely self-localization and navigation aid, and propose algorithms that work well under the specific constraints associated with U-WSN, namely low connectivity, low data rates and high packet loss probability. We propose an algorithm where the sensor nodes collaboratively estimate their unknown positions in the network using a low number of anchor nodes and distance measurements from the underwater channel. Once the network has been self-located, we consider a node that estimates its position for underwater navigation by communicating with neighboring nodes. We also propose a communication system and simulate the whole electromagnetic U-WSN in the Castalia simulator to evaluate the network performance, including propagation impairments (e.g., noise, interference), radio parameters (e.g., modulation scheme, bandwidth, transmit power), hardware limitations (e.g., clock drift, transmission buffer) and complete MAC and routing protocols. We also explain the changes that have to be made to Castalia in order to perform the simulations. In addition, we propose a parametric model of the communication channel that matches well with the results from the first part of this paper. Finally, we provide simulation results for some illustrative scenarios. PMID:27999309
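
    A minimal sketch of anchor-based self-localization of the kind described above: a node with noisy range estimates to a few anchors solves a nonlinear least-squares problem for its position. The anchor coordinates, range values, and solver choice here are illustrative assumptions, not the paper's algorithm.

    # Hypothetical 2-D multilateration from noisy ranges to anchor nodes.
    import numpy as np
    from scipy.optimize import least_squares

    anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])  # metres
    ranges  = np.array([36.1, 41.2, 30.8, 35.9])   # noisy measured distances (placeholders)

    def residuals(p):
        # difference between predicted and measured distances to each anchor
        return np.linalg.norm(anchors - p, axis=1) - ranges

    estimate = least_squares(residuals, x0=np.array([25.0, 25.0])).x
    print("estimated node position:", estimate)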

  18. The carbon cycle in the Australian Community Climate and Earth System Simulator (ACCESS-ESM1) - Part 1: Model description and pre-industrial simulation

    NASA Astrophysics Data System (ADS)

    Law, Rachel M.; Ziehn, Tilo; Matear, Richard J.; Lenton, Andrew; Chamberlain, Matthew A.; Stevens, Lauren E.; Wang, Ying-Ping; Srbinovsky, Jhan; Bi, Daohua; Yan, Hailin; Vohralik, Peter F.

    2017-07-01

    Earth system models (ESMs) that incorporate carbon-climate feedbacks represent the present state of the art in climate modelling. Here, we describe the Australian Community Climate and Earth System Simulator (ACCESS)-ESM1, which comprises atmosphere (UM7.3), land (CABLE), ocean (MOM4p1), and sea-ice (CICE4.1) components with OASIS-MCT coupling, to which ocean and land carbon modules have been added. The land carbon model (as part of CABLE) can optionally include both nitrogen and phosphorous limitation on the land carbon uptake. The ocean carbon model (WOMBAT, added to MOM) simulates the evolution of phosphate, oxygen, dissolved inorganic carbon, alkalinity and iron with one class of phytoplankton and zooplankton. We perform multi-centennial pre-industrial simulations with a fixed atmospheric CO2 concentration and different land carbon model configurations (prescribed or prognostic leaf area index). We evaluate the equilibration of the carbon cycle and present the spatial and temporal variability in key carbon exchanges. Simulating leaf area index results in a slight warming of the atmosphere relative to the prescribed leaf area index case. Seasonal and interannual variations in land carbon exchange are sensitive to whether leaf area index is simulated, with interannual variations driven by variability in precipitation and temperature. We find that the response of the ocean carbon cycle shows reasonable agreement with observations. While our model overestimates surface phosphate values, the global primary productivity agrees well with observations. Our analysis highlights some deficiencies inherent in the carbon models and where the carbon simulation is negatively impacted by known biases in the underlying physical model and consequent limits on the applicability of this model version. We conclude the study with a brief discussion of key developments required to further improve the realism of our model simulation.

  19. Numerical Simulation of Ground-Water Flow and Assessment of the Effects of Artificial Recharge in the Rialto-Colton Basin, San Bernardino County, California

    USGS Publications Warehouse

    Woolfenden, Linda R.; Koczot, Kathryn M.

    2001-01-01

    The Rialto-Colton Basin, in western San Bernardino County, California, was chosen for storage of imported water because of the good quality of native ground water, the known storage capacity for additional ground-water storage in the basin, and the availability of imported water. To supplement native ground-water resources and offset overdraft conditions in the basin during dry periods, artificial-recharge operations during wet periods in the Rialto-Colton Basin were begun in 1982 to store surplus imported water. Local water purveyors recognized that determining the movement and ultimate disposition of the artificially recharged imported water would require a better understanding of the ground-water flow system. In this study, a finite-difference model was used to simulate ground-water flow in the Rialto-Colton Basin to gain a better understanding of the ground-water flow system and to evaluate the hydraulic effects of artificial recharge of imported water. The ground-water basin was simulated as four horizontal layers representing the river-channel deposits and the upper, middle, and lower water-bearing units. Several flow barriers bordering and internal to the Rialto-Colton Basin influence the direction of ground-water flow. Ground water may flow relatively unrestricted in the shallow parts of the flow system; however, the faults generally become more restrictive at depth. A particle-tracking model was used to simulate advective transport of imported water within the ground-water flow system and to evaluate three artificial-recharge alternatives. The ground-water flow model was calibrated to transient conditions for 1945-96. Initial conditions for the transient-state simulation were established by using 1945 recharge and discharge rates, and assuming no change in storage in the basin. Average hydrologic conditions for 1945-96 were used for the predictive simulations (1997-2027). Ground-water-level measurements made during 1945 were used for comparison with the initial-conditions simulation to determine if there was a reasonable match, and thus reasonable starting heads, for the transient simulation. The comparison between simulated head and measured water levels indicates that, overall, the simulated heads match measured water levels well; the goodness-of-fit value is 0.99. The largest differences between simulated head and measured water level occurred between Barrier H and the Rialto-Colton Fault. Simulated heads near the Santa Ana River and Warm Creek, and simulated heads northwest of Barrier J, generally are within 30 feet of measured water levels and five are within 20 feet. Model-simulated heads were compared with measured long-term changes in hydrographs of composite water levels in selected wells, and with measured short-term changes in hydrographs of water levels in multiple-depth observation wells installed for this project. Simulated hydraulic heads generally matched measured water levels in wells northwest of Barrier J (in the northwestern part of the basin) and in the central part of the basin during 1945-96. In addition, the model adequately simulated water levels in the southeastern part of the basin near the Santa Ana River and Warm Creek and east of an unnamed fault that subparallels the San Jacinto Fault. Simulated heads and measured water levels in the central part of the basin generally are within 10 feet until about 1982-85 when differences become greater. 
In the northwestern part of the basin southeast of Barrier J, simulated heads were as much as 50 feet higher than measured water levels during 1945-82 but matched measured water levels well after 1982. In the compartment between Barrier H and the Rialto-Colton Fault, simulated heads match well during 1945-82 but are comparatively low during 1982-96. Near the Santa Ana River and Warm Creek, simulated heads generally rose above measured water levels except during 1965-72 when simulated heads compared well with measured water levels. Average

  20. Simulation of the June 11, 2010, flood along the Little Missouri River near Langley, Arkansas, using a hydrologic model coupled to a hydraulic model

    USGS Publications Warehouse

    Westerman, Drew A.; Clark, Brian R.

    2013-01-01

    The results from the precipitation-runoff hydrologic model, the one-dimensional unsteady-state hydraulic model, and a separate two-dimensional model developed as part of a coincident study, each complement the other in terms of streamflow timing, water-surface elevations, and velocities propagated by the June 11, 2010, flood event. The simulated grids for water depth and stream velocity from each model were directly compared by subtracting the one-dimensional hydraulic model grid from the two-dimensional model grid. The absolute mean difference for the simulated water depth was 0.9 foot. Additionally, the absolute mean difference for the simulated stream velocity was 1.9 feet per second.

  1. A Hybrid of the Chemical Master Equation and the Gillespie Algorithm for Efficient Stochastic Simulations of Sub-Networks.

    PubMed

    Albert, Jaroslav

    2016-01-01

    Modeling stochastic behavior of chemical reaction networks is an important endeavor in many aspects of chemistry and systems biology. The chemical master equation (CME) and the Gillespie algorithm (GA) are the two most fundamental approaches to such modeling; however, each of them has its own limitations: the GA may require long computing times, while the CME may demand unrealistic memory storage capacity. We propose a method that combines the CME and the GA that allows one to simulate stochastically a part of a reaction network. First, a reaction network is divided into two parts. The first part is simulated via the GA, while the solution of the CME for the second part is fed into the GA in order to update its propensities. The advantage of this method is that it avoids the need to solve the CME or stochastically simulate the entire network, which makes it highly efficient. One of its drawbacks, however, is that most of the information about the second part of the network is lost in the process. Therefore, this method is most useful when only partial information about a reaction network is needed. We tested this method against the GA on two systems of interest in biology--the gene switch and the Griffith model of a genetic oscillator--and have shown it to be highly accurate. Comparing this method to four different stochastic algorithms revealed it to be at least an order of magnitude faster than the fastest among them.
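
    A toy sketch of the coupling idea described above: one part of a network is simulated with the Gillespie algorithm, while the influence of the unsimulated part enters only through a propensity computed from a (here assumed, pre-solved) CME marginal. The two-reaction system and all rate constants are invented for illustration.

    # Toy hybrid: Gillespie simulation of species X (production/degradation), where the
    # production propensity depends on the expected copy number of an external species Y.
    # mean_Y(t) stands in for the CME solution of the second sub-network (assumed known here).
    import numpy as np

    rng = np.random.default_rng(0)

    def mean_Y(t):
        # placeholder for the CME-derived mean of the unsimulated sub-network
        return 10.0 * (1.0 - np.exp(-0.1 * t))

    def gillespie(x0=0, t_end=100.0, k_prod=0.5, k_deg=0.05):
        t, x = 0.0, x0
        history = [(t, x)]
        while t < t_end:
            a_prod = k_prod * mean_Y(t)        # propensity updated from the CME part
            a_deg = k_deg * x
            a_tot = a_prod + a_deg
            if a_tot == 0.0:
                break
            t += rng.exponential(1.0 / a_tot)  # time to the next reaction event
            if rng.random() < a_prod / a_tot:  # choose which reaction fires
                x += 1
            else:
                x -= 1
            history.append((t, x))
        return history

    trace = gillespie()
    print("final copy number:", trace[-1][1])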

  2. Realization of a Complex Control & Diagnosis System on Simplified Hardware

    NASA Astrophysics Data System (ADS)

    Stetter, R.; Swamy Prasad, M.

    2015-11-01

    Energy is an important factor in today's industrial environment. Pump systems account for about 20% of the total industrial electrical energy consumption. Several studies show that with proper monitoring, control and maintenance, the efficiency of pump systems can be increased. Controlling pump systems with intelligent systems can help to reduce a pump's energy consumption by up to one third of its original consumption. The research in this paper was carried out in the scope of a research project which involves modelling and simulation of pump systems. This paper focuses on the future implementation of modelling capabilities in PLCs (programmable logic controllers). The whole project aims to use the pump itself as the sensor rather than introducing external sensors into the system, which would increase the cost considerably. One promising approach for an economic and robust industrial implementation of this intelligence is the use of PLCs. PLCs can be simulated in multiple ways; in this project, Codesys was chosen for several reasons, which are explained in this paper. The first part of this paper explains the modelling of the pump itself, the process load of the asynchronous motor with a control system, and the simulation possibilities of the motor in Codesys. The second part describes the simulation and testing of the realized system. The third part elaborates the Codesys system structure and the interfacing of the system with external files. The final part consists of comparing the results with an earlier Matlab/SIMULINK model and original test data.

  3. Statistical analysis and yield management in LED design through TCAD device simulation

    NASA Astrophysics Data System (ADS)

    Létay, Gergö; Ng, Wei-Choon; Schneider, Lutz; Bregy, Adrian; Pfeiffer, Michael

    2007-02-01

    This paper illustrates how technology computer-aided design (TCAD), which nowadays is an essential part of CMOS technology, can be applied to LED development and manufacturing. In the first part, the essential electrical and optical models inherent to LED modeling are reviewed. The second part of the work describes a methodology to improve the efficiency of the simulation procedure by using the concept of process compact models (PCMs). The last part demonstrates the capabilities of PCMs using an example of a blue InGaN LED. In particular, a parameter screening is performed to find the most important parameters, an optimization task incorporating the robustness of the design is carried out, and finally the impact of manufacturing tolerances on yield is investigated. It is indicated how the concept of PCMs can contribute to efficient design-for-manufacturing (DFM)-aware development.

  4. Computer Modeling of Direct Metal Laser Sintering

    NASA Technical Reports Server (NTRS)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling the direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with embedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
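
    A minimal sketch of the core ingredient described above, a moving laser heat source applied as a surface heating term on a conduction grid. This is a generic 2D explicit finite-difference toy, not the MSC SINDA/FORTRAN implementation, and every material and laser value is a placeholder.

    # Toy 2D explicit heat conduction with a moving Gaussian surface heat source.
    import numpy as np

    nx, ny = 80, 40
    dx = 1e-4                      # 0.1 mm grid spacing (placeholder)
    alpha = 4e-6                   # thermal diffusivity, m^2/s (placeholder)
    dt = 0.2 * dx**2 / alpha       # stable explicit time step
    rho_cp = 4e6                   # volumetric heat capacity, J/(m^3 K) (placeholder)
    q0 = 2e8                       # peak absorbed flux, W/m^2 (placeholder)
    spot = 2e-4                    # laser spot radius, m (placeholder)
    speed = 0.02                   # scan speed, m/s (placeholder)

    T = np.full((ny, nx), 300.0)   # powder-bed starting temperature, K
    x = np.arange(nx) * dx

    for step in range(400):
        laser_x = speed * step * dt                       # laser moves along the top row
        flux = q0 * np.exp(-((x - laser_x) / spot) ** 2)  # Gaussian surface flux

        Tp = np.pad(T, 1, mode="edge")                    # insulated (zero-gradient) edges
        lap = (Tp[1:-1, 2:] + Tp[1:-1, :-2] +
               Tp[2:, 1:-1] + Tp[:-2, 1:-1] - 4.0 * T) / dx**2
        T = T + alpha * dt * lap
        T[0, :] += dt * flux / (rho_cp * dx)              # deposit laser energy in top cells

    print(f"peak temperature in the section: {T.max():.0f} K")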

  5. Development of Improved Models, Stochasticity, and Frameworks for the MIT Extensible Air Network Simulation

    NASA Technical Reports Server (NTRS)

    Clarke, John-Paul

    2004-01-01

    MEANS, the MIT Extensible Air Network Simulation, was created in February of 2001, and has been developed with support from NASA Ames since August of 2001. MEANS is a simulation tool which is designed to maximize fidelity without requiring data of such a low level as to preclude easy examination of alternative scenarios. To this end, MEANS is structured in a modular fashion to allow more detailed components to be brought in when desired, and left out when they would only be an impediment. Traditionally, one of the difficulties with high-fidelity models is that they require a level of detail in their data that is difficult to obtain. For analysis of past scenarios, the required data may not have been collected, or may be considered proprietary and thus difficult for independent researchers to obtain. For hypothetical scenarios, generation of the data is sufficiently difficult to be a task in and of itself. Often, simulations designed by a researcher will model exactly one element of the problem well and in detail, while assuming away other parts of the problem which are not of interest or for which data is not available. While these models are useful for working with the task at hand, they are very often not applicable to future problems. The MEANS simulation attempts to address these problems by using a modular design which provides components of varying fidelity for each aspect of the simulation. This allows for the most accurate model for which data is available to be used. It also provides for easy analysis of sensitivity to data accuracy. This can be particularly useful in the case where accurate data is available for some subset of the situations that are to be considered. Furthermore, the ability to use the same model while examining effects on different parts of a system reduces the time spent learning the simulation, and provides for easier comparisons between changes to different parts of the system.

  6. Simulation of springback and microstructural analysis of dual phase steels

    NASA Astrophysics Data System (ADS)

    Kalyan, T. Sri.; Wei, Xing; Mendiguren, Joseba; Rolfe, Bernard

    2013-12-01

    With increasing demand for weight reduction and better crashworthiness in car development, advanced high-strength dual-phase (DP) steels have been progressively used for making automotive parts. The higher-strength steels exhibit higher springback and lower dimensional accuracy after stamping. This has necessitated the simulation of each stamped component prior to production to estimate the part's dimensional accuracy. Understanding the micro-mechanical behaviour of AHSS sheet may provide more accuracy to stamping simulations. This work is divided into two parts: first, modelling a standard channel forming process; second, modelling the microstructure of the process. The standard top-hat channel forming process, benchmark NUMISHEET'93, is used for investigating the springback of WISCO dual-phase steels. The second part of this work includes the finite element analysis of microstructures to understand the behaviour of the multi-phase steel at a more fundamental level. The outcomes of this work will help in the dimensional control of steels during the manufacturing stage based on the material's microstructure.

  7. A Multi-Model Assessment for the 2006 and 2010 Simulations under the AirQuality Model Evaluation International Initiative (AQMEII) Phase 2 over North America: Part I. Indicators of the Sensitivity of O3 and PM2.5 Formation Regimes

    EPA Science Inventory

    Under the Air Quality Model Evaluation International Initiative, Phase 2 (AQMEII-2), three online coupled air quality model simulations, with six different configurations, are analyzed for their performance, inter-model agreement, and responses to emission and meteorological chan...

  8. Model Performance Evaluation and Scenario Analysis ...

    EPA Pesticide Factsheets

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is due to the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool...
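
    A small sketch of two of the measures named above, assuming the usual definitions: Nash-Sutcliffe efficiency on the raw series, and a "magnitude-only" variant obtained by sorting both series so that sequence errors are removed. The data are placeholders, and the magnitude/sequence split below is a plausible reading of the description, not the tool's documented algorithm.

    # Nash-Sutcliffe efficiency (NSE) and a magnitude-only NSE on sorted series.
    import numpy as np

    def nse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    obs = np.array([2.1, 3.4, 8.9, 6.2, 4.0, 2.5])   # observed flows (placeholders)
    sim = np.array([2.5, 2.9, 7.1, 7.5, 4.4, 2.2])   # simulated flows (placeholders)

    print("NSE (magnitude + sequence):", round(nse(obs, sim), 3))
    print("NSE (magnitude only, sorted):", round(nse(np.sort(obs), np.sort(sim)), 3))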

  9. Fatigue Analysis of Rotating Parts. A Case Study for a Belt Driven Pulley

    NASA Astrophysics Data System (ADS)

    Sandu, Ionela; Tabacu, Stefan; Ducu, Catalin

    2017-10-01

    The present study is focused on the life estimation of a rotating part, a component of an engine assembly, namely the pulley of the coolant pump. The goal of the paper is to develop a model, supported by numerical analysis, capable of predicting the lifetime of the part. Starting from the functional drawing, CAD model and technical specifications of the part, a numerical model was developed. MATLAB code was used to develop a tool to apply the load over the selected area. The numerical analysis was performed in two steps. The first simulation concerned the inertia relief due to rotational motion about the shaft (of the pump). Results from this simulation were saved, and the stress-strain state was used as the initial condition for the analysis with the load applied. The lifetime of a good part was estimated. A defect was then created in order to investigate its influence on the working requirements. It was found that there is little influence with respect to the prescribed lifetime.

  10. User's guide to Version 2 of the Regeneration Establishment Model: Part of the Prognosis Model

    Treesearch

    Dennis E. Ferguson; Nicholas L. Crookston

    1991-01-01

    This publication describes how to use version 2 of the Regeneration Establishment Model, a computer-based simulator that is part of the Prognosis Model for Stand Development. Conifer regeneration is predicted following harvest and site preparation for forests in western Montana, central Idaho, and northern Idaho. The influence of western spruce budworm (Choristoneura...

  11. Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling

    USDA-ARS?s Scientific Manuscript database

    We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...

  12. High fidelity studies of exploding foil initiator bridges, Part 3: ALEGRA MHD simulations

    NASA Astrophysics Data System (ADS)

    Neal, William; Garasi, Christopher

    2017-01-01

    Simulations of high-voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiator (EFI) detonators, have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage and, in the case of EFIs, flyer velocity. Experimental methods have correspondingly generally been limited to the same parameters. With the advent of complex, first-principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions and predict a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this third paper of a three-part study, the experimental results presented in Part 2 are compared against 3-dimensional MHD simulations. This improved experimental capability, along with advanced simulations, offers an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.

  13. Coupling all-atom molecular dynamics simulations of ions in water with Brownian dynamics.

    PubMed

    Erban, Radek

    2016-02-01

    Molecular dynamics (MD) simulations of ions (K+, Na+, Ca2+ and Cl-) in aqueous solutions are investigated. Water is described using the SPC/E model. A stochastic coarse-grained description for ion behaviour is presented and parametrized using MD simulations. It is given as a system of coupled stochastic and ordinary differential equations, describing the ion position, velocity and acceleration. The stochastic coarse-grained model provides an intermediate description between all-atom MD simulations and Brownian dynamics (BD) models. It is used to develop a multiscale method which uses all-atom MD simulations in parts of the computational domain and (less detailed) BD simulations in the remainder of the domain.
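
    A minimal sketch of a stochastic coarse-grained ion model of the kind described above, here reduced to a standard Langevin equation for position and velocity with an Euler-Maruyama step. The friction and temperature values are placeholders rather than MD-fitted parameters, and the paper's full model additionally evolves the acceleration, which this sketch does not.

    # Euler-Maruyama integration of a Langevin model for a single coarse-grained ion.
    import numpy as np

    rng = np.random.default_rng(1)

    kB_T = 1.0        # reduced units (placeholder)
    mass = 1.0
    gamma = 5.0       # friction coefficient (placeholder)
    dt = 1e-3
    n_steps = 10_000

    sigma = np.sqrt(2.0 * gamma * kB_T / mass)   # fluctuation-dissipation relation

    x, v = 0.0, 0.0
    for _ in range(n_steps):
        # dv = -gamma*v*dt + sigma*dW ;  dx = v*dt
        v += -gamma * v * dt + sigma * np.sqrt(dt) * rng.normal()
        x += v * dt

    print("final displacement:", x)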

  14. Short-stack modeling of degradation in solid oxide fuel cells. Part I. Contact degradation

    NASA Astrophysics Data System (ADS)

    Gazzarri, J. I.; Kesler, O.

    As the first part of a two-paper series, we present a two-dimensional impedance model of a working solid oxide fuel cell (SOFC) to study the effect of contact degradation on the impedance spectrum for the purpose of non-invasive diagnosis. The two-dimensional modeled geometry includes the ribbed interconnect and is adequate to represent co- and counter-flow configurations. Simulated degradation modes include cathode delamination, interconnect oxidation, and interconnect-cathode detachment. The simulations show differences in the way each degradation mode impacts the impedance spectrum shape, suggesting that identification is possible. In Part II, we present a sensitivity analysis of the results to input parameter variability that reveals the strengths and limitations of the method, as well as describing possible interactions between input parameters and concurrent degradation modes.

  15. High Speed Civil Transport Aircraft Simulation: Reference-H Cycle 1, MATLAB Implementation

    NASA Technical Reports Server (NTRS)

    Sotack, Robert A.; Chowdhry, Rajiv S.; Buttrill, Carey S.

    1999-01-01

    The mathematical model and associated code to simulate a high speed civil transport aircraft - the Boeing Reference H configuration - are described. The simulation was constructed in support of advanced control law research. In addition to providing time histories of the dynamic response, the code includes the capabilities for calculating trim solutions and for generating linear models. The simulation relies on the nonlinear, six-degree-of-freedom equations which govern the motion of a rigid aircraft in atmospheric flight. The 1962 Standard Atmosphere Tables are used along with a turbulence model to simulate the Earth atmosphere. The aircraft model has three parts - an aerodynamic model, an engine model, and a mass model. These models use the data from the Boeing Reference H cycle 1 simulation data base. Models for the actuator dynamics, landing gear, and flight control system are not included in this aircraft model. Dynamic responses generated by the nonlinear simulation are presented and compared with results generated from alternate simulations at Boeing Commercial Aircraft Company and NASA Langley Research Center. Also, dynamic responses generated using linear models are presented and compared with dynamic responses generated using the nonlinear simulation.

  16. SynBioSS designer: a web-based tool for the automated generation of kinetic models for synthetic biological constructs

    PubMed Central

    Weeding, Emma; Houle, Jason

    2010-01-01

    Modeling tools can play an important role in synthetic biology the same way modeling helps in other engineering disciplines: simulations can quickly probe mechanisms and provide a clear picture of how different components influence the behavior of the whole. We present a brief review of available tools and present SynBioSS Designer. The Synthetic Biology Software Suite (SynBioSS) is used for the generation, storing, retrieval and quantitative simulation of synthetic biological networks. SynBioSS consists of three distinct components: the Desktop Simulator, the Wiki, and the Designer. SynBioSS Designer takes as input molecular parts involved in gene expression and regulation (e.g. promoters, transcription factors, ribosome binding sites, etc.), and automatically generates complete networks of reactions that represent transcription, translation, regulation, induction and degradation of those parts. Effectively, Designer uses DNA sequences as input and generates networks of biomolecular reactions as output. In this paper we describe how Designer uses universal principles of molecular biology to generate models of any arbitrary synthetic biological system. These models are useful as they explain biological phenotypic complexity in mechanistic terms. In turn, such mechanistic explanations can assist in designing synthetic biological systems. We also discuss, giving practical guidance to users, how Designer interfaces with the Registry of Standard Biological Parts, the de facto compendium of parts used in synthetic biology applications. PMID:20639523

  17. InterSpread Plus: a spatial and stochastic simulation model of disease in animal populations.

    PubMed

    Stevenson, M A; Sanson, R L; Stern, M W; O'Leary, B D; Sujau, M; Moles-Benfell, N; Morris, R S

    2013-04-01

    We describe the spatially explicit, stochastic simulation model of disease spread, InterSpread Plus, in terms of its epidemiological framework, operation, and mode of use. The input data required by the model, the method for simulating contact and infection spread, and methods for simulating disease control measures are described. Data and parameters that are essential for disease simulation modelling using InterSpread Plus are distinguished from those that are non-essential, and it is suggested that a rational approach to simulating disease epidemics using this tool is to start with core data and parameters, adding additional layers of complexity if and when the specific requirements of the simulation exercise require it. We recommend that simulation models of disease are best developed as part of epidemic contingency planning so decision makers are familiar with model outputs and assumptions and are well-positioned to evaluate their strengths and weaknesses to make informed decisions in times of crisis. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. The Simulation of College Enrollments: A Description of a Higher Education Enrollment Forecasting Model. New York State 1978-1994.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Office of Postsecondary Research, Information Systems, and Institutional Aid.

    A highly technical report describes higher education forecasting procedures used by the State Education Department of New York at Albany to project simulated college enrollments for New York State from 1978-1994. Basic components of the projections--generated for full- and part-time undergraduates, full- and part-time graduates, and…

  19. Simulation based optimized beam velocity in additive manufacturing

    NASA Astrophysics Data System (ADS)

    Vignat, Frédéric; Béraud, Nicolas; Villeneuve, François

    2017-08-01

    Manufacturing good parts with additive technologies relies on melt-pool dimensions and temperature, which are controlled by manufacturing strategies that are often decided on the machine side. Strategies are built on a beam path and a variable energy input. Beam paths are often a mix of contour and hatching strategies, filling the contours at each slice. The energy input depends on beam intensity and speed and is determined from simple thermal models to control melt-pool dimensions and temperature and to ensure porosity-free material. These models take into account variations in the thermal environment, such as overhanging surfaces or back-and-forth hatching paths. However, not all situations are correctly handled, and precision is limited. This paper proposes a new method to determine the energy input from a full build-chamber 3D thermal simulation. Using the results of the simulation, the energy is modified to keep the melt-pool temperature in a predetermined range. The paper first presents an experimental method to determine the optimal temperature range. In a second part, the method to optimize the beam speed from the simulation results is presented. Finally, the optimized beam path is tested in the EBM machine, and built parts are compared with parts built with an ordinary beam path.
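
    A toy sketch of the speed-adjustment idea described above: given a simulated melt-pool temperature per path segment, the beam speed is scaled so the predicted temperature moves back into a target range. The linear "temperature falls as speed rises" surrogate and all numbers are invented for illustration; the actual method uses a full 3D thermal simulation of the build chamber.

    # Toy beam-speed correction from simulated melt-pool temperatures.
    # Surrogate assumption: melt-pool temperature scales inversely with speed.
    T_MIN, T_MAX = 1900.0, 2100.0        # assumed acceptable melt-pool range, K

    segments = [
        {"speed": 500.0, "T_sim": 2250.0},   # bulk hatching, too hot
        {"speed": 500.0, "T_sim": 2050.0},   # fine as is
        {"speed": 500.0, "T_sim": 1750.0},   # over an overhang, too cold
    ]

    for seg in segments:
        T = seg["T_sim"]
        if T > T_MAX:
            seg["speed"] *= T / T_MAX        # speed up to reduce energy per unit length
        elif T < T_MIN:
            seg["speed"] *= T / T_MIN        # slow down to add energy
        print(f"T_sim={T:7.1f} K  ->  corrected speed {seg['speed']:6.1f} mm/s")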

  20. Gating Mechanisms of Mechanosensitive Channels of Large Conductance, I: A Continuum Mechanics-Based Hierarchical Framework

    PubMed Central

    Chen, Xi; Cui, Qiang; Tang, Yuye; Yoo, Jejoong; Yethiraj, Arun

    2008-01-01

    A hierarchical simulation framework that integrates information from molecular dynamics (MD) simulations into a continuum model is established to study the mechanical response of mechanosensitive channel of large-conductance (MscL) using the finite element method (FEM). The proposed MD-decorated FEM (MDeFEM) approach is used to explore the detailed gating mechanisms of the MscL in Escherichia coli embedded in a palmitoyloleoylphosphatidylethanolamine lipid bilayer. In Part I of this study, the framework of MDeFEM is established. The transmembrane and cytoplasmic helices are taken to be elastic rods, the loops are modeled as springs, and the lipid bilayer is approximated by a three-layer sheet. The mechanical properties of the continuum components, as well as their interactions, are derived from molecular simulations based on atomic force fields. In addition, analytical closed-form continuum model and elastic network model are established to complement the MDeFEM approach and to capture the most essential features of gating. In Part II of this study, the detailed gating mechanisms of E. coli-MscL under various types of loading are presented and compared with experiments, structural model, and all-atom simulations, as well as the analytical models established in Part I. It is envisioned that such a hierarchical multiscale framework will find great value in the study of a variety of biological processes involving complex mechanical deformations such as muscle contraction and mechanotransduction. PMID:18390626

  1. Finite element simulation and experimental verification of ultrasonic non-destructive inspection of defects in additively manufactured materials

    NASA Astrophysics Data System (ADS)

    Taheri, H.; Koester, L.; Bigelow, T.; Bond, L. J.

    2018-04-01

    Industrial applications of additively manufactured components are increasing quickly. Adequate quality control of the parts is necessary to ensure safety when using these materials. Base material properties, surface conditions, and the location and size of defects are some of the main targets for nondestructive evaluation of additively manufactured parts, and the problem of adequate characterization is compounded by the challenges of complex part geometry. Numerical modeling allows the interplay of the various factors to be studied, which can lead to improved measurement design. This paper presents a finite element simulation, verified by experimental results, of ultrasonic waves scattering from flat-bottom holes (FBH) in additively manufactured materials. A focused-beam immersion ultrasound transducer was used for both the modeling and the measurements on the additively manufactured samples. The samples were SS 17-4 PH steel made by laser sintering in a powder bed.

  2. Performance Summary of the 2006 Community Multiscale Air Quality (CMAQ) Simulation for the AQMEII Project: North American Application

    EPA Science Inventory

    The CMAQ modeling system has been used to simulate the CONUS using 12-km by 12-km horizontal grid spacing for the entire year of 2006 as part of the Air Quality Model Evaluation International Initiative (AQMEII). The operational model performance for O3 and PM2.5...

  3. Numerical Simulation of High-Speed Combustion Processes in Scramjet Configurations

    NASA Astrophysics Data System (ADS)

    Potturi, Amarnatha Sarma

    Flows through scramjet configurations are simulated using hybrid large-eddy simulation / Reynolds-averaged Navier-Stokes techniques. Present study is performed in three parts: parametric studies to determine the sensitivities of the predictions to modeling and algorithmic variations; formulation, implementation, and testing of several subgrid closures aimed at modeling filtered species production rates, which account for turbulence-chemistry interactions in a finite rate chemistry large-eddy simulation framework; and as a final assessment of the complete methodology, cavity-stabilized ethylene combustion is simulated. Throughout the present study, emphasis is placed on characterizing facility-specific effects, since they can have a significant influence on the numerical solution. In Part One, non-reactive and reactive flows through a model scramjet combustor with a wedge shaped injector are simulated. Different grids, flux reconstruction methods, reaction mechanisms, and inflow boundary conditions are used. To enhance fuel-air mixing, a synthetic eddy method is used to generate turbulence in the injector boundary layers and the hydrogen jets. The results show that in all the cases a lifted flame is predicted with varying standoff distances, heat releases, and shapes. In Part Two, the subgrid closures for modeling the filtered species production rates are tested on two different scramjet configurations with fundamentally different flow patterns and flame structures, one with the wedge shaped injector placed at the center of the combustor section (first, used in Part One), another with a three-dimensional ramp injector located on the upper wall of the combustor section (second). While the impact of these closures on the flow through the first configuration is insignificant, they have a more pronounced effect on the flow through the second configuration. Error analysis and performance quantification of these closures reveal that, relative to a baseline model, two of the closures improve the accuracy of the predictions, but the degree of improvement is quite modest. Also, from a cost-benefit perspective none of the models are a significant improvement over the 'laminar-chemistry' closure (where turbulence-chemistry interactions are ignored), for the configurations tested and the mesh resolutions employed. In Part Three, reactive flow through an ethylene fueled cavity flameholder is simulated using 14- and 22-species ethylene oxidation mechanisms, and the synthetic eddy method (used in Part Two) is used to introduce turbulence at the inflow plane of the flameholder. For an equivalence ratio of 0.15, the 14-species mechanism resulted in a flame blow-out, and the 22-species mechanism predicted a cavity stabilized flame. Results predicted using the 22-species mechanism compare well with the experimental data, especially, water mole-fraction distribution and pressure along the upper wall of the combustor. In general, the predictions show excellent agreement with experimental data within the cavity region; further downstream, experimental results suggest that the heat release is over-predicted in the simulations.

  4. Improvements in simulation of multiple scattering effects in ATLAS fast simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basalaev, A. E., E-mail: artem.basalaev@cern.ch

    The Fast ATLAS Tracking Simulation (Fatras) package was verified on a single-layer geometry with respect to full simulation with GEANT4. Fatras hadronic interactions and multiple scattering simulation were studied in comparison with GEANT4. Disagreement was found in the multiple scattering distributions of primary charged particles (μ, π, e). A new model for multiple scattering simulation was implemented in Fatras, based on R. Frühwirth’s mixture models. The new model was tested on the single-layer geometry and good agreement with GEANT4 was achieved. A comparison of reconstructed track parameters was also performed for the Inner Detector geometry, and Fatras with the new multiple scattering model proved to have better agreement with GEANT4. The new model of multiple scattering was added as a part of the Fatras package in the development release of the ATLAS software, ATHENA.
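
    The mixture-model idea can be illustrated with a small sampler: the projected multiple-scattering angle is drawn from a two-component Gaussian mixture (a narrow core plus a wider tail) rather than a single Gaussian. The core width uses the standard Highland estimate; the tail fraction and tail scale below are illustrative placeholders, not the Frühwirth parameters implemented in Fatras.

      # Illustrative two-component Gaussian mixture for the projected multiple
      # scattering angle (core + tail); mixture parameters are assumed, not Fatras'.
      import math
      import random

      def highland_sigma(p_mev, beta, x_over_x0, charge=1.0):
          """Standard Highland estimate of the core scattering width [rad]."""
          return (13.6 / (beta * p_mev)) * abs(charge) * math.sqrt(x_over_x0) * \
                 (1.0 + 0.038 * math.log(x_over_x0))

      def sample_scattering_angle(p_mev, beta, x_over_x0,
                                  tail_fraction=0.1, tail_scale=2.5):
          """Draw one projected scattering angle from a core/tail Gaussian mixture."""
          sigma_core = highland_sigma(p_mev, beta, x_over_x0)
          if random.random() < tail_fraction:        # occasional hard scatter
              return random.gauss(0.0, tail_scale * sigma_core)
          return random.gauss(0.0, sigma_core)

      # Example: 1 GeV/c muon traversing 2% of a radiation length.
      angles = [sample_scattering_angle(1000.0, 0.99, 0.02) for _ in range(5)]
      print(angles)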

  5. A Systems Approach to Scalable Transportation Network Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2006-01-01

    Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.

  6. Regression Model for Light Weight and Crashworthiness Enhancement Design of Automotive Parts in Frontal Car Crash

    NASA Astrophysics Data System (ADS)

    Bae, Gihyun; Huh, Hoon; Park, Sungho

    This paper deals with a regression model for light weight and crashworthiness enhancement design of automotive parts in a frontal car crash. The ULSAB-AVC model is employed for the crash analysis, and effective parts are selected based on the amount of energy absorption during the crash behavior. Finite element analyses are carried out for designated design cases in order to investigate the crashworthiness and weight according to the material and thickness of the main energy-absorbing parts. Based on the simulation results, a regression analysis is performed to construct a regression model used for light weight and crashworthiness enhancement design of automotive parts. An example of weight reduction of the main energy-absorbing parts demonstrates the validity of the constructed regression model.
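
    A minimal sketch of such a regression surrogate, assuming hypothetical finite element design cases: the absorbed energy is fitted as a linear function of part thickness and material yield strength, and the fit is then used to predict a new design. The data points and variable choices are placeholders, not the ULSAB-AVC results.

      # Illustrative least-squares regression of crash response versus design
      # variables; the data points are made-up placeholders standing in for
      # finite element design cases.
      import numpy as np

      # Design cases: (thickness [mm], yield strength [MPa]) -> absorbed energy [kJ]
      thickness = np.array([1.2, 1.5, 1.8, 1.2, 1.5, 1.8])
      yield_str = np.array([300., 300., 300., 600., 600., 600.])
      energy    = np.array([4.1, 5.6, 7.2, 5.0, 6.9, 8.8])

      # Linear regression model: E = b0 + b1*t + b2*sigma_y
      X = np.column_stack([np.ones_like(thickness), thickness, yield_str])
      coeffs, *_ = np.linalg.lstsq(X, energy, rcond=None)
      b0, b1, b2 = coeffs

      # Predict absorbed energy for a candidate design (1.6 mm, 450 MPa).
      print(b0 + b1 * 1.6 + b2 * 450.0)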

  7. Plug-and-Play Model Architecture and Development Environment for Powertrain/Propulsion System - Final CRADA Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rousseau, Aymeric

    2013-02-01

    Several tools already exist to develop detailed plant models, including GT-Power, AMESim, CarSim, and SimScape. The objective of Autonomie is not to provide a language to develop detailed models; rather, Autonomie supports the assembly and use of models from design to simulation to analysis with complete plug-and-play capabilities. Autonomie provides a plug-and-play architecture to support this ideal use of modeling and simulation for math-based automotive control system design. Models in the standard format create building blocks, which are assembled at runtime into a simulation model of a vehicle, system, subsystem, or component to be simulated. All parts of the graphical user interface (GUI) are designed to be flexible to support architectures, systems, components, and processes not yet envisioned. This allows the software to be molded to individual uses, so it can grow as requirements and technical knowledge expand. This flexibility also allows for the implementation of legacy code, including models, controller code, processes, drive cycles, and post-processing equations. A library of useful and tested models and processes is included as part of the software package to immediately support a full range of simulation and analysis tasks. Autonomie also includes a configuration and database management front end to facilitate the storage, versioning, and maintenance of all required files, such as the models themselves, the model’s supporting files, test data, and reports. During the CRADA, Argonne worked closely with GM to implement and demonstrate each of their requirements. A use case was developed by GM for every requirement and demonstrated by Argonne. Each of the new features was verified by GM experts through a series of Gate reviews. Once all the requirements were validated, they were presented to the directors as part of the GM Gate process.

  8. Finite Element Modeling, Simulation, Tools, and Capabilities at Superform

    NASA Astrophysics Data System (ADS)

    Raman, Hari; Barnes, A. J.

    2010-06-01

    Over the past thirty years Superform has been a pioneer in the SPF arena, having developed a keen understanding of the process and a range of unique forming techniques to meet varying market needs. Superform’s high-profile list of customers includes Boeing, Airbus, Aston Martin, Ford, and Rolls Royce. One of the more recent additions to Superform’s technical know-how is finite element modeling and simulation. Finite element modeling is a powerful numerical technique which, when applied to SPF, provides a host of benefits, including accurate prediction of strain levels in a part, of the presence of wrinkles, and of pressure cycles optimized for time and part thickness. This paper outlines a brief history of finite element modeling applied to SPF and then reviews some of the modeling tools and techniques that Superform has applied, and continues to apply, to successfully form complex-shaped parts superplastically. The advantages of employing modeling at the design stage are discussed and illustrated with real-world examples.

  9. Effect of linear and non-linear blade modelling techniques on simulated fatigue and extreme loads using Bladed

    NASA Astrophysics Data System (ADS)

    Beardsell, Alec; Collier, William; Han, Tao

    2016-09-01

    There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise- torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.

  10. Combination of the discontinuous Galerkin method with finite differences for simulation of seismic wave propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lisitsa, Vadim, E-mail: lisitsavv@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk; Tcheverda, Vladimir

    We present an algorithm for the numerical simulation of seismic wave propagation in models with a complex near surface part and free surface topography. The approach is based on the combination of finite differences with the discontinuous Galerkin method. The discontinuous Galerkin method can be used on polyhedral meshes; thus, it is easy to handle the complex surfaces in the models. However, this approach is computationally intense in comparison with finite differences. Finite differences are computationally efficient, but in general, they require rectangular grids, leading to the stair-step approximation of the interfaces, which causes strong diffraction of the wavefield. In this research we present a hybrid algorithm where the discontinuous Galerkin method is used in a relatively small upper part of the model and finite differences are applied to the main part of the model.

  11. A comparison of three approaches for simulating fine-scale surface winds in support of wildland fire management: Part I. Model formulation and comparison against measurements

    Treesearch

    Jason M. Forthofer; Bret W. Butler; Natalie S. Wagenbrenner

    2014-01-01

    For this study three types of wind models have been defined for simulating surface wind flow in support of wildland fire management: (1) a uniform wind field (typically acquired from coarse-resolution (~4 km) weather service forecast models); (2) a newly developed mass-conserving model and (3) a newly developed mass- and momentum-conserving model (referred to as the...

  12. EVA/ORU model architecture using RAMCOST

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.; Park, Eui H.; Wang, Y. M.; Bretoi, R.

    1990-01-01

    A parametrically driven simulation model is presented in order to provide detailed insight into the effects of various input parameters in the life testing of a modular space suit. The RAMCOST model employed is a user-oriented simulation model for studying the life-cycle costs of designs under conditions of uncertainty. The results obtained from the simulated EVA model are used to assess various mission life-testing parameters such as the number of joint motions per EVA cycle time, part availability, and the number of inspection requirements. RAMCOST first simulates EVA completion for NASA applications using a probabilistic PERT-like network. With the mission time heuristically determined, RAMCOST then models different orbital replacement unit policies, with special application to the astronaut's space suit functional designs.

  13. Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments

    DOE PAGES

    Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...

    2016-06-13

    We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data where dynamic properties of materials are integrated together. We wished to assess how well the chosen computer hydrodynamic code could do at capturing both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations. For experiments with a window, the response of both the sample and the window are integrated together, providing a good test of the material models. While CTH physics models are not perfect and do not reproduce all experimental details well, we find the models are useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.

  14. Comparison between Radiation-Hydrodynamic Simulation of Supercritical Accretion Flows and a Steady Model with Outflows

    NASA Astrophysics Data System (ADS)

    Jiao, Cheng-Liang; Mineshige, Shin; Takeuchi, Shun; Ohsuga, Ken

    2015-06-01

    We apply our two-dimensional (2D), radially self-similar steady-state accretion flow model to the analysis of hydrodynamic simulation results of supercritical accretion flows. Self-similarity is checked and the input parameters for the model calculation, such as advective factor and heat capacity ratio, are obtained from time-averaged simulation data. Solutions of the model are then calculated and compared with the simulation results. We find that in the converged region of the simulation, excluding the part too close to the black hole, the radial distributions of azimuthal velocity v_φ, density ρ, and pressure p basically follow the self-similar assumptions, i.e., they are roughly proportional to r^(-0.5), r^(-n), and r^(-(n+1)), respectively, where n ∼ 0.85 for the mass injection rate of 1000 L_E/c^2, and n ∼ 0.74 for 3000 L_E/c^2. The distributions of v_r and v_θ agree less with self-similarity, possibly due to convective motions in the r-θ plane. The distributions of velocity, density, and pressure in the θ direction obtained by the steady model agree well with the simulation results within the calculation boundary of the steady model. Outward mass flux in the simulations is overall directed toward a polar angle of 0.8382 rad (∼48.0°) for 1000 L_E/c^2 and 0.7852 rad (∼43.4°) for 3000 L_E/c^2, and ∼94% of the mass inflow is driven away as outflow, while outward momentum and energy fluxes are focused around the polar axis. Parts of these fluxes lie in the region that is not calculated by the steady model, and special attention should be paid when the model is applied.

  15. REGIONAL-SCALE (1000 KM) MODEL OF PHOTOCHEMICAL AIR POLLUTION. PART 2. INPUT PROCESSOR NETWORK DESIGN

    EPA Science Inventory

    Detailed specifications are given for a network of data processors and submodels that can generate the parameter fields required by the regional oxidant model formulated in Part 1 of this report. Operations performed by the processor network include simulation of the motion and d...

  16. A multi-level simulation platform of natural gas internal reforming solid oxide fuel cell-gas turbine hybrid generation system - Part II. Balancing units model library and system simulation

    NASA Astrophysics Data System (ADS)

    Bao, Cheng; Cai, Ningsheng; Croiset, Eric

    2011-10-01

    Following our integrated hierarchical modeling framework for a natural gas internal reforming solid oxide fuel cell (IRSOFC), this paper first introduces the model libraries of the main balancing units, including some state-of-the-art achievements and our specific work. Based on gPROMS programming code, flexible configuration and modular design are fully realized by graphically specifying all unit models in each level. Via comparison with the steady-state experimental data of the Siemens-Westinghouse demonstration system, the in-house multi-level SOFC-gas turbine (GT) simulation platform is validated to be more accurate than the advanced power system analysis tool (APSAT). Moreover, some units of the demonstration system are designed reversely for the analysis of a typical part-load transient process. The framework of distributed and dynamic modeling in most of the units is significant for the development of control strategies in the future.

  17. Description, validation, and modification of the Guyton model for space-flight applications. Part A. Guyton model of circulatory, fluid and electrolyte control. Part B. Modification of the Guyton model for circulatory, fluid and electrolyte control

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1985-01-01

    The mathematical model that has been a cornerstone for the systems analysis of space-flight physiological studies is the Guyton model describing circulatory, fluid and electrolyte regulation. The model and the modifications that are made to permit simulation and analysis of the stress of weightlessness are described.

  18. Simulation technique for slurries interacting with moving parts and deformable solids with applications

    NASA Astrophysics Data System (ADS)

    Mutabaruka, Patrick; Kamrin, Ken

    2018-04-01

    A numerical method for particle-laden fluids interacting with a deformable solid domain and mobile rigid parts is proposed and implemented in a full engineering system. The fluid domain is modeled with a lattice Boltzmann representation, the particles and rigid parts are modeled with a discrete element representation, and the deformable solid domain is modeled using a Lagrangian mesh. The main issue of this work, since separately each of these methods is a mature tool, is to develop coupling and model-reduction approaches in order to efficiently simulate coupled problems of this nature, as in various geological and engineering applications. The lattice Boltzmann method incorporates a large eddy simulation technique using the Smagorinsky turbulence model. The discrete element method incorporates spherical and polyhedral particles for stiff contact interactions. A neo-Hookean hyperelastic model is used for the deformable solid. We provide a detailed description of how to couple the three solvers within a unified algorithm. The technique we propose for rubber modeling/coupling exploits a simplification that prevents having to solve a finite-element problem at each time step. We also developed a technique to reduce the domain size of the full system by replacing certain zones with quasi-analytic solutions, which act as effective boundary conditions for the lattice Boltzmann method. The major ingredients of the routine are separately validated. To demonstrate the coupled method in full, we simulate slurry flows in two kinds of piston valve geometries. The dynamics of the valve and slurry are studied and reported over a large range of input parameters.
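
    As a flavor of the fluid part of such a coupled solver, the sketch below implements one single-relaxation-time D2Q9 lattice Boltzmann collide-and-stream step on a periodic grid. It is only the bare fluid kernel under assumed settings, not the authors' implementation, which additionally includes the Smagorinsky subgrid model, discrete element particles, and the deformable solid coupling.

      # Minimal D2Q9 BGK lattice Boltzmann collide-and-stream step (single phase,
      # periodic boundaries); a bare-bones sketch of the fluid kernel only.
      import numpy as np

      # D2Q9 lattice velocities and weights
      E = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
      W = np.array([4/9] + [1/9]*4 + [1/36]*4)

      def equilibrium(rho, ux, uy):
          """Maxwellian equilibrium distributions for all 9 directions."""
          usq = ux**2 + uy**2
          feq = np.empty((9,) + rho.shape)
          for i, (ex, ey) in enumerate(E):
              eu = ex*ux + ey*uy
              feq[i] = W[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
          return feq

      def lbm_step(f, tau=0.6):
          """One BGK collision followed by streaming on a periodic grid."""
          rho = f.sum(axis=0)
          ux = (f * E[:, 0, None, None]).sum(axis=0) / rho
          uy = (f * E[:, 1, None, None]).sum(axis=0) / rho
          f += -(f - equilibrium(rho, ux, uy)) / tau          # collision
          for i, (ex, ey) in enumerate(E):                    # streaming
              f[i] = np.roll(np.roll(f[i], ex, axis=0), ey, axis=1)
          return f

      # Start from rest with a small density bump and advance a few steps.
      nx, ny = 64, 64
      rho0 = np.ones((nx, ny)); rho0[nx//2, ny//2] += 0.01
      f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
      for _ in range(10):
          f = lbm_step(f)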

  19. Programming and machining of complex parts based on CATIA solid modeling

    NASA Astrophysics Data System (ADS)

    Zhu, Xiurong

    2017-09-01

    Complex parts are designed with CATIA solid modeling, and their NC programming and machining are simulated, which illustrates the importance of programming and process technology in the field of CNC machining. In the part design process, a deep analysis is first made of the working principle, and then the dimensions are designed so that each dimension chain is connected consistently. Backstepping and a variety of other methods are then used to calculate the final dimensions of the parts. In the selection of part materials, careful study and repeated testing led to the final choice of 6061 aluminum alloy. According to the actual situation of the machining site, it is necessary to consider the various factors in the machining process comprehensively. The simulation should be based on the actual machining process, not only on the part shape. The results can be used as a reference for machining.

  20. Mapping sources, sinks, and connectivity using a simulation model of Northern Spotted Owls

    EPA Science Inventory

    This is a study of source-sink dynamics at a landscape scale. In conducting the study, we make use of a mature simulation model for the northern spotted owl (Strix occidentalis caurina) that was developed as part of the US Fish and Wildlife Service’s most recent recovery plannin...

  1. Comparing Traditional versus Alternative Sequencing of Instruction When Using Simulation Modeling

    ERIC Educational Resources Information Center

    Bowen, Bradley; DeLuca, William

    2015-01-01

    Many engineering and technology education classrooms incorporate simulation modeling as part of curricula to teach engineering and STEM-based concepts. The traditional method of the learning process has students first learn the content from the classroom teacher and then may have the opportunity to apply the learned content through simulation…

  2. Firestar-"D": Computerized Adaptive Testing Simulation Program for Dichotomous Item Response Theory Models

    ERIC Educational Resources Information Center

    Choi, Seung W.; Podrabsky, Tracy; McKinney, Natalie

    2012-01-01

    Computerized adaptive testing (CAT) enables efficient and flexible measurement of latent constructs. The majority of educational and cognitive measurement constructs are based on dichotomous item response theory (IRT) models. An integral part of developing various components of a CAT system is conducting simulations using both known and empirical…

  3. A Theory for the Neural Basis of Language Part 2: Simulation Studies of the Model

    ERIC Educational Resources Information Center

    Baron, R. J.

    1974-01-01

    Computer simulation studies of the proposed model are presented. Processes demonstrated are (1) verbally directed recall of visual experience; (2) understanding of verbal information; (3) aspects of learning and forgetting; (4) the dependence of recognition and understanding on context; and (5) elementary concepts of sentence production. (Author)

  4. Problems in Catalytic Oxidation of Hydrocarbons and Detailed Simulation of Combustion Processes

    NASA Astrophysics Data System (ADS)

    Xin, Yuxuan

    This dissertation research consists of two parts, with Part I on the kinetics of catalytic oxidation of hydrocarbons and Part II on aspects of the detailed simulation of combustion processes. In Part I, the catalytic oxidation of C1-C3 hydrocarbons, namely methane, ethane, propane and ethylene, was investigated for lean hydrocarbon-air mixtures over an unsupported Pd-based catalyst, from 600 to 800 K and under atmospheric pressure. In Chapter 2, the experimental facility of wire microcalorimetry and the simulation configuration are described in detail. In Chapters 3 and 4, the oxidation rate of C1-C3 hydrocarbons is demonstrated to be determined by the dissociative adsorption of hydrocarbons. A detailed surface kinetics model is proposed, with the rate coefficient of hydrocarbon dissociative adsorption derived from the wire microcalorimetry data. In Part II, four fundamental studies were conducted through detailed combustion simulations. In Chapter 5, self-accelerating hydrogen-air flames are studied via two-dimensional detailed numerical simulation (DNS). The increase in the global flame velocity is shown to be caused by the increase of flame surface area, and the fractal structure of the flame front is demonstrated by the box-counting method. In Chapter 6, skeletal reaction models for butane combustion are derived by using the directed relation graph (DRG), DRG-aided sensitivity analysis (DRGASA), and uncertainty minimization by polynomial chaos expansion (MUM-PCE) methods. The model uncertainty is shown to depend on the completeness of the model. In Chapter 7, a systematic strategy is proposed to reduce the cost of the multicomponent diffusion model by accurately accounting for the species whose diffusivity is important to the global responses of the combustion systems, and approximating those of less importance by the mixture-averaged model. The reduced model is validated on an n-heptane mechanism with 88 species. In Chapter 8, the influence of Soret diffusion on n-heptane/air flames is investigated numerically. In the unstretched flames, Soret diffusion primarily affects the chemical kinetics embedded in the flame structure and the net effect is small; in the stretched flames, its impact is mainly through the Soret diffusion of n-heptane and of the secondary fuel, H2, in modifying the flame temperature, with substantial effects.
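
    The box-counting measurement of the flame-front fractal structure mentioned for Chapter 5 can be sketched as follows: count the occupied boxes N at several box sizes and fit the slope of log N versus log(1/size). The synthetic diagonal-line test (dimension close to 1) is only a placeholder for a binarized flame-front image.

      # Illustrative box-counting estimate of a fractal dimension from a binary mask.
      import numpy as np

      def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
          """Estimate the box-counting dimension of the True pixels in a 2D mask."""
          counts = []
          for s in sizes:
              n = 0
              for i in range(0, mask.shape[0], s):
                  for j in range(0, mask.shape[1], s):
                      if mask[i:i+s, j:j+s].any():      # box contains part of the set
                          n += 1
              counts.append(n)
          # Slope of log N(eps) versus log(1/eps) gives the dimension estimate.
          slope, _ = np.polyfit(np.log(1.0 / np.array(sizes, float)),
                                np.log(counts), 1)
          return slope

      # Placeholder test: a diagonal line should give a dimension close to 1.
      mask = np.zeros((256, 256), dtype=bool)
      np.fill_diagonal(mask, True)
      print(box_counting_dimension(mask))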

  5. Aerospace Toolbox---a flight vehicle design, analysis, simulation ,and software development environment: I. An introduction and tutorial

    NASA Astrophysics Data System (ADS)

    Christian, Paul M.; Wells, Randy

    2001-09-01

    This paper presents a demonstrated approach to significantly reduce the cost and schedule of non-real-time modeling and simulation, real-time HWIL simulation, and embedded code development. The tool and the methodology presented capitalize on a paradigm that has become a standard operating procedure in the automotive industry. The tool described is known as the Aerospace Toolbox, and it is based on the MathWorks Matlab/Simulink framework, which is a COTS application. Extrapolation of automotive industry data and initial applications in the aerospace industry show that the use of the Aerospace Toolbox can make significant contributions in the quest by NASA and other government agencies to meet aggressive cost reduction goals in development programs. Part I of this paper provides a detailed description of the GUI-based Aerospace Toolbox and how it is used in every step of a development program, from quick prototyping of concept developments that leverage built-in point-of-departure simulations through to detailed design, analysis, and testing. Some of the attributes addressed include its versatility in modeling 3 to 6 degrees of freedom, its library of flight-test-validated models (including physics, environments, hardware, and error sources), and its built-in Monte Carlo capability. Other topics to be covered in this part include flight vehicle models and algorithms, and the covariance analysis package, Navigation System Covariance Analysis Tools (NavSCAT). Part II of this paper, to be published at a later date, will conclude with a description of how the Aerospace Toolbox is an integral part of developing embedded code directly from the simulation models by using the MathWorks Real-Time Workshop and optimization tools. It will also address how the Toolbox can be used as a design hub for Internet-based collaborative engineering tools such as NASA's Intelligent Synthesis Environment (ISE) and Lockheed Martin's Interactive Missile Design Environment (IMD).

  6. Simulation of Variable-Density Ground-Water Flow and Saltwater Intrusion beneath Manhasset Neck, Nassau County, New York, 1905-2005

    USGS Publications Warehouse

    Monti, Jack; Misut, Paul E.; Busciolano, Ronald J.

    2009-01-01

    The coastal-aquifer system of Manhasset Neck, Nassau County, New York, has been stressed by pumping, which has led to saltwater intrusion and the abandonment of one public-supply well in 1944. Measurements of chloride concentrations and water levels in 2004 from the deep, confined aquifers indicate active saltwater intrusion in response to public-supply pumping. A numerical model capable of simulating three-dimensional variable-density ground-water flow and solute transport in heterogeneous, anisotropic aquifers was developed using the U.S. Geological Survey finite-element, variable-density, solute-transport simulator SUTRA, to investigate the extent of saltwater intrusion beneath Manhasset Neck. The model is composed of eight layers representing the hydrogeologic system beneath Manhasset Neck. Four modifications to the area's previously described hydrogeologic framework were made in the model: (1) the bedrock-surface altitude at well N12191 was corrected from a previously reported value, (2) part of the extent of the Raritan confining unit was shifted, (3) part of the extent of the North Shore confining unit was shifted, and (4) a clay layer in the upper glacial aquifer was added in the central and southern parts of the Manhasset Neck peninsula. Ground-water flow and the location of the freshwater-saltwater interface were simulated for three conditions (time periods): (1) a steady-state (predevelopment) simulation of no pumping prior to about 1905, (2) a 40-year transient simulation based on 1939 pumpage representing the 1905-1944 period of gradual saltwater intrusion, and (3) a 60-year transient simulation based on 1995 pumpage representing the 1945-2005 period of stabilized withdrawals. The 1939 pumpage rate (12.1 million gallons per day (Mgal/d)) applied to the 1905-1944 transient simulation caused modeled average water-level declines of 2 and 4 feet (ft) in the shallow and deep aquifer systems from predevelopment conditions, respectively, a net decrease of 5.2 Mgal/d in freshwater discharge to offshore areas, and a net increase of 6.9 Mgal/d of freshwater entering the model from the eastern, western, and southern lateral boundaries. The 1995 pumpage rate (43.3 Mgal/d) applied to the 1945-2005 transient simulation caused modeled average water-level declines of 5 and 8 ft in the shallow and deep aquifer systems from predevelopment conditions, respectively, a net decrease of 13.2 Mgal/d in freshwater discharge to offshore areas, and a net increase of 30.1 Mgal/d of freshwater entering the model from the eastern, western, and southern lateral boundaries. The simulated decrease in freshwater discharge to the offshore areas caused saltwater intrusion in two parts of the deep aquifer system under Manhasset Neck. Saline ground water simulated in a third part of the deep aquifer system under Manhasset Neck was due to the absence of the North Shore confining unit near Sands Point. Simulated chloride concentrations greater than 250 milligrams per liter (mg/L) were used to represent the freshwater-saltwater interface, and the movement of this concentration was evaluated for transient simulations. The decrease in the 1905-1944 simulated freshwater discharge to the offshore areas caused the freshwater-saltwater interface in the deep aquifer system to advance landward more than 1,700 ft from its steady-state position in the vicinity of Baxter Estates Village, Long Island, New York.
The decrease in the 1945-2005 simulated freshwater discharge to the offshore areas caused a different area of the freshwater-saltwater interface in the deep aquifer system to advance more than 600 ft from its steady-state position approximately 1 mile south of the Baxter Estates Village. However, the 1945-2005 transient simulation underestimates the concentration and extent of saltwater intrusion determined from water-quality samples collected from wells N12508 and N12793, where measured chloride concentrations increased from 625 and 18 mg/L in 1997 t

  7. Sensitivity of a Cloud-Resolving Model to Bulk and Explicit Bin Microphysical Schemes. Part 2; Cloud Microphysics and Storm Dynamics Interactions

    NASA Technical Reports Server (NTRS)

    Li, Xiaowen; Tao, Wei-Kuo; Khain, Alexander P.; Simpson, Joanne; Johnson, Daniel E.

    2009-01-01

    Part I of this paper compares two simulations, one using a bulk and the other a detailed bin microphysical scheme, of a long-lasting, continental mesoscale convective system with leading convection and trailing stratiform region. Diagnostic studies and sensitivity tests are carried out in Part II to explain the simulated contrasts in the spatial and temporal variations by the two microphysical schemes and to understand the interactions between cloud microphysics and storm dynamics. It is found that the fixed raindrop size distribution in the bulk scheme artificially enhances the rain evaporation rate and produces a stronger near-surface cool pool compared with the bin simulation. In the bulk simulation, cool pool circulation dominates the near-surface environmental wind shear, in contrast to the near-balance between cool pool and wind shear in the bin simulation. This is the main reason for the contrasting quasi-steady states simulated in Part I. Sensitivity tests also show that large amounts of fast-falling hail produced in the original bulk scheme not only result in a narrow trailing stratiform region but also act to further exacerbate the strong cool pool simulated in the bulk parameterization. An empirical formula for a correction factor, r(q_r) = 0.11 q_r^(-1.27) + 0.98, is developed to correct the overestimation of rain evaporation in the bulk model, where r is the ratio of the rain evaporation rate between the bulk and bin simulations and q_r (g per kilogram) is the rain mixing ratio. This formula offers a practical fix for the simple bulk scheme in rain evaporation parameterization.
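
    For reference, the empirical correction factor quoted above can be evaluated directly. The sketch below implements the published formula and, under the stated interpretation that r is the bulk-to-bin ratio, divides an assumed bulk-scheme evaporation rate by r; the sample mixing ratios and rate are arbitrary.

      # Correction factor r(q_r) = 0.11 * q_r**(-1.27) + 0.98 quoted above, with
      # q_r the rain mixing ratio in g/kg; sample values below are assumed.
      def evaporation_correction(q_r):
          """Bulk-to-bin ratio of the rain evaporation rate as a function of q_r [g/kg]."""
          return 0.11 * q_r ** (-1.27) + 0.98

      def corrected_evaporation(bulk_rate, q_r):
          """Scale an assumed bulk-scheme evaporation rate down to a bin-like value."""
          return bulk_rate / evaporation_correction(q_r)

      for q_r in (0.1, 0.5, 1.0, 2.0):   # g/kg
          print(q_r, evaporation_correction(q_r), corrected_evaporation(1.0e-3, q_r))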

  8. New method of processing heat treatment experiments with numerical simulation support

    NASA Astrophysics Data System (ADS)

    Kik, T.; Moravec, J.; Novakova, I.

    2017-08-01

    In this work, the benefits of combining modern software for numerical simulations of welding processes with laboratory research are described. A newly proposed method of processing heat-treatment experiments, which yields relevant input data for numerical simulations of the heat treatment of large parts, is presented. By using experiments on small test samples, it is now possible to simulate cooling conditions comparable with the cooling of bigger parts. Results from this method of testing make the boundary conditions applied to the real cooling process more accurate, and can also be used for the improvement of software databases and the optimization of computational models. The aim is to make the computation of temperature fields for large hardening parts more precise, based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for the particular material, a defined maximal thickness of the processed part, and the cooling conditions. The paper also presents an example comparing the standard and the modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on the simulation results. It shows how even small changes mainly influence the distributions of temperature, metallurgical phases, hardness, and stresses. With this experiment it is also possible to obtain not only input data and data enabling the optimization of the computational model, but at the same time also verification data. The greatest advantage of the described method is its independence of the type of cooling medium used.
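
    One common way to turn such a small-sample cooling experiment into a temperature-dependent heat transfer coefficient is an inverse lumped-capacitance estimate, h(T) = -m c_p (dT/dt) / (A (T - T_medium)). The sketch below illustrates that generic step on assumed sample data; it is not the specific procedure or property data used by the authors.

      # Illustrative lumped-capacitance estimate of h(T) from a measured cooling
      # curve; sample geometry, properties and data below are assumed placeholders.
      import numpy as np

      m, c_p, A = 0.10, 490.0, 0.006       # mass [kg], specific heat [J/kg/K], area [m^2]
      T_medium = 40.0                      # quenchant temperature [deg C]

      t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])            # time [s]
      T = np.array([850.0, 610.0, 455.0, 350.0, 278.0])  # sample temperature [deg C]

      dTdt = np.gradient(T, t)                        # cooling rate [K/s]
      h = -m * c_p * dTdt / (A * (T - T_medium))      # heat transfer coefficient [W/m^2/K]

      for Ti, hi in zip(T, h):
          print(f"T = {Ti:6.1f} C  ->  h = {hi:7.1f} W/m^2/K")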

  9. Atmospheric Profiles, Clouds, and the Evolution of Sea Ice Cover in the Beaufort and Chukchi Seas: Atmospheric Observations and Modeling as Part of the Seasonal Ice Zone Reconnaissance Surveys

    DTIC Science & Technology

    2015-09-30

    hired to conduct WRF model experiments. • We conducted Weather Research and Forecast (WRF) model simulations for the summer of 2014 and compared with... WRF simulations under different synoptic conditions will help to more clearly identify the deficiencies in the representation of these processes

  10. Visualization Methods for Viability Studies of Inspection Modules for the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Mobasher, Amir A.

    2005-01-01

    An effective simulation of an object, process, or task must be similar to that object, process, or task. A simulation could consist of a physical device, a set of mathematical equations, a computer program, a person, or some combination of these. There are many reasons for the use of simulators. Although some of the reasons are unique to a specific situation, there are many general reasons and purposes for using simulators. Some are listed but not limited to (1) Safety, (2) Scarce resources, (3) Teaching/education, (4) Additional capabilities, (5) Flexibility and (6) Cost. Robot simulators are in use for all of these reasons. Virtual environments such as simulators will eliminate physical contact with humans and hence will increase the safety of the work environment. Corporations with limited funding and resources may utilize simulators to accomplish their goals while saving manpower and money. A computer simulation is safer than working with a real robot. Robots are typically a scarce resource. Schools typically don't have a large number of robots, if any. Factories don't want the robots not performing useful work unless absolutely necessary. Robot simulators are useful in teaching robotics. A simulator gives a student hands-on experience, if only with a simulator. The simulator is more flexible. A user can quickly change the robot configuration, workcell, or even replace the robot with a different one altogether. In order to be useful, a robot simulator must create a model that accurately performs like the real robot. A powerful simulator is usually thought of as a combination of a CAD package with simulation capabilities. Computer Aided Design (CAD) techniques are used extensively by engineers in virtually all areas of engineering. Parts are designed interactively aided by the graphical display of both wireframe and more realistic shaded renderings. Once a part's dimensions have been specified to the CAD package, designers can view the part from any direction to examine how it will look and perform in relation to other parts. If changes are deemed necessary, the designer can easily make the changes and view the results graphically. However, a complex process of moving parts intended for operation in a complex environment can only be fully understood through the process of animated graphical simulation. A CAD package with simulation capabilities allows the designer to develop geometrical models of the process being designed, as well as the environment in which the process will be used, and then test the process in graphical animation much as the actual physical system would be run. By being able to operate the system of moving and stationary parts, the designer is able to see in simulation how the system will perform under a wide variety of conditions. If, for example, undesired collisions occur between parts of the system, design changes can be easily made without the expense or potential danger of testing the physical system.

  11. Effects of using two- versus three-dimensional computational modeling of fluidized beds Part I, hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Nan; Battaglia, Francine; Pannala, Sreekanth

    2008-01-01

    Simulations of fluidized beds are performed to study and determine the effect of the use of coordinate systems and geometrical configurations to model fluidized bed reactors. Computational fluid dynamics is employed for an Eulerian-Eulerian model, which represents each phase as an interspersed continuum. The transport equation for granular temperature is solved and a hyperbolic tangent function is used to provide a smooth transition between the plastic and viscous regimes for the solid phase. The aim of the present work is to show the range of validity for employing simulations based on a 2D Cartesian coordinate system to approximate both cylindrical and rectangular fluidized beds. Three different fluidization regimes, bubbling, slugging and turbulent regimes, are investigated and the results of 2D and 3D simulations are presented for both cylindrical and rectangular domains. The results demonstrate that a 2D Cartesian system can be used to successfully simulate and predict a bubbling regime. However, caution must be exercised when using 2D Cartesian coordinates for other fluidized regimes. A budget analysis that explains all the differences in detail is presented in Part II [N. Xie, F. Battaglia, S. Pannala, Effects of Using Two- Versus Three-Dimensional Computational Modeling of Fluidized Beds: Part II, budget analysis, 182 (1) (2007) 14] to complement the hydrodynamic theory of this paper.

  12. The methodology for modeling queuing systems using Petri nets

    NASA Astrophysics Data System (ADS)

    Kotyrba, Martin; Gaj, Jakub; Tvarůžka, Matouš

    2017-07-01

    This paper deals with the use of Petri nets in the modeling and simulation of queuing systems. The first part is focused on the explanation of basic concepts and properties of Petri nets and queuing systems. The proposed methodology for the modeling of queuing systems using Petri nets is described in the practical part. The proposed methodology will be tested on specific cases.
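
    A minimal sketch of the modeling idea, assuming a single-server queue expressed as a stochastic Petri net: places hold tokens, and timed transitions with exponential (memoryless) delays consume and produce them. The net structure and rates are illustrative, not those of the paper's case studies.

      # Illustrative stochastic Petri net of a single-server queue: places hold
      # tokens, timed transitions consume and produce tokens; exponential delays
      # let the firing race be re-sampled at every event. Rates are assumed.
      import random

      marking = {"queue": 0, "server": 1, "done": 0}

      # name: (input places, output places, firing rate)
      transitions = {
          "arrive": ({}, {"queue": 1}, 0.8),                       # customers/minute
          "serve":  ({"queue": 1, "server": 1},
                     {"server": 1, "done": 1}, 1.0),               # services/minute
      }

      def enabled(inputs):
          return all(marking[p] >= n for p, n in inputs.items())

      t, t_end = 0.0, 480.0                                        # an 8-hour day
      while t < t_end:
          candidates = {name: random.expovariate(rate)
                        for name, (ins, outs, rate) in transitions.items()
                        if enabled(ins)}
          winner = min(candidates, key=candidates.get)             # earliest firing
          t += candidates[winner]
          ins, outs, _ = transitions[winner]
          for p, n in ins.items():
              marking[p] -= n
          for p, n in outs.items():
              marking[p] += n

      print("served:", marking["done"], " waiting at closing time:", marking["queue"])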

  13. Modeling and Simulation of Compression Molding Process for Sheet Molding Compound (SMC) of Chopped Carbon Fiber Composites

    DOE PAGES

    Li, Yang; Chen, Zhangxing; Xu, Hongyi; ...

    2017-01-02

    Compression molded SMC composed of chopped carbon fiber and resin polymer which balances the mechanical performance and manufacturing cost presents a promising solution for vehicle lightweight strategy. However, the performance of the SMC molded parts highly depends on the compression molding process and local microstructure, which greatly increases the cost for the part level performance testing and elongates the design cycle. ICME (Integrated Computational Material Engineering) approaches are thus necessary tools to reduce the number of experiments required during part design and speed up the deployment of the SMC materials. As the fundamental stage of the ICME workflow, commercial software packages for SMC compression molding exist yet remain not fully validated especially for chopped fiber systems. In this study, SMC plaques are prepared through compression molding process. The corresponding simulation models are built in Autodesk Moldflow with the same part geometry and processing conditions as in the molding tests. The output variables of the compression molding simulations, including press force history and fiber orientation of the part, are compared with experimental data. Influence of the processing conditions to the fiber orientation of the SMC plaque is also discussed. It is found that generally Autodesk Moldflow can achieve a good simulation of the compression molding process for chopped carbon fiber SMC, yet quantitative discrepancies still remain between predicted variables and experimental results.

  14. Modeling and Simulation of Compression Molding Process for Sheet Molding Compound (SMC) of Chopped Carbon Fiber Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yang; Chen, Zhangxing; Xu, Hongyi

    Compression molded SMC composed of chopped carbon fiber and resin polymer which balances the mechanical performance and manufacturing cost presents a promising solution for vehicle lightweight strategy. However, the performance of the SMC molded parts highly depends on the compression molding process and local microstructure, which greatly increases the cost for the part level performance testing and elongates the design cycle. ICME (Integrated Computational Material Engineering) approaches are thus necessary tools to reduce the number of experiments required during part design and speed up the deployment of the SMC materials. As the fundamental stage of the ICME workflow, commercial software packages for SMC compression molding exist yet remain not fully validated especially for chopped fiber systems. In this study, SMC plaques are prepared through compression molding process. The corresponding simulation models are built in Autodesk Moldflow with the same part geometry and processing conditions as in the molding tests. The output variables of the compression molding simulations, including press force history and fiber orientation of the part, are compared with experimental data. Influence of the processing conditions to the fiber orientation of the SMC plaque is also discussed. It is found that generally Autodesk Moldflow can achieve a good simulation of the compression molding process for chopped carbon fiber SMC, yet quantitative discrepancies still remain between predicted variables and experimental results.

  15. A Melting Layer Model for Passive/Active Microwave Remote Sensing Applications. Part 2; Simulation of TRMM Observations

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Bauer, Peter; Kummerow, Christian D.; Tao, Wei-Kuo

    2000-01-01

    The one-dimensional, steady-state melting layer model developed in Part I of this study is used to calculate both the microphysical and radiative properties of melting precipitation, based upon the computed concentrations of snow and graupel just above the freezing level at applicable horizontal gridpoints of 3-dimensional cloud resolving model simulations. The modified 3-dimensional distributions of precipitation properties serve as input to radiative transfer calculations of upwelling radiances and radar extinction/reflectivities at the TRMM Microwave Imager (TMI) and Precipitation Radar (PR) frequencies, respectively. At the resolution of the cloud resolving model grids (approx. 1 km), upwelling radiances generally increase if mixed-phase precipitation is included in the model atmosphere. The magnitude of the increase depends upon the optical thickness of the cloud and precipitation, as well as the scattering characteristics of ice-phase precipitation aloft. Over the set of cloud resolving model simulations utilized in this study, maximum radiance increases of 43, 28, 18, and 10 K are simulated at 10.65, 19.35, 37.0, and 85.5 GHz, respectively. The impact of melting on TMI-measured radiances is determined not only by the physics of the melting particles but also by the horizontal extent of the melting precipitation, since the lower-frequency channels have footprints that extend over tens of kilometers. At TMI resolution, the maximum radiance increases are 16, 15, 12, and 9 K at the same frequencies. Simulated PR extinction and reflectivities in the melting layer can increase dramatically if mixed-phase precipitation is included, a result consistent with previous studies. Maximum increases of 0.46 (-2 dB) in extinction optical depth and 5 dBZ in reflectivity are simulated based upon the set of cloud resolving model simulations.

  16. Research the simulation model of the passenger travel behavior in urban rail platform

    NASA Astrophysics Data System (ADS)

    Wang, Yujia; Yin, Xiangyong

    2017-05-01

    Based on field research on the platform of the Chegongzhuang subway station on Beijing metro Line 2, passenger travel behavior on an urban rail platform is divided into four parts: entering passengers walking, passengers distributing along the platform and queuing in front of the doors, passengers boarding and alighting, and alighting passengers walking away. A simulation model of these behaviors was built in Matlab according to the social force model. Combined with actual data from the Chegongzhuang station on Line 2, the simulation results show that the social force model is effective.
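
    A minimal sketch of the social force idea underlying such a platform model: each pedestrian accelerates toward a desired walking velocity and is repelled by nearby pedestrians. The parameter values are generic illustrative choices, not the ones calibrated for the Chegongzhuang platform, and the scenario (two passengers heading to one door) is assumed.

      # Minimal social force update for pedestrians on a platform (2D); parameters
      # are generic illustrative values, not the calibrated ones from the study.
      import numpy as np

      TAU = 0.5        # relaxation time [s]
      A, B = 2.0, 0.3  # repulsion strength [m/s^2] and range [m]
      DT = 0.1         # time step [s]

      def social_force_step(pos, vel, goals, v0=1.34):
          """Advance pedestrian positions/velocities by one explicit Euler step."""
          n = len(pos)
          acc = np.zeros_like(pos)
          for i in range(n):
              # Driving force: relax toward desired speed v0 in the goal direction.
              direction = goals[i] - pos[i]
              direction /= np.linalg.norm(direction) + 1e-9
              acc[i] = (v0 * direction - vel[i]) / TAU
              # Repulsive forces from all other pedestrians.
              for j in range(n):
                  if i == j:
                      continue
                  diff = pos[i] - pos[j]
                  dist = np.linalg.norm(diff) + 1e-9
                  acc[i] += A * np.exp(-dist / B) * diff / dist
          vel = vel + DT * acc
          pos = pos + DT * vel
          return pos, vel

      # Two boarding passengers heading for the same door at the origin.
      pos = np.array([[3.0, 0.5], [3.0, -0.5]])
      vel = np.zeros_like(pos)
      goals = np.zeros_like(pos)
      for _ in range(50):
          pos, vel = social_force_step(pos, vel, goals)
      print(pos)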

  17. Finite Element Simulation of Compression Molding of Woven Fabric Carbon Fiber/Epoxy Composites: Part I Material Model Development

    DOE PAGES

    Li, Yang; Zhao, Qiangsheng; Mirdamadi, Mansour; ...

    2016-01-06

    Woven fabric carbon fiber/epoxy composites made through compression molding are one of the promising choices of material for the vehicle light-weighting strategy. Previous studies have shown that the processing conditions can have substantial influence on the performance of this type of the material. Therefore the optimization of the compression molding process is of great importance to the manufacturing practice. An efficient way to achieve the optimized design of this process would be through conducting finite element (FE) simulations of compression molding for woven fabric carbon fiber/epoxy composites. However, performing such simulation remains a challenging task for FE as multiple types of physics are involved during the compression molding process, including the epoxy resin curing and the complex mechanical behavior of woven fabric structure. In the present study, the FE simulation of the compression molding process of resin based woven fabric composites at continuum level is conducted, which is enabled by the implementation of an integrated material modeling methodology in LS-Dyna. Specifically, the chemo-thermo-mechanical problem of compression molding is solved through the coupling of three material models, i.e., one thermal model for temperature history in the resin, one mechanical model to update the curing-dependent properties of the resin and another mechanical model to simulate the behavior of the woven fabric composites. Preliminary simulations of the carbon fiber/epoxy woven fabric composites in LS-Dyna are presented as a demonstration, while validations and models with real part geometry are planned in the future work.

  18. Cloud Properties Simulated by a Single-Column Model. Part II: Evaluation of Cumulus Detrainment and Ice-phase Microphysics Using a Cloud Resolving Model

    NASA Technical Reports Server (NTRS)

    Luo, Yali; Krueger, Steven K.; Xu, Kuan-Man

    2005-01-01

    This paper is the second in a series in which kilometer-scale-resolving observations from the Atmospheric Radiation Measurement program and a cloud-resolving model (CRM) are used to evaluate the single-column model (SCM) version of the National Centers for Environmental Prediction Global Forecast System model. Part I demonstrated that kilometer-scale cirrus properties simulated by the SCM significantly differ from the cloud radar observations while the CRM simulation reproduced most of the cirrus properties as revealed by the observations. The present study describes an evaluation, through a comparison with the CRM, of the SCM's representation of detrainment from deep cumulus and ice-phase microphysics in an effort to better understand the findings of Part I. It is found that detrainment occurs too infrequently at a single level at a time in the SCM, although the detrainment rate averaged over the entire simulation period is somewhat comparable to that of the CRM simulation. Relatively too much detrained ice is sublimated when first detrained. Snow falls over too deep of a layer due to the assumption that snow source and sink terms exactly balance within one time step in the SCM. These characteristics in the SCM parameterizations may explain many of the differences in the cirrus properties between the SCM and the observations (or between the SCM and the CRM). A possible improvement for the SCM consists of the inclusion of multiple cumulus cloud types as in the original Arakawa-Schubert scheme, prognostically determining the stratiform cloud fraction and snow mixing ratio. This would allow better representation of the detrainment from deep convection, better coupling of the volume of detrained air with cloud fraction, and better representation of snow field.

  19. On testing models for the pressure-strain correlation of turbulence using direct simulations

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Gatski, Thomas B.; Sarkar, Sutanu

    1992-01-01

    Direct simulations of homogeneous turbulence have, in recent years, come into widespread use for the evaluation of models for the pressure-strain correlation of turbulence. While work in this area has been beneficial, the increasingly common practice of testing the slow and rapid parts of these models separately in uniformly strained turbulent flows is shown in this paper to be unsound. For such flows, the decomposition of models for the total pressure-strain correlation into slow and rapid parts is ambiguous. Consequently, when tested in this manner, misleading conclusions can be drawn about the performance of pressure-strain models. This point is amplified by illustrative calculations of homogeneous shear flow where other pitfalls in the evaluation of models are also uncovered. More meaningful measures for testing the performance of pressure-strain models in uniformly strained turbulent flows are proposed and the implications for turbulence modeling are discussed.

  20. Autonomous Agent-Based Systems and Their Applications in Fluid Dynamics, Particle Separation, and Co-evolving Networks

    NASA Astrophysics Data System (ADS)

    Graeser, Oliver

    This thesis comprises three parts, reporting research results in Fluid Dynamics (Part I), Particle Separation (Part II) and Co-evolving Networks (Part III). Part I deals with the simulation of fluid dynamics using the lattice-Boltzmann method. Microfluidic devices often feature two-dimensional, repetitive arrays. Flows through such devices are pressure-driven and confined by solid walls. We have defined new adaptive generalised periodic boundary conditions to represent the effects of outer solid walls, and are thus able to exploit the periodicity of the array by simulating the flow through one unit cell in lieu of the entire device. The so-calculated fully developed flow describes the flow through the entire array accurately, but with computational requirements that are reduced according to the dimensions of the array. Part II discusses the problem of separating macromolecules like proteins or DNA coils. The reliable separation of such molecules is a crucial task in molecular biology. The use of Brownian ratchets as mechanisms for the separation of such particles has been proposed and discussed during the last decade. Pressure-driven flows have so far been dismissed as possible driving forces for Brownian ratchets, as they do not generate ratchet asymmetry. We propose a microfluidic design that uses pressure-driven flows to create asymmetry and hence allows particle separation. The dependence of the asymmetry on various factors of the microfluidic geometry is discussed. We further exemplify the feasibility of our approach using Brownian dynamics simulations of particles of different sizes in such a device. The results show that ratchet-based particle separation using flows as the driving force is possible. Simulation results and ratchet theory predictions are in excellent agreement. Part III deals with the co-evolution of networks and dynamic models. A group of agents occupies the nodes of a network, which defines the relationship between these agents. The evolution of the agents is defined by the rules of the dynamic model and depends on the relationship between agents, i.e., the state of the network. In return, the evolution of the network depends on the state of the dynamic model. The concept is introduced through the adaptive SIS model. We show that the previously used criterion determining the critical infected fraction, i.e., the number of infected agents required to sustain the epidemic, is inappropriate for this model. We introduce a different criterion and show that the critical infected fraction so determined is in good agreement with results obtained by numerical simulations. We further discuss the concept of co-evolving dynamics using the Snowdrift Game as a model paradigm. Co-evolution occurs through agents cutting dissatisfied links and rewiring to other agents at random. The effect of co-evolution on the emergence of cooperation is discussed using a mean-field theory and numerical simulations. A transition between a connected and a disconnected, highly cooperative state of the system is observed, and explained using the mean-field model. Quantitative deviations regarding the level of cooperation in the disconnected regime can be fully resolved through an improved mean-field theory that includes the effect of random fluctuations into its model.
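
    A minimal sketch of adaptive SIS dynamics of the kind described for Part III: infection spreads along links, infected nodes recover, and susceptible nodes cut links to infected neighbours and rewire to randomly chosen susceptible nodes. Network size, rates, and the update scheme below are illustrative placeholders, not the thesis parameters.

      # Illustrative adaptive SIS model on a random graph: infection, recovery,
      # and rewiring of susceptible-infected links (parameters are assumed).
      import random

      N, K = 500, 10                          # nodes, approximate mean degree
      P_INF, P_REC, P_REW = 0.04, 0.02, 0.2   # per-step probabilities

      # Build a simple Erdos-Renyi-like neighbour structure.
      adj = {i: set() for i in range(N)}
      while sum(len(v) for v in adj.values()) < N * K:
          a, b = random.sample(range(N), 2)
          adj[a].add(b); adj[b].add(a)

      infected = set(random.sample(range(N), N // 10))

      for step in range(200):
          new_infected = set(infected)
          for i in list(infected):
              if random.random() < P_REC:            # recovery
                  new_infected.discard(i)
              for s in list(adj[i]):
                  if s in infected:
                      continue
                  if random.random() < P_INF:        # infection along an S-I link
                      new_infected.add(s)
                  elif random.random() < P_REW:      # susceptible node rewires away
                      adj[i].discard(s); adj[s].discard(i)
                      candidates = [x for x in range(N) if x not in infected and x != s]
                      if candidates:
                          t = random.choice(candidates)
                          adj[s].add(t); adj[t].add(s)
          infected = new_infected

      print("infected fraction:", len(infected) / N)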

  1. Cloud-radiative effects on implied oceanic energy transport as simulated by atmospheric general circulation models

    NASA Technical Reports Server (NTRS)

    Gleckler, P. J.; Randall, D. A.; Boer, G.; Colman, R.; Dix, M.; Galin, V.; Helfand, M.; Kiehl, J.; Kitoh, A.; Lau, W.

    1995-01-01

    This paper summarizes the ocean surface net energy flux simulated by fifteen atmospheric general circulation models constrained by realistically-varying sea surface temperatures and sea ice as part of the Atmospheric Model Intercomparison Project. In general, the simulated energy fluxes are within the very large observational uncertainties. However, the annual mean oceanic meridional heat transport that would be required to balance the simulated surface fluxes is shown to be critically sensitive to the radiative effects of clouds, to the extent that even the sign of the Southern Hemisphere ocean heat transport can be affected by the errors in simulated cloud-radiation interactions. It is suggested that improved treatment of cloud radiative effects should help in the development of coupled atmosphere-ocean general circulation models.

  2. Numerical simulation of dynamics of brushless dc motors for aerospace and other applications. Volume 1: Model development and applications, part B

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A. O.; Nehl, T. W.

    1979-01-01

    A mathematical model was developed and computerized simulations were obtained for a brushless dc motor. Experimentally obtained oscillograms of the machine phase currents are presented and the corresponding current and voltage waveforms for various modes of operation of the motor are presented and discussed.

  3. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    DTIC Science & Technology

    2014-01-17

Keywords: 30cal AP M2 Projectile, 762x39 PS Projectile, SPH, Aluminum 5083, SiC, DoP Experiments, AutoDyn Simulations, Tile Gap. ... particle hydrodynamics (SPH) is applied for all parts. The SPH particle size is 0.4 mm, with the assumption that modeling dust smaller than 0.4 mm can be ...

  4. War-gaming application for future space systems acquisition part 2: acquisition and bidding war-gaming modeling and simulation approaches for FFP and FPIF

    NASA Astrophysics Data System (ADS)

    Nguyen, Tien M.; Guillen, Andy T.

    2017-05-01

    This paper describes cooperative and non-cooperative static Bayesian game models with complete and incomplete information for the development of optimum acquisition strategies associated with the Program and Technical Baseline (PTB) solutions obtained from Part 1 of this paper [1]. The optimum acquisition strategies discussed focus on achieving "Affordability" by incorporating contractors' bidding strategies into the government acquisition strategies for acquiring future space systems. The paper discusses System Engineering (SE) frameworks, analytical and simulation approaches and modeling for developing the optimum acquisition strategies from both the government and contractor perspectives for Firm Fixed Price (FFP) and Fixed Price Incentive Firm (FPIF) contract types.

  5. Aeroelastic modeling for the FIT (Functional Integration Technology) team F/A-18 simulation

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Wieseman, Carol D.

    1989-01-01

As part of Langley Research Center's commitment to developing multidisciplinary integration methods to improve aerospace systems, the Functional Integration Technology (FIT) team was established to perform dynamics integration research using an existing aircraft configuration, the F/A-18. An essential part of this effort has been the development of a comprehensive simulation modeling capability that includes structural, control, and propulsion dynamics as well as steady and unsteady aerodynamics. The structural and unsteady aerodynamics contributions come from an aeroelastic model. Some details of the aeroelastic modeling done for the Functional Integration Technology (FIT) team research are presented. Particular attention is given to work done in the area of correction factors to unsteady aerodynamics data.

  6. Self-consistent radiation-based simulation of electric arcs: II. Application to gas circuit breakers

    NASA Astrophysics Data System (ADS)

    Iordanidis, A. A.; Franck, C. M.

    2008-07-01

    An accurate and robust method for radiative heat transfer simulation for arc applications was presented in the previous paper (part I). In this paper a self-consistent mathematical model based on computational fluid dynamics and a rigorous radiative heat transfer model is described. The model is applied to simulate switching arcs in high voltage gas circuit breakers. The accuracy of the model is proven by comparison with experimental data for all arc modes. The ablation-controlled arc model is used to simulate high current PTFE arcs burning in cylindrical tubes. Model accuracy for the lower current arcs is evaluated using experimental data on the axially blown SF6 arc in steady state and arc resistance measurements close to current zero. The complete switching process with the arc going through all three phases is also simulated and compared with the experimental data from an industrial circuit breaker switching test.

  7. Modelling for environmental assessment of municipal solid waste landfills (part II: biodegradation).

    PubMed

    Garcia de Cortázar, Amaya Lobo; Lantarón, Javier Herrero; Fernández, Oscar Montero; Monzón, Iñaki Tejero; Lamia, Maria Fantelli

    2002-12-01

    The biodegradation module of a simulation program for municipal solid waste landfills (MODUELO) was developed. The biodegradation module carries out the balance of organic material starting with the results of the hydrologic simulation and the waste composition. It simulates the biologic reactions of hydrolysis of solids and the gasification of the dissolved biodegradable material. The results of this module are: organic matter (COD, BOD and elemental components such as carbon, hydrogen, nitrogen, oxygen, sulfur and ash), ammonium nitrogen generated with the gas and transported by the leachates and the potential rates of methane and carbon dioxide generation. The model was calibrated by using the general tendency curves of the pollutants recorded in municipal solid waste landfills, fitting the first part of them to available landfill data. Although the results show some agreement, further work is being done to make MODUELO a useful tool for real landfill simulation.

  8. Orthogonal Gaussian process models

    DOE PAGES

    Plumlee, Matthew; Joseph, V. Roshan

    2017-01-01

Gaussian process models are widely adopted for nonparametric/semi-parametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.
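
    A schematic statement of the model class described above, in assumed notation (the precise construction in the paper may differ): the response is a parametric mean plus a zero-mean Gaussian process whose covariance is built to be orthogonal to the mean basis functions, which keeps the mean coefficients identifiable.

```latex
% Gaussian process model with a parametric mean and a stochastic part Z(x)
% constrained to be orthogonal to the mean basis functions f_j(x).
y(x) \;=\; \sum_{j} \beta_j\, f_j(x) \;+\; Z(x),
\qquad
Z(x) \sim \mathrm{GP}\!\left(0,\, c(x, x')\right),
\qquad
\int_{\mathcal{X}} f_j(x)\, c(x, x')\,\mathrm{d}x \;=\; 0
\quad \text{for all } x' \text{ and } j.
```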

  9. Orthogonal Gaussian process models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plumlee, Matthew; Joseph, V. Roshan

Gaussian process models are widely adopted for nonparametric/semi-parametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.

  10. Construction and calibration of a groundwater-flow model to assess groundwater availability in the uppermost principal aquifer systems of the Williston Basin, United States and Canada

    USGS Publications Warehouse

    Davis, Kyle W.; Long, Andrew J.

    2018-05-31

    The U.S. Geological Survey developed a groundwater-flow model for the uppermost principal aquifer systems in the Williston Basin in parts of Montana, North Dakota, and South Dakota in the United States and parts of Manitoba and Saskatchewan in Canada as part of a detailed assessment of the groundwater availability in the area. The assessment was done because of the potential for increased demands and stresses on groundwater associated with large-scale energy development in the area. As part of this assessment, a three-dimensional groundwater-flow model was developed as a tool that can be used to simulate how the groundwater-flow system responds to changes in hydrologic stresses at a regional scale. The three-dimensional groundwater-flow model was developed using the U.S. Geological Survey’s numerical finite-difference groundwater model with the Newton-Raphson solver, MODFLOW–NWT, to represent the glacial, lower Tertiary, and Upper Cretaceous aquifer systems for steady-state (mean) hydrological conditions for 1981‒2005 and for transient (temporally varying) conditions using a combination of a steady-state period for pre-1960 and transient periods for 1961‒2005. The numerical model framework was constructed based on existing and interpreted hydrogeologic and geospatial data and consisted of eight layers. Two layers were used to represent the glacial aquifer system in the model; layer 1 represented the upper one-half and layer 2 represented the lower one-half of the glacial aquifer system. Three layers were used to represent the lower Tertiary aquifer system in the model; layer 3 represented the upper Fort Union aquifer, layer 4 represented the middle Fort Union hydrogeologic unit, and layer 5 represented the lower Fort Union aquifer. Three layers were used to represent the Upper Cretaceous aquifer system in the model; layer 6 represented the upper Hell Creek hydrogeologic unit, layer 7 represented the lower Hell Creek aquifer, and layer 8 represented the Fox Hills aquifer. The numerical model was constructed using a uniform grid with square cells that are about 1 mile (1,600 meters) on each side with a total of about 657,000 active cells. Model calibration was completed by linking Parameter ESTimation (PEST) software with MODFLOW–NWT. The PEST software uses statistical parameter estimation techniques to identify an optimum set of input parameters by adjusting individual model input parameters and assessing the differences, or residuals, between observed (measured or estimated) data and simulated values. Steady-state model calibration consisted of attempting to match mean simulated values to measured or estimated values of (1) hydraulic head, (2) hydraulic head differences between model layers, (3) stream infiltration, and (4) discharge to streams. Calibration of the transient model consisted of attempting to match simulated and measured temporally distributed values of hydraulic head changes, stream base flow, and groundwater discharge to artesian flowing wells. Hydraulic properties estimated through model calibration included hydraulic conductivity, vertical hydraulic conductivity, aquifer storage, and riverbed hydraulic conductivity in addition to groundwater recharge and well skin. The ability of the numerical model to accurately simulate groundwater flow in the Williston Basin was assessed primarily by its ability to match calibration targets for hydraulic head, stream base flow, and flowing well discharge. The steady-state model also was used to assess the simulated potentiometric surfaces in the upper Fort Union aquifer, the lower Fort Union aquifer, and the Fox Hills aquifer. Additionally, a previously estimated regional groundwater-flow budget was compared with the simulated steady-state groundwater-flow budget for the Williston Basin. The simulated potentiometric surfaces typically compared well with the estimated potentiometric surfaces based on measured hydraulic head data and indicated localized groundwater-flow gradients that were topographically controlled in outcrop areas and more generalized regional gradients where the aquifers were confined. The differences (residuals) between the measured and simulated hydraulic head values for 11,109 wells were assessed, which indicated that the steady-state model generally underestimated hydraulic head in the model area. This underestimation is indicated by a positive mean residual of 11.2 feet for all model layers. Layer 7, which represents the lower Hell Creek aquifer, is the only layer for which the steady-state model overestimated hydraulic head. Simulated groundwater-level changes for the transient model matched within plus or minus 2.5 feet of the measured values for more than 60 percent of all measurements and to within plus or minus 17.5 feet for 95 percent of all measurements; however, the transient model underestimated groundwater-level changes for all model layers. A comparison between simulated and estimated base flows for the steady-state and transient models indicated that both models overestimated base flow in streams and underestimated annual fluctuations in base flow. The estimated and simulated groundwater budgets indicate the model area received a substantial amount of recharge from precipitation and stream infiltration. The steady-state model indicated that reservoir seepage was a larger component of recharge in the Williston Basin than was previously estimated. Irrigation recharge and groundwater inflow from outside the Williston Basin accounted for a relatively small part of total groundwater recharge when compared with recharge from precipitation, stream infiltration, and reservoir seepage. Most of the estimated and simulated groundwater discharge in the Williston Basin was to streams and reservoirs. Simulated groundwater withdrawal, discharge to reservoirs, and groundwater outflow in the Williston Basin accounted for a smaller part of total groundwater discharge. The transient model was used to simulate discharge to 571 flowing artesian wells within the model area. Of the 571 established flowing artesian wells simulated by the model, 271 wells did not flow at any time during the simulation because hydraulic head was always below the land-surface altitude. As hydraulic head declined throughout the simulation, 68 of these wells responded by ceasing to flow by the end of 2005. Total mean simulated discharge for the 571 flowing artesian wells was 55.1 cubic feet per second (ft3/s), and the mean simulated flowing well discharge for individual wells was 0.118 ft3/s. Simulated discharge to individual flowing artesian wells increased from 0.039 to 0.177 ft3/s between 1961 and 1975 and decreased to 0.102 ft3/s by 2005. The mean residual for 34 flowing wells with measured discharge was 0.014 ft3/s, which indicates the transient model overestimated discharge to flowing artesian wells in the model area. Model limitations arise from aspects of the conceptual model and from simplifications inherent in the construction and calibration of a regional-scale numerical groundwater-flow model. Simplifying assumptions in defining hydraulic parameters in space and hydrologic stresses and time-varying observational data in time can limit the capabilities of this tool to simulate how the groundwater-flow system responds to changes in hydrologic stresses, particularly at the local scale; nevertheless, the steady-state model adequately simulated flow in the uppermost principal aquifer systems in the Williston Basin based on the comparison between the simulated and estimated groundwater-flow budget, the comparison between simulated and estimated potentiometric surfaces, and the results of the calibration process.
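
    The model described above is a cell-centred finite-difference groundwater model (MODFLOW–NWT) on a uniform grid of roughly 1-mile square cells. As a purely illustrative toy of the underlying finite-difference idea, the sketch below solves a single confined 2-D layer at steady state with made-up transmissivity, recharge, and boundary heads; it is not the Williston Basin model and does not use MODFLOW.

```python
import numpy as np

# Toy steady-state, confined, single-layer finite-difference groundwater solve:
#     T * (d2h/dx2 + d2h/dy2) + R = 0
# on a uniform grid of square cells, solved by Jacobi iteration.
# Grid size, transmissivity T, recharge R and boundary heads are illustrative only.
nrow, ncol = 50, 50
dx = 1600.0          # m, cell size (square cells)
T = 500.0            # m^2/d, uniform transmissivity
R = 2.0e-5           # m/d, uniform areal recharge

h = np.full((nrow, ncol), 95.0)   # initial head; boundary cells stay fixed
h[:, 0], h[:, -1] = 100.0, 90.0   # fixed-head boundaries on left/right edges

for _ in range(20000):
    h_new = h.copy()
    # five-point stencil: h_ij = (sum of 4 neighbours + R*dx^2/T) / 4
    h_new[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                                h[1:-1, :-2] + h[1:-1, 2:] + R * dx**2 / T)
    if np.max(np.abs(h_new - h)) < 1e-6:
        h = h_new
        break
    h = h_new

print("simulated head range (m):", round(float(h.min()), 2), "to", round(float(h.max()), 2))
```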

  11. The Role of Simulation in Microsurgical Training.

    PubMed

    Evgeniou, Evgenios; Walker, Harriet; Gujral, Sameer

    Simulation has been established as an integral part of microsurgical training. The aim of this study was to assess and categorize the various simulation models in relation to the complexity of the microsurgical skill being taught and analyze the assessment methods commonly employed in microsurgical simulation training. Numerous courses have been established using simulation models. These models can be categorized, according to the level of complexity of the skill being taught, into basic, intermediate, and advanced. Microsurgical simulation training should be assessed using validated assessment methods. Assessment methods vary significantly from subjective expert opinions to self-assessment questionnaires and validated global rating scales. The appropriate assessment method should carefully be chosen based on the simulation modality. Simulation models should be validated, and a model with appropriate fidelity should be chosen according to the microsurgical skill being taught. Assessment should move from traditional simple subjective evaluations of trainee performance to validated tools. Future studies should assess the transferability of skills gained during simulation training to the real-life setting. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  12. Atmospheric Profiles, Clouds, and the Evolution of Sea Ice Cover in the Beaufort and Chukchi Seas: Atmospheric Observations and Modeling as Part of the Seasonal Ice Zone Reconnaissance Surveys

    DTIC Science & Technology

    2015-09-30

... to conduct WRF model experiments. We conducted Weather Research and Forecasting (WRF) model simulations for the summer of 2014 and compared with the ... level winds might be more important forcing for sea ice. In addition, evaluation of Polar-WRF simulations under different synoptic conditions will help ...

  13. War-gaming application for future space systems acquisition part 1: program and technical baseline war-gaming modeling and simulation approaches

    NASA Astrophysics Data System (ADS)

    Nguyen, Tien M.; Guillen, Andy T.

    2017-05-01

    This paper describes static Bayesian game models with "Pure" and "Mixed" games for the development of an optimum Program and Technical Baseline (PTB) solution for affordable acquisition of future space systems. The paper discusses System Engineering (SE) frameworks and analytical and simulation modeling approaches for developing the optimum PTB solutions from both the government and contractor perspectives.

  14. Studying Turbulence Using Numerical Simulation Databases. Part 6; Proceedings of the 1996 Summer Program

    NASA Technical Reports Server (NTRS)

    1996-01-01

Topics considered include: New approach to turbulence modeling; Second moment closure analysis of the backstep flow database; Prediction of the backflow and recovery regions in the backward facing step at various Reynolds numbers; Turbulent flame propagation in partially premixed flames; Ensemble averaged dynamic modeling. Also included are a study of the turbulence structures of wall-bounded shear flows and the simulation and modeling of the elliptic streamline flow.

  15. Ground-water flow in the New Jersey Coastal Plain

    USGS Publications Warehouse

    Martin, Mary

    1998-01-01

    Ground-water flow in 10 aquifers and 9 intervening confining units of the New Jersey Coastal Plain was simulated as part of the Regional Aquifer System Analysis. Data on aquifer and confining unit characteristics and on pumpage and water levels from 1918 through 1980 were incorporated into a multilayer finite-difference model. The report describes the conceptual hydrogeologic model of the unstressed flow systems, the methods and approach used in simulating flow, and the results of the simulations.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitarka, A.

In this project we developed GEN_SRF4, a computer program for generating kinematic rupture models, compatible with the SRF format, using the Irikura and Miyake (2011) asperity-based earthquake rupture model (IM2011, hereafter). IM2011, also known as Irikura's recipe, has been widely used to model and simulate ground motion from earthquakes in Japan. An essential part of the method is its kinematic rupture generation technique, which is based on a deterministic rupture asperity modeling approach. The source model simplicity and efficiency of IM2011 at reproducing ground motion from earthquakes recorded in Japan make it attractive to developers and users of the Southern California Earthquake Center Broadband Platform (SCEC BB platform). Besides writing the code, the objective of our study was to test the transportability of IM2011 to broadband simulation methods used by the SCEC BB platform. Here we test it using the Graves and Pitarka (2010) method, implemented in the platform. We performed broadband (0.1-10 Hz) ground motion simulations for a M6.7 scenario earthquake using rupture models produced with both GEN_SRF4 and the rupture generator of Graves and Pitarka (2016) (GP2016 hereafter). In the simulations we used the same Green's functions and the same approaches for calculating the low-frequency and high-frequency parts of ground motion.

  17. Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors

    NASA Astrophysics Data System (ADS)

    Kim, J.; Waliser, Duane E.; Mattmann, Chris A.; Goodale, Cameron E.; Hart, Andrew F.; Zimdars, Paul A.; Crichton, Daniel J.; Jones, Colin; Nikulin, Grigory; Hewitson, Bruce; Jack, Chris; Lennard, Christopher; Favre, Alice

    2014-03-01

    Monthly-mean precipitation, mean (TAVG), maximum (TMAX) and minimum (TMIN) surface air temperatures, and cloudiness from the CORDEX-Africa regional climate model (RCM) hindcast experiment are evaluated for model skill and systematic biases. All RCMs simulate basic climatological features of these variables reasonably, but systematic biases also occur across these models. All RCMs show higher fidelity in simulating precipitation for the west part of Africa than for the east part, and for the tropics than for northern Sahara. Interannual variation in the wet season rainfall is better simulated for the western Sahel than for the Ethiopian Highlands. RCM skill is higher for TAVG and TMAX than for TMIN, and regionally, for the subtropics than for the tropics. RCM skill in simulating cloudiness is generally lower than for precipitation or temperatures. For all variables, multi-model ensemble (ENS) generally outperforms individual models included in ENS. An overarching conclusion in this study is that some model biases vary systematically for regions, variables, and metrics, posing difficulties in defining a single representative index to measure model fidelity, especially for constructing ENS. This is an important concern in climate change impact assessment studies because most assessment models are run for specific regions/sectors with forcing data derived from model outputs. Thus, model evaluation and ENS construction must be performed separately for regions, variables, and metrics as required by specific analysis and/or assessments. Evaluations using multiple reference datasets reveal that cross-examination, quality control, and uncertainty estimates of reference data are crucial in model evaluations.

  18. Materials Characterisation and Analysis for Flow Simulation of Liquid Resin Infusion

    NASA Astrophysics Data System (ADS)

    Sirtautas, J.; Pickett, A. K.; George, A.

    2015-06-01

    Liquid Resin Infusion (LRI) processes including VARI and VARTM have received increasing attention in recent years, particularly for infusion of large parts, or for low volume production. This method avoids the need for costly matched metal tooling as used in Resin Transfer Moulding (RTM) and can provide fast infusion if used in combination with flow media. Full material characterisation for LRI analysis requires models for three dimensional fabric permeability as a function of fibre volume content, fabric through-thickness compliance as a function of resin pressure, flow media permeability and resin viscosity. The characterisation of fabric relaxation during infusion is usually determined from cyclic compaction tests on saturated fabrics. This work presents an alternative method to determine the compressibility by using LRI flow simulation and fitting a model to experimental thickness measurements during LRI. The flow media is usually assumed to have isotropic permeability, but this work shows greater simulation accuracy from combining the flow media with separation plies as a combined orthotropic material. The permeability of this combined media can also be determined by fitting the model with simulation to LRI flow measurements. The constitutive models and the finite element solution were validated by simulation of the infusion of a complex aerospace demonstrator part.
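
    The permeability-versus-fibre-volume relationship referred to above is typically used inside a Darcy-law flow model; a Kozeny-Carman-type fit is one common generic form, shown here only to illustrate the kind of constitutive input the characterisation supplies (the specific model identified in the paper may differ).

```latex
% Darcy flow through the preform and a generic Kozeny-Carman-type permeability
% model as a function of fibre volume fraction V_f; \mu is the resin viscosity
% and k_i are direction-dependent fitting constants (illustrative form).
\mathbf{u} \;=\; -\,\frac{\mathbf{K}(V_f)}{\mu}\,\nabla p,
\qquad
K_{ii}(V_f) \;=\; k_i\,\frac{(1 - V_f)^{3}}{V_f^{2}}.
```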

  19. Space Station communications and tracking systems modeling and RF link simulation

    NASA Technical Reports Server (NTRS)

    Tsang, Chit-Sang; Chie, Chak M.; Lindsey, William C.

    1986-01-01

    In this final report, the effort spent on Space Station Communications and Tracking System Modeling and RF Link Simulation is described in detail. The effort is mainly divided into three parts: frequency division multiple access (FDMA) system simulation modeling and software implementation; a study on design and evaluation of a functional computerized RF link simulation/analysis system for Space Station; and a study on design and evaluation of simulation system architecture. This report documents the results of these studies. In addition, a separate User's Manual on Space Communications Simulation System (SCSS) (Version 1) documents the software developed for the Space Station FDMA communications system simulation. The final report, SCSS user's manual, and the software located in the NASA JSC system analysis division's VAX 750 computer together serve as the deliverables from LinCom for this project effort.

  20. Weibull mixture regression for marginal inference in zero-heavy continuous outcomes.

    PubMed

    Gebregziabher, Mulugeta; Voronca, Delia; Teklehaimanot, Abeba; Santa Ana, Elizabeth J

    2017-06-01

Continuous outcomes with a preponderance of zero values are ubiquitous in data that arise from biomedical studies, for example studies of addictive disorders. This is known to lead to violation of standard assumptions in parametric inference and enhances the risk of misleading conclusions unless managed properly. Two-part models are commonly used to deal with this problem. However, standard two-part models have limitations with respect to obtaining parameter estimates that have a marginal interpretation of covariate effects, which is important in many biomedical applications. Recently, marginalized two-part models have been proposed, but their development is limited to log-normal and log-skew-normal distributions. Thus, in this paper, we propose a finite mixture approach, with Weibull mixture regression as a special case, to deal with the problem. We use an extensive simulation study to assess the performance of the proposed model in finite samples and to make comparisons with other families of models via statistical information and mean squared error criteria. We demonstrate its application on real data from a randomized controlled trial of addictive disorders. Our results show that a two-component Weibull mixture model is preferred for modeling zero-heavy continuous data when the non-zero part is simulated from Weibull or similar distributions such as Gamma or truncated Gaussian.
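
    As a minimal illustration of the two-part idea for zero-heavy outcomes (a Bernoulli part for the zeros and a Weibull part for the positive values, without covariates), the sketch below simulates such data and fits both parts by maximum likelihood. It is a toy version of the model class compared in the paper, not the authors' marginalized or mixture estimator.

```python
import math

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate zero-heavy data: ~40% structural zeros, positives ~ Weibull(shape=1.5, scale=2).
n = 2000
is_zero = rng.random(n) < 0.4
y = np.where(is_zero, 0.0, rng.weibull(1.5, n) * 2.0)

# Two-part fit: Bernoulli part for P(y = 0), Weibull part for y | y > 0 (MLE).
pi_zero = np.mean(y == 0)
shape, loc, scale = stats.weibull_min.fit(y[y > 0], floc=0)  # location fixed at 0

# Marginal (overall) mean implied by combining the two parts.
marginal_mean = (1 - pi_zero) * scale * math.gamma(1 + 1 / shape)
print(f"P(zero) = {pi_zero:.3f}, Weibull shape = {shape:.2f}, scale = {scale:.2f}, "
      f"implied marginal mean = {marginal_mean:.2f}")
```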

  1. Dimensional Metrology of Non-rigid Parts Without Specialized Inspection Fixtures =

    NASA Astrophysics Data System (ADS)

    Sabri, Vahid

    Quality control is an important factor for manufacturing companies looking to prosper in an era of globalization, market pressures and technological advances. Functionality and product quality cannot be guaranteed without this important aspect. Manufactured parts have deviations from their nominal (CAD) shape caused by the manufacturing process. Thus, geometric inspection is a very important element in the quality control of mechanical parts. We will focus here on the geometric inspection of non-rigid (flexible) parts which are widely used in the aeronautic and automotive industries. Non-rigid parts can have different forms in a free-state condition compared with their nominal models due to residual stress and gravity loads. To solve this problem, dedicated inspection fixtures are generally used in industry to compensate for the displacement of such parts for simulating the use state in order to perform geometric inspections. These fixtures and the installation and inspection processes are expensive and time-consuming. Our aim in this thesis is therefore to develop an inspection method which eliminates the need for specialized fixtures. This is done by acquiring a point cloud from the part in a free-state condition using a contactless measuring device such as optical scanning and comparing it with the CAD model for the deviation identification. Using a non-rigid registration method and finite element analysis, we numerically inspect the profile of a non-rigid part. To do so, a simulated displacement is performed using an improved definition of displacement boundary conditions for simulating unfixed parts. In addition, we propose a numerical method for dimensional metrology of non-rigid parts in a free-state condition based on the arc length measurement by calculating the geodesic distance using the Fast Marching Method (FMM). In this thesis, we apply our developed methods on industrial non-rigid parts with free-form surfaces simulated with different types of displacement, defect, and measurement noise in order to evaluate the metrological performance of the developed methods.

  2. Numerical simulation of complex part manufactured by selective laser melting process

    NASA Astrophysics Data System (ADS)

    Van Belle, Laurent

    2017-10-01

The Selective Laser Melting (SLM) process, belonging to the family of Additive Manufacturing (AM) technologies, enables parts to be built layer by layer from metallic powder and a CAD model. The physical phenomena that occur in the process raise the same issues as conventional welding: thermal gradients generate significant residual stresses and distortions in the parts. Moreover, large and complex parts accentuate these undesirable effects. Therefore, it is essential for manufacturers to gain a better understanding of the process and to ensure reliable production of parts with high added value. This paper focuses on the simulation of a turbine manufactured by the SLM process in order to calculate residual stresses and distortions. Numerical results are presented.

  3. A kinetic energy model of two-vehicle crash injury severity.

    PubMed

    Sobhani, Amir; Young, William; Logan, David; Bahrololoom, Sareh

    2011-05-01

    An important part of any model of vehicle crashes is the development of a procedure to estimate crash injury severity. After reviewing existing models of crash severity, this paper outlines the development of a modelling approach aimed at measuring the injury severity of people in two-vehicle road crashes. This model can be incorporated into a discrete event traffic simulation model, using simulation model outputs as its input. The model can then serve as an integral part of a simulation model estimating the crash potential of components of the traffic system. The model is developed using Newtonian Mechanics and Generalised Linear Regression. The factors contributing to the speed change (ΔV(s)) of a subject vehicle are identified using the law of conservation of momentum. A Log-Gamma regression model is fitted to measure speed change (ΔV(s)) of the subject vehicle based on the identified crash characteristics. The kinetic energy applied to the subject vehicle is calculated by the model, which in turn uses a Log-Gamma Regression Model to estimate the Injury Severity Score of the crash from the calculated kinetic energy, crash impact type, presence of airbag and/or seat belt and occupant age. Copyright © 2010 Elsevier Ltd. All rights reserved.
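
    To make the momentum step concrete, the sketch below computes the speed change ΔV of a subject vehicle in a collinear two-vehicle impact from conservation of momentum under a perfectly plastic (common post-impact velocity) simplification; the masses and speeds are invented, and the regression layers of the paper are not reproduced.

```python
def delta_v_subject(m_s, v_s, m_o, v_o):
    """Speed change of the subject vehicle in a 1-D, perfectly plastic
    (common post-impact velocity) two-vehicle collision, from conservation
    of momentum.  m_*: masses [kg]; v_*: signed pre-impact speeds [m/s]."""
    v_common = (m_s * v_s + m_o * v_o) / (m_s + m_o)
    return abs(v_common - v_s)


# Example: 1200 kg subject car at 60 km/h struck head-on by an 1800 kg vehicle at 50 km/h.
m_s, v_s = 1200.0, 60.0 / 3.6
m_o, v_o = 1800.0, -50.0 / 3.6
dv = delta_v_subject(m_s, v_s, m_o, v_o)
kinetic_energy = 0.5 * m_s * dv**2   # energy associated with the subject's speed change [J]
print(f"delta-V = {dv:.1f} m/s, associated kinetic energy = {kinetic_energy / 1000:.0f} kJ")
```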

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Huan; Cheng, Liang; Chuah, Mooi Choo

In the generation, transmission, and distribution sectors of the smart grid, intelligence of field devices is realized by programmable logic controllers (PLCs). Many smart-grid subsystems are essentially cyber-physical energy systems (CPES): For instance, the power system process (i.e., the physical part) within a substation is monitored and controlled by a SCADA network with hosts running miscellaneous applications (i.e., the cyber part). To study the interactions between the cyber and physical components of a CPES, several co-simulation platforms have been proposed. However, the network simulators/emulators of these platforms do not include a detailed traffic model that takes into account the impacts of the execution model of PLCs on traffic characteristics. As a result, network traces generated by co-simulation only reveal the impacts of the physical process on the contents of the traffic generated by SCADA hosts, whereas the distinction between PLCs and computing nodes (e.g., a hardened computer running a process visualization application) has been overlooked. To generate realistic network traces using co-simulation for the design and evaluation of applications relying on accurate traffic profiles, it is necessary to establish a traffic model for PLCs. In this work, we propose a parameterized model for PLCs that can be incorporated into existing co-simulation platforms. We focus on the DNP3 subsystem of slave PLCs, which automates the processing of packets from the DNP3 master. To validate our approach, we extract model parameters from both the configuration and network traces of real PLCs. Simulated network traces are generated and compared against those from PLCs. Our evaluation shows that our proposed model captures the essential traffic characteristics of DNP3 slave PLCs, which can be used to extend existing co-simulation platforms and gain further insights into the behaviors of CPES.

  5. Full-Process Computer Model of Magnetron Sputter, Part I: Test Existing State-of-Art Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walton, C C; Gilmer, G H; Wemhoff, A P

    2007-09-26

This work is part of a larger project to develop a modeling capability for magnetron sputter deposition. The process is divided into four steps: plasma transport, target sputter, neutral gas and sputtered atom transport, and film growth, shown schematically in Fig. 1. Each of these is simulated separately in this Part 1 of the project, which is jointly funded between CMLS and Engineering. The Engineering portion is the plasma modeling, in step 1. The plasma modeling was performed using the Object-Oriented Particle-In-Cell code (OOPIC) from UC Berkeley [1]. Figure 2 shows the electron density in the simulated region, using magnetic field strength input from experiments by Bohlmark [2], where a scale of 1% is used. Figures 3 and 4 depict the magnetic field components that were generated using two-dimensional linear interpolation of Bohlmark's experimental data. The goal of the overall modeling tool is to understand, and later predict, relationships between parameters of film deposition we can change (such as gas pressure, gun voltage, and target-substrate distance) and key properties of the results (such as film stress, density, and stoichiometry). The simulation must use existing codes, either open-source or low-cost, not develop new codes. In part 1 (FY07) we identified and tested the best available code for each process step, then determined if it can cover the size and time scales we need in reasonable computation times. We also had to determine if the process steps are sufficiently decoupled that they can be treated separately, and identify any research-level issues preventing practical use of these codes. Part 2 will consider whether the codes can be (or need to be) made to talk to each other and integrated into a whole.

  6. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Kevin

The first part of this paper (Part 1) presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. To generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work has the ability to account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient is predicted using traditional/empirical correlations and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry’s constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. The calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.

  7. Analysis of satellite multibeam antennas’ performances

    NASA Astrophysics Data System (ADS)

    Sterbini, Guido

    2006-07-01

In this work, we discuss the application of the frequency reuse concept in satellite communications, stressing the importance of a design-oriented mathematical model as a first step in dimensioning antenna systems. We consider multibeam reflector antennas. The first part of the work consists of reorganizing, unifying, and completing the models already developed in the scientific literature. In doing so, we adopt the multidimensional Taylor development formalism. For computing the spillover efficiency of the antenna, we consider different feed illuminations and propose a completely original mathematical model obtained by interpolation of simulator results. The second part of the work is dedicated to characterizing the secondary far-field pattern. By combining this model with information on the cellular coverage geometry, it is possible to evaluate the isolation and the minimum directivity on the cell. In the third part, in order to test the model and its analysis and synthesis capabilities, we implement a software tool that helps the designer rapidly tune the fundamental quantities for optimizing performance: the proposed model shows excellent agreement with the results of the simulations.

  8. Modeling, Simulation and Analysis of Public Key Infrastructure

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei; Tuey, Richard; Ma, Paul (Technical Monitor)

    1998-01-01

Security is an essential part of network communication. The advances in cryptography have provided solutions to many of the network security requirements. Public Key Infrastructure (PKI) is the foundation of cryptography applications. The main objective of this research is to design a model to simulate a reliable, scalable, manageable, and high-performance public key infrastructure. We build a model to simulate the NASA public key infrastructure by using SimProcess and MATLAB software. The simulation spans from the top level all the way down to the computations needed for encryption, decryption, digital signatures, and a secure web server. The secure web server application could be utilized in wireless communications. The results of the simulation are analyzed and confirmed by using queueing theory.

  9. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The method is termed "partial" because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
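
    The PRCC calculation described above can be sketched generically: rank-transform the inputs and the output, regress the other inputs' ranks out of both the input of interest and the output, and correlate the residuals. The data below are synthetic; this is not the IMM code or data.

```python
import numpy as np
from scipy import stats


def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y.
    X: (n_samples, n_inputs) sampled input values; y: (n_samples,) output."""
    Xr = np.apply_along_axis(stats.rankdata, 0, X)
    yr = stats.rankdata(y)
    n, k = Xr.shape
    out = np.empty(k)
    for j in range(k):
        # regress out the (rank-transformed) other inputs from x_j and from y
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        beta_x, *_ = np.linalg.lstsq(others, Xr[:, j], rcond=None)
        beta_y, *_ = np.linalg.lstsq(others, yr, rcond=None)
        rx = Xr[:, j] - others @ beta_x
        ry = yr - others @ beta_y
        out[j] = np.corrcoef(rx, ry)[0, 1]
    return out


# Synthetic check: the output depends strongly on x0, weakly on x1, not on x2.
rng = np.random.default_rng(1)
X = rng.random((500, 3))
y = np.exp(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(500)
print(np.round(prcc(X, y), 2))   # expect x0 near 1, x1 moderate, x2 near 0
```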

  10. Simulation of the Press Hardening Process and Prediction of the Final Mechanical Material Properties

    NASA Astrophysics Data System (ADS)

    Hochholdinger, Bernd; Hora, Pavel; Grass, Hannes; Lipp, Arnulf

    2011-08-01

Press hardening is a well-established production process in the automotive industry today. The current trend in this process technology points towards the manufacturing of parts with tailored properties. Since knowledge of the mechanical properties of a structural part after forming and quenching is essential for evaluating, for example, the crash performance, a virtual assessment of the production process that is as accurate as possible is more necessary than ever. In order to achieve this, the definition of reliable input parameters and boundary conditions for the thermo-mechanically coupled simulation of the process steps is required. One of the most important input parameters, especially regarding the final properties of the quenched material, is the contact heat transfer coefficient (CHTC). The CHTC depends on the effective pressure or the gap distance between part and tool. The CHTC at different contact pressures and gap distances is determined through inverse parameter identification. Furthermore, a simulation strategy for the subsequent steps of the press hardening process as well as adequate modeling approaches for part and tools are discussed. For the prediction of the yield curves of the material after press hardening, a phenomenological model is presented. This model requires knowledge of the microstructure within the part. By post-processing the nodal temperature history with a CCT diagram, the quantitative distribution of the phase fractions martensite, bainite, ferrite and pearlite after press hardening is determined. The model itself is based on a Hockett-Sherby approach with the Hockett-Sherby parameters defined as functions of the phase fractions and a characteristic cooling rate.
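
    For reference, the Hockett-Sherby hardening law mentioned above is commonly written in a form like the one below; the linear phase-fraction mixture rule shown for its parameters is an assumed illustration of how the phase fractions from the CCT post-processing can enter, not necessarily the exact formulation of this paper.

```latex
% Hockett-Sherby flow curve (one common form) and an illustrative mixture rule.
% \sigma_i: initial yield stress, \sigma_sat: saturation stress, a, n: fitting
% parameters; X_k: phase fractions of martensite, bainite, ferrite, pearlite.
\sigma_y(\varepsilon_p) \;=\; \sigma_{\mathrm{sat}}
  \;-\; \bigl(\sigma_{\mathrm{sat}} - \sigma_{i}\bigr)\,
        \exp\!\bigl(-\,a\,\varepsilon_p^{\,n}\bigr),
\qquad
\sigma_{\mathrm{sat}} \;=\; \sum_{k \in \{\mathrm{M,B,F,P}\}} X_k\,\sigma_{\mathrm{sat},k}.
```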

  11. Estimation of effective brain connectivity with dual Kalman filter and EEG source localization methods.

    PubMed

    Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher

    2017-09-01

Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method is proposed which is based on the dual Kalman filter. In this method, first, a brain activity localization method (standardized low-resolution brain electromagnetic tomography) is applied to the EEG signal to extract the active regions, and an appropriate time model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate their activity and the time dependence between them. Then, the dual Kalman filter is used to estimate the model parameters, i.e., the effective connectivity between the active regions. The advantage of this method is the simultaneous estimation of the activity of different brain parts and the effective connectivity between the active regions. By combining the dual Kalman filter with brain source localization methods, the source activity is also updated over time in addition to the connectivity estimation between parts. The performance of the proposed method was evaluated first by applying it to simulated EEG signals with simulated interacting connectivity between active parts. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. Then the method was applied to real signals and the estimation error over a sweeping window was calculated. Comparing the results across the different cases (simulated and real signals), the proposed method gives acceptable results with the lowest mean-square error in noisy and real conditions.
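
    As a small illustration of the time model mentioned above, the sketch below fits an order-p multivariate autoregressive (MVAR) model to source time series by ordinary least squares; the dual-Kalman-filter stage that tracks these coefficients over time is not reproduced, and the data and model order are synthetic choices.

```python
import numpy as np


def fit_mvar(x, p=3):
    """Least-squares fit of an order-p MVAR model  x[t] = sum_k A_k x[t-k] + e[t].
    x: (n_samples, n_channels) source time series.  Returns A with shape (p, n_ch, n_ch),
    where A[k-1][j, i] couples channel i at lag k to channel j at the current time."""
    n_samples, n_ch = x.shape
    Y = x[p:]                                                        # targets
    Z = np.hstack([x[p - k:n_samples - k] for k in range(1, p + 1)])  # lagged regressors
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)                     # (p*n_ch, n_ch)
    return coef.T.reshape(n_ch, p, n_ch).transpose(1, 0, 2)


# Two coupled synthetic sources: channel 1 is driven by channel 0 two samples earlier.
rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 2)) * 0.1
for t in range(3, 2000):
    x[t, 0] += 0.8 * x[t - 1, 0]
    x[t, 1] += 0.6 * x[t - 1, 1] + 0.4 * x[t - 2, 0]

A = fit_mvar(x, p=3)
print(np.round(A[1], 2))   # lag-2 coefficient matrix; entry [1, 0] should be near 0.4
```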

  12. Simulation of crash tests for high impact levels of a new bridge safety barrier

    NASA Astrophysics Data System (ADS)

    Drozda, Jiří; Rotter, Tomáš

    2017-09-01

The purpose is to demonstrate the potential of non-linear dynamic impact simulation and to explain how the finite element method (FEM) can be used to develop new designs of safety barriers. The main challenge is to determine the means to create and validate the finite element (FE) model. The results of accurate impact simulations can help to reduce the costs of developing a new safety barrier. The introductory part deals with the creation of the FE model, which includes the newly designed safety barrier, and focuses on the application of an experimental modal analysis (EMA). The FE model has been created in ANSYS Workbench and is formed from shell and solid elements. The experimental modal analysis, which was performed on the real structure, was employed to measure the modal frequencies and mode shapes. After performing the EMA, the FE mesh was calibrated by comparing the measured modal frequencies with the calculated ones. The last part describes the process of the numerical non-linear dynamic impact simulation in LS-DYNA. This simulation was validated by comparing the measured ASI index with the calculated one. The aim of the study is to improve the professional community's knowledge of dynamic non-linear impact simulations. This should ideally lead to safer, more accurate, and more economical designs.

  13. Evaluation of Global Observations-Based Evapotranspiration Datasets and IPCC AR4 Simulations

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Seneviratne, S. I.; Jimenez, C.; Corti, T.; Hirschi, M.; Balsamo, G.; Ciais, P.; Dirmeyer, P.; Fisher, J. B.; Guo, Z.; hide

    2011-01-01

Quantification of global land evapotranspiration (ET) has long been associated with large uncertainties due to the lack of reference observations. Several recently developed products now provide the capacity to estimate ET at global scales. These products, partly based on observational data, include satellite-based products, land surface model (LSM) simulations, atmospheric reanalysis output, estimates based on empirical upscaling of eddy-covariance flux measurements, and atmospheric water balance datasets. The LandFlux-EVAL project aims to evaluate and compare these newly developed datasets. Additionally, an evaluation of IPCC AR4 global climate model (GCM) simulations is presented, providing an assessment of their capacity to reproduce flux behavior relative to the observations-based products. Though differently constrained with observations, the analyzed reference datasets display similar large-scale ET patterns. ET from the IPCC AR4 simulations was significantly smaller than that from the other products for India (up to 1 mm/d) and parts of eastern South America, and larger in the western USA, Australia and China. The inter-product variance is lower across the IPCC AR4 simulations than across the reference datasets in several regions, which indicates that uncertainties may be underestimated in the IPCC AR4 models due to shared biases of these simulations.

  14. MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Yan; Gohar, Yousry

    2015-11-01

In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical for producing periodic power pulses. Such periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with the current reactor physics computer programs. Based on past experience, the utilization of the point kinetics approximations gives considerable errors in predicting the magnitude and the shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the transfer function TRCL/TR of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during a neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.

  15. Numerical Modeling of Hailstorms and Hailstone Growth. Part III: Simulation of an Alberta Hailstorm--Natural and Seeded Cases.

    NASA Astrophysics Data System (ADS)

    Farley, Richard D.

    1987-07-01

This paper reports on simulations of a multicellular hailstorm case observed during the 1983 Alberta Hail Project. The field operations on that day concentrated on two successive feeder cells which were subjected to controlled seeding experiments. The first of these cells received the placebo treatment and the second was seeded with dry ice. The principal tool of this study is a modified version of the two-dimensional, time dependent hail category model described in Part I of this series of papers. It is with this model that hail growth processes are investigated, including the simulated effects of cloud seeding techniques as practiced in Alberta. The model simulation of the natural case produces a very good replication of the observed storm, particularly the placebo feeder cell. This is evidenced, in particular, by the high degree of fidelity of the observed and modeled radar reflectivity in terms of magnitudes, structure, and evolution. The character of the hailfall at the surface and the scale of the storm are captured nicely by the model, although cloud-top heights are generally too high, particularly for the mature storm system. Seeding experiments similar to those conducted in the field have also been simulated. These involve seeding the feeder cell early in its active development phase with dry ice (CO2) or silver iodide (AgI) introduced near cloud top. The model simulations of these seeded cases capture some of the observed seeding signatures detected by radar and aircraft. In these model experiments, CO2 seeding produced a stronger response than AgI seeding relative to inhibiting hail formation. For both seeded cases, production of precipitating ice was initially enhanced by the seeding, but retarded slightly in the later stages, the net result being modest increases in surface rainfall, with hail reduced slightly. In general, the model simulations support several subhypotheses of the operational strategy of the Alberta Research Council regarding the earlier formation of ice, snow, and graupel due to seeding.

  16. Computational modeling of the EGFR network elucidates control mechanisms regulating signal dynamics

    PubMed Central

    2009-01-01

Background The epidermal growth factor receptor (EGFR) signaling pathway plays a key role in regulation of cellular growth and development. While highly studied, it is still not fully understood how the signal is orchestrated. One of the reasons for the complexity of this pathway is the extensive network of inter-connected components involved in the signaling. With the aim of identifying critical mechanisms controlling signal transduction we have performed extensive analysis of an executable model of the EGFR pathway using the stochastic pi-calculus as a modeling language. Results Our analysis, done through simulation of various perturbations, suggests that the EGFR pathway contains regions of functional redundancy in the upstream parts; in the event of low EGF stimulus or partial system failure, this redundancy helps to maintain functional robustness. Downstream parts, like the parts controlling Ras and ERK, have fewer redundancies, and more than 50% inhibition of specific reactions in those parts greatly attenuates signal response. In addition, we suggest an abstract model that captures the main control mechanisms in the pathway. Simulation of this abstract model suggests that without redundancies in the upstream modules, signal transduction through the entire pathway could be attenuated. In terms of specific control mechanisms, we have identified positive feedback loops whose role is to prolong the active state of key components (e.g., MEK-PP, Ras-GTP), and negative feedback loops that help promote signal adaptation and stabilization. Conclusions The insights gained from simulating this executable model facilitate the formulation of specific hypotheses regarding the control mechanisms of the EGFR signaling, and further substantiate the benefit of constructing abstract executable models of large complex biological networks. PMID:20028552

  17. The Implications of 3D Thermal Structure on 1D Atmospheric Retrieval

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Dobbs-Dixon, Ian; Greene, Thomas

    2017-10-01

Using the atmospheric structure from a 3D global radiation-hydrodynamic simulation of HD 189733b and the open-source Bayesian Atmospheric Radiative Transfer (BART) code, we investigate the difference between the secondary-eclipse temperature structure produced with a 3D simulation and the best-fit 1D retrieved model. Synthetic data are generated by integrating the 3D models over the Spitzer, the Hubble Space Telescope (HST), and the James Webb Space Telescope (JWST) bandpasses, covering the wavelength range between 1 and 11 μm where most spectroscopically active species have pronounced features. Using the data from different observing instruments, we present detailed comparisons between the temperature-pressure profiles recovered by BART and those from the 3D simulations. We calculate several averages of the 3D thermal structure and explore which particular thermal profile matches the retrieved temperature structure. We implement two temperature parameterizations that are commonly used in retrieval to investigate different thermal profile shapes. To assess which part of the thermal structure is best constrained by the data, we generate contribution functions for our theoretical model and each of our retrieved models. Our conclusions are strongly affected by the spectral resolution of the instruments included, their wavelength coverage, and the number of data points combined. We also see some limitations in each of the temperature parameterizations, as they are not able to fully match the complex curvatures that are usually produced in hydrodynamic simulations. The results show that our 1D retrieval is recovering a temperature and pressure profile that most closely matches the arithmetic average of the 3D thermal structure. When we use a higher resolution, more data points, and a parameterized temperature profile that allows more flexibility in the middle part of the atmosphere, we find a better match between the retrieved temperature and pressure profile and the arithmetic average. The Spitzer and HST simulated observations sample deep parts of the planetary atmosphere and provide fewer constraints on the temperature and pressure profile, while the JWST observations sample the middle part of the atmosphere, providing a good match with the middle and most complex part of the arithmetic average of the 3D temperature structure.

  18. American Society of Composites, 32nd Technical Conference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aitharaju, Venkat; Yu, Hang; Zhao, Selina

    Resin transfer molding (RTM) has become increasingly popular for the manufacturing of composite parts. To enable high-volume manufacturing and obtain good-quality parts at an acceptable cost to the automotive industry, accurate process simulation tools are necessary to optimize the process conditions. Towards that goal, General Motors and the ESI-group are developing a state-of-the-art process simulation tool for composite manufacturing in a project supported by the Department of Energy. This paper describes the modeling of various stages in resin transfer molding such as resin injection, resin curing, and part distortion. An instrumented RTM system located at the General Motors Research and Development center was used to perform flat plaque molding experiments. The experimental measurements of fill time, in-mold pressure versus time, cure variation with time, and part deformation were compared with the model predictions and very good correlations were observed.

  19. Arctic Sea Ice Simulation in the PlioMIP Ensemble

    NASA Technical Reports Server (NTRS)

    Howell, Fergus W.; Haywood, Alan M.; Otto-Bliesner, Bette L.; Bragg, Fran; Chan, Wing-Le; Chandler, Mark A.; Contoux, Camille; Kamae, Youichi; Abe-Ouchi, Ayako; Rosenbloom, Nan A.; hide

    2016-01-01

    Eight general circulation models have simulated the mid-Pliocene warm period (mid-Pliocene, 3.264 to 3.025 Ma) as part of the Pliocene Modelling Intercomparison Project (PlioMIP). Here, we analyse and compare their simulation of Arctic sea ice for both the pre-industrial period and the mid-Pliocene. Mid-Pliocene sea ice thickness and extent are reduced, and the model spread of extent is more than twice the pre-industrial spread in some summer months. Half of the PlioMIP models simulate ice-free conditions in the mid-Pliocene. This spread amongst the ensemble is in line with the uncertainties amongst proxy reconstructions for mid-Pliocene sea ice extent. Correlations between mid-Pliocene Arctic temperatures and sea ice extents are almost twice as strong as the equivalent correlations for the pre-industrial simulations. The need for more comprehensive sea ice proxy data is highlighted, in order to better compare model performances.

  20. Algorithms for radiative transfer simulations for aerosol retrieval

    NASA Astrophysics Data System (ADS)

    Mukai, Sonoyo; Sano, Itaru; Nakata, Makiko

    2012-11-01

    Aerosol retrieval from satellite data, i.e., aerosol remote sensing, is divided into three parts: satellite data analysis, aerosol modeling, and the calculation of multiple light scattering in an atmosphere model, which is called radiative transfer simulation. The aerosol model is compiled from more than ten years of accumulated measurements provided by the worldwide aerosol monitoring network (AERONET). The radiative transfer simulations take into account Rayleigh scattering by molecules, Mie scattering by aerosols in the atmosphere, and reflection by the Earth's surface. The aerosol properties are then estimated by comparing satellite measurements with the numerical values of the radiation simulations in the Earth-atmosphere-surface model. Precise simulation of the multiple light-scattering processes is necessary, and it requires long computational times, especially for an optically thick atmosphere model. Efficient algorithms for radiative transfer problems are therefore indispensable for retrieving aerosols from space.
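
    The retrieval step described above amounts to matching a measured radiance against precomputed radiative transfer results. The toy sketch below illustrates that comparison with a crude single-scattering-style forward model and a lookup table over aerosol optical depth; the forward model, surface reflectance, and measured value are all placeholders, not the output of a real multiple-scattering radiative transfer code.

```python
import numpy as np

# Lookup-table style retrieval: precompute top-of-atmosphere reflectances for a
# grid of aerosol optical depths (the role played by radiative transfer
# simulations), then pick the optical depth whose value best matches a
# satellite measurement. The forward model below is a deliberately crude
# placeholder for illustration only.

def toy_forward_model(aod, rayleigh_od=0.1, surface_refl=0.05, mu0=0.8, mu=0.9):
    # Scattering contribution grows with total optical depth; the surface
    # contribution is attenuated along both the solar and viewing paths.
    tau = rayleigh_od + aod
    atm = 0.25 * (1.0 - np.exp(-tau / (mu0 * mu)))
    sfc = surface_refl * np.exp(-tau / mu0) * np.exp(-tau / mu)
    return atm + sfc

aod_grid = np.linspace(0.0, 2.0, 201)          # lookup table of candidate optical depths
lut = toy_forward_model(aod_grid)

measured = 0.12                                # hypothetical satellite reflectance
best = aod_grid[np.argmin(np.abs(lut - measured))]
print(f"Retrieved aerosol optical depth (toy model): {best:.2f}")
```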

  1. Large-eddy simulations of a Salt Lake Valley cold-air pool

    NASA Astrophysics Data System (ADS)

    Crosman, Erik T.; Horel, John D.

    2017-09-01

    Persistent cold-air pools are often poorly forecast by mesoscale numerical weather prediction models, in part due to inadequate parameterization of planetary boundary-layer physics in stable atmospheric conditions, and also because of errors in the initialization and treatment of the model surface state. In this study, an improved numerical simulation of the 27-30 January 2011 cold-air pool in Utah's Great Salt Lake Basin is obtained using a large-eddy simulation with a more realistic surface state characterization. Compared to a Weather Research and Forecasting model configuration run as a mesoscale model with a planetary boundary-layer scheme in which turbulence is highly parameterized, the large-eddy simulation more accurately captured turbulent interactions between the stable boundary layer and the flow aloft. The simulations were also found to be sensitive to variations in the Great Salt Lake temperature and Salt Lake Valley snow cover, illustrating the importance of the land surface state in modelling cold-air pools.

  2. Model aerodynamic test results for two variable cycle engine coannular exhaust systems at simulated takeoff and cruise conditions. Comprehensive data report. Volume 1: Design layouts

    NASA Technical Reports Server (NTRS)

    Nelson, D. P.

    1981-01-01

    The design layouts and detailed design drawings of coannular exhaust nozzle models for a supersonic propulsion system are presented. The layout drawings show the assembly of the component parts for each configuration. A listing of the component parts is also given.

  3. From Simulation to Real Robots with Predictable Results: Methods and Examples

    NASA Astrophysics Data System (ADS)

    Balakirsky, S.; Carpin, S.; Dimitoglou, G.; Balaguer, B.

    From a theoretical perspective, one may easily argue (as we will in this chapter) that simulation accelerates the algorithm development cycle. However, in practice many in the robotics development community share the sentiment that “Simulation is doomed to succeed” (Brooks, R., Matarić, M., Robot Learning, Kluwer Academic Press, Hingham, MA, 1993, p. 209). This comes in large part from the fact that many simulation systems are brittle; they do a fair-to-good job of simulating the expected, and fail to simulate the unexpected. It is the authors' belief that a simulation system is only as good as its models, and that deficiencies in these models lead to the majority of these failures. This chapter will attempt to address these deficiencies by presenting a systematic methodology with examples for the development of both simulated mobility models and sensor models for use with one of today's leading simulation engines. Techniques for using simulation for algorithm development leading to real-robot implementation will be presented, as well as opportunities for involvement in international robotics competitions based on these techniques.

  4. Numerical simulations - Some results for the 2- and 3-D Hubbard models and a 2-D electron phonon model

    NASA Technical Reports Server (NTRS)

    Scalapino, D. J.; Sugar, R. L.; White, S. R.; Bickers, N. E.; Scalettar, R. T.

    1989-01-01

    Numerical simulations on the half-filled three-dimensional Hubbard model clearly show the onset of Neel order. Simulations of the two-dimensional electron-phonon Holstein model show the competition between the formation of a Peierls-CDW state and a superconducting state. However, the behavior of the partly filled two-dimensional Hubbard model is more difficult to determine. At half-filling, the antiferromagnetic correlations grow as T is reduced. Doping away from half-filling suppresses these correlations, and it is found that there is a weak attractive pairing interaction in the d-wave channel. However, the strength of the pair field susceptibility is weak at the temperatures and lattice sizes that have been simulated, and the nature of the low-temperature state of the nearly half-filled Hubbard model remains open.

  5. A Case Study Using Modeling and Simulation to Predict Logistics Supply Chain Issues

    NASA Technical Reports Server (NTRS)

    Tucker, David A.

    2007-01-01

    Optimization of critical supply chains to deliver thousands of parts, materials, sub-assemblies, and vehicle structures as needed is vital to the success of the Constellation Program. Thorough analysis needs to be performed on the integrated supply chain processes to plan, source, make, deliver, and return critical items efficiently. Process modeling provides simulation technology-based, predictive solutions for supply chain problems which enable decision makers to reduce costs, accelerate cycle time and improve business performance. For example, United Space Alliance, LLC utilized this approach in late 2006 to build simulation models that recreated shuttle orbiter thruster failures and predicted the potential impact of thruster removals on logistics spare assets. The main objective was the early identification of possible problems in providing thruster spares for the remainder of the Shuttle Flight Manifest. After extensive analysis the model results were used to quantify potential problems and led to improvement actions in the supply chain. Similarly the proper modeling and analysis of Constellation parts, materials, operations, and information flows will help ensure the efficiency of the critical logistics supply chains and the overall success of the program.

  6. Analysis and simulation of a magnetic bearing suspension system for a laboratory model annular momentum control device

    NASA Technical Reports Server (NTRS)

    Groom, N. J.; Woolley, C. T.; Joshi, S. M.

    1981-01-01

    A linear analysis and the results of a nonlinear simulation of a magnetic bearing suspension system which uses permanent magnet flux biasing are presented. The magnetic bearing suspension is part of a 4068 N-m-s (3000 lb-ft-sec) laboratory model annular momentum control device (AMCD). The simulation includes rigid body rim dynamics, linear and nonlinear axial actuators, linear radial actuators, axial and radial rim warp, and power supply and power driver current limits.

  7. Dispersion in Spherical Water Drops.

    ERIC Educational Resources Information Center

    Eliason, John C., Jr.

    1989-01-01

    Discusses a laboratory exercise simulating the paths of light rays through spherical water drops by applying principles of ray optics and geometry. Describes four parts: determining the output angles, computer simulation, explorations, and model testing and solutions. Provides a computer program and some diagrams. (YP)
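
    The output-angle calculation that the exercise builds on follows directly from Snell's law and the drop geometry. A brief sketch is given below, assuming the standard primary-rainbow path (one internal reflection) and a nominal refractive index for water; this is not the program distributed with the exercise.

```python
import numpy as np

# For a ray hitting a spherical drop at incidence angle theta_i, Snell's law
# gives the refraction angle theta_r, and the total deviation after one
# internal reflection (the primary rainbow path) is
#   D = 2*theta_i - 4*theta_r + pi.
# Scanning incidence angles locates the minimum deviation, i.e. the rainbow angle.

def primary_deviation(theta_i_deg, n=1.333):
    theta_i = np.radians(theta_i_deg)
    theta_r = np.arcsin(np.sin(theta_i) / n)      # Snell's law at the entry surface
    return np.degrees(2 * theta_i - 4 * theta_r + np.pi)

angles = np.linspace(0.1, 89.9, 1000)
deviation = primary_deviation(angles)
i_min = np.argmin(deviation)
print(f"Minimum deviation ~{deviation[i_min]:.1f} deg at incidence ~{angles[i_min]:.1f} deg "
      f"(rainbow angle ~{180 - deviation[i_min]:.1f} deg)")
```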

  8. Revised Planning Methodology For Signalized Intersections And Operational Analysis Of Exclusive Left-Turn Lanes, Part-II: Models And Procedures (Final Report)

    DOT National Transportation Integrated Search

    1996-04-01

    This report also describes the procedures for direct estimation of intersection capacity with simulation, including a set of rigorous statistical tests for simulation parameter calibration from field data.

  9. Linking Statistically- and Physically-Based Models for Improved Streamflow Simulation in Gaged and Ungaged Areas

    NASA Astrophysics Data System (ADS)

    Lafontaine, J.; Hay, L.; Archfield, S. A.; Farmer, W. H.; Kiang, J. E.

    2014-12-01

    The U.S. Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the continental US. The portion of the NHM located within the Gulf Coastal Plains and Ozarks Landscape Conservation Cooperative (GCPO LCC) is being used to test the feasibility of improving streamflow simulations in gaged and ungaged watersheds by linking statistically- and physically-based hydrologic models. The GCPO LCC covers part or all of 12 states and 5 sub-geographies, totaling approximately 726,000 km2, and is centered on the lower Mississippi Alluvial Valley. A total of 346 USGS streamgages in the GCPO LCC region were selected to evaluate the performance of this new calibration methodology for the period 1980 to 2013. Initially, the physically-based models are calibrated to measured streamflow data to provide a baseline for comparison. An enhanced calibration procedure then is used to calibrate the physically-based models in the gaged and ungaged areas of the GCPO LCC using statistically-based estimates of streamflow. For this application, the calibration procedure is adjusted to address the limitations of the statistically generated time series to reproduce measured streamflow in gaged basins, primarily by incorporating error and bias estimates. As part of this effort, estimates of uncertainty in the model simulations are also computed for the gaged and ungaged watersheds.

  10. Brian Ball | NREL

    Science.gov Websites

    Integration program, developing inverse modeling algorithms to calibrate building energy models, and is part related equipment. This work included developing an engineering grade operator training simulator for an

  11. Tests and Techniques for Characterizing and Modeling X-43A Electromechanical Actuators

    NASA Technical Reports Server (NTRS)

    Lin, Yohan; Baumann, Ethan; Bose, David M.; Beck, Roger; Jenney, Gavin

    2008-01-01

    A series of tests were conducted on the electromechanical actuators of the X-43A research vehicle in preparation for the Mach 7 and 10 hypersonic flights. The tests were required to help validate the actuator models in the simulation and acquire a better understanding of the installed system characteristics. Static and dynamic threshold, multichannel crosstalk, command-to-surface timing, free play, voltage regeneration, calibration, frequency response, compliance, hysteretic damping, and aircraft-in-the-loop tests were performed as part of this effort. This report describes the objectives, configurations, and methods for those tests, as well as the techniques used for developing second-order actuator models from the test results. When the first flight attempt failed because of actuator problems with the launch vehicle, further analysis and model enhancements were performed as part of the return-to-flight activities. High-fidelity models are described, along with the modifications that were required to match measurements taken from the research vehicle. Problems involving the implementation of these models into the X-43A simulation are also discussed. This report emphasizes lessons learned from the actuator testing, simulation modeling, and integration efforts for the X-43A hypersonic research vehicle.
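
    Second-order actuator models of the kind developed from these tests are commonly expressed as a transfer function defined by a natural frequency and a damping ratio. A minimal sketch of such a model and its step and frequency responses follows; the parameter values are assumed placeholders, not identified X-43A actuator characteristics.

```python
import numpy as np
from scipy import signal

# Second-order actuator approximation G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2),
# with illustrative natural frequency and damping ratio.

wn = 40.0     # rad/s, assumed actuator bandwidth
zeta = 0.7    # assumed damping ratio

actuator = signal.TransferFunction([wn**2], [1.0, 2.0 * zeta * wn, wn**2])

# Step response, as one would compare against bench test data.
t = np.linspace(0.0, 0.5, 500)
t, y = signal.step(actuator, T=t)

# Frequency response, as one would compare against frequency-sweep tests.
w, mag, phase = signal.bode(actuator, w=np.logspace(0, 3, 200))

print(f"Step response reaches {y[-1]:.3f} of the commanded deflection after {t[-1]:.2f} s")
print(f"Gain at {w[100]:.1f} rad/s: {mag[100]:.1f} dB")
```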

  12. C4MIP - The Coupled Climate-Carbon Cycle Model Intercomparison Project: experimental protocol for CMIP6

    NASA Astrophysics Data System (ADS)

    Jones, Chris D.; Arora, Vivek; Friedlingstein, Pierre; Bopp, Laurent; Brovkin, Victor; Dunne, John; Graven, Heather; Hoffman, Forrest; Ilyina, Tatiana; John, Jasmin G.; Jung, Martin; Kawamiya, Michio; Koven, Charlie; Pongratz, Julia; Raddatz, Thomas; Randerson, James T.; Zaehle, Sönke

    2016-08-01

    Coordinated experimental design and implementation has become a cornerstone of global climate modelling. Model Intercomparison Projects (MIPs) enable systematic and robust analysis of results across many models, by reducing the influence of ad hoc differences in model set-up or experimental boundary conditions. As it enters its 6th phase, the Coupled Model Intercomparison Project (CMIP6) has grown significantly in scope with the design and documentation of individual simulations delegated to individual climate science communities. The Coupled Climate-Carbon Cycle Model Intercomparison Project (C4MIP) takes responsibility for design, documentation, and analysis of carbon cycle feedbacks and interactions in climate simulations. These feedbacks are potentially large and play a leading-order contribution in determining the atmospheric composition in response to human emissions of CO2 and in the setting of emissions targets to stabilize climate or avoid dangerous climate change. For over a decade, C4MIP has coordinated coupled climate-carbon cycle simulations, and in this paper we describe the C4MIP simulations that will be formally part of CMIP6. While the climate-carbon cycle community has created this experimental design, the simulations also fit within the wider CMIP activity, conform to some common standards including documentation and diagnostic requests, and are designed to complement the CMIP core experiments known as the Diagnostic, Evaluation and Characterization of Klima (DECK). C4MIP has three key strands of scientific motivation and the requested simulations are designed to satisfy their needs: (1) pre-industrial and historical simulations (formally part of the common set of CMIP6 experiments) to enable model evaluation, (2) idealized coupled and partially coupled simulations with 1 % per year increases in CO2 to enable diagnosis of feedback strength and its components, (3) future scenario simulations to project how the Earth system will respond to anthropogenic activity over the 21st century and beyond. This paper documents in detail these simulations, explains their rationale and planned analysis, and describes how to set up and run the simulations. Particular attention is paid to boundary conditions, input data, and requested output diagnostics. It is important that modelling groups participating in C4MIP adhere as closely as possible to this experimental design.

  13. C4MIP – The Coupled Climate–Carbon Cycle Model Intercomparison Project: Experimental protocol for CMIP6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Chris D.; Arora, Vivek; Friedlingstein, Pierre

    Coordinated experimental design and implementation has become a cornerstone of global climate modelling. Model Intercomparison Projects (MIPs) enable systematic and robust analysis of results across many models, by reducing the influence of ad hoc differences in model set-up or experimental boundary conditions. As it enters its 6th phase, the Coupled Model Intercomparison Project (CMIP6) has grown significantly in scope with the design and documentation of individual simulations delegated to individual climate science communities. The Coupled Climate–Carbon Cycle Model Intercomparison Project (C4MIP) takes responsibility for design, documentation, and analysis of carbon cycle feedbacks and interactions in climate simulations. These feedbacks are potentially large and play a leading-order contribution in determining the atmospheric composition in response to human emissions of CO2 and in the setting of emissions targets to stabilize climate or avoid dangerous climate change. For over a decade, C4MIP has coordinated coupled climate–carbon cycle simulations, and in this paper we describe the C4MIP simulations that will be formally part of CMIP6. While the climate–carbon cycle community has created this experimental design, the simulations also fit within the wider CMIP activity, conform to some common standards including documentation and diagnostic requests, and are designed to complement the CMIP core experiments known as the Diagnostic, Evaluation and Characterization of Klima (DECK). C4MIP has three key strands of scientific motivation and the requested simulations are designed to satisfy their needs: (1) pre-industrial and historical simulations (formally part of the common set of CMIP6 experiments) to enable model evaluation, (2) idealized coupled and partially coupled simulations with 1 % per year increases in CO2 to enable diagnosis of feedback strength and its components, (3) future scenario simulations to project how the Earth system will respond to anthropogenic activity over the 21st century and beyond. This study documents in detail these simulations, explains their rationale and planned analysis, and describes how to set up and run the simulations. Particular attention is paid to boundary conditions, input data, and requested output diagnostics. It is important that modelling groups participating in C4MIP adhere as closely as possible to this experimental design.

  14. C4MIP – The Coupled Climate–Carbon Cycle Model Intercomparison Project: Experimental protocol for CMIP6

    DOE PAGES

    Jones, Chris D.; Arora, Vivek; Friedlingstein, Pierre; ...

    2016-08-25

    Coordinated experimental design and implementation has become a cornerstone of global climate modelling. Model Intercomparison Projects (MIPs) enable systematic and robust analysis of results across many models, by reducing the influence of ad hoc differences in model set-up or experimental boundary conditions. As it enters its 6th phase, the Coupled Model Intercomparison Project (CMIP6) has grown significantly in scope with the design and documentation of individual simulations delegated to individual climate science communities. The Coupled Climate–Carbon Cycle Model Intercomparison Project (C4MIP) takes responsibility for design, documentation, and analysis of carbon cycle feedbacks and interactions in climate simulations. These feedbacks are potentially large and play a leading-order contribution in determining the atmospheric composition in response to human emissions of CO2 and in the setting of emissions targets to stabilize climate or avoid dangerous climate change. For over a decade, C4MIP has coordinated coupled climate–carbon cycle simulations, and in this paper we describe the C4MIP simulations that will be formally part of CMIP6. While the climate–carbon cycle community has created this experimental design, the simulations also fit within the wider CMIP activity, conform to some common standards including documentation and diagnostic requests, and are designed to complement the CMIP core experiments known as the Diagnostic, Evaluation and Characterization of Klima (DECK). C4MIP has three key strands of scientific motivation and the requested simulations are designed to satisfy their needs: (1) pre-industrial and historical simulations (formally part of the common set of CMIP6 experiments) to enable model evaluation, (2) idealized coupled and partially coupled simulations with 1 % per year increases in CO2 to enable diagnosis of feedback strength and its components, (3) future scenario simulations to project how the Earth system will respond to anthropogenic activity over the 21st century and beyond. This study documents in detail these simulations, explains their rationale and planned analysis, and describes how to set up and run the simulations. Particular attention is paid to boundary conditions, input data, and requested output diagnostics. It is important that modelling groups participating in C4MIP adhere as closely as possible to this experimental design.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asadi, Somayeh; Masoudi, Seyed Farhad, E-mail: masoudi@kntu.ac.ir; Shahriari, Majid

    In ophthalmic brachytherapy dosimetry, it is common to represent the human eye anatomy with a water phantom. However, for better clinical analysis, there is a need to determine the dose in different parts of the eye. In this work, a full human eye is simulated with the MCNP-4C code by considering all parts of the eye, i.e., the lens, cornea, retina, choroid, sclera, anterior chamber, optic nerve, and bulk of the eye comprising the vitreous body and tumor. The average dose in different parts of this full model of the human eye is determined and the results are compared with the dose calculated in the water phantom. The central-axis depth dose and the dose in the whole tumor for these two simulated eye models are calculated as well, and the results are compared.

  16. Population variability in animal health: Influence on dose-exposure-response relationships: Part II: Modelling and simulation.

    PubMed

    Martinez, Marilyn N; Gehring, Ronette; Mochel, Jonathan P; Pade, Devendra; Pelligand, Ludovic

    2018-05-28

    During the 2017 Biennial meeting, the American Academy of Veterinary Pharmacology and Therapeutics hosted a 1-day session on the influence of population variability on dose-exposure-response relationships. In Part I, we highlighted some of the sources of population variability. Part II provides a summary of discussions on modelling and simulation tools that utilize existing pharmacokinetic data, can integrate drug physicochemical characteristics with species physiological characteristics and dosing information or that combine observed with predicted and in vitro information to explore and describe sources of variability that may influence the safe and effective use of veterinary pharmaceuticals. © 2018 John Wiley & Sons Ltd. This article has been contributed to by US Government employees and their work is in the public domain in the USA.

  17. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  18. Reactive transport of metal contaminants in alluvium - Model comparison and column simulation

    USGS Publications Warehouse

    Brown, J.G.; Bassett, R.L.; Glynn, P.D.

    2000-01-01

    A comparative assessment of two reactive-transport models, PHREEQC and HYDROGEOCHEM (HGC), was done to determine the suitability of each for simulating the movement of acidic contamination in alluvium. For simulations that accounted for aqueous complexation, precipitation and dissolution, the breakthrough and rinseout curves generated by each model were similar. The differences in simulated equilibrium concentrations between models were minor and were related to (1) different units in model output, (2) different activity coefficients, and (3) ionic-strength calculations. When adsorption processes were added to the models, the rinseout pH simulated by PHREEQC using the diffuse double-layer adsorption model rose to a pH of 6 after pore volume 15, about 1 pore volume later than the pH simulated by HGC using the constant-capacitance model. In PHREEQC simulation of a laboratory column experiment, the inability of the model to match measured outflow concentrations of selected constituents was related to the evident lack of local geochemical equilibrium in the column. The difference in timing and size of measured and simulated breakthrough of selected constituents indicated that the redox and adsorption reactions in the column occurred slowly when compared with the modeled reactions. MINTEQA2 and PHREEQC simulations of the column experiment indicated that the number of surface sites that took part in adsorption reactions was less than that estimated from the measured concentration of Fe hydroxide in the alluvium.

  19. Deep Part Load Flow Analysis in a Francis Model turbine by means of two-phase unsteady flow simulations

    NASA Astrophysics Data System (ADS)

    Conrad, Philipp; Weber, Wilhelm; Jung, Alexander

    2017-04-01

    Hydropower plants are indispensable to stabilize the grid by reacting quickly to changes of the energy demand. However, an extension of the operating range towards high and deep part load conditions without fatigue of the hydraulic components is desirable to increase their flexibility. In this paper a model sized Francis turbine at low discharge operating conditions (Q/QBEP = 0.27) is analyzed by means of computational fluid dynamics (CFD). Unsteady two-phase simulations for two Thoma-number conditions are conducted. Stochastic pressure oscillations, observed on the test rig at low discharge, require sophisticated numerical models together with small time steps, large grid sizes and long simulation times to cope with these fluctuations. In this paper the BSL-EARSM model (Explicit Algebraic Reynolds Stress) was applied as a compromise between scale resolving and two-equation turbulence models with respect to computational effort and accuracy. Simulation results are compared to pressure measurements showing reasonable agreement in resolving the frequency spectra and amplitude. Inner blade vortices were predicted successfully in shape and size. Surface streamlines in blade-to-blade view are presented, giving insights to the formation of the inner blade vortices. The acquired time dependent pressure fields can be used for quasi-static structural analysis (FEA) for fatigue calculations in the future.

  20. Improving Representation of Convective Transport for Scale-Aware Parameterization – Part I: Convection and Cloud Properties Simulated with Spectral Bin and Bulk Microphysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Jiwen; Liu, Yi-Chin; Xu, Kuan-Man

    2015-04-27

    The ultimate goal of this study is to improve the representation of convective transport by cumulus parameterization for mesoscale and climate models. As Part I of the study, we perform extensive evaluations of cloud-resolving simulations of a squall line and mesoscale convective complexes in mid-latitude continental and tropical regions using the Weather Research and Forecasting (WRF) model with spectral-bin microphysics (SBM) and with two double-moment bulk microphysics schemes: a modified Morrison (MOR) and Milbrandt and Yau (MY2). Compared to observations, in general, SBM gives better simulations of precipitation, vertical velocity of convective cores, and the vertically decreasing trend of radar reflectivity than MOR and MY2, and therefore will be used for the analysis of the scale dependence of eddy transport in Part II. The common features of the simulations for all convective systems are (1) the model tends to overestimate convection intensity in the middle and upper troposphere, but SBM can alleviate much of the overestimation and reproduce the observed convection intensity well; (2) the model greatly overestimates radar reflectivity in convective cores (SBM predicts smaller radar reflectivity but does not remove the large overestimation); and (3) the model performs better for the mid-latitude convective systems than for the tropical system. The modeled mass fluxes of the mid-latitude systems are not sensitive to the microphysics schemes, but are very sensitive for the tropical case, indicating strong modification of convection by microphysics. Cloud microphysical measurements of rain, snow, and graupel in convective cores will be critically important to further elucidate issues within cloud microphysics schemes.

  1. Composite Cure Process Modeling and Simulations using COMPRO(Registered Trademark) and Validation of Residual Strains using Fiber Optics Sensors

    NASA Technical Reports Server (NTRS)

    Sreekantamurthy, Thammaiah; Hudson, Tyler B.; Hou, Tan-Hung; Grimsley, Brian W.

    2016-01-01

    Composite cure-process-induced residual strains and warping deformations in composite components present significant challenges in the manufacturing of advanced composite structures. As part of the Manufacturing Process and Simulation initiative of the NASA Advanced Composite Project (ACP), research is being conducted on the composite cure process by developing an understanding of the fundamental mechanisms by which the process-induced factors influence the residual responses. In this regard, analytical studies have been conducted on the cure process modeling of composite structural parts with varied physical, thermal, and resin flow process characteristics. The cure process simulation results were analyzed to interpret the cure response predictions based on the underlying physics incorporated into the modeling tool. In the cure-kinetic analysis, the model predictions on the degree of cure, resin viscosity, and modulus were interpreted with reference to the temperature distribution in the composite panel part and tool setup during autoclave or hot-press curing cycles. In the fiber-bed compaction simulation, the pore pressure and resin flow velocity in the porous media models, and the compaction strain responses under applied pressure, were studied to interpret the fiber volume fraction distribution predictions. In the structural simulation, the effect of temperature on the resin and ply modulus, and of thermal coefficient changes during curing on predicted mechanical strains and chemical cure shrinkage strains, was studied to understand the residual strain and stress response predictions. In addition to computational analysis, experimental studies were conducted to measure strains during the curing of laminated panels by means of optical fiber Bragg grating sensors (FBGs) embedded in the resin-impregnated panels. The residual strain measurements from laboratory tests were then compared with the analytical model predictions. The paper describes the cure process procedures and residual strain predictions, and discusses pertinent experimental results from the validation studies.
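
    The cure-kinetic analysis mentioned above rests on integrating a resin cure-rate law over the temperature cycle. The sketch below integrates a generic autocatalytic rate law over a simple ramp-and-hold cycle; the rate constants, reaction orders, and cycle are illustrative assumptions, not the material data used in the COMPRO study.

```python
import numpy as np

# Autocatalytic cure kinetics of the general form
#   d(alpha)/dt = A * exp(-Ea / (R*T)) * alpha^m * (1 - alpha)^n
# integrated with simple forward Euler over a one-hold cure cycle.
# All parameter values are assumed placeholders for illustration.

A, Ea, R = 1.5e5, 6.5e4, 8.314        # 1/s, J/mol, J/(mol K) -- assumed
m, n = 0.3, 1.6                       # assumed reaction orders

def temperature(t):
    # One-hour ramp from room temperature to 180 C, then hold.
    return 298.0 + min(t / 3600.0, 1.0) * (453.0 - 298.0)

dt, alpha, t = 1.0, 1e-3, 0.0         # small seed value so alpha^m is nonzero
while t < 3.0 * 3600.0:               # three-hour cycle
    T = temperature(t)
    rate = A * np.exp(-Ea / (R * T)) * alpha**m * (1.0 - alpha)**n
    alpha = min(alpha + rate * dt, 1.0)
    t += dt

print(f"Predicted degree of cure after 3 h: {alpha:.3f}")
```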

  2. Severe Nuclear Accident Program (SNAP) - a real time model for accidental releases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saltbones, J.; Foss, A.; Bartnicki, J.

    1996-12-31

    The model: Severe Nuclear Accident Program (SNAP) has been developed at the Norwegian Meteorological Institute (DNMI) in Oslo to provide decision makers and government officials with a real-time tool for simulating large accidental releases of radioactivity from nuclear power plants or other sources. SNAP is developed in the Lagrangian framework, in which atmospheric transport of radioactive pollutants is simulated by emitting a large number of particles from the source. The main advantage of the Lagrangian approach is the possibility of precise parameterization of advection processes, especially close to the source. SNAP can be used to predict the transport and deposition of a radioactive cloud in the future (up to 48 hours in the present version) or to analyze the behavior of the cloud in the past. It is also possible to run the model in a mixed mode (partly analysis and partly forecast). In the routine run we assume unit (1 g s⁻¹) emission in each of three classes. This assumption is very convenient for the main user of the model output in case of emergency, the Norwegian Radiation Protection Agency. Due to the linearity of the model equations, the user can test different emission scenarios as a post-processing task by assigning different weights to the concentration and deposition fields corresponding to each of the three emission classes. SNAP is fully operational and can be run by the meteorologist on duty at any time. The output from SNAP has two forms: first, on maps of Europe or selected parts of Europe, individual particles are shown during the simulation period; second, immediately after the simulation, concentration/deposition fields can be shown every three hours of the simulation period as isoline maps for each emission class. In addition, concentration and deposition maps, as well as some meteorological data, are stored on a publicly accessible disk for further processing by the model users.
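
    A minimal sketch of the Lagrangian idea described above is given below: particles released at a source are advected by a wind field and spread by a random-walk term, and gridded concentrations are obtained by counting particles per cell. The constant wind, diffusivity, and release rate are assumptions made for illustration; SNAP itself uses numerical weather prediction winds and more detailed physics.

```python
import numpy as np

# Toy Lagrangian particle dispersion: hourly releases, constant advection,
# and a Gaussian random-walk step derived from an assumed eddy diffusivity.

rng = np.random.default_rng(0)
n_steps, dt = 48, 3600.0                  # 48 hourly steps
u, v = 5.0, 1.0                           # m/s, assumed constant wind
sigma = np.sqrt(2.0 * 500.0 * dt)         # random-walk step for K = 500 m^2/s

particles = np.zeros((0, 2))
for _ in range(n_steps):
    released = np.zeros((100, 2))         # 100 particles per step ~ a unit emission rate
    particles = np.vstack([particles, released])
    particles[:, 0] += u * dt + sigma * rng.standard_normal(len(particles))
    particles[:, 1] += v * dt + sigma * rng.standard_normal(len(particles))

# Count particles on a coarse grid (km) to approximate a concentration field.
hist, xedges, yedges = np.histogram2d(particles[:, 0] / 1e3, particles[:, 1] / 1e3,
                                      bins=20, range=[[0, 1000], [-100, 300]])
print(f"{int(hist.sum())} of {len(particles)} particles inside the analysis grid")
```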

  3. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Kevin

    Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. To generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient predicted using traditional/empirical correlations is computed and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. The calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.
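
    The Bayesian calibration step described above can be illustrated with a much-reduced example: a Metropolis sampler calibrating a single rate constant in a toy flux model against noisy synthetic data. The flux model, prior, and data below are placeholders, not the WWC chemistry or the CFD model used in the study.

```python
import numpy as np

# One-parameter Metropolis calibration: infer a rate constant k from noisy
# "measurements" of a toy flux model. Everything here is synthetic.

rng = np.random.default_rng(42)

def flux_model(k, driving_force):
    return np.sqrt(k) * driving_force          # toy model: flux ~ sqrt(k) * driving force

true_k, noise = 2.0, 0.05
x = np.linspace(0.1, 1.0, 10)
data = flux_model(true_k, x) + noise * rng.standard_normal(x.size)

def log_posterior(k):
    if k <= 0.0:
        return -np.inf
    resid = data - flux_model(k, x)
    log_like = -0.5 * np.sum((resid / noise) ** 2)
    log_prior = -0.5 * (np.log(k) / 1.0) ** 2  # lognormal prior centred on k = 1
    return log_like + log_prior

samples, k = [], 1.0
for _ in range(20000):
    proposal = k + 0.1 * rng.standard_normal()
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(k):
        k = proposal
    samples.append(k)

post = np.array(samples[5000:])                # discard burn-in
print(f"Posterior mean k = {post.mean():.2f} (true value {true_k})")
```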

  4. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    DOE PAGES

    Wang, Chao; Xu, Zhijie; Lai, Kevin; ...

    2017-10-24

    Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. To generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient predicted using traditional/empirical correlations is computed and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. The calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.

  5. Modeling and performance improvement of the constant power regulator systems in variable displacement axial piston pump.

    PubMed

    Park, Sung Hwan; Lee, Ji Min; Kim, Jong Shik

    2013-01-01

    The irregular performance of a mechanical-type constant power regulator is considered. In order to find the cause of an irregular discharge flow at the cut-off pressure area, modeling and numerical simulations are performed to observe the dynamic behavior of the internal parts of the constant power regulator system for a swashplate-type axial piston pump. The commercial numerical simulation software AMESim is applied to model the mechanical-type regulator with the hydraulic pump and to simulate its performance. The validity of the simulation model of the constant power regulator system is verified by comparing simulation results with experiments. In order to find the cause of the irregular performance of the mechanical-type constant power regulator system, the behavior of the main components such as the spool, sleeve, and counterbalance piston is investigated using computer simulation. A shape modification of the counterbalance piston is proposed to improve the undesirable performance of the mechanical-type constant power regulator. The performance improvement is verified by computer simulation using the AMESim software.

  6. Process Modeling of Composite Materials for Wind-Turbine Rotor Blades: Experiments and Numerical Modeling

    PubMed Central

    Wieland, Birgit; Ropte, Sven

    2017-01-01

    The production of rotor blades for wind turbines is still a predominantly manual process. Process simulation is an adequate way of improving blade quality without a significant increase in production costs. This paper introduces a module for tolerance simulation for rotor-blade production processes. The investigation focuses on the simulation of temperature distribution for one-sided, self-heated tooling and thick laminates. Experimental data from rotor-blade production and down-scaled laboratory tests are presented. Based on influencing factors that are identified, a physical model is created and implemented as a simulation. This provides an opportunity to simulate temperature and cure-degree distribution for two-dimensional cross sections. The aim of this simulation is to support production processes. Hence, it is modelled as an in situ simulation with direct input of temperature data and real-time capability. A monolithic part of the rotor blade, the main girder, is used as an example for presenting the results. PMID:28981458

  7. Process Modeling of Composite Materials for Wind-Turbine Rotor Blades: Experiments and Numerical Modeling.

    PubMed

    Wieland, Birgit; Ropte, Sven

    2017-10-05

    The production of rotor blades for wind turbines is still a predominantly manual process. Process simulation is an adequate way of improving blade quality without a significant increase in production costs. This paper introduces a module for tolerance simulation for rotor-blade production processes. The investigation focuses on the simulation of temperature distribution for one-sided, self-heated tooling and thick laminates. Experimental data from rotor-blade production and down-scaled laboratory tests are presented. Based on influencing factors that are identified, a physical model is created and implemented as a simulation. This provides an opportunity to simulate temperature and cure-degree distribution for two-dimensional cross sections. The aim of this simulation is to support production processes. Hence, it is modelled as an in situ simulation with direct input of temperature data and real-time capability. A monolithic part of the rotor blade, the main girder, is used as an example for presenting the results.

  8. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
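
    The time-splitting idea described above can be illustrated in one dimension: each step advances the advective part with an explicit upwind update and the dispersive part with an implicit diffusion solve. The sketch below uses a simple finite-difference grid and illustrative coefficients; TaRSE itself uses finite-volume Godunov and mixed finite-element discretizations, so this is only a schematic analogue.

```python
import numpy as np

# Operator-split 1D advection-dispersion: explicit first-order upwind advection
# followed by an implicit (unconditionally stable) diffusion step each time step.

nx, dx = 200, 1.0
u, D = 0.5, 0.05                     # velocity and dispersion coefficient (illustrative)
dt = 0.8 * dx / u                    # advective CFL-limited time step
c = np.zeros(nx)
c[20:40] = 1.0                       # initial sharp-fronted solute slug

# Implicit diffusion matrix (I - dt*D*L), assembled once since dt is constant.
r = D * dt / dx**2
A = np.eye(nx) * (1.0 + 2.0 * r)
A += np.diag(np.full(nx - 1, -r), 1) + np.diag(np.full(nx - 1, -r), -1)
A[0, 0] = A[-1, -1] = 1.0 + r        # simple closed-boundary treatment

for _ in range(150):
    # Explicit upwind advection step (u > 0, so upwind is the left neighbour).
    c[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])
    c[0] = 0.0                       # clean-water inflow boundary
    # Implicit dispersion step.
    c = np.linalg.solve(A, c)

print(f"Peak concentration after transport: {c.max():.3f} at cell {int(c.argmax())}")
```

    Because the dispersion step is implicit, it could be applied less frequently than the advection step, which is the efficiency argument made in the abstract.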

  9. Aerospace Toolbox--a flight vehicle design, analysis, simulation, and software development environment II: an in-depth overview

    NASA Astrophysics Data System (ADS)

    Christian, Paul M.

    2002-07-01

    This paper presents a demonstrated approach to significantly reduce the cost and schedule of non-real-time modeling and simulation, real-time HWIL simulation, and embedded code development. The tool and the methodology presented capitalize on a paradigm that has become a standard operating procedure in the automotive industry. The tool described is known as the Aerospace Toolbox, and it is based on the MathWorks Matlab/Simulink framework, which is a COTS application. Extrapolation of automotive industry data and initial applications in the aerospace industry show that the use of the Aerospace Toolbox can make significant contributions in the quest by NASA and other government agencies to meet aggressive cost reduction goals in development programs. Part I of this paper provided a detailed description of the GUI-based Aerospace Toolbox and how it is used in every step of a development program, from quick prototyping of concept developments that leverage built-in point-of-departure simulations through to detailed design, analysis, and testing. Some of the attributes addressed included its versatility in modeling 3 to 6 degrees of freedom, its library of flight-test-validated models (including physics, environments, hardware, and error sources), and its built-in Monte Carlo capability. Other topics covered in Part I included flight vehicle models and algorithms, and the covariance analysis package, Navigation System Covariance Analysis Tools (NavSCAT). Part II of this series takes a more in-depth look at the analysis and simulation capability and provides an update on the toolbox enhancements. It also addresses how the Toolbox can be used as a design hub for Internet-based collaborative engineering tools such as NASA's Intelligent Synthesis Environment (ISE) and Lockheed Martin's Interactive Missile Design Environment (IMD).

  10. Unsteady hydraulic simulation of the cavitating part load vortex rope in Francis turbines

    NASA Astrophysics Data System (ADS)

    Brammer, J.; Segoufin, C.; Duparchy, F.; Lowys, P. Y.; Favrel, A.; Avellan, F.

    2017-04-01

    For Francis turbines at part load operation a helical vortex rope is formed due to the swirling nature of the flow exiting the runner. This vortex creates pressure fluctuations which can lead to power swings, and the unsteady loading can lead to fatigue damage of the runner. In the case that the vortex rope cavitates there is the additional risk that hydro-acoustic resonance can occur. It is therefore important to be able to accurately simulate this phenomenon to address these issues. In this paper an unsteady, multi-phase CFD model was used to simulate two part-load operating points, for two different cavitation conditions. The simulation results were validated with test-rig data, and showed very good agreement. These results also served as an input for FEA calculations and fatigue analysis, which are presented in a separate study.

  11. Production Strategies for Production-Quality Parts for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Cawley, J. D.; Best, J. E.; Liu, Z.; Eckel, A. J.; Reed, B. D.; Fox, D. S.; Bhatt, R.; Levine, Stanley R. (Technical Monitor)

    2000-01-01

    Rapid prototyping processes (3D Systems' stereolithography and Sanders Prototyping's ModelMaker) are combined with gelcasting to produce high-quality silicon nitride components that were performance tested under simulated use conditions. Two types of aerospace components were produced, a low-force rocket thruster and a simulated airfoil section. The rocket was tested in a test stand using varying mixtures of H2 and O2, whereas the simulated airfoil was tested by subjecting it to a 0.3 Mach jet-fuel burner flame. Both parts performed successfully, demonstrating the usefulness of rapid prototyping in efforts to effect materials substitution. In addition, the simulated airfoil was used to explore the possibility of applying thermal/environmental barrier coatings and providing for internal cooling of ceramic parts. It is concluded that this processing strategy offers the ceramic engineer all the flexibility normally associated with investment casting of superalloys.

  12. Modeling microbiological and chemical processes in municipal solid waste bioreactor, Part II: Application of numerical model BIOKEMOD-3P.

    PubMed

    Gawande, Nitin A; Reinhart, Debra R; Yeh, Gour-Tsyh

    2010-02-01

    Biodegradation process modeling of municipal solid waste (MSW) bioreactor landfills requires knowledge of various process reactions and the corresponding kinetic parameters. Mechanistic models available to date are able to simulate biodegradation processes with the help of pre-defined species and reactions. Some of these models consider the effect of critical parameters such as moisture content, pH, and temperature. Biomass concentration is a vital parameter for any biomass growth model and is often not compared with field and laboratory results. A more complex biodegradation model includes a large number of chemical and microbiological species. Increasing the number of species and user-defined process reactions in the simulation requires a robust numerical tool. A generalized microbiological and chemical model, BIOKEMOD-3P, was developed to simulate biodegradation processes in three phases (Gawande et al. 2009). This paper presents the application of this model to simulate laboratory-scale MSW bioreactors under anaerobic conditions. BIOKEMOD-3P was able to closely simulate the experimental data. The results from this study may help in applying this model to full-scale landfill operation.

  13. Active Transportation and Demand Management (ATDM) foundational research : Analysis, Modeling, and Simulation (AMS) capabilities assessment.

    DOT National Transportation Integrated Search

    2013-06-01

    As part of the Federal Highway Administration's (FHWA's) Active Transportation and Demand Management (ATDM) Foundational Research, this publication identifies the AMS needs to support simulated real-time and real-time analysis to evaluate the imp...

  14. Development of Land Segmentation, Stream-Reach Network, and Watersheds in Support of Hydrological Simulation Program-Fortran (HSPF) Modeling, Chesapeake Bay Watershed, and Adjacent Parts of Maryland, Delaware, and Virginia

    USGS Publications Warehouse

    Martucci, Sarah K.; Krstolic, Jennifer L.; Raffensperger, Jeff P.; Hopkins, Katherine J.

    2006-01-01

    The U.S. Geological Survey, U.S. Environmental Protection Agency Chesapeake Bay Program Office, Interstate Commission on the Potomac River Basin, Maryland Department of the Environment, Virginia Department of Conservation and Recreation, Virginia Department of Environmental Quality, and the University of Maryland Center for Environmental Science are collaborating on the Chesapeake Bay Regional Watershed Model, using Hydrological Simulation Program - FORTRAN to simulate streamflow and concentrations and loads of nutrients and sediment to Chesapeake Bay. The model will be used to provide information for resource managers. In order to establish a framework for model simulation, digital spatial datasets were created defining the discretization of the model region (including the Chesapeake Bay watershed, as well as the adjacent parts of Maryland, Delaware, and Virginia outside the watershed) into land segments, a stream-reach network, and associated watersheds. Land segmentation was based on county boundaries represented by a 1:100,000-scale digital dataset. Fifty of the 254 counties and incorporated cities in the model region were divided on the basis of physiography and topography, producing a total of 309 land segments. The stream-reach network for the Chesapeake Bay watershed part of the model region was based on the U.S. Geological Survey Chesapeake Bay SPARROW (SPAtially Referenced Regressions On Watershed attributes) model stream-reach network. Because that network was created only for the Chesapeake Bay watershed, the rest of the model region uses a 1:500,000-scale stream-reach network. Streams with mean annual streamflow of less than 100 cubic feet per second were excluded based on attributes from the dataset. Additional changes were made to enhance the data and to allow for inclusion of stream reaches with monitoring data that were not part of the original network. Thirty-meter-resolution Digital Elevation Model data were used to delineate watersheds for each stream reach. State watershed boundaries replaced the Digital Elevation Model-derived watersheds where coincident. After a number of corrections, the watersheds were coded to indicate major and minor basin, mean annual streamflow, and each watershed's unique identifier as well as that of the downstream watershed. Land segments and watersheds were intersected to create land-watershed segments for the model.

  15. Design-based research in designing the model for educating simulation facilitators.

    PubMed

    Koivisto, Jaana-Maija; Hannula, Leena; Bøje, Rikke Buus; Prescott, Stephen; Bland, Andrew; Rekola, Leena; Haho, Päivi

    2018-03-01

    The purpose of this article is to introduce the concept of design-based research, its appropriateness in creating education-based models, and to describe the process of developing such a model. The model was designed as part of the Nurse Educator Simulation based learning project, funded by the EU's Lifelong Learning program (2013-1-DK1-LEO05-07053). The project partners were VIA University College, Denmark, the University of Huddersfield, UK and Metropolia University of Applied Sciences, Finland. As an outcome of the development process, "the NESTLED model for educating simulation facilitators" (NESTLED model) was generated. This article also illustrates five design principles that could be applied to other pedagogies. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Progress in catalytic ignition fabrication, modeling and infrastructure : (part 2) development of a multi-zone engine model simulated using MATLAB software.

    DOT National Transportation Integrated Search

    2014-02-01

    A mathematical model was developed for the purpose of providing students with data acquisition and engine modeling experience at the University of Idaho. In developing the model, multiple heat transfer and emissions models were researched and com...

  17. Analysis of groundwater flow in arid areas with limited hydrogeological data using the Grey Model: a case study of the Nubian Sandstone, Kharga Oasis, Egypt

    NASA Astrophysics Data System (ADS)

    Mahmod, Wael Elham; Watanabe, Kunio; Zahr-Eldeen, Ashraf A.

    2013-08-01

    Management of groundwater resources can be enhanced by using numerical models to improve development strategies. However, the lack of basic data often limits the implementation of these models. The Kharga Oasis in the western desert of Egypt is an arid area that mainly depends on groundwater from the Nubian Sandstone Aquifer System (NSAS), for which the hydrogeological data needed for groundwater simulation are lacking, thereby introducing a problem for model calibration and validation. The Grey Model (GM) was adopted to analyze groundwater flow. This model combines a finite element method (FEM) with a linear regression model to try to obtain the best-fit piezometric-level trends compared to observations. The GM simulation results clearly show that the future water table in the northeastern part of the study area will face a severe drawdown compared with that in the southwestern part and that the hydraulic head difference between these parts will reach 140 m by 2060. Given the uncertainty and limitation of available data, the GM produced more realistic results compared with those obtained from a FEM alone. The GM could be applied to other cases with similar data limitations.

  18. Application of an interactive water simulation model in urban water management: a case study in Amsterdam.

    PubMed

    Leskens, J G; Brugnach, M; Hoekstra, A Y

    2014-01-01

    Water simulation models are available to support decision-makers in urban water management. To use current water simulation models, special expertise is required. Therefore, model information is prepared prior to work sessions, in which decision-makers weigh different solutions. However, this model information quickly becomes outdated when new suggestions for solutions arise and are therefore limited in use. We suggest that new model techniques, i.e. fast and flexible computation algorithms and realistic visualizations, allow this problem to be solved by using simulation models during work sessions. A new Interactive Water Simulation Model was applied for two case study areas in Amsterdam and was used in two workshops. In these workshops, the Interactive Water Simulation Model was positively received. It included non-specialist participants in the process of suggesting and selecting possible solutions and made them part of the accompanying discussions and negotiations. It also provided the opportunity to evaluate and enhance possible solutions more often within the time horizon of a decision-making process. Several preconditions proved to be important for successfully applying the Interactive Water Simulation Model, such as the willingness of the stakeholders to participate and the preparation of different general main solutions that can be used for further iterations during a work session.

  19. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    PubMed

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-09-04

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
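    SED-ML files are XML documents that reference a model, a simulation setup, and the tasks tying them together. The snippet below is a minimal sketch of such a document built with Python's standard library; the element and attribute names follow the published SED-ML Level 1 structure as far as recalled here and should be treated as illustrative, and production files are normally written with dedicated libraries such as libSEDML rather than by hand.

```python
# Minimal sketch of a SED-ML-style document.  Element and attribute names
# are illustrative, not authoritative; real tooling (e.g. libSEDML) should
# be used for production files.
import xml.etree.ElementTree as ET

NS = "http://sed-ml.org/sed-ml/level1/version2"
root = ET.Element("sedML", {"xmlns": NS, "level": "1", "version": "2"})

models = ET.SubElement(root, "listOfModels")
ET.SubElement(models, "model", {
    "id": "model1",
    "language": "urn:sedml:language:sbml",   # could equally reference CellML
    "source": "oscillator.xml",              # hypothetical model file
})

sims = ET.SubElement(root, "listOfSimulations")
tc = ET.SubElement(sims, "uniformTimeCourse", {
    "id": "sim1", "initialTime": "0", "outputStartTime": "0",
    "outputEndTime": "100", "numberOfPoints": "1000",
})
ET.SubElement(tc, "algorithm", {"kisaoID": "KISAO:0000019"})  # a deterministic ODE solver

tasks = ET.SubElement(root, "listOfTasks")
ET.SubElement(tasks, "task", {
    "id": "task1", "modelReference": "model1", "simulationReference": "sim1",
})

print(ET.tostring(root, encoding="unicode"))
```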

  1. Rill erosion in natural and disturbed forests: 2. Modeling approaches

    Treesearch

    J. W. Wagenbrenner; P. R. Robichaud; W. J. Elliot

    2010-01-01

    As forest management scenarios become more complex, the ability to more accurately predict erosion from those scenarios becomes more important. In this second part of a two-part study we report model parameters based on 66 simulated runoff experiments in two disturbed forests in the northwestern U.S. The 5 disturbance classes were natural, 10-month old and 2-week old...

  2. Use of a Process Analysis Tool for Diagnostic Study on Fine Particulate Matter Predictions in the U.S.-Part II: Analysis and Sensitivity Simulations

    EPA Science Inventory

    Following the Part I paper that described an application of the U.S. EPA Models-3/Community Multiscale Air Quality (CMAQ) modeling system to the 1999 Southern Oxidants Study episode, this paper presents results from process analysis (PA) using the PA tool embedded in CMAQ and s...

  3. An energy function for dynamics simulations of polypeptides in torsion angle space

    NASA Astrophysics Data System (ADS)

    Sartori, F.; Melchers, B.; Böttcher, H.; Knapp, E. W.

    1998-05-01

Conventional simulation techniques to model the dynamics of proteins in atomic detail are restricted to short time scales. A simplified molecular description, in which high-frequency motions with small amplitudes are ignored, can overcome this problem. In this protein model only the backbone dihedrals φ and ψ and the χi of the side chains serve as degrees of freedom. Bond angles and lengths are fixed at ideal geometry values provided by the standard molecular dynamics (MD) energy function CHARMM. In this work a Monte Carlo (MC) algorithm is used, whose elementary moves employ cooperative rotations in a small window of consecutive amide planes, leaving the polypeptide conformation outside of this window invariant. A single window MC move generates only local conformational changes, but the application of many such moves at different parts of the polypeptide backbone leads to global conformational changes. To account for the lack of flexibility in the protein model employed, the energy function used to evaluate conformational energies is split into sequentially neighbored and sequentially distant contributions. The sequentially neighbored part is represented by an effective (φ,ψ)-torsion potential. It is derived from MD simulations of a flexible model dipeptide using a conventional MD energy function. To avoid exaggeration of hydrogen-bonding strengths, the electrostatic interactions involving hydrogen atoms are scaled down at short distances. With these adjustments of the energy function, the rigid polypeptide model exhibits the same equilibrium distributions as obtained by conventional MD simulation with a fully flexible molecular model. The same temperature dependence of the stability and build-up of α helices of 18-alanine as found in MD simulations is also observed using the adapted energy function for MC simulations. Analyses of transition frequencies demonstrate that dynamical aspects of MD trajectories are also faithfully reproduced. Finally, it is demonstrated that even for high-temperature unfolded polypeptides the MC simulation is more efficient by a factor of 10 than conventional MD simulation.
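    The core of the algorithm described above is a Metropolis Monte Carlo step applied to a window of consecutive torsion angles. The toy sketch below shows only that acceptance logic under simplifying assumptions: it perturbs a few consecutive torsions of an abstract chain with a made-up torsional potential, and it does not reproduce the concerted-rotation constraint that keeps the conformation outside the window fixed, nor the CHARMM-derived effective (φ,ψ) potential.

```python
# Toy sketch of the Metropolis acceptance logic behind window moves in
# torsion-angle space.  The energy function and all parameters are
# hypothetical stand-ins for the adapted potential described above.
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0
torsions = rng.uniform(-np.pi, np.pi, size=36)   # phi/psi angles of a toy chain

def energy(t):
    # Hypothetical smooth torsional potential used only for illustration.
    return np.sum(1.0 + np.cos(3.0 * t)) + 0.5 * np.sum(np.cos(t - 1.0))

def window_move(t, width=4, step=0.3):
    """Perturb `width` consecutive torsions; accept via Metropolis."""
    i = rng.integers(0, len(t) - width)
    trial = t.copy()
    trial[i:i + width] += rng.normal(0.0, step, size=width)
    dE = energy(trial) - energy(t)
    if dE <= 0.0 or rng.random() < np.exp(-dE / kT):
        return trial, True
    return t, False

for _ in range(1000):
    torsions, accepted = window_move(torsions)
```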

  4. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    NASA Astrophysics Data System (ADS)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) have become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model-based approach at component level is proposed that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying data base, additional samples via Finite-Element draping simulations are drawn according to a suitable design-table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian Regression meta-model is built from the data base. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in a short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: For each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance. Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
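    As a rough illustration of the meta-modelling step described above, the sketch below trains a Gaussian-process regressor on a handful of pre-sampled geometry parameters and a formability indicator. The parameter names (corner radius, flange depth), the response (maximum shear angle), and all numbers are hypothetical stand-ins rather than the paper's data; scikit-learn's GaussianProcessRegressor is used as a generic surrogate.

```python
# Sketch of a Gaussian-process meta-model trained on pre-sampled draping
# results.  All geometry parameters and responses are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Pre-sampled design table: each row is (corner radius [mm], flange depth [mm]).
X = np.array([[10, 20], [10, 60], [30, 20], [30, 60], [50, 40], [20, 40]], float)
# Formability indicator from FE draping runs, e.g. maximum shear angle [deg].
y = np.array([52.0, 68.0, 41.0, 55.0, 38.0, 47.0])

gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[20.0, 20.0]),
    normalize_y=True,
)
gp.fit(X, y)

# Cheap evaluation for a new geometry candidate, with uncertainty estimate.
mean, std = gp.predict(np.array([[25.0, 50.0]]), return_std=True)
print(mean[0], std[0])
```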

  5. Dynamic modeling of brushless dc motors for aerospace actuation

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A.; Nehl, T. W.

    1980-01-01

A discrete-time model for simulating the dynamics of samarium cobalt-type permanent magnet brushless dc machines is presented. The simulation model includes the interaction between these machines and their attached power conditioners, which are transistorized conditioner units. This model is part of an overall discrete-time analysis of the dynamic performance of electromechanical actuators, conducted as part of the development of prototype actuators studied and built for NASA-Johnson Space Center as a prospective alternative to the hydraulic actuators presently used in shuttle orbiter applications. The resulting numerical simulations of the various machine and power-conditioner current and voltage waveforms correlated excellently with the waveforms collected during hardware testing. These results, numerical and experimental, are presented here for machine motoring, regeneration, and dynamic braking modes. Application of the resulting model to the determination of machine current and torque profiles during closed-loop actuator operation was also analyzed, and the results are given in light of an overall view of the actuator system components. The applicability of this method of analysis to design optimization and troubleshooting in such prototype development is also discussed in light of the results at hand.
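    For readers unfamiliar with discrete-time machine models, the sketch below integrates a much-simplified permanent-magnet DC-machine equivalent with forward Euler. It is not the paper's model: there is no commutation logic and no transistorized power conditioner, and all parameter values are invented for illustration.

```python
# Minimal discrete-time sketch of a permanent-magnet DC-machine equivalent,
# integrated with forward Euler.  Far simpler than the machine/conditioner
# model described above; all parameter values are hypothetical.
R, L = 0.5, 1.5e-3        # winding resistance [ohm], inductance [H]
Ke, Kt = 0.05, 0.05       # back-EMF and torque constants
J, B = 2e-4, 1e-4         # rotor inertia [kg m^2], viscous friction
dt = 1e-5                 # time step [s]

i, omega = 0.0, 0.0       # state: current [A], speed [rad/s]
V, T_load = 24.0, 0.02    # applied voltage [V], load torque [N m]

for step in range(20000):
    di = (V - R * i - Ke * omega) / L          # electrical equation
    domega = (Kt * i - B * omega - T_load) / J # mechanical equation
    i += dt * di
    omega += dt * domega

print(i, omega)   # near-steady current and speed after 0.2 s
```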

  6. Statistical Validation of a New Python-based Military Workforce Simulation Model

    DTIC Science & Technology

    2014-12-30

    also having a straightforward syntax that is accessible to non-programmers. Furthermore, it is supported by an impressive variety of scientific... accessed by a given element of model logic or line of code. For example, in Arena, data arrays, queues and the simulation clock are part of the...global scope and are therefore accessible anywhere in the model. The disadvantage of scopes is that all names in a scope must be unique. If more than

  7. Molecular dynamics simulations of biological membranes and membrane proteins using enhanced conformational sampling algorithms☆

    PubMed Central

    Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji

    2016-01-01

This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins. Guest Editors: J.C. Gumbart and Sergei Noskov. PMID:26766517

  8. Phase 1 Free Air CO2 Enrichment Model-Data Synthesis (FACE-MDS): Model Output Data (2015)

    DOE Data Explorer

    Walker, A. P.; De Kauwe, M. G.; Medlyn, B. E.; Zaehle, S.; Asao, S.; Dietze, M.; El-Masri, B.; Hanson, P. J.; Hickler, T.; Jain, A.; Luo, Y.; Parton, W. J.; Prentice, I. C.; Ricciuto, D. M.; Thornton, P. E.; Wang, S.; Wang, Y -P; Warlind, D.; Weng, E.; Oren, R.; Norby, R. J.

    2015-01-01

These datasets comprise the model output from phase 1 of the FACE-MDS. These include simulations of the Duke and Oak Ridge experiments and also idealised long-term (300 year) simulations at both sites (please see the modelling protocol for details). Included as part of this dataset are modelling and output protocols. The model datasets are formatted according to the output protocols. Phase 1 datasets are reproduced here for posterity and reproducibility, although the model output for the experimental period has been somewhat superseded by the Phase 2 datasets.

  9. Development of the TRANSIMS Environmental Model

    DOT National Transportation Integrated Search

    1997-06-01

The TRansportation ANalysis and SIMulation System (TRANSIMS) is one part of the multi-track Travel Model Improvement Program under joint development by the Department of Transportation, the Environmental Protection Agency, and the Department of Energy. ...

  10. Development of a Human Motor Model for the Evaluation of an Integrated Alerting and Notification Flight Deck System

    NASA Technical Reports Server (NTRS)

    Daiker, Ron; Schnell, Thomas

    2010-01-01

A human motor model was developed on the basis of performance data collected in a flight simulator. The motor model is under consideration as one component of a virtual pilot model for the evaluation of NextGen crew alerting and notification systems in flight decks. This model may be used in a digital Monte Carlo simulation to compare flight deck layout design alternatives. The virtual pilot model is being developed as part of a NASA project to evaluate multiple crew alerting and notification flight deck configurations. Model parameters were derived from empirical distributions of pilot data collected in a flight simulator experiment. The goal of this model is to simulate pilot motor performance in the approach-to-landing task. The unique challenges associated with modeling the complex dynamics of humans interacting with the cockpit environment are discussed, along with the current state and future direction of the model.

  11. Fault Gouge Numerical Simulation: Dynamic Rupture Propagation and Local Energy Partitioning

    NASA Astrophysics Data System (ADS)

    Mollon, G.

    2017-12-01

In this communication, we present dynamic simulations of the local (centimetric) behaviour of a fault filled with a granular gouge submitted to dynamic rupture. The numerical tool (Fig. 1) combines classical Discrete Element Modelling (albeit with the ability to deal with arbitrary grain shapes) for the simulation of the gouge, and continuous modelling for the simulation of acoustic wave emission and propagation. In a first part, the model is applied to the simulation of steady-state shearing of the fault under remote displacement boundary conditions, in order to observe the shear accommodation at the interface (R1 cracks, localization, wear, etc.). It also makes it possible to fit the Rate and State Friction properties of the granular gouge to desired values by adapting the contact laws between grains. Such simulations provide quantitative insight into the steady-state energy partitioning between fracture, friction and acoustic emissions as a function of the shear rate. In a second part, the model is submitted to dynamic rupture. For that purpose, the fault is elastically preloaded just below rupture, and a displacement pulse is applied at one end of the sample (and on only one side of the fault). This allows us to observe the propagation of the instability along the fault and the interplay between this propagation and the local granular phenomena. Energy partitioning is then observed both in space and time.

  12. Overview of the Tusas Code for Simulation of Dendritic Solidification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trainer, Amelia J.; Newman, Christopher Kyle; Francois, Marianne M.

    2016-01-07

The aim of this project is to conduct a parametric investigation into the modeling of two-dimensional dendrite solidification, using the phase field model. Specifically, we use the Tusas code, which is for coupled heat and phase-field simulation of dendritic solidification. Dendritic solidification, which may occur in the presence of an unstable solidification interface, results in treelike microstructures that often grow perpendicular to the rest of the growth front. The interface may become unstable if the enthalpy of the solid material is less than that of the liquid material, or if the solute is less soluble in solid than it is in liquid, potentially causing a partition [1]. A key motivation behind this research is that a broadened understanding of phase-field formulation and microstructural developments can be utilized for macroscopic simulations of phase change. This may be directly implemented as a part of the Telluride project at Los Alamos National Laboratory (LANL), through which a computational additive manufacturing simulation tool is being developed, ultimately to become part of the Advanced Simulation and Computing Program within the U.S. Department of Energy [2].
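    For orientation, a generic coupled phase-field/heat formulation of the kind commonly used for dendritic solidification is sketched below; it is illustrative only and not necessarily the exact system solved by Tusas.

```latex
% Generic coupled phase-field / heat system (illustrative).  \phi is the
% phase field, u a dimensionless temperature, \tau and W the relaxation
% time and interface width, \lambda the coupling constant, D the thermal
% diffusivity.
\begin{align}
  \tau \,\frac{\partial \phi}{\partial t} &=
      \nabla \cdot \left( W^{2} \nabla \phi \right)
      + \phi - \phi^{3}
      - \lambda\, u \left( 1 - \phi^{2} \right)^{2}, \\
  \frac{\partial u}{\partial t} &=
      D \,\nabla^{2} u
      + \frac{1}{2}\,\frac{\partial \phi}{\partial t}.
\end{align}
```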

  13. Variable viscosity and density biofilm simulations using an immersed boundary method, part II: Experimental validation and the heterogeneous rheology-IBM

    NASA Astrophysics Data System (ADS)

    Stotsky, Jay A.; Hammond, Jason F.; Pavlovsky, Leonid; Stewart, Elizabeth J.; Younger, John G.; Solomon, Michael J.; Bortz, David M.

    2016-07-01

    The goal of this work is to develop a numerical simulation that accurately captures the biomechanical response of bacterial biofilms and their associated extracellular matrix (ECM). In this, the second of a two-part effort, the primary focus is on formally presenting the heterogeneous rheology Immersed Boundary Method (hrIBM) and validating our model by comparison to experimental results. With this extension of the Immersed Boundary Method (IBM), we use the techniques originally developed in Part I ([19]) to treat biofilms as viscoelastic fluids possessing variable rheological properties anchored to a set of moving locations (i.e., the bacteria locations). In particular, we incorporate spatially continuous variable viscosity and density fields into our model. Although in [14,15], variable viscosity is used in an IBM context to model discrete viscosity changes across interfaces, to our knowledge this work and Part I are the first to apply the IBM to model a continuously variable viscosity field. We validate our modeling approach from Part I by comparing dynamic moduli and compliance moduli computed from our model to data from mechanical characterization experiments on Staphylococcus epidermidis biofilms. The experimental setup is described in [26] in which biofilms are grown and tested in a parallel plate rheometer. In order to initialize the positions of bacteria in the biofilm, experimentally obtained three dimensional coordinate data was used. One of the major conclusions of this effort is that treating the spring-like connections between bacteria as Maxwell or Zener elements provides good agreement with the mechanical characterization data. We also found that initializing the simulations with different coordinate data sets only led to small changes in the mechanical characterization results. Matlab code used to produce results in this paper will be available at https://github.com/MathBioCU/BiofilmSim.
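    The abstract reports that Maxwell or Zener elements for the inter-bacteria connections reproduce the measured moduli well. As a minimal illustration of what a Maxwell link involves, the sketch below advances the force carried by a spring-and-dashpot-in-series element with forward Euler; the parameter values are invented and the full hrIBM coupling is of course far more involved.

```python
# Sketch of a Maxwell element (spring and dashpot in series).  Incremental
# form: dF/dt = k * v_rel - (k / eta) * F, integrated with forward Euler.
# Parameter values are illustrative only.
def maxwell_force(F, v_rel, k, eta, dt):
    """Advance the force carried by a Maxwell element by one time step."""
    return F + dt * (k * v_rel - (k / eta) * F)

k, eta, dt = 2.0e-3, 5.0e-2, 1.0e-3   # stiffness, viscosity, time step
F = 0.0
for _ in range(100_000):               # 100 time units, ~4 relaxation times
    F = maxwell_force(F, v_rel=1.0e-4, k=k, eta=eta, dt=dt)
print(F)   # close to the viscous plateau eta * v_rel = 5e-6
```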

  14. The systems biology simulation core algorithm

    PubMed Central

    2013-01-01

Background: With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results: This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions: The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
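    The central idea, interpreting a reaction-network model as a system of ordinary differential equations and handing it to a numerical integrator, can be illustrated with a toy two-reaction network. The sketch below uses SciPy's solve_ivp; a real SBML interpreter would assemble the rate vector from the parsed model structure rather than hard-coding it.

```python
# Toy illustration of solving a reaction network as an ODE system.
# The two-reaction network and its rate constants are hypothetical.
from scipy.integrate import solve_ivp

k1, k2 = 0.8, 0.3   # rate constants

def rhs(t, y):
    s, p = y                    # substrate and product concentrations
    v1 = k1 * s                 # reaction 1: S -> P
    v2 = k2 * p                 # reaction 2: P -> (degradation)
    return [-v1, v1 - v2]       # stoichiometry applied to the reaction rates

sol = solve_ivp(rhs, (0.0, 20.0), [10.0, 0.0], dense_output=True)
print(sol.y[:, -1])             # concentrations at t = 20
```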

  15. A numerical model of a HIL scaled roller rig for simulation of wheel-rail degraded adhesion condition

    NASA Astrophysics Data System (ADS)

    Conti, Roberto; Meli, Enrico; Pugi, Luca; Malvezzi, Monica; Bartolini, Fabio; Allotta, Benedetto; Rindi, Andrea; Toni, Paolo

    2012-05-01

Scaled roller rigs used for railway applications play a fundamental role in the development of new technologies and new devices, combining the hardware-in-the-loop (HIL) benefits with a reduction of the economic investment. The main problem of the scaled roller rig with respect to full-scale ones is the increased complexity due to the scaling factors. For this reason, before building the test rig, the development of a software model of the HIL system can be useful to analyse the system behaviour in different operative conditions. One has to consider the multi-body behaviour of the scaled roller rig, the controller, and the model of the virtual vehicle whose dynamics has to be reproduced on the rig. The main purpose of this work is the development of a complete model that satisfies the previous requirements and, in particular, the performance analysis of the controller and of the dynamical behaviour of the scaled roller rig when disturbances are simulated together with low adhesion conditions. Since the scaled roller rig will be used to simulate degraded adhesion conditions, an accurate and realistic wheel-roller contact model also has to be included. The contact model consists of two parts: the contact point detection and the adhesion model. The first part is based on a numerical method described in previous studies for the wheel-rail case and modified to simulate the three-dimensional contact between revolute surfaces (wheel-roller). The second part consists of the evaluation of the contact forces by means of the Hertz theory for the normal problem and the Kalker theory for the tangential problem. Some numerical tests were performed; in particular, low adhesion conditions were simulated, and bogie hunting and dynamical imbalance of the wheelsets were introduced. The tests were devoted to verifying the robustness of the control system with respect to some of the more frequent disturbances that may influence the roller rig dynamics. In particular, we verified that wheelset imbalance could significantly influence system performance, and a multistate filter was designed to reduce the effect of this disturbance.
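    The two-part contact evaluation mentioned above (Hertz for the normal problem, Kalker for the tangential problem) can be caricatured as follows. The sketch shows a Hertzian point-contact normal force and a tangential creep force that is linear at small creepage and saturated at the adhesion bound; full Kalker theory with ellipse-dependent creep coefficients and spin is considerably richer, and all constants are illustrative only.

```python
# Simplified contact-force sketch: Hertzian nonlinear normal force plus a
# linear creep force saturated at mu*N.  Not full Kalker theory; all
# constants are hypothetical.
import math

K_HERTZ = 1.0e10      # Hertzian stiffness coefficient [N / m^(3/2)]
C_CREEP = 5.0e6       # linearised creep coefficient [N per unit creepage]
MU = 0.15             # friction coefficient (degraded adhesion)

def normal_force(penetration):
    """Hertzian point contact: N = k * delta^(3/2) for delta > 0."""
    return K_HERTZ * max(penetration, 0.0) ** 1.5

def tangential_force(creepage, N):
    """Linear creep force, saturated at the adhesion bound mu*N."""
    f = C_CREEP * creepage
    limit = MU * N
    return math.copysign(min(abs(f), limit), creepage)

N = normal_force(1.0e-4)          # 0.1 mm penetration
print(N, tangential_force(0.02, N))
```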

  16. Simulation Exercises for an Undergraduate Digital Process Control Course.

    ERIC Educational Resources Information Center

    Reeves, Deborah E.; Schork, F. Joseph

    1988-01-01

Presents six problems from an alternative approach to homework traditionally given to follow-up lectures. Stresses the advantage of longer-term exercises which allow for creativity and independence on the part of the student. Problems include: "System Model," "Open-Loop Simulation," "PID Control," "Dahlin…

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epiney, A.; Canepa, S.; Zerkak, O.

The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role to achieve a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.

  18. Modeling DNP3 Traffic Characteristics of Field Devices in SCADA Systems of the Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Huan; Cheng, Liang; Chuah, Mooi Choo

In the generation, transmission, and distribution sectors of the smart grid, intelligence of field devices is realized by programmable logic controllers (PLCs). Many smart-grid subsystems are essentially cyber-physical energy systems (CPES): For instance, the power system process (i.e., the physical part) within a substation is monitored and controlled by a SCADA network with hosts running miscellaneous applications (i.e., the cyber part). To study the interactions between the cyber and physical components of a CPES, several co-simulation platforms have been proposed. However, the network simulators/emulators of these platforms do not include a detailed traffic model that takes into account the impacts of the execution model of PLCs on traffic characteristics. As a result, network traces generated by co-simulation only reveal the impacts of the physical process on the contents of the traffic generated by SCADA hosts, whereas the distinction between PLCs and computing nodes (e.g., a hardened computer running a process visualization application) has been overlooked. To generate realistic network traces using co-simulation for the design and evaluation of applications relying on accurate traffic profiles, it is necessary to establish a traffic model for PLCs. In this work, we propose a parameterized model for PLCs that can be incorporated into existing co-simulation platforms. We focus on the DNP3 subsystem of slave PLCs, which automates the processing of packets from the DNP3 master. To validate our approach, we extract model parameters from both the configuration and network traces of real PLCs. Simulated network traces are generated and compared against those from PLCs. Our evaluation shows that our proposed model captures the essential traffic characteristics of DNP3 slave PLCs, which can be used to extend existing co-simulation platforms and gain further insights into the behaviors of CPES.

  19. Nonintrusive 3D reconstruction of human bone models to simulate their bio-mechanical response

    NASA Astrophysics Data System (ADS)

    Alexander, Tsouknidas; Antonis, Lontos; Savvas, Savvakis; Nikolaos, Michailidis

    2012-06-01

3D finite element models representing functional parts of the human skeletal system have been repeatedly introduced over the last years to simulate the biomechanical response of anatomical characteristics or to investigate surgical treatment. The reconstruction of geometrically accurate FEM models poses a significant challenge for engineers and physicians, as recent advances in tissue engineering dictate highly customized implants, while facilitating the production of alloplast materials that are employed to restore, replace or supplement the function of human tissue. The premise of every accurate reconstruction method is to capture the precise geometrical characteristics of the examined tissue, and thus the selection of a suitable imaging technique is of the utmost importance. This paper reviews existing and potential applications related to the current state of the art of medical imaging and simulation techniques. The procedures are examined by introducing their concepts, strengths and limitations, while the authors also present part of their recent activities in these areas.

  20. Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts

    NASA Astrophysics Data System (ADS)

    Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo

This paper demonstrates the successful printing and the optimization of processing parameters of high-strength H13 tool steel by Selective Laser Melting (SLM). A D-optimal Design of Experiments (DOE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1 × 1 × 1 cm) we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model of the SLM process is constructed and validated by comparing the densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before printing any samples. Establishing a parameter window gives the user freedom in parameter selection, such as choosing parameters that result in the fastest print speed.
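    The parameter-window idea can be illustrated by fitting a simple quadratic response surface for density over the three process parameters. The sketch below does this with scikit-learn on invented (power, speed, hatch width, density) samples; it is a stand-in for, not a reproduction of, the paper's D-optimal DOE and numerical model.

```python
# Sketch of a quadratic response surface for relative density as a
# function of laser power, scan speed and hatch width.  All sample
# values are hypothetical.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Columns: laser power [W], scan speed [mm/s], hatch width [mm].
X = np.array([
    [150, 600, 0.08], [150, 900, 0.12], [200, 600, 0.12], [200, 900, 0.08],
    [250, 750, 0.10], [175, 750, 0.10], [225, 825, 0.09], [250, 600, 0.10],
    [150, 750, 0.10], [250, 900, 0.12],
], dtype=float)
# Relative density from image analysis [%] (invented values).
y = np.array([98.2, 96.5, 99.1, 98.8, 99.4, 98.9, 99.0, 99.5, 97.8, 98.4])

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)

# Query a candidate parameter set inside the window.
print(model.predict(np.array([[230.0, 700.0, 0.09]])))
```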

  1. Water and salt balance of Great Salt Lake, Utah, and simulation of water and salt movement through the causeway

    USGS Publications Warehouse

    Wold, Steven R.; Thomas, Blakemore E.; Waddell, Kidd M.

    1997-01-01

The water and salt balance of Great Salt Lake primarily depends on the amount of inflow from tributary streams and the conveyance properties of a causeway constructed during 1957-59 that divides the lake into the south and north parts. The conveyance properties of the causeway originally included two culverts, each 15 feet wide, and the permeable rock-fill material. During 1980-86, the salt balance changed as a result of record high inflow that averaged 4,627,000 acre-feet annually and modifications made to the conveyance properties of the causeway that included opening a 300-foot-wide breach. In this study, a model developed in 1973 by Waddell and Bolke to simulate the water and salt balance of the lake was revised to accommodate the high water-surface altitude and modifications made to the causeway. This study, done by the U.S. Geological Survey in cooperation with the Utah Department of Natural Resources, Division of State Lands and Forestry, updates the model with monitoring data collected during 1980-86. This report describes the calibration of the model and presents the results of simulations for three hypothetical 10-year periods. From January 1, 1980, to July 31, 1984, a net load of 0.5 billion tons of dissolved salt flowed from the south to the north part of the lake primarily as a result of record inflows. From August 1, 1984, when the breach was opened, to December 31, 1986, a net load of 0.3 billion tons of dissolved salt flowed from the north to the south part of the lake primarily as a result of the breach. For simulated inflow rates during a hypothetical 10-year period resulting in the water-surface altitude decreasing from about 4,200 to 4,192 feet, there was a net movement of about 1.0 billion tons of dissolved salt from the south to the north part, and about 1.7 billion tons of salt precipitated in the north part. For simulated inflow rates during a hypothetical 10-year period resulting in a rise in water-surface altitude from about 4,200 to 4,212 feet, there was a net movement of about 0.2 billion tons of dissolved salt from the south to the north part and no salt was precipitated in the north part of the lake.
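    As a minimal illustration of the kind of budget bookkeeping such a model performs, the sketch below advances the water volumes and salt loads of a two-part lake by one month. The inflow, exchange, and concentration values are invented, and the real model additionally represents the culverts, the breach, stage-volume relations, and salt precipitation and dissolution.

```python
# Minimal monthly water/salt budget sketch for a two-part lake joined by a
# causeway.  All numbers and the exchange formulation are hypothetical.
def monthly_update(state, inflow_s, precip, evap, exchange_s_to_n,
                   salt_conc_s, salt_conc_n):
    """Advance south/north water volumes [acre-ft] and salt loads [tons]."""
    new = dict(state)
    # Water balance (all terms in acre-feet per month).
    new["vol_south"] += inflow_s + precip - evap - exchange_s_to_n
    new["vol_north"] += exchange_s_to_n + precip - evap
    # Salt moves with the water that crosses the causeway.
    if exchange_s_to_n >= 0:
        moved = exchange_s_to_n * salt_conc_s     # tons, using south concentration
    else:
        moved = exchange_s_to_n * salt_conc_n     # negative: north-to-south flow
    new["salt_south"] -= moved
    new["salt_north"] += moved
    return new

state = {"vol_south": 9.0e6, "vol_north": 6.0e6,
         "salt_south": 2.3e9, "salt_north": 2.2e9}
state = monthly_update(state, inflow_s=380_000, precip=40_000, evap=110_000,
                       exchange_s_to_n=150_000, salt_conc_s=120.0, salt_conc_n=160.0)
print(state)
```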

  2. Short-Range Prediction of Monsoon Precipitation by NCMRWF Regional Unified Model with Explicit Convection

    NASA Astrophysics Data System (ADS)

    Mamgain, Ashu; Rajagopal, E. N.; Mitra, A. K.; Webster, S.

    2018-03-01

There are increasing efforts towards the prediction of high-impact weather systems and the understanding of related dynamical and physical processes. High-resolution numerical model simulations can be used directly to model the impact at fine-scale detail. Improvement in forecast accuracy can help in disaster management planning and execution. The National Centre for Medium Range Weather Forecasting (NCMRWF) has implemented a high-resolution regional unified modeling system with explicit convection, embedded within a coarser-resolution global model with parameterized convection. The model configurations are based on the UK Met Office unified seamless modeling system. Recent land use/land cover data (2012-2013) obtained from the Indian Space Research Organisation (ISRO) are also used in the model simulations. Results based on a month of short-range forecasts over India from both the global and regional models indicate that convection-permitting simulations by the high-resolution regional model reduce the dry bias over the southern parts of the West Coast and the monsoon trough zone, with more intense rainfall mainly towards the northern parts of the monsoon trough zone. The regional model with explicit convection has significantly improved the phase of the diurnal cycle of rainfall as compared to the global model. Results from two monsoon depression cases during the study period show substantial improvement in the details of the rainfall pattern. Many rainfall categories defined for operational forecast purposes by Indian forecasters are also well represented by the convection-permitting high-resolution simulations. For the statistics of the number of days within a range of rain categories between `No-Rain' and `Heavy Rain', the regional model outperforms the global model in all ranges. In the very heavy and extremely heavy categories, the regional simulations overestimate the number of rainfall days. The global model with parameterized convection tends to overestimate light-rainfall days and underestimate heavy-rain days compared with the observations.

  3. Simulation Based Low-Cost Composite Process Development at the US Air Force Research Laboratory

    NASA Technical Reports Server (NTRS)

    Rice, Brian P.; Lee, C. William; Curliss, David B.

    2003-01-01

Low-cost composite research in the US Air Force Research Laboratory, Materials and Manufacturing Directorate, Organic Matrix Composites Branch has focused on the theme of affordable performance. Practically, this means that we take a very broad view when considering the affordability of composites. Factors such as material costs, labor costs, and recurring and nonrecurring manufacturing costs are balanced against performance to arrive at a relative affordability-versus-performance measure of merit. The research efforts discussed here are two projects focused on affordable processing of composites. The first is the use of a neural network scheme to model cure reaction kinetics; the kinetics are then coupled with simple heat transport models to predict, in real time, future exotherms and control them. The neural network scheme is demonstrated to be very robust and a much more efficient method than the mechanistic cure modeling approach. This enables very practical low-cost processing of thick composite parts. The second project is liquid composite molding (LCM) process simulation. LCM processing of large 3D integrated composite parts has been demonstrated to be a very cost-effective way to produce large integrated aerospace components; specific examples of LCM processes are resin transfer molding (RTM), vacuum assisted resin transfer molding (VARTM), and other similar approaches. LCM process simulation is a critical part of developing an LCM process approach. Flow simulation enables the development of the most robust approach to introducing resin into complex preforms. Furthermore, LCM simulation can be used in conjunction with flow front sensors to control the LCM process in real time to account for preform or resin variability.

  4. Automated procedure for developing hybrid computer simulations of turbofan engines. Part 1: General description

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Krosel, S. M.; Bruton, W. M.

    1982-01-01

A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology that is presented makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations that are needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.

  5. Overview of High-Fidelity Modeling Activities in the Numerical Propulsion System Simulations (NPSS) Project

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    2002-01-01

    A high-fidelity simulation of a commercial turbofan engine has been created as part of the Numerical Propulsion System Simulation Project. The high-fidelity computer simulation utilizes computer models that were developed at NASA Glenn Research Center in cooperation with turbofan engine manufacturers. The average-passage (APNASA) Navier-Stokes based viscous flow computer code is used to simulate the 3D flow in the compressors and turbines of the advanced commercial turbofan engine. The 3D National Combustion Code (NCC) is used to simulate the flow and chemistry in the advanced aircraft combustor. The APNASA turbomachinery code and the NCC combustor code exchange boundary conditions at the interface planes at the combustor inlet and exit. This computer simulation technique can evaluate engine performance at steady operating conditions. The 3D flow models provide detailed knowledge of the airflow within the fan and compressor, the high and low pressure turbines, and the flow and chemistry within the combustor. The models simulate the performance of the engine at operating conditions that include sea level takeoff and the altitude cruise condition.

  6. Simulation of a weather radar display for over-water airborne radar approaches

    NASA Technical Reports Server (NTRS)

    Clary, G. R.

    1983-01-01

    Airborne radar approach (ARA) concepts are being investigated as a part of NASA's Rotorcraft All-Weather Operations Research Program on advanced guidance and navigation methods. This research is being conducted using both piloted simulations and flight test evaluations. For the piloted simulations, a mathematical model of the airborne radar was developed for over-water ARAs to offshore platforms. This simulated flight scenario requires radar simulation of point targets, such as oil rigs and ships, distributed sea clutter, and transponder beacon replies. Radar theory, weather radar characteristics, and empirical data derived from in-flight radar photographs are combined to model a civil weather/mapping radar typical of those used in offshore rotorcraft operations. The resulting radar simulation is realistic and provides the needed simulation capability for ongoing ARA research.

  7. Engineering applications of strong ground motion simulation

    NASA Astrophysics Data System (ADS)

    Somerville, Paul

    1993-02-01

The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. We then show examples of the application of the simulation procedure to the estimation of the design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.
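    The procedure can be summarized schematically: motion at a site is synthesized by summing, over the subfaults of a heterogeneous slip model, empirical source functions convolved with simplified Green's functions that carry the path effects. The notation below is illustrative, not the authors' exact formulation.

```latex
% Schematic synthesis of site ground motion u(t) from N subfaults
% (illustrative notation only).
u(t) \;=\; \sum_{i=1}^{N} A_i \,\bigl( s_i * g_i \bigr)(t - t_i)
```

    Here A_i scales the contribution of subfault i (for example by its slip), s_i is the empirical source function, g_i the simplified Green's function from that subfault to the site, and t_i the rupture-time delay.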

  8. IFC BIM-Based Methodology for Semi-Automated Building Energy Performance Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bazjanac, Vladimir

    2008-07-01

Building energy performance (BEP) simulation is still rarely used in building design, commissioning and operations. The process is too costly and too labor intensive, and it takes too long to deliver results. Its quantitative results are not reproducible due to arbitrary decisions and assumptions made in simulation model definition, and can be trusted only under special circumstances. A methodology to semi-automate BEP simulation preparation and execution makes this process much more effective. It incorporates principles of information science and aims to eliminate inappropriate human intervention that results in subjective and arbitrary decisions. This is achieved by automating every part of the BEP modeling and simulation process that can be automated, by relying on data from original sources, and by making any necessary data transformation rule-based and automated. This paper describes the new methodology and its relationship to IFC-based BIM and software interoperability. It identifies five steps that are critical to its implementation, and shows what part of the methodology can be applied today. The paper concludes with a discussion of application to simulation with EnergyPlus, and describes data transformation rules embedded in the new Geometry Simplification Tool (GST).

  9. Data Analysis Techniques for Physical Scientists

    NASA Astrophysics Data System (ADS)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.

  10. Volcano Modelling and Simulation gateway (VMSg): A new web-based framework for collaborative research in physical modelling and simulation of volcanic phenomena

    NASA Astrophysics Data System (ADS)

    Esposti Ongaro, T.; Barsotti, S.; de'Michieli Vitturi, M.; Favalli, M.; Longo, A.; Nannipieri, L.; Neri, A.; Papale, P.; Saccorotti, G.

    2009-12-01

    Physical and numerical modelling is becoming of increasing importance in volcanology and volcanic hazard assessment. However, new interdisciplinary problems arise when dealing with complex mathematical formulations, numerical algorithms and their implementations on modern computer architectures. Therefore new frameworks are needed for sharing knowledge, software codes, and datasets among scientists. Here we present the Volcano Modelling and Simulation gateway (VMSg, accessible at http://vmsg.pi.ingv.it), a new electronic infrastructure for promoting knowledge growth and transfer in the field of volcanological modelling and numerical simulation. The new web portal, developed in the framework of former and ongoing national and European projects, is based on a dynamic Content Manager System (CMS) and was developed to host and present numerical models of the main volcanic processes and relationships including magma properties, magma chamber dynamics, conduit flow, plume dynamics, pyroclastic flows, lava flows, etc. Model applications, numerical code documentation, simulation datasets as well as model validation and calibration test-cases are also part of the gateway material.

  11. Predictive Model for Particle Residence Time Distributions in Riser Reactors. Part 1: Model Development and Validation

    DOE PAGES

    Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; ...

    2017-02-28

In this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors, with a focus on the determination of solid catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations were conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and RTD mean and spread. For the simulation cases, it was found that accurate RTD prediction requires the Johnson and Jackson partial-slip solids boundary condition for all models, and that a sub-grid model is useful so that ultra-high-resolution grids, which are very computationally intensive, are not required. Finally, we discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.

  12. Does preliminary optimisation of an anatomically correct skull-brain model using simple simulants produce clinically realistic ballistic injury fracture patterns?

    PubMed

    Mahoney, P F; Carr, D J; Delaney, R J; Hunt, N; Harrison, S; Breeze, J; Gibb, I

    2017-07-01

    Ballistic head injury remains a significant threat to military personnel. Studying such injuries requires a model that can be used with a military helmet. This paper describes further work on a skull-brain model using skulls made from three different polyurethane plastics and a series of skull 'fills' to simulate brain (3, 5, 7 and 10% gelatine by mass and PermaGel™). The models were subjected to ballistic impact from 7.62 × 39 mm mild steel core bullets. The first part of the work compares the different polyurethanes (mean bullet muzzle velocity of 708 m/s), and the second part compares the different fills (mean bullet muzzle velocity of 680 m/s). The impact events were filmed using high speed cameras. The resulting fracture patterns in the skulls were reviewed and scored by five clinicians experienced in assessing penetrating head injury. In over half of the models, one or more assessors felt aspects of the fracture pattern were close to real injury. Limitations of the model include the skull being manufactured in two parts and the lack of a realistic skin layer. Further work is ongoing to address these.

  13. Modelling and simulation of the consolidation behavior during thermoplastic prepreg composites forming process

    NASA Astrophysics Data System (ADS)

    Xiong, H.; Hamila, N.; Boisse, P.

    2017-10-01

Pre-impregnated thermoplastic composites have recently attracted increasing interest in the automotive industry for their excellent mechanical properties and their rapid-cycle manufacturing process. Modelling and numerical simulation of forming processes for composite parts with complex geometry are necessary to predict and optimize manufacturing practices, especially the consolidation effects. A viscoelastic relaxation model is proposed to characterize the consolidation behavior of thermoplastic prepregs, based on compaction tests over a range of temperatures. The intimate contact model is employed to predict the evolution of consolidation, which permits prediction of the void microstructure present through the prepreg. Within a hyperelastic framework, several simulation tests are launched by combining a newly developed solid-shell finite element with the consolidation models.

  14. Simulation of the interaction of karstic lakes Magnolia and Brooklyn with the upper Floridan Aquifer, southwestern Clay County, Florida

    USGS Publications Warehouse

    Merritt, M.L.

    2001-01-01

    The stage of Lake Brooklyn, in southwestern Clay County, Florida, has varied over a range of 27 feet since measurements by the U.S. Geological Survey began in July 1957. The large stage changes have been attributed to the relation between highly transient surface-water inflow to the lake and subsurface conduits of karstic origin that permit a high rate of leakage from the lake to the Upper Floridan aquifer. After the most recent and severe stage decline (1990-1994), the U.S. Geological Survey began a study that entailed the use of numerical ground-water flow models to simulate the interaction of the lake with the Upper Floridan aquifer and the large fluctuations of stage that were a part of that process. A package (set of computer programs) designed to represent lake/aquifer interaction in the U.S. Geological Survey Modular Finite-Difference Ground-Water Flow Model (MODFLOW-96) and the Three-Dimensional Method-of-Characteristics Solute-Transport Model (MOC3D) simulators was prepared as part of this study, and a demonstration of its capability was a primary objective of the study. (Although the official names are Brooklyn Lake and Magnolia Lake (Florida Geographic Names), in this report the local names, Lake Brooklyn and Lake Magnolia, are used.) In the simulator of lake/aquifer interaction used in this investigation, the stage of each lake in a simulation is updated in successive time steps by a budget process that takes into account ground-water seepage, precipitation upon and evaporation from the lake surface, stream inflows and outflows, overland runoff inflows, and augmentation or depletion by artificial means. The simulator was given the capability to simulate both the division of a lake into separate pools as lake stage falls and the coalescence of several pools into a single lake as the stage rises. This representational capability was required to simulate Lake Brooklyn, which can divide into as many as 10 separate pools at sufficiently low stage. In the first of two calibrated models, recharge to the water table, specified as a monthly rate, was set equal to 40 percent of the monthly rainfall rate. The specified rate of inflow to the uppermost stream segment was set equal to outflows from Lake Lowry estimated from lake stage and the 1994-97 rating table. Leakage to the intermediate and Upper Floridan aquifers was assumed to occur from the surficial aquifer system through the confining layers directly beneath deeper parts of the lake bottom. A leakance coefficient value of 0.001 feet per day per foot of thickness was used beneath Lake Magnolia, and a value of 0.005 feet per day per foot of thickness was used beneath most of Lake Brooklyn. With these values, the conductance through the confining layers beneath Lake Brooklyn was about 19 times that beneath Lake Magnolia. The simulated stages of Lake Brooklyn matched the measured stages reasonably well in the early (1957-72) and later (1990-98) parts of the simulation time period, but the match was unsatisfactory in an intermediate time period (1973-89). To resolve this discrepancy, the hypothesis was proposed that undocumented losses of water from Alligator Creek upstream from Lake Brooklyn or from the lake itself occurred between 1973 and 1989 when there was sufficient streamflow. The resulting simulation of lake stages matched the measured lake stages accurately during the entire simulation time period. 
The model was then revised to incorporate the assumption that only 20 percent of precipitation recharged the water table (the second calibrated model). Recalibration of the model required that leakance values for the confining units under deeper parts of the lakes also be reduced by nearly 50 percent. The stages simulated with the new parameter assumptions, but retaining the assumption of surface-water losses, were an excellent match of the measured values. The stage of Lake Magnolia was also simulated accurately. The results of sensitivity analyses show that simulated s
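
    The budget process described above lends itself to a simple per-time-step stage update. The sketch below is a minimal, hypothetical illustration in Python; the function, the stage-area relation, and all numbers are invented for illustration and are not the lake package used with MODFLOW-96/MOC3D in this study.

```python
# Minimal sketch of a per-time-step lake stage update from budget terms.
# All names, the stage-area relation, and the numbers are hypothetical.

def update_lake_stage(stage, dt, area_of_stage,
                      seepage, stream_in, stream_out,
                      runoff, augmentation, precip, evap):
    """Advance lake stage [ft] over one time step dt [days].

    seepage, stream_in, stream_out, runoff, augmentation are volumetric
    rates [ft^3/day]; precip and evap are depth rates [ft/day];
    area_of_stage is a callable returning lake surface area [ft^2].
    """
    area = area_of_stage(stage)
    dV = dt * ((precip - evap) * area          # surface exchange
               + stream_in - stream_out        # channel flow
               + runoff + augmentation         # overland + artificial
               - seepage)                      # leakage to the aquifer
    # Explicit, first-order conversion of the volume change to a stage change.
    return stage + dV / area


# Example: a toy stage-area relation and a single 1-day step.
area = lambda s: 4.0e6 + 2.0e5 * (s - 95.0)    # ft^2, hypothetical bathymetry
new_stage = update_lake_stage(100.0, 1.0, area,
                              seepage=5.0e5, stream_in=8.0e5, stream_out=0.0,
                              runoff=1.0e5, augmentation=0.0,
                              precip=0.01, evap=0.015)
print(f"stage change: {new_stage - 100.0:+.4f} ft")
```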

  15. The "Grey Zone" cold air outbreak global model intercomparison: A cross evaluation using large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Tomassini, Lorenzo; Field, Paul R.; Honnert, Rachel; Malardel, Sylvie; McTaggart-Cowan, Ron; Saitou, Kei; Noda, Akira T.; Seifert, Axel

    2017-03-01

    A stratocumulus-to-cumulus transition as observed in a cold air outbreak over the North Atlantic Ocean is compared in global climate and numerical weather prediction models and a large-eddy simulation model as part of the Working Group on Numerical Experimentation "Grey Zone" project. The focus of the project is to investigate to what degree current convection and boundary layer parameterizations behave in a scale-adaptive manner in situations where the model resolution approaches the scale of convection. Global model simulations were performed at a wide range of resolutions, with convective parameterizations turned on and off. The models successfully simulate the transition between the observed boundary layer structures, from a well-mixed stratocumulus to a deeper, partly decoupled cumulus boundary layer. There are indications that surface fluxes are generally underestimated. The amounts of cloud liquid water and cloud ice, and likely precipitation, are under-predicted, suggesting deficiencies in the strength of vertical mixing in shear-dominated boundary layers. Regulation by precipitation and mixed-phase cloud microphysical processes also plays an important role in this case. With convection parameterizations switched on, the profiles of atmospheric liquid water and cloud ice are essentially resolution-insensitive. This, however, does not imply that convection parameterizations are scale-aware. Even at the highest resolutions considered here, simulations with convective parameterizations do not converge toward the results of convection-off experiments. Convection and boundary layer parameterizations strongly interact, suggesting the need for a unified treatment of convective and turbulent mixing when addressing scale-adaptivity.

  16. Digital Modeling and Testing Research on Digging Mechanism of Deep Rootstalk Crops

    NASA Astrophysics Data System (ADS)

    Yang, Chuanhua; Xu, Ma; Wang, Zhoufei; Yang, Wenwu; Liao, Xinglong

    Digital models of the parts of a laboratory bench for digging deep rootstalk crops were established using feature-based parametric modeling technology. The virtual assembly of the laboratory bench was then carried out, yielding a complete digital model of the bench. The vibrospade, the key part of the bench, was simulated, and the parametric motion curves of the spear on the vibrospade were obtained. The results show that the spear meets the design requirements and is well suited to digging deep rootstalk crops.

  17. Use of North American and European Air Quality Networks to Evaluate Global Chemistry-Climate Modeling of Surface Ozone

    NASA Technical Reports Server (NTRS)

    Schnell, J. L.; Prather, M. J.; Josse, B.; Naik, V.; Horowitz, L. W.; Cameron-Smith, P.; Bergmann, D.; Zeng, G.; Plummer, D. A.; Sudo, K.; hide

    2015-01-01

    We test the current generation of global chemistry-climate models in their ability to simulate observed, present-day surface ozone. Models are evaluated against hourly surface ozone from 4217 stations in North America and Europe that are averaged over 1 degree by 1 degree grid cells, allowing commensurate model-measurement comparison. Models are generally biased high during all hours of the day and in all regions. Most models simulate the shape of regional summertime diurnal and annual cycles well, correctly matching the timing of hourly (approximately 15:00 local time (LT)) and monthly (mid-June) peak surface ozone abundance. The amplitude of these cycles is less successfully matched. The observed summertime diurnal range (25 ppb) is underestimated in all regions by about 7 parts per billion, and the observed seasonal range (approximately 21 parts per billion) is underestimated by about 5 parts per billion except in the most polluted regions, where it is overestimated by about 5 parts per billion. The models generally match the pattern of the observed summertime ozone enhancement, but they overestimate its magnitude in most regions. Most models capture the observed distribution of extreme episode sizes, correctly showing that about 80 percent of individual extreme events occur in large-scale, multi-day episodes of more than 100 grid cells. The models also match the observed linear relationship between episode size and a measure of episode intensity, which shows increases in ozone abundance by up to 6 parts per billion for larger-sized episodes. We conclude that the skill of the models evaluated here provides confidence in their projections of future surface ozone.

  18. Steady state operation simulation of the Francis-99 turbine by means of advanced turbulence models

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Dekterev, A.; Minakov, A.; Platonov, D.; Sentyabov, A.

    2017-01-01

    The paper presents a numerical simulation of the flow in a hydraulic turbine based on the experimental data of the second Francis-99 workshop. The calculation domain includes the wicket gate, runner and draft tube, with a rotating reference frame for the runner zone. Different turbulence models, such as k-ω SST, ζ-f and RSM, were considered. The calculations were performed with the in-house CFD code SigmaFlow. Numerical simulations for the part load, high load and best efficiency operating points were performed.

  19. Supporting observation campaigns with high resolution modeling

    NASA Astrophysics Data System (ADS)

    Klocke, Daniel; Brueck, Matthias; Voigt, Aiko

    2017-04-01

    High resolution simulation in support of measurement campaigns offers a promising and emerging way to create large-scale context for small-scale observations of clouds and precipitation processes. As these simulations include the coupling of measured small-scale processes with the circulation, they also help to integrate the research communities from modeling and observations and allow for detailed model evaluations against dedicated observations. In connection with the measurement campaign NARVAL (August 2016 and December 2013), simulations with a grid-spacing of 2.5 km for the tropical Atlantic region (9000x3300 km), with local refinement to 1.2 km for the western part of the domain, were performed using the icosahedral non-hydrostatic (ICON) general circulation model. These simulations are in turn used to drive large eddy resolving simulations with the same model for selected days in the high definition clouds and precipitation for advancing climate prediction (HD(CP)2) project. The simulations are presented with a focus on selected results showing the benefit for the scientific communities doing atmospheric measurements and numerical modeling of climate and weather. Additionally, an outlook will be given on how similar simulations will support the NAWDEX measurement campaign in the North Atlantic and the AC3 measurement campaign in the Arctic.

  20. Comparison of 3D Echocardiogram-Derived 3D Printed Valve Models to Molded Models for Simulated Repair of Pediatric Atrioventricular Valves.

    PubMed

    Scanlan, Adam B; Nguyen, Alex V; Ilina, Anna; Lasso, Andras; Cripe, Linnea; Jegatheeswaran, Anusha; Silvestro, Elizabeth; McGowan, Francis X; Mascio, Christopher E; Fuller, Stephanie; Spray, Thomas L; Cohen, Meryl S; Fichtinger, Gabor; Jolley, Matthew A

    2018-03-01

    Mastering the technical skills required to perform pediatric cardiac valve surgery is challenging in part due to limited opportunity for practice. Transformation of 3D echocardiographic (echo) images of congenitally abnormal heart valves to realistic physical models could allow patient-specific simulation of surgical valve repair. We compared materials, processes, and costs for 3D printing and molding of patient-specific models for visualization and surgical simulation of congenitally abnormal heart valves. Pediatric atrioventricular valves (mitral, tricuspid, and common atrioventricular valve) were modeled from transthoracic 3D echo images using semi-automated methods implemented as custom modules in 3D Slicer. Valve models were then both 3D printed in soft materials and molded in silicone using 3D printed "negative" molds. Using pre-defined assessment criteria, valve models were evaluated by congenital cardiac surgeons to determine suitability for simulation. Surgeon assessment indicated that the molded valves had superior material properties for the purposes of simulation compared to directly printed valves (p < 0.01). Patient-specific, 3D echo-derived molded valves are a step toward realistic simulation of complex valve repairs but require more time and labor to create than directly printed models. Patient-specific simulation of valve repair in children using such models may be useful for surgical training and simulation of complex congenital cases.

  1. Space plasma simulations; Proceedings of the Second International School for Space Simulations, Kapaa, HI, February 4-15, 1985. Parts 1 & 2

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, M. (Editor); Dutton, D. A. (Editor)

    1985-01-01

    Space plasma simulations, observations, and theories are discussed. Papers are presented on the capabilities of various types of simulation codes and simulation models. Consideration is given to plasma waves in the earth's magnetotail, outer planet magnetosphere, geospace, and the auroral and polar cap regions. Topics discussed include space plasma turbulent dissipation, the kinetics of plasma waves, wave-particle interactions, whistler mode propagation, global energy regulation, and auroral arc formation.

  2. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait.

    PubMed

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2016-06-14

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%) some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information to indicate which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models. Copyright © 2016 Elsevier Ltd. All rights reserved.
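
    The abstract does not reproduce the exact definitions of the LSI and OSI, so the snippet below is only a plausible, hypothetical form of such indices: the percent change of predicted force, averaged over the gait cycle, for the perturbed MT part (local) and for all remaining parts (overall). Array shapes and numbers are illustrative.

```python
import numpy as np

# Hedged sketch of per-parameter sensitivity indices in the spirit of the
# LSI/OSI metrics described above; the normalization is illustrative only,
# not the definitions used in the paper.

def sensitivity_indices(forces_nominal, forces_perturbed, part_index):
    """forces_*: arrays of shape (n_parts, n_time) of MT-part forces over a
    gait cycle; part_index: the MT part whose parameter was perturbed."""
    diff = np.abs(forces_perturbed - forces_nominal)
    scale = np.max(forces_nominal, axis=1, keepdims=True) + 1e-9
    rel = 100.0 * diff / scale                 # percent change per part and time
    local = rel[part_index].mean()             # effect on the perturbed part
    others = np.delete(rel, part_index, axis=0)
    overall = others.mean()                    # average effect on the other parts
    return local, overall


# Toy usage with random data: 56 MT parts, 101 gait-cycle samples.
rng = np.random.default_rng(0)
f0 = rng.uniform(0, 500, size=(56, 101))
f1 = f0 * (1 + 0.02 * rng.standard_normal(f0.shape))
lsi, osi = sensitivity_indices(f0, f1, part_index=10)
print(f"LSI-like index: {lsi:.2f}%, OSI-like index: {osi:.2f}%")
```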

  3. A feature illustration and application of azimuthal P receiver function patterns

    NASA Astrophysics Data System (ADS)

    Eckhardt, C.; Rabbel, W.

    2009-12-01

    Based on a synthetic catalog of thirty azimuthal patterns of P receiver functions for crustal structures down to thirty km depth, we have summarized and illustrated the most important azimuthal features. We have constructed five model classes encompassing (an)isotropic horizontal and dipping layers. The model classes were initialized from in situ observations of three deep reflection seismic profiles (DEKORP) with varying highly reflective zones and a spiral-shaped foliation scheme of an upper crustal borehole of the German Continental Deep Drilling Program (KTB). Up to fourteen azimuthal features were extracted from the synthetic patterns and could be grouped into an already known fundamental part, a multiple part, and an extension part. Each feature was rated with a grade (A, B, or C) indicating the type of its initialization ((an)isotropy and/or layer dipping). We have evaluated the fourteen features on the synthetic patterns to apply a hierarchical classification. From the classification of the model objects we found that nearly eighty percent of the models are well explained by the fundamental part. The hierarchical order of the model objects can be used as a template to screen real observed azimuthal patterns and find a starting model for a forward modeling or an inversion procedure. For one station of the German Regional Seismic Network (GRSN) we have evaluated the features and screened them through the template. A forward simulation of the azimuthal pattern, using the modified first model explanation found in the hierarchical order for station MOX, leads to good agreement between the real and the simulated pattern. The final 1D model could be divided into an upper crustal part (8 km deep) with an axis-of-symmetry tilt of 55° and a 20°NW trend (direction of axis tilt) and a lower crustal part (24 km thickness) with an axis-of-symmetry tilt increasing from 55° to 85° and a trend orientation of 20°SE. For the simulation we have assumed 8 and 7 percent negative P+S anisotropy for hexagonal symmetry of the upper and lower crust, respectively. From the synthetic and the real observations it is evident that additional boundaries besides the Moho discontinuity are detectable in an azimuthal representation only under certain circumstances and are obscured in the traditional radial stack.

  4. Simulation of ground-water flow in the St. Peter aquifer in an area contaminated by coal-tar derivatives, St. Louis Park, Minnesota. Water Resources Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorenz, D.L.; Stark, J.R.

    1990-01-01

    A model constructed to simulate ground-water flow in part of the Prairie du Chien-Jordan and St. Peter aquifers, St. Louis Park, Minnesota, was used to test hypotheses about the movement of ground water contaminated with coal-tar derivatives and to simulate alternatives for reducing the downgradient movement of contamination in the St. Peter aquifer. The model, constructed for a previous study, was applied to simulate the effects of current ground-water withdrawals on the potentiometric surface of the St. Peter aquifer. Model simulations predict that the multiaquifer wells have the potential to limit downgradient migration of contaminants in the St. Peter aquifer because of cones of depression created around the multiaquifer wells. Differences in vertical leakage to the St. Peter aquifer may exist in areas of bedrock valleys. Model simulations indicate that these differences are not likely to affect significantly the general patterns of ground-water flow.

  5. Object-oriented approach for gas turbine engine simulation

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Felder, James L.

    1995-01-01

    An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype code is a fully functional, general purpose engine simulation program; however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules will interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report is the component based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.
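
    As a generic illustration of the component-based, object-oriented style described above (this is not NPSS code or its API), the hypothetical sketch below shows components exposing ports, an assembly that wires them together, and a single execution pass over the flow path.

```python
# Illustrative component/port/assembly sketch; names, numbers, and the
# single-pass "solver" are hypothetical, not the NPSS architecture itself.

class Port:
    def __init__(self):
        self.state = {"mdot": 0.0, "Tt": 288.15, "Pt": 101325.0}

class Component:
    def __init__(self, name):
        self.name = name
        self.inlet, self.outlet = Port(), Port()
    def execute(self):                    # overridden by concrete components
        raise NotImplementedError

class Compressor(Component):
    def __init__(self, name, pr=10.0, eff=0.85):
        super().__init__(name)
        self.pr, self.eff = pr, eff
    def execute(self):
        gamma = 1.4
        s_in, s_out = self.inlet.state, self.outlet.state
        s_out["mdot"] = s_in["mdot"]
        s_out["Pt"] = s_in["Pt"] * self.pr
        ideal = s_in["Tt"] * self.pr ** ((gamma - 1) / gamma)
        s_out["Tt"] = s_in["Tt"] + (ideal - s_in["Tt"]) / self.eff

class Assembly:
    def __init__(self, components):
        self.components = components
    def link(self, upstream, downstream):
        downstream.inlet = upstream.outlet   # chain components by sharing ports
    def run(self):
        for c in self.components:            # one pass through the flow path
            c.execute()

comp = Compressor("HPC")
engine = Assembly([comp])
comp.inlet.state.update({"mdot": 50.0})
engine.run()
print(comp.outlet.state)
```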

  6. Numerical simulation of the effect of regular and sub-caliber projectiles on military bunkers

    NASA Astrophysics Data System (ADS)

    Jiricek, Pavel; Foglar, Marek

    2015-09-01

    One of the most demanding topics in blast and impact engineering is the modelling of projectile impact. To introduce this topic, a set of numerical simulations was undertaken. The simulations study the impact of regular and sub-calibre projectiles on Czech pre-WW2 military bunkers. The penetrations of the military objects are well documented and can be used for comparison. The numerical model consists of a part of a wall of a military object. The concrete block is subjected to impacts of a regular and a sub-calibre projectile. The model is divided into layers to simplify the evaluation of the results. The simulations are processed within ANSYS AUTODYN software. A nonlinear material model with damage and an incorporated strain-rate effect was used. The results of the numerical simulations are evaluated in terms of the damage of the concrete block. Progress of the damage is described versus time. The numerical simulation provides good agreement with the documented penetrations.

  7. Realization of planning design of mechanical manufacturing system by Petri net simulation model

    NASA Astrophysics Data System (ADS)

    Wu, Yanfang; Wan, Xin; Shi, Weixiang

    1991-09-01

    Planning design is the working out of a comprehensive long-term plan. In order to guarantee that a mechanical manufacturing system (MMS) is designed to obtain maximum economic benefit, it is necessary to carry out a reasonable planning design for the system. First, some principles of planning design for MMS are introduced. Problems of production scheduling and their decision rules for computer simulation are presented. A method for realizing each production scheduling decision rule in a Petri net model is discussed. Second, the solution of conflict rules for conflict problems arising while running the Petri net is given. Third, based on the Petri net model of the MMS, which includes part flow and tool flow, and according to the principle of minimum event time advance, a computer dynamic simulation of the Petri net model, that is, a computer dynamic simulation of the MMS, is realized. Finally, the simulation program is applied to an example, so that a planning design scheme for the MMS can be evaluated effectively.
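
    A minimal, hypothetical sketch of the "minimum event time advance" principle for a timed Petri net is given below; the place and transition names and the dictionary-order conflict rule are illustrative only, not the decision rules developed in the paper.

```python
import heapq

# Minimal timed Petri net simulator: enabled transitions consume tokens,
# their completions are queued, and simulated time always advances to the
# earliest pending completion (minimum event time advance).

class TimedPetriNet:
    def __init__(self, marking, transitions):
        # marking: {place: tokens}; transitions: {name: (inputs, outputs, delay)}
        self.marking = dict(marking)
        self.transitions = transitions
        self.events = []          # heap of (completion_time, transition_name)
        self.time = 0.0

    def enabled(self, t):
        ins, _, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= n for p, n in ins.items())

    def start_enabled(self):
        for t in self.transitions:
            # simple conflict rule: start transitions in dictionary order
            while self.enabled(t):
                ins, _, delay = self.transitions[t]
                for p, n in ins.items():
                    self.marking[p] -= n
                heapq.heappush(self.events, (self.time + delay, t))

    def run(self, t_end):
        self.start_enabled()
        while self.events and self.time <= t_end:
            self.time, t = heapq.heappop(self.events)   # minimum time advance
            _, outs, _ = self.transitions[t]
            for p, n in outs.items():
                self.marking[p] = self.marking.get(p, 0) + n
            self.start_enabled()
        return self.marking


# Toy job shop: raw parts and one machine resource; machining takes 5 time units.
net = TimedPetriNet(
    marking={"raw": 3, "machine_free": 1, "done": 0},
    transitions={"machine_part": ({"raw": 1, "machine_free": 1},
                                  {"done": 1, "machine_free": 1}, 5.0)})
print(net.run(t_end=100.0), "at t =", net.time)
```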

  8. Improved simulation method of automotive spot weld failure with an account of the mechanical properties of spot welds

    NASA Astrophysics Data System (ADS)

    Wu, H.; Meng, X. M.; Fang, R.; Huang, Y. F.; Zhan, S.

    2017-12-01

    In this paper, the microstructure and mechanical properties of spot welds were studied, and the hardness of the nugget and heat affected zone (HAZ) was measured with a metallographic microscope and a microhardness tester. The strength of the spot weld was characterized with respect to the areas of its different zones. Based on the experimental results, a CAE model of the spot weld including the HAZ structure was established, and simulation results of different lap-shear CAE models were analyzed. The results show that the spot weld model containing the HAZ performs well and is more suitable for engineering application in spot weld simulation.

  9. SIMULATION OF OZONE EFFECTS ON EIGHT TREE SPECIES AT SHENANDOAH NATIONAL PARK

    EPA Science Inventory

    As part of an assessment of potential effects of air pollutants on the vegetation of Shenandoah National Park (SHEN), we simulated the growth of eight important tree species using TREGRO, a mechanistic model of individual tree growth. Published TREGRO parameters for black cherry...

  10. Systematic approach to verification and validation: High explosive burn models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph; Scovel, Christina A.

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code, run a simulation, and generate a comparison plot showing simulated and experimental velocity gauge data. These scripts are then applied to several series of experiments and to several HE burn models. The same systematic approach is applicable to other types of material models; for example, equations of state models and material strength models.
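
    The kind of automation described above can be sketched as follows. Everything here is hypothetical: the header format, the input-deck keywords, the "hydro" executable, and the output file name are placeholders, not the actual HED format or the LANL scripts.

```python
# Hedged sketch of an automated run-and-compare script: read experiment
# metadata from a header, emit a hydro-code input deck, run the code, and
# overlay simulated and measured gauge velocities.

import subprocess
import matplotlib.pyplot as plt

def read_gauge_file(path):
    meta, rows = {}, []
    with open(path) as f:
        for line in f:
            if line.startswith("#"):               # "# key = value" header lines
                key, _, val = line[1:].partition("=")
                meta[key.strip()] = val.strip()
            elif line.strip():
                rows.append([float(x) for x in line.split(",")])
    return meta, rows                               # rows: [time_us, u_km_s]

def write_input_deck(meta, path="run.in"):
    with open(path, "w") as f:                      # hypothetical deck keywords
        f.write(f"explosive {meta['explosive']}\n")
        f.write(f"density   {meta['density_g_cc']}\n")
        f.write(f"impact_velocity {meta['impact_km_s']}\n")

def run_and_compare(data_path):
    meta, exp = read_gauge_file(data_path)
    write_input_deck(meta)
    subprocess.run(["hydro", "run.in"], check=True)   # hypothetical hydro-code CLI
    _, sim = read_gauge_file("gauges_out.csv")        # hypothetical output file
    for label, rows in (("experiment", exp), ("simulation", sim)):
        t, u = zip(*rows)
        plt.plot(t, u, label=label)
    plt.xlabel("time (µs)"); plt.ylabel("particle velocity (km/s)")
    plt.legend(); plt.savefig(meta["explosive"] + "_compare.png")

# run_and_compare("pbx_gauge01.dat")   # hypothetical HED-style data file
```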

  11. A SIMULINK environment for flight dynamics and control analysis: Application to the DHC-2 Beaver. Part 1: Implementation of a model library in SIMULINK. Part 2: Nonlinear analysis of the Beaver autopilot

    NASA Technical Reports Server (NTRS)

    Rauw, Marc O.

    1993-01-01

    The design of advanced Automatic Aircraft Control Systems (AACS's) can be improved upon considerably if the designer can access all models and tools required for control system design and analysis through a graphical user interface, from within one software environment. This MSc-thesis presents the first step in the development of such an environment, which is currently being done at the Section for Stability and Control of Delft University of Technology, Faculty of Aerospace Engineering. The environment is implemented within the commercially available software package MATLAB/SIMULINK. The report consists of two parts. Part 1 gives a detailed description of the AACS design environment. The heart of this environment is formed by the SIMULINK implementation of a nonlinear aircraft model in block-diagram format. The model has been worked out for the old laboratory aircraft of the Faculty, the DeHavilland DHC-2 'Beaver', but due to its modular structure, it can easily be adapted for other aircraft. Part 1 also describes MATLAB programs which can be applied for finding steady-state trimmed-flight conditions and for linearization of the aircraft model, and it shows how the built-in simulation routines of SIMULINK have been used for open-loop analysis of the aircraft dynamics. Apart from the implementation of the models and tools, a thorough treatment of the theoretical backgrounds is presented. Part 2 of this report presents a part of an autopilot design process for the 'Beaver' aircraft, which clearly demonstrates the power and flexibility of the AACS design environment from part 1. Evaluations of all longitudinal and lateral control laws by means of nonlinear simulations are treated in detail. A floppy disk containing all relevant MATLAB programs and SIMULINK models is provided as a supplement.

  12. Short stack modeling of degradation in solid oxide fuel cells. Part II. Sensitivity and interaction analysis

    NASA Astrophysics Data System (ADS)

    Gazzarri, J. I.; Kesler, O.

    In the first part of this two-paper series, we presented a numerical model of the impedance behaviour of a solid oxide fuel cell (SOFC) aimed at simulating the change in the impedance spectrum induced by contact degradation at the interconnect-electrode, and at the electrode-electrolyte interfaces. The purpose of that investigation was to develop a non-invasive diagnostic technique to identify degradation modes in situ. In the present paper, we appraise the predictive capabilities of the proposed method in terms of its robustness to uncertainties in the input parameters, many of which are very difficult to measure independently. We applied this technique to the degradation modes simulated in Part I, in addition to anode sulfur poisoning. Electrode delamination showed the highest robustness to input parameter variations, followed by interconnect oxidation and interconnect detachment. The most sensitive degradation mode was sulfur poisoning, due to strong parameter interactions. In addition, we simulate several simultaneous two-degradation-mode scenarios, assessing the method's capabilities and limitations for the prediction of electrochemical behaviour of SOFC's undergoing multiple simultaneous degradation modes.

  13. INTEGRATION OF COST MODELS AND PROCESS SIMULATION TOOLS FOR OPTIMUM COMPOSITE MANUFACTURING PROCESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pack, Seongchan; Wilson, Daniel; Aitharaju, Venkat

    Manufacturing cost of resin transfer molded composite parts is significantly influenced by the cycle time, which is strongly related to the time for both filling and curing of the resin in the mold. The time for filling can be optimized by various injection strategies, and by suitably reducing the length of the resin flow distance during the injection. The curing time can be reduced by the usage of faster curing resins, but it requires high pressure injection equipment, which is capital intensive. Predictive manufacturing simulation tools that have recently been developed for composite materials are able to provide various scenarios of processing conditions virtually, well in advance of manufacturing the parts. In the present study, we integrate the cost models with process simulation tools to study the influence of various parameters such as injection strategies, injection pressure, compression control to minimize high pressure injection, resin curing rate, and demold time on the manufacturing cost as affected by the annual part volume. A representative automotive component was selected for the study and the results are presented in this paper.
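
    A toy illustration of coupling a simulated cycle time to a cost-per-part estimate is sketched below; all cost coefficients, rates, and volumes are invented placeholders, not values or models from the study.

```python
# Hedged sketch: part cost as a function of cycle time (from process
# simulation) and annual volume (over which capital is amortized).

def cycle_time_s(fill_time_s, cure_time_s, demold_time_s):
    return fill_time_s + cure_time_s + demold_time_s

def cost_per_part(cycle_s, annual_volume,
                  material_cost=45.0,          # $/part, placeholder
                  machine_rate=120.0,          # $/h of press occupancy, placeholder
                  capital_cost=1.5e6,          # $ of injection equipment, placeholder
                  amortization_years=5):
    process_cost = machine_rate * cycle_s / 3600.0
    capital_per_part = capital_cost / (amortization_years * annual_volume)
    return material_cost + process_cost + capital_per_part


# Compare a slow-cure / low-capital scenario with a fast-cure / high-capital one.
for label, cure_s, capital in [("slow cure", 600.0, 2.0e5),
                               ("fast cure", 120.0, 1.5e6)]:
    c = cost_per_part(cycle_time_s(90.0, cure_s, 60.0), annual_volume=50_000,
                      capital_cost=capital)
    print(f"{label}: ${c:.2f} per part")
```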

  14. Phase-field Model for Interstitial Loop Growth Kinetics and Thermodynamic and Kinetic Models of Irradiated Fe-Cr Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yulan; Hu, Shenyang Y.; Sun, Xin

    2011-06-15

    Microstructure evolution kinetics in irradiated materials has a strong spatial correlation. For example, voids and second phases prefer to nucleate and grow at pre-existing defects such as dislocations, grain boundaries, and cracks. Inhomogeneous microstructure evolution results in inhomogeneity of microstructure and thermo-mechanical properties. Therefore, the simulation capability for predicting three dimensional (3-D) microstructure evolution kinetics and its subsequent impact on material properties and performance is crucial for the scientific design of advanced nuclear materials and optimal operation conditions in order to reduce uncertainty in operational and safety margins. Very recently the meso-scale phase-field (PF) method has been used to predict gas bubble evolution, void swelling, void lattice formation and void migration in irradiated materials. Although most results of phase-field simulations are qualitative due to the lack of accurate thermodynamic and kinetic properties of defects, the possible omission of important kinetic properties and processes, and the limited capability of current codes and computers for large time and length scale modeling, the simulations demonstrate that the PF method is a promising simulation tool for predicting 3-D heterogeneous microstructure and property evolution, and for providing microstructure evolution kinetics for higher scale level simulations of microstructure and property evolution such as mean field methods. This report consists of two parts. In part I, we will present a new phase-field model for predicting interstitial loop growth kinetics in irradiated materials. The effect of defect (vacancy/interstitial) generation, diffusion and recombination, sink strength, long-range elastic interaction, and inhomogeneous and anisotropic mobility on microstructure evolution kinetics is taken into account in the model. The model is used to study the effect of elastic interaction on interstitial loop growth kinetics, the interstitial flux, and the sink strength of interstitial loops for interstitials. In part II, we present a generic phase field model and discuss the thermodynamic and kinetic properties in phase-field models, including the reaction kinetics of radiation defects and the local free energy of irradiated materials. In particular, a two-sublattice thermodynamic model is suggested to describe the local free energy of alloys with irradiation defects. Fe-Cr alloy is taken as an example to explain the required thermodynamic and kinetic properties for quantitative phase-field modeling. Finally, the great challenges in phase-field modeling will be discussed.

  15. Integrated modeling of temperature and rotation profiles in JET ITER-like wall discharges

    NASA Astrophysics Data System (ADS)

    Rafiq, T.; Kritz, A. H.; Kim, Hyun-Tae; Schuster, E.; Weiland, J.

    2017-10-01

    Simulations of 78 JET ITER-like wall D-D discharges and 2 D-T reference discharges are carried out using the TRANSP predictive integrated modeling code. The time evolved temperature and rotation profiles are computed utilizing the Multi-Mode anomalous transport model. The discharges involve a broad range of conditions including scans over gyroradius, collisionality, and values of q95. The D-T reference discharges are selected in anticipation of the D-T experimental campaign planned at JET in 2019. The simulated temperature and rotation profiles are compared with the corresponding experimental profiles in the radial range from the magnetic axis to the ρ = 0.9 flux surface. The comparison is quantified by calculating the RMS deviations and offsets. Overall, good agreement is found between the profiles produced in the simulations and the experimental data. It is planned that the simulations obtained using the Multi-Mode model will be compared with simulations using the TGLF model. Research supported in part by the US DoE Office of Science.
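
    The two comparison metrics mentioned above can be computed straightforwardly; the sketch below uses one plausible definition (the study's exact normalization may differ), with purely illustrative numbers.

```python
import numpy as np

# Minimal sketch of RMS deviation and offset between simulated and measured
# profiles sampled on a common radial grid (axis to rho = 0.9).

def rms_and_offset(simulated, experimental):
    diff = simulated - experimental
    rms = np.sqrt(np.mean(diff ** 2))   # root-mean-square deviation
    offset = np.mean(diff)              # signed mean bias
    return rms, offset

te_sim = np.array([4.8, 4.1, 3.2, 2.1, 1.0])    # keV, illustrative values
te_exp = np.array([5.0, 4.0, 3.0, 2.0, 1.1])
print(rms_and_offset(te_sim, te_exp))
```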

  16. In vitro bacteriological study of a new hub model for intravascular catheters and infusion equipment.

    PubMed Central

    Segura, M; Alía, C; Oms, L; Sancho, J J; Torres-Rodríguez, J M; Sitges-Serra, A

    1989-01-01

    We investigated in vitro the antibacterial properties of a simulated new hub model in which the female part has an antiseptic chamber through which the needle (male part) must pass before connection of the set and the catheter. To establish the time needed for disinfection, the magnitude of reduction of the contaminating inocula by the new hub model, and the antibacterial properties of the different components of the hub, we used needles contaminated with solutions containing high inocula (1.9 x 10^7 to 1.2 x 10^11 CFU/ml) of microorganisms involved in hub-related catheter sepsis. Sterilization of the needles was accomplished by allowing them to remain in the antiseptic chamber for 10 s in all assays with Staphylococcus epidermidis, Pseudomonas aeruginosa, Escherichia coli, and Candida albicans. The rubber closures limiting the antiseptic chamber and the dilution effect of the antiseptic itself accounted for a minor part of the inoculum reduction achieved by the new hub model. This simulated hub provides good protection against endoluminal contamination. Further studies seem warranted to prove its industrial viability and clinical efficacy. PMID:2512322

  17. Thermo-mechanical simulations of early-age concrete cracking with durability predictions

    NASA Astrophysics Data System (ADS)

    Havlásek, Petr; Šmilauer, Vít; Hájková, Karolina; Baquerizo, Luis

    2017-09-01

    Concrete performance is strongly affected by mix design, thermal boundary conditions, its evolving mechanical properties, and internal/external restraints with consequences to possible cracking with impaired durability. Thermo-mechanical simulations are able to capture those relevant phenomena and boundary conditions for predicting temperature, strains, stresses or cracking in reinforced concrete structures. In this paper, we propose a weakly coupled thermo-mechanical model for early age concrete with an affinity-based hydration model for thermal part, taking into account concrete mix design, cement type and thermal boundary conditions. The mechanical part uses B3/B4 model for concrete creep and shrinkage with isotropic damage model for cracking, able to predict a crack width. All models have been implemented in an open-source OOFEM software package. Validations of thermo-mechanical simulations will be presented on several massive concrete structures, showing excellent temperature predictions. Likewise, strain validation demonstrates good predictions on a restrained reinforced concrete wall and concrete beam. Durability predictions stem from induction time of reinforcement corrosion, caused by carbonation and/or chloride ingress influenced by crack width. Reinforcement corrosion in concrete struts of a bridge will serve for validation.

  18. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Kevin

    Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. In this study, to generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, prediction of the overall mass transfer coefficient using traditional/empirical correlations is conducted and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. Finally, the calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.
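
    As a generic illustration of this kind of Bayesian calibration (not the WWC model, the MEA kinetics, or the algorithm actually used in the study), the sketch below calibrates a single rate constant of a toy absorption-flux model with random-walk Metropolis sampling.

```python
import numpy as np

# Generic random-walk Metropolis sketch of Bayesian rate-constant calibration.
# The forward model, prior, and data are toy stand-ins, not the study's model.

rng = np.random.default_rng(1)

def forward_model(log_k, driving_force=1.0):
    # toy absorption flux ~ sqrt(k) * driving force (pseudo-first-order film idea)
    return np.sqrt(np.exp(log_k)) * driving_force

obs = forward_model(np.log(5.0)) + rng.normal(0, 0.05, size=20)  # synthetic data
sigma, prior_mu, prior_sd = 0.05, np.log(1.0), 2.0

def log_post(log_k):
    ll = -0.5 * np.sum((obs - forward_model(log_k)) ** 2) / sigma ** 2
    lp = -0.5 * ((log_k - prior_mu) / prior_sd) ** 2
    return ll + lp

chain, x = [], np.log(1.0)
for _ in range(20_000):
    prop = x + rng.normal(0, 0.1)                 # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(x):
        x = prop                                  # accept
    chain.append(x)

k_samples = np.exp(chain[5_000:])                 # discard burn-in
print(f"posterior k: {k_samples.mean():.2f} +/- {k_samples.std():.2f}")
```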

  19. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    DOE PAGES

    Wang, Chao; Xu, Zhijie; Lai, Kevin; ...

    2017-10-24

    Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. In this study, to generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, prediction of the overall mass transfer coefficient using traditional/empirical correlations is conducted and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. Finally, the calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.

  20. Evaluation of cloud-resolving and limited area model intercomparison simulations using TWP-ICE observations: 1. Deep convective updraft properties: Eval. of TWP-ICE CRMs and LAMs Pt. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varble, Adam; Zipser, Edward J.; Fridlind, Ann M.

    2014-12-18

    Ten 3D cloud-resolving model (CRM) simulations and four 3D limited area model (LAM) simulations of an intense mesoscale convective system observed on 23-24 January 2006 during the Tropical Warm Pool – International Cloud Experiment (TWP-ICE) are compared with each other and with observed radar reflectivity fields and dual-Doppler retrievals of vertical wind speeds in an attempt to explain published results showing a high bias in simulated convective radar reflectivity aloft. This high bias results from ice water content being large, which is a product of large, strong convective updrafts, although hydrometeor size distribution assumptions modulate the size of this bias. Making snow mass more realistically proportional to D^2 rather than D^3 eliminates unrealistically large snow reflectivities over 40 dBZ in some simulations. Graupel, unlike snow, produces high biased reflectivity in all simulations, which is partly a result of parameterized microphysics, but also partly a result of overly intense simulated updrafts. Peak vertical velocities in deep convective updrafts are greater than dual-Doppler retrieved values, especially in the upper troposphere. Freezing of liquid condensate, often rain, lofted above the freezing level in simulated updraft cores greatly contributes to these excessive upper tropospheric vertical velocities. The strongest simulated updraft cores are nearly undiluted, with some of the strongest showing supercell characteristics during the multicellular (pre-squall) stage of the event. Decreasing horizontal grid spacing from 900 to 100 meters slightly weakens deep updraft vertical velocity and moderately decreases the amount of condensate aloft, but not enough to match observational retrievals. Therefore, overly intense simulated updrafts may additionally be a product of unrealistic interactions between convective dynamics, parameterized microphysics, and the large-scale model forcing that promote different convective strengths than observed.

  1. The Importance of Precise Digital Elevation Models (DEM) in Modelling Floods

    NASA Astrophysics Data System (ADS)

    Demir, Gokben; Akyurek, Zuhal

    2016-04-01

    Digital Elevation Models (DEMs) are important inputs representing topography for the accurate modelling of floodplain hydrodynamics. Floodplains have a key role as natural retarding pools which attenuate flood waves and suppress flood peaks. GPS, LIDAR and bathymetric surveys are well known surveying methods to acquire topographic data. Obtaining topographic data through surveying is not only time consuming and expensive but also sometimes impossible for remote areas. This study aims to present the importance of accurate modelling of topography for flood modelling. The flood modelling is carried out for Samsun-Terme in the Black Sea region of Turkey. One of the DEMs is obtained from point observations retrieved from 1/5000 scaled orthophotos and 1/1000 scaled point elevation data from field surveys at cross-sections. The river banks are corrected by using the orthophotos and elevation values. This DEM is named the scaled DEM. The other DEM is obtained from bathymetric surveys. 296,538 points and the left/right bank slopes were used to construct a DEM with 1 m spatial resolution, named the base DEM. The two DEMs were compared at 27 cross-sections. The maximum difference at the thalweg of the river bed is 2 m and the minimum difference is 20 cm between the two DEMs. The channel conveyance capacity in the base DEM is larger than that in the scaled DEM, and the floodplain is modelled in more detail in the base DEM. MIKE21 with a flexible grid is used for 2-dimensional shallow water flow modelling. The models using the two DEMs were calibrated for a flood event (July 9, 2012), with roughness considered as the calibration parameter. From comparison of the input hydrograph at the upstream end of the river and the output hydrograph at the downstream end, the attenuation is obtained as 91% and 84% for the base DEM and the scaled DEM, respectively. The time lag in the hydrographs does not show any difference between the two DEMs and is obtained as 3 hours. Maximum flood extents differ for the two DEMs; a larger flooded area is simulated with the scaled DEM. The main difference is observed for the braided and meandering parts of the river. For the meandering part of the river, an additional 1.82 x 10^6 m3 of water (5% of the total volume) is calculated as the flooded volume simulated using the scaled DEM. For the braided stream part, 0.187 x 10^6 m3 more water is simulated as the flooded volume by the scaled DEM. The flood extent around the braided part of the river is 27.6 ha larger in the simulated flood map obtained from the scaled DEM compared to the one obtained from the base DEM. Around the meandering part of the river, the scaled DEM gave 59.8 ha more flooded area. The importance of correct topography of the braided and meandering parts of the river in flood modelling, and the uncertainty it brings to modelling, are discussed in detail.
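
    The peak attenuation and lag figures quoted above follow from a simple comparison of the upstream and downstream hydrographs; the sketch below uses synthetic hydrographs, not the Terme data.

```python
import numpy as np

# Peak attenuation and lag time from inflow/outflow hydrographs (synthetic data).

def attenuation_and_lag(t, q_in, q_out):
    atten = 1.0 - q_out.max() / q_in.max()          # fraction of the peak removed
    lag = t[np.argmax(q_out)] - t[np.argmax(q_in)]  # hours between the two peaks
    return atten, lag

t = np.arange(0, 48, 1.0)                            # hours
q_in = 400 * np.exp(-0.5 * ((t - 12) / 3) ** 2)      # m^3/s, synthetic inflow
q_out = 60 * np.exp(-0.5 * ((t - 15) / 6) ** 2)      # m^3/s, synthetic outflow
atten, lag = attenuation_and_lag(t, q_in, q_out)
print(f"attenuation: {atten:.0%}, lag: {lag:.0f} h")
```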

  2. Climate Simulations based on a different-grid nested and coupled model

    NASA Astrophysics Data System (ADS)

    Li, Dan; Ji, Jinjun; Li, Yinpeng

    2002-05-01

    An atmosphere-vegetation interaction model (AVIM) has been coupled with a nine-layer General Circulation Model (GCM) of the Institute of Atmospheric Physics/State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (IAP/LASG), which is rhomboidally truncated at zonal wave number 15, to simulate global climatic mean states. AVIM is a model with inter-feedback between land surface processes and eco-physiological processes on land. As the first step toward coupling land with the atmosphere completely, the physiological processes are fixed and only the physical part (generally named the SVAT (soil-vegetation-atmosphere-transfer scheme) model) of AVIM is nested into the IAP/LASG L9R15 GCM. The ocean part of the GCM is prescribed and its monthly sea surface temperature (SST) is the climatic mean value. With respect to the low resolution of the GCM, i.e., each grid cell spanning 7.5° of longitude and 4.5° of latitude, the vegetation is given a high resolution of 1.5° by 1.5° to nest and couple the fine grid cells of land with the coarse grid cells of the atmosphere. The coupled model has been integrated for 15 years and the mean of its last ten years of output was chosen for analysis. Compared with observed data and the NCEP reanalysis, the coupled model simulates the main characteristics of the global atmospheric circulation and the fields of temperature and moisture. In particular, the simulated precipitation and surface air temperature agree well with observations. The work creates a solid base for coupling climate models with the biosphere.

  3. Exploring uncertainty in the radiative budget of the Antarctic atmospheric boundary layer at Dome C

    NASA Astrophysics Data System (ADS)

    Veron, D. E.; Schroth, A.; Genthon, C.; Vignon, E.

    2017-12-01

    In the past two decades, significant advances have been made in observing and modeling the atmospheric boundary layer processes over the Eastern Antarctic plateau. However, there are gaps in understanding related to the radiative and moisture budgets in the very bottom of the ABL. Since 2009, continuous meteorological observations have been made at 6 heights in the bottom 40 m of the atmosphere as part of the CALibration and VAlidation of meteorological and climate models and satellite retrievals (CALVA) campaign to improve understanding of the atmospheric state over Dome C. A recent case study that is part of the GEWEX Atmospheric Boundary Layer Study, GABLS4, has also focused on the ability of models to simulate stable summertime boundary layers at the same location. As part of the intercomparison, a model derived summertime climatology based on 10 years of PolarWRF simulations over the Eastern Antarctic plateau was developed. Comparisons between these simulations and data from the CALVA campaign suggest that PolarWRF is not capturing the small-scale variations in the longwave heating rate profile near the surface, and so predicts biased surface temperatures relative to observations. Additional work suggests that modifications of the surface snow representations may also be needed.

  4. Effect of aerosol feedback in the Korea Peninsula using WRF-CMAQ two-way coupled model

    NASA Astrophysics Data System (ADS)

    Yoo, J.; Jeon, W.; Lee, H.; Lee, S.

    2017-12-01

    Aerosols influence the climate system by scattering and absorbing solar radiation and by altering cloud radiative properties. For this reason, consideration of aerosol feedback is important in numerical weather prediction and air quality models. The purpose of this study was to investigate the effect of aerosol feedback on PM10 simulation over the Korean Peninsula using the Weather Research and Forecasting (WRF) and Community Multiscale Air Quality (CMAQ) two-way coupled model. Simulations were conducted with the aerosol feedback (FB) and without it (NFB). The simulated solar radiation in the western part of Korea decreased due to the aerosol feedback effect. The feedback effect was significant in the western part of the Korean Peninsula, which shows high particulate matter (PM) estimates due to dense emissions and long-range transport from China. The decrease of solar radiation led to a reduction of the planetary boundary layer (PBL) height, which suppressed the dispersion of air pollutants such as PM and resulted in higher PM concentrations. These results indicate that aerosol feedback effects can play an important role in the simulation of meteorology and air quality over the Korean Peninsula.

  5. Assimilation of snow covered area information into hydrologic and land-surface models

    USGS Publications Warehouse

    Clark, M.P.; Slater, A.G.; Barrett, A.P.; Hay, L.E.; McCabe, G.J.; Rajagopalan, B.; Leavesley, G.H.

    2006-01-01

    This paper describes a data assimilation method that uses observations of snow covered area (SCA) to update hydrologic model states in a mountainous catchment in Colorado. The assimilation method uses SCA information as part of an ensemble Kalman filter to alter the sub-basin distribution of snow as well as the basin water balance. This method permits an optimal combination of model simulations and observations, as well as propagation of information across model states. Sensitivity experiments are conducted with a fairly simple snowpack/water-balance model to evaluate effects of the data assimilation scheme on simulations of streamflow. The assimilation of SCA information results in minor improvements in the accuracy of streamflow simulations near the end of the snowmelt season. The small effect from SCA assimilation is initially surprising. It can be explained both because a substantial portion of snow melts before any bare ground is exposed, and because the transition from 100% to 0% snow coverage occurs fairly quickly. Both of these factors are basin-dependent. Satellite SCA information is expected to be most useful in basins where snow cover is ephemeral. The data assimilation strategy presented in this study improved the accuracy of the streamflow simulation, indicating that SCA is a useful source of independent information that can be used as part of an integrated data assimilation strategy. © 2005 Elsevier Ltd. All rights reserved.
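
    A minimal, hypothetical sketch of this kind of ensemble Kalman filter update is given below: sub-basin snow water equivalent (SWE) states are nudged toward an observed snow covered area through ensemble covariances. The state layout, observation operator, and numbers are illustrative and are not the study's model.

```python
import numpy as np

# Stochastic ensemble Kalman filter update of sub-basin SWE against an SCA
# observation.  Everything here (state layout, operator, values) is a toy.

rng = np.random.default_rng(42)

def enkf_update(X, y_obs, obs_var, H):
    """X: (n_state, n_ens) forecast ensemble; H maps a state vector to an
    observation-space scalar."""
    n_ens = X.shape[1]
    Hx = np.array([H(X[:, i]) for i in range(n_ens)])       # predicted obs
    A = X - X.mean(axis=1, keepdims=True)                    # state anomalies
    Ha = Hx - Hx.mean()                                       # obs anomalies
    P_xy = A @ Ha / (n_ens - 1)                               # cross covariance
    P_yy = Ha @ Ha / (n_ens - 1) + obs_var
    K = P_xy / P_yy                                           # Kalman gain
    y_pert = y_obs + rng.normal(0, np.sqrt(obs_var), n_ens)   # perturbed obs
    return X + np.outer(K, y_pert - Hx)

def sca_from_swe(swe, threshold=5.0):
    # crude observation operator: fraction of sub-basins with SWE above a cutoff
    return np.mean(swe > threshold)

X = rng.gamma(1.0, 8.0, size=(20, 50))     # 20 sub-basins, 50 members (mm SWE)
X_a = enkf_update(X, y_obs=0.4, obs_var=0.01, H=sca_from_swe)
print("prior SCA:", round(float(sca_from_swe(X.mean(axis=1))), 2),
      "posterior SCA:", round(float(sca_from_swe(X_a.mean(axis=1))), 2))
```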

  6. A New Look at Stratospheric Sudden Warmings. Part II: Evaluation of Numerical Model Simulations

    NASA Technical Reports Server (NTRS)

    Charlton, Andrew J.; Polvani, Lorenza M.; Perlwitz, Judith; Sassi, Fabrizio; Manzini, Elisa; Shibata, Kiyotaka; Pawson, Steven; Nielsen, J. Eric; Rind, David

    2007-01-01

    The simulation of major midwinter stratospheric sudden warmings (SSWs) in six stratosphere-resolving general circulation models (GCMs) is examined. The GCMs are compared to a new climatology of SSWs, based on the dynamical characteristics of the events. First, the number, type, and temporal distribution of SSW events are evaluated. Most of the models show a lower frequency of SSW events than the climatology, which has a mean frequency of 6.0 SSWs per decade. Statistical tests show that three of the six models produce significantly fewer SSWs than the climatology, between 1.0 and 2.6 SSWs per decade. Second, four process-based diagnostics are calculated for all of the SSW events in each model. It is found that SSWs in the GCMs compare favorably with dynamical benchmarks for SSW established in the first part of the study. These results indicate that GCMs are capable of quite accurately simulating the dynamics required to produce SSWs, but with lower frequency than the climatology. Further dynamical diagnostics hint that, in at least one case, this is due to a lack of meridional heat flux in the lower stratosphere. Even though the SSWs simulated by most GCMs are dynamically realistic when compared to the NCEP-NCAR reanalysis, the reasons for the relative paucity of SSWs in GCMs remains an important and open question.

  7. How to teach emergency procedural skills in an outdoor environment using low-fidelity simulation.

    PubMed

    Saxon, Kathleen D; Kapadia, Alison P R; Juneja, Nadia S; Bassin, Benjamin S

    2014-03-01

    Teaching emergency procedural skills in a wilderness setting can be logistically challenging. To teach these skills as part of a wilderness medicine elective for medical students, we designed an outdoor simulation session with low-fidelity models. The session involved 6 stations in which procedural skills were taught using homemade low-fidelity simulators. At each station, the students encountered a "victim," who required an emergency procedure that was performed using the low-fidelity model. The models are easy and inexpensive to construct, and their design and implementation in the session is described here. Using low-fidelity simulation models in an outdoor setting is an effective teaching tool for emergency wilderness medicine procedures and can easily be reproduced in future wilderness medicine courses. © 2014 Wilderness Medical Society. Published by Wilderness Medical Society. All rights reserved.

  8. Computer Simulations and Theoretical Studies of Complex Systems: from complex fluids to frustrated magnets

    NASA Astrophysics Data System (ADS)

    Choi, Eunsong

    Computer simulations are an integral part of research in modern condensed matter physics; they serve as a direct bridge between theory and experiment by systematically applying a microscopic model to a collection of particles that effectively imitate a macroscopic system. In this thesis, we study two very different condensed systems, namely complex fluids and frustrated magnets, primarily by simulating the classical dynamics of each system. In the first part of the thesis, we focus on ionic liquids (ILs) and polymers--the two complementary classes of materials that can be combined to provide various unique properties. The properties of polymer/IL systems, such as conductivity, viscosity, and miscibility, can be fine tuned by choosing an appropriate combination of cations, anions, and polymers. However, designing a system that meets a specific need requires a concrete understanding of the physics and chemistry that dictate a complex interplay between polymers and ionic liquids. In this regard, molecular dynamics (MD) simulation is an efficient tool that provides a molecular level picture of such complex systems. We study the behavior of poly(ethylene oxide) (PEO) and imidazolium-based ionic liquids using MD simulations and statistical mechanics. We also discuss our efforts to develop reliable and efficient classical force-fields for PEO and the ionic liquids. The second part is devoted to studies of geometrically frustrated magnets. In particular, a microscopic model which gives rise to an incommensurate spiral magnetic ordering observed in a pyrochlore antiferromagnet is investigated. The validation of the model is made via a comparison of the spin-wave spectra with the neutron scattering data. Since the standard Holstein-Primakoff method is difficult to employ in such a complex ground state structure with a large unit cell, we carry out classical spin dynamics simulations to compute spin-wave spectra directly from the Fourier transform of spin trajectories. We conclude the study by showing an excellent agreement between the simulation and the experiment.

  9. Biotrickling filter modeling for styrene abatement. Part 2: Simulating a two-phase partitioning bioreactor.

    PubMed

    San-Valero, Pau; Dorado, Antonio D; Quijano, Guillermo; Álvarez-Hornos, F Javier; Gabaldón, Carmen

    2018-01-01

    A dynamic model describing styrene abatement was developed for a two-phase partitioning bioreactor operated as a biotrickling filter (TPPB-BTF). The model was built as a coupled set of two different systems of partial differential equations depending on whether an irrigation or a non-irrigation period was simulated. The maximum growth rate was previously calibrated from a conventional BTF treating styrene (Part 1). The model was extended to simulate the TPPB-BTF based on the hypothesis that the main change associated with the non-aqueous phase is the modification of the pollutant properties in the liquid phase. The three phases considered were gas, a water-silicone liquid mixture, and biofilm. The selected calibration parameters were related to the physical properties of styrene: Henry's law constant, diffusivity, and the gas-liquid mass transfer coefficient. A sensitivity analysis revealed that Henry's law constant was the most sensitive parameter. The model was successfully calibrated with a goodness of fit of 0.94. It satisfactorily simulated the performance of the TPPB-BTF at styrene loads ranging from 13 to 77 g C m⁻³ h⁻¹ and empty bed residence times of 30-15 s, with the mass transfer enhanced by a factor of 1.6. The model was validated with data obtained in a TPPB-BTF removing styrene continuously. The experimental outlet emissions associated with oscillating inlet concentrations were satisfactorily predicted by using the calibrated parameters. Model simulations demonstrated the potential improvement of the mass-transfer performance of a conventional BTF degrading styrene by adding silicone oil. Copyright © 2017 Elsevier Ltd. All rights reserved.
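
    The hypothesis above, that the non-aqueous phase mainly modifies the pollutant's liquid-phase properties, is often expressed through an effective Henry coefficient for the liquid mixture. The sketch below shows one such volume-averaged mixing rule with illustrative numbers; it is not the calibrated model from the paper.

```python
# Hedged sketch: effective dimensionless Henry coefficient (H = C_gas/C_liquid)
# of a water-silicone mixture, assuming gas-liquid equilibrium with each phase.
# Values and the mixing rule itself are illustrative, not taken from the study.

def effective_henry(H_water, H_silicone, f_silicone):
    """f_silicone is the silicone oil volume fraction of the liquid phase."""
    f_water = 1.0 - f_silicone
    # Total liquid concentration is the volume-weighted sum of phase
    # concentrations, each in equilibrium with the same gas concentration.
    return 1.0 / (f_water / H_water + f_silicone / H_silicone)

H_w, H_s = 0.11, 0.003        # styrene in water and in silicone oil, illustrative
for f in (0.0, 0.05, 0.10):
    print(f"silicone fraction {f:.2f}: H_eff = {effective_henry(H_w, H_s, f):.4f}")
```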

  10. Dynamics of basaltic glass dissolution - Capturing microscopic effects in continuum scale models

    NASA Astrophysics Data System (ADS)

    Aradóttir, E. S. P.; Sigfússon, B.; Sonnenthal, E. L.; Björnsson, G.; Jónsson, H.

    2013-11-01

    The method of 'multiple interacting continua' (MINC) was applied to include microscopic rate-limiting processes in continuum-scale reactive transport models of basaltic glass dissolution. The MINC method involves dividing the system into ambient fluid and grains, using a specific surface area to describe the interface between the two. The various grains and regions within grains can then be described by dividing them into continua separated by dividing surfaces. Millions of grains can thus be considered within the method without the need to explicitly discretize them. Four continua were used to describe a dissolving basaltic glass grain; the first describes the ambient fluid around the grain, while the second, third and fourth continua refer to a diffusive leached layer, the dissolving part of the grain and the inert part of the grain, respectively. The model was validated using the TOUGHREACT simulator and data from column flow-through experiments of basaltic glass dissolution at low, neutral and high pH values. Successful reactive transport simulations of the experiments and overall adequate agreement between measured and simulated values provide validation that the MINC approach can be applied for incorporating microscopic effects in continuum-scale basaltic glass dissolution models. Equivalent models can be used when simulating dissolution and alteration of other minerals. The study provides an example of how numerical modeling and experimental work can be combined to enhance understanding of mechanisms associated with basaltic glass dissolution. Column outlet concentrations indicated that basaltic glass dissolves stoichiometrically at pH 3. Predictive simulations with the developed MINC model indicated significant precipitation of secondary minerals within the column at neutral and high pH, explaining the observed non-stoichiometric outlet concentrations at these pH levels. Clay, zeolite and hydroxide precipitation was predicted to be most abundant within the column.

  11. The GEOS-5 Atmospheric General Circulation Model: Mean Climate and Development from MERRA to Fortuna

    NASA Technical Reports Server (NTRS)

    Molod, Andrea; Takacs, Lawrence; Suarez, Max; Bacmeister, Julio; Song, In-Sun; Eichmann, Andrew

    2012-01-01

    This report is a documentation of the Fortuna version of the GEOS-5 Atmospheric General Circulation Model (AGCM). The GEOS-5 AGCM is currently in use in the NASA Goddard Modeling and Assimilation Office (GMAO) for simulations at a wide range of resolutions, in atmosphere-only, coupled ocean-atmosphere, and data assimilation modes. The focus here is on the development subsequent to the version that was used as part of NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). We present here the results of a series of 30-year atmosphere-only simulations at different resolutions, with a focus on the behavior of the 1-degree resolution simulation. The details of the changes in parameterizations subsequent to the MERRA model version are outlined, and results of a series of 30-year, atmosphere-only climate simulations at 2-degree resolution are shown to demonstrate changes in simulated climate associated with specific changes in parameterizations. The GEOS-5 AGCM presented here is the model used for the GMAO's atmosphere-only and coupled CMIP-5 simulations.

  12. A Storm Surge and Inundation Model of the Back River Watershed at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Loftis, Jon Derek; Wang, Harry V.; DeYoung, Russell J.

    2013-01-01

    This report on a Virginia Institute of Marine Science project demonstrates that the sub-grid modeling technology (now part of the Chesapeake Bay Inundation Prediction System, CIPS) can incorporate high-resolution Lidar measurements provided by NASA Langley Research Center into the sub-grid model framework to resolve detailed topographic features for use as a hydrological transport model for run-off simulations within NASA Langley and Langley Air Force Base. Rainfall over land accumulates in the ditches and channels resolved by the model sub-grid, and this capability was tested by simulating the run-off induced by heavy precipitation. Possessing both storm surge and run-off simulation capabilities, the CIPS model was then applied to simulate real storm events starting with Hurricane Isabel in 2003. It is shown that the model can generate highly accurate on-land inundation maps, as demonstrated by the excellent agreement of the model results with the Langley tidal gauge time series data (CAPABLE.larc.nasa.gov) and with the spatial patterns of storm wrack line measurements during Hurricanes Isabel (2003), Irene (2011), and a 2009 Nor'easter. With confidence built upon the model's performance, sea level rise scenarios from the ICCP (International Climate Change Partnership) were also included in the model scenario runs to simulate future inundation cases.

  13. Advancements in tailored hot stamping simulations: Cooling channel and distortion analyses

    NASA Astrophysics Data System (ADS)

    Billur, Eren; Wang, Chao; Bloor, Colin; Holecek, Martin; Porzner, Harald; Altan, Taylan

    2013-12-01

    Hot-stamped components have been widely used in the automotive industry over the last decade where ultra-high strength is required. These parts, however, may not provide sufficient toughness to absorb crash energy. Therefore, these components are "tailored" by controlling the microstructure at various locations. Simulation of tailored hot-stamped components requires more detailed analysis of microstructural changes. Furthermore, since the part is not uniformly quenched, severe distortion can be observed. CPF, together with ESI, has developed a number of techniques to predict the final properties of a tailored part. This paper discusses the recent improvements in modeling distortion and in die design with cooling channels.

  14. Multi-year application of WRF-CAM5 over East Asia-Part I: Comprehensive evaluation and formation regimes of O 3 and PM 2.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Jian; Zhang, Yang; Wang, Kai

    Accurate simulations of air quality and climate require robust model parameterizations on regional and global scales. The Weather Research and Forecasting model with Chemistry version 3.4.1 has been coupled with physics packages from the Community Atmosphere Model version 5 (CAM5) (WRF-CAM5) to assess the robustness of the CAM5 physics package for regional modeling at higher grid resolutions than those typically used in global modeling. In this two-part study, Part I describes the application and evaluation of WRF-CAM5 over East Asia at a horizontal resolution of 36 km for six years: 2001, 2005, 2006, 2008, 2010, and 2011. The simulations are evaluated comprehensively with a variety of datasets from surface networks, satellites, and aircraft. The results show that meteorology is relatively well simulated by WRF-CAM5. However, cloud variables are largely or moderately underpredicted, indicating uncertainties in the model treatments of the dynamics, thermodynamics, and microphysics of clouds/ice as well as aerosol-cloud interactions. For chemical predictions, the tropospheric column abundances of CO, NO2, and O3 are well simulated, but those of SO2 and HCHO are moderately overpredicted, and the column HCHO/NO2 indicator is underpredicted. Large biases exist in the surface concentrations of CO, NO2, and PM10 due to uncertainties in the emissions as well as in vertical mixing. The underpredictions of NO lead to insufficient O3 titration and thus to O3 overpredictions. The model can generally reproduce the observed O3 and PM indicators. These indicators suggest controlling NOx emissions throughout the year, and VOC emissions in summer in big cities and in winter over the North China Plain, North/South Korea, and Japan, to reduce surface O3, and controlling SO2, NH3, and NOx throughout the year to reduce inorganic surface PM.

  15. Modelling and optimization of transient processes in line focusing power plants with single-phase heat transfer medium

    NASA Astrophysics Data System (ADS)

    Noureldin, K.; González-Escalada, L. M.; Hirsch, T.; Nouri, B.; Pitz-Paal, R.

    2016-05-01

    A large number of commercial and research line-focusing solar power plants are in operation and under development. Such plants include parabolic trough collectors (PTC) or linear Fresnel collectors using thermal oil or molten salt as the heat transfer medium (HTM). However, the continuously varying and dynamic solar conditions represent a major challenge for plant control, which must optimize power production while keeping the operation safe. A better understanding of the behaviour of such power plants under transient conditions will help reduce defocusing instances, improve field control, and hence increase the energy yield and confidence in this new technology. Computational methods are very powerful and cost-effective tools to gain such understanding. However, most simulation models described in the literature assume equal mass flow distributions among the parallel loops in the field or totally decouple the flow and thermal conditions. In this paper, a new numerical model to simulate a whole solar field with a single-phase HTM is described. The proposed model consists of a hydraulic part and a thermal part that are coupled to account for the effect of the thermal condition of the field on the flow distribution among the parallel loops. The model is specifically designed for large line-focusing solar fields, offering a high degree of flexibility in terms of layout, condition of the mirrors, and spatially resolved DNI data. Moreover, the model results have been compared to other simulation tools as well as to experimental and plant data, and the results show very good agreement. The model can provide more precise data to the control algorithms to improve plant control. In addition, short-term, accurate, spatially discretized DNI forecasts can be used as input to predict the field behaviour in advance. In this paper, the hydraulic and thermal parts, as well as the coupling procedure, are described, and some validation results and results of simulating an example field are shown.
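
    The hydraulic-thermal coupling described above can be pictured as a fixed-point iteration: a hydraulic sub-model distributes the total mass flow over parallel loops that share one pressure drop, and a thermal sub-model updates each loop outlet temperature, which in turn changes the loop flow resistance. The following is a minimal sketch of that coupling idea only, not the authors' model; the loop resistances, viscosity law, and absorbed powers are invented placeholders.

        import numpy as np

        def solve_field(q_loops, m_total, t_in=290.0, cp=2300.0, n_iter=50):
            """Fixed-point coupling of flow distribution and loop outlet temperatures.

            q_loops : absorbed thermal power per loop [W] (placeholder values below).
            All loops share one pressure drop: dp = R_i(T_i) * m_i**2.
            """
            n = len(q_loops)
            t_out = np.full(n, t_in)
            r0 = np.linspace(1.0, 1.3, n)            # illustrative base resistances
            for _ in range(n_iter):
                # viscosity (and hence resistance) drops as the loop runs hotter
                visc = np.exp(-0.01 * (t_out - t_in))
                r = r0 * visc ** 0.25
                dp = (m_total / np.sum(1.0 / np.sqrt(r))) ** 2   # enforces sum(m_i) = m_total
                m = np.sqrt(dp / r)
                t_out = t_in + q_loops / (m * cp)
            return m, t_out

        m, t_out = solve_field(q_loops=np.array([3e5, 3.5e5, 2.5e5, 3e5]), m_total=8.0)
        print(m.round(3), t_out.round(1))  # hotter loops draw slightly more flow here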

  16. Simplifications for hydronic system models in modelica

    DOE PAGES

    Jorissen, F.; Wetter, M.; Helsen, L.

    2018-01-12

    Building systems and their heating, ventilation, and air conditioning flow networks are becoming increasingly complex. Some building energy simulation tools simulate these flow networks using pressure drop equations. These flow network models typically generate coupled nonlinear algebraic systems of equations, which become increasingly difficult to solve as their size increases. This leads to longer computation times and can cause the solver to fail. These problems also arise when using the equation-based modelling language Modelica and Annex 60-based libraries. This may limit the applicability of the library to relatively small problems unless the problems are restructured. This paper discusses two algebraic loop types and presents an approach that decouples algebraic loops into smaller parts, or removes them completely. The approach is applied to a case study model where an algebraic loop of 86 iteration variables is decoupled into smaller parts with a maximum of five iteration variables.

  17. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems II. Extension to the thermal infrared: equations and methods

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Lomheim, Terrence S.; Florio, Christopher J.; Harbold, Jeffrey M.; Muto, B. Michael; Schoolar, Richard B.; Wintz, Daniel T.; Keller, Robert A.

    2011-10-01

    In a previous paper in this series, we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) tool may be used to model space and airborne imaging systems operating in the visible to near-infrared (VISNIR). PICASSO is a systems-level tool, representative of a class of such tools used throughout the remote sensing community. It is capable of modeling systems over a wide range of fidelity, anywhere from the conceptual design level (where it can serve as an integral part of the systems engineering process) to as-built hardware (where it can serve as part of the verification process). In the present paper, we extend the discussion of PICASSO to the modeling of Thermal Infrared (TIR) remote sensing systems, presenting the equations and methods necessary for modeling in that regime.

  18. Variational prediction of the mechanical behavior of shape memory alloys based on thermal experiments

    NASA Astrophysics Data System (ADS)

    Junker, Philipp; Jaeger, Stefanie; Kastner, Oliver; Eggeler, Gunther; Hackl, Klaus

    2015-07-01

    In this work, we present simulations of shape memory alloys which serve as first examples demonstrating the predictive character of energy-based material models. We begin with a theoretical approach for the derivation of the caloric parts of the Helmholtz free energy. Afterwards, experimental results from DSC measurements are presented. Then, we recall a micromechanical model based on the principle of the minimum of the dissipation potential for the simulation of polycrystalline shape memory alloys. The previously determined caloric parts of the Helmholtz free energy close the set of model parameters without the need for parameter fitting. All quantities are derived directly from experiments. Finally, we compare finite element results for tension tests to experimental data and show that the model identified by thermal measurements can predict mechanically induced phase transformations and thus rationalize global material behavior without any further assumptions.

  19. School System Simulation: An Effective Model for Educational Leaders.

    ERIC Educational Resources Information Center

    Nelson, Jorge O.

    This study reviews the literature regarding the theoretical rationale for creating a computer-based school system simulation for educational leaders' use in problem solving and decision making. Like all social systems, educational systems are so complex that individuals are hard-pressed to consider all interrelated parts as a totality. A…

  20. The Implications of 3D Thermal Structure on 1D Atmospheric Retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blecic, Jasmina; Dobbs-Dixon, Ian; Greene, Thomas, E-mail: jasmina@nyu.edu

    Using the atmospheric structure from a 3D global radiation-hydrodynamic simulation of HD 189733b and the open-source Bayesian Atmospheric Radiative Transfer (BART) code, we investigate the difference between the secondary-eclipse temperature structure produced with a 3D simulation and the best-fit 1D retrieved model. Synthetic data are generated by integrating the 3D models over the Spitzer, Hubble Space Telescope (HST), and James Webb Space Telescope (JWST) bandpasses, covering the wavelength range between 1 and 11 μm where most spectroscopically active species have pronounced features. Using the data from the different observing instruments, we present detailed comparisons between the temperature-pressure profiles recovered by BART and those from the 3D simulations. We calculate several averages of the 3D thermal structure and explore which particular thermal profile matches the retrieved temperature structure. We implement two temperature parameterizations that are commonly used in retrieval to investigate different thermal profile shapes. To assess which part of the thermal structure is best constrained by the data, we generate contribution functions for our theoretical model and each of our retrieved models. Our conclusions are strongly affected by the spectral resolution of the instruments included, their wavelength coverage, and the number of data points combined. We also see some limitations in each of the temperature parameterizations, as they are not able to fully match the complex curvatures that are usually produced in hydrodynamic simulations. The results show that our 1D retrieval is recovering a temperature and pressure profile that most closely matches the arithmetic average of the 3D thermal structure. When we use a higher resolution, more data points, and a parameterized temperature profile that allows more flexibility in the middle part of the atmosphere, we find a better match between the retrieved temperature and pressure profile and the arithmetic average. The Spitzer and HST simulated observations sample deep parts of the planetary atmosphere and provide fewer constraints on the temperature and pressure profile, while the JWST observations sample the middle part of the atmosphere, providing a good match with the middle and most complex part of the arithmetic average of the 3D temperature structure.
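
    For reference, the "arithmetic average of the 3D thermal structure" recovered by the retrieval is simply the temperature field averaged over the horizontal dimensions at each pressure level. A minimal NumPy sketch under assumed array conventions (pressure, latitude, longitude), with optional cosine-latitude area weighting, is shown below.

        import numpy as np

        def mean_tp_profile(temperature, latitudes=None):
            """Arithmetic mean T(p) from a 3D field shaped (n_pressure, n_lat, n_lon).

            If latitudes [deg] are given, grid cells are weighted by cos(latitude) so
            that the average approximates an area-weighted mean on the sphere.
            """
            if latitudes is None:
                return temperature.mean(axis=(1, 2))
            w = np.cos(np.deg2rad(latitudes))[None, :, None]
            return (temperature * w).sum(axis=(1, 2)) / (w.sum() * temperature.shape[2])

        # Synthetic example: 20 pressure levels on a 32 x 64 horizontal grid
        T = 1000.0 + 200.0 * np.random.rand(20, 32, 64)
        lats = np.linspace(-87.2, 87.2, 32)
        print(mean_tp_profile(T, lats).shape)  # (20,) -> one mean temperature per level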

  1. Study on establishment and mechanics application of finite element model of bovine eye.

    PubMed

    Cui, Yan-Hui; Huang, Ju-Fang; Cheng, Si-Ying; Wei, Wei; Shang, Lei; Li, Na; Xiong, Kun

    2015-08-13

    Glaucoma is mainly induced by increased intraocular pressure (IOP), and the pressure that the wall of the eyeball withstands is believed to be determined by the material properties of the tissue and the stereoscopic geometry of the eyeball. In order to study the pressure changes in different parts of the interior eyeball wall, it is necessary to develop a novel eyeball FEM with more accurate geometry and material properties, to use this model to study the stress changes in different parts of the eyeball, especially the lamina cribrosa (LC), under normal physiological and pathological IOP, and to provide a mathematical model for biomechanical studies of retinal ganglion cell (RGC) death. (1) Sclera was cut into 3.8-mm-wide, 14.5-mm-long strips, and cornea was cut into 9.5-mm-wide, 10-mm-long strips; (2) an 858 Mini Bionix II biomechanical loading instrument was used to stretch the sclera and cornea; the stretching rates for sclera were 0.3 mm/s, 3 mm/s, 30 mm/s, and 300 mm/s, and for cornea 0.3 mm/s and 30 mm/s, and the deformation-stress curves were recorded; (3) the naso-temporal and longitudinal distances of the LC were measured; (4) micro-CT was used to accurately scan fresh bovine eyes and obtain the geometrical images and data needed to establish the bovine eye model, and 3-D reconstruction was performed using these images and data to work out the geometric shape of the bovine eye; (5) IOP levels for the eyeball FEM were set, with the inner wall of the eyeball taken as the load-bearing part; the eyeball FE model was run at IOP levels of 10 mmHg, 30 mmHg, 60 mmHg and 100 mmHg, and the force condition of different parts of the eyeball was recorded at each IOP level. (1) We obtained material parameters more in line with physiological conditions and established a more realistic eyeball model by using a reverse-engineering parameter-optimization method to calculate the complex nonlinear superelastic and viscoelastic parameters more accurately; (2) by simulating increased pressure with the FEM we observed that, as the simulated IOP increased, the stress-concentration region on the posterior half of the sclera gradually narrowed, the stress-concentration region on the anterior half of the sclera gradually expanded, and the stress on the LC was concentrated mainly in its central part, suggesting that the load is increasingly borne by the anterior part of the eyeball as IOP rises. This may provide biomechanical evidence to explain why RGCs in the peripheral part die earlier than RGCs in the central part under high IOP (HIOP).

  2. Applications of agent-based modeling to nutrient movement Lake Michigan

    EPA Science Inventory

    As part of an ongoing project aiming to provide useful information for nearshore management (harmful algal blooms, nutrient loading), we explore the value of agent-based models in Lake Michigan. Agent-based models follow many individual “agents” moving through a simul...

  3. Modeling and Performance Improvement of the Constant Power Regulator Systems in Variable Displacement Axial Piston Pump

    PubMed Central

    Park, Sung Hwan; Lee, Ji Min; Kim, Jong Shik

    2013-01-01

    The irregular performance of a mechanical-type constant power regulator is considered. In order to find the cause of an irregular discharge flow in the cut-off pressure region, modeling and numerical simulations are performed to observe the dynamic behavior of the internal parts of the constant power regulator system for a swashplate-type axial piston pump. The commercial numerical simulation software AMESim is applied to model the mechanical-type regulator together with the hydraulic pump and to simulate its performance. The validity of the simulation model of the constant power regulator system is verified by comparing simulation results with experiments. In order to find the cause of the irregular performance of the mechanical-type constant power regulator system, the behavior of the main components, such as the spool, sleeve, and counterbalance piston, is investigated using computer simulation. A shape modification of the counterbalance piston is proposed to remedy the undesirable performance of the mechanical-type constant power regulator. The performance improvement is verified by computer simulation using the AMESim software. PMID:24282389

  4. Towards a Consolidated Approach for the Assessment of Evaluation Models of Nuclear Power Reactors

    DOE PAGES

    Epiney, A.; Canepa, S.; Zerkak, O.; ...

    2016-11-02

    The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and the associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or those requiring interfaces to, e.g., detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of a specific analysis aspect, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: first, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case, imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.

  5. Statistical power calculations for mixed pharmacokinetic study designs using a population approach.

    PubMed

    Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel

    2014-09-01

    Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
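
    The general mechanics of such a simulation-based power calculation (simulate data with a covariate effect, fit the model with and without the effect, and count how often the likelihood ratio test rejects) can be illustrated outside NONMEM with a deliberately simplified linear model. The sketch below is not the Monte Carlo Mapped Power method itself; the effect size, noise level, and sample sizes are placeholders.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def lrt_power(n_subjects, effect=0.5, sigma=1.0, n_sim=500, alpha=0.05):
            """Fraction of simulated studies in which the covariate effect is detected.

            Data: y_i = 1 + effect * x_i + noise, with a binary covariate x.
            The full model estimates the effect; the reduced model fixes it to zero.
            The LRT statistic 2*(llf_full - llf_reduced) is compared to chi2(1).
            """
            crit = stats.chi2.ppf(1.0 - alpha, df=1)
            rejected = 0
            for _ in range(n_sim):
                x = rng.integers(0, 2, n_subjects).astype(float)
                y = 1.0 + effect * x + rng.normal(0.0, sigma, n_subjects)
                # With Gaussian errors the LRT reduces to n * log(RSS_reduced / RSS_full)
                rss_full = np.sum((y - np.poly1d(np.polyfit(x, y, 1))(x)) ** 2)
                rss_red = np.sum((y - y.mean()) ** 2)
                lrt = n_subjects * np.log(rss_red / rss_full)
                rejected += lrt > crit
            return rejected / n_sim

        for n in (20, 40, 80):
            print(n, lrt_power(n))  # choose the smallest n giving power > 0.80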

  6. Test of Hadronic Interaction Models with the KASCADE Hadron Calorimeter

    NASA Astrophysics Data System (ADS)

    Milke, J.; KASCADE Collaboration

    The interpretation of extensive air shower (EAS) measurements often requires the comparison with EAS simulations based on high-energy hadronic interaction models. These interaction models have to extrapolate into kinematical regions and energy ranges beyond the limit of present accelerators. Therefore, it is necessary to test whether these models are able to describe the EAS development in a consistent way. By simultaneously measuring the hadronic, electromagnetic, and muonic parts of an EAS, the KASCADE experiment offers excellent facilities for checking the models. For the EAS simulations, the program CORSIKA, with several hadronic event generators implemented, is used. Different hadronic observables, e.g. hadron number, energy spectrum, and lateral distribution, are investigated, as well as their correlations with the electromagnetic and muonic shower sizes. By comparing measurements and simulations, the consistency of the description of the EAS development is checked. First results with the new interaction model NEXUS and version II.5 of the model DPMJET, recently included in CORSIKA, are presented and compared with QGSJET simulations.

  7. Cognitive simulation as a tool for cognitive task analysis.

    PubMed

    Roth, E M; Woods, D D; Pople, H E

    1992-10-01

    Cognitive simulations are runnable computer programs that represent models of human cognitive activities. We show how one cognitive simulation built as a model of some of the cognitive processes involved in dynamic fault management can be used in conjunction with small-scale empirical data on human performance to uncover the cognitive demands of a task, to identify where intention errors are likely to occur, and to point to improvements in the person-machine system. The simulation, called Cognitive Environment Simulation or CES, has been exercised on several nuclear power plant accident scenarios. Here we report one case to illustrate how a cognitive simulation tool such as CES can be used to clarify the cognitive demands of a problem-solving situation as part of a cognitive task analysis.

  8. Conceptual Design of Simulation Models in an Early Development Phase of Lunar Spacecraft Simulator Using SMP2 Standard

    NASA Astrophysics Data System (ADS)

    Lee, Hoon Hee; Koo, Cheol Hea; Moon, Sung Tae; Han, Sang Hyuck; Ju, Gwang Hyeok

    2013-08-01

    The conceptual study for a Korean lunar orbiter/lander prototype has been performed at the Korea Aerospace Research Institute (KARI). Across diverse space programmes in European countries, a variety of simulation applications have been developed using the SMP2 (Simulation Modelling Platform) standard, which addresses portability and reuse of simulation models by various model users. KARI has not only first-hand experience in developing an SMP-compatible simulation environment but also an ongoing study applying the SMP2 development process for simulation models to a simulator development project for lunar missions. KARI has tried to extend the coverage of the development domain based on the SMP2 standard across the whole simulation model life-cycle, from software design to validation, through a lunar exploration project. Figure 1 shows a snapshot from a visualization tool for the simulation of lunar lander motion; a demonstrator prototype, shown on the right-hand side of the image, was built and tested in 2012. In an early phase of simulator development, prior to a kick-off in the near future, the hardware to be modelled was investigated and identified at the end of 2012. The architectural breakdown of the lunar simulator at the system level was performed, and an architecture with a hierarchical tree of models from the system down to parts at lower levels was established. Finally, SMP documents such as Catalogue, Assembly, Schedule and so on were converted using an XML (eXtensible Markup Language) converter. To obtain as much benefit as possible from the approaches and design mechanisms suggested in the SMP2 standard, object-oriented and component-based design concepts were strictly adopted throughout the whole model development process.

  9. Prediction of thinning of the sheet metal in the program AutoForm and its experimental verification

    NASA Astrophysics Data System (ADS)

    Fedorko, M.; Urbánek, M.; Rund, M.

    2017-02-01

    The manufacture of press-formed parts often involves deep-drawing operations. Deep drawing, however, can be deemed an industrial branch in its own right. Today, many experimental as well as numerical methods are available for designing and optimizing deep-drawing operations. The best option, however, is to combine both approaches. The present paper describes one such investigation. Here, measurements and numerical simulation were used for mapping the impact of anisotropy on thickness variation in a spherical drawn part of DC01 steel. Variation in sheet thickness was measured on spherical drawn parts of various geometries by means of two cameras and evaluated with digital image correlation using the ARAMIS software from the company GOM. The forming experiment was carried out on an INOVA 200 kN servohydraulic testing machine, on which the force vs. piston displacement curve was recorded. The same experiment was then numerically simulated and analyzed using the AUTOFORM software. Various parameters were monitored, such as thinning, strain magnitude, formability, and others. For the purpose of this simulation, a series of mechanical tests was conducted to characterize the experimental material of 1.5 mm thickness. A material model was constructed from the test data, involving the work-hardening curve, the impact of anisotropy, and the forming limit diagram. Specifically, these tests included tensile tests, the Nakajima test, and the stacked test, which were carried out to determine material data for the model. The actual sheet thickness was measured on a sectioned spherical drawn part using a NIKON optical microscope. The variations in thickness along defined lines on the sectioned drawn part were compared with the numerical simulation data using digital image correlation. The experimental programme described above is suitable for calibrating a material model for any computational software and can correctly address deep-drawing problems.

  10. Attentional models of multitask pilot performance using advanced display technology.

    PubMed

    Wickens, Christopher D; Goh, Juliana; Helleberg, John; Horrey, William J; Talleur, Donald A

    2003-01-01

    In the first part of the reported research, 12 instrument-rated pilots flew a high-fidelity simulation, in which air traffic control presentation of auditory (voice) information regarding traffic and flight parameters was compared with advanced display technology presentation of equivalent information regarding traffic (cockpit display of traffic information) and flight parameters (data link display). Redundant combinations were also examined while pilots flew the aircraft simulation, monitored for outside traffic, and read back communications messages. The data suggested a modest cost for visual presentation over auditory presentation, a cost mediated by head-down visual scanning, and no benefit for redundant presentation. The effects in Part 1 were modeled by multiple-resource and preemption models of divided attention. In the second part of the research, visual scanning in all conditions was fit by an expected value model of selective attention derived from a previous experiment. This model accounted for 94% of the variance in the scanning data and 90% of the variance in a second validation experiment. Actual or potential applications of this research include guidance on choosing the appropriate modality for presenting in-cockpit information and understanding task strategies induced by introducing new aviation technology.

  11. Simulating the Heterogeneity in Braided Channel Belt Deposits: Part 1. A Geometric-Based Methodology and Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanathan, Ramya; Guin, Arijit; Ritzi, Robert W.

    A geometric-based simulation methodology was developed and incorporated into a computer code to model the hierarchical stratal architecture, and the corresponding spatial distribution of permeability, in braided channel belt deposits. The code creates digital models of these deposits as a three-dimensional cubic lattice, which can be used directly in numerical aquifer or reservoir models for fluid flow. The digital models have stratal units defined from the km scale to the cm scale. These synthetic deposits are intended to be used as high-resolution base cases in various areas of computational research on multiscale flow and transport processes, including the testing of upscaling theories. The input parameters are primarily univariate statistics. These include the mean and variance for characteristic lengths of sedimentary unit types at each hierarchical level, and the mean and variance of log-permeability for unit types defined at only the lowest level (smallest scale) of the hierarchy. The code has been written for both serial and parallel execution. The methodology is described in Part 1 of this series. In Part 2, models generated by the code are presented and evaluated.
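
    To convey the flavour of a geometry-based generator of this kind, the sketch below fills a three-dimensional cubic lattice with ellipsoidal bar-like bodies, tags each cell with a facies code, and draws cell permeabilities from a facies-specific lognormal distribution. It illustrates the general idea only and is not the published methodology or code; all dimensions and statistics are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        def braided_belt(nx=120, ny=60, nz=20, n_bars=40):
            """Return facies codes (0 = background, 1 = bar) and permeability [m^2]."""
            facies = np.zeros((nx, ny, nz), dtype=int)
            X, Y, Z = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                                  indexing="ij")
            for _ in range(n_bars):
                cx, cy, cz = rng.uniform([0, 0, 0], [nx, ny, nz])
                # ellipsoid half-axes in cells (clipped so they stay positive)
                ax, ay, az = np.clip(rng.normal([25, 8, 3], [5, 2, 1]), 1.0, None)
                inside = ((X - cx) / ax) ** 2 + ((Y - cy) / ay) ** 2 \
                         + ((Z - cz) / az) ** 2 <= 1.0
                facies[inside] = 1
            # lognormal permeability, one (mean, sigma of ln k) pair per facies
            lnk_mean = np.where(facies == 1, np.log(1e-10), np.log(1e-12))
            lnk_sigma = np.where(facies == 1, 0.5, 1.0)
            perm = np.exp(rng.normal(lnk_mean, lnk_sigma))
            return facies, perm

        facies, perm = braided_belt()
        print(facies.mean(), perm.min(), perm.max())  # bar fraction and permeability range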

  12. On-lattice agent-based simulation of populations of cells within the open-source Chaste framework.

    PubMed

    Figueredo, Grazziela P; Joshi, Tanvi V; Osborne, James M; Byrne, Helen M; Owen, Markus R

    2013-04-06

    Over the years, agent-based models have been developed that combine cell division and reinforced random walks of cells on a regular lattice, reaction-diffusion equations for nutrients and growth factors, and ordinary differential equations for the subcellular networks regulating the cell cycle. When linked to a vascular layer, this multi-scale model framework has been applied to tumour growth and therapy. Here, we report on the creation of an agent-based multi-scale environment amalgamating the characteristics of these models within a Virtual Physiological Human (VPH) Exemplar Project. This project enables reuse, integration, expansion and sharing of the model and relevant data. The agent-based and reaction-diffusion parts of the multi-scale model have been implemented and are available for download as part of the latest public release of Chaste (Cancer, Heart and Soft Tissue Environment; http://www.cs.ox.ac.uk/chaste/), part of the VPH Toolkit (http://toolkit.vph-noe.eu/). The environment functionalities are verified against the original models, in addition to extra validation of all aspects of the code. In this work, we present the details of the implementation of the agent-based environment, including the system description, the conceptual model, the development of the simulation model and the processes of verification and validation of the simulation results. We explore the potential use of the environment by presenting exemplar applications of the 'what if' scenarios that can easily be studied in the environment. These examples relate to tumour growth, cellular competition for resources and tumour responses to hypoxia (low oxygen levels). We conclude our work by summarizing the future steps for the expansion of the current system.
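
    Chaste itself is a C++ framework, but the basic on-lattice ingredients mentioned above (cells performing a random walk on a regular lattice and dividing into free neighbouring sites) can be sketched in a few lines. The sketch below illustrates the modelling idea only, with no nutrient reaction-diffusion or cell-cycle ODEs, and none of its parameters correspond to the Chaste implementation.

        import numpy as np

        rng = np.random.default_rng(2)
        STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

        def sweep(occupied, size=50, p_divide=0.05):
            """One sweep of random walk plus volume-exclusion division on a 2D lattice."""
            for cell in list(occupied):
                # attempt a move to a random neighbouring site if it is free
                dx, dy = STEPS[rng.integers(4)]
                target = ((cell[0] + dx) % size, (cell[1] + dy) % size)
                if target not in occupied:
                    occupied.remove(cell)
                    occupied.add(target)
                    cell = target
                # attempt a division into any free neighbouring site
                if rng.random() < p_divide:
                    free = [((cell[0] + dx) % size, (cell[1] + dy) % size)
                            for dx, dy in STEPS
                            if ((cell[0] + dx) % size, (cell[1] + dy) % size) not in occupied]
                    if free:
                        occupied.add(free[rng.integers(len(free))])
            return occupied

        cells = {(25, 25)}                      # start from a single cell
        for _ in range(200):
            cells = sweep(cells)
        print(len(cells))                       # colony size after 200 sweeps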

  13. A comparison of the lattice discrete particle method to the finite-element method and the K&C material model for simulating the static and dynamic response of concrete.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jovanca J.; Bishop, Joseph E.

    2013-11-01

    This report summarizes the work performed by the graduate student Jovanca Smith during a summer internship in 2012 under the mentorship of Joe Bishop. The project was a two-part endeavor that focused on the use of the numerical model called the Lattice Discrete Particle Model (LDPM). The LDPM is a discrete meso-scale model currently used at Northwestern University and the ERDC to model the heterogeneous quasi-brittle material, concrete. In the first part of the project, LDPM was compared to the Karagozian and Case Concrete Model (K&C) used in Presto, an explicit dynamics finite-element code developed at Sandia National Laboratories. In order to make this comparison, a series of quasi-static numerical experiments was performed, namely unconfined uniaxial compression tests on four varied cube specimen sizes, three-point bending notched experiments on three proportional specimen sizes, and six triaxial compression tests on a cylindrical specimen. The second part of this project focused on the application of LDPM to simulate projectile perforation of an ultra-high-performance concrete called CORTUF. This application illustrates the strengths of LDPM over traditional continuum models.

  14. Modeling of light-induced degradation due to Cu precipitation in p-type silicon. II. Comparison of simulations and experiments

    NASA Astrophysics Data System (ADS)

    Vahlman, H.; Haarahiltunen, A.; Kwapil, W.; Schön, J.; Inglese, A.; Savin, H.

    2017-05-01

    The presence of copper impurities is known to deteriorate the bulk minority carrier lifetime of silicon. In p-type silicon, the degradation occurs only under carrier injection (e.g., illumination), but the reason for this phenomenon, called copper-related light-induced degradation (Cu-LID), has long remained uncertain. To clarify the physics of this problem, a mathematical model of Cu-LID was introduced in Paper I of this article. Within the model, kinetic precipitation simulations are interlinked with a Schottky junction model for the electrical behavior of metallic precipitates. As this approach enables simulating precipitation directly at the minority carrier lifetime level, the model is verified in this second part by direct comparison with the corresponding degradation experiments and literature data. Convincing agreement is found for different doping and Cu concentrations as well as at increased temperature; in the dark, both simulated and measured degradation are very slow. In addition, the modeled final lifetimes after illumination are very close to the experimental final lifetimes, and a correlation with the final precipitate size is found. However, the model underestimates the experimentally observed differences in the degradation rate at different illumination intensities. Nevertheless, the results of this work support the theory of Cu-LID as a precipitate formation process. Some of the results also imply that heterogeneous nucleation sites play a role during precipitate nucleation. The model reveals fundamental aspects of the physics of Cu-LID, including how doping and heterogeneous nucleation site concentrations can considerably influence the final recombination activity.

  15. Designing simulator-based training: an approach integrating cognitive task analysis and four-component instructional design.

    PubMed

    Tjiam, Irene M; Schout, Barbara M A; Hendrikx, Ad J M; Scherpbier, Albert J J M; Witjes, J Alfred; van Merriënboer, Jeroen J G

    2012-01-01

    Most studies of simulator-based surgical skills training have focused on the acquisition of psychomotor skills, but surgical procedures are complex tasks requiring both psychomotor and cognitive skills. As skills training is modelled on expert performance consisting partly of unconscious automatic processes that experts are not always able to explicate, simulator developers should collaborate with educational experts and physicians in developing efficient and effective training programmes. This article presents an approach to designing simulator-based skill training comprising cognitive task analysis integrated with instructional design according to the four-component instructional design (4C/ID) model. This theory-driven approach is illustrated by a description of how it was used in the development of simulator-based training for the nephrostomy procedure.

  16. Process simulations for manufacturing of thick composites

    NASA Astrophysics Data System (ADS)

    Kempner, Evan A.

    The availability of manufacturing simulations for composites can significantly reduce the costs associated with process development. Simulations provide a tool for evaluating the effect of processing conditions on the quality of parts produced without requiring numerous experiments. This is especially significant in parts that have troublesome features such as large thickness. The development of simulations for thick-walled composites has been approached by examining the mechanics of resin flow and fiber deformation during processing, applying these analyses to develop simulations, and evaluating the simulations against experimental results. A unified analysis is developed to describe the three-dimensional resin flow and fiber preform deformation during processing regardless of the manufacturing process used. It is shown how the generic governing equations in the unified analysis can be applied to autoclave molding, compression molding, pultrusion, filament winding, and resin transfer molding. A comparison is provided with earlier models derived individually for these processes. The equations described for autoclave curing were used to produce a one-dimensional cure simulation for autoclave curing of thick composites. The simulation consists of an analysis of heat transfer and resin flow in the composite as well as in the bleeder plies used to absorb resin removed from the part. Experiments were performed in a hot press to approximate curing in an autoclave. Graphite/epoxy laminates of 3 cm and 5 cm thickness were cured while monitoring the temperatures at several points inside the laminate as well as the thickness. The simulation predicted temperatures fairly closely, but difficulties were encountered in correlating the thickness results. This simulation was also used to study the effects of prepreg aging on the processing of thick composites. An investigation was also performed on filament winding with prepreg tow. Cylinders approximately 12 mm thick were wound with pressure gages at the mandrel-composite interface. The cylinders were hoop-wound with tensions ranging from 13 to 34 N. An analytical model was developed to calculate the change in stress due to relaxation during winding. Although compressive circumferential stresses occurred throughout each of the cylinders, their magnitude was fairly low.
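
    The one-dimensional cure simulation described above combines transient heat conduction through the laminate thickness with an exothermic cure-kinetics source term. A minimal explicit finite-difference sketch of that coupling is given below; the material properties and the nth-order cure law are generic placeholders, not the values or kinetics used in the thesis.

        import numpy as np

        def cure_1d(thickness=0.03, n_nodes=31, t_end=3600.0, dt=0.05,
                    t_wall=450.0, t0=300.0):
            """Explicit FD solution of dT/dt = a*d2T/dz2 + (H/cp)*dalpha/dt in 1D.

            dalpha/dt = A*exp(-E/(R*T))*(1-alpha)**n  (generic nth-order cure kinetics).
            Tool-side and bag-side temperatures are held at t_wall [K].
            """
            a, H_over_cp = 4e-7, 150.0           # thermal diffusivity, adiabatic rise [K]
            A, E, R, n = 1.0e5, 7.0e4, 8.314, 1.5
            dz = thickness / (n_nodes - 1)
            T = np.full(n_nodes, t0)
            alpha = np.zeros(n_nodes)
            for _ in range(int(t_end / dt)):
                rate = A * np.exp(-E / (R * T)) * (1.0 - alpha) ** n
                lap = np.zeros_like(T)
                lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz ** 2
                T += dt * (a * lap + H_over_cp * rate)
                alpha = np.minimum(alpha + dt * rate, 1.0)
                T[0] = T[-1] = t_wall            # prescribed tool/bag temperature
            return T, alpha

        T, alpha = cure_1d()
        print(T.max(), alpha.mean())  # peak exotherm temperature and mean degree of cure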

  17. Advanced EUV mask and imaging modeling

    NASA Astrophysics Data System (ADS)

    Evanschitzky, Peter; Erdmann, Andreas

    2017-10-01

    The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger, complex-structured mask areas. An EUV multilayer defect model, including a database approach, makes fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee high accuracy. A fully vectorial, sampling-free description of the pupil with Zernike and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.

  18. Smackdown: Adventures in Simulation Standards and Interoperability

    NASA Technical Reports Server (NTRS)

    Elfrey, Priscilla R.; Zacharewicz, Gregory; Ni, Marcus

    2011-01-01

    The paucity of existing employer-driven simulation education and the need for workers broadly trained in Modeling & Simulation (M&S) pose a critical challenge that the simulation community as a whole must address. This paper describes how this need became an impetus for a new inter-university activity that allows students to learn about simulation by doing it. The event, called Smackdown, was demonstrated for the first time in April at the Spring Simulation Multi-conference. Smackdown is an adventure in international cooperation. Students and faculty from the US and Europe took part, supported by IEEE/SISO standards, industry software, and National Aeronautics and Space Administration (NASA) content of a resupply mission to the Moon. The developers see Smackdown providing all participants with a memorable, interactive, problem-solving experience, which can contribute importantly to the workforce of the future. This is part of the larger need to increase undergraduate education in simulation and could be a prime candidate for senior design projects.

  19. PySM: Python Sky Model

    NASA Astrophysics Data System (ADS)

    Thorne, Ben; Alonso, David; Naess, Sigurd; Dunkley, Jo

    2017-04-01

    PySM generates full-sky simulations of Galactic foregrounds in intensity and polarization relevant for CMB experiments. The components simulated are thermal dust, synchrotron, AME, free-free, and CMB at a given Nside, with options to integrate over a top-hat bandpass, to add white instrument noise, and to smooth with a given beam. PySM is based on the large-scale Galactic part of the Planck Sky Model code and uses some of its inputs.
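
    As a usage illustration, the later pysm3 packaging of this model can be driven roughly as shown below. The preset component strings and call signatures are quoted from memory of that package and should be checked against the pysm3 documentation; they are not the interface of the original PySM release described above.

        # Hedged sketch of a foreground simulation with the pysm3 packaging of the model
        # (the preset strings select example dust/synchrotron/free-free/AME/CMB templates).
        import pysm3
        import pysm3.units as u

        sky = pysm3.Sky(nside=128, preset_strings=["d1", "s1", "f1", "a1", "c1"])
        maps_100ghz = sky.get_emission(100 * u.GHz)   # (I, Q, U) HEALPix maps in uK_RJ
        print(maps_100ghz.shape)                      # (3, npix) for nside = 128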

  20. Numerical and experimental modelling of the centrifugal compressor stage - setting the model of impellers with 2D blades

    NASA Astrophysics Data System (ADS)

    Matas, Richard; Syka, Tomáš; Luňáček, Ondřej

    The article describes results from the research and development of a radial compressor stage. The experimental compressor and the numerical models used are briefly described. In the first part, comparisons of the characteristics obtained experimentally and by numerical simulation for the stage with a vaneless diffuser are described. In the second part, the results for the stage with a vaned diffuser are presented. The results are relevant for further studies in the research and development process.

  1. Modeling Hydrodynamics and Heat Transport in Upper Klamath Lake, Oregon, and Implications for Water Quality

    USGS Publications Warehouse

    Wood, Tamara M.; Cheng, Ralph T.; Gartner, Jeffrey W.; Hoilman, Gene R.; Lindenberg, Mary K.; Wellman, Roy E.

    2008-01-01

    The three-dimensional numerical model UnTRIM was used to model hydrodynamics and heat transport in Upper Klamath Lake, Oregon, between mid-June and mid-September in 2005 and between mid-May and mid-October in 2006. Data from as many as six meteorological stations were used to generate a spatially interpolated wind field to use as a forcing function. Solar radiation, air temperature, and relative humidity data all were available at one or more sites. In general, because the available data for all inflows and outflows did not adequately close the water budget as calculated from lake elevation and stage-capacity information, a residual inflow or outflow was used to assure closure of the water budget. Data used for calibration in 2005 included lake elevation at 3 water-level gages around the lake, water currents at 5 Acoustic Doppler Current Profiler (ADCP) sites, and temperature at 16 water-quality monitoring locations. The calibrated model accurately simulated the fluctuations of the surface of the lake caused by daily wind patterns. The use of a spatially variable surface wind interpolated from two sites on the lake and four sites on the shoreline generally resulted in more accurate simulation of the currents than the use of a spatially invariant surface wind as observed at only one site on the lake. The simulation of currents was most accurate at the deepest site (ADCP1, where the velocities were highest) using a spatially variable surface wind; the mean error (ME) and root mean square error (RMSE) for the depth-averaged speed over a 37-day simulation from July 26 to August 31, 2005, were 0.50 centimeter per second (cm/s) and 3.08 cm/s, respectively. Simulated currents at the remaining sites were less accurate and, in general, underestimated the measured currents. The maximum errors in simulated currents were at a site near the southern end of the trench at the mouth of Howard Bay (ADCP7), where the ME and RMSE in the depth-averaged speed were 3.02 and 4.38 cm/s, respectively. The range in ME of the temperature simulations over the same period was −0.94 to 0.73 degrees Celsius (°C), and the RMSE ranged from 0.43 to 1.12°C. The model adequately simulated periods of stratification in the deep trench when complete mixing did not occur for several days at a time. The model was validated using boundary conditions and forcing functions from 2006 without changing any calibration parameters. A spatially variable wind was used. Data for the model validation periods in 2006 included lake elevation at 4 gages around the lake, currents collected at 2 ADCP sites, and temperature collected at 21 water-quality monitoring locations. Errors generally were larger than in 2005. ME and RMSE in the simulated velocity at ADCP1 were 2.30 cm/s and 3.88 cm/s, respectively, for the same 37-day simulation over which errors were computed for 2005. The ME in temperature over the same period ranged from −0.56 to 1.5°C and the RMSE ranged from 0.41 to 1.86°C. Numerical experiments with conservative tracers were used to demonstrate the prevailing clockwise circulation patterns in the lake, and to show the influence of water from the deep trench located along the western shoreline of the lake on fish habitat in the northern part of the lake.
Because water exiting the trench is split into two pathways, the numerical experiments indicate that bottom water from the trench has a stronger influence on water quality in the northern part of the lake, and surface water from the trench has a stronger influence on the southern part of the lake. This may be part of the explanation for why episodes of low dissolved oxygen tend to be more severe in the northern than in the southern part of the lake.

  2. Statistical framework for evaluation of climate model simulations by use of climate proxy data from the last millennium - Part 1: Theory

    NASA Astrophysics Data System (ADS)

    Sundberg, R.; Moberg, A.; Hind, A.

    2012-08-01

    A statistical framework for comparing the output of ensemble simulations from global climate models with networks of climate proxy and instrumental records has been developed, focusing on near-surface temperatures for the last millennium. This framework includes the formulation of a joint statistical model for proxy data, instrumental data and simulation data, which is used to optimize a quadratic distance measure for ranking climate model simulations. An essential underlying assumption is that the simulations and the proxy/instrumental series have a shared component of variability that is due to temporal changes in external forcing, such as volcanic aerosol load, solar irradiance or greenhouse gas concentrations. Two statistical tests have been formulated. Firstly, a preliminary test establishes whether a significant temporal correlation exists between instrumental/proxy and simulation data. Secondly, the distance measure is expressed in the form of a test statistic of whether a forced simulation is closer to the instrumental/proxy series than unforced simulations. The proposed framework allows any number of proxy locations to be used jointly, with different seasons, record lengths and statistical precision. The goal is to objectively rank several competing climate model simulations (e.g. with alternative model parameterizations or alternative forcing histories) by means of their goodness of fit to the unobservable true past climate variations, as estimated from noisy proxy data and instrumental observations.
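
    The ranking step can be thought of as computing, for each ensemble member, a noise-weighted quadratic distance to the proxy/instrumental series, after first screening for a temporal correlation between simulations and data. The sketch below is a generic weighted-distance ranking in that spirit, not the authors' estimator; the weighting and the correlation screen are deliberately simplified.

        import numpy as np

        def rank_simulations(simulations, proxies, proxy_var):
            """Rank ensemble members by a noise-weighted quadratic distance to proxies.

            simulations : (n_sim, n_proxy, n_time) simulated series at proxy locations
            proxies     : (n_proxy, n_time) proxy/instrumental series
            proxy_var   : (n_proxy,) error variance of each proxy record
            """
            d = np.einsum("spt,p->s",
                          (simulations - proxies[None]) ** 2,
                          1.0 / proxy_var) / proxies.size
            # simple screen: mean correlation of each member with the proxy series
            s_anom = simulations - simulations.mean(axis=2, keepdims=True)
            p_anom = proxies - proxies.mean(axis=1, keepdims=True)
            corr = (s_anom * p_anom[None]).sum(axis=2) / (
                np.linalg.norm(s_anom, axis=2) * np.linalg.norm(p_anom, axis=1)[None])
            return np.argsort(d), d, corr.mean(axis=1)

        sims = np.random.randn(5, 10, 200)          # 5 members, 10 proxies, 200 years
        prox = np.random.randn(10, 200)
        order, dist, mean_corr = rank_simulations(sims, prox, proxy_var=np.ones(10))
        print(order, dist.round(2), mean_corr.round(2))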

  3. Winter-to-Summer Precipitation Phasing in Southwestern North America: A Multi-Century Perspective from Paleoclimatic Model-Data Comparisons

    NASA Technical Reports Server (NTRS)

    Coats, Sloan; Smerdon, Jason E.; Seager, Richard; Griffin, Daniel; Cook, Benjamin I.

    2015-01-01

    The phasing of winter-to-summer precipitation anomalies in the North American monsoon (NAM) region 2 (NAM2; 113.25°W-107.75°W, 30°N-35.25°N) of southwestern North America is analyzed in fully coupled simulations of the Last Millennium and compared to tree ring reconstructed winter and summer precipitation variability. The models simulate periods with in-phase seasonal precipitation anomalies, but the strength of this relationship is variable on multidecadal time scales, behavior that is also exhibited by the reconstructions. The models, however, are unable to simulate periods with consistently out-of-phase winter-to-summer precipitation anomalies as observed in the latter part of the instrumental interval. The periods with predominantly in-phase winter-to-summer precipitation anomalies in the models are significant against randomness, and while this result is suggestive of a potential for dual-season drought on interannual and longer time scales, models do not consistently exhibit the persistent dual-season drought seen in the dendroclimatic reconstructions. These collective findings indicate that model-derived drought risk assessments may underestimate the potential for dual-season drought in 21st century projections of hydroclimate in the American Southwest and parts of Mexico.

  4. Molecular dynamics simulations of biological membranes and membrane proteins using enhanced conformational sampling algorithms.

    PubMed

    Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji

    2016-07-01

This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Distributed energy-balance modeling of snow-cover evolution and melt in rugged terrain: Tobacco Root Mountains, Montana, USA

    USGS Publications Warehouse

    Letsinger, S.L.; Olyphant, G.A.

    2007-01-01

A distributed energy-balance model was developed for simulating snowpack evolution and melt in rugged terrain. The model, which was applied to a 43-km2 watershed in the Tobacco Root Mountains, Montana, USA, used measured ambient data from nearby weather stations to drive energy-balance calculations and to constrain the model of Liston and Sturm [Liston, G.E., Sturm, M., 1998. A snow-transport model for complex terrain. Journal of Glaciology 44 (148), 498-516] for calculating the initial snowpack thickness. Simulated initial snow-water equivalent ranged between 1 cm and 385 cm w.e. (water equivalent) with high values concentrated on east-facing slopes below tall summits. An interpreted satellite image of the snowcover distribution on May 6, 1998, closely matched the simulated distribution with the greatest discrepancy occurring in the floor of the main trunk valley. Model simulations indicated that snowmelt commenced early in the melt season, but rapid meltout of snow cover did not occur until after the average energy balance of the entire watershed became positive about 45 days into the melt season. Meltout was fastest in the lower part of the watershed where warmer temperatures and tree cover enhanced the energy income of the underlying snow. An interpreted satellite image of the snowcover distribution on July 9, 1998 compared favorably with the simulated distribution, and melt curves for modeled canopy-covered cells mimicked the trends measured at nearby snow pillow stations. By the end of the simulation period (August 3), 28% of the watershed remained snow covered, most of which was concentrated in the highest parts of the watershed where initially thick accumulations had been shaded by surrounding summits. The results of this study provide further demonstration of the critical role that topography plays in the timing and magnitude of snowmelt from high mountain watersheds. © 2006 Elsevier B.V. All rights reserved.
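
    The core of any energy-balance melt calculation is the conversion of a positive net surface energy flux into melt. The sketch below shows that conversion for a single grid cell; the flux values are invented, and the distributed, terrain-dependent parts of the authors' model are not represented.

```python
# Minimal sketch of the energy-balance melt idea (not the authors' distributed
# model): convert a net surface energy flux into a snowmelt rate for one cell.
RHO_WATER = 1000.0      # kg m-3
LATENT_FUSION = 3.34e5  # J kg-1

def melt_rate_mm_per_day(net_energy_w_m2: float) -> float:
    """Melt (mm w.e. per day) produced by a positive net energy flux."""
    if net_energy_w_m2 <= 0.0:
        return 0.0
    melt_m_per_s = net_energy_w_m2 / (RHO_WATER * LATENT_FUSION)
    return melt_m_per_s * 86400.0 * 1000.0

# Example: a shaded high-elevation cell vs. a warm, tree-covered valley cell (hypothetical fluxes)
for label, q_net in [("shaded cell", 20.0), ("valley cell", 120.0)]:
    print(f"{label}: {melt_rate_mm_per_day(q_net):.1f} mm w.e./day")
```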

  6. Turbulence Resolving Flow Simulations of a Francis Turbine in Part Load using Highly Parallel CFD Simulations

    NASA Astrophysics Data System (ADS)

    Krappel, Timo; Riedelbauch, Stefan; Jester-Zuerker, Roland; Jung, Alexander; Flurl, Benedikt; Unger, Friedeman; Galpin, Paul

    2016-11-01

The operation of Francis turbines in part load conditions causes high fluctuations and dynamic loads in the turbine and especially in the draft tube. At the hub of the runner outlet a rotating vortex rope within a low pressure zone arises and propagates into the draft tube cone. The investigated part load operating point is at about 72% discharge of best efficiency. To reduce the possible influence of boundary conditions on the solution, a flow simulation of a complete Francis turbine is conducted consisting of spiral case, stay and guide vanes, runner and draft tube. As the flow has a strong swirling component for the chosen operating point, it is very challenging to accurately predict the flow and in particular the flow losses in the diffuser. The goal of this study is to reach significantly better numerical prediction of this flow type. This is achieved by an improved resolution of small turbulent structures. Therefore, the Scale Adaptive Simulation SAS-SST turbulence model - a scale resolving turbulence model - is applied and compared to the widely used RANS-SST turbulence model. The largest mesh contains 300 million elements, which achieves LES-like resolution throughout much of the computational domain. The simulations are evaluated in terms of the hydraulic losses in the machine, the velocity field, pressure oscillations in the draft tube and visual comparisons of turbulent flow structures. A pre-release version of ANSYS CFX 17.0 is used in this paper, as this CFD solver scales in parallel to several thousand cores for this application, which includes a transient rotor-stator interface to support the relative motion between the runner and the stationary portions of the water turbine.

  7. [Simulator sickness and its measurement with Simulator Sickness Questionnaire (SSQ)].

    PubMed

    Biernacki, Marcin P; Kennedy, Robert S; Dziuda, Łukasz

One of the most common methods for studying simulator sickness is the Simulator Sickness Questionnaire (SSQ) (Kennedy et al., 1993). Despite its undoubted popularity, the SSQ has not yet been standardized and translated into Polish, which would allow it to be used in Poland for research purposes. The aim of our article is to introduce the SSQ to Polish readers, both researchers and practitioners. The first part of this paper discusses studies that used the SSQ, whereas the second part describes the SSQ test procedure and the method of calculating sample results. Med Pr 2016;67(4):545-555. This work is available in Open Access and licensed under a CC BY-NC 3.0 PL license.

  8. Battery-Charge-State Model

    NASA Technical Reports Server (NTRS)

    Vivian, H. C.

    1985-01-01

A charge-state model for lead/acid batteries is proposed as part of an effort to create the equivalent of a fuel gage for battery-powered vehicles. The model is based on equations that approximate observable characteristics of battery electrochemistry. It uses linear equations, which are easier to simulate on a computer, and gives smooth transitions between charge, discharge, and recuperation.
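
    A minimal sketch of the kind of linear charge-state bookkeeping described above: state of charge is integrated from measured current, with a small recuperation term when the battery rests. The function name, capacity, and rate constants are hypothetical, not taken from the NASA/JPL model.

```python
# Hypothetical sketch of a linear state-of-charge estimate (not the actual model):
# SOC is updated from measured current, with slow recovery toward equilibrium at rest.
def update_soc(soc, current_a, dt_h, capacity_ah=100.0, recuperation_rate=0.001):
    """Advance state of charge (0..1). Positive current = discharge."""
    soc -= current_a * dt_h / capacity_ah            # linear charge/discharge term
    if abs(current_a) < 1e-6:                        # resting: slow recuperation toward full
        soc += recuperation_rate * dt_h * (1.0 - soc)
    return min(max(soc, 0.0), 1.0)

soc = 0.9
for current, hours in [(20.0, 1.0), (0.0, 0.5), (-10.0, 1.0)]:  # discharge, rest, charge
    soc = update_soc(soc, current, hours)
    print(f"I = {current:+5.1f} A for {hours} h -> SOC = {soc:.3f}")
```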

  9. Design, revision, and application of ground-water flow models for simulation of selected water-management scenarios in the coastal area of Georgia and adjacent parts of South Carolina and Florida

    USGS Publications Warehouse

    Clarke, John S.; Krause, Richard E.

    2000-01-01

Ground-water flow models of the Floridan aquifer system in the coastal area of Georgia and adjacent parts of South Carolina and Florida were revised and updated to ensure consistency among the various models used, and to facilitate evaluation of the effects of pumping on the ground-water level near areas of saltwater contamination. The revised models, developed as part of regional and areal assessments of ground-water resources in coastal Georgia, are--the Regional Aquifer-System Analysis (RASA) model, the Glynn County area (Glynn) model, and the Savannah area (Savannah) model. Changes were made to hydraulic-property arrays of the RASA and Glynn models to ensure consistency among all of the models; results of these changes are evidenced in revised water budgets and calibration statistics. Following revision, the three models were used to simulate 32 scenarios of hypothetical changes in pumpage that ranged from about 82 million gallons per day (Mgal/d) lower to about 438 Mgal/d higher than the May 1985 pumping rate of 308 Mgal/d. The scenarios were developed by the Georgia Department of Natural Resources, Environmental Protection Division and the Chatham County-Savannah Metropolitan Planning Commission to evaluate water-management alternatives in coastal Georgia. Maps showing simulated ground-water-level decline and diagrams presenting changes in simulated flow rates are presented for each scenario. Scenarios were grouped on the basis of pumping location--entire 24-county area, central subarea, Glynn-Wayne-Camden County subarea, and Savannah-Hilton Head Island subarea. For those scenarios that simulated decreased pumpage, the water level at both Brunswick and Hilton Head Island rose, decreasing the hydraulic gradient and reducing the potential for saltwater contamination. Conversely, in response to scenarios of increased pumpage, the water level at both locations declined, increasing the hydraulic gradient and increasing the potential for saltwater contamination. Pumpage effects on ground-water levels and related saltwater contamination at Brunswick and Hilton Head Island generally diminish with increased distance from these areas. Additional development of the Upper Floridan aquifer may be possible in parts of the coastal area without affecting saltwater contamination at Brunswick or Hilton Head Island, due to the presence of two hydrologic boundaries--the Gulf Trough, separating the northern and central subareas; and the hypothesized Satilla Line, separating the central and southern subareas. These boundaries diminish pumpage effects across them and may enable greater ground-water withdrawal in areas north of the Gulf Trough and south of the Satilla Line without producing appreciable drawdown at Brunswick or Hilton Head Island.

  10. Hybrid 3-D rocket trajectory program. Part 1: Formulation and analysis. Part 2: Computer programming and user's instruction. [computerized simulation using three dimensional motion analysis

    NASA Technical Reports Server (NTRS)

    Huang, L. C. P.; Cook, R. A.

    1973-01-01

    Models utilizing various sub-sets of the six degrees of freedom are used in trajectory simulation. A 3-D model with only linear degrees of freedom is especially attractive, since the coefficients for the angular degrees of freedom are the most difficult to determine and the angular equations are the most time consuming for the computer to evaluate. A computer program is developed that uses three separate subsections to predict trajectories. A launch rail subsection is used until the rocket has left its launcher. The program then switches to a special 3-D section which computes motions in two linear and one angular degrees of freedom. When the rocket trims out, the program switches to the standard, three linear degrees of freedom model.
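
    As an illustration of the linear-degrees-of-freedom idea (reduced here to two linear degrees of freedom in a vertical plane for brevity), the sketch below integrates thrust, drag, and gravity with a simple Euler scheme. All coefficients are invented, and the launch-rail and trim-switching logic of the actual program is omitted.

```python
# Hypothetical point-mass trajectory integrator in the spirit of the linear-DOF
# model described above; all physical parameters are made up for illustration.
import math

def simulate_point_mass(thrust=2000.0, burn_time=3.0, mass=20.0, cd_area=0.01,
                        rho=1.225, g=9.81, dt=0.01, launch_angle_deg=80.0):
    ang = math.radians(launch_angle_deg)
    x = y = 0.0
    vx, vy = 0.0, 0.0
    t = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        drag = 0.5 * rho * cd_area * speed            # drag force divided by speed
        ft = thrust if t < burn_time else 0.0
        # thrust assumed aligned with the launch angle until the rocket is moving
        if speed > 1e-6:
            ux, uy = vx / speed, vy / speed
        else:
            ux, uy = math.cos(ang), math.sin(ang)
        ax = (ft * ux - drag * vx) / mass
        ay = (ft * uy - drag * vy) / mass - g
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
    return t, x

flight_time, range_m = simulate_point_mass()
print(f"impact after {flight_time:.1f} s at {range_m:.0f} m downrange")
```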

  11. Deep Drawing Simulations With Different Polycrystalline Models

    NASA Astrophysics Data System (ADS)

    Duchêne, Laurent; de Montleau, Pierre; Bouvier, Salima; Habraken, Anne Marie

    2004-06-01

    The goal of this research is to study the anisotropic material behavior during forming processes, represented by both complex yield loci and kinematic-isotropic hardening models. A first part of this paper describes the main concepts of the `Stress-strain interpolation' model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full constraints Taylor's model. The texture evolution due to plastic deformations is computed throughout the FEM simulations. This `local yield locus' approach was initially linked to the classical isotropic Swift hardening law. Recently, a more complex hardening model was implemented: the physically-based microstructural model of Teodosiu. It takes into account intergranular heterogeneity due to the evolution of dislocation structures, that affects isotropic and kinematic hardening. The influence of the hardening model is compared to the influence of the texture evolution thanks to deep drawing simulations.

  12. The Effect of Part-simulation of Weightlessness on Human Control of Bilateral Teleoperation: Neuromotor Considerations

    NASA Technical Reports Server (NTRS)

    Corker, K.; Bejczy, A. K.

    1984-01-01

The effect of weightlessness on the human operator's performance in force reflecting position control of remote manipulators was investigated. A gravity compensation system was developed to simulate the effect of weightlessness on the operator's arm. A universal force reflecting hand controller (FRHC) and task simulation software were employed. Because disturbances in neuromotor control were anticipated for a human operator in an orbital control environment, two experiments were performed under part-simulation of weightless conditions to investigate: (1) the effect of controller stiffness on the attainment of a learned terminal position in the three-dimensional controller space, and (2) the effect of controller stiffness and damping on force tracking of the contour of a simulated three-dimensional cube. The results support the extension of neuromotor control models, which postulate a stiffness balance encoding of terminal position, to three-dimensional motion of a multilink system, confirm the existence of a disturbance in human manual control performance under gravity compensated conditions, and suggest techniques for compensation of weightlessness-induced performance decrement through appropriate specification of hand controller response characteristics. These techniques are based on the human control model.

  13. Flow analysis of new type propulsion system for UV’s

    NASA Astrophysics Data System (ADS)

    Eimanis, M.; Auzins, J.

    2017-10-01

This paper presents an original design of an autonomous underwater vehicle in which thrust is created by the helicoidal shape of the hull rather than by screw propellers. Propulsion force is created by counter-rotating bow and stern parts. The middle part of the vehicle serves as a cargo compartment containing all control mechanisms and communications. It is made of elastic material and contains a Cardan-joint mechanism, which allows the direction of the vehicle to be changed, actuated by bending drives. A bending-drive velocity control algorithm for the automatic control of vehicle movement direction is proposed. The dynamics of the AUV are simulated using the multibody simulation software MSC Adams. For the simulation of water resistance forces and torques, surrogate polynomial metamodels are created on the basis of computer experiments with CFD software. For the flow interaction with the model geometry, a simplified vehicle model is submerged in a fluid medium using dedicated CFD software, following the same idea as in wind tunnel experiments. The simulation results are compared with measurements of the AUV prototype created at the Institute of Mechanics of Riga Technical University. Experiments with the prototype showed good agreement with the simulation results and confirmed the effectiveness and future potential of the proposed principle.

  14. WEST-3 wind turbine simulator development. Volume 2: Verification

    NASA Technical Reports Server (NTRS)

    Sridhar, S.

    1985-01-01

The details of a study to validate WEST-3, a real-time wind turbine simulator developed by Paragon Pacific Inc., are presented in this report. For the validation, the MOD-0 wind turbine was simulated on WEST-3. The simulation results were compared with those obtained from previous MOD-0 simulations and with test data measured during MOD-0 operations. The study was successful in achieving the major objective of proving that WEST-3 yields results that can be used to support a wind turbine development process. The blade bending moments, peak and cyclic, from the WEST-3 simulation correlated reasonably well with the available MOD-0 data. The simulation was also able to predict the resonance phenomena observed during MOD-0 operations. Also presented in the report is a description and solution of a serious numerical instability problem encountered during the study. The problem was caused by the coupling of the rotor and the power train models. The results of the study indicate that some parts of the existing WEST-3 simulation model may have to be refined for future work; specifically, the aerodynamics and the procedure used to couple the rotor model with the tower and the power train models.

  15. Simulation Study of Flap Effects on a Commercial Transport Airplane in Upset Conditions

    NASA Technical Reports Server (NTRS)

    Cunningham, Kevin; Foster, John V.; Shah, Gautam H.; Stewart, Eric C.; Ventura, Robin N.; Rivers, Robert A.; Wilborn, James E.; Gato, William

    2005-01-01

    As part of NASA's Aviation Safety and Security Program, a simulation study of a twinjet transport airplane crew training simulation was conducted to address fidelity for upset or loss of control conditions and to study the effect of flap configuration in those regimes. Piloted and desktop simulations were used to compare the baseline crew training simulation model with an enhanced aerodynamic model that was developed for high-angle-of-attack conditions. These studies were conducted with various flap configurations and addressed the approach-to-stall, stall, and post-stall flight regimes. The enhanced simulation model showed that flap configuration had a significant effect on the character of departures that occurred during post-stall flight. Preliminary comparisons with flight test data indicate that the enhanced model is a significant improvement over the baseline. Some of the unrepresentative characteristics that are predicted by the baseline crew training simulation for flight in the post-stall regime have been identified. This paper presents preliminary results of this simulation study and discusses key issues regarding predicted flight dynamics characteristics during extreme upset and loss-of-control flight conditions with different flap configurations.

16. Weather model performance in simulating extreme rainfall events over the Western Iberian Peninsula

    NASA Astrophysics Data System (ADS)

    Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.

    2012-08-01

This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the Portugal Mainland rainy season. The periods of heavy to extremely heavy rainfall were due to several low surface-pressure systems associated with frontal surfaces. The total amount of precipitation for December exceeded, on average, the 1971-2000 climatological mean by 89 mm, varying from 190 mm (southern part of the country) to 1175 mm (northern part of the country). Three model runs were conducted to assess possible improvements in model performance: (1) the WRF-ARW is forced with the initial fields from a global domain model (RunRef); (2) data assimilation for a specific location (RunObsN) is included; (3) nudging is used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but the performance of the other two runs was also good, so the selected extreme rainfall episode was successfully reproduced.
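
    The kind of station-based verification described above typically reduces to a handful of summary statistics. The sketch below computes bias and RMSE for a set of hypothetical station totals; the numbers are invented, not taken from the study.

```python
# Minimal sketch of verification statistics for simulated vs. observed precipitation.
import numpy as np

observed  = np.array([12.0, 30.5, 0.0, 5.2, 44.1, 18.3])   # mm, hypothetical station totals
simulated = np.array([15.1, 34.0, 1.2, 4.0, 52.7, 20.9])

bias = np.mean(simulated - observed)
rmse = np.sqrt(np.mean((simulated - observed) ** 2))
print(f"bias = {bias:+.1f} mm (positive = overestimation), RMSE = {rmse:.1f} mm")
```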

  17. Multimodel comparison of the ionosphere variability during the 2009 sudden stratosphere warming

    NASA Astrophysics Data System (ADS)

    Pedatella, N. M.; Fang, T.-W.; Jin, H.; Sassi, F.; Schmidt, H.; Chau, J. L.; Siddiqui, T. A.; Goncharenko, L.

    2016-07-01

    A comparison of different model simulations of the ionosphere variability during the 2009 sudden stratosphere warming (SSW) is presented. The focus is on the equatorial and low-latitude ionosphere simulated by the Ground-to-topside model of the Atmosphere and Ionosphere for Aeronomy (GAIA), Whole Atmosphere Model plus Global Ionosphere Plasmasphere (WAM+GIP), and Whole Atmosphere Community Climate Model eXtended version plus Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model (WACCMX+TIMEGCM). The simulations are compared with observations of the equatorial vertical plasma drift in the American and Indian longitude sectors, zonal mean F region peak density (NmF2) from the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) satellites, and ground-based Global Positioning System (GPS) total electron content (TEC) at 75°W. The model simulations all reproduce the observed morning enhancement and afternoon decrease in the vertical plasma drift, as well as the progression of the anomalies toward later local times over the course of several days. However, notable discrepancies among the simulations are seen in terms of the magnitude of the drift perturbations, and rate of the local time shift. Comparison of the electron densities further reveals that although many of the broad features of the ionosphere variability are captured by the simulations, there are significant differences among the different model simulations, as well as between the simulations and observations. Additional simulations are performed where the neutral atmospheres from four different whole atmosphere models (GAIA, HAMMONIA (Hamburg Model of the Neutral and Ionized Atmosphere), WAM, and WACCMX) provide the lower atmospheric forcing in the TIME-GCM. These simulations demonstrate that different neutral atmospheres, in particular, differences in the solar migrating semidiurnal tide, are partly responsible for the differences in the simulated ionosphere variability in GAIA, WAM+GIP, and WACCMX+TIMEGCM.

  18. Ionospheric Simulation System for Satellite Observations and Global Assimilative Modeling Experiments (ISOGAME)

    NASA Technical Reports Server (NTRS)

    Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga P.; Stephens, Philip; Wilson, Brian D.; Akopian, Vardan; Komjathy, Attila; Lijima, Byron A.

    2013-01-01

ISOGAME is designed and developed to assess quantitatively the impact of new observation systems on the capability of imaging and modeling the ionosphere. With ISOGAME, one can perform observation system simulation experiments (OSSEs). A typical OSSE using ISOGAME would involve: (1) simulating various ionospheric conditions on global scales; (2) simulating ionospheric measurements made from a constellation of low-Earth-orbiters (LEOs), particularly Global Navigation Satellite System (GNSS) radio occultation data, and from ground-based global GNSS networks; (3) conducting ionospheric data assimilation experiments with the Global Assimilative Ionospheric Model (GAIM); and (4) analyzing modeling results with visualization tools. ISOGAME can provide quantitative assessment of the accuracy of assimilative modeling with the observation system of interest. Observation systems other than those based on GNSS can also be analyzed. The system is composed of a suite of software that combines the GAIM, including a 4D first-principles ionospheric model and data assimilation modules, an Internal Reference Ionosphere (IRI) model that has been developed by international ionospheric research communities, observation simulator, visualization software, and orbit design, simulation, and optimization software. The core GAIM model used in ISOGAME is based on the GAIM++ code (written in C++) that includes a new high-fidelity geomagnetic field representation (multi-dipole). New visualization tools and analysis algorithms for the OSSEs are now part of ISOGAME.

  19. Modeling marine boundary-layer clouds with a two-layer model: A one-dimensional simulation

    NASA Technical Reports Server (NTRS)

    Wang, Shouping

    1993-01-01

A two-layer model of the marine boundary layer is described. The model is used to simulate both stratocumulus and shallow cumulus clouds in downstream simulations. Over cold sea surfaces, the model predicts a relatively uniform structure in the boundary layer with 90%-100% cloud fraction. Over warm sea surfaces, the model predicts a relatively strong decoupled and conditionally unstable structure with a cloud fraction between 30% and 60%. A strong large-scale divergence considerably limits the height of the boundary layer and decreases relative humidity in the upper part of the cloud layer; thus, a low cloud fraction results. The effects of drizzle on the boundary-layer structure and cloud fraction are also studied with downstream simulations. It is found that drizzle dries and stabilizes the cloud layer and tends to decouple the cloud from the subcloud layer. Consequently, solid stratocumulus clouds may break up and the cloud fraction may decrease because of drizzle.

  20. Numerical simulation of the groundwater-flow system of the Kitsap Peninsula, west-central Washington

    USGS Publications Warehouse

    Frans, Lonna M.; Olsen, Theresa D.

    2016-05-05

A groundwater-flow model was developed to improve understanding of water resources on the Kitsap Peninsula. The Kitsap Peninsula is in the Puget Sound lowland of west-central Washington, is bounded by Puget Sound on the east and by Hood Canal on the west, and covers an area of about 575 square miles. The peninsula encompasses all of Kitsap County, Mason County north of Hood Canal, and part of Pierce County west of Puget Sound. The peninsula is surrounded by saltwater, and the hydrologic setting is similar to that of an island. The study area is underlain by a thick sequence of unconsolidated glacial and interglacial deposits that overlie sedimentary and volcanic bedrock units that crop out in the central part of the study area. Twelve hydrogeologic units consisting of aquifers, confining units, and an underlying bedrock unit form the basis of the groundwater-flow model. Groundwater flow on the Kitsap Peninsula was simulated using the groundwater-flow model, MODFLOW-NWT. The finite difference model grid comprises 536 rows, 362 columns, and 14 layers. Each model cell has a horizontal dimension of 500 by 500 feet, and the model contains a total of 1,227,772 active cells. Groundwater flow was simulated for transient conditions. Transient conditions were simulated for January 1985–December 2012 using annual stress periods for 1985–2004 and monthly stress periods for 2005–2012. During model calibration, variables were adjusted within probable ranges to minimize differences between measured and simulated groundwater levels and stream baseflows. As calibrated to transient conditions, the model has a standard deviation for heads and flows of 47.04 feet and 2.46 cubic feet per second, respectively. Simulated inflow to the model area for the 2005–2012 period from precipitation and secondary recharge was 585,323 acre-feet per year (acre-ft/yr) (93 percent of total simulated inflow ignoring changes in storage), and simulated inflow from stream and lake leakage was 43,905 acre-ft/yr (7 percent of total simulated inflow). Simulated outflow from the model primarily was through discharge to streams, lakes, springs, seeps, and Puget Sound (594,595 acre-ft/yr; 95 percent of total simulated outflow excluding changes in storage) and through withdrawals from wells (30,761 acre-ft/yr; 5 percent of total simulated outflow excluding changes in storage). Six scenarios were formulated with input from project stakeholders and were simulated using the calibrated model to provide representative examples of how the model could be used to evaluate the effects on water levels and stream baseflows of potential changes in groundwater withdrawals, in consumptive use, and in recharge. These included simulations of a steady-state system, no-pumping and return flows, 15-percent increase in current withdrawals in all wells, 80-percent decrease in outdoor water use to simulate effects of conservation efforts, 15-percent decrease in recharge from precipitation to simulate a drought, and particle tracking to determine flow paths. Changes in water-level altitudes and baseflow amounts vary depending on the stress applied to the system in these various scenarios. Reducing recharge by 15 percent between 2005 and 2012 had the largest effect, with water-level altitudes declining throughout the model domain and baseflow amounts decreasing by as much as 18 percent compared to baseline conditions. Changes in pumping volumes had a smaller effect on the model. Removing all pumping and resulting return flows caused increased water-level altitudes in many areas and increased baseflow amounts of between 1 and 3 percent.
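
    The reported budget terms can be cross-checked directly; the small residual between total inflow and total outflow is consistent with the storage changes and rounding that the quoted percentages exclude. The numbers below are copied from the abstract.

```python
# Quick consistency check of the simulated 2005-2012 water budget (acre-ft/yr).
inflow = {"precipitation + secondary recharge": 585_323,
          "stream and lake leakage": 43_905}
outflow = {"discharge to streams, lakes, springs, seeps, Puget Sound": 594_595,
           "well withdrawals": 30_761}

total_in = sum(inflow.values())
total_out = sum(outflow.values())
print(f"total inflow  = {total_in:,} acre-ft/yr")
print(f"total outflow = {total_out:,} acre-ft/yr")
print(f"residual (storage change plus rounding) = {total_in - total_out:,} acre-ft/yr")
```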

  1. Improved Solver Settings for 3D Exploding Wire Simulations in ALEGRA

    DTIC Science & Technology

    2016-08-01

expanding plasma and shock wave resulting from the wire burst can extend to tens of centimeters. The elliptic nature of the magnetic diffusion... such simulations were prohibitively slow due in part to unoptimized (matrix) solver settings. In this report, we address that by varying 6 parameters... a simulation code developed by SNL for modeling high-deformation solid dynamics, shock hydrodynamics, and magnetohydrodynamics

  2. Verification and Validation of Rural Propagation in the Sage 2.0 Simulation

    DTIC Science & Technology

    2016-08-01

The System of Systems Survivability Simulation (S4) is designed to be... materiel developers. The Sage model, part of the S4 simulation suite, has been developed primarily to support SLAD analysts in pretest planning and...

  3. Combustor Simulation

    NASA Technical Reports Server (NTRS)

    Norris, Andrew

    2003-01-01

The goal was to perform a 3D simulation of the GE90 combustor as part of a full turbofan engine simulation. The requirements of high fidelity and fast turn-around time call for a massively parallel code. The National Combustion Code (NCC) was chosen for this task as it supports up to 999 processors and includes state-of-the-art combustion models. Also required is the ability to take inlet conditions from the compressor code and give exit conditions to the turbine code.

  4. Numerical characteristics of quantum computer simulation

    NASA Astrophysics Data System (ADS)

    Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.

    2016-12-01

The simulation of quantum circuits is of significant importance for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality, thus the usage of modern high-performance parallel computations is relevant. As is well known, arbitrary quantum computation in the circuit model can be done using only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. We investigate the fact that the unique properties of quantum nature lead to the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel, and on the other hand, quantum entanglement leads to the problem of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical (sequential and parallel complexity, macro structure, and visual informational graph) and experimental (locality and memory access, scalability and more specific dynamic characteristics) parts. The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good basis for researching and testing development methods for data-intensive parallel software, and the considered analysis methodology can be used successfully for the improvement of algorithms in quantum information science.
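
    The computational kernel being analyzed is the application of small gates to an exponentially large state vector. The sketch below is a textbook state-vector implementation of a single-qubit gate, not the AlgoWiki or Lomonosov code; it illustrates why the operation is naturally parallel over the 2^n amplitudes.

```python
# Illustrative sketch: apply a 2x2 unitary to one qubit of an n-qubit state vector.
import numpy as np

def apply_single_qubit_gate(state: np.ndarray, gate: np.ndarray, target: int, n_qubits: int) -> np.ndarray:
    """Apply a 2x2 unitary to qubit `target` of a 2**n_qubits state vector."""
    psi = state.reshape([2] * n_qubits)               # one axis per qubit
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)                 # restore qubit ordering
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                        # start in |000>
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = apply_single_qubit_gate(state, hadamard, target=0, n_qubits=n)
print(np.round(state, 3))                             # equal superposition on qubit 0
```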

  5. On the design of script languages for neural simulation.

    PubMed

    Brette, Romain

    2012-01-01

In neural network simulators, models are specified according to a language, either specific or based on a general programming language (e.g. Python). There are also ongoing efforts to develop standardized languages, for example NeuroML. When designing these languages, efforts are often focused on expressivity, that is, on maximizing the number of model types that can be described and simulated. I argue that a complementary goal should be to minimize the cognitive effort required on the part of the user to use the language. I try to formalize this notion with the concept of "language entropy", and I propose a few practical guidelines to minimize the entropy of languages for neural simulation.

  6. Assessment of predictive capabilities for aerodynamic heating in hypersonic flow

    NASA Astrophysics Data System (ADS)

    Knight, Doyle; Chazot, Olivier; Austin, Joanna; Badr, Mohammad Ali; Candler, Graham; Celik, Bayram; Rosa, Donato de; Donelli, Raffaele; Komives, Jeffrey; Lani, Andrea; Levin, Deborah; Nompelis, Ioannis; Panesi, Marco; Pezzella, Giuseppe; Reimann, Bodo; Tumuklu, Ozgur; Yuceil, Kemal

    2017-04-01

The capability for CFD prediction of hypersonic shock wave laminar boundary layer interaction was assessed for a double wedge model at Mach 7.1 in air and nitrogen at 2.1 MJ/kg and 8 MJ/kg. Simulations were performed by seven research organizations encompassing both Navier-Stokes and Direct Simulation Monte Carlo (DSMC) methods as part of the NATO STO AVT Task Group 205 activity. Comparison of the CFD simulations with experimental heat transfer and schlieren visualization suggests the need for accurate modeling of the tunnel startup process in short-duration hypersonic test facilities, and the importance of fully 3-D simulations of nominally 2-D (i.e., non-axisymmetric) experimental geometries.

  7. Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu; Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715

    2014-11-28

To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases, with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
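
    A simplified sketch of the Lazy Updating idea layered on Gillespie's Direct Method is given below: propensities of reactions that use a designated hub species are recomputed only after the hub count has drifted past a threshold, while updates triggered by other species proceed as usual. Species names, rate constants, and the 10% threshold are illustrative assumptions, not values from the paper or its Sorting Direct Method implementation.

```python
# Simplified Lazy Updating on top of Gillespie's Direct Method (illustrative only).
import random

random.seed(0)

species = {"ATP": 10_000, "A": 500, "B": 0}
reactions = [
    # (rate constant, reactants, products)
    (0.0005, {"ATP": 1, "A": 1}, {"B": 1}),    # hub-dependent reaction
    (0.05,   {"B": 1},           {"A": 1}),    # hub-independent reaction
    (5.0,    {},                 {"ATP": 1}),  # slow ATP regeneration
]
HUB, THRESHOLD = "ATP", 0.10                   # hub species and lazy-update threshold

def propensity(rate, reactants, hub_value):
    """Mass-action propensity, using a cached count for the hub species."""
    a = rate
    for name, stoich in reactants.items():
        count = hub_value if name == HUB else species[name]
        a *= count ** stoich
    return a

hub_cached = species[HUB]
props = [propensity(k, r, hub_cached) for k, r, _ in reactions]
t, t_end = 0.0, 5.0

while t < t_end:
    total = sum(props)
    if total <= 0.0:
        break
    t += random.expovariate(total)
    # choose the next reaction proportionally to its propensity
    pick, acc, chosen = random.uniform(0.0, total), 0.0, len(reactions) - 1
    for i, p in enumerate(props):
        acc += p
        if pick <= acc:
            chosen = i
            break
    rate, reactants, products = reactions[chosen]
    changed = set()
    for name, s in reactants.items():
        species[name] -= s
        changed.add(name)
    for name, s in products.items():
        species[name] += s
        changed.add(name)
    changed.discard(HUB)                       # hub changes do not trigger updates immediately
    # eager updates for reactions whose non-hub reactants changed
    for j, (k, r, _) in enumerate(reactions):
        if changed & r.keys():
            props[j] = propensity(k, r, hub_cached)
    # lazy update: refresh hub-dependent propensities only after enough drift
    if abs(species[HUB] - hub_cached) > THRESHOLD * max(hub_cached, 1):
        hub_cached = species[HUB]
        for j, (k, r, _) in enumerate(reactions):
            if HUB in r:
                props[j] = propensity(k, r, hub_cached)

print(f"t = {t:.2f}, counts = {species}")
```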

  8. Properties of Local Group galaxies in hydrodynamical simulations of sterile neutrino dark matter cosmologies

    NASA Astrophysics Data System (ADS)

    Lovell, Mark R.; Bose, Sownak; Boyarsky, Alexey; Crain, Robert A.; Frenk, Carlos S.; Hellwing, Wojciech A.; Ludlow, Aaron D.; Navarro, Julio F.; Ruchayskiy, Oleg; Sawala, Till; Schaller, Matthieu; Schaye, Joop; Theuns, Tom

    2017-07-01

We study galaxy formation in sterile neutrino dark matter models that differ significantly both from cold dark matter and from 'warm thermal relic' models. We use the eagle code to carry out hydrodynamic simulations of the evolution of pairs of galaxies chosen to resemble the Local Group, as part of the APOSTLE simulations project. We compare cold dark matter (CDM) with two sterile neutrino models with 7 keV mass: one, the warmest among all models of this mass (LA120) and the other, a relatively cold case (LA10). We show that the lower concentration of sterile neutrino subhaloes compared to their CDM counterparts makes the inferred inner dark matter content of galaxies like Fornax (or Magellanic Clouds) less of an outlier in the sterile neutrino cosmologies. In terms of the galaxy number counts, the LA10 simulations are indistinguishable from CDM when one takes into account halo-to-halo (or 'simulation-to-simulation') scatter. In order for the LA120 model to match the number of Local Group dwarf galaxies, a higher fraction of low-mass haloes is required to form galaxies than is predicted by the eagle simulations. As the census of the Local Group galaxies nears completion, this population may provide a strong discriminant between cold and warm dark matter models.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baptista, António M.

This work focuses on the numerical modeling of Columbia River estuarine circulation and associated modeling-supported analyses conducted as an integral part of a multi-disciplinary and multi-institutional effort led by NOAA's Northwest Fisheries Science Center. The overall effort is aimed at: (1) retrospective analyses to reconstruct historic bathymetric features and assess effects of climate and river flow on the extent and distribution of shallow water, wetland and tidal-floodplain habitats; (2) computer simulations using a 3-dimensional numerical model to evaluate the sensitivity of salmon rearing opportunities to various historical modifications affecting the estuary (including channel changes, flow regulation, and diking of tidal wetlands and floodplains); (3) observational studies of present and historic food web sources supporting selected life histories of juvenile salmon as determined by stable isotope, microchemistry, and parasitology techniques; and (4) experimental studies in Grays River in collaboration with Columbia River Estuary Study Taskforce (CREST) and the Columbia Land Trust (CLT) to assess effects of multiple tidal wetland restoration projects on various life histories of juvenile salmon and to compare responses to observed habitat-use patterns in the mainstem estuary. From the above observations, experiments, and additional modeling simulations, the effort will also (5) examine effects of alternative flow-management and habitat-restoration scenarios on habitat opportunity and the estuary's productive capacity for juvenile salmon. The underlying modeling system is part of the SATURN coastal-margin observatory [1]. SATURN relies on 3D numerical models [2, 3] to systematically simulate and understand baroclinic circulation in the Columbia River estuary-plume-shelf system [4-7] (Fig. 1). Multi-year simulation databases of circulation are produced as an integral part of SATURN, and have multiple applications in understanding estuary/plume variability, the role of the estuary and plume on salmon survival, and functional changes in the estuary-plume system in response to climate and human activities.

  10. Surrogate model approach for improving the performance of reactive transport simulations

    NASA Astrophysics Data System (ADS)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2016-04-01

Reactive transport models can serve a large number of important geoscientific applications involving underground resources in industry and scientific research. It is common for simulation of reactive transport to consist of at least two coupled simulation models. First is a hydrodynamics simulator that is responsible for simulating the flow of groundwaters and transport of solutes. Hydrodynamics simulators are well established technology and can be very efficient. When hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. Second is a geochemical simulation model that is coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly. This makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular reactive transport benchmark problem that involves 1D Calcite transport. This is a published benchmark problem (Kolditz, 2012) for simulation models, and for this reason we use it to test the surrogate model approach. To do this we tried a number of statistical models available through the caret and DiceEval packages for R, to be used as surrogate models. These were trained on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation, we use the surrogate model to predict the simulator output for the part of the sampled input data that was not used for training the statistical model. For this scenario we find that the multivariate adaptive regression splines (MARS) method provides the best trade-off between speed and accuracy. This proof-of-concept forms an essential step towards building an interactive visual analytics system to enable user-driven systematic creation of geochemical surrogate models. Such a system would make reactive transport simulations with unprecedented spatial and temporal detail possible. References: Kolditz, O., Görke, U.J., Shao, H. and Wang, W., 2012. Thermo-hydro-mechanical-chemical processes in porous media: benchmarks and examples (Vol. 86). Springer Science & Business Media.
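
    The workflow described above can be paraphrased in a few lines. The study used R's caret and DiceEval packages with a MARS surrogate; the sketch below substitutes a scikit-learn random forest and a cheap analytic stand-in for the geochemical simulator, so it only illustrates the sample-train-validate pattern, not the benchmark itself.

```python
# Schematic surrogate-model workflow: sample simulator input/output, train a
# statistical surrogate on part of it, validate on the held-out part.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

def expensive_geochemical_simulator(x):
    """Stand-in for the costly coupled chemistry step (purely illustrative)."""
    calcite, ph, temperature = x.T
    return np.exp(-calcite) * np.sin(ph) + 0.01 * temperature

X = rng.uniform([0.0, 6.0, 10.0], [2.0, 9.0, 40.0], size=(2000, 3))
y = expensive_geochemical_simulator(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print(f"held-out MAE of surrogate: {mean_absolute_error(y_test, surrogate.predict(X_test)):.4f}")
```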

  11. Climate Change Impacts for Conterminous USA: An Integrated Assessment Part 2. Models and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomson, Allison M.; Rosenberg, Norman J.; Izaurralde, R Cesar C.

As CO2 and other greenhouse gases accumulate in the atmosphere and contribute to rising global temperatures, it is important to examine how a changing climate may affect natural and managed ecosystems. In this series of papers, we study the impacts of climate change on agriculture, water resources and natural ecosystems in the conterminous United States using a suite of climate change predictions from General Circulation Models (GCMs) as described in Part 1. Here we describe the agriculture model EPIC and the HUMUS water model and validate them with historical crop yields and streamflow data. We compare EPIC simulated grain and forage crop yields with historical crop yields from the US Department of Agriculture and find an acceptable level of agreement for this study. The validation of HUMUS simulated streamflow with estimates of natural streamflow from the US Geological Survey shows that the model is able to reproduce significant relationships and capture major trends.

  12. Computational modeling of high performance steel fiber reinforced concrete using a micromorphic approach

    NASA Astrophysics Data System (ADS)

    Huespe, A. E.; Oliver, J.; Mora, D. F.

    2013-12-01

A finite element methodology for simulating the failure of high performance fiber reinforced concrete composites (HPFRC), with arbitrarily oriented short fibers, is presented. The composite material model is based on a micromorphic approach. Using the framework provided by this theory, the body configuration space is described through two kinematical descriptors. At the structural level, the displacement field represents the standard kinematical descriptor. Additionally, a morphological kinematical descriptor, the micromorphic field, is introduced. It describes the fiber-matrix relative displacement, or slipping mechanism of the bond, observed at the mesoscale level. In the first part of this paper, we summarize the model formulation of the micromorphic approach presented in a previous work by the authors. In the second part, and as the main contribution of the paper, we address specific issues related to the numerical aspects involved in the computational implementation of the model. The developed numerical procedure is based on a mixed finite element technique. The number of dofs per node changes according to the number of fiber bundles simulated in the composite. Then, a specific solution scheme is proposed to solve the variable number of unknowns in the discrete model. The HPFRC composite model takes into account the important effects produced by concrete fracture. A procedure for simulating quasi-brittle fracture is introduced into the model and is described in the paper. The present numerical methodology is assessed by simulating a selected set of experimental tests, which proves its viability and accuracy in capturing a number of mechanical phenomena interacting at the macro- and mesoscale and leading to failure of HPFRC composites.

  13. Commentary on the Integration of Model Sharing and Reproducibility Analysis to Scholarly Publishing Workflow in Computational Biomechanics

    PubMed Central

    Erdemir, Ahmet; Guess, Trent M.; Halloran, Jason P.; Modenese, Luca; Reinbolt, Jeffrey A.; Thelen, Darryl G.; Umberger, Brian R.

    2016-01-01

    Objective The overall goal of this document is to demonstrate that dissemination of models and analyses for assessing the reproducibility of simulation results can be incorporated in the scientific review process in biomechanics. Methods As part of a special issue on model sharing and reproducibility in IEEE Transactions on Biomedical Engineering, two manuscripts on computational biomechanics were submitted: A. Rajagopal et al., IEEE Trans. Biomed. Eng., 2016 and A. Schmitz and D. Piovesan, IEEE Trans. Biomed. Eng., 2016. Models used in these studies were shared with the scientific reviewers and the public. In addition to the standard review of the manuscripts, the reviewers downloaded the models and performed simulations that reproduced results reported in the studies. Results There was general agreement between simulation results of the authors and those of the reviewers. Discrepancies were resolved during the necessary revisions. The manuscripts and instructions for download and simulation were updated in response to the reviewers’ feedback; changes that may otherwise have been missed if explicit model sharing and simulation reproducibility analysis were not conducted in the review process. Increased burden on the authors and the reviewers, to facilitate model sharing and to repeat simulations, were noted. Conclusion When the authors of computational biomechanics studies provide access to models and data, the scientific reviewers can download and thoroughly explore the model, perform simulations, and evaluate simulation reproducibility beyond the traditional manuscript-only review process. Significance Model sharing and reproducibility analysis in scholarly publishing will result in a more rigorous review process, which will enhance the quality of modeling and simulation studies and inform future users of computational models. PMID:28072567

  14. Comparing proxy and model estimates of hydroclimate variability and change over the Common Era

    NASA Astrophysics Data System (ADS)

    Hydro2k Consortium, Pages

    2017-12-01

    Water availability is fundamental to societies and ecosystems, but our understanding of variations in hydroclimate (including extreme events, flooding, and decadal periods of drought) is limited because of a paucity of modern instrumental observations that are distributed unevenly across the globe and only span parts of the 20th and 21st centuries. Such data coverage is insufficient for characterizing hydroclimate and its associated dynamics because of its multidecadal to centennial variability and highly regionalized spatial signature. High-resolution (seasonal to decadal) hydroclimatic proxies that span all or parts of the Common Era (CE) and paleoclimate simulations from climate models are therefore important tools for augmenting our understanding of hydroclimate variability. In particular, the comparison of the two sources of information is critical for addressing the uncertainties and limitations of both while enriching each of their interpretations. We review the principal proxy data available for hydroclimatic reconstructions over the CE and highlight the contemporary understanding of how these proxies are interpreted as hydroclimate indicators. We also review the available last-millennium simulations from fully coupled climate models and discuss several outstanding challenges associated with simulating hydroclimate variability and change over the CE. A specific review of simulated hydroclimatic changes forced by volcanic events is provided, as is a discussion of expected improvements in estimated radiative forcings, models, and their implementation in the future. Our review of hydroclimatic proxies and last-millennium model simulations is used as the basis for articulating a variety of considerations and best practices for how to perform proxy-model comparisons of CE hydroclimate. This discussion provides a framework for how best to evaluate hydroclimate variability and its associated dynamics using these comparisons and how they can better inform interpretations of both proxy data and model simulations. We subsequently explore means of using proxy-model comparisons to better constrain and characterize future hydroclimate risks. This is explored specifically in the context of several examples that demonstrate how proxy-model comparisons can be used to quantitatively constrain future hydroclimatic risks as estimated from climate model projections.

  15. Comparing Proxy and Model Estimates of Hydroclimate Variability and Change over the Common Era

    NASA Technical Reports Server (NTRS)

    Smerdon, Jason E.; Luterbacher, Jurg; Phipps, Steven J.; Anchukaitis, Kevin J.; Ault, Toby; Coats, Sloan; Cobb, Kim M.; Cook, Benjamin I.; Colose, Chris; Felis, Thomas; hide

    2017-01-01

Water availability is fundamental to societies and ecosystems, but our understanding of variations in hydroclimate (including extreme events, flooding, and decadal periods of drought) is limited because of a paucity of modern instrumental observations that are distributed unevenly across the globe and only span parts of the 20th and 21st centuries. Such data coverage is insufficient for characterizing hydroclimate and its associated dynamics because of its multidecadal to centennial variability and highly regionalized spatial signature. High-resolution (seasonal to decadal) hydroclimatic proxies that span all or parts of the Common Era (CE) and paleoclimate simulations from climate models are therefore important tools for augmenting our understanding of hydroclimate variability. In particular, the comparison of the two sources of information is critical for addressing the uncertainties and limitations of both while enriching each of their interpretations. We review the principal proxy data available for hydroclimatic reconstructions over the CE and highlight the contemporary understanding of how these proxies are interpreted as hydroclimate indicators. We also review the available last-millennium simulations from fully coupled climate models and discuss several outstanding challenges associated with simulating hydroclimate variability and change over the CE. A specific review of simulated hydroclimatic changes forced by volcanic events is provided, as is a discussion of expected improvements in estimated radiative forcings, models, and their implementation in the future. Our review of hydroclimatic proxies and last-millennium model simulations is used as the basis for articulating a variety of considerations and best practices for how to perform proxy-model comparisons of CE hydroclimate. This discussion provides a framework for how best to evaluate hydroclimate variability and its associated dynamics using these comparisons and how they can better inform interpretations of both proxy data and model simulations. We subsequently explore means of using proxy-model comparisons to better constrain and characterize future hydroclimate risks. This is explored specifically in the context of several examples that demonstrate how proxy-model comparisons can be used to quantitatively constrain future hydroclimatic risks as estimated from climate model projections.

  16. Simulating timber management in Lake States forests.

    Treesearch

    Gary J. Brand

    1981-01-01

    Describes in detail a management subsystem to simulate cutting in Lake States forest types. This subsystem is part of a Stand and Tree Evaluation and Modeling System (STEMS) contained in the Forest Resource Evaluation Program (FREP) for the Lake States. The management subsystem can be used to test the effect of alternate management strategies.

  17. Curricular Improvements through Computation and Experiment Based Learning Modules

    ERIC Educational Resources Information Center

    Khan, Fazeel; Singh, Kumar

    2015-01-01

Engineers often need to predict how a part, mechanism or machine will perform in service, and this insight is typically achieved through computer simulations. Therefore, instruction in the creation and application of simulation models is essential for aspiring engineers. The purpose of this project was to develop a unified approach to teaching…

  18. Nursing Simulation: A Review of the Past 40 Years

    ERIC Educational Resources Information Center

    Nehring, Wendy M.; Lashley, Felissa R.

    2009-01-01

Simulation, in its many forms, has been a part of nursing education and practice for many years. The use of games, computer-assisted instruction, standardized patients, virtual reality, and low-fidelity to high-fidelity mannequins has appeared in the past 40 years, whereas anatomical models, partial task trainers, and role playing were used…

  19. Revised Planning Methodology For Signalized Intersections And Operational Analysis Of Exclusive Left-Turn Lanes, A Simulation-Based Method, Part - I: Literature Review (Final Report)

    DOT National Transportation Integrated Search

    1996-04-01

The study investigates the application of simulation along with field observations for estimation of exclusive left-turn saturation flow rate and capacity. The entire research has covered the following principal subjects: (1) a saturation flow model ...

  20. The Shock and Vibration Bulletin. Part 1: Invited Papers, Vibrations and Acoustics, Blast and Shock

    NASA Technical Reports Server (NTRS)

    1979-01-01

Developments in the modeling and simulation of shock and vibration phenomena are considered. Predicting the noise exposure of payloads in the space shuttle, prediction for step-stress fatigue, pyrotechnic shock simulation using metal-to-metal impact, and prediction of fragment velocities and trajectories are among the topics covered.

  1. Curing of Thick Thermoset Composite Laminates: Multiphysics Modeling and Experiments

    NASA Astrophysics Data System (ADS)

    Anandan, S.; Dhaliwal, G. S.; Huo, Z.; Chandrashekhara, K.; Apetre, N.; Iyyer, N.

    2017-11-01

    Fiber reinforced polymer composites are used in high-performance aerospace applications as they are resistant to fatigue, corrosion free and possess high specific strength. The mechanical properties of these composite components depend on the degree of cure and residual stresses developed during the curing process. While these parameters are difficult to determine experimentally in large and complex parts, they can be simulated using numerical models in a cost-effective manner. These simulations can be used to develop cure cycles and change processing parameters to obtain high-quality parts. In the current work, a numerical model was built in Comsol MultiPhysics to simulate the cure behavior of a carbon/epoxy prepreg system (IM7/Cycom 5320-1). A thermal spike was observed in thick laminates when the recommended cure cycle was used. The cure cycle was modified to reduce the thermal spike and maintain the degree of cure at the laminate center. A parametric study was performed to evaluate the effect of air flow in the oven, post cure cycles and cure temperatures on the thermal spike and the resultant degree of cure in the laminate.
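
    To illustrate the cure-simulation idea described above, the following minimal sketch couples one-dimensional through-thickness heat conduction with an exothermic nth-order cure kinetics model and tracks the centerline temperature overshoot (the thermal spike) during a single-hold cure cycle. All material, kinetic, and cycle parameters are illustrative placeholders, not the characterized IM7/Cycom 5320-1 values or the COMSOL model used in the study.

      import numpy as np

      # --- illustrative material and kinetic parameters (placeholders) ---
      L      = 0.03          # laminate thickness [m]
      nx     = 51
      dx     = L / (nx - 1)
      rho    = 1500.0        # density [kg/m^3]
      cp     = 1200.0        # specific heat [J/(kg K)]
      k      = 0.5           # through-thickness conductivity [W/(m K)]
      w_res  = 0.35          # resin mass fraction [-]
      H_tot  = 450e3         # total heat of reaction per kg resin [J/kg]
      A, Ea  = 1.5e5, 65e3   # Arrhenius pre-exponential [1/s] and activation energy [J/mol]
      n_ord  = 1.5           # reaction order [-]
      R      = 8.314

      def oven_temperature(t):
          """Simple one-hold cure cycle: 2 C/min ramp from 25 C to a 120 C hold."""
          return min(25.0 + (2.0 / 60.0) * t, 120.0) + 273.15

      # Explicit time stepping (dt chosen below the diffusive stability limit).
      alpha_th = k / (rho * cp)
      dt = 0.4 * dx**2 / (2.0 * alpha_th)
      t_end = 3.0 * 3600.0

      T     = np.full(nx, 25.0 + 273.15)   # temperature field [K]
      alpha = np.zeros(nx)                 # degree of cure [-]
      t, spike = 0.0, 0.0

      while t < t_end:
          rate = A * np.exp(-Ea / (R * T)) * (1.0 - alpha) ** n_ord   # cure rate [1/s]
          alpha = np.clip(alpha + rate * dt, 0.0, 1.0)
          q = rho * w_res * H_tot * rate                               # exothermic source [W/m^3]
          lap = np.zeros(nx)
          lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
          T = T + dt * (alpha_th * lap + q / (rho * cp))
          T[0] = T[-1] = oven_temperature(t)                           # oven-controlled surfaces
          spike = max(spike, T[nx // 2] - oven_temperature(t))
          t += dt

      print(f"peak centerline overshoot above oven setpoint: {spike:.1f} K")
      print(f"final centerline degree of cure: {alpha[nx // 2]:.2f}")

    Lowering the hold temperature or adding an intermediate dwell in oven_temperature is the kind of cure-cycle modification, explored in the study, that trades a smaller thermal spike against slower cure at the laminate center.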

  2. Consequences of Global Change for Groundwater in Southern Germany - Part 2: Socio-economic Aspects

    NASA Astrophysics Data System (ADS)

    Barthel, Roland; Krimly, Tatjana; Elbers, Michael; Soboll, Anja; Wackerbauer, Johann; Hennicker, Rolf; Janisch, Stephan; Reichenau, Tim G.; Dabbert, Stephan; Schmude, Jürgen; Ernst, Andreas; Mauser, Wolfram

    2011-12-01

    In order to account for complex interactions between humans, climate, and the water cycle, the research consortium GLOWA-Danube (www.glowa-danube.de) has developed the simulation system DANUBIA, which consists of 17 coupled models. DANUBIA was applied to investigate various impacts of global change between 2011 and 2060 in the Upper Danube Catchment. This article describes part 2 of an article series with investigations of socio-economic aspects, while part 1 (Barthel et al. in Grundwasser 16(4), doi:10.1007/s007-011-01794, 2011) deals with natural-spatial aspects. The principles of socio-economic actor-modeling and interactions between socio-economic and natural science model components are described here. We present selected simulations that show impacts on groundwater from changes in agriculture, tourism, economy, domestic water users and water supply. Despite decreases in water consumption, the scenario simulations show significant decreases in groundwater quantity. On the other hand, groundwater quality will likely be influenced more severely by land use changes compared to direct climatic causes. However, overall changes will not be dramatic.

  3. New tools for aquatic habitat modeling

    Treesearch

    D. Tonina; J. A. McKean; C. Tang; P. Goodwin

    2011-01-01

    Modeling of aquatic microhabitat in streams has typically been done over short channel reaches using one-dimensional simulations, partly because of a lack of high-resolution subaqueous topographic data to better define model boundary conditions. The Experimental Advanced Airborne Research Lidar (EAARL) is an airborne aquatic-terrestrial sensor that allows simultaneous...

  4. A PROBABILISTIC MODELING FRAMEWORK FOR PREDICTING POPULATION EXPOSURES TO BENZENE

    EPA Science Inventory

    The US Environmental Protection Agency (EPA) is modifying their probabilistic Stochastic Human Exposure Dose Simulation (SHEDS) model to assess aggregate exposures to air toxics. Air toxics include urban Hazardous Air Pollutants (HAPS) such as benzene from mobile sources, part...

  5. Development of a Water Recovery System Resource Tracking Model

    NASA Technical Reports Server (NTRS)

    Chambliss, Joe; Stambaugh, Imelda; Sargusingh, Miriam; Shull, Sarah; Moore, Michael

    2015-01-01

    A simulation model has been developed to track water resources in an exploration vehicle using Regenerative Life Support (RLS) systems. The Resource Tracking Model (RTM) integrates the functions of all the vehicle components that affect the processing and recovery of water during simulated missions. The approach used in developing the RTM enables its use as part of a complete vehicle simulation for real time mission studies. Performance data for the components in the RTM is focused on water processing. The data provided to the model has been based on the most recent information available regarding the technology of the component. This paper will describe the process of defining the RLS system to be modeled, the way the modeling environment was selected, and how the model has been implemented. Results showing how the RLS components exchange water are provided in a set of test cases.
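
    The core of such a resource tracking model is a time-stepped mass balance in which each component exchanges water with the others. The sketch below illustrates that bookkeeping for a drastically simplified loop (crew, urine processor, water processor, potable tank); the flow rates and recovery efficiencies are assumed round numbers, not the component performance data used in the RTM.

      # Minimal water-resource tracking sketch (illustrative rates and efficiencies;
      # not the component performance data of the NASA Resource Tracking Model).

      CREW = 4
      URINE_PER_CM_DAY    = 1.5    # kg per crew member per day sent to the urine processor
      HUMIDITY_PER_CM_DAY = 2.0    # kg per crew member per day condensate to the water processor
      DRINK_PER_CM_DAY    = 3.5    # kg per crew member per day drawn from the potable tank
      UPA_RECOVERY = 0.85          # assumed urine processor water recovery fraction
      WPA_RECOVERY = 0.98          # assumed water processor recovery fraction

      def simulate(days, potable_start=100.0):
          """Track potable water, waste water, and brine over a simulated mission, day by day."""
          potable, waste, brine = potable_start, 0.0, 0.0
          history = []
          for day in range(days):
              # Crew consumes potable water and produces urine plus humidity condensate.
              potable -= CREW * DRINK_PER_CM_DAY
              urine      = CREW * URINE_PER_CM_DAY
              condensate = CREW * HUMIDITY_PER_CM_DAY

              # Urine processor: recovers distillate, rejects brine.
              distillate = UPA_RECOVERY * urine
              brine     += urine - distillate

              # Water processor: polishes condensate plus distillate back to potable.
              feed     = condensate + distillate
              potable += WPA_RECOVERY * feed
              waste   += (1.0 - WPA_RECOVERY) * feed

              history.append((day + 1, potable, waste, brine))
          return history

      for day, potable, waste, brine in simulate(30)[::10]:
          print(f"day {day:3d}: potable {potable:7.1f} kg, waste {waste:5.1f} kg, brine {brine:5.1f} kg")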

  6. Development of a Water Recovery System Resource Tracking Model

    NASA Technical Reports Server (NTRS)

    Chambliss, Joe; Stambaugh, Imelda; Sargusingh, Miriam; Shull, Sarah; Moore, Michael

    2014-01-01

    A simulation model has been developed to track water resources in an exploration vehicle using regenerative life support (RLS) systems. The model integrates the functions of all the vehicle components that affect the processing and recovery of water during simulated missions. The approach used in developing the model results in the Resource Tracking Model (RTM) being part of a complete vehicle simulation that can be used in real-time mission studies. Performance data for the variety of components in the RTM is focused on water processing and has been defined based on the most recent information available for the technology of the component. This paper will describe the process of defining the RLS system to be modeled, the way the modeling environment was selected, and how the model has been implemented. Results showing how the variety of RLS components exchange water are provided in a set of test cases.

  7. Evaluation of Cloud-resolving and Limited Area Model Intercomparison Simulations using TWP-ICE Observations. Part 1: Deep Convective Updraft Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varble, A. C.; Zipser, Edward J.; Fridlind, Ann

    2014-12-27

    Ten 3D cloud-resolving model (CRM) simulations and four 3D limited area model (LAM) simulations of an intense mesoscale convective system observed on January 23-24, 2006 during the Tropical Warm Pool – International Cloud Experiment (TWP-ICE) are compared with each other and with observed radar reflectivity fields and dual-Doppler retrievals of vertical wind speeds in an attempt to explain published results showing a high bias in simulated convective radar reflectivity aloft. This high bias results from ice water content being large, which is a product of large, strong convective updrafts, although hydrometeor size distribution assumptions modulate the size of this bias. Snow reflectivity can exceed 40 dBZ in a two-moment scheme when a constant bulk density of 100 kg m-3 is used. Making snow mass more realistically proportional to area rather than volume should somewhat alleviate this problem. Graupel, unlike snow, produces high biased reflectivity in all simulations. This is associated with large amounts of liquid water above the freezing level in updraft cores. Peak vertical velocities in deep convective updrafts are greater than dual-Doppler retrieved values, especially in the upper troposphere. Freezing of large rainwater contents lofted above the freezing level in simulated updraft cores greatly contributes to these excessive upper tropospheric vertical velocities. Strong simulated updraft cores are nearly undiluted, with some showing supercell characteristics. Decreasing horizontal grid spacing from 900 meters to 100 meters weakens strong updrafts, but not enough to match observational retrievals. Therefore, overly intense simulated updrafts may partly be a product of interactions between convective dynamics, parameterized microphysics, and large-scale environmental biases that promote different convective modes and strengths than observed.

  8. Modelling & Simulation Support to the Effects Based Approach to Operations - Observations from Using GAMMA in MNE 4

    DTIC Science & Technology

    2006-09-01

    The aim of the two parts of the experiment was identical: to explore concepts and supporting tools for Effects Based Approach to Operations (EBAO) ... feedback on the PMESII factors over time and the degree of achievement of the Operational Endstate. ... specific situation depends also on his interests. GAMMA provides two different methods: 1. The importance for different PMESII factors (i.e., potential ...

  9. Hydrogeology and simulation of groundwater flow in the Central Oklahoma (Garber-Wellington) Aquifer, Oklahoma, 1987 to 2009, and simulation of available water in storage, 2010–2059

    USGS Publications Warehouse

    Mashburn, Shana L.; Ryter, Derek W.; Neel, Christopher R.; Smith, S. Jerrod; Magers, Jessica S.

    2014-02-10

    The Central Oklahoma (Garber-Wellington) aquifer underlies about 3,000 square miles of central Oklahoma. The study area for this investigation was the extent of the Central Oklahoma aquifer. Water from the Central Oklahoma aquifer is used for public, industrial, commercial, agricultural, and domestic supply. With the exception of Oklahoma City, all of the major communities in central Oklahoma rely either solely or partly on groundwater from this aquifer. The Oklahoma City metropolitan area, incorporating parts of Canadian, Cleveland, Grady, Lincoln, Logan, McClain, and Oklahoma Counties, has a population of approximately 1.2 million people. As areas are developed for groundwater supply, increased groundwater withdrawals may result in decreases in long-term aquifer storage. The U.S. Geological Survey, in cooperation with the Oklahoma Water Resources Board, investigated the hydrogeology and simulated groundwater flow in the aquifer using a numerical groundwater-flow model. The purpose of this report is to describe an investigation of the Central Oklahoma aquifer that included analyses of the hydrogeology, hydrogeologic framework of the aquifer, and construction of a numerical groundwater-flow model. The groundwater-flow model was used to simulate groundwater levels and for water-budget analysis. A calibrated transient model was used to evaluate changes in groundwater storage associated with increased future water demands.

  10. Radiometry simulation within the end-to-end simulation tool SENSOR

    NASA Astrophysics Data System (ADS)

    Wiest, Lorenz; Boerner, Anko

    2001-02-01

    An end-to-end simulation is a valuable tool for sensor system design, development, optimization, testing, and calibration. This contribution describes the radiometry module of the end-to-end simulation tool SENSOR. It features MODTRAN 4.0-based look-up tables in conjunction with a cache-based multilinear interpolation algorithm to speed up radiometry calculations. It employs a linear reflectance parameterization to reduce look-up table size, considers effects due to the topography of a digital elevation model (surface slope, sky view factor) and uses a reflectance class feature map to assign Lambertian and BRDF reflectance properties to the digital elevation model. The overall consistency of the radiometry part is demonstrated by good agreement between ATCOR 4-retrieved reflectance spectra of a simulated digital image cube and the original reflectance spectra used to simulate this image data cube.
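
    The combination of a pre-computed radiative-transfer look-up table with cached multilinear interpolation can be sketched minimally as follows. The table axes, the synthetic radiance values, and the cell_corners cache are illustrative stand-ins; they are not MODTRAN 4.0 output or the actual SENSOR implementation.

      import numpy as np
      from functools import lru_cache

      # Synthetic stand-in for a MODTRAN-style look-up table: at-sensor radiance
      # tabulated over (visibility [km], water vapour [g/cm^2], ground elevation [km]).
      vis_ax  = np.array([5.0, 10.0, 23.0, 50.0])
      wv_ax   = np.array([0.5, 1.0, 2.0, 4.0])
      elev_ax = np.array([0.0, 0.5, 1.0, 2.0])
      AXES = (vis_ax, wv_ax, elev_ax)

      rng = np.random.default_rng(0)
      LUT = rng.uniform(10.0, 60.0, size=(len(vis_ax), len(wv_ax), len(elev_ax)))  # radiance values

      @lru_cache(maxsize=256)
      def cell_corners(i, j, k):
          """Cache the 2x2x2 block of table values for a grid cell (the expensive gather)."""
          return LUT[i:i + 2, j:j + 2, k:k + 2].copy()

      def multilinear(vis, wv, elev):
          """Multilinear interpolation of the LUT at an arbitrary query point."""
          idx, frac = [], []
          for value, axis in zip((vis, wv, elev), AXES):
              i = int(np.clip(np.searchsorted(axis, value) - 1, 0, len(axis) - 2))
              idx.append(i)
              frac.append((value - axis[i]) / (axis[i + 1] - axis[i]))
          block = cell_corners(*idx)
          for f in frac:                      # collapse one axis at a time
              block = (1.0 - f) * block[0] + f * block[1]
          return float(block)

      print(multilinear(15.0, 1.5, 0.75))

    Repeated queries that fall in the same grid cell reuse the cached corner block, which is where the speed-up comes from in this kind of scheme.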

  11. Impact Of The Material Variability On The Stamping Process: Numerical And Analytical Analysis

    NASA Astrophysics Data System (ADS)

    Ledoux, Yann; Sergent, Alain; Arrieux, Robert

    2007-05-01

    The finite element simulation is a very useful tool in the deep drawing industry. It is used in particular for the development and validation of new stamping tools, and it allows cost and time for tooling design and set-up to be decreased. One of the most important difficulties in obtaining good agreement between the simulation and the real process comes from the definition of the numerical conditions (mesh, punch travel speed, limit conditions,…) and the parameters which model the material behavior. Indeed, in the press shop, when the sheet set changes, a variation of the formed part geometry is often observed, owing to the variability of the material properties between these different sets. This last factor is probably one of the main sources of process deviation once the process is set up. That is why it is important to study the influence of material data variation on the geometry of a classical stamped part. The chosen geometry is an omega-shaped part because of its simplicity and because it is a representative one in the automotive industry (car body reinforcement). Moreover, it shows important springback deviations. An isotropic behaviour law is assumed. The impact of the statistical deviation of the three law coefficients characterizing the material and the friction coefficient around their nominal values is tested. A Gaussian distribution is assumed, and their impact on the geometry variation is studied by FE simulation. Another approach is also envisaged, consisting of representing the process variability by a mathematical model: as a function of the variability of the input parameters, an analytical model is defined which leads to the part geometry variability around the nominal shape. These two approaches allow the process capability to be predicted as a function of the material parameter variability.
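
    The statistical approach can be sketched as a simple Monte Carlo propagation: the hardening-law and friction coefficients are drawn from Gaussian distributions around their nominal values and pushed through a response function to a springback angle, whose scatter is then summarized as a capability index. The quadratic response surface, the nominal values, and the tolerance limits below are hypothetical placeholders standing in for the FE simulation or the fitted analytical model.

      import numpy as np

      rng = np.random.default_rng(42)

      # Nominal values and standard deviations for a Hollomon-type hardening law
      # sigma = K * eps^n and a Coulomb friction coefficient mu (illustrative numbers only).
      nominal = {"K": 550.0, "n": 0.22, "mu": 0.12}
      stddev  = {"K": 15.0,  "n": 0.01, "mu": 0.01}

      def springback_angle(K, n, mu):
          """Stand-in response surface mapping material/friction inputs to a springback
          angle [deg]; in the study this role is played by the FE simulation or the
          fitted analytical model, not by this hypothetical polynomial."""
          dK, dn, dmu = K - 550.0, n - 0.22, mu - 0.12
          return 6.0 + 0.012 * dK - 45.0 * dn - 20.0 * dmu + 0.08 * dn * dK

      # Monte Carlo propagation of the Gaussian input scatter to the part geometry.
      samples = {p: rng.normal(nominal[p], stddev[p], size=20000) for p in nominal}
      angles = springback_angle(samples["K"], samples["n"], samples["mu"])

      mean, std = angles.mean(), angles.std(ddof=1)
      lsl, usl = 5.0, 7.0                      # assumed tolerance on the springback angle [deg]
      cpk = min(usl - mean, mean - lsl) / (3.0 * std)
      print(f"springback angle: mean {mean:.2f} deg, std {std:.3f} deg, Cpk {cpk:.2f}")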

  12. Dynamics Modeling and Simulation of Large Transport Airplanes in Upset Conditions

    NASA Technical Reports Server (NTRS)

    Foster, John V.; Cunningham, Kevin; Fremaux, Charles M.; Shah, Gautam H.; Stewart, Eric C.; Rivers, Robert A.; Wilborn, James E.; Gato, William

    2005-01-01

    As part of NASA's Aviation Safety and Security Program, research has been in progress to develop aerodynamic modeling methods for simulations that accurately predict the flight dynamics characteristics of large transport airplanes in upset conditions. The motivation for this research stems from the recognition that simulation is a vital tool for addressing loss-of-control accidents, including applications to pilot training, accident reconstruction, and advanced control system analysis. The ultimate goal of this effort is to contribute to the reduction of the fatal accident rate due to loss-of-control. Research activities have involved accident analyses, wind tunnel testing, and piloted simulation. Results have shown that significant improvements in simulation fidelity for upset conditions, compared to current training simulations, can be achieved using state-of-the-art wind tunnel testing and aerodynamic modeling methods. This paper provides a summary of research completed to date and includes discussion on key technical results, lessons learned, and future research needs.

  13. Designing an Advanced Instructional Design Advisor: Principles of Instructional Design. Volume 2

    DTIC Science & Technology

    1991-05-01

    ...ones contained in this paper would comprise a substantial part of the knowledge base for the AIDA. ... the classroom (e.g., computer simulation models can be used to enhance CBI). The Advanced Instructional Design Advisor is a project aimed at providing ... model shares with its variations. Tennyson then identifies research-based prescriptions from the cognitive sciences which should become part of ISD in ...

  14. Spin-up simulation behaviors in a climate model to build a basement of long-time simulation

    NASA Astrophysics Data System (ADS)

    Lee, J.; Xue, Y.; De Sales, F.

    2015-12-01

    It is essential to develop start-up information when conducting a long-time climate simulation. If an initial condition is already available from a previous simulation with the same type of model, this is not necessary; if not, the model needs a spin-up simulation to obtain an initial condition that is adjusted to, and balanced with, the model climatology. Otherwise, the spin-up may take several years. Some model variables in the initial fields, such as deep soil temperature and deep-ocean temperature, affect the model's subsequent long-time simulation because of their long residual memories. To investigate the important factors for spin-up simulation in producing an atmospheric initial condition, we conducted two different spin-up simulations for the case in which no atmospheric condition is available from existing datasets. One simulation employed an atmospheric general circulation model (AGCM), the Global Forecast System (GFS) of the National Centers for Environmental Prediction (NCEP), while the other employed a coupled atmosphere-ocean general circulation model (CGCM), the NCEP Climate Forecast System (CFS). Both models share the atmospheric component; the only difference is the ocean coupling, which in CFS is provided by the Modular Ocean Model version 4 (MOM4) of the Geophysical Fluid Dynamics Laboratory (GFDL). During a decade of spin-up simulation, prescribed sea-surface temperature (SST) fields for the target year were applied to the GFS on a daily basis, while the CFS ingested only the first-time-step ocean condition and iterated freely for the rest of the period. Both models were forced with the CO2 concentration and solar constant of the target year. Our analyses of the spin-up simulation results indicate that free interaction between the ocean and the atmosphere is more helpful for producing the initial condition for the target year than fixed SST forcing. Since the GFS used prescribed forcing taken exactly from the target year, this result is unexpected. The detailed analysis will be discussed in this presentation.

  15. Memory Circuit Fault Simulator

    NASA Technical Reports Server (NTRS)

    Sheldon, Douglas J.; McClure, Tucker

    2013-01-01

    Spacecraft are known to experience significant memory part-related failures and problems, both pre- and post-launch. These memory parts include both static and dynamic memories (SRAM and DRAM). These failures manifest themselves in a variety of ways, such as pattern-sensitive failures, timing-sensitive failures, etc. Because of the mission-critical role memory devices play in spacecraft architecture and operation, understanding their failure modes is vital to successful mission operation. To support this need, a generic simulation tool that can model different data patterns in conjunction with variable write and read conditions was developed. This tool is a mathematical and graphical way to embed pattern, electrical, and physical information to perform what-if analysis as part of a root cause failure analysis effort.
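
    A minimal sketch of this kind of what-if analysis is shown below: a data pattern is written to a simulated memory array, a simple neighbourhood pattern-sensitive fault is injected at one victim cell, and the read-back is compared against the written pattern. The array size, fault model, and test patterns are illustrative assumptions, not features of the JPL tool.

      import numpy as np

      ROWS, COLS = 64, 64

      def write_pattern(name):
          """Generate common memory test data patterns."""
          if name == "all0":
              return np.zeros((ROWS, COLS), dtype=np.uint8)
          if name == "all1":
              return np.ones((ROWS, COLS), dtype=np.uint8)
          if name == "checkerboard":
              r, c = np.indices((ROWS, COLS))
              return ((r + c) % 2).astype(np.uint8)
          raise ValueError(name)

      def read_with_fault(array, victim=(10, 10)):
          """Illustrative neighbourhood pattern-sensitive fault model: the victim cell
          reads back inverted when all four physical neighbours hold 1."""
          out = array.copy()
          r, c = victim
          neighbours = [array[r - 1, c], array[r + 1, c], array[r, c - 1], array[r, c + 1]]
          if all(v == 1 for v in neighbours):
              out[r, c] ^= 1
          return out

      for pattern in ("all0", "all1", "checkerboard"):
          written = write_pattern(pattern)
          read = read_with_fault(written)
          errors = int(np.count_nonzero(written != read))
          print(f"{pattern:13s}: {errors} miscompare(s)")

    Running this shows why pattern choice matters: the all-zeros background never exposes the injected fault, while the all-ones and checkerboard patterns do.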

  16. Effects of recharge, Upper Floridan aquifer heads, and time scale on simulated ground-water exchange with Lake Starr, a seepage lake in central Florida

    USGS Publications Warehouse

    Swancar, Amy; Lee, Terrie Mackin

    2003-01-01

    Lake Starr and other lakes in the mantled karst terrain of Florida's Central Lake District are surrounded by a conductive surficial aquifer system that receives highly variable recharge from rainfall. In addition, downward leakage from these lakes varies as heads in the underlying Upper Floridan aquifer change seasonally and with pumpage. A saturated three-dimensional finite-difference ground-water flow model was used to simulate the effects of recharge, Upper Floridan aquifer heads, and model time scale on ground-water exchange with Lake Starr. The lake was simulated as an active part of the model using high hydraulic conductivity cells. Simulated ground-water flow was compared to net ground-water flow estimated from a rigorously derived water budget for the 2-year period August 1996-July 1998. Calibrating saturated ground-water flow models with monthly stress periods to a monthly lake water budget will result in underpredicting gross inflow to, and leakage from, ridge lakes in Florida. Underprediction of ground-water inflow occurs because recharge stresses and ground-water flow responses during rainy periods are averaged over too long a time period using monthly stress periods. When inflow is underestimated during calibration, leakage also is underestimated because inflow and leakage are correlated if lake stage is maintained over the long term. Underpredicted leakage reduces the implied effect of ground-water withdrawals from the Upper Floridan aquifer on the lake. Calibrating the weekly simulation required accounting for transient responses in the water table near the lake that generated the greater range of net ground-water flow values seen in the weekly water budget. Calibrating to the weekly lake water budget also required increasing the value of annual recharge in the nearshore region well above the initial estimate of 35 percent of the rainfall, and increasing the hydraulic conductivity of the deposits around and beneath the lake. To simulate the total ground-water inflow to lakes, saturated-flow models of lake basins need to account for the potential effects of rapid and efficient recharge in the surficial aquifer system closest to the lake. In this part of the basin, the ability to accurately estimate recharge is crucial because the water table is shallowest and the response time between rainfall and recharge is shortest. Use of the one-dimensional LEACHM model to simulate the effects of the unsaturated zone on the timing and magnitude of recharge in the nearshore improved the simulation of peak values of ground-water inflow to Lake Starr. Results of weekly simulations suggest that weekly recharge can approach the majority of weekly rainfall on the nearshore part of the lake basin. However, even though a weekly simulation with higher recharge in the nearshore was able to reproduce the extremes of ground-water exchange with the lake more accurately, it was not consistently better at predicting net ground-water flow within the water budget error than a simulation with lower recharge. The more subtle effects of rainfall and recharge on ground-water inflow to the lake were more difficult to simulate. The use of variably saturated flow modeling, with time scales that are shorter than weekly and finer spatial discretization, is probably necessary to understand these processes. 
The basin-wide model of Lake Starr had difficulty simulating the full spectrum of ground-water inflows observed in the water budget because of insufficient information about recharge to ground water, and because of practical limits on spatial and temporal discretization in a model at this scale. In contrast, the saturated flow model appeared to successfully simulate the effects of heads in the Upper Floridan aquifer on water levels and ground-water exchange with the lake at both weekly and monthly stress periods. Most of the variability in lake leakage can be explained by the average vertical head difference between the lake and a re

  17. Analysis and modeling of subgrid scalar mixing using numerical data

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Zhou, YE

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.

  18. Mid-Holocene and last glacial maximum climate simulations with the IPSL model: part II: model-data comparisons

    NASA Astrophysics Data System (ADS)

    Kageyama, Masa; Braconnot, Pascale; Bopp, Laurent; Mariotti, Véronique; Roy, Tilla; Woillez, Marie-Noëlle; Caubel, Arnaud; Foujols, Marie-Alice; Guilyardi, Eric; Khodri, Myriam; Lloyd, James; Lombard, Fabien; Marti, Olivier

    2013-05-01

    The climates of the mid-Holocene (MH, 6,000 years ago) and the Last Glacial Maximum (LGM, 21,000 years ago) have been extensively documented and, as such, have become targets for the evaluation of climate models for climate contexts very different from the present. In Part 1 of the present work, we have studied the MH and LGM simulations performed with the last two versions of the IPSL model: IPSL_CM4, run for the PMIP2/CMIP3 (Coupled Model Intercomparison Project) projects, and IPSL_CM5A, run for the most recent PMIP3/CMIP5 projects. We have shown that these models differ not only in their simulations of the PI climate, but also in their simulations of the climatic anomalies for the MH and LGM. In Part 2 of this paper, we first examine whether palaeo-data can help discriminate between the model performances. This is indeed the case for the African monsoon for the MH or for North America south of the Laurentide ice sheet, the South Atlantic or the southern Indian ocean for the LGM. For the LGM, off-line vegetation modelling appears to offer good opportunities to distinguish climate model results because glacial vegetation proves to be very sensitive to even small differences in LGM climate. For other cases such as the LGM North Atlantic or the LGM equatorial Pacific, the large uncertainty on the SST reconstructions prevents model discrimination. We have examined the use of other proxy-data for model evaluation, which has become possible with the inclusion of the biogeochemistry model PISCES in the IPSL_CM5A model. We show a broad agreement of the LGM-PI export production changes with reconstructions. These changes are related to the mixed layer depth in most regions and to sea-ice variations in the high latitudes. We have also modelled foraminifer abundances with the FORAMCLIM model and shown that the changes in foraminifer abundance in the equatorial Pacific are mainly forced by changes in SSTs, hence confirming the SST-foraminifer abundance relationship. Yet, this is not the case in all regions in the North Atlantic, where food availability can have a strong impact on foraminifer abundances. Further work will be needed to exhaustively examine the role of factors other than climate in driving changes in palaeo-indicators.

  19. Magnetic Testing, and Modeling, Simulation and Analysis for Space Applications

    NASA Technical Reports Server (NTRS)

    Boghosian, Mary; Narvaez, Pablo; Herman, Ray

    2012-01-01

    The Aerospace Corporation (Aerospace) and Lockheed Martin Space Systems (LMSS) participated with Jet Propulsion Laboratory (JPL) in the implementation of a magnetic cleanliness program for the NASA/JPL JUNO mission. The magnetic cleanliness program was applied from early flight system development up through system level environmental testing. The JUNO magnetic cleanliness program required setting up a specialized magnetic test facility at Lockheed Martin Space Systems for testing the flight system, and a testing program, with a facility at JPL, for testing system parts and subsystems. The magnetic modeling, simulation and analysis capability was set up and performed by Aerospace to provide qualitative and quantitative magnetic assessments of the magnetic parts, components, and subsystems prior to or in lieu of magnetic tests. Because of the sensitive nature of the fields and particles scientific measurements being conducted by the JUNO space mission to Jupiter, the imposition of stringent magnetic control specifications required a magnetic control program to ensure that the spacecraft's science magnetometers and plasma wave search coil were not magnetically contaminated by flight system magnetic interferences. With Aerospace's magnetic modeling, simulation and analysis, JPL's system modeling and testing approach, and LMSS's test support, the project arrived at a cost-effective approach to achieving a magnetically clean spacecraft. This paper presents lessons learned from the JUNO magnetic testing approach and Aerospace's modeling, simulation and analysis activities used to solve problems such as remanent magnetization and the performance of hard and soft magnetic materials within the targeted space system in applied external magnetic fields.

  20. Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, "Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory," describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application.
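
    The adaption step itself can be pictured as a regularized discrete inverse problem: given a sensitivity matrix relating input parameters to observables, solve for the parameter adjustments that best reproduce the measured-minus-simulated residuals. The sketch below uses random stand-in matrices rather than reactor data, and plain Tikhonov regularization rather than the specific treatment of the companion Part I paper.

      import numpy as np

      rng = np.random.default_rng(7)

      n_params, n_obs = 200, 40            # many input parameters, fewer observables
      S = rng.normal(size=(n_obs, n_params)) / np.sqrt(n_params)   # sensitivity matrix d(obs)/d(param)

      # "True" parameter perturbation the adaption should recover (stand-in data).
      dp_true = np.zeros(n_params)
      dp_true[rng.choice(n_params, 10, replace=False)] = rng.normal(scale=0.5, size=10)

      residual = S @ dp_true + rng.normal(scale=0.01, size=n_obs)  # measured minus simulated observables

      # Tikhonov-regularized least squares: min ||S dp - residual||^2 + lam ||dp||^2
      lam = 0.05
      dp_adapt = np.linalg.solve(S.T @ S + lam * np.eye(n_params), S.T @ residual)

      before = np.linalg.norm(residual)
      after = np.linalg.norm(residual - S @ dp_adapt)
      print(f"observable mismatch: {before:.3f} -> {after:.3f} after adaption")

    The regularization term is what keeps the adjustment of a very large parameter set well posed when only a limited number of observables is available.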

  1. Simulating Heinrich events in a coupled atmosphere-ocean-ice sheet model

    NASA Astrophysics Data System (ADS)

    Mikolajewicz, Uwe; Ziemen, Florian

    2016-04-01

    Heinrich events are among the most prominent events of long-term climate variability recorded in proxies across the northern hemisphere. They are the archetype of ice sheet - climate interactions on millennial time scales. Nevertheless, the exact mechanisms that cause Heinrich events are still under discussion, and their climatic consequences are far from being fully understood. We contribute to answering the open questions by studying Heinrich events in a coupled ice sheet model (ISM) - atmosphere-ocean-vegetation general circulation model (AOVGCM) framework, where this variability occurs as part of the model-generated internal variability without the need to prescribe external perturbations, as was the standard approach in almost all model studies so far. The setup consists of a northern hemisphere configuration of the modified Parallel Ice Sheet Model (mPISM) coupled to the global coarse resolution AOVGCM ECHAM5/MPIOM/LPJ. The simulations used for this analysis were an ensemble covering substantial parts of the late Glacial, forced with transient insolation and prescribed atmospheric greenhouse gas concentrations. The modeled Heinrich events show a marked influence of the ice discharge on the Atlantic circulation and heat transport, but none of the Heinrich events during the Glacial showed a complete collapse of the North Atlantic meridional overturning circulation. The simulated main consequences of the Heinrich events are a freshening and cooling over the North Atlantic and a drying over northern Europe.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson III, David J

    The climate of the last glacial maximum (LGM) is simulated with a high-resolution atmospheric general circulation model, the NCAR CCM3 at spectral truncation of T170, corresponding to a grid cell size of roughly 75 km. The purpose of the study is to assess whether there are significant benefits from the higher resolution simulation compared to the lower resolution simulation associated with the role of topography. The LGM simulations were forced with modified CLIMAP sea ice distribution and sea surface temperatures (SST) reduced by 1 C, ice sheet topography, reduced CO2, and 21,000 BP orbital parameters. The high-resolution model captures modern climate reasonably well, in particular the distribution of heavy precipitation in the tropical Pacific. For the ice age case, surface temperature simulated by the high-resolution model agrees better with proxy estimates than does the low-resolution model. Despite the fact that tropical SSTs were only 2.1 C less than the control run, there are many lowland tropical land areas 4-6 C colder than present. Comparison of T170 model results with the best constrained proxy temperature estimates (noble gas concentrations in groundwater) now yields no significant differences between model and observations. There are also significant upland temperature changes in the best resolved tropical mountain belt (the Andes). We provisionally attribute this result in part to decreased lateral mixing between ocean and land in a model with more model grid cells. A longstanding model-data discrepancy therefore appears to be resolved without invoking any unusual model physics. The response of the Asian summer monsoon can also be more clearly linked to local geography in the high-resolution model than in the low-resolution model; this distinction should enable more confident validation of climate proxy data with the high-resolution model. Elsewhere, an inferred salinity increase in the subtropical North Atlantic may have significant implications for ocean circulation changes during the LGM. A large part of the Amazon and Congo Basins is simulated to be substantially drier in the ice age - consistent with many (but not all) paleo data. These results suggest that there are considerable benefits derived from the high-resolution model regarding regional climate responses, and that observationalists can now compare their results with models that resolve geography at a resolution comparable to that which the proxy data represent.

  3. Modelling of Dynamic Rock Fracture Process with a Rate-Dependent Combined Continuum Damage-Embedded Discontinuity Model Incorporating Microstructure

    NASA Astrophysics Data System (ADS)

    Saksala, Timo

    2016-10-01

    This paper deals with numerical modelling of rock fracture under dynamic loading. To this end, a combined continuum damage-embedded discontinuity model is applied in finite element modelling of crack propagation in rock. In this model, the strong loading rate sensitivity of rock is captured by the rate-dependent continuum scalar damage model that controls the pre-peak nonlinear hardening part of rock behaviour. The post-peak exponential softening part of the rock behaviour is governed by the embedded displacement discontinuity model describing the mode I, mode II and mixed mode fracture of rock. Rock heterogeneity is incorporated in the present approach by a random description of the rock mineral texture based on the Voronoi tessellation. The model performance is demonstrated in numerical examples where the uniaxial tension and compression tests on rock are simulated. Finally, the dynamic three-point bending test of a semicircular disc is simulated in order to show that the model correctly predicts the strain rate-dependent tensile strengths as well as the failure modes of rock in this test. Special emphasis is laid on modelling the loading rate sensitivity of tensile strength of Laurentian granite.

  4. Effect of bar cross-section geometry on stress distribution in overdenture-retaining system simulating horizontal misfit and bone loss.

    PubMed

    Spazzin, Aloísio Oro; Costa, Ana Rosa; Correr, Américo Bortolazzo; Consani, Rafael Leonardo Xediek; Correr-Sobrinho, Lourenço; dos Santos, Mateus Bertolini Fernandes

    2013-08-09

    This study evaluated the influence of cross-section geometry of the bar framework on the distribution of static stresses in an overdenture-retaining bar system simulating horizontal misfit and bone loss. Three-dimensional FE models were created including two titanium implants and three cross-section geometries (circular, ovoid or Hader) of bar framework placed in the anterior part of a severely resorbed jaw. One model with 1.4-mm vertical loss of the peri-implant tissue was also created. The model set was exported to mechanical simulation software, where horizontal displacement (10, 50 or 100 μm) was applied simulating the settling of the framework, which suffered shrinkage during the laboratory procedures. The material used for the bar framework was a cobalt-chromium alloy. For evaluation of the bone loss effect, only the 50-μm horizontal misfit was simulated. Data were qualitatively and quantitatively evaluated using von Mises stress for the mechanical part and maximum principal stress and μ-strain for peri-implant bone tissue given by the software. Stresses were concentrated along the bar and in the joint between the bar and the cylinder. In the peri-implant bone tissue, the μ-strain was higher in the cervical third. Higher stress levels and μ-strain were found for the models using the Hader bar. The simulated bone loss produced a considerable increase in maximum principal stresses and μ-strain in the peri-implant bone tissue. In addition, amplification of the horizontal misfit, higher complexity of the bar cross-section geometry, and bone loss increase the levels of static stresses in the peri-implant bone tissue.

  5. A NEW THREE-DIMENSIONAL SOLAR WIND MODEL IN SPHERICAL COORDINATES WITH A SIX-COMPONENT GRID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Xueshang; Zhang, Man; Zhou, Yufen, E-mail: fengx@spaceweather.ac.cn

    In this paper, we introduce a new three-dimensional magnetohydrodynamics numerical model to simulate the steady state ambient solar wind from the solar surface to 215 R_s or beyond, and the model adopts a splitting finite-volume scheme based on a six-component grid system in spherical coordinates. By splitting the magnetohydrodynamics equations into a fluid part and a magnetic part, a finite volume method can be used for the fluid part and a constrained-transport method able to maintain the divergence-free constraint on the magnetic field can be used for the magnetic induction part. This new second-order model in space and time is validated when modeling the large-scale structure of the solar wind. The numerical results for Carrington rotation 2064 show its ability to produce structured solar wind in agreement with observations.

  6. Multi-scale Modeling of Arctic Clouds

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scales of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations to explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  7. Sea ice simulations based on fields generated by the GLAS GCM. [Goddard Laboratory for Atmospheric Sciences General Circulation Model

    NASA Technical Reports Server (NTRS)

    Parkinson, C. L.; Herman, G. F.

    1980-01-01

    The GLAS General Circulation Model (GCM) was applied to the four-month simulation of the thermodynamic part of the Parkinson-Washington sea ice model using atmospheric boundary conditions. The sea ice thickness and distribution were predicted for the Jan. 1-Apr. 30 period using the GCM fields of solar and infrared radiation, specific humidity and air temperature at the surface, and snow accumulation; the sensible heat and evaporative surface fluxes were consistent with the ground temperatures produced by the ice model and the air temperatures determined by the atmospheric model. It was concluded that the Parkinson-Washington sea ice model results in acceptable ice concentrations and thicknesses when used with the GLAS GCM for the Jan.-Apr. period, suggesting the feasibility of fully coupled ice-atmosphere simulations with these two approaches.

  8. AGCM Biases in Evaporation Regime: Impacts on Soil Moisture Memory and Land-Atmosphere Feedback

    NASA Technical Reports Server (NTRS)

    Mahanama, Sarith P. P.; Koster, Randal D.

    2005-01-01

    Because precipitation and net radiation in an atmospheric general circulation model (AGCM) are typically biased relative to observations, the simulated evaporative regime of a region may be biased, with consequent negative effects on the AGCM's ability to translate an initialized soil moisture anomaly into an improved seasonal prediction. These potential problems are investigated through extensive offline analyses with the Mosaic land surface model (LSM). We first forced the LSM globally with a 15-year observations-based dataset. We then repeated the simulation after imposing a representative set of GCM climate biases onto the forcings - the observational forcings were scaled so that their mean seasonal cycles matched those simulated by the NSIPP-1 (NASA Global Modeling and Assimilation Office) AGCM over the same period. The AGCM's climate biases do indeed lead to significant biases in evaporative regime in certain regions, with the expected impacts on soil moisture memory timescales. Furthermore, the offline simulations suggest that the biased forcing in the AGCM should contribute to overestimated feedback in certain parts of North America - parts already identified in previous studies as having excessive feedback. The present study thus supports the notion that the reduction of climate biases in the AGCM will lead to more appropriate translations of soil moisture initialization into seasonal prediction skill.

  9. Synthesis of a hybrid model of the VSC FACTS devices and HVDC technologies

    NASA Astrophysics Data System (ADS)

    Borovikov, Yu S.; Gusev, A. S.; Sulaymanov, A. O.; Ufa, R. A.

    2014-10-01

    The motivation for the presented research is the need to develop new methods and tools for adequate simulation of FACTS devices and HVDC systems as part of real electric power systems (EPS). Research object: an alternative hybrid approach for synthesizing a VSC-FACTS and -HVDC hybrid model is proposed. Results: the VSC-FACTS and -HVDC hybrid model is designed in accordance with the presented concepts of hybrid simulation. The developed model allows us to carry out adequate real-time simulation of all the processes in HVDC, FACTS devices and the EPS as a whole, without any decomposition or limitation on their duration, and also allows the developed tool to be used for the effective solution of design, operational and research tasks for EPS containing such devices.

  10. A Numerical Simulation and Statistical Modeling of High Intensity Radiated Fields Experiment Data

    NASA Technical Reports Server (NTRS)

    Smith, Laura J.

    2004-01-01

    Tests are conducted on a quad-redundant fault-tolerant flight control computer to establish upset characteristics of an avionics system in an electromagnetic field. A numerical simulation and statistical model are described in this work to analyze the open-loop experiment data collected in the reverberation chamber at NASA LaRC as part of an effort to examine the effects of electromagnetic interference on fly-by-wire aircraft control systems. By comparing thousands of simulation and model outputs, the models that best describe the data are first identified, and then a systematic statistical analysis is performed on the data. All of these efforts are combined, culminating in an extrapolation of values that are in turn used to support previous efforts in evaluating the data.
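
    A typical statistical-modeling step of this kind is to fit several candidate distributions to the measured upset data, select the one that best describes the data, and extrapolate quantiles beyond the observed range. The sketch below illustrates that workflow on synthetic threshold data with an AIC-based selection; the data, candidate families, and criterion are assumptions about a generic workflow, not the models used for the LaRC experiment.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      # Synthetic "upset threshold" field strengths [V/m] standing in for experiment data.
      data = stats.lognorm(s=0.4, scale=120.0).rvs(size=300, random_state=rng)

      candidates = {
          "lognormal": stats.lognorm,
          "weibull":   stats.weibull_min,
          "gamma":     stats.gamma,
      }

      best_name, best_aic, best_fit = None, np.inf, None
      for name, dist in candidates.items():
          params = dist.fit(data)                       # maximum-likelihood fit
          loglik = np.sum(dist.logpdf(data, *params))
          aic = 2 * len(params) - 2 * loglik            # Akaike information criterion
          if aic < best_aic:
              best_name, best_aic, best_fit = name, aic, (dist, params)
          print(f"{name:9s}: AIC = {aic:8.1f}")

      dist, params = best_fit
      print(f"best model: {best_name}")
      print(f"extrapolated 0.1% quantile: {dist.ppf(0.001, *params):.1f} V/m")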

  11. Hydrological risks in anthropized watersheds: modeling of hazard, vulnerability and impacts on population from south-west of Madagascar

    NASA Astrophysics Data System (ADS)

    Mamy Rakotoarisoa, Mahefa; Fleurant, Cyril; Taibi, Nuscia; Razakamanana, Théodore

    2016-04-01

    Hydrological risks, especially flood risks, are recurrent in the Fiherenana watershed in south-west Madagascar. The city of Toliara, which is located at the outlet of the river basin, is subjected each year to hurricane hazards and floods. The stakes are of major importance in this part of the island. This study begins with the analysis of the hazard by collecting all existing hydro-climatic data on the catchment. It then seeks to determine trends, despite the significant lack of data, using simple statistical models (decomposition of time series). Two approaches are then conducted to assess the vulnerability of the city of Toliara and the surrounding villages. The first is a static approach, based on field surveys and the use of GIS. The second method is the use of a multi-agent-based simulation model. The first step is the mapping of a vulnerability index which is the arrangement of several static criteria. This is a microscale indicator (the scale used is the individual house). For each house, there are several criteria of vulnerability, such as the potential water depth, the flow rate, or the architectural typology of the building. For the second part, simulations involving agents are used in order to evaluate the degree of vulnerability of homes to flooding. Agents are individual entities to which we can assign behaviours in order to simulate a given phenomenon. The aim is not to assign a criterion to the house as a physical building, such as its architectural typology or its strength; rather, the model estimates the chances of the occupants of the house escaping a catastrophic flood. For this purpose, we compare various settings and scenarios. Some scenarios take into account the effect of certain decisions made by the responsible entities (for example, information and awareness campaigns for the villagers). The simulation consists of two essential parts taking place simultaneously in time: simulation of the rise of water and the flow, using classical hydrological functions (production and transfer functions), and simulation of the behaviour of the people facing the arrival of the hazard, using the multi-agent system.
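
    The agent part of such a model can be reduced to a very small sketch: agents along a transect start moving toward a shelter after an individual reaction delay while a flood front advances inland, and the model counts who escapes and who is caught. The geometry, walking speeds, reaction times, and hydrograph below are illustrative assumptions, not the Toliara configuration.

      import numpy as np

      rng = np.random.default_rng(5)

      N_AGENTS = 200
      positions = rng.uniform(0.0, 500.0, N_AGENTS)   # distance [m] from the river along a transect
      speeds    = rng.uniform(0.5, 1.5, N_AGENTS)     # walking speed [m/s]
      reaction  = rng.uniform(0.0, 600.0, N_AGENTS)   # delay [s] before each household reacts to the warning
      SHELTER   = 800.0                               # location of safe high ground [m]

      def flood_front(t):
          """Illustrative hydrograph: the flood front advances inland at 0.4 m/s after a 5-min onset."""
          return max(0.0, 0.4 * (t - 300.0))

      escaped = np.zeros(N_AGENTS, dtype=bool)
      caught  = np.zeros(N_AGENTS, dtype=bool)

      dt, t = 5.0, 0.0
      while t < 4000.0 and not np.all(escaped | caught):
          moving = (~escaped) & (~caught) & (t >= reaction)
          positions[moving] += speeds[moving] * dt          # agents walk toward the shelter
          escaped |= positions >= SHELTER
          caught  |= (~escaped) & (positions <= flood_front(t))
          t += dt

      print(f"escaped: {escaped.sum()} / {N_AGENTS}, caught by the flood: {caught.sum()}")

    Varying the reaction-time distribution in such a sketch is one way to represent the effect of information and awareness campaigns on the outcome.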

  12. Southern Regional Center for Lightweight Innovative Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Paul T.

    The Southern Regional Center for Lightweight Innovative Design (SRCLID) has developed an experimentally validated cradle-to-grave modeling and simulation effort to optimize automotive components in order to decrease weight and cost, yet increase performance and safety in crash scenarios. In summary, the three major objectives of this project are accomplished: To develop experimentally validated cradle-to-grave modeling and simulation tools to optimize automotive and truck components for lightweighting materials (aluminum, steel, and Mg alloys and polymer-based composites) with consideration of uncertainty to decrease weight and cost, yet increase the performance and safety in impact scenarios; To develop multiscale computational models that quantify microstructure-property relations by evaluating various length scales, from the atomic through component levels, for each step of the manufacturing process for vehicles; and To develop an integrated K-12 educational program to educate students on lightweighting designs and impact scenarios. In this final report, we divided the content into two parts: the first part contains the development of building blocks for the project, including materials and process models, process-structure-property (PSP) relationship, and experimental validation capabilities; the second part presents the demonstration task for Mg front-end work associated with USAMP projects.

  13. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (e.g., soil porosity), while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e., only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.
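
    The mean-variance idea can be sketched as a chance-constrained design problem: the proxy simulator's prediction is shifted by the mean of the residuals and padded by a multiple of their standard deviation, and the cheapest pumping rate satisfying the padded constraint is selected. The proxy function, residual statistics, and regulatory limit below are hypothetical placeholders, not the SOMUM formulation or its case-study values.

      import numpy as np
      from scipy import stats

      # Proxy simulator (stand-in): predicted contaminant concentration [mg/L] at a
      # compliance well as a function of total pumping rate q [m^3/d].
      def proxy_concentration(q):
          return 12.0 * np.exp(-q / 400.0)

      # Modeling uncertainty: simulator residuals characterized by a mean and standard
      # deviation (illustrative values standing in for residual statistics).
      resid_mean, resid_std = 0.5, 0.8
      STANDARD = 5.0            # regulatory concentration limit [mg/L]
      CONF     = 0.95           # required confidence of compliance

      rates = np.linspace(0.0, 2000.0, 401)
      mean_c = proxy_concentration(rates) + resid_mean
      # Chance constraint: P(concentration <= STANDARD) >= CONF under a normal residual.
      z = stats.norm.ppf(CONF)
      feasible = mean_c + z * resid_std <= STANDARD

      best = np.argmax(feasible)          # smallest feasible rate = lowest pumping cost
      print(f"minimum pumping rate meeting the standard at {CONF:.0%} confidence: "
            f"{rates[best]:.0f} m^3/d (mean concentration {mean_c[best]:.2f} mg/L)")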

  14. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    PubMed

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data featuring an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of the subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
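
    Stripped of the random effects and the Bayesian machinery, the two-part idea can be sketched as follows: Part I models the occurrence of a positive value with a logistic model, and Part II models the intensity of the positive values (here on the log scale) with a skew-normal distribution. The synthetic data, single covariate, and use of scikit-learn and scipy below are assumptions for illustration only, not the paper's specification.

      import numpy as np
      from scipy import stats
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(11)
      n = 1000
      x = rng.normal(size=n)                                      # a single covariate

      # Synthetic semicontinuous outcome: excess zeros plus right-skewed positives.
      p_pos = 1.0 / (1.0 + np.exp(-(-0.3 + 0.8 * x)))             # true occurrence probability
      positive = rng.uniform(size=n) < p_pos
      y = np.zeros(n)
      y[positive] = np.exp(
          stats.skewnorm(a=4.0, loc=1.0 + 0.5 * x[positive], scale=1.0).rvs(random_state=rng)
      )

      # Part I: occurrence of a positive value (logistic model).
      part1 = LogisticRegression().fit(x.reshape(-1, 1), positive.astype(int))
      print("Part I slope (occurrence):", part1.coef_[0][0])

      # Part II: intensity of positive values, fitted on the log scale with a skew-normal.
      a, loc, scale = stats.skewnorm.fit(np.log(y[positive]))
      print(f"Part II skew-normal fit: shape {a:.2f}, loc {loc:.2f}, scale {scale:.2f}")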

  15. System Dynamics Modeling for Proactive Intelligence

    DTIC Science & Technology

    2010-01-01

    Excerpt from the report's table of contents: 4. Modeling Resources as Part of an Integrated Multi-Methodology System; 5. Formalizing Pro-Active ... Observable Data With and Without Simulation Analysis; Figure 13. Summary of Probe Methodology and Results ... Strategy; Figure 22. Overview of Methodology.

  16. A new multi-layer approach for progressive damage simulation in composite laminates based on isogeometric analysis and Kirchhoff-Love shells. Part II: impact modeling

    NASA Astrophysics Data System (ADS)

    Pigazzini, M. S.; Bazilevs, Y.; Ellison, A.; Kim, H.

    2017-11-01

    In this two-part paper we introduce a new formulation for modeling progressive damage in laminated composite structures. We adopt a multi-layer modeling approach, based on isogeometric analysis, where each ply or lamina is represented by a spline surface, and modeled as a Kirchhoff-Love thin shell. Continuum damage mechanics is used to model intralaminar damage, and a new zero-thickness cohesive-interface formulation is introduced to model delamination as well as permitting laminate-level transverse shear compliance. In Part I of this series we focus on the presentation of the modeling framework, validation of the framework using standard Mode I and Mode II delamination tests, and assessment of its suitability for modeling thick laminates. In Part II of this series we focus on the application of the proposed framework to modeling and simulation of damage in composite laminates resulting from impact. The proposed approach has significant accuracy and efficiency advantages over existing methods for modeling impact damage. These stem from the use of IGA-based Kirchhoff-Love shells to represent the individual plies of the composite laminate, while the compliant cohesive interfaces enable transverse shear deformation of the laminate. Kirchhoff-Love shells give a faithful representation of the ply deformation behavior, and, unlike solids or traditional shear-deformable shells, do not suffer from transverse-shear locking in the limit of vanishing thickness. This, in combination with higher-order accurate and smooth representation of the shell midsurface displacement field, allows us to adopt relatively coarse in-plane discretizations without sacrificing solution accuracy. Furthermore, the thin-shell formulation employed does not use rotational degrees of freedom, which gives additional efficiency benefits relative to more standard shell formulations.

  17. A new multi-layer approach for progressive damage simulation in composite laminates based on isogeometric analysis and Kirchhoff-Love shells. Part I: basic theory and modeling of delamination and transverse shear

    NASA Astrophysics Data System (ADS)

    Bazilevs, Y.; Pigazzini, M. S.; Ellison, A.; Kim, H.

    2017-11-01

    In this two-part paper we introduce a new formulation for modeling progressive damage in laminated composite structures. We adopt a multi-layer modeling approach, based on Isogeometric Analysis (IGA), where each ply or lamina is represented by a spline surface, and modeled as a Kirchhoff-Love thin shell. Continuum Damage Mechanics is used to model intralaminar damage, and a new zero-thickness cohesive-interface formulation is introduced to model delamination as well as permitting laminate-level transverse shear compliance. In Part I of this series we focus on the presentation of the modeling framework, validation of the framework using standard Mode I and Mode II delamination tests, and assessment of its suitability for modeling thick laminates. In Part II of this series we focus on the application of the proposed framework to modeling and simulation of damage in composite laminates resulting from impact. The proposed approach has significant accuracy and efficiency advantages over existing methods for modeling impact damage. These stem from the use of IGA-based Kirchhoff-Love shells to represent the individual plies of the composite laminate, while the compliant cohesive interfaces enable transverse shear deformation of the laminate. Kirchhoff-Love shells give a faithful representation of the ply deformation behavior, and, unlike solids or traditional shear-deformable shells, do not suffer from transverse-shear locking in the limit of vanishing thickness. This, in combination with higher-order accurate and smooth representation of the shell midsurface displacement field, allows us to adopt relatively coarse in-plane discretizations without sacrificing solution accuracy. Furthermore, the thin-shell formulation employed does not use rotational degrees of freedom, which gives additional efficiency benefits relative to more standard shell formulations.

  18. Dynamic Simulation of Human Gait Model With Predictive Capability.

    PubMed

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively using classical feedback control theory, which acts on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compares the predicted output to the reference, and optimizes the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to produce kinematic output close to the experimental data.
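
    The receding-horizon idea summarized above (predict with an internal model over a horizon, compare against the reference, apply only the first optimized input) can be illustrated with a minimal, self-contained sketch. The discrete double-integrator "joint", the horizon length and the weights below are illustrative assumptions for a toy tracking problem; they are not the paper's seven-segment, nine-DOF plant or its controller settings.

```python
import numpy as np

# Minimal unconstrained MPC sketch: a double-integrator stand-in for one joint
# (illustrative only; the paper uses a 9-DOF, seven-segment gait model).
A = np.array([[1.0, 0.01], [0.0, 1.0]])   # discrete dynamics, dt = 10 ms
B = np.array([[0.0], [0.01]])
H = 20                                     # prediction horizon (steps)
Q, R = 100.0, 0.01                         # tracking vs. effort weights

def mpc_step(x, ref):
    """Choose the input sequence minimizing the predicted tracking error."""
    # Prediction: x_{k+1} = A^(k+1) x0 + sum_j A^(k-j) B u_j; track the first state.
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1)[0:1, :] for k in range(H)])
    G = np.zeros((H, H))
    for k in range(H):
        for j in range(k + 1):
            G[k, j] = (np.linalg.matrix_power(A, k - j) @ B)[0, 0]
    err = ref - Phi @ x
    u = np.linalg.solve(Q * G.T @ G + R * np.eye(H), Q * G.T @ err)
    return u[0]                            # receding horizon: apply only the first input

x = np.array([0.0, 0.0])                   # joint angle, angular velocity
for t in range(200):
    ref = np.full(H, 0.5)                  # constant reference angle (rad)
    u = mpc_step(x, ref)
    x = A @ x + (B * u).ravel()
print(f"final angle: {x[0]:.3f} rad")
```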

  19. Chimaera simulation of complex states of flowing matter

    PubMed Central

    2016-01-01

    We discuss a unified mesoscale framework (chimaera) for the simulation of complex states of flowing matter across scales of motion. The chimaera framework can deal with each of the three macro–meso–micro levels through suitable ‘mutations’ of the basic mesoscale formulation. The idea is illustrated through selected simulations of complex micro- and nanoscale flows. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698031

  20. Impacts of transportation sector emissions on future U.S. air quality in a changing climate. Part I: Projected emissions, simulation design, and model evaluation.

    PubMed

    Campbell, Patrick; Zhang, Yang; Yan, Fang; Lu, Zifeng; Streets, David

    2018-07-01

    Emissions from the transportation sector are rapidly changing worldwide; however, the interplay of such emission changes in the face of climate change is not as well understood. This two-part study examines the impact of projected emissions from the U.S. transportation sector (Part I) on ambient air quality in the face of climate change (Part II). In Part I of this study, we describe the methodology and results of a novel Technology Driver Model (see graphical abstract) that includes 1) transportation emission projections (including on-road vehicles, non-road engines, aircraft, rail, and ship) derived from a dynamic technology model that accounts for various technology and policy options under an IPCC emission scenario, and 2) the configuration/evaluation of a dynamically downscaled Weather Research and Forecasting/Community Multiscale Air Quality modeling system. By 2046-2050, the annual domain-average transportation emissions of carbon monoxide (CO), nitrogen oxides (NOx), volatile organic compounds (VOCs), ammonia (NH3), and sulfur dioxide (SO2) are projected to decrease over the continental U.S. The decreases in gaseous emissions are mainly due to reduced emissions from on-road vehicles and non-road engines, which exhibit spatial and seasonal variations across the U.S. Although particulate matter (PM) emissions widely decrease, some areas in the U.S. experience relatively large increases due to increases in ship emissions. The on-road vehicle emissions dominate the emission changes for CO, NOx, VOC, and NH3, while emissions from both the on-road and non-road modes have strong contributions to PM and SO2 emission changes. The evaluation of the baseline 2005 WRF simulation indicates that annual biases are close to or within the acceptable criteria for meteorological performance in the literature, and there is an overall good agreement in the 2005 CMAQ simulations of chemical variables against both surface and satellite observations. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Atmospheric Boundary Layer Dynamics Near Ross Island and Over West Antarctica.

    NASA Astrophysics Data System (ADS)

    Liu, Zhong

    The atmospheric boundary layer dynamics near Ross Island and over West Antarctica has been investigated. The study consists of two parts. The first part involved the use of data from ground-based remote sensing equipment (sodar and RASS), radiosondes, pilot balloons, automatic weather stations, and NOAA AVHRR satellite imagery. The second part involved the use of a high resolution boundary layer model coupled with a three-dimensional primitive equation mesoscale model to simulate the observed atmospheric boundary layer winds and temperatures. Turbulence parameters were simulated with an E-epsilon turbulence model driven by observed winds and temperatures. The observational analysis, for the first time, revealed that the airflow passing through the Ross Island area is supplied mainly by enhanced katabatic drainage from Byrd Glacier and secondarily drainage from Mulock and Skelton glaciers. The observed diurnal variation of the blocking effect near Ross Island is dominated by the changes in the upstream katabatic airflow. The synthesized analysis over West Antarctica found that the Siple Coast katabatic wind confluence zone consists of two superimposed katabatic airflows: a relatively warm and more buoyant katabatic flow from West Antarctica overlies a colder and less buoyant katabatic airflow from East Antarctica. The force balance analysis revealed that, inside the West Antarctic katabatic wind zone, the pressure gradient force associated with the blocked airflow against the Transantarctic Mountains dominates; inside the East Antarctic katabatic wind zone, the downslope buoyancy force due to the cold air overlying the sloping terrain is dominant. The analysis also shows that these forces are in geostrophic balance with the Coriolis force. An E-epsilon turbulence closure model is used to simulate the diurnal variation of sodar backscatter. The results show that the model is capable of qualitatively capturing the main features of the observed sodar backscatter. To improve the representation of the atmospheric boundary layer, a second-order turbulence closure model coupled with the input from a mesoscale model was applied to the springtime Siple Coast katabatic wind confluence zone. The simulation was able to capture the main features of the confluence zone, which were not well resolved by the mesoscale model.

  2. Simulating fire-induced ecological succession with the dynamically coupled fire-vegetation model, ED-SPIFTIRE

    NASA Astrophysics Data System (ADS)

    Spessa, A.; Fisher, R.

    2009-04-01

    The simulation of fire-vegetation feedbacks is crucial for determining fire-induced changes to ecosystem structure and function, and emissions of trace gases and aerosols under future climate change. A new global fire model SPITFIRE (SPread and InTensity of FIRE) has been designed to overcome many of the limitations in existing fire models set within DGVM frameworks (Thonicke et al. 2008). SPITFIRE has been applied in coupled mode globally (Thonicke et al. 2008) and northern Australia (Spessa et al. unpubl.) as part of the LPJ DGVM. It has also been driven with MODIS burnt area data applied to sub-Saharan Africa (Lehsten et al. 2008) as part of the LPJ-GUESS vegetation model (Smith et al. 2001). Recently, Spessa & Fisher (unpubl.) completed the coupling of SPIFTIRE to the Ecosystem Demography (ED) model (Moorecroft et al. 2001), which has been globalised by Dr R. Fisher as part of the development of the new land surface scheme JULES (Joint UK Environment Simulator) within the QUEST Earth System Model (http://www.quest-esm.ac.uk/). In contrast to the LPJ DGVM, ED is a ‘size and age structured' approximation of an individual based gap model. The major innovation of the ED-SPITFIRE model compared with LPJ-SPITFIRE is the categorisation of each climatic grid cell into a series of non-spatially contiguous patches which are defined by a common ‘age since last disturbance'. In theory, the age-class structure of ED facilitates ecologically realistic processes of succession and re-growth to be represented. By contrast, LPJ DGVM adopts an ‘area-based approach' that implicitly averages individual and patch differences across a wider area and across ‘populations' of PFTs. This presentation provides an overview of SPITFIRE, and provides preliminary results from ED-SPITFIRE applied to northern Australian savanna ecosystems which, due to spatio-temporal variation in fire disturbance, comprise a patchwork of grasses and trees at different stages of post-fire succession. Comparisons with similar simulations undertaken with LPJ-SPITFIRE are also presented.

  3. Using simulators to teach pediatric airway procedures in an international setting.

    PubMed

    Schwartz, Marissa A; Kavanagh, Katherine R; Frampton, Steven J; Bruce, Iain A; Valdez, Tulio A

    2018-01-01

    There has been a growing shift towards endoscopic management of laryngeal procedures in pediatric otolaryngology. There still appears to be a shortage of pediatric otolaryngology programs and children's hospitals worldwide where physicians can learn and practice these skills. Laryngeal simulation models have the potential to be part of the educational training of physicians who lack exposure to relatively uncommon pediatric otolaryngologic pathology. The objective of this study was to assess the utility of pediatric laryngeal models for teaching laryngeal pathology to physicians at an international meeting. Pediatric laryngeal models were assessed by participants at an international pediatric otolaryngology meeting. Participants provided demographic information and previous experience with pediatric airways. Participants then performed simulated surgery on these models and evaluated them using both a previously validated Tissue Likeness Scale and a pre-simulation to post-simulation confidence scale. Participants reported significant subjective improvement in confidence level after use of the simulation models (p < 0.05). Participants reported realistic representations of human anatomy and pathology. The models' tissue mechanics were adequate to practice operative technique, including the ability to incise, suture, and suspend models. The pediatric laryngeal models demonstrate high-quality anatomy, which is easily manipulated with surgical instruments. These models allow both trainees and surgeons to practice time-sensitive airway surgeries in a safe and controlled environment. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Predictive modeling of infrared radiative heating in tomato dry-peeling process: Part II. Model validation and sensitivity analysis

    USDA-ARS?s Scientific Manuscript database

    A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...

  5. Examination of the Community Multiscale Air Quality (CMAQ) Model Performance over the North American and European Domains

    EPA Science Inventory

    The CMAQ modeling system has been used to simulate the air quality for North America and Europe for the entire year of 2006 as part of the Air Quality Model Evaluation International Initiative (AQMEII) and the operational model performance of O3, fine particulate matte...

  6. 2D modeling of direct laser metal deposition process using a finite particle method

    NASA Astrophysics Data System (ADS)

    Anedaf, T.; Abbès, B.; Abbès, F.; Li, Y. M.

    2018-05-01

    Direct laser metal deposition is one of the material additive manufacturing processes used to produce complex metallic parts. A thorough understanding of the underlying physical phenomena is required to obtain high-quality parts. In this work, a mathematical model is presented to simulate the coaxial laser direct deposition process, taking into account mass addition, heat transfer, and fluid flow with free surface and melting. The fluid flow in the melt pool, together with the mass and energy balances, is solved using the Computational Fluid Dynamics (CFD) software NOGRID-points, based on the meshless Finite Pointset Method (FPM). The basis of the computations is a point cloud, which represents the continuum fluid domain. Each finite point carries all fluid information (density, velocity, pressure and temperature). The dynamic shape of the molten zone is explicitly described by the point cloud. The proposed model is used to simulate a single-layer cladding.

  7. Heat and Mass Transfer with Condensation in Capillary Porous Bodies

    PubMed Central

    2014-01-01

    The purpose of the present work is to analyse, using numerical simulation, the wetting process caused by condensation phenomena in a capillary porous material. Special emphasis is given to the study of the mechanisms involved and to the evaluation of classical theoretical models used as predictive tools. A further discussion is given of the distribution of the liquid phase in both its pendular and its funicular state and its consequences for the diffusion coefficients of the mathematical model used. Beyond the complexity of the interaction effects between vaporisation-condensation processes at the gas-liquid interfaces, the comparison between experiments and numerical simulations makes it possible to identify the specific contribution and the relative part of the mass and energy transport parameters. This analysis allows us to understand the contribution of each part of the mathematical model used and to simplify the study. PMID:24688366

  8. Development of a numerical model to simulate groundwater flow in the shallow aquifer system of Assateague Island, Maryland and Virginia

    USGS Publications Warehouse

    Masterson, John P.; Fienen, Michael N.; Gesch, Dean B.; Carlson, Carl S.

    2013-01-01

    A three-dimensional groundwater-flow model was developed for Assateague Island in eastern Maryland and Virginia to simulate both groundwater flow and solute (salt) transport to evaluate the groundwater system response to sea-level rise. The model was constructed using geologic and spatial information to represent the island geometry, boundaries, and physical properties and was calibrated using an inverse modeling parameter-estimation technique. An initial transient solute-transport simulation was used to establish the freshwater-saltwater boundary for a final calibrated steady-state model of groundwater flow. This model was developed as part of an ongoing investigation by the U.S. Geological Survey Climate and Land Use Change Research and Development Program to improve capabilities for predicting potential climate-change effects and provide the necessary tools for adaptation and mitigation of potentially adverse impacts.

  9. On Fitting a Multivariate Two-Part Latent Growth Model

    PubMed Central

    Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.

    2017-01-01

    A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior, and if engaged, change in the behavior. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally where each variable can follow a 2-part latent growth model. A SAS macro implementing Mplus is developed to estimate the model to take into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054

  10. Groundwater availability in the Crouch Branch and McQueen Branch aquifers, Chesterfield County, South Carolina, 1900-2012

    USGS Publications Warehouse

    Campbell, Bruce G.; Landmeyer, James E.

    2014-01-01

    Chesterfield County is located in the northeastern part of South Carolina along the southern border of North Carolina and is primarily underlain by unconsolidated sediments of Late Cretaceous age and younger of the Atlantic Coastal Plain. Approximately 20 percent of Chesterfield County is in the Piedmont Physiographic Province, and this area of the county is not included in this study. These Atlantic Coastal Plain sediments compose two productive aquifers: the Crouch Branch aquifer that is present at land surface across most of the county and the deeper, semi-confined McQueen Branch aquifer. Most of the potable water supplied to residents of Chesterfield County is produced from the Crouch Branch and McQueen Branch aquifers by a well field located near McBee, South Carolina, in the southwestern part of the county. Overall, groundwater availability is good to very good in most of Chesterfield County, especially the area around and to the south of McBee, South Carolina. The eastern part of Chesterfield County does not have as abundant groundwater resources but resources are generally adequate for domestic purposes. The primary purpose of this study was to determine groundwater-flow rates, flow directions, and changes in water budgets over time for the Crouch Branch and McQueen Branch aquifers in the Chesterfield County area. This goal was accomplished by using the U.S. Geological Survey finite-difference MODFLOW groundwater-flow code to construct and calibrate a groundwater-flow model of the Atlantic Coastal Plain of Chesterfield County. The model was created with a uniform grid size of 300 by 300 feet to facilitate a more accurate simulation of groundwater-surface-water interactions. The model consists of 617 rows from north to south extending about 35 miles and 884 columns from west to east extending about 50 miles, yielding a total area of about 1,750 square miles. However, the active part of the modeled area, or the part where groundwater flow is simulated, totaled about 1,117 square miles. Major types of data used as input to the model included groundwater levels, groundwater-use data, and hydrostratigraphic data, along with estimates and measurements of stream base flows made specifically for this study. The groundwater-flow model was calibrated to groundwater-level and stream base-flow conditions from 1900 to 2012 using 39 stress periods. The model was calibrated with an automated parameter-estimation approach using the computer program PEST, and the model used regularized inversion and pilot points. The groundwater-flow model was calibrated using field data that included groundwater levels that had been collected between 1940 and 2012 from 239 wells and base-flow measurements from 44 locations distributed within the study area. To better understand recharge and inter-aquifer interactions, seven wells were equipped with continuous groundwater-level recording equipment during the course of the study, between 2008 and 2012. These water levels were included in the model calibration process. The observed groundwater levels were compared to the simulated ones, and acceptable calibration fits were achieved. Root mean square error for the simulated groundwater levels compared to all observed groundwater levels was 9.3 feet for the Crouch Branch aquifer and 8.6 feet for the McQueen Branch aquifer. The calibrated groundwater-flow model was then used to calculate groundwater budgets for the entire study area and for two sub-areas. 
The sub-areas are the Alligator Rural Water and Sewer Company well field near McBee, South Carolina, and the Carolina Sandhills National Wildlife Refuge acquisition boundary area. For the overall model area, recharge rates vary from 56 to 1,679 million gallons per day (Mgal/d) with a mean of 737 Mgal/d over the simulation period (1900–2012). The simulated water budget for the streams and rivers varies from 653 to 1,127 Mgal/d with a mean of 944 Mgal/d. The simulated “storage-in term” ranges from 0 to 565 Mgal/d with a mean of 276 Mgal/d. The simulated “storage-out term” has a range of 0 to 552 Mgal/d with a mean of 77 Mgal/d. Groundwater budgets for the McBee, South Carolina, area and the Carolina Sandhills National Wildlife Refuge acquisition area had similar results. An analysis of the effects of past and current groundwater withdrawals on base flows in the McBee area indicated a negligible effect of pumping from the Alligator Rural Water and Sewer well field on local stream base flows. Simulated base flows for 2012 for selected streams in and around the McBee area were similar with and without simulated groundwater withdrawals from the well field. Removing all pumping from the model for the entire simulation period (1900–2012) produces a negligible difference in increased base flow for the selected streams. The 2012 flow for Lower Alligator Creek was 5.04 Mgal/d with the wells pumping and 5.08 Mgal/d without the wells pumping; this represents the largest difference in simulated flows for the six streams.
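
    The calibration fits quoted above are root-mean-square errors between simulated and observed groundwater levels for each aquifer. A minimal sketch of that bookkeeping, with placeholder water levels rather than the study's data, might look as follows.

```python
import numpy as np

def rmse(observed, simulated):
    """Root mean square error between observed and simulated heads (feet)."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

# Placeholder water levels; the report cites RMSEs of 9.3 ft (Crouch Branch)
# and 8.6 ft (McQueen Branch) against the 1940-2012 observations.
obs_crouch = [112.4, 98.7, 105.2, 120.1]
sim_crouch = [104.9, 101.3, 111.0, 128.6]
print(f"Crouch Branch RMSE: {rmse(obs_crouch, sim_crouch):.1f} ft")
```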

  11. Automated simulation as part of a design workstation

    NASA Technical Reports Server (NTRS)

    Cantwell, E.; Shenk, T.; Robinson, P.; Upadhye, R.

    1990-01-01

    A development project for a design workstation for advanced life-support systems incorporating qualitative simulation required the implementation of a useful qualitative simulation capability and the integration of qualitative and quantitative simulations, such that simulation capabilities are maximized without duplication. The reason is that, to produce design solutions to a system goal, the behavior of the system in both a steady and a perturbed state must be represented. The paper reports on the Qualitative Simulation Tool (QST), on an expert-system-like model building and simulation interface tool called ScratchPad (SP), and on the integration of QST and SP with more conventional, commercially available simulation packages now being applied in the evaluation of life-support system processes and components.

  12. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP [Investigating the scale dependence of SCM simulated precipitation and cloud by using gridded forcing data at SGP

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-05

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data set from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
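
    The contrast drawn above, driving one column with the domain-mean forcing versus driving every subcolumn with its own forcing and averaging afterwards, can be mimicked with a toy column model. The "SCM" below is a deliberately crude stand-in (a relaxed moisture-like variable with a precipitation threshold), not SCAM5; its only purpose is to show why a nonlinear column responds differently to averaged and per-subcolumn forcing.

```python
import numpy as np

def toy_scm(forcing, threshold=1.5, tau=6.0, dt=1.0):
    """Toy single-column stand-in (NOT SCAM5): a moisture-like state relaxes
    toward the imposed forcing and any excess above a threshold rains out."""
    state, precip = 0.0, 0.0
    for f in forcing:
        state += dt * (f - state) / tau
        excess = max(state - threshold, 0.0)
        precip += excess              # nonlinear: only "wet enough" columns rain
        state -= excess
    return precip

rng = np.random.default_rng(0)
# Gridded forcing: 9 subcolumns with strong spatial variability
# (e.g. a front crossing only part of the domain).
subcolumn_forcing = rng.normal(loc=1.0, scale=2.0, size=(9, 48))
domain_mean_forcing = subcolumn_forcing.mean(axis=0)

precip_mean_forcing = toy_scm(domain_mean_forcing)                  # one run, mean forcing
precip_subcolumns = np.mean([toy_scm(f) for f in subcolumn_forcing])  # run each, then average

print(f"precipitation, domain-mean forcing : {precip_mean_forcing:.2f}")
print(f"precipitation, averaged subcolumns : {precip_subcolumns:.2f}")
```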

  13. Model Free iPID Control for Glycemia Regulation of Type-1 Diabetes.

    PubMed

    MohammadRidha, Taghreed; Ait-Ahmed, Mourad; Chaillous, Lucy; Krempf, Michel; Guilhem, Isabelle; Poirier, Jean-Yves; Moog, Claude H

    2018-01-01

    The objective is to design a fully automated glycemia controller for Type-1 Diabetes (T1D) in both fasting and postprandial phases on a large number of virtual patients. A model-free intelligent proportional-integral-derivative (iPID) controller is used to infuse insulin. The feasibility of iPID is tested in silico on two simulators, with and without measurement noise. The first simulator is derived from a long-term linear time-invariant model. The controller is also validated on the UVa/Padova metabolic simulator on 10 adults under 25 runs/subject for the noise robustness test. It was shown that, without measurement noise, iPID mimicked the normal pancreatic secretion with a relatively fast reaction to meals as compared to a standard PID. With the UVa/Padova simulator, the robustness against CGM noise was tested. A higher percentage of time in target was obtained with iPID as compared to standard PID, with reduced time spent in hyperglycemia. Tests on two different T1D simulators showed that iPID detects meals and reacts faster to meal perturbations than a classic PID. The intelligent part makes the controller more aggressive immediately after meals without neglecting safety. Further research is suggested to improve the computation of the intelligent part of iPID for such systems under actuator constraints. Any improvement can impact the overall performance of the model-free controller. The simple-structure iPID is a step forward for PID-like controllers since it combines the desirable properties of classic PID with new adaptive features.
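
    A generic sketch of the model-free iPID idea, based on the standard ultra-local model ẏ = F + αu in which F is re-estimated at every step from the measured output, is shown below. The toy first-order "glucose" plant, the gains and the value of α are illustrative assumptions and do not reflect the authors' controller or the simulators used in the paper.

```python
# Generic model-free iPD sketch built on the ultra-local model y' = F + alpha*u.
# The plant, alpha and the gains are illustrative assumptions only.
def plant(y, u, dt):
    """Toy plant: without insulin, glycemia drifts upward; insulin u lowers it."""
    return y + dt * (4.0 - 0.02 * y - 0.08 * u)

dt = 1.0                 # minutes
alpha = -0.05            # assumed input gain of the ultra-local model
Kp, Kd = 0.2, 0.1        # illustrative feedback gains
target = 110.0           # mg/dL

y_prev = y = 150.0
u_prev = 0.0
for k in range(300):     # 300 one-minute steps
    dy = (y - y_prev) / dt
    F_hat = dy - alpha * u_prev        # re-estimate the unmodeled dynamics F
    e, de = target - y, -dy            # error and its derivative (constant target)
    u = (-F_hat + Kp * e + Kd * de) / alpha
    u = max(u, 0.0)                    # insulin delivery cannot be negative
    y_prev, y, u_prev = y, plant(y, u, dt), u

print(f"glycemia after 300 min: {y:.1f} mg/dL (target {target:.0f})")
```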

  14. Uranium adsorption on weathered schist - Intercomparison of modeling approaches

    USGS Publications Warehouse

    Payne, T.E.; Davis, J.A.; Ochs, M.; Olin, M.; Tweed, C.J.

    2004-01-01

    Experimental data for uranium adsorption on a complex weathered rock were simulated by twelve modelling teams from eight countries using surface complexation (SC) models. This intercomparison was part of an international project to evaluate the present capabilities and limitations of SC models in representing sorption by geologic materials. The models were assessed in terms of their predictive ability, data requirements, number of optimised parameters, ability to simulate diverse chemical conditions and transferability to other substrates. A particular aim was to compare the generalised composite (GC) and component additivity (CA) approaches for modelling sorption by complex substrates. Both types of SC models showed a promising capability to simulate sorption data obtained across a range of chemical conditions. However, the models incorporated a wide variety of assumptions, particularly in terms of input parameters such as site densities and surface site types. Furthermore, the methods used to extrapolate the model simulations to different weathered rock samples collected at the same field site tended to be unsatisfactory. The outcome of this modelling exercise provides an overview of the present status of adsorption modelling in the context of radionuclide migration as practised in a number of countries worldwide.

  15. A Microstructure-Based Constitutive Model for Superplastic Forming

    NASA Astrophysics Data System (ADS)

    Jafari Nedoushan, Reza; Farzin, Mahmoud; Mashayekhi, Mohammad; Banabic, Dorel

    2012-11-01

    A constitutive model is proposed for simulations of hot metal forming processes. This model is constructed based on the dominant mechanisms that take part in hot forming and includes intergranular deformation, grain boundary sliding, and grain boundary diffusion. A Taylor-type polycrystalline model is used to predict intergranular deformation. Previous works on grain boundary sliding and grain boundary diffusion are extended to derive three-dimensional macro stress-strain rate relationships for each mechanism. In these relationships, the effect of grain size is also taken into account. The proposed model is first used to simulate step strain-rate tests and the results are compared with experimental data. It is shown that the model can be used to predict flow stresses for various grain sizes and strain rates. The yield locus is then predicted for multiaxial stress states, and it is observed that it is very close to the von Mises yield criterion. It is also shown that the proposed model can be directly used to simulate hot forming processes. The bulge forming process and gas pressure tray forming are simulated, and the results are compared with experimental data.

  16. Ground Motion Modeling in the Eastern Caucasus

    DOE PAGES

    Pitarka, Arben; Gok, Rengin; Yetirmishli, Gurban; ...

    2016-05-13

    In this paper, we analyzed the performance of a preliminary three-dimensional (3D) velocity model of the Eastern Caucasus covering most of Azerbaijan. The model was developed in support of long-period ground motion simulations and seismic hazard assessment for regional earthquakes in Azerbaijan. The model's performance was investigated by simulating ground motion from the damaging Mw 5.9, 2012 Zaqatala earthquake, which was well recorded throughout the region by broadband seismic instruments. In our simulations, we use a parallelized finite-difference method of fourth-order accuracy. The comparison between the simulated and recorded ground motion velocity in the modeled period range of 3–20 s shows that, in general, the 3D velocity model performs well. Areas in which the model needs improvement are located mainly in the central part of the Kura basin and in the Caspian Sea coastal areas. Comparisons of simulated ground motion using our 3D velocity model and a corresponding 1D regional velocity model were used to locate areas with strong 3D wave propagation effects. In areas with complex underground structure, the 1D model fails to reproduce the observed ground motion amplitude and duration, and the spatial extent of ground motion amplification caused by wave propagation effects.

  17. Design and Analysis of Solar Smartflower Simulation by Solidwork Program

    NASA Astrophysics Data System (ADS)

    Mulyana, Tatang; Sebayang, Darwin; Fajrina, Fildzah; Raihan; Faizal, M.

    2018-03-01

    The large solar energy potential of Indonesia can be a driving force for the use of renewable energy as a solution to its energy needs. The government, together with the community, can utilize and optimize this technology to increase the electrification ratio up to 100% in all corners of Indonesia. Its modular and practical nature makes this technology easy to apply. One of the latest imported products that has started to be offered and sold in Indonesia, but is not yet widely used for solar power generation, is the smartflower. Before using the product, it is important to undertake an in-depth study of its utilization, use, maintenance, repair, component supply and fabrication. The best way to gain this knowledge is through a review of the design and simulation. To meet this need, this paper presents a solar-smartflower design that is then simulated using the facilities available in the SolidWorks program. Solid simulation express is a tool for performing strength simulations of a designed part model. Such simulation is very helpful for reducing errors in the design. The accuracy of a design is also influenced by several other factors, such as the material of the object, the fixed part of the part, and the applied load. The simulations performed were a static simulation and a drop test of the battery body, and the results show that the design is very satisfactory.

  18. The challenges of simulating wake vortex encounters and assessing separation criteria

    NASA Technical Reports Server (NTRS)

    Dunham, R. E.; Stuever, Robert A.; Vicroy, Dan D.

    1993-01-01

    During landings and take-offs, the longitudinal spacing between airplanes is in part determined by the safe separation required to avoid the trailing vortex wake of the preceding aircraft. Safe exploration of the feasibility of reducing longitudinal separation standards will require use of aircraft simulators. This paper discusses the approaches to vortex modeling, methods for modeling the aircraft/vortex interaction, some of the previous attempts of defining vortex hazard criteria, and current understanding of the development of vortex hazard criteria.

  19. Multiphase Modeling of Water Injection on Flame Deflector

    NASA Technical Reports Server (NTRS)

    Vu, Bruce T.; Bachchan, Nili; Peroomian, Oshin; Akdag, Vedat

    2013-01-01

    This paper describes the use of an Eulerian Dispersed Phase (EDP) model to simulate the water injected from the flame deflector and its interaction with supersonic rocket exhaust from a proposed Space Launch System (SLS) vehicle. The Eulerian formulation, as part of the multi-phase framework, is described. The simulations show that water cooling is only effective over the region under the liquid engines. Likewise, the water injection provides only minor effects over the surface area under the solid engines.

  20. A comparative study of two approaches to analyse groundwater recharge, travel times and nitrate storage distribution at a regional scale

    NASA Astrophysics Data System (ADS)

    Turkeltaub, T.; Ascott, M.; Gooddy, D.; Jia, X.; Shao, M.; Binley, A. M.

    2017-12-01

    Understanding deep percolation, travel time processes and nitrate storage in the unsaturated zone at a regional scale is crucial for sustainable management of many groundwater systems. Recently, global hydrological models have been developed to quantify the water balance at such scales and beyond. However, the coarse spatial resolution of the global hydrological models can be a limiting factor when analysing regional processes. This study compares simulations of water flow and nitrate storage based on regional and global scale approaches. The first approach was applied over the Loess Plateau of China (LPC) to investigate the water fluxes and nitrate storage and travel time to the LPC groundwater system. Using raster maps of climate variables, land use data and soil parameters enabled us to determine fluxes by employing Richards' equation and the advection - dispersion equation. These calculations were conducted for each cell on the raster map in a multiple 1-D column approach. In the second approach, vadose zone travel times and nitrate storage were estimated by coupling groundwater recharge (PCR-GLOBWB) and nitrate leaching (IMAGE) models with estimates of water table depth and unsaturated zone porosity. The simulation results of the two methods indicate similar spatial groundwater recharge, nitrate storage and travel time distribution. Intensive recharge rates are located mainly at the south central and south west parts of the aquifer's outcrops. Particularly low recharge rates were simulated in the top central area of the outcrops. However, there are significant discrepancies between the simulated absolute recharge values, which might be related to the coarse scale that is used in the PCR-GLOBWB model, leading to smoothing of the recharge estimations. Both models indicated large nitrate inventories in the south central and south west parts of the aquifer's outcrops and the shortest travel times in the vadose zone are in the south central and east parts of the outcrops. Our results suggest that, for the LPC at least, global scale models might be useful for highlighting the locations with higher recharge rates potential and nitrate contamination risk. Global modelling simulations appear ideal as a primary step in recognizing locations which require investigations at the plot, field and local scales.

  1. The effect of linear spring number at side load of McPherson suspension in electric city car

    NASA Astrophysics Data System (ADS)

    Budi, Sigit Setijo; Suprihadi, Agus; Makhrojan, Agus; Ismail, Rifky; Jamari, J.

    2017-01-01

    The spring in a McPherson-type suspension controls vehicle stability and increases ride comfort, although it tends to be subject to side loads. The purpose of this study is to obtain simulation results for the McPherson suspension spring of an electric city car using the finite element method and to determine the side load that appears on the spring. The research is conducted in several stages: the design of linear spring models with various numbers of coils, and the modeling of the suspension spring using FEM software. The suspension spring is compressed in the vertical direction (z-axis), and the forces arising at the upper part of the spring along the x, y, and z axes are examined to simulate the side load on the upper part of the spring. The FEM simulation results show that the spring whose side loads along the x and y axes are closest to zero is the most stable spring.

  2. Subgrid-scale models for large-eddy simulation of rotating turbulent flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits; Trias, Xavier; Abkar, Mahdi; Bae, Hyunji Jane; Lozano-Duran, Adrian; Verstappen, Roel

    2016-11-01

    This paper discusses subgrid models for large-eddy simulation of anisotropic flows using anisotropic grids. In particular, we are looking into ways to model not only the subgrid dissipation but also transport processes, since these are expected to play an important role in rotating turbulent flows. We therefore consider subgrid-scale models of the form τ = -2ν_t S + μ_t (SΩ - ΩS), where the eddy viscosity ν_t is given by the minimum-dissipation model, μ_t represents a transport coefficient, S is the symmetric part of the velocity gradient and Ω the skew-symmetric part. To incorporate the effect of mesh anisotropy, the filter length is taken in such a way that it minimizes the difference between the turbulent stress in physical and computational space, where the physical space is covered by an anisotropic mesh and the computational space is isotropic. The resulting model is successfully tested for rotating homogeneous isotropic turbulence and rotating plane-channel flows. The research was largely carried out during the CTR SP 2016. M.S. and R.V. acknowledge the financial support to attend this Summer Program.
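
    The proposed stress model is straightforward to evaluate once the resolved velocity gradient is known; the sketch below computes τ for a single velocity-gradient tensor. The coefficient values are placeholders, and ν_t is simply prescribed rather than obtained from the minimum-dissipation model.

```python
import numpy as np

# Sketch of the proposed subgrid-scale stress tau = -2*nu_t*S + mu_t*(S@Omega - Omega@S)
# for one resolved velocity-gradient tensor. nu_t and mu_t are placeholder values.
grad_u = np.array([[0.1, 0.4, 0.0],
                   [-0.2, 0.0, 0.3],
                   [0.0, -0.1, -0.1]])

S = 0.5 * (grad_u + grad_u.T)        # strain rate (symmetric part)
Omega = 0.5 * (grad_u - grad_u.T)    # rotation rate (skew-symmetric part)

nu_t, mu_t = 1.0e-3, 5.0e-4          # illustrative model coefficients
tau = -2.0 * nu_t * S + mu_t * (S @ Omega - Omega @ S)

print("subgrid stress tensor tau:\n", np.round(tau, 6))
print("trace (should be ~0 for this divergence-free field):", np.trace(tau))
```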

  3. Simulations and Experiments of the Nonisothermal Forging Process of a Ti-6Al-4V Impeller

    NASA Astrophysics Data System (ADS)

    Prabhu, T. Ram

    2016-09-01

    In the present study, a nonisothermal precision forging process of a Ti-6Al-4V first-stage impeller for a gas turbine engine was simulated using finite element software. The simulation results, such as load requirements, damage, velocity field, stress, strain, and temperature distributions, are discussed in detail. Simulations predicted a maximum load requirement of about 80 MN. The maximum temperature loss was observed at the contour surface regions. The center and contour regions are the most highly strained regions in the part. To validate the model, forging experiments mimicking the simulations were performed in the α + β phase region (930 °C). Selected locations of the part were characterized for tensile properties at 27 and 200 °C, hardness, microstructure, grain size, and the amount of primary α phase, based on the strain distribution results. The soundness of the forged part was verified using a fluorescent penetrant test (Mil Std 2175 Grade A) and an ultrasonic test (AMS 2630 class A1). From the experimental results, it was found that the variations in hardness and in tensile properties at room and elevated temperature are not significant. The microstructure, grain size, and primary α phase content are nearly the same.

  4. Determination of the mechanical and physical properties of cartilage by coupling poroelastic-based finite element models of indentation with artificial neural networks.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Campoli, Gianni; Weinans, Harrie; Zadpoor, Amir A

    2016-03-21

    One of the most widely used techniques to determine the mechanical properties of cartilage is based on indentation tests and interpretation of the obtained force-time or displacement-time data. In the current computational approaches, one needs to simulate the indentation test with finite element models and use an optimization algorithm to estimate the mechanical properties of cartilage. The modeling procedure is cumbersome, and the simulations need to be repeated for every new experiment. For the first time, we propose a method for fast and accurate estimation of the mechanical and physical properties of cartilage as a poroelastic material with the aid of artificial neural networks. In our study, we used finite element models to simulate the indentation for poroelastic materials with wide combinations of mechanical and physical properties. The obtained force-time curves are then divided into three parts: the first two parts of the data are used for training and validation of an artificial neural network, while the third part is used for testing the trained network. The trained neural network receives the force-time curves as the input and provides the properties of cartilage as the output. We observed that the trained network could accurately predict the properties of cartilage within the range of properties for which it was trained. The mechanical and physical properties of cartilage could therefore be estimated very fast, since no additional finite element modeling is required once the neural network is trained. The robustness of the trained artificial neural network in determining the properties of cartilage based on noisy force-time data was assessed by introducing noise to the simulated force-time data. We found that the training procedure could be optimized so as to maximize the robustness of the neural network against noisy force-time data. Copyright © 2016 Elsevier Ltd. All rights reserved.
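
    The surrogate workflow described above, training a network on simulated force-time curves and inverting new curves to material properties, can be sketched with a cheap analytic forward model standing in for the poroelastic finite element simulations. The two-parameter "material", the property ranges and the scikit-learn network below are illustrative assumptions only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)

def forward_model(E, k):
    """Toy relaxation curve: peak force set by stiffness E, decay rate by permeability k.
    A stand-in for the poroelastic FE indentation simulations used in the paper."""
    return E * (0.6 + 0.4 * np.exp(-t / k))

E = rng.uniform(0.2, 2.0, size=2000)      # "aggregate modulus" (arbitrary units)
k = rng.uniform(0.05, 0.5, size=2000)     # "permeability" (arbitrary units)
X = np.array([forward_model(e, kk) for e, kk in zip(E, k)])   # force-time curves
y = np.column_stack([E, k])                                   # target properties

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_train, y_train)                 # learn curve -> properties mapping
print("R^2 on held-out curves:", round(net.score(X_test, y_test), 3))
```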

  5. Simulation study of the discharge characteristics of silos with cohesive particles

    NASA Astrophysics Data System (ADS)

    Hund, David; Weis, Dominik; Hesse, Robert; Antonyuk, Sergiy

    2017-06-01

    In many industrial applications the silo for bulk materials is an important part of an overall process. Silos are used for instance to buffer intermediate products to ensure a continuous supply for the next process step. This study deals with the discharging behaviour of silos containing cohesive bulk solids with particle sizes in the range of 100-500 μm. In this contribution the TOMAS [1,2] model developed for stationary and non-stationary discharging of a convergent hopper is verified with experiments and simulations using the Discrete Element Method. Moreover the influence of the cohesion of the bulk solids on the discharge behaviour is analysed by the simulation. The simulation results showed a qualitative agreement with the analytical model of TOMAS.
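
    A typical ingredient of such DEM studies of fine, cohesive powders is a normal contact law in which a repulsive spring-dashpot term competes with an adhesive pull-off force; the sketch below shows one such law for a single contact. The stiffness, damping and adhesion values are illustrative assumptions and are not calibrated to the material or to the TOMAS model discussed in the abstract.

```python
# Minimal sketch of a cohesive normal-contact law of the kind often used in DEM
# studies of fine (100-500 um) powders: linear spring-dashpot plus a constant
# adhesive pull-off force. All parameter values are illustrative assumptions.
k_n, c_n, f_adh = 1.0e4, 5.0e-3, 2.0e-6   # N/m, N s/m, N

def normal_force(overlap, overlap_rate):
    """Repulsive spring-dashpot force minus a constant cohesive attraction."""
    if overlap <= 0.0:
        return 0.0                         # particles not in contact
    return k_n * overlap + c_n * overlap_rate - f_adh

for delta in (0.0, 1e-10, 1e-9, 1e-8):     # small overlaps stay net attractive
    print(f"overlap {delta:.0e} m -> force {normal_force(delta, 0.0): .2e} N")
```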

  6. A study of long-term trends in mineral dust aerosol distributions in Asia using a general circulation model

    NASA Astrophysics Data System (ADS)

    Mukai, Makiko; Nakajima, Teruyuki; Takemura, Toshihiko

    2004-10-01

    Dust events have been observed in Japan with high frequency since 2000. On the other hand, the frequency of dust storms is said to have decreased in the desert regions of China since about the middle of the 1970s. This study simulates dust storms and transportation of mineral dust aerosols in the east Asia region from 1981 to 2001 using an aerosol transport model, Spectral Radiation-Transport Model for Aerosol Species (SPRINTARS), implemented in the Center for Climate System Research/National Institute for Environmental Studies atmospheric global circulation model, in order to investigate the main factors that control a dust event and its long-term variation. The model was forced to simulate a real atmospheric condition by a nudging technique using European Centre for Medium-Range Weather Forecasts reanalysis data on wind velocities, temperature, specific humidity, soil wetness, and snow depth. From a comparison between the long-term change in the dust emission and model parameters, it is found that the wind speed near the surface level had a significant influence on the dust emission, and snow is also an important factor in the early spring dust emission. The simulated results suggested that dust emissions from northeast China have a great impact on dust mass concentration in downwind regions, such as the cities of northeastern China, Korea, and Japan. When the frequency of dust events was high in Japan, a low-pressure system tended to develop over the northeast China region that caused strong winds. From 2000 to 2001 the simulated dust emission flux decreased in the Taklimakan desert and the northwestern part of China, while it increased in the Gobi desert and the northeastern part of China. Consequently, dust particles seem to be transported more from the latter region by prevailing westerlies in the springtime to downwind areas as actually observed. In spite of the similarity, however, there is still a large disagreement between observed and simulated dust frequencies and concentrations. A more realistic land surface and uplift mechanism of dust particles should be modeled to improve the model simulation. Desertification of the northeastern China region may be another reason for this disagreement.

  7. A methodology for evacuation design for urban areas: theoretical aspects and experimentation

    NASA Astrophysics Data System (ADS)

    Russo, F.; Vitetta, A.

    2009-04-01

    This paper proposes a unifying approach for the simulation and design of a transportation system under incoming safety and/or security threats. Safety and security are concerned with threats generated by very different factors which, in turn, create emergency conditions, such as the 9/11, Madrid and London attacks, the Asian tsunami, and hurricane Katrina, considering only the last five years. In transportation systems, when an exogenous event occurs and there is a sufficient time interval between the instant when the event happens and the instant when it affects the population, its negative effects can be reduced by evacuating the population. For such events it is always possible to prepare the evacuation in the short and long term; for other events it is also possible to plan a real-time evacuation within the general risk methodology. The development of models for emergency conditions in transportation systems has not received much attention in the literature. The main findings in this area are limited to a few public research centres and private companies. In general, there is no systematic analysis of risk theory applied to transportation systems. Very often, in practice, vulnerability and exposure in the transportation system are treated as similar variables or, in other worse cases, exposure variables are treated as vulnerability variables. Models and algorithms specified and calibrated under ordinary conditions cannot be directly applied to emergency conditions under the usual hypotheses. This paper is developed with the following main objectives: (a) to formalize the risk problem with a clear distinction (for the consequences) between the definitions of vulnerability and exposure in a transportation system; the paper thus offers improvements over consolidated quantitative risk analysis models, especially transportation risk analysis models (risk assessment); (b) to formalize a system of models for evacuation simulation; (c) to calibrate and validate the system of models for evacuation simulation against a real experiment. In relation to these objectives: (a) a general framework for risk analysis is reported in the first part, with specific methods and models for analyzing urban transportation system performance in emergency conditions when exogenous phenomena occur and for specifying the risk function; (b) a formulation of the general evacuation problem in the standard "what if" simulation context is given in the second part, with reference to the model used for simulating the transportation system under ordinary conditions; (c) the set of models specified in the second part is calibrated and validated against a real experiment in the third part. The experiment was carried out in the central business district of an Italian village, where about 1000 inhabitants were evacuated in order to construct a complete database. It required that socioeconomic information (population, number employed, public buildings, schools, etc.) and transport supply characteristics (infrastructures, etc.) be measured before and during the experiment. The evacuation data were recorded with 30 video cameras for laboratory analysis. 
The results are divided into six strictly connected tasks: Demand models; Supply and supply-demand interaction models for users; Simulation of refuge areas for users; Design of path choice models for emergency vehicles; Pedestrian outflow models in a building; Planning process and guidelines.

  8. A numerical study of attraction/repulsion collective behavior models: 3D particle analyses and 1D kinetic simulations

    NASA Astrophysics Data System (ADS)

    Vecil, Francesco; Lafitte, Pauline; Rosado Linares, Jesús

    2013-10-01

    We study at particle and kinetic level a collective behavior model based on three phenomena: self-propulsion, friction (Rayleigh effect) and an attractive/repulsive (Morse) potential rescaled so that the total mass of the system remains constant independently of the number of particles N. In the first part of the paper, we introduce the particle model: the agents are numbered and described by their position and velocity. We identify five parameters that govern the possible asymptotic states for this system (clumps, spheres, dispersion, mills, rigid-body rotation, flocks) and perform a numerical analysis on the 3D setting. Then, in the second part of the paper, we describe the kinetic system derived as the limit from the particle model as N tends to infinity; we propose, in 1D, a numerical scheme for the simulations, and perform a numerical analysis devoted to trying to recover asymptotically patterns similar to those emerging for the equivalent particle systems, when particles originally evolved on a circle.
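
    A minimal sketch of the particle model described above, with self-propulsion, Rayleigh friction and a Morse interaction rescaled by 1/N, is given below. It is written in 2-D for brevity (the paper analyses the 3-D setting), and all parameter values are illustrative assumptions rather than the regimes studied in the paper.

```python
import numpy as np

N = 200
alpha, beta = 1.5, 0.5                 # self-propulsion and Rayleigh friction
Ca, la, Cr, lr = 0.5, 2.0, 1.0, 0.5    # Morse attraction / repulsion parameters
dt, steps = 0.05, 2000

rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(N, 2))
vel = rng.normal(0.0, 0.1, size=(N, 2))

def morse_force(pos):
    """Pairwise Morse force (repulsive at short range, attractive at long range),
    rescaled by 1/N so the total interaction strength is independent of N."""
    d = pos[:, None, :] - pos[None, :, :]          # displacements x_i - x_j
    r = np.linalg.norm(d, axis=-1) + 1e-9
    mag = (Cr / lr) * np.exp(-r / lr) - (Ca / la) * np.exp(-r / la)  # >0 = repulsion
    f = mag[:, :, None] * d / r[:, :, None]
    idx = np.arange(N)
    f[idx, idx, :] = 0.0                           # no self-interaction
    return f.sum(axis=1) / N

for _ in range(steps):
    speed2 = np.sum(vel * vel, axis=1, keepdims=True)
    acc = (alpha - beta * speed2) * vel + morse_force(pos)   # propulsion - friction + Morse
    vel += dt * acc
    pos += dt * vel

print("mean speed          :", round(float(np.mean(np.linalg.norm(vel, axis=1))), 3))
print("sqrt(alpha/beta) ref:", round((alpha / beta) ** 0.5, 3))
```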

  9. On the interplay of gas dynamics and the electromagnetic field in an atmospheric Ar/H2 microwave plasma torch

    NASA Astrophysics Data System (ADS)

    Synek, Petr; Obrusník, Adam; Hübner, Simon; Nijdam, Sander; Zajíčková, Lenka

    2015-04-01

    A complementary simulation and experimental study of an atmospheric pressure microwave torch operating in pure argon or argon/hydrogen mixtures is presented. The modelling part describes a numerical model coupling the gas dynamics and mixing to the electromagnetic field simulations. Since the numerical model is not fully self-consistent and requires the electron density as an input, quite extensive spatially resolved Stark broadening measurements were performed for various gas compositions and input powers. In addition, the experimental part includes Rayleigh scattering measurements, which are used for the validation of the model. The paper comments on the changes in the gas temperature and hydrogen dissociation with the gas composition and input power, showing in particular that the dependence on the gas composition is relatively strong and non-monotonic. In addition, the work provides interesting insight into the plasma sustainment mechanism by showing that the power absorption profile in the plasma has two distinct maxima: one at the nozzle tip and one further upstream.

  10. Radiation pattern of a borehole radar antenna

    USGS Publications Warehouse

    Ellefsen, K.J.; Wright, D.L.

    2005-01-01

    The finite-difference time-domain method was used to simulate radar waves that were generated by a transmitting antenna inside a borehole. The simulations were of four different models that included features such as a water-filled borehole and an antenna with resistive loading. For each model, radiation patterns for the far-field region were calculated. The radiation patterns show that the amplitude of the radar wave was strongly affected by its frequency, the water-filled borehole, the resistive loading of the antenna, and the external metal parts of the antenna (e.g., the cable head and the battery pack). For the models with a water-filled borehole, their normalized radiation patterns were practically identical to the normalized radiation pattern of a finite-length electric dipole when the wavelength in the formation was significantly greater than the total length of the radiating elements of the model antenna. The minimum wavelength at which this criterion was satisfied depended upon the features of the antenna, especially its external metal parts. © 2005 Society of Exploration Geophysicists. All rights reserved.

  11. Science based integrated approach to advanced nuclear fuel development - integrated multi-scale multi-physics hierarchical modeling and simulation framework Part III: cladding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tome, Carlos N; Caro, J A; Lebensohn, R A

    2010-01-01

    Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model the nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.

  12. Kinematical simulation of robotic complex operation for implementing full-scale additive technologies of high-end materials, composites, structures, and buildings

    NASA Astrophysics Data System (ADS)

    Antsiferov, S. I.; Eltsov, M. Iu; Khakhalev, P. A.

    2018-03-01

    This paper considers a newly designed electronic digital model of a robotic complex for implementing full-scale additive technologies, funded under a Federal Target Program. The electronic and digital model was used to solve the problem of simulating the movement of a robotic complex using the NX CAD/CAM/CAE system. The virtual mechanism was built and the main assemblies, joints, and drives were identified as part of solving the problem. In addition, the maximum allowed printable area size was identified for the robotic complex, and a simulation of printing a rectangular-shaped article was carried out.

  13. Scattering of Acoustic Energy from Rough Deep Ocean Seafloor: a Numerical Modeling Approach.

    NASA Astrophysics Data System (ADS)

    Robertsson, Johan Olof Anders

    1995-01-01

    The highly heterogeneous and anelastic nature of deep ocean seafloor results in complex reverberation as acoustic energy incident from the overlaying water column interacts and scatters from it. To gain a deeper understanding of the mechanisms causing the reverberation in sonar and seafloor scattering experiments, we have developed numerical simulation techniques that are capable of modeling the principal physical properties of complex seafloor structures. A new viscoelastic finite-difference technique for modeling anelastic wave propagation in 2-D and 3-D heterogeneous media, as well as a computationally optimally efficient method for quantifying the anelastic properties in terms of viscoelastic mechanics are presented. A method for reducing numerical dispersion using a Galerkin-wavelet formulation that enables large computational savings is also presented. The widely different regimes of wave propagation occurring in ocean acoustic problems motivate the use of hybrid simulation techniques. HARVEST (Hybrid Adaptive Regime Visco-Elastic Simulation Technique) combines solutions from Gaussian beams, viscoelastic finite-differences, and Kirchhoff extrapolation, to simulate large offset scattering problems. Several scattering hypotheses based on finite -difference simulations of short-range acoustic scattering from realistic seafloor models are presented. Anelastic sediments on the seafloor are found to have a significant impact on the backscattered field from low grazing angle scattering experiments. In addition, small perturbations in the sediment compressional velocity can also dramatically alter the backscattered field due to transitions between pre- and post-critical reflection regimes. The hybrid techniques are employed to simulate deep ocean acoustic reverberation data collected in the vicinity of the northern mid-Atlantic ridge. In general, the simulated data compare well to the real data. Noise partly due to side-lobes in the beam-pattern of the receiver -array is the principal source of reverberation at lower levels. Overall, the employed seafloor models were found to model the real seafloor well. Inaccurately predicted events may partly be attributed to the intrinsic uncertainty in the stochastic seafloor models. For optimal comparison between real and HARVEST simulated data the experimental geometry should be chosen so that 3-D effects may be ignored, and to yield a cross-range resolution in the beam-formed acoustic data that is small relative to the lineation of the seafloor.

  14. Gap models and their individual-based relatives in the assessment of the consequences of global change

    NASA Astrophysics Data System (ADS)

    Shugart, Herman H.; Wang, Bin; Fischer, Rico; Ma, Jianyong; Fang, Jing; Yan, Xiaodong; Huth, Andreas; Armstrong, Amanda H.

    2018-03-01

    Individual-based models (IBMs) of complex systems emerged in the 1960s and early 1970s, across diverse disciplines from astronomy to zoology. Ecological IBMs arose with seemingly independent origins out of the tradition of understanding the dynamics of ecosystems from a 'bottom-up' accounting of the interactions of their parts. Individual trees are principal among the parts of forests. Because these models are computationally demanding, they have prospered as the power of digital computers has increased exponentially over the decades following the 1970s. This review focuses on a class of forest IBMs called gap models. Gap models simulate the changes in forests by simulating the birth, growth and death of each individual tree on a small plot of land. The summation of these plots comprises a forest (or a set of sample plots on a forested landscape or region). Other, more aggregated forest IBMs have been used in global applications, including cohort-based models, ecosystem demography models, etc. Gap models have been used to provide the parameters for these bulk models. Currently, gap models have grown from local-scale to continental-scale and even global-scale applications to assess the potential consequences of climate change on natural forests. Modifications to the models have enabled simulation of disturbances including fire, insect outbreak and harvest. Our objective in this review is to provide the reader with an overview of the history, motivation and applications, including theoretical applications, of these models. In a time of concern over global changes, gap models are essential tools to understand forest responses to climate change, modified disturbance regimes and other change agents. Development of forest surveys to provide the starting points for simulations, and better estimates of the behavior of the diversity of tree species in response to the environment, are continuing needs for improvement for these and other IBMs.
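
    The annual loop of a gap model — grow each tree under shading, apply stochastic mortality, and establish new saplings — can be sketched in a few lines. The following Python sketch uses invented species parameters and simplified growth, mortality and recruitment rules purely for illustration; it is not the formulation of any particular gap model cited in this review.

        import random

        # Hypothetical species parameters (illustrative only):
        # dmax = maximum diameter (cm), gmax = maximum annual diameter increment (cm),
        # shade_tol = fraction of growth retained under a closed canopy.
        SPECIES = {
            "speciesA": {"dmax": 100.0, "gmax": 1.2, "shade_tol": 0.6},
            "speciesB": {"dmax": 60.0, "gmax": 2.0, "shade_tol": 0.2},
        }

        def leaf_area(d):
            """Crude allometric proxy for leaf area as a function of diameter (cm)."""
            return 0.1 * d ** 1.5

        def simulate_plot(years=200, plot_leaf_area_max=500.0, seed=1):
            rng = random.Random(seed)
            trees = []  # each tree is a dict: {"sp": species name, "d": diameter in cm}
            for _ in range(years):
                shading = min(1.0, sum(leaf_area(t["d"]) for t in trees) / plot_leaf_area_max)
                # growth: optimal increment reduced by crowding, moderated by shade tolerance
                for t in trees:
                    p = SPECIES[t["sp"]]
                    light = 1.0 - shading * (1.0 - p["shade_tol"])
                    t["d"] += p["gmax"] * (1.0 - t["d"] / p["dmax"]) * light
                # death: background mortality plus extra mortality for suppressed trees
                trees = [t for t in trees if rng.random() >
                         0.01 + 0.05 * shading * (1.0 - SPECIES[t["sp"]]["shade_tol"])]
                # birth: a few saplings attempt to establish each year if enough light reaches the floor
                for _ in range(rng.randint(0, 3)):
                    sp = rng.choice(list(SPECIES))
                    if rng.random() < 1.0 - shading * (1.0 - SPECIES[sp]["shade_tol"]):
                        trees.append({"sp": sp, "d": 1.0})
            return trees

        print(len(simulate_plot()), "trees on the plot after 200 simulated years")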

  15. Innovative model to simulate exhalation phase in human respiratory system.

    PubMed

    Sbrana, Tommaso; Landi, Alberto; Catapano, Giosuè Angelo

    2011-11-01

    In this paper, we present a mathematical model that mimics the bronchial resistances of the human lung during an expiratory act. The model is implemented in Matlab. The inputs used in this model derive from spirometry tests. The model is able to study a physiological condition, a pathological one, and the patient's follow-up after drug treatment. We split our study into two parts. The first focuses the analysis on the gas fluid dynamics inside the respiratory pathways. The second part addresses the pressure equilibrium in the exchange zone. We use the outputs of the second subsystem to solve the Bernoulli equation of the first part. The model was validated with data provided by the "Clinical Physiology Institute" of CNR and the G. Monasterio Foundation of Pisa. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
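
    As a rough illustration of how the two subsystems connect, the sketch below applies Bernoulli's equation to estimate expiratory flow through a single airway segment from a driving pressure supplied by the exchange zone. The geometry and pressure are hypothetical placeholders, and viscous losses are ignored; this is not the Matlab model of the paper.

        import math

        def expiratory_flow(delta_p_pa, airway_radius_m, air_density=1.14):
            """Bernoulli-based estimate of flow through one airway segment.

            delta_p_pa      : driving pressure difference from the exchange zone (Pa)
            airway_radius_m : radius of the airway segment (m)
            Returns volumetric flow in L/s. Viscous (Poiseuille) losses, which dominate
            in the small airways, are deliberately neglected in this sketch.
            """
            velocity = math.sqrt(2.0 * delta_p_pa / air_density)  # Bernoulli: dp = rho*v^2/2
            area = math.pi * airway_radius_m ** 2
            return velocity * area * 1000.0                       # m^3/s -> L/s

        # Hypothetical example: 200 Pa driving pressure across a 9-mm-radius trachea
        print(round(expiratory_flow(200.0, 0.009), 2), "L/s")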

  16. Finite Element Peen Forming Simulation

    NASA Astrophysics Data System (ADS)

    Gariépy, Alexandre; Larose, Simon; Perron, Claude; Bocher, Philippe; Lévesque, Martin

    Shot peening consists of projecting multiple small particles onto a ductile part in order to induce compressive residual stresses near the surface. Peen forming, a derivative of shot peening, is a process that creates an unbalanced stress state which in turn leads to a deformation to shape thin parts. This versatile and cost-effective process is commonly used to manufacture aluminum wing skins and rocket panels. This paper presents the finite element modelling approach that was developed by the authors to simulate the process. The method relies on shell elements and calculated stress profiles and uses an approximation equation to take into account the incremental nature of the process. Finite element predictions were in good agreement with experimental results for small-scale tests. The method was extended to a hypothetical wing skin model to show its potential applications.

  17. Impact of air-sea drag coefficient for latent heat flux on large scale climate in coupled and atmosphere stand-alone simulations

    NASA Astrophysics Data System (ADS)

    Torres, Olivier; Braconnot, Pascale; Marti, Olivier; Gential, Luc

    2018-05-01

    The turbulent fluxes across the ocean/atmosphere interface represent one of the principal driving forces of the global atmospheric and oceanic circulation. Despite decades of effort and improvements, representation of these fluxes still presents a challenge because the turbulent processes act at small scales compared to the resolved scales of the models. Beyond this subgrid parameterization issue, a comprehensive understanding of the impact of air-sea interactions on the climate system is still lacking. In this paper we investigate the large-scale impacts of the transfer coefficient used to compute turbulent heat fluxes with the IPSL-CM4 climate model, in which the surface bulk formula is modified. Analyzing both atmosphere and coupled ocean-atmosphere general circulation model (AGCM, OAGCM) simulations allows us to study the direct effect of this modification and the mechanisms of adjustment to it. We focus on the representation of latent heat flux in the tropics. We show that the heat transfer coefficients are highly similar for a given parameterization between AGCM and OAGCM simulations. Although the same areas are affected in both kinds of simulations, the differences in surface heat fluxes are substantial. A regional modification of the heat transfer coefficient has more impact than a uniform modification in AGCM simulations, while in OAGCM simulations the opposite is observed. By studying the global energetics and the atmospheric circulation response to the modification, we highlight the role of the ocean in damping a large part of the disturbance. Modification of the heat exchange coefficient changes the way the coupled system works because of the link between atmospheric circulation and SST and the different feedbacks between ocean and atmosphere. The adjustment that takes place leads to the same balance of net incoming solar radiation in all simulations. As there is no change in model physics other than the drag coefficient, we obtain similar latent heat fluxes between coupled simulations with different atmospheric circulations. Finally, we analyze the impact of model tuning and show that it can offset part of the feedbacks.
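
    The latent heat flux in such models comes from a bulk formula in which the transfer coefficient multiplies the wind speed and the air-sea humidity difference, so changing the coefficient directly rescales the flux. The sketch below shows that generic bulk formula; the coefficient and input values are placeholders, not the IPSL-CM4 parameterization.

        def latent_heat_flux(wind_speed, q_surface, q_air, c_e=1.2e-3,
                             rho_air=1.22, l_v=2.5e6):
            """Bulk formula LH = rho_a * L_v * C_E * U * (q_s - q_a), in W m-2.

            wind_speed : near-surface wind speed (m/s)
            q_surface  : saturation specific humidity at the sea surface temperature (kg/kg)
            q_air      : specific humidity of the near-surface air (kg/kg)
            c_e        : dimensionless transfer coefficient for latent heat
            """
            return rho_air * l_v * c_e * wind_speed * (q_surface - q_air)

        # Illustrative tropical-ocean values
        print(round(latent_heat_flux(7.0, 0.022, 0.016), 1), "W/m^2")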

  18. Virtual environments simulation in research reactor

    NASA Astrophysics Data System (ADS)

    Muhamad, Shalina Bt. Sheik; Bahrin, Muhammad Hannan Bin

    2017-01-01

    Virtual-reality-based simulations are interactive and engaging, and have useful potential for improving safety training. Virtual reality technology can be used to train workers who are unfamiliar with the physical layout of an area. In this study, a simulation program based on a virtual environment of a research reactor was developed. The platform used for the virtual simulation is the 3DVia software, whose rendering capabilities, movement and collision physics, and interactive navigation features were used. A real research reactor was virtually modelled and simulated, with avatar models adopted to simulate walking. Collision detection algorithms were developed for various parts of the 3D building and the avatars to restrain the avatars to certain regions of the virtual environment. A user can control the avatar to move around inside the virtual environment. Thus, this work can assist in the training of personnel, as in evaluating the radiological safety of the research reactor facility.
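
    Restraining an avatar to permitted regions usually reduces to testing the avatar's bounding volume against the bounding volumes of walls and equipment. The sketch below shows the axis-aligned bounding-box (AABB) overlap test that such collision checks are commonly built from; it is a generic illustration, not the 3DVia implementation used in this study.

        from dataclasses import dataclass

        @dataclass
        class AABB:
            """Axis-aligned bounding box given by its minimum and maximum corners."""
            xmin: float; ymin: float; zmin: float
            xmax: float; ymax: float; zmax: float

        def collides(a: AABB, b: AABB) -> bool:
            """Two AABBs overlap iff their extents overlap on every axis."""
            return (a.xmin <= b.xmax and a.xmax >= b.xmin and
                    a.ymin <= b.ymax and a.ymax >= b.ymin and
                    a.zmin <= b.zmax and a.zmax >= b.zmin)

        # Hypothetical example: avatar versus a reactor-hall wall segment
        avatar = AABB(1.0, 0.0, 1.0, 1.5, 1.8, 1.5)
        wall = AABB(1.4, 0.0, -5.0, 1.6, 3.0, 5.0)
        print("movement blocked" if collides(avatar, wall) else "free to move")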

  19. Multithreaded Stochastic PDES for Reactions and Diffusions in Neurons.

    PubMed

    Lin, Zhongwei; Tropper, Carl; Mcdougal, Robert A; Patoary, Mohammand Nazrul Ishlam; Lytton, William W; Yao, Yiping; Hines, Michael L

    2017-07-01

    Cells exhibit stochastic behavior when the number of molecules is small. Hence a stochastic reaction-diffusion simulator capable of working at scale can provide a more accurate view of molecular dynamics within the cell. This paper describes a parallel discrete event simulator, Neuron Time Warp-Multi Thread (NTW-MT), developed for the simulation of reaction-diffusion models of neurons. To the best of our knowledge, this is the first parallel discrete event simulator oriented towards stochastic simulation of chemical reactions in a neuron. The simulator was developed as part of the NEURON project. NTW-MT is an optimistic, thread-based simulator that aims to exploit the multi-core architectures used in high-performance machines. It makes use of a multi-level queue for the pending event set and a single roll-back message in place of individual anti-messages to disperse contention and decrease the overhead of processing rollbacks. Global Virtual Time is computed asynchronously both within and among processes to avoid the overhead of synchronizing threads. Memory usage is managed to avoid locking and unlocking when allocating and de-allocating memory and to maximize cache locality. We verified our simulator on a calcium buffer model. We examined its performance on a calcium wave model, comparing it to the performance of a process-based optimistic simulator and a threaded simulator that uses a single priority queue for each thread. Our multi-threaded simulator is shown to achieve superior performance to these simulators. Finally, we demonstrated the scalability of our simulator on a larger CICR model and a more detailed CICR model.
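
    The chemistry being parallelized here is stochastic: when molecule counts are small, reactions in a subvolume are typically sampled with Gillespie's stochastic simulation algorithm rather than integrated as deterministic rate equations. The sketch below is a minimal Gillespie loop for a toy calcium-buffer reaction, included only to illustrate the kind of event-driven kinetics such a simulator executes in each subvolume; it is not NTW-MT code, and the rate constants are invented.

        import math
        import random

        def gillespie_buffer(ca=50, buf=100, cabuf=0, k_on=0.005, k_off=0.1,
                             t_end=10.0, seed=0):
            """Stochastic simulation of Ca + Buf <-> CaBuf in a single well-mixed subvolume."""
            rng = random.Random(seed)
            t = 0.0
            while t < t_end:
                a_bind = k_on * ca * buf      # propensity of Ca + Buf -> CaBuf
                a_unbind = k_off * cabuf      # propensity of CaBuf -> Ca + Buf
                a_total = a_bind + a_unbind
                if a_total == 0.0:
                    break
                t += -math.log(1.0 - rng.random()) / a_total  # exponential waiting time
                if rng.random() * a_total < a_bind:
                    ca, buf, cabuf = ca - 1, buf - 1, cabuf + 1
                else:
                    ca, buf, cabuf = ca + 1, buf + 1, cabuf - 1
            return ca, buf, cabuf

        print(gillespie_buffer())  # final (Ca, Buf, CaBuf) copy numbers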

  20. Influence of ecohydrologic feedbacks from simulated crop growth on integrated regional hydrologic simulations under climate scenarios

    NASA Astrophysics Data System (ADS)

    van Walsum, P. E. V.; Supit, I.

    2012-06-01

    Hydrologic climate change modelling is hampered by climate-dependent model parameterizations. To reduce this dependency, we extended the regional hydrologic modelling framework SIMGRO to host a two-way coupling between the soil moisture model MetaSWAP and the crop growth simulation model WOFOST, accounting for ecohydrologic feedbacks in terms of the radiation fraction that reaches the soil, the crop coefficient, the interception fraction of rainfall, the interception storage capacity, and the root zone depth. Except for the last, these feedbacks depend on the leaf area index (LAI). The influence of regional groundwater on crop growth is included via a coupling to MODFLOW. Two versions of the MetaSWAP-WOFOST coupling were set up: one with exogenous vegetation parameters, the "static" model, and one with endogenous crop growth simulation, the "dynamic" model. Parameterization of the static and dynamic models ensured that for the current climate the simulated long-term averages of actual evapotranspiration are the same for both models. Simulations were made for two climate scenarios and two crops: grass and potato. In the dynamic model, higher temperatures in a warm year under the current climate resulted in accelerated crop development and, in the case of potato, a shorter growing season, thus partly avoiding the late summer heat. The static model has a higher potential transpiration; depending on the available soil moisture, this translates into a higher actual transpiration. This difference between the static and dynamic models is enlarged by climate change in combination with higher CO2 concentrations. For potato (and other annual arable crops), including the dynamic crop simulation gives systematically larger predicted changes in recharge due to climate change. Crop yields from soils with poor water retention capacities strongly depend on capillary rise if the moisture supply from other sources is limited. Thus, including a crop simulation model in an integrated hydrologic simulation is a valuable addition for hydrologic modelling as well as for crop modelling.
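
    Most of the feedbacks listed above are functions of the simulated leaf area index. The toy loop below illustrates that kind of two-way coupling: a crude crop model updates LAI each day, and an LAI-dependent crop coefficient and interception feed back into a soil-moisture bucket whose water stress in turn limits growth. The functional forms and parameters are invented for illustration and are not those of MetaSWAP or WOFOST.

        def coupled_season(days=150, rain_per_day=3.0, et0=4.0):
            """Toy daily coupling of crop growth (LAI) and a soil-moisture bucket (mm)."""
            soil, soil_max = 150.0, 300.0
            lai = 0.1
            for _ in range(days):
                # feedbacks from the crop model (illustrative functional forms)
                crop_coeff = 0.3 + 0.2 * min(lai, 4.0)        # transpiration demand rises with LAI
                interception = min(rain_per_day, 0.2 * lai)   # canopy interception loss (mm/day)
                # soil water balance
                stress = min(1.0, soil / (0.5 * soil_max))    # transpiration reduction factor
                transpiration = et0 * crop_coeff * stress
                soil = min(soil_max, soil + rain_per_day - interception - transpiration)
                soil = max(soil, 0.0)
                # crop growth responds to water stress, closing the feedback loop
                lai = min(6.0, lai + 0.08 * stress)
            return round(lai, 2), round(soil, 1)

        print(coupled_season())  # final (LAI, soil moisture in mm)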

  1. Constraining the Intergalactic and Circumgalactic Media with Lyman-Alpha Absorption

    NASA Astrophysics Data System (ADS)

    Sorini, Daniele; Onorbe, Jose; Hennawi, Joseph F.; Lukic, Zarija

    2018-01-01

    Lyman-alpha (Ly-a) absorption features detected in quasar spectra in the redshift range 02Mpc, the simulations asymptotically match the observations, because the ΛCDM model successfully describes the ambient IGM. This represents a critical advantage of studying the mean absorption profile. However, significant differences between the simulations, and between simulations and observations are present on scales 20kpc-2Mpc, illustrating the challenges of accurately modeling and resolving galaxy formation physics. It is noteworthy that these differences are observed as far out as ~2Mpc, indicating that the `sphere-of-influence' of galaxies could extend to approximately ~20 times the halo virial radius (~100kpc). Current observations are very precise on these scales and can thus strongly discriminate between different galaxy formation models. I demonstrate that the Ly-a absorption profile is primarily sensitive to the underlying temperature-density relationship of diffuse gas around galaxies, and argue that it thus provides a fundamental test of galaxy formation models. With near-future high-precision observations of Ly-a absorption, the tools developed in my thesis set the stage for even stronger constraints on models of galaxy formation and cosmology.

  2. Future climate change scenarios in Central America at high spatial resolution.

    PubMed

    Imbach, Pablo; Chou, Sin Chan; Lyra, André; Rodrigues, Daniela; Rodriguez, Daniel; Latinovic, Dragan; Siqueira, Gracielle; Silva, Adan; Garofolo, Lucas; Georgiou, Selena

    2018-01-01

    The objective of this work is to assess downscaled projections of climate change over Central America at 8-km resolution using the Eta Regional Climate Model, driven by the HadGEM2-ES simulations of the RCP4.5 emission scenario. The narrow shape of the continent supports the use of numerical simulations at very high horizontal resolution. Prior to assessing climate change, the 30-year baseline period 1961-1990 is evaluated against different sources of observations of precipitation and temperature. The mean seasonal precipitation and temperature distributions show reasonable agreement with observations. Spatial correlation of the 8-km-resolution Eta simulations against observations shows a clear advantage over the coarser driving global model simulations. The seasonal cycle of precipitation confirms the added value of the Eta at 8 km over coarser-resolution simulations. The Eta simulations show a systematic cold bias in the region. Climate features such as the Mid-Summer Drought and the Caribbean Low-Level Jet are well simulated by the Eta model at 8-km resolution. The assessment of future climate change is based on the 30-year period 2021-2050, under the RCP4.5 scenario. Precipitation is generally reduced, in particular during JJA and SON, the rainy season. Warming is expected over the region, but is stronger in the northern portion of the continent. The Mid-Summer Drought may develop in regions where it does not occur during the baseline period, and where it already occurs its strength may increase in the future scenario. The Caribbean Low-Level Jet shows little change in the future. Extreme temperatures show a positive trend within the period 2021-2050, whereas extreme precipitation, measured by R50mm and R90p, shows a positive trend on the eastern coast, around Costa Rica, and negative trends in the northern part of the continent. A negative trend in the duration of dry spells, which are estimated from evapotranspiration, is projected over most of the continent. Annual mean water excess shows negative trends over most of the continent, which suggests decreasing water availability in the future scenario.

  3. Future climate change scenarios in Central America at high spatial resolution

    PubMed Central

    Imbach, Pablo; Chou, Sin Chan; Rodrigues, Daniela; Rodriguez, Daniel; Latinovic, Dragan; Siqueira, Gracielle; Silva, Adan; Garofolo, Lucas; Georgiou, Selena

    2018-01-01

    The objective of this work is to assess downscaled projections of climate change over Central America at 8-km resolution using the Eta Regional Climate Model, driven by the HadGEM2-ES simulations of the RCP4.5 emission scenario. The narrow shape of the continent supports the use of numerical simulations at very high horizontal resolution. Prior to assessing climate change, the 30-year baseline period 1961–1990 is evaluated against different sources of observations of precipitation and temperature. The mean seasonal precipitation and temperature distributions show reasonable agreement with observations. Spatial correlation of the 8-km-resolution Eta simulations against observations shows a clear advantage over the coarser driving global model simulations. The seasonal cycle of precipitation confirms the added value of the Eta at 8 km over coarser-resolution simulations. The Eta simulations show a systematic cold bias in the region. Climate features such as the Mid-Summer Drought and the Caribbean Low-Level Jet are well simulated by the Eta model at 8-km resolution. The assessment of future climate change is based on the 30-year period 2021–2050, under the RCP4.5 scenario. Precipitation is generally reduced, in particular during JJA and SON, the rainy season. Warming is expected over the region, but is stronger in the northern portion of the continent. The Mid-Summer Drought may develop in regions where it does not occur during the baseline period, and where it already occurs its strength may increase in the future scenario. The Caribbean Low-Level Jet shows little change in the future. Extreme temperatures show a positive trend within the period 2021–2050, whereas extreme precipitation, measured by R50mm and R90p, shows a positive trend on the eastern coast, around Costa Rica, and negative trends in the northern part of the continent. A negative trend in the duration of dry spells, which are estimated from evapotranspiration, is projected over most of the continent. Annual mean water excess shows negative trends over most of the continent, which suggests decreasing water availability in the future scenario. PMID:29694355

  4. Mathematical Models of IABG Thermal-Vacuum Facilities

    NASA Astrophysics Data System (ADS)

    Doring, Daniel; Ulfers, Hendrik

    2014-06-01

    IABG in Ottobrunn, Germany, operates thermal-vacuum facilities of different sizes and complexities as a service for space testing of satellites and components. One aspect of these tests is the qualification of the thermal control system that keeps all onboard components within their safe operating temperature band. As not all possible operation/mission states can be simulated within a sensible test time, usually a subset of important and extreme states is tested at TV facilities to validate the thermal model of the satellite, which is then used to model all other possible mission states. With advances in the precision of customer thermal models, simple assumptions about the test environment (e.g. everything black and cold, one solar constant of light from this side) are no longer sufficient, as real space simulation chambers do deviate from this ideal. For example, the mechanical adapters which support the spacecraft are usually not actively cooled. To enable IABG to provide a model that is sufficiently detailed and realistic for current system tests, the Munich engineering company CASE developed ESATAN models for the two larger chambers. CASE has many years of experience in thermal analysis for space-flight systems and ESATAN. The two models represent the rather simple (and therefore very homogeneous) 3m-TVA and the extremely complex space simulation test facility with its solar simulator. The cooperation of IABG and CASE built up extensive knowledge of the facilities' thermal behaviour. This is the key to optimally supporting customers with their test campaigns in the future. The ESARAD part of the models contains all relevant information with regard to geometry (CAD data), surface properties (optical measurements) and solar irradiation for the sun simulator. The temperature of the actively cooled thermal shrouds is measured and mapped to the thermal mesh to create the temperature field in the ESATAN part as boundary conditions. Both models comprise switches to easily establish multiple possible set-ups (e.g. excluding components like the motion system or enabling/disabling the solar simulator). Both models were validated by comparing calculated results (thermal balance temperatures for simple passive test articles) with measured temperatures generated in actual tests in these facilities. This paper presents information about the chambers, the modelling approach, the properties of the models and their performance in the validation tests.

  5. A Mathematical Model of the Olfactory Bulb for the Selective Adaptation Mechanism in the Rodent Olfactory System.

    PubMed

    Soh, Zu; Nishikawa, Shinya; Kurita, Yuichi; Takiguchi, Noboru; Tsuji, Toshio

    2016-01-01

    To predict the odor quality of an odorant mixture, the interaction between odorants must be taken into account. Previously, an experiment in which mice discriminated between odorant mixtures identified a selective adaptation mechanism in the olfactory system. This paper proposes an olfactory model for odorant mixtures that can account for selective adaptation in terms of neural activity. The proposed model uses the spatial activity pattern of the mitral layer obtained from model simulations to predict the perceptual similarity between odors. Measured glomerular activity patterns are used as input to the model. The neural interaction between mitral cells and granular cells is then simulated, and a dissimilarity index between odors is defined using the activity patterns of the mitral layer. An odor set composed of three odorants is used to test the ability of the model. Simulations are performed based on the odor discrimination experiment on mice. As a result, we observe that part of the neural activity in the glomerular layer is enhanced in the mitral layer, whereas another part is suppressed. We find that the dissimilarity index strongly correlates with the odor discrimination rate of mice: r = 0.88 (p = 0.019). We conclude that our model has the ability to predict the perceptual similarity of odorant mixtures. In addition, the model also accounts for selective adaptation via the odor discrimination rate, and the enhancement and inhibition in the mitral layer may be related to this selective adaptation.
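
    The dissimilarity index of the paper is defined from the simulated mitral-layer activity patterns; its exact form is not reproduced here. A common generic way to compare two spatial activity patterns is one minus their Pearson correlation, sketched below as a stand-in illustration with made-up activity values.

        import numpy as np

        def dissimilarity(pattern_a, pattern_b):
            """1 - Pearson correlation between two spatial activity patterns."""
            a = np.asarray(pattern_a, dtype=float).ravel()
            b = np.asarray(pattern_b, dtype=float).ravel()
            return 1.0 - np.corrcoef(a, b)[0, 1]

        # Hypothetical mitral-layer activity maps for two odours (arbitrary units)
        odor_x = [0.1, 0.8, 0.4, 0.0, 0.6]
        odor_y = [0.2, 0.7, 0.1, 0.1, 0.9]
        print(round(dissimilarity(odor_x, odor_y), 3))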

  6. Model structure identification for wastewater treatment simulation based on computational fluid dynamics.

    PubMed

    Alex, J; Kolisch, G; Krause, K

    2002-01-01

    The objective of the presented project is to use the results of a CFD simulation to automatically, systematically and reliably generate an appropriate model structure for simulation of the biological processes using CSTR activated sludge compartments. Models and dynamic simulation have become important tools for research, but also increasingly for the design and optimisation of wastewater treatment plants. Besides the biological models, several applications of computational fluid dynamics (CFD) to wastewater treatment plants have been reported. One aim of the presented method for deriving model structures from CFD results is to exclude the influence of empirical structure selection on the results of dynamic simulation studies of WWTPs. The second application of the approach is the analysis of badly performing treatment plants where the suspicion arises that poor flow behaviour, such as short-circuit flows, is part of the problem. The method requires, as a first step, the calculation of the fluid dynamics of the biological treatment step at different loading situations using 3-dimensional CFD simulation. This information is then used to automatically generate a suitable model structure for conventional dynamic simulation of the treatment plant, built from a number of CSTR modules with a pattern of exchange flows between the tanks. The method is explained in detail and its application to the WWTP Wuppertal Buchenhofen is presented.
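
    The generated model structure is a network of CSTRs with exchange flows between the tanks. A minimal sketch of the resulting mass balances for a single dissolved, non-reacting component is shown below; the tank volumes, through-flow and exchange flows are hypothetical, whereas in the actual method they would be derived from the CFD results and the kinetics from an activated sludge model.

        import numpy as np

        def cstr_network_step(c, volumes, q_in, c_in, exchange, dt):
            """One explicit Euler step of dC_i/dt for a chain of CSTRs.

            c        : concentrations in each tank (g/m^3)
            volumes  : tank volumes (m^3)
            q_in     : through-flow entering tank 0 and leaving the last tank (m^3/d)
            c_in     : influent concentration (g/m^3)
            exchange : back-mixing flow between neighbouring tanks (m^3/d)
            """
            n = len(c)
            dcdt = np.zeros(n)
            for i in range(n):
                upstream = c_in if i == 0 else c[i - 1]
                dcdt[i] += q_in * (upstream - c[i]) / volumes[i]           # through-flow
                if i > 0:                                                  # exchange with tank i-1
                    dcdt[i] += exchange * (c[i - 1] - c[i]) / volumes[i]
                if i < n - 1:                                              # exchange with tank i+1
                    dcdt[i] += exchange * (c[i + 1] - c[i]) / volumes[i]
            return c + dt * dcdt

        c = np.zeros(4)  # four tanks in series, initially clean water
        for _ in range(2000):
            c = cstr_network_step(c, volumes=np.full(4, 500.0), q_in=2000.0,
                                  c_in=100.0, exchange=800.0, dt=0.001)
        print(np.round(c, 1))  # concentrations approach the influent value tank by tank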

  7. Simulation in paediatric urology and surgery, part 2: An overview of simulation modalities and their applications.

    PubMed

    Nataraja, R M; Webb, N; Lopez, P J

    2018-04-01

    Surgical training has changed radically in the last few decades. The traditional Halstedian model of time-bound apprenticeship has been replaced with competency-based training. In our previous article, we presented an overview of learning theory relevant to clinical teaching; a summary for the busy paediatric surgeon and urologist. We introduced the concepts underpinning current changes in surgical education and training. In this next article, we give an overview of the various modalities of surgical simulation, the educational principles that underlie them, and potential applications in clinical practice. These modalities include: open surgical models and trainers, laparoscopic bench trainers, virtual reality trainers, simulated patients and role-play, hybrid simulation, scenario-based simulation, distributed simulation, virtual reality, and online simulation. Specific examples of technology that may be used for these modalities are included, but this is not a comprehensive review of all available products. Copyright © 2018 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.

  8. Simulation of ground-water flow in the Antlers aquifer in southeastern Oklahoma and northeastern Texas

    USGS Publications Warehouse

    Morton, R.B.

    1992-01-01

    The Antlers Sandstone of Early Cretaceous age occurs in all or parts of Atoka, Bryan, Carter, Choctaw, Johnston, Love, Marshall, McCurtain, and Pushmataha Counties, a 4,400-square-mile area in southeastern Oklahoma parallel to the Red River. The sandstone comprising the Antlers aquifer is exposed in the northern one-third of the area, and ground water in the outcrop area is unconfined. Younger Cretaceous rocks overlie the Antlers in the southern two-thirds of the study area where the aquifer is confined. The Antlers extends in the subsurface south into Texas where it underlies all or parts of Bowie, Cooke, Fannin, Grayson, Lamar, and Red River Counties. An area of approximately 5,400 square miles in Texas is included in the study. The Antlers Sandstone consists of sand, clay, conglomerate, and limestone deposited on an erosional surface of Paleozoic rocks. Saturated thickness in the Antlers ranges from 0 feet at the updip limit to probably more than 2,000 feet, 25 to 30 miles south of the Red River. Simulated recharge to the Antlers based on model calibration ranges from 0.32 to about 0.96 inch per year. Base flow increases where streams cross the Antlers outcrop, indicating that the aquifer supplies much of the base flow. Pumpage rates for 1980 in excess of 35 million gallons per year per grid cell for public supply, irrigation, and industrial uses total 872 million gallons in the Oklahoma part of the Antlers and 5,228 million gallons in the Texas part of the Antlers. Ground-water flow in the Antlers aquifer was simulated using one active layer in a three-dimensional finite-difference mathematical model. Simulated aquifer hydraulic conductivity values range from 0.87 to 3.75 feet per day. A vertical hydraulic conductivity of 1.5 × 10⁻⁴ foot per day was specified for the younger confining unit at the start of the simulation. An average storage coefficient of 0.0005 was specified for the confined part of the aquifer; a specific yield of 0.17 was specified for the unconfined part. Because pumping from the Antlers is minimal, calibration under transient conditions was not possible. Consequently, the head changes resulting from projection simulations in this study are estimates only. Volumetric results of the six projection simulations from the years 1990 to 2040 indicate that the decrease in the volume of ground water in storage due to pumping approximately 9,700,000 acre-feet from 1970 to 2040 is less than 0.1 percent.
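
    The report's model solves the groundwater-flow equation with a block-centered finite-difference scheme. The sketch below is a far simpler 2-D steady-state analogue (uniform transmissivity, fixed-head boundaries, one pumping cell) solved by Jacobi iteration; it only illustrates the finite-difference idea and is not the MODFLOW-style model documented here. All parameter values are hypothetical and the length units are arbitrary.

        import numpy as np

        def steady_heads(nrows=20, ncols=20, dx=1000.0, transmissivity=500.0,
                         boundary_head=100.0, pump_rate=5000.0, iters=5000):
            """2-D steady confined flow: T * laplacian(h) + W = 0, with W < 0 at the pumped cell."""
            h = np.full((nrows, ncols), boundary_head)
            w = np.zeros_like(h)                               # source/sink rate per unit area
            w[nrows // 2, ncols // 2] = -pump_rate / (dx * dx)
            for _ in range(iters):
                h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                                        h[1:-1, :-2] + h[1:-1, 2:] +
                                        w[1:-1, 1:-1] * dx * dx / transmissivity)
            return h

        heads = steady_heads()
        print("drawdown at the pumped cell:", round(100.0 - heads[10, 10], 2))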

  9. El Nino - La Nina events simulated with Cane and Zebiak's model and observed with satellite or in situ data. Part I: Model data comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perigaud C.; Dewitte, B.

    The Zebiak and Cane model is used in its "uncoupled mode," meaning that the oceanic model component is driven by the Florida State University (FSU) wind stress anomalies over 1980-93 to simulate sea surface temperature anomalies, and these are used in the atmospheric model component to generate wind anomalies. Simulations are compared with data derived from FSU winds, International Satellite Cloud Climatology Project cloud convection, Advanced Very High Resolution Radiometer SST, Geosat sea level, 20°C isotherm depth derived from an expendable bathythermograph, and current velocities estimated from drifters or current-meter moorings. Forced by the simulated SST, the atmospheric model is fairly successful in reproducing the observed westerlies during El Nino events. The model fails to simulate the easterlies during La Nina 1988. The simulated forcing of the atmosphere is in very poor agreement with the heating derived from cloud convection data. Similarly, the model is fairly successful in reproducing the warm anomalies during El Nino events. However, it fails to simulate the observed cold anomalies. Simulated variations of thermocline depth agree reasonably well with observations. The model simulates zonal current anomalies that are reversing at a dominant 9-month frequency. Projecting altimetric observations on Kelvin and Rossby waves provides an estimate of zonal current anomalies, which is consistent with the ones derived from drifters or from current meter moorings. Unlike the simulated ones, the observed zonal current anomalies reverse from eastward during El Nino events to westward during La Nina events. The simulated 9-month oscillations correspond to a resonant mode of the basin. They can be suppressed by cancelling the wave reflection at the boundaries, or they can be attenuated by increasing the friction in the ocean model. 58 refs., 14 figs., 6 tabs.

  10. A survey on hair modeling: styling, simulation, and rendering.

    PubMed

    Ward, Kelly; Bertails, Florence; Kim, Tae-Yong; Marschner, Stephen R; Cani, Marie-Paule; Lin, Ming C

    2007-01-01

    Realistic hair modeling is a fundamental part of creating virtual humans in computer graphics. This paper surveys the state of the art in the major topics of hair modeling: hairstyling, hair simulation, and hair rendering. Because of the difficult, often unsolved problems that arise in all these areas, a broad diversity of approaches are used, each with strengths that make it appropriate for particular applications. We discuss each of these major topics in turn, presenting the unique challenges facing each area and describing solutions that have been presented over the years to handle these complex issues. Finally, we outline some of the remaining computational challenges in hair modeling.

  11. Groundwater-flow budget for the lower Apalachicola-Chattahoochee-Flint River Basin in southwestern Georgia and parts of Florida and Alabama, 2008–12

    USGS Publications Warehouse

    Jones, L. Elliott; Painter, Jaime A.; LaFontaine, Jacob H.; Sepúlveda, Nicasio; Sifuentes, Dorothy F.

    2017-12-29

    As part of the National Water Census program in the Apalachicola-Chattahoochee-Flint (ACF) River Basin, the U.S. Geological Survey evaluated the groundwater budget of the lower ACF, with particular emphasis on recharge, characterizing the spatial and temporal relation between surface water and groundwater, and groundwater pumping. To evaluate the hydrologic budget of the lower ACF River Basin, a groundwater-flow model, constructed using MODFLOW-2005, was developed for the Upper Floridan aquifer and overlying semiconfining unit for 2008–12. Model input included temporally and spatially variable specified recharge, estimated using a Precipitation-Runoff Modeling System (PRMS) model for the ACF River Basin, and pumping, partly estimated on the basis of measured agricultural pumping rates in Georgia. The model was calibrated to measured groundwater levels and base flows, which were estimated using hydrograph separation. The simulated groundwater-flow budget resulted in a small net cumulative loss of groundwater in storage during the study period. The model simulated a net loss in groundwater storage for all the subbasins as conditions became substantially drier from the beginning to the end of the study period. The model is limited by its conceptualization, the data used to represent and calibrate the model, and the mathematical representation of the system; therefore, any interpretations should be considered in light of these limitations. In spite of these limitations, the model provides insight regarding water availability in the lower ACF River Basin.
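
    Base flows for calibration were estimated by hydrograph separation. One common family of methods is a recursive digital filter that removes the quickflow component from the streamflow record; the one-parameter Lyne-Hollick filter is sketched below as an illustration, though the report does not state which separation technique was used.

        def lyne_hollick_baseflow(streamflow, alpha=0.925):
            """Single forward pass of the Lyne-Hollick recursive digital filter.

            streamflow : sequence of daily discharges
            alpha      : filter parameter, typically 0.9-0.95
            Returns the base-flow series of the same length.
            """
            quick = [0.0] * len(streamflow)
            base = [0.0] * len(streamflow)
            base[0] = streamflow[0]
            for t in range(1, len(streamflow)):
                quick[t] = (alpha * quick[t - 1]
                            + 0.5 * (1.0 + alpha) * (streamflow[t] - streamflow[t - 1]))
                quick[t] = max(quick[t], 0.0)          # quickflow cannot be negative
                base[t] = streamflow[t] - quick[t]     # remainder is base flow
            return base

        flows = [5, 5, 20, 45, 30, 18, 12, 9, 7, 6, 5, 5]  # hypothetical storm hydrograph
        print([round(b, 1) for b in lyne_hollick_baseflow(flows)])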

  12. Dynamic Downscaling of Seasonal Simulations over South America.

    NASA Astrophysics Data System (ADS)

    Misra, Vasubandhu; Dirmeyer, Paul A.; Kirtman, Ben P.

    2003-01-01

    In this paper multiple atmospheric global circulation model (AGCM) integrations at T42 spectral truncation and prescribed sea surface temperature were used to drive regional spectral model (RSM) simulations at 80-km resolution for the austral summer season (January-February-March). Relative to the AGCM, the RSM improves the ensemble mean simulation of precipitation and the lower- and upper-level tropospheric circulation over both tropical and subtropical South America and the neighboring ocean basins. It is also seen that the RSM exacerbates the dry bias over the northern tip of South America and the Nordeste region, and perpetuates the erroneous split intertropical convergence zone (ITCZ) over both the Pacific and Atlantic Ocean basins from the AGCM. The RSM at 80-km horizontal resolution is able to reasonably resolve the Altiplano plateau. This led to an improvement in the mean precipitation over the plateau. The improved resolution orography in the RSM did not substantially change the predictability of the precipitation, surface fluxes, or upper- and lower-level winds in the vicinity of the Andes Mountains from the AGCM. In spite of identical convective and land surface parameterization schemes, the diagnostic quantities, such as precipitation and surface fluxes, show significant differences in the intramodel variability over oceans and certain parts of the Amazon River basin (ARB). However, the prognostic variables of the models exhibit relatively similar model noise structures and magnitude. This suggests that the model physics are in large part responsible for the divergence of the solutions in the two models. However, the surface temperature and fluxes from the land surface scheme of the model [Simplified Simple Biosphere scheme (SSiB)] display comparable intramodel variability, except over certain parts of ARB in the two models. This suggests a certain resilience of predictability in SSiB (over the chosen domain of study) to variations in horizontal resolution. It is seen in this study that the summer precipitation over tropical and subtropical South America is highly unpredictable in both models.

  13. Application of an online-coupled regional climate model, WRF-CAM5, over East Asia for examination of ice nucleation schemes: Part I. Comprehensive model evaluation and trend analysis for 2006 and 2011

    DOE PAGES

    Chen, Ying; Zhang, Yang; Fan, Jiwen; ...

    2015-08-18

    Online-coupled climate and chemistry models are necessary to realistically represent the interactions between climate variables and chemical species and to accurately simulate aerosol direct and indirect effects on cloud, precipitation, and radiation. In this Part I of a two-part paper, simulations from the Weather Research and Forecasting model coupled with the physics package of the Community Atmosphere Model (WRF-CAM5) are conducted with the default heterogeneous ice nucleation parameterization over East Asia for two full years: 2006 and 2011. A comprehensive model evaluation is performed using satellite and surface observations. The model shows an overall acceptable performance for major meteorological variables at the surface and in the boundary layer, as well as column variables (e.g., precipitation, cloud fraction, precipitable water vapor, downward longwave and shortwave radiation). Moderate to large biases exist for cloud condensation nuclei over oceanic areas and for cloud variables (e.g., cloud droplet number concentration, cloud liquid and ice water paths, cloud optical depth, longwave and shortwave cloud forcing). These biases indicate a need to improve the model treatments for cloud processes, especially cloud droplets and ice nucleation, as well as to reduce uncertainty in the satellite retrievals. The model simulates the column abundances of chemical species well, except for column SO2, but performs relatively poorly for surface concentrations of several species such as CO, NO2, SO2, PM2.5, and PM10. Several reasons could contribute to the underestimation of major chemical species in East Asia, including underestimations of anthropogenic emissions and natural dust emissions, uncertainties in the spatial and vertical distributions of the anthropogenic emissions, and biases in meteorological, radiative, and cloud predictions. Despite moderate to large biases in the chemical predictions, the model performance is generally consistent with or even better than that reported for East Asia, with only a few exceptions. The model generally reproduces the observed seasonal variations and the difference between 2006 and 2011 for most variables and chemical species. Overall, these results demonstrate promising skills of WRF-CAM5 for long-term simulations at a regional scale and suggest several areas of potential improvement.

  14. Application of an Online-Coupled Regional Climate Model, WRF-CAM5, over East Asia for Examination of Ice Nucleation Schemes: Part I. Comprehensive Model Evaluation and Trend Analysis for 2006 and 2011

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ying; Zhang, Yang; Fan, Jiwen

    Online-coupled climate and chemistry models are necessary to realistically represent the interactions between climate variables and chemical species and to accurately simulate aerosol direct and indirect effects on cloud, precipitation, and radiation. In this Part I of a two-part paper, simulations from the Weather Research and Forecasting model coupled with the physics package of the Community Atmosphere Model (WRF-CAM5) are conducted with the default heterogeneous ice nucleation parameterization over East Asia for two full years: 2006 and 2011. A comprehensive model evaluation is performed using satellite and surface observations. The model shows an overall acceptable performance for major meteorological variables at the surface and in the boundary layer, as well as column variables (e.g., precipitation, cloud fraction, precipitable water vapor, downward longwave and shortwave radiation). Moderate to large biases exist for cloud condensation nuclei over oceanic areas and for cloud variables (e.g., cloud droplet number concentration, cloud liquid and ice water paths, cloud optical depth, longwave and shortwave cloud forcing). These biases indicate a need to improve the model treatments for cloud processes, especially cloud droplets and ice nucleation, as well as to reduce uncertainty in the satellite retrievals. The model simulates the column abundances of chemical species well, except for column SO2, but performs relatively poorly for surface concentrations of several species such as CO, NO2, SO2, PM2.5, and PM10. Several reasons could contribute to the underestimation of major chemical species in East Asia, including underestimations of anthropogenic emissions and natural dust emissions, uncertainties in the spatial and vertical distributions of the anthropogenic emissions, and biases in meteorological, radiative, and cloud predictions. Despite moderate to large biases in the chemical predictions, the model performance is generally consistent with or even better than that reported for East Asia, with only a few exceptions. The model generally reproduces the observed seasonal variations and the difference between 2006 and 2011 for most variables and chemical species. Overall, these results demonstrate promising skills of WRF-CAM5 for long-term simulations at a regional scale and suggest several areas of potential improvement.

  15. Application of an online-coupled regional climate model, WRF-CAM5, over East Asia for examination of ice nucleation schemes: Part I. Comprehensive model evaluation and trend analysis for 2006 and 2011

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ying; Zhang, Yang; Fan, Jiwen

    Online-coupled climate and chemistry models are necessary to realistically represent the interactions between climate variables and chemical species and to accurately simulate aerosol direct and indirect effects on cloud, precipitation, and radiation. In this Part I of a two-part paper, simulations from the Weather Research and Forecasting model coupled with the physics package of the Community Atmosphere Model (WRF-CAM5) are conducted with the default heterogeneous ice nucleation parameterization over East Asia for two full years: 2006 and 2011. A comprehensive model evaluation is performed using satellite and surface observations. The model shows an overall acceptable performance for major meteorological variables at the surface and in the boundary layer, as well as column variables (e.g., precipitation, cloud fraction, precipitable water vapor, downward longwave and shortwave radiation). Moderate to large biases exist for cloud condensation nuclei over oceanic areas and for cloud variables (e.g., cloud droplet number concentration, cloud liquid and ice water paths, cloud optical depth, longwave and shortwave cloud forcing). These biases indicate a need to improve the model treatments for cloud processes, especially cloud droplets and ice nucleation, as well as to reduce uncertainty in the satellite retrievals. The model simulates the column abundances of chemical species well, except for column SO2, but performs relatively poorly for surface concentrations of several species such as CO, NO2, SO2, PM2.5, and PM10. Several reasons could contribute to the underestimation of major chemical species in East Asia, including underestimations of anthropogenic emissions and natural dust emissions, uncertainties in the spatial and vertical distributions of the anthropogenic emissions, and biases in meteorological, radiative, and cloud predictions. Despite moderate to large biases in the chemical predictions, the model performance is generally consistent with or even better than that reported for East Asia, with only a few exceptions. The model generally reproduces the observed seasonal variations and the difference between 2006 and 2011 for most variables and chemical species. Overall, these results demonstrate promising skills of WRF-CAM5 for long-term simulations at a regional scale and suggest several areas of potential improvement.

  16. Simulation of the GEM detector for BM@N experiment

    NASA Astrophysics Data System (ADS)

    Baranov, Dmitriy; Rogachevsky, Oleg

    2017-03-01

    The Gas Electron Multiplier (GEM) detector is one of the basic parts of the BM@N experiment included in the NICA project. The simulation model presented in this article takes into account features of the signal-generation process in an ionization GEM chamber. Proper parameters for the simulation were extracted from data retrieved with the help of Garfield++ (a toolkit for the detailed simulation of particle detectors). As a result, we are able to generate clusters in the micro-strip readout layers that correspond to clusters obtained from a real physics experiment.

  17. Evaluation of a Mesoscale Atmospheric Dispersion Modeling System with Observations from the 1980 Great Plains Mesoscale Tracer Field Experiment. Part I: Datasets and Meteorological Simulations.

    NASA Astrophysics Data System (ADS)

    Moran, Michael D.; Pielke, Roger A.

    1996-03-01

    The Colorado State University mesoscale atmospheric dispersion (MAD) numerical modeling system, which consists of a prognostic mesoscale meteorological model coupled to a mesoscale Lagrangian particle dispersion model, has been used to simulate the transport and diffusion of a perfluorocarbon tracer-gas cloud for one afternoon surface release during the July 1980 Great Plains mesoscale tracer field experiment. Ground-level concentration (GLC) measurements taken along arcs of samplers 100 and 600 km downwind of the release site at Norman, Oklahoma, up to three days after the tracer release were available for comparison. Quantitative measures of a number of significant dispersion characteristics obtained from analysis of the observed tracer cloud's moving GLC 'footprint' have been used to evaluate the modeling system's skill in simulating this MAD case. MAD is more dependent upon the spatial and temporal structure of the transport wind field than is short-range atmospheric dispersion. For the Great Plains mesoscale tracer experiment, the observations suggest that the Great Plains nocturnal low-level jet played an important role in transporting and deforming the tracer cloud. A suite of ten two- and three-dimensional numerical meteorological experiments was devised to investigate the relative contributions of topography, other surface inhomogeneities, atmospheric baroclinicity, synoptic-scale flow evolution, and meteorological model initialization time to the structure and evolution of the low-level mesoscale flow field and thus to MAD. Results from the ten mesoscale meteorological simulations are compared in this part of the paper. The predicted wind fields display significant differences, which give rise in turn to significant differences in predicted low-level transport. The presence of an oscillatory ageostrophic component in the observed synoptic low-level winds for this case is shown to complicate initialization of the meteorological model considerably and is the likely cause of directional errors in the predicted mean tracer transport. A companion paper describes the results from the associated dispersion simulations.

  18. Towards predictive data-driven simulations of wildfire spread - Part I: Reduced-cost Ensemble Kalman Filter based on a Polynomial Chaos surrogate model for parameter estimation

    NASA Astrophysics Data System (ADS)

    Rochoux, M. C.; Ricci, S.; Lucor, D.; Cuenot, B.; Trouvé, A.

    2014-05-01

    This paper is the first part in a series of two articles and presents a data-driven wildfire simulator for forecasting wildfire spread scenarios, at a reduced computational cost that is consistent with operational systems. The prototype simulator features the following components: a level-set-based fire propagation solver FIREFLY that adopts a regional-scale modeling viewpoint, treats wildfires as surface propagating fronts, and uses a description of the local rate of fire spread (ROS) as a function of environmental conditions based on Rothermel's model; a series of airborne-like observations of the fire front positions; and a data assimilation algorithm based on an ensemble Kalman filter (EnKF) for parameter estimation. This stochastic algorithm partly accounts for the non-linearities between the input parameters of the semi-empirical ROS model and the fire front position, and is sequentially applied to provide a spatially-uniform correction to wind and biomass fuel parameters as observations become available. A wildfire spread simulator combined with an ensemble-based data assimilation algorithm is therefore a promising approach to reduce uncertainties in the forecast position of the fire front and to introduce a paradigm-shift in the wildfire emergency response. In order to reduce the computational cost of the EnKF algorithm, a surrogate model based on a polynomial chaos (PC) expansion is used in place of the forward model FIREFLY in the resulting hybrid PC-EnKF algorithm. The performance of EnKF and PC-EnKF is assessed on synthetically-generated simple configurations of fire spread to provide valuable information and insight on the benefits of the PC-EnKF approach as well as on a controlled grassland fire experiment. The results indicate that the proposed PC-EnKF algorithm features similar performance to the standard EnKF algorithm, but at a much reduced computational cost. In particular, the re-analysis and forecast skills of data assimilation strongly relate to the spatial and temporal variability of the errors in the ROS model parameters.
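
    At its core, the EnKF parameter estimation used here nudges an ensemble of ROS-model parameters toward the observed front positions through the ensemble cross-covariance between parameters and predicted observations. The sketch below shows that stochastic update for a scalar parameter and a scalar observation, with a toy forward model standing in for FIREFLY; all values are invented for illustration.

        import numpy as np

        def enkf_parameter_update(params, forward_model, y_obs, obs_err_std, rng):
            """One ensemble Kalman filter analysis step for parameter estimation.

            params        : (n_ens,) ensemble of the uncertain parameter
            forward_model : maps a parameter value to a predicted observation
            y_obs         : the observation (e.g. an observed front position)
            obs_err_std   : observation error standard deviation
            """
            predictions = np.array([forward_model(p) for p in params])
            cov_py = np.cov(params, predictions)[0, 1]        # parameter/observation covariance
            var_yy = predictions.var(ddof=1) + obs_err_std ** 2
            gain = cov_py / var_yy                            # Kalman gain
            perturbed_obs = y_obs + rng.normal(0.0, obs_err_std, size=params.shape)
            return params + gain * (perturbed_obs - predictions)

        rng = np.random.default_rng(0)
        true_ros = 0.5                                        # hypothetical "true" spread rate (m/s)
        front_after_100s = lambda ros: 100.0 * ros            # toy forward model: position = ROS * time
        prior = rng.normal(0.8, 0.3, size=50)                 # prior parameter ensemble
        posterior = enkf_parameter_update(prior, front_after_100s,
                                          front_after_100s(true_ros), 2.0, rng)
        print(round(prior.mean(), 3), "->", round(posterior.mean(), 3))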

  19. Towards predictive data-driven simulations of wildfire spread - Part I: Reduced-cost Ensemble Kalman Filter based on a Polynomial Chaos surrogate model for parameter estimation

    NASA Astrophysics Data System (ADS)

    Rochoux, M. C.; Ricci, S.; Lucor, D.; Cuenot, B.; Trouvé, A.

    2014-11-01

    This paper is the first part in a series of two articles and presents a data-driven wildfire simulator for forecasting wildfire spread scenarios, at a reduced computational cost that is consistent with operational systems. The prototype simulator features the following components: an Eulerian front propagation solver FIREFLY that adopts a regional-scale modeling viewpoint, treats wildfires as surface propagating fronts, and uses a description of the local rate of fire spread (ROS) as a function of environmental conditions based on Rothermel's model; a series of airborne-like observations of the fire front positions; and a data assimilation (DA) algorithm based on an ensemble Kalman filter (EnKF) for parameter estimation. This stochastic algorithm partly accounts for the nonlinearities between the input parameters of the semi-empirical ROS model and the fire front position, and is sequentially applied to provide a spatially uniform correction to wind and biomass fuel parameters as observations become available. A wildfire spread simulator combined with an ensemble-based DA algorithm is therefore a promising approach to reduce uncertainties in the forecast position of the fire front and to introduce a paradigm-shift in the wildfire emergency response. In order to reduce the computational cost of the EnKF algorithm, a surrogate model based on a polynomial chaos (PC) expansion is used in place of the forward model FIREFLY in the resulting hybrid PC-EnKF algorithm. The performance of EnKF and PC-EnKF is assessed on synthetically generated simple configurations of fire spread to provide valuable information and insight on the benefits of the PC-EnKF approach, as well as on a controlled grassland fire experiment. The results indicate that the proposed PC-EnKF algorithm features similar performance to the standard EnKF algorithm, but at a much reduced computational cost. In particular, the re-analysis and forecast skills of DA strongly relate to the spatial and temporal variability of the errors in the ROS model parameters.

  20. Modeling of pathogen survival during simulated gastric digestion.

    PubMed

    Koseki, Shige; Mizuno, Yasuko; Sotome, Itaru

    2011-02-01

    The objective of the present study was to develop a mathematical model of pathogenic bacterial inactivation kinetics in a gastric environment in order to further understand a part of the infectious dose-response mechanism. The major bacterial pathogens Listeria monocytogenes, Escherichia coli O157:H7, and Salmonella spp. were examined by using simulated gastric fluid adjusted to various pH values. To correspond to the various pHs in a stomach during digestion, a modified logistic differential equation model and the Weibull differential equation model were examined. The specific inactivation rate for each pathogen was successfully described by a square-root model as a function of pH. The square-root models were combined with the modified logistic differential equation to obtain a complete inactivation curve. Both the modified logistic and Weibull models provided a highly accurate fitting of the static pH conditions for every pathogen. However, while the residuals plots of the modified logistic model indicated no systematic bias and/or regional prediction problems, the residuals plots of the Weibull model showed a systematic bias. The modified logistic model appropriately predicted the pathogen behavior in the simulated gastric digestion process with actual food, including cut lettuce, minced tuna, hamburger, and scrambled egg. Although the developed model enabled us to predict pathogen inactivation during gastric digestion, its results also suggested that the ingested bacteria in the stomach would barely be inactivated in the real digestion process. The results of this study will provide important information on a part of the dose-response mechanism of bacterial pathogens.
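
    As a hedged reading of the scheme described above, the sketch below couples a square-root-type pH dependence of the specific inactivation rate to a simple first-order survival equation and integrates it under a time-varying gastric pH. The coefficients and pH profile are invented, and the survival equation is deliberately simplified relative to the paper's modified logistic model.

        import math

        def sqrt_model_rate(ph, b=0.05, ph_critical=4.5):
            """Square-root-type model: sqrt(k) grows linearly as pH drops below a critical pH."""
            return (b * max(ph_critical - ph, 0.0)) ** 2       # k in log10 CFU per minute

        def simulate_survival(ph_of_time, minutes=120.0, dt=0.5, log_n0=6.0):
            """Explicit Euler integration of d(log N)/dt = -k(pH(t)), a simplified stand-in
            for the modified logistic survival model."""
            log_n, t = log_n0, 0.0
            while t < minutes:
                log_n -= sqrt_model_rate(ph_of_time(t)) * dt
                t += dt
            return log_n

        # Hypothetical gastric pH dropping from about 5 toward 2 as digestion proceeds
        ph_profile = lambda t: 2.0 + 3.0 * math.exp(-t / 30.0)
        print("log10 survivors after 2 h:", round(simulate_survival(ph_profile), 2))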

  1. Modeling of Pathogen Survival during Simulated Gastric Digestion

    PubMed Central

    Koseki, Shige; Mizuno, Yasuko; Sotome, Itaru

    2011-01-01

    The objective of the present study was to develop a mathematical model of pathogenic bacterial inactivation kinetics in a gastric environment in order to further understand a part of the infectious dose-response mechanism. The major bacterial pathogens Listeria monocytogenes, Escherichia coli O157:H7, and Salmonella spp. were examined by using simulated gastric fluid adjusted to various pH values. To correspond to the various pHs in a stomach during digestion, a modified logistic differential equation model and the Weibull differential equation model were examined. The specific inactivation rate for each pathogen was successfully described by a square-root model as a function of pH. The square-root models were combined with the modified logistic differential equation to obtain a complete inactivation curve. Both the modified logistic and Weibull models provided a highly accurate fitting of the static pH conditions for every pathogen. However, while the residuals plots of the modified logistic model indicated no systematic bias and/or regional prediction problems, the residuals plots of the Weibull model showed a systematic bias. The modified logistic model appropriately predicted the pathogen behavior in the simulated gastric digestion process with actual food, including cut lettuce, minced tuna, hamburger, and scrambled egg. Although the developed model enabled us to predict pathogen inactivation during gastric digestion, its results also suggested that the ingested bacteria in the stomach would barely be inactivated in the real digestion process. The results of this study will provide important information on a part of the dose-response mechanism of bacterial pathogens. PMID:21131530

  2. A consistent modelling methodology for secondary settling tanks in wastewater treatment.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Nopens, Ingmar

    2011-03-01

    The aim of this contribution is partly to build consensus on a consistent modelling methodology (CMM) of complex real processes in wastewater treatment by combining classical concepts with results from applied mathematics, and partly to apply it to the clarification-thickening process in the secondary settling tank. In the CMM, the real process should be approximated by a mathematical model (process model; ordinary or partial differential equation (ODE or PDE)), which in turn is approximated by a simulation model (numerical method) implemented on a computer. These steps have often not been carried out in a correct way. The secondary settling tank was chosen as a case since this is one of the most complex processes in a wastewater treatment plant and simulation models developed decades ago have no guarantee of satisfying fundamental mathematical and physical properties. Nevertheless, such methods are still used in commercial tools to date. This particularly becomes of interest as the state-of-the-art practice is moving towards plant-wide modelling. Then all submodels interact and errors propagate through the model and severely hamper any calibration effort and, hence, the predictive purpose of the model. The CMM is described by applying it first to a simple conversion process in the biological reactor yielding an ODE solver, and then to the solid-liquid separation in the secondary settling tank, yielding a PDE solver. Time has come to incorporate established mathematical techniques into environmental engineering, and wastewater treatment modelling in particular, and to use proven reliable and consistent simulation models. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Chimaera simulation of complex states of flowing matter.

    PubMed

    Succi, S

    2016-11-13

    We discuss a unified mesoscale framework (chimaera) for the simulation of complex states of flowing matter across scales of motion. The chimaera framework can deal with each of the three macro-meso-micro levels through suitable 'mutations' of the basic mesoscale formulation. The idea is illustrated through selected simulations of complex micro- and nanoscale flows. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).

  4. Model modifications for simulation of flow through stratified rocks in eastern Ohio

    USGS Publications Warehouse

    Helgesen, J.O.; Razem, A.C.; Larson, S.P.

    1982-01-01

    A quasi three-dimensional groundwater flow model is being used as part of a study to determine impacts of coal-strip mining on local hydrologic systems. Modifications to the model were necessary to simulate local hydrologic conditions properly. Perched water tables required that the method of calculating vertical flow rate be changed. A head-dependent spring-discharge function and a head-dependent stream aquifer-interchange function were added to the program. Modifications were also made to allow recharge from precipitation to any layer. The modified program, data deck instructions, and sample input and output are presented. (USGS)
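
    For illustration, head-dependent spring-discharge and stream-aquifer-interchange terms of the kind described above can be written as simple flux functions; the conductances and elevations below are hypothetical placeholders, and the actual program handles these terms within its finite-difference solution.

```python
def spring_discharge(head, spring_elev, conductance):
    # Spring flows only when the simulated head rises above the spring outlet elevation.
    return conductance * max(head - spring_elev, 0.0)

def stream_aquifer_exchange(head, stream_stage, streambed_conductance):
    # Positive value: aquifer discharges to the stream; negative: stream leaks to the aquifer.
    return streambed_conductance * (head - stream_stage)

print(spring_discharge(105.2, 100.0, 50.0))        # hypothetical units, e.g. ft^3/day per cell
print(stream_aquifer_exchange(98.0, 100.0, 80.0))  # negative: stream recharges the aquifer
```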

  5. Beyond standard model calculations with Sherpa

    DOE PAGES

    Höche, Stefan; Kuttimalai, Silvan; Schumann, Steffen; ...

    2015-03-24

    We present a fully automated framework as part of the Sherpa event generator for the computation of tree-level cross sections in beyond Standard Model scenarios, making use of model information given in the Universal FeynRules Output format. Elementary vertices are implemented into C++ code automatically and provided to the matrix-element generator Comix at runtime. Widths and branching ratios for unstable particles are computed from the same building blocks. The corresponding decays are simulated with spin correlations. Parton showers, QED radiation and hadronization are added by Sherpa, providing a full simulation of arbitrary BSM processes at the hadron level.

  6. Beyond standard model calculations with Sherpa.

    PubMed

    Höche, Stefan; Kuttimalai, Silvan; Schumann, Steffen; Siegert, Frank

    We present a fully automated framework as part of the Sherpa event generator for the computation of tree-level cross sections in Beyond Standard Model scenarios, making use of model information given in the Universal FeynRules Output format. Elementary vertices are implemented into C++ code automatically and provided to the matrix-element generator Comix at runtime. Widths and branching ratios for unstable particles are computed from the same building blocks. The corresponding decays are simulated with spin correlations. Parton showers, QED radiation and hadronization are added by Sherpa, providing a full simulation of arbitrary BSM processes at the hadron level.

  7. A numerically efficient damping model for acoustic resonances in microfluidic cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hahn, P., E-mail: hahnp@ethz.ch; Dual, J.

    Bulk acoustic wave devices are typically operated in a resonant state to achieve enhanced acoustic amplitudes and high acoustofluidic forces for the manipulation of microparticles. Among other loss mechanisms related to the structural parts of acoustofluidic devices, damping in the fluidic cavity is a crucial factor that limits the attainable acoustic amplitudes. In the analytical part of this study, we quantify all relevant loss mechanisms related to the fluid inside acoustofluidic micro-devices. Subsequently, a numerical analysis of the time-harmonic visco-acoustic and thermo-visco-acoustic equations is carried out to verify the analytical results for 2D and 3D examples. The damping results are fitted into the framework of classical linear acoustics to set up a numerically efficient device model. For this purpose, all damping effects are combined into an acoustofluidic loss factor. Since some components of the acoustofluidic loss factor depend on the acoustic mode shape in the fluid cavity, we propose a two-step simulation procedure. In the first step, the loss factors are deduced from the simulated mode shape. Subsequently, a second simulation is invoked, taking all losses into account. Owing to its computational efficiency, the presented numerical device model is of great relevance for the simulation of acoustofluidic particle manipulation by means of acoustic radiation forces or acoustic streaming. For the first time, accurate 3D simulations of realistic micro-devices for the quantitative prediction of pressure amplitudes and the related acoustofluidic forces become feasible.
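
    A minimal sketch of the loss-factor idea, assuming weakly damped resonances: individual fluid loss contributions are summed into one acoustofluidic loss factor and reused in a hysteretically damped resonator response. The component values and the 2 MHz mode are assumptions for illustration only, not results from the study.

```python
import numpy as np

def combined_loss_factor(contributions):
    # For weakly damped systems, the individual loss factors simply add up.
    return sum(contributions)

def resonance_amplitude(f, f0, eta, drive=1.0):
    # Steady-state amplitude of a resonator with hysteretic (loss-factor) damping.
    r = f / f0
    return drive / np.sqrt((1.0 - r ** 2) ** 2 + eta ** 2)

# Hypothetical contributions, e.g. bulk viscous, boundary-layer, and thermal losses.
eta = combined_loss_factor([2e-3, 5e-4, 1e-4])
f = np.linspace(1.8e6, 2.2e6, 5)   # Hz, sweeping around an assumed 2 MHz cavity mode
print(resonance_amplitude(f, 2.0e6, eta))
```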

  8. An advanced constitutive model in the sheet metal forming simulation: the Teodosiu microstructural model and the Cazacu Barlat yield criterion

    NASA Astrophysics Data System (ADS)

    Alves, J. L.; Oliveira, M. C.; Menezes, L. F.

    2004-06-01

    Two constitutive models used to describe the plastic behavior of sheet metals in the numerical simulation of sheet metal forming processes are studied: a recently proposed advanced constitutive model based on the Teodosiu microstructural model and the Cazacu Barlat yield criterion is compared with a more classical one, based on the Swift law and the Hill 1948 yield criterion. Both constitutive models are implemented in DD3IMP, an in-house finite element code specifically developed to simulate sheet metal forming processes: a 3-D elastoplastic code with an updated Lagrangian formulation and a fully implicit time integration scheme, accounting for large elastoplastic strains and rotations. Solid finite elements and parametric surfaces are used to model the blank sheet and tool surfaces, respectively. Some details of the numerical implementation of the constitutive models are given. Finally, the theory is illustrated with the numerical simulation of the deep drawing of a cylindrical cup. The results show that the proposed advanced constitutive model predicts the final shape of the formed part (mean height and earing profile) more accurately, as can be concluded from the comparison with the experimental results.
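
    For reference, the "classical" pair mentioned above (Swift hardening with the Hill 1948 yield criterion) can be sketched as follows; the hardening and anisotropy parameters are illustrative placeholders, not the calibration used in the simulations.

```python
import numpy as np

def swift_hardening(eps_p, K=500.0, eps0=0.005, n=0.26):
    # Swift law: flow stress sigma_y = K * (eps0 + eps_p)^n (MPa, illustrative values).
    return K * (eps0 + eps_p) ** n

def hill48_equivalent_stress(sig, F=0.3, G=0.4, H=0.6, L=1.5, M=1.5, N=1.5):
    # Hill 1948 equivalent stress from the Cauchy stress components
    # sig = (s_xx, s_yy, s_zz, s_yz, s_zx, s_xy).
    sxx, syy, szz, syz, szx, sxy = sig
    val = (F * (syy - szz) ** 2 + G * (szz - sxx) ** 2 + H * (sxx - syy) ** 2
           + 2.0 * L * syz ** 2 + 2.0 * M * szx ** 2 + 2.0 * N * sxy ** 2)
    return np.sqrt(val)

# Yield check for a plane-stress state at 2% equivalent plastic strain.
sig = (300.0, 150.0, 0.0, 0.0, 0.0, 50.0)
print(hill48_equivalent_stress(sig) - swift_hardening(0.02))  # > 0 means plastic loading
```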

  9. C³ and combat simulation - a survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, S.A. Jr.

    1983-01-04

    This article looks at the overlap between C³ and combat simulation, from the point of view of the developer of combat simulations and models. In this context, there are two different questions. The first is: How and to what extent should specific models of the C³ processes be incorporated in simulations of combat? Here the key point is the assessment of impact. In which types or levels of combat does C³ play a role sufficiently intricate and closely coupled with combat performance that it would significantly affect combat results? Conversely, when is C³ a known factor or modifier which can be simply accommodated without a specific detailed model being made for it? The second question is the inverse one. In the development of future C³ systems, what role should combat simulation play? Obviously, simulation of the operation of the hardware, software, and other parts of the C³ system would be useful in its design and specification, but this is not combat simulation. When is it necessary to encase the C³ simulation model in a combat model which has enough detail to be considered a simulation itself? How should this outer combat model be scoped out as to the components needed? In order to build a background for answering these questions, a two-pronged approach will be taken. First a framework for C³ modeling will be developed, in which the various types of modeling which can be done to include or encase C³ in a combat model are organized. This framework will hopefully be useful in describing the particular assumptions made in specific models in terms of what could be done in a more general way. Then a few specific models will be described, concentrating on the C³ portion of the simulations, or what could be interpreted as the C³ assumptions.

  10. Based new WiMAX simulation model to investigate QoS with OPNET Modeler in scheduling environment

    NASA Astrophysics Data System (ADS)

    Saini, Sanju; Saini, K. K.

    2012-11-01

    WiMAX stands for Worldwide Interoperability for Microwave Access. It is considered a major part of broadband wireless networking and is based on the IEEE 802.16 standard. WiMAX provides innovative fixed and mobile platforms for broadband internet access anywhere, anytime, with different transmission modes. This paper presents a WiMAX simulation model designed with OPNET Modeler 14 to measure the delay, load, and throughput performance factors. Various scheduling algorithms, such as FIFO, PQ, and WFQ, are introduced in order to compare four types of scheduling service, each with its own QoS needs, using the OPNET Modeler support for WiMAX networks. The results show approximately equal load and throughput, while the delay values vary among the different base stations. The simulation results indicate the correctness and effectiveness of this approach.
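
    As an illustration of one of the schedulers compared above, the sketch below implements a generic textbook-style weighted fair queueing discipline using per-flow virtual finish times; it is not the OPNET Modeler implementation, and the flow names and weights are assumptions.

```python
import heapq

class WFQScheduler:
    def __init__(self, weights):
        self.weights = weights                     # flow_id -> weight
        self.finish = {f: 0.0 for f in weights}    # last virtual finish time per flow
        self.heap = []                             # (finish_time, seq, flow_id, pkt_bytes)
        self.seq = 0

    def enqueue(self, flow_id, pkt_bytes):
        # Simplified virtual finish time; a full WFQ also tracks a system virtual clock.
        start = self.finish[flow_id]
        self.finish[flow_id] = start + pkt_bytes / self.weights[flow_id]
        heapq.heappush(self.heap, (self.finish[flow_id], self.seq, flow_id, pkt_bytes))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        _, _, flow_id, pkt_bytes = heapq.heappop(self.heap)
        return flow_id, pkt_bytes

sched = WFQScheduler({"voice": 4.0, "video": 2.0, "best_effort": 1.0})
for flow in ["best_effort", "voice", "video", "voice"]:
    sched.enqueue(flow, 1500)
print([sched.dequeue()[0] for _ in range(4)])   # higher-weight flows are served earlier
```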

  11. Hydrodynamics and water quality models applied to Sepetiba Bay

    NASA Astrophysics Data System (ADS)

    Cunha, Cynara de L. da N.; Rosman, Paulo C. C.; Ferreira, Aldo Pacheco; Carlos do Nascimento Monteiro, Teófilo

    2006-10-01

    A coupled hydrodynamic and water quality model is used to simulate the pollution in Sepetiba Bay due to sewage effluent. Sepetiba Bay has a complicated geometry and bottom topography, and is located on the Brazilian coast near Rio de Janeiro. In the simulation, the dissolved oxygen (DO) concentration and biochemical oxygen demand (BOD) are used as indicators for the presence of organic matter in the body of water, and as parameters for evaluating the environmental pollution of the eastern part of Sepetiba Bay. Effluent sources in the model are taken from DO and BOD field measurements. The simulation results are consistent with field observations and demonstrate that the model has been correctly calibrated. The model is suitable for evaluating the environmental impact of sewage effluent on Sepetiba Bay from river inflows, assessing the feasibility of different treatment schemes, and developing specific monitoring activities. This approach has general applicability for environmental assessment of complicated coastal bays.
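
    A minimal sketch of how the two indicators named above can be coupled, using the classical Streeter-Phelps equations in a zero-dimensional form; the bay model in the paper is a full hydrodynamic-transport model, so the rate constants and initial values here are purely illustrative.

```python
import numpy as np

def streeter_phelps(L0=8.0, D0=1.0, kd=0.3, ka=0.6, DO_sat=7.5, t_end=10.0, dt=0.01):
    # L = BOD (mg/L), D = DO deficit (mg/L); dL/dt = -kd*L, dD/dt = kd*L - ka*D.
    t = np.arange(0.0, t_end, dt)
    L, D = np.empty_like(t), np.empty_like(t)
    L[0], D[0] = L0, D0
    for i in range(1, len(t)):
        L[i] = L[i - 1] + dt * (-kd * L[i - 1])
        D[i] = D[i - 1] + dt * (kd * L[i - 1] - ka * D[i - 1])
    return t, L, DO_sat - D   # times (days), BOD, dissolved oxygen

t, bod, do = streeter_phelps()
print(f"minimum DO ~ {do.min():.2f} mg/L at day {t[do.argmin()]:.1f}")
```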

  12. Watershed Models for Decision Support for Inflows to Potholes Reservoir, Washington

    USGS Publications Warehouse

    Mastin, Mark C.

    2009-01-01

    A set of watershed models for four basins (Crab Creek, Rocky Ford Creek, Rocky Coulee, and Lind Coulee), draining into Potholes Reservoir in east-central Washington, was developed as part of a decision support system to aid the U.S. Department of the Interior, Bureau of Reclamation, in managing water resources in east-central Washington State. The project is part of the U.S. Geological Survey and Bureau of Reclamation collaborative Watershed and River Systems Management Program. A conceptual model of hydrology is outlined for the study area that highlights the significant processes that are important to accurately simulate discharge under a wide range of conditions. The conceptual model identified the following factors as significant for accurate discharge simulations: (1) influence of frozen ground on peak discharge, (2) evaporation and ground-water flow as major pathways in the system, (3) channel losses, and (4) influence of irrigation practices on reducing or increasing discharge. The Modular Modeling System was used to create a watershed model for the four study basins by combining standard Precipitation Runoff Modeling System modules with modified modules from a previous study and newly modified modules. The model proved unreliable in simulating peak-flow discharge because the index used to track frozen ground conditions was not reliable. Mean monthly and mean annual discharges were more reliable when simulated. Data from seven USGS streamflow-gaging stations were used to compare with simulated discharge for model calibration and evaluation. Mean annual differences between simulated and observed discharge varied from 1.2 to 13.8 percent for all stations used in the comparisons except one station on a regional ground-water discharge stream. Two-thirds of the mean monthly percent differences between the simulated mean and the observed mean discharge for these six stations were between -20 and 240 percent, or in absolute terms, between -0.8 and 11 cubic feet per second. A graphical user interface was developed for the user to easily run the model, make runoff forecasts, and evaluate the results. The models, however, are not reliable for managing short-term operations because of their demonstrated inability to match individual storm peaks and individual monthly discharge values. Short-term forecasting may be improved with real-time monitoring of the extent of frozen ground and the snow-water equivalent in the basin. Despite the models' unreliability for short-term runoff forecasts, they are useful in providing long-term, time-series discharge data where no observed data exist.

  13. Development and testing of a mouse simulated space flight model

    NASA Technical Reports Server (NTRS)

    Sonnenfeld, Gerald

    1987-01-01

    The development and testing of a mouse model for simulating some aspects of the weightlessness that occurs during space flight, and the carrying out of immunological experiments on animals undergoing space flight, are examined. The mouse model developed was an antiorthostatic, hypokinetic, hypodynamic suspension model similar to one used with rats. The study was divided into two parts. The first involved determination of which immunological parameters should be observed in animals flown during space flight or studied in the suspension model. The second involved suspending mice and determining which of those immunological parameters were altered by the suspension. Rats that were actually flown in Space Shuttle SL-3 were used to test the hypotheses.

  14. Application of WRF/Chem over East Asia: Part II. Model improvement and sensitivity simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Zhang, Xin; Wang, Kai; Zhang, Qiang; Duan, Fengkui; He, Kebin

    2016-01-01

    To address the problems and limitations identified through a comprehensive evaluation in the Part I paper, several modifications are made to model inputs, treatments, and configurations, and sensitivity simulations with improved model inputs and treatments are performed in this Part II paper. The use of reinitialization of meteorological variables reduces the biases and increases the spatial correlations in simulated temperature at 2-m (T2), specific humidity at 2-m (Q2), wind speed at 10-m (WS10), and precipitation (Precip). The use of a revised surface drag parameterization further reduces the biases in simulated WS10. The adjustment of only the magnitudes of anthropogenic emissions in the surface layer does not help improve overall model performance, whereas the adjustment of both the magnitudes and vertical distributions of anthropogenic emissions shows moderate to large improvement in simulated surface concentrations and column mass abundances of species in terms of domain mean performance statistics, hourly and monthly mean concentrations, and vertical profiles of concentrations at individual sites. The revised and more advanced dust emission schemes can help improve PM predictions. Using revised upper boundary conditions for O3 significantly improves the column O3 abundances. Using a simple SOA formation module further improves the predictions of organic carbon and PM2.5. The sensitivity simulation that combines all above model improvements greatly improves the overall model performance. For example, the sensitivity simulation gives normalized mean biases (NMBs) of -6.1% to 23.8% for T2, 2.7% to 13.8% for Q2, 22.5% to 47.6% for WS10, and -9.1% to 15.6% for Precip, compared with -9.8% to 75.6% for T2, 0.4% to 23.4% for Q2, 66.5% to 101.0% for WS10, and 11.4% to 92.7% for Precip from the original simulation without those improvements. It also gives NMBs for surface predictions of -68.2% to -3.7% for SO2, -73.8% to -20.6% for NO2, -8.8% to 128.7% for O3, -61.4% to -26.5% for PM2.5, and -64.0% to 7.2% for PM10, compared with -84.2% to -44.5% for SO2, -88.1% to -44.0% for NO2, -11.0% to 160.3% for O3, -63.9% to -25.2% for PM2.5, and -68.9% to 33.3% for PM10 from the original simulation. The improved WRF/Chem is applied to estimate the impact of anthropogenic aerosols on regional climate and air quality in East Asia. Anthropogenic aerosols can increase cloud condensation nuclei, aerosol optical depth, cloud droplet number concentrations, and cloud optical depth. They can decrease surface net radiation, temperature at 2-m, wind speed at 10-m, planetary boundary layer height, and precipitation through various direct and indirect effects. These changes in turn lead to changes in chemical predictions in a variety of ways.
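
    For clarity, the normalized mean bias (NMB) statistic quoted throughout the abstract can be computed as follows for paired model/observation values; the numbers in the example are made up.

```python
import numpy as np

def normalized_mean_bias(model, obs):
    # NMB (%) = 100 * sum(model - obs) / sum(obs)
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * (model - obs).sum() / obs.sum()

print(normalized_mean_bias([12.0, 18.0, 25.0], [15.0, 20.0, 22.0]))  # about -3.5 %
```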

  15. QUANTIFYING SEASONAL SHIFTS IN NITROGEN SOURCES TO OREGON ESTUARIES: PART II: TRANSPORT MODELING

    EPA Science Inventory

    Identifying the sources of dissolved inorganic nitrogen (DIN) in estuaries is complicated by the multiple sources, temporal variability in inputs, and variations in transport. We used a hydrodynamic model to simulate the transport and uptake of three sources of DIN (oceanic, riv...

  16. Development of a dynamic traffic assignment model to evaluate lane-reversal plans for I-65.

    DOT National Transportation Integrated Search

    2010-05-01

    This report presents the methodology and results from a project that studied contra-flow operations in support of hurricane evacuations in the state of Alabama. As part of this effort, a simulation model was developed using the VISTA platform for...

  17. LAMMPS Project Report for the Trinity KNL Open Science Period.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Stan Gerald; Thompson, Aidan P.; Wood, Mitchell

    LAMMPS is a classical molecular dynamics code (lammps.sandia.gov) used to model materials science problems at Sandia National Laboratories and around the world. LAMMPS was one of three Sandia codes selected to participate in the Trinity KNL (TR2) Open Science period. During this period, three different problems of interest were investigated using LAMMPS. The first was benchmarking KNL performance using different force field models. The second was simulating void collapse in shocked HNS energetic material using an all-atom model. The third was simulating shock propagation through poly-crystalline RDX energetic material using a coarse-grain model, the results of which were used in an ACM Gordon Bell Prize submission. This report describes the results of these simulations, lessons learned, and some hardware issues found on Trinity KNL as part of this work.

  18. Testing simulation and structural models with applications to energy demand

    NASA Astrophysics Data System (ADS)

    Wolff, Hendrik

    2007-12-01

    This dissertation deals with energy demand and consists of two parts. Part one proposes a unified econometric framework for modeling energy demand and examples illustrate the benefits of the technique by estimating the elasticity of substitution between energy and capital. Part two assesses the energy conservation policy of Daylight Saving Time and empirically tests the performance of electricity simulation. In particular, the chapter "Imposing Monotonicity and Curvature on Flexible Functional Forms" proposes an estimator for inference using structural models derived from economic theory. This is motivated by the fact that in many areas of economic analysis theory restricts the shape as well as other characteristics of functions used to represent economic constructs. Specific contributions are (a) to increase the computational speed and tractability of imposing regularity conditions, (b) to provide regularity preserving point estimates, (c) to avoid biases existent in previous applications, and (d) to illustrate the benefits of our approach via numerical simulation results. The chapter "Can We Close the Gap between the Empirical Model and Economic Theory" discusses the more fundamental question of whether the imposition of a particular theory to a dataset is justified. I propose a hypothesis test to examine whether the estimated empirical model is consistent with the assumed economic theory. Although the proposed methodology could be applied to a wide set of economic models, this is particularly relevant for estimating policy parameters that affect energy markets. This is demonstrated by estimating the Slutsky matrix and the elasticity of substitution between energy and capital, which are crucial parameters used in computable general equilibrium models analyzing energy demand and the impacts of environmental regulations. Using the Berndt and Wood dataset, I find that capital and energy are complements and that the data are significantly consistent with duality theory. Both results would not necessarily be achieved using standard econometric methods. The final chapter "Daylight Time and Energy" uses a quasi-experiment to evaluate a popular energy conservation policy: we challenge the conventional wisdom that extending Daylight Saving Time (DST) reduces energy demand. Using detailed panel data on half-hourly electricity consumption, prices, and weather conditions from four Australian states we employ a novel 'triple-difference' technique to test the electricity-saving hypothesis. We show that the extension failed to reduce electricity demand and instead increased electricity prices. We also apply the most sophisticated electricity simulation model available in the literature to the Australian data. We find that prior simulation models significantly overstate electricity savings. Our results suggest that extending DST will fail as an instrument to save energy resources.

  19. Implementation of interactive virtual simulation of physical systems

    NASA Astrophysics Data System (ADS)

    Sanchez, H.; Escobar, J. J.; Gonzalez, J. D.; Beltran, J.

    2014-03-01

    Considering the limited availability of laboratories for physics teaching and the difficulties this causes in the learning of school students in Santa Marta, Colombia, we have developed software to generate greater student interaction with physical phenomena and improve their understanding. The system is built on a Model-View-ViewModel (MVVM) architecture, which shares the benefits of MVC. Basically, this pattern consists of three parts. The Model is responsible for the business logic. The View is the part the user sees and is most familiar with; its role is to display data to the user and to allow manipulation of the application data. The ViewModel sits between the Model and the View (analogous to the Controller in the MVC pattern); it implements the behavior of the view in response to user actions and exposes the model data in a form that is easy to bind to in the view. .NET Framework 4.0 and the Silverlight 4 and 5 editing packages are the main requirements for deploying the physical simulations, which are hosted in the web application and accessed through a web browser (Internet Explorer, Mozilla Firefox, or Chrome). The implementation of this innovative application in educational institutions has shown that students improved their contextualization of physical phenomena.
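
    A minimal sketch of the MVVM split described above, reduced to plain Python with a tiny observer mechanism standing in for Silverlight data binding; the class and method names are illustrative, not taken from the actual application.

```python
class Model:                       # business logic and data only
    def __init__(self):
        self.velocity = 0.0

class ViewModel:                   # exposes model data in a view-friendly way
    def __init__(self, model):
        self.model = model
        self._listeners = []
    def on_change(self, callback):
        self._listeners.append(callback)
    def set_velocity(self, text):  # responds to a user action coming from the view
        self.model.velocity = float(text)
        for cb in self._listeners:
            cb(f"velocity = {self.model.velocity:.1f} m/s")

class View:                        # displays data and forwards user input
    def __init__(self, viewmodel):
        viewmodel.on_change(print)
        self.viewmodel = viewmodel
    def user_typed(self, text):
        self.viewmodel.set_velocity(text)

View(ViewModel(Model())).user_typed("9.8")   # prints "velocity = 9.8 m/s"
```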

  20. Simulation of ground-water flow in the Coastal Plain aquifer system of North Carolina

    USGS Publications Warehouse

    Giese, G.I.; Eimers, J.L.; Coble, R.W.

    1997-01-01

    A three-dimensional finite-difference digital model was used to simulate ground-water flow in the 25,000-square-mile aquifer system of the North Carolina Coastal Plain. The model was developed from a hydrogeologic framework that is based on an alternating sequence of 10 aquifers and 9 confining units, which make up a seaward-thickening wedge of sediments that form the Coastal Plain aquifer system in the State of North Carolina. The model was calibrated by comparing observed and simulated water levels. The model calibration was achieved by adjusting model parameters, primarily leakance of confining units and transmissivity of aquifers, until differences between observed and simulated water levels were within acceptable limits, generally within 15 feet. The maximum transmissivity of an individual aquifer in the calibrated model is 200,000 feet squared per day in a part of the Castle Hayne aquifer, which consists predominantly of limestone. The maximum value for simulated vertical hydraulic conductivity in a confining unit was 2.5 feet per day, in a part of the confining unit overlying the upper Cape Fear aquifer. The minimum value was 4.1×10⁻⁶ feet per day, in part of the confining unit overlying the lower Cape Fear aquifer. Analysis indicated the model is highly sensitive to changes in transmissivity and leakance near pumping centers; away from pumping centers, the model is only slightly sensitive to changes in transmissivity but is moderately sensitive to changes in leakance. Recharge from precipitation to the surficial aquifer ranges from about 12 inches per year in areas having clay at the surface to about 20 inches per year in areas having sand at the surface. Most of this recharge moves laterally to streams, and only about 1 inch per year moves downward to the confined parts of the aquifer system. Under predevelopment conditions, the confined aquifers were generally recharged in updip interstream areas and discharged through streambeds and in downdip coastward areas. Hydrologic analysis of the flow system using the calibrated model indicated that, because of ground-water withdrawals, areas of ground-water recharge have expanded and encroached upon some major stream valleys and into coastal areas. Simulations of pumping conditions indicate that by 1980 large parts of the former coastal discharge areas had become areas of potential or actual recharge. Declines of ground-water level, which are the result of water taken from storage, are extensive in some areas and minimal in others. Hydraulic head declines of more than 135 feet have occurred in the northern Coastal Plain since 1940, primarily due to withdrawals in the Franklin area in Virginia. Declines of ground-water levels greater than 110 feet have occurred in aquifers in the central Coastal Plain due to combined effects of pumpage for public and industrial water supplies. Water-level declines exceeding 100 feet have occurred in the Beaufort County area because of withdrawals for a mining operation and water supplies for a chemical plant. Head declines have been less than 10 feet in the shallow surficial and Yorktown aquifers and in the updip parts of the major confined aquifers distant from areas of major withdrawals. In 1980, contribution from aquifer storage was 14 cubic feet per second, which is about 4.8 percent of pumpage and about 0.05 percent of ground-water recharge.
A water-budget analysis using the model simulations indicates that much of the water removed from the ground-water system by pumping ultimately is made up by a reduction in water leaving the aquifer system, which discharges to streams as base flow. The reduction in stream base flow was 294 cubic feet per second in 1980 and represents about 1.1 percent of the ground-water recharge. The net reduction to streamflow is not large, however, because most pumped ground water is eventually discharged to streams. In places, such as at rock quarries in Onslow and Craven Counties, water is lost from st
