Validation of SWEEP for creep, saltation, and suspension in a desert-oasis ecotone
NASA Astrophysics Data System (ADS)
Pi, H.; Sharratt, B.; Feng, G.; Lei, J.; Li, X.; Zheng, Z.
2016-03-01
Wind erosion in the desert-oasis ecotone can accelerate desertification, but little is known about the susceptibility of the ecotone to wind erosion in the Tarim Basin, despite the basin being a major source of windblown dust in China. The objective of this study was to test the performance of the Single-event Wind Erosion Evaluation Program (SWEEP) in simulating soil loss as creep, saltation, and suspension in a desert-oasis ecotone. Creep, saltation, and suspension were measured and simulated in a desert-oasis ecotone of the Tarim Basin during discrete periods of high winds in spring 2012 and 2013. The model appeared to adequately simulate total soil loss (which ranged from 23 to 2272 g m-2 across sample periods), as indicated by a high index of agreement (d = 0.76). The adequate agreement of SWEEP in simulating total soil loss was due to the good performance of the model (d = 0.71) in simulating creep plus saltation. The model, however, inadequately simulated suspension, based upon a low d (⩽0.43). The slope estimates of the regression between simulated and measured suspension, and the difference in means, suggested that SWEEP underestimated suspension. The adequate simulation of creep plus saltation thus provides reasonable estimates of total soil loss using SWEEP in a desert-oasis environment.
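The index of agreement d used to judge SWEEP here is, in most wind-erosion validation work, Willmott's index; a minimal sketch of its computation, assuming plain NumPy arrays of measured and simulated soil loss (the numeric values below are illustrative, not the study's data):

```python
import numpy as np

def willmott_d(measured, simulated):
    """Willmott's index of agreement: 1 = perfect match, 0 = no agreement."""
    o = np.asarray(measured, dtype=float)
    p = np.asarray(simulated, dtype=float)
    o_bar = o.mean()
    num = np.sum((p - o) ** 2)
    den = np.sum((np.abs(p - o_bar) + np.abs(o - o_bar)) ** 2)
    return 1.0 - num / den

# Illustrative soil-loss values (g m^-2), not the study's measurements
print(willmott_d([23, 410, 980, 2272], [30, 350, 1100, 2000]))
```

A d near 1 indicates close agreement between simulated and measured totals, matching how the abstract interprets d = 0.76 as adequate.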
Synthesis of a hybrid model of the VSC FACTS devices and HVDC technologies
NASA Astrophysics Data System (ADS)
Borovikov, Yu S.; Gusev, A. S.; Sulaymanov, A. O.; Ufa, R. A.
2014-10-01
The motivation for the presented research is the need for new methods and tools for adequate simulation of FACTS devices and HVDC systems as part of real electric power systems (EPS). The research object: an alternative hybrid approach for synthesizing a VSC-FACTS and -HVDC hybrid model is proposed. The results: the VSC-FACTS and -HVDC hybrid model is designed in accordance with the presented concepts of hybrid simulation. The developed model allows us to carry out adequate real-time simulation of all the processes in HVDC, FACTS devices and the EPS as a whole, without any decomposition or limitation on their duration, and also to use the developed tool for the effective solution of design, operational and research tasks of EPSs containing such devices.
Piloted evaluation of an integrated propulsion and flight control simulator
NASA Technical Reports Server (NTRS)
Bright, Michelle M.; Simon, Donald L.
1992-01-01
This paper describes a piloted evaluation of the integrated flight and propulsion control simulator at NASA Lewis Research Center. The purpose of this evaluation is to demonstrate the suitability and effectiveness of this fixed-base simulator for advanced integrated propulsion and airframe control design. The evaluation will cover control effector gains and deadbands, control effectiveness and control authority, and heads-up display functionality. For this evaluation the flight simulator is configured for transition flight using an advanced Short Take-Off and Vertical Landing fighter aircraft model, a simplified high-bypass turbofan engine model, fighter cockpit displays, and pilot effectors. The paper describes the piloted tasks used for rating displays and control effector gains. Pilot comments and simulation results confirm that the display symbology and control gains are very adequate for the transition flight task. Additionally, it is demonstrated that this small-scale, fixed-base flight simulator facility can adequately perform a real-time, piloted control evaluation.
Abiotic/biotic coupling in the rhizosphere: a reactive transport modeling analysis
Lawrence, Corey R.; Steefel, Carl; Maher, Kate
2014-01-01
A new generation of models is needed to adequately simulate patterns of soil biogeochemical cycling in response to changing global environmental drivers. For example, predicting the influence of climate change on soil organic matter storage and stability requires models capable of addressing complex biotic/abiotic interactions of rhizosphere and weathering processes. Reactive transport modeling provides a powerful framework for simulating these interactions and their resulting influence on soil physical and chemical characteristics. Incorporation of organic reactions into an existing reactive transport model framework has yielded novel insights into soil weathering and development, but much more work is required to adequately capture root and microbial dynamics in the rhizosphere. This endeavor provides many advantages over traditional soil biogeochemical models, but also many challenges.
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe
2016-11-01
Given the ever increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.
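The abstract does not name the two selection methods, but a common way to pick a subset that covers the spread of a large ensemble is greedy max-min (farthest-point) selection over the climate-model variables; a hedged sketch, where the ensemble matrix, variable count, and subset size are purely illustrative:

```python
import numpy as np

def farthest_point_subset(X, k, seed=0):
    """Greedy max-min selection of k ensemble members.

    Starts from a random member, then repeatedly adds the simulation
    farthest (in climate-variable space) from all already-selected ones,
    so the subset spans the ensemble's spread.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    chosen = [int(rng.integers(len(X)))]
    for _ in range(k - 1):
        # Distance of every member to its nearest already-chosen member
        d = np.min(np.linalg.norm(X[:, None] - X[chosen], axis=2), axis=1)
        d[chosen] = -1.0          # never re-select a chosen member
        chosen.append(int(np.argmax(d)))
    return chosen
```

As the abstract notes, a subset chosen this way covers the climate-variable spread well, yet may still under-sample the variability that emerges after the non-linear hydrological transfer.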
Validation of Potential Models for Li2O in Classical Molecular Dynamics Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oda, Takuji; Oya, Yasuhisa; Tanaka, Satoru
2007-08-01
Four Buckingham-type pairwise potential models for Li2O were assessed by molecular statics and molecular dynamics simulations. In the static simulations, all models afforded acceptable agreement with experimental values and ab initio calculation results for the crystalline properties. Moreover, the superionic phase transition was reproduced in the dynamics simulations. However, the Li diffusivity and the lattice expansion were not adequately reproduced at the same time by any model. When using these models in future radiation simulations, these features should be taken into account in order to reduce the model dependency of the results.
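For reference, the Buckingham pairwise form assessed above is standard: each ion pair interacts through a Coulomb term plus a short-range repulsion and dispersion term,

```latex
V_{ij}(r) \;=\; \frac{q_i q_j}{4\pi\varepsilon_0 r}
\;+\; A_{ij}\,\exp\!\left(-\frac{r}{\rho_{ij}}\right)
\;-\; \frac{C_{ij}}{r^{6}},
```

where A, ρ, and C are the fitted parameters that distinguish the four models compared in the study.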
Flight Dynamics Modeling and Simulation of a Damaged Transport Aircraft
NASA Technical Reports Server (NTRS)
Shah, Gautam H.; Hill, Melissa A.
2012-01-01
A study was undertaken at NASA Langley Research Center to establish, demonstrate, and apply methodology for modeling and implementing the aerodynamic effects of MANPADS damage to a transport aircraft into real-time flight simulation, and to demonstrate a preliminary capability of using such a simulation to conduct an assessment of aircraft survivability. Key findings from this study include: superpositioning of incremental aerodynamic characteristics to the baseline simulation aerodynamic model proved to be a simple and effective way of modeling damage effects; the primary effect of wing damage rolling moment asymmetry may limit minimum airspeed for adequate controllability, but this can be mitigated by the use of sideslip; combined effects of aerodynamics, control degradation, and thrust loss can result in significantly degraded controllability for a safe landing; and high landing speeds may be required to maintain adequate control if large excursions from the nominal approach path are allowed, but high-gain pilot control during landing can mitigate this risk.
Obi, Andrea; Chung, Jennifer; Chen, Ryan; Lin, Wandi; Sun, Siyuan; Pozehl, William; Cohn, Amy M; Daskin, Mark S; Seagull, F Jacob; Reddy, Rishindra M
2015-11-01
Certain operative cases occur unpredictably and/or have long operative times, creating a conflict between Accreditation Council for Graduate Medical Education (ACGME) rules and adequate training experience. A ProModel-based simulation was developed from historical data. Probabilistic distributions of operative time were calculated and combined with an ACGME-compliant call schedule. For the advanced surgical cases modeled (cardiothoracic transplants), the 80-hour rule was violated in 6.07% of weeks and the minimum-days-off rule in 22.50%. There was a 36% chance of failure to fulfill either minimum case requirement (heart or lung) despite adequate volume. The variable nature of emergency cases inevitably leads to work-hour violations under ACGME regulations. Unpredictable cases mandate higher operative volume to ensure achievement of adequate caseloads. Publicly available simulation technology provides a valuable avenue for assessing the adequacy of case volumes for trainees in both the elective and emergency settings.
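The ProModel details are not given, but the core idea — sampling stochastic operative time on top of a fixed call schedule and counting duty-hour violations — can be sketched in a few lines; every parameter below (scheduled hours, lognormal shape) is hypothetical, not the study's fitted value:

```python
import numpy as np

rng = np.random.default_rng(42)

WEEKS = 10_000           # simulated resident-weeks
BASE_HOURS = 68.0        # assumed fixed scheduled weekly hours (hypothetical)

# Emergency (transplant-style) caseload: hypothetical lognormal hours per week
emergency_hours = rng.lognormal(mean=1.5, sigma=0.8, size=WEEKS)

total_hours = BASE_HOURS + emergency_hours
violation_rate = np.mean(total_hours > 80.0)   # ACGME 80-hour rule
print(f"80-hour violation rate: {violation_rate:.1%}")
```

With the study's actual fitted distributions and call schedule in place of these placeholders, the same Monte Carlo loop yields violation percentages of the kind reported (6.07% of weeks over 80 hours).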
The Design and the Formative Evaluation of a Web-Based Course for Simulation Analysis Experiences
ERIC Educational Resources Information Center
Tao, Yu-Hui; Guo, Shin-Ming; Lu, Ya-Hui
2006-01-01
Simulation output analysis has received little attention compared with modeling and programming in real-world simulation applications. This is further evidenced by our observation that students and beginners acquire neither adequately detailed knowledge nor relevant experience of simulation output analysis in traditional classroom learning. With…
NASA Astrophysics Data System (ADS)
Zubanov, V. M.; Stepanov, D. V.; Shabliy, L. S.
2017-01-01
The article describes a method for simulating transient combustion processes in a rocket engine operating on gaseous propellants: oxygen and hydrogen. Combustion simulation was performed using the ANSYS CFX software. Three reaction mechanisms for the stationary mode were considered and described in detail; the mechanisms were taken from several sources and verified. A method for converting ozone properties from the Shomate equation to the NASA-polynomial format is described in detail. A way of obtaining quick CFD results with intermediate combustion components using an EDM model was found. Difficulties with the Finite Rate Chemistry combustion model, associated with large scatter in the reference data, were identified and described. The procedure for generating the Flamelet library with CFX-RIF is described. The reaction mechanisms found adequate at steady state were also tested in transient simulation. The Flamelet combustion model was recognized as adequate for the transient mode, with integral parameters varying around the values obtained in the stationary simulation. A cyclic irregularity of the temperature field, caused by precession of the vortex core, was detected in the chamber with the proposed simulation technique. Investigation of unsteady rocket engine processes, including ignition, is proposed as the area of application of the described simulation technique.
NASA Astrophysics Data System (ADS)
Abisset-Chavanne, Emmanuelle; Duval, Jean Louis; Cueto, Elias; Chinesta, Francisco
2018-05-01
Traditionally, Simulation-Based Engineering Sciences (SBES) have relied on the use of static data inputs (model parameters, initial or boundary conditions, … obtained from adequate experiments) to perform simulations. A new paradigm in the field of Applied Sciences and Engineering has emerged in the last decade. Dynamic Data-Driven Application Systems [9, 10, 11, 12, 22] allow the linkage of simulation tools with measurement devices for real-time control of simulations and applications, entailing the ability to dynamically incorporate additional data into an executing application, and in reverse, the ability of an application to dynamically steer the measurement process. It is in that context that traditional "digital twins" are giving rise to a new generation of goal-oriented data-driven application systems, also known as "hybrid twins", embracing models based on physics and models based exclusively on data, adequately collected and assimilated to fill the gap between usual model predictions and measurements. Within this framework, new methodologies based on model learners, machine learning and kinetic goal-oriented design are defining a new paradigm in materials, processes and systems engineering.
Asquith, W.H.; Mosier, J. G.; Bush, P.W.
1997-01-01
The watershed simulation model Hydrologic Simulation Program—Fortran (HSPF) was used to generate simulated flow (runoff) from the 13 watersheds to the six bay systems because adequate gaged streamflow data from which to estimate freshwater inflows are not available; only about 23 percent of the adjacent contributing watershed area is gaged. The model was calibrated for the gaged parts of three watersheds—that is, selected input parameters (meteorologic and hydrologic properties and conditions) that control runoff were adjusted in a series of simulations until an adequate match between model-generated flows and a set (time series) of gaged flows was achieved. The primary model input is rainfall and evaporation data and the model output is a time series of runoff volumes. After calibration, simulations driven by daily rainfall for a 26-year period (1968–93) were done for the 13 watersheds to obtain runoff under current (1983–93), predevelopment (pre-1940 streamflow and pre-urbanization), and future (2010) land-use conditions for estimating freshwater inflows and for comparing runoff under the three land-use conditions; and to obtain time series of runoff from which to estimate time series of freshwater inflows for trend analysis.
Faculty Flow in a Medical School: A Policy Simulator. AIR Forum 1979 Paper.
ERIC Educational Resources Information Center
Kutina, Kenneth L.; Bruss, Edward A.
A computer-based simulation model is described that can be used in an interactive mode to analyze the effects of alternative hiring, promotion, tenure granting, retirement, and salary policies on faculty size, distribution, and aggregate salary expense. The model was designed to be adequately flexible and comprehensive to incorporate the array of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanov, Gennady; /Fermilab
CST Particle Studio combines electromagnetic field simulation, multi-particle tracking, adequate post-processing and an advanced probabilistic emission model, which is the most important new capability in multipactor simulation. The emission model includes the stochastic properties of emission and adds primary-electron elastic and inelastic reflection from the surfaces. Simulations of multipactor in coaxial waveguides have been performed to study the effects of these innovations on the multipactor threshold and the range over which multipactor can occur. The results, compared with available previous experiments and simulations, as well as the technique of multipactor simulation with CST PS, are presented and discussed.
Determining erosion relevant soil characteristics with a small-scale rainfall simulator
NASA Astrophysics Data System (ADS)
Schindewolf, M.; Schmidt, J.
2009-04-01
The use of soil erosion models is of great importance in soil and water conservation. Routine application of these models on the regional scale is not least limited by their high parameter demands. Although the EROSION 3D simulation model operates with a comparatively low number of parameters, some of the model input variables can only be determined by rainfall simulation experiments. The existing database of EROSION 3D was created in the mid 1990s from large-scale rainfall simulation experiments on 22 x 2 m experimental plots. Up to now this database does not cover all soil and field conditions adequately. Therefore a new campaign of experiments is essential to produce additional information, especially with respect to the effects of new soil management practices (e.g. long-term conservation tillage, no tillage). The rainfall simulator used in the current campaign consists of 30 identical modules equipped with oscillating rainfall nozzles. Veejet 80/100 nozzles (Spraying Systems Co., Wheaton, IL) are used in order to ensure the best possible comparability to natural rainfall with respect to raindrop size distribution and momentum transfer. The central objectives of the small-scale rainfall simulator are efficient application and the provision of results comparable to large-scale rainfall simulation experiments. A crucial problem in using the small-scale simulator is its restriction to rather small volume rates of surface runoff. Under these conditions soil detachment is governed by raindrop impact, so the effect of surface runoff on particle detachment cannot be reproduced adequately by a small-scale rainfall simulator. With this problem in mind, this paper presents an enhanced small-scale simulator which allows a virtual multiplication of the plot length by feeding additional sediment-loaded water onto the plot from upstream.
Thus it is possible to overcome the plot length limit while reproducing nearly the same flow conditions as in rainfall experiments on standard plots. The simulator has been extensively applied to plots of different soil types, crop types and management systems. Comparison with existing data sets obtained by large-scale rainfall simulations shows that the results can be adequately reproduced by the applied combination of the small-scale rainfall simulator and sediment-loaded water influx.
Simulation-based Testing of Control Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozmen, Ozgur; Nutaro, James J.; Sanyal, Jibonananda
It is impossible to adequately test complex software by examining its operation in a physical prototype of the system monitored. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model-based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model-based testing environment; specifically, we show that a complete software stack - including operating system and application software - can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model-based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed in the MODELICA programming language using the Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.
NASA Astrophysics Data System (ADS)
Taheri, H.; Koester, L.; Bigelow, T.; Bond, L. J.
2018-04-01
Industrial applications of additively manufactured components are increasing quickly. Adequate quality control of the parts is necessary to ensure safety when using these materials. Base material properties and surface conditions, as well as the location and size of defects, are some of the main targets for nondestructive evaluation of additively manufactured parts, and the problem of adequate characterization is compounded by the challenges of complex part geometry. Numerical modeling allows the interplay of the various factors to be studied, which can lead to improved measurement design. This paper presents a finite element simulation, verified by experimental results, of ultrasonic waves scattering from flat bottom holes (FBH) in additively manufactured materials. A focused-beam immersion ultrasound transducer was used in both the experiments and the simulations of the additively manufactured samples. The samples were 17-4 PH stainless steel made by laser sintering in a powder bed.
Review of simulation techniques for Aquifer Thermal Energy Storage (ATES)
NASA Astrophysics Data System (ADS)
Mercer, J. W.; Faust, C. R.; Miller, W. J.; Pearson, F. J., Jr.
1981-03-01
The analysis of aquifer thermal energy storage (ATES) systems relies on results from mathematical and geochemical models. Therefore, the state-of-the-art models relevant to ATES were reviewed and evaluated. These models describe important processes active in ATES, including ground-water flow, heat transport (heat flow), solute transport (movement of contaminants), and geochemical reactions. In general, available models of the saturated ground-water environment are adequate to address most concerns associated with ATES; that is, design, operation, and environmental assessment. In those cases where models are not adequate, development should be preceded by efforts to identify the significant physical phenomena and relate model parameters to measurable quantities.
Longitudinal train dynamics model for a rail transit simulation system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jinghui; Rakha, Hesham A.
The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.
Ribeiro de Oliveira, Marcelo Magaldi; Nicolato, Arthur; Santos, Marcilea; Godinho, Joao Victor; Brito, Rafael; Alvarenga, Alexandre; Martins, Ana Luiza Valle; Prosdocimi, André; Trivelato, Felipe Padovani; Sabbagh, Abdulrahman J; Reis, Augusto Barbosa; Maestro, Rolando Del
2016-05-01
OBJECT The development of neurointerventional treatments of central nervous system disorders has resulted in the need for adequate training environments for novice interventionalists. Virtual simulators offer anatomical definition but lack adequate tactile feedback. Animal models, which provide more lifelike training, require an appropriate infrastructure base. The authors describe a training model for neurointerventional procedures using the human placenta (HP), which affords haptic training with significantly fewer resource requirements, and discuss its validation. METHODS Twelve HPs were prepared for simulated endovascular procedures. Training exercises performed by interventional neuroradiologists and novice fellows were placental angiography, stent placement, aneurysm coiling, and intravascular liquid embolic agent injection. RESULTS The endovascular training exercises proposed can be easily reproduced in the HP. Face, content, and construct validity were assessed by 6 neurointerventional radiologists and 6 novice fellows in interventional radiology. CONCLUSIONS The use of HP provides an inexpensive training model for the training of neurointerventionalists. Preliminary validation results show that this simulation model has face and content validity and has demonstrated construct validity for the interventions assessed in this study.
Simulating traffic for incident management and ITS investment decisions
DOT National Transportation Integrated Search
1998-08-01
UTPS-type models were designed to adequately support planning activities typical of the 1960s and 1970s. However, these packages were not designed to model intelligent transportation systems (ITS) and support incident management planning. To ov...
"The Effect of Alternative Representations of Lake ...
Lakes can play a significant role in regional climate, modulating inland extremes in temperature and enhancing precipitation. Representing these effects becomes more important as regional climate modeling (RCM) efforts focus on simulating smaller scales. When using the Weather Research and Forecasting (WRF) model to downscale future global climate model (GCM) projections into RCM simulations, model users typically must rely on the GCM to represent temperatures at all water points. However, GCMs have insufficient resolution to adequately represent even large inland lakes, such as the Great Lakes. Some interpolation methods, such as setting lake surface temperatures (LSTs) equal to the nearest water point, can result in inland lake temperatures being set from sea surface temperatures (SSTs) that are hundreds of km away. In other cases, a single point is tasked with representing multiple large, heterogeneous lakes. Similar consequences can result from interpolating ice from GCMs to inland lake points, resulting in lakes as large as Lake Superior freezing completely in the space of a single timestep. The use of a computationally-efficient inland lake model can improve RCM simulations where the input data is too coarse to adequately represent inland lake temperatures and ice (Gula and Peltier 2012). This study examines three scenarios under which ice and LSTs can be set within the WRF model when applied as an RCM to produce 2-year simulations at 12 km gri
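The nearest-water-point hazard described above is easy to demonstrate; in the toy 1-D sketch below (coordinates and temperatures invented), an unresolved inland lake inherits surface temperatures from resolved water points roughly 50 km away:

```python
import numpy as np

# Toy 1-D row of grid cells: resolved ocean points at the edges, an
# unresolved inland lake in the middle. Nearest-water interpolation hands
# the lake a temperature from a distant point.
coords = np.array([0.0, 1.0, 50.0, 51.0, 100.0])        # km along the row
is_resolved_water = np.array([True, True, False, False, True])
sst = np.array([15.0, 15.2, np.nan, np.nan, 18.0])      # deg C

water_idx = np.where(is_resolved_water)[0]
for i in np.where(~is_resolved_water)[0]:
    j = water_idx[np.argmin(np.abs(coords[water_idx] - coords[i]))]
    sst[i] = sst[j]   # lake cell inherits a distant point's temperature

print(sst)   # the two lake cells now carry edge-of-domain temperatures
```

The two adjacent lake cells even end up with temperatures from opposite ends of the row, mirroring the heterogeneity problem the abstract describes for the Great Lakes.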
NASA Technical Reports Server (NTRS)
Throop, David R.
1992-01-01
The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.
Three Dimensional CFD Analysis of the GTX Combustor
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.; Bond, R. B.; Edwards, J. R.
2002-01-01
The annular combustor geometry of a combined-cycle engine has been analyzed with three-dimensional computational fluid dynamics. Both subsonic combustion and supersonic combustion flowfields have been simulated. The subsonic combustion analysis was executed in conjunction with a direct-connect test rig. Two cold-flow and one hot-flow results are presented. The simulations compare favorably with the test data for the two cold-flow calculations; the hot-flow data was not yet available. The hot-flow simulation indicates that the conventional ejector-ramjet cycle would not provide adequate mixing at the conditions tested. The supersonic combustion ramjet flowfield was simulated with a frozen-chemistry model. A five-parameter test matrix was specified, according to statistical design-of-experiments theory. Twenty-seven separate simulations were used to assemble surrogate models for combustor mixing efficiency and total pressure recovery. Scramjet injector design parameters (injector angle, location, and fuel split) as well as mission variables (total fuel massflow and freestream Mach number) were included in the analysis. A promising injector design has been identified that provides good mixing characteristics with low total pressure losses. The surrogate models can be used to develop performance maps of different injector designs. Several complex three-way variable interactions appear within the dataset that are not adequately resolved with the current statistical analysis.
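Surrogate models assembled from a design-of-experiments matrix like the 27-run set above are typically quadratic response surfaces fit by least squares; a sketch with two of the five parameters, where the quadratic "truth" function merely stands in for the CFD-computed mixing efficiency:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 27-run design over two normalized injector parameters
# (e.g. injector angle and fuel split); invented coefficients stand in
# for the CFD results.
X = rng.uniform(-1.0, 1.0, size=(27, 2))
y = 0.8 + 0.05 * X[:, 0] - 0.1 * X[:, 1] ** 2 + 0.02 * X[:, 0] * X[:, 1]

# Quadratic response-surface basis: 1, x1, x2, x1^2, x2^2, x1*x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # recovered response-surface coefficients
```

Once fit, such a surface is cheap to evaluate over the full parameter space, which is what makes the performance maps mentioned above practical; the note about unresolved three-way interactions reflects the limits of a quadratic (two-way) basis.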
Near-wall k-epsilon turbulence modeling
NASA Technical Reports Server (NTRS)
Mansour, N. N.; Kim, J.; Moin, P.
1987-01-01
The flow fields from a turbulent channel simulation are used to compute the budgets of the turbulent kinetic energy (k) and its dissipation rate (epsilon). Data from boundary layer simulations are used to analyze the dependence of the eddy-viscosity damping function on the Reynolds number and the distance from the wall. The computed budgets are used to test existing near-wall turbulence models of the k-epsilon type. It was found that the turbulent transport models should be modified in the vicinity of the wall. It was also found that existing models for the different terms in the epsilon-budget are adequate in the region away from the wall, but need modification near the wall. The channel flow is computed using a k-epsilon model with an eddy-viscosity damping function derived from the data and no damping functions in the epsilon-equation. These computations show that the k-profile can be adequately predicted, but to correctly predict the epsilon-profile, damping functions in the epsilon-equation are needed.
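For context, the eddy-viscosity damping function discussed above enters the k-epsilon model through the turbulent viscosity; in the standard low-Reynolds-number form (the specific function fitted from the channel data is not reproduced here),

```latex
\nu_t \;=\; C_\mu \, f_\mu \, \frac{k^2}{\varepsilon},
\qquad f_\mu \to 1 \ \ \text{away from the wall},
\qquad f_\mu \to 0 \ \ \text{as } y^+ \to 0,
```

with C_mu approximately 0.09; near-wall k-epsilon variants differ chiefly in how f_mu depends on y+ and the turbulence Reynolds number.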
Transonic flow about a thick circular-arc airfoil
NASA Technical Reports Server (NTRS)
Mcdevitt, J. B.; Levy, L. L., Jr.; Deiwert, G. S.
1975-01-01
An experimental and theoretical study of transonic flow over a thick airfoil, prompted by a need for adequately documented experiments that could provide rigorous verification of viscous flow simulation computer codes, is reported. Special attention is given to the shock-induced separation phenomenon in the turbulent regime. Measurements presented include surface pressures, streamline and flow separation patterns, and shadowgraphs. For a limited range of free-stream Mach numbers the airfoil flow field is found to be unsteady. Dynamic pressure measurements and high-speed shadowgraph movies were taken to investigate this phenomenon. Comparisons of experimentally determined and numerically simulated steady flows using a new viscous-turbulent code are also included. The comparisons show the importance of including an accurate turbulence model. When the shock-boundary layer interaction is weak the turbulence model employed appears adequate, but when the interaction is strong, and extensive regions of separation are present, the model is inadequate and needs further development.
Application of Probabilistic Analysis to Aircraft Impact Dynamics
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Padula, Sharon L.; Stockwell, Alan E.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, laminated composites, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the uncertainty in the simulated responses. Several criteria are used to determine that a response surface method is the most appropriate probabilistic approach. The work is extended to compare optimization results with and without probabilistic constraints.
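The response-surface idea can be sketched as follows: fit a cheap polynomial surrogate to a handful of expensive simulation runs, then propagate input uncertainty through the surrogate by Monte Carlo. The quadratic response and the input distribution below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive crash simulation returning, say, a peak
# acceleration for a design variable x (hypothetical response).
def expensive_sim(x):
    return 3.0 + 2.0 * x + 0.5 * x ** 2

# 1) Run a small design of experiments on the expensive code.
x_doe = np.linspace(-1.0, 1.0, 7)
y_doe = expensive_sim(x_doe)

# 2) Fit a quadratic response surface to the DOE results.
coeffs = np.polyfit(x_doe, y_doe, deg=2)
surrogate = np.poly1d(coeffs)

# 3) Monte Carlo on the cheap surrogate instead of the simulator.
x_samples = rng.normal(0.0, 0.2, size=10_000)
y_samples = surrogate(x_samples)
mean_response = y_samples.mean()
```

The payoff is that the 10,000 surrogate evaluations cost essentially nothing, whereas 10,000 full finite element crash runs would be infeasible.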
Egelund, E F; Isaza, R; Brock, A P; Alsultan, A; An, G; Peloquin, C A
2015-04-01
The objective of this study was to develop a population pharmacokinetic model for rifampin in elephants. Rifampin concentration data from three sources were pooled to provide a total of 233 oral concentrations from 37 Asian elephants. The population pharmacokinetic models were created using Monolix (version 4.2). Simulations were conducted using ModelRisk. We examined the influence of age, food, sex, and weight as model covariates. We further optimized the dosing of rifampin based upon simulations using the population pharmacokinetic model. Rifampin pharmacokinetics were best described by a one-compartment open model including first-order absorption with a lag time and first-order elimination. Body weight was a significant covariate for volume of distribution, and food intake was a significant covariate for lag time. The median Cmax of 6.07 μg/mL was below the target range of 8-24 μg/mL. Monte Carlo simulations predicted the highest treatable MIC of 0.25 μg/mL with the current initial dosing recommendation of 10 mg/kg, based upon a previously published target AUC0-24/MIC > 271 (fAUC > 41). Simulations from the population model indicate that the current dose of 10 mg/kg may be adequate for MICs up to 0.25 μg/mL. While the targeted AUC/MIC may be adequate for most MICs, the median Cmax for all elephants is below the human and elephant targeted ranges. © 2014 John Wiley & Sons Ltd.
A Study of Fan Stage/Casing Interaction Models
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Carney, Kelly; Gallardo, Vicente
2003-01-01
The purpose of the present study is to investigate the performance of several existing and new blade-case interaction modeling capabilities that are compatible with the large system simulations used to capture structural response during blade-out events. Three contact models are examined for simulating the interactions between a rotor bladed disk and a case: a radial gap element, a linear gap element, and a new element based on a hydrodynamic formulation. The first two models are currently available in commercial finite element codes such as NASTRAN and have been shown to perform adequately for simulating rotor-case interactions. The hydrodynamic model, although not readily available in commercial codes, may prove to be better able to characterize rotor-case interactions.
Scale-Resolving simulations (SRS): How much resolution do we really need?
NASA Astrophysics Data System (ADS)
Pereira, Filipe M. S.; Girimaji, Sharath
2017-11-01
Scale-resolving simulations (SRS) are emerging as the computational approach of choice for many engineering flows with coherent structures. The SRS methods seek to resolve only the most important features of the coherent structures and model the remainder of the flow field with canonical closures. With reference to a typical Large-Eddy Simulation (LES), practical SRS methods aim to resolve a considerably narrower range of scales (reduced physical resolution) to achieve an adequate degree of accuracy at reasonable computational effort. While the objective of SRS is well-founded, the criteria for establishing the optimal degree of resolution required to achieve an acceptable level of accuracy are not clear. This study considers the canonical case of the flow around a circular cylinder to address the issue of `optimal' resolution. Two important criteria are developed. The first condition addresses the issue of adequate resolution of the flow field. The second guideline provides an assessment of whether the modeled field is canonical (stochastic) turbulence amenable to closure-based computations.
Empirical models of wind conditions on Upper Klamath Lake, Oregon
Buccola, Norman L.; Wood, Tamara M.
2010-01-01
Upper Klamath Lake is a large (230 square kilometers), shallow (mean depth 2.8 meters at full pool) lake in southern Oregon. Lake circulation patterns are driven largely by wind, and the resulting currents affect the water quality and ecology of the lake. To support hydrodynamic modeling of the lake and statistical investigations of the relation between wind and lake water-quality measurements, the U.S. Geological Survey has monitored wind conditions along the lakeshore and at floating raft sites in the middle of the lake since 2005. In order to make the existing wind archive more useful, this report summarizes the development of empirical wind models that serve two purposes: (1) to fill short (on the order of hours or days) wind data gaps at raft sites in the middle of the lake, and (2) to reconstruct, on a daily basis, over periods of months to years, historical wind conditions at U.S. Geological Survey sites prior to 2005. Empirical wind models based on Artificial Neural Network (ANN) and Multivariate Adaptive Regression Splines (MARS) algorithms were compared. ANNs were better suited to simulating the 10-minute wind data that are the dependent variables of the gap-filling models, but the simpler MARS algorithm may be adequate to accurately simulate the daily wind data that are the dependent variables of the historical wind models. To further test the accuracy of the gap-filling models, the resulting simulated winds were used to force the hydrodynamic model of the lake, and the resulting simulated currents were compared to measurements from an acoustic Doppler current profiler. The error statistics indicated that the simulation of currents was degraded compared to when the model was forced with observed winds, but probably is adequate for short gaps in the data of a few days or less. Transport seems to be less affected by the use of the simulated winds in place of observed winds.
The simulated tracer concentration was similar between model results when simulated winds were used to force the model, and when observed winds were used to force the model, and differences between the two results did not accumulate over time.
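The gap-filling step can be caricatured with ordinary least squares in place of the ANN/MARS machinery: fit a relation between shore and raft winds over a period where both exist, then apply it across the gap in the raft record. The synthetic series below are stand-ins for the real stations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "shore" wind speeds and a "raft" series that is roughly a
# scaled, noisy version of them (invented, not the USGS data).
shore = rng.uniform(0.0, 10.0, size=500)
raft = 0.8 * shore + 1.2 + rng.normal(0.0, 0.3, size=500)

# Fit the linear relation on an overlap period where both series exist...
A = np.column_stack([shore[:400], np.ones(400)])
slope, intercept = np.linalg.lstsq(A, raft[:400], rcond=None)[0]

# ...then fill a "gap" in the raft record from shore data alone.
raft_filled = slope * shore[400:] + intercept
rmse = np.sqrt(np.mean((raft_filled - raft[400:]) ** 2))
```

The ANN and MARS models in the report play the same role as the linear fit here, but can capture the nonlinear, direction-dependent relations that a straight line cannot.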
NASA Astrophysics Data System (ADS)
Mokhov, I. I.
2018-04-01
The results describing the ability of contemporary global and regional climate models not only to assess the risk of general trends of changes but also to predict qualitatively new regional effects are presented. In particular, model simulations predicted spatially inhomogeneous changes in the wind and wave conditions in the Arctic basins, which have been confirmed in recent years. According to satellite and reanalysis data, a qualitative transition to the regime predicted by model simulations occurred about a decade ago.
Modelling and simulation techniques for membrane biology.
Burrage, Kevin; Hancock, John; Leier, André; Nicolau, Dan V
2007-07-01
One of the most important aspects of Computational Cell Biology is the understanding of the complicated dynamical processes that take place on plasma membranes. These processes are often so complicated that purely temporal models cannot always adequately capture the dynamics. On the other hand, spatial models can have large computational overheads. In this article, we review some of these issues with respect to chemistry, membrane microdomains and anomalous diffusion and discuss how to select appropriate modelling and simulation paradigms based on some or all the following aspects: discrete, continuous, stochastic, delayed and complex spatial processes.
Ocean-Atmosphere Coupled Model Simulations of Precipitation in the Central Andes
NASA Technical Reports Server (NTRS)
Nicholls, Stephen D.; Mohr, Karen I.
2015-01-01
The meridional extent and complex orography of the South American continent contribute to a wide diversity of climate regimes ranging from hyper-arid deserts to tropical rainforests to sub-polar highland regions. In addition, South American meteorology and climate are made further complicated by ENSO, a powerful coupled ocean-atmosphere phenomenon. Modelling studies in this region have typically resorted to either atmospheric mesoscale models or atmosphere-ocean coupled global climate models. The former offers full physics and high spatial resolution but is computationally expensive and typically lacks an interactive ocean, whereas the latter offers computational efficiency and ocean-atmosphere coupling but lacks adequate spatial and temporal resolution to resolve the complex orography and explicitly simulate precipitation. Explicit simulation of precipitation is vital in the Central Andes, where rainfall rates are light (0.5-5 mm hr-1), there is strong seasonality, and most precipitation is associated with weak mesoscale-organized convection. Recent increases in both computational power and model development have led to the advent of coupled ocean-atmosphere mesoscale models for both weather and climate study applications. These modelling systems, while computationally expensive, include two-way ocean-atmosphere coupling, high resolution, and explicit simulation of precipitation. In this study, we use the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model, a fully-coupled mesoscale atmosphere-ocean modeling system. Previous work has shown COAWST to reasonably simulate the entire 2003-2004 wet season (Dec-Feb) as validated against both satellite and model analysis data when ECMWF interim analysis data were used for boundary conditions on a 27-9-km grid configuration (outer grid extent: 60.4S to 17.7N and 118.6W to 17.4W).
Application of Probability Methods to Assess Crash Modeling Uncertainty
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.
Application of Probability Methods to Assess Crash Modeling Uncertainty
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.
2007-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.
Marco A. Contreras; Russell A. Parsons; Woodam Chung
2012-01-01
Land managers have been using fire behavior and simulation models to assist in several fire management tasks. These widely-used models use average attributes to make stand-level predictions without considering spatial variability of fuels within a stand. Consequently, as the existing models have limitations in adequately modeling crown fire initiation and propagation,...
NASA Technical Reports Server (NTRS)
Dunn, Mariea C.; Alves, Jeffrey R.; Hutchinson, Sonya L.
1999-01-01
This paper describes the human engineering analysis performed on the Materials Science Research Rack-1 and Quench Module Insert (MSRR-1/QMI) using Transom Jack (Jack) software. The Jack software was used to model a virtual environment consisting of the MSRR-1/QMI hardware configuration and human figures representing the 95th percentile male and 5th percentile female. The purpose of the simulation was to assess the human interfaces in the design for their ability to meet the requirements of the Pressurized Payloads Interface Requirements Document - International Space Program, Revision C (SSP 57000). Jack was used in the evaluation because of its ability to correctly model anthropometric body measurements and the physical behavior of astronauts working in microgravity, which is referred to as the neutral body posture. The Jack model allows evaluation of crewmember interaction with hardware through task simulation including, but not limited to, collision avoidance behaviors, hand/eye coordination, reach path planning, and automatic grasping to part contours. Specifically, this virtual simulation depicts the human figures performing the QMI installation and check-out, sample cartridge insertion and removal, and gas bottle drawer removal. These tasks were evaluated in terms of adequate clearance in reach envelopes, adequate accessibility in work envelopes, appropriate line of sight in visual envelopes, and accommodation of the full range of male and female statures for maneuverability. The results of the human engineering analysis virtual simulation indicate that most of the associated requirements of SSP 57000 were met. However, some hardware design considerations and crew procedure modifications are recommended to improve accessibility, provide an adequate work envelope, reduce awkward body posture, and eliminate permanent protrusions.
Assessment of Alternative Conceptual Models Using Reactive Transport Modeling with Monitoring Data
NASA Astrophysics Data System (ADS)
Dai, Z.; Price, V.; Heffner, D.; Hodges, R.; Temples, T.; Nicholson, T.
2005-12-01
Monitoring data proved very useful in evaluating alternative conceptual models, simulating contaminant transport behavior, and reducing uncertainty. A graded approach using three alternative conceptual site models was formulated to simulate a field case of tetrachloroethene (PCE) transport and biodegradation. These models ranged from simple to complex in their representation of subsurface heterogeneities. The simplest model was a single-layer homogeneous aquifer that employed an analytical reactive transport code, BIOCHLOR (Aziz et al., 1999). Due to over-simplification of the aquifer structure, this simulation could not reproduce the monitoring data. The second model consisted of a multi-layer conceptual model, in combination with numerical modules, MODFLOW and RT3D within GMS, to simulate flow and reactive transport. Although the simulation results from the second model were comparatively better than those from the simple model, they still did not adequately reproduce the monitoring well concentrations because the geological structures were still inadequately defined. Finally, a more realistic conceptual model was formulated that incorporated heterogeneities and geologic structures identified from well logs and seismic survey data using the Petra and PetraSeis software. This conceptual model included both a major channel and a younger channel that were detected in the PCE source area. In this model, these channels control the local ground-water flow direction and provide a preferential chemical transport pathway. Simulation results using this conceptual site model proved compatible with the monitoring concentration data. This study demonstrates that the bias and uncertainty from inadequate conceptual models are much larger than those introduced from an inadequate choice of model parameter values (Neuman and Wierenga, 2003; Meyer et al., 2004; Ye et al., 2004). 
This case study integrated conceptual and numerical models, based on interpreted local hydrogeologic and geochemical data, with detailed monitoring plume data. It provided key insights for confirming alternative conceptual site models and assessing the performance of monitoring networks. A monitoring strategy based on this graded approach for assessing alternative conceptual models can provide the technical bases for identifying critical monitoring locations, adequate monitoring frequency, and performance indicator parameters for performance monitoring involving ground-water levels and PCE concentrations.
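The simplest of the three conceptual models above used an analytical reactive transport code. In one dimension, steady-state advection with first-order biodegradation (dispersion neglected for brevity) reduces to an exponential profile, sketched below. This is a generic textbook reduction with invented parameter values, not the actual BIOCHLOR formulation, which also handles dispersion and sequential daughter products.

```python
import math

def steady_plume_conc(x, c0, v, lam):
    """Steady-state 1-D advection with first-order decay, dispersion
    neglected: v dC/dx = -lam * C  =>  C(x) = C0 * exp(-lam * x / v)."""
    return c0 * math.exp(-lam * x / v)

# Hypothetical PCE source at 1.0 mg/L, seepage velocity 0.1 m/day,
# first-order decay rate 0.002 1/day (all invented values).
profile = [steady_plume_conc(x, 1.0, 0.1, 0.002)
           for x in (0.0, 50.0, 100.0)]
```

A homogeneous single-layer model like this cannot represent the preferential channel pathways described above, which is precisely why it failed to reproduce the monitoring data.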
Behavior of the gypsy moth life system model and development of synoptic model formulations
J. J. Colbert; Xu Rumei
1991-01-01
Aims of the research: The gypsy moth life system model (GMLSM) is a complex model which incorporates numerous components (both biotic and abiotic) and ecological processes. It is a detailed simulation model which has much biological reality. However, it has not yet been tested with life system data. For such complex models, evaluation and testing cannot be adequately...
Simulating potato gas exchange as influenced by CO2 and irrigation
USDA-ARS?s Scientific Manuscript database
Recent research suggests that an energy balance approach is required for crop models to adequately respond to current and future climatic conditions associated with elevated CO2, higher temperatures, and water scarcity. More realistic models are needed in order to understand the impact of, and deve...
Techniques of Australian forest planning
Australian Forestry Council
1978-01-01
Computer modeling has been extensively adopted for Australian forest planning over the last ten years. It has been confined almost entirely to the plantations of fast-growing species for which adequate inventory, growth, and experimental data are available. Stand simulation models have replaced conventional yield tables and enabled a wide range of alternative...
Simulation tools for robotics research and assessment
NASA Astrophysics Data System (ADS)
Fields, MaryAnne; Brewer, Ralph; Edge, Harris L.; Pusey, Jason L.; Weller, Ed; Patel, Dilip G.; DiBerardino, Charles A.
2016-05-01
The Robotics Collaborative Technology Alliance (RCTA) program focuses on four overlapping technology areas: Perception, Intelligence, Human-Robot Interaction (HRI), and Dexterous Manipulation and Unique Mobility (DMUM). In addition, the RCTA program has a requirement to assess progress of this research in standalone as well as integrated form. Since the research is evolving and the robotic platforms with unique mobility and dexterous manipulation are in the early development stage and very expensive, an alternate approach is needed for efficient assessment. Simulation of robotic systems, platforms, sensors, and algorithms, is an attractive alternative to expensive field-based testing. Simulation can provide insight during development and debugging unavailable by many other means. This paper explores the maturity of robotic simulation systems for applications to real-world problems in robotic systems research. Open source (such as Gazebo and Moby), commercial (Simulink, Actin, LMS), government (ANVEL/VANE), and the RCTA-developed RIVET simulation environments are examined with respect to their application in the robotic research domains of Perception, Intelligence, HRI, and DMUM. Tradeoffs for applications to representative problems from each domain are presented, along with known deficiencies and disadvantages. In particular, no single robotic simulation environment adequately covers the needs of the robotic researcher in all of the domains. Simulation for DMUM poses unique constraints on the development of physics-based computational models of the robot, the environment and objects within the environment, and the interactions between them. Most current robot simulations focus on quasi-static systems, but dynamic robotic motion places an increased emphasis on the accuracy of the computational models. 
In order to understand the interaction of dynamic multi-body systems, such as limbed robots, with the environment, it may be necessary to build component-level computational models to provide the simulation fidelity needed for accuracy. However, the Perception domain remains the most problematic for adequate simulation performance, due to the often cartoon-like nature of computer rendering and the inability to model realistic electromagnetic radiation effects, such as multiple reflections, in real time.
NASA Astrophysics Data System (ADS)
Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter
2016-10-01
This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.
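In standard form, the two governing equations named above can be written as follows (notation assumed, not copied from the paper):

```latex
% Richards equation for variably saturated flow
\frac{\partial \theta(\psi)}{\partial t}
  = \nabla \cdot \left[ K(\psi)\, \nabla (\psi + z) \right]

% Advection-dispersion equation for solute transport
\frac{\partial (\theta c)}{\partial t}
  = \nabla \cdot \left( \theta \mathbf{D}\, \nabla c \right)
  - \nabla \cdot \left( \mathbf{q}\, c \right)
```

where theta is the volumetric water content, psi the pressure head, K(psi) the unsaturated hydraulic conductivity, z the elevation head, c the solute concentration, D the dispersion tensor, and q the Darcy flux. The coupling runs one way: the flow solution supplies theta and q to the transport equation, which is why errors in the flow parameterization propagate into the simulated tracer response.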
St James, Sara; Seco, Joao; Mishra, Pankaj; Lewis, John H
2013-09-01
The purpose of this work is to present a framework to evaluate the accuracy of four-dimensional treatment planning in external beam radiation therapy using measured patient data and digital phantoms. To accomplish this, 4D digital phantoms of two model patients were created using measured patient lung tumor positions. These phantoms were used to simulate a four-dimensional computed tomography image set, which in turn was used to create a 4D Monte Carlo (4DMC) treatment plan. The 4DMC plan was evaluated by simulating the delivery of the treatment plan over approximately 5 min of tumor motion measured from the same patient on a different day. Unique phantoms accounting for the patient position (tumor position and thorax position) at 2 s intervals were used to represent the model patients on the day of treatment delivery and the delivered dose to the tumor was determined using Monte Carlo simulations. For Patient 1, the tumor was adequately covered with 95.2% of the tumor receiving the prescribed dose. For Patient 2, the tumor was not adequately covered and only 74.3% of the tumor received the prescribed dose. This study presents a framework to evaluate 4D treatment planning methods and demonstrates a potential limitation of 4D treatment planning methods. When systematic errors are present, including when the imaging study used for treatment planning does not represent all potential tumor locations during therapy, the treatment planning methods may not adequately predict the dose to the tumor. This is the first example of a simulation study based on patient tumor trajectories where systematic errors that occur due to an inaccurate estimate of tumor motion are evaluated.
An Improved MUSIC Model for Gibbsite Surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Scott C.; Bickmore, Barry R.; Tadanier, Christopher J.
2004-06-01
Here we use gibbsite as a model system with which to test a recently published bond-valence method for predicting intrinsic pKa values for surface functional groups on oxides. At issue is whether the method is adequate when valence parameters for the functional groups are derived from ab initio structure optimization of surfaces terminated by vacuum. If not, ab initio molecular dynamics (AIMD) simulations of solvated surfaces (which are much more computationally expensive) will have to be used. To do this, we had to evaluate extant gibbsite potentiometric titration data for which some estimate of edge and basal surface area was available. Applying BET and recently developed atomic force microscopy methods, we found that most of these data sets were flawed, in that their surface area estimates were probably wrong. Similarly, there may have been problems with many of the titration procedures. However, one data set was adequate on both counts, and we applied our method of surface pKa(int) prediction to fitting a MUSIC model to these data with considerable success: several features of the titration data were predicted well. However, the model fit was certainly not perfect, and we experienced some difficulties optimizing highly charged, vacuum-terminated surfaces. Therefore, we conclude that we probably need to do AIMD simulations of solvated surfaces to adequately predict intrinsic pKa values for surface functional groups.
Quadrupedal locomotor simulation: producing more realistic gaits using dual-objective optimization
Hirasaki, Eishi
2018-01-01
In evolutionary biomechanics it is often considered that gaits should evolve to minimize the energetic cost of travelling a given distance. In gait simulation this goal often leads to convincing gait generation. However, as the musculoskeletal models used get increasingly sophisticated, it becomes apparent that such a single goal can lead to extremely unrealistic gait patterns. In this paper, we explore the effects of requiring adequate lateral stability and show how this increases both energetic cost and the realism of the generated walking gait in a high biofidelity chimpanzee musculoskeletal model. We also explore the effects of changing the footfall sequences in the simulation so it mimics both the diagonal sequence walking gaits that primates typically use and also the lateral sequence walking gaits that are much more widespread among mammals. It is apparent that adding a lateral stability criterion has an important effect on the footfall phase relationship, suggesting that lateral stability may be one of the key drivers behind the observed footfall sequences in quadrupedal gaits. The observation that single optimization goals are no longer adequate for generating gait in current models has important implications for the use of biomimetic virtual robots to predict the locomotor patterns in fossil animals. PMID:29657790
Experiences in integrating auto-translated state-chart designs for model checking
NASA Technical Reports Server (NTRS)
Pingree, P. J.; Benowitz, E. G.
2003-01-01
In the complex environment of JPL's flight missions with increasing dependency on advanced software designs, traditional software validation methods of simulation and testing are being stretched to adequately cover the needs of software development.
NASA Astrophysics Data System (ADS)
Paparrizos, Spyridon; Maris, Fotios
2017-05-01
The MIKE SHE model is able to simulate the entire stream flow, including both direct runoff and baseflow. Many models either do not simulate baseflow or use simplistic methods to determine it. The MIKE SHE model takes many kinds of hydrological data into account. Since this study was directed towards the simulation of surface runoff and infiltration into the saturated and unsaturated zones, MIKE SHE is an appropriate model for drawing reliable conclusions. In the current research, the MIKE SHE model was used to simulate runoff in the Sperchios River basin. Meteorological data from eight rainfall stations within the Sperchios River basin were used as inputs. Vegetation as well as geological data were used to perform the calibration and validation of the physical processes of the model. Additionally, the ArcGIS program was used. The results indicated that the model was able to simulate the surface runoff satisfactorily, representing all the hydrological data adequately. Some minor discrepancies appeared, which can be eliminated with appropriate adjustments guided by the researcher's experience.
Wieland, Birgit; Ropte, Sven
2017-01-01
The production of rotor blades for wind turbines is still a predominantly manual process. Process simulation is an adequate way of improving blade quality without a significant increase in production costs. This paper introduces a module for tolerance simulation for rotor-blade production processes. The investigation focuses on the simulation of temperature distribution for one-sided, self-heated tooling and thick laminates. Experimental data from rotor-blade production and down-scaled laboratory tests are presented. Based on influencing factors that are identified, a physical model is created and implemented as a simulation. This provides an opportunity to simulate temperature and cure-degree distribution for two-dimensional cross sections. The aim of this simulation is to support production processes. Hence, it is modelled as an in situ simulation with direct input of temperature data and real-time capability. A monolithic part of the rotor blade, the main girder, is used as an example for presenting the results. PMID:28981458
Walter, Donald A.; Masterson, John P.
2003-01-01
The U.S. Geological Survey has developed several ground-water models in support of an investigation of ground-water contamination being conducted by the Army National Guard Bureau at Camp Edwards, Massachusetts Military Reservation on western Cape Cod, Massachusetts. Regional and subregional steady-state models and regional transient models were used to (1) improve understanding of the hydrologic system, (2) simulate advective transport of contaminants, (3) delineate recharge areas to municipal wells, and (4) evaluate how model discretization and time-varying recharge affect simulation results. A water-table mound dominates ground-water-flow patterns. Near the top of the mound, which is within Camp Edwards, hydraulic gradients are nearly vertically downward and horizontal gradients are small. In downgradient areas that are farther from the top of the water-table mound, the ratio of horizontal to vertical gradients is larger and horizontal flow predominates. The steady-state regional model adequately simulates advective transport in some areas of the aquifer; however, simulation of ground-water flow in areas with local hydrologic boundaries, such as ponds, requires more finely discretized subregional models. Subregional models also are needed to delineate recharge areas to municipal wells that are inadequately represented in the regional model or are near other pumped wells. Long-term changes in recharge rates affect hydraulic heads in the aquifer and shift the position of the top of the water-table mound. Hydraulic-gradient directions do not change over time in downgradient areas, whereas they do change substantially with temporal changes in recharge near the top of the water-table mound. The assumption of steady-state hydraulic conditions is valid in downgradient areas, where advective transport paths change little over time.
In areas closer to the top of the water-table mound, advective transport paths change as a function of time, transient and steady-state paths do not coincide, and the assumption of steady-state conditions is not valid. The simulation results indicate that several modeling tools are needed to adequately simulate ground-water flow at the site and that the utility of a model varies according to hydrologic conditions in the specific areas of interest.
Simulation of Triple Oxidation Ditch Wastewater Treatment Process
NASA Astrophysics Data System (ADS)
Yang, Yue; Zhang, Jinsong; Liu, Lixiang; Hu, Yongfeng; Xu, Ziming
2010-11-01
This paper presented the modeling mechanism and method of a sewage treatment system. A triple oxidation ditch process at a WWTP was simulated based on activated sludge model ASM2D with GPS-X software. In order to identify an adequate model structure to implement in the GPS-X environment, the oxidation ditch was divided into several completely stirred tank reactors depending on the distribution of aeration devices and dissolved oxygen concentration. The removal efficiencies of COD, ammonia nitrogen, total nitrogen, total phosphorus and SS were simulated by GPS-X software with influent quality data of this WWTP from June to August 2009, to investigate the differences between the simulated results and the actual results. The results showed that the simulated values could well reflect the actual condition of the triple oxidation ditch process. The mathematical modeling method was appropriate for predicting effluent quality and optimizing the process.
Recent progress towards predicting aircraft ground handling performance
NASA Technical Reports Server (NTRS)
Yager, T. J.; White, E. J.
1981-01-01
The capability implemented for simulating aircraft ground handling performance is reviewed, and areas for further expansion and improvement are identified. Problems associated with providing the simulator input data necessary for adequate modeling of aircraft tire/runway friction behavior are discussed, and efforts to improve tire/runway friction definition and simulator fidelity are described. Aircraft braking performance data obtained on several wet runway surfaces are compared to ground vehicle friction measurements. Research to improve methods of predicting tire friction performance is discussed.
Modeling laser velocimeter signals as triply stochastic Poisson processes
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1976-01-01
Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
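The layered Poisson structure described above can be illustrated with a small sketch. This is not the paper's full triply stochastic model: the burst envelope, Doppler frequency, and rate values below are hypothetical, and only the middle layer (a nonhomogeneous Poisson photon process whose rate follows a Doppler-modulated burst) is shown, sampled by Lewis-Shedler thinning.

```python
import numpy as np

rng = np.random.default_rng(0)

def ldv_photon_arrivals(t_end=1e-3, lam_max=2e5, f_d=5e4, t0=5e-4, tau=1e-4):
    """Nonhomogeneous Poisson photon arrivals whose rate follows a
    Gaussian-envelope Doppler burst (all parameter values are hypothetical)."""
    def rate(t):
        # Gaussian burst envelope modulated at the Doppler frequency f_d.
        return lam_max * np.exp(-((t - t0) / tau) ** 2) \
               * (1.0 + np.cos(2.0 * np.pi * f_d * t)) / 2.0

    # Lewis-Shedler thinning: draw candidate times at the maximal rate,
    # then keep each candidate with probability rate(t) / lam_max.
    n_cand = rng.poisson(lam_max * t_end)
    cand = rng.uniform(0.0, t_end, n_cand)
    keep = rng.uniform(0.0, lam_max, n_cand) < rate(cand)
    return np.sort(cand[keep])

arrivals = ldv_photon_arrivals()
```

In the full triply stochastic picture, `lam_max`, `f_d`, and `t0` would themselves be drawn from a slower Poisson process of particle transits controlled by the random flow.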
The Co-simulation of Humanoid Robot Based on Solidworks, ADAMS and Simulink
NASA Astrophysics Data System (ADS)
Song, Dalei; Zheng, Lidan; Wang, Li; Qi, Weiwei; Li, Yanli
A simulation method for an adaptive controller is proposed for the humanoid robot system based on co-simulation with Solidworks, ADAMS and Simulink. This method avoids a complex mathematical modeling process and fully exploits the real-time dynamic simulation capability of Simulink. The method can be generalized to other complicated control systems. It is adopted here to build and analyse the model of a humanoid robot, and trajectory tracking and adaptive controller design also proceed on this basis. The trajectory-tracking performance is evaluated by least-squares curve fitting. Comparative analysis shows that the anti-interference capability of the robot is substantially improved.
USDA-ARS?s Scientific Manuscript database
Most available biogeochemical models focus within a soil profile and cannot adequately resolve contributions of the lighter size fractions of organic rich soils for Enrichment Ratio (ER) estimates, thereby causing unintended errors in Soil Organic Carbon (SOC) storage predictions. These models set E...
A visual-environment simulator with variable contrast
NASA Astrophysics Data System (ADS)
Gusarova, N. F.; Demin, A. V.; Polshchikov, G. V.
1987-01-01
A visual-environment simulator is proposed in which the image contrast can be varied continuously up to the reversal of the image. Contrast variability can be achieved by using two independently adjustable light sources to simultaneously illuminate the carrier of visual information (e.g., a slide or a cinematographic film). It is shown that such a scheme makes it possible to adequately model a complex visual environment.
Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.
Kolossa, Antonio; Kopp, Bruno
2016-01-01
The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.
The strengths and weaknesses of inverted pendulum models of human walking.
McGrath, Michael; Howard, David; Baker, Richard
2015-02-01
An investigation into the kinematic and kinetic predictions of two "inverted pendulum" (IP) models of gait was undertaken. The first model consisted of a single leg, with anthropometrically correct mass and moment of inertia, and a point mass at the hip representing the rest of the body. A second model incorporating the physiological extension of a head-arms-trunk (HAT) segment, held upright by an actuated hip moment, was developed for comparison. Simulations were performed, using both models, and quantitatively compared with empirical gait data. There was little difference between the two models' predictions of kinematics and ground reaction force (GRF). The models agreed well with empirical data through mid-stance (20-40% of the gait cycle) suggesting that IP models adequately simulate this phase (mean error less than one standard deviation). IP models are not cyclic, however, and cannot adequately simulate double support and step-to-step transition. This is because the forces under both legs augment each other during double support to increase the vertical GRF. The incorporation of an actuated hip joint was the most novel change and added a new dimension to the classic IP model. The hip moment curve produced was similar to those measured during experimental walking trials. As a result, it was interpreted that the primary role of the hip musculature in stance is to keep the HAT upright. Careful consideration of the differences between the models throws light on what the different terms within the GRF equation truly represent. Copyright © 2014 Elsevier B.V. All rights reserved.
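The first, point-mass variant of the IP model above can be sketched in a few lines; the mass, leg length, and initial conditions below are illustrative guesses, not values from the study, and a simple explicit Euler step stands in for whatever integrator the authors used.

```python
import numpy as np

def inverted_pendulum_stance(theta0=-0.35, omega0=1.2, m=70.0, L=0.95,
                             g=9.81, dt=1e-3):
    """Single-stance inverted-pendulum sketch (hypothetical parameters):
    a point mass m atop a rigid massless leg of length L pivoting about
    the foot. Returns leg angles and vertical GRF through stance."""
    thetas, grfs = [], []
    theta, omega = theta0, omega0           # angle from vertical, angular rate
    while theta < -theta0:                  # integrate to the symmetric exit angle
        alpha = (g / L) * np.sin(theta)     # theta'' = (g/L) sin(theta)
        omega += alpha * dt
        theta += omega * dt
        # Along-leg force balance: gravity component minus centripetal demand.
        f_leg = m * (g * np.cos(theta) - L * omega ** 2)
        thetas.append(theta)
        grfs.append(f_leg * np.cos(theta))  # vertical component of the GRF
    return np.array(thetas), np.array(grfs)

thetas, grfs = inverted_pendulum_stance()
```

Because the centripetal term subtracts from gravity, the vertical GRF of a single IP stays below body weight, which is why, as the abstract notes, a second augmenting leg is needed to reproduce double support.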
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Paul A.; Cooper, Candice Frances; Burnett, Damon J.
Light body armor development for the warfighter is based on trial-and-error testing of prototype designs against ballistic projectiles. Torso armor testing against blast is virtually nonexistent but necessary to ensure adequate protection against injury to the heart and lungs. In this report, we discuss the development of a high-fidelity human torso model, its merging with the existing Sandia Human Head-Neck Model, and development of the modeling & simulation (M&S) capabilities necessary to simulate wound injury scenarios. Using the new Sandia Human Torso Model, we demonstrate the advantage of virtual simulation in the investigation of wound injury as it relates to the warfighter experience. We present the results of virtual simulations of blast loading and ballistic projectile impact to the torso with and without notional protective armor. In this manner, we demonstrate the advantages of applying a modeling and simulation approach to the investigation of wound injury and relative merit assessments of protective body armor without the need for trial-and-error testing.
Katsube, Takayuki; Wajima, Toshihiro; Ishibashi, Toru; Arjona Ferreira, Juan Camilo; Echols, Roger
2017-01-01
Cefiderocol, a novel parenteral siderophore cephalosporin, exhibits potent efficacy against most Gram-negative bacteria, including carbapenem-resistant strains. Since cefiderocol is excreted primarily via the kidneys, this study was conducted to develop a population pharmacokinetics (PK) model to determine dose adjustment based on renal function. Population PK models were developed based on data for cefiderocol concentrations in plasma, urine, and dialysate with a nonlinear mixed-effects model approach. Monte-Carlo simulations were conducted to calculate the probability of target attainment (PTA) for the fraction of time during the dosing interval in which the free drug concentration in plasma exceeds the MIC (fT>MIC) over an MIC range of 0.25 to 16 μg/ml. For the simulations, dose regimens were selected to compare cefiderocol exposure among groups with different levels of renal function. The developed models well described the PK of cefiderocol for each renal function group. A dose of 2 g every 8 h with 3-h infusions provided >90% PTA for 75% fT>MIC for an MIC of ≤4 μg/ml for patients with normal renal function, while a more frequent dose (every 6 h) could be used for patients with augmented renal function. A reduced dose and/or extended dosing interval was selected for patients with impaired renal function. A supplemental dose immediately after intermittent hemodialysis was proposed for patients requiring intermittent hemodialysis. The PK of cefiderocol could be adequately modeled, and the modeling-and-simulation approach suggested dose regimens based on renal function, ensuring drug exposure with adequate bactericidal effect. Copyright © 2016 American Society for Microbiology.
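The Monte-Carlo PTA calculation described above can be sketched with a deliberately simplified one-compartment IV-infusion model. The clearance, volume, between-subject variability, and free fraction below are invented placeholders, not cefiderocol's population PK parameters; only the logic (simulate concentration profiles, compute fT>MIC per subject, count subjects hitting the 75% target) mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def pta(mic, dose=2000.0, tinf=3.0, tau=8.0, fu=0.42, target=0.75, n=2000):
    """Monte-Carlo probability of target attainment for fT>MIC under a
    one-compartment IV-infusion model (all PK parameters hypothetical)."""
    cl = 5.0 * np.exp(rng.normal(0.0, 0.3, n))   # L/h, log-normal variability
    v = 18.0 * np.exp(rng.normal(0.0, 0.2, n))   # L
    k = cl / v
    dt = 0.05
    t = np.arange(0.0, 5 * tau, dt)              # run to near steady state
    rate = np.where((t % tau) < tinf, dose / tinf, 0.0)
    c = np.zeros((n, t.size))
    for i in range(1, t.size):                   # explicit Euler: dC/dt = rate/V - k*C
        c[:, i] = c[:, i - 1] + dt * (rate[i - 1] / v - k * c[:, i - 1])
    last = t >= 4 * tau                          # evaluate the final dosing interval
    ft = (fu * c[:, last] > mic).mean(axis=1)    # per-subject fraction of interval > MIC
    return float((ft >= target).mean())
```

With these placeholder parameters, PTA falls as the MIC rises across the 0.25 to 16 μg/ml range, reproducing the qualitative behavior the abstract describes.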
Adapting the Water Erosion Prediction Project (WEPP) model for forest applications
Shuhui Dun; Joan Q. Wu; William J. Elliot; Peter R. Robichaud; Dennis C. Flanagan; James R. Frankenberger; Robert E. Brown; Arthur C. Xu
2009-01-01
There has been an increasing public concern over forest stream pollution by excessive sedimentation due to natural or human disturbances. Adequate erosion simulation tools are needed for sound management of forest resources. The Water Erosion Prediction Project (WEPP) watershed model has proved useful in forest applications where Hortonian flow is the major form of...
Computer simulations of liquid crystals: Defects, deformations and dynamics
NASA Astrophysics Data System (ADS)
Billeter, Jeffrey Lee
1999-11-01
Computer simulations play an increasingly important role in investigating fundamental issues in the physics of liquid crystals. Presented here are the results of three projects which utilize the unique power of simulations to probe questions which neither theory nor experiment can adequately answer. Throughout, we use the (generalized) Gay-Berne model, a widely-used phenomenological potential which captures the essential features of the anisotropic mesogen shapes and interactions. First, we used a Molecular Dynamics simulation with 65536 Gay-Berne particles to study the behaviors of topological defects in a quench from the isotropic to the nematic phase. Twist disclination loops were the dominant defects, and we saw evidence for dynamical scaling. We observed the loops separating, combining and collapsing, and we also observed numerous non-singular type-1 lines which appeared to be intimately involved with many of the loop processes. Second, we used a Molecular Dynamics simulation of a sphere embedded in a system of 2048 Gay-Berne particles to study the effects of radial anchoring of the molecules at the sphere's surface. A saturn ring defect configuration was observed, and the ring caused a driven sphere (modelling the falling ball experiment) to experience an increased resistance as it moved through the nematic. Deviations from a linear relationship between the driving force and the terminal speed are attributed to distortions of the saturn ring which we observed. The existence of the saturn ring confirms theoretical predictions for small spheres. Finally, we constructed a model for wedge-shaped molecules and used a linear response approach in a Monte Carlo simulation to investigate the flexoelectric behavior of a system of 256 such wedges. Novel potential models as well as novel analytical and visualization techniques were developed for these projects. Once again, the emphasis throughout was to investigate questions which simulations alone can adequately answer.
Local rules simulation of the kinetics of virus capsid self-assembly.
Schwartz, R; Shor, P W; Prevelige, P E; Berger, B
1998-12-01
A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed to address this is based on the local rules theory (Proc. Natl. Acad. Sci. USA, 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the concentration of subunits in the examples. These experiments demonstrate a tradeoff within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.
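The growth-versus-fidelity tradeoff the experiments demonstrate can be caricatured in a toy kinetic sketch. This is not the local-rules model itself: the target size, binding probability, and error probability below are invented, and a malformed addition is crudely assumed to terminate growth.

```python
import random

def assemble(n_target=60, p_bind=0.9, p_error=0.05, max_steps=10000, seed=3):
    """Toy capsid-growth kinetics (hypothetical parameters): at each step a
    subunit binds with probability p_bind; each successful addition is
    malformed with probability p_error, which here aborts the assembly."""
    rng = random.Random(seed)
    size, steps = 1, 0
    while size < n_target and steps < max_steps:
        steps += 1
        if rng.random() < p_bind:
            if rng.random() < p_error:
                return size, False       # malformed capsid, growth stops
            size += 1
    return size, size == n_target        # completed iff full size reached

size, ok = assemble()
```

Raising `p_bind` speeds growth but, with a fixed per-addition `p_error`, more attempted additions per unit time also raise the malformation rate, which is the qualitative tradeoff in the abstract.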
Flight Testing an Iced Business Jet for Flight Simulation Model Validation
NASA Technical Reports Server (NTRS)
Ratvasky, Thomas P.; Barnhart, Billy P.; Lee, Sam; Cooper, Jon
2007-01-01
A flight test of a business jet aircraft with various ice accretions was performed to obtain data to validate flight simulation models developed through wind tunnel tests. Three types of ice accretions were tested: pre-activation roughness, runback shapes that form downstream of the thermal wing ice protection system, and a wing ice protection system failure shape. The high fidelity flight simulation models of this business jet aircraft were validated using a software tool called "Overdrive." Through comparisons of flight-extracted aerodynamic forces and moments to simulation-predicted forces and moments, the simulation models were successfully validated. Only minor adjustments in the simulation database were required to obtain adequate match, signifying the process used to develop the simulation models was successful. The simulation models were implemented in the NASA Ice Contamination Effects Flight Training Device (ICEFTD) to enable company pilots to evaluate flight characteristics of the simulation models. By and large, the pilots confirmed good similarities in the flight characteristics when compared to the real airplane. However, pilots noted pitch up tendencies at stall with the flaps extended that were not representative of the airplane and identified some differences in pilot forces. The elevator hinge moment model and implementation of the control forces on the ICEFTD were identified as a driver in the pitch ups and control force issues, and will be an area for future work.
Spherical harmonic analysis of a model-generated climatology
NASA Technical Reports Server (NTRS)
Christidis, Z. D.; Spar, J.
1981-01-01
Monthly mean fields of 850 mb temperature (T850), 500 mb geopotential height (G500) and sea level pressure (SLP) were generated in the course of a five-year climate simulation run with a global general circulation model. Both the model-generated climatology and an observed climatology were subjected to spherical harmonic analysis, with separate analyses of the globe and the Northern Hemisphere. Comparison of the dominant harmonics of the two climatologies indicates that more than 95% of the area-weighted spatial variance of G500 and more than 90% of that of T850 are explained by fewer than three components, and that the model adequately simulates these large-scale characteristics. On the other hand, as many as 25 harmonics are needed to explain 95% of the observed variance of SLP, and the model simulation of this field is much less satisfactory. The model climatology is also evaluated in terms of the annual cycles of the dominant harmonics.
A simple electric circuit model for proton exchange membrane fuel cells
NASA Astrophysics Data System (ADS)
Lazarou, Stavros; Pyrgioti, Eleftheria; Alexandridis, Antonio T.
A simple and novel dynamic circuit model for a proton exchange membrane (PEM) fuel cell suitable for the analysis and design of power systems is presented. The model takes into account phenomena such as activation polarization, ohmic polarization, and the mass transport effect present in a PEM fuel cell. The proposed circuit model includes three resistors to approach these phenomena adequately; however, since the connection or disconnection of an additional load is of crucial importance to PEM dynamic performance, the proposed model uses two saturable inductors accompanied by an ideal transformer to simulate the double-layer charging effect during load step changes. To evaluate the effectiveness of the proposed model, its dynamic performance under load step changes is simulated. Experimental results from a commercial PEM fuel cell module, which uses hydrogen from a pressurized cylinder at the anode and atmospheric oxygen at the cathode, clearly verify the simulation results.
Rainfall runoff modelling of the Upper Ganga and Brahmaputra basins using PERSiST.
Futter, M N; Whitehead, P G; Sarkar, S; Rodda, H; Crossman, J
2015-06-01
There are ongoing discussions about the appropriate level of complexity and sources of uncertainty in rainfall runoff models. Simulations for operational hydrology, flood forecasting or nutrient transport all warrant different levels of complexity in the modelling approach. More complex model structures are appropriate for simulations of land-cover dependent nutrient transport while more parsimonious model structures may be adequate for runoff simulation. The appropriate level of complexity is also dependent on data availability. Here, we use PERSiST; a simple, semi-distributed dynamic rainfall-runoff modelling toolkit to simulate flows in the Upper Ganges and Brahmaputra rivers. We present two sets of simulations driven by single time series of daily precipitation and temperature using simple (A) and complex (B) model structures based on uniform and hydrochemically relevant land covers respectively. Models were compared based on ensembles of Bayesian Information Criterion (BIC) statistics. Equifinality was observed for parameters but not for model structures. Model performance was better for the more complex (B) structural representations than for parsimonious model structures. The results show that structural uncertainty is more important than parameter uncertainty. The ensembles of BIC statistics suggested that neither structural representation was preferable in a statistical sense. Simulations presented here confirm that relatively simple models with limited data requirements can be used to credibly simulate flows and water balance components needed for nutrient flux modelling in large, data-poor basins.
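The BIC-based comparison of simple versus complex model structures used above can be sketched on synthetic data. The regression models and noise level here are stand-ins, not PERSiST structures; the point is only the mechanics of trading goodness-of-fit against a parameter-count penalty.

```python
import numpy as np

def bic(y, yhat, k):
    """Bayesian Information Criterion under a Gaussian error model:
    n*ln(RSS/n) + k*ln(n); lower values indicate the preferred model."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, x.size)   # true process is linear

# Structure A: parsimonious (2 parameters); structure B: complex (6 parameters).
fit_a = np.polyval(np.polyfit(x, y, 1), x)
fit_b = np.polyval(np.polyfit(x, y, 5), x)
bic_simple = bic(y, fit_a, 2)
bic_complex = bic(y, fit_b, 6)
```

Here the extra parameters of structure B buy almost no reduction in residual error, so the `k*ln(n)` penalty dominates and BIC favors the parsimonious structure, mirroring how such statistics arbitrate between model structures in the study.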
Angulo, Jesús; Nieto, Pedro M; Martín-Lomas, Manuel
2003-07-07
For a synthetic hexasaccharide model it is shown that the conformational flexibility of the L-iduronate ring in glycosaminoglycans can be adequately described by using the PME methodology together with simulation protocols suitable for highly charged systems.
Numerical Simulation of Slag Eye Formation and Slag Entrapment in a Bottom-Blown Argon-Stirred Ladle
NASA Astrophysics Data System (ADS)
Liu, Wei; Tang, Haiyan; Yang, Shufeng; Wang, Minghui; Li, Jingshe; Liu, Qing; Liu, Jianhui
2018-06-01
A transient mathematical model is developed for simulating the bubble-steel-slag-top gas four-phase flow in a bottom-blown argon-stirred ladle with a 70-ton capacity. The Lagrangian discrete phase model (DPM) is used for describing the moving behavior of bubbles in the steel and slag. To observe the formation process of slag eye, the volume of fluid (VOF) model is used to track the interfaces between three incompressible phases: metal/slag, metal/gas, and slag/gas. The complex multiphase turbulent flow induced by bubble-liquid interactions is solved by a large eddy simulation (LES) model. Slag eye area and slag droplet dispersion are investigated under different gas flow rates. The results show that the movement of bubbles, formation and collapse of slag eye, volatility of steel/slag interface and behavior of slag entrapment can be properly predicted in the current model. When the gas flow rate is 300 L/min, the circulation driven by the bubble plume will stir the entire ladle adequately and form a slag eye of the right size. At the same time, it will not cause strong erosion to the ladle wall, and the fluctuation of the interface is of adequate intensity, which will be helpful for improving the desulfurization efficiency; the slag entrapment behavior can also be decreased. Interestingly, with the motion of liquid steel circulation, the collision and coalescence of dispersed slag droplets occur during the floating process in the vicinity of the wall.
Application of an Integrated HPC Reliability Prediction Framework to HMMWV Suspension System
2010-09-13
model number M966 (TOW Missile Carrier, Basic Armor without weapons), since they were available. Tires used for all simulations were the bias-type...vehicle fleet, including consideration of all kinds of uncertainty, especially including model uncertainty. The end result will be a tool to use...building an adequate vehicle reliability prediction framework for military vehicles is the accurate modeling of the integration of various types of
NASA Astrophysics Data System (ADS)
Klimczak, Marcin; Bojarski, Jacek; Ziembicki, Piotr; Kęskiewicz, Piotr
2017-11-01
The requirements concerning the energy performance of buildings and their internal installations, particularly HVAC systems, have been growing continuously in Poland and all over the world. The existing, traditional calculation methods based on the static heat-exchange model are frequently insufficient for sound heating design of a building. Both in Poland and elsewhere in the world, methods and software are employed which allow a detailed simulation of the heating and moisture conditions in a building, and also an analysis of the performance of HVAC systems within a building. However, such software is usually complex and difficult to use. In addition, developing a simulation model that is sufficiently adequate to the real building demands considerable designer involvement and is time-consuming and laborious. Simplifying the simulation model of a building makes it possible to reduce the cost of computer simulations. The paper analyses in detail the effect of introducing a number of different variants of the simulation model developed in Design Builder on the quality of the final results obtained. The objective of this analysis is to find simplifications that yield simulation results with an acceptable level of deviation from the detailed model, thus facilitating a quick energy performance analysis of a given building.
Numerical simulation of small-scale thermal convection in the atmosphere
NASA Technical Reports Server (NTRS)
Somerville, R. C. J.
1973-01-01
A Boussinesq system is integrated numerically in three dimensions and time in a study of nonhydrostatic convection in the atmosphere. Simulation of cloud convection is achieved by the inclusion of parametrized effects of latent heat and small-scale turbulence. The results are compared with the cell structure observed in Rayleigh-Benard laboratory convection experiments in air. At a Rayleigh number of 4000, the numerical model adequately simulates the experimentally observed evolution, including some prominent transients, of a flow from a randomly perturbed initial conductive state into the final state of steady large-amplitude two-dimensional rolls. At Rayleigh number 9000, the model reproduces the experimentally observed unsteady equilibrium of vertically coherent oscillatory waves superimposed on rolls.
Sim, Adelene Y L
2016-06-01
Nucleic acids are biopolymers that carry genetic information and are also involved in various gene regulation functions such as gene silencing and protein translation. Because of their negatively charged backbones, nucleic acids are polyelectrolytes. To adequately understand nucleic acid folding and function, we need to properly describe its i) polymer/polyelectrolyte properties and ii) associating ion atmosphere. While various theories and simulation models have been developed to describe nucleic acids and the ions around them, many of these theories/simulations have not been well evaluated due to complexities in comparison with experiment. In this review, I discuss some recent experiments that have been strategically designed for straightforward comparison with theories and simulation models. Such data serve as excellent benchmarks to identify limitations in prevailing theories and simulation parameters. Copyright © 2015 Elsevier B.V. All rights reserved.
Nursing simulation: a community experience.
Gunowa, Neesha Oozageer; Elliott, Karen; McBride, Michelle
2018-04-02
The education sector faces major challenges in providing learning experiences so that newly qualified nurses feel adequately prepared to work in a community setting. With this in mind, higher education institutions need to develop more innovative ways to deliver the community-nurse experience to student nurses. This paper presents and explores how simulation provides an opportunity for educators to support and evaluate student performance in an environment that models a complete patient encounter in the community. Following the simulation, evaluative data were collated and the answers analysed to identify key recommendations.
Microscopic transport model animation visualisation on KML base
NASA Astrophysics Data System (ADS)
Yatskiv, I.; Savrasovs, M.
2012-10-01
Classical simulation literature notes that one of the greatest strengths of simulation is the ability to present the processes inside a system through animation. This adds value when presenting simulation results to the public and to authorities who are not familiar with simulation. That is why most universal and specialised simulation tools can construct 2D and 3D representations of the model. Developing such a representation, however, is usually time-consuming, and considerable effort must go into creating an adequate 3D representation of the model. Well-known microscopic traffic flow simulation tools such as VISSIM, AIMSUN and PARAMICS have long been able to produce 2D and 3D animation. Yet even in these professional tools, creating a realistic 3D model of the location where traffic flows are simulated is a hard and time-consuming task. The goal of this paper is to describe concepts for using existing online geographical information systems to visualise animation produced by simulation software. For demonstration purposes the following technologies and tools have been used: PTV VISION VISSIM, KML and Google Earth.
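The idea of pushing simulated vehicle trajectories onto a KML base can be sketched as a small generator. The element names follow the standard Google Earth `gx:Track` extension to KML 2.2, which animates a placemark along timestamped coordinates; the placemark name, coordinates, and timestamps below are invented, and a real VISSIM export would supply them from the simulation log.

```python
from xml.sax.saxutils import escape

def track_kml(name, samples):
    """Build a KML document animating one vehicle along timestamped
    positions. `samples` is a list of (iso_time, lon, lat) tuples,
    e.g. taken from a microscopic simulation's trajectory output."""
    whens = "\n".join(f"      <when>{escape(t)}</when>" for t, _, _ in samples)
    coords = "\n".join(f"      <gx:coord>{lon} {lat} 0</gx:coord>"
                       for _, lon, lat in samples)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:gx="http://www.google.com/kml/ext/2.2">
  <Placemark>
    <name>{escape(name)}</name>
    <gx:Track>
{whens}
{coords}
    </gx:Track>
  </Placemark>
</kml>"""

kml = track_kml("vehicle-001", [
    ("2012-05-01T10:00:00Z", 24.1052, 56.9496),
    ("2012-05-01T10:00:05Z", 24.1057, 56.9498),
])
```

Opening such a file in Google Earth plays the trajectory back against the existing satellite and 3D-building base, which is precisely what spares the modeller from building a 3D scene by hand.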
Simulation of Chronic Liver Injury Due to Environmental Chemicals
US EPA Virtual Liver (v-Liver) is a cellular systems model of hepatic tissues to predict the effects of chronic exposure to chemicals. Tens of thousands of chemicals are currently in commerce and hundreds more are introduced every year. Few of these chemicals have been adequate...
A subsurface drip irrigation system for weighing lysimetry
USDA-ARS?s Scientific Manuscript database
Large, precision weighing lysimeters can have accuracies as good as 0.04 mm equivalent depth of water, adequate for hourly and even half-hourly determinations of evapotranspiration (ET) rate from crops. Such data are important for testing and improving simulation models of the complex interactions o...
Menshutkin, V V; Kazanskiĭ, A B; Levchenko, V F
2010-01-01
The rise and development of evolutionary methods in the Saint Petersburg school of biological modelling are traced and analysed. Some pioneering works in the simulation of ecological and evolutionary processes performed in the St. Petersburg school became exemplars for many followers in Russia and abroad. The individual-based approach became the crucial point in the history of the school, as an adequate instrument for constructing models of biological evolution. This approach is natural for simulating the evolution of life-history parameters and adaptive processes in populations and communities. In some cases the simulated evolutionary process was used to solve an inverse problem, i.e., to estimate uncertain life-history parameters of a population. Evolutionary computation is one more application of this approach, used in a great many fields. The problems and prospects of ecological and evolutionary modelling in general are discussed.
Winterhalter, Wade E.
2011-09-01
Global climate change is expected to impact biological populations through a variety of mechanisms, including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere-ocean general circulation models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors varied geographically and across models. These results suggest that region-specific correction factors do not always adequately remove discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.
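The simplest form of regional correction factor evaluated in studies like this is an additive bias correction: the mean difference between observed and simulated historical season lengths, applied to future projections. The numbers and function names below are hypothetical, and the abstract does not specify the functional form of its correction factors.

```python
import numpy as np

def additive_correction(sim_hist, obs_hist):
    """Regional correction factor: mean bias of observed minus
    simulated historical season lengths (days)."""
    return float(np.mean(obs_hist) - np.mean(sim_hist))

# Hypothetical season lengths (days) for one region
obs_hist = np.array([182.0, 185.0, 179.0, 188.0])
sim_hist = np.array([170.0, 174.0, 168.0, 176.0])  # model biased short

factor = additive_correction(sim_hist, obs_hist)           # +11.5 days
corrected_future = np.array([190.0, 195.0]) + factor
```

The study's point is that such a factor only helps if the bias is stable in space and time; checking the corrected values against held-out observations is the "explicit evaluation" the abstract calls for.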
NASA Technical Reports Server (NTRS)
Davis, John H.
1993-01-01
Lunar spherical harmonic gravity coefficients are estimated from simulated observations of a near-circular low altitude polar orbiter disturbed by lunar mascons. Lunar gravity sensing missions using earth-based nearside observations with and without satellite-based far-side observations are simulated and least squares maximum likelihood estimates are developed for spherical harmonic expansion fit models. Simulations and parameter estimations are performed by a modified version of the Smithsonian Astrophysical Observatory's Planetary Ephemeris Program. Two different lunar spacecraft mission phases are simulated to evaluate the estimated fit models. Results for predicting state covariances one orbit ahead are presented along with the state errors resulting from the mismodeled gravity field. The position errors from planning a lunar landing maneuver with a mismodeled gravity field are also presented. These simulations clearly demonstrate the need to include observations of satellite motion over the far side in estimating the lunar gravity field. The simulations also illustrate that the eighth degree and order expansions used in the simulated fits were unable to adequately model lunar mascons.
STS/DBS power subsystem end-to-end stability margin
NASA Astrophysics Data System (ADS)
Devaux, R. N.; Vattimo, R. J.; Peck, S. R.; Baker, W. E.
Attention is given to a full-up end-to-end subsystem stability test which was performed with a flight solar array providing power to a fully operational spacecraft. The solar array simulator is described, and a comparison is made between test results obtained with the simulator and those obtained with the actual array. It is concluded that stability testing with a fully integrated spacecraft is necessary to ensure that all elements have been adequately modeled.
Schummers, Laura; Himes, Katherine P; Bodnar, Lisa M; Hutcheon, Jennifer A
2016-09-21
Compelled by the intuitive appeal of predicting each individual patient's risk of an outcome, there is growing interest in risk prediction models. While the statistical methods used to build prediction models are increasingly well understood, the literature offers little insight for researchers seeking to gauge a priori whether a prediction model is likely to perform well for their particular research question. The objective of this study was to inform the development of new risk prediction models by evaluating model performance under a wide range of predictor characteristics. Data from all births to overweight or obese women in British Columbia, Canada from 2004 to 2012 (n = 75,225) were used to build a risk prediction model for preeclampsia. The data were then augmented with simulated predictors of the outcome with pre-set prevalence values and univariable odds ratios. We built 120 risk prediction models that included known demographic and clinical predictors, and one, three, or five of the simulated variables. Finally, we evaluated standard model performance criteria (discrimination, risk stratification capacity, calibration, and Nagelkerke's r²) for each model. Findings from our models built with simulated predictors demonstrated the predictor characteristics required for a risk prediction model to adequately discriminate cases from non-cases and to adequately classify patients into clinically distinct risk groups. Several predictor characteristics can yield well-performing risk prediction models; however, these characteristics are not typical of predictor-outcome relationships in many population-based or clinical data sets. Novel predictors must be both strongly associated with the outcome and prevalent in the population to be useful for clinical prediction modeling (e.g., one predictor with prevalence ≥20% and odds ratio ≥8, or three predictors with prevalence ≥10% and odds ratios ≥4). Area under the receiver operating characteristic curve values of >0.8 were necessary to achieve reasonable risk stratification capacity. Our findings provide a guide for researchers to estimate the expected performance of a prediction model, based on the characteristics of available predictors, before the model has been built.
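The interplay of prevalence and odds ratio can be illustrated with a small simulation in the spirit of the study's augmented predictors: draw a binary predictor with a set prevalence, generate outcomes from a logistic model with a set univariable odds ratio, and compute the AUC of the predictor alone. The baseline risk, sample size, and the closed-form AUC expression for a binary marker are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_predictor_auc(prevalence, odds_ratio, baseline_risk=0.05, n=200_000):
    """Simulate one binary predictor with the given population prevalence,
    generate outcomes from a logistic model with the given univariable
    odds ratio, and return the AUC of that predictor alone."""
    x = rng.random(n) < prevalence
    log_odds = np.log(baseline_risk / (1 - baseline_risk)) + np.log(odds_ratio) * x
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))
    p1_case, p1_ctrl = x[y].mean(), x[~y].mean()
    # For a binary marker, AUC = P(case > control) + 0.5 * P(tie)
    return (p1_case * (1 - p1_ctrl)
            + 0.5 * (p1_case * p1_ctrl + (1 - p1_case) * (1 - p1_ctrl)))

auc_strong = simulated_predictor_auc(prevalence=0.20, odds_ratio=8.0)
auc_weak = simulated_predictor_auc(prevalence=0.20, odds_ratio=2.0)
```

Even the "strong" predictor (prevalence 20%, OR 8) yields an AUC of only about 0.72 on its own, consistent with the abstract's point that single predictors rarely reach the >0.8 needed for useful risk stratification.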
Molecular-dynamics simulation of mutual diffusion in nonideal liquid mixtures
NASA Astrophysics Data System (ADS)
Rowley, R. L.; Stoker, J. M.; Giles, N. F.
1991-05-01
The mutual-diffusion coefficients, D12, of n-hexane, n-heptane, and n-octane in chloroform were modeled using equilibrium molecular-dynamics (MD) simulations of simple Lennard-Jones (LJ) fluids. Pure-component LJ parameters were obtained by comparing simulations to experimental self-diffusion coefficients. While values of "effective" LJ parameters are not expected to accurately simulate diverse thermophysical properties over a wide range of conditions, it was recently shown that effective parameters obtained from pure self-diffusion coefficients can accurately model mutual diffusion in ideal liquid mixtures. In this work, similar simulations are used to model diffusion in nonideal mixtures. The same combining rules used in the previous study for the cross-interaction parameters were found to be adequate to represent the composition dependence of D12. The effect of alkane chain length on D12 is also correctly predicted by the simulations. A commonly used assumption in empirical correlations of D12, that its kinetic portion is a simple compositional average of the intradiffusion coefficients, is inconsistent with the simulation results. In fact, the value of the kinetic portion of D12 was often outside the range bracketed by the two intradiffusion coefficients for the nonideal system modeled here.
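The abstract does not name the combining rules used; the conventional choice for LJ cross-interaction parameters is the Lorentz-Berthelot rules (arithmetic mean for the size parameter, geometric mean for the well depth), sketched below with illustrative parameter values.

```python
def lorentz_berthelot(sigma1, eps1, sigma2, eps2):
    """Lorentz-Berthelot combining rules for the cross-interaction
    Lennard-Jones parameters: sigma12 is the arithmetic mean of the
    size parameters, eps12 the geometric mean of the well depths."""
    sigma12 = 0.5 * (sigma1 + sigma2)
    eps12 = (eps1 * eps2) ** 0.5
    return sigma12, eps12

# Illustrative "effective" parameters (sigma in nm, epsilon/kB in K),
# not the fitted values from the paper
s12, e12 = lorentz_berthelot(0.59, 399.0, 0.45, 327.0)
```

With effective pure-component parameters fitted to self-diffusion data, these cross parameters are all that is needed to run the binary LJ mixture simulation.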
Ghany, Ahmad; Vassanji, Karim; Kuziemsky, Craig; Keshavjee, Karim
2013-01-01
Electronic prescribing (e-prescribing) is expected to bring many benefits to Canadian healthcare, such as a reduction in errors and adverse drug reactions. As there currently is no functioning e-prescribing system in Canada that is completely electronic, we are unable to evaluate the performance of a live system. An alternative approach is to use simulation modeling for evaluation. We developed two discrete-event simulation models, one of the current handwritten prescribing system and one of a proposed e-prescribing system, to compare the performance of these two systems. We were able to compare the number of processes in each model, workflow efficiency, and the distribution of patients or prescriptions. Although we were able to compare these models to each other, using discrete-event simulation software was challenging. We were limited in the number of variables we could measure. We discovered non-linear processes and feedback loops in both models that could not be adequately represented using discrete-event simulation software. Finally, interactions between entities in both models could not be modeled using this type of software. We have come to the conclusion that a more appropriate approach to modeling both the handwritten and electronic prescribing systems would be to use a complex adaptive systems approach using agent-based modeling or systems-based modeling.
Toolbox for Urban Mobility Simulation: High Resolution Population Dynamics for Global Cities
NASA Astrophysics Data System (ADS)
Bhaduri, B. L.; Lu, W.; Liu, C.; Thakur, G.; Karthik, R.
2015-12-01
In this rapidly urbanizing world, an unprecedented rate of population growth is not only mirrored by increasing demand for energy, food, water, and other natural resources, but also has detrimental impacts on environmental and human security. Transportation simulations are frequently used for mobility assessment in urban planning, traffic operation, and emergency management. Previous research, ranging from purely analytical techniques to simulations capturing behavior, has investigated questions and scenarios regarding the relationships among energy, emissions, air quality, and transportation. The primary limitations of past attempts have been the availability of input data, useful "energy and behavior focused" models, validation data, and the computational capability needed to adequately understand the interdependencies of our transportation system. With the increasing availability and quality of traditional and crowdsourced data, we have utilized the OpenStreetMap road network and integrated high-resolution population data with traffic simulation to create a Toolbox for Urban Mobility Simulations (TUMS) at global scale. TUMS consists of three major components: data processing, traffic simulation models, and Internet-based visualizations. It integrates OpenStreetMap, LandScanTM population data, and other open data (Census Transportation Planning Products, National Household Travel Survey, etc.) to generate both normal traffic operation and emergency evacuation scenarios. TUMS integrates TRANSIMS and MITSIM as traffic simulation engines, which are open-source and widely accepted for scalable traffic simulations. A consistent data and simulation platform allows quick adaptation to various geographic areas, as has been demonstrated for multiple cities across the world.
We are combining the strengths of geospatial data sciences, high performance simulations, transportation planning, and emissions, vehicle and energy technology development to design and develop a simulation framework to assist decision makers at all levels - local, state, regional, and federal. Using Cleveland, Tennessee as an example, in this presentation, we illustrate how emerging cities could easily assess future land use scenario driven impacts on energy and environment utilizing such a capability.
Density and white light brightness in looplike coronal mass ejections - Temporal evolution
NASA Technical Reports Server (NTRS)
Steinolfson, R. S.; Hundhausen, A. J.
1988-01-01
Three ambient coronal models suitable for studies of time-dependent phenomena were used to investigate the propagation of coronal mass ejections initiated in each atmosphere by an identical energy source. These models included a static corona with a dipole magnetic field, developed by Dryer et al. (1979); a steady polytropic corona with an equatorial coronal streamer, developed by Steinolfson et al. (1982); and Steinolfson's (1988) model of a heated corona with an equatorial coronal streamer. The results indicated that the first model does not adequately represent the general characteristics of observed looplike mass ejections, and the second model simulated only some of the observed features. Only the third model, which included a heating term and a streamer, was found to yield an accurate simulation of the mass ejection observations.
Defining metrics of the Quasi-Biennial Oscillation in global climate models
NASA Astrophysics Data System (ADS)
Schenzinger, Verena; Osprey, Scott; Gray, Lesley; Butchart, Neal
2017-06-01
As the dominant mode of variability in the tropical stratosphere, the Quasi-Biennial Oscillation (QBO) has been subject to extensive research. Though there is a well-developed theory of this phenomenon being forced by wave-mean flow interaction, simulating the QBO adequately in global climate models still remains difficult. This paper presents a set of metrics to characterize the morphology of the QBO using a number of different reanalysis datasets and the FU Berlin radiosonde observation dataset. The same metrics are then calculated from Coupled Model Intercomparison Project 5 and Chemistry-Climate Model Validation Activity 2 simulations which included a representation of QBO-like behaviour to evaluate which aspects of the QBO are well captured by the models and which ones remain a challenge for future model development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fan; Parker, Jack C.; Watson, David B
This study investigates uranium and technetium sorption onto aluminum and iron hydroxides during titration of acidic groundwater. The contaminated groundwater exhibits oxic conditions with high concentrations of NO₃⁻, SO₄²⁻, U, Tc, and various metal cations. More than 90% of U and Tc was removed from the aqueous phase as Al and Fe precipitated above pH 5.5, but was partially resolubilized at higher pH values. An equilibrium hydrolysis and precipitation reaction model adequately described variations in aqueous concentrations of metal cations. An anion exchange reaction model was incorporated to simulate sulfate, U and Tc sorption onto variably charged (pH-dependent) Al and Fe hydroxides. Modeling results indicate that competitive sorption/desorption on mixed mineral phases needs to be considered to adequately predict U and Tc mobility. The model could be useful for future studies of the speciation of U, Tc and co-existing ions during pre- and post-groundwater treatment practices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruess, K.; Oldenburg, C.; Moridis, G.
1997-12-31
This paper summarizes recent advances in methods for simulating water and tracer injection, and presents illustrative applications to liquid- and vapor-dominated geothermal reservoirs. High-resolution simulations of water injection into heterogeneous, vertical fractures in superheated vapor zones were performed. Injected water was found to move in dendritic patterns, and to experience stronger lateral flow effects than predicted from homogeneous medium models. Higher-order differencing methods were applied to modeling water and tracer injection into liquid-dominated systems. Conventional upstream weighting techniques were shown to be adequate for predicting the migration of thermal fronts, while higher-order methods give far better accuracy for tracer transport. A new fluid property module for the TOUGH2 simulator is described which allows a more accurate description of geofluids, and includes mineral dissolution and precipitation effects with associated porosity and permeability change. Comparisons between numerical simulation predictions and data for laboratory and field injection experiments are summarized. Enhanced simulation capabilities include a new linear solver package for TOUGH2, and inverse modeling techniques for automatic history matching and optimization.
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
1997-01-01
The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. 
One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
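Reducing a large CFD-based linear model by several orders of magnitude while preserving its dynamic response is commonly done by balanced truncation; a self-contained sketch on a small illustrative state-space system follows. The paper does not specify which reduction technique was used, and the 4-state system below is purely illustrative.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable, minimal LTI system (A, B, C) to order r by
    balanced truncation: balance the controllability and observability
    Gramians, keep the r states with the largest Hankel singular values."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # A Wc + Wc A' = -B B'
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # A' Wo + Wo A = -C' C
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                      # s = Hankel singular values
    T = Lc @ Vt.T / np.sqrt(s)                     # balancing transformation
    Tinv = (U / np.sqrt(s)).T @ Lo.T
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:r, :r], Bb[:r], Cb[:, :r], s

# Illustrative 4-state stable system reduced to order 2
A = np.array([[-1.0, 0.5, 0.0, 0.0],
              [0.0, -2.0, 0.3, 0.0],
              [0.0, 0.0, -4.0, 0.2],
              [0.0, 0.0, 0.0, -8.0]])
B = np.array([[1.0], [0.5], [0.25], [0.1]])
C = np.array([[1.0, 0.4, 0.2, 0.05]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
```

The truncated system is guaranteed stable, and the frequency-response error is bounded by twice the sum of the discarded Hankel singular values, which is what makes "several orders of magnitude without significantly changing the dynamic response" achievable when those values decay quickly.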
Development of the CCP-200 mathematical model for Syzran CHPP using the Thermolib software package
NASA Astrophysics Data System (ADS)
Usov, S. V.; Kudinov, A. A.
2016-04-01
A simplified cycle diagram of the CCP-200 power generating unit of Syzran CHPP is presented, comprising two PG6111FA gas turbines with generators, two KUP-110/15-8.0/0.7-540/200 heat-recovery boilers, and one Siemens SST-600 steam turbine (one-cylinder, with two variable heat extraction units of 60/75 MW in heat-extraction and condensing modes, respectively) with S-GEN5-100 generators. Results of experimental guarantee tests of the CCP-200 steam-gas unit are given, along with a brief description of the Thermolib application for the MATLAB Simulink software package and the basic equations Thermolib uses for modeling thermo-technical processes. Mathematical models of the gas-turbine plant, heat-recovery steam generator, steam turbine, and the integrated CCP-200 power generating unit of Syzran CHPP were developed with MATLAB Simulink and Thermolib. Simulations at different ambient temperatures were used to obtain the characteristics of the developed mathematical model. Selected characteristics of the CCP-200 simulation model (gas temperature behind the gas turbine, gas turbine and combined cycle plant capacity, high- and low-pressure steam consumption, and feed water consumption for the high- and low-pressure economizers) are compared graphically with the actual characteristics of the steam-gas unit obtained in experimental (field) guarantee tests at different ambient temperatures. The chosen degree of complexity and the characteristics of the CCP-200 simulation model developed in Thermolib correspond adequately to the actual characteristics measured in the guarantee tests, which allows the model to be considered adequate and acceptable for further work.
Robot, computer problem solving system
NASA Technical Reports Server (NTRS)
Becker, J. D.
1972-01-01
The development of a computer problem solving system is reported that considers physical problems faced by an artificial robot moving around in a complex environment. Fundamental interaction constraints with a real environment are simulated for the robot by visual scan and creation of an internal environmental model. The programming system used in constructing the problem solving system for the simulated robot and its simulated world environment is outlined together with the task that the system is capable of performing. A very general framework for understanding the relationship between an observed behavior and an adequate description of that behavior is included.
Kawamura, Kazuya; Kobayashi, Yo; Fujie, Masakatsu G
2007-01-01
Medical technology has advanced with the introduction of robot technology, making previously very difficult medical treatments far more feasible. However, operating a surgical robot demands substantial training and continual practice on the part of the surgeon, because it requires difficult techniques that differ from those of traditional surgical procedures. We focused on a simulation technology based on the physical characteristics of organs. In this research, we proposed the development of a surgical simulation, based on a physical model, for intra-operative navigation by the surgeon. In this paper, we describe the design of our system, in particular our organ deformation calculator. The proposed simulation system consists of an organ deformation calculator and virtual slave manipulators. We obtained adequate experimental results for a target node near the point of interaction, because this point ensures better accuracy for our simulation model. The next research step will focus on a surgical environment in which internal organ models are integrated into a slave simulation system.
NASA Astrophysics Data System (ADS)
Schafhirt, S.; Kaufer, D.; Cheng, P. W.
2014-12-01
In recent years many advanced load simulation tools, allowing aero-servo-hydro-elastic analyses of an entire offshore wind turbine, have been developed and verified. Nowadays, even an offshore wind turbine with a complex support structure such as a jacket can be analysed. However, the computational effort rises significantly with an increasing level of detail. This is especially true for offshore wind turbines with lattice support structures, since those models naturally have a higher number of nodes and elements than simpler monopile structures. During the design process, multiple load simulations are needed to obtain an optimal solution. For pre-design tasks it is crucial to apply load simulations that keep simulation quality and computational effort in balance. The paper introduces a reference wind turbine model consisting of the REpower 5M wind turbine and a jacket support structure with a high level of detail. In total, twelve variations of this reference model are derived and presented. The main focus is on simplifying the models of the support structure and the foundation. The reference model and the simplified models are simulated with the coupled simulation tool Flex5-Poseidon and analysed with regard to frequencies, fatigue loads, and ultimate loads. A model has been found that achieves an adequate increase in simulation speed while keeping the results within an acceptable range compared to the reference results.
NASA Astrophysics Data System (ADS)
Niazi, A.; Bentley, L. R.; Hayashi, M.
2016-12-01
Geostatistical simulations are used to construct heterogeneous aquifer models. Optimally, such simulations should be conditioned with both lithologic and hydraulic data. We introduce an approach to conditioning lithologic geostatistical simulations of a paleo-fluvial bedrock aquifer, consisting of high-permeability sandstone channels embedded in low-permeability mudstone, using hydraulic data. The hydraulic data consist of two-hour single-well pumping tests extracted from the public water well database for a 250 km² watershed in Alberta, Canada. First, lithologic models of the entire watershed are simulated and conditioned with hard lithological data using transition-probability Markov chain geostatistics (TPROGS). Then, a segment of the simulation around a pumping well is used to populate a flow model (FEFLOW) with either sand or mudstone. The values of the hydraulic conductivity and specific storage of sand and mudstone are then adjusted to minimize the difference between simulated and actual pumping test data using the parameter estimation program PEST. If the simulated pumping test data do not adequately match the measured data, the lithologic model is updated by locally deforming the lithology distribution using the probability perturbation method, and the model parameters are again updated with PEST. This procedure is repeated until the simulated and measured data agree within a pre-determined tolerance, and then repeated for each well that has pumping test data. The method creates a local groundwater model that honors both the lithologic model and the pumping test data, and provides estimates of hydraulic conductivity and specific storage. Eventually, the simulations will be integrated into a watershed-scale groundwater model.
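The transition-probability Markov chain idea behind TPROGS can be illustrated in one dimension: facies along a column follow a Markov chain whose stay probability sets the mean bed thickness. The symmetric two-facies chain below is a deliberately simplified stand-in for TPROGS's full transition-rate matrices, with all parameter values illustrative.

```python
import numpy as np

def markov_facies(n_cells, p_stay, seed=42):
    """1-D two-facies Markov chain (0 = mudstone, 1 = sandstone):
    p_stay is the probability that the next cell keeps the current
    facies, so the mean run length is 1 / (1 - p_stay) cells."""
    rng = np.random.default_rng(seed)
    column = np.empty(n_cells, dtype=int)
    column[0] = 0                      # start in mudstone (a "hard datum")
    for k in range(1, n_cells):
        if rng.random() < p_stay:
            column[k] = column[k - 1]
        else:
            column[k] = 1 - column[k - 1]
    return column

# 2000 cells with a mean bed thickness of ~10 cells
column = markov_facies(2000, p_stay=0.9)
```

TPROGS generalizes this to three dimensions, asymmetric transition rates, and conditioning on borehole data; the hydraulic conditioning described above then perturbs such realizations until the pumping-test response matches.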
Using simulators to teach pediatric airway procedures in an international setting.
Schwartz, Marissa A; Kavanagh, Katherine R; Frampton, Steven J; Bruce, Iain A; Valdez, Tulio A
2018-01-01
There has been a growing shift towards endoscopic management of laryngeal procedures in pediatric otolaryngology. There still appears to be a shortage of pediatric otolaryngology programs and children's hospitals worldwide where physicians can learn and practice these skills. Laryngeal simulation models have the potential to be part of the educational training of physicians who lack exposure to relatively uncommon pediatric otolaryngologic pathology. The objective of this study was to assess the utility of pediatric laryngeal models for teaching laryngeal pathology to physicians at an international meeting. Pediatric laryngeal models were assessed by participants at an international pediatric otolaryngology meeting. Participants provided demographic information and their previous experience with pediatric airways. Participants then performed simulated surgery on the models and evaluated them using both a previously validated Tissue Likeness Scale and a pre- to post-simulation confidence scale. Participants reported significant subjective improvement in confidence level after use of the simulation models (p < 0.05), and reported realistic representations of human anatomy and pathology. The models' tissue mechanics were adequate for practicing operative technique, including the ability to incise, suture, and suspend the models. The pediatric laryngeal models demonstrate high-quality anatomy that is easily manipulated with surgical instruments. These models allow both trainees and surgeons to practice time-sensitive airway surgeries in a safe and controlled environment. Copyright © 2017 Elsevier B.V. All rights reserved.
Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame
NASA Astrophysics Data System (ADS)
Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank
2017-10-01
This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six-degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials, and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, additional test data are required to draw general conclusions.
Discrete-time pilot model. [human dynamics and digital simulation
NASA Technical Reports Server (NTRS)
Cavalli, D.
1978-01-01
Pilot behavior is considered as a discrete-time process in which decision making has a sequential nature. This model differs both from the quasilinear model, which follows from classical control theory, and from the optimal control model, which considers the human operator as a Kalman estimator-predictor. An additional consideration is that the pilot's objective may not be adequately formulated as a quadratic cost functional to be minimized, but rather as a fuzzier measure of the closeness with which the aircraft follows a reference trajectory. All parameters of the model, implemented in a digital program simulating the pilot's behavior, were successfully compared in terms of standard deviation and performance with those of professional pilots in an IFR configuration. The first practical application of the model was a study of its performance degradation as the aircraft model's static margin decreases.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-03
... WIPP PA process culminates in a series of computer simulations that model the physical attributes of... and Processes LWA Land Withdrawal Act MSHA Mine Safety and Health Administration NMED New Mexico... Agency's technical review process was to determine whether, with the new design, the WIPP adequately...
NASA Astrophysics Data System (ADS)
Lee, Jaeeun; Park, Siwook; Kim, Hwangsun; Park, Seong-Jun; Lee, Keunho; Kim, Mi-Young; Madakashira, Phaniraj P.; Han, Heung Nam
2018-03-01
Fe-Al-Mn-C alloy systems are low-density austenite-based steels that show excellent mechanical properties. After aging such steels at an adequate temperature for an adequate time, nano-scale precipitates such as κ-carbide form, which have profound effects on the mechanical properties. Therefore, it is important to predict the amount and size of the generated κ-carbide precipitates in order to control the mechanical properties of low-density steels. In this study, the microstructure and mechanical properties of aged low-density austenitic steel were characterized. Thermo-kinetic simulations of the aging process were used to predict the size and phase fraction of κ-carbide after different aging periods, and these results were validated by comparison with experimental data derived from dark-field transmission electron microscopy images. Based on these results, models for precipitation strengthening based on different mechanisms were assessed. The measured increase in the strength of aged specimens was compared with that calculated from the models to determine the exact precipitation strengthening mechanism.
López, Iván; Borzacconi, Liliana
2010-10-01
A model based on the work of Angelidaki et al. (1993) was applied to simulate the anaerobic biodegradation of ruminal contents. In this study, two fractions of solids with different biodegradation rates were considered. First-order kinetics were used for the easily biodegradable fraction, and a kinetic expression that is a function of the extracellular enzyme concentration was used for the slowly biodegradable fraction. Batch experiments were performed to obtain an accumulated methane curve that was then used to obtain the model parameters. For this determination, a methodology derived from the "multiple-shooting" method was successfully used. Monte Carlo simulations allowed a confidence range to be obtained for each parameter. Simulations of a continuous reactor were performed using the optimal set of model parameters. The final steady states were determined as functions of the operational conditions (solids load and residence time). The simulations showed that methane flow peaked at 0.5-0.8 Nm³ per day per m³ of reactor at a residence time of 10-20 days. Simulations allow the adequate selection of the operating conditions of a continuous reactor.
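The two-fraction kinetics described above can be illustrated with the easily biodegradable fraction alone, for which cumulative methane follows B(t) = B0*(1 - exp(-k*t)). The sketch below fits this first-order curve to a synthetic batch dataset; the numbers, grid ranges, and the plain grid-search fit (standing in for the multiple-shooting and Monte Carlo machinery of the paper) are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def first_order(t, B0, k):
    """Cumulative methane from a first-order model: B(t) = B0*(1 - exp(-k*t))."""
    return B0 * (1.0 - np.exp(-k * t))

# Synthetic accumulated-methane curve (made-up units: mL CH4 vs. days)
t = np.linspace(0.0, 30.0, 16)
rng = np.random.default_rng(0)
B_obs = first_order(t, 250.0, 0.15) + rng.normal(0.0, 2.0, t.size)

# Least-squares fit by exhaustive grid search over (B0, k)
B0_grid = np.linspace(150.0, 350.0, 201)
k_grid = np.linspace(0.05, 0.30, 251)
best = min((np.sum((first_order(t, B0, k) - B_obs) ** 2), B0, k)
           for B0 in B0_grid for k in k_grid)
_, B0_hat, k_hat = best
print(B0_hat, k_hat)  # should recover values near 250 and 0.15
```

A real calibration would replace the grid search with a proper optimizer and repeat the fit on perturbed data to obtain the per-parameter confidence ranges mentioned in the abstract.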
Guerrin, F; Dumas, J
2001-02-01
This paper describes a qualitative model of the functioning of salmon redds (spawning areas of salmon) and of its impact on the mortality rates of early life stages. For this, we use Qsim, a qualitative simulator, which proved adequate for representing the available qualitative knowledge of freshwater ecology experts (see Part I of this paper). Since the number of relevant variables was relatively large, it was necessary to decompose the model into two parts, corresponding to processes occurring at separate time scales. A qualitative clock subjects the simulation of salmon developmental stages to the calculation of accumulated daily temperatures (degree-days), according to the clock ticks and a water temperature regime set by the user; this introduces a form of real-time dating and duration into a purely qualitative model. Simulating both sub-models, either separately or by means of alternate transitions, generates the evolution of the variables of interest, such as mortality rates driven by two factors (flow of oxygenated water and plugging of gravel interstices near the bed surface), under various scenarios.
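The degree-day clock that drives the developmental-stage sub-model can be sketched in a few lines: accumulate daily temperature above a base threshold, then compare the total against per-stage cumulative thresholds. The stage names and threshold values below are made-up placeholders, not figures from the paper.

```python
def accumulated_degree_days(daily_temps, base=0.0):
    """Sum of daily mean temperatures above a base threshold (degree-days)."""
    return sum(max(t - base, 0.0) for t in daily_temps)

def stage_reached(daily_temps, thresholds, base=0.0):
    """Return the latest developmental stage whose cumulative degree-day
    threshold has been met; thresholds is a list of (name, degree_days)
    pairs in increasing order."""
    total = accumulated_degree_days(daily_temps, base)
    stage = None
    for name, needed in thresholds:
        if total >= needed:
            stage = name
    return stage

# Hypothetical thresholds (degree-days), for illustration only
thresholds = [("eyed egg", 220.0), ("hatching", 440.0), ("emergence", 900.0)]
temps = [8.0] * 60  # 60 days at a constant 8 degrees C -> 480 degree-days
print(stage_reached(temps, thresholds))  # -> hatching
```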
A formal language for the specification and verification of synchronous and asynchronous circuits
NASA Technical Reports Server (NTRS)
Russinoff, David M.
1993-01-01
A formal hardware description language for the intended application of verifiable asynchronous communication is described. The language is developed within the logical framework of the Nqthm system of Boyer and Moore and is based on the event-driven behavioral model of VHDL, including the basic VHDL signal propagation mechanisms, the notion of simulation deltas, and the VHDL simulation cycle. A core subset of the language corresponds closely with a subset of VHDL and is adequate for the realistic gate-level modeling of both combinational and sequential circuits. Various extensions to this subset provide means for convenient expression of behavioral circuit specifications.
Energy decay in a granular gas collapse
NASA Astrophysics Data System (ADS)
Almazán, Lidia; Serero, Dan; Salueña, Clara; Pöschel, Thorsten
2017-01-01
An inelastic hard ball bouncing repeatedly off the ground comes to rest in finite time after performing an infinite number of collisions. Similarly, a granular gas under the influence of external gravity condenses at the bottom of its confinement due to inelastic collisions. By means of hydrodynamic simulations, we find that the condensation process of a granular gas exhibits dynamics similar to those of the bouncing ball. Our result agrees with both experiments and particle simulations, but disagrees with an earlier, simplified hydrodynamic description. Analyzing the result in detail, we find that adequate modeling of the pressure plays a key role in continuum modeling of granular matter.
Simulation loop between CAD systems, GEANT-4 and GeoModel: implementation and results
NASA Astrophysics Data System (ADS)
Sharmazanashvili, A.; Tsutskiridze, Niko
2016-09-01
Comparative analysis of simulated and as-built detector geometry descriptions is an important field of study for data-vs-Monte-Carlo discrepancies. Shape consistency and level of detail are less important, whereas the adequacy of the volumes and weights of detector components is essential for tracking. There are two main causes of faults in simulation geometry descriptions: (1) differences between the simulated and as-built geometry descriptions; and (2) internal inaccuracies introduced by the geometry transformations of the simulation software infrastructure itself. The Georgian engineering team developed a hub based on the CATIA platform, together with several tools for reading into CATIA the different descriptions used by simulation packages (XML->CATIA, VP1->CATIA, GeoModel->CATIA, Geant4->CATIA). As a result, it becomes possible to compare the different descriptions with each other using the full power of CATIA and to investigate both classes of geometry-description faults. The paper presents the results of case studies of the ATLAS coils and end-cap toroid structures.
Nonequilibrium Nonideal Nanoplasma Generated by a Fast Single Ion in Condensed Matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faenov, A. Ya.; Kansai Photon Science Institute, Japan Atomic Energy Agency; Lankin, A. V.
A plasma model of the relaxation of a medium in heavy-ion tracks in condensed matter is proposed. The model is based on three assumptions: a Maxwell distribution of the plasma electrons, localization of the plasma inside the track nanochannel, and constant values of the plasma electron density and temperature during the X-ray irradiation. It is demonstrated that the plasma relaxation model adequately describes the X-ray spectra observed upon interaction of a fast ion with a condensed target. The assumptions of the plasma relaxation model are validated by molecular dynamics modeling and simulation.
Moving base simulation evaluation of translational rate command systems for STOVL aircraft in hover
NASA Technical Reports Server (NTRS)
Franklin, James A.; Stortz, Michael W.
1996-01-01
Using a generalized simulation model, a moving-base simulation of a lift-fan short takeoff/vertical landing fighter aircraft has been conducted on the Vertical Motion Simulator at Ames Research Center. Objectives of the experiment were to determine the influence of system bandwidth and phase delay on flying qualities for translational rate command and vertical velocity command systems. Assessments were made for precision hover control and for landings aboard an LPH type amphibious assault ship in the presence of winds and rough seas. Results obtained define the boundaries between satisfactory and adequate flying qualities for these design features for longitudinal and lateral translational rate command and for vertical velocity command.
Hydrological and water quality processes simulation by the integrated MOHID model
NASA Astrophysics Data System (ADS)
Epelde, Ane; Antiguedad, Iñaki; Brito, David; Eduardo, Jauch; Neves, Ramiro; Sauvage, Sabine; Sánchez-Pérez, José Miguel
2016-04-01
Different modelling approaches have been used in recent decades to study water quality degradation caused by non-point source pollution. In this study, the MOHID fully distributed, physics-based model has been employed to simulate hydrological processes and nitrogen dynamics in a nitrate-vulnerable zone: the Alegria River watershed (Basque Country, Northern Spain). The results indicate that the MOHID code is suitable for simulating hydrological processes at the watershed scale, as the model performs satisfactorily at simulating discharge (NSE: 0.74 and 0.76 during the calibration and validation periods, respectively). The agronomical component of the code allowed the simulation of agricultural practices, which led to adequate crop yield simulation. Furthermore, the nitrogen exportation also shows satisfactory performance (NSE: 0.64 and 0.69 during the calibration and validation periods, respectively). While the lack of field measurements does not allow an in-depth evaluation of the nutrient cycling processes, the model simulates annual denitrification within the general ranges established for agricultural watersheds (in this study, 9 kg N ha-1 year-1). In addition, the model coherently simulated the spatial distribution of the denitrification process, which is directly linked to the simulated hydrological conditions: the highest rates were located near the discharge zone of the aquifer and where the aquifer thickness is low. These results demonstrate the strength of this model for simulating watershed-scale hydrological processes as well as crop production and the water quality degradation derived from agricultural activity (considering both nutrient exportation and nutrient cycling processes).
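The NSE (Nash-Sutcliffe efficiency) scores quoted above compare model error against the variance of the observations: NSE = 1 is a perfect fit, and NSE <= 0 means the model predicts no better than the observed mean. A minimal sketch, with made-up discharge values:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance of observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    sse = np.sum((observed - simulated) ** 2)
    var = np.sum((observed - observed.mean()) ** 2)
    return float(1.0 - sse / var)

# Illustrative (invented) daily discharge values, m3/s
obs = [1.2, 1.5, 2.0, 3.1, 2.4, 1.8]
sim = [1.1, 1.6, 2.2, 2.9, 2.5, 1.7]
print(round(nash_sutcliffe(obs, sim), 3))  # -> 0.948
```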
Large-Eddy Simulation of Waked Turbines in a Scaled Wind Farm Facility
NASA Astrophysics Data System (ADS)
Wang, J.; McLean, D.; Campagnolo, F.; Yu, T.; Bottasso, C. L.
2017-05-01
The aim of this paper is to present the numerical simulation of waked scaled wind turbines operating in a boundary layer wind tunnel. The simulation uses a LES-lifting-line numerical model. An immersed boundary method in conjunction with an adequate wall model is used to represent the effects of both the wind turbine nacelle and tower, which are shown to have a considerable effect on the wake behavior. Multi-airfoil data calibrated at different Reynolds numbers are used to account for the lift and drag characteristics at the low and varying Reynolds conditions encountered in the experiments. The present study focuses on low turbulence inflow conditions and inflow non-uniformity due to wind tunnel characteristics, while higher turbulence conditions are considered in a separate study. The numerical model is validated by using experimental data obtained during test campaigns conducted with the scaled wind farm facility. The simulation and experimental results are compared in terms of power capture, rotor thrust, downstream velocity profiles and turbulence intensity.
A sophisticated simulation for the fracture behavior of concrete material using XFEM
NASA Astrophysics Data System (ADS)
Zhai, Changhai; Wang, Xiaomin; Kong, Jingchang; Li, Shuang; Xie, Lili
2017-10-01
The development of a powerful numerical model to simulate the fracture behavior of concrete material has long been one of the dominant research areas in earthquake engineering. A reliable model should be able to adequately represent the discontinuous characteristics of cracks and simulate various failure behaviors under complicated loading conditions. In this paper, a numerical formulation, which incorporates a sophisticated rigid-plastic interface constitutive model coupling cohesion softening, contact, friction and shear dilatation into the XFEM, is proposed to describe various crack behaviors of concrete material. An effective numerical integration scheme for accurately assembling the contribution to the weak form on both sides of the discontinuity is introduced. The effectiveness of the proposed method has been assessed by simulating several well-known experimental tests. It is concluded that the numerical method can successfully capture the crack paths and accurately predict the fracture behavior of concrete structures. The influence of mode-II parameters on the mixed-mode fracture behavior is further investigated to better determine these parameters.
Recent Progress Towards Predicting Aircraft Ground Handling Performance
NASA Technical Reports Server (NTRS)
Yager, T. J.; White, E. J.
1981-01-01
The significant progress which has been achieved in the development of aircraft ground handling simulation capability is reviewed, and additional improvements in software modeling are identified. The problem of providing the necessary simulator input data for adequate modeling of aircraft tire/runway friction behavior is discussed, and efforts to improve this complex model, and hence simulator fidelity, are described. Aircraft braking performance data obtained on several wet runway surfaces are compared to ground vehicle friction measurements and, by use of empirically derived methods, good agreement between actual and estimated aircraft braking friction from ground vehicle data is shown. A relatively new friction measuring device, the friction tester, showed great promise in providing data applicable to aircraft friction performance. Additional research efforts to improve methods of predicting tire friction performance are discussed, including use of an instrumented tire test vehicle to expand the tire friction data bank and a study of surface texture measurement techniques.
Proceedings of the Augmented VIsual Display (AVID) Research Workshop
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)
1993-01-01
The papers, abstracts, and presentations were presented at a three day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.
Aurally-adequate time-frequency analysis for scattered sound in auditoria
NASA Astrophysics Data System (ADS)
Norris, Molly K.; Xiang, Ning; Kleiner, Mendel
2005-04-01
The goal of this work was to apply an aurally-adequate time-frequency analysis technique to the analysis of sound scattering effects in auditoria. Time-frequency representations were developed as a motivated effort that takes into account binaural hearing, with a specific implementation of the interaural cross-correlation process. A model of the human auditory system was implemented in the MATLAB platform based on two previous models [A. Härmä and K. Palomäki, HUTear, Espoo, Finland; and M. A. Akeroyd, A Binaural Cross-correlogram Toolbox for MATLAB (2001), University of Sussex, Brighton]. These stages include proper frequency selectivity, the conversion of the mechanical motion of the basilar membrane to neural impulses, and binaural hearing effects. The model was then used in the analysis of room impulse responses with varying scattering characteristics. This paper discusses the analysis results using simulated and measured room impulse responses. [Work supported by the Frank H. and Eva B. Buck Foundation.]
About one counterexample of applying method of splitting in modeling of plating processes
NASA Astrophysics Data System (ADS)
Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Korobova, I. L.
2018-05-01
The paper presents the main factors that affect the uniformity of the thickness distribution of plating on the surface of a product. The experimental search for the optimal values of these factors is expensive and time-consuming, so the problem of adequately simulating coating processes is highly relevant. Finite-difference approximations using seven-point and five-point templates, in combination with the splitting method, are considered as solution methods for the equations of the model. To study the correctness of the solution of the model equations by these methods, experiments were conducted on plating with a flat anode and cathode whose relative position in the bath was not changed. The studies showed that the solution using the splitting method was up to 1.5 times faster, but it did not give adequate results due to the geometric features of the task under the given boundary conditions.
Different modelling approaches to evaluate nitrogen transport and turnover at the watershed scale
NASA Astrophysics Data System (ADS)
Epelde, Ane Miren; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Garneau, Cyril; Sauvage, Sabine; Sánchez-Pérez, José Miguel
2016-08-01
This study presents the simulation of hydrological processes and nutrient transport and turnover using two integrated numerical models: the Soil and Water Assessment Tool (SWAT) (Arnold et al., 1998), an empirical, semi-distributed numerical model; and Modelo Hidrodinâmico (MOHID) (Neves, 1985), a physics-based, fully distributed numerical model. This work shows that both models satisfactorily reproduce water and nitrate exportation at the watershed scale on an annual and daily basis, with MOHID providing slightly better results. At the watershed scale, SWAT and MOHID simulated the denitrification amount similarly and satisfactorily. However, as the MOHID numerical model was the only one able to adequately reproduce the spatial variation of the soil hydrological conditions and the water table level fluctuation, it proved to be the only model capable of reproducing the spatial variation of nutrient cycling processes that depend on the soil hydrological conditions, such as denitrification. This evidences the strength of fully distributed, physics-based models for simulating the spatial variability of nutrient cycling processes that depend on the hydrological conditions of the soils.
Shape-based approach for the estimation of individual facial mimics in craniofacial surgery planning
NASA Astrophysics Data System (ADS)
Gladilin, Evgeny; Zachow, Stefan; Deuflhard, Peter; Hege, Hans-Christian
2002-05-01
Besides the static soft tissue prediction, the estimation of basic facial emotion expressions is another important criterion for the evaluation of craniofacial surgery planning. For a realistic simulation of facial mimics, an adequate biomechanical model of soft tissue including the mimic musculature is needed. In this work, we present an approach for the modeling of arbitrarily shaped muscles and the estimation of basic individual facial mimics, which is based on the geometrical model derived from the individual tomographic data and the general finite element modeling of soft tissue biomechanics.
Design of Bi-Directional Hydrofoils for Tidal Current Turbines
NASA Astrophysics Data System (ADS)
Nedyalkov, Ivaylo; Wosnik, Martin
2015-11-01
Tidal current turbines operate in flows that reverse direction. Bi-directional hydrofoils have rotational symmetry and allow such turbines to operate without the need for pitch or yaw control, decreasing initial and maintenance costs. A numerical test-bed was developed to automate the simulation of hydrofoils in OpenFOAM and was used to simulate the flow over eleven classes of hydrofoils comprising a total of 700 foil shapes at different angles of attack. For promising candidate foil shapes, physical models of 75 mm chord and 150 mm span were fabricated and tested in the University of New Hampshire High-Speed Cavitation Tunnel (HiCaT). The experimental results were compared to the simulations for model validation. The numerical test-bed successfully generated simulations for a wide range of foil shapes, although, as expected, the k-ω SST turbulence model employed here was not adequate for some of the foils and for large angles of attack at which separation occurred. An optimization algorithm is currently being coupled with the numerical test-bed, and additional turbulence models will be implemented in the future.
SPH-based numerical simulations of flow slides in municipal solid waste landfills.
Huang, Yu; Dai, Zili; Zhang, Weijie; Huang, Maosong
2013-03-01
Most municipal solid waste (MSW) is disposed of in landfills. Over the past few decades, catastrophic flow slides have occurred in MSW landfills around the world, causing substantial economic damage and occasionally resulting in human victims. It is therefore important to predict the run-out, velocity and depth of such slides in order to provide adequate mitigation and protection measures. To overcome the limitations of traditional numerical methods for modelling flow slides, a mesh-free particle method called smoothed particle hydrodynamics (SPH) is introduced in this paper. The Navier-Stokes equations were adopted as the governing equations, and a Bingham model was adopted to relate the material stress to the rate of deformation of the particle motion. The accuracy of the model is assessed using a series of verifications, and flow slides that occurred in landfills located in Sarajevo and Bandung were then simulated to extend its applications. The simulated results match the field data well and highlight the capability of the proposed SPH modelling method to simulate such complex phenomena as flow slides in MSW landfills.
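The Bingham closure can be sketched as a regularized stress/shear-rate relation of the kind commonly used in SPH flow-slide codes. A Papanastasiou-style exponential regularization is assumed below purely for illustration; the abstract does not state which form, if any, the authors use.

```python
import numpy as np

def bingham_stress(gamma_dot, tau_y, mu_p, m=1000.0):
    """Regularized Bingham model: shear stress tau as a function of
    shear rate gamma_dot, yield stress tau_y and plastic viscosity mu_p.

    tau = mu_p * gamma_dot + tau_y * (1 - exp(-m * gamma_dot))

    The exponential term removes the singularity of the ideal Bingham
    law at zero shear rate; for large gamma_dot it recovers the classic
    tau = tau_y + mu_p * gamma_dot.
    """
    return mu_p * gamma_dot + tau_y * (1.0 - np.exp(-m * gamma_dot))

# Example with hypothetical waste-material parameters (Pa, Pa*s, 1/s)
print(float(bingham_stress(10.0, tau_y=100.0, mu_p=0.5)))  # -> 105.0
print(float(bingham_stress(0.0, tau_y=100.0, mu_p=0.5)))   # -> 0.0
```

In an SPH implementation this relation is typically applied per particle pair to turn the local strain-rate estimate into a viscous force.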
Opportunities and Challenges in Supply-Side Simulation: Physician-Based Models
Gresenz, Carole Roan; Auerbach, David I; Duarte, Fabian
2013-01-01
Objective: To provide a conceptual framework and to assess the availability of empirical data for supply-side microsimulation modeling in the context of health care. Data Sources: Multiple secondary data sources, including the American Community Survey, Health Tracking Physician Survey, and SK&A physician database. Study Design: We apply our conceptual framework to one entity in the health care market, physicians, and identify, assess, and compare data available for physician-based simulation models. Principal Findings: Our conceptual framework describes three broad types of data required for supply-side microsimulation modeling. Our assessment of available data for modeling physician behavior suggests broad comparability across various sources on several dimensions and highlights the need for significant integration of data across multiple sources to provide a platform adequate for modeling. A growing literature provides potential estimates for use as behavioral parameters that could serve as the models' engines. Sources of data for simulation modeling that account for the complex organizational and financial relationships among physicians and other supply-side entities are limited. Conclusions: A key challenge for supply-side microsimulation modeling is optimally combining available data to harness their collective power. Several possibilities also exist for novel data collection. These have the potential to serve as catalysts for the next generation of supply-side-focused simulation models to inform health policy. PMID:23347041
NASA Astrophysics Data System (ADS)
van der Plas, Peter; Guerriero, Suzanne; Cristiano, Leorato; Rugina, Ana
2012-08-01
Modelling and simulation can support a number of use cases across the spacecraft development life-cycle. Given the increasing complexity of space missions, the general trend is toward more extensive use of simulation already in the early phases. A major perceived advantage is that modelling and simulation can enable the validation of critical aspects of the spacecraft design before the actual development is started, thus reducing the risk in later phases. Failure Detection, Isolation, and Recovery (FDIR) is one of the areas with a high potential to benefit from early modelling and simulation. With the increasing level of required spacecraft autonomy, FDIR specifications can grow to the point where the traditional document-based review process becomes inadequate. This paper shows that FDIR modelling and simulation in a system context can provide a powerful tool to support the FDIR verification process. It is highlighted that FDIR modelling at this early stage requires heterogeneous modelling tools and languages in order to provide an adequate functional description of the different components to be modelled (i.e. FDIR functions, environment, equipment, etc.). For this reason, an FDIR simulation framework is proposed in this paper. This framework is based on a number of tools already available in the Avionics Systems Laboratory at ESTEC: the Avionics Test Bench Functional Engineering Simulator (ATB FES), Matlab/Simulink, TASTE, and Real Time Developer Studio (RTDS). The paper then discusses the application of the proposed simulation framework to a real case study, i.e. the FDIR modelling of a satellite in support of an actual ESA mission. Challenges and benefits of the approach are described. Finally, lessons learned and the generality of the proposed approach are discussed.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Hou, A.; Lau, W. K.; Shie, C.; Tao, W.; Lin, X.; Chou, M.; Olson, W. S.; Grecu, M.
2006-05-01
The cloud and precipitation statistics simulated by the 3D Goddard Cumulus Ensemble (GCE) model during the South China Sea Monsoon Experiment (SCSMEX) are compared with Tropical Rainfall Measuring Mission (TRMM) TMI and PR rainfall measurements and with Clouds and the Earth's Radiant Energy System (CERES) single scanner footprint (SSF) radiation and cloud retrievals. It is found that the GCE model is capable of simulating the development of the major convective systems and reproduces the total surface rainfall amount estimated from the soundings. Mesoscale organization is adequately simulated except when the environmental wind shear is very weak. The partitioning between convective and stratiform rain is also close to the TMI and PR classifications. However, the simulated rain spectrum is quite different from either the TMI or PR measurements: the model produces more heavy rain and more light rain (less than 0.1 mm/hr) than the observations. The model also produces heavier vertical hydrometeor profiles of rain and graupel when compared with TMI retrievals and PR radar reflectivity. Comparison of GCE-simulated OLR and cloud properties with CERES measurements shows that the model has a much larger domain-averaged OLR, due to a smaller total cloud fraction, and a much more skewed distribution of OLR and cloud top than the CERES observations, indicating that the model's cloud field is not widespread, consistent with the model's precipitation activity. These results will be used as guidance for improving the model's microphysics.
A Lagrangian stochastic model for aerial spray transport above an oak forest
Wang, Yansen; Miller, David R.; Anderson, Dean E.; McManus, Michael L.
1995-01-01
An aerial spray droplet transport model has been developed by applying recent advances in the Lagrangian stochastic simulation of heavy particles. A two-dimensional Lagrangian stochastic model was adopted to simulate spray droplet dispersion in atmospheric turbulence by adjusting the Lagrangian integral time scale along the drop trajectory. The other major physical processes affecting the transport of spray droplets above a forest canopy, the aircraft wingtip vortices and droplet evaporation, were also included in each time step of the droplet transport. The model was evaluated using data from an aerial spray field experiment. In generally neutral stability conditions, the accuracy of the model predictions varied from run to run, as expected. The average root-mean-square error was 24.61 IU cm−2, and the average relative error was 15%. The model prediction was adequate in two-dimensional steady wind conditions, but was less accurate in variable wind conditions. The results indicate that the model can successfully simulate the ensemble-average transport of aerial spray droplets under neutral, steady atmospheric wind conditions.
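The two error statistics reported above, root-mean-square error and average relative error, can be computed as below. The deposition values are invented for illustration and are not the experiment's data:

```python
import numpy as np

def rmse(observed, predicted):
    """Root-mean-square error between observed and predicted values."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

def mean_relative_error(observed, predicted):
    """Mean of |observed - predicted| / |observed| over all samples."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(observed - predicted) / np.abs(observed)))

# Hypothetical deposition measurements vs. model output (IU/cm^2)
obs = [120.0, 80.0, 60.0, 40.0]
pred = [100.0, 90.0, 55.0, 45.0]
print(rmse(obs, pred))                 # RMSE ~ 11.73
print(mean_relative_error(obs, pred))  # mean relative error = 0.125, i.e. 12.5%
```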
NASA Astrophysics Data System (ADS)
Oaida, C. M.; Skiles, M.; Painter, T. H.; Xue, Y.
2015-12-01
The mountain snowpack is an essential resource for both the environment and society. Observational and energy-balance modeling work has shown that dust on snow (DOS) in the western U.S. (WUS) is a major contributor to snow processes, including snowmelt timing and runoff amount, in regions like the Upper Colorado River Basin (UCRB). In order to accurately estimate the impact of DOS on the hydrologic cycle and water resources, now and under a changing climate, we need to be able to (1) adequately simulate snowpack accumulation and (2) realistically represent DOS processes in models. Energy-balance models do not capture the impact on a broader local or regional scale, nor the land-atmosphere feedbacks, while GCM studies cannot resolve orographic precipitation processes, and therefore snowpack accumulation, owing to coarse spatial resolution and smoothed terrain. This implies that the impacts of dust on snow on the mountain snowpack and other hydrologic processes are likely not well captured in current modeling studies. Recent increases in computing power allow RCMs to be used at higher spatial resolutions, while recent in situ observations of dust-in-snow properties can help constrain modeling simulations. In the work presented here, we take advantage of these latest resources to address some of the challenges outlined above. We employ the newly enhanced WRF/SSiB regional climate model at 4 km horizontal resolution, a scale shown by others to be adequate for capturing orographic processes over the WUS. We also constrain the magnitude of dust deposition provided by a global chemistry and transport model with in situ measurements taken at sites in the UCRB. Furthermore, we adjust the dust absorptive properties based on values observed at these sites rather than generic global ones. This study aims to improve simulation of the impact of dust in snow on the hydrologic cycle and related water resources.
NASA Astrophysics Data System (ADS)
Zhang, B.; Wang, W.; Wu, Q.; Knipp, D.; Kilcommons, L.; Brambles, O. J.; Liu, J.; Wiltberger, M.; Lyon, J. G.; Häggström, I.
2016-08-01
This paper investigates a possible physical mechanism of the observed dayside high-latitude upper thermospheric wind using numerical simulations from the coupled magnetosphere-ionosphere-thermosphere (CMIT) model. Results show that the CMIT model is capable of reproducing the unexpected afternoon equatorward winds in the upper thermosphere observed by the High altitude Interferometer WIND observation (HIWIND) balloon; models that lack adequate coupling produce poleward winds. The modeling study suggests that ion drag driven by magnetospheric lobe cell convection is another possible mechanism for turning the climatologically expected dayside poleward winds to the observed equatorward direction. The simulation results are validated against HIWIND, European Incoherent Scatter, and Defense Meteorological Satellite Program observations. The results suggest a strong momentum coupling between high-latitude ionospheric plasma circulation and thermospheric neutral winds in the summer hemisphere during positive IMF Bz periods, through the formation of magnetospheric lobe cell convection driven by persistent positive IMF By. The CMIT simulation adds important insight into the role of dayside coupling during intervals of otherwise quiet geomagnetic activity.
Vibronic coupling simulations for linear and nonlinear optical processes: Simulation results
NASA Astrophysics Data System (ADS)
Silverstein, Daniel W.; Jensen, Lasse
2012-02-01
A vibronic coupling model based on a time-dependent wavepacket approach is applied to simulate linear optical processes, such as one-photon absorbance and resonance Raman scattering, and nonlinear optical processes, such as two-photon absorbance and resonance hyper-Raman scattering, for a series of small molecules. Simulations employing both long-range corrected density functional theory and coupled cluster methods are compared and also examined against available experimental data. Although many of the small molecules are prone to anharmonicity in their potential energy surfaces, the harmonic approach performs adequately. Non-Condon effects are discussed in detail for the molecules presented in this work. Linear and nonlinear Raman scattering simulations allow for the quantification of interference between the Franck-Condon and Herzberg-Teller terms for different molecules.
Multislice spiral CT simulator for dynamic cardiopulmonary studies
NASA Astrophysics Data System (ADS)
De Francesco, Silvia; Ferreira da Silva, Augusto M.
2002-04-01
We have developed a multi-slice spiral CT simulator modeling the acquisition process of a real tomograph over a 4-dimensional phantom (4D MCAT) of the human thorax. The simulator allows us to visually characterize artifacts due to insufficient temporal sampling and to evaluate a priori the quality of the images obtained in cardio-pulmonary studies (with single-/multi-slice and ECG-gated acquisition processes). The simulating environment allows for both conventional and spiral scanning modes and includes a model of noise in the acquisition process. In the case of spiral scanning, reconstruction facilities include longitudinal interpolation methods (360LI and 180LI, for both single and multi-slice); the reconstruction of the section is then performed through FBP. The reconstructed images/volumes are affected by distortion due to insufficient temporal sampling of the moving object. The developed simulating environment allows us to investigate the nature of this distortion, characterizing it qualitatively and quantitatively (using, for example, Herman's measures). Much of our work is focused on the determination of adequate temporal sampling and sinogram regularization techniques. At the moment, the simulator is limited to multi-slice tomographs; extension to cone-beam or area detectors is planned as the next step of development.
Simulation tools for particle-based reaction-diffusion dynamics in continuous space
2014-01-01
Particle-based reaction-diffusion algorithms facilitate the modeling of the diffusional motion of individual molecules and the reactions between them in cellular environments. A physically realistic model, depending on the system at hand and the questions asked, may require different levels of modeling detail, such as particle diffusion, geometrical confinement, particle volume exclusion, or particle-particle interaction potentials. Higher levels of detail usually correspond to an increased number of parameters and higher computational cost. Certain systems, however, require these investments to be modeled adequately. Here we present a review of the current field of particle-based reaction-diffusion software packages operating on continuous space. Four nested levels of modeling detail are identified that capture an increasing amount of detail. Their applicability to different biological questions is discussed, ranging from straight diffusion simulations to sophisticated and expensive models that bridge towards coarse-grained molecular dynamics. PMID:25737778
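The lowest of these levels of detail, free particle diffusion plus bimolecular reactions, can be sketched in a few lines. The sketch below is illustrative only and not taken from any of the reviewed packages: particles undergo Brownian motion in 2-D, and an A-B pair annihilates when it comes within an assumed reaction radius (Doi-style); all parameter values are arbitrary.

```python
import math
import random

def brownian_reaction_step(a_particles, b_particles, D, dt, r_react, rng):
    """One step of a minimal particle-based reaction-diffusion scheme:
    free Brownian motion in 2-D, plus the reaction A + B -> 0 whenever
    an A and a B particle come within the reaction radius."""
    sigma = math.sqrt(2.0 * D * dt)  # std. dev. of each coordinate displacement
    for plist in (a_particles, b_particles):
        for p in plist:
            p[0] += rng.gauss(0.0, sigma)
            p[1] += rng.gauss(0.0, sigma)
    # Naive O(N*M) pair search; real packages use neighbor lists.
    surviving_a = []
    for a in a_particles:
        partner = None
        for i, b in enumerate(b_particles):
            if (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= r_react ** 2:
                partner = i
                break
        if partner is None:
            surviving_a.append(a)
        else:
            del b_particles[partner]  # A and B annihilate as a pair
    return surviving_a, b_particles

rng = random.Random(1)
A = [[rng.random(), rng.random()] for _ in range(100)]
B = [[rng.random(), rng.random()] for _ in range(100)]
for _ in range(50):
    A, B = brownian_reaction_step(A, B, D=1e-3, dt=1e-2, r_react=0.05, rng=rng)
```

Because A and B annihilate pairwise from equal initial counts, the two populations stay equal while both decay; adding volume exclusion or interaction potentials (the review's higher levels) would replace the free Gaussian displacement step with a force calculation.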
NASA Astrophysics Data System (ADS)
Zerkle, Ronald D.; Prakash, Chander
1995-03-01
This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its computational complexity. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.
Opportunities and pitfalls in clinical proof-of-concept: principles and examples.
Chen, Chao
2018-04-01
Clinical proof-of-concept trials crucially inform major resource deployment decisions. This paper discusses several mechanisms for enhancing their rigour and efficiency. The importance of careful consideration when using a surrogate endpoint is illustrated; the situational effectiveness of run-in patient enrichment is explored; a versatile tool is introduced to ensure a strong pharmacological underpinning; the benefits of dose-titration are revealed by simulation; and the importance of adequately scheduled observations is shown. The general process of model-based trial design and analysis is described, and several examples demonstrate the value of historical data, simulation-guided design, model-based analysis and trial adaptation informed by interim analysis.
Relaxing the rule of ten events per variable in logistic and Cox regression.
Vittinghoff, Eric; McCulloch, Charles E
2007-03-15
The rule of thumb that logistic and Cox models should be used with a minimum of 10 outcome events per predictor variable (EPV), based on two simulation studies, may be too conservative. The authors conducted a large simulation study of other influences on confidence interval coverage, type I error, relative bias, and other model performance measures. They found a range of circumstances in which coverage and bias were within acceptable levels despite less than 10 EPV, as well as other factors that were as influential as or more influential than EPV. They conclude that this rule can be relaxed, in particular for sensitivity analyses undertaken to demonstrate adequate control of confounding.
Effect of Turbulence Modeling on an Excited Jet
NASA Technical Reports Server (NTRS)
Brown, Clifford A.; Hixon, Ray
2010-01-01
The flow dynamics in a high-speed jet are dominated by unsteady turbulent flow structures in the plume. Jet excitation seeks to control these flow structures through the natural instabilities present in the initial shear layer of the jet. Understanding and optimizing the excitation input, for jet noise reduction or plume mixing enhancement, requires many trials that may be done experimentally or, at a significant cost savings, computationally. Numerical simulations, which model various parts of the unsteady dynamics to reduce the computational expense of the simulation, must adequately capture the unsteady flow dynamics in the excited jet if the results are to be used. Four CFD methods are considered for use in an excited jet problem, including two turbulence models with an Unsteady Reynolds-Averaged Navier-Stokes (URANS) solver, one Large Eddy Simulation (LES) solver, and one URANS/LES hybrid method. Each method is used to simulate a simplified excited jet, and the results are evaluated based on the flow data, computation time, and numerical stability. The knowledge gained about the effect of turbulence modeling and CFD methods from these basic simulations will guide and assist future three-dimensional (3-D) simulations that will be used to understand and optimize a realistic excited jet for a particular application.
Numerical simulation of double‐diffusive finger convection
Hughes, Joseph D.; Sanford, Ward E.; Vacher, H. Leonard
2005-01-01
A hybrid finite element, integrated finite difference numerical model is developed for the simulation of double-diffusive and multicomponent flow in two and three dimensions. The model is based on a multidimensional, density-dependent, saturated-unsaturated transport model (SUTRA), which uses one governing equation for fluid flow and another for solute transport. The solute-transport equation is applied sequentially to each simulated species. Density coupling of the flow and solute-transport equations is accounted for and handled using a sequential implicit Picard iterative scheme. High-resolution data from a double-diffusive Hele-Shaw experiment, initially in a density-stable configuration, are used to verify the numerical model. The temporal and spatial evolution of simulated double-diffusive convection is in good agreement with experimental results. Numerical results are very sensitive to discretization and correspond closest to experimental results when element sizes adequately define the spatial resolution of observed fingering. Numerical results also indicate that differences in the molecular diffusivity of sodium chloride and the dye used to visualize experimental sodium chloride concentrations are significant and cause inaccurate mapping of sodium chloride concentrations by the dye, especially at late times. As a result of reduced diffusion, simulated dye fingers are better defined than simulated sodium chloride fingers and exhibit more vertical mass transfer.
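The effect of differing molecular diffusivities noted above can be illustrated with a toy 1-D explicit finite-difference calculation (not the SUTRA-based model itself): a slowly diffusing dye tracer retains a sharper profile than faster-diffusing sodium chloride. The diffusivity values below are illustrative order-of-magnitude choices, not the experiment's measured values.

```python
def diffuse_1d(c, D, dx, dt, steps):
    """Explicit FTCS finite-difference solution of dc/dt = D * d2c/dx2
    with zero-flux boundaries. Stable when D*dt/dx**2 <= 0.5."""
    c = list(c)
    r = D * dt / dx ** 2
    assert r <= 0.5, "FTCS stability limit violated"
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i - 1] - 2 * c[i] + c[i + 1])
        new[0], new[-1] = new[1], new[-2]  # zero-flux ends
        c = new
    return c

n = 101
initial = [1.0 if 45 <= i <= 55 else 0.0 for i in range(n)]  # concentration plug
salt = diffuse_1d(initial, D=1.5e-9, dx=1e-4, dt=1.0, steps=500)  # NaCl-like D
dye = diffuse_1d(initial, D=3.0e-10, dx=1e-4, dt=1.0, steps=500)  # slower tracer
```

After the same elapsed time, the slower-diffusing "dye" keeps a higher, sharper peak than the "salt", which is the mechanism behind the inaccurate mapping of sodium chloride concentrations by the dye at late times.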
Aspects of intelligent electronic device based switchgear control training model application
NASA Astrophysics Data System (ADS)
Bogdanov, Dimitar; Popov, Ivaylo
2018-02-01
The design of protection and control equipment for electrical power sector applications has been the object of extensive advances in the last several decades. Modern technologies offer a wide range of multifunctional, flexible applications, making the protection and control of facilities more sophisticated. At the same time, this advance of technology imposes the necessity for simulators, training models, and tutorial laboratory equipment to be used for the adequate training of students and field specialists.
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1983-01-01
Adequate computer methods, based on interactions between discrete particles, provide information leading to an atomic-level understanding of various physical processes. The success of these simulation methods, however, is related to the accuracy of the potential energy function representing the interactions among the particles. The development of a potential energy function for crystalline SiO2 forms that can be employed in lengthy computer modelling procedures was investigated. In many of the simulation methods which deal with discrete particles, semiempirical two-body potentials were employed to analyze energy- and structure-related properties of the system. Many-body interactions are required for a proper representation of the total energy of many systems. Many-body interactions for simulations based on discrete particles are discussed.
NASA Astrophysics Data System (ADS)
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin in southeastern Brazil, was selected for the study. We used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic one [Normal likelihood, r ~ N(0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach proved adequate to the proposed objectives and reinforced the importance of assessing the uncertainties associated with hydrological modeling.
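For readers unfamiliar with the machinery, a minimal random-walk Metropolis sampler under the classical residual model (i), with a flat prior and a single location parameter, looks like the sketch below. This illustrates Bayesian parameter inference by Monte Carlo only; it is not the DREAM algorithm, and the data and settings are entirely synthetic.

```python
import math
import random

def log_likelihood(theta, data, sigma):
    """Classical residual model: residuals r = y - theta assumed N(0, sigma^2)."""
    return sum(-0.5 * ((y - theta) / sigma) ** 2 for y in data)

def metropolis(data, sigma, n_iter, step, rng):
    """Random-walk Metropolis with a flat prior on theta."""
    theta = 0.0
    ll = log_likelihood(theta, data, sigma)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        ll_prop = log_likelihood(prop, data, sigma)
        if math.log(rng.random()) < ll_prop - ll:  # accept/reject
            theta, ll = prop, ll_prop
        chain.append(theta)
    return chain

rng = random.Random(42)
data = [5.0 + rng.gauss(0.0, 1.0) for _ in range(100)]  # true parameter = 5.0
chain = metropolis(data, sigma=1.0, n_iter=5000, step=0.3, rng=rng)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

The spread of the post-burn-in chain is the parameter uncertainty estimate; DREAM differs in running multiple adaptive chains with differential-evolution proposals, and the paper's generalized likelihood (ii) replaces the Gaussian residual term.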
NASA Astrophysics Data System (ADS)
Lin, Caiyan; Zhang, Zhongfeng; Pu, Zhaoxia; Wang, Fengyun
2017-10-01
A series of numerical simulations is conducted to understand the formation, evolution, and dissipation of an advection fog event over Shanghai Pudong International Airport (ZSPD) with the Weather Research and Forecasting (WRF) model. Using the current operational settings at the Meteorological Center of East China Air Traffic Management Bureau, the WRF model successfully predicts the fog event at ZSPD. Additional numerical experiments are performed to examine the physical processes associated with the fog event. The results indicate that prediction of this particular fog event is sensitive to microphysical schemes for the time of fog dissipation but not for the time of fog onset. The simulated timing of the arrival and dissipation of the fog, as well as the cloud distribution, is substantially sensitive to the planetary boundary layer and radiation (both longwave and shortwave) processes. Moreover, varying forecast lead times also produces different simulation results for the fog event regarding its onset and duration, suggesting a trade-off between more accurate initial conditions and a proper forecast lead time that allows model physical processes to spin up adequately during the fog simulation. The overall outcomes from this study imply that the complexity of physical processes and their interactions within the WRF model during fog evolution and dissipation is a key area of future research.
Modeling scintillator and WLS fiber signals for fast Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Sánchez, F. A.; Medina-Tanco, G.
2010-08-01
In this work we present a fast, robust and flexible procedure to simulate the electronic signals of scintillator units: plastic scintillator material embedded with a wavelength-shifting optical fiber coupled to a photomultiplier tube which, in turn, is plugged into a front-end electronic board. The simple rationale behind the simulation chain makes it possible to adapt the procedure to a broad range of detectors based on this kind of unit. We show that, in order to produce realistic results, the simulation parameters can be properly calibrated against laboratory measurements and used thereafter as input to the simulations. Simulated signals of atmospheric background cosmic ray muons are presented and their main features analyzed and validated using actual measured data. Conversely, for any given practical application, the present simulation scheme can be used to find an adequate combination of photomultiplier tube and optical fiber at the prototyping stage.
Optimization of droplets for UV-NIL using coarse-grain simulation of resist flow
NASA Astrophysics Data System (ADS)
Sirotkin, Vadim; Svintsov, Alexander; Zaitsev, Sergey
2009-03-01
A mathematical model and numerical method are described which make it possible to simulate the ultraviolet ("step and flash") nanoimprint lithography (UV-NIL) process adequately, even on standard personal computers. The model is derived from the 3D Navier-Stokes equations with the understanding that the resist motion is largely directed along the substrate surface and characterized by ultra-low values of the Reynolds number. For the numerical approximation of the model, a special (coarse-grain) finite difference method is applied. A coarse-grain modeling tool for detailed analysis of resist spreading in UV-NIL at the structure-scale level is tested. The obtained results demonstrate the ability of the tool to calculate optimal dispensing for a given stamp design and process parameters. This dispensing provides uniformly filled areas and a homogeneous residual layer thickness in UV-NIL.
Snowmelt-runoff Model Utilizing Remotely-sensed Data
NASA Technical Reports Server (NTRS)
Rango, A.
1985-01-01
Remotely sensed snow cover information is the critical data input for the Snowmelt-Runoff Model (SRM), which was developed to simulate discharge from mountain basins where snowmelt is an important component of runoff. Of simple structure, the model requires only inputs of temperature, precipitation, and snow-covered area. SRM was run successfully on two widely separated basins. The simulations on the Kings River basin are significant because of the large basin area (4000 sq km) and the adequate performance in the most extreme drought year of record (1976). The performance of SRM on the Okutadami River basin was important because it was accomplished with the minimum snow cover data available. Tables show: optimum and minimum conditions for model application; basin sizes and elevations where SRM was applied; and SRM strengths and weaknesses. Graphs show results of the discharge simulations.
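SRM's daily recursion, as documented in the standard SRM literature, combines degree-day snowmelt over the snow-covered fraction with rainfall, converts the water depth to discharge, and weights it against the previous day's flow with a recession coefficient. The sketch below follows that published structure; all coefficient values (runoff coefficients, degree-day factor, recession coefficient) are illustrative, not taken from the Kings River or Okutadami applications.

```python
def srm_daily_discharge(q_prev, t_deg_days, snow_cover, precip_cm,
                        area_km2, c_snow, c_rain, ddf_cm_per_degday, k):
    """One time step of the Snowmelt-Runoff Model (SRM):
    Q[n+1] = [cS*a*T*S + cR*P] * (A*10000/86400) * (1-k) + Q[n]*k
    with melt/precip depths in cm, area in km2, discharge in m3/s."""
    melt_input = c_snow * ddf_cm_per_degday * t_deg_days * snow_cover
    rain_input = c_rain * precip_cm
    # 1 cm of water over 1 km2 = 10000 m3 per day; /86400 converts to m3/s
    inflow = (melt_input + rain_input) * area_km2 * 10000.0 / 86400.0
    return inflow * (1.0 - k) + q_prev * k

q = srm_daily_discharge(q_prev=20.0, t_deg_days=6.0, snow_cover=0.6,
                        precip_cm=0.2, area_km2=100.0,
                        c_snow=0.8, c_rain=0.8,
                        ddf_cm_per_degday=0.5, k=0.9)
```

The snow-covered-area term is where the remotely sensed input enters: each day's satellite-derived snow cover fraction scales the degree-day melt contribution.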
Large Eddy Simulation of Turbulent Combustion
2005-10-01
A new method to automatically generate skeletal kinetic mechanisms for surrogate fuels, using the directed relation graph method with error propagation, was developed. These mechanisms are guaranteed to match results obtained using detailed chemistry within a user-defined accuracy for any specified target. They can be combined to produce adequate chemical models for surrogate fuels. A library containing skeletal mechanisms of various
A simulation model for the infiltration of heterogeneous sediment into a stream bed
Tim Lauck; Roland Lamberson; Thomas E. Lisle
1993-01-01
Salmonid embryos depend on an adequate flow of oxygenated water to survive and on interstitial passageways to emerge from the gravel bed. Spawning gravels are initially cleaned by the spawning female, but sediment transported during subsequent high-runoff events can infiltrate the porous substrate. In many gravel-bed channels used for spawning, most of the...
Revision of the Rawls et al. (1982) pedotransfer functions for their applicability to US croplands
USDA-ARS?s Scientific Manuscript database
Large-scale environmental impact studies typically involve the use of simulation models and require a variety of inputs, some of which may need to be estimated in the absence of adequate measured data. As an example, soil water retention needs to be estimated for a large number of soils that are to be u...
Fukae, Masato; Shiraishi, Yoshimasa; Hirota, Takeshi; Sasaki, Yuka; Yamahashi, Mika; Takayama, Koichi; Nakanishi, Yoichi; Ieiri, Ichiro
2016-11-01
Docetaxel is used to treat many cancers, and neutropenia is the dose-limiting factor for its clinical use. A population pharmacokinetic-pharmacodynamic (PK-PD) model was introduced to predict the development of docetaxel-induced neutropenia in Japanese patients with non-small cell lung cancer (NSCLC). Forty-seven Japanese patients with advanced or recurrent NSCLC were enrolled. Patients received 50 or 60 mg/m² docetaxel as monotherapy, and blood samples for a PK analysis were collected up to 24 h after its infusion. Laboratory tests including absolute neutrophil count data and demographic information were used in population PK-PD modeling. The model was built by NONMEM 7.2 with the first-order conditional estimation with interaction method. Based on the final model, a Monte Carlo simulation was performed to assess the impact of covariates on, and the predictability of, neutropenia. A three-compartment model was employed to describe the PK data, and the PK model adequately described the docetaxel concentrations observed. Serum albumin (ALB) was detected as a covariate of clearance (CL): CL (L/h) = 32.5 × (ALB/3.6)^0.965 × (WGHT/70)^(3/4). In population PK-PD modeling, a modified semi-mechanistic myelosuppression model was applied and adequately characterized the time course of neutrophil counts. The covariate selection indicated that α1-acid glycoprotein (AAG) was a predictor of neutropenia. The model-based simulation also showed that ALB and AAG negatively correlated with the development of neutropenia and that the time course of neutrophil counts was predictable. The developed model may facilitate the prediction and care of docetaxel-induced neutropenia.
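The clearance covariate equation quoted above can be evaluated directly. The helper below simply implements that equation as written, assuming ALB is expressed in the same units as the 3.6 reference value and body weight in kg against the 70 kg reference; a reference patient recovers the typical clearance of 32.5 L/h, and lower albumin gives lower clearance.

```python
def docetaxel_clearance(alb, weight_kg):
    """Population clearance model quoted in the abstract:
    CL (L/h) = 32.5 * (ALB/3.6)^0.965 * (WGHT/70)^(3/4)."""
    return 32.5 * (alb / 3.6) ** 0.965 * (weight_kg / 70.0) ** 0.75

cl_typical = docetaxel_clearance(3.6, 70.0)  # reference patient
cl_low_alb = docetaxel_clearance(2.9, 70.0)  # hypoalbuminaemia lowers CL
```

Lower clearance means higher drug exposure at the same dose, which is consistent with the reported negative correlation between ALB and the development of neutropenia.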
Short-stack modeling of degradation in solid oxide fuel cells. Part I. Contact degradation
NASA Astrophysics Data System (ADS)
Gazzarri, J. I.; Kesler, O.
As the first part of a two-paper series, we present a two-dimensional impedance model of a working solid oxide fuel cell (SOFC) to study the effect of contact degradation on the impedance spectrum for the purpose of non-invasive diagnosis. The two-dimensional modeled geometry includes the ribbed interconnect and is adequate to represent co- and counter-flow configurations. Simulated degradation modes include cathode delamination, interconnect oxidation, and interconnect-cathode detachment. The simulations show differences in the way each degradation mode impacts the shape of the impedance spectrum, suggesting that identification is possible. In Part II, we present a sensitivity analysis of the results to input parameter variability that reveals strengths and limitations of the method, as well as describing possible interactions between input parameters and concurrent degradation modes.
Electron-phonon interaction within classical molecular dynamics
Tamm, A.; Samolyuk, G.; Correa, A. A.; ...
2016-07-14
Here, we present a model for nonadiabatic classical molecular dynamics simulations that captures with high accuracy the wave-vector q dependence of the phonon lifetimes, in agreement with quantum mechanics calculations. It is based on a local view of the e-ph interaction where individual atom dynamics couples to electrons via a damping term that is obtained as the low-velocity limit of the stopping power of a moving ion in a host. The model is parameter free, as its components are derived from ab initio-type calculations, is readily extended to the case of alloys, and is adequate for large-scale molecular dynamics computer simulations. We also show how this model removes some oversimplifications of the traditional ionic damped dynamics commonly used to describe situations beyond the Born-Oppenheimer approximation.
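The damping term described here acts, in the low-velocity stopping limit, like a friction force F = -γv on each atom. A minimal 1-D illustration of the resulting velocity decay for an otherwise free atom is sketched below; this is not the paper's parameter-free ab initio model, and all values are arbitrary.

```python
def damped_free_particle(v0, gamma, mass, dt, steps):
    """Velocity of a free atom whose only force is the electronic
    friction term F = -gamma * v (low-velocity stopping-power limit)."""
    v = v0
    for _ in range(steps):
        # simple explicit update; adequate for illustration when gamma*dt/mass << 1
        v += (-gamma * v / mass) * dt
    return v

v_final = damped_free_particle(v0=1.0, gamma=0.5, mass=1.0, dt=0.01, steps=1000)
# velocity decays roughly as exp(-gamma*t/mass) = exp(-5)
```

In a full simulation this friction term is added to the interatomic forces inside the MD integrator, so the lattice vibrations lose energy to the electrons at a rate that reproduces the q-dependent phonon lifetimes.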
Leblanc, Fabien; Senagore, Anthony J; Ellis, Clyde N; Champagne, Bradley J; Augestad, Knut M; Neary, Paul C; Delaney, Conor P
2010-01-01
The aim of this study was to compare a simulator with the human cadaver model for hand-assisted laparoscopic colorectal skills acquisition training. An observational prospective comparative study was conducted to compare the laparoscopic surgery training models. The study took place during the laparoscopic colectomy training course performed at the annual scientific meeting of the American Society of Colon and Rectal Surgeons. Thirty-four practicing surgeons performed hand-assisted laparoscopic sigmoid colectomy on human cadavers (n = 7) and on an augmented reality simulator (n = 27). Prior laparoscopic colorectal experience was assessed. Trainers and trainees independently completed objective structured assessment forms. Training models were compared by trainees' technical skills scores, events scores, and satisfaction. Prior laparoscopic experience was similar in both surgeon groups. Generic and specific skills scores were similar on both training models. Generic events scores were significantly better on the cadaver model. The 2 most frequent generic events occurring on the simulator were poor hand-eye coordination and inefficient use of retraction. Specific events were scored better on the simulator and reached the significance limit (p = 0.051) for trainers. The specific events occurring on the cadaver were intestinal perforation and difficulties identifying the left ureter. Overall satisfaction was better for the cadaver than for the simulator model (p = 0.009). With regard to skills scores, the augmented reality simulator had adequate qualities for hand-assisted laparoscopic colectomy training. Nevertheless, events scores highlighted weaknesses of the anatomical replication on the simulator. Although improvements likely will be required to incorporate the simulator more routinely into colorectal training, it may be useful in its current form for more junior trainees or those early on their learning curve.
Strauch, Kellan R.; Linard, Joshua I.
2009-01-01
The U.S. Geological Survey, in cooperation with the Upper Elkhorn, Lower Elkhorn, Upper Loup, Lower Loup, Middle Niobrara, Lower Niobrara, Lewis and Clark, and Lower Platte North Natural Resources Districts, used the Soil and Water Assessment Tool to simulate streamflow and estimate percolation in north-central Nebraska to aid development of long-term strategies for management of hydrologically connected ground and surface water. Although groundwater models adequately simulate subsurface hydrologic processes, they often are not designed to simulate the hydrologically complex processes occurring at or near the land surface. The use of watershed models such as the Soil and Water Assessment Tool, which are designed specifically to simulate surface and near-subsurface processes, can provide helpful insight into the effects of surface-water hydrology on the groundwater system. The Soil and Water Assessment Tool was calibrated for five stream basins in the Elkhorn-Loup Groundwater Model study area in north-central Nebraska to obtain spatially variable estimates of percolation. Six watershed models were calibrated to recorded streamflow in each subbasin by modifying the adjustment parameters. The calibrated parameter sets were then used to simulate a validation period; the validation period was half of the total streamflow period of record with a minimum requirement of 10 years. If the statistical and water-balance results for the validation period were similar to those for the calibration period, a model was considered satisfactory. Statistical measures of each watershed model's performance were variable. These objective measures included the Nash-Sutcliffe measure of efficiency, the ratio of the root-mean-square error to the standard deviation of the measured data, and an estimate of bias. 
The model met performance criteria for the bias statistic, but failed to meet statistical adequacy criteria for the other two performance measures when evaluated at a monthly time step. A primary cause of the poor model validation results was the inability of the model to reproduce the sustained base flow and streamflow response to precipitation that was observed in the Sand Hills region. The watershed models also were evaluated based on how well they conformed to the annual mass balance (precipitation equals the sum of evapotranspiration, streamflow/runoff, and deep percolation). The model was able to adequately simulate annual values of evapotranspiration, runoff, and precipitation in comparison to reported values, which indicates the model may provide reasonable estimates of annual percolation. Mean annual percolation estimated by the model as basin averages varied within the study area from a maximum of 12.9 inches in the Loup River Basin to a minimum of 1.5 inches in the Shell Creek Basin. Percolation also varied within the studied basins; basin headwaters tended to have greater percolation rates than downstream areas. This variance in percolation rates was mainly because of the predominance of sandy, highly permeable soils in the upstream areas of the modeled basins.
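The three objective measures named above have standard definitions in hydrologic model evaluation, and a direct implementation is short. The sketch below uses those standard forms with a toy observed/simulated series; the percent-bias sign convention (positive = underestimation) is the common one but is an assumption here, as the abstract does not state it.

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def rsr(obs, sim):
    """Ratio of the RMSE to the standard deviation of the observations."""
    mean_obs = sum(obs) / len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))
    sd = math.sqrt(sum((o - mean_obs) ** 2 for o in obs) / len(obs))
    return rmse / sd

def pbias(obs, sim):
    """Percent bias; positive values indicate model underestimation."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

obs = [10.0, 12.0, 8.0, 15.0, 11.0]   # toy monthly streamflow
sim = [9.0, 12.5, 8.5, 14.0, 11.5]
```

A perfect simulation gives NSE = 1, RSR = 0, and PBIAS = 0; a model can score well on bias while failing NSE and RSR, exactly the pattern reported for these watershed models.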
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
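The core idea of an assumed-PDF parameterization with Monte Carlo sampling can be reduced to a toy: assume a subgrid PDF for one moisture variable, draw samples from it, and diagnose cloud properties from the samples. The sketch below is a one-variable Gaussian reduction for illustration only; the parameterization in the paper predicts a multivariate PDF of turbulence, clouds, and hydrometeors and feeds the samples to a prognostic microphysics scheme.

```python
import random

def cloud_fraction_assumed_pdf(qt_mean, qt_sigma, q_sat, n_samples, rng):
    """Monte Carlo estimate of subgrid cloud fraction under an assumed
    Gaussian PDF of total water qt: the fraction of samples with qt > q_sat."""
    cloudy = sum(1 for _ in range(n_samples)
                 if rng.gauss(qt_mean, qt_sigma) > q_sat)
    return cloudy / n_samples

rng = random.Random(7)
# grid-box mean exactly at saturation -> about half the subgrid area is cloudy,
# even though a no-subgrid-variability scheme would say all-or-nothing
cf = cloud_fraction_assumed_pdf(qt_mean=8.0, qt_sigma=0.5, q_sat=8.0,
                                n_samples=100000, rng=rng)
```

This is why the approach generalizes across deep, shallow, and stratiform cases: the cloud quantities follow from whatever PDF the single equation set predicts, rather than from cloud-type-specific closure formulas.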
NASA Technical Reports Server (NTRS)
Goldberg, Louis F.
1992-01-01
Aspects of the information propagation modeling behavior of integral machine computer simulation programs are investigated in terms of a transmission line. In particular, the effects of pressure-linking and temporal integration algorithms on the amplitude ratio and phase angle predictions are compared against experimental and closed-form analytic data. It is concluded that the discretized, first order conservation balances may not be adequate for modeling information propagation effects at characteristic numbers less than about 24. An entropy transport equation suitable for generalized use in Stirling machine simulation is developed. The equation is evaluated by including it in a simulation of an incompressible oscillating flow apparatus designed to demonstrate the effect of flow oscillations on the enhancement of thermal diffusion. Numerical false diffusion is found to be a major factor inhibiting validation of the simulation predictions with experimental and closed-form analytic data. A generalized false diffusion correction algorithm is developed which allows the numerical results to match their analytic counterparts. Under these conditions, the simulation yields entropy predictions which satisfy Clausius' inequality.
Parameterizing deep convection using the assumed probability density function method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storer, R. L.; Griffin, B. M.; Höft, J.
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method.The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak.more » The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.« less
Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments
Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...
2016-06-13
We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data where dynamic properties of materials are integrated together. We wished to assess how well the chosen computer hydrodynamic code could do at capturing both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations. For experiments with a window, the response of both the sample and the window are integrated together, providing a good test of the material models. While CTH physics models are not perfect and do not reproduce all experimental details well, we find the models are useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.
On the multi-scale description of micro-structured fluids composed of aggregating rods
NASA Astrophysics Data System (ADS)
Perez, Marta; Scheuer, Adrien; Abisset-Chavanne, Emmanuelle; Ammar, Amine; Chinesta, Francisco; Keunings, Roland
2018-05-01
When addressing the flow of concentrated suspensions composed of rods, dense clusters are observed. Thus, the adequate modelling and simulation of such a flow requires addressing the kinematics of these dense clusters and their impact on the flow in which they are immersed. In a former work, we addressed a first modelling framework of these clusters, assumed so dense that they were considered rigid and their kinematics (flow-induced rotation) were totally defined by a symmetric tensor c with unit trace representing the cluster conformation. Then, the rigid nature of the clusters was relaxed, assuming them deformable, and a model giving the evolution of both the cluster shape and its microstructural orientation descriptor (the so-called shape and orientation tensors) was proposed. This paper compares the predictions coming from those models with finer-scale discrete simulations inspired from molecular dynamics modelling.
Nesting large-eddy simulations within mesoscale simulations for wind energy applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundquist, J K; Mirocha, J D; Chow, F K
2008-09-08
With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES), which resolve individual atmospheric eddies on length scales smaller than turbine blades and account for complex terrain, are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to 'local' sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved WRF's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al, 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions, and to allow adequate spin-up of turbulence in the LES domain.
A comparison between block and smooth modeling in finite element simulations of tDCS
Indahlastari, Aprinda; Sadleir, Rosalind J.
2018-01-01
Current density distributions in five selected structures, namely, anterior superior temporal gyrus (ASTG), hippocampus (HIP), inferior frontal gyrus (IFG), occipital lobe (OCC) and pre-central gyrus (PRC) were investigated as part of a comparison between electrostatic finite element models constructed directly from MRI-resolution data (block models), and smoothed tetrahedral finite element models (smooth models). Three electrode configurations were applied, mimicking different tDCS therapies. Smooth model simulations were found to require three times longer to complete. The percentage differences between mean and median current densities of each model type in arbitrarily chosen brain structures ranged from −33.33% to 48.08%. No clear relationship was found between structure volumes and current density differences between the two model types. Tissue regions near the electrodes showed the smallest percentage differences between block and smooth models. Therefore, block models may be adequate to predict current density values in cortical regions presumed targeted by tDCS. PMID:26737023
Efficiency of endoscopy units can be improved with use of discrete event simulation modeling.
Sauer, Bryan G; Singh, Kanwar P; Wagner, Barry L; Vanden Hoek, Matthew S; Twilley, Katherine; Cohn, Steven M; Shami, Vanessa M; Wang, Andrew Y
2016-11-01
Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience.
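As a toy illustration of the discrete event simulation approach described above, the sketch below models a three-stage unit (preparation, procedure, recovery) with a plain priority-queue event loop. All room counts, service times, and the arrival schedule are invented for illustration; this is not the authors' validated model.

```python
import heapq
from collections import deque
from itertools import count

def simulate(n_patients=60, arrival_gap=12.0,
             prep_rooms=8, proc_rooms=5, rec_rooms=9,
             prep_t=20.0, proc_t=30.0, rec_t=45.0):
    """Toy DES of a three-stage endoscopy unit: prep -> procedure -> recovery.

    Deterministic service times (minutes) keep the sketch reproducible;
    the numbers are illustrative, not the study's calibrated inputs.
    Returns the mean patient cycle time."""
    events = []                       # heap of (time, seq, kind, patient)
    seq = count()
    free = {"prep": prep_rooms, "proc": proc_rooms, "rec": rec_rooms}
    waiting = {"prep": deque(), "proc": deque(), "rec": deque()}
    dur = {"prep": prep_t, "proc": proc_t, "rec": rec_t}
    nxt = {"prep": "proc", "proc": "rec", "rec": None}
    arrive, finish = {}, {}

    def start(stage, pid, now):
        """Seize a room in `stage`, or queue the patient (blocked inflow)."""
        if free[stage] > 0:
            free[stage] -= 1
            heapq.heappush(events, (now + dur[stage], next(seq),
                                    "done_" + stage, pid))
        else:
            waiting[stage].append(pid)

    for i in range(n_patients):       # schedule all arrivals up front
        arrive[i] = i * arrival_gap
        heapq.heappush(events, (arrive[i], next(seq), "prep", i))

    while events:
        now, _, kind, pid = heapq.heappop(events)
        if kind.startswith("done_"):
            stage = kind[len("done_"):]
            free[stage] += 1
            if waiting[stage]:        # hand the freed room to the next waiter
                start(stage, waiting[stage].popleft(), now)
            if nxt[stage] is not None:
                start(nxt[stage], pid, now)
            else:
                finish[pid] = now     # recovery done: patient leaves
        else:                         # an arrival event
            start(kind, pid, now)

    return sum(finish[i] - arrive[i] for i in arrive) / n_patients
```

With the deterministic defaults above every patient's cycle time is 95 minutes; swapping the constants for random draws and varying the resource counts turns the sketch into the kind of virtual configuration test the study performed before implementation.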
Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes.
Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J; Wang, Liliang; Lin, Jianguo
2016-12-13
The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions.
NASA Astrophysics Data System (ADS)
Paiewonsky, Pablo; Elison Timm, Oliver
2018-03-01
In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates and provides not only essential ecological variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or used as boundary input data for these schemes. The model formulations and their derivations are presented here, in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of the large-scale vegetation and land surface characteristics under non-present-day conditions.
NASA Astrophysics Data System (ADS)
Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.
2018-01-01
This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that the use of E-HYPE with this upscaling methodology enables the simulation of the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale to the correct order-of-magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.
Numerical Simulation of the Detonation of Condensed Explosives
NASA Astrophysics Data System (ADS)
Wang, Cheng; Ye, Ting; Ning, Jianguo
The detonation process of a condensed explosive was simulated using a finite difference method, with the Euler equations describing the detonation flow field, an ignition and growth model describing the chemical reaction, and Jones-Wilkins-Lee (JWL) equations of state describing the unreacted explosive and the detonation products. Based on the simple mixture rule that treats the reacting explosive as a mixture of reactant and product components, 1D and 2D codes were developed to simulate the detonation process of the high explosive PBX9404. The numerical results are in good agreement with the experimental results, which demonstrates that the finite difference method, mixture rule, and chemical reaction model proposed in this paper are adequate and feasible.
Strategies for sperm chemotaxis in the siphonophores and ascidians: a numerical simulation study.
Ishikawa, Makiko; Tsutsui, Hidekazu; Cosson, Jacky; Oka, Yoshitaka; Morisawa, Masaaki
2004-04-01
Chemotactic swimming behaviors of spermatozoa toward an egg have been reported in various species. The strategies underlying these behaviors, however, are poorly understood. We focused on two types of chemotaxis, one in siphonophores and the other in ascidians, and then proposed two models based on experimental data. Both models assumed that the radius of the path curvature of a swimming spermatozoon depends on [Ca2+]i, the intracellular calcium concentration. The chemotaxis in the siphonophores could be simulated in a model that assumes that [Ca2+]i depends on the local concentration of the attractant in the vicinity of the spermatozoon and that a substantial time period is required for the clearance of transient high [Ca2+]i. In the case of ascidians, trajectories similar to those in experiments could be adequately simulated by a variant of this model that assumes that [Ca2+]i depends on the time derivative of the attractant concentration. The properties of these strategies and future problems are discussed in relation to these models.
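The curvature-modulation idea behind both models can be sketched numerically. In the toy track below, a swimmer's path curvature is perturbed by the time derivative of a hypothetical 1/r attractant field (the ascidian-type variant); the field, gains, and geometry are illustrative assumptions, not the authors' fitted parameters.

```python
import math

def simulate_track(n_steps=20000, dt=1e-3, speed=1.0,
                   kappa0=8.0, gain=40.0, source=(1.0, 0.0)):
    """Toy ascidian-type chemotaxis track in 2-D.

    Path curvature kappa is modulated by the time derivative of a
    hypothetical 1/r attractant field; all parameter values and the field
    itself are illustrative, not fitted to experiments. Returns the
    swimmer's initial and final distances from the attractant source."""
    x, y, theta = 0.0, -1.0, 0.0

    def conc(px, py):                 # attractant concentration ~ 1/r
        return 1.0 / (math.hypot(px - source[0], py - source[1]) + 1e-6)

    c_prev = conc(x, y)
    d_start = math.hypot(x - source[0], y - source[1])
    for _ in range(n_steps):
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        c = conc(x, y)
        dcdt = (c - c_prev) / dt      # sampled attractant time derivative
        c_prev = c
        # curvature drops while concentration rises, so up-gradient
        # stretches of the looping track are straightened out
        kappa = kappa0 - gain * dcdt
        theta += speed * kappa * dt
    return d_start, math.hypot(x - source[0], y - source[1])
```

With `gain=0` the track reduces to a closed circle, which makes the sketch easy to check; with a positive gain the circling path responds to the sampled gradient, which is the mechanism the model family exploits.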
Pyramid algorithms as models of human cognition
NASA Astrophysics Data System (ADS)
Pizlo, Zygmunt; Li, Zheng
2003-06-01
There is a growing body of experimental evidence showing that human perception and cognition involve mechanisms that can be adequately modeled by pyramid algorithms. The main aspect of those mechanisms is hierarchical clustering of information: visual images, spatial relations, and states as well as transformations of a problem. In this paper we review prior psychophysical and simulation results on visual size transformation, size discrimination, speed-accuracy tradeoff, figure-ground segregation, and the traveling salesman problem. We also present our new results on graph search and on the 15-puzzle.
Sea-ice deformation in a coupled ocean-sea-ice model and in satellite remote sensing data
NASA Astrophysics Data System (ADS)
Spreen, Gunnar; Kwok, Ron; Menemenlis, Dimitris; Nguyen, An T.
2017-07-01
A realistic representation of sea-ice deformation in models is important for accurate simulation of the sea-ice mass balance. Simulated sea-ice deformation from numerical simulations with 4.5, 9, and 18 km horizontal grid spacing and a viscous-plastic (VP) sea-ice rheology are compared with synthetic aperture radar (SAR) satellite observations (RGPS, RADARSAT Geophysical Processor System) for the time period 1996-2008. All three simulations can reproduce the large-scale ice deformation patterns, but small-scale sea-ice deformations and linear kinematic features (LKFs) are not adequately reproduced. The mean sea-ice total deformation rate is about 40 % lower in all model solutions than in the satellite observations, especially in the seasonal sea-ice zone. A decrease in model grid spacing, however, produces a higher density and more localized ice deformation features. The 4.5 km simulation produces some linear kinematic features, but not with the right frequency. The dependence on length scale and probability density functions (PDFs) of absolute divergence and shear for all three model solutions show a power-law scaling behavior similar to RGPS observations, contrary to what was found in some previous studies. Overall, the 4.5 km simulation produces the most realistic divergence, vorticity, and shear when compared with RGPS data. This study provides an evaluation of high and coarse-resolution viscous-plastic sea-ice simulations based on spatial distribution, time series, and power-law scaling metrics.
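The power-law scaling comparison used above can be illustrated with a minimal fit: given mean deformation rates at several length scales, the scaling exponent is the negative slope of a least-squares line in log-log space. The synthetic values below (exponent 0.2) are purely illustrative, not RGPS or model numbers.

```python
import math

def scaling_exponent(scales, mean_rates):
    """Least-squares slope of log(mean rate) vs. log(length scale).

    Returns beta assuming <rate>(L) ~ L**(-beta), the spatial-scaling
    form used when comparing simulated and observed deformation."""
    xs = [math.log(s) for s in scales]
    ys = [math.log(r) for r in mean_rates]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# synthetic curve with a known exponent of 0.2 (values are illustrative)
scales = [10.0, 20.0, 40.0, 80.0, 160.0]          # length scales, km
mean_rates = [0.05 * s ** -0.2 for s in scales]   # mean total deformation
```

Applying the same fit to deformation rates coarse-grained from model output and from satellite data gives one number per dataset to compare, which is how a scaling metric condenses the evaluation.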
Multiscale solute transport upscaling for a three-dimensional hierarchical porous medium
NASA Astrophysics Data System (ADS)
Zhang, Mingkan; Zhang, Ye
2015-03-01
A laboratory-generated hierarchical, fully heterogeneous aquifer model (FHM) provides a reference for developing and testing an upscaling approach that integrates large-scale connectivity mapping with flow and transport modeling. Based on the FHM, three hydrostratigraphic models (HSMs) that capture lithological (static) connectivity at different resolutions are created, each corresponding to a sedimentary hierarchy. Under increasing system lnK variances (0.1, 1.0, 4.5), flow upscaling is first conducted to calculate equivalent hydraulic conductivity for individual connectivity (or unit) of the HSMs. Given the computed flow fields, an instantaneous, conservative tracer test is simulated by all models. For the HSMs, two upscaling formulations are tested based on the advection-dispersion equation (ADE), implementing space- versus time-dependent macrodispersivity. Comparing flow and transport predictions of the HSMs against those of the reference model, HSMs capturing connectivity at increasing resolutions are more accurate, although upscaling errors increase with system variance. Results suggest: (1) by explicitly modeling connectivity, an enhanced degree of freedom in representing dispersion can improve the ADE-based upscaled models by capturing non-Fickian transport of the FHM; (2) when connectivity is sufficiently resolved, the type of data conditioning used to model transport becomes less critical. Data conditioning, however, is influenced by the prediction goal; (3) when the aquifer is weakly to moderately heterogeneous, the upscaled models adequately capture the transport simulation of the FHM, despite the existence of hierarchical heterogeneity at smaller scales. When the aquifer is strongly heterogeneous, the upscaled models become less accurate because lithological connectivity cannot adequately capture preferential flows; (4) three-dimensional transport connectivities of the hierarchical aquifer differ quantitatively from those analyzed for two-dimensional systems.
This article was corrected on 7 MAY 2015. See the end of the full text for details.
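For reference, the advection-dispersion equation (ADE) underlying the upscaled transport formulations has a closed-form 1-D solution for an instantaneous source. The sketch below evaluates it with illustrative velocity and dispersion values, not those of the study; the upscaled models discussed above generalize the constant dispersion coefficient to space- or time-dependent macrodispersivities.

```python
import math

def ade_plume(x, t, v=1.0, disp=0.1, mass=1.0):
    """Closed-form 1-D ADE solution for an instantaneous source at x = 0:
    a Gaussian of total mass `mass` advecting at velocity v and spreading
    as sqrt(2*disp*t). Parameter values are illustrative only."""
    return (mass / math.sqrt(4.0 * math.pi * disp * t)
            * math.exp(-(x - v * t) ** 2 / (4.0 * disp * t)))
```

Integrating the profile over x recovers the injected mass, and the concentration peak sits at x = v*t, which makes the solution a convenient check on numerical transport codes.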
NASA Astrophysics Data System (ADS)
Li, Guangquan; Field, Malcolm S.
2014-03-01
Documenting and understanding water balances in a karst watershed in which groundwater and surface water resources are strongly interconnected are important aspects for managing regional water resources. Assessing water balances in karst watersheds can be difficult, however, because karst watersheds are so very strongly affected by groundwater flows through solution conduits that are often connected to one or more sinkholes. In this paper we develop a mathematical model to approximate sinkhole porosity from discharge at a downstream spring. The model represents a combination of a traditional linear reservoir model with turbulent hydrodynamics in the solution conduit connecting the downstream spring with the upstream sinkhole, which allows for the simulation of spring discharges and estimation of sinkhole porosity. Noting that spring discharge is an integral of all aspects of water storage and flow, it is mainly dependent on the behavior of the karst aquifer as a whole and can be adequately simulated using the analytical model described in this paper. The model is advantageous in that it obviates the need for a sophisticated numerical model that is much more costly to calibrate and operate. The model is demonstrated using the St. Marks River Watershed in northwestern Florida.
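A minimal sketch of the linear reservoir component of such a model, assuming storage S drains through a spring as Q = kS; the parameter values are illustrative, and the turbulent sinkhole-conduit coupling developed in the paper is omitted.

```python
import math

def spring_discharge(q0=10.0, k=0.05, recharge=0.0, days=100, dt=0.1):
    """Linear-reservoir spring recession: dS/dt = R - Q with Q = k*S.

    With R = 0 this reproduces the classic exponential recession
    Q(t) = Q0 * exp(-k*t). All parameter values are illustrative, and
    the turbulent conduit term of the paper's model is omitted.
    Returns a list of (time, discharge) pairs."""
    s = q0 / k                        # initial storage consistent with Q0
    series = []
    for i in range(round(days / dt) + 1):
        q = k * s
        series.append((i * dt, q))
        s += (recharge - q) * dt      # explicit Euler storage update
    return series
```

Fitting k to an observed spring recession limb is the simplest version of the inverse step the paper performs when estimating sinkhole porosity from downstream discharge.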
NASA Astrophysics Data System (ADS)
Fernández, Alfonso; Najafi, Mohammad Reza; Durand, Michael; Mark, Bryan G.; Moritz, Mark; Jung, Hahn Chul; Neal, Jeffrey; Shastry, Apoorva; Laborde, Sarah; Phang, Sui Chian; Hamilton, Ian M.; Xiao, Ningchuan
2016-08-01
Recent innovations in hydraulic modeling have enabled global simulation of rivers, including simulation of their coupled wetlands and floodplains. Accurate simulations of floodplains using these approaches may imply tremendous advances in global hydrologic studies and in biogeochemical cycling. One such innovation is to explicitly treat sub-grid channels within two-dimensional models, given only remotely sensed data in areas with limited data availability. However, predicting inundated area in floodplains using a sub-grid model has not been rigorously validated. In this study, we applied the LISFLOOD-FP hydraulic model using a sub-grid channel parameterization to simulate inundation dynamics on the Logone River floodplain, in northern Cameroon, from 2001 to 2007. Our goal was to determine whether floodplain dynamics could be simulated with sufficient accuracy to understand human and natural contributions to current and future inundation patterns. Model inputs in this data-sparse region include in situ river discharge, satellite-derived rainfall, and Shuttle Radar Topography Mission (SRTM) floodplain elevation. We found that the model accurately simulated total floodplain inundation, with a Pearson correlation coefficient greater than 0.9 and an RMSE less than 700 km2, compared to peak inundation greater than 6000 km2. Predicted discharge downstream of the floodplain matched measurements (Nash-Sutcliffe efficiency of 0.81), and indicated that net flow from the channel to the floodplain was modeled accurately. However, the spatial pattern of inundation was not well simulated, apparently due to uncertainties in SRTM elevations. We evaluated model results at 250-, 500-, and 1000-m spatial resolutions, and found that results are insensitive to spatial resolution.
We also compared the model output against results from a run of LISFLOOD-FP in which the sub-grid channel parameterization was disabled, finding that the sub-grid parameterization simulated more realistic dynamics. These results suggest that analysis of global inundation is feasible using a sub-grid model, but that spatial patterns at sub-kilometer resolutions still need to be adequately predicted.
NASA Technical Reports Server (NTRS)
Lee, Young-Hee; Mahrt, L.
2005-01-01
This study evaluates the prediction of heat and moisture fluxes from a new land surface scheme with eddy correlation data collected at the old aspen site during the Boreal Ecosystem-Atmosphere Study (BOREAS) in 1994. The model used in this study couples a multilayer vegetation model with a soil model. Inclusion of organic material in the upper soil layer is required to adequately simulate exchange between the soil and subcanopy air. Comparisons between the model and observations are discussed to reveal model misrepresentation of some aspects of the diurnal variation of subcanopy processes. Evapotranspiration
An analysis of airline landing flare data based on flight and training simulator measurements
NASA Technical Reports Server (NTRS)
Heffley, R. K.; Schulman, T. M.; Clement, T. M.
1982-01-01
Landings by experienced airline pilots transitioning to the DC-10, performed in flight and on a simulator, were analyzed and compared using a pilot-in-the-loop model of the landing maneuver. By solving for the effective feedback gains and pilot compensation which described landing technique, it was possible to discern fundamental differences in pilot behavior between the actual aircraft and the simulator. These differences were then used to infer simulator fidelity in terms of specific deficiencies and to quantify the effectiveness of training on the simulator as compared to training in flight. While training on the simulator, pilots exhibited larger effective lag in commanding the flare. The inability to compensate adequately for this lag was associated with hard or inconsistent landings. To some degree this deficiency was carried into flight, thus resulting in a slightly different and inferior landing technique compared with that exhibited by pilots trained exclusively on the actual aircraft.
High fidelity studies of exploding foil initiator bridges, Part 3: ALEGRA MHD simulations
NASA Astrophysics Data System (ADS)
Neal, William; Garasi, Christopher
2017-01-01
Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage, and in the case of EFIs, flyer velocity. Experimental methods have correspondingly generally been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions, and predict a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this third paper of a three part study, the experimental results presented in part 2 are compared against 3-dimensional MHD simulations. This improved experimental capability, along with advanced simulations, offer an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.
Technology transfer of operator-in-the-loop simulation
NASA Technical Reports Server (NTRS)
Yae, K. H.; Lin, H. C.; Lin, T. C.; Frisch, H. P.
1994-01-01
The technology developed for operator-in-the-loop simulation in space teleoperation has been applied to Caterpillar's backhoe, wheel loader, and off-highway truck. On an SGI workstation, the simulation integrates computer modeling of kinematics and dynamics, real-time computation and visualization, and an interface with the operator through the operator's console. The console is interfaced with the workstation through an IBM-PC in which the operator's commands were digitized and sent through an RS-232 serial port. The simulation gave visual feedback adequate for the operator in the loop, with the camera's field of vision projected on a large screen in multiple view windows. The view control can emulate either stationary or moving cameras. This simulator created an innovative engineering design environment by integrating computer software and hardware with the human operator's interactions. The backhoe simulation has been adopted by Caterpillar in building a virtual reality tool for backhoe design.
Experimental study and simulation of space charge stimulated discharge
NASA Astrophysics Data System (ADS)
Noskov, M. D.; Malinovski, A. S.; Cooke, C. M.; Wright, K. A.; Schwab, A. J.
2002-11-01
The electrical discharge of volume distributed space charge in poly(methylmethacrylate) (PMMA) has been investigated both experimentally and by computer simulation. The experimental space charge was implanted in dielectric samples by exposure to a monoenergetic electron beam of 3 MeV. Electrical breakdown through the implanted space charge region within the sample was initiated by a local electric field enhancement applied to the sample surface. A stochastic-deterministic dynamic model for electrical discharge was developed and used in a computer simulation of these breakdowns. The model employs stochastic rules to describe the physical growth of the discharge channels, and deterministic laws to describe the electric field, the charge, and energy dynamics within the discharge channels and the dielectric. Simulated spatial-temporal and current characteristics of the expanding discharge structure during physical growth are quantitatively compared with the experimental data to confirm the discharge model. It was found that a single fixed set of physically based dielectric parameter values was adequate to simulate the complete family of experimental space charge discharges in PMMA. It is proposed that such a set of parameters also provides a useful means to quantify the breakdown properties of other dielectrics.
A model of the productivity of the northern pintail
Carlson, J.D.; Clark, W.R.; Klaas, E.E.
1993-01-01
We adapted a stochastic computer model to simulate productivity of the northern pintail (Anas acuta). Researchers at the Northern Prairie Wildlife Research Center of the U.S. Fish and Wildlife Service originally developed the model to simulate productivity of the mallard (A. platyrhynchos). We obtained data and descriptive information on the breeding biology of pintails from a literature review and from discussions with waterfowl biologists. All biological parameters in the productivity component of the mallard model (e.g., initial body weights, weight loss during laying and incubation, incubation time, clutch size, nest site selection characteristics) were compared with data on pintails and adjusted accordingly. The function in the mallard model that predicts nest initiation in response to pond conditions adequately mimicked pintail behavior and did not require adjustment. Recruitment rate was most sensitive to variations in parameters that control nest success, seasonal duckling survival rate, and yearling and adult body weight. We simulated upland and wetland habitat conditions in central North Dakota and compared simulation results with observed data. Simulated numbers were not significantly different from observed numbers of successful nests during wet, average, and dry wetland conditions. The simulated effect of predator barrier fencing in a study area in central North Dakota increased recruitment rate by an average of 18.4%. This modeling synthesized existing knowledge on the breeding biology of the northern pintail, identified necessary research, and furnished a useful tool for the examination and comparison of various management options.
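The stochastic treatment of nest success at the heart of such productivity models can be sketched with a Mayfield-style Monte Carlo draw: each nest survives a sequence of exposure days with a constant daily survival probability. The survival and exposure values below are invented for illustration and are not taken from the pintail model.

```python
import random

def simulate_nest_success(n_nests, daily_survival, exposure_days, seed=0):
    """Monte Carlo estimate of nest success from a constant daily
    survival probability (Mayfield-style), as used conceptually in
    stochastic waterfowl productivity models."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_nests):
        # A nest succeeds only if it survives every exposure day
        if all(rng.random() < daily_survival for _ in range(exposure_days)):
            successes += 1
    return successes / n_nests

# Illustrative values only: 0.96 daily survival over a 35-day
# laying-plus-incubation period gives roughly 24% nest success.
rate = simulate_nest_success(10000, 0.96, 35)
```

The analytic value is simply 0.96**35 ≈ 0.24; the Monte Carlo framing becomes useful once survival varies stochastically with habitat and predator conditions, as in the full model.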
NASA Astrophysics Data System (ADS)
Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan
2017-09-01
Large uncertainties exist in real-time predictions of the 2015 El Niño event, which have systematic intensity biases that are strongly model-dependent. It is critically important to characterize those model biases so they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other represents the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error corrections, one with a standard simulation and another with an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using CNOP-derived error corrections. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated in Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.
A mechanistic diagnosis of the simulation of soil CO2 efflux of the ACME Land Model
NASA Astrophysics Data System (ADS)
Liang, J.; Ricciuto, D. M.; Wang, G.; Gu, L.; Hanson, P. J.; Mayes, M. A.
2017-12-01
Accurate simulation of the CO2 efflux from soils (i.e., soil respiration) to the atmosphere is critical to project global biogeochemical cycles and the magnitude of climate change in Earth system models (ESMs). Currently, soil respiration simulated by ESMs still has large uncertainty. In this study, a mechanistic diagnosis of soil respiration in the Accelerated Climate Model for Energy (ACME) Land Model (ALM) was conducted using long-term observations at the Missouri Ozark AmeriFlux (MOFLUX) forest site in the central U.S. The results showed that the ALM default run significantly underestimated annual soil respiration and gross primary production (GPP), while incorrectly estimating soil water potential. Improved simulations of soil water potential with site-specific data significantly improved the modeled annual soil respiration, primarily because annual GPP was simultaneously improved. Therefore, soil water potential must be accurately simulated and carefully calibrated in ESMs. Despite improved annual soil respiration, the ALM continued to underestimate soil respiration during peak growing seasons, and to overestimate soil respiration during non-peak growing seasons. Simulations involving increased GPP during peak growing seasons increased soil respiration, while neither improved plant phenology nor increased temperature sensitivity affected the simulation of soil respiration during non-peak growing seasons. One potential reason for the overestimation of the soil respiration during non-peak growing seasons may be that the current model structure is substrate-limited, while microbial dormancy under stress may cause the system to become decomposer-limited. Further studies with more microbial data are required to provide adequate representation of soil respiration and to understand the underlying reasons for inaccurate model simulations.
Benchmarking of vertically-integrated CO2 flow simulations at the Sleipner Field, North Sea
NASA Astrophysics Data System (ADS)
Cowton, L. R.; Neufeld, J. A.; White, N. J.; Bickle, M. J.; Williams, G. A.; White, J. C.; Chadwick, R. A.
2018-06-01
Numerical modeling plays an essential role in both identifying and assessing sub-surface reservoirs that might be suitable for future carbon capture and storage projects. Accuracy of flow simulations is tested by benchmarking against historic observations from on-going CO2 injection sites. At the Sleipner project located in the North Sea, a suite of time-lapse seismic reflection surveys enables the three-dimensional distribution of CO2 at the top of the reservoir to be determined as a function of time. Previous attempts have used Darcy flow simulators to model CO2 migration throughout this layer, given the volume of injection with time and the location of the injection point. Due primarily to computational limitations preventing adequate exploration of model parameter space, these simulations usually fail to match the observed distribution of CO2 as a function of space and time. To circumvent these limitations, we develop a vertically-integrated fluid flow simulator that is based upon the theory of topographically controlled, porous gravity currents. This computationally efficient scheme can be used to invert for the spatial distribution of reservoir permeability required to minimize differences between the observed and calculated CO2 distributions. When a uniform reservoir permeability is assumed, inverse modeling is unable to adequately match the migration of CO2 at the top of the reservoir. If, however, the width and permeability of a mapped channel deposit are allowed to independently vary, a satisfactory match between the observed and calculated CO2 distributions is obtained. Finally, the ability of this algorithm to forecast the flow of CO2 at the top of the reservoir is assessed. By dividing the complete set of seismic reflection surveys into training and validation subsets, we find that the spatial pattern of permeability required to match the training subset can successfully predict CO2 migration for the validation subset. 
This ability suggests that it might be feasible to forecast migration patterns into the future with a degree of confidence. Nevertheless, our analysis highlights the difficulty in estimating reservoir parameters away from the region swept by CO2 without additional observational constraints.
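As a rough illustration of the vertically-integrated approach (not the authors' simulator), a porous gravity current over flat topography with uniform properties reduces to a nonlinear diffusion equation for the current thickness h, which can be stepped explicitly in one dimension. All parameter values below are nondimensional and invented for illustration.

```python
import numpy as np

def gravity_current_step(h, dx, dt, mobility):
    """One explicit, conservative step of the 1-D vertically-integrated
    porous gravity-current equation  dh/dt = mobility * d/dx (h dh/dx),
    a simplified flat-topography analogue of the class of models
    described in the abstract."""
    # Fluxes at cell faces: F = -mobility * h_face * dh/dx
    h_face = 0.5 * (h[1:] + h[:-1])
    flux = -mobility * h_face * (h[1:] - h[:-1]) / dx
    h_new = h.copy()
    h_new[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return h_new

# Spreading of an initial mound of injected CO2 (arbitrary units)
x = np.linspace(-1.0, 1.0, 201)
h = np.maximum(0.0, 0.2 * (1 - (x / 0.3) ** 2))
for _ in range(2000):
    h = gravity_current_step(h, dx=x[1] - x[0], dt=1e-4, mobility=1.0)
```

Because the scheme is written in conservative flux form, the injected volume is preserved as the current spreads and its peak thickness decays; topographic control and spatially varying permeability would enter through a spatially variable mobility and an extra topographic-gradient term.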
Growing C4 perennial grass for bioenergy using a new Agro-BGC ecosystem model
NASA Astrophysics Data System (ADS)
di Vittorio, A. V.; Anderson, R. S.; Miller, N. L.; Running, S. W.
2009-12-01
Accurate, spatially gridded estimates of bioenergy crop yields require 1) biophysically accurate crop growth models and 2) careful parameterization of unavailable inputs to these models. To meet the first requirement we have added the capacity to simulate C4 perennial grass as a bioenergy crop to the Biome-BGC ecosystem model. This new model, hereafter referred to as Agro-BGC, includes enzyme-driven C4 photosynthesis, individual live and dead leaf, stem, and root carbon/nitrogen pools, separate senescence and litter fall processes, fruit growth, optional annual seeding, flood irrigation, a growing-degree-day phenology with a killing frost option, and a disturbance handler that effectively simulates fertilization, harvest, fire, and incremental irrigation. There are four Agro-BGC vegetation parameters that are unavailable for Panicum virgatum (switchgrass), and to meet the second requirement we have optimized the model across multiple calibration sites to obtain representative values for these parameters. We have verified simulated switchgrass yields against observations at three non-calibration sites in Illinois. Agro-BGC simulates switchgrass growth and yield at harvest very well at a single site. Our results suggest that a multi-site optimization scheme would be adequate for producing regional-scale estimates of bioenergy crop yields on high spatial resolution grids.
Hu, Jingwen; Klinich, Kathleen D; Reed, Matthew P; Kokkolaras, Michael; Rupp, Jonathan D
2012-06-01
In motor-vehicle crashes, young school-aged children restrained by vehicle seat belt systems often suffer from abdominal injuries due to submarining. However, the current anthropomorphic test device, so-called "crash dummy", is not adequate for proper simulation of submarining. In this study, a modified Hybrid-III six-year-old dummy model capable of simulating and predicting submarining was developed using MADYMO (TNO Automotive Safety Solutions). The model incorporated improved pelvis and abdomen geometry and properties previously tested in a modified physical dummy. The model was calibrated and validated against four sled tests under two test conditions with and without submarining using a multi-objective optimization method. A sensitivity analysis using this validated child dummy model showed that dummy knee excursion, torso rotation angle, and the difference between head and knee excursions were good predictors for submarining status. It was also shown that restraint system design variables, such as lap belt angle, D-ring height, and seat coefficient of friction (COF), may have opposite effects on head and abdomen injury risks; therefore child dummies and dummy models capable of simulating submarining are crucial for future restraint system design optimization for young school-aged children. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
Transfer of training for aerospace operations: How to measure, validate, and improve it
NASA Technical Reports Server (NTRS)
Cohen, Malcolm M.
1993-01-01
It has been a commonly accepted practice to train pilots and astronauts in expensive, extremely sophisticated, high fidelity simulators, with as much of the real-world feel and response as possible. High fidelity and high validity have often been assumed to be inextricably interwoven, although this assumption may not be warranted. The Project Mercury rate-damping task on the Naval Air Warfare Center's Human Centrifuge Dynamic Flight Simulator, the shuttle landing task on the NASA-ARC Vertical Motion Simulator, and the almost complete acceptance by the airline industry of full-up Boeing 767 flight simulators, are just a few examples of this approach. For obvious reasons, the classical models of transfer of training have never been adequately evaluated in aerospace operations, and there have been few, if any, scientifically valid replacements for the classical models. This paper reviews some of the earlier work involving transfer of training in aerospace operations, and discusses some of the methods by which appropriate criteria for assessing the validity of training may be established.
Measurements and Computations of Flow in an Urban Street System
NASA Astrophysics Data System (ADS)
Castro, Ian P.; Xie, Zheng-Tong; Fuka, V.; Robins, Alan G.; Carpentieri, M.; Hayden, P.; Hertwig, D.; Coceal, O.
2017-02-01
We present results from laboratory and computational experiments on the turbulent flow over an array of rectangular blocks modelling a typical, asymmetric urban canopy at various orientations to the approach flow. The work forms part of a larger study on dispersion within such arrays (project DIPLOS) and concentrates on the nature of the mean flow and turbulence fields within the canopy region, recognising that unless the flow field is adequately represented in computational models there is no reason to expect realistic simulations of the nature of the dispersion of pollutants emitted within the canopy. Comparisons between the experimental data and those obtained from both large-eddy simulation (LES) and direct numerical simulation (DNS) are shown and it is concluded that careful use of LES can produce generally excellent agreement with laboratory and DNS results, lending further confidence in the use of LES for such situations. Various crucial issues are discussed and advice offered to both experimentalists and those seeking to compute canopy flows with turbulence resolving models.
Dimitrakis, Dimitrios A; Syrigou, Maria; Lorentzou, Souzana; Kostoglou, Margaritis; Konstandopoulos, Athanasios G
2017-10-11
This study aims at developing a kinetic model that can adequately describe solar thermochemical water and carbon dioxide splitting with nickel ferrite powder as the active redox material. The kinetic parameters of water splitting of a previous study are revised to include transition times and new kinetic parameters for carbon dioxide splitting are developed. The computational results show a satisfactory agreement with experimental data and continuous multicycle operation under varying operating conditions is simulated. Different test cases are explored in order to improve the product yield. First, a parametric analysis is conducted, investigating the appropriate duration of the oxidation and the thermal reduction step that maximizes the hydrogen yield. Subsequently, a non-isothermal oxidation step is simulated and shown to be a promising option for increasing the hydrogen production. The kinetic model is adapted to simulate the production yields in structured solar reactor components, i.e. extruded monolithic structures, as well.
Loeffler, Johannes R; Ehmki, Emanuel S R; Fuchs, Julian E; Liedl, Klaus R
2016-05-01
Urea derivatives are ubiquitously found in many chemical disciplines. N,N'-substituted ureas may show different conformational preferences depending on their substitution pattern. The high energetic barrier for isomerization of the cis and trans state poses additional challenges on computational simulation techniques aiming at a reproduction of the biological properties of urea derivatives. Herein, we investigate energetics of urea conformations and their interconversion using a broad spectrum of methodologies ranging from data mining, via quantum chemistry to molecular dynamics simulation and free energy calculations. We find that the inversion of urea conformations is inherently slow and beyond the time scale of typical simulation protocols. Therefore, extra care needs to be taken by computational chemists to work with appropriate model systems. We find that both knowledge-driven approaches as well as physics-based methods may guide molecular modelers towards accurate starting structures for expensive calculations to ensure that conformations of urea derivatives are modeled as adequately as possible.
Led, Santiago; Azpilicueta, Leire; Aguirre, Erik; de Espronceda, Miguel Martínez; Serrano, Luis; Falcone, Francisco
2013-01-01
In this work, a novel ambulatory ECG monitoring device developed in-house, called HOLTIN, is analyzed when operating in complex indoor scenarios. The HOLTIN system is described, from the technological platform level to its functional model. In addition, an analysis of the wireless channel behavior, which enables ubiquitous operation, is performed using an in-house 3D ray launching simulation code. The effect of human body presence is taken into account by a novel simplified model embedded within the 3D ray launching code. Simulation as well as measurement results are presented, showing good agreement. These results may aid in the adequate deployment of this novel device to automate conventional medical processes, increasing the coverage radius and optimizing energy consumption. PMID:23584122
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, Jiun-Dar
2017-01-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain with several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. In this study, the impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE) model and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.
NASA Astrophysics Data System (ADS)
Tao, Wei-Kuo; Chern, Jiun-Dar
2017-06-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain with several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multiscale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. The impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE, a CRM) model and Goddard MMF that uses the GCEs as its embedded CRMs. Both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the Goddard MMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.
He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min
2013-01-01
Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be potentially representative of the performance of species-level simulations while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
NASA Astrophysics Data System (ADS)
Ivancic, B.; Riedmann, H.; Frey, M.; Knab, O.; Karl, S.; Hannemann, K.
2016-07-01
The paper summarizes technical results and first highlights of the cooperation between DLR and Airbus Defence and Space (DS) within the work package "CFD Modeling of Combustion Chamber Processes" conducted in the frame of the Propulsion 2020 Project. Within the addressed work package, DLR Göttingen and Airbus DS Ottobrunn have identified several test cases where adequate test data are available and which can be used for proper validation of the computational fluid dynamics (CFD) tools. In this paper, the first test case, the Penn State chamber (RCM1), is discussed. Presenting the simulation results from three different tools, it is shown that the test case can be computed properly with steady-state Reynolds-averaged Navier-Stokes (RANS) approaches. The achieved simulation results reproduce the measured wall heat flux as an important validation parameter very well but also reveal some inconsistencies in the test data which are addressed in this paper.
Using travel times to simulate multi-dimensional bioreactive transport in time-periodic flows.
Sanz-Prat, Alicia; Lu, Chuanhe; Finkel, Michael; Cirpka, Olaf A
2016-04-01
In travel-time models, the spatially explicit description of reactive transport is replaced by associating reactive-species concentrations with the travel time or groundwater age at all locations. These models have been shown adequate for reactive transport in river-bank filtration under steady-state flow conditions. Dynamic hydrological conditions, however, can lead to fluctuations of infiltration velocities, putting the validity of travel-time models into question. In transient flow, the local travel-time distributions change with time. We show that a modified version of travel-time based reactive transport models is valid if only the magnitude of the velocity fluctuates, whereas its spatial orientation remains constant. We simulate nonlinear, one-dimensional, bioreactive transport involving oxygen, nitrate, dissolved organic carbon, aerobic and denitrifying bacteria, considering periodic fluctuations of velocity. These fluctuations make the bioreactive system pulsate: The aerobic zone decreases at times of low velocity and increases at those of high velocity. For the case of diurnal fluctuations, the biomass concentrations cannot follow the hydrological fluctuations and a transition zone containing both aerobic and obligatory denitrifying bacteria is established, whereas a clear separation of the two types of bacteria prevails in the case of seasonal velocity fluctuations. We map the 1-D results to a heterogeneous, two-dimensional domain by means of the mean groundwater age for steady-state flow in both domains. The mapped results are compared to simulation results of spatially explicit, two-dimensional, advective-dispersive-bioreactive transport subject to the same relative fluctuations of velocity as in the one-dimensional model. The agreement between the mapped 1-D and the explicit 2-D results is excellent. 
We conclude that travel-time models of nonlinear bioreactive transport are adequate in systems of time-periodic flow if the flow direction does not change. Copyright © 2016 Elsevier B.V. All rights reserved.
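The core mapping step of a travel-time model, a 1-D reactive-transport result stored as concentration versus travel time and evaluated at each 2-D cell's mean groundwater age, can be sketched in a few lines. The decay rate and age field below are invented for illustration; the actual study uses a full nonlinear bioreactive system.

```python
import numpy as np

# 1-D result: concentration as a function of travel time (e.g. a
# first-order decay standing in for oxygen consumption)
tau_1d = np.linspace(0.0, 50.0, 501)          # travel time [d]
c_1d = np.exp(-0.1 * tau_1d)

# Hypothetical 2-D mean groundwater-age field from a steady-state
# flow model (linear in both indices purely for illustration)
age_2d = np.fromfunction(lambda i, j: 0.5 * i + 0.2 * j, (40, 60))

# Mapping: c(x, y) = c_1d(age(x, y)), via linear interpolation
c_2d = np.interp(age_2d, tau_1d, c_1d)
```

This replaces a spatially explicit 2-D advective-dispersive-reactive simulation with a single table lookup per cell, which is why travel-time models are so much cheaper; the abstract's contribution is showing when this shortcut remains valid under time-periodic flow.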
Shi, Yan; Zhang, Bolun; Cai, Maolin; Zhang, Xiaohua Douglas
2017-09-01
Mechanical ventilation is a key therapy for patients who cannot breathe adequately by themselves, and the dynamics of the mechanical ventilation system are of great significance for the life support of patients. Recently, models of the mechanically ventilated respiratory system with one lung have been used to simulate the respiratory systems of patients. However, humans have two lungs. When the respiratory characteristics of the two lungs differ, a single-lung model cannot reflect the real respiratory system. In this paper, to illustrate the dynamic characteristics of a mechanically ventilated respiratory system with two different lungs, we propose a mathematical model of such a system and conduct experiments to verify the model. Furthermore, we study the dynamics of the mechanically ventilated respiratory system with two different lungs. This research can be used to improve the efficiency and safety of volume-controlled mechanical ventilation systems. Copyright © 2016 John Wiley & Sons, Ltd.
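A minimal sketch of the two-lung idea, not the authors' model, is a two-compartment resistance-compliance network sharing one airway under volume-controlled (constant-flow) ventilation: both lungs see the same airway pressure, so the set flow splits according to each lung's resistance and compliance. All parameter values are illustrative.

```python
import numpy as np

def ventilate_two_lungs(Q_total, T_insp, R, C, dt=1e-3):
    """Constant-flow inflation of a two-compartment R-C lung model with
    a shared airway. Equal airway pressure at the branch point implies
    R[0]*Q0 + V0/C[0] = R[1]*Q1 + V1/C[1] with Q0 + Q1 = Q_total, which
    fixes the flow split at every time step."""
    V = np.zeros(2)
    for _ in range(int(round(T_insp / dt))):
        Q0 = (Q_total * R[1] + V[1] / C[1] - V[0] / C[0]) / (R[0] + R[1])
        V += np.array([Q0, Q_total - Q0]) * dt  # forward-Euler update
    return V  # end-inspiratory volumes of the two lungs [L]

# Illustrative values (L, s, cmH2O units): the stiffer, more resistive
# second lung receives less of the delivered tidal volume.
V_end = ventilate_two_lungs(Q_total=0.5, T_insp=1.0,
                            R=[5.0, 15.0], C=[0.05, 0.02])
```

With identical R and C the two volumes are equal and a single-lung model suffices; the asymmetric case above is precisely where a two-lung model diverges from a lumped one.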
NASA Astrophysics Data System (ADS)
Pavlick, R.; Schimel, D.
2014-12-01
Dynamic Global Vegetation Models (DGVMs) typically employ only a small set of Plant Functional Types (PFTs) to represent the vast diversity of observed vegetation forms and functioning. There is growing evidence, however, that this abstraction may not adequately represent the observed variation in plant functional traits, which is thought to play an important role for many ecosystem functions and for ecosystem resilience to environmental change. The geographic distribution of PFTs in these models is also often based on empirical relationships between present-day climate and vegetation patterns. Projections of future climate change, however, point toward the possibility of novel regional climates, which could lead to no-analog vegetation compositions incompatible with the PFT paradigm. Here, we present results from the Jena Diversity-DGVM (JeDi-DGVM), a novel traits-based vegetation model, which simulates a large number of hypothetical plant growth strategies constrained by functional tradeoffs, thereby allowing for a more flexible temporal and spatial representation of the terrestrial biosphere. First, we compare simulated present-day geographical patterns of functional traits with empirical trait observations (in-situ and from airborne imaging spectroscopy). The observed trait patterns are then used to improve the tradeoff parameterizations of JeDi-DGVM. Finally, focusing primarily on the simulated leaf traits, we run the model with various amounts of trait diversity. We quantify the effects of these modeled biodiversity manipulations on simulated ecosystem fluxes and stocks for both present-day conditions and transient climate change scenarios. The simulation results reveal that the coarse treatment of plant functional traits by current PFT-based vegetation models may contribute substantial uncertainty regarding carbon-climate feedbacks. 
Further development of trait-based models and further investment in global in-situ and spectroscopic plant trait observations are needed.
Data-driven RANS for simulations of large wind farms
NASA Astrophysics Data System (ADS)
Iungo, G. V.; Viola, F.; Ciri, U.; Rotea, M. A.; Leonardi, S.
2015-06-01
In the wind energy industry there is a growing need for real-time predictions of wind turbine wake flows in order to optimize power plant control and inhibit detrimental wake interactions. To this aim, a data-driven RANS approach is proposed in order to achieve very low computational costs and adequate accuracy through the data assimilation procedure. The RANS simulations are implemented with a classical Boussinesq hypothesis and a mixing length turbulence closure model, which is calibrated through the available data. High-fidelity LES simulations of a utility-scale wind turbine operating with different tip speed ratios are used as the database. It is shown that the mixing length model for the RANS simulations can be calibrated accurately through the Reynolds stress of the axial and radial velocity components, and the gradient of the axial velocity in the radial direction. It is found that the mixing length is roughly invariant in the very near wake, then it increases linearly with the downstream distance in the diffusive region. The variation rate of the mixing length in the downstream direction is proposed as a criterion to detect the transition between the near wake and the transition region of a wind turbine wake. Finally, RANS simulations were performed with the calibrated mixing length model, and good agreement with the LES simulations is observed.
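The calibrated mixing-length behavior described above, roughly constant in the very near wake and then growing linearly downstream, together with the growth-rate transition criterion, can be sketched as follows. The parameter values (l0, transition location, slope, threshold) are invented for illustration, not calibrated values from the study.

```python
import numpy as np

def mixing_length(x_D, l0, x_t, slope):
    """Piecewise mixing-length model suggested by the abstract: constant
    l0 in the very near wake (x/D < x_t), then linear growth with
    downstream distance in the diffusive region."""
    x_D = np.asarray(x_D, dtype=float)
    return np.where(x_D < x_t, l0, l0 + slope * (x_D - x_t))

# Detect the near-wake boundary as the first location where the
# streamwise growth rate of the mixing length becomes significant.
x = np.linspace(0.5, 8.0, 76)                  # downstream distance x/D
lm = mixing_length(x, l0=0.05, x_t=2.0, slope=0.03)
growth = np.gradient(lm, x)                    # d(l_m)/d(x/D)
transition_index = int(np.argmax(growth > 0.01))
```

In the data-driven setting, l0 and slope would be fit from the LES Reynolds stresses and velocity gradients rather than prescribed, and the detected transition point marks where the near wake ends.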
Thermal modelling using discrete vasculature for thermal therapy: a review
Kok, H.P.; Gellermann, J.; van den Berg, C.A.T.; Stauffer, P.R.; Hand, J.W.; Crezee, J.
2013-01-01
Reliable temperature information during clinical hyperthermia and thermal ablation is essential for adequate treatment control, but conventional temperature measurements do not provide 3D temperature information. Treatment planning is a very useful tool to improve treatment quality and substantial progress has been made over the last decade. Thermal modelling is a very important and challenging aspect of hyperthermia treatment planning. Various thermal models have been developed for this purpose, with varying complexity. Since blood perfusion is such an important factor in thermal redistribution of energy in in vivo tissue, thermal simulations are most accurately performed by modelling discrete vasculature. This review describes the progress in thermal modelling with discrete vasculature for the purpose of hyperthermia treatment planning and thermal ablation. There has been significant progress in thermal modelling with discrete vasculature. Recent developments have made real-time simulations possible, which can provide feedback during treatment for improved therapy. Future clinical application of thermal modelling with discrete vasculature in hyperthermia treatment planning is expected to further improve treatment quality. PMID:23738700
Fancher, R Marcus; Zhang, Hongjian; Sleczka, Bogdan; Derbin, George; Rockar, Richard; Marathe, Punit
2011-07-01
A preclinical canine model capable of predicting a compound's potential for pH-dependent absorption in humans was developed. This involved the surgical insertion of a gastrostomy feeding tube into the stomach of a beagle dog. The tube was sutured in position to allow frequent withdrawal of gastric fluid for pH measurement. This made it possible to measure pH in the stomach and assess the effect of gastric pH-modifying agents on the absorption of various test compounds. Fasted gastric pH in the dog showed considerable inter- and intra-animal variability. Pretreatment with pentagastrin (6 µg/kg intramuscularly) 20 min prior to test compound administration was determined to be adequate for simulating fasting stomach pH in humans. Pretreatment with famotidine (40 mg orally) 1 h prior to test compound administration was determined to be adequate for simulating human gastric pH when acid-reducing agents are coadministered. Pentagastrin and famotidine pretreatments were used to test two discovery compounds, and distinct differences in their potential for pH-dependent absorption were observed. The model described herein can be used preclinically to screen out compounds, differentiate compounds, and support the assessment of various formulation- and prodrug-based strategies to mitigate the pH effect. Copyright © 2011 Wiley-Liss, Inc. and the American Pharmacists Association
A new numerical benchmark for variably saturated variable-density flow and transport in porous media
NASA Astrophysics Data System (ADS)
Guevara, Carlos; Graf, Thomas
2016-04-01
In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations can lead to potentially unstable situations in which a dense fluid overlies a less dense fluid. Such situations produce instabilities that appear as dense plume fingers migrating downwards, counteracted by vertical upwards flow of freshwater (Simmons et al., Transp. Porous Media, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times as compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model is validated (Diersch and Kolditz, Adv. Water Resour., 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., Transp. Porous Media, 2002) are used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004) coupled with PEST (www.pesthomepage.org) is used to obtain an optimized parameter set capable of adequately representing the data set of Simmons et al. (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.
Large-eddy simulations with wall models
NASA Technical Reports Server (NTRS)
Cabot, W.
1995-01-01
The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as much as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example, in the logarithmic region, obviating the need to expend large amounts of grid points and computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. To calculate a wall-bounded flow accurately with coarse wall resolution, the wall stress is required as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
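The equilibrium (log-law) wall-stress boundary condition described above can be sketched by inverting the logarithmic law for the friction velocity. The constants (kappa = 0.41, B = 5.2) and the fixed-point iteration here are common textbook choices, not necessarily those used in this work.

```python
import math

def wall_stress_from_loglaw(u, y, nu, rho=1.0, kappa=0.41, B=5.2, iters=50):
    """Given the velocity u at a height y in the logarithmic region, invert
    u/u_tau = (1/kappa)*ln(y*u_tau/nu) + B for the friction velocity u_tau
    by fixed-point iteration, and return the wall stress tau_w = rho*u_tau**2."""
    u_tau = max(1e-6, 0.05 * u)  # rough initial guess
    for _ in range(iters):
        u_tau = u / ((1.0 / kappa) * math.log(y * u_tau / nu) + B)
    return rho * u_tau ** 2
```

Because the log law depends only weakly on u_tau inside the logarithm, this fixed-point map contracts quickly, which is why such inversions are cheap enough to apply at every wall face of a coarse LES grid.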
Equilibrium, kinetic, and reactive transport models for plutonium
NASA Astrophysics Data System (ADS)
Schwantes, Jon Michael
Equilibrium, kinetic, and reactive transport models for plutonium (Pu) have been developed to help meet environmental concerns posed by past war-related and present and future peacetime nuclear technologies. A thorough review of the literature identified several hurdles that needed to be overcome in order to develop capable predictive tools for Pu. These hurdles include: (1) missing or ill-defined chemical equilibrium and kinetic constants for environmentally important Pu species; (2) no adequate conceptual model describing the formation of Pu oxy/hydroxide colloids and solids; and (3) an inability of two-phase reactive transport models to adequately simulate Pu behavior in the presence of colloids. A computer program called INVRS K was developed that integrates the geochemical modeling software PHREEQC with a nonlinear regression routine. This program provides a tool for estimating equilibrium and kinetic constants from experimental data. INVRS K was used to regress binding constants for Pu sorbing onto various mineral and humic surfaces. These constants enhance the thermodynamic database for Pu and improve the capability of current predictive tools. Time and temperature studies of the Pu intrinsic colloid were also conducted, and results of these studies are presented here. Formation constants for the fresh and aged Pu intrinsic colloid were estimated by regression with INVRS K. From these results, it was possible to develop a cohesive diagenetic model that describes the formation of Pu oxy/hydroxide colloids and solids. This model provides for the first time a means of deciphering historically unexplained observations with respect to the Pu intrinsic colloid, as well as a basis for simulating the behavior of systems containing these solids.
Discussion of the development and application of reactive transport models is also presented and includes: (1) the general application of a 1-D in flow, three-phase (i.e., dissolved, solid, and colloidal), reactive transport model; (2) a simulation of the effects of dissolution of PuO2 solid and radiolysis on the behavior of Pu diffusing out of a confined pore space; and (3) application of a steady-state three phase reactive transport model to groundwater at the Nevada Test Site.
User's guide for MAGIC-Meteorologic and hydrologic genscn (generate scenarios) input converter
Ortel, Terry W.; Martin, Angel
2010-01-01
Meteorologic and hydrologic data used in watershed modeling studies are collected by various agencies and organizations, and stored in various formats. Data may be in a raw, unprocessed format with little or no quality control, or may be checked for validity before being made available. Flood-simulation systems require data in near real-time so that adequate flood warnings can be made. Additionally, forecasted data are needed to operate flood-control structures to potentially mitigate flood damages. Because real-time data are of a provisional nature, missing data may need to be estimated for use in flood-simulation systems. The Meteorologic and Hydrologic GenScn (Generate Scenarios) Input Converter (MAGIC) can be used to convert data from selected formats into the Hydrologic Simulation System-Fortran hourly-observations format for input to a Watershed Data Management database, for use in hydrologic modeling studies. MAGIC also can reformat the data to the Full Equations model time-series format, for use in hydraulic modeling studies. Examples of the application of MAGIC for use in the flood-simulation system for Salt Creek in northeastern Illinois are presented in this report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective, along with guidance on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.
Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.
2005-01-01
The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate information to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. As in conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limbs of the concentration breakthrough curve. © 2005 Elsevier Ltd. All rights reserved.
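As a minimal illustration of the first-order-decay RSTM structure discussed above, the storage-exchange and decay terms can be reduced to a zero-dimensional two-box sketch (advection and dispersion omitted); the parameter names and values are illustrative, not the study's.

```python
def tsm_two_box(c0, cs0, alpha, area_ratio, lam, lam_s, dt, n_steps):
    """Explicit-Euler sketch of the exchange/decay terms of the transient
    storage model with first-order decay in both zones:
        dC/dt  = alpha*(Cs - C) - lam*C                 (stream)
        dCs/dt = alpha*area_ratio*(C - Cs) - lam_s*Cs   (storage zone)
    where area_ratio = A/As (stream to storage-zone cross-sectional area).
    Returns the (C, Cs) trajectory."""
    c, cs = c0, cs0
    history = [(c, cs)]
    for _ in range(n_steps):
        dc = (alpha * (cs - c) - lam * c) * dt
        dcs = (alpha * area_ratio * (c - cs) - lam_s * cs) * dt
        c, cs = c + dc, cs + dcs
        history.append((c, cs))
    return history
```

With both decay rates set to zero, total mass (proportional to C + Cs/area_ratio) is conserved at every step, which is a useful sanity check before the decay parameters are turned on.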
Projected strengthening of Amazonian dry season by constrained climate model simulations
NASA Astrophysics Data System (ADS)
Boisier, Juan P.; Ciais, Philippe; Ducharne, Agnès; Guimberteau, Matthieu
2015-07-01
The vulnerability of Amazonian rainforest, and the ecological services it provides, depends on an adequate supply of dry-season water, either as precipitation or stored soil moisture. How the rain-bearing South American monsoon will evolve across the twenty-first century is thus a question of major interest. Extensive savanization, with its loss of forest carbon stock and uptake capacity, is an extreme although very uncertain scenario. We show that the contrasting rainfall projections simulated for Amazonia by 36 global climate models (GCMs) can be reproduced with empirical precipitation models, calibrated with historical GCM data as functions of the large-scale circulation. A set of these simple models was therefore calibrated with observations and used to constrain the GCM simulations. In agreement with the current hydrologic trends, the resulting projection towards the end of the twenty-first century is for a strengthening of the monsoon seasonal cycle and a dry-season lengthening in southern Amazonia. With this approach, the increase in the area subjected to lengthy, savannah-prone dry seasons is substantially larger than the GCM-simulated one. Our results confirm the dominant picture shown by the state-of-the-art GCMs, but suggest that the 'model democracy' view can significantly underestimate these impacts.
GCM simulations of Titan's middle and lower atmosphere and comparison to observations
NASA Astrophysics Data System (ADS)
Lora, Juan M.; Lunine, Jonathan I.; Russell, Joellen L.
2015-04-01
Simulation results are presented from a new general circulation model (GCM) of Titan, the Titan Atmospheric Model (TAM), which couples the Flexible Modeling System (FMS) spectral dynamical core to a suite of external/sub-grid-scale physics. These include a new non-gray radiative transfer module that takes advantage of recent data from Cassini-Huygens, large-scale condensation and quasi-equilibrium moist convection schemes, a surface model with "bucket" hydrology, and boundary layer turbulent diffusion. The model produces a realistic temperature structure from the surface to the lower mesosphere, including a stratopause, as well as satisfactory superrotation. The latter is shown to depend on the dynamical core's ability to build up angular momentum from surface torques. Simulated latitudinal temperature contrasts are adequate, compared to observations, and polar temperature anomalies agree with observations. In the lower atmosphere, the insolation distribution is shown to strongly impact turbulent fluxes, and surface heating is maximum at mid-latitudes. Surface liquids are unstable at mid- and low-latitudes, and quickly migrate poleward. The simulated humidity profile and distribution of surface temperatures, compared to observations, corroborate the prevalence of dry conditions at low latitudes. Polar cloud activity is well represented, though the observed mid-latitude clouds remain somewhat puzzling, and some formation alternatives are suggested.
NASA Astrophysics Data System (ADS)
Barrere, Mathieu; Domine, Florent; Decharme, Bertrand; Morin, Samuel; Vionnet, Vincent; Lafaysse, Matthieu
2017-09-01
Climate change projections still suffer from a limited representation of the permafrost-carbon feedback. Predicting the response of permafrost temperature to climate change requires accurate simulations of Arctic snow and soil properties. This study assesses the capacity of the coupled land surface and snow models ISBA-Crocus and ISBA-ES to simulate snow and soil properties at Bylot Island, a high Arctic site. Field measurements complemented with ERA-Interim reanalyses were used to drive the models and to evaluate simulation outputs. Snow height, density, temperature, thermal conductivity and thermal insulance are examined to determine the critical variables involved in the soil and snow thermal regime. Simulated soil properties are compared to measurements of thermal conductivity, temperature and water content. The simulated snow density profiles are unrealistic, which is most likely caused by the lack of representation in snow models of the upward water vapor fluxes generated by the strong temperature gradients within the snowpack. The resulting vertical profiles of thermal conductivity are inverted compared to observations, with high simulated values at the bottom of the snowpack. Still, ISBA-Crocus manages to successfully simulate the soil temperature in winter. Results are satisfactory in summer, but the temperature of the top soil could be better reproduced by adequately representing surface organic layers, i.e., mosses and litter, and in particular their water retention capacity. Transition periods (soil freezing and thawing) are the least well reproduced because the high basal snow thermal conductivity induces an excessively rapid heat transfer between the soil and the snow in simulations. Hence, global climate models should carefully consider Arctic snow thermal properties, and especially the thermal conductivity of the basal snow layer, to perform accurate predictions of the permafrost evolution under climate change.
Florian, J; Garnett, C E; Nallani, S C; Rappaport, B A; Throckmorton, D C
2012-04-01
Pharmacokinetic (PK)-pharmacodynamic modeling and simulation were used to establish a link between methadone dose, concentrations, and Fridericia rate-corrected QT (QTcF) interval prolongation, and to identify a dose that was associated with increased risk of developing torsade de pointes. A linear relationship between concentration and QTcF described the data from five clinical trials in patients on methadone maintenance treatment (MMT). A previously published population PK model adequately described the concentration-time data, and this model was used for simulation. QTcF was increased by a mean (90% confidence interval (CI)) of 17 (12, 22) ms per 1,000 ng/ml of methadone. Based on this model, doses >120 mg/day would increase the QTcF interval by >20 ms. The model predicts that 1-3% of patients would have ΔQTcF >60 ms, and 0.3-2.0% of patients would have QTcF >500 ms at doses of 160-200 mg/day. Our predictions are consistent with available observational data and support the need for electrocardiogram (ECG) monitoring and arrhythmia risk factor assessment in patients receiving methadone doses >120 mg/day.
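The reported linear exposure-response relationship (a mean QTcF increase of 17 ms per 1,000 ng/ml of methadone) reduces to a one-line helper; the function name is illustrative.

```python
def predicted_qtcf_prolongation(conc_ng_ml, slope_ms_per_1000=17.0):
    """Mean QTcF increase (ms) at a given methadone plasma concentration,
    using the linear slope reported in the abstract (17 ms per 1,000 ng/ml).
    The 90% CI on the slope (12-22 ms) could be substituted to bound the
    prediction."""
    return slope_ms_per_1000 * conc_ng_ml / 1000.0
```

For example, `predicted_qtcf_prolongation(1000.0)` returns the mean estimate of 17.0 ms.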
A neurocomputational account of cognitive deficits in Parkinson’s disease
Hélie, Sébastien; Paul, Erick J.; Ashby, F. Gregory
2014-01-01
Parkinson’s disease (PD) is caused by the accelerated death of dopamine (DA) producing neurons. Numerous studies documenting cognitive deficits of PD patients have revealed impairments in a variety of tasks related to memory, learning, visuospatial skills, and attention. Despite this extensive documentation, very few computational models of these deficits have been proposed. In this article, we use the COVIS model of category learning to simulate DA depletion and show that the model suffers from cognitive symptoms similar to those of human participants affected by PD. Specifically, DA depletion in COVIS produced deficits in rule-based categorization, non-linear information-integration categorization, probabilistic classification, rule maintenance, and rule switching. These were observed by simulating results from younger controls, older controls, PD patients, and severe PD patients in five well-known tasks. Differential performance among the different age groups and clinical populations was modeled simply by changing the amount of DA available in the model. This suggests that COVIS may be an adequate model not only of the simulated tasks and phenomena but also, more generally, of the role of DA in these tasks and phenomena. PMID:22683450
Li, Lu; Persaud, Bhagwant; Shalaby, Amer
2017-03-01
This study investigates the use of crash prediction models and micro-simulation to develop an effective surrogate safety assessment measure at the intersection level. With the use of these tools, hypothetical scenarios can be developed and explored to evaluate the safety impacts of design alternatives in a controlled environment, in which factors not directly associated with the design alternatives can be fixed. Micro-simulation models are developed, calibrated, and validated. Traffic conflicts in the micro-simulation models are estimated and linked with observed crash frequency, which greatly alleviates the lengthy time needed to collect sufficient crash data for evaluating alternatives, due to the rare and infrequent nature of crash events. A set of generalized linear models with negative binomial error structure is developed to correlate the simulated conflicts with the observed crash frequency in Toronto, Ontario, Canada. Crash prediction models are also developed for crashes of different impact types and for transit-involved crashes. The resulting statistical significance and the goodness-of-fit of the models suggest adequate predictive ability. Based on the established correlation between simulated conflicts and observed crashes, scenarios are developed in the micro-simulation models to investigate the safety effects of individual transit line elements by making hypothetical modifications to such elements and estimating changes in crash frequency from the resulting changes in conflicts. The findings imply that the existing transit signal priority schemes can have a negative effect on safety performance, and that the existing near-side stop positioning and streetcar transit type can be safer at their current state than if they were to be replaced by their respective counterparts. Copyright © 2017 Elsevier Ltd. All rights reserved.
On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
Turbulent combustion cannot be simulated adequately by conventional moment closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher order correlations in the conventional moment closure models of the chemical source term cannot be neglected, making the application of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied as a logical alternative, wherein the number of computer operations increases only linearly with the number of independent variables, as compared to the exponential increase in a conventional finite difference scheme. A new algorithm was devised that satisfies a conservation restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
The Influences of Airmass Histories on Radical Species During POLARIS
NASA Technical Reports Server (NTRS)
Pierson, James M.; Kawa, S. R.
1998-01-01
The POLARIS mission focused on understanding the processes associated with the decrease of polar stratospheric ozone from spring to fall at high latitudes. This decrease is linked primarily to in situ photochemical destruction by reactive nitrogen species, NO and NO2, which also control other catalytic loss cycles. Steady state models have been used to test photochemistry and radical behavior but are not always adequate in simulating radical species observations. In some cases, air mass history can be important and trajectory models give an improved simulation of the radical species. Trajectory chemistry models, however, still consistently underestimate NO and NO2 abundances compared to measurements along the ER-2 flight track. The Goddard chemistry on trajectory model has been used to test updated rate constants for NO2 + OH, NO2 + O and OH + HNO3, key reactions that affect NO and NO2 abundances. We present comparisons between the modified Goddard chemistry on trajectory model, the JPL steady state model and observations from selected flights.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seethala, C.; Pandithurai, G.; Fast, Jerome D.
We utilized the WRF-Chem multi-scale model to simulate the regional distribution of aerosols, their optical properties, and their effect on radiation over India for a winter month. The model is evaluated using measurements obtained from upper-air soundings, AERONET sun photometers, various satellite instruments, and pyranometers operated by the Indian Meteorological Department. The simulated downward shortwave flux was overestimated when the effect of aerosols on radiation and clouds was neglected. Downward shortwave radiation from a simulation that included aerosol-radiation interaction processes was 5 to 25 W m^-2 closer to the observations, while a simulation that included aerosol-cloud interaction processes was another 1 to 20 W m^-2 closer to the observations. For the few observations available, the model usually underestimated particulate concentration. This is likely due to turbulent mixing, transport errors, and the lack of a secondary organic aerosol treatment in the model. The model efficiently captured the broad regional hotspots, such as high aerosol optical depth over the Indo-Gangetic basin as well as the northwestern and southern parts of India. The regional distribution of aerosol optical depth compares well with AVHRR aerosol optical depth and the TOMS aerosol index. The magnitude and wavelength-dependence of simulated aerosol optical depth were also similar to the AERONET observations across India. Differences in surface shortwave radiation between simulations that included and neglected aerosol-radiation interactions were as high as -25 W m^-2, while differences between simulations that included and neglected aerosol-radiation-cloud interactions were as high as -30 W m^-2. The spatial variations of these differences were also compared with AVHRR observations.
This study suggests that the model is able to qualitatively simulate the impact of aerosols on radiation over India; however, additional measurements of particulate mass and composition are needed to fully evaluate whether the aerosol precursor emissions are adequate when simulating radiative forcing in the region.
An error criterion for determining sampling rates in closed-loop control systems
NASA Technical Reports Server (NTRS)
Brecher, S. M.
1972-01-01
An error criterion that yields a sampling rate for adequate performance of linear, time-invariant, closed-loop, discrete-data control systems was studied. Proper modelling of the closed-loop control system to characterize the error behavior, and the determination of an absolute error definition for the performance of the two commonly used holding devices, are discussed. An adequate relative error criterion is defined as a function of the sampling rate and the parameters characterizing the system, and sampling rates are determined accordingly. The validity of the expressions for the sampling interval was confirmed by computer simulations; their application solves the problem of making a first choice in the selection of sampling rates.
LSD (Landing System Development) Impact Simulation
NASA Astrophysics Data System (ADS)
Ullio, R.; Riva, N.; Pellegrino, P.; Deloo, P.
2012-07-01
In the frame of the Exploration Programs, a soft landing on the planet surface is foreseen. To ensure a successful final landing phase, a landing system with tripod-design landing legs and an adequate crushable damping system was selected, capable of absorbing the residual velocities (vertical, horizontal, and angular) at touchdown and ensuring stability. TAS-I developed a numerical nonlinear dynamic methodology for the landing impact simulation of the Lander system using a commercial explicit finite element analysis code (Altair RADIOSS). In this paper the most significant FE modeling approaches and results of the analytical simulation of landing impact are reported, particularly the definition of leg dimensioning loads and the design update of selected parts where necessary.
A spatial simulation model of hydrology and vegetation dynamics in semi-permanent prairie wetlands
Poiani, Karen A.; Johnson, W. Carter
1993-01-01
The objective of this study was to construct a spatial simulation model of the vegetation dynamics in semi-permanent prairie wetlands. A hydrologic submodel estimated water levels based on precipitation, runoff, and potential evapotranspiration. A vegetation submodel calculated the amount and distribution of emergent cover and open water using a geographic information system. The response of vegetation to water-level changes was based on seed bank composition, seedling recruitment and establishment, and plant survivorship. The model was developed and tested using data from the Cottonwood Lake study site in North Dakota. Data from semi-permanent wetland P1 were used to calibrate the model. Data from a second wetland, P4, were used to evaluate model performance. Simulation results were compared with actual water data from 1979 through 1989. Test results showed that differences between calculated and observed water levels were within 10 cm 75% of the time. Open water over the past decade ranged from 0 to 7% in wetland P4 and from 0 to 8% in submodel simulations. Several model parameters, including evapotranspiration and timing of seedling germination, could be improved with more complex techniques or relatively minor adjustments. Despite these differences, the model adequately represented vegetation dynamics of prairie wetlands and can be used to examine wetland response to natural or human-induced climate change.
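The hydrologic submodel above estimates water levels from precipitation, runoff, and potential evapotranspiration (PET). A minimal bucket-style water balance conveys the idea; this is a sketch of that kind of submodel, not the authors' actual formulation.

```python
def simulate_water_level(level0_cm, precip_cm, runoff_cm, pet_cm):
    """Step a wetland water level forward with the balance
    level(t+1) = level(t) + P + RO - PET, floored at zero (dry basin).
    All inputs are per-time-step depths in cm; precip_cm, runoff_cm and
    pet_cm are equal-length sequences."""
    levels = [level0_cm]
    for p, ro, pet in zip(precip_cm, runoff_cm, pet_cm):
        levels.append(max(0.0, levels[-1] + p + ro - pet))
    return levels
```

A real prairie-wetland submodel would also need basin bathymetry (level-to-area conversion) before the vegetation submodel could map water depth onto emergent cover.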
Xu, Hongmei; Zhou, Wangda; Zhou, Diansong; Li, Jianguo; Al-Huniti, Nidal
2017-03-01
Aztreonam is a monocyclic β-lactam antibiotic often used to treat infections caused by Enterobacteriaceae or Pseudomonas aeruginosa. Despite the long history of clinical use, population pharmacokinetic modeling of aztreonam in renally impaired patients is not yet available. The aims of this study were to assess the impact of renal impairment on aztreonam exposure and to evaluate dosing regimens for patients with renal impairment. A population model describing aztreonam pharmacokinetics following intravenous administration was developed using plasma concentrations from 42 healthy volunteers and renally impaired patients from 2 clinical studies. The final pharmacokinetic model was used to predict aztreonam plasma concentrations and evaluate the probability of pharmacodynamic target attainment (PTA) in patients with different levels of renal function. A 2-compartment model with first-order elimination adequately described aztreonam pharmacokinetics. The population mean estimates of aztreonam clearance, intercompartmental clearance, volume of distribution of the central compartment, and volume of distribution of the peripheral compartment were 4.93 L/h, 9.26 L/h, 7.43 L, and 6.44 L, respectively. Creatinine clearance and body weight were the most significant variables to explain patient variability in aztreonam clearance and volume of distribution, respectively. Simulations using the final pharmacokinetic model resulted in clinical susceptibility breakpoints of 4 and 8 mg/L, respectively, based on the clinical use of 1- and 2-g loading doses with the same or reduced maintenance dose every 8 hours for patients with varying degrees of renal impairment. The population pharmacokinetic modeling and PTA estimation support adequate PTAs (>90%) for the aztreonam label's dose adjustments in patients with moderate and severe renal impairment. © 2016, The American College of Clinical Pharmacology.
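A minimal sketch of the two-compartment IV model with first-order elimination described above, parameterized with the reported population means (CL = 4.93 L/h, Q = 9.26 L/h, V1 = 7.43 L, V2 = 6.44 L); the dose, time grid, and explicit Euler integration are illustrative choices, not the authors' implementation.

```python
def simulate_2cmt_iv_bolus(dose_mg, t_end_h, dt=0.001,
                           CL=4.93, Q=9.26, V1=7.43, V2=6.44):
    """Central-compartment concentration (mg/L) after an IV bolus, from a
    simple explicit Euler integration of the two-compartment ODEs:
        dA1/dt = -CL*C1 - Q*C1 + Q*C2
        dA2/dt =  Q*C1 - Q*C2
    with C1 = A1/V1, C2 = A2/V2. Defaults are the reported population means."""
    n = int(round(t_end_h / dt))
    A1, A2 = dose_mg, 0.0  # drug amounts (mg) in central/peripheral compartments
    conc = [A1 / V1]
    for _ in range(n):
        C1, C2 = A1 / V1, A2 / V2
        A1 += (-CL * C1 - Q * C1 + Q * C2) * dt
        A2 += (Q * C1 - Q * C2) * dt
        conc.append(A1 / V1)
    return conc
```

Repeating such a simulation over a distribution of creatinine-clearance-adjusted CL values is the usual route to the PTA estimates the abstract describes.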
NASA Technical Reports Server (NTRS)
Petroff, D. N.; Scher, S. H.; Sutton, C. E.
1978-01-01
Data were obtained with and without the nose boom and with several strake configurations; also, data were obtained for various control surface deflections. Analysis of the results revealed that selected strake configurations adequately provided low Reynolds number simulation of the high Reynolds number characteristics. The addition of the boom in general tended to reduce the Reynolds number effects.
Guidelines for Calibration and Application of Storm.
1977-12-01
combination method uses the SCS method on pervious areas and the coefficient method on impervious areas of the watershed. Storm water quality is computed...stations, it should be accomplished according to procedures outlined in Reference 7. Adequate storm water quality data are the most difficult and costly...mass discharge of pollutants is negligible. The state-of-the-art in urban storm water quality modeling precludes highly accurate simulation of
NASA Astrophysics Data System (ADS)
Zahmatkesh, Zahra; Karamouz, Mohammad; Nazif, Sara
2015-09-01
Simulation of the rainfall-runoff process in urban areas is of great importance considering the consequences and damages of extreme runoff events and floods. The first issue in flood hazard analysis is rainfall simulation. Large-scale climate signals have proved effective in rainfall simulation and prediction. In this study, an integrated scheme is developed for rainfall-runoff modeling that considers different sources of uncertainty. This scheme includes three main steps: rainfall forecasting, rainfall-runoff simulation, and future runoff prediction. In the first step, data-driven models are developed and used to forecast rainfall using large-scale climate signals as rainfall predictors. Because different sources of uncertainty strongly affect the output of hydrologic models, in the second step the uncertainty associated with input data, model parameters, and model structure is incorporated in rainfall-runoff modeling and simulation. Three rainfall-runoff simulation models are developed to account for model conceptual (structural) uncertainty in real-time runoff forecasting. To analyze the uncertainty of the model structure, streamflows generated by alternative rainfall-runoff models are combined through a weighting method based on K-means clustering. Model parameter and input uncertainty are investigated using an adaptive Markov chain Monte Carlo method. Finally, the calibrated rainfall-runoff models are driven with the forecasted rainfall to predict future runoff for the watershed. The proposed scheme is applied to the case study of the Bronx River watershed, New York City. Results of the uncertainty analysis reveal that simultaneous estimation of model parameters and input uncertainty significantly changes the probability distribution of the model parameters.
It is also observed that combining the outputs of the hydrological models using the proposed clustering scheme improves the accuracy of runoff simulation in the watershed by up to 50% compared with simulations by the individual models. Results indicate that the developed methodology not only provides reliable tools for rainfall and runoff modeling, but also adequate lead time for incorporating required mitigation measures when dealing with potentially extreme runoff events and flood hazard. Results of this study can be used to identify the main factors affecting flood hazard analysis.
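The combination step can be sketched with a simplified stand-in: here each model is weighted by its inverse RMSE against observations, since the abstract does not specify the details of the K-means-based weighting. The function names and the weighting rule are illustrative assumptions, not the paper's method.

```python
def inverse_rmse_weights(observed, simulations):
    """Weight each model by 1/RMSE against observations (normalised to sum
    to 1). A simplified stand-in for the paper's K-means-based weighting."""
    def rmse(sim):
        return (sum((s - o) ** 2 for s, o in zip(sim, observed))
                / len(observed)) ** 0.5
    inv = [1.0 / max(rmse(s), 1e-12) for s in simulations]  # guard exact fits
    total = sum(inv)
    return [w / total for w in inv]

def combine(simulations, weights):
    """Weighted multimodel streamflow at each time step."""
    return [sum(w * s[i] for w, s in zip(weights, simulations))
            for i in range(len(simulations[0]))]
```

A model that matches observations closely receives nearly all the weight, so the combined series tracks the best-performing model while still drawing on the others.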
Numerical simulations of clinical focused ultrasound functional neurosurgery
NASA Astrophysics Data System (ADS)
Pulkkinen, Aki; Werner, Beat; Martin, Ernst; Hynynen, Kullervo
2014-04-01
A computational model utilizing grid and finite difference methods was developed to simulate focused ultrasound functional neurosurgery interventions. The model couples the propagation of ultrasound in fluids (soft tissues) and solids (skull) with acoustic and visco-elastic wave equations. The computational model was applied to simulate clinical focused ultrasound functional neurosurgery treatments performed in patients suffering from therapy-resistant chronic neuropathic pain. Datasets of five patients were used to derive the treatment geometry. Eight sonications performed in the treatments were then simulated with the developed model. Computations were performed by driving the simulated phased-array ultrasound transducer with the acoustic parameters used in the treatments. Resulting focal temperatures and sizes of the thermal foci were compared quantitatively, in addition to qualitative inspection of the simulated pressure and temperature fields. This study found that the computational model and the simulation parameters predicted an average of 24 ± 13% lower focal temperature elevations than observed in the treatments. The size of the simulated thermal focus was found to be 40 ± 13% smaller in the anterior-posterior direction and 22 ± 14% smaller in the inferior-superior direction than in the treatments. The location of the simulated thermal focus was off from the prescribed target by 0.3 ± 0.1 mm, while the peak focal temperature elevation observed in the measurements was off by 1.6 ± 0.6 mm. Although the results of the simulations suggest that there could be some inaccuracies in either the tissue parameters used or in the simulation methods, the simulations were able to predict the focal spot locations and temperature elevations adequately for initial treatment planning performed to assess, for example, the feasibility of sonication.
The accuracy of the simulations could be improved if more precise ultrasound tissue properties (especially of the skull bone) could be obtained.
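The thermal side of such simulations is commonly based on the Pennes bioheat equation. The sketch below is a minimal 1-D explicit finite-difference version with generic soft-tissue property values (all assumed, not the study's 3-D acoustic/visco-elastic model), showing how a focal heat source, conduction, and perfusion produce a temperature rise.

```python
import math

def bioheat_1d(q_peak, n=101, dx=5e-4, dt=0.01, t_end=10.0,
               k=0.5, rho=1050.0, c=3600.0, w_cb=2.0e4):
    """Explicit finite-difference solution of the 1-D Pennes bioheat
    equation for the temperature rise (deg C above baseline) driven by a
    Gaussian focal heat source of peak q_peak (W/m^3). Property values are
    generic soft-tissue assumptions; w_cb lumps perfusion (rho_b*w_b*c_b)."""
    xs = [(i - n // 2) * dx for i in range(n)]
    q = [q_peak * math.exp(-(x / 2e-3) ** 2) for x in xs]  # ~2 mm focus
    temp = [0.0] * n                                       # fixed 0 at edges
    for _ in range(int(t_end / dt)):
        new = temp[:]
        for i in range(1, n - 1):
            lap = (temp[i - 1] - 2.0 * temp[i] + temp[i + 1]) / dx ** 2
            new[i] = temp[i] + dt / (rho * c) * (
                k * lap + q[i] - w_cb * temp[i])  # conduction, source, perfusion
        temp = new
    return temp
```

With the chosen dt and dx the explicit scheme satisfies the stability limit dt < dx²·ρc/(2k) by a wide margin.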
NASA Astrophysics Data System (ADS)
Akinsanola, A. A.; Ajayi, V. O.; Adejare, A. T.; Adeyeri, O. E.; Gbode, I. E.; Ogunjobi, K. O.; Nikulin, G.; Abolude, A. T.
2018-04-01
This study presents an evaluation of the ability of the Rossby Centre Regional Climate Model (RCA4), driven by nine global circulation models (GCMs), to skilfully reproduce the key features of rainfall climatology over West Africa for the period 1980-2005. The seasonal climatology and annual cycle of the RCA4 simulations were assessed over three homogeneous subregions of West Africa (Guinea coast, Savannah, and Sahel) and evaluated using observed precipitation data from the Global Precipitation Climatology Project (GPCP). Furthermore, the model output was evaluated using a wide range of statistical measures. The interseasonal and interannual variability of the RCA4 were further assessed over the subregions and the whole West Africa domain. Results indicate that the RCA4 captures the spatial and interseasonal rainfall pattern adequately but exhibits a weak performance over the Guinea coast. Findings on interannual rainfall variability indicate that the model performance is better over the larger West Africa domain than over the subregions. The largest difference across the RCA4 simulated annual rainfall was found in the Sahel. Results from the Mann-Kendall test showed no significant trend in annual rainfall for the 1980-2005 period, either in the GPCP observation data or in the model simulations over West Africa. In many aspects, the RCA4 simulation driven by HadGEM2-ES performs best over the region. The use of the multimodel ensemble mean resulted in an improved representation of rainfall characteristics over the study domain.
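The Mann-Kendall trend test mentioned above is simple to implement. A minimal version, without tie or autocorrelation corrections, computing the S statistic and the normalised Z score (|Z| < 1.96 means no significant trend at the 5% level):

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test (no tie correction): returns (S, Z)."""
    n = len(series)
    # S counts concordant minus discordant pairs over all i < j
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var)
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    return s, z
```

For a strictly increasing series every pair is concordant, so S equals n(n-1)/2; for a constant series all pair signs are zero.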
Testing the ability of a semidistributed hydrological model to simulate contributing area
NASA Astrophysics Data System (ADS)
Mengistu, S. G.; Spence, C.
2016-06-01
A dry climate, the prevalence of small depressions, and the lack of a well-developed drainage network are characteristics of environments with extremely variable contributing areas to runoff. These types of regions arguably present the greatest challenge to properly understanding catchment streamflow generation processes. Previous studies have shown that contributing area dynamics are important for streamflow response, but the nature of the relationship between the two is not typically understood. Furthermore, it is not often tested how well hydrological models simulate contributing area. In this study, the ability of a semidistributed hydrological model, the PDMROF configuration of Environment Canada's MESH model, was tested to determine if it could simulate contributing area. The study focused on the St. Denis Creek watershed in central Saskatchewan, Canada, which with its considerable topographic depressions, exhibits wide variation in contributing area, making it ideal for this type of investigation. MESH-PDMROF was able to replicate contributing area derived independently from satellite imagery. Daily model simulations revealed a hysteretic relationship between contributing area and streamflow not apparent from the less frequent remote sensing observations. This exercise revealed that contributing area extent can be simulated by a semi-distributed hydrological model with a scheme that assumes storage capacity distribution can be represented with a probability function. However, further investigation is needed to determine if it can adequately represent the complex relationship between streamflow and contributing area that is such a key signature of catchment behavior.
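PDMROF's key assumption, a probability distribution of storage capacities, can be sketched directly. In the original PDM formulation that distribution is a Pareto form; assuming that form here (the abstract does not state which distribution MESH-PDMROF uses), the contributing area fraction is the CDF evaluated at the current critical storage depth:

```python
def contributing_fraction(c_star, c_max, b):
    """Fraction of the basin whose storage capacity is already filled at
    critical depth c_star, for a Pareto capacity distribution
    F(c) = 1 - (1 - c/c_max)**b (illustrative PDM-style assumption)."""
    if c_star <= 0:
        return 0.0
    if c_star >= c_max:
        return 1.0
    return 1.0 - (1.0 - c_star / c_max) ** b
```

As wetness increases, c_star rises and the contributing fraction grows smoothly from 0 to 1, which is what lets the model reproduce a variable contributing area.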
NASA Astrophysics Data System (ADS)
Dobos, P.; Tamás, P.; Illés, B.
2016-11-01
Adequate establishment and operation of warehouse logistics significantly determines a company's competitiveness, because it greatly affects the quality and the selling price of the goods that production companies produce. Implementing and managing an adequate warehouse system requires an adequate warehouse position, stock management model, warehouse technology, material handling strategy, and a motivated workforce committed to process improvement. In practice, companies have paid little attention to selecting the warehouse strategy properly, although it has a major influence on production (in the case of material warehouses) and on smooth customer service (in the case of finished goods warehouses), since a poor choice can incur large material handling losses. Due to dynamically changing production structures, frequent reorganization of warehouse activities is needed, to which the majority of companies essentially do not react. This work presents a simulation test system framework for selecting an eligible warehouse material handling strategy, along with the decision method for the selection.
The impact of mesoscale convective systems on global precipitation: A modeling study
NASA Astrophysics Data System (ADS)
Tao, Wei-Kuo
2017-04-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. Typical MCSs have horizontal scales of a few hundred kilometers (km); therefore, a large domain and high resolution are required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) with 32 CRM grid points and 4 km grid spacing also might not have sufficient resolution and domain size for realistically simulating MCSs. In this study, the impact of MCSs on precipitation processes is examined by conducting numerical model simulations using the Goddard Cumulus Ensemble model (GCE) and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to simulations with fewer grid points (i.e., 32 and 64) and lower resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are either weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures (SSTs) was conducted and resulted in both reduced surface rainfall and evaporation.
NASA Astrophysics Data System (ADS)
Shashkov, Andrey; Lovtsov, Alexander; Tomilin, Dmitry
2017-04-01
Numerous numerical simulations of the discharge plasma in Hall thrusters have been conducted to date. However, on the one hand, adequate two-dimensional (2D) models require considerable time to carry out numerical studies of the breathing mode oscillations or the discharge structure. On the other hand, existing one-dimensional (1D) models are usually too simplistic and do not take into consideration such important phenomena as neutral-wall collisions, the magnetic field induced by the Hall current, and double, secondary, and stepwise ionization together. In this paper, a one-dimensional model with three-dimensional velocity space (1D3V hybrid-PIC) is presented. The model is able to incorporate all the phenomena mentioned above. A new method for simulating neutral-wall collisions in the described space was developed and validated. Simulation results obtained for the KM-88 and KM-60 thrusters are in good agreement with experimental data. The Bohm collision coefficient was the same for both thrusters. Neutral-wall collisions, doubly charged ions, and the induced magnetic field were shown to stabilize the breathing mode oscillations in a Hall thruster under some circumstances.
Electro-osmotic flow of a model electrolyte
NASA Astrophysics Data System (ADS)
Zhu, Wei; Singer, Sherwin J.; Zheng, Zhi; Conlisk, A. T.
2005-04-01
Electro-osmotic flow is studied by nonequilibrium molecular dynamics simulations in a model system chosen to elucidate various factors affecting the velocity profile and facilitate comparison with existing continuum theories. The model system consists of spherical ions and solvent, with stationary, uniformly charged walls that make a channel with a height of 20 particle diameters. We find that hydrodynamic theory adequately describes simple pressure-driven (Poiseuille) flow in this model. However, Poisson-Boltzmann theory fails to describe the ion distribution in important situations, and therefore continuum fluid dynamics based on the Poisson-Boltzmann ion distribution disagrees with simulation results in those situations. The failure of Poisson-Boltzmann theory is traced to the exclusion of ions near the channel walls resulting from reduced solvation of the ions in that region. When a corrected ion distribution is used as input for hydrodynamic theory, agreement with numerical simulations is restored. An analytic theory is presented that demonstrates that repulsion of the ions from the channel walls increases the flow rate, and attraction to the walls has the opposite effect. A recent numerical study of electro-osmotic flow is reanalyzed in the light of our findings, and the results conform well to our conclusions for the model system.
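For reference, the continuum Poiseuille profile that hydrodynamic theory predicts for pressure-driven flow in a slit channel, which the abstract reports the simulations reproduce, is a simple parabola:

```python
def poiseuille_u(y, h, dpdx, mu):
    """Continuum Poiseuille velocity (m/s) in a slit channel of half-height
    h (m) at distance y from the centerline, for pressure gradient dpdx
    (Pa/m, negative drives flow in +x) and viscosity mu (Pa s)."""
    return (-dpdx) * (h * h - y * y) / (2.0 * mu)
```

The velocity vanishes at the walls (y = ±h), peaks on the centerline, and is symmetric in y, the baseline against which the electro-osmotic deviations near the walls are judged.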
Surface diffusion of cyclic hydrocarbons on nickel
NASA Astrophysics Data System (ADS)
Silverwood, I. P.; Armstrong, J.
2018-08-01
Surface diffusion of adsorbates is difficult to measure on realistic systems, yet it is of fundamental interest in catalysis and coating reactions. Quasielastic neutron scattering (QENS) was used to investigate the diffusion of cyclohexane and benzene adsorbed on a nickel metal sponge catalyst. Molecular dynamics simulations of benzene on a model (111) nickel surface showed localised motion with diffusion by intermittent jumps. The experimental data were therefore fitted to the Singwi-Sjölander model, and activation energies for diffusion of 4.0 kJ mol-1 for benzene and 4.3 kJ mol-1 for cyclohexane were calculated for the two-dimensional model. Limited out-of-plane motion was seen in the dynamics simulations and is discussed, although the resolution of the scattering experiment is insufficient to quantify this. Good agreement is seen between the use of a perfect crystal as a model for a disordered system over short time scales, suggesting that simple models are adequate to describe diffusion over polycrystalline metal surfaces on the timescale of a QENS measurement.
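The Singwi-Sjölander jump-diffusion model gives the quasielastic half-width at half-maximum as Γ(Q) = DQ²/(1 + DQ²τ), and an Arrhenius law then yields activation energies like those quoted. A sketch of both relations (the numerical values used in checking are illustrative, not the fitted ones):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def ss_hwhm(q, d, tau):
    """Singwi-Sjölander jump-diffusion HWHM (units of 1/tau): tends to
    D*Q**2 at small Q and saturates at 1/tau at large Q."""
    return d * q * q / (1.0 + d * q * q * tau)

def arrhenius_d(d0, ea_j_mol, t_kelvin):
    """Diffusion coefficient from an activation energy, e.g. the ~4 kJ/mol
    values reported for benzene and cyclohexane (d0 is an assumption)."""
    return d0 * math.exp(-ea_j_mol / (R * t_kelvin))
```

The crossover between the two Γ(Q) limits is what distinguishes jump diffusion from simple Fickian broadening in a QENS fit.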
Ramos-Infante, Samuel Jesús; Ten-Esteve, Amadeo; Alberich-Bayarri, Angel; Pérez, María Angeles
2018-01-01
This paper proposes a discrete particle model based on the random-walk theory for simulating cement infiltration within open-cell structures to prevent osteoporotic proximal femur fractures. Model parameters consider the cement viscosity (high and low) and the desired direction of injection (vertical and diagonal). In vitro and in silico characterizations of augmented open-cell structures validated the computational model and quantified the improved mechanical properties (Young's modulus) of the augmented specimens. The cement injection pattern was successfully predicted in all the simulated cases. All the augmented specimens exhibited enhanced mechanical properties computationally and experimentally (maximum improvements of 237.95 ± 12.91% and 246.85 ± 35.57%, respectively). The open-cell structures with high porosity fraction showed a considerable increase in mechanical properties. Cement augmentation in low porosity fraction specimens resulted in a lesser increase in mechanical properties. The results suggest that the proposed discrete particle model is adequate for use as a femoroplasty planning framework.
de Jesus, E B; de Andrade Lima, L R P
2016-08-01
Souring of oil fields during secondary oil recovery by water injection occurs mainly due to the action of sulfate-reducing bacteria (SRB) adhered to the rock surface in the vicinity of injection wells. Upflow packed-bed bioreactors have been used in petroleum microbiology because of their similarity to the oil field near injection or production wells. However, these reactors do not realistically describe the regions near the injection wells, which are characterized by the presence of a saturated zone and a void region close to the well. In this study, the hydrodynamics of a two-compartment packing-free/packed-bed pilot bioreactor that mimics an oil reservoir was studied. The packing-free compartment was modeled using a continuous stirred tank model with mass exchange between active and stagnant zones, whereas the packed-bed compartment was modeled using a piston-dispersion-exchange model. The proposed model adequately represents the hydrodynamics of the packing-free/packed-bed bioreactor, while the simulations provide important information about the characteristics of the residence time distribution (RTD) curves for different sets of model parameters. Simulations were performed to represent the control of sulfate-reducing bacteria activity in the bioreactor with the use of molybdate in different scenarios. The simulations show that increased amounts of molybdate cause an effective inhibition of the souring sulfate-reducing bacteria activity.
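The stirred-tank-with-exchange compartment can be sketched as a two-ODE tracer washout integrated with forward Euler. Parameter values here are illustrative assumptions, not the fitted ones from the study:

```python
def rtd_cstr_exchange(t_end=50.0, dt=0.01, q=1.0, va=8.0, vs=2.0, kx=0.2):
    """Outlet tracer concentration after a unit pulse for a stirred tank
    with exchange (flow-rate units, kx) between an active zone (volume va)
    and a stagnant zone (vs). Returns (times, outlet concentrations)."""
    ca, cs = 1.0 / va, 0.0        # unit tracer mass injected into active zone
    times, cout = [], []
    for step in range(int(t_end / dt)):
        times.append(step * dt)
        cout.append(ca)
        dca = (-q * ca + kx * (cs - ca)) / va   # outflow plus exchange
        dcs = kx * (ca - cs) / vs               # exchange only
        ca += dt * dca
        cs += dt * dcs
    return times, cout
```

Integrating q times the outlet concentration over a long window recovers nearly all of the injected tracer mass, the basic consistency check for any RTD model; the stagnant zone's slow exchange is what produces the long tail characteristic of these curves.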
NASA Astrophysics Data System (ADS)
Mönnich, David; Troost, Esther G. C.; Kaanders, Johannes H. A. M.; Oyen, Wim J. G.; Alber, Markus; Thorwarth, Daniela
2011-04-01
Hypoxia can be assessed non-invasively by positron emission tomography (PET) using radiotracers such as [18F]fluoromisonidazole (Fmiso) accumulating in poorly oxygenated cells. Typical features of dynamic Fmiso PET data are high signal variability in the first hour after tracer administration and slow formation of a consistent contrast. The purpose of this study is to investigate whether these characteristics can be explained by the current conception of the underlying microscopic processes and to identify fundamental effects. This is achieved by modelling and simulating tissue oxygenation and tracer dynamics on the microscopic scale. In simulations, vessel structures on histology-derived maps act as sources and sinks for oxygen as well as tracer molecules. Molecular distributions in the extravascular space are determined by reaction-diffusion equations, which are solved numerically using a two-dimensional finite element method. Simulated Fmiso time activity curves (TACs), though not directly comparable to PET TACs, reproduce major characteristics of clinical curves, indicating that the microscopic model and the parameter values are adequate. Evidence for dependence of the early PET signal on the vascular fraction is found. Further, possible effects leading to late contrast formation and potential implications on the quantification of Fmiso PET data are discussed.
Calibration of discrete element model parameters: soybeans
NASA Astrophysics Data System (ADS)
Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal
2018-05-01
Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. DEM input parameters for a model are the critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach was used with a standard box-type apparatus. Further, qualitative and quantitative findings such as particle profile, height of kernels retained against the acrylic wall, and angle of repose from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3}), and (b) interaction parameters, particle-particle: coefficient of restitution (0.17); coefficient of static friction (0.26); coefficient of rolling friction (0.08), and particle-wall: coefficient of restitution (0.35); coefficient of static friction (0.30); coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.
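The normal part of the Hertz-Mindlin contact model used in such calibrations has a closed form, F = (4/3)·E*·√R*·δ^(3/2). A sketch using the calibrated soybean radius (half the 6.24 mm geometric mean diameter) but an assumed, hypothetical Young's modulus and Poisson ratio, since the abstract does not report stiffness values:

```python
def hertz_normal_force(delta, r1, r2, e1, e2, nu1, nu2):
    """Hertzian normal contact force (N) between two spheres with overlap
    delta (m): the normal component of the Hertz-Mindlin model."""
    if delta <= 0.0:
        return 0.0
    # effective modulus and radius of the contacting pair
    e_star = 1.0 / ((1 - nu1 ** 2) / e1 + (1 - nu2 ** 2) / e2)
    r_star = 1.0 / (1.0 / r1 + 1.0 / r2)
    return (4.0 / 3.0) * e_star * r_star ** 0.5 * delta ** 1.5
```

The δ^(3/2) scaling means doubling the overlap raises the force by a factor of 2^1.5 ≈ 2.83, which is the stiffening nonlinearity that distinguishes Hertzian from linear-spring DEM contacts.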
Jung, Kwang-Wook; Yoon, Choon-G; Jang, Jae-Ho; Kong, Dong-Soo
2008-01-01
Effective watershed management often demands qualitative and quantitative predictions of the effects of future management activities as arguments for policy makers and administrators. The BASINS geographic information system was developed to compute total maximum daily loads, which is helpful for establishing a hydrological process and water quality modeling system. In this paper, the BASINS toolkit HSPF model is applied to the large (20,271 km²) watershed of the Han River Basin to assess the applicability of HSPF and of BMP scenarios. For proper evaluation of watershed and stream water quality, comprehensive estimation methods are necessary to assess large amounts of point-source and nonpoint-source (NPS) pollution across the total watershed area. In this study, the Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate watershed pollutant loads, accounting for dam operation, and BMP scenarios were applied to control NPS pollution. The 8-day monitoring data (about three years) were used in the calibration and verification processes. Model performance was in the range of "very good" and "good" based on percent difference. The water-quality simulation results were encouraging for this sizable watershed with dam operation practice and mixed land uses; HSPF proved adequate, and its application is recommended to simulate watershed processes and to evaluate BMPs. IWA Publishing 2008.
Ducrot, Virginie; Billoir, Elise; Péry, Alexandre R R; Garric, Jeanne; Charles, Sandrine
2010-05-01
Effects of zinc were studied in the freshwater worm Branchiura sowerbyi using partial and full life-cycle tests. Only newborn and juveniles were sensitive to zinc, displaying effects on survival, growth, and age at first brood at environmentally relevant concentrations. Threshold effect models were proposed to assess toxic effects on individuals. They were fitted to life-cycle test data using Bayesian inference and adequately described life-history trait data in exposed organisms. The daily asymptotic growth rate of theoretical populations was then simulated with a matrix population model, based upon individual-level outputs. Population-level outputs were in accordance with existing literature for controls. Working in a Bayesian framework allowed incorporating parameter uncertainty in the simulation of the population-level response to zinc exposure, thus increasing the relevance of test results in the context of ecological risk assessment.
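The matrix population model step reduces to finding the dominant eigenvalue (the asymptotic growth rate) of a projection matrix, which power iteration computes directly. The 3-stage matrix below is purely hypothetical, for illustration only, not the paper's fitted vital rates:

```python
def dominant_eigenvalue(mat, iters=500):
    """Asymptotic population growth rate (dominant eigenvalue) of a
    non-negative projection matrix via power iteration."""
    n = len(mat)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # max-norm estimate of the eigenvalue
        v = [x / lam for x in w]       # renormalise to avoid overflow
    return lam

# hypothetical Leslie-type matrix: fecundities on top row, survivals below
leslie = [[0.0, 1.5, 2.0],
          [0.5, 0.0, 0.0],
          [0.0, 0.4, 0.0]]
```

For this matrix the characteristic equation is λ³ = 0.75λ + 0.4, so the growth rate is slightly above 1, i.e. a slowly increasing population; toxicant effects on survival or fecundity would pull this value down.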
Heinmets, F; Leary, R H
1991-06-01
A model system (1) was established to analyze purine and pyrimidine metabolism. This system has been expanded to include macrosimulation of DNA synthesis and the study of its regulation by terminal deoxynucleoside triphosphates (dNTPs) via a complex set of interactions. Computer experiments reveal that our model exhibits adequate and reasonable sensitivity in terms of dNTP pool levels and rates of DNA synthesis when inputs to the system are varied. These simulation experiments reveal that in order to achieve maximum DNA synthesis (in terms of purine metabolism), a proper balance is required in guanine and adenine input into this metabolic system. Excessive inputs will become inhibitory to DNA synthesis. In addition, studies are carried out on rates of DNA synthesis when various parameters are changed quantitatively. The current system is formulated by 110 differential equations.
Zolgharni, M; Griffiths, H; Ledger, P D
2010-08-01
The feasibility of detecting a cerebral haemorrhage with a hemispherical MIT coil array consisting of 56 exciter/sensor coils of 10 mm radius and operating at 1 and 10 MHz was investigated. A finite difference method combined with an anatomically realistic head model comprising 12 tissue types was used to simulate the strokes. Frequency-difference images were reconstructed from the modelled data with different levels of the added phase noise and two types of a priori boundary errors: a displacement of the head and a size scaling error. The results revealed that a noise level of 3 m degrees (standard deviation) was adequate for obtaining good visualization of a peripheral stroke (volume approximately 49 ml). The simulations further showed that the displacement error had to be within 3-4 mm and the scaling error within 3-4% so as not to cause unacceptably large artefacts on the images.
d'Entremont, Anna; Corgnale, Claudio; Hardy, Bruce; ...
2018-01-11
Concentrating solar power plants can achieve low cost and efficient renewable electricity production if equipped with adequate thermal energy storage systems. Metal hydride based thermal energy storage systems are appealing candidates due to their demonstrated potential for very high volumetric energy densities, high exergetic efficiencies, and low costs. The feasibility and performance of a thermal energy storage system based on NaMgH2F hydride paired with TiCr1.6Mn0.2 is examined, discussing its integration with a solar-driven ultra-supercritical steam power plant. The simulated storage system is based on a laboratory-scale experimental apparatus. It is analyzed using a detailed transport model accounting for the thermochemical hydrogen absorption and desorption reactions, including kinetics expressions adequate for the current metal hydride system. The results show that the proposed metal hydride pair can suitably be integrated with a high temperature steam power plant. The thermal energy storage system achieves output energy densities of 226 kWh/m³, 9 times the DOE SunShot target, with moderate temperature and pressure swings. Also, simulations indicate that there is significant scope for performance improvement via heat-transfer enhancement strategies.
NASA Technical Reports Server (NTRS)
Cole, Benjamin H.; Yang, Ping; Baum, Bryan A.; Riedi, Jerome; Labonnote, Laurent C.; Thieuleux, Francois; Platnick, Steven
2012-01-01
Insufficient knowledge of the habit distribution and the degree of surface roughness of ice crystals within ice clouds is a source of uncertainty in the forward light scattering and radiative transfer simulations required in downstream applications involving these clouds. The widely used MODerate Resolution Imaging Spectroradiometer (MODIS) Collection 5 ice microphysical model assumes a mixture of various ice crystal shapes with smooth facets, except for aggregates of columns, for which a moderately rough condition is assumed. When compared with PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) polarized reflection data, simulations of polarized reflectance using smooth particles show a poor fit to the measurements, whereas very rough-faceted particles provide an improved fit to the polarized reflectance. In this study a new microphysical model based on a mixture of 9 different ice crystal habits with severely roughened facets is developed. Simulated polarized reflectance using the new ice habit distribution is calculated using a vector adding-doubling radiative transfer model, and the simulations closely agree with the polarized reflectance observed by PARASOL. The new general habit mixture is also tested using a spherical albedo difference analysis, and surface roughening is found to improve the consistency of multi-angular observations. It is suggested that an ice model incorporating an ensemble of different habits with severely roughened surfaces would potentially be an adequate choice for global ice cloud retrievals.
Spacecraft Jitter Attenuation Using Embedded Piezoelectric Actuators
NASA Technical Reports Server (NTRS)
Belvin, W. Keith
1995-01-01
Remote sensing from spacecraft requires precise pointing of measurement devices in order to achieve adequate spatial resolution. Unfortunately, various spacecraft disturbances induce vibrational jitter in the remote sensing instruments. The NASA Langley Research Center has performed analysis, simulations, and ground tests to identify the more promising technologies for minimizing spacecraft pointing jitter. These studies have shown that the use of smart materials to reduce spacecraft jitter is an excellent match between a maturing technology and an operational need. This paper describes the use of embedded piezoelectric actuators for vibration control and payload isolation. In addition, recent advances in modeling, simulation, and testing of spacecraft pointing jitter are discussed.
Dynamic Deployment Simulations of Inflatable Space Structures
NASA Technical Reports Server (NTRS)
Wang, John T.
2005-01-01
The feasibility of using the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method in LS-DYNA to simulate the dynamic deployment of inflatable space structures is investigated. The CV and ALE methods were used to predict the inflation deployments of three folded tube configurations. The CV method was found to be a simple and computationally efficient method that may be adequate for modeling slow inflation deployment, since the inertia of the inflation gas can be neglected. The ALE method was found to be very computationally intensive, since it involves solving three conservation equations for the fluid as well as dealing with complex fluid-structure interactions.
Thermal Structure Analysis of SIRCA Tile for X-34 Wing Leading Edge TPS
NASA Technical Reports Server (NTRS)
Milos, Frank S.; Squire, Thomas H.; Rasky, Daniel J. (Technical Monitor)
1997-01-01
This paper describes in detail thermal/structural analyses of SIRCA tiles which were performed at NASA Ames under the Tile Analysis Task of the X-34 Program. The analyses used the COSMOS/M finite element software to simulate the material response in arc-jet tests, mechanical deflection tests, and the performance of candidate designs for the TPS system. The purposes of the analysis were to verify thermal and structural models for the SIRCA tiles, to establish failure criteria for stressed tiles, to simulate the TPS response under flight aerothermal and mechanical loads, and to confirm that adequate safety margins exist for the actual TPS design.
Modeling of detachment experiments at DIII-D
Canik, John M.; Briesemeister, Alexis R.; Lasnier, C. J.; ...
2014-11-26
Edge fluid–plasma/kinetic–neutral modeling of well-diagnosed DIII-D experiments is performed in order to document in detail how well certain aspects of experimental measurements are reproduced within the model as the transition to detachment is approached. Results indicate that, at high densities near detachment onset, the poloidal temperature profile produced in the simulations agrees well with that measured in experiment. However, matching the heat flux in the model requires a significant increase in the radiated power compared to what is predicted using standard chemical sputtering rates. Lastly, these results suggest that the model is adequate to predict the divertor temperature, provided that the discrepancy in radiated power level can be resolved.
Modelling the morphology of migrating bacterial colonies
NASA Astrophysics Data System (ADS)
Nishiyama, A.; Tokihiro, T.; Badoual, M.; Grammaticos, B.
2010-08-01
We present a model which aims at describing the morphology of colonies of Proteus mirabilis and Bacillus subtilis. Our model is based on a cellular automaton which is obtained by an adequate discretisation of a diffusion-like equation, describing the migration of the bacteria, to which we have added rules simulating the consolidation process. Our basic assumption, following the findings of the group at Chuo University, is that the migration and consolidation processes are controlled by the local density of the bacteria. We show that it is possible within our model to reproduce the morphological diagrams of both bacterial species. Moreover, we model some detailed experiments done by the Chuo University group, obtaining good agreement.
Numerical model for the uptake of groundwater contaminants by phreatophytes
Widdowson, M.A.; El-Sayed, A.; Landmeyer, J.E.
2008-01-01
Conventional solute transport models do not adequately account for the effects of phreatophytic plant systems on contaminant concentrations in shallow groundwater systems. A numerical model was developed and tested to simulate three-dimensional reactive solute transport in a heterogeneous porous medium. Advective-dispersive transport is coupled to biodegradation, sorption, and plant-based attenuation processes including plant uptake and sorption by plant roots. The latter effects are a function of the physical-chemical properties of the individual solutes and plant species. Models for plant uptake were tested and evaluated using experimental data collected at a field site comprising hybrid poplar trees. A non-linear equilibrium isotherm model best represented site conditions.
Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations
NASA Astrophysics Data System (ADS)
Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto
2018-04-01
Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 × 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = -0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
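The static Smagorinsky closure benchmarked above can be sketched in a few lines. This is a generic illustration of the subgrid model, with the eddy viscosity computed as ν_t = (c_s Δ)² |S|, not the authors' solver; the function name is illustrative:

```python
import numpy as np

def smagorinsky_nu_t(grad_u, dx, cs=0.1):
    """Static Smagorinsky eddy viscosity at one grid point.

    grad_u : 3x3 array of resolved velocity gradients du_i/dx_j
    dx     : filter width (taken equal to the grid spacing)
    cs     : Smagorinsky constant (0.1, as in the benchmark above)
    """
    S = 0.5 * (grad_u + grad_u.T)          # resolved strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))   # |S| = sqrt(2 S_ij S_ij)
    return (cs * dx) ** 2 * S_mag          # nu_t = (c_s * dx)^2 |S|
```

The "over-damped" runs in the study correspond simply to raising `cs` well above its standard value, which strengthens subgrid dissipation at the expense of near-wall fidelity.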
Sun, Peng; Zhou, Haoyin; Ha, Seongmin; Hartaigh, Bríain ó; Truong, Quynh A.; Min, James K.
2016-01-01
In clinical cardiology, both anatomy and physiology are needed to diagnose cardiac pathologies. CT imaging and computer simulations provide valuable and complementary data for this purpose. However, it remains challenging to gain useful information from the large amount of high-dimensional, diverse data. Current tools are not adequately integrated to visualize anatomic and physiologic data from a complete yet focused perspective. We introduce a new computer-aided diagnosis framework, which allows for comprehensive modeling and visualization of cardiac anatomy and physiology from CT imaging data and computer simulations, with a primary focus on ischemic heart disease. The following visual information is presented: (1) Anatomy from CT imaging: geometric modeling and visualization of cardiac anatomy, including four heart chambers, left and right ventricular outflow tracts, and coronary arteries; (2) Function from CT imaging: motion modeling, strain calculation, and visualization of four heart chambers; (3) Physiology from CT imaging: quantification and visualization of myocardial perfusion and contextual integration with coronary artery anatomy; (4) Physiology from computer simulation: computation and visualization of hemodynamics (e.g., coronary blood velocity, pressure, shear stress, and fluid forces on the vessel wall). Feedback from cardiologists has confirmed the practical utility of integrating these features for the purpose of computer-aided diagnosis of ischemic heart disease. PMID:26863663
NASA Astrophysics Data System (ADS)
Johnson, Tony; Metcalfe, Jason; Brewster, Benjamin; Manteuffel, Christopher; Jaswa, Matthew; Tierney, Terrance
2010-04-01
The proliferation of intelligent systems in today's military demands increased focus on the optimization of human-robot interactions. Traditional studies in this domain involve large-scale field tests that require humans to operate semiautomated systems under varying conditions within military-relevant scenarios. However, provided that adequate constraints are employed, modeling and simulation can be a cost-effective alternative and supplement. The current presentation discusses a simulation effort that was executed in parallel with a field test with Soldiers operating military vehicles in an environment that represented key elements of the true operational context. In this study, "constructive" human operators were designed to represent average Soldiers executing supervisory control over an intelligent ground system. The constructive Soldiers were simulated performing the same tasks as those performed by real Soldiers during a directly analogous field test. Exercising the models in a high-fidelity virtual environment provided predictive results that represented actual performance in certain aspects, such as situational awareness, but diverged in others. These findings largely reflected the quality of modeling assumptions used to design behaviors and the quality of information available on which to articulate principles of operation. Ultimately, predictive analyses partially supported expectations, with deficiencies explicable via Soldier surveys, experimenter observations, and previously-identified knowledge gaps.
NASA Astrophysics Data System (ADS)
Mohammed, K.; Islam, A. S.; Khan, M. J. U.; Das, M. K.
2017-12-01
With the large number of hydrologic models presently available, along with global weather and geographic datasets, the streamflow of almost any river in the world can be readily modeled. If a reasonable amount of observed data from that river is available, simulations of high accuracy can sometimes be performed after calibrating the model parameters against those observed data through inverse modeling. Although such calibrated models can succeed in simulating the general trend or mean of the observed flows very well, more often than not they fail to adequately simulate the extreme flows. This causes difficulty in tasks such as generating reliable projections of future changes in extreme flows due to climate change, an important task given that floods and droughts are closely connected to people's lives and livelihoods. We propose an approach where the outputs of a physically-based hydrologic model are used as input to a machine learning model to better simulate the extreme flows. To demonstrate this offline-coupling approach, the Soil and Water Assessment Tool (SWAT) was selected as the physically-based hydrologic model, an Artificial Neural Network (ANN) as the machine learning model, and the Ganges-Brahmaputra-Meghna (GBM) river system as the study area. The GBM river system, located in South Asia, is the third largest in the world in terms of freshwater generated and forms the largest delta in the world. The flows of the GBM rivers were simulated separately in order to test the performance of the proposed approach in accurately simulating the extreme flows generated by basins that vary in size, climate, hydrology and anthropogenic intervention on stream networks. Results show that by post-processing the simulated flows of the SWAT models with ANN models, simulations of extreme flows can be significantly improved.
The mean absolute errors in simulating annual maximum/minimum daily flows were reduced from 4967 to 1294 cusecs for the Ganges, from 5695 to 2115 cusecs for the Brahmaputra, and from 689 to 321 cusecs for the Meghna. Using this approach, simulations of hydrologic variables other than streamflow can also be improved, provided that a reasonable amount of observed data for that variable is available.
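The offline-coupling idea (run the hydrologic model first, then post-process its simulated flows against observations) can be sketched as follows. A simple linear corrector stands in here for the study's ANN, and the function names are illustrative, not from SWAT or the paper:

```python
import numpy as np

def mae(obs, sim):
    """Mean absolute error between observed and simulated flows."""
    return np.mean(np.abs(np.asarray(obs) - np.asarray(sim)))

def fit_postprocessor(sim_train, obs_train):
    """Fit a correction model mapping raw model output to observations.

    A linear least-squares corrector stands in for the ANN used in the
    study; the coupling is 'offline' because the hydrologic model runs
    first and its output is then post-processed as a separate step.
    """
    A = np.vstack([sim_train, np.ones_like(sim_train)]).T
    coef, _, _, _ = np.linalg.lstsq(A, obs_train, rcond=None)
    return lambda sim: coef[0] * np.asarray(sim) + coef[1]
```

In the study itself, the corrector is trained per basin, which is why the error reductions quoted above differ between the Ganges, Brahmaputra and Meghna.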
Processes influencing model-data mismatch in drought-stressed, fire-disturbed eddy flux sites
NASA Astrophysics Data System (ADS)
Mitchell, Stephen; Beven, Keith; Freer, Jim; Law, Beverly
2011-06-01
Semiarid forests are very sensitive to climatic change and among the most difficult ecosystems to accurately model. We tested the performance of the Biome-BGC model against eddy flux data taken from young (years 2004-2008), mature (years 2002-2008), and old-growth (year 2000) ponderosa pine stands at Metolius, Oregon, and subsequently examined several potential causes for model-data mismatch. We used the Generalized Likelihood Uncertainty Estimation methodology, which involved 500,000 model runs for each stand (1,500,000 total). Each simulation was run with randomly generated parameter values from a uniform distribution based on published parameter ranges, resulting in modeled estimates of net ecosystem CO2 exchange (NEE) that were compared to measured eddy flux data. Simulations for the young stand exhibited the highest level of performance, though they overestimated ecosystem C accumulation (-NEE) 99% of the time. Among the simulations for the mature and old-growth stands, 100% and 99% of the simulations underestimated ecosystem C accumulation. One obvious area of model-data mismatch is soil moisture, which was overestimated by the model in the young and old-growth stands yet underestimated in the mature stand. However, modeled estimates of soil water content and associated water deficits did not appear to be the primary cause of model-data mismatch; our analysis indicated that gross primary production can be accurately modeled even if soil moisture content is not. Instead, difficulties in adequately modeling ecosystem respiration, mainly autotrophic respiration, appeared to be the fundamental cause of model-data mismatch.
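The GLUE procedure described above (uniform random sampling from published parameter ranges, then comparison of each run against eddy-flux data) can be sketched as below. The Nash-Sutcliffe efficiency is used here as an assumed likelihood measure and behavioral cut, since the abstract does not name one; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def glue_sample(model, param_ranges, obs, n_runs=500, threshold=0.5):
    """Minimal GLUE loop: uniform parameter sampling plus a behavioral cut.

    model        : function(params) -> simulated series (e.g. NEE)
    param_ranges : list of (low, high) bounds, as from published ranges
    obs          : observed series (e.g. eddy-flux NEE)
    threshold    : efficiency below which a run is rejected (assumed)
    """
    behavioral = []
    for _ in range(n_runs):
        params = [rng.uniform(lo, hi) for lo, hi in param_ranges]
        sim = model(params)
        # Nash-Sutcliffe efficiency as an (assumed) likelihood measure
        nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)
        if nse > threshold:
            behavioral.append((params, nse))
    return behavioral
```

The study's 500,000 runs per stand follow the same pattern at scale; the retained behavioral ensemble is what supports statements such as "99% of simulations overestimated C accumulation".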
Processes influencing model-data mismatch in drought-stressed, fire-disturbed eddy flux sites
NASA Astrophysics Data System (ADS)
Mitchell, S. R.; Beven, K.; Freer, J. E.; Law, B. E.
2010-12-01
Semi-arid forests are very sensitive to climatic change and among the most difficult ecosystems to accurately model. We tested the performance of the Biome-BGC model against eddy flux data taken from young (years 2004-2008), mature (years 2002-2008), and old-growth (year 2000) Ponderosa pine stands at Metolius, Oregon, and subsequently examined several potential causes for model-data mismatch. We used the generalized likelihood uncertainty estimation (GLUE) methodology, which involved 500,000 model runs for each stand (1,500,000 total). Each simulation was run with randomly generated parameter values from a uniform distribution based on published parameter ranges, resulting in modeled estimates of net ecosystem CO2 exchange (NEE) that were compared to measured eddy flux data. Simulations for the young stand exhibited the highest level of performance, though they over-estimated ecosystem C accumulation (-NEE) 99% of the time. Among the simulations for the mature and old-growth stands, 100% and 99% of the simulations under-estimated ecosystem C accumulation. One obvious area of model-data mismatch is soil moisture, which was overestimated by the model in the young and old-growth stands yet underestimated in the mature stand. However, modeled estimates of soil water content and associated water deficits did not appear to be the primary cause of model-data mismatch; our analysis indicated that gross primary production can be accurately modeled even if soil moisture content is not. Instead, difficulties in adequately modeling ecosystem respiration, both autotrophic and heterotrophic, appeared to be fundamental causes of model-data mismatch.
A Film Depositional Model of Permeability for Mineral Reactions in Unsaturated Media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freedman, Vicky L.; Saripalli, Prasad; Bacon, Diana H.
2004-11-15
A new modeling approach based on the biofilm models of Taylor et al. (1990, Water Resources Research, 26, 2153-2159) has been developed for modeling changes in porosity and permeability in saturated porous media and implemented in an inorganic reactive transport code. Application of the film depositional models to mineral precipitation and dissolution reactions requires that calculations of mineral films be dynamically changing as a function of time-dependent reaction processes. Since calculations of film thicknesses do not consider mineral density, results show that the film porosity model does not adequately describe volumetric changes in the porous medium. These effects can be included in permeability calculations by coupling the film permeability models (Mualem and Childs and Collis-George) to a volumetric model that incorporates both mineral density and reactive surface area. Model simulations demonstrate that an important difference between the biofilm and mineral film models is in the translation of changes in mineral radii to changes in pore space. Including the effect of tortuosity on pore radii changes improves the performance of the Mualem permeability model for both precipitation and dissolution. Results from simulation of simultaneous dissolution and secondary mineral precipitation provide reasonable estimates of porosity and permeability. Moreover, a comparison of experimental and simulated data shows that the model yields qualitatively reasonable results for permeability changes due to solid-aqueous phase reactions.
Influence of mass transfer resistance on overall nitrate removal rate in upflow sludge bed reactors.
Ting, Wen-Huei; Huang, Ju-Sheng
2006-09-01
A kinetic model with intrinsic reaction kinetics and a simplified model with apparent reaction kinetics for denitrification in upflow sludge bed (USB) reactors were proposed. USB-reactor performance data with and without sludge wasting were also obtained for model verification. An independent batch study showed that the apparent kinetic constant k' did not differ from the intrinsic k, but the apparent Ks' was significantly larger than the intrinsic Ks, suggesting that the intra-granule mass transfer resistance can be modeled by changes in Ks. Calculations of the overall effectiveness factor, Thiele modulus, and Biot number, combined with parametric sensitivity analysis, showed that the influence of internal mass transfer resistance on the overall nitrate removal rate in USB reactors is more significant than that of external mass transfer resistance. The simulated residual nitrate concentrations using the simplified model were in good agreement with the experimental data; the simulated results using the simplified model were also close to those using the kinetic model. Accordingly, the simplified model adequately described the overall nitrate removal rate and can be used for process design.
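The internal-transport diagnostics mentioned above can be illustrated with the classical effectiveness-factor expression, assuming first-order kinetics in a spherical granule. This is a textbook simplification, not the Monod-type kinetics of the study's models:

```python
import numpy as np

def effectiveness_factor(phi):
    """Overall effectiveness factor for a first-order reaction in a
    spherical granule, with Thiele modulus phi = R * sqrt(k / D_e).

    eta = (3 / phi^2) * (phi * coth(phi) - 1)

    eta -> 1 as phi -> 0 (no internal diffusion limitation);
    eta ~ 3/phi for large phi (strong internal mass transfer resistance,
    the regime the sensitivity analysis above identifies as dominant).
    """
    phi = np.asarray(phi, dtype=float)
    return (3.0 / phi**2) * (phi / np.tanh(phi) - 1.0)
```

A larger apparent Ks' with an unchanged k', as found in the batch study, is consistent with an effectiveness factor below unity: diffusion slows the observed rate most at low substrate concentrations.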
Experimental Investigations And Numerical Modelling of 210CR12 Steel in Semi-Solid State
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Zalecki, Władysław; Kuziak, Roman; Jakubowicz, Aleksandra; Weglarczyk, Stanisław
2011-05-01
Experimental investigations, including hot compression and simple closed-die filling tests, were performed. The temperature range of the tests was between 1225 °C and 1320 °C. This temperature selection corresponded to liquid fractions between 20% and 60%, which is typical for thixoforming processes. In the die filling test, steel dies with a ceramic layer were used (a highly refractory air-setting mortar, JM 3300, manufactured by Thermal Ceramics). Experiments were carried out on the Gleeble 3800 physical simulator with an MCU unit. In the paper, the methodology of the experimental investigation is described, the dependency of forming forces on temperature and forming velocity is analysed, and the obtained results are discussed. The second part of the paper concerns numerical modelling of semi-solid forming. Numerical models for both sets of tests were developed, and structural and Computational Fluid Dynamics models are compared. Initial work on microstructural modelling of 210CR12 steel behaviour is described, and a Lattice Boltzmann Method model for thixotropic flows is introduced. Microscale and macroscale models were integrated into a multiscale simulation of semi-solid forming. Some fundamental issues related to multiscale modelling of thixoforming are discussed.
Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes
NASA Astrophysics Data System (ADS)
Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv
2007-04-01
In general, the flow stress models used in computer simulation of machining processes are a function of effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where material hardness in the range of 45 to 60 HRC is involved. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for AISI H13 tool steel, which can be applied over the range of hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.
Simulating Space Capsule Water Landing with Explicit Finite Element Method
NASA Technical Reports Server (NTRS)
Wang, John T.; Lyle, Karen H.
2007-01-01
A study of using an explicit nonlinear dynamic finite element code for simulating the water landing of a space capsule was performed. The finite element model contains Lagrangian shell elements for the space capsule and Eulerian solid elements for the water and air. An Arbitrary Lagrangian Eulerian (ALE) solver and a penalty coupling method were used for predicting the fluid and structure interaction forces. The space capsule was first assumed to be rigid, so the numerical results could be correlated with closed form solutions. The water and air meshes were continuously refined until the solution was converged. The converged maximum deceleration predicted is bounded by the classical von Karman and Wagner solutions and is considered to be an adequate solution. The refined water and air meshes were then used in the models for simulating the water landing of a capsule model that has a flexible bottom. For small pitch angle cases, the maximum deceleration from the flexible capsule model was found to be significantly greater than the maximum deceleration obtained from the corresponding rigid model. For large pitch angle cases, the difference between the maximum deceleration of the flexible model and that of its corresponding rigid model is smaller. Test data of Apollo space capsules with a flexible heat shield qualitatively support the findings presented in this paper.
Garcia-Menendez, Fernando; Hu, Yongtao; Odman, Mehmet T
2014-09-15
Air quality forecasts generated with chemical transport models can provide valuable information about the potential impacts of fires on pollutant levels. However, significant uncertainties are associated with fire-related emission estimates as well as their distribution on gridded modeling domains. In this study, we explore the sensitivity of fine particulate matter concentrations predicted by a regional-scale air quality model to the spatial and temporal allocation of fire emissions. The assessment was completed by simulating a fire-related smoke episode in which air quality throughout the Atlanta metropolitan area was affected on February 28, 2007. Sensitivity analyses were carried out to evaluate the significance of emission distribution among the model's vertical layers, along the horizontal plane, and into hourly inputs. Predicted PM2.5 concentrations were highly sensitive to emission injection altitude relative to planetary boundary layer height. Simulations were also responsive to the horizontal allocation of fire emissions and their distribution into single or multiple grid cells. Additionally, modeled concentrations were greatly sensitive to the temporal distribution of fire-related emissions. The analyses demonstrate that, in addition to adequate estimates of emitted mass, successfully modeling the impacts of fires on air quality depends on an accurate spatiotemporal allocation of emissions. Copyright © 2014 Elsevier B.V. All rights reserved.
Simulating the Risk of Liver Fluke Infection using a Mechanistic Hydro-epidemiological Model
NASA Astrophysics Data System (ADS)
Beltrame, Ludovica; Dunne, Toby; Rose, Hannah; Walker, Josephine; Morgan, Eric; Vickerman, Peter; Wagener, Thorsten
2016-04-01
Liver Fluke (Fasciola hepatica) is a common parasite found in livestock and responsible for considerable economic losses throughout the world. Risk of infection is strongly influenced by climatic and hydrological conditions, which characterise the host environment for parasite development and transmission. Despite on-going control efforts, increases in fluke outbreaks have been reported in recent years in the UK and have often been attributed to climate change. Currently used fluke risk models are based on empirical relationships derived between historical climate and incidence data. However, hydro-climate conditions are becoming increasingly non-stationary due to climate change and direct anthropogenic impacts such as land use change, making empirical models unsuitable for simulating future risk. In this study we introduce a mechanistic hydro-epidemiological model for Liver Fluke, which explicitly simulates habitat suitability for disease development in space and time, representing the parasite life cycle in connection with key environmental conditions. The model is used to assess patterns of Liver Fluke risk for two catchments in the UK under current and potential future climate conditions. Comparisons are made with a widely used empirical model employing different datasets, including data from regional veterinary laboratories. Results suggest that mechanistic models can achieve adequate predictive ability and support adaptive fluke control strategies under climate change scenarios.
NASA Astrophysics Data System (ADS)
Ruiz Pérez, Guiomar; Latron, Jérôme; Llorens, Pilar; Gallart, Francesc; Francés, Félix
2017-04-01
Selecting an adequate hydrological model is the first step to carry out a rainfall-runoff modelling exercise. A hydrological model is a hypothesis of catchment functioning, encompassing a description of dominant hydrological processes and predicting how these processes interact to produce the catchment's response to external forcing. Current research lines emphasize the importance of multiple working hypotheses for hydrological modelling instead of only using a single model. In line with this philosophy, here different hypotheses were considered and analysed to simulate the nonlinear response of a small Mediterranean catchment and to progress in the analysis of its hydrological behaviour. In particular, three hydrological models were considered, representing different potential hypotheses: two lumped models called LU3 and LU4, and one distributed model called TETIS. To determine how well each specific model performed and to assess whether a model was more adequate than another, we devised three complementary tests: one based on the analysis of residual error series, another based on a sensitivity analysis, and the last based on multiple evaluation criteria associated with the concept of a Pareto frontier. This modelling approach, based on multiple working hypotheses, helped to improve our perceptual model of the catchment behaviour and, furthermore, could be used as guidance to improve the performance of other environmental models.
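The Pareto-frontier evaluation used in the third test can be illustrated with a generic non-dominated filter over model scores; this is a standard multi-criteria construct, not code from the cited study, and all names are illustrative:

```python
def pareto_front(points):
    """Return the non-dominated points, all criteria to be minimized
    (e.g. one error measure per evaluation criterion and model run).

    A point dominates another if it is no worse on every criterion and
    strictly better on at least one; the Pareto frontier is the set of
    points no other point dominates.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front
```

Comparing the frontiers produced by LU3, LU4 and TETIS under the same criteria is one way to judge whether one working hypothesis is more adequate than another.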
Numerical Study of a Convective Turbulence Encounter
NASA Technical Reports Server (NTRS)
Proctor, Fred H.; Hamilton, David W.; Bowles, Roland L.
2002-01-01
A numerical simulation of a convective turbulence event is investigated and compared with observational data. The specific case was encountered during one of NASA's flight tests and was characterized by severe turbulence. The event was associated with overshooting convective turrets that contained low to moderate radar reflectivity. Model comparisons with observations are quite favorable. Turbulence hazard metrics are proposed and applied to the numerical data set. Issues such as adequate grid size are examined.
Doppler lidar wind measurement on Eos
NASA Technical Reports Server (NTRS)
Fitzjarrald, D.; Bilbro, J.; Beranek, R.; Mabry, J.
1985-01-01
A polar-orbiting platform segment of the Earth Observing System (EOS) could carry a CO2-laser based Doppler lidar for recording global wind profiles. Development goals would include the manufacture of a 10 J laser with a 2 yr operational life, space-rating the optics and associated software, and the definition of models for global aerosol distributions. Techniques will be needed for optimal scanning and generating computer simulations which will provide adequately accurate weather predictions.
A model for phosphorus transformation and runoff loss for surface-applied manures.
Vadas, P A; Gburek, W J; Sharpley, A N; Kleinman, P J A; Moore, P A; Cabrera, M L; Harmel, R D
2007-01-01
Agricultural P transport in runoff is an environmental concern. An important source of P runoff is surface-applied, unincorporated manures, but computer models used to assess P transport do not adequately simulate P release and transport from surface manures. We developed a model to address this limitation. The model operates on a daily basis and simulates manure application to the soil surface, letting 60% of manure P infiltrate into soil if manure slurry with less than 15% solids is applied. The model divides manure P into four pools: water-extractable inorganic and organic P, and stable inorganic and organic P. The model simulates manure dry matter decomposition and manure stable P transformation to water-extractable P. Manure dry matter and P are assimilated into soil to simulate bioturbation. Water-extractable P is leached from manure when it rains, and a portion of leached P can be transferred to surface runoff. Eighty percent of manure P leached into soil by rain remains in the top 2 cm, while 20% leaches deeper. This 2-cm soil layer contributes P to runoff via desorption. We used data from field studies in Texas, Pennsylvania, Georgia, and Arkansas to build and validate the model. Validation results show the model accurately predicted cumulative P loads in runoff, reflecting successful simulation of the dynamics of manure dry matter, manure and soil P pools, and storm-event runoff P concentrations. Predicted runoff P concentrations were significantly related to measured concentrations (r2 = 0.57), though slightly lower. Our model thus represents an important modification for field or watershed scale models that assess P loss from manured soils.
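The daily bookkeeping rules quoted above (60% infiltration for slurries under 15% solids, and the 80/20 split of rain-leached P between the top 2 cm and deeper soil) can be sketched directly; the function names are hypothetical, not from the published model:

```python
def apply_manure(p_applied, solids_frac):
    """Partition applied manure P between soil and the surface layer.

    Per the model rules above: 60% of manure P infiltrates into soil
    when slurry with less than 15% solids is applied; otherwise all P
    stays on the surface.
    """
    if solids_frac < 0.15:
        infiltrated = 0.60 * p_applied
    else:
        infiltrated = 0.0
    return infiltrated, p_applied - infiltrated

def partition_leached_p(p_leached):
    """Split rain-leached manure P between the top 2 cm of soil (80%),
    which can desorb P to runoff, and deeper soil (20%)."""
    return 0.80 * p_leached, 0.20 * p_leached
```

The full model layers dry matter decomposition, the four P pools, and storm-event runoff transfer on top of this partitioning, but the mass balance at each step follows the same pattern.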
NASA Astrophysics Data System (ADS)
Solman, Silvina A.; Pessacg, Natalia L.
2012-01-01
In this study the capability of the MM5 model in simulating the main mode of intraseasonal variability during the warm season over South America is evaluated through a series of sensitivity experiments. Several 3-month simulations nested into ERA40 reanalysis were carried out using different cumulus schemes and planetary boundary layer schemes in an attempt to define the optimal combination of physical parameterizations for simulating alternating wet and dry conditions over the La Plata Basin (LPB) and South Atlantic Convergence Zone regions, respectively. The results were compared with different observational datasets, and model evaluation was performed taking into account the spatial distribution of monthly precipitation and daily statistics of precipitation over the target regions. Though every experiment was able to capture the contrasting behavior of the precipitation during the simulated period, precipitation was largely underestimated, particularly over the LPB region, mainly due to a misrepresentation of the moisture flux convergence. Experiments using grid nudging of the winds above the planetary boundary layer showed a better performance compared with those in which no constraints were imposed on the regional circulation within the model domain. Overall, no single experiment was found to perform best over the entire domain and during the two contrasting months. The experiment that outperforms depends on the area of interest, with the simulation using the Grell (Kain-Fritsch) cumulus scheme in combination with the MRF planetary boundary layer scheme being more adequate for subtropical (tropical) latitudes. The ensemble of the sensitivity experiments showed a better performance compared with any individual experiment.
Effect of lift-to-drag ratio in pilot rating of the HL-20 landing task
NASA Technical Reports Server (NTRS)
Jackson, E. B.; Rivers, Robert A.; Bailey, Melvin L.
1993-01-01
A man-in-the-loop simulation study of the handling qualities of the HL-20 lifting-body vehicle was made in a fixed-base simulation cockpit at NASA Langley Research Center. The purpose of the study was to identify and substantiate opportunities for improving the original design of the vehicle from a handling qualities and landing performance perspective. Using preliminary wind-tunnel data, a subsonic aerodynamic model of the HL-20 was developed. This model was adequate to simulate the last 75-90 s of the approach and landing. A simple flight-control system was designed and implemented. Using this aerodynamic model as a baseline, visual approaches and landings were made at several vehicle lift-to-drag ratios. Pilots rated the handling characteristics of each configuration using a conventional numerical pilot-rating scale. Results from the study showed a high degree of correlation between the lift-to-drag ratio and pilot rating. Level 1 pilot ratings were obtained when the L/D ratio was approximately 3.8 or higher.
Independent Review of Simulation of Net Infiltration for Present-Day and Potential Future Climates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Review Panel: Soroosh Sorooshian, Ph.D., Panel Chairperson, University of California, Irvine; Jan M. H. Hendrickx, Ph.D., New Mexico Institute of Mining and Technology; Binayak P. Mohanty, Ph.D., Texas A&M University
The DOE Office of Civilian Radioactive Waste Management (OCRWM) tasked Oak Ridge Institute for Science and Education (ORISE) with providing an independent expert review of the documented model and prediction results for net infiltration of water into the unsaturated zone at Yucca Mountain. The specific purpose of the model, as documented in the report MDL-NBS-HS-000023, Rev. 01, is “to provide a spatial representation, including epistemic and aleatory uncertainty, of the predicted mean annual net infiltration at the Yucca Mountain site ...” (p. 1-1). The expert review panel assembled by ORISE concluded that the model report does not provide a technically credible spatial representation of net infiltration at Yucca Mountain. Specifically, the ORISE Review Panel found that: • A critical lack of site-specific meteorological, surface, and subsurface information prevents verification of (i) the net infiltration estimates, (ii) the uncertainty estimates of parameters caused by their spatial variability, and (iii) the assumptions used by the modelers (ranges and distributions) for the characterization of parameters. The paucity of site-specific data used by the modeling team for model implementation and validation is a major deficiency in this effort. • The model does not incorporate at least one potentially important hydrologic process. Subsurface lateral flow is not accounted for by the model, and the assumption that the effect of subsurface lateral flow is negligible is not adequately justified. This issue is especially critical for the wetter climate periods. This omission may be one reason the model results appear to underestimate net infiltration beneath wash environments and therefore imprecisely represent the spatial variability of net infiltration.
• While the model uses assumptions consistently, such as uniform soil depths and a constant vegetation rooting depth, such assumptions may not be appropriate for this net infiltration simulation because they oversimplify a complex landscape and associated hydrologic processes, especially since the model assumptions have not been adequately corroborated by field and laboratory observations at Yucca Mountain.
Narayanan, Sarath Kumar; Cohen, Ralph Clinton; Shun, Albert
2014-06-01
Minimal access techniques have transformed the way pediatric surgery is practiced. Due to various constraints, surgical residency programs have not been able to impart adequate training skills in the routine setting. The advent of new technology and methods in minimally invasive surgery (MIS) has similarly contributed to the need for systematic skills training in a safe, simulated environment. To enable pediatric surgery trainees to learn proper technique, we have advanced a porcine non-survival model for endoscopic surgery. The technical advancements over the past 3 years and a subjective validation of the porcine model from 114 participating trainees, using a standard questionnaire and a 5-point Likert scale, are described here. Mean attitude scores and analysis of variance (ANOVA) were used for statistical analysis of the data. Almost all trainees agreed or strongly agreed that the animal-based model was appropriate (98.35%) and also acknowledged that such workshops provide adequate practical experience before attempting procedures on human subjects (96.6%). The mean attitude score for respondents was 19.08 (SD 3.4, range 4-20). Attitude scores showed no statistical association with years of experience or level of seniority, indicating a positive attitude among all groups of respondents. Structured porcine-based MIS training should be an integral part of skill acquisition for pediatric surgery trainees, and the experience gained can be transferred into clinical practice. We advocate that laparoscopic training begin in a controlled workshop setting before procedures are attempted on human patients.
3D SPH numerical simulation of the wave generated by the Vajont rockslide
NASA Astrophysics Data System (ADS)
Vacondio, R.; Mignosa, P.; Pagani, S.
2013-09-01
A 3D numerical model of the wave generated by the Vajont slide, one of the most destructive ever to occur, is presented in this paper. A meshless Lagrangian Smoothed Particle Hydrodynamics (SPH) technique was adopted to simulate the highly fragmented, violent flow generated by the slide falling into the artificial reservoir. The speed-up achievable via General Purpose Graphics Processing Units (GP-GPU) made it possible to adopt a resolution adequate to describe the phenomenon. Comparison with the data available in the literature showed that the numerical simulation satisfactorily reproduces the maximum run-up as well as the water surface elevation in the residual lake after the event. Moreover, the 3D velocity field of the flow during the event and the discharge hydrograph overtopping the dam were obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewy, Ann; Heim, Kenneth J.; McGonigal, Sean T.
A comparative groundwater hydrogeologic modeling analysis is presented herein to simulate potential contaminant migration pathways in a sole source aquifer in Nassau County, Long Island, New York. The source of contamination is related to historical operations at the Sylvania Corning Plant ('Site'), a 9.49-acre facility located at 70, 100 and 140 Cantiague Rock Road, Town of Oyster Bay, in the westernmost portion of Hicksville, Long Island. The Site had historically been utilized as a nuclear materials manufacturing facility (e.g., cores, slugs, and fuel elements) for reactors used in both research and electric power generation from the early 1950s until the late 1960s. The Site is contaminated with various volatile organic and inorganic compounds, as well as radionuclides. The major contaminants of concern at the Site are tetrachloroethene (PCE), trichloroethene (TCE), nickel, uranium, and thorium. These compounds are present in soil and groundwater underlying the Site and have migrated off-site. The Site is currently being investigated as part of the Formerly Utilized Sites Remedial Action Program (FUSRAP). The main objective of the current study is to simulate the complex hydrogeologic features in the region, such as numerous current and historic production well fields; large, localized recharge basins; and multiple aquifers, and to assess potential contaminant migration pathways originating from the Site. For this purpose, attention was focused on the underlying Magothy formation, which has been impacted by the contaminants of concern. This aquifer provides more than 90% of the potable water supply in the region.
Nassau and Suffolk Counties jointly developed a three-dimensional regional groundwater flow model to help understand the factors affecting groundwater flow regime in the region, to determine adequate water supply for public consumption, to investigate salt water intrusion in localized areas, to evaluate the impacts of regional pumping activity, and to better understand the contaminant transport and fate mechanisms through the underlying aquifers. This regional model, developed for the N.Y. State Department of Environmental Conservation (NYSDEC) by Camp Dresser and McKee (CDM), uses the finite element model DYNFLOW developed by CDM, Cambridge, Massachusetts. The coarseness of the regional model, however, could not adequately capture the hydrogeologic heterogeneity of the aquifer. Specifically, the regional model did not adequately capture the interbedded nature of the Magothy aquifer and, as such, simulated particles tended to track down-gradient from the Site in relatively straight lines while the movement of groundwater in such a heterogeneous aquifer is expected to proceed along a more tortuous path. This paper presents a qualitative comparison of site-specific groundwater flow modeling results with results obtained from the regional model. In order to assess the potential contaminant migration pathways, a particle tracking method was employed. Available site-specific and regional hydraulic conductivity data measured in-situ with respect to depth and location were incorporated into the T-PROG module in GMS model to define statistical variation to better represent the actual stratigraphy and layer heterogeneity. The groundwater flow characteristics in the Magothy aquifer were simulated with the stochastic hydraulic conductivity variation as opposed to constant values as employed in the regional model. Contaminant sources and their exact locations have been fully delineated at the Site during the Remedial Investigation (RI) phase of the project. 
Contaminant migration pathways originating from these source locations at the Site are qualitatively traced within the sole source aquifer utilizing particles introduced at the source locations. The contaminant transport mechanism modeled in the current study is based on pure advection (i.e., plug flow) and mechanical dispersion, while molecular diffusion effects are neglected due to the relatively high groundwater velocities encountered in the aquifer. In addition, the fate of contaminants is ignored here to simulate the worst-case scenario, which treats the contaminants of concern as tracer-like compounds for modeling purposes. The results of the modeling analysis are qualitatively compared with the County's regional model, and patterns of contaminant migration in the region are presented. (authors)
NASA Astrophysics Data System (ADS)
Regina, J. A.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Cheng, Y.; Zhu, J.
2017-12-01
Preferential flow paths (PFPs) resulting from biotic and abiotic factors contribute significantly to the generation of runoff in moist lowland tropical watersheds. Flow through PFPs represents the dominant mechanism by which land use choices affect hydrological behavior. The relative influence of PFPs varies depending upon land-use management practices. Assessing the possible effects of land-use and land-cover change on flows, and other ecosystem services, in the humid tropics partially depends on adequate simulation of PFPs across different land uses. Currently, 5% of global trade passes through the Panama Canal, which is supplied with fresh water from the Panama Canal Watershed. A third set of locks, recently constructed, is expected to double the capacity of the Canal. We incorporated explicit simulation of PFPs into the ADHydro HPC distributed hydrological model to simulate the effects of land-use and land-cover change due to land management incentives on water resources availability in the Panama Canal Watershed. These simulations help to test hypotheses related to the effectiveness of various proposed payments-for-ecosystem-services schemes. This presentation will focus on hydrological model formulation and performance in an HPC environment.
Chenu, Karine; Chapman, Scott C; Hammer, Graeme L; McLean, Greg; Salah, Halim Ben Haj; Tardieu, François
2008-03-01
Physiological and genetic studies of leaf growth often focus on short-term responses, leaving a gap to whole-plant models that predict biomass accumulation, transpiration and yield at crop scale. To bridge this gap, we developed a model that combines an existing model of leaf 6 expansion in response to short-term environmental variations with a model coordinating the development of all leaves of a plant. The latter was based on: (1) rates of leaf initiation, appearance and end of elongation measured in field experiments; and (2) the hypothesis of an independence of the growth between leaves. The resulting whole-plant leaf model was integrated into the generic crop model APSIM which provided dynamic feedback of environmental conditions to the leaf model and allowed simulation of crop growth at canopy level. The model was tested in 12 field situations with contrasting temperature, evaporative demand and soil water status. In observed and simulated data, high evaporative demand reduced leaf area at the whole-plant level, and short water deficits affected only leaves developing during the stress, either visible or still hidden in the whorl. The model adequately simulated whole-plant profiles of leaf area with a single set of parameters that applied to the same hybrid in all experiments. It was also suitable to predict biomass accumulation and yield of a similar hybrid grown in different conditions. This model extends to field conditions existing knowledge of the environmental controls of leaf elongation, and can be used to simulate how their genetic controls flow through to yield.
The Simulation of Fine Scale Nocturnal Boundary Layer Motions with a Meso-Scale Atmospheric Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werth, D.; Kurzeja, R.; Parker, M.
A field project over the Atmospheric Radiation Measurement-Clouds and Radiation Testbed (ARM-CART) site during a period of several nights in September 2007 was conducted to explore the evolution of the low-level jet (LLJ). Data were collected from a tower and a sodar and analyzed for turbulent behavior. To study the full range of nocturnal boundary layer (NBL) behavior, the Regional Atmospheric Modeling System (RAMS) was used to simulate the ARM-CART NBL field experiment and was validated against the data collected from the site. The model was run at high resolution and is well suited to calculating the interactions among the various motions within the boundary layer and their influence on the surface. The model adequately reproduces the synoptic situation and the formation and dissolution cycles of the low-level jet, although it suffers from insufficient cloud production and excessive nocturnal cooling. The authors suggest that observed heat flux data may further improve the realism of the simulations, both in the cloud formation and in the jet characteristics. In a higher resolution simulation, the NBL experiences motion on a range of timescales, as revealed by a wavelet analysis, and these are affected by the presence of the LLJ. The model can therefore be used to provide information on activity throughout the depth of the NBL.
Evaluation of the Surface Representation of the Greenland Ice Sheet in a General Circulation Model
NASA Technical Reports Server (NTRS)
Cullather, Richard I.; Nowicki, Sophie M. J.; Zhao, Bin; Suarez, Max J.
2014-01-01
Simulated surface conditions of the Goddard Earth Observing System model, version 5 (GEOS-5) atmospheric general circulation model (AGCM) are examined for the contemporary Greenland Ice Sheet (GrIS). A surface parameterization that explicitly models surface processes, including snow compaction, meltwater percolation and refreezing, and surface albedo, is found to remedy an erroneous deficit in the annual net surface energy flux and to provide an adequate representation of surface mass balance (SMB) in an evaluation using simulations at two spatial resolutions. The simulated 1980-2008 GrIS SMB average is 24.7 +/- 4.5 cm yr(-1) water equivalent (w.e.) at 0.5-degree model grid spacing, and 18.2 +/- 3.3 cm yr(-1) w.e. at 2-degree grid spacing. The spatial variability and seasonal cycle of the simulation compare favorably with recent studies using regional climate models, while results from the 2-degree integrations reproduce the primary features of the SMB field. In comparison to historical glaciological observations, the coarser-resolution model overestimates accumulation in the southern areas of the GrIS, while the overall SMB is underestimated. These differences relate to the sensitivity of accumulation and melt to the resolution of topography. The GEOS-5 SMB fields contrast with available corresponding atmospheric model simulations from the Coupled Model Intercomparison Project (CMIP5). It is found that only a few of the CMIP5 AGCMs examined produce significant summertime runoff, a dominant feature of the GrIS seasonal cycle. This is a condition that will need to be remedied if potential contributions to future eustatic change from polar ice sheets are to be examined with GCMs.
NASA Astrophysics Data System (ADS)
Xiao, HuiFang; Huang, Bin; Yao, Ge; Kang, WenBin; Gong, Sheng; Pan, Hai; Cao, Yi; Wang, Jun; Zhang, Jian; Wang, Wei
2018-03-01
Understanding the processes of protein adsorption and desorption on nanoparticle surfaces is important for the development of new nanotechnology involving biomaterials; however, an atomistic-resolution picture of these processes, and of the accompanying protein conformational changes, is missing. Here, we report the adsorption of the protein GB1 on a polystyrene nanoparticle surface using atomistic molecular dynamics simulations. Enabled by metadynamics, we explored the relevant phase space and identified three protein states, each involving both adsorbed and desorbed modes. We also studied the changes in the secondary and tertiary structure of GB1 during adsorption and the dominant interactions between the protein and the surface in different adsorption stages. The results obtained from the simulation are more adequate and complete than previous ones. We believe the model presented in this paper, in comparison with previous ones, is a better theoretical model for understanding and explaining the experimental results.
A simple statistical model for geomagnetic reversals
NASA Technical Reports Server (NTRS)
Constable, Catherine
1990-01-01
The diversity of paleomagnetic records of geomagnetic reversals now available indicates that the field configuration during transitions cannot be adequately described by simple zonal or standing field models. A new model described here is based on statistical properties inferred from the present field and is capable of simulating field transitions like those observed. Some insight is obtained into what one can hope to learn from paleomagnetic records. In particular, it is crucial that the effects of smoothing in the remanence acquisition process be separated from true geomagnetic field behavior. This might enable us to determine the time constants associated with the dominant field configuration during a reversal.
Momentum loss in proton-nucleus and nucleus-nucleus collisions
NASA Technical Reports Server (NTRS)
Khan, Ferdous; Townsend, Lawrence W.
1993-01-01
An optical model description, based on multiple scattering theory, of longitudinal momentum loss in proton-nucleus and nucleus-nucleus collisions is presented. The crucial role of the imaginary component of the nucleon-nucleon transition matrix in accounting for longitudinal momentum transfer is demonstrated. Results obtained with this model are compared with Intranuclear Cascade (INC) calculations, as well as with predictions from Vlasov-Uehling-Uhlenbeck (VUU) and quantum molecular dynamics (QMD) simulations. Comparisons are also made with experimental data where available. These indicate that the present model is adequate to account for longitudinal momentum transfer in both proton-nucleus and nucleus-nucleus collisions over a wide range of energies.
Schullcke, B; Krueger-Ziolek, S; Gong, B; Jörres, R A; Mueller-Lisse, U; Moeller, K
2017-10-10
Electrical impedance tomography (EIT) has mostly been used in the Intensive Care Unit (ICU) to monitor ventilation distribution but is also promising for diagnosis in spontaneously breathing patients with obstructive lung diseases. Besides tomographic images, several numerical measures have been proposed to quantitatively assess the lung state. In this study two common measures, the 'Global Inhomogeneity Index' and the 'Coefficient of Variation', were compared regarding their capability to reflect the severity of lung obstruction. A three-dimensional simulation model was used to simulate obstructed lungs, whereby images were reconstructed on a two-dimensional domain. Simulations revealed that minor obstructions are not adequately recognized in the reconstructed images and that obstructions above and below the electrode plane may result in misleading values of the inhomogeneity measures. EIT measurements on several electrode planes are necessary to apply these measures reliably in patients with obstructive lung diseases.
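The two measures compared in this abstract have simple common formulations: the Global Inhomogeneity (GI) index is the sum of absolute deviations of lung-pixel tidal impedance values from their median, normalised by the sum of the pixel values, and the Coefficient of Variation (CV) is the standard deviation divided by the mean. The sketch below uses these common formulations; the study's exact implementation (in particular its lung-region pixel selection) may differ, and the pixel values are hypothetical.

```python
# Sketch of the two EIT inhomogeneity measures, using their common
# formulations; `tidal_pixels` stands for the tidal-image pixel values
# restricted to the lung region. Example values are hypothetical.
from statistics import mean, median, pstdev

def global_inhomogeneity_index(tidal_pixels):
    """GI index: sum of absolute deviations from the median pixel value,
    normalised by the sum of all pixel values."""
    med = median(tidal_pixels)
    return sum(abs(p - med) for p in tidal_pixels) / sum(tidal_pixels)

def coefficient_of_variation(tidal_pixels):
    """CV: population standard deviation divided by the mean."""
    return pstdev(tidal_pixels) / mean(tidal_pixels)

homogeneous = [1.0, 1.0, 1.0, 1.0]
obstructed = [2.0, 0.5, 1.5, 0.2]   # hypothetical uneven ventilation

gi_flat = global_inhomogeneity_index(homogeneous)   # perfectly even -> 0
gi_obst = global_inhomogeneity_index(obstructed)
cv_flat = coefficient_of_variation(homogeneous)
cv_obst = coefficient_of_variation(obstructed)
```

Both measures are zero for a perfectly homogeneous image and grow as ventilation becomes more uneven, which is why they are candidates for grading obstruction severity.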
NASA Astrophysics Data System (ADS)
Zhang, Hongda; Han, Chao; Ye, Taohong; Ren, Zhuyin
2016-03-01
A method of chemistry tabulation combined with a presumed probability density function (PDF) is applied to simulate piloted premixed jet burner flames with high Karlovitz number using large eddy simulation. Thermo-chemical states are tabulated by a combination of the auto-ignition and extended auto-ignition models. To evaluate the capability of the proposed tabulation method to represent the thermo-chemical states under different fresh-gas temperatures, an a priori study is conducted by performing idealised transient one-dimensional premixed flame simulations. A presumed PDF is used to describe the interaction of turbulence and flame, with a beta PDF modeling the reaction progress variable distribution. Two presumed PDF models, a Dirichlet distribution and independent beta distributions, are applied to represent the interaction between the two mixture fractions associated with the three inlet streams. Comparisons of statistical results show that both presumed PDF models for the two mixture fractions are capable of predicting temperature and major species profiles; however, they have a significant effect on the predictions of intermediate species. An analysis of the thermo-chemical state-space representation of the sub-grid scale (SGS) combustion model is performed by comparing correlations between the carbon monoxide mass fraction and temperature. The SGS combustion model based on the proposed chemistry tabulation can reasonably capture the peak value and trend of intermediate species. Model extensions to adequately predict the peak location of intermediate species are discussed.
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1992-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite-differencing schemes. A grid-dependent Monte Carlo scheme following J. Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely alpha_j + gamma_j = alpha_(j-1) + gamma_(j+1). A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme reduces the error considerably.
Theoretical models for coronary vascular biomechanics: Progress & challenges
Waters, Sarah L.; Alastruey, Jordi; Beard, Daniel A.; Bovendeerd, Peter H.M.; Davies, Peter F.; Jayaraman, Girija; Jensen, Oliver E.; Lee, Jack; Parker, Kim H.; Popel, Aleksander S.; Secomb, Timothy W.; Siebes, Maria; Sherwin, Spencer J.; Shipley, Rebecca J.; Smith, Nicolas P.; van de Vosse, Frans N.
2013-01-01
A key aim of the cardiac Physiome Project is to develop theoretical models to simulate the functional behaviour of the heart under physiological and pathophysiological conditions. Heart function is critically dependent on the delivery of an adequate blood supply to the myocardium via the coronary vasculature. Key to this critical function of the coronary vasculature is system dynamics that emerge via the interactions of the numerous constituent components at a range of spatial and temporal scales. Here, we focus on several components for which theoretical approaches can be applied, including vascular structure and mechanics, blood flow and mass transport, flow regulation, angiogenesis and vascular remodelling, and vascular cellular mechanics. For each component, we summarise the current state of the art in model development, and discuss areas requiring further research. We highlight the major challenges associated with integrating the component models to develop a computational tool that can ultimately be used to simulate the responses of the coronary vascular system to changing demands and to diseases and therapies. PMID:21040741
PUFoam: A novel open-source CFD solver for the simulation of polyurethane foams
NASA Astrophysics Data System (ADS)
Karimi, M.; Droghetti, H.; Marchisio, D. L.
2017-08-01
In this work a transient three-dimensional mathematical model is formulated and validated for the simulation of polyurethane (PU) foams. The model is based on computational fluid dynamics (CFD) and is coupled with a population balance equation (PBE) to describe the evolution of the gas bubbles/cells within the PU foam. The front face of the expanding foam is monitored on the basis of the volume-of-fluid (VOF) method using a compressible solver available in OpenFOAM version 3.0.1. The solver is additionally supplemented to include the PBE, solved with the quadrature method of moments (QMOM), the polymerization kinetics, an adequate rheological model and a simple model for the foam thermal conductivity. The new solver is labelled as PUFoam and is, for the first time in this work, validated for 12 different mixing-cup experiments. Comparison of the time evolution of the predicted and experimentally measured density and temperature of the PU foam shows the potentials and limitations of the approach.
NASA Astrophysics Data System (ADS)
Erokhin, Sergey; Berkov, Dmitry; Ito, Masaaki; Kato, Akira; Yano, Masao; Michels, Andreas
2018-03-01
We demonstrate how micromagnetic simulations can be employed to characterize and analyze the magnetic microstructure of nanocomposites. For the example of nanocrystalline Nd-Fe-B, a potential material for future permanent-magnet applications, we have compared three different models for the micromagnetic analysis of this material class: (i) a description of the nanocomposite microstructure in terms of Stoner-Wohlfarth particles with and without the magnetodipolar interaction; (ii) a model based on a core-shell representation of the nanograins; and (iii) the latter model including a contribution from superparamagnetic clusters. The relevant parameter spaces have been systematically scanned with the aim of establishing which micromagnetic approach most adequately describes experimental data for this material. According to our results, only the last, most sophisticated model is able to provide excellent agreement with the measured hysteresis loop. The presented methodology is generally applicable to multiphase magnetic nanocomposites, and it highlights the complex interrelationship between the microstructure, magnetic interactions, and the macroscopic magnetic properties.
Robertson, John B.
1976-01-01
Aqueous chemical and low-level radioactive effluents have been disposed to seepage ponds since 1952 at the Idaho National Engineering Laboratory. The solutions percolate toward the Snake River Plain aquifer (135 m below) through interlayered basalts and unconsolidated sediments and an extensive zone of ground water perched on a sedimentary layer about 40 m beneath the ponds. A three-segment numerical model was developed to simulate the system, including effects of convection, hydrodynamic dispersion, radioactive decay, and adsorption. Simulated hydraulics and solute migration patterns for all segments agree adequately with the available field data. The model can be used to project subsurface distributions of waste solutes under a variety of assumed conditions for the future. Although chloride and tritium reached the aquifer several years ago, the model analysis suggests that the more easily sorbed solutes, such as cesium-137 and strontium-90, would not reach the aquifer in detectable concentrations within 150 years for the conditions assumed. (Woodard-USGS)
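The abstract's conclusion, that tritium and chloride reached the aquifer quickly while sorbed solutes like strontium-90 decay before arriving, follows from the standard interplay of retardation and radioactive decay. A minimal back-of-envelope sketch (not the report's three-segment numerical model): a solute with retardation factor R travelling distance L at pore velocity v arrives after t = R * L / v, and first-order decay leaves a fraction exp(-lambda * t). The velocity and retardation values below are hypothetical illustrations; only the 135 m depth to the aquifer is from the text.

```python
# Back-of-envelope retardation-plus-decay sketch (not the report's model).
# Parameter values below are assumed for illustration, except L = 135 m.
import math

def arrival_time(L_m, v_m_per_yr, R):
    """Travel time for a solute retarded by factor R: t = R * L / v."""
    return R * L_m / v_m_per_yr

def surviving_fraction(half_life_yr, t_yr):
    """Fraction remaining after time t under first-order decay."""
    lam = math.log(2) / half_life_yr
    return math.exp(-lam * t_yr)

L, v = 135.0, 10.0   # 135 m to the aquifer; 10 m/yr pore velocity (assumed)

t_tritium = arrival_time(L, v, R=1.0)    # unretarded: arrives in years
t_sr90 = arrival_time(L, v, R=50.0)      # strongly sorbed (R assumed)
frac_sr90 = surviving_fraction(28.8, t_sr90)  # Sr-90 half-life 28.8 yr
```

With these assumed numbers the retarded solute takes centuries to arrive, by which time many half-lives have elapsed and its surviving fraction is negligible, consistent with the model analysis summarized above.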
Link-prediction to tackle the boundary specification problem in social network surveys
De Wilde, Philippe; Buarque de Lima-Neto, Fernando
2017-01-01
Diffusion processes in social networks often cause the emergence of global phenomena from individual behavior within a society. The study of those global phenomena and the simulation of those diffusion processes frequently require a good model of the global network. However, survey data and data from online sources are often restricted to single social groups or features, such as age groups, single schools, companies, or interest groups. Hence, a modeling approach is required that extrapolates the locally restricted data to a global network model. We tackle this Missing Data Problem using Link-Prediction techniques from social network research, network generation techniques from the area of Social Simulation, as well as a combination of both. We found that techniques employing less information may be more adequate to solve this problem, especially when data granularity is an issue. We validated the network models created with our techniques on a number of real-world networks, investigating degree distributions as well as the likelihood of links given the geographical distance between two nodes. PMID:28426826
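Among the link-prediction techniques the abstract draws on, the common-neighbours score is the classic low-information baseline: candidate links are ranked by how many neighbours the two endpoints share. The sketch below is illustrative, not the authors' method, and the toy graph is hypothetical.

```python
# Minimal common-neighbours link predictor, one of the classic
# link-prediction scores (a sketch; the study combines several techniques).

def common_neighbors_scores(adj):
    """Score every non-adjacent node pair (u, v) by |N(u) & N(v)|.

    `adj` maps each node to the set of its neighbours."""
    scores = {}
    nodes = list(adj)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v not in adj[u]:                  # only missing links
                scores[(u, v)] = len(adj[u] & adj[v])
    return scores

# Hypothetical toy graph: a-b, a-c, b-c, b-d, c-d are edges; a-d is not.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}
scores = common_neighbors_scores(graph)
# The only missing link (a, d) shares neighbours b and c -> score 2
```

Scores like this one, computed from locally observed data, can then rank which unobserved links most plausibly exist in the global network.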
Banaee, Nooshin; Asgari, Sepideh; Nedaie, Hassan Ali
2018-07-01
The accuracy of penumbral measurements in radiotherapy is pivotal because dose planning computers require accurate data to adequately model the beams, which in turn are used to calculate patient dose distributions. Gamma Knife is a non-invasive intracranial technique based on principles of the Leksell stereotactic system for open deep brain surgeries, invented and developed by Professor Lars Leksell. The aim of this study is to compare the penumbra widths of Leksell Gamma Knife model C and Gamma ART 6000. Initially, the structures of both systems were simulated using the Monte Carlo MCNP6 code and, after validating the accuracy of the simulation, beam profiles of different collimators were plotted. MCNP6 beam profile calculations showed that the penumbra values of Leksell Gamma Knife model C and Gamma ART 6000 for the 18, 14, 8 and 4 mm collimators are 9.7, 7.9, 4.3, 2.6 and 8.2, 6.9, 3.6, 2.4 mm, respectively. The results of this study showed that since Gamma ART 6000 has a larger solid angle than Gamma Knife model C, it produces better beam profile penumbras in the direct plane. Copyright © 2017 Elsevier Ltd. All rights reserved.
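The penumbra widths compared in that study are conventionally read off a profile as the lateral distance between the 80% and 20% dose points; a sketch using a synthetic (not MCNP6-derived) profile edge:

```python
def penumbra_width(x, dose, lo=0.2, hi=0.8):
    """Distance between the 20% and 80% relative-dose points on one
    monotone profile edge, found by linear interpolation."""
    def crossing(level):
        for i in range(len(dose) - 1):
            d0, d1 = dose[i], dose[i + 1]
            if (d0 - level) * (d1 - level) <= 0 and d0 != d1:
                return x[i] + (level - d0) * (x[i + 1] - x[i]) / (d1 - d0)
        raise ValueError("level not crossed")
    return abs(crossing(hi) - crossing(lo))

# Synthetic falling edge of a normalized beam profile (mm, relative dose).
x    = [0.0, 1.0, 2.0, 3.0, 4.0]
dose = [1.0, 0.9, 0.5, 0.1, 0.0]
width = penumbra_width(x, dose)
```

On this synthetic edge the 80% point falls at 1.25 mm and the 20% point at 2.75 mm, giving a 1.5 mm penumbra.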
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timchalk, Chuck; Campbell, James A.; Liu, Guodong
2007-03-01
Non-invasive biomonitoring approaches are being developed using reliable portable analytical systems to quantify dosimetry utilizing readily obtainable body fluids, such as saliva. In the current study, rats were given single oral gavage doses (1, 10 or 50 mg/kg) of the insecticide chlorpyrifos (CPF); saliva and blood were collected from groups of animals (4/time-point) at 3, 6, and 12 h post-dosing, and the samples were analyzed for the CPF metabolite trichlorpyridinol (TCP). Trichlorpyridinol was detected in both blood and saliva at all doses, and the TCP concentration in blood exceeded that in saliva, although the kinetics in blood and saliva were comparable. A physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model for CPF incorporated a compartment model to describe the time-course of TCP in blood and saliva. The model adequately simulated the experimental results over the dose ranges evaluated. A rapid and sensitive sequential injection (SI) electrochemical immunoassay was developed to monitor TCP, with a reported detection limit for TCP in water of 6 ng/L. Computer model simulations in the range of the Allowable Daily Intake (ADI) or Reference Dose (RfD) for CPF (0.01-0.003 mg/kg/day) suggest that the electrochemical immunoassay has adequate sensitivity to detect and quantify TCP in saliva at these low exposure levels. To validate this approach, further studies are needed to more fully understand the pharmacokinetics of CPF and TCP excretion in saliva. The utilization of saliva as a biomonitoring matrix, coupled with real-time quantitation and PBPK/PD modeling, represents a novel approach with broad application for evaluating both occupational and environmental exposures to insecticides.
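The compartmental time-course idea can be illustrated with a one-compartment Bateman model; the rate constants, bioavailability, volume, and saliva partition below are illustrative placeholders, not the study's fitted PBPK/PD parameters:

```python
import math

def tcp_concentration(t_h, dose, f=0.8, ka=0.5, ke=0.1, vd=5.0):
    """Blood TCP concentration after an oral CPF dose under a minimal
    one-compartment model with first-order formation (ka) and
    elimination (ke). Bateman equation:
    C(t) = f*dose*ka / (vd*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))."""
    return f * dose * ka / (vd * (ka - ke)) * (
        math.exp(-ke * t_h) - math.exp(-ka * t_h))

def saliva_concentration(t_h, dose, partition=0.1):
    """Saliva tracks blood with the same kinetics but a lower level,
    matching the observation that blood TCP exceeded saliva TCP."""
    return partition * tcp_concentration(t_h, dose)

# Concentrations at the sampled 3, 6 and 12 h time points (10 mg/kg dose).
c3, c6, c12 = [tcp_concentration(t, dose=10.0) for t in (3.0, 6.0, 12.0)]
```

The shared rate constants reproduce the qualitative finding that blood and saliva kinetics are comparable while blood levels stay higher.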
Code of Federal Regulations, 2014 CFR
2014-01-01
... adequate periods of time and at a location approved by the Administrator, adequate flight training equipment and courseware, including at least one flight simulator or advanced flight training device. [Doc... significant distractions caused by flight operations and maintenance operations at the airport. (b) An...
Stephen, Julia M; Ranken, Doug M; Aine, Cheryl J; Weisend, Michael P; Shih, Jerry J
2005-12-01
Previous studies have shown that magnetoencephalography (MEG) can measure hippocampal activity, despite the cylindrical shape and deep location in the brain. The current study extended this work by examining the ability to differentiate the hippocampal subfields, parahippocampal cortex, and neocortical temporal sources using simulated interictal epileptic activity. A model of the hippocampus was generated on the MRIs of five subjects. CA1, CA3, and dentate gyrus of the hippocampus were activated as well as entorhinal cortex, presubiculum, and neocortical temporal cortex. In addition, pairs of sources were activated sequentially to emulate various hypotheses of mesial temporal lobe seizure generation. The simulated MEG activity was added to real background brain activity from the five subjects and modeled using a multidipole spatiotemporal modeling technique. The waveforms and source locations/orientations for hippocampal and parahippocampal sources were differentiable from neocortical temporal sources. In addition, hippocampal and parahippocampal sources were differentiated to varying degrees depending on source. The sequential activation of hippocampal and parahippocampal sources was adequately modeled by a single source; however, these sources were not resolvable when they overlapped in time. These results suggest that MEG has the sensitivity to distinguish parahippocampal and hippocampal spike generators in mesial temporal lobe epilepsy.
Modeling of rock friction 2. Simulation of preseismic slip
Dieterich, J.H.
1979-01-01
The constitutive relations developed in the companion paper are used to model detailed observations of preseismic slip and the onset of unstable slip in biaxial laboratory experiments. The simulations employ a deterministic plane strain finite element model to represent the interactions both within the sliding blocks and between the blocks and the loading apparatus. Both experiments and simulations show that preseismic slip is controlled by the initial inhomogeneity of shear stress along the sliding surface relative to the frictional strength. As a consequence of the inhomogeneity, stable slip begins at a point on the surface and the area of slip slowly expands as the external loading increases. A previously proposed correlation between accelerating rates of stable slip and growth of the area of slip is supported by the simulations. In the simulations and in the experiments, unstable slip occurs shortly after a propagating slip event traverses the sliding surface and breaks out at the ends of the sample. In the model the breakout of stable slip causes a sudden acceleration of slip rates. Because of the velocity dependence of the constitutive relationship for friction, the rapid acceleration of slip causes a decrease in frictional strength. Instability occurs when the frictional strength decreases with displacement at a rate that exceeds the intrinsic unloading characteristics of the sample and test machine. A simple slider-spring model that does not consider preseismic slip appears to adequately approximate the transition from stable sliding to unstable slip as a function of normal stress, machine stiffness, and surface roughness for small samples. However, for large samples and for natural faults the simulations suggest that the simple model may be inaccurate because it does not take into account potentially large preseismic displacements that will alter the friction parameters prior to instability. Copyright © 1979 by the American Geophysical Union.
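The slider-spring stability condition described there is commonly expressed through a critical stiffness; this sketch uses the now-standard rate-and-state form k_c = sigma_n*(b - a)/D_c (a later formalization, not taken from this 1979 paper) with illustrative laboratory-scale numbers:

```python
def critical_stiffness(sigma_n, a, b, d_c):
    """k_c = sigma_n * (b - a) / d_c: a velocity-weakening surface
    (b > a) slides unstably when the loading stiffness falls below k_c."""
    return sigma_n * (b - a) / d_c

def sliding_is_stable(k, sigma_n, a, b, d_c):
    """Stable sliding requires the machine/sample stiffness to exceed
    the rate at which frictional strength decreases with displacement."""
    return k >= critical_stiffness(sigma_n, a, b, d_c)

# Hypothetical values: 10 MPa normal stress, (b - a) = 0.005, D_c = 10 um.
k_c = critical_stiffness(sigma_n=10e6, a=0.010, b=0.015, d_c=10e-6)  # Pa/m
```

Stiff testing machines (k above k_c) give stable sliding; compliant ones give stick-slip, mirroring the paper's dependence of instability on machine stiffness.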
Xiong, Chengjie; Luo, Jingqin; Morris, John C; Bateman, Randall
2018-01-01
Modern clinical trials on Alzheimer disease (AD) focus on the early symptomatic stage or even the preclinical stage. Subtle disease progression at the early stages, however, poses a major challenge in designing such clinical trials. We propose a multivariate mixed model on repeated measures to model the disease progression over time on multiple efficacy outcomes, and derive the optimum weights to combine multiple outcome measures by minimizing the sample sizes to adequately power the clinical trials. A cross-validation simulation study is conducted to assess the accuracy for the estimated weights as well as the improvement in reducing the sample sizes for such trials. The proposed methodology is applied to the multiple cognitive tests from the ongoing observational study of the Dominantly Inherited Alzheimer Network (DIAN) to power future clinical trials in the DIAN with a cognitive endpoint. Our results show that the optimum weights to combine multiple outcome measures can be accurately estimated, and that compared to the individual outcomes, the combined efficacy outcome with these weights significantly reduces the sample size required to adequately power clinical trials. When applied to the clinical trial in the DIAN, the estimated linear combination of six cognitive tests can adequately power the clinical trial. PMID:29546251
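One standard construction for such sample-size-minimizing weights (a sketch consistent with, but not taken from, the paper) maximizes the standardized effect of the linear combination, giving w proportional to inv(Sigma) @ delta for effect vector delta and outcome covariance Sigma:

```python
import numpy as np

def optimal_weights(delta, sigma):
    """Weights proportional to inv(Sigma) @ delta maximize the
    standardized effect of the combined endpoint and hence minimize
    the sample size needed to detect it with given power."""
    w = np.linalg.solve(sigma, delta)
    return w / np.abs(w).sum()  # normalized for readability

def standardized_effect(w, delta, sigma):
    """Effect size (w'delta) / sqrt(w'Sigma w) of the combination w'y."""
    return float(w @ delta / np.sqrt(w @ sigma @ w))

# Hypothetical two-outcome trial: equal effects, unequal variances.
delta = np.array([0.3, 0.3])
sigma = np.array([[1.0, 0.2],
                  [0.2, 4.0]])
w = optimal_weights(delta, sigma)
```

The optimal combination down-weights the noisier outcome and never has a smaller standardized effect than an ad hoc equal weighting, which is why it reduces the required sample size.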
NASA Astrophysics Data System (ADS)
Petoussi-Henss, Nina; Becker, Janine; Greiter, Matthias; Schlattl, Helmut; Zankl, Maria; Hoeschen, Christoph
2014-03-01
In radiography there is generally a conflict between the best image quality and the lowest possible patient dose. A proven method of dosimetry is the simulation of radiation transport in virtual human models (i.e. phantoms). However, while the resolution of these voxel models is adequate for most dosimetric purposes, they cannot provide the organ fine structures necessary for assessing imaging quality. The aim of this work is to develop hybrid/dual-lattice voxel models (also called phantoms) as well as simulation methods by which patient dose and image quality for typical radiographic procedures can be determined. The results will provide a basis to investigate, by means of simulations, the relationships between patient dose and image quality for various imaging parameters and to develop methods for their optimization. A hybrid model, based on NURBS (Non-Uniform Rational B-Spline) and PM (Polygon Mesh) surfaces, was constructed from an existing voxel model of a female patient. The organs of the hybrid model can then be scaled and deformed in a non-uniform way, i.e. organ by organ; they can thus be adapted to patient characteristics without losing their anatomical realism. Furthermore, the left lobe of the lung was substituted by a high-resolution lung voxel model, resulting in a dual-lattice geometry model. "Dual lattice" means in this context the combination of voxel models with different resolutions. Monte Carlo simulations of radiographic imaging were performed with the code EGS4nrc, modified to perform dual-lattice transport. Results are presented for a thorax examination.
Simulating flight boundary conditions for orbiter payload modal survey
NASA Technical Reports Server (NTRS)
Chung, Y. T.; Sernaker, M. L.; Peebles, J. H.
1993-01-01
An approach to simulate the characteristics of the payload/orbiter interfaces for the payload modal survey was developed. The flexure designed for this approach is required to provide adequate stiffness separation between the free and constrained interface degrees of freedom to closely resemble the flight boundary condition. Payloads will behave linearly and demonstrate modal effective mass distributions and load paths similar to flight if the flexure fixture is used for the payload modal survey. The potential non-linearities caused by trunnion slippage during a conventional fixed-base modal survey may be eliminated. Consequently, the effort to correlate the test and analysis models can be significantly reduced. An example is given to illustrate the selection and the sensitivity of the flexure stiffness. The advantages of using flexure fixtures for the modal survey and for analytical model verification are also demonstrated.
NASA Astrophysics Data System (ADS)
Ravazzani, G.; Montaldo, N.; Mancini, M.; Rosso, R.
2003-04-01
Event-based hydrologic models need the antecedent soil moisture condition as a critical initial boundary condition for flood simulation. Land-surface models (LSMs) have been developed to simulate mass and energy transfers and to update the soil moisture condition through time from the solution of the water and energy balance equations. They have recently been used in distributed hydrologic modeling for flood prediction systems. Recent developments have made LSMs more complex through the inclusion of more processes and controlling variables, increasing the number of parameters and the uncertainty of their estimates. This has also increased the computational burden and parameterization of distributed hydrologic models. In this study we investigate: 1) the role of soil moisture initial conditions in the modeling of Alpine basin floods; 2) the adequate complexity level of LSMs for the distributed hydrologic modeling of Alpine basin floods. The case study is the Toce basin, located in northern Piedmont (Italian Alps), with a total drainage area of 1534 km2 at the Candoglia section. Three distributed hydrologic models of different levels of complexity are developed and compared: two (TDLSM and SDLSM) are continuous models, and one (FEST02) is an event model based on the simplified SCS-CN method for rainfall abstractions. In the TDLSM model a two-layer LSM computes both saturation and infiltration excess runoff and simulates the evolution of the water table spatial distribution using the topographic index; in the SDLSM model a simplified one-layer distributed LSM computes only Hortonian runoff and does not simulate water table dynamics. All three hydrologic models simulate surface runoff propagation through the Muskingum-Cunge method. The TDLSM and SDLSM models have been applied to the two-year (1996 and 1997) simulation period, during which two major floods occurred, in November 1996 and June 1997.
The models have been calibrated and tested by comparing simulated and observed hydrographs at Candoglia. Sensitivity analyses of the models to significant LSM parameters were also performed. The performances of the three models in simulating the two major floods are compared. Interestingly, the results indicate that the SDLSM model is able to predict the major floods of this Alpine basin sufficiently well; indeed, this model is a good compromise between the over-parameterized and too complex TDLSM model and the over-simplified FEST02 model.
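The simplified SCS-CN abstraction used by the FEST02 event model reduces to a closed-form formula; a sketch with a hypothetical curve number:

```python
def scs_cn_runoff(p_mm, cn):
    """Direct runoff depth (mm) from event rainfall P via the SCS-CN
    method: S = 25400/CN - 254 (mm), Ia = 0.2*S, and
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else Q = 0."""
    s = 25400.0 / cn - 254.0       # potential maximum retention
    ia = 0.2 * s                   # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Hypothetical event: 100 mm of rainfall on a basin with CN = 75.
q = scs_cn_runoff(p_mm=100.0, cn=75)
```

The curve number CN (and hence the antecedent moisture class it encodes) is the single knob this event method offers, which is exactly why the continuous LSM-based models were brought in to supply the antecedent soil moisture state.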
Zhang, Liying; Gurao, Manish; Yang, King H.; King, Albert I.
2011-01-01
Computer models of the head can be used to simulate the events associated with traumatic brain injury (TBI) and quantify biomechanical response within the brain. Marmarou’s impact acceleration rodent model is a widely used experimental model of TBI mirroring axonal pathology in humans. The mechanical properties of the low density polyurethane (PU) foam, an essential piece of energy management used in Marmarou’s impact device, has not been fully characterized. The foam used in Marmarou’s device was tested at seven strain rates ranging from quasi-static to dynamic (0.014 ~ 42.86 s−1) to quantify the stress-strain relationships in compression. Recovery rate of the foam after cyclic compression was also determined through the periods of recovery up to three weeks. The experimentally determined stress-strain curves were incorporated into a material model in an explicit Finite Element (FE) solver to validate the strain rate dependency of the FE foam model. Compression test results have shown that the foam used in the rodent impact acceleration model is strain rate dependent. The foam has been found to be reusable for multiple impacts. However the stress resistance of used foam is reduced to 70% of the new foam. The FU_CHANG_FOAM material model in an FE solver has been found to be adequate to simulate this rate sensitive foam. PMID:21459114
Structure of dysprosium(III) dl-tartrate dimer in aqueous solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chevela, V.V.; Vulfson, S.G.; Salnikov, Y.I.
1994-10-01
The paramagnetic birefringence method was supplemented by numerical simulation to determine the molar paramagnetic-birefringence constant of the dysprosium dl-tartrate dimer Dy₂(d-L)(l-L)²⁻ (I), where d-L⁴⁻ and l-L⁴⁻ are the deprotonated d- and l-tartaric acid molecules, respectively. The structure of the ligand and hydration surroundings of I was modeled by molecular mechanics calculations (the Dashevskii-Pylamovatyi model). It is shown that adequate results can be obtained only if the coordination of I to the Na⁺ ion is taken into account.
Report of NPSAT1 Battery Thermal Contact Resistance Testing, Modeling and Simulation
2012-10-01
The lithium-ion battery is the spacecraft component with the smallest operational temperature range, 0 °C to 45 °C. Thermal analysis, however, can only provide adequate results if there is sufficient fidelity in the thermal model. Arguably, the values used in defining thermal coupling for components are the most difficult to estimate because of the many variables that define them. This document describes the work performed by the authors starting in the 2012 winter quarter as part of the SS3900 directed study course. The objectives of the study were to
NASA Astrophysics Data System (ADS)
Fernandez, J. P. R.; Franchito, S. H.; Rao, V. B.
2006-09-01
This study investigates the capabilities of two regional models (the ICTP RegCM3 and the climate version of the CPTEC Eta model, EtaClim) in simulating the mean climatological features of the summer quasi-stationary circulations over South America. Comparing the results with the NCEP/DOE reanalysis II data shows that the RegCM3 simulates a weaker and southward-shifted Bolivian high (BH), while the Nordeste low (NL) is located close to its climatological position. In the EtaClim the position of the BH is reproduced well, but the NL is shifted towards the interior of the continent. To the east of the Andes, the RegCM3 simulates a weaker low-level jet and a weaker basic flow from the tropical Atlantic to Amazonia, while both are stronger in the EtaClim. In general, the RegCM3 and EtaClim show a negative and a positive bias, respectively, in surface temperature over almost all regions of South America. For both models, the correlation coefficients between the simulated precipitation and the GPCP data are high over most of South America. Although the RegCM3 and EtaClim overestimate precipitation in the Andes region, they show a general negative bias over South America as a whole. The simulations of upper- and lower-level circulations and precipitation fields in the EtaClim were better than those of the RegCM3. In central Amazonia both models were unable to simulate the precipitation correctly. The results showed that although the RegCM3 and EtaClim are capable of simulating the main climatological features of the summer climate over South America, there are areas which need improvement. This indicates that the models must be tuned more adequately in order to give reliable predictions in the different regions of South America.
Calculation for simulation of archery goal value using a web camera and ultrasonic sensor
NASA Astrophysics Data System (ADS)
Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti
2017-08-01
Development of an embedded-systems-based digital indoor archery simulator is a solution to the limited availability of adequate fields or open space, especially in big cities. Development of the device requires a simulation that calculates the score achieved on the target, based on an approach defined by parabolic motion with the initial velocity and the direction of the arrow's motion toward the target as variables. The simulator device should be complemented with a device that measures initial velocity using ultrasonic sensors and a device that measures the direction toward the target using a digital camera. The methodology uses research and development of application software following a modeling and simulation approach. The research objective is to create a simulation application that calculates the score achieved by the arrows, as a preliminary stage for the development of the archery simulator device. Implementing the score calculation in an application program yields an archery simulation game that can be used as a reference for developing a digital archery simulator for indoor use with embedded systems using ultrasonic sensors and web cameras. The application was developed by comparing the simulated result with the outer radius of the target circle captured by a camera from a distance of three meters.
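The parabolic-motion calculation at the core of the score simulation can be sketched as follows (drag-free motion; the velocity and angle are illustrative):

```python
import math

def arrow_drop_m(v0, angle_deg, distance_m, g=9.81):
    """Arrow height relative to the launch point when it reaches the
    target plane, from drag-free parabolic motion:
    t = d / (v0*cos(theta)),  y = v0*sin(theta)*t - g*t^2/2."""
    theta = math.radians(angle_deg)
    t = distance_m / (v0 * math.cos(theta))
    return v0 * math.sin(theta) * t - 0.5 * g * t * t

# A level shot at 50 m/s over the three-meter camera distance used above:
drop = arrow_drop_m(v0=50.0, angle_deg=0.0, distance_m=3.0)
```

Over three meters a level 50 m/s shot drops under two centimeters, so the measured initial velocity (ultrasonic sensor) and direction (camera) dominate the computed impact point, and the vertical offset maps onto the scoring rings.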
Zhang, Y; Roberts, J; Tortorici, M; Veldman, A; St Ledger, K; Feussner, A; Sidhu, J
2017-06-01
Essentials rVIII-SingleChain is a unique recombinant factor VIII (FVIII) molecule. A population pharmacokinetic model was based on FVIII activity of severe hemophilia A patients. The model was used to simulate factor VIII activity-time profiles for various dosing scenarios. The model supports prolonged dosing of rVIII-SingleChain with intervals of up to twice per week. Background Single-chain recombinant coagulation factor VIII (rVIII-SingleChain) is a unique recombinant coagulation factor VIII molecule. Objectives To: (i) characterize the population pharmacokinetics (PK) of rVIII-SingleChain in patients with severe hemophilia A; (ii) identify correlates of variability in rVIII-SingleChain PK; and (iii) simulate various dosing scenarios of rVIII-SingleChain. Patients/Methods A population PK model was developed, based on FVIII activity levels of 130 patients with severe hemophilia A (n = 91 for ≥ 12-65 years; n = 39 for < 12 years) who had participated in a single-dose PK investigation with rVIII-SingleChain 50 IU kg⁻¹. PK sampling was performed for up to 96 h. Results A two-compartment population PK model with first-order elimination adequately described FVIII activity. Body weight and predose level of von Willebrand factor were significant covariates on clearance, and body weight was a significant covariate on the central distribution volume. Simulations using the model with various dosing scenarios estimated that > 85% and > 93% of patients were predicted to maintain FVIII activity level above 1 IU dL⁻¹ at all times with three-times-weekly dosing (given on days 0, 2, and 4.5) at the lowest (20 IU kg⁻¹) and highest (50 IU kg⁻¹) doses, respectively. For twice-weekly dosing (days 0 and 3.5) of 50 IU kg⁻¹ rVIII-SingleChain, 62-80% of patients across all ages were predicted to maintain a FVIII activity level above 1 IU dL⁻¹ at day 7.
Conclusions The population PK model adequately characterized rVIII-SingleChain PK, and the model can be utilized to simulate FVIII activity-time profiles for various dosing scenarios. © 2017 The Authors. Journal of Thrombosis and Haemostasis published by Wiley Periodicals, Inc. on behalf of International Society on Thrombosis and Haemostasis.
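A two-compartment model with first-order elimination implies a biexponential decline in activity after a bolus dose; the macro-constants below are illustrative stand-ins, not the fitted rVIII-SingleChain population values:

```python
import math

def fviii_activity(t_h, a=40.0, alpha=0.5, b=60.0, beta=0.055):
    """FVIII activity (IU/dL) after an intravenous bolus under a
    two-compartment model with first-order elimination:
    C(t) = A*exp(-alpha*t) + B*exp(-beta*t), with a fast distribution
    phase (alpha) and a slow terminal elimination phase (beta)."""
    return a * math.exp(-alpha * t_h) + b * math.exp(-beta * t_h)

# Activity-time profile for a single dose, sampled out to 96 h as in
# the PK investigation; peak here is set to 100 IU/dL for illustration.
profile = [(t, fviii_activity(float(t))) for t in range(0, 97, 12)]
```

Simulating a dosing scenario then amounts to superposing such profiles at the chosen dosing times and reading off the trough against the 1 IU dL⁻¹ threshold.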
A computer model of solar panel-plasma interactions
NASA Technical Reports Server (NTRS)
Cooke, D. L.; Freeman, J. W.
1980-01-01
High power solar arrays for satellite power systems are presently being planned with dimensions of kilometers and with tens of kilovolts distributed over their surfaces. Such systems face many plasma interaction problems, such as power leakage to the plasma, particle focusing, and anomalous arcing. These effects cannot be adequately modeled without detailed knowledge of the plasma sheath structure and space charge effects. Laboratory studies of a 1 by 10 meter solar array in a simulated low Earth orbit plasma are discussed. The plasma screening process is discussed, program theory is outlined, and a series of calibration models is presented. These models are designed to demonstrate that PANEL is capable of accurate self-consistent space charge calculations. Such models include PANEL predictions for the Child-Langmuir diode problem.
Cell population modelling of yeast glycolytic oscillations.
Henson, Michael A; Müller, Dirk; Reuss, Matthias
2002-01-01
We investigated a cell-population modelling technique in which the population is constructed from an ensemble of individual cell models. The average value or the number distribution of any intracellular property captured by the individual cell model can be calculated by simulation of a sufficient number of individual cells. The proposed method is applied to a simple model of yeast glycolytic oscillations where synchronization of the cell population is mediated by the action of an excreted metabolite. We show that smooth one-dimensional distributions can be obtained with ensembles comprising 1000 individual cells. Random variations in the state and/or structure of individual cells are shown to produce complex dynamic behaviours which cannot be adequately captured by small ensembles. PMID:12206713
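The ensemble construction itself is straightforward to sketch; the oscillator below is a deliberately trivial stand-in for an individual-cell glycolysis model, and the metabolite-mediated coupling between cells is omitted:

```python
import math
import random

def simulate_cell(freq, phase, n_steps, dt=0.01):
    """Minimal 'individual cell model': one oscillating intracellular
    property x(t) = 1 + sin(2*pi*f*t + phi). A stand-in for a full
    glycolysis model, which this sketch does not attempt."""
    return [1.0 + math.sin(2 * math.pi * freq * k * dt + phase)
            for k in range(n_steps)]

def population_average(n_cells, n_steps, seed=0):
    """Build the population from an ensemble of individual cells with
    random parameter variation, then average the property over cells."""
    rng = random.Random(seed)
    cells = [simulate_cell(rng.gauss(1.0, 0.05),
                           rng.uniform(0.0, 2 * math.pi), n_steps)
             for _ in range(n_cells)]
    return [sum(c[k] for c in cells) / n_cells for k in range(n_steps)]

avg = population_average(n_cells=1000, n_steps=200)
```

With random phases the population average flattens toward the mean, illustrating why an unsynchronized ensemble shows no macroscopic oscillation; in the paper, synchronization arises only through the excreted-metabolite coupling.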
Development of a model for on-line control of crystal growth by the AHP method
NASA Astrophysics Data System (ADS)
Gonik, M. A.; Lomokhova, A. V.; Gonik, M. M.; Kuliev, A. T.; Smirnov, A. D.
2007-05-01
The possibility of applying a simplified 2D model for heat transfer calculations in crystal growth by the axial heat flux close to the phase interface (AHP) method is discussed in this paper. A comparison with global heat transfer calculations using the CGSim software was performed to confirm the accuracy of this model. The simplified model was shown to provide adequate results for the shape of the melt-crystal interface and the temperature field in an opaque (Ge) and a transparent (CsI:Tl) crystal. The proposed model is used for identification of the growth setup as a control object, for the synthesis of a digital controller (a PID controller at the present stage) and, finally, in on-line simulations of crystal growth control.
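The digital-controller stage can be illustrated with a textbook discrete PID loop; the gains and the toy first-order plant standing in for the growth furnace are assumptions for the sketch, not values from the CGSim-based model:

```python
class PID:
    """Textbook discrete PID controller (illustrative gains)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order "furnace" toward a 1000-unit setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
temp = 900.0
for _ in range(300):
    power = pid.update(1000.0, temp)
    temp += 0.05 * (power - 0.1 * (temp - 20.0))  # toy plant dynamics
```

The integral term removes the steady-state offset caused by heat loss, which is the property that makes PID a reasonable first stage before more elaborate model-based control.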
Implementing Dynamic Root Optimization in Noah-MP for Simulating Phreatophytic Root Water Uptake
NASA Astrophysics Data System (ADS)
Wang, Ping; Niu, Guo-Yue; Fang, Yuan-Hao; Wu, Run-Jian; Yu, Jing-Jie; Yuan, Guo-Fu; Pozdniakov, Sergey P.; Scott, Russell L.
2018-03-01
Widely distributed in arid and semiarid regions, phreatophytic roots extend into the saturated zone and extract water directly from groundwater. In this paper, we implemented a vegetation optimality model of root dynamics (VOM-ROOT) in the Noah land surface model with multiparameterization options (Noah-MP LSM) to model the extraction of groundwater through phreatophytic roots at a riparian site with a hyperarid climate (with precipitation of 35 mm/yr) in northwestern China. VOM-ROOT numerically describes the natural optimization of the root profile in response to changes in subsurface water conditions. The coupled Noah-MP/VOM-ROOT model substantially improves the simulation of surface energy and water fluxes, particularly during the growing season, compared to the prescribed static root profile in the default Noah-MP. In the coupled model, more roots are required to grow into the saturated zone to meet transpiration demand when the groundwater level declines over the growing season. The modeling results indicate that at the study site, the modeled annual transpiration is 472 mm, accounting for 92.3% of the total evapotranspiration. Direct root water uptake from the capillary fringe and groundwater, which is supplied by lateral groundwater flow, accounts for approximately 84% of the total transpiration. This study demonstrates the importance of implementing a dynamic root scheme in a land surface model for adequately simulating phreatophytic root water uptake and the associated latent heat flux.
NASA Astrophysics Data System (ADS)
Kravtsov, Sergey
2017-06-01
Identification and dynamical attribution of multidecadal climate undulations to either variations in external forcings or to internal sources is one of the most important topics of modern climate science, especially in conjunction with the issue of human-induced global warming. Here we utilize ensembles of twentieth century climate simulations to isolate the forced signal and residual internal variability in a network of observed and modeled climate indices. The observed internal variability so estimated exhibits a pronounced multidecadal mode with a distinctive spatiotemporal signature, which is altogether absent in model simulations. This single mode explains a major fraction of model-data differences over the entire climate index network considered; it may reflect either biases in the models' forced response or models' lack of requisite internal dynamics, or a combination of both.
De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.
2012-01-01
Background While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. Methods In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. Results We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. Conclusions A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. 
PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use. PMID:22629108
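The offline-fit / online-evaluate split described above can be illustrated with a minimal Gaussian radial basis function regression in the spirit of the RBFN step: fit output weights to precomputed responses once, then reconstruct at run time with a single matrix-vector product. The data, neuron count and width below are hypothetical stand-ins, not PhyNNeSS's actual training set or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 3))   # prescribed nodal displacements
d_train = np.sin(X_train).sum(axis=1)         # stand-in deformation response

centers = X_train[::4]                        # 50 RBF "neurons" (illustrative)
width = 0.8

def phi(X, centers, width):
    # Gaussian RBF design matrix: phi_ij = exp(-|x_i - c_j|^2 / (2 w^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Offline pre-computation: least-squares fit of the output weights.
w, *_ = np.linalg.lstsq(phi(X_train, centers, width), d_train, rcond=None)

def reconstruct(X):
    # Online step: one matrix-vector product per query, cheap enough for
    # the kHz-rate haptic loop the abstract describes.
    return phi(X, centers, width) @ w
```

As in the paper's error analysis, accuracy here is controlled by the number of neurons: more centers shrink the fit residual at the cost of a larger online product.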
Impact of Aerosols on Convective Clouds and Precipitation
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chen, Jen-Ping; Li, Zhanqing; Wang, Chien; Zhang, Chidong
2011-01-01
Aerosols are a critical factor in the atmospheric hydrological cycle and radiation budget. As a major agent in cloud formation and a significant attenuator of solar radiation, aerosols affect climate in several ways. Current research suggests that aerosol effects on clouds could further extend to precipitation, both through the formation of cloud particles and by exerting persistent radiative forcing on the climate system that disturbs dynamics. However, the various mechanisms behind these effects, in particular the ones connected to precipitation, are not yet well understood. The atmospheric and climate communities have long been working to gain a better grasp of these critical effects and hence to reduce the significant uncertainties in climate prediction resulting from such a lack of adequate knowledge. The central theme of this paper is to review past efforts and summarize our current understanding of the effect of aerosols on precipitation processes from theoretical analysis of microphysics, observational evidence, and a range of numerical model simulations. In addition, the discrepancies between results simulated by different models, as well as those between simulations and observations, will be presented. Specifically, this paper will address the following topics: (1) fundamental theories of aerosol effects on microphysics and precipitation processes, (2) observational evidence of the effect of aerosols on precipitation processes, (3) signatures of the aerosol impact on precipitation from large-scale analyses, (4) results from cloud-resolving model simulations, and (5) results from large-scale numerical model simulations. Finally, several future research directions on aerosol-precipitation interactions are suggested.
Impact of Aerosols on Convective Clouds and Precipitation
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chen, Jen-Ping; Li, Zhanqing; Wang, Chien; Zhang, Chidong
2012-01-01
Aerosols are a critical factor in the atmospheric hydrological cycle and radiation budget. As a major agent in cloud formation and a significant attenuator of solar radiation, aerosols affect climate in several ways. Current research suggests that aerosol effects on clouds could further extend to precipitation, both through the formation of cloud particles and by exerting persistent radiative forcing on the climate system that disturbs dynamics. However, the various mechanisms behind these effects, in particular the ones connected to precipitation, are not yet well understood. The atmospheric and climate communities have long been working to gain a better grasp of these critical effects and hence to reduce the significant uncertainties in climate prediction resulting from such a lack of adequate knowledge. Here we review past efforts and summarize our current understanding of the effect of aerosols on convective precipitation processes from theoretical analysis of microphysics, observational evidence, and a range of numerical model simulations. In addition, the discrepancies between results simulated by different models, as well as those between simulations and observations, are presented. Specifically, this paper addresses the following topics: (1) fundamental theories of aerosol effects on microphysics and precipitation processes, (2) observational evidence of the effect of aerosols on precipitation processes, (3) signatures of the aerosol impact on precipitation from large-scale analyses, (4) results from cloud-resolving model simulations, and (5) results from large-scale numerical model simulations. Finally, several future research directions for gaining a better understanding of aerosol-cloud-precipitation interactions are suggested.
Soulis, Konstantinos X; Valiantzas, John D; Ntoulas, Nikolaos; Kargas, George; Nektarios, Panayiotis A
2017-09-15
In spite of the well-known green roof benefits, their widespread adoption in the management practices of urban drainage systems requires the use of adequate analytical and modelling tools. In the current study, green roof runoff modeling was accomplished by developing, testing, and jointly using a simple conceptual model and a physically based numerical simulation model utilizing the HYDRUS-1D software. This approach combines the advantages of the conceptual model, namely simplicity, low computational requirements, and the ability to be easily integrated in decision support tools, with the capacity of the physically based simulation model to be easily transferred to conditions and locations other than those used for calibrating and validating it. The proposed approach was evaluated with an experimental dataset that included various green roof covers (either succulent plants - Sedum sediforme, or xerophytic plants - Origanum onites, or bare substrate without any vegetation) and two substrate depths (either 8 cm or 16 cm). Both the physically based and the conceptual models matched the observed hydrographs very closely. In general, the conceptual model performed better than the physically based simulation model, but the overall performance of both models was sufficient in most cases, as revealed by Nash-Sutcliffe Efficiency values that were generally greater than 0.70. Finally, it was showcased how a physically based and a simple conceptual model can be jointly used, allowing the simple conceptual model to be applied to a wider set of conditions than the available experimental data cover and thereby supporting green roof design. Copyright © 2017 Elsevier Ltd. All rights reserved.
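The Nash-Sutcliffe Efficiency index used to judge the green roof models above has a simple closed form: NSE = 1 minus the ratio of the model's squared error to the variance of the observations about their mean. A minimal sketch follows; the hydrograph values are made-up illustrations, not the study's data:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
    is no better than simply predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Illustrative runoff hydrographs (mm/h); values are hypothetical.
obs = np.array([0.0, 0.4, 1.8, 3.1, 2.2, 1.0, 0.3, 0.1])
sim = np.array([0.1, 0.5, 1.6, 2.9, 2.4, 1.1, 0.4, 0.1])
print(round(nash_sutcliffe(obs, sim), 3))  # → 0.982
```

A value above 0.70, as reported in the study, indicates that the model explains most of the observed runoff variability.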
NASA Astrophysics Data System (ADS)
Arrillaga, Jon A.; Yagüe, Carlos; Sastre, Mariano; Román-Cascón, Carlos
2016-11-01
The behaviour of the sea breeze along the north coast of Spain is investigated using observations from two topographically contrasting sites together with simulations from the Weather Research and Forecasting (WRF) model. An objective and systematic selection method is used to detect sea-breeze days from a database covering two summer months. The direction and intensity of the sea breeze are significantly affected by the topography of the area; indeed, the estimated sea-breeze intensity shows an opposite relationship with the cross-shore temperature gradient at the two sites. WRF simulations reproduce the onset of the sea breeze, but some characteristics are not adequately simulated: they generally overestimate the wind speed, smooth the temperature evolution and do not represent the correct interaction with the terrain-induced flows. Additionally, four sensitivity experiments are performed with the WRF model, varying the Planetary Boundary Layer (PBL) scheme as well as the grid analysis nudging, for an anomalous case study that is incorrectly filtered. Because the two simulations using nudging reproduce an unreal (not observed) sea breeze, this day turns out to be of great interest: it allows us to evaluate the influence of the passage of the sea-breeze front (SBF) on other variables, mainly those related to turbulence. Furthermore, the best model scores are obtained for the PBL scheme that does not use a TKE closure.
NASA Astrophysics Data System (ADS)
Lai, Jiawei; Alwazzan, Dana; Chakraborty, Nilanjan
2017-11-01
The statistical behaviour and the modelling of turbulent scalar flux transport have been analysed using a direct numerical simulation (DNS) database of head-on quenching of statistically planar turbulent premixed flames by an isothermal wall. A range of Damköhler, Karlovitz and Lewis numbers has been considered for this analysis. When the flame is away from the wall, the magnitudes of the turbulent transport and mean velocity gradient terms in the turbulent scalar flux transport equation remain small in comparison to the pressure gradient, molecular dissipation and reaction-velocity fluctuation correlation terms, but the magnitudes of all these terms diminish and assume comparable values during flame quenching before vanishing altogether. It has been found that the existing models for the turbulent transport, pressure gradient, molecular dissipation and reaction-velocity fluctuation correlation terms do not adequately capture the respective behaviours extracted from the DNS data in the near-wall region during flame quenching. Existing models for transport equation-based closures of turbulent scalar flux have therefore been modified so that they provide satisfactory predictions both near to and away from the wall.
The resilience and functional role of moss in boreal and arctic ecosystems.
Turetsky, M R; Bond-Lamberty, B; Euskirchen, E; Talbot, J; Frolking, S; McGuire, A D; Tuittila, E-S
2012-10-01
Mosses in northern ecosystems are ubiquitous components of plant communities, and strongly influence nutrient, carbon and water cycling. We use literature review, synthesis and model simulations to explore the role of mosses in ecological stability and resilience. Moss community responses to disturbance showed all possible responses (increases, decreases, no change) within most disturbance categories. Simulations from two process-based models suggest that northern ecosystems would need to experience extreme perturbation before mosses were eliminated. But simulations with two other models suggest that loss of moss will reduce soil carbon accumulation primarily by influencing decomposition rates and soil nitrogen availability. It seems clear that mosses need to be incorporated into models as one or more plant functional types, but more empirical work is needed to determine how to best aggregate species. We highlight several issues that have not been adequately explored in moss communities, such as functional redundancy and singularity, relationships between response and effect traits, and parameter vs conceptual uncertainty in models. Mosses play an important role in several ecosystem processes that play out over centuries - permafrost formation and thaw, peat accumulation, development of microtopography - and there is a need for studies that increase our understanding of slow, long-term dynamical processes. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.
NOx Emissions from a Rotating Detonation-wave Engine
NASA Astrophysics Data System (ADS)
Kailasanath, Kazhikathra; Schwer, Douglas
2016-11-01
Rotating detonation-wave engines (RDE) are a form of continuous detonation-wave engine. They potentially provide further gains in performance over an intermittent or pulsed detonation-wave engine (PDE). The overall flow field in an idealized RDE, primarily consisting of two concentric cylinders, has been discussed in previous meetings. Because of the high pressures involved and the lack of adequate reaction mechanisms for this regime, previous simulations have typically used simplified chemistry models. However, understanding the exhaust species concentrations in propulsion devices is important both for performance considerations and for estimating pollutant emissions. Progress towards addressing this need is discussed in this talk. In this approach, an induction parameter model is used for simulating the detonation, but a more detailed finite-rate chemistry model including NOx chemistry is used in the expansion flow region, where the pressures are lower and the uncertainties in the chemistry model are greatly reduced. Results show that overall radical concentrations in the exhaust flow are substantially lower than earlier predictions with simplified models. Results to date show that NOx emissions are not a problem for the RDE due to the short residence times and the nature of the flow field. Furthermore, simulations show that the amount of NOx can be further reduced by tailoring the fluid dynamics within the RDE.
NASA Astrophysics Data System (ADS)
Shrestha, Rudra K.; Arora, Vivek K.; Melton, Joe R.; Sushama, Laxmi
2017-10-01
The performance of the competition module of the CLASS-CTEM (Canadian Land Surface Scheme and Canadian Terrestrial Ecosystem Model) modelling framework is assessed at 1° spatial resolution over North America by comparing the simulated geographical distribution of its plant functional types (PFTs) with two observation-based estimates. The model successfully reproduces the broad geographical distribution of trees, grasses and bare ground although limitations remain. In particular, compared to the two observation-based estimates, the simulated fractional vegetation coverage is lower in the arid southwest North American region and higher in the Arctic region. The lower-than-observed simulated vegetation coverage in the southwest region is attributed to lack of representation of shrubs in the model and plausible errors in the observation-based data sets. The observation-based data indicate vegetation fractional coverage of more than 60 % in this arid region, despite the region receiving only 200-300 mm of precipitation annually, and observation-based leaf area index (LAI) values in the region are lower than one. The higher-than-observed vegetation fractional coverage in the Arctic is likely due to the lack of representation of moss and lichen PFTs and also likely because of inadequate representation of permafrost in the model, as a result of which the C3 grass PFT performs unrealistically well in the region. The model generally reproduces the broad spatial distribution and the total area covered by the two primary tree PFTs (needleleaf evergreen trees, NDL-EVG; and broadleaf cold deciduous trees, BDL-DCD-CLD) reasonably well. The simulated fractional coverage of tree PFTs increases after the 1960s in response to the CO2 fertilization effect and climate warming. 
Differences between observed and simulated PFT coverages highlight model limitations and suggest that the inclusion of shrubs, and moss and lichen PFTs, and an adequate representation of permafrost will help improve model performance.
NASA Astrophysics Data System (ADS)
Hummels, Cameron B.; Bryan, Greg L.; Smith, Britton D.; Turk, Matthew J.
2013-04-01
Cosmological hydrodynamical simulations of galaxy evolution are increasingly able to produce realistic galaxies, but the largest hurdle remaining is in constructing subgrid models that accurately describe the behaviour of stellar feedback. As an alternate way to test and calibrate such models, we propose to focus on the circumgalactic medium (CGM). To do so, we generate a suite of adaptive mesh refinement simulations for a Milky-Way-massed galaxy run to z = 0, systematically varying the feedback implementation. We then post-process the simulation data to compute the absorbing column density for a wide range of common atomic absorbers throughout the galactic halo, including H I, Mg II, Si II, Si III, Si IV, C IV, N V, O VI and O VII. The radial profiles of these atomic column densities are compared against several quasar absorption line studies to determine if one feedback prescription is favoured. We find that although our models match some of the observations (specifically those ions with lower ionization strengths), it is particularly difficult to match O VI observations. There is some indication that the models with increased feedback intensity are better matches. We demonstrate that sufficient metals exist in these haloes to reproduce the observed column density distribution in principle, but the simulated CGM lacks significant multiphase substructure and is generally too hot. Furthermore, we demonstrate the failings of inflow-only models (without energetic feedback) at populating the CGM with adequate metals to match observations even in the presence of multiphase structure. Additionally, we briefly investigate the evolution of the CGM from z = 3 to present. Overall, we find that quasar absorption line observations of the gas around galaxies provide a new and important constraint on feedback models.
Micromechanics of failure waves in glass. 2: Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Espinosa, H.D.; Xu, Y.; Brar, N.S.
1997-08-01
In an attempt to elucidate the failure mechanism responsible for the so-called failure waves in glass, numerical simulations of plate and rod impact experiments, with a multiple-plane model, have been performed. These simulations show that the failure wave phenomenon can be modeled by the nucleation and growth of penny-shaped shear defects from the specimen surface to its interior. Lateral stress increase, reduction of spall strength, and progressive attenuation of axial stress behind the failure front are properly predicted by the multiple-plane model. Numerical simulations of high-strain-rate pressure-shear experiments indicate that the model predicts reasonably well the shear resistance of the material at strain rates as high as 1 × 10⁶/s. The agreement is believed to be the result of the model's capability in simulating damage-induced anisotropy. By examining the kinetics of the failure process in plate experiments, the authors show that the progressive glass spallation in the vicinity of the failure front and the rate of increase in lateral stress are more consistent with a representation of inelasticity based on shear-activated flow surfaces, inhomogeneous flow, and microcracking, rather than pure microcracking. In the former mechanism, microcracks are likely formed at a later time at the intersection of flow surfaces. In the case of rod-on-rod impact, stress and radial velocity histories predicted by the microcracking model are in agreement with the experimental measurements. Stress attenuation, pulse duration, and release structure are properly simulated. It is shown that failure wave speeds in excess of 3,600 m/s are required for adequate prediction of rod radial expansion.
Dynamics of basaltic glass dissolution - Capturing microscopic effects in continuum scale models
NASA Astrophysics Data System (ADS)
Aradóttir, E. S. P.; Sigfússon, B.; Sonnenthal, E. L.; Björnsson, G.; Jónsson, H.
2013-11-01
The method of 'multiple interacting continua' (MINC) was applied to include microscopic rate-limiting processes in continuum scale reactive transport models of basaltic glass dissolution. The MINC method involves dividing the system into ambient fluid and grains, using a specific surface area to describe the interface between the two. The various grains and regions within grains can then be described by dividing them into continua separated by dividing surfaces. Millions of grains can thus be considered within the method without the need to explicitly discretize them. Four continua were used to describe a dissolving basaltic glass grain; the first describes the ambient fluid around the grain, while the second, third and fourth continua refer to a diffusive leached layer, the dissolving part of the grain and the inert part of the grain, respectively. The model was validated using the TOUGHREACT simulator and data from column flow-through experiments of basaltic glass dissolution at low, neutral and high pH values. Successful reactive transport simulations of the experiments and overall adequate agreement between measured and simulated values validate the MINC approach for incorporating microscopic effects in continuum scale basaltic glass dissolution models. Equivalent models can be used when simulating dissolution and alteration of other minerals. The study provides an example of how numerical modeling and experimental work can be combined to enhance understanding of mechanisms associated with basaltic glass dissolution. Column outlet concentrations indicated that basaltic glass dissolves stoichiometrically at pH 3. Predictive simulations with the developed MINC model indicated significant precipitation of secondary minerals within the column at neutral and high pH, explaining the observed non-stoichiometric outlet concentrations at these pH levels. 
Clay, zeolite and hydroxide precipitation was predicted to be most abundant within the column.
Impact of eliminating fracture intersection nodes in multiphase compositional flow simulation
NASA Astrophysics Data System (ADS)
Walton, Kenneth M.; Unger, Andre J. A.; Ioannidis, Marios A.; Parker, Beth L.
2017-04-01
Algebraic elimination of nodes at discrete fracture intersections via the star-delta technique has proven to be a valuable tool for making multiphase numerical simulations more tractable and efficient. This study examines the assumptions of the star-delta technique and exposes its effects in a 3-D, multiphase context for advective and dispersive/diffusive fluxes. Key issues of relative permeability-saturation-capillary pressure (kr-S-Pc) relationships and capillary barriers at fracture-fracture intersections are discussed. The study uses a multiphase compositional, finite difference numerical model in discrete fracture network (DFN) and discrete fracture-matrix (DFM) modes. It verifies that the numerical model replicates analytical solutions and performs adequately in convergence exercises (conservative and decaying tracer, one- and two-phase flow, DFM and DFN domains). The study culminates in simulations of a two-phase laboratory experiment in which a fluid invades a simple fracture intersection. The experiment and simulations produce different invading-fluid flow paths by varying fracture apertures as oil invades water-filled fractures and as water invades air-filled fractures. Results indicate that the node elimination technique as implemented in the numerical model correctly reproduces the long-term flow path of the invading fluid, but that short-term temporal effects of the capillary traps and barriers arising from the intersection node are lost.
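For single-phase linear flow, the star-delta elimination underlying the technique above has a classical closed form: three branches with conductances g1, g2, g3 meeting at an intersection node are replaced by direct node-to-node conductances G_ij = g_i·g_j / (g1 + g2 + g3). The sketch below illustrates this simplest case and cross-checks it against a Schur-complement elimination of the node from the flow Laplacian; the multiphase kr-S-Pc case treated in the paper is considerably more involved:

```python
import numpy as np

def star_to_delta(g):
    # Y-Δ transformation: equivalent delta conductances G_ij = g_i g_j / sum(g)
    g = np.asarray(g, dtype=float)
    s = g.sum()
    return {(i, j): g[i] * g[j] / s for i in range(3) for j in range(i + 1, 3)}

# Cross-check: eliminating the centre node of the 4-node flow Laplacian via
# a Schur complement must reproduce the same terminal-to-terminal couplings.
g = np.array([2.0, 3.0, 5.0])           # branch conductances (illustrative)
L = np.zeros((4, 4))                     # node 0 is the intersection node
for k, gk in enumerate(g):
    L[0, 0] += gk
    L[k + 1, k + 1] = gk
    L[0, k + 1] = L[k + 1, 0] = -gk

schur = L[1:, 1:] - np.outer(L[1:, 0], L[0, 1:]) / L[0, 0]
for (i, j), G in star_to_delta(g).items():
    assert np.isclose(-schur[i, j], G)   # off-diagonal couplings match
```

The agreement confirms that, for linear fluxes, the star-delta reduction is exact; the paper's finding is that capillary traps and barriers at the eliminated node are what the reduction cannot represent.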
Psikuta, Agnes; Koelblen, Barbara; Mert, Emel; Fontana, Piero; Annaheim, Simon
2017-12-07
Following the growing interest in the further development of manikins to simulate human thermal behaviour more adequately, thermo-physiological human simulators have been developed by coupling a thermal sweating manikin with a thermo-physiology model. Despite their availability and obvious advantages, the number of studies involving these devices is only marginal, which plausibly results from the high complexity of the development and evaluation process and need of multi-disciplinary expertise. The aim of this paper is to present an integrated approach to develop, validate and operate such devices including technical challenges and limitations of thermo-physiological human simulators, their application and measurement protocol, strategy for setting test scenarios, and the comparison to standard methods and human studies including details which have not been published so far. A physical manikin controlled by a human thermoregulation model overcame the limitations of mathematical clothing models and provided a complementary method to investigate thermal interactions between the human body, protective clothing, and its environment. The opportunities of these devices include not only realistic assessment of protective clothing assemblies and equipment but also potential application in many research fields ranging from biometeorology, automotive industry, environmental engineering, and urban climate to clinical and safety applications.
Zelenyak, Andreea-Manuela; Schorer, Nora; Sause, Markus G R
2018-02-01
This paper presents a method for embedding realistic defect geometries of a fiber reinforced material in a finite element modeling environment in order to simulate active ultrasonic inspection. When ultrasonic inspection is used experimentally to investigate the presence of defects in composite materials, the microscopic defect geometry may cause signal characteristics that are difficult to interpret. Hence, modeling of this interaction is key to improving our understanding and interpretation of the acquired ultrasonic signals. To model the true interaction of the ultrasonic wave field with such defect structures as pores, cracks or delaminations, a realistic three-dimensional geometry reconstruction is required. We present a 3D-image-based reconstruction process which converts computed tomography data into adequate surface representations ready to be embedded for processing with finite element methods. Subsequent modeling using these geometries applies a multi-scale and multi-physics simulation approach which results in quantitative A-scan ultrasonic signals that can be directly compared with experimental signals. Therefore, besides the properties of the composite material, a full transducer implementation, piezoelectric conversion and simultaneous modeling of the attached circuit are applied. Comparison between simulated and experimental signals shows very good agreement in electrical voltage amplitude and signal arrival time and thus validates the proposed modeling approach. Simulating ultrasound wave propagation in a medium with a realistic geometry clearly shows a difference in how the disturbance of the waves takes place and ultimately allows more realistic modeling of A-scans. Copyright © 2017 Elsevier B.V. All rights reserved.
Boore, David M.; Di Alessandro, Carola; Abrahamson, Norman A.
2014-01-01
The stochastic method of simulating ground motions requires the specification of the shape and scaling with magnitude of the source spectrum. The spectral models commonly used are either single-corner-frequency or double-corner-frequency models, but the latter have no flexibility to vary the high-frequency spectral levels for a specified seismic moment. Two generalized double-corner-frequency ω2 source spectral models are introduced, one in which two spectra are multiplied together, and another where they are added. Both models have a low-frequency dependence controlled by the seismic moment, and a high-frequency spectral level controlled by the seismic moment and a stress parameter. A wide range of spectral shapes can be obtained from these generalized spectral models, which makes them suitable for inversions of data to obtain spectral models that can be used in ground-motion simulations in situations where adequate data are not available for purely empirical determinations of ground motions, as in stable continental regions. As an example of the use of the generalized source spectral models, data from up to 40 stations from seven events, plus response spectra at two distances and two magnitudes from recent ground-motion prediction equations, were inverted to obtain the parameters controlling the spectral shapes, as well as a finite-fault factor that is used in point-source, stochastic-method simulations of ground motion. The fits to the data are comparable to or even better than those from finite-fault simulations, even for sites close to large earthquakes.
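The multiplicative and additive constructions described above can be sketched with generic, textbook-style ω² displacement source spectra, each normalized so that S → M0 as f → 0. These are illustrative generic forms, not the paper's exact generalized parameterizations, which include additional stress-parameter scaling:

```python
import numpy as np

def single_corner(f, M0, fc):
    # Brune-style omega-squared spectrum with a single corner frequency fc
    return M0 / (1.0 + (f / fc) ** 2)

def double_corner_multiplicative(f, M0, fa, fb):
    # Two spectra multiplied together; each factor contributes one power of f
    # at high frequency, so the product still falls off as f^-2 overall.
    return M0 / (np.sqrt(1.0 + (f / fa) ** 2) * np.sqrt(1.0 + (f / fb) ** 2))

def double_corner_additive(f, M0, fa, fb, eps):
    # Weighted sum of two single-corner spectra (0 <= eps <= 1); eps shifts
    # spectral amplitude between the two corners for a fixed seismic moment.
    return M0 * ((1.0 - eps) / (1.0 + (f / fa) ** 2)
                 + eps / (1.0 + (f / fb) ** 2))

f = np.logspace(-2, 2, 200)              # frequency band in Hz (illustrative)
M0 = 1.0e18                              # seismic moment (illustrative, N·m)
s_mult = double_corner_multiplicative(f, M0, fa=0.1, fb=2.0)
s_add = double_corner_additive(f, M0, fa=0.1, fb=2.0, eps=0.3)
```

Varying fa, fb and eps reproduces the "wide range of spectral shapes" the abstract refers to, while the low-frequency limit remains pinned to the seismic moment.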
NASA Astrophysics Data System (ADS)
Gayler, Sebastian; Wöhling, Thomas; Ingwersen, Joachim; Wizemann, Hans-Dieter; Warrach-Sagi, Kirsten; Attinger, Sabine; Streck, Thilo; Wulfmeyer, Volker
2014-05-01
Interactions between the soil, the vegetation, and the atmospheric boundary layer require close attention when predicting water fluxes in the hydrogeosystem, agricultural systems, weather and climate. However, land-surface schemes used in large scale models continue to show deficits in consistently simulating fluxes of water and energy from the subsurface through vegetation layers to the atmosphere. In this study, the multi-physics version of the Noah land-surface model (Noah-MP) was used to identify the processes, which are most crucial for a simultaneous simulation of water and heat fluxes between land-surface and the lower atmosphere. Comprehensive field data sets of latent and sensible heat fluxes, ground heat flux, soil moisture, and leaf area index from two contrasting field sites in South-West Germany are used to assess the accuracy of simulations. It is shown that an adequate representation of vegetation-related processes is the most important control for a consistent simulation of energy and water fluxes in the soil-plant-atmosphere system. In particular, using a newly implemented sub-module to simulate root growth dynamics has enhanced the performance of Noah-MP at both field sites. We conclude that further advances in the representation of leaf area dynamics and root/soil moisture interactions are the most promising starting points for improving the simulation of feedbacks between the sub-soil, land-surface and atmosphere in fully-coupled hydrological and atmospheric models.
A piloted simulator evaluation of a ground-based 4-D descent advisor algorithm
NASA Technical Reports Server (NTRS)
Davis, Thomas J.; Green, Steven M.; Erzberger, Heinz
1990-01-01
A ground-based, four-dimensional (4D) descent-advisor algorithm is under development at NASA-Ames. The algorithm combines detailed aerodynamic, propulsive, and atmospheric models with an efficient numerical integration scheme to generate 4D descent advisories. The ability of the 4D descent-advisor algorithm to provide adequate control of arrival time for aircraft not equipped with on-board 4D guidance systems is investigated. A piloted simulation was conducted to determine the precision with which the descent advisor could predict the 4D trajectories of typical straight-in descents flown by airline pilots under different wind conditions. The effects of errors in the estimation of wind and initial aircraft weight were also studied. A description of the descent advisor as well as the results of the simulation studies are presented.
Statistical variances of diffusional properties from ab initio molecular dynamics simulations
NASA Astrophysics Data System (ADS)
He, Xingfeng; Zhu, Yizhou; Epstein, Alexander; Mo, Yifei
2018-12-01
Ab initio molecular dynamics (AIMD) simulation is widely employed in studying diffusion mechanisms and in quantifying diffusional properties of materials. However, AIMD simulations are often limited to a few hundred atoms and a short, sub-nanosecond physical timescale, which leads to models that include only a limited number of diffusion events. As a result, the diffusional properties obtained from AIMD simulations are often plagued by poor statistics. In this paper, we re-examine the process to estimate diffusivity and ionic conductivity from the AIMD simulations and establish the procedure to minimize the fitting errors. In addition, we propose methods for quantifying the statistical variance of the diffusivity and ionic conductivity from the number of diffusion events observed during the AIMD simulation. Since an adequate number of diffusion events must be sampled, AIMD simulations should be sufficiently long and can only be performed on materials with reasonably fast diffusion. We chart the ranges of materials and physical conditions that can be accessible by AIMD simulations in studying diffusional properties. Our work provides the foundation for quantifying the statistical confidence levels of diffusion results from AIMD simulations and for correctly employing this powerful technique.
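As a rough illustration of the workflow the paper examines (its specific estimators are not reproduced here), one can fit a diffusivity from the mean squared displacement of a synthetic random walk and attach a crude 1/√N statistical error based on the number of observed jump events; all numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-D random walk standing in for ion trajectories
# (n_ions, jump scale a, timestep dt are all hypothetical numbers).
n_ions, n_steps, a, dt = 200, 2000, 2.0, 1.0   # a in angstrom, dt in ps
jumps = rng.normal(scale=a / np.sqrt(3.0), size=(n_steps, n_ions, 3))
traj = np.cumsum(jumps, axis=0)

# Mean squared displacement and a linear fit: MSD(t) = 6 D t in 3-D.
t = np.arange(1, n_steps + 1) * dt
msd = (traj ** 2).sum(axis=2).mean(axis=1)
D = np.polyfit(t, msd, 1)[0] / 6.0             # angstrom^2 / ps

# Crude statistical error bar from the number of observed jump events,
# in the spirit of the paper's event-counting argument.
n_events = n_ions * n_steps
rel_err = 1.0 / np.sqrt(n_events)
```

With few ions or short runs, `n_events` shrinks and the error bar on D grows accordingly, which is the statistical limitation of short AIMD runs that the paper quantifies.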
NASA Technical Reports Server (NTRS)
Johnson, Daniel E.; Tao, W.-K.; Simpson, J.; Sui, C.-H.; Einaudi, Franco (Technical Monitor)
2001-01-01
Interactions between deep tropical clouds over the western Pacific warm pool and the larger-scale environment are key to understanding climate change. Cloud models are an extremely useful tool in simulating and providing statistical information on heat and moisture transfer processes between cloud systems and the environment, and can therefore be utilized to substantially improve cloud parameterizations in climate models. In this paper, the Goddard Cumulus Ensemble (GCE) cloud-resolving model is used in multi-day simulations of deep tropical convective activity over the Tropical Ocean-Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE). Large-scale temperature and moisture advective tendencies, and horizontal momentum from the TOGA-COARE Intensive Flux Array (IFA) region, are applied to the GCE version which incorporates cyclical boundary conditions. Sensitivity experiments show that grid domain size produces the largest response to domain-mean temperature and moisture deviations, as well as cloudiness, when compared to grid horizontal or vertical resolution, and advection scheme. It is found that a minimum grid-domain size of 500 km is needed to adequately resolve the convective cloud features. The control experiment shows that the atmospheric heating and moistening are primarily a response to cloud latent processes of condensation/evaporation and deposition/sublimation, and to a lesser extent, melting of ice particles. Air-sea exchange of heat and moisture is found to be significant, but of secondary importance, while the radiational response is small. The simulated rainfall and atmospheric heating and moistening agree well with observations, and the model compares favorably with other models simulating this case.
NASA Technical Reports Server (NTRS)
Reschke, Millard F.; Parker, Donald E.
1987-01-01
Seven astronauts reported translational self-motion during roll simulation 1-3 h after landing following 5-7 d of orbital flight. Two reported strong translational self-motion perception when they performed pitch head motions during entry and while the orbiter was stationary on the runway. One of two astronauts from whom adequate data were collected exhibited a 132-deg shift in the phase angle between roll stimulation and horizontal eye position 2 h after landing. Neither of two from whom adequate data were collected exhibited increased horizontal eye movement amplitude or disturbance of voluntary pitch or roll body motion immediately postflight. These results are generally consistent with an otolith tilt-translation reinterpretation model and are being applied to the development of apparatus and procedures intended to preadapt astronauts to the sensory rearrangement of weightlessness.
A microstructurally based model of solder joints under conditions of thermomechanical fatigue
NASA Astrophysics Data System (ADS)
Frear, D. R.; Burchett, S. N.; Rashid, M. M.
The thermomechanical fatigue failure of solder joints is becoming an increasingly important reliability issue. We present two computational methodologies, developed to predict the behavior of near-eutectic Sn-Pb solder joints under fatigue conditions, that use metallurgical tests as fundamental input for constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near-eutectic Sn-Pb solder. The finite element simulations from this model agree well with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure can significantly degrade the fatigue lifetime. The single-phase model is a computational technique developed to predict solder joint behavior using materials data for constitutive relation constants that can be determined through straightforward metallurgical experiments. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests, and the results showed an adequate fit to experimental results. The single-phase model could be very useful for conditions where microstructural evolution is not a dominant factor in fatigue.
Intensity dependence of focused ultrasound lesion position
NASA Astrophysics Data System (ADS)
Meaney, Paul M.; Cahill, Mark D.; ter Haar, Gail R.
1998-04-01
Knowledge of the spatial distribution of intensity loss from an ultrasonic beam is critical to predicting lesion formation in focused ultrasound surgery (FUS). To date most models have used linear propagation models to predict the intensity profiles needed to compute the temporally varying temperature distributions. These can be used to compute thermal dose contours that can in turn be used to predict the extent of thermal damage. However, these simulations fail to adequately describe the abnormal lesion formation behavior observed for in vitro experiments in cases where the transducer drive levels are varied over a wide range. For these experiments, the extent of thermal damage has been observed to move significantly closer to the transducer with increasing transducer drive levels than would be predicted using linear propagation models. The simulations described herein utilize the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear propagation model with the parabolic approximation for highly focused ultrasound waves, to demonstrate that the positions of the peak intensity and the lesion do indeed move closer to the transducer. This illustrates that for accurate modeling of heating during FUS, nonlinear effects must be considered.
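The thermal dose mentioned above is conventionally computed with the Sapareto-Dewey cumulative-equivalent-minutes (CEM43) formulation; a sketch, where the temperature history and the damage threshold are illustrative rather than taken from this study:

```python
import numpy as np

def cem43(temps_c, dt_s):
    """Cumulative equivalent minutes at 43 degC (Sapareto-Dewey).
    R = 0.5 above the 43 degC breakpoint, 0.25 below it."""
    temps_c = np.asarray(temps_c, float)
    R = np.where(temps_c >= 43.0, 0.5, 0.25)
    return np.sum(R ** (43.0 - temps_c)) * dt_s / 60.0

# 30 s at a constant 56 degC, sampled every 0.1 s (illustrative values):
dose = cem43(np.full(300, 56.0), dt_s=0.1)
# A commonly quoted threshold for thermal necrosis is ~240 CEM43 minutes.
print(dose > 240.0)   # -> True
```

Thresholding the dose field at each spatial point is what produces the thermal dose contours, and hence the predicted lesion extent, referred to in the abstract.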
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Na; Zhang, Peng; Kang, Wei
Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in Molecular Dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. It demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately.
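The abstract does not give the modified potential or its parameters, but the unmodified Morse form it starts from can be sketched as follows (parameter values are illustrative):

```python
import numpy as np

def morse(r, d_e=1.0, alpha=1.5, r0=1.0):
    """Standard Morse pair potential; the paper modifies this form with
    effective mass scaling (details not given in the abstract)."""
    x = np.exp(-alpha * (r - r0))
    return d_e * (x * x - 2.0 * x)

def morse_force(r, d_e=1.0, alpha=1.5, r0=1.0):
    """F = -dU/dr for the Morse potential: zero at the equilibrium
    separation r0, attractive beyond it, repulsive inside it."""
    x = np.exp(-alpha * (r - r0))
    return 2.0 * d_e * alpha * (x * x - x)

r = np.linspace(0.5, 4.0, 400)
u = morse(r)   # well of depth d_e centered at r0
```

The well depth `d_e`, stiffness `alpha`, and equilibrium distance `r0` are the knobs an inverse-problem calibration like the one described would tune against macroscopic flow properties.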
Simulation Modeling for Off-Nominal Conditions - Where Are We Today?
NASA Technical Reports Server (NTRS)
Shah, Gautam H.; Foster, John V.; Cunningham, Kevin
2010-01-01
The modeling of aircraft flight characteristics in off-nominal or otherwise adverse conditions has become increasingly important for simulation in the loss-of-control arena. Adverse conditions include environmentally induced upsets such as wind shear or wake vortex encounters; off-nominal flight conditions, such as stall or departure; on-board systems failures; and structural failures or aircraft damage. Spirited discussions in the research community are taking place as to the fidelity and data requirements for adequate representation of vehicle dynamics under such conditions for a host of research areas, including recovery training, flight controls development, trajectory guidance/planning, and envelope limiting. The increasing need for multiple sources of data (empirical, computational, experimental) for modeling across a larger flight envelope leads to challenges in developing methods of appropriately applying or combining such data, particularly in a dynamic flight environment with a physically and/or aerodynamically asymmetric vehicle. Traditional simplifications and symmetry assumptions in current modeling methodology may no longer be valid. Furthermore, once modeled, challenges abound in the validation of flight dynamics characteristics in adverse flight regimes.
Lenz, Bernard N.; Saad, David A.; Fitzpatrick, Faith A.
2003-01-01
The effects of land cover on flooding and base-flow characteristics of Whittlesey Creek, Bayfield County, Wis., were examined in a study that involved ground-water-flow and rainfall-runoff modeling. Field data were collected during 1999-2001 for synoptic base flow, streambed head and temperature, precipitation, continuous streamflow and stream stage, and other physical characteristics. Well logs provided data for potentiometric-surface altitudes and stratigraphic descriptions. Geologic, soil, hydrography, altitude, and historical land-cover data were compiled into a geographic information system and used in two ground-water-flow models (GFLOW and MODFLOW) and a rainfall-runoff model (SWAT). A deep ground-water system intersects Whittlesey Creek near the confluence with the North Fork, producing a steady base flow of 17-18 cubic feet per second. Upstream from the confluence, the creek has little or no base flow; flow is from surface runoff and a small amount of perched ground water. Most of the base flow to Whittlesey Creek originates as recharge through the permeable sands in the center of the Bayfield Peninsula to the northwest of the surface-water-contributing basin. Based on simulations, model-wide changes in recharge caused a proportional change in simulated base flow for Whittlesey Creek. Changing the simulated amount of recharge by 25 to 50 percent in only the ground-water-contributing area results in relatively small changes in base flow to Whittlesey Creek (about 2-11 percent). Simulated changes in land cover within the Whittlesey Creek surface-water-contributing basin would have minimal effects on base flow and average annual runoff, but flood peaks (based on daily mean flows on peak-flow days) could be affected. Based on the simulations, changing the basin land cover to a reforested condition results in a reduction in flood peaks of about 12 to 14 percent for up to a 100-yr flood.
Changing the basin land cover to 25 percent urban land or returning basin land cover to the intensive row-crop agriculture of the 1920s results in flood peaks increasing by as much as 18 percent. The SWAT model is limited to a daily time step, which is adequate for describing the surface-water/ground-water interaction and percentage changes. It may not, however, be adequate in describing peak flow because the instantaneous peak flow in Whittlesey Creek during a flood can be more than twice the magnitude of the daily mean flow during that same flood. In addition, the storage and infiltration capacities of wetlands in the basin are not fully understood and need further study.
NASA Technical Reports Server (NTRS)
Shindell, Drew T.; Grenfell, J. Lee; Rind, David; Price, Colin; Grewe, Volker; Hansen, James E. (Technical Monitor)
2001-01-01
A tropospheric chemistry module has been developed for use within the Goddard Institute for Space Studies (GISS) general circulation model (GCM) to study interactions between chemistry and climate change. The model uses a simplified chemistry scheme based on CO-NOx-CH4 chemistry, and also includes a parameterization for emissions of isoprene, the most important non-methane hydrocarbon. The model reproduces present day annual cycles and mean distributions of key trace gases fairly well, based on extensive comparisons with available observations. Examining the simulated change between present day and pre-industrial conditions, we find that the model has a similar response to that seen in other simulations. It shows a 45% increase in the global tropospheric ozone burden, within the 25% - 57% range seen in other studies. Annual average zonal mean ozone increases by more than 125% at Northern Hemisphere middle latitudes near the surface. Comparison of model runs that allow the calculated ozone to interact with the GCM's radiation and meteorology with those that do not shows only minor differences for ozone. The common usage of ozone fields that are not calculated interactively seems to be adequate to simulate both the present day and the pre-industrial ozone distributions. However, use of coupled chemistry does alter the change in tropospheric oxidation capacity, enlarging the overall decrease in OH concentrations from the pre-industrial to the present by about 10% (-5.3% global annual average in uncoupled mode, -5.9% in coupled mode). This indicates that there may be systematic biases in the simulation of the pre-industrial to present day decrease in the oxidation capacity of the troposphere (though a 10% difference is well within the total uncertainty). Global annual average radiative forcing from pre-industrial to present day ozone change is 0.32 W/sq m. The forcing seems to be increased by about 10% when the chemistry is coupled to the GCM. 
Forcing values greater than 0.8 W/sq m are seen over large areas of the United States, Southern Europe, North Africa, the Middle East, Central Asia, and the Arctic. Radiative forcing is greater than 1.5 W/sq m over parts of these areas during Northern summer. Though there are local differences, the radiative forcing is overall in good agreement with the results of other modeling studies in both its magnitude and spatial distribution, demonstrating that the simplified chemistry is adequate for climate studies.
Valen-Sendstad, Kristian; Mardal, Kent-André; Steinman, David A
2013-01-18
High-frequency flow fluctuations in intracranial aneurysms have previously been reported in vitro and in vivo. On the other hand, the vast majority of image-based computational fluid dynamics (CFD) studies of cerebral aneurysms report periodic, laminar flow. We have previously demonstrated that transitional flow, consistent with in vivo reports, can occur in a middle cerebral artery (MCA) bifurcation aneurysm when ultra-high-resolution direct numerical simulation methods are applied. The objective of the present study was to investigate whether such high-frequency flow fluctuations might be more widespread in adequately resolved CFD models. A sample of N=12 anatomically realistic MCA aneurysms (five unruptured, seven ruptured) was digitally segmented from CT angiograms. Four were classified as sidewall aneurysms, the other eight as bifurcation aneurysms. Transient CFD simulations were carried out assuming a steady inflow velocity of 0.5 m/s, corresponding to typical peak systolic conditions at the MCA. To allow for detection of clinically reported high-frequency flow fluctuations and resulting flow structures, temporal and spatial resolutions of the CFD simulations were on the order of 0.1 ms and 0.1 mm, respectively. A transient flow response to the stationary inflow conditions was found in five of the 12 aneurysms, with energetic fluctuations up to 100 Hz, and in one case up to 900 Hz. Incidentally, all five were ruptured bifurcation aneurysms, whereas all four sidewall aneurysms, including one ruptured case, quickly reached a stable, steady-state solution. Energetic, rapid fluctuations may be overlooked in CFD models of bifurcation aneurysms unless adequate temporal and spatial resolutions are used. Such fluctuations may be relevant to the mechanobiology of aneurysm rupture, and to a recently reported dichotomy between predictors of rupture likelihood for bifurcation vs. sidewall aneurysms. Copyright © 2012 Elsevier Ltd. All rights reserved.
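Detecting fluctuations like these in a simulated velocity trace reduces to spectral analysis at the stated 0.1 ms resolution (Nyquist frequency 5 kHz, comfortably above the 900 Hz reported); a sketch on a synthetic signal, where the 100 Hz component and noise levels are invented for illustration:

```python
import numpy as np

dt = 1e-4                       # 0.1 ms sampling, as in the study
t = np.arange(0.0, 1.0, dt)
rng = np.random.default_rng(1)
# Synthetic probe velocity: mean flow + a 100 Hz fluctuation + noise.
v = (0.5 + 0.05 * np.sin(2.0 * np.pi * 100.0 * t)
     + 0.005 * rng.standard_normal(t.size))

# Power spectrum of the demeaned trace; the fluctuation stands out
# as a sharp peak at its frequency.
spec = np.abs(np.fft.rfft(v - v.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)
peak = freqs[np.argmax(spec)]
print(peak)   # peak frequency near 100 Hz
```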
Limits to high-speed simulations of spiking neural networks using general-purpose computers.
Zenke, Friedemann; Gerstner, Wulfram
2014-01-01
To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
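The millisecond-scale STDP window that drives the temporal-resolution requirement can be sketched with the common pair-based exponential rule; the amplitudes and time constant below are illustrative and not tied to any of the benchmarked simulators:

```python
import numpy as np

def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based exponential STDP: pre-before-post (delta_t > 0)
    potentiates the synapse, post-before-pre depresses it."""
    delta_t_ms = np.asarray(delta_t_ms, float)
    return np.where(delta_t_ms > 0,
                    a_plus * np.exp(-delta_t_ms / tau_ms),
                    -a_minus * np.exp(delta_t_ms / tau_ms))

# The window decays on a ~20 ms scale, so sub-millisecond spike timing
# matters, while long-term plasticity unfolds over hours of simulated
# time: 24 h at a 0.1 ms step is 864 million update steps.
print(stdp_dw(10.0))    # potentiation
print(stdp_dw(-10.0))   # depression
```

The tension between the millisecond window and the hours-long simulation horizon is exactly why raw simulation speed, rather than network size, is the bottleneck the paper benchmarks.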
Kinetic analysis of elastomeric lag damper for helicopter rotors
NASA Astrophysics Data System (ADS)
Liu, Yafang; Wang, Jidong; Tong, Yan
2018-02-01
Elastomeric lag dampers suppress the ground resonance and air resonance that play a significant role in the stability of the helicopter. In this paper, an elastomeric lag damper made from silicone rubber is built, and a series of experiments is conducted on it. The stress-strain curves of the elastomeric lag damper subjected to shear forces at different frequencies are obtained, and a finite element model is established based on the Burgers model. The results of simulation and tests show that the simple, linear model yields good predictions of damper energy dissipation and is adequate for predicting the stress-strain hysteresis loop within the operating frequency range for small-amplitude oscillations.
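The Burgers model referenced above is the four-parameter series combination of a Maxwell element and a Kelvin-Voigt element; its creep compliance can be sketched as follows, with moduli and viscosities that are illustrative rather than the paper's fitted values:

```python
import numpy as np

def burgers_creep_compliance(t, e1=1.0, eta1=10.0, e2=0.5, eta2=2.0):
    """Creep compliance J(t) of the four-parameter Burgers model:
    instantaneous elasticity (1/e1), viscous flow (t/eta1), and
    delayed elasticity from the Kelvin-Voigt element."""
    t = np.asarray(t, float)
    return 1.0 / e1 + t / eta1 + (1.0 - np.exp(-e2 * t / eta2)) / e2

t = np.linspace(0.0, 20.0, 200)
strain = burgers_creep_compliance(t) * 0.1   # strain under constant stress 0.1
```

Under oscillatory shear, the same four parameters set the storage and loss moduli, and hence the area of the stress-strain hysteresis loop that measures energy dissipation.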
Monte Carlo modeling and meteor showers
NASA Technical Reports Server (NTRS)
Kulikova, N. V.
1987-01-01
Prediction of short-lived increases in the cosmic dust influx, the concentration in the lower thermosphere of atoms and ions of meteor origin, and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest, and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers; this scheme is described. According to the scheme, the formation of ten well-known meteor streams was simulated and the possibility of a genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.
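The ejection step of such a Monte Carlo scheme can be sketched as isotropic direction sampling combined with a simple speed law; both choices below are stand-ins, since the abstract does not specify the ejection law actually used:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_ejection_velocities(n, v_mean=0.5):
    """Isotropic ejection directions with exponentially distributed
    speeds (km/s); both distributions are illustrative stand-ins for
    the ejection law of the cited scheme."""
    mu = rng.uniform(-1.0, 1.0, n)            # cos(colatitude), uniform
    phi = rng.uniform(0.0, 2.0 * np.pi, n)    # azimuth, uniform
    sin_t = np.sqrt(1.0 - mu ** 2)
    dirs = np.column_stack([sin_t * np.cos(phi), sin_t * np.sin(phi), mu])
    speeds = rng.exponential(v_mean, n)
    return dirs * speeds[:, None]

v = sample_ejection_velocities(100_000)
```

Each sampled velocity, added to the parent body's orbital velocity at the ejection point, yields one meteoroid orbit; repeating this many times builds up the simulated stream.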
NASA Technical Reports Server (NTRS)
Minnetyan, Levon; Chamis, Christos C. (Technical Monitor)
2003-01-01
Computational simulation results can give the prediction of damage growth and progression and fracture toughness of composite structures. The experimental data from literature provide environmental effects on the fracture behavior of metallic or fiber composite structures. However, the traditional experimental methods to analyze the influence of the imposed conditions are expensive and time consuming. This research used the CODSTRAN code to model the temperature effects, scaling effects and the loading effects of fiber/braided composite specimens with and without fiber-optic sensors on the damage initiation and energy release rates. The load-displacement relationship and fracture toughness assessment approach is compared with the test results from literature and it is verified that the computational simulation, with the use of established material modeling and finite element modules, adequately tracks the changes of fracture toughness and subsequent fracture propagation for any fiber/braided composite structure due to the change of fiber orientations, presence of large diameter optical fibers, and any loading conditions.
NASA Technical Reports Server (NTRS)
Mill, F. W.; Krebs, G. N.; Strauss, E. S.
1976-01-01
The Multi-Purpose System Simulator (MPSS) model was used to investigate the current and projected performance of the Monitor and Control Display System (MACDS) at the Goddard Space Flight Center in adequately processing and displaying launch data. MACDS consists of two interconnected mini-computers with associated terminal input and display output equipment and a disk-stored data base. Three configurations of MACDS were evaluated via MPSS and their performances ascertained. First, the current version of MACDS was found inadequate to handle projected launch data loads because of unacceptable data backlogging. Second, the current MACDS hardware with enhanced software was capable of handling twice the anticipated data loads. Third, an upgraded hardware ensemble combined with the enhanced software was capable of handling four times the anticipated data loads.
NASA Astrophysics Data System (ADS)
Rodgers, Keith B.; Latif, Mojib; Legutke, Stephanie
2000-09-01
The sensitivity of the thermal structure of the equatorial Pacific and Indian Ocean pycnoclines to a model's representation of the Indonesian Straits connecting the two basins is investigated. Two integrations are performed using the global HOPE ocean model. The initial conditions and surface forcing for both cases are identical; the only difference between the runs is that one has an opening for the Indonesian Straits which spans the equator on the Pacific side, and the other has an opening which lies fully north of the equator. The resulting sensitivity throughout much of the upper ocean is greater than 0.5°C for both the equatorial Indian and Pacific. A realistic simulation of net Indonesian Throughflow (ITF) transport (measured in Sverdrups) is not sufficient for an adequate simulation of equatorial watermasses. The ITF must also contain a realistic admixture of northern and southern Pacific source water.
Liu, Hui; Hu, Dawei; Dong, Chen; Fu, Yuming; Liu, Guanghui; Qin, Youcai; Sun, Yi; Liu, Dianlei; Li, Lei; Liu, Hong
2017-08-01
There is much uncertainty about the risks of seed germination after repeated or protracted environmental low-dose ionizing radiation exposure. The purpose of this study is to explore the influence mechanism of low-dose ionizing radiation on wheat seed germination using a model linking physiological characteristics and developmental-dynamics simulation. A low-dose ionizing radiation environment simulator was built to investigate the wheat (Triticum aestivum L.) seed germination process, and then a kinetic model expressing the relationship between wheat seed germination dynamics and low-dose ionizing radiation intensity variations was developed from experimental data, plant physiology, relevant hypotheses and system dynamics, and sufficiently validated and accredited by computer simulation. Germination percentages showed no differences in response to different dose rates. However, root and shoot lengths were reduced significantly. Plasma governing equations were set up and the finite element analysis demonstrated H2O, CO2 and O2 transport as well as the seed physiological responses to the low-dose ionizing radiation. The kinetic model was highly valid, and simultaneously the related influence mechanism of low-dose ionizing radiation on wheat seed germination proposed in the modeling process was also adequately verified. Collectively these data demonstrate that low-dose ionizing radiation has an important effect on absorbing water, consuming O2 and releasing CO2, which means the risk for embryo and endosperm development was higher. Copyright © 2017 Elsevier Ltd. All rights reserved.
3-d finite element model development for biomechanics: a software demonstration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollerbach, K.; Hollister, A.M.; Ashby, E.
1997-03-01
Finite element analysis is becoming an increasingly important part of biomechanics and orthopedic research, as computational resources become more powerful, and data handling algorithms become more sophisticated. Until recently, tools with sufficient power did not exist or were not accessible to adequately model complicated, three-dimensional, nonlinear biomechanical systems. In the past, finite element analyses in biomechanics have often been limited to two-dimensional approaches, linear analyses, or simulations of single tissue types. Today, we have the resources to model fully three-dimensional, nonlinear, multi-tissue, and even multi-joint systems. The authors will present the process of developing these kinds of finite element models, using human hand and knee examples, and will demonstrate their software tools.
Xiong, Xiaoping; Wu, Jianrong
2017-01-01
The treatment of cancer has progressed dramatically in recent decades, such that it is no longer uncommon to see a cure or long-term survival in a significant proportion of patients with various types of cancer. To adequately account for the cure fraction when designing clinical trials, cure models should be used. In this article, a sample size formula for the weighted log-rank test is derived under the fixed alternative hypothesis for the proportional hazards cure models. Simulation showed that the proposed sample size formula provides an accurate estimation of sample size for designing clinical trials under the proportional hazards cure models. Copyright © 2016 John Wiley & Sons, Ltd.
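The article's formula is not reproduced in the abstract; as a hedged sketch of the ingredients, the classical Schoenfeld event count for an unweighted log-rank test can be inflated by the cure fraction, since only uncured patients can contribute events (all numbers are illustrative, and the article's weighted-test derivation is more involved):

```python
from math import ceil, log
from statistics import NormalDist

def events_schoenfeld(hr, alpha=0.05, power=0.8, alloc=0.5):
    """Classical Schoenfeld approximation for the number of events
    required by an (unweighted) log-rank test at hazard ratio hr."""
    z = NormalDist().inv_cdf
    za, zb = z(1.0 - alpha / 2.0), z(power)
    return ceil((za + zb) ** 2 / (alloc * (1.0 - alloc) * log(hr) ** 2))

def sample_size_with_cure(hr, cure_frac, event_prob_uncured):
    """Inflate the sample size because cured patients never fail:
    expected events per patient = (1 - cure_frac) * event_prob_uncured."""
    d = events_schoenfeld(hr)
    return ceil(d / ((1.0 - cure_frac) * event_prob_uncured))

print(events_schoenfeld(0.7))                # required events
print(sample_size_with_cure(0.7, 0.3, 0.8))  # required patients
```

A larger cure fraction shrinks the expected events per enrolled patient, which is why ignoring the cure fraction at the design stage underpowers the trial.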
Towards an integrated model of floodplain hydrology representing feedbacks and anthropogenic effects
NASA Astrophysics Data System (ADS)
Andreadis, K.; Schumann, G.; Voisin, N.; O'Loughlin, F.; Tesfa, T. K.; Bates, P.
2017-12-01
The exchange of water between hillslopes, river channels and floodplain can be quite complex and the difficulty in capturing the mechanisms behind it is exacerbated by the impact of human activities such as irrigation and reservoir operations. Although there has been a vast body of work on modeling hydrological processes, most of the resulting models have been limited with regards to aspects of the coupled human-natural system. For example, hydrologic models that represent processes such as evapotranspiration, infiltration, interception and groundwater dynamics often neglect anthropogenic effects or do not adequately represent the inherently two-dimensional floodplain flow. We present an integrated modeling framework that is comprised of the Variable Infiltration Capacity (VIC) hydrology model, the LISFLOOD-FP hydrodynamic model, and the Water resources Management (WM) model. The VIC model solves the energy and water balance over a gridded domain and simulates a number of hydrologic features such as snow, frozen soils, lakes and wetlands, while also representing irrigation demand from cropland areas. LISFLOOD-FP solves an approximation of the Saint-Venant equations to efficiently simulate flow in river channels and the floodplain. The implementation of WM accommodates a variety of operating rules in reservoirs and withdrawals due to consumptive demands, allowing the successful simulation of regulated flow. The models are coupled so as to allow feedbacks between their corresponding processes, therefore providing the ability to test different hypotheses about the floodplain hydrology of large-scale basins. We test this integrated framework over the Zambezi River basin by simulating its hydrology from 2000-2010, and evaluate the results against remotely sensed observations. Finally, we examine the sensitivity of streamflow and water inundation to changes in reservoir operations, precipitation and temperature.
NASA Astrophysics Data System (ADS)
Seftigen, Kristina; Goosse, Hugues; Klein, Francois; Chen, Deliang
2017-12-01
The integration of climate proxy information with general circulation model (GCM) results offers considerable potential for deriving greater understanding of the mechanisms underlying climate variability, as well as unique opportunities for out-of-sample evaluations of model performance. In this study, we combine insights from a new tree-ring hydroclimate reconstruction from Scandinavia with projections from a suite of forced transient simulations of the last millennium and historical intervals from the CMIP5 and PMIP3 archives. Model simulations and proxy reconstruction data are found to broadly agree on the modes of atmospheric variability that produce droughts-pluvials in the region. Despite these dynamical similarities, large differences between simulated and reconstructed hydroclimate time series remain. We find that the GCM-simulated multi-decadal and/or longer hydroclimate variability is systematically smaller than the proxy-based estimates, whereas the dominance of GCM-simulated high-frequency components of variability is not reflected in the proxy record. Furthermore, the paleoclimate evidence indicates in-phase coherencies between regional hydroclimate and temperature on decadal timescales, i.e., sustained wet periods have often been concurrent with warm periods and vice versa. The CMIP5-PMIP3 archive suggests, however, out-of-phase coherencies between the two variables in the last millennium. The lack of adequate understanding of mechanisms linking temperature and moisture supply on longer timescales has serious implications for attribution and prediction of regional hydroclimate changes. Our findings stress the need for further paleoclimate data-model intercomparison efforts to expand our understanding of the dynamics of hydroclimate variability and change, to enhance our ability to evaluate climate models, and to provide a more comprehensive view of future drought and pluvial risks.
NASA Astrophysics Data System (ADS)
Rasmussen, K. L.; Prein, A. F.; Rasmussen, R. M.; Ikeda, K.; Liu, C.
2017-11-01
Novel high-resolution convection-permitting regional climate simulations over the US employing the pseudo-global warming approach are used to investigate changes in the convective population and thermodynamic environments in a future climate. Two continuous 13-year simulations were conducted using (1) ERA-Interim reanalysis and (2) ERA-Interim reanalysis plus a climate perturbation for the RCP8.5 scenario. The simulations adequately reproduce the observed precipitation diurnal cycle, indicating that they capture organized and propagating convection that most climate models cannot adequately represent. This study shows that weak to moderate convection will decrease and strong convection will increase in frequency in a future climate. Analysis of the thermodynamic environments supporting convection shows that both convective available potential energy (CAPE) and convective inhibition (CIN) increase downstream of the Rockies in a future climate. Previous studies suggest that CAPE will increase in a warming climate; however, a corresponding increase in CIN acts as a balancing force to shift the convective population by suppressing weak to moderate convection and provides an environment where CAPE can build to extreme levels that may result in more frequent severe convection. An idealized investigation of fundamental changes in the thermodynamic environment was conducted by shifting a standard atmospheric profile by ± 5 °C. When temperature is increased, both CAPE and CIN increase in magnitude, while the opposite is true for decreased temperatures. Thus, even in the absence of synoptic and mesoscale variations, a warmer climate will provide more CAPE and CIN that will shift the convective population, likely impacting water and energy budgets on Earth.
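The two quantities at the center of this analysis can be illustrated with a toy buoyancy integration: CAPE is the positively buoyant area and CIN the negatively buoyant area between a lifted parcel's virtual temperature and the environment's. The profiles below are synthetic illustrations, not real soundings, and the piecewise-linear integration is a simplification of what a real sounding analysis would do.

```python
# Sketch: CAPE and CIN from parcel vs. environment virtual temperature.
g = 9.81  # gravitational acceleration, m s^-2

def cape_cin(z, t_env, t_parcel):
    """Trapezoidal integration of parcel buoyancy: positive area -> CAPE,
    negative area -> CIN (both returned in J kg^-1, CIN as a positive number)."""
    cape = cin = 0.0
    for i in range(len(z) - 1):
        b0 = g * (t_parcel[i] - t_env[i]) / t_env[i]
        b1 = g * (t_parcel[i + 1] - t_env[i + 1]) / t_env[i + 1]
        area = 0.5 * (b0 + b1) * (z[i + 1] - z[i])
        if area > 0:
            cape += area
        else:
            cin -= area
    return cape, cin

z     = [0, 1000, 2000, 6000, 10000]   # height (m)
t_env = [300, 294, 288, 260, 230]      # environment virtual temperature (K)
t_par = [299, 293, 289, 266, 230]      # lifted-parcel virtual temperature (K)
cape, cin = cape_cin(z, t_env, t_par)
```

Warming the parcel profile uniformly raises CAPE in this sketch, consistent with the qualitative argument above; capturing the simultaneous CIN increase would require recomputing the parcel ascent, which is beyond this toy.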
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan, Alexander M.; Cheng, Susan J.; Ashworth, Kirsti
Foliar emissions of biogenic volatile organic compounds (BVOC), important precursors of tropospheric ozone and secondary organic aerosols, vary widely by vegetation type. Modeling studies to date typically represent the canopy as a single dominant tree type or a blend of tree types, yet many forests are diverse with trees of varying height. To assess the sensitivity of biogenic emissions to tree height variation, we compare two 1-D canopy model simulations in which BVOC emission potentials are homogeneous or heterogeneous with canopy depth. The heterogeneous canopy emulates the mid-successional forest at the University of Michigan Biological Station (UMBS). In this case, high-isoprene-emitting foliage (e.g., aspen and oak) is constrained to the upper canopy, where higher sunlight availability increases the light-dependent isoprene emission, leading to 34% more isoprene and its oxidation products as compared to the homogeneous simulation. Isoprene declines from aspen mortality are 10% larger when heterogeneity is considered. Overall, our results highlight the importance of adequately representing complexities of forest canopy structure when simulating light-dependent BVOC emissions and chemistry.
Holmstrom, Eero; Haberl, Bianca; Pakarinen, Olli H.; ...
2016-02-20
Variability in the short-to-intermediate range order of pure amorphous silicon prepared by different experimental and computational techniques is probed by measuring mass density, atomic coordination, bond-angle deviation, and dihedral angle deviation. It is found that there is significant variability in order parameters at these length scales in this archetypal covalently bonded, monoatomic system. This diversity strongly reflects preparation technique and thermal history in both experimental and simulated systems. Experiment and simulation do not fully quantitatively agree, partly due to differences in the way parameters are accessed. However, qualitative agreement in the trends is identified. Relaxed forms of amorphous silicon closely resemble continuous random networks generated by a hybrid method of bond-switching Monte Carlo and molecular dynamics simulation. As-prepared ion implanted amorphous silicon can be adequately modeled using a structure generated from amorphization via ion bombardment using energetic recoils. Preparation methods which narrowly avoid crystallization such as experimental pressure-induced amorphization or simulated melt-quenching result in inhomogeneous structures that contain regions with significant variations in atomic ordering. Ad hoc simulated structures containing small (1 nm) diamond cubic crystal inclusions were found to possess relatively high bond-angle deviations and low dihedral angle deviations, a trend that could not be reconciled with any experimental material.
Sim3C: simulation of Hi-C and Meta3C proximity ligation sequencing technologies.
DeMaere, Matthew Z; Darling, Aaron E
2018-02-01
Chromosome conformation capture (3C) and Hi-C DNA sequencing methods have rapidly advanced our understanding of the spatial organization of genomes and metagenomes. Many variants of these protocols have been developed, each with their own strengths. Currently there is no systematic means for simulating sequence data from this family of sequencing protocols, potentially hindering the advancement of algorithms to exploit this new datatype. We describe a computational simulator that, given simple parameters and reference genome sequences, will simulate Hi-C sequencing on those sequences. The simulator models the basic spatial structure in genomes that is commonly observed in Hi-C and 3C datasets, including the distance-decay relationship in proximity ligation, differences in the frequency of interaction within and across chromosomes, and the structure imposed by cells. A means to model the 3D structure of randomly generated topologically associating domains is provided. The simulator considers several sources of error common to 3C and Hi-C library preparation and sequencing methods, including spurious proximity ligation events and sequencing error. We have introduced the first comprehensive simulator for 3C and Hi-C sequencing protocols. We expect the simulator to have use in testing of Hi-C data analysis algorithms, as well as more general value for experimental design, where questions such as the required depth of sequencing, enzyme choice, and other decisions can be made in advance in order to ensure adequate statistical power with respect to experimental hypothesis testing.
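Two of the structural features named above, the distance-decay relationship in cis ligation products and a background of spurious ligations, can be sketched as a sampling routine. The power-law exponent, minimum separation and spurious fraction below are hypothetical placeholders, not Sim3C's actual defaults.

```python
# Sketch: sampling one ligation pair's genomic separation with a truncated
# power-law distance decay plus a uniform spurious-ligation background.
import random

def sample_separation(genome_len, alpha=1.0, min_sep=1_000,
                      p_spurious=0.05, rng=random):
    """Return the genomic separation (bp) of one simulated ligation pair."""
    if rng.random() < p_spurious:
        # spurious ligation: partner chosen uniformly along the sequence
        return rng.randrange(genome_len)
    # inverse-transform sample from p(d) ~ d^-(alpha+1), d >= min_sep
    u = rng.random()
    return min(genome_len - 1, int(min_sep * (1.0 - u) ** (-1.0 / alpha)))

rng = random.Random(42)
seps = [sample_separation(5_000_000, rng=rng) for _ in range(10_000)]
short = sum(s < 10_000 for s in seps) / len(seps)
```

With these parameters the vast majority of pairs fall within a few kilobases, reproducing the short-range enrichment that dominates real Hi-C libraries.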
Investigating low flow process controls, through complex modelling, in a UK chalk catchment
NASA Astrophysics Data System (ADS)
Lubega Musuuza, Jude; Wagener, Thorsten; Coxon, Gemma; Freer, Jim; Woods, Ross; Howden, Nicholas
2017-04-01
The typical streamflow response of Chalk catchments is dominated by groundwater contributions, owing to the high degree of groundwater recharge through preferential flow pathways. The groundwater store attenuates the precipitation signal, which causes a delay between the corresponding high and low extremes in the precipitation and streamflow signals. Streamflow responses can therefore be quite out of phase with the precipitation input to a Chalk catchment. Approaches to characterising such catchment systems, including modelling approaches, therefore need to reproduce these percolation- and groundwater-dominated flow pathways. The simulation of low flow conditions for Chalk catchments in numerical models is especially difficult due to the complex interactions between various processes that may not be adequately represented or resolved in the models. Periods of low stream flow are particularly important due to competing water uses in the summer, including agriculture and water supply. In this study we apply and evaluate the physically-based Penn State Integrated Hydrologic Model (PIHM) on the River Kennet, a sub-catchment of the Thames Basin, to demonstrate how the simulations of a Chalk catchment are improved by a physically-based system representation. We also use an ensemble of simulations to investigate the sensitivity of various hydrologic signatures (relevant to low flows and droughts) to the different parameters in the model, thereby inferring the levels of control exerted by the processes that the parameters represent.
A Methodology for the Design of Application-Specific Cyber-Physical Social Sensing Co-Simulators.
Sánchez, Borja Bordel; Alcarria, Ramón; Sánchez-Picot, Álvaro; Sánchez-de-Rivera, Diego
2017-09-22
Cyber-Physical Social Sensing (CPSS) is a new trend in the context of pervasive sensing. In these new systems, various domains coexist in time, evolve together and influence each other. Thus, application-specific tools are necessary for specifying and validating designs and simulating systems. However, nowadays, different tools are employed to simulate each domain independently. The main cause of this lack of co-simulation instruments able to simulate all domains together is the extreme difficulty of combining and synchronizing the various tools. In order to reduce that difficulty, an adequate architecture for the final co-simulator must be selected. Therefore, in this paper the authors investigate and propose a methodology for the design of CPSS co-simulation tools. The paper describes the four steps that software architects should follow in order to design the most adequate co-simulator for a certain application, considering the final users' needs and requirements and various additional factors such as the development team's experience. Moreover, the first practical use case of the proposed methodology is provided. An experimental validation is also included in order to evaluate the performance of the proposed co-simulator and to determine the correctness of the proposal.
A Methodology for the Design of Application-Specific Cyber-Physical Social Sensing Co-Simulators
Sánchez-Picot, Álvaro
2017-01-01
Cyber-Physical Social Sensing (CPSS) is a new trend in the context of pervasive sensing. In these new systems, various domains coexist in time, evolve together and influence each other. Thus, application-specific tools are necessary for specifying and validating designs and simulating systems. However, nowadays, different tools are employed to simulate each domain independently. The main cause of this lack of co-simulation instruments able to simulate all domains together is the extreme difficulty of combining and synchronizing the various tools. In order to reduce that difficulty, an adequate architecture for the final co-simulator must be selected. Therefore, in this paper the authors investigate and propose a methodology for the design of CPSS co-simulation tools. The paper describes the four steps that software architects should follow in order to design the most adequate co-simulator for a certain application, considering the final users’ needs and requirements and various additional factors such as the development team’s experience. Moreover, the first practical use case of the proposed methodology is provided. An experimental validation is also included in order to evaluate the performance of the proposed co-simulator and to determine the correctness of the proposal. PMID:28937610
Apollo experience report: Simulation of manned space flight for crew training
NASA Technical Reports Server (NTRS)
Woodling, C. H.; Faber, S.; Vanbockel, J. J.; Olasky, C. C.; Williams, W. K.; Mire, J. L. C.; Homer, J. R.
1973-01-01
Through space-flight experience and the development of simulators to meet the associated training requirements, several factors have been established as fundamental for providing adequate flight simulators for crew training. The development of flight simulators from Project Mercury through the Apollo 15 mission is described. The functional uses, characteristics, and development problems of the various simulators are discussed for the benefit of future programs.
Scribner, Richard; Ackleh, Azmy S; Fitzpatrick, Ben G; Jacquez, Geoffrey; Thibodeaux, Jeremy J; Rommel, Robert; Simonsen, Neal
2009-09-01
The misuse and abuse of alcohol among college students remain persistent problems. Using a systems approach to understand the dynamics of student drinking behavior and thus forecasting the impact of campus policy to address the problem represents a novel approach. Toward this end, the successful development of a predictive mathematical model of college drinking would represent a significant advance for prevention efforts. A deterministic, compartmental model of college drinking was developed, incorporating three processes: (1) individual factors, (2) social interactions, and (3) social norms. The model quantifies these processes in terms of the movement of students between drinking compartments characterized by five styles of college drinking: abstainers, light drinkers, moderate drinkers, problem drinkers, and heavy episodic drinkers. Predictions from the model were first compared with actual campus-level data and then used to predict the effects of several simulated interventions to address heavy episodic drinking. First, the model provides a reasonable fit of actual drinking styles of students attending Social Norms Marketing Research Project campuses varying by "wetness" and by drinking styles of matriculating students. Second, the model predicts that a combination of simulated interventions targeting heavy episodic drinkers at a moderately "dry" campus would extinguish heavy episodic drinkers, replacing them with light and moderate drinkers. Instituting the same combination of simulated interventions at a moderately "wet" campus would result in only a moderate reduction in heavy episodic drinkers (i.e., 50% to 35%). A simple, five-state compartmental model adequately predicted the actual drinking patterns of students from a variety of campuses surveyed in the Social Norms Marketing Research Project study. The model predicted the impact on drinking patterns of several simulated interventions to address heavy episodic drinking on various types of campuses.
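The five-compartment structure described above can be sketched as a small system of ordinary differential equations stepped with forward Euler. The transition rates below are invented placeholders, not the fitted values from the study; in the actual model these flows encode individual factors, social interactions and social norms.

```python
# Toy five-state compartmental model of college drinking styles.

STATES = ["abstainer", "light", "moderate", "problem", "heavy_episodic"]

# rates[i][j]: per-year flow rate from state i to state j (hypothetical)
rates = [[0.00, 0.30, 0.00, 0.00, 0.00],
         [0.05, 0.00, 0.25, 0.00, 0.10],
         [0.00, 0.10, 0.00, 0.10, 0.15],
         [0.00, 0.00, 0.05, 0.00, 0.10],
         [0.00, 0.05, 0.10, 0.05, 0.00]]

def step(pop, dt=0.01):
    """One forward-Euler step of dN_i/dt = inflows - outflows."""
    new = pop[:]
    for i, n in enumerate(pop):
        for j, r in enumerate(rates[i]):
            flow = r * n * dt            # students moving i -> j this step
            new[i] -= flow
            new[j] += flow
    return new

pop = [400.0, 300.0, 200.0, 50.0, 50.0]  # initial enrolment by drinking style
for _ in range(500):                     # integrate 5 simulated years
    pop = step(pop)
```

A simulated intervention would be expressed as a change to selected entries of the rate matrix (e.g., increasing the outflow from the heavy-episodic compartment), after which the same integration shows the new equilibrium mix.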
Scribner, Richard; Ackleh, Azmy S.; Fitzpatrick, Ben G.; Jacquez, Geoffrey; Thibodeaux, Jeremy J.; Rommel, Robert; Simonsen, Neal
2009-01-01
Objective: The misuse and abuse of alcohol among college students remain persistent problems. Using a systems approach to understand the dynamics of student drinking behavior and thus forecasting the impact of campus policy to address the problem represents a novel approach. Toward this end, the successful development of a predictive mathematical model of college drinking would represent a significant advance for prevention efforts. Method: A deterministic, compartmental model of college drinking was developed, incorporating three processes: (1) individual factors, (2) social interactions, and (3) social norms. The model quantifies these processes in terms of the movement of students between drinking compartments characterized by five styles of college drinking: abstainers, light drinkers, moderate drinkers, problem drinkers, and heavy episodic drinkers. Predictions from the model were first compared with actual campus-level data and then used to predict the effects of several simulated interventions to address heavy episodic drinking. Results: First, the model provides a reasonable fit of actual drinking styles of students attending Social Norms Marketing Research Project campuses varying by “wetness” and by drinking styles of matriculating students. Second, the model predicts that a combination of simulated interventions targeting heavy episodic drinkers at a moderately “dry” campus would extinguish heavy episodic drinkers, replacing them with light and moderate drinkers. Instituting the same combination of simulated interventions at a moderately “wet” campus would result in only a moderate reduction in heavy episodic drinkers (i.e., 50% to 35%). Conclusions: A simple, five-state compartmental model adequately predicted the actual drinking patterns of students from a variety of campuses surveyed in the Social Norms Marketing Research Project study. 
The model predicted the impact on drinking patterns of several simulated interventions to address heavy episodic drinking on various types of campuses. PMID:19737506
NASA Technical Reports Server (NTRS)
Lee, M.-I.; Choi, I.; Tao, W.-K.; Schubert, S. D.; Kang, I.-K.
2010-01-01
The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program's Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously-forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs), many of which are overly sensitive to the parameterized boundary layer heating.
Can discrete event simulation be of use in modelling major depression?
Le Lay, Agathe; Despiegel, Nicolas; François, Clément; Duru, Gérard
2006-01-01
Background Depression is among the major contributors to worldwide disease burden and adequate modelling requires a framework designed to depict real world disease progression as well as its economic implications as closely as possible. Objectives In light of the specific characteristics associated with depression (multiple episodes at varying intervals, impact of disease history on course of illness, sociodemographic factors), our aim was to clarify to what extent "Discrete Event Simulation" (DES) models provide methodological benefits in depicting disease evolution. Methods We conducted a comprehensive review of published Markov models in depression and identified potential limits to their methodology. A model based on DES principles was developed to investigate the benefits and drawbacks of this simulation method compared with Markov modelling techniques. Results The major drawback to Markov models is that they may not be suitable to tracking patients' disease history properly, unless the analyst defines multiple health states, which may lead to intractable situations. They are also too rigid to take into consideration multiple patient-specific sociodemographic characteristics in a single model. To do so would also require defining multiple health states which would render the analysis entirely too complex. We show that DES resolves these weaknesses and that its flexibility allows patients with differing attributes to move from one event to another in sequential order while simultaneously taking into account important risk factors such as age, gender, disease history and patients' attitudes towards treatment, together with any disease-related events (adverse events, suicide attempts, etc.). Conclusion DES modelling appears to be an accurate, flexible and comprehensive means of depicting disease progression compared with conventional simulation methodologies. Its use in analysing recurrent and chronic diseases appears particularly useful compared with Markov processes. 
PMID:17147790
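The event-queue mechanism that gives DES its flexibility, with patient-level attributes shaping stochastically scheduled events, can be sketched with a priority queue. All rates and attribute effects below are hypothetical illustrations, not the authors' model.

```python
# Toy discrete event simulation of recurrent depressive episodes: attributes
# travel with the patient entity, and events fire in time order from a queue.
import heapq
import random

def simulate_patient(age, adherent, horizon=10.0, rng=random):
    """Return the chronological (time, event) history of one patient."""
    events, clock = [], []
    # hypothetical attribute-dependent relapse hazard (events per year)
    relapse_rate = 0.5 + 0.01 * (age - 40) - (0.2 if adherent else 0.0)
    heapq.heappush(clock, (rng.expovariate(relapse_rate), "episode_onset"))
    while clock:
        t, kind = heapq.heappop(clock)
        if t > horizon:
            break
        events.append((t, kind))
        if kind == "episode_onset":
            # schedule remission (mean episode length 3 months)
            heapq.heappush(clock, (t + rng.expovariate(4.0), "remission"))
        elif kind == "remission":
            # schedule the next possible relapse
            heapq.heappush(clock, (t + rng.expovariate(relapse_rate),
                                   "episode_onset"))
    return events

rng = random.Random(1)
history = simulate_patient(age=50, adherent=True, rng=rng)
```

Because the hazard is an ordinary function of the entity's attributes, adding a new risk factor means editing one expression rather than multiplying the state space, which is the advantage over Markov health states argued above.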
Can discrete event simulation be of use in modelling major depression?
Le Lay, Agathe; Despiegel, Nicolas; François, Clément; Duru, Gérard
2006-12-05
Depression is among the major contributors to worldwide disease burden and adequate modelling requires a framework designed to depict real world disease progression as well as its economic implications as closely as possible. In light of the specific characteristics associated with depression (multiple episodes at varying intervals, impact of disease history on course of illness, sociodemographic factors), our aim was to clarify to what extent "Discrete Event Simulation" (DES) models provide methodological benefits in depicting disease evolution. We conducted a comprehensive review of published Markov models in depression and identified potential limits to their methodology. A model based on DES principles was developed to investigate the benefits and drawbacks of this simulation method compared with Markov modelling techniques. The major drawback to Markov models is that they may not be suitable to tracking patients' disease history properly, unless the analyst defines multiple health states, which may lead to intractable situations. They are also too rigid to take into consideration multiple patient-specific sociodemographic characteristics in a single model. To do so would also require defining multiple health states which would render the analysis entirely too complex. We show that DES resolves these weaknesses and that its flexibility allows patients with differing attributes to move from one event to another in sequential order while simultaneously taking into account important risk factors such as age, gender, disease history and patients' attitudes towards treatment, together with any disease-related events (adverse events, suicide attempts, etc.). DES modelling appears to be an accurate, flexible and comprehensive means of depicting disease progression compared with conventional simulation methodologies. Its use in analysing recurrent and chronic diseases appears particularly useful compared with Markov processes.
Integrated tokamak modeling: when physics informs engineering and research planning
NASA Astrophysics Data System (ADS)
Poli, Francesca
2017-10-01
Simulations that integrate virtually all the relevant engineering and physics aspects of a real tokamak experiment are a powerful tool for experimental interpretation, model validation and planning for both present and future devices. This tutorial will guide the audience through the building blocks of an ``integrated'' tokamak simulation, such as magnetic flux diffusion; thermal, momentum and particle transport; external heating and current drive sources; and wall particle sources and sinks. Emphasis is given to the connection and interplay between external actuators and plasma response, and between the slow time scales of current diffusion and the fast time scales of transport, and to how reduced and high-fidelity models can contribute to simulating a whole device. To illustrate the potential and limitations of integrated tokamak modeling for discharge prediction, a helium plasma scenario for the ITER pre-nuclear phase is taken as an example. This scenario presents challenges because it requires core-edge integration and advanced models for the interaction between waves and fast ions, which are subject to a limited experimental database for validation and guidance. Starting from a scenario obtained by re-scaling parameters from the demonstration inductive ``ITER baseline'', it is shown how self-consistent simulations that encompass both core and edge plasma regions, as well as high-fidelity heating and current drive source models, are needed to set constraints on the density, magnetic field and heating scheme. This tutorial aims at demonstrating how integrated modeling, when used with an adequate level of criticism, can not only support the design of operational scenarios but also help to assess the limitations and gaps in the available models, thus indicating where improved modeling tools are required and how present experiments can help their validation and inform research planning. Work supported by DOE under DE-AC02-09CH1146.
USING TIME VARIANT VOLTAGE TO CALCULATE ENERGY CONSUMPTION AND POWER USE OF BUILDING SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhmalbaf, Atefe; Augenbroe, Godfried
2015-12-09
Buildings are the main consumers of electricity across the world. However, in research and studies related to building performance assessment, the focus has been on evaluating the energy efficiency of buildings, whereas instantaneous power efficiency has been overlooked as an important aspect of total energy consumption. As a result, we never developed adequate models that capture both thermal and electrical characteristics (e.g., voltage) of building systems to assess the impact of variations in the power system and emerging technologies of the smart grid on buildings’ energy and power performance, and vice versa. This paper argues that the power performance of buildings as a function of electrical parameters should be evaluated in addition to systems’ mechanical and thermal behavior. The main advantage of capturing the electrical behavior of building load is to better understand instantaneous power consumption and, more importantly, to control it. Voltage is one of the electrical parameters that can be used to describe load. Hence, voltage-dependent power models are constructed in this work and coupled with existing thermal energy models. Lack of models that describe the electrical behavior of systems also adds to the uncertainty of energy consumption calculations carried out in building energy simulation tools such as EnergyPlus, a common building energy modeling and simulation tool. To integrate voltage-dependent power models with thermal models, the thermal cycle (operation mode) of each system was fed into the voltage-based electrical model. Energy consumption of the systems used in this study was simulated using EnergyPlus. Simulated results were then compared with estimated and measured power data, and the mean square error (MSE) between simulated, estimated, and measured values was calculated. Results indicate that the estimated power agrees more closely with the measured data (lower MSE) than the simulated results do. 
The results discussed in this paper illustrate the significance of enhancing building energy models with electrical characteristics. This would support studies such as those related to modernization of the power system that require micro-scale building-grid interaction, evaluating building energy efficiency with power efficiency considerations, and design and control decisions that rely on the accuracy of building energy simulation results.
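The idea of a voltage-dependent power model layered on a thermal on/off cycle can be sketched with the common exponential load form P = P0 (V/V0)^kp. The exponent, voltages and "measured" values below are illustrative inventions, not the paper's models or data; the sketch only shows why a voltage-aware model can track metered power more closely than a constant-power assumption.

```python
# Sketch: exponential voltage-dependent load model and MSE comparison.

def load_power(p_rated, v, v0=120.0, kp=1.5, on=True):
    """Instantaneous power (W) of one device given terminal voltage; the
    `on` flag stands in for the thermal cycle's operation mode."""
    return p_rated * (v / v0) ** kp if on else 0.0

def mse(a, b):
    """Mean square error between two equal-length series."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

voltages = [114, 118, 120, 122, 126]     # service voltage samples (V)
measured = [930, 975, 1000, 1020, 1075]  # hypothetical metered power (W)
modelled = [load_power(1000, v) for v in voltages]

err_voltage_aware = mse(modelled, measured)
err_constant      = mse([1000] * 5, measured)
```

Here the constant-power model misses the voltage-driven swing entirely, so its MSE is far larger; this mirrors the paper's finding that electrical parameters reduce estimation error.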
Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning
NASA Astrophysics Data System (ADS)
Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.
2005-12-01
A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km2) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. 
We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.
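The upstream-before-downstream ordering that the stream reach graph imposes on sub-basin computations can be sketched as a post-order traversal of the drainage tree. The network and the two-processor round-robin assignment below are made-up examples, not an actual tRIBS partition.

```python
# Sketch: sub-basin routing order from a stream reach graph. Each sub-basin
# drains to exactly one downstream neighbour, so the network forms a tree.
from collections import defaultdict

# hypothetical reach graph: sub-basin -> its downstream sub-basin
downstream = {"A": "C", "B": "C", "C": "E", "D": "E", "E": None}

def routing_order(downstream):
    """Post-order over the drainage tree: headwaters before the outlet, so
    each sub-basin's routed streamflow exists before its neighbour needs it."""
    children = defaultdict(list)
    for sub, down in downstream.items():
        if down is not None:
            children[down].append(sub)
    outlet = next(s for s, d in downstream.items() if d is None)
    order = []
    def visit(node):
        for c in sorted(children[node]):
            visit(c)
        order.append(node)
    visit(outlet)
    return order

order = routing_order(downstream)
workers = {sub: i % 2 for i, sub in enumerate(order)}  # 2-processor assignment
```

In a real MPI decomposition the assignment would balance sub-basin sizes rather than round-robin, and adjacent sub-basins would also exchange subsurface fluxes across their shared boundaries, as the abstract notes.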
NASA Astrophysics Data System (ADS)
Taddele, Y. D.; Ayana, E.; Worqlul, A. W.; Srinivasan, R.; Gerik, T.; Clarke, N.
2017-12-01
The research presented in this paper was conducted in Ethiopia, in the Horn of Africa. The Ethiopian economy largely depends on rainfed agriculture, which employs 80% of the labor force. Rainfed agriculture is frequently affected by droughts and dry spells, and small-scale irrigation is considered the lifeline for the livelihoods of smallholder farmers in Ethiopia. Biophysical models are widely used to determine the agricultural production, environmental sustainability, and socio-economic outcomes of small-scale irrigation in Ethiopia. However, detailed, spatially explicit data are not adequately available to calibrate and validate simulations from biophysical models. The Soil and Water Assessment Tool (SWAT) model was set up using finer-resolution spatial and temporal data. The actual evapotranspiration (AET) estimates from the SWAT model were compared with two remotely sensed datasets, namely the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). The performance of the monthly satellite data was evaluated with the coefficient of determination (R2) over the different land use groups. The results indicated that, over the long term and at monthly scales, the AVHRR AET captures the pattern of SWAT-simulated AET reasonably well, especially on agriculture-dominated landscapes. A comparison between SWAT-simulated AET and AVHRR AET provided mixed results on grassland-dominated landscapes and poor agreement on forest-dominated landscapes. The AVHRR AET products showed superior agreement with the SWAT-simulated AET compared to MODIS AET. This suggests that remotely sensed products can be used as a valuable tool in properly modeling small-scale irrigation.
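The evaluation step described above, comparing monthly simulated and satellite AET series with a coefficient of determination, can be sketched as follows. The numbers are invented stand-ins for SWAT and AVHRR monthly values (mm/month), not data from the study.

```python
# Sketch: squared Pearson correlation (R^2) between two monthly AET series.

def r_squared(x, y):
    """Squared Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

swat_aet  = [35, 40, 62, 88, 110, 96, 70, 48]   # simulated (hypothetical)
avhrr_aet = [30, 44, 58, 95, 104, 99, 75, 50]   # satellite (hypothetical)
r2 = r_squared(swat_aet, avhrr_aet)
```

One series per land-use group would be scored this way, which is how the abstract's agriculture/grassland/forest comparison is expressed numerically.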
A dynamic model for plant growth: validation study under changing temperatures
NASA Technical Reports Server (NTRS)
Wann, M.; Raper, C. D., Jr. (Principal Investigator)
1984-01-01
A dynamic simulation model of vegetative plant growth, whose functions and parameter values had previously been estimated by optimization search techniques and numerical experimentation using data from constant-temperature experiments, is validated under conditions of changing temperatures. To test the predictive capacity of the model, dry matter accumulation in the leaves, stems, and roots of tobacco plants (Nicotiana tabacum L.) was measured at 2- or 3-day intervals during a 5-week period in which temperatures in controlled-environment rooms were programmed to change at weekly and daily intervals, in ascending or descending sequences, within a range of 14 to 34 degrees C. Simulations of dry matter accumulation and distribution were carried out using the programmed experimental temperatures and compared with the measured values. The agreement between measured and predicted values was close, indicating that the temperature-dependent functional forms derived from constant-temperature experiments are adequate for modelling plant growth responses to changing temperatures with switching intervals as short as 1 day.
Numerical experiments with model monophyletic and paraphyletic taxa
NASA Technical Reports Server (NTRS)
Sepkoski, J. J., Jr. (Principal Investigator); Kendrick, D. C.
1993-01-01
The problem of how accurately paraphyletic taxa, as opposed to monophyletic (i.e., holophyletic) groups (clades), capture underlying species-level patterns of diversity and extinction is explored with Monte Carlo simulations. Phylogenies are modeled as stochastic trees. Paraphyletic taxa are defined in an arbitrary manner by randomly choosing progenitors and clustering all descendants not belonging to other taxa. These taxa are then examined to determine which are clades, and the remaining paraphyletic groups are dissected to discover monophyletic subgroups. Comparisons of diversity patterns and extinction rates between modeled taxa and lineages indicate that paraphyletic groups can adequately capture lineage information under a variety of conditions of diversification and mass extinction, suggesting that these groups constitute more than mere "taxonomic noise" in this context. Strictly monophyletic groups perform somewhat better, especially with regard to mass extinctions. However, when low levels of paleontologic sampling are simulated, the veracity of clades deteriorates, especially with respect to diversity, and modeled paraphyletic taxa often capture more information about underlying lineages. Thus, for studies of diversity and taxic evolution in the fossil record, traditional paleontologic genera and families need not be rejected in favor of cladistically defined taxa.
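The stochastic trees mentioned above can be illustrated with a simple branching model. The sketch below tracks only standing diversity through time (the paper's model additionally records ancestry so that taxa can be delimited), and all probabilities are illustrative, not the study's settings.

```python
import random

def simulate_lineages(steps=50, p_branch=0.3, p_extinct=0.2,
                      mass_extinction_at=None, survival=0.2, seed=1):
    """Stochastic branching model of lineage diversity through time.
    Returns the standing diversity at each time step."""
    random.seed(seed)
    alive = 1
    history = []
    for t in range(steps):
        nxt = 0
        for _ in range(alive):
            if random.random() < p_extinct:
                continue            # lineage terminates
            nxt += 1                # lineage persists
            if random.random() < p_branch:
                nxt += 1            # lineage branches
        alive = nxt
        if mass_extinction_at is not None and t == mass_extinction_at:
            # mass extinction: each surviving lineage persists with prob. `survival`
            alive = sum(random.random() < survival for _ in range(alive))
        history.append(alive)
        if alive == 0:
            break
    return history

div = simulate_lineages(mass_extinction_at=30)
```

Grouping the simulated lineages into paraphyletic versus monophyletic taxa, and subsampling them, is what lets the study compare how well each taxon concept tracks this underlying diversity curve.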
Benchmarking hydrological model predictive capability for UK River flows and flood peaks.
NASA Astrophysics Data System (ADS)
Lane, Rosanna; Coxon, Gemma; Freer, Jim; Wagener, Thorsten
2017-04-01
Data and hydrological models are now available for national hydrological analyses. However, hydrological model performance varies between catchments, and lumped, conceptual models are not able to produce adequate simulations everywhere. This study aims to benchmark hydrological model performance for catchments across the United Kingdom within an uncertainty analysis framework. We have applied four hydrological models from the FUSE framework to 1128 catchments across the UK. These models are all lumped models and run at a daily timestep, but differ in their structural architecture and process parameterisations, therefore producing different but equally plausible simulations. We apply FUSE over a 20-year period from 1988-2008, within a GLUE Monte Carlo uncertainty analysis framework. Model performance was evaluated for each catchment, model structure and parameter set using standard performance metrics. These were calculated both for the whole time series and to assess seasonal differences in model performance. The GLUE uncertainty analysis framework was then applied to produce simulated 5th and 95th percentile uncertainty bounds for the daily flow time series and, additionally, the annual maximum prediction bounds for each catchment. The results show that model performance varies significantly in space and time depending on catchment characteristics including climate, geology and human impact. We identify regions where models are systematically failing to produce good results, and present reasons why this could be the case. We also identify regions or catchment characteristics where one model performs better than others, and have explored what structural component or parameterisation enables certain models to produce better simulations in these catchments. Model predictive capability was assessed for each catchment by examining the ability of the models to produce discharge prediction bounds that successfully bound the observed discharge.
These results improve our understanding of the predictive capability of simple conceptual hydrological models across the UK and help us to identify where further effort is needed to develop modelling approaches to better represent different catchment and climate typologies.
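The GLUE step described above can be sketched as follows, using hypothetical data: each behavioural parameter set yields one simulated flow series, the 5th and 95th percentiles across sets give the daily prediction bounds, and predictive capability is scored as the fraction of observed flows falling inside those bounds.

```python
import random

random.seed(0)
n_sets, n_days = 200, 30
# Hypothetical ensemble: each behavioural set perturbs a "true" flow series.
truth = [10 + 5 * (d % 7) for d in range(n_days)]
ensemble = [[q * random.uniform(0.7, 1.3) for q in truth] for _ in range(n_sets)]

def percentile(values, p):
    """Nearest-rank percentile of a list (simple illustrative estimator)."""
    s = sorted(values)
    k = min(int(p / 100 * (len(s) - 1) + 0.5), len(s) - 1)
    return s[k]

# Daily 5th/95th percentile prediction bounds across the behavioural sets.
lower = [percentile([run[d] for run in ensemble], 5) for d in range(n_days)]
upper = [percentile([run[d] for run in ensemble], 95) for d in range(n_days)]

# Hypothetical observations; coverage = fraction bounded by [lower, upper].
obs = [q * random.uniform(0.85, 1.15) for q in truth]
coverage = sum(l <= o <= u for l, o, u in zip(lower, obs, upper)) / n_days
```

In the benchmarking study this coverage score, computed per catchment, is what flags the regions where the lumped models systematically fail.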
Verification test results of Apollo stabilization and control systems during undocked operations
NASA Technical Reports Server (NTRS)
Copeland, E. L.; Haken, R. L.
1974-01-01
Results of analysis and simulation testing of both the Skylark 1 reaction control system digital autopilot (RCS DAP) and the thrust vector control (TVC) autopilot, for use during the undocked portions of the Apollo/Soyuz Test Project mission, are presented. The RCS DAP testing was performed using the Skylab Functional Simulator (SLFS), a digital computer program capable of simulating the Apollo and Skylab autopilots along with vehicle dynamics, including bending and sloshing. The model was used to simulate three-axis automatic maneuvers as well as pilot-controlled manual maneuvers using the RCS DAP. The TVC autopilot was tested in two parts: a classical stability analysis was performed on the vehicle, considering the effects of structural bending and sloshing under control of the TVC autopilot, and the time response of the TVC autopilot was tested using the SLFS. Results indicate that adequate performance stability margins can be expected for the CSM/DM configuration under control of the Apollo control systems tested.
High fidelity studies of exploding foil initiator bridges, Part 1: Experimental method
NASA Astrophysics Data System (ADS)
Bowden, Mike; Neal, William
2017-01-01
Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage and, in the case of EFIs, flyer velocity. Correspondingly, experimental methods have in general been limited to the same parameters. With the advent of complex, first-principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions, predicting a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately validated. In this first paper of a three-part study, the experimental method for determining the current, voltage, flyer velocity and multi-dimensional profile of detonator components is presented. This improved capability, along with high-fidelity simulations, offers an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.
Kitagawa, Kory H; Nakamura, Nina M; Yamamoto, Loren
2006-03-01
To measure the ventilation efficacy with three single-sized mask types on infant and child manikin models. Medical students were recruited as study subjects inasmuch as they are inexperienced resuscitators. They were taught proper bag-mask ventilation (BMV) according to the American Heart Association guidelines on an infant and a child manikin. Subjects completed a BMV attempt successfully using the adult standard mask (to simulate the uncertainty of mask selection), pocket mask, and blob mask. Each attempt consisted of 5 ventilations assessed by chest rise of the manikin. Study subjects were asked which mask was easiest to use. Four to six weeks later, subjects repeated the procedure with no instructions (to simulate an emergency BMV encounter without immediate pre-encounter teaching). Forty-six volunteer subjects were studied. During the first attempt, subjects preferred the standard and blob masks over the pocket mask. For the second attempt, the blob mask was preferred over the standard mask, and few liked the pocket mask. Using the standard, blob, and pocket masks on the child manikin, 39, 42, and 20 subjects, respectively, were able to achieve adequate ventilation. Using the standard, blob, and pocket masks on the infant manikin, 45, 45, and 11 subjects, respectively, were able to achieve adequate ventilation. Both the standard and blob masks are more effective than the pocket mask at achieving adequate ventilation on infant and child manikins in this group of inexperienced medical student resuscitators, who most often preferred the blob mask.
High fidelity studies of exploding foil initiator bridges, Part 2: Experimental results
NASA Astrophysics Data System (ADS)
Neal, William; Bowden, Mike
2017-01-01
Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage and, in the case of EFIs, flyer velocity. Correspondingly, experimental methods have generally been limited to the same parameters. With the advent of complex, first-principles magnetohydrodynamic codes such as ALEGRA MHD, it is now possible to simulate these components in three dimensions and predict a greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this second paper of a three-part study, data are presented from a flexible foil EFI header experiment. This study has shown that there is significant bridge expansion before the time of peak voltage and that heating within the bridge material is spatially affected by the microstructure of the metal foil.
Monte Carlo simulation of a near-continuum shock-shock interaction problem
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Wilmoth, Richard G.
1992-01-01
A complex shock interaction is calculated with direct simulation Monte Carlo (DSMC). The calculation is performed for the near-continuum flow produced when an incident shock impinges on the bow shock of a 0.1 in. radius cowl lip for freestream conditions of approximately Mach 15 and 35 km altitude. Solutions are presented both for a full finite-rate chemistry calculation and for a case with chemical reactions suppressed. In each case, both the undisturbed flow about the cowl lip and the full shock interaction flowfields are calculated. Good agreement has been obtained between the no-chemistry simulation of the undisturbed flow and a perfect gas solution obtained with the viscous shock-layer method. Large differences in calculated surface properties when different chemical models are used demonstrate the necessity of adequately representing the chemistry when making surface property predictions. Preliminary grid refinement studies make it possible to estimate the accuracy of the solutions.
Modeling missing data in knowledge space theory.
de Chiusole, Debora; Stefanutti, Luca; Anselmi, Pasquale; Robusto, Egidio
2015-12-01
Missing data are a well known issue in statistical inference, because some responses may be missing even when data are collected carefully. The problem that arises in these cases is how to deal with the missing data. In this article, missingness is analyzed in knowledge space theory, in particular when the basic local independence model (BLIM) is applied to the data. Two extensions of the BLIM to missing data are proposed: the former, called ignorable missing BLIM (IMBLIM), assumes that missing data are missing completely at random; the latter, called missing BLIM (MissBLIM), introduces specific dependencies of the missing data on the knowledge states, thus assuming that the missing data are missing not at random. In both a simulation study and an empirical application, the IMBLIM and the MissBLIM modeled the missingness satisfactorily, depending on the process that generated it: if the missing-data-generating process is of the missing-completely-at-random type, then either IMBLIM or MissBLIM provides an adequate fit to the data. However, if the pattern of missingness is functionally dependent upon unobservable features of the data (e.g., missing answers are more likely to be wrong), then only a correctly specified model of the missingness distribution provides an adequate fit to the data.
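The two missingness regimes contrasted above can be illustrated with a small sketch (this is not the BLIM itself): under MCAR every response is dropped with the same probability, while under a missing-not-at-random process the incorrect answers are dropped more often, biasing the observed accuracy upward. All probabilities are illustrative.

```python
import random

random.seed(3)
responses = [random.random() < 0.6 for _ in range(10_000)]  # True = correct answer

def apply_mcar(data, p_miss=0.3):
    """Missing completely at random: same drop probability for every item."""
    return [None if random.random() < p_miss else r for r in data]

def apply_mnar(data, p_miss_correct=0.1, p_miss_wrong=0.5):
    """Missing not at random: wrong answers are dropped more often."""
    return [None if random.random() < (p_miss_correct if r else p_miss_wrong)
            else r for r in data]

def observed_accuracy(data):
    seen = [r for r in data if r is not None]
    return sum(seen) / len(seen)

acc_mcar = observed_accuracy(apply_mcar(responses))   # close to the true 0.6
acc_mnar = observed_accuracy(apply_mnar(responses))   # biased upward
```

A model that ignores the missingness mechanism (like IMBLIM) fits the first case but not the second, which is the pattern the simulation study reports.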
Funding the Formula Adequately in Oklahoma
ERIC Educational Resources Information Center
Hancock, Kenneth
2015-01-01
This report is a longitudinal simulation study that examines how the ratio of state support to local support affects the number of school districts that break the common school funding formula, which in turn affects the equity of distribution to the common schools. After nearly two decades of adequately supporting the funding formula, Oklahoma…
Comparability and Reliability Considerations of Adequate Yearly Progress
ERIC Educational Resources Information Center
Maier, Kimberly S.; Maiti, Tapabrata; Dass, Sarat C.; Lim, Chae Young
2012-01-01
The purpose of this study is to develop an estimate of Adequate Yearly Progress (AYP) that will allow for reliable and valid comparisons among student subgroups, schools, and districts. A shrinkage-type estimator of AYP using the Bayesian framework is described. Using simulated data, the performance of the Bayes estimator will be compared to…
Phuong, H N; Blavy, P; Martin, O; Schmidely, P; Friggens, N C
2016-01-01
Reproductive success is a key component of lifetime efficiency, defined as the ratio of energy in milk (MJ) to energy intake (MJ) over a cow's lifespan. At the animal level, breeding and feeding management can substantially impact milk yield, body condition and energy balance of cows, which are known as major contributors to reproductive failure in dairy cattle. This study extended an existing lifetime performance model to incorporate the impacts that performance changes due to changing breeding and feeding strategies have on the probability of reproducing, and thereby on the productive lifespan, thus allowing the prediction of a cow's lifetime efficiency. The model is dynamic and stochastic, with an individual cow being the unit modelled and one day being the unit of time. To evaluate the model, data from a French study including Holstein and Normande cows fed high-concentrate diets and data from a Scottish study including Holstein cows selected for high and average genetic merit for fat plus protein that were fed high- v. low-concentrate diets were used. Generally, the model consistently simulated productive and reproductive performance of various genotypes of cows across feeding systems. In the French data, the model adequately simulated the reproductive performance of Holsteins but significantly under-predicted that of Normande cows. In the Scottish data, conception to first service was comparably simulated, whereas interval traits were slightly under-predicted. Selection for greater milk production impaired reproductive performance and lifespan but not lifetime efficiency. The definition of lifetime efficiency used in this model did not include associated costs or herd-level effects. Further work should include such economic indicators to allow more accurate simulation of lifetime profitability in different production scenarios.
Modeling and control of flexible space platforms with articulated payloads
NASA Technical Reports Server (NTRS)
Graves, Philip C.; Joshi, Suresh M.
1989-01-01
The first steps in developing a methodology for spacecraft control-structure interaction (CSI) optimization are identification and classification of anticipated missions, and the development of tractable mathematical models in each mission class. A mathematical model of a generic large flexible space platform (LFSP) with multiple independently pointed rigid payloads is considered. The objective is not to develop a general purpose numerical simulation, but rather to develop an analytically tractable mathematical model of such composite systems. The equations of motion for a single payload case are derived, and are linearized about zero steady-state. The resulting model is then extended to include multiple rigid payloads, yielding the desired analytical form. The mathematical models developed clearly show the internal inertial/elastic couplings, and are therefore suitable for analytical and numerical studies. A simple decentralized control law is proposed for fine pointing the payloads and LFSP attitude control, and simulation results are presented for an example problem. The decentralized controller is shown to be adequate for the example problem chosen, but does not, in general, guarantee stability. A centralized dissipative controller is then proposed, requiring a symmetric form of the composite system equations. Such a controller guarantees robust closed loop stability despite unmodeled elastic dynamics and parameter uncertainties.
Parameterizing the Morse Potential for Coarse-Grained Modeling of Blood Plasma
Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan
2014-01-01
Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in Molecular Dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing the viscous flow properties of human blood plasma, including density, pressure, viscosity, compressibility and characteristic flow dynamics. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. This demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between the macroscopic flow scales and the cellular scales characterizing blood flow, which continuum-based models fail to handle adequately. PMID:24910470
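For reference, the standard Morse pair potential that the authors modify has the form U(r) = D_e(1 − e^(−a(r−r0)))² − D_e, with well depth D_e, width parameter a, and equilibrium distance r0. The sketch below uses illustrative parameter values, not the paper's calibrated coarse-grained parameters.

```python
import math

def morse(r, d_e=1.0, a=2.0, r0=1.0):
    """Morse pair potential: minimum of -d_e at r = r0, tends to 0 as r grows."""
    return d_e * (1.0 - math.exp(-a * (r - r0))) ** 2 - d_e

# The steep repulsive wall below r0 and the soft attractive tail above it are
# what the CG model tunes (together with effective mass scales) to reproduce
# plasma density, viscosity and compressibility.
```

Inverse-problem parameterization then amounts to searching over (d_e, a, r0) and the mass scales until bulk simulated properties match the measured macro quantities.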
Numerical Modeling of the Transient Chilldown Process of a Cryogenic Propellant Transfer Line
NASA Technical Reports Server (NTRS)
Hartwig, Jason; Vera, Jerry
2015-01-01
Before cryogenic fuel depots can be fully realized, efficient methods with which to chill down the spacecraft transfer line and receiver tank are required. This paper presents numerical modeling of the chilldown of a liquid hydrogen tank-to-tank propellant transfer line using the Generalized Fluid System Simulation Program (GFSSP). To compare with data from recently concluded turbulent LH2 chilldown experiments, seven different cases were run across a range of inlet liquid temperatures and mass flow rates. Both trickle and pulse chilldown methods were simulated. The GFSSP model qualitatively matches external skin-mounted temperature readings, but large differences are shown between measured and predicted internal stream temperatures. Discrepancies are attributed to the simplified model correlation used to compute two-phase flow boiling heat transfer. Flow visualization from testing shows that the initial bottoming out of skin-mounted sensors corresponds to annular flow, but that considerable time is required for the stream sensor to achieve steady state as the system moves through annular, churn, and bubbly flow. The GFSSP model tracks trends in the data adequately, but further work is needed to refine the two-phase flow modeling to better match the observed test data.
NASA Astrophysics Data System (ADS)
Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.
2000-12-01
Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of hundreds of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field-scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature prior to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field-scale degradation, provided all controlling factors are incorporated in the field-scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
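The contrast drawn above between zero-/first-order rates and Monod kinetics can be sketched as follows: in the dual-Monod form the degradation rate saturates at high contaminant concentration and shuts down when the electron acceptor (oxygen) is exhausted, which is why lab-fitted first-order rates overpredict field-scale degradation. Parameter values are illustrative, not the Borden calibration.

```python
def monod_rate(c, o, k_max=1.0, ks=0.5, ko=0.1):
    """Dual-Monod degradation rate: depends on contaminant concentration c
    and electron-acceptor (oxygen) concentration o."""
    return k_max * (c / (ks + c)) * (o / (ko + o))

def first_order_rate(c, k1=1.0):
    """First-order kinetics: rate grows without bound as c grows."""
    return k1 * c

# At low c (c << ks) the Monod rate is roughly first order in c; at high c
# it saturates near k_max, and it vanishes entirely when o = 0.
```

A transport model such as BIO3D evaluates a rate of this form cell by cell, coupled to the advective-dispersive transport of both the contaminants and the oxygen.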
Get Over It! A Multilevel Threshold Autoregressive Model for State-Dependent Affect Regulation.
De Haan-Rietdijk, Silvia; Gottman, John M; Bergeman, Cindy S; Hamaker, Ellen L
2016-03-01
Intensive longitudinal data provide rich information, which is best captured when specialized models are used in the analysis. One of these models is the multilevel autoregressive model, which psychologists have applied successfully to study affect regulation as well as alcohol use. A limitation of this model is that the autoregressive parameter is treated as a fixed, trait-like property of a person. We argue that the autoregressive parameter may be state-dependent, for example, if the strength of affect regulation depends on the intensity of affect experienced. To allow such intra-individual variation, we propose a multilevel threshold autoregressive model. Using simulations, we show that this model can be used to detect state-dependent regulation with adequate power and Type I error. The potential of the new modeling approach is illustrated with two empirical applications that extend the basic model to address additional substantive research questions.
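A single-subject version of the threshold idea can be sketched as follows: the autoregressive parameter switches according to whether the previous affect score lies below or above a threshold tau, rather than being one fixed trait-like value. The multilevel model additionally lets these parameters vary across persons; all parameter values here are illustrative.

```python
import random

def simulate_tar(n=500, phi_low=0.2, phi_high=0.8, tau=0.0,
                 noise_sd=1.0, seed=7):
    """Simulate a threshold autoregressive (TAR) series: the AR parameter
    is phi_low below the threshold tau and phi_high at or above it."""
    random.seed(seed)
    x = [0.0]
    for _ in range(n - 1):
        phi = phi_low if x[-1] < tau else phi_high
        x.append(phi * x[-1] + random.gauss(0.0, noise_sd))
    return x

series = simulate_tar()
```

With phi_high > phi_low, high affect states are more persistent (weaker regulation) than low ones, which is the kind of state-dependent regulation the model is designed to detect.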
Reynolds-Averaged Navier-Stokes Simulations of Two Partial-Span Flap Wing Experiments
NASA Technical Reports Server (NTRS)
Takalluk, M. A.; Laflin, Kelly R.
1998-01-01
Structured Reynolds-averaged Navier-Stokes simulations of two partial-span flap wing experiments were performed. The high-lift aerodynamic and aeroacoustic wind-tunnel experiments were conducted at both the NASA Ames 7- by 10-Foot Wind Tunnel and the NASA Langley Quiet Flow Facility. The purpose of these tests was to accurately document the acoustic and aerodynamic characteristics associated with the principal airframe noise sources, including flap side-edge noise. Specific measurements were taken that can be used to validate analytic and computational models of the noise sources and associated aerodynamics for configurations and conditions approximating flight for transport aircraft. The numerical results are used both to calibrate a widely used CFD code, CFL3D, and to obtain details of flap side-edge flow features not discernible from experimental observations. Both experimental set-ups were numerically modeled using multiple-block structured grids. Various turbulence models, grid block-interface interaction methods and grid topologies were implemented. Numerical results of both simulations are in excellent agreement with experimental measurements and flow visualization observations. The flow field in the flap-edge region was adequately resolved to discern some crucial information about the flow physics and to substantiate the merger of the two vortical structures. As a result of these investigations, airframe noise modelers have proposed various simplified models which use the results obtained from the steady-state computations as input.
Molecular dynamics simulations of β2-microglobulin interaction with hydrophobic surfaces.
Dongmo Foumthuim, Cedrix J; Corazza, Alessandra; Esposito, Gennaro; Fogolari, Federico
2017-11-21
Hydrophobic surfaces are known to adsorb and unfold proteins, a process that has been studied only for a few proteins. Here we address the interaction of β2-microglobulin, a paradigmatic protein for the study of amyloidogenesis, with hydrophobic surfaces. A system with 27 copies of the protein surrounded by a model cubic hydrophobic box is studied by implicit solvent molecular dynamics simulations. Most proteins adsorb on the walls of the box without major distortions in local geometry, whereas free molecules maintain proper structures and fluctuations as observed in explicit solvent molecular dynamics simulations. The major conclusions from the simulations are as follows: (i) the adopted implicit solvent model is adequate to describe protein dynamics and thermodynamics; (ii) adsorption occurs readily and is irreversible on the simulated timescale; (iii) the regions most involved in molecular encounters and stable interactions with the walls are the same as those that are important in protein-protein and protein-nanoparticle interactions; (iv) unfolding following adsorption occurs at regions found to be flexible by both experiments and simulations; (v) thermodynamic analysis suggests a very large contribution from van der Waals interactions, whereas unfavorable electrostatic interactions are not found to contribute much to adsorption energy. Surfaces with different degrees of hydrophobicity may occur in vivo. Our simulations show that adsorption is a fast and irreversible process which is accompanied by partial unfolding. The results and the thermodynamic analysis presented here are consistent with and rationalize previous experimental work.
NASA Astrophysics Data System (ADS)
Rampidis, I.; Nikolopoulos, A.; Koukouzas, N.; Grammelis, P.; Kakaras, E.
2007-09-01
This work aims to present a pure 3-D CFD model, accurate and efficient, for simulating the hydrodynamics of a pilot-scale CFB. The accuracy of the model was investigated as a function of the numerical parameters in order to derive an optimum model setup with respect to computational cost. The need for an in-depth examination of hydrodynamics arises from the trend to scale up CFBCs; this scale-up brings forward numerous design problems and uncertainties that can be successfully elucidated by CFD techniques. Deriving guidelines for setting up a computationally efficient model is important as the scale of CFBs grows rapidly while computational power remains limited. However, the question of optimum efficiency has not been investigated thoroughly in the literature, as authors have been more concerned with the accuracy and validity of their models. The objective of this work is to investigate the parameters that influence the efficiency and accuracy of CFB computational fluid dynamics models, find the optimum set of these parameters, and thus establish this technique as a competitive method for the simulation and design of industrial, large-scale beds, where the computational cost is otherwise prohibitive. In the tests performed in this work, the influence of the turbulence modeling approach, temporal and spatial resolution, and discretization schemes was investigated on a 1.2 MWth CFB test rig. Using Fourier analysis, dominant frequencies were extracted in order to estimate an adequate time period for averaging all instantaneous values. Agreement with the experimental measurements was very good. The basic differences between the predictions arising from the various model setups were pointed out and analyzed.
The results showed that a model combining high-order spatial discretization schemes on a coarse grid with averaging of the instantaneous scalar values over a 20 s period adequately described the transient hydrodynamic behaviour of the pilot CFB while keeping the computational cost low. Flow patterns inside the bed, such as the core-annulus flow and the transport of clusters, were at least qualitatively captured.
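The Fourier step described above can be sketched on a synthetic signal: locate the dominant fluctuation frequency of a monitored scalar (e.g. a pressure signal) and choose an averaging window spanning several of its periods. The 20 s window in the study came from this kind of analysis on the actual rig data; the signal below is purely illustrative.

```python
import cmath
import math

def dominant_frequency(signal, dt):
    """Return the frequency (Hz) of the largest non-DC spectral component,
    via a naive DFT magnitude scan (fine for short illustrative signals)."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]          # drop the DC component
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(centered[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k / (n * dt)

dt = 0.05                                          # 20 Hz sampling
signal = [5.0 + math.sin(2 * math.pi * 2.0 * t * dt) for t in range(400)]
f_dom = dominant_frequency(signal, dt)
window = 10 / f_dom                                # average over ~10 periods
```

For a real rig signal the spectrum is broadband, so one would inspect the leading peaks and pick a window long relative to the slowest dominant fluctuation.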
2012-01-01
Background In recent years, computer simulation models have supported development of pandemic influenza preparedness policies. However, U.S. policymakers have raised several concerns about the practical use of these models. In this review paper, we examine the extent to which the current literature already addresses these concerns and identify means of enhancing the current models for higher operational use. Methods We surveyed PubMed and other sources for published research literature on simulation models for influenza pandemic preparedness. We identified 23 models published between 1990 and 2010 that consider single-region (e.g., country, province, city) outbreaks and multi-pronged mitigation strategies. We developed a plan for examination of the literature based on the concerns raised by the policymakers. Results While examining the concerns about the adequacy and validity of data, we found that though the epidemiological data supporting the models appear to be adequate, they should be validated through as many updates as possible during an outbreak. Demographic data interfaces for access, retrieval, and translation into model parameters must improve. Regarding the concern about the credibility and validity of modeling assumptions, we found that the models often simplify reality to reduce computational burden. Such simplifications may be permissible if they do not interfere with the performance assessment of the mitigation strategies. We also agreed with the concern that social behavior is inadequately represented in pandemic influenza models. Our review showed that the models consider only a few social-behavioral aspects, including contact rates, withdrawal from work or school due to symptom appearance or to care for sick relatives, and compliance with social distancing, vaccination, and antiviral prophylaxis.
The concern about the degree of accessibility of the models is palpable: we found three models that are currently accessible by the public, while other models are seeking public accessibility. Policymakers would prefer models scalable to any population size that are downloadable and operable on personal computers. But scaling models to larger populations would often require computational resources beyond what personal computers and laptops can handle. As a limitation, we state that some existing models could not be included in our review due to their limited available documentation discussing the choice of relevant parameter values. Conclusions To adequately address the concerns of the policymakers, we need continuing model enhancements in critical areas including: updating of epidemiological data during a pandemic, smooth handling of large demographic databases, incorporation of a broader spectrum of social-behavioral aspects, updating of information on contact patterns, adaptation of recent methodologies for collecting human mobility data, and improvement of computational efficiency and accessibility. PMID:22463370
NASA Astrophysics Data System (ADS)
Kar, Leow Soo
2014-07-01
Two important factors that influence customer satisfaction in large supermarkets or hypermarkets are adequate parking facilities and short waiting times at the checkout counters. This paper describes the simulation analysis of a large supermarket to determine the optimal levels of these two factors. SAS Simulation Studio is used to model a large supermarket in a shopping mall with a car park facility. In order to make the simulation model more realistic, a number of complexities are introduced into the model. For example, arrival patterns of customers vary with the time of the day (morning, afternoon and evening) and with the day of the week (weekdays or weekends), the transport mode of arriving customers (by car or other means), the mode of payment (cash or credit card), customer shopping pattern (leisurely, normal, exact) or choice of checkout counters (normal or express). In this study, we focus on two important components of the simulation model, namely the parking area and the normal and express checkout counters. The parking area is modeled using a Resource Pool block where one resource unit represents one parking bay. A customer arriving by car seizes a unit of the resource from the Pool block (parks the car) and only releases it when he exits the system. Cars arriving when the Resource Pool is empty (no more parking bays) leave without entering the system. The normal and express checkouts are represented by Server blocks with appropriate service time distributions. As a case study, a supermarket in a shopping mall in Bangsar with a limited number of parking bays was chosen for this research. Empirical data on arrival patterns, arrival modes, payment modes, shopping patterns and service times of the checkout counters were collected and analyzed to validate the model. Sensitivity analysis was also performed with different simulation scenarios to identify the optimal number of parking spaces and checkout counters.
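The queueing logic described above (a finite parking pool with balking, plus parallel checkout servers) can be sketched in a few lines of stdlib Python. This is a minimal illustration, not the authors' SAS Simulation Studio model; the arrival, shopping and checkout rates below are arbitrary assumptions, not the empirical Bangsar data:

```python
import heapq
import random

def simulate(n_customers=500, bays=30, counters=4, seed=1):
    """Toy discrete-event sketch: cars arrive, balk if no parking bay is
    free, shop, then queue at the earliest-available checkout counter.
    The bay is released only when the customer finishes checkout."""
    rng = random.Random(seed)
    parked = []                       # min-heap of departure times of occupied bays
    counter_free = [0.0] * counters   # time at which each checkout becomes free
    t = 0.0
    served = balked = 0
    total_wait = 0.0
    for _ in range(n_customers):
        t += rng.expovariate(1 / 2.0)        # mean inter-arrival 2 min (assumed)
        while parked and parked[0] <= t:     # release bays whose cars have left
            heapq.heappop(parked)
        if len(parked) >= bays:              # lot full: customer balks
            balked += 1
            continue
        shop_done = t + rng.expovariate(1 / 25.0)   # mean shopping time 25 min
        start = max(shop_done, min(counter_free))   # wait for a free counter
        finish = start + rng.expovariate(1 / 3.0)   # mean checkout 3 min
        counter_free[counter_free.index(min(counter_free))] = finish
        heapq.heappush(parked, finish)       # bay released at exit time
        total_wait += start - shop_done
        served += 1
    return {"served": served, "balked": balked,
            "mean_wait": total_wait / max(served, 1)}
```

Sweeping `bays` and `counters` over a grid of values and comparing `balked` and `mean_wait` reproduces, in miniature, the sensitivity analysis the paper performs to size the parking lot and checkout bank.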
NASA Astrophysics Data System (ADS)
Martin, D. F.; Asay-Davis, X.; Cornford, S. L.; Price, S. F.; Ng, E. G.; Collins, W.
2015-12-01
We present POPSICLES simulation results covering the full Antarctic Ice Sheet and the Southern Ocean spanning the period from 1990 to 2010. We use the CORE v. 2 interannual forcing data to force the ocean model. Simulations are performed at 0.1° (~5 km) ocean resolution with adaptive ice sheet resolution as fine as 500 m to adequately resolve the grounding line dynamics. We discuss the effect of improved ocean mixing and subshelf bathymetry (vs. the standard Bedmap2 bathymetry) on the behavior of the coupled system, comparing time-averaged melt rates below a number of major ice shelves with those reported in the literature. We also present seasonal variability and decadal melting trends from several Antarctic regions, along with the response of the ice shelves and the consequent dynamic response of the grounded ice sheet. POPSICLES couples the POP2x ocean model, a modified version of the Parallel Ocean Program, and the BISICLES ice-sheet model. POP2x includes sub-ice-shelf circulation using partial top cells and the commonly used three-equation boundary layer physics. Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP) and other continental-scale simulations and melt-rate observations. BISICLES makes use of adaptive mesh refinement and a first-order accurate momentum balance similar to the L1L2 model of Schoof and Hindmarsh to accurately model regions of dynamic complexity, such as ice streams, outlet glaciers, and grounding lines. Results of BISICLES simulations have compared favorably to comparable simulations with a Stokes momentum balance in both idealized tests (MISMIP-3d) and realistic configurations. The figure shows the BISICLES-computed vertically-integrated grounded ice velocity field 5 years into a 20-year coupled full-continent Antarctic-Southern-Ocean simulation. Submarine melt rates are painted onto the surface of the floating ice shelves. Grounding lines are shown in green.
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
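A minimal illustration of the global-versus-local distinction discussed above is Morris-style elementary-effects screening: averaging one-at-a-time effects over many random base points gives a crude global sensitivity ranking at modest cost. The sketch below uses a hypothetical three-parameter toy function, not the ENISI simulator:

```python
import random

def morris_effects(f, k, r=50, delta=0.1, seed=0):
    """Morris-style screening sketch: for r random base points in [0,1]^k,
    perturb each of the k parameters one at a time and accumulate the mean
    absolute elementary effect per parameter (a crude global index)."""
    rng = random.Random(seed)
    totals = [0.0] * k
    for _ in range(r):
        base = [rng.random() for _ in range(k)]
        f0 = f(base)
        for i in range(k):
            pert = list(base)
            pert[i] += delta
            totals[i] += abs(f(pert) - f0) / delta
    return [s / r for s in totals]

# Toy "model": parameter 0 dominant, parameter 1 weak, parameter 2 inert.
toy = lambda x: 10 * x[0] + x[1] ** 2 + 0 * x[2]
mu = morris_effects(toy, k=3)
```

Because the base points are scattered over the whole input domain, `mu` ranks parameters by their global influence, whereas a single-base-point local analysis could misjudge the quadratic parameter depending on where it happened to be evaluated.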
Transfer of training and simulator qualification or myth and folklore in helicopter simulation
NASA Technical Reports Server (NTRS)
Dohme, Jack
1992-01-01
Transfer-of-training studies at Fort Rucker using the backward-transfer paradigm have shown that existing flight simulators are not entirely adequate for meeting training requirements. Using an ab initio training research simulator that modeled the UH-1, training effectiveness ratios were developed; the data demonstrate the simulator to be a cost-effective primary trainer. A simulator qualification method was suggested in which a combination of these transfer-of-training paradigms is used to determine overall simulator fidelity and training effectiveness.
NASA Astrophysics Data System (ADS)
Kusangaya, Samuel; Warburton Toucher, Michele L.; van Garderen, Emma Archer
2018-02-01
Output from downscaled General Circulation Models (GCMs) is used to forecast climate change and provide information used as input for hydrological modelling. Given that our understanding of climate change points towards changes in the frequency, timing and intensity of extreme hydrological events, there is a need to assess the ability of downscaled GCMs to capture these extreme hydrological events. Extreme hydrological events play a significant role in regulating the structure and function of rivers and associated ecosystems. In this study, the Indicators of Hydrologic Alteration (IHA) method was adapted to assess the ability of streamflow simulated using downscaled GCMs (dGCMs) to capture extreme river dynamics (high and low flows), as compared with streamflow simulated using historical climate data from 1960 to 2000. The ACRU hydrological model was used for simulating streamflow for the 13 water management units of the uMngeni Catchment, South Africa. Statistically downscaled climate models obtained from the Climate System Analysis Group at the University of Cape Town were used as input for the ACRU model. Results indicated that high flows and extreme high flows (one-in-ten-year high flows/large flood events) were poorly represented in terms of timing, frequency and magnitude. Streamflow simulated using dGCM data also captures more low flows and extreme low flows (one-in-ten-year lowest flows) than streamflow simulated using historical climate data. The overall conclusion was that although dGCM output can reasonably be used to simulate overall streamflow, it performs poorly when simulating extreme high and low flows. Streamflow simulations from dGCMs must thus be used with caution in hydrological applications, particularly for design hydrology, as extreme high and low flows are still poorly represented. 
This arguably calls for further improvement of downscaling techniques in order to generate climate data more relevant and useful for hydrological applications such as design hydrology. Nevertheless, the availability of downscaled climatic output provides the potential to explore climate model uncertainties in different hydro-climatic regions at local scales, where forcing data are often less accessible but more accurate at finer spatial scales and with adequate spatial detail.
Evaluation of Lightning Induced Effects in a Graphite Composite Fairing Structure
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.
2011-01-01
Defining the electromagnetic environment inside a graphite composite fairing due to near-by lightning strikes is of interest to spacecraft developers. This effort develops a transmission-line-matrix (TLM) model with CST Microstripes to examine induced voltages on interior wire loops in a composite fairing due to a simulated near-by lightning strike. A physical vehicle-like composite fairing test fixture is constructed to anchor a TLM model in the time domain and a FEKO method-of-moments model in the frequency domain. Results show that a typical graphite composite fairing provides adequate shielding, resulting in a significant reduction in induced voltages on high impedance circuits despite minimal attenuation of peak magnetic fields propagating through space in near-by lightning strike conditions.
Estimating potency for the Emax-model without attaining maximal effects.
Schoemaker, R C; van Gerven, J M; Cohen, A F
1998-10-01
The most widely applied model relating drug concentrations to effects is the Emax model. In practice, concentration-effect relationships often deviate from a simple linear relationship but without reaching a clear maximum, because a further increase in concentration might be associated with unacceptable or distorting side effects. The parameters of the Emax model can only be estimated with reasonable precision if the curve shows signs of reaching a maximum; otherwise, both EC50 and Emax estimates may be extremely imprecise. This paper provides a solution by introducing a new parameter (S0), equal to Emax/EC50, that can be used to characterize potency adequately even if there are no signs of a clear maximum. Simulations are presented to investigate the nature of the new parameter, and published examples are used as illustration.
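The reparameterization can be checked numerically: S0 = Emax/EC50 is the initial slope of the hyperbolic Emax curve, so at concentrations far below EC50 the effect is approximately S0·C, which is why S0 remains identifiable even when the data never approach the maximal effect. A short sketch with arbitrary illustrative parameter values:

```python
def emax_model(c, emax, ec50):
    """Hyperbolic Emax concentration-effect model: E = Emax*C/(EC50 + C)."""
    return emax * c / (ec50 + c)

# Illustrative (hypothetical) parameters, not from the paper's examples.
Emax, EC50 = 100.0, 50.0
S0 = Emax / EC50          # potency parameter proposed in the paper

# At C << EC50 the curve is effectively linear with slope S0.
c_low = 0.01
slope = emax_model(c_low, Emax, EC50) / c_low
assert abs(slope - S0) / S0 < 0.001   # initial slope matches Emax/EC50
```

Fitting S0 directly (e.g., via the parameterization E = S0·C/(1 + C/EC50)) therefore yields a stable potency estimate from truncated concentration ranges where separate Emax and EC50 estimates would be highly correlated and imprecise.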
Utrillas, María P; Marín, María J; Esteve, Anna R; Estellés, Victor; Tena, Fernando; Cañada, Javier; Martínez-Lozano, José A
2009-01-01
Values of measured and modeled diffuse UV erythemal irradiance (UVER) for all sky conditions are compared on planes inclined at 40 degrees and oriented north, south, east and west. The models used for simulating diffuse UVER are of the geometric type, mainly the Isotropic, Klucher, Hay, Muneer, Reindl and Schauberger models. To analyze the precision of the models, statistical estimators were used such as root mean square deviation, mean absolute deviation and mean bias deviation. All the analyzed models adequately reproduce the diffuse UVER on the south-facing plane, with greater discrepancies for the other inclined planes. When the models are applied to cloud-free conditions, the errors obtained are higher because the anisotropy of the sky dome acquires more importance and the models do not estimate diffuse UVER accurately.
Ujiie, Hideki; Kato, Tatsuya; Hu, Hsin-Pei; Bauer, Patrycja; Patel, Priya; Wada, Hironobu; Lee, Daiyoon; Fujino, Kosuke; Schieman, Colin; Pierre, Andrew; Waddell, Thomas K; Keshavjee, Shaf; Darling, Gail E; Yasufuku, Kazuhiro
2017-06-01
Surgical trainees are required to develop competency in a variety of laparoscopic operations. Developing laparoscopic technical skills can be difficult as there has been a decrease in the number of procedures performed. This study aims to develop an inexpensive and anatomically relevant model for training in laparoscopic foregut procedures. An ex vivo, anatomic model of the human upper abdomen was developed using intact porcine esophagus, stomach, diaphragm and spleen. The Toronto lap-Nissen simulator was contained in a laparoscopic box-trainer and included an arch system to simulate the normal radial shape and tension of the diaphragm. We integrated the use of this training model as a part of our laparoscopic skills laboratory-training curriculum. Afterwards, we surveyed trainees to evaluate the observed benefit of the learning session. Twenty-five trainees and five faculty members completed a survey regarding the use of this model. Among the trainees, only 4 (16%) had experience with laparoscopic Heller myotomy and Nissen fundoplication. They reported that practicing with the model was a valuable use of their limited time, repeating the exercise would be of additional benefit, and that the exercise improved their ability to perform or assist in an actual case in the operating room. Significant improvements were found in the following subjective measures comparing pre- vs. post-training: (I) knowledge level (5.6 vs. 8.0, P<0.001); (II) comfort level in assisting (6.3 vs. 7.6, P<0.001); and (III) comfort level in performing as the primary surgeon (4.9 vs. 7.1, P<0.001). The trainees and faculty members agreed that this model was of adequate fidelity and was a representative simulation of actual human anatomy. We developed an easily reproducible training model for laparoscopic procedures. This simulator reproduces human anatomy and increases the trainees' comfort level in performing and assisting with myotomy and fundoplication.
A Markov model for the temporal dynamics of balanced random networks of finite size
Lagzi, Fereshteh; Rotter, Stefan
2014-01-01
The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. 
We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors to analyze the joint dynamics of multiple interacting networks. PMID:25520644
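A stripped-down version of such a two-state population model can be simulated directly: each neuron is either active or refractory, a spike is an active-to-refractory transition, and the spike probability grows with the active population, which makes the fluctuations state-dependent. The transition probabilities below are arbitrary illustrative choices, not values fitted to the leaky integrate-and-fire networks in the paper:

```python
import random

def simulate_two_state(n=200, steps=2000, seed=3):
    """Minimal sketch of a two-state Markov population model. Spiking
    probability depends on the current active fraction (recurrent input),
    so the intrinsic noise amplitude varies with the network state."""
    rng = random.Random(seed)
    active = n // 2
    trace = []
    for _ in range(steps):
        p_spike = 0.05 + 0.2 * active / n   # input-dependent transition rate
        p_recover = 0.3                     # fixed refractory-exit probability
        spikes = sum(rng.random() < p_spike for _ in range(active))
        recovered = sum(rng.random() < p_recover for _ in range(n - active))
        active += recovered - spikes        # population bookkeeping
        trace.append(active / n)
    return trace

trace = simulate_two_state()
mean_frac = sum(trace) / len(trace)
```

Because `spikes` and `recovered` are sums of Bernoulli draws whose probabilities depend on the current state, the fluctuation variance is itself state-dependent, which is exactly the feature that fixed-amplitude additive noise fails to capture.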
NASA Astrophysics Data System (ADS)
Bach, H.; Klug, P.; Ruf, T.; Migdall, S.; Schlenz, F.; Hank, T.; Mauser, W.
2015-04-01
To support food security, information products about the actual cropping area per crop type, the current status of agricultural production and estimated yields, as well as the sustainability of the agricultural management are necessary. Based on this information, well-targeted land management decisions can be made. Remote sensing is in a unique position to contribute to this task as it is globally available and provides a plethora of information about current crop status. M4Land is a comprehensive system in which a crop growth model (PROMET) and a reflectance model (SLC) are coupled in order to provide these information products by analyzing multi-temporal satellite images. SLC uses modelled surface state parameters from PROMET, such as leaf area index or phenology of different crops, to simulate spatially distributed surface reflectance spectra. This is the basis for generating artificial satellite images considering sensor-specific configurations (spectral bands, solar and observation geometries). Ensembles of model runs are used to represent different crop types, fertilization status, soil colour and soil moisture. By multi-temporal comparisons of simulated and real satellite images, the land cover/crop type can be classified in a dynamic, model-supervised way and without in-situ training data. The method is demonstrated in an agricultural test site in Bavaria. Its transferability is studied by analysing PROMET model results for the rest of Germany. The simulated phenological development, in particular, can be verified at this scale to understand whether PROMET is able to adequately simulate spatial as well as temporal (intra- and inter-season) crop growth conditions, a prerequisite for the model-supervised approach. This sophisticated new technology allows monitoring of management decisions on the field level using high-resolution optical data (presently RapidEye and Landsat). 
The M4Land analysis system is designed to integrate multi-mission data and is well suited to Sentinel-2's continuous and diverse data stream.
Hurricane Forecasting with the High-resolution NASA Finite-volume General Circulation Model
NASA Technical Reports Server (NTRS)
Atlas, R.; Reale, O.; Shen, B.-W.; Lin, S.-J.; Chern, J.-D.; Putman, W.; Lee, T.; Yeh, K.-S.; Bosilovich, M.; Radakovich, J.
2004-01-01
A high-resolution finite-volume General Circulation Model (fvGCM), resulting from a development effort of more than ten years, is now being run operationally at the NASA Goddard Space Flight Center and Ames Research Center. The model is based on a finite-volume dynamical core with terrain-following Lagrangian control-volume discretization and performs efficiently on massively parallel architectures. The computational efficiency allows simulations at a resolution of a quarter of a degree, which is double the resolution currently adopted by most global models in operational weather centers. Such fine global resolution brings us closer to overcoming a fundamental barrier in global atmospheric modeling for both weather and climate, because tropical cyclones and even tropical convective clusters can be more realistically represented. In this work, preliminary results of the fvGCM are shown. Fifteen simulations of four Atlantic tropical cyclones in 2002 and 2004 are chosen because of the strong and varied difficulties they presented to numerical weather forecasting. It is shown that the fvGCM, run at the resolution of a quarter of a degree, can produce very good forecasts of these tropical systems, adequately resolving problems like erratic track, abrupt recurvature, intense extratropical transition, multiple landfall and reintensification, and interaction among vortices.
Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2012-01-01
The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high-fidelity, time-dependent, nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is static and not yet adequate for application to problems of a dynamic nature. The recurrent neural network method [1] is applied to construct a reduced-order model from a series of high-fidelity, time-dependent data of aeroelastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aeroelastic system analysis.
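The static RBF fit underlying such a ROM reduces to solving a dense interpolation system. The stdlib-only sketch below fits Gaussian RBF weights to samples of a 1-D nonlinear response, a stand-in for the high-fidelity aeroelastic data, with an assumed kernel width; it is an illustration of the RBF building block, not the paper's recurrent architecture:

```python
import math

def rbf_fit(xs, ys, width=0.5):
    """Fit Gaussian RBF weights by solving the dense interpolation
    system Phi w = y with naive Gauss-Jordan elimination (partial pivot)."""
    n = len(xs)
    a = [[math.exp(-((xs[i] - xs[j]) / width) ** 2) for j in range(n)] + [ys[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

def rbf_eval(x, xs, w, width=0.5):
    """Evaluate the fitted surrogate: a weighted sum of Gaussian kernels."""
    return sum(wi * math.exp(-((x - xi) / width) ** 2)
               for wi, xi in zip(w, xs))

# Train on samples of a nonlinear response, then query the surrogate cheaply.
xs = [i * 0.5 for i in range(9)]      # training inputs 0.0 .. 4.0
ys = [math.sin(x) for x in xs]        # stand-in for expensive CFD outputs
w = rbf_fit(xs, ys)
```

In the recurrent setting described in the abstract, the network's own delayed outputs are fed back as inputs, so the same kernel machinery approximates a dynamical map rather than a static response surface.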
Development of a Physiologically-Based Pharmacokinetic Model of the Rat Central Nervous System
Badhan, Raj K. Singh; Chenel, Marylore; Penny, Jeffrey I.
2014-01-01
Central nervous system (CNS) drug disposition is dictated by a drug’s physicochemical properties and its ability to permeate physiological barriers. The blood–brain barrier (BBB), blood-cerebrospinal fluid barrier and centrally located drug transporter proteins influence drug disposition within the central nervous system. Attainment of adequate brain-to-plasma and cerebrospinal fluid-to-plasma partitioning is important in determining the efficacy of centrally acting therapeutics. We have developed a physiologically-based pharmacokinetic model of the rat CNS which incorporates brain interstitial fluid (ISF), choroidal epithelial and total cerebrospinal fluid (CSF) compartments and accurately predicts CNS pharmacokinetics. The model yielded reasonable predictions of unbound brain-to-plasma partition ratio (Kpuu,brain) and CSF:plasma ratio (CSF:Plasmau) using a series of in vitro permeability and unbound fraction parameters. When using in vitro permeability data obtained from L-mdr1a cells to estimate rat in vivo permeability, the model successfully predicted, to within 4-fold, Kpuu,brain and CSF:Plasmau for 81.5% of compounds simulated. The model presented allows for simultaneous simulation and analysis of both brain biophase and CSF to accurately predict CNS pharmacokinetics from preclinical drug parameters routinely available during discovery and development pathways. PMID:24647103
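The partitioning logic can be illustrated with a toy one-compartment brain model: with a passive permeability-surface-area product PS acting in both directions and an additional active efflux clearance at the BBB, the unbound brain-to-plasma ratio settles at Kp,uu = PS/(PS + CL_efflux). The parameter values below are hypothetical, chosen for illustration and not taken from the rat CNS model in the paper:

```python
def kpuu_sim(ps=10.0, cl_efflux=30.0, cp_u=1.0, v_brain=1.0,
             dt=0.001, t_end=5.0):
    """Toy brain-ISF compartment (hypothetical parameters): passive
    exchange PS both ways plus active efflux clearance at the BBB,
    integrated with forward Euler to steady state."""
    cb = 0.0                                  # unbound brain concentration
    t = 0.0
    while t < t_end:
        dcb = (ps * cp_u - (ps + cl_efflux) * cb) / v_brain
        cb += dcb * dt
        t += dt
    return cb / cp_u                          # unbound brain:plasma ratio

kpuu = kpuu_sim()
analytic = 10.0 / (10.0 + 30.0)   # Kp,uu = PS/(PS + CL_efflux) = 0.25
```

Setting dCb/dt = 0 in the balance equation recovers the analytic ratio, showing why efflux transport at the BBB drives Kp,uu below unity even for freely permeable compounds; the full model in the paper adds ISF, choroidal and CSF compartments on top of this balance.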
A microstructurally based model of solder joints under conditions of thermomechanical fatigue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frear, D.R.; Burchett, S.N.; Rashid, M.M.
The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue. In this paper we present two computational methodologies that have been developed to predict the behavior of near-eutectic Sn-Pb solder joints under fatigue conditions, using metallurgical tests as fundamental input for constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near-eutectic Sn-Pb solder. The finite element simulations from this model agree well with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure could significantly degrade the fatigue lifetime. The single-phase model is a computational technique that was developed to predict solder joint behavior using materials data for constitutive relation constants that could be determined through straightforward metallurgical experiments. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests and the results showed an adequate fit to experimental results. The single-phase model could be very useful for conditions where microstructural evolution is not a dominant factor in fatigue.
Modelling the photochemical pollution over the metropolitan area of Porto Alegre, Brazil
NASA Astrophysics Data System (ADS)
Borrego, C.; Monteiro, A.; Ferreira, J.; Moraes, M. R.; Carvalho, A.; Ribeiro, I.; Miranda, A. I.; Moreira, D. M.
2010-01-01
The main purpose of this study is to evaluate the photochemical pollution over the Metropolitan Area of Porto Alegre (MAPA), Brazil, where high concentrations of ozone have been registered during the past years. Due to the restricted spatial coverage of the air quality monitoring network, a numerical modelling technique was selected and applied to this assessment exercise. Two different chemistry-transport models - CAMx and CALGRID - were applied for a summer period, driven by the MM5 meteorological model. The meteorological model performance was evaluated by comparing its results to available monitoring data measured at the Porto Alegre airport. Validation results point to a good model performance. It was not possible to evaluate the chemistry models' performance due to the lack of adequate monitoring data. Nevertheless, the model intercomparison between CAMx and CALGRID shows similar behaviour in the simulation of nitrogen dioxide, but some discrepancies concerning ozone. Regarding the fulfilment of the Brazilian air quality targets, the simulated ozone concentrations surpass the legislated value in specific periods, mainly outside the urban area of Porto Alegre. The ozone formation is influenced by the emission of pollutants that act as precursors (like the nitrogen oxides emitted in the Porto Alegre urban area and coming from a large refinery complex) and by the meteorological conditions.
Prospects of second generation artificial intelligence tools in calibration of chemical sensors.
Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Ramam, Veluri Anantha; Rao, Gollapalli Nageswara; Rao, Vaddadi Venkata Panakala
2005-05-01
Multivariate data-driven calibration models with neural networks (NNs) are developed for binary (Cu++ and Ca++) and quaternary (K+, Ca++, NO3- and Cl-) ion-selective electrode (ISE) data. The response profiles of ISEs with concentrations are non-linear and sub-Nernstian. This task represents function approximation of multivariate, multi-response, correlated, non-linear data with unknown noise structure, i.e. multi-component calibration/prediction in chemometric parlance. Radial basis function (RBF) and Fuzzy-ARTMAP-NN models implemented in the software packages TRAJAN and Professional II are employed for the calibration. The optimum NN models reported are based on residuals in concentration space. Being a data-driven information technology, NN does not require a model, prior- or posterior-distribution of data, or noise structure. Missing information, spikes or newer trends in different concentration ranges can be modeled through novelty detection. Two simulated data sets generated from mathematical functions are modeled as a function of the number of data points and network parameters like the number of neurons and nearest neighbors. The success of RBF and Fuzzy-ARTMAP-NNs in developing adequate calibration models for experimental data and function approximation models for more complex simulated data sets establishes AI2 (artificial intelligence, 2nd generation) as a promising technology in quantitation.
Engineering Risk Assessment of Space Thruster Challenge Problem
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Mattenberger, Christopher J.; Go, Susie
2014-01-01
The Engineering Risk Assessment (ERA) team at NASA Ames Research Center utilizes dynamic models with linked physics-of-failure analyses to produce quantitative risk assessments of space exploration missions. This paper applies the ERA approach to the baseline and extended versions of the PSAM Space Thruster Challenge Problem, which investigates mission risk for a deep space ion propulsion system with time-varying thruster requirements and operations schedules. The dynamic mission is modeled using a combination of discrete and continuous-time reliability elements within the commercially available GoldSim software. Loss-of-mission (LOM) probability results are generated via Monte Carlo sampling performed by the integrated model. Model convergence studies are presented to illustrate the sensitivity of integrated LOM results to the number of Monte Carlo trials. A deterministic risk model was also built for the three baseline and extended missions using the Ames Reliability Tool (ART), and results are compared to the simulation results to evaluate the relative importance of mission dynamics. The ART model did a reasonable job of matching the simulation models for the baseline case, while a hybrid approach using offline dynamic models was required for the extended missions. This study highlighted that state-of-the-art techniques can adequately adapt to a range of dynamic problems.
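The Monte Carlo loss-of-mission estimate and its convergence with trial count can be illustrated with a toy reliability model; the thruster count, failure rate, and mission duration below are invented for illustration and are unrelated to the Challenge Problem or GoldSim model values.

```python
import numpy as np

rng = np.random.default_rng(42)

def mission_fails(n_thrusters=4, required=3, rate=1e-4, t_mission=8760.0):
    """One trial: exponential failure times for each thruster; loss of
    mission (LOM) if fewer than `required` thrusters survive the mission."""
    t_fail = rng.exponential(1.0 / rate, size=n_thrusters)
    return (t_fail > t_mission).sum() < required

def lom_probability(n_trials):
    fails = sum(mission_fails() for _ in range(n_trials))
    p = fails / n_trials
    se = (p * (1 - p) / n_trials) ** 0.5   # binomial standard error
    return p, se

# A simple convergence study: the standard error of the LOM estimate
# shrinks as the number of Monte Carlo trials grows.
for n in (1_000, 10_000, 100_000):
    p, se = lom_probability(n)
    print(f"trials={n:>7}: LOM probability {p:.4f} +/- {se:.4f}")
```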
NASA Astrophysics Data System (ADS)
Leguy, G.; Lipscomb, W. H.; Asay-Davis, X.
2017-12-01
Ice sheets and ice shelves are linked by the transition zone, the region where the grounded ice lifts off the bedrock and begins to float. Adequate resolution of the transition zone is necessary for numerically accurate ice sheet-ice shelf simulations. In previous work we showed that using a simple parameterization of the basal hydrology, which produces a smoother transition in basal water pressure between floating and grounded ice, improves the numerical accuracy of a one-dimensional, vertically integrated, fixed-grid model. We used a set of experiments based on the Marine Ice Sheet Model Intercomparison Project (MISMIP) to show that reliable grounding-line dynamics is achievable at resolutions of about 1 km. In this presentation we use the Community Ice Sheet Model (CISM) to demonstrate how the representation of basal lubrication impacts three-dimensional models using the MISMIP-3D and MISMIP+ experiments. To this end we compare three different Stokes approximations: the Shallow Shelf Approximation (SSA), a depth-integrated higher-order approximation, and the Blatter-Pattyn model. The results from our one-dimensional model carry over to the 3-D models; a resolution of 1 km (and in some cases 2 km) remains sufficient to accurately simulate grounding-line dynamics.
Ryan, Patrick B; Schuemie, Martijn J
2013-10-01
There has been only limited evaluation of statistical methods for identifying safety risks of drug exposure in observational healthcare data. Simulations can support empirical evaluation, but have not been shown to adequately model the real-world phenomena that challenge observational analyses. The objectives were to design and evaluate a probabilistic framework (OSIM2) for generating simulated observational healthcare data, and to use these data for evaluating the performance of methods in identifying associations between drug exposure and health outcomes of interest. Seven observational designs, including case-control, cohort, self-controlled case series, and self-controlled cohort designs, were applied to 399 drug-outcome scenarios in 6 simulated datasets with no effect and injected relative risks of 1.25, 1.5, 2, 4, and 10, respectively. Longitudinal data for 10 million simulated patients were generated using a model derived from an administrative claims database, with associated demographics, periods of drug exposure derived from pharmacy dispensings, and medical conditions derived from diagnoses on medical claims. Simulation validation was performed through descriptive comparison with real source data. Method performance was evaluated using area under the ROC curve (AUC), bias, and mean squared error. OSIM2 replicates the prevalence and types of confounding observed in real claims data. When simulated data are injected with relative risks (RR) ≥ 2, all designs have good predictive accuracy (AUC > 0.90), but when RR < 2, no method achieves 100% accurate predictions. Each method exhibits a different bias profile, which changes with the effect size. OSIM2 can support methodological research. Results from simulation suggest method operating characteristics are far from nominal properties.
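The three performance metrics used in the evaluation (AUC, bias, and mean squared error) can be computed as follows for a hypothetical method applied to simulated null and injected-risk scenarios; the scenario counts, effect size, bias, and noise model below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# 200 null scenarios (true RR = 1) and 200 with an injected RR of 2.
true_rr = np.r_[np.ones(200), np.full(200, 2.0)]
# A hypothetical method's log-RR estimates: truth plus a small bias plus noise.
est = np.log(true_rr) + 0.1 + rng.normal(0.0, 0.3, true_rr.size)

def auc(scores, labels):
    """Mann-Whitney AUC: P(positive scores rank above negative ones)."""
    pos, neg = scores[labels], scores[~labels]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

labels = true_rr > 1.0
a = auc(est, labels)
bias = (est - np.log(true_rr)).mean()      # mean error on the log-RR scale
mse = ((est - np.log(true_rr)) ** 2).mean()
print(f"AUC {a:.3f}  bias {bias:.3f}  MSE {mse:.3f}")
```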
NASA Astrophysics Data System (ADS)
Xiao, H.; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C. J.
2016-11-01
Despite their well-known limitations, Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. 
This approach has potential implications in many fields in which the governing equations are well understood but the model uncertainty comes from unresolved physical processes.
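The iterative, perturbed-observation ensemble Kalman update at the core of such a framework can be caricatured with a scalar parameter and a linear forward model; the real application updates Reynolds-stress discrepancy fields, so everything below is a toy illustration. Note that naively re-assimilating the same observation, as the simple loop here does, shrinks the ensemble spread more than a properly regularized iterative scheme would.

```python
import numpy as np

rng = np.random.default_rng(1)

theta_true = 2.0
forward = lambda th: 3.0 * th                 # toy linear forward model
obs_sigma = 0.1
y_obs = forward(theta_true) + rng.normal(0.0, obs_sigma)

ens = rng.normal(0.0, 1.0, size=500)          # prior ensemble (prior knowledge)

for _ in range(5):                            # iterative perturbed-obs updates
    hx = forward(ens)
    cov_th = np.cov(ens, hx)[0, 1]            # cross-covariance: parameter vs output
    gain = cov_th / (hx.var(ddof=1) + obs_sigma ** 2)   # Kalman gain
    perturbed = y_obs + rng.normal(0.0, obs_sigma, ens.size)
    ens = ens + gain * (perturbed - hx)       # Kalman analysis step

print(f"posterior mean {ens.mean():.3f} (truth {theta_true}), "
      f"spread {ens.std(ddof=1):.3f}")
```

The posterior ensemble mean moves from the prior (centered at 0) toward the truth, mirroring how the assimilated velocity observations pull the posterior QoIs toward the benchmark data.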
Barton, Gary J.; McDonald, Richard R.; Nelson, Jonathan M.
2009-01-01
During 2005, the U.S. Geological Survey (USGS) developed, calibrated, and validated a multidimensional flow model for simulating streamflow in the white sturgeon spawning habitat of the Kootenai River in Idaho. The model was developed as a tool to aid understanding of the physical factors affecting quality and quantity of spawning and rearing habitat used by the endangered white sturgeon (Acipenser transmontanus) and for assessing the feasibility of various habitat-enhancement scenarios to re-establish recruitment of white sturgeon. At the request of the Kootenai Tribe of Idaho, the USGS extended the two-dimensional flow model developed in 2005 into a braided reach upstream of the current white sturgeon spawning reach. Many scientists consider the braided reach a suitable substrate with adequate streamflow velocities for re-establishing recruitment of white sturgeon. The 2005 model was extended upstream to help assess the feasibility of various strategies to encourage white sturgeon to spawn in the reach. At the request of the Idaho Department of Fish and Game, the USGS also extended the two-dimensional flow model several kilometers downstream of the white sturgeon spawning reach. This modified model can quantify the physical characteristics of a reach that white sturgeon pass through as they swim upstream from Kootenay Lake to the spawning reach. The USGS Multi-Dimensional Surface-Water Modeling System was used for the 2005 modeling effort and for this subsequent modeling effort. This report describes the model applications and limitations, presents the results of a few simple simulations, and demonstrates how the model can be used to link physical characteristics of streamflow to the location of white sturgeon spawning events during 1994-2001. Model simulations also were used to report on the length and percentage of longitudinal profiles that met the minimum criteria during May and June 2006 and 2007 as stipulated in the U.S. Fish and Wildlife Biological Opinion.
Nonparametric autocovariance estimation from censored time series by Gaussian imputation.
Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K
2009-02-01
One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
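The effect of censoring on a naive autocovariance estimate, and the improvement from even a crude Gaussian imputation, can be demonstrated on a synthetic AR(1) series. The sketch below imputes from the truncated stationary marginal only; the method in the paper conditions on neighbouring observations as well, so this is a deliberately simplified stand-in.

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(1) series right-censored at detection limit c (values above c read as c).
n, phi, c = 4000, 0.7, 1.0
g0 = 1.0 / (1.0 - phi ** 2)                  # true lag-0 autocovariance
x = np.empty(n)
x[0] = rng.normal(0.0, np.sqrt(g0))
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
censored = x > c
y = np.where(censored, c, x)

def sample_acov(z, max_lag):
    """Nonparametric sample autocovariance at lags 0..max_lag."""
    zc = z - z.mean()
    m = len(z)
    return np.array([(zc[:m - h] * zc[h:]).mean() for h in range(max_lag + 1)])

acov_naive = sample_acov(y, 5)               # ignores censoring: biased low

# Crude single imputation: redraw censored points from the stationary
# marginal truncated below at c (rejection sampling, serial dependence ignored).
draws = rng.normal(0.0, np.sqrt(g0), size=40 * n)
draws = draws[draws > c][: censored.sum()]
y_imp = y.copy()
y_imp[censored] = draws
acov_imp = sample_acov(y_imp, 5)

print(f"gamma(0): true {g0:.2f}, naive {acov_naive[0]:.2f}, "
      f"imputed {acov_imp[0]:.2f}")
```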
Zygomalas, Apollon; Giokas, Konstantinos; Koutsouris, Dimitrios
2014-01-01
Aim. Modular mini-robots can be used in novel minimally invasive surgery techniques like natural orifice transluminal endoscopic surgery (NOTES) and laparoendoscopic single site (LESS) surgery. The control of these miniature assistants is complicated. The aim of this study is the in silico investigation of a remote controlling interface for modular miniature robots which can be used in minimally invasive surgery. Methods. The conceptual controlling system was developed, programmed, and simulated using professional robotics simulation software. Three different modes of control were programmed. The remote controlling surgical interface was virtually designed as a high scale representation of the respective modular mini-robot, therefore a modular controlling system itself. Results. With the proposed modular controlling system the user could easily identify the conformation of the modular mini-robot and adequately modify it as needed. The arrangement of each module was always known. The in silico investigation gave useful information regarding the controlling mode, the adequate speed of rearrangements, and the number of modules needed for efficient working tasks. Conclusions. The proposed conceptual model may promote the research and development of more sophisticated modular controlling systems. Modular surgical interfaces may improve the handling and the dexterity of modular miniature robots during minimally invasive procedures. PMID:25295187
Progress on Shape Memory Alloy Actuator Development for Active Clearance Control
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan; Melcher, Kevin; Noebe, Ronald
2006-01-01
A numerical analysis evaluating the feasibility of high-temperature shape memory alloys (HTSMAs) for active clearance control actuation in the high-pressure turbine section of a modern turbofan engine has been conducted. The prototype actuator concept considered here consists of parallel HTSMA wires attached to the shroud located on the exterior of the turbine case. A transient model of an HTSMA actuator was used to evaluate active clearance control at various operating points in a test bed aircraft engine simulation. For the engine under consideration, each actuator must be designed to counteract loads from 380 to 2000 lbf and displace at least 0.033 in. Design results show that an actuator comprising 10 wires 2 in. in length is adequate for control at critical engine operating points while still exhibiting acceptable failsafe operability and cycle life. A proportional-integral-derivative (PID) controller with integrator windup protection was implemented to control clearance amidst engine transients during a normal mission. Simulation results show that the control system exhibits minimal variability in clearance control performance across the operating envelope. The final actuator design is sufficiently small to fit within the limited space outside the high-pressure turbine case and is shown to consume only small amounts of bleed air to adequately regulate temperature.
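A PID controller with integrator windup protection of the kind mentioned above can be sketched as follows; the conditional-integration scheme, gains, output limits, and first-order plant are illustrative assumptions, not the HTSMA actuator values (only the 0.033 in displacement target is taken from the abstract).

```python
class PID:
    """PID with conditional-integration anti-windup: the integrator is
    frozen whenever the output saturates and integrating would push it
    further into saturation."""
    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt):
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        unsat = self.kp * err + self.ki * self.integral + self.kd * deriv
        out = min(max(unsat, self.out_min), self.out_max)
        if out == unsat or err * out < 0:     # integrate only when safe
            self.integral += err * dt
        return out

# First-order plant standing in for the actuator/shroud response
# (gains and time constant invented for illustration).
pid = PID(kp=2.0, ki=5.0, kd=0.05, out_min=0.0, out_max=1.0)
y, setpoint, tau, dt = 0.0, 0.033, 0.5, 0.01   # target displacement: 0.033 in
for _ in range(2000):                          # 20 s of simulated time
    u = pid.step(setpoint - y, dt)
    y += dt * (0.05 * u - y) / tau             # plant: 0.05 in per unit command

print(f"displacement after 20 s: {y:.4f} in (target {setpoint} in)")
```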
Simulation of breast compression in mammography using finite element analysis: A preliminary study
NASA Astrophysics Data System (ADS)
Liu, Yan-Lin; Liu, Pei-Yuan; Huang, Mei-Lan; Hsu, Jui-Ting; Han, Ruo-Ping; Wu, Jay
2017-11-01
Adequate compression during mammography lowers the absorbed dose in the breast and improves the image quality. The compressed breast thickness (CBT) is affected by various factors, such as breast volume, glandularity, and compression force. In this study, we used the finite element analysis to simulate breast compression and deformation and validated the simulated CBT with clinical mammography results. Image data from ten subjects who had undergone mammography screening and breast magnetic resonance imaging (MRI) were collected, and their breast models were created according to the MR images. The non-linear tissue deformation under 10-16 daN in the cranial-caudal direction was simulated. When the clinical compression force was used, the simulated CBT ranged from 2.34 to 5.90 cm. The absolute difference between the simulated CBT and the clinically measured CBT ranged from 0.5 to 7.1 mm. The simulated CBT had a strong positive linear relationship to breast volume and a weak negative correlation to glandularity. The average simulated CBT under 10, 12, 14, and 16 daN was 5.68, 5.12, 4.67, and 4.25 cm, respectively. Through this study, the relationships between CBT, breast volume, glandularity, and compression force are provided for use in clinical mammography.
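The reported linear relationships (strong positive with breast volume, weak negative with glandularity) correspond to a multiple linear least-squares fit of the form CBT ≈ b0 + b1·volume + b2·glandularity. The sketch below runs such a fit on invented data constructed to mimic the qualitative trends; none of the coefficients come from the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented cohort: CBT rises with breast volume, falls weakly with glandularity.
volume = rng.uniform(200.0, 1200.0, 50)       # cm^3
gland = rng.uniform(0.1, 0.9, 50)             # glandular fraction
cbt = 2.0 + 0.003 * volume - 0.4 * gland + rng.normal(0.0, 0.15, 50)   # cm

# Multiple linear least squares: cbt ~ b0 + b1*volume + b2*gland
X = np.column_stack([np.ones_like(volume), volume, gland])
beta, *_ = np.linalg.lstsq(X, cbt, rcond=None)

r_vol = np.corrcoef(volume, cbt)[0, 1]        # strong positive correlation
r_gla = np.corrcoef(gland, cbt)[0, 1]         # weak negative correlation
print(f"CBT = {beta[0]:.2f} + {beta[1]:.4f}*V {beta[2]:+.2f}*G")
print(f"correlation with volume {r_vol:.2f}, with glandularity {r_gla:.2f}")
```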
ERIC Educational Resources Information Center
Brandhorst, Allan R.
Some factors in the design of instructional micrcomputer simulations that high school social studies teachers must consider when selecting and using computer software are discussed: (1) Instructional computer simulations are adequate instructionally only to the extent that they make explicit the set of relationships underlying the program for the…
Wood, Tamara M.; Cheng, Ralph T.; Gartner, Jeffrey W.; Hoilman, Gene R.; Lindenberg, Mary K.; Wellman, Roy E.
2008-01-01
The three-dimensional numerical model UnTRIM was used to model hydrodynamics and heat transport in Upper Klamath Lake, Oregon, between mid-June and mid-September in 2005 and between mid-May and mid-October in 2006. Data from as many as six meteorological stations were used to generate a spatially interpolated wind field to use as a forcing function. Solar radiation, air temperature, and relative humidity data all were available at one or more sites. In general, because the available data for all inflows and outflows did not adequately close the water budget as calculated from lake elevation and stage-capacity information, a residual inflow or outflow was used to assure closure of the water budget. Data used for calibration in 2005 included lake elevation at 3 water-level gages around the lake, water currents at 5 Acoustic Doppler Current Profiler (ADCP) sites, and temperature at 16 water-quality monitoring locations. The calibrated model accurately simulated the fluctuations of the surface of the lake caused by daily wind patterns. The use of a spatially variable surface wind interpolated from two sites on the lake and four sites on the shoreline generally resulted in more accurate simulation of the currents than the use of a spatially invariant surface wind as observed at only one site on the lake. The simulation of currents was most accurate at the deepest site (ADCP1, where the velocities were highest) using a spatially variable surface wind; the mean error (ME) and root mean square error (RMSE) for the depth-averaged speed over a 37-day simulation from July 26 to August 31, 2005, were 0.50 centimeter per second (cm/s) and 3.08 cm/s, respectively. Simulated currents at the remaining sites were less accurate and, in general, underestimated the measured currents. 
The maximum errors in simulated currents were at a site near the southern end of the trench at the mouth of Howard Bay (ADCP7), where the ME and RMSE in the depth-averaged speed were 3.02 and 4.38 cm/s, respectively. The range in ME of the temperature simulations over the same period was −0.94 to 0.73 degrees Celsius (°C), and the RMSE ranged from 0.43 to 1.12°C. The model adequately simulated periods of stratification in the deep trench when complete mixing did not occur for several days at a time. The model was validated using boundary conditions and forcing functions from 2006 without changing any calibration parameters. A spatially variable wind was used. Data for the model validation periods in 2006 included lake elevation at 4 gages around the lake, currents collected at 2 ADCP sites, and temperature collected at 21 water-quality monitoring locations. Errors generally were larger than in 2005. ME and RMSE in the simulated velocity at ADCP1 were 2.30 cm/s and 3.88 cm/s, respectively, for the same 37-day simulation over which errors were computed for 2005. The ME in temperature over the same period ranged from −0.56 to 1.5°C and the RMSE ranged from 0.41 to 1.86°C. Numerical experiments with conservative tracers were used to demonstrate the prevailing clockwise circulation patterns in the lake, and to show the influence of water from the deep trench located along the western shoreline of the lake on fish habitat in the northern part of the lake. Because water exiting the trench is split into two pathways, the numerical experiments indicate that bottom water from the trench has a stronger influence on water quality in the northern part of the lake, and surface water from the trench has a stronger influence on the southern part of the lake. This may be part of the explanation for why episodes of low dissolved oxygen tend to be more severe in the northern than in the southern part of the lake.
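The two error statistics quoted throughout (ME and RMSE) are straightforward to compute; the sketch below uses invented depth-averaged speeds, not the Upper Klamath Lake measurements.

```python
import numpy as np

def mean_error(sim, obs):
    """ME: positive when the model overestimates on average."""
    return float(np.mean(np.asarray(sim) - np.asarray(obs)))

def root_mean_square_error(sim, obs):
    """RMSE: typical magnitude of the simulation error."""
    return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

# Invented depth-averaged speeds (cm/s) at one hypothetical ADCP site.
obs = [4.1, 6.3, 2.8, 5.5, 7.0, 3.9]
sim = [4.6, 6.8, 3.5, 6.1, 7.4, 4.5]

me = mean_error(sim, obs)
rmse_val = root_mean_square_error(sim, obs)
print(f"ME = {me:.2f} cm/s, RMSE = {rmse_val:.2f} cm/s")
```

A positive ME, as in this toy example, indicates systematic overestimation; the abstract's underestimated currents would instead show a negative ME.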
Low-Speed Flight Dynamic Tests and Analysis of the Orion Crew Module Drogue Parachute System
NASA Technical Reports Server (NTRS)
Hahne, David E.; Fremaux, C. Michael
2008-01-01
A test of a dynamically scaled model of the NASA Orion Crew Module (CM) with drogue parachutes was conducted in the NASA-Langley 20-Foot Vertical Spin Tunnel. The primary test objective was to assess the ability of the Orion Crew Module drogue parachute system to adequately stabilize the CM and reduce angular rates at low subsonic Mach numbers. Two attachment locations were tested: the current design nominal and an alternate. Experimental results indicated that the alternate attachment location showed a somewhat greater tendency to attenuate initial roll rate and reduce roll rate oscillations than the nominal location. Comparison of the experimental data to a Program To Optimize Simulated Trajectories (POST II) simulation of the experiment yielded results for the nominal attachment point that indicate differences between the low-speed pitch and yaw damping derivatives in the aerodynamic database and the physical model. Comparisons for the alternate attachment location indicate that riser twist plays a significant role in determining roll rate attenuation characteristics. Reevaluating the impact of the alternate attachment points using a simulation modified to account for these results showed significantly reduced roll rate attenuation tendencies when compared to the original simulation. Based on this modified simulation the alternate attachment point does not appear to offer a significant increase in allowable roll rate over the nominal configuration.
Simulation model for port shunting yards
NASA Astrophysics Data System (ADS)
Rusca, A.; Popa, M.; Rosca, E.; Rosca, M.; Dragu, V.; Rusca, F.
2016-08-01
Sea ports are important nodes in the supply chain, joining two high-capacity transport modes: rail and maritime transport. The huge cargo flows transiting a port require high-capacity infrastructure such as berths, large-capacity cranes, and shunting yards. However, the specificity of port shunting yards raises several problems: limited access, since these are terminus stations of the rail network; the input and output of large transit flows of cargo relative to the infrequent departures and arrivals of ships; and limited land availability for implementing solutions to serve these flows. It is necessary to identify technological solutions that address these problems. The paper proposes a simulation model, developed with the ARENA computer simulation software, suitable for shunting yards which serve sea ports with access to the rail network. The principal aspects of shunting yards are investigated, together with adequate measures to increase their transit capacity. The operating capacity of the shunting yard sub-system is assessed taking into consideration the required operating standards, and measures of performance of the railway station (e.g., waiting time for freight wagons, number of railway lines in the station, storage area) are computed. The conclusions and results drawn from the simulation help transport and logistics specialists to test proposals for improving port management.
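The queueing behaviour that such a simulation model captures can be sketched with a minimal multi-track FIFO recursion (a stand-in for the full ARENA model); the arrival and service rates and the number of shunting tracks are invented for illustration.

```python
import random

random.seed(4)

def simulate_yard(n_trains=5000, n_tracks=3, mean_arrival=1.0, mean_service=2.5):
    """Multi-server FIFO queue via a Lindley-style recursion: each
    arriving train is shunted on the track that frees up earliest."""
    free_at = [0.0] * n_tracks          # time at which each track frees up
    t, waits = 0.0, []
    for _ in range(n_trains):
        t += random.expovariate(1.0 / mean_arrival)   # next arrival
        i = min(range(n_tracks), key=free_at.__getitem__)
        start = max(t, free_at[i])                    # wait if all tracks busy
        waits.append(start - t)
        free_at[i] = start + random.expovariate(1.0 / mean_service)
    return sum(waits) / len(waits)

avg_wait = simulate_yard()
print(f"mean waiting time before shunting: {avg_wait:.2f} time units")
```

Varying `n_tracks` or the service rate in such a sketch shows how waiting time, one of the measures of performance named above, responds to capacity measures.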
Structural convergence properties of amorphous InGaZnO4 from simulated liquid-quench methods.
Buchanan, Jacob C; Fast, Dylan B; Hanken, Benjamin E; Mustard, Thomas J L; Laurita, Geneva; Chiang, Tsung-Han; Keszler, Douglas A; Subramanian, Mas A; Wager, John F; Dolgos, Michelle R; Rustad, James R; Cheong, Paul Ha-Yeon
2017-11-14
The study of structural properties of amorphous structures is complicated by the lack of long-range order and necessitates the use of both cutting-edge computer modeling and experimental techniques. With regard to the computer modeling, many questions on convergence arise when trying to assess the accuracy of a simulated system. What cell size maximizes the accuracy while remaining computationally efficient? More importantly, does averaging multiple smaller cells adequately describe features found in bulk amorphous materials? How small is too small? The aims of this work are: (1) to report a newly developed set of pair potentials for InGaZnO4 and (2) to explore the effects of structural parameters such as simulation cell size and number on the structural convergence of amorphous InGaZnO4. The total number of formula units considered over all runs is found to be the critical factor in convergence, as long as the cell considered contains a minimum of circa fifteen formula units. There is qualitative agreement between these simulations and X-ray total scattering data: peak trends and locations are consistently reproduced, while intensities are weaker. These new IGZO pair potentials are a valuable starting point for future structural refinement efforts.
Finite element based simulation on friction stud welding of metal matrix composites to steel
NASA Astrophysics Data System (ADS)
Hynes, N. Rajesh Jesudoss; Tharmaraj, R.; Velu, P. Shenbaga; Kumar, R.
2016-05-01
Friction welding is a solid-state joining technique used for joining similar and dissimilar materials with high integrity. This technique is being successfully applied in the aerospace, automobile, and shipbuilding industries, and is attracting more and more research interest. The quality of friction stud welded joints depends on the frictional heat generated at the interface. Hence, thermal analysis of friction stud welding of a stainless steel (AISI 304) and aluminium silicon carbide (AlSiC) combination is carried out in the present work. In this study, numerical simulation is carried out using ANSYS software and the temperature profiles are predicted at various increments of time. The developed numerical model is found to be adequate to predict the temperature distribution of friction stud welded aluminium silicon carbide/stainless steel joints.
NASA Astrophysics Data System (ADS)
Gayler, Sebastian; Wöhling, Thomas; Högy, Petra; Ingwersen, Joachim; Wizemann, Hans-Dieter; Wulfmeyer, Volker; Streck, Thilo
2013-04-01
During the last years, land-surface models have proven to perform well in several studies that compared simulated fluxes of water and energy from the land surface to the atmosphere against measured fluxes at the plot-scale. In contrast, considerable deficits of land-surface models have been identified to simulate soil water fluxes and vertical soil moisture distribution. For example, Gayler et al. (2013) showed that simplifications in the representation of root water uptake can result in insufficient simulations of the vertical distribution of soil moisture and its dynamics. However, in coupled simulations of the terrestrial water cycle, both sub-systems, the atmosphere and the subsurface hydrogeo-system, must fit together and models are needed, which are able to adequately simulate soil moisture, latent heat flux, and their interrelationship. Consequently, land-surface models must be further improved, e.g. by incorporation of advanced biogeophysics models. To improve the conceptual realism in biophysical and hydrological processes in the community land surface model Noah, this model was recently enhanced to Noah-MP by a multi-options framework to parameterize individual processes (Niu et al., 2011). Thus, in Noah-MP the user can choose from several alternative models for vegetation and hydrology processes that can be applied in different combinations. In this study, we evaluate the performance of different Noah-MP model settings to simulate water and energy fluxes across the land surface at two contrasting field sites in South-West Germany. The evaluation is done in 1D offline-mode, i.e. without coupling to an atmospheric model. The atmospheric forcing is provided by measured time series of the relevant variables. Simulation results are compared with eddy covariance measurements of turbulent fluxes and measured time series of soil moisture at different depths. 
The aims of the study are i) to carve out the most appropriate combination of process parameterizations in Noah-MP to simultaneously match the different components of the water and energy cycle at the field sites under consideration, and ii) to estimate the uncertainty in model structure. We further investigate the potential to improve simulation results by incorporating concepts of more advanced root water uptake models from agricultural field scale models into the land-surface-scheme. Gayler S, Ingwersen J, Priesack E, Wöhling T, Wulfmeyer V, Streck T (2013): Assessing the relevance of sub surface processes for the simulation of evapotranspiration and soil moisture dynamics with CLM3.5: Comparison with field data and crop model simulations. Environ. Earth Sci., 69(2), under revision. Niu G-Y, Yang Z-L, Mitchell KE, Chen F, Ek MB, Barlage M, Kumar A, Manning K, Niyogi D, Rosero E, Tewari M and Xia Y (2011): The community Noah land surface model with multiparameterization options (Noah-MP): 1. Model description and evaluation with local-scale measurements. Journal of Geophysical Research 116(D12109).
Challenges in global modeling of wetland extent and wetland methane dynamics
NASA Astrophysics Data System (ADS)
Spahni, R.; Melton, J. R.; Wania, R.; Stocker, B. D.; Zürcher, S.; Joos, F.
2012-12-01
Global wetlands are known to be climate sensitive and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. Modelling of global wetland extent and wetland CH4 dynamics remains a challenge. Here we present results from the Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) that investigated our present ability to simulate large-scale wetland characteristics (e.g. wetland type, water table, carbon cycling, gas transport) and corresponding CH4 emissions. Ten models participated, covering the spectrum from simple to relatively complex, including models tailored either for regional or global simulations. The WETCHIMP experiments showed that while models disagree on spatial and temporal patterns of simulated CH4 emissions and wetland areal extent, they all agree on a strong positive response to increased carbon dioxide concentrations. WETCHIMP made clear that we currently lack observation data sets adequate to evaluate modelled CH4 soil-atmosphere fluxes at a spatial scale comparable to model grid cells. Thus there are substantial parameter and structural uncertainties in large-scale CH4 emission models. As an illustration of the implications of CH4 emissions for climate, we show results of the LPX-Bern model, one of the models participating in WETCHIMP. LPX-Bern is forced with observed 20th century climate and climate output from an ensemble of five comprehensive climate models for a low and a high emission scenario until 2100 AD. In the high emission scenario, increased substrate availability for methanogenesis due to a strong stimulation of net primary productivity, together with faster soil turnover, leads to an amplification of CH4 emissions, with the sharpest increase in peatlands (+180% compared to present).
Combined with prescribed anthropogenic CH4 emissions, simulated atmospheric CH4 concentration reaches ~4500 ppbv by 2100 AD, about 800 ppbv more than in standard IPCC scenarios. This represents a significant contribution to radiative forcing of global climate.
Schreffler, Curtis L.
2001-01-01
Ground-water flow in the Potomac-Raritan-Magothy aquifer system (PRM) in south Philadelphia and adjacent southwestern New Jersey was simulated by use of a three-dimensional, seven-layer finite-difference numerical flow model. The simulation was run from 1900, which was prior to ground-water development, through 1995 with 21 stress periods. The focus of the modeling was on a smaller area of concern in south Philadelphia in the vicinity of the Defense Supply Center Philadelphia (DSCP) and the Point Breeze Refinery (PBR). In order to adequately simulate the ground-water flow system in the area of concern, a much larger area was modeled that included parts of New Jersey where significant ground-water withdrawals, which affect water levels in southern Philadelphia, had occurred in the past. At issue in the area of concern is a hydrocarbon plume of unknown origin and time of release. The ground-water-flow system was simulated to estimate past water-level altitudes in and near the area of concern and to determine the effect of the Packer Avenue sewer, which lies south of the DSCP, on the ground-water-flow system. Simulated water-level altitudes for the lower sand unit of the PRM on the DSCP prior to 1945 ranged from pre-development, unstressed altitudes to 3 feet below sea level. Simulated water-level altitudes for the lower sand unit ranged from 3 to 7 feet below sea level from 1946 to 1954, from 6 to 10 feet below sea level from 1955 to 1968, and from 9 to 11 feet below sea level from 1969 to 1978. The lowest simulated water-level altitude on the DSCP was 10.69 feet below sea level near the end of 1974. Model simulations indicate ground water was infiltrating the Packer Avenue sewer prior to approximately 1947 or 1948. Subsequent to that time, simulated ground-water-level altitudes were lower than the bottom of the sewer.
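The seven-layer model itself is well beyond an abstract, but the finite-difference idea behind such flow simulations can be sketched in one dimension. A minimal steady-state confined-aquifer solver; the transmissivity, grid spacing, and boundary heads below are illustrative values, not figures from the report:

```python
import numpy as np

def steady_heads_1d(n=11, dx=100.0, T=500.0, W=None, h_left=10.0, h_right=4.0):
    """Steady confined-aquifer heads on a 1-D grid via finite differences:
    T * (h[i-1] - 2*h[i] + h[i+1]) / dx**2 = -W[i]
    (W > 0 is recharge, W < 0 is withdrawal). Fixed-head (Dirichlet)
    boundaries at both ends."""
    if W is None:
        W = np.zeros(n)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0          # fixed-head boundary rows
    b[0], b[-1] = h_left, h_right
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = T / dx**2
        A[i, i] = -2.0 * T / dx**2
        b[i] = -W[i]
    return np.linalg.solve(A, b)
```

With no stress (W = 0) the heads vary linearly between the boundaries; adding a withdrawal cell depresses heads around it, the 1-D analogue of the drawdown history described in the abstract.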
NASA Technical Reports Server (NTRS)
Migdal, D.; Hill, W. G., Jr.; Jenkins, R. C.
1979-01-01
Results of a series of in-ground-effect twin-jet tests are presented, along with flow models for closely spaced jets to help predict pressure and upwash forces on simulated aircraft surfaces. The isolated twin jet tests revealed unstable fountains over a range of spacings and jet heights, regions of below-ambient pressure on the ground, and negative pressure differential in the upwash flow field. A separate computer code was developed for vertically oriented, incompressible jets. This model more accurately reflects fountain behavior without fully formed wall jets, and adequately predicts ground isobars, upwash dynamic pressure decay, and fountain lift force variation with height above ground.
Wang, Yan Jason; Nguyen, Monica T; Steffens, Jonathan T; Tong, Zheming; Wang, Yungang; Hopke, Philip K; Zhang, K Max
2013-01-15
A new methodology, referred to as the multi-scale structure, integrates "tailpipe-to-road" (i.e., on-road domain) and "road-to-ambient" (i.e., near-road domain) simulations to elucidate the environmental impacts of particulate emissions from traffic sources. The multi-scale structure is implemented in the CTAG model to 1) generate process-based on-road emission rates of ultrafine particles (UFPs) by explicitly simulating the effects of exhaust properties, traffic conditions, and meteorological conditions and 2) characterize the impacts of traffic-related emissions on micro-environmental air quality near a highway intersection in Rochester, NY. The performance of CTAG, evaluated against field measurements, shows adequate agreement in capturing the dispersion of carbon monoxide (CO) and the number concentrations of UFPs in the near-road micro-environment. As a proof-of-concept case study, we also apply CTAG to separate the relative impacts of the shutdown of a large coal-fired power plant (CFPP) and the adoption of ultra-low-sulfur diesel (ULSD) on UFP concentrations in the intersection micro-environment. Although CTAG is still computationally expensive compared to the widely used parameterized dispersion models, it has the potential to advance our capability to predict the impacts of UFP emissions and spatial/temporal variations of air pollutants in complex environments. Furthermore, for the on-road simulations, CTAG can serve as a process-based emission model; combining the on-road and near-road simulations, CTAG becomes a "plume-in-grid" model for mobile emissions. The processed emission profiles can potentially improve regional air quality and climate predictions accordingly.
Dib, Alain E; Johnson, Chris E; Driscoll, Charles T; Fahey, Timothy J; Hayhoe, Katharine
2014-05-01
Carbon (C) sequestration in forest biomass and soils may help decrease regional C footprints and mitigate future climate change. The efficacy of these practices must be verified by monitoring and by approved calculation methods (i.e., models) to be credible in C markets. Two widely used soil organic matter models - CENTURY and RothC - were used to project changes in soil organic carbon (SOC) pools after clear-cutting disturbance, as well as under a range of future climate and atmospheric carbon dioxide (CO2) scenarios. Data from the temperate, predominantly deciduous Hubbard Brook Experimental Forest (HBEF) in New Hampshire, USA, were used to parameterize and validate the models. Clear-cutting simulations demonstrated that both models can effectively simulate soil C dynamics in the northern hardwood forest when adequately parameterized. The minimum postharvest SOC predicted by RothC occurred in postharvest year 14 and was within 1.5% of the observed minimum, which occurred in year 8. CENTURY predicted the postharvest minimum SOC to occur in year 45, at a value 6.9% greater than the observed minimum; the slow response of both models to disturbance suggests that they may overestimate the time required to reach new steady-state conditions. Four climate change scenarios were used to simulate future changes in SOC pools. Climate-change simulations predicted increases in SOC by as much as 7% at the end of this century, partially offsetting future CO2 emissions. This sequestration was the product of enhanced forest productivity, and associated litter input to the soil, due to increased temperature, precipitation and CO2. The simulations also suggested that considerable losses of SOC (8-30%) could occur if forest vegetation at HBEF does not respond to changes in climate and CO2 levels. Therefore, the source/sink behavior of temperate forest soils likely depends on the degree to which forest growth is stimulated by new climate and CO2 conditions.
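Neither CENTURY nor RothC is reproduced here; a one-pool caricature (first-order decay plus constant litter input, with rates invented for illustration) is enough to show why post-disturbance SOC recovery in such models is slow and why the steady state is set by the litter/decay balance:

```python
def soc_trajectory(c0, litter, k, years):
    """One-pool soil carbon model: dC/dt = litter - k*C, integrated
    with annual time steps. The steady state is litter/k, and the
    relaxation toward it is exponential with e-folding time 1/k."""
    c, out = c0, []
    for _ in range(years):
        c += litter - k * c
        out.append(c)
    return out
```

With a hypothetical k = 0.02 per year (a 50-year e-folding time), reaching ~95% of a new steady state takes roughly 150 years, the kind of slow relaxation that makes multi-pool models appear sluggish after clear-cutting.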
A Stochastic Simulator of a Blood Product Donation Environment with Demand Spikes and Supply Shocks
An, Ming-Wen; Reich, Nicholas G.; Crawford, Stephen O.; Brookmeyer, Ron; Louis, Thomas A.; Nelson, Kenrad E.
2011-01-01
The availability of an adequate blood supply is a critical public health need. An influenza epidemic or another crisis affecting population mobility could create a critical donor shortage, which could profoundly impact blood availability. We developed a simulation model for the blood supply environment in the United States to assess the likely impact on blood availability of factors such as an epidemic. We developed a simulator of a multi-state model with transitions among states. Weekly numbers of blood units donated and needed were generated by negative binomial stochastic processes. The simulator allows exploration of the blood system under certain conditions of supply and demand rates, and can be used for planning purposes to prepare for sudden changes in the public's health. The simulator incorporates three donor groups (first-time, sporadic, and regular), immigration and emigration, deferral period, and adjustment factors for recruitment. We illustrate possible uses of the simulator by specifying input values for an 8-week flu epidemic, resulting in a moderate supply shock and demand spike (for example, from postponed elective surgeries), and different recruitment strategies. The input values are based in part on data from a regional blood center of the American Red Cross during 1996-2005. Our results from these scenarios suggest that the key to alleviating deficit effects of a system shock may be appropriate timing and duration of recruitment efforts, in turn depending critically on anticipating shocks and rapidly implementing recruitment efforts. PMID:21814550
A stochastic simulator of a blood product donation environment with demand spikes and supply shocks.
An, Ming-Wen; Reich, Nicholas G; Crawford, Stephen O; Brookmeyer, Ron; Louis, Thomas A; Nelson, Kenrad E
2011-01-01
The availability of an adequate blood supply is a critical public health need. An influenza epidemic or another crisis affecting population mobility could create a critical donor shortage, which could profoundly impact blood availability. We developed a simulation model for the blood supply environment in the United States to assess the likely impact on blood availability of factors such as an epidemic. We developed a simulator of a multi-state model with transitions among states. Weekly numbers of blood units donated and needed were generated by negative binomial stochastic processes. The simulator allows exploration of the blood system under certain conditions of supply and demand rates, and can be used for planning purposes to prepare for sudden changes in the public's health. The simulator incorporates three donor groups (first-time, sporadic, and regular), immigration and emigration, deferral period, and adjustment factors for recruitment. We illustrate possible uses of the simulator by specifying input values for an 8-week flu epidemic, resulting in a moderate supply shock and demand spike (for example, from postponed elective surgeries), and different recruitment strategies. The input values are based in part on data from a regional blood center of the American Red Cross during 1996-2005. Our results from these scenarios suggest that the key to alleviating deficit effects of a system shock may be appropriate timing and duration of recruitment efforts, in turn depending critically on anticipating shocks and rapidly implementing recruitment efforts.
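A minimal sketch of the negative-binomial supply/demand machinery described above; all parameters (weekly means, shock window, shock magnitudes) are invented for illustration, and the real simulator's donor groups, migration, and deferral structure are omitted:

```python
import numpy as np

def simulate_inventory(rng, weeks=52, n_supply=50, n_demand=48, p=0.5,
                       shock=range(20, 28), supply_cut=0.5, demand_boost=1.3):
    """Weekly donated and needed units are drawn from negative binomial
    distributions (NumPy parameterization: mean = n*(1-p)/p, so with p
    fixed, 'n' scales the weekly mean). During shock weeks the supply
    mean is cut and the demand mean boosted, mimicking an epidemic with
    a postponed-surgery rebound. Returns the running net inventory."""
    inventory, path = 0, []
    for week in range(weeks):
        ns = n_supply * (supply_cut if week in shock else 1.0)
        nd = n_demand * (demand_boost if week in shock else 1.0)
        inventory += rng.negative_binomial(max(1, round(ns)), p)
        inventory -= rng.negative_binomial(max(1, round(nd)), p)
        path.append(inventory)
    return path
```

Sweeping the start week of a recruitment boost (e.g., a temporary increase in n_supply) against the shock window is one way to probe the paper's qualitative point that the timing and duration of recruitment matter more than raw effort.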
Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C.; Gewaltig, Marc-Oliver
2017-01-01
Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because these brain models are, at the current stage, too complex to run under real-time constraints, they cannot be embedded in a real-world task; rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there has so far been no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. 
We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking embedding a retina model on the iCub humanoid robot. These use cases allow assessment of the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments. PMID:28179882
Fate and transport of phenol in a packed bed reactor containing simulated solid waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saquing, Jovita M., E-mail: jmsaquing@gmail.com; Knappe, Detlef R.U., E-mail: knappe@ncsu.edu; Barlaz, Morton A., E-mail: barlaz@ncsu.edu
Highlights: • Anaerobic column experiments were conducted at 37 °C using a simulated waste mixture. • Sorption and biodegradation model parameters were determined from batch tests. • HYDRUS simulated well the fate and transport of phenol in a fully saturated waste column. • The batch biodegradation rate and the rate obtained by inverse modeling differed by a factor of ~2. • Tracer tests showed the importance of hydrodynamic parameters to improve model estimates. - Abstract: An assessment of the risk to human health and the environment associated with the presence of organic contaminants (OCs) in landfills necessitates reliable predictive models. The overall objectives of this study were to (1) conduct column experiments to measure the fate and transport of an OC in a simulated solid waste mixture, (2) compare the results of column experiments to model predictions using HYDRUS-1D (version 4.13), a contaminant fate and transport model that can be parameterized to simulate the laboratory experimental system, and (3) determine model input parameters from independently conducted batch experiments. Experiments were conducted in which sorption only and sorption plus biodegradation influenced OC transport. HYDRUS-1D can reasonably simulate the fate and transport of phenol in an anaerobic and fully saturated waste column in which biodegradation and sorption are the prevailing fate processes. The agreement between model predictions and column data was imperfect (i.e., within a factor of two) for the sorption plus biodegradation test and the error almost certainly lies in the difficulty of measuring a biodegradation rate that is applicable to the column conditions. 
Nevertheless, a biodegradation rate estimate that is within a factor of two or even five may be adequate in the context of a landfill, given the extended retention time and the fact that leachate release will be controlled by the infiltration rate, which can be minimized by engineering controls.
Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C; Gewaltig, Marc-Oliver
2017-01-01
Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because these brain models are, at the current stage, too complex to run under real-time constraints, they cannot be embedded in a real-world task; rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there has so far been no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. 
We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking embedding a retina model on the iCub humanoid robot. These use cases allow assessment of the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boville, B.A.; Randel, W.J.
1992-05-01
Equatorially trapped wave modes, such as Kelvin and mixed Rossby-gravity waves, are believed to play a crucial role in forcing the quasi-biennial oscillation (QBO) of the lower tropical stratosphere. This study examines the ability of a general circulation model (GCM) to simulate these waves and investigates the changes in the wave properties as a function of the vertical resolution of the model. The simulations produce a stratopause-level semiannual oscillation but not a QBO. An unfortunate property of the equatorially trapped waves is that they tend to have small vertical wavelengths (≤15 km). Some of the waves, believed to be important in forcing the QBO, have wavelengths as short as 4 km. The short vertical wavelengths pose a stringent computational requirement for numerical models whose vertical grid spacing is typically chosen based on the requirements for simulating extratropical Rossby waves (which have much longer vertical wavelengths). This study examines the dependence of the equatorial wave simulation on vertical resolution using three experiments with vertical grid spacings of approximately 2.8, 1.4, and 0.7 km. Several Kelvin, mixed Rossby-gravity, and inertio-gravity waves are identified in the simulations. At high vertical resolution, the simulated waves are shown to correspond fairly well to the available observations. The properties of the relatively slow (and vertically short) waves believed to play a role in the QBO vary significantly with vertical resolution. Vertical grid spacings of about 1 km or less appear to be required to represent these waves adequately. The simulated wave amplitudes are at least as large as observed, and the waves are absorbed in the lower stratosphere, as required in order to force the QBO. However, the EP flux divergence associated with the waves is not sufficient to explain the zonal flow accelerations found in the QBO. 39 refs., 17 figs., 1 tab.
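The resolution requirement quoted above is essentially a sampling argument: a wave of vertical wavelength λ needs grid spacing well below λ/2 (the Nyquist limit), and in practice several points per wavelength. A quick check of the three experimental grid spacings against the 4 km waves; the ≥4 points-per-wavelength threshold is a common rule of thumb, not a figure from the paper:

```python
def points_per_wavelength(wavelength_km, spacing_km):
    """Number of grid points sampling one vertical wavelength."""
    return wavelength_km / spacing_km

# A 4 km vertical wavelength (the shortest QBO-forcing waves cited
# above) on the three experimental grids:
for dz in (2.8, 1.4, 0.7):
    ppw = points_per_wavelength(4.0, dz)
    resolved = ppw >= 4          # rule-of-thumb threshold, not Nyquist
    print(f"dz = {dz} km: {ppw:.1f} points/wavelength, resolved: {resolved}")
```

Only the 0.7 km grid clears the rule of thumb (and 2.8 km is below even the Nyquist limit of 2 points), consistent with the abstract's conclusion that spacings of about 1 km or less are required.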
NASA Astrophysics Data System (ADS)
Ricciuto, D. M.; Warren, J.; Guha, A.
2017-12-01
While carbon and energy fluxes in current Earth system models generally have reasonable instantaneous responses to extreme temperature and precipitation events, they often do not adequately represent the long-term impacts of these events. For example, simulated net primary productivity (NPP) may decrease during an extreme heat wave or drought, but may recover rapidly to pre-event levels following the conclusion of the extreme event. However, field measurements indicate that long-lasting damage to leaves and other plant components often occurs, potentially affecting the carbon and energy balance for months after the extreme event. The duration and frequency of such extreme conditions are likely to shift in the future, and therefore it is critical for Earth system models to better represent these processes for more accurate predictions of future vegetation productivity and land-atmosphere feedbacks. Here we modify the structure of the Accelerated Climate Model for Energy (ACME) land surface model to represent long-term impacts and test the improved model against observations from experiments that applied extreme conditions in growth chambers. Additionally, we test the model against eddy covariance measurements that followed extreme conditions at selected locations in North America, and against satellite-measured vegetation indices following regional extreme events.
In vivo porcine training model for laparoscopic Roux-en-Y choledochojejunostomy.
Lee, Jun Suh; Hong, Tae Ho
2015-06-01
The purpose of this study was to develop a porcine training model for laparoscopic choledochojejunostomy (CJ) that can act as a bridge between simulation models and actual surgery for novice surgeons. The feasibility of this model was evaluated. Laparoscopic CJ using intracorporeal sutures was performed on ten animals by a surgical fellow with no experience in human laparoscopic CJ. A single layer of running sutures was placed in the posterior and anterior layers. Jejunojejunostomy was performed using a linear stapler, and the jejunal opening was closed using absorbable unidirectional sutures (V-Loc 180). The average operation time was 131.3 ± 36.4 minutes, and the CJ time was 57.5 ± 18.4 minutes. Both the operation time and CJ time showed a steady decrease with an increasing number of cases. The average diameter of the common bile duct (CBD) was 6.4 ± 0.8 mm. Of a total of ten animals, eight were sacrificed after the procedure. In two animals, a survival model was evaluated. Both pigs recovered completely and survived for two weeks, after which both animals were sacrificed. None of the animals exhibited any signs of bile leakage or anastomosis site stricture. The porcine training model introduced in this paper is an adequate model for practicing laparoscopic CJ, and its simulation of human tissue is excellent.
Alexiadis, Orestis; Daoulas, Kostas Ch; Mavrantzas, Vlasis G
2008-01-31
A new Monte Carlo algorithm is presented for the simulation of atomistically detailed alkanethiol self-assembled monolayers (R-SH) on a Au(111) surface. Built on a set of both simple and more complex (sometimes nonphysical) moves, the new algorithm is capable of efficiently driving all alkanethiol molecules to the Au(111) surface, thereby leading to full surface coverage, irrespective of the initial setup of the system. This circumvents a significant limitation of previous methods in which the simulations typically started from optimally packed structures on the substrate close to thermal equilibrium. Further, by considering an extended ensemble of configurations each one of which corresponds to a different value of the sulfur-sulfur repulsive core potential, σ_ss, and by allowing for configurations to swap between systems characterized by different σ_ss values, the new algorithm can adequately simulate model R-SH/Au(111) systems for values of σ_ss ranging from 4.25 Å corresponding to the Hautman-Klein molecular model (J. Chem. Phys. 1989, 91, 4994; 1990, 93, 7483) to 4.97 Å corresponding to the Siepmann-McDonald model (Langmuir 1993, 9, 2351), and practically any chain length. Detailed results are presented quantifying the efficiency and robustness of the new method. Representative simulation data for the dependence of the structural and conformational properties of the formed monolayer on the details of the employed molecular model are reported and discussed; an investigation of the variation of molecular organization and ordering on the Au(111) substrate for three CH3-(CH2)n-SH/Au(111) systems with n=9, 15, and 21 is also included.
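The configuration-swap step described above resembles Hamiltonian replica exchange; a sketch of the Metropolis acceptance rule for exchanging configurations between two systems with different potentials (toy one-dimensional potentials stand in for force fields with different σ_ss; nothing here is taken from the paper's model):

```python
import math
import random

def swap_accept_prob(u1, u2, x1, x2, beta=1.0):
    """Metropolis probability of swapping configurations x1 <-> x2
    between two systems with potential functions u1 and u2 in an
    extended ensemble:
    acc = min(1, exp(-beta * [(U1(x2) + U2(x1)) - (U1(x1) + U2(x2))]))"""
    delta = (u1(x2) + u2(x1)) - (u1(x1) + u2(x2))
    return min(1.0, math.exp(-beta * delta))

def attempt_swap(u1, u2, x1, x2, rng=random, beta=1.0):
    """Exchange the configurations with the probability above."""
    if rng.random() < swap_accept_prob(u1, u2, x1, x2, beta):
        return x2, x1      # swap accepted: configurations exchanged
    return x1, x2
```

Swapping identical configurations always succeeds (delta = 0), while configurations that are costly under each other's potential are exchanged with exponentially suppressed probability, which preserves detailed balance across the ensemble.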
Wang, Yanfu; Jiang, Juncheng; Zhu, Dezhi
2009-07-15
In order to investigate fire characteristics under natural ventilation in tunnels with roof openings, a full-scale tunnel fire experiment was designed and conducted. All the experimental data presented in this paper can be further applied to validate numerical simulation models and reduced-scale experimental results. The physical model of a tunnel with roof openings and the mathematical model of the tunnel fire are presented in this paper. The tunnel fire was simulated under the same conditions as the experiment using CFD software. From the results, it can be seen that most smoke is discharged directly from the tunnel through the roof openings, so roof openings are favorable for exhausting smoke. But as smoke temperatures decrease, some smoke may flow back and mix with the smoke-free layer below, which reduces visibility and is unfavorable for personnel evacuation. It is therefore necessary to investigate more efficient ways of improving smoke removal, such as early fire detection systems, adequate warning signs, and tunnel caps.
Mesospheric ozone measurements by SAGE II
NASA Technical Reports Server (NTRS)
Chu, D. A.; Cunnold, D. M.
1994-01-01
SAGE II observations of ozone at sunrise and sunset (solar zenith angle = 90 deg) at approximately the same tropical latitude and on the same day exhibit larger concentrations at sunrise than at sunset between 55 and 65 km. Because of the rapid conversion between atomic oxygen and ozone, the onion-peeling scheme used in SAGE II retrievals, which is based on an assumption of constant ozone, is invalid. A one-dimensional photochemical model is used to simulate the diurnal variation of ozone particularly within the solar zenith angle of 80 deg - 100 deg. This model indicates that the retrieved SAGE II sunrise and sunset ozone values are both overestimated. The Chapman reactions produce an adequate simulation of the ozone sunrise/sunset ratio only below 60 km, while above 60 km this ratio is highly affected by the odd oxygen loss due to odd hydrogen reactions, particularly OH. The SAGE II ozone measurements are in excellent agreement with model results to which an onion peeling procedure is applied. The SAGE II ozone observations provide information on the mesospheric chemistry not only through the ozone profile averages but also from the sunrise/sunset ratio.
Catalytic Ignition and Upstream Reaction Propagation in Monolith Reactors
NASA Technical Reports Server (NTRS)
Struk, Peter M.; Dietrich, Daniel L.; Miller, Fletcher J.; T'ien, James S.
2007-01-01
Using numerical simulations, this work demonstrates a concept called back-end ignition for lighting-off and pre-heating a catalytic monolith in a power generation system. In this concept, a downstream heat source (e.g. a flame) or resistive heating in the downstream portion of the monolith initiates a localized catalytic reaction which subsequently propagates upstream and heats the entire monolith. The simulations used a transient numerical model of a single catalytic channel which characterizes the behavior of the entire monolith. The model treats both the gas and solid phases and includes detailed homogeneous and heterogeneous reactions. An important parameter in the model for back-end ignition is upstream heat conduction along the solid. The simulations used both dry and wet CO chemistry as a model fuel for the proof-of-concept calculations; the presence of water vapor can trigger homogenous reactions, provided that gas-phase temperatures are adequately high and there is sufficient fuel remaining after surface reactions. With sufficiently high inlet equivalence ratio, back-end ignition occurs using the thermophysical properties of both a ceramic and metal monolith (coated with platinum in both cases), with the heat-up times significantly faster for the metal monolith. For lower equivalence ratios, back-end ignition occurs without upstream propagation. Once light-off and propagation occur, the inlet equivalence ratio could be reduced significantly while still maintaining an ignited monolith as demonstrated by calculations using complete monolith heating.
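Back-end ignition with upstream solid-phase conduction, the key mechanism in the abstract above, can be caricatured by an explicit 1-D heat equation on the monolith wall in which cells above an ignition temperature release heat; all parameters are invented for illustration, and the gas phase and detailed chemistry are omitted entirely:

```python
import numpy as np

def ignition_front(steps=400, n=60, fourier=0.2, t_ign=800.0,
                   q_step=60.0, t_max=1800.0, t0=300.0, seed_cells=5):
    """Explicit 1-D conduction along the monolith wall (stability
    requires the Fourier number alpha*dt/dx**2 < 0.5). Cells hotter
    than t_ign release q_step of reaction heat per step, capped at
    t_max. The last `seed_cells` cells start hot (back-end ignition).
    Returns the ignited-cell count after each step."""
    T = np.full(n, t0)
    T[-seed_cells:] = 1200.0           # downstream ignition source
    history = []
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = T[:-2] - 2 * T[1:-1] + T[2:]
        T = T + fourier * lap          # conduction step
        burning = T >= t_ign
        T = np.minimum(T + q_step * burning, t_max)   # heat release
        history.append(int(burning.sum()))
    return history
```

Running this, the ignited region conducts its way upstream from the back end until essentially the whole channel is lit, the conduction-driven propagation the simulations describe; lowering q_step (the analogue of a leaner inlet mixture) slows and eventually stalls the front.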
Reconciling divergent trends and millennial variations in Holocene temperatures.
Marsicek, Jeremiah; Shuman, Bryan N; Bartlein, Patrick J; Shafer, Sarah L; Brewer, Simon
2018-01-31
Cooling during most of the past two millennia has been widely recognized and has been inferred to be the dominant global temperature trend of the past 11,700 years (the Holocene epoch). However, long-term cooling has been difficult to reconcile with global forcing, and climate models consistently simulate long-term warming. The divergence between simulations and reconstructions emerges primarily for northern mid-latitudes, for which pronounced cooling has been inferred from marine and coastal records using multiple approaches. Here we show that temperatures reconstructed from sub-fossil pollen from 642 sites across North America and Europe closely match simulations, and that long-term warming, not cooling, defined the Holocene until around 2,000 years ago. The reconstructions indicate that evidence of long-term cooling was limited to North Atlantic records. Early Holocene temperatures on the continents were more than two degrees Celsius below those of the past two millennia, consistent with the simulated effects of remnant ice sheets in the climate model Community Climate System Model 3 (CCSM3). CCSM3 simulates increases in 'growing degree days'-a measure of the accumulated warmth above five degrees Celsius per year-of more than 300 kelvin days over the Holocene, consistent with inferences from the pollen data. It also simulates a decrease in mean summer temperatures of more than two degrees Celsius, which correlates with reconstructed marine trends and highlights the potential importance of the different subseasonal sensitivities of the records. Despite the differing trends, pollen- and marine-based reconstructions are correlated at millennial-to-centennial scales, probably in response to ice-sheet and meltwater dynamics, and to stochastic dynamics similar to the temperature variations produced by CCSM3. 
Although our results depend on a single source of palaeoclimatic data (pollen) and a single climate-model simulation, they reinforce the notion that climate models can adequately simulate climates for periods other than the present day. They also demonstrate that amplified warming in recent decades increased temperatures above the mean of any century during the past 11,000 years.
Reconciling divergent trends and millennial variations in Holocene temperatures
NASA Astrophysics Data System (ADS)
Marsicek, Jeremiah; Shuman, Bryan N.; Bartlein, Patrick J.; Shafer, Sarah L.; Brewer, Simon
2018-02-01
Cooling during most of the past two millennia has been widely recognized and has been inferred to be the dominant global temperature trend of the past 11,700 years (the Holocene epoch). However, long-term cooling has been difficult to reconcile with global forcing, and climate models consistently simulate long-term warming. The divergence between simulations and reconstructions emerges primarily for northern mid-latitudes, for which pronounced cooling has been inferred from marine and coastal records using multiple approaches. Here we show that temperatures reconstructed from sub-fossil pollen from 642 sites across North America and Europe closely match simulations, and that long-term warming, not cooling, defined the Holocene until around 2,000 years ago. The reconstructions indicate that evidence of long-term cooling was limited to North Atlantic records. Early Holocene temperatures on the continents were more than two degrees Celsius below those of the past two millennia, consistent with the simulated effects of remnant ice sheets in the climate model Community Climate System Model 3 (CCSM3). CCSM3 simulates increases in ‘growing degree days’—a measure of the accumulated warmth above five degrees Celsius per year—of more than 300 kelvin days over the Holocene, consistent with inferences from the pollen data. It also simulates a decrease in mean summer temperatures of more than two degrees Celsius, which correlates with reconstructed marine trends and highlights the potential importance of the different subseasonal sensitivities of the records. Despite the differing trends, pollen- and marine-based reconstructions are correlated at millennial-to-centennial scales, probably in response to ice-sheet and meltwater dynamics, and to stochastic dynamics similar to the temperature variations produced by CCSM3. 
Although our results depend on a single source of palaeoclimatic data (pollen) and a single climate-model simulation, they reinforce the notion that climate models can adequately simulate climates for periods other than the present-day. They also demonstrate that amplified warming in recent decades increased temperatures above the mean of any century during the past 11,000 years.
Ockerman, Darwin J.
2005-01-01
The U.S. Geological Survey, in cooperation with the San Antonio Water System, constructed three watershed models using the Hydrological Simulation Program—FORTRAN (HSPF) to simulate streamflow and estimate recharge to the Edwards aquifer in the Hondo Creek, Verde Creek, and San Geronimo Creek watersheds in south-central Texas. The three models were calibrated and tested with available data collected during 1992–2003. Simulations of streamflow and recharge were done for 1951–2003. The approach to construct the models was to first calibrate the Hondo Creek model (with an hourly time step) using 1992–99 data and test the model using 2000–2003 data. The Hondo Creek model parameters then were applied to the Verde Creek and San Geronimo Creek watersheds to construct the Verde Creek and San Geronimo Creek models. The simulated streamflows for Hondo Creek are considered acceptable. Annual, monthly, and daily simulated streamflows adequately match measured values, but simulated hourly streamflows do not. The accuracy of streamflow simulations for Verde Creek is uncertain. For San Geronimo Creek, the match of measured and simulated annual and monthly streamflows is acceptable (or nearly so); but for daily and hourly streamflows, the calibration is relatively poor. Simulated average annual total streamflow for 1951–2003 to Hondo Creek, Verde Creek, and San Geronimo Creek is 45,400; 32,400; and 11,100 acre-feet, respectively. Simulated average annual streamflow at the respective watershed outlets is 13,000; 16,200; and 6,920 acre-feet. The difference between total streamflow and streamflow at the watershed outlet is streamflow lost to channel infiltration. Estimated average annual Edwards aquifer recharge for the Hondo Creek, Verde Creek, and San Geronimo Creek watersheds for 1951–2003 is 37,900 acre-feet (5.04 inches), 26,000 acre-feet (3.36 inches), and 5,940 acre-feet (1.97 inches), respectively.
Most of the recharge (about 77 percent for the three watersheds together) occurs as streamflow channel infiltration. Diffuse recharge (direct infiltration of rainfall to the aquifer) accounts for the remaining 23 percent of recharge. For the Hondo Creek watershed, the HSPF recharge estimates for 1992–2003 averaged about 22 percent less than those estimated by the Puente method, a method the U.S. Geological Survey has used to compute annual recharge to the Edwards aquifer since 1978. HSPF recharge estimates for the Verde Creek watershed average about 40 percent less than those estimated by the Puente method.
Chvetsov, Alexei V; Dong, Lei; Palta, Jantinder R; Amdur, Robert J
2009-10-01
To develop a fast computational radiobiologic model for quantitative analysis of tumor volume during fractionated radiotherapy. The tumor-volume model can be useful for optimizing image-guidance protocols and four-dimensional treatment simulations in proton therapy that is highly sensitive to physiologic changes. The analysis is performed using two approximations: (1) tumor volume is a linear function of total cell number and (2) tumor-cell population is separated into four subpopulations: oxygenated viable cells, oxygenated lethally damaged cells, hypoxic viable cells, and hypoxic lethally damaged cells. An exponential decay model is used for disintegration and removal of oxygenated lethally damaged cells from the tumor. We tested our model on daily volumetric imaging data available for 14 head-and-neck cancer patients treated with an integrated computed tomography/linear accelerator system. A simulation based on the averaged values of radiobiologic parameters was able to describe eight cases during the entire treatment and four cases partially (50% of treatment time) with a maximum 20% error. The largest discrepancies between the model and clinical data were obtained for small tumors, which may be explained by larger errors in the manual tumor volume delineation procedure. Our results indicate that the change in gross tumor volume for head-and-neck cancer can be adequately described by a relatively simple radiobiologic model. In future research, we propose to study the variation of model parameters by fitting to clinical data for a cohort of patients with head-and-neck cancer and other tumors. The potential impact of other processes, like concurrent chemotherapy, on tumor volume should be evaluated.
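The four-subpopulation bookkeeping described above can be sketched in a few lines. The following is an illustrative Python sketch, not the authors' implementation: parameter names and values are hypothetical, the daily cell kill is reduced to a simple exponential survival term, and only the oxygenated lethally damaged cells are cleared by exponential decay, as stated in the abstract.

```python
import math

def simulate_tumor_volume(days, dose_per_fx, alpha_ox, alpha_hyp,
                          decay_rate, v0, hypoxic_frac):
    """Tumor volume over a course of daily fractions, assuming volume is
    a linear function of total cell number across four subpopulations."""
    ox_viable = v0 * (1.0 - hypoxic_frac)   # oxygenated viable cells
    hyp_viable = v0 * hypoxic_frac          # hypoxic viable cells
    ox_dam = 0.0                            # oxygenated lethally damaged
    hyp_dam = 0.0                           # hypoxic lethally damaged
    volumes = []
    for _ in range(days):
        # each daily fraction moves a portion of viable cells into the
        # lethally damaged pools (exponential survival sketch)
        sf_ox = math.exp(-alpha_ox * dose_per_fx)
        sf_hyp = math.exp(-alpha_hyp * dose_per_fx)
        ox_dam += ox_viable * (1.0 - sf_ox)
        hyp_dam += hyp_viable * (1.0 - sf_hyp)
        ox_viable *= sf_ox
        hyp_viable *= sf_hyp
        # oxygenated lethally damaged cells are removed by exponential decay
        ox_dam *= math.exp(-decay_rate)
        volumes.append(ox_viable + ox_dam + hyp_viable + hyp_dam)
    return volumes
```

With positive kill and clearance parameters the simulated volume shrinks over the course of treatment, mirroring the gross-tumor-volume regression the model is fit to.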
Turbulence Modelling in Wind Turbine Wakes
NASA Astrophysics Data System (ADS)
Olivares Espinosa, Hugo
With the expansion of the wind energy industry, wind parks have become a common appearance in our landscapes. Owing to restrictions of space or to economic reasons, wind turbines are located close to each other in wind farms. This causes interference problems which reduce the efficiency of the array. In particular, the wind turbine wakes increase the level of turbulence and cause a momentum defect that may lead to an increase of mechanical loads and to a reduction of power output. Thus, it is important for the wind energy industry to predict the characteristics of the turbulence field in the wakes with the purpose of increasing the efficiency of the power extraction. Since this is a phenomenon of intrinsically non-linear nature, it can only be accurately described by the full set of the Navier-Stokes equations. Furthermore, a proper characterization of turbulence cannot be made without resolving the turbulent motions, so neither linearized models nor the widely used Reynolds-Averaged Navier-Stokes model can be employed. Instead, Large-Eddy Simulations (LES) provide a feasible alternative, where the energy containing fluctuations of the velocity field are resolved and the effects of the smaller eddies are modelled through a sub-grid scale component. The objective of this work is the modelling of turbulence in wind turbine wakes in a homogeneous turbulence inflow. A methodology has been developed to fulfill this objective. Firstly, a synthetic turbulence field is introduced into a computational domain where LES are performed to simulate a decaying turbulence flow. Secondly, the Actuator Disk (AD) technique is employed to simulate the effect of a rotor in the incoming flow and produce a turbulent wake. The implementation is carried out in OpenFOAM, an open-source CFD platform, resembling a well documented procedure previously used for wake flow simulations. 
Results obtained with the proposed methodology are validated by comparing with values obtained from wind tunnel experiments. In addition, simulations are also carried out with EllipSys3D, a code widely used and tested for computations of wind turbine wakes, the results of which provide a useful reference. Despite a limited grid resolution with respect to the size of the inflow turbulence structures, the results show that the turbulence characteristics in both the decaying turbulence and in the wake field are aptly reproduced. These observations are accompanied by an assessment of the LES modelling, which is found to be adequate in the simulations. An analysis of the longitudinal evolution of the turbulence lengthscales shows that within the wake, they develop mostly as in the free decaying turbulence. Furthermore, both codes predict that the lengthscales of the ambient turbulence dominate across the wake, with little effect caused by the shear layer at the wake envelope. These remarks are supported by an examination of features in the energy spectra along the wake. Also in this thesis, the wake turbulence fields produced by two different AD models are compared: a uniformly loaded disk and a model that includes the effects of tangential velocities and considers airfoil blade properties. The latter includes a rotational velocity controller to simulate the real conditions of variable speed turbines. Results show that the differences observed between the models in the near wake field are reduced further downstream. Also, it is seen that these disparities decrease when a turbulent inflow is employed, in comparison with the non-turbulent case. These observations confirm the assumption that uniformly loaded disks are adequate to model the far wake.
In addition, the control method is shown to adjust to the local inflow conditions, regulating the rotational speed accordingly, while the computed performance proves that the implementation represents well the modelled rotor design. The results obtained in this work show that the presented methodology can successfully be used in the modelling and analysis of turbulence in wake flows.
Brooks, Lynette E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the Southern Utah Valley Municipal Water Association, updated an existing USGS model of southern Utah and Goshen Valleys for hydrologic and climatic conditions from 1991 to 2011 and used the model for projection and groundwater management simulations. All model files used in the transient model were updated to be compatible with MODFLOW-2005 and with the additional stress periods. The well and recharge files had the most extensive changes. Discharge to pumping wells in southern Utah and Goshen Valleys was estimated and simulated on an annual basis from 1991 to 2011. Recharge estimates for 1991 to 2011 were included in the updated model by using precipitation, streamflow, canal diversions, and irrigation groundwater withdrawals for each year. The model was evaluated to determine how well it simulates groundwater conditions during recent increased withdrawals and drought, and to determine if the model is adequate for use in future planning. In southern Utah Valley, the magnitude and direction of annual water-level fluctuation simulated by the updated model reasonably match measured water-level changes, but they do not simulate as much decline as was measured in some locations from 2000 to 2002. Both the rapid increase in groundwater withdrawals and the total groundwater withdrawals in southern Utah Valley during this period exceed the variations and magnitudes simulated during the 1949 to 1990 calibration period. It is possible that hydraulic properties may be locally incorrect or that changes, such as land use or irrigation diversions, occurred that are not simulated. In the northern part of Goshen Valley, simulated water-level changes reasonably match measured changes. Farther south, however, simulated declines are much less than measured declines. Land-use changes indicate that groundwater withdrawals in Goshen Valley are possibly greater than estimated and simulated. 
It is also possible that irrigation methods, amount of diversions, or other factors have changed that are not simulated or that aquifer properties are incorrectly simulated. The model can be used for projections about the effects of future groundwater withdrawals and managed aquifer recharge in southern Utah Valley, but rapid changes in withdrawals and increasing withdrawals dramatically may reduce the accuracy of the predicted water-level and groundwater-budget changes. The model should not be used for projections in Goshen Valley until additional withdrawal and discharge data are collected and the model is recalibrated if necessary. Model projections indicate large drawdowns of up to 400 feet and complete cessation of natural discharge in some areas with potential future increases in water use. Simulated managed aquifer recharge counteracts those effects. Groundwater management examples indicate that drawdown could be less, and discharge at selected springs could be greater, with optimized groundwater withdrawals and managed aquifer recharge than without optimization. Recalibration to more recent stresses and seasonal stress periods, and collection of new withdrawal, stream, land-use, and discharge data could improve the model fit to water-level changes and the accuracy of predictions.
Parameter recovery, bias and standard errors in the linear ballistic accumulator model.
Visser, Ingmar; Poessé, Rens
2017-05-01
The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation, which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.
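The Hessian-based route to standard errors generalizes beyond the LBA: at the maximum likelihood estimate, standard errors are the square roots of the diagonal of the inverse Hessian of the negative log-likelihood. The sketch below (in Python rather than R, with a toy Gaussian likelihood standing in for the LBA likelihood) illustrates the mechanics; everything here is our own illustration, not the glba package's code.

```python
import numpy as np

def numerical_hessian(f, x, eps=1e-5):
    """Central-difference Hessian of a scalar function f at point x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += eps; xpp[j] += eps
            xpm = x.copy(); xpm[i] += eps; xpm[j] -= eps
            xmp = x.copy(); xmp[i] -= eps; xmp[j] += eps
            xmm = x.copy(); xmm[i] -= eps; xmm[j] -= eps
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * eps**2)
    return H

# toy data: the "model" is just N(mu, sigma), parameterized as (mu, log sigma)
rng = np.random.default_rng(0)
data = rng.normal(5.0, 2.0, size=500)

def nll(theta):
    """Negative log-likelihood of the toy Gaussian model."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return (0.5 * len(data) * np.log(2 * np.pi * sigma**2)
            + np.sum((data - mu) ** 2) / (2 * sigma**2))

theta_hat = np.array([data.mean(), np.log(data.std())])  # closed-form MLE
H = numerical_hessian(nll, theta_hat)
se = np.sqrt(np.diag(np.linalg.inv(H)))  # Hessian-based standard errors
```

For this model the analytic values are SE(mu) = sigma/sqrt(n) and SE(log sigma) = 1/sqrt(2n), so the numerical result can be checked directly; a bootstrap would require refitting hundreds of resampled datasets, which is why the Hessian route is much faster.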
Numerical model of water flow in a fractured basalt vadose zone: Box Canyon Site, Idaho
NASA Astrophysics Data System (ADS)
Doughty, Christine
2000-12-01
A numerical model of a fractured basalt vadose zone has been developed on the basis of the conceptual model described by Faybishenko et al. [this issue]. The model has been used to simulate a ponded infiltration test in order to investigate infiltration through partially saturated fractured basalt. A key question addressed is how the fracture pattern geometry and fracture connectivity within a single basalt flow of the Snake River Plain basalt affect water infiltration. The two-dimensional numerical model extends from the ground surface to a perched water body 20 m below and uses an unconventional quasi-deterministic approach with explicit but highly simplified representation of major fractures and other important hydrogeologic features. The model adequately reproduces the majority of the field observations and provides insights into the infiltration process that cannot be obtained by data collection alone, demonstrating its value as a component of field studies.
New Equation of State Models for Hydrodynamic Applications
NASA Astrophysics Data System (ADS)
Young, David A.; Barbee, Troy W., III; Rogers, Forrest J.
1997-07-01
Accurate models of the equation of state of matter at high pressures and temperatures are increasingly required for hydrodynamic simulations. We have developed two new approaches to accurate EOS modeling: 1) ab initio phonons from electron band structure theory for condensed matter and 2) the ACTEX dense plasma model for ultrahigh pressure shocks. We have studied the diamond and high pressure phases of carbon with the ab initio model and find good agreement between theory and experiment for shock Hugoniots, isotherms, and isobars. The theory also predicts a comprehensive phase diagram for carbon. For ultrahigh pressure shock states, we have studied the comparison of ACTEX theory with experiments for deuterium, beryllium, polystyrene, water, aluminum, and silicon dioxide. The agreement is good, showing that complex multispecies plasmas are treated adequately by the theory. These models will be useful in improving the numerical EOS tables used by hydrodynamic codes.
He, Yujie; Yang, Jinyan; Zhuang, Qianlai; McGuire, A. David; Zhu, Qing; Liu, Yaling; Teskey, Robert O.
2014-01-01
Conventional Q10 soil organic matter decomposition models and more complex microbial models are available for making projections of future soil carbon dynamics. However, it is unclear (1) how well the conceptually different approaches can simulate observed decomposition and (2) to what extent the trajectories of long-term simulations differ when using the different approaches. In this study, we compared three structurally different soil carbon (C) decomposition models (one Q10 and two microbial models of different complexity), each with a one- and two-horizon version. The models were calibrated and validated using 4 years of measurements of heterotrophic soil CO2 efflux from trenched plots in a Dahurian larch (Larix gmelinii Rupr.) plantation. All models reproduced the observed heterotrophic component of soil CO2 efflux, but the trajectories of soil carbon dynamics differed substantially in 100 year simulations with and without warming and increased litterfall input, with the microbial models producing better agreement with observed changes in soil organic C in long-term warming experiments. Our results also suggest that both constant and varying carbon use efficiency are plausible when modeling future decomposition dynamics and that the use of a short-term (e.g., a few years) period of measurement is insufficient to adequately constrain model parameters that represent long-term responses of microbial thermal adaptation. These results highlight the need to reframe the representation of decomposition models and to constrain parameters with long-term observations and multiple data streams. We urge caution in interpreting future soil carbon responses derived from existing decomposition models because both conceptual and parameter uncertainties are substantial.
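The conventional Q10 formulation compared above is simply a first-order decay whose rate constant scales as Q10^((T − T_ref)/10). A generic sketch with hypothetical parameter values, not the calibrated models of the study:

```python
def q10_decomposition_rate(k_ref, temp_c, q10=2.0, t_ref=10.0):
    """First-order decomposition rate (per year) scaled by a Q10 response:
    the rate multiplies by q10 for every 10 degC above the reference."""
    return k_ref * q10 ** ((temp_c - t_ref) / 10.0)

def simulate_soil_c(c0, k_ref, annual_temps_c, litter_input=0.0, q10=2.0):
    """Annual time-step soil C pool under Q10-modulated first-order decay."""
    c = c0
    for t in annual_temps_c:
        c += litter_input - q10_decomposition_rate(k_ref, t, q10) * c
    return c
```

With Q10 = 2, a +5 °C warming raises the decay rate by a factor of sqrt(2), so a century-long warming run loses noticeably more soil C than the control; the microbial models in the study differ precisely in replacing this fixed temperature scaling with explicit microbial pools.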
Cognitive task load in a naval ship control centre: from identification to prediction.
Grootjen, M; Neerincx, M A; Veltman, J A
Deployment of information and communication technology will lead to further automation of control centre tasks and an increasing amount of information to be processed. A method for establishing adequate levels of cognitive task load for the operators in such complex environments has been developed. It is based on a model distinguishing three load factors: time occupied, task-set switching, and level of information processing. Application of the method resulted in eight scenarios for eight extremes of task load (i.e. low and high values for each load factor). These scenarios were performed by 13 teams in a high-fidelity control centre simulator of the Royal Netherlands Navy. The results show that the method provides good prediction of the task load that will actually appear in the simulator. The model allowed identification of under- and overload situations showing negative effects on operator performance corresponding to controlled experiments in a less realistic task environment. Tools proposed to keep the operator at an optimum task load are (adaptive) task allocation and interface support.
Tang, Jingchun; Lv, Honghong; Gong, Yanyan; Huang, Yao
2015-11-01
A graphene/biochar composite (G/BC) was synthesized via slow pyrolysis of graphene (G) pretreated wheat straw, and tested for the sorption characteristics and mechanisms of representative aqueous contaminants (phenanthrene and mercury). Structure and morphology analysis showed that G was coated on the surface of biochar (BC) mainly through π-π interactions, resulting in a larger surface area, more functional groups, greater thermal stability, and higher removal efficiency of phenanthrene and mercury compared to BC. A pseudo-second-order model adequately simulated the sorption kinetics, and the sorption isotherms of phenanthrene and mercury were simulated well by dual-mode and BET models, respectively. FTIR and SEM analysis suggested that partitioning and surface sorption were dominant mechanisms for phenanthrene sorption, and that surface complexation between mercury and C-O, C=C, -OH, and O=C-O functional groups was responsible for mercury removal. The results suggested that the G/BC composite is an efficient, economic, and environmentally friendly multifunctional adsorbent for environmental remediation. Copyright © 2015 Elsevier Ltd. All rights reserved.
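The pseudo-second-order kinetic model invoked above has the closed form q(t) = qe²kt/(1 + qe·k·t), and its parameters are conventionally estimated from the linearized plot of t/q versus t, whose slope is 1/qe and intercept 1/(k·qe²). A minimal Python sketch (our own illustration; the variable names are not from the paper):

```python
def pseudo_second_order(t, qe, k):
    """Sorbed amount q(t) from the integrated pseudo-second-order law
    dq/dt = k*(qe - q)**2, with equilibrium capacity qe and rate k."""
    return (qe ** 2 * k * t) / (1.0 + qe * k * t)

def linearized_params(times, q_vals):
    """Recover (qe, k) by least squares on the linearized form
    t/q = 1/(k*qe**2) + t/qe."""
    xs = list(times)
    ys = [t / q for t, q in zip(times, q_vals)]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    qe = 1.0 / slope              # slope = 1/qe
    k = slope ** 2 / intercept    # intercept = 1/(k*qe**2)
    return qe, k
```

Because noise-free data lie exactly on the linearized line, fitting synthetic data generated from the model recovers the original (qe, k) to machine precision, which is a convenient self-check before fitting real kinetics.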
Numerical simulations of inductive-heated float-zone growth
NASA Technical Reports Server (NTRS)
Chan, Y. T.; Choi, S. K.
1992-01-01
The present work provides an improved fluid flow and heat-transfer modeling of float-zone growth by introducing a RF heating model so that an ad hoc heating temperature profile is not necessary. Numerical simulations were carried out to study the high-temperature float-zone growth of titanium carbide single crystal. The numerical results showed that the thermocapillary convection occurring inside the molten zone tends to increase the convexity of the melt-crystal interface and decrease the maximum temperature of the molten zone, while the natural convection tends to reduce the stability of the molten zone by increasing its height. It was found that the increase of induced heating due to the increase of applied RF voltage is reduced by the decrease of zone diameter. Surface tension plays an important role in controlling the amount of induced heating. Finally, a comparison of the computed shape of the free surface with a digital image obtained during a growth run showed adequate agreement.
NASA Astrophysics Data System (ADS)
Han, Pingping; Zhang, Haitian; Chen, Lingqi; Zhang, Xiaoan
2018-01-01
Models of the doubly fed induction generator (DFIG) and its grid-side converter (GSC) are established under unbalanced grid conditions in DIgSILENT/PowerFactory. From the mathematical model, the vector equations of the positive- and negative-sequence voltages and currents are derived in the positive-sequence synchronously rotating d-q-0 reference frame, with the characteristics of the simulation software adequately taken into account. Moreover, by incorporating the national standard limits on unbalanced current, reference values for the GSC current components in the positive-sequence d-q-0 frame under unbalanced conditions can be obtained to improve the traditional GSC control. The simulation results indicate that the control strategy effectively suppresses the negative-sequence current and the double-frequency power ripple on the AC side of the GSC. The DC-bus voltage can be maintained constant, ensuring uninterrupted operation of the DFIG under unbalanced grid conditions.
Hetzroni, Orit E; Banin, Irit
2017-07-01
People with intellectual and developmental disabilities (IDD) often demonstrate difficulties in social skills. The purpose of this study was to examine the effects of a comprehensive intervention program on the acquisition of social skills among students with mild IDD. Single subject multiple baseline design across situations was used for teaching five school-age children with mild IDD social skills embedded in school-based situations. Results demonstrate that the intervention program that included video modelling and games embedded with group discussions and simulations increased the level and use of adequate social behaviours within the school's natural environment. Results demonstrate the unique attribution of a comprehensive interactive program for acquisition and transfer of participants' social skills such as language pragmatics and social rules within the school environment. Group discussions and simulations were beneficial and enabled both group and personalized instruction through the unique application of the program designed for the study. © 2016 John Wiley & Sons Ltd.
Pair Potential That Reproduces the Shape of Isochrones in Molecular Liquids.
Veldhorst, Arno A; Schrøder, Thomas B; Dyre, Jeppe C
2016-08-18
Many liquids have curves (isomorphs) in their phase diagrams along which structure, dynamics, and some thermodynamic quantities are invariant in reduced units. A substantial part of their phase diagrams is thus effectively one dimensional. The shapes of these isomorphs are described by a material-dependent function of density, h(ρ), which for real liquids is well approximated by a power law, ρ^γ. However, in simulations, a power law is not adequate when density changes are large; typical models, such as Lennard-Jones liquids, show that γ(ρ) ≡ d ln h(ρ)/d ln ρ is a decreasing function of density. This article presents results from computer simulations using a new pair potential that diverges at a nonzero distance and can be tuned to give a more realistic shape of γ(ρ). Our results indicate that the finite size of molecules is an important factor to take into account when modeling liquids over a large density range.
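The density-scaling exponent γ(ρ) ≡ d ln h(ρ)/d ln ρ can be evaluated numerically for any candidate h(ρ). The sketch below is our own illustration: the two-term form h(ρ) = ρ⁴ − 0.5ρ² is an assumed Lennard-Jones-like shape (valid only where h > 0), used here solely to demonstrate that γ decreases with density, in contrast to the constant exponent of a pure power law.

```python
import math

def gamma_of_rho(h, rho, eps=1e-6):
    """gamma(rho) = d ln h / d ln rho, via a central difference in ln(rho)."""
    lr = math.log(rho)
    return (math.log(h(math.exp(lr + eps))) -
            math.log(h(math.exp(lr - eps)))) / (2 * eps)

# a pure power law h(rho) = rho**gamma gives a constant exponent
power_law_gamma = gamma_of_rho(lambda r: r ** 5.2, 2.0)

# an assumed Lennard-Jones-like h(rho) gives an exponent that
# decreases toward 4 as density grows
lj_like = lambda r: r ** 4 - 0.5 * r ** 2
g_low = gamma_of_rho(lj_like, 1.0)
g_high = gamma_of_rho(lj_like, 2.0)
```

Here g_low > g_high > 4: the logarithmic derivative of the two-term form falls toward the exponent of its dominant high-density term, which is the qualitative behavior the abstract attributes to Lennard-Jones liquids.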
Cadmium and Zinc Adsorption by Acric Soils
NASA Astrophysics Data System (ADS)
da Silva, Luiz Gabriel; Colato, Alexandre; Casagrande, José Carlos; Soares, Marcio Roberto
2017-04-01
Acrudox soils are highly weathered soils, characterized by an accumulation of iron and aluminum oxides and hydroxides. These soils are present in extensive productive regions of the state of São Paulo. This work aimed to verify the adequacy of the constant capacitance model in describing the adsorption of cadmium and zinc in an Anionic Rhodic Acrudox, an Anionic Xanthic Acrudox, and a Rhodic Hapludalf. The chemical, mineralogical, and physical attributes of these soils were determined in the 0-20 cm and 20-40 cm layers. Adsorption data for cadmium and zinc were also previously determined for samples of both layers of each soil. Cadmium and zinc (5 mg dm-3) were applied to 2.0 g of soil over a wide pH range (3 to 10) to build adsorption envelopes at three ionic strengths. The constant capacitance model was adequate to simulate the adsorption of zinc and cadmium. It was not possible to distinguish between measurements and simulations for the two soil layers studied, nor between the three concentrations of background electrolyte.
Mesoscopic modeling and parameter estimation of a lithium-ion battery based on LiFePO4/graphite
NASA Astrophysics Data System (ADS)
Jokar, Ali; Désilets, Martin; Lacroix, Marcel; Zaghib, Karim
2018-03-01
A novel numerical model for simulating the behavior of lithium-ion batteries based on LiFePO4 (LFP)/graphite is presented. The model is based on the modified Single Particle Model (SPM) coupled to a mesoscopic approach for the LFP electrode. The model comprises one representative spherical particle as the graphite electrode, and N LFP units as the positive electrode. All the SPM equations are retained to model the negative electrode performance. The mesoscopic model rests on non-equilibrium thermodynamic conditions and uses a non-monotonic open circuit potential for each unit. A parameter estimation study is also carried out to identify all the parameters needed for the model. The unknown parameters are the solid diffusion coefficient of the negative electrode (Ds,n), the reaction-rate constant of the negative electrode (Kn), the negative and positive electrode porosities (εn and εp), the initial State-Of-Charge of the negative electrode (SOCn,0), the initial partial composition of the LFP units (yk,0), the minimum and maximum resistance of the LFP units (Rmin and Rmax), and the solution resistance (Rcell). The results show that the mesoscopic model can successfully simulate the electrochemical behavior of lithium-ion batteries at low and high charge/discharge rates. The model also adequately describes the lithiation/delithiation of the LFP particles; however, it is computationally expensive compared to macro-based models.
NASA Astrophysics Data System (ADS)
Nadeem, Imran; Formayer, Herbert
2016-11-01
A suite of high-resolution (10 km) simulations was performed with the International Centre for Theoretical Physics (ICTP) Regional Climate Model (RegCM3) to study the effect of various lateral boundary conditions (LBCs), domain size, and intermediate domains on simulated precipitation over the Great Alpine Region. The boundary conditions used were the ECMWF ERA-Interim Reanalysis with grid spacing 0.75∘, the ECMWF ERA-40 Reanalysis with grid spacings of 1.125∘ and 2.5∘, and finally the 2.5∘ NCEP/DOE AMIP-II Reanalysis. The model was run in one-way nesting mode with direct nesting of the high-resolution RCM (horizontal grid spacing Δx = 10 km) within the driving reanalysis, with one intermediate-resolution nest (Δx = 30 km) between the high-resolution RCM and the reanalysis forcings, and also with two intermediate-resolution nests (Δx = 90 km and Δx = 30 km) for simulations forced with LBCs of 2.5∘ resolution. Additionally, the impact of domain size was investigated. The results of the multiple simulations were evaluated using different analysis techniques, e.g., the Taylor diagram and a newly defined statistical parameter, the Skill-Score, for evaluating daily precipitation simulated by the model. It has been found that domain size has the major impact on the results, while different resolutions and versions of the LBCs, e.g., 1.125∘ ERA-40 and 0.75∘ ERA-Interim, do not produce significantly different results. It is also noticed that direct nesting with a reasonable domain size seems to be the most adequate method for reproducing precipitation over complex terrain, while introducing intermediate-resolution nests seems to deteriorate the results.
Susong, D.; Marks, D.; Garen, D.
1999-01-01
Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in the mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.
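One of the simpler methods listed above, elevational gradients, amounts to projecting a station observation onto a digital elevation model with a linear lapse rate. A minimal sketch (our illustration only; the −6.5 °C km⁻¹ default is the standard-atmosphere lapse rate, not a value from the study):

```python
def temperature_surface(station_temp_c, station_elev_m, dem, lapse_rate=-0.0065):
    """Distribute a station air temperature over a DEM (elevations in metres)
    using a constant linear lapse rate in degC per metre."""
    return [[station_temp_c + lapse_rate * (z - station_elev_m) for z in row]
            for row in dem]
```

In practice the lapse rate itself would be estimated from the available stations at each time step, and residuals from the elevational trend are what detrended kriging then interpolates spatially.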
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin Zhoumeng; Interdisciplinary Toxicology Program, University of Georgia, Athens, GA 30602; Fisher, Jeffrey W.
Atrazine (ATR) is a chlorotriazine herbicide that is widely used and relatively persistent in the environment. In laboratory rodents, excessive exposure to ATR is detrimental to the reproductive, immune, and nervous systems. To better understand the toxicokinetics of ATR and to fill the need for a mouse model, a physiologically based pharmacokinetic (PBPK) model for ATR and its main chlorotriazine metabolites (Cl-TRIs) desethyl atrazine (DE), desisopropyl atrazine (DIP), and didealkyl atrazine (DACT) was developed for the adult male C57BL/6 mouse. Taking advantage of all relevant and recently made available mouse-specific data, a flow-limited PBPK model was constructed. The ATR and DACT sub-models included blood, brain, liver, kidney, richly and slowly perfused tissue compartments, as well as plasma protein binding and red blood cell binding, whereas the DE and DIP sub-models were constructed as simple five-compartment models. The model adequately simulated plasma levels of ATR and Cl-TRIs and urinary dosimetry of Cl-TRIs at four single oral dose levels (250, 125, 25, and 5 mg/kg). Additionally, the model adequately described the dose dependency of brain and liver ATR and DACT concentrations. Cumulative urinary DACT amounts were accurately predicted across a wide dose range, suggesting the model's potential use for extrapolation to human exposures by performing reverse dosimetry. The model was validated using previously reported data for plasma ATR and DACT in mice and rats. Overall, besides being the first mouse PBPK model for ATR and its Cl-TRIs, this model, by analogy, provides insights into tissue dosimetry for rats. The model could be used in tissue dosimetry prediction and as an aid in the exposure assessment to this widely used herbicide.
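The flow-limited assumption used in the model above has a standard generic form: each tissue's concentration obeys dC/dt = Q(C_art − C/P)/V, where Q is the tissue blood flow, V the tissue volume, and P the tissue:blood partition coefficient. A single-compartment sketch (our illustration with arbitrary numbers, not the published mouse parameterization):

```python
def flow_limited_dcdt(c_art, c_tis, q, v, p):
    """Rate of change of tissue concentration in a flow-limited PBPK
    compartment: blood flow q delivers arterial blood at c_art and
    carries away venous blood at c_tis / p (partition coefficient p)."""
    return q * (c_art - c_tis / p) / v

def simulate_tissue(c_art, q, v, p, dt=0.01, steps=1000):
    """Forward-Euler integration at constant arterial concentration."""
    c = 0.0
    for _ in range(steps):
        c += dt * flow_limited_dcdt(c_art, c, q, v, p)
    return c
```

At steady state the tissue concentration approaches P × C_art; a full PBPK model couples many such compartments through a shared arterial/venous blood pool, adds metabolism and excretion terms, and here also binding to plasma protein and red blood cells.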
Application of four watershed acidification models to Batchawana Watershed, Canada.
Booty, W G; Bobba, A G; Lam, D C; Jeffries, D S
1992-01-01
Four watershed acidification models (TMWAM, ETD, ILWAS, and RAINS) are reviewed and a comparison of model performance is presented for a common watershed. The models have been used to simulate the dynamics of water quantity and quality at Batchawana Watershed, Canada, a sub-basin of the Turkey Lakes Watershed. The computed results are compared with observed data for a four-year period (Jan. 1981-Dec. 1984). The models exhibit a significant range in their ability to simulate the daily, monthly and seasonal changes present in the observed data. Monthly watershed outflows and lake chemistry predictions are compared to observed data. pH and ANC are the only two chemical parameters common to all four models. Coefficient of efficiency (E), linear (r) and rank (R) correlation coefficients, and regression slope (s) are used to compare the goodness of fit of the simulated and observed data. The ILWAS, TMWAM and RAINS models performed very well in predicting the monthly flows, with values of r and R of approximately 0.98. The ETD model also showed strong correlations, with linear (r) and rank (R) correlation coefficients of 0.896 and 0.892, respectively. The results of the analyses showed that TMWAM provided the best simulation of pH (E=0.264, r=0.648), which is slightly better than ETD (E=0.240, r=0.549), and much better than ILWAS (E=-2.965, r=0.293) and RAINS (E=-4.004, r=0.473). ETD was found to be superior in predicting ANC (E=0.608, r=0.781) as compared to TMWAM (E=0.340, r=0.598), ILWAS (E=0.275, r=0.442), and RAINS (E=-1.048, r=0.356). The TMWAM model adequately simulated SO4 over the four-year period (E=0.423, r=0.682), but the ETD (E=-0.904, r=0.274), ILWAS (E=-4.314, r=0.488), and RAINS (E=-6.479, r=0.126) models all performed worse than the benchmark model (mean observed value).
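The goodness-of-fit statistics used above are standard: a negative coefficient of efficiency E (Nash-Sutcliffe) means the model predicts worse than simply using the observed mean, which is the "benchmark model" the abstract refers to. A minimal sketch of how E and r are computed, with illustrative data rather than values from the study:

```python
import numpy as np

def coefficient_of_efficiency(obs, sim):
    """Nash-Sutcliffe coefficient of efficiency E.
    E = 1 is a perfect fit; E < 0 means the model performs worse
    than predicting the observed mean for every time step."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pearson_r(obs, sim):
    """Linear correlation coefficient r."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.corrcoef(obs, sim)[0, 1]

# Illustrative monthly values (not from the study)
obs = [5.1, 4.3, 6.8, 7.2, 5.9]
sim = [5.0, 4.6, 6.5, 7.8, 5.5]
print(coefficient_of_efficiency(obs, sim), pearson_r(obs, sim))
```

Note that r measures only linear association, so a model can score high r while being badly biased; E penalizes bias as well, which is why the two statistics can disagree in the comparison above.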
NASA Astrophysics Data System (ADS)
Magid, S. I.; Arkhipova, E. N.; Kulichikhin, V. V.; Zagretdinov, I. Sh.
2016-12-01
Technogenic and anthropogenic accident rates at hazardous industrial objects (HIOs) in the Russian Federation are considered. The accident rate at HIOs, including power plants and network enterprises, is determined by anthropogenic causes, the so-called "human factor," in 70% of all cases. Analysis of incidents caused by personnel has shown that errors occur most often during accident situations, start-ups, shutdowns, routine switching, and other actions on equipment controls. It is demonstrated that the skills needed to perform typical and routine switching can be learned, within limits, on real operating equipment, whereas handling emergency and accident situations can be learned only with the help of modern training simulators built on information technologies. Problems arising in the following processes are considered: development of the mathematical and software support of modern training equipment, associated in one way or another with adequate modeling of the power-generating object in accordance with human-operator specifics; modeling and/or simulation of the corresponding control and management systems; organization of the education system (the instructor's functions and the education and methodological resources (EMR)); and organization of a scalable, adaptable program-technical platform for modeling the main and secondary functions of the training simulator. It is concluded that the systemic principle of necessity and sufficiency in the applied methodology makes it possible to reproduce all technological characteristics of the equipment and its topological completeness, as well as to achieve an acceptable computation rate. The initial "rough" models of processes in the equipment are based on normative techniques, with equation coefficients likewise taken from normative materials.
Then, the synthesis of "fine" models has been carried out following the global practice in modeling and training simulator building, i.e., verification of "rough" models based on experimental data available to the developer. Finally, the last stage of modeling is adaptation (validation) of "fine" models to the prototype object using experimental data on the power-generating object and tests of these models with operating and maintaining personnel. These stages determine adequacy of the used mathematical model for a particular training simulator and, thus, its compliance with such modern scientific criteria as objectivity and experimental verifiability.
Flexible Environments for Grand-Challenge Simulation in Climate Science
NASA Astrophysics Data System (ADS)
Pierrehumbert, R.; Tobis, M.; Lin, J.; Dieterich, C.; Caballero, R.
2004-12-01
Current climate models are monolithic codes, generally in Fortran, aimed at high-performance simulation of the modern climate. Though they adequately serve their designated purpose, they present major barriers to application in other problems. Tailoring them to paleoclimate or planetary simulations, for instance, takes months of work. Theoretical studies, where one may want to remove selected processes or break feedback loops, are similarly hindered. Further, current climate models are of little value in education, since the implementation of textbook concepts and equations in the code is obscured by technical detail. The Climate Systems Center at the University of Chicago seeks to overcome these limitations by bringing modern object-oriented design into the business of climate modeling. Our ultimate goal is to produce an end-to-end modeling environment capable of configuring anything from a simple single-column radiative-convective model to a full 3-D coupled climate model using a uniform, flexible interface. Technically, the modeling environment is implemented as a Python-based software component toolkit: key number-crunching procedures are implemented as discrete, compiled-language components 'glued' together and co-ordinated by Python, combining the high performance of compiled languages with the flexibility and extensibility of Python. We are incrementally working towards this final objective along a series of distinct, complementary lines.
We will present an overview of these activities, including PyOM, a Python-based finite-difference ocean model allowing run-time selection of different Arakawa grids and physical parameterizations; CliMT, an atmospheric modeling toolkit providing a library of 'legacy' radiative, convective and dynamical modules which can be knitted into dynamical models, and PyCCSM, a version of NCAR's Community Climate System Model in which the coupler and run-control architecture are re-implemented in Python, augmenting its flexibility and adaptability.
NASA Astrophysics Data System (ADS)
Sivandran, G.; Bisht, G.; Ivanov, V. Y.; Bras, R. L.
2008-12-01
A coupled, dynamic vegetation and hydrologic model, tRIBS+VEGGIE, was applied to the semiarid Walnut Gulch Experimental Watershed in Arizona. The physically based, distributed nature of the coupled model allows for parameterization and simulation of watershed vegetation-water-energy dynamics on timescales varying from hourly to interannual. The model also allows for explicit spatial representation of processes that vary with complex topography, such as lateral redistribution of moisture and partitioning of radiation with respect to aspect and slope. Model parameterization and forcing were conducted using readily available databases for topography, soil types, and land use cover, as well as data from a network of meteorological stations located within the Walnut Gulch watershed. In order to test the performance of the model, three sets of simulations were conducted over an 11-year period from 1997 to 2007. Two simulations focus on heavily instrumented nested watersheds within the Walnut Gulch basin: (i) the Kendall watershed, which is dominated by annual grasses; and (ii) the Lucky Hills watershed, which is dominated by a mixture of deciduous and evergreen shrubs. The third set of simulations covers the entire Walnut Gulch Watershed. Model validation and performance were evaluated in relation to three broad categories: (i) energy balance components, for which the network of meteorological stations was used to validate the key energy fluxes; (ii) water balance components, for which the network of flumes, rain gauges, and soil moisture stations installed within the watershed was utilized to validate the manner in which the model partitions moisture; and (iii) vegetation dynamics, for which remote sensing products from MODIS were used to validate spatial and temporal vegetation dynamics.
Model results demonstrate satisfactory spatial and temporal agreement with observed data, giving confidence that key ecohydrological processes can be adequately represented for future applications of tRIBS+VEGGIE in regional modeling of land-atmosphere interactions.
Overview of the Mathematical and Empirical Receptor Models Workshop (Quail Roost II)
NASA Astrophysics Data System (ADS)
Stevens, Robert K.; Pace, Thompson G.
On 14-17 March 1982, the U.S. Environmental Protection Agency sponsored the Mathematical and Empirical Receptor Models Workshop (Quail Roost II) at the Quail Roost Conference Center, Rougemont, NC. Thirty-five scientists were invited to participate. The objective of the workshop was to document and compare results of source apportionment analyses of simulated and real aerosol data sets. The simulated data set was developed by scientists from the National Bureau of Standards. It consisted of elemental mass data generated using a dispersion model that simulated transport of aerosols from a variety of sources to a receptor site. The real data set contained the mass, elemental, and ionic species concentrations of samples obtained in 18 consecutive 12-h sampling periods in Houston, TX. Some participants performed additional analyses of the Houston filters by X-ray powder diffraction, scanning electron microscopy, or light microscopy. Ten groups analyzed these data sets using a variety of modeling procedures. The results of the modeling exercises were evaluated and structured in a manner that permitted model intercomparisons. 
The major conclusions and recommendations derived from the intercomparisons were: (1) using aerosol elemental composition data, receptor models can resolve major emission sources, but additional analyses (including light microscopy and X-ray diffraction) significantly increase the number of sources that can be resolved; (2) simulated data sets that contain up to 6 dissimilar emission sources need to be generated, so that different receptor models can be adequately compared; (3) source apportionment methods need to be modified to incorporate a means of apportioning such aerosol species as sulfate and nitrate, formed from SO2 and NOx, respectively, because current models tend to resolve particles into chemical species rather than to deduce their sources; and (4) a source signature library may need to be compiled for each airshed in order to improve the resolving capabilities of receptor models.
Tran, Chung Duc; Ibrahim, Rosdiazli; Asirvadam, Vijanth Sagayan; Saad, Nordin; Sabo Miya, Hassan
2018-04-01
The emergence of wireless technologies such as WirelessHART and ISA100 Wireless for deployment at industrial process plants has urged the need for research and development in wireless control. This is in view of the fact that recent application has mainly been in the monitoring domain, due to a lack of confidence in the control aspect. WirelessHART has an edge over its counterpart as it is based on the successful wired HART protocol, with over 30 million devices as of 2009. Recent works on control have primarily focused on maintaining the traditional PID control structure, which has proven inadequate for the wireless environment. In contrast, Internal Model Control (IMC), a promising technique for delay compensation, disturbance rejection, and setpoint tracking, has not been investigated in the context of WirelessHART. Therefore, this paper discusses control design using the IMC approach with a focus on wireless processes. The simulation and experimental results using a real-time WirelessHART hardware-in-the-loop simulator (WH-HILS) indicate that the proposed approach is more robust to delay variation of the network than the PID.
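As a rough illustration of the IMC structure discussed here (a generic textbook sketch, not the authors' implementation), the code below simulates IMC on a first-order-plus-dead-time process. The controller Q(s) = (τs+1)/(K(λs+1)) inverts the delay-free part of the plant through a first-order filter, and only the plant-model mismatch is fed back, which is what gives IMC its delay tolerance. All gains, time constants, and the filter constant λ are assumed illustrative values.

```python
import numpy as np

def imc_fopdt_step(K=2.0, tau=5.0, theta=2.0, lam=3.0,
                   dt=0.01, t_end=60.0, setpoint=1.0):
    """Discrete-time sketch of Internal Model Control for a
    first-order-plus-dead-time (FOPDT) process G(s)=K e^{-theta s}/(tau s+1).
    The internal model runs in parallel with the plant, and only the
    mismatch (y - ym) is fed back."""
    n = int(t_end / dt)
    delay = int(theta / dt)
    y = ym = 0.0                    # plant and internal-model outputs
    qx = 0.0                        # controller filter state
    ubuf = np.zeros(delay + 1)      # dead-time buffer
    ys = np.empty(n)
    for i in range(n):
        e = setpoint - (y - ym)     # feedback of model mismatch only
        # realize Q(s) = (tau s + 1)/(K (lam s + 1)) on input e
        qx += dt * (e - qx) / lam
        u = (tau * (e - qx) / lam + qx) / K
        ubuf = np.roll(ubuf, 1); ubuf[0] = u
        ud = ubuf[-1]               # input delayed by theta
        y += dt * (K * ud - y) / tau    # plant
        ym += dt * (K * ud - ym) / tau  # model matches plant exactly here
        ys[i] = y
    return ys

resp = imc_fopdt_step()
print(resp[-1])  # settles near the setpoint despite the dead time
```

With a perfect internal model the mismatch signal is zero, so the loop behaves like open-loop filtered inversion and the output converges to the setpoint regardless of the dead time; model error or network delay variation shows up only through the mismatch channel, which the filter λ detunes gracefully.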
NASA Astrophysics Data System (ADS)
Sumarsono, Danardono A.; Ibrahim, Fera; Santoso, Satria P.; Sari, Gema P.
2018-02-01
A gene gun is a mechanical device that has been used to deliver DNA vaccines into cells and tissues by increasing the uptake of plasmid DNA, so that a high immune response can be generated with a smaller amount of DNA. The nozzle is an important part of the gene gun: it accelerates DNA in particle form with a gas flow to reach momentum adequate to enter the epidermis of human skin and elicit an immune response. We developed new nozzle designs for the gene gun to make DNA uptake in vaccination more efficient. We used Computational Fluid Dynamics (CFD) in Autodesk® Simulation 2015 to simulate the static fluid pressure, the velocity contour of the supersonic wave, and the parametric distance, in order to predict the accuracy of the new nozzle. The results showed that the nozzle could create a shockwave at a parametric distance to the object of 4 to 5 cm using fluid pressures between 0.8 and 1.2 MPa. This indicates the possibility that the DNA particles could penetrate mammalian skin. In future research, this new nozzle model could be considered for development of the main component of a DNA delivery system for in vivo vaccination.
Pasipanodya, Jotam; Gumbo, Tawanda
2011-01-01
Antimicrobial pharmacokinetic-pharmacodynamic (PK/PD) science and clinical trial simulations have not been adequately applied to the design of doses and dose schedules of antituberculosis regimens because many researchers are skeptical about their clinical applicability. We compared findings of preclinical PK/PD studies of current first-line antituberculosis drugs to findings from several clinical publications that included microbiologic outcome and pharmacokinetic data or had a dose-scheduling design. Without exception, the antimicrobial PK/PD parameters linked to optimal effect were similar in preclinical models and in tuberculosis patients. Thus, exposure-effect relationships derived in the preclinical models can be used in the design of optimal antituberculosis doses, by incorporating population pharmacokinetics of the drugs and MIC distributions in Monte Carlo simulations. When this has been performed, doses and dose schedules of rifampin, isoniazid, pyrazinamide, and moxifloxacin with the potential to shorten antituberculosis therapy have been identified. In addition, different susceptibility breakpoints than those in current use have been identified. These steps outline a more rational approach than that of current methods for designing regimens and predicting outcome so that both new and older antituberculosis agents can shorten therapy duration.
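The Monte Carlo step described, combining population pharmacokinetics with an MIC distribution to find doses that attain a PK/PD target, can be sketched as below. Every parameter, distribution, and the AUC/MIC target are illustrative placeholders, not values from the cited tuberculosis studies.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_attainment(dose_mg, n=100_000, target_auc_mic=25.0):
    """Monte Carlo sketch of probability of target attainment (PTA):
    sample clearance from a log-normal population distribution and
    MIC from a discrete distribution, then report the fraction of
    virtual patients whose AUC/MIC ratio meets the target."""
    cl = rng.lognormal(mean=np.log(8.0), sigma=0.3, size=n)   # L/h
    mic = rng.choice([0.06, 0.12, 0.25, 0.5], size=n,
                     p=[0.2, 0.4, 0.3, 0.1])                  # mg/L
    auc = dose_mg / cl                                        # mg*h/L
    return np.mean(auc / mic >= target_auc_mic)

for dose in (100, 200, 400):
    print(dose, target_attainment(dose))
```

In practice one picks the smallest dose whose PTA exceeds a chosen threshold (e.g., 90%) across the MIC distribution, which is also how the susceptibility breakpoint falls out: it is the highest MIC at which the standard dose still attains the target.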
The Navy/NASA Engine Program (NNEP89): A user's manual
NASA Technical Reports Server (NTRS)
Plencner, Robert M.; Snyder, Christopher A.
1991-01-01
An engine simulation computer code called NNEP89 was written to perform 1-D steady state thermodynamic analysis of turbine engine cycles. Through a very flexible method of input, a set of standard components is connected at execution time to simulate almost any turbine engine configuration that the user could imagine. The code was used to simulate a wide range of engine cycles, from turboshafts and turboprops to air turborockets and supersonic cruise variable cycle engines. Off-design performance is calculated through the use of component performance maps. A chemical equilibrium model is incorporated to adequately predict chemical dissociation as well as to model virtually any fuel. NNEP89 is written in standard FORTRAN77 with clear structured programming and extensive internal documentation. The standard FORTRAN77 programming allows it to be installed on most mainframe computers and workstations without modification. The NNEP89 code was derived from the Navy/NASA Engine Program (NNEP). NNEP89 provides many improvements and enhancements to the original NNEP code and incorporates features which make it easier for novice users. This is a comprehensive user's guide for the NNEP89 code.
Hullett, Bruce; Salman, Sam; O'Halloran, Sean J; Peirce, Deborah; Davies, Kylie; Ilett, Kenneth F
2012-05-01
Parecoxib is a cyclooxygenase-2 selective inhibitor used in management of postoperative pain in adults. This study aimed to provide pediatric pharmacokinetic information for parecoxib and its active metabolite valdecoxib. Thirty-eight children undergoing surgery received parecoxib (1 mg/kg IV to a maximum of 40 mg) at induction of anesthesia, and plasma samples were collected for drug measurement. Population pharmacokinetic parameters were estimated using nonlinear mixed effects modeling. The area under the valdecoxib concentration-time curve and the time above the cyclooxygenase-2 in vitro 50% inhibitory concentration for free valdecoxib were simulated. A three-compartment model best represented parecoxib disposition, whereas one compartment was adequate for valdecoxib. Age was linearly correlated with parecoxib clearance (5.0% increase/yr). There was a sigmoid relationship between age and both valdecoxib clearance and distribution volume, with a time to 50% maturation of 87 weeks postmenstrual age for both. In simulations using allometrically based doses, the 90% prediction interval of the area under the valdecoxib concentration-time curve in children 2-12.7 yr included the mean for adults given 40 mg parecoxib IV. Simulated free valdecoxib plasma concentration remained above the in vitro 50% inhibitory concentration for more than 12 h. In children younger than 2 yr, a dose reduction is likely required due to ongoing metabolic maturation. The final pharmacokinetic model gave a robust representation of parecoxib and valdecoxib disposition. The area under the valdecoxib concentration-time curve was similar to that in adults (40 mg), and simulated free valdecoxib concentration remained above the cyclooxygenase-2 in vitro 50% inhibitory concentration for at least 12 h.
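The sigmoid maturation relationship reported here is commonly written as a Hill function of postmenstrual age. The sketch below uses the abstract's TM50 of 87 weeks, but the Hill coefficient is an assumed illustrative value, not one estimated in the study.

```python
def maturation_fraction(pma_weeks, tm50=87.0, hill=3.0):
    """Sigmoid (Hill) maturation model commonly used in pediatric
    population PK: fraction of mature (adult) clearance as a function
    of postmenstrual age (PMA). tm50 = 87 weeks is from the abstract;
    hill = 3 is an assumed illustrative shape parameter."""
    return pma_weeks ** hill / (tm50 ** hill + pma_weeks ** hill)

# Half-maturation at TM50 by construction
print(maturation_fraction(87.0))            # → 0.5
print(round(maturation_fraction(174.0), 3)) # → 0.889
```

Scaling an adult clearance by this fraction (on top of allometric weight scaling) is the standard way such models justify dose reductions in children whose PMA is well below TM50, consistent with the abstract's conclusion for children under 2 yr.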
NASA Astrophysics Data System (ADS)
Zhang, Z.; Zimmermann, N. E.; Poulter, B.
2015-12-01
Simulation of the spatial-temporal dynamics of wetlands is key to understanding the role of wetland biogeochemistry under past and future climate variability. Hydrologic inundation models such as TOPMODEL are based on a fundamental parameter known as the compound topographic index (CTI) and provide a computationally cost-efficient approach to simulating global wetland dynamics. However, large discrepancies remain among the implementations of TOPMODEL in land-surface models (LSMs), and thus in their performance against observations. This study describes new improvements to the TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl DGVM, and quantifies uncertainties by comparing the effects of three digital elevation model products (HYDRO1k, GMTED, and HydroSHEDS) of different spatial resolution and accuracy on simulated inundation dynamics. We found that calibrating TOPMODEL with a benchmark dataset helps to predict the seasonal and interannual variations of wetlands, and improves the spatial distribution of wetlands toward consistency with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy among the three DEM products for capturing the spatio-temporal dynamics of wetlands. This study demonstrates the feasibility of capturing the spatial heterogeneity of inundation and estimating seasonal and interannual variations in wetlands by coupling a hydrological module in LSMs with appropriate benchmark datasets. It additionally highlights the importance of an adequate understanding of topographic indices for simulating global wetlands, and shows an opportunity to converge wetland estimates in LSMs by identifying the uncertainty associated with existing wetland products.
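The compound topographic index at the heart of TOPMODEL is computed from upslope contributing area and local slope, CTI = ln(a / tan β). A minimal sketch, with hypothetical cell values rather than any of the DEM products compared in the study:

```python
import numpy as np

def compound_topographic_index(upslope_area_m2, cell_size_m, slope_rad):
    """Compound topographic (wetness) index, CTI = ln(a / tan(beta)),
    where a is the specific catchment area (upslope contributing area
    per unit contour width) and beta is the local slope. A small floor
    on tan(beta) avoids division by zero on flat cells."""
    a = upslope_area_m2 / cell_size_m
    tan_b = np.maximum(np.tan(slope_rad), 1e-6)
    return np.log(a / tan_b)

# Illustrative cells: a valley bottom (large upslope area, gentle
# slope) versus a ridge (small area, steep slope)
area = np.array([5.0e5, 1.0e3])                 # m^2
slope = np.radians(np.array([0.5, 20.0]))       # degrees -> radians
cti = compound_topographic_index(area, 30.0, slope)
print(cti)
```

High-CTI cells (flat, convergent terrain) are the ones TOPMODEL flags as saturation-prone, so the DEM's resolution and flow-routing scheme directly shape the simulated wetland extent, which is exactly the sensitivity the study quantifies.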
NASA Technical Reports Server (NTRS)
Kanakidou, Maria; Crutzen, Paul J.; Zimmermann, Peter H.
1994-01-01
As a consequence of the non-linear behavior of the chemistry of the atmosphere and because of the short lifetime of nitrogen oxides (NO(x)), two-dimensional models do not give an adequate description of the production and destruction rates of NO(x) and their effects on the distributions of the concentration of ozone and hydroxyl radical. In this study, we use a three-dimensional model to evaluate the contribution of increasing NO(x) emissions from industrial activity and biomass burning to changes in the chemical composition of the troposphere. By comparing results obtained from longitudinally-uniform and longitudinally-varying emissions of NO(x), we demonstrate that the geographical representation of the NO(x) emissions is crucial in simulating tropospheric chemistry.
NASA Technical Reports Server (NTRS)
Gallegos, J. J.
1978-01-01
A multi-objective test program was conducted at the NASA/JSC Radiant Heat Test Facility in which an aluminum skin/stringer test panel insulated with FRSI (Flexible Reusable Surface Insulation) was subjected to 24 simulated Space Shuttle Orbiter ascent/entry heating cycles, with a cold soak after the 10th and 20th cycles. A two-dimensional thermal math model was developed and utilized to predict the thermal performance of the FRSI. Results are presented which indicate that the modeling techniques and property values are adequate for predicting peak structure temperatures and entry thermal responses of an FRSI-covered aluminum structure from both ambient and cold-soak conditions.
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm, which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of those estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples from both flight and simulated data.
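Step (1), the Kalman filter smoother, can be illustrated with a minimal scalar forward filter followed by a Rauch-Tung-Striebel backward pass. This is a generic textbook sketch under an assumed random-walk state model, not the rotorcraft implementation described above.

```python
import numpy as np

def kalman_rts_smoother(z, a=1.0, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter with a Rauch-Tung-Striebel
    smoother pass. State model: x[k+1] = a*x[k] + w, w ~ N(0, q);
    measurement: z[k] = x[k] + v, v ~ N(0, r)."""
    n = len(z)
    xf = np.empty(n); pf = np.empty(n)   # filtered estimates
    xp = np.empty(n); pp = np.empty(n)   # one-step predictions
    x, p = x0, p0
    for k in range(n):
        xp[k], pp[k] = a * x, a * a * p + q        # predict
        kgain = pp[k] / (pp[k] + r)                # Kalman gain
        x = xp[k] + kgain * (z[k] - xp[k])         # measurement update
        p = (1 - kgain) * pp[k]
        xf[k], pf[k] = x, p
    xs = xf.copy()                                 # backward RTS pass
    for k in range(n - 2, -1, -1):
        c = pf[k] * a / pp[k + 1]
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
    return xf, xs

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 0.1, 200))   # synthetic random-walk state
z = truth + rng.normal(0.0, 1.0, 200)          # noisy measurements
xf, xs = kalman_rts_smoother(z)
print(np.mean((xs - truth) ** 2) < np.mean((z - truth) ** 2))
```

The forward pass uses only past data; the backward pass refines each estimate with future data as well, which is what makes a smoother preferable to a plain filter for post-flight data processing.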
Simplified Modeling of Oxidation of Hydrocarbons
NASA Technical Reports Server (NTRS)
Bellan, Josette; Harstad, Kenneth
2008-01-01
A method of simplified computational modeling of oxidation of hydrocarbons is undergoing development. This is one of several developments needed to enable accurate computational simulation of turbulent, chemically reacting flows. At present, accurate computational simulation of such flows is difficult or impossible in most cases because (1) the numbers of grid points needed for adequate spatial resolution of turbulent flows in realistically complex geometries are beyond the capabilities of typical supercomputers now in use and (2) the combustion of typical hydrocarbons proceeds through decomposition into hundreds of molecular species interacting through thousands of reactions. The combination of detailed reaction-rate models with the fundamental flow equations therefore yields flow models that are computationally prohibitive, and a reduction of at least an order of magnitude in the dimension of the reaction kinetics is a prerequisite for feasible computational simulation of turbulent, chemically reacting flows. In the present method of simplified modeling, all molecular species involved in the oxidation of hydrocarbons are classified as either light or heavy; heavy molecules are those having 3 or more carbon atoms. The light molecules are not subject to meaningful decomposition, and the heavy molecules are considered to decompose into only 13 specified constituent radicals, a few of which are listed in the table. One constructs a reduced-order model, suitable for use in estimating the release of heat and the evolution of temperature in combustion, from a base comprising the 13 constituent radicals plus a total of 26 other species that include the light molecules and related light free radicals. Then, rather than following all possible species through their reaction coordinates, one follows only the reduced set of reaction coordinates of the base.
The behavior of the base was examined in test computational simulations of the combustion of heptane in a stirred reactor at various initial pressures ranging from 0.1 to 6 MPa. Most of the simulations were performed for stoichiometric mixtures; some were performed for fuel/oxygen mole ratios of 1/2 and 2.
Progress report on daily flow-routing simulation for the Carson River, California and Nevada
Hess, G.W.
1996-01-01
A physically based flow-routing model using Hydrological Simulation Program-FORTRAN (HSPF) was constructed for modeling streamflow in the Carson River at daily time intervals as part of the Truckee-Carson Program of the U.S. Geological Survey (USGS). Daily streamflow data for water years 1978-92 for the mainstem river, tributaries, and irrigation ditches from the East Fork Carson River near Markleeville and West Fork Carson River at Woodfords down to the mainstem Carson River at Fort Churchill upstream from Lahontan Reservoir were obtained from several agencies and were compiled into a comprehensive data base. No previous physically based flow-routing model of the Carson River has incorporated multi-agency streamflow data into a single data base and simulated flow at a daily time interval. Where streamflow data were unavailable or incomplete, hydrologic techniques were used to estimate some flows. For modeling purposes, the Carson River was divided into six segments, which correspond to those used in the Alpine Decree that governs water rights along the river. Hydraulic characteristics were defined for 48 individual stream reaches based on cross-sectional survey data obtained from field surveys and previous studies. Simulation results from the model were compared with available observed and estimated streamflow data. Model testing demonstrated that hydraulic characteristics of the Carson River are adequately represented in the models for a range of flow regimes. Differences between simulated and observed streamflow result mostly from inadequate data characterizing inflow and outflow from the river. Because irrigation return flows are largely unknown, irrigation return flow percentages were used as a calibration parameter to minimize differences between observed and simulated streamflows. 
Observed and simulated streamflow were compared for daily periods for the full modeled length of the Carson River and for two major subreaches modeled with more detailed input data. Hydrographs and statistics presented in this report describe these differences. A sensitivity analysis of four estimated components of the hydrologic system evaluated which components were significant in the model. Estimated ungaged tributary streamflow is not a significant component of the model during low runoff, but is significant during high runoff. The sensitivity analysis indicates that changes in the estimated irrigation diversions and estimated return flows create noticeable changes in the statistics. The modeling for this study is preliminary. Results of the model are constrained by the current availability and accuracy of observed hydrologic data. Several inflows and outflows of the Carson River are not described by time-series data and therefore are not represented in the model.
Development of PBPK Models for Gasoline in Adult and ...
Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of
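The competitive metabolic inhibition assumed when combining the hydrocarbon sub-models corresponds to the standard Michaelis-Menten form in which each constituent's Km is scaled by the occupancy of the co-substrates. A minimal sketch with hypothetical constants (none taken from the gasoline models described):

```python
def mm_rate_competitive(c_i, vmax_i, km_i, others):
    """Michaelis-Menten metabolism rate for constituent i with
    competitive inhibition by the other mixture constituents:
        v_i = Vmax_i * C_i / (Km_i * (1 + sum_j C_j/Km_j) + C_i)
    `others` is a list of (C_j, Km_j) pairs. Illustrative only."""
    inhibition = sum(c / km for c, km in others)
    return vmax_i * c_i / (km_i * (1.0 + inhibition) + c_i)

# Hypothetical values: metabolism of one hydrocarbon slows when a
# second substrate competes for the same hepatic enzyme
alone   = mm_rate_competitive(1.0, 10.0, 0.5, [])
with_co = mm_rate_competitive(1.0, 10.0, 0.5, [(2.0, 0.4)])
print(alone, with_co)   # with_co < alone
```

This interaction is why mixture exposures can produce higher blood concentrations of each constituent than the same exposure to that constituent alone, the behavior the mixture model is evaluated against.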
A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises.
Marquis-Favre, Catherine; Morel, Julien
2015-07-21
Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle for noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight the potential to enhance the prediction of total annoyance. The work is based on a simulated environment experiment in which participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of some acoustical factors, non-acoustical factors, and potential interactions between the combined noise sources. The second was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term seemed to be the best predictors for the two combined noise sources under study, even with high differences in sound pressure level. Thus, these results reinforce the need to focus on perceptual models and to improve the prediction of partial annoyances.
One- and Two-Equation Models to Simulate Ion Transport in Charged Porous Electrodes
Gabitto, Jorge; Tsouris, Costas
2018-01-19
Energy storage in porous capacitor materials, capacitive deionization (CDI) for water desalination, capacitive energy generation, geophysical applications, and removal of heavy ions from wastewater streams are some examples of processes where understanding of ionic transport processes in charged porous media is very important. In this work, one- and two-equation models are derived to simulate ionic transport processes in heterogeneous porous media comprising two different pore sizes. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A two-step volume averaging technique is used to derive the averaged transport equations for multi-ionic systems without any further assumptions, such as thin electrical double layers or Donnan equilibrium. A comparison between both models is presented. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. An approximate analytical procedure is proposed to solve the closure problems. Numerical and theoretical calculations show that the approximate analytical procedure yields adequate solutions. Lastly, a theoretical analysis shows that the value of interphase pseudo-transport coefficients determines which model to use.
Wind Farm LES Simulations Using an Overset Methodology
NASA Astrophysics Data System (ADS)
Ananthan, Shreyas; Yellapantula, Shashank
2017-11-01
Accurate simulation of wind farm wakes under realistic atmospheric inflow conditions and complex terrain requires modeling a wide range of length and time scales. The computational domain can span several kilometers while requiring mesh resolutions of O(10⁻⁶) to adequately resolve the boundary layer on the blade surface. Overset mesh methodology offers an attractive option to address the disparate range of length scales; it allows embedding body-conforming meshes around turbine geometries within nested wake-capturing meshes of varying resolutions necessary to accurately model the inflow turbulence and the resulting wake structures. Dynamic overset hole-cutting algorithms permit relative mesh motion that allows this nested mesh structure to track unsteady inflow direction changes, turbine control changes (yaw and pitch), and wake propagation. An LES model with overset mesh for localized mesh refinement is used to analyze wind farm wakes and performance, and the results are compared with local mesh refinements using non-conformal (hanging node) unstructured meshes. Turbine structures will be modeled using both actuator line approaches and fully-resolved structures to test the efficacy of overset methods for wind farm applications. Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations - the Office of Science and the National Nuclear Security Administration.
King, David A.; Bachelet, Dominique M.; Symstad, Amy J.; Ferschweiler, Ken; Hobbins, Michael
2014-01-01
The potential evapotranspiration (PET) that would occur with unlimited plant access to water is a central driver of simulated plant growth in many ecological models. PET is influenced by solar and longwave radiation, temperature, wind speed, and humidity, but it is often modeled as a function of temperature alone. This approach can cause biases in projections of future climate impacts in part because it confounds the effects of warming due to increased greenhouse gases with the warming that would be caused by increased radiation from the sun. We developed an algorithm for linking PET to extraterrestrial solar radiation (incoming top-of-atmosphere solar radiation), as well as temperature and atmospheric water vapor pressure, and incorporated this algorithm into the dynamic global vegetation model MC1. We tested the new algorithm for the Northern Great Plains, USA, whose remaining grasslands are threatened by continuing woody encroachment. Both the new and the standard temperature-dependent MC1 algorithm adequately simulated current PET, as compared to the more rigorous PenPan model of Rotstayn et al. (2006). However, compared to the standard algorithm, the new algorithm projected a much more gradual increase in PET over the 21st century for three contrasting future climates. This difference led to lower simulated drought effects and hence greater woody encroachment with the new algorithm, illustrating the importance of more rigorous calculations of PET in ecological models dealing with climate change.
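The extraterrestrial radiation such an algorithm keys on can be computed from latitude and day of year alone. A sketch of the conventional FAO-56 formulation (Allen et al., 1998); this is the standard textbook formula, not necessarily the exact implementation in MC1:

```python
import math

def extraterrestrial_radiation(lat_deg, doy):
    """Daily top-of-atmosphere solar radiation in MJ m-2 day-1,
    following the standard FAO-56 formulation."""
    gsc = 0.0820                                 # solar constant, MJ m-2 min-1
    phi = math.radians(lat_deg)                  # latitude, rad
    dr = 1 + 0.033 * math.cos(2 * math.pi * doy / 365)        # inverse relative Earth-Sun distance
    delta = 0.409 * math.sin(2 * math.pi * doy / 365 - 1.39)  # solar declination, rad
    # sunset hour angle, clamped for polar day/night
    ws = math.acos(max(-1.0, min(1.0, -math.tan(phi) * math.tan(delta))))
    return (24 * 60 / math.pi) * gsc * dr * (
        ws * math.sin(phi) * math.sin(delta)
        + math.cos(phi) * math.cos(delta) * math.sin(ws)
    )
```

At 45°N near the summer solstice this yields roughly 42 MJ m-2 day-1, and about 38 MJ m-2 day-1 at the equator at equinox, matching published FAO-56 tables.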
The intensity dependence of lesion position shift during focused ultrasound surgery.
Meaney, P M; Cahill, M D; ter Haar, G R
2000-03-01
Knowledge of the spatial distribution of intensity loss from an ultrasonic beam is critical for predicting lesion formation in focused ultrasound (US) surgery (FUS). To date, most models have used linear propagation models to predict intensity profiles required to compute the temporally varying temperature distributions used to compute thermal dose contours. These are used to predict the extent of thermal damage. However, these simulations fail to describe adequately the abnormal lesion formation behaviour observed during ex vivo experiments in cases for which the transducer drive levels are varied over a wide range. In such experiments, the extent of thermal damage has been observed to move significantly closer to the transducer with increased transducer drive levels than would be predicted using linear-propagation models. The first set of simulations described herein use the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear propagation model with the parabolic approximation for highly focused US waves to demonstrate that both the peak intensity and the lesion positions do, indeed, move closer to the transducer. This illustrates that, for accurate modelling of heating during FUS, nonlinear effects should be considered. Additionally, a first order approximation has been employed that attempts to account for the abnormal heat deposition distributions that accompany high transducer drive level FUS exposures where cavitation and boiling may be present. The results of these simulations are presented. It is suggested that this type of approach may be a useful tool in understanding thermal damage mechanisms.
Evaluation of TOPLATS on three Mediterranean catchments
NASA Astrophysics Data System (ADS)
Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel
2016-08-01
Physically based hydrological models are complex tools that provide a complete description of the different processes occurring on a catchment. The TOPMODEL-based Land-Atmosphere Transfer Scheme (TOPLATS) simulates water and energy balances at different time steps, in both lumped and distributed modes. In order to gain insight into the behavior of TOPLATS and its applicability in different conditions, a detailed evaluation needs to be carried out. This study aimed to develop a complete evaluation of TOPLATS including: (1) a detailed review of previous research works using this model; (2) a sensitivity analysis (SA) of the model with two contrasted methods (Morris and Sobol) of different complexity; (3) a 4-step calibration strategy based on a multi-start Powell optimization algorithm; and (4) an analysis of the influence of simulation time step (hourly vs. daily). The model was applied to three catchments of varying size (La Tejeria, Cidacos and Arga), located in Navarre (Northern Spain), and characterized by different levels of Mediterranean climate influence. The Morris and Sobol methods showed very similar results, identifying the Brooks-Corey pore size distribution index (B), bubbling pressure (ψc) and hydraulic conductivity decay (f) as the three most influential parameters in TOPLATS overall. After calibration and validation, adequate streamflow simulations were obtained in the two wettest catchments, but the driest (Cidacos) gave poor results in validation, due to the large climatic variability between calibration and validation periods. To overcome this issue, an alternative random and discontinuous method of cal/val period selection was implemented, improving model results.
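The Morris method used here screens parameters by elementary effects: one-at-a-time perturbations whose mean absolute effect (mu*) ranks parameter influence. A deliberately crude sketch of the idea (function and parameter scaling are ours, not TOPLATS's; production work would use a proper sampling design such as SALib's):

```python
import random

def morris_mu_star(f, n_params, n_traj=50, delta=0.25, seed=0):
    """Crude Morris screening: mu* = mean |elementary effect| per parameter.
    Inputs are assumed scaled to [0, 1]; f maps a list of floats to a float."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(n_params)]
        y = f(x)
        order = list(range(n_params))
        rng.shuffle(order)                 # random one-at-a-time order
        for i in order:
            x[i] += delta                  # perturb parameter i only
            y_new = f(x)
            effects[i].append(abs(y_new - y) / delta)
            y = y_new                      # next effect measured from the moved point
    return [sum(e) / len(e) for e in effects]
```

For a linear test function the recovered mu* values equal the coefficient magnitudes exactly, which is the sanity check usually run before applying the screen to an expensive model.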
NASA Astrophysics Data System (ADS)
Khuat Duy, B.; Archambeau, P.; Dewals, B. J.; Erpicum, S.; Pirotton, M.
2009-04-01
Following recurrent inundation problems on the Berwinne catchment, in Belgium, a combined hydrologic and hydrodynamic study has been carried out in order to find adequate flood mitigation solutions. Thanks to detailed 2D simulations, the effectiveness of the solutions can be assessed not only in terms of discharge and height reductions in the river, but also in terms of other aspects such as the reduction of inundated surface area and the decrease in inundated buildings and roads. The study is carried out in successive phases. First, the hydrological runoffs are generated using a physically based and spatially distributed multi-layer model solving depth-integrated equations for overland flow, subsurface flow and baseflow. Real flood events are simulated using rainfall series collected at 8 stations (over 20 years of available data). The hydrological inputs are routed through the river network (and through the sewage network if relevant) with the 1D component of the modelling system, which solves the Saint-Venant equations for both free-surface and pressurized flows in a unified way. On the main part of the river, the measured river cross-sections are included in the modelling, and existing structures along the river (such as bridges, sluices or pipes) are modelled explicitly with specific cross sections. Two gauging stations with over 15 years of continuous measurements allow the calibration of both the hydrologic and hydrodynamic models. Second, the flood mitigation solutions are tested in the simulations in the case of an extreme flooding event, and their effects are assessed using detailed 2D simulations on a few selected sensitive areas. The digital elevation model comes from an airborne laser survey with a spatial resolution of 1 point per square metre and is completed in the river bed with a bathymetry interpolated from cross-section data. The upstream discharge is extracted from the 1D simulation for the selected rainfall event.
This methodology made it possible to assess the suggested solutions against multiple effectiveness criteria and therefore provides valuable support for decision makers.
Simulation of the Impact of Climate Variability on Malaria Transmission in the Sahel
NASA Astrophysics Data System (ADS)
Bomblies, A.; Eltahir, E.; Duchemin, J.
2007-12-01
A coupled hydrology and entomology model for simulation of malaria transmission and malaria-transmitting mosquito population dynamics is presented. Model development and validation are carried out using field data and observations collected at Banizoumbou and Zindarou, Niger, spanning three wet seasons, from 2005 through 2007. The primary model objective is the accurate determination of climate variability effects on village scale malaria transmission. Malaria transmission dependence on climate variables is highly nonlinear and complex. Temperature and humidity affect mosquito longevity, temperature controls parasite development rates in the mosquito as well as subadult mosquito development rates, and precipitation determines the formation and persistence of adequate breeding pools. Moreover, unsaturated zone hydrology influences overland flow, and climate controlled evapotranspiration rates and root zone uptake therefore also influence breeding pool formation. High resolution distributed hydrologic simulation allows representation of the small-scale ephemeral pools that constitute the primary habitat of Anopheles gambiae mosquitoes, the dominant malaria vectors in the Niger Sahel. Remotely sensed soil type, vegetation type, and microtopography rasters are used to assign the distributed parameter fields for simulation of the land surface hydrologic response to precipitation and runoff generation. Predicted runoff from each cell flows overland and into topographic depressions, with explicit representation of infiltration and evapotranspiration. The model's entomology component interacts with simulated pools. Subadult (aquatic stage) mosquito breeding is simulated in the pools, and water temperature dependent stage advancement rates regulate adult mosquito emergence into the model domain. Once emerged, adult mosquitoes are tracked as independent individual agents that interact with their immediate environment.
Attributes relevant to malaria transmission such as gonotrophic state, infected and infectious states, age, and location relative to human population are tracked for each individual. The model operates at a resolution consistent with the characteristic scale of relevant ecological processes. Microhabitat exploitation and spatial structure of the mosquito population surrounding villages is reproduced in this manner. The resulting coupled model predicts not only malaria transmission's response to interannual climate variability, but can also evaluate land use change effects on malaria transmission. The late Professor Andrew Spielman of the Harvard School of Public Health provided medical entomology expertise and was a part of this effort.
Workshop on Production and Uses of Simulated Lunar Materials
NASA Technical Reports Server (NTRS)
1991-01-01
A workshop entitled "Production and Uses of Simulated Lunar Materials" was convened to define the need for simulated lunar materials and examine related issues in support of extended space exploration and development. Lunar samples are a national treasure and cannot be sacrificed in sufficient quantity to test lunar resource utilization processes adequately. Hence, the workshop focused on a detailed examination of the variety of potential simulants and the methods for their production.
Orr, Mark G; Thrush, Roxanne; Plaut, David C
2013-01-01
The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual's pre-existing belief structure and the beliefs of others in the individual's social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics.
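The constraint-satisfaction mechanism at the heart of this model can be illustrated with a toy binary network (this is a generic Hopfield-style sketch, not the authors' trained connectionist model): beliefs that support one another settle into a mutually consistent state given external evidence, so the resulting "intention" reflects both the belief structure and the immediate input.

```python
def settle(weights, external, state, n_sweeps=10):
    """Asynchronous +1/-1 updates: each unit aligns with its net input
    (weighted support from other beliefs plus external evidence).
    With symmetric weights this descends a Hopfield-style energy."""
    n = len(state)
    for _ in range(n_sweeps):
        for i in range(n):
            net = sum(weights[i][j] * state[j] for j in range(n) if j != i) + external[i]
            state[i] = 1 if net >= 0 else -1
    return state
```

With belief 0 and belief 1 mutually supportive, belief 2 opposed to belief 0, and a small external push on belief 0, the network settles into the consistent pattern [+1, +1, -1].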
Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz
2014-01-01
Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412
Yi, Shuhua; McGuire, A. David; Harden, Jennifer; Kasischke, Eric; Manies, Kristen L.; Hinzman, Larry; Liljedahl, Anna K.; Randerson, J.; Liu, Heping; Romanovsky, Vladimir E.; Marchenko, Sergey S.; Kim, Yongwon
2009-01-01
Soil temperature and moisture are important factors that control many ecosystem processes. However, interactions between soil thermal and hydrological processes are not adequately understood in cold regions, where the frozen soil, fire disturbance, and soil drainage play important roles in controlling interactions among these processes. These interactions were investigated with a new ecosystem model framework, the dynamic organic soil version of the Terrestrial Ecosystem Model, that incorporates an efficient and stable numerical scheme for simulating soil thermal and hydrological dynamics within soil profiles that contain a live moss horizon, fibrous and amorphous organic horizons, and mineral soil horizons. The performance of the model was evaluated for a tundra burn site that had both preburn and postburn measurements, two black spruce fire chronosequences (representing space-for-time substitutions in well and intermediately drained conditions), and a poorly drained black spruce site. Although space-for-time substitutions present challenges in model-data comparison, the model demonstrates substantial ability in simulating the dynamics of evapotranspiration, soil temperature, active layer depth, soil moisture, and water table depth in response to both climate variability and fire disturbance. Several differences between model simulations and field measurements identified key challenges for evaluating/improving model performance that include (1) proper representation of discrepancies between air temperature and ground surface temperature; (2) minimization of precipitation biases in the driving data sets; (3) improvement of the measurement accuracy of soil moisture in surface organic horizons; and (4) proper specification of organic horizon depth/properties, and soil thermal conductivity.
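The kind of stable numerical scheme such a framework relies on can be illustrated with the simplest explicit finite-difference step for 1D soil heat conduction. This sketch assumes uniform diffusivity; the model described above additionally handles layered moss/organic/mineral horizons and freezing:

```python
def heat_step(t, alpha, dz, dt):
    """One explicit finite-difference step of 1D heat conduction
    dT/dt = alpha * d2T/dz2 with fixed (Dirichlet) boundary temperatures.
    Stable only for dt <= dz**2 / (2 * alpha)."""
    new = t[:]
    for i in range(1, len(t) - 1):
        new[i] = t[i] + alpha * dt / dz ** 2 * (t[i - 1] - 2 * t[i] + t[i + 1])
    return new
```

A linear temperature profile is a steady state and passes through unchanged, while a heat pulse spreads to its neighbours; implicit or mixed schemes are preferred in practice precisely to escape the explicit stability limit on dt.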
NASA Astrophysics Data System (ADS)
Viebahn, Jan; von der Heydt, Anna S.; Dijkstra, Henk A.
2014-05-01
During the past 65 million years (Ma), Earth's climate has undergone a major change from warm 'greenhouse' to colder 'icehouse' conditions with extensive ice sheets in the polar regions of both hemispheres. The Eocene-Oligocene (~34 Ma) and Oligocene-Miocene (~23 Ma) boundaries reflect major transitions in Cenozoic global climate change. Proposed mechanisms of these transitions include reorganization of ocean circulation due to critical gateway opening/deepening, changes in atmospheric CO2 concentration, and feedback mechanisms related to land-ice formation. A long-standing hypothesis is that the formation of the Antarctic Circumpolar Current due to opening/deepening of Southern Ocean gateways led to glaciation of the Antarctic continent. This hypothesis remains controversial, and its assessment via coupled climate model simulations depends crucially on the spatial resolution of the ocean component. More precisely, only high-resolution modeling of the turbulent ocean circulation is capable of adequately describing reorganizations in the ocean flow field and related changes in turbulent heat transport. In this study, results of a high-resolution (0.1° horizontally) realistic global ocean model simulation with a closed Drake Passage are presented for the first time. Changes in global ocean temperatures, heat transport, and ocean circulation (e.g., Meridional Overturning Circulation and Antarctic Coastal Current) are established by comparison with an open Drake Passage high-resolution reference simulation. Finally, corresponding low-resolution simulations are also analyzed. The results highlight the essential impact of the ocean eddy field in palaeoclimatic change.
Morrison, Abigail; Straube, Sirko; Plesser, Hans Ekkehard; Diesmann, Markus
2007-01-01
Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
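The exact subthreshold integration exploited above is easy to state for the leaky integrate-and-fire model with constant input: the grid-to-grid propagator is exact for any step size, where forward Euler is not. A sketch with a generic LIF model (not any simulator's specific implementation):

```python
import math

def lif_exact(v0, i_ext, tau, r, h, n_steps):
    """Exact grid-based integration of the LIF subthreshold dynamics
    tau * dV/dt = -V + R*I for constant input I: the propagator
    V <- V*exp(-h/tau) + R*I*(1 - exp(-h/tau)) is exact for any step h."""
    p = math.exp(-h / tau)
    v = v0
    for _ in range(n_steps):
        v = v * p + r * i_ext * (1.0 - p)
    return v

def lif_euler(v0, i_ext, tau, r, h, n_steps):
    """Forward-Euler reference, accurate only for small h."""
    v = v0
    for _ in range(n_steps):
        v += h / tau * (-v + r * i_ext)
    return v
```

Against the analytic solution V(T) = R*I*(1 - exp(-T/tau)) from rest, the exact propagator matches to machine precision even with a coarse grid, while Euler with the same step carries a visible error; interpolating threshold crossings between grid points then recovers precise spike times on top of this exact subthreshold scheme.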
NASA Astrophysics Data System (ADS)
Manessa, Masita Dwi Mandini; Kanno, Ariyo; Sagawa, Tatsuyuki; Sekine, Masahiko; Nurdin, Nurjannah
2018-01-01
Lyzenga's multispectral bathymetry formula has attracted considerable interest due to its simplicity. However, there has been little discussion of the effect that variation in optical conditions and bottom types, which commonly appears in coral reef environments, has on this formula's results. The present paper evaluates Lyzenga's multispectral bathymetry formula for a variety of optical conditions and bottom types. A noiseless dataset of above-water remote sensing reflectance from WorldView-2 images over Case-1 shallow coral reef water is simulated using a radiative transfer model. The simulation-based assessment shows that Lyzenga's formula performs robustly, with adequate generality and good accuracy, under a range of conditions. As expected, the influence of bottom type on depth estimation accuracy is far greater than the influence of other optical parameters, i.e., chlorophyll-a concentration and solar zenith angle. Further, based on the simulation dataset, Lyzenga's formula estimates depth when the bottom type is unknown almost as accurately as when the bottom type is known. This study provides a better understanding of Lyzenga's multispectral bathymetry formula under various optical conditions and bottom types.
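Lyzenga's formula estimates depth as an affine function of log-transformed bottom-reflected reflectance, z = a0 + sum_j a_j * ln(R_j - Rinf_j). A self-contained sketch fitting the two-band version to synthetic data (the band constants, attenuation coefficients and bottom albedos in the test are invented for illustration, not taken from the paper):

```python
import math

def solve3(m, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    b = b[:]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 3):
                m[r][c] -= f * m[col][c]
            b[r] -= f * b[col]
    a = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        a[r] = (b[r] - sum(m[r][c] * a[c] for c in range(r + 1, 3))) / m[r][r]
    return a

def lyzenga_features(r1, r2, rinf1, rinf2):
    """X_j = ln(R_j - Rinf_j): log of the depth-dependent part of reflectance."""
    return (1.0, math.log(r1 - rinf1), math.log(r2 - rinf2))

def fit_lyzenga(samples, rinf1, rinf2):
    """Least-squares fit of z = a0 + a1*X1 + a2*X2 over (R1, R2, z) samples
    via the normal equations."""
    xs = [lyzenga_features(r1, r2, rinf1, rinf2) for r1, r2, _ in samples]
    zs = [z for _, _, z in samples]
    ata = [[sum(x[i] * x[j] for x in xs) for j in range(3)] for i in range(3)]
    atz = [sum(x[i] * z for x, z in zip(xs, zs)) for i in range(3)]
    return solve3(ata, atz)

def predict_depth(a, r1, r2, rinf1, rinf2):
    x = lyzenga_features(r1, r2, rinf1, rinf2)
    return sum(ai * xi for ai, xi in zip(a, x))
```

Because the two log-features respond to depth and to bottom albedo with different weights, a dataset containing two bottom types lets the regression cancel the albedo term, which is the essence of the formula's robustness to unknown bottom type noted in the abstract.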
Numerical Investigation of Near-Field Plasma Flows in Magnetic Nozzles
NASA Technical Reports Server (NTRS)
Sankaran, Kamesh; Polzin, Kurt A.
2009-01-01
The development and application of a multidimensional numerical simulation code for investigating near-field plasma processes in magnetic nozzles are presented. The code calculates the time-dependent evolution of all three spatial components of both the magnetic field and velocity in a plasma flow, and includes physical models of relevant transport phenomena. It has been applied to an investigation of the behavior of plasma flows found in high-power thrusters, employing a realistic magnetic nozzle configuration. Simulation of a channel-flow case where the flow was super-Alfvenic has demonstrated that such a flow produces adequate back-emf to significantly alter the shape of the total magnetic field, preventing the flow from curving back to the magnetic field coil in the near-field region. Results from this simulation can be insightful in predicting far-field behavior and can be used as a set of self-consistent boundary conditions for far-field simulations. Future investigations will focus on cases where the inlet flow is sub-Alfvenic and where the flow is allowed to freely expand in the radial direction once it is downstream of the coil.
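Whether the flow is super- or sub-Alfvénic is judged against the Alfvén speed. A minimal sketch of that criterion (the plasma values in the test are illustrative, not taken from the paper):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def alfven_speed(b_tesla, n_m3, ion_mass_kg):
    """Alfven speed v_A = B / sqrt(mu0 * rho) for a plasma of ion number
    density n and ion mass m (electron mass contribution neglected)."""
    rho = n_m3 * ion_mass_kg
    return b_tesla / math.sqrt(MU0 * rho)

def alfven_mach(v_flow, b_tesla, n_m3, ion_mass_kg):
    """Flow is super-Alfvenic when M_A = v / v_A > 1, the regime in which
    the flow's back-emf can significantly distort the applied field."""
    return v_flow / alfven_speed(b_tesla, n_m3, ion_mass_kg)
```

For example, with B = 0.1 T, n = 1e20 m-3 and an argon-like ion mass, v_A is a few tens of km/s, so a 50 km/s exhaust would be super-Alfvénic.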
Brilli, Lorenzo; Bechini, Luca; Bindi, Marco; Carozzi, Marco; Cavalli, Daniele; Conant, Richard; Dorich, Cristopher D; Doro, Luca; Ehrhardt, Fiona; Farina, Roberta; Ferrise, Roberto; Fitton, Nuala; Francaviglia, Rosa; Grace, Peter; Iocola, Ileana; Klumpp, Katja; Léonard, Joël; Martin, Raphaël; Massad, Raia Silvia; Recous, Sylvie; Seddaiu, Giovanna; Sharp, Joanna; Smith, Pete; Smith, Ward N; Soussana, Jean-Francois; Bellocchi, Gianni
2017-11-15
Biogeochemical simulation models are important tools for describing and quantifying the contribution of agricultural systems to C sequestration and GHG source/sink status. The abundance of simulation tools developed over recent decades, however, creates a difficulty because predictions from different models show large variability. Discrepancies between the conclusions of different modelling studies are often ascribed to differences in the physical and biogeochemical processes incorporated in equations of C and N cycles and their interactions. Here we review the literature to determine the state of the art in modelling agricultural (crop and grassland) systems. In order to carry out this study, we selected the range of biogeochemical models used by the CN-MIP consortium of FACCE-JPI (http://www.faccejpi.com): APSIM, CERES-EGC, DayCent, DNDC, DSSAT, EPIC, PaSim, RothC and STICS. In our analysis, these models were assessed for the quality and comprehensiveness of underlying processes related to pedo-climatic conditions and management practices, but also with respect to time and space of application, and for their accuracy in multiple contexts. Overall, ill-defined pedo-climatic conditions emerged as a possible cause of unsatisfactory model performance (46.2%), followed by limitations in the algorithms simulating the effects of management practices (33.1%). The multiplicity of scales in both time and space is a fundamental feature, which explains the remaining weaknesses (i.e. 20.7%). Innovative aspects have been identified for future development of C and N models. They include the explicit representation of soil microbial biomass to drive soil organic matter turnover, the effect of N shortage on SOM decomposition, improvements related to the production and consumption of gases, and adequate simulation of gas transport in soil.
On these bases, the assessment of trends and gaps in the modelling approaches currently employed to represent biogeochemical cycles in crop and grassland systems appears to be an essential step for future research. Copyright © 2017 Elsevier B.V. All rights reserved.
Gaudart, Jean; Touré, Ousmane; Dessay, Nadine; Dicko, Alassane; Ranque, Stéphane; Forest, Loic; Demongeot, Jacques; Doumbo, Ogobara K
2009-01-01
Background The risk of Plasmodium falciparum infection is variable over space and time and this variability is related to environmental variability. Environmental factors affect the biological cycle of both vector and parasite. Despite this strong relationship, environmental effects have rarely been included in malaria transmission models. Remote sensing data on the environment were incorporated into a temporal model of transmission, to forecast the evolution of malaria epidemiology, in a locality of the Sudanese savannah area. Methods A dynamic cohort was constituted in June 1996 and followed up until June 2001 in the locality of Bancoumana, Mali. The 15-day composite vegetation index (NDVI), derived from satellite imagery series (NOAA) from July 1981 to December 2006, was used as remote sensing data. The statistical relationship between NDVI and incidence of P. falciparum infection was assessed by ARIMA analysis. ROC analysis provided an NDVI value for the prediction of an increase in incidence of parasitaemia. Malaria transmission was modelled using an SIRS-type model, adapted to Bancoumana's data. Environmental factors influenced vector mortality and aggressiveness, as well as the length of the gonotrophic cycle. NDVI observations from 1981 to 2001 were used for the simulation of the extrinsic variable of a hidden Markov chain model. Observations from 2002 to 2006 served as external validation. Results The seasonal pattern of P. falciparum incidence was significantly explained by NDVI, with a delay of 15 days (p = 0.001). An NDVI threshold of 0.361 (p = 0.007) provided a Diagnostic Odds Ratio (DOR) of 2.64 (CI95% [1.26;5.52]). The deterministic transmission model, with stochastic environmental factor, predicted an endemo-epidemic pattern of malaria infection. The incidences of parasitaemia were adequately modelled, using the observed NDVI as well as the NDVI simulations. Transmission patterns were thus modelled and the observed values adequately predicted.
The error parameters were smallest for a monthly model of environmental changes. Conclusion Remote-sensed data were coupled with field study data in order to drive a malaria transmission model. Several studies have shown that the NDVI presents significant correlations with climate variables, such as precipitation, particularly in Sudanese savannah environments. A non-linear model combining environmental variables, predisposition factors and transmission patterns can be used for community-level risk evaluation. PMID:19361335
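The SIRS structure adapted here can be sketched in a few lines. The rates below are illustrative constants, whereas in the study they are driven by NDVI-derived environmental factors:

```python
def sirs_step(s, i, r, beta, gamma, omega, dt):
    """One explicit-Euler step of a minimal SIRS model (population fractions):
    S -> I at rate beta*S*I, I -> R at rate gamma*I,
    R -> S (loss of immunity) at rate omega*R."""
    ds = (-beta * s * i + omega * r) * dt
    di = (beta * s * i - gamma * i) * dt
    dr = (gamma * i - omega * r) * dt
    return s + ds, i + di, r + dr

def simulate_sirs(beta, gamma, omega, i0=0.01, dt=0.1, n_steps=5000):
    """Integrate from an almost fully susceptible population."""
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(n_steps):
        s, i, r = sirs_step(s, i, r, beta, gamma, omega, dt)
    return s, i, r
```

With constant rates and R0 = beta/gamma > 1 the system relaxes to the endemic equilibrium S* = 1/R0, I* = omega*(1 - 1/R0)/(omega + gamma); letting the rates vary seasonally with an environmental driver such as NDVI instead produces the endemo-epidemic pattern described in the abstract.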