NASA Astrophysics Data System (ADS)
Delon, C.; Mougin, E.; Serça, D.; Grippa, M.; Hiernaux, P.; Diawara, M.; Galy-Lacaux, C.; Kergoat, L.
2014-08-01
This work is an attempt to provide seasonal variation of biogenic NO emission fluxes in a Sahelian rangeland in Mali (Agoufou, 15.34° N, 1.48° W) for years 2004, 2005, 2006, 2007 and 2008. Indeed, NO is one of the most important precursors for tropospheric ozone, and the contribution of the Sahel region to NO emissions is no longer considered negligible. The link between NO production in the soil and NO release to the atmosphere is investigated in this study, by taking into account vegetation litter production and degradation, microbial processes in the soil, emission fluxes, and environmental variables influencing these processes, using a coupled vegetation-litter decomposition-emission model. This model includes the Sahelian-Transpiration-Evaporation-Productivity (STEP) model for the simulation of herbaceous, tree leaf and fecal masses, the GENDEC model (GENeral DEComposition) for the simulation of the buried litter decomposition, and the NO emission model for the simulation of the NO flux to the atmosphere. Physical parameters (soil moisture and temperature, wind speed, sand percentage) which affect substrate diffusion and oxygen supply in the soil and influence the microbial activity, and biogeochemical parameters (pH and fertilization rate related to N content) are necessary to simulate the NO flux. The reliability of the simulated parameters is checked, in order to assess the robustness of the simulated NO flux. The simulated yearly average NO flux ranges from 0.69 to 1.09 kg(N) ha-1 yr-1, and the wet season average ranges from 1.16 to 2.08 kg(N) ha-1 yr-1. These results are of the same order as previous measurements made in several sites where the vegetation and the soil are comparable to the ones in Agoufou. This coupled vegetation-litter decomposition-emission model could be generalized at the scale of the Sahel region, and provide information where few data are available.
NASA Astrophysics Data System (ADS)
Delon, C.; Mougin, E.; Serça, D.; Grippa, M.; Hiernaux, P.; Diawara, M.; Galy-Lacaux, C.; Kergoat, L.
2015-01-01
This work is an attempt to provide seasonal variation of biogenic NO emission fluxes in a Sahelian rangeland in Mali (Agoufou, 15.34° N, 1.48° W) for years 2004-2008. Indeed, NO is one of the most important precursors for tropospheric ozone, and the contribution of the Sahel region to NO emissions is no longer considered negligible. The link between NO production in the soil and NO release to the atmosphere is investigated in this study, by taking into account vegetation litter production and degradation, microbial processes in the soil, emission fluxes, and environmental variables influencing these processes, using a coupled vegetation-litter decomposition-emission model. This model includes the Sahelian-Transpiration-Evaporation-Productivity (STEP) model for the simulation of herbaceous, tree leaf and fecal masses, the GENDEC model (GENeral DEComposition) for the simulation of the buried litter decomposition and microbial dynamics, and the NO emission model (NOFlux) for the simulation of the NO release to the atmosphere. Physical parameters (soil moisture and temperature, wind speed, sand percentage) which affect substrate diffusion and oxygen supply in the soil and influence the microbial activity, and biogeochemical parameters (pH and fertilization rate related to N content) are necessary to simulate the NO flux. The reliability of the simulated parameters is checked, in order to assess the robustness of the simulated NO flux. The simulated yearly average NO flux ranges from 0.66 to 0.96 kg(N) ha-1 yr-1, and the wet season average ranges from 1.06 to 1.73 kg(N) ha-1 yr-1. These results are of the same order as previous measurements made in several sites where the vegetation and the soil are comparable to the ones in Agoufou. This coupled vegetation-litter decomposition-emission model could be generalized at the scale of the Sahel region, and provide information where few data are available.
NASA Astrophysics Data System (ADS)
Delon, C.; Mougin, E.; Serça, D.; Grippa, M.; Hiernaux, P.; Diawara, M.; Galy-Lacaux, C.; Kergoat, L.
2015-06-01
This work is an attempt to provide seasonal variation of biogenic NO emission fluxes in a Sahelian rangeland in Mali (Agoufou, 15.34° N, 1.48° W) for years 2004, 2005, 2006, 2007 and 2008. Indeed, NO is one of the most important precursors for tropospheric ozone, and previous studies have shown that arid areas potentially display significant NO emissions (due to both biotic and abiotic processes). Previous campaigns in the Sahel suggest that the contribution of this region in emitting NO is no longer considered as negligible. However, very few data are available in this region, therefore this study focuses on model development. The link between NO production in the soil and NO release to the atmosphere is investigated in this modelling study, by taking into account vegetation litter production and degradation, microbial processes in the soil, emission fluxes, and environmental variables influencing these processes, using a coupled vegetation-litter decomposition-emission model. This model includes the Sahelian Transpiration Evaporation and Productivity (STEP) model for the simulation of herbaceous, tree leaf and faecal masses, the GENDEC model (GENeral DEComposition) for the simulation of the buried litter decomposition and microbial dynamics, and the NO emission model (NOFlux) for the simulation of the NO release to the atmosphere. Physical parameters (soil moisture and temperature, wind speed, sand percentage) which affect substrate diffusion and oxygen supply in the soil and influence the microbial activity, and biogeochemical parameters (pH and fertilization rate related to N content) are necessary to simulate the NO flux. The reliability of the simulated parameters is checked, in order to assess the robustness of the simulated NO flux. Simulated yearly average of NO flux ranges from 2.09 to 3.04 ng(N) m-2 s-1 (0.66 to 0.96 kg(N) ha-1 yr-1), and wet season average ranges from 3.36 to 5.48 ng(N) m-2 s-1 (1.06 to 1.73 kg(N) ha-1 yr-1). These results are of the same order as previous measurements made in several sites where the vegetation and the soil are comparable to the ones in Agoufou. This coupled vegetation-litter decomposition-emission model could be generalized at the scale of the Sahel region, and provide information where few data are available.
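As a quick check on the two unit systems quoted in the abstract above, converting a mean flux from ng(N) m-2 s-1 to an annualized flux in kg(N) ha-1 yr-1 is simple arithmetic; the sketch below (assuming the annual figure is just the mean flux integrated over a full year) reproduces the reported correspondence.

    # Convert a mean NO flux from ng(N) m-2 s-1 to kg(N) ha-1 yr-1.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600     # ~3.156e7 s
    M2_PER_HA = 1.0e4
    NG_PER_KG = 1.0e12

    def ng_m2_s_to_kg_ha_yr(flux_ng_m2_s):
        return flux_ng_m2_s * M2_PER_HA * SECONDS_PER_YEAR / NG_PER_KG

    for f in (2.09, 3.04):   # yearly averages quoted in the abstract
        print(f, "->", round(ng_m2_s_to_kg_ha_yr(f), 2), "kg(N) ha-1 yr-1")
    # prints 0.66 and 0.96, consistent with the ranges reported above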
WWTP dynamic disturbance modelling--an essential module for long-term benchmarking development.
Gernaey, K V; Rosen, C; Jeppsson, U
2006-01-01
Intensive use of the benchmark simulation model No. 1 (BSM1), a protocol for objective comparison of the effectiveness of control strategies in biological nitrogen removal activated sludge plants, has also revealed a number of limitations. Preliminary definitions of the long-term benchmark simulation model No. 1 (BSM1_LT) and the benchmark simulation model No. 2 (BSM2) have been made to extend BSM1 for evaluation of process monitoring methods and plant-wide control strategies, respectively. Influent-related disturbances for BSM1_LT/BSM2 are to be generated with a model, and this paper provides a general overview of the modelling methods used. Typical influent dynamic phenomena generated with the BSM1_LT/BSM2 influent disturbance model, including diurnal, weekend, seasonal and holiday effects, as well as rainfall, are illustrated with simulation results. As a result of the work described in this paper, a proposed influent model/file has been released to the benchmark developers for evaluation purposes. Pending this evaluation, a final BSM1_LT/BSM2 influent disturbance model definition is foreseen. Preliminary simulations with dynamic influent data generated by the influent disturbance model indicate that default BSM1 activated sludge plant control strategies will need extensions for BSM1_LT/BSM2 to efficiently handle 1 year of influent dynamics.
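The BSM1_LT/BSM2 influent disturbance model itself is not reproduced here, but the idea of superimposing diurnal, weekend and seasonal harmonics plus rainfall events on a dry-weather flow can be illustrated with a minimal sketch; all amplitudes, the rain-event probability and the function name below are invented for the example and are not the benchmark definition.

    import numpy as np

    def synthetic_influent_flow(days=365, dt_h=1.0, q_dry=20000.0, seed=0):
        """Toy dry-weather flow (m3/d) with diurnal, weekly and seasonal
        harmonics plus random rain spikes -- illustrative only."""
        rng = np.random.default_rng(seed)
        t_h = np.arange(0, days * 24, dt_h)                        # time in hours
        t_d = t_h / 24.0
        diurnal = 0.25 * np.sin(2 * np.pi * (t_h - 10) / 24.0)     # daily peak in the afternoon
        weekly = -0.10 * ((t_d % 7) >= 5)                          # lower weekend flow
        seasonal = 0.15 * np.sin(2 * np.pi * (t_d - 80) / 365.0)   # wetter season
        q = q_dry * (1.0 + diurnal + weekly + seasonal)
        rain_days = rng.random(days) < 0.08                        # ~8 % of days rainy
        for d in np.where(rain_days)[0]:
            q[(t_d >= d) & (t_d < d + 0.5)] *= rng.uniform(1.5, 3.0)  # half-day rain event
        return t_d, q

    t, q = synthetic_influent_flow()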
Equatorial waves simulated by the NCAR community climate model
NASA Technical Reports Server (NTRS)
Cheng, Xinhua; Chen, Tsing-Chang
1988-01-01
The equatorial planetary waves simulated by the NCAR CCM1 general circulation model were investigated in terms of space-time spectral analysis (Kao, 1968; Hayashi, 1971, 1973) and energetic analysis (Hayashi, 1980). These analyses are particularly applied to grid-point data on latitude circles. In order to test some physical factors which may affect the generation of tropical transient planetary waves, three different model simulations with the CCM1 (the control, the no-mountain, and the no-cloud experiments) were analyzed.
Effects of water-management alternatives on streamflow in the Ipswich River basin, Massachusetts
Zarriello, Philip J.
2001-01-01
Management alternatives that could help mitigate the effects of water withdrawals on streamflow in the Ipswich River Basin were evaluated by simulation with a calibrated Hydrologic Simulation Program--Fortran (HSPF) model. The effects of management alternatives on streamflow were simulated for a 35-year period (1961-95). Most alternatives examined increased low flows compared to the base simulation of average 1989-93 withdrawals. Only the simulation of no septic-effluent inflow, and the simulation of a 20-percent increase in withdrawals, further lowered flows or caused the river to stop flowing for longer periods of time than the simulation of average 1989-93 withdrawals. Simulations of reduced seasonal withdrawals by 20 percent, and by 50 percent, resulted in a modest increase in low flow in a critical habitat reach (model reach 8 near the Reading town well field); log-Pearson Type III analysis of simulated daily-mean flow indicated that under these reduced withdrawals, model reach 8 would stop flowing for a period of seven consecutive days about every other year, whereas under average 1989-93 withdrawals this reach would stop flowing for a seven consecutive day period almost every year. Simulations of no seasonal withdrawals, and simulations that stopped streamflow depletion when flow in model reach 19 was below 22 cubic feet per second, indicated flow would be maintained in model reach 8 at all times. Simulations indicated wastewater-return flows would augment low flow in proportion to the rate of return flow. Simulations of a 1.5 million gallons per day return flow rate indicated model reach 8 would stop flowing for a period of seven consecutive days about once every 5 years; simulated return flow rates of 1.1 million gallons per day indicated that model reach 8 would stop flowing for a period of seven consecutive days about every other year. Simulation of reduced seasonal withdrawals, combined with no septic effluent return flow, indicated only a slight increase in low flow compared to low flows simulated under average 1989-93 withdrawals. Simulation of reduced seasonal withdrawal, combined with 2.6 million gallons per day wastewater-return flows, provided more flow in model reach 8 than that simulated under no withdrawals.
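The seven-consecutive-day low-flow statistic used above can be computed directly from a simulated daily-flow series. The sketch below (pandas, with a placeholder file and column name) takes the annual minimum of the 7-day rolling mean and the empirical frequency of zero-flow years; the log-Pearson Type III fitting step used in the study is not shown.

    import pandas as pd

    def annual_min_7day_flow(daily_flow):
        """daily_flow: pandas Series of daily-mean flow (cfs) with a DatetimeIndex.
        Returns the minimum 7-consecutive-day mean flow for each year."""
        q7 = daily_flow.rolling(window=7).mean()
        return q7.groupby(q7.index.year).min()

    # Example: fraction of simulated years in which a reach stops flowing for
    # at least seven consecutive days (annual 7-day minimum == 0).
    # flows = pd.read_csv("reach8_daily_flow.csv", index_col=0, parse_dates=True)["flow_cfs"]
    # q7min = annual_min_7day_flow(flows)
    # zero_flow_frequency = (q7min <= 0).mean()   # ~0.5 means "about every other year"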
Mars Global Reference Atmospheric Model (Mars-GRAM): Release No. 2 - Overview and applications
NASA Technical Reports Server (NTRS)
James, B.; Johnson, D.; Tyree, L.
1993-01-01
The Mars Global Reference Atmospheric Model (Mars-GRAM), a science and engineering model for empirically parameterizing the temperature, pressure, density, and wind structure of the Martian atmosphere, is described with particular attention to the model's newest version, Mars-GRAM, Release No. 2 and to the improvements incorporated into the Release No. 2 model as compared with the Release No. 1 version. These improvements include (1) an addition of a new capability to simulate local-scale Martian dust storms and the growth and decay of these storms; (2) an addition of the Zurek and Haberle (1988) wave perturbation model, for simulating tidal perturbation effects; and (3) a new modular version of Mars-GRAM, for incorporation as a subroutine into other codes.
Dilaveri, C A; Szostek, J H; Wang, A T; Cook, D A
2013-09-01
Breast and pelvic examinations are challenging intimate examinations. Technology-based simulation may help to overcome these challenges. To synthesise the evidence regarding the effectiveness of technology-based simulation training for breast and pelvic examination. Our systematic search included MEDLINE, EMBASE, CINAHL, PsycINFO, Scopus, and key journals and review articles; the date of the last search was January 2012. Original research studies evaluating technology-enhanced simulation of breast and pelvic examination to teach learners, compared with no intervention or with other educational activities. The reviewers evaluated study eligibility and abstracted data on methodological quality, learners, instructional design, and outcomes, and used random-effects models to pool weighted effect sizes. In total, 11 272 articles were identified for screening, and 22 studies were eligible, enrolling 2036 trainees. In eight studies comparing simulation for breast examination training with no intervention, simulation was associated with a significant improvement in skill, with a pooled effect size of 0.86 (95% CI 0.52-1.19; P < 0.001). Four studies comparing simulation training for pelvic examination with no intervention had a large and significant benefit, with a pooled effect size of 1.18 (95% CI 0.40-1.96; P = 0.003). Among breast examination simulation studies, dynamic models providing feedback were associated with improved outcomes. In pelvic examination simulation studies, the addition of a standardised patient to the simulation model and the use of an electronic model with enhanced feedback improved outcomes. In comparison with no intervention, breast and pelvic examination simulation training is associated with moderate to large effects for skills outcomes. Enhanced feedback appears to improve learning. © 2013 RCOG.
Global high-resolution simulations of tropospheric nitrogen dioxide using CHASER V4.0
NASA Astrophysics Data System (ADS)
Sekiya, Takashi; Miyazaki, Kazuyuki; Ogochi, Koji; Sudo, Kengo; Takigawa, Masayuki
2018-03-01
We evaluate global tropospheric nitrogen dioxide (NO2) simulations using the CHASER V4.0 global chemical transport model (CTM) at horizontal resolutions of 0.56, 1.1, and 2.8°. Model evaluation was conducted using satellite tropospheric NO2 retrievals from the Ozone Monitoring Instrument (OMI) and the Global Ozone Monitoring Experiment-2 (GOME-2) and aircraft observations from the 2014 Front Range Air Pollution and Photochemistry Experiment (FRAPPÉ). Agreement against satellite retrievals improved greatly at 1.1 and 0.56° resolutions (compared to 2.8° resolution) over polluted and biomass burning regions. The 1.1° simulation generally captured the regional distribution of the tropospheric NO2 column well, whereas 0.56° resolution was necessary to improve the model performance over areas with strong local sources, with mean bias reductions of 67 % over Beijing and 73 % over San Francisco in summer. Validation using aircraft observations indicated that high-resolution simulations reduced negative NO2 biases below 700 hPa over the Denver metropolitan area. These improvements in high-resolution simulations were attributable to (1) closer spatial representativeness between simulations and observations and (2) better representation of large-scale concentration fields (i.e., at 2.8°) through the consideration of small-scale processes. Model evaluations conducted at 0.5 and 2.8° bin grids indicated that the contributions of both these processes were comparable over most polluted regions, whereas the latter effect (2) made a larger contribution over eastern China and biomass burning areas. The evaluations presented in this paper demonstrate the potential of using a high-resolution global CTM for studying megacity-scale air pollutants across the entire globe, potentially also contributing to global satellite retrievals and chemical data assimilation.
Optimal Estimation with Two Process Models and No Measurements
2015-08-01
An observer is derived for blending the estimates of two independent process models when no measurements are present. The observer follows a derivation similar to that of the discrete-time Kalman filter. A simulation example is provided in which a process model based on the dynamics of a ballistic projectile is blended with a second process model; the benefit of blending the two models will be lost if either of the models includes deterministic modeling errors.
Multi-Scale Simulation of High Energy Density Ionic Liquids
2007-06-19
This AFOSR-supported project addressed the modeling and simulation of ionic liquids (ILs). A polarizable model was developed to simulate ILs more accurately at the atomistic level, along with a multiscale coarse-grained description. A polarizable force field was developed for ionic liquids such as 1-ethyl-3-methylimidazolium nitrate (EMIM+/NO3-), and the ionic liquid propellant 1-hydroxyethyl-4-amino-1,2,4-triazolium nitrate (HEATN) was studied with the all-atom polarizable model.
Models Robustness for Simulating Drainage and NO3-N Fluxes
NASA Astrophysics Data System (ADS)
Jabro, Jay; Jabro, Ann
2013-04-01
Computer models simulate and forecast appropriate agricultural practices to reduce environmental impact. The objectives of this study were to assess and compare the robustness and performance of three models -- LEACHM, NCSWAP, and SOIL-SOILN -- for simulating drainage and NO3-N leaching fluxes in an intense pasture system without recalibration. A 3-yr study was conducted on a Hagerstown silt loam to measure drainage and NO3-N fluxes below 1 m depth from N-fertilized orchardgrass using intact core lysimeters. Five N-fertilizer treatments were replicated five times in a randomized complete block experimental design. The models were validated under orchardgrass using soil, water and N transformation rate parameters and C pools fractionation derived from a previous study conducted on similar soils under corn. The model efficiencies (MEF) for drainage and NO3-N fluxes were 0.53 and 0.69 for LEACHM; 0.75 and 0.39 for NCSWAP; and 0.94 and 0.91 for SOIL-SOILN. The models failed to produce reasonable simulations of drainage and NO3-N fluxes in January, February and March due to limited water movement associated with frozen soil and snow accumulation and melt. The differences between simulated and measured NO3-N leaching and among models' performances may also be related to soil N and C transformation processes embedded in the models. These results represent a notable advance in the validation of computer models, which should encourage their continued adoption by diverse stakeholders.
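The model efficiency (MEF) quoted above is presumably the Nash-Sutcliffe efficiency; under that assumption, a minimal implementation is:

    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe model efficiency: 1 is a perfect fit, 0 means the
        model is no better than the mean of the observations."""
        obs = np.asarray(observed, dtype=float)
        sim = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)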
NASA Astrophysics Data System (ADS)
Liu, Fei; van der A, Ronald J.; Eskes, Henk; Ding, Jieying; Mijling, Bas
2018-03-01
Chemical transport models together with emission inventories are widely used to simulate NO2 concentrations over China, but validation of the simulations with in situ measurements has been extremely limited. Here we use ground measurements obtained from the air quality monitoring network recently developed by the Ministry of Environmental Protection of China to validate modeling surface NO2 concentrations from the CHIMERE regional chemical transport model driven by the satellite-derived DECSO and the bottom-up MIX emission inventories. We applied a correction factor to the observations to account for the interferences of other oxidized nitrogen compounds (NOz), based on the modeled ratio of NO2 to NOz. The model accurately reproduces the spatial variability in NO2 from in situ measurements, with a spatial correlation coefficient of over 0.7 for simulations based on both inventories. A negative and positive bias is found for the simulation with the DECSO (slope = 0.74 and 0.64 for the daily mean and daytime only) and the MIX (slope = 1.3 and 1.1) inventories, respectively, suggesting an underestimation and overestimation of NOx emissions from corresponding inventories. The bias between observed and modeled concentrations is reduced, with the slope dropping from 1.3 to 1.0 when the spatial distribution of NOx emissions in the DECSO inventory is applied as the spatial proxy for the MIX inventory, which suggests an improvement of the distribution of emissions between urban and suburban or rural areas in the DECSO inventory compared to that used in the bottom-up inventory. A rough estimate indicates that the observed concentrations, from sites predominantly placed in the populated urban areas, may be 10-40 % higher than the corresponding model grid cell mean. This reduces the estimate of the negative bias of the DECSO-based simulation to the range of -30 to 0 % on average and more firmly establishes that the MIX inventory is biased high over major cities. The performance of the model is comparable over seasons, with a slightly worse spatial correlation in summer due to the difficulties in resolving the more active NOx photochemistry and larger concentration gradients in summer by the model. In addition, the model well captures the daytime diurnal cycle but shows more significant disagreement between simulations and measurements during nighttime, which likely produces a positive model bias of about 15 % in the daily mean concentrations. This is most likely related to the uncertainty in vertical mixing in the model at night.
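The correction applied above to the in situ measurements, to remove the contribution of other oxidized nitrogen species (NOz) to which molybdenum-converter monitors are also sensitive, is described only as being "based on the modeled ratio of NO2 to NOz". One plausible, hedged reading is a multiplicative factor of the form below; the exact formulation used in the paper may differ, and the interference fraction is a placeholder.

    def correct_monitor_no2(no2_measured, no2_model, noz_model, interference=1.0):
        """Scale a monitor NO2 reading by the modelled fraction of true NO2
        in the total signal (NO2 plus an assumed fraction of NOz)."""
        factor = no2_model / (no2_model + interference * noz_model)
        return no2_measured * factor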
NASA Technical Reports Server (NTRS)
Liu, Fei; van der A, Ronald J.; Eskes, Henk; Ding, Jieying; Mijling, Bas
2018-01-01
Chemical transport models together with emission inventories are widely used to simulate NO2 concentrations over China, but validation of the simulations with in situ measurements has been extremely limited. Here we use ground measurements obtained from the air quality monitoring network recently developed by the Ministry of Environmental Protection of China to validate modeling surface NO2 concentrations from the CHIMERE regional chemical transport model driven by the satellite-derived DECSO and the bottom-up MIX emission inventories. We applied a correction factor to the observations to account for the interferences of other oxidized nitrogen compounds (NOz), based on the modeled ratio of NO2 to NOz. The model accurately reproduces the spatial variability in NO2 from in situ measurements, with a spatial correlation coefficient of over 0.7 for simulations based on both inventories. A negative and positive bias is found for the simulation with the DECSO (slope = 0.74 and 0.64 for the daily mean and daytime only) and the MIX (slope = 1.3 and 1.1) inventories, respectively, suggesting an underestimation and overestimation of NOx emissions from corresponding inventories. The bias between observed and modeled concentrations is reduced, with the slope dropping from 1.3 to 1.0 when the spatial distribution of NOx emissions in the DECSO inventory is applied as the spatial proxy for the MIX inventory, which suggests an improvement of the distribution of emissions between urban and suburban or rural areas in the DECSO inventory compared to that used in the bottom-up inventory. A rough estimate indicates that the observed concentrations, from sites predominantly placed in the populated urban areas, may be 10-40% higher than the corresponding model grid cell mean. This reduces the estimate of the negative bias of the DECSO-based simulation to the range of -30 to 0% on average and more firmly establishes that the MIX inventory is biased high over major cities. The performance of the model is comparable over seasons, with a slightly worse spatial correlation in summer due to the difficulties in resolving the more active NOx photochemistry and larger concentration gradients in summer by the model. In addition, the model well captures the daytime diurnal cycle but shows more significant disagreement between simulations and measurements during nighttime, which likely produces a positive model bias of about 15% in the daily mean concentrations. This is most likely related to the uncertainty in vertical mixing in the model at night.
NASA Technical Reports Server (NTRS)
Rodriquez, J. M.; Yoshida, Y.; Duncan, B. N.; Bucsela, E. J.; Gleason, J. F.; Allen, D.; Pickering, K. E.
2007-01-01
We present simulations of the tropospheric composition for the years 2004 and 2005, carried out by the GMI Combined Stratosphere-Troposphere (Combo) model, at a resolution of 2° x 2.5°. The model includes a new parameterization of lightning sources of NO(x) which is coupled to the cloud mass fluxes in the adopted meteorological fields. These simulations use two different sets of input meteorological fields: a) late-look assimilated fields from the Global Modeling and Assimilation Office (GMAO) GEOS-4 system and b) 12-hour forecast fields initialized with the assimilated data. Comparison of the forecast to the assimilated fields indicates that the forecast fields exhibit less vigorous convection, and yield tropical precipitation fields in better agreement with observations. Since these simulations include a complete representation of the stratosphere, they provide realistic stratosphere-troposphere fluxes of O3 and NO(y). Furthermore, the stratospheric contribution to total columns of different tropospheric species can be subtracted in a consistent fashion, and the lightning production of NO(y) will depend on the adopted meteorological field. We concentrate here on the simulated tropospheric columns of NO2, and compare them to observations by the OMI instrument for the years 2004 and 2005. The comparison is used to address these questions: a) is there a significant difference in the agreement/disagreement between simulations for these two different meteorological fields, and if so, what causes these differences?; b) how do the simulations compare to OMI observations, and does this comparison indicate an improvement in simulations with the forecast fields? c) what are the implications of these simulations for our understanding of the NO2 emissions over continental polluted regions?
NASA Astrophysics Data System (ADS)
Kim, Youngseob; Wu, You; Seigneur, Christian; Roustan, Yelva
2018-02-01
A new multi-scale model of urban air pollution is presented. This model combines a chemistry-transport model (CTM) that includes a comprehensive treatment of atmospheric chemistry and transport on spatial scales down to 1 km and a street-network model that describes the atmospheric concentrations of pollutants in an urban street network. The street-network model is the Model of Urban Network of Intersecting Canyons and Highways (MUNICH), which consists of two main components: a street-canyon component and a street-intersection component. MUNICH is coupled to the Polair3D CTM of the Polyphemus air quality modeling platform to constitute the Street-in-Grid (SinG) model. MUNICH is used to simulate the concentrations of the chemical species in the urban canopy, which is located in the lowest layer of Polair3D, and the simulation of pollutant concentrations above rooftops is performed with Polair3D. Interactions between MUNICH and Polair3D occur at roof level and depend on a vertical mass transfer coefficient that is a function of atmospheric turbulence. SinG is used to simulate the concentrations of nitrogen oxides (NOx) and ozone (O3) in a Paris suburb. Simulated concentrations are compared to NOx concentrations measured at two monitoring stations within a street canyon. SinG shows better performance than MUNICH for nitrogen dioxide (NO2) concentrations. However, both SinG and MUNICH underestimate NOx. For the case study considered, the model performance for NOx concentrations is not sensitive to using a complex chemistry model in MUNICH and the Leighton NO-NO2-O3 set of reactions is sufficient.
Feedbacks between Air Pollution and Weather, Part 1: Effects on Weather
The meteorological predictions of fully coupled air-quality models running in "feedback" versus "no-feedback" simulations were compared against each other as part of Phase 2 of the Air Quality Model Evaluation International Initiative. The model simulations included a "no-feedback...
Multiscale modeling of the detonation of aluminized explosives using the SPH-MD-QM method
NASA Astrophysics Data System (ADS)
Peng, Qing; Wang, Guangyu; Liu, Gui-Rong; de, Suvranu
Aluminized explosives have been used in the military industry for decades. Compared with ideal explosives, aluminized explosives feature both fast detonation and slow metal combustion chemistry, generating a complex multi-phase reactive flow. Here, we introduce a sequential multiscale SPH-MD-QM model to simulate the detonation behavior of aluminized explosives. At the bottom level, first-principles quantum mechanics (QM) calculations are employed to obtain the training sets for fitting the ReaxFF potentials, which are used in turn in the reactive molecular dynamics (MD) simulations at the middle level to obtain the chemical reaction rates and equations of state. At the top level, a smoothed particle hydrodynamics (SPH) method incorporating an ignition-and-growth model and an afterburning model is used for the simulation of the detonation and combustion of the aluminized explosive. Simulations are compared with experiment and good agreement is observed. The proposed multiscale SPH-MD-QM method could be used to optimize the performance of aluminized explosives. The authors would like to acknowledge the generous financial support from the Defense Threat Reduction Agency (DTRA) Grant No. HDTRA1-13-1-0025 and the Office of Naval Research Grants ONR Award No. N00014-08-1-0462 and No. N00014-12-1-0527.
Enhanced vadose zone nitrogen removal by poplar during dormancy.
Ausland, Hayden; Ward, Adam; Licht, Louis; Just, Craig
2015-01-01
A pilot-scale, engineered poplar tree vadose zone system was utilized to determine effluent nitrate (NO3(-)) and ammonium concentrations resulting from intermittent dosing of a synthetic wastewater onto sandy soils at 4.5°C. The synthetic wastewater replicated that of an industrial food processor that irrigates onto sandy soils even during dormancy which can leave groundwater vulnerable to NO3(-) contamination. Data from a 21-day experiment was used to assess various Hydrus model parameterizations that simulated the impact of dormant roots. Bromide tracer data indicated that roots impacted the hydraulic properties of the packed sand by increasing effective dispersion, water content and residence time. The simulated effluent NO3(-) concentration on day 21 was 1.2 mg-N L(-1) in the rooted treatments compared to a measured value of 1.0 ± 0.72 mg-N L(-1). For the non-rooted treatment, the simulated NO3(-) concentration was 4.7 mg-N L(-1) compared to 5.1 ± 3.5 mg-N L(-1) measured on day 21. The model predicted a substantial "root benefit" toward protecting groundwater through increased denitrification in rooted treatments during a 21-day simulation with 8% of dosed nitrogen converted to N2 compared to 3.3% converted in the non-rooted test cells. Simulations at the 90-day timescale provided similar results, indicating increased denitrification in rooted treatments.
Neural network simulation of soil NO3 dynamic under potato crop system
NASA Astrophysics Data System (ADS)
Goulet-Fortin, Jérôme; Morais, Anne; Anctil, François; Parent, Léon-Étienne; Bolinder, Martin
2013-04-01
Nitrate leaching is a major issue in sandy soils intensively cropped to potato. Modelling could test and improve management practices, particularly with regard to optimal N application rates. Lack of input data is an important barrier to the application of classical process-based models to predict soil NO3 content (SNOC) and NO3 leaching (NOL). Alternatively, data-driven models such as neural networks (NN) can better take into account indicators of spatial soil heterogeneity and plant growth pattern such as the leaf area index (LAI), hence reducing the amount of soil information required. The first objective of this study was to evaluate NN and hybrid models to simulate SNOC in the 0-40 cm soil layer considering inter-annual variations, spatial soil heterogeneity and differential N application rates. The second objective was to evaluate the same methodology to simulate the seasonal NOL dynamic at 1 m depth. To this end, multilayer perceptrons with different combinations of driving meteorological variables, functions of the LAI and state variables of external deterministic models were trained and evaluated. The state variables from external models were: drainage estimated by the CLASS model and the soil temperature estimated by an ICBM subroutine. Results of SNOC simulations were compared to field data collected between 2004 and 2011 at several experimental plots under potato cropping systems in Québec, Eastern Canada. Results of NOL simulation were compared to data obtained in 2012 from 11 suction lysimeters installed in 2 experimental plots under potato cropping systems in the same region. The best-performing model for SNOC simulation was a 4-input hybrid model composed of 1) cumulative LAI, 2) cumulative drainage, 3) soil temperature and 4) day of year. The best-performing model for NOL simulation was a 5-input NN model composed of 1) N fertilization rate at spring, 2) LAI, 3) cumulative rainfall, 4) the day of year and 5) the percentage of clay content. The MAE was 22% for SNOC simulation and 23% for NOL simulation. High sensitivity to LAI suggests that the model may take into account field and sub-field spatial variability and support N management. Further studies are needed to fully validate the method, particularly in the case of NOL simulation.
Implementing ADM1 for plant-wide benchmark simulations in Matlab/Simulink.
Rosen, C; Vrecko, D; Gernaey, K V; Pons, M N; Jeppsson, U
2006-01-01
The IWA Anaerobic Digestion Model No.1 (ADM1) was presented in 2002 and is expected to represent the state-of-the-art model within this field in the future. Due to its complexity the implementation of the model is not a simple task and several computational aspects need to be considered, in particular if the ADM1 is to be included in dynamic simulations of plant-wide or even integrated systems. In this paper, the experiences gained from a Matlab/Simulink implementation of ADM1 into the extended COST/IWA Benchmark Simulation Model (BSM2) are presented. Aspects related to system stiffness, model interfacing with the ASM family, mass balances, acid-base equilibrium and algebraic solvers for pH and other troublesome state variables, numerical solvers and simulation time are discussed. The main conclusion is that if implemented properly, the ADM1 will also produce high-quality results in dynamic plant-wide simulations including noise, discrete sub-systems, etc. without imposing any major restrictions due to extensive computational efforts.
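One of the computational aspects mentioned above is the algebraic solution for pH. Independently of the ADM1 specifics, the usual approach is to solve a charge balance for the hydrogen-ion concentration; the sketch below does this for a simplified ion set with a bracketing root finder, and is only illustrative of the idea, not the BSM2 implementation.

    import math
    from scipy.optimize import brentq

    KW = 1.0e-14           # water ion product
    KA_AC = 10 ** -4.76    # acetic acid dissociation constant (example species)

    def charge_balance(h, cations, anions, s_ac_total):
        """Net charge (eq/L) as a function of [H+]; its root gives the pH."""
        oh = KW / h
        ac_minus = s_ac_total * KA_AC / (KA_AC + h)   # dissociated acetate
        return h + cations - oh - ac_minus - anions

    def solve_ph(cations, anions, s_ac_total):
        h = brentq(charge_balance, 1e-14, 1.0, args=(cations, anions, s_ac_total))
        return -math.log10(h)

    # e.g. solve_ph(cations=0.01, anions=0.008, s_ac_total=0.005) -> pH between 4 and 5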
Simulation-based bronchoscopy training: systematic review and meta-analysis.
Kennedy, Cassie C; Maldonado, Fabien; Cook, David A
2013-07-01
Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n=8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n=7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, -1.47 to 2.69]) and process (0.33 [95% CI, -1.46 to 2.11]) outcomes (n=2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few.
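The pooled effect sizes quoted above come from a random-effects meta-analysis. As a generic illustration (not the authors' exact computation), a DerSimonian-Laird pooling of per-study effect sizes and variances looks like this:

    import numpy as np

    def dersimonian_laird(effects, variances):
        """Pool per-study effect sizes with DerSimonian-Laird random effects.
        Returns (pooled effect, 95% CI low, 95% CI high)."""
        y = np.asarray(effects, dtype=float)
        v = np.asarray(variances, dtype=float)
        w = 1.0 / v                                   # fixed-effect weights
        y_fe = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q
        df = len(y) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)                 # between-study variance
        w_re = 1.0 / (v + tau2)
        pooled = np.sum(w_re * y) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        return pooled, pooled - 1.96 * se, pooled + 1.96 * se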
Arnell, Magnus; Astals, Sergi; Åmand, Linda; Batstone, Damien J; Jensen, Paul D; Jeppsson, Ulf
2016-07-01
Anaerobic co-digestion is an emerging practice at wastewater treatment plants (WWTPs) to improve the energy balance and integrate waste management. Modelling of co-digestion in a plant-wide WWTP model is a powerful tool to assess the impact of co-substrate selection and dose strategy on digester performance and plant-wide effects. A feasible procedure to characterise and fractionate co-substrates COD for the Benchmark Simulation Model No. 2 (BSM2) was developed. This procedure is also applicable for the Anaerobic Digestion Model No. 1 (ADM1). Long chain fatty acid inhibition was included in the ADM1 model to allow for realistic modelling of lipid rich co-substrates. Sensitivity analysis revealed that, apart from the biodegradable fraction of COD, protein and lipid fractions are the most important fractions for methane production and digester stability, with at least two major failure modes identified through principal component analysis (PCA). The model and procedure were tested on bio-methane potential (BMP) tests on three substrates, each rich on carbohydrates, proteins or lipids with good predictive capability in all three cases. This model was then applied to a plant-wide simulation study which confirmed the positive effects of co-digestion on methane production and total operational cost. Simulations also revealed the importance of limiting the protein load to the anaerobic digester to avoid ammonia inhibition in the digester and overloading of the nitrogen removal processes in the water train. In contrast, the digester can treat relatively high loads of lipid rich substrates without prolonged disturbances. Copyright © 2016 Elsevier Ltd. All rights reserved.
Comparison of existing models to simulate anaerobic digestion of lipid-rich waste.
Béline, F; Rodriguez-Mendez, R; Girault, R; Bihan, Y Le; Lessard, P
2017-02-01
Models for anaerobic digestion of lipid-rich waste taking inhibition into account were reviewed and, if necessary, adjusted to the ADM1 model framework in order to compare them. Experimental data from anaerobic digestion of slaughterhouse waste at an organic loading rate (OLR) ranging from 0.3 to 1.9 kg VS m-3 d-1 were used to compare and evaluate models. Experimental data obtained at low OLRs were accurately modeled whatever the model, thereby validating the stoichiometric parameters used and influent fractionation. However, at higher OLRs, although inhibition parameters were optimized to reduce differences between experimental and simulated data, no model was able to accurately simulate accumulation of substrates and intermediates, mainly due to the wrong simulation of pH. A simulation using pH based on experimental data showed that acetogenesis and methanogenesis were the most sensitive steps to LCFA inhibition and enabled identification of the inhibition parameters of both steps. Copyright © 2016 Elsevier Ltd. All rights reserved.
Goldberg, Daniel L; Vinciguerra, Timothy P; Anderson, Daniel C; Hembeck, Linda; Canty, Timothy P; Ehrman, Sheryl H; Martins, Douglas K; Stauffer, Ryan M; Thompson, Anne M; Salawitch, Ross J; Dickerson, Russell R
2016-03-16
A Comprehensive Air-Quality Model with Extensions (CAMx) version 6.10 simulation was assessed through comparison with data acquired during NASA's 2011 DISCOVER-AQ Maryland field campaign. Comparisons for the baseline simulation (CB05 chemistry, EPA 2011 National Emissions Inventory) show a model overestimate of NOy by +86.2% and an underestimate of HCHO by -28.3%. We present a new model framework (CB6r2 chemistry, MEGAN v2.1 biogenic emissions, 50% reduction in mobile NOx, enhanced representation of isoprene nitrates) that better matches observations. The new model framework attributes 31.4% more surface ozone in Maryland to electric generating units (EGUs) and 34.6% less ozone to on-road mobile sources. Surface ozone becomes more NOx-limited throughout the eastern United States compared to the baseline simulation. The baseline model therefore likely underestimates the effectiveness of anthropogenic NOx reductions as well as the current contribution of EGUs to surface ozone.
Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V
2009-01-01
The objective of this paper is to perform an uncertainty and sensitivity analysis of the predictions of the Benchmark Simulation Model (BSM) No. 1, when comparing four activated sludge control strategies. The Monte Carlo simulation technique is used to evaluate the uncertainty in the BSM1 predictions, considering the ASM1 bio-kinetic parameters and influent fractions as input uncertainties while the Effluent Quality Index (EQI) and the Operating Cost Index (OCI) are focused on as model outputs. The resulting Monte Carlo simulations are presented using descriptive statistics indicating the degree of uncertainty in the predicted EQI and OCI. Next, the Standardized Regression Coefficients (SRC) method is used for sensitivity analysis to identify which input parameters influence the uncertainty in the EQI predictions the most. The results show that control strategies including an ammonium (S(NH)) controller reduce uncertainty in both overall pollution removal and effluent total Kjeldahl nitrogen. Also, control strategies with an external carbon source reduce the effluent nitrate (S(NO)) uncertainty, increasing both their economical cost and variability as a trade-off. Finally, the maximum specific autotrophic growth rate (μ(A)) causes most of the variance in the effluent for all the evaluated control strategies. The influence of denitrification related parameters, e.g. η(g) (anoxic growth rate correction factor) and η(h) (anoxic hydrolysis rate correction factor), becomes less important when a S(NO) controller manipulating an external carbon source addition is implemented.
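The Standardized Regression Coefficients referred to above quantify how much of the Monte Carlo output variance each uncertain input explains, via a linear regression of the sampled outputs on the sampled inputs. A minimal, generic version (not the BSM1-specific code) is:

    import numpy as np

    def standardized_regression_coefficients(X, y):
        """X: (n_samples, n_params) Monte Carlo input sample, y: model output.
        Returns one SRC per parameter; SRC**2 sums to ~R**2 of the regression."""
        X = np.asarray(X, dtype=float)
        y = np.asarray(y, dtype=float)
        A = np.column_stack([np.ones(len(y)), X])          # intercept + inputs
        beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]    # drop intercept
        return beta * X.std(axis=0) / y.std()

    # Example with a toy "model": the output depends mostly on the first parameter.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 3))
    y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=1000)
    print(standardized_regression_coefficients(X, y))      # ~[0.99, 0.16, ~0]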
Large-eddy simulation of a turbulent flow over the DrivAer fastback vehicle model
NASA Astrophysics Data System (ADS)
Ruettgers, Mario; Park, Junshin; You, Donghyun
2017-11-01
In 2012 the Technical University of Munich (TUM) made realistic generic car models called DrivAer available to the public. These detailed models allow a precise calculation of the flow around a lifelike car, which in the past was limited to simplified geometries. In the present study, the turbulent flow around one of the models, the DrivAer Fastback model, is simulated using large-eddy simulation (LES). The goal of the study is to give a deeper physical understanding of highly turbulent regions around the car, such as at the side mirror or at the rear end. For each region the contribution to the total drag is worked out. The results show that almost 35% of the drag is generated by the car wheels, whereas the side mirror contributes only 4% of the total drag. Detailed frequency analyses of velocity signals in each wake region have also been conducted, and three dominant frequencies were found that correspond to the dominant frequency of the total drag. Furthermore, vortical structures are visualized and highly energetic points are identified. This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (Ministry of Science, ICT and Future Planning) (No. 2014R1A2A1A11049599, No. 2015R1A2A1A15056086, No. 2016R1E1A2A01939553).
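The dominant frequencies mentioned above come from spectral analysis of time signals; as a generic sketch (not the authors' post-processing), the peak of the one-sided FFT amplitude spectrum of a drag or velocity signal can be found as follows:

    import numpy as np

    def dominant_frequency(signal, dt):
        """Return the frequency (Hz) of the largest spectral peak,
        ignoring the mean (zero-frequency) component."""
        x = np.asarray(signal, dtype=float) - np.mean(signal)
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=dt)
        return freqs[np.argmax(spectrum[1:]) + 1]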
Tenbus, Frederick J.; Fleck, William B.
2001-01-01
Military activity at Graces Quarters, a former open-air chemical-agent facility at Aberdeen Proving Ground, Maryland, has resulted in ground-water contamination by chlorinated hydrocarbons. As part of a ground-water remediation feasibility study, a three-dimensional model was constructed to simulate transport of four chlorinated hydrocarbons (1,1,2,2-tetrachloroethane, trichloroethene, carbon tetrachloride, and chloroform) that are components of a contaminant plume in the surficial and middle aquifers underlying the east-central part of Graces Quarters. The model was calibrated to steady-state hydraulic head at 58 observation wells and to the concentration of 1,1,2,2-tetrachloroethane in 58 observation wells and 101 direct-push probe samples from the mid-1990s. Simulations using the same basic model with minor adjustments were then run for each of the other plume constituents. The error statistics between the simulated and measured concentrations of each of the constituents compared favorably to the error statistics of the 1,1,2,2-tetrachloroethane calibration. Model simulations were used in conjunction with contaminant concentration data to examine the sources and degradation of the plume constituents. It was determined from this that mixed contaminant sources with no ambient degradation was the best approach for simulating multi-species solute transport at the site. Forward simulations were run to show potential solute transport 30 years and 100 years into the future with and without source removal. Although forward simulations are subject to uncertainty, they can be useful for illustrating various aspects of the conceptual model and its implementation. The forward simulation with no source removal indicates that contaminants would spread throughout various parts of the surficial and middle aquifers, with the 100-year simulation showing potential discharge areas in either the marshes at the end of the Graces Quarters peninsula or just offshore in the estuaries. The simulation with source removal indicates that if the modeling assumptions are reasonable and ground-water cleanup within 30 years is important, source removal alone is not a sufficient remedy, and cleanup might not even occur within 100 years.
Malone, Robert W.; Nolan, Bernard T.; Ma, Liwang; Kanwar, Rameshwar S.; Pederson, Carl H.; Heilman, Philip
2014-01-01
Well tested agricultural system models can improve our understanding of the water quality effects of management practices under different conditions. The Root Zone Water Quality Model (RZWQM) has been tested under a variety of conditions. However, the current model's ability to simulate pesticide transport to subsurface drain flow over a long term period under different tillage systems and application rates is not clear. Therefore, we calibrated and tested RZWQM using six years of data from Nashua, Iowa. In this experiment, atrazine was spring applied at 2.8 (1990–1992) and 0.6 kg/ha/yr (1993–1995) to two 0.4 ha plots with different tillage (till and no-till). The observed and simulated average annual flow weighted atrazine concentrations (FWAC) in subsurface drain flow from the no-till plot were 3.7 and 3.2 μg/L, respectively for the period with high atrazine application rates, and 0.8 and 0.9 μg/L, respectively for the period with low application rates. The 1990–1992 observed average annual FWAC difference between the no-till and tilled plot was 2.4 μg/L while the simulated difference was 2.1 μg/L. These observed and simulated differences for 1993–1995 were 0.1 and 0.1 μg/L, respectively. The Nash–Sutcliffe model performance statistic (EF) for cumulative atrazine flux to subsurface drain flow was 0.93 for the no-till plot testing years (1993–1995), which is comparable to other recent model tests. The value of EF is 1.0 when simulated data perfectly match observed data. The order of selected parameter sensitivity for RZWQM simulated FWAC was atrazine partition coefficient > number of macropores > atrazine half life in soil > soil hydraulic conductivity. Simulations from 1990 to 1995 with four different atrazine application rates applied at a constant rate throughout the simulation period showed concentrations in drain flow for the no-till plot to be twice those of the tilled plot. The differences were more pronounced in the early simulation period (1990–1992), partly because of the characteristics of macropore flow during large storms. The results suggest that RZWQM is a promising tool to study pesticide transport to subsurface drain flow under different tillage systems and application rates over several years, the concentrations of atrazine in drain flow can be higher with no-till than tilled soil over a range of atrazine application rates, and atrazine concentrations in drain flow are sensitive to the macropore flow characteristics under different tillage systems and rainfall timing and intensity.
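The flow-weighted atrazine concentration (FWAC) reported above is, by definition, the total chemical flux divided by the total drain flow over the averaging period; a minimal helper to compute it from paired flow and concentration records would be:

    import numpy as np

    def flow_weighted_concentration(flows, concentrations):
        """flows: drain flow per time step (e.g. mm or L), concentrations: ug/L.
        Returns the flow-weighted average concentration (ug/L)."""
        q = np.asarray(flows, dtype=float)
        c = np.asarray(concentrations, dtype=float)
        return np.sum(q * c) / np.sum(q)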
Simulation-Based Bronchoscopy Training
Kennedy, Cassie C.; Maldonado, Fabien
2013-01-01
Background: Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. Methods: We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. Results: From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n = 8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n = 7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, −1.47 to 2.69]) and process (0.33 [95% CI, −1.46 to 2.11]) outcomes (n = 2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Conclusions: Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few. PMID:23370487
Evaluation of GFDL-AM4 simulations of nitrogen oxides with OMI satellite observations
NASA Astrophysics Data System (ADS)
Penn, E.; Horowitz, L. W.; Naik, V.
2017-12-01
We examine the seasonal cycle and interannual variability of NO2 from 2005 to 2015 over key global regions using simulations with a nudged version of the GFDL-AM4 chemistry-climate model and satellite-based observations from OMI (Ozone Monitoring Instrument), which observes near-global NO2 column abundances at 1 p.m. local time daily. We gridded TEMIS (Tropospheric Emissions Monitoring Internet Service) OMI data to the model spatial grid using WHIPS 2.0 (Wisconsin Horizontal Interpolation Program for Satellites version 2.0) and applied the OMI averaging kernel to weight the model's NO2 concentrations vertically. Model-simulated tropospheric NO2 columns reproduce the OMI spatial patterns (averaging r2 = 0.81) and seasonal cycles well, but underestimate observations in most regions by 16-62%. A notable exception is the overestimate by 5-35% in East Asia. In regions dominated by biomass burning, these emissions tend to control the seasonal cycle of NO2. However, where anthropogenic emissions dominate, the photochemical conversion of NO2 to PAN and nitric acid controls the seasonal cycle, as indicated by NO2/NOy ratios. Future work is required to explain AM4 biases relative to OMI.
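Applying the OMI averaging kernel to the model, as described above, amounts to weighting the model's partial NO2 columns in each layer by the retrieval's layer-wise averaging kernel before summing to a tropospheric column, so that model and retrieval share the same vertical sensitivity. A hedged sketch of that step (array shapes and variable names are assumptions, not the WHIPS/TEMIS interfaces):

    import numpy as np

    def model_column_with_ak(partial_columns, averaging_kernel):
        """partial_columns: model NO2 partial column per layer (molec cm-2),
        averaging_kernel: dimensionless AK per layer from the OMI product.
        Returns the model tropospheric column as the satellite would see it."""
        pc = np.asarray(partial_columns, dtype=float)
        ak = np.asarray(averaging_kernel, dtype=float)
        return np.sum(ak * pc)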
Simulation of radar backscattering from snowpack at X-band and Ku-band
NASA Astrophysics Data System (ADS)
Gay, Michel; Phan, Xuan-Vu; Ferro-Famil, Laurent
2016-04-01
This paper presents a multilayer snowpack electromagnetic backscattering model, based on Dense Media Radiative Transfer (DMRT). This model is capable of simulating the interaction of electromagnetic wave (EMW) at X-band and Ku-band frequencies with multilayer snowpack. The air-snow interface and snow-ground backscattering components are calculated using the Integral Equation Model (IEM) by [1], whereas the volume backscattering component is calculated based on the solution of Vector Radiative Transfer (VRT) equation at order 1. Case study has been carried out using measurement data from NoSREx project [2], which include SnowScat data in X-band and Ku-band, TerraSAR-X acquisitions and snowpack stratigraphic in-situ measurements. The results of model simulations show good agreement with the radar observations, and therefore allow the DMRT model to be used in various applications, such as data assimilation [3]. [1] A.K. Fung and K.S. Chen, "An update on the iem surface backscattering model," Geoscience and Remote Sensing Letters, IEEE, vol. 1, no. 2, pp. 75 - 77, april 2004. [2] J. Lemmetyinen, A. Kontu, J. Pulliainen, A. Wiesmann, C. Werner, T. Nagler, H. Rott, and M. Heidinger, "Technical assistance for the deployment of an x- to ku-band scatterometer during the nosrex ii experiment," Final Report, ESA ESTEC Contract No. 22671/09/NL/JA., 2011. [3] X. V. Phan, L. Ferro-Famil, M. Gay, Y. Durand, M. Dumont, S. Morin, S. Allain, G. D'Urso, and A. Girard, "3d-var multilayer assimilation of x-band sar data into a detailed snowpack model," The Cryosphere Discussions, vol. 7, no. 5, pp. 4881-4912, 2013.
Senapati, Nimai; Chabbi, Abad; Giostri, André Faé; Yeluripati, Jagadeesh B; Smith, Pete
2016-12-01
The DailyDayCent biogeochemical model was used to simulate nitrous oxide (N2O) emissions from two contrasting agro-ecosystems viz. a mown-grassland and a grain-cropping system in France. Model performance was tested using high frequency measurements over three years; additionally a local sensitivity analysis was performed. Annual N2O emissions of 1.97 and 1.24 kg N ha-1 yr-1 were simulated from mown-grassland and grain-cropland, respectively. Measured and simulated water filled pore space (r = 0.86, ME = -2.5%) and soil temperature (r = 0.96, ME = -0.63°C) at 10 cm soil depth matched well in mown-grassland. The model predicted cumulative hay and crop production effectively. The model simulated soil mineral nitrogen (N) concentrations, particularly ammonium (NH4+), reasonably, but the model significantly underestimated soil nitrate (NO3-) concentration under both systems. In general, the model effectively simulated the dynamics and the magnitude of daily N2O flux over the whole experimental period in grain-cropland (r = 0.16, ME = -0.81 g N ha-1 day-1), with reasonable agreement between measured and modelled N2O fluxes for the mown-grassland (r = 0.63, ME = -0.65 g N ha-1 day-1). Our results indicate that DailyDayCent has potential for use as a tool for predicting overall N2O emissions in the study region. However, in-depth analysis shows some systematic discrepancies between measured and simulated N2O fluxes on a daily basis. The current exercise suggests that DailyDayCent may need improvement, particularly the sub-module responsible for N transformations, for better simulating soil mineral N, especially soil NO3- concentration, and N2O flux on a daily basis. The sensitivity analysis shows that many factors such as climate change, N-fertilizer use, input uncertainty and parameter value could influence the simulation of N2O emissions. Sensitivity estimation also helped to identify critical parameters, which need careful estimation or site-specific calibration for successful modelling of N2O emissions in the study region. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Yeh, J. W.
1971-01-01
The general features of the GENET system for simulating networks are described. A set of features is presented which are desirable for network simulations and which are expected to be achieved by this system. Among these features are: (1) two-level network modeling; and (2) problem-oriented operations. Several typical network systems are modeled in the GENET framework to illustrate several of these features and to show its applicability.
Moriasi, Daniel N; Gowda, Prasanna H; Arnold, Jeffrey G; Mulla, David J; Ale, Srinivasulu; Steiner, Jean L; Tomer, Mark D
2013-11-01
Subsurface tile drains in agricultural systems of the midwestern United States are a major contributor of nitrate-N (NO3-N) loadings to hypoxic conditions in the Gulf of Mexico. Hydrologic and water quality models, such as the Soil and Water Assessment Tool, are widely used to simulate tile drainage systems. The Hooghoudt and Kirkham tile drain equations in the Soil and Water Assessment Tool have not been rigorously tested for predicting tile flow and the corresponding NO3-N losses. In this study, long-term (1983-1996) monitoring plot data from southern Minnesota were used to evaluate the SWAT version 2009 revision 531 (hereafter referred to as SWAT) model for accurately estimating subsurface tile drain flows and associated NO3-N losses. A retention parameter adjustment factor was incorporated to account for the effects of tile drainage and slope changes on the computation of surface runoff using the curve number method (hereafter referred to as Revised SWAT). The SWAT and Revised SWAT models were calibrated and validated for tile flow and associated NO3-N losses. Results indicated that, on average, Revised SWAT predicted monthly tile flow and associated NO3-N losses better than SWAT by 48 and 28%, respectively. For the calibration period, the Revised SWAT model simulated tile flow and NO3-N losses within 4 and 1% of the observed data, respectively. For the validation period, it simulated tile flow and NO3-N losses within 8 and 2%, respectively, of the observed values. Therefore, the Revised SWAT model is expected to provide more accurate simulation of the effectiveness of tile drainage and NO3-N management practices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
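A minimal sketch of the curve number runoff calculation that the retention adjustment acts on is given below. The multiplicative form and the name of the adjustment factor (retention_adjust) are hypothetical stand-ins for the Revised SWAT modification, whose exact form is not specified in the abstract.

```python
def curve_number_runoff(precip_mm, cn, retention_adjust=1.0):
    """Daily surface runoff (mm) from the SCS curve number method.

    retention_adjust is a hypothetical multiplier on the retention parameter S,
    standing in for the adjustment factor described in the abstract.
    """
    s = 25.4 * (1000.0 / cn - 10.0)      # retention parameter S (mm)
    s *= retention_adjust                 # e.g. > 1 for tile-drained, flat fields
    ia = 0.2 * s                          # initial abstraction
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

print(curve_number_runoff(40.0, cn=78, retention_adjust=1.0))
print(curve_number_runoff(40.0, cn=78, retention_adjust=1.3))
```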
[Real world instantaneous emission simulation for light-duty diesel vehicle].
Huang, Cheng; Chen, Chang-Hong; Dai, Pu; Li, Li; Huang, Hai-Ying; Cheng, Zhen; Jia, Ji-Hong
2008-10-01
Core architecture and input parameters of CMEM model were introduced to simulation the second by second vehicle emission rate on real world by taking a light-duty diesel car as a case. On-board test data by a portable emission measurement system were then used to validate the simulation results. Test emission factors of CO, THC, NO(x) and CO2 were respectively 0.81, 0.61, 2.09, and 193 g x km(-1), while calculated emission factors were 0.75, 0.47, 2.47, and 212 g x km(-1). The correlation coefficients reached 0.69, 0.69, 0.75, and 0.72. Simulated instantaneous emissions of the light duty diesel vehicle by CMEM model were strongly coherent with the transient driving cycle. By analysis, CO, THC, NO(x), and CO2 emissions would be reduced by 50%, 47%, 45%, and 44% after improving the traffic situation at the intersection. The result indicated that it is necessary and feasible to simulate the instantaneous emissions of mixed vehicle fleet in some typical traffic areas by the micro-scale vehicle emission model.
Sogbedji, Jean M; McIsaac, Gregory F
2006-01-01
Assessing the accuracy of agronomic and water quality simulation models in different soils, land-use systems, and environments provides a basis for using and improving these models. We evaluated the performance of the ADAPT model for simulating riverine nitrate-nitrogen (NO3-N) export from a 1500-km2 watershed in central Illinois, where approximately 85% of the land is used for maize-soybean production and tile drainage is common. Soil chemical properties, crop nitrogen (N) uptake coefficient, dry matter ratio, and a denitrification reduction coefficient were used as calibration parameters to optimize the fit between measured and simulated NO3-N load from the watershed for the 1989 to 1993 period. The applicability of the calibrated parameter values was tested by using these values for simulating the 1994 to 1997 period on the same watershed. Willmott's index of agreement ranged from 0.91 to 0.97 for daily, weekly, monthly, and annual comparisons of riverine nitrate N loads. Simulation accuracy generally decreased as the time interval decreased. Willmott's index for simulated crop yields ranged from 0.91 to 0.99; however, observed crop yields were used as input to the model. The partial N budget results suggested that 52 to 72 kg N ha(-1) yr(-1) accumulated in the soil, but simulated biological N fixation associated with soybeans was considerably greater than literature values for the region. Improvement of the N fixation algorithms and incorporation of mechanisms that describe soybean yield in response to environmental conditions appear to be needed to improve the performance of the model.
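Willmott's index of agreement used in the evaluation above can be computed as in the following sketch; the load values are illustrative only.

```python
import numpy as np

def willmott_d(obs, sim):
    """Willmott's index of agreement: 1 minus the ratio of the sum of squared
    errors to the potential error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    obar = obs.mean()
    num = np.sum((sim - obs) ** 2)
    den = np.sum((np.abs(sim - obar) + np.abs(obs - obar)) ** 2)
    return 1.0 - num / den

# illustrative weekly riverine NO3-N loads (kg N)
obs = np.array([120, 90, 60, 150, 200, 80])
sim = np.array([110, 95, 70, 140, 180, 85])
print(round(willmott_d(obs, sim), 3))
```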
Logue, Jennifer M; Klepeis, Neil E; Lobscheid, Agnes B; Singer, Brett C
2014-01-01
Residential natural gas cooking burners (NGCBs) can emit substantial quantities of pollutants, and they are typically used without venting range hoods. We quantified pollutant concentrations and occupant exposures resulting from NGCB use in California homes. A mass-balance model was applied to estimate time-dependent pollutant concentrations throughout homes in Southern California and the exposure concentrations experienced by individual occupants. We estimated nitrogen dioxide (NO2), carbon monoxide (CO), and formaldehyde (HCHO) concentrations for 1 week each in summer and winter for a representative sample of Southern California homes. The model simulated pollutant emissions from NGCBs as well as NO2 and CO entry from outdoors, dilution throughout the home, and removal by ventilation and deposition. Residence characteristics and outdoor concentrations of NO2 and CO were obtained from available databases. We inferred ventilation rates, occupancy patterns, and burner use from household characteristics. We also explored proximity to the burner(s) and the benefits of using venting range hoods. Replicate model executions using independently generated sets of stochastic variable values yielded estimated pollutant concentration distributions with geometric means varying by <10%. The simulation model estimated that, in homes using NGCBs without coincident use of venting range hoods, 62%, 9%, and 53% of occupants are routinely exposed to NO2, CO, and HCHO levels that exceed acute health-based standards and guidelines. NGCB use increased the sample median of the highest simulated 1-hr indoor concentrations by 100, 3,000, and 20 ppb for NO2, CO, and HCHO, respectively. Reducing pollutant exposures from NGCBs should be a public health priority. Simulation results suggest that regular use of even moderately effective venting range hoods would dramatically reduce the percentage of homes in which concentrations exceed health-based standards.
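A minimal single-zone version of the mass-balance model described above might look like the following. All parameter names and values (air change rate, deposition rate, emission rate) are illustrative assumptions, not those of the published multi-zone model.

```python
import numpy as np

def indoor_concentration(t_hours, emission_ug_per_h, volume_m3,
                         ach_per_h, k_dep_per_h, c_out_ug_m3, c0_ug_m3=0.0):
    """Well-mixed single-zone pollutant concentration (ug/m3) over time.

    dC/dt = E/V + a*C_out - (a + k)*C, solved with a forward Euler step.
    """
    dt = 0.01  # hours
    times = np.arange(0.0, t_hours + dt, dt)
    c = np.empty_like(times)
    c[0] = c0_ug_m3
    loss = ach_per_h + k_dep_per_h
    source = emission_ug_per_h / volume_m3 + ach_per_h * c_out_ug_m3
    for i in range(1, times.size):
        c[i] = c[i - 1] + dt * (source - loss * c[i - 1])
    return times, c

t, c = indoor_concentration(t_hours=2.0, emission_ug_per_h=8000.0, volume_m3=300.0,
                            ach_per_h=0.5, k_dep_per_h=0.2, c_out_ug_m3=20.0)
print(round(c[-1], 1))  # concentration after 2 h of burner use
```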
Aboulfotoh, Ahmed M
2018-03-01
Performance of continuous mesophilic high solids anaerobic digestion (HSAD) was simulated using Anaerobic Digestion Model No. 1 (ADM1), under different conditions (solids concentrations, sludge retention time (SRT), organic loading rate (OLR), and type of sludge). Implementation of ADM1, using the proposed biochemical parameters, proved to be a useful tool for the prediction and control of HSAD as the model predicted the behavior of the tested sets of data with considerable accuracy, especially for SRT more than 13 days. The model was then used to investigate the possibility of changing the existing conventional anaerobic digestion (CAD) units in Gabal El Asfar water resource recovery facility into HSAD, instead of establishing new CAD units, and results show that the system will be feasible. HSAD will produce the same bioenergy combined with a decrease in capital, operational, and maintenance costs.
Inverse Modeling of Texas NOx Emissions Using Space-Based and Ground-Based NO2 Observations
NASA Technical Reports Server (NTRS)
Tang, Wei; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.
2013-01-01
Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations
NASA Astrophysics Data System (ADS)
Tang, W.; Cohan, D. S.; Lamsal, L. N.; Xiao, X.; Zhou, W.
2013-11-01
Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with decoupled direct method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations
NASA Astrophysics Data System (ADS)
Tang, W.; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.
2013-07-01
Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2 based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
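The DKF inversion referred to in the three records above can be sketched as a single Kalman update in which the DDM sensitivities play the role of the observation operator. The matrices below are toy values; in practice the update is iterated with repeated CAMx simulations until the scaling factors converge.

```python
import numpy as np

def dkf_update(x, P, y_obs, y_model, H, R):
    """One discrete Kalman filter update of emission scaling factors.

    x       : prior scaling factors (n_regions,)
    P       : prior error covariance (n_regions, n_regions)
    y_obs   : observed NO2 columns (n_obs,)
    y_model : columns simulated with the current scaling factors (n_obs,)
    H       : sensitivity of columns to scaling factors, e.g. from DDM (n_obs, n_regions)
    R       : observation error covariance (n_obs, n_obs)
    """
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (y_obs - y_model)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# toy example: two regions, three observation cells (all values are illustrative)
x = np.ones(2)
P = np.diag([0.5, 0.5])
H = np.array([[4.0, 0.5], [1.0, 3.0], [2.0, 2.0]])
y_model = H @ x
y_obs = np.array([6.0, 5.0, 5.5])
R = 0.25 * np.eye(3)
print(dkf_update(x, P, y_obs, y_model, H, R)[0])
```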
Monte Carlo Studies of Phase Separation in Compressible 2-dim Ising Models
NASA Astrophysics Data System (ADS)
Mitchell, S. J.; Landau, D. P.
2006-03-01
Using high-resolution Monte Carlo simulations, we study time-dependent domain growth in compressible 2-dim ferromagnetic (s=1/2) Ising models with continuous spin positions and spin-exchange moves [1]. Spins interact with slightly modified Lennard-Jones potentials, and we consider a model with no lattice mismatch and one with 4% mismatch. For comparison, we repeat calculations for the rigid Ising model [2]. For all models, large systems (512^2) and long times (10^6 MCS) are examined over multiple runs, and the growth exponent is measured in the asymptotic scaling regime. For the rigid model and the compressible model with no lattice mismatch, the growth exponent is consistent with the theoretically expected value of 1/3 [1] for Model B type growth. However, we find that non-zero lattice mismatch has a significant and unexpected effect on the growth behavior. Supported by the NSF. [1] D.P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, second ed. (Cambridge University Press, New York, 2005). [2] J. Amar, F. Sullivan, and R.D. Mountain, Phys. Rev. B 37, 196 (1988).
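Measuring the growth exponent in the asymptotic scaling regime amounts to a log-log fit of domain size against time. A minimal sketch, with synthetic data standing in for the Monte Carlo measurements, is shown below.

```python
import numpy as np

def growth_exponent(times_mcs, domain_sizes, t_min):
    """Fit R(t) ~ t^n on a log-log scale, using only the late-time (t >= t_min) data."""
    t = np.asarray(times_mcs, float)
    r = np.asarray(domain_sizes, float)
    mask = t >= t_min
    n, _ = np.polyfit(np.log(t[mask]), np.log(r[mask]), 1)
    return n

# synthetic data consistent with Model B growth (n = 1/3) plus a little noise
rng = np.random.default_rng(0)
t = np.logspace(2, 6, 30)
r = 1.5 * t ** (1.0 / 3.0) * np.exp(rng.normal(0, 0.02, t.size))
print(round(growth_exponent(t, r, t_min=1e4), 3))
```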
Annular mode changes in the CMIP5 simulations
NASA Astrophysics Data System (ADS)
Gillett, N. P.; Fyfe, J. C.
2013-03-01
We investigate simulated changes in the annular modes in historical and RCP 4.5 scenario simulations of 37 models from the fifth Coupled Model Intercomparison Project (CMIP5), a much larger ensemble of models than has previously been used to investigate annular mode trends, with improved resolution and forcings. The CMIP5 models on average simulate increases in the Northern Annular Mode (NAM) and Southern Annular Mode (SAM) in every season by 2100, and no CMIP5 model simulates a significant decrease in either the NAM or SAM in any season. No significant increase in the NAM or North Atlantic Oscillation (NAO) is simulated in response to volcanic aerosol, and no significant NAM or NAO response to solar irradiance variations is simulated. The CMIP5 models simulate a significant negative SAM response to volcanic aerosol in MAM and JJA, and a significant positive SAM response to solar irradiance variations in MAM, JJA and DJF.
Zarriello, Phillip J.; Ries, Kernell G.
2000-01-01
Water withdrawals from the 155-square-mile Ipswich River Basin in northeastern Massachusetts affect aquatic habitat, water quality, and recreational use of the river. To better understand the effects of these withdrawals on streamflow, particularly low flow, the Hydrological Simulation Program-FORTRAN (HSPF) was used to develop a watershed-scale precipitation-runoff model of the Ipswich River to simulate its hydrology and complex water-use patterns. An analytical solution was used to compute time series of streamflow depletions resulting from ground-water withdrawals at wells. The flow depletions caused by pumping from the wells were summed along with any surface-water withdrawals to calculate the total withdrawal along a stream reach. The water withdrawals, records of precipitation, and streamflow records on the Ipswich River at South Middleton and at Ipswich for the period 1989-93 were used to calibrate the model. Model-fit analysis indicates that the simulated flows matched observed flows over a wide range of conditions; at a minimum, the coefficient of model-fit efficiency indicates that the model explained 79 percent of the variance in the observed daily flow. Six alternative water-withdrawal and land-use scenarios were simulated with the model. Three scenarios were examined for the 1989-93 calibration period, and three scenarios were examined for the 1961-95 period to test alternative withdrawals and land use over a wider range of climatic conditions, and to compute 1-, 7-, and 30-day low-flow frequencies using a log-Pearson Type III analysis. Flow-duration curves computed from results of the 1989-93 simulations indicate that, at the South Middleton and Ipswich gaging stations, streamflows when no water withdrawals are being made are nearly identical to streamflows when no ground-water withdrawals are made. Streamflow under no water withdrawals at both stations is about an order of magnitude larger at the 99.8 percent exceedance probability than in simulations with only ground-water withdrawals. Long-term simulations indicate that the differences between streamflow with no water withdrawals and average 1989-93 water withdrawals are similar to the differences between simulations for the same water-use conditions made for the 1989-93 period at both sites. The 7-day, 10-year low flow (7Q10, a widely used regulatory statistic) at the South Middleton station was 4.1 cubic feet per second (ft3/s) with no water withdrawals and 1991 land use, 5.8 ft3/s with no withdrawals and undeveloped land, and 0.54 ft3/s with average 1989-93 water withdrawals and 1991 land use. The 7Q10 at the Ipswich station was about 8.3 ft3/s for simulations with no water withdrawals for both the 1991 land use and the undeveloped land conditions, and 2.7 ft3/s for simulations with average 1989-93 water withdrawals and 1991 land use. Simulation results indicate that surface-water withdrawals have little effect on the duration and frequency of low flows, but the cumulative ground-water withdrawals substantially decrease low flows.
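The coefficient of model-fit efficiency cited above, interpreted here as the Nash-Sutcliffe form (fraction of observed variance explained), can be computed as in this sketch; the flow values are illustrative.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Coefficient of model-fit efficiency: 1 - SSE / variance of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# illustrative daily flows (cubic feet per second)
obs = np.array([12.0, 9.5, 30.0, 55.0, 22.0, 8.0, 4.5])
sim = np.array([11.0, 10.5, 26.0, 60.0, 20.0, 9.0, 5.0])
print(round(nash_sutcliffe(obs, sim), 2))
```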
Application of Anaerobic Digestion Model No. 1 for simulating anaerobic mesophilic sludge digestion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendes, Carlos, E-mail: carllosmendez@gmail.com; Esquerre, Karla, E-mail: karlaesquerre@ufba.br; Matos Queiroz, Luciano, E-mail: lmqueiroz@ufba.br
2015-01-15
Highlights: • The behavior of an anaerobic reactor was evaluated through modeling. • Parametric sensitivity analysis was used to select the most sensitive ADM1 parameters. • The results indicate that the ADM1 was able to predict the experimental results. • An organic loading rate above 35 kg/m3 per day affects the performance of the process. - Abstract: Improving anaerobic digestion of sewage sludge by monitoring common indicators such as volatile fatty acids (VFAs), gas composition and pH is a suitable solution for better sludge management. Modeling is an important tool to assess and to predict process performance. The present study focuses on the application of the Anaerobic Digestion Model No. 1 (ADM1) to simulate the dynamic behavior of a reactor fed with sewage sludge under mesophilic conditions. Parametric sensitivity analysis is used to select the most sensitive ADM1 parameters for estimation using a numerical procedure, while other parameters are applied without any modification to the original values presented in the ADM1 report. The results indicate that the ADM1 model, after parameter estimation, was able to predict the experimental results for effluent acetate, propionate, composites, biogas flows and pH with reasonable accuracy. The simulation of the effect of organic shock loading clearly showed that an organic shock loading rate above 35 kg/m3 per day affects the performance of the reactor. The results demonstrate that simulations can be helpful to support decisions on predicting the anaerobic digestion process of sewage sludge.
Realistic modeling of deep brain stimulation implants for electromagnetic MRI safety studies.
Guerin, Bastien; Serano, Peter; Iacono, Maria Ida; Herrington, Todd M; Widge, Alik S; Dougherty, Darin D; Bonmassar, Giorgio; Angelone, Leonardo M; Wald, Lawrence L
2018-05-04
We propose a framework for electromagnetic (EM) simulation of deep brain stimulation (DBS) patients in radiofrequency (RF) coils. We generated a model of a DBS patient using post-operative head and neck computed tomography (CT) images stitched together into a 'virtual CT' image covering the entire length of the implant. The body was modeled as homogeneous. The implant path extracted from the CT data contained self-intersections, which we corrected automatically using an optimization procedure. Using the CT-derived DBS path, we built a model of the implant including electrodes, helicoidal internal conductor wires, loops, extension cables, and the implanted pulse generator. We also built four simplified models with straight wires, no extension cables and no loops to assess the impact of these simplifications on safety predictions. We simulated EM fields induced by the RF birdcage body coil in the body model, including at the DBS lead tip at both 1.5 Tesla (64 MHz) and 3 Tesla (123 MHz). We also assessed the robustness of our simulation results by systematically varying the EM properties of the body model and the position and length of the DBS implant (sensitivity analysis). The topology correction algorithm corrected all self-intersection and curvature violations of the initial path while introducing minimal deformations (open-source code available at http://ptx.martinos.org/index.php/Main_Page). The unaveraged lead-tip peak SAR predicted by the five DBS models (0.1 mm resolution grid) ranged from 12.8 kW kg -1 (full model, helicoidal conductors) to 43.6 kW kg -1 (no loops, straight conductors) at 1.5 T (3.4-fold variation) and 18.6 kW kg -1 (full model, straight conductors) to 73.8 kW kg -1 (no loops, straight conductors) at 3 T (4.0-fold variation). At 1.5 T and 3 T, the variability of lead-tip peak SAR with respect to the conductivity ranged between 18% and 30%. Variability with respect to the position and length of the DBS implant ranged between 9.5% and 27.6%.
Realistic modeling of deep brain stimulation implants for electromagnetic MRI safety studies
NASA Astrophysics Data System (ADS)
Guerin, Bastien; Serano, Peter; Iacono, Maria Ida; Herrington, Todd M.; Widge, Alik S.; Dougherty, Darin D.; Bonmassar, Giorgio; Angelone, Leonardo M.; Wald, Lawrence L.
2018-05-01
We propose a framework for electromagnetic (EM) simulation of deep brain stimulation (DBS) patients in radiofrequency (RF) coils. We generated a model of a DBS patient using post-operative head and neck computed tomography (CT) images stitched together into a ‘virtual CT’ image covering the entire length of the implant. The body was modeled as homogeneous. The implant path extracted from the CT data contained self-intersections, which we corrected automatically using an optimization procedure. Using the CT-derived DBS path, we built a model of the implant including electrodes, helicoidal internal conductor wires, loops, extension cables, and the implanted pulse generator. We also built four simplified models with straight wires, no extension cables and no loops to assess the impact of these simplifications on safety predictions. We simulated EM fields induced by the RF birdcage body coil in the body model, including at the DBS lead tip at both 1.5 Tesla (64 MHz) and 3 Tesla (123 MHz). We also assessed the robustness of our simulation results by systematically varying the EM properties of the body model and the position and length of the DBS implant (sensitivity analysis). The topology correction algorithm corrected all self-intersection and curvature violations of the initial path while introducing minimal deformations (open-source code available at http://ptx.martinos.org/index.php/Main_Page). The unaveraged lead-tip peak SAR predicted by the five DBS models (0.1 mm resolution grid) ranged from 12.8 kW kg‑1 (full model, helicoidal conductors) to 43.6 kW kg‑1 (no loops, straight conductors) at 1.5 T (3.4-fold variation) and 18.6 kW kg‑1 (full model, straight conductors) to 73.8 kW kg‑1 (no loops, straight conductors) at 3 T (4.0-fold variation). At 1.5 T and 3 T, the variability of lead-tip peak SAR with respect to the conductivity ranged between 18% and 30%. Variability with respect to the position and length of the DBS implant ranged between 9.5% and 27.6%.
1987-07-14
Centrifugal and Numerical Modeling of Buried Structures, Volume 2: Dynamic ... (U), Colorado Univ at Boulder, Dept of Civil, Environmental ... Structures were buried in a dry sand and tested in the centrifuge to simulate the effects of gravity-induced overburden stresses, which played a major role in ...
A Lagrangian Simulation of Subsonic Aircraft Exhaust Emissions
NASA Technical Reports Server (NTRS)
Schoeberl, M. R.; Morris, G. A.
1999-01-01
To estimate the effect of subsonic and supersonic aircraft exhaust on the stratospheric concentration of NO(y), we employ a trajectory model initialized with air parcels based on the standard release scenarios. The supersonic exhaust simulations are in good agreement with 2D and 3D model results and show a perturbation of about 1-2 ppbv of NO(y) in the stratosphere. The subsonic simulations show that subsonic emissions are almost entirely trapped below the 380 K potential temperature surface. Our subsonic results contradict results from most other models, which show exhaust products penetrating above 380 K, as summarized. The disagreement can likely be attributed to an excessive vertical diffusion in most models of the strong vertical gradient in NO(y) that forms at the boundary between the emission zone and the stratosphere above 380 K. Our results suggest that previous assessments of the impact of subsonic exhaust emission on the stratospheric region above 380 K should be considered to be an upper bound.
Evaluating Soil Carbon Sequestration in Central Iowa
NASA Astrophysics Data System (ADS)
Doraiswamy, P. C.; Hunt, E. R.; McCarty, G. W.; Daughtry, C. S.; Izaurralde, C.
2005-12-01
The potential for reducing atmospheric carbon dioxide (CO2) concentration through landuse and management of agricultural systems is of great interest worldwide. Agricultural soils can be a source of CO2 when not properly managed but can also be a sink for sequestering CO2 through proper soil and crop management. The EPIC-CENTURY biogeochemical model was used to simulate the baseline level of soil carbon from soil survey data and project changes in soil organic carbon (SOC) under different tillage and crop management practices for corn and soybean crops. The study was conducted in central Iowa (50 km x 100 km) to simulate changes in soil carbon over the next 50 years. The simulations were conducted in two phases; initially a 25-year period (1971-1995) was simulated using conventional tillage practices since there was a transition in new management after 1995. In the second 25-year period (1996-2020), four different modeling scenarios were applied namely; conventional tillage, mulch tillage, no-tillage and no-tillage with a rye cover crop over the winter. The model simulation results showed potential gains in soil carbon in the top layers of the soil for conservation tillage. The simulations were made at a spatial resolution of 1.6 km x 1.6 km and mapped for the study area. There was a mean reduction in soil organic carbon of 0.095 T/ha per year over the 25-year period starting with 1996 for the conventional tillage practice. However, for management practices of mulch tillage, no tillage and no tillage with cover crop there was an increase in soil organic carbon of 0.12, 0.202 and 0.263 T/ha respectively over the same 25-year period. These results are in general similar to studies conducted in this region.
Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.
Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo
2013-11-13
Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003-2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban loge(daily 1-hour maximum NO2). When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background loge(NO2) and 38% for rural loge(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural loge(NO2) but more marked for urban loge(NO2). Even if correlations between model and monitor data appear reasonably strong, additive classical measurement error in model data may lead to appreciable bias in health effect estimates. As process-based air pollution models become more widely used in epidemiological time-series analysis, assessments of error impact that include statistical simulation may be useful.
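The attenuation mechanism studied above can be demonstrated with a small simulation. The sketch below uses a linear regression stand-in for the Poisson time-series analysis, so the numbers only illustrate the qualitative effect of additive classical error in the exposure series.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_slope_with_error(n_days, beta, sd_true, sd_error, n_sims=500):
    """Average estimated slope when classical error is added to the exposure."""
    estimates = []
    for _ in range(n_sims):
        x_true = rng.normal(0.0, sd_true, n_days)            # true exposure
        y = beta * x_true + rng.normal(0.0, 1.0, n_days)      # linearised outcome
        x_err = x_true + rng.normal(0.0, sd_error, n_days)    # mismeasured exposure
        estimates.append(np.polyfit(x_err, y, 1)[0])
    return np.mean(estimates)

true_beta = 0.10
print(mean_slope_with_error(1095, true_beta, sd_true=1.0, sd_error=0.0))  # ~0.10
print(mean_slope_with_error(1095, true_beta, sd_true=1.0, sd_error=0.7))  # attenuated
```

For these assumed variances the classical-error theory predicts attenuation by roughly sd_true^2 / (sd_true^2 + sd_error^2), i.e. about one third of the effect is lost.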
Guo, Lisha; Vanrolleghem, Peter A
2014-02-01
An activated sludge model for greenhouse gases no. 1 was calibrated with data from a wastewater treatment plant (WWTP) without control systems and validated with data from three similar plants equipped with control systems. Special about the calibration/validation approach adopted in this paper is that the data are obtained from simulations with a mathematical model that is widely accepted to describe effluent quality and operating costs of actual WWTPs, the Benchmark Simulation Model No. 2 (BSM2). The calibration also aimed at fitting the model to typical observed nitrous oxide (N₂O) emission data, i.e., a yearly average of 0.5% of the influent total nitrogen load emitted as N₂O-N. Model validation was performed by challenging the model in configurations with different control strategies. The kinetic term describing the dissolved oxygen effect on the denitrification by ammonia-oxidizing bacteria (AOB) was modified into a Haldane term. Both original and Haldane-modified models passed calibration and validation. Even though their yearly averaged values were similar, the two models presented different dynamic N₂O emissions under cold temperature conditions and control. Therefore, data collected in such situations can potentially permit model discrimination. Observed seasonal trends in N₂O emissions are simulated well with both original and Haldane-modified models. A mechanistic explanation based on the temperature-dependent interaction between heterotrophic and autotrophic N₂O pathways was provided. Finally, while adding the AOB denitrification pathway to a model with only heterotrophic N₂O production showed little impact on effluent quality and operating cost criteria, it clearly affected N2O emission productions.
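The substitution of a Haldane term for the usual Monod term in the dissolved oxygen dependence can be illustrated as follows; the half-saturation and inhibition constants are placeholders, not the calibrated values from the study.

```python
import numpy as np

def monod(s, k):
    return s / (k + s)

def haldane(s, k, k_i):
    # substrate (or DO) inhibition at high concentrations
    return s / (k + s + s ** 2 / k_i)

do = np.linspace(0.0, 4.0, 9)                 # dissolved oxygen, g O2 m-3
print(np.round(monod(do, 0.5), 2))            # saturates with increasing DO
print(np.round(haldane(do, 0.5, 1.0), 2))     # peaks and then declines with DO
```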
Perspectives on Simulation and Miniaturization. Professional Paper No. 1472.
ERIC Educational Resources Information Center
McCluskey, Michael R.
Simulation--here defined as a physical, procedural, or symbolic representation of certain aspects of a functioning system, or as a working model or representation of a real world system--has at least four areas of application: (1) training where the objective of simulation is to provide the trainee with a learning environment that will facilitate…
ERIC Educational Resources Information Center
Dragoset, Lisa; Gordon, Anne
2010-01-01
This report describes work using nationally representative 2005 data from the School Nutrition Dietary Assessment-III (SNDA-III) study to develop a simulation model to predict the potential implications of changes in policies or practices related to school meals and school food environments. The model focuses on three domains of outcomes: (1) the…
Ludwig, T; Kern, P; Bongards, M; Wolf, C
2011-01-01
The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.
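A bare-bones genetic algorithm of the kind described above is sketched below. The cost function is a hypothetical stand-in for the calibrated GPS-X membrane model, and the parameter ranges are assumptions for illustration.

```python
import random

random.seed(42)

def cost(filtration_min, relaxation_min):
    """Hypothetical objective combining a fouling/cleaning-cost term with a
    throughput penalty. Not the real calibrated model."""
    duty = filtration_min / (filtration_min + relaxation_min)
    fouling = 0.02 * filtration_min ** 1.5          # longer filtration -> more fouling
    return fouling + 5.0 * (1.0 - duty) + 0.5 / relaxation_min

def genetic_algorithm(pop_size=30, generations=60):
    pop = [(random.uniform(4, 15), random.uniform(0.5, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: cost(*ind))
        parents = pop[: pop_size // 2]                       # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(a[i] + b[i]) / 2 for i in range(2)]    # crossover
            child[0] += random.gauss(0, 0.3)                 # mutation
            child[1] += random.gauss(0, 0.1)
            child[0] = min(max(child[0], 4.0), 15.0)         # keep within bounds
            child[1] = min(max(child[1], 0.5), 3.0)
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=lambda ind: cost(*ind))

best = genetic_algorithm()
print("filtration %.1f min, relaxation %.1f min" % best)
```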
Chen, Xiaojuan; Chen, Zhihua; Wang, Xun; Huo, Chan; Hu, Zhiquan; Xiao, Bo; Hu, Mian
2016-07-01
The present study focused on the application of anaerobic digestion model no. 1 (ADM1) to simulate biogas production from Hydrilla verticillata. Model simulation was carried out by implementing ADM1 in AQUASIM 2.0 software. Sensitivity analysis was used to select the most sensitive parameters for estimation using the absolute-relative sensitivity function. Among all the kinetic parameters, disintegration constant (kdis), hydrolysis constant of protein (khyd_pr), Monod maximum specific substrate uptake rate (km_aa, km_ac, km_h2) and half-saturation constants (Ks_aa, Ks_ac) affect biogas production significantly, which were optimized by fitting of the model equations to the data obtained from batch experiments. The ADM1 model after parameter estimation was able to well predict the experimental results of daily biogas production and biogas composition. The simulation results of evolution of organic acids, bacteria concentrations and inhibition effects also helped to get insight into the reaction mechanisms. Copyright © 2016. Published by Elsevier Ltd.
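The absolute-relative sensitivity ranking mentioned above (the parameter value times the local derivative of the output) can be approximated by finite differences. The toy first-order biogas model below is only a stand-in for ADM1, and the parameter values are illustrative.

```python
import numpy as np

def biogas_model(params, t_days):
    """Toy disintegration-limited cumulative biogas curve, B(t) = B_max*(1 - exp(-kdis*t)).
    A stand-in for ADM1, not the real model."""
    kdis, b_max = params["kdis"], params["b_max"]
    return b_max * (1.0 - np.exp(-kdis * np.asarray(t_days, float)))

def abs_rel_sensitivity(model, params, name, t_days, rel_step=0.01):
    """Absolute-relative sensitivity p * dy/dp, estimated by central finite differences."""
    p0 = params[name]
    hi = dict(params, **{name: p0 * (1 + rel_step)})
    lo = dict(params, **{name: p0 * (1 - rel_step)})
    dy_dp = (model(hi, t_days) - model(lo, t_days)) / (2 * p0 * rel_step)
    return p0 * dy_dp

params = {"kdis": 0.25, "b_max": 450.0}       # d-1, mL biogas per g VS (illustrative)
t = np.arange(1, 31)
for name in params:
    s = abs_rel_sensitivity(biogas_model, params, name, t)
    print(name, round(np.abs(s).mean(), 1))    # rank parameters by mean |sensitivity|
```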
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyler, L.L.; Trent, D.S.
The TEMPEST computer program was used to simulate fluid and thermal mixing in the cold leg and downcomer of a pressurized water reactor under emergency core cooling high-pressure injection (HPI), which is of concern to the pressurized thermal shock (PTS) problem. Application of the code was made in performing an analysis simulation of a full-scale Westinghouse three-loop plant design cold leg and downcomer. Verification/assessment of the code was performed and analysis procedures developed using data from Creare 1/5-scale experimental tests. Results of three simulations are presented. The first is a no-loop-flow case with high-velocity, low-negative-buoyancy HPI in a 1/5-scale model of a cold leg and downcomer. The second is a no-loop-flow case with low-velocity, high-negative density (modeled with salt water) injection in a 1/5-scale model. Comparison of TEMPEST code predictions with experimental data for these two cases shows good agreement. The third simulation is a three-dimensional model of one loop of a full-size Westinghouse three-loop plant design. Included in this latter simulation are loop components extending from the steam generator to the reactor vessel and a one-third sector of the vessel downcomer and lower plenum. No data were available for this case. For the Westinghouse plant simulation, thermally coupled conduction heat transfer in structural materials is included. The cold leg pipe and fluid mixing volumes of the primary pump, the stillwell, and the riser to the steam generator are included in the model. In the reactor vessel, the thermal shield, pressure vessel cladding, and pressure vessel wall are thermally coupled to the fluid and thermal mixing in the downcomer. The inlet plenum mixing volume is included in the model. A 10-min (real time) transient beginning at the initiation of HPI is computed to determine temperatures at the beltline of the pressure vessel wall.
Qinghua, Zhao; Jipeng, Li; Yongxing, Zhang; He, Liang; Xuepeng, Wang; Peng, Yan; Xiaofeng, Wu
2015-04-07
To evaluate, using three-dimensional finite element modeling and biomechanical simulation, the stability and stress conduction of two postoperative internal fixation constructs, multilevel posterior instrumentation (MPI) and MPI with anterior instrumentation (MPAI), after en bloc resection of a cervicothoracic vertebral tumor. Mimics software and computed tomography (CT) images were used to establish a three-dimensional (3D) model of vertebrae C5-T2 and to simulate C7 en bloc vertebral resection for the MPI and MPAI models. The geometry was then imported into the ANSYS finite element system; a 20 N distributed load (simulating body weight) was applied, and a 1 N·m torque was applied at the neutral point to simulate vertebral displacement and stress conduction and distribution under different motion modes, i.e., flexion, extension, lateral bending and rotation. The displacement of the two adjacent vertebral bodies in the MPI and MPAI models was smaller than that of the intact vertebral model, indicating better stability, with no significant difference between the two constructs. In terms of reducing the stress shielding effect, MPI was slightly better than MPAI. From a biomechanical point of view, both internal instrumentation strategies after cervicothoracic tumor en bloc resection may achieve excellent stability, with no significant differences between them; however, with better stress conduction, MPI is more advantageous for postoperative reconstruction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, L.A.
1980-06-01
In the Department of Energy test of the Edna Delcambre No. 1 well for recovery of natural gas from geopressured-geothermal brine, part of the test produced gas in excess of the amount that could be dissolved in the brine. Where this excess gas originated was unknown, and several theories were proposed to explain the source. This annual report describes IGT's work to match the observed gas/water production with computer simulation. Two different theoretical models were calculated in detail using available reservoir simulators. One model considered the excess gas to be dispersed as small bubbles in pores. The other model considered the excess gas as a nearby free gas cap above the aquifer. Reservoir engineering analysis of the flow test data was used to determine the basic reservoir characteristics. The computer studies revealed that the dispersed gas model gave characteristically the wrong shape for plots of gas/water ratio, and no reasonable match of the calculated values could be made to the experimental results. The free gas cap model gave characteristically better shapes to the gas/water ratio plots if the initial edge of the free gas was only about 400 feet from the well. Because there were two other wells at approximately this distance (Delcambre No. 4 and No. 4A wells) which had a history of down-hole blowouts and mechanical problems, it appears that the source of the excess free gas is a separate horizon connected to the Delcambre No. 1 sand via these nearby wells. This conclusion is corroborated by the changes in gas composition when the excess gas occurs and by the geological studies, which indicate the nearest free gas cap to be several thousand feet away. The occurrence of this excess free gas can thus be explained by known reservoir characteristics, and no new model for gas entrapment or production is needed.
Application of a fast Newton-Krylov solver for equilibrium simulations of phosphorus and oxygen
NASA Astrophysics Data System (ADS)
Fu, Weiwei; Primeau, François
2017-11-01
Model drift due to inadequate spinup is a serious problem that complicates the interpretation of climate change simulations. Even after a 300 year spinup we show that solutions are not only still drifting but often drifting away from their eventual equilibrium over large parts of the ocean. Here we present a Newton-Krylov solver for computing cyclostationary equilibrium solutions of a biogeochemical model for the cycling of phosphorus and oxygen. In addition to using previously developed preconditioning strategies - time-averaging and coarse-graining the Jacobian matrix - we also introduce a new strategy: the adiabatic elimination of a fast variable (particulate organic phosphorus) by slaving it to a slow variable (dissolved inorganic phosphorus). We use transport matrices derived from the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels to implement and test the solver. We find that the new solver obtains seasonally-varying equilibrium solutions with no visible drift using no more than 80 simulation years.
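A Newton-Krylov steady-state solve of the kind described above can be sketched with SciPy on a toy two-box phosphorus model. The box model, rate constants, and source/sink terms are illustrative assumptions; the real application works on the full transport-matrix discretisation with the preconditioning strategies described in the abstract.

```python
import numpy as np
from scipy.optimize import newton_krylov

def rhs(state):
    """Tendencies for a toy surface/deep phosphate model (deep box 10x larger)."""
    po4_surf, po4_deep = state
    mixing = 0.05 * (po4_surf * -1.0 + po4_deep)          # vertical exchange
    export = 0.5 * po4_surf / (po4_surf + 0.1)            # biological export production
    d_surf = mixing - export + 0.01                       # small external P input
    d_deep = 0.1 * (export - mixing) - 0.002 * po4_deep   # remineralisation minus burial
    return np.array([d_surf, d_deep])

# find the equilibrium state directly instead of time-stepping to it
equilibrium = newton_krylov(rhs, np.array([0.1, 1.0]), f_tol=1e-10)
print(equilibrium)
```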
NASA Technical Reports Server (NTRS)
Roselle, Shawn J.; Schere, Kenneth L.; Chu, Shao-Hang
1994-01-01
There is increasing recognition that controls on NO(x) emissions may be necessary, in addition to existing and future Volatile Organic Compounds (VOC) controls, for the abatement of ozone (O3) over portions of the United States. This study compares various combinations of anthropogenic NO(x) and VOC emission reductions through a series of model simulations. A total of 6 simulations were performed with the Regional Oxidant Model (ROM) for a 9-day period in July 1988. Each simulation reduced anthropogenic NO(x) and VOC emissions across-the-board by different amounts. Maximum O3 concentrations for the period were compared between the simulations. Comparison of the simulations suggests that: (1) NO(x) controls may be more effective than VOC controls in reducing peak O3 over most of the eastern United States; (2) VOC controls are most effective in urban areas having large sources of emissions; (3) NO(x) controls may increase O3 near large point sources; and (4) the benefit gained from increasing the amount of VOC controls may lessen as the amount of NO(x) control is increased. This paper has been reviewed in accordance with the U.S. Environmental Protection Agency's peer and administrative review policies and approved for presentation and publication. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, L.A.; Randolph, P.L.
1979-01-01
A paper presented by the Institute of Gas Technology (IGT) at the Third Geopressured-Geothermal Energy Conference hypothesized that the high ratio of produced gas to produced water from the No. 1 sand in the Edna Delcambre No. 1 well was due to free gas trapped in pores by imbibition over geological time. This hypothesis was examined in relation to preliminary test data which reported only average gas to water ratios over the roughly 2-day steps in flow rate. Subsequent public release of detailed test data revealed substantial departures from the previously reported computer simulation results. Also, data now in the public domain reveal the existence of a gas cap on the aquifer tested. This paper describes IGT's efforts to match the observed gas/water production with computer simulation. Two models for the occurrence and production of gas in excess of that dissolved in the brine have been used. One model considers the gas to be dispersed in pores by imbibition, and the other model considers the gas as a nearby free gas cap above the aquifer. The studies revealed that the dispersed gas model characteristically gave the wrong shape to plots of gas production on the gas/water ratio plots, such that no reasonable match to the flow data could be achieved. The free gas cap model gave a characteristically better shape to the production plots and could provide an approximate fit to the data if the edge of the free gas cap is only about 400 feet from the well. Because the geological structure maps indicate the free gas cap to be several thousand feet away and the computer simulation results match the distance to the nearby Delcambre Nos. 4 and 4A wells, it appears that the source of the excess free gas in the test of the No. 1 sand may be from these nearby wells. The gas source is probably a separate gas zone and is brought into contact with the No. 1 sand via a conduit around the No. 4 well.
NASA Astrophysics Data System (ADS)
Calitri, Francesca; Necpalova, Magdalena; Lee, Juhwan; Zaccone, Claudio; Spiess, Ernst; Herrera, Juan; Six, Johan
2016-04-01
Organic cropping systems have been promoted as a sustainable alternative to minimize the environmental impacts of conventional practices. Relatively little is known about the potential to reduce NO3-N leaching through the large-scale adoption of organic practices. Moreover, the potential to mitigate NO3-N leaching, and thus N pollution, under future climate change through organic farming remains unknown and highly uncertain. Here, we compared regional NO3-N leaching from organic and conventional cropping systems in Switzerland using the terrestrial biogeochemical process-based model DayCent. The objectives of this study are 1) to calibrate and evaluate the model for NO3-N leaching measured under various management practices from three experiments at two sites in Switzerland; 2) to estimate regional NO3-N leaching patterns and their spatial uncertainty in conventional and organic cropping systems (with and without cover crops) for future climate change scenario A1B; 3) to explore the sensitivity of NO3-N leaching to changes in soil and climate variables; and 4) to assess the nitrogen use efficiency for conventional and organic cropping systems with and without cover crops under climate change. The data for model calibration/evaluation were derived from field experiments conducted in Liebefeld (canton Bern) and Eschikon (canton Zürich). These experiments evaluated effects of various cover crops and N fertilizer inputs on NO3-N leaching. The preliminary results suggest that the model was able to explain 50 to 83% of the inter-annual variability in the measured soil drainage (RMSE from 12.32 to 16.89 cm y-1). The annual NO3-N leaching was also simulated satisfactorily (RMSE = 3.94 to 6.38 g N m-2 y-1), although the model had difficulty reproducing the inter-annual variability in the NO3-N leaching losses correctly (R2 = 0.11 to 0.35). Future climate datasets (2010-2099) from 10 regional climate models (RCMs) were used in the simulations. Regional NO3-N leaching predictions for a conventional cropping system with a three-year rotation (silage maize, potatoes and winter wheat) in the Zurich and Bern cantons varied from 6.30 to 16.89 g N m-2 y-1 over a 30-year period. Further simulations and analyses will follow to provide insights into the driving variables and patterns of N losses by leaching in response to changes from conventional to organic cropping systems, and to climate change.
NASA Astrophysics Data System (ADS)
Kuhnle, Alan
2009-11-01
In [1], Liow et al. discern a general feature of the occurrence trajectories of biological species: the periods of rise and fall of a typical species are about as long as the period of dominance. In this work, an individual-based model of biological evolution that was developed by Rikvold and Zia in [2] is investigated, but no analogous feature is observed in the simulated species populations. Instead, the periods of rise and fall of a simulated species cannot always be sensibly defined; when it does make sense to define these quantities, they are quite short and independent of the period of dominance. [4pt] [1] Liow, L. H., Skaug, H. J., Ergon, T., Schweder, T.: Global occurence trajectories of microfossils: Is the rise and persistence of species influenced by environmental volatility? Manuscript for Paleobiology, 5 Dec 2008 [0pt] [2] Rikvold, P.A., Zia, R.K.P.: Punctuated equilibria and 1/f noise in a biological coevolution model with individual-based dynamics. Physical Review E 68, 031913 (2003)
Modelling the Effects of Information Campaigns Using Agent-Based Simulation
2006-04-01
... individual i (±1). The incorporation of media effects into Equation (1) results in a social impact model of the ... that minority opinions often survived in a social margin [17]. Nevertheless, compared to the situation where there is no media effect in the simulation ... The analysis presented in this paper combines word-of-mouth communication and mass media broadcasting into a single line of analysis. The effects of ...
NASA Astrophysics Data System (ADS)
Bharatham, Kavitha; Bharatham, Nagakumar; Kwon, Yong Jung; Lee, Keun Woo
2008-12-01
Allosteric inhibition of protein tyrosine phosphatase 1B (PTP1B), has paved a new path to design specific inhibitors for PTP1B, which is an important drug target for the treatment of type II diabetes and obesity. The PTP1B1-282-allosteric inhibitor complex crystal structure lacks α7 (287-298) and moreover there is no available 3D structure of PTP1B1-298 in open form. As the interaction between α7 and α6-α3 helices plays a crucial role in allosteric inhibition, α7 was modeled to the PTP1B1-282 in open form complexed with an allosteric inhibitor (compound-2) and a 5 ns MD simulation was performed to investigate the relative orientation of the α7-α6-α3 helices. The simulation conformational space was statistically sampled by clustering analyses. This approach was helpful to reveal certain clues on PTP1B allosteric inhibition. The simulation was also utilized in the generation of receptor based pharmacophore models to include the conformational flexibility of the protein-inhibitor complex. Three cluster representative structures of the highly populated clusters were selected for pharmacophore model generation. The three pharmacophore models were subsequently utilized for screening databases to retrieve molecules containing the features that complement the allosteric site. The retrieved hits were filtered based on certain drug-like properties and molecular docking simulations were performed in two different conformations of protein. Thus, performing MD simulation with α7 to investigate the changes at the allosteric site, then developing receptor based pharmacophore models and finally docking the retrieved hits into two distinct conformations will be a reliable methodology in identifying PTP1B allosteric inhibitors.
Clark, Brian R.; Hart, Rheannon M.
2009-01-01
The Mississippi Embayment Regional Aquifer Study (MERAS) was conducted with support from the Groundwater Resources Program of the U.S. Geological Survey Office of Groundwater. This report documents the construction and calibration of a finite-difference groundwater model for use as a tool to quantify groundwater availability within the Mississippi embayment. To approximate the differential equation, the MERAS model was constructed with the U.S. Geological Survey's modular three-dimensional finite-difference code, MODFLOW-2005; the preconditioned conjugate gradient solver within MODFLOW-2005 was used for the numerical solution technique. The model area boundary is approximately 78,000 square miles and includes eight States with approximately 6,900 miles of simulated streams, 70,000 well locations, and 10 primary hydrogeologic units. The finite-difference grid consists of 414 rows, 397 columns, and 13 layers. Each model cell is 1 square mile with varying thickness by cell and by layer. The simulation period extends from January 1, 1870, to April 1, 2007, for a total of 137 years and 69 stress periods. The first stress period is simulated as steady state to represent predevelopment conditions. Areal recharge is applied throughout the MERAS model area using the MODFLOW-2005 Recharge Package. Irrigation, municipal, and industrial wells are simulated using the Multi-Node Well Package. There are 43 streams simulated by the MERAS model. Each stream or river in the model area was simulated using the Streamflow-Routing Package. The perimeter of the model area and the base of the flow system are represented as no-flow boundaries. The downgradient limit of each model layer is a no-flow boundary, which approximates the extent of water with less than 10,000 milligrams per liter of dissolved solids. The MERAS model was calibrated by making manual changes to parameter values and examining residuals for hydraulic heads and streamflow. Additional calibration was achieved through alternate use of UCODE-2005 and PEST. Simulated heads were compared to 55,786 hydraulic-head measurements from 3,245 wells in the MERAS model area. Values of root mean square error between simulated and observed hydraulic heads of all observations ranged from 8.33 feet in 1919 to 47.65 feet in 1951, though only six root mean square error values are greater than 40 feet for the entire simulation period. Simulated streamflow generally is lower than measured streamflow for streams with streamflow less than 1,000 cubic feet per second, and greater than measured streamflow for streams with streamflow more than 1,000 cubic feet per second. Simulated streamflow is underpredicted for 18 observations and overpredicted for 10 observations in the model. These differences in streamflow illustrate the large uncertainty in model inputs such as predevelopment recharge, overland flow, pumpage (from stream and aquifer), precipitation, and observation weights. The groundwater-flow budget indicates changes in flow into (inflows) and out of (outflows) the model area during the pregroundwater-irrigation period (pre-1870) to 2007. Total flow (sum of inflows or outflows) through the model ranged from about 600 million gallons per day prior to development to 18,197 million gallons per day near the end of the simulation. The pumpage from wells represents the largest outflow components with a net rate of 18,197 million gallons per day near the end of the model simulation in 2006. 
Groundwater outflows are offset primarily by inflow from aquifer storage and recharge.
COMSOL-Based Modeling and Simulation of SnO2/rGO Gas Sensor for Detection of NO2.
Yaghouti Niyat, Farshad; Shahrokh Abadi, M H
2018-02-01
Despite SIESTA and COMSOL being increasingly used for simulating the sensing mechanism of gas sensors, there are no modeling and simulation reports in the literature for NO2 detection with rGO/SnO2-based sensors. In the present study, we model, simulate, and characterize an rGO/SnO2-based NO2 gas sensor using COMSOL by solving Poisson's equations under the associated boundary conditions for mass, heat and electrical transport. To perform the simulation, we use an exposure model to supply the required NO2, a heat transfer model to obtain the reaction temperature, and an electrical model to characterize the sensor's response in the presence of the gas. We characterize the sensor's response at different NO2 concentrations and working temperatures and compare the results with the experimental data reported by Zhang et al. The simulated sensor shows good agreement with the real sensor, with some inconsistencies due to differences between the practical conditions in the real chamber and the conditions applied to the analytical equations. The results also show that the method can be used to define and predict the behavior of rGO-based gas sensors before the fabrication process.
Modelling of diesel engine fuelled with biodiesel using engine simulation software
NASA Astrophysics Data System (ADS)
Said, Mohd Farid Muhamad; Said, Mazlan; Aziz, Azhar Abdul
2012-06-01
This paper describes the modelling of a diesel engine that operates on biodiesel fuels. The model is used to simulate and predict the performance and combustion of the engine by simplifying the geometry of the engine components in the software. The model is produced using one-dimensional (1D) engine simulation software called GT-Power. The fuel properties library in the software is expanded to include palm oil based biodiesel fuels. Experimental work is performed to investigate the effect of biodiesel fuels on the heat release profiles and the engine performance curves. The model is validated with experimental data and good agreement is observed. The simulation results show that combustion characteristics and engine performance differ when biodiesel fuels are used instead of No. 2 diesel fuel.
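One common way 1D engine codes represent the heat release profiles compared in this kind of study is a Wiebe-function burn curve. The sketch below evaluates a generic Wiebe profile; it is not the GT-Power model or the paper's calibration, and the efficiency factor, form factor, and burn duration are typical placeholder values.

```python
# Illustrative Wiebe-function burn profile, commonly used in 1-D engine codes to
# prescribe heat release; parameters are generic placeholders, not GT-Power inputs.
import numpy as np

def wiebe_mass_fraction_burned(theta, theta_start, duration, a=5.0, m=2.0):
    """Cumulative mass fraction burned as a function of crank angle (degrees)."""
    x = np.clip((theta - theta_start) / duration, 0.0, 1.0)
    return 1.0 - np.exp(-a * x ** (m + 1.0))

theta = np.linspace(-30.0, 90.0, 241)                 # crank angle, deg
mfb = wiebe_mass_fraction_burned(theta, theta_start=-5.0, duration=50.0)
rohr = np.gradient(mfb, theta)                        # normalized rate of heat release

print("50% burn point (deg):", theta[np.argmin(abs(mfb - 0.5))])
```

Fuel-specific behaviour (for example biodiesel versus No. 2 diesel) would enter through different burn parameters and fuel properties.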
1987-09-01
can be reduced substantially, compared to using numerical methods to model interconnect parasitics. Although some accuracy might be lost with...conductor widths and spacings listed in Table 2.1, have been employed for simulation. In the first set of the simulations, planar dielectric inter...model, there are no restrictions on the number of dielectrics and conductors, and the shape of the conductors and the dielectric inter... In the
Flores-Alsina, Xavier; Solon, Kimberly; Kazadi Mbamba, Christian; Tait, Stephan; Gernaey, Krist V; Jeppsson, Ulf; Batstone, Damien J
2016-05-15
This paper proposes a series of extensions to functionally upgrade the IWA Anaerobic Digestion Model No. 1 (ADM1) to allow for plant-wide phosphorus (P) simulation. The close interplay between the P, sulfur (S) and iron (Fe) cycles requires a substantial (and unavoidable) increase in model complexity due to the involved three-phase physico-chemical and biological transformations. The ADM1 version, implemented in the plant-wide context provided by the Benchmark Simulation Model No. 2 (BSM2), is used as the basic platform (A0). Three different model extensions (A1, A2, A3) are implemented, simulated and evaluated. The first extension (A1) considers P transformations by accounting for the kinetic decay of polyphosphates (XPP) and potential uptake of volatile fatty acids (VFA) to produce polyhydroxyalkanoates (XPHA) by phosphorus accumulating organisms (XPAO). Two variant extensions (A2,1/A2,2) describe biological production of sulfides (SIS) by means of sulfate reducing bacteria (XSRB) utilising hydrogen only (autolithotrophically) or hydrogen plus organic acids (heterorganotrophically) as electron sources, respectively. These two approaches also consider a potential hydrogen sulfide (H2S) inhibition effect and stripping to the gas phase. The third extension (A3) accounts for chemical iron(III) (Fe(3+)) reduction to iron(II) (Fe(2+)) using hydrogen (H2) and sulfides (SIS) as electron donors. A set of pre/post interfaces between the Activated Sludge Model No. 2d (ASM2d) and ADM1 are furthermore proposed in order to allow for plant-wide (model-based) analysis and study of the interactions between the water and sludge lines. Simulation (A1-A3) results show that the ratio between soluble/particulate P compounds strongly depends on the pH and cationic load, which determines the capacity to form (or not) precipitation products. Implementations A1 and A2,1/A2,2 lead to a reduction in the predicted methane/biogas production (and potential energy recovery) compared to reference ADM1 predictions (A0). This reduction is attributed to two factors: (1) loss of electron equivalents due to sulfate (SO4(2-)) reduction by XSRB and storage of XPHA by XPAO; and, (2) decrease of acetoclastic and hydrogenotrophic methanogenesis due to H2S inhibition. Model A3 shows the potential for iron to remove free SIS (and consequently inhibition) and instead promote iron sulfide (XFeS) precipitation. It also reduces the quantities of struvite (MgNH4PO4·6H2O) and calcium phosphate (Ca3(PO4)2) that are formed due to its higher affinity for phosphate anions. This study provides a detailed analysis of the different model assumptions, the effect that operational/design conditions have on the model predictions and the practical implications of the proposed model extensions in view of plant-wide modelling/development of resource recovery strategies. Copyright © 2016 Elsevier Ltd. All rights reserved.
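The sulfate-reducer extensions hinge on substrate uptake kinetics combined with hydrogen sulfide inhibition. A generic ADM1-style uptake expression of that kind, Monod kinetics multiplied by a non-competitive inhibition term, is sketched below; the rate constants and inhibition constant are placeholders, not the published parameter set.

```python
# Generic ADM1-style uptake rate: Monod kinetics times a non-competitive H2S
# inhibition factor. Parameter values are illustrative, not the calibrated set.
def uptake_rate(k_m, S, K_S, X, S_h2s, K_I_h2s):
    """Uptake rate [kgCOD m-3 d-1] for substrate S by biomass X with H2S inhibition."""
    monod = S / (K_S + S)                        # substrate limitation
    inhibition = 1.0 / (1.0 + S_h2s / K_I_h2s)   # non-competitive H2S inhibition
    return k_m * monod * X * inhibition

# Example: hydrogen-utilising sulfate reducers at two sulfide levels (placeholder values)
for s_h2s in (0.0, 0.05):
    r = uptake_rate(k_m=35.0, S=1e-4, K_S=7e-6, X=0.5, S_h2s=s_h2s, K_I_h2s=0.02)
    print(f"S_h2s={s_h2s:.3f} kgCOD/m3 -> uptake rate {r:.3f} kgCOD m-3 d-1")
```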
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.
2017-01-01
This paper presents an analysis and simulation of evaporation and condensation at a motionless liquid/vapor interface. A 1-D model equation, emphasizing heat and mass transfer at the interface, is solved in two ways, and incorporated into a subgrid interface model within a CFD simulation. Simulation predictions are compared with experimental data from the CPST Engineering Design Unit tank, a cryogenic fluid management test tank in 1-g. The numerical challenge here is the physics of the liquid/vapor interface; pressurizing the ullage heats it by several degrees and sets up an interfacial temperature gradient that transfers heat to the liquid phase; the rate-limiting step of condensation is heat conduction through the liquid and vapor. This physics occurs in thin thermal layers O(1 mm) on either side of the interface, which are resolved by the subgrid interface model. An accommodation coefficient of 1.0 is used in the simulations, which is consistent with theory and measurements. This model is predictive of evaporation/condensation rates, that is, there is no parameter tuning.
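The accommodation coefficient quoted above enters kinetic, Hertz-Knudsen-type expressions for the interfacial mass flux. The sketch below evaluates such an expression for illustrative hydrogen conditions; it is not the paper's 1-D interface model or CFD setup, and the saturation pressure is a single placeholder value rather than a property correlation. The very large kinetic flux it returns is consistent with the abstract's point that heat conduction through the thin thermal layers, not interface kinetics, is the rate-limiting step.

```python
# Hertz-Knudsen-type estimate of the interfacial evaporation/condensation mass flux,
# with accommodation coefficient sigma = 1.0 as in the abstract. Property values
# below are crude placeholders, not the paper's fluid property model.
import math

def mass_flux(sigma, p_sat, p_vap, T_interface, R_specific):
    """Net mass flux [kg m-2 s-1]; positive = evaporation, negative = condensation."""
    return sigma * (p_sat - p_vap) / math.sqrt(2.0 * math.pi * R_specific * T_interface)

R_H2 = 4124.0           # specific gas constant of hydrogen, J/(kg K)
T_i = 21.0              # interface temperature, K (illustrative)
p_sat = 1.2e5           # saturation pressure at T_i, Pa (placeholder value)
p_ullage = 1.5e5        # ullage pressure after pressurization, Pa (illustrative)

j = mass_flux(sigma=1.0, p_sat=p_sat, p_vap=p_ullage, T_interface=T_i, R_specific=R_H2)
print(f"kinetic-limit interfacial mass flux: {j:.2e} kg/m^2/s (negative = condensation)")
```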
Wang, Zuowei; Xia, Siqing; Xu, Xiaoyin; Wang, Chenhui
2016-02-01
In this study, a one-dimensional multispecies model (ODMSM) was utilized to simulate NO3(-)-N and ClO4(-) reduction performance in two kinds of H2-based membrane-aeration biofilm reactors (H2-MBfR) under different operating conditions (e.g., NO3(-)-N/ClO4(-) loading rates, H2 partial pressure, etc.). Before the simulation, we conducted a sensitivity analysis of key parameters that would fluctuate under different environmental conditions, and then used the experimental data to calibrate the more sensitive parameters μ1 and μ2 (maximum specific growth rates of denitrifying bacteria and perchlorate-reducing bacteria) in the two H2-MBfRs; the difference between the two parameters' values in the two reactors may result from the different carbon sources fed to the reactors. From the simulation results for six different operating conditions (four in H2-MBfR 1 and two in H2-MBfR 2), the applicability of the model was confirmed, and the variation of the removal tendency under different operating conditions could be simulated well. In addition, the suitability of operating parameters (H2 partial pressure, etc.) could be judged, especially under high nutrient loading rates. To a certain degree, the model can provide theoretical guidance for determining operating parameters under specific conditions in practical applications.
1965-08-10
Artists used paintbrushes and airbrushes to recreate the lunar surface on each of the four models comprising the LOLA simulator. Project LOLA or Lunar Orbit and Landing Approach was a simulator built at Langley to study problems related to landing on the lunar surface. It was a complex project that cost nearly $2 million. James Hansen wrote: This simulator was designed to provide a pilot with a detailed visual encounter with the lunar surface; the machine consisted primarily of a cockpit, a closed-circuit TV system, and four large murals or scale models representing portions of the lunar surface as seen from various altitudes. The pilot in the cockpit moved along a track past these murals which would accustom him to the visual cues for controlling a spacecraft in the vicinity of the moon. Unfortunately, such a simulation--although great fun and quite aesthetic--was not helpful because flight in lunar orbit posed no special problems other than the rendezvous with the LEM, which the device did not simulate. Not long after the end of Apollo, the expensive machine was dismantled. (p. 379) Ellis J. White described the simulator as follows: Model 1 is a 20-foot-diameter sphere mounted on a rotating base and is scaled 1 in. = 9 miles. Models 2, 3, and 4 are approximately 15x40 feet scaled sections of model 1. Model 4 is a scaled-up section of the Crater Alphonsus and the scale is 1 in. = 200 feet. All models are in full relief except the sphere. -- Published in James R. Hansen, Spaceflight Revolution: NASA Langley Research Center From Sputnik to Apollo, (Washington: NASA, 1995), p. 379; Ellis J. White, Discussion of Three Typical Langley Research Center Simulation Programs, Paper presented at the Eastern Simulation Council (EAI's Princeton Computation Center), Princeton, NJ, October 20, 1966.
1964-10-28
Artists used paintbrushes and airbrushes to recreate the lunar surface on each of the four models comprising the LOLA simulator. Project LOLA or Lunar Orbit and Landing Approach was a simulator built at Langley to study problems related to landing on the lunar surface. It was a complex project that cost nearly $2 million. James Hansen wrote: "This simulator was designed to provide a pilot with a detailed visual encounter with the lunar surface; the machine consisted primarily of a cockpit, a closed-circuit TV system, and four large murals or scale models representing portions of the lunar surface as seen from various altitudes. The pilot in the cockpit moved along a track past these murals which would accustom him to the visual cues for controlling a spacecraft in the vicinity of the moon. Unfortunately, such a simulation--although great fun and quite aesthetic--was not helpful because flight in lunar orbit posed no special problems other than the rendezvous with the LEM, which the device did not simulate. Not long after the end of Apollo, the expensive machine was dismantled." (p. 379) Ellis J. White further described LOLA in his paper "Discussion of Three Typical Langley Research Center Simulation Programs," "Model 1 is a 20-foot-diameter sphere mounted on a rotating base and is scaled 1 in. = 9 miles. Models 2, 3, and 4 are approximately 15x40 feet scaled sections of model 1. Model 4 is a scaled-up section of the Crater Alphonsus and the scale is 1 in. = 200 feet. All models are in full relief except the sphere." -- Published in James R. Hansen, Spaceflight Revolution, NASA SP-4308, p. 379; Ellis J. White, "Discussion of Three Typical Langley Research Center Simulation Programs," Paper presented at the Eastern Simulation Council (EAI's Princeton Computation Center), Princeton, NJ, October 20, 1966.
Modeling effluent distribution and nitrate transport through an on-site wastewater system.
Hassan, G; Reneau, R B; Hagedorn, C; Jantrania, A R
2008-01-01
Properly functioning on-site wastewater systems (OWS) are an integral component of the wastewater system infrastructure necessary to renovate wastewater before it reaches surface or ground waters. There are a large number of factors, including soil hydraulic properties, effluent quality and dispersal, and system design, that affect OWS function. The ability to evaluate these factors using a simulation model would improve the capability to determine the impact of wastewater application on the subsurface soil environment. An existing subsurface drip irrigation system (SDIS) dosed with sequential batch reactor effluent (SBRE) was used in this study. This system has the potential to solve soil and site problems that limit OWS and to reduce the potential for environmental degradation. Soil water potentials (Psi(s)) and nitrate (NO(3)) migration were simulated at 55- and 120-cm depths within and downslope of the SDIS using a two-dimensional code in HYDRUS-3D. Results show that the average measured Psi(s) were -121 and -319 cm, whereas simulated values were -121 and -322 cm at 55- and 120-cm depths, respectively, indicating unsaturated conditions. Average measured NO(3) concentrations were 0.248 and 0.176 mmol N L(-1), whereas simulated values were 0.237 and 0.152 mmol N L(-1) at 55- and 120-cm depths, respectively. Observed unsaturated conditions decreased the potential for NO(3) to migrate in more concentrated plumes away from the SDIS. The agreement (high R(2) values approximately 0.97) between the measured and simulated Psi(s) and NO(3) concentrations indicates that HYDRUS-3D adequately simulated SBRE flow and NO(3) transport through the soil domain under a range of environmental and effluent application conditions.
Benchmark Simulation Model No 2: finalisation of plant layout and default control strategy.
Nopens, I; Benedetti, L; Jeppsson, U; Pons, M-N; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A
2010-01-01
The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in more than 300 publications worldwide demonstrates the interest in and need for such tools within the research community. Recent efforts within the IWA Task Group on "Benchmarking of control strategies for WWTPs" have focused on an extension of the benchmark simulation model. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pretreatment of wastewater as well as the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given.
Xing, Fei; Kettner, Albert J; Ashton, Andrew; Giosan, Liviu; Ibáñez, Carles; Kaplan, Jed O
2014-03-01
Fluvial sediment discharge can vary in response to climate changes and human activities, which in turn influences human settlements and ecosystems through coastline progradation and retreat. To understand the mechanisms controlling the variations of fluvial water and sediment discharge for the Ebro drainage basin, Spain, we apply the hydrological model HydroTrend. Comparison of model results with a 47-year observational record (AD 1953-1999) suggests that the model adequately captures annual average water discharge (simulated 408 m(3)s(-1) versus observed 425 m(3)s(-1)) and sediment load (simulated 0.3 Mt yr(-1) versus observed 0.28 ± 0.04 Mt yr(-1)) for the Ebro basin. A long-term (4000-year) simulation, driven by paleoclimate and anthropogenic land cover change scenarios, indicates that water discharge is controlled by the changes in precipitation, which has a high annual variability but no long-term trend. Modeled suspended sediment load, however, has an increasing trend over time, which is closely related to anthropogenic land cover variations with no significant correlation to climatic changes. The simulation suggests that 4,000 years ago the annual sediment load to the ocean was 30.5 Mt yr(-1), which increased over time to 47.2 Mt yr(-1) (AD 1860-1960). In the second half of the 20th century, the emplacement of large dams resulted in a dramatic decrease in suspended sediment discharge, eventually reducing the flux to the ocean by more than 99% (mean value changes from 38.1 Mt yr(-1) to 0.3 Mt yr(-1)). Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Oettl, Dietmar; Uhrner, Ulrich
2011-02-01
Based on two recent publications using Lagrangian dispersion models to simulate NO-NO2-O3 chemistry for industrial plumes, a similar modified approach was implemented using GRAL-C (Graz Lagrangian Model with Chemistry) and tested on two urban applications. In the hybrid dispersion model GRAL-C, the transport and turbulent diffusion of primary species such as NO and NO2 are treated in a Lagrangian framework while those of O3 are treated in an Eulerian framework. GRAL-C was employed on a one-year street canyon simulation in Berlin and on a four-day simulation during a winter season in Graz, the second biggest city in Austria. In contrast to Middleton D.R., Jones A.R., Redington A.L., Thomson D.J., Sokhi R.S., Luhana L., Fisher B.E.A. (2008. Lagrangian modelling of plume chemistry for secondary pollutants in large industrial plumes. Atmospheric Environment 42, 415-427) and Alessandrini S., Ferrero E. (2008. A Lagrangian model with chemical reactions: application in real atmosphere. Proceedings of the 12th Int. Conf. on Harmonization within atmospheric dispersion modelling for regulatory purposes. Croatian Meteorological Journal, 43, ISSN: 1330-0083, 235-239) the treatment of ozone was modified in order to facilitate urban scale simulations encompassing dense road networks. For the street canyon application, modelled daily mean NOx/NO2 concentrations deviated by +0.4%/-15% from observations, while the correlations for NOx and NO2 were 0.67 and 0.76 respectively. NO2 concentrations were underestimated in summer, but were captured well for other seasons. In Graz, fair agreement between observed and modelled NOx and NO2 values was obtained. Simulated diurnal cycles of NO2 and O3 matched observations reasonably well, although O3 was underestimated during the day. A possible explanation here might lie in the non-consideration of volatile organic compound (VOC) chemistry.
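The NO-NO2-O3 conversion in such dispersion models is often reduced to the Leighton (photostationary-state) cycle: NO2 photolysis producing NO and O3, and the NO + O3 titration reaction re-forming NO2. The sketch below integrates those two reactions to steady state; the photolysis frequency, rate constant, and initial mixing ratios are typical daytime placeholders, and this is an illustration of the chemistry class rather than the GRAL-C implementation.

```python
# Leighton-cycle (photostationary state) sketch: NO2 photolysis and the NO + O3
# titration reaction, integrated to steady state with forward Euler. Values are
# typical daytime placeholders, not GRAL-C parameters.
j_no2 = 8.0e-3        # NO2 photolysis frequency, 1/s (midday placeholder)
k_no_o3 = 4.4e-4      # NO + O3 rate constant, 1/(ppb s) (approximate at ~298 K)

no, no2, o3 = 40.0, 20.0, 30.0    # initial mixing ratios, ppb (illustrative)
dt = 0.1                          # time step, s
for _ in range(int(3600 / dt)):   # integrate one hour
    p = j_no2 * no2               # NO2 + hv -> NO + O3
    l = k_no_o3 * no * o3         # NO + O3 -> NO2
    no += (p - l) * dt
    no2 += (l - p) * dt
    o3 += (p - l) * dt

print(f"steady state (ppb): NO={no:.1f}, NO2={no2:.1f}, O3={o3:.1f}")
print("photostationary ratio J[NO2]/(k[NO][O3]) =",
      round(j_no2 * no2 / (k_no_o3 * no * o3), 2))
```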
NASA Technical Reports Server (NTRS)
Funke, B.; Baumgaertner, A.; Calisto, M.; Egorova, T.; Jackman, C. H.; Kieser, J.; Krivolutsky, A.; Lopez-Puertas, M.; Marsh, D. R.; Reddmann, T.;
2010-01-01
We have compared composition changes of NO, NO2, H2O2, O3, N2O, HNO3, N2O5, HNO4, ClO, HOCl, and ClONO2 as observed by the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) on Envisat in the aftermath of the "Halloween" solar proton event (SPE) in October/November 2003 at 25-0.01 hPa in the Northern hemisphere (40-90 N) and simulations performed by the following atmospheric models: the Bremen 2D model (B2dM) and Bremen 3D Chemical Transport Model (B3dCTM), the Central Aerological Observatory (CAO) model, FinROSE, the Hamburg Model of the Neutral and Ionized Atmosphere (HAMMONIA), the Karlsruhe Simulation Model of the Middle Atmosphere (KASIMA), the ECHAM5/MESSY Atmospheric Chemistry (EMAC) model, the modeling tool for SOlar Climate Ozone Links studies (SOCOL and SOCOLi), and the Whole Atmosphere Community Climate Model (WACCM4). The large number of participating models allowed for an evaluation of the overall ability of atmospheric models to reproduce observed atmospheric perturbations generated by SPEs, particularly with respect to NO(y) and ozone changes. We have further assessed the meteorological conditions and their implications on the chemical response to the SPE in both the models and observations by comparing temperature and tracer (CH4 and CO) fields. Simulated SPE-induced ozone losses agree on average within 5% with the observations. Simulated NO(y) enhancements around 1 hPa, however, are typically 30% higher than indicated by the observations, which can be partly attributed to an overestimation of simulated electron-induced ionization. The analysis of the observed and modeled NO(y) partitioning in the aftermath of the SPE has demonstrated the need to implement additional ion chemistry (HNO3 formation via ion-ion recombination and water cluster ions) into the chemical schemes. An overestimation of observed H2O2 enhancements by all models hints at an underestimation of the OH/HO2 ratio in the upper polar stratosphere during the SPE. The analysis of chlorine species perturbations has shown that the encountered differences between models and observations, particularly the underestimation of observed ClONO2 enhancements, are related to a smaller availability of ClO in the polar night region already before the SPE. In general, the intercomparison has demonstrated that differences in the meteorology and/or initial state of the atmosphere in the simulations cause relevant variability in the model results, even on a short timescale of only a few days.
Ma, Xiaosu; Chien, Jenny Y; Johnson, Jennal; Malone, James; Sinha, Vikram
2017-08-01
The purpose of this prospective, model-based simulation approach was to evaluate the impact of various rapid-acting mealtime insulin dose-titration algorithms on glycemic control (hemoglobin A1c [HbA1c]). Seven stepwise, glucose-driven insulin dose-titration algorithms were evaluated with a model-based simulation approach by using insulin lispro. Pre-meal blood glucose readings were used to adjust insulin lispro doses. Two control dosing algorithms were included for comparison: no insulin lispro (basal insulin+metformin only) or insulin lispro with fixed doses without titration. Of the seven dosing algorithms assessed, daily adjustment of insulin lispro dose, when glucose targets were met at pre-breakfast, pre-lunch, and pre-dinner, sequentially, demonstrated greater HbA1c reduction at 24 weeks, compared with the other dosing algorithms. Hypoglycemic rates were comparable among the dosing algorithms except for higher rates with the insulin lispro fixed-dose scenario (no titration), as expected. The inferior HbA1c response for the "basal plus metformin only" arm supports the additional glycemic benefit with prandial insulin lispro. Our model-based simulations support a simplified dosing algorithm that does not include carbohydrate counting, but that includes glucose targets for daily dose adjustment to maintain glycemic control with a low risk of hypoglycemia.
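The dosing algorithms compared in such studies share a simple structure: compare a pre-meal glucose reading with a target band and step the corresponding dose up or down. The sketch below is a generic stepwise titration rule of that kind; it is not the specific algorithm, thresholds, or step sizes evaluated in the study, and which pre-meal reading drives which dose is itself a design choice in such algorithms.

```python
# Generic stepwise mealtime-insulin titration rule: adjust each meal's dose based on
# a pre-meal glucose reading. Targets and step sizes are illustrative placeholders,
# not the thresholds evaluated in the study.
def titrate_dose(current_dose_units, premeal_glucose_mgdl,
                 target_low=80.0, target_high=110.0, step_units=1.0):
    """Return the adjusted dose (units) for one meal for the next day."""
    if premeal_glucose_mgdl < target_low:        # below target: step the dose down
        return max(0.0, current_dose_units - step_units)
    if premeal_glucose_mgdl > target_high:       # above target: step the dose up
        return current_dose_units + step_units
    return current_dose_units                    # within target: keep the dose

doses = {"breakfast": 4.0, "lunch": 6.0, "dinner": 6.0}
readings = {"breakfast": 95.0, "lunch": 152.0, "dinner": 74.0}  # pre-meal mg/dL
doses = {meal: titrate_dose(doses[meal], readings[meal]) for meal in doses}
print(doses)   # e.g. the lunch dose steps up, the dinner dose steps down
```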
Loss of feed flow, steam generator tube rupture and steam line break thermohydraulic experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendler, O J; Takeuchi, K; Young, M Y
1986-10-01
The Westinghouse Model Boiler No. 2 (MB-2) steam generator test model at the Engineering Test Facility in Tampa, Florida, was reinstrumented and modified for performing a series of tests simulating steam generator accident transients. The transients simulated were: loss of feed flow, steam generator tube rupture, and steam line break events. This document presents a description of (1) the model boiler and the associated test facility, (2) the tests performed, and (3) the analyses of the test results.
Benchmark simulation model no 2: general protocol and exploratory case studies.
Jeppsson, U; Pons, M-N; Nopens, I; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A
2007-01-01
Over a decade ago, the concept of objectively evaluating the performance of control strategies by simulating them using a standard model implementation was introduced for activated sludge wastewater treatment plants. The resulting Benchmark Simulation Model No 1 (BSM1) has been the basis for a significant new development that is reported on here: Rather than only evaluating control strategies at the level of the activated sludge unit (bioreactors and secondary clarifier), the new BSM2 now allows the evaluation of control strategies at the level of the whole plant, including primary clarifier and sludge treatment with anaerobic sludge digestion. In this contribution, the decisions that have been made over the past three years regarding the models used within the BSM2 are presented and argued, with particular emphasis on the ADM1 description of the digester, the interfaces between activated sludge and digester models, the included temperature dependencies and the reject water storage. BSM2-implementations are now available in a wide range of simulation platforms and a ring test has verified their proper implementation, consistent with the BSM2 definition. This guarantees that users can focus on the control strategy evaluation rather than on modelling issues. Finally, for illustration, twelve simple operational strategies have been implemented in BSM2 and their performance evaluated. Results show that it is an interesting control engineering challenge to further improve the performance of the BSM2 plant (which is the whole idea behind benchmarking) and that integrated control (i.e. acting at different places in the whole plant) is certainly worthwhile to achieve overall improvement.
An integrated modeling approach to predict flooding on urban basin.
Dey, Ashis Kumar; Kamioka, Seiji
2007-01-01
Correct prediction of flood extents in urban catchments has become a challenging issue. The traditional urban drainage models that consider only the sewerage network are able to simulate the drainage system correctly only as long as there is no overflow from the network inlets or manholes. When such overflows exist due to insufficient drainage capacity of downstream pipes or channels, it becomes difficult to reproduce the actual flood extents using these traditional one-phase simulation techniques. On the other hand, the traditional 2D models that simulate the surface flooding resulting from rainfall and/or levee break do not consider the sewerage network. As a result, the actual flooding situation is rarely captured by the available traditional 1D and 2D models. This paper presents an integrated model that simultaneously simulates the sewerage network, river network and 2D mesh network to obtain correct flood extents. The model has been successfully applied to the Tenpaku basin (Nagoya, Japan), which experienced severe flooding with a maximum flood depth of more than 1.5 m on September 11, 2000, when heavy rainfall, 580 mm in 28 hrs (return period > 100 yr), occurred over the catchment. Close agreement between the simulated flood depths and observed data confirms that the present integrated modeling approach is able to reproduce the urban flooding situation accurately, which can rarely be obtained through the traditional 1D and 2D modeling approaches.
Capabilities of stochastic rainfall models as data providers for urban hydrology
NASA Astrophysics Data System (ADS)
Haberlandt, Uwe
2017-04-01
For planning of urban drainage systems using hydrological models, long, continuous precipitation series with high temporal resolution are needed. Since observed time series are often too short or not available everywhere, the use of synthetic precipitation is a common alternative. This contribution compares three precipitation models regarding their suitability to provide 5-minute continuous rainfall time series for a) sizing of drainage networks for urban flood protection and b) dimensioning of combined sewage systems for pollution reduction. The rainfall models are a parametric stochastic model (Haberlandt et al., 2008), a non-parametric probabilistic approach (Bárdossy, 1998) and a stochastic downscaling of dynamically simulated rainfall (Berg et al., 2013); all models are operated both as single site and multi-site generators. The models are applied with regionalised parameters assuming that there is no station at the target location. Rainfall and discharge characteristics are utilised for evaluation of the model performance. The simulation results are compared against results obtained from reference rainfall stations not used for parameter estimation. The rainfall simulations are carried out for the federal states of Baden-Württemberg and Lower Saxony in Germany and the discharge simulations for the drainage networks of the cities of Hamburg, Brunswick and Freiburg. Altogether, the results show comparable simulation performance for the three models, good capabilities for single site simulations but low skills for multi-site simulations. Remarkably, there is no significant difference in simulation performance between the tasks of flood protection and pollution reduction, so the models are able to simulate both the extremes and the long-term characteristics of rainfall equally well. Bárdossy, A., 1998. Generating precipitation time series using simulated annealing. Wat. Resour. Res., 34(7): 1737-1744. Berg, P., Wagner, S., Kunstmann, H., Schädler, G., 2013. High resolution regional climate model simulations for Germany: part I — validation. Climate Dynamics, 40(1): 401-414. Haberlandt, U., Ebner von Eschenbach, A.-D., Buchwald, I., 2008. A space-time hybrid hourly rainfall model for derived flood frequency analysis. Hydrol. Earth Syst. Sci., 12: 1353-1367.
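As a minimal illustration of the kind of synthetic series such generators produce, the sketch below draws an alternating wet/dry rectangular-pulse process at 5-minute resolution with exponentially distributed dry spells, wet-spell durations, and intensities. The parameters are arbitrary, and this toy model is far simpler than the parametric, probabilistic, or downscaling generators compared in the study.

```python
# Minimal alternating wet/dry rectangular-pulse rainfall generator at 5-minute
# resolution. Parameters (mean dry spell, wet spell, intensity) are arbitrary
# placeholders; the generators compared in the study are considerably richer.
import numpy as np

rng = np.random.default_rng(42)
dt_min = 5.0
n_steps = int(365 * 24 * 60 / dt_min)          # one year of 5-minute steps

mean_dry_h, mean_wet_h, mean_intensity = 30.0, 2.0, 3.0   # hours, hours, mm/h

series = np.zeros(n_steps)
t = 0
while t < n_steps:
    dry = int(rng.exponential(mean_dry_h) * 60 / dt_min)          # dry spell length
    wet = max(1, int(rng.exponential(mean_wet_h) * 60 / dt_min))  # wet spell length
    intensity = rng.exponential(mean_intensity)                    # mm/h, constant per event
    t += dry
    series[t:t + wet] = intensity * dt_min / 60.0                  # mm per 5-min step
    t += wet

print(f"annual total: {series.sum():.0f} mm, wet fraction: {(series > 0).mean():.3f}")
```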
An experimental investigation of jet plume simulation with solid circular cylinders
NASA Technical Reports Server (NTRS)
Reubush, D. E.
1974-01-01
An investigation has been conducted in the Langley 16-foot transonic tunnel to determine the effectiveness of utilizing solid circular cylinders to simulate the jet exhaust plume for a series of four isolated circular arc afterbodies with little or no flow separation. This investigation was conducted at Mach numbers from 0.40 to 1.30 at 0 deg angle of attack. Plume simulators with simulator diameter to nozzle exit diameter ratios of 0.82, 0.88, 0.98, and 1.00 were investigated with one of the four configurations while the 0.82 and 1.00 simulators were investigated with the other three. Reynolds number based on maximum model diameter varied from approximately 1.50 to 2.14 million.
Chaste: A test-driven approach to software development for biological modelling
NASA Astrophysics Data System (ADS)
Pitt-Francis, Joe; Pathmanathan, Pras; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Fletcher, Alexander G.; Mirams, Gary R.; Murray, Philip; Osborne, James M.; Walter, Alex; Chapman, S. Jon; Garny, Alan; van Leeuwen, Ingeborg M. M.; Maini, Philip K.; Rodríguez, Blanca; Waters, Sarah L.; Whiteley, Jonathan P.; Byrne, Helen M.; Gavaghan, David J.
2009-12-01
Chaste ('Cancer, heart and soft-tissue environment') is a software library and a set of test suites for computational simulations in the domain of biology. Current functionality has arisen from modelling in the fields of cancer, cardiac physiology and soft-tissue mechanics. It is released under the LGPL 2.1 licence. Chaste has been developed using agile programming methods. The project began in 2005 when it was reasoned that the modelling of a variety of physiological phenomena required both a generic mathematical modelling framework, and a generic computational/simulation framework. The Chaste project evolved from the Integrative Biology (IB) e-Science Project, an inter-institutional project aimed at developing a suitable IT infrastructure to support physiome-level computational modelling, with a primary focus on cardiac and cancer modelling. Program summary: Program title: Chaste; Catalogue identifier: AEFD_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFD_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: LGPL 2.1; No. of lines in distributed program, including test data, etc.: 5 407 321; No. of bytes in distributed program, including test data, etc.: 42 004 554; Distribution format: tar.gz; Programming language: C++; Operating system: Unix; Has the code been vectorised or parallelized?: Yes, parallelized using MPI; RAM: <90 Megabytes for two of the scenarios described in Section 6 of the manuscript (Monodomain re-entry on a slab or Cylindrical crypt simulation), up to 16 Gigabytes (distributed across processors) for full resolution bidomain cardiac simulation; Classification: 3; External routines: Boost, CodeSynthesis XSD, CxxTest, HDF5, METIS, MPI, PETSc, Triangle, Xerces; Nature of problem: Chaste may be used for solving coupled ODE and PDE systems arising from modelling biological systems; use of Chaste in two application areas is described in this paper: cardiac electrophysiology and intestinal crypt dynamics; Solution method: coupled multi-physics with PDE, ODE and discrete mechanics simulation; Running time: the largest cardiac simulation described in the manuscript takes about 6 hours to run on a single 3 GHz core; see the results section (Section 6) of the manuscript for discussion of parallel scaling.
Modelling an industrial anaerobic granular reactor using a multi-scale approach.
Feldman, H; Flores-Alsina, X; Ramin, P; Kjellberg, K; Jeppsson, U; Batstone, D J; Gernaey, K V
2017-12-01
The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1) extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark Simulation Model No 2 (BSM2) influent generator. All models are tested using two plant data sets corresponding to different operational periods (#D1, #D2). Simulation results reveal that the proposed approach can satisfactorily describe the transformation of organics, nutrients and minerals, the production of methane, carbon dioxide and sulfide and the potential formation of precipitates within the bulk (average deviation between computer simulations and measurements for both #D1, #D2 is around 10%). Model predictions suggest a stratified structure within the granule which is the result of: 1) applied loading rates, 2) mass transfer limitations and 3) specific (bacterial) affinity for substrate. Hence, inerts (XI) and methanogens (Xac) are situated in the inner zone, and this fraction lowers as the radius increases favouring the presence of acidogens (Xsu, Xaa, Xfa) and acetogens (Xc4, Xpro). Additional simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at: 1) different times; and, 2) reactor heights. Finally, the possibilities and opportunities offered by the proposed approach for conducting engineering optimization projects are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
RECOMMENDED METHODS FOR AMBIENT AIR MONITORING OF NO, NO2, NOY, AND INDIVIDUAL NOZ SPECIES
The most appropriate monitoring methods for reactive nitrogen oxides are identified subject to the requirements for diagnostic testing of air quality simulation models. Measurements must be made over 1 h or less and with an uncertainty of ±20% (10% for NO2) over a typical am...
NASA Astrophysics Data System (ADS)
Yousef, Adel K. M.; Taha, Ziad A.; Shehab, Abeer A.
2011-01-01
This paper describes the development of a computer model used to analyze the heat flow during pulsed Nd:YAG laser spot welding of dissimilar metals: low carbon steel (1020) to aluminum alloy (6061). The model is built using ANSYS FLUENT 3.6 software, where almost all of the environments were simulated to be similar to the experimental environments. A simulation analysis was implemented based on conduction heat transfer outside the keyhole, where no melting occurs. The effect of laser power and pulse duration was studied. Three peak powers (1, 1.66 and 2.5 kW) were applied during pulsed laser spot welding (keeping the energy constant), and the effect of two pulse durations (4 and 8 ms, with constant peak power) on the transient temperature distribution and weld pool dimensions was predicted using the present simulation. It was found that the present simulation model can give an indication for choosing the suitable laser parameters (i.e. pulse durations, peak power and interaction time required) during pulsed laser spot welding of dissimilar metals.
NASA Astrophysics Data System (ADS)
Li, Yanshun; Zhang, Qiang; Geng, Guannan; Zheng, Yixuan; Guo, Jianping
2017-04-01
Atmospheric NO2 near the surface has notable health effects and is a precursor of tropospheric ozone. In this work, we propose a novel method to estimate daily surface NO2 concentrations from the Ozone Monitoring Instrument (OMI) with improved accuracy. Two chemical transport models, GEOS-Chem and WRF/CMAQ, are used to simulate converting factors between OMI column densities and surface concentrations. GEOS-Chem is found to better capture the distribution of converting factors, while CMAQ has an advantage in simulating the magnitude. We combine the two models to calculate optimal values of converting factors and further constrain them by using colocated boundary layer heights (BLH) derived from fine-resolution sounding observations made at OMI overpass time. Calculated converting factors over Chinese Mainland vary by more than three orders of magnitude (10-18 to 10-15 μg·cm-1·molecule-1), indicating the complexity of the NO2 vertical structure over a large spatial extent. We generate a map of surface NO2 mass concentrations during June 2013 from OMI retrievals on 0.1° × 0.1° grids. Estimated concentrations from our novel method show reasonable spatial agreement with in situ chemiluminescent measurements (R = 0.70, Slope = 0.58, N = 353), which significantly outperforms estimations using only GEOS-Chem (R = 0.60, Slope = 0.20, N = 353) or WRF/CMAQ (R = 0.19, Slope = 0.52, N = 353) to simulate the converting factor. Preliminary results show that the novel method developed in this study could improve the capability of satellite sensors to quantify surface NO2 pollution.
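The core of such an approach is a multiplicative converting factor taken from the models: the ratio of the simulated surface concentration to the simulated tropospheric column, applied to the OMI column. A minimal sketch of that conversion is given below; the numbers are invented for illustration, and the BLH constraint and model blending of the actual method are not reproduced.

```python
# Minimal sketch of converting an OMI tropospheric NO2 column to a surface
# concentration with a model-derived converting factor (simulated surface
# concentration divided by simulated column). All numbers are illustrative.
def converting_factor(model_surface_ugm3, model_column_molec_cm2):
    """Converting factor in ug m-3 per (molecule cm-2)."""
    return model_surface_ugm3 / model_column_molec_cm2

def surface_from_column(omi_column_molec_cm2, factor):
    return omi_column_molec_cm2 * factor

# Illustrative grid-cell values (not real retrievals or model output)
model_surface = 25.0          # ug/m3 from the CTM at overpass time
model_column = 6.0e15         # molecules/cm2 from the same CTM
omi_column = 8.0e15           # molecules/cm2 retrieved by OMI

f = converting_factor(model_surface, model_column)
print(f"factor = {f:.2e} ug m-3 per molec cm-2,"
      f" estimated surface NO2 = {surface_from_column(omi_column, f):.1f} ug/m3")
```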
NASA Technical Reports Server (NTRS)
Majumdar, Alok
2013-01-01
The purpose of the paper is to present the analytical capability developed to model no-vent chill and fill of a cryogenic tank to support the CPST (Cryogenic Propellant Storage and Transfer) program. The Generalized Fluid System Simulation Program (GFSSP) was adapted to simulate the charge-hold-vent method of tank chilldown. GFSSP models were developed to simulate chilldown of the LH2 tank in the K-site Test Facility, and numerical predictions were compared with test data. The report also describes the modeling technique for simulating the chilldown of a cryogenic transfer line; GFSSP models were developed to simulate the chilldown of a long transfer line, and predictions were compared with test data.
Flores-Alsina, Xavier; Kazadi Mbamba, Christian; Solon, Kimberly; Vrecko, Darko; Tait, Stephan; Batstone, Damien J; Jeppsson, Ulf; Gernaey, Krist V
2015-11-15
There is a growing interest within the Wastewater Treatment Plant (WWTP) modelling community to correctly describe physico-chemical processes after many years of mainly focusing on biokinetics. Indeed, future modelling needs, such as a plant-wide phosphorus (P) description, require a major, but unavoidable, additional degree of complexity when representing cationic/anionic behaviour in Activated Sludge (AS)/Anaerobic Digestion (AD) systems. In this paper, a plant-wide aqueous phase chemistry module describing pH variations plus ion speciation/pairing is presented and interfaced with industry standard models. The module accounts for extensive consideration of non-ideality, including ion activities instead of molar concentrations and complex ion pairing. The general equilibria are formulated as a set of Differential Algebraic Equations (DAEs) instead of Ordinary Differential Equations (ODEs) in order to reduce the overall stiffness of the system, thereby enhancing simulation speed. Additionally, a multi-dimensional version of the Newton-Raphson algorithm is applied to handle the existing multiple algebraic inter-dependencies. The latter is reinforced with the Simulated Annealing method to increase the robustness of the solver, making the system less dependent on the initial conditions. Simulation results show pH predictions when describing Biological Nutrient Removal (BNR) by the activated sludge models (ASM) 1, 2d and 3 comparing the performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) treatment plant configuration under different anaerobic/anoxic/aerobic conditions. The same framework is implemented in the Benchmark Simulation Model No. 2 (BSM2) version of the Anaerobic Digestion Model No. 1 (ADM1) (WWTP3) as well, predicting pH values at different cationic/anionic loads. In this way, the general applicability/flexibility of the proposed approach is demonstrated, by implementing the aqueous phase chemistry module in some of the most frequently used WWTP process simulation models. Finally, it is shown how traditional wastewater modelling studies can be complemented with a rigorous description of aqueous phase and ion chemistry (pH, speciation, complexation). Copyright © 2015 Elsevier Ltd. All rights reserved.
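A minimal example of the kind of algebraic system such a module solves is the pH of a carbonate solution, found by Newton-Raphson iteration on the charge balance. The sketch below uses ideal activities and textbook equilibrium constants at 25 C; the module described above additionally handles activity corrections, ion pairing, and many more species, so this is only a toy analogue.

```python
# Newton-Raphson solution of the charge balance for a simple carbonate system,
# illustrating the kind of algebraic equilibrium problem solved in such a module.
# Ideal activities and textbook constants; total carbon and cation load are toy values.
import math

K1, K2, Kw = 10**-6.35, 10**-10.33, 1e-14     # carbonic acid and water constants
CT = 5e-3                                      # total inorganic carbon, mol/L
Na = 4e-3                                      # background cation (e.g. Na+), mol/L

def charge_balance(h):
    """Na+ + H+ - OH- - HCO3- - 2*CO3(2-) as a function of [H+]."""
    denom = h * h + K1 * h + K1 * K2
    hco3 = CT * K1 * h / denom
    co3 = CT * K1 * K2 / denom
    return Na + h - Kw / h - hco3 - 2.0 * co3

h = 1e-7                                       # initial guess for [H+]
for _ in range(50):                            # Newton-Raphson with numerical derivative
    f = charge_balance(h)
    dfdh = (charge_balance(h * 1.0001) - f) / (h * 0.0001)
    h = max(h - f / dfdh, 1e-14)               # keep [H+] positive

print(f"pH = {-math.log10(h):.2f}")
```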
Sándor, Renáta; Ehrhardt, Fiona; Brilli, Lorenzo; Carozzi, Marco; Recous, Sylvie; Smith, Pete; Snow, Val; Soussana, Jean-François; Dorich, Christopher D; Fuchs, Kathrin; Fitton, Nuala; Gongadze, Kate; Klumpp, Katja; Liebig, Mark; Martin, Raphaël; Merbold, Lutz; Newton, Paul C D; Rees, Robert M; Rolinski, Susanne; Bellocchi, Gianni
2018-06-11
Simulation models quantify the impacts on carbon (C) and nitrogen (N) cycling in grassland systems caused by changes in management practices. To support agricultural policies, it is however important to contrast the responses of alternative models, which can differ greatly in their treatment of key processes and in their response to management. We applied eight biogeochemical models at five grassland sites (in France, New Zealand, Switzerland, United Kingdom and United States) to compare the sensitivity of modelled C and N fluxes to changes in the density of grazing animals (from 100% to 50% of the original livestock densities), also in combination with decreasing N fertilization levels (reduced to zero from the initial levels). Simulated multi-model median values indicated that input reduction would lead to an increase in the C sink strength (negative net ecosystem C exchange) in intensive grazing systems: -64 ± 74 g C m-2 yr-1 (animal density reduction) and -81 ± 74 g C m-2 yr-1 (N and animal density reduction), against the baseline of -30.5 ± 69.5 g C m-2 yr-1 (LSU [livestock units] ≥ 0.76 ha-1 yr-1). Simulations also indicated a strong effect of N fertilizer reduction on N fluxes, e.g. N2O-N emissions decreased from 0.34 ± 0.22 (baseline) to 0.1 ± 0.05 g N m-2 yr-1 (no N fertilization). Simulated decline in grazing intensity had only limited impact on the N balance. The simulated pattern of enteric methane emissions was dominated by high model-to-model variability. The reduction in simulated offtake (animal intake + cut biomass) led to a doubling in net primary production per animal (increased by 11.6 ± 8.1 t C LSU-1 yr-1 across sites). The highest N2O-N intensities (N2O-N/offtake) were simulated at mown and extensively grazed arid sites. We show the possibility of using grassland models to determine sound mitigation practices while quantifying the uncertainties associated with the simulated outputs. Copyright © 2018 Elsevier B.V. All rights reserved.
Interannual Variability in Soil Trace Gas (CO2, N2O, NO) Fluxes and Analysis of Controllers
NASA Technical Reports Server (NTRS)
Potter, C.; Klooster, S.; Peterson, David L. (Technical Monitor)
1997-01-01
Interannual variability in flux rates of biogenic trace gases must be quantified in order to understand the differences between short-term trends and actual long-term change in biosphere-atmosphere interactions. We simulated interannual patterns (1983-1988) of global trace gas fluxes from soils using the NASA Ames model version of CASA (Carnegie-Ames-Stanford Approach) in a transient simulation mode. This ecosystem model has been recalibrated for simulations driven by satellite vegetation index data from the NOAA Advanced Very High Resolution Radiometer (AVHRR) over the mid-1980s. The predicted interannual pattern of soil heterotropic CO2 emissions indicates that relatively large increases in global carbon flux from soils occurred about three years following the strong El Nino Southern Oscillation (ENSO) event of 1983. Results for the years 1986 and 1987 showed an annual increment of +1 Pg (1015 g) C-CO2 emitted from soils, which tended to dampen the estimated global increase in net ecosystem production with about a two year lag period relative to plant carbon fixation. Zonal discrimination of model results implies that 80-90 percent of the yearly positive increments in soil CO2 emission during 1986-87 were attributable to soil organic matter decomposition in the low-latitudes (between 30 N and 30 S). Soils of the northern middle-latitude zone (between 30 N and 60 N) accounted for the residual of these annual increments. Total annual emissions of nitrogen trace gases (N2O and NO) from soils were estimated to vary from 2-4 percent over the time period modeled, a level of variability which is consistent with predicted interannual fluctuations in global soil CO2 fluxes. Interannual variability of precipitation in tropical and subtropical zones (30 N to 20 S) appeared to drive the dynamic inverse relationship between higher annual emissions of NO versus emissions of N2O. Global mean emission rates from natural (heterotrophic) soil sources over the period modeled (1983-1988) were estimated at 57.1 Pg C-CO2 yr-1, 9.8 Tg (1012 g) N-NO yr-1, and 9.7 Tg N-N2O yr-1. Chemical fertilizer contributions to global soil N gas fluxes were estimated at between 1.3 to 7.3 Tg N-NO yr-1, and 1.2 to 4.0 Tg N-N2O yr-1.
Borge, Rafael; Santiago, Jose Luis; de la Paz, David; Martín, Fernando; Domingo, Jessica; Valdés, Cristina; Sánchez, Beatriz; Rivas, Esther; Rozas, Mª Teresa; Lázaro, Sonia; Pérez, Javier; Fernández, Álvaro
2018-05-05
Air pollution continues to be one of the main issues in urban areas. In addition to air quality plans and emission abatement policies, additional measures, such as Madrid's short-term action NO2 protocol, are needed for high pollution episodes to avoid exceedances of hourly limit values under unfavourable meteorological conditions. In December 2016 there was a strong atmospheric stability episode that resulted in generalized high NO2 levels, causing stage 3 of the NO2 protocol to be triggered for the first time in Madrid (29th December). In addition to other traffic-related measures, this involves access restrictions to the city centre (50% to private cars). We simulated the episode with and without measures under a multi-scale modelling approach. A 1 km2 resolution modelling system based on WRF-SMOKE-CMAQ was applied to assess city-wide effects, while Star-CCM+ (a RANS CFD model) was used to investigate the effect at street level in a microscale domain in the city centre, focusing on Gran Vía Avenue. Changes in road traffic were simulated with the mesoscale VISUM model, incorporating real flux measurements during those days. The corresponding simulations suggest that the application of the protocol during this particular episode may have prevented concentrations from increasing by 24 μg·m-3 (14% with respect to the hypothetical no-action scenario) downtown, although it may have caused NO2 to increase slightly in the city outskirts due to traffic redistribution. Speed limitation and parking restrictions alone (stages 1 and 2 respectively) have a very limited effect. The microscale simulation provides consistent results but shows an important variability at street level, with reductions above 100 μg·m-3 in some spots inside Gran Vía. Although further research is needed, these results point out the need to implement short-term action plans and to apply a consistent multi-scale modelling assessment to optimize urban air quality abatement strategies. Copyright © 2018 Elsevier B.V. All rights reserved.
Coordinate Conversion Technique for OTH Backscatter Radar
1977-05-01
obliquity of the earth's equator (=23.0), A is the mean longitude of the sun measured in the ecliptic counterclockwise from the first point of...MODEL FOR F2-LAYER CORRECTION FACTORS - VERTICAL IONOGRAM 11. MODEL FOR F2-LAYER CORRECTION FACTORS - OBLIQUE IONOGRAM 12. ELEMENTS OF COMMON BLOCK...simulation in (1) to a given oblique ionogram generate range gradient factors to apply to foF2 and M(3000)F2 to force agreement; (3) from the
A high-resolution and observationally constrained OMI NO2 satellite retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Daniel L.; Lamsal, Lok N.; Loughner, Christopher P.
Here, this work presents a new high-resolution NO2 dataset derived from the NASA Ozone Monitoring Instrument (OMI) NO2 version 3.0 retrieval that can be used to estimate surface-level concentrations. The standard NASA product uses NO2 vertical profile shape factors from a 1.25° × 1° (~110 km × 110 km) resolution Global Model Initiative (GMI) model simulation to calculate air mass factors, a critical value used to determine observed tropospheric NO2 vertical columns. To better estimate vertical profile shape factors, we use a high-resolution (1.33 km × 1.33 km) Community Multi-scale Air Quality (CMAQ) model simulation constrained by in situ aircraft observations to recalculate tropospheric air mass factors and tropospheric NO2 vertical columns during summertime in the eastern US. In this new product, OMI NO2 tropospheric columns increase by up to 160% in city centers and decrease by 20–50% in the rural areas outside of urban areas when compared to the operational NASA product. Our new product shows much better agreement with the Pandora NO2 and Airborne Compact Atmospheric Mapper (ACAM) NO2 spectrometer measurements acquired during the DISCOVER-AQ Maryland field campaign. Furthermore, the correlation between our satellite product and EPA NO2 monitors in urban areas has improved dramatically: r2 = 0.60 in the new product vs. r2 = 0.39 in the operational product, signifying that this new product is a better indicator of surface concentrations than the operational product. Our work emphasizes the need to use both high-resolution and high-fidelity models in order to recalculate satellite data in areas with large spatial heterogeneities in NOx emissions. Although the current work is focused on the eastern US, the methodology developed in this work can be applied to other world regions to produce high-quality region-specific NO2 satellite retrievals.
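The recalculation described above amounts to replacing the coarse-model air mass factor (AMF) with one built from the high-resolution profile shape: the slant column is preserved, so the new vertical column is the old column rescaled by the ratio of the old to the new AMF. The sketch below shows that bookkeeping with a toy four-layer profile and invented scattering weights; it is not the NASA or CMAQ-based algorithm itself.

```python
# Sketch of rescaling a tropospheric NO2 vertical column with a new air mass factor
# computed from a higher-resolution profile shape. Scattering weights and partial
# columns are invented toy values; this is not the operational retrieval algorithm.
import numpy as np

def air_mass_factor(scattering_weights, partial_columns):
    """Column-weighted average of layer scattering weights."""
    w = np.asarray(scattering_weights, dtype=float)
    x = np.asarray(partial_columns, dtype=float)
    return (w * x).sum() / x.sum()

weights = [0.3, 0.6, 0.9, 1.1]                  # layer scattering weights (toy values)
coarse_profile = [1.0, 1.0, 1.0, 1.0]           # coarse model: NO2 spread evenly
fine_profile = [2.5, 0.8, 0.4, 0.3]             # fine model: NO2 concentrated near surface

amf_coarse = air_mass_factor(weights, coarse_profile)
amf_fine = air_mass_factor(weights, fine_profile)

vcd_coarse = 8.0e15                             # operational tropospheric column, molec/cm2
vcd_fine = vcd_coarse * amf_coarse / amf_fine   # slant column preserved

print(f"AMF coarse={amf_coarse:.2f}, fine={amf_fine:.2f},"
      f" rescaled column={vcd_fine:.2e} molec/cm2")
```

With the surface-weighted toy profile the column increases, mirroring the urban enhancements reported above.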
NASA Technical Reports Server (NTRS)
Marcus, S. L.; Ghil, M.; Dickey, J. O.
1994-01-01
Variations in atmospheric angular momentum (AAM) are examined in a three-year simulation of the large-scale atmosphere with perpetual January forcing. The simulation is performed with a version of the University of California at Los Angeles (UCLA) general circulation model that contains no tropical Madden-Julian Oscillation (MJO). In addition, the results of three shorter experiments with no topography are analyzed. The three-year standard topography run contains no significant intraseasonal AAM periodicity in the tropics, consistent with the lack of the MJO, but produces a robust, 42-day AAM oscillation in the Northern Hemisphere (NH) extratropics. The model tropics undergoes a barotropic, zonally symmetric oscillation, driven by an exchange of mass with the NH extratropics. No intraseasonal periodicity is found in the average tropical latent heating field, indicating that the model oscillation is dynamically rather than thermodynamically driven. The no-mountain runs fail to produce an intraseasonal AAM oscillation, consistent with a topographic origin for the NH extratropical oscillation in the standard model. The spatial patterns of the oscillation in the 500-mb height field, and the relationship of the extratropical oscillation to intraseasonal variations in the tropics, will be discussed in Part 2 of this study.
Avionics Simulation, Development and Software Engineering
NASA Technical Reports Server (NTRS)
Francis, Ronald C.; Settle, Gray; Tobbe, Patrick A.; Kissel, Ralph; Glaese, John; Blanche, Jim; Wallace, L. D.
2001-01-01
This monthly report summarizes the work performed under contract NAS8-00114 for Marshall Space Flight Center in the following tasks: 1) Purchase Order No. H-32831D, Task Order 001A, GPB Program Software Oversight; 2) Purchase Order No. H-32832D, Task Order 002, ISS EXPRESS Racks Software Support; 3) Purchase Order No. H-32833D, Task Order 003, SSRMS Math Model Integration; 4) Purchase Order No. H-32834D, Task Order 004, GPB Program Hardware Oversight; 5) Purchase Order No. H-32835D, Task Order 005, Electrodynamic Tether Operations and Control Analysis; 6) Purchase Order No. H-32837D, Task Order 007, SRB Command Receiver/Decoder; and 7) Purchase Order No. H-32838D, Task Order 008, AVGS/DART SW and Simulation Support
Aurisano, A.; Backhouse, C.; Hatcher, R.; ...
2015-12-23
The NOvA experiment is a two-detector, long-baseline neutrino experiment operating in the recently upgraded NuMI muon neutrino beam. Simulating neutrino interactions and backgrounds requires many steps including: the simulation of the neutrino beam flux using FLUKA and the FLUGG interface, cosmic ray generation using CRY, neutrino interaction modeling using GENIE, and a simulation of the energy deposited in the detector using GEANT4. To shorten generation time, the modeling of detector-specific aspects, such as photon transport, detector and electronics noise, and readout electronics, employs custom, parameterized simulation applications. We will describe the NOvA simulation chain, and present details on the techniques used in modeling photon transport near the ends of cells, and in developing a novel data-driven noise simulation. Due to the high intensity of the NuMI beam, the Near Detector samples a high rate of muons originating in the surrounding rock. In addition, due to its location on the surface at Ash River, MN, the Far Detector collects a large rate (~140 kHz) of cosmic muons. Furthermore, we will discuss the methods used in NOvA for overlaying rock muons and cosmic ray muons with simulated neutrino interactions and show how realistically the final simulation reproduces the preliminary NOvA data.
An equivalent circuit model for terahertz quantum cascade lasers: Modeling and experiments
NASA Astrophysics Data System (ADS)
Yao, Chen; Xu, Tian-Hong; Wan, Wen-Jian; Zhu, Yong-Hao; Cao, Jun-Cheng
2015-09-01
Terahertz quantum cascade lasers (THz QCLs) emitting at 4.4 THz are fabricated and characterized. An equivalent circuit model based on five-level rate equations is established to describe their characteristics. To illustrate the capability of the model, the steady-state and dynamic performance of the fabricated THz QCLs is simulated with it. Compared to sophisticated numerical methods, the presented model has the advantages of fast calculation and good compatibility with circuit simulation for system-level design and optimization. The validity of the model is verified by the experimental and numerical results. Project supported by the National Basic Research Program of China (Grant No. 2014CB339803), the National High Technology Research and Development Program of China (Grant No. 2011AA010205), the National Natural Science Foundation of China (Grant Nos. 61131006, 61321492, and 61404149), the Major National Development Project of Scientific Instrument and Equipment, China (Grant No. 2011YQ150021), the National Science and Technology Major Project, China (Grant No. 2011ZX02707), the Major Project, China (Grant No. YYYJ-1123-1), the International Collaboration and Innovation Program on High Mobility Materials Engineering of the Chinese Academy of Sciences, and the Shanghai Municipal Commission of Science and Technology, China (Grant No. 14530711300).
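The five-level rate equations behind the paper's equivalent circuit are not given in the abstract. The following is only a much-simplified, generic two-variable (carrier/photon) rate-equation sketch illustrating how such a model can be integrated to obtain steady-state and turn-on behaviour; all parameter values (tau_c, tau_p, g0, N_tr, etc.) are arbitrary placeholders, not those of the fabricated 4.4 THz devices.

```python
# Minimal carrier/photon rate-equation sketch (NOT the paper's five-level model).
# dN/dt = I/(qV) - N/tau_c - g0*(N - N_tr)*S
# dS/dt = Gamma*g0*(N - N_tr)*S - S/tau_p + beta*N/tau_c
import numpy as np
from scipy.integrate import solve_ivp

q = 1.602e-19                  # elementary charge (C)
V = 1e-15                      # active volume (m^3), placeholder
tau_c, tau_p = 1e-11, 5e-12    # carrier / photon lifetimes (s), placeholders
g0, N_tr = 1e-11, 1e21         # gain coefficient, transparency density, placeholders
Gamma, beta = 0.5, 1e-4        # confinement factor, spontaneous-emission factor

def rates(t, y, I):
    N, S = y
    G = g0 * (N - N_tr) * S
    dN = I / (q * V) - N / tau_c - G
    dS = Gamma * G - S / tau_p + beta * N / tau_c
    return [dN, dS]

# Steady-state photon density (proportional to output power) vs drive current:
for I in np.linspace(0.1, 1.0, 5):
    sol = solve_ivp(rates, (0.0, 5e-9), [0.0, 1.0], args=(I,), max_step=1e-12)
    print(f"I = {I:.2f} A  ->  S_ss ~ {sol.y[1, -1]:.3e} m^-3")
```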
This report provides detailed comparisons and sensitivity analyses of three candidate models, MESOPLUME, MESOPUFF, and MESOGRID. This was not a validation study; there was no suitable regional air quality data base for the Four Corners area. Rather, the models have been evaluated...
Nustad, Rochelle A.; Wood, Tamara M.; Bales, Jerad D.
2011-01-01
The U.S. Geological Survey, in cooperation with the North Dakota Department of Transportation, North Dakota State Water Commission, and U.S. Army Corps of Engineers, developed a two-dimensional hydrodynamic model of Devils Lake and Stump Lake, North Dakota, to be used as a hydrologic tool for evaluating the effects of different inflow scenarios on water levels, circulation, and the transport of dissolved solids through the lake. The numerical model, UnTRIM, and data primarily collected during 2006 were used to develop and calibrate the Devils Lake model. Performance of the Devils Lake model was tested using 2009 data. The Devils Lake model was applied to evaluate the effects of an extreme flooding event on water levels and the effects of hydrological modifications within the lake on the transport of dissolved solids through Devils Lake and Stump Lake. For the 2006 calibration, simulated water levels in Devils Lake compared well with measured water levels. The maximum simulated water level at site 1 was within 0.13 feet of the maximum measured water level in the calibration, which gives reasonable confidence that the Devils Lake model is able to accurately simulate the maximum water level at site 1 for the extreme flooding scenario. The timing and direction of wind-driven fluctuations in water levels on a short time scale (a few hours to a day) were reproduced well by the Devils Lake model. For this application, the Devils Lake model was not optimized for simulation of the current speed through bridge openings. In future applications, simulation of current speed through bridge openings could be improved by more accurate definition of the bathymetry and geometry of select areas in the model grid. As a test of the performance of the Devils Lake model, a simulation of 2009 conditions from April 1 through September 30, 2009, was performed. Overall, errors in inflow estimates affected the results for the 2009 simulation; however, for the rising phase of the lakes, the Devils Lake model accurately simulated the faster rate of rise in Devils Lake than in Stump Lake, and timing and direction of wind-driven fluctuations in water levels on a short time scale were reproduced well. To help the U.S. Army Corps of Engineers determine the elevation to which the protective embankment for the city of Devils Lake should be raised, an extreme flooding scenario based on an inflow of one-half the probable maximum flood was simulated. Under the conditions and assumptions of the extreme flooding scenario, the water level for both lakes reached a maximum water level around 1,461.9 feet above the National Geodetic Vertical Datum of 1929. One factor limiting the extent of pumping from the Devils Lake State Outlet is sulfate concentrations in West Bay. If sulfate concentrations can be reduced in West Bay, pumping from the Devils Lake State Outlet potentially can increase. The Devils Lake model was used to simulate the transport of dissolved solids using specific conductance data as a surrogate for sulfate. Because the transport of dissolved solids was not calibrated, results from the simulations were not actual expected concentrations. However, the effects of hydrological modifications on the transport of dissolved solids could be evaluated by comparing the effects of hydrological modifications relative to a baseline scenario in which no hydrological modifications were made. 
Four scenarios were simulated: (1) baseline condition (no hydrological modification), (2) diversion of Channel A, (3) reduction of the area of water exchange between Main Bay and East Bay, and (4) combination of scenarios 2 and 3. Relative to scenario 1, mean concentrations in West Bay for scenarios 2 and 4 were reduced by approximately 9 percent. Given that there is no change in concentration for scenario 3, but about a 9-percent reduction in concentration for scenario 4, the diversion of Channel A was the only hydrologic modification that appeared to have the potential to reduce sulfate c
Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.
Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A
2011-01-01
Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
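The paper's actual stopping rule combines several criteria that are not reproduced in the abstract. The following is only a generic sketch of the underlying idea, assuming a black-box simulator sampled by Monte Carlo: keep adding batches of runs until the confidence interval of the output statistic of interest is narrow enough, relative to its value, for several consecutive checks. The model function and all thresholds are placeholders.

```python
# Generic Monte Carlo convergence check (a sketch, not the authors' exact criteria).
import numpy as np

rng = np.random.default_rng(0)

def run_model(params):
    """Placeholder for an expensive simulator; returns one scalar output."""
    return 10.0 + 2.0 * params[0] + np.sin(params[1]) + rng.normal(0, 0.5)

def mc_until_converged(batch=50, rel_tol=0.01, n_stable=3, max_runs=5000):
    outputs, stable = [], 0
    while len(outputs) < max_runs:
        # draw one batch of parameter sets (here: 2 uniform parameters)
        for p in rng.uniform(0, 1, size=(batch, 2)):
            outputs.append(run_model(p))
        y = np.asarray(outputs)
        mean = y.mean()
        half_ci = 1.96 * y.std(ddof=1) / np.sqrt(len(y))  # ~95% CI on the mean
        if half_ci < rel_tol * abs(mean):
            stable += 1
            if stable >= n_stable:     # converged for several consecutive checks
                break
        else:
            stable = 0
    return mean, half_ci, len(outputs)

mean, half_ci, n = mc_until_converged()
print(f"mean = {mean:.3f} +/- {half_ci:.3f} after {n} runs")
```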
A high-resolution and observationally constrained OMI NO2 satellite retrieval
NASA Astrophysics Data System (ADS)
Goldberg, Daniel L.; Lamsal, Lok N.; Loughner, Christopher P.; Swartz, William H.; Lu, Zifeng; Streets, David G.
2017-09-01
This work presents a new high-resolution NO2 dataset derived from the NASA Ozone Monitoring Instrument (OMI) NO2 version 3.0 retrieval that can be used to estimate surface-level concentrations. The standard NASA product uses NO2 vertical profile shape factors from a 1.25° × 1° (˜ 110 km × 110 km) resolution Global Model Initiative (GMI) model simulation to calculate air mass factors, a critical value used to determine observed tropospheric NO2 vertical columns. To better estimate vertical profile shape factors, we use a high-resolution (1.33 km × 1.33 km) Community Multi-scale Air Quality (CMAQ) model simulation constrained by in situ aircraft observations to recalculate tropospheric air mass factors and tropospheric NO2 vertical columns during summertime in the eastern US. In this new product, OMI NO2 tropospheric columns increase by up to 160 % in city centers and decrease by 20-50 % in the rural areas outside of urban areas when compared to the operational NASA product. Our new product shows much better agreement with the Pandora NO2 and Airborne Compact Atmospheric Mapper (ACAM) NO2 spectrometer measurements acquired during the DISCOVER-AQ Maryland field campaign. Furthermore, the correlation between our satellite product and EPA NO2 monitors in urban areas has improved dramatically: r2 = 0.60 in the new product vs. r2 = 0.39 in the operational product, signifying that this new product is a better indicator of surface concentrations than the operational product. Our work emphasizes the need to use both high-resolution and high-fidelity models in order to recalculate satellite data in areas with large spatial heterogeneities in NOx emissions. Although the current work is focused on the eastern US, the methodology developed in this work can be applied to other world regions to produce high-quality region-specific NO2 satellite retrievals.
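As an illustration of the recalculation step described above, the sketch below recomputes a tropospheric air mass factor (AMF) as the profile-weighted average of layer scattering weights and rescales the retrieved vertical column accordingly (VCD_new = VCD_old × AMF_old / AMF_new, since the slant column is unchanged). All arrays are made-up placeholders, not OMI, GMI or CMAQ values.

```python
# Rescaling a retrieved NO2 vertical column with a new a priori profile (sketch).
import numpy as np

# Placeholder per-layer quantities (surface to tropopause):
scattering_weights = np.array([0.35, 0.5, 0.7, 0.9, 1.1, 1.3])   # from radiative transfer
coarse_profile     = np.array([2.0, 1.5, 1.0, 0.8, 0.5, 0.2])    # coarse-model partial columns
fine_profile       = np.array([4.0, 2.0, 0.9, 0.6, 0.4, 0.1])    # high-res model partial columns

def tropospheric_amf(scat_w, partial_columns):
    """AMF = sum(w_l * x_l) / sum(x_l): scattering weights averaged with profile weights."""
    return np.sum(scat_w * partial_columns) / np.sum(partial_columns)

amf_coarse = tropospheric_amf(scattering_weights, coarse_profile)
amf_fine = tropospheric_amf(scattering_weights, fine_profile)

vcd_coarse = 5.0e15                    # molecules cm-2, placeholder column from the coarse AMF
scd = vcd_coarse * amf_coarse          # slant column is independent of the a priori
vcd_fine = scd / amf_fine              # column re-retrieved with the high-resolution profile

print(f"AMF coarse = {amf_coarse:.3f}, AMF fine = {amf_fine:.3f}")
print(f"VCD changes by {100 * (vcd_fine / vcd_coarse - 1):+.1f}%")
```

With a profile weighted more heavily toward the surface, where scattering weights are low, the AMF drops and the retrieved column rises, which is the behaviour described for city centers.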
NASA Astrophysics Data System (ADS)
Li, C. H.; Wu, L. C.; Chan, P. C.; Lin, M. L.
2016-12-01
The National Highway No. 3 - Tianliao III Bridge is located in the southwestern Taiwan mudstone area and crosses the Chekualin fault. Since the bridge was opened to traffic, it has been repaired 11 times. To understand the interaction behavior between thrust faulting and the bridge, a discrete element method-based software program, PFC, was applied to conduct a numerical analysis. A 3D model for simulating the thrust faulting and bridge was established, as shown in Fig. 1. In this conceptual model, the length and width were 50 and 10 m, respectively. Part of the box bottom was movable, simulating the displacement of the thrust fault. The overburden stratum had a height of 5 m with fault dip angles of 20° (Fig. 2). From bottom to top, the strata were mudstone, clay, and sand. The uplift was 1 m, which was 20% of the stratum thickness. In accordance with the investigation, the position of the fault tip was set, depending on the fault zone, and the bridge deformation was observed (Fig. 3). By setting "Monitoring Balls" in the numerical model to analyze bridge displacement, we determined that the bridge deck deflection increased as the uplift distance increased. Furthermore, the force caused by the loading of the bridge deck and fault dislocation was determined to cause a downward deflection of the P1 and P2 bridge piers. Finally, the fault deflection trajectory of the P4 pier displayed the maximum displacement (Fig. 4). Similar behavior was observed in both the numerical simulation and field monitoring data. Use of the discrete element model (PFC3D) to simulate the deformation behavior between thrust faulting and the bridge provided feedback for the design and improved planning of the bridge.
Advances in HYDRA and its application to simulations of Inertial Confinement Fusion targets
NASA Astrophysics Data System (ADS)
Marinak, M. M.; Kerbel, G. D.; Koning, J. M.; Patel, M. V.; Sepke, S. M.; Brown, P. N.; Chang, B.; Procassini, R.; Veitzer, S. A.
2008-11-01
We will outline new capabilities added to the HYDRA 2D/3D multiphysics ICF simulation code. These include a new SN multigroup radiation transport package (1D), constitutive models for elastic-plastic (strength) effects, and a mix model. A Monte Carlo burn package is being incorporated to model diagnostic signatures of neutrons, gamma rays and charged particles. A 3D MHD package that treats resistive MHD is available. Improvements to HYDRA's implicit Monte Carlo photonics package, including the addition of angular biasing, now enable integrated hohlraum simulations to complete in a substantially shorter time. The heavy ion beam deposition package now includes a new model for ion stopping power developed by the Tech-X Corporation, with improved accuracy below the Bragg peak. Examples will illustrate HYDRA's enhanced capabilities to simulate various aspects of inertial confinement fusion targets. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344. The work of Tech-X personnel was funded by the Department of Energy under Small Business Innovation Research Contract No. DE-FG02-03ER83797.
Evaluating strategies to reduce urban air pollution
NASA Astrophysics Data System (ADS)
Duque, L.; Relvas, H.; Silveira, C.; Ferreira, J.; Monteiro, A.; Gama, C.; Rafael, S.; Freitas, S.; Borrego, C.; Miranda, A. I.
2016-02-01
During the last years, specific air quality problems have been detected in the urban area of Porto (Portugal). Both PM10 and NO2 limit values have been surpassed in several air quality monitoring stations and, following the European legislation requirements, Air Quality Plans were designed and implemented to reduce those levels. In this sense, measures to decrease PM10 and NO2 emissions have been selected, these mainly related to the traffic sector, but also regarding the industrial and residential combustion sectors. The main objective of this study is to investigate the efficiency of these reduction measures with regard to the improvement of PM10 and NO2 concentration levels over the Porto urban region using a numerical modelling tool - The Air Pollution Model (TAPM). TAPM was applied over the study region, for a simulation domain of 80 × 80 km2 with a spatial resolution of 1 × 1 km2. The entire year of 2012 was simulated and set as the base year for the analysis of the impacts of the selected measures. Taking into account the main activity sectors, four main scenarios have been defined and simulated, with focus on: (1) hybrid cars; (2) a Low Emission Zone (LEZ); (3) fireplaces and (4) industry. The modelling results indicate that measures to reduce PM10 should be focused on residential combustion (fireplaces) and industrial activity and for NO2 the strategy should be based on the traffic sector. The implementation of all the defined scenarios will allow a total maximum reduction of 4.5% on the levels of both pollutants.
NASA Astrophysics Data System (ADS)
Mues, A.; Kuenen, J.; Hendriks, C.; Manders, A.; Segers, A.; Scholz, Y.; Hueglin, C.; Builtjes, P.; Schaap, M.
2014-01-01
In this study the sensitivity of the model performance of the chemistry transport model (CTM) LOTOS-EUROS to the description of the temporal variability of emissions was investigated. Currently the temporal release of anthropogenic emissions is described by European average diurnal, weekly and seasonal time profiles per sector. These default time profiles largely neglect the variation of emission strength with activity patterns, region, species, emission process and meteorology. The three sources dealt with in this study are combustion in energy and transformation industries (SNAP1), nonindustrial combustion (SNAP2) and road transport (SNAP7). First of all, the impact of neglecting the temporal emission profiles for these SNAP categories on simulated concentrations was explored. In a second step, we constructed more detailed emission time profiles for the three categories and quantified their impact on the model performance both separately as well as combined. The performance in comparison to observations for Germany was quantified for the pollutants NO2, SO2 and PM10 and compared to a simulation using the default LOTOS-EUROS emission time profiles. The LOTOS-EUROS simulations were performed for the year 2006 with a temporal resolution of 1 h and a horizontal resolution of approximately 25 × 25km2. In general the largest impact on the model performance was found when neglecting the default time profiles for the three categories. The daily average correlation coefficient for instance decreased by 0.04 (NO2), 0.11 (SO2) and 0.01 (PM10) at German urban background stations compared to the default simulation. A systematic increase in the correlation coefficient is found when using the new time profiles. The size of the increase depends on the source category, component and station. Using national profiles for road transport showed important improvements in the explained variability over the weekdays as well as the diurnal cycle for NO2. The largest impact of the SNAP1 and 2 profiles were found for SO2. When using all new time profiles simultaneously in one simulation, the daily average correlation coefficient increased by 0.05 (NO2), 0.07 (SO2) and 0.03 (PM10) at urban background stations in Germany. This exercise showed that to improve the performance of a CTM, a better representation of the distribution of anthropogenic emission in time is recommendable. This can be done by developing a dynamical emission model that takes into account regional specific factors and meteorology.
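As an illustration of how such sector-specific time profiles are typically applied (a sketch with made-up factors, not the LOTOS-EUROS defaults or the new profiles of this study), an annual sector total is distributed to a given hour by multiplying normalized monthly, day-of-week and hour-of-day factors:

```python
# Distributing an annual emission total to one hour with time profiles (sketch).
import numpy as np

annual_total = 1000.0          # e.g. kt of NOx per year for one SNAP sector (placeholder)
hours_per_year = 8760.0

# Dimensionless factors, each normalized so its mean is 1.0 (made-up values):
monthly = np.array([1.2, 1.2, 1.1, 1.0, 0.9, 0.8, 0.8, 0.8, 0.9, 1.0, 1.1, 1.2])
weekly  = np.array([1.05, 1.05, 1.05, 1.05, 1.05, 0.9, 0.85])         # Mon..Sun
hourly  = 1.0 + 0.4 * np.sin(2 * np.pi * (np.arange(24) - 9) / 24.0)  # crude diurnal cycle

monthly /= monthly.mean()
weekly  /= weekly.mean()
hourly  /= hourly.mean()

def hourly_emission(month, weekday, hour):
    """Emission rate for one hour: annual mean rate scaled by the three profiles."""
    mean_rate = annual_total / hours_per_year
    return mean_rate * monthly[month] * weekly[weekday] * hourly[hour]

# Example: a January (month 0) Monday (weekday 0) at 08:00
print(f"{hourly_emission(0, 0, 8):.4f} kt/h vs flat {annual_total / hours_per_year:.4f} kt/h")
```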
Lübken, Manfred; Wichern, Marc; Schlattmann, Markus; Gronauer, Andreas; Horn, Harald
2007-10-01
Knowledge of the net energy production of anaerobic fermenters is important for reliable modelling of the efficiency of anaerobic digestion processes. By using the Anaerobic Digestion Model No. 1 (ADM1) the simulation of biogas production and composition is possible. This paper shows the application and modification of ADM1 to simulate energy production of the digestion of cattle manure and renewable energy crops. The paper additionally presents an energy balance model, which enables the dynamic calculation of the net energy production. The model was applied to a pilot-scale biogas reactor. It was found in a simulation study that a continuous feeding and splitting of the reactor feed into smaller heaps do not generally have a positive effect on the net energy yield. The simulation study showed that the ratio of co-substrate to liquid manure in the inflow determines the net energy production when the inflow load is split into smaller heaps. Mathematical equations are presented to calculate the increase of biogas and methane yield for the digestion of liquid manure and lipids for different feeding intervals. Calculations of different kinds of energy losses for the pilot-scale digester showed high dynamic variations, demonstrating the significance of using a dynamic energy balance model.
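The paper's energy balance model is not reproduced in the abstract; the sketch below only illustrates the kind of bookkeeping such a balance involves, assuming placeholder reactor dimensions and standard constants (methane lower heating value of roughly 35.8 MJ per Nm3, feed heat capacity taken as that of water): gross methane energy minus the heat needed to warm the feed and the transmission losses through the reactor wall.

```python
# Very simplified daily net-energy bookkeeping for a digester (sketch, placeholder values).
CH4_LHV = 35.8e6        # J per Nm3 of methane (lower heating value)
CP_WATER = 4186.0       # J/(kg K), feed assumed to behave like water

def net_energy_per_day(biogas_m3, ch4_fraction, feed_kg, feed_temp_c,
                       reactor_temp_c, u_wall=0.5, area_m2=50.0, ambient_c=10.0):
    """Return (gross, heating demand, wall losses, net) in MJ per day."""
    gross = biogas_m3 * ch4_fraction * CH4_LHV
    heating = feed_kg * CP_WATER * (reactor_temp_c - feed_temp_c)
    wall_loss = u_wall * area_m2 * (reactor_temp_c - ambient_c) * 86400.0  # W * s
    net = gross - heating - wall_loss
    return tuple(x / 1e6 for x in (gross, heating, wall_loss, net))

gross, heating, wall, net = net_energy_per_day(
    biogas_m3=60.0, ch4_fraction=0.55, feed_kg=1500.0,
    feed_temp_c=12.0, reactor_temp_c=38.0)
print(f"gross {gross:.0f} MJ, heating {heating:.0f} MJ, wall {wall:.0f} MJ, net {net:.0f} MJ")
```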
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Gong-Bo, E-mail: gongbo@icosmology.info; Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX
2014-04-01
Based on a suite of N-body simulations of the Hu-Sawicki model of f(R) gravity with different sets of model and cosmological parameters, we develop a new fitting formula with a numeric code, MGHalofit, to calculate the nonlinear matter power spectrum P(k) for the Hu-Sawicki model. We compare the MGHalofit predictions at various redshifts (z ≤ 1) to the f(R) simulations and find that the relative error of the MGHalofit fitting formula of P(k) is no larger than 6% at k ≤ 1 h Mpc-1 and 12% at k in (1, 10] h Mpc-1, respectively. Based on a sensitivity study of an ongoing and a future spectroscopic survey, we estimate the detectability of a signal of modified gravity described by the Hu-Sawicki model using the power spectrum up to quasi-nonlinear scales.
Like-Me Simulation as an Effective and Cognitively Plausible Basis for Social Robotics
2009-02-24
…seconds, and when productions were available, the model responds in 2.3 seconds. 4.4 Perspective Taking Discussion: In experiments testing human…behavior. First, if there are existing productions for the specific situation, they provide the response. If no productions are available, the model…
NASA Astrophysics Data System (ADS)
Novelli, A.; Bohn, B.; Dorn, H. P.; Häseler, R.; Hofzumahaus, A.; Kaminski, M.; Yu, Z.; Li, X.; Tillmann, R.; Wegener, R.; Fuchs, H.; Kiendler-Scharr, A.; Wahner, A.
2017-12-01
The hydroxyl radical (OH) is the dominant daytime oxidant in the troposphere. It starts the degradation of volatile organic compounds (VOC) originating from both anthropogenic and biogenic emissions. Hence, it is a crucial trace species in model simulations as it has a large impact on many reactive trace gases. Many field campaigns performed in isoprene-dominated environments under low-NOx conditions have shown large discrepancies between the measured and the modelled OH radical concentrations. These results have contributed to the discovery of new regeneration paths for OH radicals from isoprene-OH second-generation products with maximum efficiency at low NO. The current chemical models (e.g. MCM 3.3.1) include this novel chemistry, allowing an investigation of the validity of the OH regeneration under different chemical conditions. Over 11 experiments focusing on the OH oxidation of isoprene were performed at the SAPHIR chamber in the Forschungszentrum Jülich. Measurements of VOCs, NOx, O3 and HONO were performed together with the measurement of OH radicals (by both LIF-FAGE and DOAS) and OH reactivity. Within the simulation chamber, the NO mixing ratio was varied between 0.05 and 2 ppbv, allowing the investigation of both the "new" regeneration path for OH radicals and the well-known NO+HO2 mechanism. A comparison with the MCM 3.3.1 that includes the upgraded LIM1 mechanism showed very good agreement (within 10%) for the OH data at all concentrations of NOx investigated. Comparison with different models, without LIM1 and with updated rates for the OH regeneration, will be presented together with a detailed analysis of the impact of this study on results from previous field campaigns.
NASA Astrophysics Data System (ADS)
Goldie, J. K.; Alexander, L. V.; Lewis, S. C.; Sherwood, S. C.
2017-12-01
A wide body of literature now establishes the harm of extreme heat on human health, and work is now emerging on the projection of future health impacts. However, heat-health relationships vary across different populations (Gasparrini et al. 2015), so accurate simulation of regional climate is an important component of joint health impact projection. Here, we evaluate the ability of nine Global Climate Models (GCMs) from CMIP5 and the NARCliM Regional Climate Model to reproduce a selection of 15 health-relevant heatwave and heat-humidity indices over the historical period (1990-2005) using the Perkins skill score (Perkins et al. 2007) in five Australian cities. We explore the reasons for poor model skill, comparing these modelled distributions to both weather station observations and gridded reanalysis data. Finally, we show changes in the modelled distributions from the highest-performing models under RCP4.5 and RCP8.5 greenhouse gas scenarios and discuss the implications of simulated heat stress for future climate change adaptation. References: Gasparrini, Antonio, Yuming Guo, Masahiro Hashizume, Eric Lavigne, Antonella Zanobetti, Joel Schwartz, Aurelio Tobias, et al. "Mortality Risk Attributable to High and Low Ambient Temperature: A Multicountry Observational Study." The Lancet 386, no. 9991 (July 31, 2015): 369-75. doi:10.1016/S0140-6736(14)62114-0. Perkins, S. E., A. J. Pitman, N. J. Holbrook, and J. McAneney. "Evaluation of the AR4 Climate Models' Simulated Daily Maximum Temperature, Minimum Temperature, and Precipitation over Australia Using Probability Density Functions." Journal of Climate 20, no. 17 (September 1, 2007): 4356-76. doi:10.1175/JCLI4253.1.
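The Perkins skill score used here measures the overlap between the modelled and observed probability density functions. A minimal implementation on binned data (with synthetic stand-in series rather than the study's station or model output, and an arbitrary bin width) might look like:

```python
# Perkins skill score: overlap of modelled and observed PDFs (sketch).
import numpy as np

def perkins_skill_score(obs, model, bins):
    """PSS = sum over bins of min(relative frequency obs, relative frequency model)."""
    f_obs, _ = np.histogram(obs, bins=bins)
    f_mod, _ = np.histogram(model, bins=bins)
    p_obs = f_obs / f_obs.sum()
    p_mod = f_mod / f_mod.sum()
    return np.minimum(p_obs, p_mod).sum()   # 1 = identical distributions

rng = np.random.default_rng(1)
obs_tmax = rng.normal(30.0, 4.0, 5000)      # synthetic "observed" daily Tmax (deg C)
mod_tmax = rng.normal(31.5, 5.0, 5000)      # synthetic "modelled" Tmax with warm/wide bias
bins = np.arange(10.0, 55.0, 1.0)           # 1 deg C bins (bin width is a free choice)

print(f"PSS = {perkins_skill_score(obs_tmax, mod_tmax, bins):.2f}")
```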
Exact results of 1D traffic cellular automata: The low-density behavior of the Fukui-Ishibashi model
NASA Astrophysics Data System (ADS)
Salcido, Alejandro; Hernández-Zapata, Ernesto; Carreón-Sierra, Susana
2018-03-01
The maximum entropy states of cellular automata models for traffic flow in a single lane with no anticipation are presented and discussed. Exact analytical solutions for the low-density behavior of the stochastic Fukui-Ishibashi traffic model were obtained and compared with computer simulations of the model; excellent agreement was found.
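A minimal simulation of one common formulation of the stochastic Fukui-Ishibashi rules (cars advance by the minimum of their headway and v_max; cars able to move at full speed are slowed by one site with probability p) is sketched below, so that the simulated low-density mean speed can be compared with the free-flow expectation v_max - p. The specific rule variant and parameter values are illustrative and not necessarily those of the paper.

```python
# Stochastic Fukui-Ishibashi cellular automaton on a ring (sketch).
import numpy as np

def simulate_fi(length=1000, n_cars=50, vmax=5, p=0.3, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.sort(rng.choice(length, size=n_cars, replace=False))
    total_moved = 0
    for _ in range(steps):
        gaps = (np.roll(pos, -1) - pos - 1) % length   # empty sites ahead of each car
        v = np.minimum(gaps, vmax)
        # stochastic delay: cars able to move at vmax slow down by 1 with probability p
        slow = (v == vmax) & (rng.random(n_cars) < p)
        v[slow] -= 1
        pos = (pos + v) % length
        total_moved += v.sum()
    return total_moved / (n_cars * steps)              # mean speed per car per step

mean_speed = simulate_fi()
print(f"low-density mean speed: {mean_speed:.3f} (free-flow expectation vmax - p = {5 - 0.3:.1f})")
```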
NASA Astrophysics Data System (ADS)
Smeltzer, C. D.; Wang, Y.; Zhao, C.; Boersma, F.
2009-12-01
Polar orbiting satellite retrievals of tropospheric nitrogen dioxide (NO2) columns are important to a variety of scientific applications. These NO2 retrievals rely on a priori profiles from chemical transport models and radiative transfer models to derive the vertical columns (VCs) from slant column measurements. In this work, we compare the retrieval results using a priori profiles from a global model (TM4) and a higher resolution regional model (REAM) at the OMI overpass hour of 1330 local time, implementing the Dutch OMI NO2 (DOMINO) retrieval. We also compare the retrieval results using a priori profiles from REAM model simulations with and without lightning NOx (NO + NO2) production. A priori model resolution and lightning NOx production are both found to have a large impact on the retrievals because they shift the NO2 vertical distribution seen by the radiative transfer model, and hence the satellite sensitivity to a particular observation. The retrieved tropospheric NO2 VCs may increase by 25-100% in urban regions and be reduced by 50% in rural regions if the a priori profiles from REAM simulations are used during the retrievals instead of the profiles from TM4 simulations. The a priori profiles with lightning NOx may result in a 25-50% reduction of the retrieved tropospheric NO2 VCs compared to the a priori profiles without lightning. As a first priority, a priori vertical NO2 profiles should be taken from a high-resolution chemical transport model that better resolves urban-rural NO2 gradients in the boundary layer and uses observation-based parameterizations of lightning NOx production, in order to obtain more accurate NO2 retrievals over the United States, where NOx source regions are spatially separated and lightning NOx production is significant. Given the variability of a priori NO2 profiles caused by lightning and model resolution, daytime observations from a geostationary satellite with sufficient spatial resolution would be the next step towards a more complete NO2 data product. Both the improved retrieval algorithm and the proposed next-generation geostationary observations would thus improve emission inventories, better validate model simulations, and help optimize region-specific ozone control strategies.
Damewood, Sara; Jeanmonod, Donald; Cadigan, Beth
2011-04-01
This study compared the effectiveness of a multimedia ultrasound (US) simulator to normal human models during the practical portion of a course designed to teach the skills of both image acquisition and image interpretation for the Focused Assessment with Sonography for Trauma (FAST) exam. This was a prospective, blinded, controlled education study using medical students as an US-naïve population. After a standardized didactic lecture on the FAST exam, trainees were separated into two groups to practice image acquisition on either a multimedia simulator or a normal human model. Four outcome measures were then assessed: image interpretation of prerecorded FAST exams, adequacy of image acquisition on a standardized normal patient, perceived confidence of image adequacy, and time to image acquisition. Ninety-two students were enrolled and separated into two groups, a multimedia simulator group (n = 44), and a human model group (n = 48). Bonferroni adjustment factor determined the level of significance to be p = 0.0125. There was no difference between those trained on the multimedia simulator and those trained on a human model in image interpretation (median 80 of 100 points, interquartile range [IQR] 71-87, vs. median 78, IQR 62-86; p = 0.16), image acquisition (median 18 of 24 points, IQR 12-18 points, vs. median 16, IQR 14-20; p = 0.95), trainee's confidence in obtaining images on a 1-10 visual analog scale (median 5, IQR 4.1-6.5, vs. median 5, IQR 3.7-6.0; p = 0.36), or time to acquire images (median 3.8 minutes, IQR 2.7-5.4 minutes, vs. median = 4.5 minutes, IQR = 3.4-5.9 minutes; p = 0.044). There was no difference in teaching the skills of image acquisition and interpretation to novice FAST examiners using the multimedia simulator or normal human models. These data suggest that practical image acquisition skills learned during simulated training can be directly applied to human models. © 2011 by the Society for Academic Emergency Medicine.
Assessment of CMIP5 historical simulations of rainfall over Southeast Asia
NASA Astrophysics Data System (ADS)
Raghavan, Srivatsan V.; Liu, Jiandong; Nguyen, Ngoc Son; Vu, Minh Tue; Liong, Shie-Yui
2018-05-01
We present preliminary analyses of the historical (1986-2005) climate simulations of a ten-member subset of the Coupled Model Inter-comparison Project Phase 5 (CMIP5) global climate models over Southeast Asia. The objective of this study was to evaluate the general circulation models' performance in simulating the mean state of climate over this less-studied, climate-vulnerable region, with a focus on precipitation. Results indicate that most of the models are unable to reproduce the observed state of climate over Southeast Asia. Though the multi-model ensemble mean is a better representation of the observations, the uncertainties in the individual models are far too high. No particular model performed well in simulating the historical climate of Southeast Asia. There seems to be no significant influence of the spatial resolutions of the models on the quality of simulation, despite the view that higher resolution models fare better. The study results emphasize the need for careful consideration of models for impact studies and for improving the ability of the next generation of models to simulate regional climates.
NASA Astrophysics Data System (ADS)
Kim, Youngseob; Sartelet, Karine; Raut, Jean-Christophe; Chazette, Patrick
2015-04-01
Impacts of meteorological modeling in the planetary boundary layer (PBL) and urban canopy model (UCM) on the vertical mixing of pollutants are studied. Concentrations of gaseous chemical species, including ozone (O3) and nitrogen dioxide (NO2), and particulate matter over Paris and the near suburbs are simulated using the 3-dimensional chemistry-transport model Polair3D of the Polyphemus platform. Simulated concentrations of O3, NO2 and PM10/PM2.5 (particulate matter of aerodynamic diameter lower than 10 μm/2.5 μm, respectively) are first evaluated using ground measurements. Higher surface concentrations are obtained for PM10, PM2.5 and NO2 with the MYNN PBL scheme than the YSU PBL scheme because of lower PBL heights in the MYNN scheme. Differences between simulations using different PBL schemes are lower than differences between simulations with and without the UCM and the Corine land-use over urban areas. Regarding the root mean square error, the simulations using the UCM and the Corine land-use tend to perform better than the simulations without it. At urban stations, the PM10 and PM2.5 concentrations are over-estimated and the over-estimation is reduced using the UCM and the Corine land-use. The ability of the model to reproduce vertical mixing is evaluated using NO2 measurement data at the upper air observation station of the Eiffel Tower, and measurement data at a ground station near the Eiffel Tower. Although NO2 is under-estimated in all simulations, vertical mixing is greatly improved when using the UCM and the Corine land-use. Comparisons of the modeled PM10 vertical distributions to distributions deduced from surface and mobile lidar measurements are performed. The use of the UCM and the Corine land-use is crucial to accurately model PM10 concentrations during nighttime in the center of Paris. In the nocturnal stable boundary layer, PM10 is relatively well modeled, although it is over-estimated on 24 May and under-estimated on 25 May. However, PM10 is under-estimated on both days in the residual layer, and over-estimated on both days over the residual layer. The under-estimations in the residual layer are partly due to difficulties to estimate the PBL height, to an over-estimation of vertical mixing during nighttime at high altitudes and to uncertainties in PM10 emissions. The PBL schemes and the UCM influence the PM vertical distributions not only because they influence vertical mixing (PBL height and eddy-diffusion coefficient), but also horizontal wind fields and humidity. However, for the UCM, it is the influence on vertical mixing that impacts the most the PM10 vertical distribution below 1.5 km.
Sheibley, R.W.; Jackman, A.P.; Duff, J.H.; Triska, F.J.
2003-01-01
Nitrification and denitrification kinetics in sediment perfusion cores were numerically modeled and compared to experiments on cores from the Shingobee River, MN, USA. The experimental design incorporated mixing groundwater discharge with stream water penetration into the cores, which provided a well-defined, one-dimensional simulation of in situ hydrologic conditions. Ammonium (NH4+) and nitrate (NO3-) concentration gradients suggested the upper region of the cores supported coupled nitrification-denitrification, where groundwater-derived NH4+ was first oxidized to NO3- then subsequently reduced via denitrification to N2. Nitrification and denitrification were modeled using a Crank-Nicolson finite difference approximation to a one-dimensional advection-dispersion equation. Both processes were modeled using first-order reaction kinetics because substrate concentrations (NH4+ and NO3-) were much smaller than published Michaelis constants. Rate coefficients for nitrification and denitrification ranged from 0.2 to 15.8 h-1 and 0.02 to 8.0 h-1, respectively. The rate constants followed an Arrhenius relationship between 7.5 and 22 °C. Activation energies for nitrification and denitrification were 162 and 97.3 kJ/mol, respectively. Seasonal NH4+ concentration patterns in the Shingobee River were accurately simulated from the relationship between perfusion core temperature and NH4+ flux to the overlying water. The simulations suggest that NH4+ in groundwater discharge is controlled by sediment nitrification that, consistent with its activation energy, is strongly temperature dependent. © 2003 Elsevier Ltd. All rights reserved.
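A minimal sketch of the numerical scheme described above (Crank-Nicolson time stepping of a one-dimensional advection-dispersion equation with first-order consumption), using made-up grid, transport and rate parameters rather than the Shingobee core values:

```python
# Crank-Nicolson solution of dC/dt = D d2C/dx2 - v dC/dx - k C  (sketch).
import numpy as np

nx, L = 101, 0.10                 # grid points, core length (m), placeholders
dx = L / (nx - 1)
D, v, k = 1e-7, 1e-6, 1.0 / 3600  # dispersion (m2/s), pore velocity (m/s), rate (1/s)
dt, nsteps = 60.0, 1440           # 1-minute steps, 1 day

# Spatial operator A (interior points, central differences):
A = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1] = D / dx**2 + v / (2 * dx)
    A[i, i]     = -2 * D / dx**2 - k
    A[i, i + 1] = D / dx**2 - v / (2 * dx)

I = np.eye(nx)
M_left = I - 0.5 * dt * A
M_right = I + 0.5 * dt * A

C = np.zeros(nx)
C[0] = 1.0                        # fixed inlet concentration (e.g. groundwater NH4+)
for _ in range(nsteps):
    rhs = M_right @ C
    rhs[0], rhs[-1] = 1.0, 0.0    # re-impose Dirichlet boundary values
    C = np.linalg.solve(M_left, rhs)

print("concentrations near the outlet:", np.round(C[-5:], 4))
```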
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chipman, V D
Two-dimensional axisymmetric hydrodynamic models were developed using GEODYN to simulate the propagation of air blasts resulting from a series of high explosive detonations conducted at Kirtland Air Force Base in August and September of 2007. Dubbed Humble Redwood I (HR-1), these near-surface chemical high explosive detonations consisted of seven shots of varying height or depth of burst. Each shot was simulated numerically using GEODYN. An adaptive mesh refinement scheme based on air pressure gradients was employed such that the mesh refinement tracked the advancing shock front where sharp discontinuities existed in the state variables, but allowed the mesh to sufficiently relax behind the shock front for runtime efficiency. Comparisons of overpressure, sound speed, and positive phase impulse from the GEODYN simulations were made to the recorded data taken from each HR-1 shot. Where the detonations occurred above ground or were shallowly buried (no deeper than 1 m), the GEODYN model was able to simulate the sound speeds, peak overpressures, and positive phase impulses to within approximately 1%, 23%, and 6%, respectively, of the actual recorded data, supporting the use of numerical simulation of the air blast as a forensic tool in determining the yield of an otherwise unknown explosion.
Marangoni, R; Preosti, G; Colombetti, G
2000-02-01
The marine ciliate Fabrea salina shows a clear positive phototaxis, but the mechanism by which a single cell is able to detect the direction of light and orient its swimming accordingly is still unknown. A simple model of phototaxis is that of a biased random walk, where the bias due to light can affect one or more of the parameters that characterize a random walk, i.e., the mean speed, the frequency distribution of the angles of directional changes and the frequency of directional changes. Since experimental evidence has shown no effect of light on the mean speed of Fabrea salina, we have excluded models depending on this parameter. We have, therefore, investigated the phototactic orientation of Fabrea salina by computer simulation of two simple models, the first where light affects the frequency distribution of the angles of directional changes (model M1) and the second where the light bias modifies the frequency of directional changes (model M2). Simulated M1 cells directly orient their swimming towards the direction of light, regardless of their current swimming orientation; simulated M2 cells, on the contrary, are unable to actively orient their motion, but remain locked along the light direction once they find it by chance. The simulations show that these two orientation models lead to different macroscopic behaviours of the simulated cell populations. By comparing the results of the simulations with the experimental ones, we have found that the phototactic behaviour of real cells is more similar to that of the M2 model.
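A compact sketch of the two candidate mechanisms as biased random walks in two dimensions (light along the +x axis; all rates, biases and step counts are arbitrary illustrative values, not fitted to Fabrea salina data): M1 cells draw new headings biased toward the light direction, while M2 cells turn less often the better they are already aligned with it and therefore become locked onto the light direction once they find it by chance.

```python
# Two biased-random-walk phototaxis models (sketch): M1 biases turn angles,
# M2 biases the frequency of directional changes. Light is along +x.
import numpy as np

rng = np.random.default_rng(2)

def simulate(model, n_cells=500, n_steps=2000, speed=1.0,
             kappa=2.0, base_turn_prob=0.1):
    heading = rng.uniform(0, 2 * np.pi, n_cells)
    x = np.zeros(n_cells)
    for _ in range(n_steps):
        if model == "M1":
            turn = rng.random(n_cells) < base_turn_prob
            # new heading drawn around the light direction (von Mises, mean 0)
            heading[turn] = rng.vonmises(0.0, kappa, turn.sum())
        else:  # "M2"
            # turn probability grows with misalignment; new heading is unbiased
            p_turn = base_turn_prob * (1.0 - np.cos(heading)) / 2.0
            turn = rng.random(n_cells) < p_turn
            heading[turn] = rng.uniform(0, 2 * np.pi, turn.sum())
        x += speed * np.cos(heading)
    return x.mean() / (speed * n_steps)   # mean drift along the light axis (0..1)

for model in ("M1", "M2"):
    print(model, "mean alignment with light:", round(simulate(model), 3))
```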
NASA Astrophysics Data System (ADS)
Vandenbulcke, Luc; Barth, Alexander
2017-04-01
In the present European operational oceanography context, global and basin-scale models are run daily at different Monitoring and Forecasting Centers from the Copernicus Marine component (CMEMS). Regional forecasting centers, which run outside of CMEMS, then use these forecasts as initial conditions and/or boundary conditions for high-resolution or coastal forecasts. However, these improved simulations are lost to the basin-scale models (i.e. there is no feedback). Therefore, some potential improvements inside (and even outside) the areas covered by regional models are lost, and the risk of discrepancy between basin-scale and regional models remains high. The objective of this study is to simulate two-way nesting by extracting pseudo-observations from the regional models and assimilating them in the basin-scale models. The proposed method is called "upscaling". An ensemble of 100 one-way nested NEMO models of the Mediterranean Sea (Med) (1/16°) and the North-Western Med (1/80°) is implemented to simulate the period 2014-2015. Each member has perturbed initial conditions, atmospheric forcing fields and river discharge data. The Med model uses climatological Rhone river data, while the nested model uses measured daily discharges. The error of the pseudo-observations can be estimated by analyzing the ensemble of nested models. The pseudo-observations are then assimilated in the parent model by means of an Ensemble Kalman Filter. The experiments show that the proposed method improves different processes in the Med model, such as the position of the Northern Current and its incursion (or not) on the Gulf of Lions, the cold water mass on the shelf, and the position of the Rhone river plume. Regarding areas where no operational regional models exist, (some variables of) the parent model can still be improved by relating some resolved parameters to statistical properties of a higher-resolution simulation. This is the topic of a complementary study also presented at the EGU 2017 (Barth et al.).
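The assimilation step itself is not detailed in the abstract. As a generic illustration of the stochastic Ensemble Kalman Filter analysis that such an upscaling scheme could use, the sketch below updates an ensemble of parent-model states with pseudo-observations; the state size, observation operator and error covariances are all small random placeholders, not NEMO fields.

```python
# Stochastic Ensemble Kalman Filter analysis step (generic sketch).
import numpy as np

rng = np.random.default_rng(3)

n_state, n_obs, n_ens = 50, 5, 100
X = rng.normal(0.0, 1.0, (n_state, n_ens))            # forecast ensemble (parent model)
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 10)] = 1.0  # observe every 10th state variable
R = 0.2 * np.eye(n_obs)                               # pseudo-observation error covariance
y = rng.normal(1.0, 0.1, n_obs)                       # pseudo-observations from the nested model

# Ensemble covariances and Kalman gain
Xm = X.mean(axis=1, keepdims=True)
A = X - Xm
HA = H @ A
P_yy = HA @ HA.T / (n_ens - 1) + R
P_xy = A @ HA.T / (n_ens - 1)
K = P_xy @ np.linalg.inv(P_yy)

# Perturbed observations (stochastic EnKF) and analysis update
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
X_a = X + K @ (Y - H @ X)

print("forecast mean of observed vars:", np.round((H @ Xm).ravel(), 2))
print("analysis mean of observed vars:", np.round(H @ X_a.mean(axis=1), 2))
```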
Notes on modeling and simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redondo, Antonio
These notes present a high-level overview of how modeling and simulation are carried out by practitioners. The discussion is of a general nature; no specific techniques are examined but the activities associated with all modeling and simulation approaches are briefly addressed. There is also a discussion of validation and verification and, at the end, a section on why modeling and simulation are useful.
Evaluation of the high resolution DEHM/UBM model system over Denmark
NASA Astrophysics Data System (ADS)
Im, Ulas; Christensen, Jesper H.; Ellermann, Thomas; Ketzel, Matthias; Geels, Camilla; Hansen, Kaj M.; Plejdrup, Marlene S.; Brandt, Jørgen
2015-04-01
The air pollutant levels over Denmark are simulated using the high resolution DEHM/UBM model system for the years 2006 to 2014. The system employs a hemispheric chemistry-transport model, the Danish Eulerian Hemispheric Model (DEHM; Brandt et al., 2012) that runs on a 150 km x 150 km resolution over the Northern Hemisphere, with nesting capability for higher resolutions over Europe, Northern Europe and Denmark on 50 km x 50 km, 16.7 km x 16.7 km and 5.6 km x 5.6 km resolutions, respectively, coupled to the Urban Background Model (UBM; Berkowicz, 2000; Brandt et al., 2001) that covers the whole of Denmark with a 1 km x 1 km spatial resolution. Over Denmark, the system uses the SPREAD emission model (Plejdrup and Gyldenkærne, 2011) that distributes the Danish emissions for all pollutants and all sectors in the national emission database on a 1 km x 1 km resolution grid covering Denmark and its national sea territory. The study will describe the model system and we will evaluate the performance of the model system in simulating hourly and daily ozone (O3), carbon monoxide (CO), nitrogen monoxide (NO), nitrogen dioxide (NO2) and particulate matter (PM10 and PM2.5) concentrations against surface measurements from eight monitoring stations. Finally we investigate the spatial variation of air pollutants over Denmark on different time scales. References Berkowicz, R., 2000. A Simple Model for Urban Background Pollution. Environmental Monitoring and Assessment, 65, 1/2, 259-267. Brandt, J., J. H. Christensen, L. M. Frohn, F. Palmgren, R. Berkowicz and Z. Zlatev, 2001: "Operational air pollution forecasts from European to local scale". Atmospheric Environment, Vol. 35, Sup. No. 1, pp. S91-S98, 2001 Brandt et al., 2012. An integrated model study for Europe and North America using the Danish Eulerian Hemispheric Model with focus on intercontinental transport. Atmospheric Environment, 53, 156-176. Plejdrup, M.S., Gyldenkærne, S., 2011. Spatial distribution of pollutants to air - the SPREAD model. NERI Technical Report No. 823.
NASA Astrophysics Data System (ADS)
Edwards, T.
2015-12-01
Modelling Antarctic marine ice sheet instability (MISI) - the potential for sustained grounding line retreat along downsloping bedrock - is very challenging because high resolution at the grounding line is required for reliable simulation. Assessing modelling uncertainties is even more difficult, because such models are very computationally expensive, restricting the number of simulations that can be performed. Quantifying uncertainty in future Antarctic instability has therefore so far been limited. There are several ways to tackle this problem, including: (1) simulating a small domain, to reduce expense and allow the use of ensemble methods; (2) parameterising the response of the grounding line to the onset of MISI, for the same reasons; (3) emulating the simulator with a statistical model, to explore the impacts of uncertainties more thoroughly; and (4) substituting physical models with expert-elicited statistical distributions. Methods 2-4 require rigorous testing against observations and high resolution models to have confidence in their results. We use all four to examine the dependence of MISI in the Amundsen Sea Embayment (ASE) on uncertain model inputs, including bedrock topography, ice viscosity, basal friction, model structure (sliding law and treatment of grounding line migration) and MISI triggers (including basal melting and risk of ice shelf collapse). We compare simulations from a 3000 member ensemble with GRISLI (methods 2, 4) with a 284 member ensemble from BISICLES (method 1) and also use emulation (method 3). Results from the two ensembles show similarities, despite very different model structures and ensemble designs. Basal friction and topography have a large effect on the extent of grounding line retreat, and the sliding law strongly modifies sea level contributions through changes in the rate and extent of grounding line retreat and the rate of ice thinning. Over 50 years, MISI in the ASE gives up to 1.1 mm/year (95% quantile) SLE in GRISLI (calibrated with ASE mass losses in a Bayesian framework), and up to 1.2 mm/year SLE (95% quantile) in the 270 completed BISICLES simulations (no calibration). We will show preliminary results emulating the models, calibrating with observations, and comparing them to assess structural uncertainty. We use these to improve MISI projections for the whole continent.
NASA Astrophysics Data System (ADS)
Lu, Han-Han; Xu, Jing-Ping; Liu, Lu; Lai, Pui-To; Tang, Wing-Man
2016-11-01
An equivalent distributed capacitance model is established by considering only the gate oxide-trap capacitance to explain the frequency dispersion in the C-V curve of MOS capacitors measured for a frequency range from 1 kHz to 1 MHz. The proposed model is based on the Fermi-Dirac statistics and the charging/discharging effects of the oxide traps induced by a small ac signal. The validity of the proposed model is confirmed by the good agreement between the simulated results and experimental data. Simulations indicate that the capacitance dispersion of an MOS capacitor under accumulation and near flatband is mainly caused by traps adjacent to the oxide/semiconductor interface, with negligible effects from the traps far from the interface, and the relevant distance from the interface at which the traps can still contribute to the gate capacitance is also discussed. In addition, by excluding the negligible effect of oxide-trap conductance, the model avoids the use of imaginary numbers and complex calculations, and thus is simple and intuitive. Project supported by the National Natural Science Foundation of China (Grant Nos. 61176100 and 61274112), the University Development Fund of the University of Hong Kong, China (Grant No. 00600009), and the Hong Kong Polytechnic University, China (Grant No. 1-ZVB1).
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V.; Petway, Joy R.
2017-01-01
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and concentrations of ON-N, NH3-N and NO3-N. Results indicate that the integrated FME-GLUE-based model, with good Nash–Sutcliffe coefficients (0.53–0.69) and correlation coefficients (0.76–0.83), successfully simulates the concentrations of ON-N, NH3-N and NO3-N. Moreover, the Arrhenius constant was the only parameter sensitive to model performances of ON-N and NH3-N simulations. However, Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive to ON-N and NO3-N simulation, which was measured using global sensitivity. PMID:28704958
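As a small illustration of two evaluation ingredients named above (using synthetic data and a toy model, not the WSP measurements or the R-FME simulator): the Nash-Sutcliffe efficiency of a simulated series against observations, and a GLUE-style screening that keeps only "behavioural" parameter sets above a likelihood threshold.

```python
# Nash-Sutcliffe efficiency and a simple GLUE-style behavioural screening (sketch).
import numpy as np

rng = np.random.default_rng(4)

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def toy_model(k, t):
    """Placeholder first-order decay model standing in for the WSP simulator."""
    return 10.0 * np.exp(-k * t)

t = np.linspace(0, 10, 50)
obs = toy_model(0.3, t) + rng.normal(0, 0.3, t.size)   # synthetic 'observations'

# Monte Carlo parameter sampling + GLUE screening
k_samples = rng.uniform(0.05, 0.8, 2000)
nse = np.array([nash_sutcliffe(obs, toy_model(k, t)) for k in k_samples])
behavioural = k_samples[nse > 0.5]                     # behavioural threshold (a free choice)

print(f"best NSE = {nse.max():.3f} at k = {k_samples[nse.argmax()]:.3f}")
print(f"{behavioural.size} behavioural sets; k range "
      f"[{behavioural.min():.3f}, {behavioural.max():.3f}]")
```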
NASA Astrophysics Data System (ADS)
Verstraeten, W. W.; Boersma, K. F.; Douros, J.; Williams, J. E.; Eskes, H.; Delcloo, A. W.
2017-12-01
High nitrogen oxides (NOX = NO + NO2) concentrations near the surface adversely affect humans and ecosystems and play a key role in tropospheric chemistry. NO2 is an important precursor of tropospheric ozone (O3), which in turn affects the production of the hydroxyl radical controlling the chemical lifetime of key atmospheric pollutants and reactive greenhouse gases. Combustion from industrial, traffic and household activities in large and densely populated urban areas results in high NOX emissions. Accurate mapping of these emissions is essential but hard to do, since reported emission factors may differ from real-time emissions by an order of magnitude. Modelled NO2 levels and lifetimes also have large associated uncertainties, and an overestimated chemical lifetime may mask missing NOX chemistry in current chemistry transport models (CTMs). Simultaneously estimating the NO2 lifetime and concentration by applying the Exponentially Modified Gaussian (EMG) method to tropospheric NO2 column line densities should therefore improve surface NOX emission estimates. Here we evaluate whether the EMG methodology applied to the tropospheric NO2 columns simulated by the LOTOS-EUROS (Long Term Ozone Simulation-European Ozone Simulation) CTM can reproduce the NOX emissions used as model input. First we process the modelled tropospheric NO2 columns for the period April-September 2013 for 21 selected European urban areas under windy conditions (wind speeds from ECMWF averaged between the surface and 500 m > 2 m s-1), as well as the accompanying OMI (Ozone Monitoring Instrument) data providing observation-based estimates of midday NO2 columns. Then we compare the top-down derived surface NOX emissions with the 2011 MACC-III emission inventory, used in the CTM as input to simulate the NO2 columns. For cities where NOX emissions can be assumed to originate from one large source, good agreement is found between the MACC-III inventory and the top-down NOX emissions derived from both the CTM and OMI. For cities where multiple sources of NOX are observed (e.g. Brussels, London), an adapted methodology is required. For some cities, such as St. Petersburg and Moscow, the top-down NOX estimates from 2013 OMI data are biased low compared to the MACC-III inventory, which is based on a 2011 NOX emissions update.
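A sketch of the fitting step at the core of the EMG method (following the general approach of exponentially-modified-Gaussian fits to along-wind NO2 line densities; the synthetic data, parameter values, wind speed and the conversion to emissions are illustrative only): the fitted e-folding distance x0 divided by the mean wind speed gives an effective lifetime, and the fitted burden divided by that lifetime gives an emission estimate.

```python
# Fitting an exponentially modified Gaussian to along-wind NO2 line densities (sketch).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(x, burden, x0, mu, sigma, background):
    """burden * EMG pdf (decay length x0, Gaussian mu/sigma) + constant background."""
    lam = 1.0 / x0
    pdf = (lam / 2.0) * np.exp(lam * (mu + lam * sigma**2 / 2.0 - x)) \
          * erfc((mu + lam * sigma**2 - x) / (np.sqrt(2.0) * sigma))
    return burden * pdf + background

# Synthetic "observed" line densities vs downwind distance (units are placeholders)
x = np.linspace(-100, 300, 80)                      # km, distance from the city centre
true = emg(x, burden=200.0, x0=60.0, mu=0.0, sigma=20.0, background=0.5)
rng = np.random.default_rng(5)
obs = true + rng.normal(0, 0.1, x.size)

popt, _ = curve_fit(emg, x, obs, p0=[150.0, 40.0, 0.0, 15.0, 0.0])
burden, x0 = popt[0], popt[1]

wind = 5.0                                          # m/s, mean boundary-layer wind (placeholder)
tau = x0 * 1e3 / wind / 3600.0                      # effective NO2 lifetime (h)
emission = burden / (x0 * 1e3 / wind)               # burden per lifetime, i.e. source strength
print(f"x0 = {x0:.1f} km, tau = {tau:.2f} h, emission ~ {emission:.4f} (burden units per s)")
```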
2013-09-01
…fraction of SRB could be active in O2 respiration, fermentation of organics, and even NO3- respiration. Therefore, the metabolic diversity of SRB…the case with PRB, which are able to reduce NO3- and ClO4-. To evaluate the model, we simulated effluent H2, UAP, and BAP concentrations, along with…Figure 36. Model-simulated concentrations of H2, UAP, and BAP in the effluent. Figure 37. Model-simulated…
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Jian; Zhang, Yang; Wang, Kai
Accurate simulations of air quality and climate require robust model parameterizations on regional and global scales. The Weather Research and Forecasting model with Chemistry version 3.4.1 has been coupled with physics packages from the Community Atmosphere Model version 5 (CAM5) (WRF-CAM5) to assess the robustness of the CAM5 physics package for regional modeling at higher grid resolutions than typically used in global modeling. In this two-part study, Part I describes the application and evaluation of WRF-CAM5 over East Asia at a horizontal resolution of 36 km for six years: 2001, 2005, 2006, 2008, 2010, and 2011. The simulations are evaluated comprehensively with a variety of datasets from surface networks, satellites, and aircraft. The results show that meteorology is relatively well simulated by WRF-CAM5. However, cloud variables are largely or moderately underpredicted, indicating uncertainties in the model treatments of dynamics, thermodynamics, and microphysics of clouds/ice as well as aerosol-cloud interactions. For chemical predictions, the tropospheric column abundances of CO, NO2, and O3 are well simulated, but those of SO2 and HCHO are moderately overpredicted, and the column HCHO/NO2 indicator is underpredicted. Large biases exist in the surface concentrations of CO, NO2, and PM10 due to uncertainties in the emissions as well as vertical mixing. The underpredictions of NO lead to insufficient O3 titration, thus O3 overpredictions. The model can generally reproduce the observed O3 and PM indicators. These indicators suggest controlling NOx emissions throughout the year, and VOC emissions in summer in big cities and in winter over the North China Plain, North/South Korea, and Japan, to reduce surface O3, and controlling SO2, NH3, and NOx throughout the year to reduce inorganic surface PM.
Simulation of the Effect of Realistic Space Vehicle Environments on Binary Metal Alloys
NASA Technical Reports Server (NTRS)
Westra, Douglas G.; Poirier, D. R.; Heinrich, J. C.; Sung, P. K.; Felicelli, S. D.; Phelps, Lisa (Technical Monitor)
2001-01-01
Simulations that assess the effect of space vehicle acceleration environments on the solidification of Pb-Sb alloys are reported. Space microgravity missions are designed to provide a near zero-g acceleration environment for various types of scientific experiments. Realistically, these space missions cannot provide a perfect environment. Vibrations caused by crew activity, on-board experiments, support systems (pumps, fans, etc.), periodic orbital maneuvers, and water dumps can all cause perturbations to the microgravity environment. In addition, the drag on the space vehicle is a source of acceleration. Therefore, it is necessary to predict the impact of these vibration perturbations and the steady-state drag acceleration on the experiments. These predictions can be used to design mission timelines, so that the experiment is run during times that the impact of the acceleration environment is acceptable for the experiment of interest. The simulations reported herein were conducted using a finite element model that includes mass, species, momentum, and energy conservation. This model predicts the existence of "channels" within the processing mushy zone and subsequently "freckles" within the fully processed solid, which are the effects of thermosolutal convection. It is necessary to mitigate thermosolutal convection during space experiments of metal alloys, in order to study and characterize diffusion-controlled transport phenomena (microsegregation) that are normally coupled with macrosegregation. The model allows simulation of steady-state and transient acceleration values ranging from no acceleration (0 g), to microgravity conditions (10^-6 to 10^-3 g), to terrestrial gravity conditions (1 g). The transient acceleration environments simulated were from the STS-89 SpaceHAB mission and from the STS-94 SpaceLAB mission, with on-orbit accelerometer data during different mission periods used as inputs for the simulation model. Periods of crew exercise, quiet (no crew activity), and nominal conditions from STS-89 were used as simulation inputs, as were periods of nominal, overboard water-dump, and free-drift (no orbit maneuvering operations) conditions from STS-94. Steady-state acceleration environments of 0.0 and 10^-6 to 10^-1 g were also simulated, to serve as a comparison to the transient data and to assess an acceptable magnitude for the steady-state vehicle drag.
Process Modeling and Dynamic Simulation for EAST Helium Refrigerator
NASA Astrophysics Data System (ADS)
Lu, Xiaofei; Fu, Peng; Zhuang, Ming; Qiu, Lilong; Hu, Liangbing
2016-06-01
In this paper, the process modeling and dynamic simulation for the EAST helium refrigerator have been completed. The cryogenic process model is described and the main components are customized in detail. The process model is controlled by the PLC simulator, and the real-time communication between the process model and the controllers is achieved by a customized interface. Validation of the process model has been confirmed based on EAST experimental data during the cool-down process of 300-80 K. Simulation results indicate that this process simulator is able to reproduce dynamic behaviors of the EAST helium refrigerator very well for the operation of long pulsed plasma discharge. The cryogenic process simulator based on control architecture is available for operation optimization and control design of EAST cryogenic systems to cope with the long pulsed heat loads in the future. Supported by National Natural Science Foundation of China (No. 51306195) and Key Laboratory of Cryogenics, Technical Institute of Physics and Chemistry, CAS (No. CRYO201408)
NASA Astrophysics Data System (ADS)
Zhang, Xiaowen; Zhang, Shiqiang; Xu, Junli
2016-10-01
Glacier change in central Karakorum in the late 1990s is known as an 'anomaly': many glaciers expanded and a number of glaciers surged while most glaciers in the Greater Himalaya rapidly retreated. However, the understanding of glacier change in this region is still poor. Glacier changes in the Hunza river basin (HRB) in central Karakorum from 2003 to 2008 were investigated from different data sources. The mass variation in the HRB was estimated from the DEOS Mass Transport Model (DMT-1) GRACE data and the Variable Infiltration Capacity (VIC) model, and compared with the glacier mass balance simulated by a monthly degree-day model. The surface elevation differences of glaciers between the ASTER DEM and SRTM were also calculated. The mass variations from GRACE data suggest that the glacier mass balance in the HRB during 2003-2007 has no clear trend. The cumulative mass balance is positive during 2003-2008. The average glacier surface elevation difference between the SRTM DEM and the ASTER DEM is 11.8 +/- 3.2 m. The average surface elevation of the Batura glaciers in the accumulation zones increased by 0.88 m a-1. These results indicate that there was no significant glacier retreat from 1999 to 2008. The seasonal amplitude of the mass variation simulated by the monthly degree-day model agreed well with that estimated from DMT-1 GRACE data, but the simulated glacier accumulation is less than that calculated from GRACE data. The main reason probably lies in the underestimation of precipitation over glaciers and unglaciated areas, especially in alpine areas.
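The monthly degree-day mass balance model referred to above is not specified in detail; the following is only a generic sketch of the approach (melt proportional to positive monthly temperatures via a degree-day factor, accumulation from precipitation falling below a snow/rain temperature threshold), with placeholder parameter values and made-up monthly climate, not the Hunza basin forcing.

```python
# Generic monthly degree-day glacier mass balance (sketch, placeholder parameters).
import numpy as np

DDF = 7.0            # degree-day factor, mm w.e. per (deg C day) (placeholder)
T_SNOW = 2.0         # temperature threshold below which precipitation is snow (deg C)
DAYS = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])

def monthly_mass_balance(temp_c, precip_mm):
    """Return monthly accumulation, melt and balance in mm w.e. for one elevation band."""
    temp_c = np.asarray(temp_c, float)
    precip_mm = np.asarray(precip_mm, float)
    melt = DDF * np.maximum(temp_c, 0.0) * DAYS          # positive degree-days * DDF
    accumulation = np.where(temp_c < T_SNOW, precip_mm, 0.0)
    return accumulation, melt, accumulation - melt

# Made-up monthly climate for a high-elevation band (illustrative only)
temp = np.array([-15, -14, -11, -7, -3, 0, 3, 2, -1, -6, -10, -13])
precip = np.array([60, 70, 90, 100, 80, 60, 50, 50, 60, 70, 60, 60])

acc, melt, balance = monthly_mass_balance(temp, precip)
print(f"annual accumulation {acc.sum():.0f} mm w.e., melt {melt.sum():.0f} mm w.e., "
      f"balance {balance.sum():+.0f} mm w.e.")
```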
NASA Astrophysics Data System (ADS)
Kurtzman, Daniel; Shapira, Roi H.; Bar-Tal, Asher; Fine, Pinchas; Russo, David
2013-08-01
Nitrate contamination of groundwater under land used for intensive agriculture is probably the most worrisome agro-hydrological sustainability problem worldwide. Vadose-zone samples from 0 to 9 m depth under citrus orchards overlying an unconfined aquifer were analyzed for variables controlling water flow and the fate and transport of nitrogen fertilizers. Steady-state estimates of water and NO3-N fluxes to groundwater were found to vary spatially in the ranges of 90-330 mm yr-1 and 50-220 kg ha-1 yr-1, respectively. Calibration of transient models to two selected vadose-zone profiles required limiting the concentration of NO3-N in the solution that is taken up by the roots to 30 mg L-1. Results of an independent lysimeter experiment showed a similar nitrogen-uptake regime. Simulations of past conditions revealed a significant correlation between NO3-N flux to groundwater and the previous year's precipitation. Simulations of different nitrogen-application rates showed that using half of the nitrogen fertilizer added to the irrigation water by farmers would reduce the average NO3-N flux to groundwater by 70%, decrease root nitrogen uptake by 20%, and reduce the average pore-water NO3-N concentration in the deep vadose zone to below the Israeli drinking-water standard; hence this rate of nitrogen application was found to be agro-hydrologically sustainable. Beyond the investigation of nitrate fluxes to groundwater under citrus orchards and the interesting case-study aspects, this work demonstrates a methodology that enables skillful decisions concerning the joint sustainability of both the water resource and agricultural production in a common environmental setting.
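As a rough illustration of how such steady-state fluxes combine, the sketch below converts a deep-drainage rate and a pore-water NO3-N concentration into an areal nitrate load to groundwater; the numerical values are invented for illustration and are not taken from the study.

def nitrate_load_kg_per_ha_yr(drainage_mm_per_yr, no3n_mg_per_l):
    # 1 mm of water over 1 ha equals 10,000 L, and 1 mg equals 1e-6 kg, so
    # load [kg ha-1 yr-1] = drainage [mm yr-1] * concentration [mg L-1] * 0.01
    return drainage_mm_per_yr * no3n_mg_per_l * 0.01

# Example spanning roughly the ranges reported in the abstract (hypothetical pairs).
for drainage, conc in [(90, 55), (200, 70), (330, 65)]:
    print(drainage, "mm/yr,", conc, "mg/L ->",
          round(nitrate_load_kg_per_ha_yr(drainage, conc), 1), "kg N/ha/yr")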
NASA Astrophysics Data System (ADS)
Bolejko, Krzysztof
2018-05-01
The measurements of the Hubble constant reveal a tension between high-redshift (CMB) and low-redshift (distance ladder) constraints. So far neither observational systematics nor new physics has been successfully implemented to explain away this tension. This paper presents a new solution to the Hubble constant problem. The solution is based on the Simsilun simulation (a relativistic simulation of the large-scale structure of the Universe) with a ray-tracing algorithm implemented. The initial conditions for the Simsilun simulation were set up as perturbations around the ΛCDM model. However, unlike in the standard cosmological model (i.e., ΛCDM model + perturbations), within the Simsilun simulation the relativistic and nonlinear evolution of cosmic structures leads to the phenomenon of emerging spatial curvature, where the mean spatial curvature evolves from the spatial flatness of the early Universe towards the slightly curved present-day Universe. Consequently, the present-day expansion rate is slightly faster compared to the spatially flat ΛCDM model. The results of the ray-tracing analysis show that a Universe which starts with initial conditions consistent with the Planck constraints should have the Hubble constant H0 = 72.5 ± 2.1 km s-1 Mpc-1. When the Simsilun simulation was rerun with no inhomogeneities imposed, the Hubble constant inferred within such a homogeneous simulation was H0 = 68.1 ± 2.0 km s-1 Mpc-1. Thus, the inclusion of nonlinear relativistic evolution that leads to the emergence of spatial curvature can explain why the low-redshift measurements favor higher values compared to the high-redshift constraints and alleviate the tension between the CMB and distance ladder measurements of the Hubble constant.
Ludwig, Antoinette; Ginsberg, Howard; Hickling, Graham J.; Ogden, Nicholas H.
2016-01-01
The lone star tick, Amblyomma americanum, is a disease vector of significance for human and animal health throughout much of the eastern United States. To model the potential effects of climate change on this tick, a better understanding is needed of the relative roles of temperature-dependent and temperature-independent (day-length-dependent behavioral or morphogenetic diapause) processes acting on the tick lifecycle. In this study, we explored the roles of these processes by simulating seasonal activity patterns using models with site-specific temperature and day-length-dependent processes. We first modeled the transitions from engorged larvae to feeding nymphs, engorged nymphs to feeding adults, and engorged adult females to feeding larvae. The simulated seasonal patterns were compared against field observations at three locations in the United States. Simulations suggested that 1) during the larva-to-nymph transition, some larvae undergo no diapause while others undergo morphogenetic diapause of engorged larvae; 2) molted adults undergo behavioral diapause during the transition from nymph-to-adult; and 3) there is no diapause during the adult-to-larva transition. A model constructed to simulate the full lifecycle of A. americanum successfully predicted observed tick activity at the three U.S. study locations. Some differences between observed and simulated seasonality patterns were noted, however, identifying the need for research to refine some model parameters. In simulations run using temperature data for Montreal, deterministic die-out of A. americanum populations did not occur, suggesting the possibility that current climate in parts of southern Canada is suitable for survival and reproduction of this tick.
Ludwig, Antoinette; Ginsberg, Howard S; Hickling, Graham J; Ogden, Nicholas H
2016-01-01
The lone star tick, Amblyomma americanum, is a disease vector of significance for human and animal health throughout much of the eastern United States. To model the potential effects of climate change on this tick, a better understanding is needed of the relative roles of temperature-dependent and temperature-independent (day-length-dependent behavioral or morphogenetic diapause) processes acting on the tick lifecycle. In this study, we explored the roles of these processes by simulating seasonal activity patterns using models with site-specific temperature and day-length-dependent processes. We first modeled the transitions from engorged larvae to feeding nymphs, engorged nymphs to feeding adults, and engorged adult females to feeding larvae. The simulated seasonal patterns were compared against field observations at three locations in the United States. Simulations suggested that 1) during the larva-to-nymph transition, some larvae undergo no diapause while others undergo morphogenetic diapause of engorged larvae; 2) molted adults undergo behavioral diapause during the transition from nymph-to-adult; and 3) there is no diapause during the adult-to-larva transition. A model constructed to simulate the full lifecycle of A. americanum successfully predicted observed tick activity at the three U.S. study locations. Some differences between observed and simulated seasonality patterns were noted, however, identifying the need for research to refine some model parameters. In simulations run using temperature data for Montreal, deterministic die-out of A. americanum populations did not occur, suggesting the possibility that current climate in parts of southern Canada is suitable for survival and reproduction of this tick. © Crown copyright 2015.
Ryan, Patrick B; Schuemie, Martijn J
2013-10-01
There has been only limited evaluation of statistical methods for identifying safety risks of drug exposure in observational healthcare data. Simulations can support empirical evaluation, but have not been shown to adequately model the real-world phenomena that challenge observational analyses. To design and evaluate a probabilistic framework (OSIM2) for generating simulated observational healthcare data, and to use these data for evaluating the performance of methods in identifying associations between drug exposure and health outcomes of interest. Seven observational designs, including case-control, cohort, self-controlled case series, and self-controlled cohort designs, were applied to 399 drug-outcome scenarios in 6 simulated datasets with no effect and injected relative risks of 1.25, 1.5, 2, 4, and 10, respectively. Longitudinal data for 10 million simulated patients were generated using a model derived from an administrative claims database, with associated demographics, periods of drug exposure derived from pharmacy dispensings, and medical conditions derived from diagnoses on medical claims. Simulation validation was performed through descriptive comparison with real source data. Method performance was evaluated using the Area Under the ROC Curve (AUC), bias, and mean squared error. OSIM2 replicates the prevalence and types of confounding observed in real claims data. When simulated data are injected with relative risks (RR) ≥ 2, all designs have good predictive accuracy (AUC > 0.90), but when RR < 2, no method achieves 100 % prediction accuracy. Each method exhibits a different bias profile, which changes with the effect size. OSIM2 can support methodological research. Results from simulation suggest method operating characteristics are far from nominal properties.
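The following sketch illustrates, on made-up numbers, the three evaluation metrics named above (AUC, bias, mean squared error) as they might be applied to estimated versus injected effect sizes; it is not the OSIM2 code and all example values are hypothetical.

import math

def auc(scores_positive, scores_negative):
    # Rank-based AUC: probability that a true signal scores above a negative control.
    wins = ties = 0
    for sp in scores_positive:
        for sn in scores_negative:
            if sp > sn:
                wins += 1
            elif sp == sn:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_positive) * len(scores_negative))

def bias_and_mse(estimated_log_rr, true_log_rr):
    errors = [e - t for e, t in zip(estimated_log_rr, true_log_rr)]
    bias = sum(errors) / len(errors)
    mse = sum(e * e for e in errors) / len(errors)
    return bias, mse

# Hypothetical estimates: injected RR = 2 scenarios vs. negative controls (true RR = 1).
est_pos = [math.log(x) for x in (1.8, 2.3, 1.6, 2.1)]   # estimated log RR, true effects
est_neg = [math.log(x) for x in (0.9, 1.2, 1.0, 1.1)]   # estimated log RR, no effect
print("AUC:", round(auc(est_pos, est_neg), 2))
print("bias, MSE:", bias_and_mse(est_pos, [math.log(2)] * len(est_pos)))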
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brink, A.; Kilpinen, P.; Hupa, M.
1996-01-01
Two methods to improve the modeling of NOx emissions in numerical flow simulation of combustion are investigated. The models used are a reduced mechanism for nitrogen chemistry in methane combustion and a new model based on regression analysis of perfectly stirred reactor simulations using detailed comprehensive reaction kinetics. The applicability of the methods to numerical flow simulation of practical furnaces, especially in the near-burner region, is tested against experimental data from a pulverized coal fired single burner furnace. The results are also compared to those obtained using a commonly used description for the overall reaction rate of NO.
Klager, Brian J.; Kelly, Brian P.; Ziegler, Andrew C.
2014-01-01
The Equus Beds aquifer in south-central Kansas is a primary water-supply source for the city of Wichita. Water-level declines because of groundwater pumping for municipal and irrigation needs as well as sporadic drought conditions have caused concern about the adequacy of the Equus Beds aquifer as a future water supply for Wichita. In March 2006, the city of Wichita began construction of the Equus Beds Aquifer Storage and Recovery project, a plan to artificially recharge the aquifer with excess water from the Little Arkansas River. Artificial recharge will raise groundwater levels, increase storage volume in the aquifer, and deter or slow down a plume of chloride brine approaching the Wichita well field from the Burrton, Kansas area caused by oil production activities in the 1930s. Another source of high chloride water to the aquifer is the Arkansas River. This study was prepared in cooperation with the city of Wichita as part of the Equus Beds Aquifer Storage and Recovery project. Chloride transport in the Equus Beds aquifer was simulated between the Arkansas and Little Arkansas Rivers near the Wichita well field. Chloride transport was simulated for the Equus Beds aquifer using SEAWAT, a computer program that combines the groundwater-flow model MODFLOW-2000 and the solute-transport model MT3DMS. The chloride-transport model was used to simulate the period from 1990 through 2008 and the effects of five well pumping scenarios and one artificial recharge scenario. The chloride distribution in the aquifer for the beginning of 1990 was interpolated from groundwater samples from around that time, and the chloride concentrations in rivers for the study period were interpolated from surface water samples. Five well-pumping scenarios and one artificial-recharge scenario were assessed for their effects on simulated chloride transport and water levels in and around the Wichita well field. The scenarios were: (1) existing 1990 through 2008 pumping conditions, to serve as a baseline scenario for comparison with the hypothetical scenarios; (2) no pumping in the model area, to demonstrate the chloride movement without the influence of well pumping; (3) double municipal pumping from the Wichita well field with existing irrigation pumping; (4) existing municipal pumping with no irrigation pumping in the model area; (5) double municipal pumping in the Wichita well field and no irrigation pumping in the model area; and (6) increasing artificial recharge to the Phase 1 Artificial Storage and Recovery project sites by 2,300 acre-feet per year. The effects of the hypothetical pumping and artificial recharge scenarios on simulated chloride transport were measured by comparing the rate of movement of the 250-milligrams-per-liter-chloride front for each hypothetical scenario with the baseline scenario at the Arkansas River area near the southern part of the Wichita well field and the Burrton plume area. The scenarios that increased the rate of movement the most compared to the baseline scenario of existing pumping between the Arkansas River and the southern boundary of the well field were those that doubled the city of Wichita’s pumping from the well field (scenarios 3 and 5), increasing the rate of movement by 50 to 150 feet per year, with the highest rate increases in the shallow layer and the lowest rate increases in the deepest layer. The no pumping and no irrigation pumping scenarios (2 and 4) slowed the rate of movement in this area by 150 to 210 feet per year and 40 to 70 feet per year, respectively. 
In the double Wichita pumping scenario (3), the rate of movement in the shallow layer of the Burrton area decreased by about 50 feet per year. The simulated chloride rate of movement in the deeper layers of the Burrton area was decreased in the no pumping and no irrigation scenarios (2 and 4) by 80 to 120 feet per year and 50 feet per year, respectively, and increased in the scenarios that double Wichita's pumping from the well field (3 and 5) by zero to 130 feet per year, with the largest increases in the deepest layer. In the increased Phase 1 artificial recharge scenario (6), the rate of chloride movement in the Burrton area increased in the shallow layer by about 30 feet per year, and decreased in the middle and deepest layers by about 10 and 60 feet per year, respectively. Comparisons of the rate of movement of the simulated 250-milligrams-per-liter-chloride front in the hypothetical scenarios to the baseline scenario indicated that, in general, increases to pumping in the well field area increased the rate of simulated chloride movement toward the well field area by as much as 150 feet per year. Reductions in pumping slowed the advance of chloride toward the well field by as much as 210 feet per year, although reductions did not stop the movement of chloride toward the well field, even when pumping was eliminated. If pumping is completely discontinued, the rate of chloride movement is about 500 to 600 feet per year in the area between the Arkansas River and the southern part of the Wichita well field, and 70 to 500 feet per year in the area near Burrton, with the highest rate of movement in the shallow aquifer layer. The averages of simulated water levels in index monitoring wells in the Wichita well field at the end of 2008 were calculated for each scenario. Compared to the baseline scenario, the average simulated water level was 5.05 feet higher for the no pumping scenario, 4.72 feet lower for the double Wichita pumping with existing irrigation scenario, 2.49 feet higher for the no irrigation pumping with existing Wichita pumping scenario, 1.53 feet lower for the double Wichita pumping with no irrigation scenario, and 0.48 feet higher for the increased Phase 1 artificial recharge scenario. The groundwater flow was simulated with a preexisting groundwater-flow model, which was not altered to calibrate the solute-transport model to observed chloride-concentration data. Therefore, some areas in the model had poor fit between simulated chloride concentrations and observed chloride concentrations, including the area between the Arkansas River and the southern part of the Wichita well field, and the Hollow-Nikkel area about 6 miles north of Burrton. Compared to the interpreted location of the 250-milligrams-per-liter-chloride front based on data collected in 2011, in the Arkansas River area the simulated 250-milligrams-per-liter-chloride front moved from the river toward the well field at about twice the rate of the actual front in the shallow layer and at about four times the rate of the actual front in the deep layer. Future groundwater-flow and chloride-transport modeling efforts may achieve better agreement between observed and simulated chloride concentrations in these areas by taking the chloride-transport model fit into account when adjusting parameters such as hydraulic conductivity, riverbed conductance, and effective porosity during calibration.
Results of the hypothetical scenarios simulated indicate that the Burrton chloride plume will continue moving toward the well field regardless of pumping in the area and that one alternative may be to increase pumping from within the plume area to reverse the groundwater-flow gradients and remove the plume. Additionally, the results of modeling these scenarios indicate that eastward movement of the Burrton plume could be slowed by the additional artificial recharge at the Phase 1 sites and that decreasing pumping along the Arkansas River or increasing water levels could retard the movement of chloride and may prevent further encroachment into the southern part of the well field area.
Smith, David W.; Buto, Susan G.; Welborn, Toby L.
2016-09-14
The acquisition and transfer of water rights to wetland areas of Lahontan Valley, Nevada, has caused concern over the potential effects on shallow aquifer water levels. In 1992, water levels in Lahontan Valley were measured to construct a water-table map of the shallow aquifer prior to the effects of water-right transfers mandated by the Fallon Paiute-Shoshone Tribal Settlement Act of 1990 (Public Law 101-618, 104 Stat. 3289). From 1992 to 2012, approximately 11,810 water-righted acres, or 34,356 acre-feet of water, were acquired and transferred to wetland areas of Lahontan Valley. This report documents changes in water levels measured during the period of water-right transfers and presents an evaluation of five groundwater-flow model scenarios that simulated water-level changes in Lahontan Valley in response to water-right transfers and a reduction in irrigation season length by 50 percent. Water levels measured in 98 wells from 2012 to 2013 were used to construct a water-table map. Water levels in 73 of the 98 wells were compared with water levels measured in 1992 and used to construct a water-level change map. Water-level changes in the 73 wells ranged from -16.2 to 4.1 feet over the 20-year period. Rises in water levels in Lahontan Valley may correspond to annual changes in available irrigation water, increased canal flows after the exceptionally dry and shortened irrigation season of 1992, and the increased conveyance of water rights transferred to Stillwater National Wildlife Refuge. Water-level declines generally occurred near the boundary of irrigated areas and may be associated with groundwater pumping, water-right transfers, and inactive surface-water storage reservoirs. The largest water-level declines were in the area near Carson Lake. Groundwater-level response to water-right transfers was evaluated by comparing simulated and observed water-level changes for periods representing water-right transfers and a shortened irrigation season in areas near Fallon and Stillwater, Nevada. In the Stillwater modeled area, water rights associated with nearly 50 percent of the irrigated land were transferred from 1992 to 1998, represented by the model scenario reducing groundwater recharge by 50 percent. The scenario resulted in a simulated average decline of 0.6 foot; the average observed water-level change for the modeled area was estimated to be 0.0 foot, or no change. In the Fallon modeled area, transfers of water rights associated with 180 acres of land occurred from 1994 to 2008. The transfer is most similar to the scenario for removal of 320 acres of irrigated land. The model scenario resulted in simulated water-level declines of 0.1 foot; water levels measured from 1994 to 2012 indicate no significant trends in water levels, or approximately zero change in water levels, for the Fallon modeled area. The model scenarios included the simulation of an irrigation season shortened by 50 percent, which was determined to have occurred in the 1992 irrigation season in both modeled areas. The shortening of the irrigation season in the Fallon modeled area resulted in simulated water-level declines of 1.1 feet; observed declines were estimated to be 1.3 feet. The Stillwater model simulations resulted in a simulated decline of 1.4 feet, and observed water levels declined an estimated 2.3 feet for the area. The estimated differences between simulated and observed water levels are 0.2 and 0.9 foot for the Fallon and Stillwater modeled areas, respectively.
Observed water-level changes were generally within one standard deviation of changes from model simulations, based on the selected periods of comparison. Simulated and observed water-level changes agree well, generally within 1 foot; however, the model scenarios were only approximately similar to the observed conditions, and periods of comparison were generally shorter for the observed periods and included additional cumulative effects of water-right transfers. Climate variability was not considered in the model scenarios.
NASA Astrophysics Data System (ADS)
Piot, M.; Pay, M.; Jorba, O.; Lopez, E.; Pirez, C.; Gasso, S.; Baldasano, J. M.
2009-12-01
In Europe, human exposure to air pollution often exceeds standards set by the EU commission (Directives 1996/62/EC, 2002/3/EC, 2008/50/EC) and the World Health Organization (WHO). Urban/suburban areas are predominantly impacted, although exceedances of particulate matter (PM10 and PM2.5) and ozone (O3) also take place in rural areas. Within the CALIOPE project, a high-resolution air quality forecasting system, namely WRF-ARW/HERMES04/CMAQ/BSC-DREAM, has been developed and applied to the European domain (12x12 sq. km, 1 hr) as well as the Spanish domain (4x4 sq. km, 1 hr). The simulation of such a high-resolution model system has been made possible by its implementation on the MareNostrum supercomputer. This contribution describes a thorough quantitative evaluation study performed for the reference year 2004. The WRF-ARW meteorological model contains 38 vertical layers reaching up to 50 hPa. The vertical resolution of the CMAQ chemistry-transport model for gas-phase species and aerosols has been increased from 8 to 15 layers in order to simulate vertical exchanges more accurately. Gas-phase boundary conditions are provided by the LMDz-INCA2 global climate-chemistry model. For the European simulation, emissions are disaggregated from the EMEP emission inventory for 2004 to the utilized resolution using the criteria implemented in the HERMES04 emission model. The HERMES04 model system, running through a bottom-up approach, is used to estimate emissions for Spain at a 1x1 sq. km horizontal resolution, every hour. In order to evaluate the performance of the CALIOPE system, the model simulation for Europe was compared with ground-based measurements from the EMEP and the Spanish air quality networks (a total of 60 stations for O3, 43 for NO2, 31 for SO2, 25 for PM10 and 16 for PM2.5). The model simulation for Europe satisfactorily reproduces O3 concentrations throughout the year (annual correlation: 0.66) with relatively small errors: MNGE values range from 13% to 26%, and MNBE values show a slight negative bias ranging from -18% to 0%. These values lie within the range defined by the US-EPA (MNGE: +/- 30-35%; MNBE: +/- 10-15%. See US-EPA, 1991, 2005). NO2 is less accurately simulated, with a mean MNBE of -35% caused by an overall underestimation in concentrations. SO2 concentrations are reproduced reasonably well, although false peaks are reported (mean annual MNBE = 6%). The simulated variation of particulate matter is reliable, with a mean correlation of 0.57. The aerosol dynamics is well captured and false peaks are reduced by use of an improved 8-bin aerosol description in the BSC-DREAM dust model, but mean levels are still underestimated by a factor of two. The model simulation for Europe is used to force the nested high-resolution simulation of Spain. The performance of the latter will also be presented.
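For reference, the evaluation statistics quoted against the US-EPA ranges can be computed as below; the paired model and observation values are invented for illustration, and skipping pairs with zero observations is an assumption of this sketch.

def mnbe_mnge(modeled, observed):
    # Mean normalized bias error (MNBE) and mean normalized gross error (MNGE), in percent.
    pairs = [(m, o) for m, o in zip(modeled, observed) if o > 0]
    mnbe = 100.0 * sum((m - o) / o for m, o in pairs) / len(pairs)
    mnge = 100.0 * sum(abs(m - o) / o for m, o in pairs) / len(pairs)
    return mnbe, mnge

# Example: hourly O3 (ug m-3) at one hypothetical station.
obs = [62.0, 75.0, 88.0, 94.0, 81.0, 70.0]
mod = [55.0, 70.0, 90.0, 85.0, 78.0, 66.0]
mnbe, mnge = mnbe_mnge(mod, obs)
print(f"MNBE = {mnbe:.1f}%, MNGE = {mnge:.1f}%")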
NASA Astrophysics Data System (ADS)
Mei, Donghai; Ge, Qingfeng; Neurock, Matthew; Kieken, Laurent; Lerou, Jan
First-principles-based kinetic Monte Carlo simulation was used to track the elementary surface transformations involved in the catalytic decomposition of NO over Pt(100) and Rh(100) surfaces under lean-burn operating conditions. Density functional theory (DFT) calculations were carried out to establish the structure and energetics for all reactants, intermediates and products over Pt(100) and Rh(100). Lateral interactions which arise from neighbouring adsorbates were calculated by examining changes in the binding energies as a function of coverage and different coadsorbed configurations. These data were fitted to a bond order conservation (BOC) model which was subsequently used to establish the effects of coverage within the simulation. The intrinsic activation barriers for all the elementary reaction steps in the proposed mechanism of NO reduction over Pt(100) were calculated by using DFT. These values are corrected for coverage effects by using the parametrized BOC model internally within the simulation. This enables a site-explicit kinetic Monte Carlo simulation that can follow the kinetics of NO decomposition over Pt(100) and Rh(100) in the presence of excess oxygen. The simulations are used here to model various experimental protocols including temperature-programmed desorption as well as batch catalytic kinetics. The simulation results for the temperature-programmed desorption and decomposition of NO over Pt(100) and Rh(100) under vacuum conditions were found to be in very good agreement with experimental results. NO decomposition is strongly tied to the temporal number of sites that remain vacant. Experimental results show that Pt is active in the catalytic reaction of NO into N2 and NO2 under lean-burn conditions. The simulated reaction orders for NO and O2 were found to be +0.9 and -0.4 at 723 K, respectively. The simulation also indicates that there is no activity over Rh(100) since the surface becomes poisoned by oxygen.
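As an illustration of the event-selection step at the heart of a kinetic Monte Carlo simulation of this kind, the following minimal sketch picks events with probability proportional to their rates and advances the clock by an exponentially distributed waiting time; the species, rates and single-site lattice are toy assumptions, not the DFT/BOC-parametrized model of the study.

import math
import random

def kmc_step(events, t):
    # events: list of (rate, apply_fn); pick one with probability proportional to its rate.
    total_rate = sum(rate for rate, _ in events)
    target = random.random() * total_rate
    cumulative = 0.0
    for rate, apply_fn in events:
        cumulative += rate
        if cumulative >= target:
            apply_fn()
            break
    # Advance the clock by an exponentially distributed waiting time.
    return t - math.log(1.0 - random.random()) / total_rate

# Toy example: NO adsorption versus desorption on a single surface site.
state = {"occupied": False}
def adsorb(): state["occupied"] = True
def desorb(): state["occupied"] = False

t = 0.0
for _ in range(10):
    events = [(1.0, adsorb)] if not state["occupied"] else [(0.2, desorb)]
    t = kmc_step(events, t)
print(f"time = {t:.2f}, site occupied = {state['occupied']}")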
Fabian, M Patricia; Stout, Natasha K; Adamkiewicz, Gary; Geggel, Amelia; Ren, Cizao; Sandel, Megan; Levy, Jonathan I
2012-09-18
In the United States, asthma is the most common chronic disease of childhood across all socioeconomic classes and is the most frequent cause of hospitalization among children. Asthma exacerbations have been associated with exposure to residential indoor environmental stressors such as allergens and air pollutants as well as numerous additional factors. Simulation modeling is a valuable tool that can be used to evaluate interventions for complex multifactorial diseases such as asthma but in spite of its flexibility and applicability, modeling applications in either environmental exposures or asthma have been limited to date. We designed a discrete event simulation model to study the effect of environmental factors on asthma exacerbations in school-age children living in low-income multi-family housing. Model outcomes include asthma symptoms, medication use, hospitalizations, and emergency room visits. Environmental factors were linked to percent predicted forced expiratory volume in 1 second (FEV1%), which in turn was linked to risk equations for each outcome. Exposures affecting FEV1% included indoor and outdoor sources of NO2 and PM2.5, cockroach allergen, and dampness as a proxy for mold. Model design parameters and equations are described in detail. We evaluated the model by simulating 50,000 children over 10 years and showed that pollutant concentrations and health outcome rates are comparable to values reported in the literature. In an application example, we simulated what would happen if the kitchen and bathroom exhaust fans were improved for the entire cohort, and showed reductions in pollutant concentrations and healthcare utilization rates. We describe the design and evaluation of a discrete event simulation model of pediatric asthma for children living in low-income multi-family housing. Our model simulates the effect of environmental factors (combustion pollutants and allergens), medication compliance, seasonality, and medical history on asthma outcomes (symptom-days, medication use, hospitalizations, and emergency room visits). The model can be used to evaluate building interventions and green building construction practices on pollutant concentrations, energy savings, and asthma healthcare utilization costs, and demonstrates the value of a simulation approach for studying complex diseases such as asthma.
The effects of indoor environmental exposures on pediatric asthma: a discrete event simulation model
2012-01-01
Background In the United States, asthma is the most common chronic disease of childhood across all socioeconomic classes and is the most frequent cause of hospitalization among children. Asthma exacerbations have been associated with exposure to residential indoor environmental stressors such as allergens and air pollutants as well as numerous additional factors. Simulation modeling is a valuable tool that can be used to evaluate interventions for complex multifactorial diseases such as asthma but in spite of its flexibility and applicability, modeling applications in either environmental exposures or asthma have been limited to date. Methods We designed a discrete event simulation model to study the effect of environmental factors on asthma exacerbations in school-age children living in low-income multi-family housing. Model outcomes include asthma symptoms, medication use, hospitalizations, and emergency room visits. Environmental factors were linked to percent predicted forced expiratory volume in 1 second (FEV1%), which in turn was linked to risk equations for each outcome. Exposures affecting FEV1% included indoor and outdoor sources of NO2 and PM2.5, cockroach allergen, and dampness as a proxy for mold. Results Model design parameters and equations are described in detail. We evaluated the model by simulating 50,000 children over 10 years and showed that pollutant concentrations and health outcome rates are comparable to values reported in the literature. In an application example, we simulated what would happen if the kitchen and bathroom exhaust fans were improved for the entire cohort, and showed reductions in pollutant concentrations and healthcare utilization rates. Conclusions We describe the design and evaluation of a discrete event simulation model of pediatric asthma for children living in low-income multi-family housing. Our model simulates the effect of environmental factors (combustion pollutants and allergens), medication compliance, seasonality, and medical history on asthma outcomes (symptom-days, medication use, hospitalizations, and emergency room visits). The model can be used to evaluate building interventions and green building construction practices on pollutant concentrations, energy savings, and asthma healthcare utilization costs, and demonstrates the value of a simulation approach for studying complex diseases such as asthma. PMID:22989068
Multi-scale and multi-physics simulations using the multi-fluid plasma model
2017-04-25
The simulation uses 512 second-order elements with Bz = 1.0, Te = Ti = 0.01, ui = ue = 0, and ne = ni = 1.0 + exp(-10(x-6)^2) (cf. Baboolal, Math. and Comp. Sim. 55). Summary: the blended finite element method (BFEM) is presented, combining a DG spatial discretization with explicit Runge-Kutta time stepping (ions and neutrals) and a CG spatial discretization with implicit Crank-Nicolson time stepping (electrons and fields); DG captures shocks and discontinuities, while CG is efficient and robust for the remaining fields.
NASA Astrophysics Data System (ADS)
Kuik, Friderike; Lauer, Axel; Churkina, Galina; Denier van der Gon, Hugo A. C.; Fenner, Daniel; Mar, Kathleen A.; Butler, Tim M.
2016-12-01
Air pollution is the number one environmental cause of premature deaths in Europe. Despite extensive regulations, air pollution remains a challenge, especially in urban areas. For studying summertime air quality in the Berlin-Brandenburg region of Germany, the Weather Research and Forecasting Model with Chemistry (WRF-Chem) is set up and evaluated against meteorological and air quality observations from monitoring stations as well as from a field campaign conducted in 2014. The objective is to assess which resolution and level of detail in the input data is needed for simulating urban background air pollutant concentrations and their spatial distribution in the Berlin-Brandenburg area. The model setup includes three nested domains with horizontal resolutions of 15, 3 and 1 km and anthropogenic emissions from the TNO-MACC III inventory. We use RADM2 chemistry and the MADE/SORGAM aerosol scheme. Three sensitivity simulations are conducted updating input parameters to the single-layer urban canopy model based on structural data for Berlin, specifying land use classes on a sub-grid scale (mosaic option) and downscaling the original emissions to a resolution of ca. 1 km × 1 km for Berlin based on proxy data including traffic density and population density. The results show that the model simulates meteorology well, though urban 2 m temperature and urban wind speeds are biased high and nighttime mixing layer height is biased low in the base run with the settings described above. We show that the simulation of urban meteorology can be improved when specifying the input parameters to the urban model, and to a lesser extent when using the mosaic option. On average, ozone is simulated reasonably well, but maximum daily 8 h mean concentrations are underestimated, which is consistent with the results from previous modelling studies using the RADM2 chemical mechanism. Particulate matter is underestimated, which is partly due to an underestimation of secondary organic aerosols. NOx (NO + NO2) concentrations are simulated reasonably well on average, but nighttime concentrations are overestimated due to the model's underestimation of the mixing layer height, and urban daytime concentrations are underestimated. The daytime underestimation is improved when using downscaled, and thus locally higher emissions, suggesting that part of this bias is due to deficiencies in the emission input data and their resolution. The results further demonstrate that a horizontal resolution of 3 km improves the results and spatial representativeness of the model compared to a horizontal resolution of 15 km. With the input data (land use classes, emissions) at the level of detail of the base run of this study, we find that a horizontal resolution of 1 km does not improve the results compared to a resolution of 3 km. However, our results suggest that a 1 km horizontal model resolution could enable a detailed simulation of local pollution patterns in the Berlin-Brandenburg region if the urban land use classes, together with the respective input parameters to the urban canopy model, are specified with a higher level of detail and if urban emissions of higher spatial resolution are used.
NASA Astrophysics Data System (ADS)
Kuik, Friderike; Lauer, Axel; von Schneidemesser, Erika; Butler, Tim
2017-04-01
Many European cities continue to struggle with meeting the European air quality limits for NO2. In Berlin, Germany, most of the exceedances in NO2 recorded at monitoring sites near busy roads can be largely attributed to emissions from traffic. In order to assess the impact of changes in traffic emissions on air quality at policy-relevant scales, we combine the regional atmosphere-chemistry transport model WRF-Chem at a resolution of 1 km x 1 km with a statistical downscaling approach. Here, we build on the recently published study of Kuik et al. (2016) evaluating the performance of a WRF-Chem setup in representing observed urban background NO2 concentrations, and extend this setup by developing and testing an approach to statistically downscale simulated urban background NO2 concentrations to street level. The approach uses a multilinear regression model to relate roadside NO2 concentrations observed with the municipal monitoring network to observed NO2 concentrations at urban background sites and observed traffic counts. For this, the urban background NO2 concentrations are decomposed into a long-term, a synoptic and a diurnal component using the Kolmogorov-Zurbenko filtering method. We estimate the coefficients of the regression model for five different roadside stations in Berlin representing different street types. In a second step we combine the coefficients with simulated urban background concentrations and observed traffic counts, in order to estimate roadside NO2 concentrations based on the results obtained with WRF-Chem at the five selected stations. In a third step, we extrapolate the NO2 concentrations to all major roads in Berlin. The latter is based on available data for Berlin of daily mean traffic counts, diurnal and weekly cycles of traffic, as well as simulated urban background NO2 concentrations. We evaluate the NO2 concentrations estimated with this method at street level for Berlin with additional observational data from stationary measurements and mobile measurements conducted during a campaign in summer 2014. The results show that this approach allows us to estimate NO2 concentrations at the roadside reasonably well. The approach can be applied when observations show a strong correlation between roadside NO2 concentrations and traffic emissions from a single type of road. The method, however, shows weaknesses for intersections where observed NO2 concentrations are influenced by traffic on several different roads. We then apply this downscaling approach to estimate the impact of different traffic emission scenarios both on urban background and street-level NO2 concentrations. References: Kuik, F., Lauer, A., Churkina, G., Denier van der Gon, H. A. C., Fenner, D., Mar, K. A., and Butler, T. M.: Air quality modelling in the Berlin-Brandenburg region using WRF-Chem v3.7.1: sensitivity to resolution of model grid and input data, Geosci. Model Dev., 9, 4339-4363, doi:10.5194/gmd-9-4339-2016, 2016.
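The Kolmogorov-Zurbenko decomposition mentioned above can be sketched as repeated moving averages; the window lengths and iteration count below are illustrative assumptions rather than the settings used in the study.

import math

def moving_average(x, m):
    # Centred moving average of (odd) window length m; the window shrinks at the edges.
    half = m // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def kz_filter(x, m, k):
    # KZ(m, k): k passes of the moving average of length m.
    for _ in range(k):
        x = moving_average(x, m)
    return x

def kz_decompose(hourly_no2, m_diurnal=13, m_synoptic=103, k=5):
    baseline = kz_filter(hourly_no2, m_diurnal, k)     # removes the diurnal cycle
    long_term = kz_filter(hourly_no2, m_synoptic, k)   # smooth long-term component
    diurnal = [x - b for x, b in zip(hourly_no2, baseline)]
    synoptic = [b - l for b, l in zip(baseline, long_term)]
    return long_term, synoptic, diurnal

# Example with a short synthetic hourly NO2-like series (two weeks).
series = [30 + 10 * math.sin(2 * math.pi * h / 24) + 0.01 * h for h in range(24 * 14)]
long_term, synoptic, diurnal = kz_decompose(series)
print(len(long_term), len(synoptic), len(diurnal))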
Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz
2015-01-01
Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (<0.5 Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows that there is no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematic lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5 Hz).
NASA Astrophysics Data System (ADS)
Mues, A.; Kuenen, J.; Hendriks, C.; Manders, A.; Segers, A.; Scholz, Y.; Hueglin, C.; Builtjes, P.; Schaap, M.
2013-07-01
In this study, the sensitivity of the model performance of the chemistry transport model (CTM) LOTOS-EUROS to the description of the temporal variability of emissions was investigated. Currently the temporal release of anthropogenic emissions is described by European average diurnal, weekly and seasonal time profiles per sector. These default time profiles largely neglect the variation of emission strength with activity patterns, region, species, emission process and meteorology. The three sources dealt with in this study are combustion in energy and transformation industries (SNAP1), non-industrial combustion (SNAP2) and road transport (SNAP7). First, the impact of neglecting the temporal emission profiles for these SNAP categories on simulated concentrations was explored. In a second step, we constructed more detailed emission time profiles for the three categories and quantified their impact on the model performance separately as well as combined. The performance in comparison to observations for Germany was quantified for the pollutants NO2, SO2 and PM10 and compared to a simulation using the default LOTOS-EUROS emission time profiles. In general the largest impact on the model performance was found when neglecting the default time profiles for the three categories. The daily average correlation coefficient, for instance, decreased by 0.04 (NO2), 0.11 (SO2) and 0.01 (PM10) at German urban background stations compared to the default simulation. A systematic increase of the correlation coefficient is found when using the new time profiles. The size of the increase depends on the source category, the component and the station. Using national profiles for road transport showed important improvements in the explained variability over the weekdays as well as in the diurnal cycle for NO2. The largest impact of the SNAP1 and 2 profiles was found for SO2. When using all new time profiles simultaneously in one simulation, the daily average correlation coefficient increased by 0.05 (NO2), 0.07 (SO2) and 0.03 (PM10) at urban background stations in Germany. This exercise showed that to improve the performance of a CTM a better representation of the distribution of anthropogenic emissions in time is recommended. This can be done by developing a dynamical emission model which takes into account region-specific factors and meteorology.
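The following sketch shows, in general terms, how sector-specific time profiles redistribute an annual emission total over the hours of a representative day while conserving the annual total; the factor values are invented and are not the LOTOS-EUROS defaults or the new profiles constructed in the study.

def hourly_emissions(annual_total, monthly_factor, weekday_factor, diurnal_factors):
    # Split an annual emission into 24 hourly values for one representative day.
    # All factors are dimensionless multipliers with mean 1 over their cycle, so the
    # annual total is conserved when summed over the full year.
    mean_hourly = annual_total / (365.0 * 24.0)
    return [mean_hourly * monthly_factor * weekday_factor * f for f in diurnal_factors]

# Hypothetical road-transport profile: morning and evening rush-hour peaks.
diurnal = [0.2, 0.2, 0.2, 0.3, 0.5, 0.9, 1.6, 2.0, 1.8, 1.3, 1.1, 1.1,
           1.1, 1.1, 1.2, 1.4, 1.8, 2.0, 1.6, 1.1, 0.8, 0.6, 0.4, 0.3]
diurnal = [24 * f / sum(diurnal) for f in diurnal]   # normalize to mean 1
emis = hourly_emissions(annual_total=1.0e6, monthly_factor=1.05,
                        weekday_factor=1.1, diurnal_factors=diurnal)
print(round(sum(emis), 1), "emitted on this representative day (arbitrary units)")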
Collapse transition in polymer models with multiple monomers per site and multiple bonds per edge
NASA Astrophysics Data System (ADS)
Rodrigues, Nathann T.; Oliveira, Tiago J.
2017-12-01
We present results from extensive Monte Carlo simulations of polymer models where each lattice site can be visited by up to K monomers and no restriction is imposed on the number of bonds on each lattice edge. These multiple monomers per site (MMS) models are investigated on the square and cubic lattices, for K = 2 and 3, by associating Boltzmann weights ω0 = 1, ω1 = exp(β1), and ω2 = exp(β2) to sites visited by 1, 2, and 3 monomers, respectively. Two versions of the MMS models are considered, in which immediate reversals of the walks are allowed (RA) or forbidden (RF). In contrast to previous simulations of these models, we find the same thermodynamic behavior for both the RA and RF versions. In three dimensions, the phase diagrams, in the space β2 × β1, feature coil and globule phases separated by a line of Θ points, as thoroughly demonstrated by the metric (νt), crossover (ϕt), and entropic (γt) exponents. The existence of the Θ lines is also confirmed by the second virial coefficient. This shows that no discontinuous collapse transition exists in these models, in contrast to previous claims based on a weak bimodality observed in some distributions, which indeed exists in a narrow region very close to the Θ line when β1 < 0. Interestingly, in two dimensions, only a crossover is found between the coil and globule phases.
NASA Astrophysics Data System (ADS)
Khalil, M. I.; Smith, J.; Abdalla, M.; O'Brien, P.; Smith, P.; Müller, C.
2011-12-01
Agriculture and associated land-use changes contribute a significant portion of global greenhouse gas (GHG) emissions, mainly as N2O, CO2 and CH4. Improved modelling of soil processes will greatly enhance the value of national inventories, both in terms of more accurate reporting and better mitigation policy options. In Ireland, Agriculture and Land Use, Land Use Change and Forestry is currently a priority research focus, aimed at reducing uncertainty in estimates of GHG emissions and sinks. The ECOSSE model has several advantages, including limited meteorological and soil data requirements, compared to other models. It can simulate the impacts of land use, management and climate change on C and N emissions and stocks for both mineral and organic soils at field and national scales. In this study, ECOSSE has been used to predict GHG emissions and SOC changes in arable lands cropped with spring barley receiving different rates of N application. The simulated outputs are evaluated against measured data available from a two-year field study. The modelled responses of N2O fluxes are found to be consistent with the measured values. The bias in the total difference between measured values and the corresponding modelled N2O fluxes was large due to the impact of a few unexpected measurements. In the fertilized fields, significant correlation between modelled and measured N2O fluxes was observed, with correlation coefficients of 0.54-0.60 and root mean square errors of 18.6-20.8 g N ha-1 d-1. The measured seasonal (crop growth period) N2O losses (integrated) were 0.41 and 0.50% of the N applied at rates of 70-79 and 140-159 kg ha-1, respectively. As a further comparison, the simulated values for the dates when measurements were taken were similarly integrated. The corresponding simulated seasonal N2O losses were 0.69 and 1.11% of the added N, suggesting an overestimation by 70-123% of the measured values. However, this could be due to missed emissions associated with the sporadic timing of measurements, at 2- to 15-day intervals. The corresponding simulated annual losses obtained by summing the modelled daily fluxes were 0.49 and 0.62% of applied N, more closely matching the measured values. The model estimated a total CO2 emission of 4.0 t C ha-1 yr-1 for plots receiving no crop residues. This is 58% less than the typical value measured (9.6 t ha-1 yr-1) in a similar Irish field receiving crop residues, 47% less than the annual average (7.5±4.3 t ha-1 yr-1) for temperate regions, and 26% less than the global average (5.4±0.8 t ha-1 yr-1) for croplands. The simulated CH4 emissions were found to be negligible from the arable fields. The modelled SOC content increased with increasing N application rates, but on average showed a loss of 1.06 t C ha-1 yr-1 for fields receiving no residues. Preliminary results suggest that the model can reliably be used to estimate the process-based emissions of GHGs from the arable fields. However, further analyses are needed to fully determine the uncertainty in these estimates.
NASA Astrophysics Data System (ADS)
Itahashi, S.; Uno, I.; Irie, H.; Kurokawa, J.; Ohara, T.
2013-04-01
Satellite observations of the tropospheric NO2 vertical column density (VCD) are closely correlated with surface NOx emissions and can thus be used to estimate the latter. In this study, the NO2 VCDs simulated by a regional chemical transport model with data from the updated Regional Emission inventory in ASia (REAS) version 2.1 were validated by comparison with multi-satellite observations (GOME, SCIAMACHY, GOME-2, and OMI) between 2000 and 2010. Rapid growth in NO2 VCD driven by the expansion of anthropogenic NOx emissions was revealed over the central eastern China region, except during the economic downturn. In contrast, slightly decreasing trends were captured over Japan. The modeled NO2 VCDs using the updated REAS emissions reasonably reproduced the annual trends observed by the multiple satellites, suggesting that the NOx emission growth rate estimated by the updated inventory is robust. On the basis of the close linear relationship between modeled NO2 VCD, observed NO2 VCD, and anthropogenic NOx emissions, the NOx emissions in 2009 and 2010 were estimated. It was estimated that the NOx emissions from anthropogenic sources in China more than doubled between 2000 and 2010, reflecting the strong growth of anthropogenic emissions in China with the rapid recovery from the economic downturn during late 2008 and mid-2009.
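The linear-scaling idea described above can be sketched as follows; all numbers are invented for illustration, and the simple proportionality between column density and emissions is the stated assumption.

def estimate_emissions(prior_emissions, observed_vcd, modeled_vcd):
    # Top-down emission estimate under an assumed linear column-to-emission response:
    # scale the bottom-up (prior) emissions by the observed/modeled column ratio.
    return prior_emissions * (observed_vcd / modeled_vcd)

# Hypothetical regional values (arbitrary units for columns, Tg N yr-1 for emissions).
prior_emissions_2008 = 8.0
observed_vcd_2010 = 14.5
modeled_vcd_2010_with_2008_emissions = 12.0
print(round(estimate_emissions(prior_emissions_2008,
                               observed_vcd_2010,
                               modeled_vcd_2010_with_2008_emissions), 2))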
Fischer, Paul W; Cullen, Alison C; Ettl, Gregory J
2017-01-01
The objectives of this study are to understand tradeoffs between forest carbon and timber values, and evaluate the impact of uncertainty in improved forest management (IFM) carbon offset projects to improve forest management decisions. The study uses probabilistic simulation of uncertainty in financial risk for three management scenarios (clearcutting in 45- and 65-year rotations and no harvest) under three carbon price schemes (historic voluntary market prices, cap and trade, and carbon prices set to equal net present value (NPV) from timber-oriented management). Uncertainty is modeled for value and amount of carbon credits and wood products, the accuracy of forest growth model forecasts, and four other variables relevant to American Carbon Registry methodology. Calculations use forest inventory data from a 1,740 ha forest in western Washington State, using the Forest Vegetation Simulator (FVS) growth model. Sensitivity analysis shows that FVS model uncertainty contributes more than 70% to overall NPV variance, followed in importance by variability in inventory sample (3-14%), and short-term prices for timber products (8%), while variability in carbon credit price has little influence (1.1%). At regional average land-holding costs, a no-harvest management scenario would become revenue-positive at a carbon credit break-point price of $14.17/Mg carbon dioxide equivalent (CO2e). IFM carbon projects are associated with a greater chance of both large payouts and large losses to landowners. These results inform policymakers and forest owners of the carbon credit price necessary for IFM approaches to equal or better the business-as-usual strategy, while highlighting the magnitude of financial risk and reward through probabilistic simulation. © 2016 Society for Risk Analysis.
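A minimal sketch of the break-even logic, assuming a simple annuity-style cash flow and a hypothetical discount rate (none of the figures below are from the study):

def npv(cash_flows, rate):
    # Net present value of a list of yearly cash flows, first payment at t = 0.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def breakeven_carbon_price(credits_per_year, project_costs, timber_npv,
                           years=45, rate=0.05):
    # Solve price * NPV(credits) - NPV(costs) = timber_npv for the credit price.
    credit_npv_per_dollar = npv([credits_per_year] * years, rate)
    cost_npv = npv([project_costs] * years, rate)
    return (timber_npv + cost_npv) / credit_npv_per_dollar

# Hypothetical per-hectare figures: 5 Mg CO2e of credits per year, $20/yr carrying
# and verification costs, and a timber-management NPV of $1,000.
price = breakeven_carbon_price(credits_per_year=5.0, project_costs=20.0,
                               timber_npv=1000.0)
print(f"Break-even carbon price: ${price:.2f}/Mg CO2e")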
Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; ...
2016-03-16
Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.
Shi, Kuangyu; Bayer, Christine; Gaertner, Florian C; Astner, Sabrina T; Wilkens, Jan J; Nüsslin, Fridtjof; Vaupel, Peter; Ziegler, Sibylle I
2017-02-01
Positron-emission tomography (PET) with hypoxia-specific tracers provides a noninvasive method to assess the tumor oxygenation status. Reaction-diffusion models have advantages in revealing the quantitative relation between in vivo imaging and the tumor microenvironment. However, there has been no quantitative comparison of the simulation results with real PET measurements yet. The lack of experimental support hampers further applications of computational simulation models. This study aims to compare the simulation results with a preclinical [18F]FMISO PET study and to optimize the reaction-diffusion model accordingly. Nude mice with xenografted human squamous cell carcinomas (CAL33) were investigated with a 2 h dynamic [18F]FMISO PET followed by immunofluorescence staining using the hypoxia marker pimonidazole and the endothelium marker CD31. A large data pool of tumor time-activity curves (TAC) was simulated for each mouse by feeding the arterial input function (AIF) extracted from experiments into the model with different configurations of the tumor microenvironment. A measured TAC was considered to match a simulated TAC when the difference metric was below a certain, noise-dependent threshold. As an extension to the well-established Kelly model, a flow-limited oxygen-dependent (FLOD) model was developed to improve the matching between measurements and simulations. The matching rate between the simulated TACs of the Kelly model and the mouse PET data ranged from 0 to 28.1% (on average 9.8%). By modifying the Kelly model to an FLOD model, the matching rate between the simulation and the PET measurements could be improved to 41.2-84.8% (on average 64.4%). Using a simulation data pool and a matching strategy, we were able to compare the simulated temporal course of dynamic PET with in vivo measurements. By modifying the Kelly model to an FLOD model, the computational simulation was able to approach the dynamic [18F]FMISO measurements in the investigated tumors.
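The matching strategy described above can be sketched as follows; the RMS difference metric, the noise-dependent threshold rule, and the toy curves are all assumptions for illustration rather than details taken from the study.

def rms_difference(tac_a, tac_b):
    # Root-mean-square difference between two time-activity curves of equal length.
    return (sum((a - b) ** 2 for a, b in zip(tac_a, tac_b)) / len(tac_a)) ** 0.5

def matching_rate(measured_tac, simulated_pool, noise_sd, k=2.0):
    threshold = k * noise_sd   # assumed noise-dependent acceptance threshold
    matches = sum(1 for sim in simulated_pool
                  if rms_difference(measured_tac, sim) < threshold)
    return matches / len(simulated_pool)

# Toy example with made-up uptake curves (activity versus frame index).
measured = [0.0, 0.4, 0.7, 0.9, 1.0, 1.05, 1.1]
pool = [[0.0, 0.3, 0.6, 0.85, 1.0, 1.1, 1.15],
        [0.0, 0.5, 0.9, 1.2, 1.4, 1.5, 1.55],
        [0.0, 0.4, 0.65, 0.9, 1.02, 1.08, 1.12]]
print(matching_rate(measured, pool, noise_sd=0.05))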
Technology-enhanced simulation in emergency medicine: a systematic review and meta-analysis.
Ilgen, Jonathan S; Sherbino, Jonathan; Cook, David A
2013-02-01
Technology-enhanced simulation is used frequently in emergency medicine (EM) training programs. Evidence for its effectiveness, however, remains unclear. The objective of this study was to evaluate the effectiveness of technology-enhanced simulation for training in EM and identify instructional design features associated with improved outcomes by conducting a systematic review. The authors systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsychINFO, Scopus, key journals, and previous review bibliographies through May 2011. Original research articles in any language were selected if they compared simulation to no intervention or another educational activity for the purposes of training EM health professionals (including student and practicing physicians, midlevel providers, nurses, and prehospital providers). Reviewers evaluated study quality and abstracted information on learners, instructional design (curricular integration, feedback, repetitive practice, mastery learning), and outcomes. From a collection of 10,903 articles, 85 eligible studies enrolling 6,099 EM learners were identified. Of these, 56 studies compared simulation to no intervention, 12 compared simulation with another form of instruction, and 19 compared two forms of simulation. Effect sizes were pooled using a random-effects model. Heterogeneity among these studies was large (I(2) ≥ 50%). Among studies comparing simulation to no intervention, pooled effect sizes were large (range = 1.13 to 1.48) for knowledge, time, and skills and small to moderate for behaviors with patients (0.62) and patient effects (0.43; all p < 0.02 except patient effects p = 0.12). Among comparisons between simulation and other forms of instruction, the pooled effect sizes were small (≤ 0.33) for knowledge, time, and process skills (all p > 0.1). Qualitative comparisons of different simulation curricula are limited, although feedback, mastery learning, and higher fidelity were associated with improved learning outcomes. Technology-enhanced simulation for EM learners is associated with moderate or large favorable effects in comparison with no intervention and generally small and nonsignificant benefits in comparison with other instruction. Future research should investigate the features that lead to effective simulation-based instructional design. © 2013 by the Society for Academic Emergency Medicine.
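For readers unfamiliar with random-effects pooling, the sketch below implements the standard DerSimonian-Laird estimator commonly used for such meta-analyses; the effect sizes and variances are invented, and the abstract does not state which random-effects estimator was actually used.

def dersimonian_laird(effects, variances):
    # Pool study effect sizes under a random-effects model (DerSimonian-Laird tau^2).
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, i2

# Hypothetical standardized mean differences from five simulation-vs-no-intervention studies.
effects = [1.4, 0.9, 1.6, 1.1, 1.3]
variances = [0.04, 0.06, 0.09, 0.05, 0.07]
pooled, se, i2 = dersimonian_laird(effects, variances)
print(f"pooled SMD = {pooled:.2f} (SE {se:.2f}), I2 = {i2:.0f}%")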
Artefacts in intracavitary temperature measurements during regional hyperthermia.
Kok, H P; Van den Berg, C A T; Van Haaren, P M A; Crezee, J
2007-09-07
For adequate hyperthermia treatments, reliable temperature information during treatment is essential. During regional hyperthermia, temperature information is preferably obtained non-invasively from intracavitary or intraluminal measurements to avoid implant risks for the patient. However, for intracavitary or intraluminal thermometry, optimal tissue contact is less assured than for invasive thermometry. In this study, the reliability of intraluminal/intracavitary measurements was examined in phantom experiments and in a numerical model for various extents of thermal contact between the thermometry probes and their surroundings. Both thermocouple probes and fibre optic probes were investigated. Temperature rises after a 30 s power pulse of the 70 MHz AMC-4 hyperthermia system were measured in a tissue-equivalent phantom using a multisensor thermocouple probe placed centrally in a hollow tube. The tube was filled with (1) air, (2) distilled water or (3) saline solution that mimics the properties of tissue, simulating situations with (1) bad thermal contact and no power dissipation in the tube, (2) good thermal contact but no power dissipation, or (3) good thermal contact and tissue-representative power dissipation. For the numerical simulations, a cylindrically symmetric model of a thermocouple probe or a fibre optic probe in a cavity was developed. The cavity was modelled as air, distilled water or saline solution. A generalised E-field distribution was assumed, resulting in a power deposition. With this power deposition, the temperature rise after a 30 s power pulse was calculated. When thermal contact was bad (1), both phantom measurements and simulations with a thermocouple probe showed very high temperature rises (>0.5 degrees C), which are artefacts due to self-heating of the thermocouple probe, since no power is dissipated in air. Simulations with a fibre optic probe showed almost no temperature rise when the cavity was filled with air. When thermal contact was good but no power was dissipated in the tube (2), artefacts due to self-heating were not significant and the observed temperature rises were very low (approximately 0-0.1 degrees C). For the situation with tissue-representative power dissipation (3), a temperature rise of approximately 0.23 degrees C was observed for both measurements and simulations. A clinical example of a regional hyperthermia treatment of a patient with a cervix uteri carcinoma showed that the artefacts observed in the case of bad thermal contact also affect the steady-state temperature measurements. Good tissue contact must be assured for reliable intraluminal or intracavitary measurements.
Chen, Weiliang; De Schutter, Erik
2017-01-01
Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation. PMID:28239346
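To illustrate the operator-splitting idea that underlies the implementation described above, the toy sketch below alternates a local reaction update with a diffusion update on a 1-D grid. It is only a conceptual, deterministic serial analogue; the paper's solver is a stochastic (SSA-based) MPI implementation on irregular tetrahedral meshes, and none of the rates or species here come from that work.

```python
import numpy as np

def split_step(u, v, dt, dx, Du, Dv, k1, k2):
    """One operator-splitting update for a toy 1-D reaction-diffusion
    system (A -> B at rate k1, B -> A at rate k2)."""
    # Reaction half: local kinetics only, no spatial coupling.
    du = (-k1 * u + k2 * v) * dt
    u, v = u + du, v - du
    # Diffusion half: explicit finite-difference Laplacian per species
    # (periodic boundaries via np.roll).
    lap = lambda f: (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx ** 2
    u += Du * lap(u) * dt
    v += Dv * lap(v) * dt
    return u, v

x = np.linspace(0, 1, 200)
u = np.exp(-200 * (x - 0.5) ** 2)   # initial pulse of species A
v = np.zeros_like(u)
for _ in range(500):
    u, v = split_step(u, v, dt=1e-5, dx=x[1] - x[0], Du=1.0, Dv=0.5, k1=5.0, k2=1.0)
print(f"total mass conserved: {u.sum() + v.sum():.3f}")
```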
Simulation of root forms using cellular automata model
NASA Astrophysics Data System (ADS)
Winarno, Nanang; Prima, Eka Cahya; Afifah, Ratih Mega Ayu
2016-02-01
This research aims to produce a simulation program for root forms using a cellular automata model. In his book "A New Kind of Science", Stephen Wolfram discusses formation rules based on statistical analysis. Following Wolfram's investigation, this research develops the basic idea into a computer program written in the Delphi 7 programming language. To the best of our knowledge, no previous research has developed a simulation describing root forms with a cellular automata model and compared it with natural root forms in the presence of stones as a disturbance. The results show that (1) the simulation used four rules, the program output was compared with natural photographs, and each rule produced a different root form; and (2) the stone disturbances, which prevent root growth and the multiplication of root forms, were successfully modeled. For this purpose, stones occupying 120 cells were placed randomly in the soil; as in nature, stones cannot be penetrated by plant roots. The results suggest that the program could be further developed to simulate root forms with 50 variations.
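A minimal sketch of the kind of cellular automaton described above is given below: a root grows downward on a 2-D soil grid and is deflected or stopped by stone cells. The growth rule, branching probability, grid size, and stone count are assumptions for illustration and are not the paper's calibrated rules.

```python
import numpy as np

def grow_root(steps=40, width=41, n_stones=60, seed=1):
    """Toy cellular-automaton root growth on a 2-D soil grid.

    Rule (an illustrative assumption): each active root tip extends
    into an empty cell directly below or diagonally below; stone cells
    can never be entered, so they deflect or stop the root.
    """
    EMPTY, ROOT, STONE = 0, 1, 2
    rng = np.random.default_rng(seed)
    soil = np.zeros((steps + 2, width), dtype=int)
    stones = rng.integers(1, steps + 1, n_stones), rng.integers(0, width, n_stones)
    soil[stones] = STONE
    tips = [(0, width // 2)]
    soil[0, width // 2] = ROOT
    for _ in range(steps):
        new_tips = []
        for r, c in tips:
            for dc in rng.permutation([-1, 0, 1]):        # try three downward moves
                rr, cc = r + 1, c + dc
                if 0 <= cc < width and soil[rr, cc] == EMPTY:
                    soil[rr, cc] = ROOT
                    new_tips.append((rr, cc))
                    if rng.random() > 0.15:               # occasional branching
                        break
        tips = new_tips
    return soil

soil = grow_root()
print("\n".join("".join(".|#"[cell] for cell in row) for row in soil))
```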
3D Printed Surgical Simulation Models as educational tool by maxillofacial surgeons.
Werz, S M; Zeichner, S J; Berg, B-I; Zeilhofer, H-F; Thieringer, F
2018-02-26
The aim of this study was to evaluate whether inexpensive 3D models are suitable for training surgical skills to dental students or oral and maxillofacial surgery residents. Furthermore, we wanted to know which of the most common filament materials, acrylonitrile butadiene styrene (ABS) or polylactic acid (PLA), better simulates human bone according to surgeons' subjective perceptions. Upper and lower jaw models were produced with common 3D desktop printers, ABS and PLA filament, and silicone rubber for soft tissue simulation. These models were given to 10 blinded, experienced maxillofacial surgeons to perform sinus lift and wisdom teeth extraction. Evaluation was made using a questionnaire. Because of slightly different densities and filament prices, each silicone-covered model cost between 1.40-1.60 USD (ABS) and 1.80-2.00 USD (PLA), based on 2017 material costs. Ten experienced raters took part in the study. All raters deemed the models suitable for surgical education. No significant differences between ABS and PLA were found, with both having distinct advantages. The study demonstrated that 3D printing with inexpensive printing filaments is a promising method for training oral and maxillofacial surgery residents or dental students in selected surgical procedures. With a simple and cost-efficient manufacturing process, models of actual patient cases can be produced on a small scale, simulating many kinds of surgical procedures. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Simulation of the modern arctic climate by the NCAR CCM1
NASA Technical Reports Server (NTRS)
Bromwich, David H.; Tzeng, Ren-Yow; Parish, Thomas R.
1994-01-01
The National Center for Atmospheric Research (NCAR) Community Climate Model version 1 (CCM1) simulation of the modern arctic climate is evaluated by comparing a five-year seasonal cycle simulation with the European Centre for Medium-Range Weather Forecasts (ECMWF) global analyses. The sea level pressure (SLP), storm tracks, vertical cross section of height, 500-hPa height, total energy budget, and moisture budget are analyzed to investigate the biases in the simulated arctic climate. The results show that the model simulates anomalously low SLP, too much storm activity, and anomalously strong baroclinicity to the west of Greenland and vice versa to the east of Greenland. This bias is mainly attributed to the model's topographic representation of Greenland. First, the broadened Greenland topography in the model distorts the path of cyclone waves over the North Atlantic Ocean. Second, the model oversimulates the ridge over Greenland, which intensifies its blocking effect and steers the cyclone waves clockwise around it and hence produces an artificial circum-Greenland trough. These biases are significantly alleviated when the horizontal resolution increases to T42. Over the Arctic basin, the model simulates large amounts of low-level (stratus) clouds in winter and almost no stratus in summer, which is opposite to the observations. This bias is mainly due to the location of the simulated SLP features and the negative anomaly of storm activity, which prevent the transport of moisture into this region during summer but favor this transport in winter. The moisture budget analysis shows that the model's net annual precipitation (P-E) between 70 deg N and the North Pole is 6.6 times larger than the observations and the model transports six times more moisture into this region. The bias in the advection term is attributed to the positive moisture fixer scheme and the distorted flow pattern. However, the excessive moisture transport into the Arctic basin does not solely result from the advection term. The contribution by the moisture fixer is as large as from advection. By contrast, the semi-Lagrangian transport scheme used in the CCM2 significantly improves the moisture simulation for this region; however, globally the error is as serious as for the positive moisture fixer scheme. Finally, because the model has such serious problems in simulating the present arctic climate, its simulations of past and future climate change for this region are questionable.
Simulation of the modern arctic climate by the NCAR CCM1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromwich, D.H.; Tzeng, R.Y.; Parish, T.R.
The NCAR CCM1's simulation of the modern arctic climate is evaluated by comparing a five-year seasonal cycle simulation with the ECMWF global analyses. The sea level pressure (SLP), storm tracks, vertical cross section of height, 500-hPa height, total energy budget, and moisture budget are analyzed to investigate the biases in the simulated arctic climate. The results show that the model simulates anomalously low SLP, too much storm activity, and anomalously strong baroclinicity to the west of Greenland and vice versa to the east of Greenland. This bias is mainly attributed to the model's topographic representation of Greenland. First, the broadened Greenland topography in the model distorts the path of cyclone waves over the North Atlantic Ocean. Second, the model oversimulates the ridge over Greenland, which intensifies its blocking effect and steers the cyclone waves clockwise around it and hence produces an artificial "circum-Greenland" trough. These biases are significantly alleviated when the horizontal resolution increases to T42. Over the Arctic basin, the model simulates large amounts of low-level (stratus) clouds in winter and almost no stratus in summer, which is opposite to the observations. This bias is mainly due to the location of the simulated SLP features and the negative anomaly of storm activity, which prevent the transport of moisture into this region during summer but favor this transport in winter. 26 refs., 14 figs., 42 tabs.
Yerramilli, Anjaneyulu; Dodla, Venkata B; Desamsetti, Srinivas; Challa, Srinivas V; Young, John H; Patrick, Chuck; Baham, Julius M; Hughes, Robert L; Yerramilli, Sudha; Tuluri, Francis; Hardy, Mark G; Swanier, Shelton J
2011-06-01
In this study, an attempt was made to simulate the air quality with reference to ozone over the Jackson (Mississippi) region using the online WRF/Chem (Weather Research and Forecasting-Chemistry) model. The WRF/Chem model has the advantage of integrating the meteorological and chemistry modules on the same computational grid with the same physical parameterizations, and it includes the feedback between atmospheric chemistry and physical processes. The model was designed with three nested domains, the innermost domain covering the study region with a resolution of 1 km. The model was integrated for 48 hours continuously starting from 0000 UTC of 6 June 2006, and the evolution of surface ozone and other precursor pollutants was analyzed. The model-simulated atmospheric flow fields and distributions of NO2 and O3 were evaluated for three different time periods. The GIS-based spatial distribution maps for ozone and its precursors NO, NO2, CO and HONO, together with the back trajectories, indicate that mobile sources in Jackson, Ridgeland and Madison contribute significantly to their formation. The present study demonstrates the applicability of the WRF/Chem model to generate quantitative information at high spatial and temporal resolution for the development of decision support systems for air quality regulatory agencies and health administrators.
NASA Astrophysics Data System (ADS)
Pradhan, Aniruddhe; Akhavan, Rayhaneh
2017-11-01
The effects of collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) are investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to the Δ+ <= 2 required in DNS. At coarser grid resolutions SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall-Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds number of 7860 (Reτ ≈ 442) with Δ+ = 4 and no model, DSM and WALE were 1.7%, 2.6% and 3.1% with SRT, and 8.3%, 7.5% and 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.
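For readers unfamiliar with the collision operators being compared, the sketch below implements a generic single-relaxation-time (BGK/SRT) collision step on a D2Q9 lattice. It is a textbook form for illustration only; the relaxation time, lattice size, and test values are assumptions and are not tied to the channel-flow setup above.

```python
import numpy as np

# D2Q9 lattice constants
W = np.array([4/9] + [1/9]*4 + [1/36]*4)                  # weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])         # discrete velocities

def bgk_collision(f, tau):
    """Single-relaxation-time (BGK/SRT) collision: relax the
    distributions f (shape 9 x nx x ny) toward the local second-order
    equilibrium with one relaxation time tau."""
    rho = f.sum(axis=0)
    ux = np.einsum('i,ixy->xy', C[:, 0], f) / rho
    uy = np.einsum('i,ixy->xy', C[:, 1], f) / rho
    usq = ux**2 + uy**2
    feq = np.empty_like(f)
    for i in range(9):
        cu = C[i, 0]*ux + C[i, 1]*uy
        feq[i] = W[i]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    return f - (f - feq) / tau                             # SRT relaxation

# Tiny check: a uniform fluid at rest is a fixed point of the collision
nx, ny = 8, 8
f0 = np.repeat(W[:, None, None], nx, axis=1).repeat(ny, axis=2)
f1 = bgk_collision(f0, tau=0.8)
print(np.allclose(f0, f1))    # True: equilibrium is unchanged
```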
Euler-Lagrange Simulations of Shock Wave-Particle Cloud Interaction
NASA Astrophysics Data System (ADS)
Koneru, Rahul; Rollin, Bertrand; Ouellet, Frederick; Park, Chanyoung; Balachandar, S.
2017-11-01
Numerical experiments of a shock interacting with evolving and fixed clouds of particles are performed. In these simulations we use an Eulerian-Lagrangian approach along with state-of-the-art point-particle force and heat transfer models. As validation, we use the Sandia Multiphase Shock Tube experiments and particle-resolved simulations. The particle curtain, upon interaction with the shock wave, is expected to experience Kelvin-Helmholtz (KH) and Richtmyer-Meshkov (RM) instabilities. In the simulations evolving the particle cloud, the initial volume fraction profile matches that of the Sandia Multiphase Shock Tube experiments, and the shock Mach number is limited to M = 1.66. Measurements of particle dispersion are made at different initial volume fractions. A detailed analysis of the influence of initial conditions on the evolution of the particle cloud is presented. The early-time behavior of the models is studied in the fixed-bed simulations at varying volume fractions and shock Mach numbers. The mean gas quantities are measured in the context of one-way and two-way coupled simulations. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, Contract No. DE-NA0002378.
Le Moullec, Y; Potier, O; Gentric, C; Leclerc, J P
2011-05-01
This paper presents an experimental and numerical study of an activated sludge channel pilot plant. Concentration profiles of oxygen, COD, NO3 and NH4 were measured for several operating conditions. These profiles were compared to those simulated with three different modelling approaches, namely a systemic approach, CFD and compartmental modelling. For all three approaches, the kinetics model was the ASM-1 model (Henze et al., 2001). The three approaches allowed a reasonable simulation of all the concentration profiles except for ammonium, for which the simulation results were far from the experimental ones. The analysis of the results showed that the role of the kinetics model is of primary importance for predicting the performance of activated sludge reactors. The fact that existing kinetics parameters in the literature have been determined by parametric optimisation using a systemic model limits the reliability of the prediction of local concentrations and of the local design of activated sludge reactors. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Verstraeten, Willem W.; Folkert Boersma, K.; Douros, John; Williams, Jason E.; Eskes, Henk H.; Delcloo, Andy
2017-04-01
High nitrogen oxide concentrations at the surface (NOX = NO + NO2) adversely affect human health and ecosystems and play a key role in tropospheric chemistry. Surface NOX emissions drive major processes in regional and global chemistry transport models (CTMs). NOX contributes to the formation of acid rain, acts as an aerosol precursor, and is an important trace gas for the formation of tropospheric ozone (O3). Via tropospheric O3, NOX indirectly affects the production of the hydroxyl radical, which controls the chemical lifetime of key atmospheric pollutants and reactive greenhouse gases. High NOX emissions are mainly observed in polluted regions and are produced by anthropogenic combustion from industrial, traffic and household activities, typically in large and densely populated urban areas. Accurate NOX inventories are essential, but state-of-the-art emission databases may vary substantially, and uncertainties are high since reported emission factors may differ by an order of magnitude or more. To date, modelled NO2 concentrations and lifetimes have large associated uncertainties due to the highly non-linear small-scale chemistry that occurs in urban areas, uncertainties in the reaction rate data, missing nitrogen (N) species and volatile organic compound (VOC) emissions, and incomplete knowledge of nitrogen oxide chemistry. Any overestimation of the chemical lifetime may mask missing NOX chemistry in current CTMs. By simultaneously estimating the NO2 lifetime and concentration, for instance using the Exponentially Modified Gaussian (EMG) approach, a better surface NOX emission flux estimate can be obtained. Here we evaluate whether the EMG methodology can reproduce the emissions input from the tropospheric NO2 columns simulated by the LOTOS-EUROS (Long Term Ozone Simulation-European Ozone Simulation) CTM. We apply the EMG methodology to LOTOS-EUROS simulated tropospheric NO2 columns for the period April-September 2013 for 21 selected European urban areas under windy conditions (surface wind speeds > 3 m s-1). We then compare the top-down derived surface NOX emissions with the 2011 MACC-III emission inventory, which is used as input in the LOTOS-EUROS model to simulate the NO2 columns. We also apply the EMG methodology to OMI (Ozone Monitoring Instrument) tropospheric NO2 column data, providing real-time observation-based estimates of midday NO2 lifetime and NOX emissions over the 21 European cities in 2013. Results indicate that the top-down derived NOX emissions from LOTOS-EUROS (respectively OMI) are comparable with the MACC-III inventory, with R2 = 0.99 (respectively R2 = 0.79). For St. Petersburg and Moscow the top-down NOX estimates from 2013 OMI data are biased low compared to the MACC-III inventory, which uses a 2011 NOX emissions update.
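A minimal sketch of an EMG fit of the kind described above is shown below, assuming SciPy is available. The line-density profile, wind speed, parameter names, and bounds are all invented for illustration; they are not the LOTOS-EUROS or OMI values, and the actual retrieval involves rotating and averaging many satellite scenes before fitting.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(x, a, x0, sigma, tau):
    """Exponentially modified Gaussian for an NO2 line-density profile
    along the mean wind: a Gaussian source of width sigma smeared by an
    exponential decay with e-folding length tau (effective NOx lifetime
    times wind speed). Parameter names are illustrative."""
    arg = (sigma**2 - tau * (x - x0)) / (np.sqrt(2) * sigma * tau)
    return (a / (2 * tau)) * np.exp((sigma**2 - 2 * tau * (x - x0)) / (2 * tau**2)) * erfc(arg)

# Hypothetical line-density profile (mol km-1) versus downwind distance x (km)
x = np.linspace(-50, 150, 80)
truth = emg(x, a=5e3, x0=0.0, sigma=12.0, tau=30.0)
rng = np.random.default_rng(2)
obs = truth + rng.normal(0, 5, x.size)

popt, _ = curve_fit(emg, x, obs, p0=(4e3, 5.0, 10.0, 20.0),
                    bounds=([0, -50, 1, 5], [1e6, 50, 30, 200]))
a_fit, _, _, tau_fit = popt
wind = 5.0                                 # m s-1, assumed mean wind speed
lifetime_s = tau_fit * 1e3 / wind          # decay length (km -> m) / wind speed
emission = a_fit / lifetime_s              # mol s-1: fitted burden / lifetime
print(f"lifetime ~ {lifetime_s / 3600:.1f} h, NOx source ~ {emission:.2f} mol s-1")
```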
The Hunter-Killer Model, Version 2.0. User’s Manual.
1986-12-01
Contract No. DAAK21-85-C-0058. Prepared for the Center for Night Vision and Electro-Optics (DELNV-V), Fort Belvoir, Virginia 22060. Inquiries concerning the Hunter-Killer Model or the Hunter-Killer Database System should be addressed to the Night Vision and Electro-Optics Center. The model is designed and constructed to study the performance of electro-optic sensor systems in a combat scenario; it simulates a two-sided battle.
Kasmarek, Mark C.
2012-01-01
The MODFLOW-2000 groundwater flow model described in this report comprises four layers, one for each of the hydrogeologic units of the aquifer system except the Catahoula confining system, the assumed no-flow base of the system. The HAGM is composed of 137 rows and 245 columns of 1-square-mile grid cells with lateral no-flow boundaries at the extent of each hydrogeologic unit to the northwest, at groundwater divides associated with large rivers to the southwest and northeast, and at the downdip limit of freshwater to the southeast. The model was calibrated within the specified criteria by using trial-and-error adjustment of selected model-input data in a series of transient simulations until the model output (potentiometric surfaces, land-surface subsidence, and selected water-budget components) acceptably reproduced field measured (or estimated) aquifer responses including water level and subsidence. The HAGM-simulated subsidence generally compared well to 26 Predictions Relating Effective Stress to Subsidence (PRESS) models in Harris, Galveston, and Fort Bend Counties. Simulated HAGM results indicate that as much as 10 feet (ft) of subsidence has occurred in southeastern Harris County. Measured subsidence and model results indicate that a larger geographic area encompassing this area of maximum subsidence and much of central to southeastern Harris County has subsided at least 6 ft. For the western part of the study area, the HAGM simulated as much as 3 ft of subsidence in Wharton, Jackson, and Matagorda Counties. For the eastern part of the study area, the HAGM simulated as much as 3 ft of subsidence at the boundary of Hardin and Jasper Counties. Additionally, in the southeastern part of the study area in Orange County, the HAGM simulated as much as 3 ft of subsidence. Measured subsidence for these areas in the western and eastern parts of the HAGM has not been documented.
O'Donnell, Michael
2015-01-01
State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computation) indicated an approximately 96.6% decrease in computing time. With a single multicore compute node (bottom result), the computing time decreased by 81.8% relative to serial computation. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
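Because the Monte Carlo iterations are independent, the parallelization pattern is straightforward; the sketch below shows the embarrassingly parallel idea on a single multicore machine. The two-state model, its transition probabilities, and the iteration count are stand-in assumptions, not SyncroSim output.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def run_iteration(seed, n_years=100):
    """One Monte Carlo iteration of a toy two-state (shrubland / burned)
    state-and-transition model. Stand-in for a SyncroSim run; the 2%
    annual fire and 25% recovery probabilities are assumptions."""
    rng = random.Random(seed)
    state, burned_years = "shrubland", 0
    for _ in range(n_years):
        if state == "shrubland" and rng.random() < 0.02:
            state = "burned"
        elif state == "burned" and rng.random() < 0.25:
            state = "shrubland"
        burned_years += state == "burned"
    return burned_years

if __name__ == "__main__":
    seeds = range(1000)                                  # independent iterations
    with ProcessPoolExecutor() as pool:                  # embarrassingly parallel
        results = list(pool.map(run_iteration, seeds))
    print(f"mean burned years per century: {sum(results) / len(results):.1f}")
```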
Browning, J. R.; Jonkman, J.; Robertson, A.; ...
2014-12-16
In this study, high-quality computer simulations are required when designing floating wind turbines because of the complex dynamic responses that are inherent with a high number of degrees of freedom and variable metocean conditions. In 2007, the FAST wind turbine simulation tool, developed and maintained by the U.S. Department of Energy's (DOE's) National Renewable Energy Laboratory (NREL), was expanded to include capabilities that are suitable for modeling floating offshore wind turbines. In an effort to validate FAST and other offshore wind energy modeling tools, DOE funded the DeepCwind project, which tested three prototype floating wind turbines at 1/50th scale in a wave basin, including a semisubmersible, a tension-leg platform, and a spar buoy. This paper describes the use of the results of the spar wave basin tests to calibrate and validate the FAST offshore floating simulation tool, and presents some initial results of simulated dynamic responses of the spar to several combinations of wind and sea states. Wave basin tests with the spar attached to a scale model of the NREL 5-megawatt reference wind turbine were performed at the Maritime Research Institute Netherlands under the DeepCwind project. This project included free-decay tests, tests with steady or turbulent wind and still water, wave-only tests (both periodic and irregular waves with no wind), and combined wind/wave tests. The resulting data from the 1/50th-scale model were scaled using Froude scaling to full size and used to calibrate and validate a full-size simulated model in FAST.
NASA Technical Reports Server (NTRS)
Fleming, Eric L.; Jackman, Charles H.; Considine, David B.
1999-01-01
We have adopted the transport scenarios used in Part 1 to examine the sensitivity of stratospheric aircraft perturbations to transport changes in our 2-D model. Changes to the strength of the residual circulation in the upper troposphere and stratosphere and changes to the lower stratospheric K(sub zz) had similar effects in that increasing the transport rates decreased the overall stratospheric residence time and reduced the magnitude of the negative perturbation response in total ozone. Increasing the stratospheric K(sub yy) increased the residence time and enhanced the global scale negative total ozone response. However, increasing K(sub yy) along with self-consistent increases in the corresponding planetary wave drive, which leads to a stronger residual circulation, more than compensates for the K(sub yy)-effect, and results in a significantly weaker perturbation response, relative to the base case, throughout the stratosphere. We found a relatively minor model perturbation response sensitivity to the magnitude of K(sub yy) in the tropical stratosphere, and only a very small sensitivity to the magnitude of the horizontal mixing across the tropopause and to the strength of the mesospheric gravity wave drag and diffusion. These transport simulations also revealed a generally strong correlation between passive NO(sub y) accumulation and age of air throughout the stratosphere, such that faster transport rates resulted in a younger mean age and a smaller NO(y) mass accumulation. However, specific variations in K(sub yy) and mesospheric gravity wave strength exhibited very little NO(sub y)-age correlation in the lower stratosphere, similar to 3-D model simulations performed in the recent NASA "Models and Measurements" II analysis. The base model transport, which gives the most favorable overall comparison with inert tracer observations, simulated a global/annual mean total ozone response of -0.59%, with only a slightly larger response in the northern compared to the southern hemisphere. For transport scenarios which gave tracer simulations within some agreement with measurements, the annual/globally averaged total ozone response ranged from -0.45% to -0.70%. Our previous 1995 model exhibited overly fast transport rates, resulting in a global/annually averaged perturbation total ozone response of -0.25%, which is significantly weaker compared to the 1999 model. This illustrates how transport deficiencies can bias model simulations of stratospheric aircraft.
Liotta, Flavia; Chatellier, Patrice; Esposito, Giovanni; Fabbricino, Massimiliano; Frunzo, Luigi; van Hullebusch, Eric D; Lens, Piet N L; Pirozzi, Francesco
2015-01-01
The role of total solids (TS) content in the anaerobic digestion of selected complex organic matter, e.g. rice straw and food waste, was investigated. A range of TS from wet (4.5%) to dry (23%) was evaluated. A modified version of the Anaerobic Digestion Model No. 1 for a complex organic substrate is proposed to take into account the effect of the TS content on anaerobic digestion. A linear function that correlates the kinetic constants of three specific processes (i.e. disintegration, acetate and propionate uptake) with the TS content was included in the model. Results of biomethanation and volatile fatty acid production tests were used to calibrate the proposed model. Model simulations showed good agreement between numerical and observed data.
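The sketch below illustrates the general idea of scaling a kinetic constant linearly with TS content between the wet and dry endpoints named above. The slope (a 60% reduction at 23% TS) and the baseline constant are made-up placeholders, not the calibrated values from the paper.

```python
def ts_adjusted_rate(k_wet, ts, ts_wet=4.5, ts_dry=23.0, reduction_at_dry=0.6):
    """Linear scaling of an ADM1 kinetic constant with total-solids
    content, in the spirit of the correction described in the abstract.
    The 60% reduction at 23% TS is an illustrative assumption."""
    frac = min(max((ts - ts_wet) / (ts_dry - ts_wet), 0.0), 1.0)
    return k_wet * (1.0 - reduction_at_dry * frac)

k_dis_wet = 0.5  # d-1, hypothetical disintegration constant at 4.5% TS
for ts in (4.5, 10, 15, 23):
    print(f"TS = {ts:4.1f}%  ->  k_dis = {ts_adjusted_rate(k_dis_wet, ts):.3f} d-1")
```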
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.
Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.
García-Diéguez, Carlos; Bernard, Olivier; Roca, Enrique
2013-03-01
The Anaerobic Digestion Model No. 1 (ADM1) is a complex model which is widely accepted as a common platform for anaerobic process modeling and simulation. However, it has a large number of parameters and states that hinder its calibration and use in control applications. A principal component analysis (PCA) technique was extended and applied to simplify the ADM1 using data of an industrial wastewater treatment plant processing winery effluent. The method shows that the main model features could be obtained with a minimum of two reactions. A reduced stoichiometric matrix was identified and the kinetic parameters were estimated on the basis of representative known biochemical kinetics (Monod and Haldane). The obtained reduced model takes into account the measured states in the anaerobic wastewater treatment (AWT) plant and reproduces the dynamics of the process fairly accurately. The reduced model can support on-line control, optimization and supervision strategies for AWT plants. Copyright © 2013 Elsevier Ltd. All rights reserved.
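The rationale for the PCA-based reduction described above (a small number of "macro-reactions" capturing most of the dynamics) can be illustrated with a toy calculation. The synthetic data and the two-component structure below are assumptions for illustration; the authors' actual extended PCA procedure and the winery-effluent data differ.

```python
import numpy as np

# Hypothetical matrix of process rates over time:
# rows = time samples, columns = many ADM1 reaction rates.
rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 2))            # two underlying "macro-reactions"
mixing = rng.normal(size=(2, 12))             # projection onto 12 ADM1 rates
rates = latent @ mixing + 0.05 * rng.normal(size=(200, 12))

# PCA via SVD of the centered data
X = rates - rates.mean(axis=0)
_, s, vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first two components:", explained[:2].round(3))
# The first two rows of vt play the role of a reduced stoichiometric
# basis: two macroscopic reactions capture nearly all of the dynamics.
```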
Cabaraban, Maria Theresa I; Kroll, Charles N; Hirabayashi, Satoshi; Nowak, David J
2013-05-01
A distributed adaptation of i-Tree Eco was used to simulate dry deposition in an urban area. This investigation focused on the effects of varying temperature, LAI, and NO2 concentration inputs on estimated NO2 dry deposition to trees in Baltimore, MD. A coupled modeling system is described, wherein WRF provided temperature and LAI fields, and CMAQ provided NO2 concentrations. A base case simulation was conducted using built-in distributed i-Tree Eco tools, and simulations using different inputs were compared against this base case. Differences in land cover classification and tree cover between the distributed i-Tree Eco and WRF resulted in changes in estimated LAI, which in turn resulted in variations in simulated NO2 dry deposition. Estimated NO2 removal decreased when CMAQ-derived concentration was applied to the distributed i-Tree Eco simulation. Discrepancies in temperature inputs did little to affect estimates of NO2 removal by dry deposition to trees in Baltimore. Copyright © 2013 Elsevier Ltd. All rights reserved.
Computer modeling and simulation in inertial confinement fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCrory, R.L.; Verdon, C.P.
1989-03-01
The complex hydrodynamic and transport processes associated with the implosion of an inertial confinement fusion (ICF) pellet place considerable demands on numerical simulation programs. Processes associated with implosion can usually be described using relatively simple models, but their complex interplay requires that programs model most of the relevant physical phenomena accurately. Most hydrodynamic codes used in ICF incorporate a one-fluid, two-temperature model. Electrons and ions are assumed to flow as one fluid (no charge separation). Due to the relatively weak coupling between the ions and electrons, each species is treated separately in terms of its temperature. In this paper we describe some of the major components associated with an ICF hydrodynamics simulation code. To serve as an example we draw heavily on a two-dimensional Lagrangian hydrodynamic code (ORCHID) written at the University of Rochester's Laboratory for Laser Energetics. 46 refs., 19 figs., 1 tab.
Simulations of reactive transport and precipitation with smoothed particle hydrodynamics
NASA Astrophysics Data System (ADS)
Tartakovsky, Alexandre M.; Meakin, Paul; Scheibe, Timothy D.; Eichler West, Rogene M.
2007-03-01
A numerical model based on smoothed particle hydrodynamics (SPH) was developed for reactive transport and mineral precipitation in fractured and porous materials. Because of its Lagrangian particle nature, SPH has several advantages for modeling Navier-Stokes flow and reactive transport, including: (1) in a Lagrangian framework there is no non-linear term in the momentum conservation equation, so that accurate solutions can be obtained for momentum-dominated flows; and (2) complicated physical and chemical processes, such as surface growth due to precipitation/dissolution and chemical reactions, are easy to implement. In addition, SPH simulations explicitly conserve mass and linear momentum. The SPH solution of the diffusion equation with fixed and moving reactive solid-fluid boundaries was compared with analytical solutions, Lattice Boltzmann [Q. Kang, D. Zhang, P. Lichtner, I. Tsimpanogiannis, Lattice Boltzmann model for crystal growth from supersaturated solution, Geophysical Research Letters, 31 (2004) L21604] simulations and diffusion-limited aggregation (DLA) [P. Meakin, Fractals, scaling and far from equilibrium. Cambridge University Press, Cambridge, UK, 1998] model simulations. To illustrate the capabilities of the model, coupled three-dimensional flow, reactive transport and precipitation in a fracture aperture with a complex geometry were simulated.
Should adhesive debonding be simulated for intra-radicular post stress analyses?
Caldas, Ricardo A; Bacchi, Atais; Barão, Valentim A R; Versluis, Antheunis
2018-06-23
To elucidate the influence of debonding on stress distribution and maximum stresses in intra-radicular restorations, five intra-radicular restorations were analyzed by finite element analysis (FEA): MP = metallic cast post core; GP = glass fiber post core; PP = pre-fabricated metallic post core; RE = resin endocrown; CE = single-piece ceramic endocrown. Two cervical preparations were considered: no ferrule (f0) and a 2 mm ferrule (f1). The simulation was conducted in three steps: (1) intact bonds at all contacts; (2) bond failure between crown and tooth; (3) bond failure among the tooth, post and crown interfaces. Contact friction and separation between interfaces were modeled where bond failure occurred. Mohr-Coulomb stress ratios (σMC ratio) and fatigue safety factors (SF) for the dentin structure were compared with published strength values, fatigue life, and fracture patterns of teeth with intra-radicular restorations. The σMC ratio showed no differences among models in the first step. The second step increased the σMC ratio at the ferrule compared to step 1. In the third step, the σMC ratio and SF for the f0 models were highly influenced by the post material. The CE and RE models had the highest σMC ratios and the lowest SF. MP had the lowest σMC ratio and the highest SF. The f1 models showed no relevant differences among them in the third step. FEA most closely predicted the failure performance of intra-radicular posts when frictional contact was modeled. Results of analyses in which all interfaces are assumed to be perfectly bonded should be considered with caution. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.
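For orientation, the sketch below evaluates one common form of the Mohr-Coulomb (Coulomb-Mohr) stress ratio for a brittle material with unequal tensile and compressive strengths; values approaching 1 indicate predicted failure. The principal stresses and strength values are hypothetical, and the authors' exact post-processing of the FEA results may differ.

```python
def mohr_coulomb_ratio(sigma_1, sigma_3, s_tension, s_compression):
    """Coulomb-Mohr stress ratio for a brittle material with different
    tensile and compressive strengths; ratio >= 1 predicts failure.
    One common form of the criterion, shown for illustration only."""
    return sigma_1 / s_tension - sigma_3 / s_compression

# Hypothetical principal stresses (MPa) in dentin and assumed strengths
ratio = mohr_coulomb_ratio(sigma_1=45.0, sigma_3=-80.0,
                           s_tension=100.0, s_compression=300.0)
print(f"sigma_MC ratio = {ratio:.2f}")   # < 1: below the assumed failure envelope
```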
Experimental Study of Turbine Fuel Thermal Stability in an Aircraft Fuel System Simulator
NASA Technical Reports Server (NTRS)
Vranos, A.; Marteney, P. J.
1980-01-01
The thermal stability of aircraft gas turbines fuels was investigated. The objectives were: (1) to design and build an aircraft fuel system simulator; (2) to establish criteria for quantitative assessment of fuel thermal degradation; and (3) to measure the thermal degradation of Jet A and an alternative fuel. Accordingly, an aircraft fuel system simulator was built and the coking tendencies of Jet A and a model alternative fuel (No. 2 heating oil) were measured over a range of temperatures, pressures, flows, and fuel inlet conditions.
NASA Astrophysics Data System (ADS)
Lee, J.; Kim, M.; Son, Y.; Lee, W. K.
2017-12-01
Korean forests have recovered through the national-scale reforestation program and can contribute to the national greenhouse gas (GHG) mitigation goal. Forest carbon (C) sequestration is expected to change with climate change and the forest management regime. In this context, estimating the changes in the GHG mitigation potential of the Korean forestry sector under different climate and management assumptions is a timely issue. Thus, we estimated the forest C sequestration of Korea under four scenarios (2010-2050): constant temperature with no management (CT_No), representative concentration pathway (RCP) 8.5 with no management (RCP_No), constant temperature with thinning management (CT_Man), and RCP 8.5 with thinning management (RCP_Man). A dynamic stand growth model (KO-G-Dynamic; for biomass) and a forest C model (FBDC model; for non-biomass) were used at approximately 64,000 simulation units (1 km2). As model input data, the forest data (e.g., forest type and stand age) and climate data were spatially prepared from the national forest inventories and the RCP 8.5 climate data. The model simulation results showed that the mean annual C sequestration during the period was 11.0, 9.9, 11.5, and 10.5 Tg C yr-1 under the CT_No, RCP_No, CT_Man, and RCP_Man scenarios, respectively, at the national scale. The C sequestration decreased with the passage of time due to the maturing of the forests. Climate change appeared disadvantageous to C sequestration by the forest ecosystems (≒ -1.0 Tg C yr-1) due to the increase in organic matter decomposition. In particular, the decrease in C sequestration under climate change was greater for the needle-leaved species than for the broad-leaved species. Meanwhile, forest management enhanced forest C sequestration (≒ 0.5 Tg C yr-1). Accordingly, implementing appropriate forest management strategies for adaptation would contribute to maintaining C sequestration by the Korean forestry sector under climate change. Acknowledgement: This study was supported by the Korean Ministry of Environment (2014001310008).
Muhammed, Shibu E; Coleman, Kevin; Wu, Lianhai; Bell, Victoria A; Davies, Jessica A C; Quinton, John N; Carnell, Edward J; Tomlinson, Samuel J; Dore, Anthony J; Dragosits, Ulrike; Naden, Pamela S; Glendining, Margaret J; Tipping, Edward; Whitmore, Andrew P
2018-09-01
This paper describes an agricultural model (Roth-CNP) that estimates carbon (C), nitrogen (N) and phosphorus (P) pools, pool changes, their balance, and the nutrient fluxes exported from arable and grassland systems in the UK during 1800-2010. The Roth-CNP model was developed as part of an Integrated Model (IM) to simulate C, N and P cycling for the whole of the UK by loosely coupling terrestrial, hydrological and hydro-chemical models. The model was calibrated and tested using long-term experiment (LTE) data from Broadbalk (1843) and Park Grass (1856) at Rothamsted. We estimated the C, N and P balances and the fluxes exported from arable and grassland systems on a 5 km × 5 km grid across the whole of the UK by using the area of arable crops and the livestock numbers in each grid cell and their management. The model estimated crop and grass yields, soil organic carbon (SOC) stocks and nutrient fluxes in the form of NH4-N, NO3-N and PO4-P. The simulated crop yields were compared to those reported by national agricultural statistics from the historical to the current period. Overall, arable land in the UK lost SOC (-0.18, -0.25 and -0.08 Mg C ha-1 y-1), whereas the SOC stock of land under improved grassland increased (0.20, 0.47 and 0.24 Mg C ha-1 y-1) during 1800-1950, 1950-1970 and 1970-2010, respectively, as simulated in this study. Simulated N losses (by leaching, runoff, soil erosion and denitrification) increased under both arable land (-15, -18 and -53 kg N ha-1 y-1) and grass (-18, -22 and -36 kg N ha-1 y-1) during the different time periods. The simulated P surplus was 2.6, 10.8 and 18.1 kg P ha-1 y-1 under arable land and 2.8, 11.3 and 3.6 kg P ha-1 y-1 under grassland during 1800-1950, 1950-1970 and 1970-2010, respectively. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Guleyupoglu, B; Schap, J; Kusano, K D; Gayzik, F S
2017-07-04
The objective of this study is to use a validated finite element model of the human body and a certified model of an anthropomorphic test dummy (ATD) to evaluate the effect of simulated precrash braking on driver kinematics, restraint loads, body loads, and computed injury criteria in 4 commonly injured body regions. The Global Human Body Models Consortium (GHBMC) 50th percentile male occupant (M50-O) and the Humanetics Hybrid III 50th percentile models were gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and driver airbag. Fifteen simulations per model (30 total) were conducted, including 4 scenarios at 3 severity levels: median, severe, and the U.S. New Car Assessment Program (U.S.-NCAP), plus 3 additional high-intensity braking cases per model. The 4 scenarios were no precollision system (no PCS), forward collision warning (FCW), FCW with prebraking assist (FCW + PBA), and FCW and PBA with autonomous precrash braking (FCW + PBA + PB). The baseline ΔV was 17, 34, and 56.4 kph for the median, severe, and U.S.-NCAP scenarios, respectively, based on crash reconstructions from NASS/CDS. Pulses were then developed based on the precrash systems assumed to be equipped. Restraint properties and the generic pulse used were based on the literature. In the median-severity cases, little to no risk (<10% risk of Abbreviated Injury Scale [AIS] 3+ injury) was found for all injury measures for both models. In the severe set of cases, little to no risk of AIS 3+ injury was also found for all injury measures. In the NCAP cases, the highest risk was typically found with no PCS and the lowest with FCW + PBA + PB. In the higher-intensity braking cases (1.0-1.4 g), the head injury criterion (HIC), brain injury criterion (BrIC), and chest deflection injury measures increased with increased braking intensity; all other measures for these cases tended to decrease. The ATD predictions also trended similarly to the human body model predictions for the median, severe, and NCAP cases. Forward excursion for both models decreased across the median, severe, and NCAP cases and diverged between the models in cases above 1.0 g of braking intensity. The addition of precrash systems, simulated through reduced precrash speeds, reduced some injury criteria, whereas others (chest deflection, HIC, and BrIC) increased due to a modified occupant position. The human model and ATD trended similarly in nearly all cases, with greater risk indicated by the human model. These results suggest the need for integrated safety systems with restraints that optimize the occupant's position during precrash braking and prior to impact.
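Of the injury measures named above, the head injury criterion has a standard closed-form definition that a short sketch can make explicit. The acceleration pulse below is made up, not a GHBMC or Hybrid III output; only the HIC formula itself (window-maximized mean acceleration to the power 2.5, times window length) is standard.

```python
import numpy as np

def hic(time_s, accel_g, max_window=0.015):
    """Head Injury Criterion (HIC15 by default): maximize
    (t2 - t1) * [mean acceleration over (t1, t2)]**2.5 over all windows
    up to max_window seconds long."""
    t = np.asarray(time_s)
    a = np.asarray(accel_g)
    # cumulative trapezoidal integral of acceleration
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = (cum[j] - cum[i]) / dt
            best = max(best, dt * avg ** 2.5)
    return best

t = np.linspace(0, 0.1, 1001)                        # 100 ms at 0.1 ms resolution
a = 60.0 * np.exp(-((t - 0.05) / 0.01) ** 2)         # hypothetical ~60 g head pulse
print(f"HIC15 = {hic(t, a):.0f}")
```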
The Influences of Airmass Histories on Radical Species During POLARIS
NASA Technical Reports Server (NTRS)
Pierson, James M.; Kawa, S. R.
1998-01-01
The POLARIS mission focused on understanding the processes associated with the decrease of polar stratospheric ozone from spring to fall at high latitudes. This decrease is linked primarily to in situ photochemical destruction by reactive nitrogen species, NO and NO2, which also control other catalytic loss cycles. Steady state models have been used to test photochemistry and radical behavior but are not always adequate in simulating radical species observations. In some cases, air mass history can be important and trajectory models give an improved simulation of the radical species. Trajectory chemistry models, however, still consistently underestimate NO and NO2 abundances compared to measurements along the ER-2 flight track. The Goddard chemistry on trajectory model has been used to test updated rate constants for NO2 + OH, NO2 + O and OH + HNO3, key reactions that affect NO and NO2 abundances. We present comparisons between the modified Goddard chemistry on trajectory model, the JPL steady state model and observations from selected flights.
Vibration response comparison of twisted shrouded blades using different impact models
NASA Astrophysics Data System (ADS)
Xie, Fangtao; Ma, Hui; Cui, Can; Wen, Bangchun
2017-06-01
On the basis of our previous work (Ma et al., 2016, Journal of Sound and Vibration, 378, 92-108) [36], an improved analytical model (IAM) of a rotating twisted shrouded blade with stagger angle, simulated by a flexible beam with a tip mass, is established based on Timoshenko beam theory, and its effectiveness is verified using the finite element (FE) method. The effects of different parameters such as shroud gaps, contact stiffness, stagger angles and twist angles on the vibration responses of the shrouded blades are analyzed using two different impact models, where the adjacent shrouded blades are simulated by massless springs in impact model 1 (IM1) and by Timoshenko beams in impact model 2 (IM2). The results indicate that the two impact models agree well in some cases, such as big shroud gaps and small contact stiffness, due to the small vibration effects of the adjacent blades, but not under the condition of small shroud gaps and big contact stiffness. For IM2, resonance appears because the limitation imposed by the adjacent blades is weakened by their inertia effects; for IM1, however, resonance does not appear because of the strong limitation of the springs used to simulate the adjacent blades. With the increase of stagger angles and twist angles, the first-order resonance rotational speed increases due to the increase of the dynamic stiffness under the no-impact condition, and the rotational speeds of starting impact and ending impact rise under the impact condition.
Modeling sustainable reuse of nitrogen-laden wastewater by poplar.
Wang, Yusong; Licht, Louis; Just, Craig
2016-01-01
Numerical modeling was used to simulate the leaching of nitrogen (N) to groundwater as a consequence of irrigating food processing wastewater onto grass and poplar under various management scenarios. Under current management practices for a large food processor, a simulated annual N loading of 540 kg ha(-1) yielded 93 kg ha(-1) of N leaching for grass and no N leaching for poplar during the growing season. Increasing the annual growing season N loading to approximately 1,550 kg ha(-1) for poplar only, using "weekly", "daily" and "calculated" irrigation scenarios, yielded N leaching of 17 kg ha(-1), 6 kg ha(-1), and 4 kg ha(-1), respectively. Constraining the simulated irrigation schedule by the current onsite wastewater storage capacity of approximately 757 megaliters (Ml) yielded N leaching of 146 kg ha(-1) yr(-1) while storage capacity scenarios of 3,024 and 4,536 Ml yielded N leaching of 65 and 13 kg ha(-1) yr(-1), respectively, for a loading of 1,550 kg ha(-1) yr(-1). Further constraining the model by the current wastewater storage volume and the available land area (approximately 1,000 hectares) required a "diverse" irrigation schedule that was predicted to leach a weighted average of 13 kg-N ha(-1) yr(-1) when dosed with 1,063 kg-N ha(-1) yr(-1).
Xu, Changmou; Yagiz, Yavuz; Marshall, Sara; Li, Zheng; Simonne, Amarat; Lu, Jiang; Marshall, Maurice R
2015-09-01
Acrylamide is a byproduct of the Maillard reaction and is formed in a variety of heat-treated commercial starchy foods. It is known to be toxic and potentially carcinogenic to humans. Muscadine grape polyphenols and standard phenolic compounds were examined on the reduction of acrylamide in an equimolar asparagine/glucose chemical model, a potato chip model, and a simulated physiological system. Polyphenols were found to significantly reduce acrylamide in the chemical model, with reduced rates higher than 90% at 100 μg/ml. In the potato chip model, grape polyphenols reduced the acrylamide level by 60.3% as concentration was increased to 0.1%. However, polyphenols exhibited no acrylamide reduction in the simulated physiological system. Results also indicated no significant correlation between the antioxidant activities of polyphenols and their acrylamide inhibition. This study demonstrated muscadine grape extract can mitigate acrylamide formation in the Maillard reaction, which provides a new value-added application for winery pomace waste. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Pan, Jinmei; Durand, Michael; Sandells, Melody; Lemmetyinen, Juha; Kim, Edward J.; Pulliainen, Jouni; Kontu, Anna; Derksen, Chris
2015-01-01
Microwave emission models are a critical component of snow water equivalent retrieval algorithms applied to passive microwave measurements. Several such emission models exist, but their differences need to be systematically compared. This paper compares the basic theories of two models: the multiple-layer HUT (Helsinki University of Technology) model and MEMLS (Microwave Emission Model of Layered Snowpacks). By comparing the mathematical formulations side by side, three major differences were identified: (1) by assuming that the scattered intensity is mostly (96%) in the forward direction, the HUT model simplifies the radiative transfer (RT) equation into a 1-flux model, whereas MEMLS uses a 2-flux theory; (2) the HUT scattering coefficient is much larger than that of MEMLS; (3) MEMLS considers the trapped radiation inside the snow due to internal reflection using a 6-flux model, which is not included in HUT. Simulation experiments indicate that the large scattering coefficient of the HUT model compensates for its large forward scattering ratio to some extent, but the effects of the 1-flux simplification and the trapped radiation still result in different T(sub B) simulations between the HUT model and MEMLS. The models were compared with observations of natural snow cover at Sodankylä, Finland; Churchill, Canada; and Colorado, USA. No optimization of the snow grain size was performed. The comparison shows that the HUT model tends to underestimate T(sub B) for deep snow. MEMLS with the physically based improved Born approximation performed best among the models, with a bias of -1.4 K and an RMSE of 11.0 K.
NASA Technical Reports Server (NTRS)
Ott, Lesley; Pickering, Kenneth; Stenchikov, Georgiy; Allen, Dale; DeCaria, Alex; Ridley, Brian; Lin, Ruei-Fong; Lang, Steve; Tao, Wei-Kuo
2009-01-01
A 3-D cloud-scale chemical transport model that includes a parameterized source of lightning NO(x) based on observed flash rates has been used to simulate six midlatitude and subtropical thunderstorms observed during four field projects. Production per intracloud (P(sub IC)) and cloud-to-ground (P(sub CG)) flash is estimated by assuming various values of P(sub IC) and P(sub CG) for each storm and determining which production scenario yields NO(x) mixing ratios that compare most favorably with in-cloud aircraft observations. We obtain a mean P(sub CG) value of 500 moles NO (7 kg N) per flash. The results of this analysis also suggest that on average, P(sub IC) may be nearly equal to P(sub CG), which is contrary to the common assumption that intracloud flashes are significantly less productive of NO than are cloud-to-ground flashes. This study also presents vertical profiles of the mass of lightning NO(x) after convection based on 3-D cloud-scale model simulations. The results suggest that following convection, a large percentage of lightning NO(x) remains in the middle and upper troposphere where it originated, while only a small percentage is found near the surface. The results of this work differ from profiles calculated from 2-D cloud-scale model simulations with a simpler lightning parameterization, which were peaked near the surface and in the upper troposphere (referred to as a "C-shaped" profile). The new model results (a backward C-shaped profile) suggest that chemical transport models that assume a C-shaped vertical profile of lightning NO(x) mass may place too much mass near the surface and too little in the middle troposphere.
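The arithmetic behind a flash-rate-based source term is simple enough to show directly. The sketch below converts flash counts into moles of NO and kilograms of nitrogen using the per-flash production values discussed above; the storm flash counts are hypothetical.

```python
def lightning_no_source(ic_flashes, cg_flashes, p_ic=500.0, p_cg=500.0):
    """Moles of NO produced by a storm from flash counts, using
    per-flash production values (mol NO per flash). The defaults of
    500 mol per flash for both flash types follow the study's mean
    estimate; 14 g/mol (molar mass of N) converts to kg N."""
    moles_no = ic_flashes * p_ic + cg_flashes * p_cg
    kg_n = moles_no * 14.0e-3
    return moles_no, kg_n

# Hypothetical storm: 1,200 intracloud and 300 cloud-to-ground flashes
moles, kg_n = lightning_no_source(1200, 300)
print(f"{moles:.0f} mol NO  (~{kg_n:.0f} kg N)")
```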
Over, Thomas M.; Soong, David T.; Holmes, Robert R.
2011-01-01
Boneyard Creek—which drains an urbanized watershed in the cities of Champaign and Urbana, Illinois, including part of the University of Illinois at Urbana-Champaign (UIUC) campus—has historically been prone to flooding. Using the Stormwater Management Model (SWMM), a hydrologic and hydraulic model of Boneyard Creek was developed for the design of the projects making up the first phase of a long-term plan for flood control on Boneyard Creek, and the construction of the projects was completed in May 2003. The U.S. Geological Survey, in cooperation with the Cities of Champaign and Urbana and UIUC, installed and operated stream and rain gages in order to obtain data for evaluation of the design-model simulations. In this study, design-model simulations were evaluated by using observed postconstruction precipitation and peak-discharge data. Between May 2003 and September 2008, five high-flow events on Boneyard Creek satisfied the study criterion. The five events were simulated with the design model by using observed precipitation. The simulations were run with two different values of the parameter controlling the soil moisture at the beginning of the storms and two different ways of spatially distributing the precipitation, making a total of four simulation scenarios. The simulated and observed peak discharges and stages were compared at gaged locations along the Creek. The discharge at one of these locations was deemed to be critical for evaluating the design model. The uncertainty of the measured peak discharge was also estimated at the critical location with a method based on linear regression of the stage and discharge relation, an estimate of the uncertainty of the acoustic Doppler velocity meter measurements, and the uncertainty of the stage measurements. For four of the five events, the simulated peak discharges lie within the 95-percent confidence interval of the observed peak discharges at the critical location; the fifth was just outside the upper end of this interval. For two of the four simulation scenarios, the simulation results for one event at the critical location were numerically unstable in the vicinity of the discharge peak. For the remaining scenarios, the simulated peak discharges over the five events at the critical location differ from the observed peak discharges (simulated minus observed) by an average of 7.7 and -1.5 percent, respectively. The simulated peak discharges over the four events for which all scenarios have numerically stable results at the critical location differs from the observed peak discharges (simulated minus observed) by an average of -6.8, 4.0, -5.4, and 1.5 percent, for the four scenarios, respectively. Overall, the discharge peaks simulated for this study at the critical location are approximately balanced between overprediction and underprediction and do not indicate significant model bias or inaccuracy. Additional comparisons were made by using peak stages at the critical location and two additional sites and using peak discharges at one additional site. These comparisons showed the same pattern of differences between observed and simulated values across events but varying biases depending on streamgage and measurement type (discharge or stage). 
Altogether, the results from this study show no clear evidence that the design model is significantly inaccurate or biased and, therefore, no clear evidence that the modeled flood-control projects in Champaign and on the University of Illinois campus have increased flood stages or discharges downstream in Urbana.
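For readers who want to reproduce the evaluation metric used above, the percent difference is (simulated minus observed) relative to observed, averaged over events. A minimal sketch with hypothetical peak discharges (the actual event data are not reproduced here):

```python
# Percent difference (simulated minus observed, relative to observed),
# averaged over high-flow events, as used to evaluate the design model.
def mean_percent_difference(simulated, observed):
    diffs = [100.0 * (s - o) / o for s, o in zip(simulated, observed)]
    return sum(diffs) / len(diffs)

# Hypothetical peak discharges, in cubic feet per second (illustrative only).
observed  = [410.0, 520.0, 300.0, 615.0]
simulated = [395.0, 545.0, 310.0, 580.0]
print(f"mean difference: {mean_percent_difference(simulated, observed):+.1f}%")
```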
2009-09-01
Twelve civilians (7 men and 5 women) with no prior experience with the Robotic NCO simulation participated in this study. [The remainder of this report extract is fragmentary; recoverable subject terms: design guidelines, robotics, simulation, unmanned systems, automation. A surviving fragment refers to operators in a multitasking environment and to invoking automation via critical events, a model of operator performance, or a hybrid method combining one or more of these techniques.]
Simulation of streamflow temperatures in the Yakima River basin, Washington, April-October 1981
Vaccaro, J.J.
1986-01-01
The effects of storage, diversion, return flow, and meteorological variables on water temperature in the Yakima River, in Washington State, were simulated, and the changes in water temperature that could be expected under four alternative-management scenarios were examined for potential improvement of the anadromous fish environment. A streamflow routing model and a Lagrangian streamflow temperature model were used to simulate water discharge and temperature in the river. The estimated model errors were 12% for daily discharge and 1.7 C for daily temperature. Sensitivity analysis of the simulation of water temperatures showed that the effect of reservoir outflow temperatures diminishes in a downstream direction. A 4 C increase in outflow temperatures resulted in a 1.0 C increase in mean irrigation-season water temperature at Umtanum in the upper Yakima River basin, but only a 0.01 C increase at Prosser in the lower basin. The influence of air temperature on water temperature increases in a downstream direction and is the dominant influence in the lower basin. A 4 C increase in air temperature over the entire basin resulted in a 2.34 C increase in river temperatures at Prosser in the lower basin and 1.46 C at Umtanum in the upper basin. Changes in wind speed and model wind-function parameters had little effect on the model-predicted water temperature. Of four alternative management scenarios suggested by the U.S. Bureau of Indian Affairs and the Yakima Indian Nation, the 1981 reservoir releases maintained without diversions or return flow in the river basin produced water temperatures nearest those considered preferable for salmon and steelhead trout habitat. The alternative management scenario with no reservoir storage and no diversions or return flows in the river basin (an estimate of natural conditions) produced conditions that were the least like those considered preferable for salmon and steelhead trout habitat. (Author's abstract)
Ekwueme, Donatus U; Uzunangelov, Vladislav J; Hoerger, Thomas J; Miller, Jacqueline W; Saraiya, Mona; Benard, Vicki B; Hall, Ingrid J; Royalty, Janet; Li, Chunyu; Myers, Evan R
2014-09-01
The benefits of the National Breast and Cervical Cancer Early Detection Program (NBCCEDP) on cervical cancer screening for participating uninsured low-income women have never been measured. This study estimates the benefits in life-years (LYs) gained, quality-adjusted life-years (QALYs) gained, and deaths averted. A cervical cancer simulation model was constructed based on an existing cohort model. The model was applied to NBCCEDP participants aged 18-64 years. Screening habits for uninsured low-income women were estimated using National Health Interview Survey data from 1990 to 2005 and NBCCEDP data from 1991 to 2007. The study was conducted during 2011-2012 and covered all 68 NBCCEDP grantees in 50 states, the District of Columbia, five U.S. territories, and 12 tribal organizations. Separate simulations were performed for the following three scenarios: (1) women who received NBCCEDP (Program) screening; (2) women who received screening without the program (No Program); and (3) women who received no screening (No Screening). Among 1.8 million women screened in 1991-2007, the Program added 10,369 LYs gained compared to No Program, and 101,509 LYs gained compared to No Screening. The Program prevented 325 women from dying of cervical cancer relative to No Program, and 3,829 relative to No Screening. During this time period, the Program accounted for 15,589 QALYs gained when compared with No Program, and 121,529 QALYs gained when compared with No Screening. These estimates suggest that NBCCEDP cervical cancer screening has reduced mortality among medically underserved low-income women who participated in the program.
Dienus, Olaf; Sokolova, Ekaterina; Nyström, Fredrik; Matussek, Andreas; Löfgren, Sture; Blom, Lena; Pettersson, Thomas J R; Lindgren, Per-Eric
2016-10-04
Norovirus (NoV) that enters drinking water sources with wastewater discharges is a common cause of waterborne outbreaks. The impact of wastewater treatment plants (WWTPs) on the river Göta älv (Sweden) was studied using monitoring and hydrodynamic modeling. The concentrations of NoV genogroups (GG) I and II in samples collected at WWTPs and drinking water intakes (source water) during one year were quantified using duplex real-time reverse-transcription polymerase chain reaction. The mean (standard deviation) NoV GGI and GGII genome concentrations were 6.2 (1.4) and 6.8 (1.8) in incoming wastewater and 5.3 (1.4) and 5.9 (1.4) log10 genome equivalents (g.e.) L-1 in treated wastewater, respectively. The reduction at the WWTPs varied between 0.4 and 1.1 log10 units. In source water, the concentration ranged from below the detection limit to 3.8 log10 g.e. L-1. NoV GGII was detected in both wastewater and source water more frequently during the cold than the warm period of the year. The spread of NoV in the river was simulated using a three-dimensional hydrodynamic model. The modeling results indicated that the NoV GGI and GGII genome concentrations in source water may occasionally be up to 2.8 and 1.9 log10 units higher, respectively, than the concentrations measured during the monitoring project.
Direct simulations of chemically reacting turbulent mixing layers
NASA Technical Reports Server (NTRS)
Riley, J. J.; Metcalfe, R. W.
1984-01-01
The report presents the results of direct numerical simulations of chemically reacting turbulent mixing layers. The work consists of two parts: (1) the development and testing of a spectral numerical computer code that treats the diffusion-reaction equations; and (2) the simulation of a series of cases of chemical reactions occurring in mixing layers. The reaction considered is a binary, irreversible reaction with no heat release. The reacting species are nonpremixed. The results of the numerical tests indicate that the high accuracy of the spectral methods observed for rigid body rotation is also obtained when diffusion, reaction, and more complex flows are considered. In the simulations, the effects of vortex rollup and smaller scale turbulence on the overall reaction rates are investigated. The simulation results are found to be in approximate agreement with similarity theory. Comparisons of simulation results with certain modeling hypotheses indicate limitations in these hypotheses. The nondimensional product thickness computed from the simulations is compared with laboratory values and is found to be in reasonable agreement, especially since there are no adjustable constants in the method.
NASA Astrophysics Data System (ADS)
Votrubova, Jana; Vogel, Tomas; Dohnal, Michal; Dusek, Jaromir
2015-04-01
Coupled simulations of soil water flow and the associated transport of substances have become a useful and increasingly popular tool of subsurface hydrology. The quality of such simulations is directly affected by the correctness of their hydraulic part. When near-surface processes under vegetation cover are of interest, appropriate representation of root water uptake becomes essential. A simulation study of coupled water and heat transport in a soil profile under natural conditions was conducted. A one-dimensional dual-continuum model (S1D code) with semi-separate flow domains representing the soil matrix and the network of preferential pathways was used. A simple root water uptake model based on a water-potential-gradient (WPG) formulation was applied. As demonstrated before [1], the WPG formulation - capable of simulating both the compensatory root water uptake (in situations when reduced uptake from dry layers is compensated by increased uptake from wetter layers) and the root-mediated hydraulic redistribution of soil water - enables simulation of a more natural soil moisture distribution throughout the root zone. The potential effect on heat transport in a soil profile is the subject of the present study. [1] Vogel T., M. Dohnal, J. Dusek, J. Votrubova, and M. Tesar. 2013. Macroscopic modeling of plant water uptake in a forest stand involving root-mediated soil-water redistribution. Vadose Zone Journal, 12, 10.2136/vzj2012.0154. The research was supported by the Czech Science Foundation Project No. 14-15201J.
NASA Astrophysics Data System (ADS)
Surendran, Divya E.; Ghude, Sachin D.; Beig, G.; Emmons, L. K.; Jena, Chinmay; Kumar, Rajesh; Pfister, G. G.; Chate, D. M.
2015-12-01
This study presents the distribution of tropospheric ozone and related species for South Asia using the Model for Ozone and Related chemical Tracers (MOZART-4) and the Hemispheric Transport of Air Pollution version-2 (HTAP-v2) emission inventory. The simulated present-day ozone (O3), carbon monoxide (CO) and nitrogen dioxide (NO2) are evaluated against surface-based, balloon-borne and satellite-based (MOPITT and OMI) observations. The model systematically overestimates surface O3 mixing ratios (mean bias of about 1-30 ppbv) at different ground-based measurement sites in India. Comparison between simulated and observed vertical profiles of ozone shows a positive bias from the surface up to 600 hPa and a negative bias above 600 hPa. The simulated seasonal variation in surface CO mixing ratio is consistent with the surface observations, but has a negative bias of about 50-200 ppb, which can be attributed in large part to the coarse model resolution. In contrast to the surface evaluation, the model shows a positive bias of about 15-20 × 10^17 molecules/cm2 over South Asia when compared to satellite-derived CO columns from the MOPITT instrument. The model also overestimates the OMI-retrieved tropospheric column NO2 abundance by about 100-250 × 10^13 molecules/cm2. A response to a 20% reduction in all anthropogenic emissions over South Asia shows a decrease in the annual mean O3 mixing ratios by about 3-12 ppb, CO by about 10-80 ppb and NOx by about 3-6 ppb at the surface level. During the summer monsoon, O3 mixing ratios at 200 hPa show a decrease of about 6-12 ppb over South Asia and about 1-4 ppb over the remote northern hemispheric western Pacific region.
A Model Independent General Search for new physics in ATLAS
NASA Astrophysics Data System (ADS)
Amoroso, S.; ATLAS Collaboration
2016-04-01
We present results of a model-independent general search for new phenomena in proton-proton collisions at a centre-of-mass energy of 8 TeV with the ATLAS detector at the LHC. The data set corresponds to a total integrated luminosity of 20.3 fb-1. Event topologies involving isolated electrons, photons and muons, as well as jets, including those identified as originating from b-quarks (b-jets), and missing transverse momentum are investigated. The events are subdivided according to their final states into exclusive event classes. For the 697 classes with a Standard Model expectation greater than 0.1 events, a search algorithm tests the compatibility of data against the Monte Carlo simulated background in three kinematic variables sensitive to new physics effects. No significant deviation is found in data. The number and size of the observed deviations follow the Standard Model expectation obtained from simulated pseudo-experiments.
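The pseudo-experiment idea mentioned above can be sketched in a generic, simplified form: for a single event class with Standard Model expectation b, Poisson toy experiments estimate how often the background alone would fluctuate to at least the observed count. This is only an illustration of the principle, not the ATLAS search algorithm, which also accounts for background uncertainties and for the number of classes and variables scanned.

```python
# Toy pseudo-experiment estimate of how often the Standard Model background
# alone fluctuates to at least the observed event count in one class.
import numpy as np

rng = np.random.default_rng(0)

def pseudo_experiment_p_value(n_observed, b_expected, n_toys=100_000):
    toys = rng.poisson(b_expected, size=n_toys)
    return float(np.mean(toys >= n_observed))

# Hypothetical class: 4 events observed where 1.2 are expected (illustrative only).
print(pseudo_experiment_p_value(n_observed=4, b_expected=1.2))
```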
Autoshaping and automaintenance: a neural-network approach.
Burgos, José E
2007-07-01
This article presents an interpretation of autoshaping, and positive and negative automaintenance, based on a neural-network model. The model makes no distinction between operant and respondent learning mechanisms, and takes into account knowledge of hippocampal and dopaminergic systems. Four simulations were run, each one using an A-B-A design and four instances of feedforward architectures. In A, networks received a positive contingency between inputs that simulated a conditioned stimulus (CS) and an input that simulated an unconditioned stimulus (US). Responding was simulated as an output activation that was neither elicited by nor required for the US. B was an omission-training procedure. Response directedness was defined as sensory feedback from responding, simulated as a dependence of other inputs on responding. In Simulation 1, the phenomena were simulated with a fully connected architecture and maximally intense response feedback. The other simulations used a partially connected architecture without competition between CS and response feedback. In Simulation 2, a maximally intense feedback resulted in substantial autoshaping and automaintenance. In Simulation 3, eliminating response feedback interfered substantially with autoshaping and automaintenance. In Simulation 4, intermediate autoshaping and automaintenance resulted from an intermediate response feedback. Implications for the operant-respondent distinction and the behavior-neuroscience relation are discussed.
Bales, Jerad; Fulford, Janice M.; Swain, Eric D.
1997-01-01
A study was conducted to review selected features of the Natural System Model, version 4.3. The Natural System Model is a regional-scale model that uses recent climatic data and estimates of historic vegetation and topography to simulate pre-canal-drainage hydrologic response in south Florida. Equations used to represent the hydrologic system and the numerical solution of these equations in the model were documented and reviewed. Convergence testing was performed using 1965 input data, and selected other aspects of the model were evaluated. Some conclusions from the evaluation of the Natural System Model include the following observations. Simulations were generally insensitive to the temporal resolution used in the model. However, reduction of the computational cell size from 2-mile by 2-mile to 2/3-mile by 2/3-mile resulted in a decrease in spatial mean ponding depths for October of 0.35 foot for a 3-hour time step. Review of the computer code indicated that there is no limit on the amount of water that can be transferred from the river system to the overland flow system, on the amount of seepage from the river to the ground-water system, on evaporation from the river system, or on evapotranspiration from the overland-flow system. Oscillations of 0.2 foot or less in simulated river stage were identified and attributed to a volume limiting function which is applied in solution of the overland-flow equations. The computation of the resistance coefficient is not consistent with the computation of overland-flow velocity. Ground-water boundary conditions do not always ensure a no-flow condition at the boundary. These inconsistencies had varying degrees of effects on model simulations, and it is likely that simulations longer than 1 year are needed to fully identify effects. However, inconsistencies in model formulations should not be ignored, even if the effects of such errors on model results appear to be small or have not been clearly defined. The Natural System Model can be a very useful tool for estimating pre-drainage hydrologic response in south Florida. The model includes all of the important physical processes needed to simulate a water balance. With a few exceptions, these hydrologic processes are represented in a reasonable manner using empirical, semiempirical, and mechanistic relations. The data sets that have been assembled to represent physical features, and hydrologic and meteorological conditions are quite extensive in their scope. Some suggestions for model application were made. Simulation results from the Natural System Model need to be interpreted on a regional basis, rather than cell by cell. The available evidence suggests that simulated water levels should be interpreted with about a plus or minus 1 foot uncertainty. It is probably not appropriate to use the Natural System Model to estimate pre-drainage discharges (as opposed to hydroperiods and water levels) at a particular location or across a set of adjacent computational cells. All simulated results for computational cells within about 10 miles of the model boundaries have a higher degree of uncertainty than results for the interior of the model domain. It is most appropriate to interpret the Natural System Model simulation results in connection with other available information. Stronger linkages between hydrologic inputs to the Everglades and the ecological response of the system would enhance restoration efforts.
Lightning NOx Production and Its Consequences for Tropospheric Chemistry
NASA Technical Reports Server (NTRS)
Pickering, Kenneth E.
2005-01-01
Cloud-resolving case-study simulations of convective transport and lightning NO production have yielded results which are directly applicable to the design of lightning parameterizations for global chemical transport models. In this work we have used cloud-resolving models (the Goddard Cumulus Ensemble Model (GCE) and MM5) to drive an off-line cloud-scale chemical transport model (CSCTM). The CSCTM, in conjunction with aircraft measurements of NOx in thunderstorms and ground-based lightning observations, has been used to constrain the amount of NO produced per flash. Cloud and chemistry simulations for several case studies of storms in different environments will be presented. Observed lightning flash rates have been incorporated into the CSCTM, and several scenarios of NO production per intracloud (IC) and per cloud-to-ground (CG) flash have been tested for each storm. The resulting NOx mixing ratios are compared with aircraft measurements taken within the storm (typically the anvil region) to determine the most likely NO production scenario. The range of values of NO production per flash (or per meter of lightning channel length) that have been deduced from the model will be shown and compared with values of production in the literature that have been deduced from observed NO spikes and from anvil flux calculations. Results show that on a per flash basis, IC flashes are nearly as productive of NO as CG flashes. This result simplifies the lightning parameterization for global models (i.e., an algorithm for estimating the IC/CG ratio is not necessary). Vertical profiles of lightning NOx mass at the end of the 3-D storm simulations have been summarized to yield suggested profiles for use in global models. Estimates of mean NO production per flash vary by a factor of three from one simulated storm to another. When combined with the global flash rate of 44 flashes per second from NASA's Optical Transient Detector (OTD) measurements, these estimates and the results from other techniques yield global NO production rates of 2-9 Tg N/year. Simulations of the photochemistry over the 24 hours following a storm have been performed to determine the additional ozone production which can be attributed to lightning NO. Convective transport of HOx precursors leads to the generation of a HOx plume which substantially aids the downstream ozone production.
GCSS Idealized Cirrus Model Comparison Project
NASA Technical Reports Server (NTRS)
Starr, David OC.; Benedetti, Angela; Boehm, Matt; Brown, Philip R. A.; Gierens, Klaus; Girard, Eric; Giraud, Vincent; Jakob, Christian; Jensen, Eric; Khvorostyanov, Vitaly;
2000-01-01
The GCSS Working Group on Cirrus Cloud Systems (WG2) is conducting a systematic comparison and evaluation of cirrus cloud models. This fundamental activity seeks to support the improvement of models used for climate simulation and numerical weather prediction through assessment and improvement of the "process" models underlying parametric treatments of cirrus cloud processes in large-scale models. The WG2 Idealized Cirrus Model Comparison Project is an initial comparison of cirrus cloud simulations by a variety of cloud models for a series of idealized situations with relatively simple initial conditions and forcing. The models (16) represent the state-of-the-art and include 3-dimensional large eddy simulation (LES) models, two-dimensional cloud resolving models (CRMs), and single column model (SCM) versions of GCMs. The model microphysical components are similarly varied, ranging from single-moment bulk (relative humidity) schemes to fully size-resolved (bin) treatments where ice crystal growth is explicitly calculated. Radiative processes are included in the physics package of each model. The baseline simulations include "warm" and "cold" cirrus cases where cloud top initially occurs at about -47C and -66C, respectively. All simulations are for nighttime conditions (no solar radiation) where the cloud is generated in an ice supersaturated layer, about 1 km in depth, with an ice pseudoadiabatic thermal stratification (neutral). Continuing cloud formation is forced via an imposed diabatic cooling representing a 3 cm/s uplift over a 4-hour time span followed by a 2-hour dissipation stage with no cooling. Variations of these baseline cases include no-radiation and stable-thermal-stratification cases. Preliminary results indicated the great importance of ice crystal fallout in determining even the gross cloud characteristics, such as average vertically-integrated ice water path (IWP). Significant inter-model differences were found. Ice water fall speed is directly related to the shape of the particle size distribution and the habits of the ice crystal population, whether assumed or explicitly calculated. In order to isolate the fall speed effect from that of the associated ice crystal population, simulations were also performed where ice water fall speed was set to the same constant value everywhere in each model. Values of 20 and 60 cm/s were assumed. Current results of the project will be described and implications will be drawn. In particular, this exercise is found to strongly focus the definition of issues resulting in observed inter-model differences and to suggest possible strategies for observational validation of the models. The next step in this project is to perform similar comparisons for well observed case studies with sufficient high quality data to adequately define model initiation and forcing specifications and to support quantitative validation of the results.
Molecular dynamics simulation of nitric oxide in myoglobin
Lee, Myung Won; Meuwly, Markus
2012-01-01
The infrared (IR) spectroscopy and ligand migration of photodissociated nitric oxide (NO) in and around the active sites in myoglobin (Mb) are investigated. A distributed multipolar model for open-shell systems is developed and used, which allows one to realistically describe the charge distribution around the diatomic probe molecule. The IR spectra were computed from the trajectories for two conformational substates at various temperatures. The lines are narrow (width of 3-7 cm-1 at 20-100 K), in agreement with the experimental observations where they have widths of 4-5 cm-1 at 4 K. It is found that within one conformational substate (B or C) the splitting of the spectrum can be correctly described compared with recent experiments. Similar to photodissociated CO in Mb, additional substates exist for NO in Mb, which are separated by barriers below 1 kcal/mol. Contrary to full quantum mechanical calculations, however, the force field and mixed QM/MM simulations do not correctly describe the relative shifts between the B- and C-states relative to gas-phase NO. Free energy simulations establish that NO preferably localizes in the distal site and the barrier for migration to the neighboring Xe4 pocket is ΔG(B→C) = 1.7-2.0 kcal/mol. The reverse barrier is ΔG(B←C) = 0.7 kcal/mol, which agrees well with the experimental value of 0.7 kcal/mol, estimated from kinetic data.
NASA Astrophysics Data System (ADS)
Piot, M.; Pay, M. T.; Jorba, O.; Baldasano, J. M.; Jiménez-Guerrero, P.; López, E.; Pérez, C.; Gassó, S.
2009-04-01
Often in Europe, population exposure to air pollution exceeds standards set by the EU and the World Health Organization (WHO). Urban/suburban areas are predominantly impacted, although exceedances of particulate matter (PM10 and PM2.5) and ozone (O3) also take place in rural areas. Within the framework of the CALIOPE project (Baldasano et al., 2008a), a high-resolution air quality forecasting system, WRF-ARW/HERMES/CMAQ/DREAM, has been developed and applied to the European domain (12 km x 12 km, 1 hr) as well as to the Iberian Peninsula domain (4 km x 4 km, 1 hr) to provide air quality forecasts for Spain (http://www.bsc.es/caliope/). Simulation with such a high-resolution model system has been made possible by its implementation on the MareNostrum supercomputer. To reassure potential users and reduce uncertainties, the model system must be evaluated to assess its performance in terms of air quality levels and the reproducibility of their dynamics. The present contribution describes a thorough quantitative evaluation study performed for a reference year (2004). The CALIOPE modelling system is configured with 38 vertical layers reaching up to 50 hPa for the meteorological core. Atmospheric initial and boundary conditions are obtained from the NCEP final analysis data. The vertical resolution of the CMAQ chemistry-transport model for gas-phase species and aerosols has been increased from 8 to 15 layers in order to simulate vertical exchanges more accurately. Gas-phase boundary conditions are provided by the LMDz-INCA2 global climate-chemistry model (see Hauglustaine et al., 2004). The DREAM model simulates long-range transport of mineral dust over the domains under study. For the European simulation, emissions are disaggregated from the EMEP expert emission inventory for 2004 to the utilized resolution using the criteria implemented in the HERMES emission model (Baldasano et al., 2008b). The HERMES model system, using a bottom-up approach, was adopted to estimate emissions for the Iberian Peninsula simulation at 4 km horizontal resolution, every hour. In order to evaluate the performance of the CALIOPE system, model simulations were compared with ground-based measurements from the EMEP and Spanish air quality networks. For the European domain, 45 stations have been used to evaluate NO2, 60 for O3, 39 for SO2, 25 for PM10 and 16 for PM2.5. On the other hand, the Iberian Peninsula domain has been evaluated against 75 NO2 stations, 84 O3 stations, 69 for SO2, and 46 for PM10. Such a large number of observations allows us to provide a detailed discussion of the model skills over quite different geographical locations and meteorological situations. The model simulation for Europe satisfactorily reproduces O3 concentrations throughout the year with relatively small errors: MNGE values range from 13% to 24%, and MNBE values show a slight negative bias ranging from -15% to 0%. These values lie within the range defined by the US-EPA guidelines (MNGE: +/- 30-35%; MNBE: +/- 10-15%). NO2 is less accurately simulated, with a mean MNBE of -47% caused by an overall underestimation in concentrations. The reproduction of SO2 concentrations is relatively correct but false peaks are reported (mean MNBE = 22%). The simulated variation of particulate matter is reliable, with a mean correlation of 0.5. False peaks were reduced by use of an improved 8-bin aerosol description in the DREAM dust model, but mean aerosol levels are still underestimated.
This problem is most probably related to uncertainties in our knowledge of the sources and in the description of the sulfate chemistry. The model simulation for Europe will be used to force the nested high-resolution simulation of the Iberian Peninsula. The performance of the latter will also be presented. Such a high-resolution simulation will allow analysis of the small-scale features observed over Spain. REFERENCES: Baldasano J.M., P. Jiménez-Guerrero, O. Jorba, C. Pérez, E. López, P. Güereca, F. Martin, M. García-Vivanco, I. Palomino, X. Querol, M. Pandolfi, M.J. Sanz and J.J. Diéguez, 2008a: CALIOPE: An operational air quality forecasting system for the Iberian Peninsula, Balearic Islands and Canary Islands - First annual evaluation and ongoing developments. Adv. Sci. and Res., 2: 89-98. Baldasano J.M., L. P. Güereca, E. López, S. Gassó, P. Jimenez-Guerrero, 2008b: Development of a high-resolution (1 km x 1 km, 1 h) emission model for Spain: the High-Elective Resolution Modelling Emission System (HERMES). Atm. Environ., 42 (31): 7215-7233. Hauglustaine, D. A., F. Hourdin, L. Jourdain, M.A. Filiberti, S. Walters, J. F. Lamarque and E. A. Holland, 2004: Interactive chemistry in the Laboratoire de Meteorologie Dynamique general circulation model: Description and background tropospheric chemistry evaluation. J. Geophys. Res., doi:10.1029/2003JD003957.
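For reference, the MNBE and MNGE values quoted above are the standard normalized bias and gross-error statistics used in the US-EPA guidance; a minimal sketch of their definitions follows (the concentration arrays are placeholders, not CALIOPE output):

```python
# Mean Normalized Bias Error (MNBE) and Mean Normalized Gross Error (MNGE),
# expressed in percent, for paired model/observation concentrations.
import numpy as np

def mnbe(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * np.mean((model - obs) / obs)

def mnge(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * np.mean(np.abs(model - obs) / obs)

# Placeholder hourly O3 concentrations in ppb (illustrative only).
obs   = [42.0, 55.0, 61.0, 38.0, 70.0]
model = [39.0, 50.0, 66.0, 35.0, 63.0]
print(f"MNBE = {mnbe(model, obs):+.1f}%, MNGE = {mnge(model, obs):.1f}%")
```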
Predictive Computational Modeling of Chromatin Folding
NASA Astrophysics Data System (ADS)
di Pierro, Michele; Zhang, Bin; Wolynes, Peter J.; Onuchic, Jose N.
In vivo, the human genome folds into well-determined and conserved three-dimensional structures. The mechanism driving the folding process remains unknown. We report a theoretical model (MiChroM) for chromatin derived by using the maximum entropy principle. The proposed model allows Molecular Dynamics simulations of the genome using as input the classification of loci into chromatin types and the presence of binding sites of loop forming protein CTCF. The model was trained to reproduce the Hi-C map of chromosome 10 of human lymphoblastoid cells. With no additional tuning the model was able to predict accurately the Hi-C maps of chromosomes 1-22 for the same cell line. Simulations show unknotted chromosomes, phase separation of chromatin types and a preference of chromatin of type A to sit at the periphery of the chromosomes.
Three-Dimensional Computational Model for Flow in an Over-Expanded Nozzle With Porous Surfaces
NASA Technical Reports Server (NTRS)
Abdol-Hamid, K. S.; Elmiligui, Alaa; Hunter, Craig A.; Massey, Steven J.
2006-01-01
A three-dimensional computational model is used to simulate flow in a non-axisymmetric, convergent-divergent nozzle incorporating porous cavities for shock-boundary layer interaction control. The nozzle has an expansion ratio (exit area/throat area) of 1.797 and a design nozzle pressure ratio of 8.78. Flow fields for the baseline nozzle (no porosity) and for the nozzle with porous surfaces of 10% openness are computed for Nozzle Pressure Ratio (NPR) varying from 1.29 to 9.54. The three-dimensional computational results indicate that baseline (no porosity) nozzle performance is dominated by unstable, shock-induced, boundary-layer separation at over-expanded conditions. For NPR less than or equal to 1.8, the separation is three-dimensional, somewhat unsteady, and confined to a bubble (with partial reattachment over the nozzle flap). For NPR greater than or equal to 2.0, separation is steady and fully detached, and becomes more two-dimensional as NPR increases. Numerical simulation of porous configurations indicates that a porous patch is capable of controlling off-design separation in the nozzle by either alleviating separation or by encouraging stable separation of the exhaust flow. In the present paper, computational simulation results, wall centerline pressure, Mach contours, and thrust efficiency ratio are presented, discussed, and compared with experimental data. Results indicate good agreement with experimental data. The three-dimensional simulation improves the comparisons for over-expanded flow conditions relative to two-dimensional assumptions.
NASA Astrophysics Data System (ADS)
Kuttippurath, J.; Godin-Beekmann, S.; Lefèvre, F.; Goutail, F.
2010-06-01
The stratospheric ozone loss during the Arctic winters 2004/05-2009/10 is investigated using high-resolution simulations from the chemical transport model Mimosa-Chim and observations from the Microwave Limb Sounder (MLS) on Aura, by means of the passive tracer technique. The winter 2004/05 was the coldest of the series, with the strongest chlorine activation. The ozone loss diagnosed from both model and measurements inside the polar vortex at 475 K ranges from about 0.7-1 ppmv in the warm winter 2005/06 to 1.7 ppmv in the cold winter 2004/05. Halogenated (chlorine and bromine) catalytic cycles contribute 75-90% of the accumulated ozone loss at this level. At 675 K the lowest loss of ~0.4 ppmv is computed in 2008/09 from both simulations and observations, and the highest loss is estimated in 2006/07 by the model (1.3 ppmv) and in 2004/05 by MLS (1.5 ppmv). Most of the ozone loss (60-75%) at this level results from cycles catalysed by nitrogen oxides (NO and NO2) rather than halogens. At both the 475 and 675 K levels the simulated ozone evolution inside the polar vortex is in reasonably good agreement with the observations. The ozone total column loss deduced from the model calculations at the MLS sampling locations inside the vortex ranges between 40 DU in 2005/06 and 94 DU in 2004/05, while that derived from observations ranges between 37 DU and 111 DU in the same winters. These estimates from both Mimosa-Chim and MLS are in general good agreement with those from the ground-based UV-VIS (ultraviolet-visible) ozone loss analyses for the respective winters.
NASA Astrophysics Data System (ADS)
Holanda, R. F. L.
2018-05-01
In this paper, we propose a new method to obtain the depletion factor γ(z), the ratio by which the measured baryon fraction in galaxy clusters is depleted with respect to the universal mean. We use exclusively galaxy cluster data, namely, X-ray gas mass fraction (fgas) and angular diameter distance measurements from Sunyaev-Zel'dovich effect plus X-ray observations. The galaxy clusters are the same in both data sets, and the non-isothermal spherical double β-model was used to describe their electron density and temperature profiles. In order to compare our results with those from recent cosmological hydrodynamical simulations, we suppose a possible time evolution for γ(z), such as γ(z) = γ0(1 + γ1 z). Our main conclusions are as follows: the γ0 value is in full agreement with the simulations. On the other hand, although the γ1 value found in our analysis is compatible with γ1 = 0 within 2σ c.l., our results show a non-negligible time evolution for the depletion factor, unlike the results of the simulations. However, we also put constraints on γ(z) by using the fgas measurements and angular diameter distances obtained from the flat ΛCDM model (Planck results) and from a sample of galaxy clusters described by an elliptical profile. For these cases no significant time evolution for γ(z) was found. Then, if a constant depletion factor is an inherent characteristic of these structures, our results show that the spherical double β-model used to describe the galaxy clusters considered here does not affect the quality of their fgas measurements.
Zhou, Hai-Bin; Chen, Tong-Bin; Gao, Ding; Zheng, Guo-Di; Chen, Jun; Pan, Tian-Hao; Liu, Hong-Tao; Gu, Run-Yao
2014-11-01
Reducing moisture in sewage sludge is one of the main goals of sewage sludge composting and biodrying. A mathematical model was used to simulate the performance of water removal under different aeration strategies. Additionally, the correlations between temperature, moisture content (MC), volatile solids (VS), oxygen content (OC), ambient air temperature, and aeration strategies were predicted. The mathematical model was verified based on coefficients of correlation between the measured and predicted results of over 0.80 for OC, MC, and VS, and 0.72 for temperature. The results of the simulation showed that water reduction was enhanced when the average aeration rate (AR) increased to 15.37 m3 min-1 (on/off time 6/34 min/min, AR: 102.46 m3 min-1), above which no further increase was observed. Furthermore, more water was removed under a higher on/off time of 7/33 min/min (AR: 87.34 m3 min-1), and when the ambient air temperature was higher.
A Physically-based Model for Predicting Soil Moisture Dynamics in Wetlands
NASA Astrophysics Data System (ADS)
Kalin, L.; Rezaeianzadeh, M.; Hantush, M. M.
2017-12-01
Wetlands are promoted as green infrastructure because of their capacity to retain and filter water. In wetlands going through wetting/drying cycles, simulation of nutrient processes and biogeochemical reactions in both ponded and unsaturated wetland zones is needed for an improved understanding of wetland functioning for water quality improvement. The physically based WetQual model can simulate the hydrology and the nutrient and sediment cycles in natural and constructed wetlands. WetQual can be used in continuously flooded environments or in wetlands going through wetting/drying cycles. Currently, WetQual relies on the 1-D Richards' equation (RE) to simulate soil moisture dynamics in unponded parts of the wetlands. This is unnecessarily complex because, as a lumped model, WetQual only requires average moisture contents. In this paper, we present a depth-averaged solution to the 1-D RE, called DARE, to simulate the average moisture content of the root zone and the layer below it in unsaturated parts of wetlands. DARE converts the PDE of the RE into ODEs; thus it is computationally more efficient. This method takes into account plant uptake and groundwater table fluctuations, which are commonly overlooked in hydrologic models dealing with wetlands undergoing wetting and drying cycles. For verification purposes, DARE solutions were compared to the Hydrus-1D model, which uses the full RE, under both a gravity-drainage-only assumption and the full-term equations. Model verifications were carried out under various top boundary conditions: no ponding at all, ponding at some point, and no rain. Through hypothetical scenarios and actual atmospheric data, the utility of DARE was demonstrated. The gravity-drainage version of DARE worked well in comparison to Hydrus-1D under all the assigned atmospheric boundary conditions of varying fluxes for all examined soil types (sandy loam, loam, sandy clay loam, and sand). The full-term version of DARE offers reasonable accuracy compared to the full RE solutions from Hydrus-1D, with a significant reduction in computational time. The full-term version of DARE estimated the moisture content with better accuracy for the root zone by considering zero pressure head at a fixed groundwater depth as the bottom boundary condition. The accuracy of this model is lower for the second layer.
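The depth-averaging idea can be illustrated with a much-simplified sketch: treat the root zone as a single store whose mean moisture content evolves according to the fluxes across its boundaries and root uptake. This is a schematic bucket analogue of the approach described above, with hypothetical flux expressions and parameter values; it is not the DARE equations.

```python
# Schematic depth-averaged root-zone moisture balance: one ODE for the mean
# volumetric water content instead of the full Richards' equation profile.
import numpy as np
from scipy.integrate import solve_ivp

L = 0.5                          # root-zone thickness (m), hypothetical
THETA_S, THETA_R = 0.43, 0.05    # saturated / residual water content, hypothetical

def dtheta_dt(t, y):
    theta = np.clip(y[0], THETA_R, THETA_S)
    infiltration = 2.0e-7 if t < 43_200 else 0.0                        # m/s, rain for 12 h
    drainage = 1.0e-6 * ((theta - THETA_R) / (THETA_S - THETA_R)) ** 4  # m/s, hypothetical
    uptake = 3.0e-8 * (theta - THETA_R) / (THETA_S - THETA_R)           # m/s, hypothetical
    return [(infiltration - drainage - uptake) / L]

sol = solve_ivp(dtheta_dt, (0.0, 2 * 86_400), [0.20], max_step=600.0)
print(f"mean theta after 2 days: {sol.y[0, -1]:.3f}")
```

The point of the sketch is only that one ODE per layer replaces the discretized Richards' equation profile, which is where the computational savings reported above come from.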
MPPhys—A many-particle simulation package for computational physics education
NASA Astrophysics Data System (ADS)
Müller, Thomas
2014-03-01
In a first course on classical mechanics, elementary physical processes such as elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same and although there is strong motivation, for high-school students in particular, because of the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solving the equations of motion numerically, which could, however, be illustrated by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a general idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual explorations.
Catalogue identifier: AERR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 111327
No. of bytes in distributed program, including test data, etc.: 608411
Distribution format: tar.gz
Programming language: C++, OpenGL, GLSL, OpenCL
Computer: Linux and Windows platforms with OpenGL support
Operating system: Linux and Windows
RAM: Source code 4.5 MB; complete package 242 MB
Classification: 14, 16.9
External routines: OpenGL, OpenCL
Nature of problem: Integrate N-body simulations, mass-spring models
Solution method: Numerical integration of N-body simulations, 3D rendering via OpenGL
Running time: Problem dependent
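To make the "missing link" concrete, the sketch below integrates a small mass-spring chain with the explicit Euler method, the scheme mentioned above. It is written in Python purely for illustration and is not MPPhys code; for production work a symplectic or higher-order integrator would normally replace plain Euler.

```python
# Explicit Euler integration of a one-dimensional chain of masses coupled by
# identical springs, with fixed walls at both ends.
import numpy as np

N, k, m, dt = 5, 10.0, 1.0, 1.0e-3     # particles, spring constant, mass, time step
x = np.linspace(1.0, N, N)             # rest positions 1..N between walls at 0 and N+1
x[2] += 0.3                            # displace the middle particle
v = np.zeros(N)

def forces(x):
    left  = np.concatenate(([0.0], x[:-1]))     # left neighbour (wall at 0)
    right = np.concatenate((x[1:], [N + 1.0]))  # right neighbour (wall at N+1)
    return k * (left - 2.0 * x + right)         # Hooke's law from both sides

for step in range(10_000):             # integrate for 10 time units
    a = forces(x) / m
    x = x + dt * v                     # Euler update of positions (old velocities),
    v = v + dt * a                     # then of velocities (old accelerations)
print(x)
```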
Utility of a super-flexible three-dimensional printed heart model in congenital heart surgery.
Hoashi, Takaya; Ichikawa, Hajime; Nakata, Tomohiro; Shimada, Masatoshi; Ozawa, Hideto; Higashida, Akihiko; Kurosaki, Kenichi; Kanzaki, Suzu; Shiraishi, Isao
2018-05-28
The objective of this study was to assess the utility of 3D printed heart models of congenital heart disease for preoperative surgical simulation. Twenty patient-specific 3D models were created between March 2015 and August 2017. All operations were performed by a young consultant surgeon who had no prior experience with complex biventricular repair. All 15 patients with balanced ventricles had outflow tract malformations (double-outlet right ventricle in 7 patients, congenitally corrected transposition of great arteries in 5, transposition of great arteries in 1, interrupted aortic arch Type B in 1, tetralogy of Fallot with pulmonary atresia and major aortopulmonary collateral arteries in 1). One patient had hypoplastic left heart complex, and the remaining 4 patients had a functional single ventricle. The median age at operation was 1.4 (range 0.1-5.9) years. Based on a multislice computed tomography data set, the 3D models were made of polyurethane resins using stereolithography as the printing technology and vacuum casting as the manufacturing method. All but 4 patients with a functional single ventricle underwent complete biventricular repair. The median cardiopulmonary bypass time and aortic cross-clamp time were 345 (110-570) min and 114 (35-293) min, respectively. During the median follow-up period of 1.3 (0.1-2.5) years, no mortality was observed. None of the patients experienced surgical heart block or systemic ventricular outflow tract obstruction. Three-dimensional printed heart models showed potential utility, especially in understanding the relationship between intraventricular communications and great vessels, as well as in simulation for creating intracardiac pathways.
Shuguang Liu; William A. Reiners; Michael Keller; David S. Schimel
2000-01-01
Nitrous oxide (N2O) and nitric oxide (NO) are important atmospheric trace gases participating in the regulation of global climate and environment. Predictive models of N2O and NO emissions from soil into the atmosphere are required. We modified the CENTURY model (Soil Sci. Soc. Am. J., 51 (1987) 1173) to simulate the emissions of N2O and NO from...
Analysis and Design of a Novel W-band SPST Switch by Employing Full-Wave EM Simulator
NASA Astrophysics Data System (ADS)
Xu, Zhengbin; Guo, Jian; Qian, Cheng; Dou, Wenbin
2011-12-01
In this paper, a W-band single pole single throw (SPST) switch based on a novel PIN diode model is presented. The PIN diode is modeled using a full-wave electromagnetic (EM) simulator and its parasitic parameters under both forward and reverse bias states are described by a T-network. By this approach, the measurement-based model, which is usually a must for high performance switch design, is no longer necessary. A compensation structure is optimized to obtain a high isolation of the switch. Accordingly, a W-band SPST switch is designed using a full wave EM simulator. Measurement results agree very well with simulated ones. Our measurements show that the developed switch has less than 1.5 dB insertion loss under the `on' state from 88 GHz to 98 GHz. Isolation greater than 30 dB over 2 GHz bandwidth and greater than 20 dB over 5 GHz bandwidth can be achieved at the center frequency of 94 GHz under the `off' state.
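To make the T-network representation concrete, the sketch below cascades the ABCD matrices of a series-shunt-series T-section between matched 50-ohm ports and converts the result to S21, from which insertion loss or isolation follows. The element values are placeholders chosen for illustration, not the extracted parameters of the diode used in this paper.

```python
# |S21| of a T-network (series Z1, shunt Z2, series Z3) between matched
# 50-ohm ports, via cascaded ABCD matrices.
import numpy as np

Z0 = 50.0
f = 94e9                        # centre frequency of the switch (Hz)
w = 2.0 * np.pi * f

def series(Z):                  # ABCD matrix of a series impedance
    return np.array([[1.0, Z], [0.0, 1.0]], dtype=complex)

def shunt(Z):                   # ABCD matrix of a shunt impedance
    return np.array([[1.0, 0.0], [1.0 / Z, 1.0]], dtype=complex)

# Placeholder element values (illustrative only; not the extracted diode parameters).
Z1 = 1j * w * 30e-12            # series parasitic inductance, 30 pH
Z2 = 2.0 + 1j * w * 25e-12      # shunt arm: diode resistance plus inductance
Z3 = 1j * w * 30e-12

A, B, C, D = (series(Z1) @ shunt(Z2) @ series(Z3)).ravel()
S21 = 2.0 / (A + B / Z0 + C * Z0 + D)
print(f"|S21| = {20 * np.log10(abs(S21)):.2f} dB")
```

Swapping in forward- and reverse-bias element values from a measured or EM-simulated diode model would give the on-state insertion loss and off-state isolation, respectively.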
Gibiansky, Leonid; Gibiansky, Ekaterina
2017-10-01
The paper extended the TMDD model to drugs with two identical binding sites (2-1 TMDD). The quasi-steady-state (2-1 QSS), quasi-equilibrium (2-1 QE), irreversible binding (2-1 IB), and Michaelis-Menten (2-1 MM) approximations of the model were derived. Using simulations, the 2-1 QSS approximation was compared with the full 2-1 TMDD model. As expected, and similarly to the standard TMDD model for monoclonal antibodies (mAbs), 2-1 QSS predictions were nearly identical to 2-1 TMDD predictions, except for times of fast changes following initiation of dosing, when equilibrium has not yet been reached. To illustrate the properties of the new equations and approximations, several variations of population PK data for mAbs with soluble (slow elimination of the complex) or membrane-bound (fast elimination of the complex) targets were simulated from a full 2-1 TMDD model and fitted to 2-1 TMDD models, to its approximations, and to the standard (1-1) QSS model. For a mAb with a soluble target, it was demonstrated that the 2-1 QSS model provided a nearly identical description of the observed (simulated) free drug and total target concentrations, although there was some minor bias in predictions of unobserved free target concentrations. The standard QSS approximation also provided a good description of the observed data, but was not able to distinguish between free drug concentrations (with no target attached and both binding sites free) and partially bound drug concentrations (with one of the binding sites occupied by the target). For a mAb with a membrane-bound target, the 2-1 MM approximation adequately described the data. The 2-1 QSS approximation converged 10 times faster than the full 2-1 TMDD model, and its run time was comparable with that of the standard QSS model.
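For orientation only, the sketch below integrates the standard one-site TMDD system (free drug L, free target R, complex P) after an IV bolus. These are the familiar 1-1 equations, not the two-binding-site (2-1) model derived in the paper, and the rate constants are arbitrary illustrative values.

```python
# Standard one-site TMDD model: free drug L, free target R, drug-target complex P.
from scipy.integrate import solve_ivp

kel, kon, koff = 0.1, 0.5, 0.05     # 1/h, 1/(nM*h), 1/h   (illustrative values)
ksyn, kdeg, kint = 1.0, 0.2, 0.3    # nM/h, 1/h, 1/h       (illustrative values)

def tmdd(t, y):
    L, R, P = y
    bind = kon * L * R - koff * P   # net binding rate
    dL = -kel * L - bind            # free drug: linear elimination plus binding
    dR = ksyn - kdeg * R - bind     # free target: turnover plus binding
    dP = bind - kint * P            # complex: formation minus internalization
    return [dL, dR, dP]

y0 = [100.0, ksyn / kdeg, 0.0]      # bolus of free drug; target at baseline
sol = solve_ivp(tmdd, (0.0, 200.0), y0, method="LSODA", max_step=1.0)
print(f"free drug at t = 200 h: {sol.y[0, -1]:.3g} nM")
```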
The Digital Landmass Simulation Production Overview,
1987-01-01
[OCR fragment of a Defense Mapping Agency Aerospace Center (St. Louis AFS, MO) report cover form; only the following text is recoverable.] A transformation program is run for each visual and radar simulation. The purpose of the transformation software is to convert the "raw" DTED and DFAD...
NASA Astrophysics Data System (ADS)
Govind, A.; Chen, J. M.; Margolis, H.
2007-12-01
Current estimates of terrestrial carbon overlook the effects of topographically driven lateral flow of soil water. We hypothesize that this component, which occurs at a landscape or watershed scale, has a significant influence on the spatial distribution of carbon because of its large contribution to the local water balance. To this end, we further developed a spatially explicit ecohydrological model, BEPS-TerrainLab V2.0. We simulated the coupled hydrological and carbon cycle processes in a black spruce-moss ecosystem in central Quebec, Canada. The carbon stocks were initialized using a long-term carbon cycling model, InTEC, under a climate change and disturbance scenario, the accuracy of which was determined with inventory plot measurements. Further, we simulated and validated several ecosystem indicators such as ET, GPP, NEP, water table, snow depth and soil temperature, using measurements for two years, 2004 and 2005. After gaining confidence in the model's ability to simulate ecohydrological processes, we tested the influence of lateral water flow on the carbon cycle. We considered three hydrological modeling scenarios: (1) Explicit, where realistic lateral water routing was considered; (2) Implicit, where calculations were based on a bucket modeling approach; and (3) NoFlow, where lateral water flow was turned off in the model. The results showed that pronounced anomalies exist among the scenarios for the simulated GPP, ET and NEP. In general, the Implicit calculation overestimated GPP and underestimated NEP, as opposed to the Explicit simulation; NoFlow underestimated GPP and overestimated NEP. The key processes controlling GPP were manifested through stomatal conductance, which is reduced under conditions of rapid soil saturation (NoFlow) or increased in the Implicit case, and nitrogen availability, which affects Vcmax, the maximum carboxylation rate. However, for NEP, the anomalies were attributed to differences in soil carbon pool decomposition, which determines heterotrophic respiration and the resultant nitrogen mineralization, which in turn affects GPP and several other feedback mechanisms. These results suggest that lateral water flow does play a significant role in the terrestrial carbon distribution. Therefore, regional or global scale terrestrial carbon estimates could have significant errors if proper hydrological constraints are not considered for modeling ecological processes, given the large topographic variations on the Earth's surface. For more info please visit: http://ajit.govind.googlepages.com/agu2007
Annual Report 2015: High Fidelity Modeling of Field-Reversed Configuration (FRC) Thrusters
2016-06-01
[Report fragment] Simulations become unstable as time evolves, leading to collision of the magnetic island with the boundary and destruction of the closed magnetic field structure... [The result] compares well with the results of the Hall-MHD code. Reference: R. D. Milroy, "A magnetohydrodynamic model of rotating magnetic field current drive in a field-reversed configuration," Physics of Plasmas, vol. 7, no. 10.
Viscoelastic Earthquake Cycle Simulation with Memory Variable Method
NASA Astrophysics Data System (ADS)
Hirahara, K.; Ohtani, M.
2017-12-01
There have so far been no EQ (earthquake) cycle simulations based on RSF (rate and state friction) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a larger effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, which requires the past slip rates and leads to huge computational costs. This is one reason why almost no simulations have been performed in viscoelastic media. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, introducing memory variables satisfying first-order differential equations, we need no hereditary integrals in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, we first introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull the block, which obeys the RSF law, at a constant rate. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of a smaller viscosity reduces the recurrence time to a minimum value; a smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to the smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with a thickness of 40 km overriding a Maxwell viscoelastic half layer with a relaxation time of 5 yrs. In a test model where we set the fault at 30-40 km depths, the recurrence time of the EQ cycle is reduced by about 1 yr, from 27.92 yrs in the elastic case to 26.85 yrs. This smaller recurrence time is similar to that found by Kato (2002), but the effect of the viscoelasticity on the cycles would be larger in the dip-slip fault case than in the strike-slip one.
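The memory-variable trick described above can be isolated in a few lines: a hereditary (convolution) integral with an exponential relaxation kernel is equivalent to one extra ordinary differential equation, so the slip-rate history never has to be stored. The sketch below checks this equivalence numerically for an arbitrary forcing; it is a schematic of the idea under a single-exponential kernel, not the authors' fault-cycle code.

```python
# Equivalence of a hereditary integral with exponential kernel,
#   m(t) = integral_0^t exp(-(t - s)/tau) f(s) ds,
# and the memory-variable ODE  dm/dt = -m/tau + f(t).
import numpy as np

tau, dt, T = 5.0, 1.0e-3, 50.0
t = np.arange(0.0, T, dt)
f = np.sin(0.3 * t) ** 2          # arbitrary forcing (e.g., a slip-deficit rate)

# Direct evaluation of the convolution at the final time (needs the full history).
m_conv = np.sum(np.exp(-(t[-1] - t) / tau) * f) * dt

# Memory-variable integration (needs only the current value of m).
m = 0.0
for fk in f:
    m += dt * (-m / tau + fk)

print(m_conv, m)                  # the two values agree to discretization error
```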
NASA Technical Reports Server (NTRS)
Ott, Lesley E.; Pickering, Kenneth E.; Stenchikov, Georgiy L.; Huntrieser, Heidi; Schumann, Ulrich
2006-01-01
The July 21, 1998 thunderstorm observed during the European Lightning Nitrogen Oxides Project (EULINOX) was simulated using the three-dimensional Goddard Cumulus Ensemble (GCE) model. The simulation successfully reproduced a number of observed storm features, including the splitting of the original cell into a southern cell which developed supercell characteristics and a northern cell which became multicellular. Output from the GCE simulation was used to drive an offline cloud-scale chemical transport model which calculates tracer transport and includes a parameterization of lightning NO(x) production that uses observed flash rates as input. Estimates of lightning NO(x) production were deduced by assuming various values of production per intracloud and per cloud-to-ground flash and comparing the results with in-cloud aircraft observations. The assumption that both types of flashes produce 360 moles of NO per flash on average compared most favorably with column mass and probability distribution functions calculated from observations. This assumed production per flash corresponds to a global annual lightning NOx source of 7 Tg N per yr. Chemical reactions were included in the model to evaluate the impact of lightning NO(x) on ozone. During the storm, the inclusion of lightning NOx in the model results in a small loss of ozone (on average less than 4 ppbv) at all model levels. Simulations of the chemical environment in the 24 hours following the storm show on average a small increase in the net production of ozone at most levels resulting from lightning NO(x), maximizing at approximately 5 ppbv per day at 5.5 km. Between 8 and 10.5 km, lightning NO(x) causes decreased net ozone production.
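The global scaling quoted above is simple enough to verify directly; a minimal sketch using the numbers in the abstract (360 moles of NO per flash and the OTD flash rate of 44 flashes per second):

```python
# Global annual lightning nitrogen source implied by a per-flash production
# estimate and the OTD global flash rate.
MOLAR_MASS_N = 14.007e-3          # kg of nitrogen per mole of NO
SECONDS_PER_YEAR = 3.156e7

moles_per_flash = 360.0           # moles NO per flash (value used in the study)
flashes_per_second = 44.0         # OTD global flash rate

kg_n_per_year = moles_per_flash * MOLAR_MASS_N * flashes_per_second * SECONDS_PER_YEAR
print(f"{kg_n_per_year / 1e9:.1f} Tg N per year")   # ~7 Tg N/yr, as quoted
```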
NASA Astrophysics Data System (ADS)
Koo, Cheol Hea; Lee, Hoon Hee; Moon, Sung Tae; Han, Sang Hyuck; Ju, Gwang Hyeok
2013-08-01
In aerospace research and development, the use of simulation in software development, component design, and system operation has grown steadily, and its growth is accelerating. This trend stems from the ease of handling simulations and the power of their output. Simulation brings many benefits owing to several of its characteristics: it is easy to handle (it cannot be broken or damaged by mistake); it never wears out (it does not age); and it is cost effective (once built, it can be distributed to 100-1000 people). GenSim (Generic Simulator), which is being developed by KARI and is compatible with the ESA SMP standard, provides such a simulation platform to support flight software validation and mission operation verification. The user interface of GenSim is shown in Figure 1 [1,2]. As shown in Figure 1, GenSim offers the displays most simulation platforms typically have: a GRD (Graphical Display) and an AND (Alpha Numeric Display). Frequently, however, more complex and powerful handling of the simulated data is required for actual system validation, for example in mission operation. In Figure 2, the system simulation result of COMS (Communication, Ocean, and Meteorological Satellite, launched on June 28, 2008) is drawn by the Celestia 3D program. In this case, the data needed by Celestia are provided by one of the simulation models resident in the system simulator through a UDP network connection. However, the required display format, data size, and communication rate vary, so the developer has to manage the connection protocol manually for each case. This introduces disorder into simulation model design and development and ultimately leads to performance issues. A performance issue arises when the amount of data required exceeds the capacity of the simulation kernel to process it safely. The problem is that the data sent to a visualization tool such as Celestia are provided by a simulation model, not by the kernel. Because the simulation model has no way of knowing the kernel's load in processing simulation events, it sends the data as frequently as needed. This can cause many potential problems, such as lack of responsiveness, missed deadlines, and data integrity problems with the model data during the simulation. SIMSAT and EuroSim give a warning message if a user-requested event, such as printing a log, cannot be processed as planned or requested. As a consequence, the requested event will be delayed or will not be processed, which means that the planned deadline may be violated. In most soft real-time simulations this can be neglected and merely causes minor inconvenience to users. It should be noted, however, that if user requests are not managed properly in critical situations, the simulation results may end in disorder. Having traced the disadvantages of letting a simulation model service such user requests, we conclude that the simulation model is not the appropriate place to provide this service, and this kind of work should be minimized as much as possible.
Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K
2016-11-25
Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
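The mechanism behind the attenuation and the copollutant false positives can be illustrated with a small Monte Carlo sketch of the same flavour. This is my own simplified illustration with assumed error magnitudes and correlations, not the authors' simulation code or their empirical error estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_days, n_sims = 1000, 200
true_rr_main, true_rr_co = 1.05, 1.00      # per 1 SD of true exposure; design mirrors the abstract
err_sd, err_corr = 0.5, 0.6                # assumed error magnitude and error correlation

slopes = []
for _ in range(n_sims):
    # correlated "true" standardized exposures for a pollutant pair
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n_days)
    lam = np.exp(np.log(50.0) + np.log(true_rr_main) * z[:, 0] + np.log(true_rr_co) * z[:, 1])
    y = rng.poisson(lam)
    # observed exposures = truth plus correlated measurement error
    cov_err = (err_sd ** 2) * np.array([[1.0, err_corr], [err_corr, 1.0]])
    x_obs = z + rng.multivariate_normal([0.0, 0.0], cov_err, size=n_days)
    fit = sm.GLM(y, sm.add_constant(x_obs), family=sm.families.Poisson()).fit()
    slopes.append(fit.params[1:])

slopes = np.asarray(slopes)
print("mean estimated RRs (main, copollutant):", np.exp(slopes.mean(axis=0)))
# the main-pollutant RR is attenuated toward 1, while the null copollutant RR can drift away from 1
```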
NASA Astrophysics Data System (ADS)
Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal; Medeiros, Lia; Marrone, Daniel; Sądowski, Aleksander; Narayan, Ramesh
2015-10-01
We explore the variability properties of long, high-cadence general relativistic magnetohydrodynamic (GRMHD) simulations across the electromagnetic spectrum using an efficient, GPU-based radiative transfer algorithm. We focus on both standard and normal evolution (SANE) and magnetically arrested disk (MAD) simulations with parameters that successfully reproduce the time-averaged spectral properties of Sgr A* and the size of its image at 1.3 mm. We find that the SANE models produce short-timescale variability with amplitudes and power spectra that closely resemble those inferred observationally. In contrast, MAD models generate only slow variability at lower flux levels. Neither set of models shows any X-ray flares, which most likely indicates that additional physics, such as particle acceleration mechanisms, need to be incorporated into the GRMHD simulations to account for them. The SANE models show strong, short-lived millimeter/infrared (IR) flares, with short (≲1 hr) time lags between the millimeter and IR wavelengths, that arise from the combination of short-lived magnetic flux tubes and strong-field gravitational lensing near the horizon. Such events provide a natural explanation for the observed IR flares with no X-ray counterparts.
7-year of surface ozone in a coastal city of central Italy: Observations and models
NASA Astrophysics Data System (ADS)
Biancofiore, Fabio; Verdecchia, Marco; Di Carlo, Piero; Tomassetti, Barbara; Aruffo, Eleonora; Busilacchio, Marcella; Bianco, Sebastiano; Di Tommaso, Sinibaldo; Colangeli, Carlo
2014-05-01
Hourly concentrations of ozone (O3) and nitrogen dioxide (NO2) were measured for seven years, from 1998 to 2005, in a seaside town in central Italy. Seasonal trends of O3 and NO2 recorded over these years are studied. Furthermore, we focused our attention on the data collected during 2005, analyzing them with two different methods: a regression model and a neural network model. Both models are used to simulate the hourly ozone concentration, using several sets of input. Four statistical criteria are used to evaluate model performance: correlation coefficient (R), fractional bias (FB), normalized mean squared error (NMSE) and factor of two (FA2). All the criteria show that the neural network performs better than the regression model in all the simulations. In addition, we tested some improvements to the neural network model, and the results of these tests are discussed. Finally, we used the neural network to forecast the hourly ozone concentrations one day ahead and 1, 3, 6 and 12 hours ahead. The performance of the model in predicting ozone levels is discussed.
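For reference, the four evaluation criteria named above can be computed in a few lines. The implementation below is a generic sketch using one common set of definitions (the exact conventions in the paper may differ), and the observation/prediction values are made up for illustration only.

```python
import numpy as np

def evaluation_metrics(obs, pred):
    """Common air-quality evaluation metrics: R, FB, NMSE and FA2 (one common convention)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r = np.corrcoef(obs, pred)[0, 1]                                    # correlation coefficient
    fb = 2.0 * (pred.mean() - obs.mean()) / (pred.mean() + obs.mean())  # fractional bias
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())      # normalized mean squared error
    fa2 = np.mean((pred / obs >= 0.5) & (pred / obs <= 2.0))            # fraction within a factor of two
    return {"R": r, "FB": fb, "NMSE": nmse, "FA2": fa2}

# Hypothetical hourly ozone values (ppb), for illustration only
obs = [30, 45, 60, 72, 55, 40]
pred = [28, 50, 58, 80, 50, 35]
print(evaluation_metrics(obs, pred))
```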
Simulation of root forms using cellular automata model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winarno, Nanang, E-mail: nanang-winarno@upi.edu; Prima, Eka Cahya; Afifah, Ratih Mega Ayu
This research aims to produce a simulation program for root forms using a cellular automata model. Stephen Wolfram, in his book "A New Kind of Science", discusses formation rules based on statistical analysis. Following Wolfram's investigation, this research develops the basic idea into a computer program written in the Delphi 7 programming language. To the best of our knowledge, no previous research has developed a simulation describing root forms with the cellular automata model and compared it against natural root forms in the presence of stones as a disturbance. The results show that (1) the simulation used four rules, comparing the output of the program with natural photographs, and each rule produced a different root form; and (2) the stone disturbances prevent root growth, and the multiplication of root forms was successfully modeled. To this end, this research added stones with a size of 120 cells placed randomly in the soil. As in nature, stones cannot be penetrated by plant roots. The results suggest that the program for simulating root forms could readily be developed further with 50 variations.
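To make the idea concrete, here is a minimal cellular-automaton sketch of downward root growth around impenetrable stones. The growth rule, grid size and stone count are my own illustrative choices; they are not the four rules or the 120-cell stone configuration used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
ROWS, COLS = 40, 21
soil = np.zeros((ROWS, COLS), dtype=int)    # 0 = empty soil, 1 = root, 2 = stone

# scatter impenetrable stones at random (the study uses 120 stone cells; scaled down here)
stones = rng.choice(ROWS * COLS, size=40, replace=False)
soil.flat[stones] = 2
soil[0, COLS // 2] = 1                      # seed the root at the surface

for _ in range(200):
    r, c = rng.integers(1, ROWS), rng.integers(COLS)
    if soil[r, c] != 0:
        continue
    # a simple growth rule (an illustration, not the paper's exact rule set):
    # an empty cell becomes root if the cell directly above it is root, with a small
    # probability of sideways branching; stone cells are never overwritten.
    above = soil[r - 1, c] == 1
    beside = (c > 0 and soil[r, c - 1] == 1) or (c < COLS - 1 and soil[r, c + 1] == 1)
    if above or (beside and rng.random() < 0.2):
        soil[r, c] = 1

for row in soil:
    print("".join(".RO"[v] for v in row))   # '.' soil, 'R' root, 'O' stone
```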
Yerramilli, Anjaneyulu; Dodla, Venkata B.; Desamsetti, Srinivas; Challa, Srinivas V.; Young, John H.; Patrick, Chuck; Baham, Julius M.; Hughes, Robert L.; Yerramilli, Sudha; Tuluri, Francis; Hardy, Mark G.; Swanier, Shelton J.
2011-01-01
In this study, an attempt was made to simulate air quality with reference to ozone over the Jackson (Mississippi) region using the online WRF/Chem (Weather Research and Forecasting-Chemistry) model. The WRF/Chem model has the advantage of integrating the meteorological and chemistry modules on the same computational grid with the same physical parameterizations, and it includes the feedback between atmospheric chemistry and physical processes. The model was designed with three nested domains, with the innermost domain covering the study region at a resolution of 1 km. The model was integrated for 48 hours continuously starting from 0000 UTC on 6 June 2006, and the evolution of surface ozone and other precursor pollutants was analyzed. The model-simulated atmospheric flow fields and distributions of NO2 and O3 were evaluated for each of three different time periods. The GIS-based spatial distribution maps for ozone, its precursors NO, NO2, CO and HONO, and the back trajectories indicate that mobile sources in Jackson, Ridgeland and Madison contribute significantly to their formation. The present study demonstrates the applicability of the WRF/Chem model to generate quantitative information at high spatial and temporal resolution for the development of decision support systems for air quality regulatory agencies and health administrators. PMID:21776240
Du, Fengzhou; Li, Binghang; Yin, Ningbei; Cao, Yilin; Wang, Yongqian
2017-03-01
Knowing the volume of a graft is essential in repairing alveolar bone defects. This study investigates two advanced preoperative volume measurement methods: three-dimensional (3D) printing and computer-aided engineering (CAE). Ten patients with unilateral alveolar cleft were enrolled in this study. Their computed tomographic data were sent to 3D printing and CAE software. A simulated graft was formed on the 3D-printed model, and its volume was measured by water displacement. The volume calculated by the CAE software used a mirror-reverse technique. The authors compared the actual volumes of the simulated grafts with the CAE software-derived volumes. The average volume of the simulated bone grafts from the 3D-printed models was 1.52 mL, higher than the mean volume of 1.47 mL calculated by the CAE software. The difference between the two volumes ranged from -0.18 to 0.42 mL. The paired Student t test showed no statistically significant difference between the volumes derived from the two methods. This study demonstrated that the mirror-reverse technique using CAE software is as accurate as the simulated operation on 3D-printed models in patients with unilateral alveolar cleft. These findings further validate the use of 3D printing and the CAE technique in alveolar defect repair.
Understanding quantum tunneling using diffusion Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.
2018-03-01
In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape, the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1/Δ², where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1/Δ, i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
First-Principles Monte Carlo Simulations of Reaction Equilibria in Compressed Vapors
2016-01-01
Predictive modeling of reaction equilibria presents one of the grand challenges in the field of molecular simulation. Difficulties in the study of such systems arise from the need (i) to accurately model both strong, short-ranged interactions leading to the formation of chemical bonds and weak interactions arising from the environment, and (ii) to sample the range of time scales involving frequent molecular collisions, slow diffusion, and infrequent reactive events. Here we present a novel reactive first-principles Monte Carlo (RxFPMC) approach that allows for investigation of reaction equilibria without the need to prespecify a set of chemical reactions and their ideal-gas equilibrium constants. We apply RxFPMC to investigate a nitrogen/oxygen mixture at T = 3000 K and p = 30 GPa, i.e., conditions that are present in atmospheric lightning strikes and explosions. The RxFPMC simulations show that the solvation environment leads to a significantly enhanced NO concentration that reaches a maximum when oxygen is present in slight excess. In addition, the RxFPMC simulations indicate the formation of NO2 and N2O in mole fractions approaching 1%, whereas N3 and O3 are not observed. The equilibrium distributions obtained from the RxFPMC simulations agree well with those from a thermochemical computer code parametrized to experimental data. PMID:27413785
Air Quality Modeling and Forecasting over the United States Using WRF-Chem
NASA Astrophysics Data System (ADS)
Boxe, C.; Hafsa, U.; Blue, S.; Emmanuel, S.; Griffith, E.; Moore, J.; Tam, J.; Khan, I.; Cai, Z.; Bocolod, B.; Zhao, J.; Ahsan, S.; Gurung, D.; Tang, N.; Bartholomew, J.; Rafi, R.; Caltenco, K.; Rivas, M.; Ditta, H.; Alawlaqi, H.; Rowley, N.; Khatim, F.; Ketema, N.; Strothers, J.; Diallo, I.; Owens, C.; Radosavljevic, J.; Austin, S. A.; Johnson, L. P.; Zavala-Gutierrez, R.; Breary, N.; Saint-Hilaire, D.; Skeete, D.; Stock, J.; Salako, O.
2016-12-01
WRF-Chem is the Weather Research and Forecasting (WRF) model coupled with Chemistry. The model simulates the emission, transport, mixing, and chemical transformation of trace gases and aerosols simultaneously with the meteorology. The model is used for investigation of regional-scale air quality, field program analysis, and cloud-scale interactions between clouds and chemistry. The development of WRF-Chem is a collaborative effort among the community, led by NOAA/ESRL scientists. The official WRF-Chem web page is located at the NOAA web site. Our model development is closely linked with both NOAA/ESRL and DOE/PNNL efforts. A description of PNNL WRF-Chem model development is located at the PNNL web site, as is the PNNL Aerosol Modeling Testbed. High school and undergraduate students, representing academic institutions throughout the USA's Tri-State Area (New York, New Jersey, Connecticut), set up WRF-Chem on CUNY CSI's High Performance Computing Center. Students learned the back-end coding that governs WRF-Chem's structure and the front-end coding that displays visually specified weather simulations and forecasts. Students also investigated the impact on selected baseline simulations/forecasts of the reaction NO2 + OH + M → HOONO + M (k = 9.2 × 10-12 cm3 molecule-1 s-1, Mollner et al. 2010). The reaction of OH and NO2 to form gaseous nitric acid (HONO2) is among the most influential in atmospheric chemistry. Until a few years ago, its rate coefficient remained poorly determined under tropospheric conditions because of difficulties in making laboratory measurements at 760 torr. These activities foster student coding competencies and deep insights into weather forecasting and air quality.
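As a rough sense of scale for the rate coefficient quoted above, the pseudo-first-order loss of NO2 against a typical daytime OH level can be estimated as follows; the OH concentration is an assumed value, not one from the study.

```python
# Pseudo-first-order loss of NO2 through reaction with OH, using the effective
# rate coefficient quoted above. The OH concentration is an assumed, typical
# daytime value, not a number from the abstract.
k = 9.2e-12            # cm3 molecule-1 s-1, effective second-order rate coefficient
oh = 2.0e6             # molecules cm-3, assumed midday OH concentration
loss_rate = k * oh     # s-1, pseudo-first-order loss frequency of NO2
lifetime_hours = 1.0 / loss_rate / 3600.0
print(f"NO2 loss frequency: {loss_rate:.2e} s^-1, lifetime ~{lifetime_hours:.1f} h")
```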
An implicit turbulence model for low-Mach Roe scheme using truncated Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Li, Chung-Gang; Tsubokura, Makoto
2017-09-01
The original Roe scheme is well known to be unsuitable for simulations of turbulence because it develops unsatisfactory dissipation. Simulations of turbulent channel flow for Reτ = 180 show that, with the 'low-Mach-fix for Roe' (LMRoe) proposed by Rieper [J. Comput. Phys. 230 (2011) 5263-5287], the Roe dissipation term potentially equates the simulation to an implicit large eddy simulation (ILES) at low Mach number. Thus inspired, a new implicit turbulence model for low Mach numbers is proposed that controls the Roe dissipation term appropriately. Referred to as the automatic dissipation adjustment (ADA) model, the method of solution follows procedures developed previously for the truncated Navier-Stokes (TNS) equations and, without tuning of parameters, uses the energy ratio as a criterion to automatically adjust the upwind dissipation. Simulations of turbulent channel flow at two different Reynolds numbers and of the Taylor-Green vortex were performed to validate the ADA model. In simulations of turbulent channel flow for Reτ = 180 at a Mach number of 0.05 using the ADA model, the mean velocity and turbulence intensities are in excellent agreement with DNS results. With Reτ = 950 at a Mach number of 0.1, the results are also consistent with DNS, indicating that the ADA model is reliable at higher Reynolds numbers as well. In simulations of the Taylor-Green vortex at Re = 3000, the kinetic energy is consistent with the power law of decaying turbulence with a -1.2 exponent for LMRoe both with and without the ADA model. With the ADA model, however, the dissipation rate is significantly improved near the dissipation peak, and the peak duration is captured more accurately. With a firm basis in TNS theory, applicability at higher Reynolds numbers, and ease of implementation since no extra terms are needed, the ADA model promises to become a useful tool for turbulence modeling.
NASA Astrophysics Data System (ADS)
Afanasyev, Andrey
2017-04-01
Numerical modelling of multiphase flows in porous media is necessary in many applications concerning subsurface utilization. An incomplete list of those applications includes oil and gas field exploration, underground carbon dioxide storage and geothermal energy production. The numerical simulations are conducted using complicated computer programs called reservoir simulators. A robust simulator should include a wide range of modelling options covering various exploration techniques, rock and fluid properties, and geological settings. In this work we present a recent development of new options in the MUFITS code [1]. The first option concerns modelling of multiphase flows in double-porosity double-permeability reservoirs. We describe the internal representation of reservoir models in MUFITS, which are constructed as a 3D graph of grid blocks, pipe segments, interfaces, etc. In the case of a double-porosity reservoir, two linked nodes of the graph correspond to a grid cell. We simulate the 6th SPE comparative problem [2] and a five-spot geothermal production problem to validate the option. The second option concerns modelling of flows in porous media coupled with flows in horizontal wells, which are represented in the 3D graph as a sequence of pipe segments linked with pipe junctions. The well completions link the pipe segments with the reservoir. The hydraulics in the wellbore, i.e. the frictional pressure drop, is calculated in accordance with Haaland's formula. We validate the option against the 7th SPE comparative problem [3]. We acknowledge financial support by the Russian Foundation for Basic Research (project No RFBR-15-31-20585). References [1] Afanasyev, A. MUFITS Reservoir Simulation Software (www.mufits.imec.msu.ru). [2] Firoozabadi A. et al. Sixth SPE Comparative Solution Project: Dual-Porosity Simulators // J. Petrol. Tech. 1990. V.42. N.6. P.710-715. [3] Nghiem L., et al. Seventh SPE Comparative Solution Project: Modelling of Horizontal Wells in Reservoir Simulation // SPE Symp. Res. Sim., 1991. DOI: 10.2118/21221-MS.
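For readers unfamiliar with it, Haaland's explicit correlation mentioned above, combined with the Darcy-Weisbach relation, gives the frictional pressure drop along a wellbore segment roughly as sketched below. The fluid and pipe values are assumed for illustration and are unrelated to the SPE comparative problems.

```python
import math

def haaland_friction_factor(reynolds, rel_roughness):
    """Darcy friction factor from Haaland's explicit approximation to the Colebrook equation."""
    inv_sqrt_f = -1.8 * math.log10((rel_roughness / 3.7) ** 1.11 + 6.9 / reynolds)
    return 1.0 / inv_sqrt_f ** 2

def frictional_pressure_drop(rho, velocity, diameter, length, reynolds, rel_roughness):
    """Darcy-Weisbach pressure drop along a pipe segment, in Pa."""
    f = haaland_friction_factor(reynolds, rel_roughness)
    return f * (length / diameter) * 0.5 * rho * velocity ** 2

# Illustrative wellbore segment (assumed values, not from the SPE comparative problems)
rho, v, d, length = 800.0, 1.5, 0.1, 50.0        # kg/m3, m/s, m, m
mu = 5e-4                                        # Pa s
re = rho * v * d / mu
print(frictional_pressure_drop(rho, v, d, length, re, rel_roughness=1e-4), "Pa")
```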
Microinstabilities in the pedestal region
NASA Astrophysics Data System (ADS)
Dickinson, David; Dudson, Benjamin; Wilson, Howard; Roach, Colin
2014-10-01
The regulation of transport at the pedestal top is important for the inter-ELM pedestal dynamics. Linear gyrokinetic analysis of the pedestal region during an ELM cycle on MAST has shown kinetic ballooning modes to be unstable at the knee of the pressure profile and in the steep pedestal region whilst microtearing modes (MTMs) dominate in the shallow gradient region inboard of the pedestal top. The transition between these instabilities at the pedestal knee has been observed in low and high collisionality MAST pedestals, and is likely to play an important role in the broadening of the pedestal. Nonlinear simulations are needed in this region to understand the microturbulence, the corresponding transport fluxes, and to gain further insight into the processes underlying the pedestal evolution. Such gyrokinetic simulations are numerically challenging and recent upgrades to the GS2 gyrokinetic code help improve their feasibility. We are also exploring reduced models that capture the relevant physics using the plasma simulation framework BOUT++. An electromagnetic gyrofluid model has recently been implemented with BOUT++ that has significantly reduced computational cost compared to the gyrokinetic simulations against which it will be benchmarked. This work was funded by the RCUK Energy programme, EURATOM and a EUROFusion fellowship WP14-FRF-CCFE/Dickinson and was carried out using: HELIOS at IFERC, Japan; ARCHER (EPSRC Grant No. EP/L000237/1); HECToR (EPSRC Grant No. EP/H002081/1).
Early Weightbearing After Operatively Treated Ankle Fractures: A Biomechanical Analysis.
Tan, Eric W; Sirisreetreerux, Norachart; Paez, Adrian G; Parks, Brent G; Schon, Lew C; Hasenboehler, Erik A
2016-06-01
No consensus exists regarding the timing of weightbearing after surgical fixation of unstable traumatic ankle fractures. We evaluated fracture displacement and timing of displacement with simulated early weightbearing in a cadaveric model. Twenty-four fresh-frozen lower extremities were assigned to Group 1, bimalleolar ankle fracture (n=6); Group 2, trimalleolar ankle fracture with unfixed small posterior malleolar fracture (n=9); or Group 3, trimalleolar ankle fracture with fixed large posterior malleolar fracture (n=9) and tested with axial compressive load at 3 Hz from 0 to 1000 N for 250 000 cycles to simulate 5 weeks of full weightbearing. Displacement was measured by differential variable reluctance transducer. The average motion at all fracture sites in all groups was significantly less than 1 mm (P < .05). Group 1 displacement of the lateral and medial malleolus fracture was 0.1±0.1 mm and 0.4±0.4 mm, respectively. Group 2 displacement of the lateral, medial, and posterior malleolar fracture was 0.6±0.4 mm, 0.5±0.4 mm, and 0.5±0.6 mm, respectively. Group 3 displacement of the lateral, medial, and posterior malleolar fracture was 0.1±0.1 mm, 0.5±0.7 mm, and 0.5±0.4 mm, respectively. The majority of displacement (64.0% to 92.3%) occurred in the first 50 000 cycles. There was no correlation between fracture displacement and bone mineral density. No significant fracture displacement, no hardware failure, and no new fractures occurred in a cadaveric model of early weightbearing in unstable ankle fracture after open reduction and internal fixation. This study supports further investigation of early weightbearing postoperative protocols after fixation of unstable ankle fractures. © The Author(s) 2016.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Lai, Canhai; Marcy, Peter William
2017-05-01
A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
Cardamone, Salvatore; Hughes, Timothy J; Popelier, Paul L A
2014-06-14
Atomistic simulation of chemical systems is currently limited by the elementary description of electrostatics that atomic point-charges offer. Unfortunately, a model of one point-charge for each atom fails to capture the anisotropic nature of electronic features such as lone pairs or π-systems. Higher order electrostatic terms, such as those offered by a multipole moment expansion, naturally recover these important electronic features. The question remains as to why such a description has not yet been widely adopted by popular molecular mechanics force fields. There are two widely-held misconceptions about the more rigorous formalism of multipolar electrostatics: (1) Accuracy: the implementation of multipole moments, compared to point-charges, offers little to no advantage in terms of an accurate representation of a system's energetics, structure and dynamics. (2) Efficiency: atomistic simulation using multipole moments is computationally prohibitive compared to simulation using point-charges. Whilst the second of these may have found some basis when computational power was a limiting factor, the first has no theoretical grounding. In the current work, we disprove the two statements above and systematically demonstrate that multipole moments are not discredited by either. We hope that this perspective will help in catalysing the transition to more realistic electrostatic modelling, to be adopted by popular molecular simulation software.
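The point about anisotropy can be seen from the first two terms of the multipole expansion: a site carrying only a point charge produces the same potential in every direction, whereas adding a dipole term introduces the angular dependence associated with lone pairs or π-systems. A minimal numerical sketch (my own, with made-up charge and dipole values) follows.

```python
import numpy as np

# Potential at distance r and angle theta from a site carrying charge q and a
# dipole of magnitude p aligned with the local z axis (atomic units, 4*pi*eps0 = 1).
def site_potential(q, p, r, theta):
    return q / r + p * np.cos(theta) / r**2   # monopole + dipole terms of the multipole expansion

q, p, r = -0.3, 0.5, 3.0                      # illustrative values, not from any force field
for theta_deg in (0, 90, 180):
    theta = np.radians(theta_deg)
    print(theta_deg, site_potential(q, p, r, theta))
# With p = 0 (point charge only) the potential is identical in every direction;
# the dipole term supplies the angular dependence needed for lone pairs or pi-systems.
```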
Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions.
Atallah, Nabil M; El-Fadel, Mutasem; Ghanimeh, Sophia; Saikaly, Pascal; Abou-Najm, Majdi
2014-12-01
In this study, two experimental sets of data, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators (methane generation, pH, acetate, total COD, and ammonia), as well as on an equally weighted combination of the five. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in reproducing the methane experimental results, it predicted the other intermediary outputs less accurately. The multi-objective optimization, on the other hand, provided better results than the methane-only optimization, although it still did not fully capture the intermediary outputs. The results from the parameter optimization were validated by their independent application to the data sets of the second digester. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Marocchino, A.; Atzeni, S.; Schiavi, A.
2014-01-01
In some regions of a laser driven inertial fusion target, the electron mean-free path can become comparable to or even longer than the electron temperature gradient scale-length. This can be particularly important in shock-ignited (SI) targets, where the laser-spike heated corona reaches temperatures of several keV. In this case, thermal conduction cannot be described by a simple local conductivity model and a Fick's law. Fluid codes usually employ flux-limited conduction models, which preserve causality but lose important features of the thermal flow. A more accurate thermal flow modeling requires convolution-like non-local operators. In order to improve the simulation of SI targets, the non-local electron transport operator proposed by Schurtz-Nicolaï-Busquet [G. P. Schurtz et al., Phys. Plasmas 7, 4238 (2000)] has been implemented in the DUED fluid code. Both one-dimensional (1D) and two-dimensional (2D) simulations of SI targets have been performed. 1D simulations of the ablation phase highlight that while the shock profile and timing might be mocked up with a flux limiter, the electron temperature profiles exhibit a rather different behavior, with no major effect on the final gain. The spike, instead, can only roughly be reproduced with a fixed flux-limiter value. 1D target gain is however unaffected, provided some minor tuning of laser pulses. 2D simulations show that the use of a non-local thermal conduction model does not affect the robustness to mispositioning of targets driven by quasi-uniform laser irradiation. 2D simulations performed with only two final polar intense spikes yield encouraging results and support further studies.
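For context, the flux-limited local conduction that the non-local operator is meant to replace simply caps the Spitzer-Härm flux at a fraction f of the free-streaming flux. The sketch below is my own schematic illustration with assumed constants and an arbitrary conductivity prefactor; it is not the DUED implementation.

```python
import numpy as np

# Flux-limited electron heat flux on a 1D grid (illustrative sketch; the constants,
# the conductivity prefactor kappa0 and the flux limiter f are assumptions, not
# values from the paper).
kB = 1.602e-12        # erg/eV, so temperatures can be given in eV
me = 9.109e-28        # g

def flux_limited_heat_flux(Te_eV, ne, dx, kappa0=1.0e12, f=0.06):
    """Return the harmonically limited heat flux at the cell faces (erg cm-2 s-1)."""
    Te = np.asarray(Te_eV, float)
    grad_T = np.diff(Te) / dx
    Te_face = 0.5 * (Te[1:] + Te[:-1])
    ne_face = 0.5 * (ne[1:] + ne[:-1])
    q_sh = -kappa0 * Te_face**2.5 * grad_T                  # Spitzer-Harm-like local flux
    v_th = np.sqrt(kB * Te_face / me)
    q_fs = f * ne_face * kB * Te_face * v_th                # free-streaming limit
    return np.sign(q_sh) * (np.abs(q_sh) * q_fs) / (np.abs(q_sh) + q_fs)  # harmonic limiter

Te = np.linspace(3000.0, 300.0, 50)       # eV, a steep coronal temperature profile
ne = np.full(50, 1.0e21)                  # cm-3
print(flux_limited_heat_flux(Te, ne, dx=1.0e-4)[:3])
```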
The effects of solar radiation on thermal comfort.
Hodder, Simon G; Parsons, Ken
2007-01-01
The aim of this study was to investigate the relationship between simulated solar radiation and thermal comfort. Three studies investigated the effects of (1) the intensity of direct simulated solar radiation, (2) spectral content of simulated solar radiation and (3) glazing type on human thermal sensation responses. Eight male subjects were exposed in each of the three studies. In Study 1, subjects were exposed to four levels of simulated solar radiation: 0, 200, 400 and 600 Wm(-2). In Study 2, subjects were exposed to simulated solar radiation with four different spectral contents, each with a total intensity of 400 Wm(-2) on the subject. In Study 3, subjects were exposed through glass to radiation caused by 1,000 Wm(-2) of simulated solar radiation on the exterior surface of four different glazing types. The environment was otherwise thermally neutral where there was no direct radiation, predicted mean vote (PMV)=0+/-0.5, [International Standards Organisation (ISO) standard 7730]. Ratings of thermal sensation, comfort, stickiness and preference and measures of mean skin temperature (t(sk)) were taken. Increase in the total intensity of simulated solar radiation rather than the specific wavelength of the radiation is the critical factor affecting thermal comfort. Thermal sensation votes showed that there was a sensation scale increase of 1 scale unit for each increase of direct radiation of around 200 Wm(-2). The specific spectral content of the radiation has no direct effect on thermal sensation. The results contribute to models for determining the effects of solar radiation on thermal comfort in vehicles, buildings and outdoors.
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).
Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar
2018-03-19
The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
NASA Astrophysics Data System (ADS)
Matha, Denis; Sandner, Frank; Schlipf, David
2014-12-01
Design verification of wind turbines is performed by simulation of design load cases (DLC) defined in the IEC 61400-1 and -3 standards or equivalent guidelines. Due to the resulting large number of necessary load simulations, a method is presented here to reduce the computational effort for DLC simulations significantly by introducing a reduced nonlinear model and simplified hydro- and aerodynamics. The advantage of the formulation is that the nonlinear ODE system contains only basic mathematical operations and no iterations or internal loops, which makes it computationally very efficient. Global turbine extreme and fatigue loads such as rotor thrust, tower base bending moment and mooring line tension, as well as platform motions, are outputs of the model. They can be used to identify critical and less critical load situations, which can then be analysed with a higher fidelity tool, thereby speeding up the design process. Results from these reduced-model DLC simulations are presented and compared to higher fidelity models. Results in the frequency and time domains as well as extreme and fatigue load predictions demonstrate that good agreement between the reduced and advanced models is achieved, making it possible to efficiently exclude less critical DLC simulations and to identify the most critical subset of cases for a given design. Additionally, the model is applicable for brute-force optimization of floater control system parameters.
CFD simulation of MSW combustion and SNCR in a commercial incinerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Zihong; Li, Jian; Wu, Tingting
Highlights: • Presented a CFD scheme for modeling an MSW incinerator including the SNCR process. • Performed a sensitivity analysis of SNCR operating conditions. • Non-uniform distributions of gas velocity, temperature and NOx in the incinerator. • The injection position of the reagent was critical for a desirable SNCR performance. • An NSR of 1.5 was recommended as a compromise between NOx reduction rates and NH3 slip. - Abstract: A CFD scheme was presented for modeling municipal solid waste (MSW) combustion in a moving-grate incinerator, including the in-bed burning of solid wastes, the out-of-bed burnout of gaseous volatiles, and the selective non-catalytic reduction (SNCR) process between urea (CO(NH2)2) and NOx. The in-bed calculations provided 2-D profiles of the gas-solid temperatures and the gas species concentrations along the bed length, which were then used as inlet conditions for the out-of-bed computations. The over-bed simulations provided the profiles of incident radiation heat flux on the top of the bed. A 3-dimensional benchmark simulation was conducted for a 750 t/day commercial incinerator using the present coupling scheme incorporating a reduced SNCR reduction mechanism. Numerical tests were performed to investigate the effects of operating parameters such as injection position, injection speed and the normalized stoichiometric ratio (NSR) on the SNCR performance. The simulation results showed that the distributions of gas velocity, temperature and NOx concentration were highly non-uniform, which made the injection position one of the most sensitive operating parameters influencing the SNCR performance of moving-grate incinerators. The simulation results also showed that multi-layer injections were needed to meet the EU2000 standard, and an NSR of 1.5 was suggested as a compromise between satisfactory NOx reduction and reasonable NH3 slip rates. This work provides useful guidance for the design and operation of the SNCR process in moving-grate incinerators.
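To give a feel for what an NSR of 1.5 means in terms of reagent feed, the sketch below converts a target NSR into a urea mass flow. The NSR convention used (moles of NH2 groups injected per mole of NOx) and the flue-gas numbers are assumptions for illustration, not values from the incinerator case.

```python
# Urea feed needed for a target normalized stoichiometric ratio (NSR).
# NSR is taken here as moles of NH2 groups injected per mole of NOx to be reduced
# (a common convention; an assumption, not a definition given in the abstract).
M_UREA = 60.06            # g/mol, CO(NH2)2
NH2_PER_UREA = 2          # each urea molecule carries two NH2 groups

def urea_feed_rate(nox_molar_flow, nsr=1.5):
    """Urea mass flow (g/s) for a given NOx molar flow (mol/s) at the chosen NSR."""
    urea_mol = nsr * nox_molar_flow / NH2_PER_UREA
    return urea_mol * M_UREA

# Illustrative flue-gas figures (assumed, not from the 750 t/day incinerator case)
flue_gas_flow = 40.0                 # mol/s of flue gas
nox_ppmv = 250.0                     # NOx mixing ratio in the flue gas
nox_flow = flue_gas_flow * nox_ppmv * 1e-6
print(f"urea feed ~{urea_feed_rate(nox_flow):.3f} g/s at NSR 1.5")
```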
NASA Astrophysics Data System (ADS)
Kuik, Friderike; Kerschbaumer, Andreas; Lauer, Axel; Lupascu, Aurelia; von Schneidemesser, Erika; Butler, Tim M.
2018-06-01
With NO2 limit values being frequently exceeded in European cities, complying with the European air quality regulations still poses a problem for many cities. Traffic is typically a major source of NOx emissions in urban areas. High-resolution chemistry transport modelling can help to assess the impact of high urban NOx emissions on air quality inside and outside of urban areas. However, many modelling studies report an underestimation of modelled NOx and NO2 compared with observations. Part of this model bias has been attributed to an underestimation of NOx emissions, particularly in urban areas. This is consistent with recent measurement studies quantifying underestimations of urban NOx emissions by current emission inventories, identifying the largest discrepancies when the contribution of traffic NOx emissions is high. This study applies a high-resolution chemistry transport model in combination with ambient measurements in order to assess the potential underestimation of traffic NOx emissions in a frequently used emission inventory. The emission inventory is based on officially reported values and the Berlin-Brandenburg area in Germany is used as a case study. The WRF-Chem model is used at a 3 km × 3 km horizontal resolution, simulating the whole year of 2014. The emission data are downscaled from an original resolution of ca. 7 km × 7 km to a resolution of 1 km × 1 km. An in-depth model evaluation including spectral decomposition of observed and modelled time series and error apportionment suggests that an underestimation in traffic emissions is likely one of the main causes of the bias in modelled NO2 concentrations in the urban background, where NO2 concentrations are underestimated by ca. 8 µg m-3 (-30 %) on average over the whole year. Furthermore, a diurnal cycle of the bias in modelled NO2 suggests that a more realistic treatment of the diurnal cycle of traffic emissions might be needed. Model problems in simulating the correct mixing in the urban planetary boundary layer probably play an important role in contributing to the model bias, particularly in summer. Also taking into account this and other possible sources of model bias, a correction factor for traffic NOx emissions of ca. 3 is estimated for weekday daytime traffic emissions in the core urban area, which corresponds to an overall underestimation of traffic NOx emissions in the core urban area of ca. 50 %. Sensitivity simulations for the months of January and July using the calculated correction factor show that the weekday model bias can be improved from -8.8 µg m-3 (-26 %) to -5.4 µg m-3 (-16 %) in January on average in the urban background, and -10.3 µg m-3 (-46 %) to -7.6 µg m-3 (-34 %) in July. In addition, the negative bias of weekday NO2 concentrations downwind of the city in the rural and suburban background can be reduced from -3.4 µg m-3 (-12 %) to -1.2 µg m-3 (-4 %) in January and from -3.0 µg m-3 (-22 %) to -1.9 µg m-3 (-14 %) in July. The results and their consistency with findings from other studies suggest that more research is needed in order to more accurately understand the spatial and temporal variability in real-world NOx emissions from traffic, and apply this understanding to the inventories used in high-resolution chemical transport models.
NASA Astrophysics Data System (ADS)
Sharma, A.; Woldemeskel, F. M.; Sivakumar, B.; Mehrotra, R.
2014-12-01
We outline a new framework for assessing uncertainties in model simulations, be they hydro-ecological simulations for known scenarios, or climate simulations for assumed scenarios representing the future. This framework is illustrated here using GCM projections of future climates for hydrologically relevant variables (precipitation and temperature), with the uncertainty segregated into three dominant components: model uncertainty, scenario uncertainty (representing greenhouse gas emission scenarios), and ensemble uncertainty (representing uncertain initial conditions and states). A novel uncertainty metric, the Square Root Error Variance (SREV), is used to quantify the uncertainties involved. The SREV requires: (1) interpolating raw and corrected GCM outputs to a common grid; (2) converting these to percentiles; (3) estimating SREV for model, scenario, initial condition and total uncertainty at each percentile; and (4) transforming SREV to a time series. The outcome is a spatially varying series of SREVs associated with each model that can be used to assess how uncertain the system is at each simulated point or time. This framework, while illustrated in a climate change context, is applicable to the assessment of uncertainties in any modelling framework. The proposed method is applied to monthly precipitation and temperature from 6 CMIP3 and 13 CMIP5 GCMs across the world. For CMIP3, the B1, A1B and A2 scenarios are considered, whereas for CMIP5 the RCP2.6, RCP4.5 and RCP8.5 scenarios, representing low, medium and high emissions, are used. For both CMIP3 and CMIP5, model structure is the largest source of uncertainty, which reduces significantly after correcting for biases. Scenario uncertainty increases, especially for temperature, in the future due to the divergence of the three emission scenarios analysed. While CMIP5 precipitation simulations exhibit a small reduction in total uncertainty over CMIP3, there is almost no reduction observed for the temperature projections. Estimation of uncertainty in both space and time sheds light on the spatial and temporal patterns of uncertainties in GCM outputs, providing an effective platform for risk-based assessments of any alternate plans or decisions that may be formulated using GCM simulations.
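The four-step workflow described above can be caricatured in a few lines: build a model x scenario x ensemble-member array of projections, partition its spread into the three components, and take square roots. The decomposition below is a deliberately simplified stand-in with synthetic data; it is not the authors' exact SREV formulation.

```python
import numpy as np

# Illustrative workflow sketch only: partition the spread of an ensemble of projections
# into model, scenario and initial-condition components and report square-root variances.
rng = np.random.default_rng(0)
n_models, n_scen, n_runs, n_time = 6, 3, 4, 120
# synthetic monthly series: model offsets, scenario-dependent trends, run-to-run noise
model_off = rng.normal(0.0, 1.0, n_models)[:, None, None, None]
scen_trend = np.linspace(0.0, 1.0, n_time) * np.array([0.5, 1.0, 1.5])[:, None]
data = (model_off + scen_trend[None, :, None, :]
        + rng.normal(0.0, 0.3, (n_models, n_scen, n_runs, n_time)))

var_model = data.mean(axis=(1, 2)).var(axis=0)            # spread across models
var_scen = data.mean(axis=(0, 2)).var(axis=0)              # spread across scenarios
var_runs = data.var(axis=2).mean(axis=(0, 1))               # spread across initial conditions
srev_total = np.sqrt(var_model + var_scen + var_runs)       # combined uncertainty per time step

print("model / scenario / ensemble components at final month:",
      np.sqrt(var_model[-1]), np.sqrt(var_scen[-1]), np.sqrt(var_runs[-1]))
print("total:", srev_total[-1])
```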
Lim, Grace; Krohner, Robert G; Metro, David G; Rosario, Bedda L; Jeong, Jong-Hyeon; Sakai, Tetsuro
2016-05-01
There are many teaching methods for epidural anesthesia skill acquisition. Previous work suggests that there is no difference in skill acquisition whether novice learners engage in low-fidelity (LF) versus high-fidelity haptic simulation for epidural anesthesia. No study, however, has compared the effect of LF haptic simulation for epidural anesthesia versus mental imagery (MI) training in which no physical practice is attempted. We tested the hypothesis that MI training is superior to LF haptic simulation training for epidural anesthesia skill acquisition. Twenty Post-Graduate Year 2 (PGY-2) anesthesiology residents were tested at the beginning of the training year. After a didactic lecture on epidural anesthesia, they were randomized into 2 groups. Group LF had LF simulation training for epidural anesthesia using a previously described banana simulation technique. Group MI had guided, scripted MI training in which they initially were oriented to the epidural kit components and epidural anesthesia was described stepwise in detail, followed by individual mental rehearsal; no physical practice was undertaken. Each resident then individually performed epidural anesthesia on a partial-human task trainer on 3 consecutive occasions under the direct observation of skilled evaluators who were blinded to group assignment. Technical achievement was assessed with the use of a modified validated skills checklist. Scores (0-21) and duration to task completion (minutes) were recorded. A linear mixed-effects model analysis was performed to determine the differences in scores and duration between groups and over time. There was no statistical difference between the 2 groups for scores and duration to task completion. Both groups showed similarly significant increases (P = 0.0015) in scores over time (estimated mean score [SE]: group MI, 15.9 [0.55] to 17.4 [0.55] to 18.6 [0.55]; group LF, 16.2 [0.55] to 17.7 [0.55] to 18.9 [0.55]). Time to complete the procedure decreased similarly and significantly (P = 0.032) for both groups after the first attempt (estimated mean time [SE]: group MI, 16.0 [1.04] minutes to 13.7 [1.04] minutes to 13.3 [1.04] minutes; group LF: 15.8 [1.04] minutes to 13.4 [1.04] minutes to 13.1 [1.04] minutes). MI is not different from LF simulation training for epidural anesthesia skill acquisition. Education in epidural anesthesia with structured didactics and continual MI training may suffice to prepare novice learners before an attempt on human subjects.
NASA Technical Reports Server (NTRS)
Lutz, R. J.; Spar, J.
1978-01-01
The Hansen atmospheric model was used to compute five monthly forecasts (October 1976 through February 1977). The comparison is based on an energetics analysis, meridional and vertical profiles, error statistics, and prognostic and observed mean maps. The monthly mean model simulations suffer from several defects. There is, in general, no skill in the simulation of the monthly mean sea-level pressure field, and only marginal skill is indicated for the 850 mb temperatures and 500 mb heights. The coarse-mesh model appears to generate a less satisfactory monthly mean simulation than the finer mesh GISS model.
One-equation near-wall turbulence modeling with the aid of direct simulation data
NASA Technical Reports Server (NTRS)
Rodi, W.; Mansour, N. N.
1990-01-01
The length scales appearing in the relations for the eddy viscosity and dissipation rate in one-equation models were evaluated from direct numerical simulation data for developed channel and boundary-layer flow at two Reynolds numbers each. To prepare the ground for the evaluation, the distribution of the most relevant mean-flow and turbulence quantities is presented and discussed with respect to Reynolds-number influence and to differences between channel and boundary-layer flow. An alternative model is also examined in which (v'^2)^(1/2) is used as the velocity scale instead of k^(1/2). With this velocity scale, the length scales now appearing in the model follow very closely a linear relationship near the wall, so that no damping is necessary. For the determination of v'^2 in the context of a one-equation model, a correlation is provided between v'^2/k and u'v'/k.
Pietarila Graham, Jonathan; Holm, Darryl D; Mininni, Pablo D; Pouquet, Annick
2007-11-01
We compute solutions of the Lagrangian-averaged Navier-Stokes alpha (LANS-alpha) model for significantly higher Reynolds numbers (up to Re approximately 8300) than have previously been accomplished. This allows sufficient separation of scales to observe a Navier-Stokes inertial range followed by a second inertial range specific to the LANS-alpha model. Both fully helical and nonhelical flows are examined, up to Reynolds numbers of approximately 1300. Analysis of the third-order structure function scaling supports the predicted l^3 scaling; it corresponds to a k^(-1) scaling of the energy spectrum for scales smaller than alpha. The energy spectrum itself shows a different scaling, which goes as k^(+1). This latter spectrum is consistent with the absence of stretching in the subfilter scales due to the Taylor frozen-in hypothesis employed as a closure in the derivation of the LANS-alpha model. These two scalings are conjectured to coexist in different spatial portions of the flow. The l^3 [E(k) ~ k^(-1)] scaling is subdominant to k^(+1) in the energy spectrum, but the l^3 scaling is responsible for the direct energy cascade, as no cascade can result from motions with no internal degrees of freedom. We demonstrate verification of the prediction for the size of the LANS-alpha attractor resulting from this scaling. From this, we give a methodology either for arriving at grid-independent solutions for the LANS-alpha model, or for obtaining a formulation of large eddy simulation that is optimal in the context of the alpha models. The fully converged grid-independent LANS-alpha model may not be the best approximation to a direct numerical simulation of the Navier-Stokes equations, since the minimum error is a balance between truncation errors and the approximation error due to using the LANS-alpha equations instead of the primitive equations. Furthermore, the small-scale behavior of the LANS-alpha model contributes to a reduction of flux at constant energy, leading to a shallower energy spectrum for large alpha. These small-scale features, however, do not preclude the LANS-alpha model from reproducing correctly the intermittency properties of high-Reynolds-number flow.
NASA Technical Reports Server (NTRS)
Havens, Robert F.
1946-01-01
Tests of a powered dynamic model of the Columbia XJL-1 amphibian were made in Langley tank no. 1 to determine the hydrodynamic stability and spray characteristics of the basic hull and to investigate the effects of modifications on these characteristics. Modifications to the forebody chine flare, the step, and the afterbody, and an increase in the angle of incidence of the wing were included in the test program. The seaworthiness and spray characteristics were studied from simulated taxi runs in smooth and rough water. The trim limits of stability, the range of stable positions of the center of gravity for take-off, and the landing stability were determined in smooth water. The aerodynamic lift, pitching moment, and thrust were determined at speeds up to take-off speed.
Ditching Tests of a 1/24-Scale Model of the Lockheed XR60-1 Airplane, TED No. NACA 235
NASA Technical Reports Server (NTRS)
Fisher, Lloyd J.; Cederborg, Gibson A.
1948-01-01
The ditching characteristics of the Lockheed XR60-1 airplane were determined by tests of a 1/24-scale dynamic model in calm water at the Langley tank no. 2 monorail. Various landing attitudes, flap settings, speeds, and conditions of damage were investigated. The ditching behavior was evaluated from recordings of decelerations, lengths of runs, and motions of the model. Scale-strength bottoms and simulated crumpled bottoms were used to reproduce probable damage to the fuselage. It was concluded that the airplane should be ditched at a landing attitude of about 5 deg with flaps full down. At this attitude, the maximum longitudinal deceleration should not exceed 2g and the landing run will be about three fuselage lengths. Damage to the fuselage will not be excessive and will be greatest near the point of initial contact with the water.
Michael A. Larson; Frank R. Thompson III; Joshua J. Millspaugh; William D. Dijak; Stephen R. Shifley
2004-01-01
Methods for habitat modeling based on landscape simulations and population viability modeling based on habitat quality are well developed, but no published study of which we are aware has effectively joined them in a single, comprehensive analysis. We demonstrate the application of a population viability model for ovenbirds (Seiurus aurocapillus)...
Validation of WRF-Chem air quality simulations in the Netherlands at high resolution
NASA Astrophysics Data System (ADS)
Hilboll, A.; Lowe, D.; Kuenen, J. J. P.; Denier Van Der Gon, H.; Vrekoussis, M.
2017-12-01
Air pollution is the single most important environmental hazard for public health, and especially nitrogen dioxide (NO2) plays a key role in air quality research. With the aim of improving the quality and reproducibility of measurements of the NO2 vertical distribution from MAX-DOAS instruments, the CINDI-2 campaign was held in Cabauw (NL) in September 2016. The measurement site was rural, but surrounded by several major pollution centers. Due to this spatial heterogeneity of emissions, as well as the meteorological conditions, high spatial and temporal variability in NO2 mixing ratios was observed. Air quality models used in the analysis of the measured data must have high spatial resolution in order to resolve this fine spatial structure. This remains a challenge even today, mostly due to the uncertainties and large spatial heterogeneity of emission data, and the need to parameterize small-scale processes. In this study, we use the state-of-the-art version 3.9 of the Weather Research and Forecasting Model with Chemistry (WRF-Chem) to simulate air pollutant concentrations over the Netherlands, to facilitate the analysis of the CINDI-2 NO2 measurements. The model setup contains three nested domains with horizontal resolutions of 15, 3, and 1 km. Anthropogenic emissions are taken from the TNO-MACC III inventory and, where available, from the Dutch Pollutant Release and Transfer Register (Emissieregistratie), at spatial resolutions of 7 and 1 km, respectively. We use the Common Reactive Intermediates gas-phase chemical mechanism (CRIv2-R5) with the MOSAIC aerosol module. The high spatial resolution of the model and emissions will allow us to resolve the strong spatial gradients in the NO2 concentrations measured during the CINDI-2 campaign, allowing for an unprecedented level of detail in the analysis of individual pollution sources.
NASA Astrophysics Data System (ADS)
Huang, Xiaomeng; Tang, Qiang; Tseng, Yuheng; Hu, Yong; Baker, Allison H.; Bryan, Frank O.; Dennis, John; Fu, Haohuan; Yang, Guangwen
2016-11-01
In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate the high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.
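The key property of a Chebyshev-type iteration is that, once bounds on the eigenvalues of the (preconditioned) operator are known, the loop contains no inner products, so a distributed implementation avoids the global reductions that limit the scalability of CG-type barotropic solvers. Below is a serial sketch of the classical algorithm on a dense stand-in matrix; it is my own illustration, not the CESM implementation.

```python
import numpy as np

def chebyshev_solve(A, b, lam_min, lam_max, x0=None, iters=80):
    """Chebyshev iteration for a symmetric positive definite matrix whose eigenvalues are
    assumed to lie in [lam_min, lam_max]. No inner products are needed inside the loop,
    so a parallel version avoids the global reductions required by CG-type solvers."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    theta = 0.5 * (lam_max + lam_min)     # center of the eigenvalue interval
    delta = 0.5 * (lam_max - lam_min)     # half-width of the interval
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x
    d = r / theta
    for _ in range(iters):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# Illustrative SPD system standing in for the preconditioned barotropic operator
rng = np.random.default_rng(0)
Q = rng.normal(size=(200, 200))
A = Q @ Q.T / 200 + np.eye(200)
b = rng.normal(size=200)
eigs = np.linalg.eigvalsh(A)
x = chebyshev_solve(A, b, eigs[0], eigs[-1])
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```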
A diagnostic model for studying daytime urban air quality trends
NASA Technical Reports Server (NTRS)
Brewer, D. A.; Remsberg, E. E.; Woodbury, G. E.
1981-01-01
A single cell Eulerian photochemical air quality simulation model was developed and validated for selected days of the 1976 St. Louis Regional Air Pollution Study (RAPS) data sets; parameterizations of variables in the model and validation studies using the model are discussed. Good agreement was obtained between measured and modeled concentrations of NO, CO, and NO2 for all days simulated. The maximum concentration of O3 was also predicted well. Predicted species concentrations were relatively insensitive to small variations in CO and NOx emissions and to the concentrations of species which are entrained as the mixed layer rises.
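At the core of such single-cell daytime photochemistry is the NO-NO2-O3 photostationary state; a minimal sketch with assumed, typical midday rate parameters (not the RAPS model's chemistry or inputs) is given below.

```python
# NO-NO2-O3 photostationary state, the core balance in simple daytime box models:
#   NO2 + hv -> NO + O(3P),  O + O2 + M -> O3        (photolysis frequency j)
#   NO + O3 -> NO2 + O2                               (rate constant k)
# At steady state: j*[NO2] = k*[NO]*[O3]  =>  [O3] = j*[NO2] / (k*[NO]).
# The j and k values below are assumed, typical midday magnitudes, not from the RAPS study.
j_no2 = 8.0e-3        # s-1
k_no_o3 = 1.8e-14     # cm3 molecule-1 s-1

def pss_ozone(no, no2):
    """Ozone number density (molecules cm-3) implied by the photostationary state."""
    return j_no2 * no2 / (k_no_o3 * no)

ppb = 2.46e10                                  # molecules cm-3 per ppb at ~1 atm, 298 K
print(pss_ozone(no=5 * ppb, no2=20 * ppb) / ppb, "ppb O3")
```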
NASA Technical Reports Server (NTRS)
Bey, Isabelle; Jacob, Daniel J.; Yantosca, Robert M.; Logan, Jennifer A.; Field, Brendan D.; Fiore, Arlene M.; Li, Qin-Bin; Liu, Hong-Yu; Mickley, Loretta J.; Schultz, Martin G.
2001-01-01
We present a first description and evaluation of GEOS-CHEM, a global three-dimensional (3-D) model of tropospheric chemistry driven by assimilated meteorological observations from the Goddard Earth Observing System (GEOS) of the NASA Data Assimilation Office (DAO). The model is applied to a 1-year simulation of tropospheric ozone-NOx-hydrocarbon chemistry for 1994, and is evaluated with observations both for 1994 and for other years. It reproduces usually to within 10 ppb the concentrations of ozone observed from the worldwide ozonesonde data network. It simulates correctly the seasonal phases and amplitudes of ozone concentrations for different regions and altitudes, but tends to underestimate the seasonal amplitude at northern midlatitudes. Observed concentrations of NO and peroxyacetylnitrate (PAN) observed in aircraft campaigns are generally reproduced to within a factor of 2 and often much better. Concentrations of HNO3 in the remote troposphere are overestimated typically by a factor of 2-3, a common problem in global models that may reflect a combination of insufficient precipitation scavenging and gas-aerosol partitioning not resolved by the model. The model yields an atmospheric lifetime of methylchloroform (proxy for global OH) of 5.1 years, as compared to a best estimate from observations of 5.5 plus or minus 0.8 years, and simulates H2O2 concentrations observed from aircraft with significant regional disagreements but no global bias. The OH concentrations are approximately 20% higher than in our previous global 3-D model which included an UV-absorbing aerosol. Concentrations of CO tend to be underestimated by the model, often by 10-30 ppb, which could reflect a combination of excessive OH (a 20% decrease in model OH could be accommodated by the methylchloroform constraint) and an underestimate of CO sources (particularly biogenic). The model underestimates observed acetone concentrations over the South Pacific in fall by a factor of 3; a missing source from the ocean may be implicated.
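The methylchloroform constraint mentioned above works because CH3CCl3 is removed almost entirely by OH, so its lifetime pins down the mean OH concentration through tau = 1/(k·[OH]). The sketch below inverts the model's 5.1-year lifetime using an assumed, approximate rate coefficient evaluated at a representative tropospheric temperature; the Arrhenius parameters are illustrative, not values from the paper.

```python
import math

# Rough inversion of the methylchloroform lifetime into a mean OH concentration.
# The Arrhenius parameters are an assumed, approximate representation of the
# CH3CCl3 + OH rate coefficient, evaluated at an assumed representative tropospheric
# temperature; they are not values from the paper.
A, Ea_over_R, T = 1.64e-12, 1520.0, 272.0           # cm3 molecule-1 s-1, K, K
k = A * math.exp(-Ea_over_R / T)
tau_years = 5.1                                     # model-derived lifetime quoted above
tau_seconds = tau_years * 3.15e7
oh = 1.0 / (k * tau_seconds)
print(f"implied mean OH ~ {oh:.2e} molecules cm-3")  # on the order of 1e6
```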
Modeling the total dust production of Enceladus from stochastic charge equilibrium and simulations
NASA Astrophysics Data System (ADS)
Meier, Patrick; Motschmann, Uwe; Schmidt, Jürgen; Spahn, Frank; Hill, Thomas W.; Dong, Yaxue; Jones, Geraint H.; Kriegel, Hendrik
2015-12-01
Negatively and positively charged nano-sized ice grains were detected in the Enceladus plume by the Cassini Plasma Spectrometer (CAPS). However, no data for uncharged grains, and thus for the total amount of dust, are available. In this paper we estimate this population of uncharged grains based on a model of stochastic charging in thermodynamic equilibrium and on the assumption of quasi-neutrality in the plasma-dust system. This estimation is improved upon by combining simulations of the dust component of the plume and simulations for the plasma environment into one self-consistent model. Calibration of this model with CAPS data provides a total dust production rate of about 12 kg s-1, including larger dust grains up to a few microns in size. We find that the fraction of charged grains dominates over that of the uncharged grains. Moreover, our model reproduces densities of both negatively and positively charged nanograins measured by Cassini CAPS. In Enceladus' plume ion densities up to ~10^4 cm-3 are required by the self-consistent model, resulting in an electron depletion of about 50% in the plasma, because electrons are attached to the negatively charged nanograins. These ion densities correspond to effective ionization rates of about 10^-7 s-1, which are about two orders of magnitude higher than expected.
NASA Astrophysics Data System (ADS)
Bonek, Mirosław; Śliwa, Agata; Mikuła, Jarosław
2016-12-01
Investigations include a Finite Element Method simulation model of remelting of PMHSS6-5-3 high-speed steel surface layer using the high power diode laser (HPDL). The Finite Element Method computations were performed using ANSYS software. The scope of the FEM simulation was the determination of temperature distribution during the laser alloying process at various process configurations regarding the laser beam power and the method of powder deposition, as pre-coated paste or a surface with machined grooves. The Finite Element Method simulation was performed on five different 3-dimensional models. The model assumed nonlinear changes of thermal conductivity, specific heat and density that depended on temperature. The heating process was realized as a heat flux corresponding to laser beam powers of 1.4, 1.7 and 2.1 kW. Latent heat effects are considered during solidification. The molten pool is composed of the same material as the substrate and there is no chemical reaction. The absorptivity of laser energy was dependent on the simulated materials properties and their surface condition. The Finite Element Method simulation allows specifying the heat affected zone and the temperature distribution in the sample as a function of time and thus allows the estimation of the structural changes taking place during the laser remelting process. The simulation was applied to determine the shape of the molten pool and the penetration depth of the remelted surface. The simulated penetration depth and molten pool profile match the experimental results well. The depth values obtained in the simulation are very close to the experimental data. Regarding the shape of the molten pool, small differences have been noted. The heat flux input considered in the simulation is only part of the heating mechanism; thus, the final shape of the solidified molten pool will depend on more variables.
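A much-reduced illustration of the laser heating problem described above is a 1-D explicit finite-difference conduction calculation with an imposed surface heat flux. The sketch below assumes constant material properties, a hypothetical absorptivity and spot area, and ignores latent heat and melt convection, so it only indicates how the temperature field responds to the absorbed flux; the study itself used 3-D ANSYS FEM with temperature-dependent properties.

```python
import numpy as np

# Hypothetical 1-D explicit finite-difference sketch of laser surface heating.
k, rho, cp = 24.0, 7800.0, 460.0            # W m-1 K-1, kg m-3, J kg-1 K-1 (constant here)
alpha = k / (rho * cp)                      # thermal diffusivity [m2 s-1]
absorptivity, power, spot_area = 0.35, 1.7e3, 1.2e-5   # assumed beam parameters
q_laser = absorptivity * power / spot_area  # absorbed surface heat flux [W m-2]

nx, depth = 200, 0.004                      # 4 mm slab, 200 nodes
dx = depth / nx
dt = 0.4 * dx * dx / alpha                  # satisfies the explicit stability limit
T = np.full(nx, 293.0)                      # initial temperature [K]
for _ in range(int(0.1 / dt)):              # 0.1 s beam interaction time
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + q_laser * dx / k        # surface node: imposed laser heat flux
    Tn[-1] = 293.0                          # far boundary held at ambient
    T = Tn
print(f"surface temperature after 0.1 s: {T[0]:.0f} K")
```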
Aquilina, Peter; Parr, William C.H.; Chamoli, Uphar; Wroe, Stephen; Clausen, Philip
2014-01-01
The most stable pattern of internal fixation for mandibular condyle fractures is an area of ongoing discussion. This study investigates the stability of three patterns of plate fixation using readily available, commercially pure titanium implants. Finite element models of a simulated mandibular condyle fracture were constructed. The completed models were heterogeneous in bone material properties, contained approximately 1.2 million elements and incorporated simulated jaw adducting musculature. Models were run assuming linear elasticity and isotropic material properties for bone. No human subjects were involved in this investigation. The stability of the simulated condylar fracture reduced with the different implant configurations, and the von Mises stresses of a 1.5-mm X-shaped plate, a 1.5-mm rectangular plate, and a 1.5-mm square plate (all Synthes; Synthes GmbH, Zuchwil, Switzerland) were compared. The 1.5-mm X plate was the most stable of the three 1.5-mm profile plate configurations examined and had comparable mechanical performance to a single 2.0-mm straight four-hole plate. This study does not support the use of rectangular or square plate patterns in the open reduction and internal fixation of mandibular condyle fractures. It does provide some support for the use of a 1.5-mm X plate to reduce condylar fractures in selected clinical cases. PMID:25136411
Application of WRF/Chem over East Asia: Part II. Model improvement and sensitivity simulations
NASA Astrophysics Data System (ADS)
Zhang, Yang; Zhang, Xin; Wang, Kai; Zhang, Qiang; Duan, Fengkui; He, Kebin
2016-01-01
To address the problems and limitations identified through a comprehensive evaluation in the Part I paper, several modifications are made in model inputs, treatments, and configurations, and sensitivity simulations with improved model inputs and treatments are performed in this Part II paper. The use of reinitialization of meteorological variables reduces the biases and increases the spatial correlations in simulated temperature at 2-m (T2), specific humidity at 2-m (Q2), wind speed at 10-m (WS10), and precipitation (Precip). The use of a revised surface drag parameterization further reduces the biases in simulated WS10. The adjustment of only the magnitudes of anthropogenic emissions in the surface layer does not help improve overall model performance, whereas the adjustment of both the magnitudes and vertical distributions of anthropogenic emissions shows moderate to large improvement in simulated surface concentrations and column mass abundances of species in terms of domain mean performance statistics, hourly and monthly mean concentrations, and vertical profiles of concentrations at individual sites. The revised and more advanced dust emission schemes can help improve PM predictions. Using revised upper boundary conditions for O3 significantly improves the column O3 abundances. Using a simple SOA formation module further improves the predictions of organic carbon and PM2.5. The sensitivity simulation that combines all of the above model improvements greatly improves the overall model performance. For example, the sensitivity simulation gives normalized mean biases (NMBs) of -6.1% to 23.8% for T2, 2.7% to 13.8% for Q2, 22.5% to 47.6% for WS10, and -9.1% to 15.6% for Precip, compared to -9.8% to 75.6% for T2, 0.4% to 23.4% for Q2, 66.5% to 101.0% for WS10, and 11.4% to 92.7% for Precip from the original simulation without those improvements. It also gives NMBs for surface predictions of -68.2% to -3.7% for SO2, -73.8% to -20.6% for NO2, -8.8% to 128.7% for O3, -61.4% to -26.5% for PM2.5, and -64.0% to 7.2% for PM10, compared to -84.2% to -44.5% for SO2, -88.1% to -44.0% for NO2, -11.0% to 160.3% for O3, -63.9% to -25.2% for PM2.5, and -68.9% to 33.3% for PM10 from the original simulation. The improved WRF/Chem is applied to estimate the impact of anthropogenic aerosols on regional climate and air quality in East Asia. Anthropogenic aerosols can increase cloud condensation nuclei, aerosol optical depth, cloud droplet number concentrations, and cloud optical depth. They can decrease surface net radiation, temperature at 2-m, wind speed at 10-m, planetary boundary layer height, and precipitation through various direct and indirect effects. These changes in turn lead to changes in chemical predictions in a variety of ways.
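For reference, the normalized mean bias quoted throughout the evaluation is typically computed as the summed model-observation difference divided by the summed observations; a minimal sketch with made-up paired values:

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs), reported here in percent."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * np.sum(model - obs) / np.sum(obs)

# hypothetical paired hourly values
obs   = [12.0, 18.5, 25.1, 30.2, 22.4]
model = [10.5, 20.1, 28.0, 27.9, 21.0]
print(f"NMB = {normalized_mean_bias(model, obs):.1f}%")
```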
Feaster, Toby D.; Conrads, Paul; Guimaraes, Wladmir B.; Sanders, Curtis L.; Bales, Jerad D.
2003-01-01
Time-series plots of dissolved-oxygen concentrations were determined for various simulated hydrologic and point-source loading conditions along a free-flowing section of the Catawba River from Lake Wylie Dam to the headwaters of Fishing Creek Reservoir in South Carolina. The U.S. Geological Survey one-dimensional dynamic-flow model, BRANCH, was used to simulate hydrodynamic data for the Branched Lagrangian Transport Model. Water-quality data were used to calibrate the Branched Lagrangian Transport Model and included concentrations of nutrients, chlorophyll a, and biochemical oxygen demand in water samples collected during two synoptic sampling surveys at 10 sites along the main stem of the Catawba River and at 3 tributaries; and continuous water temperature and dissolved-oxygen concentrations measured at 5 locations along the main stem of the Catawba River. A sensitivity analysis of the simulated dissolved-oxygen concentrations to model coefficients and data inputs indicated that the simulated dissolved-oxygen concentrations were most sensitive to water-temperature boundary data due to the effect of temperature on reaction kinetics and the solubility of dissolved oxygen. Of the model coefficients, the simulated dissolved-oxygen concentration was most sensitive to the biological oxidation rate of nitrite to nitrate. To demonstrate the utility of the Branched Lagrangian Transport Model for the Catawba River, the model was used to simulate several water-quality scenarios to evaluate the effect on the 24-hour mean dissolved-oxygen concentrations at selected sites for August 24, 1996, as simulated during the model calibration period of August 23-27, 1996. The first scenario included three loading conditions of the major effluent discharges along the main stem of the Catawba River: (1) current load (as sampled in August 1996); (2) no load (all point-source loads were removed from the main stem of the Catawba River; loads from the main tributaries were not removed); and (3) fully loaded (in accordance with South Carolina Department of Health and Environmental Control National Discharge Elimination System permits). Results indicate that the 24-hour mean and minimum dissolved-oxygen concentrations for August 24, 1996, changed from the no-load condition within a range of -0.33 to 0.02 milligram per liter and -0.48 to 0.00 milligram per liter, respectively. Fully permitted loading conditions changed the 24-hour mean and minimum dissolved-oxygen concentrations from -0.88 to 0.04 milligram per liter and -1.04 to 0.00 milligram per liter, respectively. A second scenario included the addition of a point-source discharge of 25 million gallons per day to the August 1996 calibration conditions. The discharge was added at S.C. Highway 5 or at a location near Culp Island (about 4 miles downstream from S.C. Highway 5) and had no significant effect on the daily mean and minimum dissolved-oxygen concentration. A third scenario evaluated the phosphorus loading into Fishing Creek Reservoir; four loading conditions of phosphorus into the Catawba River were simulated. The four conditions included fully permitted and actual loading conditions, removal of all point sources from the Catawba River, and removal of all point and nonpoint sources from Sugar Creek. Removing the point-source inputs on the Catawba River and the point and nonpoint sources in Sugar Creek reduced the organic phosphorus and orthophosphate loadings to Fishing Creek Reservoir by 78 and 85 percent, respectively.
Development of Standardized Lunar Regolith Simulant Materials
NASA Technical Reports Server (NTRS)
Carpenter, P.; Sibille, L.; Meeker, G.; Wilson, S.
2006-01-01
Lunar exploration requires scientific and engineering studies using standardized testing procedures that ultimately support flight certification of technologies and hardware. It is necessary to anticipate the range of source materials and environmental constraints that are expected on the Moon and Mars, and to evaluate in-situ resource utilization (ISRU) coupled with testing and development. We describe here the development of standardized lunar regolith simulant (SLRS) materials that are traceable inter-laboratory standards for testing and technology development. These SLRS materials must simulate the lunar regolith in terms of physical, chemical, and mineralogical properties. A summary of these issues is contained in the 2005 Workshop on Lunar Regolith Simulant Materials [1]. Lunar mare basalt simulants MLS-1 and JSC-1 were developed in the late 1980s. MLS-1 approximates an Apollo 11 high-Ti basalt, and was produced by milling of a holocrystalline, coarse-grained intrusive gabbro (Fig. 1). JSC-1 approximates an Apollo 14 basalt with a relatively low-Ti content, and was obtained from a glassy volcanic ash (Fig. 2). Supplies of MLS-1 and JSC-1 have been exhausted and these materials are no longer available. No highland anorthosite simulant was previously developed. Upcoming lunar polar missions thus require the identification, assessment, and development of both mare and highland simulants. A lunar regolith simulant is manufactured from terrestrial components for the purpose of simulating the physical and chemical properties of the lunar regolith. Significant challenges exist in the identification of appropriate terrestrial source materials. Lunar materials formed under comparatively reducing conditions in the absence of water, and were modified by meteorite impact events. Terrestrial materials formed under more oxidizing conditions with significantly greater access to water, and were modified by a wide range of weathering processes. The composition space of lunar materials can be modeled by mixing programs utilizing a low-Ti basalt, ilmenite, KREEP component, high-Ca anorthosite, and meteoritic components. This approach has been used for genetic studies of lunar samples via chemical and modal analysis. A reduced composition space may be appropriate for simulant development, but it is necessary to determine the controlling properties that affect the physical, chemical and mineralogical components of the simulant.
Comparison of different models for non-invasive FFR estimation
NASA Astrophysics Data System (ADS)
Mirramezani, Mehran; Shadden, Shawn
2017-11-01
Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
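A minimal sketch of the reduced-order algebraic idea mentioned above: stenosis pressure drop approximated by a viscous plus quadratic (expansion) loss term, with FFR taken as the distal-to-aortic pressure ratio at hyperemic flow. The coefficients and flow values are hypothetical, and the study's actual algebraic model may differ in form.

```python
def stenosis_pressure_drop(q, k_v, k_t):
    """Algebraic viscous + turbulent/expansion loss model: dP = k_v*q + k_t*q**2.
    q in mL/s, dP in mmHg; k_v and k_t lump geometry-dependent constants."""
    return k_v * q + k_t * q * q

def ffr(p_aortic, q_hyperemic, k_v, k_t):
    """FFR approximated as the distal-to-aortic pressure ratio at hyperemia."""
    dp = stenosis_pressure_drop(q_hyperemic, k_v, k_t)
    return (p_aortic - dp) / p_aortic

# hypothetical coefficients for a moderate stenosis
print(round(ffr(p_aortic=90.0, q_hyperemic=3.0, k_v=2.0, k_t=1.2), 2))
```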
Effect of climate data on simulated carbon and nitrogen balances for Europe
NASA Astrophysics Data System (ADS)
Blanke, Jan Hendrik; Lindeskog, Mats; Lindström, Johan; Lehsten, Veiko
2016-05-01
In this study, we systematically assess the spatial variability in carbon and nitrogen balance simulations related to the choice of global circulation models (GCMs), representative concentration pathways (RCPs), spatial resolutions, and the downscaling methods used, as calculated with LPJ-GUESS. We employed a complete factorial design and performed 24 simulations for Europe with different climate input data sets and different combinations of these four factors. Our results reveal that the variability in simulated output in Europe is moderate, with 35.6%-93.5% of the total variability being common among all combinations of factors. The spatial resolution is the most important factor among the examined factors, explaining 1.5%-10.7% of the total variability, followed by GCMs (0.3%-7.6%), RCPs (0%-6.3%), and downscaling methods (0.1%-4.6%). The higher-order interaction effect, which captures nonlinear relations between the factors and random effects, is pronounced and accounts for 1.6%-45.8% of the total variability. The most distinct hot spots of variability include the mountain ranges in North Scandinavia and the Alps, and the Iberian Peninsula. Based on our findings, we advise applying models such as LPJ-GUESS at a reasonably high spatial resolution that is supported by the model structure. There is no notable gain in simulations of ecosystem carbon and nitrogen stocks and fluxes from using regionally downscaled climate in preference to bias-corrected, bilinearly interpolated CMIP5 projections.
A big data approach to the development of mixed-effects models for seizure count data.
Tharayil, Joseph J; Chiang, Sharon; Moss, Robert; Stern, John M; Theodore, William H; Goldenholz, Daniel M
2017-05-01
Our objective was to develop a generalized linear mixed model for predicting seizure count that is useful in the design and analysis of clinical trials. This model also may benefit the design and interpretation of seizure-recording paradigms. Most existing seizure count models do not include children, and there is currently no consensus regarding the most suitable model that can be applied to children and adults. Therefore, an additional objective was to develop a model that accounts for both adult and pediatric epilepsy. Using data from SeizureTracker.com, a patient-reported seizure diary tool with >1.2 million recorded seizures across 8 years, we evaluated the appropriateness of Poisson, negative binomial, zero-inflated negative binomial, and modified negative binomial models for seizure count data based on minimization of the Bayesian information criterion. Generalized linear mixed-effects models were used to account for demographic and etiologic covariates and for autocorrelation structure. Holdout cross-validation was used to evaluate predictive accuracy in simulating seizure frequencies. For both adults and children, we found that a negative binomial model with autocorrelation over 1 day was optimal. Using holdout cross-validation, the proposed model was found to provide accurate simulation of seizure counts for patients with up to four seizures per day. The optimal model can be used to generate more realistic simulated patient data with very few input parameters. The availability of a parsimonious, realistic virtual patient model can be of great utility in simulations of phase II/III clinical trials, epilepsy monitoring units, outpatient biosensors, and mobile Health (mHealth) applications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
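A minimal sketch of the negative binomial idea, generated as a gamma-Poisson mixture (a patient-level rate drawn from a gamma distribution, daily counts drawn from a Poisson). The covariates and the 1-day autocorrelation structure of the actual mixed-effects model are not reproduced, and the parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_patient(days, mu, alpha):
    """Draw daily seizure counts from a negative binomial, built as a
    gamma-Poisson mixture: rate ~ Gamma(1/alpha, scale=alpha*mu), count ~ Poisson(rate).
    mu is the mean daily count; alpha is the dispersion (variance = mu + alpha*mu**2)."""
    shape = 1.0 / alpha
    rates = rng.gamma(shape, scale=alpha * mu, size=days)
    return rng.poisson(rates)

counts = simulate_patient(days=90, mu=0.8, alpha=1.5)
print(counts[:14], "mean:", counts.mean(), "var:", counts.var())
```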
Hoard, C.J.
2010-01-01
The U.S. Geological Survey is evaluating water availability and use within the Great Lakes Basin. This is a pilot effort to develop new techniques and methods to aid in the assessment of water availability. As part of the pilot program, a regional groundwater-flow model for the Lake Michigan Basin was developed using SEAWAT-2000. The regional model was used as a framework for assessing local-scale water availability through grid-refinement techniques. Two grid-refinement techniques, telescopic mesh refinement and local grid refinement, were used to illustrate the capability of the regional model to evaluate local-scale problems. An intermediate model was developed in central Michigan spanning an area of 454 square miles (mi2) using telescopic mesh refinement. Within the intermediate model, a smaller local model covering an area of 21.7 mi2 was developed and simulated using local grid refinement. Recharge was distributed in space and time using daily output from a modified Thornthwaite-Mather soil-water-balance method. The soil-water-balance method derived recharge estimates from temperature and precipitation data output from an atmosphere-ocean coupled general-circulation model. The particular atmosphere-ocean coupled general-circulation model used simulated climate change caused by high global greenhouse-gas emissions to the atmosphere. The surface-water network simulated in the regional model was refined and simulated using a streamflow-routing package for MODFLOW. The refined models were used to demonstrate streamflow depletion and potential climate change using five scenarios. The streamflow-depletion scenarios include (1) natural conditions (no pumping), (2) a pumping well near a stream, with the well screened in surficial glacial deposits, (3) a pumping well near a stream, with the well screened in deeper glacial deposits, and (4) a pumping well near a stream, with the well open to a deep bedrock aquifer. Results indicated that 59 and 50 percent of the water pumped originated from the stream for the shallow glacial and deep bedrock pumping scenarios, respectively. The difference in streamflow reduction between the shallow and deep pumping scenarios was compensated for in the deep well by deriving more water from regional sources. The climate-change scenario only simulated natural conditions from 1991 to 2044, so no pumping stress was simulated. Streamflows were calculated for the simulated period and indicated that recharge over the period generally increased from the start of the simulation until approximately 2017, and decreased from then to the end of the simulation. Streamflow was highly correlated with recharge, so that the lowest streamflows occurred in the later stress periods of the model when recharge was lowest.
NASA Astrophysics Data System (ADS)
Ahasan, M. N.; Alam, M. M.; Debsarma, S. K.
2015-02-01
A severe thunderstorm produced a tornado (F2 on the enhanced Fujita-Pearson scale), which affected the Brahmanbaria district of Bangladesh during 1100-1130 UTC of 22 March, 2013. The tornado killed 38 people, injured 388, and caused a huge loss of property. The total length travelled by the tornado was about 12-15 km and about 1728 households were affected. An attempt has been made to simulate this rare event using the Weather Research and Forecasting (WRF) model. The model was run in a single domain at 9 km resolution for a period of 24 hrs, starting at 0000 UTC on 22 March, 2013. The meteorological conditions that led to the formation of this tornado have been analyzed. The model-simulated meteorological conditions are compared with those of a `no severe thunderstorm observed day' on 22 March, 2012. For this purpose, the model was also run over the same domain at the same resolution for 24 hrs, starting at 0000 UTC on 22 March, 2012. The model-simulated meteorological parameters are consistent with each other, and all are in good agreement with the observation in terms of the region of occurrence of the tornado activity. The model has efficiently captured the common favourable synoptic conditions for the occurrence of severe tornadoes, though there are some spatial and temporal biases in the simulation. The wind speed is not in good agreement with the observation, as the model has shown a strongest wind of only 15-20 ms-1, against the estimated wind speed of about 55 ms-1. The spatial distribution as well as the intensity of rainfall are also in good agreement with the observation. The results of these analyses demonstrated the capability of the high-resolution WRF model with 3DVar Data Assimilation (DA) techniques in simulation of the tornado over Brahmanbaria, Bangladesh.
Modeling biotic uptake by periphyton and transient hyporrheic storage of nitrate in a natural stream
Kim, Brian K.A.; Jackman, Alan P.; Triska, Frank J.
1992-01-01
To a convection-dispersion hydrologic transport model we coupled a transient storage submodel (Bencala, 1984) and a biotic uptake submodel based on Michaelis-Menten kinetics (Kim et al., 1990). Our purpose was threefold: (1) to simulate nitrate retention in response to change in load in a third-order stream, (2) to differentiate biotic versus hydrologic factors in nitrate retention, and (3) to produce a research tool whose properties are consistent with laboratory and field observations. Hydrodynamic parameters were fitted from chloride concentration during a 20-day chloride-nitrate coinjection (Bencala, 1984), and biotic uptake kinetics were based on flume studies by Kim et al. (1990) and Triska et al. (1983). Nitrate concentration from the 20-day coinjection experiment served as a base for model validation. The complete transport-retention model reasonably predicted the observed nitrate concentration. However, simulations which lacked either the transient storage submodel or the biotic uptake submodel poorly predicted the observed nitrate concentration. Model simulations indicated that transient storage in channel and hyporrheic interstices dominated nitrate retention within the first 24 hours, whereas biotic uptake dominated thereafter. A sawtooth function for Vmax ranging from 0.10 to 0.17 μg NO3-N s−1 gAFDM−1 (grams ash free dry mass) slightly underpredicted nitrate retention in simulations of 2–7 days. This result was reasonable since uptake by other nitrate-demanding processes was not included. The model demonstrated how ecosystem retention is an interaction between physical and biotic processes and supports the validity of coupling separate hydrodynamic and reactive submodels to established solute transport models in biological studies of fluvial ecosystems.
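A stripped-down sketch of the coupling idea, assuming hypothetical parameter values: a channel and a storage-zone compartment exchange solute at a first-order rate (the transient storage component), while Michaelis-Menten uptake acts in the storage zone. Advection and dispersion, which the full transport model includes, are omitted here.

```python
# Hypothetical parameters for a single reach (advection/dispersion omitted).
alpha   = 1.0e-4   # channel <-> storage exchange coefficient [s-1]
a_ratio = 2.0      # channel-to-storage cross-section ratio A/A_s
vmax    = 0.15     # maximum uptake rate [ug NO3-N s-1 gAFDM-1]
km      = 100.0    # half-saturation constant [ug N L-1]
biomass = 0.2      # periphyton biomass per storage-zone volume [gAFDM L-1]

dt, t_end = 10.0, 24 * 3600.0
c, cs = 300.0, 300.0               # nitrate in channel and storage zone [ug N L-1]
for _ in range(int(t_end / dt)):
    uptake = vmax * biomass * cs / (km + cs)       # Michaelis-Menten kinetics
    dc  = alpha * (cs - c)                         # exchange seen by the channel
    dcs = alpha * a_ratio * (c - cs) - uptake      # exchange and uptake in storage
    c, cs = c + dt * dc, cs + dt * dcs
print(f"after 24 h: channel {c:.1f}, storage {cs:.1f} ug N L-1")
```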
A cloud model simulation of space shuttle exhaust clouds in different atmospheric conditions
NASA Technical Reports Server (NTRS)
Chen, C.; Zak, J. A.
1989-01-01
A three-dimensional cloud model was used to characterize the dominant influence of the environment on the Space Shuttle exhaust cloud. The model was modified to accept the actual heat and moisture from rocket exhausts and deluge water as initial conditions. An upper-air sounding determined the ambient atmosphere in which the cloud could grow. The model was validated by comparing simulated clouds with observed clouds from four actual Shuttle launches. The model successfully produced clouds with dimensions, rise, decay, liquid water contents and vertical motion fields very similar to observed clouds whose dimensions were calculated from 16 mm film frames. Once validated, the model was used in a number of different atmospheric conditions ranging from very unstable to very stable. In moist, unstable atmospheres simulated clouds rose to about 3.5 km in the first 4 to 8 minutes then decayed. Liquid water contents ranged from 0.3 to 1.0 g kg-1 mixing ratios and vertical motions were from 2 to 10 ms-1. An inversion served both to reduce entrainment (and erosion) at the top and to prevent continued cloud rise. Even in the most unstable atmospheres, the ground cloud did not rise beyond 4 km and in stable atmospheres with strong low level inversions the cloud could be trapped below 500 m. Wind shear strongly affected the appearance of both the ground cloud and vertical column cloud. The ambient low-level atmospheric moisture governed the amount of cloud water in model clouds. Some dry atmospheres produced little or no cloud water. One case of a simulated TITAN rocket explosion is also discussed.
Jirapinyo, Pichamol; Abidi, Wasif M; Aihara, Hiroyuki; Zaki, Theodore; Tsay, Cynthia; Imaeda, Avlin B; Thompson, Christopher C
2017-10-01
Preclinical simulator training has the potential to decrease endoscopic procedure time and patient discomfort. This study aims to characterize the learning curve of endoscopic novices in a part-task simulator and propose a threshold score for advancement to initial clinical cases. Twenty novices with no prior endoscopic experience underwent repeated endoscopic simulator sessions using the part-task simulator. Simulator scores were collected; their inverse was averaged and fit to an exponential curve. The incremental improvement after each session was calculated. Plateau was defined as the session after which incremental improvement in the simulator score model was less than 5%. Additionally, all participants filled out questionnaires regarding simulator experience after sessions 1, 5, 10, 15, and 20. A visual analog scale and NASA task load index were used to assess levels of comfort and demand. Twenty novices underwent 400 simulator sessions. Mean simulator scores at sessions 1, 5, 10, 15, and 20 were 78.5 ± 5.95, 176.5 ± 17.7, 275.55 ± 23.56, 347 ± 26.49, and 441.11 ± 38.14. The best fit exponential model was [time/score] = 26.1 × [session #]^(-0.615); r^2 = 0.99. This corresponded to an incremental improvement in score of 35% after the first session, 22% after the second, 16% after the third and so on. Incremental improvement dropped below 5% after the 12th session, corresponding to a predicted score of 265. Simulator training was related to higher comfort maneuvering an endoscope and increased readiness for supervised clinical endoscopy, both plateauing between sessions 10 and 15. Mental demand, physical demand, and frustration levels decreased with increased simulator training. Preclinical training using an endoscopic part-task simulator appears to increase comfort level and decrease mental and physical demand associated with endoscopy. Based on a rigorous model, we recommend that novices complete a minimum of 12 training sessions and obtain a simulator score of at least 265 to be best prepared for clinical endoscopy.
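The plateau criterion can be reproduced directly from the fitted power law: the incremental improvement from session n to n+1 is 1 - t(n+1)/t(n), and the first session for which this falls below 5% is the 12th, matching the recommendation above.

```python
def time_per_score(session, a=26.1, b=-0.615):
    """Fitted power-law learning curve from the study: [time/score] = a * session**b."""
    return a * session ** b

def plateau_session(threshold=0.05):
    """First session n for which the improvement from session n to n+1 is below the
    threshold (the study's definition: plateau = session after which improvement < 5%)."""
    n = 1
    while 1.0 - time_per_score(n + 1) / time_per_score(n) >= threshold:
        n += 1
    return n

print(plateau_session())   # -> 12
```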
NASA Astrophysics Data System (ADS)
Li, Y.; Kinzelbach, W.; Zhou, J.; Cheng, G. D.; Li, X.
2012-05-01
The hydrologic model HYDRUS-1D and the crop growth model WOFOST are coupled to efficiently manage water resources in agriculture and improve the prediction of crop production. The results of the coupled model are validated by experimental studies of irrigated maize conducted in the middle reaches of northwest China's Heihe River, a semi-arid to arid region. Good agreement is achieved between the simulated evapotranspiration, soil moisture and crop production and their respective field measurements made under current maize irrigation and fertilization. Based on the calibrated model, the scenario analysis reveals that the optimal amount of irrigation is 500-600 mm in this region. However, for regions without detailed observation, the results of the numerical simulation can be unreliable for irrigation decision making owing to the shortage of calibrated model boundary conditions and parameters. Therefore, we develop a method that combines model ensemble simulations with uncertainty/sensitivity analysis to estimate the probability of crop production outcomes. In our studies, the uncertainty analysis is used to reveal the risk of facing a loss of crop production as irrigation decreases. The global sensitivity analysis is used to test the coupled model and further quantitatively analyse the impact of the uncertainty of coupled model parameters and environmental scenarios on crop production. This method can be used for estimation in regions with no or reduced data availability.
NASA Technical Reports Server (NTRS)
Sytkowski, A. J.; Davis, K. L.
2001-01-01
Prolonged exposure of humans and experimental animals to the altered gravitational conditions of space flight has adverse effects on the lymphoid and erythroid hematopoietic systems. Although some information is available regarding the cellular and molecular changes in lymphocytes exposed to microgravity, little is known about the erythroid cellular changes that may underlie the reduction in erythropoiesis and resultant anemia. We now report a reduction in erythroid growth and a profound inhibition of erythropoietin (Epo)-induced differentiation in a ground-based simulated microgravity model system. Rauscher murine erythroleukemia cells were grown either in tissue culture vessels at 1 x g or in the simulated microgravity environment of the NASA-designed rotating wall vessel (RWV) bioreactor. Logarithmic growth was observed under both conditions; however, the doubling time in simulated microgravity was only one-half of that seen at 1 x g. No difference in apoptosis was detected. Induction with Epo at the initiation of the culture resulted in differentiation of approximately 25% of the cells at 1 x g, consistent with our previous observations. In contrast, induction with Epo at the initiation of simulated microgravity resulted in only one-half of this degree of differentiation. Significantly, the growth of cells in simulated microgravity for 24 h prior to Epo induction inhibited the differentiation almost completely. The results suggest that the NASA RWV bioreactor may serve as a suitable ground-based microgravity simulator to model the cellular and molecular changes in erythroid cells observed in true microgravity.
ERIC Educational Resources Information Center
Anderson, G. Ernest, Jr.
The mission of the simulation team of the Model Elementary Teacher Education Project, 1968-71, was to develop simulation tools and conduct appropriate studies of the anticipated operation of that project. The team focused on the experiences of individual students and on the resources necessary for these experiences to be reasonable. This report…
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin I.; Saether, Erik; Phillips, Dawn R.; Glaessgen, Edward H.
2006-01-01
A traction-displacement relationship that may be embedded into a cohesive zone model for microscale problems of intergranular fracture is extracted from atomistic molecular-dynamics simulations. A molecular-dynamics model for crack propagation under steady-state conditions is developed to analyze intergranular fracture along a flat Σ99 [1 1 0] symmetric tilt grain boundary in aluminum. Under hydrostatic tensile load, the simulation reveals asymmetric crack propagation in the two opposite directions along the grain boundary. In one direction, the crack propagates in a brittle manner by cleavage with very little or no dislocation emission, and in the other direction, the propagation is ductile through the mechanism of deformation twinning. This behavior is consistent with the Rice criterion for cleavage vs. dislocation blunting transition at the crack tip. The preference for twinning to dislocation slip is in agreement with the predictions of the Tadmor and Hai criterion. A comparison with finite element calculations shows that while the stress field around the brittle crack tip follows the expected elastic solution for the given boundary conditions of the model, the stress field around the twinning crack tip has a strong plastic contribution. Through the definition of a Cohesive-Zone-Volume-Element - an atomistic analog to a continuum cohesive zone model element - the results from the molecular-dynamics simulation are recast to obtain an average continuum traction-displacement relationship to represent cohesive zone interaction along a characteristic length of the grain boundary interface for the cases of ductile and brittle decohesion. Keywords: Crack-tip plasticity; Cohesive zone model; Grain boundary decohesion; Intergranular fracture; Molecular-dynamics simulation
Lee, James S; Franc, Jeffrey M
2015-08-01
A high influx of patients during a mass-casualty incident (MCI) may disrupt patient flow in an already overcrowded emergency department (ED) that is functioning beyond its operating capacity. This pilot study examined the impact of a two-step ED triage model using Simple Triage and Rapid Treatment (START) for pre-triage, followed by triage with the Canadian Triage and Acuity Scale (CTAS), on patient flow during a MCI simulation exercise. Hypothesis/Problem It was hypothesized that there would be no difference in time intervals nor patient volumes at each patient-flow milestone. Physicians and nurses participated in a computer-based tabletop disaster simulation exercise. Physicians were randomized into the intervention group using START, then CTAS, or the control group using START alone. Patient-flow milestones including time intervals and patient volumes from ED arrival to triage, ED arrival to bed assignment, ED arrival to physician assessment, and ED arrival to disposition decision were compared. Triage accuracy was compared for secondary purposes. There were no significant differences in the time interval from ED arrival to triage (mean difference 108 seconds; 95% CI, -353 to 596 seconds; P=1.0), ED arrival to bed assignment (mean difference 362 seconds; 95% CI, -1,269 to 545 seconds; P=1.0), ED arrival to physician assessment (mean difference 31 seconds; 95% CI, -1,104 to 348 seconds; P=0.92), and ED arrival to disposition decision (mean difference 175 seconds; 95% CI, -1,650 to 1,300 seconds; P=1.0) between the two groups. There were no significant differences in the volume of patients to be triaged (32% vs 34%; 95% CI for the difference -16% to 21%; P=1.0), assigned a bed (16% vs 21%; 95% CI for the difference -11% to 20%; P=1.0), assessed by a physician (20% vs 22%; 95% CI for the difference -14% to 19%; P=1.0), and with a disposition decision (20% vs 9%; 95% CI for the difference -25% to 4%; P=.34) between the two groups. The accuracy of triage was similar in both groups (57% vs 70%; 95% CI for the difference -15% to 41%; P=.46). Experienced triage nurses were able to apply CTAS effectively during a MCI simulation exercise. A two-step ED triage model using START, then CTAS, had similar patient flow and triage accuracy when compared to START alone.
Potential reductions in ambient NO2 concentrations from meeting diesel vehicle emissions standards
NASA Astrophysics Data System (ADS)
von Schneidemesser, Erika; Kuik, Friderike; Mar, Kathleen A.; Butler, Tim
2017-11-01
Exceedances of the concentration limit value for ambient nitrogen dioxide (NO2) at roadside sites are an issue in many cities throughout Europe. This is linked to the emissions of light duty diesel vehicles which have on-road emissions that are far greater than the regulatory standards. These exceedances have substantial implications for human health and economic loss. This study explores the possible gains in ambient air quality if light duty diesel vehicles were able to meet the regulatory standards (including both emissions standards from Europe and the United States). We use two independent methods: a measurement-based and a model-based method. The city of Berlin is used as a case study. The measurement-based method used data from 16 monitoring stations throughout the city of Berlin to estimate annual average reductions in roadside NO2 of 9.0 to 23 µg m-3 and in urban background NO2 concentrations of 1.2 to 2.7 µg m-3. These ranges account for differences in fleet composition assumptions, and the stringency of the regulatory standard. The model simulations showed reductions in urban background NO2 of 2.0 µg m-3, and at the scale of the greater Berlin area of 1.6 to 2.0 µg m-3 depending on the setup of the simulation and resolution of the model. Similar results were found for other European cities. The similarities in results using the measurement- and model-based methods support our ability to draw robust conclusions that are not dependent on the assumptions behind either methodology. The results show the significant potential for NO2 reductions if regulatory standards for light duty diesel vehicles were to be met under real-world operating conditions. Such reductions could help improve air quality by reducing NO2 exceedances in urban areas, but also have broader implications for improvements in human health and other benefits.
[Application of spatially explicit landscape model in soil loss study in Huzhong area].
Xu, Chonggang; Hu, Yuanman; Chang, Yu; Li, Xiuzhen; Bu, Renchang; He, Hongshi; Leng, Wenfang
2004-10-01
The Universal Soil Loss Equation (USLE) has been widely used to estimate the average annual soil loss. In most of the previous work on soil loss evaluation on forestland, the cover management factor was calculated from a static forest landscape. The advent of spatially explicit forest landscape models in the last decade, which explicitly simulate forest succession dynamics under natural and anthropogenic disturbances (fire, wind, harvest and so on) on heterogeneous landscapes, makes it possible to take into consideration the change of forest cover, and to dynamically simulate the soil loss in different years (e.g., 10 and 20 years after the current year). In this study, we linked a spatially explicit landscape model (LANDIS) with USLE to simulate the soil loss dynamics under two scenarios: fire and no harvest, fire and harvest. We also simulated the soil loss with no fire and no harvest as a control. The results showed that soil loss varied periodically with simulation year, and the amplitude of change was the lowest under the control scenario and the highest under the fire and no harvest scenario. The effect of harvest on soil loss could not be easily identified on the map; however, the cumulative effect of harvest on soil loss was larger than that of fire. Decreasing the harvest area and the percent of bare soil increased by harvest could significantly reduce soil loss, but had no significant effects on the dynamics of soil loss. Although harvest increased the annual soil loss, it tended to decrease the variability of soil loss between different simulation years.
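For reference, the USLE computes average annual soil loss as the product A = R x K x LS x C x P; in the coupling described above, the cover management factor C is the term updated from the simulated forest landscape. A small sketch with hypothetical factor values:

```python
def usle_soil_loss(r, k, ls, c, p):
    """Universal Soil Loss Equation: A = R * K * LS * C * P
    A  - average annual soil loss [t ha-1 yr-1]
    R  - rainfall erosivity, K - soil erodibility, LS - slope length/steepness,
    C  - cover management (here updated from the simulated forest cover), P - support practice."""
    return r * k * ls * c * p

# hypothetical cell values; only C changes between simulation years
for year, c_factor in [(0, 0.004), (10, 0.012), (20, 0.006)]:
    a = usle_soil_loss(r=1500.0, k=0.025, ls=3.2, c=c_factor, p=1.0)
    print(f"year {year:2d}: C = {c_factor:.3f}, soil loss = {a:.2f} t ha-1 yr-1")
```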
Comparison of Hall Thruster Plume Expansion Model with Experimental Data
2006-05-23
The plume model that is the focus of this study is a hybrid particle-in-cell (PIC) model that tracks particles along an unstructured tetrahedral mesh. Model predictions are compared with measurements of the ion current density profile, ion energy distributions, and ion species fraction distributions obtained using a nude Faraday probe and related plasma diagnostics.
NASA Astrophysics Data System (ADS)
Wang, An; Fallah-Shorshani, Masoud; Xu, Junshi; Hatzopoulou, Marianne
2016-10-01
Near-road concentrations of nitrogen dioxide (NO2), a known marker of traffic-related air pollution, were simulated along a busy urban corridor in Montreal, Quebec using a combination of microscopic traffic simulation, instantaneous emission modeling, and air pollution dispersion. In order to calibrate and validate the model, a data collection campaign was designed. For this purpose, measurements of NO2 were conducted mid-block along four segments of the corridor throughout a four-week campaign conducted between March and April 2015. The four segments were chosen to be consecutive yet to exhibit variability in road configuration and built environment characteristics. Roadside NO2 measurements were also paired with on-site and fixed-station meteorological data. In addition, traffic volumes, composition, and routing decisions were collected using video cameras located at upstream and downstream intersections. Dispersion of simulated emissions was conducted for eight time slots and under a range of meteorological conditions using three different models with vastly different dispersion algorithms (OSPM, CALINE 4, and SIRANE). The three models exhibited poor correlation with near-road NO2 concentrations and were better able to simulate average concentrations occurring along the roadways than the range of concentrations measured under diverse meteorological and traffic conditions. As hypothesized, the SIRANE model, which can handle a street canyon configuration, was the most sensitive to the built environment, especially to the presence of tall buildings around the road. In contrast, CALINE exhibited the lowest sensitivity to the built environment.
NOx Emission Reduction and its Effects on Ozone during the 2008 Olympic Games
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qing; Wang, Yuhang; Zhao, Chun
2011-07-15
We applied a daily-assimilated inversion method to estimate NOx (NO+NO2) emissions for June-September 2007 and 2008 on the basis of the Aura Ozone Monitoring Instrument (OMI) observations of nitrogen dioxide (NO2) and model simulations using the Regional chEmistry and trAnsport Model (REAM). Over urban Beijing, rural Beijing, and the Huabei Plain, OMI column NO2 reductions are approximately 45%, 33%, and 14%, respectively, while the corresponding anthropogenic NOx emission reductions are only 28%, 24%, and 6%, during the full emission control period (July 20 – Sep 20, 2008). The emission reduction began in early July and was in full force by July 20, corresponding to the scheduled implementation of emission controls over Beijing. The emissions did not appear to recover after the emission control period. Meteorological change from summer 2007 to 2008 is the main factor contributing to the column NO2 decreases not accounted for by the emission reduction. Model simulations suggest that the effect of emission reduction on ozone concentrations over Beijing is relatively minor using a standard VOC emission inventory in China. With an adjustment of the model emissions to reflect in situ observations of VOCs in Beijing, the model simulation suggests a larger effect of the emission reduction.
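The daily-assimilated inversion itself is not described in enough detail here to reproduce, but a common, simpler top-down adjustment scales prior emissions by the ratio of observed to modeled NO2 columns; the sketch below illustrates that idea with hypothetical values and an assumed sensitivity exponent.

```python
def scale_emissions(prior_emission, observed_column, modeled_column, beta=1.0):
    """Simple mass-balance style update: E_post = E_prior * (Omega_obs / Omega_model)**beta.
    beta (local sensitivity of column NO2 to NOx emissions) is often taken near 1
    for short-lived NOx; its value here is an assumption."""
    return prior_emission * (observed_column / modeled_column) ** beta

# hypothetical grid-cell values (columns in 10^15 molecules cm-2, emissions in mol km-2 h-1)
print(scale_emissions(prior_emission=120.0, observed_column=6.5, modeled_column=9.0))
```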
NASA Astrophysics Data System (ADS)
Werner, Bodo; Stutz, Jochen; Spolaor, Max; Scalone, Lisa; Raecke, Rasmus; Festa, James; Fedele Colosimo, Santo; Cheung, Ross; Tsai, Catalina; Hossaini, Ryan; Chipperfield, Martyn P.; Taverna, Giorgio S.; Feng, Wuhu; Elkins, James W.; Fahey, David W.; Gao, Ru-Shan; Hintsa, Erik J.; Thornberry, Troy D.; Moore, Free Lee; Navarro, Maria A.; Atlas, Elliot; Daube, Bruce C.; Pittman, Jasna; Wofsy, Steve; Pfeilsticker, Klaus
2017-01-01
We report measurements of CH4 (measured in situ by the Harvard University Picarro Cavity Ringdown Spectrometer (HUPCRS) and NOAA Unmanned Aircraft System Chromatograph for Atmospheric Trace Species (UCATS) instruments), O3 (measured in situ by the NOAA dual-beam ultraviolet (UV) photometer), NO2, BrO (remotely detected by spectroscopic UV-visible (UV-vis) limb observations; see the companion paper of Stutz et al., 2016), and of some key brominated source gases in whole-air samples of the Global Hawk Whole Air Sampler (GWAS) instrument within the subtropical lowermost stratosphere (LS) and the tropical upper troposphere (UT) and tropopause layer (TTL). The measurements were performed within the framework of the NASA-ATTREX (National Aeronautics and Space Administration - Airborne Tropical Tropopause Experiment) project from aboard the Global Hawk (GH) during six deployments over the eastern Pacific in early 2013. These measurements are compared with TOMCAT/SLIMCAT (Toulouse Off-line Model of Chemistry And Transport/Single Layer Isentropic Model of Chemistry And Transport) 3-D model simulations, aiming at improvements of our understanding of the bromine budget and photochemistry in the LS, UT, and TTL. Changes in local O3 (and NO2 and BrO) due to transport processes are separated from photochemical processes in intercomparisons of measured and modeled CH4 and O3. After excellent agreement is achieved among measured and simulated CH4 and O3, measured and modeled [NO2] are found to closely agree to within ≤ 15 ppt in the TTL (which is the detection limit) and within a typical range of 70 to 170 ppt in the subtropical LS during the daytime. Measured [BrO] ranges between 3 and 9 ppt in the subtropical LS. In the TTL, [BrO] reaches 0.5 ± 0.5 ppt at the bottom (150 hPa/355 K/14 km) and up to about 5 ppt at the top (70 hPa/425 K/18.5 km; see Fueglistaler et al., 2009 for the definition of the TTL used), in overall good agreement with the model simulations. Depending on the photochemical regime, the TOMCAT/SLIMCAT simulations tend to slightly underpredict measured BrO for large BrO concentrations, i.e., in the upper TTL and LS. The measured BrO and modeled BrO / Br_y^inorg ratio is further used to calculate inorganic bromine, Br_y^inorg. For the TTL (i.e., when [CH4] ≥ 1790 ppb), [Br_y^inorg] is found to increase from a mean of 2.63 ± 1.04 ppt for potential temperatures (θ) in the range of 350-360 K to 5.11 ± 1.57 ppt for θ = 390-400 K, whereas in the subtropical LS (i.e., when [CH4] ≤ 1790 ppb), it reaches 7.66 ± 2.95 ppt for θ in the range of 390-400 K. Finally, for the eastern Pacific (170-90° W), the TOMCAT/SLIMCAT simulations indicate a net loss of ozone of -0.3 ppbv day-1 at the base of the TTL (θ = 355 K) and a net production of +1.8 ppbv day-1 in the upper part (θ = 383 K).
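The inorganic bromine estimate follows directly from the measured BrO and the modeled partitioning: [Br_y^inorg] = [BrO]_measured / (BrO / Br_y^inorg)_model. A one-line sketch with hypothetical values:

```python
def inorganic_bromine(bro_measured_ppt, bro_to_bry_model):
    """Estimate total inorganic bromine from measured BrO and a modeled BrO/Br_y^inorg ratio:
    [Br_y^inorg] = [BrO]_measured / (BrO / Br_y^inorg)_model."""
    return bro_measured_ppt / bro_to_bry_model

# hypothetical values for one altitude bin in the subtropical LS
print(round(inorganic_bromine(bro_measured_ppt=5.0, bro_to_bry_model=0.65), 2))  # ppt
```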
Stochastic dynamics for reinfection by transmitted diseases
NASA Astrophysics Data System (ADS)
Barros, Alessandro S.; Pinho, Suani T. R.
2017-06-01
The use of stochastic models to study the dynamics of infectious diseases is an important tool to understand the epidemiological process. For several directly transmitted diseases, reinfection is a relevant process, which can be expressed by endogenous reactivation of the pathogen or by exogenous reinfection due to direct contact with an infected individual (with a smaller reinfection rate σβ than the infection rate β). In this paper, we examine the stochastic susceptible, infected, recovered, infected (SIRI) model, simulating the endogenous reactivation by a spontaneous reaction and the exogenous reinfection by a catalytic reaction. Analyzing the mean-field approximations of a site and of pairs of sites, and Monte Carlo (MC) simulations for the particular case of exogenous reinfection, we obtained continuous phase transitions involving endemic, epidemic, and no-transmission phases for the simple approach; the pair approach better describes the phase transition from the endemic phase (susceptible, infected, susceptible (SIS)-like model) to the epidemic phase (susceptible, infected, and removed or recovered (SIR)-like model) when compared with the MC results; the reinfection increases the peaks of outbreaks until the system reaches the endemic phase. For the particular case of endogenous reactivation, the pair approach leads to a continuous phase transition from the endemic phase (SIS-like model) to the no-transmission phase. Finally, there is no phase transition when both effects are taken into account. We hope the results of this study can be generalized for the susceptible, exposed, infected, and removed or recovered (SEIRIE) model, which includes the exposed state (infected but not infectious) and describes transmitted diseases such as tuberculosis more realistically. In future work, we also intend to investigate the effect of network topology on phase transitions when the SIRI model describes both transmitted diseases (σ < 1) and social contagions (σ > 1).
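A minimal Monte Carlo sketch of the SIRI dynamics described above, using a discrete-time, well-mixed approximation rather than the lattice/pair formulations of the paper; per-step transition probabilities are derived from the rates beta (infection), gamma (recovery), sigma*beta (exogenous reinfection), and rho (endogenous reactivation), with all parameter values hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def siri_step(s, i, r, beta, gamma, sigma, rho, n, dt):
    """One discrete-time Monte Carlo step of a well-mixed SIRI model:
    S -> I at rate beta*I/N, I -> R at rate gamma,
    R -> I by exogenous reinfection (sigma*beta*I/N) or endogenous reactivation (rho)."""
    p_inf   = 1.0 - np.exp(-beta * i / n * dt)
    p_rec   = 1.0 - np.exp(-gamma * dt)
    p_reinf = 1.0 - np.exp(-(sigma * beta * i / n + rho) * dt)
    new_inf   = rng.binomial(s, p_inf)
    new_rec   = rng.binomial(i, p_rec)
    new_reinf = rng.binomial(r, p_reinf)
    return s - new_inf, i + new_inf + new_reinf - new_rec, r + new_rec - new_reinf

n, s, i, r = 10_000, 9_990, 10, 0
for day in range(200):
    s, i, r = siri_step(s, i, r, beta=0.3, gamma=0.1, sigma=0.2, rho=1e-4, n=n, dt=1.0)
print("final state:", s, i, r)
```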
A simple dynamic subgrid-scale model for LES of particle-laden turbulence
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz
2017-04-01
In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
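A minimal 1-D illustration of the elliptic differential filter underlying the model: the filtered field solves (1 - c*width^2 * d2/dx2) u_bar = u, and the subgrid-scale velocity is estimated as u - u_bar. Here the equation is solved spectrally on a periodic domain, and the 1/24 coefficient and the fixed filter width are assumptions rather than the dynamically determined parameter of the actual model.

```python
import numpy as np

def elliptic_filter(u, dx, width):
    """Apply a discrete elliptic (differential) filter to a 1-D periodic signal:
    (1 - (width**2 / 24) d2/dx2) u_bar = u, solved in Fourier space.
    The subgrid-scale velocity is then estimated as u' = u - u_bar.
    The 1/24 coefficient (second-moment matching of a box filter) is one common choice."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    u_hat = np.fft.fft(u)
    u_bar = np.fft.ifft(u_hat / (1.0 + (width ** 2 / 24.0) * k ** 2)).real
    return u_bar, u - u_bar

# toy signal: a resolved low-wavenumber mode plus small-scale fluctuations
n, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(x) + 0.1 * np.sin(25.0 * x)
u_bar, u_prime = elliptic_filter(u, dx=L / n, width=4 * L / n)
print("max |u'|:", np.max(np.abs(u_prime)))
```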
NASA Astrophysics Data System (ADS)
Moursy, Aly; Sorour, Mohamed T.; Moustafa, Medhat; Elbarqi, Walid; Fayd, Mai; Elreedy, Ahmed
2018-05-01
This study concerns the upgrading of a real domestic wastewater treatment plant (WWTP) supported by simulation. The main aims of this work are to: (1) decide between two technologies to improve WWTP capacity and nitrogen removal efficiency, namely a membrane bioreactor (MBR) and integrated fixed-film activated sludge (IFAS); and (2) perform a cost estimation analysis for the two proposed solutions. The model used was calibrated based on data from the existing WWTP, namely the Eastern plant located in Alexandria, Egypt. The Activated Sludge Model No. 1 (ASM1) was used in the model analysis within the GPS-X 7 software. Steady-state analysis revealed that both techniques achieved high performance, corresponding to high compliance with Egyptian standards; however, MBR performed better. Nonetheless, the two systems showed poor nitrogen removal efficiency under the current situation, which reveals that the plant needs a modification to add an anaerobic treatment unit before the aerobic zone.
User's manual for the Simulated Life Analysis of Vehicle Elements (SLAVE) model
NASA Technical Reports Server (NTRS)
Paul, D. D., Jr.
1972-01-01
The simulated life analysis of vehicle elements model was designed to perform statistical simulation studies for any constant loss rate. The outputs of the model consist of the total number of stages required, stages successfully completing their lifetime, and average stage flight life. This report contains a complete description of the model. Users' instructions and interpretation of input and output data are presented such that a user with little or no prior programming knowledge can successfully implement the program.
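The report itself documents the program's inputs and outputs; purely as an illustration of what a constant-loss-rate stage lifetime simulation can look like, the sketch below flies a required number of missions, loses each stage with a fixed per-flight probability, retires survivors at a design life, and reports stages used, stages completing their lifetime, and average stage flight life. The structure and numbers are hypothetical and are not the SLAVE algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_fleet(flights_required, loss_rate, design_life):
    """Monte Carlo sketch of a constant-loss-rate stage lifetime study."""
    stages_used, stages_retired, flights_per_stage = 0, 0, []
    flights_flown = 0
    while flights_flown < flights_required:
        stages_used += 1
        life, survived = 0, True
        for _ in range(design_life):
            if flights_flown >= flights_required:
                break
            flights_flown += 1
            life += 1
            if rng.random() < loss_rate:    # constant per-flight loss probability
                survived = False
                break
        if survived and life == design_life:
            stages_retired += 1             # stage successfully completed its lifetime
        flights_per_stage.append(life)
    return stages_used, stages_retired, float(np.mean(flights_per_stage))

print(simulate_fleet(flights_required=500, loss_rate=0.02, design_life=20))
```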
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%,-26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
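For reference, the TEW estimate mentioned above interpolates scatter under the photopeak from two narrow flanking windows and subtracts it; a small sketch for a single projection pixel, with hypothetical counts and window widths:

```python
def tew_primary_counts(c_peak, c_lower, c_upper, w_peak, w_lower, w_upper):
    """Triple-energy-window (TEW) scatter correction for a projection pixel:
    scatter in the photopeak is estimated by trapezoidal interpolation between
    two narrow windows flanking the peak, then subtracted from the peak counts."""
    scatter = (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
    return max(c_peak - scatter, 0.0)

# hypothetical counts for one pixel of an I-131 (364 keV) acquisition
print(tew_primary_counts(c_peak=950.0, c_lower=120.0, c_upper=60.0,
                         w_peak=72.8, w_lower=14.6, w_upper=14.6))
```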
NASA Technical Reports Server (NTRS)
Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.
2016-01-01
Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.
Can Landscape Evolution Models (LEMs) be used to reconstruct palaeo-climate and sea-level histories?
NASA Astrophysics Data System (ADS)
Leyland, J.; Darby, S. E.
2011-12-01
Reconstruction of palaeo-environmental conditions over long time periods is notoriously difficult, especially where there are limited or no proxy records from which to extract data. Application of landscape evolution models (LEMs) for palaeo-environmental reconstruction involves hindcast modeling, in which simulation scenarios are configured with specific model variables and parameters chosen to reflect a specific hypothesis of environmental change. In this form of modeling, the environmental time series utilized are considered credible when modeled and observed landscape metrics converge. Herein we account for the uncertainties involved in evaluating the degree to which the model simulations and observations converge using Monte Carlo analysis of reduced complexity `metamodels'. The technique is applied to a case study focused on a specific set of gullies found on the southwest coast of the Isle of Wight, UK. A key factor controlling the Holocene evolution of these coastal gullies is the balance between rates of sea-cliff retreat (driven by sea-level rise) and headwards incision caused by knickpoint migration (driven by the rate of runoff). We simulate these processes using a version of the GOLEM model that has been modified to represent sea-cliff retreat. A Central Composite Design (CCD) sampling technique was employed, enabling the trajectories of gully response to different combinations of driving conditions to be modeled explicitly. In some of these simulations, where the range of bedrock erodibility (0.03 to 0.04 m0.2 a-1) and rate of sea-level change (0.005 to 0.0059 m a-1) is tightly constrained, modeled gully forms conform closely to those observed in reality, enabling a suite of climate and sea-level change scenarios which plausibly explain the Holocene evolution of the Isle of Wight gullies to be identified.
A New Estimate of North American Mountain Snow Accumulation From Regional Climate Model Simulations
NASA Astrophysics Data System (ADS)
Wrzesien, Melissa L.; Durand, Michael T.; Pavelsky, Tamlin M.; Kapnick, Sarah B.; Zhang, Yu; Guo, Junyi; Shum, C. K.
2018-02-01
Despite the importance of mountain snowpack to understanding the water and energy cycles in North America's montane regions, no reliable mountain snow climatology exists for the entire continent. We present a new estimate of mountain snow water equivalent (SWE) for North America from regional climate model simulations. Climatological peak SWE in North America mountains is 1,006 km3, 2.94 times larger than previous estimates from reanalyses. By combining this mountain SWE value with the best available global product in nonmountain areas, we estimate peak North America SWE of 1,684 km3, 55% greater than previous estimates. In our simulations, the date of maximum SWE varies widely by mountain range, from early March to mid-April. Though mountains comprise 24% of the continent's land area, we estimate that they contain 60% of North American SWE. This new estimate is a suitable benchmark for continental- and global-scale water and energy budget studies.
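A quick back-of-envelope check of the figures quoted above, useful when citing them: the mountain fraction and the implied previous estimates follow directly from the stated ratios.

# Consistency arithmetic for the quoted SWE numbers (km^3).
mountain_swe = 1006.0          # simulated climatological peak mountain SWE
total_swe    = 1684.0          # mountain SWE combined with the nonmountain product

print(mountain_swe / total_swe)   # ~0.60, i.e., "mountains contain 60% of SWE"
print(mountain_swe / 2.94)        # implied previous mountain estimate, ~342 km^3
print(total_swe / 1.55)           # implied previous continental estimate, ~1086 km^3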
Park, Jae Hong; Peters, Thomas M.; Altmaier, Ralph; Jones, Samuel M.; Gassman, Richard; Anthony, T. Renée
2017-01-01
We have developed a time-dependent simulation model to estimate in-room concentrations of multiple contaminants [ammonia (NH3), carbon dioxide (CO2), carbon monoxide (CO) and dust] as a function of increased ventilation with filtered recirculation for swine farrowing facilities. Energy and mass balance equations were used to simulate the indoor air quality (IAQ) and operational cost for a variety of ventilation conditions over a 3-month winter period for a facility located in the Midwest U.S., using simplified and real-time production parameters, comparing results to field data. A revised model was improved by minimizing the sum of squared errors (SSE) between modeled and measured NH3 and CO2. After optimizing NH3 and CO2, other IAQ results from the simulation were compared to field measurements using linear regression. For NH3, the coefficient of determination (R2) for simulation results and field measurements improved from 0.02 with the original model to 0.37 with the new model. For CO2, the R2 for simulation results and field measurements was 0.49 with the new model. When the makeup air was matched to hallway air CO2 concentrations (1,500 ppm), simulation results showed the smallest SSE. With the new model, the R2 for other contaminants were 0.34 for inhalable dust, 0.36 for respirable dust, and 0.26 for CO. Operation of the air cleaner decreased inhalable dust by 35% and respirable dust concentrations by 33%, while having no effect on NH3, CO2, in agreement with field data, and increasing operational cost by $860 (58%) for the three-month period. PMID:28775911
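For readers reproducing the model evaluation described above, the sketch below shows the two goodness-of-fit measures named in the abstract: the sum of squared errors between modeled and measured series, and R-squared from correlating the two. The concentration values are hypothetical.

# Minimal goodness-of-fit measures for modeled vs. measured time series.
import numpy as np

def sse(modeled, measured):
    modeled, measured = np.asarray(modeled), np.asarray(measured)
    return float(np.sum((modeled - measured) ** 2))

def r_squared(modeled, measured):
    r = np.corrcoef(np.asarray(modeled), np.asarray(measured))[0, 1]
    return float(r ** 2)

# Hypothetical daily-mean NH3 concentrations (ppm), for illustration only.
measured = [4.1, 5.0, 6.2, 5.5, 4.8]
modeled  = [3.9, 5.3, 5.8, 5.9, 4.5]
print(sse(modeled, measured), r_squared(modeled, measured))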
Feaster, Toby D.; Conrads, Paul
2000-01-01
In May 1996, the U.S. Geological Survey entered into a cooperative agreement with the Kershaw County Water and Sewer Authority to characterize and simulate the water quality in the Wateree River, South Carolina. Longitudinal profiling of dissolved-oxygen concentrations during the spring and summer of 1996 revealed dissolved-oxygen minimums occurring upstream from the point-source discharges. The mean dissolved-oxygen decrease upstream from the effluent discharges was 2.0 milligrams per liter, and the decrease downstream from the effluent discharges was 0.2 milligram per liter. Several theories were investigated to obtain an improved understanding of the dissolved-oxygen dynamics in the upper Wateree River. Data suggest that the dissolved-oxygen concentration decrease is associated with elevated levels of oxygen-consuming nutrients and metals that are flowing into the Wateree River from Lake Wateree. Analysis of long-term streamflow and water-quality data collected at two U.S. Geological Survey gaging stations suggests that no strong correlation exists between streamflow and dissolved-oxygen concentrations in the Wateree River. However, a strong negative correlation does exist between dissolved-oxygen concentrations and water temperature. Analysis of data from six South Carolina Department of Health and Environmental Control monitoring stations for 1980-95 revealed decreasing trends in ammonia nitrogen at all stations where data were available and decreasing trends in 5-day biochemical oxygen demand at three river stations. The influence of various hydrologic and point-source loading conditions on dissolved-oxygen concentrations in the Wateree River was determined by using results from water-quality simulations by the Branched Lagrangian Transport Model. The effects of five tributaries and four point-source discharges were included in the model. Data collected during two synoptic water-quality samplings on June 23-25 and August 11-13, 1997, were used to calibrate and validate the Branched Lagrangian Transport Model. The data include dye-tracer concentrations collected at six locations, stream-reaeration data collected at four locations, and water-quality and water-temperature data collected at nine locations. Hydraulic data for the Branched Lagrangian Transport Model were simulated by using the U.S. Geological Survey BRANCH one-dimensional, unsteady-flow model. Data that were used to calibrate and validate the BRANCH model included time-series of water-level and streamflow data at three locations. The domain of the hydraulic model and the transport model was a 57.3- and 43.5-mile reach of the river, respectively. A sensitivity analysis of the simulated dissolved-oxygen concentrations to model coefficients and data inputs indicated that the simulated dissolved-oxygen concentrations were most sensitive to changes in the boundary concentration inputs of water temperature and dissolved oxygen followed by sensitivity to the change in streamflow. A 35-percent increase in streamflow resulted in a negative normalized sensitivity index, indicating a decrease in dissolved-oxygen concentrations. The simulated dissolved-oxygen concentrations showed no significant sensitivity to changes in model input rate kinetics. To demonstrate the utility of the Branched Lagrangian Transport Model of the Wateree River, the model was used to simulate several hydrologic and water-quality scenarios to evaluate the effects on simulated dissolved-oxygen concentrations.
The first scenario compared the 24-hour mean dissolved-oxygen concentrations for August 13, 1997, as simulated during the model validation, with simulations using two different streamflow patterns. The mean streamflow for August 13, 1997, was 2,000 cubic feet per second. Simulations were run using mean streamflows of 1,000 and 1,400 cubic feet per second while keeping the water-quality boundary conditions the same as were used during the validation simulations. When compared t
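The normalized sensitivity index referred to above is commonly computed as the fractional change in output divided by the fractional change in input; the sketch below shows that definition with hypothetical numbers (a 35% streamflow increase producing a slight dissolved-oxygen decrease, hence a negative index).

# Normalized sensitivity index: (dO/O_base) / (dI/I_base); values are hypothetical.
def normalized_sensitivity(out_base, out_perturbed, in_base, in_perturbed):
    d_out = (out_perturbed - out_base) / out_base
    d_in  = (in_perturbed - in_base) / in_base
    return d_out / d_in

print(normalized_sensitivity(out_base=6.0, out_perturbed=5.8,
                             in_base=2000.0, in_perturbed=2700.0))  # negative index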
Modeling and Simulation of Ballistic Penetration of Ceramic-Polymer-Metal Layered Systems
2016-01-01
ARL-RP-0562 ● JAN 2016. US Army Research Laboratory. Modeling and Simulation of Ballistic Penetration of Ceramic-Polymer-Metal Layered Systems.
Structure prediction and molecular simulation of gases diffusion pathways in hydrogenase.
Sundaram, Shanthy; Tripathi, Ashutosh; Gupta, Vipul
2010-10-06
Although hydrogen is considered to be one of the most promising future energy sources and the technical aspects involved in using it have advanced considerably, the future supply of hydrogen from renewable sources is still unsolved. The [Fe]-hydrogenase enzymes are highly efficient H(2) catalysts found in ecologically and phylogenetically diverse microorganisms, including the photosynthetic green alga Chlamydomonas reinhardtii. While these enzymes can occur in several forms, H(2) catalysis takes place at a unique [FeS] prosthetic group or H-cluster, located at the active site. The 3D structure of the HydA1 hydrogenase protein from Chlamydomonas reinhardtii was predicted using the MODELER 8v2 software. The conserved region was identified with the NCBI CDD search, and template selection was done on the basis of NCBI BLAST results. For the single-template approach 1FEH was used, and for the multiple-template approach 1FEH and 1HFE were used. The homology modeling result was verified by uploading the file to the SAVS server, and on the basis of the SAVS result the 3D structure predicted using the single template was chosen for the molecular simulations. Three simulation strategies were used. First, the molecular simulation of the protein was performed in a solvated box containing bulk water. Then 100 H(2) molecules were randomly inserted in the solvated box and two simulations of 50 and 100 ps were performed. Similarly, 100 O(2) molecules were randomly placed in the solvated box and again 50 and 100 ps simulations were performed. Energy minimization was carried out before each simulation, and conformations were saved after each simulation. Gas diffusion was analyzed on the basis of RMSD, radius of gyration, and number of gas molecules per picosecond plots.
Smith, Rebecca L.; Schukken, Ynte H.; Lu, Zhao; Mitchell, Rebecca M.; Grohn, Yrjo T.
2013-01-01
Objective To develop a mathematical model to simulate infection dynamics of Mycobacterium bovis in cattle herds in the United States and predict efficacy of the current national control strategy for tuberculosis in cattle. Design Stochastic simulation model. Sample Theoretical cattle herds in the United States. Procedures A model of within-herd M bovis transmission dynamics following introduction of 1 latently infected cow was developed. Frequency- and density-dependent transmission modes and 3 tuberculin-test based culling strategies (no test-based culling, constant (annual) testing with test-based culling, and the current strategy of slaughterhouse detection-based testing and culling) were investigated. Results were evaluated for 3 herd sizes over a 10-year period and validated via simulation of known outbreaks of M bovis infection. Results On the basis of 1,000 simulations (1000 herds each) at replacement rates typical for dairy cattle (0.33/y), median time to detection of M bovis infection in medium-sized herds (276 adult cattle) via slaughterhouse surveillance was 27 months after introduction, and 58% of these herds would spontaneously clear the infection prior to that time. Sixty-two percent of medium-sized herds without intervention and 99% of those managed with constant test-based culling were predicted to clear infection < 10 years after introduction. The model predicted observed outbreaks best for frequency-dependent transmission, and probability of clearance was most sensitive to replacement rate. Conclusions and Clinical Relevance Although modeling indicated the current national control strategy was sufficient for elimination of M bovis infection from dairy herds after detection, slaughterhouse surveillance was not sufficient to detect M bovis infection in all herds and resulted in subjectively delayed detection, compared with the constant testing method. Further research is required to economically optimize this strategy. PMID:23865885
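To illustrate the kind of stochastic within-herd simulation described above, the toy sketch below tracks susceptible and infected cattle with frequency-dependent transmission, monthly replacement, and an optional annual test-and-cull with imperfect sensitivity. The structure is a simplification and every rate value is an assumption, not a parameter of the study.

# Toy stochastic within-herd model; all rates and probabilities are illustrative.
import random

def simulate_herd(n=276, beta=0.05, replace=0.33, test_sens=0.8,
                  annual_testing=False, months=120, seed=1):
    random.seed(seed)
    s, i = n - 1, 1                               # introduce one infected cow
    for month in range(months):
        herd = s + i
        p_inf = beta * i / herd if herd > 0 else 0.0   # frequency-dependent transmission
        new_inf = sum(random.random() < p_inf for _ in range(s))
        s, i = s - new_inf, i + new_inf
        # monthly replacement: removed infected animals replaced with susceptibles
        culled_i = sum(random.random() < replace / 12.0 for _ in range(i))
        s, i = s + culled_i, i - culled_i
        # annual whole-herd tuberculin test with imperfect sensitivity
        if annual_testing and (month + 1) % 12 == 0:
            detected = sum(random.random() < test_sens for _ in range(i))
            s, i = s + detected, i - detected
        if i == 0:
            return month + 1                      # months until the herd clears infection
    return None                                   # still infected at the end of the horizon

print(simulate_herd(annual_testing=True))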
Structure of aqueous proline via parallel tempering molecular dynamics and neutron diffraction.
Troitzsch, R Z; Martyna, G J; McLain, S E; Soper, A K; Crain, J
2007-07-19
The structure of aqueous L-proline amino acid has been the subject of much debate centering on the validity of various proposed models, differing widely in the extent to which local and long-range correlations are present. Here, aqueous proline is investigated by atomistic, replica exchange molecular dynamics simulations, and the results are compared to neutron diffraction and small angle neutron scattering (SANS) data, which have been reported recently (McLain, S.; Soper, A.; Terry, A.; Watts, A. J. Phys. Chem. B 2007, 111, 4568). Comparisons between neutron experiments and simulation are made via the static structure factor S(Q) which is measured and computed from several systems with different H/D isotopic compositions at a concentration of 1:20 molar ratio. Several different empirical water models (TIP3P, TIP4P, and SPC/E) in conjunction with the CHARMM22 force field are investigated. Agreement between experiment and simulation is reasonably good across the entire Q range although there are significant model-dependent variations in some cases. In general, agreement is improved slightly upon application of approximate quantum corrections obtained from gas-phase path integral simulations. Dimers and short oligomeric chains formed by hydrogen bonds (frequently bifurcated) coexist with apolar (hydrophobic) contacts. These emerge as the dominant local motifs in the mixture. Evidence for long-range association is more equivocal: No long-range structures form spontaneously in the MD simulations, and no obvious low-Q signature is seen in the SANS data. Moreover, associations introduced artificially to replicate a long-standing proposed mesoscale structure for proline correlations as an initial condition are annealed out by parallel tempering MD simulations. However, some small residual aggregates do remain, implying a greater degree of long-range order than is apparent in the SANS data.
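For context on the replica-exchange (parallel tempering) step used in such simulations, the sketch below shows the standard Metropolis swap criterion between two temperature replicas; the unit system and the example energies and temperatures are assumptions, not values from the study.

# Replica-exchange swap acceptance: min(1, exp[(beta_i - beta_j)(E_i - E_j)]).
import math, random

K_B = 0.0019872041  # kcal/(mol K), an assumed unit system

def swap_accepted(temp_i, temp_j, energy_i, energy_j):
    beta_i, beta_j = 1.0 / (K_B * temp_i), 1.0 / (K_B * temp_j)
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return random.random() < min(1.0, math.exp(delta))

# Hypothetical example: neighbouring replicas at 300 K and 320 K.
print(swap_accepted(300.0, 320.0, energy_i=-1500.2, energy_j=-1498.7))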
Vargas, Maria V; Moawad, Gaby; Denny, Kathryn; Happ, Lindsey; Misa, Nana Yaa; Margulies, Samantha; Opoku-Anane, Jessica; Abi Khalil, Elias; Marfori, Cherie
To assess whether a robotic simulation curriculum for novice surgeons can improve performance of a suturing task in a live porcine model. Randomized controlled trial (Canadian Task Force classification I). Academic medical center. Thirty-five medical students without robotic surgical experience. Participants were enrolled in an online session of training modules followed by an in-person orientation. Baseline performance testing on the Mimic Technologies da Vinci Surgical Simulator (dVSS) was also performed. Participants were then randomly assigned to the completion of 4 dVSS training tasks (camera clutching 1, suture sponge 1 and 2, and tubes) versus no further training. The intervention group performed each dVSS task until proficiency or up to 10 times. A final suturing task was performed on a live porcine model, which was video recorded and blindly assessed by experienced surgeons. The primary outcomes were Global Evaluative Assessment of Robotic Skills (GEARS) scores and task time. The study had 90% power to detect a mean difference of 3 points on the GEARS scale, assuming a standard deviation (SD) of 2.65, and 80% power to detect a mean difference of 3 minutes, assuming an SD of 3 minutes. There were no differences in demographics and baseline skills between the 2 groups. No significant differences in task time in minutes or GEARS scores were seen for the final suturing task between the intervention and control groups, respectively (9.2 [2.65] vs 9.9 [2.07] minutes, p = .406; and 15.37 [2.51] vs 15.25 [3.38], p = .603). The 95% confidence interval for the difference in mean task times was -2.36 to .96 minutes and for mean GEARS scores -1.91 to 2.15 points. Live suturing task performance was not improved with a proficiency-based virtual reality simulation suturing curriculum compared with standard orientation to the da Vinci robotic console in a group of novice surgeons. Published by Elsevier Inc.
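The stated power assumptions can be checked with the usual normal-approximation sample-size formula for comparing two means, n per group ≈ 2 (z_{1-α/2} + z_{power})² (sd/δ)²; the short sketch below reproduces numbers consistent with splitting 35 participants into two groups. The formula choice is an assumption about how the calculation was done, not a statement of the authors' exact method.

# Normal-approximation sample size per group for a two-sample comparison of means.
from statistics import NormalDist

def n_per_group(delta, sd, power, alpha=0.05):
    z = NormalDist().inv_cdf
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd / delta) ** 2

print(n_per_group(delta=3.0, sd=2.65, power=0.90))  # ~16.4 -> ~17 per group (GEARS)
print(n_per_group(delta=3.0, sd=3.0,  power=0.80))  # ~15.7 -> ~16 per group (task time)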
First-principles Monte Carlo simulations of reaction equilibria in compressed vapors
Fetisov, Evgenii O.; Kuo, I-Feng William; Knight, Chris; ...
2016-06-13
Predictive modeling of reaction equilibria presents one of the grand challenges in the field of molecular simulation. Difficulties in the study of such systems arise from the need (i) to accurately model both strong, short-ranged interactions leading to the formation of chemical bonds and weak interactions arising from the environment, and (ii) to sample the range of time scales involving frequent molecular collisions, slow diffusion, and infrequent reactive events. Here we present a novel reactive first-principles Monte Carlo (RxFPMC) approach that allows for investigation of reaction equilibria without the need to prespecify a set of chemical reactions and their ideal-gas equilibrium constants. We apply RxFPMC to investigate a nitrogen/oxygen mixture at T = 3000 K and p = 30 GPa, i.e., conditions that are present in atmospheric lightning strikes and explosions. The RxFPMC simulations show that the solvation environment leads to a significantly enhanced NO concentration that reaches a maximum when oxygen is present in slight excess. In addition, the RxFPMC simulations indicate the formation of NO2 and N2O in mole fractions approaching 1%, whereas N3 and O3 are not observed. Lastly, the equilibrium distributions obtained from the RxFPMC simulations agree well with those from a thermochemical computer code parametrized to experimental data.
First-principles Monte Carlo simulations of reaction equilibria in compressed vapors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fetisov, Evgenii O.; Kuo, I-Feng William; Knight, Chris
Predictive modeling of reaction equilibria presents one of the grand challenges in the field of molecular simulation. Difficulties in the study of such systems arise from the need (i) to accurately model both strong, short-ranged interactions leading to the formation of chemical bonds and weak interactions arising from the environment, and (ii) to sample the range of time scales involving frequent molecular collisions, slow diffusion, and infrequent reactive events. Here we present a novel reactive first-principles Monte Carlo (RxFPMC) approach that allows for investigation of reaction equilibria without the need to prespecify a set of chemical reactions and their ideal-gas equilibrium constants. We apply RxFPMC to investigate a nitrogen/oxygen mixture at T = 3000 K and p = 30 GPa, i.e., conditions that are present in atmospheric lightning strikes and explosions. The RxFPMC simulations show that the solvation environment leads to a significantly enhanced NO concentration that reaches a maximum when oxygen is present in slight excess. In addition, the RxFPMC simulations indicate the formation of NO2 and N2O in mole fractions approaching 1%, whereas N3 and O3 are not observed. Lastly, the equilibrium distributions obtained from the RxFPMC simulations agree well with those from a thermochemical computer code parametrized to experimental data.
Evaluation of approaches focused on modelling of organic carbon stocks using the RothC model
NASA Astrophysics Data System (ADS)
Koco, Štefan; Skalský, Rastislav; Makovníková, Jarmila; Tarasovičová, Zuzana; Barančíková, Gabriela
2014-05-01
The aim of current efforts in Europe is the protection of soil organic matter, which is included in all relevant documents related to soil protection. Modelling of organic carbon stocks under anticipated climate change, or under different land management, can significantly help in short- and long-term forecasting of the state of soil organic matter. The RothC model can be applied over time periods from several years to centuries and has been tested in long-term experiments across a large range of soil types and climatic conditions in Europe. For the initialization of the RothC model, knowledge of the carbon pool sizes is essential. Pool sizes can be obtained from equilibrium model runs, but this approach is time consuming and tedious, especially for larger-scale simulations. Because of this complexity we searched for new ways to simplify and accelerate the process. The paper presents a comparison of two approaches to SOC stock modelling in the same area. The modelling was carried out on the basis of individual land use, management and soil inputs for each simulation unit. We modelled 1617 simulation units on a 1 x 1 km grid covering the agroclimatic region Žitný ostrov in southwestern Slovakia. The first approach groups simulation units with similar input values. The groups were created after testing and validating the modelling results for individual simulation units against the results obtained by modelling the average input values of the whole group. Tests of the equilibrium model for 5 t.ha-1 intervals of initial SOC stock showed minimal differences between results for individual units and the result for the average value of the whole interval. Management inputs of plant residues and farmyard manure used to model carbon turnover were also identical for many simulation units. Combining these groups (intervals of initial SOC stock, classes of plant residue inputs, classes of farmyard manure inputs), we created 661 simulation groups. Within each group, average input values were used for all simulation units. Export of input data and modelling were carried out manually in the graphical environment of the RothC 26.3 v2.0 application for each group separately, and SOC stocks were modelled for the 661 groups of simulation units. For the second approach we used the DOS version of RothC 26.3. The inputs for modelling were exported using VBA scripts in MS Access. Equilibrium modelling was performed for several variants of plant residue inputs, and the variant whose total pool size was closest to the real initial SOC stock was selected. All 1617 simulation units were then modelled automatically by means of a predefined batch file. The comparison of the two modelling methods showed spatial differentiation of the results, mainly with increasing length of the modelling period: moving forward in time from the initial period, the number of simulation units with differences in SOC stocks between the two approaches increases. The observed differences suggest that results obtained by generalizing the inputs should be treated with a certain degree of caution. For large-scale simulations it is more appropriate to use the DOS version of the RothC 26.3 model, which allows automated modelling. This reduces the time needed to operate the model, without the need to minimize the number of simulated units.
Key words: Soil organic carbon stock, modelling, RothC 26.3, agricultural soils, Slovakia. Acknowledgements: This work was supported by the Slovak Research and Development Agency under contracts No. APVV-0580-10 and APVV-0131-11.
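The grouping idea used in the first approach above can be sketched as a simple binning step: simulation units sharing the same 5 t C ha-1 interval of initial SOC stock and the same residue and manure input classes are run once with group-average inputs. The data structure and the example units below are hypothetical.

# Minimal sketch of grouping simulation units before running the RothC-type model.
from collections import defaultdict

def group_key(unit, soc_interval=5.0):
    return (int(unit["soc"] // soc_interval),   # initial SOC stock bin (t C ha-1)
            unit["residue_class"],              # plant residue input class
            unit["manure_class"])               # farmyard manure input class

def build_groups(units):
    groups = defaultdict(list)
    for u in units:
        groups[group_key(u)].append(u)
    return groups

# Hypothetical simulation units for illustration.
units = [{"soc": 41.2, "residue_class": "low",  "manure_class": "none"},
         {"soc": 43.8, "residue_class": "low",  "manure_class": "none"},
         {"soc": 47.1, "residue_class": "high", "manure_class": "none"}]
for key, members in build_groups(units).items():
    mean_soc = sum(m["soc"] for m in members) / len(members)
    print(key, len(members), round(mean_soc, 1))   # each group is modelled once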
Durand, Casey P
2013-01-01
Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
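A minimal version of the Monte Carlo experiment described above is sketched below for a continuous-by-dichotomous interaction in a linear regression: simulate data with a known interaction effect, test the interaction coefficient, and estimate power at two alpha levels. The effect sizes, sample size, and number of simulations are illustrative, not those of the study.

# Monte Carlo estimate of power to detect an interaction in a linear model.
import numpy as np
from scipy import stats

def interaction_power(n=200, beta_int=0.3, alpha=0.05, nsim=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(nsim):
        x = rng.normal(size=n)                     # continuous predictor
        g = rng.integers(0, 2, size=n)             # dichotomous predictor
        y = 0.5 * x + 0.5 * g + beta_int * x * g + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x, g, x * g])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        dof = n - X.shape[1]
        sigma2 = resid @ resid / dof
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[3, 3])
        p = 2 * (1 - stats.t.cdf(abs(coef[3] / se), dof))
        hits += p < alpha
    return hits / nsim

print(interaction_power(alpha=0.05))
print(interaction_power(alpha=0.10))   # effect of relaxing the Type 1 error rate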
Yildirim, Ilyas; Park, Hajeung; Disney, Matthew D.; Schatz, George C.
2013-01-01
One class of functionally important RNA is repeating transcripts that cause disease through various mechanisms. For example, expanded r(CAG) repeats can cause Huntington’s and other diseases through translation of toxic proteins. Herein, a crystal structure of r[5ʹUUGGGC(CAG)3GUCC]2, a model of CAG expanded transcripts, refined to 1.65 Å resolution is disclosed that shows both anti-anti and syn-anti orientations for 1×1 nucleotide AA internal loops. Molecular dynamics (MD) simulations using the Amber force field in explicit solvent were run for over 500 ns on the model systems r(5ʹGCGCAGCGC)2 (MS1) and r(5ʹCCGCAGCGG)2 (MS2). In these MD simulations, both anti-anti and syn-anti AA base pairs appear to be stable. While anti-anti AA base pairs were dynamic and sampled multiple anti-anti conformations, no syn-anti↔anti-anti transformations were observed. Umbrella sampling simulations were run on MS2, and a 2D free energy surface was created to extract transformation pathways. In addition, over 800 ns of explicit solvent MD simulation were run on r[5ʹGGGC(CAG)3GUCC]2, which closely represents the refined crystal structure. One of the terminal AA base pairs (in the syn-anti conformation) transformed to the anti-anti conformation, following the pathway predicted by the umbrella sampling simulations. Further analysis showed a binding pocket near AA base pairs in syn-anti conformations. The computational results combined with the refined crystal structure show that the global minimum conformation of 1×1 nucleotide AA internal loops in r(CAG) repeats is anti-anti, but syn-anti can be adopted depending on the environment. These results are important for understanding RNA dynamic-function relationships and for developing small molecules that target RNA dynamic ensembles. PMID:23441937
Meteorological Simulations of Ozone Episode Case Days during the 1996 Paso del Norte Ozone Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, M.J.; Costigan, K.; Muller, C.
1999-02-01
Meteorological simulations centered around the border cities of El Paso and Ciudad Juarez have been performed during an ozone episode that occurred on Aug. 13, 1996 during the 1996 Paso del Norte Ozone Study field campaign. Simulations were performed using the HOTMAC mesoscale meteorological model with a nested mesh system of 1, 2, 4, and 8 km horizontal grid sizes. Investigation of the vertical structure and evolution of the atmospheric boundary layer for the Aug. 11-13 time period is emphasized in this paper. Comparison of model-produced wind speed profiles to rawinsonde and radar profiler measurements shows reasonable agreement. A persistent upper-level jet was captured in the model simulations through data assimilation. In the evening hours, the model was not able to produce the strong wind direction shear seen in the radar wind profiles. Based on virtual potential temperature profile comparisons, the model appears to correctly simulate the daytime growth of the convective mixed layer. However, the model underestimates the cooling of the surface layer at night. We found that the upper-level jet significantly impacted the turbulence structure of the boundary layer, leading to relatively high turbulent kinetic energy (tke) values aloft at night. The model indicates that these high tke values aloft enhance the mid-morning growth of the boundary layer. No upper-level turbulence measurements were available to verify this finding, however. Radar profiler-derived mixing heights do indicate relatively rapid morning growth of the mixed layer.
NASA Astrophysics Data System (ADS)
Jin, Meibing; Deal, Clara; Maslowski, Wieslaw; Matrai, Patricia; Roberts, Andrew; Osinski, Robert; Lee, Younjoo J.; Frants, Marina; Elliott, Scott; Jeffery, Nicole; Hunke, Elizabeth; Wang, Shanlin
2018-01-01
The current coarse-resolution global Community Earth System Model (CESM) can reproduce major and large-scale patterns but is still missing some key biogeochemical features in the Arctic Ocean, e.g., low surface nutrients in the Canada Basin. We incorporated the CESM Version 1 ocean biogeochemical code into the Regional Arctic System Model (RASM) and coupled it with a sea-ice algal module to investigate model limitations. Four ice-ocean hindcast cases are compared with various observations: two on a global 1° (40-60 km in the Arctic) grid, G1deg and G1deg-OLD, with/without new sea-ice processes incorporated; two on RASM's 1/12° (~9 km) grid, R9km and R9km-NB, with/without a subgrid scale brine rejection parameterization which improves ocean vertical mixing under sea ice. Higher resolution and new sea-ice processes contributed to lower model errors in sea-ice extent, ice thickness, and ice algae. In the Bering Sea shelf, only higher resolution contributed to lower model errors in salinity, nitrate (NO3), and chlorophyll-a (Chl-a). In the Arctic Basin, model errors in mixed layer depth (MLD) were reduced 36% by the brine rejection parameterization, 20% by new sea-ice processes, and 6% by higher resolution. The NO3 concentration biases were caused by both the MLD bias and coarse resolution, because of excessive horizontal mixing of high NO3 from the Chukchi Sea into the Canada Basin in coarse-resolution models. R9km showed improvements over G1deg on NO3, but not on Chl-a, likely due to light limitation under snow and ice cover in the Arctic Basin.
Evaluation of CMIP5 and CORDEX Derived Wind Wave Climate in Arabian Sea and Bay of Bengal
NASA Astrophysics Data System (ADS)
Chowdhury, P.; Behera, M. R.
2017-12-01
Climate change impacts on surface ocean wave parameters need robust assessment for effective coastal zone management. The skill of dynamical General Circulation Model (GCM) and Regional Circulation Model (RCM) forced wind-wave climate simulations over the northern Indian Ocean is assessed in the present work. The historical dynamical wave climate is simulated using surface winds derived from four GCMs and four RCMs, participating in the Coupled Model Inter-comparison Project (CMIP5) and the Coordinated Regional Climate Downscaling Experiment (CORDEX-South Asia), respectively, and their ensembles are used to force a spectral wave model. The surface winds derived from GCMs and RCMs are corrected for bias, using the quantile mapping method, before being used to force the spectral wave model. The climatological properties of wave parameters (significant wave height (Hs), mean wave period (Tp) and direction (θm)) are evaluated relative to ERA-Interim historical wave reanalysis datasets over the Arabian Sea (AS) and Bay of Bengal (BoB) regions of the northern Indian Ocean for a period of 27 years. We identify that the nearshore wave climate of the AS is better predicted than that of the BoB by both GCMs and RCMs. Ensemble GCM simulated Hs in the AS has a better correlation with ERA-Interim (~90%) than in the BoB (~80%), whereas ensemble RCM simulated Hs has a low correlation in both regions (~50% in AS and ~45% in BoB). In the AS, ensemble GCM simulated Tp has better predictability (~80%) compared to the ensemble RCM (~65%). However, neither GCM nor RCM could satisfactorily predict Tp in the nearshore BoB. Wave direction is poorly simulated by GCMs and RCMs in both AS and BoB, with correlations around 50% for GCM and 60% for RCM wind-derived simulations. However, upon comparing individual RCMs with their parent GCMs, it is found that a few of the RCMs predict wave properties better than their parent GCMs. It may be concluded that there is no consistent added value from RCMs over GCMs for the forced wind-wave climate over the northern Indian Ocean. We also identify that there is little to no benefit in choosing a finer resolution GCM (~1.4°) over a coarse GCM (~2.8°) for improving the skill of GCM-forced dynamical wave simulations.
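The bias-correction step mentioned above can be illustrated with a minimal empirical quantile-mapping sketch: model winds are mapped onto the reference (observed or reanalysis) distribution through their empirical quantiles. A real implementation would typically work per month, season, or grid cell; the data below are synthetic.

# Empirical quantile mapping of biased model winds onto a reference distribution.
import numpy as np

def quantile_map(model_hist, reference, model_values, n_quantiles=100):
    q = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model_hist, q)
    ref_q = np.quantile(reference, q)
    ranks = np.interp(model_values, model_q, q)   # position in the model climatology
    return np.interp(ranks, q, ref_q)             # mapped onto reference quantiles

rng = np.random.default_rng(42)
obs_wind   = rng.gamma(shape=2.0, scale=3.0, size=5000)   # "observed" winds (m/s)
model_wind = rng.gamma(shape=2.0, scale=3.6, size=5000)   # biased model winds
corrected  = quantile_map(model_wind, obs_wind, model_wind)
print(model_wind.mean(), obs_wind.mean(), corrected.mean())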
Target-mediated drug disposition model and its approximations for antibody-drug conjugates.
Gibiansky, Leonid; Gibiansky, Ekaterina
2014-02-01
Antibody-drug conjugate (ADC) is a complex structure composed of an antibody linked to several molecules of a biologically active cytotoxic drug. The number of ADC compounds in clinical development now exceeds 30, with two of them already on the market. However, there is no rigorous mechanistic model that describes pharmacokinetic (PK) properties of these compounds. PK modeling of ADCs is even more complicated than that of other biologics as the model should describe distribution, binding, and elimination of antibodies with different toxin load, and also the deconjugation process and PK of the released toxin. This work extends the target-mediated drug disposition (TMDD) model to describe ADCs, derives the rapid binding (quasi-equilibrium), quasi-steady-state, and Michaelis-Menten approximations of the TMDD model as applied to ADCs, derives the TMDD model and its approximations for ADCs with load-independent properties, and discusses further simplifications of the system under various assumptions. The developed models are shown to describe data simulated from the available clinical population PK models of trastuzumab emtansine (T-DM1), one of the two currently approved ADCs. Identifiability of model parameters is also discussed and illustrated on the simulated T-DM1 examples.
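As background for the approximations named above, the generic single-target TMDD system (free drug C, free target R, drug-target complex RC) is usually written as below; this is the standard textbook form, not the ADC-specific extension derived in the paper, and the rate constants are generic symbols.

\frac{dC}{dt}  = \mathrm{In}(t) - k_{el}\,C - k_{on}\,C\,R + k_{off}\,RC
\frac{dR}{dt}  = k_{syn} - k_{deg}\,R - k_{on}\,C\,R + k_{off}\,RC
\frac{dRC}{dt} = k_{on}\,C\,R - (k_{off} + k_{int})\,RC

The rapid-binding (quasi-equilibrium) approximation replaces the binding terms by the algebraic constraint K_D = k_{off}/k_{on} = C\,R/RC, while the quasi-steady-state approximation uses K_{ss} = (k_{off} + k_{int})/k_{on} instead; the ADC model adds analogous states for each toxin-load species plus a deconjugation process.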
2016-11-01
ERDC/GSL TR-16-31: Modeling the Blast Load Simulator Airblast Environment Using First Principles Codes; Report 1, Blast Load Simulator Environment (Gregory C. Bessette, James L. O'Daniel). The report evaluates several first principles codes (FPCs) for modeling airblast environments typical of those encountered in the BLS.
NASA Technical Reports Server (NTRS)
Sutton, Fred B.; Buell, Donald A.
1952-01-01
An investigation was conducted in the Ames 12-foot pressure wind tunnel to determine the effect of an operating propeller on the aerodynamic characteristics of a 1/19-scale model of the Lockheed XFV-1 airplane. Several full-scale power conditions were simulated at Mach numbers from 0.50 to 0.92; the Reynolds number was constant at 1.7 million. Lift, longitudinal force, pitch, roll, and yaw characteristics, determined with and without power, are presented for the complete model and for various combinations of model components. Results of an investigation to determine the characteristics of the dual-rotating propeller used on the model are also given.
Adaptive resolution simulation of an atomistic protein in MARTINI water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavadlav, Julija; Melo, Manuel Nuno; Marrink, Siewert J., E-mail: s.j.marrink@rug.nl
2014-02-07
We present an adaptive resolution simulation of protein G in multiscale water. We couple atomistic water around the protein with mesoscopic water, where four water molecules are represented with one coarse-grained bead, farther away. We circumvent the difficulties that arise from coupling to the coarse-grained model via a 4-to-1 molecule coarse-grain mapping by using bundled water models, i.e., we restrict the relative movement of water molecules that are mapped to the same coarse-grained bead employing harmonic springs. The water molecules change their resolution from four molecules to one coarse-grained particle and vice versa adaptively on-the-fly. Having performed 15 ns long molecular dynamics simulations, we observe within our error bars no differences between structural (e.g., root-mean-squared deviation and fluctuations of backbone atoms, radius of gyration, the stability of native contacts and secondary structure, and the solvent accessible surface area) and dynamical properties of the protein in the adaptive resolution approach compared to the fully atomistically solvated model. Our multiscale model is compatible with the widely used MARTINI force field and will therefore significantly enhance the scope of biomolecular simulations.
Adaptive resolution simulation of an atomistic protein in MARTINI water.
Zavadlav, Julija; Melo, Manuel Nuno; Marrink, Siewert J; Praprotnik, Matej
2014-02-07
We present an adaptive resolution simulation of protein G in multiscale water. We couple atomistic water around the protein with mesoscopic water, where four water molecules are represented with one coarse-grained bead, farther away. We circumvent the difficulties that arise from coupling to the coarse-grained model via a 4-to-1 molecule coarse-grain mapping by using bundled water models, i.e., we restrict the relative movement of water molecules that are mapped to the same coarse-grained bead employing harmonic springs. The water molecules change their resolution from four molecules to one coarse-grained particle and vice versa adaptively on-the-fly. Having performed 15 ns long molecular dynamics simulations, we observe within our error bars no differences between structural (e.g., root-mean-squared deviation and fluctuations of backbone atoms, radius of gyration, the stability of native contacts and secondary structure, and the solvent accessible surface area) and dynamical properties of the protein in the adaptive resolution approach compared to the fully atomistically solvated model. Our multiscale model is compatible with the widely used MARTINI force field and will therefore significantly enhance the scope of biomolecular simulations.
NASA Astrophysics Data System (ADS)
Jia, W.; Pan, F.; McPherson, B. J. O. L.
2015-12-01
Due to the presence of multiple phases in a given system, CO2 sequestration with enhanced oil recovery (CO2-EOR) includes complex multiphase flow processes compared to CO2 sequestration in deep saline aquifers (no hydrocarbons). Two of the most important factors are three-phase relative permeability and hysteresis effects, both of which are difficult to measure and are usually represented by numerical interpolation models. The purposes of this study included quantification of impacts of different three-phase relative permeability models and hysteresis models on CO2 sequestration simulation results, and associated quantitative estimation of uncertainty. Four three-phase relative permeability models and three hysteresis models were applied to a model of an active CO2-EOR site, the SACROC unit located in western Texas. To eliminate possible bias of deterministic parameters on the evaluation, a sequential Gaussian simulation technique was utilized to generate 50 realizations to describe heterogeneity of porosity and permeability, initially obtained from well logs and seismic survey data. Simulation results of forecasted pressure distributions and CO2 storage suggest that (1) the choice of three-phase relative permeability model and hysteresis model have noticeable impacts on CO2 sequestration simulation results; (2) influences of both factors are observed in all 50 realizations; and (3) the specific choice of hysteresis model appears to be somewhat more important relative to the choice of three-phase relative permeability model in terms of model uncertainty.
Analysis, Analysis Practices and Implications for Modeling and Simulation
2007-01-01
... the Somme, New York: Penguin, 1983. Kent, Glenn A., "Looking Back: Four Decades of Analysis," Operations Research, Vol. 50, No. 1, 2002, pp. 122–224. ... to many sources is http://www.saunalahti.fi/fta/EBO.htm (as of December 18, 2006). Effects-based operations are controversial in some respects (Davis
The Radiative Forcing Model Intercomparison Project (RFMIP): Experimental protocol for CMIP6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pincus, Robert; Forster, Piers M.; Stevens, Bjorn
The phrasing of the first of three questions motivating CMIP6 – “How does the Earth system respond to forcing?” – suggests that forcing is always well-known, yet the radiative forcing to which this question refers has historically been uncertain in coordinated experiments even as understanding of how best to infer radiative forcing has evolved. The Radiative Forcing Model Intercomparison Project (RFMIP) endorsed by CMIP6 seeks to provide a foundation for answering the question through three related activities: (i) accurate characterization of the effective radiative forcing relative to a near-preindustrial baseline and careful diagnosis of the components of this forcing; (ii) assessment of the absolute accuracy of clear-sky radiative transfer parameterizations against reference models on the global scales relevant for climate modeling; and (iii) identification of robust model responses to tightly specified aerosol radiative forcing from 1850 to present. Complete characterization of effective radiative forcing can be accomplished with 180 years (Tier 1) of atmosphere-only simulation using a sea-surface temperature and sea ice concentration climatology derived from the host model's preindustrial control simulation. Assessment of parameterization error requires trivial amounts of computation but the development of small amounts of infrastructure: new, spectrally detailed diagnostic output requested as two snapshots at present-day and preindustrial conditions, and results from the model's radiation code applied to specified atmospheric conditions. In conclusion, the search for robust responses to aerosol changes relies on the CMIP6 specification of anthropogenic aerosol properties; models using this specification can contribute to RFMIP with no additional simulation, while those using a full aerosol model are requested to perform at least one and up to four 165-year coupled ocean–atmosphere simulations at Tier 1.
The Radiative Forcing Model Intercomparison Project (RFMIP): Experimental protocol for CMIP6
Pincus, Robert; Forster, Piers M.; Stevens, Bjorn
2016-09-27
The phrasing of the first of three questions motivating CMIP6 – “How does the Earth system respond to forcing?” – suggests that forcing is always well-known, yet the radiative forcing to which this question refers has historically been uncertain in coordinated experiments even as understanding of how best to infer radiative forcing has evolved. The Radiative Forcing Model Intercomparison Project (RFMIP) endorsed by CMIP6 seeks to provide a foundation for answering the question through three related activities: (i) accurate characterization of the effective radiative forcing relative to a near-preindustrial baseline and careful diagnosis of the components of this forcing; (ii) assessment of the absolute accuracy of clear-sky radiative transfer parameterizations against reference models on the global scales relevant for climate modeling; and (iii) identification of robust model responses to tightly specified aerosol radiative forcing from 1850 to present. Complete characterization of effective radiative forcing can be accomplished with 180 years (Tier 1) of atmosphere-only simulation using a sea-surface temperature and sea ice concentration climatology derived from the host model's preindustrial control simulation. Assessment of parameterization error requires trivial amounts of computation but the development of small amounts of infrastructure: new, spectrally detailed diagnostic output requested as two snapshots at present-day and preindustrial conditions, and results from the model's radiation code applied to specified atmospheric conditions. In conclusion, the search for robust responses to aerosol changes relies on the CMIP6 specification of anthropogenic aerosol properties; models using this specification can contribute to RFMIP with no additional simulation, while those using a full aerosol model are requested to perform at least one and up to four 165-year coupled ocean–atmosphere simulations at Tier 1.
NASA Astrophysics Data System (ADS)
Kyrölä, Erkki; Andersson, Monika E.; Verronen, Pekka T.; Laine, Marko; Tukiainen, Simo; Marsh, Daniel R.
2018-04-01
Most of our understanding of the atmosphere is based on observations and their comparison with model simulations. In middle atmosphere studies it is common practice to use an approach where the model dynamics are at least partly based on temperature and wind fields from an external meteorological model. In this work we test how closely satellite measurements of a few central trace gases agree with this kind of model simulation. We use collocated vertical profiles where each satellite measurement is compared to the closest model data. We compare profiles and distributions of O3, NO2 and NO3 from the Global Ozone Monitoring by Occultation of Stars instrument (GOMOS) on the Envisat satellite with simulations by the Whole Atmosphere Community Climate Model (WACCM). GOMOS measurements are from nighttime. Our comparisons show that in the stratosphere outside the polar regions differences in ozone between WACCM and GOMOS are small, between 0 and 6 %. The correlation of the 5-day time series is very high, 0.9-0.95. In the tropical region 10° S-10° N below 10 hPa WACCM values are up to 20 % larger than GOMOS. In the Arctic below 6 hPa WACCM ozone values are up to 20 % larger than GOMOS. In the mesosphere between 0.04 and 1 hPa WACCM is at most 20 % smaller than GOMOS. Above the ozone minimum at 0.01 hPa (or 80 km) large differences are found between WACCM and GOMOS. The correlation can still be high, but at the second ozone peak the correlation falls strongly and the ozone abundance from WACCM is about 60 % smaller than that from GOMOS. The total ozone columns (above 50 hPa) of GOMOS and WACCM agree within ±2 % except in the Arctic where WACCM is 10 % larger than GOMOS. Outside the polar areas and in the validity region of GOMOS NO2 measurements (0.3-37 hPa) WACCM and GOMOS NO2 agree within -5 to +25 % and the correlation is high (0.7-0.95) except in the upper stratosphere at the southern latitudes. In the polar areas, where solar particle precipitation and downward transport from the thermosphere enhance NO2 abundance, large differences of up to -90 % are found between WACCM and GOMOS NO2 and the correlation varies between 0.3 and 0.9. For NO3, we find that the WACCM-GOMOS difference is between -20 and 5 % with a very high correlation of 0.7-0.95. We show that NO3 values strongly depend on temperature and that the dependency can be fitted by an exponential function of temperature. The ratio of NO3 to O3 from WACCM and GOMOS closely follows the prediction from equilibrium chemical theory. Abrupt temperature increases from sudden stratospheric warmings (SSWs) are reflected as sudden enhancements of WACCM and GOMOS NO3 values.
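The exponential temperature fit mentioned above amounts to fitting ln(NO3) = a + b T by least squares, i.e., NO3 ≈ exp(a) exp(b T). The sketch below demonstrates that step on synthetic data; the temperature range, the slope 0.08 K-1, and the noise level are assumptions, not the fitted values of the study.

# Least-squares exponential fit of NO3 against temperature on synthetic data.
import numpy as np

def fit_exponential(temperature_k, no3):
    b, a = np.polyfit(temperature_k, np.log(no3), 1)   # slope and intercept of ln(NO3)
    return np.exp(a), b                                # NO3 ~ A * exp(b * T)

rng = np.random.default_rng(0)
t = np.linspace(220.0, 260.0, 50)                      # stratospheric temperatures (K)
no3 = 1e6 * np.exp(0.08 * (t - 240.0)) * rng.lognormal(0.0, 0.05, t.size)
A, b = fit_exponential(t, no3)
print(A, b)   # b should recover ~0.08 K^-1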
Benay, G; Wipff, G
2014-03-20
We report a molecular dynamics (MD) study of biphasic systems involved in the liquid-liquid extraction of uranyl nitrate by tri-n-butylphosphate (TBP) to hexane, from "pH neutral" or acidic (3 M nitric acid) aqueous solutions, to assess the model dependence of the surface activity and partitioning of TBP alone, of its UO2(NO3)2(TBP)2 complex, and of UO2(NO3)2 or UO2(2+) uncomplexed. For this purpose, we first compare several electrostatic representations of TBP with regards to its polarity and conformational properties, its interactions with H2O, HNO3, and UO2(NO3)2 species, its relative free energies of solvation in water or oil environments, the properties of the pure TBP liquid and of the pure-TBP/water interface. The free energies of transfer of TBP, UO2(NO3)2, UO2(2+), and the UO2(NO3)2(TBP)2 complex across the water/oil interface are then investigated by potential of mean force (PMF) calculations, comparing different TBP models and two charge models of uranyl nitrate. Describing uranyl and nitrate ions with integer charges (+2 and -1, respectively) is shown to exaggerate the hydrophilicity and surface activity of the UO2(NO3)2(TBP)2 complex. With more appropriate ESP charges, mimicking charge transfer and polarization effects in the UO2(NO3)2 moiety or in the whole complex, the latter is no more surface active. This feature is confirmed by MD, PMF, and mixing-demixing simulations with or without polarization. Furthermore, with ESP charges, pulling the UO2(NO3)2 species to the TBP phase affords the formation of UO2(NO3)2(TBP)2 at the interface, followed by its energetically favorable extraction. The neutral complexes should therefore not accumulate at the interface during the extraction process, but diffuse to the oil phase. A similar feature is found for an UO2(NO3)2(Amide)2 neutral complex with fatty amide extracting ligands, calling for further simulations and experimental studies (e.g., time evolution of the nonlinear spectroscopic signature and of surface tension) on the interfacial landscape upon ion extraction.
Alastruey, Jordi; Khir, Ashraf W; Matthys, Koen S; Segers, Patrick; Sherwin, Spencer J; Verdonck, Pascal R; Parker, Kim H; Peiró, Joaquim
2011-08-11
The accuracy of the nonlinear one-dimensional (1-D) equations of pressure and flow wave propagation in Voigt-type visco-elastic arteries was tested against measurements in a well-defined experimental 1:1 replica of the 37 largest conduit arteries in the human systemic circulation. The parameters required by the numerical algorithm were directly measured in the in vitro setup and no data fitting was involved. The inclusion of wall visco-elasticity in the numerical model reduced the underdamped high-frequency oscillations obtained using a purely elastic tube law, especially in peripheral vessels, which was previously reported in this paper [Matthys et al., 2007. Pulse wave propagation in a model human arterial network: Assessment of 1-D numerical simulations against in vitro measurements. J. Biomech. 40, 3476-3486]. In comparison to the purely elastic model, visco-elasticity significantly reduced the average relative root-mean-square errors between numerical and experimental waveforms over the 70 locations measured in the in vitro model: from 3.0% to 2.5% (p<0.012) for pressure and from 15.7% to 10.8% (p<0.002) for the flow rate. In the frequency domain, average relative errors between numerical and experimental amplitudes from the 5th to the 20th harmonic decreased from 0.7% to 0.5% (p<0.107) for pressure and from 7.0% to 3.3% (p<10(-6)) for the flow rate. These results provide additional support for the use of 1-D reduced modelling to accurately simulate clinically relevant problems at a reasonable computational cost. Copyright © 2011 Elsevier Ltd. All rights reserved.
Microbial Internal Storage Alters the Carbon Transformation in Dynamic Anaerobic Fermentation.
Ni, Bing-Jie; Batstone, Damien; Zhao, Bai-Hang; Yu, Han-Qing
2015-08-04
Microbial internal storage processes have been demonstrated to occur and play an important role in activated sludge systems under both aerobic and anoxic conditions when operating under dynamic conditions. High-rate anaerobic reactors are often operated at a high volumetric organic loading and a relatively dynamic profile, with large amounts of fermentable substrates. These dynamic operating conditions and high catabolic energy availability might also facilitate the formation of internal storage polymers by anaerobic microorganisms. However, so far information about storage under anaerobic conditions (e.g., anaerobic fermentation) as well as its consideration in anaerobic process modeling (e.g., IWA Anaerobic Digestion Model No. 1, ADM1) is still sparse. In this work, the accumulation of storage polymers during anaerobic fermentation was evaluated by batch experiments using anaerobic methanogenic sludge and based on mass balance analysis of carbon transformation. A new mathematical model was developed to describe microbial storage in anaerobic systems. The model was calibrated and validated by using independent data sets from two different anaerobic systems, with significant storage observed, and effectively simulated in both systems. The inclusion of the new anaerobic storage processes in the developed model allows for more successful simulation of transients due to lower accumulation of volatile fatty acids (correction for the overestimation of volatile fatty acids), which mitigates pH fluctuations. Current models such as the ADM1 cannot effectively simulate these dynamics due to a lack of anaerobic storage mechanisms.
Bai, Jie; Liu, He; Yin, Bo; Ma, Huijun; Chen, Xinchun
2017-02-01
Anaerobic acidogenic fermentation with high-solid sludge is a promising method for volatile fatty acid (VFA) production to realize resource recovery. In this study, to model inhibition by free ammonia in high-solid sludge fermentation, the anaerobic digestion model No. 1 (ADM1) was modified to simulate the VFA generation in batch, semi-continuous and full-scale sludge fermentation. The ADM1 was operated on the platform AQUASIM 2.0. Three kinds of inhibition forms, i.e., simple inhibition, Monod and non-inhibition forms, were integrated into the ADM1 and tested with the real experimental data for batch and semi-continuous fermentation, respectively. An improved particle swarm optimization technique was used for kinetic parameter estimation using the software MATLAB 7.0. In the modified ADM1, the Ks of acetate is 0.025, the km,ac is 12.51, and the KI,NH3 is 0.02. The results showed that the simple inhibition model could simulate the VFA generation accurately, while the Monod model was the better inhibition kinetics form in semi-continuous fermentation at pH 10.0. Finally, the modified ADM1 could successfully describe the VFA generation and ammonia accumulation in a 30 m3 full-scale sludge fermentation reactor, indicating that the developed model can be applicable to high-solid sludge anaerobic fermentation. Copyright © 2016. Published by Elsevier B.V.
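To show how a free-ammonia inhibition term typically enters an ADM1-type uptake rate, the sketch below multiplies a Monod substrate term by a non-competitive inhibition factor. The kinetic constants echo the values quoted above (units as in ADM1), while the substrate, biomass, and ammonia concentrations are placeholders; the specific "simple inhibition" form used by the authors may differ.

# ADM1-style acetate uptake with non-competitive free-ammonia inhibition.
def nh3_inhibition(s_nh3, k_i_nh3):
    """Inhibition factor: 1 when S_NH3 << K_I, approaching 0 when S_NH3 >> K_I."""
    return 1.0 / (1.0 + s_nh3 / k_i_nh3)

def acetate_uptake_rate(s_ac, x_ac, k_m, k_s, s_nh3, k_i_nh3):
    """Uptake rate = k_m * S/(K_s + S) * X * I_NH3."""
    return k_m * s_ac / (k_s + s_ac) * x_ac * nh3_inhibition(s_nh3, k_i_nh3)

print(acetate_uptake_rate(s_ac=0.5, x_ac=1.0, k_m=12.51, k_s=0.025,
                          s_nh3=0.01, k_i_nh3=0.02))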
Stream simulation in an analog model of the ground-water system on Long Island, New York
Harbaugh, Arlen W.; Getzen, Rufus T.
1977-01-01
The stream circuits of an electric analog model of the ground-water system of Long Island were modified to more accurately represent the relationship between streamflow and ground-water levels. Assumptions for use of the revised circuits are (1) that streams are strictly gaining, and (2) that ground-water seepage into the streams is proportional to the difference between the streambed elevation and the average water-table elevation near the stream. No seepage into streams occurs when ground-water levels drop below the streambed elevation. Regional simulation of the 1962-68 drought on Long Island was significantly improved by use of the revised stream circuits.
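The gaining-stream rule described above reduces to a simple head-difference relation: seepage is proportional to the water-table elevation minus the streambed elevation, and zero once the water table falls below the streambed. The conductance value and elevations in the sketch are hypothetical.

# Seepage into a strictly gaining stream, proportional to the head difference.
def stream_seepage(water_table_elev, streambed_elev, conductance):
    head_difference = water_table_elev - streambed_elev
    return conductance * max(head_difference, 0.0)   # no seepage when the water table is below the bed

print(stream_seepage(water_table_elev=52.3, streambed_elev=50.0, conductance=1500.0))
print(stream_seepage(water_table_elev=49.1, streambed_elev=50.0, conductance=1500.0))  # zero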
Urban Summertime Ozone of China: Peak Ozone Hour and Nighttime Mixing
NASA Astrophysics Data System (ADS)
Qu, H.; Wang, Y.; Zhang, R.
2017-12-01
We investigate the observed diurnal cycle of summertime ozone in the cities of China using a regional chemical transport model. The simulated daytime ozone is in general agreement with the observations. Model simulations suggest that the ozone peak time and peak concentration are a function of NOx (NO + NO2) and volatile organic compound (VOC) emissions. The differences between simulated and observed ozone peak time and peak concentration in some regions can be applied to understand biases in the emission inventories. For example, the VOCs emissions are underestimated over the Pearl River Delta (PRD) region, and either NOx emissions are underestimated or VOC emissions are overestimated over the Yangtze River Delta (YRD) regions. In contrast to the general good daytime ozone simulations, the simulated nighttime ozone has a large low bias of up to 40 ppbv. Nighttime ozone in urban areas is sensitive to the nocturnal boundary-layer mixing, and enhanced nighttime mixing (from the surface to 200-500 m) is necessary for the model to reproduce the observed level of ozone.
Miller, R.T.
1986-01-01
A study of the feasibility of storing heated water in a deep sandstone aquifer in Minnesota is described. The aquifer consists of four hydraulic zones that are areally anisotropic and have average hydraulic conductivities that range from 0.03 to 1.2 meters per day. A preliminary axially symmetric, nonisothermal, isotropic, single-phase, radial-flow, thermal-energy-transport model was constructed to investigate the sensitivity of model simulation to various hydraulic and thermal properties of the aquifer. A three-dimensional flow and thermal-energy transport model was constructed to incorporate the areal anisotropy of the aquifer. Analytical solutions of equations describing areally anisotropic groundwater flow around a doublet-well system were used to specify model boundary conditions for simulation of heat injection. The entire heat-injection-testing period of approximately 400 days was simulated. Model-computed temperatures compared favorably with field-recorded temperatures, with differences of no more than plus or minus 8 degrees Celsius. For each test cycle, model-computed aquifer thermal efficiency, defined as total heat withdrawn divided by total heat injected, was within plus or minus 2% of the field-calculated values.
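The recovery metric used in the study is easy to state explicitly. The snippet below is only an illustrative calculation of that definition; the heat totals are hypothetical numbers, not values from the report.

```python
def thermal_recovery_efficiency(heat_withdrawn, heat_injected):
    """Aquifer thermal efficiency as defined in the study:
    total heat withdrawn divided by total heat injected (dimensionless)."""
    return heat_withdrawn / heat_injected

# Hypothetical test-cycle totals in joules (not from the report):
print(thermal_recovery_efficiency(heat_withdrawn=2.6e12, heat_injected=4.0e12))  # -> 0.65
```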
Aromatic sulfonation with sulfur trioxide: mechanism and kinetic model.
Moors, Samuel L C; Deraet, Xavier; Van Assche, Guy; Geerlings, Paul; De Proft, Frank
2017-01-01
Electrophilic aromatic sulfonation of benzene with sulfur trioxide is studied with ab initio molecular dynamics simulations in the gas phase, and in explicit noncomplexing (CCl3F) and complexing (CH3NO2) solvent models. We investigate different possible reaction pathways, the number of SO3 molecules participating in the reaction, and the influence of the solvent. Our simulations confirm the existence of a low-energy concerted pathway with formation of a cyclic transition state involving two SO3 molecules. Based on the simulation results, we propose a sequence of elementary reaction steps and a kinetic model compatible with experimental data. Furthermore, a new alternative reaction pathway is proposed in the complexing solvent, involving two SO3 and one CH3NO2.
Symplectic multiparticle tracking model for self-consistent space-charge simulation
Qiang, Ji
2017-01-23
Symplectic tracking is important in accelerator beam dynamics simulation. So far, to the best of our knowledge, there is no self-consistent symplectic space-charge tracking model available in the accelerator community. In this paper, we present a two-dimensional and a three-dimensional symplectic multiparticle spectral model for space-charge tracking simulation. This model includes both the effect from external fields and the effect of self-consistent space-charge fields using a split-operator method. Such a model preserves the phase space structure and shows much less numerical emittance growth than the particle-in-cell model in the illustrative examples.
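The split-operator idea can be illustrated with a toy kick-drift-kick update, as in the sketch below; the linear focusing force and the placeholder space-charge kick are assumptions for illustration only and do not reproduce the paper's self-consistent spectral Poisson solver.

```python
import numpy as np

# Illustrative sketch of a split-operator (kick-drift-kick) step for
# multiparticle tracking: forces are applied as momentum "kicks", free
# motion as a "drift".  The force model here is a toy linear focusing plus
# a placeholder space-charge kick.

def space_charge_kick(x):
    # Placeholder: in the actual model this would come from a self-consistent
    # spectral solution of the Poisson equation over all particles.
    return -0.01 * (x - x.mean(axis=0))

def split_operator_step(x, p, dt, k_focus=1.0):
    p = p + 0.5 * dt * (-k_focus * x + space_charge_kick(x))  # half kick
    x = x + dt * p                                            # drift
    p = p + 0.5 * dt * (-k_focus * x + space_charge_kick(x))  # half kick
    return x, p

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 2))   # transverse positions
p = rng.normal(size=(1000, 2))   # transverse momenta
for _ in range(100):
    x, p = split_operator_step(x, p, dt=0.05)
print(x.std(axis=0), p.std(axis=0))
```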
NASA Astrophysics Data System (ADS)
Chen, Z.; Wang, Y. J.; Chen, G. N.; Liu, J.; Liu, Y. J.
2017-12-01
The In-situ Melting model of granite holds that granitic magma generated by anatexis is layer-like and that magma convection results in thickening of the layer. On this basis, and by integrating research findings on rheological transitions of rocks during crustal melting, we simulated the thermodynamic process of granite formation using Underworld 1.7. The numerical model is 100 km × 25 km with free-slip boundaries. The solidus temperature is postulated to be 600° and the fusing-off temperature to be 705°, which corresponds to the solid-liquid transition (SLT) of the partial melting system at a melt fraction of around 40%. The viscosities of rock and magma are calculated separately according to this melt percentage. The model runs on the Tian-He2 supercomputer, and the results indicate: 1) when the temperature exceeds the solidus of the rock, anatexis appears in the area below the 600° isotherm; 2) when the temperature surpasses the fusing-off temperature of the rock, a magma layer occurs in the area below the 705° isotherm; 3) magma convection, accompanied by stoping, initiates at a temperature of around 739.6°, and the upper surface of the magma layer, i.e. the MI (magma interface)/SLT (solid-liquid transition), moves upwards with time; 4) the velocity of the upward motion of the MI/SLT depends on the bottom temperature, and the thickness of the magma layer depends on the duration of convection. In summary, this modeling result demonstrates that the In-situ Melting model of granite is consistent with basic physical principles and reveals details of the thermodynamic conditions that interact with the development of melting and granite formation. Acknowledgement: This research is financially supported by NSFC (No 41372223, No 41230206 and No 41574087).
A numerical identifiability test for state-space models--application to optimal experimental design.
Hidalgo, M E; Ayesa, E
2001-01-01
This paper describes a mathematical tool for identifiability analysis, easily applicable to high-order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. The procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to the optimal experimental design of an Activated Sludge Model No. 1 calibration, in order to estimate the maximum specific growth rate µH and the concentration of heterotrophic biomass XBH.
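The core recursion, accumulating the information matrix from output sensitivities at each sampling instant of a simulated experiment, can be sketched as below; the toy model, noise level, and sampling schedule are assumptions for illustration only, not the authors' tool.

```python
import numpy as np

# Illustrative sketch: the Fisher information matrix is accumulated
# recursively over the sampling instants of a simulated calibration
# experiment from the output sensitivities dy/dtheta (here obtained by
# finite differences on a toy model); the inverse of the final matrix
# bounds the parameter estimation error covariance (Cramer-Rao).

def model_output(theta, t):
    mu_max, K = theta                     # toy Monod-type output
    return mu_max * t / (K + t)

def sensitivities(theta, t, eps=1e-6):
    base = model_output(theta, t)
    J = np.zeros(len(theta))
    for i in range(len(theta)):
        pert = np.array(theta, dtype=float)
        pert[i] += eps
        J[i] = (model_output(pert, t) - base) / eps
    return J

theta = np.array([4.0, 2.0])
sigma2 = 0.05 ** 2                        # assumed measurement noise variance
FIM = np.zeros((2, 2))
for t in np.linspace(0.5, 24.0, 48):      # simulated sampling instants
    J = sensitivities(theta, t)
    FIM += np.outer(J, J) / sigma2        # recursive update

cov_bound = np.linalg.inv(FIM)            # lower bound on estimation error covariance
print("expected parameter standard errors:", np.sqrt(np.diag(cov_bound)))
```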
Hens, Bart; Pathak, Shriram M; Mitra, Amitava; Patel, Nikunjkumar; Liu, Bo; Patel, Sanjaykumar; Jamei, Masoud; Brouwers, Joachim; Augustijns, Patrick; Turner, David B
2017-12-04
The aim of this study was to evaluate gastrointestinal (GI) dissolution, supersaturation, and precipitation of posaconazole, formulated as an acidified (pH 1.6) and neutral (pH 7.1) suspension. A physiologically based pharmacokinetic (PBPK) modeling and simulation tool was applied to simulate GI and systemic concentration-time profiles of posaconazole, which were directly compared with intraluminal and systemic data measured in humans. The Advanced Dissolution Absorption and Metabolism (ADAM) model of the Simcyp Simulator correctly simulated incomplete gastric dissolution and saturated duodenal concentrations of posaconazole in the duodenal fluids following administration of the neutral suspension. In contrast, gastric dissolution was approximately 2-fold higher after administration of the acidified suspension, which resulted in supersaturated concentrations of posaconazole upon transfer to the upper small intestine. The precipitation kinetics of posaconazole were described by two precipitation rate constants, extracted by semimechanistic modeling of a two-stage medium change in vitro dissolution test. The 2-fold difference in exposure in the duodenal compartment for the two formulations corresponded with a 2-fold difference in systemic exposure. This study demonstrated for the first time predictive in silico simulations of GI dissolution, supersaturation, and precipitation for a weakly basic compound in part informed by modeling of in vitro dissolution experiments and validated via clinical measurements in both GI fluids and plasma. Sensitivity analysis with the PBPK model indicated that the critical supersaturation ratio (CSR) and second precipitation rate constant (sPRC) are important parameters of the model. Due to the limitations of the two-stage medium change experiment the CSR was extracted directly from the clinical data. However, in vitro experiments with the BioGIT transfer system performed after completion of the in silico modeling provided an almost identical CSR to the clinical study value; this had no significant impact on the PBPK model predictions.
Evaluation of Simulated Photochemical Partitioning of Oxidized Nitrogen in the Upper Troposphere
Regional and global chemical transport models underpredict NOx (NO + NO2) in the upper troposphere where it is a precursor to the greenhouse gas ozone. The NOx bias has been shown in model evaluations using aircraft data (Singh et al., 2007) and to...
Development and validation of a septoplasty training model using 3-dimensional printing technology.
AlReefi, Mahmoud A; Nguyen, Lily H P; Mongeau, Luc G; Haq, Bassam Ul; Boyanapalli, Siddharth; Hafeez, Nauman; Cegarra-Escolano, Francois; Tewfik, Marc A
2017-04-01
Providing alternative training modalities may improve trainees' ability to perform septoplasty. Three-dimensional printing has been shown to be a powerful tool in surgical training. The objectives of this study were to explain the development of our 3-dimensional (3D) printed septoplasty training model, to assess its face and content validity, and to present evidence supporting its ability to distinguish between levels of surgical proficiency. Imaging data of a patient with a nasal septal deviation was selected for printing. Printing materials reproducing the mechanical properties of human tissues were selected based on literature review and prototype testing. Eight expert rhinologists, 6 senior residents, and 6 junior residents performed endoscopic septoplasties on the model and completed a postsimulation survey. Performance metrics in quality (final product analysis), efficiency (time), and safety (eg, perforation length, nares damage) were recorded and analyzed in a study-blind manner. The model was judged to be anatomically correct and the steps performed realistic, with scores of 4.05 ± 0.82 and 4.2 ± 1, respectively, on a 5-point Likert scale. Ninety-two percent of residents desired the simulator to be integrated into their teaching curriculum. There was a significant difference (p < 0.05) between the expert, intermediate, and novice groups in time taken and nares cuts, whereas other performance metrics showed no significant difference. To our knowledge, there are no other simulator training models for septoplasty. Our model incorporates 2 different materials mixed into the 3 relevant consistencies necessary to simulate septoplasty. Our findings provide evidence supporting the validity of the model. © 2016 ARS-AAOA, LLC.
Analysis about modeling MEC7000 excitation system of nuclear power unit
NASA Astrophysics Data System (ADS)
Liu, Guangshi; Sun, Zhiyuan; Dou, Qian; Liu, Mosi; Zhang, Yihui; Wang, Xiaoming
2018-02-01
Given the importance of accurately modeling excitation systems in stability calculations for inland nuclear power plants, and the lack of research on modeling the MEC7000 excitation system, this paper summarizes a general method for modeling and simulating the MEC7000 excitation system. The method also resolves the key issues of computing the I/O interface parameters and of converting the measured excitation system model into a BPA simulation model. On this basis, the simulation model of the MEC7000 excitation system is completed for the first time domestically. A no-load small-disturbance check demonstrates that the proposed model and algorithm are correct and efficient.
Evaluation of a two-dimensional numerical model for air quality simulation in a street canyon
NASA Astrophysics Data System (ADS)
Okamoto, Shin `Ichi; Lin, Fu Chi; Yamada, Hiroaki; Shiozawa, Kiyoshige
For many urban areas, the most severe air pollution caused by automobile emissions appears along roads surrounded by tall buildings: the so-called street canyon. A practical two-dimensional numerical model has been developed for this kind of road structure. The model contains two submodels: a wind-field model and a diffusion model based on a Monte Carlo particle scheme. To evaluate its predictive performance, an air quality simulation was carried out at three trunk roads in the Tokyo metropolitan area: Nishi-Shimbashi, Aoyama and Kanda-Nishikicho (using SF6 as a tracer and NOx measurements). Since the model is two-dimensional and cannot be used for the parallel wind condition, the perpendicular wind condition was selected for the simulation. The correlation coefficients for the SF6 and NOx data in Aoyama were 0.67 and 0.62, respectively. When its predictive performance is compared with that of other models, this model is comparable to the SRI model and superior to the APPS three-dimensional numerical model.
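As an illustration of a Monte Carlo particle scheme of the kind used in the diffusion submodel, the sketch below advects particles with a toy recirculating canyon wind field, adds random turbulent displacements, and bins them into a concentration field; the wind field, turbulence scale, and release location are assumptions, not the paper's wind submodel.

```python
import numpy as np

# Illustrative sketch of a Lagrangian (Monte Carlo) particle dispersion step:
# particles released near street level are advected by a prescribed 2-D
# canyon wind field and given random turbulent displacements; concentration
# is estimated by counting particles in grid cells.

rng = np.random.default_rng(1)

def canyon_wind(x, z, W=20.0, H=20.0):
    # toy single-vortex recirculation inside a canyon of width W and height H
    u = np.sin(np.pi * z / H) * np.cos(np.pi * x / W)
    w = -np.cos(np.pi * z / H) * np.sin(np.pi * x / W)
    return u, w

n, dt, sigma = 5000, 0.5, 0.4               # particles, time step, turbulence scale
x = rng.uniform(8.0, 12.0, n)               # release near the traffic lane
z = np.full(n, 1.0)
for _ in range(200):
    u, w = canyon_wind(x, z)
    x = np.clip(x + u * dt + sigma * np.sqrt(dt) * rng.normal(size=n), 0.0, 20.0)
    z = np.clip(z + w * dt + sigma * np.sqrt(dt) * rng.normal(size=n), 0.0, 20.0)

conc, xedges, zedges = np.histogram2d(x, z, bins=(10, 10), range=[[0, 20], [0, 20]])
print(conc.astype(int))
```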
2007-08-01
The primary somatotypes, which were identified by multivariate analysis, had no significant effect on the simulated thermo-physiological responses of the population. Anthropometric values for each somatotype were applied to a thermal regulatory model to compare the resulting physiological responses.
Comparison of Hall Thruster Plume Expansion Model with Experimental Data (Preprint)
2006-07-01
AQUILA, the focus of this study, is a hybrid PIC model within the COLISEUM framework that tracks particles along an unstructured tetrahedral mesh, in contrast to solvers that use a Cartesian mesh. The simulations are compared with measurements of the ion current density profile, ion energy distributions, and ion species fraction distributions, obtained with instruments including a nude Faraday probe.
Li, Dejun; Lanigan, Gary; Humphreys, James
2011-01-01
There is uncertainty about the potential reduction of soil nitrous oxide (N2O) emission when fertilizer nitrogen (FN) is partially or completely replaced by biological N fixation (BNF) in temperate grassland. The objectives of this study were to 1) investigate the changes in N2O emissions when BNF is used to replace FN in permanent grassland, and 2) evaluate the applicability of the process-based model DNDC to simulate N2O emissions from Irish grasslands. Three grazing treatments were: (i) ryegrass (Lolium perenne) grasslands receiving 226 kg FN ha−1 yr−1 (GG+FN), (ii) ryegrass/white clover (Trifolium repens) grasslands receiving 58 kg FN ha−1 yr−1 (GWC+FN) applied in spring, and (iii) ryegrass/white clover grasslands receiving no FN (GWC-FN). Two background treatments, un-grazed swards with ryegrass only (G–B) or ryegrass/white clover (WC–B), did not receive slurry or FN and the herbage was harvested by mowing. There was no significant difference in annual N2O emissions between G–B (2.38±0.12 kg N ha−1 yr−1 (mean±SE)) and WC-B (2.45±0.85 kg N ha−1 yr−1), indicating that N2O emission due to BNF itself and clover residual decomposition from permanent ryegrass/clover grassland was negligible. N2O emissions were 7.82±1.67, 6.35±1.14 and 6.54±1.70 kg N ha−1 yr−1, respectively, from GG+FN, GWC+FN and GWC-FN. N2O fluxes simulated by DNDC agreed well with the measured values with significant correlation between simulated and measured daily fluxes for the three grazing treatments, but the simulation did not agree very well for the background treatments. DNDC overestimated annual emission by 61% for GG+FN, and underestimated by 45% for GWC-FN, but simulated very well for GWC+FN. Both the measured and simulated results supported that there was a clear reduction of N2O emissions when FN was replaced by BNF. PMID:22028829
NASA Astrophysics Data System (ADS)
Wang, Qiongzhen; Dong, Xinyi; Fu, Joshua S.; Xu, Jian; Deng, Congrui; Jiang, Yilun; Fu, Qingyan; Lin, Yanfen; Huang, Kan; Zhuang, Guoshun
2018-03-01
Near-surface and vertical in situ measurements of atmospheric particles were conducted in Shanghai during 19-23 March 2010 to explore the transport and chemical evolution of dust particles in a super dust storm. An air quality model with an optimized physical dust emission scheme and newly implemented dust chemistry was utilized to study the impact of dust chemistry on regional air quality. Two discontinuous dust periods were observed, one traveling over northern China (DS1) and the other passing over the coastal regions of eastern China (DS2). Stronger mixing between dust and anthropogenic emissions was found in DS2, reflected by the higher SO2/PM10 and NO2/PM10 ratios as well as by typical pollution elements such as As, Cd, Pb, and Zn. As a result, the concentrations of SO42- and NO3- and the Ca2+/Ca ratio were more elevated in DS2 than in DS1, whereas the opposite held for the [NH4+]/[SO42- + NO3-] ratio, suggesting that heterogeneous reactions between calcite and acid gases were significantly promoted in DS2 owing to the higher relative humidity and higher levels of gaseous pollution precursors. Lidar observations showed a columnar vertical structure of particle optical properties in DS1, in which dust accounted for ~80-90% of the total particle extinction from near the ground to ~700 m. In contrast, the dust plumes in DS2 were confined to lower altitudes, while the extinction from spherical particles exhibited a maximum at a higher altitude of ~800 m. The model simulation reproduced results relatively consistent with the observations, showing that strong impacts of dust heterogeneous reactions on secondary aerosol formation occurred in areas where anthropogenic emissions were intensive. Compared with the sulfate simulation, the treatment of nitrate formation on dust needs to be improved in future modeling efforts.
Visualizing ultrasound through computational modeling
NASA Technical Reports Server (NTRS)
Guo, Theresa W.
2004-01-01
The Doppler Ultrasound Hematocrit Project (DHP) hopes to find non-invasive methods of determining a person's blood characteristics. Because of the limits of microgravity and the space travel environment, it is important to find non-invasive methods of evaluating the health of persons in space. Presently, there is no well-developed method of determining blood composition non-invasively. This project hopes to use ultrasound and Doppler signals to evaluate hematocrit, the percentage by volume of red blood cells within whole blood. These non-invasive techniques may also be developed for use on earth for trauma patients where invasive measures might be detrimental. Computational modeling is a useful tool for collecting preliminary information and predictions for the laboratory research. We hope to find and develop a computer program that will be able to simulate the ultrasound signals the project will work with. Simulated models of test conditions will more easily show what might be expected from laboratory results and thus help the research group make informed decisions before and during experimentation. There are several existing Matlab-based computer programs available, designed to interpret and simulate ultrasound signals. These programs will be evaluated to find which is best suited for the project's needs. The criteria of evaluation are 1) the program must be able to specify transducer properties and specify transmitting and receiving signals, 2) the program must be able to simulate ultrasound signals through different attenuating media, 3) the program must be able to process moving targets in order to simulate the Doppler effects that are associated with blood flow, and 4) the program should be user-friendly and adaptable to various models. After a computer program is chosen, two simulation models will be constructed. These models will simulate and interpret an RF data signal and a Doppler signal.
An Open Simulation System Model for Scientific Applications
NASA Technical Reports Server (NTRS)
Williams, Anthony D.
1995-01-01
A model for a generic and open environment for running multi-code or multi-application simulations - called the Open Simulation System Model (OSSM) - is proposed and defined. This model attempts to meet the requirements of complex systems like the Numerical Propulsion Simulator System (NPSS). OSSM places no restrictions on the types of applications that can be integrated at any stage of its evolution, including applications of different disciplines, fidelities, etc. An implementation strategy is proposed that starts with a basic prototype and evolves over time to accommodate an increasing number of applications. Potential (standard) software that may aid in the design and implementation of the system is also identified.
Computational Modeling Approaches to Multiscale Design of Icephobic Surfaces
NASA Technical Reports Server (NTRS)
Tallman, Aaron; Wang, Yan; Vargas, Mario
2017-01-01
To aid in the design of surfaces that prevent icing, a model and computational simulation of impact ice formation at the single-droplet scale was implemented. The nucleation of a single supercooled droplet impacting on a substrate, in rime ice conditions, was simulated using open-source computational fluid dynamics (CFD) software. No existing model simulates the simultaneous impact and freezing of a single supercooled water droplet; for this 10-week project, a low-fidelity feasibility study was the goal.
Goodman, Dan F M; Brette, Romain
2009-09-01
"Brian" is a simulator for spiking neural networks (http://www.briansimulator.org). The focus is on making the writing of simulation code as quick and easy as possible for the user, and on flexibility: new and non-standard models are no more difficult to define than standard ones. This allows scientists to spend more time on the details of their models, and less on their implementation. Neuron models are defined by writing differential equations in standard mathematical notation, facilitating scientific communication. Brian is written in the Python programming language, and uses vector-based computation to allow for efficient simulations. It is particularly useful for neuroscientific modelling at the systems level, and for teaching computational neuroscience.
A multimedia fate and chemical transport modeling system for pesticides: II. Model evaluation
NASA Astrophysics Data System (ADS)
Li, Rong; Scholtz, M. Trevor; Yang, Fuquan; Sloan, James J.
2011-07-01
Pesticides have adverse health effects and can be transported over long distances to contaminate sensitive ecosystems. To address problems caused by environmental pesticides we developed a multimedia multi-pollutant modeling system, and here we present an evaluation of the model by comparing modeled results against measurements. The modeled toxaphene air concentrations for two sites, in Louisiana (LA) and Michigan (MI), are in good agreement with measurements (average concentrations agree to within a factor of 2). Because the residue inventory showed no soil residues at these two sites, resulting in no emissions, the concentrations must be caused by transport; the good agreement between the modeled and measured concentrations suggests that the model simulates atmospheric transport accurately. Compared to the LA and MI sites, the measured air concentrations at two other sites having toxaphene soil residues leading to emissions, in Indiana and Arkansas, showed more pronounced seasonal variability (higher in warmer months); this pattern was also captured by the model. The model-predicted toxaphene concentration fraction on particles (0.5-5%) agrees well with measurement-based estimates (3% or 6%). There is also good agreement between modeled and measured dry (1:1) and wet (within a factor of less than 2) depositions in Lake Ontario. Additionally this study identified erroneous soil residue data around a site in Texas in a published US toxaphene residue inventory, which led to very low modeled air concentrations at this site. Except for the erroneous soil residue data around this site, the good agreement between the modeled and observed results implies that both the US and Mexican toxaphene soil residue inventories are reasonably good. This agreement also suggests that the modeling system is capable of simulating the important physical and chemical processes in the multimedia compartments.
Finite-Difference Time-Domain Analysis of Tapered Photonic Crystal Fiber
NASA Astrophysics Data System (ADS)
Ali, M. I. Md; Sanusidin, S. N.; Yusof, M. H. M.
2018-03-01
This paper describes the simulation of a tapered single-mode LMA-8 photonic crystal fiber (PCF) using Optiwave simulation software, based on the correlation of the scattering pattern at a wavelength of 1.55 μm, the analysis of the transmission spectrum over the wavelength range 1.0 to 2.5 μm, and the correlation of the transmission spectrum with the refractive index change in the photonic crystal holes with respect to taper sizes of 0.1 to 1.0. The main objective is to simulate the tapered LMA-8 PCF with the Finite-Difference Time-Domain (FDTD) technique for sensing applications, improving the capabilities of the PCF without collapsing the crystal holes. The FDTD outputs used are the scattering pattern and the transverse transmission, and principal component analysis (PCA) is used as a mathematical tool to model the data in MathCad software. The simulation results show that there is no obvious correlation for the scattering pattern at a wavelength of 1.55 μm, that a correlation is obtained between taper size and transverse transmission, and that there is a parabolic relationship between taper size and the refractive index change inside the crystal structure.
Huang, Kuan-Chun; White, Ryan J
2013-08-28
We develop a random walk model to simulate the Brownian motion and the electrochemical response of a single molecule confined to an electrode surface via a flexible molecular tether. We use our simple model, which requires no prior knowledge of the physics of the molecular tether, to predict and better understand the voltammetric response of surface-confined redox molecules when motion of the redox molecule becomes important. The single molecule is confined to a hemispherical volume with a maximum radius determined by the flexible molecular tether (5-20 nm) and is allowed to undergo true three-dimensional diffusion. Distance- and potential-dependent electron transfer probabilities are evaluated throughout the simulations to generate cyclic voltammograms of the model system. We find that at sufficiently slow cyclic voltammetric scan rates the electrochemical reaction behaves like an adsorbed redox molecule with no mass transfer limitation; thus, the peak current is proportional to the scan rate. Conversely, at faster scan rates the diffusional motion of the molecule limits the simulated peak current, which exhibits a linear dependence on the square root of the scan rate. The switch between these two limiting regimes occurs when the diffusion layer thickness, (2Dt)^(1/2), is ~10 times the tether length. Finally, we find that our model predicts the voltammetric behavior of a redox-active methylene blue tethered to an electrode surface via short flexible single-stranded, polythymine DNAs, allowing the estimation of diffusion coefficients for the end-tethered molecule.
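A minimal sketch of the basic mechanics is given below: a random walk confined to a hemisphere above the electrode, with a transfer probability that decays with distance and grows with overpotential. The rate expression, parameter values, and function names are assumptions chosen only to make the example run; they are not the authors' calibrated model.

```python
import numpy as np

# Illustrative sketch: a tethered redox molecule performs a 3-D random walk
# inside a hemisphere whose radius equals the tether length above the
# electrode plane (z = 0).  At each step the probability of electron transfer
# is evaluated from a rate that decays exponentially with distance from the
# surface and depends on the overpotential through a Butler-Volmer-type factor.

rng = np.random.default_rng(0)

L = 10e-9             # tether length, m
D = 1e-10             # diffusion coefficient, m^2/s
dt = 1e-9             # time step, s
beta = 1.4e10         # tunnelling decay constant, 1/m
k0 = 1e6              # standard rate constant at contact, 1/s
alpha, f = 0.5, 38.9  # transfer coefficient and F/RT at 25 C, 1/V

def transfer_probability(z, eta):
    """Probability of (oxidative) electron transfer during one step at height z."""
    k = k0 * np.exp(-beta * z) * np.exp(alpha * f * eta)
    return 1.0 - np.exp(-k * dt)

pos = np.array([0.0, 0.0, 1e-9])
step = np.sqrt(2.0 * D * dt)                  # rms displacement per axis per step
expected_transfers = 0.0
for _ in range(100000):
    pos = pos + step * rng.normal(size=3)
    pos[2] = abs(pos[2])                      # reflect at the electrode plane
    r = np.linalg.norm(pos)
    if r > L:                                 # reflect at the tether limit
        pos *= L / r
    expected_transfers += transfer_probability(pos[2], eta=0.1)
print("expected transfer events over the walk:", expected_transfers)
```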
NASA Astrophysics Data System (ADS)
Mupangwa, W.; Jewitt, G. P. W.
Crop output from the smallholder farming sector in sub-Saharan Africa is trailing population growth leading to widespread household food insecurity. It is therefore imperative that crop production in semi-arid areas be improved in order to meet the food demand of the ever increasing human population. No-till farming practices have the potential to increase crop productivity in smallholder production systems of sub-Saharan Africa, but rarely do because of the constraints experienced by these farmers. One of the most significant of these is the consumption of mulch by livestock. In the absence of long term on-farm assessment of the no-till system under smallholder conditions, simulation modelling is a tool that provides an insight into the potential benefits and can highlight shortcomings of the system under existing soil, climatic and socio-economic conditions. Thus, this study was designed to better understand the long term impact of no-till system without mulch cover on field water fluxes and maize productivity under a highly variable rainfall pattern typical of semi-arid South Africa. The simulated on-farm experiment consisted of two tillage treatments namely oxen-drawn conventional ploughing (CT) and ripping (NT). The APSIM model was applied for a 95 year period after first being calibrated and validated using measured runoff and maize yield data. The predicted results showed significantly higher surface runoff from the conventional system compared to the no-till system. Predicted deep drainage losses were higher from the NT system compared to the CT system regardless of the rainfall pattern. However, the APSIM model predicted 62% of the annual rainfall being lost through soil evaporation from both tillage systems. The predicted yields from the two systems were within 50 kg ha -1 difference in 74% of the years used in the simulation. In only 9% of the years, the model predicted higher grain yield in the NT system compared to the CT system. It is suggested that NT systems may have great potential for reducing surface runoff from smallholder fields and that the NT systems may have potential to recharge groundwater resources through increased deep drainage. However, it was also noted that the APSIM model has major shortcomings in simulating the water balance at this level of detail and that the findings need to be confirmed by further field based and modelling studies. Nevertheless, it is clear that without mulch or a cover crop, the continued high soil evaporation and correspondingly low crop yields suggest that there is little benefit to farmers adopting NT systems in semiarid environments, despite potential water resources benefits downstream. In such cases, the potential for payment for ecosystem services should be explored.
Lambe, Andrew; Massoli, Paola; Zhang, Xuan; ...
2017-06-22
Oxidation flow reactors that use low-pressure mercury lamps to produce hydroxyl (OH) radicals are an emerging technique for studying the oxidative aging of organic aerosols. Here, ozone (O3) is photolyzed at 254 nm to produce O(1D) radicals, which react with water vapor to produce OH. However, the need to use parts-per-million levels of O3 hinders the ability of oxidation flow reactors to simulate NOx-dependent secondary organic aerosol (SOA) formation pathways. Simple addition of nitric oxide (NO) results in fast conversion of NOx (NO + NO2) to nitric acid (HNO3), making it impossible to sustain NOx at levels that are sufficient to compete with hydroperoxy (HO2) radicals as a sink for organic peroxy (RO2) radicals. We developed a new method that is well suited to the characterization of NOx-dependent SOA formation pathways in oxidation flow reactors. NO and NO2 are produced via the reaction O(1D) + N2O → 2NO, followed by the reaction NO + O3 → NO2 + O2. Laboratory measurements coupled with photochemical model simulations suggest that O(1D) + N2O reactions can be used to systematically vary the branching ratio of RO2 + NO reactions relative to RO2 + HO2 and/or RO2 + RO2 reactions over a range of conditions relevant to atmospheric SOA formation. We demonstrate proof of concept using high-resolution time-of-flight chemical ionization mass spectrometer (HR-ToF-CIMS) measurements with nitrate (NO3-) reagent ion to detect gas-phase oxidation products of isoprene and α-pinene previously observed in NOx-influenced environments and in laboratory chamber experiments.
NASA Astrophysics Data System (ADS)
Barthlott, C.; Hoose, C.
2015-11-01
This paper assesses the resolution dependence of clouds and precipitation over Germany by numerical simulations with the COnsortium for Small-scale MOdeling (COSMO) model. Six intensive observation periods of the HOPE (HD(CP)2 Observational Prototype Experiment) measurement campaign conducted in spring 2013 and one summer day of the same year are simulated. By means of a series of grid-refinement resolution tests (horizontal grid spacings of 2.8 km, 1 km, 500 m, and 250 m), the applicability of the COSMO model to represent real weather events in the gray zone, i.e., the scale ranging between the mesoscale limit (no turbulence resolved) and the large-eddy simulation limit (energy-containing turbulence resolved), is tested. To the authors' knowledge, this paper presents the first non-idealized COSMO simulations in the peer-reviewed literature at the 250-500 m scale. It is found that the kinetic energy spectra derived from model output show the expected -5/3 slope, as well as a dependency on model resolution, and that the effective resolution lies between 6 and 7 times the nominal resolution. Although the representation of a number of processes is enhanced with resolution (e.g., boundary-layer thermals, low-level convergence zones, gravity waves), their influence on the temporal evolution of precipitation is rather weak. However, rain intensities vary with resolution, leading to differences in the total rain amount of up to +48 %. Furthermore, the location of rain is similar for the springtime cases with moderate and strong synoptic forcing, whereas significant differences are obtained for the summertime case with air mass convection. Domain-averaged liquid water paths and cloud condensate profiles are used to analyze the temporal and spatial variability of the simulated clouds. Finally, probability density functions of convection-related parameters are analyzed to investigate their dependence on model resolution and their impact on cloud formation and subsequent precipitation.
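As an illustration of the spectral diagnostic mentioned above, the sketch below computes a one-dimensional kinetic energy spectrum with the FFT and fits its slope; the wind transect is synthetic, generated with a prescribed -5/3 spectrum only so that the example is self-contained, and is not model output from the study.

```python
import numpy as np

# Illustrative sketch: a 1-D kinetic energy spectrum from a wind transect,
# of the kind used to diagnose the -5/3 range and the effective resolution
# of a model.

rng = np.random.default_rng(0)
n, dx = 1024, 250.0                          # points, grid spacing in metres
k = np.fft.rfftfreq(n, d=dx)                 # wavenumber (cycles per metre)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-5.0 / 6.0)              # |u_hat| ~ k^(-5/6) gives E(k) ~ k^(-5/3)
phases = np.exp(2j * np.pi * rng.random(len(k)))
u = np.fft.irfft(amp * phases, n)            # synthetic wind transect

u_hat = np.fft.rfft(u)
E = 0.5 * np.abs(u_hat) ** 2                 # (unnormalised) spectral kinetic energy
slope = np.polyfit(np.log(k[2:n // 4]), np.log(E[2:n // 4]), 1)[0]
print("fitted spectral slope:", round(slope, 2))   # close to -5/3
```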
ERIC Educational Resources Information Center
Reardon, Sean F.; Baker, Rachel; Kasman, Matt; Klasik, Daniel; Townsend, Joseph
2017-01-01
This paper simulates a system of socioeconomic status (SES)-based affirmative action in college admissions and examines the extent to which it can produce racial diversity in selective colleges. Using simulation models, we investigate the potential relative effects of race- and/or SES-based affirmative action policies, alongside targeted,…
Hydrograph separation for karst watersheds using a two-domain rainfall-discharge model
Long, Andrew J.
2009-01-01
Highly parameterized, physically based models may be no more effective at simulating the relations between rainfall and outflow from karst watersheds than are simpler models. Here an antecedent rainfall and convolution model was used to separate a karst watershed hydrograph into two outflow components: one originating from focused recharge in conduits and one originating from slow flow in a porous annex system. In convolution, parameters of a complex system are lumped together in the impulse-response function (IRF), which describes the response of the system to an impulse of effective precipitation. Two parametric functions in superposition approximate the two-domain IRF. The outflow hydrograph can be separated into flow components by forward modeling with isolated IRF components, which provides an objective criterion for separation. As an example, the model was applied to a karst watershed in the Madison aquifer, South Dakota, USA. Simulation results indicate that this watershed is characterized by a flashy response to storms, with a peak response time of 1 day, but that 89% of the flow results from the slow-flow domain, with a peak response time of more than 1 year. This long response time may be the result of perched areas that store water above the main water table. Simulation results indicated that some aspects of the system are stationary but that nonlinearities also exist.
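A minimal numerical sketch of this two-domain convolution idea is given below, using exponential impulse-response functions and made-up rainfall purely for illustration (the study fits other parametric IRF forms to real data); convolving the effective rainfall with each IRF component separately yields the quick-flow and slow-flow parts of the hydrograph.

```python
import numpy as np

# Illustrative sketch: two-component IRF convolution for hydrograph
# separation.  The fractions (11 % quick flow, 89 % slow flow) echo the
# example result quoted above; the IRF shapes and rainfall are invented.

days = np.arange(365)
rain = np.zeros(365)
rain[[20, 90, 91, 200, 300]] = [10.0, 25.0, 15.0, 30.0, 12.0]   # effective rainfall, mm

def exponential_irf(t, mean_response_days):
    h = np.exp(-t / mean_response_days) / mean_response_days
    return h / h.sum()                        # unit area (mass conserving)

irf_fast = 0.11 * exponential_irf(days, 1.0)      # quick-flow (conduit) component
irf_slow = 0.89 * exponential_irf(days, 400.0)    # slow-flow (annex/porous) component

q_fast = np.convolve(rain, irf_fast)[:365]
q_slow = np.convolve(rain, irf_slow)[:365]
q_total = q_fast + q_slow
print("peak quick flow:", round(q_fast.max(), 2), " peak slow flow:", round(q_slow.max(), 3))
```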
2007-12-21
The physics models include both analytical models and numerical simulations, and account for the hydrodynamics and the physical characteristics of the polymers. The numerical simulations succeed in replicating some experimental measurements. The complete model is coded in MatLab, with all units in cgs.
Fine, Jason M.; Kuniansky, Eve L.
2014-01-01
Onslow County, North Carolina, is located within the designated Central Coastal Plain Capacity Use Area (CCPCUA). The CCPCUA was designated by law as a result of groundwater level declines of as much as 200 feet during the past four decades within aquifers in rocks of Cretaceous age in the central Coastal Plain of North Carolina and a depletion of water in storage from increased groundwater withdrawals in the area. The declines and depletion of water in storage within the Cretaceous aquifers increase the potential for saltwater migration—both lateral encroachment and upward leakage of brackish water. Within the CCPCUA, a reduction in groundwater withdrawals over a period of 16 years from 2003 to 2018 is mandated. Under the CCPCUA rules, withdrawals in excess of 100,000 gallons per day from any of the Cretaceous aquifer well systems are subject to water-use reductions of as much as 75 percent. To assess the effects of the CCPCUA rules and to assist with groundwater-management decisions, a numerical model was developed to simulate the groundwater flow and chloride concentrations in the surficial Castle Hayne, Beaufort, Peedee, and Black Creek aquifers in the Onslow County area. The model was used to (1) simulate groundwater flow from 1900 to 2010; (2) assess chloride movement throughout the aquifer system; and (3) create hypothetical scenarios of future groundwater development. After calibration of a groundwater flow model and conversion to a variable-density model, five scenarios were created to simulate future groundwater conditions in the Onslow County area: (1) full implementation of the CCPCUA rules with three phases of withdrawal reductions simulated through 2028; (2) implementation of only phase 1 withdrawal reductions of the CCPCUA rules and simulated through 2028; (3) implementation of only phases 1 and 2 withdrawal reductions of the CCPCUA rules and simulated through 2028; (4) full implementation of the CCPCUA rules with the addition of withdrawals from the Castle Hayne aquifer in Onslow County at the fully permitted amount in the final stress period and simulated through 2028; and (5) full implementation of the CCPCUA rules as in scenario 1 except simulated through 2100. Results from the scenarios give an indication of the water-level recovery in the Black Creek aquifer throughout each phase of the CCPCUA rules in Onslow County. Furthermore, as development of the Castle Hayne aquifers was increased in the scenarios, cones of depression were created around pumping centers. Additionally, the scenarios indicated little to no change in chloride concentrations for the time periods simulated.
Vreck, D; Gernaey, K V; Rosen, C; Jeppsson, U
2006-01-01
In this paper, implementation of the Benchmark Simulation Model No 2 (BSM2) within Matlab-Simulink is presented. The BSM2 is developed for plant-wide WWTP control strategy evaluation on a long-term basis. It consists of a pre-treatment process, an activated sludge process and sludge treatment processes. Extended evaluation criteria are proposed for plant-wide control strategy assessment. Default open-loop and closed-loop strategies are also proposed to be used as references with which to compare other control strategies. Simulations indicate that the BSM2 is an appropriate tool for plant-wide control strategy evaluation.
Wu, D B C; Chaiyakunapruk, N; Pratoomsoot, C; Lee, K K C; Chong, H Y; Nelson, R E; Smith, P F; Kirkpatrick, C M; Kamal, M A; Nieforth, K; Dall, G; Toovey, S; Kong, D C M; Kamauu, A; Rayner, C R
2018-03-01
Simulation models are used widely in pharmacology, epidemiology and health economics (HE). However, there have been no attempts to incorporate models from these disciplines into a single integrated model. Accordingly, we explored this linkage to evaluate the epidemiological and economic impact of oseltamivir dose optimisation in supporting pandemic influenza planning in the USA. An HE decision analytic model was linked to a pharmacokinetic/pharmacodynamic (PK/PD)-dynamic transmission model simulating the impact of pandemic influenza with low virulence and low transmissibility, and with high virulence and high transmissibility. The cost-utility analysis was from the payer and societal perspectives, comparing oseltamivir 75 and 150 mg twice daily (BID) to no treatment over a 1-year time horizon. Model parameters were derived from published studies. Outcomes were measured as cost per quality-adjusted life year (QALY) gained. Sensitivity analyses were performed to examine the integrated model's robustness. Under both pandemic scenarios, compared to no treatment, the use of oseltamivir 75 or 150 mg BID led to a significant reduction of influenza episodes and influenza-related deaths, translating to substantial savings of QALYs. Overall drug costs were offset by the reduction of both direct and indirect costs, making these two interventions cost-saving from both perspectives. The results were sensitive to the proportion of inpatient presentation at the emergency visit and to patients' quality of life. Integrating PK/PD-EPI/HE models is achievable. Whilst further refinement of this novel linkage model to more closely mimic reality is needed, the current study has generated useful insights to support influenza pandemic planning.
Simulation in Surgical Education
de Montbrun, Sandra L.; MacRae, Helen
2012-01-01
The pedagogical approach to surgical training has changed significantly over the past few decades. No longer are surgical skills solely acquired through a traditional apprenticeship model of training. The acquisition of many technical and nontechnical skills is moving from the operating room to the surgical skills laboratory through the use of simulation. Many platforms exist for the learning and assessment of surgical skills. In this article, the authors provide a broad overview of some of the currently available surgical simulation modalities including bench-top models, laparoscopic simulators, simulation for new surgical technologies, and simulation for nontechnical surgical skills. PMID:23997671
Webb, Richard M.; Sandstrom, Mark W.; Krutz, Jason L.; Shaner, Dale L.
2011-01-01
In the present study a branched serial first-order decay (BSFOD) model is presented and used to derive transformation rates describing the decay of a common herbicide, atrazine, and its metabolites observed in unsaturated soils adapted to previous atrazine applications and in soils with no history of atrazine applications. Calibration of BSFOD models for soils throughout the country can reduce the uncertainty, relative to that of traditional models, in predicting the fate and transport of pesticides and their metabolites and thus support improved agricultural management schemes for reducing threats to the environment. Results from application of the BSFOD model to better understand the degradation of atrazine supports two previously reported conclusions: atrazine (6-chloro-N-ethyl-N′-(1-methylethyl)-1,3,5-triazine-2,4-diamine) and its primary metabolites are less persistent in adapted soils than in nonadapted soils; and hydroxyatrazine was the dominant primary metabolite in most of the soils tested. In addition, a method to simulate BSFOD in a one-dimensional solute-transport unsaturated zone model is also presented.
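As a sketch of what a branched serial first-order decay chain looks like numerically, the snippet below integrates a parent compound that branches into hydroxyatrazine and a lumped pool of other primary metabolites; the rate constants and branching fraction are invented illustrative values, not the calibrated transformation rates from the study.

```python
# Illustrative BSFOD sketch: the parent (atrazine) decays with first-order
# rate k_par; a fraction f_ha of the transformed mass forms hydroxyatrazine
# and the remainder forms the other primary metabolites, each of which then
# decays with its own first-order rate.

k_par, k_ha, k_other = 0.05, 0.01, 0.02   # 1/day (illustrative)
f_ha = 0.7                                # branching fraction to hydroxyatrazine

dt, n_days = 0.1, 200
atr, ha, other = 1.0, 0.0, 0.0            # relative concentrations
for _ in range(int(n_days / dt)):
    d_atr = -k_par * atr
    ha += (f_ha * k_par * atr - k_ha * ha) * dt
    other += ((1 - f_ha) * k_par * atr - k_other * other) * dt
    atr += d_atr * dt
print(f"after {n_days} d: atrazine={atr:.3f}, hydroxyatrazine={ha:.3f}, other metabolites={other:.3f}")
```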
A four-field model for collisionless reconnection: Hamiltonian structure and numerical simulations
NASA Astrophysics Data System (ADS)
Tassi, Emanuele; Grasso, Daniela; Pegoraro, Francesco
2008-11-01
A 4-field model for magnetic reconnection in collisionless plasmas is investigated both analytically and numerically. The model equations are shown to admit a non-canonical Hamiltonian formulation with four infinite families of Casimir invariants [1]. Numerical simulations show that, consistently with previously investigated models [2,3], in the absence of significant fluctuations along the toroidal direction, reconnection can lead to a macroscopic saturated state exhibiting filamentation on microscopic scales, or to a secondary Kelvin-Helmholtz-like instability, depending on the value of a parameter measuring the compressibility of the electron fluid. The novel feature exhibited by the four-field model is the coexistence of significant filamentation with a secondary instability when magnetic and velocity perturbations along the toroidal direction are no longer negligible. An interpretation of this phenomenon in terms of Casimir invariants is given. [1] E. Tassi et al., Plasma Phys. Contr. Fus., 50, 085014 (2008); [2] D. Grasso et al., Phys. Rev. Lett. 86, 5051 (2001); [3] D. Del Sarto, F. Califano and F. Pegoraro, Phys. Plasmas 12, 012317 (2005)
Chen, Yan; Huang, Fang; Xie, Xin-Yuan
2014-04-01
An Acidithiobacillus ferrooxidans strain, WZ-1 (GenBank sequence number: JQ968461), was used as the research object. The effects of Cl-, NO3-, F- and four kinds of simulated inorganic-anion leaching solutions of electroplating sludge on the Fe2+ oxidation bioactivity and apparent respiratory rate of WZ-1 were investigated. The results showed that Cl- and NO3- did not have any influence on the bioactivity of WZ-1 at concentrations of 5.0 g L-1 and 1.0 g L-1, respectively. WZ-1 showed tolerance to high levels of Cl- and NO3- (about 10.0 and 5.0 g L-1, respectively), but had lower tolerance to F- (25 mg L-1). The different simulated inorganic-anion leaching solutions of electroplating sludge differed significantly in their effects on the bioactivity of WZ-1, in the order Cl-/NO3-/F- ≥ NO3-/F- > Cl-/F- > Cl-/NO3-.
NASA Astrophysics Data System (ADS)
Chang, Tsang-Jung; Wang, Chia-Ho; Chen, Albert S.
2015-05-01
In this study, we developed a novel approach to simulate dynamic flow interactions between storm sewers and the overland surface for different land covers in urban areas. The proposed approach couples the one-dimensional (1D) sewer flow model (SFM) and the two-dimensional (2D) overland flow model (OFM) with different techniques depending on the land cover type of the study area. For roads, pavements, plazas, and so forth, where rainfall becomes surface runoff before entering the sewer system, the rainfall-runoff process is simulated directly in the 2D OFM, and the runoff is drained to the sewer network via inlets, which is regarded as the input to the 1D SFM. For green areas, where rainfall falls onto the permeable ground surface and the generated direct runoff traverses the terrain, a deduction rate is applied to the rainfall in the 2D OFM to reflect soil infiltration. For flat building roofs with drainage facilities that allow rainfall to drain directly from the roof to the sewer network, the rainfall-runoff process is simulated using the hydrological module in the 1D SFM, and no rainfall is applied to these areas in the 2D OFM. The 1D SFM is used for hydraulic simulations in the sewer network. Where the flow in the drainage network exceeds its capacity, a surcharge occurs and water may spill onto the ground surface if the pressure head in a manhole exceeds the ground elevation. The overflow discharge from the sewer system is calculated by the 1D SFM and considered a point source in the 2D OFM. The overland flow returns into the sewer network when it reaches an inlet that connects to an un-surcharged manhole; in this case, the inlet is considered a point sink in the 2D OFM and an inflow to a manhole in the 1D SFM. The proposed approach was compared to five other urban flood modelling techniques using four rainfall events with previously recorded inundation areas. The merits and drawbacks of each modelling technique were compared and discussed. Based on the simulated results, the proposed approach was found to simulate flooding closer to the survey records than the other approaches because the physical rainfall-runoff phenomena in the urban environment were better represented.
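The surcharge/drainage exchange described above can be summarized as a small piece of switching logic, as sketched below; the function name, the weir-type overflow formula, and the inlet capacity cap are assumptions chosen for illustration, not the authors' formulation.

```python
# Illustrative sketch of the 1D-2D exchange logic: a surcharged manhole whose
# pressure head exceeds ground level spills onto the surface as a point
# source; otherwise an inlet connected to an un-surcharged manhole drains
# surface water back into the sewer as a point sink.

def exchange_discharge(manhole_head, ground_elev, surface_depth,
                       weir_coef=1.5, weir_length=1.0, inlet_capacity=0.05):
    """Positive value: sewer -> surface (surcharge); negative: surface -> sewer."""
    if manhole_head > ground_elev:
        # surcharge: weir-type overflow driven by the excess head
        excess = manhole_head - ground_elev
        return weir_coef * weir_length * excess ** 1.5
    if surface_depth > 0.0:
        # drainage: capped rate (illustrative; a real model converts the
        # available surface water depth into a discharge through the inlet)
        return -min(inlet_capacity, surface_depth)
    return 0.0

print(exchange_discharge(manhole_head=21.3, ground_elev=21.0, surface_depth=0.0))   # spill to surface
print(exchange_discharge(manhole_head=20.5, ground_elev=21.0, surface_depth=0.12))  # drain to sewer
```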
Numerical simulation of the groundwater-flow system of the Kitsap Peninsula, west-central Washington
Frans, Lonna M.; Olsen, Theresa D.
2016-05-05
A groundwater-flow model was developed to improve understanding of water resources on the Kitsap Peninsula. The Kitsap Peninsula is in the Puget Sound lowland of west-central Washington, is bounded by Puget Sound on the east and by Hood Canal on the west, and covers an area of about 575 square miles. The peninsula encompasses all of Kitsap County, Mason County north of Hood Canal, and part of Pierce County west of Puget Sound. The peninsula is surrounded by saltwater, and the hydrologic setting is similar to that of an island. The study area is underlain by a thick sequence of unconsolidated glacial and interglacial deposits that overlie sedimentary and volcanic bedrock units that crop out in the central part of the study area. Twelve hydrogeologic units consisting of aquifers, confining units, and an underlying bedrock unit form the basis of the groundwater-flow model.Groundwater flow on the Kitsap Peninsula was simulated using the groundwater-flow model, MODFLOW‑NWT. The finite difference model grid comprises 536 rows, 362 columns, and 14 layers. Each model cell has a horizontal dimension of 500 by 500 feet, and the model contains a total of 1,227,772 active cells. Groundwater flow was simulated for transient conditions. Transient conditions were simulated for January 1985–December 2012 using annual stress periods for 1985–2004 and monthly stress periods for 2005–2012. During model calibration, variables were adjusted within probable ranges to minimize differences between measured and simulated groundwater levels and stream baseflows. As calibrated to transient conditions, the model has a standard deviation for heads and flows of 47.04 feet and 2.46 cubic feet per second, respectively.Simulated inflow to the model area for the 2005–2012 period from precipitation and secondary recharge was 585,323 acre-feet per year (acre-ft/yr) (93 percent of total simulated inflow ignoring changes in storage), and simulated inflow from stream and lake leakage was 43,905 acre-ft/yr (7 percent of total simulated inflow). Simulated outflow from the model primarily was through discharge to streams, lakes, springs, seeps, and Puget Sound (594,595 acre-ft/yr; 95 percent of total simulated outflow excluding changes in storage) and through withdrawals from wells (30,761 acre-ft/yr; 5 percent of total simulated outflow excluding changes in storage).Six scenarios were formulated with input from project stakeholders and were simulated using the calibrated model to provide representative examples of how the model could be used to evaluate the effects on water levels and stream baseflows of potential changes in groundwater withdrawals, in consumptive use, and in recharge. These included simulations of a steady-state system, no-pumping and return flows, 15-percent increase in current withdrawals in all wells, 80-percent decrease in outdoor water to simulate effects of conservation efforts, 15-percent decrease in recharge from precipitation to simulate a drought, and particle tracking to determine flow paths.Changes in water-level altitudes and baseflow amounts vary depending on the stress applied to the system in these various scenarios. Reducing recharge by 15 percent between 2005 and 2012 had the largest effect, with water-level altitudes declining throughout the model domain and baseflow amounts decreasing by as much as 18 percent compared to baseline conditions. Changes in pumping volumes had a smaller effect on the model. 
Removing all pumping and resulting return flows caused increased water-level altitudes in many areas and increased baseflow amounts of between 1 and 3 percent.
Mesoscale model response to random, surface-based perturbations — A sea-breeze experiment
NASA Astrophysics Data System (ADS)
Garratt, J. R.; Pielke, R. A.; Miller, W. F.; Lee, T. J.
1990-09-01
The introduction into a mesoscale model of random (in space) variations in roughness length, or random (in space and time) surface perturbations of temperature and friction velocity, produces a measurable, but barely significant, response in the simulated flow dynamics of the lower atmosphere. The perturbations are an attempt to include the effects of sub-grid variability into the ensemble-mean parameterization schemes used in many numerical models. Their magnitude is set in our experiments by appeal to real-world observations of the spatial variations in roughness length and daytime surface temperature over the land on horizontal scales of one to several tens of kilometers. With sea-breeze simulations, comparisons of a number of realizations forced by roughness-length and surface-temperature perturbations with the standard simulation reveal no significant change in ensemble mean statistics, and only small changes in the sea-breeze vertical velocity. Changes in the updraft velocity for individual runs, of up to several cm s-1 (compared to a mean of 14 cm s-1), are directly the result of prefrontal temperature changes of 0.1 to 0.2 K, produced by the random surface forcing. The correlation and magnitude of the changes are entirely consistent with a gravity-current interpretation of the sea breeze.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson III, David J
The climate of the last glacial maximum (LGM) is simulated with a high-resolution atmospheric general circulation model, the NCAR CCM3 at spectral truncation of T170, corresponding to a grid cell size of roughly 75 km. The purpose of the study is to assess whether the higher-resolution simulation offers significant benefits over the lower-resolution simulation, particularly with respect to the role of topography. The LGM simulations were forced with a modified CLIMAP sea ice distribution and sea surface temperatures (SST) reduced by 1 °C, ice sheet topography, reduced CO2, and 21,000 BP orbital parameters. The high-resolution model captures modern climate reasonably well, in particular the distribution of heavy precipitation in the tropical Pacific. For the ice age case, surface temperatures simulated by the high-resolution model agree better with proxy estimates than do those of the low-resolution model. Despite the fact that tropical SSTs were only 2.1 °C less than the control run, there are many lowland tropical land areas 4-6 °C colder than present. Comparison of T170 model results with the best constrained proxy temperature estimates (noble gas concentrations in groundwater) now yields no significant differences between model and observations. There are also significant upland temperature changes in the best resolved tropical mountain belt (the Andes). We provisionally attribute this result in part to decreased lateral mixing between ocean and land in a model with more model grid cells. A longstanding model-data discrepancy therefore appears to be resolved without invoking any unusual model physics. The response of the Asian summer monsoon can also be more clearly linked to local geography in the high-resolution model than in the low-resolution model; this distinction should enable more confident validation of climate proxy data with the high-resolution model. Elsewhere, an inferred salinity increase in the subtropical North Atlantic may have significant implications for ocean circulation changes during the LGM. A large part of the Amazon and Congo Basins is simulated to be substantially drier in the ice age, consistent with many (but not all) paleo data. These results suggest that there are considerable benefits derived from the high-resolution model regarding regional climate responses, and that observationalists can now compare their results with models that resolve geography at a resolution comparable to that which the proxy data represent.
Near IR Photolysis of HO2NO2: Supplemental Material
NASA Technical Reports Server (NTRS)
2002-01-01
MkIV measurements of the volume mixing ratio (VMR) of HO2NO2 at 35 deg N, sunset on Sept. 25, 1993 are given. Measurements of HO2NO2 made between approx. 65 and 70 deg N, sunrise on May 8, 1997 are listed. The uncertainties given are 1 sigma estimates of the measurement precision. Uncertainty in the HO2NO2 line strengths is estimated to be 20%; this is the dominant contribution to the systematic error of the HO2NO2 measurement. Model inputs for the simulations are given. The albedos were obtained from Total Ozone Mapping Spectrometer reflectivity data (raw data at ftp://jwocky.gsfc.nasa.gov) for the time and place of observation. Profiles of sulfate aerosol surface area ("Surf. Area") were obtained from monthly, zonal mean profiles measured by SAGE II [Thomason et al., 1997 updated via private communication]. The profile of Bry is based on the Wamsley et al. relation with N2O, using MkIV measurements of N2O. All other model inputs given are based on direct MkIV measurements. Finally, we note the latitude of the MkIV tangent point varied considerably during sunrise on May 8, 1997. The simulations shown here were obtained using different latitudes for each altitude.
NASA Astrophysics Data System (ADS)
Kim, S.-W.; McDonald, B. C.; Baidar, S.; Brown, S. S.; Dube, B.; Ferrare, R. A.; Frost, G. J.; Harley, R. A.; Holloway, J. S.; Lee, H.-J.; McKeen, S. A.; Neuman, J. A.; Nowak, J. B.; Oetjen, H.; Ortega, I.; Pollack, I. B.; Roberts, J. M.; Ryerson, T. B.; Scarino, A. J.; Senff, C. J.; Thalman, R.; Trainer, M.; Volkamer, R.; Wagner, N.; Washenfelder, R. A.; Waxman, E.; Young, C. J.
2016-02-01
We developed a new nitrogen oxide (NOx) and carbon monoxide (CO) emission inventory for the Los Angeles-South Coast Air Basin (SoCAB) expanding the Fuel-based Inventory for motor-Vehicle Emissions and applied it in regional chemical transport modeling focused on the California Nexus of Air Quality and Climate Change (CalNex) 2010 field campaign. The weekday NOx emission over the SoCAB in 2010 is 620 t d-1, while the weekend emission is 410 t d-1. The NOx emission decrease on weekends is caused by reduced diesel truck activities. Weekday and weekend CO emissions over this region are similar: 2340 and 2180 t d-1, respectively. Previous studies reported large discrepancies between the airborne observations of NOx and CO mixing ratios and the model simulations for CalNex based on the available bottom-up emission inventories. Utilizing the newly developed emission inventory in this study, the simulated NOx and CO mixing ratios agree with the observations from the airborne and the ground-based in situ and remote sensing instruments during the field study. The simulations also reproduce the weekly cycles of these chemical species. Both the observations and the model simulations indicate that decreased NOx on weekends leads to enhanced photochemistry and increase of O3 and Ox (=O3 + NO2) in the basin. The emission inventory developed in this study can be extended to different years and other urban regions in the U.S. to study the long-term trends in O3 and its precursors with regional chemical transport models.
NASA Astrophysics Data System (ADS)
Hu, Lu; Jacob, Daniel J.; Liu, Xiong; Zhang, Yi; Zhang, Lin; Kim, Patrick S.; Sulprizio, Melissa P.; Yantosca, Robert M.
2017-10-01
The global budget of tropospheric ozone is governed by a complicated ensemble of coupled chemical and dynamical processes. Simulation of tropospheric ozone has been a major focus of the GEOS-Chem chemical transport model (CTM) over the past 20 years, and many developments over the years have affected the model representation of the ozone budget. Here we conduct a comprehensive evaluation of the standard version of GEOS-Chem (v10-01) with ozone observations from ozonesondes, the OMI satellite instrument, and MOZAIC-IAGOS commercial aircraft for 2012-2013. Global validation of the OMI 700-400 hPa data with ozonesondes shows that OMI maintained persistent high quality and no significant drift over the 2006-2013 period. GEOS-Chem shows no significant seasonal or latitudinal bias relative to OMI and strong correlations in all seasons on the 2° × 2.5° horizontal scale (r = 0.88-0.95), improving on previous model versions. The most pronounced model bias revealed by ozonesondes and MOZAIC-IAGOS is at high northern latitudes in winter-spring where the model is 10-20 ppbv too low. This appears to be due to insufficient stratosphere-troposphere exchange (STE). Model updates to lightning NOx, Asian anthropogenic emissions, bromine chemistry, isoprene chemistry, and meteorological fields over the past decade have overall led to gradual increase in the simulated global tropospheric ozone burden and more active ozone production and loss. From simulations with different versions of GEOS meteorological fields we find that tropospheric ozone in GEOS-Chem v10-01 has a global production rate of 4960-5530 Tg a-1, lifetime of 20.9-24.2 days, burden of 345-357 Tg, and STE of 325-492 Tg a-1. Change in the intensity of tropical deep convection between these different meteorological fields is a major factor driving differences in the ozone budget.
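The lifetime, burden, production, and STE figures quoted above are mutually consistent through the usual budget relation, lifetime ≈ burden / loss, where at approximate steady state the loss is close to chemical production plus STE. The sketch below checks this, pairing the range endpoints purely for illustration (the paper's meteorological fields need not pair up this way).

    # Consistency check on the quoted GEOS-Chem ozone budget numbers,
    # assuming approximate steady state (loss ~= chemical production + STE).
    def lifetime_days(burden_tg, production_tg_per_a, ste_tg_per_a):
        loss = production_tg_per_a + ste_tg_per_a   # Tg a-1
        return 365.0 * burden_tg / loss

    print(lifetime_days(345, 4960, 325))   # ~23.8 days
    print(lifetime_days(357, 5530, 492))   # ~21.6 days
    # Both values fall within the reported 20.9-24.2 day lifetime range.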
Numerical simulation of velocity and temperature fields in natural circulation loop
NASA Astrophysics Data System (ADS)
Sukomel, L. A.; Kaban'kov, O. N.
2017-11-01
Low-flow natural circulation regimes occur in many practical applications, and reliable engineering and design calculation methods for flows driven exclusively by buoyancy forces remain an open problem. In particular, such methods are important for the analysis of start-up regimes of passive safety systems of nuclear power plants. Despite many years of investigation of natural circulation loops, no suitable predictive recommendations for heat transfer and friction in these regimes have been proposed for engineering practice; correlations for forced flow are commonly used, and these considerably overpredict the real flow velocities. A 2D numerical simulation of velocity and temperature fields in circular tubes for laminar natural circulation flow, with reference to a laboratory experimental loop, has been carried out. The results were compared with a modified 1D model and with experimental data obtained on the same loop. The modified 1D model is still based on forced-flow correlations, but these correlations account for the variability of physical properties and for the existence of thermal and hydrodynamic entrance regions. The comparison of the 2D simulation, the 1D model calculations, and the experimental data showed that, even when the influence of property variability and entrance regions on heat transfer and friction is included, the use of a 1D model with forced-flow correlations does not improve the accuracy of the calculations. In general, according to the 2D numerical simulation, the wall shear stresses are mainly affected by the change of the near-wall velocity gradient caused by practically continuous deformation of the velocity profiles along the whole heated zone. The form of the velocity profiles and the extent of their deformation depend, in turn, on the wall heat flux density and the hydraulic diameter.
Flight test results of the strapdown ring laser gyro tetrad inertial navigation system
NASA Technical Reports Server (NTRS)
Carestia, R. A.; Hruby, R. J.; Bjorkman, W. S.
1983-01-01
A helicopter flight test program undertaken to evaluate the performance of Tetrad (a strapdown, laser gyro, inertial navigation system) is described. The results of 34 flights show a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n. mi., with a standard deviation of 1.48 n. mi.; and a modeled mean position error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. No laser gyro or accelerometer failures were detected during the flight tests. Off-line parity residual studies used simulated failures with the prerecorded flight test and laboratory test data. The airborne Tetrad system's failure-detection logic, exercised during the tests, successfully demonstrated the detection of simulated "hard" failures and the system's ability to continue navigating successfully by removing the simulated faulted sensor from the computations. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the 4 years of the test program, and no sensor failures were detected during the evaluation of free inertial navigation performance.
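The parity-residual idea behind this kind of failure detection can be illustrated with a minimal sketch: four single-axis gyros measure a three-axis rate, so one dimension of the measurements is redundant, and a residual in that parity direction that exceeds the noise level flags a failed sensor. The sensor geometry, noise level, and injected fault below are assumptions for illustration only; they are not Tetrad's actual configuration.

    # Illustrative parity-residual check for a four-sensor (tetrad) configuration.
    import numpy as np

    # Hypothetical input axes of four single-axis gyros (unit row vectors).
    H = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [0, 0, 1],
                  [1, 1, 1]], dtype=float)
    H[3] /= np.sqrt(3.0)

    U, s, Vt = np.linalg.svd(H)
    v = U[:, 3]                        # parity direction: v @ H == 0

    omega_true = np.array([0.01, -0.02, 0.005])   # rad/s, arbitrary
    rng = np.random.default_rng(1)
    meas = H @ omega_true + 1e-4 * rng.standard_normal(4)
    meas_failed = meas.copy()
    meas_failed[2] += 0.05             # simulated "hard" failure on gyro 3

    for label, m in (("nominal", meas), ("failed", meas_failed)):
        print(label, abs(v @ m))       # residual far above noise flags a failure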
Molecular simulations of electrolyte structure and dynamics in lithium-sulfur battery solvents
NASA Astrophysics Data System (ADS)
Park, Chanbum; Kanduč, Matej; Chudoba, Richard; Ronneburg, Arne; Risse, Sebastian; Ballauff, Matthias; Dzubiella, Joachim
2018-01-01
The performance of modern lithium-sulfur (Li/S) battery systems critically depends on the electrolyte and solvent compositions. For fundamental molecular insights and rational guidance of experimental developments, efficient and sufficiently accurate molecular simulations are thus in urgent need. Here, we construct a molecular dynamics (MD) computer simulation model of representative state-of-the art electrolyte-solvent systems for Li/S batteries constituted by lithium-bis(trifluoromethane)sulfonimide (LiTFSI) and LiNO3 electrolytes in mixtures of the organic solvents 1,2-dimethoxyethane (DME) and 1,3-dioxolane (DOL). We benchmark and verify our simulations by comparing structural and dynamic features with various available experimental reference systems and demonstrate their applicability for a wide range of electrolyte-solvent compositions. For the state-of-the-art battery solvent, we finally calculate and discuss the detailed composition of the first lithium solvation shell, the temperature dependence of lithium diffusion, as well as the electrolyte conductivities and lithium transference numbers. Our model will serve as a basis for efficient future predictions of electrolyte structure and transport in complex electrode confinements for the optimization of modern Li/S batteries (and related devices).
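Diffusion coefficients of the kind discussed above are typically extracted from MD trajectories with the Einstein relation, D = MSD(t)/(6t) in three dimensions at long lag times. The sketch below applies that relation to a synthetic random-walk trajectory; it is a generic illustration of the method, not the authors' analysis or data.

    # Minimal Einstein-relation estimate of a self-diffusion coefficient
    # from a (synthetic) 3D trajectory: D = lim_{t->inf} MSD(t) / (6 t).
    import numpy as np

    rng = np.random.default_rng(2)
    dt = 1.0e-3                                  # ns per step, illustrative
    n_steps, n_particles = 5000, 64
    true_D = 1.0                                 # nm^2/ns, illustrative
    steps = rng.standard_normal((n_steps, n_particles, 3)) * np.sqrt(2 * true_D * dt)
    traj = np.cumsum(steps, axis=0)              # positions, nm

    lag = 500
    disp = traj[lag:] - traj[:-lag]
    msd = (disp ** 2).sum(axis=-1).mean()        # average over time origins and particles
    D_est = msd / (6 * lag * dt)
    print(f"estimated D = {D_est:.2f} nm^2/ns (input {true_D})")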
Satellite-based emission constraint for nitrogen oxides: Capability and uncertainty
NASA Astrophysics Data System (ADS)
Lin, J.; McElroy, M. B.; Boersma, F.; Nielsen, C.; Zhao, Y.; Lei, Y.; Liu, Y.; Zhang, Q.; Liu, Z.; Liu, H.; Mao, J.; Zhuang, G.; Roozendael, M.; Martin, R.; Wang, P.; Spurr, R. J.; Sneep, M.; Stammes, P.; Clemer, K.; Irie, H.
2013-12-01
Vertical column densities (VCDs) of tropospheric nitrogen dioxide (NO2) retrieved from satellite remote sensing have been employed widely to constrain emissions of nitrogen oxides (NOx). A major strength of satellite-based emission constraint is analysis of emission trends and variability, while a crucial limitation is errors both in satellite NO2 data and in model simulations relating NOx emissions to NO2 columns. Through a series of studies, we have explored these aspects over China. We separate anthropogenic from natural sources of NOx by exploiting their different seasonality. We infer trends of NOx emissions in recent years and effects of a variety of socioeconomic events at different spatiotemporal scales including the general economic growth, global financial crisis, Chinese New Year, and Beijing Olympics. We further investigate the impact of growing NOx emissions on particulate matter (PM) pollution in China. As part of recent developments, we identify and correct errors in both satellite NO2 retrieval and model simulation that ultimately affect NOx emission constraint. We improve the treatments of aerosol optical effects, clouds and surface reflectance in the NO2 retrieval process, using as reference ground-based MAX-DOAS measurements to evaluate the improved retrieval results. We analyze the sensitivity of simulated NO2 to errors in the model representation of major meteorological and chemical processes with a subsequent correction of model bias. Future studies will implement these improvements to re-constrain NOx emissions.
Karthikeyan, Bagavathy Shanmugam; Suvaithenamudhan, Suvaiyarasan; Akbarsha, Mohammad Abdulkader; Parthasarathy, Subbiah
2018-06-01
Cytochrome P450 (CYP) 1A and 2B subfamily enzymes are important drug metabolizing enzymes, and are highly conserved across species in terms of sequence homology. However, there are major to minor structural and macromolecular differences which provide for species-selectivity and substrate-selectivity. Therefore, species-selectivity of CYP1A and CYP2B subfamily proteins across human, mouse and rat was analyzed using molecular modeling, docking and dynamics simulations when the chiral molecules quinine and quinidine were used as ligands. The three-dimensional structures of 17 proteins belonging to CYP1A and CYP2B subfamilies of mouse and rat were predicted by adopting homology modeling using the available structures of human CYP1A and CYP2B proteins as templates. Molecular docking and dynamics simulations of quinine and quinidine with CYP1A subfamily proteins revealed the existence of species-selectivity across the three species. On the other hand, in the case of CYP2B subfamily proteins, no role for chirality of quinine and quinidine in forming complexes with CYP2B subfamily proteins of the three species was indicated. Our findings reveal the roles of active site amino acid residues of CYP1A and CYP2B subfamily proteins and provide insights into species-selectivity of these enzymes across human, mouse, and rat.
NASA Astrophysics Data System (ADS)
Okeniyi, Joshua Olusegun; Nwadialo, Christopher Chukwuweike; Olu-Steven, Folusho Emmanuel; Ebinne, Samaru Smart; Coker, Taiwo Ebenezer; Okeniyi, Elizabeth Toyin; Ogbiye, Adebanji Samuel; Durotoye, Taiwo Omowunmi; Badmus, Emmanuel Omotunde Oluwasogo
2017-02-01
This paper investigates the effect of C3H7NO2S (cysteine) on the inhibition of reinforcing steel corrosion in concrete immersed in 0.5 M H2SO4, simulating an industrial/microbial environment. Different C3H7NO2S concentrations were admixed, in duplicate, in steel-reinforced concrete samples that were partially immersed in the acidic sulphate environment. Electrochemical monitoring techniques of open circuit potential, as per ASTM C876-91 R99, and corrosion rate, by linear polarization resistance, were then employed for studying the anticorrosion effect of the organic hydrocarbon admixture in the steel-reinforced concrete samples. Analyses of the electrochemical test data followed ASTM G16-95 R04 prescriptions, including probability distribution modeling with significance testing by Kolmogorov-Smirnov and Student's t-test statistics. Results established that all datasets of corrosion potential distributed like the Normal, the Gumbel and the Weibull distributions, but that only the Weibull model described all the corrosion rate datasets in the study, as per the Kolmogorov-Smirnov test statistics. Results of the Student's t-test showed that differences of corrosion test data between duplicated samples with the same C3H7NO2S concentrations were not statistically significant. These results indicated that 0.06878 M C3H7NO2S exhibited optimal inhibition efficiency η = 90.52±1.29% on reinforcing steel corrosion in the concrete samples immersed in 0.5 M H2SO4, simulating an industrial/microbial service environment.
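The distribution-fitting and goodness-of-fit workflow described above can be sketched with scipy. The example below fits a Weibull distribution to synthetic corrosion-rate data and applies a Kolmogorov-Smirnov test; the data and parameter values are made up and do not reproduce the study's measurements.

    # Sketch of Weibull fitting plus a Kolmogorov-Smirnov goodness-of-fit test,
    # in the spirit of the ASTM G16 analysis described above; data are synthetic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    corrosion_rate = stats.weibull_min.rvs(c=1.8, scale=0.02, size=40,
                                           random_state=rng)   # mm/yr, made up

    c, loc, scale = stats.weibull_min.fit(corrosion_rate, floc=0.0)
    ks_stat, p_value = stats.kstest(corrosion_rate, "weibull_min",
                                    args=(c, loc, scale))
    print(f"Weibull shape={c:.2f}, scale={scale:.4f}, KS p-value={p_value:.2f}")
    # A p-value above the chosen significance level means the Weibull model
    # cannot be rejected for this dataset (note the KS test is only
    # approximate when parameters are estimated from the same data).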
NASA Technical Reports Server (NTRS)
Martin, Randall V.; Sioris, Christopher E.; Chance, Kelly; Ryerson, Thomas B.; Flocke, Frank M.; Bertram, Timothy H.; Wooldridge, Paul J.; Cohen, Ronald C.; Neuman, J. Andy; Swanson, Aaron
2006-01-01
We retrieve tropospheric nitrogen dioxide (NO2) columns for May 2004 to April 2005 from the SCIAMACHY satellite instrument to derive top-down emissions of nitrogen oxides (NOx = NO + NO2) via inverse modeling with a global chemical transport model (GEOS-Chem). Simulated NO2 vertical profiles used in the retrieval are evaluated with airborne measurements over and downwind of North America (ICARTT); a northern midlatitude lightning source of 1.6 Tg N/yr minimizes bias in the retrieval. Retrieved NO2 columns are validated (r2 = 0.60, slope = 0.82) with coincident airborne in situ measurements. The top-down emissions are combined with a priori information from a bottom-up emission inventory with error weighting to achieve an improved a posteriori estimate of the global distribution of surface NOx emissions. Our a posteriori NOx emission inventory for land surface NOx emissions (46.1 Tg N/yr) is 22% larger than the GEIA-based a priori bottom-up inventory for 1998, a difference that reflects rising anthropogenic emissions, especially from East Asia. A posteriori NOx emissions for East Asia (9.8 Tg N/yr) exceed those from other continents. The a posteriori inventory improves the GEOS-Chem simulation of NOx, peroxyacetylnitrate, and nitric acid with respect to airborne in situ measurements over and downwind of New York City. The a posteriori is 7% larger than the EDGAR 3.2FT2000 global inventory, 3% larger than the NEI99 inventory for the United States, and 68% larger than a regional inventory for 2000 for eastern Asia. SCIAMACHY NO2 columns over the North Atlantic show a weak plume from lightning NOx.
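Error-weighted combination of a top-down and a bottom-up estimate is, in its simplest form, an inverse-variance (least-squares) average. The sketch below shows that form for a single hypothetical grid cell; the emission values and error estimates are illustrative assumptions, not the paper's statistics, and the actual inversion operates on full spatial fields.

    # Minimal sketch of error-weighted combination of a bottom-up (a priori)
    # and a satellite-derived top-down emission estimate; numbers are illustrative.
    import numpy as np

    def a_posteriori(e_apriori, sig_a, e_topdown, sig_t):
        """Inverse-variance (least-squares) combination of two estimates."""
        w_a, w_t = 1.0 / sig_a**2, 1.0 / sig_t**2
        e_post = (w_a * e_apriori + w_t * e_topdown) / (w_a + w_t)
        sig_post = np.sqrt(1.0 / (w_a + w_t))
        return e_post, sig_post

    # Hypothetical grid-cell emission (Tg N/yr) with assumed 50% and 30% errors.
    print(a_posteriori(e_apriori=0.8, sig_a=0.4, e_topdown=1.1, sig_t=0.33))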
Study of CP(N-1) theta-vacua by cluster simulation of SU(N) quantum spin ladders.
Beard, B B; Pepe, M; Riederer, S; Wiese, U-J
2005-01-14
D-theory provides an alternative lattice regularization of the 2D CP(N-1) quantum field theory in which continuous classical fields emerge from the dimensional reduction of discrete SU(N) quantum spins. Spin ladders consisting of n transversely coupled spin chains lead to a CP(N-1) model with a vacuum angle theta = n pi. In D-theory no sign problem arises and an efficient cluster algorithm is used to investigate theta-vacuum effects. At theta = pi there is a first-order phase transition with spontaneous breaking of charge conjugation symmetry for CP(N-1) models with N > 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khaleghi Hamedani, Hamid; Lau, Anthony K.; DeBruyn, Jake
The overall goal of this research is to investigate the logistics of agricultural biomass in Ontario, Canada using the Integrated Biomass Supply Analysis and Logistics Model (IBSAL). The supply of corn stover to the Ontario Power Generation (OPG) power plant in Lambton is simulated. This coal-fired power plant is currently not operating and there are no active plans by OPG to fuel it with biomass. Rather, this scenario is considered only to demonstrate the application of the IBSAL Model to this type of scenario. Here, five scenarios of delivering corn stover to the Lambton Generating Station (GS) power plant in Lambton Ontario are modeled: (1) truck transport from field edge to OPG (base scenario); (2) farm to central storage located on the highway, then truck transport bales to OPG; (3) direct truck transport from farm (no-stacking) to OPG; (4) farm to a loading port on Lake Huron and from there on a barge to OPG; and (5) farm to a railhead and then to OPG by rail.
Shi, En; Li, Jianzheng; Leu, Shao-Yuan; Antwi, Philip
2016-12-01
To predict the dynamic profiles of volatile fatty acids (VFAs) with pH and hydraulic retention time (HRT) during the startup of a 4-compartment anaerobic baffled reactor (ABR), a mathematical model was constructed by introducing pH and thermodynamic inhibition functions into the biochemical processes derived from the ADM1. The calibration of the inhibition parameter for propionate uptake effectively improved the prediction accuracy of VFAs. The developed model could simulate the VFA profiles well regardless of the observed changes in pH and/or HRT. The simulation results indicated that both H2-producing acetogenesis and methanogenesis in the ABR would be inhibited at a pH below 4.61, and that propionate oxidation could be thermodynamically restricted even at a neutral pH. A decreased HRT would enhance acidogenesis and H2-producing acetogenesis in the first 3 compartments, but no observable increase in effluent VFAs could be found due to the synchronously enhanced methanogenesis in the last compartment. Copyright © 2016 Elsevier Ltd. All rights reserved.
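ADM1-type models commonly express pH inhibition with an empirical switching function that is unity above an upper pH limit and decays smoothly below it. One widely used form (Batstone et al.) is sketched below; the lower/upper pH limits shown are illustrative, not the calibrated parameters of this study.

    # One commonly used ADM1-style pH inhibition function: full activity above
    # pH_UL, Gaussian-like decay below it. Limit values here are illustrative.
    import math

    def i_ph(ph, ph_ll, ph_ul):
        if ph >= ph_ul:
            return 1.0
        return math.exp(-3.0 * ((ph - ph_ul) / (ph_ul - ph_ll)) ** 2)

    for ph in (7.0, 6.0, 5.0, 4.61, 4.0):
        print(ph, round(i_ph(ph, ph_ll=4.0, ph_ul=5.5), 3))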
Khaleghi Hamedani, Hamid; Lau, Anthony K.; DeBruyn, Jake; ...
2016-05-10
The overall goal of this research is to investigate the logistics of agricultural biomass in Ontario, Canada using the Integrated Biomass Supply Analysis and Logistics Model (IBSAL). The supply of corn stover to the Ontario Power Generation (OPG) power plant in Lambton is simulated. This coal-fired power plant is currently not operating and there are no active plans by OPG to fuel it with biomass. Rather, this scenario is considered only to demonstrate the application of the IBSAL Model to this type of scenario. Here, five scenarios of delivering corn stover to the Lambton Generating Station (GS) power plant in Lambton Ontario are modeled: (1) truck transport from field edge to OPG (base scenario); (2) farm to central storage located on the highway, then truck transport bales to OPG; (3) direct truck transport from farm (no-stacking) to OPG; (4) farm to a loading port on Lake Huron and from there on a barge to OPG; and (5) farm to a railhead and then to OPG by rail.
Ulloa, Antonio; Bullock, Daniel
2003-10-01
We developed a neural network model to simulate temporal coordination of human reaching and grasping under variable initial grip apertures and perturbations of object size and object location/orientation. The proposed model computes reach-grasp trajectories by continuously updating vector positioning commands. The model hypotheses are (1) hand/wrist transport, grip aperture, and hand orientation control modules are coupled by a gating signal that fosters synchronous completion of the three sub-goals. (2) Coupling from transport and orientation velocities to aperture control causes maximum grip apertures that scale with these velocities and exceed object size. (3) Part of the aperture trajectory is attributable to an aperture-reducing passive biomechanical effect that is stronger for larger apertures. (4) Discrepancies between internal representations of targets partially inhibit the gating signal, leading to movement time increases that compensate for perturbations. Simulations of the model replicate key features of human reach-grasp kinematics observed under three experimental protocols. Our results indicate that no precomputation of component movement times is necessary for online temporal coordination of the components of reaching and grasping.
1D-3D hybrid modeling-from multi-compartment models to full resolution models in space and time.
Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M; Queisser, Gillian
2014-01-01
Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the associated computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated, methods-spanning simulation approaches could, depending on the network size, establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models, using, e.g., the NEURON simulator, which couple to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D and 3D simulations, we present a geometry-, membrane potential- and intracellular concentration-mapping framework, with which graph-based morphologies, e.g., in the swc or hoc format, are mapped to full surface and volume representations of the neuron, and computational data from 1D simulations can be used as boundary conditions for full 3D simulations and vice versa. Thus, established models and data, based on general purpose 1D simulators, can be directly coupled to the emerging field of fully resolved, highly detailed 3D modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics.
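The core of the 1D-to-3D coupling step is a mapping from values defined on a reduced, graph-based morphology onto a full surface mesh. A deliberately simple toy version of that idea, using nearest-compartment assignment of membrane potentials to surface vertices, is sketched below; the geometry and potentials are synthetic, and the actual framework described in the paper is considerably more elaborate.

    # Toy illustration of a 1D-to-3D mapping: assign each 3D surface vertex
    # the membrane potential of the nearest 1D compartment center.
    import numpy as np

    rng = np.random.default_rng(5)
    compartment_centers = np.linspace(0.0, 100.0, 11)[:, None] * np.array([[1.0, 0.0, 0.0]])
    v_1d = -65.0 + 5.0 * rng.standard_normal(11)          # mV per compartment

    surface_vertices = rng.uniform([0, -2, -2], [100, 2, 2], size=(500, 3))
    nearest = np.argmin(
        np.linalg.norm(surface_vertices[:, None, :] - compartment_centers[None, :, :], axis=-1),
        axis=1)
    v_3d_boundary = v_1d[nearest]       # boundary values handed to a 3D solver
    print(v_3d_boundary.shape, v_3d_boundary[:5])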
Ground-water flow model of the Boone formation at the Tar Creek superfund site, Oklahoma and Kansas
Reed, T.B.; Czarnecki, John B.
2006-01-01
Extensive mining activities conducted at the Tar Creek Superfund site, one of the largest Superfund sites in the United States, pose substantial health and safety risks. Mining activities removed a total of about 6,000,000 tons of lead and zinc by 1949. To evaluate the effect of this mining on the ground-water flow, a MODFLOW 2000 digital model has been developed to simulate ground-water flow in the carbonate formations of Mississippian age underlying the Tar Creek Superfund site. The model consists of three layers of variable thickness and a grid of 580 rows by 680 columns of cells 164 feet (50 meters) on a side. Model flux boundary conditions are specified for rivers and general head boundaries along the northern boundary of the Boone Formation. Selected cells in layer 1 are simulated as drain cells. Model calibration has been performed to minimize the difference between simulated and observed water levels in the Boone Formation. Hydraulic conductivity values specified during calibration range from 1.3 to 35 feet per day for the Boone Formation with the larger values occurring along the axis of the Miami Syncline where horizontal anisotropy is specified as 10 to 1. Hydraulic conductivity associated with the mine void is set at 50,000 feet per day and a specific yield of 1.0 is specified to represent that the mine void is filled completely with water. Residuals (the differences between measured and simulated ground-water altitudes) have a root-mean-square value of 8.53 feet and an absolute mean value of 7.29 feet for 17 observed values of water levels in the Boone Formation. The utility of the model for simulating and evaluating the possible consequences of remediation activities has been demonstrated. The model was used to simulate the emplacement of chat (mine waste consisting of fines and fragments of chert) back into the mine. Scenarios using 1,800,000 and 6,500,000 tons of chat were run. Hydraulic conductivity was reduced from 50,000 feet per day to 35 feet per day in the model cells corresponding to chat emplacement locations. A comparison of the simulated baseline conditions and conditions after simulated chat emplacement revealed little change in water levels, drainage and stream flux, and ground-water flow velocity. Using the calibrated flow model, particle tracks were simulated using MODPATH to evaluate the simultaneous movement of particles with water in the vicinity of four potential sites at which various volumes of chat might be emplaced in the underground mine workings as part of potential remediation efforts at the site. Particle tracks were generated to follow the rate and direction of water movement for a simulated period of 100 years. In general, chat emplacement had minimal effect on the direction and rate of movement when compared to baseline (current) flow conditions. Water-level differences between baseline and chat-emplacement scenarios showed declines of as much as 2 to 3 feet in areas immediately downgradient from the chat emplacement cells and little or no head change upgradient. Chat emplacements had minimal effect on changes in surface-water flux, with the largest simulated difference in one cell between baseline and chat emplacement scenarios being about 3.5 gallons per minute.
Henn, R Frank; Shah, Neel; Warner, Jon J P; Gomoll, Andreas H
2013-06-01
The purpose of this study was to quantify the benefits of shoulder arthroscopy simulator training with a cadaveric model of shoulder arthroscopy. Seventeen first-year medical students with no prior experience in shoulder arthroscopy were enrolled and completed this study. Each subject completed a baseline proctored arthroscopy on a cadaveric shoulder, which included controlling the camera and completing a standard series of tasks using the probe. The subjects were randomized, and 9 of the subjects received training on a virtual reality simulator for shoulder arthroscopy. All subjects then repeated the same cadaveric arthroscopy. The arthroscopic videos were analyzed in a blinded fashion for time to task completion and subjective assessment of technical performance. The 2 groups were compared by use of Student t tests, and change over time within groups was analyzed with paired t tests. There were no observed differences between the 2 groups on the baseline evaluation. The simulator group improved significantly from baseline with respect to time to completion and subjective performance (P < .05). Time to completion was significantly faster in the simulator group compared with controls at the final evaluation (P < .05). No difference was observed between the groups on the subjective scores at the final evaluation (P = .98). Shoulder arthroscopy simulator training resulted in significant benefits in clinical shoulder arthroscopy time to task completion in this cadaveric model. This study provides important additional evidence of the benefit of simulators in orthopaedic surgical training. There may be a role for simulator training in shoulder arthroscopy education. Copyright © 2013 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Shoulder Arthroscopy Simulator Training Improves Shoulder Arthroscopy Performance in a Cadaver Model
Henn, R. Frank; Shah, Neel; Warner, Jon J.P.; Gomoll, Andreas H.
2013-01-01
Purpose The purpose of this study was to quantify the benefits of shoulder arthroscopy simulator training with a cadaver model of shoulder arthroscopy. Methods Seventeen first-year medical students with no prior experience in shoulder arthroscopy were enrolled and completed this study. Each subject completed a baseline proctored arthroscopy on a cadaveric shoulder, which included controlling the camera and completing a standard series of tasks using the probe. The subjects were randomized, and nine of the subjects received training on a virtual reality simulator for shoulder arthroscopy. All subjects then repeated the same cadaveric arthroscopy. The arthroscopic videos were analyzed in a blinded fashion for time to task completion and subjective assessment of technical performance. The two groups were compared with Student's t-tests, and change over time within groups was analyzed with paired t-tests. Results There were no observed differences between the two groups on the baseline evaluation. The simulator group improved significantly from baseline with respect to time to completion and subjective performance (p<0.05). Time to completion was significantly faster in the simulator group compared to controls at final evaluation (p<0.05). No difference was observed between the groups on the subjective scores at final evaluation (p=0.98). Conclusions Shoulder arthroscopy simulator training resulted in significant benefits in clinical shoulder arthroscopy time to task completion in this cadaver model. This study provides important additional evidence of the benefit of simulators in orthopaedic surgical training. Clinical Relevance There may be a role for simulator training in shoulder arthroscopy education. PMID:23591380
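The statistical comparisons described in these two records (a two-sample t-test between groups and a paired t-test within a group) can be sketched with scipy. The numbers below are synthetic and do not reproduce the study's data.

    # Sketch of the comparisons described above: two-sample t-test between
    # groups, paired t-test within a group; the data are synthetic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    # Hypothetical time-to-completion (seconds) at the final evaluation.
    simulator_group = rng.normal(loc=210, scale=60, size=9)
    control_group = rng.normal(loc=290, scale=60, size=8)
    print(stats.ttest_ind(simulator_group, control_group))

    # Hypothetical baseline vs. final times within the simulator group.
    baseline = simulator_group + rng.normal(loc=80, scale=30, size=9)
    print(stats.ttest_rel(baseline, simulator_group))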
Frezzato, Diego; Saielli, Giacomo
2016-03-10
We have investigated the structural and dynamic properties of Xe dissolved in the ionic liquid crystal (ILC) phase of 1-hexadecyl-3-methylimidazolium nitrate using classical molecular dynamics (MD) simulations. Xe is found to be preferentially dissolved within the hydrophobic environment of the alkyl chains rather than in the ionic layers of the smectic phase. The structural parameters and the estimated local diffusion coefficients concerning the short-time motion of Xe are used to parametrize a theoretical model based on the Smoluchowski equation for the macroscopic dynamics across the smectic layers, a feature which cannot be directly obtained from the relatively short MD simulations. This protocol represents an efficient combination of computational and theoretical tools to obtain information on slow processes concerning the permeability and diffusivity of the xenon in smectic ILCs.
Conceptual Hierarchies in a Flat Attractor Network
O’Connor, Christopher M.; Cree, George S.; McRae, Ken
2009-01-01
The structure of people’s conceptual knowledge of concrete nouns has traditionally been viewed as hierarchical (Collins & Quillian, 1969). For example, superordinate concepts (vegetable) are assumed to reside at a higher level than basic-level concepts (carrot). A feature-based attractor network with a single layer of semantic features developed representations of both basic-level and superordinate concepts. No hierarchical structure was built into the network. In Experiment and Simulation 1, the graded structure of categories (typicality ratings) is accounted for by the flat attractor network. Experiment and Simulation 2 show that, as with basic-level concepts, such a network predicts feature verification latencies for superordinate concepts (vegetable
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lizarraga, Joanes; Urrestilla, Jon; Daverio, David
We present cosmic microwave background (CMB) power spectra from recent numerical simulations of cosmic strings in the Abelian Higgs model and compare them to CMB power spectra measured by Planck. We obtain revised constraints on the cosmic string tension parameter Gμ. For example, in the ΛCDM model with the addition of strings and no primordial tensor perturbations, we find Gμ < 2.0 × 10^-7 at 95% confidence, about 20% lower than the value obtained from previous simulations, which had 1/64 of the spatial volume. The increased computational volume also makes it possible to simulate fully the physical equations of motion, in which the string cores shrink in comoving coordinates. We find however that this, and the larger dynamic range, changes the amplitude of the power spectra by only about 10%. The main cause of the stronger constraints on Gμ is instead an improved treatment of the string evolution across the radiation-matter transition.
Emergence of coherence and the dynamics of quantum phase transitions
Braun, Simon; Friesdorf, Mathis; Hodgman, Sean S.; Schreiber, Michael; Ronzheimer, Jens Philipp; Riera, Arnau; del Rey, Marco; Bloch, Immanuel; Eisert, Jens
2015-01-01
The dynamics of quantum phase transitions pose one of the most challenging problems in modern many-body physics. Here, we study a prototypical example in a clean and well-controlled ultracold atom setup by observing the emergence of coherence when crossing the Mott insulator to superfluid quantum phase transition. In the 1D Bose–Hubbard model, we find perfect agreement between experimental observations and numerical simulations for the resulting coherence length. We, thereby, perform a largely certified analog quantum simulation of this strongly correlated system reaching beyond the regime of free quasiparticles. Experimentally, we additionally explore the emergence of coherence in higher dimensions, where no classical simulations are available, as well as for negative temperatures. For intermediate quench velocities, we observe a power-law behavior of the coherence length, reminiscent of the Kibble–Zurek mechanism. However, we find nonuniversal exponents that cannot be captured by this mechanism or any other known model. PMID:25775515
Evaluation of a low-cost, 3D-printed model for bronchoscopy training.
Parotto, Matteo; Jiansen, Joshua Qua; AboTaiban, Ahmed; Ioukhova, Svetlana; Agzamov, Alisher; Cooper, Richard; O'Leary, Gerald; Meineri, Massimiliano
2017-01-01
Flexible bronchoscopy is a fundamental procedure in anaesthesia and critical care medicine. Although learning this procedure is a complex task, the use of simulation-based training provides significant advantages, such as enhanced patient safety. Access to bronchoscopy simulators may be limited in low-resource settings. We have developed a low-cost 3D-printed bronchoscopy training model. A parametric airway model was obtained from an online medical model repository and fabricated using a low-cost 3D printer. The participating physicians had no prior bronchoscopy experience. Participants received a 30-minute lecture on flexible bronchoscopy and were administered a 15-item pre-test questionnaire on bronchoscopy. Afterwards, participants were instructed to perform a series of predetermined bronchoscopy tasks on the 3D printed simulator on 4 consecutive occasions. The time needed to perform the tasks and the quality of task performance (identification of bronchial anatomy, technique, dexterity, lack of trauma) were recorded. Upon completion of the simulator tests, participants were administered the 15-item questionnaire (post-test) once again. Participant satisfaction data on the perceived usefulness and accuracy of the 3D model were collected. A statistical analysis was performed using the t-test. Data are reported as mean values (± standard deviation). The time needed to complete all tasks was 152.9 ± 71.5 sec on the 1st attempt vs. 98.7 ± 40.3 sec on the 4th attempt (P = 0.03). Likewise, the quality of performance score improved from 8.3 ± 6.7 to 18.2 ± 2.5 (P < 0.0001). The average number of correct answers in the questionnaire was 6.8 ± 1.9 pre-test and 13.3 ± 3.1 post-test (P < 0.0001). Participants reported a high level of satisfaction with the perceived usefulness and accuracy of the model. We developed a 3D-printed model for bronchoscopy training. This model improved trainee performance and may represent a valid, low-cost bronchoscopy training tool.
Pecha, M. Brennan; Garcia-Perez, Manuel; Foust, Thomas D.; ...
2016-11-08
Here, direct numerical simulation of convective heat transfer from hot gas to isolated biomass particle models with realistic morphology and explicit microstructure was performed over a range of conditions with laminar flow of hot gas (500 °C). Steady-state results demonstrated that convective interfacial heat transfer is dependent on the wood species. The computed heat transfer coefficients were shown to vary between the pine and aspen models by nearly 20%. These differences are attributed to the species-specific variations in the exterior surface morphology of the biomass particles. We also quantify variations in heat transfer experienced by the particle when positioned in different orientations with respect to the direction of fluid flow. These results are compared to previously reported heat transfer coefficient correlations in the range of 0.1 < Pr < 1.5 and 10 < Re < 500. Comparison of these simulation results to correlations commonly used in the literature (Gunn, Ranz-Marshall, and Bird-Stewart-Lightfoot) shows that the Ranz-Marshall (sphere) correlation gave the closest h values to our steady-state simulations for both wood species, though no existing correlation was within 20% of both species at all conditions studied. In general, this work exemplifies the fact that all biomass feedstocks are not created equal, and that their species-specific characteristics must be appreciated in order to facilitate accurate simulations of conversion processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pecha, M. Brennan; Garcia-Perez, Manuel; Foust, Thomas D.
Here, direct numerical simulation of convective heat transfer from hot gas to isolated biomass particle models with realistic morphology and explicit microstructure was performed over a range of conditions with laminar flow of hot gas (500 °C). Steady-state results demonstrated that convective interfacial heat transfer is dependent on the wood species. The computed heat transfer coefficients were shown to vary between the pine and aspen models by nearly 20%. These differences are attributed to the species-specific variations in the exterior surface morphology of the biomass particles. We also quantify variations in heat transfer experienced by the particle when positioned in different orientations with respect to the direction of fluid flow. These results are compared to previously reported heat transfer coefficient correlations in the range of 0.1 < Pr < 1.5 and 10 < Re < 500. Comparison of these simulation results to correlations commonly used in the literature (Gunn, Ranz-Marshall, and Bird-Stewart-Lightfoot) shows that the Ranz-Marshall (sphere) correlation gave the closest h values to our steady-state simulations for both wood species, though no existing correlation was within 20% of both species at all conditions studied. In general, this work exemplifies the fact that all biomass feedstocks are not created equal, and that their species-specific characteristics must be appreciated in order to facilitate accurate simulations of conversion processes.
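The Ranz-Marshall (sphere) correlation referenced in these records has the standard form Nu = 2 + 0.6 Re^(1/2) Pr^(1/3), from which h = Nu·k/d. The sketch below evaluates it over the Re/Pr range cited above; the particle diameter and gas thermal conductivity are assumed values for illustration and are not taken from the study.

    # Ranz-Marshall (sphere) correlation, Nu = 2 + 0.6 Re^0.5 Pr^(1/3),
    # evaluated over the Re/Pr range cited above; d and k_gas are assumed.
    def h_ranz_marshall(re, pr, d_particle_m, k_gas_w_per_m_k):
        nu = 2.0 + 0.6 * re ** 0.5 * pr ** (1.0 / 3.0)
        return nu * k_gas_w_per_m_k / d_particle_m   # W m-2 K-1

    d = 1.0e-3        # 1 mm particle, assumed
    k_gas = 0.055     # W m-1 K-1, roughly nitrogen near 500 C, assumed
    for re in (10, 100, 500):
        for pr in (0.1, 0.7, 1.5):
            print(re, pr, round(h_ranz_marshall(re, pr, d, k_gas), 1))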
NASA Technical Reports Server (NTRS)
Burk, Sanger H., Jr.; Healy, Frederick M.
1955-01-01
An investigation of a 1/21-scale model of the Chance Vought F7U-3 airplane in the combat-loading condition has been conducted in the Langley 20-foot free-spinning tunnel. The recovery characteristics of the model were determined by use of spin-recovery rockets for the erect and inverted spinning conditions. The rockets were so placed as to provide either a yawing or rolling moment about the model center of gravity. Also included in the investigation were tests to determine the effect of simulated engine thrust on the recovery characteristics of the model. On the basis of model tests, recoveries from erect and inverted spins were satisfactory when a yawing moment of 22,200 foot-pounds (full scale) was provided against the spin by rockets attached to the wing tips; the anti-spin yawing moment was applied for approximately 9 seconds (full scale). Satisfactory recoveries were obtained from erect spins when a rolling moment of 22,200 foot-pounds (full scale) was provided with the spin (rolls right wing down in right spin). Although the inverted spin was satisfactorily terminated when a rolling moment of equal magnitude was provided, a roll rocket was not considered to be an optimum spin-recovery device to effect recoveries from inverted spins for this airplane because of resulting gyrations during spin recovery. Simulation of engine thrust had no apparent effect on the spin recovery characteristics.
SAI (Systems Applications, Incorporated) Urban Airshed Model. Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schere, K.L.
1985-06-01
This magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the SAI Urban Airshed Model (UAM). The UAM is a 3-dimensional gridded air-quality simulation model that is well suited for predicting the spatial and temporal distribution of photochemical pollutant concentrations in an urban area. The model is based on the equations of conservation of mass for a set of reactive pollutants in a turbulent-flow field. To solve these equations, the UAM uses numerical techniques set in a 3-D finite-difference grid array of cells, each about 1 to 10 kilometers wide and 10 to several hundred meters deep. As output, the model provides the calculated pollutant concentrations in each cell as a function of time. The chemical species of prime interest included in the UAM simulations are O3, NO, NO2, and several organic compounds and classes of compounds. The UAM system contains at its core the Airshed Simulation Program that accesses input data consisting of 10 to 14 files, depending on the program options chosen. Each file is created by a separate data-preparation program. There are 17 programs in the entire UAM system. The services of a qualified dispersion meteorologist, a chemist, and a computer programmer will be necessary to implement and apply the UAM and to interpret the results. Software Description: The program is written in the FORTRAN programming language for implementation on a UNIVAC 1110 computer under the UNIVAC 1100 operating system level 38R5A. Memory requirement is 80K.
Neuronvisio: A Graphical User Interface with 3D Capabilities for NEURON.
Mattioni, Michele; Cohen, Uri; Le Novère, Nicolas
2012-01-01
The NEURON simulation environment is a commonly used tool to perform electrical simulation of neurons and neuronal networks. The NEURON User Interface, based on the now discontinued InterViews library, provides some limited facilities to explore models and to plot their simulation results. Other limitations include the inability to generate a three-dimensional visualization and the lack of a standard means to save the results of simulations or to store the model geometry with the results. Neuronvisio (http://neuronvisio.org) aims to address these deficiencies through a set of well-designed Python APIs and provides an improved UI, allowing users to explore and interact with the model. Neuronvisio also facilitates access to previously published models, allowing users to browse, download, and locally run NEURON models stored in ModelDB. Neuronvisio uses the matplotlib library to plot simulation results and uses the HDF standard format to store simulation results. Neuronvisio can be viewed as an extension of NEURON, facilitating typical user workflows such as model browsing, selection, download, compilation, and simulation. The 3D viewer simplifies the exploration of complex model structure, while matplotlib permits the plotting of high-quality graphs. The newly introduced ability to save numerical results allows users to perform additional analyses of their previous simulations.
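The storage-and-plotting workflow described above (simulation results in an HDF file, figures via matplotlib) follows a generic pattern that can be sketched independently of Neuronvisio. The example below is NOT the Neuronvisio API, whose function names are not given in this abstract; it simply illustrates the same pattern with h5py and matplotlib on a toy voltage trace.

    # Generic pattern: save a simulated voltage trace to HDF5 and plot it.
    # This is an illustration only, not the Neuronvisio API.
    import numpy as np
    import h5py
    import matplotlib.pyplot as plt

    t = np.linspace(0.0, 100.0, 2001)                       # ms
    v = -65.0 + 30.0 * np.exp(-((t - 50.0) / 2.0) ** 2)     # toy "spike", mV

    with h5py.File("results.h5", "w") as f:
        f.create_dataset("soma/v", data=v)
        f["soma/v"].attrs["units"] = "mV"
        f.create_dataset("time", data=t)

    with h5py.File("results.h5", "r") as f:
        plt.plot(f["time"][:], f["soma/v"][:])
    plt.xlabel("time (ms)")
    plt.ylabel("membrane potential (mV)")
    plt.savefig("soma_v.png")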
Modeling of the Plume Development Phase of the Shoemaker-Levy 9 Comet Impact
NASA Astrophysics Data System (ADS)
Palotai, Csaba J.; Korycansky, D.; Deming, D.; Harrington, J.
2008-09-01
We present a progress report on our numerical simulations of the plume blowout and flight/splash phases of the Shoemaker-Levy 9 (SL9) comet impact into Jupiter's atmosphere. For this project we have modified the ZEUS-MP/2 three-dimensional hydrodynamic model (Hayes et al., ApJS 165, 174-183, 2006) to be suitable for Jovian atmospheric simulations. To initialize our model we map the final state of high-resolution SL9 impact simulations of Korycansky et al. (ApJ 646, 642-652, 2006) onto our larger, stationary grid. In the current phase of the research we investigate how the dynamical chaos in the impact model affects simulations of the subsequent phases. We adapt the atmospheric radiation model from the 2D splash calculation of Deming and Harrington (ApJ 561, 455-467, 2001) to calculate realistic wavelength-dependent lightcurves and low-resolution spectra. Our goal is to compare synthetic images created from model output to the data taken by the Hubble Space Telescope of plumes on the limb of Jupiter during the impacts of various SL9 fragments (Hammel et al., Science 267, 1288-1296, 1995). Details of the model, validation of the code, and results of our latest simulations will be presented. This material is based on work supported by National Science Foundation Grant No. 0307638 and National Aeronautics and Space Administration Grant No. NNG04GQ35G.
Suzuki, Misaki; Tse, Susanna; Hirai, Midori; Kurebayashi, Yoichi
2017-05-09
Tofacitinib (3-[(3R,4R)-4-methyl-3-[methyl(7H-pyrrolo[2,3-d]pyrimidin-4-yl)amino]piperidin-1-yl]-3 -oxopropanenitrile) is an oral Janus kinase inhibitor that is approved in countries including Japan and the United States for the treatment of rheumatoid arthritis, and is being developed across the globe for the treatment of inflammatory diseases. In the present study, a physiologically-based pharmacokinetic model was applied to compare the pharmacokinetics of tofacitinib in Japanese and Caucasians to assess the potential impact of ethnicity on the dosing regimen in the two populations. Simulated plasma concentration profiles and pharmacokinetic parameters, i.e. maximum concentration and area under plasma concentration-time curve, in Japanese and Caucasian populations after single or multiple doses of 1 to 30 mg tofacitinib were in agreement with clinically observed data. The similarity in simulated exposure between Japanese and Caucasian populations supports the currently approved dosing regimen in Japan and the United States, where there is no recommendation for dose adjustment according to race. Simulated results for single (1 to 100 mg) or multiple doses (5 mg twice daily) of tofacitinib in extensive and poor metabolizers of CYP2C19, an enzyme which has been shown to contribute in part to tofacitinib elimination and is known to exhibit higher frequency in Japanese compared to Caucasians, were also in support of no recommendation for dose adjustment in CYP2C19 poor metabolizers. This study demonstrated a successful application of physiologically-based pharmacokinetic modeling in evaluating ethnic sensitivity in pharmacokinetics at early stages of development, presenting its potential value as an efficient and scientific method for optimal dose setting in the Japanese population.
SUZUKI, MISAKI; TSE, SUSANNA; HIRAI, MIDORI; KUREBAYASHI, YOICHI
2016-01-01
Tofacitinib (3-[(3R,4R)-4-methyl-3-[methyl(7H-pyrrolo[2,3-d]pyrimidin-4-yl)amino]piperidin-1-yl]-3 -oxopropanenitrile) is an oral Janus kinase inhibitor that is approved in countries including Japan and the United States for the treatment of rheumatoid arthritis, and is being developed across the globe for the treatment of inflammatory diseases. In the present study, a physiologically-based pharmacokinetic model was applied to compare the pharmacokinetics of tofacitinib in Japanese and Caucasians to assess the potential impact of ethnicity on the dosing regimen in the two populations. Simulated plasma concentration profiles and pharmacokinetic parameters, i.e. maximum concentration and area under plasma concentration-time curve, in Japanese and Caucasian populations after single or multiple doses of 1 to 30 mg tofacitinib were in agreement with clinically observed data. The similarity in simulated exposure between Japanese and Caucasian populations supports the currently approved dosing regimen in Japan and the United States, where there is no recommendation for dose adjustment according to race. Simulated results for single (1 to 100 mg) or multiple doses (5 mg twice daily) of tofacitinib in extensive and poor metabolizers of CYP2C19, an enzyme which has been shown to contribute in part to tofacitinib elimination and is known to exhibit higher frequency in Japanese compared to Caucasians, were also in support of no recommendation for dose adjustment in CYP2C19 poor metabolizers. This study demonstrated a successful application of physiologically-based pharmacokinetic modeling in evaluating ethnic sensitivity in pharmacokinetics at early stages of development, presenting its potential value as an efficient and scientific method for optimal dose setting in the Japanese population. PMID:28490712
Liotta, Flavia; d'Antonio, Giuseppe; Esposito, Giovanni; Fabbricino, Massimiliano; Frunzo, Luigi; van Hullebusch, Eric D; Lens, Piet N L; Pirozzi, Francesco
2014-01-01
The roles of moisture content and particle size (PS) in the disintegration of complex organic matter during the wet anaerobic digestion (AD) process were investigated. A range of total solids (TS) from 5% to 11.3% and PS from 0.25 to 15 mm was evaluated using carrot waste as model complex organic matter. The experimental results showed that the methane production rate decreased with higher TS and PS. A modified version of the Anaerobic Digestion Model No. 1 (ADM1) for complex organic substrates was used to model the experimental data. The simulations showed a decrease of the disintegration rate constants with increasing TS and PS. The results of the biomethanation tests were used to calibrate and validate the applied model. In particular, the values of the disintegration constant for various TS and PS were determined. The simulations showed good agreement between the numerical and observed data.
Examining the impact of nitryl chloride chemistry on summertime air quality
NASA Astrophysics Data System (ADS)
Sarwar, G.; Simon, H. A.; Bhave, P.; Hutzell, W. T.
2011-12-01
Results of recent field campaigns suggest that heterogeneous reactions can form nitryl chloride (ClNO2) at night. ClNO2 photodissociates into nitrogen dioxide and chlorine radicals during the day. Subsequent photolysis of nitrogen dioxide and reactions of chlorine radicals with volatile organic compounds increase ozone production. Thus, the presence of ClNO2 in the atmosphere can enhance ozone. In this study, the impact of the heterogeneous production of ClNO2 on summertime air quality in the United States is examined by using the Community Multiscale Air Quality (CMAQ) model. Laboratory chamber experimental studies have parameterized the yield of ClNO2 and the heterogeneous uptake of dinitrogen pentoxide on aerosols. We implement these parameterizations into the CMAQ model. In addition to the typical emissions, the model also includes emissions of sea salt, anthropogenic particulate chloride, anthropogenic hydrochloric acid, and molecular chlorine from the National Emissions Inventory. Model simulations are conducted without and with the heterogeneous ClNO2 formation reaction for September 1-10, 2006. The results of the study suggest that the heterogeneous reaction produces ClNO2 in many coastal areas as well as inland locations in the United States. The ClNO2 increase in coastal areas is caused by chloride emissions from sea salt and in inland areas by chloride emissions from fire and anthropogenic sources. Predicted ClNO2 levels reach nighttime peaks of up to 4.0 ppb in the Los Angeles area and up to 1.2 ppb near Houston, similar to the measured values reported in the literature. The ClNO2 chemistry decreases nitric acid as well as particulate nitrate by a large margin; consequently it changes the composition of NOz. It increases hourly ozone and daily maximum 8-hr ozone by up to 9 and 6 ppbv, respectively. It increases aerosol sulfate while decreasing aerosol nitrate and ammonium. The accompanying presentation identifies predicted spatial patterns of ClNO2 concentrations across the United States and describes the detailed impact of the ClNO2 chemistry on ozone, nitric acid, sulfate, particulate nitrate, ammonium, and particulate chloride. To evaluate the impact of the ClNO2 chemistry on an ozone control strategy, two additional model simulations were conducted with reduced NOx emissions. Relative response factors were determined without and with the ClNO2 chemistry; the accompanying presentation discusses the impact on ozone control strategy.
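Heterogeneous uptake parameterizations of the kind described above are usually written as a first-order loss of N2O5 with rate constant k_het = (γ/4)·c̄·SA, where γ is the uptake coefficient, c̄ the mean molecular speed, and SA the aerosol surface area density; a ClNO2 yield φ is then applied to the N2O5 reacted. The sketch below uses illustrative parameter values only and does not reproduce the chamber-derived parameterization implemented in CMAQ.

    # First-order heterogeneous uptake of N2O5 on aerosol,
    # k_het = 0.25 * gamma * c_mean * SA, with an assumed ClNO2 yield.
    import math

    def mean_molecular_speed(temp_k, molar_mass_kg):
        r = 8.314                                    # J mol-1 K-1
        return math.sqrt(8.0 * r * temp_k / (math.pi * molar_mass_kg))   # m/s

    gamma = 0.02           # uptake coefficient, assumed
    phi_clno2 = 0.5        # ClNO2 yield per N2O5 reacted, assumed
    surface_area = 200e-6  # aerosol surface area, m2 per m3 (200 um2/cm3), assumed
    c_bar = mean_molecular_speed(288.0, 0.108)       # N2O5, 108 g/mol

    k_het = 0.25 * gamma * c_bar * surface_area      # s-1
    n2o5 = 1.0             # ppb, assumed nighttime mixing ratio
    clno2_rate = phi_clno2 * k_het * n2o5            # ppb s-1
    print(f"k_het = {k_het:.2e} s-1, ClNO2 production ~ {clno2_rate*3600:.3f} ppb per hour")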
Ku, Lawrence C.; Wu, Huali; Greenberg, Rachel G.; Hill, Kevin D.; Gonzalez, Daniel; Hornik, Christoph P.; Berezny, Alysha; Guptill, Jeffrey T.; Jiang, Wenlei; Zheng, Nan; Cohen-Wolkowiez, Michael; Melloni, Chiara
2016-01-01
Background: Defining a drug's therapeutic index (TI) is important for patient safety and regulating the development of generic drugs. For many drugs, the TI is unknown. A systematic approach was developed to characterize the TI of a drug using therapeutic drug monitoring and electronic health record (EHR) data with pharmacokinetic (PK) modeling. This approach was first tested on phenytoin, which has a known TI, and then applied to lamotrigine, which lacks a defined TI. Methods: Retrospective EHR data from patients in a tertiary hospital were used to develop phenytoin and lamotrigine population PK models and to identify adverse events (anemia, thrombocytopenia, and leukopenia) and efficacy outcomes (seizure-free). Phenytoin and lamotrigine concentrations were simulated for each day with an adverse event or seizure. Relationships between simulated concentrations and adverse events and efficacy outcomes were used to calculate the TI for phenytoin and lamotrigine. Results: For phenytoin, 93 patients with 270 total and 174 free concentrations were identified. A de novo 1-compartment PK model with Michaelis-Menten kinetics described the data well. Simulated average total and free concentrations of 10-15 and 1.0-1.5 μg/mL were associated with both adverse events and efficacy in 50% of patients, resulting in a TI of 0.7-1.5. For lamotrigine, 45 patients with 53 concentrations were identified. A published 1-compartment model was adapted to characterize the PK data. No relationships between simulated lamotrigine concentrations and safety or efficacy endpoints were seen; therefore, the TI could not be calculated. Conclusions: This approach correctly determined the TI of phenytoin but was unable to determine the TI of lamotrigine due to a limited sample size. The use of therapeutic drug monitoring and EHR data to aid in narrow TI drug classification is promising, but it requires an adequate sample size and accurate characterization of concentration-response relationships. PMID:27764025
Ku, Lawrence C; Wu, Huali; Greenberg, Rachel G; Hill, Kevin D; Gonzalez, Daniel; Hornik, Christoph P; Berezny, Alysha; Guptill, Jeffrey T; Jiang, Wenlei; Zheng, Nan; Cohen-Wolkowiez, Michael; Melloni, Chiara
2016-12-01
Defining a drug's therapeutic index (TI) is important for patient safety and regulating the development of generic drugs. For many drugs, the TI is unknown. A systematic approach was developed to characterize the TI of a drug using therapeutic drug monitoring and electronic health record (EHR) data with pharmacokinetic (PK) modeling. This approach was first tested on phenytoin, which has a known TI, and then applied to lamotrigine, which lacks a defined TI. Retrospective EHR data from patients in a tertiary hospital were used to develop phenytoin and lamotrigine population PK models and to identify adverse events (anemia, thrombocytopenia, and leukopenia) and efficacy outcomes (seizure-free). Phenytoin and lamotrigine concentrations were simulated for each day with an adverse event or seizure. Relationships between simulated concentrations and adverse events and efficacy outcomes were used to calculate the TI for phenytoin and lamotrigine. For phenytoin, 93 patients with 270 total and 174 free concentrations were identified. A de novo 1-compartment PK model with Michaelis-Menten kinetics described the data well. Simulated average total and free concentrations of 10-15 and 1.0-1.5 mcg/mL were associated with both adverse events and efficacy in 50% of patients, resulting in a TI of 0.7-1.5. For lamotrigine, 45 patients with 53 concentrations were identified. A published 1-compartment model was adapted to characterize the PK data. No relationships between simulated lamotrigine concentrations and safety or efficacy endpoints were seen; therefore, the TI could not be calculated. This approach correctly determined the TI of phenytoin but was unable to determine the TI of lamotrigine due to a limited sample size. The use of therapeutic drug monitoring and EHR data to aid in narrow TI drug classification is promising, but it requires an adequate sample size and accurate characterization of concentration-response relationships.
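The structural model named above (a 1-compartment model with Michaelis-Menten elimination) is simple to sketch. The snippet below illustrates only that model class; the volume of distribution, Vmax, Km and dosing values are assumptions for illustration, not the population estimates fitted to the EHR data.

```python
# Hedged sketch of a 1-compartment PK model with Michaelis-Menten elimination
# and a continuous oral input; all parameter values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

Vd   = 45.0    # L, volume of distribution (assumed)
Vmax = 500.0   # mg/day, maximum elimination rate (assumed)
Km   = 5.0     # mg/L, Michaelis constant (assumed)
dose = 300.0   # mg/day, treated as a continuous input for simplicity

def dCdt(t, C):
    # input rate per volume minus saturable elimination
    return dose / Vd - (Vmax / Vd) * C / (Km + C)

sol = solve_ivp(dCdt, (0, 30), [0.0], t_eval=np.linspace(0, 30, 301))
print(f"simulated total concentration after 30 days: {sol.y[0, -1]:.1f} mg/L")
```

With saturable elimination, steady-state concentration rises disproportionately with dose, which is one reason phenytoin dosing is individualized by therapeutic drug monitoring.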
ERIC Educational Resources Information Center
Tao, Yu-Hui; Yeh, C. Rosa; Hung, Kung Chin
2015-01-01
Several theoretical models have been constructed to determine the effects of business simulation games (BSGs) on learning performance. Although these models agree on the concept of learning-cycle effect, no empirical evidence supports the claim that the use of learning cycle activities with BSGs produces an effect on incremental gains in knowledge…
An ASM/ADM model interface for dynamic plant-wide simulation.
Nopens, Ingmar; Batstone, Damien J; Copp, John B; Jeppsson, Ulf; Volcke, Eveline; Alex, Jens; Vanrolleghem, Peter A
2009-04-01
Mathematical modelling has proven to be very useful in process design, operation and optimisation. A recent trend in WWTP modelling is to include the different subunits in so-called plant-wide models rather than focusing on parts of the entire process. One example of a typical plant-wide model is the coupling of an upstream activated sludge plant (including primary settler and secondary clarifier) to an anaerobic digester for sludge digestion. One of the key challenges when coupling these processes has been the definition of an interface between the well-accepted activated sludge model (ASM1) and the anaerobic digestion model (ADM1). Current characterisation and interface models have key limitations, the most critical of which is the over-use of the X(c) (lumped complex) variable as a main input to the ADM1. Over-use of X(c) does not allow for variation of degradability, carbon oxidation state or nitrogen content. In addition, achieving a target influent pH through the proper definition of the ionic system can be difficult. In this paper, we define an interface and characterisation model that maps degradable components directly to carbohydrates, proteins and lipids (and their soluble analogues), as well as organic acids, rather than using X(c). While this interface has been designed for use with the Benchmark Simulation Model No. 2 (BSM2), it is widely applicable to ADM1 input characterisation in general. We have demonstrated the model both hypothetically (BSM2), and practically on a full-scale anaerobic digester treating sewage sludge.
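A toy illustration of the mapping idea, with degradable material routed directly to carbohydrate, protein and lipid pools instead of the lumped X(c). The split rules, fraction values and nitrogen content below are assumptions chosen for illustration; they are not the interface equations published with BSM2.

```python
# Illustrative ASM-to-ADM mapping: protein share set by an organic-nitrogen
# balance, the remainder split between lipids and carbohydrates by an assumed
# fraction.  Units must simply be consistent (e.g. kg COD/m3 and kg N/m3).
def asm_to_adm(cod_degradable, org_nitrogen, n_content_protein=0.098,
               lipid_fraction=0.15):
    """n_content_protein: kg N per kg COD of protein (assumed value)."""
    x_pr = min(org_nitrogen / n_content_protein, cod_degradable)  # protein from N balance
    remaining = cod_degradable - x_pr
    x_li = lipid_fraction * remaining        # assumed lipid share of the remainder
    x_ch = remaining - x_li                  # rest mapped to carbohydrates
    return {"X_ch": x_ch, "X_pr": x_pr, "X_li": x_li}

print(asm_to_adm(cod_degradable=30.0, org_nitrogen=1.5))
```

Mapping directly to these pools preserves information about degradability and nitrogen content that is lost when everything is lumped into a single composite variable.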
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yun; Cui, Wan-Zhao; Wang, Hong-Guang
2015-05-15
Effects of the secondary electron emission (SEE) phenomenon of metal surfaces on the multipactor analysis of microwave components are investigated numerically and experimentally in this paper. Both secondary electron yield (SEY) and emitted energy spectrum measurements are performed on silver-plated samples for accurate description of the SEE phenomenon. A phenomenological probabilistic model based on SEE physics is utilized and fitted accurately to the measured SEY and emitted energy spectrum of the conditioned surface material of microwave components. Specifically, the phenomenological probabilistic model is extended mathematically to the low primary-energy end (below 20 eV), where no accurate measurement data can be obtained. Embedding the phenomenological probabilistic model into the Electromagnetic Particle-In-Cell (EM-PIC) method, the electronic resonant multipacting in microwave components can be tracked and hence the multipactor threshold can be predicted. The threshold prediction error of the transformer and the coaxial filter is 0.12 dB and 1.5 dB, respectively. Simulation results demonstrate that the discharge threshold is strongly dependent on the SEY and its energy spectrum at the low-energy end (below 50 eV). Multipacting simulation results agree quite well with experiments in practical components, while the phenomenological probabilistic model fits both the SEY and the emission energy spectrum better than the traditionally used model and distribution. The EM-PIC simulation method with the phenomenological probabilistic model for the surface collision simulation has been demonstrated for predicting the multipactor threshold in metal components for space applications.
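For readers unfamiliar with SEY curves, the sketch below evaluates the commonly used Vaughan-type yield formula. It is not the authors' fitted phenomenological probabilistic model; the threshold energy, peak energy and peak yield are assumptions chosen only to illustrate the typical shape of the yield versus primary energy.

```python
# Vaughan-type secondary electron yield curve (illustrative parameters).
import numpy as np

def sey_vaughan(E, E0=12.5, Emax=165.0, delta_max=2.2):
    """SEY vs primary energy E (eV): threshold E0, peak yield delta_max at Emax."""
    E = np.asarray(E, dtype=float)
    v = np.maximum((E - E0) / (Emax - E0), 0.0)   # zero yield below the threshold E0
    k = np.where(v < 1.0, 0.56, 0.25)             # different exponents below/above the peak
    return delta_max * (v * np.exp(1.0 - v)) ** k

print(sey_vaughan([20, 50, 165, 500]).round(2))
```

Because multipactor growth depends on whether the yield exceeds unity along the electron trajectories, the behaviour of this curve at low primary energies (where measurements are hardest) strongly influences the predicted threshold, which is the point the abstract makes.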
Olofsen, Erik; Boom, Merel; Nieuwenhuijs, Diederik; Sarton, Elise; Teppema, Luc; Aarts, Leon; Dahan, Albert
2010-06-01
Few studies address the dynamic effect of opioids on respiration. Models with intact feedback control of carbon dioxide on ventilation (non-steady-state models) that correctly incorporate the complex interaction among drug concentration, end-tidal partial pressure of carbon dioxide concentration, and ventilation yield reliable descriptions and predictions of the behavior of opioids. The authors measured the effect of remifentanil on respiration and developed a model of remifentanil-induced respiratory depression. Ten healthy male volunteers received remifentanil infusions at different infusion speeds (target concentrations: 4-9 ng/ml; infusion rates: 0.17-9 ng·ml⁻¹·min⁻¹) while awake and against a background of low-dose propofol. The data were analyzed with a nonlinear model consisting of two additive linear parts, one describing the depressant effect of remifentanil and the other describing the stimulatory effect of carbon dioxide on ventilation. The model adequately described the data, including the occurrence of apnea. The most important model parameters were as follows: C50 for respiratory depression 1.6 ± 0.03 ng/ml, gain of the respiratory controller (G) 0.42 ± 0.1 l·min⁻¹·Torr⁻¹, and remifentanil blood-effect-site equilibration half-life (t(1/2)ke0) 0.53 ± 0.2 min. Propofol caused a 20-50% reduction of C50 and G but had no effect on t(1/2)ke0. Apnea occurred during propofol infusion only. A simulation study revealed an increase in apnea duration at infusion speeds of 2.5-0.5 ng·ml⁻¹·min⁻¹ followed by a reduction. At an infusion speed of ≤0.31 ng·ml⁻¹·min⁻¹, no apnea was seen. The effect of varying remifentanil infusions with and without a background of low-dose propofol on ventilation and end-tidal partial pressure of carbon dioxide concentration was described successfully using a non-steady-state model of the ventilatory control system. The model allows meaningful simulations and predictions.
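One element of such a model that is easy to reproduce is the first-order blood/effect-site equilibration governed by ke0, which can be derived from the half-life reported above (t1/2ke0 of about 0.53 min). The sketch below shows only that element under an assumed step plasma-concentration profile; it does not reproduce the full ventilatory control model.

```python
# Effect-site equilibration sketch: dCe/dt = ke0 * (Cp - Ce), with ke0 from the
# reported half-life.  The plasma profile is an assumed step infusion.
import numpy as np

t_half_ke0 = 0.53                      # min, from the abstract
ke0 = np.log(2) / t_half_ke0           # first-order equilibration rate constant

def effect_site(c_plasma, dt=0.01):
    """Euler integration of the effect-site equation for a plasma concentration series."""
    ce = np.zeros_like(c_plasma)
    for i in range(1, len(c_plasma)):
        ce[i] = ce[i - 1] + dt * ke0 * (c_plasma[i - 1] - ce[i - 1])
    return ce

t = np.arange(0, 10, 0.01)             # min
cp = np.where(t < 2, 4.0, 0.0)         # 4 ng/ml plasma level for 2 min, then washout (illustrative)
print(f"peak effect-site concentration: {effect_site(cp).max():.2f} ng/ml")
```

A short half-life such as this means the respiratory effect tracks the plasma concentration closely, which is why infusion speed matters for the occurrence of apnea.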
Lübken, M; Wichern, M; Letsiou, I; Kehl, O; Bischof, F; Horn, H
2007-01-01
Thermophilic anaerobic digestion in compact systems can be an economically and ecologically reasonable decentralised process technology, especially for rural areas. Thermophilic process conditions are important for sufficient removal of pathogens. The high energy demand, however, can make such systems unfavourable in terms of energy costs. This is the case when low-concentration wastewater is treated or the system is operated at low ambient temperatures. In this paper we present experimental results of a compact thermophilic anaerobic system obtained with fluorescent in situ hybridisation (FISH) analysis and mathematical simulation. The system was operated with faecal sludge for a period of 135 days and with a model substrate consisting of forage and cellulose for a period of 60 days. The change in the microbial community due to the two different substrates treated could be well observed by the FISH analysis. The Anaerobic Digestion Model no. 1 (ADM1) was used to evaluate system performance at different temperature conditions. The model was extended to account for decreased methanogenic activity at lower temperatures and was used to calculate energy production. A model was developed to calculate the major parts of the energy consumed by the digester itself at different temperature conditions. The simulation study demonstrated that a reduction of the process temperature can lead to a higher net energy yield. The simulation study additionally showed that the effect of temperature on the energy yield is greater when a substrate with a high protein content is treated.
Masterson, John P.; Barlow, Paul M.
1994-01-01
The effects of changing patterns of ground-water pumping and aquifer recharge on the surface-water and ground-water hydrologic systems were determined for the Cape Cod, Martha's Vineyard, and Nantucket Island Basins. Three-dimensional, transient, ground-water-flow models that simulate both freshwater and saltwater flow were developed for the flow cells of Cape Cod which currently have large-capacity public-supply wells. Only the freshwater-flow system was simulated for the Cape Cod flow cells where public-water supply demands are satisfied by small-capacity domestic wells. Two-dimensional, finite-difference, change models were developed for Martha's Vineyard and Nantucket Island to determine the projected drawdowns in response to projected in-season pumping rates for 180 days of no aquifer recharge. Results of the simulations indicate very little change in the position of the freshwater-saltwater interface from predevelopment flow conditions to projected ground-water pumping and recharge rates for Cape Cod in the year 2020. Results of change model simulations for Martha's Vineyard and Nantucket Island indicate that the greatest impact in response to projected in-season ground-water pumping occurs at the pumping centers and that the magnitudes of the drawdowns are minimal with respect to the total thickness of the aquifers.
Regional air-sea coupled model simulation for two types of extreme heat in North China
NASA Astrophysics Data System (ADS)
Li, Donghuan; Zou, Liwei; Zhou, Tianjun
2018-03-01
Extreme heat (EH) over North China (NC) is affected by both large-scale circulations and local topography, and can be categorized into foehn-favorable and no-foehn types. In this study, the performance of a regional coupled model in simulating EH over NC was examined. The effects of regional air-sea coupling were also investigated by comparing the results with the corresponding atmosphere-alone regional model. On foehn-favorable (no-foehn) EH days, a barotropic cyclonic (anticyclonic) anomaly is located to the northeast (northwest) of NC, while anomalous northwesterlies (southeasterlies) prevail over NC in the lower troposphere. In the uncoupled simulation, a barotropic anticyclonic bias occurs over China on both foehn-favorable and no-foehn EH days, and the northwesterlies in the lower troposphere on foehn-favorable EH days are not obvious. These biases are significantly reduced in the regional coupled simulation, especially on foehn-favorable EH days, with wind-anomaly skill scores improving from 0.38 to 0.47, 0.47 to 0.61 and 0.38 to 0.56 for horizontal winds at 250, 500 and 850 hPa, respectively. Compared with the uncoupled simulation, the reproduction of the longitudinal position of the Northwest Pacific subtropical high (NPSH) and the spatial pattern of the low-level monsoon flow over East Asia are improved in the coupled simulation. Therefore, the anticyclonic bias over China is obviously reduced, and the proportion of EH days characterized by an anticyclonic anomaly is more appropriate. The improvements in the regional coupled model indicate that it is a promising choice for the future projection of EH over NC.
Meltwater flux and runoff modeling in the ablation area of Jakobshavn Isbrae, West Greenland
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mernild, Sebastian Haugard; Chylek, Petr; Liston, Glen
2009-01-01
The temporal variability in surface snow and glacier melt flux and runoff were investigated for the ablation area of Jakobshavn Isbrae, West Greenland. High-resolution meteorological observations both on and outside the Greenland Ice Sheet (GrIS) were used as model input. Realistic descriptions of snow accumulation, snow and glacier-ice melt, and runoff are essential to understand trends in ice sheet surface properties and processes. SnowModel, a physically based, spatially distributed meteorological and snow-evolution modeling system, was used to simulate the temporal variability of Jakobshavn Isbrae accumulation and ablation processes for 2000/01-2006/07. Winter snow-depth observations and MODIS satellite-derived summer melt observations were used for model validation of accumulation and ablation. Simulations agreed well with observed values. Simulated annual surface melt varied from as low as 3.83 × 10⁹ m³ (2001/02) to as high as 8.64 × 10⁹ m³ (2004/05). Modeled surface melt occurred at elevations reaching 1,870 m a.s.l. for 2004/05, while the equilibrium line altitude (ELA) fluctuated from 990 to 1,210 m a.s.l. during the simulation period. The SnowModel meltwater retention and refreezing routines considerably reduce the amount of meltwater available as ice sheet runoff; without these routines the Jakobshavn surface runoff would be overestimated by an average of 80%. From September/October through May/June no runoff events were simulated. The modeled interannual runoff variability varied from 1.81 × 10⁹ m³ (2001/02) to 5.21 × 10⁹ m³ (2004/05), yielding a cumulative runoff at the Jakobshavn glacier terminus of ~2.25 m w.eq. to ~4.5 m w.eq., respectively. The average modeled Jakobshavn runoff of ~3.4 km³ y⁻¹ was merged with previous estimates of Jakobshavn ice discharge to quantify the freshwater flux to Ilulissat Icefjord. For both runoff and ice discharge the average trends are similar, indicating an increasing (insignificant) influx of freshwater to the Ilulissat Icefjord for the period 2000/01-2006/07. This study suggests that surface runoff forms a minor part of the overall Jakobshavn freshwater flux to the fjord: around 7% (~3.4 km³ y⁻¹) of the average annual freshwater flux of ~51.0 km³ y⁻¹ originates from the surface runoff.
Ditching Tests of a 1/8-Scale Model of the Chance Vought XF6U-1 Airplane, TED No. NACA DE319
NASA Technical Reports Server (NTRS)
Fisher, Lloyd J., Jr.; McBride, Ellis E.
1953-01-01
Tests were made with a 1/8-scale dynamically similar model of the Chance Vought XF6U-1 airplane to study its behavior when ditched. The model was ditched in calm water at the Langley tank no. 2 monorail. Various landing attitudes, speeds, and conditions of damage were simulated. The behavior of the model was determined from visual observations, by recording time histories of the accelerations, and by taking motion pictures of the ditchings. From the results of the tests it was concluded that the airplane should be ditched at the near-stall, tail-down attitude (12 deg). The flaps should be fully extended to obtain the lowest possible landing speed. The wing-tip tanks should be jettisoned. The underside of the fuselage will be critically damaged in a ditching and the airplane will dive violently after a run of about three fuselage lengths. Maximum longitudinal decelerations up to about 7g and maximum vertical accelerations up to about 5g will be encountered.
Critical flavor number of the Thirring model in three dimensions
NASA Astrophysics Data System (ADS)
Wellegehausen, Björn H.; Schmidt, Daniel; Wipf, Andreas
2017-11-01
The Thirring model is a four-fermion theory with a current-current interaction and U(2N) chiral symmetry. It is closely related to three-dimensional QED and other models used to describe properties of graphene. In addition, it serves as a toy model to study chiral symmetry breaking. In the limit of flavor number N → 1/2 it is equivalent to the Gross-Neveu model, which shows a parity-breaking discrete phase transition. The model was already studied with different methods, including Dyson-Schwinger equations, functional renormalization group methods, and lattice simulations. Most studies agree that there is a phase transition from a symmetric phase to a spontaneously broken phase for a small number of fermion flavors, but no symmetry breaking for large N. But there is no consensus on the critical flavor number N_cr above which there is no phase transition anymore, nor on further details of the critical behavior. Values of N_cr found in the literature vary between 2 and 7. All earlier lattice studies were performed with staggered fermions. Thus it is questionable if in the continuum limit the lattice model recovers the internal symmetries of the continuum model. We present new results from lattice Monte Carlo simulations of the Thirring model with SLAC fermions which exactly implement all internal symmetries of the continuum model even at finite lattice spacing. If we reformulate the model in an irreducible representation of the Clifford algebra, we find, in contradiction to earlier results, that the behavior for even and odd flavor numbers is very different: for even flavor numbers, chiral and parity symmetry are always unbroken; for odd flavor numbers, parity symmetry is spontaneously broken below the critical flavor number N_cr^ir = 9, while chiral symmetry is still unbroken.
Optimized multiple quantum MAS lineshape simulations in solid state NMR
NASA Astrophysics Data System (ADS)
Brouwer, William J.; Davis, Michael C.; Mueller, Karl T.
2009-10-01
The majority of nuclei available for study in solid state Nuclear Magnetic Resonance have half-integer spin I>1/2, with corresponding electric quadrupole moment. As such, they may couple with a surrounding electric field gradient. This effect introduces anisotropic line broadening to spectra, arising from distinct chemical species within polycrystalline solids. In Multiple Quantum Magic Angle Spinning (MQMAS) experiments, a second frequency dimension is created, devoid of quadrupolar anisotropy. As a result, the center of gravity of peaks in the high resolution dimension is a function of isotropic second order quadrupole and chemical shift alone. However, for complex materials, these parameters take on a stochastic nature due in turn to structural and chemical disorder. Lineshapes may still overlap in the isotropic dimension, complicating the task of assignment and interpretation. A distributed computational approach is presented here which permits simulation of the two-dimensional MQMAS spectrum, generated by random variates from model distributions of isotropic chemical and quadrupole shifts. Owing to the non-convex nature of the residual sum of squares (RSS) function between experimental and simulated spectra, simulated annealing is used to optimize the simulation parameters. In this manner, local chemical environments for disordered materials may be characterized, and via a re-sampling approach, error estimates for parameters produced.
Program summary
Program title: mqmasOPT
Catalogue identifier: AEEC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3650
No. of bytes in distributed program, including test data, etc.: 73 853
Distribution format: tar.gz
Programming language: C, OCTAVE
Computer: UNIX/Linux
Operating system: UNIX/Linux
Has the code been vectorised or parallelized?: Yes
RAM: Example: (1597 powder angles) × (200 samples) × (81 F2 frequency pts) × (31 F1 frequency pts) = 3.5 M, SMP AMD Opteron
Classification: 2.3
External routines: OCTAVE (http://www.gnu.org/software/octave/), GNU Scientific Library (http://www.gnu.org/software/gsl/), OpenMP (http://openmp.org/wp/)
Nature of problem: The optimal simulation and modeling of multiple quantum magic angle spinning NMR spectra, for general systems, especially those with mild to significant disorder. The approach outlined and implemented in C and OCTAVE also produces model parameter error estimates.
Solution method: A model for each distinct chemical site is first proposed, for the individual contribution of crystallite orientations to the spectrum. This model is averaged over all powder angles [1], as well as the (stochastic) parameters: isotropic chemical shift and quadrupole coupling constant. The latter is accomplished via sampling from a bi-variate Gaussian distribution, using the Box-Muller algorithm to transform Sobol (quasi) random numbers [2]. A simulated annealing optimization is performed, and finally the non-linear jackknife [3] is applied in developing model parameter error estimates.
Additional comments: The distribution contains a script, mqmasOpt.m, which runs in the OCTAVE language workspace.
Running time: Example: (1597 powder angles) × (200 samples) × (81 F2 frequency pts) × (31 F1 frequency pts) = 58.35 seconds, SMP AMD Opteron.
References:
[1] S.K. Zaremba, Annali di Matematica Pura ed Applicata 73 (1966) 293.
[2] H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, SIAM, 1992.
[3] T. Fox, D. Hinkley, K. Larntz, Technometrics 22 (1980) 29.
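The sampling step named in the solution method (quasi-random Sobol points transformed to correlated Gaussian variates by Box-Muller) can be sketched as follows. The distributed program is written in C/OCTAVE; Python is used here only for illustration, and the site mean values and covariance are assumptions, not parameters from the paper.

```python
# Quasi-Monte Carlo sampling of correlated (isotropic shift, quadrupole coupling)
# pairs: Sobol points -> Box-Muller -> Cholesky correlation.
import numpy as np
from scipy.stats import qmc

n = 256                                           # number of samples (illustrative)
u = qmc.Sobol(d=2, scramble=True).random(n)       # quasi-random uniforms in [0, 1)^2
u = np.clip(u, 1e-12, 1.0)                        # guard against log(0)
z1 = np.sqrt(-2.0 * np.log(u[:, 0])) * np.cos(2.0 * np.pi * u[:, 1])   # Box-Muller
z2 = np.sqrt(-2.0 * np.log(u[:, 0])) * np.sin(2.0 * np.pi * u[:, 1])

mean = np.array([-90.0, 3.2])                     # assumed site mean (ppm, MHz)
cov = np.array([[4.0, 0.6], [0.6, 0.25]])         # assumed variances/covariance
L = np.linalg.cholesky(cov)
samples = mean + np.column_stack([z1, z2]) @ L.T  # correlated parameter pairs
print(samples[:3].round(2))
```

Each sampled pair contributes one powder-averaged lineshape, and the sum over samples gives the disorder-broadened two-dimensional spectrum that is then fitted by simulated annealing.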
Gandjour, Afschin; Tschulena, Ulrich; Steppan, Sonja; Gatti, Emanuele
2015-04-01
The aim of this paper is to develop a simulation model that analyzes cost-offsets of a hypothetical disease management program (DMP) for patients with chronic kidney disease (CKD) in Germany compared to no such program. A lifetime Markov model with simulated 65-year-old patients with CKD was developed using published data on costs and health status and simulating the progression to end-stage renal disease (ESRD), cardiovascular disease and death. A statutory health insurance perspective was adopted. This modeling study shows considerable potential for cost-offsets from a DMP for patients with CKD. The potential for cost-offsets increases with relative risk reduction by the DMP and baseline glomerular filtration rate. Results are most sensitive to the cost of dialysis treatment. This paper presents a general 'prototype' simulation model for the prevention of ESRD. The model allows for further modification and adaptation in future applications.
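A minimal cohort Markov sketch of the kind of model described (CKD progressing to ESRD and death) is shown below. The annual transition probabilities, costs and discount rate are hypothetical illustration values, not the published German inputs, and a real application would add cardiovascular states and age-dependent mortality.

```python
# Toy lifetime Markov cohort model: a DMP that slows CKD progression reduces
# time spent on costly dialysis, producing a cost offset.  All numbers assumed.
import numpy as np

states = ["CKD", "ESRD", "Dead"]
P = np.array([[0.93, 0.05, 0.02],      # from CKD
              [0.00, 0.85, 0.15],      # from ESRD (dialysis)
              [0.00, 0.00, 1.00]])     # Dead is absorbing
annual_cost = np.array([2_000.0, 45_000.0, 0.0])   # EUR/year per state (assumed)
discount = 0.03

def lifetime_cost(progression_scale=1.0, cycles=35):
    Pm = P.copy()
    Pm[0, 1] *= progression_scale                  # DMP scales the CKD->ESRD probability
    Pm[0, 0] = 1.0 - Pm[0, 1] - Pm[0, 2]
    cohort, total = np.array([1.0, 0.0, 0.0]), 0.0
    for year in range(cycles):
        total += (cohort @ annual_cost) / (1 + discount) ** year
        cohort = cohort @ Pm
    return total

print(f"cost offset per patient: {lifetime_cost(1.0) - lifetime_cost(0.8):,.0f} EUR")
```

As the abstract notes, such results are dominated by the cost of dialysis, which is why the offset grows with the assumed relative risk reduction in progression.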
NASA Technical Reports Server (NTRS)
Greenberg, Albert G.; Lubachevsky, Boris D.; Nicol, David M.; Wright, Paul E.
1994-01-01
Fast, efficient parallel algorithms are presented for discrete event simulations of dynamic channel assignment schemes for wireless cellular communication networks. The driving events are call arrivals and departures, in continuous time, to cells geographically distributed across the service area. A dynamic channel assignment scheme decides which call arrivals to accept, and which channels to allocate to the accepted calls, attempting to minimize call blocking while ensuring co-channel interference is tolerably low. Specifically, the scheme ensures that the same channel is used concurrently at different cells only if the pairwise distances between those cells are sufficiently large. Much of the complexity of the system comes from ensuring this separation. The network is modeled as a system of interacting continuous time automata, each corresponding to a cell. To simulate the model, conservative methods are used; i.e., methods in which no errors occur in the course of the simulation and so no rollback or relaxation is needed. Implemented on a 16K processor MasPar MP-1, an elegant and simple technique provides speedups of about 15 times over an optimized serial simulation running on a high speed workstation. A drawback of this technique, typical of conservative methods, is that processor utilization is rather low. To overcome this, new methods were developed that exploit slackness in event dependencies over short intervals of time, thereby raising the utilization to above 50 percent and the speedup over the optimized serial code to about 120 times.
NASA Astrophysics Data System (ADS)
Miller, D. J.; Liu, Z.; Sun, K.; Tao, L.; Nowak, J. B.; Bambha, R.; Michelsen, H. A.; Zondlo, M. A.
2014-12-01
Agricultural ammonia (NH3) emissions are highly uncertain in current bottom-up inventories. Ammonium nitrate is a dominant component of fine aerosols in agricultural regions such as the Central Valley of California, especially during winter. Recent high resolution regional modeling efforts in this region have found significant ammonium nitrate and gas-phase NH3 biases during summer. We compare spatially-resolved surface and boundary layer gas-phase NH3 observations during NASA DISCOVER-AQ California with Community Multi-Scale Air Quality (CMAQ) regional model simulations driven by the EPA NEI 2008 inventory to constrain wintertime NH3 model biases. We evaluate model performance with respect to aerosol partitioning, mixing and deposition to constrain contributions to modeled NH3 concentration biases in the Central Valley Tulare dairy region. Ammonia measurements performed with an open-path mobile platform on a vehicle are gridded to 4 km resolution hourly background concentrations. A peak detection algorithm is applied to remove local feedlot emission peaks. Aircraft NH3, NH4+ and NO3- observations are also compared with simulations extracted along the flight tracks. We find NH3 background concentrations in the dairy region are underestimated by three to five times during winter and NH3 simulations are moderately correlated with observations (r = 0.36). Although model simulations capture NH3 enhancements in the dairy region, these simulations are biased low by 30-60 ppbv NH3. Aerosol NH4+ and NO3- are also biased low in CMAQ by three and four times respectively. Unlike gas-phase NH3, CMAQ simulations do not capture typical NH4+ or NO3- enhancements observed in the dairy region. In contrast, boundary layer height simulations agree well with observations within 13%. We also address observational constraints on simulated NH3 deposition fluxes. These comparisons suggest that NEI 2008 wintertime dairy emissions are underestimated by a factor of three to five. We test sensitivity to emissions by increasing the NEI 2008 NH3 emissions uniformly across the dairy region and evaluate the impact on modeled concentrations. These results are applicable to improving predictions of ammoniated aerosol loading and highlight the value of mobile platform spatial NH3 measurements to constrain emission inventories.
Goodman, Dan F. M.; Brette, Romain
2009-01-01
“Brian” is a simulator for spiking neural networks (http://www.briansimulator.org). The focus is on making the writing of simulation code as quick and easy as possible for the user, and on flexibility: new and non-standard models are no more difficult to define than standard ones. This allows scientists to spend more time on the details of their models, and less on their implementation. Neuron models are defined by writing differential equations in standard mathematical notation, facilitating scientific communication. Brian is written in the Python programming language, and uses vector-based computation to allow for efficient simulations. It is particularly useful for neuroscientific modelling at the systems level, and for teaching computational neuroscience. PMID:20011141
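The equation-based workflow described above can be illustrated in a few lines. The snippet uses the present-day Brian 2 API (the brian2 package), which keeps the same approach of writing model equations in standard mathematical notation; the model itself (a leaky integrator with a constant drive) is only a toy example, not one from the paper.

```python
# Minimal Brian 2 sketch: define a neuron model by its differential equation,
# run it, and count the spikes.
from brian2 import NeuronGroup, SpikeMonitor, run, ms

eqs = '''
dv/dt = (I - v) / tau : 1
I : 1
tau : second
'''
group = NeuronGroup(100, eqs, threshold='v > 1', reset='v = 0', method='exact')
group.I = 1.5          # constant drive, strong enough to reach the threshold
group.tau = 10 * ms    # membrane time constant
spikes = SpikeMonitor(group)

run(100 * ms)
print(f'{spikes.num_spikes} spikes recorded in 100 ms')
```

Because the model is stated as equations rather than as calls to pre-built neuron classes, non-standard models are defined in exactly the same way as standard ones, which is the flexibility the abstract emphasizes.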
Taguchi, Katsuyuki; Polster, Christoph; Lee, Okkyun; Stierstorfer, Karl; Kappler, Steffen
2016-12-01
An x-ray photon interacts with photon counting detectors (PCDs) and generates an electron charge cloud or multiple clouds. The clouds (thus, the photon energy) may be split between two adjacent PCD pixels when the interaction occurs near pixel boundaries, producing a count at both of the pixels. This is called double-counting with charge sharing. (A photoelectric effect with K-shell fluorescence x-ray emission would result in double-counting as well). As a result, PCD data are spatially and energetically correlated, although the output of individual PCD pixels is Poisson distributed. Major problems include the lack of a detector noise model for the spatio-energetic cross talk and lack of a computationally efficient simulation tool for generating correlated Poisson data. A Monte Carlo (MC) simulation can accurately simulate these phenomena and produce noisy data; however, it is not computationally efficient. In this study, the authors developed a new detector model and implemented it in an efficient software simulator that uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency; (2) incomplete charge collection and ballistic effect; (3) interaction with PCDs via photoelectric effect (with or without K-shell fluorescence x-ray emission, which may escape from the PCDs or be reabsorbed); and (4) electronic noise. The correlation was modeled by using these two simplifying assumptions: energy conservation and mutual exclusiveness. The mutual exclusiveness is that no more than two pixels measure energy from one photon. The effect of model parameters has been studied and results were compared with MC simulations. The agreement, with respect to the spectrum, was evaluated using the reduced χ² statistic or a weighted sum of squared errors, χ²_red (≥1), where χ²_red = 1 indicates a perfect fit. The model produced spectra with flat field irradiation that qualitatively agree with previous studies. The spectra generated with different model and geometry parameters allowed for understanding the effect of the parameters on the spectrum and the correlation of data. The agreement between the model and MC data was very strong. The mean spectra with 90 keV and 140 kVp agreed exceptionally well: χ²_red values were 1.049 with 90 keV data and 1.007 with 140 kVp data. The degrees of cross talk (in terms of the relative increase from single pixel irradiation to flat field irradiation) were 22% with 90 keV and 19% with 140 kVp for MC simulations, while they were 21% and 17%, respectively, for the model. The covariance was in strong agreement qualitatively, although it was overestimated. The noisy data generation was very efficient, taking less than a CPU minute as opposed to CPU hours for MC simulators. The authors have developed a novel, computationally efficient PCD model that takes into account double-counting and resulting spatio-energetic correlation between PCD pixels. The MC simulation validated the accuracy.
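A toy version of the double-counting mechanism helps show why neighbouring pixel counts become correlated even though each pixel's output looks Poisson-like. The sharing probability, energy split and threshold below are arbitrary illustration values, and the sketch ignores fluorescence, ballistic effects and electronic noise that the actual model includes.

```python
# Charge-sharing toy: photons near a boundary deposit energy in two pixels, so
# both pixels count the same photon and their counts become correlated.
import numpy as np

rng = np.random.default_rng(0)

def simulate_pixel_pair(mean_photons=1e4, e_photon=90.0, p_share=0.2, threshold=25.0):
    n = rng.poisson(mean_photons)
    primary_is_a = rng.random(n) < 0.5                       # pixel the photon lands in
    shared = rng.random(n) < p_share                         # events near the boundary
    frac = np.where(shared, rng.uniform(0.3, 0.7, n), 1.0)   # share kept by the primary pixel
    e_primary, e_neighbour = frac * e_photon, (1 - frac) * e_photon
    e_a = np.where(primary_is_a, e_primary, e_neighbour)
    e_b = np.where(primary_is_a, e_neighbour, e_primary)
    return (int(np.count_nonzero(e_a > threshold)),
            int(np.count_nonzero(e_b > threshold)))

pairs = np.array([simulate_pixel_pair() for _ in range(200)])
print("correlation between neighbouring pixels:", np.corrcoef(pairs.T)[0, 1].round(2))
```

Counting one photon in two pixels both inflates the total count and ties the two pixels' fluctuations together, which is exactly the spatio-energetic correlation the efficient model is designed to reproduce.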
Feedbacks between air pollution and weather, Part 1: Effects on weather
NASA Astrophysics Data System (ADS)
Makar, P. A.; Gong, W.; Milbrandt, J.; Hogrefe, C.; Zhang, Y.; Curci, G.; Žabkar, R.; Im, U.; Balzarini, A.; Baró, R.; Bianconi, R.; Cheung, P.; Forkel, R.; Gravel, S.; Hirtl, M.; Honzak, L.; Hou, A.; Jiménez-Guerrero, P.; Langer, M.; Moran, M. D.; Pabla, B.; Pérez, J. L.; Pirovano, G.; San José, R.; Tuccella, P.; Werhahn, J.; Zhang, J.; Galmarini, S.
2015-08-01
The meteorological predictions of fully coupled air-quality models running in "feedback" versus "no-feedback" simulations were compared against each other and observations as part of Phase 2 of the Air Quality Model Evaluation International Initiative. In the "no-feedback" mode, the aerosol direct and indirect effects were disabled, with the models reverting to either climatologies of aerosol properties, or a no-aerosol weather simulation. In the "feedback" mode, the model-generated aerosols were allowed to modify the radiative transfer and/or cloud formation parameterizations of the respective models. Annual simulations with and without feedbacks were conducted on domains over North America for the years 2006 and 2010, and over Europe for the year 2010. The incorporation of feedbacks was found to result in systematic changes to forecast predictions of meteorological variables, both in time and space, with the largest impacts occurring in the summer and near large sources of pollution. Models incorporating only the aerosol direct effect predicted feedback-induced reductions in temperature, surface downward and upward shortwave radiation, precipitation and PBL height, and increased upward shortwave radiation, in both Europe and North America. The feedback response of models incorporating both the aerosol direct and indirect effects varied across models, suggesting the details of implementation of the indirect effect have a large impact on model results, and hence should be a focus for future research. The feedback response of models incorporating both direct and indirect effects was also consistently larger in magnitude than that of models incorporating the direct effect alone, implying that the indirect effect may be the dominant process. Comparisons across modelling platforms suggested that direct and indirect effect feedbacks may often act in competition: the sign of residual changes associated with feedbacks often changed between those models incorporating the direct effect alone versus those incorporating both feedback processes. Model comparisons to observations for no-feedback and feedback implementations of the same model showed that differences in performance between models were larger than the performance changes associated with implementing feedbacks within a given model. However, feedback implementation was shown to result in improved forecasts of meteorological parameters such as the 2 m surface temperature and precipitation. These findings suggest that meteorological forecasts may be improved through the use of fully coupled feedback models, or through incorporation of improved climatologies of aerosol properties, the latter designed to include spatial, temporal and aerosol size and/or speciation variations.
Monti, Jack; Misut, Paul E.; Busciolano, Ronald J.
2009-01-01
The coastal-aquifer system of Manhasset Neck, Nassau County, New York, has been stressed by pumping, which has led to saltwater intrusion and the abandonment of one public-supply well in 1944. Measurements of chloride concentrations and water levels in 2004 from the deep, confined aquifers indicate active saltwater intrusion in response to public-supply pumping. A numerical model capable of simulating three-dimensional variable-density ground-water flow and solute transport in heterogeneous, anisotropic aquifers was developed using the U.S. Geological Survey finite-element, variable-density, solute-transport simulator SUTRA, to investigate the extent of saltwater intrusion beneath Manhasset Neck. The model is composed of eight layers representing the hydrogeologic system beneath Manhasset Neck. Four modifications to the area's previously described hydrogeologic framework were made in the model: (1) the bedrock-surface altitude at well N12191 was corrected from a previously reported value, (2) part of the extent of the Raritan confining unit was shifted, (3) part of the extent of the North Shore confining unit was shifted, and (4) a clay layer in the upper glacial aquifer was added in the central and southern parts of the Manhasset Neck peninsula. Ground-water flow and the location of the freshwater-saltwater interface were simulated for three conditions (time periods): (1) a steady-state (predevelopment) simulation of no pumping prior to about 1905, (2) a 40-year transient simulation based on 1939 pumpage representing the 1905-1944 period of gradual saltwater intrusion, and (3) a 60-year transient simulation based on 1995 pumpage representing the 1945-2005 period of stabilized withdrawals. The 1939 pumpage rate (12.1 million gallons per day (Mgal/d)) applied to the 1905-1944 transient simulation caused modeled average water-level declines of 2 and 4 feet (ft) in the shallow and deep aquifer systems from predevelopment conditions, respectively, a net decrease of 5.2 Mgal/d in freshwater discharge to offshore areas, and a net increase of 6.9 Mgal/d of freshwater entering the model from the eastern, western, and southern lateral boundaries. The 1995 pumpage rate (43.3 Mgal/d) applied to the 1945-2005 transient simulation caused modeled average water-level declines of 5 and 8 ft in the shallow and deep aquifer systems from predevelopment conditions, respectively, a net decrease of 13.2 Mgal/d in freshwater discharge to offshore areas, and a net increase of 30.1 Mgal/d of freshwater entering the model from the eastern, western, and southern lateral boundaries. The simulated decrease in freshwater discharge to the offshore areas caused saltwater intrusion in two parts of the deep aquifer system under Manhasset Neck. Saline ground water simulated in a third part of the deep aquifer system under Manhasset Neck was due to the absence of the North Shore confining unit near Sands Point. Simulated chloride concentrations greater than 250 milligrams per liter (mg/L) were used to represent the freshwater-saltwater interface, and the movement of this concentration was evaluated for transient simulations. The decrease in the 1905-1944 simulated freshwater discharge to the offshore areas caused the freshwater-saltwater interface in the deep aquifer system to advance landward more than 1,700 ft from its steady-state position in the vicinity of Baxter Estates Village, Long Island, New York.
The decrease in the 1945-2005 simulated freshwater discharge to the offshore areas caused a different area of the freshwater-saltwater interface in the deep aquifer system to advance more than 600 ft from its steady-state position approximately 1 mile south of the Baxter Estates Village. However, the 1945-2005 transient simulation underestimates the concentration and extent of saltwater intrusion determined from water-quality samples collected from wells N12508 and N12793, where measured chloride concentrations increased from 625 and 18 mg/L in 1997 t
Lindgren, R.J.
2001-01-01
The simulated contributing areas for selected water-supply wells in the Cold Spring area generally extend to and possibly beyond the model boundaries to the north and to the southeast. The contributing areas for the Gold'n Plump Poultry Processing Plant supply wells extend: (1) to the Sauk River, (2) to the north to and possibly beyond the northern model boundary, and (3) to the southeast to and possibly beyond the southeastern model boundary. The primary effects of projected increased ground-water withdrawals of 0.23 cubic feet per second (a 7.5 percent increase) were to: (1) decrease outflow from the Sauk River Valley aquifer through constant-head boundaries and (2) decrease leakage from the valley unit of the Sauk River Valley aquifer to the streams. No appreciable differences were discernible between the simulated steady-state contributing areas to wells with 1998 pumpage and those with the projected pumpage.
NASA Astrophysics Data System (ADS)
Burleyson, C. D.; Voisin, N.; Taylor, T.; Xie, Y.; Kraucunas, I.
2017-12-01
The DOE's Pacific Northwest National Laboratory (PNNL) has been developing the Building ENergy Demand (BEND) model to simulate energy usage in residential and commercial buildings responding to changes in weather, climate, population, and building technologies. At its core, BEND is a mechanism to aggregate EnergyPlus simulations of a large number of individual buildings with a diversity of characteristics over large spatial scales. We have completed a series of experiments to explore methods to calibrate the BEND model, measure its ability to capture interannual variability in energy demand due to weather using simulations of two distinct weather years, and understand the sensitivity to the number and location of weather stations used to force the model. The use of weather from "representative cities" reduces computational costs, but often fails to capture spatial heterogeneity that may be important for simulations aimed at understanding how building stocks respond to a changing climate (Fig. 1). We quantify the potential reduction in temperature and load biases from using an increasing number of weather stations across the western U.S., ranging from 8 to roughly 150. Using 8 stations results in an average absolute summertime temperature bias of 4.0°C. The mean absolute bias drops to 1.5°C using all available stations. Temperature biases of this magnitude translate to absolute summertime mean simulated load biases as high as 13.8%. Additionally, using only 8 representative weather stations can lead to a 20-40% bias of peak building loads under heat wave or cold snap conditions, a significant error for capacity expansion planners who may rely on these types of simulations. This analysis suggests that using 4 stations per climate zone may be sufficient for most purposes. Our novel approach, which requires no new EnergyPlus simulations, could be useful to other researchers designing or calibrating aggregate building model simulations - particularly those looking at the impact of future climate scenarios. Fig. 1. An example of temperature bias that results from using 8 representative weather stations: (a) surface temperature from NLDAS on 5-July 2008 at 2000 UTC; (b) temperature from 8 representative stations at the same time mapped to all counties within a given IECC climate zone; (c) the difference between (a) and (b).
NASA Astrophysics Data System (ADS)
Jorba, O.; Piot, M.; Pay, M. T.; Jiménez-Guerrero, P.; López, E.; Pérez, C.; Gassó, S.; Baldasano, J. M.
2009-09-01
In the frame of the CALIOPE project (Baldasano et al., 2008a), a high-resolution air quality forecasting system, WRF-ARW/HERMES/CMAQ/DREAM, is under development and applied to the European domain (12 km x 12 km, 1 hr) as well as to the Iberian Peninsula domain (4 km x 4 km, 1 hr) to provide air quality forecasts for Spain (http://www.bsc.es/caliope/). The simulation of such a high-resolution model system is made possible by its implementation on the MareNostrum supercomputer. To reassure potential users and reduce uncertainties, the model system must be evaluated to assess its performance in reproducing air quality levels and dynamics. The present contribution describes a thorough quantitative evaluation study performed for a reference year (2004). CALIOPE is a complex system that integrates a variety of environmental models. WRF-ARW provides high-resolution meteorological fields to the system. It is configured with 38 vertical layers reaching up to 50 hPa. Meteorological initial and boundary conditions are obtained from the NCEP final analysis data. The HERMES emission model (Baldasano et al., 2008b) computes the emissions for the Iberian Peninsula simulation at 4 km horizontal resolution every hour using a bottom-up approach. For the European domain, HERMES disaggregates the EMEP expert emission inventory for 2004. The CMAQ chemical transport model solves the physico-chemical processes in the system. The vertical resolution of CMAQ for gas-phase species and aerosols has been increased from 8 to 15 layers in order to simulate vertical exchanges more accurately. Chemical boundary conditions are provided by the LMDz-INCA2 global climate-chemistry model (see Hauglustaine et al., 2004). Finally, the DREAM model simulates long-range transport of mineral dust over the domains under study. In order to evaluate the performance of the CALIOPE system, model simulations were compared with ground-based measurements from the EMEP and Spanish air quality networks. For the European domain, 45 stations have been used to evaluate NO2, 60 for O3, 39 for SO2, 25 for PM10 and 16 for PM2.5. On the other hand, the Iberian Peninsula domain has been evaluated against 75 NO2 stations, 84 O3 stations, 69 for SO2, and 46 for PM10. Such a large number of observations allows us to provide a detailed discussion of the model skill over quite different geographical locations and meteorological situations. The model simulation for Europe satisfactorily reproduces O3 concentrations throughout the year with small errors: monthly MNGE values range from 13% to 24%, and MNBE values show a slight negative bias ranging from -15% to 0%. These values lie within the range defined by the US-EPA guidelines (MNGE: +/- 30-35%; MNBE: +/- 10-15%). The reproduction of SO2 concentrations is relatively correct but false peaks are reported (mean MNBE = 22%). The simulated variation of particulate matter is reliable, with a mean correlation of 0.5. False peaks were reduced by use of an improved 8-bin aerosol description in the DREAM dust model, but mean aerosol levels are still underestimated. This problem is most probably related to uncertainties in our knowledge of the sources and in the description of organic aerosols. The nested high-resolution simulation of Spain (4 km) shows a very good agreement with observations for O3 (monthly MNGE values range from 13 to 19%). Particulate matter results are in agreement with the European simulation, and a net improvement in nitrate and sulphate is observed at several stations in Spain.
Such a high-resolution simulation will allow analysis of the small-scale features observed over Spain.
REFERENCES
Baldasano, J.M., P. Jiménez-Guerrero, O. Jorba, C. Pérez, E. López, P. Güereca, F. Martin, M. García-Vivanco, I. Palomino, X. Querol, M. Pandolfi, M.J. Sanz and J.J. Diéguez, 2008a: CALIOPE: An operational air quality forecasting system for the Iberian Peninsula, Balearic Islands and Canary Islands - First annual evaluation and ongoing developments. Adv. Sci. and Res., 2: 89-98.
Baldasano, J.M., L.P. Güereca, E. López, S. Gassó, P. Jiménez-Guerrero, 2008b: Development of a high-resolution (1 km x 1 km, 1 h) emission model for Spain: the High-Elective Resolution Modelling Emission System (HERMES). Atm. Environ., 42 (31): 7215-7233.
Hauglustaine, D.A., F. Hourdin, L. Jourdain, M.A. Filiberti, S. Walters, J.F. Lamarque and E.A. Holland, 2004: Interactive chemistry in the Laboratoire de Meteorologie Dynamique general circulation model: Description and background tropospheric chemistry evaluation. J. Geophys. Res., doi:10.1029/2003JD003957.
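The evaluation above is stated in terms of the mean normalized bias error (MNBE) and mean normalized gross error (MNGE). These are standard statistics for paired model/observation values; the short sketch below uses their common definitions (the observation data here are made-up numbers, only to show the calculation).

```python
# Mean normalized bias error and mean normalized gross error, in percent,
# computed over paired model/observation values.
import numpy as np

def mnbe(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * np.mean((model - obs) / obs)

def mnge(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * np.mean(np.abs(model - obs) / obs)

obs   = [62.0, 80.0, 95.0, 70.0]    # e.g. hourly O3 in ug/m3 (illustrative values)
model = [55.0, 85.0, 88.0, 64.0]
print(f"MNBE = {mnbe(model, obs):.1f} %, MNGE = {mnge(model, obs):.1f} %")
```

MNBE keeps the sign of the errors (over- or underestimation), while MNGE measures their typical magnitude, which is why both are quoted against the US-EPA guideline ranges.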
Achieving Remission in Gulf War Illness: A Simulation-Based Approach to Treatment Design.
Craddock, Travis J A; Del Rosario, Ryan R; Rice, Mark; Zysman, Joel P; Fletcher, Mary Ann; Klimas, Nancy G; Broderick, Gordon
2015-01-01
Gulf War Illness (GWI) is a chronic multi-symptom disorder affecting up to one-third of the 700,000 returning veterans of the 1991 Persian Gulf War and for which there is no known cure. GWI symptoms span several of the body's principal regulatory systems and include debilitating fatigue, severe musculoskeletal pain, and cognitive and neurological problems. Using computational models, our group reported previously that GWI might be perpetuated, at least in part, by natural homeostatic regulation of the neuroendocrine-immune network. In this work, we attempt to harness these regulatory dynamics to identify treatment courses that might produce lasting remission. Toward this end, we apply a combinatorial optimization scheme to the Monte Carlo simulation of a discrete ternary logic model that represents combined hypothalamic-pituitary-adrenal (HPA), gonadal (HPG), and immune system regulation in males. We found that no single intervention target allowed a robust return to normal homeostatic control. All combined interventions leading to a predicted remission involved an initial inhibition of Th1 inflammatory cytokines (Th1Cyt) followed by a subsequent inhibition of glucocorticoid receptor function (GR). These first two intervention events alone ended in a stable and lasting return to normal regulatory control in 40% of the simulated cases. Applying a second cycle of this combined treatment improved the predicted remission rate to 2 out of 3 simulated subjects (63%). These results suggest that in a complex illness such as GWI, a multi-tiered intervention strategy that formally accounts for regulatory dynamics may be required to reset neuroendocrine-immune homeostasis and support extended remission.
2018-01-01
Mathematical models simulating different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear differential equations, coupled or uncoupled. For different values of the parameters that influence the solution, the problems are numerically solved by the network method, which provides all the variables of the problems. Although the models are extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the models. PMID:29518121
Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F
2018-01-01
Mathematical models simulating different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear differential equations, coupled or uncoupled. For different values of the parameters that influence the solution, the problems are numerically solved by the network method, which provides all the variables of the problems. Although the models are extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the models.
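To give a flavour of the network (circuit-analogy) method for one of the problem classes named above, the sketch below maps a 1-D transient diffusion problem of the moving-front type onto an RC ladder: one capacitor per cell and resistors between cells. For brevity it integrates the resulting node equations directly with SciPy rather than exporting a netlist to a circuit simulator such as NgSpice, which is what the full method would do; all geometry and material values are illustrative assumptions.

```python
# RC-ladder analogy for 1-D transient diffusion: R between nodes, C to ground.
import numpy as np
from scipy.integrate import solve_ivp

n, L, alpha = 50, 1.0, 1e-3          # cells, domain length (m), diffusivity (m2/s)
dx = L / n
R, C = dx / alpha, dx                # inter-node resistance, per-node capacitance

def node_equations(t, u):
    du = np.zeros_like(u)
    du[1:-1] = ((u[:-2] - u[1:-1]) / R + (u[2:] - u[1:-1]) / R) / C   # interior nodes
    du[0] = (1.0 - u[0]) / (R * C) + (u[1] - u[0]) / (R * C)          # left end driven by a unit source
    du[-1] = (u[-2] - u[-1]) / (R * C)                                # insulated right end
    return du

sol = solve_ivp(node_equations, (0, 100.0), np.zeros(n), t_eval=[100.0])
front_index = int(np.argmax(sol.y[:, -1] < 0.5))
print("front position proxy at t = 100 s:", round(front_index * dx, 3), "m")
```

The attraction of the method is that, once the circuit analogy is drawn, a general-purpose circuit solver handles the nonlinearities and time stepping without any linearization of the governing equations.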
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Simon C. H., E-mail: simonyu@cuhk.edu.hk; Liu, Wen; Wong, Randolph H. L.
Purpose: We aimed to assess the potential of computational fluid dynamics (CFD) simulation in detecting changes in pressure and flow velocity in response to morphological changes in type B aortic dissection. Materials and Methods: Pressure and velocity in four morphological models of type B aortic dissection before and after closure of the entry tear were calculated with CFD and analyzed for changes among the different scenarios. The control model (Model 1) was patient specific and built from the DICOM data of CTA, which bore one entry tear and three re-entry tears. Models 2-4 were modifications of Model 1, with two re-entry tears less in Model 2, one re-entry tear more in Model 3, and a larger entry tear in Model 4. Results: The pressure and velocity pertaining to each of the morphological models were unique. Changes in pressure and velocity findings were accountable by the changes in morphological features of the different models. There was no blood flow in the false lumen across the entry tear after its closure, and the blood flow direction across the re-entry tears was reversed after closure of the entry tear. Conclusion: CFD simulation is probably useful to detect hemodynamic changes in the true and false lumens of type B aortic dissection in response to morphological changes, and it may potentially be developed into a non-invasive and patient-specific tool for serial monitoring of hemodynamic changes of type B aortic dissection before and after treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Jiwen; Han, Bin; Varble, Adam
A constrained model intercomparison study of a mid-latitude mesoscale squall line is performed using the Weather Research & Forecasting (WRF) model at 1-km horizontal grid spacing with eight cloud microphysics schemes, to understand specific processes that lead to the large spread of simulated cloud and precipitation at cloud-resolving scales, with the focus of this paper on convective cores. Various observational data are employed to evaluate the baseline simulations. All simulations tend to produce a wider convective area than observed, but a much narrower stratiform area, with most bulk schemes overpredicting radar reflectivity. The magnitudes of the virtual potential temperature drop, pressure rise, and the peak wind speed associated with the passage of the gust front are significantly smaller compared with the observations, suggesting simulated cold pools are weaker. Simulations also overestimate the vertical velocity and Ze in convective cores as compared with observational retrievals. The modeled updraft velocity and precipitation have a significant spread across the eight schemes even in this strongly dynamically-driven system. The spread of updraft velocity is attributed to the combined effects of the low-level perturbation pressure gradient determined by cold pool intensity and buoyancy that is not necessarily well correlated to differences in latent heating among the simulations. Variability of updraft velocity between schemes is also related to differences in ice-related parameterizations, whereas precipitation variability increases in no-ice simulations because of scheme differences in collision-coalescence parameterizations.
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
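As a rough illustration of the Monte Carlo parameter search described above, the hedged sketch below samples random sets of hypothetical muscle parameters, scores each candidate against assumed target activation levels, and keeps the best-scoring set. The parameter names, target values, and the stand-in simulation function are all invented for illustration; this is not the LifeModeler workflow.

```python
# Illustrative sketch (not the LifeModeler workflow): Monte Carlo search over
# hypothetical muscle parameters, scored against assumed target activations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target peak activations observed in 1-g squatting (fractions of max).
target_activation = {"rectus_femoris": 0.60, "gluteus_maximus": 0.45, "soleus": 0.30}

def simulate_activation(params):
    """Stand-in for a musculoskeletal simulation: maps parameters to activations.
    A real study would run the biomechanics model here."""
    strength, opt_fiber_len, tendon_slack = params
    return {
        "rectus_femoris": np.clip(0.8 * strength - 0.2 * tendon_slack, 0, 1),
        "gluteus_maximus": np.clip(0.5 * strength + 0.1 * opt_fiber_len, 0, 1),
        "soleus": np.clip(0.4 * opt_fiber_len, 0, 1),
    }

def score(params):
    """Sum of squared errors between simulated and target peak activations."""
    sim = simulate_activation(params)
    return sum((sim[m] - target_activation[m]) ** 2 for m in target_activation)

# Draw random parameter sets (scaled 0-1) and keep the best-scoring one.
candidates = rng.uniform(0.0, 1.0, size=(5000, 3))
best = min(candidates, key=score)
print("best parameters:", best, "score:", score(best))
```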
Evaluation of some random effects methodology applicable to bird ringing data
Burnham, K.P.; White, Gary C.
2002-01-01
Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1, ..., Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random, with process variance E(εi²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for the process variation, σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional sampling-variance component. Furthermore, the random effects model leads to shrinkage estimates, S̃i, as improved (in mean square error) estimators of Si compared to the MLEs, Ŝi, from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1, ..., Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than the fixed-effects MLE for the Si.
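To make the shrinkage idea concrete, the sketch below applies a simple method-of-moments random-effects calculation to a set of synthetic annual survival MLEs: the process variance σ² is estimated by subtracting the average sampling variance from the total variation of the estimates, and each estimate is then pulled toward the mean accordingly. This is only an illustration of the concept, not the estimation machinery of program MARK.

```python
# Illustrative sketch of random-effects shrinkage for annual survival estimates.
# Not the program MARK algorithm; the MLEs and their sampling variances are synthetic.
import numpy as np

S_hat = np.array([0.62, 0.55, 0.70, 0.48, 0.66, 0.58])          # hypothetical annual MLEs
var_hat = np.array([0.004, 0.006, 0.005, 0.007, 0.004, 0.006])  # sampling variances

mu = S_hat.mean()                                   # estimate of E(S)
# Method-of-moments estimate of process variance sigma^2:
# total variation of the MLEs minus their average sampling variance.
sigma2 = max(S_hat.var(ddof=1) - var_hat.mean(), 0.0)

# Shrinkage: each estimate is pulled toward the mean in proportion to how much
# of its total variation is sampling noise rather than process variation.
weight = sigma2 / (sigma2 + var_hat)
S_tilde = mu + weight * (S_hat - mu)

print("E(S) =", round(mu, 3), " sigma^2 =", round(sigma2, 4))
print("shrinkage estimates:", np.round(S_tilde, 3))
```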
Ely, D. Matthew; Kahle, Sue C.
2004-01-01
Increased use of ground- and surface-water supplies in watersheds of Washington State in recent years has created concern that insufficient instream flows remain for fish and other uses. Issuance of new ground-water rights in the Colville River Watershed was halted by the Washington Department of Ecology due to possible hydraulic continuity of the ground and surface waters. A ground-water-flow model was developed to aid in the understanding of the ground-water system and the regional effects of ground-water development alternatives on the water resources of the Colville River Watershed. The Colville River Watershed is underlain by unconsolidated deposits of glacial and non-glacial origin. The surficial geologic units and the deposits at depth were differentiated into aquifers and confining units on the basis of areal extent and general water-bearing characteristics. Five principal hydrogeologic units are recognized in the study area and form the basis of the ground-water-flow model. A steady-state ground-water-flow model of the Colville River Watershed was developed to simulate September 2001 conditions. The simulation period represented a period of below-average precipitation. The model was calibrated using nonlinear regression to minimize the weighted differences or residuals between simulated and measured hydraulic head and stream discharge. Simulated inflow to the model area was 53,000 acre-feet per year (acre-ft/yr) from precipitation and secondary recharge, and 36,000 acre-ft/yr from stream and lake leakage. Simulated outflow from the model was primarily through discharge to streams and lakes (71,000 acre-ft/yr), ground-water outflow (9,000 acre-ft/yr), and ground-water withdrawals (9,000 acre-ft/yr). Because the period of simulation, September 2001, was extremely dry, all components of the ground-water budget are presumably less than average flow conditions. The calibrated model was used to simulate the possible effects of increased ground-water pumping. Although the steady-state model cannot be used to predict how long it would take for effects to occur, it does simulate the ultimate response to such changes relative to September 2001 (relatively dry) conditions. Steady-state simulations indicated that increased pumping would result in decreased discharge to streams and lakes and decreased ground-water outflow. The location of the simulated increased ground-water pumping determined the primary source of the water withdrawn. Simulated pumping wells in the northern end of the main Colville River valley diverted a large percentage of the pumpage from ground-water outflow. Simulated pumping wells in the southern end of the main Colville River valley diverted a large percentage of the pumpage from flow to rivers and streams. The calibrated steady-state model also was used to simulate predevelopment conditions, during which no ground-water pumping, secondary recharge, or irrigation application occurred. Cumulative streamflow in the Colville River Watershed increased by 1.1 cubic feet per second, or about 36 percent of net ground-water pumping in 2001. The model is intended to simulate the regional ground-water-flow system of the Colville River Watershed and can be used as a tool for water-resource managers to assess the ultimate regional effects of changes in stresses. 
The regional scale of the model, coupled with relatively sparse data, must be considered when applying the model in areas of poorly understood hydrology, or examining hydrologic conditions at a larger scale than what is appropriate.
Chetty, Mersha; Kenworthy, James J; Langham, Sue; Walker, Andrew; Dunlop, William C N
2017-02-24
Opioid dependence is a chronic condition with substantial health, economic and social costs. The study objective was to conduct a systematic review of published health-economic models of opioid agonist therapy for non-prescription opioid dependence, to review the different modelling approaches identified, and to inform future modelling studies. Literature searches were conducted in March 2015 in eight electronic databases, supplemented by hand-searching reference lists and searches on six National Health Technology Assessment Agency websites. Studies were included if they: investigated populations that were dependent on non-prescription opioids and were receiving opioid agonist or maintenance therapy; compared any pharmacological maintenance intervention with any other maintenance regimen (including placebo or no treatment); and were health-economic models of any type. A total of 18 unique models were included. These used a range of modelling approaches, including Markov models (n = 4), decision tree with Monte Carlo simulations (n = 3), decision analysis (n = 3), dynamic transmission models (n = 3), decision tree (n = 1), cohort simulation (n = 1), Bayesian (n = 1), and Monte Carlo simulations (n = 2). Time horizons ranged from 6 months to lifetime. The most common evaluation was cost-utility analysis reporting cost per quality-adjusted life-year (n = 11), followed by cost-effectiveness analysis (n = 4), budget-impact analysis/cost comparison (n = 2) and cost-benefit analysis (n = 1). Most studies took the healthcare provider's perspective. Only a few models included some wider societal costs, such as productivity loss or costs of drug-related crime, disorder and antisocial behaviour. Costs to individuals and impacts on family and social networks were not included in any model. A relatively small number of studies of varying quality were found. Strengths and weaknesses relating to model structure, inputs and approach were identified across all the studies. There was no indication of a single standard emerging as a preferred approach. Most studies omitted societal costs, an important issue since the implications of drug abuse extend widely beyond healthcare services. Nevertheless, elements from previous models could together form a framework for future economic evaluations in opioid agonist therapy including all relevant costs and outcomes. This could more adequately support decision-making and policy development for treatment of non-prescription opioid dependence.
Enhanced representation of soil NO emissions in the ...
Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12 km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad Jan; Ethan Coon; Scott Painter
This Modeling Archive is in support of an NGEE Arctic manuscript under review. A new subgrid model was implemented in the Advanced Terrestrial Simulator (ATS) to capture micro-topography effects on surface flow. Fine-scale simulations of seven individual ice-wedge polygons and a cluster of polygons were compared between the subgrid model and the no-subgrid model. Our findings confirm that the effects of small-scale spatial heterogeneities can be captured in the coarsened models. The dataset contains the meshes, input files, and subgrid parameters used in the simulations. Python scripts for post-processing and files for geometric analyses are also included.
Pinloche, E; Williams, M; D'Inca, R; Auclair, E; Newbold, C J
2012-12-01
The impact of 2 doses of a Saccharomyces cerevisiae live yeast, 5 × 10^10 cfu/kg of feed (L1) and 5 × 10^11 cfu/kg of feed (L2), was evaluated against a control (CON) with no added yeast, using an in vitro model [colon simulation technique (Cositec)] to mimic digestion in the pig colon. The L2 (but not L1) dose significantly improved DM digestibility compared to CON (61 vs. 58%) and increased NH3 concentrations (+15%). Volatile fatty acid concentrations increased with L2 compared to CON for isobutyrate (+13.5%), propionate (+8.5%), isovalerate (+17.8%), and valerate (+25%), but only valerate was increased with L1 (+14.2%). The analysis of microbiota from the liquid-associated bacteria (LAB) and solid-associated bacteria (SAB) revealed an interaction between fraction and treatment (P < 0.05). Indeed, L2 had a significant impact on SAB and LAB (P < 0.01), whereas L1 only tended to change the structure of the population in the SAB (P < 0.1). Overall, this study showed that a live yeast probiotic could improve digestion in a colonic simulation model, but only at the higher dose used, and this effect was associated with a shift in the bacterial population therein.
On the use of programmable hardware and reduced numerical precision in earth-system modeling.
Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N
2015-09-01
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
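The sketch below illustrates the general idea of comparing reduced-precision integrations against a double-precision reference over a short run, using the single-scale Lorenz '95/'96 toy model rather than the two-scale FPGA configuration of the study; the forcing, time step, and run length are assumed values chosen only for illustration.

```python
# Minimal sketch: integrate the one-scale Lorenz '95 toy model at different
# floating-point precisions and compare against a float64 reference run.
# Not the two-scale FPGA configuration of the study; F, dt, and steps are assumed.
import numpy as np

def tendency(x, F):
    # dX_k/dt = (X_{k+1} - X_{k-2}) * X_{k-1} - X_k + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def integrate(x0, steps, dt, F, dtype):
    x = x0.astype(dtype)
    F, dt = dtype(F), dtype(dt)
    for _ in range(steps):
        # classical 4th-order Runge-Kutta step
        k1 = tendency(x, F)
        k2 = tendency(x + dt / dtype(2) * k1, F)
        k3 = tendency(x + dt / dtype(2) * k2, F)
        k4 = tendency(x + dt * k3, F)
        x = x + dt / dtype(6) * (k1 + dtype(2) * k2 + dtype(2) * k3 + k4)
    return x

rng = np.random.default_rng(1)
x0 = 8.0 + rng.standard_normal(40)          # 40 grid points, forcing F = 8

ref = integrate(x0, steps=50, dt=0.005, F=8.0, dtype=np.float64)
for dtype in (np.float32, np.float16):
    x = integrate(x0, steps=50, dt=0.005, F=8.0, dtype=dtype)
    rms = np.sqrt(np.mean((x.astype(np.float64) - ref) ** 2))
    print(dtype.__name__, "RMS difference from float64 reference:", float(rms))
```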
Extension of PENELOPE to protons: simulation of nuclear reactions and benchmark with Geant4.
Sterpin, E; Sorriaux, J; Vynckier, S
2013-11-01
Describing the implementation of nuclear reactions in the extension of the Monte Carlo code (MC) PENELOPE to protons (PENH) and benchmarking with Geant4. PENH is based on mixed-simulation mechanics for both elastic and inelastic electromagnetic collisions (EM). The adopted differential cross sections for EM elastic collisions are calculated using the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. Cross sections for EM inelastic collisions are computed within the relativistic Born approximation, using the Sternheimer-Liljequist model of the generalized oscillator strength. Nuclear elastic and inelastic collisions were simulated using explicitly the scattering analysis interactive dialin database for (1)H and ICRU 63 data for (12)C, (14)N, (16)O, (31)P, and (40)Ca. Secondary protons, alphas, and deuterons were all simulated as protons, with the energy adapted to ensure consistent range. Prompt gamma emission can also be simulated upon user request. Simulations were performed in a water phantom with nuclear interactions switched off or on and integral depth-dose distributions were compared. Binary-cascade and precompound models were used for Geant4. Initial energies of 100 and 250 MeV were considered. For cases with no nuclear interactions simulated, additional simulations in a water phantom with tight resolution (1 mm in all directions) were performed with FLUKA. Finally, integral depth-dose distributions for a 250 MeV energy were computed with Geant4 and PENH in a homogeneous phantom with, first, ICRU striated muscle and, second, ICRU compact bone. For simulations with EM collisions only, integral depth-dose distributions were within 1%/1 mm for doses higher than 10% of the Bragg-peak dose. For central-axis depth-dose and lateral profiles in a phantom with tight resolution, there are significant deviations between Geant4 and PENH (up to 60%/1 cm for depth-dose distributions). The agreement is much better with FLUKA, with deviations within 3%/3 mm. When nuclear interactions were turned on, agreement (within 6% before the Bragg-peak) between PENH and Geant4 was consistent with uncertainties on nuclear models and cross sections, whatever the material simulated (water, muscle, or bone). A detailed and flexible description of nuclear reactions has been implemented in the PENH extension of PENELOPE to protons, which utilizes a mixed-simulation scheme for both elastic and inelastic EM collisions, analogous to the well-established algorithm for electrons/positrons. PENH is compatible with all current main programs that use PENELOPE as the MC engine. The nuclear model of PENH is realistic enough to give dose distributions in fair agreement with those computed by Geant4.
Extension of PENELOPE to protons: Simulation of nuclear reactions and benchmark with Geant4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterpin, E.; Sorriaux, J.; Vynckier, S.
2013-11-15
Purpose: Describing the implementation of nuclear reactions in the extension of the Monte Carlo code (MC) PENELOPE to protons (PENH) and benchmarking with Geant4. Methods: PENH is based on mixed-simulation mechanics for both elastic and inelastic electromagnetic collisions (EM). The adopted differential cross sections for EM elastic collisions are calculated using the eikonal approximation with the Dirac–Hartree–Fock–Slater atomic potential. Cross sections for EM inelastic collisions are computed within the relativistic Born approximation, using the Sternheimer–Liljequist model of the generalized oscillator strength. Nuclear elastic and inelastic collisions were simulated using explicitly the scattering analysis interactive dialin database for (1)H and ICRU 63 data for (12)C, (14)N, (16)O, (31)P, and (40)Ca. Secondary protons, alphas, and deuterons were all simulated as protons, with the energy adapted to ensure consistent range. Prompt gamma emission can also be simulated upon user request. Simulations were performed in a water phantom with nuclear interactions switched off or on and integral depth–dose distributions were compared. Binary-cascade and precompound models were used for Geant4. Initial energies of 100 and 250 MeV were considered. For cases with no nuclear interactions simulated, additional simulations in a water phantom with tight resolution (1 mm in all directions) were performed with FLUKA. Finally, integral depth–dose distributions for a 250 MeV energy were computed with Geant4 and PENH in a homogeneous phantom with, first, ICRU striated muscle and, second, ICRU compact bone. Results: For simulations with EM collisions only, integral depth–dose distributions were within 1%/1 mm for doses higher than 10% of the Bragg-peak dose. For central-axis depth–dose and lateral profiles in a phantom with tight resolution, there are significant deviations between Geant4 and PENH (up to 60%/1 cm for depth–dose distributions). The agreement is much better with FLUKA, with deviations within 3%/3 mm. When nuclear interactions were turned on, agreement (within 6% before the Bragg-peak) between PENH and Geant4 was consistent with uncertainties on nuclear models and cross sections, whatever the material simulated (water, muscle, or bone). Conclusions: A detailed and flexible description of nuclear reactions has been implemented in the PENH extension of PENELOPE to protons, which utilizes a mixed-simulation scheme for both elastic and inelastic EM collisions, analogous to the well-established algorithm for electrons/positrons. PENH is compatible with all current main programs that use PENELOPE as the MC engine. The nuclear model of PENH is realistic enough to give dose distributions in fair agreement with those computed by Geant4.
N loss to drain flow and N2O emissions from a corn-soybean rotation with winter rye.
Gillette, K; Malone, R W; Kaspar, T C; Ma, L; Parkin, T B; Jaynes, D B; Fang, Q X; Hatfield, J L; Feyereisen, G W; Kersebaum, K C
2018-03-15
Anthropogenic perturbation of the global nitrogen cycle and its effects on the environment, such as hypoxia in coastal regions and increased N2O emissions, is of increasing, multi-disciplinary, worldwide concern, and agricultural production is a major contributor. Only limited studies, however, have simultaneously investigated NO3- losses to subsurface drain flow and N2O emissions under corn-soybean production. We used the Root Zone Water Quality Model (RZWQM) to evaluate NO3- losses to drain flow and N2O emissions in a corn-soybean system with a winter rye cover crop (CC) in central Iowa over a nine-year period. The observed and simulated average drain flow N concentration reductions from CC were 60% and 54% compared to the no cover crop system (NCC). Average annual April through October cumulative observed and simulated N2O emissions (2004-2010) were 6.7 and 6.0 kg N2O-N ha-1 yr-1 for NCC, and 6.2 and 7.2 kg N ha-1 for CC. In contrast to previous research, monthly N2O emissions were generally greatest when N losses to leaching were greatest, mostly because relatively high rainfall occurred during the months fertilizer was applied. N2O emission factors of 0.032 and 0.041 were estimated for NCC and CC using the tested model, which are similar to field results in the region. A local sensitivity analysis suggests that lower soil field capacity affects RZWQM simulations, including increased drain flow nitrate concentrations, increased N mineralization, and reduced soil water content. The results suggest that 1) RZWQM is a promising tool to estimate N2O emissions from subsurface drained corn-soybean rotations and to estimate the relative effects of a winter rye cover crop over a nine-year period on nitrate loss to drain flow and 2) soil field capacity is an important parameter to model N mineralization and N loss to drain flow. Published by Elsevier B.V.
Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations
NASA Astrophysics Data System (ADS)
Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto
2018-04-01
Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = ‑0.0909 to Rot = 0.3 are simulated. First, the LES of TC is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase for increasing rotation. This is attributed to the increasing anisotropic character of the fluctuations. Second, "over-damped" LES, i.e., LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
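For reference, the static Smagorinsky closure used in the benchmark evaluates an eddy viscosity ν_t = (c_s Δ)² |S̄| from the resolved strain rate. The sketch below computes this quantity on a small synthetic periodic velocity field; it is not the Taylor-Couette solver, and the grid and field are assumed for illustration only.

```python
# Minimal sketch of the static Smagorinsky closure nu_t = (c_s * Delta)^2 * |S|,
# evaluated on a synthetic periodic velocity field. Not the Taylor-Couette solver;
# grid size, spacing, and the field itself are assumed for illustration.
import numpy as np

n, dx, cs = 32, 1.0 / 32, 0.1
x = np.arange(n) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Synthetic Taylor-Green-like velocity field.
u = np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)
v = -np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
w = np.zeros_like(u)

def grad(f):
    """Central differences with periodic boundaries, one array per direction."""
    return [(np.roll(f, -1, axis=a) - np.roll(f, 1, axis=a)) / (2 * dx) for a in range(3)]

vel_grads = [grad(u), grad(v), grad(w)]   # vel_grads[i][j] = d(u_i)/d(x_j)

# Strain-rate tensor S_ij = 0.5 * (du_i/dx_j + du_j/dx_i), magnitude |S| = sqrt(2 S_ij S_ij).
S2 = np.zeros_like(u)
for i in range(3):
    for j in range(3):
        S_ij = 0.5 * (vel_grads[i][j] + vel_grads[j][i])
        S2 += 2.0 * S_ij ** 2
nu_t = (cs * dx) ** 2 * np.sqrt(S2)
print("mean eddy viscosity:", float(nu_t.mean()))
```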
A software-based sensor for combined sewer overflows.
Leonhardt, G; Fach, S; Engelhard, C; Kinzel, H; Rauch, W
2012-01-01
A new methodology for online estimation of excess flow from combined sewer overflow (CSO) structures based on simulation models is presented. If sufficient flow and water level data from the sewer system are available, no rainfall data are needed to run the model. An inverse rainfall-runoff model was developed to simulate net rainfall based on flow and water level data. Excess flow at all CSO structures in a catchment can then be simulated with a rainfall-runoff model. The method is applied to a case study and the results show that the inverse rainfall-runoff model can be used instead of missing rain gauges. Online operation is ensured by software providing an interface to the SCADA system of the operator and controlling the model. A water quality model could be included to also simulate pollutant concentrations in the excess flow.
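As a conceptual illustration of the inverse idea, the sketch below uses a single linear reservoir (storage proportional to outflow) for which net rainfall can be back-calculated from an observed outflow series via the water balance. The reservoir constant and time step are assumed, and this stand-in is far simpler than the paper's inverse rainfall-runoff model.

```python
# Minimal sketch of the inverse idea using a single linear reservoir (S = k * Q):
# from the water balance dS/dt = p - Q, net rainfall is p = k * dQ/dt + Q.
# Only a conceptual stand-in for the paper's inverse rainfall-runoff model.
import numpy as np

dt = 300.0                     # time step, s (assumed 5-minute data)
k = 3600.0                     # reservoir constant, s (assumed)

# Forward test: generate an outflow series from a known net-rainfall pulse.
p_true = np.zeros(100)
p_true[10:20] = 5.0e-6         # net rainfall rate, m/s over the catchment (assumed)
Q = np.zeros_like(p_true)
for t in range(1, len(p_true)):
    # explicit step of dQ/dt = (p - Q) / k
    Q[t] = Q[t - 1] + dt * (p_true[t - 1] - Q[t - 1]) / k

# Inverse step: recover net rainfall from the "observed" outflow alone.
dQdt = np.gradient(Q, dt)
p_est = k * dQdt + Q

print("peak true rainfall:", p_true.max(), " peak recovered:", float(p_est.max()))
```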
Dynamic stability of passive dynamic walking on an irregular surface.
Su, Jimmy Li-Shin; Dingwell, Jonathan B
2007-12-01
Falls that occur during walking are a significant health problem. One of the greatest impediments to solve this problem is that there is no single obviously "correct" way to quantify walking stability. While many people use variability as a proxy for stability, measures of variability do not quantify how the locomotor system responds to perturbations. The purpose of this study was to determine how changes in walking surface variability affect changes in both locomotor variability and stability. We modified an irreducibly simple model of walking to apply random perturbations that simulated walking over an irregular surface. Because the model's global basin of attraction remained fixed, increasing the amplitude of the applied perturbations directly increased the risk of falling in the model. We generated ten simulations of 300 consecutive strides of walking at each of six perturbation amplitudes ranging from zero (i.e., a smooth continuous surface) up to the maximum level the model could tolerate without falling over. Orbital stability defines how a system responds to small (i.e., "local") perturbations from one cycle to the next and was quantified by calculating the maximum Floquet multipliers for the model. Local stability defines how a system responds to similar perturbations in real time and was quantified by calculating short-term and long-term local exponential rates of divergence for the model. As perturbation amplitudes increased, no changes were seen in orbital stability (r²=2.43%; p=0.280) or long-term local instability (r²=1.0%; p=0.441). These measures essentially reflected the fact that the model never actually "fell" during any of our simulations. Conversely, the variability of the walker's kinematics increased exponentially (r²≥99.6%; p<0.001) and short-term local instability increased linearly (r²=88.1%; p<0.001). These measures thus predicted the increased risk of falling exhibited by the model. For all simulated conditions, the walker remained orbitally stable, while exhibiting substantial local instability. This was because very small initial perturbations diverged away from the limit cycle, while larger initial perturbations converged toward the limit cycle. These results provide insight into how these different proposed measures of walking stability are related to each other and to risk of falling.
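To illustrate how the orbital-stability measure is typically obtained, the sketch below estimates the maximum Floquet multiplier by fitting a linear map to stride-to-stride deviations at a Poincaré section; the stride data here are synthetic autoregressive states, not output of the walking model, and the fitting details of the study may differ.

```python
# Illustrative sketch: estimate the maximum Floquet multiplier from stride-to-stride
# (Poincare-section) states by fitting a linear map around the mean cycle.
# The "walker" states here are synthetic AR(1) data, not output of the walking model.
import numpy as np

rng = np.random.default_rng(2)
n_strides, n_states = 300, 4

# Synthetic stride-to-stride dynamics: x_{k+1} = A x_k + noise, spectral radius < 1.
A_true = 0.6 * np.linalg.qr(rng.standard_normal((n_states, n_states)))[0]
x = np.zeros((n_strides, n_states))
for k in range(n_strides - 1):
    x[k + 1] = A_true @ x[k] + 0.05 * rng.standard_normal(n_states)

# Deviations from the mean "limit cycle" state at the section.
dx = x - x.mean(axis=0)

# Least-squares fit of the Jacobian J in dx_{k+1} ~ J dx_k.
M, *_ = np.linalg.lstsq(dx[:-1], dx[1:], rcond=None)
J = M.T
multipliers = np.abs(np.linalg.eigvals(J))
print("maximum Floquet multiplier estimate:", multipliers.max())
```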
Heat Transfer Model for Hot Air Balloons
NASA Astrophysics Data System (ADS)
Llado-Gambin, Adriana
A heat transfer model and analysis for hot air balloons is presented in this work, backed with a flow simulation using SolidWorks. The objective is to understand the major heat losses in the balloon and to identify the parameters that most affect its flight performance. Results show that more than 70% of the heat losses are due to the emitted radiation from the balloon envelope and that convection losses represent around 20% of the total. A simulated heating source is also included in the modeling, based on typical thermal input from a balloon propane burner. The burner duty cycle needed to keep a constant altitude can vary from 10% to 28% depending on the atmospheric conditions, and the ambient temperature is the parameter that most affects the total thermal input needed. The simulation and analysis also predict that the gas temperature inside the balloon decreases at a rate of -0.25 K/s when there is no burner activity, and it increases at a rate of +1 K/s when the balloon pilot operates the burner. The results were compared to actual flight data and show very good agreement, indicating that the major physical processes responsible for balloon performance aloft are accurately captured in the simulation.
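A minimal lumped heat-balance sketch of the kind of model described, with radiative and convective losses and a burner duty cycle, is given below; all areas, coefficients, and the burner power are assumed round numbers rather than values from this work.

```python
# Minimal lumped heat-balance sketch for a hot air balloon envelope volume:
# dT/dt = (duty * Q_burner - Q_radiation - Q_convection) / (m * cp).
# All coefficients below are assumed round numbers, not values from this work.
sigma = 5.67e-8          # Stefan-Boltzmann constant, W m-2 K-4
T_amb = 288.0            # ambient temperature, K
A_env = 1000.0           # envelope surface area, m2 (assumed)
eps = 0.85               # envelope emissivity (assumed)
h = 4.0                  # external convective coefficient, W m-2 K-1 (assumed)
m_air, cp = 3000.0, 1005.0   # heated air mass (kg) and specific heat (J kg-1 K-1)
Q_burner = 3.0e6         # burner thermal input when firing, W (assumed)

def step(T, duty, dt=1.0):
    q_rad = eps * sigma * A_env * (T ** 4 - T_amb ** 4)
    q_conv = h * A_env * (T - T_amb)
    dT = (duty * Q_burner - q_rad - q_conv) / (m_air * cp)
    return T + dT * dt

T = 370.0                # initial internal air temperature, K
for t in range(120):     # two minutes: burner firing 20% of the time
    T = step(T, duty=1.0 if (t % 10) < 2 else 0.0)
print("temperature after 2 min of 20%% duty cycle: %.1f K" % T)
```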
NASA Astrophysics Data System (ADS)
Maute, A.; Hagan, M. E.; Richmond, A. D.; Roble, R. G.
2014-02-01
This modeling study quantifies the daytime low-latitude vertical E×B drift changes in the longitudinal wave number 1 (wn1) to wn4 during the major extended January 2006 stratospheric sudden warming (SSW) period as simulated by the National Center for Atmospheric Research thermosphere-ionosphere-mesosphere electrodynamics general circulation model (TIME-GCM), and attributes the drift changes to specific tides and planetary waves (PWs). The largest drift amplitude change (approximately 5 m/s) is seen in wn1 with a strong temporal correlation to the SSW. The wn1 drift is primarily caused by the semidiurnal westward propagating tide with zonal wave number 1 (SW1), and secondarily by a stationary planetary wave with zonal wave number 1 (PW1). SW1 is generated by the nonlinear interaction of PW1 and the migrating semidiurnal tide (SW2) at high latitude around 90-100 km. The simulations suggest that the E region PW1 around 100-130 km at the different latitudes has different origins: at high latitudes, the PW1 is related to the original stratospheric PW1; at midlatitudes, the model indicates PW1 is due to the nonlinear interaction of SW1 and SW2 around 95-105 km; and at low latitudes, the PW1 might be caused by the nonlinear interaction between DE2 and DE3. The time evolution of the simulated wn4 in the vertical E×B drift amplitude shows no temporal correlation with the SSW. The wn4 in the low-latitude vertical drift is attributed to the diurnal eastward propagating tide with zonal wave number 3 (DE3), and the contributions from SE2, TE1, and PW4 are negligible.
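For readers unfamiliar with the wave-number decomposition, the sketch below shows how wn1-wn4 amplitudes can be extracted from a longitude-dependent field with a Fourier transform; the drift values are synthetic, not TIME-GCM output.

```python
# Minimal sketch of extracting longitudinal wave number (wn1-wn4) amplitudes from a
# field sampled in longitude, as done for the vertical ExB drift; the drift values
# here are synthetic, not TIME-GCM output.
import numpy as np

lon = np.arange(0, 360, 5.0)                      # longitude grid, degrees
phi = np.deg2rad(lon)
# Synthetic daytime drift (m/s): mean + wn1 + wn4 components plus noise.
drift = (20 + 5.0 * np.cos(phi - 1.0) + 2.0 * np.cos(4 * phi + 0.3)
         + 0.5 * np.random.default_rng(6).standard_normal(lon.size))

spec = np.fft.rfft(drift) / lon.size
for k in range(1, 5):
    amp = 2 * np.abs(spec[k])                     # amplitude of zonal wave number k
    print(f"wn{k} amplitude: {amp:.2f} m/s")
```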
MERCURY SPECIATION IN COMBUSTION SYSTEMS: STUDIES WITH SIMULATED FLUE GASES AND MODEL FLY ASHES
The paper gives results of a bench-scale study of the effects of flue gas and fly ash parameters on the oxidation of elemental mercury in simulated flue gases containing hydrogen chloride (HCl), nitric oxide (NO), nitrogen dioxide (NO2), sulfur dioxide (SO2), and water vapor (H2O...
NASA Astrophysics Data System (ADS)
Liu, Yiming; Fan, Qi; Chen, Xiaoyang; Zhao, Jun; Ling, Zhenhao; Hong, Yingying; Li, Weibiao; Chen, Xunlai; Wang, Mingjie; Wei, Xiaolin
2018-02-01
Chlorine radicals can enhance atmospheric oxidation, which potentially increases tropospheric ozone concentration. However, few studies have been done to quantify the impact of chlorine emissions on ozone formation in China due to the lack of a chlorine emission inventory used in air quality models with sufficient resolution. In this study, the Anthropogenic Chlorine Emissions Inventory for China (ACEIC) was developed for the first time, including emissions of hydrogen chloride (HCl) and molecular chlorine (Cl2) from coal combustion and prescribed waste incineration (waste incineration plant). The HCl and Cl2 emissions from coal combustion in China in 2012 were estimated to be 232.9 and 9.4 Gg, respectively, while HCl emission from prescribed waste incineration was estimated to be 2.9 Gg. Spatially, the highest emissions of HCl and Cl2 were found in the North China Plain, the Yangtze River Delta, and the Sichuan Basin. Air quality model simulations with the Community Multiscale Air Quality (CMAQ) modeling system were performed for November 2011, and the modeling results derived with and without chlorine emissions were compared. The magnitude of the simulated HCl, Cl2 and ClNO2 agreed reasonably with the observation when anthropogenic chlorine emissions were included in the model. The inclusion of the ACEIC increased the concentration of fine particulate Cl-, leading to enhanced heterogeneous reactions between Cl- and N2O5, which resulted in the higher production of ClNO2. Photolysis of ClNO2 and Cl2 in the morning and the reaction of HCl with OH in the afternoon produced chlorine radicals which accelerated tropospheric oxidation. When anthropogenic chlorine emissions were included in the model, the monthly mean concentrations of fine particulate Cl-, daily maximum 1 h ClNO2, and Cl radicals were estimated to increase by up to about 2.0 µg m-3, 773 pptv, and 1.5 × 10^3 molecule cm-3 in China, respectively. Meanwhile, the monthly mean daily maximum 8 h O3 concentration was found to increase by up to 2.0 ppbv (4.1 %), while the monthly mean NOx concentration decreased by up to 0.5 ppbv (6.1 %). The anthropogenic chlorine emissions potentially increased the 1 h O3 concentration by up to 7.7 ppbv in China. This study highlights the need for the inclusion of anthropogenic chlorine emission in air quality modeling and demonstrates its importance in tropospheric ozone formation.
Relation of landslides triggered by the Kiholo Bay earthquake to modeled ground motion
Harp, Edwin L.; Hartzell, Stephen H.; Jibson, Randall W.; Ramirez-Guzman, L.; Schmitt, Robert G.
2014-01-01
The 2006 Kiholo Bay, Hawaii, earthquake triggered high concentrations of rock falls and slides in the steep canyons of the Kohala Mountains along the north coast of Hawaii. Within these mountains and canyons a complex distribution of landslides was triggered by the earthquake shaking. In parts of the area, landslides were preferentially located on east‐facing slopes, whereas in other parts of the canyons no systematic pattern prevailed with respect to slope aspect or vertical position on the slopes. The geology within the canyons is homogeneous, so we hypothesize that the variable landslide distribution is the result of localized variation in ground shaking; therefore, we used a state‐of‐the‐art, high‐resolution ground‐motion simulation model to see if it could reproduce the landslide‐distribution patterns. We used a 3D finite‐element analysis to model earthquake shaking using a 10 m digital elevation model and slip on a finite‐fault model constructed from teleseismic records of the mainshock. Ground velocity time histories were calculated up to a frequency of 5 Hz. Dynamic shear strain also was calculated and compared with the landslide distribution. Results were mixed for the velocity simulations, with some areas showing correlation of landslide locations with peak modeled ground motions but many other areas showing no such correlation. Results were much improved for the comparison with dynamic shear strain. This suggests that (1) rock falls and slides are possibly triggered by higher frequency ground motions (velocities) than those in our simulations, (2) the ground‐motion velocity model needs more refinement, or (3) dynamic shear strain may be a more fundamental measurement of the decoupling process of slope materials during seismic shaking.
NASA Astrophysics Data System (ADS)
Lambe, Andrew; Massoli, Paola; Zhang, Xuan; Canagaratna, Manjula; Nowak, John; Daube, Conner; Yan, Chao; Nie, Wei; Onasch, Timothy; Jayne, John; Kolb, Charles; Davidovits, Paul; Worsnop, Douglas; Brune, William
2017-06-01
Oxidation flow reactors that use low-pressure mercury lamps to produce hydroxyl (OH) radicals are an emerging technique for studying the oxidative aging of organic aerosols. Here, ozone (O3) is photolyzed at 254 nm to produce O(1D) radicals, which react with water vapor to produce OH. However, the need to use parts-per-million levels of O3 hinders the ability of oxidation flow reactors to simulate NOx-dependent secondary organic aerosol (SOA) formation pathways. Simple addition of nitric oxide (NO) results in fast conversion of NOx (NO + NO2) to nitric acid (HNO3), making it impossible to sustain NOx at levels that are sufficient to compete with hydroperoxy (HO2) radicals as a sink for organic peroxy (RO2) radicals. We developed a new method that is well suited to the characterization of NOx-dependent SOA formation pathways in oxidation flow reactors. NO and NO2 are produced via the reaction O(1D) + N2O → 2NO, followed by the reaction NO + O3 → NO2 + O2. Laboratory measurements coupled with photochemical model simulations suggest that O(1D) + N2O reactions can be used to systematically vary the relative branching ratio of RO2 + NO reactions relative to RO2 + HO2 and/or RO2 + RO2 reactions over a range of conditions relevant to atmospheric SOA formation. We demonstrate proof of concept using high-resolution time-of-flight chemical ionization mass spectrometer (HR-ToF-CIMS) measurements with nitrate (NO3-) reagent ion to detect gas-phase oxidation products of isoprene and α-pinene previously observed in NOx-influenced environments and in laboratory chamber experiments.
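The competition described here can be summarized by the branching fraction k_NO[NO] / (k_NO[NO] + k_HO2[HO2]) for the RO2 fate. The back-of-envelope sketch below evaluates this fraction for a few NO levels using assumed, typical-order rate constants and an assumed HO2 concentration; it is illustrative only and not taken from the study.

```python
# Back-of-envelope sketch of the RO2 fate branching described above:
# fraction reacting with NO = k_NO[NO] / (k_NO[NO] + k_HO2[HO2]).
# Rate constants and radical levels below are assumed, typical-order values only.
k_NO = 9.0e-12    # cm3 molecule-1 s-1, generic RO2 + NO (assumed)
k_HO2 = 1.5e-11   # cm3 molecule-1 s-1, generic RO2 + HO2 (assumed)

def ppb_to_conc(ppb, M=2.46e19):
    """Convert a ppb mixing ratio to molecule cm-3 at ~298 K, 1 atm."""
    return ppb * 1e-9 * M

HO2 = 1.0e8                           # molecule cm-3 (assumed)
for no_ppb in (0.01, 0.1, 1.0, 10.0):
    NO = ppb_to_conc(no_ppb)
    f_NO = k_NO * NO / (k_NO * NO + k_HO2 * HO2)
    print(f"NO = {no_ppb:5.2f} ppb -> fraction of RO2 reacting with NO: {f_NO:.2f}")
```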
NASA Technical Reports Server (NTRS)
Lambe, Andrew; Massoli, Paola; Zhang, Xuan; Canagaratna, Manjula; Nowak, John; Daube, Conner; Yan, Chao; Nie, Wei; Onasch, Timothy; Jayne, John;
2017-01-01
Oxidation flow reactors that use low-pressure mercury lamps to produce hydroxyl (OH) radicals are an emerging technique for studying the oxidative aging of organic aerosols. Here, ozone (O3) is photolyzed at 254 nm to produce O(1D) radicals, which react with water vapor to produce OH. However, the need to use parts-per-million levels of O3 hinders the ability of oxidation flow reactors to simulate NOx-dependent secondary organic aerosol (SOA) formation pathways. Simple addition of nitric oxide (NO) results in fast conversion of NOx (NO + NO2) to nitric acid (HNO3), making it impossible to sustain NOx at levels that are sufficient to compete with hydroperoxy (HO2) radicals as a sink for organic peroxy (RO2) radicals. We developed a new method that is well suited to the characterization of NOx-dependent SOA formation pathways in oxidation flow reactors. NO and NO2 are produced via the reaction O(1D) + N2O → 2NO, followed by the reaction NO + O3 → NO2 + O2. Laboratory measurements coupled with photochemical model simulations suggest that O(1D) + N2O reactions can be used to systematically vary the relative branching ratio of RO2 + NO reactions relative to RO2 + HO2 and/or RO2 + RO2 reactions over a range of conditions relevant to atmospheric SOA formation. We demonstrate proof of concept using high-resolution time-of-flight chemical ionization mass spectrometer (HR-ToF-CIMS) measurements with nitrate (NO3-) reagent ion to detect gas-phase oxidation products of isoprene and α-pinene previously observed in NOx-influenced environments and in laboratory chamber experiments.
1D-3D hybrid modeling—from multi-compartment models to full resolution models in space and time
Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M.; Queisser, Gillian
2014-01-01
Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail, in order to cope with the attached computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. Every approach has its advantages and limitations, such as computational cost, but integrated and methods-spanning simulation approaches, chosen according to network size, could establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models (using, e.g., the NEURON simulator) coupled to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D and 3D simulations, we present a geometry, membrane potential, and intracellular concentration mapping framework with which graph-based morphologies, e.g., in the swc or hoc format, are mapped to full surface and volume representations of the neuron, and computational data from 1D simulations can be used as boundary conditions for full 3D simulations and vice versa. Thus, established models and data based on general purpose 1D simulators can be directly coupled to the emerging field of fully resolved, highly detailed 3D modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics. PMID:25120463
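As a toy illustration of the 1D-to-3D mapping idea, the sketch below reads standard swc columns and assigns each vertex of a synthetic surface mesh the value carried by its nearest 1D morphology node, so that 1D membrane potentials could serve as boundary data for a 3D simulation. The mesh, potentials, and helper functions are invented for illustration and do not reflect the framework's actual interface.

```python
# Minimal sketch of mapping swc morphology nodes to nearest surface-mesh vertices so
# that 1D simulation values (e.g., membrane potential per node) can serve as boundary
# data for a 3D simulation. The swc columns are the standard seven; the surface mesh
# and potentials below are synthetic, and this is not the framework's actual interface.
import numpy as np

def read_swc(path):
    """Return node ids and xyz coordinates from an swc file
    (columns: id, type, x, y, z, radius, parent)."""
    rows = np.loadtxt(path, comments="#")
    return rows[:, 0].astype(int), rows[:, 2:5]

def map_to_surface(node_xyz, surface_xyz):
    """For each surface vertex, index of the nearest 1D morphology node."""
    d2 = ((surface_xyz[:, None, :] - node_xyz[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# Synthetic example: 1D nodes along a straight "neurite" and a tubular surface mesh.
node_xyz = np.column_stack([np.linspace(0, 100, 51), np.zeros(51), np.zeros(51)])
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
surface_xyz = np.array([[s, np.cos(a), np.sin(a)]
                        for s in np.linspace(0, 100, 51) for a in theta])

v_1d = -70.0 + 0.1 * node_xyz[:, 0]          # fake membrane potentials per 1D node (mV)
nearest = map_to_surface(node_xyz, surface_xyz)
v_surface = v_1d[nearest]                     # boundary values for the 3D simulation
print("surface vertices:", len(v_surface), " potential range:",
      float(v_surface.min()), "to", float(v_surface.max()))
```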
Díaz-González, Lorena; Quiroz-Ruiz, Alfredo
2014-01-01
Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15) for normal samples of sizes 5 to 20. Statistical contaminations of a single observation resulting from parameters called δ from ±0.1 up to ±20 for modeling the slippage of central tendency or ε from ±1.1 up to ±200 for slippage of dispersion, as well as no contamination (δ = 0 and ε = ±1), were simulated. Because of the use of precise and accurate random and normally distributed simulated data, very large replications, and a large number of independent experiments, this paper presents a novel approach for precise and accurate estimations of power functions of four popular discordancy tests and, therefore, should not be considered as a simple simulation exercise unrelated to probability and statistics. From both criteria of the Power of Test proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests could be summarized as N2≅N15 > N14 > N8. PMID:24737992
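A much smaller-scale sketch of the simulation approach is given below: it estimates the power of a Grubbs-type single-outlier test by Monte Carlo, with the critical value itself obtained by simulation. Replication counts, sample size, and slippage values are illustrative and far below those used in the study.

```python
# Illustrative sketch of estimating the power of a Grubbs-type single-outlier test
# by Monte Carlo: far fewer replications than the 20,000,000 used in the study,
# and the critical value is itself simulated rather than taken from tables.
import numpy as np

rng = np.random.default_rng(3)

def grubbs_stat(x):
    """Two-sided Grubbs statistic: largest absolute deviation in standard-deviation units."""
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def critical_value(n, alpha=0.05, reps=20000):
    stats = np.array([grubbs_stat(rng.standard_normal(n)) for _ in range(reps)])
    return np.quantile(stats, 1 - alpha)

def power(n, delta, crit, reps=20000):
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        x[-1] += delta          # slippage of central tendency for one observation
        hits += grubbs_stat(x) > crit
    return hits / reps

n = 10
crit = critical_value(n)
for delta in (1, 2, 4, 8):
    print(f"n={n}, delta={delta}: estimated power = {power(n, delta, crit):.2f}")
```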
Verma, Surendra P; Díaz-González, Lorena; Rosales-Rivera, Mauricio; Quiroz-Ruiz, Alfredo
2014-01-01
Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15) for normal samples of sizes 5 to 20. Statistical contaminations of a single observation resulting from parameters called δ from ±0.1 up to ±20 for modeling the slippage of central tendency or ε from ±1.1 up to ±200 for slippage of dispersion, as well as no contamination (δ = 0 and ε = ±1), were simulated. Because of the use of precise and accurate random and normally distributed simulated data, very large replications, and a large number of independent experiments, this paper presents a novel approach for precise and accurate estimations of power functions of four popular discordancy tests and, therefore, should not be considered as a simple simulation exercise unrelated to probability and statistics. From both criteria of the Power of Test proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests could be summarized as N2≅N15 > N14 > N8.
Fabian, P; Adamkiewicz, G; Levy, J I
2012-02-01
Residents of low-income multifamily housing can have elevated exposures to multiple environmental pollutants known to influence asthma. Simulation models can characterize the health implications of changing indoor concentrations, but quantifying the influence of interventions on concentrations is challenging given complex airflow and source characteristics. In this study, we simulated concentrations in a prototype multifamily building using CONTAM, a multizone airflow and contaminant transport program. Contaminants modeled included PM2.5 and NO2, and parameters included stove use, presence and operability of exhaust fans, smoking, unit level, and building leakiness. We developed regression models to explain variability in CONTAM outputs for individual sources, in a manner that could be utilized in simulation modeling of health outcomes. To evaluate our models, we generated a database of 1000 simulated households with characteristics consistent with Boston public housing developments and residents and compared the predicted levels of NO2 and PM2.5 and their correlates with the literature. Our analyses demonstrated that CONTAM outputs could be readily explained by available parameters (R² between 0.89 and 0.98 across models), but that one-compartment box models would mischaracterize concentrations and source contributions. Our study quantifies the key drivers for indoor concentrations in multifamily housing and helps to identify opportunities for interventions. Many low-income urban asthmatics live in multifamily housing that may be amenable to ventilation-related interventions such as weatherization or air sealing, wall and ceiling hole repairs, and exhaust fan installation or repair, but such interventions must be designed carefully given their cost and their offsetting effects on energy savings as well as indoor and outdoor pollutants. We developed models to take into account the complex behavior of airflow patterns in multifamily buildings, which can be used to identify and evaluate environmental and non-environmental interventions targeting indoor air pollutants that can trigger asthma exacerbations. © 2011 John Wiley & Sons A/S.
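The sketch below illustrates the general idea of a regression meta-model for simulated indoor concentrations: fit log concentration against household parameters so that downstream health simulations do not need to rerun the airflow model. All predictors and "simulated" concentrations are synthetic, not CONTAM output, and the model form is assumed.

```python
# Illustrative sketch of a regression meta-model for simulated indoor concentrations:
# fit ln(concentration) against household parameters so that downstream health models
# need not rerun the airflow simulations. All data here are synthetic, not CONTAM output.
import numpy as np

rng = np.random.default_rng(4)
n = 1000

# Hypothetical predictors: air-exchange rate (1/h), stove use (h/day),
# exhaust fan present (0/1), unit floor level.
ach = rng.uniform(0.2, 2.0, n)
stove_hours = rng.uniform(0.0, 3.0, n)
fan = rng.integers(0, 2, n)
floor = rng.integers(1, 5, n)

# Synthetic "simulated" NO2 concentration with multiplicative structure plus noise.
log_conc = (3.0 + 0.8 * np.log(stove_hours + 0.1) - 0.9 * np.log(ach)
            - 0.4 * fan + 0.05 * floor + 0.1 * rng.standard_normal(n))

X = np.column_stack([np.ones(n), np.log(stove_hours + 0.1), np.log(ach), fan, floor])
beta, *_ = np.linalg.lstsq(X, log_conc, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((log_conc - pred) ** 2) / np.sum((log_conc - log_conc.mean()) ** 2)
print("fitted coefficients:", np.round(beta, 2), " R^2 =", round(r2, 3))
```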
Importance of Winds and Soil Moistures to the US Summertime Drought of 1988: A GCM Simulation Study
NASA Technical Reports Server (NTRS)
Mocko, David M.; Sud, Y. C.; Lau, William K. M. (Technical Monitor)
2001-01-01
The climate version of NASA's GEOS 2 GCM did not simulate a realistic 1988 summertime drought in the central United States (Mocko et al., 1999). Despite several new upgrades to the model's parameterizations, as well as finer grid spacing from 4x5 degrees to 2x2.5 degrees, no significant improvements were noted in the model's simulation of the U.S. drought.
Virtual reality simulation training for health professions trainees in gastrointestinal endoscopy.
Walsh, Catharine M; Sherlock, Mary E; Ling, Simon C; Carnahan, Heather
2012-06-13
Traditionally, training in gastrointestinal endoscopy has been based upon an apprenticeship model, with novice endoscopists learning basic skills under the supervision of experienced preceptors in the clinical setting. Over the last two decades, however, the growing awareness of the need for patient safety has brought the issue of simulation-based training to the forefront. While the use of simulation-based training may have important educational and societal advantages, the effectiveness of virtual reality gastrointestinal endoscopy simulators has yet to be clearly demonstrated. To determine whether virtual reality simulation training can supplement and/or replace early conventional endoscopy training (apprenticeship model) in diagnostic oesophagogastroduodenoscopy, colonoscopy and/or sigmoidoscopy for health professions trainees with limited or no prior endoscopic experience. Health professions, educational and computer databases were searched until November 2011 including The Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, Scopus, Web of Science, Biosis Previews, CINAHL, Allied and Complementary Medicine Database, ERIC, Education Full Text, CBCA Education, Career and Technical Education @ Scholars Portal, Education Abstracts @ Scholars Portal, Expanded Academic ASAP @ Scholars Portal, ACM Digital Library, IEEE Xplore, Abstracts in New Technologies and Engineering and Computer & Information Systems Abstracts. The grey literature until November 2011 was also searched. Randomised and quasi-randomised clinical trials comparing virtual reality endoscopy (oesophagogastroduodenoscopy, colonoscopy and sigmoidoscopy) simulation training versus any other method of endoscopy training including conventional patient-based training, in-job training, training using another form of endoscopy simulation (e.g. low-fidelity simulator), or no training (however defined by authors) were included. Trials comparing one method of virtual reality training versus another method of virtual reality training (e.g. comparison of two different virtual reality simulators) were also included. Only trials measuring outcomes on humans in the clinical setting (as opposed to animals or simulators) were included. Two authors (CMS, MES) independently assessed the eligibility and methodological quality of trials, and extracted data on the trial characteristics and outcomes. Due to significant clinical and methodological heterogeneity it was not possible to pool study data in order to perform a meta-analysis. Where data were available for each continuous outcome we calculated standardized mean difference with 95% confidence intervals based on intention-to-treat analysis. Where data were available for dichotomous outcomes we calculated relative risk with 95% confidence intervals based on intention-to-treat-analysis. Thirteen trials, with 278 participants, met the inclusion criteria. Four trials compared simulation-based training with conventional patient-based endoscopy training (apprenticeship model) whereas nine trials compared simulation-based training with no training. Only three trials were at low risk of bias. Simulation-based training, as compared with no training, generally appears to provide participants with some advantage over their untrained peers as measured by composite score of competency, independent procedure completion, performance time, independent insertion depth, overall rating of performance or competency error rate and mucosal visualization. 
In contrast, there was no conclusive evidence that simulation-based training was superior to conventional patient-based training, although data were limited. The results of this systematic review indicate that virtual reality endoscopy training can be used to effectively supplement early conventional endoscopy training (apprenticeship model) in diagnostic oesophagogastroduodenoscopy, colonoscopy and/or sigmoidoscopy for health professions trainees with limited or no prior endoscopic experience. However, there remains insufficient evidence to advise for or against the use of virtual reality simulation-based training as a replacement for early conventional endoscopy training (apprenticeship model) for health professions trainees with limited or no prior endoscopic experience. There is a great need for the development of a reliable and valid measure of endoscopic performance prior to the completion of further randomised clinical trials with high methodological quality.
Clinical value of hemodynamic numerical simulation applied in the treatment of cerebral aneurysm.
Zhang, Hailin; Li, Li; Cheng, Chongjie; Sun, Xiaochuan
2017-12-01
Our objective was to evaluate the clinical value of numerical simulation in diagnosing cerebral aneurysm, based on the analysis of a numerical hemodynamic model. The experimental method used a numerical model of cerebral aneurysm hemodynamics, and the numerical values of blood flow at each point were analyzed. The results showed that the wall shear stress (WSS) value at the top of CA1 was significantly lower than that of the tumor neck (P<0.05), and the WSS value at each point on the CA2 tumor was significantly lower than that of the tumor neck (P<0.05); the pressure values at the tumor top and tumor neck showed no significant difference between CA1 and CA2 (P>0.05); the unsteady index of shear (UIS) value at the 20 points measured changed distinctly, with a range of 0.6-1.5; and the unsteady index of pressure value at every point was significantly lower than the UIS value, with a range of 0.25-0.40. In conclusion, the application of cerebral aneurysm hemodynamics research can help doctors diagnose cerebral aneurysm more precisely and grasp the opportunity for treatment when formulating treatment strategies.
Zarriello, Phillip J.; Bent, Gardner C.
2004-01-01
The 36.1-square-mile Usquepaug-Queen River Basin in south-central Rhode Island is an important water resource. Streamflow records indicate that withdrawals may have diminished flows enough to affect aquatic habitat. Concern over the effect of withdrawals on streamflow and aquatic habitat prompted the development of a Hydrologic Simulation Program-FORTRAN (HSPF) model to evaluate the water-management alternatives and land-use change in the basin. Climate, streamflow, and water-use data were collected to support the model development. A logistic-regression equation was developed for long-term simulations to predict the likelihood of irrigation, the primary water use in the basin, from antecedent potential evapotranspiration and precipitation for generating irrigation demands. The HSPF model represented the basin by 13 pervious-area and 2 impervious-area land-use segments and 20 stream reaches. The model was calibrated to the period January 1, 2000 to September 30, 2001, at three continuous streamflow-gaging stations that monitor flow from 10, 54, and 100 percent of the basin drainage area. Hydrographs and flow-duration curves of observed and simulated discharges, along with statistics compiled for various model-fit metrics, indicate a satisfactory model performance. The calibrated HSPF model was modified to evaluate streamflow (1) under no withdrawals relative to streamflow under current (2000-01) withdrawal conditions under long-term (1960-2001) climatic conditions, (2) under withdrawals by the former Ladd School water-supply wells, and (3) under fully developed land use. The effects of converting from direct-stream withdrawals to ground-water withdrawals were evaluated outside of the HSPF model by use of the STRMDEPL program, which calculates the time-delayed effect of ground-water withdrawals on streamflow depletion. Simulated effects of current withdrawals relative to no withdrawals indicate about a 20-percent decrease in the lowest mean daily streamflows at the basin outlet, but withdrawals have little effect on flows that are exceeded less than about 90 percent of the time. Tests of alternative model structures to evaluate model uncertainty indicate that the lowest mean daily flows ranged between 3 and 5 cubic feet per second (ft3/s) without withdrawals and 2.2 to 4 ft3/s with withdrawals. Changes in the minimum daily streamflows are more pronounced, however; at the upstream streamflow-gaging station, a minimum daily flow of 0.2 ft3/s was sustained without withdrawals, but simulations with withdrawals indicate that the reach would stop flowing part of a day about 5 percent of the time. The effects on streamflow of potential ground-water withdrawals of 0.20, 0.90, and 1.78 million gallons per day (Mgal/d) at the former Ladd School near the central part of the basin were evaluated. The lowest daily mean flows in model reach 3, the main stem of the Queen River closest to the pumped wells, decreased by about 50 percent for withdrawals of 0.20 Mgal/d (from about 0.4 to 0.2 ft3/s) in comparison to current withdrawals. Reach 3 would occasionally stop flowing during part of the day at the 0.20-Mgal/d withdrawal rate because of diurnal fluctuation in streamflow. The higher withdrawal rates (0.90 and 1.78 Mgal/d) would cause reach 3 to stop flowing about 10 to 20 percent of the time, but the effects of pumping rapidly diminished downstream because of tributary inflows.
Simulation results indicate little change in the annual 1-, 7-, and 30-day low flows at the 0.20 Mgal/d pumping rate, but at the 1.78 Mgal/d pumping rate, reach 3 stopped flowing for nearly a 7-day period every year and for a 30-day period about every other year. At the 0.90 Mgal/d pumping rate, reach 3 stopped flowing about every other year for a 7-day period and about once every 5 years for a 30-day period. Land-use change was simulated by converting model hydrologic-response units (HRUs) representing undeveloped areas to HRUs representing developed areas o
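The logistic-regression relation mentioned in this record lends itself to a short sketch. The Python below uses hypothetical coefficients (the report's fitted values are not reproduced here) to show how antecedent potential evapotranspiration and precipitation would translate into a daily probability of irrigation for generating demands.

```python
import numpy as np

# Hedged sketch of a logistic-regression relation of the kind described in
# the abstract: probability of irrigation on a given day as a function of
# antecedent potential evapotranspiration (PET) and precipitation.
# The coefficients below are hypothetical, not the report's fitted values.
b0, b_pet, b_precip = -2.0, 0.15, -0.8   # hypothetical regression coefficients

def irrigation_probability(antecedent_pet_mm, antecedent_precip_mm):
    """Logistic model: p = 1 / (1 + exp(-(b0 + b1*PET + b2*P)))."""
    z = b0 + b_pet * antecedent_pet_mm + b_precip * antecedent_precip_mm
    return 1.0 / (1.0 + np.exp(-z))

# Dry, high-demand conditions vs. wet conditions (illustrative inputs)
print(irrigation_probability(antecedent_pet_mm=30.0, antecedent_precip_mm=0.0))
print(irrigation_probability(antecedent_pet_mm=10.0, antecedent_precip_mm=15.0))
```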
Sensitivity of Chemical Shift-Encoded Fat Quantification to Calibration of Fat MR Spectrum
Wang, Xiaoke; Hernando, Diego; Reeder, Scott B.
2015-01-01
Purpose To evaluate the impact of different fat spectral models on proton density fat-fraction (PDFF) quantification using chemical shift-encoded (CSE) MRI. Material and Methods Simulations and in vivo imaging were performed. In a simulation study, spectral models of fat were compared pairwise. Comparison of magnitude fitting and mixed fitting was performed over a range of echo times and fat fractions. In vivo acquisitions from 41 patients were reconstructed using 7 published spectral models of fat. T2-corrected STEAM-MRS was used as the reference. Results Simulations demonstrate that imperfectly calibrated spectral models of fat result in biases that depend on echo times and fat fraction. Mixed fitting is more robust against this bias than magnitude fitting. Multi-peak spectral models showed much smaller differences among themselves than when compared to the single-peak spectral model. In vivo studies show that all multi-peak models agree better with the reference standard (for mixed fitting, linear-regression slopes ranged from 0.967 to 1.045) than the single-peak model does (for mixed fitting, slope = 0.76). Conclusion It is essential to use a multi-peak fat model for accurate quantification of fat with CSE-MRI. Further, fat quantification techniques using multi-peak fat models are comparable and no specific choice of spectral model is shown to be superior to the rest. PMID:25845713
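To make the comparison of spectral models concrete, the Python sketch below simulates a multi-echo CSE signal using an approximate six-peak liver fat spectrum and contrasts it with a single-peak model. The peak shifts, relative amplitudes, field strength, echo times and relaxation value are illustrative assumptions, not the specific calibrations evaluated in the paper.

```python
import numpy as np

# Hedged sketch of the chemical shift-encoded (CSE) signal model with a
# multi-peak fat spectrum versus a single-peak model.  Values are
# approximate literature numbers used only for illustration.
gamma_bar = 42.577e6          # Hz/T
B0 = 3.0                      # field strength, T (assumed)
ppm = np.array([-3.80, -3.40, -2.60, -1.94, -0.39, 0.60])   # fat peaks rel. to water
amp = np.array([0.087, 0.693, 0.128, 0.004, 0.039, 0.048])  # relative amplitudes
amp = amp / amp.sum()
f_hz = ppm * 1e-6 * gamma_bar * B0

te = np.arange(1.2e-3, 1.2e-3 + 6 * 2.0e-3, 2.0e-3)  # six echo times (assumed)

def signal(fat_fraction, freqs, amps, r2star=50.0):
    """Complex CSE-MRI signal for a given fat fraction (0-1)."""
    w, f = 1.0 - fat_fraction, fat_fraction
    fat_term = np.sum(amps[None, :] * np.exp(2j * np.pi * freqs[None, :] * te[:, None]), axis=1)
    return (w + f * fat_term) * np.exp(-r2star * te)

true_ff = 0.20
s_multi = signal(true_ff, f_hz, amp)                                        # multi-peak "truth"
s_single = signal(true_ff, np.array([-3.40e-6 * gamma_bar * B0]), np.array([1.0]))

# The magnitude signals differ; this mismatch is the source of the
# fat-fraction bias that arises when a single-peak model is fit to
# multi-peak data, which the paper quantifies.
print(np.abs(s_multi) - np.abs(s_single))
```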
Application of OMI NO2 for Regional Air Quality Model Evaluation
NASA Astrophysics Data System (ADS)
Holloway, T.; Bickford, E.; Oberman, J.; Scotty, E.; Clifton, O. E.
2012-12-01
To support the application of satellite data for air quality analysis, we examine how column NO2 measurements from the Ozone Monitoring Instrument (OMI) aboard the NASA Aura satellite relate to ground-based and model estimates of NO2 and related species. Daily variability, monthly mean values, and spatial gradients in OMI NO2 from the Netherlands Royal Meteorological Institute (KNMI) are compared to ground-based measurements of NO2 from the EPA Air Quality System (AQS) database. Satellite data are gridded to two resolutions typical of regional air quality models - 36 km x 36 km over the continental U.S., and 12 km x 12 km over the Upper Midwestern U.S. Gridding is performed using the Wisconsin Horizontal Interpolation Program for Satellites (WHIPS), publicly available software for gridding satellite data to model grids. Comparing daily OMI retrievals (13:45 daytime local overpass time) with ground-based measurements (13:00), we find that January and July 2007 correlation coefficients (r-values) are generally positive, with values higher in the winter (January) than summer (July) for most sites. Instances of anti-correlation or low correlation are evaluated with model simulations from the U.S. EPA Community Multiscale Air Quality Model version 4.7 (CMAQ). OMI NO2 is also used to evaluate CMAQ output, and to compare performance metrics for CMAQ relative to AQS measurements. We compare simulated NO2 across both the U.S. and Midwest study domains with both OMI NO2 (total column CMAQ values, weighted with the averaging kernel) and with ground-based observations (lowest model layer CMAQ values). 2007 CMAQ simulations employ emissions from the Lake Michigan Air Directors Consortium (LADCO) and meteorology from the Weather Research and Forecasting (WRF) model. Over most of the U.S., CMAQ is too high in January relative to OMI NO2, but too low in January relative to AQS NO2. In contrast, CMAQ is too low in July relative to OMI NO2, but too high relative to AQS NO2. These biases are used to evaluate emission sources (and the importance of missing sources, such as lightning NOx), and to explain model performance for related secondary species, especially nitrate aerosol and ozone.
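The averaging-kernel weighting mentioned above can be sketched in a few lines. In the hypothetical Python example below, model partial columns are weighted by a retrieval averaging kernel before summation; the layer values and kernel are illustrative, not actual WHIPS or OMI products.

```python
import numpy as np

# Hedged sketch: making a model NO2 profile comparable to a satellite column
# retrieval by weighting each layer's partial column with the retrieval's
# averaging kernel before summing.  All numbers are illustrative.
partial_columns = np.array([3.0, 2.0, 1.0, 0.5, 0.2, 0.1])   # 1e15 molec/cm2 per layer (assumed)
averaging_kernel = np.array([0.6, 0.8, 1.0, 1.1, 1.2, 1.3])  # dimensionless (assumed)

model_column_raw = partial_columns.sum()
model_column_ak = np.sum(averaging_kernel * partial_columns)

print(f"Unweighted model column : {model_column_raw:.2f} x 1e15 molec/cm2")
print(f"AK-weighted model column: {model_column_ak:.2f} x 1e15 molec/cm2")
# The AK-weighted value is the quantity compared against the satellite column.
```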
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, J; Micka, J; Culberson, W
Purpose: To determine the in-air azimuthal anisotropy and in-water dose distribution for the 1 cm length of the CivaString {sup 103}Pd brachytherapy source through measurements and Monte Carlo (MC) simulations. American Association of Physicists in Medicine Task Group No. 43 (TG-43) dosimetry parameters were also determined for this source. Methods: The in-air azimuthal anisotropy of the source was measured with a NaI scintillation detector and simulated with the MCNP5 radiation transport code. Measured and simulated results were normalized to their respective mean values and compared. The TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function for this source were determined from LiF:Mg,Ti thermoluminescent dosimeter (TLD) measurements and MC simulations. The impact of {sup 103}Pd well-loading variability on the in-water dose distribution was investigated using MC simulations by comparing the dose distribution for a source model with four wells of equal strength to that for a source model with strengths increased by 1% for two of the four wells. Results: NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy showed that ≥95% of the normalized data were within 1.2% of the mean value. TLD measurements and MC simulations of the TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function agreed to within the experimental TLD uncertainties (k=2). MC simulations showed that a 1% variability in {sup 103}Pd well-loading resulted in changes of <0.1%, <0.1%, and <0.3% in the TG-43 dose-rate constant, radial dose distribution, and polar dose distribution, respectively. Conclusion: The CivaString source has a high degree of azimuthal symmetry as indicated by the NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy. TG-43 dosimetry parameters for this source were determined from TLD measurements and MC simulations. {sup 103}Pd well-loading variability results in minimal variations in the in-water dose distribution according to MC simulations. This work was partially supported by CivaTech Oncology, Inc. through an educational grant for Joshua Reed, John Micka, Wesley Culberson, and Larry DeWerd and through research support for Mark Rivard.
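For context, the quantities reported above enter the standard AAPM TG-43 line-source dose-rate formalism. The expression below is the general formalism, with the conventional reference point r0 = 1 cm, θ0 = π/2, and is not specific to the CivaString source or its measured values.

```latex
\dot{D}(r,\theta) = S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\qquad
G_L(r,\theta) = \frac{\beta}{L\, r \sin\theta},
```

where $S_K$ is the air-kerma strength, $\Lambda$ the dose-rate constant, $G_L$ the line-source geometry function ($\beta$ being the angle subtended at the point $(r,\theta)$ by the active length $L$), $g_L(r)$ the radial dose function and $F(r,\theta)$ the 2D anisotropy function.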
Llorens, Esther; Saaltink, Maarten W; Poch, Manel; García, Joan
2011-01-01
The performance and reliability of the CWM1-RETRASO model for simulating processes in horizontal subsurface flow constructed wetlands (HSSF CWs), and the relative contribution of different microbial reactions to organic matter (COD) removal in an HSSF CW treating urban wastewater, were evaluated. Various approaches with different influent configurations were simulated. According to the simulations, anaerobic processes were more widespread in the simulated wetland and contributed more to COD removal [72-79%] than anoxic [0-1%] and aerobic reactions [20-27%] did. In all the cases tested, the reaction that contributed most to COD removal was methanogenesis [58-73%]. All results provided by the model were consistent with the literature and with experimental field observations, suggesting good performance and reliability of CWM1-RETRASO. Given these good simulation predictions, CWM1-RETRASO is the first mechanistic model able to successfully simulate the processes described by the CWM1 model in HSSF CWs. Copyright © 2010 Elsevier Ltd. All rights reserved.
Jiang, Xiaoyan; Wiedinmyer, Christine; Carlton, Annmarie G
2012-11-06
This study presents a first attempt to investigate the roles of fire aerosols in ozone (O(3)) photochemistry using an online coupled meteorology-chemistry model, the Weather Research and Forecasting model with Chemistry (WRF-Chem). Four 1-month WRF-Chem simulations for August 2007, with and without fire emissions, were carried out to assess the sensitivity of O(3) predictions to the emissions and subsequent radiative feedbacks associated with large-scale fires in the Western United States (U.S.). Results show that decreases in planetary boundary layer height (PBLH) resulting from the radiative effects of fire aerosols and increases in emissions of nitrogen oxides (NO(x)) and volatile organic compounds (VOCs) from the fires tend to increase modeled O(3) concentrations near the source. Reductions in downward shortwave radiation reaching the surface and surface temperature due to fire aerosols cause decreases in biogenic isoprene emissions and J(NO(2)) photolysis rates, resulting in reductions in O(3) concentrations by as much as 15%. Thus, the results presented in this study imply that considering the radiative effects of fire aerosols may reduce O(3) overestimation by traditional photochemical models that do not consider fire-induced changes in meteorology; implementation of coupled meteorology-chemistry models is required to simulate the atmospheric chemistry impacted by large-scale fires.
El Niño/Southern Oscillation response to global warming
Latif, M.; Keenlyside, N. S.
2009-01-01
The El Niño/Southern Oscillation (ENSO) phenomenon, originating in the Tropical Pacific, is the strongest natural interannual climate signal and has widespread effects on the global climate system and the ecology of the Tropical Pacific. Any strong change in ENSO statistics will therefore have serious climatic and ecological consequences. Most global climate models do simulate ENSO, although large biases exist with respect to its characteristics. The ENSO response to global warming differs strongly from model to model and is thus highly uncertain. Some models simulate an increase in ENSO amplitude, others a decrease, and others virtually no change. Extremely strong changes constituting tipping point behavior are not simulated by any of the models. Nevertheless, some interesting changes in ENSO dynamics can be inferred from observations and model integrations. Although no tipping point behavior is envisaged in the physical climate system, smooth transitions in it may give rise to tipping point behavior in the biological, chemical, and even socioeconomic systems. For example, the simulated weakening of the Pacific zonal sea surface temperature gradient in the Hadley Centre model (with dynamic vegetation included) caused rapid Amazon forest die-back in the mid-twenty-first century, which in turn drove a nonlinear increase in atmospheric CO2, accelerating global warming. PMID:19060210
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tingwen; Dietiker, Jean -Francois; Rogers, William
2016-07-29
Both experimental tests and numerical simulations were conducted to investigate the fluidization behavior of a solid CO2 sorbent with a mean diameter of 100 μm and a density of about 480 kg/m3, which belongs to Geldart's Group A powders. A carefully designed fluidized bed facility was used to perform a series of experimental tests to study the flow hydrodynamics. Numerical simulations using the two-fluid model indicated that the grid resolution has a significant impact on the bed expansion and bubbling flow behavior. Due to the limited computational resource, no good grid-independent results were achieved using the standard models as far as the bed expansion is concerned. In addition, all simulations tended to under-predict the bubble size substantially. Effects of various model settings including both numerical and physical parameters have been investigated with no significant improvement observed. The latest filtered sub-grid drag model was then tested in the numerical simulations. Compared to the standard drag model, the filtered drag model with two markers not only predicted reasonable bed expansion but also yielded realistic bubbling behavior. As a result, a grid sensitivity study was conducted for the filtered sub-grid model and its applicability and limitations were discussed.
El Nino/Southern Oscillation response to global warming.
Latif, M; Keenlyside, N S
2009-12-08
The El Niño/Southern Oscillation (ENSO) phenomenon, originating in the Tropical Pacific, is the strongest natural interannual climate signal and has widespread effects on the global climate system and the ecology of the Tropical Pacific. Any strong change in ENSO statistics will therefore have serious climatic and ecological consequences. Most global climate models do simulate ENSO, although large biases exist with respect to its characteristics. The ENSO response to global warming differs strongly from model to model and is thus highly uncertain. Some models simulate an increase in ENSO amplitude, others a decrease, and others virtually no change. Extremely strong changes constituting tipping point behavior are not simulated by any of the models. Nevertheless, some interesting changes in ENSO dynamics can be inferred from observations and model integrations. Although no tipping point behavior is envisaged in the physical climate system, smooth transitions in it may give rise to tipping point behavior in the biological, chemical, and even socioeconomic systems. For example, the simulated weakening of the Pacific zonal sea surface temperature gradient in the Hadley Centre model (with dynamic vegetation included) caused rapid Amazon forest die-back in the mid-twenty-first century, which in turn drove a nonlinear increase in atmospheric CO(2), accelerating global warming.
Modelling Temporal Variability in the Carbon Balance of a Spruce/Moss Boreal Forest
NASA Technical Reports Server (NTRS)
Frolking, S.; Goulden, M. L.; Wofsy, S. C.; Fan, S.-M.; Sutton, D. J.; Munger, J. W.; Bazzaz, A. M.; Daube, B. C.; Crill, P. M.; Aber, J. D.;
1996-01-01
A model of the daily carbon balance of a black spruce/feathermoss boreal forest ecosystem was developed and results compared to preliminary data from the 1994 BOREAS field campaign in northern Manitoba, Canada. The model, driven by daily weather conditions, simulated daily soil climate status (temperature and moisture profiles), spruce photosynthesis and respiration, moss photosynthesis and respiration, and litter decomposition. Model agreement with preliminary field data was good for net ecosystem exchange (NEE), capturing both the asymmetrical seasonality and short-term variability. During the growing season simulated daily NEE ranged from -4 g C m(exp -2) d(exp -1) (carbon uptake by ecosystem) to +2 g C m(exp -2) d(exp -1) (carbon flux to atmosphere), with fluctuations from day to day. In the early winter simulated NEE values were +0.5 g C m(exp -2) d(exp -1), dropping to +0.2 g C m(exp -2) d(exp -1) in mid-winter. Simulated soil respiration during the growing season (+1 to +5 g C m(exp -2) d(exp -1)) was dominated by metabolic respiration of the live moss, with litter decomposition usually contributing less than 30% and live spruce root respiration less than 10% of the total. Both spruce and moss net primary productivity (NPP) rates were higher in early summer than late summer. Simulated annual NEE for 1994 was -51 g C m(exp -2) y(exp -1), with 83% going into tree growth and 17% into soil carbon accumulation. Moss NPP (58 g C m(exp -2) y(exp -1)) was considered to be litter (i.e. soil carbon input; no net increase in live moss biomass). Ecosystem respiration during the snow-covered season (84 g C m(exp -2)) was 58% of the growing season net carbon uptake. A simulation of the same site for 1968-1989 showed about 10-20% year-to-year variability in heterotrophic respiration (mean of +113 g C m(exp -2) y(exp -1)). Moss NPP ranged from 19 to 114 g C m(exp -2) y(exp -1); spruce NPP from 81 to 150 g C m(exp -2) y(exp -1); spruce growth (NPP minus litterfall) from 34 to 103 g C m(exp -2) y(exp -1); NEE ranged from +37 to -142 g C m(exp -2) y(exp -1). Values for these carbon balance terms in 1994 were slightly smaller than the 1969-89 means. Higher ecosystem productivity years (more negative NEE) generally had early springs and relatively wet summers; lower productivity years had late springs and relatively dry summers.
NASA Astrophysics Data System (ADS)
Ott, Lesley E.; Pickering, Kenneth E.; Stenchikov, Georgiy L.; Huntrieser, Heidi; Schumann, Ulrich
2007-03-01
The 21 July 1998 thunderstorm observed during the European Lightning Nitrogen Oxides Project (EULINOX) project was simulated using the three-dimensional Goddard Cumulus Ensemble (GCE) model. The simulation successfully reproduced a number of observed storm features including the splitting of the original cell into a southern cell which developed supercell characteristics and a northern cell which became multicellular. Output from the GCE simulation was used to drive an offline cloud-scale chemical transport model which calculates tracer transport and includes a parameterization of lightning NOx production which uses observed flash rates as input. Estimates of lightning NOx production were deduced by assuming various values of production per intracloud and production per cloud-to-ground flash and comparing the results with in-cloud aircraft observations. The assumption that both types of flashes produce 360 moles of NO per flash on average compared most favorably with column mass and probability distribution functions calculated from observations. This assumed production per flash corresponds to a global annual lightning NOx source of 7 Tg N yr-1. Chemical reactions were included in the model to evaluate the impact of lightning NOx on ozone. During the storm, the inclusion of lightning NOx in the model results in a small loss of ozone (on average less than 4 ppbv) at all model levels. Simulations of the chemical environment in the 24 hours following the storm show on average a small increase in the net production of ozone at most levels resulting from lightning NOx, maximizing at approximately 5 ppbv day-1 at 5.5 km. Between 8 and 10.5 km, lightning NOx causes decreased net ozone production.
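The global scaling quoted above follows from simple arithmetic, sketched below. The per-flash NO production is the value assumed in the study; the global mean flash rate is an assumed, commonly cited satellite-based climatology, not a value from this paper.

```python
# Hedged back-of-envelope check of the abstract's scaling: 360 mol NO per
# flash extrapolated to a global annual lightning NOx source of ~7 Tg N.
moles_no_per_flash = 360.0        # mol NO per flash (value assumed in the study)
molar_mass_n = 14.0               # g N per mol NO
global_flash_rate = 44.0          # flashes per second (assumed climatology)
seconds_per_year = 3.156e7

flashes_per_year = global_flash_rate * seconds_per_year
grams_n_per_year = flashes_per_year * moles_no_per_flash * molar_mass_n
tg_n_per_year = grams_n_per_year / 1e12

print(f"Implied global source: {tg_n_per_year:.1f} Tg N per year")  # ~7 Tg N/yr
```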
Badawy, Mahmoud A; Kamel, Amany O; Sammour, Omaima A
2016-01-01
The purpose of this work was to use biorelevant media to evaluate the robustness of a poorly water-soluble, weakly basic drug to variations along the gastrointestinal tract (GIT) after incorporation in liquisolid compacts, and to assess the success of these models in predicting in vivo performance. Liquisolid tablets were prepared using mosapride citrate as a model drug. A factorial design experiment was used to study the effect of three factors, namely: drug concentration at two levels (5% and 10%), carrier at three levels (avicel, mannitol and lactose) and carrier-to-coating powder excipient ratio (R) at two levels (25 and 30). The in vitro dissolution media utilized were 0.1 N HCl, a hypoacidic stomach model and a transfer model simulating the transfer from the stomach to the intestine. All compacts released above 95% of drug after 10 min in 0.1 N HCl. In the hypoacidic model, the compacts with R 30 were superior to those with R 25, releasing >90% of drug after 10 min compared to 80% for R 25. After the transfer of the optimum compacts from fasted-state simulated gastric fluid (SGFfast) to fasted-state simulated intestinal fluid, slight turbidity appeared after 30 min, and the amount of drug dissolved slightly decreased from 96.91% to 90.59%. However, after the transfer from SGFfast to fed-state simulated intestinal fluid, no turbidity or precipitation occurred throughout the test (60 min). An in vivo pharmacokinetic study in human volunteers proved the success of the in vitro models, with enhancement of the oral bioavailability (121.20%) compared to the commercial product.
Formaldehyde production from isoprene oxidation across NOx regimes.
Wolfe, G M; Kaiser, J; Hanisco, T F; Keutsch, F N; de Gouw, J A; Gilman, J B; Graus, M; Hatch, C D; Holloway, J; Horowitz, L W; Lee, B H; Lerner, B M; Lopez-Hilifiker, F; Mao, J; Marvin, M R; Peischl, J; Pollack, I B; Roberts, J M; Ryerson, T B; Thornton, J A; Veres, P R; Warneke, C
2016-01-01
The chemical link between isoprene and formaldehyde (HCHO) is a strong, non-linear function of NOx (= NO + NO2). This relationship is a linchpin for top-down isoprene emission inventory verification from orbital HCHO column observations. It is also a benchmark for overall photochemical mechanism performance with regard to VOC oxidation. Using a comprehensive suite of airborne in situ observations over the Southeast U.S., we quantify HCHO production across the urban-rural spectrum. Analysis of isoprene and its major first-generation oxidation products allows us to define both a "prompt" yield of HCHO (molecules of HCHO produced per molecule of freshly-emitted isoprene) and the background HCHO mixing ratio (from oxidation of longer-lived hydrocarbons). Over the range of observed NOx values (roughly 0.1-2 ppbv), the prompt yield increases by a factor of 3 (from 0.3 to 0.9 ppbv ppbv-1), while background HCHO increases by a factor of 2 (from 1.6 to 3.3 ppbv). We apply the same method to evaluate the performance of both a global chemical transport model (AM3) and a measurement-constrained 0-D steady state box model. Both models reproduce the NOx dependence of the prompt HCHO yield, illustrating that models with updated isoprene oxidation mechanisms can adequately capture the link between HCHO and recent isoprene emissions. On the other hand, both models under-estimate background HCHO mixing ratios, suggesting missing HCHO precursors, inadequate representation of later-generation isoprene degradation and/or under-estimated hydroxyl radical concentrations. Detailed process rates from the box model simulation demonstrate a 3-fold increase in HCHO production across the range of observed NOx values, driven by a 100% increase in OH and a 40% increase in branching of organic peroxy radical reactions to produce HCHO.
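The "prompt yield plus background" decomposition described above amounts to a linear fit, which the hedged Python sketch below illustrates on synthetic data; the coefficients and noise level are invented and are not the airborne observations.

```python
import numpy as np

# Hedged sketch of the decomposition used in the abstract: observed HCHO is
# expressed as a background term plus a "prompt" yield times the amount of
# recently emitted isoprene (isoprene + its first-generation products).
rng = np.random.default_rng(0)
isoprene_recent = rng.uniform(0.0, 4.0, 200)          # ppbv, illustrative
true_background, true_yield = 2.0, 0.5                # ppbv, ppbv/ppbv (assumed)
hcho = true_background + true_yield * isoprene_recent + rng.normal(0, 0.2, 200)

# Ordinary least squares: HCHO = background + yield * isoprene_recent
slope, intercept = np.polyfit(isoprene_recent, hcho, 1)
print(f"Prompt HCHO yield : {slope:.2f} ppbv/ppbv")
print(f"Background HCHO   : {intercept:.2f} ppbv")
# In the study both quantities increase with NOx; repeating the fit within
# NOx bins would reproduce that dependence.
```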
De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.
2012-01-01
Background While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. Methods In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. Results We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. Conclusions A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use. PMID:22629108
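The run-time reconstruction step described above can be illustrated with a minimal radial basis function network. In the sketch below, the centers, widths and weights are random placeholders standing in for the quantities PhyNNeSS derives from its offline finite-element pre-computation; the network size is arbitrary.

```python
import numpy as np

# Hedged sketch of an RBFN mapping an applied nodal displacement to a
# deformation field.  In the actual system the parameters come from the
# offline FEM-based training step; here they are random placeholders.
rng = np.random.default_rng(1)
n_neurons, n_outputs = 32, 3 * 100            # e.g. 3D displacements of 100 nodes
centers = rng.uniform(-1, 1, (n_neurons, 3))  # RBF centers in input space
widths = np.full(n_neurons, 0.5)              # Gaussian widths (assumed)
weights = rng.normal(0, 0.01, (n_neurons, n_outputs))  # trained offline in reality

def rbfn_predict(x):
    """Reconstruct the deformation field for an input displacement x (3-vector)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * widths ** 2))   # Gaussian basis activations
    return phi @ weights                      # (n_outputs,) displacement field

u = rbfn_predict(np.array([0.1, -0.05, 0.0]))
print(u.shape)
# The evaluation cost is a handful of matrix-vector products, which is what
# makes ~1 kHz haptic update rates feasible at run time.
```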
Simulation of the 1992 Tessina landslide by a cellular automata model and future hazard scenarios
NASA Astrophysics Data System (ADS)
Avolio, MV; Di Gregorio, Salvatore; Mantovani, Franco; Pasuto, Alessandro; Rongo, Rocco; Silvano, Sandro; Spataro, William
Cellular Automata are a powerful tool for modelling natural and artificial systems, which can be described in terms of local interactions of their constituent parts. Some types of landslides, such as debris/mud flows, match these requirements. The 1992 Tessina landslide has characteristics (slow mud flows) which make it appropriate for modelling by means of Cellular Automata, except for the initial phase of detachment, which is caused by a rotational movement that has no effect on the mud flow path. This paper presents the Cellular Automata approach for modelling slow mud/debris flows, the results of simulation of the 1992 Tessina landslide and future hazard scenarios based on the volumes of masses that could be mobilised in the future. They were obtained by adapting the Cellular Automata Model called SCIDDICA, which has been validated for very fast landslides. SCIDDICA was applied by tailoring the general model to the peculiarities of the Tessina landslide. The simulations obtained by this initial model were satisfactory for forecasting the surface covered by mud. Calibration of the model, which was obtained from simulation of the 1992 event, was used for forecasting flow expansion during possible future reactivation. For this purpose two simulations concerning the collapse of about 1 million m3 of material were tested. In one of these, the presence of a containment wall built in 1992 for the protection of the Tarcogna hamlet was inserted. The results obtained identified the conditions of high risk affecting the villages of Funes and Lamosano and show that this Cellular Automata approach can have a wide range of applications for different types of mud/debris flows.
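As a rough illustration of the cellular-automaton idea only (not SCIDDICA's actual transition function, which involves minimisation of differences, run-up and adherence terms), the Python sketch below lets each cell pass a fraction of its mobile material to lower neighbours at every step. The terrain, initial mass and mobility parameter are invented.

```python
import numpy as np

# Hedged, highly simplified CA sketch of a gravity-driven flow on a grid.
elevation = np.array([[5.0, 4.0, 3.0],
                      [4.5, 3.5, 2.5],
                      [4.0, 3.0, 1.0]])          # terrain height (illustrative)
debris = np.zeros_like(elevation)
debris[0, 0] = 2.0                               # initial detached mass

def step(elev, deb, mobility=0.5):
    """One synchronous CA step: redistribute a share of debris to lower neighbours."""
    new = deb.copy()
    rows, cols = deb.shape
    for i in range(rows):
        for j in range(cols):
            if deb[i, j] <= 0:
                continue
            h_here = elev[i, j] + deb[i, j]
            nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < rows and 0 <= j + dj < cols]
            lower = [(a, b) for a, b in nbrs if elev[a, b] + deb[a, b] < h_here]
            if not lower:
                continue
            share = mobility * deb[i, j] / len(lower)
            for a, b in lower:
                new[a, b] += share
            new[i, j] -= mobility * deb[i, j]
    return new

for _ in range(10):
    debris = step(elevation, debris)
print(np.round(debris, 2))    # the mass spreads towards the lowest cells
```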
Assimilation of Sentinel-1 and SMAP observations to improve GEOS-5 soil moisture
NASA Astrophysics Data System (ADS)
Lievens, Hans; Reichle, Rolf; Wagner, Wolfgang; De Lannoy, Gabrielle; Liu, Qing; Verhoest, Niko
2017-04-01
The SMAP (Soil Moisture Active and Passive) mission carries an L-band radiometer that provides brightness temperature observations at a nominal resolution of 40 km. These radiance observations are routinely assimilated into GEOS-5 (Goddard Earth Observing System version 5) to generate the SMAP Level 4 Soil Moisture product. The use of C-band radar backscatter observations from Sentinel-1 has the potential to add value to the radiance assimilation by increasing the level of spatial detail. The specifications of Sentinel-1 are appealing, particularly its high spatial resolution (5 by 20 m in interferometric wide swath mode) and frequent revisit time (potentially every 3 days for the Sentinel-1A and Sentinel-1B constellation). However, the shorter wavelength of Sentinel-1 observations implies less sensitivity to soil moisture. This study investigates the value of Sentinel-1 data for hydrologic simulations by assimilating the radar observations into GEOS-5, either separately from or simultaneously with SMAP radiometer observations. The assimilation can be performed if either or both Sentinel-1 or SMAP observations are available, and is thus not restricted to synchronised overpasses. To facilitate the assimilation of the radar observations, GEOS-5 is coupled to the water cloud model, simulating the radar backscatter as observed by Sentinel-1. The innovations, i.e. differences between observations and simulations, are converted into increments to the model soil moisture state through an Ensemble Kalman Filter. The model runs are performed at 9-km spatial and 3-hourly temporal resolution, over the period from May 2015 to October 2016. The impact of the assimilation on surface and root-zone soil moisture simulations is assessed using in situ measurements from SMAP core validation sites and sparse networks. The assimilation of Sentinel-1 backscatter is found to consistently improve surface and root-zone soil moisture, relative to the open loop (no assimilation). However, the improvements are less pronounced than those with the assimilation of SMAP observations, likely because of less frequent observations. The best performance was obtained with the simultaneous assimilation of Sentinel-1 and SMAP data, indicating the complementary value of both types of observations for improving hydrologic simulations.
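Two ingredients of the assimilation described above can be sketched compactly: a water cloud model as the backscatter observation operator and a perturbed-observation ensemble Kalman update. All parameter values in the Python below are hypothetical placeholders, not the calibrated values used with GEOS-5 and Sentinel-1.

```python
import numpy as np

# Hedged sketch: (1) water cloud model mapping soil moisture to backscatter,
# (2) a one-observation ensemble Kalman filter update of soil moisture.
A, B = 0.12, 0.30          # vegetation scattering/attenuation parameters (assumed)
C, D = -18.0, 30.0         # bare-soil backscatter in dB: C + D * soil_moisture (assumed)
V = 1.5                    # vegetation descriptor, e.g. water content (assumed)
theta = np.deg2rad(37.0)   # incidence angle

def water_cloud_db(sm):
    """Backscatter (dB) for volumetric soil moisture sm via the water cloud model."""
    tau2 = np.exp(-2.0 * B * V / np.cos(theta))          # two-way canopy attenuation
    veg = A * V * np.cos(theta) * (1.0 - tau2)           # vegetation contribution (linear)
    soil_lin = 10.0 ** ((C + D * sm) / 10.0)             # soil contribution (linear)
    return 10.0 * np.log10(veg + tau2 * soil_lin)

rng = np.random.default_rng(2)
ensemble = rng.normal(0.20, 0.04, 50)          # prior soil-moisture ensemble (m3/m3)
obs, obs_err = -14.0, 0.7                      # backscatter observation, dB (illustrative)

hx = water_cloud_db(ensemble)                  # simulated observations
cov_xy = np.cov(ensemble, hx)[0, 1]            # state-observation covariance
var_y = np.var(hx, ddof=1) + obs_err ** 2
gain = cov_xy / var_y                          # scalar Kalman gain
analysis = ensemble + gain * (obs + rng.normal(0, obs_err, 50) - hx)  # perturbed-obs EnKF
print(f"Prior mean {ensemble.mean():.3f} -> analysis mean {analysis.mean():.3f}")
```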
Modeling anaerobic digestion of aquatic plants by rumen cultures: cattail as an example.
Zhao, Bai-Hang; Yue, Zheng-Bo; Ni, Bing-Jie; Mu, Yang; Yu, Han-Qing; Harada, Hideki
2009-04-01
Despite the significance of the anaerobic digestion of lignocellulosic materials, only a limited number of studies have been carried out to evaluate lignocellulosic digestion kinetics, and information about the modeling of this process is scarce. In this work, a mathematical model, based on the Anaerobic Digestion Model No. 1 (ADM1), was developed to describe the anaerobic conversion of lignocellulose-rich aquatic plants, with cattail as an example, by rumen microbes. Cattail was fractionated into a slowly hydrolysable fraction (SHF), a readily hydrolysable fraction (RHF) and an inert fraction in the model. The SHF was hydrolyzed by rumen microbes, resulting in the production of RHF. The SHF and RHF had different hydrolysis rates, but both followed surface-limiting kinetics. The diversity of the rumen microbial population, including the cattail-, butyrate-, acetate- and H(2)-degraders, was incorporated in the model structure. Experiments were carried out to identify the parameters and to calibrate and validate this model. The simulation results match the experimental data, implying that the fractionation of cattail into two biodegradable parts, i.e., SHF and RHF, and modeling their hydrolysis rates with surface-limiting kinetics were appropriate. The model was capable of simulating the anaerobic biodegradation of cattail by the rumen cultures.
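One common way to write surface-limiting hydrolysis is a Contois-type expression; the sketch below uses that form with placeholder parameters to show how a hydrolysable fraction would be depleted. Whether the paper's exact rate law matches this particular form is an assumption made here for illustration.

```python
# Hedged sketch of a Contois-type, surface-limited hydrolysis rate of the
# kind used for slowly/readily hydrolysable substrate fractions.  Parameter
# values are placeholders, not the paper's calibrated values.
k_hyd = 0.5      # maximum specific hydrolysis rate, 1/d (assumed)
K_X = 0.8        # half-saturation constant on the substrate-to-biomass ratio (assumed)

def hydrolysis_rate(substrate, biomass):
    """rho = k_hyd * X * (S/X) / (K_X + S/X), a surface-limited (Contois) form."""
    ratio = substrate / biomass
    return k_hyd * biomass * ratio / (K_X + ratio)

# Simple forward-Euler illustration of one fraction being hydrolysed
S, X, dt = 10.0, 1.0, 0.1           # gCOD/L substrate, gCOD/L biomass, time step in days
for day in range(5):
    for _ in range(int(1 / dt)):
        S -= hydrolysis_rate(S, X) * dt
    print(f"day {day + 1}: S = {S:.2f} gCOD/L")
```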
Three-dimensional numerical modeling of land subsidence in Shanghai, China
NASA Astrophysics Data System (ADS)
Ye, Shujun; Luo, Yue; Wu, Jichun; Yan, Xuexin; Wang, Hanmei; Jiao, Xun; Teatini, Pietro
2016-05-01
Shanghai, in China, has experienced two periods of rapid land subsidence mainly caused by groundwater exploitation related to economic and population growth. The first period occurred during 1956-1965 and was characterized by an average land subsidence rate of 83 mm/yr, and the second period occurred during 1990-1998 with an average subsidence rate of 16 mm/yr. Owing to the establishment of monitoring networks for groundwater levels and land subsidence, a valuable dataset has been collected since the 1960s and used to develop regional land subsidence models applied to manage groundwater resources and mitigate land subsidence. The previous geomechanical modeling approaches to simulate land subsidence were based on one-dimensional (1D) vertical stress and deformation. In this study, a numerical model of land subsidence is developed to simulate explicitly coupled three-dimensional (3D) groundwater flow and 3D aquifer-system displacements in downtown Shanghai from 30 December 1979 to 30 December 1995. The model is calibrated using piezometric, geodetic-leveling, and borehole extensometer measurements made during the 16-year simulation period. The 3D model satisfactorily reproduces the measured piezometric and deformation observations. For the first time, the capability exists to provide some preliminary estimates of the horizontal displacement field associated with the well-known land subsidence in Shanghai, for which no measurements are available. The simulated horizontal displacements peak at 11 mm, i.e. less than 10% of the simulated maximum land subsidence, and seem too small to seriously damage infrastructure such as the subways (metro lines) in the center area of Shanghai.
NASA Astrophysics Data System (ADS)
Lowe, D.; Archer-Nicholls, S.; Morgan, W.; Allan, J.; Utembe, S.; Ouyang, B.; Aruffo, E.; Le Breton, M.; Zaveri, R. A.; Di Carlo, P.; Percival, C.; Coe, H.; Jones, R.; McFiggans, G.
2015-02-01
Chemical modelling studies have been conducted over north-western Europe in summer conditions, showing that night-time dinitrogen pentoxide (N2O5) heterogeneous reactive uptake is important regionally in modulating particulate nitrate and has a modest influence on oxidative chemistry. Results from Weather Research and Forecasting model with Chemistry (WRF-Chem) model simulations, run with a detailed volatile organic compound (VOC) gas-phase chemistry scheme and the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) sectional aerosol scheme, were compared with a series of airborne gas and particulate measurements made over the UK in July 2010. Modelled mixing ratios of key gas-phase species were reasonably accurate (correlations with measurements of 0.7-0.9 for NO2 and O3). However modelled loadings of particulate species were less accurate (correlation with measurements for particulate sulfate and ammonium were between 0.0 and 0.6). Sulfate mass loadings were particularly low (modelled means of 0.5-0.7 μg kg-1 air, compared with measurements of 1.0-1.5 μg kg-1 air). Two flights from the campaign were used as test cases - one with low relative humidity (RH) (60-70%), the other with high RH (80-90%). N2O5 heterogeneous chemistry was found to not be important in the low-RH test case; but in the high-RH test case it had a strong effect and significantly improved the agreement between modelled and measured NO3 and N2O5. When the model failed to capture atmospheric RH correctly, the modelled NO3 and N2O5 mixing ratios for these flights differed significantly from the measurements. This demonstrates that, for regional modelling which involves heterogeneous processes, it is essential to capture the ambient temperature and water vapour profiles. The night-time NO3 oxidation of VOCs across the whole region was found to be 100-300 times slower than the daytime OH oxidation of these compounds. The difference in contribution was less for alkenes (× 80) and comparable for dimethylsulfide (DMS). However the suppression of NO3 mixing ratios across the domain by N2O5 heterogeneous chemistry has only a very slight, negative, influence on this oxidative capacity. The influence on regional particulate nitrate mass loadings is stronger. Night-time N2O5 heterogeneous chemistry maintains the production of particulate nitrate within polluted regions: when this process is taken into consideration, the daytime peak (for the 95th percentile) of PM10 nitrate mass loadings remains around 5.6 μg kg-1 air, but the night-time minimum increases from 3.5 to 4.6 μg kg-1 air. The sustaining of higher particulate mass loadings through the night by this process improves model skill at matching measured aerosol nitrate diurnal cycles and will negatively impact on regional air quality, requiring this process to be included in regional models.
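For orientation, the first-order rate constant usually used to represent heterogeneous N2O5 uptake on aerosol is k_het = (1/4) γ c̄ S_a. The Python sketch below evaluates it for illustrative values of the uptake coefficient and aerosol surface area density; these are not the WRF-Chem/MOSAIC values used in the study.

```python
import numpy as np

# Hedged sketch of the standard first-order heterogeneous uptake rate:
# k_het = (1/4) * gamma * c_mean * S_a, with gamma the uptake coefficient,
# c_mean the mean molecular speed and S_a the aerosol surface area density.
R = 8.314            # J/(mol K)
M = 0.108            # kg/mol, molar mass of N2O5
T = 285.0            # K (assumed night-time temperature)
gamma = 0.02         # uptake coefficient (assumed; depends on RH and composition)
S_a = 3.0e-4         # aerosol surface area density, m2 per m3 of air (assumed)

c_mean = np.sqrt(8.0 * R * T / (np.pi * M))          # mean molecular speed, m/s
k_het = 0.25 * gamma * c_mean * S_a                  # 1/s
print(f"k_het = {k_het:.2e} s^-1  (N2O5 lifetime ~ {1.0 / k_het / 60:.1f} min)")
```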
Lowe, Douglas; Archer-Nicholls, Scott; Morgan, Will; ...
2015-02-09
Chemical modelling studies have been conducted over north-western Europe in summer conditions, showing that night-time dinitrogen pentoxide (N2O5) heterogeneous reactive uptake is important regionally in modulating particulate nitrate and has a modest influence on oxidative chemistry. Results from Weather Research and Forecasting model with Chemistry (WRF-Chem) model simulations, run with a detailed volatile organic compound (VOC) gas-phase chemistry scheme and the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) sectional aerosol scheme, were compared with a series of airborne gas and particulate measurements made over the UK in July 2010. Modelled mixing ratios of key gas-phase species were reasonably accurate (correlations with measurements of 0.7–0.9 for NO2 and O3). However modelled loadings of particulate species were less accurate (correlation with measurements for particulate sulfate and ammonium were between 0.0 and 0.6). Sulfate mass loadings were particularly low (modelled means of 0.5–0.7 μg kg−1 air, compared with measurements of 1.0–1.5 μg kg−1 air). Two flights from the campaign were used as test cases – one with low relative humidity (RH) (60–70%), the other with high RH (80–90%). N2O5 heterogeneous chemistry was found to not be important in the low-RH test case; but in the high-RH test case it had a strong effect and significantly improved the agreement between modelled and measured NO3 and N2O5. When the model failed to capture atmospheric RH correctly, the modelled NO3 and N2O5 mixing ratios for these flights differed significantly from the measurements. This demonstrates that, for regional modelling which involves heterogeneous processes, it is essential to capture the ambient temperature and water vapour profiles. The night-time NO3 oxidation of VOCs across the whole region was found to be 100–300 times slower than the daytime OH oxidation of these compounds. The difference in contribution was less for alkenes (× 80) and comparable for dimethylsulfide (DMS). However the suppression of NO3 mixing ratios across the domain by N2O5 heterogeneous chemistry has only a very slight, negative, influence on this oxidative capacity. The influence on regional particulate nitrate mass loadings is stronger. Night-time N2O5 heterogeneous chemistry maintains the production of particulate nitrate within polluted regions: when this process is taken into consideration, the daytime peak (for the 95th percentile) of PM10 nitrate mass loadings remains around 5.6 μg kg−1 air, but the night-time minimum increases from 3.5 to 4.6 μg kg−1 air. The sustaining of higher particulate mass loadings through the night by this process improves model skill at matching measured aerosol nitrate diurnal cycles and will negatively impact on regional air quality, requiring this process to be included in regional models.
Power spectrum for the small-scale Universe
NASA Astrophysics Data System (ADS)
Widrow, Lawrence M.; Elahi, Pascal J.; Thacker, Robert J.; Richardson, Mark; Scannapieco, Evan
2009-08-01
The first objects to arise in a cold dark matter (CDM) universe present a daunting challenge for models of structure formation. In the ultra small-scale limit, CDM structures form nearly simultaneously across a wide range of scales. Hierarchical clustering no longer provides a guiding principle for theoretical analyses and the computation time required to carry out credible simulations becomes prohibitively high. To gain insight into this problem, we perform high-resolution (N = 720^3-1584^3) simulations of an Einstein-de Sitter cosmology where the initial power spectrum is P(k) ~ k^n, with -2.5 <= n <= -1. Self-similar scaling is established for n = -1 and -2 more convincingly than in previous, lower resolution simulations and for the first time, self-similar scaling is established for an n = -2.25 simulation. However, finite box-size effects induce departures from self-similar scaling in our n = -2.5 simulation. We compare our results with the predictions for the power spectrum from (one-loop) perturbation theory and demonstrate that the renormalization group approach suggested by McDonald improves perturbation theory's ability to predict the power spectrum in the quasi-linear regime. In the non-linear regime, our power spectra differ significantly from the widely used fitting formulae of Peacock & Dodds and Smith et al. and a new fitting formula is presented. Implications of our results for the stable clustering hypothesis versus halo model debate are discussed. Our power spectra are inconsistent with predictions of the stable clustering hypothesis in the high-k limit and lend credence to the halo model. Nevertheless, the fitting formula advocated in this paper is purely empirical and not derived from a specific formulation of the halo model.
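Measuring P(k) from a gridded density field is the basic diagnostic behind these comparisons. The Python sketch below bins |δ(k)|² from an FFT of a toy field (white noise, so the expected spectrum is flat); the box size and grid are arbitrary choices, and the paper's estimator and its corrections (shot noise, mass assignment, etc.) are not reproduced.

```python
import numpy as np

# Hedged sketch: estimate a power spectrum P(k) from a gridded overdensity
# field via FFT and spherical binning in k.  The field is white noise here,
# purely for illustration.
n_grid, box = 64, 100.0                                       # cells per side, box size (assumed units)
delta = np.random.default_rng(3).normal(size=(n_grid,) * 3)   # toy overdensity field

delta_k = np.fft.rfftn(delta)
power = np.abs(delta_k) ** 2 * box ** 3 / n_grid ** 6         # P(k) estimator per mode

kf = 2.0 * np.pi / box                                        # fundamental mode
kx = np.fft.fftfreq(n_grid, d=1.0 / n_grid) * kf
kz = np.fft.rfftfreq(n_grid, d=1.0 / n_grid) * kf
kmag = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2 + kz[None, None, :] ** 2)

bins = np.arange(kf, n_grid // 2 * kf, kf)
idx = np.digitize(kmag.ravel(), bins)
pk = np.array([power.ravel()[idx == i].mean() for i in range(1, len(bins))])
print(pk[:5])   # binned P(k); for white noise this is roughly flat
```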
Watershed Models for Decision Support for Inflows to Potholes Reservoir, Washington
Mastin, Mark C.
2009-01-01
A set of watershed models for four basins (Crab Creek, Rocky Ford Creek, Rocky Coulee, and Lind Coulee), draining into Potholes Reservoir in east-central Washington, was developed as part of a decision support system to aid the U.S. Department of the Interior, Bureau of Reclamation, in managing water resources in east-central Washington State. The project is part of the U.S. Geological Survey and Bureau of Reclamation collaborative Watershed and River Systems Management Program. A conceptual model of hydrology is outlined for the study area that highlights the significant processes that are important to accurately simulate discharge under a wide range of conditions. The conceptual model identified the following factors as significant for accurate discharge simulations: (1) influence of frozen ground on peak discharge, (2) evaporation and ground-water flow as major pathways in the system, (3) channel losses, and (4) influence of irrigation practices on reducing or increasing discharge. The Modular Modeling System was used to create a watershed model for the four study basins by combining standard Precipitation Runoff Modeling System modules with modified modules from a previous study and newly modified modules. The model proved unreliable in simulating peak-flow discharge because the index used to track frozen ground conditions was not reliable. Simulated mean monthly and mean annual discharges were more reliable. Data from seven USGS streamflow-gaging stations were used to compare with simulated discharge for model calibration and evaluation. Mean annual differences between simulated and observed discharge varied from 1.2 to 13.8 percent for all stations used in the comparisons except one station on a regional ground-water discharge stream. Two thirds of the mean monthly percent differences between the simulated mean and the observed mean discharge for these six stations were between -20 and 240 percent, or in absolute terms, between -0.8 and 11 cubic feet per second. A graphical user interface was developed for the user to easily run the model, make runoff forecasts, and evaluate the results. The models, however, are not reliable for managing short-term operations because of their demonstrated inability to match individual storm peaks and individual monthly discharge values. Short-term forecasting may be improved with real-time monitoring of the extent of frozen ground and the snow-water equivalent in the basin. Despite the models' unreliability for short-term runoff forecasts, they are useful in providing long-term, time-series discharge data where no observed data exist.
Chen, Aileen B; Neville, Bridget A; Sher, David J; Chen, Kun; Schrag, Deborah
2011-06-10
Technical studies suggest that computed tomography (CT)-based simulation improves the therapeutic ratio for thoracic radiation therapy (TRT), although few studies have evaluated its use or impact on outcomes. We used the Surveillance, Epidemiology and End Results (SEER)-Medicare linked data to identify CT-based simulation for TRT among Medicare beneficiaries diagnosed with stage III non-small-cell lung cancer (NSCLC) between 2000 and 2005. Demographic and clinical factors associated with use of CT simulation were identified, and the impact of CT simulation on survival was analyzed by using Cox models and propensity score analysis. The proportion of patients treated with TRT who had CT simulation increased from 2.4% in 1994 to 34.0% in 2000 to 77.6% in 2005. Of the 5,540 patients treated with TRT from 2000 to 2005, 60.1% had CT simulation. Geographic variation was seen in rates of CT simulation, with lower rates in rural areas and in the South and West compared with those in the Northeast and Midwest. Patients treated with chemotherapy were more likely to have CT simulation (65.2% v 51.2%; adjusted odds ratio, 1.67; 95% CI, 1.48 to 1.88; P < .01), although there was no significant association between use of surgery and CT simulation. Controlling for demographic and clinical characteristics, CT simulation was associated with lower risk of death (adjusted hazard ratio, 0.77; 95% CI, 0.73 to 0.82; P < .01) compared with conventional simulation. CT-based simulation has been widely, although not uniformly, adopted for the treatment of stage III NSCLC and is associated with higher survival among patients receiving TRT.
Zhang, Xuan; Xie, Li-yong; Guo, Li-ping; Fan, Jing-wei
2016-02-01
The Daycent model was calibrated and validated using measured crop yield and soil organic carbon (SOC) as dual assessment standards, based on data from three long-term experiments (the Zhengzhou site in Henan Province, the Yucheng site in Shandong Province and the Quzhou site in Hebei Province) in North China. Results showed that the calibrated parameters reproduced the long-term dynamic changes of crop yields and SOC well, indicating that the Daycent model could soundly project the dynamic changes of crop yield and SOC. After calibration and validation, the Daycent model was used to simulate the changes of SOC under a future climate scenario (representative concentration pathway 4.5, RCP 4.5) with four different management practices (chemical fertilizer, NPK; chemical fertilizer + organic manure, MNPK; straw incorporation, SNPK; no-tillage + straw incorporation, NT) at the three sites. At the Zhengzhou site, the change of SOC during 2001-2050, expressed as the annual relative increase rate (ARIR), was highest for the MNPK treatment (1.7%), followed by SNPK (1.3%) and NPK (0.8%), indicating that long-term amendment with organic manure could effectively increase SOC for light loam soil under irrigated conditions. At the Yucheng site, the increase of SOC (ARIR) under the MNPK treatment (0.4%) was higher than under the NPK treatment (0.3%). In addition, the increase of SOC was very low under all treatments at this site, probably due to slight soil salinization. At the Quzhou site, the increase of SOC (ARIR) under the NT treatment was 1.3%, higher than those under the SNPK treatment (0.7%) and the NPK treatment (0.4%), indicating that NT was more effective for SOC increase in this area. We concluded that no-tillage with straw incorporation is the optimal management practice to increase SOC in the North China Plain, owing to the mild climate, sound irrigation and available mechanical equipment for straw processing and no-tillage operations.
NASA Astrophysics Data System (ADS)
Zhao, F.; Frieler, K.; Warszawski, L.; Lange, S.; Schewe, J.; Reyer, C.; Ostberg, S.; Piontek, F.; Betts, R. A.; Burke, E.; Ciais, P.; Deryng, D.; Ebi, K. L.; Emanuel, K.; Elliott, J. W.; Galbraith, E. D.; Gosling, S.; Hickler, T.; Hinkel, J.; Jones, C.; Krysanova, V.; Lotze-Campen, H.; Mouratiadou, I.; Popp, A.; Tian, H.; Tittensor, D.; Vautard, R.; van Vliet, M. T. H.; Eddy, T.; Hattermann, F.; Huber, V.; Mengel, M.; Stevanovic, M.; Kirsten, T.; Mueller Schmied, H.; Denvil, S.; Halladay, K.; Suzuki, T.; Lotze, H. K.
2016-12-01
In Paris, France, December 2015 the Conference of Parties (COP) to the United Nations Framework Convention on Climate Change (UNFCCC) invited the IPCC to provide a "special report in 2018 on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways". In Nairobi, Kenya, April 2016 the IPCC panel accepted the invitation. Here we describe the model simulations planned within the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) to address the request by providing tailored cross-sectoral consistent impacts projections. The protocol is designed to allow for 1) a separation of the impacts of the historical warming starting from pre-industrial conditions from other human drivers such as historical land use changes (based on pre-industrial and historical impact model simulations), 2) a quantification of the effects of an additional warming to 1.5°C including a potential overshoot and long term effects up to 2300 in comparison to a no-mitigation scenario (based on the low emissions Representative Concentration Pathway RCP2.6 and a no-mitigation scenario RCP6.0) keeping socio-economic conditions fixed at year 2005 levels, and 3) an assessment of the climate effects based on the same climate scenarios but accounting for parallel changes in socio-economic conditions following the middle of the road Shared Socioeconomic Pathway (SSP2) and differential bio-energy requirements associated with the transformation of the energy system to reach RCP2.6 compared to RCP6.0. To provide the scientific basis for an aggregation of impacts across sectors and an analysis of cross-sectoral interactions potentially damping or amplifying sectoral impacts the protocol is designed to provide consistent impacts projections across a range of impact models from different sectors (global and regional hydrological models, global gridded crop models, global vegetation models, regional forestry models, global and regional marine ecosystem and fisheries models, global and regional coastal infrastructure models, energy models, health models, and agro-economic models).
NASA Astrophysics Data System (ADS)
Frieler, Katja; Warszawski, Lila; Zhao, Fang
2017-04-01
In Paris, France, December 2015 the Conference of Parties (COP) to the United Nations Framework Convention on Climate Change (UNFCCC) invited the IPCC to provide a "special report in 2018 on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways". In Nairobi, Kenya, April 2016 the IPCC panel accepted the invitation. Here we describe the model simulations planned within the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) to address the request by providing tailored cross-sectoral consistent impacts projections. The protocol is designed to allow for 1) a separation of the impacts of the historical warming starting from pre-industrial conditions from other human drivers such as historical land use changes (based on pre-industrial and historical impact model simulations), 2) a quantification of the effects of an additional warming to 1.5°C including a potential overshoot and long term effects up to 2300 in comparison to a no-mitigation scenario (based on the low emissions Representative Concentration Pathway RCP2.6 and a no-mitigation scenario RCP6.0) keeping socio-economic conditions fixed at year 2005 levels, and 3) an assessment of the climate effects based on the same climate scenarios but accounting for parallel changes in socio-economic conditions following the middle of the road Shared Socioeconomic Pathway (SSP2) and differential bio-energy requirements associated with the transformation of the energy system to reach RCP2.6 compared to RCP6.0. To provide the scientific basis for an aggregation of impacts across sectors and an analysis of cross-sectoral interactions potentially damping or amplifying sectoral impacts the protocol is designed to provide consistent impacts projections across a range of impact models from different sectors (global and regional hydrological models, global gridded crop models, global vegetation models, regional forestry models, global and regional marine ecosystem and fisheries models, global and regional coastal infrastructure models, energy models, health models, and agro-economic models).
Distribution of lod scores in oligogenic linkage analysis.
Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J
2001-01-01
In variance component oligogenic linkage analysis, the residual additive genetic variance can be estimated at its lower bound of zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.
2007-06-01
of SNR, she incorporated the effects that an InGaAs photovoltaic detector have in producing the signal along with the photon, Johnson, and shot noises ...the photovoltaic FPA detector modeled? • What detector noise sources limit the computed signal? 3.1 Modeling Methodology Two aspects in the IR camera...Another shot noise source in photovoltaic detectors is dark current. This current represents the current flowing in the detector when no optical radiation
van Heeswijk, Marijke
2006-01-01
Surface water has been diverted from the Salmon Creek Basin for irrigation purposes since the early 1900s, when the Bureau of Reclamation built the Okanogan Project. Spring snowmelt runoff is stored in two reservoirs, Conconully Reservoir and Salmon Lake Reservoir, and gradually released during the growing season. As a result of the out-of-basin streamflow diversions, the lower 4.3 miles of Salmon Creek typically has been a dry creek bed for almost 100 years, except during the spring snowmelt season during years of high runoff. To continue meeting the water needs of irrigators but also leave water in lower Salmon Creek for fish passage and to help restore the natural ecosystem, changes are being considered in how the Okanogan Project is operated. This report documents development of a precipitation-runoff model for the Salmon Creek Basin that can be used to simulate daily unregulated streamflows. The precipitation-runoff model is a component of a Decision Support System (DSS) that includes a water-operations model the Bureau of Reclamation plans to develop to study the water resources of the Salmon Creek Basin. The DSS will be similar to the DSS that the Bureau of Reclamation and the U.S. Geological Survey developed previously for the Yakima River Basin in central southern Washington. The precipitation-runoff model was calibrated for water years 1950-89 and tested for water years 1990-96. The model was used to simulate daily streamflows that were aggregated on a monthly basis and calibrated against historical monthly streamflows for Salmon Creek at Conconully Dam. Additional calibration data were provided by the snowpack water-equivalent record for a SNOTEL station in the basin. Model input time series of daily precipitation and minimum and maximum air temperatures were based on data from climate stations in the study area. Historical records of unregulated streamflow for Salmon Creek at Conconully Dam do not exist for water years 1950-96. Instead, estimates of historical monthly mean unregulated streamflow based on reservoir outflows and storage changes were used as a surrogate for the missing data and to calibrate and test the model. The estimated unregulated streamflows were corrected for evaporative losses from Conconully Reservoir (about 1 ft3/s) and ground-water losses from the basin (about 2 ft3/s). The total of the corrections was about 9 percent of the mean uncorrected streamflow of 32.2 ft3/s (23,300 acre-ft/yr) for water years 1949-96. For the calibration period, the basinwide mean annual evapotranspiration was simulated to be 19.1 inches, or about 83 percent of the mean annual precipitation of 23.1 inches. Model calibration and testing indicated that the daily streamflows simulated using the precipitation-runoff model should be used only to analyze historical and forecasted annual mean and April-July mean streamflows for Salmon Creek at Conconully Dam. Because of the paucity of model input data and uncertainty in the estimated unregulated streamflows, the model is not adequately calibrated and tested to estimate monthly mean streamflows for individual months, such as during low-flow periods, or for shorter periods such as during peak flows. No data were available to test the accuracy of simulated streamflows for lower Salmon Creek. As a result, simulated streamflows for lower Salmon Creek should be used with caution. 
For the calibration period (water years 1950-89), both the simulated mean annual streamflow and the simulated mean April-July streamflow compared well with the estimated uncorrected unregulated streamflow (UUS) and corrected unregulated streamflow (CUS). The simulated mean annual streamflow exceeded UUS by 5.9 percent and was less than CUS by 2.7 percent. Similarly, the simulated mean April-July streamflow exceeded UUS by 1.8 percent and was less than CUS by 3.1 percent. However, streamflow was significantly undersimulated during the low-flow, baseflow-dominated months of November through February.
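A quick arithmetic check of the corrections described above can be written in a few lines; the values are taken from the report summary, and the only added assumption is the standard unit conversion of 1 ft3/s to roughly 724 acre-ft/yr.

# Arithmetic check of the streamflow corrections quoted above. The only
# assumption is the standard conversion 1 ft3/s ~= 724 acre-ft/yr
# (86,400 s/day * 365 days / 43,560 ft2 per acre-ft).
evap_loss_cfs = 1.0          # evaporative losses from Conconully Reservoir, ft3/s
gw_loss_cfs = 2.0            # ground-water losses from the basin, ft3/s
mean_uncorrected_cfs = 32.2  # mean uncorrected streamflow, water years 1949-96

total_correction_cfs = evap_loss_cfs + gw_loss_cfs
fraction = total_correction_cfs / mean_uncorrected_cfs
annual_volume_af = mean_uncorrected_cfs * 86400 * 365 / 43560
print(f"correction fraction: {fraction:.1%}")                  # ~9 percent
print(f"annual volume: {annual_volume_af:,.0f} acre-ft/yr")    # ~23,300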
Line-by-line spectroscopic simulations on graphics processing units
NASA Astrophysics Data System (ADS)
Collange, Sylvain; Daumas, Marc; Defour, David
2008-01-01
We report here on software that performs line-by-line spectroscopic simulations on gases. Elaborate models (such as narrow band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases that contain high proportions of H2O and CO2, as was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory-intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns and some dedicated hardware operators only available in graphics processing units. It is obtained while leaving most processor resources available, and it would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable, but our work shows that it could be done with some affordable additional resources compared to what is necessary to perform simulations of fluid dynamics alone. Program summary Program title: GPU4RE Catalogue identifier: ADZY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 62 776 No. of bytes in distributed program, including test data, etc.: 1 513 247 Distribution format: tar.gz Programming language: C++ Computer: x86 PC Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C++ 2005 with Cygwin 1.5.24 under Windows XP. RAM: 1 gigabyte Classification: 21.2 External routines: OpenGL (http://www.opengl.org) Nature of problem: Simulating radiative transfer in high-temperature, high-pressure gases. Solution method: Line-by-line Monte-Carlo ray-tracing. Unusual features: Parallel computations are moved to the GPU. Additional comments: nVidia GeForce 7000 or ATI Radeon X1000 series graphics processing unit is required. Running time: A few minutes.
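As an illustration of the data parallelism referred to above, the sketch below sums independent line profiles over a wavenumber grid using NumPy broadcasting. It is a simplified stand-in with synthetic line parameters and plain Lorentzian profiles, not the GPU4RE code or its Monte-Carlo ray tracing.

# Minimal line-by-line sketch: every spectral line's contribution to the
# absorption coefficient is independent, so the whole sum is data parallel.
# Line positions, strengths and widths are synthetic, for illustration only.
import numpy as np

def absorption_coefficient(nu, centers, strengths, widths):
    """Sum Lorentzian profiles of all lines on a wavenumber grid (cm^-1)."""
    dnu = nu[None, :] - centers[:, None]                          # (n_lines, n_nu)
    profiles = (widths[:, None] / np.pi) / (dnu**2 + widths[:, None]**2)
    return (strengths[:, None] * profiles).sum(axis=0)

rng = np.random.default_rng(0)
nu = np.linspace(2200.0, 2400.0, 4000)        # wavenumber grid, cm^-1
centers = rng.uniform(2200.0, 2400.0, 2000)   # synthetic line centers
strengths = rng.lognormal(-2.0, 1.0, 2000)    # synthetic line strengths
widths = rng.uniform(0.05, 0.2, 2000)         # synthetic half-widths, cm^-1

kappa = absorption_coefficient(nu, centers, strengths, widths)
print(kappa.shape, float(kappa.max()))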
Numerical Study of the Simultaneous Oxidation of NO and SO2 by Ozone
Li, Bo; Zhao, Jinyang; Lu, Junfu
2015-01-01
This study used two kinetic mechanisms to evaluate the oxidation processes of NO and SO2 by ozone. The performance of the two models was assessed by comparisons with experimental results from previous studies. The first kinetic mechanism was a combined model developed by the author that consisted of 50 species and 172 reactions. The second mechanism consisted of 23 species and 63 reactions. Simulation results from both models underpredicted the experimental data. The results showed that the optimal reaction temperature for NO oxidation by O3 ranged from 100 to 200 °C. At higher temperatures, O3 decomposed to O2 and O, which resulted in a decrease in the NO conversion rate. When the mole ratio of O3/NO was greater than 1, products with a higher oxidation state (such as NO3, N2O5) were formed. The reactions between O3 and SO2 were weak; as such, it was difficult for O3 to oxidize SO2. PMID:25642689
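The competition described above, between NO oxidation by O3 and thermal decomposition of O3 at higher temperature, can be sketched as a toy two-reaction system. The Arrhenius parameters below are placeholders chosen only to reproduce the qualitative behavior; they are not the rate data of the 50- or 63-species mechanisms.

# Toy kinetics sketch: NO + O3 -> NO2 + O2 competes with thermal O3 decomposition.
# Concentrations are normalized and the Arrhenius parameters are placeholders,
# chosen so that O3 decomposition only matters at the higher temperature.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # J mol^-1 K^-1

def k(A, Ea, T):
    return A * np.exp(-Ea / (R * T))

def rhs(t, y, T):
    no, o3, no2 = y
    r1 = k(1.0e3, 1.2e4, T) * no * o3    # NO + O3 -> NO2 + O2 (placeholder rates)
    r2 = k(1.0e15, 1.4e5, T) * o3        # O3 -> O2 + O (placeholder rates)
    return [-r1, -r1 - r2, r1]

y0 = [1.0, 1.2, 0.0]                     # O3/NO mole ratio slightly above 1
for T in (373.15, 573.15):               # ~100 degC vs ~300 degC
    sol = solve_ivp(rhs, (0.0, 2.0), y0, args=(T,), rtol=1e-8, atol=1e-10)
    conversion = 1.0 - sol.y[0, -1] / y0[0]
    print(f"T = {T - 273.15:.0f} degC: NO conversion ~ {conversion:.2f}")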
Kimbell, Julia S; Segal, Rebecca A; Asgharian, Bahman; Wong, Brian A; Schroeter, Jeffry D; Southall, Jeremy P; Dickens, Colin J; Brace, Geoff; Miller, Frederick J
2007-01-01
Many studies suggest limited effectiveness of spray devices for nasal drug delivery due primarily to high deposition and clearance at the front of the nose. Here, nasal spray behavior was studied using experimental measurements and a computational fluid dynamics model of the human nasal passages constructed from magnetic resonance imaging scans of a healthy adult male. Eighteen commercially available nasal sprays were analyzed for spray characteristics using laser diffraction, high-speed video, and high-speed spark photography. Steady-state inspiratory airflow (15 L/min) and particle transport were simulated under measured spray conditions. Simulated deposition efficiency and spray behavior were consistent with previous experimental studies, two of which used nasal replica molds based on this nasal geometry. Deposition fractions (numbers of deposited particles divided by the number released) of 20- and 50-microm particles exceeded 90% in the anterior part of the nose for most simulated conditions. Predicted particle penetration past the nasal valve improved when (1) the smaller of two particle sizes or the lower of two spray velocities was used, (2) the simulated nozzle was positioned 1.0 rather than 0.5 or 1.5 cm into the nostril, and (3) inspiratory airflow was present rather than absent. Simulations also predicted that delaying the appearance of normal inspiratory airflow by more than 1 sec after the release of particles produced results equivalent to cases in which no inspiratory airflow was present. These predictions contribute to more effective design of drug delivery devices through a better understanding of the effects of nasal airflow and spray characteristics on particle transport in the nose.
Interactive Nature of Climate Change and Aerosol Forcing
NASA Technical Reports Server (NTRS)
Nazarenko, L.; Rind, D.; Tsigaridis, K.; Del Genio, A. D.; Kelley, M.; Tausnev, N.
2017-01-01
The effect of changing cloud cover on climate, based on cloud-aerosol interactions, is one of the major unknowns for climate forcing and climate sensitivity. It has two components: (1) the impact of aerosols on clouds and climate due to in-situ interactions (i.e., rapid response); and (2) the effect of aerosols on the cloud feedback that arises as climate changes - climate feedback response. We examine both effects utilizing the NASA GISS ModelE2 to assess the indirect effect, with both mass-based and microphysical aerosol schemes, in transient twentieth-century simulations. We separate the rapid response and climate feedback effects by making simulations with a coupled version of the model as well as one with no sea surface temperature or sea ice response (atmosphere-only simulations). We show that the indirect effect of aerosols on temperature is altered by the climate feedbacks following the ocean response, and this change differs depending upon which aerosol model is employed. Overall the effective radiative forcing (ERF) for the direct effect of aerosol-radiation interaction (ERFari) ranges between -0.2 and -0.6 W/sq m for atmosphere-only experiments while the total effective radiative forcing, including the indirect effect (ERFari+aci) varies between about -0.4 and -1.1 W/sq m for atmosphere-only simulations; both ranges are in agreement with those given in IPCC (2013). Including the full feedback of the climate system lowers these ranges to -0.2 to -0.5 W/sq m for ERFari, and -0.3 to -0.74 W/sq m for ERFari+aci. With both aerosol schemes, the climate change feedbacks have reduced the global average indirect radiative effect of atmospheric aerosols relative to what the emission changes would have produced, at least partially due to its effect on tropical upper tropospheric clouds.
NASA Astrophysics Data System (ADS)
Kumar, R.; Samaniego, L. E.; Livneh, B.
2013-12-01
Knowledge of soil hydraulic properties such as porosity and saturated hydraulic conductivity is required to accurately model the dynamics of near-surface hydrological processes (e.g. evapotranspiration and root-zone soil moisture dynamics) and provide reliable estimates of regional water and energy budgets. Soil hydraulic properties are commonly derived from pedo-transfer functions using soil textural information recorded during surveys, such as the fractions of sand and clay, bulk density, and organic matter content. Typically, large-scale land-surface models are parameterized using a relatively coarse soil map with little or no information on parametric sub-grid variability. In this study we analyze the impact of sub-grid soil variability on simulated hydrological fluxes over the Mississippi River Basin (≈3,240,000 km2) at multiple spatio-temporal resolutions. A set of numerical experiments was conducted with the distributed mesoscale hydrologic model (mHM) using two soil datasets: (a) the Digital General Soil Map of the United States or STATSGO2 (1:250 000) and (b) the recently collated Harmonized World Soil Database based on the FAO-UNESCO Soil Map of the World (1:5 000 000). mHM was parameterized with the multi-scale regionalization technique that derives distributed soil hydraulic properties via pedo-transfer functions and regional coefficients. Within the experimental framework, the 3-hourly model simulations were conducted at four spatial resolutions ranging from 0.125° to 1°, using meteorological datasets from the NLDAS-2 project for the time period 1980-2012. Preliminary results indicate that the model was able to capture observed streamflow behavior reasonably well with both soil datasets in the major sub-basins (i.e. the Missouri, the Upper Mississippi, the Ohio, the Red, and the Arkansas). However, the spatio-temporal patterns of simulated water fluxes and states (e.g. soil moisture, evapotranspiration) from the two simulations showed marked differences, particularly at shorter time scales (hours to days) in regions with coarse-textured sandy soils. Furthermore, the partitioning of total runoff into near-surface interflows and baseflow components was also significantly different between the two simulations. Simulations with the coarser soil map produced comparatively higher baseflows. At longer time scales (months to seasons), where climatic factors play a major role, the integrated fluxes and states from both sets of model simulations match fairly closely, despite the apparent discrepancy in the partitioning of total runoff.
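Because the soil-map differences discussed above enter the model through pedo-transfer functions, a minimal sketch of such a function is given below. The linear form and the coefficients are placeholders for illustration; they are not mHM's regionalized transfer functions or their calibrated coefficients.

# Toy pedo-transfer function: soil hydraulic properties from texture and bulk
# density. Functional form and coefficients are placeholders for illustration.
def pedo_transfer(sand_pct, clay_pct, bulk_density):
    """Return (porosity, saturated hydraulic conductivity in cm/day)."""
    porosity = 0.50 - 0.001 * sand_pct - 0.05 * (bulk_density - 1.3)
    log10_ks = 0.5 + 0.02 * sand_pct - 0.01 * clay_pct   # log-linear in texture
    return porosity, 10.0 ** log10_ks

# Coarse sandy soil vs. clay-rich soil: the texture contrast alone changes Ks
# by more than an order of magnitude, which is the kind of contrast that leads
# to different runoff partitioning between the two soil maps.
for sand, clay, rho in [(80.0, 5.0, 1.55), (20.0, 45.0, 1.35)]:
    phi, ks = pedo_transfer(sand, clay, rho)
    print(f"sand={sand:.0f}% clay={clay:.0f}%: porosity={phi:.2f}, Ks={ks:.0f} cm/day")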
Project LOLA or Lunar Orbit and Landing Approach was a simulator built at Langley
1961-07-23
Test subject sitting at the controls: Project LOLA or Lunar Orbit and Landing Approach was a simulator built at Langley to study problems related to landing on the lunar surface. It was a complex project that cost nearly $2 million. James Hansen wrote: "This simulator was designed to provide a pilot with a detailed visual encounter with the lunar surface; the machine consisted primarily of a cockpit, a closed-circuit TV system, and four large murals or scale models representing portions of the lunar surface as seen from various altitudes. The pilot in the cockpit moved along a track past these murals which would accustom him to the visual cues for controlling a spacecraft in the vicinity of the moon. Unfortunately, such a simulation--although great fun and quite aesthetic--was not helpful because flight in lunar orbit posed no special problems other than the rendezvous with the LEM, which the device did not simulate. Not long after the end of Apollo, the expensive machine was dismantled." (p. 379) Ellis J. White further described this simulator in his paper, "Discussion of Three Typical Langley Research Center Simulation Programs," (Paper presented at the Eastern Simulation Council (EAI's Princeton Computation Center), Princeton, NJ, October 20, 1966.) "A typical mission would start with the first cart positioned on model 1 for the translunar approach and orbit establishment. After starting the descent, the second cart is readied on model 2 and, at the proper time, when superposition occurs, the pilot's scene is switched from model 1 to model 2. Then cart 1 is moved to and readied on model 3. The procedure continues until an altitude of 150 feet is obtained. The cabin of the LM vehicle has four windows which represent a 45 degree field of view. The projection screens in front of each window represent 65 degrees which allows limited head motion before the edges of the display can be seen. The lunar scene is presented to the pilot by rear projection on the screens with four Schmidt television projectors. The attitude orientation of the vehicle is represented by changing the lunar scene through the portholes determined by the scan pattern of four orthicons. The stars are front projected onto the upper three screens with a four-axis starfield generation (starball) mounted over the cabin and there is a separate starball for the low window." -- Published in James R. Hansen, Spaceflight Revolution: NASA Langley Research Center From Sputnik to Apollo, (Washington: NASA, 1995), p. 379.
Liu, Yang; Glass, Nancy L; Glover, Chris D; Power, Robert W; Watcha, Mehernoor F
2013-12-01
Ultrasound-guided regional anesthesia (UGRA) skills are traditionally obtained by supervised performance on patients, but practice on phantom models improves success. Currently available models are expensive or use perishable products, for example, olive-in-chicken breasts (OCB). We constructed 2 inexpensive phantom (transparent and opaque) models with readily available nonperishable products and compared the process of learning UGRA skills by novice practitioners on these models with the OCB model. Three experts first established criteria for a satisfactory completion of the simulated UGRA task in the 3 models. Thirty-six novice trainees (<20 previous UGRA experiences) were randomly assigned to perform a UGRA task on 1 of 3 models (the transparent, opaque, or OCB model), where the hyperechoic target was identified, a needle was advanced to it under ultrasound guidance, fluid was injected, and images were saved. We recorded the errors during task completion, number of attempts and needle passes, and the time for target identification and needle placement until the predetermined benchmark of 3 consecutive successful UGRA simulations was accomplished. The number of errors, needle passes, and time for task completion per attempt progressively decreased in all 3 groups. However, failure to identify the target and to visualize the needle on the ultrasound image occurred more frequently with the OCB model. The time to complete simulator training was shortest with the transparent model, owing to shorter target identification times. However, trainees were less likely to agree strongly that this model was realistic for teaching UGRA skills. Training on inexpensive synthetic simulation models with no perishable products permits learning of UGRA skills by novices. The OCB model has the disadvantages of containing potentially infective material, requiring refrigeration, becoming unusable after multiple needle punctures, and being associated with more failures during simulated UGRA. Direct visualization of the target in the transparent model allows the trainee to focus on needle insertion skills, but the opaque model may be more realistic for learning target identification skills required when UGRA is performed on real patients in the operating room.
Novel transformation-based response prediction of shear building using interval neural network
NASA Astrophysics Data System (ADS)
Chakraverty, S.; Sahoo, Deepti Moyi
2017-04-01
The present paper uses the powerful technique of the interval neural network (INN) to simulate and estimate the structural response of multi-storey shear buildings subjected to earthquake motion. The INN is first trained on real earthquake data, viz., the ground acceleration as input and the numerically generated responses of different floors of multi-storey buildings as output. To date, no model exists to handle positive and negative data in the INN. As such, the bipolar data in [-1, 1] are first converted to unipolar form, i.e., to [0, 1], by means of a novel transformation introduced here for the first time, so that the above training patterns can be handled in normalized form. Once the training is done, the unipolar data are converted back to bipolar form by using the inverse transformation. The trained INN architecture is then used to simulate and test the structural response of different floors for earthquake data of various intensities, and it is found that the responses predicted by the INN model are good for practical purposes.
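The abstract does not give the form of the transformation, so the sketch below uses the simplest affine map from [-1, 1] to [0, 1] and its inverse as a stand-in for the paper's novel transformation; only the round-trip idea is being illustrated.

# Illustrative normalization step: bipolar data in [-1, 1] are mapped to
# unipolar form in [0, 1] before training, and predictions are mapped back.
# The affine map used here is an assumption; the paper's transformation may differ.
import numpy as np

def bipolar_to_unipolar(x):
    return (np.asarray(x, dtype=float) + 1.0) / 2.0

def unipolar_to_bipolar(u):
    return 2.0 * np.asarray(u, dtype=float) - 1.0

accel = np.array([-0.8, -0.1, 0.0, 0.35, 1.0])   # e.g. normalized ground acceleration
u = bipolar_to_unipolar(accel)                    # data fed to the interval network
recovered = unipolar_to_bipolar(u)                # responses converted back after training
assert np.allclose(recovered, accel)
print(u)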
NASA Astrophysics Data System (ADS)
Ngai, K. L.; Capaccioli, S.
2013-05-01
The Comment by Colmenero asserts that there is no change in Fs(Q,t) of the poly(ethylene oxide) (PEO) chains in blends with poly(methyl methacrylate) upon crossing times of about 1-2 ns in data obtained from neutron scattering experiments and simulations. The assertion is opposite to that reported in the original papers where the neutron data and simulations were published. To make this point clear, we cite the data and the very statements made in the original papers, which conclude that in the time interval from 60 ps to 1-2 ns the dynamics of the PEO chains indeed follows approximately the Rouse model, but becomes slower and departs from the Rouse model in its dependence on time, momentum transfer, and temperature at longer times past tc = 1-2 ns. It is noteworthy that a similar crossover of chain dynamics in entangled homopolymers at the ns time scale was found by neutron scattering.
Saho, Tatsunori; Onishi, Hideo
2015-07-01
In this study, we evaluated hemodynamics using simulated models and determined how cerebral aneurysms develop in simulated and patient-specific models based on medical images. Computational fluid dynamics (CFD) analyses were performed with OpenFOAM software. Flow velocity, stream lines, and wall shear stress (WSS) were evaluated in a simulated model aneurysm with known geometry and in a three-dimensional angiographic model. The ratio of WSS at the aneurysm to that at the basilar artery was 1:10 in simulated model aneurysms with a diameter of 10 mm and 1:18 in the angiographic model, indicating similar tendencies. Vortex flow occurred in both model aneurysms, and the WSS decreased in larger model aneurysms. The angiographic model provided accurate CFD information, and the tendencies of the simulated and angiographic models were similar. These findings indicate that hemodynamic effects are involved in the development of aneurysms.
Monthly mean simulation experiments with a coarse-mesh global atmospheric model
NASA Technical Reports Server (NTRS)
Spar, J.; Klugman, R.; Lutz, R. J.; Notario, J. J.
1978-01-01
Substitution of observed monthly mean sea-surface temperatures (SSTs) as lower boundary conditions, in place of climatological SSTs, failed to improve the model simulations. While the impact of SST anomalies on the model output is greater at sea level than at upper levels, the impact on the monthly mean simulations is not beneficial at any level. Shifts of one and two days in initialization time produced small, but non-trivial, changes in the model-generated monthly mean synoptic fields. No improvements in the mean simulations resulted from the use of either time-averaged initial data or re-initialization with time-averaged early model output. The noise level of the model, as determined from a multiple initial state perturbation experiment, was found to be generally low, but with a noisier response to initial state errors in high latitudes than in the tropics.